project_name,oneliner,git_namespace,git_url,platform,topics,rubric,last_commit_date,stargazers_count,number_of_dependents,stars_last_year,project_active,dominating_language,organization,organization_user_name,languages,homepage,readme_content,refs,project_created,project_age_in_days,license,total_commits_last_year,total_number_of_commits,last_issue_closed,open_issues,closed_pullrequests,closed_issues,issues_closed_last_year,days_until_last_issue_closed,open_pullrequests,reviews_per_pr,development_distribution_score,last_released_date,last_release_tag_name,good_first_issue,contributors,accepts_donations,donation_platforms,code_of_conduct,contribution_guide,dependents_repos,organization_name,organization_github_url,organization_website,organization_location,organization_country,organization_form,organization_avatar,organization_public_repos,organization_created,organization_last_update
pvlib-python,A set of documented functions for simulating the performance of photovoltaic energy systems.,pvlib,https://github.com/pvlib/pvlib-python.git,github,"solar-energy,python,renewable-energy,renewables,photovoltaic",Photovoltaics and Solar Energy,"2023/10/23, 15:15:23",958,431,214,true,Python,pvlib,pvlib,"Python,TeX,Cython,Pan",https://pvlib-python.readthedocs.io,"b'
\n\n\npvlib python is a community supported tool that provides a set of\nfunctions and classes for simulating the performance of photovoltaic\nenergy systems. pvlib python was originally ported from the PVLIB MATLAB\ntoolbox developed at Sandia National Laboratories and it implements many\nof the models and methods developed at the Labs. More information on\nSandia Labs PV performance modeling programs can be found at\nhttps://pvpmc.sandia.gov/. We collaborate with the PVLIB MATLAB project,\nbut operate independently of it.\n\n\nDocumentation\n=============\n\nFull documentation can be found at [readthedocs](http://pvlib-python.readthedocs.io/en/stable/),\nincluding an [FAQ](http://pvlib-python.readthedocs.io/en/stable/user_guide/faq.html) page.\n\nInstallation\n============\n\npvlib-python releases may be installed using the ``pip`` and ``conda`` tools.\nPlease see the [Installation page](https://pvlib-python.readthedocs.io/en/stable/user_guide/installation.html) of the documentation for complete instructions.\n\n\nContributing\n============\n\nWe need your help to make pvlib-python a great tool!\nPlease see the [Contributing page](http://pvlib-python.readthedocs.io/en/stable/contributing.html) for more on how you can contribute.\nThe long-term success of pvlib-python requires substantial community support.\n\n\nCiting\n======\n\nIf you use pvlib-python in a published work, please cite:\n\n William F. Holmgren, Clifford W. Hansen, and Mark A. Mikofski.\n ""pvlib python: a python package for modeling solar energy systems.""\n Journal of Open Source Software, 3(29), 884, (2018).\n https://doi.org/10.21105/joss.00884\n\nPlease also cite the DOI corresponding to the specific version of\npvlib-python that you used. pvlib-python DOIs are listed at\n[Zenodo.org](https://zenodo.org/search?page=1&size=20&q=conceptrecid:593284&all_versions&sort=-version)\n\nIf you use pvlib-python in a commercial or publicly-available application, please\nconsider displaying one of the ""powered by pvlib"" logos:\n\n\n\nGetting support\n===============\n\npvlib usage questions can be asked on\n[Stack Overflow](http://stackoverflow.com) and tagged with\nthe [pvlib](http://stackoverflow.com/questions/tagged/pvlib) tag.\n\nThe [pvlib-python google group](https://groups.google.com/forum/#!forum/pvlib-python)\nis used for discussing various topics of interest to the pvlib-python\ncommunity. 
We also make new version announcements on the google group.\n\nIf you suspect that you may have discovered a bug or if you\'d like to\nchange something about pvlib, then please make an issue on our\n[GitHub issues page](https://github.com/pvlib/pvlib-python/issues).\n\n\n\nLicense\n=======\n\nBSD 3-clause.\n\n\nNumFOCUS\n========\n\npvlib python is a [NumFOCUS Affiliated Project](https://numfocus.org/sponsored-projects/affiliated-projects)\n\n[![NumFocus Affliated Projects](https://i0.wp.com/numfocus.org/wp-content/uploads/2019/06/AffiliatedProject.png)](https://numfocus.org/sponsored-projects/affiliated-projects)\n'",",https://doi.org/10.5281/zenodo.593284,https://doi.org/10.21105/joss.00884\n\nPlease,https://zenodo.org/search?page=1&size=20&q=conceptrecid:593284&all_versions&sort=-version","2015/02/17, 00:21:33",3172,BSD-3-Clause,124,1555,"2023/10/21, 12:52:12",205,852,1578,263,4,42,3.4,0.542319749216301,"2023/09/21, 20:02:01",v0.10.2,8,105,false,,true,true,"andreabotti/itacca,NREL/EVOLVE,atiqureee51/energy-financial-PV,Satyabhama-Reddy/shadow-analysis,nonzchanon/hello-streamlit,realnumber666/smart-city,sravya97/Shadow-Analysis,shreyasskasetty-tamu/shadow_analysis,Patrickyyh/shadow_analysis_yuhao,IsurangaPerera/shadow-analaysis,woodjmichael/PROPHET-Load_LSTM,Haitham-ghaida/MCC,OneMogin/model_hofmann,TimWalter/solar-power-estimator,blazdob/consmodel,riparise/solar-panel-sizing-tool,D-Agar/CanSat-Data-Client,cutmyenergybill/domestic-energy-bill-reduction-app,MainakRepositor/Activation-Infopedia,haydaroglu/SolarEnergyPredictionSuite,openclimatefix/nowcasting_forecast,makkorjamal/hsdigitizer,openclimatefix/nowcasting_dataset,fulvio9999/PROGETTO-BD,justinfmccarty/morph-app,fgbg03/PROCSIM-Running-Results-Frontend,piotre13/NODES_platform,RWTH-EBC/AixWeather,justinfmccarty/pyepwmorph,pail23/energy-assistant-backend,heetbeet/heliostat-prototype,DavidMbx/Tesi_DavideCasuccio_SimulationTool,maniraman1982/Portola-master,AEJaspan/irradianceforecasting,matiasctrs/app-apv,Deutsches-Brennstoffinstitut/H2-Index-III,manitcs1982/PortolaProd2024,shadingfish/Dingyi_Yu-Digital-Agri-Lab,owentmfoo/WSCForecast,interuss/monitoring,manitcs1982/portolauatlocal,maniraman1982/uat,maniraman1982/portola-UAT,hmendo/chrpa,Shkryob/kostya-optimizer,hmendo/chrpa_test,Geoandres321/Trackers,Victor19970418/Power-consumption-forecast,manitcs1982/HaveBlue-root,cs224/jupyter-docker-hello-world,zhaoshouhang/pandas_learn,amritPVre/Solar_energy_estimator_v01,NREL/OCHRE,flecksi/container_test,stephansmit/heliostrome,AI-nergy/aiecommon,langestefan/pvcast,Deutsches-Brennstoffinstitut/DBI-MAT,mbari-org/pypam-based-processing,chz056/BEAR,cai-yutian/pv-modelling-mirror,NREL/lore,gereon-t/oaemapi,Mousa-Zerai/HiSim,marcwatine/CC-LUT,BETALAB-team/EUReCA,owentmfoo/dusc_race_strategy_archive,jucrramirez/OrnamentaRainbow,shirubana/Approaches2BifacialPerformanceMonitoring,isc-konstanz/th-e-fcst,JustinZarb/natural_maps,cedricleroy/solpos,Hoshang111/lcoe-model,Sim-on-Wheels/Sim-on-wheels-Renderer,wago-stiftung/workshop_data-analytics,Nikita-Belyakov/CLOUD_SNOW_SEGMENTATION,kaiATtum/battery-swapping-station-model,SMEISEN/AutoPV,the-aerospace-corporation/selenium,pdb-94/miguel,dcambie/LSC-PM_solar_miniplant,occamssafetyrazor/deps,SETO2243/forecasting,tug-cps/inframonitor-public,amosnjenga/pcsrt_python_cli,PVSC-Python-Tutorials/PVSC50,in-RET/inretensys-fastapi,Mizunomori/NEM_Pricing,Lodinn/PAM-timeseries,2022Yalin/MSc,CDT-AIMLAC/team_7564616d_models,NOWUM/smartdso,pombredanne/5000-deps,amritPVre/inter-row_beta,in-RET/in.RET-EnSys-open-plan-GUI,tribp/so
lar-forecast-api,salazarna/synthetic-irradiance-sequence,Abercardsea/Abercardsea-solution,Hykire/photovoltaic_simulation,HZBSolarOptics/pv_tandem,JMMonte/Reentry,LuisGarayF/shipcal_cl,slacgismo/pv-validation-hub,streetplantsolar/enactment-eyelash-strike-a4mm,heliostrome/streamlit_app,guilhermedcastr0/simulador_solar,sebastienlanglois/dash-gis,kabirnund/PROTOTYPE,Constructor-999/PlantSat-CanSat,kperrynrel/time-shift-validation-hub,DarkBrain-LP/Essivi-API,MIPU-MPH/i-nergy,bladekeys/sipi_project,xinhhh/capstone,openclimatefix/gfs-downloader,nicosquare/rl-energy-management,mikecoughlan/multi-station-dbdt-risk-assessment,JonathanRJ404/Code_Etalonnage,data-overflow/solar-output-predictor,kianwasabi/Weather_Information_API,areyc2023/cno,zakwatts/NWP_Island_data,ski907/historicHeatFlux,openclimatefix/pv-site-api,AssessingSolar/AssessingSolar,ijbd/merra-power-generation,dtcc-platform/dtcc-solar,openclimatefix/pv-site-production,antoine-zurcher/PreSimulatorPV,StefaE/PVForecast,pvlib/solarfactors,MahrokhGB/Synthetic_PV_Profiles,openclimatefix/uk-pv-national-xg,zsteenson/GUI-two-axis-tracker-controller,owentmfoo/S5,kandersolar/twoaxistracking,wholmgren/twoaxistracking,TAMUparametric/energiapy,bitenergie/notes,gitter-badger/thea-1,openclimatefix/ocf-ml-metrics,ekck/solar-power-gen-analysis,openclimatefix/pv-solar-farm-forecasting,nicolasdesilles/PIRD,mark-mcnulty/two-axis-tracker-controller-code,UU-ER/EHUB-Py_Training,sofuetakuma112/pvpmc2022,lgaspard/repf,NOWUM/dmas,kike144/Solar,alksarioglou/compression_algorithms_fpga_ultrasound,NREL/PVDegradationTools,json-0201/weather_and_solar_summary,saif-byte/masterpiece_limited,kkriti01/pvsimulator,Apoorva64/mauritius-solar-pannel-placement-analysis,DitisAlex/HAN-OOSE-Prediction-Model-BackEnd,ski907/hff,openclimatefix/pv-site-prediction,Tsiri/RHEIA,19conte/TR_datasophia,DTUWindEnergy/hydesign,MORE-EU/hackathon_athena,NCAR/mlsurfacelayer,NCAR/grafs,ttjaden/vdi4657.app,nhcho91/active-weighting-cmrac,kfiramar/baldar,cambridge-cares/TheWorldAvatar,tadatoshi/agrivoltaics_supply_side_management,FedericoTartarini/tool-risk-scale-football-nsw,Yasmin1209/Freshwater-PV,openclimatefix/PVItaly,ArcticSnow/TopoPyScale,flecksi/pv_design_app,SepehrMosavat/PROGNOES,AlexandreHugoMathieu/pvfault_detection,BenWinchester/HEATDesalination,energyris-com/ai4hotels-demo,ShayanNaderi/PrecoolTool,slacgismo/pv-apache-beam,icaromisquita/WBS-Final-Project-PV-energy-prediction-for-Germany,rseng/rsepedia-analysis,magneb/skycolor,dvdjng/fpvwebsite,stefvra/energy_app,usnistgov/NIST_SG_SolarSim,JPCM95/portfo,simonneidhart/pv_mapping_ch,kaustuvchatterjee/vskp,openclimatefix/ocf_datapipes,pollination/sample-apps,flopaw/pvforecast-docker,moritz-reuter/ESEM-EE,PVSC-Python-Tutorials/PVPMC_2022,cwhanse/ivcurves,brandonhanner/Solar-Simulation,nikohou/pv-data-utility-suite,AssessingSolar/solarstations,openclimatefix/power_perceiver,LBNL-ETA/AFC,olousgap/Combi_CSP,VMLC-PV/PVLC_Diode_Fit,OpenSTEF/openstef,joaoguilhermeS/Web-Development,cire-thk/BifacialSimu,npapnet/Combi_CSP,pranavsinghal30/PVForecast,tongpu/solar-power-simulation,BETALAB-team/eureca-building,slacgismo/gismo-cloud-deploy,jranalli/solartoolbox,jjcaine/top_dependencies_python,LE2P/pybsrnqc,oie-mines-paristech/IEA_PVPS_T16_QC_pynb,msoutojr/IFT-6759-Photovoltaic-forecast,nicosquare/ml-707-project,EvYogi/Rivesaltes,AndresPadillaUcros/PhotovoltaicForecast,mj-xmr/SolOptXMR,e-marco/onetmy,marcoboucas/irradiance,modusV/skia,sandialabs/CFTrack,enermaps/enermaps,helvecioneto/goes-data-toolkit,sophiegribben/EM401,davidusb-geek/emhass-add-on,aimlac-75
64616D/models,cycle13/climate,abhijeetmokate/folder9876,zahraghh/DESweatherAnalysis,jsl12/plants,mtress/mtress,SeitaBV/flexmeasures-openweathermap,blaevens/PV_mcmc,ewilczynski/enermaps,energyscope/EnergyScope_multi_criteria,viktor-platform/sample-solar-panel-configurator,bhavyadureja25/ds_tool_with_upload,Sourabh470/IoT_Dashboard,shortfellow/dashapp,sunt05/SuPy,Caedin/RaspberryPiCamera,RubenVanEldik/ect2-bonus-project,instrat-pl/pypsa-pl,ZaninMarco/Copy-of-repository,mape-maker/mape-maker,Adrianonsare/EnergyAnalytics,covetool/clima,AnnabelNkir/La_Galerie,vondeF/solar_prediction-master,AIMLAC-Convergence/Convergence,YashasviBhatt/transformer_streaming,Bimal-Kumar-002/Sentiment-classification,inrae/SISPPEO,burundiocibu/pvlib-prometheus-exporter,fernandochacon92/AAM_APP_MA_2021,SteveShin9330/PV_PRED_LDAPS,PVSC-Python-Tutorials/pyData-2021-Solar-PV-Modeling,andresgm/cno_solar,rehomewebapp/REhome,rohitsanam/streamlit-basic-app,FZJ-IEK3-VSA/HiSim,tjcoathu/bifacial_radiance,AnaDue99/microCPV_Iluminacion,zahraghh/multi_objective_optimization,martinrteran/Taller_Python3,hancse/ai_model_deployment,amritPVre/Hourly_GHI_GII_App,davidusb-geek/emhass,thiagohgmello1/pvsystem,AvG97/Avocado-Analyzer,yaotc/PVODataset,waggle-sensor/plugin-solar-irradiance,AdamRJensen/adamrjensen.github.io,IMMM-SFA/diyepw,slacgismo/solar-data-tools,UlisseProject/ULISSE,isi-ies-group/hiperion_app,bershawi/PV-simulator-challenge,SESMG/SESMG,JoseDiego101/Blog,h-quest/FDDA_app,sandialabs/pecos,nxtrung87/Voila,rheia-framework/RHEIA,PVSC-Python-Tutorials/PVSC48-Python-Tutorial,sandialabs/pvOps,isc-konstanz/pvsys,UARENForecasting/ESPRR,thesethtruth/LESO,tinoetzold/PV-Prognose,cccwam/ift6759-project1-public,arkayyy/surya-disha,EURAC-EEBgroup/CULTURAL-E-Data-Visualization-Library,zahraghh/Two_Stage_SP,gpschnaars/zac_pvlib,francomozo/deepCloud,Saq90/ssg,gripenergy/omf,sillygoose/sbhistory,BBISSlab/DistributedEnergyGen,Vladykart/GPReport,mikofski/PVRW2021,mikofski/OLD-PVSC48-Python-Tutorial,CenterForTheBuiltEnvironment/clima,nacho-fm/glare,Peque/pvlib-procedural-test,SolarPerformanceInsight/solarperformanceinsight,UARENForecasting/erebos-ams2020,benjaminpillot/greece,Weiming-Hu/RenewableSimulator,SolarGenomeProject/uncertainty-pv-generation-models,rayarka/EG3301R_Data_and_Programs,jeanromainroy/spatial-prediction-afg-landmines,Vesino/Exergy-YDX,brizett/reegis_hp,Rumbelstilzchen/Monitoring,Vesino/pvlib_tesis,louis-richard/irfu-python,abachleda/FM_e-services,ihomelab/effect-of-sampling-rate-on-PV-self-consumption,DuraMAT/pvpro,oemof-heat/solar_models,traiyn/abcdtft,gayashiva/air_model,selmaneislam/rdootl,cdeline/bifacialvf,cogent-computing/Heed-Microcontroller,TheUninvitedGuest/tmh-challenge,zoezhang926/SolarMartSim,prathamrg/solarapp,Adrian658/ClearSkyDetection,jcjveraa/sunscreen-controller,RaphiOriginal/blindAutomation,jungse12/Ford-Deploy,jungse12/Ford-Website,Gkrumbach07/solar_forecaster,pvedu/pvon,danielvanpaass/Demonstrator,cropsinsilico/hothouse,chrisorner/PowerPlan,uiandwe/django-best-practice-example,emanuelosva/solaru,simardeep1792/LOHS,keenchan/111,keenchan/pv,david-salac/Renewable-energy-prediction,bradbase/flyingkoala_pvlib,DanielFonteneleNogueira/painel,quintel/scenario-tools,kwhanalytics/data-env,UniOfLeicester/plotdarn,greco-project/pvcompare,HrushikeshBodas/HRES_optimization,mesmo-dev/mesmo,yaricp/py-solarhouse,kwhanalytics/insurance-requirements,pvcaptest/pvcaptest,iansloop/PV_Mismatch_Refactoring,pvlib/pvanalytics,ECE-492-W2020-Group-6/smart-blinds-rpi,greco-project/greco_technologies,UARENForecasting/erebos,maxmills1/Ir
radiance,vegraux/solar_path,panzer/capstone-mppt,david-salac/Fast-SZA-and-SAA-computation,kwhanalytics/postgis-heliostats-py3.5,nicolasholland/yuce,nano-sippe/bifacial_illumination,FZJ-IEK3-VSA/RESKit,oemof/oemof-thermal,tylunel/pvpumpingsystem,Pyosch/vpplib,renewables-ninja/gsee,greco-project/cpvtopvlib,FZJ-IEK3-VSA/tsib,NREL/Solar-Forecasting,Jeffrey-Simpson/Solar-Forecasting,mikofski/pvwatts_emulator,toddkarin/vocmax,cbaretzky/BuildingEnergySimulation,Ekistica/irradiance_synth,coroa/tmhpvsim,MichaelHopwood/PVPolyfit,kwhanalytics/marvin,rory87/caledonia-energy,SolarArbiter/workshop,harpreet153/pv-graph,toddkarin/pvtools,duplessisaa/Docker,SolarArbiter/solarforecastarbiter-core,mesmo-dev/cobmo,25sal/pvprediction,josephmckinsey/NRELHackathon2018,BLM-UoR/BLM,NREL/pv_tomcat,be-lb/solar-loader,higab85/off-grid,buds-lab/the-building-data-genome-project,SunPower/pvfactors,hackingmaterials/duramat_dashboard,kwhanalytics/postgis-marvin,hackingmaterials/ivtools,michbeg/IBM-DSX,teamvirtue/smart-sockets-dev,JohannesBertens/InSolarBase,JasonTarzan/PV_Forecast,eckara/CSSS,heliotrope-energy/bifacial,louisguitton/ifp-class-ml,BreakingBytes/pvfree,BreakingBytes/simkit,tplemmens/InSolar,mikofski/pvsc44-clearsky-aod,Solcast/howto-pandas,vkinakh/calc-solar-panel-effectivness,WalterGoedecke/virtualenv,BreakingBytes/UncertaintyWrapper,open-fred/lib_validation,SunPower/PVMismatch,reegis/reegis,adriandole/pv_timelapse,isc-konstanz/emonpv,oemof/feedinlib,NREL/bifacialvf,NREL/rdtools,NREL/bifacial_radiance,nshakhat/PVSimulator",,https://github.com/pvlib,,,,,https://avatars.githubusercontent.com/u/11037261?v=4,,, pvfactors,Open source view-factor model for diffuse shading and bifacial PV modeling.,SunPower,https://github.com/SunPower/pvfactors.git,github,"solar-energy,renewable-energy,python,bifacial",Photovoltaics and Solar Energy,"2022/02/22, 21:53:32",75,13,10,true,Python,SunPower,SunPower,"Python,Makefile",http://sunpower.github.io/pvfactors/,"b'pvfactors: irradiance modeling made simple\n==========================================\n\n|Logo|\n\n|CircleCI| |License| |PyPI-Status| |PyPI-Versions|\n\npvfactors is a tool used by PV professionals to calculate the\nirradiance incident on surfaces of a photovoltaic array. It relies on the use of\n2D geometries and view factors integrated mathematically into systems of\nequations to account for reflections between all of the surfaces.\n\npvfactors was originally ported from the SunPower developed \'vf_model\' package, which was introduced at the IEEE PV Specialist Conference 44 2017 (see [#pvfactors_paper]_ and link_ to paper).\n\n------------------------------------------\n\n.. contents:: Table of contents\n :backlinks: top\n :local:\n\n\nDocumentation\n-------------\n\nThe documentation can be found `here `_.\nIt includes a lot of tutorials_ that describe the different ways of using pvfactors.\n\n\nQuick Start\n-----------\n\nGiven some timeseries inputs:\n\n\n.. 
code:: python\n\n # Import external libraries\n from datetime import datetime\n import pandas as pd\n\n # Create input data\n df_inputs = pd.DataFrame(\n {\'solar_zenith\': [20., 50.],\n \'solar_azimuth\': [110., 250.],\n \'surface_tilt\': [10., 20.],\n \'surface_azimuth\': [90., 270.],\n \'dni\': [1000., 900.],\n \'dhi\': [50., 100.],\n \'albedo\': [0.2, 0.2]},\n index=[datetime(2017, 8, 31, 11), datetime(2017, 8, 31, 15)])\n df_inputs\n\n\n+---------------------+--------------+---------------+--------------+-----------------+--------+-------+--------+\n| | solar_zenith | solar_azimuth | surface_tilt | surface_azimuth | dni | dhi | albedo |\n+=====================+==============+===============+==============+=================+========+=======+========+\n| 2017-08-31 11:00:00 | 20.0 | 110.0 | 10.0 | 90.0 | 1000.0 | 50.0 | 0.2 |\n+---------------------+--------------+---------------+--------------+-----------------+--------+-------+--------+\n| 2017-08-31 15:00:00 | 50.0 | 250.0 | 20.0 | 270.0 | 900.0 | 100.0 | 0.2 |\n+---------------------+--------------+---------------+--------------+-----------------+--------+-------+--------+\n\n\nAnd some PV array parameters\n\n\n.. code:: python\n\n pvarray_parameters = {\n \'n_pvrows\': 3, # number of pv rows\n \'pvrow_height\': 1, # height of pvrows (measured at center / torque tube)\n \'pvrow_width\': 1, # width of pvrows\n \'axis_azimuth\': 0., # azimuth angle of rotation axis\n \'gcr\': 0.4, # ground coverage ratio\n }\n\nThe user can quickly create a PV array with ``pvfactors``, and manipulate it with the engine\n\n\n.. code:: python\n\n from pvfactors.geometry import OrderedPVArray\n # Create PV array\n pvarray = OrderedPVArray.init_from_dict(pvarray_parameters)\n\n\n\n.. code:: python\n\n from pvfactors.engine import PVEngine\n # Create engine\n engine = PVEngine(pvarray)\n # Fit engine to data\n engine.fit(df_inputs.index, df_inputs.dni, df_inputs.dhi,\n df_inputs.solar_zenith, df_inputs.solar_azimuth,\n df_inputs.surface_tilt, df_inputs.surface_azimuth,\n df_inputs.albedo)\n\nThe user can then plot the PV array geometry at any given time of the simulation:\n\n\n.. code:: python\n\n # Plot pvarray shapely geometries\n import matplotlib.pyplot as plt\n f, ax = plt.subplots(figsize=(10, 5))\n pvarray.plot_at_idx(1, ax)\n plt.show()\n\n.. image:: https://raw.githubusercontent.com/SunPower/pvfactors/master/docs/sphinx/_static/pvarray.png\n\n\nIt is then very easy to run simulations using the defined engine:\n\n\n.. code:: python\n\n pvarray = engine.run_full_mode(fn_build_report=lambda pvarray: pvarray)\n\n\nAnd inspect the results thanks to the simple geometry API\n\n\n.. code:: python\n\n print(""Incident irradiance on front surface of middle pv row: {} W/m2""\n .format(pvarray.ts_pvrows[1].front.get_param_weighted(\'qinc\')))\n print(""Reflected irradiance on back surface of left pv row: {} W/m2""\n .format(pvarray.ts_pvrows[0].back.get_param_weighted(\'reflection\')))\n print(""Isotropic irradiance on back surface of right pv row: {} W/m2""\n .format(pvarray.ts_pvrows[2].back.get_param_weighted(\'isotropic\')))\n\n\n.. parsed-literal::\n\n Incident irradiance on front surface of middle pv row: [1034.968 886.377] W/m2\n Reflected irradiance on back surface of left pv row: [112.139 86.404] W/m2\n Isotropic irradiance on back surface of right pv row: [0.116 1.849] W/m2\n\n\nThe users can also create a ""report"" while running the simulations that will rely on the simple API shown above, and which will look like whatever the users want.\n\n
.. code:: python\n\n # Create a function that will build a report\n def fn_report(pvarray): return {\'total_incident_back\': pvarray.ts_pvrows[1].back.get_param_weighted(\'qinc\'),\n \'total_absorbed_back\': pvarray.ts_pvrows[1].back.get_param_weighted(\'qabs\')}\n\n # Run full mode simulation\n report = engine.run_full_mode(fn_build_report=fn_report)\n\n # Print results (report is defined by report function passed by user)\n df_report = pd.DataFrame(report, index=df_inputs.index)\n df_report\n\n\n+---------------------+---------------------+---------------------+\n| | total_incident_back | total_absorbed_back |\n+=====================+=====================+=====================+\n| 2017-08-31 11:00:00 | 106.627832 | 103.428997 |\n+---------------------+---------------------+---------------------+\n| 2017-08-31 15:00:00 | 79.668878 | 77.278812 |\n+---------------------+---------------------+---------------------+\n\n\n\nInstallation\n------------\n\npvfactors is currently compatible and tested with Python 3.6+, and is available in `PyPI `_. The easiest way to install pvfactors is to use pip_ as follows:\n\n.. code:: sh\n\n $ pip install pvfactors\n\nThe package wheel files are also available in the `release section`_ of the Github repository.\n\n\nRequirements\n------------\n\nRequirements are included in the ``requirements.txt`` file of the package. Here is a list of important dependencies:\n\n* `numpy `_\n* `pvlib-python `_\n* `shapely `_\n\n\nCiting pvfactors\n----------------\n\nWe appreciate your use of pvfactors. If you use pvfactors in a published work, we kindly ask that you cite:\n\n\n.. parsed-literal::\n\n Anoma, M., Jacob, D., Bourne, B.C., Scholl, J.A., Riley, D.M. and Hansen, C.W., 2017. View Factor Model and Validation for Bifacial PV and Diffuse Shade on Single-Axis Trackers. In 44th IEEE Photovoltaic Specialist Conference.\n\n\nContributing\n------------\n\nContributions are needed in order to improve pvfactors.\nIf you wish to contribute, you can start by forking and cloning the repository, and then installing pvfactors using pip_ in the root folder of the package:\n\n.. code:: sh\n\n $ pip install .\n\n\nTo install the package in editable mode, you can use:\n\n.. code:: sh\n\n $ pip install -e .\n\nReleasing\n+++++++++\n\nWhen releasing pvfactors, you will need to run a couple of build commands. First make sure to activate your virtual environment if any, then:\n\n- create a tag on the latest master branch commit using `git tag -a vX.X.X`, and write a tag message. You can then push that tag to Github so that it will appear there.\n- build the documentation by running `make build-docs`. When done running, you should be able to open `build/sphinx/html/index.html`, and confirm that the version displayed is the same as the one from the git tag. You can deploy by copying the content of the `build/sphinx/html/` folder into the `gh-pages` branch of the repo (make sure to keep the `.nojekyll` file that\'s already present).\n- build the release files by running `make build-package`. When done running, you should be able to open `dist/` and see both a whl file and a tar file. Make sure that their names include the correct git tag you created. Please confirm that the whl file was built correctly by installing it locally and testing the newly released updates. You can deploy by 1) making a Github release from the tag you created and pushed, and including the files in `dist/` in the release. 
2) The last step is to publish a release in PyPI, for which you can use twine and the command `twine upload dist/*`\n\n\n\n\nReferences\n----------\n\n.. [#pvfactors_paper] Anoma, M., Jacob, D., Bourne, B. C., Scholl, J. A., Riley, D. M., & Hansen, C. W. (2017). View Factor Model and Validation for Bifacial PV and Diffuse Shade on Single-Axis Trackers. In 44th IEEE Photovoltaic Specialist Conference.\n\n\n.. _link: https://pdfs.semanticscholar.org/ebb2/35e3c3796b158e1a3c45b40954e60d876ea9.pdf\n\n.. _tutorials: https://sunpower.github.io/pvfactors/tutorials/index.html\n\n.. _`full mode`: https://sunpower.github.io/pvfactors/theory/problem_formulation.html#full-simulations\n\n.. _`fast mode`: https://sunpower.github.io/pvfactors/theory/problem_formulation.html#fast-simulations\n\n.. _pip: https://pip.pypa.io/en/stable/\n\n.. _`release section`: https://github.com/SunPower/pvfactors/releases\n\n.. |Logo| image:: https://raw.githubusercontent.com/SunPower/pvfactors/master/docs/sphinx/_static/logo.png\n :target: http://sunpower.github.io/pvfactors/\n\n.. |CircleCI| image:: https://circleci.com/gh/SunPower/pvfactors.svg?style=shield\n :target: https://circleci.com/gh/SunPower/pvfactors\n\n.. |License| image:: https://img.shields.io/badge/License-BSD%203--Clause-blue.svg\n :target: https://github.com/SunPower/pvfactors/blob/master/LICENSE\n\n.. |PyPI-Status| image:: https://img.shields.io/pypi/v/pvfactors.svg\n :target: https://pypi.org/project/pvfactors\n\n.. |PyPI-Versions| image:: https://img.shields.io/pypi/pyversions/pvfactors.svg?logo=python&logoColor=white\n :target: https://pypi.org/project/pvfactors\n'",,"2018/05/14, 06:10:55",1990,BSD-3-Clause,0,104,"2023/04/28, 14:34:18",19,97,128,1,180,2,2.4,0.12903225806451613,"2022/02/22, 20:16:58",v1.5.2,0,5,false,,false,true,"matiasctrs/app-apv,Hoshang111/lcoe-model,areyc2023/cno,kike144/Solar,tadatoshi/agrivoltaics_supply_side_management,rseng/rsepedia-analysis,cire-thk/BifacialSimu,andresgm/cno_solar,thesethtruth/LESO,prathamrg/solarapp,anomam/pvfactors_iea_pvps_study,toddkarin/vocmax,toddkarin/pvtools",,https://github.com/SunPower,,,,,https://avatars.githubusercontent.com/u/1341977?v=4,,, gsee,Global Solar Energy Estimator.,renewables-ninja,https://github.com/renewables-ninja/gsee.git,github,"solar,pandas,energy,irradiance,photovoltaic,pv,electricity,ninja",Photovoltaics and Solar Energy,"2020/07/21, 06:28:35",105,0,15,false,Python,Renewables.ninja,renewables-ninja,"Python,Makefile",https://gsee.readthedocs.io/,"b""[![Master branch build status](https://img.shields.io/azure-devops/build/renewables-ninja/dcefb182-6481-4ca4-8f5e-75b022ab426d/1?style=flat-square)](https://dev.azure.com/renewables-ninja/gsee/_build?definitionId=1)\n[![Test coverage](https://img.shields.io/codecov/c/github/renewables-ninja/gsee?style=flat-square&token=1b25079ab156419b919462aaba0f469e)](https://codecov.io/gh/renewables-ninja/gsee)\n[![PyPI version](https://img.shields.io/pypi/v/gsee.svg?style=flat-square)](https://pypi.python.org/pypi/gsee)\n[![conda-forge version](https://img.shields.io/conda/vn/conda-forge/gsee.svg?style=flat-square)](https://anaconda.org/conda-forge/gsee)\n\n# GSEE: Global Solar Energy Estimator\n\n`GSEE` is a solar energy simulation library designed for rapid calculations and ease of use. [Renewables.ninja](https://www.renewables.ninja/) uses `GSEE`.\n\nThe development of `GSEE` predates the existence of [`pvlib-python`](https://pvlib-python.readthedocs.io/) but builds on its functionality as of v0.4.0. 
Use `GSEE` if you want fast simulations with sensible defaults and solar energy technologies other than PV, and `pvlib-python` if you need control over the nuts and bolts of simulating PV systems.\n\n## Installation\n\n`GSEE` requires Python 3. The recommended way to install is through the [Anaconda Python distribution](https://www.continuum.io/downloads) and `conda-forge`:\n\n conda install -c conda-forge gsee\n\nYou can also install with `pip install gsee`, but if you do so, and do not already have `numpy` installed, you will get a compiler error when pip tries to build the `climatedata_interface` Cython extension.\n\n## Documentation\n\nSee the [documentation](https://gsee.readthedocs.io/) for more information on `GSEE`'s functionality and for examples.\n\n## Credits and contact\n\nContact [Stefan Pfenninger](mailto:stefan.pfenninger@usys.ethz.ch) for questions about `GSEE`. `GSEE` is also a component of the [Renewables.ninja](https://www.renewables.ninja) project, developed by Stefan Pfenninger and Iain Staffell. Use the [contact page](https://www.renewables.ninja/about) there if you want more information about Renewables.ninja.\n\n## Citation\n\nIf you use `GSEE` or code derived from it in academic work, please cite:\n\nStefan Pfenninger and Iain Staffell (2016). Long-term patterns of European PV output using 30 years of validated hourly reanalysis and satellite data. *Energy* 114, pp. 1251-1265. [doi: 10.1016/j.energy.2016.08.060](https://doi.org/10.1016/j.energy.2016.08.060)\n\n## License\n\nBSD-3-Clause\n""",",https://doi.org/10.1016/j.energy.2016.08.060","2016/09/01, 11:41:04",2610,BSD-3-Clause,0,56,"2022/03/10, 18:01:09",7,9,9,0,594,2,0.3333333333333333,0.07272727272727275,,,0,3,false,,false,true,,,https://github.com/renewables-ninja,https://www.renewables.ninja/,,,,https://avatars.githubusercontent.com/u/11838260?v=4,,,
PVMismatch,An explicit Python PV system IV & PV curve trace calculator which can also calculate mismatch.,SunPower,https://github.com/SunPower/PVMismatch.git,github,"numpy,scipy,python,solar,photovoltaic",Photovoltaics and Solar Energy,"2022/04/14, 19:15:36",63,0,13,false,Jupyter Notebook,SunPower,SunPower,"Jupyter Notebook,Python,Makefile,Batchfile",http://sunpower.github.io/PVMismatch/,"b'PVMismatch\n==========\n\nAn explicit IV & PV curve trace calculator for PV system circuits\n\nModel chain\n Cell > Cell string > Module > String > System\n\nKey Model inputs\n Cell technology characteristics\n\n Effective Irradiance (suns)\n\n Temperature (cell temperature)\n\n Bypass device configuration\n\n Cell string layout\n\n|Build Status|\n\nInstallation\n------------\n\nPVMismatch is on `PyPI `__. Install it\nwith `pip `__:\n\n::\n\n $ pip install pvmismatch\n\nRequirements\n------------\n\nPVMismatch requires NumPy, SciPy and matplotlib. These packages are available\nfrom PyPI, `Christoph Gohlke `__\nand Anaconda. You must install them prior to using PVMismatch.\n\nUsage\n-----\n\nPlease see the `documentation `__ for\ntutorials and API. Bugs and feature requests can be reported on\n`GitHub `__. The change\nhistory is also on `GitHub `__.\n
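\nA minimal usage sketch (the class and attribute names below come from the PVMismatch\ndocumentation and are an assumption here, not an official example): build the default\nsystem, reduce the irradiance, and read off the maximum power point.\n\n.. code:: python\n\n from pvmismatch import pvsystem\n\n # Build a PV system with the default cell, module and string layout\n pvsys = pvsystem.PVsystem()\n # Set the effective irradiance to 0.75 suns across the whole system\n pvsys.setSuns(0.75)\n # Maximum power point of the combined system IV curve, in watts\n print(pvsys.Pmp)\n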
\n.. |Build Status| image:: https://travis-ci.org/SunPower/PVMismatch.svg?branch=master\n :target: https://travis-ci.org/SunPower/PVMismatch\n\n\nOther Projects that use PVMismatch\n----------------------------------\nSystem level mismatch loss calculator using PVMismatch tool (STC and Annual energy loss)\nhttps://github.com/SunPower/MismatchLossStudy\n\nCiting PVMismatch\n----------------------------------\nWe appreciate your use of PVMismatch, and ask that you appropriately cite the software in exchange for its open-source publication.\n\nMark Mikofski, Bennet Meyers, Chetan Chaudhari (2018). ""PVMismatch Project: https://github.com/SunPower/PVMismatch"". SunPower Corporation, Richmond, CA.\n\nPlease consider adding a ""pvmismatch"" tag to StackOverflow/Quora/LinkedIn/ResearchGate posts related to PVMismatch.\n\n\nCurrent Maintainer at SunPower\n----------------------------------\n@ahoffmanSPWR\n'",,"2013/01/23, 00:26:43",3927,BSD-3-Clause,0,415,"2022/04/14, 19:15:36",48,41,95,0,559,5,2.2,0.17341040462427748,"2019/05/11, 20:24:38",v4.1,0,8,false,,false,true,,,https://github.com/SunPower,,,,,https://avatars.githubusercontent.com/u/1341977?v=4,,,
rdtools,An open source library to support reproducible technical analysis of time series data from photovoltaic energy systems.,NREL,https://github.com/NREL/rdtools.git,github,,Photovoltaics and Solar Energy,"2023/07/31, 14:42:08",133,7,21,true,Python,National Renewable Energy Laboratory,NREL,Python,https://rdtools.readthedocs.io/,"b'\n\nMaster branch:\n[![Build Status](https://github.com/NREL/rdtools/workflows/pytest/badge.svg?branch=master)](https://github.com/NREL/rdtools/actions?query=branch%3Amaster)\n\nDevelopment branch:\n[![Build Status](https://github.com/NREL/rdtools/workflows/pytest/badge.svg?branch=development)](https://github.com/NREL/rdtools/actions?query=branch%3Adevelopment)\n\nRdTools is an open-source library to support reproducible technical analysis of\ntime series data from photovoltaic energy systems. The library aims to provide\nbest practice analysis routines along with the building blocks for users to\ntailor their own analyses.\nCurrent applications include the evaluation of PV production over several years to obtain\nrates of performance degradation and soiling loss. RdTools can handle\nboth high frequency (hourly or better) and low frequency (daily, weekly,\netc.) datasets. Best results are obtained with higher frequency data.\n\nRdTools can be installed automatically into Python from PyPI using the\ncommand line:\n\n```\npip install rdtools\n```\n\nFor API documentation and full examples, please see the [documentation](https://rdtools.readthedocs.io).\n\nRdTools currently is tested on Python 3.7+.\n
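\nAs a minimal sketch of the degradation workflow (the constant placeholder series below\nonly stands in for a real normalized-performance time series):\n\n```python\nimport pandas as pd\nimport rdtools\n\n# Normalized performance index (measured energy / expected energy),\n# indexed by time; placeholder data, replace with real measurements\ntimes = pd.date_range(\'2015-01-01\', periods=4 * 365, freq=\'D\')\nenergy_normalized = pd.Series(1.0, index=times)\n\n# Year-on-year degradation rate in %/yr with its confidence interval\nrd, rd_ci, calc_info = rdtools.degradation_year_on_year(energy_normalized)\n```\n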
\n## Citing RdTools\n\nTo cite RdTools, please use the following along with the version number\nand the specific DOI corresponding to that version from [Zenodo](https://doi.org/10.5281/zenodo.1210316):\n\n- Michael G. Deceglie, Ambarish Nag, Adam Shinn, Gregory Kimball,\n Daniel Ruth, Dirk Jordan, Jiyang Yan, Kevin Anderson, Kirsten Perry,\n Mark Mikofski, Matthew Muller, Will Vining, and Chris Deline\n RdTools, version {insert version}, Computer Software,\n https://github.com/NREL/rdtools. DOI:{insert DOI}\n\nThe underlying workflow of RdTools has been published in several places.\nIf you use RdTools in a published work, you may also wish to cite the following as\nappropriate:\n\n- Dirk Jordan, Chris Deline, Sarah Kurtz, Gregory Kimball, Michael Anderson, ""Robust PV\n Degradation Methodology and Application"", IEEE Journal of\n Photovoltaics, 8(2) pp. 525-531, 2018, DOI: [10.1109/JPHOTOV.2017.2779779](https://doi.org/10.1109/JPHOTOV.2017.2779779)\n\n- Michael G. Deceglie, Leonardo Micheli and Matthew Muller, ""Quantifying Soiling Loss\n Directly From PV Yield,"" in IEEE Journal of Photovoltaics, 8(2),\n pp. 547-551, 2018, DOI: [10.1109/JPHOTOV.2017.2784682](https://doi.org/10.1109/JPHOTOV.2017.2784682)\n\n- Kevin Anderson and Ryan Blumenthal, ""Overcoming Communications Outages in\n Inverter Downtime Analysis"", 2020 IEEE 47th Photovoltaic Specialists\n Conference (PVSC), DOI: [10.1109/PVSC45281.2020.9300635](https://doi.org/10.1109/PVSC45281.2020.9300635)\n\n- Kirsten Perry, Matthew Muller and Kevin Anderson, ""Performance Comparison of Clipping\n Detection Techniques in AC Power Time Series,"" 2021 IEEE 48th Photovoltaic\n Specialists Conference (PVSC), pp. 1638-1643, 2021, DOI: [10.1109/PVSC43889.2021.9518733](https://doi.org/10.1109/PVSC43889.2021.9518733).\n\n## References\nThe clear sky temperature calculation, `clearsky_temperature.get_clearsky_tamb()`, uses data\nfrom images created by Jesse Allen, NASA\xe2\x80\x99s Earth Observatory using data courtesy of the MODIS Land Group.\nhttps://neo.sci.gsfc.nasa.gov/view.php?datasetId=MOD_LSTD_CLIM_M\nhttps://neo.sci.gsfc.nasa.gov/view.php?datasetId=MOD_LSTN_CLIM_M\n\nOther useful references which may also be consulted for degradation rate methodology include:\n\n - D. C. Jordan, M. G. Deceglie, S. R. Kurtz, ""PV degradation methodology comparison \xe2\x80\x94 A basis for a standard"", in 43rd IEEE Photovoltaic Specialists Conference, Portland, OR, USA, 2016, DOI: 10.1109/PVSC.2016.7749593.\n - Jordan DC, Kurtz SR, VanSant KT, Newmiller J, Compendium of Photovoltaic Degradation Rates, Progress in Photovoltaics: Research and Application, 2016, 24(7), 978 - 989.\n - D. Jordan, S. Kurtz, PV Degradation Rates \xe2\x80\x93 an Analytical Review, Progress in Photovoltaics: Research and Application, 2013, 21(1), 12 - 29.\n - E. Hasselbrink, M. Anderson, Z. Defreitas, M. Mikofski, Y.-C.Shen, S. Caldwell, A. Terao, D. Kavulak, Z. Campeau, D. DeGraaff, ""Validation of the PVLife model using 3 million module-years of live site data"", 39th IEEE Photovoltaic Specialists Conference, Tampa, FL, USA, 2013, p. 7 \xe2\x80\x93 13, DOI: 10.1109/PVSC.2013.6744087.\n
## Further Instructions and Updates\n\nCheck out the [wiki](https://github.com/NREL/rdtools/wiki) for additional usage documentation, and for information on development goals and framework.\n\n'",",https://doi.org/10.5281/zenodo.1210316,https://doi.org/10.1109/JPHOTOV.2017.2779779,https://doi.org/10.1109/JPHOTOV.2017.2784682,https://doi.org/10.1109/PVSC45281.2020.9300635,https://doi.org/10.1109/PVSC43889.2021.9518733","2016/11/18, 22:17:01",2531,MIT,67,689,"2023/10/18, 01:55:24",62,228,335,29,7,14,0.8,0.5055555555555555,"2023/07/31, 14:46:51",2.1.6,0,12,false,,true,true,"cai-yutian/pv-modelling-mirror,MORE-EU/hackathon_athena,bhavyadureja25/ds_tool_with_upload,h-quest/FDDA_app,mikofski/PVRW2021,josephmckinsey/NRELHackathon2018,hackingmaterials/duramat_dashboard",,https://github.com/NREL,http://www.nrel.gov,"Golden, CO",,,https://avatars.githubusercontent.com/u/1906800?v=4,,,
Machine-Learning-for-Solar-Energy-Prediction,Predict the power production of a solar panel farm from weather measurements using machine learning.,ColasGael,https://github.com/ColasGael/Machine-Learning-for-Solar-Energy-Prediction.git,github,"machine-learning,neural-network,data-processing,python,matlab,tensorflow",Photovoltaics and Solar Energy,"2019/11/07, 18:37:29",183,0,36,false,Python,,,"Python,MATLAB,R,TeX,TSQL",,"b'# Machine-Learning-for-Solar-Energy-Prediction\nby Adele Kuzmiakova, Gael Colas and Alex McKeehan, graduate students from Stanford University\n\nThis is our final project for the CS229 ""Machine Learning"" class at Stanford (2017). Our teachers were Prof. Andrew Ng and Prof. Dan Boneh.\n\nLanguage: Python, Matlab, R\n\nGoal: predict the hourly power production of a photovoltaic power station from the measurements of a set of weather features.\n\nThis project can be decomposed into 3 parts:\n - Data Pre-processing: we processed the raw weather data files (input) from the National Oceanographic and Atmospheric Administration and the power production data files (output) from the Urbana-Champaign solar farm to get meaningful numeric values on an hourly basis;\n - Feature Selection: we ran correlation analysis between the weather features and the energy output to discard useless features; we also implemented Principal Component Analysis to reduce the dimensionality of our dataset;\n - Machine Learning: we compared the performance of our ML algorithms. Implemented models include Weighted Linear Regression with and without dimension reduction, Boosting Regression Trees, and artificial Neural Networks with and without vanishing temporal gradient.\n
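\nFor illustration only (this is not the project\'s code), the feature-selection step described above can be sketched with pandas and scikit-learn, dropping weakly correlated features before applying PCA:\n\n```python\nimport numpy as np\nimport pandas as pd\nfrom sklearn.decomposition import PCA\n\n# Hypothetical hourly weather features X and power output y\nrng = np.random.default_rng(0)\nX = pd.DataFrame(rng.normal(size=(1000, 4)), columns=[\'temp\', \'wind\', \'humidity\', \'ghi\'])\ny = 2.0 * X[\'ghi\'] - 0.5 * X[\'temp\'] + rng.normal(scale=0.1, size=1000)\n\n# Correlation analysis: keep features strongly correlated with the output\ncorr = X.corrwith(y).abs()\nX_kept = X.loc[:, corr > 0.1]\n\n# Principal Component Analysis: reduce the kept features to 2 components\nX_reduced = PCA(n_components=2).fit_transform(X_kept)\n```\n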
\nOur final report and poster are available at the root.\n'",,"2018/05/06, 19:43:04",1997,MIT,0,11,"2019/11/15, 19:01:02",1,0,3,0,1440,0,0,0.0,,,0,1,false,,false,true,,,,,,,,,,,
elpv-dataset,A dataset of functional and defective solar cells extracted from EL images of solar modules.,zae-bayern,https://github.com/zae-bayern/elpv-dataset.git,github,"photovoltaic,solar-energy,solar-cells,machine-learning,computer-vision",Photovoltaics and Solar Energy,"2021/05/24, 21:44:52",166,0,42,true,Python,ZAE Bayern,zae-bayern,Python,,"b'# A Benchmark for Visual Identification of Defective Solar Cells in Electroluminescence Imagery\n\nThis repository provides a dataset of solar cell images extracted from\nhigh-resolution electroluminescence images of photovoltaic modules.\n\n![An overview of images in the dataset. The darker the red is, the higher is the\nlikelihood of a defect in the solar cell overlayed by the corresponding color.](./doc/images/overview.jpg)\n\n## The Dataset\n\nThe dataset contains 2,624 samples of 300x300 pixels 8-bit grayscale images of\nfunctional and defective solar cells with varying degrees of degradation\nextracted from 44 different solar modules. The defects in the annotated images\nare either of intrinsic or extrinsic type and are known to reduce the power\nefficiency of solar modules.\n\nAll images are normalized with respect to size and perspective.\nAdditionally, any distortion induced by the camera lens used to capture the EL images was\neliminated prior to solar cell extraction.\n\n## Annotations\n\nEvery image is annotated with a defect probability (a floating point value\nbetween 0 and 1) and the type of the solar module (either mono- or\npolycrystalline) the solar cell image was originally extracted from.\n\nThe individual images are stored in the `images` directory and the corresponding\nannotations in `labels.csv`.\n\n## Usage\n\nIn Python, use `utils/elpv_reader` in this repository to load the images and the\ncorresponding annotations as follows:\n\n```python\nfrom elpv_reader import load_dataset\nimages, proba, types = load_dataset()\n```\n\nThe code requires NumPy and Pillow to work correctly. A short extension of this\nexample follows the BibTeX entries below.\n\n## Citing\n\nIf you use this dataset in a scientific context, please cite the following\npublications:\n\n> Buerhop-Lutz, C.; Deitsch, S.; Maier, A.; Gallwitz, F.; Berger, S.; Doll, B.; Hauch, J.; Camus, C. & Brabec, C. J. A Benchmark for Visual Identification of Defective Solar Cells in Electroluminescence Imagery. European PV Solar Energy Conference and Exhibition (EU PVSEC), 2018. DOI: [10.4229/35thEUPVSEC20182018-5CV.3.15](http://dx.doi.org/10.4229/35thEUPVSEC20182018-5CV.3.15)\n\n> Deitsch, S., Buerhop-Lutz, C., Sovetkin, E., Steland, A., Maier, A., Gallwitz, F., & Riess, C. (2021). Segmentation of photovoltaic module cells in uncalibrated electroluminescence images. Machine Vision and Applications, 32(4). DOI: [10.1007/s00138-021-01191-9](https://doi.org/10.1007/s00138-021-01191-9)\n\n> Deitsch, S.; Christlein, V.; Berger, S.; Buerhop-Lutz, C.; Maier, A.; Gallwitz, F. & Riess, C. Automatic classification of defective photovoltaic module cells in electroluminescence images. Solar Energy, Elsevier BV, 2019, 185, 455-468. DOI: [10.1016/j.solener.2019.02.067](http://dx.doi.org/10.1016/j.solener.2019.02.067)\n\nBibTeX details:\n
\n\n```bibtex\n\n@InProceedings{Buerhop2018,\n author = {Buerhop-Lutz, Claudia and Deitsch, Sergiu and Maier, Andreas and Gallwitz, Florian and Berger, Stephan and Doll, Bernd and Hauch, Jens and Camus, Christian and Brabec, Christoph J.},\n title = {A Benchmark for Visual Identification of Defective Solar Cells in Electroluminescence Imagery},\n booktitle = {European PV Solar Energy Conference and Exhibition (EU PVSEC)},\n year = {2018},\n eventdate = {2018-09-24/2018-09-28},\n venue = {Brussels, Belgium},\n doi = {10.4229/35thEUPVSEC20182018-5CV.3.15},\n}\n\n@Article{Deitsch2021,\n author = {Deitsch, Sergiu and Buerhop-Lutz, Claudia and Sovetkin, Evgenii and Steland, Ansgar and Maier, Andreas and Gallwitz, Florian and Riess, Christian},\n date = {2021},\n journaltitle = {Machine Vision and Applications},\n title = {Segmentation of photovoltaic module cells in uncalibrated electroluminescence images},\n doi = {10.1007/s00138-021-01191-9},\n issn = {1432-1769},\n number = {4},\n volume = {32},\n}\n\n@Article{Deitsch2019,\n author = {Sergiu Deitsch and Vincent Christlein and Stephan Berger and Claudia Buerhop-Lutz and Andreas Maier and Florian Gallwitz and Christian Riess},\n title = {Automatic classification of defective photovoltaic module cells in electroluminescence images},\n journal = {Solar Energy},\n year = {2019},\n volume = {185},\n pages = {455--468},\n month = jun,\n issn = {0038-092X},\n doi = {10.1016/j.solener.2019.02.067},\n publisher = {Elsevier {BV}},\n}\n```\n
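\nExtending the usage example above, a minimal sketch (not part of the dataset tooling; the \'mono\'/\'poly\' type strings are an assumption) that splits the samples by module type and flags likely-defective cells:\n\n```python\nimport numpy as np\nfrom elpv_reader import load_dataset\n\nimages, proba, types = load_dataset()\n\n# Boolean masks over the samples\nmono = types == \'mono\'       # monocrystalline modules\ndefective = proba >= 0.5     # cells with a high defect probability\n\n# Stack of likely-defective monocrystalline cell images\nprint(images[np.logical_and(mono, defective)].shape)\n```\n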
\n\n## License\n\n
This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.\n\nFor commercial use, please contact us for further information.\n'",",https://doi.org/10.1007/s00138-021-01191-9","2018/03/07, 10:53:56",2058,CUSTOM,0,18,"2022/12/01, 21:22:32",0,0,4,1,327,0,0,0.0,"2021/05/25, 12:15:29",v1.0,0,1,false,,false,true,,,https://github.com/zae-bayern,https://www.zae-bayern.de,,,,https://avatars.githubusercontent.com/u/37119743?v=4,,, feedinlib,Contains implementations of photovoltaic models to calculate electricity generation from a PV installation based on given solar radiation. Furthermore it contains all necessary pre-calculations.,oemof,https://github.com/oemof/feedinlib.git,github,,Photovoltaics and Solar Energy,"2023/07/30, 04:54:03",79,13,10,true,Python,oemof community,oemof,Python,,"b'========\nOverview\n========\n\n.. start-badges\n\n.. list-table::\n :stub-columns: 1\n\n|workflow_pytests| |workflow_checks| |docs| |appveyor| |requires| |coveralls| |packaging|\n|version| |wheel| |supported-versions| |supported-implementations| |commits-since|\n\n.. |docs| image:: https://readthedocs.org/projects/feedinlib/badge/?style=flat\n :target: https://feedinlib.readthedocs.io/\n :alt: Documentation Status\n\n.. |workflow_pytests| image:: https://github.com/oemof/feedinlib/workflows/tox%20pytests/badge.svg?branch=revision/add-tox-github-workflows-src-directory-ci\n :target: https://github.com/oemof/feedinlib/actions?query=workflow%3A%22tox+pytests%22\n\n.. |workflow_checks| image:: https://github.com/oemof/feedinlib/workflows/tox%20checks/badge.svg?branch=revision/add-tox-github-workflows-src-directory-ci\n :target: https://github.com/oemof/feedinlib/actions?query=workflow%3A%22tox+checks%22\n\n.. |packaging| image:: https://github.com/oemof/feedinlib/workflows/packaging/badge.svg?branch=revision/add-tox-github-workflows-src-directory-ci\n :target: https://github.com/oemof/feedinlib/actions?query=workflow%3Apackaging\n\n.. |appveyor| image:: https://ci.appveyor.com/api/projects/status/github/oemof/feedinlib?branch=master&svg=true\n :alt: AppVeyor Build Status\n :target: https://ci.appveyor.com/project/oemof/feedinlib\n\n.. |requires| image:: https://requires.io/github/oemof/feedinlib/requirements.svg?branch=master\n :alt: Requirements Status\n :target: https://requires.io/github/oemof/feedinlib/requirements/?branch=master\n\n.. |coveralls| image:: https://coveralls.io/repos/oemof/feedinlib/badge.svg?branch=master&service=github\n :alt: Coverage Status\n :target: https://coveralls.io/r/oemof/feedinlib\n\n.. |version| image:: https://img.shields.io/pypi/v/feedinlib.svg\n :alt: PyPI Package latest release\n :target: https://pypi.org/project/feedinlib\n\n.. |wheel| image:: https://img.shields.io/pypi/wheel/feedinlib.svg\n :alt: PyPI Wheel\n :target: https://pypi.org/project/feedinlib\n\n.. |supported-versions| image:: https://img.shields.io/pypi/pyversions/feedinlib.svg\n :alt: Supported versions\n :target: https://pypi.org/project/feedinlib\n\n.. |supported-implementations| image:: https://img.shields.io/pypi/implementation/feedinlib.svg\n :alt: Supported implementations\n :target: https://pypi.org/project/feedinlib\n\n.. |commits-since| image:: https://img.shields.io/github/commits-since/oemof/feedinlib/v0.0.12.svg\n :alt: Commits since latest release\n :target: https://github.com/oemof/feedinlib/compare/v0.0.12...master\n\n\n\n.. 
end-badges\n\nConnect weather data interfaces with interfaces of wind and pv power models.\n\n* Free software: MIT license\n\nInstallation\n============\n\nOn Linux systems, you can just::\n\n pip install feedinlib\n\nYou can also install the in-development version with::\n\n pip install https://github.com/oemof/feedinlib/archive/master.zip\n\nOn Windows systems, some dependencies are not pip-installable. Thus, Windows\nusers first have to manually install the dependencies, e.g. using conda or mamba.\n\n\nDocumentation\n=============\n\nhttps://feedinlib.readthedocs.io/\n\n\nDevelopment\n===========\n\nTo run all the tests run::\n\n tox\n\nNote, to combine the coverage data from all the tox environments run:\n\n.. list-table::\n :widths: 10 90\n :stub-columns: 1\n\n - - Windows\n - ::\n\n set PYTEST_ADDOPTS=--cov-append\n tox\n\n - - Other\n - ::\n\n PYTEST_ADDOPTS=--cov-append tox\n'",,"2014/07/24, 09:03:29",3380,MIT,3,507,"2023/07/21, 12:14:55",23,19,53,2,96,5,0.8,0.6883720930232557,"2019/01/31, 14:19:59",v0.0.12,0,11,false,,false,true,"in-RET/inretensys-fastapi,UU-ER/EHUB-Py_Training,oemof/oemof,moritz-reuter/ESEM-EE,rl-institut/OWEFE,dpinney/wiires,SESMG/SESMG,greco-project/greco_technologies,birgits/dgs_tool,open-fred/feedin_germany,rl-institut/WAM_APP_vdi,rl-institut/WAM_APP_stemp_mv,rl-institut/appBBB",,https://github.com/oemof,https://oemof.org,Germany,,,https://avatars.githubusercontent.com/u/8503379?v=4,,,
photovoltaic,A Python library used in photovoltaics.,pvedu,https://github.com/pvedu/photovoltaic.git,github,,Photovoltaics and Solar Energy,"2022/04/25, 22:48:39",44,2,3,false,HTML,,,"HTML,Python,Batchfile",,"b'# photovoltaic\n\n**photovoltaic** is a library of python functions used in photovoltaics. It\'s preferable to install the library, but the functions are simple enough to include in your code.\n\nHelp Index: http://htmlpreview.github.io/?https://github.com/pvedu/photovoltaic/blob/master/html/photovoltaic.html\nCode is at: https://github.com/pvedu/photovoltaic/tree/master/photovoltaic\n\n## Examples\n\nThe best place to start is with the examples at:\nhttps://github.com/pvedu/pvon\n\nThere are instructions on how to run the examples completely within the browser and without installing anything.\n\n## Typical usage\n\n import photovoltaic as pv\n irradiance = pv.sun.blackbody_spectrum(800)\n print(irradiance)\n\nThis would print the blackbody irradiance at 800 nm with the default temperature of 6000 K in W/m2/nm.\n
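\nBuilding on the snippet above, a small sketch (only the pv.sun.blackbody_spectrum call is taken from this README; the loop is just for illustration) that tabulates the spectrum at a few wavelengths:\n\n import photovoltaic as pv\n\n # blackbody irradiance (W/m2/nm) at a few wavelengths (nm)\n for wavelength in (400, 600, 800, 1000):\n     print(wavelength, pv.sun.blackbody_spectrum(wavelength))\n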
\n## Installation\n\nInstallation is via pip from the PyPI repository. From a command prompt:\n\n pip install photovoltaic\n\nInside a Jupyter notebook use:\n\n !pip install photovoltaic\n\nSome systems use pip3 instead of pip. People recommend using a virtual environment, but I don\'t find it necessary on MS Windows.\n\nThe above command should also install the latest scipy and numpy packages. They can also be installed directly with:\n\n pip install numpy\n\n pip install scipy\n\n## Requirements\n\nKnown to work under plain vanilla Python 3.6 using the standard IDLE editor with Numpy and Scipy installed. The examples also make use of matplotlib. It should also work with the various Python systems such as Anaconda, Jupyter, etc.\n\nAnaconda includes a wealth of scientific packages and is available at: https://www.anaconda.com/download/\n\nStandard Python is at https://www.python.org/downloads/\n\nFor the graphs, Matplotlib is needed in addition to the above numpy and scipy packages:\n\n pip install matplotlib\n\n## Other\n\n**f** means **from** in some of the function names. For example:\n\nnmfeV() converts the energy of a photon **from** electron volts to a wavelength in nm.\n\nThis follows the conventions of other python functions such as strftime.\n\nThe library is designed to be as simple as possible and an ""algorithm that runs"". While it is easier to install the whole library, it is also straightforward to cut/paste parts of the code.\n\nThere are other python libraries that cover sections of the photovoltaic library in much more detail.\n\n* [pvlib](https://github.com/pvlib/pvlib-python) covers insolation and systems modeling.\n* [Semiconductors](https://github.com/MK8J) relating to solar.\n'",,"2017/08/29, 23:41:59",2247,GPL-3.0,0,56,"2022/04/25, 22:48:40",2,2,2,0,547,1,0.0,0.4285714285714286,,,0,3,false,,false,false,"NREL/PVDegradationTools,pvedu/pvon",,,,,,,,,,
pvcaptest,Collection of functions and Jupyter Notebooks to partially automate running a capacity test following ASTM E2848.,pvcaptest,https://github.com/pvcaptest/pvcaptest.git,github,,Photovoltaics and Solar Energy,"2023/09/05, 01:50:52",17,0,3,true,Python,,pvcaptest,Python,,"b'# pvcaptest\n
\n# What is pvcaptest?\npvcaptest is an open source python package created to facilitate capacity testing following the ASTM E2848 standard. The captest module contains a single class, CapData, which provides methods for loading, visualizing, filtering, and regressing capacity testing data. The module also includes functions that take CapData objects as arguments and provide summary data and capacity test results.\n\nDocumentation and examples are available on [readthedocs](https://pvcaptest.readthedocs.io/en/latest/), including full examples in jupyter notebooks that can be run in the browser without installing anything.\n\n# Installation\nThese instructions assume that you are new to using conda and python; if that is not the case, skip to the last section for users familiar with conda and pip.\n\nThe recommended method to install pvcaptest is to create a conda environment for pvcaptest. Installing Anaconda or miniconda will install both python and conda. There is no need to install python separately.\n\n**Easiest Option:**\n1. Download and install the [anaconda distribution](https://www.anaconda.com/products/individual). Follow the default installation settings.\n2. On Windows go to the start menu and open the Anaconda prompt under the newly installed Anaconda program. On OSX or Linux open a terminal window.\n3. Install pvcaptest by typing the command `conda install -c conda-forge pvcaptest` and pressing enter. The `-c conda-forge` option tells conda to install pvcaptest from the [conda forge channel](https://conda-forge.org/#about).\n\nThis will install the pvcaptest package in the base environment created when Anaconda is installed. This should work and provide you with jupyter notebook and jupyter lab to run pvcaptest in. If you think you will use your Anaconda installation to create and maintain additional environments, the following process for creating a stand-alone environment is likely a better option.\n\n**Better long term option:**\n1. If you do not already have it installed, download and install the [anaconda distribution](https://www.anaconda.com/products/individual) or miniconda.\n2. Go to the [project github page](https://github.com/pvcaptest/pvcaptest) and download the project source to obtain a copy of the `environment.yml` file. Click the green code button and click \'Download ZIP\'.\n3. On Windows go to the start menu and open the Anaconda prompt under the newly installed Anaconda program. On OSX or Linux open a terminal window. Note the path in the prompt for the next step. On Windows this should be something like `C:\\Users\\username\\`.\n4. Unzip and move the `environment.yml` file to the folder identified by the path from the previous step.\n5. In your Anaconda prompt or terminal type `conda env create -f environment.yml`\nand hit enter. Wait for a few seconds while conda works to solve the environment. It should ask you if you want to proceed to install new packages including pvcaptest. Type `y` and press enter to proceed and wait for conda to finish installing pvcaptest and the other packages.\n6. Once the installation is complete conda will print out a command for activating the new environment. Run that command, which should be like `conda activate captest_env`.\n\nThe environment created will include jupyter lab and notebook for you to use pvcaptest in. You can start these using the commands `jupyter lab` or `jupyter notebook`.\n
\nSee the [conda documentation](https://docs.conda.io/projects/conda/en/latest/user-guide/tasks/manage-environments.html#creating-an-environment-from-an-environment-yml-file) for more details on using conda to create and manage environments.\n\n### Install for users familiar with conda and pip:\nConda install into an existing environment:\n\n`conda install -c conda-forge pvcaptest`\n\nIf you prefer, you can pip install pvcaptest, but the recommended approach is to use the conda package.\n\n**Note: The conda package is named pvcaptest and the pip package is named captest. The project is moving to consistent use of the pvcaptest name, but the package name on pypi will remain as captest.**\n'",,"2017/09/19, 01:33:23",2227,MIT,213,881,"2023/09/05, 01:53:52",22,35,76,31,50,0,0.3,0.015706806282722474,"2023/09/10, 18:59:38",v0.10.0,0,3,false,,false,false,,,https://github.com/pvcaptest,,,,,https://avatars.githubusercontent.com/u/53880406?v=4,,,
pvtrace,Optical ray tracing for luminescent materials and spectral converter photovoltaic devices.,danieljfarrell,https://github.com/danieljfarrell/pvtrace.git,github,"python,photovoltaics,raytracing,optics,energy",Photovoltaics and Solar Energy,"2021/03/30, 17:10:12",91,2,8,true,Python,,,Python,,"b'[![DOI](https://zenodo.org/badge/DOI/10.5281/zenodo.592982.svg)](https://doi.org/10.5281/zenodo.592982)\n\n![](https://raw.githubusercontent.com/danieljfarrell/pvtrace/master/docs/logo.png)\n\n> Optical ray tracing for luminescent materials and spectral converter photovoltaic devices\n\n# Ray-tracing luminescent solar concentrators\n\n*pvtrace* is a statistical photon path tracer written in Python. Rays are followed through a 3D scene and their interactions with objects are recorded to build up statistical information about energy flow.\n\nThis is useful in photovoltaics and non-imaging optics where the goal is to design systems which efficiently transport light to target locations.\n\nOne of its key features is the ability to simulate re-absorption in luminescent materials, for example in devices such as Luminescent Solar Concentrators (LSCs).\n\nA basic LSC can be simulated and visualised in five lines of code,\n\n```python\nfrom pvtrace import *\nlsc = LSC((5.0, 5.0, 1.0)) # size in cm\nlsc.show() # open visualiser\nlsc.simulate(100) # emit 100 rays\nlsc.report() # print report\n```\n\nThis script will render the ray-tracing in real time,\n\n![](https://raw.githubusercontent.com/danieljfarrell/pvtrace/master/docs/pvtrace-demo.gif)\n\npvtrace has been validated against three other luminescent concentrator codes. For full details, see the [Validation.ipynb](https://github.com/danieljfarrell/pvtrace/blob/master/examples/Validation.ipynb) notebook.\n\n![](https://raw.githubusercontent.com/danieljfarrell/pvtrace/master/examples/Validation.png)\n\n# Install\n\n## MacOS using pyenv\n\nOn MacOS *pvtrace* can be installed easily using [pyenv](https://github.com/pyenv/pyenv), the `pip` command and [homebrew](https://brew.sh). First install [homebrew](https://brew.sh), then install `spatialindex` for the RTree dependency,\n\n brew install spatialindex\n\nNext, create a clean virtual environment for pvtrace\n\n pyenv install 3.7.8\n pyenv virtualenv 3.7.8 pvtrace-env\n pyenv activate pvtrace-env\n pip install pvtrace\n\n## Linux and Windows using Conda\n\nOn Linux and Windows you must use conda to create the python environment. Optionally, you can also use this method on MacOS if you prefer conda over pyenv.\n
You can also use this method on MacOS if you prefer conda over pyenv.\n\n    conda create --name pvtrace-env python=3.7.8\n    conda activate pvtrace-env\n    conda install Rtree\n    pip install pvtrace\n\n# Run the example script and notebooks\n\nDownload the [hello_world.py](https://raw.githubusercontent.com/danieljfarrell/pvtrace/master/examples/hello_world.py) example script either manually or using `curl`,\n\n    # Download example script\n    curl https://raw.githubusercontent.com/danieljfarrell/pvtrace/master/examples/hello_world.py > hello_world.py\n\nNow activate your python environment!\n\nIf you installed using **pyenv** do the following,\n\n    pyenv local pvtrace-env\n\nIf you are using **conda**, do this,\n\n    conda activate pvtrace-env\n\nNow start the meshcat server with the command,\n\n    meshcat-server\n\nThis will print information like,\n\n    zmq_url=tcp://127.0.0.1:6000\n    web_url=http://127.0.0.1:7000/static/\n\nOpen a new terminal window and again activate your pvtrace-env.\n\nOpen `hello_world.py` and make sure the line below has the `zmq_url` of your meshcat-server,\n\n    # Change zmq_url here to be the address of your meshcat-server!\n    renderer = MeshcatRenderer(\n        zmq_url=""tcp://127.0.0.1:6000"", wireframe=True, open_browser=True\n    )\n\nYou can now run pvtrace scripts! Run the following command,\n\n    python hello_world.py\n\nAlso take a look at the online Jupyter notebook tutorial series which provides an overview of pvtrace and examples,\n\n 1. [Quick Start.ipynb](https://github.com/danieljfarrell/pvtrace/blob/master/examples/001%20Quick%20Start.ipynb), an interactive ray-tracing tutorial (download and run locally)\n 2. [Materials.ipynb](https://github.com/danieljfarrell/pvtrace/blob/master/examples/002%20Materials.ipynb), include physical properties with materials\n 3. [Lights.ipynb](https://github.com/danieljfarrell/pvtrace/blob/master/examples/003%20Lights.ipynb), place photon sources in the scene and customise their properties\n 4. [Nodes.ipynb](https://github.com/danieljfarrell/pvtrace/blob/master/examples/004%20Nodes.ipynb) translate and rotate scene objects with nodes\n 5. [Geometry.ipynb](https://github.com/danieljfarrell/pvtrace/blob/master/examples/005%20Geometry.ipynb) define the shapes of objects in your scene\n 6. 
[Coatings.ipynb](https://github.com/danieljfarrell/pvtrace/blob/master/examples/006%20Coatings.ipynb) introduce custom reflections with coatings\n\nDownload and run these notebooks locally for a more interactive experience, but first install jupyter,\n\n    pip install jupyter\n\nor with conda,\n\n    conda install jupyter\n\nThen launch the jupyter notebook,\n\n    jupyter notebook\n\n# Features\n\n## Ray optics simulations\n\n*pvtrace* supports 3D ray optics simulations of the following shapes,\n\n* box\n* sphere\n* cylinder\n* mesh\n\nThe optical properties of each shape can be customised,\n\n* refractive index\n* absorption coefficient\n* scattering coefficient\n* emission lineshape\n* quantum yield\n* surface reflection\n* surface scattering\n\n![](https://raw.githubusercontent.com/danieljfarrell/pvtrace/master/docs/example.png)\n\n## High and low-level API\n\n*pvtrace* has a high-level API for handling common problems with LSCs and a low-level API where objects can be positioned in a 3D scene and optical properties customised.\n\nFor example, a script using the low-level API to ray trace this glass sphere is below,\n\n```python\nimport time\nimport sys\nimport functools\nimport numpy as np\nfrom pvtrace import *\n\n# World node contains all objects\nworld = Node(\n    name=""world (air)"",\n    geometry=Sphere(\n        radius=10.0,\n        material=Material(refractive_index=1.0),\n    )\n)\n\n# The glass sphere\nsphere = Node(\n    name=""sphere (glass)"",\n    geometry=Sphere(\n        radius=1.0,\n        material=Material(refractive_index=1.5),\n    ),\n    parent=world\n)\nsphere.location = (0, 0, 2)\n\n# The source of rays\nlight = Node(\n    name=""Light (555nm)"",\n    light=Light(direction=functools.partial(cone, np.pi/8)),\n    parent=world\n)\n\n# Render and ray-trace\nrenderer = MeshcatRenderer(wireframe=True, open_browser=True)\nscene = Scene(world)\nrenderer.render(scene)\nfor ray in scene.emit(100):\n    steps = photon_tracer.follow(scene, ray)\n    path, events = zip(*steps)\n    renderer.add_ray_path(path)\n    time.sleep(0.1)\n\n# Wait for Ctrl-C to terminate the script; keep the window open\nprint(""Ctrl-C to close"")\nwhile True:\n    try:\n        time.sleep(.3)\n    except KeyboardInterrupt:\n        sys.exit()\n```\n\n## Scene Graph\n\n*pvtrace* is designed in layers, each with as limited a scope as possible.\n\n![](https://raw.githubusercontent.com/danieljfarrell/pvtrace/master/docs/pvtrace-design.png)\n\n
- **Scene**: the graph data structure of nodes and the thing that is ray-traced.
- **Node**: provides a coordinate system; nodes can be nested inside one another and perform arbitrary rotation and translation transformations.
- **Geometry**: attached to nodes to define different shapes (Sphere, Box, Cylinder, Mesh); handles all ray intersections.
- **Material**: attached to geometry objects to assign physical properties to shapes, such as refractive index.
- **Surface**: handles details of the interaction between material surfaces and is a customisation point for simulation of wavelength-selective coatings.
- **Components**: specifies optical properties of the geometry's volume: absorption coefficient, scattering coefficient, quantum yield, emission spectrum.
- **Ray-tracing engine**: the algorithm which spawns rays, computes intersections, samples probabilities and traverses the rays through the scene.
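To see how these layers fit together, the short sketch below builds the same scene graph as the low-level API example above (Node, Sphere, Material, Light) but, instead of rendering, uses the ray-tracing engine to tally how each ray's path terminates; the 1,000-ray count is an arbitrary choice.

```python
import functools
from collections import Counter

import numpy as np
from pvtrace import *

# Scene graph: a world node (air) containing a small glass sphere
world = Node(
    name="world (air)",
    geometry=Sphere(radius=10.0, material=Material(refractive_index=1.0)),
)
sphere = Node(
    name="sphere (glass)",
    geometry=Sphere(radius=1.0, material=Material(refractive_index=1.5)),
    parent=world,
)
sphere.location = (0, 0, 2)
light = Node(
    name="light (555nm)",
    light=Light(direction=functools.partial(cone, np.pi / 8)),
    parent=world,
)

# Ray-tracing engine: each ray is followed individually through the scene
scene = Scene(world)
counts = Counter()
for ray in scene.emit(1000):
    steps = photon_tracer.follow(scene, ray)
    _, events = zip(*steps)
    counts[events[-1]] += 1  # tally the terminal event of each ray path

print(counts)  # statistical picture of where the emitted rays ended up
```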
\n\n## Ray-tracing engine\n\nCurrently *pvtrace* supports only one ray-tracing engine: a photon path tracer. This is physically accurate, down to treating individual absorption and emission events, but is slow because the problem cannot be vectorised as each ray is followed individually.\n\n# Documentation\n\nInteractive Jupyter notebooks are in the [examples directory](https://github.com/danieljfarrell/pvtrace/tree/master/examples); download them and take a look, or view them online.\n\n# Contributing\n\nPlease use the github [issue](https://github.com/danieljfarrell/pvtrace/issues) tracker for bug fixes, suggestions, or support questions.\n\nIf you are considering contributing to pvtrace, first fork the project. This will make it easier to include your contributions using pull requests.\n\n## Creating a development environment\n\n1. First create a new development environment using [MacOS instructions](#macos-using-pyenv) or [Linux and Windows instructions](#linux-and-windows-using-conda), but do not install pvtrace using pip! You will need to clone your own copy of the source code in the following steps.\n2. Use the GitHub fork button to make your own fork of the project. This will make it easy to include your changes in pvtrace using a pull request.\n3. Follow the steps below to clone and install the development dependencies\n\n```bash\n# Pull from your fork\ngit clone https://github.com/<your-username>/pvtrace.git\n\n# Get development dependencies\npip install -r pvtrace/requirements_dev.txt \n\n# Add local `pvtrace` directory to known packages\npip install -e pvtrace\n\n# Run unit tests\npytest pvtrace/tests\n\n# Run an example\npython pvtrace/examples/hello_world.py\n```\n\nYou should now be able to edit the source code and simply run scripts directly without the need to reinstall anything.\n\n## Unit tests\n\nPlease add new unit tests, or modify existing ones, in the `pvtrace/tests` directory if you are adding new code. This will make it much easier to include your changes in the project.\n\n## Pull requests\n\nPull requests will be considered. Please make contact before doing a lot of work, to make sure that the changes will definitely be included in the main project.\n\n# Questions\n\nYou can get in contact with me directly at dan@excitonlabs.com or raise an issue on the issue tracker.\n\n# Dependencies\n\nThe basic environment requires the following packages, which will be installed with `pip` automatically\n\n* python >= 3.7.2\n* numpy\n* pandas\n* trimesh[easy]\n* meshcat >= 0.0.16\n* anytree\n'",",https://doi.org/10.5281/zenodo.592982","2011/01/05, 14:28:41",4676,CUSTOM,0,185,"2023/05/31, 10:41:40",8,25,48,4,147,0,0.5,0.025316455696202556,"2020/11/18, 23:04:23",2.1.6,0,5,false,,false,false,"dcambie/LSC-PM_solar_miniplant,danieljfarrell/pvtrace",,,,,,,,,, SolarPILOT,Solar power tower layout and optimization tool.,NREL,https://github.com/NREL/SolarPILOT.git,github,,Photovoltaics and Solar Energy,"2023/06/28, 21:42:49",37,0,9,true,C++,National Renewable Energy Laboratory,NREL,"C++,HTML,Python,Rich Text Format,CSS,C,CMake,Inno Setup,JavaScript,TeX,Makefile,Shell",https://www.nrel.gov/csp/solarpilot.html,"b'# SolarPILOT - Solar Power tower Integrated Layout and Optimization Tool\n\n[![**develop** Build Status](https://travis-ci.org/NREL/SolarPILOT.svg?branch=develop)](https://travis-ci.org/NREL/SolarPILOT)\n\nThe SolarPILOT Open Source Project repository contains the source code, tools, and instructions to build a desktop version of the National Renewable Energy Laboratory\'s SolarPILOT. 
SolarPILOT is a design, characterization, and optimization tool for concentrating solar power (CSP) tower plants. It is available through this repository as a standalone application with full functionality, and it is also included in several CSP tower models within NREL\'s System Advisor Model (SAM) in limited form. For more details about SolarPILOT\'s capabilities, see the SolarPILOT website at [https://www.nrel.gov/csp/solarpilot.html](https://www.nrel.gov/csp/solarpilot.html). For details on integration with SAM, see the SAM website at [sam.nrel.gov](https://sam.nrel.gov).\n\nThe desktop version of SolarPILOT for Windows or Linux builds from the following open source projects:\n\n* [SolTrace](https://github.com/nrel/soltrace) is a general tool for Monte-Carlo ray tracing that allows optical characterization of a wide range of possible systems. The tool is used by SolarPILOT alongside the analytical Hermite polynomial expansion model to ensure model accuracy.\n\n* [SSC](https://github.com/mjwagner2/ssc/tree/solarpilot-develop) is a set of ""compute modules"" that simulate different kinds of power systems and financial structures. It can be run directly using the [SAM Software Development Kit](https://sam.nrel.gov/sdk). **If you are looking for the algorithms underlying the models, they are located in this repository. For a list of SSC release versions that correspond with SolarPILOT GUI releases, see the release list and tags at [Mike Wagner\'s personal GitHub page](https://github.com/mjwagner2/ssc/releases).**\n\n* [LK](https://github.com/nrel/lk) is a scripting language that is integrated into SAM and allows users to add functionality to the program.\n\n* [wxWidgets](https://www.wxwidgets.org/) is a cross-platform graphical user interface platform used for SAM\'s user interface, and for the development tools included with SSC (SDKtool) and LK (LKscript). The current version of SAM uses wxWidgets 3.1.1.\n\n* [WEX](https://github.com/nrel/wex) is a set of extensions to wxWidgets for custom user-interface elements used by SAM, and by LKscript and DView, which are integrated into SAM.\n\n* [Google Test](https://github.com/google/googletest) is a C++ test framework that enables comprehensive unit-testing of software. Contributions to the project will eventually be required to have associated unit tests written in this framework.\n\n* This repository, **SolarPILOT**, provides the user interface to assign values to inputs of the computational modules, run the modules in the correct order, and display calculation results. It also includes tools for editing LK scripts, viewing field layout and receiver flux map data, and performing multi-dimensional system optimization.\n\n## Quick Steps for Building SolarPILOT\n\nFor detailed build instructions see the [wiki](https://github.com/NREL/SolarPILOT/wiki), with specific instructions for:\n\n* [Windows](https://github.com/NREL/SolarPILOT/wiki/build-windows)\n* [Linux](https://github.com/NREL/SolarPILOT/wiki/build-linux)\n\nThese are the general quick steps you need to follow to set up your computer for developing SolarPILOT:\n\n1. Set up your development tools:\n\n * Windows: Visual Studio 2017 Community or other editions available [here](https://www.visualstudio.com/).\n * Linux: g++ compiler available [here](http://www.cprogramming.com/g++.html) or as part of the Linux distribution.\n\n2. Download the [wxWidgets 3.1.1 source code](https://www.wxwidgets.org/downloads/) for your operating system.\n\n3. Build wxWidgets.\n\n4. 
On Windows, create the WXMSW3 environment variable on your computer to point to the wxWidgets installation folder; on Linux, create the symbolic link `/usr/local/bin/wx-config-3` to point to `/path/to/wxWidgets/bin/wx-config`.\n\n5. As you did for wxWidgets, for each of the following projects, clone (download) the repository, build the project, and then (Windows only) create an environment variable pointing to the project folder. Build the projects in the following order, and assign the environment variable for each project before you build the next one:\n\n
| Project | Repository URL | Windows Environment Variable |
| --- | --- | --- |
| wxWidgets | https://www.wxwidgets.org/downloads | WXMSW3 |
| LK | https://github.com/NREL/lk | LKDIR |
| WEX | https://github.com/NREL/wex | WEXDIR |
| Google Test | https://github.com/google/googletest | GTEST |
| SSC | https://github.com/mjwagner2/ssc (`-b solarpilot-develop`) | SSCDIR |
| SolTrace | https://github.com/NREL/SolTrace | CORETRACEDIR |
| SolarPILOT | https://github.com/NREL/SolarPILOT | |
\n\n## Contributing\n\nIf you would like to report an issue with SolarPILOT or make a feature request, please let us know by adding a new issue on the [issues page](https://github.com/NREL/SolarPILOT/issues).\n\nIf you would like to submit code to fix an issue or add a feature, you can use GitHub to do so. Please see [Contributing](CONTRIBUTING.md) for instructions.\n\n## License\n\nSolarPILOT\'s open source code is copyrighted by the Alliance for Sustainable Energy and licensed under a [mixed MIT and GPLv3 license](LICENSE.md). It allows for-profit and not-for-profit organizations to develop and redistribute software based on SolarPILOT under terms of an MIT license and requires that research entities including national laboratories, colleges and universities, and non-profit organizations make the source code of any redistribution publicly available under terms of a GPLv3 license.\n\n## Citing SolarPILOT\n\nIf you find SolarPILOT useful, we ask that you appropriately cite it in documentation of your work. We provide the open-source code and executable distributions for free, but find value in being acknowledged in work that advances scientific knowledge and engineering technology. For general use of SolarPILOT, the preferred citation is:\n\n> Wagner, M.J., Wendelin, T. (2018). ""SolarPILOT: A power tower solar field layout and characterization tool"", _Solar Energy_, Vol. 171, pp. 185-196, ISSN 0038-092X, [https://doi.org/10.1016/j.solener.2018.06.063](https://doi.org/10.1016/j.solener.2018.06.063).\n\nThe work is also presented in the following publication:\n\n> Wagner, M.J., Braun, R.J., Newman, A.M. (2017). ""Optimization of stored energy dispatch for concentrating solar power systems."" Doctoral Thesis. Colorado School of Mines, Golden, Colorado. Chapter II, pp. 19-45. URL: [https://dspace.library.colostate.edu/handle/11124/171000](https://dspace.library.colostate.edu/handle/11124/171000).\n\nFor work that builds substantially upon or is derived from the open source project, the preferred citation is:\n\n> Wagner, M.J. (2018). ""SolarPILOT Open-Source Software Project: [github.com/NREL/SolarPILOT](https://github.com/NREL/SolarPILOT)."" Accessed _(dd/mm/yyyy)_. National Renewable Energy Laboratory, Golden, Colorado.\n'",",https://doi.org/10.1016/j.solener.2018.06.063,https://doi.org/10.1016/j.solener.2018.06.063","2017/03/24, 18:22:25",2406,CUSTOM,11,307,"2022/08/08, 19:28:42",16,39,68,0,443,3,0.0,0.07749077490774903,"2019/01/03, 19:40:08",v1.3.2,0,2,false,,false,true,,,https://github.com/NREL,http://www.nrel.gov,"Golden, CO",,,https://avatars.githubusercontent.com/u/1906800?v=4,,, solar-data-tools,Data analysis tools for working with historical PV solar time-series data sets.,slacgismo,https://github.com/slacgismo/solar-data-tools.git,github,,Photovoltaics and Solar Energy,"2023/09/27, 00:01:47",39,11,12,true,Jupyter Notebook,SLAC GISMo,slacgismo,"Jupyter Notebook,Python,Shell",,"b'# solar-data-tools\n\n\n\n \n \n\n \n \n\n\n \n \n\n\n \n \n\n\n \n \n\n\n \n \n\n\n \n \n\n\n \n \n\n
[Badges: Latest Release, License, Build Status, Code Quality, Publications, PyPI Downloads, Conda Downloads, Test-Coverage]
\n\nTools for performing common tasks on solar PV data signals. These tasks include finding clear days in\na data set, common data transforms, and fixing time stamp issues. These tools are designed to be\nautomatic and require little if any input from the user. Libraries are included to help with data IO\nand plotting as well.\n\nThere is close integration between this repository and the [Statistical Clear Sky](https://github.com/slacgismo/StatisticalClearSky) repository, which provides a ""clear sky model"" of system output, given only measured power as an input.\n\nSee the [notebooks](/notebooks) folder for examples.\n\n## Install & Setup\n\n### Three ways of setting up; any of these approaches works:\n\n#### 1) Recommended: Set up `conda` environment with provided `.yml` file\n\nWe recommend setting up a fresh Python virtual environment in which to use `solar-data-tools`, using the [Conda](https://docs.conda.io/projects/conda/en/latest/index.html) package management system and the environment configuration file named `pvi-user.yml`, provided in the top level of this repository. This will install the `statistical-clear-sky` package as well.\n\nCreating the env:\n\n```bash\n$ conda env create -f pvi-user.yml\n```\n\nStarting the env:\n\n```bash\n$ conda activate pvi_user\n```\n\nStopping the env:\n\n```bash\n$ conda deactivate\n```\n\nUpdating the env with the latest:\n\n```bash\n$ conda env update -f pvi-user.yml\n```\n\nAdditional documentation on setting up the Conda environment is available [here](https://github.com/slacgismo/pvinsight-onboarding/blob/main/README.md).\n\n\n#### 2) PIP Package\n\n```sh\n$ pip install solar-data-tools\n```\n\nAlternative: clone the repo from GitHub and mimic the pip package by installing locally.\n\n```bash\n$ pip install -e path/to/root/folder\n```\n\n#### 3) Anaconda Package\n\n```sh\n$ conda install -c slacgismo solar-data-tools\n```\n\n### Solvers\n\n#### ECOS\n\nBy default, the ECOS solver is used; it is supported by cvxpy because it is open source.\nHowever, we have found the MOSEK solver to be more stable, so we encourage you to install it separately and obtain a license on your own.\n\n#### MOSEK\n\nMOSEK is a commercial software package. The included YAML file will install MOSEK for you, but you will still need to obtain a license. 
More information is available here:\n\n* [mosek](https://www.mosek.com/resources/getting-started/)\n* [Free 30-day trial](https://www.mosek.com/products/trial/)\n* [Personal academic license](https://www.mosek.com/products/academic-licenses/)\n\n## Usage\n\nUsers will primarily interact with this software through the `DataHandler` class.\n\n```python\nfrom solardatatools import DataHandler\nfrom solardatatools.dataio import get_pvdaq_data\n\npv_system_data = get_pvdaq_data(sysid=35, api_key=\'DEMO_KEY\', year=[2011, 2012, 2013])\n\ndh = DataHandler(pv_system_data)\ndh.run_pipeline(power_col=\'dc_power\')\n```\nIf everything is working correctly, you should see something like the following\n\n```\ntotal time: 16.67 seconds\n--------------------------------\nBreakdown\n--------------------------------\nPreprocessing 6.52s\nCleaning 8.62s\nFiltering/Summarizing 1.53s\n Data quality 0.23s\n Clear day detect 0.19s\n Clipping detect 0.21s\n Capacity change detect 0.91s\n```\n\n## Contributors\n\nMust enable pre-commit hook before pushing any contributions\n```\npip install pre-commit\npre-commit install\n```\n\nRun pre-commit hook on all files\n```\npre-commit run --all-files\n```\n\n## Test Coverage\n\nIn order to view the current test coverage metrics, run:\n```\ncoverage run --source solardatatools -m unittest discover && coverage html\nopen htmlcov/index.html\n```\n\n## Versioning\n\nWe use [Semantic Versioning](http://semver.org/) for versioning. For the versions available, see the [tags on this repository](https://github.com/slacgismo/solar-data-tools/tags).\n\n## Authors\n\n* **Bennet Meyers** - *Initial work and Main research work* - [Bennet Meyers GitHub](https://github.com/bmeyers)\n\nSee also the list of [contributors](https://github.com/bmeyers/solar-data-tools/contributors) who participated in this project.\n\n## License\n\nThis project is licensed under the BSD 2-Clause License - see the [LICENSE](LICENSE) file for details\n'",",https://zenodo.org/badge/latestdoi/171066536","2019/02/17, 00:28:25",1711,BSD-2-Clause,269,1080,"2023/09/26, 19:03:30",7,86,98,19,29,5,0.9,0.45444685466377444,"2023/09/27, 00:04:04",v1.0.1,0,9,false,,false,true,"MichaelHopwood/ForwardForwardOneclass,slaclab/neural-representation-sqw,slacgismo/sg2t,ChrisKre/Photovoltaik_GAN,slacgismo/pv-apache-beam,slacgismo/gismo-cloud-deploy,slacgismo/pv-system-profiler,DuraMAT/pvpro,slacgismo/solar-data-pipeline,slacgismo/StatisticalClearSky,tadatoshi/StatisticalClearSky",,https://github.com/slacgismo,https://gismo.slac.stanford.edu/,"SLAC National Accelerator Laboratory, Menlo Park, CA 94025",,,https://avatars.githubusercontent.com/u/19895500?v=4,,, SolarPV-DER-simulation-utility,Allows user to run dynamics simulations for solar photovoltaic distributed energy resource connected to a stiff voltage source or to an external program.,tdcosim,https://github.com/tdcosim/SolarPV-DER-simulation-tool.git,github,,Photovoltaics and Solar Energy,"2023/09/29, 01:15:58",31,2,6,true,Python,TDcoSim Team,tdcosim,"Python,Dockerfile",,"b'**Status:** Expect regular updates and bug fixes.\n\n# Tool for simulating dynamics of PV-DER\n[![Build Status](https://travis-ci.org/sibyjackgrove/SolarPV-DER-simulation-utility.svg?branch=master)](https://travis-ci.org/sibyjackgrove/SolarPV-DER-simulation-utility)\n![PyPI - 
Downloads](https://img.shields.io/pypi/dm/pvder?label=PyPI%20Downloads)\n[![CodeFactor](https://www.codefactor.io/repository/github/tdcosim/solarpv-der-simulation-tool/badge)](https://www.codefactor.io/repository/github/tdcosim/solarpv-der-simulation-utility)\n\nSolar photovoltaic distributed energy resources (PV-DER) are power electronic inverter based generation (IBG) connected to the electric power distribution system (e.g. rooftop solar PV systems). This tool can be used to simulate the dynamics of a single DER connected to a stiff voltage source as shown in the following schematic:\n\n![schematic of PV-DER](PVDER_schematic.png)\n\n## Basics\nThe dynamics of the DER are modelled using dynamic phasors. A detailed description of the concepts behind this tool can be found in the IEEE publication [Dynamic Modeling of Solar PV Systems for Distribution System Stability Analysis](https://www.researchgate.net/publication/333985171_Dynamic_Modeling_of_Solar_PV_Systems_for_Distribution_System_Stability_Analysis), and a detailed list of equations can be found in the [Model specification document.](docs/PV_DER_model_specification_rev3.docx)\n\n### Features\nThe following features are available currently:\n1. Single-phase, three-phase balanced, and three-phase unbalanced (phase voltages may be unbalanced) DER models.\n2. Run simulation in stand-alone mode with the internal (stiff) grid voltage source model.\n3. Run simulation in loop mode where the grid voltage is supplied at every time step by a third-party program.\n4. Customize all aspects of the model through a [JSON](config_der.json) file which provides access to parameters in all the model components. \n5. Visualize or retrieve simulation results for voltages, current, active, and reactive power.\n6. Introduce solar insolation events (in all modes), grid voltage, and frequency change events (in stand-alone mode).\n7. Retrieve and modify model parameters from a third-party program.\n8. The following smart inverter features are available: Low/High voltage ride through (LVRT/HVRT), Low frequency ride through (LFRT), and Volt-VAR control logic.\n\n## Links\n* Source code repository: https://github.com/sibyjackgrove/SolarPV-DER-simulation-tool\n* API Documentation: https://solarpv-der-simulation-utility.readthedocs.io/en/latest/\n* Additional documentation: [Description of attributes and methods](docs/PVDER_flags_variables_methods.md)\n\n## Installation\n\nDependencies:\n\n- SciPy >= 1.2.1\n- Numpy >= 1.16.2\n- Matplotlib >= 3.0.3\n\nInstall latest release:\n```\npip install pvder\n```\n\nInstall from source:\n```\ngit clone https://github.com/tdcosim/SolarPV-DER-simulation-tool.git\ncd SolarPV-DER-simulation-tool\npip install -e .\n```\n\n## Use cases\nThe following projects use the Solar PV-DER simulation tool:\n1. [Argonne Transmission and Distribution systems Co-Simulation tool (TDcoSim)](https://github.com/tdcosim/TDcoSim)\n2. [OpenAI Gym Distributed Energy Resource Environment (Gym-DER)](https://github.com/sibyjackgrove/gym-SolarPVDER-environment)\n\n## Using the tool\nThis tool can be imported as a normal python module:\n\n```python\nimport pvder\n```\n\n### Using the stand-alone single-phase DER model with 10 kW power rating\nThe following steps are required. Additional documentation on attributes and methods is available [here](docs/PVDER_flags_variables_methods.md). A consolidated script combining all the steps appears after the list.
\n1. First import the following classes:\n```\nfrom pvder.DER_components_single_phase import SolarPV_DER_SinglePhase\nfrom pvder.grid_components import Grid\nfrom pvder.dynamic_simulation import DynamicSimulation\nfrom pvder.simulation_events import SimulationEvents\nfrom pvder.simulation_utilities import SimulationResults\n```\n2. Create a **_SimulationEvents_** object: This object is used to add or remove disturbance events that occur during the simulation.\n```\nevents = SimulationEvents()\n```\n3. Create a **Grid** object: This object describes the steady state model for the grid voltage source. It needs to be supplied with a **_SimulationEvents_** object.\n```\ngrid = Grid(events=events)\n```\n4. Create a **SolarPV_DER_SinglePhase** or **SolarPV_DER_ThreePhase** object: This object describes the dynamic DER model. It needs a **_SimulationEvents_** object and a path name for the JSON file containing the DER configuration parameters (it also needs a **Grid** object in stand-alone mode). Additionally, either the power rating of the DER or the id for the parameter dictionary should be provided.\n```\nPV_DER = SolarPV_DER_SinglePhase(events=events,configFile=r\'config_der.json\',gridModel=grid,derId= \'10\',standAlone = True)\n```\n5. Create a **DynamicSimulation** object: This object runs the simulation and stores the solution. It takes **_SimulationEvents_**, **Grid**, and **SolarPV_DER_SinglePhase** objects as arguments.\n```\nsim = DynamicSimulation(grid_model=grid,PV_model=PV_DER,events = events)\n```\n6. Create a **SimulationResults** object: This object is used to visualize the simulation results.\n```\nresults = SimulationResults(simulation = sim)\n```\n7. Add an event (e.g. a solar insolation change at 10.0 s):\n```\nevents.add_solar_event(10,90)\n```\n8. Specify simulation flags (e.g. set the DEBUG_SIMULATION and DEBUG_POWER flags to True to observe the power at each time step):\n```\nsim.DEBUG_SIMULATION = False\nsim.DEBUG_POWER = False\n```\n9. Specify the simulation stop time (e.g. 20.0 s):\n```\nsim.tStop = 20.0\n```\n10. Run the simulation:\n```\nsim.run_simulation()\n```\n11. Visualize the results (e.g. the power output at the PCC-LV side):\n```\nresults.PER_UNIT = False\nresults.plot_DER_simulation(plot_type=\'active_power_Ppv_Pac_PCC\')\n```
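Putting the steps together, a complete stand-alone run looks like the following script. Every class, argument, and call is taken directly from the snippets above; only the combination into one file is new.

```python
# Consolidated version of steps 1-11: a 10 kW single-phase PV-DER connected
# to the internal stiff grid model, with a solar insolation event at t = 10 s.
from pvder.DER_components_single_phase import SolarPV_DER_SinglePhase
from pvder.grid_components import Grid
from pvder.dynamic_simulation import DynamicSimulation
from pvder.simulation_events import SimulationEvents
from pvder.simulation_utilities import SimulationResults

events = SimulationEvents()                  # disturbance events container
grid = Grid(events=events)                   # steady-state grid voltage source
PV_DER = SolarPV_DER_SinglePhase(events=events, configFile=r'config_der.json',
                                 gridModel=grid, derId='10', standAlone=True)
sim = DynamicSimulation(grid_model=grid, PV_model=PV_DER, events=events)
results = SimulationResults(simulation=sim)

events.add_solar_event(10, 90)               # insolation change at 10.0 s
sim.tStop = 20.0                             # simulate 20 seconds
sim.run_simulation()

results.PER_UNIT = False
results.plot_DER_simulation(plot_type='active_power_Ppv_Pac_PCC')
```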
\n\n### Examples\nTry out Jupyter notebooks with usage examples in Google Colab:\n\nBasic usage:\n[![Basic usage](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/sibyjackgrove/SolarPV-DER-simulation-tool/blob/master/examples/PV-DER_usage_example.ipynb)\n\nRunning simulation in loop mode:\n[![Running in loop mode](https://colab.research.google.com/assets/colab-badge.svg)](https://github.com/sibyjackgrove/SolarPV-DER-simulation-tool/blob/master/examples/PV-DER_usage_example_loop_mode.ipynb)\n\nUpdating model parameters:\n[![Updating model parameters](https://colab.research.google.com/assets/colab-badge.svg)](https://github.com/sibyjackgrove/SolarPV-DER-simulation-tool/blob/master/examples/PV-DER_parameter_update_example.ipynb)\n\nVoltage anomaly, ride through, and momentary cessation:\n[![Voltage anomaly](https://colab.research.google.com/assets/colab-badge.svg)](https://github.com/sibyjackgrove/SolarPV-DER-simulation-tool/blob/master/examples/PV-DER_usage_example_LVRT_momentary_cessation_with_recovery.ipynb)\n\nFrequency anomaly, ride through, and trip:\n[![Frequency anomaly](https://colab.research.google.com/assets/colab-badge.svg)](https://github.com/sibyjackgrove/SolarPV-DER-simulation-tool/blob/master/examples/PV-DER_usage_example_LFRT_with_trip.ipynb)\n\n## Module details\nA schematic of the relationship between different classes in the module is shown in the figure below:\n![schematic of software architecture](docs/software_architecture.png)\n\n## Issues\nPlease feel free to raise an issue for bugs or feature requests.\n\n## Who is responsible?\n\n**Core developer:**\n- Siby Jose Plathottam splathottam@anl.gov\n\n**Contributor:**\n\n- Karthikeyan Balasubramaniam kbalasubramaniam@anl.gov\n\n## Acknowledgement\n\nThis project was supported by Kemal Celik, [U.S. 
DOE Office of Electricity, Solar Energy Technology Office](https://www.energy.gov/eere/solar/solar-energy-technologies-office) through the [SuNLaMP](https://www.energy.gov/eere/solar/sunshot-national-laboratory-multiyear-partnership-sunlamp) program.\n\nThe authors would like to acknowledge [Shrirang Abhyankar](https://github.com/abhyshr) and Puspal Hazra for their contributions.\n\n## Citation\nIf you use this code please cite it as:\n```\n@misc{pvder,\n title = {{SolarPV-DER-simulation-tool}: A simulation tool for solar photovoltaic distributed energy resources},\n author = ""{Siby Jose Plathottam,Karthikeyan Balasubramaniam}"",\n howpublished = {\\url{https://github.com/sibyjackgrove/SolarPV-DER-simulation-tool}},\n url = ""https://github.com/sibyjackgrove/SolarPV-DER-simulation-tool"",\n year = 2019,\n note = ""[Online; accessed 19-March-2019]""\n}\n```\n### Copyright and License\nCopyright \xc2\xa9 2019, UChicago Argonne, LLC\n\nThe Photovoltaic Distributed Energy Resource (PV-DER) Simulation tool is distributed under the terms of the [BSD-3 OSS License](LICENSE).\n'",,"2019/03/19, 21:59:40",1680,CUSTOM,9,225,"2023/09/29, 01:15:59",2,21,35,3,26,0,0.0,0.013245033112582738,,,0,2,false,,false,false,"NREL/PyDSS,tdcosim/TDcoSim",,https://github.com/tdcosim,,Chicago,,,https://avatars.githubusercontent.com/u/52003368?v=4,,, bifacial_radiance,Toolkit for working with RADIANCE for the ray-trace modeling of Bifacial Photovoltaics.,NREL,https://github.com/NREL/bifacial_radiance.git,github,"radiance,bifacial,photovoltaics,renewable-energy,renewables,gui",Photovoltaics and Solar Energy,"2023/07/18, 15:01:55",70,0,11,true,Python,National Renewable Energy Laboratory,NREL,"Python,C++,TeX,Dockerfile,C,Shell,Makefile,Awk",https://bifacial-radiance.readthedocs.io,"b""![logo](docs/images_wiki/bifacial_radiance.png)\n\n# bifacial_radiance\nMain branch: [![Build Status](https://github.com/nrel/bifacial_radiance/actions/workflows/pytest.yaml/badge.svg?branch=main)](https://github.com/nrel/bifacial_radiance/actions)\n[![Coverage Status](https://coveralls.io/repos/github/NREL/bifacial_radiance/badge.svg?branch=main)](https://coveralls.io/github/NREL/bifacial_radiance?branch=main)\n[![Documentation Status](https://readthedocs.org/projects/bifacial-radiance/badge/?version=stable)](https://bifacial-radiance.readthedocs.io/en/latest/?badge=stable)\n[![DOI](https://zenodo.org/badge/DOI/10.5281/zenodo.3860350.svg)](https://doi.org/10.5281/zenodo.3860350)\n[![status](https://joss.theoj.org/papers/b018890e2ab7ddf723d37b17e308e273/status.svg)](https://joss.theoj.org/papers/b018890e2ab7ddf723d37b17e308e273)\n\nDevelopment branch: [![Build Status](https://github.com/nrel/bifacial_radiance/actions/workflows/pytest.yaml/badge.svg?branch=development)](https://github.com/nrel/bifacial_radiance/actions)\n[![Coverage Status](https://coveralls.io/repos/github/NREL/bifacial_radiance/badge.svg?branch=development)](https://coveralls.io/github/NREL/bifacial_radiance?branch=development)\n[![Documentation Status](https://readthedocs.org/projects/bifacial-radiance/badge/?version=latest)](https://bifacial-radiance.readthedocs.io/en/latest/?badge=latest)\n\n## Introduction\n\nbifacial_radiance contains a series of Python wrapper functions to make working with \nRADIANCE easier, particularly for the PV researcher interested in bifacial PV \nperformance. 
For more information, check out our [documentation](https://bifacial-radiance.readthedocs.io), \n[Tutorials in the form of Jupyter Notebooks](docs/tutorials/), or refer to our [Wiki](https://github.com/NREL/bifacial_radiance/wiki)\nand [Issues](https://github.com/NREL/bifacial_radiance/issues) pages. \n\n## Installation\n\nhttps://youtu.be/4A9GocfHKyM\nThis video shows how to install the bifacial_radiance software and all associated software needed; more info is on the Wiki. Instructions are also shown below.\n\nFor detailed instructions on how to install bifacial_radiance, you can also refer to the [installation guide](https://bifacial-radiance.readthedocs.io/en/stable/installation.html).\n\n## GUI! \n\nA GUI has been added in version 3.0. The GUI reads/writes all input parameters necessary to run a simulation, and runs the specified simulation by calling the correct functions with the specified parameters. So there is no need to write a script! But you still need to install bifacial_radiance following the installation instructions above. \n\nTo run the GUI, import bifacial_radiance and run `bifacial_radiance.gui()`\n\n![GUI](docs/images_wiki/bifacial_radiance_GUI.png)\n\n\n## Usage\n\nWe have a tutorial video showing how the program is structured and how to use the Jupyter tutorials and the GUI. You can watch it here: [Tutorial Webinar](https://www.youtube.com/watch?v=1X9L-R-RVGA), with the [slides available here](https://www.nrel.gov/docs/fy20osti/75218.pdf).\n\nCheck out the [Jupyter Tutorial Notebooks](docs/tutorials/) to see detailed examples of the capabilities of bifacial_radiance.\nThe [Intro examples](https://bifacial-radiance.readthedocs.io/en/stable/introexamples.html) and the [readthedocs documentation](https://bifacial-radiance.readthedocs.io) also provide a good starting point.\n\n## Contributing\n\nWe need your help to make bifacial_radiance a great tool! Please see the [Contributing page](https://bifacial-radiance.readthedocs.io/en/stable/contributing.html) for more on how you can contribute. The long-term success of bifacial_radiance requires substantial community support.\n\n## License\n\nBifacial_radiance open source code is copyrighted by the Alliance for Sustainable Energy and licensed with BSD-3-Clause terms, found [here](https://github.com/NREL/bifacial_radiance/blob/master/LICENSE).\n\n## Getting Support\n\nIf you suspect that you may have discovered a bug or if you'd like to\nchange something about bifacial_radiance, then please make an issue on our\n[GitHub issues page](https://github.com/NREL/bifacial_radiance/issues).\n\nbifacial_radiance questions can be asked on\n[Stack Overflow](http://stackoverflow.com) and tagged with\nthe [bifacial_radiance](http://stackoverflow.com/questions/tagged/bifacial_radiance) tag.\n\nThe [bifacial-radiance google group](https://groups.google.com/forum/#!forum/bifacial_radiance) \nhas just started, and will be used for discussing various topics of interest to the bifacial-radiance\ncommunity. We also make new version announcements on the google group.\n\n## Citing\n\nIf you use bifacial_radiance in a published work, please cite:\n\n Ayala Pelaez and Deline, (2020). bifacial_radiance: a python package for modeling bifacial solar photovoltaic systems. Journal of Open Source Software, 5(50), 1865, https://doi.org/10.21105/joss.01865\n\n\nPlease also cite the DOI corresponding to the specific version of bifacial_radiance that you used. 
bifacial_radiance DOIs are listed at [Zenodo.org](https://zenodo.org/search?page=1&size=20&q=conceptrecid:3860349&all_versions&sort=-version)\n\nAdditional bifacial_radiance publications with validation of the software include:\n* Deline, Chris, and Ayala, Silvana. Bifacial_Radiance. Computer Software. https://github.com/NREL/bifacial_radiance. USDOE Office of Energy Efficiency and Renewable Energy (EERE), Solar Energy Technologies Office (EE-4S). 17 Dec. 2017. Web. doi:10.11578/dc.20180530.16. https://www.osti.gov/doecode/biblio/6869\n* Ayala Pelaez S, Deline C, Greenberg P, Stein JS, Kostuk RK. Model and validation of single-axis tracking with bifacial PV. IEEE J Photovoltaics. 2019;9(3):715-721. https://ieeexplore.ieee.org/document/8644027 and https://www.nrel.gov/docs/fy19osti/72039.pdf (pre-print, conference version)\n* Ayala Pelaez, Deline C, MacAlpine M, Marion B, Stein J, Kostuk K. Comparison of Bifacial Solar Irradiance Model Predictions with Field Validation. IEEE J Photovoltaics. 2019; 9(1):82-87. https://ieeexplore.ieee.org/document/8534404\n\nOr check our [Github Wiki](https://github.com/NREL/bifacial_radiance/wiki) for a complete list of publications.\n""",",https://doi.org/10.5281/zenodo.3860350,https://doi.org/10.21105/joss.01865\n\n\nPlease,https://zenodo.org/search?page=1&size=20&q=conceptrecid:3860349&all_versions&sort=-version","2017/12/13, 17:56:18",2142,BSD-3-Clause,16,1533,"2023/09/26, 07:09:45",96,139,381,30,29,3,0.1,0.39451114922813035,"2023/03/11, 14:04:01",0.4.2,0,10,false,,false,false,,,https://github.com/NREL,http://www.nrel.gov,"Golden, CO",,,https://avatars.githubusercontent.com/u/1906800?v=4,,, autoXRD,"A Python package for automatic XRD pattern classification of thin-films, tweaked for small and class-imbalanced datasets.",PV-Lab,https://github.com/PV-Lab/autoXRD.git,github,,Photovoltaics and Solar Energy,"2020/03/04, 18:03:26",46,0,7,false,Python,Accelerated Materials Laboratory for Sustainability,PV-Lab,Python,,"b'\nautoXRD\n===========\n## Description\n\n\nautoXRD is a python package for automatic XRD pattern classification of thin-films, tweaked for small and class-imbalanced datasets. The main application of the package is high-throughput screening of novel materials.\n\nautoXRD performs physics-informed data augmentation to solve the small data problem, implements a state-of-the-art a-CNN architecture and allows interpretation using Average Class Activation Maps (CAMs), according to the following publications:\n\n""**Oviedo, F., Ren, Z., Sun, S., Settens, C., Liu, Z., Hartono, N. T. P., ... & Buonassisi, T. (2019). Fast and interpretable classification of small X-ray diffraction datasets using data augmentation and deep neural networks. npj Computational Materials, 5(1), 60."" Link: [https://doi.org/10.1038/s41524-019-0196-x](https://doi.org/10.1038/s41524-019-0196-x)**\n\n\n""**Fast classification of small X-ray diffraction datasets using data augmentation and deep neural networks, (2019), Felipe Oviedo, Zekun Ren, et al. Link: [arXiv:1811.08425v2](https://arxiv.org/abs/1811.08425v2)**\n\nAccepted to the NeurIPS 2018 ML for Molecules and Materials Workshop. Final version published in npj Computational Materials, 2019.\n\n\n## Installation\n\nTo install, just clone the following repository:\n\n`$ git clone https://github.com/PV-Lab/autoXRD.git`\n\n## Usage\n\nJust run `space_group_a_CNN.py` with the given datasets. Note that this performs classification of patterns into 7 space groups. 
Dimensionality data is not included in the code; please contact the authors if interested.\nThe package contains the following modules and scripts:\n\n| Module | Description |\n| ------------- | ------------------------------ |\n| `space_group_a_CNN.py` | Script for XRD space-group classification with a-CNN |\n| `autoXRD` | Module dedicated to XRD pattern preprocessing and data augmentation |\n| `autoXRD_vis` | Visualizer module for class activation maps (CAMs) |\n| `Demo / XRD_dimensionality_demo.ipynb` | Notebook containing a demo for physics-informed data augmentation. This is a version with a modified CNN and no CAM to speed up the computation |\n\n\n## Authors\nFelipe Oviedo and ""Danny"" Zekun Ren\n\n\n|  |  |\n| ------------- | ------------------------------ |\n| **AUTHORS** | Felipe Oviedo and ""Danny"" Ren Zekun | \n| **VERSION** | 1.0 / May, 2019 | \n| **EMAIL OF REPO OWNER** | foviedo@mit.edu | \n\n## Attribution\n\nThis work is under an Apache 2.0 License and the data policies of Nature Partner Journal Computational Materials. Please acknowledge use of this work with the appropriate citation.\n\n## Citation\n\n    @article{oviedo2019fast, \n    title={Fast and interpretable classification of small X-ray diffraction datasets using data augmentation and deep neural networks},\n    author={Oviedo, Felipe and Ren, Zekun and Sun, Shijing and Settens, Charles and Liu, Zhe and Hartono, Noor Titan Putri and Ramasamy, Savitha and DeCost, Brian L and Tian, Siyu IP and Romano, Giuseppe and others},\n    journal={npj Computational Materials},\n    volume={5},\n    number={1},\n    pages={60},\n    year={2019},\n    publisher={Nature Publishing Group}}\n'",",https://doi.org/10.1038/s41524-019-0196-x,https://doi.org/10.1038/s41524-019-0196-x,https://arxiv.org/abs/1811.08425v2","2019/04/23, 19:15:53",1646,Apache-2.0,0,32,"2023/09/26, 07:09:45",1,0,0,0,29,0,0,0.09999999999999998,"2019/05/24, 16:47:16",1.0,0,2,false,,false,false,,,https://github.com/PV-Lab,pv.mit.edu,United States of America,,,https://avatars.githubusercontent.com/u/13911947?v=4,,, BayesProcess,A Python package for Physics informed Bayesian network inference using neural network surrogate model for matching process / variable / performance in solar cells.,PV-Lab,https://github.com/PV-Lab/BayesProcess.git,github,,Photovoltaics and Solar Energy,"2021/08/18, 14:30:54",28,0,1,false,Jupyter Notebook,Accelerated Materials Laboratory for Sustainability,PV-Lab,"Jupyter Notebook,Python",,"b'## Description\n\nBayesProcess is a python package for Physics informed Bayesian network inference using neural network surrogate model for matching process / variable / performance in solar cells.\n\n## Installation\n\nTo install, clone the repository and install the required packages:\n\n`$ git clone https://github.com/PV-Lab/BayesProcess.git`\n\n`$ pip install -r requirements.txt`\n\n## Usage\n\nRun `surrogate_model.py` with the given datasets to create the neural network surrogate for the numerical PDE solver.\nRun `Bayes.py` with the saved surrogate model. This performs Bayesian network inference to map the process variable (Temperature) to material descriptors. 
\nThe package contains the following modules and scripts:\n\n| Module | Description |\n| ------------- | ------------------------------ |\n| `JV_surrogate.py` | Script for training neural network JV surrogate model |\n| `Bayes.py` | Script for Bayesian inference using MCMC |\n| `requirements.txt` | required packages |\n\n\n\n## Authors\n""Danny"" Zekun Ren and Felipe Oviedo\n'",,"2019/07/09, 04:43:23",1569,MIT,0,17,"2023/09/26, 07:09:45",2,0,0,0,29,0,0,0.23076923076923073,,,0,2,false,,false,false,,,https://github.com/PV-Lab,pv.mit.edu,United States of America,,,https://avatars.githubusercontent.com/u/13911947?v=4,,, solcore5,"A multi-scale, Python-based library for the modeling of solar cells and semiconductor materials.",qpv-research-group,https://github.com/qpv-research-group/solcore5.git,github,"photovoltaic,semiconductor,solar-cells,python,hacktoberfest",Photovoltaics and Solar Energy,"2023/09/01, 04:28:31",120,5,31,true,Python,Quantum Photovoltaics Research Group,qpv-research-group,"Python,Fortran,Meson,Shell",https://www.solcore.solar/,"b'[![image](https://mybinder.org/badge_logo.svg)](https://mybinder.org/v2/gh/qpv-research-group/solcore5/develop?urlpath=lab)\n\n[![All Contributors](https://img.shields.io/badge/all_contributors-26-orange.svg?style=flat-square)](#contributors-)\n\n[![image](https://zenodo.org/badge/DOI/10.5281/zenodo.1185316.svg)](https://doi.org/10.5281/zenodo.1185316)\n[![image](https://img.shields.io/badge/License-LGPLv3-blue.svg)](http://www.gnu.org/licenses/lgpl.html)\n[![Documentation Status](http://readthedocs.org/projects/solcore5/badge/?version=latest)](http://solcore5.readthedocs.io/en/latest/?badge=latest)\n![Solcore](https://github.com/qpv-research-group/solcore5/workflows/Solcore/badge.svg)\n[![codecov](https://codecov.io/gh/qpv-research-group/solcore5/branch/develop/graph/badge.svg)](https://codecov.io/gh/qpv-research-group/solcore5)\n[![Codacy Badge](https://api.codacy.com/project/badge/Grade/a1d2e6f702e64d878a67dcf85ce9b3b7)](https://app.codacy.com/gh/qpv-research-group/solcore5?utm_source=github.com&utm_medium=referral&utm_content=qpv-research-group/solcore5&utm_campaign=Badge_Grade_Settings)\n\n\nSolcore\n=======\n\n**Solcore** was born as a modular set of tools, written (almost) entirely in Python 3, to address some of the tasks we had to solve most often. With time, however, it has evolved into a complete semiconductor solver capable of modelling the optical and electrical properties of a wide range of solar cells, from quantum well devices to multi-junction solar cells.\n\nPlease visit [Solcore\\'s Documentation](http://docs.solcore.solar), the [Tutorial](docs/source/Examples/tutorial.rst) for a step-by-step example of how to use *Solcore* to model a solar cell, and also check the [Examples folder](examples) for more specific information and examples of usage.\n\n![](docs/source/Infographics.jpg)\n\n## Contributors \xe2\x9c\xa8\n\nThanks goes to these wonderful people ([emoji key](https://allcontributors.org/docs/en/emoji-key)):\n\n

- Diego Alonso Álvarez: 💻 🐛 📖 💡 🤔 🚇 🚧 👀 ⚠️ 🔣
- Phoebe Pearce: 💻 🐛 📖 💡 🤔 🚧 👀 ⚠️ 🔣
- Tom Wilson: 💻 🐛 📖 💡 🤔 👀 ⚠️ 🔣
- Ned Ekins-Daukes: 📖 💡 🤔 💵 👀 ⚠️
- MarkusFF: 💻 🐛 📖 💡 🤔 🎨 🔣
- Jeremy Cohen: 🚇
- Jonathan Adams: ⚠️ 👀
- Mohammad Hosein Ronaghi: ⚠️ 💻
- Federica Trevisan: 💡
- Artyko: 💻
- Emmanuel Carreira: 💡
- Nimish Verma: 💡
- Peter Tillmann: 💻 🐛 ⚠️
- jmllorens: 💻 🐛 ⚠️
- Luigi Giugliano: 💻 🚇 ⚠️
- michael_oz: 💻
- Hrishikesh Suresh: 💡
- Justin Cooksey: 💡
- Yura Osychenko: 🎨
- canns99: 🐛
- AndiPOz: 🐛
- Jai Agarwal: 💡
- jkrich: 💻 🐛
- Rushil17D070020: 🐛
- Eric Tervo: 🐛
- Eli Schwartz: 🚇
\n\n\n\n\n\n\nThis project follows the [all-contributors](https://github.com/all-contributors/all-contributors) specification. Contributions of any kind welcome!'",",https://doi.org/10.5281/zenodo.1185316","2017/10/29, 20:13:23",2186,CUSTOM,161,988,"2023/06/07, 04:06:21",37,157,222,55,140,0,0.4,0.5503778337531486,"2023/02/19, 06:37:16",v5.9.1,7,23,false,,true,true,"PentW0lf/sunglass,AlexNDRmac/sunglass,qpv-research-group/sunglass,aidanobeirne/OptiFit,qpv-research-group/rayflare",,https://github.com/qpv-research-group,https://www.qpvgroup.org,"UNSW (Sydney, Australia) and Imperial College London (UK) until 2017",,,https://avatars.githubusercontent.com/u/48552948?v=4,,, solax,Read energy usage data from the real-time API on Solax solar inverters.,squishykid,https://github.com/squishykid/solax.git,github,"solax,solar,photovoltaic,home-automation,iot,raspberry-pi",Photovoltaics and Solar Energy,"2023/07/16, 22:40:13",75,253,32,true,Python,,,"Python,Shell,HTML",,"b""# Solax\n\n[![Build Status](https://github.com/squishykid/solax/workflows/tests/badge.svg)](https://github.com/squishykid/solax/actions)\n[![PyPI - Downloads](https://img.shields.io/pypi/dm/solax.svg)](https://pypi.org/project/solax)\n\nRead energy usage data from the real-time API on Solax solar inverters.\n\n* Real time power, current and voltage\n* Grid power information\n* Battery level\n* Temperature and inverter health\n* Daily/Total energy summaries\n\n## Usage\n\n`pip install solax`\n\nThen from within your project:\n\n```\nimport solax\nimport asyncio\n\nasync def work():\n r = await solax.real_time_api('10.0.0.1')\n return await r.get_data()\n\nloop = asyncio.new_event_loop()\nasyncio.set_event_loop(loop)\ndata = loop.run_until_complete(work())\nprint(data)\n```\n\n## Confirmed Supported Inverters\n\nThese inverters have been tested and confirmed to be working. 
If your inverter is not listed below, this library may still work- please create an issue so we can add your inverter to the list \xf0\x9f\x98\x8a.\n\n* SK-TL5000E\n""",,"2019/04/09, 04:14:24",1660,MIT,8,60,"2023/10/07, 13:02:44",43,46,81,19,18,16,0.7,0.34375,"2022/09/11, 07:06:56",v0.3.0,0,14,false,,false,false,"klaernie/core,merca/core,Mic92/dream2nix-home-assistant,pillezu/homeassistantM2,Airler/home-assistant-core,Vip0r/varta_storage,k3a/ha_core,tomerh2001/core,tibber/core,alekslyse/core,mrazekv/solax-mqtt,MarcHagen/home-assistant-core,flake-it/flake-friend-evaluator,alphapum/core,StevenLooman/home-assistant,CyberFlameGO/core-4,navnit3366/core-dev,SemonCat/core,cisswolff/core,PolarFox/ha-core,JasonGao180/home-assistant,DeliciousHouse/core-canary,Hiosdra/core,VadimDor/VadimDor,vandenberghev/home-assistant,lima-limon-inc/OverlayStats,Redracer93/core,petegallagher/core,kundusurinder/HomeAuto,benbruneau/home-assistant-core,rongquantoh-intel/dependabot-demo,mouyuan123/home-assistant,RACELAND-Automacao-Lda/Homeland,RalphHightower/core,Atrejoe/home-assistant,Secure-Platforms-Lab-W-M/Helion-on-Home-Assistant,igewebs/home-assistant,downside-up-git/du-core-dev,sarvex/home-assistant-core,bjarnekrottje/ha-core,kaovilai/home-assistant-core,peterbacil/home-assistant-core,fustom/core,kevinterface/home-assistant,youdroid/core,Multipas84/core,MDevM20/core,jwalberg/ha-core,joushx/core,ChadMoran/hass-core,NiklasA95/home-assistant,Kipjr/home-assistant-core,jasonmadigan/core,samueltardieu/homeassistant-core,gabe565/core,SukramJ/core,kmplngj/home-assistant-core,jeyrb/home-assistant,MarioRamazzotti/Reisinger,aididhaiqal/core,naycha/home-assistant,octodemo/home-assistant-core,lorek123/home-assistant,charliejones1/core,elijahxb/core,Andre0512/home-assistant,marcelhoogantink/core,conallob/core,karman-de-lange/hass_broadlink_ac,majacQ/core-1,sipgate-io/home-assistant-core,oikarinen/core,aronpedersen/test,Nyaran/core,ryfont/core-1,BenoitAnastay/home-assistant-core,aladin2000/core,MarcJenningsUK/home-assistant,jellespijker/core,rohanmuz2/Home-AI,brenank/home-assistant-core,aptalca/wheels,alidblad/ha-clone,lmendezr/home-assistant-core,iamwillbar/core,bakoorahnin/core,piotr-kubiak/home-assistant,shnolshnol1/core,N-hamdy/home-assistant,mww012/core,Stagie/core,joshuaspence/home-assistant-core,janekbaraniewski/home-assistant,jamescurtin/home-assistant,jfroy/core,stagietek/core,Linean1/Project1,Asuse420/core,u240/core,constructorfleet/home-assistant-loader,eltariel/home-assistant-core,classicvalues/core-2,jeroen84/home-assistant,Nyvek/home-assistant,terrorizer1980/core-2,NisaarAgharia/home.AI,tktf50/core,RubenKelevra/home-assistant_core,rohankumardubey/core,brett-fitz/core,fopina/hass-core,samsunga3888/hass,ekmixon/homeassistant-core,seantrue/core,ronaldburns/home-assistant,yxkj2022/python-core,qyl2021/python-core,cabraliebre/core,cnheider/core,Brianspha/home-assistant,btharper/core,kifeo/core,maruel/core,imchipwood/home-assistant,yoki31/core,savonman/core,sshyran/home-assistant-core,huizebruin/core,theyapps/core,mfugate1/core,sjbelisle/core,mikeyhodl/core,DivanX10/OpenWRT-and-Home-Assistant,linuxserver/wheelie,jisakiel/home-assistant,fleXible/home-assistant,jenniferliddle/core,dewgenenny/core,amnnet/core,informaticacba/home-assistant-core,Grendel7/prometheus-solax-exporter,coltoncat/core,Djelibeybi/home-assistant-core,Swissman1/home-assistant,fredrike/home-assistant,JonanOribe/home-assistant,matholiveira91/home-assistant,austinmroczek/home-assistant,stewart123579/home-assistant-core,SiwatINC/home-assistant-tensorflow,pr
eetyrai11/IOT-HomeAssistance,nickovs/home-assistant,mcanaleta/home-assistant-core,vsevolodpohvalenko/home-assistant,PlumeSolution/core,Danielhiversen/home-assistant,ahayworth/home-assistant,kerstef/home-assistant-core,vasili8m/core,12DEP/hikg,gjohansson-ST/core,paulmonigatti/home-assistant-core,matkastner/core,Antall/core,lbschenkel/home-assistant,12DEP/gfgf,12DEP/hik,artchula/home-assistant,therealpedro/core,blastoise186/core,frankhildebrandt/home-assistant,zeehio/home-assistant,djwmarcx/core,CrossEyeORG/homeassistant,miltos04/core,cyr-ius/home-assistant,DKAutomater/home-assistant,angelnu/core,Leviosa-Shades/core,Whoerr/core,fpetillo/home-assistant,qdotme/HomeAssistantRepository,thindiyeh/core,OpenPeerPower/core,joncar/home-assistant,piotrs112/home-assistant,bwghughes/octoclient,Vaarlion/core,Odianosen25/home-assistant,anngel78/core,ties/home-assistant,brentmaxwell/home-assistant,ConnectionMaster/home-assistant,lozoli/myio-integration,corbanmailloux/home-assistant-core,NigelRook/home-assistant,glance-/home-assistant,dannysauer/core,Sureshkumartv/core,TheLastGimbus/hass-core,alandtse/home-assistant,Pyhass/core,iracigt/home-assistant,mstovenour/home-assistant-core,jupe/home-assistant,ajk12345-code/home-assistant,derekxxx/core,mshiznitzh/core,DevSecNinja/core,ms32035/home-assistant,Watemlifts/home-assistant,escoand/home-assistant,keithkyle1989/core,mikeodr/core,sdwilsh/home-assistant,Verbalinsurection/Home-Assistant-core,diederikdevries/ha,Platinumwrist/core,Violet26/home-assistant,Omrisnyk/home-assistant,Vman45/home-assistant,PakapongDev/home-assistant,brianjcarroll8/Home-Assistant-Core,gzsjw/HomeAssistantStudy,peternijssen/home-assistant,ZachT1711/home-assistant,hacf-fr/home-assistant-core,tribut/home-assistant,timmo001/core,THATDONFC/home-assistant,DarkFox/home-assistant,sampou/homeassistant,artefolio/HomeAssistant,gpalsingh30/home-assistant-core,briangomez2016/core,84KaliPleXon3/home-assistant-core,gagebenne/home-assistant,mahadevanmani/home-assistant,switchpanel/ha-core,yenerhelvacioglu/homeasistant,Abigiris/home-assistant,bhchew/hass,MumiLila/gittest4,nordicenergy/home-assistant,gentoo-mirror/HomeAssistantRepository,onkelbeh/HomeAssistantRepository,Bulbutta/home-away,peterpanfy/homeassistant-sm,Watemlifts/Platform-Integration,Watemlifts/subsystems-,Watemlifts/Alfa,fabiandevia/home,home-assistant/core",,,,,,,,,, bifacialvf,Bifacial PV View Factor model for system performance calculation.,NREL,https://github.com/NREL/bifacialvf.git,github,,Photovoltaics and Solar Energy,"2022/09/28, 21:40:30",25,1,6,false,Python,National Renewable Energy Laboratory,NREL,Python,https://bifacialvf.readthedocs.io,"b""![logo](docs/images_wiki/bifacialVF.png)\n\n# Bifacial PV View Factor model\n[![License](https://img.shields.io/badge/License-BSD%203--Clause-blue.svg)](https://opensource.org/licenses/BSD-3-Clause)\n[![Build Status](https://travis-ci.org/NREL/bifacialvf.svg?branch=main)](https://travis-ci.org/NREL/bifacialvf)\n[![DOI](https://zenodo.org/badge/114160149.svg)](https://zenodo.org/badge/latestdoi/114160149)\n\nkeywords: python, configuration factor model, electrical model mismatch for bifacial modules.\n\n## Introduction\n\nbifacialvf is a self-contained view factor (or configuration factor) model which\nreplicates a 5-row PV system of infinite extent perpendicular to the module\nrows. The function returns the irradiance profile along the middle (interior)\nrow by default, but user interface options include `'first'`, `'interior'`,\n`'last'`, and `'single'`. 
Single-axis tracking is supported, and hourly output\nfiles based on TMY inputs are saved. Spatial nonuniformity is reported, with\nmultiple rear-facing irradiances collected on the back of each module row.\n\nA bilinear interpolation add-on to bifacialvf (described below) pre-generates IV arrays and bifacial coefficients, and examines the energy production with back-side irradiance mismatch for either a portrait or landscape module. \nIncluded are IV curves and bifacial info for a Yingli (standard) module. \n\n## Pre-requisites\nThis software is written for Python 2 or 3. NREL recommends [Anaconda Python](https://www.anaconda.com/download/).\n\n## Install using pip\n[bifacialvf](https://pypi.org/project/bifacialvf/) is at the Python Package Index (PyPI). Use pip to install the latest release in your conda environment or virtualenv:\n\n (myenv)$ pip install bifacialvf\n\n### Install development mode from GitHub\nFor those interested in contributing to bifacialvf:\n\n1. Clone the bifacialvf repository: `$ git clone https://github.com/NREL/bifacialvf.git bifacialvf-main`\n2. Navigate to the repository directory where `setup.py` is located: `$ cd bifacialvf-main`\n3. Install via pip in development mode: `$ pip install -e .`\n\n## Usage\n\nFor usage examples, see the Jupyter notebooks in `docs/`\n\n## License\nbifacialvf open source code is copyrighted by the Alliance for Sustainable Energy and licensed with BSD-3-Clause terms, found here.\n\n## Citing bifacialVF\n\nIf you use bifacialvf in a published work, please cite:\n\n Marion, B., MacAlpine, S., Deline, C., Asgharzadeh, A., Toor, F., Riley, D., \xe2\x80\xa6 Hansen, C. (2017). A Practical Irradiance Model for Bifacial PV Modules: Preprint. In 44th IEEE Photovoltaic Specialists Conference. Washington, DC. https://www.nrel.gov/docs/fy17osti/67847.pdf. NREL/CP-5J00-67847\n\nPlease also cite the DOI corresponding to the specific version of bifacialVF that you used. bifacialvf DOIs are listed at [Zenodo.org](https://zenodo.org/search?page=1&size=20&q=conceptrecid:6369162&all_versions&sort=-version). \n\n Silvana Ovaitt, Chris Deline, Mark Mikofski, & Nick DiOrio. (2022). NREL/bifacialvf: v0.1.8 Release (v0.1.8). Zenodo. https://doi.org/10.5281/zenodo.6369162\n\nBilinear Interpolation based on the publications:\n\n De Soto, W., Klein, S. A., & Beckman, W. A. (2006). Improvement and validation of a model for photovoltaic array performance. Solar Energy, 80(1), 78\xe2\x80\x9388. https://doi.org/10.1016/j.solener.2005.06.010\n\n Marion, B., Rummel, S., & Anderberg, A. (2004). Current--voltage curve translation by bilinear interpolation. 
Progress in Photovoltaics: Research and Applications, 12(8), 593\xe2\x80\x93607.\n\nbifacialvf: Original code by Bill Marion, Python translation by Silvana Ayala, Updates by Chris Deline & team\nOriginal bilinear interpolation code by Sara MacAlpine, Python translation & Updates by Silvana Ayala\nPVMismatch add-on: PVmismatch code from [PVMismatch](https://github.com/SUNPower/PVMismatch), by Sunpower\n\n\n""",",https://zenodo.org/badge/latestdoi/114160149,https://zenodo.org/search?page=1&size=20&q=conceptrecid:6369162&all_versions&sort=-version,https://doi.org/10.5281/zenodo.6369162\n\nBilinear,https://doi.org/10.1016/j.solener.2005.06.010\n\n","2017/12/13, 19:23:14",2142,CUSTOM,0,198,"2022/03/18, 18:54:05",14,25,40,0,586,1,0.0,0.5555555555555556,"2022/09/28, 21:45:08",0.1.8.1,0,5,false,,false,false,narest-qa/repo40,,https://github.com/NREL,http://www.nrel.gov,"Golden, CO",,,https://avatars.githubusercontent.com/u/1906800?v=4,,, solaR,Allows for reproducible research both for photovoltaics systems performance and solar radiation.,oscarperpinan,https://github.com/oscarperpinan/solar.git,github,,Photovoltaics and Solar Energy,"2021/10/18, 22:50:16",33,0,1,false,R,,,R,http://oscarperpinan.github.io/solar/,"b""solaR\n=====\n[![CRAN](https://www.r-pkg.org/badges/version/solaR)](https://www.r-pkg.org/pkg/solaR)\n[![CRAN RStudio mirror downloads](https://cranlogs.r-pkg.org/badges/solaR)](https://www.r-pkg.org/pkg/solaR)\n[![Build Status](https://github.com/oscarperpinan/solar/workflows/R-CMD-check/badge.svg)](https://github.com/oscarperpinan/solar/actions)\n\n[![DOI](https://upload.wikimedia.org/wikipedia/commons/1/11/DOI_logo.svg)](https://doi.org/10.18637/jss.v050.i09)\n\n\nThe `solaR` package allows for reproducible research both for\nphotovoltaics (PV) systems performance and solar radiation. It\nincludes a set of classes, methods and functions to calculate the sun\ngeometry and the solar radiation incident on a photovoltaic generator\nand to simulate the performance of several applications of the\nphotovoltaic energy. This package performs the whole calculation\nprocedure from both daily and intradaily global horizontal irradiation\nto the final productivity of grid-connected PV systems and water\npumping PV systems.\n\nIt is designed using a set of `S4` classes whose core is a group of\nslots with multivariate time series. The classes share a variety of\nmethods to access the information and several visualization\nmethods. In addition, the package provides a tool for the visual\nstatistical analysis of the performance of a large PV plant composed\nof several systems.\n\nAlthough `solaR` is primarily designed for time series associated to a\nlocation defined by its latitude/longitude values and the temperature\nand irradiation conditions, it can be easily combined with spatial\npackages for space-time analysis.\n\n# Software #\n\nThe stable version of solaR is hosted at\n[CRAN](https://cran.r-project.org/package=solaR). 
The development\nversion is available at\n[GitHub](https://github.com/oscarperpinan/solar/).\n\nInstall the stable version with:\n\n install.packages('solaR')\n\nYou can install the development version with the [`remotes`](https://github.com/r-lib/remotes) package:\n\n\tremotes::install_github('oscarperpinan/solar')\n\nor with [`devtools`](https://github.com/r-lib/devtools):\n\n devtools::install_github('oscarperpinan/solar')\n\n# Documentation #\n\nThe best place to learn how to use the package is the companion paper\npublished by the Journal of Statistical Software:\n\nPerpi\xc3\xb1\xc3\xa1n Lamigueiro, O. (2012). solaR: Solar Radiation and\nPhotovoltaic Systems with R. Journal of Statistical Software, 50(9),\n1\xe2\x80\x9332. https://doi.org/10.18637/jss.v050.i09\n\n[This book](https://oscarperpinan.github.io/esf/) (in\nSpanish) contains detailed information about solar radiation and\nphotovoltaic systems. In\n[my articles](https://oscarperpinan.github.io/) I frequently use\n`solaR`. \n\n# Citation #\n\nIf you use `solaR`, please cite it in any publication reporting\nresults obtained with this software:\n\nPerpi\xc3\xb1\xc3\xa1n Lamigueiro, O. (2012). solaR: Solar Radiation and\nPhotovoltaic Systems with R. Journal of Statistical Software, 50(9),\n1\xe2\x80\x9332. https://doi.org/10.18637/jss.v050.i09\n\nA BibTeX entry for LaTeX users is:\n\n @Article{,\n title = {{solaR}: Solar Radiation and Photovoltaic Systems with {R}},\n author = {Oscar Perpi{\\~n}{\\'a}n},\n journal = {Journal of Statistical Software},\n year = {2012},\n volume = {50},\n number = {9},\n pages = {1--32},\n\t\tdoi = {10.18637/jss.v050.i09}\n }\n\n""",",https://doi.org/10.18637/jss.v050.i09,https://doi.org/10.18637/jss.v050.i09\n\n,https://doi.org/10.18637/jss.v050.i09\n\nA","2013/07/28, 14:14:41",3741,GPL-3.0,0,220,"2020/05/24, 17:51:32",6,0,12,0,1249,0,0,0.0,"2014/04/27, 16:47:19",v0.38,0,1,false,,false,false,,,,,,,,,,, SolarTherm,Solar thermal power/fuel station performance simulation and optimization using Modelica.,SolarTherm,https://github.com/SolarTherm/SolarTherm.git,github,"engineering,energy,solar,thermodynamics,optimisation,simulation,modelica,modelica-library",Photovoltaics and Solar Energy,"2023/09/11, 01:34:46",28,0,3,true,Modelica,,SolarTherm,"Modelica,Python,C,Motoko,Shell",,"b'Solar thermal power station performance simulation and optimisation.\n\n[![Linux build status](https://github.com/solartherm/solartherm/actions/workflows/main.yml/badge.svg)](https://github.com/SolarTherm/SolarTherm/actions/workflows/main.yml)\n[![Windows build status](https://github.com/solartherm/solartherm/actions/workflows/msys2.yml/badge.svg)](https://github.com/SolarTherm/SolarTherm/actions/workflows/msys2.yml)\n\n[Documentation](http://solartherm.readthedocs.org/en/latest/)\n\nSee also our wiki, which includes\n* A brief [SolarTherm tutorial](https://github.com/SolarTherm/SolarTherm/wiki/A-brief-tutorial-of-SolarTherm).\n* Instructions on [Building SolarTherm](https://github.com/SolarTherm/SolarTherm/wiki/Building-SolarTherm) on Linux.\n* Instructions for [Running SolarTherm on Windows (MSYS2)](https://github.com/SolarTherm/SolarTherm/wiki/Running-SolarTherm-on-Windows-%28MSYS2%29) (the recommended approach for Windows), or you can try [Running SolarTherm on Windows (using WSL)](https://github.com/SolarTherm/SolarTherm/wiki/Running-SolarTherm-on-Windows-%28using-WSL%29).\n* Instructions on how to [use SolarTherm from OMEdit](https://github.com/SolarTherm/SolarTherm/wiki/Running-SolarTherm-via-OMEdit).\n* A description of 
our [continuous integration and automated testing](https://github.com/SolarTherm/SolarTherm/wiki/Automated-testing-of-SolarTherm-code) setup.\n* Instructions to [link CoolProp to SolarTherm](https://github.com/SolarTherm/SolarTherm/wiki/Integration-with-CoolProp).\n'",,"2015/07/21, 05:23:10",3018,LGPL-3.0,120,1403,"2023/09/11, 01:34:47",48,18,62,8,44,4,0.3,0.6070615034168565,,,0,8,false,,false,false,,,https://github.com/SolarTherm,,,,,https://avatars.githubusercontent.com/u/15701216?v=4,,, LibreSolar,Firmware for LibreSolar BMS boards based on bq769x0 or ISL94202.,LibreSolar,https://github.com/LibreSolar/bms-firmware.git,github,,Photovoltaics and Solar Energy,"2023/09/11, 14:18:19",97,0,35,true,C,Libre Solar Project,LibreSolar,"C,C++,CMake,Shell",https://libre.solar/bms-firmware/,"b'# Libre Solar BMS Firmware\n\n![build badge](https://github.com/LibreSolar/bms-firmware/actions/workflows/zephyr.yml/badge.svg)\n\nThis repository contains the firmware for Libre Solar Battery Management Systems based on [Zephyr RTOS](https://www.zephyrproject.org/) .\n\n## Development and release model\n\nThe `main` branch is used for ongoing development of the firmware.\n\nReleases are created from `main` after significant updates have been introduced to the firmware. Each release has to pass tests with multiple boards.\n\nA release is tagged with a version number consisting of the release year and a release count for that year (starting at zero). For back-porting of bug-fixes, a branch named after the release followed by `-branch` is created, e.g. `v21.0-branch`.\n\n## Documentation\n\nThe firmware documentation including build instructions and API reference can be found under [libre.solar/bms-firmware](https://libre.solar/bms-firmware/).\n\nIn order to build the documentation locally you need to install Doxygen, Sphinx and Breathe and run `make html` in the `docs` folder.\n\n## License\n\nThis firmware is released under the [Apache-2.0 License](LICENSE).\n'",,"2016/11/30, 11:19:50",2520,Apache-2.0,37,288,"2023/09/24, 15:39:43",6,7,25,5,31,0,1.2857142857142858,0.017421602787456414,"2023/09/11, 15:01:30",v23.1,0,5,false,,false,false,,,https://github.com/LibreSolar,https://libre.solar,"Hamburg, Germany",,,https://avatars.githubusercontent.com/u/17674115?v=4,,, Charge Controller Firmware,Firmware for LibreSolar MPPT/PWM charge controllers.,LibreSolar,https://github.com/LibreSolar/charge-controller-firmware.git,github,,Photovoltaics and Solar Energy,"2023/08/16, 11:34:30",120,0,13,true,C++,Libre Solar Project,LibreSolar,"C++,C,CMake,Python,Shell,Batchfile",https://libre.solar/charge-controller-firmware/,"b'# Libre Solar Charge Controller Firmware\n\n![build badge](https://github.com/LibreSolar/charge-controller-firmware/actions/workflows/zephyr.yml/badge.svg)\n\nThis repository contains the firmware for the different Libre Solar Charge Controllers based on [Zephyr RTOS](https://www.zephyrproject.org/).\n\nCoding style is described [here](https://github.com/LibreSolar/coding-style).\n\n## Development and release model\n\nThe `main` branch is used for ongoing development of the firmware.\n\nReleases are created from `main` after significant updates have been introduced to the firmware. Each release has to pass tests with multiple boards.\n\nA release is tagged with a version number consisting of the release year and a release count for that year (starting at zero). For back-porting of bug-fixes, a branch named after the release followed by `-branch` is created, e.g. 
`v21.0-branch`.\n\n## Documentation\n\nThe firmware documentation including build instructions and API reference can be found under [libre.solar/charge-controller-firmware](https://libre.solar/charge-controller-firmware/).\n\nIn order to build the documentation locally you need to install Doxygen, Sphinx and Breathe and run `make html` in the `docs` folder.\n\n## License\n\nThis firmware is released under the [Apache-2.0 License](LICENSE).\n'",,"2016/08/02, 12:03:35",2640,Apache-2.0,9,510,"2023/08/16, 11:34:31",10,80,121,1,70,1,1.3,0.10882956878850103,"2021/04/14, 11:02:11",v21.0,0,11,false,,true,true,,,https://github.com/LibreSolar,https://libre.solar,"Hamburg, Germany",,,https://avatars.githubusercontent.com/u/17674115?v=4,,, pvoutput,Python code for downloading PV data from PVOutput.org.,openclimatefix,https://github.com/openclimatefix/pvoutput.git,github,"pvoutput,python,python-library,solar,nowcasting",Photovoltaics and Solar Energy,"2023/07/05, 10:11:51",30,2,6,true,Python,Open Climate Fix,openclimatefix,Python,,"b'\n[![All Contributors](https://img.shields.io/badge/all_contributors-9-orange.svg?style=flat-square)](#contributors-)\n\n\n[![codecov](https://codecov.io/gh/openclimatefix/pvoutput/branch/main/graph/badge.svg?token=GTQDR2ZZ2S)](https://codecov.io/gh/openclimatefix/pvoutput)\n\nDownload historical solar photovoltaic data from [PVOutput.org](https://pvoutput.org).\n\nThis code is a work-in-progress. The aim is to provide both a Python library for interacting with [PVOutput.org\'s API](https://pvoutput.org/help.html#api), and a set of scripts for downloading lots of data :)\n\n# Installation\n\n```bash\n$ pip install pvoutput-ocf\n```\n\n## Register with PVOutput.org\n\nYou need to get an API key *and* a system ID from PVOutput.org.\n\nIf you don\'t have a PV system, click the ""energy consumption only"" box\nwhen registering on PVOutput. If you don\'t include a\nsystem ID, then you\'ll get a ""401 Unauthorized"" response from the PVOutput API.\n\nYou can pass the API key and system ID into the `PVOutput` constructor.\nOr, create a `~/.pvoutput.yml` file which looks like:\n\n```yaml\napi_key: \nsystem_id: \n```\n\nThe default location of the `.pvoutput.yml` is the user\'s home directory, expanded from `~`. This can be overridden by setting the `PVOUTPUT_CONFIG` environment variable.\n\ne.g. `export PVOUTPUT_CONFIG=""/my/preferred/location/.pvoutput.yml""`\n\nAlternatively, you can set `API_KEY`, `SYSTEM_ID` and `DATA_SERVICE_URL` (see below) as environmental variables.\n\n### API quotas and paid subscriptions\nPlease see [here](https://pvoutput.org/help/data_services.html) for update info.\n\n#### Free\n\nPVOutput.org gives you 60 API requests per hour. Per request, you can download one day of data for one PV system. (See PVOutput\'s docs for more info about [rate limits](https://pvoutput.org/help/api_specification.html#rate-limits).)\n\n#### Donate\n[Donating to PVOutput.org](https://pvoutput.org/help/donations.html#donations) increases your quota for a year to 300 requests per hour.\n\n#### Paid\nTo get more historical data, you can pay $600 Australian dollars for a year\'s \'Live System History\' subscription for a single country ([more info here](https://pvoutput.org/help/data_services.html). 
And [here\'s PVOutput.org\'s full price list](https://pvoutput.org/services.jsp)).\nThis allows you to use the [`get batch status`](https://pvoutput.org/help/data_services.html#get-batch-status-service) API to download 900 PV-system-*years* per hour.\n\nIf you have subscribed to PVOutput\'s data service, then either:\n- add `data_service_url` to your configuration file (`~/.pvoutput.yml`), or\n- pass `data_service_url` to the `PVOutput` constructor.\n\nThe `data_service_url` should end in `.org` (note this doesn\'t include the `/service/r2` part of the URL).\nFor example: `data_service_url: https://pvoutput.org/`\n\n\n## Install pvoutput Python library\n\n`pip install -e git+https://github.com/openclimatefix/pvoutput.git@main#egg=pvoutput-ocf`\n\n# Usage\n\nSee the [Quick Start notebook](examples/quick_start.ipynb).
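A minimal usage sketch may help before diving into the notebook; the `PVOutput` constructor arguments are documented above, but the `get_status` method name and its parameters are assumptions here, so defer to the Quick Start notebook for the actual API:

```python
# Hedged sketch: only the PVOutput constructor is documented above; the
# method name and arguments below are assumptions, not the confirmed API.
from pvoutput import PVOutput

pv = PVOutput(api_key="YOUR_API_KEY", system_id="YOUR_SYSTEM_ID")

# Download one day of data for one PV system (subject to the rate
# limits described above)
data = pv.get_status(pv_system_id=12345, date="2019-06-01")
print(data)
```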

## Contributors \xe2\x9c\xa8

Thanks goes to these wonderful people ([emoji key](https://allcontributors.org/docs/en/emoji-key)):

* Jack Kelly \xf0\x9f\x92\xbb
* Sam Murphy-Sugrue \xf0\x9f\x92\xbb
* Gabriel Tseng \xf0\x9f\x92\xbb
* Jamie Taylor \xf0\x9f\x92\xbb
* Peter Dudfield \xf0\x9f\x9a\x87
* Shanmukh Chava \xf0\x9f\x92\xbb
* Antsthebul \xf0\x9f\x92\xbb
* Rachit Singh \xf0\x9f\x94\xa3 \xf0\x9f\x92\xbb
* devsjc \xf0\x9f\x92\xbb
\n\n\n\n\n\n\nThis project follows the [all-contributors](https://github.com/all-contributors/all-contributors) specification. Contributions of any kind welcome!\n'",,"2019/07/16, 16:51:03",1562,Apache-2.0,20,350,"2022/12/07, 10:48:22",31,51,90,5,322,3,0.6,0.6733067729083666,"2023/07/05, 10:12:58",v0.1.30,0,12,false,,false,false,"AxReds/solaredge,openclimatefix/PVConsumer",,https://github.com/openclimatefix,openclimatefix.org,London,,,https://avatars.githubusercontent.com/u/48357542?v=4,,, predict_pv_yield,Use machine learning to map satellite imagery of clouds to solar PV yield.,openclimatefix,https://github.com/openclimatefix/predict_pv_yield.git,github,nowcasting,Photovoltaics and Solar Energy,"2022/04/07, 10:59:27",53,0,10,false,Jupyter Notebook,Open Climate Fix,openclimatefix,"Jupyter Notebook,Python",,"b""# Intro\nEarly experiments on predicting solar electricity generation over the next few hours, using deep learning, satellite imagery, and as many other data sources as we can think of :)\n\nThese experiments are focused on predicting solar PV yield.\n\nPlease see [SatFlow](https://github.com/openclimatefix/satflow/) for complementary experiments on predicting the next few hours of satellite imagery (i.e. trying to predict how clouds are going to move!)\n\nAnd please see [OCF's Nowcasting page](https://github.com/openclimatefix/nowcasting) for more context.\n\n# Installation\n\nFrom within the cloned `predict_pv_yield` directory:\n\n```\nconda env create -f environment.yml\nconda activate predict_pv_yield\npip install -e .\n```\n""",,"2020/09/30, 16:46:56",1120,MIT,0,379,"2023/02/08, 12:35:24",46,20,53,1,259,2,0.1,0.37168141592920356,,,1,3,false,,false,false,,,https://github.com/openclimatefix,openclimatefix.org,London,,,https://avatars.githubusercontent.com/u/48357542?v=4,,, solar-panel-detection,"Using a combination of AI (machine vision), open data and short-term forecasting, the project aims to determine the amount of solar electricity being put into the UK grid at a given time (i.e., ""right now"", or ""nowcasting"")",alan-turing-institute,https://github.com/alan-turing-institute/solar-panel-detection.git,github,"hut23,hut23-425",Photovoltaics and Solar Energy,"2020/04/22, 08:13:06",17,0,0,false,Jupyter Notebook,The Alan Turing Institute,alan-turing-institute,"Jupyter Notebook,TSQL,Python,Makefile",,"b'# Solar Panel Detection (Turing Climate Action Call)\n\nProject code: R-SPES-115 - Enabling worldwide solar PV nowcasting via machine vision and open data\n\nHut23 issue: https://github.com/alan-turing-institute/Hut23/issues/425\n\n- [Sheffield Solar](https://www.solar.sheffield.ac.uk/)\n- [Open Climate Fix](https://openclimatefix.org/)\n- [Open Street Map](https://www.openstreetmap.org)\n- [Open Infrastructure Map](https://openinframap.org)\n\n## Main Project Description\n\nUsing a combination of AI (machine vision), open data and short term forecasting, the project aims to determine the amount of solar electricity being put into the UK grid at a given time (i.e., \xe2\x80\x9cright now\xe2\x80\x9d, or \xe2\x80\x9cnowcasting\xe2\x80\x9d).\n\nDan Stowell (Queen Mary) and collaborators are working on using a number of datasets, each of which are incomplete and messy, to create an estimate of all solar panels and their orientation in the UK. 
This will involve some data wrangling to combine a number of geospatial data sources and then use data science methods to determine the solar panel assets across the UK and provide a web service to disseminate the results.\n\nData sources will be from Open Street Maps, which has been tagging solar panels in the UK, as well as other data provided by Sheffield Solar and Open Climate Fix. The REG would be doing most of the data wrangling and machine learning on the project, with the other partners providing data and expertise.\n\n## REG Project\n\n### Goals\n\n1. Aggregate UK solar PV data into a structured format, which can be accessed.\n2. Link the tagged panels in OSM to the other data sources\n\n## Overview of the directory structure\n\n```\n.\n|-- admin -- project process and planning docs\n|-- data\n| |-- as_received -- downloaded data files\n| |-- raw -- manually edited files (replace dummy data)\n| |-- processed\n|-- db -- database creation\n|-- doc -- documentation\n|-- explorations -- exploratory work\n`-- notebooks\n```\n\n\n## Data\n\nData is held in three directories: `as_received` contains the data precisely as\ndownloaded from its original source and in its original format; `raw` contains\ndata that has been manually restructured or reformatted to be suitable for use by\nsoftware in the project (see ""Using this repo"" header below). `processed` contains data that may have been processed in some way, such as by Python code, but is still thought of as \xe2\x80\x9csource\xe2\x80\x9d data.\n\nThe following sources of data are used:\n\n- OpenStreetMap - [Great Britain download (Geofabrik)](https://download.geofabrik.de/europe/great-britain.html).\n - [OSM data types](https://wiki.openstreetmap.org/wiki/Elements)\n - [Solar PV tagging](https://wiki.openstreetmap.org/wiki/Tag:generator:source%3Dsolar)\n- [FiT](https://www.ofgem.gov.uk/environmental-programmes/fit/contacts-guidance-and-resources/public-reports-and-data-fit/installation-reports) - Report of installed PV (and other tech including wind). 100,000s entries.\n- [REPD](https://www.gov.uk/government/publications/renewable-energy-planning-database-monthly-extract) - Official UK data from the ""renewable energy planning database"". It contains large solar farms only.\n- Machine Vision dataset - supplied by Descartes labs (Oxford), not publicly available yet.\n\n## Project outcome\n\nThis repo includes a set of scripts that will take\ninput datasets (REPD, OSM, FiT and machine vision \xe2\x80\x93 each in diff format),\nperform data cleaning/conversion, populate a PostgreSQL database, perform\ngrouping of data where necessary (there are duplicate entries in REPD, multiple solar farm\ncomponents in OSM) and then match entries between the data tables, based on the\nmatching criteria we have come up with.\n\nThe database creation and matching scripts should work with newer versions of the source data files, or at least do so with minimal changes to the data processing (see ""Using this repo"" below).\n\nThe result of matching is a table in the database called `matches` that links the unique identifiers of the\ndata tables. This also contains a column called `match_rule`, which refers to the method by which the match was determined, as documented in [doc/matching](doc/matching.md).\n\n## Using this repo\n\n### Install requirements\n\n1. Install [PostgreSQL](https://www.postgresql.org/download/)\n2. Install Python 3 (version 3.7 or later) and `pip`\n3. Run `pip install -r requirements.txt`\n4. 
Install [Osmium](https://osmcode.org/osmium-tool/)\n\n### Download and prepare data files\n\n1. Download the following data files from the internet and store locally. We recommend saving these original data files within the directory structure under `data/as_received`:\n - OSM PBF file (GB extract): [Download](https://download.geofabrik.de/europe/great-britain-latest.osm.pbf)\n - FiT reports: Navigate to [ofgem](https://www.ofgem.gov.uk/environmental-programmes/fit/contacts-guidance-and-resources/public-reports-and-data-fit/installation-reports) and click the link for the latest Installation Report (during the Turing project, 30 September 2019 was used), then download the main document AND subsidiary documents\n - REPD CSV file: [Download](https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/879414/renewable-energy-planning-database-march-2020.csv) - this is always the most up-to-date version\n - Machine Vision dataset: supplied by Descartes labs (Oxford), not publicly available yet.\n2. Navigate to `submodules/compile_osm_solar` and edit the `osmsourcefpath` in `compile_osm_solar.py` so that the file path points to the OSM PBF file you downloaded. After installing the requirements in the submodule README, run `python compile_osm_solar.py`. One of the data files produced is a csv, which we use as source data. You can move this file to `data/as_received`.\n3. Carry out manual edits to the data files, as described in [doc/preprocessing](doc/preprocessing.md) and save them in `data/raw` under the names suggested by the doc, replacing the default dummy data files.\n4. Navigate to `data/processed` and type `make` - this will create versions of the data files ready for import to PostgreSQL.\n\n### Run the database creation and data matching\n\n1. Make sure you have PostgreSQL on your machine, then run the command: `createdb hut23-425 ""Solar PV database matching""` - this creates the empty database.\n2. Navigate to `db` and run the command `psql -f make-database.sql hut23-425` - this populates the database (see [doc/database](doc/database.md)), carries out some de-duplication of the datasets and performs the matching procedure (see [doc/matching](doc/matching.md)). Note: this may take several minutes.
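Once the matching has run, a quick sanity check is to query the `matches` table from Python. The sketch below is illustrative only: the `psycopg2` driver is an assumption (any PostgreSQL client will do), and `match_rule` is the only column name documented above.

```python
# Illustrative sketch: psycopg2 is an assumed client library, and
# match_rule is the only documented column; adapt to your setup.
import psycopg2  # pip install psycopg2-binary

conn = psycopg2.connect(dbname="hut23-425")
with conn.cursor() as cur:
    # Count matches per matching rule (rules are described in doc/matching.md)
    cur.execute("SELECT match_rule, COUNT(*) FROM matches GROUP BY match_rule;")
    for rule, count in cur.fetchall():
        print(rule, count)
conn.close()
```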
\n\nNote that the above commands require you to have admin rights on your PostgreSQL server. On standard Debian-based machines you could prepend the commands with `sudo -u postgres`, or you could assign privileges to your own user account.\n\n## External collaborators guidance\n\nFrom April 2020 this repo is no longer under active development; however, a fork of the project is being created by [Open Climate Fix](https://github.com/openclimatefix) if you wish to open issues and pull requests there.\n'",,"2019/11/13, 11:23:12",1442,MIT,0,341,"2020/04/22, 08:13:07",7,30,55,0,1281,1,0.4,0.33333333333333337,,,0,3,false,,false,false,,,https://github.com/alan-turing-institute,https://turing.ac.uk,,,,https://avatars.githubusercontent.com/u/18304793?v=4,,, solarpy,"This package aims to provide a reliable solar radiation model, mainly based on the work of Duffie, J.A., and Beckman, W. A., 1974, ""Solar energy thermal processes"".",aqreed,https://github.com/aqreed/solarpy.git,github,"solar-energy,solar-cells,sun-position,beam-irradiance,photovoltaic,python,modeling,simulation,flight-simulation",Photovoltaics and Solar Energy,"2019/09/22, 20:12:22",41,10,13,false,Python,,,Python,,"b'\n\n[![Build Status](https://travis-ci.com/aqreed/solarpy.svg?branch=master)](https://travis-ci.com/aqreed/solarpy)\n[![codecov.io](https://codecov.io/gh/aqreed/solarpy/branch/master/graph/badge.svg)](https://codecov.io/gh/aqreed/solarpy/branch/master)\n[![license](https://img.shields.io/badge/license-MIT-blue.svg?style=flat-square)](https://github.com/aqreed/solarpy/raw/master/COPYING)\n[![Binder](https://mybinder.org/badge_logo.svg)](https://mybinder.org/v2/gh/aqreed/solarpy/master?filepath=examples)\n\n| | |\n| ------ | ------ |\n| Description | Python Solar Radiation model |\n| Author | aqreed |\n| Version | 0.1.3 |\n| Python Version | 3.6 |\n| Requires | Numpy, Matplotlib |\n\nThis package aims to provide a reliable solar radiation model, mainly based on the work of Duffie, J.A., and Beckman, W. A., 1974, ""Solar energy thermal processes"".\n\nThe main purpose is to generate a **solar beam irradiance** (W/m2) prediction on:\n* **any plane**, thanks to the calculation of the solar vector in NED (North East Down) coordinates, suitable for its use in flight dynamics simulations...\n* **any place of the earth**, taking into account the solar time wrt the standard time, geometric altitude, the latitude influence on solar azimuth and solar altitude as well as sunset/sunrise time and hour angle, etc.\n* **any day of the year**, taking into account the variations of the extraterrestrial radiation, the equation of time, the declination, etc., throughout the year\n\n#### Example 1\nSolar [irradiance](https://en.wikipedia.org/wiki/Solar_irradiance) in the southern hemisphere on October 17, at sea level, 13.01 UTC (plane pointing upwards)?\n\n```\nimport numpy as np\nfrom solarpy import irradiance_on_plane\nfrom datetime import datetime\n\nvnorm = np.array([0, 0, -1]) # plane pointing zenith\nh = 0 # sea-level\ndate = datetime(2019, 10, 17, 13, 1) # year, month, day, hour, minute\nlat = -23.5 # southern hemisphere\n\nirradiance_on_plane(vnorm, h, date, lat)\n```\n\nA dedicated Jupyter Notebook on beam irradiance can be found [here](https://github.com/aqreed/solarpy/blob/master/examples/solar_irradiance.ipynb).\n\n#### Example 2\nPower output (in W) of a solar panel with the following characteristics:\n* surface of 2.1 sqm\n* efficiency of 0.2\n* pointing upwards\n* in NYC\n* on December 25, at 16.15\n\n```\nfrom numpy import array\nfrom solarpy import solar_panel\nfrom datetime import datetime\n\npanel = solar_panel(2.1, 0.2, id_name=\'NYC_xmas\') # surface, efficiency and name\npanel.set_orientation(array([0, 0, -1])) # upwards\npanel.set_position(40.73, -73.93, 0) # NYC latitude, longitude, altitude\npanel.set_datetime(datetime(2019, 12, 25, 16, 15)) # Christmas Day!\npanel.power()\n```\n\n#### Example 3\nSolar [declination](https://en.wikipedia.org/wiki/Position_of_the_Sun#Declination_of_the_Sun_as_seen_from_Earth) on August 5?\n\n```\nfrom solarpy import declination\nfrom datetime import datetime\n\ndate = datetime(2019, 8, 5) # August 5\n\ndeclination(date)\n```\n\nPlease find more notebooks in the [\'examples\'](https://github.com/aqreed/solarpy/tree/master/examples) folder that you can open locally, or just try 
[![Binder](https://mybinder.org/badge_logo.svg)](https://mybinder.org/v2/gh/aqreed/solarpy/master?filepath=examples) to launch online interactive Jupyter notebooks.\n\n---\n**NOTE**:\nsolarpy is under development and might change in the near future.\n\n---\n\n### Dependencies\n\nThis package depends on Python, NumPy and Matplotlib and is usually tested on Linux with the following versions:\n\nPython 3.6, NumPy 1.16, Matplotlib 3.0\n\n### Installation\n\nsolarpy has been written in Python3, and its version v0.1 is available in PyPi. It can be installed using:\n\n```\n$ pip install solarpy\n```\n\nTo install in development mode:\n\n```sh\n$ git clone https://github.com/aqreed/solarpy.git\n$ cd solarpy\n$ pip install -e .\n```\n\n### Testing\n\nsolarpy recommends py.test for running the test suite. Running from the top directory:\n\n```sh\n$ pytest\n```\n\nTo test coverage (also from the top directory):\n\n```sh\n$ pytest --cov\n```\n\n### Bug reporting\n\nPlease feel free to open an [issue](https://github.com/aqreed/solarpy/issues) on GitHub!\n\n### License\n\nMIT (see `COPYING`)\n'",,"2019/07/28, 15:53:41",1550,MIT,0,257,"2019/08/13, 12:22:22",7,0,1,0,1534,1,0,0.0,,,0,1,false,,false,false,"TalweSingh/DAISE,tbongiov/mettoolbox_tb,kcpgilbert/Weather-Expert-Solar-Forecasting,robinjo78/solarBAG,timcera/mettoolbox,kmiecikmichal/PV_ENERGY_PRODUCTION_FORECASTING,DCC-Lab/projetNeige,samurai-madrid/data-processing,samurai-madrid/reinforcement-learning,ebasanez/samur.ai",,,,,,,,,, solariot,Leverage your IoT enabled Solar PV Inverter to stream your solar energy usage data to a real time dashboard.,meltaxa,https://github.com/meltaxa/solariot.git,github,"freeboard,iot,solar-energy,sungrow-inverter,dashboard,pvoutput,modbus-sungrow,dweet,influxdb,telemetry",Photovoltaics and Solar Energy,"2023/03/15, 09:33:33",175,0,39,true,Python,,,"Python,JavaScript,Dockerfile",https://solariot.live,"b'# Solariot\n\nLeverage your IoT enabled Solar PV Inverter to stream your solar energy usage\ndata to a real time dashboard.\n\nSolariot will connect directly to your Inverter using Modbus TCP. \n\nCurrently, Solariot is able to talk to a SMA Sunny Boy and Sungrow SH5K & SG5KD inverters. \nSolariot is designed to allow any Modbus TCP enabled inverter to be queried using a Modbus register map.\n\nData is collected and can be streamed to destinations like dweet.io, MQTT, InfluxDB or PVOutput. \nTo visualise the telemetry, use a dashboard such as Grafana. For example, this is Meltaxa\'s Grafana dashboard on \nsolariot.live:\n

[Image: Grafana dashboard on solariot.live]

\n\n## Pre-requisites\n\n* The Inverter must be accessible on the network using TCP.\n\n* This Python script should work on most Inverters that talk Modbus TCP. You can \ncustomise your own modbus register file.\n\n* Run on Python 3.5+.\n\n## Installation\n\n1. Download or clone this repository to your local workstation.\n ```\n git clone https://github.com/meltaxa/solariot.git\n cd solariot\n ```\n \n2. Install the required libraries.\n ```\n pip install --upgrade -r requirements.txt\n ```\n \n3. Update config.py with your values, such as the Inverter\'s IP address, \nport, inverter model (which corresponds to the modbus register file) and the\nregister addresses Solariot should scan from. Enable optional support for MQTT,\nPVOutput, InfluxDB and more.\n\n4. Run the solariot.py script. \n ```\n ./solariot.py\n ```\n * Command line options:\n ```\n -c Python module to load as our config. Default is config.py.\n -v Level of verbosity 0=ERROR 1=INFO 2=DEBUG.\n --one-shot Run Solariot just once then exit.\n ```\n## Docker\n\n1. Create a directory for the config file (config.py).\n\n2. Create a config.py (see config-example.py) and place it in the config directory.\n\n3. Run the Docker image with the volume switch to mount your config directory as /config in the image:\n * `docker run -v <config-dir>:/config meltaxa/solariot`\n\nNote that the container runs as UID/GID 2000, so mounted config files will need to be readable, e.g.:\n\n```bash\nchgrp 2000 $FILE # Set group of file to 2000\nchmod g+r $FILE # Allow group 2000 to read file\n```\n\n## Next Steps\n\nNow that you are collecting the inverter\'s data, you\'ll want to ultimately\ndisplay it in a dashboard as seen above. \n\nThere are many methods to stream the data. Here are a few options, which\ncan be enabled in Solariot. \n\n### Dweet.io and Freeboard\n\nThis is the quickest method and is a good place to start.\n\nMetrics are streamed to dweet.io, a free IoT messaging service. No sign up is \nrequired. All you need to do is create a unique identifier by updating the\ndweepy_uuid value in the config.py file.\n\nData can then be visualised using a ~~free~~ low-cost dashboard service from \n[Freeboard](https://freeboard.io/). You\'ll need to create your own dashboard,\nusing dweet.io as your data source.\n\n### MQTT Support\n\nThis is a good way to push data to MQTT topics that various tools \nsuch as Node-Red or Home Assistant can subscribe to. Running your own MQTT server means you can\nalso retrieve these values when your internet is offline.\n\nAll you need to do is set the `mqtt_server`, `mqtt_port`, `mqtt_username`, \n`mqtt_password` and `mqtt_topic` values in the `config.py` file and you\'ll be up \nand running.
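As a hedged illustration, a `config.py` fragment for the MQTT options named above might look like the following; the key names come from this README, the values are placeholders, and `config-example.py` remains the authoritative template:

```python
# Illustrative config.py fragment: key names are taken from this README;
# all values are placeholders. See config-example.py for the full template.
mqtt_server = "192.168.1.10"   # address of your MQTT broker
mqtt_port = 1883
mqtt_username = "solariot"
mqtt_password = "changeme"
mqtt_topic = "inverter/stats"
```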
### InfluxDB and Grafana\n\nUse a time series database such as \n[InfluxDB](https://github.com/influxdata/influxdb) to store the inverter data as\nit streams in. You\'ll need to install this on your own server.\n\nTo display the data in a real-time dashboard, you can use \n[Grafana](https://grafana.com/get) to pull the metrics from InfluxDB. You can \neither install your own Grafana server or use their free \n[Grafana hosted solution](https://grafana.com/cloud/grafana).\n\nA JSON export of the solariot.live Grafana dashboard is available under the grafana folder.\nThe file will require editing to match your InfluxDB settings.\n\n### Prometheus and Grafana\n\n[Prometheus](https://prometheus.io/) can be enabled in config.py by setting `prometheus` to true. The data will then be exported on the port specified by `prometheus_port` (defaults to 8000).\n\nYou can configure Prometheus to scrape this by adding a rule like this to your prometheus.yml:\n```\nscrape_configs:\n  - job_name: \'solariot\'\n    scrape_interval: 30s\n    static_configs:\n      - targets: [\'localhost:8000\']\n```\n\nAlternatively, if you\'re using [Kubernetes](https://kubernetes.io/), you can use this [helm chart](https://github.com/slackerlinux85/HelmCharts/tree/master/helm-chart-sources/solariot).\n\n### PVOutput.org\n\nWe offer direct integration for publishing metrics to the \'Add Status\' [API endpoint](https://pvoutput.org/help.html#api-addstatus) of PVOutput.\n\nSupported values are `v1` through `v6`, with the assumption that `v1` and `v3` are incremental values that reset every day.\n\nAll you need to do is set the `pvoutput_api`, `pvoutput_sid` and `pvoutput_rate_limit` values in the `config.py` file and \nyou\'ll be publishing in no time!\n\n## Integration with PVOutput.org and Grafana\n\nIf you are using Grafana as your dashboard, a neat little trick is to then\nincorporate your Grafana panels into PVOutput as system photos. From your\n[PV Ladder page](https://pvoutput.org/ladder.jsp?f=1&pf=4102&pt=4102&sf=5130&st=5130&country=1&in=Sungrow&pn=Infinity&io=1&oc=0), click on your photos to view the real-time Grafana images: \n\n![alt tag](docs/animated-pvoutout-grafana-integration.gif)\n\n1. Obtain your Grafana panel direct link, see their documentation: .\n\n2. In your PVOutput ""Edit System"" page, add your Grafana panel link in the \n""Image Link"" field. Append ""&png"" to the link. Note, if the URL is longer than \n100 characters, use a URL shortener service instead (such as ).\nDon\'t forget to append the ""&png"" string to your URL.\n\n3. 
Now go to your system in the PV Ladder page and click on the photos.\n\n:bulb: Tip: You can add any URL image, such as the latest weather radar image \n:wink:\n\n## Contributions\n\nIf you have created a modbus register map for an inverter, please submit your\nfile as a pull request for Solariot inclusion.\n\n## Acknowledgements\n\n* [michael-robbins](https://github.com/michael-robbins) for Docker support, modbus contrib and other improvements.\n* [rpvelloso](https://github.com/rpvelloso) for the SungrowModbusTcpClient class that enables decryption of comms.\n* [shannonpasto](https://github.com/shannonpasto) for the Sungrow SG3KD modbus map.\n* [ShogunQld](https://github.com/ShogunQld) for the SMA Sunny Boy modbus map.\n* [zyrorl](https://github.com/zyrorl) for MQTT support contrib.\n'",,"2017/09/15, 12:26:12",2231,MIT,6,158,"2023/03/21, 15:57:53",13,36,69,14,218,1,0.1,0.5106382978723405,,,0,11,true,github,false,false,,,,,,,,,,, pvanalytics,"Quality control, filtering, feature labeling, and other tools for working with data from photovoltaic energy systems.",pvlib,https://github.com/pvlib/pvanalytics.git,github,"photovoltaic,python,renewable-energy,renewables,solar-energy",Photovoltaics and Solar Energy,"2023/10/17, 20:51:22",76,6,17,true,Python,pvlib,pvlib,Python,https://pvanalytics.readthedocs.io,"b'![lint and test](https://github.com/pvlib/pvanalytics/workflows/lint%20and%20test/badge.svg)\n[![Coverage Status](https://coveralls.io/repos/github/pvlib/pvanalytics/badge.svg?branch=main)](https://coveralls.io/github/pvlib/pvanalytics?branch=main)\n[![DOI](https://zenodo.org/badge/DOI/10.5281/zenodo.6110569.svg)](https://doi.org/10.5281/zenodo.6110569)\n\n\n# PVAnalytics\n\nPVAnalytics is a python library that supports analytics for PV\nsystems. It provides functions for quality control, filtering, feature labeling, and other tools supporting the analysis of PV\nsystem-level data.\n\nPVAnalytics is available at [PyPI](https://pypi.org/project/pvanalytics/)\nand can be installed using `pip`:\n\n    pip install pvanalytics\n\nDocumentation and example usage are available at \n[pvanalytics.readthedocs.io](https://pvanalytics.readthedocs.io).\n\n## Library Overview\n\nThe functions provided by PVAnalytics are organized in modules based\non their anticipated use. The structure/organization below is likely\nto change as use cases are identified and refined and as package\ncontent evolves. The functions in `quality` and\n`features` take a series of data and return a series of booleans; a usage sketch follows the module list below.\nFor more detailed descriptions, see our\n[API Reference](https://pvanalytics.readthedocs.io/en/stable/api.html).\n\n* `quality` contains submodules for different kinds of data quality\n checks.\n * `data_shifts` contains quality checks for detecting and \n isolating data shifts in PV time series data.\n * `irradiance` provides quality checks for irradiance\n measurements. \n * `weather` has quality checks for weather data (for example tests\n for physically plausible values of temperature, wind speed,\n humidity, etc.)\n * `outliers` contains different functions for identifying outliers\n in the data.\n * `gaps` contains functions for identifying gaps in the data\n (i.e. missing values, stuck values, and interpolation).\n * `time` quality checks related to time (e.g. timestamp spacing)\n * `util` general purpose quality functions.\n\n* `features` contains submodules with different methods for\n identifying and labeling salient features.\n * `clipping` functions for labeling inverter clipping.\n * `clearsky` functions for identifying periods of clear sky\n conditions.\n * `daytime` functions for identifying periods of day and night.\n * `orientation` functions for labeling data as corresponding to\n a rotating solar tracker or a fixed tilt structure.\n * `shading` functions for identifying shadows.\n* `system` identification of PV system characteristics from data\n (e.g. nameplate power, orientation, azimuth)\n* `metrics` contains functions for computing PV system-level metrics
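As a hedged illustration of that series-in, booleans-out convention: the submodule path below follows the overview above, but the specific function name (`zscore`) is an assumption, so see the API Reference for the authoritative list.

```python
# Hedged sketch: the quality.outliers submodule is described above, but
# the zscore function name is an assumption; check the API Reference.
import pandas as pd
from pvanalytics.quality import outliers

power = pd.Series(
    [1.0, 1.1, 0.9, 25.0, 1.2],  # one obviously anomalous sample
    index=pd.date_range("2023-06-01", periods=5, freq="h"),
)

flags = outliers.zscore(power)  # boolean Series; True marks an outlier
print(power[flags])
```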
\n'",",https://doi.org/10.5281/zenodo.6110569","2020/02/18, 17:10:34",1345,MIT,17,307,"2023/10/17, 20:51:27",34,112,165,28,7,8,1.8,0.512,"2022/12/16, 19:22:39",v0.1.3,0,7,false,,true,false,"hmendo/chrpa,hmendo/chrpa_test,slacgismo/pv-validation-hub,kperrynrel/time-shift-validation-hub,AlexandreHugoMathieu/pvfault_detection,sandialabs/pvOps",,https://github.com/pvlib,,,,,https://avatars.githubusercontent.com/u/11037261?v=4,,, IonMonger,A free and fast perovskite solar cell simulator with coupled ion vacancy and charge carrier dynamics in one dimension.,PerovskiteSCModelling,https://github.com/PerovskiteSCModelling/IonMonger.git,github,"perovskite-solar-cells,halide-ion-migration",Photovoltaics and Solar Energy,"2022/12/07, 13:12:54",30,0,7,true,MATLAB,,PerovskiteSCModelling,"MATLAB,Python,TeX",https://sites.google.com/view/ionmonger/home,"b'# IonMonger 2\n\nA drift-diffusion model for ion migration and charge carrier transport across a planar perovskite solar cell (PSC).\n\nThis code can be used to simulate the internal state of a PSC over time. The three core layers of a PSC, namely the electron transport layer, perovskite absorber layer and hole transport layer, are modelled explicitly in one spatial dimension. The model variables are the electric potential, halide ion vacancies (existing only within the perovskite layer), electrons (within the ETL and perovskite layers) and holes (within the perovskite and HTL). A variety of experimental protocols can be simulated, including changes in the applied voltage and/or illumination intensity that occur over timescales on the order of microseconds to hours, as well as impedance spectroscopy. The code also outputs the current density and voltage, which can be used to plot the current-voltage characteristics of a PSC, including current-voltage hysteresis due to the movement of halide ion vacancies. Please read the [GUIDE](GUIDE.md) to get started.\n\nFor details of changes to the code since the first release, see the Changelog on the [IonMonger Wiki](https://github.com/PerovskiteSCModelling/IonMonger/wiki).\n\n\n# Use Cases\n\nThis code is intended for use by researchers in the field of perovskite solar cells. 
Example use cases include:\n\n- simulating current-voltage curves, with the ability to change key material properties in order to investigate trends in performance and the extent of hysteresis\n- simulating photo-current or photo-voltage transients to investigate the effects of halide ion migration\n- visualising the effects of halide ion migration on the internal electrical state of a PSC\n- simulating impedance spectra, predicting and analysing the effects of material properties\n\nThe authors of this code published an investigation into how material properties of the transport layers affect perovskite solar cell performance in [Energy & Environmental Science](https://doi.org/10.1039/C8EE01576G), while working at the Universities of Southampton, Bath and Portsmouth.\n\n\n# Requirements and Other Information\n\nRequirements: MATLAB (version R2021a).\n\nThis code was first created at the University of Southampton in 2016. See [AUTHORS](AUTHORS.md) for a list of contributors and [LICENSE](LICENSE) for the conditions of use.\n\nIf you encounter a problem or any unexpected results, please create an Issue on the GitHub website, add details of the problem (including the error message and MATLAB version number) and attach the parameters.m file in use when the problem occurred. For other enquiries, please contact N.E.Courtier(at)soton.ac.uk.\n\nSome features of the code (for example, IS_solver.m and animate_sections.m) will make use of the Parallel Computing Toolbox and the Image Processing Toolbox for increased performance but can still run if the toolboxes are not installed.\n\n\n# How to Cite this Code\n\nPlease cite the release paper published in the [Journal of Computational Electronics](https://link.springer.com/article/10.1007/s10825-019-01396-2) by using the [citation.bib](citation.bib) file.\n\n\n# Technical Features\n\nThis code is based on the finite element scheme first presented in our paper in [Applied Mathematical Modelling](https://doi.org/10.1016/j.apm.2018.06.051) and is performed on a non-uniform (""tanh"") spatial grid.\n\nFiles in the main folder:\n - master.m for running a single simulation\n - parameters.m for setting the inputs to the simulation\n - reset_path.m adds all subfunctions to the MATLAB path\n - IonMongerLite.mlx for running simulations from a user-friendly interface\n\nThe Code/ folder contains all subfunctions, including\n - a function to turn a list of instructions into a protocol for the light or applied voltage\n - functions that provide the ability to find the steady-state Voc and simulate open-circuit conditions\n - a function to plot current-voltage (`J`-`V`) data as well as the recombination currents (`Jl`, `Jr`)\n\nThe solution is saved in dimensional form into one output file, which also contains the input data.\n\nThe Tests/ folder is for developers and contains a set of tests to check the consistency of future updates.\n'",",https://doi.org/10.1039/C8EE01576G,https://doi.org/10.1016/j.apm.2018.06.051","2019/08/22, 10:21:56",1525,AGPL-3.0,4,69,"2023/09/05, 12:41:46",0,27,29,5,50,0,0.7,0.09259259259259256,"2022/08/02, 14:24:06",v2.0,0,2,false,,false,false,,,https://github.com/PerovskiteSCModelling,,,,,https://avatars.githubusercontent.com/u/45872822?v=4,,, rayflare,"Provide a flexible, user-friendly Python environment to model complex optical stacks, with a focus on solar 
cells.",qpv-research-group,https://github.com/qpv-research-group/rayflare.git,github,"physics,optics,raytracing,ray-tracing,rigorous-coupled-wave,transfer-matrix-method,solar-cells,multiscale-simulation",Photovoltaics and Solar Energy,"2023/03/21, 22:40:51",22,1,7,true,Python,Quantum Photovoltaics Research Group,qpv-research-group,"Python,TeX",,"b'[![License: GPL v3](https://img.shields.io/badge/License-GPLv3-blue.svg)](https://www.gnu.org/licenses/gpl-3.0)\n[![codecov](https://codecov.io/gh/qpv-research-group/rayflare/branch/devel/graph/badge.svg)](https://codecov.io/gh/qpv-research-group/rayflare)\n[![Codacy Badge](https://app.codacy.com/project/badge/Grade/7ff9180e5f7a460192440895d823ff15)](https://www.codacy.com/gh/qpv-research-group/rayflare?utm_source=github.com&utm_medium=referral&utm_content=qpv-research-group/rayflare&utm_campaign=Badge_Grade)\n[![Documentation Status](https://readthedocs.org/projects/rayflare/badge/?version=latest)](https://rayflare.readthedocs.io/en/latest/?badge=latest)\n[![status](https://joss.theoj.org/papers/15647ef7b3dd688b47c1b802a4f50a67/status.svg)](https://joss.theoj.org/papers/15647ef7b3dd688b47c1b802a4f50a67)\n\n**Important**: Please check out the [news & updates](https://rayflare.readthedocs.io/en/latest/news.html) page for the most recent updates\nand changes, including any possible backwards compatibility issues. If you have questions, issues, etc., please check the\ndocumentation and (open and closed) [issues](https://github.com/qpv-research-group/rayflare/issues) first,\nor open a new issue using the relevant template.\n\n# rayflare\nOpen-source, integrated optical modelling of complex stacks. RayFlare incorporates the transfer-matrix method (TMM), \nray-tracing and rigorous coupled-wave analysis (RCWA/FMM), in addition to an angular redistribution matrix method which allows multiple \nmethods to be coupled across a single structure to calculate total absorption/reflection/transmission, absorption per \nlayer, and absorption profiles. \n\nYou can view RayFlare\'s documentation, including installation instructions [here](https://rayflare.readthedocs.io/en/latest/).\nThe contributing guidelines are [here](CONTRIBUTING.md) and the Code of Conduct is [here](CODE_OF_CONDUCT.md). This package\nis distributed under a [GNU GPL (version 3) license](GNU_GPL_v3.txt). If you have questions, issues, etc., please check the\ndocumentation first or open an [issue](https://github.com/qpv-research-group/rayflare/issues) using the relevant template.\n\nIf you use RayFlare in your work, please cite the [JOSS paper](https://doi.org/10.21105/joss.03460):\n\n*Pearce, P. M., (2021). RayFlare: flexible optical modelling of solar cells. Journal of Open Source Software, 6(65), 3460. 
\nhttps://doi.org/10.21105/joss.03460*\n\n![poster](poster.png ""RayFlare poster"")\n'",",https://doi.org/10.21105/joss.03460","2019/06/12, 20:01:41",1595,CUSTOM,18,456,"2023/07/18, 03:04:11",12,33,49,8,99,2,0.0,0.0,"2023/03/21, 08:21:09",v1.2.0,0,1,false,,true,true,qpv-research-group/solcore-education,,https://github.com/qpv-research-group,https://www.qpvgroup.org,"UNSW (Sydney, Australia) and Imperial College London (UK) until 2017",,,https://avatars.githubusercontent.com/u/48552948?v=4,,, pv-terms,Contains nomenclature for PV-relevant terms that are used in modeling and data analysis for PV systems.,DuraMAT,https://github.com/DuraMAT/pv-terms.git,github,,Photovoltaics and Solar Energy,"2020/08/03, 18:19:48",13,0,3,false,Python,,DuraMAT,"Python,Batchfile,Makefile,HTML",,"b""# pv-terms\n\nThe pv-terms project contains nomenclature for PV-relevant terms that are used in modeling and data analysis for PV systems.\n\nThe pv-terms project is a work in progress. The team would greatly appreciate feedback and suggestions. To comment, please open an issue.\n\nFormatted documentation available at http://duramat.github.io/pv-terms/\n\n## Building the documentation\n\nTo build the documentation locally, you'll need to install the sphinx\nrequirements. It's probably a good idea to be working in a virtual\nenvironment, but not strictly necessary. \n\n pip install -r requirements.txt\n\nThere are a few ways to build the docs. To do this, cd into the `docs_source` folder. The first method generates html files in `source/_build/html`:\n\n make html\n\nThe second method does the same thing but then copies the files into `docs/` so that they'll get detected by Github Pages:\n\n make github\n""",,"2020/03/13, 13:05:13",1321,BSD-3-Clause,0,158,"2020/07/23, 14:09:44",12,7,17,0,1189,0,0.0,0.5769230769230769,,,0,4,false,,false,false,,,https://github.com/DuraMAT,,,,,https://avatars.githubusercontent.com/u/55461754?v=4,,, StatisticalClearSky,Statistical estimation of a clear sky signal from PV system power data.,slacgismo,https://github.com/slacgismo/StatisticalClearSky.git,github,,Photovoltaics and Solar Energy,"2022/06/23, 21:50:25",26,3,3,false,Jupyter Notebook,SLAC GISMo,slacgismo,"Jupyter Notebook,Python,Shell",,"b'# StatisticalClearSky\n\n\n\n \n \n\n \n \n\n\n \n \n\n\n \n \n\n\n \n \n\n\n \n \n\n\n \n \n\n\n \n \n\n
[Badges: Latest Release | License | Build Status | Code Quality | Publications | PyPI Downloads | Conda Downloads | Test-Coverage]
\n\n_Statistical estimation of a clear sky signal from PV system power data_\n\nThis project implements an algorithm based on [Generalized Low Rank Models](https://stanford.edu/~boyd/papers/glrm.html) for estimating the output of a solar PV system under clear sky or ""cloudless"" conditions, given only measured power as an input. Notably, no system configuration information, modeling parameters, or correlated environmental data are required. You can read more about this work in these two papers [[1](https://arxiv.org/abs/1907.08279), [2](https://ieeexplore.ieee.org/abstract/document/8939335)].\n\nWe recommend that users generally not invoke this software directly. Instead, we recommend using the API provided by [Solar Data Tools](https://github.com/slacgismo/solar-data-tools).\n\n## Getting Started\n\nYou can install either the pip package or the Anaconda package for this project.\n\n### Recommended: Set up `conda` environment with provided `.yml` file\n\n_Updated September 2020_\n\nWe recommend setting up a fresh Python virtual environment in which to use `solar-data-tools`. We recommend using the [Conda](https://docs.conda.io/projects/conda/en/latest/index.html) package management system, and creating an environment with the environment configuration file named `pvi-user.yml`, provided in the top level of this repository. This will install the `solar-data-tools` package as well.\n\nPlease see the Conda documentation page, ""[Creating an environment from an environment.yml file](https://docs.conda.io/projects/conda/en/latest/user-guide/tasks/manage-environments.html#creating-an-environment-from-an-environment-yml-file)"" for more information.\n\n### Installation\n\nIf you are using pip:\n\n```sh\n$ pip install statistical-clear-sky\n```\n\nAs of February 11, 2019, this fails because the scs package, installed as a dependency of cvxpy, expects numpy to be already installed.\n[scs issue 85](https://github.com/cvxgrp/scs/issues/85) says it is fixed; however, this doesn\'t seem to be reflected in its pip package yet.\nAlso, cvxpy doesn\'t work with numpy versions less than 1.16.\nAs a workaround, install numpy separately first and then install this package, i.e.:\n```sh\n$ pip install \'numpy>=1.16\'\n$ pip install statistical-clear-sky\n```\n\nIf you are using Anaconda, the problem described above doesn\'t occur since numpy is already installed, and during statistical-clear-sky installation numpy is upgraded above 1.16:\n\n```sh\n$ conda install -c slacgismo statistical-clear-sky\n```\n\n#### Solvers\n\nThe default convex solver included with cvxpy is ECOS, which is open source. However, this solver tends to fail on problems with more than 1000 variables, so it does not work for this algorithm.\n\nSo, the default behavior of the code is to use the commercial Mosek solver. 
Thus, we encourage you to install it separately as below and obtain the license on your own.\n\n* [mosek](https://www.mosek.com/resources/getting-started/) - For using the MOSEK solver.\n\n If you are using pip:\n ```sh\n $ pip install -f https://download.mosek.com/stable/wheel/index.html Mosek\n ```\n\n If you are using Anaconda:\n ```sh\n $ conda install -c mosek mosek==8.1.43\n ```\n\nAcademic licenses are available for free here: [https://www.mosek.com/products/academic-licenses/](https://www.mosek.com/products/academic-licenses/)\n\n## Usage\n\n### As a part of Python code or inside Jupyter notebook\n\n#### Example 1: Simplest example with the fewest number of input parameters.\n\nUsing the default solver (open source solver: ECOS):\n\n```python\nimport numpy as np\nfrom statistical_clear_sky.algorithm.iterative_fitting import IterativeFitting\n\n# Usually read from a CSV file or a database with more data,\n# covering 1 day (column) and a few years (row):\npower_signals_d = np.array([[0.0, 0.0, 0.0, 0.0],\n [1.33389997, 1.40310001, 0.67150003, 0.77249998],\n [1.42349994, 1.51800001, 1.43809998, 1.20449996],\n [1.52020001, 1.45150006, 1.84809995, 0.99949998]])\n\niterative_fitting = IterativeFitting(power_signals_d)\n\niterative_fitting.execute()\n\nclear_sky_signals = iterative_fitting.clear_sky_signals()\ndegradation_rate = iterative_fitting.degradation_rate()\n```\n\n#### Example 2: Estimating clear sky signals without degradation.\n\nYou can estimate clear sky signals based on the assumption that there is no year-to-year degradation.\nIn this case, you can set the is_degradation_calculated keyword argument to False in the execute method.\nBy default, it\'s set to True.\n\n```python\nimport numpy as np\nfrom statistical_clear_sky.algorithm.iterative_fitting import IterativeFitting\n\n# Usually read from a CSV file or a database with more data,\n# covering 1 day (column) and a few years (row):\npower_signals_d = np.array([[0.0, 0.0, 0.0, 0.0],\n [1.33389997, 1.40310001, 0.67150003, 0.77249998],\n [1.42349994, 1.51800001, 1.43809998, 1.20449996],\n [1.52020001, 1.45150006, 1.84809995, 0.99949998]])\n\niterative_fitting = IterativeFitting(power_signals_d)\n\niterative_fitting.execute(is_degradation_calculated=False)\n\nclear_sky_signals = iterative_fitting.clear_sky_signals()\n```\n\n#### Example 3: Using a different solver.\n\nThe default solver ECOS is not stable with large sets of input data.\nThe following example shows how to select the Mosek solver by passing the solver_type keyword argument to the constructor.\n\n```python\nimport numpy as np\nfrom statistical_clear_sky.algorithm.iterative_fitting import IterativeFitting\n\n# Usually read from a CSV file or a database with more data,\n# covering 1 day (column) and a few years (row):\npower_signals_d = np.array([[0.0, 0.0, 0.0, 0.0],\n [1.33389997, 1.40310001, 0.67150003, 0.77249998],\n [1.42349994, 1.51800001, 1.43809998, 1.20449996],\n [1.52020001, 1.45150006, 1.84809995, 0.99949998]])\n\niterative_fitting = IterativeFitting(power_signals_d,\n solver_type=\'MOSEK\')\n\niterative_fitting.execute()\n\nclear_sky_signals = iterative_fitting.clear_sky_signals()\ndegradation_rate = iterative_fitting.degradation_rate()\n```\n\n#### Example 4: Setting rank for Generalized Low Rank Modeling.\n\nBy default, the rank of the low rank matrices is 6.\nYou can change it by specifying the rank_k keyword argument in the constructor.\n\n```python\nimport numpy as np\nfrom statistical_clear_sky.algorithm.iterative_fitting import IterativeFitting\n\n# 
Usually read from a CSV file or a database with more data,\n# covering 1 day (column) and a few years (row):\npower_signals_d = np.array([[0.0, 0.0, 0.0, 0.0],\n [1.33389997, 1.40310001, 0.67150003, 0.77249998],\n [1.42349994, 1.51800001, 1.43809998, 1.20449996],\n [1.52020001, 1.45150006, 1.84809995, 0.99949998]])\n\niterative_fitting = IterativeFitting(power_signals_d, rank_k=6)\n\niterative_fitting.execute()\n\n# Get the resulting left low rank matrix for evaluation.\nleft_low_rank_matrix = iterative_fitting.left_low_rank_matrix()\n# The above can also be obtained as l_cs_value:\nl_cs_value = iterative_fitting.l_cs_value\n\n# Get the resulting right low rank matrix for evaluation.\nright_low_rank_matrix = iterative_fitting.right_low_rank_matrix()\n# The above can also be obtained as r_cs_value:\nr_cs_value = iterative_fitting.r_cs_value\n\nclear_sky_signals = iterative_fitting.clear_sky_signals()\n\ndegradation_rate = iterative_fitting.degradation_rate()\n# The above can also be obtained as beta_value:\nbeta_value = iterative_fitting.beta_value\n```\n\n#### Example 5: Setting different hyper-parameters for the minimization of the objective function of Generalized Low Rank Modeling.\n\nThere are three hyper-parameters in the objective function of Generalized Low Rank Modeling, i.e. mu_l, mu_r, and tau.\nBy default, mu_l is set to 1.0, mu_r is set to 20.0, and tau is set to 0.8.\nYou can change them by specifying the mu_l, mu_r, and tau keyword arguments in the execute method.\n\n```python\nimport numpy as np\nfrom statistical_clear_sky.algorithm.iterative_fitting import IterativeFitting\n\n# Usually read from a CSV file or a database with more data,\n# covering 1 day (column) and a few years (row):\npower_signals_d = np.array([[0.0, 0.0, 0.0, 0.0],\n [1.33389997, 1.40310001, 0.67150003, 0.77249998],\n [1.42349994, 1.51800001, 1.43809998, 1.20449996],\n [1.52020001, 1.45150006, 1.84809995, 0.99949998]])\n\niterative_fitting = IterativeFitting(power_signals_d)\n\niterative_fitting.execute(mu_l=5e2, mu_r=1e3, tau=0.9)\n\nclear_sky_signals = iterative_fitting.clear_sky_signals()\ndegradation_rate = iterative_fitting.degradation_rate()\n```\n\n#### Example 6: Setting different control parameters for the minimization of the objective function of Generalized Low Rank Modeling.\n\nThere are two control parameters for the minimization of the objective function of Generalized Low Rank Modeling, i.e. 
the exit criterion (exit_criterion_epsilon) and the maximum number of iterations (max_iteration).\nBy default, exit_criterion_epsilon is set to 1e-3 and max_iteration is set to 100.\nYou can change them by specifying the exit_criterion_epsilon and max_iteration keyword arguments in the execute method.\n\n```python\nimport numpy as np\nfrom statistical_clear_sky.algorithm.iterative_fitting import IterativeFitting\n\n# Usually read from a CSV file or a database with more data,\n# covering 1 day (column) and a few years (row):\npower_signals_d = np.array([[0.0, 0.0, 0.0, 0.0],\n [1.33389997, 1.40310001, 0.67150003, 0.77249998],\n [1.42349994, 1.51800001, 1.43809998, 1.20449996],\n [1.52020001, 1.45150006, 1.84809995, 0.99949998]])\n\niterative_fitting = IterativeFitting(power_signals_d)\n\niterative_fitting.execute(exit_criterion_epsilon=1e-6, max_iteration=10)\n\nclear_sky_signals = iterative_fitting.clear_sky_signals()\ndegradation_rate = iterative_fitting.degradation_rate()\n```\n\n#### Example 7: Setting limits on the degradation rate.\n\nYou can specify the maximum and minimum degradation by setting the max_degradation and min_degradation keyword arguments in the execute method.\nBy default, these limits are not used.\n\n```python\nimport numpy as np\nfrom statistical_clear_sky.algorithm.iterative_fitting import IterativeFitting\n\n# Usually read from a CSV file or a database with more data,\n# covering 1 day (column) and a few years (row):\npower_signals_d = np.array([[0.0, 0.0, 0.0, 0.0],\n [1.33389997, 1.40310001, 0.67150003, 0.77249998],\n [1.42349994, 1.51800001, 1.43809998, 1.20449996],\n [1.52020001, 1.45150006, 1.84809995, 0.99949998]])\n\niterative_fitting = IterativeFitting(power_signals_d)\n\niterative_fitting.execute(max_degradation=0.0, min_degradation=-0.5)\n\nclear_sky_signals = iterative_fitting.clear_sky_signals()\ndegradation_rate = iterative_fitting.degradation_rate()\n```\n\n## Jupyter notebook examples\n\nAlternatively, you can clone this repository with git and run the example code under the notebooks folder.\n\nThe simplest way to install the dependencies if you are using pip is:\n\n```sh\n$ pip install -r requirements.txt\n```\n\nAs mentioned in the ""Getting Started"" section above, as of February 11, 2019 this fails because the scs package, installed as a dependency of cvxpy, expects numpy to be already installed.\n[scs issue 85](https://github.com/cvxgrp/scs/issues/85) says it is fixed, but the fix does not yet seem to be reflected in the pip package.\nAlso, cvxpy does not work with numpy versions below 1.16.\nAs a workaround, install numpy separately first and then install the other packages using requirements.txt:\n```sh\n$ pip install \'numpy>=1.16\'\n$ pip install -r requirements.txt\n```\n\n## Running the tests\n\n### Unit tests (developer tests)\n\n1. Clone this project with git.\n\n2. 
In a terminal, in the project directory, run\n\n ```\n $ python -m unittest\n ```\n\n This runs all the tests under the tests folder.\n\nAll the tests are placed in the ""tests"" directory, directly under the project directory.\nThey use ""unittest"", which is part of the Python Standard Library.\nThere may be better unit testing frameworks, but we stick with the standard library in order to invite as many contributors as possible, from a variety of backgrounds.\n\n### Coding style tests\n\n[pylint](https://www.pylint.org/) is used to check that the coding style conforms to ""PEP 8 -- Style Guide for Python Code"".\n\nNote: We are open to using [LGTM](https://lgtm.com/).\nHowever, since we decided to use the code coverage tool [codecov](https://codecov.io/) based on a comment by the project\'s Technical Advisory Council, we decided not to adopt another tool that also does code coverage.\nWe are also open to using other coding style tools.\n\nExample of using pylint:\n\nIn a terminal, in the project directory, run\n```\n$ pylint statistical_clear_sky\n```\n\n## Contributing\n\nPlease read [CONTRIBUTING.md](https://github.com/bmeyers/StatisticalClearSky/contributing) for details on our code of conduct and the process for submitting pull requests to us.\n\n## Versioning\n\nWe use [Semantic Versioning](http://semver.org/) for versioning. For the versions available, see the [tags on this repository](https://github.com/bmeyers/StatisticalClearSky/tags).\n\n## Authors\n\n* **Bennet Meyers** - *Initial work and main research work* - [Bennet Meyers GitHub](https://github.com/bmeyers)\n\n* **Tadatoshi Takahashi** - *Refactoring, packaging, and research support work* - [Tadatoshi Takahashi GitHub](https://github.com/tadatoshi)\n\nSee also the list of [contributors](https://github.com/bmeyers/StatisticalClearSky/contributors) who participated in this project.\n\n## License\n\nThis project is licensed under the BSD 2-Clause License - see the [LICENSE](LICENSE) file for details.\n\n## References\n\n[1] B. Meyers, M. Tabone, and E. C. 
Kara, ""Statistical Clear Sky Fitting Algorithm,"" IEEE Photovoltaic Specialists Conference, 2018.\n\n## Acknowledgments\n\n* The authors would like to thank Professor Stephen Boyd from Stanford University for his input and guidance and Chris Deline, Mike Deceglie, and Dirk Jordan from NREL for collaboration.\n'",",https://zenodo.org/badge/latestdoi/117483201,https://arxiv.org/abs/1907.08279","2018/01/15, 01:38:46",2109,BSD-2-Clause,0,530,"2022/06/23, 21:53:13",5,39,41,0,488,2,0.1,0.4156626506024096,"2022/06/23, 22:03:19",v0.4.6,0,6,false,,false,true,"MichaelHopwood/ForwardForwardOneclass,slacgismo/solar-data-tools,slacgismo/solar-data-pipeline",,https://github.com/slacgismo,https://gismo.slac.stanford.edu/,"SLAC National Accelerator Laboratory, Menlo Park, CA 94025",,,https://avatars.githubusercontent.com/u/19895500?v=4,,, Photovoltaic_Fault_Detector,Model-definition is a deep learning application for fault detection in photovoltaic plants.,RentadroneCL,https://github.com/RentadroneCL/Photovoltaic_Fault_Detector.git,github,"yolo3,detector-model,model-detection,detection-boxes,fault-detection,solar-energy,photovoltaic-panels,deep-learning,keras,object-detection,tensorflow",Photovoltaics and Solar Energy,"2023/01/17, 13:43:26",32,0,14,true,Jupyter Notebook,simplemap.io (former Rentadrone.cl),RentadroneCL,"Jupyter Notebook,HTML,Python",https://simplemap.io,"b'\n# Photovoltaic Fault Detector\n\n![GitHub](https://img.shields.io/github/license/RentadroneCL/Photovoltaic_Fault_Detector)\n[![Contributor Covenant](https://img.shields.io/badge/Contributor%20Covenant-v2.0%20adopted-ff69b4.svg)](CODE_OF_CONDUCT.md)\n[![Open Source Helpers](https://www.codetriage.com/rentadronecl/photovoltaic_fault_detector/badges/users.svg)](https://www.codetriage.com/rentadronecl/photovoltaic_fault_detector)\n[![Coverage Status](https://coveralls.io/repos/github/RentadroneCL/Photovoltaic_Fault_Detector/badge.svg)](https://coveralls.io/github/RentadroneCL/Photovoltaic_Fault_Detector)\n\n[SimpleMap.io](https://simplemap.io/)\n\n## Forum\n\nThis project is part of the [UNICEF Innovation Fund Discourse community](https://unicef-if.discourse.group/c/projects/rentadrone/10). You can post comments or questions about each category of [SimpleMap.io Open-Source Initiative](https://rentadronecl.github.io) algorithms. We encourage users to participate in the forum and to engage with fellow users.\n\n## Summary\n\nModel-definition is a deep learning application for fault detection in photovoltaic plants. In this repository you will find trained detection models that point out where the panel faults are by using radiometric thermal infrared pictures. In [Web-API](https://github.com/RentadroneCL/Web-API) contains a performant, production-ready reference implementation of this repository.\n\n![Data Flow](MLDataFlow.svg)\n\n## To do list:\n\n- [x] Import model detection (SSD & YOLO3)\n- [x] Example use Trained Model\n- [x] Train and Evaluate Model with own data\n- [x] Model Panel Detection (SSD7)\n- [x] Model Panel Detection (YOLO3)\n- [x] Model Soiling Fault Detection (YOLO3)\n- [x] Model Diode Fault Detection (YOLO3)\n- [x] Model Other Fault Detection\n- [x] Model Fault Panel Disconnect\n\n## Requirements\n\n* Python 3.x\n* Numpy\n* TensorFlow 2.x\n* Keras 2.x (in TensorFlow)\n* OpenCV\n* Beautiful Soup 4.x\n\n## Quickstart\nIn the root project execute the following command to install all dependencies project\n\n```\npip install -r requirements.txt\n\n```\nYou need install Jupyter notebook to see the code example. 
You can find the installation documentation for the [Jupyter platform on ReadTheDocs](https://jupyter.readthedocs.io/en/latest/install.html) or on the GitHub page [here](https://github.com/jupyter/notebook).\n\nFor a local installation, make sure you have pip installed and run:\n```\npip install notebook\n```\n\n## Example of using a trained model\n[\'Example_Prediction\'](Code_Example/Example_prediction.ipynb) is an example of how to use an already trained model; it can be modified to change the model in use and the image in which you want to detect faults.\n\n[\'Example Prediction AllInOne\'](Code_Example/Example%20Detection%20AllInOne.ipynb) is an example of how to run all trained models; you can use this code to predict on a folder of images and produce output images with detection boxes.\n\n[\'Example_Prediction_Orthophoto\'](Code_Example/Example_prediction_Ortofoto.ipynb) is an example of how to run all trained models on an orthophoto and produce an output image with detection boxes.\n\n\n## Developers\nHelp improve our software! We welcome contributions from everyone, whether to add new features, improve speed, fix existing bugs or add support. [Check our code of conduct](CODE_OF_CONDUCT.md), [the contributing guidelines](CONTRIBUTING.md) and how decisions are made.\n\nAny code contributions are welcome as long as they are discussed in [Github Issues](https://github.com/RentadroneCL/model-definition/issues) with maintainers. Be aware that if you decide to change something and submit a PR on your own, it may not be accepted.\n\n#### Creating an issue\nYou can open a new issue based on code from an existing pull request. For more information, see [the template for filing issues](https://github.com/RentadroneCL/model-definition/blob/master/.github/ISSUE_TEMPLATE/feature_request.md)\n\n\n# Model Detection\nThe models used for detection are SSD ([SSD: Single Shot MultiBox Detector](https://arxiv.org/abs/1512.02325)) and YOLOv3 ([YOLOv3: An Incremental Improvement](https://arxiv.org/abs/1804.02767)); they are imported from the following repositories:\n* [SSD_Keras](https://github.com/pierluigiferrari/ssd_keras#how-to-fine-tune-one-of-the-trained-models-on-your-own-dataset)\n* [YOLOv3_Keras](https://github.com/experiencor/keras-yolo3)\n\nGrab the pretrained weights of SSD and YOLO3 from [Drive_Weights](https://drive.google.com/drive/folders/1LSc9FkAwJrAAT8pAUWz8aax_biFAMMXS?usp=sharing)\n\n| Model | Pretrained Weights |\n|:-----------:|:-------------------:|\n| SSD7/SSD300 | [Weight VGG16](https://drive.google.com/open?id=1VHTx28tGI94yFqwT_WHp-xkx_8Hh_A31)|\n| YOLO3 | [Weight Full Yolo3](https://drive.google.com/open?id=1cnCQHl-TnOrwb-leug1I0O9vMBaSwJLt)|\n\n\n## Type of Data\nThe images used for the design of this model were extracted by aerial analysis, specifically FLIR aerial radiometric thermal infrared pictures taken by UAV (R-JPEG format), which were converted into .jpg images for training these detection models.\nExample FLIR image:\n\n![FLIR](images/example_flir.jpg)\n\nSame image in .jpg format:\n\n![JPG](images/example.jpg)\n\n## Training\n\n### 1. 
Data preparation\n\nSee the folders Train&Test_A/ and Train&Test_S/ for examples of panel annotations and soiling fault annotations.\n\nOrganize the dataset into 4 folders:\n\n+ train_image_folder <= the folder that contains the train images.\n\n+ train_annot_folder <= the folder that contains the train annotations in VOC format.\n\n+ valid_image_folder <= the folder that contains the validation images.\n\n+ valid_annot_folder <= the folder that contains the validation annotations in VOC format.\n\nThere is a one-to-one correspondence by file name between images and annotations.\nTo create your own dataset, use LabelImg from:\n[https://github.com/tzutalin/labelImg](https://github.com/tzutalin/labelImg)\n\n### 2. Edit the configuration file\nThe configuration file for YOLO3 is a JSON file, which looks like this (soiling fault example):\n\n```json\n{\n ""model"" : {\n ""min_input_size"": 400,\n ""max_input_size"": 400,\n ""anchors"": [5,7, 10,14, 15, 15, 26,32, 45,119, 54,18, 94,59, 109,183, 200,21],\n ""labels"": [""1""],\n\t""backend"": \t\t""full_yolo_backend.h5""\n },\n\n ""train"": {\n ""train_image_folder"": ""../Train&Test_S/Train/images/"",\n ""train_annot_folder"": ""../Train&Test_S/Train/anns/"",\n\t""cache_name"": ""../Experimento_fault_1/Resultados_yolo3/full_yolo/experimento_fault_1_gpu.pkl"",\n\n ""train_times"": 1,\n\n ""batch_size"": 2,\n ""learning_rate"": 1e-4,\n ""nb_epochs"": 200,\n ""warmup_epochs"": 15,\n ""ignore_thresh"": 0.5,\n ""gpus"": ""0,1"",\n\n\t""grid_scales"": [1,1,1],\n ""obj_scale"": 5,\n ""noobj_scale"": 1,\n ""xywh_scale"": 1,\n ""class_scale"": 1,\n\n\t""tensorboard_dir"": ""log_experimento_fault_gpu"",\n\t""saved_weights_name"": ""../Experimento_fault_1/Resultados_yolo3/full_yolo/experimento_yolo3_full_fault.h5"",\n ""debug"": true\n },\n\n ""valid"": {\n ""valid_image_folder"": ""../Train&Test_S/Test/images/"",\n ""valid_annot_folder"": ""../Train&Test_S/Test/anns/"",\n ""cache_name"": ""../Experimento_fault_1/Resultados_yolo3/full_yolo/val_fault_1.pkl"",\n\n ""valid_times"": 1\n },\n ""test"": {\n ""test_image_folder"": ""../Train&Test_S/Test/images/"",\n ""test_annot_folder"": ""../Train&Test_S/Test/anns/"",\n ""cache_name"": ""../Experimento_fault_1/Resultados_yolo3/full_yolo/test_fault_1.pkl"",\n\n ""test_times"": 1\n }\n}\n```\nThe configuration file for SSD300 is a JSON file, which looks like this (soiling fault example), plus a .txt file with the image names (train.txt):\n```json\n{\n ""model"" : {\n ""backend"": ""ssd300"",\n ""input"": 400,\n ""labels"": [""1""]\n },\n\n ""train"": {\n ""train_image_folder"": ""Train&Test_S/Train/images"",\n ""train_annot_folder"": ""Train&Test_S/Train/anns"",\n ""train_image_set_filename"": ""Train&Test_S/Train/train.txt"",\n\n ""train_times"": 1,\n ""batch_size"": 12,\n ""learning_rate"": 1e-4,\n ""warmup_epochs"": 3,\n ""nb_epochs"": 100,\n\t ""saved_weights_name"": ""Result_ssd300_fault_1/experimento_ssd300_fault_1.h5"",\n ""debug"": true\n },\n ""valid"": {\n ""valid_image_folder"": ""../Train&Test_D/Test/images/"",\n ""valid_annot_folder"": ""../Train&Test_D/Test/anns/"",\n ""valid_image_set_filename"": ""../Train&Test_D/Test/test.txt""\n },\n\n""test"": {\n ""test_image_folder"": ""Train&Test_S/Test/images"",\n ""test_annot_folder"": ""Train&Test_S/Test/anns"",\n ""test_image_set_filename"": ""Train&Test_S/Test/test.txt""\n }\n}\n```\n\n
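Before launching a training run, it can help to verify the one-to-one image/annotation correspondence required in step 1. A minimal sketch (this helper is illustrative and not part of the repository; it assumes .jpg images and VOC .xml annotations, with folder paths taken from the example YOLO3 configuration above):

```python
import os
from glob import glob

# Folders from the example YOLO3 config above.
image_dir = "../Train&Test_S/Train/images/"
annot_dir = "../Train&Test_S/Train/anns/"

# Collect the base file names (stems) on each side.
images = {os.path.splitext(os.path.basename(p))[0]
          for p in glob(os.path.join(image_dir, "*.jpg"))}
annots = {os.path.splitext(os.path.basename(p))[0]
          for p in glob(os.path.join(annot_dir, "*.xml"))}

# Any name printed here breaks the one-to-one correspondence.
print("images without annotations:", sorted(images - annots))
print("annotations without images:", sorted(annots - images))
```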
### 3. Start the training process\n\n`python train_ssd.py -c config.json -o /path/to/result`\n\nor\n`python train_yolo.py -c config.json -o /path/to/result`\n\nBy the end of this process, the code will write the weights of the best model to the file best_weights.h5 (or whatever name is specified in the ""saved_weights_name"" setting in the config.json file). The training process stops when the loss on the validation set has not improved for 20 consecutive epochs.\n\n### 4. Perform detection using trained weights on an image or a set of images\n\n`python predict_ssd.py -c config.json -i /path/to/image/or/video -o /path/output/result`\nor\n`python predict_yolo.py -c config.json -i /path/to/image/or/video -o /path/output/result`\n\nThis carries out detection on the image and writes the image with detected bounding boxes to the same folder.\n\n## Evaluation\nThe evaluation is integrated into the training process. If you want to run an independent evaluation, go to the folder ssd_keras-master or keras-yolo3-master and use the following command:\n\n`python evaluate.py -c config.json` \nExample:\n`python keras-yolo3-master/evaluate.py -c config_full_yolo_fault_1_infer.json` \n\nThis computes the mAP performance of the model defined in `saved_weights_name` on the validation dataset defined in `valid_image_folder` and `valid_annot_folder`.\n\n| Model | mAP | Config |\n|:--------------:|:------:|:------------------:|\n| YOLO3 Soiling | 0.7302 | [config](config_full_yolo_fault_1_infer.json) |\n| YOLO3 Diode | 0.6127 | [config](config_full_yolo_fault_4_infer.json) |\n| YOLO3 Affected Cell | 0.7230 | [config](config_full_yolo_fault_2_infer.json) |\n\n\n# Weights of Trained Models\nAll of the weights of these trained models can be grabbed from [Drive_Weights](https://drive.google.com/drive/folders/1LSc9FkAwJrAAT8pAUWz8aax_biFAMMXS?usp=sharing)\n\n| Model | Weights Trained | Config |\n|:--------------:|:------------------:|:--------:|\n| SSD7 Panel | [weight](https://drive.google.com/open?id=1qNjfAp9sW1VJh8ewnb3NKuafhZockTqV) | [config](Result_ssd7_panel/config_7_panel.json) |\n| SSD300 Soiling | [weight](https://drive.google.com/open?id=1IiOyYW8yPAh4IALbM_ZVqRhLdxV-ZSPw) | [config](config_300_fault_1.json) |\n| YOLO3 Panel | [weight](https://drive.google.com/open?id=14zgtgDJv3KTvhRC-VOz6sqsGPC_bdrL1) | [config](config_full_yolo_panel_infer.json) |\n| YOLO3 Soiling | [weight](https://drive.google.com/open?id=1YLgkn1wL5xAGOpwd2gzdfsJVGYPzszn-) | [config](config_full_yolo_fault_1_infer.json) |\n| YOLO3 Diode | [weight](https://drive.google.com/open?id=1VUtrK9JVTbzBw5dX7_dgLTMToFHbAJl1) | [config](config_full_yolo_fault_4_infer.json) |\n| YOLO3 Affected Cell | [weight](https://drive.google.com/open?id=1ngyCzw7xF0N5oZnF29EIS5LOl1PFkRRM) | [config](config_full_yolo_fault_2_infer.json) |\n\nThe images used are specified in [Table images](Training_Images.xlsx).\nYou can see some examples in [Summary of results](README_Result.md).\n\n# Contributing\n\nContributions are welcome and will be fully credited. 
We accept contributions via Pull Requests on GitHub.\n\n## Pull Request Checklist\n\nBefore sending your pull requests, make sure you have followed this checklist.\n\n- Read the [contributing guidelines](CONTRIBUTING.md).\n- Read the [Code of Conduct](CODE_OF_CONDUCT.md).\n- Check that your changes are consistent with the [guidelines](https://github.com/RentadroneCL/model-definition/blob/master/CONTRIBUTING.md#general-guidelines-and-philosophy-for-contribution).\n- Check that your changes are consistent with the [Coding Style](https://github.com/RentadroneCL/model-definition/blob/master/CONTRIBUTING.md#c-coding-style).\n\n\n'",",https://arxiv.org/abs/1512.02325,https://arxiv.org/abs/1804.02767","2020/01/16, 13:50:45",1378,GPL-3.0,4,152,"2023/04/11, 14:42:56",6,27,32,6,197,2,0.0,0.19827586206896552,,,2,4,false,,true,true,,,https://github.com/RentadroneCL,https://simplemap.io,"Santiago, Chile",,,https://avatars.githubusercontent.com/u/32961692?v=4,,, nasapower,"Aims to make it quick and easy to automate downloading NASA-POWER global meteorology, surface solar energy and climatology data in your R session as a tidy data frame tibble object for analysis and use in modeling or other purposes.",ropensci,https://github.com/ropensci/nasapower.git,github,"agroclimatology,weather,r,nasa-power,nasa,agroclimatology-data,weather-variables,weather-data,earth-science,rstats,data-access,r-package",Photovoltaics and Solar Energy,"2023/10/17, 23:39:09",92,0,11,true,R,rOpenSci,ropensci,"R,TeX",https://docs.ropensci.org/nasapower,"b'---\noutput: github_document\n---\n\n# {nasapower}: NASA POWER API Client \n\n\n\n[![tic](https://github.com/ropensci/nasapower/workflows/tic/badge.svg?branch=main)](https://github.com/ropensci/nasapower/actions) \n[![codecov](https://codecov.io/gh/ropensci/nasapower/branch/main/graph/badge.svg?token=Kq9aea0TQN)](https://app.codecov.io/gh/ropensci/nasapower) \n[![DOI](https://zenodo.org/badge/109224461.svg)](https://zenodo.org/badge/latestdoi/109224461) \n[![Project Status: Active \xe2\x80\x93 The project has reached a stable, usable state and is being actively developed.](https://www.repostatus.org/badges/latest/active.svg)](https://www.repostatus.org/#active) \n[![peer-review](https://badges.ropensci.org/155_status.svg)](https://github.com/ropensci/software-review/issues/155) \n[![DOI](http://joss.theoj.org/papers/10.21105/joss.01035/status.svg)](https://doi.org/10.21105/joss.01035)\n[![CRAN status](https://www.r-pkg.org/badges/version/nasapower)](https://CRAN.R-project.org/package=nasapower)\n\n\n## POWER data vs {nasapower}\n\nPlease note that {nasapower} is **NOT** the source of NASA POWER data.\nIt is only an API client that allows easy access to the data.\n{nasapower} does not redistribute the data or provide it in any way, *we encourage users to follow the requests of the POWER Project Team and properly acknowledge them for the data rather than citing this package* (unless you have actually used it in your work).\n\n >*When POWER data products are used in a publication, we request the following acknowledgement be included:\n ""These data were obtained from the NASA Langley Research Center POWER Project funded through the NASA Earth\n Science Directorate Applied Science Program.""*\n\nThe previous statement that properly cites the POWER data is different from the citation for {nasapower}.\nTo cite this R package, {nasapower}, please use the output from `citation(package = ""nasapower"")` and cite both the package manual, which includes the version you used, and the paper, which refers to the peer-review of the 
software package as the functionality of the package has changed and will likely change to match the API in the future as necessary.\n\n## About {nasapower}\n\n{nasapower} aims to make it quick and easy to automate *downloading* of the [NASA-POWER](https://power.larc.nasa.gov) global meteorology, surface solar energy and climatology data in your R session as a tidy data frame `tibble` object for analysis and use in modelling or other purposes.\nPOWER (Prediction Of Worldwide Energy Resource) data are freely available for download with varying spatial resolutions dependent on the original data and with several temporal resolutions depending on the POWER parameter and community.\n\n**Note that the data are not static and may be replaced with improved data.**\nPlease see for detailed information in this regard.\n\n### Quick start\n\n{nasapower} can easily be installed using the following code.\n\n#### From CRAN\n\nThe stable version is available through CRAN.\n\n\n```r\ninstall.packages(""nasapower"")\n```\n\n#### From GitHub for the version in-development\n\nA development version is available through GitHub.\n\n\n```r\nif (!require(""remotes"")) {\n install.packages(""remotes"")\n}\n\nremotes::install_github(""ropensci/nasapower"")\n```\n\n### Example\n\nFetch daily \xe2\x80\x9cag\xe2\x80\x9d community temperature, relative humidity and precipitation for January 1, 1985 for Kingsthorpe, Queensland, Australia.\n\n\n```r\nlibrary(""nasapower"")\ndaily_ag <- get_power(community = ""ag"",\n lonlat = c(151.81, -27.48),\n pars = c(""RH2M"", ""T2M"", ""PRECTOTCORR""),\n dates = ""1985-01-01"",\n temporal_api = ""daily""\n )\ndaily_ag\n```\n\n```\n## NASA/POWER CERES/MERRA2 Native Resolution Daily Data \n## Dates (month/day/year): 01/01/1985 through 01/01/1985 \n## Location: Latitude -27.48 Longitude 151.81 \n## Elevation from MERRA-2: Average for 0.5 x 0.625 degree lat/lon region = 442.77 meters \n## The value for missing source data that cannot be computed or is outside of the sources availability range: NA \n## Parameter(s): \n## \n## Parameters: \n## RH2M MERRA-2 Relative Humidity at 2 Meters (%) ;\n## T2M MERRA-2 Temperature at 2 Meters (C) ;\n## PRECTOTCORR MERRA-2 Precipitation Corrected (mm/day) \n## \n## # A tibble: 1 \xc3\x97 10\n## LON LAT YEAR MM DD DOY YYYYMMDD RH2M T2M PRECTOTCORR\n## \n## 1 152. 
-27.5 1985 1 1 1 1985-01-01 54.7 24.9 0.9\n```\n\n## Documentation\n\nMore documentation is available in the vignette in your R session, `vignette(""nasapower"")`, or online.\n\n## Meta\n\n- Please [report any issues or bugs](https://github.com/ropensci/nasapower/issues).\nPlease note that the {nasapower} project is released with a [Contributor Code of Conduct](https://github.com/ropensci/nasapower/blob/main/CODE_OF_CONDUCT.md).\nBy participating in the {nasapower} project you agree to abide by its terms.\n\n- License: MIT\n\n## References\n\n\n\n\n'",",https://zenodo.org/badge/latestdoi/109224461,https://doi.org/10.21105/joss.01035","2017/11/02, 06:08:53",2183,CUSTOM,96,2099,"2023/07/26, 22:54:07",0,6,77,6,90,0,0.5,0.0025759917568263235,"2023/08/20, 00:25:05",v4.0.11,0,4,false,,true,true,,,https://github.com/ropensci,https://ropensci.org/,"Berkeley, CA",,,https://avatars.githubusercontent.com/u/1200269?v=4,,, pvcompare,A model for comparing the benefits of different PV technologies in a specified local energy system in different energy supply scenarios.,greco-project,https://github.com/greco-project/pvcompare.git,github,,Photovoltaics and Solar Energy,"2021/07/28, 07:50:38",10,0,0,false,Python,GRECO project,greco-project,"Python,Fortran,Shell",,"b'|badge_docs| |badge_CI| |badge_coverage| |badge_zenodo|\n\nDeprecated: |badge_travis| \n\n.. |badge_docs| image:: https://readthedocs.org/projects/pvcompare/badge/?version=latest\n :target: https://pvcompare.readthedocs.io/en/latest/?badge=latest\n :alt: Documentation Status\n\n.. |badge_CI| image:: https://github.com/greco-project/pvcompare/actions/workflows/main.yml/badge.svg\n :target: https://github.com/greco-project/pvcompare/actions/workflows/main.yml\n :alt: Build status\n\n.. |badge_coverage| image:: https://coveralls.io/repos/github/greco-project/pvcompare/badge.svg?branch=dev\n :target: https://coveralls.io/github/greco-project/pvcompare?branch=dev\n :alt: Test coverage\n\n.. |badge_travis| image:: https://travis-ci.com/greco-project/pvcompare.svg?branch=dev\n :target: https://travis-ci.com/greco-project/pvcompare\n\n.. |badge_zenodo| image:: https://zenodo.org/badge/224614782.svg\n :target: https://zenodo.org/badge/latestdoi/224614782\n\n\npvcompare\n~~~~~~~~~\n\nIntroduction\n============\n\n*pvcompare* is a model that compares the benefits of different PV technologies in a specified energy system by running\nan energy system optimization. This model concentrates on the integration of PV technologies into local energy systems but could\neasily be enhanced to analyse other conversion technologies.\n\nThe functionalities include\n\n* calculation of an area potential for PV on rooftops and fa\xc3\xa7ades based on building parameters,\n* calculation of heat and electricity demand time series for a specific amount of people living in these buildings,\n* calculation of PV feed-in time series for a set of PV installations on rooftops and fa\xc3\xa7ades incl. 
different technologies,\n\n * all technologies in the database of `pvlib `_,\n * a specific concentrator-PV module (`CPV `_) and\n * a module of perovskite-silicon cells (`PeroSI `_),\n\n* calculation of temperature dependent COPs or respectively EERs for heat pumps and chillers,\n* download and formatting of `ERA5 weather data `_ (global reanalysis data set),\n* preparation of data and input files for the energy system optimization,\n* a sensitivity analysis for input parameters and\n* visualisations for the comparison of different technologies.\n\nThe model is being developed within the scope of the H2020 project `GRECO `_.\nThe energy system optimization is based on the `oemof-solph `_ python package,\nwhich *pvcompare* calls via the `Multi-Vector Simulator (MVS) `_, a\ntool for assessing and optimizing Local Energy Systems (LES).\n\nDocumentation\n=============\n\nThe full documentation can be found at `readthedocs `_.\n\nInstallation\n============\n\nTo install *pvcompare* follow these steps:\n\n- Clone *pvcompare* and navigate to the directory ``\\pvcompare`` containing the ``setup.py``:\n\n::\n\n git clone git@github.com:greco-project/pvcompare.git\n cd pvcompare\n\n- Install the package:\n\n::\n\n pip install -e .\n\n- For the optimization you need to install a solver. You can download the open source `cbc-solver `_ from https://ampl.com/dl/open/cbc/ . Please follow the installation `steps `_ in the oemof installation instructions. You also find information about other solvers there.\n\nExamples and basic usage\n========================\nThe basic usage of *pvcompare* is explained in the documentation in section `Basic usage of pvcompare `_.\nExamples are provided on github in the directory `examples/ `_.\n\nContributing\n============\n\nWe are warmly welcoming all who want to contribute to *pvcompare*.\nPlease read our `Contributing Guidelines `_.\nYou can also get in contact by writing an `issue on github `_.\n'",",https://zenodo.org/badge/latestdoi/224614782\n\n\npvcompare\n~~~~~~~~~\n\nIntroduction\n============\n\n*pvcompare*","2019/11/28, 09:16:52",1427,AGPL-3.0,0,2042,"2021/07/20, 14:27:30",32,168,302,0,827,4,1.8,0.5308775731310942,"2021/05/29, 15:29:06",v0.0.3,0,5,false,,false,true,,,https://github.com/greco-project,https://www.greco-project.eu/,,,,https://avatars.githubusercontent.com/u/50671643?v=4,,, SolTrace,A software tool developed at NREL to model concentrating solar power (CSP) systems and analyze their optical performance.,NREL,https://github.com/NREL/SolTrace.git,github,,Photovoltaics and Solar Energy,"2023/09/25, 17:19:34",36,0,10,true,C++,National Renewable Energy Laboratory,NREL,"C++,HTML,TeX,Python,JavaScript,CSS,CMake,Inno Setup,Batchfile,Makefile",https://www.nrel.gov/csp/soltrace.html,"b'# SolTrace\n\nThe SolTrace Open Source Project repository contains the source code, tools, and instructions to build a desktop version of the National Renewable Energy Laboratory\'s SolTrace. SolTrace is a software tool developed at NREL to model concentrating solar power (CSP) systems and analyze their optical performance. Although ideally suited for solar applications, the code can also be used to model and characterize many general optical systems. The creation of the code evolved out of a need to model more complex solar optical systems than could be modeled with existing tools. For more details about SolTrace\'s capabilities, see the [SolTrace website](https://www.nrel.gov/csp/soltrace.html). 
For details on integration with SAM, see the [SAM website](https://sam.nrel.gov).\n\nThe desktop version of SolTrace for Windows or Linux builds from the following open source projects:\n\n* [LK](https://github.com/nrel/lk) is a scripting language that is integrated into SAM and allows users to add functionality to the program.\n\n* [wxWidgets](https://www.wxwidgets.org/) is a cross-platform graphical user interface platform used for SAM\'s user interface, and for the development tools included with SSC (SDKtool) and LK (LKscript). The current version of SAM uses wxWidgets 3.1.0.\n\n* [WEX](https://github.com/nrel/wex) is a set of extensions to wxWidgets for custom user-interface elements used by SAM, and by LKscript and DView, which are integrated into SAM.\n\n* This repository, **SolTrace**, provides the user interface to assign values to inputs of the computational modules, run the modules in the correct order, and display calculation results. It also includes tools for editing LK scripts and viewing ray intersection and flux map data.\n\n## Quick Steps for Building SolTrace\n\nFor detailed build instructions see the [wiki](https://github.com/NREL/SolTrace/wiki), with specific instructions for:\n\n* [Windows](https://github.com/NREL/SolTrace/wiki/build-windows)\n* [OSX](https://github.com/NREL/SolTrace/wiki/build-osx)\n* [Linux](https://github.com/NREL/SolTrace/wiki/build-linux)\n\nThese are the general quick steps you need to follow to set up your computer for developing SolTrace:\n\n1. Set up your development tools:\n\n * Windows: Visual Studio 2019 Community or other editions available at [https://www.visualstudio.com/](https://www.visualstudio.com/).\n * Linux: g++ compiler available at [http://www.cprogramming.com/g++.html](http://www.cprogramming.com/g++.html) or as part of the Linux distribution.\n\n2. Download and install CMake 3.19 or higher from [https://cmake.org/download/](https://cmake.org/download/) with the ```Add CMake to the System Path for ...``` option selected.\n\n3. Download the wxWidgets 3.1.5 source code for your operating system from [https://www.wxwidgets.org/downloads/](https://www.wxwidgets.org/downloads/).\n\n4. Build wxWidgets.\n\n5. In Windows, create the WXMSW3 environment variable on your computer to point to the wxWidgets installation folder; on Linux, create the symbolic link `/usr/local/bin/wx-config-3` pointing to `/path/to/wxWidgets/bin/wx-config`.\n\n6. As you did for wxWidgets, for each of the following projects, clone (download) the repository and then (Windows only) create an environment variable pointing to the project folder.\n\n
| Project | Repository URL | Windows Environment Variable |
| ------- | -------------- | ---------------------------- |
| LK | https://github.com/NREL/lk | LKDIR |
| WEX | https://github.com/NREL/wex | WEXDIR |
\n\n7. Run CMake to create the project build files\n 1. Copy the file ```parent-dir-CMakeLists.txt``` into the parent directory also containing the ```soltrace/ lk/ wex/``` and ```wxwidgets-3.x.x/``` folders.\n \n 2. Rename this file to ```CMakeLists.txt``` before running cmake. You may need to temporarily rename any other file in this directory with the same name. \n \n E.g., the file should be at ```C:/stdev/CMakeLists.txt```\n\n 3. Create a directory in the main parent folder to store the build files. \n E.g., ```C:/stdev/build-soltrace/```\n \n 4. Open a shell or command window in the build folder from step 3.\n\n 5. Copy the following cmake command to the shell and run it. Replace the cmake target with a [supported generator](https://cmake.org/cmake/help/latest/manual/cmake-generators.7.html#manual:cmake-generators(7))\n \n ```> cmake -G ""Visual Studio 16 2019"" -DCMAKE_CONFIGURATION_TYPES=""Debug;Release"" -DCMAKE_SYSTEM_VERSION=10.0 -DSAM_SKIP_TOOLS=1 .. ```\n\n 6. Confirm that the project files were built. If running Visual Studio, you should see a ```soltrace_ui.sln``` file in the build-soltrace/ directory.\n \n 7. Build all files. The output is stored in the soltrace repository folder, e.g., ```C:/stdev/soltrace/app/deploy/soltrace.exe```. \n\n Note that output is NOT stored in the ```build-soltrace/``` directory!\n\n## Contributing\n\nIf you would like to report an issue with SolTrace or make a feature request, please let us know by adding a new issue on the [issues page](https://github.com/NREL/SolTrace/issues).\n\nIf you would like to submit code to fix an issue or add a feature, you can use GitHub to do so. Please see [Contributing](CONTRIBUTING.md) for instructions.\n\n## License\n\nSolTrace\'s open source code is copyrighted by the Alliance for Sustainable Energy and licensed under a [mixed MIT and GPLv3 license](LICENSE.md). It allows for-profit and not-for-profit organizations to develop and redistribute software based on SolTrace under terms of an MIT license and requires that research entities including national laboratories, colleges and universities, and non-profit organizations make the source code of any redistribution publicly available under terms of a GPLv3 license.\n\n## Citing SolTrace\n\nWe appreciate your use of SolTrace, and ask that you appropriately cite the software in exchange for its open-source publication. Please use one of the following references in documentation that you provide on your work. For general usage citations, the preferred option is:\n\n> Wendelin, T. (2003). ""SolTRACE: A New Optical Modeling Tool for Concentrating Solar Optics."" Proceedings of the ISEC 2003: International Solar Energy Conference, 15-18 March 2003, Kohala Coast, Hawaii. New York: American Society of Mechanical Engineers, pp. 253-260; NREL Report No. CP-550-32866.\n\nFor citations in work that involves substantial development or extension of the existing code, the preferred option is:\n\n> Wendelin, T., Wagner, M.J. (2018). ""SolTrace Open-Source Software Project: [github.com/NREL/SolTrace](https://github.com/NREL/SolTrace)"". National Renewable Energy Laboratory. 
Golden, Colorado.'",,"2017/06/29, 15:20:21",2309,CUSTOM,10,78,"2022/11/14, 21:08:08",22,18,25,3,344,4,0.2,0.17741935483870963,,,0,4,false,,false,true,,,https://github.com/NREL,http://www.nrel.gov,"Golden, CO",,,https://avatars.githubusercontent.com/u/1906800?v=4,,, CarrierCapture.jl,A set of codes to compute carrier capture and recombination rates in semiconducting compounds like solar cells.,WMD-group,https://github.com/WMD-group/CarrierCapture.jl.git,github,"defects,semiconductors,electronic-structure,materials-design,solar-cells",Photovoltaics and Solar Energy,"2023/03/03, 15:04:47",41,0,6,true,Jupyter Notebook,Materials Design Group,WMD-group,"Jupyter Notebook,Julia,Python,TeX",https://wmd-group.github.io/CarrierCapture.jl/dev/,"b'[![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT)\n[![made-with-julia](https://img.shields.io/badge/Made%20with-Julia-ff69bf.svg)](https://julialang.org)\n[![](https://img.shields.io/badge/docs-dev-blue.svg)](https://wmd-group.github.io/CarrierCapture.jl/dev/)\n[![CI](https://github.com/WMD-group/CarrierCapture.jl/actions/workflows/ci.yml/badge.svg)](https://github.com/WMD-group/CarrierCapture.jl/actions/workflows/ci.yml)\n[![DOI](https://zenodo.org/badge/130691083.svg)](https://zenodo.org/badge/latestdoi/130691083)\n[![DOI](https://joss.theoj.org/papers/10.21105/joss.02102/status.svg)](https://doi.org/10.21105/joss.02102)\n[![Julia](https://img.shields.io/badge/Julia-1.6%2B-blue)](https://julialang.org)\n\n
\n\nA set of codes to compute carrier capture and recombination rates in semiconducting compounds.\nThis topic has a rich history starting from the work by [Huang and Rhys](http://rspa.royalsocietypublishing.org/content/204/1078/406.short).\nOur implementation was inspired by the approach (and FORTRAN code) employed by [Alkauskas and coworkers](https://journals.aps.org/prb/abstract/10.1103/PhysRevB.90.075202), but has been adapted\nto also describe anharmonic potential energy surfaces.\n\n## Installation\n\nThe codes are written in [Julia](https://julialang.org), while the scripts and [Jupyter Notebooks](http://jupyter.org) also contain [Python](https://www.python.org) and use [pymatgen](http://pymatgen.org) and [pawpyseed](https://github.com/kylebystrom/pawpyseed) (tested on Scientific Linux 7 and Linux Mint 18), which are assumed to be installed.\nThe [Brooglie](https://github.com/RedPointyJackson/Brooglie) package is used to solve the time-independent Schr\xc3\xb6dinger equation.\n\nInstall the package with:\n\n```julia\njulia> using Pkg\n\njulia> Pkg.add(PackageSpec(url=""https://github.com/WMD-group/CarrierCapture.jl.git""))\n```\n\nTo run the unit tests for the package, use the `Pkg.test` function. \n\n```julia\njulia> Pkg.test(""CarrierCapture"")\n```\n\n## Development\n\nThe project is hosted on [Github](https://github.com/WMD-group/carriercapture).\nPlease use the [issue tracker](https://github.com/WMD-group/carriercapture/issues/) for feature requests, bug reports and more general questions.\nIf you would like to contribute, please do so via a pull request.\n\n## Usage\n\nA typical workflow will consist of several steps, implemented in a series of short programs, which may be run from the command line. Input for the calculations is provided in `input.yaml`.\n\n 0. Prepare a sequence of atomic structure models with displacements that interpolate between two defect configurations (e.g. a site vacancy in charge states q=0 and q=+1).\n Run single-point energy calculations on these structures, and extract the total energies. Scripts for preprocessing may be found in `script`.\n\n 1. Find a best fit for the energy calculations of the deformed structures (`potential`) to generate potential energy surfaces (PES).\n Solve the 1D Schr\xc3\xb6dinger equation for each PES to obtain their phonon (nuclear) wavefunctions.\n\n 2. Construct the configuration coordinate (`conf_coord`) to calculate the wavefunction overlap between each PES, \n which forms part of the temperature-dependent capture coefficient.\n\n![schematics](https://github.com/WMD-group/CarrierCapture.jl/blob/master/schematics/carrier_capture_sketch.png?raw=true ""schematics"")\n\nThe command-line interface (`GetPotential.jl` and `GetRate.jl`) is deprecated.\nUse the [Jupyter Notebook](http://jupyter.org) [examples](https://github.com/WMD-group/CarrierCapture.jl/blob/master/example/notebook/) as a template.\n\n## Examples\n\nThe following examples are provided to illustrate some of the applications of these codes. The input data has been generated from density functional theory (DFT) using [VASP](https://www.vasp.at), but the framework can easily be adapted to accept output from other electronic structure calculators. 
\n\n* [SnZn in Cu2ZnSnS4](./example/notebook/Harmonic%20(Sn_Zn).ipynb): Harmonic approximation\n\n* [DX-center in GaAs](./example/notebook/Anharmonic%20(DX%20center).ipynb): Anharmonic fitting\n\n* [Electron-phonon coupling](./example/notebook/e-ph.ipynb): Electron-phonon coupling matrix element\n\n## Theory\n\n> The electronic matrix element frequently causes feelings of discomfort (Stoneham, 1981)\n\nThe capture of electrons or holes by point defects in crystalline materials requires the consideration of a number of factors, including the coupling between electronic and vibrational degrees of freedom. Many theories and approximations have been developed to describe the reaction kinetics.\n\nThe capture coefficient between an initial and final state for this computational setup is given by (eq. 22 in [Alkauskas and coworkers](https://journals.aps.org/prb/abstract/10.1103/PhysRevB.90.075202)):\n\n
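The equation image from the original README is not reproduced here; schematically, in LaTeX form (a sketch based on the symbol definitions below and eq. 22 of Alkauskas and coworkers; *g* for the degeneracy factor and *w_m* for the thermal occupation of the initial vibrational levels are our labels, not the README's):

```latex
C = V \frac{2\pi}{\hbar}\, g\, W_{if}^{2}
      \sum_{m} w_{m} \sum_{n}
      \left| \langle \xi_{im} \,|\, \hat{Q} - Q_{0} \,|\, \xi_{fn} \rangle \right|^{2}
      \delta\!\left( \Delta E + m\hbar\Omega_{i} - n\hbar\Omega_{f} \right)
```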
\n\nHere, *V* is the volume of the supercell, *Wif* is the electron-phonon overlap, and *\xce\xbeim* and *\xce\xbefn* describe the wavefunctions of the *mth* and *nth* phonons in the initial *i* and final *f* states. The final delta-function term serves to conserve energy and in practice is replaced by a smearing Gaussian of finite width *\xcf\x83*.\n\n### User Warning\n\nThe values produced by this type of analysis procedure are sensitive to the quality of the input. \nWe expect that most input data will have been generated by DFT where the basis set, k-points, and ionic forces have been carefully converged.\nIn addition, the alignment of energy surfaces for defects in different charge states requires appropriate finite-size corrections (e.g. see [Freysoldt and coworkers](https://journals.aps.org/rmp/abstract/10.1103/RevModPhys.86.253)).\n\n### Extended Reading List\n\n#### Theory Development\n\n* [Henry and Lang, Nonradiative capture and recombination by multiphonon emission in GaAs and GaP (1977)](https://journals.aps.org/prb/pdf/10.1103/PhysRevB.15.989)\n*Seminal contribution that introduces many important concepts*\n\n* [Huang, Adiabatic approximation theory and static coupling theory of nonradiative transition (1981)](https://www.worldscientific.com/doi/epdf/10.1142/9789812793720_0009)\n*Context for the static approximation that we employ*\n\n* [Stoneham, Non-radiative transitions in semiconductors (1981)](http://iopscience.iop.org/article/10.1088/0034-4885/44/12/001/meta)\n*Review on theory and various models of recombination*\n\n* [Markvart, Determination of potential surfaces from multiphonon transition rates (1981)](http://iopscience.iop.org/article/10.1088/0022-3719/14/15/002)\n*Discussion and treatment of anharmonicity*\n\n* [Markvart, Semiclassical theory of non-radiative transitions (1981)](http://iopscience.iop.org/article/10.1088/0022-3719/14/29/006/meta)\n*Semiclassical treatment of matrix elements following Landau and Holstein*\n\n#### Applications of CarrierCapture\n\n* [Kavanagh et al, Impact of metastable defect structures on carrier recombination in solar cells (2022)](https://pubs.rsc.org/en/content/articlelanding/2022/fd/d2fd00043a)\n\n* [Kavanagh et al, Rapid recombination by cadmium vacancies in CdTe (2021)](https://doi.org/10.1021/acsenergylett.1c00380)\n\n* [Whalley et al, Giant Huang\xe2\x80\x93Rhys factor for electron capture by the iodine interstitial in perovskite solar cells (2021)](https://pubs.acs.org/doi/full/10.1021/jacs.1c03064)\n\n* [Kim and Walsh, Ab initio calculation of the detailed balance limit to the photovoltaic efficiency of single p-n junction kesterite solar cells (2021)](https://aip.scitation.org/doi/10.1063/5.0049143) \n\n* [Dahliah et al, High-throughput computational search for high carrier lifetime, defect-tolerant solar absorbers (2021)](https://pubs.rsc.org/en/content/articlelanding/2021/EE/D1EE00801C)\n\n* [Kim et al, Upper limit to the photovoltaic efficiency of imperfect crystals (2020)](https://dx.doi.org/10.1039/D0EE00291G) \n\n* [Kim et al, Anharmonic lattice relaxation during non-radiative carrier capture (2019)](https://journals.aps.org/prb/abstract/10.1103/PhysRevB.100.041202) \n\n* [Kim et al, Lone-pair effect on carrier capture in Cu2ZnSnS4 solar cells (2019)](https://pubs.rsc.org/en/content/articlehtml/2019/ta/c8ta10130b)\n\n* [Kim et al, Identification of killer defects in kesterite thin-film solar cells 
(2018)](https://pubs.acs.org/doi/abs/10.1021/acsenergylett.7b01313)\n'",",https://zenodo.org/badge/latestdoi/130691083,https://doi.org/10.21105/joss.02102,https://doi.org/10.1021/acsenergylett.1c00380","2018/04/23, 12:10:13",2011,MIT,18,298,"2021/05/06, 23:51:49",0,9,18,0,901,0,0.1111111111111111,0.6124031007751938,"2023/04/14, 13:09:00",v0.6,0,10,false,,false,false,,,https://github.com/WMD-group,https://wmd-group.github.io,London,,,https://avatars.githubusercontent.com/u/1716969?v=4,,, honeybee,"A Python library to create, run and visualize the results of daylight (RADIANCE) and energy analysis (EnergyPlus/OpenStudio).",ladybug-tools,https://github.com/ladybug-tools/honeybee.git,github,,Photovoltaics and Solar Energy,"2021/07/21, 11:22:06",90,0,4,false,Python,Ladybug Tools,ladybug-tools,"Python,Shell,Dockerfile",http://ladybug-tools.github.io/honeybee/docs,"b""![Honeybee](http://www.ladybug.tools/assets/img/honeybee.png)\n\n[![Build Status](https://travis-ci.org/ladybug-tools/honeybee.svg?branch=master)](https://travis-ci.org/ladybug-tools/honeybee)\n[![semantic-release](https://img.shields.io/badge/%20%20%F0%9F%93%A6%F0%9F%9A%80-semantic--release-e10079.svg)](https://github.com/semantic-release/semantic-release)\n[![Coverage Status](https://coveralls.io/repos/github/ladybug-tools/honeybee/badge.svg)](https://coveralls.io/github/ladybug-tools/honeybee)\n\n[![Python 2.7](https://img.shields.io/badge/python-2.7-green.svg)](https://www.python.org/downloads/release/python-270/) [![IronPython](https://img.shields.io/badge/ironpython-2.7-red.svg)](https://github.com/IronLanguages/ironpython2/releases/tag/ipy-2.7.8/)\n\n# honeybee\n\nHoneybee is a Python library to create, run and visualize the results of daylight ([RADIANCE](https://radiance-online.org//)) and energy analysis ([EnergyPlus](https://energyplus.net/)/[OpenStudio](https://www.openstudio.net/)). The current version supports only Radiance integration. For energy simulation you may use the [legacy honeybee for Grasshopper](https://github.com/mostaphaRoudsari/honeybee).\n\nThis repository includes the core library which is the base for Honeybee plugins. For plugin-specific questions and comments refer to [honeybee-grasshopper](https://github.com/ladybug-tools/honeybee-grasshopper) or [honeybee-dynamo](https://github.com/ladybug-tools/honeybee-dynamo) repositories.\n\nCheck [this repository](https://github.com/mostaphaRoudsari/honeybee) for the legacy honeybee plugin for Grasshopper.\n\n## Installation\n\n```\npip install lbt-honeybee==0.1.16\n```\n\n## Tentative road map\n- [x] Basic Radiance Integration.\n- [x] Support annual daylight simulation - daylight coefficient method [Nov 2016].\n- [x] Support three-phase daylight simulation [Dec 2016].\n- [x] Support five-phase daylight simulation [Aug 2017].\n- [x] Fix PEP 8 issues [Dec 2017].\n- [x] Code documentation [Dec 2017].\n- [ ] Provide cloud service support for daylight simulation [Under progress]\n- [x] Basic EnergyPlus integration [Nov 2019]\n- [ ] Support basic HVAC modeling.\n- [ ] Full OpenStudio integration.\n\n\n## [API Documentation](http://ladybug-tools.github.io/apidoc/honeybee)\n\n## Citing honeybee\n\nFor the daylighting library cite this presentation:\n\n*Sadeghipour Roudsari, Mostapha. Subramaniam, Sarith. 2016. Automating Radiance workflows with Python. The 15th Annual Radiance Workshop. Padua, Italy. 
Available at: https://www.radiance-online.org/community/workshops/2016-padua/presentations/213-SadeghipourSubramaniam-AutomatingWorkflows.pdf*\n\n## Examples\nHere is a Python example that shows how to put a grid-based analysis together. For more examples, check one of the plugin repositories.\n\n```python\nfrom honeybee_plus.room import Room\nfrom honeybee_plus.radiance.material.glass import Glass\nfrom honeybee_plus.radiance.sky.certainIlluminance import CertainIlluminanceLevel\nfrom honeybee_plus.radiance.recipe.pointintime.gridbased import GridBased\n\n# create a test room\nroom = Room(origin=(0, 0, 3.2), width=4.2, depth=6, height=3.2,\n rotation_angle=45)\n\n# add fenestration\n# add a window to the back wall\nroom.add_fenestration_surface(wall_name='back', width=2, height=2, sill_height=0.7)\n\n# add another window with custom material. This time to the right wall\nglass_60 = Glass.by_single_trans_value('tvis_0.6', 0.6)\nroom.add_fenestration_surface('right', 4, 1.5, 1.2, radiance_material=glass_60)\n\n# run a grid-based analysis for this room\n# generate the sky\nsky = CertainIlluminanceLevel(illuminance_value=2000)\n\n# generate grid of test points\nanalysis_grid = room.generate_test_points(grid_size=0.5, height=0.75)\n\n# put the recipe together\nrp = GridBased(sky=sky, analysis_grids=(analysis_grid,), simulation_type=0,\n hb_objects=(room,))\n\n# write simulation to folder\nbatch_file = rp.write(target_folder='.', project_name='room')\n\n# run the simulation\nrp.run(batch_file, debug=False)\n\n# results - in this case it will be an analysis grid\nresult = rp.results()[0]\n\n# print the values for each point\nfor value in result.combined_value_by_id():\n print('illuminance value: %d lux' % value[0])\n```\n""",,"2015/12/24, 23:43:38",2861,GPL-3.0,0,797,"2022/02/02, 08:08:41",38,282,453,0,630,2,0.0,0.4323899371069182,"2020/08/13, 22:16:10",v1.0.1,0,8,false,,true,true,,,https://github.com/ladybug-tools,ladybug.tools,Worldwide,,,https://avatars.githubusercontent.com/u/14942270?v=4,,, Open Solar Project,ESP32 Smart Solar Charger.,opensolarproject,https://github.com/opensolarproject/OSPController.git,github,,Photovoltaics and Solar Energy,"2023/10/17, 04:24:49",217,0,37,true,C++,Open Solar Project,opensolarproject,"C++,Python,C",https://github.com/opensolarproject/OSPController/wiki,"b'\n\n# OSP Controller \xe2\x98\x80\xef\xb8\x8f\xf0\x9f\x95\xb9 _now on [discord](https://discord.gg/GtR3JShfGu)_\n\n_DC -> DC -> DC_ Solar. With a single used solar panel, a few used batteries, and $40 in parts you can power your life, transportation and all. Add an ESP32 Arduino to a 95% efficient DC-DC buck converter controlled over serial and you get an internet-connected, privately hosted smart solar MPPT power system. [Parts list](https://github.com/opensolarproject/OSPController/wiki/Step-1-Parts-List). [Instructions](https://github.com/opensolarproject/OSPController/wiki). [About](https://github.com/opensolarproject/OSPController/wiki/About). Go build one! (And reach out! I\'m happy to help)\n
\n\n[![GitHub version](https://img.shields.io/github/release/opensolarproject/OSPController.svg?style=flat-square)](https://github.com/opensolarproject/OSPController/releases/latest)\n[![version since](https://img.shields.io/github/commits-since/opensolarproject/OSPController/latest.svg?style=flat-square&color=green)](https://github.com/opensolarproject/OSPController/commits)\n[![version date](https://img.shields.io/github/release-date/opensolarproject/OSPController.svg?style=flat-square)](https://github.com/opensolarproject/OSPController/commits)\n[![GitHub download](https://img.shields.io/github/downloads/opensolarproject/OSPController/total.svg?style=flat-square&color=green)](https://github.com/opensolarproject/OSPController/releases/latest)\n[![build](https://img.shields.io/travis/opensolarproject/OSPController.svg?style=flat-square)](https://travis-ci.com/github/opensolarproject/OSPController)\n\n[![Language Type](https://img.shields.io/github/languages/top/opensolarproject/OSPController?style=flat-square)](https://github.com/opensolarproject/OSPController/commits)\n[![GitHub stars](https://img.shields.io/github/stars/opensolarproject/OSPController.svg?style=flat-square&label=Star)](https://github.com/opensolarproject/OSPController/stargazers)\n[![GitHub forks](https://img.shields.io/github/forks/opensolarproject/OSPController.svg?style=flat-square&label=Fork)](https://github.com/opensolarproject/OSPController/network)\n[![Issues open](https://img.shields.io/github/issues/opensolarproject/OSPController?style=flat-square)](https://github.com/opensolarproject/OSPController/issues)\n[![Issues closed](https://img.shields.io/github/issues-closed/opensolarproject/OSPController?style=flat-square&color=green)](https://github.com/opensolarproject/OSPController/issues)\n[![Chat](https://img.shields.io/discord/720686061159841852.svg?style=flat-square&color=blueviolet)](https://discord.gg/GtR3JShfGu)\n\n| ![dashboard view](https://raw.githubusercontent.com/wiki/opensolarproject/OSPController/images/charts-grafana.png) |\n|:-------------------------:|\n| A dashboard view in Grafana (optional). More details & options [here](https://github.com/opensolarproject/OSPController/wiki/Step-4-Data-Visualization) |\n\n### This solar controller:\n- Costs less than $35 in [total parts](https://github.com/opensolarproject/OSPController/wiki/Step-1-Parts-List)\n- Works with 12 - 82VDC Solar Panels, _(enabling big and efficient strings of panels!)_\n- Works with 4.2 - 60VDC batteries. 
Directly charge your high-voltage eBike batteries!\n- Is open source, modify it as you wish!\n- Connects to your MQTT smart home\n- Lets you own your own data\n- Gives you [graphs and charts](https://github.com/opensolarproject/OSPController/wiki/Step-4:-Data-Visualization) about your system from anywhere\n\n### But really, head over [to the wiki](https://github.com/opensolarproject/OSPController/wiki) for:\n\n- [Background & About](https://github.com/opensolarproject/OSPController/wiki/About)\n- [Part 1: Parts](https://github.com/opensolarproject/OSPController/wiki/Step-1-Parts-List)\n- [Part 2: Hardware](https://github.com/opensolarproject/OSPController/wiki/Step-2-Hardware-Build)\n- [Part 3: Software](https://github.com/opensolarproject/OSPController/wiki/Step-3-Software-Setup)\n- [Part 4: Data](https://github.com/opensolarproject/OSPController/wiki/Step-4-Data-Visualization)\n- [Part 5: Wiring](https://github.com/opensolarproject/OSPController/wiki/Step-5-Wiring-Things)\n\n## Also join the [Discord Channel](https://discord.gg/GtR3JShfGu)\nIt\'s the discussion board to talk shop, get ideas, get help, triage issues, and share success! [discord.gg/MRQvKR](https://discord.gg/GtR3JShfGu)\n\n'",,"2019/08/09, 20:33:23",1537,GPL-3.0,3,92,"2023/10/17, 04:16:21",13,3,17,2,8,1,0.0,0.02197802197802201,"2020/06/26, 19:38:24",v2.0,0,3,false,,false,false,,,https://github.com/opensolarproject,,,,,https://avatars.githubusercontent.com/u/53953954?v=4,,, MPPT-Solar-Charger,Supporting documentation and software for the MPPT Solar Charger.,danjulio,https://github.com/danjulio/MPPT-Solar-Charger.git,github,,Photovoltaics and Solar Energy,"2023/05/09, 03:26:46",123,0,39,true,C,,,"C,Makefile,C++,Assembly",,"b'## makerPower\xe2\x84\xa2 MPPT Solar Charger\n\n![MPPT Solar Charger](hardware/pictures/35_00082_02.png)\n\n### LiFePO4 support (added 2/2023)\nFirmware version 2.0 adds support for 12V (4-cell) LiFePO4 batteries. Several charging parameters, shown below, are changed when a LiFePO4 battery type is selected.\n\n1. Float/Bulk initial charge state threshold set to 13.2V\n2. Bulk Voltage set to 14.4V\n3. Power On charge voltage set to 13.6V\n4. Temperature Compensation disabled\n5. Charge temperature range between 0\xc2\xb0C and 50\xc2\xb0C\n\nThe charger will default to the lead acid battery type. Add a jumper between the test pads shown below to configure the charger for a LiFePO4 battery.\n\n![LiFePO4 configuration](hardware/pictures/LiFePO4_jumper.png)\n\nInstructions are provided in the Firmware directory for upgrading existing boards.\n\n### Contents\nThis repository contains documentation and software for the makerPower MPPT Solar Charger board (design documented at [hackaday.io](https://hackaday.io/project/161351-solar-mppt-charger-for-247-iot-devices)). It can be found in my [tindie store](https://www.tindie.com/products/globoy/mppt-solar-charger-for-intelligent-devices/).\n\n1. firmware - Charger C source code\n2. hardware - Board documentation, schematic and connection diagrams for different uses\n3. arduino - Arduino library and examples (can be compiled with wiringPi for Raspberry Pi too)\n4. mppt_dashboard - Mac OS, Windows and Linux monitoring application that communicates with the charger via the mpptChgD daemon\n5. mpptChgD - Linux Daemon compiled for Raspberry Pi that communicates with the charger via I2C\n\nThe makerPower is a combination solar battery charger and 5V power supply for IOT-class devices designed for 24/7 operation off of solar power. 
It manages charging a 12V AGM lead acid or LiFePO4 battery from common 36-cell 12V solar panels. It provides 5V power output at up to 2A for systems that include sensors or communication radios. Optimal charging is provided through a dynamic perturb-and-observe maximum power-point tracking (MPPT) converter and a 3-stage (BULK, ABSORPTION, FLOAT) charging algorithm. A removable temperature sensor provides temperature compensation. Operation is plug&play, although additional information and configuration may be obtained through a digital interface.\n\n* Optimized for commonly available batteries in the 7-18 Ah range and solar panels in the 10-35 Watt range\n* Reverse Polarity protected solar panel input with press-to-open terminal block\n* Fused battery input with press-to-open terminal block\n* Maximum 2A at 5V output on USB Type A power output jack and solder header\n* Automatic low-battery disconnect and auto-restart on recharged battery\n* Temperature compensation sensor with internal sensor fallback (lead acid batteries)\n* Disable charging when battery is too cold or too hot\n* Status LED indicating charge and power conditions, fault information\n* I2C interface for detailed operation condition readout and configuration parameter access\n* Configurable battery charge parameters\n* Status signals for Night detection and pre-power-down alert\n* Night-only operating mode (switch 5V output on only at night)\n* Watchdog functionality to power-cycle connected device if it crashes or for timed power-off control\n\n### Applications\n* Remote control and sense applications\n* Solar powered web or timelapse camera\n* Night-time \xe2\x80\x9ccritter cam\xe2\x80\x9d\n* Solar powered LED night lighting controller\n\n#### Bonus Application\nThe charger works well as a 12- and/or 5-V UPS when combined with a laptop power supply. The laptop supply should be able to supply at least 3.5A at between 18.5 and 21V output (for example a Dell supply at 20V/3.5A) - a high enough voltage to initiate charging. The charger will both charge the battery and supply the load current to the user\'s device, and the battery will supply power if AC power fails.\n\n### Compatible Solar Panels and Batteries\nThe makerPower is designed to use standard 25- or 35-Watt 12V solar panels with 7-Ah to 18-Ah 12V AGM type lead acid or LiFePO4 batteries. It has a maximum charge capacity of about 35-38 watts. A detailed sizing method is described in the user manual but it is possible to use smaller or larger panels and batteries depending on the application.\n\nTypically a 25-Watt panel is paired with a 7-Ah battery for small systems (Arduino-type up to Raspberry Pi Zero type). A 35-Watt panel is paired with 9-Ah to 18-Ah batteries for larger systems. Larger batteries provide longer run-time during poor (lower light) charging conditions. A larger panel can provide more charge current during poor charging conditions.\n\n
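As a back-of-the-envelope check on these pairings, the sizing arithmetic can be sketched in a few lines of Python (all numbers below are illustrative assumptions, not values from the user manual):\n\n```python\n# Rough sizing sketch for a 35 W panel + 9 Ah AGM battery system\npanel_w, vmp = 35.0, 18.0          # panel rating and ~18 V max-power point\nbattery_v, battery_ah = 12.0, 9.0  # nominal battery voltage and capacity\nload_w = 2.5                       # assumed load, e.g. a 5 V / 0.5 A device\n\npeak_charge_a = panel_w / vmp             # ~1.9 A peak charge current\nusable_wh = battery_v * battery_ah * 0.5  # assume ~50% usable depth of discharge for AGM\nprint(f\'{peak_charge_a:.1f} A peak charge, {usable_wh / load_w:.0f} h autonomy\')\n```\n\n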
Solar panels should be a 36-cell type with a typical maximum power-point of around 18V and maximum open-circuit voltage of 23V (typically around 21-22V). Available panels and batteries I have tested with are shown below.\n\n* [25 Watt Panel](https://www.amazon.com/gp/product/B014UND3LA)\n* [35 Watt Panel](https://www.amazon.com/gp/product/B01G1II6LY)\n* [9 Ah Lead Acid Battery](https://www.amazon.com/Power-Sonic-PS-1290-Rechargeable-Battery-Terminals/dp/B002L6R130)\n* [10 Ah LiFePO4 Battery](https://www.amazon.com/ExpertPower-Lithium-Rechargeable-2500-7000-lifetime/dp/B07X3Y3LS5)\n* [18 Ah Lead Acid Battery](https://www.amazon.com/ExpertPower-EXP12180-Rechargeable-Battery-Bolts/dp/B00A82A3RK)\n\n### Enclosures\n\nI have used the [Carlon E989N](https://www.homedepot.com/p/Carlon-8-in-x-4-in-PVC-Junction-Box-E989N-CAR/100404099) enclosure found at a Home Depot home improvement store to hold the battery, charger and single-board computer. It is a good size, providing room for a 7-Ah to 10-Ah battery as well as room for heat dissipation from both the charger and computer. Note that the charger can dissipate upwards of 5W when running at full capacity.\n\nOther possible enclosures include the following.\n\n* Hammond Manufacturing [RP1465/RP1465C](https://www.hammfg.com/electronics/small-case/plastic/rp)\n* Bud Industries [PIP-11774/PIP-11774-C](https://www.budind.com/view/NEMA+Boxes/NEMA+4X+-+PIP)\n\n### Questions?\n\nContact the designer - dan@danjuliodesigns.com\n'",,"2018/04/30, 04:48:29",2004,GPL-3.0,4,53,"2021/01/10, 22:43:46",2,0,2,0,1017,0,0,0.0,,,0,1,false,,false,false,,,,,,,,,,, Tonatiuh,A Monte Carlo ray tracer for the optical simulation of solar concentrating systems.,iat-cener,https://github.com/iat-cener/tonatiuh.git,github,"solar-concentrating-systems,simulation,solar,energy",Photovoltaics and Solar Energy,"2023/03/27, 11:34:44",51,0,6,true,C++,National Renewable Energy Centre - CENER,iat-cener,"C++,QMake,C,Batchfile",http://iat-cener.github.io/tonatiuh/,"b'![http://tonatiuhdocs.googlepages.com/Logodefinitivo301x115.gif](http://tonatiuhdocs.googlepages.com/Logodefinitivo301x115.gif)\r\n# News #\r\n\r\n## Tonatiuh release 2.2.4 is now available! ##\r\n\r\nThis release fixes several bugs. The most relevant is a bug in the ShapeCAD intersection algorithm that was only visible on some operating systems. In addition, the Pillbox distribution function has been removed from the material error models to avoid simulation errors caused by this distribution function.\r\n \r\nFind the latest release at this link: https://github.com/iat-cener/tonatiuh/releases/tag/v2.2.4\r\n\r\n# Overview #\r\n_The Tonatiuh project aims to create an open source, cutting-edge, accurate, and easy to use Monte Carlo ray tracer for the optical simulation of solar concentrating systems. 
It intends to advance the state-of-the-art of the simulation tools available for the design and analysis of solar concentrating systems, and to make those tools freely available to anyone interested in using and improving them._\r\nSome of the most relevant design goals of Tonatiuh are:\r\n * To develop a robust theoretical foundation that will facilitate the optical simulation of almost any type of solar concentrating system.\r\n * To exhibit a clean and flexible software architecture, that will allow the user to adapt, expand, increase, and modify its functionalities with ease.\r\n * To achieve operating system independence at source level, and run on all major platforms with none, or minor, modifications to its source code.\r\n * To provide the users with an advanced and easy-to-use Graphic User Interface (GUI).\r\nAdditional information on the rationale for this open source project, and on the goals, general characteristics, and current status of Tonatiuh is given in the two following videos. The first video is based on a [Pecha Kucha presentation](http://en.wikipedia.org/wiki/Pecha_Kucha) given by Dr. Manuel J. Blanco at the University of Seville in September 2008. Although the audio track is in Spanish, the video is closed-captioned in Spanish and subtitled in English. Information on how to use the close-caption/subtitle features of YouTube videos can be found [here](http://help.youtube.com/support/youtube/bin/answer.py?answer=100078). The second video is based on an oral presentation of the paper entitled ""Preliminary validation of Tonatiuh"" given by Dr. Manuel J. Blanco on September 17th, at the 2009 International Energy Agency\'s SolarPACES Symposium, celebrated in Berlin, Germany.\r\n\r\n## Features ##\r\nThe use of extended Open Inventor files to represent the ""scene"" (i.e. the solar concentrating system, the sunlight model, etc.)\r\nAn advanced and easy-to-use GUI providing:\r\n * 3D and tree views of the ""scene"" to simulate.\r\n * Handlers and manipulators to modify and query scene objects using 3D views.\r\n * Interface elements to manage the undo and redo of user actions.\r\n * Interface elements to define the type of Monte Carlo ray tracing to execute.\r\nA pervasive plugin architecture which allows the user to:\r\n * Add new sunlight models.\r\n * Add new geometric surfaces.\r\n * Add new reflective materials.\r\n * (planned) Add new refractive materials.\r\n * (planned) Add new photon map and other results analyzers, and post-processors.\r\n * (planned) Add new spectrum models.\r\n## Requirements ##\r\nLike any other ambitious open source program, Tonatiuh uses and leverages several existing open source libraries and tools. 
The principal open source resources used by Tonatiuh are:\r\n * Digia Qt for the Graphic User Interface (GUI).\r\n * Coin3D Toolkit for 3D Graphics Visualization.\r\n * Marble generic geographical map widget and framework.\r\n * CPPUnit for testing the code.\r\nAll these tools are used for developing Tonatiuh within the Eclipse IDE, in a standard development environment used by the entire development team.\r\n\r\n## Tonatiuh\'s output files format ##\r\n\r\nFrom version 2.0.1 the format of the outputs of simulations has been changed to be more flexible for post-processing. You can find a description of the format at [Tonatiuh\'s output files format](https://github.com/iat-cener/tonatiuh/wiki/Output-files-format).\r\n\r\n## Citing Tonatiuh ##\r\n\r\nThese are some of the most relevant references:\r\n * Les, I., Mutuberria, A., Sch\xc3\xb6ttl, P., Nitz, P. (2017). New Functionalities for the Tonatiuh Ray-tracing Software. Proceedings of the 23rd SolarPACES Conference.\r\n * J. Cardoso, J., Mutuberria, A., Marakko, C., Schoettl, P., Os\xc3\xb3rio, T., Les, I., (2017). New Functionalities for the Tonatiuh Ray-tracing Software. Proceedings of the 23rd SolarPACES Conference.\r\n * Blanco, M., Mutuberria, A., Monreal, A., & Albert, R. (2011). Results of the empirical validation of Tonatiuh at Mini-Pegase CNRS-PROMES facility. Proc SolarPACES.\r\n * Blanco, M. J., Mutuberria, A., & Martinez, D. (2010). Experimental validation of Tonatiuh using the Plataforma Solar de Almer\xc3\xada secondary concentrator test campaign data. In 16th annual SolarPACES symposium.\r\n * Blanco, M. J., Mutuberria, A., Garcia, P., Gastesi, R., & Martin, V. (2009). Preliminary validation of Tonatiuh. SOLARPACES Symposium, Berlin, Germany.\r\n * Blanco, M. J., Amieva, J. M., & Mancillas, A. (2005, January). The Tonatiuh Software Development Project: An open source approach to the simulation of solar concentrating systems. In ASME 2005 International Mechanical Engineering Congress and Exposition (pp. 157-164). American Society of Mechanical Engineers.\r\n \r\n \r\n
This work is licensed under a Creative Commons Attribution-Noncommercial-Share Alike 3.0 Unported License.\r\n'",,"2015/05/08, 10:51:30",3092,GPL-3.0,3,1171,"2023/04/28, 14:30:58",50,1,74,5,180,0,0.0,0.07553551296505068,"2017/09/15, 07:38:25",v2.2.4,0,5,false,,false,false,,,https://github.com/iat-cener,https://www.cener.com,"Sarriguren, Spain",,,https://avatars.githubusercontent.com/u/12331888?v=4,,, PV4GER,Aims at democratizing and accelerating the access to photovoltaic systems data in Germany and beyond.,kdmayer,https://github.com/kdmayer/3D-PV-Locator.git,github,"neurips-2020,solar,renewable-energy,pv-systems,deepsolar,computer-vision,inception-v3,deeplabv3,network-planning,solar-panels,climate-change,ai,deep-learning,remote-sensing,satellite-imagery",Photovoltaics and Solar Energy,"2023/07/06, 19:11:46",42,0,7,true,Python,,,"Python,Dockerfile",,"b'# 3D-PV-Locator\n\n![Pipeline Overview](https://github.com/kdmayer/3D-PV-Locator/blob/master/pipeline_visualization_new.png)\n\nRepo with [documentation](docs/_build/rinoh/pv4ger.pdf) for ""[3D-PV-Locator: Large-scale detection of rooftop-mounted photovoltaic systems in 3D](https://www.sciencedirect.com/science/article/pii/S0306261921016937?via%3Dihub)"" published in Applied Energy.\n\nIn case you would like to explore the code with which we created the image datasets and pre-processed the CityGML files, please have a look at the following [GitHub repo](https://github.com/kdmayer/CityGML-Preprocessing-Demo).\n\n## About\n\n3D-PV-Locator is a joint research initiative between [Stanford University](http://web.stanford.edu/group/energyatlas/home.html), [University of Freiburg](https://www.is.uni-freiburg.de/research/smart-cities-industries-group/smart-cities-industries-sci-group), and [LMU Munich](https://www.en.compecon.econ.uni-muenchen.de/staff/postdocs/arlt1/index.html) that aims at democratizing and accelerating the access to photovoltaic (PV) systems data in Germany and beyond. 
\n\nTo do so, we have developed a computer vision-based pipeline leveraging aerial imagery with a spatial resolution of\n10 cm/pixel and 3D building data to automatically create address-level and rooftop-level PV registries for all counties\nwithin Germany\'s most populous state North Rhine-Westphalia.\n\n![Exemplary Pipeline Output](https://github.com/kdmayer/3D-PV-Locator/blob/master/exemplary_pipeline_output.png)\n\n### Address-level registry\n\nFor every address equipped with a PV system in North Rhine-Westphalia, the automatically produced address-level\nregistry in GeoJSON-format specifies the respective PV system\'s: \n\n- geometry: Real-world coordinate-referenced polygon describing the shape of the rooftop-mounted PV system\n- area_inter: The total area covered by the PV system in square meters\n- area_tilted: The total area covered by the PV system in square meters, corrected by the respective rooftop tilt\n- capacity_not_tilted_area: The total PV capacity in kWp of area_inter\n- capacity_titled_area: The total PV capacity in kWp of area_tilted \n- location of street address in latitude and longitude \n- street address\n- city and\n- ZIP code\n\n### Rooftop-level registry\n\nFor every rooftop equipped with a PV system in North Rhine-Westphalia, the automatically produced rooftop-level\nregistry in GeoJSON-format specifies the respective PV system\'s: \n\n- Azimuth: Orientation of the rooftop-mounted PV system, with 0\xc2\xb0 pointing to the North\n- Tilt: Tilt of the rooftop-mounted PV system, with 0\xc2\xb0 being flat\n- RoofTopID: Identifier of the respective rooftop\n- geometry: Real-world coordinate-referenced polygon describing the shape of the rooftop-mounted PV system\n- area_inter: The total area covered by the PV system in square meters\n- area_tilted: The total area covered by the PV system in square meters, corrected by the respective rooftop tilt\n- capacity_not_tilted_area: The total PV capacity in kWp of area_inter\n- capacity_titled_area: The total PV capacity in kWp of area_tilted\n- street address\n- city and\n- ZIP code \n\nFor a detailed description of the underlying pipeline and a case study for the city of Bottrop, please have a look at our spotlight talk at NeurIPS 2020:\n\n- [Paper](https://www.climatechange.ai/papers/neurips2020/46/paper.pdf)\n- [Slides](https://www.climatechange.ai/papers/neurips2020/46/slides.pdf)\n- [Recorded Talk](https://slideslive.com/38942134/an-enriched-automated-pv-registry-combining-image-recognition-and-3d-building-data)\n\nYou might also want to take a look at other projects within Stanford\'s EnergyAtlas initiative:\n\n- [EnergyAtlas](http://web.stanford.edu/group/energyatlas/home.html)\n- DeepSolar for Germany: [Publication](https://ieeexplore.ieee.org/document/9203258) and [Code](https://github.com/kdmayer/PV_Pipeline)\n\n## Datasets and pre-processing code are public\n\nPlease note that apart from the pipeline code and documentation, we also provide you with\n\n- A **pre-trained model checkpoint for PV classification** on aerial imagery with a spatial resolution of 10cm/pixel.\n- A **pre-trained model checkpoint for PV segmentation** on aerial imagery with a spatial resolution of 10cm/pixel.\n- A **100,000+ image dataset** for PV system classification.\n- A **4,000+ image dataset** for PV system segmentation.\n- **Pre-processed 3D building data** in .GeoJSON format for the entire state of North Rhine-Westphalia.\n\n
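Since both registries are plain GeoJSON, they can be inspected with standard geospatial tooling. A minimal sketch using `geopandas` (the file name below is a placeholder for a registry produced by the pipeline; the column names are the registry fields documented above):\n\n```python\nimport geopandas as gpd\n\n# Load an address-level registry produced by the pipeline (placeholder path)\nregistry = gpd.read_file(\'data/pv_registry/essen_address_registry.geojson\')\n\n# Summarize system areas and capacities across all detected PV systems\nprint(registry[[\'area_inter\', \'capacity_not_tilted_area\']].describe())\nprint(\'Total capacity (kWp):\', registry[\'capacity_not_tilted_area\'].sum())\n```\n\n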
In case you would like to explore the code with which we created the image datasets and pre-processed the CityGML files, please have a look at the following [GitHub repo](https://github.com/kdmayer/CityGML-Preprocessing-Demo).\n\nWhen using these resources, please cite our work as specified at the bottom of this page.\n\n**NOTE**: All images and 3D building data is obtained from [openNRW](https://www.bezreg-koeln.nrw.de/brk_internet/geobasis/luftbildinformationen/aktuell/digitale_orthophotos/index.html). Labeling of the images for PV system classification and segmentation has been conducted by us.\n\n## Usage Instructions:\n\n    git clone https://github.com/kdmayer/3D-PV-Locator.git\n    cd 3D-PV-Locator\n\nDownload pre-trained classification and segmentation models for PV systems from our public AWS S3 bucket. This bucket is in ""requester pays"" mode, which means that you need to configure your AWS CLI before being able to download the files. Instructions on how to do it can be found [here](https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-quickstart.html).\n\nOnce you have configured your AWS CLI with \n\n    aws configure\n\nyou can list and browse our public bucket with\n\n    aws s3 ls --request-payer requester s3://pv4ger/\n \nPlease download our pre-trained networks for PV system classification and segmentation by executing\n\n    aws s3 cp --request-payer requester s3://pv4ger/NRW_models/inceptionv3_weights.tar models/classification/\n    aws s3 cp --request-payer requester s3://pv4ger/NRW_models/deeplabv3_weights.tar models/segmentation/\n\nTo create PV registries for any county within North Rhine-Westphalia, you need to \n\n1. Download the 3D building data for your desired county from our S3 bucket by executing the command below, replacing the file name with a county name from the list below:\n\n    aws s3 cp --request-payer requester s3://pv4ger/NRW_rooftop_data/ data/nrw_rooftop_data/\n \n   Example for the county of **Essen**:\n \n    aws s3 cp --request-payer requester s3://pv4ger/NRW_rooftop_data/Essen.geojson data/nrw_rooftop_data/\n\n2. Specify the name of your desired county for analysis in the config.yml next to the ""county4analysis"" element by\n   choosing one of the counties from the list below:\n\n   Example:\n \n    county4analysis: Essen\n \n3. 
**OPTIONAL STEP**: Obtain your Bing API key for geocoding from [here](https://docs.microsoft.com/en-us/bingmaps/getting-started/bing-maps-dev-center-help/getting-a-bing-maps-key) and paste it in the config.yml file next to the ""bing_key"" element\n\n   Example:\n \n    bing_key: \n \n   **NOTE**: If you leave it empty, geocoding will be done by the free OSM geocoding service.\n\nOnce the data and models are in place, we build and run the docker container with all required dependencies in interactive mode and mount the /data and /log directory in the container to our local machine.\nMounting the /data and /log directories allows us to share the code outputs between the container and our local machine.\n\n    docker build -t 3d_pv_docker .\n    docker run -it -v /data/:/app/data/ -v /logs/:/app/logs/ 3d_pv_docker\n\nPlease ensure that ** corresponds to your absolute path to the 3D-PV-Locator repo on your local machine, e.g., */Users/kevin/Projects/Active/3D-PV-Locator/* in my case.\n\nNote: Depending on how many tiles you want to download, you will need to adjust the memory of your Docker container with the following flag for the docker run command:\n\n    --memory=\n\nWith the docker container running in interactive mode, we can now decide which pipeline steps we want to run by putting a ""1"" next to them.\n\n   Example:\n \n    run_tile_creator: 1\n\n    run_tile_downloader: 1\n\n    run_tile_processor: 1\n\n    run_tile_coords_updater: 0\n\n    run_registry_creator: 1\n \nIn the interactive Docker container, we then execute the pipeline with:\n\n    python run_pipeline.py\n\nAfter successful completion, the resulting PV registry for your area of interest will be written to /data/pv_registry.\n\n## List of available counties:\n \nPlease choose the county you would like to run the pipeline for from the following list:\n\n- D\xc3\xbcren\n- Essen\n- Unna\n- M\xc3\xb6nchengladbach\n- Solingen\n- Dortmund\n- G\xc3\xbctersloh\n- Olpe\n- Steinfurt\n- Bottrop\n- Coesfeld\n- Leverkusen\n- K\xc3\xb6ln\n- Soest\n- M\xc3\xbclheim-a.d.-Ruhr\n- M\xc3\xbcnster\n- Heinsberg\n- Oberhausen\n- Euskirchen\n- Krefeld\n- Warendorf\n- Recklinghausen\n- Bochum\n- Rhein-Kreis-Neuss\n- Rheinisch-Bergischer-Kreis\n- Herne\n- Kleve\n- Bonn\n- Minden-L\xc3\xbcbbecke\n- Herford\n- Rhein-Sieg-Kreis\n- D\xc3\xbcsseldorf\n- Hagen\n- Paderborn\n- Wuppertal\n- Oberbergischer-Kreis\n- Viersen\n- Rhein-Erft-Kreis\n- M\xc3\xa4rkischer-Kreis\n- St\xc3\xa4dteregion-Aachen\n- Remscheid\n- Mettmann\n- Lippe\n- Ennepe-Ruhr-Kreis\n- Hochsauerlandkreis\n- Gelsenkirchen\n- H\xc3\xb6xter\n- Borken\n- Hamm\n- Bielefeld\n- Duisburg\n- Siegen-Wittgenstein\n- Wesel \n\n## OpenNRW Platform:\n\nFor the German state of North Rhine-Westphalia (NRW), OpenNRW provides:\n\n- Aerial imagery at a spatial resolution of 10cm/pixel\n- Extensive 3D building data in CityGML format\n\n## License:\n\n[MIT](https://github.com/kdmayer/PV_Pipeline/blob/master/LICENSE)\n\n## BibTex Citation:\n\nPlease cite our work as\n\n    @article{MAYER2022,\n    title = {3D-PV-Locator: Large-scale detection of rooftop-mounted photovoltaic systems in 3D},\n    journal = {Applied Energy},\n    volume = {310},\n    pages = {118469},\n    year = {2022},\n    issn = {0306-2619},\n    doi = {https://doi.org/10.1016/j.apenergy.2021.118469},\n    url = {https://www.sciencedirect.com/science/article/pii/S0306261921016937},\n    author = {Kevin Mayer and Benjamin Rausch and Marie-Louise Arlt and Gunther Gust and Zhecheng Wang and Dirk Neumann and Ram Rajagopal},\n    keywords = {Solar panels, Renewable energy, Image recognition, Deep learning, Computer vision, 
3D building data, Remote sensing, Aerial imagery},\n }\n'",",https://doi.org/10.1016/j.apenergy.2021.118469","2021/01/20, 12:47:54",1008,MIT,2,109,"2023/07/06, 18:49:09",0,3,5,1,111,0,0.0,0.0,,,0,1,false,,false,false,,,,,,,,,,, PV Free,A public API for PV modeling parameters.,BreakingBytes,https://github.com/BreakingBytes/pvfree.git,github,"api,python,solar-energy",Photovoltaics and Solar Energy,"2023/08/10, 03:59:55",18,0,3,true,Python,Breaking Bytes,BreakingBytes,"Python,HTML,Procfile",https://pvfree.azurewebsites.net/,"b'[![travis](https://travis-ci.org/BreakingBytes/pvfree.svg?branch=master)](https://travis-ci.org/BreakingBytes/pvfree)\n\nPV Free\n=======\nA public API for PV modeling parameters and pvlib API for learning about solar.\n\nAnnouncements\n-------------\npvfree is moving to Microsoft Azure Cloud b/c Heroku free dyno service will end\nNov 28th. Please use https://pvfree.azurewebsites.net/ from now on to get module\nand inverter parameters for pvlib and to learn about solar energy modeling.\n\nUsage\n-----\nBrowsing to\n[`pvfree.azurewebsites.net/api/v1/pvinverter/`](https://pvfree.azurewebsites.net/api/v1/pvinverter/?format=json)\nwill display a JSON string with the first 20 records. The endpoint and query\nstring to obtain the next set\n[`api/pvinverter/?limit=20&offset=20`](https://pvfree.azurewebsites.net/api/v1/pvinverter/?format=json&limit=20&offset=20)\nis contained in the `next` key of the string as are the endpoints for each\ninverter. Note: the query string `?format=json` is only necessary when using the API url directly in a browser to display the response.\n\n[Tastypie](https://django-tastypie.readthedocs.org/en/latest/)\n--------------------------------------------------------------\nThe API is generated by the Tastypie django extension. Add the following endpoints to the base URL, [`https://pvfree.azurewebsites.net/`](https://pvfree.azurewebsites.net/):\n\n* Get first 20 pvinverters.\n\n api/pvinverter/\n\n* Get first pvinverter.\n\n api/pvinverter/1/\n\n* Get pvinverter set containing #\'s 2, 3, 5, and 10.\n\n api/pvinverter/set/2;3;5;10/\n\n* Get 100 pvinverters starting from pvinverter # 500.\n\n api/pvinverter/?limit=100&offset=500\n\n* Get pvinverter database schema.\n\n api/pvinverter/schema/\n\n[Python Requests](https://requests.readthedocs.io/en/master/)\n-------------------------------------------------------------\nPython has several libraries for interacting with URLs. 
The Requests package is available from [PyPI](https://pypi.python.org/pypi/requests).\n\n```python\n>>> import requests\n>>> response = requests.get(\'https://pvfree.azurewebsites.net/api/v1/pvinverter/set/1;3;5/\')\n>>> response\n <Response [200]>\n>>> response.status_code\n 200\n>>> response.content\n {""objects"": [{""C0"": -2.48104842861e-05, ""C1"": -9.0149429405099999e-05, ""C2"": 0.00066889632690700005, ""C3"": -0.018880466688599998, ""Idcmax"": 10.0, ""MPPT_hi"": 50.0, ""MPPT_low"": 20.0, ""Paco"": 250.0, ""Pdco"": 259.52205054799998, ""Pnt"": 0.02, ""Pso"": 1.7716142241299999, ""Sandia_ID"": 1399, ""Tamb_low"": -40.0, ""Tamb_max"": 85.0, ""Vaco"": 208.0, ""Vdcmax"": 65.0, ""Vdco"": 40.242603174599999, ""id"": 1, ""manufacturer"": ""ABB"", ""name"": ""MICRO-0.25-I-OUTD-US-208"", ""numberMPPTChannels"": 1, ""resource_uri"": ""/api/v1/pvinverter/1/"", ""source"": ""CEC"", ""vintage"": ""2014-01-01"", ""weight"": 1.6499999999999999}, ...]}\n```\n'",,"2015/02/28, 08:09:30",3161,BSD-2-Clause,13,251,"2022/11/23, 09:26:16",6,24,34,4,336,0,0.0,0.03167420814479638,,,0,3,false,,true,false,,,https://github.com/BreakingBytes,http://breakingbytes.github.io/,"Oakland, CA",,,https://avatars.githubusercontent.com/u/1437576?v=4,,, Pysolar,A collection of Python libraries for simulating the irradiation of any point on earth by the sun. It includes code for extremely precise ephemeris calculations.,pingswept,https://github.com/pingswept/pysolar.git,github,,Photovoltaics and Solar Energy,"2023/09/25, 22:37:43",348,0,37,true,Jupyter Notebook,,,"Jupyter Notebook,Python",http://pysolar.org,"b'## Pysolar ##\n\n[![Test Python package](https://github.com/pingswept/pysolar/actions/workflows/testpackage.yml/badge.svg)](https://github.com/pingswept/pysolar/actions/workflows/testpackage.yml)\n\nPysolar is a collection of Python libraries for simulating the irradiation of any point on earth by the sun. It includes code for extremely precise ephemeris calculations, and more.\n\n# Note: right now, the latest commits of Pysolar don\'t work with Python 2.x #\n\n* Release 0.6 works with 2.x: https://github.com/pingswept/pysolar/releases/tag/0.6 but 0.7 and later have a bunch of changes. They have been validated for Python 3.4, but Python releases 3.2 or earlier are missing features that the changes require. *\n\nAlso, the API has changed slightly:\n\n * Pysolar now expects you to supply a **timezone-aware datetime**, rather than a naive datetime in UTC. If your results seem crazy, this is probably why.\n * Function names are now `lowercase_separated_by_underscores`, in compliance with [PEP8](https://www.python.org/dev/peps/pep-0008/#function-names).\n\n## Installation ##\n\nAssuming you have Python 3.4 or higher installed, you can install Pysolar with `pip`:\n\n    sudo pip install pysolar\n\nDocumentation now appears at [docs.pysolar.org](http://docs.pysolar.org).\n\n## Contributions ##\n\nAll contributions go through pull requests on Github.\n\nEditing [the documentation](http://docs.pysolar.org) is particularly easy-- just click the ""Edit on Github"" link at the top of any page.\n\nCode contributions are welcome under the terms of the GPLv3 license. If you\'re unfamiliar with Github, you could start with [this guide to working on open source projects](https://guides.github.com/activities/contributing-to-open-source/).\n\n
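As a quick illustration of the timezone-aware API noted above, a minimal sketch (function names per recent Pysolar releases; the coordinates and timestamp are arbitrary examples):\n\n```python\nimport datetime\nfrom pysolar.solar import get_altitude, get_azimuth\n\n# Pysolar expects a timezone-aware datetime, not a naive one\nwhen = datetime.datetime(2023, 6, 21, 12, 0, tzinfo=datetime.timezone.utc)\nlat, lon = 42.36, -71.06  # example location (Boston area)\n\nprint(get_altitude(lat, lon, when))  # solar altitude in degrees\nprint(get_azimuth(lat, lon, when))   # solar azimuth in degrees\n```\n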
\n## Support ##\n\nYour first move should be to read the [documentation](http://docs.pysolar.org) and think. But you\'ve probably already tried that.\n\nYour second move is to ask a question on the [pysolar-discuss](http://lists.pysolar.org) mailing list. The original author of Pysolar, [Brandon Stafford](http://rascalmicro.com), monitors the mailing list. Please understand that I wrote (most of) Pysolar around a decade ago when I worked in the solar industry. Now, I\'m an electrical engineer who just maintains Pysolar as a fun hobby. The other people on the list are other users like you-- some are experts; some are amateurs. None of them are getting paid for this.\n\nTo subscribe to the mailing list, send a blank email to pysolar-discuss+subscribe@pysolar.org. After a few minutes, you\'ll get a confirmation message; reply to the confirmation to complete the subscription.\n\nIn order to post to the list, you have to subscribe. You also have to pass a threshold of civil discourse regulated by me.\n\nThe archive of the list is publicly available here: http://lists.pysolar.org/.\n\nIf you ever want to unsubscribe, send an email to pysolar-discuss+unsubscribe@pysolar.org and reply to the confirmation message. If you run into trouble, just send me an email at brandon@pingswept.org, and I\'ll remove your address manually.\n\nPlease report bugs to [the issue tracker on Github](https://github.com/pingswept/pysolar/issues); I am automatically notified when a new issue is opened.\n\n## License ##\n\nPysolar is licensed under [the GPLv3](https://www.gnu.org/licenses/gpl-3.0.html).\n'",,"2008/03/01, 23:35:48",5715,GPL-3.0,15,364,"2023/09/25, 22:37:44",14,73,144,10,29,0,0.1,0.41509433962264153,"2023/07/25, 17:13:16",0.11,0,41,false,,false,false,,,,,,,,,,, PV_ICE,"An open-source tool to quantify Solar Photovoltaics (PV) Energy and Mass Flows in the Circular Economy, from a Reliability and Lifetime approach.",NREL,https://github.com/NREL/PV_ICE.git,github,"circular-economy,circularity-metrics,reliability,solar-energy,circularity,mass-flow,repair,reuse,recycle,photovoltaics,lifetime",Photovoltaics and Solar Energy,"2023/10/25, 00:35:04",27,0,8,true,Python,National Renewable Energy Laboratory,NREL,Python,https://pv-ice.readthedocs.io/en/latest/,"b'
[badges: Version | License | Documentation]
\n\n\n# PV ICE: PV in the Circular Economy, a Dynamic Energy and Materials Tool\n\nThis open-source tool explores the effects of Circular Economy (CE) pathways for photovoltaic (PV) materials. It can be used to quantify and assign a value framework to CE efforts including re-design, reduction, replacement, reuse, recycling, and lifetime and reliability improvements across the PV value chain. PV ICE enables tradeoff analysis through scenario comparisons, and is highly customizable through user inputs such as deployment schedules, module properties and component materials, and CE pathways.\n\nThe provided PV ICE module and material baselines leverage published data from many sources on PV manufacturing and predicted technological changes. Input data are being compiled [here](https://docs.google.com/spreadsheets/d/1WV54lNAdA2uP6a0g5wMOOE9bu8nbwvnQDgLj3GuGojE/edit?usp=sharing) and the baselines are available here for use in other projects as well as for the PV ICE tool.\n\n\nHow it Works\n=============\n\nThis section provides a brief description of how the PV ICE tool works. FULL DOCUMENTATION CAN BE FOUND AT [readthedocs](https://pv-ice.readthedocs.io/en/latest/?badge=latest).\n\n\nMass\n-----\n\nPV ICE is a dynamic mass flow based tool. It takes in any deployment forecast of any evolving module design along with its component materials and uses sophisticated lifetime and reliability parameters to calculate effective capacity, virgin material demand, and life cycle wastes. The calculator captures all the mass flows shown in the simplified diagram below for all years studied in a simulation (ex: 2020-2050). \n\n\n\nAnnually deployed cohorts of modules are tracked through the simulation, subjected to lifetime, degradation, and reliability parameters, and guided along user defined CE pathways (ex: resell, recycling). The PV ICE framework is designed for scenario comparisons (ex: different deployment schedules, module designs, or circular pathways) and is capable of both geospatial and temporal analysis (i.e. when and where materials will be demanded or are available).\n\nModule and material properties are known to be variable with time, and PV ICE can capture this dynamic evolution of PV technology. Dynamic baseline inputs for crystalline silicon PV modules and component materials are provided in the PV_ICE \\ baselines folder. These baselines are derived from [literature and report data](https://docs.google.com/spreadsheets/d/1Ec5JRBSN2NFXjEABgUp1ch-EG6uQao8j5Rk1MLuZZYI/edit?usp=sharing). Module baselines capture the annual average crystalline silicon module (i.e. a market share weighted average of the silicon PV technologies deployed). Each material similarly is a market share weighted average of silicon PV technologies, compiled from multiple sources, most notably consistent with ITRPV data. Please see the Jupyter Journals (tutorials \\ baseline development documentation) for the derivations and sources (baselines \\ SupportingMaterials) of the provided c-Si baselines. Alternate module and material files can be created by the user, and an expanded set of PV technology baselines is planned for the future, including CdTe and perovskites.\n\n\nEnergy\n-------\n\nThe energy balance of renewable energy technologies is as important as the mass balance when evaluating sustainability. Additionally, few studies of Circular Economy (CE) pathways consider the energy return on investment of a particular pathway. 
PV ICE energy flows fill this analysis gap, and provide useful insights into the potential tradeoffs between mass and energy of CE pathways.\n\nThe energy flows of PV ICE are based on the mass flows. These energy flows, like the mass flows, are dynamic with time and are separated into module and material energies. For each supply chain process step captured in the mass flows, an energy per module area or energy per material mass is captured as an input (ex: module manufacturing energy, energy to manufacture rolled glass from silica sand, energy to crush a module for recycling). The energy demanded for each step is the sum of all electrical energy demands and all fuel/heating energy demands. \n\nWe provide an energy baseline for crystalline silicon modules and component materials. Data for these baselines is being compiled from [literature and report data](https://docs.google.com/spreadsheets/d/1Ec5JRBSN2NFXjEABgUp1ch-EG6uQao8j5Rk1MLuZZYI/edit?usp=sharing). For the complete derivation of the energy demands for crystalline silicon modules and materials, please see the Jupyter Journals (tutorials \\ baseline development documentation) and (baselines \\ SupportingMaterials). Alternate module and material files can be created by the user, and an expanded set of PV technology baselines is planned for the future, including CdTe and perovskites.\n\nAfter running a mass flow simulation, an energy flow calculation can be run which will multiply the energy demands by the mass flows and calculate annual generation from the deployed modules. Results of this calculation provide annual, cumulative, and lifetime energy demands and energy generated. These values can be used to calculate energy balance metrics such as energy return on investment (EROI), net energy, and energy payback time (EPBT). These features are actively under development, so check back for updates soon!\n\n\nInstallation for PV ICE\n=======================\n\nPV ICE releases may be installed using the ``pip`` and ``conda`` tools.\nPlease see the [Installation page](http://PV_ICE.readthedocs.io/en/latest/installation.html) of the documentation for complete instructions.\n\nPV ICE is compatible with Python 3.5 and above.\n\nInstall with:\n\n    pip install PV_ICE\n\nFor developer installation, download the repository, navigate to the folder location and install as:\n\n    pip install -e .\n\n\nHow to Get Started\n===================\n\nAfter you have installed PV ICE, we recommend heading over to our tutorial Jupyter journals (PV ICE \\ docs \\ tutorials). There you will find journals [""0 - quick start Example""](https://github.com/NREL/PV_ICE/blob/development/docs/tutorials/0%20-%20quickStart%20Example.ipynb) and [""1 - Beginner Example""](https://github.com/NREL/PV_ICE/blob/development/docs/tutorials/1%20-%20Beginner%20Example.ipynb) which can help guide you through your first simulation using the PV ICE provided crystalline silicon PV baselines. In journals 2-4 we walk you through modifications to the basic simulation, including modifying parameters with PV ICE functions to suit your analysis needs.\n
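\nAs a rough orientation before diving into the journals, the basic simulation pattern looks like this (a sketch following the quick-start tutorial; the working folder, file-name kwargs, and baseline file names here are assumptions -- check the journals for the exact current API):\n\n```python\nimport PV_ICE\n\n# A simulation object with a working folder for inputs/outputs (placeholder path)\nsim = PV_ICE.Simulation(name=\'demo\', path=\'TEMP\')\n\n# A scenario built from a module baseline file, plus one component material\nsim.createScenario(name=\'standard\', file=\'baseline_modules_US.csv\')  # kwarg/file names: assumptions\nsim.scenario[\'standard\'].addMaterial(\'glass\', file=\'baseline_material_glass.csv\')  # assumption\n\n# Mass flows come first; the energy flow calculation builds on these results\nsim.calculateMassFlow()\n```\n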
\n\nSome Analyses Featuring/Leveraging PV ICE\n==========================================\n\nPV ICE has been used in a variety of published analyses, including:\n\n**High Impact Report: The Solar Futures Report and Circular Economy Technical Report**\n\n    Ardani, Kristen, Paul Denholm, Trieu Mai, Robert Margolis, \n    Eric O\xe2\x80\x99Shaughnessy, Timothy Silverman, and Jarett Zuboy. 2021. \n    \xe2\x80\x9cSolar Futures Study.\xe2\x80\x9d EERE DOE. \n    https://www.energy.gov/eere/solar/solar-futures-study.\n\n    Heath, Garvin, Dwarakanath Ravikumar, Silvana Ovaitt, \n    Leroy Walston, Taylor Curtis, Dev Millstein, Heather Mirletz, \n    Heidi Hartman, and James McCall. 2022. \n    \xe2\x80\x9cEnvironmental and Circular Economy Implications of Solar Energy\n    in a Decarbonized U.S. Grid.\xe2\x80\x9d NREL/TP-6A20-80818. NREL.\n\n**Peer Reviewed Journals**\n\n    H. Mirletz, S. Ovaitt, S. Sridhar, and T. M. Barnes. 2022. \n    \xe2\x80\x9cCircular Economy Priorities for Photovoltaics in the Energy Transition.\xe2\x80\x9d \n    PLOS ONE 17 (9): e0274351. https://doi.org/10.1371/journal.pone.0274351.\n\n    S. Ovaitt & H. Mirletz, S. Seetharaman, and T. Barnes, \n    \xe2\x80\x9cPV in the Circular Economy, A Dynamic Framework Analyzing \n    Technology Evolution and Reliability Impacts,\xe2\x80\x9d \n    ISCIENCE, Jan. 2022, doi: https://doi.org/10.1016/j.isci.2021.103488.\n\n\nThere are multiple other publications citing PV ICE (e.g., at PVSC and PVRW). Please see the list in the [readthedocs](http://CircularEconomy-MassFlowCalculator.readthedocs.io/en/latest/) documentation.\n \n\n\nContributing\n============\n\nWe need your help to make PV ICE a great tool!\nPlease see the [Contributing page](http://PV_ICE.readthedocs.io/en/stable/contributing.html) for more on how you can contribute.\nThe long-term success of PV ICE requires substantial community support.\n\n\nLicense\n=======\n\nPV_ICE open-source code is copyrighted by the Alliance for Sustainable Energy and licensed with BSD-3-Clause terms, found [here](https://github.com/NREL/PV_ICE/blob/main/LICENSE.md).\n\n\nGetting support\n===============\n\nIf you suspect that you may have discovered a bug or if you\'d like to\nchange something about PV ICE, then please make an issue on our\n[GitHub issues page](https://github.com/NREL/PV_ICe/issues).\n\n\nCiting\n======\n\nIf you use PV_ICE in a published work, please cite:\n\n    S. Ovaitt & H. Mirletz, S. Seetharaman, and T. Barnes, \n    \xe2\x80\x9cPV in the Circular Economy, A Dynamic Framework Analyzing \n    Technology Evolution and Reliability Impacts,\xe2\x80\x9d \n    ISCIENCE, Jan. 2022, doi: https://doi.org/10.1016/j.isci.2021.103488.\n\nPlease also cite the DOI corresponding to the specific version of\nPV_ICE that you used. PV_ICE DOIs are listed at\n[Zenodo.org](https://zenodo.org/badge/latestdoi/248347431). For example, for version 0.3.2:\n\n\tS. Ovaitt, H. Mirletz, M. Mendez Ribo (2023). \n\tNREL/PV_ICE: v0.3.2 Release. Zenodo. 
\n\thttps://doi.org/10.5281/zenodo.7651576\n'",",https://zenodo.org/badge/latestdoi/248347431,https://doi.org/10.1371/journal.pone.0274351.\n\n,https://doi.org/10.1016/j.isci.2021.103488.\n\n\nThere,https://doi.org/10.1016/j.isci.2021.103488.\n\n\nand,https://zenodo.org/badge/latestdoi/248347431","2020/03/18, 21:31:34",1315,CUSTOM,132,934,"2023/10/25, 00:39:36",4,18,22,5,0,0,0.1,0.5184729064039408,"2023/02/18, 00:07:12",v0.3.2,0,7,false,,false,false,,,https://github.com/NREL,http://www.nrel.gov,"Golden, CO",,,https://avatars.githubusercontent.com/u/1906800?v=4,,, Solar electricity Nowcasting,Build the world's best near-term forecasting system for solar electricity generation.,openclimatefix,https://github.com/openclimatefix/nowcasting.git,github,nowcasting,Photovoltaics and Solar Energy,"2023/10/16, 11:44:00",85,0,35,true,TypeScript,Open Climate Fix,openclimatefix,"TypeScript,JavaScript,Python,CSS,Shell",https://openclimatefix.org/projects/nowcasting/,"b'# Solar Electricity Nowcasting\n\n[![All Contributors](https://img.shields.io/badge/all_contributors-8-orange.svg?style=flat-square)](#contributors-)\n\n\nThis is a ""meta-repository"" for [Open Climate Fix](https://openclimatefix.org/)\'s solar electricity nowcasting project. See [this great Wired article about OCF\'s solar electricity forecasting work](https://www.wired.co.uk/article/solar-weather-forecasting) for a good intro to solar electricity nowcasting.\n\nThe plan is to enable the community to build the world\'s best near-term forecasting system for solar electricity generation, and then let anyone use it! :) We\'ll do this by using state-of-the-art machine learning and 5-minutely satellite imagery to predict the movement of clouds over the next few hours, and then use this to predict solar electricity generation.\n\nThe term ""nowcasting"" just means ""forecasting for the next few hours using statistical techniques"".\n\n# Why is all this stuff open-source?\n\nIn OCF, we\'re curious to see if it\'s possible to rapidly mitigate climate change by:\n\n1. Enabling thousands of people to help solve ML problems which, if solved, might help reduce CO2 emissions\n2. Running small(ish) pilot projects to implement the best solution in industry\n3. Enabling thousands of practitioners to use the code in their products.\n\n# What\'s the likely climate impact?\n\nIt\'s really, really, _really_ hard to estimate climate impact of forecasting! But, as a super-rough back-of-the-envelope calculation, we estimate that better solar forecasts, if rolled out globally, could reduce CO2 emissions by about a billion tonnes between now and 2035.\n\n# Getting involved\n\n- [List of ""good first issues""](https://github.com/search?l=&p=1&q=user%3Aopenclimatefix+label%3A%22good+first+issue%22&ref=advsearch&type=Issues&utf8=%E2%9C%93&state=open): GitHub ""issues"" which describe changes we\'d like to make to the code.\n- [OCF\'s coding style](https://github.com/openclimatefix/nowcasting/blob/main/coding_style.md)\n- The main tools we use include: PyTorch, PyTorch Lighting, xarray, pandas, pvlib\n\n# Overview of OCF\'s nowcasting repositories\n\n## Downloading data & getting the data in the right shape for ML experiments\n\n- [nowcasting_dataset](https://github.com/openclimatefix/nowcasting_dataset): Pre-prepares ML training batches. Loads satellite data, numerical weather predictions, solar PV power generation timeseries, and other datasets. 
Outputs pre-prepared ML training batches as NetCDF files (one batch per NetCDF file).\n- [Satip](https://github.com/openclimatefix/Satip): Retrieve, transform and store EUMETSAT data.\n- [pvoutput](https://github.com/openclimatefix/pvoutput): Python code for downloading PV data from [PVOutput.org](https://PVOutput.org).\n\n### Older code (no longer maintained)\n\n- [satellite_image_processing](https://github.com/openclimatefix/satellite_image_processing)\n- [eumetsat](https://github.com/openclimatefix/eumetsat): Tools for downloading and processing satellite images from EUMETSAT\n\n## Machine Learning\n\n### Main repositories for our experiments:\n\n- [satflow](https://github.com/openclimatefix/satflow): Satellite Optical Flow with machine learning models. Predicting the next few hours of satellite imagery from the recent history of satellite imagery (and other data sources).\n- [predict_pv_yield](https://github.com/openclimatefix/predict_pv_yield): Using optical flow (and the output of satflow) & machine learning to predict solar PV yield (i.e. to predict the power generated by solar electricity systems over the next few hours). An older set of experiments is in [predict_pv_yield_OLD](https://github.com/openclimatefix/predict_pv_yield_OLD), which is no longer maintained.\n- [nowcasting_utils](https://github.com/openclimatefix/nowcasting_utils): Forecasting performance metrics, plotting functions, loss functions, etc.\n- [nowcasting_dataloader](https://github.com/openclimatefix/nowcasting_dataloader): PyTorch dataloader for taking pre-prepared batches from `nowcasting-dataset` and getting them into our models.\n\n### PyTorch implementations of ML models from the literature\n\n- [MetNet](https://github.com/openclimatefix/metnet): PyTorch Implementation of Google Research\'s MetNet ([S\xc3\xb8nderby et al. 2020](https://arxiv.org/abs/2003.12140)), inspired from Thomas Capelle\'s [metnet_pytorch](https://github.com/tcapelle/metnet_pytorch/tree/master/metnet_pytorch).\n- [skillful_nowcasting](https://github.com/openclimatefix/skillful_nowcasting): Implementation of DeepMind\'s Skillful Nowcasting GAN ([Ravuri et al. 2021](https://arxiv.org/abs/2104.00954)) in PyTorch Lightning.\n- [perceiver-pytorch](https://github.com/openclimatefix/perceiver-pytorch): Implementation of DeepMind\'s Perceiver ([Jaegle et al. 2021](https://arxiv.org/abs/2103.03206)) and Perceiver IO ([Jaegle et al. 2021](https://arxiv.org/abs/2107.14795)) in Pytorch. Forked from [lucidrains/perceiver-pytorch](https://github.com/lucidrains/perceiver-pytorch).\n\n### Older code (no longer maintained)\n\n- [solar-power-mapping-data](https://github.com/openclimatefix/solar-power-mapping-data): Code to create rich harmonised geographic data for PV installations from OpenStreetMap and other sources. Mostly by Dan Stowell, The Turing Institute, and Sheffield Solar. The code behind the 2020 paper [""A harmonised, high-coverage, open dataset of solar photovoltaic installations in the UK""](https://www.nature.com/articles/s41597-020-00739-0) by Stowell et al.\n- [predict_pv_yield_OLD](https://github.com/openclimatefix/predict_pv_yield_OLD)\n- [predict_pv_yield_NWP](https://github.com/openclimatefix/predict_pv_yield_nwp): Build a baseline model for predicting PV yield using NWP (numerical weather predictions), as opposed to satellite imagery. 
This model is intentionally very simple, so we can get an end-to-end system up and running quickly to iterate on.\n- [metoffice_ec2](https://github.com/openclimatefix/metoffice_ec2): Extract specific parts of the [UK Met Office\'s UKV and MOGREPS-UK numerical weather predictions from AWS](https://registry.opendata.aws/uk-met-office/), compress, and save to S3 as Zarr. Intended to run on AWS EC2.\n- [metoffice_aws_lambda](https://github.com/openclimatefix/metoffice_aws_lambda): Simple AWS Lambda function to extract specific parts of the UK Met Office\'s UKV and MOGREPS-UK numerical weather predictions, compress, and save to S3 as Zarr. (We found that AWS Lambda is not a good fit for this task because we actually have to do a bit of heavy-lifting, which gets very expensive on Lambda!)\n\n## Operational solar nowcasting\n\n- [nowcasting_api](https://github.com/openclimatefix/nowcasting_api): API for hosting nowcasting solar predictions. Will just return \'dummy numbers\' until about mid-2022!\n\nFor a complete list of all of OCF\'s repositories tagged with ""nowcasting"", see [this link](https://github.com/search?l=&o=desc&q=topic%3Anowcasting+org%3Aopenclimatefix&s=updated&type=Repositories)\n\n## Contributors \xe2\x9c\xa8\n\nThanks goes to these wonderful people ([emoji key](https://allcontributors.org/docs/en/emoji-key)):\n\n

Damien Tanner (\xf0\x9f\x93\x86), lina (\xf0\x9f\x92\xbb), AlaaTohamy (\xf0\x9f\x92\xbb), Flo (\xf0\x9f\x92\xbb), dantravers (\xf0\x9f\xa4\x94), Peter Dudfield (\xf0\x9f\x92\xbb), braddf (\xf0\x9f\x92\xbb), rachel tipton (\xf0\x9f\x91\x80 \xf0\x9f\x92\xbb)
\n\n\n\n\n\n\nThis project follows the [all-contributors](https://github.com/all-contributors/all-contributors) specification. Contributions of any kind welcome!\n'",",https://arxiv.org/abs/2003.12140,https://arxiv.org/abs/2104.00954,https://arxiv.org/abs/2103.03206,https://arxiv.org/abs/2107.14795","2020/12/15, 15:40:01",1044,MIT,328,818,"2023/10/18, 12:13:52",48,160,372,138,7,9,2.5,0.6124197002141327,,,1,12,false,,false,false,,,https://github.com/openclimatefix,openclimatefix.org,London,,,https://avatars.githubusercontent.com/u/48357542?v=4,,, Solar Forecast Arbiter,"Core data gathering, validation, processing, and reporting package for the Solar Forecast Arbiter.",SolarArbiter,https://github.com/SolarArbiter/solarforecastarbiter-core.git,github,,Photovoltaics and Solar Energy,"2023/01/24, 23:03:27",31,4,10,true,Python,Solar Forecast Arbiter,SolarArbiter,"Python,Jinja,HTML,TeX,Dockerfile,C",https://solarforecastarbiter-core.readthedocs.io,"b'[![Build Status](https://github.com/solararbiter/solarforecastarbiter-core/workflows/CI/badge.svg)](https://github.com/SolarArbiter/solarforecastarbiter-core/actions)\n[![Total alerts](https://img.shields.io/lgtm/alerts/g/SolarArbiter/solarforecastarbiter-core.svg?logo=lgtm&logoWidth=18)](https://lgtm.com/projects/g/SolarArbiter/solarforecastarbiter-core/alerts/)\n[![codecov](https://codecov.io/gh/solararbiter/solarforecastarbiter-core/branch/master/graph/badge.svg)](https://codecov.io/gh/solararbiter/solarforecastarbiter-core)\n[![Documentation Status](https://readthedocs.org/projects/solarforecastarbiter-core/badge/?version=latest)](https://solarforecastarbiter-core.readthedocs.io/en/latest/?badge=latest)\n[![DOI](https://zenodo.org/badge/DOI/10.5281/zenodo.3473590.svg)](https://doi.org/10.5281/zenodo.3473590)\n[![Docker Repository on Quay](https://quay.io/repository/solararbiter/solarforecastarbiter-core/status ""Docker Repository on Quay"")](https://quay.io/repository/solararbiter/solarforecastarbiter-core)\n\n# solarforecastarbiter-core\nCore Solar Forecast Arbiter data gathering, validation, processing, and\nreporting package.\n\n# Installation\n\nSee the [installation](https://solarforecastarbiter-core.readthedocs.io/en/latest/installation.html) instructions in the documentation.\n\n# Documentation\n\nThe documentation is hosted at [solarforecastarbiter-core.readthedocs.io](https://solarforecastarbiter-core.readthedocs.io/en/latest/)\n\n# Contributing\n\nWe welcome your contributions. 
Please see our [contributing guide](https://solarforecastarbiter-core.readthedocs.io/en/latest/contributing.html).\n\n# Architecture\n\nThe Solar Forecast Arbiter consists of the Dashboard, API, and this Core package.\nSee [solarforecastarbiter.org/documentation](https://solarforecastarbiter.org/documentation/)\nfor descriptions of each project and how they work together.'",",https://doi.org/10.5281/zenodo.3473590","2019/01/23, 17:31:36",1736,MIT,19,493,"2023/06/30, 07:18:12",151,350,661,7,117,12,1.8,0.6616257088846881,"2022/02/11, 21:26:33",1.0.13,0,7,false,,false,true,"Njadrick/solar-dashboard-,SolarArbiter/workshop,SolarArbiter/solarforecastarbiter-dashboard,SolarArbiter/solarforecastarbiter-api",,https://github.com/SolarArbiter,https://forecastarbiter.epri.com,United States of America,,,https://avatars.githubusercontent.com/u/43686373?v=4,,, pv-system-profiler,Estimating PV array location and orientation from real-world power datasets.,slacgismo,https://github.com/slacgismo/pv-system-profiler.git,github,,Photovoltaics and Solar Energy,"2021/10/13, 19:41:56",8,3,2,false,Jupyter Notebook,SLAC GISMo,slacgismo,"Jupyter Notebook,Python,Shell",,"b'# pv-system-profiler\n### Estimating PV array location and orientation from real-world power datasets.\n\n
[badges: Latest Release | License | Build Status | Code Quality | Publications | PyPI Downloads | Conda Downloads]
\n\n## Install & Setup\n\n#### 1) Recommended: Set up `conda` environment with provided `.yml` file\n\nWe recommend setting up a fresh Python virtual environment in which to use `pv-system-profiler`. We recommend using the [Conda](https://docs.conda.io/projects/conda/en/latest/index.html) package management system, and creating an environment with the environment configuration file named `pvi-user.yml`, provided in the top level of this repository. This will install the `statistical-clear-sky` and `solar-data-tools` packages as well.\n\nCreating the env:\n\n```bash\n$ conda env create -f pvi-user.yml\n```\n\nStarting the env:\n\n```bash\n$ conda activate pvi_user\n```\n\nStopping the env:\n\n```bash\n$ conda deactivate\n```\n\nAdditional documentation on setting up the Conda environment is available [here](https://github.com/slacgismo/pvinsight-onboarding/blob/main/README.md).\n\n\n#### 2) PIP Package\n\n```sh\n$ pip install pv-system-profiler\n```\n\nAlternative: Clone repo from GitHub\n\nMimic the pip package by setting up locally.\n\n```bash\n$ pip install -e path/to/root/folder\n```\n\n#### 3) Anaconda Package\n\n```sh\n$ conda install -c slacgismo pv-system-profiler\n```\n\n## Solver Dependencies\n\nRefer to the [solar-data-tools](https://github.com/slacgismo/solar-data-tools) documentation for more info about the solvers being used.\n\n## Usage / Run Scripts\n### Serial run\nThe `parameter_estimation_script.py` script creates a report of all systems based on the `csv` files with the system signals located in a given folder.\nThe script takes all input parameters as `kwargs`. The example below illustrates the use of the report script:\n```shell\npython \'repository location of run script\'/parameter_estimation_script.py report None all \ns3://s3_bucket_with_signals/ \'repeating_part_of label\' /home/results.csv True False \nFalse False s3://\'s3_path_to_file_containing_metadata/metadata.csv\' None s3\n```\nIn the example above the full path to `parameter_estimation_script.py` is specified to run a\n`report`. The script allows the user to provide a `csv` file with a list of sites to be analyzed. In this case no list is provided \nand therefore the `kwarg` `None` is entered. The script also allows running an analysis on the first `n_files` files containing \ninput signals in the `s3` repository. In this case, the `all` `kwarg` specifies that all input signals are to be analyzed. \nIn this example, all `csv` files containing the input signals are located in the `s3` bucket with the name \n`s3://s3_bucket_with_signals/`. Usually these `csv` files are of the form `ID_repeating_part_of_label.csv`, for example:\n`1_composite_10.csv`, `2_composite_10.csv`, where `_composite_10` is the repeating part of the label. The repeating part \nof the label is either None or a string as in the example above. Next, an absolute path to the desired location of the \nresults file is provided, in this case `/home/results.csv`. The two following `kwargs` are of type Boolean and are used to set the \nvalues of the `correct_tz` and `fix_shifts` pipeline `kwargs`. The next `kwarg`, `check_json`, is also Boolean. It \nis used to indicate if there is a `json` file present in `s3://s3_bucket_with_signals/` with additional site information \nthat is to be analyzed. The next Boolean `kwarg` is used to set the `convert_to_ts` `kwarg` when instantiating the data \nhandler. The next `kwarg` contains the full path to the `csv` file containing site metadata, here called `metadata.csv`. 
\nThe information that this file should contain varies depending on the `estimation` to be performed. This file is \noptional and the `kwarg` can be set to `None`. For the case of a `report`, a `csv` file with columns labeled `site`, \n`system` and `gmt_offset` and their respective values needs to be provided. Alternatively, if the `gmt_offset` `kwarg`, \nthe next `kwarg` (in the example above set to `None`), has a numeric \nvalue different to `None`, all sites will use that single value when running the report. For the case of the `report` \nestimation, the metadata file should contain `site`, `system` and `gmt_offset` columns with the respective\nvalues for each system. For the case of the `longitude` estimation, the metadata file should contain `site`, `system` \nand `latitude` columns with the respective values for each system. For the case of the `tilt_azimuth` estimation, the \nmetadata file should contain `site`, `system`, `gmt_offset`, `estimated_longitude`, `estimated_latitude`, `tilt` and \n`azimuth` columns with the respective values for each system. Additionally, if a manual inspection for time shifts \nwas performed, another \ncolumn labeled `time_shift_manual`, having a zero for systems with no time shift and a one for systems with a time shift, \nmay be included. If a `time_shift_manual` column is included, it will be used to determine whether the `fix_dst()` \nmethod is run after instantiating the data handler. The next `kwarg` is `gmt_offset`, and in this case it is set to `None`. \nThe last `kwarg` corresponds to the `data_source`. In this case the value is `s3` since the files with the input signals are\nlocated in an `s3` bucket.\n
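\nA small sketch of building the metadata file for a `report` run with pandas (column names per the description above; the site and system values are just examples):\n\n```python\nimport pandas as pd\n\n# One row per system: site ID, system ID, and the site\'s GMT offset\nmeta = pd.DataFrame({\n    \'site\': [\'site_1\', \'site_2\'],\n    \'system\': [1, 2],\n    \'gmt_offset\': [-8, -8],\n})\nmeta.to_csv(\'metadata.csv\', index=False)\n```\n\n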
\n### Partitioned run\nA script that runs the site report, longitude, latitude, and tilt and azimuth scripts using a number of user-prescribed \nAmazon Web Services (AWS) instances is provided. The script reads the folder containing the system signals and \npartitions these signals to run on `n` user-prescribed AWS instances in parallel. Here is an example shell command \nfor a partitioned run:\n```shell\npython \'repository location of run script\'/run_partition_script.py parameter_estimation_script.py report None all \ns3://s3_bucket_with_signals/ \'repeating_part_of label\' /home/results.csv True False \nFalse False s3://\'s3_path_to_file_containing_metadata/metadata.csv\' None s3\n\'repository location of run script\'/parameter_estimation_script.py pvi-dev my_instance\n```\nwhere the individual values of each `kwarg` are defined in `run_partition_script.py`. This script takes the same inputs as\n`parameter_estimation_script.py` plus three additional parameters. Note that the command begins with the full path to the \npartitioning script, `run_partition_script.py`. The full path to the estimation run \nscript, `parameter_estimation_script.py`, is specified as the third to last `kwarg`. The second to last `kwarg` is the conda\nenvironment to be used to run the estimation, in this case `pvi-dev`. The last `kwarg` is the name of the AWS instances to\nbe used to run `run_partition_script.py`, in this case `my_instance`. Prior to running this command it is necessary to create `n` identical AWS \ninstances that correspond to the number of desired partitions. These instances need to have the same \n`Name=\'instance name\'` AWS tag. The simplest way to accomplish this is by starting from an AWS image of a previously \nconfigured instance. This image needs to have all the repositories and conda environments that \nwould be needed in a serial run. Once each partitioned run is finished, results will be automatically collected in the \nlocal folder where `run_partition_script.py` was run. \n## Unit tests\n\nIn order to run the unit tests:\n```\npython -m unittest -v\n```\n\n## Test Coverage\n\nIn order to view the current test coverage metrics:\n```\ncoverage run --source pvsystemprofiler -m unittest discover && coverage html\nopen htmlcov/index.html\n```\n\n## Versioning\n\nWe use [Semantic Versioning](http://semver.org/) for versioning. For the versions available, see the [tags on this repository](https://github.com/slacgismo/pv-system-profiler/tags).\n\n## License\n\nThis project is licensed under the BSD 2-Clause License - see the [LICENSE](LICENSE) file for details.\n'",",https://zenodo.org/badge/latestdoi/183074637","2019/04/23, 18:33:36",1646,BSD-2-Clause,0,778,"2021/10/13, 19:42:00",4,28,29,0,742,1,0.1,0.19581749049429653,"2021/10/13, 19:42:35",v0.1.4,0,4,false,,false,true,"MichaelHopwood/ForwardForwardOneclass,ChrisKre/Photovoltaik_GAN,slacgismo/solar-data-tools",,https://github.com/slacgismo,https://gismo.slac.stanford.edu/,"SLAC National Accelerator Laboratory, Menlo Park, CA 94025",,,https://avatars.githubusercontent.com/u/19895500?v=4,,, "A Global Inventory of Commercial-, Industrial-, and Utility-Scale Photovoltaic Solar Generating Units",Used to produce a global inventory of utility-scale solar photovoltaic generating stations.,Lkruitwagen,https://github.com/Lkruitwagen/solar-pv-global-inventory.git,github,,Photovoltaics and Solar Energy,"2021/11/25, 08:55:36",122,0,20,false,Jupyter Notebook,,,"Jupyter Notebook,Python",,"b'# A Global Inventory of Commercial-, Industrial-, and Utility-Scale Photovoltaic Solar Generating Units\nRepository for the machine learning and remote sensing pipeline described in [Kruitwagen, L., Story, K., Friedrich, J., Byers, L., Skillman, S., & Hepburn, C. (2021) A global inventory of photovoltaic solar energy generating units, _Nature_ **598**, 604\xe2\x80\x93610](https://www.nature.com/articles/s41586-021-03957-7). \n\n# Project Summary\n\n## Abstract\n\nPhotovoltaic (PV) solar energy generating capacity has grown by 41\\% per year since 2009. This rapid deployment of solar energy must continue if climate and Sustainable Development Goals are to be met. Energy system projections that mitigate climate change and facilitate universal energy access show a nearly ten-fold increase in PV solar energy generating capacity by 2040. Geospatial data describing the energy system is required to manage generation intermittency, mitigate climate change risks, and identify trade-offs with biodiversity, conservation, and land protection priorities caused by the land use and land cover change necessary for PV deployment. Currently available inventories of solar generating capacity cannot fully address these needs. Here, we provide a global inventory of commercial-, industrial-, and utility-scale PV solar energy generation stations (i.e. PV generating stations in excess of 10kW nameplate capacity) using a longitudinal corpus of remote sensing imagery, machine learning, and a large cloud computation infrastructure. We locate and verify 68,661 facilities, an increase of 253\\% (in number of facilities) on the previously best-available asset-level data. With the help of a hand-labelled test set, we estimate global installed generating capacity to be 423GW [-75GW, +77GW] at the end of 2018. 
Enrichment of our dataset with estimates of facility installation date, historic land cover classification, and proximity to protected areas and indigenous and community lands allows us to show that the majority of the PV solar energy facilities are sited on cropland, followed by aridlands and grassland. Our inventory can aid PV delivery aligned with the Sustainable Development Goals.\n\n## Figure Highlights\n\n### Computer Vision with Remote Sensing Imagery to Detect Solar PV\n\n**Figure 1:** We detect utility-scale (>10kW) solar PV facilities with machine learning in Sentinel-2 and SPOT6/7 remote sensing imagery. Here, we show out-of-training-sample examples of SPOT6/7 and Sentinel-2 optical imagery, primary inference from U-Net[2](https://arxiv.org/abs/1505.04597) computer vision models, and vectorised polygon outputs. Our models are robust to a variety of geometries and orientations, land covers, seasons, and atmospheric conditions.\n\n![alt text](makefigs/figures/fig-1_samples.png ""Computer Vision with Remote Sensing Imagery to Detect Solar PV"")\n\n\n### A Machine Learning Pipeline for Global Deployment\n\n**Figure A1:** Our machine learning pipeline diagram. The pipeline was split into two branches, one for each satellite constellation, and two steps: global search to minimise false negatives, and filtering to eliminate false positives. The pipeline was deployed on 72.1mn km2, approximately half of the Earth\'s land surface area, based on population density. Additional machine learning models were used to filter the dataset for false positives. The remaining detections were verified by hand to ensure a high-quality dataset. Installation date for each solar PV facility was inferred heuristically from the detection timeseries. \n\n![alt text](makefigs/figures/fig-A1_pipeline.png ""A Machine Learning Pipeline for Global Deployment"")\n\n### A Global Dataset\n\n**Figure 2:** We deploy our pipeline on imagery captured until 2018-12-31, providing a snapshot of the state of utility-scale solar PV diffusion at the end of 2018. We visualise our dataset and observe the emergence of hotspots in space and time. We use global data for incident irradiation and solar PV productivity to estimate facility-level AC generation capacity. Over our 30-month study window, we observe an increase of 81\\% in deployed generating capacity, led by increases in China (120\\%), India (184\\%), the EU-27+GB (20\\%), the United States (58\\%), and Japan (119\\%). \n\n![alt text](makefigs/figures/fig-2_global.png ""A Global Dataset"")\n\n### Novel Land-Cover Analysis\n\n**Figure 3:** To demonstrate the utility of our asset-level dataset, we prepare an analysis of pre-installation landcover for utility-scale solar PV. The land chosen for the development of solar PV has impacts on and trade-offs with the costs of the solar PV system, greenhouse gas emissions net of land cover change, ecosystem health, water resources and food production, land and property values, and political acceptability, and so is an urgent priority for study. We find no consistent trend in land cover chosen for solar PV development over the study period (panel b), and we observe that the areas chosen for PV deployment skew heavily towards areas with extensive cropland (panel e). However, within these areas, deployment skews to barren land and grasslands (panel d). Installation size skews larger for barren land covers (i.e. solar PV mega-projects) and smaller for developed areas (i.e. 
rooftop commercial and industrial installations) (panel c).\n\n![alt text](makefigs/figures/fig-3_land_cover_global.png ""Land Cover Analysis"")\n\n### Detailed Country-Level Insight\n\n**Figure A10:** We provide analysis at the country-level for the top 20 countries in our dataset. PV installations in most countries displace cropland. China, Chile, India, and South Africa have barren-land PV megaprojects. Among European Economic Area countries, France and Germany are unique for showing a local skew _towards_ developed areas, while all others show a reinforced skew towards croplands. Development in most countries appears to disfavour sites with pre-existing forests, with the exception of South Africa.\n\n![alt text](makefigs/figures/fig-A10_land_cover_regional.png ""Country Level Insight"")\n\n## Dataset Availability\n\nRecognising the fundamental public-goods nature of asset-level data and its importance in the urgent mitigation of climate change, we make our dataset publicly available.\n\nThe complete dataset can be downloaded from the [Zenodo data repository](https://zenodo.org/record/5005868).\n\nAn interactive visualisation of our dataset is available from the World Resources Institute [here](https://resourcewatch.org/data/explore/ene032-Solar-Plants_1).\n\n## Acknowledgements\n\nThe authors acknowledge the generous contribution of [Descartes Labs, Inc.](https://www.descarteslabs.com/) which provided the authors with API credentials for easy imagery access and manipulation, and a cloud computation platform for imagery analysis. Descartes Labs is a spin-out company from Los Alamos National Laboratory that provides a data refinery for satellite imagery. The authors also acknowledge the generous support of the [World Resources Institute](https://www.wri.org/) who provided insight and data resources to the project. [Wiki-Solar](https://wiki-solar.org/) also provided valuable insight and data. The Sentinel-2 semantic segmentation model was trained on Amazon Web Services with a supporting grant. The hand-verification of Sentinel-2 detections was supported by Microsoft Azure cloud computing services with credits provided by the AIforEarth program.\n\n\n# Repository\n\n## Setup\n\n### Virtual Environment\n\nWe recommend using Conda for package and environment management. Create a new conda environment:\n\n conda create -n solar-pv python=3.6\n\n### Clone Repository\n\nClone this repository using git:\n\n git clone https://github.com/Lkruitwagen/solar-pv-global-inventory.git\n\nAdd the directory root to the Python path environment variable:\n\n export PYTHONPATH=$(pwd):$PYTHONPATH\n\n(optional) You may want to add this to a bash script for your environment:\n\n touch //path/to/conda/envs/solar-pv/etc/conda/activate.d/env_vars.sh\n nano //path/to/conda/envs/solar-pv/etc/conda/activate.d/env_vars.sh\n\nThen input:\n\n export PYTHONPATH=$(pwd):$PYTHONPATH\n\nand save and exit.\n\n### Install Packages\n\nInstall Python packages via pip:\n\n pip install -r requirements.txt\n\n### Descartes Labs\n\nDescartes Labs alpha and Airbus SPOT6/7 access is required to run this repository.\n\n### Gurobi\n\nMatching installations to existing data uses a mixed integer linear program specified using [PuLP](https://pypi.org/project/PuLP/). We use [Gurobi](https://www.gurobi.com/), a commercial solver, to solve the linear program. 
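\n\nAs a rough illustration of this kind of matching problem (a sketch only, not the actual formulation in `match_region.py`; the detections, assets, and distances below are hypothetical), a small assignment model in PuLP looks like this:\n\n```python\nfrom pulp import LpBinary, LpMinimize, LpProblem, LpVariable, lpSum\n\n# Hypothetical pairwise distances (km) between our detections and\n# assets in an existing inventory.\ndist = {(\'det_1\', \'asset_a\'): 0.4, (\'det_1\', \'asset_b\'): 7.2,\n        (\'det_2\', \'asset_a\'): 5.1, (\'det_2\', \'asset_b\'): 0.9}\ndetections = [\'det_1\', \'det_2\']\nassets = [\'asset_a\', \'asset_b\']\n\nprob = LpProblem(\'matching\', LpMinimize)\nx = {p: LpVariable(f\'x_{p[0]}_{p[1]}\', cat=LpBinary) for p in dist}\n\n# Minimise the total distance of the chosen matches.\nprob += lpSum(dist[p] * x[p] for p in dist)\n# Each detection is matched to exactly one asset; each asset is used at most once.\nfor d in detections:\n    prob += lpSum(x[(d, a)] for a in assets) == 1\nfor a in assets:\n    prob += lpSum(x[(d, a)] for d in detections) <= 1\n\nprob.solve()  # defaults to the bundled CBC solver; pass pulp.GUROBI_CMD() to use Gurobi\n```\n\n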
Gurobi requires a [license](https://www.gurobi.com/documentation/9.0/quickstart_mac/retrieving_and_setting_up_.html); it can be installed with:\n\n    conda config --add channels http://conda.anaconda.org/gurobi\n    conda install gurobi\n\n## Directories and Scripts\n- **solarpv\\\\**\n  - **training\\\\**\n    - **s2\\\\**\n      - `model_resunet.json`: ResUNet specification\n      - `S2_training_data.py`: Training data generator from cloud resource to disk\n      - `training_data_mp.py`: Multithreaded training data generator from cloud resource to disk\n      - `train_S2_RNN1.py`: Training for S2 Branch RNN-1\n      - `train_S2_RNN2.py`: Training for S2 Branch RNN-2\n      - `train_S2_unet.py`: Training for S2 Branch UNet with generator from disk\n    - **spot\\\\**\n      - `generator.py`: SPOT UNet training generator\n      - `optimizer.py`: SPOT UNet training optimizer\n      - `train.py`: SPOT UNet training entry point\n      - `train_classifier.py`: SPOT classifier training entry point\n      - `train_solar_unet.ipynb`: SPOT UNet training prototyping\n      - `transforms.py`: SPOT UNet training transforms\n      - `unet.py`: SPOT UNet model generator\n  - **deployment\\\\**\n    - `cloud_dl_functions.py`: Pipeline functions for deployment on DL tasks queuing\n    - `create_cloud_functions.py`: Scripts for deploying DL cloud functions\n    - `create_cloud_products.py`: Scripts for creating DL cloud products for reading/writing by cloud functions\n    - `pipeline.py`: Entrypoint for running geographies through the deployment pipeline\n    - `store_model.py`: Scripts to push inference models to cloud storage\n  - **analysis\\\\**\n    - **quality\\\\**\n      - `deploy_precisions.ipynb`: Notebook for obtaining precision of S2 and SPOT branch deployment\n      - `precision-recall.ipynb`: Notebook for obtaining cross-validation precision, recall, and intersection-over-union for all pipeline stages\n      - `S2_band_dropout.py`: Band dropout analysis for the primary S2 inference model\n      - `SPOT_band_dropout.ipynb`: Band dropout analysis for the primary SPOT inference model\n    - **matching\\\\**\n      - `match_region.py`: Mixed-integer linear programming (MILP) matching script for our dataset with other asset-level data\n      - `match_postprocess.ipynb`: Notebook to extract insight after matching between our dataset and other available datasets\n      - `MILP_WRI-matching_stripped.ipynb`: Prototyping for MILP matching\n      - `vincenty.py`: Vincenty geodesic distance scripts\n    - **landcover\\\\**\n      - `add_land_cover.py`: Multithreaded script for adding land cover to PV detections\n      - `cloud_land_cover.ipynb`: Notebook for cloud-based addition of land cover to PV detections\n      - `land_cover_skew_analysis.ipynb`: Notebook for analysis of land cover in our PV detections\n      - `land_cover_skew_deploy.ipynb`: Notebook for cloud-based reduction of land cover across large geographies\n    - **generating_capacity\\\\**\n      - `MW_capacity.ipynb`: Add generating capacity to PV detections\n  - `utils.py`: Shared utilities\n- **makefigs\\\\**\n  - `fig-1_prediction_map.py`: Script to generate Figure 1\n  - `fig-2_results_map.py`: Script to generate Figure 2\n  - `fig-3_fig-A10_maplandcover.py`: Script to generate Figure 3 and Figure A10\n  - `fig-A2_area_dist.py`: Script to generate Figure A2\n  - `fig-A3_deployment_area.py`: Script to generate Figure A3\n  - `fig-A6_PR_summary.py`: Script to generate Figure A6\n  - `fig-A7_deploy_precision.py`: Script to generate Figure A7\n  - `fig-A8_band_perturbation.py`: Script to generate Figure A8\n  - `fig-A9_install_date_US.py`: Script to generate Figure A9\n  - **figures\\\\**\n    - [All figures used in 
the preparation of the paper]\n- **data\\\\**\n  - [All data used in the training, deployment, and analysis workflows]\n- **bin\\\\**\n  - `CORINE2DL.ipynb`: Notebook for uploading Copernicus CORINE land cover products to DL product\n  - `genyaml_cloudfunctions.py`: Script to generate DL cloud function YAML \n  - `genyaml_cloudproducts.py`: Script to generate DL cloud product YAML\n  - `make_S2_cv_samples.py`: Script to generate samples for S2 Branch cross-validation\n  - `MODIS2DL.ipynb`: Notebook for uploading MODIS land cover products to DL product\n  - `color_gdf.ipynb`: Notebook to add color to PV detections for Earth Engine visualisation\n\n\n## Data\n\nAll data, including training, validation, test, and predicted datasets, is available to download from the [Zenodo repository](https://zenodo.org/record/5005868). An interactive visualisation is also offered by the [World Resources Institute ResourceWatch](https://resourcewatch.org/data/explore/ene032-Solar-Plants).\n\n## Workflows\n\n### Training\n\n#### Sentinel-2 Training\n\n1. Sentinel-2 training samples can be generated using the multithreaded generator: \n```python\npython solarpv/training/s2/training_data_mp.py\n```\n\n2. Train the primary inference UNet model, RNN-1, and RNN-2:\n```python\npython solarpv/training/s2/train_S2_unet.py\npython solarpv/training/s2/train_S2_RNN1.py\npython solarpv/training/s2/train_S2_RNN2.py\n```\n\n#### SPOT Training\n\n1. SPOT training samples use proprietary Airbus SPOT6/7 imagery and so must be retained on an infrastructure licensed for SPOT6/7 imagery. Training the SPOT branch requires access to the DescartesLabs platform. Contact the authors for details.\n2. Obtain the UNet training imagery and move these images into `data/SPOT_train/ground/` and obtain or develop `train_keys.txt` and `val_keys.txt`.\n3. Train the SPOT UNet model, entering the SPOT training scripts with `solarpv/training/spot/train.py`:\n```python\npython solarpv/training/spot/train.py --train\n```\n4. Obtain the classifier training imagery and move these images into `data/SPOT_train/classifier_ground/` and sort them into `train` and `val`, and `neg` and `pos` within each.\n5. Train the SPOT classifier model:\n```python\npython solarpv/training/spot/train_classifier.py --train\n```\n\n### Deployment\n\n1. Deployment makes use of the DescartesLabs platform. DescartesLabs _alpha_ access is required to successfully deploy the machine learning model. Contact the authors for details.\n2. Generate the YAML which tracks the cloud functions and cloud products used in the pipeline deployment:\n```python\npython bin/genyaml_cloudfunctions.py\npython bin/genyaml_cloudproducts.py\n```\n3. Generate the cloud product and the cloud functions:\n```python\npython solarpv/deployment/create_cloud_products.py\npython solarpv/deployment/create_cloud_functions.py\n```\n4. Store the machine learning models:\n```python\npython solarpv/deployment/store_model.py --model_path=""./s2_rnn1.h5""\npython solarpv/deployment/store_model.py --model_path=""./s2_rnn2.h5""\npython solarpv/deployment/store_model.py --model_path=""./s2_unet.h5""\npython solarpv/deployment/store_model.py --model_path=""./solar_pv_airbus_spot_rgbn_v5.h5""\n```\n5. Deploy pipeline stages to selected geographies. Pipeline stages must be one of `[S2Infer1, S2RNN1, S2Infer2, SPOTVectoriser]`, and geographies must be specified as an ISO 3166-1 two-letter code. 
For example, to deploy the primary inference stage for the United Kingdom:\n```python\npython solarpv/deployment/pipeline.py --model_path=""S2Infer1"" --geography=""GB""\n```\n\n### Analysis\n\n#### Quality\n\n1. Run the cross-validation precision-recall notebook `solarpv/analysis/quality/precision-recall.ipynb`\n2. Run the deployment precision notebook `solarpv/analysis/quality/deploy_precision.ipynb`\n3. Run the S2 band dropout script `solarpv/analysis/quality/S2_band_dropout.py`\n4. Run the SPOT band dropout notebook `solarpv/analysis/quality/SPOT_band_dropout.ipynb`\n\n#### Generating Capacity\n\n1. Run the generating capacity notebook `solarpv/analysis/generating_capacity/MW_capacity.ipynb`\n\n#### Landcover\n\n1. Run the cloud land cover notebook `solarpv/analysis/landcover/cloud_land_cover.ipynb`\n2. Run the cloud land cover skew notebook `solarpv/analysis/landcover/land_cover_skew_deploy.ipynb`\n3. Run the land cover skew analysis notebook `solarpv/analysis/landcover/land_cover_skew_analysis.ipynb`\n\n#### Matching\n\n1. Match geographies of interest with the WRI Global Power Plant Database and the EIA power plant dataset using `solarpv/analysis/matching/match_region.py`:\n```python\npython solarpv/analysis/matching/match_region.py --dataset=""wri"" --geography=""GB""\n```\n2. Postprocess the matches with `solarpv/analysis/matching/match_postprocess.ipynb`\n'",",https://arxiv.org/abs/1505.04597,https://zenodo.org/record/5005868,https://zenodo.org/record/5005868","2019/11/24, 22:39:07",1430,MIT,0,111,"2022/07/13, 12:53:29",1,7,11,0,469,0,0.0,0.03157894736842104,"2021/06/30, 08:43:52",v1.0.0,0,2,false,,false,true,,,,,,,,,,, dGen,"Forecast PV adoption based on user specified configurations like electricity rate prices, electricity load growth, solar resource factors, and much more.",NREL,https://github.com/NREL/dgen.git,github,,Photovoltaics and Solar Energy,"2023/10/03, 17:09:39",52,0,13,true,Python,National Renewable Energy Laboratory,NREL,"Python,Jupyter Notebook",,"b'


\n\nThe Distributed Generation Market Demand (dGen) Model\n=====================================================\n\n


\n\n\n## Documentation\n- [Webinar and Setup Tutorial](https://youtu.be/-Te5_KKZR8o)\n- [Official dGen Documentation](https://nrel.github.io/dgen/) \n- [Wiki](https://github.com/NREL/dgen/wiki)\n\nNote, after September 30th 2021 the model will be updated to version 2.0.0 and use parquet, rather than pickle (.pkl) formatted agent files. The agent data will be unchanged and the new parquet agent files can be found in [OEDI](https://data.openei.org/submissions/1931). If you wish to continue using version 1.0.0 with the pickle formatted agent files then you can find these agent files [here](https://data.nrel.gov/submissions/169).\n\n## Get Your Tools\nInstall Docker for [(Mac)](https://docs.docker.com/docker-for-mac/install/) or [(Windows)](https://docs.docker.com/docker-for-windows/install/)\n\n- Important: In Docker, go into Docker > Preferences > Resources and up the allocation for disk size image for Docker. 16 GB is recommended for smaller (state level) databasese. 32 GB is recommended for ISO specific databases. 70+GB is required for restoring the national level database. If you get a memory issue then you\'ll need to up the memory allocation and or will need to prune past failed images/volumes. Running the below docker commands will clear these out and let you start fresh:\n```\n $ docker system prune -a \n $ docker volume prune -f\n``` \n- Please refer to Docker\xe2\x80\x99s [documentation](https://docs.docker.com/reference/) for more details.\n\n- Install [Anaconda for Python 3.7](https://www.anaconda.com/distribution/). Users with VPNs may need to turn their VPNs off while installing or updating Anaconda.\n\n- Install [PgAdmin](https://www.pgadmin.org/download/). Ignore all of the options for docker, python, os host, etc.\n\n- Install Git: If you don\'t already have git installed, then navigate [here](https://www.atlassian.com/git/tutorials/install-git) to install it for your operating system.\n\nWindows users: \n- We recommend using Powershell.\n- If you don\'t have UNIX commands enabled for command prompt/powershell then you\'ll need to install Cygwin or QEMU to run a UNIX terminal.\n\n## Download Code \nUsers need to fork a copy of the dGen repo to their own private github account. \n\nNext, clone the forked repository to your local machine by running the following in a terminal/powershell/command prompt:\n```$ git clone https://github.com//dgen.git```\n\n\n# Running and Configuring dGen\n\n### A. Create Environment\nAfter cloning this repository and installing (and running) Docker as well as Anaconda, we\'ll create our environment and container:\n\n1. Depending on directory you cloned this repo into, navigate in terminal to the python directory (/../dgen/python) and run the following command:\n\n```$ conda env create -f dg3n.yml```\n\n- This will create the conda environment needed to run the dgen model.\n- The dgen model is optimized for Python v3 and above. Run ```$ conda list ``` to verify you have this version.\n\n2. 
This command will create a container with PostgreSQL initialized:\n\n```$ docker run --name postgis_1 -p 5432:5432 -e POSTGRES_USER=postgres -e POSTGRES_PASSWORD=postgres -d mdillon/postgis```\n\n- Alternatively, if having issues connecting to the postgres server in pgAdmin, run:\n\n```$ docker run --name postgis_1 -p 5432 -e POSTGRES_USER=postgres -e POSTGRES_PASSWORD=postgres -d mdillon/postgis```\n\n- This will allow the docker container to select a different port to forward to 5432.\n\nTo set up another docker container with a different database you can run:\n```$ docker run --name postgis_2 -p 7000:5432 -e POSTGRES_USER=postgres -e POSTGRES_PASSWORD=postgres -d mdillon/postgis``` where ```7000``` can be any port number not already in use. \n\n\n3. Connect to our postgresql DB. In the command line run the following:\n\n```\n $ docker exec -it <container_id> psql -U postgres\n $ postgres=# CREATE DATABASE dgen_db;\n```\n\nNotes:\n- Use the alpha-numeric container id rather than the container name.\n- The container id can be obtained by running ```$ docker container ls```. If this doesn\'t display anything try running ```$ docker container ps```.\n- If you get the error ``` psql: FATAL: the database system is starting up ``` try rerunning the docker exec command after a minute or so, because docker can take some time to initialize everything.\n- ```CREATE DATABASE``` will be printed when the database is created. ```\\l``` will display the databases in your server.\n\n\n### B. Download data (agents and database):\nDownload data by navigating to https://data.openei.org/submissions/1931 and clicking the \'model inputs\' tab. Make sure to unzip any zipped files once downloaded. Note: the 13.5 GB dgen_db.sql.zip file contains all of the data for national level runs. We recommend starting with the database specific to the state or ISO region you\'re interested in. \n\nFor example, if you want to simulate only California then navigate to the \'ca_final_db\' folder and download the dgen_db.sql file. \n\nYou will also need to download and unzip the agent files ""OS_dGen_Agents.zip"", making sure to use the correct agent file corresponding to the scenario you\'d like to run (e.g. commercial agents for California).\n\n#### Windows Users\n\nWe recommend using Powershell. If you don\'t have UNIX commands enabled for command prompt/powershell then you\'ll need to install Cygwin or QEMU to run a UNIX terminal.\n\nIn Powershell run the following (replace \'path_to_where_you_saved_database_file\' below with the actual path where you saved your database file):\n\n```\n $ docker cp /path_to_where_you_saved_data/dgen_db.sql <container_id>:/dgen_db.sql\n $ docker exec -i <container_id> psql -U postgres -d dgen_db -f dgen_db.sql\n```\n\n\n#### Mac Users\n\nIn a new terminal window run the following (make sure to replace \'path_to_where_you_saved_database_file\' below with the actual path where you saved your database file): \n\n```$ cat /path_to_where_you_saved_data/dgen_db.sql | docker exec -i <container_id> psql -U postgres -d dgen_db```\n\nNotes:\n- Restoring state/ISO databases will likely take 5-15 minutes. 
The national database will likely take 45-60 minutes.\n- Don\'t close docker at any point while running dGen.\n- The container can be ""paused"" by running ```$ docker stop <container_id>``` and ""started"" by running ```$ docker start <container_id>```\n- The container must be started/running to restore and/or access the database (including during model run time).\n\nTroubleshooting Container/Database Issues:\n- Make sure the disk size for Docker has been properly allocated (make sure at least 16 GB has been allocated for state level databases, at least 32 GB for ISO level databases, and at least 70 GB for the national database). You\'ll need to restart docker after changing the disk size in Docker\'s system preferences and will need to make a new container/start from scratch.\n- If making a new container first run ```docker system prune -a``` and ```docker volume prune -f```.\n- Make sure you\'ve specified the right path to the .sql file and make sure the .sql file is unzipped.\n- Make sure to use the container\'s alpha-numeric ID rather than the container name. \n- If on a VPN try turning the VPN off when making the container and restoring the database.\n- Try googling errors.\n\n### C. Create Local Server:\nOnce the database is restored (it will take some time), open PgAdmin and create a new server. Name this whatever you want. Input ""localhost"" (or 127.0.0.1) in the host/address cell and ""postgres"" in both the username and password cells. Upon refreshing this and opening the database dropdown, you should be able to see your database. \n\n### D: Activate Environment \nActivate the dg3n environment and launch spyder by opening a new terminal window and running the following commands:\n\n```\n $ conda activate dg3n\n $ (dg3n) spyder\n```\n\n- In spyder, open the ```dgen_model.py``` file. This is what we will run once everything is configured.\n\nNotes:\n- Sometimes Spyder can have issues accessing files. It may be helpful to set the working directory by right-clicking the white folder icon in the upper righthand corner and navigating to ```/path_to_where_you_cloned_dgen/dgen_os```.\n- Spyder\'s kernel can sometimes have issues/stop unexpectedly. Refreshing the kernel might help if you\'re encountering issues running dgen_model.py.\n- Spyder isn\'t necessary to use. If you\'d rather run dGen by launching python from the dg3n environment then by all means do so.\n\n### E: Configure Scenario\n1. Open the blank input sheet located in ```dgen_os/excel/input_sheet_v_beta.xlsm ``` (don\'t forget to enable macros!). This file defines most of the settings for a scenario. Configure it depending on the desired model run and save a copy in the input_scenarios folder, i.e. ```dgen_os/input_scenarios/my_scenario.xlsm```. \n\nSee the Input Sheet [Wiki page](https://github.com/NREL/dgen/wiki) for more details on customizing scenarios. \n\n\n2. In the python folder, open ```pg_params_connect.json``` and configure it to your local database. If you didn\'t change your username or password settings while setting up the docker container, this file should look like the below example:\n\n```\n {\n   ""dbname"": """",\n   ""host"": ""localhost"",\n   ""port"": ""5432"",\n   ""user"": ""postgres"",\n   ""password"": ""postgres""\n }\n```\n\n- dbname will likely just be ""dgen_db"" unless you changed the name of this database in postgres.\n- localhost could also be set as ""127.0.0.1"".\n- Save this file.\n- Make sure the role is set as ""postgres"" in ```settings.py``` (it is set as ""postgres"" already by default).\n
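\nBefore starting a run, you can sanity-check these connection parameters from Python. The snippet below is only a sketch and is not part of dGen; it assumes the `psycopg2` package is available in the environment (```$ pip install psycopg2-binary``` if it is not):\n\n```python\nimport json\nimport psycopg2\n\n# Read the same parameters dGen uses (file name from the step above).\nwith open(\'pg_params_connect.json\') as f:\n    params = json.load(f)\n\n# Open a test connection and print the PostgreSQL server version.\nconn = psycopg2.connect(\n    dbname=params[\'dbname\'],\n    host=params[\'host\'],\n    port=params[\'port\'],\n    user=params[\'user\'],\n    password=params[\'password\'],\n)\nprint(conn.server_version)\nconn.close()\n```\n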
\nThe cloned repository will have already initialized the default values for the following important parameters:\n\n* ``` start_year = 2014 ``` (in /../dgen/python/config.py) --> start year the model will begin at\n* ``` pg_procs = 2 ``` (in /../dgen/python/config.py) --> number of parallel processes the model will run with\n* ``` cores = 2 ``` (in /../dgen/python/config.py) --> number of cores the model will run with\n* ``` role = ""postgres"" ``` (in /../dgen/python/config.py) --> set role of the restored database\n\n\n### F: Run the Model\n\nOpen ```dgen_model.py``` in the Spyder IDE and hit the large green arrow ""play button"" near the upper left to run the model.\n\nOr, launch python from within the dg3n environment and run:\n```$ python dgen_model.py```\n\nNotes:\n- Only one agent file can be put in the input_agents directory.\n- Results from the model run will be placed in a SQL table called ""agent_outputs"" within a newly created schema in the connected database. \n- The database and results will be preserved in the docker container if you stop the container and/or close docker. Simply start the container to access the database again.\n- The database will not persist once a docker container is terminated. Results will need to be saved locally by downloading the agent_outputs table from the schema run of interest or by dumping the entire database to a .sql file (see below).\n\n## Saving Results:\n1. To back up the whole database, including the results from the completed run, please run the following command in terminal after changing the save path and database name:\n\n```$ docker exec <container_id> pg_dumpall -U postgres > /../path_to_save_directory/dgen_db.sql```\n\n- This .sql file can be restored in the same way as detailed above. \n\n2. To export just the ""agent_outputs"" table, simply right-click on this table and select the ""Import/Export"" option and configure how you want the data to be saved. Note: if a save directory isn\'t specified this will likely save in the home directory.\n\n\n## Notes:\n- The ""load_path"" variable in config.py from the beta release has been removed for the final release. The load data is now integrated into each database. 
Load data and metadata for the agents are still accessible via the OEDI data submission.\n'",,"2020/04/15, 15:51:02",1288,BSD-3-Clause,7,88,"2023/06/23, 18:52:15",8,7,24,5,124,2,0.0,0.5909090909090908,"2021/03/26, 20:34:39",1.0.0,0,5,false,,false,false,,,https://github.com/NREL,http://www.nrel.gov,"Golden, CO",,,https://avatars.githubusercontent.com/u/1906800?v=4,,, SOLECTRUS,An alternative photovoltaic dashboard that visualizes the yield and consumption.,solectrus,https://github.com/solectrus/solectrus.git,github,"senec,photovoltaic,photovoltaics,photovoltaics-dashboard,influxdb",Photovoltaics and Solar Energy,"2023/10/25, 17:36:43",72,0,40,true,Ruby,SOLECTRUS,solectrus,"Ruby,Slim,TypeScript,JavaScript,Shell,CSS,Dockerfile,Procfile",https://solectrus.de,"b'[![Build Status](https://github.com/solectrus/solectrus/workflows/Continuous%20integration/badge.svg)](https://github.com/solectrus/solectrus/actions)\n[![Maintainability](https://api.codeclimate.com/v1/badges/10d74fb7665c045afcf4/maintainability)](https://codeclimate.com/repos/5fe98897e985f4018b001e7d/maintainability)\n[![Test Coverage](https://api.codeclimate.com/v1/badges/10d74fb7665c045afcf4/test_coverage)](https://codeclimate.com/repos/5fe98897e985f4018b001e7d/test_coverage)\n[![wakatime](https://wakatime.com/badge/user/697af4f5-617a-446d-ba58-407e7f3e0243/project/ce8d6e54-7457-42e5-94a3-33a9d4021d45.svg)](https://wakatime.com/badge/user/697af4f5-617a-446d-ba58-407e7f3e0243/project/ce8d6e54-7457-42e5-94a3-33a9d4021d45)\n\n# SOLECTRUS\n\nPhotovoltaic dashboard; read about the motivation here (in German):\nhttps://ledermann.dev/blog/2021/02/03/photovoltaik-dashboard-als-web-applikation/\n\n![Screenshot](screenshot.webp)\n\n## Installation\n\nFor self-hosting SOLECTRUS, please look at https://github.com/solectrus/hosting\n\n## Development\n\n1. Clone the repo locally:\n\n```bash\ngit clone git@github.com:solectrus/solectrus.git\ncd solectrus\n```\n\n2. Install PostgreSQL, Redis, and puma-dev (if not already present). On a Mac with Homebrew, run this to install from the `Brewfile`:\n\n```bash\nbrew bundle\n```\n\n3. Install and set up [puma-dev](https://github.com/puma/puma-dev) to use HTTPS for development. Do this on macOS:\n\n```bash\nsudo puma-dev -setup\npuma-dev -install\npuma-dev link\n\n# Use Vite via puma-dev proxy\n# Adopted from https://github.com/puma/puma-dev#webpack-dev-server\necho 3036 > ~/.puma-dev/vite.solectrus\n```\n\n4. Set up the application to install gems and NPM packages and create the database:\n\n```bash\nbin/setup\n```\n\n5. 
Start the application locally:\n\n```bash\nbin/dev\n```\n\nThen open https://solectrus.test in your browser.\n\n## Test\n\nAfter preparing the development environment (see above):\n\n```bash\nbin/influxdb-restart.sh\nDISABLE_SPRING=1 bin/rspec\nDISABLE_SPRING=1 RAILS_ENV=test bin/rake cypress:run\nopen coverage/index.html\n```\n\nRuboCop:\n\n```\nbin/rubocop\n```\n\nESLint:\n\n```\nbin/yarn lint\n```\n\nTypeScript:\n\n```\nbin/yarn tsc\n```\n\nThere is a shortcut to run **all** test and linting tools:\n\n```bash\nbin/test\n```\n\n## License\n\nCopyright (c) 2020-2023 Georg Ledermann, released under the AGPL-3.0 License\n'",,"2020/12/19, 18:28:37",1040,AGPL-3.0,1606,3682,"2023/10/25, 17:36:44",41,2278,2380,979,0,0,0.0,0.5207756232686981,"2023/10/01, 13:36:19",v0.13.2,0,4,false,,false,false,,,https://github.com/solectrus,https://solectrus.de,"Jülich, Germany",,,https://avatars.githubusercontent.com/u/76243773?v=4,,, pvdeg,Set of tools to calculate degradation responses and degradation related parameters for PV.,NREL,https://github.com/NREL/PVDegradationTools.git,github,"degradation,photovoltaic-systems,python,reliability,duramat,pv-modules",Photovoltaics and Solar Energy,"2023/09/14, 18:01:59",14,0,10,true,Jupyter Notebook,National Renewable Energy Laboratory,NREL,"Jupyter Notebook,Python,Dockerfile,TeX",https://pvdegradationtools.readthedocs.io/,"b'
\n\n# PV Degradation Tools (pvdeg)\n\nThis repository contains functions for calculating degradation of photovoltaic modules. For example, functions to calculate front and rear relative humidity, as well as acceleration factors. A degradation calculation function is also being developed, considering humidity and spectral irradiance models.\n\n\nTutorials\n=========\n\n### Jupyter Book\n\nFor in-depth tutorials you can run online, see our [jupyter-book](https://nrel.github.io/PVDegradationTools/intro.html) [![Jupyter Book Badge](https://jupyterbook.org/badge.svg)](https://nrel.github.io/PVDegradationTools/intro.html)\n\nClicking on the rocket icon at the top allows you to launch the notebooks on [Google Colaboratory](https://colab.research.google.com/) for interactive mode.\nJust uncomment the first line `pip install ...` to install the environment on each notebook if you follow this mode.\n\n### Binder\n\nTo run these tutorials in Binder, you can click here:\n[![Binder](https://mybinder.org/badge_logo.svg)](https://mybinder.org/v2/gh/NREL/PVDegradationTools/main)\nIt takes a minute to load the environment.\n\n### Locally\n\nYou can also run the tutorial locally with\n[miniconda](https://docs.conda.io/en/latest/miniconda.html) by following these\nsteps:\n\n1. Install [miniconda](https://docs.conda.io/en/latest/miniconda.html).\n\n1. Clone the repository:\n\n   ```\n   git clone https://github.com/NREL/PVDegradationTools.git\n   ```\n\n1. Create the environment and install the requirements. The repository includes\n   a `requirements.txt` file that contains a list of the packages needed to run\n   this tutorial. To install them using conda run:\n\n   ```\n   conda create -n pvdeg jupyter -c pvlib --file requirements.txt\n   conda activate pvdeg\n   ```\n\n   or you can install pvdeg into the environment with `pip install pvdeg`, as explained in the installation instructions.\n\n1. Start a Jupyter session:\n\n   ```\n   jupyter notebook\n   ```\n\n1. Use the file explorer in Jupyter lab to browse to `tutorials`\n   and start the first Tutorial.\n\n\nDocumentation\n=============\n\nDocumentation is available at [ReadTheDocs](https://PVDegradationTools.readthedocs.io) where you can find more details on the API functions.\n\n\nInstallation\n============\n\npvdeg releases may be installed using the ``pip`` and ``conda`` tools. pvdeg is compatible with Python 3.5 and above.\n\nInstall with:\n\n    pip install pvdeg\n\nFor developer installation, download the repository, navigate to the folder location and install as:\n\n    pip install -e .\n\n\nLicense\n=======\n\n[BSD 3-clause](https://github.com/NREL/PVDegradationTools/blob/main/LICENSE.md)\n\n\nContributing\n=======\n\nWe welcome contributions to this software, but please read the copyright license agreement (cla-1.0.md), with instructions on signing it in sign-CLA.md. For questions, email us.\n\n\nGetting support\n===============\n\nIf you suspect that you may have discovered a bug or if you\'d like to\nchange something about pvdeg, then please make an issue on our\n[GitHub issues page](https://github.com/NREL/PVDegradationTools/issues).\n\n\nCiting\n======\n\nIf you use these functions in a published work, please cite:\n\n\tHolsapple, Derek, Ayala Pelaez, Silvana, Kempe, Michael. ""PV Degradation Tools"", NREL Github 2020, Software Record SWR-20-71.\n\nAnd/or the specific release from Zenodo:\n\n\tOvaitt, Silvana, Brown, Matt, Springer, Martin, Karas, Joe, Holsapple, Derek, Kempe, Michael. (2023). 
NREL/PVDegradationTools: v0.1.0 official release (0.1.0). Zenodo. https://doi.org/10.5281/zenodo.8088403\n'",",https://doi.org/10.5281/zenodo.8088578,https://doi.org/10.5281/zenodo.8088403\n","2020/06/03, 20:26:52",1238,CUSTOM,436,466,"2023/10/20, 21:34:18",2,25,25,24,4,0,0.1,0.650137741046832,"2023/09/13, 18:11:05",0.1.3,0,8,false,,false,false,,,https://github.com/NREL,http://www.nrel.gov,"Golden, CO",,,https://avatars.githubusercontent.com/u/1906800?v=4,,, solarthing,An application that can monitor data from a variety of solar charge controllers and inverters.,wildmountainfarms,https://github.com/wildmountainfarms/solarthing.git,github,"solar,outback-mate,renogy,renogy-rover,couchdb,raspberry-pi,solarthing,solar-energy,dashboard,modbus,pvoutput,packets,energy-monitor,crne-solar,slack,slack-bot,solcast",Photovoltaics and Solar Energy,"2023/10/23, 18:36:35",106,0,26,true,Java,Wild Mountain Farms,wildmountainfarms,"Java,TypeScript,Shell,Python,CSS,Perl,HTML,ANTLR,JavaScript,Dockerfile,C++",https://solarthing.readthedocs.io,"b'![SolarThing](other/docs/solarthing_logo.png ""SolarThing"")\n\n[![](https://img.shields.io/github/last-commit/wildmountainfarms/solarthing.svg)](https://github.com/wildmountainfarms/solarthing/commits/master)\n[![](https://img.shields.io/github/stars/wildmountainfarms/solarthing.svg?style=social)](https://github.com/wildmountainfarms/solarthing/stargazers)\n[![](https://img.shields.io/github/v/release/wildmountainfarms/solarthing.svg)](https://github.com/wildmountainfarms/solarthing/releases)\n[![](https://img.shields.io/github/release-date/wildmountainfarms/solarthing.svg)](https://github.com/wildmountainfarms/solarthing/releases)\n[![](https://img.shields.io/github/downloads/wildmountainfarms/solarthing/total.svg)](https://solarthing.readthedocs.io/en/latest/installation.html)\n\nStores solar data in a database to view on Android, Grafana, or PVOutput\n\n

\n Supported Products •\n Documentation •\n Features •\n Supported Databases •\n Examples\n

\n\n## Supported Products\n* **Outback MATEs** (FX Inverter, MX/FM Charge Controller)\n* **Renogy Rover** (And other Renogy products) over modbus serial.\n * Includes Rover, Rover Elite, Wanderer, Adventurer, Dual Input DCDC Charger, Rover Boost and possibly others\n * Compatible with all [SRNE Solar](https://www.srnesolar.com) Charge Controllers (And rebranded products)\n * Compatible with **Zenith Grape** Solar Charge Controller, **PowMr** MPPT Charge Controller, **RICH** SOLAR MPPT, **WindyNations TrakMax** MPPT\n* **EPEver Tracer**\n * Includes the AN series and the TRIRON N series\n * Possibly includes the BN series (untested)\n* DS18B20 Temperature Sensors and PZEM-003 and PZEM-017 Shunts\n\n# Quickstart\nReady to install? Use the [Quickstart](https://solarthing.readthedocs.io/en/latest/installation.html)!\n\n\n# Features\n* Supports **multiple types of solar products**.\n* Runs reliably **24-7**. Recovers from connection errors and has verbose logging features.\n* Fully customizable through JSON (**No programming experience required**).\n* Supports CouchDB, InfluxDB, local JSON file, and PVOutput exporting.\n * Multiple databases can even be used at the same time!\n * Packets are uploaded in parallel to multiple databases at the same time\n* Can [report Raspberry Pi CPU temperature](https://solarthing.readthedocs.io/en/latest/config/rpi-cpu-temp.html).\n* Easy setup on Linux. Runs *without* root.\n\n## Supported Databases\n* CouchDB\n * Allows for [SolarThing Android](https://github.com/wildmountainfarms/solarthing-android) and [SolarThing Web](https://github.com/wildmountainfarms/solarthing-web) to function\n * Used for PVOutput data collection\n* GraphQL\n * Allows use of CouchDB SolarThing data with Grafana\n * Supplements the CouchDB database\n* InfluxDB\n * Simplest to set up with Grafana\n* [PVOutput.org](https://pvoutput.org)\n * Allows for viewing of data on [pvoutput.org](https://pvoutput.org)\n * Requires CouchDB to be set up\n * Enables usage of the [PVOutput Mycroft skill](https://github.com/wildmountainfarms/pvoutput-mycroft)\n* REST API\n * With the ""post"" database, all packets can be posted to a URL endpoint, useful for REST APIs\n\n\n### Examples\nPVOutput Wild Mountain Farms: [PVOutput System](https://pvoutput.org/intraday.jsp?sid=72206) and \n[PVOutput SolarThing Teams](https://pvoutput.org/listteam.jsp?tid=1528)\n\n---\n\nSolarThing Android: [Github](https://github.com/wildmountainfarms/solarthing-android)\n|\n[Google Play](https://play.google.com/store/apps/details?id=me.retrodaredevil.solarthing.android)\n\nSolarThing Android displays data in a persistent notification that updates at a configurable rate\n![alt text](other/docs/solarthing-android-notification-screenshot-1.jpg ""SolarThing Android Notification"")\n
\n\nYou can get data in [Grafana](https://github.com/grafana/grafana) via InfluxDB or via CouchDB+SolarThing GraphQL.\n\n[Snapshot of Wild Mountain Farms Dashboard](https://snapshot.raintank.io/dashboard/snapshot/iPsTvb6a0eOxEtvvu58dvRuJsJ38Onnp?orgId=2)\n\nGrafana is very customizable. Rearrange graphs and make it how you want!\n![alt text](other/docs/grafana-screenshot-1.png ""SolarThing with Grafana"")\n\n---\n\n## Usage at Wild Mountain Farms\nWe monitor an Outback MATE2, Renogy Rover PG 40A, EPEver Tracer2210AN (20A) using a Raspberry Pi 3.\nEach device has its own instance of SolarThing running. Each instance uploads data to CouchDB. CouchDB, Grafana,\nand SolarThing GraphQL run on a separate ""NAS"" computer. This NAS runs the automation and pvoutput programs.\nThe automation program handles the sending of Slack messages for low battery notifications.\n\n### Database Setup\n* [CouchDB setup](https://solarthing.readthedocs.io/en/latest/config/couchdb.html)
\n * Used for SolarThing Android, SolarThing Web, and SolarThing GraphQL (which gets data to Grafana)\n* [InfluxDB 2.0 setup](https://solarthing.readthedocs.io/en/latest/config/influxdb2.html)
\n  * Used for direct Grafana queries\n\n#### [Developer Use](other/docs/developer_use.md)\n#### [Contributing](CONTRIBUTING.md)\n#### [Technical](other/docs/technical/technical.md)\n#### [Project Structure](other/docs/technical/project_structure.md)\n#### [History](other/docs/history.md)\n#### [Google Analytics](https://solarthing.readthedocs.io/en/latest/config/analytics.html)\n#### [Updating](https://solarthing.readthedocs.io/en/latest/updating.html)\n\n#### Configuration\nSolarThing uses JSON for configuring everything. The files you edit are all in one place unless you decide to move them.\n\nSee [configuration](https://solarthing.readthedocs.io/en/latest/configuration.html) for how to set them up.\n\n### Renogy Rover Monitoring Alternatives\nDon\'t like something about SolarThing? Here are some alternatives to monitor your Renogy Rover.\n* https://github.com/corbinbs/solarshed\n* https://github.com/logreposit/renogy-rover-reader-service\n* https://github.com/menloparkinnovation/renogy-rover\n* https://github.com/floreno/renogy-rover-modbus\n* https://github.com/CyberRad/CoopSolar\n* https://github.com/amigadad/SolarDataCollection\n\n### Suggestions?\nIf you have suggestions on how to improve the documentation or have a feature request, I\'d love to\nhear from you! [SolarThing Issues](https://github.com/wildmountainfarms/solarthing/issues)\n\nIf you get confused while trying to configure solarthing, that\'s probably because the documentation is\nalways a work in progress. If you find something confusing, please report it, so I can make it clearer.\n\n---\n\n[![](https://img.shields.io/badge/author-Lavender%20Shannon-brightgreen.svg)](https://github.com/retrodaredevil)\n[![](https://img.shields.io/github/repo-size/wildmountainfarms/solarthing.svg)](#)\n[![](https://img.shields.io/github/languages/code-size/wildmountainfarms/solarthing.svg)](#)\n[![](https://img.shields.io/librariesio/github/wildmountainfarms/solarthing.svg)](https://libraries.io/github/wildmountainfarms/solarthing)\n[![](https://img.shields.io/github/commit-activity/m/wildmountainfarms/solarthing.svg)](#)\n'",,"2019/07/03, 09:49:26",1575,MIT,227,1145,"2023/10/23, 18:36:44",7,99,168,98,2,0,0.0,0.052631578947368474,"2023/07/02, 18:19:55",v2023.4.0,0,3,true,github,false,true,,,https://github.com/wildmountainfarms,,,,,https://avatars.githubusercontent.com/u/52333871?v=4,,, solXpect,Android app to forecast the output of your photovoltaic system (PV) or balcony pv using data from Open-Meteo.com.,woheller69,https://github.com/woheller69/solxpect.git,github,"android,balkonpv,forecasting,powerplant,solar-energy,photovoltaic,photovoltaics,renewable-energy,sustainability",Photovoltaics and Solar Energy,"2023/10/12, 08:39:32",47,0,47,true,Java,,,"Java,HTML,CSS",,"b'
Send a coffee to woheller69@t-online.de \n
\n\n\n| **RadarWeather** | **Gas Prices** | **Smart Eggtimer** |\n|:---:|:---:|:---:|\n| [](https://f-droid.org/packages/org.woheller69.weather/)| [](https://f-droid.org/packages/org.woheller69.spritpreise/) | [](https://f-droid.org/packages/org.woheller69.eggtimer/) |\n| **Bubble** | **hEARtest** | **GPS Cockpit** |\n| [](https://f-droid.org/packages/org.woheller69.level/) | [](https://f-droid.org/packages/org.woheller69.audiometry/) | [](https://f-droid.org/packages/org.woheller69.gpscockpit/) |\n| **Audio Analyzer** | **LavSeeker** | **TimeLapseCam** |\n| [](https://f-droid.org/packages/org.woheller69.audio_analyzer_for_android/) | [](https://f-droid.org/packages/org.woheller69.lavatories/) | [](https://f-droid.org/packages/org.woheller69.TimeLapseCam/) |\n| **Arity** | **omWeather** | **solXpect** |\n| [](https://f-droid.org/packages/org.woheller69.arity/) | [](https://f-droid.org/packages/org.woheller69.omweather/) | [](https://f-droid.org/packages/org.woheller69.solxpect/) |\n| **gptAssist** | | |\n| [](https://f-droid.org/packages/org.woheller69.gptassist/) | | |\n\n# solXpect\n\nsolXpect is an app that forecasts the output of your solar power plant by using direct and diffuse radiation data from Open-Meteo.com, calculating the position of the sun, and projecting the radiation on your solar panel. \nIt shows the estimated energy production for the next 16 days, with hourly values calculated for the preceding hour. As an example, if there are 150 Wh shown at 11:00 this means you can expect 150 Wh between 10:00 and 11:00 from your photovoltaic system.\nThe values starting with \xce\xa3 show the cumulated energy since midnight of the first day.\nTo use solXpect, you simply enter your latitude and longitude coordinates, as well as the azimuth and tilt of your solar panel. \nYou also enter information about the peak power, efficiency, temperature coefficient, and area of your solar panel, as well as the maximum power and efficiency of your inverter.\nAdditionally, solXpect allows you to define shading on your solar panels by specifying the minimum elevation of the sun necessary for the sun to hit the solar panels, as well as the percentage of shading for elevations below this value.\nIf you have multiple solar panels with the same latitude and longitude but pointing in different directions, you can define them as separate locations and use the \'show sum\' feature to summarize their output.\nOverall, solXpect is a powerful tool for optimizing the use of your own energy and reducing your energy costs. \n\n \n\n[](https://f-droid.org/de/packages/org.woheller69.solxpect/)\n\n## Parameters\n\n#### Name\nDefine a name for the location.\nIf you have several modules pointing in different directions at the same location you can activate ""showSum"" mode in settings.\nIn this case you should define your location names as \'myPV | part1\', \'myPV | part2\', etc. In \'show sum\' mode the location is then shown as \'myPV\' and \'|\' is taken as delimiter.\n\n#### Latitude [\xc2\xb0] \nEnter the north-south position of your solar power plant, ranging from -90\xc2\xb0 at the south pole to 90\xc2\xb0 at the north pole.\n\n#### Longitude [\xc2\xb0]\nEnter the east-west position of your solar power plant, with 0\xc2\xb0 defined as the prime meridian. 
Positive longitudes are east of the prime meridian, negative ones are west.\n\n#### Azimuth [\xc2\xb0]\nSpecify the horizontal direction of your solar power plant, with 0\xc2\xb0 corresponding to North, 90\xc2\xb0 to East, 180\xc2\xb0 to South, and 270\xc2\xb0 to West.\n\n#### Tilt [\xc2\xb0]\nSpecify the vertical direction of your solar power plant, with 0\xc2\xb0 pointing upwards towards the sky and 90\xc2\xb0 being a vertical orientation pointing towards the horizon.\n\n#### Cells peak power [W]\nEnter the maximum power your solar cells (total of all cells) can deliver. At the moment this value is only used if a value of 0 is specified for cells efficiency or cell area.\nIn this case it is assumed that the cells peak power is given at an irradiance of 1000W/sqm.\n\n#### Cells efficiency [%]\nSpecify the portion of energy in the form of sunlight that can be converted into electricity by the solar cell.\n\n#### Temperature coefficient [%/K]\nEnter the dependence of the cell power on temperature, usually in the range of -0.4%/K. Cell temperature is estimated from ambient temperature and total irradiance.\n\n#### Cell area [m2]\nEnter the size of your solar panels (total of all cells).\n\n#### Diffuse radiation efficiency [%]\nSpecify the efficiency of your solar power plant for diffuse radiation. When pointing up, it should be around 100%, but when pointing towards the horizon, it may be 50% or less, depending on the environment.\nYou probably need to optimize this parameter.\n\n#### Albedo [0..1]\nSpecify the average albedo for your environment to take reflections into account. The value ranges from 0 (all radiation is absorbed) to 1 (all radiation is reflected).\nExamples: fresh snow: 0.8, green grass: 0.25, asphalt: 0.1\nYou probably need to optimize this parameter.\n\n#### Inverter power [W]\nSpecify the maximum power of your inverter. If it is lower than the maximum power of your panels, the output power of your system will be limited by this parameter.\n\n#### Inverter efficiency [%] \nEnter the efficiency of your inverter.\n
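\nAs a rough sketch of how the parameters above interact (hypothetical values; this is not solXpect\'s actual algorithm, which additionally models sun position, diffuse radiation, albedo, and shading):\n```python\ndef ac_power(irradiance_w_m2, cell_area_m2, cell_eff, temp_coeff_per_k,\n             cell_temp_c, inverter_power_w, inverter_eff):\n    # DC power from the cells, de-rated for cell temperature above 25 degC.\n    dc = irradiance_w_m2 * cell_area_m2 * cell_eff\n    dc *= 1.0 + temp_coeff_per_k * (cell_temp_c - 25.0)\n    # The inverter converts DC to AC and clips at its maximum power.\n    return min(dc * inverter_eff, inverter_power_w)\n\n# 800 W/m2 on a 2 m2 panel at 21% efficiency, -0.4%/K, 45 degC cell\n# temperature, 300 W inverter at 95% efficiency:\nprint(ac_power(800, 2.0, 0.21, -0.004, 45, 300, 0.95))  # ~294 W\n```\n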
\n#### Shading\nIn this section you can define the shading on your solar panels.\nFor each azimuth angle range, you can specify the minimum elevation of the sun that is necessary for the sun to hit the solar panels.\nFor elevations below this value you can set the percentage of shading. For example, a building will reduce radiation by 100%, a tree maybe only by 60%.\nYou can use the sun icon button in the main window to get information about the current azimuth and elevation of the sun to determine at what elevation the sun gets above buildings or trees.\n\n## License\n\nThis app is licensed under the GPLv3.\n\nThe app uses:\n- Parts from Privacy Friendly Weather (https://github.com/SecUSo/privacy-friendly-weather) which is licensed under the GPLv3\n- The weather data service is provided by [Open-Meteo](https://open-meteo.com/), under Attribution 4.0 International (CC BY 4.0)\n- Icons from [Google Material Design Icons](https://material.io/resources/icons/) licensed under Apache License Version 2.0\n- Material Components for Android (https://github.com/material-components/material-components-android) which is licensed under Apache License Version 2.0\n- Leaflet which is licensed under the very permissive 2-clause BSD License\n- WilliamChart (com.db.chart) (https://github.com/diogobernardino/williamchart) which is licensed under Apache License Version 2.0\n- Android Volley (com.android.volley) (https://github.com/google/volley) which is licensed under Apache License Version 2.0\n- AndroidX libraries (https://github.com/androidx/androidx) which is licensed under Apache License Version 2.0\n- AutoSuggestTextViewAPICall (https://github.com/Truiton/AutoSuggestTextViewAPICall) which is licensed under Apache License Version 2.0\n- Map data from OpenStreetMap, licensed under the Open Data Commons Open Database License (ODbL) by the OpenStreetMap Foundation (OSMF) (https://www.openstreetmap.org/copyright)\n- Solar positioning library (https://github.com/klausbrunner/solarpositioning) which is licensed under MIT License\n- Zip4j (https://github.com/srikanth-lingala/zip4j) which is licensed under Apache License Version 2.0\n- CompassView (https://github.com/kix2902/CompassView) which is published under Apache License 2.0\n\n## Contributing\n\nIf you find a bug, please open an issue in the GitHub repository, assuming one does not already exist.\n - Clearly describe the issue including steps to reproduce when it is a bug. 
In some cases screenshots can be supportive.\n - Make sure you mention the Android version and the device you used when you encountered the issue.\n - Make your description as precise as possible.\n\nIf you know the solution to a bug, please report it in the corresponding issue and, if possible, modify the code and create a pull request.\n'",,"2023/04/01, 13:28:22",207,GPL-3.0,70,70,"2023/09/15, 10:47:43",0,0,13,13,40,0,0,0.0,"2023/09/15, 05:41:34",V2.1,0,1,false,,false,false,,,,,,,,,,, Solar Stations,A catalog of high-quality solar radiation monitoring stations.,AssessingSolar,https://github.com/AssessingSolar/solarstations.git,github,"solar,photovoltaics,measurement-data,open-source,solar-energy",Photovoltaics and Solar Energy,"2023/01/06, 12:03:43",9,0,3,true,,Assessing Solar,AssessingSolar,,https://SolarStations.org,b'# Solar Stations\n[![Jupyter Book Badge](https://jupyterbook.org/badge.svg)](https://solarstations.org)\n\nA catalog of high-quality solar radiation monitoring stations.\n\nAn interactive map and listing of the stations can be found at [SolarStations.org](https://SolarStations.org).\n\nThe file [solarstations.csv](solarstations.csv) contains the list of stations and their metadata.\n\nPull requests with new stations or updates are highly welcome!\n',,"2021/09/23, 12:55:08",762,BSD-3-Clause,10,132,"2023/01/06, 10:32:31",34,26,41,9,292,4,0.0,0.007692307692307665,,,0,2,false,,false,false,,,https://github.com/AssessingSolar,assessingsolar.org,,,,https://avatars.githubusercontent.com/u/65510739?v=4,,, OTSun,A python package that uses Monte Carlo Forward Ray Tracing for the optical analysis of Solar Thermal Collectors and Solar Cells.,bielcardona,https://github.com/bielcardona/OTSun.git,github,,Photovoltaics and Solar Energy,"2023/10/08, 06:51:14",7,1,1,true,Python,,,Python,,"b'[![PyPI version](https://badge.fury.io/py/OTSun.svg)](https://badge.fury.io/py/OTSun)\n\n![OTSun logo](https://github.com/bielcardona/OTSun/raw/master/logo_OTSun.png)\n\n# OTSun\n\nOTSun is a python package that uses Monte Carlo Forward Ray Tracing for the optical analysis of Solar Thermal Collectors and Solar Cells. \n\n## Installation\n\nThe package can be installed either from PyPI with\n`pip install otsun`, or downloaded and installed with `python setup.py install`.\n\nIn order to use the package, the libraries of FreeCAD (https://www.freecadweb.org/) must be available and included in your python path.\n\n## Documentation\n\nThe documentation of the module is available at http://otsun.readthedocs.io/\n\n## How to cite OTSun?\n\nIf you need to cite OTSun, please use the following reference:\n\n* Cardona G, Pujol-Nadal R (2020) OTSun, a python package for the optical analysis of solar-thermal collectors and photovoltaic cells with arbitrary geometry. PLoS ONE 15(10): e0240735. 
https://doi.org/10.1371/journal.pone.0240735\n'",",https://doi.org/10.1371/journal.pone.0240735\n","2017/01/13, 08:04:48",2476,MIT,23,467,"2023/09/06, 17:43:53",0,51,54,4,49,0,0.0,0.2817955112219451,,,0,3,false,,false,false,otsun-uib/OTSunWebApp,,,,,,,,,, pvOps,Contains a series of functions to facilitate fusion of text-based data with time series production data collected at photovoltaic sites.,sandialabs,https://github.com/sandialabs/pvOps.git,github,,Photovoltaics and Solar Energy,"2023/10/24, 22:07:04",11,0,5,true,Jupyter Notebook,Sandia National Laboratories,sandialabs,"Jupyter Notebook,Python",https://pvops.readthedocs.io/en/latest/,"b'\n\n[![GitHub version](https://badge.fury.io/gh/sandialabs%2FpvOps.svg)](https://badge.fury.io/gh/sandialabs%2FpvOps)\n[![License](https://img.shields.io/pypi/l/pvOps?color=green)](https://github.com/sandialabs/pvOps/blob/master/LICENSE)\n[![ActionStatus](https://github.com/sandialabs/pvOps/workflows/lint%20and%20test/badge.svg)](https://github.com/sandialabs/pvOps/actions)\n[![DOI](https://zenodo.org/badge/289032705.svg)](https://zenodo.org/badge/latestdoi/289032705)\n[![status](https://joss.theoj.org/papers/6c3554c98b1771125613cff94241847c/status.svg)](https://joss.theoj.org/papers/6c3554c98b1771125613cff94241847c)\n\npvops contains a series of functions to facilitate fusion of text-based data with time series production data collected at photovoltaic sites. The package also contains example datasets and tutorials to help demonstrate how the functions can be used.\n\nInstallation\n=============\npvops can be installed using `pip`. See more information at [readthedocs](https://pvops.readthedocs.io/en/latest/).\n\nTutorials\n=========\nTo get started with pvops, we recommend working with the [tutorials](https://pvops.readthedocs.io/en/latest/pages/tutorials.html).\n\n\nPackage Layout and Documentation\n==============\n\nThe package is organized into the following directories.\n```\n├───docs : Documentation directory\n|\n├───tutorials : Contains tutorials of functionality\n│ └───example_data : Example data\n|\n└───pvops : Source function library\n ├───tests : Library stability tests\n ├───text : Text processing functions\n ├───text2time : Text2Timeseries functions\n ├───timeseries : Timeseries functions\n └───iv : Current-voltage functions\n```\n\nMore information about these modules is available at [readthedocs](https://pvops.readthedocs.io/en/latest/).\n\nCiting\n======\n\nIf using this package, please cite [our paper](https://ieeexplore.ieee.org/document/9518439) using the following:\n\n**Citation:** \n\n```\nH. Mendoza, M. Hopwood and T. Gunda, ""pvOps: Improving Operational Assessments through Data Fusion,"" 2021 IEEE 48th Photovoltaic Specialists Conference (PVSC), 2021, pp. 
0112-0119, doi: 10.1109/PVSC43889.2021.9518439.\n```\n\n**BibTeX:**\n\n```\n@INPROCEEDINGS{9518439,\n author={Mendoza, Hector and Hopwood, Michael and Gunda, Thushara},\n booktitle={2021 IEEE 48th Photovoltaic Specialists Conference (PVSC)}, \n title={pvOps: Improving Operational Assessments through Data Fusion}, \n year={2021},\n volume={},\n number={},\n pages={0112-0119},\n doi={10.1109/PVSC43889.2021.9518439}}\n```\n\nContributing\n============\n\nThe long-term success of pvops requires community support. Please see the [Contributing page](https://pvops.readthedocs.io/en/latest/) for more on how you can contribute.\n\n[![Contributors Display](https://badges.pufler.dev/contributors/sandialabs/pvOps?size=50&padding=5&bots=true)](https://badges.pufler.dev)\n\nLogo Credit: [Daniel Rubinstein](http://www.danielrubinstein.com/)\n\nCopyright and License\n=======\n\npvops is copyright through Sandia National Laboratories. The software is distributed under the Revised BSD License. See [copyright and license](https://github.com/sandialabs/pvops/blob/master/LICENSE) for more information.\n'",",https://zenodo.org/badge/latestdoi/289032705","2020/08/20, 14:48:48",1161,CUSTOM,174,453,"2023/10/24, 22:07:05",6,42,79,49,0,1,0.0,0.5308219178082192,"2023/08/09, 14:09:35",0.2.0,0,8,false,,true,false,,,https://github.com/sandialabs,https://software.sandia.gov,United States,,,https://avatars.githubusercontent.com/u/4993680?v=4,,, IEA-15-240-RWT,A 15 MW reference wind turbine repository developed in conjunction with IEA Wind.,IEAWindTask37,https://github.com/IEAWindTask37/IEA-15-240-RWT.git,github,,Wind Energy,"2023/10/22, 02:50:30",178,0,40,true,Roff,,IEAWindTask37,"Roff,Python,Scheme,Smalltalk,F*,Shell",,"b""[![DOI](https://zenodo.org/badge/213679527.svg)](https://zenodo.org/badge/latestdoi/213679527)\n\n# IEA-15-240-RWT v1.1\nThis repository contains the model data for the 15-MW offshore reference turbine developed within IEA Wind Task 37.\n\nThe documentation for the turbine is accessible here: https://www.nrel.gov/docs/fy20osti/75698.pdf\nand the semisubmersible floating support structure is documented here: https://www.nrel.gov/docs/fy20osti/76773.pdf\n\nData in this repository includes:\n* Documentation, including tabular data used in the figures from the technical report\n* OpenFAST aeroelastic model inputs\n* HAWC2 aeroelastic model inputs\n* WISDEM optimization files\n* Wind turbine ontology .yaml files\n* CAD modeling of the turbine where available\n\n## Requirements\n\n*OpenFAST*:\n* Please check the release notes for OpenFAST version compatibility. OpenFAST can be compiled [from source here](https://github.com/OpenFAST/openfast.git) or precompiled Windows binaries are [available for download](https://github.com/OpenFAST/openfast/releases/latest/download/windows_openfast_binaries.zip). More information on installing and running OpenFAST is available in the [OpenFAST documentation](https://openfast.readthedocs.io/en/master/).\n* NREL's Reference OpenSource Controller (ROSCO) is required. This can be compiled [from source here](https://github.com/nrel/rosco) or precompiled binaries for all platforms are [available for download](https://github.com/NREL/ROSCO/releases/). The version of the ROSCO controller can be found in the header of the [DISCON.IN](https://github.com/IEAWindTask37/IEA-15-240-RWT/blob/3a00f7f38a6373f6b026aef5878b671ca7af3605/OpenFAST/IEA-15-240-RWT-UMaineSemi/IEA-15-240-RWT-UMaineSemi_DISCON.IN#L2).\n\n*HAWC2*:\n* HAWC2 can be acquired from its [homepage](https://www.hawc2.dk/). 
The DTU Basic Controller can be obtained from its [repository](https://gitlab.windenergy.dtu.dk/OpenLAC/BasicDTUController).\n\n*WISDEM*:\n * WISDEM can be installed from its GitHub [repository](https://github.com/WISDEM/WISDEM).\n * See the [documentation](https://wisdem.readthedocs.io) for installation and usage guides.\n\n\n## Design Updates\n\nThe IEA Wind Task 37 authors endeavor to keep the model input decks current with the latest releases and API changes. Errors and other issues pointed out by the community are also addressed to the extent that available resources make that possible. See the [Release Notes](blob/master/ReleaseNotes.md) for a detailed description of changes.\n\nWe also encourage the broader wind community to submit design updates by forking the repository and letting us know of your design customization. Community contributions that we are aware of include:\n* [Bladed model](https://github.com/IEAWindTask37/IEA-15-240-RWT/wiki/Frequently-Asked-Questions-(FAQ)#is-bladed-supported) implemented by [DNV](mailto:renewables.support@dnv.com)\n* [OrcaFlex model](https://github.com/IEAWindTask37/IEA-15-240-RWT/wiki/Frequently-Asked-Questions-(FAQ)#is-orcaflex-supported) implemented by Orcina, contact [Orcina](mailto:orcina@orcina.com)\n* [Detailed rotor redesign](https://data.bris.ac.uk/data/dataset/3jrb4mejp9vfd2qb3s7dreymr1) from University of Bristol described in a TORQUE 2022 [paper](https://iopscience.iop.org/article/10.1088/1742-6596/2265/3/032029/pdf), contact [Peter Greaves](mailto:peter.greaves@ore.catapult.org.uk)\n* [NuMAD model](https://github.com/UTDGriffithLab/UTD-IEA15MWBlade) developed at The University of Texas at Dallas (UTD) by [Alejandra S. Escalera Mendoza](mailto:ase180001@utdallas.edu) and [Prof. D. Todd Griffith](mailto:tgriffith@utdallas.edu)\n* [Jacket support structure](https://github.com/mmrocze2/IEA-15-240-RWT): The DEME Group created a 3-legged jacket for a 50m water depth, contact [Maciej Mroczek](mailto:Mroczek.Maciej@deme-group.com)\n\n## Citations\n\nFor a list of academic papers that use or cite this turbine, please see [here (fixed-bottom)](https://scholar.google.com/scholar?cites=11739673662820715884&as_sdt=4005&sciodt=0,6&hl=en) and [here (floating)](https://scholar.google.com/scholar?cites=17665986740213390479&as_sdt=4005&sciodt=0,6&hl=en).\n\nIf you use this model in your research or publications, please cite the appropriate report as:\n\n @techreport{IEA15MW_ORWT,\n author = {Evan Gaertner and Jennifer Rinker and Latha Sethuraman and Frederik Zahle and Benjamin Anderson and Garrett Barter and Nikhar Abbas and Fanzhong Meng and Pietro Bortolotti and Witold Skrzypinski and George Scott and Roland Feil and Henrik Bredmose and Katherine Dykes and Matt Shields and Christopher Allen and Anthony Viselli},\n Howpublished = {NREL/TP-75698},\n institution = {International Energy Agency},\n title = {Definition of the {IEA} 15-Megawatt Offshore Reference Wind Turbine},\n URL = {https://www.nrel.gov/docs/fy20osti/75698.pdf},\n Year = {2020}\n }\n\n @techreport{IEA15MW_ORWT_Floating,\n author = {Christopher Allen and Anthony Viselli and Habib Dagher and Andrew Goupee and Evan Gaertner and Nikhar Abbas and Matthew Hall and Garrett Barter},\n Howpublished = {NREL/TP-76773},\n institution = {International Energy Agency},\n title = {Definition of the {UMaine} {VolturnUS-S} Reference Platform Developed for the {IEA Wind} 15-Megawatt Offshore Reference Wind Turbine},\n URL = {https://www.nrel.gov/docs/fy20osti/76773.pdf},\n Year = {2020}\n }\n\n## 
Questions\n\nBefore reaching out to NREL or DTU authors with questions on the model or reports, please see our frequently asked questions (FAQ) on our [GitHub Wiki](https://github.com/IEAWindTask37/IEA-15-240-RWT/wiki/Frequently-Asked-Questions-(FAQ)) and current or prior [Issues](https://github.com/IEAWindTask37/IEA-15-240-RWT/issues).\n\nIf neither the FAQ nor the Issues address your need, please create a new Issue on this repository so that the dialogue is archived for others who might have similar questions. You can also reach out to the authors directly if that is your preference. The technical report lists the contributions of individual authors if you have a specific question. Otherwise, you can contact Garrett Barter (garrett.barter@nrel.gov).\n""",",https://zenodo.org/badge/latestdoi/213679527","2019/10/08, 15:18:15",1478,Apache-2.0,129,762,"2023/10/22, 02:50:31",24,73,145,55,3,0,0.6,0.5467479674796748,"2023/10/22, 16:32:03",v1.1.8,0,13,false,,false,false,,,https://github.com/IEAWindTask37,,,,,https://avatars.githubusercontent.com/u/36546446?v=4,,, windpowerlib,A library to model the output of wind turbines and farms.,wind-python,https://github.com/wind-python/windpowerlib.git,github,"wind,energy,power,model,modelling",Wind Energy,"2023/04/12, 19:01:00",271,25,44,true,Python,,wind-python,"Python,Jupyter Notebook",https://oemof.org/,"b'.. image:: https://travis-ci.org/wind-python/windpowerlib.svg?branch=dev\n :target: https://travis-ci.org/wind-python/windpowerlib\n.. image:: https://coveralls.io/repos/github/wind-python/windpowerlib/badge.svg?branch=dev\n :target: https://coveralls.io/github/wind-python/windpowerlib?branch=dev\n.. image:: https://zenodo.org/badge/DOI/10.5281/zenodo.824267.svg\n :target: https://doi.org/10.5281/zenodo.824267\n.. image:: https://mybinder.org/badge_logo.svg\n :target: https://mybinder.org/v2/gh/wind-python/windpowerlib/dev?filepath=example\n.. image:: https://img.shields.io/badge/code%20style-black-000000.svg\n :target: https://github.com/psf/black\n\n.. image:: https://img.shields.io/lgtm/grade/python/g/wind-python/windpowerlib.svg?logo=lgtm&logoWidth=18\n :target: https://lgtm.com/projects/g/wind-python/windpowerlib/context:python\n \nIntroduction\n=============\n\nThe windpowerlib is a library that provides a set of functions and classes to calculate the power output of wind turbines. It was originally part of the \n`feedinlib `_ (windpower and photovoltaic) but was taken out to build up a community concentrating on wind power models.\n\nFor a quick start see the `Examples and basic usage `_ section.\n\n\nDocumentation\n==============\n\nFull documentation can be found at `readthedocs `_.\n\nUse the `project site `_ of readthedocs to choose the version of the documentation. \nGo to the `download page `_ to download different versions and formats (pdf, html, epub) of the documentation.\n\n\nInstallation\n============\n\nIf you have a working Python 3 (>= 3.6) environment, use pypi to install the latest windpowerlib version:\n\n::\n\n pip install windpowerlib\n\nThe windpowerlib is designed for Python 3 and tested on Python >= 3.5. 
We highly recommend using virtual environments.\nPlease see the `installation page `_ of the oemof documentation for complete instructions on how to install python and a virtual environment on your operating system.\n\nOptional Packages\n~~~~~~~~~~~~~~~~~\n\nTo see the plots of the windpowerlib example in the `Examples and basic usage `_ section, you should `install the matplotlib package `_.\nMatplotlib can be installed using pip:\n\n::\n\n pip install matplotlib\n\n.. _examplereference-label:\n\nExamples and basic usage\n=========================\n\nThe simplest way to run the example notebooks without installing windpowerlib is to click `here `_ and open them with Binder.\n\nThe basic usage of the windpowerlib is shown in the `ModelChain example `_ that is available as a jupyter notebook and a python script:\n\n* `ModelChain example (Python script) `_\n* `ModelChain example (Jupyter notebook) `_\n\nTo run the example you need example weather data, which is downloaded automatically and can also be downloaded here:\n\n* `Example weather data file `_\n\nTo run the examples locally you have to install the windpowerlib. To run the notebook you also need to install `notebook` using pip3. To launch jupyter notebook type ``jupyter notebook`` in the terminal.\nThis will open a browser window. Navigate to the directory containing the notebook to open it. See the jupyter notebook quick start guide for more information on `how to install `_ and\n`how to run `_ jupyter notebooks. In order to reproduce the figures in a notebook you need to install `matplotlib`.\n\nFurther functionalities, like the modelling of wind farms and wind turbine clusters, are shown in the `TurbineClusterModelChain example `_. Like the ModelChain example, it is available as a jupyter notebook and as a python script. The weather used in this example is the same as in the ModelChain example.\n\n* `TurbineClusterModelChain example (Python script) `_\n* `TurbineClusterModelChain example (Jupyter notebook) `_\n\nYou can also look at the examples in the `Examples section `_.\n\nWind turbine data\n==================\n\nThe windpowerlib provides data for many wind turbines, but it is also possible to\nuse your own turbine data.\n\nUse internal data\n~~~~~~~~~~~~~~~~~\n\nThe windpowerlib provides `wind turbine data `_\n(power curves, hub heights, etc.) for a large set of wind turbines. See `Initialize wind turbine` in `Examples section `_ on how\nto use this data in your simulations.\n\nThe dataset is hosted and maintained on the `OpenEnergy database `_ (oedb).\nTo update your local files with the latest version of the `oedb turbine library `_ you can execute the following in your python console:\n\n.. code:: python\n\n from windpowerlib.data import store_turbine_data_from_oedb\n store_turbine_data_from_oedb()\n\nIf you find your turbine in the database, it is very easy to use it in the\nwindpowerlib:\n\n.. code:: python\n\n from windpowerlib import WindTurbine\n enercon_e126 = {\n ""turbine_type"": ""E-126/4200"", # turbine type as in register\n ""hub_height"": 135, # in m\n }\n e126 = WindTurbine(**enercon_e126)\n\nWe would like to encourage anyone to contribute to the turbine library by adding turbine data or reporting errors in the data.\nSee `the OEP `_ for more information on how to contribute.\n\nUse your own turbine data\n~~~~~~~~~~~~~~~~~~~~~~~~~\n\nIt is possible to use your own power curve. However, the most sustainable way\nis to send us the data to be included in the windpowerlib and to be available\nfor all users. 
This may not be possible in all cases.\n\nAssuming the data file looks like this:\n\n.. code::\n\n wind,power\n 0.0,0.0\n 3.0,39000.0\n 5.0,270000.0\n 10.0,2250000.0\n 15.0,4500000.0\n 25.0,4500000.0\n\nYou can use pandas to read the file and pass it to the turbine dictionary. If\nyou have basic knowledge of pandas, it is easy to use any kind of data file.\n\n.. code:: python\n\n import pandas as pd\n from windpowerlib import WindTurbine, create_power_curve\n my_data = pd.read_csv(""path/to/my/data/file.csv"")\n\n my_turbine_data = {\n ""nominal_power"": 6e6, # in W\n ""hub_height"": 115, # in m\n ""power_curve"": create_power_curve(\n wind_speed=my_data[""wind""], power=my_data[""power""]\n ),\n }\n\n my_turbine = WindTurbine(**my_turbine_data)\n\nSee the `modelchain_example` for more information.\n\nContributing\n==============\n\nWe warmly welcome all who want to contribute to the windpowerlib. If you are interested in wind models and want to help improve the existing models, do not hesitate to contact us via GitHub or email (windpowerlib@rl-institut.de).\n\nClone: https://github.com/wind-python/windpowerlib and install the cloned repository using pip:\n\n.. code:: bash\n\n pip install -e /path/to/the/repository\n\nAs the windpowerlib started with contributors from the `oemof developer group `_, we use the same\n`developer rules `_.\n\n**How to create a pull request:**\n\n* `Fork `_ the windpowerlib repository to your own GitHub account.\n* Change, add or remove code.\n* Commit your changes.\n* Create a `pull request `_ and describe what you will do and why.\n* Wait for approval.\n\n**Generally the following steps are required when changing, adding or removing code:**\n\n* Add new tests if you have written new functions/classes.\n* Add/change the documentation (new feature, API changes ...).\n* Add a whatsnew entry and your name to Contributors.\n* Check if all tests still work by simply executing pytest in your windpowerlib directory:\n\n.. role:: bash(code)\n :language: bash\n\n.. code:: bash\n\n pytest\n\nCiting the windpowerlib\n========================\n\nWe use the zenodo project to get a DOI for each version. `Search zenodo for the right citation of your windpowerlib version `_.\n\nLicense\n============\n\nCopyright (c) 2019 oemof developer group\n\nPermission is hereby granted, free of charge, to any person obtaining a copy\nof this software and associated documentation files (the ""Software""), to deal\nin the Software without restriction, including without limitation the rights\nto use, copy, modify, merge, publish, distribute, sublicense, and/or sell\ncopies of the Software, and to permit persons to whom the Software is\nfurnished to do so, subject to the following conditions:\n\nThe above copyright notice and this permission notice shall be included in all\ncopies or substantial portions of the Software.\n\nTHE SOFTWARE IS PROVIDED ""AS IS"", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\nIMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\nFITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. 
IN NO EVENT SHALL THE\nAUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\nLIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\nOUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE\nSOFTWARE.\n'",",https://doi.org/10.5281/zenodo.824267\n,https://zenodo.org/search?page=1&size=20&q=windpowerlib","2016/08/26, 13:50:35",2616,MIT,2,1527,"2023/10/04, 14:52:13",23,52,108,2,21,9,2.2,0.37207357859531776,"2021/03/09, 16:25:46",v0.2.1,0,9,false,,false,false,"fgbg03/PROCSIM-Running-Results-Frontend,T1mTop1a/team_project_wonders_of_wind,YuTian8328/WindAtlas,pdb-94/miguel,in-RET/inretensys-fastapi,TAMUparametric/energiapy,YuTian8328/FMIWindMap,UU-ER/EHUB-Py_Training,patsec/ot-sim,NOWUM/dmas,oemof/oemof,remmyd/HRESopt,moritz-reuter/ESEM-EE,dpinney/wiires,Runamook/iec_62056-21,Adrianonsare/Geospatial-PowerAnalysis,Adrianonsare/EnergyAnalytics,rohitsanam/streamlit-basic-app,montefesp/EPIPPy,thesethtruth/LESO,brizett/reegis_hp,Pyosch/vpplib,open-fred/lib_validation,reegis/reegis,oemof/feedinlib",,https://github.com/wind-python,,,,,https://avatars.githubusercontent.com/u/21263042?v=4,,, turbinesFoam,A library for simulating wind and marine hydrokinetic turbines in OpenFOAM using the actuator line method.,turbinesFoam,https://github.com/turbinesFoam/turbinesFoam.git,github,"openfoam,turbines,airfoils,blade-element,actuator-line,wind-energy",Wind Energy,"2022/05/16, 14:48:37",76,0,13,true,C++,turbinesFoam,turbinesFoam,"C++,C,Python,Roff,Shell",,"b""turbinesFoam\n============\n\n[![Build Status](https://app.travis-ci.com/turbinesFoam/turbinesFoam.svg?branch=master)](https://app.travis-ci.com/turbinesFoam/turbinesFoam)\n![OpenFOAM v2106](https://img.shields.io/badge/OpenFOAM-v2106-brightgreen.svg)\n![OpenFOAM 8](https://img.shields.io/badge/OpenFOAM-8-brightgreen.svg)\n![OpenFOAM 7](https://img.shields.io/badge/OpenFOAM-7-brightgreen.svg)\n[![DOI](https://zenodo.org/badge/4234/turbinesFoam/turbinesFoam.svg)](https://zenodo.org/badge/latestdoi/4234/turbinesFoam/turbinesFoam)\n\nturbinesFoam is a library for simulating wind and marine hydrokinetic turbines\nin OpenFOAM using the actuator line method.\n\n[![](https://cloud.githubusercontent.com/assets/4604869/10141523/f2e3ad9a-65da-11e5-971c-b736abd30c3b.png)](https://www.youtube.com/watch?v=THZvV4R1vow)\n\nBe sure to check out the\n[development snapshot videos on YouTube](https://www.youtube.com/playlist?list=PLOlLyh5gytG8n8D3V1lDeZ3e9fJf9ux-e).\n\n\nContributing\n------------\n\nPull requests are very welcome!\nSee the [issue tracker](https://github.com/petebachant/turbinesFoam/issues)\nfor more details.\n\n\nFeatures\n--------\n\n`fvOptions` classes for adding actuator lines and turbines constructed from\nactuator lines to any compatible solver or turbulence model, e.g.,\n`simpleFoam`, `pimpleFoam`, `interFoam`, etc.\n\n\nInstallation\n------------\n\n```bash\ncd $WM_PROJECT_USER_DIR\ngit clone https://github.com/turbinesFoam/turbinesFoam.git\ncd turbinesFoam\n./Allwmake\n```\n\n\nUsage\n-----\n\nThere are tutorials located in `turbinesFoam/tutorials`.\n\n\nPublications\n------------\n\nBachant, P., Goude, A., and Wosnik, M. (2016) [_Actuator line modeling of vertical-axis turbines_](https://arxiv.org/abs/1605.01449). 
arXiv preprint 1605.01449.\n\n\nHow to cite\n-----------\n\nThe latest release of turbinesFoam can be cited via DOI thanks to Zenodo: [![DOI](https://zenodo.org/badge/4234/turbinesFoam/turbinesFoam.svg)](https://zenodo.org/badge/latestdoi/4234/turbinesFoam/turbinesFoam)\n\n\nAcknowledgements\n----------------\n\nThis work was funded through a National Science Foundation CAREER award,\nprincipal investigator Martin Wosnik ([NSF CBET\n1150797](http://www.nsf.gov/awardsearch/showAward?AWD_ID=1150797), Energy for\nSustainability, original program manager Geoffrey A. Prentice, current program\nmanager Gregory L. Rorrer).\n\nOpenFOAM is free, open source software for computational fluid dynamics (CFD),\ndeveloped primarily by [CFD Direct](http://cfd.direct), on behalf of the\n[OpenFOAM](http://openfoam.org) Foundation.\n\nInterpolation, Gaussian projection, and vector rotation functions adapted from\nNREL's [SOWFA](https://github.com/NREL/SOWFA).\n""",",https://zenodo.org/badge/latestdoi/4234/turbinesFoam/turbinesFoam,https://arxiv.org/abs/1605.01449,https://zenodo.org/badge/latestdoi/4234/turbinesFoam/turbinesFoam","2014/06/24, 19:30:49",3410,CUSTOM,0,1189,"2023/06/16, 15:59:05",138,59,220,1,131,7,1.2,0.02930402930402931,"2019/11/15, 01:53:16",v0.1.1,0,5,false,,false,false,,,https://github.com/turbinesFoam,,,,,https://avatars.githubusercontent.com/u/10404114?v=4,,, nalu-wind,Solver for wind farm simulations targeting exascale computational platforms.,Exawind,https://github.com/Exawind/nalu-wind.git,github,"low-mach,wind-energy,les,cfd,ecp,exascale-computing,exawind",Wind Energy,"2023/10/24, 22:47:17",102,0,24,true,C,exawind,Exawind,"C,C++,CMake,Fortran,Python,Shell",https://nalu-wind.readthedocs.io,"b'# Nalu-Wind \n\n[Website](https://www.exawind.org/) | [Documentation](https://nalu-wind.readthedocs.io) | [Nightly test dashboard](http://my.cdash.org/index.php?project=Exawind) \n\nNalu-Wind is a generalized, unstructured, massively parallel, incompressible\nflow solver for wind turbine and wind farm simulations. The codebase is a\nwind-focused fork of [NaluCFD](https://github.com/NaluCFD/Nalu); NaluCFD is developed \nand maintained by Sandia National Laboratories. Nalu-Wind is being actively\ndeveloped and maintained by a dedicated, multi-institutional team from [National\nRenewable Energy Laboratory](https://nrel.gov), [Sandia National\nLaboratories](https://sandia.gov), and [Univ. of Texas Austin](https://utexas.edu).\n\nNalu-Wind is developed as an open-source code with the following objectives: \n\n- an open, well-documented implementation of the state-of-the-art computational\n models for modeling wind farm flow physics at various fidelities that are\n backed by a comprehensive verification and validation (V&V) process;\n\n- be capable of performing the highest-fidelity simulations of flowfields within\n wind farms; and \n\n- be able to leverage the high-performance leadership class computing\n facilities available at DOE national laboratories.\n\nWe hope that this community-developed model will be used by research\nlaboratories, academia, and industry to develop the next generation of wind farm\ntechnologies. We welcome the wind energy community to use Nalu-Wind in their\nresearch. When disseminating technical work that includes Nalu-Wind simulations,\nplease reference the following citation:\n\n Sprague, M. 
A., Ananthan, S., Vijayakumar, G., Robinson, M., ""ExaWind: A multifidelity \n modeling and simulation environment for wind energy"", NAWEA/WindTech 2019 Conference, \n Amherst, MA, 2019.\n\n## Documentation\n\nDocumentation is available online at https://nalu-wind.readthedocs.io/ and is\nsplit into the following sections:\n\n- [Theory manual](https://nalu-wind.readthedocs.io/en/latest/source/theory/index.html):\n This section provides a detailed overview of the supported equation sets, the\n discretization and time-integration schemes, turbulence models available, etc.\n \n- [Verification manual](https://nalu-wind.readthedocs.io/en/latest/source/verification/index.html):\n This section documents the results from verification studies of the spatial\n and temporal schemes available in Nalu-Wind.\n \n- [User manual](https://nalu-wind.readthedocs.io/en/latest/source/user/index.html):\n The user manual contains detailed instructions on building the code, along\n with the required third-party libraries (TPLs) and usage.\n \nAll documentation is maintained alongside the source code within the git\nrepository and automatically deployed to the ReadTheDocs website upon new commits.\n \n## Compilation and usage\n\nNalu-Wind is primarily built upon the packages provided by the [Trilinos\nproject](https://trilinos.org), which in turn depends on several third-party\nlibraries (MPI, HDF5, NetCDF, parallel NetCDF) and YAML-CPP. In addition, it\nhas the following optional dependencies: hypre, TIOGA, and OpenFAST. Detailed\nbuild instructions are available in the [user\nmanual](https://nalu-wind.readthedocs.io/en/latest/source/user/building.html).\nWe recommend using the [Spack](https://spack.io/) package manager to install\nNalu-Wind on your system.\n\n### Testing and quality assurance\n\nNalu-Wind comes with a comprehensive unit test and regression test suite that\nexercises almost all major components of the code. The `master` branch is\ncompiled and run through a regression test suite with different compilers\n([GCC](https://gcc.gnu.org/), [LLVM/Clang](https://clang.llvm.org/), and\n[Intel](https://software.intel.com/en-us/compilers)) on Linux and MacOS\noperating systems, against both the `master` and `develop` branches of\n[Trilinos](https://github.com/trilinos/Trilinos). Tests are performed using both\nflat MPI and hybrid MPI-GPU hardware configurations. The results of the nightly\ntesting are publicly available on the [CDash\ndashboard](http://my.cdash.org/index.php?project=Nalu-Wind).\n\n### Contributing, reporting bugs, and requesting help\n\nTo report issues or bugs, please [create a new\nissue](https://github.com/Exawind/nalu-wind/issues/new) on GitHub.\n\nWe welcome contributions from the community in the form of bug fixes, feature\nenhancements, documentation updates, etc. All contributions are processed\nthrough pull-requests on GitHub. Please follow our [contributing\nguidelines](https://github.com/Exawind/nalu-wind/blob/master/CONTRIBUTING.md)\nwhen submitting pull-requests.\n \n## License\n\nNalu-Wind is licensed under the BSD 3-clause license. Please see the\n[LICENSE](https://github.com/Exawind/nalu-wind/blob/master/LICENSE) included in\nthe source code repository for more details.\n\n## Acknowledgements \n\nNalu-Wind is currently being developed with funding from the Department of Energy\'s\n(DOE) Office of Science [Exascale Computing Project\n(ECP)](https://www.exascaleproject.org/) and Energy Efficiency and Renewable\nEnergy (EERE) Wind Energy Technology Office (WETO). 
Please see [authors\nfile](https://github.com/Exawind/nalu-wind/blob/master/AUTHORS) for a \nlist of contributors to Nalu-Wind. \n'",,"2018/05/03, 15:39:32",2001,CUSTOM,136,2376,"2023/10/24, 22:05:53",60,919,1167,153,0,13,1.3,0.8009259259259259,"2021/03/21, 18:48:03",v1.3.0,0,47,false,,false,true,,,https://github.com/Exawind,,,,,https://avatars.githubusercontent.com/u/22328575?v=4,,, openfast,"A multi-physics, multi-fidelity tool for simulating the coupled dynamic response of wind turbines and wind farms.",OpenFAST,https://github.com/OpenFAST/openfast.git,github,"wind-turbine,wind-energy,wind-farm,aeroelasticity,wind-power,wind",Wind Energy,"2023/10/20, 17:37:40",542,0,132,true,Fortran,OpenFAST,OpenFAST,"Fortran,C++,C,Python,CMake,MATLAB,Makefile,Batchfile,Shell,Dockerfile",http://openfast.readthedocs.io,"b'OpenFAST\n========\n\n|actions| |nbsp| |rtfd|\n\n.. |actions| image:: https://github.com/openfast/openfast/actions/workflows/automated-dev-tests.yml/badge.svg?branch=dev\n :target: https://github.com/OpenFAST/openfast/actions/workflows/automated-dev-tests.yml?query=workflow%3A%22Development+Pipeline%22\n :alt: Build Status\n.. |rtfd| image:: https://readthedocs.org/projects/openfast/badge/?version=dev\n :target: https://openfast.readthedocs.io/en/dev\n :alt: Documentation Status\n.. |nbsp| unicode:: 0xA0\n :trim:\n\nOpenFAST is a wind turbine simulation tool that builds on FAST v8. FAST.Farm\nextends the capability of OpenFAST to simulate multi-turbine wind farms. They were\ncreated with the goal of being community models developed and used by research\nlaboratories, academia, and industry. They are managed by a dedicated team at the\nNational Renewable Energy Lab. Our objective is to ensure that OpenFAST and FAST.Farm\nare sustainable software that are well tested and well documented. If you\'d like\nto contribute, see the `Developer Documentation `_\nand any open GitHub issues with the\n`Help Wanted `_\ntag.\n\n**OpenFAST is under active development**.\n\nFAST v8 - OpenFAST\n------------------\nThe transition from FAST v8 to OpenFAST represents the effort to better\nsupport an open-source developer community around FAST-based aero-hydro-servo-\nelastic engineering models of wind turbines and wind plants. OpenFAST is the\nnext generation of FAST analysis tools. More information is available in the\n`transition notes `_.\n\nFAST v8, now OpenFAST, is a physics-based engineering tool for simulating the coupled dynamic\nresponse of wind turbines. OpenFAST joins aerodynamics models, hydrodynamics models\nfor offshore structures, control and electrical system (servo) dynamics models,\nand structural (elastic) dynamics models to enable coupled nonlinear aero-\nhydro-servo-elastic simulation in the time domain. The OpenFAST tool enables the\nanalysis of a range of wind turbine configurations, including two- or\nthree-blade horizontal-axis rotor, pitch or stall regulation, rigid or\nteetering hub, upwind or downwind rotor, and lattice or tubular tower. The wind\nturbine can be modeled on land or offshore on fixed-bottom or floating\nsubstructures. OpenFAST is based on advanced engineering models derived from\nfundamental laws, but with appropriate simplifications and assumptions, and\nsupplemented where applicable with computational solutions and test data.\n\nWith OpenFAST, you can run large numbers of nonlinear time-domain simulations\nin approximately real time to enable standards-based loads analysis for predicting\nwind system ultimate and fatigue loads. 
You can also linearize the underlying\nnonlinear model about an operating point to understand the system response\nand enable the calculation of natural frequencies, damping, and mode shapes;\nthe design of controllers; and the analysis of aero-elastic instabilities.\n\nThe aerodynamic models use wind-inflow data and solve for the rotor-wake\neffects and blade-element aerodynamic loads, including dynamic stall. The\nhydrodynamics models simulate the regular or irregular incident waves and\ncurrents and solve for the hydrostatic, radiation, diffraction, and viscous\nloads on the offshore substructure. The control and electrical system models\nsimulate the controller logic, sensors, and actuators of the blade-pitch,\ngenerator-torque, nacelle-yaw, and other control devices, as well as the\ngenerator and power-converter components of the electrical drive. The\nstructural-dynamics models apply the control and electrical system\nreactions, apply the aerodynamic and hydrodynamic loads, add gravitational\nloads, and simulate the elasticity of the rotor, drivetrain, and support\nstructure. Coupling between all models is achieved through a modular\ninterface and coupler (glue code).\n\nFAST.Farm extends the capabilities of OpenFAST to provide physics-based\nengineering simulation of multi-turbine land-based, fixed-bottom offshore,\nand floating offshore wind farms. With FAST.Farm, you can simulate each wind\nturbine in the farm with an OpenFAST model and capture the relevant\nphysics for prediction of wind farm power performance and structural loads,\nincluding wind farm-wide ambient wind, super controller, and wake advection,\nmeandering, and merging. FAST.Farm maintains computational efficiency\nthrough parallelization to enable loads analysis for predicting the ultimate\nand fatigue loads of each wind turbine in the farm.\n\n\nDocumentation\n-------------\nThe full documentation is available at http://openfast.readthedocs.io/.\n\nThis documentation is stored and maintained alongside the source code.\nIt is compiled into HTML with Sphinx and is tied to a particular version\nof OpenFAST. `Readthedocs `_ hosts the following\nversions of the documentation:\n\n* ``latest`` - The latest commit on the ``main`` branch\n* ``stable`` - Corresponds to the last tagged release\n* ``dev`` - The latest commit on the ``dev`` branch\n\nThese can be toggled with the ``v: latest`` button in the lower left corner of\nthe docs site.\n\nObtaining OpenFAST and FAST.Farm\n--------------------------------\nOpenFAST and FAST.Farm are hosted entirely on GitHub, so you are in the\n`right place `_!\nThe repository is structured with two branches following the\n""git-flow"" convention:\n\n* ``main``\n* ``dev``\n\nThe ``main`` branch is stable, well tested, and represents the most up-to-date\nreleased versions of OpenFAST and FAST.Farm. The latest commit on ``main``\ncontains a tag with version info and brief release notes. The tag history can be\nobtained with the ``git tag`` command and viewed in more detail on\n`GitHub Releases `_. For general\nuse, the ``main`` branch is highly recommended.\n\nThe ``dev`` branch is generally stable and tested, but not static. It contains\nnew features, bug fixes, and documentation updates that have not been compiled\ninto a production release. Before proceeding with new development, it is\nrecommended to explore the ``dev`` branch. 
This branch is updated regularly\nthrough pull requests, so be sure to ``git fetch`` often and check\n`outstanding pull requests `_.\n\nFor those not familiar with git and GitHub, there are many resources:\n\n* https://guides.github.com\n* https://try.github.io\n* https://help.github.com/categories/bootcamp/\n* https://desktop.github.com/\n* http://nvie.com/posts/a-successful-git-branching-model/\n\nCompilation, Usage, and Development\n-----------------------------------\nDetails for\n`compiling `_,\n`using `_, and\n`developing `_\nOpenFAST and FAST.Farm on Unix-based and Windows machines are available at\n`readthedocs `_.\n\nHelp\n----\nPlease use `GitHub Issues `_ to:\n\n* ask usage questions\n* report bugs\n* request code enhancements\n\nUsers and developers may also be interested in the NREL National Wind\nTechnology Center (NWTC) `phpBB Forum `_,\nwhich is still maintained and has a long history of FAST-related questions\nand answers.\n\nAcknowledgments\n---------------\n\nOpenFAST and FAST.Farm are maintained and developed by researchers and software\nengineers at the `National Renewable Energy Laboratory `_\n(NREL), with support from the US Department of Energy\'s Wind Energy Technology\nOffice. NREL gratefully acknowledges development contributions from the following\norganizations:\n\n* Envision Energy USA, Ltd\n* Brigham Young University\n* The University of Massachusetts\n* `Intel® Parallel Computing Center (IPCC) `_\n'",,"2016/08/31, 20:07:10",2610,Apache-2.0,522,9558,"2023/10/25, 13:02:44",245,622,1327,358,0,8,2.0,0.6383053221288515,"2023/10/20, 17:55:43",v3.5.1,0,55,false,,false,false,,,https://github.com/OpenFAST,https://openfast.readthedocs.io,,,,https://avatars.githubusercontent.com/u/15838605?v=4,,, amr-wind,"A massively parallel, block-structured adaptive-mesh, incompressible flow solver for wind turbine and wind farm simulations.",Exawind,https://github.com/Exawind/amr-wind.git,github,"ecp,exascale-computing,amrex,amr,wind,wind-turbines",Wind Energy,"2023/10/20, 16:12:21",82,0,37,true,C++,exawind,Exawind,"C++,CMake,Python,C,Makefile",https://exawind.github.io/amr-wind,"b'# AMR-Wind \n\n[Website](https://www.exawind.org/) | [User manual](https://exawind.github.io/amr-wind) | [API docs](https://exawind.github.io/amr-wind/api_docs) | [Nightly test dashboard](http://my.cdash.org/index.php?project=Exawind) \n\n[![Powered by AMReX](https://amrex-codes.github.io/badges/powered%20by-AMReX-red.svg)](https://amrex-codes.github.io/amrex/) [![Build Status](https://github.com/Exawind/amr-wind/workflows/AMR-Wind-CI/badge.svg)](https://github.com/Exawind/amr-wind/actions)\n\n\nAMR-Wind is a massively parallel, block-structured adaptive-mesh, incompressible\nflow solver for wind turbine and wind farm simulations. The codebase is a\nwind-focused fork of [incflo](https://github.com/AMReX-Codes/incflo). The solver\nis built on top of the [AMReX library](https://amrex-codes.github.io/amrex).\nThe AMReX library provides the mesh data structures and mesh adaptivity, as well as the\nlinear solvers used for solving the governing equations. 
AMR-Wind is actively\ndeveloped and maintained by a dedicated multi-institutional team from [Lawrence\nBerkeley National Laboratory](https://www.lbl.gov/), [National Renewable Energy\nLaboratory](https://nrel.gov), and [Sandia National\nLaboratories](https://sandia.gov).\n\nThe primary applications for AMR-Wind are: performing large-eddy simulations\n(LES) of atmospheric boundary layer (ABL) flows, simulating wind farm\nturbine-wake interactions using actuator disk or actuator line models for\nturbines, and as a background solver when coupled with a near-body solver (e.g.,\n[Nalu-Wind](https://github.com/exawind/nalu-wind)) with overset methodology to\nperform blade-resolved simulations of multiple wind turbines within a wind farm.\nFor offshore applications, the ability to model the air-sea interaction effects\nand their impact on the ABL characteristics is another focus for the code\ndevelopment effort. As with other codes in the\n[Exawind](https://github.com/exawind) ecosystem, AMR-Wind shares the following\nobjectives:\n\n- an open, well-documented implementation of the state-of-the-art computational\n models for modeling wind farm flow physics at various fidelities that are\n backed by a comprehensive verification and validation (V&V) process;\n\n- be capable of performing the highest-fidelity simulations of flowfields within\n wind farms; and \n\n- be able to leverage the high-performance leadership class computing\n facilities available at DOE national laboratories.\n\n## Documentation\n\nDocumentation is organized into a [user manual](https://exawind.github.io/amr-wind)\nand developer-focused [API\ndocumentation](https://exawind.github.io/amr-wind). You can either\nbrowse the docs online by following the links, or you can generate them locally\nafter downloading the code. Please follow the instructions in the user manual to\nbuild the documentation locally.\n\n## Compilation and usage\n\nAMR-Wind is built upon the [AMReX library](https://amrex-codes.github.io/amrex).\nA snapshot of the AMReX library is distributed along with the AMR-Wind source\ncode as a `git-submodule`. In addition to the AMReX library, you will require a\nmodern C++ compiler that supports the C++17 standard. Users wishing to execute\nthe code on high-performance computing (HPC) systems will also need MPI\nlibraries installed on their system. The code can also be compiled using MPI+X, \nwhere X can be OpenMP for CPU shared memory parallelism,\nCUDA to target NVIDIA GPUs, HIP for AMD GPUs, or DPC++ for Intel GPUs.\n\n### Contributing, reporting bugs, and requesting help\n\nTo report issues or bugs, please [create a new\nissue](https://github.com/Exawind/amr-wind/issues/new) on GitHub.\n\nWe welcome contributions from the community in the form of bug fixes, feature\nenhancements, documentation updates, etc. All contributions are processed\nthrough pull-requests on GitHub.\n\n## License\n\nAMR-Wind is licensed under the BSD 3-clause license. 
Please see the\n[LICENSE](https://github.com/Exawind/amr-wind/blob/development/LICENSE) included in\nthe source code repository for more details.\n\n'",,"2019/11/04, 19:10:43",1451,CUSTOM,149,2005,"2023/10/20, 16:12:26",34,788,879,235,5,9,0.8,0.7905686546463245,,,0,40,false,,false,false,,,https://github.com/Exawind,,,,,https://avatars.githubusercontent.com/u/22328575?v=4,,, OpenOA,"This library provides a framework for working with large time series data from wind plants, such as SCADA.",NREL,https://github.com/NREL/OpenOA.git,github,,Wind Energy,"2023/09/29, 23:19:35",153,6,27,true,Jupyter Notebook,National Renewable Energy Laboratory,NREL,"Jupyter Notebook,Python,TeX,Batchfile,Makefile,CSS",https://openoa.readthedocs.io/,"b'\n\n[![Binder Badge](https://mybinder.org/badge_logo.svg)](https://mybinder.org/v2/gh/NREL/OpenOA/develop_v3?filepath=examples) [![Gitter Badge](https://badges.gitter.im/NREL_OpenOA/community.svg)](https://gitter.im/NREL_OpenOA/community?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge) [![Journal of Open Source Software Badge](https://joss.theoj.org/papers/d635ef3c3784d49f6e81e07a0b35ff6b/status.svg)](https://joss.theoj.org/papers/d635ef3c3784d49f6e81e07a0b35ff6b)\n\n[![Documentation Badge](https://readthedocs.org/projects/openoa/badge/?version=latest)](https://openoa.readthedocs.io) ![Tests Badge](https://github.com/NREL/OpenOA/workflows/Tests/badge.svg?branch=develop) [![Code Coverage Badge](https://codecov.io/gh/NREL/OpenOA/branch/develop/graph/badge.svg)](https://codecov.io/gh/NREL/OpenOA)\n\n[![pre-commit](https://img.shields.io/badge/pre--commit-enabled-brightgreen?logo=pre-commit&logoColor=white)](https://github.com/pre-commit/pre-commit) [![Code style: black](https://img.shields.io/badge/code%20style-black-000000.svg)](https://github.com/psf/black) [![Imports: isort](https://img.shields.io/badge/%20imports-isort-%231674b1?style=flat&labelColor=ef8336)](https://pycqa.github.io/isort/)\n\n-----\n\nThis library provides a framework for working with large timeseries data from wind plants, such as SCADA.\nIts development has been motivated by the WP3 Benchmarking (PRUF) project,\nwhich aims to provide a reference implementation for plant-level performance assessment.\n\nAnalysis routines are grouped by purpose into methods,\nand these methods in turn rely on more abstract toolkits.\nIn addition to the provided analysis methods,\nanyone can write their own, which is intended to allow the natural\ngrowth of tools within this framework.\n\nThe library is written around Pandas Data Frames, utilizing a flexible backend\nso that data loading, processing, and analysis could be performed using other libraries,\nsuch as Dask and Spark, in the future.\n\nIf you would like to try out the code before installation or simply explore the possibilities, please see our examples on [Binder](https://mybinder.org/v2/gh/NREL/OpenOA/develop_v3?filepath=examples).\n\nIf you use this software in your work, please cite our JOSS article with the following BibTeX:\n\n```\n@article{Perr-Sauer2021,\n doi = {10.21105/joss.02171},\n url = {https://doi.org/10.21105/joss.02171},\n year = {2021},\n publisher = {The Open Journal},\n volume = {6},\n number = {58},\n pages = {2171},\n author = 
{Jordan Perr-Sauer and Mike Optis and Jason M. Fields and Nicola Bodini and Joseph C.Y. Lee and Austin Todd and Eric Simley and Robert Hammond and Caleb Phillips and Monte Lunacek and Travis Kemper and Lindy Williams and Anna Craig and Nathan Agarwal and Shawn Sheng and John Meissner},\n title = {OpenOA: An Open-Source Codebase For Operational Analysis of Wind Farms},\n journal = {Journal of Open Source Software}\n}\n```\n\n### Requirements\n\n- Python 3.8, 3.9, or 3.10 with pip.\n\nWe strongly recommend using the Anaconda Python distribution and creating a new conda environment for OpenOA. You can download Anaconda through [their website](https://www.anaconda.com/products/individual).\n\nAfter installing Anaconda, create and activate a new conda environment with the name ""openoa-env"":\n\n```bash\nconda create --name openoa-env python=3.10\nconda activate openoa-env\n```\n\n### Installation\n\nClone the repository and install the library and its dependencies using pip:\n\n```bash\ngit clone https://github.com/NREL/OpenOA.git\ncd OpenOA\npip install .\n```\n\nYou should now be able to import openoa from the Python interpreter:\n\n```bash\npython\n>>> import openoa\n>>> openoa.__version__\n```\n\n#### Common Installation Issues\n\n- In Windows you may get an error regarding geos_c.dll. To fix this, install Shapely using:\n\n```bash\nconda install Shapely\n```\n\n- In Windows, an ImportError regarding win32api can also occur. This can be resolved by fixing the version of pywin32 as follows:\n\n```bash\npip install --upgrade pywin32==255\n```\n\n#### Example Notebooks and Data\n\nThe example data will be automatically extracted as needed by the tests. To manually extract the example data for use with the example notebooks, use the following command:\n\n```bash\nunzip examples/data/la_haute_borne.zip -d examples/data/la_haute_borne/\n```\n\nThe example notebooks are located in the `examples` directory. We suggest installing the Jupyter notebook server to run the notebooks interactively. The notebooks can also be viewed statically on [Read The Docs](http://openoa.readthedocs.io/en/latest/examples).\n\n```bash\njupyter lab # ""jupyter notebook"" is also ok if that\'s your preference\n```\n\n### Development\n\nPlease see the developer section of the contributing guide [here](contributing.md), or on the [documentation site](https://openoa.readthedocs.io/en/latest/getting_started/contributing.html) for complete details.\n\nDevelopment dependencies are provided through the develop extra flag in setup.py. Here, we install\nOpenOA, with development dependencies, in editable mode, and activate the pre-commit workflow (note:\nthis second step must be done before committing any changes):\n\n```bash\ncd OpenOA\npip install -e "".[develop, docs]""\npre-commit install\n```\n\nOccasionally, you will need to update the dependencies in the pre-commit workflow, which will provide an error when this needs to happen. When it does, this can normally be resolved with the below code, after which you can continue with your normal git workflow:\n\n```bash\npre-commit autoupdate\ngit add .pre-commit-config.yaml\n```\n\n#### Testing\nTests are written in the Python unittest or pytest framework and are runnable using pytest. There\nare two types of tests: unit tests (located in `test/unit`) run quickly and are run automatically for\nevery pull request to the OpenOA repository. 
Regression tests (located at `test/regression`) provide\na comprehensive suite of scientific tests that may take a long time to run (up to 20 minutes on our\nmachines). These tests should be run locally before submitting a pull request, and are run weekly on\nthe develop and main branches.\n\nTo run all unit and regression tests:\n\n```bash\npytest\n```\n\nTo run unit tests only:\n\n```bash\npytest test/unit\n```\n\nTo run all tests and generate a code coverage report:\n\n```bash\npytest --cov=openoa\n```\n\n#### Documentation\n\nDocumentation is automatically built by and visible through\n[Read The Docs](http://openoa.readthedocs.io/).\n\nYou can build the documentation with [sphinx](http://www.sphinx-doc.org/en/stable/), but will need\nto ensure [Pandoc is installed](https://pandoc.org/installing.html) on your computer first.\n\n```bash\ncd OpenOA\npip install -e "".[docs]""\ncd sphinx\nmake html\n```\n\n### Contributors\n\n[![All Contributors](https://img.shields.io/github/all-contributors/NREL/OpenOA?color=ee8449&style=flat-square)](#contributors)\n'",",https://doi.org/10.21105/joss.02171","2016/12/22, 18:16:30",2498,BSD-3-Clause,107,751,"2023/09/29, 23:19:35",8,151,250,48,25,0,4.8,0.6322701688555348,"2023/09/29, 23:24:46",v3.0,0,15,false,,false,true,"NREL/a2e2g,Riderwiesiek/CeneoWebScraper,entralliance/py-entr,paulf81/flasc_pl,NREL/flasc,vchaparro/wind-power-forecasting",,https://github.com/NREL,http://www.nrel.gov,"Golden, CO",,,https://avatars.githubusercontent.com/u/1906800?v=4,,, ROSCO,NREL's Reference OpenSource Controller for wind turbine applications.,NREL,https://github.com/NREL/ROSCO.git,github,,Wind Energy,"2023/04/11, 16:13:40",77,0,27,true,Python,National Renewable Energy Laboratory,NREL,"Python,Fortran,Roff,Jupyter Notebook,MATLAB,F*,Scheme,CMake,C,Shell,M",https://rosco.readthedocs.io/en/latest/,"b'# NREL\'s Reference OpenSource Controller (ROSCO) toolbox for wind turbine applications\nNREL\'s Reference OpenSource Controller (ROSCO) for wind turbine applications is a toolset designed to ease controller use and implementation for the wind turbine researcher. Some primary capabilities include:\n* A reference controller with industry-standard functionality \n* Generic tuning of NREL\'s ROSCO controller\n* Simple 1-DOF turbine simulations for quick controller capability verifications\n* Parsing of OpenFAST input and output files\n\n\n## Introduction\nThe NREL Reference OpenSource Controller (ROSCO) provides an open, modular and fully adaptable baseline wind turbine controller to the scientific community. The ROSCO toolbox leverages this architecture and implementation to provide a generic tuning process for the controller. Because of the open character and modular set-up, scientists are able to collaborate and contribute in making continuous improvements to the code for the controller and the toolbox. The ROSCO controller is implemented in FORTRAN, while the remainder of the toolset is a mostly-python code base with a number of functionalities.\n\n* [ROSCO](https://github.com/NREL/ROSCO/tree/main/ROSCO) - the fortran source code for the ROSCO controller. 
\n* [Tune_Cases](https://github.com/NREL/ROSCO/tree/main/Tune_Cases) - example generic tuning scripts for a number of open-source reference turbines.\n* [Test_Cases](https://github.com/NREL/ROSCO/tree/main/Test_Cases) - numerous NREL 5MW base cases to run for controller updates and comparisons. A ""test-suite"", if you will...\n* [Matlab_Toolbox](https://github.com/NREL/ROSCO/tree/main/Matlab_Toolbox) - MATLAB scripts to parse and plot simulation output data.\n* [ofTools](https://github.com/NREL/ROSCO/tree/main/ROSCO_toolbox/ofTools) - A number of scripts to facilitate usage of OpenFAST and manage OpenFAST input and output files. \n* [linear](https://github.com/NREL/ROSCO/tree/main/ROSCO_toolbox/linear) - Scripts to aid with the use of linear models for controller tuning and simplified simulation. \n\n\n## Documentation\nAll relevant documentation about the ROSCO toolbox and ROSCO controller can be found through [ROSCO\'s readthedocs webpage](https://rosco.readthedocs.io/en/latest/). Here, users can find information on [installing the ROSCO tools](https://rosco.readthedocs.io/en/latest/source/install.html) for control purposes. Additionally, there is information on the [standard workflow](https://rosco.readthedocs.io/en/latest/source/standard_use.html), details of the input files, use cases for the ROSCO tool-chain, and more. \n\n## Issues and Discussion\nIf you find issues with any of the code that resides in this repository, it is encouraged for you to open a [GitHub issue](https://github.com/NREL/ROSCO/issues). If you have general questions or comments regarding the code, please start a [discussion via GitHub](https://github.com/NREL/ROSCO/discussions). We encourage you to use these resources for all ROSCO-related questions and comments, rather than other resources such as the FAST forums. This helps us keep ROSCO-related items centralized, and provides a singular place for the community to look when they have questions that might arise. Please keep in mind that we will do our very best to respond in a timely manner, but may take a few days to get back to you if you catch us during a busy time. \n\n## Contributing\nIf it wasn\'t obvious from _open-source_ being in the title of the tool-set, this is an open-source code base that we would love for the community to contribute to. If you find yourself fixing any bugs, writing new routines, or even making small typo changes, please submit a [pull request](https://github.com/NREL/ROSCO/pulls). \n\n## Survey\nPlease help us better understand the ROSCO user-base and how we can improve ROSCO through this brief survey:\n[ROSCO toolchain survey](https://forms.office.com/Pages/ResponsePage.aspx?id=fp3yoM0oVE-EQniFrufAgGWnC45k8q5Kl90RBkHijqBUN0JTNzBJT1QwMjIzNDhCWDlDTUZPWDdMWC4u)\n\n## Referencing\nTo reference the ROSCO source code directly, please use the following DOI:\n[![DOI](https://zenodo.org/badge/220498357.svg)](https://zenodo.org/badge/latestdoi/220498357)\n\nIf the ROSCO Toolbox played a role in your research, please cite it. This software can be\ncited as:\n\n NREL: ROSCO. Version 2.4.1, https://github.com/NREL/ROSCO, 2021.\n\nFor LaTeX users:\n\n```\n@misc{ROSCO_toolbox_2021,\n author = {NREL},\n title = {{ROSCO. Version 2.4.1}},\n year = {2021},\n publisher = {GitHub},\n journal = {GitHub repository},\n url = {https://github.com/NREL/ROSCO}\n }\n```\nIf the ROSCO generic tuning theory and implementation played a role in your research, please cite the following paper:\n```\n@Article{wes-2021-19,\nAUTHOR = {Abbas, N. 
and Zalkind, D. and Pao, L. and Wright, A.},\nTITLE = {A Reference Open-Source Controller for Fixed and Floating Offshore Wind Turbines},\nJOURNAL = {Wind Energy Science Discussions},\nVOLUME = {2021},\nYEAR = {2021},\nPAGES = {1--33},\nURL = {https://wes.copernicus.org/preprints/wes-2021-19/},\nDOI = {10.5194/wes-2021-19}\n}\n```\n\n## Additional Contributors and Acknowledgments\nPrimary contributions to ROSCO have been provided by researchers at the National Renewable Energy Laboratory and the University of Colorado Boulder. Additionally, the ROSCO controller was built upon the foundations of the [Delft Research Controller](https://github.com/TUDelft-DataDrivenControl/DRC_Fortran). Much of the intellect behind these contributions has been inspired or derived from an extensive amount of work in the literature. The bulk of this has been cited through the primary publications about this work. \n'",",https://zenodo.org/badge/latestdoi/220498357","2019/11/08, 15:47:14",1447,Apache-2.0,3,477,"2023/10/20, 06:31:52",11,136,214,74,5,4,0.6,0.25761772853185594,"2023/09/12, 15:45:48",raaw1.4,0,7,false,,false,false,,,https://github.com/NREL,http://www.nrel.gov,"Golden, CO",,,https://avatars.githubusercontent.com/u/1906800?v=4,,, floris,A controls-oriented engineering wake modeling framework for evaluating the impact of wind farm controls on AEP and wind farm design.,NREL,https://github.com/NREL/floris.git,github,,Wind Energy,"2023/07/27, 19:49:18",158,0,49,true,Python,National Renewable Energy Laboratory,NREL,Python,http://nrel.github.io/floris,"b'# FLORIS Wake Modeling and Wind Farm Controls Software\n\nFLORIS is a controls-focused wind farm simulation software incorporating\nsteady-state engineering wake models into a performance-focused Python\nframework. It has been in active development at NREL since 2013 and the latest\nrelease is [FLORIS v3.4.1](https://github.com/NREL/floris/releases/latest).\nOnline documentation is available at https://nrel.github.io/floris.\n\nThe software is in active development and engagement with the development team\nis highly encouraged. If you are interested in using FLORIS to conduct studies\nof a wind farm or extending FLORIS to include your own wake model, please join\nthe conversation in [GitHub Discussions](https://github.com/NREL/floris/discussions/)!\n\n## Installation\n\n**If upgrading from v2, it is highly recommended to install FLORIS V3 into a new virtual environment**.\nInstalling into a Python environment that contains FLORIS v2 may cause conflicts.\nIf you intend to use [pyOptSparse](https://mdolab-pyoptsparse.readthedocs-hosted.com/en/latest/) with FLORIS,\nit is recommended to install that package before installing FLORIS.\n\nFLORIS can be installed by downloading the source code or via the PyPI\npackage manager with `pip`.\n\nThe simplest method is with `pip` by using this command:\n\n```bash\npip install floris\n```\n\nDevelopers and anyone who intends to inspect the source code\ncan install FLORIS by downloading the git repository\nfrom GitHub with ``git`` and use ``pip`` to locally install it.\nIt is highly recommended to use a Python virtual environment manager\nsuch as [conda](https://docs.conda.io/en/latest/miniconda.html)\nin order to maintain a clean and sandboxed environment. 
The following\ncommands in a terminal or shell will download and install FLORIS.\n\n```bash\n    # Download the source code from the `main` branch\n    git clone -b main https://github.com/NREL/floris.git\n\n    # If using conda, be sure to activate your environment prior to installing\n    # conda activate <env-name>\n\n    # If using pyOptSparse, install it first\n    conda install -c conda-forge pyoptsparse\n\n    # Install FLORIS\n    pip install -e floris\n```\n\nWith both methods, the installation can be verified by opening a Python interpreter\nand importing FLORIS:\n\n```python\n    >>> import floris\n    >>> help(floris)\n\n    Help on package floris:\n\n    NAME\n        floris - # Copyright 2021 NREL\n\n    PACKAGE CONTENTS\n        logging_manager\n        simulation (package)\n        tools (package)\n        turbine_library (package)\n        type_dec\n        utilities\n        version\n\n    VERSION\n        3.4\n\n    FILE\n        ~/floris/floris/__init__.py\n```\n\nIt is important to regularly check for new updates and releases as new\nfeatures, improvements, and bug fixes will be issued on an ongoing basis.\n\n## Quick Start\n\nFLORIS is a Python package run on the command line typically by providing\nan input file with an initial configuration. It can be installed with\n```pip install floris``` (see [installation](https://nrel.github.io/floris/installation)).\nThe typical entry point is\n[FlorisInterface](https://nrel.github.io/floris/_autosummary/floris.tools.floris_interface.FlorisInterface.html#floris.tools.floris_interface.FlorisInterface)\nwhich accepts the path to the input file as an argument. From there,\nchanges can be made to the initial configuration through the\n[FlorisInterface.reinitialize](https://nrel.github.io/floris/_autosummary/floris.tools.floris_interface.FlorisInterface.html#floris.tools.floris_interface.FlorisInterface.reinitialize)\nroutine, and the simulation is executed with\n[FlorisInterface.calculate_wake](https://nrel.github.io/floris/_autosummary/floris.tools.floris_interface.FlorisInterface.html#floris.tools.floris_interface.FlorisInterface.calculate_wake).\n\n```python\nfrom floris.tools import FlorisInterface\nfi = FlorisInterface(""path/to/input.yaml"")\nfi.reinitialize(wind_directions=[i for i in range(10)])\nfi.calculate_wake()\n```\n\nFinally, results can be analyzed via post-processing functions available within\n[FlorisInterface](https://nrel.github.io/floris/_autosummary/floris.tools.floris_interface.FlorisInterface.html#floris.tools.floris_interface.FlorisInterface)\nsuch as\n- [FlorisInterface.get_turbine_layout](https://nrel.github.io/floris/_autosummary/floris.tools.floris_interface.FlorisInterface.html#floris.tools.floris_interface.FlorisInterface.get_turbine_layout)\n- [FlorisInterface.get_turbine_powers](https://nrel.github.io/floris/_autosummary/floris.tools.floris_interface.FlorisInterface.html#floris.tools.floris_interface.FlorisInterface.get_turbine_powers)\n- [FlorisInterface.get_farm_AEP](https://nrel.github.io/floris/_autosummary/floris.tools.floris_interface.FlorisInterface.html#floris.tools.floris_interface.FlorisInterface.get_farm_AEP)\n\nand in a visualization package at [floris.tools.visualization](https://nrel.github.io/floris/_autosummary/floris.tools.floris_interface.FlorisInterface.html#floris.tools.visualization).\nA collection of examples describing the creation of simulations as well as\nanalysis and post processing are included in the\n[repository](https://github.com/NREL/floris/tree/main/examples)\nand described in detail in [Examples Index](https://nrel.github.io/floris/examples).\n\n## Engaging on GitHub\n\nFLORIS leverages 
the following GitHub features to coordinate support and development efforts:\n\n- [Discussions](https://github.com/NREL/floris/discussions): Collaborate to develop ideas for new use cases, features, and software designs, and get support for usage questions\n- [Issues](https://github.com/NREL/floris/issues): Report potential bugs and well-developed feature requests\n- [Projects](https://github.com/orgs/NREL/projects/18/): Include current and future work on a timeline and assign a person to ""own"" it\n\nGenerally, the first entry point for the community will be within one of the\ncategories in Discussions.\n[Ideas](https://github.com/NREL/floris/discussions/categories/ideas) is a great spot to develop the\ndetails for a feature request. [Q&A](https://github.com/NREL/floris/discussions/categories/q-a)\nis where to get usage support.\n[Show and tell](https://github.com/NREL/floris/discussions/categories/show-and-tell) is a free-form\nspace to show off the things you are doing with FLORIS.\n\n\n# License\n\nCopyright 2022 NREL\n\nLicensed under the Apache License, Version 2.0 (the ""License"");\nyou may not use this file except in compliance with the License.\nYou may obtain a copy of the License at\n\n    http://www.apache.org/licenses/LICENSE-2.0\n\nUnless required by applicable law or agreed to in writing, software\ndistributed under the License is distributed on an ""AS IS"" BASIS,\nWITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\nSee the License for the specific language governing permissions and\nlimitations under the License.\n'",,"2019/04/01, 17:38:14",1668,Apache-2.0,97,1940,"2023/10/22, 12:41:50",71,338,470,118,3,16,1.9,0.41226472374013357,"2023/07/27, 19:54:40",v3.4.1,0,25,false,,false,true,,,https://github.com/NREL,http://www.nrel.gov,"Golden, CO",,,https://avatars.githubusercontent.com/u/1906800?v=4,,, windtools,The Wind Energy Generation Tools provides useful tools to assist in wind energy simulations.,FZJ-IEK3-VSA,https://github.com/FZJ-IEK3-VSA/windtools.git,github,,Wind Energy,"2020/06/19, 12:12:52",5,0,1,false,Python,FZJ-IEK3,FZJ-IEK3-VSA,Python,,"b'# Wind Energy Generation Tools\n\nThe Wind Energy Generation Tools provides useful tools to assist in wind energy simulations.\n\nCurrent list of tools:\n * Synthetic Wind Turbine Power Curve Generator: \n   Produces turbine power curves as a function of a turbine\'s specific capacity.\n\n\n---\n## Usage Examples\n* [Synthetic Power Curve](Examples/SyntheticPowerCurve.ipynb)\n\n---\n## Installation\n\nClone a local copy of the repository to your computer\n\n    $ git clone https://github.com/FZJ-IEK3-VSA/windtools.git\n    \nThen install via pip as follows:\n \n    $ cd windtools\n    $ pip install -e .\n    \n\n---\n## Associated papers\n\n* [The future of European onshore wind energy potential: Detailed distribution and simulation of advanced turbine designs](https://linkinghub.elsevier.com/retrieve/pii/S0360544219311818)\n\n* [The Techno-Economic Potential of Offshore Wind Energy With Optimized Future Turbine Designs in Europe](https://www.preprints.org/manuscript/201902.0121/v1)\n\n---\n## Citation\n\nIf you decide to use this module anywhere in a published work, please kindly cite us using the following:\n\n```bibtex\n@article{Ryberg2019,\n author = {Ryberg, David Severin and Caglayan, Dilara Gulcin and Schmitt, Sabrina and Lin{\\ss}en, Jochen and Stolten, Detlef and Robinius, Martin},\n doi = {10.1016/j.energy.2019.06.052},\n issn = {03605442},\n journal = {Energy},\n month = {sep},\n pages = 
{1222--1238},\n title = {{The future of European onshore wind energy potential: Detailed distribution and simulation of advanced turbine designs}},\n url = {https://linkinghub.elsevier.com/retrieve/pii/S0360544219311818},\n volume = {182},\n year = {2019}\n}\n```\n\n---\n## License\n\nMIT License\n\nCopyright (c) 2017 David Severin Ryberg (FZJ IEK-3), Heidi Heinrichs (FZJ IEK-3), Martin Robinius (FZJ IEK-3), Detlef Stolten (FZJ IEK-3)\n\nYou should have received a copy of the MIT License along with this program. \nIf not, see \n\n## About Us \n \n\nWe are the [Process and Systems Analysis](http://www.fz-juelich.de/iek/iek-3/EN/Forschung/_Process-and-System-Analysis/_node.html) department at the [Institute of Energy and Climate Research: Electrochemical Process Engineering (IEK-3)](http://www.fz-juelich.de/iek/iek-3/EN/Home/home_node.html) belonging to the Forschungszentrum J\xc3\xbclich. Our interdisciplinary department\'s research is focusing on energy-related process and systems analyses. Data searches and system simulations are used to determine energy and mass balances, as well as to evaluate performance, emissions and costs of energy systems. The results are used for performing comparative assessment studies between the various systems. Our current priorities include the development of energy strategies, in accordance with the German Federal Government\xe2\x80\x99s greenhouse gas reduction targets, by designing new infrastructures for sustainable and secure energy supply chains and by conducting cost analysis studies for integrating new technologies into future energy market frameworks.\n\n\n## Acknowledgment\n\nThis work was supported by the Helmholtz Association under the Joint Initiative [""Energy System 2050 \xe2\x80\x93 A Contribution of the Research Field Energy""](https://www.helmholtz.de/en/research/energy/energy_system_2050/).\n\n\n'",,"2019/08/01, 10:06:34",1546,Apache-2.0,0,12,"2020/06/19, 12:12:53",1,1,1,0,1223,1,0.0,0.0,,,0,1,false,,false,false,,,https://github.com/FZJ-IEK3-VSA,https://www.fz-juelich.de/iek/iek-3/EN/Home/home_node.html,Forschungszentrum Jülich,,,https://avatars.githubusercontent.com/u/28654423?v=4,,, PyWake,An AEP calculator for wind farms implemented in Python including a collection of wake models.,TOPFARM,,custom,,Wind Energy,,,,,,,,,,https://gitlab.windenergy.dtu.dk/TOPFARM/PyWake,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,, WISDEM,Wind Plant Integrated System Design and Engineering Model.,WISDEM,https://github.com/WISDEM/WISDEM.git,github,"openmdao,systems-engineering,wisdem,wind",Wind Energy,"2023/09/27, 18:58:51",123,2,32,true,Python,WISDEM,WISDEM,"Python,C,Fortran,Meson,Shell,Makefile",https://www.nrel.gov/wind/systems-engineering.html,"b'# WISDEM®\n\n[![Actions Status](https://github.com/WISDEM/WISDEM/workflows/CI_WISDEM/badge.svg?branch=develop)](https://github.com/WISDEM/WISDEM/actions)\n[![Coverage Status](https://coveralls.io/repos/github/WISDEM/WISDEM/badge.svg?branch=develop)](https://coveralls.io/github/WISDEM/WISDEM?branch=develop)\n[![Documentation Status](https://readthedocs.org/projects/wisdem/badge/?version=master)](https://wisdem.readthedocs.io/en/master/?badge=master)\n\n\nThe Wind-Plant Integrated System Design and Engineering Model (WISDEM®) is a set of models for assessing overall wind plant cost of energy (COE). The models use wind turbine and plant cost and energy production as well as financial models to estimate COE and other wind plant system attributes. 
WISDEM® is accessed through Python, is built using [OpenMDAO](https://openmdao.org/), and uses several sub-models that are also implemented within OpenMDAO. These sub-models can be used independently but they are required to use the overall WISDEM® turbine design capability. Please install all of the pre-requisites prior to installing WISDEM® by following the directions below. For additional information about the NWTC effort in systems engineering that supports WISDEM® development, please visit the official [NREL systems engineering for wind energy website](https://www.nrel.gov/wind/systems-engineering.html).\n\nAuthor: [NREL WISDEM Team](mailto:systems.engineering@nrel.gov)\n\n## Documentation\n\nSee local documentation in the `docs`-directory or access the online version at \n\n## Packages\n\nWISDEM® is a family of modules. The core modules are:\n\n* _CommonSE_ includes several libraries shared among modules\n* _FloatingSE_ works with the floating platforms\n* _DrivetrainSE_ sizes the drivetrain and generator systems (formerly DriveSE and GeneratorSE)\n* _TowerSE_ is a tool for tower (and monopile) design\n* _RotorSE_ is a tool for rotor design\n* _NREL CSM_ is the regression-based turbine mass, cost, and performance model\n* _ORBIT_ is the process-based balance of systems cost model for offshore plants\n* _LandBOSSE_ is the process-based balance of systems cost model for land-based plants\n* _Plant_FinanceSE_ runs the financial analysis of a wind plant\n\nThe core modules draw upon some utility packages, which are typically compiled code with python wrappers:\n\n* _Airfoil Preppy_ is a tool to handle airfoil polar data\n* _CCBlade_ is the BEM module of WISDEM\n* _pyFrame3DD_ brings libraries to handle various coordinate transformations\n* _MoorPy_ is a quasi-static mooring line model\n* [_pyOptSparse_](https://github.com/mdolab/pyoptsparse) provides some additional optimization algorithms to OpenMDAO\n\n\n## Installation\n\nInstallation with [Anaconda](https://www.anaconda.com) is the recommended approach because of the ability to create self-contained environments suitable for testing and analysis. WISDEM® requires [Anaconda 64-bit](https://www.anaconda.com/distribution/). However, the `conda` command has begun to show its age and we now recommend the one-for-one replacement with `mamba` via the [Miniforge distribution](https://github.com/conda-forge/miniforge/releases), which is much more lightweight and more easily solves for the WISDEM package dependencies.\n\n### Installation as a ""library""\n\nTo use WISDEM\'s modules as a library for incorporation into other scripts or tools, WISDEM is available via `mamba install wisdem` or `pip install wisdem`, assuming that you have already setup your python environment. Note that on Windows platforms, we suggest using `conda/mamba` exclusively.\n\n### Installation for direct use\n\nThese instructions are for interaction with WISDEM directly, the use of its examples, and the direct inspection of its source code.\n\nThe installation instructions below use the environment name, ""wisdem-env,"" but any name is acceptable. For those working behind company firewalls, you may have to change the conda authentication with `conda config --set ssl_verify no`. Proxy servers can also be set with `conda config --set proxy_servers.http http://id:pw@address:port` and `conda config --set proxy_servers.https https://id:pw@address:port`. 
To setup an environment based on a different Github branch of WISDEM, simply substitute the branch name for `master` in the setup line.\n\n1. Setup and activate the Anaconda environment from a prompt (Anaconda3 Power Shell on Windows or Terminal.app on Mac)\n\n mamba config --add channels conda-forge\n mamba env create --name wisdem-env -f https://raw.githubusercontent.com/WISDEM/WISDEM/master/environment.yml python=3.10\n mamba activate wisdem-env\n\n2. In order to directly use the examples in the repository and peek at the code when necessary, we recommend all users install WISDEM in *developer / editable* mode using the instructions here. If you really just want to use WISDEM as a library and lean on the documentation, you can always do `conda install wisdem` and be done. Note the differences between Windows and Mac/Linux build systems. For Linux, we recommend using the native compilers (for example, gcc and gfortran in the default GNU suite).\n\n mamba install -y petsc4py mpi4py # (Mac / Linux only)\n mamba install -y gfortran # (Mac only without Homebrew or Macports compilers)\n mamba install -y m2w64-toolchain libpython # (Windows only)\n git clone https://github.com/WISDEM/WISDEM.git\n cd WISDEM\n python setup.py develop\t\t\t\t # Currently more reliable than: pip install -e\n\n\n**NOTE:** To use WISDEM again after installation is complete, you will always need to activate the conda environment first with `conda activate wisdem-env`\n\n\n## Run Unit Tests\n\nEach package has its own set of unit tests. These can be run in batch with the `test_all.py` script located in the top level `test`-directory.\n\n## Feedback\n\nFor software issues please use . For functionality and theory related questions and comments please use the NWTC forum for [Systems Engineering Software Questions](https://wind.nrel.gov/forum/wind/viewtopic.php?f=34&t=1002).\n'",,"2014/09/04, 20:30:24",3337,Apache-2.0,245,3597,"2023/10/17, 21:59:53",10,326,460,66,7,0,0.5,0.5296334361082562,"2023/09/27, 18:59:21",v3.11.1,0,22,false,,false,false,"DTUWindEnergy/hydesign,NREL/ROSCO_toolbox",,https://github.com/WISDEM,https://www.nrel.gov/wind/systems-engineering.html,"NREL National Wind Technology Center, Boulder, CO",,,https://avatars.githubusercontent.com/u/5444272?v=4,,, WOMBAT,Windfarm Operations & Maintenance cost-Benefit Analysis Tool.,WISDEM,https://github.com/WISDEM/WOMBAT.git,github,"simulation,wind-energy,python3,simpy,operations-maintenance",Wind Energy,"2023/09/20, 23:32:26",12,2,3,true,Python,WISDEM,WISDEM,"Python,TeX",https://wisdem.github.io/WOMBAT/,"b'# WOMBAT: Windfarm Operations & Maintenance cost-Benefit Analysis Tool\n\n[![DOI 10.2172/1894867](https://img.shields.io/badge/DOI-10.2172%2F1894867-brightgreen?link=https://doi.org/10.2172/1894867)](https://www.osti.gov/biblio/1894867)\n[![PyPI version](https://badge.fury.io/py/wombat.svg)](https://badge.fury.io/py/wombat)\n[![Apache 2.0](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](https://opensource.org/licenses/Apache-2.0)\n[![Binder](https://mybinder.org/badge_logo.svg)](https://mybinder.org/v2/gh/WISDEM/WOMBAT/main?filepath=examples)\n[![Jupyter 
Book](https://jupyterbook.org/badge.svg)](https://wisdem.github.io/WOMBAT)\n\n[![Pre-commit](https://img.shields.io/badge/pre--commit-enabled-brightgreen?logo=pre-commit&logoColor=white)](https://github.com/pre-commit/pre-commit)\n[![Black](https://img.shields.io/badge/code%20style-black-000000.svg)](https://github.com/psf/black)\n[![isort](https://img.shields.io/badge/%20imports-isort-%231674b1?style=flat&labelColor=ef8336)](https://pycqa.github.io/isort/)\n\nThis library provides a tool to simulate the operation and maintenance phase (O&M) of\ndistributed, land-based, and offshore windfarms using a discrete event simulation\nframework.\n\nWOMBAT is written around the [`SimPy`](https://gitlab.com/team-simpy/simpy) discrete\nevent simulation framework. Additionally, this is supported using a flexible and modular\nobject-oriented code base, which enables the modeling of arbitrarily large (or small)\nwindfarms with as many or as few failure and maintenance tasks as can be encoded.\n\nPlease note that this is still heavily under development, so you may find some functionality\nto be incomplete at the current moment, but rest assured the functionality is expanding.\nWith that said, it would be greatly appreciated for issues or PRs to be submitted for\nany improvements at all, from fixing typos (guaranteed to be a few) to features to\ntesting.\n\nIf you use this library please cite our NREL Technical Report:\n\n```bibtex\n   @techreport{hammond2022wombat,\n   title = {Windfarm Operations and Maintenance cost-Benefit Analysis Tool (WOMBAT)},\n   author = {Hammond, Rob and Cooperman, Aubryn},\n   abstractNote = {This report provides technical documentation and background on the newly-developed Wind Operations and Maintenance cost-Benefit Analysis Tool (WOMBAT) software. WOMBAT is an open-source model that can be used to obtain cost estimates for operations and maintenance of land-based or offshore wind power plants. The software was designed to be flexible and modular to allow for implementation of new strategies and technological innovations for wind plant maintenance. WOMBAT uses a process-based simulation approach to model day-to-day operations, repairs, and weather conditions. High-level outputs from WOMBAT, including time-based availability and annual operating costs, are found to agree with published results from other models.},\n   doi = {10.2172/1894867},\n   url = {https://www.osti.gov/biblio/1894867},\n   place = {United States},\n   year = {2022},\n   month = {10},\n   institution = {National Renewable Energy Lab. (NREL)},\n   }\n```\n\n## WOMBAT in Action\n\nThere are a few Jupyter notebooks to get users up and running with WOMBAT in the `examples/`\nfolder, but here are a few highlights:\n\n> **Note**\n> In v0.6 the results will diverge significantly under certain modeling conditions from\n> past versions due to substantial model upgrades on the backend and new/updated\n> features to better specify how repairs are managed.\n\n* Dinwoodie, et al. 
replication for `wombat` can be found in the\n  examples folder of the repository.\n* IEA Task 26\n  validation exercise.\n* Presentations: slides.\n\n\n## Setup\n\n### Requirements\n\n* Python 3.8 through 3.10\n\n> **Note**\n> For Python 3.10 users that seek to install more than the base dependencies, it has\n> been noted that pip may take a long time to resolve all of the package requirements,\n> so it is recommended to use the following workflow:\n\n```console\n# Enter the source code directory\ncd wombat/\n\n# First install the base package requirements\npip install -e .\n\n# Then install whichever additional dependencies are required/desired\npip install -e \'.[dev]\'  # \'.[docs]\' or \'.[all]\'\n```\n\n### Environment Setup\n\nDownload the latest version of [Miniconda](https://docs.conda.io/en/latest/miniconda.html)\nfor the appropriate OS. Follow the remaining\n[steps](https://conda.io/projects/conda/en/latest/user-guide/install/index.html#regular-installation)\nfor the appropriate OS version.\n\nUsing conda, create a new virtual environment:\n\n```console\nconda create -n <env-name> python=3.8 --no-default-packages\nconda activate <env-name>\nconda install -c anaconda pip\n\n# activate the environment\nconda activate <env-name>\n\n# to deactivate\nconda deactivate\n```\n\n### Installation\n\n\n#### Pip\n\n```console\npip install wombat\n```\n\n#### From Source\n\nInstall it directly into an activated virtual environment:\n\n```console\ngit clone https://github.com/WISDEM/WOMBAT.git\ncd wombat\npython setup.py install\n\n# Alternatively:\npip install .\n```\n\n#### Usage\n\nAfter installation, the package can be imported:\n\n```console\npython\nimport wombat\nwombat.__version__\n```\n\nFor further usage, please see the documentation site at https://wisdem.github.io/WOMBAT.\n\n\n### Requirements for Contributing to WOMBAT\n\n#### Code Contributions\n\nCode contributors should note that there is both an additional dependency suite for\nrunning the tests and enabling the pre-commit workflow to automatically standardize the\ncore code formatting principles.\n\n```console\ngit clone https://github.com/WISDEM/WOMBAT.git\ncd wombat\n\n# Install the additional dependencies for running the tests and automatic code formatting\npip install -e \'.[dev]\'\n\n# Enable the pre-commit workflow for automatic code formatting\npre-commit install\n\n# ... contributions and commits ...\n\n# Run the tests and ensure they all pass\npytest tests\n```\n\nBasic pre-commit issues that users might encounter and their remedies:\n\n* For any failed run, changes may have been either automatically applied or require\n  further edits from the contributor. In either case, after changes have been made,\n  contributors will have to rerun `git add <changed-files>` and\n  `git commit -m <message>` to restart the pre-commit workflow with the\n  applied changes. 
Once all checks pass, the commit is safe to be pushed.\n* `isort`, `black`, or simple file checks failed, but made changes\n  * rerun the `add` and `commit` processes as needed until the changes satisfy the checks\n* `pylint` or `flake8` failed:\n  * Address the errors and rerun the `add` and `commit` processes\n* `mypy` has type errors that seem incorrect\n  * Double check the typing is in fact as correct as it seems it should be and rerun the\n    `add` and `commit` processes\n  * If `mypy` simply seems confused with seemingly correct types, the following statement\n    can be added above the `mypy` error:\n    `assert isinstance(<variable>, <expected-type>)`\n  * If that\'s still not working, but you are definitely sure the types are correct,\n    simply add a `# type: ignore` comment at the end of the line. Sometimes `mypy` struggles\n    with complex scenarios, or especially with certain `attrs` conventions.\n\n#### Documentation Contributions\n\n```console\ngit clone https://github.com/WISDEM/WOMBAT.git\ncd wombat\npip install -e \'.[docs]\'\n```\n\nBuild the site\n\n> **Note**\n> You may want to change the ""execute_notebook"" parameter in the `conf.py` file to\n> ""off"" unless you\'re updating the coded examples or they will be run every time you\n> build the site.\n\n```console\ncd docs/\nsphinx-build -b html source _build && make html\n```\n\nView the results: `docs/_build/html/index.html`\n\n#### Code and Documentation Contributions\n\n```console\ngit clone https://github.com/WISDEM/WOMBAT.git\ncd wombat\npip install -e \'.[all]\'\n```\n'",,"2021/04/19, 18:17:42",919,Apache-2.0,74,228,"2023/10/23, 19:58:06",15,97,107,71,2,1,0.0,0.10067114093959728,"2023/08/28, 18:30:33",v0.8.1,0,3,false,,false,false,"NREL/WAVES,sevstafiev/RaifHack2021",,https://github.com/WISDEM,https://www.nrel.gov/wind/systems-engineering.html,"NREL National Wind Technology Center, Boulder, CO",,,https://avatars.githubusercontent.com/u/5444272?v=4,,, LandBOSSE,"The Land-based Balance-of-System Systems Engineering model is a systems engineering tool that estimates the balance-of-system costs associated with installing utility scale wind plants (10, 1.5 MW turbines or larger).",WISDEM,https://github.com/WISDEM/LandBOSSE.git,github,,Wind Energy,"2023/06/19, 22:12:07",16,0,3,true,Python,WISDEM,WISDEM,Python,,"b""# LandBOSSE\n\n## Welcome to LandBOSSE!\n\nThe Land-based Balance-of-System Systems Engineering (LandBOSSE) model is a systems engineering tool that estimates the balance-of-system (BOS) costs associated with installing utility scale wind plants (10, 1.5 MW turbines or larger). It can execute on macOS and Windows. At this time, for both platforms, it is a command line tool that needs to be accessed from the command line.\n\nThe methods used to develop this model (specifically, LandBOSSE Version 2.1.0) are described in greater detail in the following report:\n\nEberle, Annika, Owen Roberts, Alicia Key, Parangat Bhaskar, and Katherine Dykes.\n2019. NREL\xe2\x80\x99s Balance-of-System Cost Model for Land-Based Wind. Golden, CO:\nNational Renewable Energy Laboratory. NREL/TP-6A20-72201.\nhttps://www.nrel.gov/docs/fy19osti/72201.pdf.\n\n## User Guides\n\nFirst, read the technical report to understand the big picture of LandBOSSE. In the technical report, you will find process diagrams, equations and the modules that implement them. Then, come back to this documentation and read the user guide.\n\nIn brief, LandBOSSE takes `.xlsx` spreadsheets, reads input data from tabs on the spreadsheets, and writes the results to an output `.xlsx` file. 
There are three sections in the user guide to demonstrate how to perform these steps.\n\nThe user guide comes in three parts:\n\n1. Software installation,\n\n2. Input data configuration, and\n\n3. Output data analysis.\n\n### Software Installation\n\nThere are two options depending on whether you are a developer or an end user and what operating system you are running.\n\n- **Windows end-user**: If you run the Microsoft Windows operating system and aren't setting up as a developer who is going to be modifying the core library, these instructions are for you. [Find out how to configure Windows for end users.](installation_instructions/windows_end_user.md)\n\n- **macOS end user** and **macOS developer**: If you run the macOS operating system, either as an end-user or as a developer, these instructions are for you. Both developers and end-users will need most of the steps. [Find out how to configure macOS for end users and developers.](installation_instructions/macos_developer.md)\n\n### Operation after the installation\n\nReview the installation instructions on how to activate a virtual environment, if you haven't already.\n\nThen, read the [Operation and Folder Structure](installation_instructions/operation_and_folder_structure.md) for details on running the command that executes LandBOSSE from the command line.\n""",,"2014/10/06, 20:54:57",3305,CUSTOM,18,618,"2023/06/19, 22:12:08",22,91,168,8,127,2,0.1,0.40529531568228105,"2023/06/19, 22:13:14",v2.5.0,0,7,false,,false,false,,,https://github.com/WISDEM,https://www.nrel.gov/wind/systems-engineering.html,"NREL National Wind Technology Center, Boulder, CO",,,https://avatars.githubusercontent.com/u/5444272?v=4,,, OpenMDAO,Optimization of Aerodynamic systems.,OpenMDAO,https://github.com/OpenMDAO/OpenMDAO.git,github,"nasa,open-source,framework,openmdao,optimization",Wind Energy,"2023/10/23, 14:51:31",443,198,89,true,Python,OpenMDAO,OpenMDAO,"Python,Jupyter Notebook,JavaScript,HTML,CSS,TeX,Shell",http://openmdao.org,"b'[![GitHub Actions Test Badge][17]][18]\n[![Coveralls Badge][13]][14]\n[![PyPI version][10]][11]\n[![PyPI Monthly Downloads][12]][11]\n\n# [OpenMDAO][0]\n\nOpenMDAO is an open-source high-performance computing platform for\nsystems analysis and multidisciplinary optimization, written in Python.\nIt enables you to decompose your models, making them easier to build and\nmaintain, while still solving them in a tightly coupled manner with\nefficient parallel numerical methods.\n\nThe OpenMDAO project is primarily focused on supporting gradient-based\noptimization with analytic derivatives to allow you to explore large\ndesign spaces with hundreds or thousands of design variables, but the\nframework also has a number of parallel computing features that can\nwork with gradient-free optimization, mixed-integer nonlinear\nprogramming, and traditional design space exploration.\n\nIf you are using OpenMDAO, please [cite][20] us!\n\n## Documentation\n\nDocumentation for the latest version can be found [here][2].\n\nDocumentation archives for prior versions can be found [here][3].\n\n## Important Notice\n\nWhile the API is relatively stable, **OpenMDAO** remains in active development.\nThere will be periodic changes to the API.\nUsers are encouraged to pin their version of OpenMDAO to a recent release and\nupdate periodically.\n\n## Install OpenMDAO\n\nYou have two options for installing **OpenMDAO**, (1) from the\n[Python Package Index (PyPI)][1], and (2) from the [GitHub repository][4].\n\n**OpenMDAO** includes several optional sets of 
dependencies including:\n`test` for installing the developer tools (e.g., testing, coverage),\n`docs` for building the documentation and\n`visualization` for some extra visualization tools.\nSpecifying `all` will include all of the optional dependencies.\n\n### Install from [PyPI][1]\n\nThis is the easiest way to install **OpenMDAO**. To install only the runtime\ndependencies:\n\n pip install openmdao\n\nTo install all the optional dependencies:\n\n pip install openmdao[all]\n\n### Install from a Cloned Repository\n\nThis allows you to install **OpenMDAO** from a local copy of the source code.\n\n git clone http://github.com/OpenMDAO/OpenMDAO\n pip install OpenMDAO\n\nIf you would like to make changes to **OpenMDAO** it is recommended you\ninstall it in *[editable][16]* mode (i.e., development mode) by adding the `-e`\nflag when calling `pip`, this way any changes you make to the source code will\nbe included when you import **OpenMDAO** in *Python*. You will also want to\ninstall the packages necessary for running **OpenMDAO**\'s tests and documentation\ngenerator. You can install everything needed for development by running:\n\n pip install -e OpenMDAO[all]\n\n## OpenMDAO Versions\n\n**OpenMDAO 3.x.y** represents the current, supported version. It requires Python 3.7\nor later and is maintained [here][4]. To upgrade to the latest release, run:\n\n pip install --upgrade openmdao\n\n**OpenMDAO 2.10.x** was the last version to support Python 2.x and is no longer supported.\nTo install this older release, run:\n\n pip install ""openmdao<3""\n\n**OpenMDAO 1.7.4** was an earlier version of OpenMDAO and is also no longer supported.\nThe code repository is now named **OpenMDAO1**, and has moved [here][5]. To install it, run:\n\n pip install ""openmdao<2""\n\nThe legacy **OpenMDAO v0.x** (versions 0.13.0 and older) of the\n**OpenMDAO-Framework** are [here][6].\n\n## Test OpenMDAO\n\nUsers are encouraged to run the unit tests to ensure **OpenMDAO** is performing\ncorrectly. In order to do so, you must install the testing dependencies.\n\n1. Install **OpenMDAO** and its testing dependencies:\n\n `pip install openmdao[test]`\n\n > Alternatively, you can clone the repository, as explained\n [here](#install-from-a-cloned-repository), and install the development\n dependencies as described [here](#install-the-developer-dependencies).\n\n2. Run tests:\n\n `testflo openmdao -n 1`\n\n3. If everything works correctly, you should see a message stating that there\nwere zero failures. If the tests produce failures, you are encouraged to report\nthem as an [issue][7]. 
If so, please make sure you include your system spec,\nand include the error message.\n\n > If tests fail, please include your system information, you can obtain\n that by running the following commands in *python* and copying the results\n produced by the last line.\n\n import platform, sys\n\n info = platform.uname()\n (info.system, info.version), (info.machine, info.processor), sys.version\n\n > Which should produce a result similar to:\n\n ((\'Windows\', \'10.0.17134\'),\n (\'AMD64\', \'Intel64 Family 6 Model 94 Stepping 3, GenuineIntel\'),\n \'3.6.6 | packaged by conda-forge | (default, Jul 26 2018, 11:48:23) ...\')\n\n## Build the Documentation for OpenMDAO\n\nDocumentation for the latest version can always be found [here][2], but if you would like to build a local copy you can find instructions to do so [here][19].\n\n[0]: http://openmdao.org/ ""OpenMDAO""\n[1]: https://pypi.org/project/openmdao/ ""OpenMDAO @PyPI""\n\n[2]: http://openmdao.org/newdocs/versions/latest ""Latest Docs""\n[3]: http://openmdao.org/docs ""Archived Docs""\n\n[4]: https://github.com/OpenMDAO/OpenMDAO ""OpenMDAO Git Repo""\n[5]: https://github.com/OpenMDAO/OpenMDAO1 ""OpenMDAO 1.x Git Repo""\n[6]: https://github.com/OpenMDAO/OpenMDAO-Framework ""OpenMDAO Framework Git Repo""\n\n[7]: https://github.com/OpenMDAO/OpenMDAO/issues/new ""Make New OpenMDAO Issue""\n\n[8]: https://help.github.com/articles/changing-a-remote-s-url/ ""Update Git Remote URL""\n\n[10]: https://badge.fury.io/py/openmdao.svg ""PyPI Version""\n[11]: https://badge.fury.io/py/openmdao ""OpenMDAO @PyPI""\n\n[12]: https://img.shields.io/pypi/dm/openmdao ""PyPI Monthly Downloads""\n\n[13]: https://coveralls.io/repos/github/OpenMDAO/OpenMDAO/badge.svg?branch=master ""Coverage Badge""\n[14]: https://coveralls.io/github/OpenMDAO/OpenMDAO?branch=master ""OpenMDAO @Coveralls""\n\n[15]: https://en.wikipedia.org/wiki/Software_release_life_cycle#Beta ""Wikipedia Beta""\n\n[16]: https://setuptools.readthedocs.io/en/latest/setuptools.html#development-mode ""Pip Editable Mode""\n\n[17]: https://github.com/OpenMDAO/OpenMDAO/actions/workflows/openmdao_test_workflow.yml/badge.svg ""Github Actions Badge""\n[18]: https://github.com/OpenMDAO/OpenMDAO/actions ""Github Actions""\n\n[19]: http://openmdao.org/newdocs/versions/latest/other_useful_docs/developer_docs/doc_build.html\n\n[20]: https://openmdao.org/newdocs/versions/latest/other/citing.html\n'",,"2016/08/25, 15:53:25",2617,CUSTOM,1406,17261,"2023/10/23, 
14:51:44",63,2173,2992,404,2,2,1.4,0.6629639664047684,,,0,52,false,,false,true,"akshatd/umich-aerosp588,mid2SUPAERO/LCA4MDAO,kanekosh/Dymos_parallel_analysis,h1he2li3/MS-pycycle,dmkeijzer/AetheriaPackage,ovidner/openmdao-catia,ovidner/facit,Felix-Deng/Aircraft-Tire-Selector,jsfriedman/explore-topfarm,whatsopt/WhatsOpt-CLI,tuckerbabcock/E2M2,burn-research/OpenMEASURE,jbussemaker/OpenTurbofanArchitecting,MariusLRuh/lsdo_atmos,MariusLRuh/lift_plus_cruise_weights,JeremyDecroix/pyCycleMDP_PW1133,sherschlock/ISAE-Supaero,Team-Breakfast-Analytica/breakfast,aravindvenkatachalapathy/WIsDEM,albertoprocacci/OpenMEASURE,Scorpion-DX/bag2_test_ff,davidtls/FAST-GA,irfan-gh/FAST-OAD,666FAUST666/Mechatronics_Project_MERCIER-NOURMAMOD,antSGS/FAST-OAD_CS25,rparello/FAST-OAD,KaurCharan/FAST-GA,irfan-gh/FAST-GA,relf/FAST-OAD,marcowijaya/FAST-GA,NOTaShinigami/FAST-GA,marcowijaya/FAST-OAD,SharmaAnubhuti/FAST-GA,Yash-Malandkar/FAST-GA,fomra/FAST-GA,MRn0bod1/FAST-OAD_gui,RB-E/FAST-OAD,florentLutz/FAST-OAD_CS25,davidtls/FAST-OAD_CS25,hugogarcia06/FAST-OAD_CS25,nthepaut/FAST-OAD,esnguyenvan/FAST-OAD,ChaogeCanFly/FAST-OAD,florentLutz/FAST-GA-1,RemyCharayron/FAST-OAD,areysset/FAST-OAD-1,csar-on/FAST-OAD,jnain/FAST-OAD,PeterDezy/FAST-OAD,christophe-david/FAST-OAD,Iamkiller233/FAST-OAD,areysset/FAST-OAD,daptablade/docs,WISDEM/GeneratorSE,DeepFriedDerp/WEIS-DAC_bb_featuredevelopment,joshdowdell1/openconcept,AOE-khkhan/FAST-OAD,niftylab/BAG_framework,daoos/dymos-pre-release,DTUWindEnergy/hydesign,SizingLab/multirotor_sizing_isae_coa_2023_student_version,ElieKadoche/configuration_files,jexalto/ThesisCode,SizingLab/sizing_course,PrincetonUniversity/FAROES,zhemez/ORBIT_NREL,LSDOlab/lsdo_rotor,fast-aircraft-design/FAST-OAD,LSDOlab/ozone,SnowJeffSnow/test.1,SnowJeffSnow/test,nichco/tc1-stability,PlasmaControl/FAROES,Quentief/Rocket,kanekosh/eVTOL_sizing,Creelle/FAST-OAD,Tangxiaotian11/SEM,malmakova-na/HGV_model,tuckerbabcock/MotorModel,dingraha/omjl,mihir0210/WINDOW_static,WISDEM/LTS,rseng/rsepedia-analysis,berlinexpress174/openconcept_winter,OpenMDAO/mphys,ovidner/openmdao-deap,florentLutz/FAST-GA2-MODELS,daptablade/parametric_cgx_model,FredLin0421/4thtubes_ctr,WISDEM/CCBlade,OpenMDAO/zappy,MIT-LAE/pyNA,Atif-Aerospace/OpenAeroStructModels,fast-aircraft-design/FAST-OAD_CS25,HMDomingues/TrackTrajectoryOptimizer,Atif-Aerospace/Aero,Atif-Aerospace/WindTurbine,SizingLab/multirotor_sizing_isae_coa_2022_student_version,astridwalle/doe_visualisation_dashboard,kittyofheaven/Real-Truth-or-Dare,joeyji123/Training,fast-aircraft-design/FAST-OAD_notebooks,SEhrm/sim-comp,kejacobson/om_dash,NREL/WindSE,supaero-aircraft-design/FAST-GA,Bossgb97/CCBlade,sumukhbhoopalam/Covid-19_SAT,LSDOlab/csdl_om,UCSDMorimotoLab/CTRoptimization,anugrahjo/atomics_lite,LSDOlab/omtools,abcdefg781/f1.github.io,Mith15/ECE496_2021,LSDOlab/atomics,jiy352/doc_test,anugrahjo/array_manager,dingraha/CCBladeOpenMDAOExamples,mid2SUPAERO/RP_MAE_VictorGUADANO,ShinsukeSakai0321/Zassou,ShinsukeSakai0321/Image-Web-Apli,WISDEM/Plant_FinanceSE,Rlmuniz/toa,WISDEM/Turbine_CostsSE,NitroCortex/aerostructures,Partmedia/bag,LSDOlab/lsdo_cubesat,vgucsd/cubesat,ovidner/openmdao-nsga,mid2SUPAERO/ecoHALE,JustinSGray/path_dependent_missions,ovidner/openmdao-scop,sdaudlin/BAG_framework,WISDEM/FloatingSE,WISDEM/TowerSE,MAE155B-Group-3-SP20/Group3Repo,ocakgun/BAG_framework,WISDEM/WEIS,jcchin/boring,jcchin/RadialMotorDesign,Kenneth-T-Moore/AMIEGO,git-it/pulse_cam,johnjasa/multifidelity_studies,johnjasa/nrel_openmdao_extensions,fmsnew/nas-bench-nlp-release,Team-Neurons/Covid-Doctor,shubhamgoel90/
Covid-Doctor,kushaggarwal/Covid-Doctor,byuflowlab/CCBlade.jl,metamorph-inc/run_mdao,thearn/pandemic,onodip/OpenMDAO-XDSM,WISDEM/ORBIT,naylor-b/om_devtools,bluecheetah/bag,bbrelje/jenkins-github-ci-test,DARcorporation/airfoil-optimizer,DARcorporation/nsde,ucb-art/BAG_framework,DARcorporation/airfoil-optimizer-web-ui,OpenMDAO/pyCycle,ovidner/openmdao-bridge-excel,ovidner/openmdao-bridge-matlab,byuflowlab/OpenMDAO.jl,ovidner/openmdao-omt,metamorph-inc/openmdao-csv-driver,metamorph-inc/conductor-mdao,edwardcwang/BAG_framework,WISDEM/WISDEM,byuflowlab/gaussian-wake,whatsopt/openmdao_extensions,William-Metz/OASFORNEURALNETWORKS,mid2SUPAERO/aerostructures,daniel-de-vries/fortran-mdao,DTUWindEnergy/TopFarm2,ovidner/staircase-optigurator,mid2SUPAERO/StaticAeroelasticity-MDO-IACONO,byuflowlab/stanley2018-turbine-design,OpenMDAO/library_template,sairajk/Image-Super-Resolution-Application,mdolab/OpenAeroStruct,OpenMDAO/dymos,mushtaq96/8semproject,TOPFARM-Wind/tutorial_openmdao2,mdolab/openconcept,bbrelje/travis_tutorial,rethore/FUSED-meteor,metamorph-inc/matlab_wrapper,OpenMDAO/flops_wrapper,daniel-de-vries/OpenLEGO,jarrodsinclair/doegen,metamorph-inc/fmu_wrapper,metamorph-inc/bayesopt_openmdao,jcchin/MagnePlane,OpenMDAO/CADRE,byuflowlab/PlantEnergy,OpenMDAO/NRELTraining,naylor-b/aserver",,https://github.com/OpenMDAO,http://www.openmdao.org,,,,https://avatars.githubusercontent.com/u/861615?v=4,,, TopFarm2,A Python package developed by DTU Wind Energy to help with wind-farm optimizations.,TOPFARM,,custom,,Wind Energy,,,,,,,,,,https://gitlab.windenergy.dtu.dk/TOPFARM/TopFarm2,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,, BasicDTUController,"The scope of this project is to provide an open source, open access controller that can be used by the wind energy community as a reference.",OpenLAC,,custom,,Wind Energy,,,,,,,,,,https://gitlab.windenergy.dtu.dk/OpenLAC/BasicDTUController,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,, WindEnergyToolbox,"A collection of Python scripts that facilitate working with (potentially a lot) of HAWC2,HAWCStab2, FAST or other text input based simulation tools.",toolbox,,custom,,Wind Energy,,,,,,,,,,https://gitlab.windenergy.dtu.dk/toolbox/WindEnergyToolbox,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,, windfarmGA,Genetic algorithm to optimize the layout of wind farms.,YsoSirius,https://github.com/YsoSirius/windfarmGA.git,github,,Wind Energy,"2023/04/30, 22:11:24",26,0,6,true,R,,,"R,C++",https://ysosirius.github.io/windfarmGA/,"b'# windfarmGA\n\n
\n\n\n[![](https://www.r-pkg.org/badges/version/windfarmGA)](https://www.r-pkg.org/pkg/windfarmGA)\n\n[![R build status](https://github.com/YsoSirius/windfarmGA/workflows/R-CMD-check/badge.svg)](https://github.com/YsoSirius/windfarmGA/actions)\n[![lifecycle](https://img.shields.io/badge/lifecycle-stable-brightgreen.svg)](https://lifecycle.r-lib.org/articles/stages.html)\n[![codecov](https://codecov.io/gh/YsoSirius/windfarmGA/branch/master/graph/badge.svg)](https://app.codecov.io/gh/YsoSirius/windfarmGA)\n\n\n\n\nGenetic algorithm to optimize the layout of windfarms.\nThe package is hosted on [CRAN](https://CRAN.R-project.org/package=windfarmGA)\n\n# Installation\nThe latest version can be installed from GitHub with:\n```sh\ndevtools::install_github(""YsoSirius/windfarmGA"")\n```\n\nand the CRAN-version with:\n```sh\ninstall.packages(""windfarmGA"")\n```\n\n# Description\nThe genetic algorithm is designed to optimize wind farms of any shape.\nIt requires a predefined number of turbines, a uniform rotor radius and \nan average wind speed per wind direction.\nIt can include a terrain effect model, which downloads an \n\'SRTM\' elevation model and a \'Corine Land Cover\' raster automatically. The elevation \nmodel is used to find mountains and valleys and to adjust the \nwind speeds accordingly by \'wind multipliers\' and to determine \nthe air densities at rotor heights. The land cover raster with an additional elevation\nroughness value is used to re-evaluate the surface roughness and to individually\ndetermine the wake-decay constant for each turbine.\n\nTo start an optimization, either the function `windfarmGA` or `genetic_algorithm` can \nbe used. The function `windfarmGA` checks the user inputs interactively and then \nruns the function `genetic_algorithm`. If the input parameters are already known, an \noptimization can be run directly via `genetic_algorithm`. \n\n
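\nTo make the two entry points concrete, below is a minimal sketch of a direct, non-interactive run. It reuses the polygon and wind-data patterns from the examples further down this README; any tuning argument not set here is assumed to fall back to the package defaults (the full calls later in this README set them explicitly).\n\n```r\n# Minimal direct call to genetic_algorithm(); windfarmGA() would first\n# check these inputs interactively and then run the same routine.\nlibrary(sf)\nlibrary(windfarmGA)\n\n# Simple 2 km x 2 km polygon in a metric CRS (EPSG:3035)\nPolygon1 <- sf::st_as_sf(sf::st_sfc(\n  sf::st_polygon(list(cbind(\n    c(0, 0, 2000, 2000, 0),\n    c(0, 2000, 2000, 0, 0)))),\n  crs = 3035\n))\n\n# Uniform 12 m/s wind from a single northerly direction\nwind_df <- data.frame(ws = 12, wd = 0)\n\nresult <- genetic_algorithm(Polygon1 = Polygon1, n = 12, Rotor = 20,\n                            fcrR = 5, iteration = 10, vdirspe = wind_df,\n                            referenceHeight = 50, RotorHeight = 100)\n```\n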
\n\nSince version 1.1, hexagonal grid cells are possible, with \ntheir center points being possible locations for wind turbines. \nFurthermore, rasters can be included, which contain information on the Weibull \nparameters. For Austria this data is already included in the package. \n \n## Create an input Polygon\n- Input Polygon by source\n```sh\nlibrary(sf)\ndsn <- ""Path to the Shapefile""\nlayer <- ""Name of the Shapefile""\nPolygon1 <- sf::st_read(dsn = dsn, layer = layer)\nplot(Polygon1, col = ""blue"")\n```\n\n- Or create a random Polygon\n```sh\nlibrary(sf)\nPolygon1 <- sf::st_as_sf(sf::st_sfc(\n  sf::st_polygon(list(cbind(\n    c(0, 0, 2000, 2000, 0),\n    c(0, 2000, 2000, 0, 0)))),\n  crs = 3035\n))\nplot(Polygon1, col = ""blue"", axes = TRUE)\n```\n\n## Create random Wind data \n- Exemplary input Wind data with *uniform* wind speed and *single* wind direction\n```sh\nwind_df <- data.frame(ws = c(12, 12), wd = c(0, 0), probab = c(25, 25))\nwindrosePlot <- plot_windrose(data = wind_df, spd = wind_df$ws,\n                              dir = wind_df$wd, dirres = 10, spdmax = 20)\n```\n\n- Exemplary input Wind data with *random* wind speeds and *random* wind directions\n```sh\nwind_df <- data.frame(ws = sample(1:25, 10), wd = sample(1:260, 10))\nwindrosePlot <- plot_windrose(data = wind_df, spd = wind_df$ws,\n                              dir = wind_df$wd)\n```\n\n## Grid Spacing\n### Rectangular Grid Cells\nVerify that the grid spacing is appropriate. Adapt the following input variables if necessary:\n- *Rotor*: The rotor radius in meters.\n- *fcrR*: The grid spacing factor, which should be at least 2, so that a single grid cell covers at least the whole rotor diameter.\n- *prop*: The proportionality factor used for grid calculation. It determines the minimum percentage of a grid cell that must overlap the area for the cell to be included.\n\n*Make sure that the Polygon is projected in meters.*\n```sh\nRotor <- 20\nfcrR <- 9\nGrid <- grid_area(Polygon1, size = (Rotor * fcrR), prop = 1, plotGrid = TRUE)\nstr(Grid)\n```\n### Hexagonal Grid Cells\n```sh\nRotor <- 20\nfcrR <- 9\nHexGrid <- hexa_area(Polygon1, size = (Rotor * fcrR), plotGrid = TRUE)\nstr(HexGrid)\n```\n
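\nAs a quick sanity check on the spacing rule above (plain arithmetic, not a package function): the resolution passed as `size` is simply `Rotor * fcrR` in meters, so the values used here yield 180 m grid cells.\n\n```r\nRotor <- 20                  # rotor radius in m\nfcrR <- 9                    # grid spacing factor\ncell_size <- Rotor * fcrR    # 180 m, the `size` passed to grid_area()/hexa_area()\ncell_size >= 2 * Rotor       # TRUE: fcrR >= 2 guarantees a cell at least one rotor diameter (40 m) wide\n```\n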
\n\n\n## Terrain Effect Model\nIf the input variable **topograp** for the functions `windfarmGA` or `genetic_algorithm` is TRUE, the genetic algorithm will take terrain effects into account. For this purpose an elevation model and a Corine Land Cover raster are downloaded automatically, but can also be given manually. ( [Download a CLC raster](https://www.eea.europa.eu/data-and-maps/data/clc-2006-raster-4) ).\n\nIf you want to include your own Land Cover Raster, you must assign the Raster Image path to the input variable **sourceCCL**. The algorithm uses an adapted version of the Raster legend (""clc_legend.csv""), which is stored in the package subdirectory (/extdata). To use your own values for the land cover roughness lengths, insert a column named **Rauhigkeit_z** into the .csv file. Assign a surface roughness length to all land cover types. \nBe sure that all rows are filled with numeric values and save the .csv file with a "";"" delimiter. Assign the .csv file path to the input variable **sourceCCLRoughness**.\n\n\n## Start an Optimization\nAn optimization run can be initiated with the following functions: \n- genetic_algorithm\n- windfarmGA\n\n### Function calls for windfarmGA\n- without terrain effects\n```sh\nresult <- windfarmGA(Polygon1 = Polygon1, n = 12, Rotor = 20, fcrR = 9, iteration = 10,\n                     vdirspe = wind_df, crossPart1 = ""EQU"", selstate = ""FIX"", mutr = 0.8,\n                     Proportionality = 1, SurfaceRoughness = 0.3, topograp = FALSE,\n                     elitism = TRUE, nelit = 7, trimForce = TRUE,\n                     referenceHeight = 50, RotorHeight = 100)\n```\n\n- with terrain effects\n```sh\nsourceCCL <- ""Source of the CCL raster (TIF)""\nsourceCCLRoughness <- ""Source of the Adapted CCL legend (CSV)""\n\nresult <- windfarmGA(Polygon1 = Polygon1, n = 12, Rotor = 20, fcrR = 9, iteration = 10,\n                     vdirspe = wind_df, crossPart1 = ""EQU"", selstate = ""FIX"", mutr = 0.8,\n                     Proportionality = 1, SurfaceRoughness = 0.3, topograp = TRUE,\n                     elitism = TRUE, nelit = 7, trimForce = TRUE,\n                     referenceHeight = 50, RotorHeight = 100, sourceCCL = sourceCCL,\n                     sourceCCLRoughness = sourceCCLRoughness)\n```\n\n### Function calls for genetic_algorithm\n- without terrain effects\n```sh\nresult <- genetic_algorithm(Polygon1 = Polygon1, n = 12, Rotor = 20, fcrR = 9, iteration = 10,\n                            vdirspe = wind_df, crossPart1 = ""EQU"", selstate = ""FIX"", mutr = 0.8,\n                            Proportionality = 1, SurfaceRoughness = 0.3, topograp = FALSE,\n                            elitism = TRUE, nelit = 7, trimForce = TRUE,\n                            referenceHeight = 50, RotorHeight = 100)\n```\n\n- with terrain effects\n```sh\nsourceCCL <- ""Source of the CCL raster (TIF)""\nsourceCCLRoughness <- ""Source of the Adapted CCL legend (CSV)""\nresult <- genetic_algorithm(Polygon1 = Polygon1, n = 12, Rotor = 20, fcrR = 9, iteration = 10,\n                            vdirspe = wind_df, crossPart1 = ""EQU"", selstate = ""FIX"", mutr = 0.8,\n                            Proportionality = 1, SurfaceRoughness = 0.3, topograp = TRUE,\n                            elitism = TRUE, nelit = 7, trimForce = TRUE,\n                            referenceHeight = 50, RotorHeight = 100, sourceCCL = sourceCCL,\n                            sourceCCLRoughness = sourceCCLRoughness)\n```\n\n```sh\n## Run an optimization with your own Weibull parameter rasters. The shape and scale \n## parameter rasters of the weibull distributions must be added to a list, with the first\n## list item being the shape parameter (k) and the second list item being the scale\n## parameter (a). 
Adapt the paths to your raster data and run an optimization.\nkraster <- ""/..pathto../k_param_raster.tif""\naraster <- ""/..pathto../a_param_raster.tif""\nweibullrasters <- list(raster(kraster), raster(araster))\n\nresult_weibull <- genetic_algorithm(Polygon1 = Polygon1, GridMethod = ""h"", n = 12,\n                                    fcrR = 5, iteration = 10, vdirspe = wind_df, crossPart1 = ""EQU"",\n                                    selstate = ""FIX"", mutr = 0.8, Proportionality = 1, Rotor = 30,\n                                    SurfaceRoughness = 0.3, topograp = FALSE,\n                                    elitism = TRUE, nelit = 7, trimForce = TRUE,\n                                    referenceHeight = 50, RotorHeight = 100,\n                                    weibull = TRUE, weibullsrc = weibullrasters)\nplot_windfarmGA(result = result_weibull, Polygon1 = Polygon1)\n```\nThe arguments **GridMethod**, **weibull** and **weibullsrc** can also be given to the function `windfarmGA`.\n\n#### Plot the Results on a Leaflet Map\n```sh\n## Plot the best wind farm on a leaflet map (ordered by energy values)\nplot_leaflet(result = result, Polygon1, which = 1)\n\n## Plot the last wind farm (ordered by chronology).\nplot_leaflet(result = result, Polygon1, orderitems = FALSE, which = 1)\n```\n\n## Plotting Methods of the Genetic Algorithm \nSeveral plotting functions are available:\n```sh\n - plot_windfarmGA(result, Polygon1)\n - plot_result(result, Polygon1, best = 1)\n - plot_evolution(result, ask = TRUE, spar = 0.1)\n - plot_development(result)\n - plot_parkfitness(result, spar = 0.1)\n - plot_fitness_evolution(result)\n - plot_cloud(result, pl = TRUE)\n - plot_heatmap(result = result, si = 5)\n - plot_leaflet(result = result, Polygon1 = Polygon1, which = 1)\n```\n\nA full documentation of the genetic algorithm is given in my [master thesis](https://homepage.boku.ac.at/jschmidt/TOOLS/Masterarbeit_Gatscha.pdf).\n\n# Shiny Windfarm Optimization\nI also made a [Shiny App](https://windfarmga.shinyapps.io/windga_shiny/) for the Genetic Algorithm. \nUnfortunately, as an optimization takes quite some time and the app is currently hosted by shinyapps.io under a public license, there is only 1 R-worker at hand. So only 1 optimization can be run at a time. 
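\nRelatedly, for the Terrain Effect Model above, here is a hedged sketch of preparing an adapted CLC legend from R. It assumes the legend ships at `extdata/clc_legend.csv` as described in that section; the constant roughness value is purely illustrative and real values must be chosen per land cover class.\n\n```r\n# Locate the legend shipped with the package (path assumed per the\n# Terrain Effect Model section; adjust if your installation differs)\nlegend_path <- system.file(""extdata"", ""clc_legend.csv"", package = ""windfarmGA"")\nclc_legend <- read.csv2(legend_path)    # read.csv2 expects the "";"" delimiter\n\n# Add the required Rauhigkeit_z column; 0.3 m is an illustrative placeholder\nclc_legend$Rauhigkeit_z <- 0.3\n\n# Save with a "";"" delimiter and point sourceCCLRoughness at the result\nwrite.table(clc_legend, ""clc_legend_adapted.csv"", sep = "";"", row.names = FALSE)\nsourceCCLRoughness <- ""clc_legend_adapted.csv""\n```\n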
\n\n# Full Optimization example:\n```sh\nlibrary(sf)\nlibrary(windfarmGA)\n\nPolygon1 <- sf::st_as_sf(sf::st_sfc(\n  sf::st_polygon(list(cbind(\n    c(4651704, 4651704, 4654475, 4654475, 4651704),\n    c(2692925, 2694746, 2694746, 2692925, 2692925)))), \n  crs = 3035\n))\nplot(Polygon1, col = ""blue"", axes = TRUE)\n\nwind_df <- data.frame(ws = 12, wd = 0)\nwindrosePlot <- plot_windrose(data = wind_df, spd = wind_df$ws,\n                              dir = wind_df$wd, dirres = 10, spdmax = 20)\nRotor <- 20\nfcrR <- 9\nGrid <- grid_area(shape = Polygon1, size = (Rotor*fcrR), prop = 1, plotGrid = TRUE)\n\nresult <- genetic_algorithm(Polygon1 = Polygon1, \n                            n = 20,\n                            Rotor = Rotor, fcrR = fcrR, \n                            iteration = 50, \n                            vdirspe = wind_df,\n                            referenceHeight = 50, RotorHeight = 100)\n\n# The following function will execute all plotting functions listed below:\nplot_windfarmGA(result, Polygon1, whichPl = ""all"", best = 1, plotEn = 1)\n\n# The plotting functions can also be called individually:\nplot_result(result, Polygon1, best = 1, plotEn = 1, topographie = FALSE)\nplot_evolution(result, ask = TRUE, spar = 0.1)\nplot_parkfitness(result, spar = 0.1)\nplot_fitness_evolution(result)\nplot_cloud(result, pl = TRUE)\nplot_heatmap(result = result, si = 5)\nplot_leaflet(result = result, Polygon1 = Polygon1, which = 1)\n```\n'",,"2017/02/17, 16:56:52",2441,CUSTOM,18,390,"2023/05/02, 07:10:37",4,25,32,6,176,1,0.0,0.4828571428571429,"2021/05/06, 12:45:22",3.0.0,0,2,false,,false,false,,,,,,,,,,, wtphm,"The Wind Turbine Prognostics and Health Management library processes wind turbine events data, as well as operational SCADA data for easier fault detection, prognostics or reliability research.",lkev,https://github.com/lkev/wtphm.git,github,"wind-turbine,wind-energy,fault-detection,machine-learning,scada",Wind Energy,"2021/01/07, 17:46:26",53,0,12,false,Python,,,Python,,"b'.. comment\n\nWTPHM\n*****\n\nThe **W**\\ind **T**\\urbine **P**\\rognostics and **H**\\ealth **M**\\anagement library\nprocesses wind turbine events (also called alarms or status) data, as well as\noperational SCADA data (the usually 10-minute data coming off of wind turbines)\nfor easier fault detection, prognostics or reliability research.\n\nTurbine alarms often appear in high numbers during fault events, and significant\neffort can be involved in processing these alarms in order to find what actually\nhappened, what the root cause was, and when the turbine came back online.\nThis module solves this by automatically identifying stoppages and fault periods\nin the data and assigning a high-level ""stoppage category"" to each.\nIt also provides functionality to use this info to label SCADA data for training\npredictive maintenance algorithms.\n\nAlthough there are commercial packages that can perform this task, this library\naims to be an open-source alternative for use by the research community.\n\nPlease reference this repo if used in any research. Any bugs, questions or\nfeature requests can be raised on GitHub. You can also reach me on Twitter\n@leahykev.\n\nThis library was used to build the ""batch creation"" and ""data labelling"" steps of `this paper `_.\n\nInstallation\n============\n\nInstall using pip! 
::\n\n pip install wtphm\n\nDocumentation\n=============\n\nFull documentation and user guide can be found on\n`readthedocs `_.\n\nA local copy of the docs can be built with Sphinx installed.\n\nIs my Data Compatible?\n======================\n\nThe data manipulated in this library are turbine events/status/alarms data and\n10-minute operational SCADA data.\nThey must be in the formats described below.\n\nEvent Data\n----------\n\n.. start event comment\n\nThe ``event_data`` is related to any fault or information messages generated by\nthe turbine. These messages are instantaneous, and record information like faults that have\noccurred, or status messages like low or no wind, or the turbine shutting down due\nto storm winds.\n\nThe data must have the following column headers and information available:\n\n* ``turbine_num``: The turbine the data applies to\n* ``code``: There is a set list of events which can occur on the\n turbine. Each one of these has an event code\n* ``description``: Each event code also has an associated description\n* ``time_on``: The start time of the event\n* ``stop_cat``: This is a category for events which cause the turbine to come to\n a stop. It could be the functional location of where in the turbine the event\n originated (e.g. pitch system), a category for grid-related events,\n that the turbine is down for testing or maintenance, in curtailment due to\n shadow flicker, etc.\n* In addition, there must be a specific event ``code`` which signifies return to\n normal operation after any downtime or abnormal operating period.\n\n.. end event comment\n\nSCADA/Operational data\n----------------------\n\n.. start scada comment\n\nThe ``scada_data`` is typically recorded in 10-minute intervals and has attributes like\naverage power output, maximum, minimum and average windspeeds, etc. over the previous\n10-minute period.\n\nFor the purposes of this library, it must have the following column headers and\ndata:\n\n* ``turbine_num``: The turbine the data applies to\n* ``time``: The 10-minute period the data belongs to\n* availability counters: Some of the functions for giving the batches a stop\n category rely on availability counters. These are sometimes stored as part of\n scada data, and sometimes in separate availability data. They count the portion\n of time the turbine was in some mode of operation in each 10-minute period,\n for availability calculations. For example, maintenance time, fault time, etc.\n In order to be used in this library, the availability counters are\n assumed to range between 0 and\n *n* in each period, where *n* is some arbitrary maximum (typically 600, for\n the 600 seconds in the 10-minute period).\n\n.. 
end scada comment\n'",,"2018/08/22, 18:39:45",1890,GPL-3.0,0,70,"2020/06/20, 14:58:49",2,1,3,0,1222,0,0.0,0.0,,,0,1,false,,false,false,,,,,,,,,,, AirfoilPreppy,"A Python module for pre-processing and evaluating aerodynamic airfoil data, primarily for wind turbine applications.",WISDEM,https://github.com/WISDEM/AirfoilPreppy.git,github,,Wind Energy,"2023/10/22, 02:40:21",31,0,4,true,Python,WISDEM,WISDEM,Python,,"b'# Airfoil Preppy\n\nA Python module for preprocessing and evaluating aerodynamic airfoil data---primarily for wind turbine applications.\n\nAuthor: [NREL WISDEM Team](mailto:systems.engineering@nrel.gov) \n\n## Documentation\n\nFor detailed documentation see \n\n## Installation\n\nFor detailed installation instructions of WISDEM modules see or to install AirfoilPreppy by itself do:\n\n $ python setup.py install\n\n## Unit Tests\n\nTo check for successful installation, run the unit tests\n\n $ python test/test_airfoilprep.py\n\nFor software issues please use . For functionality and theory related questions and comments please use the NWTC forum for [Systems Engineering Software Questions](https://wind.nrel.gov/forum/wind/viewtopic.php?f=34&t=1002).\n\n'",,"2013/12/13, 22:21:11",3602,CUSTOM,1,44,"2020/06/11, 20:16:59",1,10,19,0,1230,0,0.0,0.6470588235294117,,,0,7,false,,false,false,,,https://github.com/WISDEM,https://www.nrel.gov/wind/systems-engineering.html,"NREL National Wind Technology Center, Boulder, CO",,,https://avatars.githubusercontent.com/u/5444272?v=4,,, GreenGuard,A collection of end-to-end solutions for machine learning problems commonly found in monitoring wind energy production system.,signals-dev,https://github.com/sintel-dev/Draco.git,github,"classification,machine-learning,time-series",Wind Energy,"2023/07/31, 15:36:01",49,0,13,true,Jupyter Notebook,The Signal Intelligence Project,sintel-dev,"Jupyter Notebook,Python,Makefile,Dockerfile",https://sintel-dev.github.io/Draco,"b'

\n\nAn open source project from Data to AI Lab at MIT.\n
\nAutoML for Time Series.\n
\n\n\n[![PyPI Shield](https://img.shields.io/pypi/v/draco-ml.svg)](https://pypi.python.org/pypi/draco-ml)\n[![Tests](https://github.com/sintel-dev/Draco/workflows/Run%20Tests/badge.svg)](https://github.com/sintel-dev/Draco/actions?query=workflow%3A%22Run+Tests%22+branch%3Amaster)\n[![Downloads](https://pepy.tech/badge/draco-ml)](https://pepy.tech/project/draco-ml)\n[![Binder](https://mybinder.org/badge_logo.svg)](https://mybinder.org/v2/gh/sintel-dev/Draco/master?filepath=tutorials)\n\n\n# Draco\n\n- License: [MIT](https://github.com/sintel-dev/Draco/blob/master/LICENSE)\n- Documentation: https://sintel-dev.github.io/Draco\n- Homepage: https://github.com/sintel-dev/Draco\n\n## Overview\n\nThe Draco project is a collection of end-to-end solutions for machine learning problems\ncommonly found in time series monitoring systems. Most tasks utilize sensor data\nemanating from monitoring systems. We utilize the foundational innovations developed for\nautomation of machine Learning at Data to AI Lab at MIT.\n\nThe salient aspects of this customized project are:\n\n* A set of ready to use, well tested pipelines for different machine learning tasks. These are\n vetted through testing across multiple publicly available datasets for the same task.\n* An easy interface to specify the task, pipeline, and generate results and summarize them.\n* A production ready, deployable pipeline.\n* An easy interface to ``tune`` pipelines using Bayesian Tuning and Bandits library.\n* A community oriented infrastructure to incorporate new pipelines.\n* A robust continuous integration and testing infrastructure.\n* A ``learning database`` recording all past outcomes --> tasks, pipelines, outcomes.\n\n## Resources\n\n* [Data Format](DATA_FORMAT.md).\n* [Draco folder structure](DATA_FORMAT.md#folder-structure).\n\n# Install\n\n## Requirements\n\n**Draco** has been developed and runs on Python 3.6, 3.7 and 3.8.\n\nAlso, although it is not strictly required, the usage of a [virtualenv](\nhttps://virtualenv.pypa.io/en/latest/) is highly recommended in order to avoid interfering\nwith other software installed in the system where you are trying to run **Draco**.\n\n## Download and Install\n\n**Draco** can be installed locally using [pip](https://pip.pypa.io/en/stable/) with\nthe following command:\n\n```bash\npip install draco-ml\n```\n\nThis will pull and install the latest stable release from [PyPi](https://pypi.org/).\n\nIf you want to install from source or contribute to the project please read the\n[Contributing Guide](https://sintel-dev.github.io/Draco/contributing.html#get-started).\n\n# Data Format\n\nThe minimum input expected by the **Draco** system consists of the following two elements,\nwhich need to be passed as `pandas.DataFrame` objects:\n\n## Target Times\n\nA table containing the specification of the problem that we are solving, which has three\ncolumns:\n\n* `turbine_id`: Unique identifier of the turbine which this label corresponds to.\n* `cutoff_time`: Time associated with this target\n* `target`: The value that we want to predict. This can either be a numerical value or a\n categorical label. 
This column can also be skipped when preparing data that will be used\n only to make predictions and not to fit any pipeline.\n\n| | turbine_id | cutoff_time | target |\n|----|--------------|---------------------|----------|\n| 0 | T1 | 2001-01-02 00:00:00 | 0 |\n| 1 | T1 | 2001-01-03 00:00:00 | 1 |\n| 2 | T2 | 2001-01-04 00:00:00 | 0 |\n\n## Readings\n\nA table containing the signal data from the different sensors, with the following columns:\n\n * `turbine_id`: Unique identifier of the turbine which this reading comes from.\n * `signal_id`: Unique identifier of the signal which this reading comes from.\n * `timestamp (datetime)`: Time when the reading took place, as a datetime.\n * `value (float)`: Numeric value of this reading.\n\n| | turbine_id | signal_id | timestamp | value |\n|----|--------------|-------------|---------------------|---------|\n| 0 | T1 | S1 | 2001-01-01 00:00:00 | 1 |\n| 1 | T1 | S1 | 2001-01-01 12:00:00 | 2 |\n| 2 | T1 | S1 | 2001-01-02 00:00:00 | 3 |\n| 3 | T1 | S1 | 2001-01-02 12:00:00 | 4 |\n| 4 | T1 | S1 | 2001-01-03 00:00:00 | 5 |\n| 5 | T1 | S1 | 2001-01-03 12:00:00 | 6 |\n| 6 | T1 | S2 | 2001-01-01 00:00:00 | 7 |\n| 7 | T1 | S2 | 2001-01-01 12:00:00 | 8 |\n| 8 | T1 | S2 | 2001-01-02 00:00:00 | 9 |\n| 9 | T1 | S2 | 2001-01-02 12:00:00 | 10 |\n| 10 | T1 | S2 | 2001-01-03 00:00:00 | 11 |\n| 11 | T1 | S2 | 2001-01-03 12:00:00 | 12 |\n\n## Turbines\n\nOptionally, a third table can be added containing metadata about the turbines.\nThe only requirement for this table is to have a `turbine_id` field, and it can have\nan arbitrary number of additional fields.\n\n| | turbine_id | manufacturer | ... | ... | ... |\n|----|--------------|----------------|-------|-------|-------|\n| 0 | T1 | Siemens | ... | ... | ... |\n| 1 | T2 | Siemens | ... | ... | ... |\n\n## CSV Format\n\nApart from the in-memory data format explained above, which is limited by the memory\nallocation capabilities of the system where it is run, **Draco** is also prepared to\nload and work with data stored as a collection of CSV files, drastically increasing the amount\nof data which it can work with. Further details about this format can be found in the\n[project documentation site](DATA_FORMAT.md#csv-format).\n\n# Quickstart\n\nIn this example we will load some demo data and classify it using a **Draco Pipeline**.\n\n
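As a reminder of the expected input format, the two required tables can also be assembled by hand as `pandas.DataFrame` objects. The sketch below uses the toy values from the sample tables above and is purely illustrative; it is not a Draco API call:

```python3
import pandas as pd

# Toy target_times table, mirroring the sample rows above.
target_times = pd.DataFrame({
    \'turbine_id\': [\'T1\', \'T1\', \'T2\'],
    \'cutoff_time\': pd.to_datetime([\'2001-01-02\', \'2001-01-03\', \'2001-01-04\']),
    \'target\': [0, 1, 0],
})

# Toy readings table: one signal on one turbine, sampled every 12 hours.
readings = pd.DataFrame({
    \'turbine_id\': [\'T1\'] * 4,
    \'signal_id\': [\'S1\'] * 4,
    \'timestamp\': pd.to_datetime([\'2001-01-01 00:00\', \'2001-01-01 12:00\',
                                 \'2001-01-02 00:00\', \'2001-01-02 12:00\']),
    \'value\': [1.0, 2.0, 3.0, 4.0],
})
```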
## 1. Load and split the demo data\n\nThe first step is to load the demo data.\n\nFor this, we will import and call the `draco.demo.load_demo` function without any arguments:\n\n```python3\nfrom draco.demo import load_demo\n\ntarget_times, readings = load_demo()\n```\n\nThe returned objects are:\n\n* ``target_times``: A ``pandas.DataFrame`` with the ``target_times`` table data:\n\n ```\n turbine_id cutoff_time target\n 0 T001 2013-01-12 0\n 1 T001 2013-01-13 0\n 2 T001 2013-01-14 0\n 3 T001 2013-01-15 1\n 4 T001 2013-01-16 0\n ```\n\n* ``readings``: A ``pandas.DataFrame`` containing the time series data in the format explained above.\n\n ```\n turbine_id signal_id timestamp value\n 0 T001 S01 2013-01-10 323.0\n 1 T001 S02 2013-01-10 320.0\n 2 T001 S03 2013-01-10 284.0\n 3 T001 S04 2013-01-10 348.0\n 4 T001 S05 2013-01-10 273.0\n ```\n\nOnce we have loaded the `target_times` and before proceeding to training any Machine Learning\nPipeline, we will split them into 2 partitions for training and testing.\n\nIn this case, we will split them using the [train_test_split function from scikit-learn](\nhttps://scikit-learn.org/stable/modules/generated/sklearn.model_selection.train_test_split.html),\nbut it can be done with any other suitable tool.\n\n```python3\nfrom sklearn.model_selection import train_test_split\n\ntrain, test = train_test_split(target_times, test_size=0.25, random_state=0)\n```\n\nNotice how we are only splitting the `target_times` data and not the `readings`.\nThis is because the pipelines will later on take care of selecting the parts of the\n`readings` table needed for the training based on the information found inside\nthe `train` and `test` inputs.\n\nAdditionally, if we want to calculate a goodness-of-fit score later on, we can separate the\ntesting target values from the `test` table by popping them from it:\n\n```python3\ntest_targets = test.pop(\'target\')\n```\n\n## 2. Exploring the available Pipelines\n\nOnce we have the data ready, we need to find a suitable pipeline.\n\nThe list of available Draco Pipelines can be obtained using the `draco.get_pipelines`\nfunction.\n\n```python3\nfrom draco import get_pipelines\n\npipelines = get_pipelines()\n```\n\nThe returned `pipelines` variable will be a `list` containing the names of all the pipelines\navailable in the Draco system:\n\n```\n[\'lstm\',\n \'lstm_with_unstack\',\n \'double_lstm\',\n \'double_lstm_with_unstack\']\n```\n\nFor the rest of this tutorial, we will select and use the pipeline\n`lstm_with_unstack` as our template.\n\n```python3\npipeline_name = \'lstm_with_unstack\'\n```\n\n## 3. Fitting the Pipeline\n\nOnce we have loaded the data and selected the pipeline that we will use, we have to\nfit it.\n\nFor this, we will create an instance of a `DracoPipeline` object passing the name\nof the pipeline that we want to use:\n\n```python3\nfrom draco.pipeline import DracoPipeline\n\npipeline = DracoPipeline(pipeline_name)\n```\n\nAnd then we can directly fit it to our data by calling its `fit` method and passing in the\ntraining `target_times` and the complete `readings` table:\n\n```python3\npipeline.fit(train, readings)\n```\n\n## 4. Make predictions\n\nAfter fitting the pipeline, we are ready to make predictions on new data by calling the\n`pipeline.predict` method passing the testing `target_times` and, again, the complete\n`readings` table.\n\n```python3\npredictions = pipeline.predict(test, readings)\n```\n\n## 5. 
Evaluate the goodness-of-fit\n\nFinally, after making predictions we can evaluate how good the prediction was\nusing any suitable metric.\n\n```python3\nfrom sklearn.metrics import f1_score\n\nf1_score(test_targets, predictions)\n```\n\n## What\'s next?\n\nFor more details about **Draco** and all its possibilities and features, please check the\n[project documentation site](https://sintel-dev.github.io/Draco/).\nAlso do not forget to have a look at the [tutorials](\nhttps://github.com/sintel-dev/Draco/tree/master/tutorials)!\n'",,"2018/09/27, 14:50:42",1854,MIT,11,294,"2023/07/24, 12:12:04",6,54,71,7,93,0,0.8,0.5914893617021277,"2023/07/31, 16:02:44",v0.3.0,0,5,false,,false,true,,,https://github.com/sintel-dev,https://dai.lids.mit.edu/,,,,https://avatars.githubusercontent.com/u/13336772?v=4,,, pyconturb,Constrained Stochastic Turbulence for Wind Energy Applications.,pyconturb,,custom,,Wind Energy,,,,,,,,,,https://gitlab.windenergy.dtu.dk/pyconturb/pyconturb,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,, ORBIT,Offshore Renewable Balance-of-system Installation Tool computes capital costs and activity times for offshore wind plant balance-of-system (everything besides the turbine) costs.,WISDEM,https://github.com/WISDEM/ORBIT.git,github,,Wind Energy,"2023/04/21, 15:30:22",14,0,5,true,Python,WISDEM,WISDEM,Python,https://wisdem.github.io/ORBIT/,"b'ORBIT\n=====\n\nOffshore Renewables Balance of system and Installation Tool\n\n\n:Version: 1.0.8\n:Authors: `Jake Nunemaker `_, `Matt Shields `_, `Rob Hammond `_\n:Documentation: `ORBIT Docs `_\n\nInstallation\n------------\n\nAs of version 0.5.2, ORBIT is now pip installable with ``pip install orbit-nrel``.\n\nDevelopment Setup\n-----------------\n\nThe steps below are for more advanced users who would like to modify\nand contribute to ORBIT.\n\nEnvironment\n~~~~~~~~~~~\n\nA couple of notes before you get started:\n - It is assumed that you will be using the terminal on MacOS/Linux or the\n Anaconda Prompt on Windows. The instructions refer to both as the\n ""terminal"", and unless otherwise noted the commands will be the same.\n - To verify git is installed, run ``git --version`` in the terminal. If an error\n occurs, install git using these `directions `_.\n - The listed installation process is intended to be the easiest for any OS\n to get started. An alternative setup that doesn\'t rely on Anaconda for\n setting up an environment can be followed\n `here `_.\n\nInstructions\n~~~~~~~~~~~~\n\n1. Download the latest version of `Miniconda `_\n for the appropriate OS. Follow the remaining `steps `_\n for the appropriate OS version.\n2. From the terminal, install pip by running: ``conda install -c anaconda pip``\n3. Next, create a new environment for the project with the following.\n\n .. code-block:: console\n\n conda create -n <environment-name> python=3.7 --no-default-packages\n\n To activate/deactivate the environment, use the following commands.\n\n .. code-block:: console\n\n conda activate <environment-name>\n conda deactivate\n\n4. Clone the repository:\n ``git clone https://github.com/WISDEM/ORBIT.git``\n5. Navigate to the top level of the repository\n (``<path-to-repository>/ORBIT/``) and install ORBIT as an editable package\n with the following command.\n\n .. code-block:: console\n\n # Note the ""."" at the end\n pip install -e .\n\n # OR if you are going to be contributing to the code or building documentation\n pip install -e \'.[dev]\'\n6. (Development only) Install the pre-commit hooks to autoformat code and\n check that tests pass.\n\n.. 
code-block:: console\n\n pre-commit install\n\nDependencies\n~~~~~~~~~~~~\n\n- Python 3.7+\n- marmot-agents\n- NumPy\n- SciPy\n- Matplotlib\n- OpenMDAO (>=3.2)\n\nDevelopment Specific\n~~~~~~~~~~~~~~~~~~~~\n\n- black\n- isort\n- pre-commit\n- pytest\n- sphinx\n- sphinx-rtd-theme\n\n\nRecommended packages for easy iteration and running of code:\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n\n- jupyterlab\n- pandas\n'",,"2019/12/09, 19:05:10",1416,Apache-2.0,62,632,"2023/04/04, 16:03:22",7,96,129,20,204,0,0.6,0.14589665653495443,"2020/07/01, 16:36:38",v0.5.0,0,6,false,,false,false,,,https://github.com/WISDEM,https://www.nrel.gov/wind/systems-engineering.html,"NREL National Wind Technology Center, Boulder, CO",,,https://avatars.githubusercontent.com/u/5444272?v=4,,, WindTurbineClassification,Specification of 'normal' wind turbine operating behaviour for rapid anomaly detection.,nmstreethran,https://github.com/nmstreethran/WindTurbineClassification.git,github,"energy,renewable-energy,python,jupyter-notebook,wind-turbines",Wind Energy,"2023/10/04, 18:31:41",49,0,11,true,Python,,,Python,,"b""# WindTurbineClassification\n\n[![DOI](images/badges/DOI.svg)](https://doi.org/10.5281/zenodo.2875795)\n[![View report (PDF)](images/badges/REPORT.svg)](https://raw.githubusercontent.com/nmstreethran/WindTurbineClassification/current/docs/nms_dissertation.pdf)\n\n\n\n***Specification of 'normal' wind turbine operating behaviour for rapid anomaly detection: through the use of machine learning algorithms***\n\nby Nithiya Streethran (nmstreethran@gmail.com)\n\nThis work is derived from my dissertation for the degree of Master of Science (MSc) in Renewable Energy Engineering at Heriot-Watt University, which was completed during a technical placement at Natural Power between May and August 2017.\n\n**Unfortunately, the datasets are proprietary industry data and I do not own the rights to distribute them to the public. Please do not contact me to request the datasets.**\n\n## Abstract\n\nMaximising the economic effectiveness of a wind farm is essential in making wind a more economic source of energy. This effectiveness can be increased through the reduction of operation and maintenance costs, which can be achieved through continuously monitoring the condition of wind turbines. An alternative to expensive condition monitoring systems, which can be uneconomical especially for older wind turbines, is to implement classification algorithms on supervisory control and data acquisition (SCADA) signals, which are collected in most wind turbines. Several publications were reviewed, which were all found to use separate algorithms to predict specific faults in advance. In reality, wind turbines tend to have multiple faults which may happen simultaneously and have correlations with one another. This project focusses on developing a methodology to predict multiple wind turbine faults in advance simultaneously by implementing classification algorithms on SCADA signals for a wind farm with 25 turbines rated at 2,500 kW, spanning a period of 30 months. The data, which included measurements of wind speed, active power and pitch angle, was labelled using corresponding downtime data to detect normal behaviour, faults and varying timescales before a fault occurs. Three different classification algorithms, namely decision trees, random forests and k nearest neighbours were tested using imbalanced and balanced training data, initially to optimise a number of hyperparameters. 
The random forest classifier produced the best results. Upon conducting a more detailed analysis on the performance of specific faults, it was found that the classifier was unable to detect the varying timescales before a fault with accuracy comparable to that of normal or faulty behaviour. This could have been due to the SCADA data, which are used as features, being unsuitable for detecting the faults, and there is potential to improve this by balancing only these classes.\n\n***Keywords:*** wind turbine, classification algorithm, SCADA, fault detection, condition monitoring\n\n## Scripts\n\n  | Source | Output\n------ | -- | --\nProcess SCADA and downtime data | [![View Python script](images/badges/SCRIPT.svg)](scripts/process_data.py) | [![View Jupyter Notebook](images/badges/NOTEBOOK.svg)](https://nbviewer.org/github/nmstreethran/WindTurbineClassification/blob/current/docs/jupyter-notebooks/process_data.ipynb)\nDowntime categories | [![View Python script](images/badges/SCRIPT.svg)](scripts/downtime_categories.py) | [![View Jupyter Notebook](images/badges/NOTEBOOK.svg)](https://nbviewer.org/github/nmstreethran/WindTurbineClassification/blob/current/docs/jupyter-notebooks/downtime_categories.ipynb)\nMerge SCADA and downtime data | [![View Python script](images/badges/SCRIPT.svg)](scripts/SCADA_downtime_merge.py)\nPower curves for all turbines | | [![View Jupyter Notebook](images/badges/NOTEBOOK.svg)](https://nbviewer.org/github/nmstreethran/WindTurbineClassification/blob/current/docs/jupyter-notebooks/powercurves_all.ipynb)\n\n## License\n\nUnless otherwise stated:\n\n- Code and scripts are licensed under the [MIT License](https://opensource.org/licenses/MIT).\n- Content, images, and documentation are licensed under a [Creative Commons Attribution 4.0 International (CC-BY-4.0) License](https://creativecommons.org/licenses/by/4.0/).\n\nProject badges are generated using [Shields.io](https://shields.io/) and [Simple Icons](https://simpleicons.org/).\n""",",https://doi.org/10.5281/zenodo.2875795","2018/07/29, 23:36:18",1913,MIT,1,97,"2022/04/10, 16:19:44",0,1,1,0,563,0,0.0,0.0,"2020/11/27, 17:33:37",v1.1.0,0,1,false,,false,false,,,,,,,,,,, ANYstructure,Offshore Steel structure calculation tool with automatic optimization and report generation.,audunarn,https://github.com/audunarn/ANYstructure.git,github,"dnvgl-os-c101,design-of-offshore-steel-structures,buckling,fatigue-analysis,analysis-framework,plate-thickness,beam-section,gui-based,dnvgl-rp-c201,optimization-tools,structural-engineering,naval-architecture,dnvgl-rp-c203,cylinders,plates,girder,dnv-os-c101",Wind Energy,"2023/10/23, 19:43:00",36,1,11,true,Python,,,"Python,JavaScript",,"b'# ANYstructure #\nANYstructure is the ultimate steel structure design tool for plate fields and cylinders! \nWeight optimization for all structures with machine learning capabilities. 
\nCalculations are based on DNV standards and rules.\n### What\'s new in 4.10 ###\n* Corrected minor bug on membrane stresses for unstiffened cylinder.\n### What\'s new in 4.9.1 ###\n* Corrected bug in loading old save files\n* Corrected error on buckling flat plate calculation\n### What\'s new in 4.8 ###\n* Reporting table on cylinders.\n* Color coding on some cylinder properties.\n* Corrected error on additional hoop stress input for cylinders.\n### What\'s new in 4.7 ###\n* Corrected error on girder calculation for cylinder buckling.\n* Added 1.10 load factor option for cylinder buckling.\n* Better compatibility with Linux.\n* Python 3.11 based.\n### What\'s new in 4.4 ###\n* Backup and restore feature added.\n### What\'s new in 4.3 ###\n* General stability.\n* User friendliness.\n### What\'s new in 4.2 ###\n* Bug fixing.\n* Ukrainian theme.\n### What\'s new in 4.0 ###\n* Cylinder design and optimization!\n* Flat plate prescriptive buckling improved. Girder calculation added.\n* Updated GUI with color themes.\n### What\'s new in 3.3 ###\n* Extremely efficient Machine Learning version of PULS called ML-CL. Implemented for all optimizer options.\n* Calculation of Center of Gravity and Center of Buoyancy.\n* Reporting of weights and COG.\n* Lots of bug fixes.\n\n------------------------------------------------------------------------\n\n## The following is calculated: ##\n* Minimum plate thickness (DNV-OS-C101)\n* Minimum section modulus of stiffener/plate (DNVGL-OS-C101)\n* Minimum shear area (DNVGL-OS-C101)\n* Buckling (DNVGL-RP-C201) or PULS (licensed DNV software)\n* Buckling strength of shells (DNV-RP-C202)\n* PULS buckling (DNV license needed)\n* Machine learning buckling, PULS based\n* Fatigue for plate/stiffener connection (DNVGL-RP-C203)\n\nCompartments (tank pressures) are created automatically.\n\nPressures on the external hull (or any other generic location) are defined by specifying equations.\n\nYou can optimize cylinders, a single plate/stiffener field or multiple fields. Geometry of the double bottom can be optimized.\n\nPLEASE CONTRIBUTE. 
REPORT BUGS, ERRORS, ETC.\nFor a Windows executable (.exe) version for non-coders, use the link below.\n\nFeedback: audunarn@gmail.com or discuss on GitHub.\n\nPlease like, share or comment on LinkedIn: https://www.linkedin.com/in/audun-arnesen-nyhus-6aa17118/\n\nScreenshot (this example can be loaded from file ""ship_section_example.txt""):\n\n![picture](https://docs.google.com/uc?id=1HJeT50bNJTLJbcHTfRke4iySV8zNOAl_)\n'",,"2018/04/10, 09:10:37",2024,MIT,48,834,"2023/08/30, 07:14:49",5,100,122,10,56,0,0.1,0.0028169014084507005,"2023/08/30, 07:53:17",4.10,0,3,false,,false,false,narest-qa/repo14,,,,,,,,,, windrose,A graphic tool used by meteorologists to give a succinct view of how wind speed and direction are typically distributed at a particular location.,python-windrose,https://github.com/python-windrose/windrose.git,github,"python,matplotlib,windrose,wind,speed,pandas,numpy",Wind Energy,"2023/10/03, 09:40:55",315,128,43,true,Python,,python-windrose,"Python,Jupyter Notebook,TeX",https://python-windrose.github.io/windrose,"b'[![Latest Version](https://img.shields.io/pypi/v/windrose.svg)](https://pypi.python.org/pypi/windrose/)\n[![Supported Python versions](https://img.shields.io/pypi/pyversions/windrose.svg)](https://pypi.python.org/pypi/windrose/)\n[![Wheel format](https://img.shields.io/pypi/wheel/windrose.svg)](https://pypi.python.org/pypi/windrose/)\n[![License](https://img.shields.io/pypi/l/windrose.svg)](https://pypi.python.org/pypi/windrose/)\n[![Development Status](https://img.shields.io/pypi/status/windrose.svg)](https://pypi.python.org/pypi/windrose/)\n[![Tests](https://github.com/python-windrose/windrose/actions/workflows/tests.yml/badge.svg)](https://github.com/python-windrose/windrose/actions/workflows/tests.yml)\n[![DOI](https://zenodo.org/badge/37549137.svg)](https://zenodo.org/badge/latestdoi/37549137)\n[![JOSS](https://joss.theoj.org/papers/10.21105/joss.00268/status.svg)](https://joss.theoj.org/papers/10.21105/joss.00268)\n\n# Windrose\n\nA [wind rose](https://en.wikipedia.org/wiki/Wind_rose) is a graphic tool used by meteorologists to give a succinct view of how wind speed and direction are typically distributed at a particular location. It can also be used to describe air quality pollution sources. The wind rose tool uses Matplotlib as a backend. Data can be passed to the package using Numpy arrays or a Pandas DataFrame.\n\nWindrose is a Python library to manage wind data, draw windroses (also known as polar rose plots), and fit Weibull probability density functions.\n\nThe initial use case of this library was for a technical report concerning pollution exposure and wind distribution analyses. Data from local pollution measurements and meteorological information from various sources like Meteo-France were used to generate a pollution source wind rose.\n\nIt is also used by some contributors for teaching purposes.\n\n![Map overlay](https://raw.githubusercontent.com/python-windrose/windrose/main/paper/screenshots/overlay.png)\n\nSome other contributors have used it to make figures for a [wind power plant control optimization study](https://www.nrel.gov/docs/fy17osti/68185.pdf).\n\nSome academics use it to track lightning strikes during high intensity storms, visualizing the motion of storms based on the relative position of the lightning from one strike to the next.
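For orientation, here is a minimal plotting sketch with synthetic data, following the `WindroseAxes.from_ax()` pattern from the package documentation (the random arrays are purely illustrative):

```python
import numpy as np
from windrose import WindroseAxes

# Synthetic sample: 500 wind speed / wind direction observations.
ws = np.random.random(500) * 6    # speeds in m/s
wd = np.random.random(500) * 360  # directions in degrees

# Stacked wind rose, normalised so the sectors sum to 100 %.
ax = WindroseAxes.from_ax()
ax.bar(wd, ws, normed=True, opening=0.8)
ax.set_legend()
```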
\n\n## Try windrose on mybinder.org\n\n[![Binder](https://mybinder.org/badge_logo.svg)](https://mybinder.org/v2/gh/python-windrose/windrose/HEAD?labpath=notebooks)\n\n## Install\n\n### Requirements\n\n- matplotlib http://matplotlib.org/\n- numpy http://www.numpy.org/\n- and naturally python https://www.python.org/ :-P\n\nOptional libraries:\n\n- Pandas http://pandas.pydata.org/ (to feed plot functions easily)\n- Scipy http://www.scipy.org/ (to fit data with Weibull distribution)\n- ffmpeg https://www.ffmpeg.org/ (to output video)\n- click http://click.pocoo.org/ (for command line interface tools)\n- seaborn https://seaborn.pydata.org/ (for easy subplots)\n\n### Install latest release version via pip\n\nA package is available and can be downloaded from PyPi and installed using:\n\n```bash\n$ pip install windrose\n```\n\n### Install latest development version\n\n```bash\n$ pip install git+https://github.com/python-windrose/windrose\n```\n\nor\n\n```bash\n$ git clone https://github.com/python-windrose/windrose\n$ python setup.py install\n```\n\n## Documentation\nFull documentation of the library is available at https://python-windrose.github.io/windrose/\n\n## Community guidelines\n\nYou can help to develop this library.\n\n### Code of Conduct\n\nIf you are using Python Windrose and want to interact with developers or other users,\nwe encourage you to follow our [code of conduct](https://github.com/python-windrose/windrose/blob/master/CODE_OF_CONDUCT.md).\n\n### Contributing\n\nIf you discover issues, have ideas for improvements or new features, please report them.\n[CONTRIBUTING.md](https://github.com/python-windrose/windrose/blob/master/CONTRIBUTING.md) explains\nhow to contribute to this project.\n\n### List of contributors and/or notable users\nhttps://github.com/python-windrose/windrose/blob/main/CONTRIBUTORS.md\n'",",https://zenodo.org/badge/latestdoi/37549137","2015/06/16, 18:42:14",3053,CUSTOM,86,391,"2023/10/03, 09:40:59",23,120,233,47,22,2,0.0,0.3980891719745223,"2023/06/12, 
17:45:05",v1.9.0,0,19,false,,true,true,"Alexsaez1990/proyecto_cultivos,Maxbeal/noisemodel,cathrinr/driving_change,imorinigo/dashboard_space_apps,MET-OM/metocean-stats,BahadoriMohammad/PyFluxPro,aesirkth/euroc23_electronics,adrnfk19/SoundingRocketDesign,Huell-Howitzer/docker-stacks,calstar/LE2,jisoo-j/RocketPy,JayMangukiya1614/TempDriverSchedule,OpenFDEM-geomechanics/Post-processing,os-simopt/wrfplotter,Kostyak7/Avellon_tech,CarstenOtl/WESP_RocketPy,miky21121996/HFR-project,RocketPy-Team/Infinity-API,juliet29/windows,edmundhong/formula1-dataanalysis,vieirasaulo/wasserweise,bavodenys/kitesurf_AI,sia-information-system/siaplotlib,Serg-NSD/SkillFactory-Data_Science,miky21121996/MO_project,rcushen/wind-turbines,camirmas/REStats,danilo-pilacuan/TechnicalEvaluationExchangeAPI,AnujaDassanayake/test-deployment-1,devSmarak972/Chakrasindhu-portal,jhaalbu/av_klima_nve,derekeden/aisio,Sakura-echos/CH4-caculate,robintw/PyAURN,SINTEF/blues-metocean-lib-examples,jhaalbu/klima_app,umweltschutzsoftware/windrose,jhaalbu/klima_docs,GEUS-Glaciology-and-Climate/GC-Net-evaluation,iliketoast-create/rocket,kaustuvchatterjee/vskp2,Thomasjkeel/Examining-The-Mid-Latitude-Jet-Stream,onaci/bard,Hariramakrishnan919/Textiledefect,Hariramakrishnan919/Minor-Project,timsta95/mcwindrose,jhaalbu/hendelse,nRiccobo/Leosphere,shanshan825/aiap12-Tong-Shanshan-359I,UNISvalbard/unisacsi,MurzikVasilyevich/info_perun,Mavengence/linkedin-job-scraper-data-analysis,KonstantinosF/Wekeo_Competition,claws-scot/CLAWS,anushavc/rapddetect,RocketPy-Team/RocketPy,OzFlux/PyFluxPro,JohnPapagiannakos/meteoAPI,cgadal-pythonpackages/pydune,os-simopt/wrftamer,jgmsantos/Livro-Python,jhaalbu/frost,alberduris/met-main,cycle13/climate,randulphmorales/romeomemo,SoftwareDevEngResearch/albatross,galendal/FACTS,raj-26singh/Wind-Farm-Simulation,raj-26singh/Wind-Farm-Simulator,nunesotavio/GestaoQualidadeAr_SC,christine-berlin/Capstone_WindPowerPredictions,goameli/Windenergy_Dashboard,vduseev/number-encoding,SooHooLee/test,brynjarmorka/climvis,JeromeSauer/Capstone_WindPowerPredicting,aeaa1998/lab_10_redes,ECMWFCode4Earth/MaLePoM,autoconsumes/aa,kirubhaharini/URECA,kastnerp/pedestrian-wind-comfort-weibull,nick123pig/wind-rose,Mapacherama/WindAnalyseTool,xiazemin/Wind-Speed-Analysis,kasiagunia/wind-turbine,nusrathfathima/Rain-in-Australia-Prediction,thilowrona/fatbox,vinitshah24/Australia-Rain-Classification,Rajvardhan7/Textile-Detection,Rajvardhan7/WindFarm-Layout-Optimization,julianasierra97/chat_bot,ElieKadoche/configuration_files,GregoireJan/met,halvorot/Floating-wind-turbine-stabilization-RL,jeeve/outilsflask,cgadal-pythonpackages/Wind_data,adamconrad7/Tree-Segmentation,launda/avguide,jhaalbu/av-klima,Wolfrax/tv,vlemeur/vlm-ds-toolbox,mozhemeng/flask-frame,tsunghao-huang/master_thesis,SmartPracticeschool/SBSPS-Challenge-2585-Forecasting-the-power-output-of-wind-farm-based-on-the-weather-conditions.,SmartPracticeschool/SBSPS-Challenge-4546-Predicting-the-energy-output-of-wind-turbine-based-on-weather-condition,SmartPracticeschool/SBSPS-Challenge-3618-Predicting-the-energy-output-of-wind-turbine-based-on-weather-condition,CENER-EPR/OWAbench,miky211296/climvis,clementbrizard/mapping-finland-weather,MeteoR-OI/bd-climato,varungv/django-web-crawler-web-app,SoftwareDefinedBuildings/XBOS,rsoutelino/pyromsgui,launda/learn_flask,ocni-dtu/weather_visualizer,abkfenris/mwac-wind,slj287/tempe_town_lake,weber-s/code_example_ige,oyan99/Wind-Speed-Analysis,cqcn1991/Wind-Speed-Analysis,leportella/oceanpyscripts,rmsare/ec-simple,sandtimething/adhd_diagnosis,nclv/Python-3.5,soci
b/HFRadarReports,akrherz/pyIEM,RCand/maritima,LionelR/pyair_utils",,https://github.com/python-windrose,,,,,https://avatars.githubusercontent.com/u/28726174?v=4,,, SHARPy,Simulation of High Aspect Ratio aeroplanes and wind turbines in Python.,ImperialCollegeLondon,https://github.com/ImperialCollegeLondon/sharpy.git,github,"aeroelasticity,simulation,aeronautics,structures,structural-dynamics,wind-turbines",Wind Energy,"2023/10/18, 21:42:02",102,0,22,true,Python,Imperial College London,ImperialCollegeLondon,"Python,Dockerfile,Shell,CMake",https://imperial.ac.uk/aeroelastics/sharpy,"b""# Simulation of High Aspect Ratio aeroplanes in Python [SHARPy]\n\n![Version badge](https://img.shields.io/endpoint.svg?url=https%3A%2F%2Fraw.githubusercontent.com%2FImperialCollegeLondon%2Fsharpy%2Fmain%2F.version.json)\n![Build Status](https://github.com/ImperialCollegeLondon/sharpy/actions/workflows/sharpy_tests.yaml/badge.svg)\n[![Documentation Status](https://readthedocs.org/projects/ic-sharpy/badge/?version=main)](https://ic-sharpy.readthedocs.io/en/main/?badge=main)\n[![codecov](https://codecov.io/gh/ImperialCollegeLondon/sharpy/branch/main/graph/badge.svg)](https://codecov.io/gh/ImperialCollegeLondon/sharpy)\n[![License](https://img.shields.io/badge/License-BSD%203--Clause-blue.svg)](https://opensource.org/licenses/BSD-3-Clause)\n[![status](https://joss.theoj.org/papers/f7ccd562160f1a54f64a81e90f5d9af9/status.svg)](https://joss.theoj.org/papers/f7ccd562160f1a54f64a81e90f5d9af9)\n[![DOI](https://zenodo.org/badge/DOI/10.5281/zenodo.3531965.svg)](https://doi.org/10.5281/zenodo.3531965)\n\nSHARPy is a nonlinear aeroelastic analysis package originally developed at the Department of Aeronautics, Imperial\nCollege London. It can be used for the structural, aerodynamic and aeroelastic analysis of flexible wings, aircraft and wind turbines. It is shared here under a BSD 3-Clause permissive license.\n\n![XHALE](./docs/source/_static/XHALE-render.jpg)\n\n### Contact\n\nFor more information on the [research team](http://www.imperial.ac.uk/aeroelastics/software/) developing SHARPy or to get \nin touch, [visit our homepage](http://www.imperial.ac.uk/aeroelastics).\n\n## Physical Models\n\nSHARPy is a modular aeroelastic solver that currently uses two specific models for the structural and aerodynamic response of the system.\n\nFor the structural model, SHARPy employs a geometrically-exact displacement-based composite beam formulation,\naugmented with Lagrange multipliers for additional kinematic constraints.\nThis model has the advantage of providing the solution directly in the physical problem's degrees of freedom, making the \ncoupling with the aerodynamic solver simple and not requiring any post-processing. The 1D beam formulation used limits \nthe analyses that can be done by SHARPy to slender structures, such as high aspect ratio wings.\n\nThe aerodynamic model utilises the Unsteady Vortex Lattice Method (UVLM). The aerodynamic surfaces are modelled as a thin\nvortex ring lattice with the boundary conditions enforced at the collocation points in the middle of the vortex rings.\nThe Kutta condition is also enforced at the trailing edge. The wake can be simulated by either additional vortex rings\nor by infinitely long horseshoe vortices, which are ideally suited for steady simulations only.\n\nThe aerodynamic model has recently been extended by a linear source panel method (SPM) to model nonlifting bodies for example fuselages. 
The SPM and UVLM can be coupled to model fuselage-wing configuration and a junction handling approach, based on phantom panels and circulation interpolation, has been added.\n\nThe input problems can be structural, aerodynamic or coupled, yielding an aeroelastic system.\n\n## [Capabilities](http://ic-sharpy.readthedocs.io/en/latest/content/capabilities.html)\n\nThe base solver SHARPy is a nonlinear aeroelastic analysis package that can be used on free-flying flexible aircraft,\nwings and wind turbines. In addition, it supports linearisation of these nonlinear systems about\narbitrary conditions and includes various tools such as: model reduction or frequency analysis.\n\nIn short, SHARPy offers (amongst others) the following solutions to the user:\n* Static aerodynamic, structural and aeroelastic solutions including fuselage effects\n* Finding trim conditions for aeroelastic configurations\n* Nonlinear, dynamic time domain simulations under a large number of conditions such as:\n + Prescribed trajectories.\n + Free flight.\n + Dynamic follower forces.\n + Control inputs in thrust, control surface deflection...\n + Arbitrary time-domain gusts, including non span-constant ones.\n + Full 3D turbulent fields.\n* Multibody dynamics with hinges, articulations and prescribed nodal motions:\n + Applicable to wind turbines.\n + Hinged aircraft.\n + Catapult assisted takeoffs.\n* Linear analysis:\n + Linearisation around a nonlinear equilibrium.\n + Frequency response analysis.\n + Asymptotic stability analysis.\n* Model order reduction:\n + Krylov-subspace reduction methods.\n + Several balancing reduction methods.\n\n## Documentation\n\nThe documentation for SHARPy can be found [here](http://ic-sharpy.readthedocs.io).\n\n## Installing SHARPy\n\nFor the latest documentation, see the \n[installation docs](https://ic-sharpy.readthedocs.io/en/latest/content/installation.html).\n\nSHARPy can also be obtained from Docker Hub to avoid compilation\nand platform-dependant issues. If you are interested, make sure you check \nthe [SHARPy Docker distribution docs](https://ic-sharpy.readthedocs.io/en/latest/content/installation.html#using-sharpy-from-a-docker-container).\n\n## Contributing and Bug reports\n\nIf you think you can add a useful feature to SHARPy, want to write documentation or you encounter a bug, by all means, \ncheck out the [collaboration guide](https://ic-sharpy.readthedocs.io/en/latest/content/contributing.html).\n\n## Citing SHARPy\n\nSHARPy has been published in the Journal of Open Source Software (JOSS) and the relevant paper can be found\n[here](https://joss.theoj.org/papers/10.21105/joss.01885).\n\nIf you are using SHARPy for your work, please remember to cite it using the paper in JOSS as:\n\n`del Carre et al., (2019). SHARPy: A dynamic aeroelastic simulation toolbox for very flexible aircraft and wind\nturbines. 
Journal of Open Source Software, 4(44), 1885, https://doi.org/10.21105/joss.01885`\n\nThe bibtex entry for this citation is:\n\n```\n@Article{delCarre2019,\ndoi = {10.21105/joss.01885},\nurl = {https://doi.org/10.21105/joss.01885},\nyear = {2019},\nmonth = dec,\npublisher = {The Open Journal},\nvolume = {4},\nnumber = {44},\npages = {1885},\nauthor = {Alfonso del Carre and Arturo Mu{\\~{n}}oz-Sim\\'on and Norberto Goizueta and Rafael Palacios},\ntitle = {{SHARPy}: A dynamic aeroelastic simulation toolbox for very flexible aircraft and wind turbines},\njournal = {Journal of Open Source Software}\n}\n```\n\n\n## Continuous Integration Status\n\nSHARPy uses Continuous Integration to control the integrity of its code. The status in the release and develop branches\nis:\n\nMain\n![Build Status](https://github.com/ImperialCollegeLondon/sharpy/actions/workflows/sharpy_tests.yaml/badge.svg)\n![Docker Status](https://github.com/ImperialCollegeLondon/sharpy/actions/workflows/docker_build.yaml/badge.svg)\n\nDevelop\n![Build Status](https://github.com/ImperialCollegeLondon/sharpy/actions/workflows/sharpy_tests.yaml/badge.svg?branch=develop)\n""",",https://doi.org/10.5281/zenodo.3531965,https://doi.org/10.21105/joss.01885","2016/10/07, 10:11:51",2574,BSD-3-Clause,301,4255,"2023/10/18, 21:49:55",20,134,188,32,6,6,0.4,0.7001184132622853,"2023/10/18, 15:52:35",2.2,0,18,false,,false,true,,,https://github.com/ImperialCollegeLondon,,Imperial College London,,,https://avatars.githubusercontent.com/u/1220306?v=4,,, WindSE,A Python package that uses a FEniCS backend to perform wind farm simulations and optimization.,NREL,https://github.com/NREL/WindSE.git,github,,Wind Energy,"2023/07/11, 19:52:21",43,0,4,true,Python,National Renewable Energy Laboratory,NREL,"Python,Shell",,"b'WindSE: Wind Systems Engineering\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n\nSimple Description:\n===================\n\nWindSE is a Python package that uses a FEniCS backend to perform wind farm simulations and optimization. Documentation can be found at: https://windse.readthedocs.io/en/latest/ \n\nQuick Start-Up Guide:\n=====================\n\nIt is easiest to run WindSE within a conda environment. To install conda check this link: `Conda Installation `_. Additionally, WindSE has been tested on MacOS (Catalina 10.15) and Linux (CentOS 7). Windows is not recommended. \n\nQuick Installation Instructions:\n--------------------------------\n\nThe easiest way to install WindSE is to run::\n\n sh install.sh <environment-name>\n\nThen the environment can be activated using::\n\n conda activate <environment-name>\n\nQuick Demo Instructions:\n------------------------\n\nActivate the conda environment using::\n\n conda activate <environment-name>\n\nThen to run a simple demo, navigate to /demos/documented/Yaml_Examples/ and run::\n\n windse run 0-wind_farm_2D.yaml\n\nThe output of this simulation will be located in the output/2_5D_Wind_Farm/ folder. Use `Paraview `_ to visualize the results in the solutions/ folder. 
To learn what parameters can be set in the yaml file, head to the `Parameter Documentation `_.\n\n\n\n'",,"2019/04/26, 21:00:20",1642,CUSTOM,1,483,"2023/07/07, 21:49:19",19,79,91,11,109,0,0.0,0.425974025974026,"2023/07/11, 20:07:38",2023.07.01,0,5,false,,false,false,,,https://github.com/NREL,http://www.nrel.gov,"Golden, CO",,,https://avatars.githubusercontent.com/u/1906800?v=4,,, WEIS,WEIS is a framework that combines multiple tools to enable design optimization of floating offshore wind turbines.,WISDEM,https://github.com/WISDEM/WEIS.git,github,,Wind Energy,"2023/10/19, 15:29:10",41,0,15,true,Fortran,WISDEM,WISDEM,"Fortran,Roff,Python,C,C++,Jupyter Notebook,CMake,TeX,MATLAB,Scheme,F*,Makefile,Batchfile,Shell,Dockerfile,M",,"b'# WEIS\n\n[![Coverage Status](https://coveralls.io/repos/github/WISDEM/WEIS/badge.svg?branch=develop)](https://coveralls.io/github/WISDEM/WEIS?branch=develop)\n[![Actions Status](https://github.com/WISDEM/WEIS/workflows/CI_WEIS/badge.svg?branch=develop)](https://github.com/WISDEM/WEIS/actions)\n[![Documentation Status](https://readthedocs.org/projects/weis/badge/?version=develop)](https://weis.readthedocs.io/en/develop/?badge=develop)\n[![DOI](https://zenodo.org/badge/289320573.svg)](https://zenodo.org/badge/latestdoi/289320573)\n\nWEIS, Wind Energy with Integrated Servo-control, performs multifidelity co-design of wind turbines. WEIS is a framework that combines multiple NREL-developed tools to enable design optimization of floating offshore wind turbines.\n\nAuthor: [NREL WISDEM & OpenFAST & Control Teams](mailto:systems.engineering@nrel.gov)\n\n## Version\n\nThis software is version 0.0.1.\n\n## Documentation\n\nSee local documentation in the `docs`-directory or access the online version at \n\n## Packages\n\nWEIS integrates four models in a unique workflow:\n* [WISDEM](https://github.com/WISDEM/WISDEM) is a set of models for assessing overall wind plant cost of energy (COE).\n* [OpenFAST](https://github.com/OpenFAST/openfast) is the community model for wind turbine simulation to be developed and used by research laboratories, academia, and industry.\n* [TurbSim](https://www.nrel.gov/docs/fy09osti/46198.pdf) is a stochastic, full-field, turbulent-wind simulator.\n* [ROSCO](https://github.com/NREL/ROSCO) provides an open, modular and fully adaptable baseline wind turbine controller to the scientific community.\n\nIn addition, two external libraries are added:\n* [pCrunch](https://github.com/NREL/pCrunch) is a collection of tools to ease the process of parsing large amounts of OpenFAST output data and conduct loads analysis.\n* [pyOptSparse](https://github.com/mdolab/pyoptsparse) is a framework for formulating and efficiently solving nonlinear constrained optimization problems.\n\nSoftware Model Versions:\nSoftware | Version\n--- | ---\nOpenFAST | 3.2.1\nROSCO | 2.6.0\n\nThe core WEIS modules are:\n * _aeroelasticse_ is a wrapper to call [OpenFAST](https://github.com/OpenFAST/openfast)\n * _control_ contains the routines calling the [ROSCO_Toolbox](https://github.com/NREL/ROSCO_toolbox) and the routines supporting distributed aerodynamic control devices, such as trailing edge flaps\n * _gluecode_ contains the scripts gluing together all models and libraries\n * _multifidelity_ contains the codes to run multifidelity design optimizations\n * _optimization_drivers_ contains various optimization drivers\n * _schema_ contains the YAML files and corresponding schemas representing the input files to WEIS\n\n## Installation\n\nOn laptops and personal computers, 
installation with [Anaconda](https://www.anaconda.com) is the recommended approach because of the ability to create self-contained environments suitable for testing and analysis. WEIS requires [Anaconda 64-bit](https://www.anaconda.com/distribution/). WEIS is currently supported on Linux, MAC and Windows Sub-system for Linux (WSL). Installing WEIS on native Windows is not supported.\n\nThe installation instructions below use the environment name, ""weis-env,"" but any name is acceptable. For those working behind company firewalls, you may have to change the conda authentication with `conda config --set ssl_verify no`. Proxy servers can also be set with `conda config --set proxy_servers.http http://id:pw@address:port` and `conda config --set proxy_servers.https https://id:pw@address:port`.\n\n0. On the DOE HPC system eagle, make sure to start from a clean setup and type\n\n module purge\n module load conda \n\n1. Setup and activate the Anaconda environment from a prompt (WSL terminal on Windows or Terminal.app on Mac)\n\n conda env create --name weis-env -f https://raw.githubusercontent.com/WISDEM/WEIS/main/environment.yml python=3.9\n conda activate weis-env # (if this does not work, try source activate weis-env)\n sudo apt update # (WSL only, assuming Ubuntu)\n\n2. Use conda to add platform specific dependencies.\n\n conda config --add channels conda-forge\n conda install -y petsc4py mpi4py # (Mac / Linux only) \n conda install -y compilers # (Mac only) \n sudo apt install gcc g++ gfortran libblas-dev liblapack-dev -y # (WSL only, assuming Ubuntu)\n\n3. Clone the repository and install the software\n\n git clone https://github.com/WISDEM/WEIS.git\n cd WEIS\n git checkout branch_name # (Only if you want to switch branches, say ""develop"")\n python setup.py develop # (The common ""pip install -e ."" will not work here)\n\n4. Instructions specific for DOE HPC system Eagle. Before executing the setup script, do:\n\n module load comp-intel intel-mpi mkl\n module unload gcc\n python setup.py develop\n\n**NOTE:** To use WEIS again after installation is complete, you will always need to activate the conda environment first with `conda activate weis-env` (or `source activate weis-env`). On Eagle, make sure to reload the necessary modules\n\n## Developer guide\n\nIf you plan to contribute code to WEIS, please first consult the [developer guide](https://weis.readthedocs.io/en/latest/how_to_contribute_code.html).\n\n## Feedback\n\nFor software issues please use . \n'",",https://zenodo.org/badge/latestdoi/289320573","2020/08/21, 16:54:28",1160,Apache-2.0,2,4365,"2023/10/24, 13:02:35",15,181,226,31,1,3,0.3,0.5947242206235013,"2022/10/05, 18:39:03",v1.1,0,23,false,,false,false,,,https://github.com/WISDEM,https://www.nrel.gov/wind/systems-engineering.html,"NREL National Wind Technology Center, Boulder, CO",,,https://avatars.githubusercontent.com/u/5444272?v=4,,, pyNuMAD,"An object-oriented, open-source software written in Python which facilitates the creation and analysis of three-dimensional models of wind turbine blades.",sandialabs,https://github.com/sandialabs/pyNuMAD.git,github,,Wind Energy,"2023/10/19, 19:59:53",3,0,3,true,Python,Sandia National Laboratories,sandialabs,Python,https://sandialabs.github.io/pyNuMAD/,"b"" # pyNuMAD\n[pyNuMAD (Python Numerical Manufacturing And Design)](https://github.com/sandialabs/pyNuMAD) is an object-oriented, open-source software program written in Python which simplifies the process of creating a three-dimensional model of a wind turbine blade. 
The tool organizes all blade information, including aerodynamic and material properties as well as material placement, into an\nintuitive API for use with other software. The purpose of pyNuMAD is to provide an intermediary between raw blade data in the form of yaml, excel, or xml files and analytical platforms\n(ANSYS, Cubit, openFAST, etc).\n\nFor any questions or support, [create a new issue](https://github.com/sandialabs/pyNuMAD/issues/new) on GitHub.\n\n## Documentation\nDocumentation for pyNuMAD is accessible at https://sandialabs.github.io/pyNuMAD/.\n\n![](docs/_static/images/pyNuMAD_overview.png)\n\n## Examples\n\nStep-by-step examples are located in the [examples](https://github.com/sandialabs/pyNuMAD/tree/main/examples) folder. Follow along in the documentation.\n\n## License\n\npyNuMAD is licensed under the BSD 3-clause license. Please see the\n[LICENSE](https://github.com/sandialabs/pyNuMAD/blob/main/LICENSE) included in\nthe source code repository for more details.\n\n## Acknowledgements \n\npyNuMAD is currently being developed with funding from the Department of Energy's\n(DOE) Energy Efficiency and Renewable Energy (EERE) Wind Energy Technology Office (WETO). \n""",,"2023/06/29, 17:01:10",118,BSD-3-Clause,39,39,"2023/10/19, 20:00:06",5,34,35,35,6,1,0.1,0.25,"2023/10/19, 20:15:33",1.0.0,0,3,false,,false,false,,,https://github.com/sandialabs,https://software.sandia.gov,United States,,,https://avatars.githubusercontent.com/u/4993680?v=4,,, HAMS,An open-source computer program for the analysis of wave diffraction and radiation of three-dimensional floating or submerged structures.,YingyiLiu,https://github.com/YingyiLiu/HAMS.git,github,"wave-structure-interaction,boundary-element-method,potential-flow-theory,offshore-wind-platforms,ocean-wave-energy-converters",Wind Energy,"2023/09/08, 04:44:27",75,0,21,true,Roff,,,"Roff,Fortran,Python,Makefile,Batchfile",,"b'\n\n# HAMS\n**An open-source computer program for the analysis of wave diffraction and radiation of three-dimensional floating or submerged structures.**\n\n[![License: Apache v2](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](http://www.apache.org/licenses/LICENSE-2.0)\n\n

\n \nHAMS (Hydrodynamic Analysis of Marine Structures) is free open-source software for analysing wave-structure interactions in the frequency domain. It is based on the boundary integral equation method within the framework of potential flow theory. The code is currently written in FORTRAN 90. It has been developed by the author, Yingyi Liu, for nearly a decade. \n\nHAMS is released in the hope that it will contribute to eliminating the inequality faced by those who cannot afford costly commercial BEM software in the continuous research developments related to offshore engineering and ocean renewable energies.\n\nHAMS is freely distributed under the Apache License, Version 2.0, http://www.apache.org/licenses/LICENSE-2.0, and may be modified and extended by researchers who intend to enhance its capabilities and port the code to other platforms. \n\nThe success of HAMS should to a large extent be attributed to Prof. Bin Teng (Dalian University of Technology), who tutored me in the theory of potential flow in marine hydrodynamics and in programming with the [Boundary Element Method](https://en.wikipedia.org/wiki/Boundary_element_method). The code structure and the coding style of HAMS are two of the things that I learned and inherited from Prof. Bin Teng.\n\n## Theoretical Basis\n\n

\n \n### - Please refer to the following papers for the theory:\n\nThe theory of the panel method used by HAMS is described in detail in the following three papers:\n\n* Yingyi Liu (2021). Introduction of the Open-Source Boundary Element Method Solver HAMS to the Ocean Renewable Energy Community. In: Proc. of the 14th European Wave and Tidal Energy Conference, Plymouth, UK, Sep. 5–9, 2021.\n\n* Yingyi Liu (2019). ""HAMS: A Frequency-Domain Preprocessor for Wave-Structure Interactions—Theory, Development, and Application."" _Journal of Marine Science and Engineering_, 7: 81.\n\n* Yingyi Liu, Changhong Hu, Makoto Sueyoshi, Hidetsugu Iwashita, Masashi Kashiwagi (2016). ""Motion response prediction by hybrid panel-stick models for a semi-submersible with bracings."" _Journal of Marine Science and Technology_, 21: 742–757.\n\nThe deepwater Green function uses a Fortran subroutine (https://github.com/Hui-Liang/Green-function-in-deep-water) developed by Dr. Hui Liang. For the detailed theory you may refer to the following three papers:\n\n* Hui Liang, Huiyu Wu, and Francis Noblesse (2018). ""Validation of a global approximation for wave diffraction-radiation in deep water."" _Applied Ocean Research_, 74: 80-86.\n\n* Huiyu Wu, Hui Liang, and Francis Noblesse (2018). ""Wave component in the Green function for diffraction radiation of regular water waves."" _Applied Ocean Research_, 81: 72-75.\n\n* Huiyu Wu, Chenliang Zhang, Yi Zhu, Wei Li, Decheng Wan, Francis Noblesse (2017). ""A global approximation to the Green function for diffraction radiation of water waves."" _European Journal of Mechanics-B/Fluids_, 65: 54-64.\n\nThe finite-depth Green function uses the Fortran subroutine FinGreen3D (https://github.com/YingyiLiu/FinGreen3D) developed by Dr. Yingyi Liu. For the detailed theory you may refer to the following two papers:\n\n* Yingyi Liu, Shigeo Yoshida, Changhong Hu, Makoto Sueyoshi, Liang Sun, Junliang Gao, Peiwen Cong, Guanghua He (2018). ""A reliable open-source package for performance evaluation of floating renewable energy systems in coastal and offshore regions."" _Energy Conversion and Management_, 174: 516-536.\n\n* Yingyi Liu, Hidetsugu Iwashita, Changhong Hu (2015). ""A calculation method for finite depth free-surface green function."" _International Journal of Naval Architecture and Ocean Engineering_, 7(2): 375-389.\n\nPlease cite the above papers appropriately in your relevant publications, reports, etc., if the HAMS code or its executable program has contributed to your work.\n\n## Generated numerical results\n\n### - Hydrodynamic coefficients\n\n

\n\n### - Wave excitation force\n\n

\n\n### - Motion RAOs\n\n

\n\n### - Free-surface elevation\n\n

\n\n## Features\n\n### - Mesh element type\n\n* HAMS can import meshes containing triangular panels, quadrilateral panels, or a mixture of both.\n

\n\n### - OpenMP parallel processing\n\n* HAMS can be run in parallel using OpenMP on PCs with multiple processors (CPUs); see the sketch below for one way to control the thread count.\n
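The HAMS documentation does not prescribe a particular way to set the thread count, but OpenMP runtimes generally honour the `OMP_NUM_THREADS` environment variable. A minimal sketch of driving a locally built HAMS executable from Python (the executable name `HAMS_x64` is hypothetical; substitute your own build):

```python
import os
import subprocess

# OMP_NUM_THREADS is the standard OpenMP variable controlling thread count
env = os.environ.copy()
env["OMP_NUM_THREADS"] = "8"

# "HAMS_x64" is a placeholder name for a local HAMS build
subprocess.run(["./HAMS_x64"], check=True, env=env)
```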

\n \n### - Computational efficiency\n\n* The following graph shows an example for the DeepCwind semisubmersible using 8 threads for the computation:\n

\n\n## Useful Links\n\nThe following open-source software can be used to view the HAMS results:

\n[1]. [BEMRosetta](https://github.com/izabala123/BEMRosetta). Developed by I\xc3\xb1aki Zabala, Markel Pe\xc3\xb1alba, Yerai Pe\xc3\xb1a-Sanchez.
\n[2]. [BEMIO](https://wec-sim.github.io/bemio/). Developed by National Renewable Energy Laboratory and Sandia National Laboratories.
\n\nYou may need HAMS for the frequency-domain pre-processing before using the following programs:

\n[1]. [FAST](https://www.nrel.gov/wind/nwtc/fast.html) or [OpenFAST](https://openfast.readthedocs.io/en/master/). Developed by National Renewable Energy Laboratory.
\n[2]. [WEC-Sim](https://wec-sim.github.io/WEC-Sim/). Developed by National Renewable Energy Laboratory and Sandia National Laboratories.
\n\nHAMS is used by the following open-source software:

\n[1]. [pyHAMS](https://github.com/WISDEM/pyHAMS). Developed by Garrett Barter, National Renewable Energy Laboratory.
\n[2]. [RAFT](https://github.com/WISDEM/RAFT). Developed by Matt Hall, Stein Housner, David Ogden, Garrett Barter, National Renewable Energy Laboratory.
\n\n## License\n\nOriginal code author: Yingyi Liu (\xe5\x8a\x89\xe7\x9b\x88\xe6\xba\xa2), [Google Scholar](https://scholar.google.co.jp/citations?hl=ja&user=mpR3MvAAAAAJ&view_op=list_works&sortby=pubdate).\n\nHAMS is free software: you can redistribute it and/or modify it under the terms of the Apache License, Version 2.0, as published by the Apache Software Foundation.\n\nHAMS is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the Apache License for details. You should have received a copy of the Apache License along with HAMS. If not, see http://www.apache.org/licenses/LICENSE-2.0
\n\n\n'",,"2020/10/30, 15:48:46",1090,Apache-2.0,27,205,"2023/09/05, 08:31:05",1,12,31,14,50,0,0.0,0.19387755102040816,,,0,3,false,,false,false,,,,,,,,,,, brightwind,A Python library aims to empower wind resource analysts and establish a common industry standard toolset.,brightwind-dev,https://github.com/brightwind-dev/brightwind.git,github,,Wind Energy,"2023/08/11, 17:00:09",38,1,4,true,Python,brightwind,brightwind-dev,Python,,"b""--------------\n```\n __ _ __ __ _ __\n / /_ _____(_)___ / /_ / /__ __(_)___ ___ / /\n / __ \\/ ___/ / __ \\/ __ \\/ __/ | /| / / / __ \\/ __ /\n / /_/ / / / / /_/ / / / / /_ | |/ |/ / / / / / /_/ /\n /_.___/_/ /_/\\__, /_/ /_/\\__/ |__/|__/_/_/ /_/\\__,_/\n /____/\n ```\n            **A Python library primarily for wind resource assessments.**\n\n--------------\n\n
\n\nBrightwind is a Python library specifically built for wind analysis. It can load wind speed, wind direction and \nother meteorological timeseries data. There are various plots you can use to understand this data and to find any \npotential issues. You can apply many common functions to the data, such as shear and long-term adjustments. The \nresulting adjusted data is then output as a frequency distribution tab file which can be used in wind analysis \nsoftware such as WAsP.\n\nThis library can also be used for solar resource analysis.\n\n
\n\n---\n### Installation\n\nYou can use pip from the command line to install the library.\n\n```\nC:\\Users\\Stephen> pip install brightwind\n```\nIt is advisable to use a separate environment to avoid any dependency clashes with libraries such as pandas, NumPy \nor Matplotlib that you may already have installed.\n\n
\n\nFor those who do not have Python installed and are just getting started, we recommend installing Anaconda. Anaconda is \na Python distribution for scientific computing and so provides everything you need: Python, pip and Jupyter Notebook, \nalong with libraries such as pandas, NumPy and Matplotlib. DataCamp provides a good tutorial for [installing \nAnaconda on Windows](https://www.datacamp.com/tutorial/installing-anaconda-windows) to get started.\n\nOnce Anaconda is installed, you can use the **Anaconda Prompt** to run the above command line `pip install brightwind`, \nor first use **Anaconda Navigator** to create an environment.\n\n---\n### Documentation\n\nDocumentation on how to get set up and use the library can be found at https://brightwind-dev.github.io/brightwind-docs/\n\n
\n\nExample usage of the brightwind library is shown below using Jupyter Notebook. Jupyter Notebook is a powerful way to \nimmediately see the results of code you have written.\n
\n\n

\n\n![demo_image_1](read_me_1.png)\n![demo_image_2](read_me_2.png)\n
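For readers without the notebook to hand, a minimal sketch of the same workflow follows; it assumes the `bw.load_csv` loader, the bundled `bw.demo_datasets.demo_data` file and the `Spd80mN`/`Spd60mN` demo column names behave as in the current documentation:

```python
import brightwind as bw

# Load the demo timeseries bundled with the library
data = bw.load_csv(bw.demo_datasets.demo_data)

# Inspect two anemometer channels for obvious issues
bw.plot_timeseries(data[['Spd80mN', 'Spd60mN']])

# Summarise the same channels as monthly means
bw.monthly_means(data[['Spd80mN', 'Spd60mN']])
```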

\n\n\n\n\n
\n\n##### Features\nThe library provides wind analysts with easy-to-use tools for working with\nmeteorological data. It supports loading of meteorological data, averaging,\nfiltering, plotting, correlations, shear analysis, long-term adjustments, etc.\nThe library can then export the resulting long-term-adjusted tab file to be used in\nother wind analysis software.\n\n
\n\n##### Benefits\nThe key benefit of an open-source library is that it provides complete transparency\nand traceability. Anyone in the industry can review any part of the code and suggest changes,\nthus creating a standardised, validated toolkit for the industry.\n\nBy default, during an assessment every manipulation or adjustment made to the wind data is\ncontained in a single file. This can easily be reviewed and checked by internal reviewers or,\nas the underlying code is open source, there is no reason why this file cannot be sent to\nthird parties for review, thus increasing the effectiveness of a bank's due diligence.\n\n
\n\n##### License\nThe library is licensed under the MIT license.\n\n
\n\n---\n### Test datasets\nA test dataset is included in this repository and is used to demonstrate and test the functions in the code. \nOther files and datasets are also included to complement this demo dataset. These are outlined below:\n\n
\n\n| Dataset | Source | Notes |\n|:--------------------- |:-------------|:-----|\n| demo_data.csv | BrightWind | A modified 2 year met mast dataset in csv and Campbell Scientific format. |\n| MERRA-2_XX_2000-01-01_2017-06-30.csv | NASA [GES DISC](https://disc.gsfc.nasa.gov/) | 4 x MERRA-2 18-yr datasets to complement the demo data for long term analyses. |\n| demo_cleaning_file.csv | BrightWind | A file containing information on what periods to clean out from the demo data. |\n| windographer_flagging_log.txt | BrightWind | The same cleaning info as found in 'demo_cleaning_file.csv' formatted as a Windographer flagging file. |\n| demo_data_iea43_wra_data_model.json | BrightWind | A JSON file formatted according to the IEA Wind Task 43 [WRA Data Model](https://github.com/IEA-Task-43/digital_wra_data_standard) standard which describes the mast configuration for the demo data. |\n\n
\n\n---\n### Contributing\nIf you wish to be involved or find out more please contact stephen@brightwindanalysis.com.\n\nMore information can be found in the [contributing.md](https://github.com/brightwind-dev/brightwind/blob/master/contributing.md) section of the website.\n\n
\n""",,"2018/12/11, 15:49:26",1779,MIT,240,1547,"2023/10/10, 08:14:29",70,179,349,61,15,1,0.2,0.6412273800157356,"2023/06/01, 14:29:54",v2.1.0,0,10,false,,false,true,narest-qa/repo50,,https://github.com/brightwind-dev,,,,,https://avatars.githubusercontent.com/u/45794645?v=4,,, NRWAL,A library of offshore wind cost equations.,NREL,https://github.com/NREL/NRWAL.git,github,,Wind Energy,"2023/09/13, 18:08:28",15,3,5,true,Python,National Renewable Energy Laboratory,NREL,"Python,Shell",https://nrel.github.io/NRWAL/,"b""*****************\nWelcome to NRWAL!\n*****************\n\n.. image:: https://github.com/NREL/NRWAL/workflows/Documentation/badge.svg\n :target: https://nrel.github.io/NRWAL/\n\n.. image:: https://github.com/NREL/NRWAL/workflows/Pytests/badge.svg\n :target: https://github.com/NREL/NRWAL/actions?query=workflow%3A%22Pytests%22\n\n.. image:: https://github.com/NREL/NRWAL/workflows/Lint%20Code%20Base/badge.svg\n :target: https://github.com/NREL/NRWAL/actions?query=workflow%3A%22Lint+Code+Base%22\n\n.. image:: https://img.shields.io/pypi/pyversions/NREL-NRWAL.svg\n :target: https://pypi.org/project/NREL-NRWAL/\n\n.. image:: https://badge.fury.io/py/NREL-NRWAL.svg\n :target: https://badge.fury.io/py/NREL-NRWAL\n\n.. image:: https://anaconda.org/nrel/nrel-NRWAL/badges/version.svg\n :target: https://anaconda.org/nrel/nrel-NRWAL\n\n.. image:: https://anaconda.org/nrel/nrel-NRWAL/badges/license.svg\n :target: https://anaconda.org/nrel/nrel-NRWAL\n\n.. image:: https://codecov.io/gh/nrel/NRWAL/branch/main/graph/badge.svg?token=NB29X039VU\n :target: https://codecov.io/gh/nrel/NRWAL\n\n.. image:: https://zenodo.org/badge/319377095.svg\n :target: https://zenodo.org/badge/latestdoi/319377095\n\n.. image:: https://mybinder.org/badge_logo.svg\n :target: https://mybinder.org/v2/gh/NREL/NRWAL/HEAD\n\n\n.. inclusion-intro\n\nThe National Renewable Energy Laboratory Wind Analysis Library (NRWAL):\n\n#. A library of offshore wind cost equations (plus new energy technologies like marine hydro!)\n#. Easy equation manipulation without editing source code\n#. Full continental-scale integration with the NREL Renewable Energy Potential Model (reV)\n#. Ready-to-use configs for basic users\n#. Dynamic python tools for intuitive equation handling\n#. One seriously badass sea unicorn\n\nTo get started with NRWAL, check out the `NRWAL Config documentation `_ or the `NRWAL example notebook `_. You can also launch the notebook in an interactive jupyter shell right in your browser without any downloads or software using `binder `_. \n\nReady to build a model with NRWAL but don't want to contribute to the library? No problem! Check out the example getting started project `here `_.\n\nHere is the important stuff:\n\n - `The NRWAL Equation Library `_.\n - `Default NRWAL Configs `_.\n\nInstalling NRWAL\n================\n\nOption 1: Install from PIP or Conda (recommended for analysts):\n---------------------------------------------------------------\n\n1. Create a new environment:\n ``conda create --name nrwal``\n\n2. Activate directory:\n ``conda activate nrwal``\n\n3. Install reVX:\n 1) ``pip install NREL-NRWAL`` or\n 2) ``conda install nrel-nrwal --channel=nrel``\n\nOption 2: Clone repo (recommended for developers)\n-------------------------------------------------\n\n1. from home dir, ``git clone https://github.com/NREL/NRWAL.git``\n 1) enter github username\n 2) enter github password\n\n2. 
Create ``NRWAL`` environment and install package\n 1) Create a conda env: ``conda create -n nrwal``\n 2) Run the command: ``conda activate nrwal``\n 3) cd into the repo cloned in 1.\n 4) prior to running ``pip`` below, make sure the branch is correct (install\n from master!)\n 5) Install ``NRWAL`` and its dependencies by running:\n ``pip install .`` (or ``pip install -e .`` if running a dev branch\n or working on the source code)\n\nNRWAL Variables for Offshore Wind (OSW)\n=======================================\n\n.. list-table:: NRWAL Inputs\n :widths: auto\n :header-rows: 1\n\n * - Variable Name\n - Long Name\n - Source\n - Units\n * - `aeff`\n - Array Efficiency\n - `array_efficiency` input layer, computed from ORBIT\n - `%`\n * - `capex_multi`\n - CAPEX Multiplier\n - Supplied by user\n - unit-less\n * - `depth`\n - Water depth (positive values)\n - `bathymetry` input layer\n - m\n * - `dist_a_to_s`\n - Distance from assembly area to site\n - Computed from `assembly_area` input layer\n - km\n * - `dist_op_to_s`\n - Distance from operating port to site\n - `ports_operations` input layer\n - km\n * - `dist_p_to_a`\n - Distance from port (construction no-limit) to assembly area\n - `assembly_area` input layer\n - km\n * - `dist_p_to_s`\n - Distance from construction port to site\n - `ports_construction` input layer\n - km\n * - `dist_p_to_s_nolimit`\n - Distance from no-limit construction port to site\n - `ports_construction_nolimit` input layer\n - km\n * - `dist_s_to_l`\n - Distance site to nearest land\n - `dist_to_coast` input layer\n - km\n * - `fixed_downtime`\n - Average weather downtime for fixed structure turbines\n - `weather_downtime_fixed_bottom` input layer\n - fraction\n * - `floating_downtime`\n - Average weather downtime for floating structure turbines\n - `weather_downtime_floating` input layer\n - fraction\n * - `gcf`\n - Gross capacity factor\n - Computed by reV / SAM with losses == 0\n - unit-less\n * - `hs_average`\n - Significant wave height to determine weather downtime\n - `weather_downtime_mean_wave_height_buoy` input layer\n - m\n * - `num_turbines`\n - Number of turbines in array\n - Supplied by user\n - unit-less\n * - `transmission_multi`\n - Tranmission cost multiplier\n - Supplied by user\n - unit-less\n * - `turbine_capacity`\n - Capacity of each turbine in the array\n - Supplied by user\n - MW\n\nRecommended Citation\n====================\n\nIf using the NRWAL software (replace with current version and DOI):\n\n - Grant Buster, Jake Nunemaker, and Michael Rossol. The National Renewable Energy Laboratory Wind Analysis Library (NRWAL). https://github.com/NREL/NRWAL (version v0.0.2), 2021. https://doi.org/10.5281/zenodo.4705961.\n\nIf using the Offshore Wind (OSW) cost equations:\n\n - Beiter, Philipp, Walter Musial, Aaron Smith, Levi Kilcher, Rick Damiani, Michael Maness, Senu Sirnivas, Tyler Stehly, Vahan Gevorgian, Meghan Mooney, and George Scott. \xe2\x80\x9cA Spatial-Economic Cost-Reduction Pathway Analysis for U.S. Offshore Wind Energy Development from 2015\xe2\x80\x932030.\xe2\x80\x9d National Renewable Energy Lab. (NREL), Golden, CO (United States), September 1, 2016. https://doi.org/10.2172/1324526. https://www.nrel.gov/docs/fy16osti/66579.pdf.\n\nIf using the marine energy reference model (RM) cost models:\n\n - https://energy.sandia.gov/programs/renewable-energy/water-power/projects/reference-model-project-rmp/\n - Jenne, D. S., Y. H. Yu, and V. Neary. 
\xe2\x80\x9cLevelized Cost of Energy Analysis of Marine and Hydrokinetic Reference Models: Preprint.\xe2\x80\x9d National Renewable Energy Lab. (NREL), Golden, CO (United States), April 24, 2015. https://www.osti.gov/biblio/1215196-levelized-cost-energy-analysis-marine-hydrokinetic-reference-models-preprint.\n""",",https://zenodo.org/badge/latestdoi/319377095\n\n,https://doi.org/10.5281/zenodo.4705961.\n\nIf,https://doi.org/10.2172/1324526","2020/12/07, 16:23:18",1052,BSD-3-Clause,39,303,"2023/09/13, 18:08:29",0,44,47,11,42,0,0.6,0.4380952380952381,"2022/11/17, 22:51:23",v0.0.11,0,7,false,,false,false,"pswild/king_pine,Eric-Musa/EnergyCapability,NREL/reV",,https://github.com/NREL,http://www.nrel.gov,"Golden, CO",,,https://avatars.githubusercontent.com/u/1906800?v=4,,, welib,"Wind energy library, python and matlab tools for wind turbines analyses.",ebranlard,https://github.com/ebranlard/welib.git,github,,Wind Energy,"2023/06/16, 18:17:09",48,0,20,true,Python,,,"Python,MATLAB,Fortran,Jupyter Notebook,F*,Mathematica,Makefile,M",,"b'\xef\xbb\xbf[![Build status](https://github.com/ebranlard/welib/workflows/Tests/badge.svg)](https://github.com/ebranlard/welib/actions?query=workflow%3A%22Tests%22)\r\n\r\n\r\n# Wind Energy Library - welib\r\n\r\nWind energy library: suite of python and matlab tools for aero-servo-hydro-elasticity (aerodynanmics, controls, hydrodynamics, structure/elasticity) and wind energy.\r\n\r\n# Installation and testing\r\nInstalling the latest release:\r\n```bash\r\npip install --upgrade welib\r\n```\r\nInstalling the latest dev version and running the unittests:\r\n```bash\r\ngit clone http://github.com/ebranlard/welib -b dev\r\ncd welib\r\npython -m pip install -r requirements.txt\r\npython -m pip install -e .\r\npytest\r\n```\r\n\r\n# Gallery of example scripts\r\n\r\nA sample of the figures generated by the examples in this repository are given below.\r\nAdditional examples can be found in the `examples` and `tests` folders of the different subpackages.\r\n\r\nClick on the links to access the corresponding scripts. 
\r\nClick on the figures to enlarge the figures.\r\n\r\n| | | | | |\r\n| :-------------------------: | :-------------------------: | :-------------------------: | :-------------------------: | :-------------------------: |\r\n| [Airfoils - 3D correction](/welib/airfoils/examples/correction3D.py) | [Airfoils - MGH dynamic stall model](/welib/airfoils/examples/dynamic_stall_mhh.py) | [Airfoils - Oye dynamic stall model](/welib/airfoils/examples/dynamic_stall_oye.py) | [Airfoils - Wagner function](/welib/airfoils/examples/wagner.py) | [Beam - Analytical mode shapes of a beam](/welib/beams/examples/Ex1_BeamModes.py) |\r\n| ![Airfoils - 3D correction](/../figs/_figs/Airfoils-3DCorrection.png) | ![Airfoils - MGH dynamic stall model](/../figs/_figs/Airfoils-MGHDynamicStallModel.png) | ![Airfoils - Oye dynamic stall model](/../figs/_figs/Airfoils-OyeDynamicStallModel.png) | ![Airfoils - Wagner function](/../figs/_figs/Airfoils-WagnerFunction.png) | ![Beam - Analytical mode shapes of a beam](/../figs/_figs/Beam-AnalyticalModeShapesOfABeam.png) |\r\n| [Beam - Analytical mode shapes different BC](/welib/beams/examples/Ex2_BeamModesAllBC.py) | [BEM Steady - High thrust correction](/welib/BEM/examples/Example_AxialInduction.py) | [BEM Steady - Performance curve](/welib/BEM/examples/Example_BEM_2.py) | [BEM Steady - CP-lambda-pitch ](/welib/BEM/examples/Example_BEM_CPLambdaPitch.py) | [BEM Theory - Ideal rotor planform](/welib/BEM/examples/Example_IdealRotor.py) |\r\n| ![Beam - Analytical mode shapes different BC](/../figs/_figs/Beam-AnalyticalModeShapesDifferentBC.png) | ![BEM Steady - High thrust correction](/../figs/_figs/BEMSteady-HighThrustCorrection.png) | ![BEM Steady - Performance curve](/../figs/_figs/BEMSteady-PerformanceCurve.png) | ![BEM Steady - CP-lambda-pitch ](/../figs/_figs/BEMSteady-CP-lambda-pitch.png) | ![BEM Theory - Ideal rotor planform](/../figs/_figs/BEMTheory-IdealRotorPlanform.png) |\r\n| [BEM Unsteady - Prescribed surge motion](/welib/BEM/examples/Example_UnsteadyBEM_2_PrescribedMotion.py) | [Dynamic Inflow (Oye) - induction step](/welib/dyninflow/examples/Ex1_StepUp.py) | [FAST - interpolate radial time series](/welib/fast/examples/Example_RadialInterp.py) | [FAST - Average radial outputs](/welib/fast/examples/Example_RadialPostPro.py) | [FEM - mode shapes of tower](/welib/FEM/examples/Beam_ModeShapes_Tower.py) |\r\n| ![BEM Unsteady - Prescribed surge motion](/../figs/_figs/BEMUnsteady-PrescribedSurgeMotion.png) | ![Dynamic Inflow (Oye) - induction step](/../figs/_figs/DynamicInflow(Oye)-InductionStep.png) | ![FAST - interpolate radial time series](/../figs/_figs/FAST-InterpolateRadialTimeSeries.png) | ![FAST - Average radial outputs](/../figs/_figs/FAST-AverageRadialOutputs.png) | ![FEM - mode shapes of tower](/../figs/_figs/FEM-ModeShapesOfTower.png) |\r\n| [FEM - mode shapes of a beam](/welib/FEM/examples/Beam_ModeShapes_UniformBeamFrame3d.py) | [Hydro - Wave kinematics](/welib/hydro/examples/Ex1_WaveKinematics.py) | [Hydro - Jonswap spectrum](/welib/hydro/examples/Ex2_Jonswap_spectrum.py) | [Hydro - wave generation](/welib/hydro/examples/Ex3_WaveTimeSeries.py) | [Hydro - Morison loads on monopile](/welib/hydro/examples/Ex4_WaveLoads.py) |\r\n| ![FEM - mode shapes of a beam](/../figs/_figs/FEM-ModeShapesOfABeam.png) | ![Hydro - Wave kinematics](/../figs/_figs/Hydro-WaveKinematics.png) | ![Hydro - Jonswap spectrum](/../figs/_figs/Hydro-JonswapSpectrum.png) | ![Hydro - wave generation](/../figs/_figs/Hydro-WaveGeneration.png) | ![Hydro - Morison loads on 
monopile](/../figs/_figs/Hydro-MorisonLoadsOnMonopile.png) |\r\n| [Plot - 3D blades](/welib/plot/examples/Plot_3D_blades.py) | [IEC Standards - Turbulence classes](/welib/standards/examples/Ex1_TurbulenceClasses.py) | [IEC Standards - Extreme operating gusts](/welib/standards/examples/Ex2_EOG.py) | [System - Lorenz attractor](/welib/system/examples/Lorenz.py) | [System - 2nd order - forced vibrations](/welib/system/examples/MassSpringDamper_ForcedVibrations.py) |\r\n| ![Plot - 3D blades](/../figs/_figs/Plot-3DBlades.png) | ![IEC Standards - Turbulence classes](/../figs/_figs/IECStandards-TurbulenceClasses.png) | ![IEC Standards - Extreme operating gusts](/../figs/_figs/IECStandards-ExtremeOperatingGusts.png) | ![System - Lorenz attractor](/../figs/_figs/System-LorenzAttractor.png) | ![System - 2nd order - forced vibrations](/../figs/_figs/System-2ndOrder-ForcedVibrations.png) |\r\n| [System - LTI Bode plot - 2nd order mass spring damper](/welib/system/examples/MassSpringDamper_StateSpace_FreqDomain.py) | [System - 3D pendulum - motion](/welib/system/examples/pendulum_3d.py) | [System - 2nd order - Responses](/welib/system/examples/StepResponse.py) | [Signal - Correlation coefficient](/welib/tools/examples/ExampleCorrelation.py) | [Signal - FFT](/welib/tools/examples/Example_FFT.py) |\r\n| ![System - LTI Bode plot - 2nd order mass spring damper](/../figs/_figs/System-LTIBodePlot-2ndOrderMassSpringDamper.png) | ![System - 3D pendulum - motion](/../figs/_figs/System-3DPendulum-Motion.png) | ![System - 2nd order - Responses](/../figs/_figs/System-2ndOrder-Responses.png) | ![Signal - Correlation coefficient](/../figs/_figs/Signal-CorrelationCoefficient.png) | ![Signal - FFT](/../figs/_figs/Signal-FFT.png) |\r\n| [Vortilib - Elliptical Coordinates](/welib/vortilib/elements/examples/EllipticalCoordinates.py) | [Vortilib - Inviscid Vorticity Patch](/welib/vortilib/elements/examples/InviscidVortexPatch.py) | [Vortilib - Flow about an Ellipsoid](/welib/vortilib/elements/examples/SourceEllipsoid_Plots.py) | [Vortilib - Vortex helix lifting line velocity](/welib/vortilib/elements/examples/VortexHelix.py) | [Vortilib - Vortex particle regularization](/welib/vortilib/elements/examples/VortexParticle_Regularization.py) |\r\n| ![Vortilib - Elliptical Coordinates](/../figs/_figs/Vortilib-EllipticalCoordinates.png) | ![Vortilib - Inviscid Vorticity Patch](/../figs/_figs/Vortilib-InviscidVorticityPatch.png) | ![Vortilib - Flow about an Ellipsoid](/../figs/_figs/Vortilib-FlowAboutAnEllipsoid.png) | ![Vortilib - Vortex helix lifting line velocity](/../figs/_figs/Vortilib-VortexHelixLiftingLineVelocity.png) | ![Vortilib - Vortex particle regularization](/../figs/_figs/Vortilib-VortexParticleRegularization.png) |\r\n| [Vortilib - 2D vorticity patch discretized with vortex points](/welib/vortilib/elements/examples/VortexPoint2DDistribution.py) | [Vortilib - Vortex segment regularization](/welib/vortilib/elements/examples/VortexSegment_Regularization.py) | [Vortilib - Flow about an axisymmetric vorticity surface](/welib/vortilib/elements/examples/VortexSurfaceFlowField.py) | [Wind - wind generation at point](/welib/wind/examples/WindGenerationAtPoint.py) | [WT Theory - Wake Expansion Models](/welib/wt_theory/examples/WakeExpansion.py) |\r\n| ![Vortilib - 2D vorticity patch discretized with vortex points](/../figs/_figs/Vortilib-2DVorticityPatchDiscretizedWithVortexPoints.png) | ![Vortilib - Vortex segment regularization](/../figs/_figs/Vortilib-VortexSegmentRegularization.png) | ![Vortilib - Flow about an 
axisymmetric vorticity surface](/../figs/_figs/Vortilib-FlowAboutAnAxisymmetricVorticitySurface.png) | ![Wind - wind generation at point](/../figs/_figs/Wind-WindGenerationAtPoint.png) | ![WT Theory - Wake Expansion Models](/../figs/_figs/WTTheory-WakeExpansionModels.png) |\r\n| [WT Theory - Induced velocity vs Wake length](/welib/wt_theory/examples/WakeLengthInducedVelocity.py) | [PartDyn - Gravitational and spring interactions](/welib/yams/partdyn/examples/ThreePart_Gravitation.py) | [PartDyn - Gravitational interaction - Moon Orbit](/welib/yams/partdyn/examples/TwoPart_Orbits.py) | | |\r\n| ![WT Theory - Induced velocity vs Wake length](/../figs/_figs/WTTheory-InducedVelocityVsWakeLength.png) | ![PartDyn - Gravitational and spring interactions](/../figs/_figs/PartDyn-GravitationalAndSpringInteractions.png) | ![PartDyn - Gravitational interaction - Moon Orbit](/../figs/_figs/PartDyn-GravitationalInteraction-MoonOrbit.png) | | |\r\n\r\n# Examples of application\r\n\r\n\r\nYou can have a look at the example gallery below for direct links to examples and associated plots.\r\n\r\n- Aerodynamic applications (package `airfoils`, `BEM`):\r\n - Manipulation of airfoil curves, find slopes, interpolate (see [airfoils](welib/airfoils/examples/))\r\n - Run different dynamic stall models (e.g Oye or MHH/HGM model) (see [airfoils/DS](welib/airfoils/examples/))\r\n\r\n- Hydrodynamics applications (package `hydro`):\r\n - Wave kinematics for linear waves (see [hydro/Ex1](welib/hydro/examples/Ex1_WaveKinematics.py))\r\n - Generation of wave time series from a given spectrum (see [hydro/Ex3](welib/hydro/examples/Ex3_WaveTimeSeries.py))\r\n - Computation of wave loads on a monopile (see [hydro/Ex4](welib/hydro/examples/Ex4_WaveLoads.py))\r\n\r\n- Structural dynamics and system dynamics applications (packages `FEM`, `system`, `yams`):\r\n - Setup the equation of motions for a multibody system with flexible members analytically or numerically (see [yams](welib/yams/tests))\r\n - Linearize a non-linear system defined by a state and output equation (implicit or explicit) (see [system](welib/system/tests))\r\n - Perform 2d/3d FEM analyses using beam/frame elements (see [FEM](welib/FEM/examples))\r\n - Craig-Bampton / Guyan reduction of a structure (see [FEM](welib/FEM/examples))\r\n - Perform time integration of mechanical systems (see [system](welib/system/examples))\r\n\r\n- Controls applications (packages `ctrl`, `kalman`):\r\n - Run a kalman filter to estimate states of a system (see [kalman](welib/kalman/))\r\n\r\n- Wind energy applications:\r\n - Run steady state BEM simulations (see [BEM/steady 1-2](welib/BEM/examples)\r\n - Run unsteady BEM simulations (see [BEM/unsteady 1-2](welib/BEM/examples/)\r\n - Read and write common wind energy file formats (see [weio](welib/weio), a clone of [weio](http://github.com/ebranlard/weio/))\r\n - Generate stochastic wind and [waves](welib/hydro/examples/Ex3_WaveTimeSeries.py) times series\r\n - Estimate wind speed (see \'welib\\ws\\_estimator`))\r\n - Theory of optimal circulation\r\n - Standards\r\n\r\n- Other (packages `tools`, `ode`):\r\n - Spectral analyses, signal processing, time integration, vector analyses\r\n\r\nSee also:\r\n\r\n- [pyDatView](http://github.com/ebranlard/pyDatView/): GUI to visualize files (supported by weio) and perform analyses on the data\r\n\r\n\r\n\r\n\r\n# Libraries\r\n\r\nThe repository contains a set of small packages, for aerodynamics, structure, control and more:\r\n\r\n- airfoils: polar manipulations, dynamic stall models\r\n- 
beams: analytical results for beams\r\n- BEM: steady and unsteady bem code\r\n- ctrl: control tools\r\n- dyninflow: dynamic inflow models\r\n- fastlib: tools to handle OpenFAST models (run simulations, postprocess, linear model)\r\n- FEM: Finite Element Method tools (beams)\r\n- hydro: hydrodynamic tools\r\n- kalman: kalman filter\r\n- mesh: meshing tools\r\n- ode: tools for time integration of ODE\r\n- standards: some formulae and scripts useful for the IEC standards\r\n- system: tools for dynamic systems (e.g. LTI, state space) and mechanical systems (M,C,K matrices), eigenvalue analysis, time integration\r\n- tools: mathematical tools, signal processing\r\n- weio: library to read and write files used in wind energy, clone of [weio](http://github.com/ebranlard/weio/) \r\n- wt\\_theory: scripts implementing some wind turbine aerodynamic theory \r\n- ws\\_estimator: wind speed estimator for wind energy based on tabulated Cp Ct\r\n- yams: multibody analyses\r\n\r\n\r\n# References and how to cite\r\nIf you find some of this repository useful and use it in your research, thank you for using the following citations.\r\n\r\n - General wind turbine scripts and aerodynamics:\r\n```bibtex\r\n@book{Branlard:book,\r\n author = {E. Branlard},\r\n title = {Wind Turbine Aerodynamics and Vorticity-Based Methods: Fundamentals and Recent Applications},\r\n year = {2017},\r\n publisher= {Springer International Publishing},\r\n doi={10.1007/978-3-319-55164-7},\r\n isbn={ 978-3-319-55163-0}\r\n}\r\n```\r\n - Structural dynamics:\r\n```bibtex\r\n@article{Branlard:2019,\r\n title = {{Flexible multibody dynamics using joint coordinates and the Rayleigh-Ritz approximation: The general framework behind and beyond Flex}},\r\n author = {E. Branlard},\r\n journal = {Wind Energy},\r\n volume = {22},\r\n number = {7},\r\n pages = {877-893},\r\n year = {2019},\r\n doi = {10.1002/we.2327}\r\n}\r\n```\r\n\r\n\r\n\r\n\r\n\r\n# Contributing\r\nAny contributions to this project are welcome! If you find this project useful, you can also buy me a coffee (donate a small amount) with the link below:\r\n\r\n\r\n\r\n'",,"2018/10/17, 22:47:46",1833,MIT,119,497,"2023/10/06, 17:42:40",0,5,6,5,19,0,0.0,0.0,,,0,1,false,,false,false,,,,,,,,,,, digital_wra_data_standard,This standard data model and associated tools are intended as universal building blocks for Wind Energy Resource Assessment applications.,IEA-Task-43,https://github.com/IEA-Task-43/digital_wra_data_standard.git,github,,Wind Energy,"2023/10/20, 15:04:58",48,0,14,true,Jupyter Notebook,IEA Wind Digitalization (Task 43),IEA-Task-43,"Jupyter Notebook,TypeScript,JavaScript,HTML,Python,CSS",,"b'\n\n\n[![Mentioned in Awesome Wind](https://awesome.re/mentioned-badge.svg)](https://github.com/IEA-Task-43/awesome-wind)\n\n# Digital WRA Data Standards\n\nThis repository is for the work, carried out by IEA Wind Task 43, on standardizing digital tools in wind resource \nassessment (WRA). It currently consists of data models for wind resource measurements (_WRA Data Model_) and \ndigital calibration certificates. Along with associated tools, these are intended as universal building blocks for \nwind energy yield assessment applications.\n\n## Mission\n_**""Our mission is to make the energy yield assessment process more efficient, transparent and reproducible \nthrough digitizing and automation.""**_\n\n## What is a ""Data Model""?\nA data model is an abstract model of real-world entities that organizes elements of data and standardizes how they \nrelate to one another. 
In this instance, the _WRA Data Model_ standardizes how properties of a wind resource measurement station (e.g. \nlatitude, longitude, anemometer serial number, installation height, logger slope, logger offset, etc.) are recorded and \nhow all the properties relate to each other. An implementation of the data model therefore describes how a specific met mast was \ninstalled, how the sensors were mounted on that met mast, how these sensors were programmed into the logger and how \nall these properties may have changed over time.\n\nFor more information on the definition of a Data Model and other terminology such as Schema, please see the \n[Task 43 Glossary](https://iea-task-43.gitbook.io/iea-task-43-glossary/terms/data-model).\n\n
\n\n# The WRA Data Model\n\nThe _WRA Data Model_ provides the instructions for how to digitally represent the configuration of an installed met mast, lidar, sodar, \nfloating lidar or solar measurement station. An implementation of the data model can therefore record the latitude and \nlongitude of where the met mast was installed, at what height a wind speed measurement was made and by what \nsensor, how this sensor was mounted onto the mast, how it was connected to the logger and how this logger channel \nwas programmed (i.e. what slope and offset values were programmed into the logger), and how all these properties \nmay have changed over time.\n\nTo learn more about the _WRA Data Model_, please read Amit Bohara\'s \n[introduction](https://github.com/IEA-Task-43/digital_wra_data_standard/wiki/Task-43-WRA-Data-Model---An-introduction)\nin the Wiki section of this GitHub repository.\n\nJSON ([JavaScript Object Notation](https://www.json.org/)) is used to implement the _WRA Data Model_ and \n[JSON Schema](https://json-schema.org/) is used to express it. The JSON Schema file, located at \n[./schema/iea43_wra_data_model.schema.json](./schema/iea43_wra_data_model.schema.json), is the \n_WRA Data Model_. It can be thought of as a blueprint for how an implementation of the data model can be described with \nthe JSON data-interchange format. See Figure 1 below for a snippet of an example implementation.\n\n![example_implementation](https://user-images.githubusercontent.com/25622575/211047742-e83ee47b-d756-4e5e-a48f-cfb3d2fa00c6.png)\n
\n_Figure 1: Example implementation of the WRA Data Model._\n\nThe following tools are part of this undertaking:\n\n- [WRA Data Model](./schema/iea43_wra_data_model.schema.json): This JSON Schema file is the _WRA Data Model_. It describes \n how a JSON file that describes wind resource measurement data should be composed.\n\n- [Documentation](./docs/README.md): Markdown documentation for the _WRA Data Model_ created directly from the JSON Schema.\n\n- [Form App](https://iea-task-43.github.io/digital_wra_data_standard/): This app shows a form that is modeled after the \n JSON Schema and can create JSON data out of your inputs that is in accordance with the _WRA Data Model_.\n\n- [Python Data Model Loading Example](./tools/load_demo_schema.ipynb): This notebook shows how to read an example file that \n uses the _WRA Data Model_ with Python. [![Binder](https://mybinder.org/badge_logo.svg)](https://mybinder.org/v2/gh/IEA-Task-43/digital_wra_data_standard/master?filepath=.%2Ftools%2Fload_demo_schema.ipynb)\n\n## Dissemination\nThe Task 43 team is actively working to disseminate information and improve user experience. \nThe [Wiki](https://github.com/IEA-Task-43/digital_wra_data_standard/wiki) tab in this GitHub repository is the starting \npoint to learn more via tutorials, recorded presentations, videos and more.\n\n- [Wiki](https://github.com/IEA-Task-43/digital_wra_data_standard/wiki)\n\n
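Because the data model is expressed as a JSON Schema, conformance of any implementation file can be checked with a generic validator. A minimal sketch using the `jsonschema` package; the raw-file URL is an assumption based on the repository layout described above, and `demo_data_iea43_wra_data_model.json` is the demo file shipped in this repository:

```python
import json
import urllib.request

import jsonschema  # pip install jsonschema

# Raw URL assumed from the ./schema/ path described above
SCHEMA_URL = ("https://raw.githubusercontent.com/IEA-Task-43/"
              "digital_wra_data_standard/master/schema/"
              "iea43_wra_data_model.schema.json")

with urllib.request.urlopen(SCHEMA_URL) as resp:
    schema = json.load(resp)

with open("demo_data_iea43_wra_data_model.json") as f:
    instance = json.load(f)

# Raises jsonschema.ValidationError if the document does not conform
jsonschema.validate(instance=instance, schema=schema)
print("Document conforms to the WRA Data Model schema")
```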
\n\n# The Digital Calibration Certificate Data Model\n\nAs a requirement resulting from the development of the _WRA Data Model_, a \n[digital calibration certificate](./digital_calibration_certificate) is introduced as a complementary standard. This \nstandard currently supports anemometer calibration certificates according to IEC 61400-12-1:2017. \n\n
\n\n# Contributing to the Standard\nWe welcome all contributions including issue reporting, new features and bug fixes. For full details see the contributing \nguidelines and other resources below:\n\n- [Contribution Guidelines](./contributing.md)\n- [Issue Log](https://github.com/IEA-Task-43/digital_wra_data_standard/issues) where you can ask for new features or inform us of any bugs.\n- [Kanban board](https://github.com/IEA-Task-43/digital_wra_data_standard/projects/1) showing what issues are been worked on, completed or yet to do.\n- [Meeting Notes](https://github.com/IEA-Task-43/digital_wra_data_standard/discussions/129#discussion-3748501) of our regular meetings.\n\n# Getting Help\n\nPlease file a new issue in this repository with questions or concerns you might have. If you would like to chat directly with the contributors, please join our Slack channel at [ieawinddigitalization.slack.com](https://ieawinddigitalization.slack.com/).\n\nYou can find out more about the IEA\'s Wind Task 43 working group at [ieawindtask43.org](https://www.ieawindtask43.org/work-package-4-digital-wra).\n\n\n## Pipeline Status\n| Pipeline | Status | Result |\n|:---------|:-------|:-------|\n| Documentation | ![Compile Documentation to Markdown](https://github.com/IEA-Task-43/digital_wra_data_standard/workflows/Compile%20Documentation%20to%20Markdown/badge.svg) | [Documentation](./docs/README.md) |\n| Form App | ![Deploy Form App to GitHub Pages](https://github.com/IEA-Task-43/digital_wra_data_standard/workflows/Deploy%20Form%20App%20to%20GitHub%20Pages/badge.svg) | [Form App](https://iea-task-43.github.io/digital_wra_data_standard/) |\n'",,"2019/12/03, 21:26:02",1421,BSD-3-Clause,80,581,"2023/10/20, 15:12:11",37,96,194,37,5,0,0.6,0.4004576659038902,"2023/01/27, 14:41:09",v1.2.0-2023.01,0,13,false,,false,true,,,https://github.com/IEA-Task-43,www.ieawindtask43.org,,,,https://avatars.githubusercontent.com/u/57503085?v=4,,, awebox,Modelling and optimal control of single- and multiple-kite systems for airborne wind energy.,awebox,https://github.com/awebox/awebox.git,github,,Wind Energy,"2023/08/15, 10:50:59",19,0,2,true,Python,,awebox,"Python,Shell",,"b'# awebox\n\n`awebox` is a Python toolbox for modelling and optimal control of multiple-kite systems for Airborne Wind Energy (AWE). It provides interfaces that aim to take away from the user the burden of\n\n* generating optimization-friendly system dynamics for different combinations of modeling options.\n* formulating optimal control problems for common multi-kite trajectory types.\n* solving the trajectory optimization problem reliably\n* postprocessing and visualizing the solution and performing quality checks \n* tracking MPC design and handling for offline closed-loop simulations\n\nThe main focus of the toolbox are _rigid-wing_, _lift_- and _drag_-mode multiple-kite systems.\n\n## Installation\n\n`awebox` runs on Python 3. It depends heavily on the modeling language CasADi, which is a symbolic framework for algorithmic differentiation. CasADi also provides the interface to the NLP solver IPOPT. \nIt is optional but highly recommended to use HSL linear solvers as a plugin with IPOPT.\n\n1. Get a local copy of the latest `awebox` release:\n\n ```\n git clone https://github.com/awebox/awebox.git\n ```\n\n2. Install using pip\n\n ```\n pip3 install awebox/\n ```\n\n3. In order to get the HSL solvers and render them visible to CasADi, follow these [instructions](https://github.com/casadi/casadi/wiki/Obtaining-HSL). 
Additional installation instructions can be found [here](https://github.com/awebox/awebox/blob/develop/INSTALLATION.md).\n\n\n## Getting started\n\nTo run one of the examples from the `awebox` root folder:\n\n```\npython3 examples/single_kite_lift_mode_simple.py\n```\n\n## Acknowledgments\n\nThis software has been developed in collaboration with the company [Kiteswarms Ltd](http://www.kiteswarms.com). The company has also supported the project through research funding.\n\nThis project has received funding from the European Union\xe2\x80\x99s Horizon 2020 research and innovation programme under the Marie Sklodowska-Curie grant agreement No 642682 (_AWESCO_)\n\n## Citing `awebox`\nPlease use the following citation: \n\n""_awebox: Modelling and optimal control of single- and multiple-kite systems for airborne wind energy. https://github.com/awebox/awebox_""\n\n## Literature\n\n### `awebox`-based research\n\n[Optimal Control of Stacked Multi-Kite Systems for Utility-Scale Airborne Wind Energy](https://cdn.syscop.de/publications/DeSchutter2019.pdf) \\\nDe Schutter et al. / IEEE Conference on Decision and Control (CDC) 2019\n\n[Wake Characteristics of Pumping Mode Airborne Wind Energy Systems](https://cdn.syscop.de/publications/Haas2019.pdf) \\\nHaas et al. / Journal of Physics: Conference Series 2019\n\n[Operational Regions of a Multi-Kite AWE System](https://cdn.syscop.de/publications/Leuthold2018.pdf) \\\nLeuthold et al. / European Control Conference (ECC) 2018\n\n[Optimal Control for Multi-Kite Emergency Trajectories](https://cdn.syscop.de/publications/Bronnenmeyer2018.pdf) \\\nBronnenmeyer (Masters thesis) / University of Stuttgart 2018\n\n### Models\n\n**Induction models**\\\n[Engineering Wake Induction Model For Axisymmetric Multi-Kite Systems](https://www.researchgate.net/publication/334616920_Engineering_Wake_Induction_Model_For_Axisymmetric_Multi-Kite_Systems) \\\nLeuthold et al. / Wake Conference 2019\n\n**Point-mass model**\\\n[Airborne Wind Energy Based on Dual Airfoils](https://cdn.syscop.de/publications/Zanon2013a.pdf) \\\nZanon et al. / IEEE Transactions on Control Systems Technology 2013\n\n### Methods\n\n**Homotopy strategy** \\\n[A Relaxation Strategy for the Optimization of Airborne Wind Energy Systems](https://cdn.syscop.de/publications/Gros2013a.pdf) \\\nGros et al. / European Control Conference (ECC) 2013\n\n**Trajectory optimization** \\\n[Numerical Trajectory Optimization for Airborne Wind Energy Systems Described by High Fidelity Aircraft Models](https://cdn.syscop.de/publications/Horn2013.pdf) \\\nHorn et al. / Airborne Wind Energy 2013\n\n### Software\n\n**IPOPT**\\\n[On the Implementation of a Primal-Dual Interior Point Filter Line Search Algorithm for Large-Scale Nonlinear Programming](http://cepac.cheme.cmu.edu/pasilectures/biegler/ipopt.pdf) \\\nW\xc3\xa4chter et al. / Mathematical Programming 106 (2006) 25-57\n\n**CasADi**\\\n[CasADi - A software framework for nonlinear optimization and optimal control](https://cdn.syscop.de/publications/Andersson2018.pdf) \\\nAndersson et al. 
/ Mathematical Programming Computation 2018\n'",,"2019/04/26, 09:12:42",1643,LGPL-3.0,11,1105,"2023/08/15, 10:50:59",17,96,114,6,71,9,0.0,0.37368421052631584,,,0,5,false,,false,true,,,https://github.com/awebox,,,,,https://avatars.githubusercontent.com/u/49553965?v=4,,, CCBlade.jl,A blade element momentum method for propellers and turbines.,byuflowlab,https://github.com/byuflowlab/CCBlade.jl.git,github,"rotor,propeller,wind-turbine,bem,bemt,aerodynamics,aircraft,rotorcraft",Wind Energy,"2023/08/07, 23:03:20",44,0,12,true,Python,BYU FLOW Lab,byuflowlab,"Python,Julia",,"b'# CCBlade.jl\n\n[![](https://img.shields.io/badge/docs-stable-blue.svg)](https://flow.byu.edu/CCBlade.jl/stable)\n![](https://github.com/byuflowlab/CCBlade.jl/workflows/Run%20tests/badge.svg)\n\n\n**Summary**: A blade element momentum method for propellers and turbines. \n\n**Author**: Andrew Ning\n\n**Features**:\n\n- Methodology is provably convergent (see although multiple improvements have been made since then)\n- Prandtl hub/tip losses (or user-defined losses)\n- Glauert/Buhl empirical region for high thrust turbines\n- Convenience functions for inflow with shear, precone, yaw, tilt, and azimuth\n- Can do airfoil corrections beforehand or on the fly (Mach, Reynolds, rotation)\n- Allows for flow reversals (negative inflow/rotation velocities)\n- Allows for a hover condition (only rotation, no inflow) and rotor locked (no rotation, only inflow)\n- Compatible with AD tools like ForwardDiff\n\n**Installation**:\n\n```julia\n] add CCBlade\n```\n\n**Documentation**:\n\nThe [documentation](https://flow.byu.edu/CCBlade.jl/stable/) contains\n- A quick start tutorial to learn basic usage,\n- Guided examples to address specific or more advanced tasks,\n- A reference describing the API,\n- Theory in full detail.\n\n**Run Unit Tests**:\n\n```julia\npkg> activate .\npkg> test\n```\n\n**Citing**:\n\nNing, A., \xe2\x80\x9cUsing Blade Element Momentum Methods with Gradient-Based Design Optimization,\xe2\x80\x9d Structural and Multidisciplinary Optimization, Vol. 64, No. 2, pp. 994\xe2\x80\x931014, May 2021. doi:10.1007/s00158-021-02883-6\n\n**Python / OpenMDAO users**\n\nIn the `openmdao` folder there is a Python wrapper to this package to enable usage from [OpenMDAO](https://openmdao.org). This wrapper was developed/maintained by Daniel Ingraham and Justin Gray at NASA Glenn.\n'",,"2016/05/18, 19:33:38",2716,CUSTOM,16,254,"2023/08/07, 23:26:29",4,7,23,8,78,2,0.14285714285714285,0.3883928571428571,"2023/08/08, 20:08:40",v0.2.5,0,7,false,,false,false,,,https://github.com/byuflowlab,http://flow.byu.edu,"Provo, UT",,,https://avatars.githubusercontent.com/u/10734941?v=4,,, lidarwind,Retrieve wind speed and direction profiles from Doppler lidar observations from the WindCube-200s.,jdiasn,https://github.com/jdiasn/lidarwind.git,github,,Wind Energy,"2023/06/10, 14:07:57",9,0,4,true,Python,,,"Python,TeX,Makefile",,"b""======================\nlidarwind introduction\n======================\n\n.. image:: https://joss.theoj.org/papers/28430a0c6a79e6d1ff33579ff13458f7/status.svg\n :target: https://doi.org/10.21105/joss.04852\n\n.. image:: https://zenodo.org/badge/DOI/10.5281/zenodo.7026548.svg\n :target: https://doi.org/10.5281/zenodo.7026548\n\n.. image:: https://readthedocs.org/projects/lidarwind/badge/?version=latest\n :target: https://lidarwind.readthedocs.io/en/latest/?badge=latest\n :alt: Documentation Status\n\n.. 
image:: https://mybinder.org/badge_logo.svg\n :target: https://mybinder.org/v2/gh/jdiasn/lidarwind/main?labpath=docs%2Fexamples\n\n.. image:: https://img.shields.io/pypi/v/lidarwind.svg\n :target: https://pypi.python.org/pypi/lidarwind/\n\n.. image:: https://codecov.io/gh/jdiasn/lidarwind/branch/main/graph/badge.svg?token=CEZM17YY3I\n :target: https://codecov.io/gh/jdiasn/lidarwind\n\nlidarwind is an open-source Python project to retrieve wind speed and direction profiles from Doppler lidar observations from the WindCube-200s, and it was developed to be easy to use. It can retrieve wind profiles from the 6-beam and DBS scanning strategies and allow users to set the signal-to-noise ratio threshold to reduce the noise. It also calculates the Reynolds stress tensor matrix elements from the 6-beam observations.\n\nlidarwind is a result of an effort to create an environment where it would be flexible and easy to process the observations from the WindCube Doppler lidar. Its development started in 2021 when I had to retrieve wind profiles from the 6-beam observations.\n\nThis current version focuses on the WindCube's observations, and the wind retrievals are dedicated to the 6-beam and DBS observations. However, it can be expanded to other Doppler lidar observations and scanning strategies.\n\n\n-------------\nDocumentation\n-------------\n\nThe lidarwind's documentation is available at https://lidarwind.readthedocs.io. There you can find the steps needed for installing the package. You can also find a short description of how the lidar wind derives the wind speed and direction from WindCube's observations.\n\n\nNotebooks\n=========\n\nAn introductory set of rendered notebooks are available at https://nbviewer.org/github/jdiasn/lidarwind/tree/main/docs/examples/ or at https://github.com/jdiasn/lidarwind/tree/main/docs/examples. If you want to try the package without installing it locally, click on the binder badge above. You will be redirected to a virtual environment where you can also access the same notebooks and test the package.\n\n.. warning::\n\n Beware that between versions 0.1.6 and 0.2.0, the package underwent significant refactoring. Now the classes' names\n follow the Pascal case, while module names, functions and attributes follow the snake case. Codes developed using the previous\n version will need revision.\n \n--------\nCitation\n--------\n\nIf you use lidarwind, or replicate part of it, in your work/package, please consider including the reference:\n\nNeto, J. D. and Castel\xc3\xa3o, G. P., (2023). lidarwind: A Python package for retrieving wind profiles from Doppler lidar observations. Journal of Open Source Software, 8(82), 4852, https://doi.org/10.21105/joss.04852\n\n::\n\n @article{Neto2023,\n doi = {10.21105/joss.04852},\n url = {https://doi.org/10.21105/joss.04852},\n year = {2023}, publisher = {The Open Journal},\n volume = {8}, number = {82}, pages = {4852},\n author = {Jos\xc3\xa9 Dias Neto and Guilherme P. 
Castelao},\n title = {lidarwind: A Python package for retrieving wind profiles from Doppler lidar observations},\n journal = {Journal of Open Source Software}\n }""",",https://doi.org/10.21105/joss.04852\n\n,https://doi.org/10.5281/zenodo.7026548\n\n,https://doi.org/10.21105/joss.04852\n\n::\n\n,https://doi.org/10.21105/joss.04852","2021/04/08, 13:20:14",930,CUSTOM,55,163,"2023/06/16, 13:36:12",2,98,106,62,131,0,0.1,0.10457516339869277,"2023/01/31, 21:42:27",v0.2.2,0,2,false,,false,true,,,,,,,,,,, FLOWUnsteady,An interactional aerodynamics and acoustics solver for multirotor aircraft and wind energy.,byuflowlab,https://github.com/byuflowlab/FLOWUnsteady.git,github,"aerodynamics,aircraft,cfd,vpm,vortex-methods,acoustics,aeroacoustics,computational-fluid-dynamics,rotorcraft,vtol,wind-turbine",Wind Energy,"2023/04/20, 22:07:38",192,0,91,true,Julia,BYU FLOW Lab,byuflowlab,Julia,,"b'\n\n

\n \n *Interactional aerodynamics solver for multirotor aircraft and wind energy*\n \n

\n\n---\n\nFLOWUnsteady is an open-source variable-fidelity framework for unsteady\naerodynamics and aeroacoustics based on the\n[reformulated vortex particle method](https://scholarsarchive.byu.edu/etd/9589/)\n(rVPM).\nThis suite brings together various tools developed by the\n[FLOW Lab](http://flow.byu.edu/) at Brigham Young University: Vortex lattice\nmethod, strip theory, blade elements, 3D panel method, and rVPM.\nThe suite also integrates an FW-H solver and a BPM code for tonal\nand broadband prediction of aeroacoustic noise.\nIn the low end of fidelity, simulations are similar to a free-wake method,\nwhile in the high end simulations become meshless large eddy simulations.\n\n\n* *Documentation:* [flow.byu.edu/FLOWUnsteady](https://flow.byu.edu/FLOWUnsteady)\n* *Code:* [github.com/byuflowlab/FLOWUnsteady](https://github.com/byuflowlab/FLOWUnsteady)\n\n### What is the Reformulated VPM?\n\nThe [reformulated VPM](https://scholarsarchive.byu.edu/etd/9589/) is a meshless\nCFD method solving the LES-filtered incompressible Navier-Stokes equations in\ntheir vorticity form,\n

\n \n
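The governing equation is rendered as an image in the original README; for reference, a standard statement of the LES-filtered vorticity-form equations is sketched below (the exact subfilter-scale notation used by FLOWUnsteady may differ):

```latex
% LES-filtered incompressible Navier-Stokes equations in vorticity form
% (standard notation; \mathbf{T} denotes the subfilter-scale contribution)
\frac{\partial \overline{\boldsymbol{\omega}}}{\partial t}
  + \overline{\mathbf{u}} \cdot \nabla \overline{\boldsymbol{\omega}}
  = \overline{\boldsymbol{\omega}} \cdot \nabla \overline{\mathbf{u}}
  + \nu \nabla^{2} \overline{\boldsymbol{\omega}}
  - \nabla \cdot \mathbf{T}
```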

\nIt uses a Lagrangian (meshless) scheme, which not only\navoids the hurdles of mesh generation but also conserves vortical structures\nover long distances with minimal numerical dissipation.\n\nThe rVPM uses particles to discretize the Navier-Stokes equations, with the\nparticles representing radial basis functions that construct a continuous\nvorticity/velocity field. The basis functions become the LES filter, providing a\nvariable filter width and spatial adaptation as the particles are convected and\nstretched by the velocity field. The local evolution of the filter width\nprovides an extra degree of freedom to reinforce conservation laws, which makes\nthe reformulated VPM numerically stable (overcoming the numerical issues that\nplague the classic VPM).\n\nThis meshless LES has several advantages over conventional mesh-based CFD.\nIn the absence of a mesh,\n1. the rVPM does not suffer from the numerical dissipation introduced by a mesh,\n2. it integrates over coarser discretizations without losing physical accuracy, and\n3. derivatives are calculated analytically rather than approximated through a stencil.\n\nFurthermore, rVPM is highly efficient since it uses computational elements only\nwhere there is vorticity (rather than meshing the entire space), making it 100x\nfaster than conventional mesh-based LES with comparable accuracy.\n\n\nWhile rVPM is well suited for resolving unbounded flows (wakes), complications\narise when attempting to impose boundary conditions (solid boundaries) on the flow.\nThis is because (1) the method is meshless, and (2) boundary conditions must\nbe imposed on the Navier-Stokes equations in the form of vorticity.\nFLOWUnsteady is a framework designed to introduce solid boundaries into the rVPM\nusing actuator models.\nWings and rotors are introduced in the computational domain through actuator\nline and surface models that use low-fidelity aerodynamic methods\n(*e.g.*, VLM, lifting line,\npanels, etc.) to compute forces and embed the associated\nvorticity back into the LES domain.\n\n
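In standard VPM notation (given here for orientation; FLOWUnsteady's own symbols may differ), the particle discretization of the filtered vorticity field described above reads:

```latex
% Vorticity field constructed from particles p with vectorial strength Gamma_p
% and regularized basis (blob) function zeta of smoothing radius sigma_p
\overline{\boldsymbol{\omega}}(\mathbf{x}, t) \approx
  \sum_{p} \boldsymbol{\Gamma}_{p}(t)\,
  \zeta_{\sigma_{p}}\!\left(\mathbf{x} - \mathbf{x}_{p}(t)\right)
```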


\n\n\n

\n\n\n### Variable Fidelity for Preliminary-to-Detailed Design\n\nrVPM considerably reduces engineering time by avoiding the hurdles of mesh\ngeneration. Furthermore, since it is not limited by the time-step and stability\nconstraints of conventional mesh-based CFD, rVPM can be used across all levels\nof fidelity, all in the same framework by simply coarsening or refining the\nsimulation.\nIn the low end of fidelity, simulations are similar to a free-wake method,\nwhile in the high end simulations become meshless large eddy simulations.\nThus, FLOWUnsteady can be used as a high-fidelity tool that is orders of\nmagnitude faster than mesh-based CFD, or as a variable-fidelity tool for\nthe different stages of design.\n\n

\n \n

\n\n### Capabilities\n\n > **Simulation:**\n > *Tilting wings and rotors*\n > *\xe2\x80\xa2 Rotors with variable RPM and variable pitch*\n > *\xe2\x80\xa2 Asymmetric and stacked rotors*\n > *\xe2\x80\xa2 Maneuvering vehicle with prescribed kinematics*\n >\n > **rVPM Solver:**\n > *Fast-multipole acceleration through [ExaFMM](https://joss.theoj.org/papers/10.21105/joss.03145)*\n > *\xe2\x80\xa2 CPU parallelization through OpenMP*\n > *\xe2\x80\xa2 Second-order spatial accuracy and third-order RK time integration*\n > *\xe2\x80\xa2 Numerically stable by reshaping particles subject to vortex stretching*\n > *\xe2\x80\xa2 Subfilter-scale (SFS) model of turbulence associated with vortex stretching*\n > *\xe2\x80\xa2 SFS model coefficient computed dynamically or prescribed*\n > *\xe2\x80\xa2 Viscous diffusion through core spreading*\n >\n > **Wing Models:**\n > *Actuator line model through lifting line + VLM*\n > *\xe2\x80\xa2 Actuator surface model through vortex sheet + VLM*\n > *\xe2\x80\xa2 Parasitic drag through airfoil lookup tables*\n >\n > **Rotor Model:**\n > *Actuator line model through blade elements*\n > *\xe2\x80\xa2 Airfoil lookup tables automatically generated through XFOIL*\n > *\xe2\x80\xa2 Aeroacoustic noise through FW-H (PSU-WOPWOP) and BPM*\n >\n > **Under development (\xf0\x9f\xa4\x9e *coming soon*):**\n > *Advanced actuator surface models through 3D panel method (for ducts, wings, and fuselage)*\n > *\xe2\x80\xa2 Bluff bodies through vortex sheet method*\n >\n > **Limitations:**\n > *Viscous drag and separation are only captured through airfoil lookup tables, without attempting to shed separation wakes*\n > *\xe2\x80\xa2 Incompressible flow only (though wave drag can be captured through airfoil lookup tables)*\n > *\xe2\x80\xa2 CPU parallelization through OpenMP without support for distributed memory (no MPI, i.e., only single-node runs)*\n >\n > *Coded in [the Julia language](https://www.infoworld.com/article/3284380/what-is-julia-a-fresh-approach-to-numerical-computing.html) for Linux, macOS, and Windows WSL.*\n\n\n\n\n\nMore about the models inside FLOWUnsteady:\n

\n \n \n \n

\n\n


\n\n### Selected Publications\nSee the following publications for an in-depth dive into the theory and validation:\n\n* E. J. Alvarez, J. Mehr, & A. Ning (2022), ""FLOWUnsteady: An Interactional Aerodynamics Solver for Multirotor Aircraft and Wind Energy,"" *AIAA AVIATION Forum*. [**[VIDEO]**](https://youtu.be/SFW2X8Lbsdw) [**[PDF]**](https://scholarsarchive.byu.edu/facpub/5830/)\n* E. J. Alvarez & A. Ning (2022), ""Reviving the Vortex Particle Method: A Stable Formulation for Meshless Large Eddy Simulation,"" *(in review)*. [**[PDF]**](https://arxiv.org/pdf/2206.03658.pdf)\n* E. J. Alvarez (2022), ""Reformulated Vortex Particle Method and Meshless Large Eddy Simulation of Multirotor Aircraft.,"" *Doctoral Dissertation, Brigham Young University*. [**[VIDEO]**](https://www.nas.nasa.gov/pubs/ams/2022/08-09-22.html) [**[PDF]**](https://scholarsarchive.byu.edu/etd/9589/)\n\n


\n\n### Examples\n\n**Propeller:** [[Tutorial](https://flow.byu.edu/FLOWUnsteady/examples/propeller-J040)] [[Validation](https://flow.byu.edu/FLOWUnsteady/theory/validation/#Propeller)]\n\n

\n\n\n**Rotor in Hover:** [[Tutorial](https://flow.byu.edu/FLOWUnsteady/examples/rotorhover-aero)] [[Validation](https://flow.byu.edu/FLOWUnsteady/theory/validation/#Rotor)]\n\n

\n\n\n**Blown Wing:** [[Tutorial](https://flow.byu.edu/FLOWUnsteady/examples/blownwing-aero)] [[Validation](https://flow.byu.edu/FLOWUnsteady/theory/validation/#Rotor-Wing-Interactions)]\n\n

\n \n

\n\n


\n\n**Airborne-Wind-Energy Aircraft:** [[Video](https://www.youtube.com/watch?v=iFM3B4_N2Ls)]\n\n

\n \n

\n\n\n**eVTOL Transition:** [[Tutorial](https://flow.byu.edu/FLOWUnsteady/examples/vahana-vehicle/)]\n\nMid-fidelity\n

\n\nHigh-fidelity\n

\n\n\n**Aeroacoustic Noise:** [[Tutorial](https://flow.byu.edu/FLOWUnsteady/examples/rotorhover-acoustics)] [[Validation](https://flow.byu.edu/FLOWUnsteady/theory/validation/#Rotor)]\n\n

\n \n

\n\n

\n\n\n\n\n### Sponsors\n\n

\n \n


\n

\n\n\n\n### About\n\nFLOWUnsteady is an open-source project jointly led by the\n[FLOW Lab](http://flow.byu.edu/) at Brigham Young University and\n[Whisper Aero](http://whisper.aero/).\nAll contributions are welcome.\n\nIf you find FLOWUnsteady useful in your work, we kindly request that you cite the following paper [[URL]](https://arc.aiaa.org/doi/10.2514/6.2022-3218) [[PDF]](https://scholarsarchive.byu.edu/cgi/viewcontent.cgi?article=6735&context=facpub):\n\n>Alvarez, E. J., Mehr, J., and Ning, A., \xe2\x80\x9cFLOWUnsteady: An Interactional Aerodynamics Solver for Multirotor Aircraft and Wind Energy,\xe2\x80\x9d AIAA AVIATION 2022 Forum, Chicago, IL, 2022. DOI:[10.2514/6.2022-3218](https://doi.org/10.2514/6.2022-3218).\n\nIf you were to encounter any issues, please first read through\n[the documentation](https://flow.byu.edu/FLOWUnsteady/) and [open/closed\nissues](https://github.com/byuflowlab/FLOWUnsteady/issues?q=is%3Aissue+is%3Aclosed).\nIf the issue still persists, please\n[open a new issue](https://github.com/byuflowlab/FLOWUnsteady/issues).\n\n * Main developer : Eduardo J. Alvarez ([edoalvarez.com](https://www.edoalvarez.com/))\n * Created : Sep 2017\n * License : MIT License\n'",",https://arxiv.org/pdf/2206.03658.pdf,https://doi.org/10.2514/6.2022-3218","2019/09/30, 19:55:38",1486,MIT,102,767,"2023/10/20, 17:21:04",17,13,60,31,5,6,0.0,0.3287292817679558,"2023/04/20, 22:12:56",v3.2.1,0,8,false,,false,false,,,https://github.com/byuflowlab,http://flow.byu.edu,"Provo, UT",,,https://avatars.githubusercontent.com/u/10734941?v=4,,, stochLAB,A tool to run collision risk models for seabirds on offshore wind farms.,HiDef-Aerial-Surveying,https://github.com/HiDef-Aerial-Surveying/stochLAB.git,github,"collision-risk,offshore-wind,seabirds,migratoryspecies",Wind Energy,"2023/03/10, 09:46:11",6,0,5,true,R,HiDef Aerial Surveying,HiDef-Aerial-Surveying,R,https://hidef-aerial-surveying.github.io/stochLAB/,"b'\n\n\n# stochLAB \n\n\n\n[![test-coverage](https://github.com/HiDef-Aerial-Surveying/stochLAB/actions/workflows/test-coverage.yaml/badge.svg)](https://github.com/HiDef-Aerial-Surveying/stochLAB/actions/workflows/test-coverage.yaml)\n[![pkgdown](https://github.com/HiDef-Aerial-Surveying/stochLAB/actions/workflows/pkgdown.yaml/badge.svg)](https://github.com/HiDef-Aerial-Surveying/stochLAB/actions/workflows/pkgdown.yaml)\n[![R-CMD-check](https://github.com/HiDef-Aerial-Surveying/stochLAB/actions/workflows/R-CMD-check.yaml/badge.svg)](https://github.com/HiDef-Aerial-Surveying/stochLAB/actions/workflows/R-CMD-check.yaml)\n[![pkgcheck](https://github.com/HiDef-Aerial-Surveying/stochLAB/actions/workflows/pkgcheck.yaml/badge.svg)](https://github.com/HiDef-Aerial-Surveying/stochLAB/actions/workflows/pkgcheck.yaml)\n[![Status at rOpenSci Software Peer\nReview](https://badges.ropensci.org/551_status.svg)](https://github.com/ropensci/software-review/issues/551)\n\n\n\n`{stochLAB}` is a tool to run Collision Risk Models (CRMs) for seabirds\non offshore wind farms.\n\n## Overview\n\nThe `{stochLAB}` package is an adaptation of the [R\ncode](https://data.marine.gov.scot/dataset/developing-avian-collision-risk-model-incorporate-variability-and-uncertainty-r-code-0)\ndeveloped by [Masden\n(2015)](https://data.marine.gov.scot/dataset/developing-avian-collision-risk-model-incorporate-variability-and-uncertainty)\nto incorporate variability and uncertainty in the avian collision risk\nmodel originally developed by 
[Band\n(2012)](https://www.bto.org/sites/default/files/u28/downloads/Projects/Final_Report_SOSS02_Band1ModelGuidance.pdf).\nThe package is for use by individuals modelling collision risk of\nseabirds at offshore wind farms. The primary functions take input\ninformation on the morphology, behaviour and densities of seabirds as\nwell as data pertaining to the proposed wind farm (i.e., turbine\ndimensions, speed and number).\n\nThese collision risk models are useful for marine ornithologists who are\nworking in the offshore wind industry, particularly in UK waters.\nHowever, the package itself relies on generic biological and windfarm\ndata and can be applied anywhere (i.e., in any marine environment) as\nlong as the parameters are appropriate for the species and windfarms of\ninterest.\n\nIn developing `{stochLAB}`, Masden’s (heavily script-based)\nimplementation was substantially re-factored and re-structured into a\nuser-friendly, streamlined, well-documented and easily distributed tool.\nFurthermore, the package lays down the code infrastructure for easier\nincorporation of new functionality, e.g. extra parameter sampling\nfeatures, model expansions, etc.\n\nIn addition, previous code underpinning core calculations for the\nextended model has been replaced by an alternative approach, resulting\nin significant gains in computational speed over Masden’s code. This\noptimization is particularly beneficial in a stochastic context, where\ncore calculations are called repeatedly during simulations.\n\nFor a more detailed overview, type `?stochLAB` once installed!\n\n## Installation\n\nYou can install the released version of stochLAB from\n[CRAN](https://CRAN.R-project.org) with:\n\n``` r\ninstall.packages(""stochLAB"")\n```\n\nYou can install the development version with:\n\n``` r\n# install.packages(""devtools"")\ndevtools::install_github(""HiDef-Aerial-Surveying/stochLAB"")\n```\n\nThis package depends on the following packages, which should be\ninstalled automatically:\n\n`cli dplyr glue logr magrittr msm pracma purrr rlang stats tibble tidyr`\n\n## Bug reports\n\nTo report any bugs, please log an\n[ISSUE](https://github.com/HiDef-Aerial-Surveying/stochLAB/issues)\n\n## Input parameters\n\nMany of the input parameters for the `stoch_crm()` function need to be\nobtained from developers (e.g., blade pitch, rotor radius, wind speed,\netc.). However, there are many parameters around the morphology and\nbiology of birds that are built into the `sCRM` package for UK seabirds,\nwhich can be found [HERE](https://github.com/dmpstats/sCRM). `sCRM` is\nan R Shiny application that wraps up the `stoch_crm()` and `band_crm()`\nfunctions. These biological parameters can be accessed by installing the\n`sCRM` package and running `sCRM::spp_dflts`, which will bring up a\ntibble object with all the relevant information.
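As a quick sketch of pulling those built-in defaults (assuming only that `sCRM` installs from GitHub as linked above and that `spp_dflts` is exported as described; column names should be checked against the installed version):

``` r
# install.packages("remotes")
# remotes::install_github("dmpstats/sCRM")

# Built-in default biological parameters for UK seabirds,
# exposed by sCRM as a tibble (one row per species)
spp_defaults <- sCRM::spp_dflts

# Inspect the available columns and species before feeding
# any of these values into stoch_crm()
names(spp_defaults)
str(spp_defaults, max.level = 1)
```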
If performing a collision risk assessment in UK waters, default\nbiological data for the following parameters can be obtained from the\n`sCRM` package:\n\n`flt_speed_pars, body_lt_pars, wing_span_pars, avoid_bsc_pars, avoid_ext_pars, noct_act_pars, flight_type, gen_fhd_boots`\n\nOther parameters specific to the species of interest need to be derived\nfrom site-based surveys:\n\n`prop_crh_pars, bird_dens_dt,` and `site_fhd_boots`\n\nAll wind farm parameters need to be obtained from the wind farm\ndevelopers:\n\n`n_blades, air_gap_pars, rtr_radius_pars, bld_width_pars, bld_pitch_pars, rtn_speed_pars, windspd_pars, trb_wind_avbl, trb_downtime_pars, wf_n_trbs, wf_width, wf_latitude, tidal_offset, season_specs, bld_chord_prf, lrg_arr_corr`\n\nThe following parameters control the outputs:\n\n`out_format, out_sampled_pars, out_period, verbose, log_file`\n\nMore information on input parameter specifics can be found in the\nvignettes for `stoch_crm` and `band_crm`.\n\n## Outputs\n\nOnce the collision risk model is run, the key outputs are presented as a\ntable containing the mean, standard deviation and median number of\ncollisions summarised by month, season, or year. Quantiles of the\nbootstrapped collisions are also presented in the tables. These tables\nare accessed by calling the relevant elements of the model object. Run the\n[Examples](#examples) to view exemplar outputs.\n\n``` r\nstochOUT <- stochLAB::stoch_crm(...)\n\nstochOUT$collisions$opt1 # For outputs from option 1 of the stochastic collision risk model\nstochOUT$collisions$opt2 # For outputs from option 2 of the stochastic collision risk model\nstochOUT$collisions$opt3 # For outputs from option 3 of the stochastic collision risk model\n```\n\n## Examples\n\n### Simple example\n\nThis is a basic example of running the stochastic collision model for\none seabird species and one turbine/wind-farm scenario, with fictional\ninput parameter data.\n\n``` r\nlibrary(stochLAB)\n\n# ------------------------------------------------------\n# Setting some of the required inputs upfront\n\nb_dens <- data.frame(\n month = month.abb,\n mean = runif(12, 0.8, 1.5),\n sd = runif(12, 0.2, 0.3))\n\n# Generic FHD bootstraps for one species, from Johnston et al. (2014)\nfhd_boots <- generic_fhd_bootstraps[[1]]\n\n# wind speed vs rotation speed vs pitch\nwind_rtn_ptch <- data.frame(\n wind_speed = seq_len(30),\n rtn_speed = 10/(30:1),\n bld_pitch = c(rep(90, 4), rep(0, 8), 5:22))\n\n# wind availability\nwindavb <- data.frame(\n month = month.abb,\n pctg = runif(12, 85, 98))\n\n# maintenance downtime\ndwntm <- data.frame(\n month = month.abb,\n mean = runif(12, 6, 10),\n sd = rep(2, 12))\n\n# seasons specification\nseas_dt <- data.frame(\n season_id = c(""a"", ""b"", ""c""),\n start_month = c(""Jan"", ""May"", ""Oct""),\n end_month = c(""Apr"", ""Sep"", ""Dec""))\n\n# ----------------------------------------------------------\n# Run stochastic CRM, treating rotor radius, air gap and\n# blade width as fixed parameters (i.e.
not stochastic)\n\nstoch_crm(\n model_options = c(1, 2, 3),\n n_iter = 1000,\n flt_speed_pars = data.frame(mean = 7.26, sd = 1.5),\n body_lt_pars = data.frame(mean = 0.39, sd = 0.005),\n wing_span_pars = data.frame(mean = 1.08, sd = 0.04),\n avoid_bsc_pars = data.frame(mean = 0.99, sd = 0.001),\n avoid_ext_pars = data.frame(mean = 0.96, sd = 0.002),\n noct_act_pars = data.frame(mean = 0.033, sd = 0.005),\n prop_crh_pars = data.frame(mean = 0.06, sd = 0.009),\n bird_dens_opt = ""tnorm"",\n bird_dens_dt = b_dens,\n flight_type = ""flapping"",\n prop_upwind = 0.5,\n gen_fhd_boots = fhd_boots,\n n_blades = 3,\n rtr_radius_pars = data.frame(mean = 80, sd = 0), # sd = 0, rotor radius is fixed\n air_gap_pars = data.frame(mean = 36, sd = 0), # sd = 0, air gap is fixed\n bld_width_pars = data.frame(mean = 8, sd = 0), # sd = 0, blade width is fixed\n rtn_pitch_opt = ""windSpeedReltn"",\n windspd_pars = data.frame(mean = 7.74, sd = 3),\n rtn_pitch_windspd_dt = wind_rtn_ptch,\n trb_wind_avbl = windavb,\n trb_downtime_pars = dwntm,\n wf_n_trbs = 200,\n wf_width = 15,\n wf_latitude = 56.9,\n tidal_offset = 2.5,\n lrg_arr_corr = TRUE,\n verbose = TRUE,\n seed = 1234,\n out_format = ""summaries"",\n out_sampled_pars = TRUE,\n out_period = ""seasons"",\n season_specs = seas_dt,\n log_file = file.path(getwd(), ""scrm_example.log"")\n)\n#> \n#> \xe2\x94\x80\xe2\x94\x80 Stochastic CRM \xe2\x94\x80\xe2\x94\x80\n#> \n#> \xe2\x9c\x94 Checking inputs [87ms]\n#> \xe2\x9c\x94 Preparing data [152ms]\n#> \xe2\x9c\x94 Sampling parameters [423ms]\n#> \xe2\x9c\x94 Calculating collisions | 1000/1000 iterations [1.6s]\n#> \xe2\x9c\x94 Sorting outputs [761ms]\n#> \xe2\x9c\x94 Job done!\n#> $collisions\n#> $collisions$opt1\n#> # A tibble: 3 \xc3\x97 10\n#> season_id period mean sd median pctl_2.5 pctl_25 pctl_75 pctl_9\xe2\x80\xa6\xc2\xb9 pctl_99\n#> \n#> 1 a Jan_Apr 28.0 12.7 28.8 7.15 18.2 36.4 52.0 67.3\n#> 2 b May_Sep 56.4 25.3 58.1 15.4 37.2 73.5 105. 142. 
\n#> 3 c Oct_Dec 19.3 8.92 19.5 5.13 12.2 25.5 36.6 45.4\n#> # \xe2\x80\xa6 with abbreviated variable name \xc2\xb9\xe2\x80\x8bpctl_97.5\n#> \n#> $collisions$opt2\n#> # A tibble: 3 \xc3\x97 10\n#> season_id period mean sd median pctl_2.5 pctl_25 pctl_75 pctl_9\xe2\x80\xa6\xc2\xb9 pctl_99\n#> \n#> 1 a Jan_Apr 0.728 1.28 0.393 0.0676 0.230 0.655 5.05 11.6 \n#> 2 b May_Sep 1.47 2.64 0.804 0.135 0.469 1.30 10.8 21.8 \n#> 3 c Oct_Dec 0.499 0.888 0.274 0.0448 0.158 0.445 3.43 8.06\n#> # \xe2\x80\xa6 with abbreviated variable name \xc2\xb9\xe2\x80\x8bpctl_97.5\n#> \n#> $collisions$opt3\n#> # A tibble: 3 \xc3\x97 10\n#> season_id period mean sd median pctl_2.5 pctl_25 pctl_75 pctl_9\xe2\x80\xa6\xc2\xb9 pctl_99\n#> \n#> 1 a Jan_Apr 0.350 0.790 0.143 0.0337 0.0913 0.256 3.05 6.66\n#> 2 b May_Sep 0.710 1.61 0.294 0.0687 0.185 0.487 5.55 15.2 \n#> 3 c Oct_Dec 0.240 0.546 0.0995 0.0223 0.0636 0.177 1.93 4.73\n#> # \xe2\x80\xa6 with abbreviated variable name \xc2\xb9\xe2\x80\x8bpctl_97.5\n#> \n#> \n#> $sampled_pars\n#> $sampled_pars$air_gap\n#> # A tibble: 1 \xc3\x97 5\n#> mean sd median pctl_2.5 pctl_97.5\n#> \n#> 1 36 0 36 36 36\n#> \n#> $sampled_pars$bld_width\n#> # A tibble: 1 \xc3\x97 5\n#> mean sd median pctl_2.5 pctl_97.5\n#> \n#> 1 8 0 8 8 8\n#> \n#> $sampled_pars$body_lt\n#> # A tibble: 1 \xc3\x97 5\n#> mean sd median pctl_2.5 pctl_97.5\n#> \n#> 1 0.390 0.00499 0.390 0.380 0.400\n#> \n#> $sampled_pars$flt_speed\n#> # A tibble: 1 \xc3\x97 5\n#> mean sd median pctl_2.5 pctl_97.5\n#> \n#> 1 7.28 1.47 7.28 4.30 10.0\n#> \n#> $sampled_pars$noct_actv\n#> # A tibble: 1 \xc3\x97 5\n#> mean sd median pctl_2.5 pctl_97.5\n#> \n#> 1 0.0333 0.00498 0.0333 0.0241 0.0436\n#> \n#> $sampled_pars$rtr_radius\n#> # A tibble: 1 \xc3\x97 5\n#> mean sd median pctl_2.5 pctl_97.5\n#> \n#> 1 80 0 80 80 80\n#> \n#> $sampled_pars$wing_span\n#> # A tibble: 1 \xc3\x97 5\n#> mean sd median pctl_2.5 pctl_97.5\n#> \n#> 1 1.08 0.0398 1.08 1.00 1.16\n#> \n#> $sampled_pars$hub_height\n#> # A tibble: 1 \xc3\x97 5\n#> mean sd median pctl_2.5 pctl_97.5\n#> \n#> 1 116 0 116 116 116\n#> \n#> $sampled_pars$dens_mth\n#> # A tibble: 12 \xc3\x97 6\n#> period mean sd median pctl_2.5 pctl_97.5\n#> \n#> 1 Jan 0.849 0.286 0.847 0.315 1.42\n#> 2 Feb 0.833 0.224 0.826 0.415 1.28\n#> 3 Mar 1.12 0.225 1.12 0.665 1.53\n#> 4 Apr 1.11 0.242 1.11 0.647 1.59\n#> 5 May 1.22 0.270 1.21 0.700 1.75\n#> 6 Jun 0.835 0.252 0.838 0.299 1.35\n#> 7 Jul 1.07 0.247 1.07 0.600 1.55\n#> 8 Aug 1.04 0.279 1.05 0.490 1.61\n#> 9 Sep 1.13 0.277 1.13 0.601 1.66\n#> 10 Oct 1.04 0.222 1.04 0.616 1.48\n#> 11 Nov 0.965 0.267 0.959 0.460 1.50\n#> 12 Dec 1.21 0.199 1.21 0.813 1.59\n#> \n#> $sampled_pars$prop_oper_mth\n#> # A tibble: 12 \xc3\x97 6\n#> period mean sd median pctl_2.5 pctl_97.5\n#> \n#> 1 Jan 0.793 0.0207 0.792 0.752 0.829\n#> 2 Feb 0.769 0.0207 0.769 0.729 0.808\n#> 3 Mar 0.850 0.0204 0.850 0.807 0.890\n#> 4 Apr 0.807 0.0196 0.807 0.769 0.845\n#> 5 May 0.847 0.0195 0.847 0.809 0.884\n#> 6 Jun 0.866 0.0198 0.866 0.827 0.905\n#> 7 Jul 0.869 0.0199 0.869 0.829 0.910\n#> 8 Aug 0.806 0.0194 0.806 0.768 0.844\n#> 9 Sep 0.879 0.0205 0.879 0.838 0.917\n#> 10 Oct 0.820 0.0200 0.820 0.779 0.858\n#> 11 Nov 0.858 0.0201 0.858 0.819 0.898\n#> 12 Dec 0.910 0.0201 0.911 0.872 0.947\n#> \n#> $sampled_pars$downtime\n#> # A tibble: 12 \xc3\x97 6\n#> period mean sd median pctl_2.5 pctl_97.5\n#> \n#> 1 Jan 6.09 2.07 6.12 2.39 10.1\n#> 2 Feb 8.98 2.07 8.99 5.12 12.9\n#> 3 Mar 8.54 2.04 8.54 4.52 12.8\n#> 4 Apr 8.57 1.96 8.61 4.79 12.4\n#> 5 May 7.83 1.95 7.81 4.12 11.7\n#> 6 Jun 7.64 
1.98 7.61 3.78 11.6\n#> 7 Jul 9.43 1.99 9.42 5.37 13.5\n#> 8 Aug 6.35 1.94 6.31 2.55 10.1\n#> 9 Sep 7.46 2.05 7.44 3.62 11.6\n#> 10 Oct 6.08 2.00 6.10 2.28 10.1\n#> 11 Nov 7.56 2.01 7.54 3.58 11.4\n#> 12 Dec 7.03 2.01 6.94 3.32 10.8\n#> \n#> $sampled_pars$wind_speed\n#> # A tibble: 1 \xc3\x97 5\n#> mean sd median pctl_2.5 pctl_97.5\n#> \n#> 1 7.73 3.00 7.73 2.15 13.7\n#> \n#> $sampled_pars$rtn_speed\n#> # A tibble: 1 \xc3\x97 5\n#> mean sd median pctl_2.5 pctl_97.5\n#> \n#> 1 0.428 0.0574 0.417 0.345 0.556\n#> \n#> $sampled_pars$bld_pitch\n#> # A tibble: 1 \xc3\x97 5\n#> mean sd median pctl_2.5 pctl_97.5\n#> \n#> 1 0.329 0.635 0 0 1.57\n#> \n#> $sampled_pars$avoid_bsc\n#> # A tibble: 1 \xc3\x97 5\n#> mean sd median pctl_2.5 pctl_97.5\n#> \n#> 1 0.990 0.000985 0.990 0.988 0.992\n#> \n#> $sampled_pars$avoid_ext\n#> # A tibble: 1 \xc3\x97 5\n#> mean sd median pctl_2.5 pctl_97.5\n#> \n#> 1 0.960 0.00203 0.960 0.956 0.964\n#> \n#> $sampled_pars$prop_crh\n#> # A tibble: 1 \xc3\x97 5\n#> mean sd median pctl_2.5 pctl_97.5\n#> \n#> 1 0.0605 0.00904 0.0601 0.0441 0.0800\n#> \n#> $sampled_pars$gen_fhd\n#> # A tibble: 500 \xc3\x97 6\n#> height mean sd median pctl_2.5 pctl_97.5\n#> \n#> 1 0 0.163 0.0188 0.166 0.109 0.187 \n#> 2 1 0.136 0.0129 0.138 0.0967 0.152 \n#> 3 2 0.114 0.00855 0.115 0.0863 0.124 \n#> 4 3 0.0950 0.00530 0.0963 0.0769 0.100 \n#> 5 4 0.0794 0.00296 0.0803 0.0686 0.0816\n#> 6 5 0.0664 0.00146 0.0669 0.0606 0.0670\n#> 7 6 0.0556 0.00116 0.0558 0.0530 0.0567\n#> 8 7 0.0465 0.00166 0.0466 0.0439 0.0490\n#> 9 8 0.0390 0.00216 0.0389 0.0357 0.0432\n#> 10 9 0.0327 0.00250 0.0324 0.0290 0.0386\n#> # \xe2\x80\xa6 with 490 more rows\n```\n\n### Multiscenario example\n\nThis is an example usage of `stoch_crm()` for multiple scenarios. The\naim is two-fold:\n\n1. Suggest how input parameter datasets used in the previous\n implementation can be reshaped to fit `stoch_crm()`\xe2\x80\x99s interface.\n Suggested code is also relevant in the context of multiple scenarios\n applications, since the wide tabular structure of these datasets is\n likely the favoured format for users to compile input parameters\n under different scenarios.\n\n2. 
Propose a functional programming framework to run `stoch_crm()` for\n multiple species and wind-farm/turbines features.\n\nPlease note the example runs on fictional data.\n\n``` r\nlibrary(stochLAB)\n\n# --------------------------------------------------------- #\n# ---- Reshaping into list-column data frames ----\n# --------------------------------------------------------- #\n#\n# --- bird features\nbird_pars <- bird_pars_wide_example %>%\n dplyr::relocate(Flight, .after = dplyr::last_col()) %>%\n tidyr::pivot_longer(AvoidanceBasic:Prop_CRH_ObsSD) %>%\n dplyr::mutate(\n par = dplyr::if_else(grepl(""SD|sd|Sd"", name), ""sd"", ""mean""),\n feature = gsub(""SD|sd|Sd"","""", name)) %>%\n dplyr::select(-name) %>%\n tidyr::pivot_wider(names_from = par, values_from = value) %>%\n tidyr::nest(pars = c(mean, sd)) %>%\n tidyr::pivot_wider(names_from = feature, values_from = pars) %>%\n tibble::add_column(prop_upwind = 0.5)\n\n# --- bird densities: provided as mean and sd Parameters for Truncated Normal lower\n# bounded at 0\ndens_pars <- dens_tnorm_wide_example %>%\n tibble::add_column(\n dens_opt = rep(""tnorm"", nrow(.)),\n .after = 1) %>%\n tidyr::pivot_longer(Jan:DecSD) %>%\n dplyr::mutate(\n par = dplyr::if_else(grepl(""SD|sd|Sd"", name), ""sd"", ""mean""),\n month = gsub(""SD|sd|Sd"","""", name)) %>%\n dplyr::select(-name) %>%\n tidyr::pivot_wider(names_from = par, values_from = value) %>%\n tidyr::nest(mth_dens = c(month, mean, sd))\n\n# --- FHD data from Johnson et al (2014) for the species under analysis\ngen_fhd_boots <- generic_fhd_bootstraps[bird_pars$Species]\n\n# --- seasons definitions (made up)\nseason_dt <- list(\n Arctic_Tern = data.frame(\n season_id = c(""breeding"", ""feeding"", ""migrating""),\n start_month = c(""May"", ""Sep"", ""Jan""),\n end_month = c(""Aug"", ""Dec"", ""Apr"")),\n Black_headed_Gull = data.frame(\n season_id = c(""breeding"", ""feeding"", ""migrating""),\n start_month = c(""Jan"", ""May"", ""Oct""),\n end_month = c(""Apr"", ""Sep"", ""Dec"")),\n Black_legged_Kittiwake = data.frame(\n season_id = c(""breeding"", ""feeding"", ""migrating""),\n start_month = c(""Dec"", ""Mar"", ""Sep""),\n end_month = c(""Feb"", ""Aug"", ""Nov"")))\n\n# --- turbine parameters\n## address operation parameters first\ntrb_opr_pars <- turb_pars_wide_example %>%\n dplyr::select(TurbineModel, JanOp:DecOpSD) %>%\n tidyr::pivot_longer(JanOp:DecOpSD) %>%\n dplyr::mutate(\n month = substr(name, 1, 3),\n par = dplyr::case_when(\n grepl(""SD|sd|Sd"", name) ~ ""sd"",\n grepl(""Mean|MEAN|mean"", name) ~ ""mean"",\n TRUE ~ ""pctg""\n )) %>%\n dplyr::select(-name) %>%\n tidyr::pivot_wider(names_from = par, values_from = value) %>%\n tidyr::nest(\n wind_avbl = c(month, pctg),\n trb_dwntm = c(month, mean, sd))\n\n## address turbine features and subsequently merge operation parameters\ntrb_pars <- turb_pars_wide_example %>%\n dplyr::select(TurbineModel:windSpeedSD ) %>%\n dplyr::relocate(RotorSpeedAndPitch_SimOption, .after = 1) %>%\n tidyr::pivot_longer(RotorRadius:windSpeedSD) %>%\n dplyr::mutate(\n par = dplyr::if_else(grepl(""SD|sd|Sd"", name), ""sd"", ""mean""),\n feature = gsub(""(SD|sd|Sd)|(Mean|MEAN|mean)"","""", name)\n ) %>%\n dplyr::select(-name) %>%\n tidyr::pivot_wider(names_from = par, values_from = value) %>%\n tidyr::nest(pars = c(mean, sd)) %>%\n tidyr::pivot_wider(names_from = feature, values_from = pars) %>%\n dplyr::left_join(., trb_opr_pars)\n#> Joining, by = ""TurbineModel""\n\n# --- windspeed, rotation speed and blade pitch relationship\nwndspd_rtn_ptch_example\n#> 
wind_speed rtn_speed bld_pitch\n#> 1 0 0.0 90\n#> 2 1 0.0 90\n#> 3 2 0.0 90\n#> 4 3 6.8 0\n#> 5 4 6.8 0\n#> 6 5 6.8 0\n#> 7 6 6.8 0\n#> 8 7 6.8 0\n#> 9 8 8.1 0\n#> 10 9 9.1 0\n#> 11 10 9.3 0\n#> 12 11 9.4 4\n#> 13 12 9.5 7\n#> 14 13 9.7 9\n#> 15 14 9.7 11\n#> 16 15 9.9 13\n#> 17 16 10.2 15\n#> 18 17 10.2 16\n#> 19 18 10.2 18\n#> 20 19 10.2 19\n#> 21 20 10.2 20\n#> 22 21 10.2 22\n#> 23 22 10.2 23\n#> 24 23 10.2 24\n#> 25 24 10.2 25\n#> 26 25 10.2 26\n#> 27 26 10.2 27\n#> 28 27 10.2 28\n#> 29 28 10.2 29\n#> 30 29 10.2 30\n\n# --- windfarm parameters\nwf_pars <- data.frame(\n wf_id = c(""wf_1"", ""wf_2""),\n n_turbs = c(200, 400),\n wf_width = c(4, 10),\n wf_lat = c(55.8, 55.0),\n td_off = c(2.5, 2),\n large_array_corr = c(FALSE, TRUE)\n)\n\n\n# -------------------------------------------------------------- #\n# ---- Run stoch_crm() for multiple scenarios ----\n# -------------------------------------------------------------- #\n\n# --- Set up scenario combinations\nscenarios_specs <- tidyr::expand_grid(\n spp = bird_pars$Species,\n turb_id = trb_pars$TurbineModel,\n wf_id = wf_pars$wf_id) %>%\n tibble::add_column(\n scenario_id = paste0(""scenario_"", 1:nrow(.)),\n .before = 1)\n\n# --- Set up progress bar for the upcoming iterative mapping step\npb <- progress::progress_bar$new(\n format = ""Running Scenario: :what [:bar] :percent eta: :eta"",\n width = 100,\n total = nrow(scenarios_specs))\n\n# --- Map stoch_crm() to each scenario specification via purrr::pmap\noutputs <- scenarios_specs %>%\n purrr::pmap(function(scenario_id, spp, turb_id, wf_id, ...){\n\n pb$tick(tokens = list(what = scenario_id))\n\n # params for current species\n c_spec <- bird_pars %>%\n dplyr::filter(Species == {{spp}}) \n\n # density for current species\n c_dens <- dens_pars %>%\n dplyr::filter(Species == {{spp}})\n\n # params for current turbine scenario\n c_turb <- trb_pars %>%\n dplyr::filter(TurbineModel == {{turb_id}})\n\n # params for current windfarm scenario\n c_wf <- wf_pars %>%\n dplyr::filter(wf_id == {{wf_id}})\n\n # inputs in list-columns need to be unlisted, either via `unlist()` or\n # indexing `[[1]]`\n # switching off `verbose`, otherwise console will be \n # cramped with log messages\n \n stoch_crm(\n model_options = c(1, 2, 3),\n n_iter = 1000,\n flt_speed_pars = c_spec$Flight_Speed[[1]],\n body_lt_pars = c_spec$Body_Length[[1]],\n wing_span_pars = c_spec$Wingspan[[1]],\n avoid_bsc_pars = c_spec$AvoidanceBasic[[1]],\n avoid_ext_pars = c_spec$AvoidanceExtended[[1]],\n noct_act_pars = c_spec$Nocturnal_Activity[[1]],\n prop_crh_pars = c_spec$Prop_CRH_Obs[[1]],\n bird_dens_opt = c_dens$dens_opt,\n bird_dens_dt = c_dens$mth_dens[[1]],\n flight_type = c_spec$Flight,\n prop_upwind = c_spec$prop_upwind,\n gen_fhd_boots = gen_fhd_boots[[spp]],\n n_blades = c_turb$Blades,\n rtr_radius_pars = c_turb$RotorRadius[[1]],\n air_gap_pars = c_turb$HubHeightAdd[[1]],\n bld_width_pars = c_turb$BladeWidth[[1]],\n rtn_pitch_opt = c_turb$RotorSpeedAndPitch_SimOption,\n bld_pitch_pars = c_turb$Pitch[[1]],\n rtn_speed_pars = c_turb$RotationSpeed[[1]],\n windspd_pars = c_turb$windSpeed[[1]],\n rtn_pitch_windspd_dt = wndspd_rtn_ptch_example,\n trb_wind_avbl = c_turb$wind_avbl[[1]],\n trb_downtime_pars = c_turb$trb_dwntm[[1]],\n wf_n_trbs = c_wf$n_turbs,\n wf_width = c_wf$wf_width,\n wf_latitude = c_wf$wf_lat,\n tidal_offset = c_wf$td_off,\n lrg_arr_corr = c_wf$large_array_corr,\n verbose = FALSE,\n seed = 1234,\n out_format = ""summaries"",\n out_sampled_pars = FALSE,\n out_period = ""seasons"",\n season_specs = 
season_dt[[spp]],\n log_file = NULL\n )\n })\n\n# --- close progress bar\npb$terminate()\n\n# --- identify elements of output list\nnames(outputs) <- scenarios_specs$scenario_id\n\noutputs\n#> $scenario_1\n#> $scenario_1$opt1\n#> # A tibble: 3 \xc3\x97 10\n#> season_id period mean sd median pctl_2.5 pctl_25 pctl_75 pctl_9\xe2\x80\xa6\xc2\xb9 pctl_99\n#> \n#> 1 breeding May_Aug 984. 75.9 982. 847. 933. 1031. 1130. 1241.\n#> 2 feeding Sep_Dec 552. 41.6 551. 478. 523. 579. 634. 683.\n#> 3 migrating Jan_Apr 626. 47.7 623. 541. 591. 658. 722. 780.\n#> # \xe2\x80\xa6 with abbreviated variable name \xc2\xb9\xe2\x80\x8bpctl_97.5\n#> \n#> $scenario_1$opt2\n#> # A tibble: 3 \xc3\x97 10\n#> season_id period mean sd median pctl_2.5 pctl_25 pctl_75 pctl_9\xe2\x80\xa6\xc2\xb9 pctl_99\n#> \n#> 1 breeding May_Aug 60.7 67.3 41.9 2.41 25.3 65.7 293. 389.\n#> 2 feeding Sep_Dec 34.0 37.7 23.5 1.35 14.3 37.0 162. 217.\n#> 3 migrating Jan_Apr 38.6 42.8 26.8 1.55 16.2 42.4 186. 249.\n#> # \xe2\x80\xa6 with abbreviated variable name \xc2\xb9\xe2\x80\x8bpctl_97.5\n#> \n#> $scenario_1$opt3\n#> # A tibble: 3 \xc3\x97 10\n#> season_id period mean sd median pctl_2.5 pctl_25 pctl_75 pctl_9\xe2\x80\xa6\xc2\xb9 pctl_99\n#> \n#> 1 breeding May_Aug 17.9 24.8 10.5 0.394 5.92 17.9 106. 151. \n#> 2 feeding Sep_Dec 10.1 13.9 5.86 0.221 3.35 10.0 59.0 84.5\n#> 3 migrating Jan_Apr 11.4 15.8 6.77 0.251 3.79 11.5 66.9 93.8\n#> # \xe2\x80\xa6 with abbreviated variable name \xc2\xb9\xe2\x80\x8bpctl_97.5\n#> \n#> \n#> $scenario_2\n#> $scenario_2$opt1\n#> # A tibble: 3 \xc3\x97 10\n#> season_id period mean sd median pctl_2.5 pctl_25 pctl_75 pctl_9\xe2\x80\xa6\xc2\xb9 pctl_99\n#> \n#> 1 breeding May_Aug 1836. 137. 1831. 1586. 1743. 1921. 2098. 2301.\n#> 2 feeding Sep_Dec 1045. 76.4 1044. 909. 991. 1096. 1195. 1288.\n#> 3 migrating Jan_Apr 1181. 87.6 1177. 1025. 1118. 1241. 1356. 1463.\n#> # \xe2\x80\xa6 with abbreviated variable name \xc2\xb9\xe2\x80\x8bpctl_97.5\n#> \n#> $scenario_2$opt2\n#> # A tibble: 3 \xc3\x97 10\n#> season_id period mean sd median pctl_2.5 pctl_25 pctl_75 pctl_9\xe2\x80\xa6\xc2\xb9 pctl_99\n#> \n#> 1 breeding May_Aug 122. 132. 85.2 5.20 52.1 133. 574. 759.\n#> 2 feeding Sep_Dec 69.2 75.0 48.4 2.95 29.9 75.9 323. 430.\n#> 3 migrating Jan_Apr 78.3 84.8 55.1 3.37 33.7 86.5 369. 491.\n#> # \xe2\x80\xa6 with abbreviated variable name \xc2\xb9\xe2\x80\x8bpctl_97.5\n#> \n#> $scenario_2$opt3\n#> # A tibble: 3 \xc3\x97 10\n#> season_id period mean sd median pctl_2.5 pctl_25 pctl_75 pctl_9\xe2\x80\xa6\xc2\xb9 pctl_99\n#> \n#> 1 breeding May_Aug 35.9 48.7 21.4 0.848 12.2 36.1 208. 294.\n#> 2 feeding Sep_Dec 20.4 27.7 12.1 0.482 7.00 20.6 118. 167.\n#> 3 migrating Jan_Apr 23.1 31.3 13.9 0.547 7.91 23.5 133. 185.\n#> # \xe2\x80\xa6 with abbreviated variable name \xc2\xb9\xe2\x80\x8bpctl_97.5\n#> \n#> \n#> $scenario_3\n#> $scenario_3$opt1\n#> # A tibble: 3 \xc3\x97 10\n#> season_id period mean sd median pctl_2.5 pctl_25 pctl_75 pctl_9\xe2\x80\xa6\xc2\xb9 pctl_99\n#> \n#> 1 breeding May_Aug 2131. 143. 2126. 1875. 2034. 2223. 2415. 2690.\n#> 2 feeding Sep_Dec 1149. 76.8 1143. 1011. 1095. 1197. 1302. 1446.\n#> 3 migrating Jan_Apr 1300. 89.6 1292. 1131. 1239. 1357. 1491. 1638.\n#> # \xe2\x80\xa6 with abbreviated variable name \xc2\xb9\xe2\x80\x8bpctl_97.5\n#> \n#> $scenario_3$opt2\n#> # A tibble: 3 \xc3\x97 10\n#> season_id period mean sd median pctl_2.5 pctl_25 pctl_75 pctl_9\xe2\x80\xa6\xc2\xb9 pctl_99\n#> \n#> 1 breeding May_Aug 53.7 77.6 30.1 0.894 17.0 53.9 347. 477.\n#> 2 feeding Sep_Dec 28.9 41.8 16.1 0.496 9.16 29.0 185. 
257.\n#> 3 migrating Jan_Apr 32.7 47.1 18.3 0.546 10.3 32.5 205. 291.\n#> # \xe2\x80\xa6 with abbreviated variable name \xc2\xb9\xe2\x80\x8bpctl_97.5\n#> \n#> $scenario_3$opt3\n#> # A tibble: 3 \xc3\x97 10\n#> season_id period mean sd median pctl_2.5 pctl_25 pctl_75 pctl_9\xe2\x80\xa6\xc2\xb9 pctl_99\n#> \n#> 1 breeding May_Aug 14.4 25.7 6.47 0.124 3.37 12.7 109. 162. \n#> 2 feeding Sep_Dec 7.74 13.8 3.48 0.0685 1.81 6.84 59.8 86.3\n#> 3 migrating Jan_Apr 8.74 15.6 3.93 0.0756 2.05 7.65 66.9 96.0\n#> # \xe2\x80\xa6 with abbreviated variable name \xc2\xb9\xe2\x80\x8bpctl_97.5\n#> \n#> \n#> $scenario_4\n#> $scenario_4$opt1\n#> # A tibble: 3 \xc3\x97 10\n#> season_id period mean sd median pctl_2.5 pctl_25 pctl_75 pctl_9\xe2\x80\xa6\xc2\xb9 pctl_99\n#> \n#> 1 breeding May_Aug 3720. 240. 3713. 3290. 3560. 3872. 4188. 4654.\n#> 2 feeding Sep_Dec 2037. 131. 2025. 1801. 1944. 2122. 2297. 2540.\n#> 3 migrating Jan_Apr 2297. 152. 2284. 2012. 2193. 2393. 2626. 2854.\n#> # \xe2\x80\xa6 with abbreviated variable name \xc2\xb9\xe2\x80\x8bpctl_97.5\n#> \n#> $scenario_4$opt2\n#> # A tibble: 3 \xc3\x97 10\n#> season_id period mean sd median pctl_2.5 pctl_25 pctl_75 pctl_9\xe2\x80\xa6\xc2\xb9 pctl_99\n#> \n#> 1 breeding May_Aug 100. 142. 57.0 1.79 32.6 101. 633. 867.\n#> 2 feeding Sep_Dec 54.8 77.7 31.0 1.01 17.8 55.5 342. 474.\n#> 3 migrating Jan_Apr 61.7 87.3 35.1 1.11 20.1 62.2 382. 531.\n#> # \xe2\x80\xa6 with abbreviated variable name \xc2\xb9\xe2\x80\x8bpctl_97.5\n#> \n#> $scenario_4$opt3\n#> # A tibble: 3 \xc3\x97 10\n#> season_id period mean sd median pctl_2.5 pctl_25 pctl_75 pctl_9\xe2\x80\xa6\xc2\xb9 pctl_99\n#> \n#> 1 breeding May_Aug 26.8 47.1 12.3 0.249 6.47 23.9 199. 295.\n#> 2 feeding Sep_Dec 14.7 25.7 6.69 0.140 3.53 13.1 111. 160.\n#> 3 migrating Jan_Apr 16.5 28.9 7.57 0.153 3.98 14.7 124. 177.\n#> # \xe2\x80\xa6 with abbreviated variable name \xc2\xb9\xe2\x80\x8bpctl_97.5\n#> \n#> \n#> $scenario_5\n#> $scenario_5$opt1\n#> # A tibble: 3 \xc3\x97 10\n#> season_id period mean sd median pctl_2.5 pctl_25 pctl_75 pctl_9\xe2\x80\xa6\xc2\xb9 pctl_99\n#> \n#> 1 breeding May_Aug 1176. 341. 1053. 825. 985. 1219. 2220. 2634.\n#> 2 feeding Sep_Dec 615. 176. 551. 430. 516. 640. 1136. 1338.\n#> 3 migrating Jan_Apr 703. 202. 630. 489. 591. 731. 1323. 1551.\n#> # \xe2\x80\xa6 with abbreviated variable name \xc2\xb9\xe2\x80\x8bpctl_97.5\n#> \n#> $scenario_5$opt2\n#> # A tibble: 3 \xc3\x97 10\n#> season_id period mean sd median pctl_2.5 pctl_25 pctl_75 pctl_9\xe2\x80\xa6\xc2\xb9 pctl_99\n#> \n#> 1 breeding May_Aug 0.837 2.75 0.131 0.000231 0.0341 0.390 8.68 31.9\n#> 2 feeding Sep_Dec 0.438 1.45 0.0687 0.000118 0.0180 0.202 4.66 17.0\n#> 3 migrating Jan_Apr 0.500 1.63 0.0782 0.000136 0.0209 0.230 5.20 18.4\n#> # \xe2\x80\xa6 with abbreviated variable name \xc2\xb9\xe2\x80\x8bpctl_97.5\n#> \n#> $scenario_5$opt3\n#> # A tibble: 3 \xc3\x97 10\n#> season_id period mean sd median pctl_2.5 pctl_25 pctl_75 pctl_\xe2\x80\xa6\xc2\xb9 pctl_99\n#> \n#> 1 breeding May_Aug 0.221 0.795 0.0245 0.0000261 0.00602 0.0786 2.41 8.88\n#> 2 feeding Sep_Dec 0.116 0.418 0.0129 0.0000129 0.00319 0.0408 1.29 4.73\n#> 3 migrating Jan_Apr 0.132 0.472 0.0147 0.0000151 0.00365 0.0462 1.44 5.12\n#> # \xe2\x80\xa6 with abbreviated variable name \xc2\xb9\xe2\x80\x8bpctl_97.5\n#> \n#> \n#> $scenario_6\n#> $scenario_6$opt1\n#> # A tibble: 3 \xc3\x97 10\n#> season_id period mean sd median pctl_2.5 pctl_25 pctl_75 pctl_9\xe2\x80\xa6\xc2\xb9 pctl_99\n#> \n#> 1 breeding May_Aug 2164. 568. 1961. 1557. 1840. 2263. 3874. 
4500.\n#> 2 feeding Sep_Dec 1148. 298. 1043. 822. 977. 1210. 2021. 2323.\n#> 3 migrating Jan_Apr 1310. 341. 1188. 934. 1117. 1370. 2325. 2688.\n#> # \xe2\x80\xa6 with abbreviated variable name \xc2\xb9\xe2\x80\x8bpctl_97.5\n#> \n#> $scenario_6$opt2\n#> # A tibble: 3 \xc3\x97 10\n#> season_id period mean sd median pctl_2.5 pctl_25 pctl_75 pctl_9\xe2\x80\xa6\xc2\xb9 pctl_99\n#> \n#> 1 breeding May_Aug 1.64 5.28 0.266 0.000499 0.0711 0.779 17.1 59.5\n#> 2 feeding Sep_Dec 0.872 2.82 0.142 0.000254 0.0379 0.415 9.33 32.2\n#> 3 migrating Jan_Apr 0.991 3.18 0.162 0.000295 0.0437 0.472 10.4 34.7\n#> # \xe2\x80\xa6 with abbreviated variable name \xc2\xb9\xe2\x80\x8bpctl_97.5\n#> \n#> $scenario_6$opt3\n#> # A tibble: 3 \xc3\x97 10\n#> season_id period mean sd median pctl_2.5 pctl_25 pctl_75 pctl_\xe2\x80\xa6\xc2\xb9 pctl_99\n#> \n#> 1 breeding May_Aug 0.433 1.53 0.0503 0.0000579 0.0126 0.157 4.77 16.6 \n#> 2 feeding Sep_Dec 0.230 0.817 0.0271 0.0000291 0.00672 0.0832 2.59 8.97\n#> 3 migrating Jan_Apr 0.261 0.921 0.0304 0.0000337 0.00768 0.0947 2.88 9.68\n#> # \xe2\x80\xa6 with abbreviated variable name \xc2\xb9\xe2\x80\x8bpctl_97.5\n#> \n#> \n#> $scenario_7\n#> $scenario_7$opt1\n#> # A tibble: 3 \xc3\x97 10\n#> season_id period mean sd median pctl_2.5 pctl_25 pctl_75 pctl_9\xe2\x80\xa6\xc2\xb9 pctl_99\n#> \n#> 1 breeding Jan_Apr 429. 93.1 423. 270. 360. 490. 626. 767.\n#> 2 feeding May_Sep 697. 151. 685. 442. 588. 791. 1006. 1252.\n#> 3 migrating Oct_Dec 266. 59.2 262. 164. 223. 303. 390. 498.\n#> # \xe2\x80\xa6 with abbreviated variable name \xc2\xb9\xe2\x80\x8bpctl_97.5\n#> \n#> $scenario_7$opt2\n#> # A tibble: 3 \xc3\x97 10\n#> season_id period mean sd median pctl_2.5 pctl_25 pctl_75 pctl_9\xe2\x80\xa6\xc2\xb9 pctl_99\n#> \n#> 1 breeding Jan_Apr 257. 150. 223. 62.9 153. 320. 603. 1071.\n#> 2 feeding May_Sep 417. 244. 366. 100. 251. 517. 985. 1797.\n#> 3 migrating Oct_Dec 159. 93.0 137. 37.8 94.0 199. 377. 649.\n#> # \xe2\x80\xa6 with abbreviated variable name \xc2\xb9\xe2\x80\x8bpctl_97.5\n#> \n#> $scenario_7$opt3\n#> # A tibble: 3 \xc3\x97 10\n#> season_id period mean sd median pctl_2.5 pctl_25 pctl_75 pctl_9\xe2\x80\xa6\xc2\xb9 pctl_99\n#> \n#> 1 breeding Jan_Apr 28.2 25.9 20.9 4.02 12.5 34.1 86.4 239.\n#> 2 feeding May_Sep 45.9 42.4 34.3 6.51 20.3 55.1 145. 409.\n#> 3 migrating Oct_Dec 17.5 16.0 12.8 2.48 7.78 21.8 54.6 144.\n#> # \xe2\x80\xa6 with abbreviated variable name \xc2\xb9\xe2\x80\x8bpctl_97.5\n#> \n#> \n#> $scenario_8\n#> $scenario_8$opt1\n#> # A tibble: 3 \xc3\x97 10\n#> season_id period mean sd median pctl_2.5 pctl_25 pctl_75 pctl_9\xe2\x80\xa6\xc2\xb9 pctl_99\n#> \n#> 1 breeding Jan_Apr 855. 186. 844. 539. 719. 977. 1248. 1531.\n#> 2 feeding May_Sep 1376. 299. 1352. 872. 1160. 1560. 1985. 2472.\n#> 3 migrating Oct_Dec 534. 119. 526. 329. 448. 608. 782. 999.\n#> # \xe2\x80\xa6 with abbreviated variable name \xc2\xb9\xe2\x80\x8bpctl_97.5\n#> \n#> $scenario_8$opt2\n#> # A tibble: 3 \xc3\x97 10\n#> season_id period mean sd median pctl_2.5 pctl_25 pctl_75 pctl_9\xe2\x80\xa6\xc2\xb9 pctl_99\n#> \n#> 1 breeding Jan_Apr 536. 307. 468. 135. 324. 668. 1246. 2183.\n#> 2 feeding May_Sep 862. 494. 760. 213. 524. 1067. 2013. 3624.\n#> 3 migrating Oct_Dec 334. 192. 290. 81.5 200. 418. 783. 1330.\n#> # \xe2\x80\xa6 with abbreviated variable name \xc2\xb9\xe2\x80\x8bpctl_97.5\n#> \n#> $scenario_8$opt3\n#> # A tibble: 3 \xc3\x97 10\n#> season_id period mean sd median pctl_2.5 pctl_25 pctl_75 pctl_9\xe2\x80\xa6\xc2\xb9 pctl_99\n#> \n#> 1 breeding Jan_Apr 58.6 52.8 43.8 8.62 26.4 71.2 178. 
485.\n#> 2 feeding May_Sep 94.4 85.7 71.2 14.0 42.5 114. 295. 820.\n#> 3 migrating Oct_Dec 36.6 32.8 27.1 5.36 16.5 45.6 113. 293.\n#> # \xe2\x80\xa6 with abbreviated variable name \xc2\xb9\xe2\x80\x8bpctl_97.5\n#> \n#> \n#> $scenario_9\n#> $scenario_9$opt1\n#> # A tibble: 3 \xc3\x97 10\n#> season_id period mean sd median pctl_2.5 pctl_25 pctl_75 pctl_9\xe2\x80\xa6\xc2\xb9 pctl_99\n#> \n#> 1 breeding Jan_Apr 571. 123. 563. 360. 484. 643. 818. 1015.\n#> 2 feeding May_Sep 965. 210. 953. 615. 815. 1089. 1428. 1738.\n#> 3 migrating Oct_Dec 354. 78.1 349. 223. 299. 401. 511. 646.\n#> # \xe2\x80\xa6 with abbreviated variable name \xc2\xb9\xe2\x80\x8bpctl_97.5\n#> \n#> $scenario_9$opt2\n#> # A tibble: 3 \xc3\x97 10\n#> season_id period mean sd median pctl_2.5 pctl_25 pctl_75 pctl_9\xe2\x80\xa6\xc2\xb9 pctl_99\n#> \n#> 1 breeding Jan_Apr 194. 140. 157. 32.9 100. 245. 539. 1050.\n#> 2 feeding May_Sep 329. 238. 265. 55.1 169. 414. 905. 1788.\n#> 3 migrating Oct_Dec 121. 87.9 97.6 20.1 62.0 153. 332. 682.\n#> # \xe2\x80\xa6 with abbreviated variable name \xc2\xb9\xe2\x80\x8bpctl_97.5\n#> \n#> $scenario_9$opt3\n#> # A tibble: 3 \xc3\x97 10\n#> season_id period mean sd median pctl_2.5 pctl_25 pctl_75 pctl_9\xe2\x80\xa6\xc2\xb9 pctl_99\n#> \n#> 1 breeding Jan_Apr 23.0 25.6 15.7 2.36 8.83 27.4 79.6 265.\n#> 2 feeding May_Sep 38.9 43.5 26.3 3.96 15.0 47.4 137. 472.\n#> 3 migrating Oct_Dec 14.3 16.2 9.75 1.46 5.51 17.2 48.9 183.\n#> # \xe2\x80\xa6 with abbreviated variable name \xc2\xb9\xe2\x80\x8bpctl_97.5\n#> \n#> \n#> $scenario_10\n#> $scenario_10$opt1\n#> # A tibble: 3 \xc3\x97 10\n#> season_id period mean sd median pctl_2.5 pctl_25 pctl_75 pctl_9\xe2\x80\xa6\xc2\xb9 pctl_99\n#> \n#> 1 breeding Jan_Apr 1136. 245. 1120. 717. 963. 1279. 1628. 2019.\n#> 2 feeding May_Sep 1901. 413. 1875. 1212. 1605. 2145. 2811. 3422.\n#> 3 migrating Oct_Dec 709. 156. 698. 446. 598. 802. 1024. 1294.\n#> # \xe2\x80\xa6 with abbreviated variable name \xc2\xb9\xe2\x80\x8bpctl_97.5\n#> \n#> $scenario_10$opt2\n#> # A tibble: 3 \xc3\x97 10\n#> season_id period mean sd median pctl_2.5 pctl_25 pctl_75 pctl_9\xe2\x80\xa6\xc2\xb9 pctl_99\n#> \n#> 1 breeding Jan_Apr 404. 286. 328. 70.2 211. 509. 1101. 2131.\n#> 2 feeding May_Sep 676. 481. 547. 116. 351. 852. 1833. 3592.\n#> 3 migrating Oct_Dec 252. 181. 205. 43.2 131. 320. 686. 1393.\n#> # \xe2\x80\xa6 with abbreviated variable name \xc2\xb9\xe2\x80\x8bpctl_97.5\n#> \n#> $scenario_10$opt3\n#> # A tibble: 3 \xc3\x97 10\n#> season_id period mean sd median pctl_2.5 pctl_25 pctl_75 pctl_9\xe2\x80\xa6\xc2\xb9 pctl_99\n#> \n#> 1 breeding Jan_Apr 47.6 52.0 32.8 5.05 18.6 56.9 164. 535.\n#> 2 feeding May_Sep 79.8 87.6 54.3 8.36 31.2 95.8 279. 944.\n#> 3 migrating Oct_Dec 29.8 33.2 20.5 3.15 11.7 35.6 101. 373.\n#> # \xe2\x80\xa6 with abbreviated variable name \xc2\xb9\xe2\x80\x8bpctl_97.5\n#> \n#> \n#> $scenario_11\n#> $scenario_11$opt1\n#> # A tibble: 3 \xc3\x97 10\n#> season_id period mean sd median pctl_2.5 pctl_25 pctl_75 pctl_9\xe2\x80\xa6\xc2\xb9 pctl_99\n#> \n#> 1 breeding Jan_Apr 570. 121. 558. 366. 484. 647. 826. 979.\n#> 2 feeding May_Sep 981. 211. 957. 624. 830. 1115. 1431. 1682.\n#> 3 migrating Oct_Dec 347. 76.0 344. 221. 293. 395. 515. 606.\n#> # \xe2\x80\xa6 with abbreviated variable name \xc2\xb9\xe2\x80\x8bpctl_97.5\n#> \n#> $scenario_11$opt2\n#> # A tibble: 3 \xc3\x97 10\n#> season_id period mean sd median pctl_2.5 pctl_25 pctl_75 pctl_9\xe2\x80\xa6\xc2\xb9 pctl_99\n#> \n#> 1 breeding Jan_Apr 24.7 40.3 11.5 0.787 4.98 29.1 111. 
440.\n#> 2 feeding May_Sep 42.5 69.2 19.8 1.35 8.56 48.3 189. 768.\n#> 3 migrating Oct_Dec 15.0 24.2 7.02 0.486 3.02 17.4 67.2 263.\n#> # \xe2\x80\xa6 with abbreviated variable name \xc2\xb9\xe2\x80\x8bpctl_97.5\n#> \n#> $scenario_11$opt3\n#> # A tibble: 3 \xc3\x97 10\n#> season_id period mean sd median pctl_2.5 pctl_25 pctl_75 pctl_9\xe2\x80\xa6\xc2\xb9 pctl_99\n#> \n#> 1 breeding Jan_Apr 3.52 11.1 0.933 0.0455 0.337 2.66 21.6 133. \n#> 2 feeding May_Sep 6.04 19.0 1.57 0.0814 0.584 4.48 36.6 233. \n#> 3 migrating Oct_Dec 2.13 6.60 0.568 0.0283 0.207 1.57 13.3 75.9\n#> # \xe2\x80\xa6 with abbreviated variable name \xc2\xb9\xe2\x80\x8bpctl_97.5\n#> \n#> \n#> $scenario_12\n#> $scenario_12$opt1\n#> # A tibble: 3 \xc3\x97 10\n#> season_id period mean sd median pctl_2.5 pctl_25 pctl_75 pctl_9\xe2\x80\xa6\xc2\xb9 pctl_99\n#> \n#> 1 breeding Jan_Apr 1135. 241. 1111. 728. 964. 1287. 1645. 1949.\n#> 2 feeding May_Sep 1932. 415. 1885. 1229. 1635. 2197. 2819. 3315.\n#> 3 migrating Oct_Dec 696. 152. 689. 442. 587. 791. 1032. 1212.\n#> # \xe2\x80\xa6 with abbreviated variable name \xc2\xb9\xe2\x80\x8bpctl_97.5\n#> \n#> $scenario_12$opt2\n#> # A tibble: 3 \xc3\x97 10\n#> season_id period mean sd median pctl_2.5 pctl_25 pctl_75 pctl_9\xe2\x80\xa6\xc2\xb9 pctl_99\n#> \n#> 1 breeding Jan_Apr 50.9 81.9 24.0 1.68 10.5 60.3 229. 887.\n#> 2 feeding May_Sep 86.6 139. 41.0 2.85 17.8 99.1 384. 1534.\n#> 3 migrating Oct_Dec 31.2 49.5 14.8 1.04 6.37 36.3 139. 535.\n#> # \xe2\x80\xa6 with abbreviated variable name \xc2\xb9\xe2\x80\x8bpctl_97.5\n#> \n#> $scenario_12$opt3\n#> # A tibble: 3 \xc3\x97 10\n#> season_id period mean sd median pctl_2.5 pctl_25 pctl_75 pctl_9\xe2\x80\xa6\xc2\xb9 pctl_99\n#> \n#> 1 breeding Jan_Apr 7.17 22.2 1.95 0.0973 0.707 5.54 43.2 268.\n#> 2 feeding May_Sep 12.2 37.7 3.25 0.172 1.22 9.21 73.2 464.\n#> 3 migrating Oct_Dec 4.38 13.4 1.19 0.0606 0.439 3.28 27.1 154.\n#> # \xe2\x80\xa6 with abbreviated variable name \xc2\xb9\xe2\x80\x8bpctl_97.5\n#> \n#> \n#> $scenario_13\n#> $scenario_13$opt1\n#> # A tibble: 3 \xc3\x97 10\n#> season_id period mean sd median pctl_2.5 pctl_25 pctl_75 pctl_9\xe2\x80\xa6\xc2\xb9 pctl_99\n#> \n#> 1 breeding Dec_Feb 21.5 4.95 21.0 13.1 18.0 24.4 32.0 39.7\n#> 2 feeding Mar_Aug 60.9 13.9 59.8 37.2 50.8 69.7 91.1 112. \n#> 3 migrating Sep_Nov 24.4 5.70 24.0 14.9 20.3 27.8 37.0 45.4\n#> # \xe2\x80\xa6 with abbreviated variable name \xc2\xb9\xe2\x80\x8bpctl_97.5\n#> \n#> $scenario_13$opt2\n#> # A tibble: 3 \xc3\x97 10\n#> season_id period mean sd median pctl_2.5 pctl_25 pctl_75 pctl_9\xe2\x80\xa6\xc2\xb9 pctl_99\n#> \n#> 1 breeding Dec_Feb 21.6 4.72 21.3 13.2 18.4 24.6 31.5 36.8\n#> 2 feeding Mar_Aug 61.3 13.3 60.9 37.6 52.2 69.8 89.0 106. \n#> 3 migrating Sep_Nov 24.6 5.48 24.3 14.8 20.9 27.9 36.1 42.8\n#> # \xe2\x80\xa6 with abbreviated variable name \xc2\xb9\xe2\x80\x8bpctl_97.5\n#> \n#> $scenario_13$opt3\n#> # A tibble: 3 \xc3\x97 10\n#> season_id period mean sd median pctl_2.5 pctl_25 pctl_75 pctl_9\xe2\x80\xa6\xc2\xb9 pctl_99\n#> \n#> 1 breeding Dec_Feb 18.6 3.99 18.7 11.4 15.9 21.1 27.2 35.6\n#> 2 feeding Mar_Aug 52.9 11.3 52.9 32.7 45.1 59.8 77.3 100. \n#> 3 migrating Sep_Nov 21.2 4.68 21.1 13.1 18.0 24.0 31.4 41.3\n#> # \xe2\x80\xa6 with abbreviated variable name \xc2\xb9\xe2\x80\x8bpctl_97.5\n#> \n#> \n#> $scenario_14\n#> $scenario_14$opt1\n#> # A tibble: 3 \xc3\x97 10\n#> season_id period mean sd median pctl_2.5 pctl_25 pctl_75 pctl_9\xe2\x80\xa6\xc2\xb9 pctl_99\n#> \n#> 1 breeding Dec_Feb 43.5 10.0 42.6 26.6 36.5 49.5 64.8 80.5\n#> 2 feeding Mar_Aug 121. 
27.6 119. 73.7 101. 138. 181. 223. \n#> 3 migrating Sep_Nov 49.0 11.4 48.1 30.0 40.8 55.7 74.3 91.2\n#> # \xe2\x80\xa6 with abbreviated variable name \xc2\xb9\xe2\x80\x8bpctl_97.5\n#> \n#> $scenario_14$opt2\n#> # A tibble: 3 \xc3\x97 10\n#> season_id period mean sd median pctl_2.5 pctl_25 pctl_75 pctl_9\xe2\x80\xa6\xc2\xb9 pctl_99\n#> \n#> 1 breeding Dec_Feb 46.0 9.96 45.4 28.2 39.2 52.4 66.7 78.3\n#> 2 feeding Mar_Aug 128. 27.6 127. 78.7 109. 145. 185. 219. \n#> 3 migrating Sep_Nov 51.8 11.5 51.3 31.3 44.2 58.8 75.8 89.4\n#> # \xe2\x80\xa6 with abbreviated variable name \xc2\xb9\xe2\x80\x8bpctl_97.5\n#> \n#> $scenario_14$opt3\n#> # A tibble: 3 \xc3\x97 10\n#> season_id period mean sd median pctl_2.5 pctl_25 pctl_75 pctl_9\xe2\x80\xa6\xc2\xb9 pctl_99\n#> \n#> 1 breeding Dec_Feb 39.6 8.40 39.7 24.3 33.9 45.0 57.7 75.1\n#> 2 feeding Mar_Aug 110. 23.3 110. 68.7 94.2 125. 160. 206. \n#> 3 migrating Sep_Nov 44.7 9.76 44.6 27.7 38.1 50.5 65.7 85.9\n#> # \xe2\x80\xa6 with abbreviated variable name \xc2\xb9\xe2\x80\x8bpctl_97.5\n#> \n#> \n#> $scenario_15\n#> $scenario_15$opt1\n#> # A tibble: 3 \xc3\x97 10\n#> season_id period mean sd median pctl_2.5 pctl_25 pctl_75 pctl_9\xe2\x80\xa6\xc2\xb9 pctl_99\n#> \n#> 1 breeding Dec_Feb 30.8 6.55 30.3 20.0 26.2 34.7 45.5 53.8\n#> 2 feeding Mar_Aug 88.3 18.8 86.9 57.3 75.0 99.5 130. 157. \n#> 3 migrating Sep_Nov 34.5 7.50 33.9 22.1 29.1 38.9 51.6 60.5\n#> # \xe2\x80\xa6 with abbreviated variable name \xc2\xb9\xe2\x80\x8bpctl_97.5\n#> \n#> $scenario_15$opt2\n#> # A tibble: 3 \xc3\x97 10\n#> season_id period mean sd median pctl_2.5 pctl_25 pctl_75 pctl_9\xe2\x80\xa6\xc2\xb9 pctl_99\n#> \n#> 1 breeding Dec_Feb 16.7 3.64 16.5 10.1 14.0 19.1 23.8 29.4\n#> 2 feeding Mar_Aug 47.8 10.4 47.6 29.1 40.2 54.7 68.6 83.1\n#> 3 migrating Sep_Nov 18.6 4.16 18.6 11.2 15.7 21.3 27.0 33.1\n#> # \xe2\x80\xa6 with abbreviated variable name \xc2\xb9\xe2\x80\x8bpctl_97.5\n#> \n#> $scenario_15$opt3\n#> # A tibble: 3 \xc3\x97 10\n#> season_id period mean sd median pctl_2.5 pctl_25 pctl_75 pctl_9\xe2\x80\xa6\xc2\xb9 pctl_99\n#> \n#> 1 breeding Dec_Feb 15.4 3.59 15.4 9.16 12.9 17.5 22.1 31.7\n#> 2 feeding Mar_Aug 44.1 10.3 44.1 26.2 37.0 50.2 62.9 91.6\n#> 3 migrating Sep_Nov 17.2 4.09 17.3 10.2 14.5 19.5 25.1 37.2\n#> # \xe2\x80\xa6 with abbreviated variable name \xc2\xb9\xe2\x80\x8bpctl_97.5\n#> \n#> \n#> $scenario_16\n#> $scenario_16$opt1\n#> # A tibble: 3 \xc3\x97 10\n#> season_id period mean sd median pctl_2.5 pctl_25 pctl_75 pctl_9\xe2\x80\xa6\xc2\xb9 pctl_99\n#> \n#> 1 breeding Dec_Feb 62.3 13.3 61.3 40.4 53.0 70.3 92.2 109.\n#> 2 feeding Mar_Aug 175. 37.3 172. 114. 149. 197. 258. 311.\n#> 3 migrating Sep_Nov 69.1 15.0 68.0 44.2 58.3 78.1 104. 121.\n#> # \xe2\x80\xa6 with abbreviated variable name \xc2\xb9\xe2\x80\x8bpctl_97.5\n#> \n#> $scenario_16$opt2\n#> # A tibble: 3 \xc3\x97 10\n#> season_id period mean sd median pctl_2.5 pctl_25 pctl_75 pctl_9\xe2\x80\xa6\xc2\xb9 pctl_99\n#> \n#> 1 breeding Dec_Feb 35.4 7.67 35.0 21.5 29.9 40.6 50.5 62.3\n#> 2 feeding Mar_Aug 99.4 21.5 98.9 60.8 83.8 114. 143. 173. \n#> 3 migrating Sep_Nov 39.2 8.67 39.1 23.6 33.1 44.8 56.8 69.4\n#> # \xe2\x80\xa6 with abbreviated variable name \xc2\xb9\xe2\x80\x8bpctl_97.5\n#> \n#> $scenario_16$opt3\n#> # A tibble: 3 \xc3\x97 10\n#> season_id period mean sd median pctl_2.5 pctl_25 pctl_75 pctl_9\xe2\x80\xa6\xc2\xb9 pctl_99\n#> \n#> 1 breeding Dec_Feb 32.7 7.54 32.7 19.6 27.6 37.1 46.9 66.3\n#> 2 feeding Mar_Aug 91.9 21.1 91.8 54.9 77.1 104. 131. 188. 
\n#> 3 migrating Sep_Nov 36.3 8.51 36.3 21.5 30.6 40.9 52.7 77.3\n#> # \xe2\x80\xa6 with abbreviated variable name \xc2\xb9\xe2\x80\x8bpctl_97.5\n#> \n#> \n#> $scenario_17\n#> $scenario_17$opt1\n#> # A tibble: 3 \xc3\x97 10\n#> season_id period mean sd median pctl_2.5 pctl_25 pctl_75 pctl_9\xe2\x80\xa6\xc2\xb9 pctl_99\n#> \n#> 1 breeding Dec_Feb 27.1 6.46 26.8 16.4 22.5 31.3 40.4 51.6\n#> 2 feeding Mar_Aug 81.3 19.3 80.0 49.3 67.2 93.3 122. 157. \n#> 3 migrating Sep_Nov 31.0 7.53 30.4 18.5 25.4 35.7 47.4 58.1\n#> # \xe2\x80\xa6 with abbreviated variable name \xc2\xb9\xe2\x80\x8bpctl_97.5\n#> \n#> $scenario_17$opt2\n#> # A tibble: 3 \xc3\x97 10\n#> season_id period mean sd median pctl_2.5 pctl_25 pctl_75 pctl_9\xe2\x80\xa6\xc2\xb9 pctl_99\n#> \n#> 1 breeding Dec_Feb 1.10 0.470 1.04 0.417 0.761 1.34 2.18 3.68\n#> 2 feeding Mar_Aug 3.28 1.40 3.12 1.24 2.27 4.02 6.51 11.0 \n#> 3 migrating Sep_Nov 1.25 0.541 1.19 0.471 0.869 1.54 2.51 4.42\n#> # \xe2\x80\xa6 with abbreviated variable name \xc2\xb9\xe2\x80\x8bpctl_97.5\n#> \n#> $scenario_17$opt3\n#> # A tibble: 3 \xc3\x97 10\n#> season_id period mean sd median pctl_2.5 pctl_25 pctl_75 pctl_9\xe2\x80\xa6\xc2\xb9 pctl_99\n#> \n#> 1 breeding Dec_Feb 0.774 0.408 0.725 0.283 0.520 0.957 1.56 4.85\n#> 2 feeding Mar_Aug 2.32 1.21 2.18 0.855 1.56 2.88 4.64 14.1 \n#> 3 migrating Sep_Nov 0.884 0.467 0.840 0.333 0.593 1.10 1.78 5.34\n#> # \xe2\x80\xa6 with abbreviated variable name \xc2\xb9\xe2\x80\x8bpctl_97.5\n#> \n#> \n#> $scenario_18\n#> $scenario_18$opt1\n#> # A tibble: 3 \xc3\x97 10\n#> season_id period mean sd median pctl_2.5 pctl_25 pctl_75 pctl_9\xe2\x80\xa6\xc2\xb9 pctl_99\n#> \n#> 1 breeding Dec_Feb 54.9 13.1 54.2 33.2 45.5 63.3 81.7 104.\n#> 2 feeding Mar_Aug 161. 38.2 159. 97.8 133. 185. 242. 310.\n#> 3 migrating Sep_Nov 62.2 15.1 61.0 37.2 51.0 71.7 95.0 117.\n#> # \xe2\x80\xa6 with abbreviated variable name \xc2\xb9\xe2\x80\x8bpctl_97.5\n#> \n#> $scenario_18$opt2\n#> # A tibble: 3 \xc3\x97 10\n#> season_id period mean sd median pctl_2.5 pctl_25 pctl_75 pctl_9\xe2\x80\xa6\xc2\xb9 pctl_99\n#> \n#> 1 breeding Dec_Feb 2.33 0.992 2.22 0.889 1.62 2.84 4.63 7.73\n#> 2 feeding Mar_Aug 6.84 2.90 6.50 2.61 4.73 8.38 13.6 22.7 \n#> 3 migrating Sep_Nov 2.64 1.13 2.52 0.998 1.83 3.24 5.29 9.20\n#> # \xe2\x80\xa6 with abbreviated variable name \xc2\xb9\xe2\x80\x8bpctl_97.5\n#> \n#> $scenario_18$opt3\n#> # A tibble: 3 \xc3\x97 10\n#> season_id period mean sd median pctl_2.5 pctl_25 pctl_75 pctl_9\xe2\x80\xa6\xc2\xb9 pctl_99\n#> \n#> 1 breeding Dec_Feb 1.65 0.855 1.54 0.606 1.11 2.03 3.31 10.1\n#> 2 feeding Mar_Aug 4.83 2.49 4.55 1.79 3.26 6.00 9.54 28.7\n#> 3 migrating Sep_Nov 1.86 0.969 1.77 0.707 1.25 2.31 3.68 11.0\n#> # \xe2\x80\xa6 with abbreviated variable name \xc2\xb9\xe2\x80\x8bpctl_97.5\n```\n\n### Band model example\n\nThis is an example usage of `band_crm()`. This is for a single species\nand single set of turbine parameters. This replicates the Band (2012)\nworksheet. 
The `stoch_crm()` function wraps around this function, where\n`band_crm()` acts in essence as a single draw of `stoch_crm()`.\n\nPlease note the example runs on fictional data.\n\n``` r\nlibrary(stochLAB)\n# ------------------------------------------------------\n# Run with arbitrary parameter values, for illustration\n# ------------------------------------------------------\n\n# Setting a dataframe of parameters to draw from\nparams <- data.frame(\n flight_speed = 13.1, # Flight speed in m/s\n body_lt = 0.85, # Body length in m\n wing_span = 1.01, # Wing span in m\n flight_type = ""flapping"", # flapping or gliding flight\n avoid_rt_basic = 0.989, # avoidance rate for option 1 and 2\n avoid_rt_ext = 0.981, # extended avoidance rate for option 3 and 4\n noct_activity = 0.5, # proportion of day birds are inactive\n prop_crh_surv = 0.13, # proportion of birds at collision risk height (option 1 only)\n prop_upwind = 0.5, # proportion of flights that are upwind\n rotor_speed = 15, # rotor speed in m/s\n rotor_radius = 120, # radius of turbine in m\n blade_width = 5, # width of turbine blades at thickest point in m\n blade_pitch = 15, # mean radius pitch in Radians\n n_blades = 3, # total number of blades per turbine\n hub_height = 150, # height of hub in m above HAT\n n_turbines = 100, # number of turbines in the wind farm\n wf_width = 52, # width across longest section of wind farm\n wf_latitude = 56, # latitude of centroid of wind farm\n tidal_offset = 2.5, # mean tidal offset from HAT of the wind farm\n lrg_arr_corr = TRUE # apply a large array correction?\n)\n\n# Monthly bird densities\nb_dens <- data.frame(\n month = month.abb,\n dens = runif(12, 0.8, 1.5)\n)\n\n# flight height distribution from Johnston et al\ngen_fhd_dat <- Johnston_Flight_heights_SOSS %>%\n dplyr::filter(variable==""Gannet.est"") %>%\n dplyr::select(height,prop)\n\n# monthly operational time of the wind farm\nturb_oper <- data.frame(\n month = month.abb,\n prop_oper = runif(12,0.5,0.8)\n)\n\n\nstochLAB::band_crm(\n model_options = c(1,2,3),\n flight_speed = params$flight_speed,\n body_lt = params$body_lt,\n wing_span = params$wing_span,\n flight_type = params$flight_type,\n avoid_rt_basic = params$avoid_rt_basic,\n avoid_rt_ext = params$avoid_rt_ext,\n noct_activity = params$noct_activity,\n prop_crh_surv = params$prop_crh_surv,\n dens_month = b_dens,\n prop_upwind = params$prop_upwind,\n gen_fhd = gen_fhd_dat,\n site_fhd = NULL, # Option 4 only\n rotor_speed = params$rotor_speed,\n rotor_radius = params$rotor_radius,\n blade_width = params$blade_width,\n blade_pitch = params$blade_pitch,\n n_blades = params$n_blades,\n hub_height = params$hub_height,\n chord_prof = chord_prof_5MW,\n n_turbines = params$n_turbines,\n turb_oper_month = turb_oper,\n wf_width = params$wf_width,\n wf_latitude = params$wf_latitude,\n tidal_offset = params$tidal_offset,\n lrg_arr_corr = params$lrg_arr_corr\n )\n#> # A tibble: 12 \xc3\x97 4\n#> month opt1 opt2 opt3\n#> \n#> 1 Jan 98.6 25.4 10.4 \n#> 2 Feb 53.2 13.7 5.61\n#> 3 Mar 83.7 21.6 8.83\n#> 4 Apr 61.4 15.8 6.48\n#> 5 May 117. 30.2 12.4 \n#> 6 Jun 91.7 23.6 9.67\n#> 7 Jul 114. 29.3 12.0 \n#> 8 Aug 91.1 23.5 9.61\n#> 9 Sep 106. 
27.3 11.2 \n#> 10 Oct 64.1 16.5 6.77\n#> 11 Nov 60.7 15.7 6.40\n#> 12 Dec 73.4 18.9 7.75\n```\n'",,"2021/07/15, 19:55:27",832,CUSTOM,22,178,"2023/03/10, 09:46:12",11,15,21,5,229,0,0.1,0.24242424242424243,"2022/12/21, 18:10:01",v1.1.2,0,3,false,,false,true,,,https://github.com/HiDef-Aerial-Surveying,http://hidef.bioconsult-sh.de/,"Cleator Moor, Cumbria, UK",,,https://avatars.githubusercontent.com/u/41015119?v=4,,, Energy Research and Forecasting,"Designed to provide a flexible computational framework for the exploration and investigation of different physics parameterizations and numerical strategies, and to characterize the flow field that impacts the ability of wind turbines to extract wind energy.",erf-model,https://github.com/erf-model/ERF.git,github,,Wind Energy,"2023/10/20, 00:07:56",15,0,10,true,C++,Energy Research and Forecasting,erf-model,"C++,Assembly,CMake,Python,Makefile,Shell,TeX",https://erf.readthedocs.io/en/latest/,"b""Energy Research and Forecasting (ERF): An atmospheric modeling code\n----\n\n`ERF` is built upon the `AMReX `_ software framework\nfor massively parallel block-structured applications.\n\n.. image:: https://zenodo.org/badge/DOI/10.5281/zenodo.8102984.svg\n :target: https://doi.org/10.5281/zenodo.8102984\n\n.. image:: https://joss.theoj.org/papers/10.21105/joss.05202/status.svg\n :target: https://doi.org/10.21105/joss.05202\n\nTest Status\n~~~~~~~~~~~\n\n================= =============\nRegression Tests |regtests|\n================= =============\n\n.. |regtests| image:: https://github.com/erf-model/ERF/actions/workflows/ci.yml/badge.svg?branch=development\n\nGetting Started\n~~~~~~~~~~~~~~~\n\nSee `Getting Started `_ for instructions on how to clone the ERF\nand AMReX codes, and for how to build and run an ERF example. Minimum requirements for system software are also given there.\n\nDocumentation\n~~~~~~~~~~~~~~~~~\n\nDocumentation of the ERF theory and implementation is available `here `_ .\n\nIn addition, there is doxygen documentation of the ERF code available `here `_\n\nDevelopment model\n~~~~~~~~~~~~~~~~~\n\nSee CONTRIBUTING.md for how to contribute to ERF development.\n\nAcknowledgments\n~~~~~~~~~~~~~~~\n\nThe development of the Energy Research and Forecasting (ERF) code is funded by the Wind Energy Technologies Office (WETO), part of the U.S. Department of Energy (DOE)'s Office of Energy Efficiency & Renewable Energy (EERE).\n\nThe developers of ERF acknowledge and thank the developers of the AMReX-based\n`PeleC `_ ,\n`FHDeX `_ and\n`AMR-Wind `_ codes. In the spirit of open source code\ndevelopment, the ERF project has ported sections of code from each of these projects rather\nthan writing them from scratch.\nERF is built on the `AMReX `_ library.\n\nLicense\n~~~~~~~~~\n\nERF Copyright (c) 2022, The Regents of the University of California,\nthrough Lawrence Berkeley National Laboratory, National Renewable Energy Laboratory,\nLawrence Livermore National Laboratory and Argonne National\nLaboratory (subject to receipt of any required approvals from the\nU.S. Dept. of Energy). All rights reserved.\n\nIf you have questions about your rights to use or distribute this\nsoftware, please contact Berkeley Lab's Innovation & Partnerships\nOffice at IPO@lbl.gov.\n\nNOTICE. This Software was developed under funding from the\nU.S. Department of Energy and the U.S. Government consequently retains\ncertain rights. As such, the U.S. 
Government has been granted for\nitself and others acting on its behalf a paid-up, nonexclusive,\nirrevocable, worldwide license in the Software to reproduce,\ndistribute copies to the public, prepare derivative works, and perform\npublicly and display publicly, and to permit others to do so.\n\nThe license for ERF can be found in the LICENSE.md file.\n\n""",",https://doi.org/10.5281/zenodo.8102984,https://doi.org/10.21105/joss.05202","2020/10/21, 21:05:22",1098,CUSTOM,662,2862,"2023/10/20, 00:08:00",5,1215,1270,518,5,4,0.4,0.6463364715513713,"2023/10/02, 16:32:18",23.10,0,37,false,,false,true,,,https://github.com/erf-model,,,,,https://avatars.githubusercontent.com/u/59941622?v=4,,, WecOptTool,Allows users to perform wave energy converter device design optimization studies with constrained optimal control.,SNL-WaterPower,https://github.com/SNL-WaterPower/WecOptTool.git,github,snl-applications,Hydro Energy,"2023/05/10, 21:18:06",1,0,1,true,HTML,Sandia National Laboratories Water Power Technologies,SNL-WaterPower,HTML,https://sandialabs.github.io/WecOptTool/,b'# WecOptTool is now hosted at https://github.com/sandialabs/WecOptTool.',,"2023/05/10, 17:34:05",168,CUSTOM,3,3,"2023/10/20, 00:08:00",0,0,0,0,5,0,0,0.0,,,0,1,false,,false,false,,,https://github.com/SNL-WaterPower,http://energy.sandia.gov/energy/renewable-energy/water-power/,"Albuquerque, NM",,,https://avatars.githubusercontent.com/u/5272629?v=4,,, CACTUS,"A turbine performance simulation code, based on a free wake vortex method, to study wind turbines and marine hydrokinetic devices.",SNL-WaterPower,https://github.com/sandialabs/CACTUS.git,github,,Hydro Energy,"2021/07/19, 11:58:37",17,0,1,true,Fortran,Sandia National Laboratories,sandialabs,"Fortran,Python,GLSL,MATLAB,CMake",,"b""![](media/CACTUS.png) \n# CACTUS (Code for Axial and Cross-flow TUrbine Simulation)\n\nCACTUS (**C**ode for **A**xial and **C**ross-flow **TU**rbine **S**imulations),\ndeveloped at Sandia National Laboratories, is a turbine simulation code based on a free wake vortex method. \n\n\n### Compiling\n\nCACTUS can be compiled via CMake. For details, see [doc/compile.md](doc/compile.md)\n\n#### Tests\nSimple regression tests are included. After compiling, navigate to `test/RegTest/` and run:\n\n```\nPATH=$PATH:../../bin pytest runreg.py\n```\n\n### Directory Structure\n\n- `bin`: Default target for compiled executables\n- `DAKOTA`: DAKOTA drivers (by Jon Murray) and examples\n- `doc`: Documentation -- user's manual, install instructions, DAKOTA drivers manual, relevant publications\n- `make`: Makefiles for various compilers and platforms\n- `src`: Source code\n- `test`: Test cases (regression tests, example HAWT/VAWT input files, airfoil files)\n\n### Post-processing\n\nTools for post-processing data from CACTUS simulations are available in the\n[CACTUS-tools](https://github.com/SNL-WaterPower/CACTUS-tools) repository.\n\n\n### References\n\nFor details about the development of CACTUS, please see\n\n- Murray, J., and Barone, M., “The Development of CACTUS, a Wind and Marine Turbine Performance Simulation Code,” _49th AIAA Aerospace Sciences Meeting including the New Horizons Forum and Aerospace Exposition_, Reston, Virginia: American Institute of Aeronautics and Astronautics, 2011, pp. 1–21.\n\n### Disclaimer\n\nCACTUS model V&V studies (Michelen et al. 2014; Wosnik et al. 
2016) for cross-flow hydrokinetic turbines demonstrated that it accurately predicts performance characteristics for axial-flow turbines, but that it should not be used for cross-flow geometries.\n\n- Michelen, C., V.S. Neary, J. Murray, and M. Barone (2014). CACTUS open-source code for hydrokinetic turbine design and analysis: Model performance evaluation and public dissemination as open-source tool. Proceedings of the 2nd Marine Energy Technology Symposium 2014 (METS2014), at the 7th Annual Global Marine Renewable Energy Conference (GMREC 2014), Seattle, WA, April 15-18. \n\n- Wosnik M., Bachant P., Neary V.S., and A.W. Murphy (2016). Evaluation of Design & Analysis Code, CACTUS, for Predicting Cross-flow Hydrokinetic Turbine Performance. SAND2016-9787, September 2016. 34 pages.\n\n""",,"2013/12/04, 20:59:48",3611,BSD-3-Clause,0,361,"2022/12/02, 15:11:44",18,17,38,1,327,0,0.3,0.30722891566265065,"2020/06/03, 16:27:21",REL-1.1,0,6,false,,false,false,,,https://github.com/sandialabs,https://software.sandia.gov,United States,,,https://avatars.githubusercontent.com/u/4993680?v=4,,, hydro-power-database,Collects basic information on all the European hydro-power plants.,energy-modelling-toolkit,https://github.com/energy-modelling-toolkit/hydro-power-database.git,github,"hydropower,energy-system,power-system-simulation",Hydro Energy,"2023/09/15, 19:18:23",46,0,6,true,,Energy Modelling Toolkit ,energy-modelling-toolkit,,,"b"" [![License: CC BY 4.0](https://img.shields.io/badge/License-CC%20BY%204.0-lightgrey.svg)](https://creativecommons.org/licenses/by/4.0/)\n [![DOI](https://zenodo.org/badge/179688356.svg)](https://zenodo.org/badge/latestdoi/179688356)\n\n# JRC Hydro-power plants database\n\n![map of hydro-power plants](map-location.png)\n\nThe development of this dataset started as an output of the Energy work package of the Water-Energy-Food-Ecosystems (WEFE) Nexus project at the European Commission's Joint Research Centre (JRC). The dataset has been created for power system modelling purposes, is based on publicly available sources, and collects basic information on all the European hydro-power plants. Other related datasets are available in [the JRC Data Catalogue](https://data.jrc.ec.europa.eu/collection/id-00134).\n\nThe dataset contains the following variables (documented in the data package JSON):\n - id of the plant\n - name of the power plant\n - installed capacity (and pumping capacity when available)\n - country\n - coordinates\n - typology of the power plant (run-of-river, reservoir-based or pumped-storage)\n - nominal head or height of the dam\n - size of the usable reservoir in millions of cubic meters\n - maximum storage capacity in MWh\n - link with the GEO, PyPSA-EUR and WRI Global Power Plants databases\n - annual average/expected generation in GWh\n \nThe storage capacity is reported only when directly available from the source, and thus is **not** estimated or derived from the other variables. \n\nThis dataset is released under the [CC-BY-4.0 license](https://creativecommons.org/licenses/by/4.0/). This dataset is an open project and it is not an official product of the European Commission.
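To get a feel for the data, here is a minimal R sketch that reproduces a per-country capacity summary like the Coverage table below. The CSV path and the column names (`country_code`, `installed_capacity_MW`) are assumptions for illustration; check the data package JSON for the actual schema:

``` r
library(readr)
library(dplyr)

# Hypothetical file location inside a local clone; adjust to the
# actual path and name documented in the data package JSON
plants <- readr::read_csv("data/jrc-hydro-power-plant-database.csv")

# Installed capacity in GW per country (column names are assumed)
plants %>%
  group_by(country_code) %>%
  summarise(GW = round(sum(installed_capacity_MW, na.rm = TRUE) / 1000, 1)) %>%
  arrange(desc(GW))
```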
\n\n## Author and contributors\nHere a list of all the people which have personally contributed to this dataset. If you think that your name should be here please send me an email.\n\n - Matteo De Felice: main author\n - Konstantinos Kanellopoulos, JRC\n - Ignacio Hidalgo-Gonz\xc3\xa1lez, JRC\n - [Kachirayil Febin](https://github.com/febinka)\n - Arnau Cangr\xc3\xb2s, Catalan Water Agency\n - Hrvoje Medarac, JRC\n - Goran Stunjek, University of Zagreb\n - Goran Krajacic, University of Zagreb\n - Jonas H\xc3\xb6rsch, Reiner Lemoine Institut (RLI) and KIT\n - Francesco Careri, JRC\n - Sebastian Busch, JRC\n - Andr\xc3\xa9 Ortner, MVV Energie and TU Wien\n - Cl\xc3\xa9ment Cabot, PSL - Mines-ParisTech\n - [@flacombe](https://github.com/flacombe)\n - [@timtroendle](https://github.com/timtroendle)\n - Antoine Dubois, University of Li\xc3\xa8ge\n - [Shruthi Patil](https://github.com/Shruthi-Patil)\n \n## Coverage\n\nThe dataset contains 4182 hydro-power plants. This is a table summarising the installed capacity in GW for all the countries appearing in the database. \n\n|country | GW|\n|:----------------------|----:|\n|Albania | 2.1|\n|Austria | 13.5|\n|Belgium | 1.4|\n|Bosnia and Herzegovina | 2.0|\n|Bulgaria | 2.9|\n|Croatia | 2.1|\n|Czechia | 1.9|\n|Finland | 2.6|\n|France | 20.6|\n|Germany | 10.8|\n|Greece | 3.4|\n|Hungary | 0.0|\n|Ireland | 0.5|\n|Italy | 19.4|\n|Kosovo | 0.1|\n|Latvia | 1.5|\n|Lithuania | 1.0|\n|Montenegro | 0.7|\n|Norway | 33.0|\n|North Macedonia | 0.6|\n|Poland | 2.1|\n|Portugal | 8.0|\n|Romania | 6.2|\n|Serbia | 2.8|\n|Slovakia | 2.5|\n|Slovenia | 1.2|\n|Spain | 16.1|\n|Sweden | 13.9|\n|Switzerland | 19.3|\n|United Kingdom | 4.3|\n\n## Sources\n\nThe database has been built collecting the information from several other sources and then cross-checking and comparing in case of inconsistencies. The list of the used sources is here:\n\n - [JRC Power Plants Database (PPDB)](https://zenodo.org/record/3349843)\n - [Global Energy Observatory (GEO)](http://globalenergyobservatory.org/)\n - [PyPSA-EUR](https://github.com/PyPSA/pypsa-eur)\n - [Global Reservoir and Dam Database (GRanD) 1.3](http://globaldamwatch.org/) \n - [Marktstammdatenregister MaStR 1.2](https://www.marktstammdatenregister.de/MaStR)\n - [Open Data R\xc3\xa9seaux \xc3\x89nergies](https://opendata.reseaux-energies.fr/pages/accueil/)\n - [OpenStreetMap](www.openstreetmap.org)\n - [Engadiner Kraftwerke AG website](https://www.ekwstrom.ch/startseite.html)\n - [Latvenergo AS website](https://www.latvenergo.lv/)\n - [Gimeno-Guti\xc3\xa9rrez and Roberto Lacal-Ar\xc3\xa1ntegui, Renewable Energy, 2015](https://www.sciencedirect.com/science/article/pii/S096014811400706X)\n - [Gerritsma, M.K. (2016) Faculty of Geosciences Theses](https://dspace.library.uu.nl/handle/1874/339185)\n - [Swiss committee on dams website](http://swissdams.ch/)\n - [VGB PowerTech e.V. 
website](https://www.vgb.org/)\n - [EDP website](www.edp.pt)\n - [Drinsko-Limske Hidroelektrane website](http://dlhe.rs/)\n - [Verbund website](https://www.verbund.com/)\n - [Geth et al., Renewable and Sustainable Energy Reviews, 2015](https://www.sciencedirect.com/science/article/pii/S1364032115007923)\n - Wikipedia EN, DE, FR\n - [Karin Salevid, Uppsala Universitet, 2013](https://uu.diva-portal.org/smash/get/diva2:661286/FULLTEXT01.pdf)\n - [Vattenkraft.info](https://vattenkraft.info/)\n - [The United Nations Economic Commission for Europe (UNECE) website](http://www.unece.org/)\n - [Statkraft website](https://www.statkraft.com/)\n - [Salini-Impregilo website](www.salini-impregilo.com)\n - [HEP group website](hep.hr)\n - [Spanish Ministry of Agriculture, Fisheries and Food website](https://www.mapama.gob.es/)\n - [Comit\xc3\xa9 Fran\xc3\xa7ais des Barrages et R\xc3\xa9servoirs website](http://www.barrages-cfbr.eu/)\n - [JP \xe2\x80\x9eElektroprivreda HZ HB website](https://www.ephzhb.ba/)\n - [SSE website](https://sse.com/)\n - [WaterGenPower website](https://www.watergenpower.eu/)\n - [Compagnia Valdostana delle Acque website](http://www.cvaspa.it/)\n - [Fingrid website](https://www.fingrid.fi/)\n - [HE \xc4\x90erdap website](http://www.djerdap.rs/)\n - [Vattenfall website](http://www.vattenfall.se/)\n - [P\xc3\xb6yry PLC website](https://www.poyry.com/)\n - [Innogy website](https://www.innogy.com/)\n - [Dravske elektrarne Maribor website](http://www.dem.si/)\n - [Hidroelektrarne na Spodnji Savi website](http://www.he-ss.si/)\n - [Savske elektrarne Ljubljana website](http://www.sel.si/)\n - [Kemijoki website](https://www.kemijoki.fi)\n - [NVE website](https://www.nve.no/)\n - [Regione Sardegna website](https://www.regione.sardegna.it)\n - [ENAS Regione Sardegna website](http://www.enas.sardegna.it)\n - [GHT Engineering website](http://www.ghtengineering.it)\n - [TPF Ingenerie website](https://tpf.eu/)\n - [Bissi Holding website](http://www.bissiholding.com)\n - [Burgo group website](https://www.burgo.com)\n - [Edison website](https://www.edison.it)\n - [Italgen website](http://www.italgen.it/)\n - [Eneco website](http://www.eneco.it/it/home.html)\n - [IREN Energia website](http://www.irenenergia.it)\n - [Tirreno power](http://www.tirrenopower.com)\n - [\xc4\x8cEZ website](https://www.cez.cz)\n - [\xc4\x8cEPS website](https://www.ceps.cz/cs/)\n - [ESB website](https://www.esb.ie)\n - [Charlotta Canzler, THE ECONOMICS OF SWISS HYDROPOWER PRODUCTION](https://ivm.vu.nl/en/Images/Canzler_Charlotta_-_Thesis_Charlotta_Canzler_PDF_tcm234-352241.pdf)\n - [UK Energy Watch website](http://www.ukenergywatch.org)\n - [hydrelect.info](http://www.hydrelect.info/)\n - [Digit\xc3\xa1lis Tank\xc3\xb6nyvt\xc3\xa1r](https://www.tankonyvtar.hu/en)\n - [Hidroelectrica](www.hidroelectrica.ro)\n - [Kalivac HPP](http://kalivachpp.com/)\n - [Ayen](http://www.ayen.com.tr/eng/)\n - [International Hydropower Association](https://www.hydropower.org/)\n - [\xd0\x9d\xd0\x95\xd0\x9a \xd0\x95\xd0\x90\xd0\x94](https://vec.nek.bg/)\n - [E-Control](www.e-control.at)\n - [UGT](https://www.ugt.es/)\n - [Kesh](http://kesh.al/)\n - [Andritz](https://www.andritz.com/group-en)\n - [Devoll Hydropower Sh.A](www.devollhydropower.al)\n - [Enti Rregullator i Energjise](https://ere.gov.al/)\n - [Catalan Water Agency](http://aca.gencat.cat/ca/inici)\n - [Electricit\xc3\xa9 d'Emosson SA](http://www.emosson.ch)\n - [Hydropower and climate change, insights from the integrated water-energy modelling of the Drin 
Basin](https://www.sciencedirect.com/science/article/pii/S2211467X23000482?via%3Dihub)\n""",",https://zenodo.org/badge/latestdoi/179688356,https://zenodo.org/record/3349843","2019/04/05, 13:38:23",1664,CC-BY-4.0,1,62,"2022/03/23, 17:26:07",5,5,13,0,581,3,0.2,0.11111111111111116,,,0,4,false,,false,false,,,https://github.com/energy-modelling-toolkit,,,,,https://avatars.githubusercontent.com/u/32776077?v=4,,, MHKiT-Python,"Provides the marine renewable energy community tools for data processing, visualization, quality control, resource assessment, and device performance.",MHKiT-Software,https://github.com/MHKiT-Software/MHKiT-Python.git,github,"mhkit-python,mhkit,marine-renewable-energy,quality-control,visualization,python",Hydro Energy,"2023/08/31, 13:11:43",38,0,9,true,Python,MHKiT,MHKiT-Software,Python,https://mhkit-software.github.io/MHKiT/,"b'![](figures/logo.png) MHKiT-Python\n=====================================\n\n
\n\nMHKiT-Python is a Python package designed for marine renewable energy applications to assist in \ndata processing and visualization. The software package includes functionality for:\n\n* Data processing\n* Data visualization\n* Data quality control\n* Resource assessment\n* Device performance\n* Device loads\n\nDocumentation\n------------------\nMHKiT-Python documentation includes overview information, installation instructions, API documentation, and examples.\nSee the [MHKiT documentation](https://mhkit-software.github.io/MHKiT) for more information.\n\nInstallation\n------------------------\nMHKiT-Python requires Python (3.7, 3.8, or 3.9) along with several Python \npackage dependencies. MHKiT-Python can be installed from PyPI using the command ``pip install mhkit``.\nSee [installation instructions](https://mhkit-software.github.io/MHKiT/installation.html) for more information.\n\nCopyright and license\n------------------------\nMHKiT-Python is copyright through the National Renewable Energy Laboratory, \nPacific Northwest National Laboratory, and Sandia National Laboratories. \nThe software is distributed under the Revised BSD License.\nSee [copyright and license](LICENSE.md) for more information.\n\nIssues\n------------------------\nThe GitHub platform has the Issues feature that is used to track ideas, feedback, tasks, and/or bugs. To submit an Issue, follow the steps below. More information about GitHub Issues can be found [here](https://docs.github.com/en/issues/tracking-your-work-with-issues/about-issues)\n1. Navigate to the [MHKiT-Python main page](https://github.com/MHKiT-Software/MHKiT-Python)\n2. Under the repository name (upper left), click **Issues**.\n3. Click **New Issue**.\n4. If the Issue is a bug, use the **Bug report** template and click **Get started**, otherwise click on the **Open a blank issue** link.\n5. Provide a **Title** and **description** for the issue. Be sure the title is relevant to the issue and that the description is clear and provided with sufficient detail.\n6. When you\'re finished, click **Submit new issue**. The developers will follow up once the issue is addressed.\n\nCreating a fork\n------------------------\nThe GitHub platform has the Fork feature that facilitates code modification and contributions. A fork is a new repository that shares code and visibility settings with the original upstream repository. To fork MHKiT-Python, follow the steps below. More information about GitHub Forks can be found [here](https://docs.github.com/en/get-started/quickstart/fork-a-repo)\n1. Navigate to the [MHKiT-Python main page](https://github.com/MHKiT-Software/MHKiT-Python)\n2. Under the repository name (upper left), click **Fork**.\n3. Select an owner for the forked repository.\n4. Specify a name for the fork. By default, forks are named the same as their upstream repositories.\n5. Add a description of your fork (optional).\n6. Choose whether to copy only the default branch or all branches to the new fork. You will only need to copy the default branch to contribute to MHKiT-Python.\n7. When you\'re finished, click **Create fork**. You will now have a fork of the MHKiT-Python repository.\n\nCreating a branch\n------------------------\nThe GitHub platform has the branch feature that facilitates code contributions and collaboration amongst developers. A branch isolates development work without affecting other branches in the repository. Each repository has one default branch, and can have multiple other branches. 
To create a branch of your forked MHKiT-Python repository, follow the steps below. More information about GitHub branches can be found [here](https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/proposing-changes-to-your-work-with-pull-requests/about-branches)\n1. Navigate to your fork of MHKiT-Python (see instructions above)\n2. Above the list of files, click **Branches**.\n3. Click **New Branch**. \n4. Enter a name for the branch. Be sure to select **MHKiT-Software/MHKiT-Python:master** as the source.\n5. Click **Create branch**. You will now have a branch on your fork of MHKiT-Python that you can use to work with the code base.\n\nCreating a pull request\n------------------------\nThe GitHub platform has the pull request feature that allows you to propose changes to a repository such as MHKiT-Python. The pull request will allow the repository administrators to evaluate the pull request. To create a pull request for MHKiT-Python repository, follow the steps below. More information about GitHub pull requests can be found [here](https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/proposing-changes-to-your-work-with-pull-requests/creating-a-pull-request)\n1. Navigate to the [MHKiT-Python main page](https://github.com/MHKiT-Software/MHKiT-Python)\n2. Above the list of files, click **Pull request**.\n3. On the compare page, click **Compare accross forks**. \n4. In the ""base branch"" drop-down menu, select the branch of the upstream repository you\'d like to merge changes into. \n5. In the ""head fork"" drop-down menu, select your fork, then use the ""compare branch"" drop-down menu to select the branch you made your changes in.\n6. Type a title and description for your pull request.\n7. If you want to allow anyone with push access to the upstream repository to make changes to your pull request, select **Allow edits from maintainers**.\n8. To create a pull request that is ready for review, click **Create Pull Request**. To create a draft pull request, use the drop-down and select **Create Draft Pull Request**, then click **Draft Pull Request**. More information about draft pull requests can be found [here](https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/proposing-changes-to-your-work-with-pull-requests/about-pull-requests#draft-pull-requests)\n9. MHKiT-Python adminstrators will review your pull request and contact you if needed.\n\n'",",https://doi.org/10.5281/zenodo.3924683","2019/12/17, 18:24:24",1408,BSD-3-Clause,26,658,"2023/10/16, 16:18:39",23,167,245,83,9,6,0.4,0.6236363636363637,"2023/08/11, 19:00:12",v0.7.0.1,0,19,false,,false,false,,,https://github.com/MHKiT-Software,https://mhkit-software.github.io/MHKiT/,,,,https://avatars.githubusercontent.com/u/58954145?v=4,,, hydropowerlib,Designed to calculate feed-in time series of run-of-the-river hydropower plants.,hydro-python,https://github.com/hydro-python/hydropowerlib.git,github,,Hydro Energy,"2019/06/11, 13:15:20",10,0,4,false,Python,,hydro-python,"Python,MATLAB",,"b'hydropowerlib\n==============\n\nThe hydropowerlib is designed to calculate feedin time series of run-of-the-river hydropower plants. The hydropowerlib is an out-take from the \n`feedinlib `_ (hydropower, windpower and pv) to build up a community concentrating on hydropower models.\n\n.. 
contents:: `Table of contents`\n :depth: 1\n :local:\n :backlinks: top\n\nIntroduction\n============\n\nIf you have water flow data sets, you can use the hydropowerlib to calculate the electrical output of common hydropower turbines. \nBasic parameters for different types of turbine are provided with the library so that you can start directly using one of these parameter sets. Of course you are free to add your own parameter set.\nFor a quick start, download the example water flow data and the basic example file and execute it:\n\nhttps://github.com/hydro-python/hydropowerlib/tree/master/example\n\nDocumentation\n==============\n\nFull documentation can be found at `readthedocs `_. Use the project site of readthedocs to choose the version of the documentation. \n\nContributing\n==============\n\nClone/Fork: https://github.com/hydro-python/hydropowerlib\n\nIf you are interested in hydropower models and want to help improve the existing model, do not hesitate to contact us.\nAs the hydropowerlib started with contributors from the `oemof developer group `_ we use the same \n`developer rules `_.\n\n\nInstallation\n============\n\nInstall the hydropowerlib using pip3.\n\n::\n\n pip3 install hydropowerlib\n\nSo far, the hydropowerlib is mainly tested on Python 3.4 but seems to work down to 2.7.\nPlease see the `installation page `_ of the oemof documentation for complete instructions on how to install Python on your operating system.\n\n \nOptional Packages\n~~~~~~~~~~~~~~~~~\n\nTo see the plots of the example file you should install the matplotlib package.\n\nMatplotlib can be installed using pip3, but some Linux users reported that it is easier and more stable to use the pre-built packages of your Linux distribution.\n\nhttp://matplotlib.org/users/installing.html\n\n'",,"2017/04/10, 11:18:02",2389,GPL-3.0,0,44,"2019/06/11, 09:56:10",9,0,1,0,1597,0,0,0.0714285714285714,,,0,2,false,,false,false,,,https://github.com/hydro-python,,,,,https://avatars.githubusercontent.com/u/27208044?v=4,,, HydroPowerSimulations.jl,Contains extensions on PowerSystems.jl and PowerSimulations.jl to enable enhanced hydropower representations.,NREL-SIIP,https://github.com/NREL-Sienna/HydroPowerSimulations.jl.git,github,,Hydro Energy,"2023/09/14, 15:40:07",5,0,1,true,Julia,NREL-Sienna,NREL-Sienna,Julia,,"b'# HydroPowerSimulations.jl\n\n[![Main - CI](https://github.com/NREL-Sienna/HydroPowerSimulations.jl/actions/workflows/main-tests.yml/badge.svg)](https://github.com/NREL-Sienna/HydroPowerSimulations.jl/actions/workflows/main-tests.yml)\n[![codecov](https://codecov.io/gh/NREL-Sienna/HydroPowerSimulations.jl/branch/main/graph/badge.svg?token=4TAeajF0h6)](https://codecov.io/gh/NREL-Sienna/HydroPowerSimulations.jl)\n[![Documentation Build](https://github.com/NREL-Sienna/HydroPowerSimulations.jl/actions/workflows/docs.yml/badge.svg)](https://nrel-sienna.github.io/HydroPowerSimulations.jl/dev/)\n[](https://join.slack.com/t/nrel-sienna/shared_invite/zt-glam9vdu-o8A9TwZTZqqNTKHa7q3BpQ)\n\n## Development\n\nContributions to the development and enhancement of HydroPowerSimulations are welcome. Please see [CONTRIBUTING.md](https://github.com/NREL-Sienna/HydroPowerSimulations.jl/blob/master/CONTRIBUTING.md) for code contribution guidelines.\n\n## License\n\nHydroPowerSimulations is released under a BSD [license](https://github.com/NREL-Sienna/HydroPowerSimulations/blob/master/LICENSE). HydroPowerSimulations has been developed as part of the R2D2 Project at the U.S. 
Department of Energy\'s National Renewable Energy Laboratory ([NREL](https://www.nrel.gov/))\n'",,"2020/08/18, 16:55:39",1163,BSD-3-Clause,74,129,"2023/09/14, 15:30:42",2,16,20,12,41,1,1.7,0.4563106796116505,"2023/09/14, 16:09:16",v0.4.1,0,4,false,,false,false,,,https://github.com/NREL-Sienna,https://www.nrel.gov/analysis/sienna.html,"Golden, CO",,,https://avatars.githubusercontent.com/u/44615001?v=4,,, OpenHPL,An open source hydropower library that consists of hydropower unit models and is modeled using Modelica.,simulatino,https://github.com/OpenSimHub/OpenHPL.git,github,"hydropower,modelica-library,open-source",Hydro Energy,"2023/06/19, 13:40:09",16,0,1,true,Modelica,OpenSimHub,OpenSimHub,"Modelica,TeX,Motoko,Python",https://openhpl.opensimhub.org,"b""# OpenHPL\n\nOpenHPL is an open-source hydropower library that consists of hydropower unit models and is modelled using Modelica.\n\n## Library description\n\nThe OpenHPL makes it possible to model hydropower systems of different complexity and connect them\nwith models from other libraries, e.g., with models of the power system or other power generating\nsources.\n\nMore information about the library can be found in the [User's Guide](OpenHPL/Resources/Documents/UsersGuide.pdf) and the following [PhD Thesis](http://hdl.handle.net/11250/2608105).\n\n## Current release\n\nDownload [OpenHPL v2.0.1 (2023-03-10)](../../releases/tag/v2.0.1)\n\n## License\n\nCopyright © 2019-2023\n* [University of South-Eastern Norway](https://www.usn.no/english/) (USN)\n\nThis Source Code Form is subject to the terms of the [Mozilla Public License, v. 2.0](LICENSE).\n\n## Contact\n\nThe maintainers can be contacted by email: [OpenHPL@opensimhub.org](mailto:OpenHPL@opensimhub.org)\n""",,"2019/08/26, 12:09:25",1521,MPL-2.0,20,270,"2023/06/19, 13:42:07",10,31,44,3,128,0,0.5,0.2204081632653061,"2023/03/10, 13:12:06",v2.0.1,0,5,false,,false,false,,,https://github.com/OpenSimHub,https://opensimhub.org,Norway,,,https://avatars.githubusercontent.com/u/90337436?v=4,,, WEC-Sim,Wave Energy Converter Simulator is an open source code for simulating wave energy converters.,WEC-Sim,https://github.com/WEC-Sim/WEC-Sim.git,github,"wec-sim,marine-renewable-energy,wave-energy,matlab,simulink,snl-applications,hydrodynamics",Hydro Energy,"2023/10/20, 03:33:21",129,0,25,true,MATLAB,,WEC-Sim,MATLAB,https://wec-sim.github.io/WEC-Sim,"b""# Refer to [WEC-Sim documentation](http://wec-sim.github.io/WEC-Sim) for more information.\n[![DOI](https://zenodo.org/badge/20451353.svg)](https://zenodo.org/badge/latestdoi/20451353)\n[![Documentation](https://github.com/WEC-Sim/WEC-Sim/actions/workflows/docs.yml/badge.svg)](https://github.com/WEC-Sim/WEC-Sim/actions/workflows/docs.yml)\n[![Run MATLAB tests on master branch](https://github.com/WEC-Sim/WEC-Sim/actions/workflows/run-tests-master.yml/badge.svg?branch=master)](https://github.com/WEC-Sim/WEC-Sim/actions/workflows/run-tests-master.yml)\n[![Run MATLAB tests on dev branch](https://github.com/WEC-Sim/WEC-Sim/actions/workflows/run-tests-dev.yml/badge.svg?branch=dev)](https://github.com/WEC-Sim/WEC-Sim/actions/workflows/run-tests-dev.yml)\n\n## WEC-Sim Repository\n\n* **Docs**: [WEC-Sim documentation](http://wec-sim.github.io/WEC-Sim), to refer to [doc compile instructions](https://github.com/WEC-Sim/WEC-Sim/tree/master/docs) \n* **Examples**: WEC-Sim examples\n* **Source**: WEC-Sim source code\n* **Tests**: WEC-Sim tests for [MATLAB Continuous Integration](https://www.mathworks.com/solutions/continuous-integration.html)\n* **Tutorials**: 
[WEC-Sim tutorials](http://wec-sim.github.io/WEC-Sim/master/user/tutorials.html)\n\nRefer to the [WEC-Sim Applications](https://github.com/WEC-Sim/WEC-Sim_Applications) repository for more applications of WEC-Sim.\n\n## Source Code Management\n\nA stable version of WEC-Sim is maintained on WEC-Sim's [master branch](https://github.com/WEC-Sim/WEC-Sim), and WEC-Sim [releases](https://github.com/WEC-Sim/WEC-Sim/releases) are tagged on GitHub. \nWEC-Sim development is performed on WEC-Sim's [dev branch](https://github.com/WEC-Sim/WEC-Sim/tree/dev) using a [forking workflow](https://www.atlassian.com/git/tutorials/comparing-workflows/forking-workflow). \nNew WEC-Sim features are developed on forks of the WEC-Sim repository, and [pull-requests](https://github.com/WEC-Sim/WEC-Sim/pulls) are submitted to merge new features from a development fork into the main WEC-Sim repository. \nPull requests for new WEC-Sim features should be submitted to the WEC-Sim dev branch. \nThe only exception to this workflow is for bug fixes; pull requests for bug fixes should be submitted to the WEC-Sim master branch.\nWhen a new version of WEC-Sim is released, the dev branch becomes the master branch, and all updates are included in the tagged release.\n\n""",",https://zenodo.org/badge/latestdoi/20451353","2014/06/03, 16:56:49",3431,Apache-2.0,86,1298,"2023/10/24, 22:50:49",11,375,1137,278,0,4,0.8,0.6474677259185699,"2023/10/20, 04:21:51",v6.0,0,34,false,,false,false,,,https://github.com/WEC-Sim,http://wec-sim.github.io/WEC-Sim/,,,,https://avatars.githubusercontent.com/u/7662103?v=4,,, BEMRosetta,"Used to model hydrodynamic forces in offshore devices like ships, offshore wind platforms and wave energy converters.",BEMRosetta,https://github.com/BEMRosetta/BEMRosetta.git,github,"hydrodynamics,meshviewer,mesh-processing,potential-flow,offshore-wind-platforms,hydrodynamic-coefficients-viewer,boundary-element,wave-energy",Hydro Energy,"2023/10/17, 12:00:38",63,0,18,true,C++,BEMRosetta,BEMRosetta,"C++,C,Turing,Batchfile,Shell,SWIG",,"b'\n\n# BEMRosetta\n**Hydrodynamic coefficients viewer and converter for Boundary Element Method solver formats.**\n\n[![License: GPL v3](https://img.shields.io/badge/License-GPLv3-blue.svg)](https://www.gnu.org/licenses/gpl-3.0)\n\n
\n\n[Boundary Element Methods](https://en.wikipedia.org/wiki/Boundary_element_method) are extensively used to model hydrodynamic forces in offshore devices like ships, offshore wind platforms and wave energy converters. These solvers use the device geometry mesh to obtain hydrodynamic coefficients such as radiation damping, added mass, wave diffraction force, and wave excitation force. These data are saved in mutually incompatible file formats, which can prevent the coefficients from being reused across programs. \n\nBEMRosetta can load hydrodynamic coefficients from one format and save them in another. In addition, it can compare results obtained from different programs, results for similar geometries, and results for the same geometry at different discretization levels.\n\nMoreover, BEMRosetta can view and visually compare the meshes from different programs.\n\nBEMRosetta runs on Windows and Linux, **no install is necessary on Windows** [(see Install)](https://github.com/izabala123/BEMRosetta/tree/master/install), and it includes a GUI, [a command line version](https://github.com/izabala123/BEMRosetta/blob/master/other/test), a library (DLL), and glue code for Python. \n\n## Features\n### - Supported file formats\n\n* BEM coefficients\n * Load-View\n * [Wamit](https://www.wamit.com/): .out, .3sc, 3fk, .1, .3, .4, .hst, .7, .8, .9, .12s, .12d\n * [HAMS](https://github.com/YingyiLiu/HAMS): ControlFile.in\n * [Nemoh](https://lheea.ec-nantes.fr/logiciels-et-brevets/nemoh-presentation-192863.kjsp) and [Capytaine](https://github.com/mancellin/capytaine): Nemoh.cal, Mesh/Hydrostatics*.dat, Mesh/KH*.dat, RadiationCoefficients.tec, ExcitationForce.tec, DiffractionForce.tec, FKForce.tec, IRF.tec\n * [Diodore](https://www.principia-group.com/blog/product/diodore/): .hdb\n * [Hydrostar](https://marine-offshore.bureauveritas.com/hydrostar-software-powerful-hydrodynamic): .out\n * [OrcaWave](https://www.orcina.com/orcaflex/): .yml\n * [OpenFAST-Wamit](https://www.nrel.gov/wind/nwtc/openfast.html): HydroDyn.dat\n * [SeaFEM/TDyn-Nemoh](http://www.compassis.com/compass/en/Productos/SeaFEM): .flavia.inf, RadiationCoefficients.tec, ExcitationForce.tec, DiffractionForce.tec, FKForce.tec\n * [Ansys AQWA](https://www.ansys.com/products/structures/ansys-aqwa): .lis, .ah1, .qtf\n * [FOAMM](http://www.eeng.nuim.ie/coer/downloads/): .mat\n \n * Save\n * [Wamit](https://www.wamit.com/): .out, .1, .3, .hst, .4, .7, .8, .9, .12s, .12d\n * [HAMS](https://github.com/YingyiLiu/HAMS): ControlFile.in and all the folder structure.\n\t* [Diodore](https://www.principia-group.com/blog/product/diodore/): .hdb\n * [Ansys AQWA](https://www.ansys.com/products/structures/ansys-aqwa): .qtf\n * [OpenFAST-Wamit](https://nwtc.nrel.gov/FAST): HydroDyn.dat\n\n* Case files\n * Load-View\n * [HAMS](https://github.com/YingyiLiu/HAMS): ControlFile.in\n * [Ansys AQWA](https://www.ansys.com/products/structures/ansys-aqwa): .dat\n * [Nemoh](https://lheea.ec-nantes.fr/logiciels-et-brevets/nemoh-presentation-192863.kjsp) and [Capytaine](https://github.com/mancellin/capytaine): Nemoh.cal and all the folder structure.\n * Save\n * [HAMS](https://github.com/YingyiLiu/HAMS): ControlFile.in and all the folder structure.\n * [Nemoh](https://lheea.ec-nantes.fr/logiciels-et-brevets/nemoh-presentation-192863.kjsp) and [Capytaine](https://github.com/mancellin/capytaine): Nemoh.cal and all the folder structure.\n \n* Mesh files\n * Load-View\n * [Wamit](https://www.wamit.com/): .gdf, pan.dat\n * [HAMS](https://github.com/YingyiLiu/HAMS): .pnl\n * 
[Nemoh](https://lheea.ec-nantes.fr/logiciels-et-brevets/nemoh-mesh-192932.kjsp?RH=1489593406974) and [Capytaine](https://github.com/mancellin/capytaine): .dat\n * [Ansys AQWA](https://www.ansys.com/products/structures/ansys-aqwa): .dat\n * [Hydrostar](https://marine-offshore.bureauveritas.com/hydrostar-software-powerful-hydrodynamic): .hst\n * [Salome](https://www.salome-platform.org/): .dat\n * [STL format](https://en.wikipedia.org/wiki/STL_(file_format)): .stl (binary and text)\n * [SeaFEM/TDyn](http://www.compassis.com/compass/en/Productos/SeaFEM): .msh\n * Save\n * [Wamit](https://www.wamit.com/): .gdf\n * [HAMS](https://github.com/YingyiLiu/HAMS): HullMesh.pnl, WaterplaneMesh.pnl\n * [Ansys AQWA](https://www.ansys.com/products/structures/ansys-aqwa): .dat\n * [STL format](https://en.wikipedia.org/wiki/STL_(file_format)): .stl (binary and text)\n\n* Time domain simulations\n * Load-View\n * [OpenFAST](https://www.nrel.gov/wind/nwtc/openfast.html): .out, .outb\n * [Deeplines Wind](https://www.principia-group.com/blog/product/deeplines-wind/): .db\n * [Ansys AQWA Naut](https://www.ansys.com/products/structures/ansys-aqwa): .lis\n * CSV: .csv\n * Save\n * [OpenFAST](https://www.nrel.gov/wind/nwtc/openfast.html): .out\n * CSV: .csv\n\n\n### - Load the hydrodynamic coefficients from one format and save them in another\n\nThe goal is to have a good robustness in the handling of files\n\n\n### - Compare the hydrodynamic coefficients for the same geometry from different software\n\n- Damping for the same geometry got from different solvers\n \n
\n\n- Excitation force for the same geometry got from different solvers\n \n
\n\n### - Forces handling\n\nIt symmetrizes the available forces in all directions, averaging them when they are available on both positive and negative headings. Some example cases:\n* Only the forces on positive headings from 0 to 180\xc2\xba have been processed: Symmetrize duplicates them to the negative headings from 0 to -180\xc2\xba\n* Both positive and negative heading forces have been processed: Symmetrize averages them\n\nA small sketch of this bookkeeping is given below, after the comparison figures.\n\n### - Compare the hydrodynamic coefficients for the same geometry for different discretization levels\n### - Compare the hydrodynamic coefficients for different geometries\n\n- Damping for different offshore wind floating platforms\n \n
\n\n- Excitation force for different offshore wind floating platforms\n \n
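\n\nAs an illustration of the symmetrization logic described above, here is a generic Python sketch. It is not BEMRosetta source code: the function name and the data layout are made up for this example.\n\n```Python\nimport numpy as np\n\ndef symmetrize(headings_deg, forces):\n    # headings_deg: wave headings in degrees; forces: one force value per heading\n    groups = {}\n    for h, f in zip(headings_deg, forces):\n        groups.setdefault(abs(h), []).append(f)\n    # Average where both +h and -h were computed, duplicate otherwise\n    averaged = {h: np.mean(fs) for h, fs in groups.items()}\n    out = sorted({s * h for h in averaged for s in (1, -1)})\n    return np.array(out), np.array([averaged[abs(h)] for h in out])\n```\n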
\n\n### - FOAMM connection\n\n[Finite Order Approximation by Moment-Matching (FOAMM)](http://www.eeng.nuim.ie/coer/wp-content/uploads/2019/02/FOAMM-Manual.pdf) is an application developed by N. Faedo, Y. Pe\xc3\xb1a-Sanchez and J. V. Ringwood in the [Maynooth University](https://www.maynoothuniversity.ie/)\'s [Centre for Ocean Energy Research (COER)](http://www.eeng.nuim.ie/coer/), that implements the moment-matching based frequency-domain identification algorithm.\n\nBEMRosetta allows an interactive and seamless FOAMM connection to get state space coefficients.\n\n### - Mesh loading, combining them for visual comparison \n\nSeveral meshes can be loaded in this basic viewer, allowing a visual comparison of geometries.\n\n
\n\n\n### - Mesh handling\n\n- Interactive mesh rotation and translation around a user-defined center\n- Automatic calculation of the free surface, underwater surface, center of buoyancy, hydrostatic stiffness matrix, and other parameters\n- Improved viewer including a dropdown menu in the viewer screen\n- Hydrostatic stiffness matrix viewer\n- Mesh healing option\n \n### - Case launcher, Nemoh & HAMS\n\nAdded a Nemoh and [HAMS](https://github.com/YingyiLiu/HAMS) launcher. It can load existing files from HAMS, Nemoh or ANSYS AQWA, lets you edit them, and creates the set of files to launch Nemoh and HAMS from a .bat file (it replaces the classic Nemoh MATLAB launcher).\n\n### - Time domain simulations\n\nBEMRosetta includes a time domain simulation viewer supporting OpenFAST, Deeplines Wind, Ansys AQWA Naut and csv formats, designed to be very easy to use.\nFiles may be opened by drag and drop, and parameters are filtered by name or units.\n\n
\n\n### - OrcaFlex command line\n\nIf you have an [OrcaFlex](https://www.orcina.com/orcaflex/) licence, the command line version allows you to perform operations not available directly in OrcaWave/OrcaFlex, like:\n- Calculating hydrodynamic coefficients with OrcaWave.\n- Performing time domain simulations with OrcaFlex.\n- Saving the results of time domain simulations to .csv files.\n\n### - Other\n\nAll files (mesh, case or BEM files) can be loaded by drag and drop or copy and paste from the file explorer in Windows and Linux.\n\n
\n\n## Acknowledgments\n\nJ. C. C. Portillo, J. C. C. Henriques, M. J. Sanchez-Lara, J. Galvan, A. Otter, M. Alonso, A. Aristondo.
\nSome file parsing strategies were taken from the [BEMIO project](https://wec-sim.github.io/bemio/).
\nDone with the [U++ multiplatform library](https://www.ultimatepp.org/).\n\n## License\n\nCopyright \xc2\xa9 2019-2021 I\xc3\xb1aki Zabala, Markel Pe\xc3\xb1alba, Yerai Pe\xc3\xb1a-Sanchez, Thomas Kelly.\n\nBEMRosetta is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, either version 3 of the License, or (at your option) any later version.\\\nBEMRosetta is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for details. You should have received a copy of the GNU General Public License along with BEMRosetta. If not, see http://www.gnu.org/licenses/.
\n\n'",,"2019/03/13, 11:46:14",1687,GPL-3.0,18,18,"2023/08/05, 17:37:00",7,1,25,15,81,0,0.0,0.0,"2023/10/13, 17:43:55",October_2023,0,1,false,,false,false,,,https://github.com/BEMRosetta,,,,,https://avatars.githubusercontent.com/u/79283040?v=4,,, Capytaine,A Python package for the simulation of the interaction between water waves and floating bodies in frequency domain.,mancellin,https://github.com/capytaine/capytaine.git,github,"python,fortran,hydrodynamics,potential-flow,boundary-element-method,water-wave,wave-energy",Hydro Energy,"2023/10/24, 13:09:20",116,12,35,true,Python,,capytaine,"Python,Fortran,Meson,Makefile,MATLAB,Dockerfile",https://capytaine.github.io,"b'# Capytaine: a linear potential flow BEM solver with Python.\n\n![CI status](https://github.com/capytaine/capytaine/actions/workflows/test_new_commits.yaml/badge.svg?event=push)\n![CI status](https://github.com/capytaine/capytaine/actions/workflows/test_with_latest_dependencies.yaml/badge.svg)\n\n\nCapytaine is Python package for the simulation of the interaction between water waves and floating bodies in frequency domain.\nIt is built around a full rewrite of the open source Boundary Element Method (BEM) solver Nemoh for the linear potential flow wave theory.\n\n## Installation\n\n[![PyPI](https://img.shields.io/pypi/v/capytaine)](https://pypi.org/project/capytaine)\n[![Conda-forge](https://img.shields.io/conda/vn/conda-forge/capytaine)](https://github.com/conda-forge/capytaine-feedstock)\n\nPackages for Windows, macOS and Linux are available on PyPI:\n\n```bash\npip install capytaine\n```\nand Conda-forge\n\n```bash\nconda install -c conda-forge capytaine\n```\n\n## Documentation\n\n[https://capytaine.github.io/](https://capytaine.github.io/)\n\n[![DOI](http://joss.theoj.org/papers/10.21105/joss.01341/status.svg)](https://doi.org/10.21105/joss.01341)\n\n## License\n\nCopyright (C) 2017-2023, Matthieu Ancellin\n\nSince April 2022, the development of Capytaine is funded by the Alliance for Sustainable Energy, LLC, Managing and Operating Contractor for the National Renewable Energy Laboratory (NREL) for the U.S. Department of Energy.\n\nThis program is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, either version 3 of the License, or (at your option) any later version.\n\nThis program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. 
See the GNU General Public License for more details.\n\nIt is based on [Nemoh](https://lheea.ec-nantes.fr/logiciels-et-brevets/nemoh-presentation-192863.kjsp), which has been developed by G\xc3\xa9rard Delhommeau, Aur\xc3\xa9lien Babarit et al., (\xc3\x89cole Centrale de Nantes) and is distributed under the Apache License 2.0.\n\nIt includes code from [meshmagick](https://github.com/LHEEA/meshmagick/) by Fran\xc3\xa7ois Rong\xc3\xa8re (\xc3\x89cole Centrale de Nantes), licensed under the GNU General Public License (GPL).\n'",",https://doi.org/10.21105/joss.01341","2017/09/16, 13:11:11",2230,GPL-3.0,128,1070,"2023/10/24, 13:09:56",48,206,328,165,1,9,0.0,0.040000000000000036,"2023/06/21, 07:38:03",v2.0,1,16,false,,false,true,"capytaine/capytaine-standalone,narest-qa/repo54,akeow/WecOptTool,oalmeyda/WecOptTool,wpbonelli/WecOptTool,michaelcdevin/WecOptTool,dtgaebe/WecOptTool,cmichelenstrofer/WecOptTool,ryancoe/WecOptTool,sandialabs/WecOptTool,FreeCAD/freecad.ship,RubendeBruin/mafredo",,https://github.com/capytaine,,,,,https://avatars.githubusercontent.com/u/110032913?v=4,,, reservoir,"Tools for Analysis, Design, and Operation of Water Supply Storages.",swd-turner,https://github.com/Critical-Infrastructure-Systems-Lab/reservoir.git,github,"reservoir,simulation,water-resources,hydrology",Hydro Energy,"2021/06/06, 20:17:31",25,0,3,false,R,CRITICAL Infrastructure Systems Lab,Critical-Infrastructure-Systems-Lab,R,http://swd-turner.github.io/reservoir,
,,"2015/07/16, 04:55:20",3023,GPL-3.0,0,76,"2016/01/19, 02:56:30",4,5,5,0,2836,0,0.0,0.015625,,,0,3,false,,false,false,,,https://github.com/Critical-Infrastructure-Systems-Lab,https://galelli.cee.cornell.edu,United States of America,,,https://avatars.githubusercontent.com/u/133989297?v=4,,, DOLPHYN,"Evaluates investments and operations across the bulk supply chain for electricity and Hydrogen including production, storage, transmission, conditioning, and end-use consumption.",macroenergy,https://github.com/macroenergy/DOLPHYN.git,github,,Hydro Energy,"2023/10/25, 16:17:32",15,0,10,true,Julia,MacroEnergy,macroenergy,Julia,https://macroenergy.github.io/DOLPHYN/,"b'# DOLPHYN\n## Overview\nDOLPHYN is a configurable, [open source](https://github.com/macroenergy/DOLPHYN/blob/README_Doc_Update/LICENSE) energy system optimization model developed to explore interactions between multiple energy vectors and emerging technologies across their supply chains as part of a future integrated low-carbon energy system.\n\nThe DOLPHYN model evaluates investments and operations across the electricity and Hydrogen (H2) supply chains, including production, storage, transmission, conditioning, and end-use consumption. Importantly, the model is able to capture interactions between electricity and hydrogen infrastructure through: a) using hydrogen for power generation and b) production of hydrogen using electricity. The model is set up as a single-stage investment planning model and determines the least-cost mix of electricity and H2 infrastructure to meet electricity and H2 demand subject to a variety of operational, policy and carbon emission constraints. The DOLPHYN model incorporates the [GenX](https://github.com/GenXProject/GenX) electricity system model (currently v0.3.6) to characterize electricity system operations and investments. The electricity system representation is periodically updated to the latest GenX version.\n\nDOLPHYN is designed to be highly flexible and configurable, for use in a variety of applications from academic research and technology evaluation to public policy and regulatory analysis and resource planning. We are currently working to add biofuel supply chains and carbon capture, transport, and storage to the model. \n\nWe welcome you to add new features and resources to DOLPHYN for use in your own work and to [share them here for others](https://github.com/macroenergy/DOLPHYN/pulls). If you have issues using DOLPHYN, [please let us know by opening an issue](https://github.com/macroenergy/DOLPHYN/issues).\n \n## Requirements\n\nDOLPHYN is written in [Julia](https://julialang.org/) and requires a mathematical programme solver to run. We recommend using the [latest stable version of Julia](https://julialang.org/downloads/) unless otherwise noted in the installation instructions below. DOLPHYN can run using several open-source and commercial solvers. 
DOLPHYN is most extensively tested using:\n- [HiGHS](https://highs.dev/) - a free open-source solver\n- [Gurobi](https://www.gurobi.com) - a commercial solver requiring a paid commercial license or free academic license\n\nDOLPHYN also has limited support for:\n- [Clp](https://github.com/jump-dev/Clp.jl) - a free open-source solver\n- [Cbc](https://github.com/jump-dev/Cbc.jl) - a free open-source solver\n- [CPLEX](https://www.ibm.com/analytics/cplex-optimizer) - a commercial solver requiring a paid commercial license or free academic license\n\n## Installing DOLPHYN\n\n### If you are doing a fresh install\n\n#### ZIP download\n\nIf you would like a one-time download of DOLPHYN which is not set up to pull updates using git, then simply download and unzip the files [using this link](https://github.com/macroenergy/DOLPHYN/archive/refs/heads/main.zip).\n\n#### Fresh Install Using GitHub Desktop\n\nUse the File -> Clone Respository -> URL dropdown menu to clone the DOLPHYN repository from:\n\n- https://github.com/macroenergy/DOLPHYN.git\n\n#### Fresh Install Using GitHub via your terminal / command line\n\nIn the top-level folder where you want to place DOLPHYN, run:\n\n- git clone --recurse-submodules https://github.com/macroenergy/DOLPHYN\n\n### If you are working from an existing project\n\n#### Existing Project Using GitHub Desktop\n\nPull the latest version of DOLPHYN from the main branch. Sometimes the GenX submodule in the src/GenX folder will not update correctly and remain empty.\n\nIf this happens, either use the command line arguments below to init and update the submodule, or delete the entire DOLPHYN folder from your computer and re-clone the repository.\n\n#### Existing Project Using GitHub via your terminal / command line\n\nIn your top-level folder (generally DOLPHYN or DOLPHYN-DEV), run:\n\n-\tgit pull\n-\tgit checkout main\n-\tcd src/GenX\n-\tgit submodule init\n-\tgit submodule update\n\n## Install the Gurobi and / or HiGHS solvers\n\nHiGHS will be automatically downloaded and installed when you instantiate the DOLPHYN Julia environment, so you do not need to download it separately. However if you would like to use a specific version of have a separate copy, it can be downloaded from: [https://highs.dev/](https://highs.dev/)\n\nGurobi is a commercial solver which requires either a free academic license or paid commercial license. You should download the latest version of the Gurobi Optimizer from:[https://www.gurobi.com/downloads/gurobi-software/](https://www.gurobi.com/downloads/gurobi-software/)\n\n## Setting up the Julia environment\n\nIn order to run DOLPHYN, several Julia packages must be downloaded and installed. To help users install the correct packages and versions, we have created a Julia environment file. This file is located in the top-level DOLPHYN folder and is called `Project.toml`.\n\n### First time running DOLPHYN\n\nThe first time you run DOLPHYN, you must instantiate the Julia environment. This will download and install all the required packages.\n\nIn your command terminal (not the Julia REPL), navigate to your DOLPHYN folder then run the following commands:\n\n- julia --project=. 
(this starts the Julia REPL using the environment file found in the current directory)\n- julia> ] (Enter \']\' at the prompt)\n- (DOLPHYN) pkg> instantiate (you should see DOLPHYN project name here, if not, enter `activate .`)\n- (DOLPHYN) pkg> build Gurobi (if you plan to use Gurobi)\n\nHere is a snapshot for you to see the commands (instantiate and build Gurobi) used from above:\n![Screen Shot 2023-09-07 at 11 19 22 AM](https://github.com/macroenergy/DOLPHYN/assets/2174909/8e5720fd-28f5-4bdc-840c-70fec0212cd3)\n\nIf the step to build Gurobi fails, the most likely cause is that the Gurobi installation cannot be found. [Use the following instructions](https://support.gurobi.com/hc/en-us/articles/13443862111761-How-do-I-set-system-environment-variables-for-Gurobi-) to define the ""GUROBI_HOME"" and ""GRB_LICENSE_FILE"" environment variables on your computer. For example, for Gurobi 10.0 on Ubuntu they should point to:\n- GUROBI_HOME = ...path to Gurobi install/gurobi1000/linux64\n- GRB_LICENSE_FILE = ...path to Gurobi install/gurobi1000/gurobi.lic\n\nYou can now press backspace to exit the Julia package manager and start using DOLPHYN by [running your first example](#running-your-first-example).\n\n### Second+ time running DOLPHYN:\n\nIn your command terminal (not the Julia REPL), navigate to your DOLPHYN folder then run the following commands:\n\n- julia --project=.\n- julia> ] \n- (DOLPHYN) pkg> st (this is for checking the status of packages installed for DOLPHYN)\n\n## Running your first example: \n\nNavigate to one of the example systems, e.g.:\n\n`julia> cd(""Example_Systems/SmallNewEngland/OneZone"")`\n\nEnsure you are not in the package manager by hitting the backspace key.\n\nUse the Run.jl file to run the case:\n\n`julia> include(""Run.jl"")`\n\nOnce the model has completed running, results will be written into the ""Results"" folder.\n\n## Example Systems\n\n**SmallNewEngland: OneZone** is a one-year example with hourly resolution representing Massachusetts. A rate-based carbon cap of 50 gCO2 per kWh is specified in the `CO2_cap.csv` input file. Expect a run time of ~5 seconds.\n\n**SmallNewEngland: ThreeZones** is similar to the above example but contains zones representing Massachusetts, Connecticut, and Maine. Expect a run time of ~1 minute.\n\n**NorthSea_2030** is a combined power and hydrogen model for the EU for the year 2030. It contains a power model with hourly resolution, contains zones representing Belgium, Germany, Denmark, France, Great Britain, the Netherlands, Sweden, and Norway. The model also includes a CO2 constraint representing 30% of 2015 power sector CO2 emissions applied to the hydrogen and power sector jointly. Expect a run time of ~10 minutes.\n\n## DOLPHYN Team\nThe model was originally [developed](https://pubs.rsc.org/en/content/articlehtml/2021/ee/d1ee00627d) by [Guannan He](https://www.guannanhe.com/) while at the MIT Energy Initiative, and is now maintained by a team contributors at [MITEI](https://energy.mit.edu/) led by [Dharik Mallapragada](http://mallapragada.mit.edu/) and Ruaridh Macdonald as well as Guannan He\'s research group at Peking University. Key contributors include Dharik S. 
Mallapragada, Ruaridh Macdonald, Guannan He, Mary Bennett, Shantanu Chakraborty, Anna Cybulsky, Michael Giovanniello, Jun Wen Law, Youssef Shaker, Nicole Shi and Yuheng Zhang.'",,"2021/11/01, 16:01:42",723,GPL-2.0,258,944,"2023/10/25, 16:17:33",18,126,162,91,0,12,0.6,0.661134163208852,,,1,23,false,,false,false,,,https://github.com/macroenergy,,,,,https://avatars.githubusercontent.com/u/114940386?v=4,,, pygfunction,An open source toolbox for the evaluation of thermal response factors of geothermal borehole fields.,MassimoCimmino,https://github.com/MassimoCimmino/pygfunction.git,github,,Geothermal Energy,"2023/01/06, 18:23:55",43,12,13,true,Python,,,Python,,"b'# pygfunction: A g-function calculator for Python\n\n[![Tests](https://github.com/MassimoCimmino/pygfunction/actions/workflows/test.yml/badge.svg)](https://github.com/MassimoCimmino/pygfunction/actions/workflows/test.yml)\n[![DOI](https://zenodo.org/badge/100305705.svg)](https://zenodo.org/badge/latestdoi/100305705)\n\n## What is *pygfunction*?\n\n*pygfunction* is a Python module for the calculation of thermal response\nfactors, or *g*-functions, for fields of geothermal boreholes. *g*-functions\nform the basis of many simulation and sizing programs for geothermal heat pump\nsystems. *g*-Functions are superimposed in time to predict fluid and ground\ntemperatures in these systems.\n\nAt its core, *pygfunction* relies on the analytical finite line source solution\nto evaluate the thermal interference between boreholes in the same bore field.\nThis allows for the very fast calculation of *g*-functions, even for very large\nbore fields with hundreds of boreholes.\n\nUsing *pygfunction*, *g*-functions can be calculated for any bore field\nconfiguration (i.e. arbitrarily positionned in space), including fields of\nboreholes with individually different lengths and radiuses. For regular fields\nof boreholes of equal size, setting-up the calculation of the *g*-function is\nas simple as a few lines of code. For example, the code for the calculation of\nthe *g*-function of a 10 x 10 square array of boreholes (100 boreholes\ntotal):\n\n```python\nimport pygfunction as gt\nimport numpy as np\ntime = np.array([(i+1)*3600. for i in range(24)]) # Calculate hourly for one day\nboreField = gt.boreholes.rectangle_field(N_1=10, N_2=10, B_1=7.5, B_2=7.5, H=150., D=4., r_b=0.075)\ngFunc = gt.gfunction.gFunction(boreField, alpha=1.0e-6, time=time)\ngFunc.visualize_g_function()\n```\n\nOnce the *g*-function is evaluated, *pygfunction* provides tools to predict\nborehole temperature variations (using load aggregation methods) and to evaluate\nfluid temperatures in the boreholes for several U-tube pipe configurations.\n\n\n## Requirements\n\n*pygfunction* was developed and tested using Python 3.7. In addition, the\nfollowing packages are needed to run *pygfunction* and its examples:\n- matplotlib (>= 3.5.1),\n- numpy (>= 1.21.5)\n- scipy (>= 1.7.3)\n- SecondaryCoolantProps (>= 1.1)\n\nThe documentation is generated using [Sphinx](http://www.sphinx-doc.org). 
The\nfollowing packages are needed to build the documentation:\n- sphinx (>= 4.4.0)\n- numpydoc (>= 1.2.0)\n\n\n## Quick start\n\n**Users** - [Download pip](https://pip.pypa.io/en/latest/) and install the latest release:\n\n```\npip install pygfunction\n```\n\nAlternatively, [download the latest release](https://github.com/MassimoCimmino/pygfunction/releases) and run the installation script:\n\n```\npip install .\n```\n\n**Developers** - To get the latest version of the code, you can [download the\nrepository from github](https://github.com/MassimoCimmino/pygfunction) or clone\nthe project in a local directory using git:\n\n```\ngit clone https://github.com/MassimoCimmino/pygfunction.git\n```\n\nInstall *pygfunction* in development mode (this requires `pip >= 21.1`):\n```\npip install --editable .\n```\n\nOnce *pygfunction* is copied to a local directory, you can verify that it is\nworking properly by running the examples in `pygfunction/examples/`.\n\n\n## Documentation\n\n*pygfunction*\'s documentation is hosted on\n[ReadTheDocs](https://pygfunction.readthedocs.io).\n\n\n## License\n\n*pygfunction* is licensed under the terms of the 3-clause BSD-license.\nSee [pygfunction license](LICENSE.md).\n\n\n## Contributing to *pygfunction*\n\nYou can report bugs and propose enhancements on the\n[issue tracker](https://github.com/MassimoCimmino/pygfunction/issues).\n\nTo contribute code to *pygfunction*, follow the\n[contribution workflow](CONTRIBUTING.md).\n\n\n## Contributors\n\n\n[![All Contributors](https://img.shields.io/badge/all_contributors-3-orange.svg?style=flat-square)](#contributors-)\n\n\n\n\n\n\n \n \n \n \n \n \n \n
Massimo Cimmino, Jack Cook, Matt Mitchell
\n\n\n\n\n\n\nThis project follows the [all-contributors](https://github.com/all-contributors/all-contributors) specification. Contributions of any kind welcome!\n'",",https://zenodo.org/badge/latestdoi/100305705","2017/08/14, 20:13:54",2262,BSD-3-Clause,36,700,"2023/07/21, 12:43:18",31,125,237,30,96,6,0.2,0.0509977827050998,"2023/01/09, 21:53:49",v2.2.2,0,4,false,,false,true,"magnesyljuasen/streamlit-internside,FloreVerbist/GEOTABS_exercises_TS,BETSRG/GHEDesigner,tblanke/ScenarioGUI,tblanke/GHEtool_test,giecli/ghedt,magnesyljuasen/streamlit-internside_gammel,rseng/rsepedia-analysis,wouterpeere/GHEtool,viv89/Lake2C,j-c-cook/ghedt,mitchute/GLHE",,,,,,,,,, GHEtool,GHEtool is an open source Python package that contains all the functionalities needed to deal with borefield design.,wouterpeere,https://github.com/wouterpeere/GHEtool.git,github,"geothermal-energy,borefields,sizing,energy,storage,geothermal",Geothermal Energy,"2023/10/17, 09:08:28",18,0,12,true,Python,,,"Python,Inno Setup,TeX",https://ghetool.eu,"b'# GHEtool: An open-source tool for borefield sizing\n\n[![PyPI version](https://badge.fury.io/py/GHEtool.svg)](https://badge.fury.io/py/GHEtool)\n[![Tests](https://github.com/wouterpeere/GHEtool/actions/workflows/test.yml/badge.svg)](https://github.com/wouterpeere/GHEtool/actions/workflows/test.yml)\n[![codecov](https://codecov.io/gh/wouterpeere/GHEtool/branch/main/graph/badge.svg?token=I9WWHW60OD)](https://codecov.io/gh/wouterpeere/GHEtool)\n[![DOI](https://joss.theoj.org/papers/10.21105/joss.04406/status.svg)](https://doi.org/10.21105/joss.04406)\n[![Downloads](https://static.pepy.tech/personalized-badge/ghetool?period=total&units=international_system&left_color=black&right_color=blue&left_text=Downloads)](https://pepy.tech/project/ghetool)\n[![Downloads](https://static.pepy.tech/personalized-badge/ghetool?period=week&units=international_system&left_color=black&right_color=orange&left_text=Downloads%20last%20week)](https://pepy.tech/project/ghetool)\n[![Read the Docs](https://readthedocs.org/projects/ghetool/badge/?version=stable)](https://ghetool.readthedocs.io/en/stable/)\n## What is *GHEtool*?\n\n\nGHEtool is a Python package that contains all the functionalities needed to deal with borefield design. GHEtool has been developed as a joint effort of KU Leuven (The SySi Team), boydens engineering (part of Sweco) and FH Aachen.\nThe core of this package is the automated sizing of borefield under different conditions. By making use of combination of just-in-time calculations of thermal ground responses (using [pygfunction](https://github.com/MassimoCimmino/pygfunction)) with\nintelligent interpolation, this automated sizing can be done in the order of milliseconds. Please visit our website [https://GHEtool.eu](https://GHEtool.eu) for more information.\n\n#### Read The Docs\nGHEtool has an elaborate documentation were all the functionalities of the tool are explained, with examples, literature and validation.\nThis can be found on [GHEtool.readthedocs.io](https://ghetool.readthedocs.io).\n\n#### Graphical user interface\nGHEtool comes with a *graphical user interface (GUI)*. This GUI is built using [ScenarioGUI](https://github.com/tblanke/ScenarioGUI).\nOne can download the open-source GUI [here](https://ghetool.eu/wp-content/uploads/setups/GHEtool%20Community_setup_v2_2_0.exe).\n
\n\n### Development\nGHEtool is in constant development, with new methods, enhancements and features added in every new version. Please visit our [project board](https://github.com/users/wouterpeere/projects/2) to check our progress.\n\n## Requirements\nThis code is tested with Python 3.8, 3.9, 3.10 and 3.11 and requires the following libraries (the versions mentioned are the ones with which the code is tested):\n\n* Numpy (>=1.20.2)\n* Scipy (>=1.6.2)\n* Matplotlib (>=3.4.1)\n* Pygfunction (>=2.2.2)\n* Openpyxl (>=3.0.7)\n* Pandas (>=1.2.4)\n\nFor the GUI\n\n* ScenarioGUI (>=0.3.0)\n\nFor the tests\n\n* Pytest (>=7.1.2)\n\n## Quick start\n### Installation\n\nOne can install GHEtool with pip by running the command\n\n```\npip install GHEtool\n```\n\nor one can install a newer development version using\n\n```\npip install --extra-index-url https://test.pypi.org/simple/ GHEtool\n```\n\nDevelopers can clone this repository.\n\nIt is good practice to use virtual environments (venv) when working on a (new) Python project so that different Python and package versions don\'t conflict with each other. For GHEtool, Python 3.8 or higher is recommended. General information about Python virtual environments can be found [here](https://docs.Python.org/3.9/library/venv.html) and in [this article](https://www.freecodecamp.org/news/how-to-setup-virtual-environments-in-python/).\n\n### Check installation\n\nTo check whether everything is installed correctly, run the following command\n\n```\npytest --pyargs GHEtool\n```\n\nThis runs some predefined cases to see whether all the internal dependencies work correctly. All tests should pass successfully.\n\n## Get started with GHEtool\n\n### Building blocks of GHEtool\nGHEtool is a flexible package that can be extended with methods from [pygfunction](https://pygfunction.readthedocs.io/en/stable/) (and [ScenarioGUI](https://github.com/tblanke/ScenarioGUI) for the GUI part).\nTo work efficiently with GHEtool, it is important to understand the main structure of the package.\n\n#### Borefield\nThe Borefield object is the central object within GHEtool. It is within this object that all the calculations and optimizations take place.\nAll attributes (ground properties, load data ...) are set inside the borefield object.\n\n#### Ground properties\nWithin GHEtool, there are multiple ways of setting the ground data. Currently, your options are:\n\n* _GroundConstantTemperature_: if you want to model your borefield with a constant, known ground temperature.\n* _GroundFluxTemperature_: if you want to model your ground with a varying ground temperature due to a constant geothermal heat flux.\n* _GroundTemperatureGradient_: if you want to model your ground with a varying ground temperature due to a geothermal gradient.\n\nPlease note that it is possible to add your own ground types by inheriting the attributes from the abstract _GroundData class.\n\n#### Pipe data\nWithin GHEtool, you can use different structures for the borehole internals: U-tubes or coaxial pipes.\nConcretely, the classes you can use are:\n\n* _Multiple U-tubes_\n* _Single U-tubes (special case of multiple U-tubes)_\n* _Double U-tubes (special case of multiple U-tubes)_\n* _Coaxial pipe_\n \nPlease note that it is possible to add your own pipe types by inheriting the attributes from the abstract _PipeData class.\n\n#### Fluid data\nYou can set the fluid data by using the FluidData class. 
\n\n#### Load data\nOne last element which you will need in your calculations is the load data. Currently, you can only set the primary (i.e. geothermal) load of the borefield.\nIn a future version of GHEtool, secondary building loads will also be included. For now, you can use the following inputs:\n\n* _MonthlyGeothermalLoadAbsolute_: You can set the monthly baseload and peak load for heating and cooling for one standard year, which will be used for all years within the simulation period.\n* _HourlyGeothermalLoad_: You can set (or load) the hourly heating and cooling load of a standard year, which will be used for all years within the simulation period.\n* _HourlyGeothermalLoadMultiYear_: You can set (or load) the hourly heating and cooling load for multiple years (i.e. for the whole simulation period). This way, you can already use secondary loads with GHEtool, as shown in [this example](https://ghetool.readthedocs.io/en/stable/sources/code/Examples/active_passive_cooling.html).\n\nAll load classes also have the option to add a yearly domestic hot water usage.\n\nPlease note that it is possible to add your own load types by inheriting the attributes from the abstract _LoadData class.\n\n### Options for sizing methods\nAs always with iterative methods, there is a trade-off between speed and accuracy. Within GHEtool (using the CalculationSetup class) one can alter different parameters to customize the sizing behaviour. Note that these options are additive, meaning that, for example, the stronger criterion of atol and rtol is chosen when sizing. The options are listed below; a usage sketch follows the list.\n\n* _atol_: For the sizing methods, an absolute tolerance in meters between two consecutive iterations can be set.\n* _rtol_: For the sizing methods, a relative tolerance between two consecutive iterations can be set.\n* _max_nb_of_iterations_: For the sizing methods, a maximum number of iterations can be set. If the sizing has not converged, a RuntimeError is thrown.\n* _use_precalculated_dataset_: This option makes sure the custom g-function dataset (if available) is not used.\n* _interpolate_gfunctions_: Calculating the g-values comes with a large overhead cost, although they are not that sensitive to a change in borehole depth. If this parameter is True, g-functions are allowed to be interpolated. (To change the threshold for this interpolation, go to the Gfunction class.)\n* _deep_sizing_: An alternative sizing method for cases with high cooling (peaks) and a variable ground temperature.\nThis method is potentially slower, but proves to be more robust.\n* _force_deep_sizing_: When the alternative method from above should always be used.
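\n\nAs a sketch of how these options might be set (assuming, for illustration, that they can be passed as keyword arguments to _Borefield.calculation_setup()_; please check the ReadTheDocs for the exact interface):\n\n```Python\nfrom GHEtool import Borefield\n\nborefield = Borefield()\n\n# assumed interface: the sizing options from the list above are passed as keyword arguments\nborefield.calculation_setup(atol=0.05, rtol=0.005, max_nb_of_iterations=40)\n```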
\n\n### Simple example\n\nTo show how all the pieces of GHEtool work together, below you can find a step-by-step example of how, traditionally, one would work with GHEtool.\nStart by importing all the relevant classes. In this case we are going to work with a ground model which assumes a constant ground temperature (e.g. from a TRT-test),\nand we will provide the load with a monthly resolution.\n\n```Python\nfrom GHEtool import Borefield, GroundDataConstantTemperature, MonthlyGeothermalLoadAbsolute\n```\n\nAfter importing the necessary classes, the relevant ground data parameters are set.\n\n```Python\ndata = GroundDataConstantTemperature(3, # ground thermal conductivity (W/mK)\n 10, # initial/undisturbed ground temperature (deg C)\n 2.4*10**6) # volumetric heat capacity of the ground (J/m3K) \n```\n\nFurthermore, for our loads, we need to set the peak loads as well as the monthly base loads for heating and cooling.\n\n```Python\npeak_cooling = [0., 0, 34., 69., 133., 187., 213., 240., 160., 37., 0., 0.] # Peak cooling in kW\npeak_heating = [160., 142, 102., 55., 0., 0., 0., 0., 40.4, 85., 119., 136.] # Peak heating in kW\n\nmonthly_load_heating = [46500.0, 44400.0, 37500.0, 29700.0, 19200.0, 0.0, 0.0, 0.0, 18300.0, 26100.0, 35100.0, 43200.0] # in kWh\nmonthly_load_cooling = [4000.0, 8000.0, 8000.0, 8000.0, 12000.0, 16000.0, 32000.0, 32000.0, 16000.0, 12000.0, 8000.0, 4000.0] # in kWh\n\n# set load object\nload = MonthlyGeothermalLoadAbsolute(monthly_load_heating, monthly_load_cooling, peak_heating, peak_cooling)\n\n```\n\nNext, we create the borefield object in GHEtool and set the temperature constraints and the ground data.\nHere, since we do not use a pipe and fluid model (see [Examples](https://ghetool.readthedocs.io/en/stable/sources/code/examples.html) if you need examples where no borehole thermal resistance is given),\nwe set the borehole equivalent thermal resistance.\n\n```Python\n# create the borefield object\nborefield = Borefield(load=load)\n\n# set ground parameters\nborefield.set_ground_parameters(data)\n\n# set the borehole equivalent resistance\nborefield.Rb = 0.12\n\n# set temperature boundaries\nborefield.set_max_avg_fluid_temperature(16) # maximum temperature\nborefield.set_min_avg_fluid_temperature(0) # minimum temperature\n```\n\nNext, we create a rectangular borefield.\n\n```Python\n# set a rectangular borefield\nborefield.create_rectangular_borefield(10, 12, 6, 6, 110, 4, 0.075)\n```\n\nNote that the borefield can also be set using the [pygfunction](https://pygfunction.readthedocs.io/en/stable/) package, if you want more complex designs.\n\n```Python\nimport pygfunction as gt\n\n# set a rectangular borefield\nborefield_gt = gt.boreholes.rectangle_field(10, 12, 6, 6, 110, 1, 0.075) \nborefield.set_borefield(borefield_gt)\n```\n\nOnce a Borefield object is created, one can make use of all the functionalities of GHEtool. One can for example size the borefield using:\n\n```Python\ndepth = borefield.size()\nprint(""The borehole depth is: "", depth, ""m"")\n```\n\nOr one can plot the temperature profile by using\n\n```Python\nborefield.print_temperature_profile(legend=True)\n```\n\nA full list of functionalities is given below.\n\n## Functionalities\nGHEtool offers functionalities of value to all different disciplines working with borefields.
The features are available both in the code environment and in the GUI.\nFor more information about the functionalities of GHEtool, please visit the [ReadTheDocs](https://ghetool.readthedocs.org).\n\n## License\n\n*GHEtool* is licensed under the terms of the 3-clause BSD-license.\nSee [GHEtool license](LICENSE).\n\n## GHEtool Pro\nTo further increase the feasibility of geothermal solutions, we have launched our professional version of GHEtool, which supports drilling companies, engineering firms, architects, and government organizations with automated reporting and online courses \xe2\x80\xa6\nWith our insightful software they can minimize the environmental and societal impact while maximizing the cost-effective utilization of geothermal projects.\nVisit our website at [https://ghetool.eu](https://ghetool.eu) to learn more about the synergy between the open-source and commercial version of GHEtool.\n\n## Contact GHEtool\n- Do you want to support GHEtool financially or by contributing to our software?\n- Do you have a great idea for a new feature?\n- Do you have a specific remark/problem?\n\nPlease do contact us at [info@ghetool.eu](mailto:info@ghetool.eu).\n\n## Citation\nPlease cite GHEtool using the JOSS paper.\n\nPeere, W., Blanke, T. (2022). GHEtool: An open-source tool for borefield sizing in Python. _Journal of Open Source Software, 7_(76), 4406, https://doi.org/10.21105/joss.04406\n\nFor more information on how to cite GHEtool, please visit the ReadTheDocs at [GHEtool.readthedocs.io](https://ghetool.readthedocs.io/en/stable/).\n\n## References\n\n### Development of GHEtool\nPeere, W., Hermans, L., Boydens, W., and Helsen, L. (2023). Evaluation of the oversizing and computational speed of different open-source borefield sizing methods. In _Proceedings of International Building Simulation Conference 2023_. Shanghai (China), 4-6 September 2023.\n\nConinx, M., De Nies, J. (2022). Cost-efficient Cooling of Buildings by means of Borefields with Active and Passive Cooling. Master thesis, Department of Mechanical Engineering, KU Leuven, Belgium.\n\nPeere, W., Blanke, T. (2022). GHEtool: An open-source tool for borefield sizing in Python. _Journal of Open Source Software, 7_(76), 4406, https://doi.org/10.21105/joss.04406\n\nPeere, W., Picard, D., Cupeiro Figueroa, I., Boydens, W., and Helsen, L. (2021). Validated combined first and last year borefield sizing methodology. In _Proceedings of International Building Simulation Conference 2021_. Brugge (Belgium), 1-3 September 2021. https://doi.org/10.26868/25222708.2021.30180\n\nPeere, W. (2020). Methode voor economische optimalisatie van geothermische verwarmings- en koelsystemen. Master thesis, Department of Mechanical Engineering,\nKU Leuven, Belgium.\n\n### Applications/Mentions of GHEtool\nWeynjes, J. (2023). Methode voor het dimensioneren van een geothermisch systeem met regeneratie binnen verschillende ESCO-structuren. Master thesis, Department of Mechanical Engineering, KU Leuven, Belgium.\n\nHermans, L., Haesen, R., Uytterhoeven, A., Peere, W., Boydens, W., Helsen, L. (2023). Pre-design of collective residential solar districts with seasonal thermal energy storage: Importance of level of detail. _Applied Thermal Engineering_ 226, Art.No. 120203, https://doi.org/10.1016/j.applthermaleng.2023.120203\n\nCimmino, M., Cook, J. C. (2022). pygfunction 2.2: New Features and Improvements in Accuracy and Computational Efficiency. In _Proceedings of IGSHPA Research Track 2022_. Las Vegas (USA), 6-8 December 2022.
https://doi.org/10.22488/okstate.22.000015\n\nVerleyen, L., Peere, W., Michiels, E., Boydens, W., Helsen, L. (2022). The beauty of reason and insight: a story about 30 years old borefield equations. _IEA HPT Magazine 40_(3), 36-39, https://doi.org/10.23697/6q4n-3223\n\nPeere, W., Boydens, W., Helsen, L. (2022). GHEtool: een open-sourcetool voor boorvelddimensionering. Presented at the 15e warmtepompsymposium: van uitdaging naar aanpak, Quadrivium, Heverlee, Belgi\xc3\xab.\n\nPeere, W., Coninx, M., De Nies, J., Hermans, L., Boydens, W., Helsen, L. (2022). Cost-efficient Cooling of Buildings by means of Borefields with Active and Passive Cooling. Presented at the 15e warmtepompsymposium: van uitdaging naar aanpak, Quadrivium, Heverlee, Belgi\xc3\xab.\n\nPeere, W. (2022). Technologie\xc3\xabn voor de energietransitie. Presented at the Energietransitie in meergezinswoningen en kantoorgebouwen: uitdagingen!, VUB Brussel Bruxelles - U Residence.\n\nSharifi, M. (2022). Early-Stage Integrated Design Methods for Hybrid GEOTABS Buildings. PhD thesis, Department of Architecture and Urban Planning, Faculty of Engineering and Architecture, Ghent University.\n\nConinx, M., De Nies, J. (2022). Cost-efficient Cooling of Buildings by means of Borefields with Active and Passive Cooling. Master thesis, Department of Mechanical Engineering, KU Leuven, Belgium.\n\nMichiels, E. (2022). Dimensionering van meerdere gekoppelde boorvelden op basis van het type vraagprofiel en de verbinding met de gebruikers. Master thesis, Department of Mechanical Engineering, KU Leuven, Belgium.\n\nVanpoucke, B. (2022). Optimale dimensionering van boorvelden door een variabel massadebiet. Master thesis, Department of Mechanical Engineering, KU Leuven, Belgium.\n\nHaesen, R., Hermans, L. (2021). Design and Assessment of Low-carbon Residential District Concepts with (Collective) Seasonal Thermal Energy Storage.
Master thesis, Departement of Mechanical Engineering, KU Leuven, Belgium.\n'",",https://doi.org/10.21105/joss.04406,https://doi.org/10.21105/joss.04406\n\nFor,https://doi.org/10.21105/joss.04406\n\nPeere,https://doi.org/10.26868/25222708.2021.30180\n\nPeere,https://doi.org/10.22488/okstate.22.000015\n\nVerleyen,https://doi.org/10.23697/6q4n-3223\n\nPeere","2021/04/03, 14:11:24",935,BSD-3-Clause,1088,1526,"2023/10/17, 09:34:53",12,87,172,138,8,0,0.0,0.2957110609480813,"2023/10/17, 09:35:44",v2.2.0,0,4,false,,false,false,,,,,,,,,,, multiphysics,Interactive (Heat Transfer) Simulations for Everyone.,charxie,https://github.com/Institute-for-Future-Intelligence/multiphysics.git,github,"simulation,energy,physics,heat-transfer,engineering,science,multiphysics,computational-fluid-dynamics,finite-difference-time-domain,energy2d",Geothermal Energy,"2023/10/02, 01:00:51",51,0,5,true,Java,Institute for Future Intelligence,Institute-for-Future-Intelligence,"Java,Inno Setup",https://intofuture.org/energy2d.html,"b""## Under the hood...\n\n- [Numerical Algorithms for Simulating Three Modes of Heat Transfer](https://intofuture.org/energy2d-equations.html)\n\n- [Coupled Fluid-Particle Dynamics](https://intofuture.org/energy2d-particle-dynamics.html)\n\n- [Modeling Thermal Bridges](https://intofuture.org/energy2d-thermal-bridges.html)\n\n- [Comparison with Infrared Imaging](https://intofuture.org/ie-thermal-equilibrium.html)\n\n- [Running on Raspberry Pi](https://medium.com/@charlesxie/computational-fluid-dynamics-on-the-incredible-raspberry-pi-85cbbb46237c)\n\n\n## How to cite it?\n\nCharles Xie, Interactive Heat Transfer Simulations for Everyone, The Physics Teacher, Volume 50, Issue 4, pp. 237-240, 2012.\n\n\n## Acknowledgements\n\nThe development of this open-source program was initially supported by the National Science Foundation (NSF) of the United States under grant numbers 0918449, 1124281, and 1512868. The project is currently under the auspices of the Institute for Future Intelligence (https://intofuture.org). Special thanks to the Concord Consortium for kindly allowing the transfer of the intellectual properties of this product to the Institute for Future Intelligence.\n\n## Applications\n\nThis program has been used as a simulation tool in the following publications:\n\n1. T\xc3\xa2nia M. Ribeiro, Andrea D\xe2\x80\x99Ambrosio, Guillermo J. Dominguez Calabuig, Dimitrios Athanasopoulos, Helena Bates, Clemens Riegler, Oriane Gassot, Selina-Barbara Gerig, Juan L. G\xc3\xb3mez-Gonz\xc3\xa1lez, Nikolaus Huber, Ragnar Seton, Tiago E.C. Magalh\xc3\xa3es, CARINA: A near-Earth D-type asteroid sample return mission, Acta Astronautica, Volume 212, November 2023, Pages 213-225, https://doi.org/10.1016/j.actaastro.2023.07.035\n2. M. Ravikumar, M.K. Srinath, & M.S. Ganesha Prasad, Thermal modelling of microwave dehydration of fruit slice, Case Studies in Thermal Engineering, 2023. https://doi.org/10.1016/j.csite.2023.103543\n3. Milad Ramezankhani, Mehrtash Harandi, Rudolf Seethaler, & Abbas S. Milani, Smart manufacturing under limited and heterogeneous data: a sim-to-real transfer learning with convolutional variational autoencoder in thermoforming, International Journal of Computer Integrated Manufacturing, 2023, https://doi.org/10.1080/0951192X.2023.2257623\n4. Christine K. McGinn, Vikrant Kumar, Megan Noga, Zachary Lamport, & Ioannis Kymissis, Ultra-Thin Ceramic Substrates for Improved Heat sinking for MicroLEDs, Advanced Materials Technologies, 2023, https://doi.org/10.1002/admt.202300390\n5. 
Milad Ramezankhani, Data-efficient and uncertainty-aware hybrid machine learning in advanced composites manufacturing, PhD thesis, University of British Columbia, 2023\n6. Fu-Ye Du, Wang Zhang, Hui-Qiong Wang, & Jin-Cheng Zheng, Enhancement of thermal rectification by asymmetry engineering of thermal conductivity and geometric structure for the multi-segment thermal rectifier, Chinese Physics B, 2023\n7. Angela Terracina, Angelo Armano, Manuela Meloni, Annamaria Panniello, Gianluca Minervini, Antonino Madonia, Marco Cannas, Marinella Striccoli, Luca Malfatti, & Fabrizio Messina, Photobleaching and Recovery Kinetics of a Palette of Carbon Nanodots Probed by In Situ Optical Spectroscopy, ACS Applied Materials & Interfaces, 14, 31, 36038\xe2\x80\x9336051, 2022\n8. Laura Bragante Corssac, A digital twin based smart home : a Proof-of-concept study, Universidade Federal do Rio Grande do Sul, https://www.lume.ufrgs.br/handle/10183/245270\n9. \xc3\x9cmit Aksoy, D\xc3\xbcnya, Mars ve Ay'da yerel kaynaklar ile \xc3\xbcretilen betonlardan yap\xc4\xb1lm\xc4\xb1\xc5\x9f yap\xc4\xb1lar\xc4\xb1n sonlu elemanlar y\xc3\xb6ntemi ile tasarlanmas\xc4\xb1 ve kar\xc5\x9f\xc4\xb1la\xc5\x9ft\xc4\xb1r\xc4\xb1lmas\xc4\xb1. KM\xc3\x9c, Fen Bilimleri Enstit\xc3\xbcs\xc3\xbc, \xc4\xb0n\xc5\x9faat M\xc3\xbchendisli\xc4\x9fi Ana Bilim Dal\xc4\xb1, 2022\n10. Martin Ferjan, Izdelava interaktivnega pripomo\xc4\x8dka za razumevanje bioklimatskih ukrepov za pasivno solarno ogrevanje [na spletu]. 2022. [Dostopano\xc2\xa08\xc2\xa0julij\xc2\xa02022]. Pridobljeno https://repozitorij.uni-lj.si/IzpisGradiva.php?lang=slv&id=137735 \n11. Pau Chung Leng, Siew Bee Aw, Nor Eeda Haji Ali, Gabriel Hoh Teck Ling, Yoke Lai Lee, & Mohd Hamdan Ahmad, Solar Chimneys as an Effective Ventilation Strategy in Multi-Storey Public Housing in the Post-COVID-19 Era, Buildings, 12(6), 820, 2022 \n12. Miguel Chen Austin, An\xc3\xa1lisis de la influencia de la masa t\xc3\xa9rmica en climas tropicales mediante simulaci\xc3\xb3n con Energy2D, PRISMA Tecnol\xc3\xb3gico | Vol. 13, n.\xc2\xb0 1, edici\xc3\xb3n, 2022\n13. Ren Tier, Mpemba Effect Demystified, https://engrxiv.org/preprint/view/2104/\n14. Miguel Chen Austin, Kevin Araque, Paola Palacios, Katherine Rodr\xc3\xadguez Maure, & Dafni Mora, Numerical Assessment of Zebra-Stripes-Based Strategies in Buildings Energy Performance: A Case Study under Tropical Climate, Biomimetics 2022, 7, 14\n15. Ipsita Negi & Kirti Pal, Effect of Tower Height and Collector Radius on Performance of Solar Updraft Tower Power Plant, International Journal of Social Ecology and Sustainable Development, vol. 13, no. 1, 2022\n16. Messaoud Moudjari, Hafida Marouf, Hameed Muhamad, Omar Chaalal, Marc Mequignon, Walid Maherzi, & Mahfoud Benzerzour, Using Local Materials to Optimize the Eco-design of a Resilient Urban Environment in Sustainable Urban Project Process, Civil Engineering and Architecture 9(6): 2084-2097, 2021\n17. Luis Bernardo L\xc3\xb3pez-Sosa, Mario Morales-M\xc3\xa1ximo, Rogelio Anastacio-Paulino, Abraham Custodio-Hern\xc3\xa1ndez, Juan Carlos Corral-Huacuz, & Arturo Aguilera-Mandujano, Electron Microscopy Characterization of Sargassum Spp. from the Mexican Caribbean for Application as a Bioconstruction Material, Microscopy and Microanalysis, Vol. 27 S1, 2021\n18. S. Turkat,et al., Measurement of the 2H(p, \xce\xb3) 3He S-factor at 265 - 1094 keV, Physical Review C 103, 045805, 2021\n19. 
Songsheng Li & Christopher Chiu, Improved Smart Pillow for Remote Health Care System, Journal of Sensor and Actuator Networks, 10, 9, 2021 \n20. D. Thomas, H. Andoni, Steven, R.A.Achsani, I.M.Sutjahja, Mardiyati, & S.Wonorahardjo, Thermophysical studies of common wall panels for controlling building thermal environment, MaterialsToday Proceedings, 2021\n21. Rizka Tri Arinta & LMF Purwanto, PROSES SIMULASI KONDUKTIVITAS PERPINDAHAN PANAS PADA ATAP DAK MENGGUNAKAN ENERGY2D, Jurnal Teknik Sipil, 14(2), 2021\n22. B. Budiana, Rifaldi Dwi Priana, Xena Mutiara Sinurat, & Politeknik Negeri Batam, Canderif Amsal Oloan SilabanPenggunaan Perangkat Lunak Energy2D dalam Mempelajari Konduktivitas Panas Pada Plastik, Jurnal Integrasi, Vol. 13, No. 1, pp. 40-45, 2021\n23. Jay Kalpesh Panchal, Jaysinh Rajendrasinh Dabhi, Harshil Dineshkumar Khurana, & Rutvik Jigneshkumar Jani, Effect of Different Chimney and Collector Contour on Output in a Solar Updraft Tower, International Research Journal of Engineering and Technology, Vol. 8, No. 4, pp. 2882-2885, 2021\n24. Shao Ing Wong, Han Lin, Jaka Sunarso, Basil T. Wong, & Baohua Jia, Triggering a Self-Sustaining Reduction of Graphenes Oxide for High-Performance Energy Storage Devices, ACS Applied Nano Materials, Vol. 3, Issue 9, pp. 9117\xe2\x80\x939126, 2020\n25. Norbert Schneider, Alexander Kolew, Marc Schneider, Matthias Worgull, Thurid Gspann, & Juerg Leuthold, High\xe2\x80\x90Resolution On\xe2\x80\x90Demand Nanostructures, Physica Status Solidi A, 217, 1900688, 2020\n26. Roshni Senthilkumar & Mohamed Al Musleh, Smart temperature-controlled infant car seat using thermoelectric devices, 2020 IEEE International IOT, Electronics and Mechatronics Conference (IEMTRONICS), 2020\n27. Jacob B. Setera, Brent D. Turrin, Gregory F. Herzog, Jill A. VanTongeren, Jeremy S. Delaney, & Carl C. Swisher III, 40Ar/39Ar thermochronology for sub\xe2\x80\x90milligram samples using a Ta\xe2\x80\x90platform micro\xe2\x80\x90furnace, with illustrations from the Bushveld Complex, Geochemistry, Geophysics, Geosystems, 2020\n28. Dario Valter Conca, Development of novel optical techniques for the study of cell-surface proteins in living cells at the single-molecule level, University College London, PhD dissertation, 2020\n29. Rui Li, Exploring the Use of Smartphone, Wireless Sensors, and 3D-Printing for Low-Cost Medical Technology: Diagnosis, Treatment, and Rehabilitation, University of Georgia, ProQuest Dissertations Publishing, 2020. 27833521\n30. Giovanni Dipierro, Alessandro Massaro, & Angelo Galiano, Characterization of an efficient thermal insulation composite system for non-residential buildings: a comprehensive theoretical, numerical and experimental approach, International Journal of Current Advanced Research, Vol. 9, pp. 22177-22185, 2020\n31. LI Da-peng, YANG Dao-yu, & HUANG Dan, Energy2D\xe5\x9c\xa8\xe4\xbc\xa0\xe7\x83\xad\xe5\xad\xa6\xe8\x99\x9a\xe6\x8b\x9f\xe4\xbb\xbf\xe7\x9c\x9f\xe6\x95\x99\xe5\xad\xa6\xe4\xb8\xad\xe7\x9a\x84\xe5\xba\x94\xe7\x94\xa8\xe7\xa0\x94\xe7\xa9\xb6, \xe6\x95\x99\xe8\x82\xb2\xe7\x8e\xb0\xe4\xbb\xa3\xe5\x8c\x96, Vol. 49, 2020\n32. Inge Magdalena Sutjahja, Sufiyah Assegaf, & Surjamanto Wonorahardjo, Digital Simulation as Learning Aid for Heat Flow in Solid Theoretical Understanding, Journal of Physical Science and Engineering, 5(1), 11-21, 2020\n33. Fabio Longo, Alessio Cascardi, Paola Lassandro, & Maria Antonietta Aiello, A New Fabric Reinforced Geopolymer Mortar (FRGM) with Mechanical and Energy Benefits, Fibers, 8(8), 49, 2020\n34. 
Abhar Bhattarai, Bivek Baral, & Malesh Shah, Redesigning Direct Air Capture Using Renewable Energy, Bhutan Journal of Research and Development, pp.86-106, Spring, 2020\n35. Erik Mukk, Reinforcement learning based smart home heating solution in Energy2D simulation software, University of Tartu, 2020\n36. Vaithinathan Karthikeyan, James UtamaSurjadi, Joseph C.K.Wong, Venkataraman Kannan, Kwok-Ho Lam, Xianfeng Chen, Yang Lu, & Vellaisamy A.L.Roy, Wearable and flexible thin film thermoelectric module for multi-scale energy harvesting, Journal of Power Sources, 455, 2020\n37. Stefano Oss, Infrared imaging of a non-stationary thermal conductive process and observation of its Green\xe2\x80\x99s kernel, European Journal of Physics, 41(1), 015102, 2020\n38. Massimiliano Malgieri & Pasquale Onorato, Teaching the heat transfer law using a stochastic toy model, European Journal of Physics, 41(1), 015103, 2020\n39. Surjamanto Wonorahardjo, Inge Magdalena Sutjahja, Y. Mardiyati, Heri Andoni, Dixon Thomas, Rizky Amalia Achsani, & S. Steven, Characterising thermal behaviour of buildings and its effect on urban heat island in tropical areas, International Journal of Energy and Environmental Engineering, 11, pp. 129\xe2\x80\x93142, 2020\n40. Kishan Narotam, Design of a Thermally Controlled System for Medications, University of the Witwatersrand, 2019\n41. Margarita Castillo Tellez, Beatriz Castillo Tellez, Jose Andres Alan\xc3\xads Navarro, Juan Carlos Ovando Sierra, & Gerardo A. Mejia Perez, Kinetics of Drying Medicinal Plants by Hybridization of Solar Technologies, IntechOpen, DOI: 10.5772/intechopen.89686, 2019\n42. S. Sufiandi, T. Darmawan, & W. Dwianto, Mechanism of heat transfer using water in wood pores during wood densification, IOP Conference Series: Earth and Environmental Science, 374, 012013, 2019\n43. Bo Zhang, Tomas Mikysek, Veronika Cicmancova, Stanislav Slang, Roman Svoboda, Petr Kutalek, & Tomas Wagner, 2D GeSe2 amorphous monolayer, Pure and Applied Chemistry, 91(11), 2019\n44. Adem Yilmaz, Sinan Unvar, Ali Serkan Avci, & Bunyamin Aygun, Difference of Solar Chimney System Chimney Designs and Numerical Modeling in Collector Surface Areas, International Journal of Scientific & Engineering Research, Volume 10, Issue 9, pp 82-89, 2019\n45. Attila Zakarias, Tamas Laszlo, Csaba Krizbai, Tamas Szabo, & Norbert Demeter, Design and Development of Computer-Controlled Temperature Gradient System, Muszaki Tudomanyos Kozlemenyek, Volume 11, Issue 1, pp 187-190, 2019\n46. Graziella Gini, Thermal component conditioning: Creation of a temperature map using Energy2D, Department of Architecture, ETH Zurich, 2019\n47. Amardeep Singh, Simulation Of Coffee Stain Effects Using ANSYS Fluent, Journal of Physics: Conference Series, 1276, 012004, 2019\n48. Pasquale Onorato, Luigi M. Gratton, Stefano Oss, & Massimiliano Malgieri, From the dicey world to the physical laws: dice toy models for bridging microscopic and macroscopic understanding of physical phenomena, Journal of Physics: Conference Series, 1287, 012026, 2019\n49. Dorin Copaci, Dolores Blanco, & Luis E. Moreno, Flexible Shape-Memory Alloy-Based Actuator: Mechanical Design Optimization According to Application, Actuators, Volume 8, Issue 3, p. 63, 2019\n50. Sabri Pllana, Suejb Memeti, & Joanna Kolodziej, Customizing Pareto Simulated Annealing for Multi-objective Optimization of Control Cabinet Layout, The 22nd International Conference on Control Systems and Computer Science, 2019\n51. 
Surjamanto Wonorahardjo, Inge Sutjahja, Siti Aisyah Damiati, & Daniel Kurnia, Adjustment of Indoor Temperature using Internal Thermal Mass under Different Tropical Weather Conditions, Science and Technology for the Built Environment, 2019\n52. Jos\xc3\xa9 Andr\xc3\xa9s Alan\xc3\xads Navarro, Margarita Castillo T\xc3\xa9llez, Mario Arturo Rivera Mart\xc3\xadnez, Gabriel Pedroza Silvar, & Francisco Christian Mart\xc3\xadnez Tejeda, Computational thermal analysis of a double slope solar still using Energy2D, Desalination and Water Treatment, Volume 151, pp 26-33, 2019\n53. D. Copaci, F. Martin, L. Moreno, & D. Blanco, SMA-Based Elbow Exoskeleton for Rehabilitation Therapy and Patient Evaluation, IEEE Access, 2019\n54. R. C. G. M. Loonen, M. L. de Klijn-Chevalerias, & J. L. M. Hensen, Opportunities and Pitfalls of Using Building Performance Simulation in Explorative R&D Contexts, Journal of Building Performance Simulation, 2019\n55. Weera Punin, Somchai Maneewan, & Chantana Punlek, Heat Transfer Characteristics of a Thermoelectric Power Generator System for Low-Grade Waste Heat Recovery from the Sugar Industry, Heat and Mass Transfer, Volume 55, Issue 4, pp 979-991, 2019\n56. SC Corr\xc3\xaaa, Desenvolvimento de uma sequ\xc3\xaancia did\xc3\xa1tica para o ensino de conceitos de calor segundo princ\xc3\xadpios da teoria da flexibilidade cognitiva, 2019\n57. Alexandre Dubor, Edouard Cabay, & Angelos Chronis, Energy Efficient Design for 3D Printed Earth Architecture, Humanizing Digital Reality, pp 383-393, 2018\n58. Savior Billianos, Thermal phenomena and flow streams in urban environment and simulation with Energy2D program, Hellenic Open University, 2018\n59. Ashenafi Tesfaye, Solomon Mariam, & Tewodros Walle, Design and Development of Low-Power Output Solar Chimney Power Plant, Addis Ababa Institute of Technology, 2018\n60. Alessandro Massaro, Angelo Galiano, Giacomo Meuli, & Saverio Francesco Massari, Overview and Application of Enabling Technologies Oriented on Energy Routing Monitoring, on Network Installation and on Predictive Maintenance, International Journal of Artificial Intelligence and Applications, Volume 9, No.2, pp 1-20, 2018\n61. Muammar Mansor, Khadouja Harouaka, Matthew S. Gonzales, Jennifer L. Macalady, & Matthew S. Fantle, Transport-Induced Spatial Patterns of Sulfur Isotopes (\xce\xb434S) as Biosignatures, Astrobiology, Volume 18, No. 1, pp 59-72, 2018\n62. Mahfoud Abderrezek & Mohamed Fathi, Effect of Dust Deposition on the Performance of Thin Film Solar Cell, Elektronika ir Elektrotechnika, Volume 24, pp 41-45, 2018\n63. FT Moro, IG Neide, & MJH Rehfeldt, TRANSFER\xc3\x8aNCIA DE ENERGIA T\xc3\x89RMICA: UMA PROPOSTA INTEGRANDO ATIVIDADES EXPERIMENTAIS E SIMULA\xc3\x87\xc3\x95ES COMPUTACIONAIS PARA O ENSINO M\xc3\x89DIO, 2018\n64. Milan Marjanovi\xc4\x87, Sne\xc5\xbeana Dragi\xc4\x87evi\xc4\x87, Ivan Mili\xc4\x87evi\xc4\x87, Marko Popovi\xc4\x87, & Vojislav Vuji\xc4\x8di\xc4\x87, Application of Computer Simulation in Engineering Education, 7th International Scientific Conference Technics and Informatics in Education, 2018\n65. Rishi Gupta, Harsh Rathod, Balasubramanian Esakki, & Sean Blaney, NON-CONTACT NON-DESTRUCTIVE INFRARED THERMOGRAPHY BASED EVALUATION OF REINFORCED CONCRETE STRUCTURES, Leadership in Sustainable Infrastructure, 2017\n66. Amjad Nasser, Asmin Ashraf, & Dilna K. Salim, Comparative Study of Conventional and Green Residential Buildings, International Journal of Innovative Science and Research Technology, Volume 2, No. 4, 2017\n67. 
Stefan Heusler & Daniel Laumann, Himmlische Physik - Wolkenbilder als Ausgangspunkt fur die digitale Modellierung von Strukturbildungsprozessen, Unterricht Physik, 159/160, pp 69-73, 2017\n68. Faqih Rosyadi, Tomo Djudin, & Syaiful B Arsyid, REMEDIASI MISKONSEPSI PERPINDAHAN KALOR MENGGUNAKAN MODEL DIRECT INSTRUCTION BERBANTUAN ANIMASI ENERGY2D DI SMP, Jurnal Pendidikan dan Pembelajaran Khatulistiwa, Volume 6, No. 12, 2017\n69. Dorin Sabin Copaci, Non-Linear Actuators and Simulation Tools for Rehabilitation Devices, Ph.D. Dissertation, Carlos III University of Madrid, p. 89, 2017\n70. Proma Chakraborty, Impact of Furniture on the Energy Consumption of Commercial Buildings, Proceedings of the 2nd International Conference on Communication and Electronics Systems, pp 316-319, 2017\n71. E. Rozos, I. Tsoukalas, & C. Makropoulos, Turning Black into Green: Ecosystem Services from Treated Wastewater, Desalination and Water Treatment, Volume 91, October, pp 198-205, 2017\n72. Gu Hai-Rong, Dong Qiang-Zhu, Li Jin-Ping, Liang Feng-Dian, Zhang Fei, Wang Zuo-Jia, Heating Modes and Heat Transfer Process of Asphalt Pavement Hot In-place Recycling (\xe6\xb2\xa5\xe9\x9d\x92\xe8\xb7\xaf\xe9\x9d\xa2\xe5\xb0\xb1\xe5\x9c\xb0\xe7\x83\xad\xe5\x86\x8d\xe7\x94\x9f\xe5\x8a\xa0\xe7\x83\xad\xe6\x96\xb9\xe5\xbc\x8f\xe4\xb8\x8e\xe4\xbc\xa0\xe7\x83\xad\xe8\xbf\x87\xe7\xa8\x8b), Road Machinery & Construction Mechanization, Volume 34, Issue 11, pp 96-99, 2017\n73. Sadik A. Yildizel, Mechanical and Thermal Behaviors Comparison of Basalt and Glass Fibers Reinforced Concrete with Two Different Fiber Length Distributions, Challenge Journal of Structural Mechanics, Volume 3, Issue 4, pp 155-159, 2017\n74. Georgia Kaklamani, David Cheneler, Liam M. Grover, Michael J. Adams, Spiros H. Anastasiadis, & James Bowen, Anisotropic Dehydration of Hydrogel Surfaces, Progress in Biomaterials, Volume 6, Issue 4, pp157-164, 2017\n75. M. St\xc3\xbctz, F. Pixner, J. Wagner, N. Reheis, E. Raiser, H. Kestler, & N. Enzinger, Rotary Friction Welding of Molybdenum Components, The 19th Plansee Seminar, 2017\n76. Marie L. de Klijn-Chevalerias, Roel C.G.M. Loonen, A. Zarzycka, Dennis de Witte, M. V. Sarakinioti, & Jan L.M. Hensen, Assisting the Development of Innovative Responsive Facade Elements Using Building Performance Simulation, in M. Turrin, B. Peters, W. O'Brien, R. Stouffs, & T. Dogan (Eds.), Proceedings of the Symposium on Simulation for Architecture and Urban Design, pp. 243-250, 2017\n77. Mahfoud Abderrezek & Mohamed Fathi, Experimental Study of the Dust Effect on Photovoltaic Panels' Energy Yield, Solar Energy, Volume 142, pp 308-320, 2017\n78. Dennis de Witte, Marie L. de Klijn-Chevalerias, Roel C.G.M. Loonen, Jan L.M. Hensen, Ulrich Knaack, & Gregor Zimmermann, Convective Concrete: Additive Manufacturing to Facilitate Activation of Thermal Mass, Journal of Facade Design and Engineering, Volume 5, No. 1, 2017\n79. Javier G. Monroy & Javier Gonzalez-Jimenez, Gas Classification in Motion: An Experimental Analysis, Sensors and Actuators B: Chemical, Volume 240, pp 1205-1215, 2017\n80. P Bas Calopa, Projecte d'una incubadora amb regulaci\xc3\xb3 atmosf\xc3\xa8rica, Universitat Polit\xc3\xa8cnica de Catalunya, 2017\n81. Jose Manuel Mart\xc3\xadn Perez-Solorzano, High-Efficiency SMA-Based Actuator, Universidad Carlos III de Madrid, 2016\n82. Tom Rainforth, Tuan Anh Le, Jan-Willem van de Meent, Michael A. 
Osborne, & Frank Wood, Bayesian Optimization for Probabilistic Programs, 30th Conference on Neural Information Processing Systems, Barcelona, Spain, 2016\n83. W. Taylor Shoulders, Richard Locke, & Romain M. Gaume, Elastic Airtight Container for the Compaction of Air-Sensitive Materials, Review of Scientific Instruments, Volume 87, 063908, 2016\n84. Zachary R. Adam, Temperature Oscillations near Natural Nuclear Reactor Cores and the Potential for Prebiotic Oligomer Synthesis, Origins of Life and Evolution of Biospheres, Volume 46, Issue 2, pp 171-187, 2016\n85. Jiarui Chen, Shuyu Qin, Xinglong Wu, & Paul K Chu, Morphology and Pattern Control of Diphenylalanine Self-Assembly via Evaporative Dewetting, ACS Nano, Volume 10, No. 1, pp 832-838, 2016\n86. Teresa Moro, Italo Gabriel Neide, & Marcia Jussara Hepp Rehfeldt, Experimental activities and computer simulations: integration for the construction of thermal energy transfer concepts in high school, Brazilian Notebook of Physics Teaching, Volume 36, No. 3, 2016\n87. Truong Minh Thang, Research and simulate basic heat transfer processes using Energy2D software, The University of Transport Vietnam, 2016\n88. Atanas Vasilev, Geothermal Evolution of Gas Hydrate Deposits: Bulgarian Exclusive Economic Zone in the Black Sea, Comptes rendus de l\xe2\x80\x98Acad\xc3\xa9mie bulgare des Sciences, Volume 68, No. 9, pp 1135-1144, 2015\n89. Yit Man Heng, Experimental Investigation of Roof Top Solar Chimney for Natural Ventilation, Universiti Teknologi PETRONAS, 2015\n90. Mikolaj Michal Chojnacki, Modelling and Simulation of Transient Heat Transfer for Sustainable Buildings, Warsaw University of Technology, 2015\n91. Pedro A. Hern\xc3\xa1ndez, et al., Magma Emission Rates from Shallow Submarine Eruptions Using Airborne Thermal Imaging, Remote Sensing of Environment, Volume 154, pp 219-225, November 2014\n""",",https://doi.org/10.1016/j.actaastro.2023.07.035\n2,https://doi.org/10.1016/j.csite.2023.103543\n3,https://doi.org/10.1080/0951192X.2023.2257623\n4,https://doi.org/10.1002/admt.202300390\n5","2014/03/28, 15:24:48",3498,LGPL-3.0,22,46,"2023/09/23, 15:52:52",9,5,6,3,32,0,0.2,0.0,,,0,1,false,,false,false,,,https://github.com/Institute-for-Future-Intelligence,https://intofuture.org,Massachusetts,,,https://avatars.githubusercontent.com/u/68472371?v=4,,, OpenGeoSys 6,A scientific open source project for the development of numerical methods for the simulation of thermo-hydro-mechanical-chemical processes in porous and fractured media.,ogs,,custom,,Geothermal Energy,,,,,,,,,,https://gitlab.opengeosys.org/ogs/ogs,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,, FEHM,"Has proved to be a valuable asset on a variety of projects of national interest including Environmental Remediation of the Nevada Test Site, the LANL Groundwater Protection Program, geologic CO2 sequestration, Enhanced Geothermal Energy programs, Oil and Gas production, Nuclear Waste Isolation, and Arctic Permafrost.",lanl,https://github.com/lanl/FEHM.git,github,"porous-flow,simulation-modeling,reactive-chemistry,coupled-heat-mass,multiphase-transport,groundwater-modelling,carbon-storage,soil-vapor,earth-science,carbon-sequestration,subsurface-remediation,nuclear-waste-repositories,subsurface-hydrology,geothermal-energy",Geothermal Energy,"2021/12/14, 20:22:08",40,0,8,false,Fortran,Los Alamos National Laboratory,lanl,"Fortran,GLSL,Roff,Python,Julia,Makefile,Batchfile,C,Smarty,Tcl",https://fehm.lanl.gov,"b'## FEHM: Finite Element Heat and Mass Transfer Code ##\n**LANL Software: LA-CC-2012-083 No. 
C13022**\n**LANL Documents: LA-UR-12-24493**\n\n[![Build Status](https://travis-ci.org/lanl/FEHM.svg?branch=master)](https://travis-ci.org/lanl/FEHM)\n\nThe numerical background of the FEHM computer code can be traced to the early 1970s when it was used to simulate geothermal and hot dry rock reservoirs. The primary use over a number of years was to assist in the understanding of flow fields and mass transport in the saturated and unsaturated zones below the potential Yucca Mountain repository. Today FEHM is used to simulate groundwater and contaminant flow and transport in deep and shallow, fractured and un-fractured porous media throughout the US DOE complex. FEHM has proved to be a valuable asset on a variety of projects of national interest including Environmental Remediation of the Nevada Test Site, the LANL Groundwater Protection Program, geologic CO2 sequestration, Enhanced Geothermal Energy (EGS) programs, Oil and Gas production, Nuclear Waste Isolation, and Arctic Permafrost. Subsurface physics has ranged from single-fluid/single-phase fluid flow when simulating basin-scale groundwater aquifers to complex multi-fluid/multi-phase fluid flow that includes phase change with boiling and condensing, in applications such as the unsaturated zone surrounding a nuclear waste storage facility or leakage of CO2/brine through faults or wellbores. The numerical method used in FEHM is the control volume (CV) method for the fluid flow and heat transfer equations, which allows FEHM to exactly enforce energy/mass conservation, while an option is available to use the finite element (FE) method for the displacement equations to obtain more accurate stress calculations. In addition to these standard methods, an option to use FE for flow is available, as well as a simple Finite Difference scheme.\n\n\n#### [FEHM Homepage](https://fehm.lanl.gov) \xe2\x80\xa2 [FEHM Documentation](http://lanl.github.io/FEHM/) \xe2\x80\xa2 [Fehmpytests Documentation](http://lanl.github.io/FEHM/fehmpytests/html/index.html)\n\n\n## License ##\n\nFEHM is distributed as open-source software under a BSD 3-Clause License. See [Copyright License](LICENSE.md)\n\n\n## Developers ##\n\nExternal Collaborators must sign a Contribution Agreement. [Contribution Agreement for External Collaborators](CONTRIBUTING.md)\n\nThe following are reminders for FEHM code developers using this repository.\n\nA Git workflow follows these basic steps:\n* Make changes to files\n* Add the files (\xe2\x80\x98stage\xe2\x80\x99 files)\n* \xe2\x80\x98Commit\xe2\x80\x99 the staged files\n* Push the commit (containing all modified files) to the central repo\n \n1. To get the repo, run the command\n\n```\ngit clone https://github.com/lanl/FEHM.git\n```\n\nThis will download the FEHM Git repo to your current directory.\n \n2. Let\xe2\x80\x99s say you\xe2\x80\x99ve done some editing and you\xe2\x80\x99re ready to push your changes to the FEHM repository.\nRun the command\n\n```\ngit add file1 file2 ... fileN\n```\n \nto add any files you have changed. You can also just run `git add .` if you want to add every changed file.\n \n3.
Now, run\n \n```\ngit status\n```\n \nThis gives an overview of all tracked and untracked files.\nA tracked file is one that Git considers as part of the repo.\nUntracked files are everything else \xe2\x80\x93 think of *.o files, or some test data output generated by an FEHM run.\n \nTracked files can be:\n* Unmodified (you haven\xe2\x80\x99t made any changes to it, relative to the last commit)\n* Modified (you have edited the file since the last commit)\n* Staged (the file has been added and is ready to be committed and then pushed)\n \nUntracked files become tracked by using\n```\ngit add filename\n```\n \n4. After verifying (with `git status`) that all the files you want to be pushed are properly staged, commit them using\n\n```\ngit commit -m ""My first Git commit!""\n```\n \nThen, push the files onto the GitHub repo with\n\n```\ngit push origin master\n```\n \n5. If someone else has made edits, you will need to pull their changes to your local FEHM clone before you can push.\n \n```\ngit pull origin master\ngit push origin master\n```\n\n## BUILD FEHM ##\n\nTo build FEHM, see the src directory and the Makefile it contains.\n\n\n## FEHM Release Versions ##\n\n\nSee Versions and Notes under the Releases tab of this repository.\n\nThe most recent distributed release is FEHM V3.4.0 (September 2019), which is the version cloned for this repository. The FEHM software is a continuation of QA work performed for the Yucca Mountain Project (YMP) under Software Configuration Control Request (SCCR) (Software Tracking Numbers STN: 10086-2.21-00 August 2003, V2.22, STN 10086-2.22-01, V2.23, STN 10086-2.23-00, V2.24-01, STN 10086-2.24-01, and V2.25, STN 10086-2.25-00). \nThe QA for these codes started under YMP QA and continues under the LANL EES-16 Software QA Policy and Procedures as outlined in: ""EES-16-13-003.SoftwareProcedure.pdf"" \n\nBefore distribution of FEHM software, tests are executed and verified as acceptable on LANL computers with operating systems Linux, Mac OSX, and WINDOWS. The overall validation effort for the FEHM software consists of a suite of directories and scripts that test the model, whenever possible, against known analytical solutions of the same problem. The test suite was developed under YMP QA for FEHM RD.10086-RD-2.21-00 and is available for download.\n\n'",,"2017/12/13, 21:46:09",2141,CUSTOM,0,911,"2021/12/14, 20:22:09",28,9,15,0,679,3,0.0,0.15921787709497204,"2019/11/13, 18:01:50",v3.4.0,0,8,false,,false,true,,,https://github.com/lanl,https://www.lanl.gov/,"Los Alamos, New Mexico, USA",,,https://avatars.githubusercontent.com/u/585305?v=4,,, thermo,"Thermodynamics, phase equilibrium, transport properties and chemical database component of Chemical Engineering Design Library.",CalebBell,https://github.com/CalebBell/thermo.git,github,"thermodynamics,chemistry,cheminformatics,chemical-engineering,mechanical-engineering,viscosity,density,heat-capacity,thermal-conductivity,surface-tension,combustion,environmental-engineering,solubility,vapor-pressure,equation-of-state,molecule,process-simulation,physics",Geothermal Energy,"2023/10/21, 21:52:04",511,66,102,true,Python,,,Python,,"b'======\nThermo\n======\n\n.. image:: http://img.shields.io/pypi/v/thermo.svg?style=flat\n :target: https://pypi.python.org/pypi/thermo\n :alt: Version_status\n.. image:: http://img.shields.io/badge/docs-latest-brightgreen.svg?style=flat\n :target: https://thermo.readthedocs.io/\n :alt: Documentation\n..
image:: http://img.shields.io/badge/license-MIT-blue.svg?style=flat\n :target: https://github.com/CalebBell/thermo/blob/master/LICENSE.txt\n :alt: license\n.. image:: https://img.shields.io/coveralls/CalebBell/thermo.svg\n :target: https://coveralls.io/github/CalebBell/thermo\n :alt: Coverage\n.. image:: https://img.shields.io/pypi/pyversions/thermo.svg\n :target: https://pypi.python.org/pypi/thermo\n :alt: Supported_versions\n.. image:: https://badges.gitter.im/CalebBell/thermo.svg\n :alt: Join the chat at https://gitter.im/CalebBell/thermo\n :target: https://gitter.im/CalebBell/thermo\n.. image:: https://zenodo.org/badge/62404647.svg\n :alt: Zenodo\n :target: https://zenodo.org/badge/latestdoi/62404647\n\n\n.. contents::\n\nWhat is Thermo?\n---------------\n\nThermo is open-source software for engineers, scientists, technicians and\nanyone trying to understand the universe in more detail. It facilitates \nthe retrieval of constants of chemicals, the calculation of temperature\nand pressure dependent chemical properties (both thermodynamic and \ntransport), and the calculation of the same for chemical mixtures (including\nphase equilibria) using various models.\n\nThermo runs on all operating systems which support Python, is quick to install, and is\nfree of charge. Thermo is designed to be easy to use while still providing powerful\nfunctionality. If you need to know something about a chemical or mixture, give Thermo a try.\n\nInstallation\n------------\n\nGet the latest version of Thermo from\nhttps://pypi.python.org/pypi/thermo/\n\nIf you have an installation of Python with pip, simply install it with:\n\n $ pip install thermo\n \nAlternatively, if you are using `conda `_ as your package manager, you can simply\ninstall Thermo in your environment from the `conda-forge `_ channel with:\n\n $ conda install -c conda-forge thermo\n\nTo get the git version, run:\n\n $ git clone git://github.com/CalebBell/thermo.git\n\nDocumentation\n-------------\n\nThermo\'s documentation is available on the web:\n\n http://thermo.readthedocs.io/\n\nGetting Started - Rigorous Interface\n------------------------------------\n\nCreate a pure-component flash object for the compound ""decane"", using the Peng-Robinson equation of state. Perform a flash calculation at 300 K and 1 bar, and obtain a variety of properties from the resulting object:\n\n\n..
code-block:: python\n\n >>> from thermo import ChemicalConstantsPackage, PRMIX, CEOSLiquid, CEOSGas, FlashPureVLS\n >>> # Load the constant properties and correlation properties\n >>> constants, correlations = ChemicalConstantsPackage.from_IDs([\'decane\'])\n >>> # Configure the liquid and gas phase objects\n >>> eos_kwargs = dict(Tcs=constants.Tcs, Pcs=constants.Pcs, omegas=constants.omegas)\n >>> liquid = CEOSLiquid(PRMIX, HeatCapacityGases=correlations.HeatCapacityGases, eos_kwargs=eos_kwargs)\n >>> gas = CEOSGas(PRMIX, HeatCapacityGases=correlations.HeatCapacityGases, eos_kwargs=eos_kwargs)\n >>> # Create a flash object with possible phases of 1 gas and 1 liquid\n >>> flasher = FlashPureVLS(constants, correlations, gas=gas, liquids=[liquid], solids=[])\n >>> # Flash at 300 K and 1 bar\n >>> res = flasher.flash(T=300, P=1e5)\n >>> # molar enthalpy and entropy [J/mol and J/(mol*K) respectively] and the mass enthalpy and entropy [J/kg and J/(kg*K)]\n >>> res.H(), res.S(), res.H_mass(), res.S_mass()\n (-48458.137745529726, -112.67831317511894, -340578.897757812, -791.9383098029132)\n >>> # molar Cp and Cv [J/(mol*K)] and the mass Cp and Cv [J/(kg*K)]\n >>> res.Cp(), res.Cv(), res.Cp_mass(), res.Cv_mass()\n (295.17313861592686, 269.62465319082014, 2074.568831461133, 1895.0061117553582)\n >>> # Molar volume [m^3/mol], molar density [mol/m^3] and mass density [kg/m^3]\n >>> res.V(), res.rho(), res.rho_mass()\n (0.00020989856076374984, 4764.206082982839, 677.8592453530177)\n >>> # isobaric expansion coefficient [1/K], isothermal compressibility [1/Pa], Joule Thomson coefficient [K/Pa]\n >>> res.isobaric_expansion(), res.kappa(), res.Joule_Thomson()\n (0.0006977350520992281, 1.1999043797490713e-09, -5.622547043844744e-07)\n >>> # Speed of sound in molar [m*kg^0.5/(s*mol^0.5)] and mass [m/s] units\n >>> res.speed_of_sound(), res.speed_of_sound_mass()\n (437.61281158744987, 1160.1537167375043)\n\nThe following example shows the retrieval of chemical properties for a two-phase system with methane, ethane, and nitrogen, using a few sample kijs:\n\n.. code-block:: python\n\n >>> from thermo import ChemicalConstantsPackage, CEOSGas, CEOSLiquid, PRMIX, FlashVL\n >>> from thermo.interaction_parameters import IPDB\n >>> constants, properties = ChemicalConstantsPackage.from_IDs([\'methane\', \'ethane\', \'nitrogen\'])\n >>> kijs = IPDB.get_ip_asymmetric_matrix(\'ChemSep PR\', constants.CASs, \'kij\')\n >>> kijs\n [[0.0, -0.0059, 0.0289], [-0.0059, 0.0, 0.0533], [0.0289, 0.0533, 0.0]]\n >>> eos_kwargs = {\'Pcs\': constants.Pcs, \'Tcs\': constants.Tcs, \'omegas\': constants.omegas, \'kijs\': kijs}\n >>> gas = CEOSGas(PRMIX, eos_kwargs=eos_kwargs, HeatCapacityGases=properties.HeatCapacityGases)\n >>> liquid = CEOSLiquid(PRMIX, eos_kwargs=eos_kwargs, HeatCapacityGases=properties.HeatCapacityGases)\n >>> flasher = FlashVL(constants, properties, liquid=liquid, gas=gas)\n >>> zs = [0.965, 0.018, 0.017]\n >>> PT = flasher.flash(T=110.0, P=1e5, zs=zs)\n >>> PT.VF, PT.gas.zs, PT.liquid0.zs\n (0.10365, [0.881788, 2.6758e-05, 0.11818], [0.97462, 0.02007, 0.005298])\n >>> flasher.flash(P=1e5, VF=1, zs=zs).T\n 133.6\n >>> flasher.flash(T=133, VF=0, zs=zs).P\n 518367.4\n >>> flasher.flash(P=PT.P, H=PT.H(), zs=zs).T\n 110.0\n >>> flasher.flash(P=PT.P, S=PT.S(), zs=zs).T\n 110.0\n >>> flasher.flash(T=PT.T, H=PT.H(), zs=zs).T\n 110.0\n >>> flasher.flash(T=PT.T, S=PT.S(), zs=zs).T\n 110.0\n\nThere is also an N-phase flash algorithm available, FlashVLN.
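\n\nSuch a flasher might be constructed analogously to the FlashVL example above (a sketch under the assumption that FlashVLN likewise accepts the constants and properties together with a list of liquid phase objects; no outputs are shown):\n\n.. code-block:: python\n\n >>> from thermo import FlashVLN\n >>> # reusing constants, properties, eos_kwargs, gas, liquid and zs from the example above;\n >>> # a second liquid object lets the algorithm detect a liquid-liquid split\n >>> liquid2 = CEOSLiquid(PRMIX, eos_kwargs=eos_kwargs, HeatCapacityGases=properties.HeatCapacityGases)\n >>> flasher_N = FlashVLN(constants, properties, liquids=[liquid, liquid2], gas=gas)\n >>> res = flasher_N.flash(T=110.0, P=1e5, zs=zs)\n\nSee the Thermo documentation for the exact FlashVLN API.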
There are no solid models implemented in this interface at this time.\n\n\nGetting Started - Simple Interface\n----------------------------------\n\nThe library is designed around base SI units only for development\nconvenience. All chemicals default to 298.15 K and 101325 Pa on \ncreation, unless specified. All constant properties are loaded on\nthe creation of a Chemical instance.\n\n.. code-block:: python\n\n >>> from thermo.chemical import Chemical\n >>> tol = Chemical(\'toluene\')\n >>> tol.Tm, tol.Tb, tol.Tc\n (179.2, 383.75, 591.75)\n >>> tol.rho, tol.Cp, tol.k, tol.mu\n (862.238, 1706.07, 0.13034, 0.0005522)\n\n\nFor pure species, the phase is easily\nidentified, allowing for properties to be obtained without needing\nto specify the phase. However, the properties are also available in the\nhypothetical gas phase (when under the boiling point) and in the hypothetical\nliquid phase (when above the boiling point), as these properties are needed\nto evaluate mixture properties. Specify the phase of a property to be retrieved \nby appending \'l\', \'g\' or \'s\' to the property.\n\n.. code-block:: python\n\n >>> from thermo.chemical import Chemical\n >>> tol = Chemical(\'toluene\')\n >>> tol.rhog, tol.Cpg, tol.kg, tol.mug\n (4.0320096, 1126.553, 0.010736, 6.97332e-06)\n\nCreating a chemical object involves identifying the appropriate chemical by name\nthrough a database, and retrieving all constant and temperature- and pressure-dependent\ncoefficients from Pandas DataFrames - a ~1 ms process. To obtain properties at different\nconditions quickly, the method calculate has been implemented. \n \n.. code-block:: python\n\n >>> tol.calculate(T=310, P=101325)\n >>> tol.rho, tol.Cp, tol.k, tol.mu\n (851.1582219886011, 1743.280497511088, 0.12705495902514785, 0.00048161578053599225)\n >>> tol.calculate(310, 2E6)\n >>> tol.rho, tol.Cp, tol.k, tol.mu\n (852.7643604407997, 1743.280497511088, 0.12773606382684732, 0.0004894942399156052)\n\nEach property is implemented through an independent object-oriented method, based on \nthe classes TDependentProperty and TPDependentProperty to allow for shared methods of\nplotting, integrating, differentiating, solving, interpolating, sanity checking, and\nerror handling. For example, one can solve for the temperature at which the vapor pressure\nof toluene is 2 bar. For each property, as many methods of calculating or estimating\nit are included as possible. All methods can be visualized independently:\n\n.. code-block:: python\n\n >>> Chemical(\'toluene\').VaporPressure.solve_property(2E5)\n 409.5909115602903\n >>> Chemical(\'toluene\').SurfaceTension.plot_T_dependent_property()\n\nMixtures are supported and many mixing rules have been implemented. However, there is\nno error handling. Inputs as mole fractions (`zs`), mass fractions (`ws`), or volume\nfractions (`Vfls` or `Vfgs`) are supported. Some shortcuts to predefined\nmixtures are supported.\n\n.. code-block:: python\n\n >>> from thermo.chemical import Mixture\n >>> vodka = Mixture([\'water\', \'ethanol\'], Vfls=[.6, .4], T=300, P=1E5)\n >>> vodka.Prl, vodka.Prg\n (35.13075699606542, 0.9822705235442692)\n >>> air = Mixture(\'air\', T=400, P=1e5)\n >>> air.Cp\n 1013.7956176577836\n\nWarning: The phase equilibria of Chemical and Mixture are not presently\nas rigorous as the other interface.
The property model is not particularly\nconsistent and uses a variety of ideal and Peng-Robinson methods together.\n\nLatest source code\n------------------\n\nThe latest development version of Thermo\'s sources can be obtained at\n\n https://github.com/CalebBell/thermo\n\n\nBug reports\n-----------\n\nTo report bugs, please use the Thermo\'s Bug Tracker at:\n\n https://github.com/CalebBell/thermo/issues\n\n\nLicense information\n-------------------\n\nSee ``LICENSE.txt`` for information on the terms & conditions for usage\nof this software, and a DISCLAIMER OF ALL WARRANTIES.\n\nAlthough not required by the Thermo license, if it is convenient for you,\nplease cite Thermo if used in your work. Please also consider contributing\nany changes you make back, and benefit the community.\n\n\nCitation\n--------\n\nTo cite Thermo in publications use::\n\n Caleb Bell and Contributors (2016-2023). Thermo: Chemical properties component of Chemical Engineering Design Library (ChEDL)\n https://github.com/CalebBell/thermo.\n'",",https://zenodo.org/badge/latestdoi/62404647\n\n\n","2016/07/01, 16:04:56",2672,MIT,409,2939,"2023/10/09, 20:29:17",9,35,136,18,15,2,0.0,0.029207232267037586,"2023/04/23, 22:46:47",0.2.24,0,13,false,,false,false,"yalinli2/binder-test,QSD-Group/QSDsan-env,QSD-Group/QSDedu,Ahmedhassan676/pump_specs,cnm13ryan/ChemEngPlayGround_AE,maBeigi98/OpenPNM,SERGGroup/BHEModel3.0,aflofo/paraflow,Ahmedhassan676/htcalc,Zhang-Zhiyuan-zzy/hotpot,kdvillavicencio/chedl-api,Ahmedhassan676/Properties_app,AlexanderPerez11/INME4003_Projects,wlaur/encomp,OpenOrion/paraflow,valentindelachaux/1Dmodel,chrinide/overreact,Noha2007/overreact,idilismail/overreact,brunocuevas/overreact,icamps/overreact,Leticia-maria/overreact,dogusariturk/PyMDL,rshormazabal/modelBasedTL,SalvadorBrandolin/ReactorD,dogusariturk/HEACalculator,sstopkin/leaders2022-hackathon,PMEAL/OpenPNM,Stanford-EAO/OPGEEv4,jfp6/jupyterNotebooks,NREL/ml4pd,manapaap/thermo_website,schneiderfelipe/overreact-workshop,rseng/rsepedia-analysis,QSD-Group/QSDsan-workshop,abhishekvraman/Propylean,TomAveyard/SLRE,Mr-Yuri/simul_gaseif_newton,geem-lab/overreact,AlFACHEUNG/ChemIA-Viscosity,BioSTEAMDevelopmentGroup/thermosteam,Kankelborg-Group/kgpy,RoryKurek/prfs,cuspaceflight/torch,cuspaceflight/octopus,Dlittlewood12/Kiln-Energy-Balance,cuspaceflight/bamboo,SAgaienz/CPJReactorDesign,jmox0351/chemEPy,cuspaceflight/CamPyRoS,afrazahmad21/TrolleyApp,Fortune-Adekogbe/titanic-api,hfsf/antares,CREM-APP/OSEM,hfsf/sloth,ppuertocrem/pandangas,joebowen/hybrid_rocket_motor_sim,BioSTEAMDevelopmentGroup/biosteam,bcusick/engine-calcs,bcusick/radiator-sizing,IntegrCiTy/PandaThermal,IntegrCiTy/PandaNGas,IntegrCiTy/simple-models,ukaea/neutronics_material_maker,apthorpe/jeppson-python,vsreelasya/Search-engine",,,,,,,,,, waiwera,"A parallel, open-source geothermal flow simulator.",waiwera,https://github.com/waiwera/waiwera.git,github,"geothermal,reservoir-simulation,reservoir-modeling,parallel,petsc",Geothermal Energy,"2023/06/30, 02:54:50",35,0,14,true,Fortran,Waiwera,waiwera,"Fortran,Python,Meson,Shell,Dockerfile,Jinja",,"b'Waiwera\n=======\n\n![Unit and benchmark tests](https://github.com/waiwera/waiwera/workflows/Unit%20and%20benchmark%20tests/badge.svg?branch=testing) ![Docker Cloud Build Status](https://img.shields.io/docker/cloud/build/waiwera/waiwera) ![Docker Pulls](https://img.shields.io/docker/pulls/waiwera/waiwera?color=green) [![Documentation 
Status](https://readthedocs.org/projects/waiwera/badge/?version=latest)](https://waiwera.readthedocs.io/en/latest/?badge=latest)\n\nWaiwera is a parallel, open-source geothermal flow simulator.\n\nWaiwera features:\n\n- numerical simulation of high-temperature subsurface flows, including robust phase changes\n- parallel execution on shared- or distributed-memory computers and clusters\n- use of [PETSc](https://petsc.org/) (Portable Extensible Toolkit for Scientific Computation) for parallel data structures, linear and non-linear solvers, etc.\n- standard file formats for input ([JSON](http://www.json.org)) and output ([HDF5](https://portal.hdfgroup.org/display/HDF5/HDF5), [YAML](http://www.yaml.org/about.html))\n- structured, object-oriented code written in Fortran 2003\n- free, open-source license (GNU LGPL)\n\nWaiwera was developed at the University of Auckland\'s [Geothermal Institute](http://www.geothermal.auckland.ac.nz/). Initial development was part of the ""Geothermal Supermodels"" research project, funded by the NZ Ministry of Business, Innovation and Employment ([MBIE](https://www.mbie.govt.nz/)), with additional support from [Contact Energy Ltd](https://contact.co.nz/).\n\nThe word *Waiwera* comes from the M\xc4\x81ori language and means ""hot water"".\n\nFurther information can be found on the Waiwera [website](https://waiwera.github.io/) and in the [user guide](https://waiwera.readthedocs.io/).\n'",,"2019/04/01, 00:27:32",1668,LGPL-3.0,161,4206,"2023/04/27, 16:40:31",4,0,12,2,181,0,0,0.03686517143593715,,,0,4,false,,false,false,,,https://github.com/waiwera,https://waiwera.github.io/,,,,https://avatars.githubusercontent.com/u/49133194?v=4,,, fractoolbox,"Python tools for structural geology and borehole image analysis which includes data handling, frequency and geometric analysis, and reservoir geomechanics.",ICWallis,https://github.com/ICWallis/fractoolbox.git,github,"geology,structural-geology,fracture-mechanics,fracture,fractures,geomechanics,borehole-image-analysis,python,mplstereonet,geothermal",Geothermal Energy,"2023/09/30, 22:52:49",49,0,17,true,Python,,,Python,,"b'# fractoolbox\n\nPython tools for structural geology, borehole image analysis, and reservoir geomechanics. \n\nfractoolbox is pre-release. Use is on an \'as is\' basis and without any warranty implied. Backward compatibility is not guaranteed.\n\nRefer to https://github.com/ICWallis/borehole-image-analysis-with-python for use examples and tutorials.\n\n### Installation\nFractoolbox has not been packaged, so it can\'t be installed using pip or conda. To use fractoolbox, download the repo and place it in the same folder as your code. Import the methods you want to use into your code using the following syntax:\n\n```python\nimport fractoolbox as ftb\n```\n\n### License\n\nThe content of this repository is licensed under the Apache License, Version 2.0 (the ""License""); you may use these files if you comply with the License, which includes attribution.
You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0\n\n'",,"2020/08/19, 19:55:52",1162,Apache-2.0,6,177,"2023/04/27, 16:40:31",0,0,0,0,181,0,0,0.0,"2022/04/07, 06:07:51",v0.1.0,0,1,false,,false,false,,,,,,,,,,, GeoThermalCloud.jl,A repository containing all the data and codes required to demonstrate applications of machine learning methods for geothermal exploration.,SmartTensors,https://github.com/SmartTensors/GeoThermalCloud.jl.git,github,"julia,geothermal,machine-learning,unsupervised-machine-learning",Geothermal Energy,"2023/08/18, 19:30:41",23,0,8,true,Jupyter Notebook,SmartTensors,SmartTensors,"Jupyter Notebook,HTML,TeX,Python,Julia",,"b'# GeoThermalCloud: A Machine Learning Framework for Geothermal Resources Exploration\n\n
\n\n**GeoThermalCloud.jl** is a repository containing all the data and codes required to demonstrate applications of machine learning methods for geothermal exploration.\n\n**GeoThermalCloud.jl** includes:\n- site data\n- simulation scripts\n- jupyter notebooks\n- intermediate results\n- code outputs\n- summary figures\n- readme markdown files\n\n**GeoThermalCloud.jl** showcases the machine learning analyses performed for the following geothermal sites:\n\n- **Brady**: geothermal exploration of the Brady geothermal site, Nevada\n- **SWNM**: geothermal exploration of the Southwest New Mexico (SWNM) region\n- **GreatBasin**: geothermal exploration of the Great Basin region\n\nReports, research papers, and presentations summarizing these machine learning analyses are also available and will be posted soon.\n\n## Julia installation\n\nGeoThermalCloud Machine Learning analyses are performed using Julia.\n\nTo install the most recent version of Julia, follow the instructions at https://julialang.org/downloads/\n\n## GeoThermalCloud installation\n\nTo install all the required modules, execute in the Julia REPL:\n\n```julia\nimport Pkg\nPkg.add(""GeoThermalCloud"")\n```\n\n## GeoThermalCloud examples\n\nGeoThermalCloud machine learning analyses can be executed as follows:\n\n```julia\nimport Pkg\nPkg.add(""GeoThermalCloud"")\nimport GeoThermalCloud\n\nGeoThermalCloud.SWNM() # performs analyses of the Southwest New Mexico region\nGeoThermalCloud.GreatBasin() # performs analyses of the Great Basin region\nGeoThermalCloud.Brady() # performs analyses of the Brady site, Nevada\n```\n\nGeoThermalCloud machine learning analyses can also be executed as Jupyter notebooks:\n\n```julia\nGeoThermalCloud.notebooks() # open Jupyter notebook to access all GeoThermalCloud notebooks\nGeoThermalCloud.SWNM(notebook=true) # opens Jupyter notebook for analyses of the Southwest New Mexico region\nGeoThermalCloud.GreatBasin(notebook=true) # opens Jupyter notebook for analyses of the Great Basin region\nGeoThermalCloud.Brady(notebook=true) # opens Jupyter notebook for analyses of the Brady site, Nevada\n```\n\n## SmartTensors\n\nGeoThermalCloud analyses are performed using the [**SmartTensors**](https://github.com/SmartTensors) machine learning framework.\n\n\n\n[**SmartTensors**](https://github.com/SmartTensors) provides tools for Unsupervised and Physics-Informed Machine Learning.\n\nMore information about [**SmartTensors**](https://github.com/SmartTensors) can be found at [smarttensors.github.io](https://smarttensors.github.io) and [tensors.lanl.gov](http://tensors.lanl.gov).\n\n[**SmartTensors**](https://github.com/SmartTensors) includes a series of modules. Key modules are:\n\n- [**NMFk**](https://github.com/SmartTensors/NMFk.jl): Nonnegative Matrix Factorization + k-means clustering\n- [**NTFk**](https://github.com/SmartTensors/NTFk.jl): Nonnegative Tensor Factorization + k-means clustering\n\n\n\n\n\n## Publications\n\n### Book chapter\n\n- Vesselinov, V.V., Mudunuru, M.K., Ahmmed, B., Karra, S., and O\xe2\x80\x99Malley, D., (accepted): Machine Learning to Discover, Characterize, and Produce Geothermal Energy, CRC Press, Boca Raton, FL.\n\n### Peer reviewed\n\n- Rau, E., Ahmmed, B., Vesselinov, V.V., Mudunuru, M.K., and Karra, S. (in preparation): Geothermal play development using machine learning, geophysics, and reservoir simulation, Geothermics.\n- Ahmmed, B. and Vesselinov, V.V. 
(in review): Machine learning and shallow groundwater chemistry to identify geothermal resources, to be submitted to Renewable Energy, http://dx.doi.org/10.2139/ssrn.4072512. \n- Vesselinov, V.V., Ahmmed, B., Mudunuru, M.K., Pepin, J. D., Burns, E.R., Siler, D.L., Karra, S., and Middleton, R.S. (in review): Discovering hidden geothermal signatures using unsupervised machine learning, Geothermics.\n- Siler, D.L., Pepin, J.D., Vesselinov, V.V., Mudunuru, M.K., and Ahmmed, B. (2021): Machine learning to identify geologic factors associated with production in geothermal fields: A case-study using 3D geologic data, Brady geothermal field, Nevada, Geothermal Energy, https://doi.org/10.1186/s40517-021-00199-8.\n\n### Conference papers\n\n- Ahmmed, B., Vesselinov, V.V., Rau, E., Mudunuru, M.K., and Karra, S.: Machine learning and a process model to better characterize hidden geothermal resources, Geothermal Rising Conference, Reno, NV, August 28-31, 2022.\n- Vesselinov, V.V., Ahmmed, B., Frash, L., and Mudunuru, M.K.: GeoThermalCloud: Machine Learning for Discovery, Exploration, and Development of Hidden Geothermal Resources, 47th Annual Stanford Geothermal Workshop, Stanford, CA, February 7-9, 2022. \n- Vesselinov, V.V., Frash, L., Ahmmed, B., and Mudunuru, M.K.: Machine Learning to Characterize the State of Stress and its Influence on Geothermal Production, Geothermal Rising Conference, San Diego, CA, October 3-6, 2021. \n- Ahmmed, B., Vesselinov, V.V.: Prospectivity Analyses of the Utah FORGE Site using Unsupervised Machine Learning, Geothermal Rising Conference, San Diego, CA, October 3-6, 2021. \n- Ahmmed, B., Vesselinov, V.V., Mudunuru, M.K., Middleton, R.S., and Karra, S.: Geochemical characteristics of low-, medium-, and hot-temperature geothermal resources of the Great Basin, USA, World Geothermal Congress, Reykjavik, Iceland, May 21-26, 2021. \n- Vesselinov, V.V., Ahmmed, B., Mudunuru, M.K., Karra, S., and Middleton, R.: Hidden geothermal signatures of the southwest New Mexico, World Geothermal Congress, Reykjavik, Iceland, May 21-26, 2021. \n- Mudunuru, M.K., Ahmmed, B., Vesselinov, V.V., Burns, E., Livingston, D.R., Karra, S., Middleton, R.S.: Machine learning for geothermal resource analysis and exploration, XXIII International Conference on Computational Methods in Water Resources (CMWR), Stanford, CA, December 13-15, 2020, no. 81. [Extended abstract]\n- Mudunuru, M.K., Ahmmed, B., Karra S., Vesselinov, V.V., Livingston D.R., and Middleton R.S.: Site-scale and regional-scale modeling for geothermal resource analysis and exploration, 45th Annual Stanford Geothermal Workshop, Stanford, CA, February 10-12, 2020. \n- Vesselinov, V.V., Mudunuru, M.K., Ahmmed, B., Karra, S. and Middleton, R.S.: Discovering signatures of hidden geothermal resources based on unsupervised learning, 45th Annual Stanford Geothermal Workshop, Stanford, CA, February 10-12, 2020. \n\n### Presentations\n\n- Siler, D., Pepin, J., Vesselinov, V.V., Ahmmed, B., and Mudunuru, M.K.: A tale of two unsupervised machine learning techniques: What PCA and NMFk tell us about the geologic controls of hydrothermal processes, American Geophysical Union, New Orleans, LA, December 13\xe2\x80\x9317, 2021.\n- Siler, D., Pepin, J., Vesselinov, V.V., Ahmmed, B., and Mudunuru, M.K.: A tale of two unsupervised machine learning techniques: What PCA and NMFk tell us about the geologic controls of hydrothermal processes, Geothermal Rising Conference, San Diego, CA, October 3-6, 2021.\n- Ahmmed, B., Vesselinov, V., 
and Mudunuru, M.K., Integration of Data, Numerical Inversion, and Unsupervised Machine Learning to Identify Hidden Geothermal Resources in Southwest New Mexico, American Geophysical Union Fall Conference, San Francisco, CA, December 1-17, 2020.\n- Ahmmed, B., Vesselinov, V.V., and Mudunuru, M.K., Machine learning to characterize regional geothermal reservoirs in the western USA, Abstract T185-358249, Geological Society of America, October 26-29, 2020.\n- Ahmmed, B., Lautze, N., Vesselinov, V.V., Dores, D., and Mudunuru, M.K., Unsupervised Machine Learning to Extract Dominant Geothermal Attributes in Hawaii Island Play Fairway Data, Geothermal Resources Council, Reno, NV, October 18-23, 2020.\n- Vesselinov, V.V., Ahmmed, B., and Mudunuru, M.K., Unsupervised Machine Learning to discover attributes that characterize low, moderate, and high-temperature geothermal resources, Geothermal Resources Council, Reno, NV, October 18-23, 2020.\n- Ahmmed, B., Vesselinov, V., and Mudunuru, M.K., Non-negative Matrix Factorization to Discover Dominant Attributes in Utah FORGE Data, Geothermal Resources Council, Reno, NV, October 18-23, 2020.\n- Ahmmed, B., Vesselinov, V.V., and Mudunuru, M.K., Unsupervised machine learning to discover dominant attributes of mineral precipitation due to CO2 sequestration, LA-UR-20-20989, 3rd Machine Learning in Solid Earth Science Conference, Santa Fe, NM, March 16-20, 2020.\n\n\n'",",https://doi.org/10.1186/s40517-021-00199-8.\n\n###","2021/01/12, 19:31:13",1016,GPL-3.0,27,131,"2023/07/22, 14:27:18",1,24,25,25,95,1,0.0,0.2520325203252033,"2023/07/23, 01:08:33",v0.2.0,0,3,false,,false,false,,,https://github.com/SmartTensors,http://EnviTrace.com,United States of America,,,https://avatars.githubusercontent.com/u/75088073?v=4,,, GOLEM,A numerical simulator for modelling coupled Thermo-Hydro-Mechanical processes in faulted geothermal reservoirs.,ajacquey,https://github.com/ajacquey/golem.git,github,"golem,geothermal-reservoirs,moose-framework",Geothermal Energy,"2023/05/30, 21:15:40",29,0,10,true,C++,,,"C++,Assembly,GLSL,Makefile,Shell,Python,C",,"b'
# GOLEM\n\nA MOOSE-based application\n\nA numerical simulator for modelling coupled THM processes in faulted geothermal reservoirs based on MOOSE.
\n\n## About\nGOLEM is a numerical simulator for modelling coupled Thermo-Hydro-Mechanical processes in faulted geothermal reservoirs.\nThe simulator is developed by [Antoine Jacquey](http://www.gfz-potsdam.de/en/staff/antoine-jacquey/) and [Mauro Cacace](http://www.gfz-potsdam.de/en/section/basin-modeling/staff/profil/mauro-cacace/) at the [GFZ German Research Centre for Geosciences](http://www.gfz-potsdam.de/en/home/) from the section [Basin Modelling](http://www.gfz-potsdam.de/en/section/basin-modeling/).\n\n\nGOLEM is a MOOSE-based application. Visit the [MOOSE framework](http://mooseframework.org) page for more information.\n\n## Licence\nGOLEM is distributed under the [GNU GENERAL PUBLIC LICENSE v3](https://github.com/ajacquey/Golem/blob/master/LICENSE).\n\n\n## Getting Started\n\n#### Minimum System Requirements\nThe following system requirements are from the MOOSE framework (see [Getting Started](http://mooseframework.org/getting-started/) for more information):\n* Compiler: C++11 Compliant GCC 4.8.4, Clang 3.4.0, Intel20130607\n* Python 2.7+\n* Memory: 16 GBs (debug builds)\n* Processor: 64-bit x86\n* Disk: 30 GBs\n* OS: UNIX compatible (OS X, most flavors of Linux)\n\n#### 1. Setting Up a MOOSE Installation\nTo install GOLEM, you first need to have a working and up-to-date installation of the MOOSE framework. \nTo do so, please visit the [Getting Started](http://mooseframework.org/getting-started/) page of the MOOSE framework and follow the instructions. If you encounter difficulties at this step, you can ask for help on the [MOOSE-users Google group](https://groups.google.com/forum/#!forum/moose-users).\n\n#### 2. Clone GOLEM\nGOLEM can be cloned directly from [GitHub](https://github.com/ajacquey/Golem) using [Git](https://git-scm.com/). In the following, we refer to the directory `projects` which you created during the MOOSE installation (by default `~/projects`): \n\n cd ~/projects\n git clone https://github.com/ajacquey/Golem.git\n cd ~/projects/golem\n git checkout master\n\n*Note: the ""master"" branch of GOLEM is the ""stable"" branch which is updated only if all tests are passing.*\n\n#### 3. Compile GOLEM\nYou can compile GOLEM by following these instructions:\n\n cd ~/projects/golem\n make -j4\n\n#### 4. Test GOLEM\nTo make sure that everything was installed properly, you can run the test suite of GOLEM:\n\n cd ~/projects/golem\n ./run_tests -j2\n\nIf all the tests passed, then your installation is working properly. You can now use the GOLEM simulator!\n\n## Usage\nTo run GOLEM from the command line with multiple processors, use the following command:\n\n mpiexec -n <nprocs> ~/projects/golem/golem-opt -i <input_file>\n\nWhere `<nprocs>` is the number of processors you want to use and `<input_file>` is the path to your input file (extension `.i`). \n\nInformation about the structure of the GOLEM input files can be found in the documentation (link to follow).\n\n## Cite\n\nIf you use GOLEM for your work, please cite:\n* This repository: \nAntoine B. Jacquey, & Mauro Cacace. (2017, September 29). GOLEM, a MOOSE-based application. Zenodo. http://doi.org/10.5281/zenodo.999401\n* The publication presenting GOLEM: \n Cacace, M. and Jacquey, A. B.: Flexible parallel implicit modelling of coupled thermal\xe2\x80\x93hydraulic\xe2\x80\x93mechanical processes in fractured rocks, Solid Earth, 8, 921-941, https://doi.org/10.5194/se-8-921-2017, 2017. 
\n\n\nPlease read the [CITATION](https://github.com/ajacquey/Golem/blob/master/CITATION) file for more information.\n\n## Publications using GOLEM\n\n* Freymark, J., Bott, J., Cacace, M., Ziegler, M., Scheck-Wenderoth, M.: Influence of the Main Border Faults on the 3D Hydraulic Field of the Central Upper Rhine Graben, *Geofluids*, 2019.\n* Bl\xc3\xb6cher, G., Cacace, M., Jacquey, A. B., Zang, A., Heidbach, O., Hofmann, H., Kluge, C., Zimmermann, G.: Evaluating Micro-Seismic Events Triggered by Reservoir Operations at the Geothermal Site of Gro\xc3\x9f Sch\xc3\xb6nebeck (Germany), *Rock Mechanics and Rock Engineering*, 2018.\n* Jacquey, A. B., Urpi, L., Cacace, M., Bl\xc3\xb6cher, G., Zimmermann, G., Scheck-Wenderoth, M.: Far field poroelastic response of geothermal reservoirs to hydraulic stimulation treatment: Theory and application at the Gro\xc3\x9f Sch\xc3\xb6nebeck geothermal research facility, *International Journal of Rock Mechanics and Mining Sciences*, 2018.\n* Peters, E., Bl\xc3\xb6cher, G., Salimzadeh, S., Egberts, P. J. P., Cacace, M.: Modelling of multi-lateral well geometries for geothermal applications, *Advances in Geosciences*, 2018.\n* Magri, F., Cacace, M., Fischer, T., Kolditz, O., Wang, W., Watanabe, N.: Thermal convection of viscous fluids in a faulted system: 3D benchmark for numerical codes, *Energy Procedia*, 2017.\n* Cacace, M. and Jacquey, A. B.: Flexible parallel implicit modelling of coupled Thermal-Hydraulic-Mechanical processes in fractured rocks, Solid Earth, 2017.\n* Jacquey, A. B.: Coupled Thermo-Hydro-Mechanical Processes in Geothermal Reservoirs: a Multiphysic and Multiscale Approach Linking Geology and 3D Numerical Modelling, PhD thesis, RWTH Aachen, 2017.\n* Jacquey, A. B., Cacace, M., Bl\xc3\xb6cher, G.: Modelling coupled fluid flow and heat transfer in fractured reservoirs: description of a 3D benchmark numerical case, Energy Procedia, 2017.\n* Jacquey, A. B., Cacace, M., Bl\xc3\xb6cher, G., Milsch, H., Deon, F., Scheck-Wenderoth, M.: Processes Responsible for Localized Deformation within Porous Rocks: Insights from Laboratory Experiments and Numerical Modelling, 6th Biot Conference on Poromechanics, Paris 2017.'",",https://zenodo.org/record/999401#.Wc5NqBdx1pg,http://doi.org/10.5281/zenodo.999401\n*,https://doi.org/10.5194/se-8-921-2017","2017/07/28, 08:11:19",2280,GPL-3.0,6,109,"2023/05/30, 21:15:41",2,19,25,3,147,0,0.2,0.03947368421052633,"2017/09/29, 13:31:52",v1.0,0,3,false,,false,false,,,,,,,,,,, biogas,Tools for biogas research in R: process biogas data and predict biogas production.,sashahafner,https://github.com/sashahafner/biogas.git,github,,Bioenergy,"2020/04/07, 20:30:16",13,0,2,false,R,,,R,,"b'# Overview\nThe biogas package is an R package for biogas research. It provides tools for processing biogas data, predicting biogas production, making conversions, and planning experiments. For example, the calcBgVol() function can be used for calculating biochemical methane potential from original measurements. 
See the vignettes and the links given below for more information.\n\n# Links etc.\n* Open-source paper describing the package: https://doi.org/10.1016/j.softx.2018.06.005\n* Researchgate project page: https://www.researchgate.net/project/Biogas-Software\n* Web application interface to the package (OBA): https://biotransformers.shinyapps.io/oba1/\n* Mailing list: sasha.hafner at eng.au.dk \n\n# Branch organization\n* dev: main development branch for work on package\n* master: merged with dev before submission to CRAN\n* version number branches (e.g., 10.0.2): archived versions that are available on CRAN\n'",",https://doi.org/10.1016/j.softx.2018.06.005\n*","2018/05/02, 06:37:21",2002,GPL-3.0,0,264,"2021/12/09, 09:26:04",34,1,17,0,685,0,0.0,0.3914728682170543,"2022/08/26, 18:48:49",v1.40,0,2,false,,false,false,,,,,,,,,,, biosteam,The Biorefinery Simulation and Techno-Economic Analysis Modules.,BioSTEAMDevelopmentGroup,https://github.com/BioSTEAMDevelopmentGroup/biosteam.git,github,"distillation,flash,biorefinery,bioprocess,fermentation,thermodynamics,pump,monte-carlo,heat-exchanger,techno-economic-analysis,chemical-engineering,biochemical-process,unit-operation,process-simulation,sensitivity-analysis,reactor,centrifuge,life-cycle-assessment",Bioenergy,"2023/10/24, 03:15:50",136,21,24,true,Python,BioSTEAM Development Group,BioSTEAMDevelopmentGroup,"Python,JavaScript,CSS",,"b""=========================================================================\nBioSTEAM: The Biorefinery Simulation and Techno-Economic Analysis Modules\n=========================================================================\n\n.. image:: http://img.shields.io/pypi/v/biosteam.svg?style=flat\n :target: https://pypi.python.org/pypi/biosteam\n :alt: Version_status\n.. image:: http://img.shields.io/badge/docs-latest-brightgreen.svg?style=flat\n :target: https://biosteam.readthedocs.io/en/latest/\n :alt: Documentation\n.. image:: http://img.shields.io/badge/license-UIUC-blue.svg?style=flat\n :target: https://github.com/BioSTEAMDevelopmentGroup/biosteam/blob/master/LICENSE.txt\n :alt: license\n.. image:: https://img.shields.io/pypi/pyversions/biosteam.svg\n :target: https://pypi.python.org/pypi/biosteam\n :alt: Supported_versions\n.. image:: https://zenodo.org/badge/164639830.svg\n :target: https://zenodo.org/badge/latestdoi/164639830\n.. image:: https://coveralls.io/repos/github/BioSTEAMDevelopmentGroup/biosteam/badge.svg?branch=master\n :target: https://coveralls.io/github/BioSTEAMDevelopmentGroup/biosteam?branch=master\n.. image:: https://badges.gitter.im/BioSTEAM-users/BioSTEAM.svg\n :alt: Join the chat at https://gitter.im/BioSTEAM-users/community\n :target: https://gitter.im/BioSTEAM-users/community\n\n**Read in:** `Espa\xc3\xb1ol `_\n\n.. contents::\n\nWorkshops\n---------\nJoin us on Friday, Jan 20, 9:15-10:15am CST, for a BioSTEAM workshop! \nEmail biosteamdevelopmentgroup@gmail.com for details.\n\nWhat is BioSTEAM?\n-----------------\n\nBioSTEAM is a fast and flexible package for the design, simulation, \ntechno-economic analysis, and life cycle assessment of biorefineries under uncertainty [1]_. \nBioSTEAM is built to streamline and automate early-stage technology evaluations \nand to enable rigorous sensitivity and uncertainty analyses. Complete \nbiorefinery configurations are available at the `Bioindustrial-Park \n`_ GitHub repository, \nBioSTEAM's premier repository for biorefinery models and results. 
The long-term \ngrowth and maintenance of BioSTEAM is supported through both community-led \ndevelopment and the research institutions invested in BioSTEAM, including the \n`Center for Advanced Bioenergy and Bioproducts Innovations (CABBI) `_. \nThrough its open-source and community-led platform, BioSTEAM aims to foster \ncommunication and transparency within the biorefinery research community for an \nintegrated effort to expedite the evaluation of candidate biofuels and \nbioproducts.\n\nData on chemicals and algorithms to estimate thermodynamic properties are \nimported from `chemicals `_\nand `thermo `_,\ncommunity-driven open-source libraries developed by Caleb Bell. BioSTEAM's \npremier thermodynamic engine, `ThermoSTEAM `_, \nbuilds upon these libraries to facilitate the creation of thermodynamic property packages.\n\nInstallation\n------------\n\nGet the latest version of BioSTEAM from `PyPI `__. If you have an installation of Python with pip, simply install it with:\n\n $ pip install biosteam\n\nTo get the git version, run:\n\n $ git clone git://github.com/BioSTEAMDevelopmentGroup/biosteam\n\nFor help on common installation issues, please visit the `documentation `__.\n
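Example
-------

A minimal flash-drum sketch (hedged: the chemicals, flow rates, and flash
specification below are illustrative values patterned on the introductory
documentation, not a recommended design)::

    import biosteam as bst

    # register a thermodynamic property package for the chemicals in play
    bst.settings.set_thermo(['Water', 'Ethanol'])

    # create a feed stream (flows in kmol/hr) and flash it
    feed = bst.Stream('feed', Water=50, Ethanol=10, T=340)
    F1 = bst.units.Flash('F1', ins=feed, outs=('vapor', 'liquid'),
                         V=0.5, P=101325)
    F1.simulate()
    F1.show()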
Documentation\n-------------\n\nBioSTEAM's documentation is available on the web:\n\n http://biosteam.readthedocs.io/\n\nBug reports\n-----------\n\nTo report bugs, please use BioSTEAM's Bug Tracker at:\n\n https://github.com/BioSTEAMDevelopmentGroup/biosteam\n\nContributing\n------------\nFor guidelines on how to contribute, visit:\n\n https://biosteam.readthedocs.io/en/latest/contributing/index.html\n\n\nLicense information\n-------------------\n\nSee ``LICENSE.txt`` for information on the terms & conditions for usage\nof this software, and a DISCLAIMER OF ALL WARRANTIES.\n\nAlthough not required by the BioSTEAM license, if it is convenient for you,\nplease cite BioSTEAM if used in your work. Please also consider contributing\nany changes you make back, benefiting the community.\n\n\nAbout the authors\n-----------------\n\nBioSTEAM was created and developed by `Yoel Cort\xc3\xa9s-Pe\xc3\xb1a `__ as part of the `Guest Group `__ and the `Center for Advanced Bioenergy and Bioproducts Innovation (CABBI) `__ at the `University of Illinois at Urbana-Champaign (UIUC) `__. \n\nReferences\n----------\n.. [1] `Cort\xc3\xa9s-Pe\xc3\xb1a et al. BioSTEAM: A Fast and Flexible Platform for the Design, Simulation, and Techno-Economic Analysis of Biorefineries under Uncertainty. ACS Sustainable Chem. Eng. 2020. `__.\n\n\n""",",https://zenodo.org/badge/latestdoi/164639830\n,https://doi.org/10.1021/acssuschemeng.9b07040","2019/01/08, 12:02:16",1751,CUSTOM,538,2204,"2023/10/24, 03:15:50",9,61,159,28,1,2,0.7,0.11643482740855227,"2020/02/06, 17:55:56",2.11.7,2,8,false,,false,false,"yalinli2/binder-test,QSD-Group/QSDsan-env,QSD-Group/QSDedu,safdarabbas123/biosteam,narest-qa/repo41,BioSTEAMDevelopmentGroup/How2STEAM,sarangbhagwat/autosynthesis,KimmieDC/qsdsan,BioSTEAMDevelopmentGroup/Bioindustrial-Park,QSD-Group/QSDsan-workshop,emilypl2/BioSTEAMconnectors,QSD-Group/QSDsan-webapp,helloftroy/Omics_TD1.0,lumedee007/CUWP-codes,markjet7/mrf_economics,markjet7/plastic_pyrolysis_tea,markjet7/isu-cuwp,QSD-Group/QSDsan,BioSTEAMDevelopmentGroup/biosteam,scyjth/biosteam_lca,BioSTEAMDevelopmentGroup/thermosteam",,https://github.com/BioSTEAMDevelopmentGroup,https://biosteam.readthedocs.io/en/latest/index.html,,,,https://avatars.githubusercontent.com/u/61621970?v=4,,, Multiscale_Ulva,"A multi-reactor, algae farm, simulation base function that will be solved in time.",alexliberzonlab,https://github.com/alexliberzonlab/Multiscale_Ulva.git,github,,Bioenergy,"2021/07/07, 17:54:49",4,0,2,false,Jupyter Notebook,Alex Liberzon Lab,alexliberzonlab,"Jupyter Notebook,Python",,"b""[![DOI](https://zenodo.org/badge/DOI/10.5281/zenodo.5017069.svg)](https://doi.org/10.5281/zenodo.5017069)\n\n[![Binder](https://mybinder.org/badge_logo.svg)](https://mybinder.org/v2/gh/alexliberzonlab/Multiscale_Ulva/master?filepath=Notebooks)\n\n\n# Multiscale_Ulva\nCode and data of multiscale Ulva model\n\n## Introduction\n\n This is a multi-reactor, algae-farm simulation base function that is solved in time. The multiple reactors are built along the streamwise (x) direction and numbered\n in the order of appearance, i.e. x=0 is the first reactor that meets the original nutrient stream. \n Every following reactor will receive a different nutrient concentration, Nenv(x), due to the uptake \n by the previous reactor and dilution, in case of a diluting environment.\n\n The main construction here is that each reactor is described by 4 ODEs: \n 1. dNenv/dt (Nenv = Nsea)\n 2. dNext/dt\n 3. dNint/dt, and \n 4. dm/dt\n\n We develop a list of such reactors using a loop, and within the loop we create the coupling by propagating Nenv[i] \n from the first reactor to the following ones.\n
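 The following is a minimal, illustrative Python sketch of this looped-coupling pattern using scipy. The rate expressions and parameter values are simplified placeholders for readability, not the calibrated model from 'myfunctions_multi_scale.py'; only the structure (a loop that builds each reactor and passes Nenv downstream) follows the description above.

```python
# Illustrative sketch only: the rate terms below are simplified placeholders,
# NOT the calibrated model from 'myfunctions_multi_scale.py'. Only the
# coupling pattern (a loop propagating Nenv downstream) follows the text.
import numpy as np
from scipy.integrate import solve_ivp

N_REACTORS = 3
VMAX, KS = 10.0, 5.0      # hypothetical uptake parameters
MU = 0.01                 # hypothetical specific growth rate [1/h]
QP, QSEA = 1.0, 10.0      # hypothetical pump and stream flow rates [l/h]

def rhs(t, y):
    y = y.reshape(N_REACTORS, 4)   # columns: Nenv, Next, Nint, m
    dy = np.zeros_like(y)
    nsea_upstream = 20.0           # hypothetical upstream nutrient level
    for i in range(N_REACTORS):    # the coupling loop described above
        nenv, next_, nint, m = y[i]
        uptake = VMAX * next_ / (KS + next_) * m   # Michaelis-Menten uptake
        growth = MU * m * min(nint, 1.0)           # crude f(Nint) limitation
        dy[i, 0] = QSEA * (nsea_upstream - nenv) - QP * (nenv - next_)
        dy[i, 1] = QP * (nenv - next_) - uptake
        dy[i, 2] = uptake / max(m, 1e-9) - growth  # placeholder dynamics
        dy[i, 3] = growth
        nsea_upstream = nenv       # reactor i+1 sees reactor i's Nenv
    return dy.ravel()

y0 = np.tile([20.0, 5.0, 1.0, 0.1], N_REACTORS)
sol = solve_ivp(rhs, (0.0, 240.0), y0, method='LSODA')
print(sol.y[0::4, -1])  # final Nenv in each reactor
```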
## List of variables and parameters\n\n State variables\n Nsea (Nenv): nutrient content in the stream of water, [umol N/l]\n Next : nutrient content in the reactor [umol N/l]\n Nint : nutrient content in the algae biomass [% g N/g DW]\n m : biomass density in reactor [g DW/l]\n \n Parameters (given and fitted)*:\n miu: Maximum specific growth rate, [1/h]\n losses20: Specific rate of biomass losses, [1/h]\n teta: An empirical factor of biomass losses, [-] \n Nintcrit: Threshold Nint level below which the growth rate slows down (f(N_int)<1), [% g N/g DW]\n Nintmax: Maximum internal nitrogen concentration, [% g N/g DW]\n Nintmin: Minimum internal nitrogen concentration, [% g N/g DW]\n Vmax: Maximum N uptake rate, [\xce\xbcmol N/g DW/h]\n Ks: Nitrogen half saturation uptake constant, [\xce\xbcmol N/l] \n S: Salinity level in water, [PSU]\n T: Water temperature [C]\n I0: Incident photon irradiance at water surface, [\xce\xbcmol photons/m2/s]\n Iaverage: Average photon irradiance in reactor, [\xce\xbcmol photons/m2/s]\n KI: Light half saturation constant, [\xce\xbcmol photons/m2/s] \n K0: Water light extinction coefficient, [1/m]\n Ka: Ulva light extinction coefficient, [m2/gDW] \n Tmin: Minimal temperature allowing Ulva growth, [\xe2\x84\x83]\n Topt: Optimal temperature for Ulva growth, [\xe2\x84\x83]\n Tmax: Maximal temperature allowing Ulva growth, [\xe2\x84\x83]\n Smin: Minimal salinity allowing Ulva growth, [PSU]\n Sopt: Optimal salinity for Ulva growth, [PSU]\n Smax: Maximal salinity allowing Ulva growth, [PSU]\n Qsea: Stream flow through an area equivalent to the reactor narrow-side cross-section, [l/h]\n Qp: flow rate of the airlift pump [l/h], transfers seawater to the reactor to increase Next; due to conservation, \n the same flow rate overflows from the reactor back to the Qsea, reducing the concentration in the Qsea stream and thus affecting the following reactors\n Nsea_upstream : nutrient content upstream of the reactor, [umol N/l]\n d: Dilution ratio between every two reactors, [%]\n Vcage: Reactor volume, [m3]\n Z: Maximum water depth in the reactor, [m]\n \n *additional arguments are imported into args\n \n## Functions and notebooks\n\n All functions are written in 'myfunctions_multi_scale.py'\n The basic function is that performs the numeric solution in time (t) and space (x). This function is used in the\n notebooks: 'Sensitivity_analysis', 'Year-round_productivities' and others.\n The functions and were used for the calibration process using two different types of irradiance data.\n \n Calibration was performed and evaluated in the following notebooks:\n 1. 'Supplementary Figure 6-9'\n 2. 'Supplementary Figure 10'\n 3. 'Supplementary Figure 11'\n 4. 'RMSRE 1 - test results'\n 5. 'RMSRE 2 - test results'\n \n Some functions were built especially for specific figures:\n 1. looks at all 4 state variables and plots a line in a different color for every X's cage (i.e. every 10th cage).\n This function is used in the 'fig6' notebook\n 2. adds cages until the level of Nsea decreases to a set value. When it finds this cage, it plots the lines for dynamics of the 4 state variables in the first and the last cages. This function is used in the 'fig4' notebook.\n 3. plots for the three relevant seasons (Autumn, Winter and Spring) to find farm size for each season. 
Last-cage dynamics for each season, following a farm-size calculated by winter dynamics, are plotted in the 'fig4' notebook too (box 13).\n 4. The dynamics of the last cage according to the farm-size calculated by winter dynamics are also plotted in the 'fig4' notebook (box 20). The difference compared to previous plots is that here, cultivation cycles are adjusted in each season to achieve a constant Nint content rather than the Nsea threshold.\n 5. plots the dynamics of the first and last cages in a farm with specific Qp, enabling examination of the effect of different Qp values on spatial distribution in the farm.\n\n## Data\n \n The data is distributed across six Excel files:\n 1. input\n 'Parameters_multi-scale' - has all parameters/arguments/initial conditions of the model and simulations\n 'Parameters_Reading' - has all parameters used for calibration\n 'T_multi-scale' - has water temperature data (before interpolation) for simulations\n 2. ims_data_2017_PAR, ims_data_2017_umol_photons, ims_T_data_2017 and HOBO - have temperature and light intensity data for calibration\n 3. ims_data_2014_umol_photons - has light intensity data for simulations\n\n\n## How to install \n\n We recommend using the Anaconda Python Distribution and `conda` environments. After installing Anaconda or Miniconda, please use the following command to reproduce the environment:\n\n conda env create --file multiscale_ulva.yml\n conda activate multiscale_ulva\n\n It is also possible to use `pip`: \n\n\n pip install -r requirement.txt\n\n\n\n## How to cite this work\n\n Please cite our publication: \n Zollmann, M., Rubinsky, B., Liberzon, A., & Golberg, A. (2021). Multi-Scale Modeling of Intensive Macroalgae Cultivation and Marine Nitrogen Sequestration. Communications Biology, http://doi.org/10.1038/s42003-021-02371-z\n \n Please cite our code using DOI: http://doi.org/10.5281/zenodo.5017069\n\n""",",https://doi.org/10.5281/zenodo.5017069,http://doi.org/10.1038/s42003-021-02371-z\n,http://doi.org/10.5281/zenodo.5017069\n\n","2020/10/01, 16:56:19",1119,CUSTOM,0,28,"2020/10/01, 17:41:08",0,1,1,0,1119,0,0.0,0.11538461538461542,"2021/06/23, 09:08:47",1.0.3,0,2,false,,false,false,,,https://github.com/alexliberzonlab,https://turbulencelab.sites.tau.ac.il,"Tel Aviv University, Tel Aviv, Israel",,,https://avatars.githubusercontent.com/u/2659993?v=4,,, BETYdb,Web-interface to the Biofuel Ecophysiological Traits and Yields Database.,PecanProject,https://github.com/PecanProject/bety.git,github,"pecan,ruby,trait,database,postgis,agriculture,plants,phenotyping,crops,ecosystem-models",Bioenergy,"2023/03/29, 23:56:56",15,0,0,true,Ruby,PEcAn Project,PecanProject,"Ruby,HTML,JavaScript,XSLT,PLpgSQL,CSS,Shell,Python,R,Dockerfile,Batchfile",https://www.betydb.org,"b'# BETYdb\n\n[![DOI](https://zenodo.org/badge/4469/PecanProject/bety.svg)](https://zenodo.org/badge/latestdoi/4469/PecanProject/bety)\n\n[![Build Status](https://github.com/PecanProject/bety/workflows/CI/badge.svg)](https://github.com/PecanProject/bety/actions?query=workflow%3ACI)\n\n[![Slack](https://img.shields.io/badge/slack-login-brightgreen.svg)](https://pecanproject.slack.com/) \n[![Slack](https://img.shields.io/badge/slack-join_chat-brightgreen.svg)](https://publicslack.com/slacks/pecanproject/invites/new) \n\nThis is the source code for the [Biofuel Ecophysiological Traits and Yields database (BETYdb)](http://www.betydb.org).\n\nThe website is primarily written in Ruby-on-Rails, and has a PostgreSQL backend.\nBETYdb provides an interface for contributing and accessing data, and 
is the informatics backend for the [Predictive Ecosystem Analyzer (PEcAn)](http://www.pecanproject.org).\n\n## Running BETY using Docker\n\nTo get started with BETY, you can use the included docker-compose.yml file. This will start the database (postgresql with postgis version 9.5) as well as the BETY container. If this is the first time you start it, you will need to initialize the database; this can be done using the following commands:\n\n```\ndocker-compose -p bety up -d postgres\ndocker run --rm --network bety_bety pecan/db\n```\n\nIf you want to change the id of the database, you can use:\n\n```\ndocker-compose run -e LOCAL_SERVER=77 bety fix\n```\n\nTo add initial users you can use the following commands (this will add the guestuser as well as the carya demo user):\n\n```\ndocker-compose run bety user \'guestuser\' \'guestuser\' \'Guest User\' \'betydb@gmail.com\' 4 4\ndocker-compose run bety user \'carya\' \'illinois\' \'Demo User\' \'betydb@gmail.com\' 1 1\n```\n\nOnce bety finishes initializing the database, or to restart BETY, you can bring up all the containers using:\n\n```\ndocker-compose -p bety up -d\n```\n\nTo change the path BETY runs under, set the environment variable RAILS_RELATIVE_URL_ROOT; for example, to run BETY under /bety you can use the following command. This will precompile any of the static assets and run BETY.\n\n```\ndocker run -e RAILS_RELATIVE_URL_ROOT=""/bety"" pecan/bety\n```\n\n## Documentation\n\n* Technical Documentation: https://pecanproject.github.io/bety-documentation/technical/\n* Data Entry: https://pecanproject.github.io/bety-documentation/dataentry/\n* Data Access: https://pecan.gitbook.io/betydb-data-access/\n \n'",",https://zenodo.org/badge/latestdoi/4469/PecanProject/bety","2012/11/25, 21:09:46",3985,BSD-3-Clause,24,2688,"2023/03/29, 23:57:01",182,284,571,11,209,16,0.4,0.37714021286441457,"2021/10/11, 02:28:06",betydb_5.4.1,0,19,false,,true,false,,,https://github.com/PecanProject,http://pecanproject.org,,,,https://avatars.githubusercontent.com/u/2879854?v=4,,, portalcasting,"Provides a model development, deployment, and evaluation system for forecasting how ecological systems change through time, with a focus on a widely used long-term study of mammal population and community dynamics.",weecology,https://github.com/weecology/portalcasting.git,github,"ecology,forecasting,portal,r,r-package,r-stats,reproducible-research,shiny,workflow",Bioenergy,"2023/05/23, 01:16:33",8,0,2,true,R,Weecology,weecology,"R,TeX,Dockerfile",https://weecology.github.io/portalcasting,"b'# Supporting [automated forecasting](https://github.com/weecology/portalPredictions) of [rodent populations](https://portal.weecology.org/)\n\n \n\n[![R-CMD-check](https://github.com/weecology/portalcasting/actions/workflows/r-cmd-check.yaml/badge.svg)](https://github.com/weecology/portalcasting/actions/workflows/r-cmd-check.yaml)\n[![Docker](https://github.com/weecology/portalcasting/actions/workflows/docker-publish.yml/badge.svg)](https://github.com/weecology/portalcasting/actions/workflows/docker-publish.yml)\n[![Codecov test coverage](https://img.shields.io/codecov/c/github/weecology/portalcasting/main.svg)](https://app.codecov.io/github/weecology/portalcasting/branch/main)\n[![Lifecycle:maturing](https://img.shields.io/badge/lifecycle-maturing-blue.svg)](https://lifecycle.r-lib.org/articles/stages.html)\n[![Project Status: Active \xe2\x80\x93 The project has reached a stable, usable state and is being actively 
developed.](https://www.repostatus.org/badges/latest/active.svg)](https://www.repostatus.org/#active)\n[![License](http://img.shields.io/badge/license-MIT-blue.svg)](https://raw.githubusercontent.com/weecology/portalPredictions/master/LICENSE)\n[![DOI](https://zenodo.org/badge/DOI/10.5281/zenodo.3332973.svg)](https://doi.org/10.5281/zenodo.3332973)\n[![NSF-1929730](https://img.shields.io/badge/NSF-1929730-blue.svg)](https://www.nsf.gov/awardsearch/showAward?AWD_ID=1929730)\n[![JOSS](https://joss.theoj.org/papers/10.21105/joss.03220/status.svg)](https://doi.org/10.21105/joss.03220)\n\n\n## Overview\n\nThe `portalcasting` package provides a model development, deployment, and evaluation system for forecasting how ecological systems change through time, with a focus on a widely used long-term study of mammal population and community dynamics, the [Portal Project](https://portal.weecology.org/).\nIt combines the [Portal Data Repository](https://github.com/weecology/PortalData) and [portalr data management package](https://github.com/weecology/portalr) in a pipeline to automate weekly forecasting.\nForecasts are archived on [GitHub](https://github.com/weecology/portalPredictions) and [Zenodo](https://doi.org/10.5281/zenodo.833438).\nThe [Portal Forecasting website](https://portal.naturecast.org/) provides a dynamic view of the results.\n\n`portalcasting`\'s functions are also portable, allowing users to set up a fully-functional replica repository on a local or remote machine.\nThis facilitates development and testing of new models\nvia a [sandbox](https://en.wikipedia.org/wiki/Sandbox_(software_development)) approach. \n\n## Status: Deployed, Active Development\n\nThe `portalcasting` package is deployed for use within the [Portal Predictions repository](https://github.com/weecology/portalPredictions), providing the underlying R code to populate the directory with up-to-date data, analyze the data, produce new forecasts, generate new output figures, and render a new version of the [website](https://portal.naturecast.org/). \nAll of the code underlying the forecasting functionality has been migrated over from the [predictions repository](https://github.com/weecology/portalPredictions), which contains the code executed by the continuous integration. \nHaving relocated the code here, the `portalcasting` package is the location for active development of the model set and additional functionality. \n\nWe leverage a [software container](https://en.wikipedia.org/wiki/Operating-system-level_virtualization) to enable reproducibility of the [predictions repository](https://github.com/weecology/portalPredictions). \nPresently, we use a [Docker](https://hub.docker.com/r/weecology/portalcasting) image of the software environment to create a container for running the code. \nThe image is automatically rebuilt when there is a new `portalcasting` release, tagged with both the `latest` and version-specific (`vX.X.X`) tags, and pushed to [DockerHub](https://hub.docker.com/r/weecology/portalcasting). \n\nBecause the `latest` image is updated with releases, the current main branch code in `portalcasting` is typically, but not necessarily always, being executed within the [predictions repository](https://github.com/weecology/portalPredictions). 
\n\nThe API is moderately well defined at this point, but is still evolving.\n\n## Installation\n\nYou can install the package from GitHub:\n\n```r\ninstall.packages(""remotes"")\nremotes::install_github(""weecology/portalcasting"")\n```\n\n## Production environment\n\nIf you wish to spin up a local container from the `latest` `portalcasting` image (to ensure that you are using a copy of the current production environment for implementation of the `portalcasting` pipeline), you can run\n\n```\nsudo docker pull weecology/portalcasting\n```\nfrom a shell on a computer with [Docker](https://www.docker.com/) installed. \n\n\n## Usage\n\nGet started with the [""how to set up a Portal Predictions directory"" vignette](https://weecology.github.io/portalcasting/articles/getting_started.html).\n\nIf you are interested in adding a model to the preloaded [set of models](https://weecology.github.io/portalcasting/articles/current_models.html), see the [""adding a model and data"" vignette](https://weecology.github.io/portalcasting/articles/adding_model_and_data.html). That document also details how to expand the datasets available to new and existing models.\n\n\n## Developer and Contributor notes\n\nWe welcome any contributions in the form of models or pipeline changes. \n\nFor the workflow, please check out the [contribution](.github/CONTRIBUTING.md) and [code of conduct](.github/CODE_OF_CONDUCT.md) pages. \n\n\n## Acknowledgements\n\nThis project is developed in active collaboration with [DAPPER Stats](https://www.dapperstats.com/).\n\nThe motivating study\xe2\x80\x94the Portal Project\xe2\x80\x94has been funded nearly continuously since 1977 by the [National Science Foundation](https://www.nsf.gov/), most recently by [DEB-1622425](https://www.nsf.gov/awardsearch/showAward?AWD_ID=1622425) to S. K. M. Ernest. \nMuch of the computational work was supported by the [Gordon and Betty Moore Foundation\xe2\x80\x99s Data-Driven Discovery Initiative](https://www.moore.org/initiative-strategy-detail?initiativeId=data-driven-discovery) through [Grant GBMF4563](https://www.moore.org/grant-detail?grantId=GBMF4563) to E. P. White. \n\nWe thank Heather Bradley for logistical support, John Abatzoglou for assistance with climate forecasts, and James Brown for establishing the Portal Project. 
\n\n\n'",",https://doi.org/10.5281/zenodo.3332973,https://doi.org/10.21105/joss.03220,https://doi.org/10.5281/zenodo.833438","2018/04/11, 19:34:03",2023,CUSTOM,579,1754,"2023/06/01, 00:15:56",28,230,337,100,146,5,0.0,0.04931862426995459,"2023/05/23, 01:17:17",v0.60.1,0,8,false,,true,true,,,https://github.com/weecology,http://weecology.org,,,,https://avatars.githubusercontent.com/u/1156696?v=4,,, bslib,Database with battery parameters based on PerMod as well as functions in order to simulate battery storages.,FZJ-IEK3-VSA,https://github.com/FZJ-IEK3-VSA/bslib.git,github,"energy,simulation,battery",Battery,"2023/01/18, 07:42:06",8,11,2,true,Python,FZJ-IEK3,FZJ-IEK3-VSA,Python,,"b'[![DOI](https://zenodo.org/badge/DOI/10.5281/zenodo.6514527.svg)](https://doi.org/10.5281/zenodo.6514527)\n\n\n\n# bslib - battery storage library\n\nRepository with code to\n \n- build a **database** with relevant data from PerMod database (HTW Berlin) and ""Stromspeicher-Inspektion""\n- **simulate** ac- and dc-coupled battery storages with regard to electrical power (ac and dc) and state-of-charge as timeseries.\n\nFor the simulation, it is possible to calculate outputs of a **specific manufacturer + model** or alternatively for one of **2 different generic battery storage types**. \n\n**For reference purposes:**\n- DOI: https://doi.org/10.5281/zenodo.6514527\n- Citation: Kai R\xc3\xb6sken, Tjarko Tjaden, & Hauke Hoops. (2022). FZJ-IEK3-VSA/bslib: v0.7. Zenodo. https://doi.org/10.5281/zenodo.6514527\n\n## Documentation\n\nThe documentation is still under development.\n\n## Usage\n\nSimply install via\n\n- `pip install bslib`\n\nor clone the repository and create an environment via:\n\n- `git clone https://github.com/FZJ-IEK3-VSA/bslib.git`\n- `conda env create --name bslib --file requirements.txt`\n\nAfterwards you\'re able to create some code with `import bslib` and use the included functions `load_database`, `get_parameters` and `simulate`.\n
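A hedged usage sketch follows; only the three function names above come from this README, while the return types and argument names are assumptions for illustration (the documentation is still under development), so check the sources for the actual signatures:

```python
# Hedged sketch: load_database, get_parameters and simulate are the documented
# entry points; everything else here is an assumption for illustration.
import bslib

# load the parameter database shipped with the package
database = bslib.load_database()
print(database)

# hypothetical follow-up calls; argument names are placeholders only:
# parameters = bslib.get_parameters(...)  # select one storage system
# results = bslib.simulate(...)           # AC/DC power and state of charge
```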
\n## Battery models and Group IDs\nThe bslib_database.csv contains the following number of battery storages, sorted by Group ID:\n\n| [Group ID]: Count | Description |\n| :--- | :--- |\n| [S_ac]: 2 | AC-coupled |\n| [S_dc]: 3 | DC-coupled |\n| [INV]: 2 | PV Inverter |\n\n## Database\n\nAll resulting database CSV files are under [![License: CC BY 4.0](https://img.shields.io/badge/License-CC%20BY%204.0-lightgrey.svg)](https://creativecommons.org/licenses/by/4.0/).\n\nThe following columns are available for every battery storage of this library\n\n| Column | Description | Comment |\n| :--- | :--- | :--- |\n| .. | .. | .. |\n\n\n## Input-Data and further development\n\nIf you find errors or are interested in developing bslib, please create an ISSUE and/or FORK this repository and create a PULL REQUEST.\n\n## License\nMIT License\n\nCopyright (c) 2022\n\nYou should have received a copy of the MIT License along with this program.\nIf not, see https://opensource.org/licenses/MIT\n\n## About Us\n
\nWe are the Institute of Energy and Climate Research - Techno-economic Systems Analysis (IEK-3) belonging to the Forschungszentrum J\xc3\xbclich. Our interdisciplinary department\'s research focuses on energy-related process and systems analyses. Data searches and system simulations are used to determine energy and mass balances, as well as to evaluate performance, emissions and costs of energy systems. The results are used for performing comparative assessment studies between the various systems. Our current priorities include the development of energy strategies, in accordance with the German Federal Government\xe2\x80\x99s greenhouse gas reduction targets, by designing new infrastructures for sustainable and secure energy supply chains and by conducting cost analysis studies for integrating new technologies into future energy market frameworks.\n'",",https://doi.org/10.5281/zenodo.6514527,https://doi.org/10.5281/zenodo.6514527\n-,https://doi.org/10.5281/zenodo.6514527\n\n##","2021/09/22, 13:32:40",763,MIT,8,129,"2023/01/18, 07:31:56",0,4,6,2,280,0,0.0,0.24193548387096775,"2023/01/18, 07:46:55",v0.7,0,4,false,,false,false,"madelinelewis230/BenefitCostBarriersFishEscapement,spsaswat/point_spec_dev,mengxu98/ImmuCycReg-framework,narest-qa/repo50,magpie15/data-science-portfolio,Macedonia-Tax/VAT-GAP-model,ttjaden/vdi4657.app,RE-Lab-Projects/prosumer-wp-trj,FZJ-IEK3-VSA/HiSim,AMMAR-62/Stock-Forecasting-in-R,cmarquardt/homebrew-setup",,https://github.com/FZJ-IEK3-VSA,https://www.fz-juelich.de/iek/iek-3/EN/Home/home_node.html,Forschungszentrum Jülich,,,https://avatars.githubusercontent.com/u/28654423?v=4,,, foxBMS,"A free, open and flexible development environment to design battery management systems. It is the first modular open source BMS development platform.",foxBMS,https://github.com/foxBMS/foxBMS.git,github,,Battery,"2021/09/08, 10:06:57",2,0,1,false,,,,,,"b'# The foxBMS Project\n\n![foxBMS](./foxbms.png)\n\nfoxBMS is a free, open and flexible development environment to design battery\nmanagement systems. 
It is the first modular open source BMS development\nplatform.\n\n## foxBMS 2\n\n- [latest version foxBMS 2 documentation](https://iisb-foxbms.iisb.fraunhofer.de/foxbms/gen2/docs/html/latest/)\n- [Previous versions of the foxBMS 2 documentation](https://iisb-foxbms.iisb.fraunhofer.de/foxbms/gen2/docs/html/)\n- Links to specific versions are permalinks (e.g.,\n [foxBMS 2 version 1.0.0](https://iisb-foxbms.iisb.fraunhofer.de/foxbms/gen2/docs/html/v1.0.0/))\n- Sources are available at\n [https://www.github.com/foxBMS/foxbms-2](https://www.github.com/foxBMS/foxbms-2)\n\n## foxBMS 1\n\n- [latest version foxBMS 1 documentation](https://iisb-foxbms.iisb.fraunhofer.de/foxbms/gen1/docs/html/latest/)\n- [Previous versions of the foxBMS 1 documentation](https://iisb-foxbms.iisb.fraunhofer.de/foxbms/gen1/docs/html/)\n- Links to specific versions are permalinks (e.g.,\n [foxBMS 1 version 1.6.7](https://iisb-foxbms.iisb.fraunhofer.de/foxbms/gen1/docs/html/v1.6.7/))\n- Sources are available at\n [https://www.github.com/foxBMS/foxbms-1](https://www.github.com/foxBMS/foxbms-1)\n\nFor more information or if you want to give us some feedback, please [contact us](https://foxbms.org/support/#heading_contact_us).\n\nThe Fraunhofer IISB foxBMS Team\n\n[foxbms.org](https://foxbms.org/)\n'",,"2021/04/01, 17:54:48",937,MIT,0,3,"2023/01/18, 07:31:56",0,0,0,0,280,0,0,0.0,,,0,1,false,,false,false,,,,,,,,,,, impedance.py,A Python package for working with electro-chemical impedance data.,ECSHackWeek,https://github.com/ECSHackWeek/impedance.py.git,github,"impedance,lithium-ion,battery,electrochemistry,fuel-cell,corrosion",Battery,"2023/07/10, 02:10:34",171,12,51,true,Python,ECS Hack Week ,ECSHackWeek,"Python,TeX",,"b'[![DOI](https://zenodo.org/badge/136110609.svg)](https://zenodo.org/badge/latestdoi/136110609) ![GitHub release](https://img.shields.io/github/release/ECSHackWeek/impedance.py)\n\n![PyPI - Downloads](https://img.shields.io/pypi/dm/impedance?style=flat-square) [![All Contributors](https://img.shields.io/badge/all_contributors-11-orange.svg?style=flat-square)](#contributors)\n\n[![Build Status](https://travis-ci.org/ECSHackWeek/impedance.py.svg?branch=master&kill_cache=1)](https://travis-ci.org/ECSHackWeek/impedance.py) [![Documentation Status](https://readthedocs.org/projects/impedancepy/badge/?version=latest&kill_cache=1)](https://impedancepy.readthedocs.io/en/latest/?badge=latest) [![Coverage Status](https://coveralls.io/repos/github/ECSHackWeek/impedance.py/badge.svg?branch=master&kill_cache=1)](https://coveralls.io/github/ECSHackWeek/impedance.py?branch=master)\n\nimpedance.py\n------------\n\n`impedance.py` is a Python package for making electrochemical impedance spectroscopy (EIS) analysis reproducible and easy-to-use.\n\nAiming to create a consistent, [scikit-learn-like API](https://arxiv.org/abs/1309.0238) for impedance analysis, impedance.py contains modules for data preprocessing, validation, model fitting, and visualization.\n\nFor a little more in-depth discussion of the package background and capabilities, check out our [Journal of Open Source Software paper](https://joss.theoj.org/papers/10.21105/joss.02349).\n\nIf you have a feature request or find a bug, please [file an issue](https://github.com/ECSHackWeek/impedance.py/issues) or, better yet, make the code improvements and [submit a pull request](https://help.github.com/articles/creating-a-pull-request-from-a-fork/)! 
The goal is to build an open-source tool that the entire impedance community can improve and use!\n\n### Installation\n\nThe easiest way to install impedance.py is from [PyPI](https://pypi.org/project/impedance/) using pip.\n\n```bash\npip install impedance\n```\n\nSee [Getting started with impedance.py](https://impedancepy.readthedocs.io/en/latest/getting-started.html) for instructions on getting started from scratch.\n\n#### Dependencies\n\nimpedance.py requires:\n\n- Python (>=3.7)\n- SciPy (>=1.0)\n- NumPy (>=1.14)\n- Matplotlib (>=3.0)\n- Altair (>=3.0)\n\nSeveral example notebooks are provided in the `docs/source/examples/` directory. Opening these will require Jupyter Notebook or JupyterLab.\n\n#### Examples and Documentation\n\nSeveral examples can be found in the `docs/source/examples/` directory (the [Fitting impedance spectra notebook](https://impedancepy.readthedocs.io/en/latest/examples/fitting_example.html) is a great place to start) and the documentation can be found at [impedancepy.readthedocs.io](https://impedancepy.readthedocs.io/en/latest/).\n\n## Citing impedance.py\n\n[![DOI](https://joss.theoj.org/papers/10.21105/joss.02349/status.svg)](https://doi.org/10.21105/joss.02349)\n\nIf you use impedance.py in published work, please consider citing https://joss.theoj.org/papers/10.21105/joss.02349 as\n\n```bibtex\n@article{Murbach2020,\n doi = {10.21105/joss.02349},\n url = {https://doi.org/10.21105/joss.02349},\n year = {2020},\n publisher = {The Open Journal},\n volume = {5},\n number = {52},\n pages = {2349},\n author = {Matthew D. Murbach and Brian Gerwe and Neal Dawson-Elli and Lok-kun Tsui},\n title = {impedance.py: A Python package for electrochemical impedance analysis},\n journal = {Journal of Open Source Software}\n}\n```\n
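For quick orientation before opening the notebooks, here is a compact fitting sketch patterned on the getting-started guide; the CSV filename, circuit string, and initial guesses are placeholder values, not shipped defaults:

```python
# Minimal sketch patterned on the getting-started guide; \'data.csv\', the
# circuit string, and the initial guesses are placeholders.
from impedance import preprocessing
from impedance.models.circuits import CustomCircuit

# load frequency and complex-impedance columns from a CSV file
frequencies, Z = preprocessing.readCSV(\'data.csv\')
# keep only the capacitive (negative imaginary) portion of the spectrum
frequencies, Z = preprocessing.ignoreBelowX(frequencies, Z)

circuit = CustomCircuit(\'R0-p(R1,C1)\', initial_guess=[0.01, 0.005, 0.1])
circuit.fit(frequencies, Z)
print(circuit)  # fitted parameter summary
Z_fit = circuit.predict(frequencies)
```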
## Contributors \xe2\x9c\xa8\n\nThis project started at the [2018 Electrochemical Society (ECS) Hack Week in Seattle](https://www.electrochem.org/233/hack-week) and has benefited from a community of users and contributors since. Thanks goes to these wonderful people ([emoji key](https://allcontributors.org/docs/en/emoji-key)):\n\n- Lok-kun Tsui: \xf0\x9f\x92\xbb \xe2\x9a\xa0\xef\xb8\x8f \xf0\x9f\x93\x96\n- Brian Gerwe: \xf0\x9f\x92\xbb \xe2\x9a\xa0\xef\xb8\x8f \xf0\x9f\x93\x96 \xf0\x9f\x91\x80\n- Neal: \xf0\x9f\x91\x80 \xf0\x9f\x92\xbb\n- Matt Murbach: \xf0\x9f\x93\x96 \xf0\x9f\x91\x80 \xe2\x9a\xa0\xef\xb8\x8f \xf0\x9f\x92\xbb\n- Kenny Huynh: \xf0\x9f\x90\x9b \xf0\x9f\x92\xbb\n- lawrencerenna: \xf0\x9f\xa4\x94\n- Rowin: \xf0\x9f\x90\x9b \xf0\x9f\x92\xbb\n- Michael Plews: \xf0\x9f\xa4\x94\n- Chebuskin: \xf0\x9f\x90\x9b\n- environmat: \xf0\x9f\x90\x9b\n- Abdullah Sumbal: \xf0\x9f\x90\x9b\n- nobkat: \xf0\x9f\x92\xbb\n- Nick: \xf0\x9f\x90\x9b \xf0\x9f\x92\xbb\n- aokomorowski: \xf0\x9f\x92\xbb\n- Peter Attia: \xf0\x9f\x92\xbb \xe2\x9a\xa0\xef\xb8\x8f \xf0\x9f\x93\x96\n- sdkang: \xe2\x9a\xa0\xef\xb8\x8f \xf0\x9f\x92\xbb\n- lucasfdvx: \xf0\x9f\x90\x9b\n- Marcus Karlstad: \xf0\x9f\x90\x9b\n- Mark Bouman: \xf0\x9f\x90\x9b \xf0\x9f\x92\xbb\n- oslopanda: \xf0\x9f\x90\x9b\n- pililac: \xf0\x9f\x90\x9b\n- Kavin Teenakul: \xf0\x9f\x92\xbb \xf0\x9f\x93\x96\n- Enrico: \xf0\x9f\x92\xbb
\n\n\n\n\n\n\nThis project follows the [all-contributors](https://github.com/all-contributors/all-contributors) specification. Contributions of any kind welcome!\n'",",https://zenodo.org/badge/latestdoi/136110609,https://arxiv.org/abs/1309.0238,https://doi.org/10.21105/joss.02349,https://doi.org/10.21105/joss.02349","2018/06/05, 02:49:41",1968,MIT,33,593,"2023/08/29, 03:02:30",34,141,244,51,57,4,1.0,0.5073068893528183,"2023/07/10, 02:12:42",v1.7.1,2,24,false,,true,true,"seanbuchanan-eng/battery_sizing_model,AUTODIAL/AutoEIS,lfo96/auto_echem,ppravatto/GES-EIS-ML-demo,fuzhanrahmanian/MADAP,SamuelMassas/ImpedanceGUi,GES-compchem/GES-EIS-toolbox,ileu/eisplottingtool,Yobmod/dmlmung,karthikmayil/EIS-uncertainty-analysis,Alex6022/SPEIS-analysis,SSJenny90/laboratory",,https://github.com/ECSHackWeek,,,,,https://avatars.githubusercontent.com/u/32744210?v=4,,, PyBaMM,Fast and flexible physics-based battery models in Python.,pybamm-team,https://github.com/pybamm-team/PyBaMM.git,github,"pybamm,battery-models,solvers,python,batteries,simulation,hacktoberfest",Battery,"2023/10/24, 07:51:34",712,46,267,true,Python,PyBaMM Team,pybamm-team,"Python,CMake,C++,Shell,Dockerfile",https://www.pybamm.org/,"b'![PyBaMM_logo](https://user-images.githubusercontent.com/20817509/107091287-8ad46a80-67cf-11eb-86f5-7ebef7c72a1e.png)\n\n#\n\n
\n\n[![Powered by NumFOCUS](https://img.shields.io/badge/powered%20by-NumFOCUS-orange.svg?style=flat&colorA=E1523D&colorB=007D8A)](http://numfocus.org)\n[![Scheduled](https://github.com/pybamm-team/PyBaMM/actions/workflows/run_periodic_tests.yml/badge.svg?branch=develop)](https://github.com/pybamm-team/PyBaMM/actions/workflows/run_periodic_tests.yml)\n[![readthedocs](https://readthedocs.org/projects/pybamm/badge/?version=latest)](https://docs.pybamm.org/en/latest/?badge=latest)\n[![codecov](https://codecov.io/gh/pybamm-team/PyBaMM/branch/main/graph/badge.svg)](https://codecov.io/gh/pybamm-team/PyBaMM)\n[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/pybamm-team/PyBaMM/blob/main/)\n[![DOI](https://zenodo.org/badge/DOI/10.5334/jors.309.svg)](https://doi.org/10.5334/jors.309)\n[![release](https://img.shields.io/github/v/release/pybamm-team/PyBaMM?color=yellow)](https://github.com/pybamm-team/PyBaMM/releases)\n[![code style](https://img.shields.io/endpoint?url=https://raw.githubusercontent.com/charliermarsh/ruff/main/assets/badge/v2.json)](https://github.com/astral-sh/ruff)\n\n\n[![All Contributors](https://img.shields.io/badge/all_contributors-66-orange.svg)](#-contributors)\n\n\n
\n\n# PyBaMM\n\nPyBaMM (Python Battery Mathematical Modelling) is an open-source battery simulation package\nwritten in Python. Our mission is to accelerate battery modelling research by\nproviding open-source tools for multi-institutional, interdisciplinary collaboration.\nBroadly, PyBaMM consists of\n(i) a framework for writing and solving systems\nof differential equations,\n(ii) a library of battery models and parameters, and\n(iii) specialized tools for simulating battery-specific experiments and visualizing the results.\nTogether, these enable flexible model definitions and fast battery simulations, allowing users to\nexplore the effect of different battery designs and modeling assumptions under a variety of operating scenarios.\n\n[//]: # ""numfocus-fiscal-sponsor-attribution""\n\nPyBaMM uses an [open governance model](./GOVERNANCE.md)\nand is fiscally sponsored by [NumFOCUS](https://numfocus.org/). Consider making\na [tax-deductible donation](https://numfocus.org/donate-for-pybamm) to help the project\npay for developer time, professional services, travel, workshops, and a variety of other needs.\n\n
\n\n## \xf0\x9f\x92\xbb Using PyBaMM\n\nThe easiest way to use PyBaMM is to run a 1C constant-current discharge with a model of your choice with all the default settings:\n\n```python3\nimport pybamm\n\nmodel = pybamm.lithium_ion.DFN() # Doyle-Fuller-Newman model\nsim = pybamm.Simulation(model)\nsim.solve([0, 3600]) # solve for 1 hour\nsim.plot()\n```\n\nor simulate an experiment such as a constant-current discharge followed by a constant-current-constant-voltage charge:\n\n```python3\nimport pybamm\n\nexperiment = pybamm.Experiment(\n [\n (\n ""Discharge at C/10 for 10 hours or until 3.3 V"",\n ""Rest for 1 hour"",\n ""Charge at 1 A until 4.1 V"",\n ""Hold at 4.1 V until 50 mA"",\n ""Rest for 1 hour"",\n )\n ]\n * 3,\n)\nmodel = pybamm.lithium_ion.DFN()\nsim = pybamm.Simulation(model, experiment=experiment, solver=pybamm.CasadiSolver())\nsim.solve()\nsim.plot()\n```\n\nHowever, much greater customisation is available. It is possible to change the physics, parameter values, geometry, submesh type, number of submesh points, methods for spatial discretisation and solver for integration (see DFN [script](https://github.com/pybamm-team/PyBaMM/blob/develop/examples/scripts/DFN.py) or [notebook](https://github.com/pybamm-team/PyBaMM/blob/develop/docs/source/examples/notebooks/models/DFN.ipynb)).\n
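As a small, hedged illustration of that customisation, the sketch below swaps in one of the named parameter sets; ""Chen2020"" is a parameter set shipped with PyBaMM, while the 0.5 C-rate is an arbitrary choice:

```python3
import pybamm

model = pybamm.lithium_ion.DFN()
parameter_values = pybamm.ParameterValues(""Chen2020"")  # named parameter set
sim = pybamm.Simulation(model, parameter_values=parameter_values, C_rate=0.5)
sim.solve([0, 3600])  # solve for 1 hour
sim.plot()
```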
\n\n## Versioning\n\nPyBaMM makes releases every four months and we use [CalVer](https://calver.org/), which means that the version number is `YY.MM`. The releases happen, approximately, at the end of January, May and September. There is no difference between releases that increment the year and releases that increment the month; in particular, releases that increment the month may introduce breaking changes. Breaking changes for each release are communicated via the [CHANGELOG](CHANGELOG.md), and come with deprecation warnings or errors that are kept for at least one year (3 releases). If you find a breaking change that is not documented, or think it should be undone, please open an issue on [GitHub](https://github.com/pybamm-team/pybamm).\n\n## 🚀 Installing PyBaMM\n\nPyBaMM is available on GNU/Linux, macOS and Windows.\nWe strongly recommend installing PyBaMM within a Python virtual environment, so as not to alter any distribution Python files.\nFor instructions on how to create a virtual environment for PyBaMM, see [the documentation](https://docs.pybamm.org/en/latest/source/user_guide/installation/GNU-linux.html#user-install).\n\n### Using pip\n\n[![pypi](https://img.shields.io/pypi/v/pybamm?color=blue)](https://pypi.org/project/pybamm/)\n[![downloads](https://img.shields.io/pypi/dm/pybamm?color=blue)](https://pypi.org/project/pybamm/)\n\n```bash\npip install pybamm\n```\n\n### Using conda\n\nPyBaMM is available as a conda package through the conda-forge channel.\n\n[![conda_forge](https://img.shields.io/conda/vn/conda-forge/pybamm?color=green)](https://anaconda.org/conda-forge/pybamm)\n[![downloads](https://img.shields.io/conda/dn/conda-forge/pybamm?color=green)](https://anaconda.org/conda-forge/pybamm)\n\n```bash\nconda install -c conda-forge pybamm\n```\n\n### Optional solvers\n\nThe following solvers are optionally available on GNU/Linux and macOS:\n\n- [scikits.odes](https://scikits-odes.readthedocs.io/en/latest/)-based solver, see [the documentation](https://docs.pybamm.org/en/latest/source/user_guide/installation/GNU-linux.html#optional-scikits-odes-solver).\n- [jax](https://jax.readthedocs.io/en/latest/notebooks/quickstart.html)-based solver, see [the documentation](https://docs.pybamm.org/en/latest/source/user_guide/installation/GNU-linux.html#optional-jaxsolver).\n\n## 📖 Citing PyBaMM\n\nIf you use PyBaMM in your work, please cite our paper\n\n> Sulzer, V., Marquis, S. G., Timms, R., Robinson, M., & Chapman, S. J. (2021). Python Battery Mathematical Modelling (PyBaMM). _Journal of Open Research Software, 9(1)_.\n\nYou can use the following BibTeX entry:\n\n```\n@article{Sulzer2021,\n title = {{Python Battery Mathematical Modelling (PyBaMM)}},\n author = {Sulzer, Valentin and Marquis, Scott G. and Timms, Robert and Robinson, Martin and Chapman, S. Jon},\n doi = {10.5334/jors.309},\n journal = {Journal of Open Research Software},\n publisher = {Software Sustainability Institute},\n volume = {9},\n number = {1},\n pages = {14},\n year = {2021}\n}\n```\n\nWe would be grateful if you could also cite the relevant papers; these will change depending on which models and solvers you use. To find out which papers you should cite, add the line\n\n```python3\npybamm.print_citations()\n```\n\nto the end of your script. This will print BibTeX information to the terminal; passing a filename to `print_citations` will print the BibTeX information to the specified file instead. A list of all citations can also be found in the [citations file](https://github.com/pybamm-team/PyBaMM/blob/develop/pybamm/CITATIONS.bib). In particular, PyBaMM relies heavily on [CasADi](https://web.casadi.org/publications/).\nSee [CONTRIBUTING.md](https://github.com/pybamm-team/PyBaMM/blob/develop/CONTRIBUTING.md#citations) for information on how to add your own citations when you contribute.
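\n\nFor example, the following sketch (illustrative only; the filename is arbitrary) writes the citation information to a file instead of the terminal:\n\n```python3\nimport pybamm\n\n# after running your simulation, write the BibTeX entries for the\n# models and solvers used in this session to a file (name is arbitrary)\npybamm.print_citations(""pybamm_citations.bib"")\n```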
\n\n## 🛠️ Contributing to PyBaMM\n\nIf you\'d like to help us develop PyBaMM by adding new methods, writing documentation, or fixing embarrassing bugs, please have a look at these [guidelines](https://github.com/pybamm-team/PyBaMM/blob/develop/CONTRIBUTING.md) first.\n\n## 📫 Get in touch\n\nFor any questions, comments, suggestions or bug reports, please see the [contact page](https://www.pybamm.org/contact).\n\n## 📃 License\n\nPyBaMM is fully open source. For more information about its license, see [LICENSE](https://github.com/pybamm-team/PyBaMM/blob/develop/LICENSE.txt).\n\n## ✨ Contributors\n\nThanks go to these wonderful people ([emoji key](https://allcontributors.org/docs/en/emoji-key)):\n
- Valentin Sulzer: 🐛 💻 📖 💡 🤔 🚧 👀 ⚠️ ✅ 📝
- Robert Timms: 🐛 💻 📖 💡 🤔 🚧 👀 ⚠️ ✅
- Scott Marquis: 🐛 💻 📖 💡 🤔 🚧 👀 ⚠️ ✅
- Martin Robinson: 🐛 💻 📖 💡 🤔 👀 ⚠️ ✅
- Ferran Brosa Planella: 👀 🐛 💻 📖 💡 🤔 🚧 ⚠️ ✅ 📝
- Tom Tranter: 🐛 💻 📖 💡 🤔 👀 ⚠️ ✅
- Thibault Lestang: 🐛 💻 📖 💡 🤔 👀 ⚠️ 🚇
- Diego: 🐛 👀 💻 🚇
- felipe-salinas: 💻 ⚠️
- suhaklee: 💻 ⚠️
- viviantran27: 💻 ⚠️
- gyouhoc: 🐛 💻 ⚠️
- Yannick Kuhn: 💻 ⚠️
- Jacqueline Edge: 🤔 📋 🔍
- Fergus Cooper: 💻 ⚠️
- jonchapman1: 🤔 🔍
- Colin Please: 🤔 🔍
- cwmonroe: 🤔 🔍
- Greg: 🤔 🔍
- Faraday Institution: 💵
- Alexander Bessman: 🐛 💡
- dalbamont: 💻
- Anand Mohan Yadav: 📖
- WEILONG AI: 💻 💡 ⚠️
- lonnbornj: 💻 ⚠️ 💡
- Priyanshu Agarwal: ⚠️ 💻 🐛 👀 🚧 ✅
- DrSOKane: 💻 💡 📖 ⚠️ ✅ 👀
- Saransh Chopra: 💻 ⚠️ 📖 ✅ 👀 🚧
- David Straub: 🐛 💻
- maurosgroi: 🤔
- Amarjit Singh Gaba: 💻
- KennethNwanoro: 💻 ⚠️
- Ali Hussain Umar Bhatti: 💻 ⚠️
- Leshinka Molel: 💻 🤔
- tobykirk: 🤔 💻 ⚠️ ✅
- Chuck Liu: 🐛 💻
- partben: 📖
- Gavin Wiggins: 🐛 💻
- Dion Wilde: 🐛 💻
- Elias Hohl: 💻
- KAschad: 🐛
- Vaibhav-Chopra-GT: 💻
- bardsleypt: 🐛 💻
- ndrewwang: 🐛 💻
- MichaPhilipp: 🐛
- Alec Bills: 💻
- Agriya Khetarpal: 🚇 💻 📖 👀
- Alex Wadell: 💻 ⚠️ 📖
- iatzak: 📖 🐛 💻
- Ankit Kumar: 💻
- Aniket Singh Rawat: 💻 📖
- Jerom Palimattom Tom: 📖 💻 ⚠️
- Brady Planden: 💡
- jsbrittain: 💻 ⚠️
- Arjun: 🚇 💻 📖 👀
- CHEN ZHAO: 🐛
- darryl-ad: 💻 🐛 🤔
- julian-evers: 💻
- Jason Siegel: 💻 🤔
- Tom Maull: 💻 ⚠️
- ejfdickinson: 🤔 🐛
- bobonice: 🐛 💻
- Eric G. Kratz: 📖 🚇 🐛 💻 ⚠️
- Andrés Ignacio Torres: 🚇
- Agnik Bakshi: 📖
- RuiheLi: 💻 ⚠️
\n\nThis project follows the [all-contributors](https://github.com/all-contributors/all-contributors) specification. Contributions of any kind are welcome!\n'",",https://doi.org/10.5334/jors.309","2018/10/31, 10:26:29",1820,BSD-3-Clause,2763,13769,"2023/10/24, 07:51:38",137,1531,2639,632,1,32,1.8,0.6270862658355117,"2023/10/05, 09:42:37",v23.9rc0,0,67,false,,false,true,"ahemmatifar/sox,Marosli/SoCNN,pybamm-team/pybamm-tea,martinjrobins/diffeq-pybamm,schweini2409/pybamm-case-studies,pybop-team/PyBOP,pybamm-team/pybamm-cookiecutter,tekritesh/battery-controls,brosaplanella/contact-lens,alisterde/Dispersion_Area,HubertJN/Half-SPM-Solver,BradyPlanden/EstOptParticleModels,rish31415/pybamm-eis,YannickNoelStephanKuhn/EP-BOLFI,aced-differentiate/battery-parameter-pipeline,fardmoshiri/liionpack,Aoibheann1/BatteryModelling,OlliRuokojoki/liionpack,cyasing/liionpack-power-trial,abillscmu/liionpack,priyanshuone6/liionpack,srikanthallu/liionpack,MrP01/BatteryModelling,lucabanetta/aiida-seigrowth,brosaplanella/SPMe_SR,pybamm-team/bpx-example,HOLL95/General_electrochemistry,About-Energy-OpenSource/About-Energy-BPX-Parameterisation,molel-gt/ssb,mr-prometheus/More-Fruits-using-Deep-Learning,paramm-team/pybamm-param,soumyadeeproy12/Python_optimization_battery_design,Maky55-hub/IoTSoCEstimation,rtimms/dfn-example,Maky55-hub/soc_pybamm_simpy,wengandrew/battsim,ide3a/connecticity,pybamm-team/liionpack,pybamm-team/BattBot,ndrewwang/pybamm_cellmodels,brosaplanella/TEC-reduced-model,TomTranter/pybamm_pnm,pybamm-team/workshop-notebooks,martinjrobins/electrochem_catalysis,martinjrobins/electrocatalytic,pints-team/electrochem_pde_model",,https://github.com/pybamm-team,https://www.pybamm.org/,,,,https://avatars.githubusercontent.com/u/48961907?v=4,,, liionpack,A battery pack simulation tool that uses the PyBaMM framework.,pybamm-team,https://github.com/pybamm-team/liionpack.git,github,,Battery,"2023/07/05, 13:16:42",62,1,28,true,Python,PyBaMM Team,pybamm-team,"Python,TeX",https://liionpack.readthedocs.io/en/latest/,"b'![logo](https://raw.githubusercontent.com/pybamm-team/liionpack/main/docs/liionpack.png)\n\n
\n\n[![liionpack](https://github.com/pybamm-team/liionpack/actions/workflows/test_on_push.yml/badge.svg)](https://github.com/pybamm-team/liionpack/actions/workflows/test_on_push.yml)\n[![Documentation Status](https://readthedocs.org/projects/liionpack/badge/?version=develop)](https://liionpack.readthedocs.io/en/develop/?badge=develop)\n[![codecov](https://codecov.io/gh/pybamm-team/liionpack/branch/main/graph/badge.svg)](https://codecov.io/gh/pybamm-team/liionpack)\n[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/pybamm-team/liionpack/blob/main/)\n[![DOI](https://joss.theoj.org/papers/10.21105/joss.04051/status.svg)](https://doi.org/10.21105/joss.04051)\n\n
\n\n# Overview of liionpack\n*liionpack* takes a 1D PyBaMM model and makes it into a pack. You can either specify\nthe configuration, e.g. 16 cells in parallel and 2 in series (16p2s), or load a\nnetlist.\n\n## Installation\n\nFollow the steps given below to install `liionpack`. The package must be installed to run the included examples. It is recommended to create a virtual environment for the installation; see [the documentation](https://liionpack.readthedocs.io/en/main/install/).\n\nTo install `liionpack` using `pip`, run the following command:\n```bash\npip install liionpack\n```\n\n### Conda\n\nThe following terminal commands set up a conda development environment for liionpack. This requires the [Anaconda](https://www.anaconda.com) or [Miniconda](https://docs.conda.io/en/latest/miniconda.html) Python distribution. This environment installs liionpack in editable mode, which is useful for development of the liionpack source code. General users should install liionpack with pip.\n\n```bash\n# Create a conda environment named lipack\ncd liionpack\nconda env create --file environment.yml\n\n# Activate the environment\nconda activate lipack\n\n# Exit the environment\nconda deactivate\n\n# Delete the environment\nconda env remove --name lipack\n```\n\n### LaTeX\n\nIn order to use the `draw_circuit` functionality, a version of LaTeX must be installed on your machine. We use the underlying Python package `Lcapy` to make the drawing and direct you to its installation instructions [here](https://lcapy.readthedocs.io/en/latest/install.html) for operating-system specifics.\n\n## Example Usage\n\nThe following code block illustrates how to use liionpack to perform a simulation:\n\n```python\nimport liionpack as lp\nimport numpy as np\nimport pybamm\n\n# Generate the netlist\nnetlist = lp.setup_circuit(Np=16, Ns=2, Rb=1e-4, Rc=1e-2, Ri=5e-2, V=3.2, I=80.0)\n\noutput_variables = [\n \'X-averaged total heating [W.m-3]\',\n \'Volume-averaged cell temperature [K]\',\n \'X-averaged negative particle surface concentration [mol.m-3]\',\n \'X-averaged positive particle surface concentration [mol.m-3]\',\n]\n\n# Heat transfer coefficients\nhtc = np.ones(32) * 10\n\n# Cycling experiment, using PyBaMM\nexperiment = pybamm.Experiment([\n ""Charge at 20 A for 30 minutes"",\n ""Rest for 15 minutes"",\n ""Discharge at 20 A for 30 minutes"",\n ""Rest for 30 minutes""],\n period=""10 seconds"")\n\n# PyBaMM parameters\nparameter_values = pybamm.ParameterValues(""Chen2020"")\n\n# Solve pack\noutput = lp.solve(netlist=netlist,\n parameter_values=parameter_values,\n experiment=experiment,\n output_variables=output_variables,\n htc=htc)\n```
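\n\nAs a follow-up sketch (not part of the original example), the pack layout and results can then be visualised; `draw_circuit` is the LaTeX-backed drawing function described above, while `plot_output` is assumed from liionpack\'s plotting utilities:\n\n```python\n# Draw the generated netlist as a circuit diagram (requires LaTeX, see above)\nlp.draw_circuit(netlist)\n\n# Plot the solved output variables for the pack\n# (assumed helper from liionpack\'s plotting utilities)\nlp.plot_output(output)\n```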
\n\n## Documentation\n\nFull API documentation is hosted on Read the Docs and can be found [here](https://liionpack.readthedocs.io/).\n\n## Contributing to liionpack\n\nIf you\'d like to help us develop liionpack by adding new methods, writing documentation, or fixing embarrassing bugs, please have a look at these [guidelines](https://github.com/pybamm-team/liionpack/blob/main/docs/contributing.md) first.\n\n## Get in touch\n\nFor any questions, comments, suggestions or bug reports, please see the [contact page](https://www.pybamm.org/contact).\n\n## Acknowledgments\n\nThe PyBaMM team acknowledges the funding and support of the Faraday Institution\'s multi-scale modelling project and Innovate UK.\n\nThe development work carried out by members at Oak Ridge National Laboratory was partially sponsored by the Office of Electricity under the United States Department of Energy (DOE).\n\n## License\n\nliionpack is fully open source. For more information about its license, see [LICENSE](https://github.com/pybamm-team/liionpack/blob/main/LICENSE).\n'",",https://doi.org/10.21105/joss.04051","2021/09/28, 10:41:21",757,MIT,102,776,"2023/08/04, 05:59:03",21,128,231,49,82,5,0.4,0.31484502446982054,"2023/07/05, 14:23:10",v0.3.7,0,12,false,,false,false,wengandrew/battsim,,https://github.com/pybamm-team,https://www.pybamm.org/,,,,https://avatars.githubusercontent.com/u/48961907?v=4,,, ENNOID-BMS,Open Source: Modular BMS based on LTC68XX & STM32 MCU for up to 400V EV battery pack.,EnnoidMe,https://github.com/EnnoidMe/ENNOID-BMS.git,github,"bms,batteries,powerwall,slave-boards,battery-management-system",Battery,"2021/07/19, 14:52:01",177,0,52,false,,,,,,"b'# ENNOID - BMS\r\n\r\nENNOID-BMS is an open-source configurable battery management system consisting of a Master board based on an STM32 microcontroller connected through an ISOSPI interface to several modular slave boards. ENNOID-BMS can monitor the specific temperatures, currents & voltages that are critical for any lithium-ion battery pack. Based on the monitored inputs & the configured parameters, the master board can allow or interrupt the flow of energy from the battery pack by switching the state of external heavy-duty contactors. ENNOID-BMS can measure each cell voltage level & can trigger the passive balancing function during charging for cells above the configured limit to ensure that all cells have a similar State-Of-Charge (SOC). Parameters can be configured through the ENNOID-BMS-Tool software running on a USB-connected host computer.\r\n\r\n## Ordering:\r\n\r\nFor ordering assembled BMS or battery packs, please visit:\r\n\r\nhttps://www.ennoid.me/bms/gen-1\r\n\r\n## Documentation:\r\nhttps://blog.ennoid.me/\r\n\r\n## Block diagram\r\n\r\n![alt text](Master/LV/PIC/Wiring-LV.png)\r\n\r\nThe evolution of the BMS can be followed on this thread:\r\n\r\nhttps://endless-sphere.com/forums/viewtopic.php?f=14&t=92952\r\n\r\n\r\n## Features:\r\n\r\n- Modular with master/slave topology\r\n- 12S, 15S & 18S slave board options\r\n- Master board options: High Voltage (Master-HV) & Low voltage (Master-LV)\r\n- Up to 500A continuous operation\r\n- Integrated bi-directional current sensor\r\n- 12V drive coil outputs for charge, discharge & auxiliary circuits\r\n- Communication between slaves & master through a two-wire daisy chained ISOSPI interface\r\n- Isolated CAN bus interface \r\n- Isolated charger detection circuit\r\n- Voltage measurement for battery pack & load\r\n- Built-in precharge circuits\r\n- USB interface for programming and firmware upgrades through an easy-to-use graphical user interface\r\n- OLED Display, serial output & power button\r\n- 0V to 5V cell voltage operation\r\n\r\n## Datasheet:\r\n\r\n[ENNOID-BMS Datasheet](https://github.com/EnnoidMe/ENNOID-BMS/blob/master/Datasheet.pdf)\r\n\r\n## Software:\r\n\r\n![alt text](PIC/Tool.png)\r\n\r\nENNOID-BMS GUI configuration tool:\r\n[ENNOID-BMS tool](https://github.com/EnnoidMe/ENNOID-BMS-Tool)\r\n\r\n## Firmware:\r\n\r\nENNOID-BMS firmware .bin file:\r\n[ENNOID-BMS.bin](https://github.com/EnnoidMe/ENNOID-BMS-Firmware)\r\n\r\n\r\nView this project on [CADLAB.io](https://cadlab.io/project/1987). 
\r\n\r\n\r\n'",,"2018/02/20, 23:24:58",2072,GPL-3.0,0,271,"2020/08/13, 16:27:33",8,1,1,0,1168,0,0.0,0.0,,,0,1,false,,false,false,,,,,,,,,,, cellpy,Extract and tweak data from electro-chemical tests of battery cells.,jepegit,https://github.com/jepegit/cellpy.git,github,"chemistry,electrochemistry,physics,data-analysis,opensource,battery",Battery,"2023/10/25, 14:05:38",61,2,17,true,Python,,,"Python,TeX",,"b'.. image:: https://raw.githubusercontent.com/jepegit/cellpy/master/docs/_static/cellpy-icon-long.svg\n :height: 100\n :alt: cellpy\n\n===================================================================\ncellpy - *a library for assisting in analysing batteries and cells*\n===================================================================\n\n\n.. image:: https://img.shields.io/pypi/v/cellpy.svg\n :target: https://pypi.python.org/pypi/cellpy\n\n.. image:: https://readthedocs.org/projects/cellpy/badge/?version=latest\n :target: https://cellpy.readthedocs.io/en/latest/?badge=latest\n :alt: Documentation Status\n\n.. image:: https://static.pepy.tech/badge/cellpy\n :target: https://pepy.tech/project/cellpy\n\n\nThis Python package was developed to help the\nresearchers at IFE, Norway, in their cumbersome task of\ninterpreting and handling data from cycling tests of\nbatteries and cells.\n\n\nDocumentation\n=============\n\nThe documentation for ``cellpy`` is hosted on `Read the docs <https://cellpy.readthedocs.io/en/latest/>`_.\n\n\nInstallation and dependencies\n=============================\n\nThe easiest way to install ``cellpy`` is to install with conda or pip.\n\nWith conda::\n\n conda install -c conda-forge cellpy\n\nOr if you prefer installing using pip::\n\n python -m pip install cellpy\n\nHave a look at the documentation for more detailed installation procedures, especially\nwith respect to ""difficult"" dependencies when installing with pip.\n\nLicence\n=======\n\n``cellpy`` is free software made available under the MIT License.\n\nFeatures\n========\n\n* Load test-data and store in a common format.\n* Summarize and compare data.\n* Filter out the steps of interest.\n* Process and plot the data.\n* And more...\n\n
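\nA minimal usage sketch (an illustration added here, not from the original README; it assumes ``cellpy.get`` as the high-level reader and an Arbin ``.res`` raw file with an arbitrary name)::\n\n import cellpy\n\n # read a raw data file and look at the per-cycle summary\n c = cellpy.get(""my_cell_data.res"")\n print(c.data.summary.head())\n\n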
'",,"2015/12/17, 11:33:35",2869,MIT,311,2429,"2023/10/20, 13:18:25",21,153,237,39,5,0,0.0,0.04994850669412976,"2023/08/21, 12:59:33",v1.0.0post4,0,10,false,,true,true,"narest-qa/repo62,ife-bat/st-apps",,,,,,,,,, prediction-of-battery-cycle,Data driven prediction of battery cycle life before capacity degradation.,rdbraatz,https://github.com/rdbraatz/data-driven-prediction-of-battery-cycle-life-before-capacity-degradation.git,github,,Battery,"2022/11/26, 04:11:22",387,0,118,true,Jupyter Notebook,,,"Jupyter Notebook,MATLAB",,"b""# data-driven-prediction-of-battery-cycle-life-before-capacity-degradation\n\n**NOTE: For access to the modeling code, please contact Richard Braatz at braatz@mit.edu for the academic license. Only the data processing code is available without agreeing to a license.**\n\nThe code in this repository shows how to load the data associated with the paper ['Data driven prediction of battery cycle life before capacity degradation' by K.A. Severson, P.M. Attia, et al](https://www.nature.com/articles/s41560-019-0356-8). The data is available at [https://data.matr.io/1/](https://data.matr.io/1/). There you can also find more details about the data.\n\nThis analysis was originally performed in MATLAB, but here we also provide access information in python. In the MATLAB files (.mat), this data is stored in a struct. In the python files (.pkl), this data is stored in nested dictionaries.\n\nTo execute the python code, we recommend setting up a new python environment with packages matching the requirements.txt file. You can do this with conda: conda create --name <environment_name> --file requirements.txt\n\nThe data associated with each battery (cell) can be grouped into one of three categories: descriptors, summary, and cycle.\n- **Descriptors** for each battery include charging policy, cycle life, barcode and channel. (Note that barcode and channel are currently not available in the pkl files.)\n- **Summary data** include information on a per cycle basis, including cycle number, discharge capacity, charge capacity, internal resistance, maximum temperature, average temperature, minimum temperature, and chargetime.\n- **Cycle data** include information within a cycle, including time, charge capacity, current, voltage, temperature, and discharge capacity. We also include derived vectors of discharge dQ/dV, linearly interpolated discharge capacity (i.e. `Qdlin`) and linearly interpolated temperature (i.e. `Tdlin`).\n\nThe `LoadData` files show how the data can be loaded and which cells were used for analysis in the paper.\n
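\nFor instance, a loading sketch (illustrative only; the file name and key layout follow the nested-dictionary description above, and the `LoadData` files remain the authoritative reference):\n\n```python\nimport pickle\n\n# load one of the provided .pkl batch files (file name is arbitrary)\nwith open('batch1.pkl', 'rb') as f:\n    batch = pickle.load(f)\n\n# the data is stored in nested dictionaries: one entry per cell,\n# each holding e.g. descriptor, summary and cycle data\nfor cell_id, cell_data in batch.items():\n    print(cell_id, sorted(cell_data.keys()))\n```\n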
""",,"2019/02/07, 00:11:26",1721,MIT,1,32,"2023/08/30, 07:38:25",9,8,26,5,56,1,0.125,0.6875,,,0,6,false,,false,false,,,,,,,,,,, BatterySense,"A simple Arduino library to monitor battery consumption of your battery powered projects, being LiPo, LiIon, NiCd or any other battery type, single or multiple cells: if it can power your Arduino you can monitor it.",rlogiacco,https://github.com/rlogiacco/BatterySense.git,github,,Battery,"2021/01/23, 23:47:11",366,0,29,true,C++,,,C++,,"b""### ⚠ **IMPORTANT**\n \n> Please, before submitting a support request read carefully this README and check if an answer already exists among [previously answered questions](https://github.com/rlogiacco/BatterySense/discussions): do not abuse the GitHub issue tracker.\n\n\nBattery Sense [![Build Status][travis-status]][travis]\n=============\n[travis]: https://travis-ci.org/rlogiacco/BatterySense\n[travis-status]: https://travis-ci.org/rlogiacco/BatterySense.svg?branch=master\n\nThis is a simple Arduino library to monitor battery consumption of your battery powered projects, being LiPo, LiIon, NiCd or any other battery type, single or multiple cells: if it can power your Arduino you can monitor it!\n\nThe principle is simple: we are going to measure our battery capacity by measuring the voltage across the battery terminals.\n\nThe big assumption here is that battery capacity is linearly correlated to its voltage: the assumption itself is wrong, but in most cases it's *close enough* to reality, especially when it comes to the battery's higher capacity side.\n\nIn reality, the relation between battery capacity and its voltage is better represented by a curve and there are many factors affecting it: current drawn, temperature, age, etc...\n\n![Battery discharge curves at 750 mA](http://www.philohome.com/batteries/discharge-750.gif)\n\n\n- [How to](#how-to)\n - [Lesser than 5V, with voltage booster](#lesser-than-5v-with-voltage-booster)\n - [Higher than 5V, with internal voltage regulator](#higher-than-5v-with-internal-voltage-regulator)\n - [Higher than 5V, with external voltage regulator](#higher-than-5v-with-external-voltage-regulator)\n - [Higher than 5V, activated on demand](#higher-than-5v-activated-on-demand)\n- [Voltage divider ratio](#voltage-divider-ratio)\n- [Remaining capacity approximation](#remaining-capacity-approximation)\n - [Improvable](#improvable)\n - [Good enough](#good-enough)\n- [Examples](#examples)\n - [Single-cell Li-Ion on 3.3V MCU](#single-cell-li-ion-on-33v-mcu)\n - [Double cell Li-Ion (2S) on 5V MCU](#double-cell-li-ion-2s-on-5v-mcu)\n - [9V Alkaline on 5V MCU](#9v-alkaline-on-5v-mcu)\n\n\n## How to\nThe library requires at least 1 analog pin (we will call this the `sense pin`) and no less than 2 pieces of info on your battery: the voltage you will consider the minimum acceptable level, below which your project/product becomes unreliable and should be shut down, and the maximum voltage you can expect when the battery is fully charged.\n\nAdditionally, you can provide a second pin (either analog or digital) to activate the battery measurement circuit (we call it the `activation pin`), useful in all those situations where you can sacrifice a pin to further increase your battery duration.\n\nIf you want your readings to be more accurate we *strongly suggest* calibrating the library by providing your board reference voltage: most of the time you assume your board has exactly 5V between `Vcc` and `GND`, but this is rarely the case. To improve this we suggest using the [VoltageReference](https://github.com/rlogiacco/VoltageReference) library to obtain a better calibration value for all analog readings.\n\nThe `sense pin` wiring can vary depending on your battery configuration, but here are a few examples based on the assumption you are using a 5V board: in the case of a 3.3V board you should perform the necessary adjustments.\n\n### Lesser than 5V, with voltage booster\nVoltage sources made of single cell LiPo or LiIon, along with some single or multi-cell NiCd configurations (like up to 3 AA or AAA), are not able to provide the *suggested* `5.0 volts input` to your board and a voltage booster can solve your problem.\nWhat does that mean when it comes to measuring your battery level? 
We need to measure the battery voltage *before* it gets boosted, which means your `sense pin` must be connected between the battery positive terminal and the booster positive input; we don't need any additional components as the voltage is already in the acceptable range:\n\n```\n +---------+\n +----------------------------- SENSE | |\n | +---------+ | |\n | | | | |\n BAT+ ---+--- IN+ | 5V | OUT+ ------- 5V | Arduino |\n | BOOSTER | | |\n BAT- ------- IN- | | OUT- ------ GND | |\n +---------+ | |\n +---------+\n```\n\n### Higher than 5V, with internal voltage regulator\nVoltage sources made of multiple LiPo or LiIon cells, along with some single or multi-cell NiCd configurations (like up to the classic 9V battery or 4+ AA or AAA), provide voltages above the `5.0 volts input`: most of the Arduino boards are equipped with voltage regulators able to dissipate into heat all the excess.\nTo measure such batteries we need to hook our `sense pin` *before* it gets regulated, between the battery positive terminal and the Arduino unregulated input `VIN` or `RAW`, but we require two resistors to reduce the voltage to acceptable values:\n\n```\n +---------+\n BAT+ ---+--------- VIN | |\n | | |\n R1 | |\n | | |\n +------- SENSE | Arduino |\n | | |\n R2 | |\n | | |\n BAT- ------------- GND | |\n +---------+\n```\n\nThe values of R1 and R2 determine the `voltage ratio` parameter for this library: for information about this value refer to the section below.\n\nBecause the resistors in this configuration will constantly draw power out of your battery, you shouldn't pick values under `1k Ohm`, or you'll deplete your batteries much faster than normal. On the other hand, going too high on the resistor values will prevent the library from getting accurate readings.\n\n### Higher than 5V, with external voltage regulator\nWhenever your battery maximum voltage exceeds the onboard regulator (if there is any) an external voltage regulator is required.\nOnce again, to measure such batteries we need to hook our `sense pin` *before* it gets regulated, between the battery positive terminal and the voltage regulator positive input `VIN` or `RAW` and, as before, we require two resistors to reduce the voltage to acceptable values:\n\n```\n+---------------------------------+\n| +---------+ | +---------+\n| BAT+ ---+--- IN+ | | +-- SENSE | |\n| | | | | |\n| R1 | | | |\n| | | | | |\n+---------+ | REG | OUT+ ---- 5V | Arduino |\n | | | | |\n R2 | | | |\n | | | | |\n BAT- ---+--- IN- | | OUT- --- GND | |\n +---------+ +---------+\n```\n\nThe values of R1 and R2 determine the `voltage ratio` parameter for this library: for information about this value refer to the section below.\n\n### Higher than 5V, activated on demand\nBatteries are a precious resource and you want to prolong their life as much as you can, so depleting your battery just to determine its capacity is not desirable.\n\nAs a consequence of connecting the battery terminals through two resistors we are drawing some energy out of the battery: for a 9V battery and 1k Ohm for R1 and R2, you will be adding a constant 4.5mA current consumption to your circuit. Not a huge amount, but definitely not desirable.\n\nIf you have an unused pin on your Arduino it is easy to limit this additional current draw to only when it is needed: during battery measurement. 
We will be turning the `activation pin` HIGH during battery measurement so that the voltage divider will be disconnected most of the time:\n\n```\n +---------+\n BAT+ ---+--------- VIN | |\n | | |\n SW--------- ACT | |\n | | |\n R1 | |\n | | Arduino |\n +------- SENSE | |\n | | |\n R2 | |\n | | |\n BAT- ------------- GND | |\n +---------+\n```\n\nIn the above schematics **SW** is a circuit which can connect or disconnect the sensing circuit depending on the voltage on `ACT`: the most common and cheap circuit is made of an NPN transistor *Q1*, a p-channel MOSFET *Q2*, a 1k-4.7k Ohm resistor *R3* and a 5k-20k Ohm resistor *R4*:\n\n```\n BAT+\n |\n +-----+\n | |\n R4 |\n |\\ |\n ACT --- R3 ---Q1 \\-- Q2\n | |\n | |\n GND VDIV\n to R1/R2/SENSE\n\n```\n\nFeel free to refer to this [circuit simulation](http://tinyurl.com/ydbjfk67) to better understand how the circuit works and how much current it draws when in operation.\n\n![Simulated Circuit](https://image.ibb.co/iob7AV/sense.gif)\n\n## Voltage divider ratio\nWhenever your battery voltage is above your board voltage you need a voltage divider to constrain your readings within the 0-5V range allowed by your Arduino, and you will have to provide this library with its *ratio*.\n\n```\n\n BAT+ ---+\n |\n R1\n |\n +------- SENSE\n |\n R2\n |\n BAT- ----\n```\n\nThe `voltage divider ratio` is determined by the formula `(R1 + R2) / R2`: if you use two resistors of the same value the ratio will be **2**, which can be interpreted as *whatever value we read it will be **half** of the actual value*. This allows us to sense batteries up to 10V. \nIf you use a 22k Ohm resistor for R1 and a 10k Ohm for R2 then your `voltage ratio` will be **3.2** and you will be able to safely monitor a 12-15V battery.\n\nYou **must** select the resistors in order to get a ratio which will produce values within the 0-5V range (or 0-3.3V for 3.3V devices) **at all times**, and to obtain that the process is quite simple: divide your battery maximum voltage by 5V and you'll get the *absolute minimum value* for the `voltage ratio`, then pick any two resistor values whose combination produces a ratio *equal or higher* than the absolute minimum. For a 12V battery the *absolute minimum voltage ratio* is **12/5=2.4**, meaning you can't use a *split supply divider* made of two equal resistors: you need R1 to be a higher value than R2! Get this wrong and you will probably burn your `sense pin`.\n\nYou can use [this nice website](http://www.ohmslawcalculator.com/voltage-divider-calculator) to find some appropriate values for the resistors, setting your battery maximum voltage as _Voltage source_ and aiming at obtaining an _Output voltage_ value lesser than your board voltage (`5V` or `3.3V`) but as close as possible.\n\nThe *voltage divider total resistance*, made of `R1 + R2`, will determine the current drawn from your battery by the sensing circuit: the lower the total resistance, the more accurate your readings; the higher the resistance, the less current is drawn from your battery ([Ohm's law](http://www.ohmslawcalculator.com/ohms-law-calculator) rulez!). 
My suggestion is to keep this value within 20k-22k Ohm when using an *always-connected* circuit and under 10k Ohm if you use an *on-demand* configuration.\n\nWhen determining the *ratio* don't stick with the resistors' nominal values; instead, if possible, use a multimeter to actually measure their resistance to improve your results: a `4.7kΩ` resistor could easily be a `4.75kΩ` in reality!\n\n## Remaining capacity approximation\nThe available `level` functions aim at providing an approximation of the remaining battery capacity as a percentage. This is not an easy task when you want to achieve reliable values, and it is something the mobile device industry invests a decent amount of resources in.\nWhen an accurate estimate is desirable the battery voltage is not the sole parameter you want to take into consideration:\n * cell **chemistry** has a very high influence, obviously\n * cells based on the same chemistry might produce pretty different results depending on the **production process**\n * each chemistry has a different ideal operating **cell temperature**\n * the rate you **draw current** from the battery influences the remaining capacity\n * batteries are not everlasting: as the cell **age**s, the battery capacity gets reduced\n * _and more_ \n\nThe library itself doesn't aim at providing accurate estimates, but what I consider _an improvable but good enough_ estimate.\n\n### Improvable\nThe library can be configured to use a mapping function of your choice, given the function complies with the `mapFn_t` interface:\n\n```cpp\nuint8_t mapFunction(uint16_t voltage, uint16_t minVoltage, uint16_t maxVoltage)\n```\n\nTo configure your personalized function you only have to provide a pointer to it during initialization:\n\n```cpp\nBattery batt = Battery(3000, 4200, SENSE_PIN);\n\nuint8_t myMapFunction(uint16_t voltage, uint16_t minVoltage, uint16_t maxVoltage) {\n // illustrative body: a simple clamped linear mapping\n if (voltage <= minVoltage) return 0;\n if (voltage >= maxVoltage) return 100;\n return (uint8_t) (((uint32_t) (voltage - minVoltage) * 100) / (maxVoltage - minVoltage));\n}\n\nvoid setup() {\n batt.begin(3300, 1.47, &myMapFunction);\n}\n```\n\n> You are not limited in considering only the parameters listed in the function interface, meaning you can take into consideration the cell(s) temperature, current consumption or age: that's open to your requirements and circuitry.\n\n### Good enough\nAfter collecting a few data points on battery voltage vs. battery capacity, I've used the https://mycurvefit.com/ and https://www.desmos.com online tools to calculate the math functions best representing the data I've collected.\n\n![Mapping functions](https://github.com/rlogiacco/BatterySense/blob/master/map-fn.png?raw=true)\n\n> In the above plot I represent the battery percentage (Y axis) as a function of the difference between the current battery voltage and the minimum value (X axis): the graph represents a battery with a voltage swing of 1200mV from full to empty, but the functions scale according to the `minVoltage` and `maxVoltage` parameters.\n\nThe library ships with three different implementations of mapping function: \n\n * _linear_ is the default one (dashed red), probably the least accurate but the easiest to understand. Its main drawback is that, for most chemistries, it will very quickly go from 20-25% to 0%, meaning you have to select the `minVoltage` parameter for your battery accordingly. 
As an example, for a typical Li-Ion battery with a 3V to 4.2V range, you want to specify a 3.3V configuration value as the _minimum voltage_.\n * _sigmoidal_ (in blue) is a good compromise between computational effort and approximation, modeled after the typical discharge curve of Li-Ion and Li-Poly chemistries. It's more representative of the remaining charge on the lower end of the spectrum, meaning you can set the minimum voltage according to the battery safe discharge limit (typically 3V for a Li-Ion or Li-Poly).\n * _asymmetric sigmoidal_ (in green) is probably the best approximation when you only look at battery voltage, but it's more computationally expensive compared to the _sigmoidal_ function and, in most cases, it doesn't provide a great advantage over its symmetric counterpart.\n\nI strongly encourage you to determine the function that best matches your particular battery chemistry/producer when you want to use this library in your product.\n\n## Examples\nHere follow a few real-case scenarios which can guide you in using this library.\n\n### Single-cell Li-Ion on 3.3V MCU\nAs an example, for a single cell Li-Ion battery (4.2V - 3.7V) powering a `3.3V MCU`, you'll need to use a voltage divider with a ratio no less than `1.3`. Considering only E6 resistors, you can use a `4.7kΩ` (R1) and a `10kΩ` (R2) to set a ratio of `1.47`: this allows measuring batteries with a maximum voltage of `4.85V`, well within the swing of a Li-Ion. It's a little too current-hungry for my tastes in an *always-connected* configuration, but still ok. Considering the chemistry maps pretty well to our sigmoidal approximation function, I'm going to set it accordingly, along with the minimum voltage, whose lowest safe value is clearly 3.0V (if a Li-Ion is drained below `3.0V` the risk of permanent damage is high), so your code should look like:\n\n```cpp\nBattery batt = Battery(3000, 4200, SENSE_PIN);\n\nvoid setup() {\n // specify an activationPin & activationMode for on-demand configurations\n //batt.onDemand(3, HIGH);\n batt.begin(3300, 1.47, &sigmoidal);\n}\n```\n\n### Double cell Li-Ion (2S) on 5V MCU\nFor a double cell Li-Ion battery (8.4V - 7.4V) powering a `5V MCU`, you'll need to use a voltage divider with a ratio no less than `1.68`: you can use a `6.8kΩ` (R1) and a `10kΩ` (R2) to set the ratio *precisely* at `1.68`, perfect for our `8.4V` battery pack. The circuit will continuously draw 0.5mA in an *always-connected* configuration, if you can live with that. As we don't want to ruin our battery pack and we don't want to rush from 20% to empty in a few seconds, we'll have to set the minimum voltage to `6.8V` (with a _linear_ mapping) to avoid the risk of permanent damage, meaning your code should look like:\n\n```cpp\nBattery batt = Battery(6800, 8400, SENSE_PIN); \n\nvoid setup() {\n // specify an activationPin & activationMode for on-demand configurations\n //batt.onDemand(3, HIGH);\n batt.begin(5000, 1.68);\n}\n```\n\n> **NOTE**: I could have used the _sigmoidal_ approximation, as the chemistry fits pretty well on the curve, in which case a `6V` minimum voltage would have been a better configuration value.\n\n### 9V Alkaline on 5V MCU\nAnother classic example might be a single 9V Alkaline battery (9V - 6V) powering a `5V MCU`. In this case, you'll need to use a voltage divider with a ratio no less than `1.8` and, for the sake of simplicity, we'll go for a nice round `2` ratio. 
Using a nice `10kΩ` both for R1 and R2 we'll be able to measure batteries with a maximum voltage of `10V`, consuming only 0.45mA. The trick here is to determine when our battery should be considered empty: a 9V Alkaline, being a non-rechargeable one, can potentially go down to 0V, but it's unlikely our board will still be alive when this occurs. Assuming we are using a linear regulator to step down the battery voltage to power our board, we'll have to account for the regulator voltage drop: assuming it's a `1.2V` drop, we might safely consider our battery empty when it reaches `6.2V` (5V + 1.2V), leading to the following code:\n\n```cpp\nBattery batt = Battery(6200, 9000, SENSE_PIN);\n\nvoid setup() {\n // specify an activationPin & activationMode for on-demand configurations\n //batt.onDemand(3, HIGH);\n batt.begin(5000, 2.0);\n}\n```\n\n\n> **NOTE**: Most `5V MCU`s can actually continue to operate when receiving `4.8V` or even less: if you want to squeeze out as much energy as you can, you can fine-tune the low end, but also consider there is not much juice left when the battery voltage drops that much.\n""",,"2015/03/03, 23:32:50",3157,LGPL-3.0,0,97,"2021/12/13, 22:47:20",1,9,19,1,680,0,0.1111111111111111,0.3023255813953488,,,0,9,true,"github,custom",false,false,,,,,,,,,,, beep,A set of tools designed to support Battery Evaluation and Early Prediction of cycle life corresponding to the research of the d3batt program and the Toyota Research Institute.,TRI-AMDD,https://github.com/TRI-AMDD/beep.git,github,,Battery,"2023/05/16, 20:36:10",108,11,31,true,Python,Toyota Research Institute - Accelerated Materials Design & Discovery (AMDD),TRI-AMDD,"Python,JetBrains MPS",,"b'# Battery Evaluation and Early Prediction (BEEP)\n\n
\n \n![Testing - main](https://github.com/TRI-AMDD/beep/workflows/Testing%20-%20main/badge.svg)\n[![Coverage Status](https://coveralls.io/repos/github/TRI-AMDD/beep/badge.svg?branch=master)](https://coveralls.io/github/TRI-AMDD/beep?branch=master)\n[![GitHub Repo Size](https://img.shields.io/github/repo-size/TRI-AMDD/beep?label=Repo+Size)](https://github.com/TRI-AMDD/beep/graphs/contributors)\n\n
\n \nBEEP is a set of tools designed to support Battery Evaluation and Early Prediction of cycle life corresponding to the research of the [d3batt program](https://d3batt.mit.edu/) and the [Toyota Research Institute](http://www.tri.global/accelerated-materials-design-and-discovery/).\n\n* **Documentation:** https://tri-amdd.github.io/beep\n* **Source code:** https://github.com/tri-amdd/beep\n* **PyPi release:** https://pypi.org/project/beep/\n\n#### How to cite\nIf you use BEEP, please cite this article:\n\n> P. Herring, C. Balaji Gopal, M. Aykol, J.H. Montoya, A. Anapolsky, P.M. Attia, W. Gent, J.S. Hummelsh\xc3\xb8j, L. Hung, H.-K. Kwon, P. Moore, D. Schweigert, K.A. Severson, S. Suram, Z. Yang, R.D. Braatz, B.D. Storey, SoftwareX 11 (2020) 100506.\n[https://doi.org/10.1016/j.softx.2020.100506](https://doi.org/10.1016/j.softx.2020.100506)\n\n'",",https://doi.org/10.1016/j.softx.2020.100506,https://doi.org/10.1016/j.softx.2020.100506","2020/02/13, 23:09:45",1349,Apache-2.0,65,2145,"2023/08/14, 05:58:25",40,751,792,122,72,11,0.0,0.6888612321095209,"2022/12/08, 17:26:47",v2022.10.3.16,0,20,false,,true,true,"felix-tri/ncm,narest-qa/repo38,Buhankoanon/OAI_Proxy_Checker,SmitPatel-31/yoga,allenwangs/battery-cell-streamlit,AmalPF/imageForensic,ShreyasMavle/maccor,ashukumar27/streamlit-classification,TjGhostMx3/ShortStory,Varat7v2/PERSON-DETECTION-TRAINING,bluesky49/google_map_scraping",,https://github.com/TRI-AMDD,,,,,https://avatars.githubusercontent.com/u/62159728?v=4,,, snl-quest,"An open source, Python-based software application suite for energy storage simulation and analysis developed by Sandia National Laboratories.",snl-quest,https://github.com/sandialabs/snl-quest.git,github,"energy-storage,pyomo,kivy,sandia-national-laboratories,optimization,python",Battery,"2022/12/01, 18:24:52",110,0,20,true,Python,Sandia National Laboratories,sandialabs,"Python,kvlang,HTML,Batchfile,Makefile,Shell",,"b'\n\n# QuESt: Optimizing Energy Storage\n[![Build Status](https://travis-ci.com/rconcep/snl-quest.svg?branch=master)](https://travis-ci.com/rconcep/snl-quest)\n\nCurrent release version: 1.6\n\nRelease date: April, 2022\n\n## Contact\nFor issues and feedback we would appreciate it if you could use the ""Issues"" feature of this repository. This helps others join the discussion and helps us keep track of and document issues.\n\n### Email\nEntity account `@sandia.gov: snl-quest`\n\nProject maintainer (Tu Nguyen) `@sandia.gov: tunguy`\n\n## Table of contents\n- [Introduction](#intro)\n- [Getting started](#getting-started)\n- [Frequently Asked Questions](#faq)\n - [QuESt Data Manager](#faq-data-manager)\n - [QuESt Valuation](#faq-valuation)\n - [QuESt BTM](#faq-btm)\n\n### What is it?\n\nQuESt is an open source, Python-based application suite for energy storage simulation and analysis developed by the Energy Storage Systems program at Sandia National Laboratories, Albuquerque, NM. It is designed to give users access to models and analysis for energy storage used and developed by Sandia National Laboratories. It\'s designed to be transparent and easy to use without having to have knowledge of the mathematics behind the models or knowing how to develop code in Python. At the same time, because it is open source, users may modify it to suit their needs should they desire to. We will continue developing QuESt and its applications to enable more functionality.\n\n#### QuESt Data Manager\nAn application for acquiring data from open sources. 
Data selected for download is acquired in a format and structure compatible with other QuESt applications. Data that can be acquired includes:\n* Independent system operators (ISOs) and regional transmission organization (RTOs) market and operations data\n* U.S. utility rate structures (tariffs)\n* Commercial or residential building load profiles\n* Photovoltaic (PV) power profiles\n\n*Note: An internet connection is required to download data.*\n\n*Note: Certain data sources require registering an account to obtain access.*\n\n\n\n#### QuESt Valuation\nAn application for energy storage valuation, an analysis where the maximum revenue of a hypothetical energy storage device is estimated using historical market data. This is done by determining the sequence of state of charge management actions that optimize revenue generation, assuming perfect foresight of the historical data. QuESt Valuation is aimed at optimizing value stacking for ISO/RTO services such as energy arbitrage and frequency regulation.\n\n\n\n\n\n\n\n#### QuESt BTM\nAn application for behind-the-meter energy storage system analysis. Tools include:\n* Cost savings for time-of-use and net-energy-metering customers\n\n\n\n\n\n#### QuESt Performance\nAn application for analyzing battery energy storage system performance due to parasitic heating, ventilation, and air conditioning loads. This tool leverages the building simulation tool EnergyPlus to model the energy consumption of a particular battery housing.\n\n\n\n#### QuESt Technology Selection\n\nAn application for identifying the energy storage technologies most suitable for a given project. This tool is based on multiple parameters that characterize each storage technology; the technologies that do not satisfy the minimum application requirements are filtered out and the remaining technologies are ranked to indicate their compatibility to the desired project.\n\n\n\n### Who should use it?\nThe software is designed to be used by anyone with an interest in performing analysis of energy storage or its applications without having to create their own models or write their own code. It\xe2\x80\x99s designed to be easy to use out of the box but also modifiable by the savvy user if they so choose. The software is intended to be used as a platform for running simulations, obtaining results, and using the information to inform planning decisions. \n\n## Getting started\n\n\n### Installing from executable (recommended)\nRunning QuESt from an executable is the most straightforward way to get started with QuESt. You do not require any Python installation to install QuESt with this method; simply run the executable. What is required:\n\n* QuESt executable package\n* Solver compatible with Pyomo\n\nWe are currently looking into packaging a basic solver to simplify the installation process further.\n\n#### Windows 10\nYou can find the executable version with each release in the [**Releases**](https://github.com/rconcep/snl-quest/releases) section.\n\n1. Download and extract the `.zip` that is *not* labeled ""Source code."" Its name will be `snl-quest-v{version number}-win10.zip`.\n2. Inside the extracted folder, there will be a lot of files and folders. Locate the `snl-quest-v{version number}.exe` file and run it.\n3. A command prompt should open along with the QuESt GUI.\n\n#### OSX, Linux\nCurrently, we do not offer executable packages of QuESt for OSX or Linux operating systems. They are possible to package but we have not implemented those packaging processes yet. 
Installing from source code is an option.\n\n#### Solvers\nWhen running the executable version of QuESt, a solver compatible with Pyomo is still required to be installed and on your system path. Please refer to the [solvers](#install-solvers) section for details.\n\n### Installing from source code (advanced)\nFor all platforms, you can instead install QuESt using the codebase in this repository.\n\nYou will want to obtain the codebase for QuESt. You can do that by downloading a release version in a compressed archive from the ""releases"" tab on the GitHub repository page labeled as ""Source code"". Alternatively, you can clone this repository or download a compressed archive of it by clicking the ""Clone or download"" button on this page. We recommend keeping the QuESt files in a location where you have read/write permission. Once you have the codebase, follow the appropriate set of instructions for your operating system.\n\n**Requirements**\n* Python 3.6+\n* Kivy 1.10.1+ and its dependencies\n* Solver compatible with Pyomo\n\n#### Windows\n\n#### Method 1 (With Anaconda) (Recommended)\n\n1. Install Python via [Anaconda](https://www.anaconda.com/download/). Use the 64 or 32-Bit Graphical installer as appropriate.\n2. Add the following three locations to your `path` variable: `/path/to/Anaconda3`, `/path/to/Anaconda3/Scripts`, `/path/to/Anaconda3/Library/bin`\n3. Open Anaconda Prompt. Create a new conda environment: `conda create --name quest python=3.9`. Activate this environment: `conda activate quest`. \n4. Install Kivy: `pip install kivy` or `conda install kivy`. More information on Kivy can be found [here](https://kivy.org/docs/installation/installation-windows.html).\n5. Navigate to the root directory of the codebase (where the `main.py` file is): `cd /path/to/snl-quest`. Then run the setup file using\n ``python setup.py develop``. This will check dependencies for QuESt and install them as needed.\n6. Install a solver for Pyomo to use. See other sections for instructions on this.\n\n#### Method 2 (Without Anaconda)\n\n1. Install Python from [here](https://www.python.org/downloads/).\n2. Add the path to the Python executable to your `path` variable.\n3. Open Windows command prompt.\n4. Install Kivy: `pip install kivy`. Remember that `conda install kivy` will not work here since we are not using Anaconda Prompt in this method. \n5. Follow steps 5 and 6 from Method 1. \n \n#### OSX\n1. Install Python, preferably via a scientific distribution such as [Anaconda](https://www.anaconda.com/download/). Use the 64 or 32-Bit Graphical installer as appropriate.\n2. Install Kivy. Check [here](https://kivy.org/doc/stable/installation/installation-osx.html#using-homebrew-with-pip) for the latest instructions. (Refer to ""Using Homebrew with pip"" OR ""Using MacPorts with pip"")\n3. Navigate to the root directory of the codebase. Then run the setup file using\n ``python setup.py develop``. This will check dependencies for QuESt and install them as needed.\n4. Install a solver for Pyomo to use. See other sections for instructions on this.\n\n### Solvers for Pyomo\n\nAt least one solver compatible with Pyomo is required to solve optimization problems. Currently, a solver capable of solving linear programs is required. GLPK and CBC are suggested options for freely available solvers. Note that this list is not meant to be exhaustive but contains the most common viable options that we have tested. \n\n#### Installing GLPK (for Windows)\n1. 
Download and extract the executables for Windows linked [here](http://winglpk.sourceforge.net/).\n2. The glpk_*.dll and glpsol.exe files are in the `w32` and `w64` subdirectories for 32-Bit and 64-Bit Windows, respectively. Select the pair for the appropriate version of Windows that you are using. You can place them in the same directory as the QuESt executable. \n * Alternatively, you can place those files to the `C:\\windows\\system32` directory in order to have them in your system path. This will make GLPK available for the rest of your system instead of just for QuESt.\n * (When placing the files in your system path) Try running the command ``glpsol`` in the command prompt (Windows) or terminal (OSX). If you receive a message other than something like ""command not found,"" it means the solver is successfully installed.\n\n#### Installing GLPK (for Windows via Anaconda)\nIf you\'ve installed Python using Anaconda, you may be able to install several solvers through Anaconda\'s package manager with the following (according to Pyomo\'s [installation instructions](https://pyomo.readthedocs.io/en/latest/installation.html)):\n\n``conda install -c conda-forge glpk``\n\n#### Installing GLPK (for OSX)\nYou will need to either build GLPK from source or install it using the [homebrew](https://brew.sh/) package manager. This [blog post](http://arnab-deka.com/posts/2010/02/installing-glpk-on-a-mac/) may be useful.\n\n#### Installing GLPK or CBC (for OSX via Anaconda)\nIf you\'ve installed Python using Anaconda, you may be able to install several solvers through Anaconda\'s package manager with the following (according to Pyomo\'s [installation instructions](https://pyomo.readthedocs.io/en/latest/installation.html)):\n\n``conda install -c conda-forge glpk``\n\n``conda install -c conda-forge coincbc``\n\n#### Installing IPOPT (for Windows)\n1. Download and extract the pre-compiled binaries linked [here](https://www.coin-or.org/download/binary/Ipopt/). Select the latest version appropriate for your system and OS.\n2. Add the directory with the `ipopt.exe` executable file to your path system environment variable. For example, if you extracted the archive to `C:\\ipopt`, then `C:\\ipopt\\bin` must be added to your path.\n3. Try running the command ``ipopt`` in the command prompt (Windows) or terminal (OSX). If you receive a message other than something like ""command not found,"" it means the solver is successfully installed.\nRegardless of which solver(s) you install, remember to specify which of them to use in Settings within QuESt.\n\n### Running QuESt\nIf you are using the executable version, simply run the `snl-quest-v{version number}.exe` file.\n\nIf you are running from the codebase, from the Anaconda Prompt or Command Prompt, run:\n```\npython main.py\n```\n\nAlternatively, run ```main.py``` in a Python IDE of your choice.\n\n**NOTE: The current working directory must be where ``main.py`` is located (the root of the repository).**\n\n### Updating QuESt\n#### Installed from executable\nDownload and extract the executable package as previously. You can copy over your `\\data\\` directory to transfer your data bank to the new version. You can also copy over your `\\quest.ini` file to migrate your QuESt settings as well.\n\n#### Installed from source code\nIf you cloned the GitHub repository, you can execute a `git pull` command in the terminal/cmd while in the root of the QuESt directory. If you haven\'t modified any source code, there should be no conflicts. 
The master branch of the repository is reserved for release versions and is the most stable.\n\nIf you downloaded an archive of the master branch, you can download the latest release version as if it were a fresh install. You can drag and drop your old data directory so that you do not have to download all the data again if you would like. You can also move your `/quest.ini` file to migrate your settings.\n\n## Frequently Asked Questions\n\n\n### General\n\n> I am getting import errors when trying to run QuESt.\n\nThe current working directory must be where ``main.py`` is located.\n\n> The GUI elements in QuESt do not appear correct/The window does not display properly/The window is too big for my display/I cannot click or interact with the UI properly.\n\nQuESt is designed to be displayed at a minimum resolution of 1600x900.\n\nThere are a number of possible reasons for display issues, but the most likely cause is operating system scaling. For example, Windows 10 has a feature that scales the appearance of display elements, usually to assist with higher resolution displays. For example, if scaling is set to 125% in Windows, this will scale the QuESt window to be too big for the display (on a 1920x1080 resolution display).\n\nScaling may also have the effect of confusing Kivy about where a UI element is and where it is displayed; e.g., you may be clicking where a button appears to be, but the scaling causes Kivy to not ""detect"" that you are pressing the button.\n\nSo far, this issue has been observed on a variety of laptops of both Windows and OSX varieties. Our suggestion is to disable OS level scaling or to connect to an external display and try to launch QuESt on it.\n\n> Are there any help tutorials/manuals/etc. for QuESt?\n\nWe strive to make QuESt as lightweight and intuitive to use as possible through its design. In version 1.2.f, we integrated additional help carousels within QuESt to provide additional details throughout the software. We currently do not intend to make a comprehensive manual but may share presentation materials such as mini tutorials that may be of interest.\n\n> I want to know more about how the algorithms work/how the results are computed.\n\nPlease see the [references](#references) for relevant publications describing the models that were implemented into QuESt. As we further develop the API and documentation, we will aggregate formulation details in those documents.\n\n> I\'m interested in a tool/capability that is not currently in QuESt.\n\nFeel free to drop us a line! User feedback helps shape our development goals and priorities and we would welcome hearing what users would like to have.\n\n### QuESt Data Manager\n\n\n> I am connecting to the internet through a proxy, such as on a corporate network. How should I configure my connection settings?\n\nTypically, devices have their connection settings configured for the network they will primarily reside on. For example, proxy settings may already be configured in system environment variables and whatnot. We recommend that your proxy settings be configured at the operating system level and that you do not additionally specify using a proxy in QuESt settings. In our experience, additionally specifying the same proxy settings in QuESt ""does no harm,"" but your mileage may vary.\n\n> I am trying to download data and am receiving many messages about connection errors, timeouts, etc. 
What should I do?\n\nWe found these issues to be highly network-dependent and hard to diagnose or mitigate. The best practice is to limit the amount of data that you request at a time. Additionally, QuESt Data Manager is configured to skip data that is already downloaded, so you can issue the same request again to patch up any data that failed to download.\n\n> I downloaded data but other QuESt applications are telling me that I haven\'t downloaded any.\n\nQuESt expects data to be in a certain directory structure as laid out by QuESt Data Manager. Changing directory names or filenames, modifying files, etc. will produce unexpected results. We recommend not modifying downloaded data files, except perhaps to delete them.\n\n> How do I obtain PJM Data Miner 2 API access?\n\nRefer to the instructions in QuESt Data Manager or see the API guide [here](http://www.pjm.com/markets-and-operations/etools/data-miner-2.aspx).\n\n> How do I obtain an ISO-NE ISO Express account?\n\nRefer to the instructions in QuESt Data Manager or simply register an account at ISO-NE\'s [website](https://www.iso-ne.com/). Make sure to sign up for Data Feeds access.\n\n> Why can\'t I download [data for which no option in QuESt Data Manager exists]?\n\nRTOs/ISOs provide many more varieties of data than what is shown in QuESt Data Manager. We are focused on acquiring the data necessary for other QuESt applications to function. Additionally, acquiring data is not the fastest process. In order to improve the user experience, we decided to limit the amount of data that one can request at a time. For that reason, we have limited the number of pricing nodes for which data can be requested directly. (For example, PJM has over 11,000 pricing nodes.) We can consider lifting some of these limitations if requested.\n\n> I downloaded data for a previous month before the month was over. Now I can\'t complete the month\'s data set because it skips over it. What should I do?\n\nQuESt Data Manager skips downloads if a file with the anticipated filename already exists. If you delete the specific file(s) in the `/data/` directory, it will try to download the data again.\n\n> Why does it take so long to download data?\n\n* Some download requests are requesting large amounts of data.\n* Some ISO/RTO websites or APIs have connection issues. We incorporate mechanisms for automatically retrying a limited number of times (sketched below).\n* CAISO\'s API limits requests to one every five seconds.
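\n\nThe limited-retry behavior described above follows a common pattern; a rough illustration in Python (a sketch only -- the URL handling, timeout, retry count, and delay are invented here, and this is not QuESt\'s actual downloader):\n\n```python\nimport time\nimport urllib.request\n\ndef fetch_with_retries(url, retries=3, delay=5.0):\n    # Retry a download a limited number of times, pausing between\n    # attempts (useful for rate-limited APIs such as the one above).\n    for attempt in range(1, retries + 1):\n        try:\n            with urllib.request.urlopen(url, timeout=30) as response:\n                return response.read()\n        except OSError as exc:  # URLError and timeouts are OSError subclasses\n            print(f""attempt {attempt} of {retries} failed: {exc}"")\n            if attempt == retries:\n                raise\n            time.sleep(delay)\n```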
\n\n> How do I obtain an API key for Data.gov / OpenEI / utility rate structure database / PVWatts / PV profile data?\n\nRefer to the instructions in QuESt Data Manager or see the signup form [here](https://developer.nrel.gov/signup/). The API key is the same for all of those applications.\n\n> What do these buttons in the utility rate structure search tool do?\n\nThese buttons copy the text in the text input field down to the next row. They are mainly used for monthly flat rate schedules where every value is the same but you want to adjust the amount.\n\n> What are the purposes of the minimum and maximum peak demand values in the rate structure?\n\nTypically, a rate structure is only applicable to customers with certain peak demand values. QuESt does not currently enforce these minima and maxima, and it is up to you to select the appropriate rate structure. In some cases, it may be possible to be contracted to a rate structure with a minimum peak demand that is greater than the use case\'s demand; additionally, a minimum demand charge may be applicable. We are planning on supporting minimum demand charges in QuESt BTM\'s cost savings tool in a future release.\n\n> Why can\'t I find my city in the commercial and residential load profile selection? / I downloaded a profile for a building in New York-Central Park but can\'t find it in QuESt BTM.\n\nThe locations are based on meteorological measurement sites, which are typically airports (""AP"") or weather stations. This is because the profiles are based on typical meteorological years (TMY). See this [page](https://openei.org/community/blog/commercial-and-residential-hourly-load-data-now-available-openei) for information about the load profile database, including the definitions of the commercial building types and residential load types.\n\nEach specific location is matched to a specific climate in order to simulate load profiles. For example, locations in New York City were matched to Baltimore (in terms of climate) and the resulting load profile filenames were named after Baltimore.\n\n> I want to know more about how the PV power profiles are simulated.\n\nSee the API description and PVWatts manual [here](https://developer.nrel.gov/docs/solar/pvwatts/v6/).\n\n> What is the default latitude and longitude in the PV power profile download tool?\n\nIt is approximately that of Albuquerque, NM.\n\n> I\'m not sure what to put for the tilt angle in the PV profile download tool.\n\nLeaving the text input field blank sets the tilt angle to the latitude of the site.\n\n### QuESt Valuation\n\n\n> Why are only [x] options available for market areas/historical data/revenue streams/etc.?\n\nThese options are based on the data that you have downloaded through QuESt Data Manager. Download more varieties if you wish to use them!\n\n> I\'m getting solver errors/QuESt is crashing when building optimization models. How can I fix that?\n\nOur experience indicates that most crashes are due to data issues. For example, data for a month may be unexpectedly missing, preventing the model building process from completing. We make every effort to limit these incidents, but it is difficult to perfectly predict the data issues that we need to design around. We will try to handle these exceptions as best we can as we learn more about the common situations.\n\n> An electricity market area I want to do analysis on isn\'t available. When will it be available?\n\nThe development team is working on modeling and analysis for the remaining market areas. When we have vetted the results and the viability of data acquisition and processing, we will work on implementing them into QuESt. Please look forward to it!\n\n> I selected [x] year for my historical dataset and only [y] months had results after the optimization. Why is that?\n\nDue to (rolling) data availability, data for certain periods may be absent. For example, ERCOT\'s 2010 data only starts in December, and data sets for the current year will obviously be incomplete. There\'s also the possibility that the data failed to download.\n\n> Why can I only adjust [x] parameters for my energy storage device?\n\nTo streamline the user experience in the Wizard, we decided to reduce the range of options available. Please try the ""Batch Runs"" interface for fuller flexibility.\n\n> The pro forma report\'s appearance doesn\'t seem quite right/there is a bunch of cryptic commands underneath the ""Optimization formulation"" section.\n\nFor best results when viewing the report, you must be connected to the internet and have JavaScript enabled. We use content delivery services for resources such as fonts (Google Fonts) and use JavaScript to render the equations under the ""Optimization formulation"" section (MathJax).\n\n> What is a parameter sweep?\n\nA parameter sweep adjusts the specified parameter from the min value to the max value in the given number of steps. It does this for each month of data selected on the ""data"" interface, and a simulation is performed for all of the combinations (a generic sketch follows below). This is a useful way to perform sensitivity analysis.
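\n\nIn generic Python terms, a sweep is simply the cross product of the stepped parameter values and the selected months (an illustrative sketch -- the parameter, its range, and the months below are invented, and this is not QuESt\'s internal code):\n\n```python\nimport itertools\n\nimport numpy as np\n\n# Hypothetical sweep: 5 evenly spaced steps from the min to the max value.\npower_ratings_kw = np.linspace(1000.0, 5000.0, num=5)\nmonths = [""2019-01"", ""2019-02"", ""2019-03""]  # months picked on the ""data"" interface\n\nfor rating, month in itertools.product(power_ratings_kw, months):\n    # One simulation would be performed per (parameter value, month) pair.\n    print(f""simulate month={month} at power rating {rating:.0f} kW"")\n```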
Please try the ""Batch Runs"" interface for fuller flexibility.\n\n> The pro forma report\'s appearance doesn\'t seem quite right/there is a bunch of cryptic commands underneath the ""Optimization formulation"" section.\n\nFor best results when viewing the report, you must be connected to the internet and enable JavaScript. We use content delivery services for resources such as fonts (Google Fonts) and use JavaScript to render the equations under the ""Optimization formulation"" section (MathJax).\n\n> What is a parameter sweep?\n\nA parameter sweep will adjust the specified parameter from the min value to the max value in the given number of steps. It will do this for each month of data selected on the ""data"" interface. A simulation will be performed for the all of the combinations. This is a useful way for performing sensitivity analysis.\n\n### QuESt BTM\n\n\n> Can I adjust the rate structure parameters in the wizard?\n\nNo, any adjustments must be made in QuESt Data Manager before saving the entire rate structure.\n\n> I accidentally selected a PV power profile in the cost savings wizard. Can I remove my selection?\n\nNo, but this is a known issue. The easiest workaround is to reset the entire wizard by exiting it to the QuESt BTM home screen, the QuESt home screen, or restarting QuESt.\n\n> Sometimes the figures in the cost savings wizard report do not appear.\n\nThis is a known issue. You can try to generate the report again in order to fix it.\n\n> I want to use my own rate structure / PV profile / load profile.\n\nSince version 1.2.f, you can import your own time series data (PV and load profiles) through the user interface. You can also import your own data by adding to the QuESt data bank manually. See below for details.\n\n#### Rate structure\nThe rate structure files are stored as .json files in `/data/rate_structures/` after being downloaded through QuESt Data Manager. You can add a new file following the format of one downloaded using QuESt Data Manager. The general structure of the .json object is as follows:\n\n* name - the display name\n* utility\n * utility name - display name for utility \n * rate structure - display name for rate structure\n* energy rate structure\n * weekday schedule - a 2D array where the entries correspond to the integer-valued period of that hour. Each row represents a month. Each column represents an hour.\n * weekend schedule - same as weekday schedule\n * energy rates - each field name corresponds to the integer-valued periods from the weekday and weekend schedule and each field value is the $/kWh time-of-use energy rate for that period\n* demand rate structure\n * weekday schedule - same as energy rate structure\n * weekend schedule - same as energy rate structure\n * time of use rates - same as energy rates but with values in $/kW\n * flat rates - each field name is the abbreviation of the month and each field value is the $/kW flat demand charge for the peak demand of that month, if applicable\n * minimum peak demand - minimum peak demand in kW for this rate structure\n * maximum peak demand - maximum peak demand in kW for this rate structure\n* net metering\n * type - true if net metering 2.0 (use time-of-use energy rate), false if net metering 1.0 (use a fixed $/kWh)\n * energy sell price - fixed $/kWh for net metering 1.0; use null for type == true (net metering 2.0)\n\n#### PV profile\nThe PV profile files are stored as .json files in `/data/pv/` after being downloaded through QuESt Data Manager. 
\n\n#### PV profile\nThe PV profile files are stored as .json files in `/data/pv/` after being downloaded through QuESt Data Manager. You can add a new file following the format of one downloaded using QuESt Data Manager. The format is essentially that of the files returned directly by the PVWatts API. The relevant fields are described as follows:\n\n* inputs - the API inputs for this resulting .json object; these are for display purposes only\n* station_info - same as the inputs\n* outputs\n * ac - the hourly AC output in Watts in a single 1D array; this is what QuESt uses (after internal conversion to kW)\n\n#### Load profile\nThe load profile files are stored as .csv files in `/data/load/` after being downloaded through QuESt Data Manager. You can add a new file following the format of one downloaded using QuESt Data Manager. You can create a new directory under `commercial`, for example `/data/load/commercial/custom`, and add a new .csv file there.\n\nThe format is essentially two columns: the ""Date/Time"" column gives the month, day, and hour, and the second column is the hourly kW load. The ""Date/Time"" column is used for parsing the correct data for a selected month, for example. A year is not provided because the building data is simulated based on TMY3 (typical meteorological year) weather. A sketch of writing such a file in Python is shown below.\n\nOnce new files are added to the `data` bank appropriately, they should be picked up in the relevant applications when you are prompted to make a selection.
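\n\nFor example, a custom load profile in that two-column layout could be generated as follows (a sketch only -- the filename, the second column header, and the exact ""Date/Time"" formatting are assumptions; mirror a file that QuESt actually downloaded):\n\n```python\nimport csv\nfrom pathlib import Path\n\n# Custom directory under commercial, as described above.\ntarget = Path(""data/load/commercial/custom"")\ntarget.mkdir(parents=True, exist_ok=True)\n\n# Hypothetical flat 50 kW load for January 1st, hour by hour.\nwith open(target / ""my_building.csv"", ""w"", newline="""") as f:\n    writer = csv.writer(f)\n    writer.writerow([""Date/Time"", ""Hourly Load [kW]""])\n    for hour in range(1, 25):\n        writer.writerow([f""01/01  {hour:02d}:00:00"", 50.0])\n```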
""The value proposition for energy storage at the Sterling Municipal Light Department."" 2017 IEEE Power & Energy Society General Meeting. IEEE, 2017.\n[Available online](https://www.osti.gov/servlets/purl/1427423).\n\nByrne, Raymond H., et al. ""Energy management and optimization methods for grid energy storage systems."" IEEE Access 6 (2017): 13231-13260.\n[Available online](https://ieeexplore.ieee.org/abstract/document/8016321).\n\nByrne, Raymond H., and C\xc3\xa9sar A. Silva-Monroy. ""Potential revenue from electrical energy storage in ERCOT: The impact of location and recent trends."" 2015 IEEE Power & Energy Society General Meeting. IEEE, 2015.\n[Available online](https://www.osti.gov/servlets/purl/1244909).\n'",,"2018/07/26, 16:43:03",1917,CUSTOM,1,199,"2022/10/04, 18:48:32",28,4,24,0,386,1,0.0,0.5087719298245614,"2022/08/25, 19:19:40",v1.6,0,5,false,,false,false,,,https://github.com/sandialabs,https://software.sandia.gov,United States,,,https://avatars.githubusercontent.com/u/4993680?v=4,,, simses,Software for techno-economic Simulation of Stationary Energy Storage Systems.,open-ees-ses,,custom,,Battery,,,,,,,,,,https://gitlab.lrz.de/open-ees-ses/simses,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,, CAEBAT OAS,"A flexible, robust, and computationally scalable open-architecture framework that integrates multi-physics and multi- scale battery models.",,,custom,,Battery,,,,,,,,,,https://vibe.ornl.gov/#introduction,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,, universal-battery-database,The Universal Battery Database is an open source software for managing Lithium-ion cell data.,Samuel-Buteau,https://github.com/Samuel-Buteau/universal-battery-database.git,github,"lithium-ion,universal-battery-database,lithium-ion-cells,tensorflow,ml,deep-learning",Battery,"2020/05/24, 14:01:49",60,0,19,true,Python,,,"Python,HTML,Shell,Batchfile",,"b""# Universal Battery Database\n\nThe Universal Battery Database is an open source software for managing Lithium-ion cell data. Its primary purposes are:\n1. Organize and parse experimental measurement (e.g. long term cycling and electrochemical impedance spectroscopy) data files of Lithium-ion cells.\n2. Perform sophisticated modelling using machine learning and physics-based approaches.\n3. Describe and organize the design and chemistry information of cells (e.g. electrodes, electrolytes, geometry), as well as experimental conditions (e.g. temperature).\n4. Automatically refresh a database as new data comes in.\n5. Visualize experimental results.\n6. Quickly search and find data of interest.\n7. 
Quality control.\n\nThe Universal Battery Database was developed at the [Jeff Dahn Research Group](https://www.dal.ca/diff/dahn/about.html) at Dalhousie University.\n\n## Table of Contents\n\n- [Preliminary Results](#preliminary-results)\n- [Data Management Software Demo](#data-management-software-demo)\n- [Installation](#installation)\n * [Prerequisites](#prerequisites)\n * [Two Installation Options](#two-installation-options)\n- [Using the Software](#using-the-software)\n- [Physics and Computer Science Behind the Software](#physics-and-computer-science-behind-the-software)\n- [Contributing](#contributing)\n * [Code Conventions](#code-conventions)\n \n## Preliminary Results\n\n![alt text](https://github.com/Samuel-Buteau/universal-battery-database/blob/master/demo_screenshots/capacity_measured_and_modelled.png)\n\n**Figure 1**: Model measurements and make predictions using [`ml_smoothing.py`](https://github.com/Samuel-Buteau/universal-battery-database/wiki/ml_smoothing.py).\n\n## Data Management Software Demo\n\n![alt text](https://github.com/Samuel-Buteau/universal-battery-database/blob/master/demo_screenshots/fix_cycle_example.png)\n\n**Figure 2**: Fix anomalous cycling data using the web browser provided by [`manage.py`](https://github.com/Samuel-Buteau/universal-battery-database/wiki/manage.py).\n\n## Installation\n\n### Prerequisites\n\n- [Python 3](https://www.python.org/downloads/)\n- [pip and virtualenv](https://packaging.python.org/guides/installing-using-pip-and-virtual-environments/)\n\n### Two Installation Options\n\n1. If you only want to play around with modelling and you have a compiled dataset from somewhere else, you can [install without a database](https://github.com/Samuel-Buteau/universal-battery-database/wiki/Installing-Without-a-Database-(Windows)). This option is simpler and you can always install a database later.\n2. If you want to use the full database features such as parsing and organising experimental data and metadata, you should [install with a database](https://github.com/Samuel-Buteau/universal-battery-database/wiki/Installing-With-a-Database-(Windows)).\n\n\n## Using the Software\n\nUse [`manage.py`](https://github.com/Samuel-Buteau/universal-battery-database/wiki/manage.py) to see the web page and use its analytic features.\n\nUse [`ml_smoothing.py`](https://github.com/Samuel-Buteau/universal-battery-database/wiki/ml_smoothing.py) to use the machine learning model and see the results.\n\n\n## Physics and Computer Science Behind the Software\n\nWe hypothesize that we can make [good generalizations](https://github.com/Samuel-Buteau/universal-battery-database/wiki/Generalization-Criteria) by [approximating](https://github.com/Samuel-Buteau/universal-battery-database/wiki/The-Universal-Approximation-Theorem) the functions that map one degradation mechanism to another using neural networks. \n\nWe aim to develop a theory of lithium-ion cells. We first break down the machine learning problem into smaller sub-problems. From there, we develop frameworks to convert the theory to practical implementations. 
Finally, we apply the method to experimental data and evaluate the result.\n\n## Contributing\n\n### Code Conventions\n\nGenerally, we follow [Google's Python Style Guide](https://github.com/google/styleguide/blob/gh-pages/pyguide.md).\n""",,"2019/10/15, 15:33:35",1471,Apache-2.0,0,606,"2022/11/21, 22:46:35",38,62,90,1,337,7,0.0,0.3783783783783784,,,0,2,false,,false,false,,,,,,,,,,, open_BEA,Open Battery Models for Electrical Grid Applications.,open-ees-ses,,custom,,Battery,,,,,,,,,,https://gitlab.lrz.de/open-ees-ses/openbea,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,, lp_opt,Linear Programming Optimization Tool for Battery Energy Storage Systems.,open-ees-ses,,custom,,Battery,,,,,,,,,,https://gitlab.lrz.de/open-ees-ses/lp_opt,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,, SLIDE,SLIDE is a C++ code that simulates degradation of lithium-ion cells.,davidhowey,https://github.com/Battery-Intelligence-Lab/SLIDE.git,github,"battery,digital-twin,physical-modeling,simulation",Battery,"2023/04/08, 04:07:24",89,0,39,true,C++,Battery Intelligence Lab,Battery-Intelligence-Lab,"C++,MATLAB,CMake,Python,TeX",,"b'_Slide_\n===========================\n[![Ubuntu unit](https://github.com/Battery-Intelligence-Lab/SLIDE/workflows/Ubuntu%20unit/badge.svg)](https://github.com/Battery-Intelligence-Lab/SLIDE/actions)\n[![macOS unit](https://github.com/Battery-Intelligence-Lab/SLIDE/workflows/macOS%20unit/badge.svg)](https://github.com/Battery-Intelligence-Lab/SLIDE/actions)\n[![Windows unit](https://github.com/Battery-Intelligence-Lab/SLIDE/workflows/Windows%20unit/badge.svg)](https://github.com/Battery-Intelligence-Lab/SLIDE/actions)\n![Website](https://img.shields.io/website?url=https%3A%2F%2FBattery-Intelligence-Lab.github.io%2FSLIDE%2F)\n[![codecov](https://codecov.io/gh/Battery-Intelligence-Lab/SLIDE/branch/master/graph/badge.svg?token=K739SRV4QG)](https://codecov.io/gh/Battery-Intelligence-Lab/SLIDE)\n\n\n![GitHub all releases](https://img.shields.io/github/downloads/Battery-Intelligence-Lab/SLIDE/total) \n[![](https://img.shields.io/badge/license-BSD--3--like-5AC451.svg)](https://github.com/Battery-Intelligence-Lab/SLIDE/blob/master/LICENSE)\n\nThis `readme.md` gives a summary; you may find the detailed documentation [here](https://Battery-Intelligence-Lab.github.io/SLIDE/). \nIf you are affected by the sudden change of the master branch, please switch to the [SLIDE_v2](https://github.com/Battery-Intelligence-Lab/SLIDE/tree/SLIDE_v2) branch. \n\nTo cite this code, check the latest release DOI at https://zenodo.org/badge/latestdoi/185216614.\n\n
\n\n\n_Slide_ (simulator for lithium-ion degradation) is a code project mainly written in C++ to do fast simulations of the degradation of lithium-ion batteries.\nSimulating 5000 1C CC cycles should take less than 1 minute; adding a CV phase doubles the calculation time to below 2 minutes. The project uses object-oriented programming in C++; see the documentation for more details. \n\nThe underlying battery model is the Single Particle Model (SPM) with a coupled bulk thermal model. \nA spectral implementation of the SPM in Matlab was developed by Bizeray and Howey and is [available separately on GitHub](https://github.com/davidhowey/Spectral_li-ion_SPM). _Slide_ adds various degradation models on top of the SPM. The equations were taken from the literature and implemented in one large coupled model. Users can easily select which models they want to include in their simulations. They can set the values of the fitting parameters of those degradation models to fit their own data.\n\n_Slide_ is written to behave similarly to a battery tester. It offers functions to load cells with a constant current, a current profile or a constant voltage such that users can program their own degradation procedures. Some standard procedures have already been implemented (for calendar ageing and cycle ageing with regular CCCV cycles or with drive cycles). Also some reference performance tests have already been coded (to simulate the capacity measurement, OCV curves, pulse discharge, etc.). Users can choose to store data points (current, voltage, temperature) at fixed time intervals during the degradation experiments, similar to how a battery tester stores such data.\n\nThe results from the simulations are written to csv files. Users can write their own code to read and plot these results, but MATLAB-scripts are provided for this too.\n\nDetailed documentation is provided in the pdf documents. The code itself is also extensively documented.\n\nIf you use _Slide_ in your work, please cite our paper:\n\nJ.M. Reniers, G. Mulder, D.A. Howey, ""Review and performance comparison of mechanical-chemical degradation models for lithium-ion batteries"", Journal of The Electrochemical Society, 166(14), A3189, 2019, DOI [10.1149/2.0281914jes](https://doi.org/10.1149/2.0281914jes).\n\nThis code has been developed at the Department of Engineering Science of \nthe University of Oxford. \nFor information about our lithium-ion battery research, visit the [Battery Intelligence Lab](https://howey.eng.ox.ac.uk) website. \n\nFor more information and comments, please contact \n[david.howey@eng.ox.ac.uk](david.howey@eng.ox.ac.uk).\n\n\nRequirements\n============\nYou will need a C++ programming environment to edit, compile and run the code.\nEclipse is the environment used to develop the code, but other environments should work as well.\nYour computer must also have a C++ compiler installed.\nThe code has been tested using g++.\nExtensive guidelines on how to install those programs are provided in the documentation.\n\nTo display the results, various Matlab scripts are provided.\nTo run those, you will need to have Matlab installed.\nThe code has been tested using Matlab R2018a, but should work with other releases with no or minor modifications.
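\n\nIf you would rather inspect the csv output with Python than with the provided Matlab scripts, a minimal sketch (the folder and filename below are assumptions -- check which files your simulation actually wrote):\n\n```python\nimport csv\nfrom pathlib import Path\n\n# Hypothetical results file produced by a degradation simulation.\nresults_file = Path(""results"") / ""cell0_results.csv""\n\nwith results_file.open() as f:\n    rows = [[float(x) for x in line] for line in csv.reader(f) if line]\n\n# Column meanings (current, voltage, temperature, ...) are described in the documentation.\nprint(len(rows), ""data points with"", len(rows[0]), ""columns each"")\n```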
\n\nTo calculate the spatial discretisation, two open-source Matlab functions developed by others are used.\nIf you don\'t change the discretisation, you will not need them.\nIf you do change the discretisation, please read the license files attached to those two functions (\'license chebdif.txt\' and \'lisence cumsummat.txt\').\n\n \nInstallation\n============\n### Option 1 - Downloading a .zip file ###\n[Download a .zip file of the code](https://github.com/Battery-Intelligence-Lab/SLIDE/archive/master.zip)\n\nThen, unzip the folder in a chosen directory on your computer.\n\n### Option 2 - Cloning the repository with Git ###\nTo clone the repository, you will first need to have [Git][6] installed on \nyour computer. Then, navigate to the directory where you want to clone the \nrepository in a terminal, and type:\n```\ngit clone https://github.com/Battery-Intelligence-Lab/SLIDE.git\n```\nThe folder containing all the files should appear in your chosen directory.\n\n\nGetting started\n===============\nDetailed instructions on how to get started are in the documentation.\nYou first have to import the code into your programming environment and make sure the settings are correct (e.g. to allow enough memory for the calculation).\nThen you can open Main.cpp, which implements the main function. In this function you choose what to simulate by uncommenting the case you want to run (and commenting out all other lines). \nIt is recommended to start with the CCCV function, which simulates a few CCCV cycles.\nYou will then have to build (or compile) the code, which might take a while the first time you do this.\nNow you can run the code (either locally in the programming environment or by running the executable which was created by the compiler).\nWhile the simulation is running, csv files with the results are written in one or multiple subfolders.\nWhen the simulation has finished, you can run the corresponding MATLAB-script (e.g. readCCCV.m) to plot the outcomes.\n\nMuch more detailed documentation can be found in the documentation (from \'1 Getting started\' to \'7 appendixes; debugging, basics of C++, object oriented programming\'). These guides are mostly independent of each other, so you don\'t have to read all of them.\nAlso the code itself is extensively commented, so you might not have to read the guides at all.\n\n\nLicense\n=======\nThis open-source C++ and Matlab code is published under the BSD 3-clause License;\nplease read the `LICENSE` file for more information.\n\nThis software also uses the following external libraries: two Matlab functions, developed by others, that produce the spatial discretisation.\n- DMSUITE: see [matlab/license-chebdif.txt](matlab/LICENSE-chebdif.txt) for details. 
\n- Cumsummat: see [matlab/licence-cumsummat.txt](matlab/LICENSE-cumsummat.txt) for details.\n'",",https://zenodo.org/badge/latestdoi/185216614.\n\n,https://doi.org/10.1149/2.0281914jes","2019/05/06, 14:47:40",1633,CUSTOM,229,338,"2023/04/08, 04:07:24",13,9,13,8,200,0,0.0,0.09999999999999998,"2019/05/15, 18:23:16",v1.0.2,0,2,false,,false,true,,,https://github.com/Battery-Intelligence-Lab,https://howey.eng.ox.ac.uk,,,,https://avatars.githubusercontent.com/u/93661605?v=4,,, equiv-circ-model,"An equivalent circuit model for a battery cell, module, and pack.",batterysim,https://github.com/batterysim/equiv-circ-model.git,github,"battery,electric-vehicles,equivalent-circuit-model,battery-cell",Battery,"2023/08/03, 03:48:26",72,0,18,true,Python,,batterysim,Python,,"b'# Equivalent circuit model\n\nThis repository contains Python code for running an equivalent circuit model (ECM) developed for a 2013 Nissan Leaf battery cell and module. The `ecm` package contains source code for the equivalent circuit model while the `examples` folder provides scripts for running the ECM for a battery cell, module, and pack. Model parameters are determined from hybrid pulse power characterization (HPPC) and discharge tests conducted at ORNL. The battery cell and module specifications were provided by NREL.\n\n## Installation\n\nThe [Anaconda](https://www.anaconda.com) or [Miniconda](https://conda.io/miniconda.html) distribution of Python 3 is recommended for this project. The `ecm` package requires Matplotlib, NumPy, Pandas, and SciPy.\n\nThe simplest way to install the ECM package is with pip. This can be done from within the equiv-circ-model directory:\n\n```bash\n# Install the ecm package\npip install -e .\n```\n\nA requirements file is provided for running the ECM in a virtual environment using pip:\n\n```bash\n# Create a new virtual environment\npython -m venv env\n\n# Activate the environment\nsource env/bin/activate\n\n# Install packages needed for the ECM\npip install -r requirements.txt\n\n# From within equiv-circ-model directory, install the ecm package\npip install -e .\n\n# Deactivate the environment\ndeactivate\n\n# Remove the environment by deleting the `env` folder\n```\n\nAn environment yaml file is also provided for running the ECM in a conda environment:\n\n```bash\n# Create a new conda environment and install packages needed for the ECM\nconda env create -f environment.yml\n\n# Activate the environment\nconda activate ecm\n\n# From within equiv-circ-model directory, install the ecm package\npip install -e .\n\n# Deactivate the environment\nconda deactivate\n\n# Remove the environment and its installed packages\nconda env remove -n ecm\n```\n\n## Usage\n\nExamples of using the `ecm` package are provided in the `examples` folder. Examples are organized into subfolders for battery cell and battery module models. 
From within the subfolder, each script can be run from the command line, for example:\n\n```bash\n# View plots of the battery cell HPPC data\ncd ~/equiv-circ-model/examples/cell\npython view_hppc_data.py\n\n# Run the ECM for a battery cell and compare to HPPC battery cell data\ncd ~/equiv-circ-model/examples/cell\npython hppc_vt.py\n```\n\n## Project structure\n\n**ecm** - Python package containing source code for the equivalent circuit model (ECM).\n\n**examples/cell** - Example scripts for running the battery cell ECM.\n\n**examples/cell-to-module** - Examples of using a cell model to predict a battery module.\n\n**examples/cell-to-pack** - Examples of using a cell model to predict a battery pack.\n\n**examples/data** - Data files from 2013 Nissan Leaf battery cell and module tests. This data is used for developing and validating the ECM.\n\n**examples/module** - Example scripts for running a battery module ECM.\n\n**examples/module-to-pack** - Examples of using a module model to predict a battery pack.\n\n## Contributing\n\nComments, suggestions, and other feedback can be submitted on the [Issues](https://github.com/batterysim/equiv-circ-model/issues) page.\n\n## License\n\nThis code is available under the MIT License - see the [LICENSE](LICENSE) file for more information.\n'",,"2019/05/04, 19:02:30",1635,MIT,1,59,"2023/08/03, 03:48:54",4,0,3,2,83,0,0,0.0,,,0,1,false,,false,false,,,https://github.com/batterysim,,,,,https://avatars.githubusercontent.com/u/26285431?v=4,,, long-live-the-battery,Predicting total battery cycle life time with machine learning.,dsr-18,https://github.com/dsr-18/long-live-the-battery.git,github,,Battery,"2019/07/02, 10:17:50",80,0,14,true,Jupyter Notebook,,dsr-18,"Jupyter Notebook,Python,JavaScript,HTML,Shell",,"b'# long-live-the-battery\n\nPredicting total battery cycle life time with [TensorFlow 2](https://www.tensorflow.org/beta). We\'re going to publish a blog post describing the project in depth soon.\n\nThis project is based on the work done in the paper [\'Data-driven prediction of battery cycle life before capacity degradation\'](https://www.nature.com/articles/s41560-019-0356-8) by K.A. Severson, P.M. Attia, et al., and uses the corresponding data set. The original instructions for how to load the data can be found [here](https://github.com/rdbraatz/data-driven-prediction-of-battery-cycle-life-before-capacity-degradation).\n\n\n## Setup\n\nWe recommend setting up a virtual environment using a tool like [Virtualenv](https://virtualenv.pypa.io/en/latest/).\n\nClone this repo\n```\ngit clone https://github.com/dsr-18/long-live-the-battery\n```\nand install dependencies.\n```\npip install -r requirements.txt\n```\n\nYou can download the processed dataset [here](https://github.com/dsr-18/long-live-the-battery-dataset) and jump to *Train Model*. If you want to reproduce the data preprocessing step, go ahead with *Generate Local Data*.\n\n\n## Generate Local Data\n\nBefore running the model, generate local data:\n\n1. Download the original three batch files [here](https://data.matr.io/1/projects/5c48dd2bc625d700019f3204) into a data directory like this:\n```\nlong-live-the-battery\n\xe2\x94\x9c\xe2\x94\x80\xe2\x94\x80 data\n| \xe2\x94\x9c\xe2\x94\x80\xe2\x94\x80 2017-05-12_batchdata_updated_struct_errorcorrect.mat\n| \xe2\x94\x9c\xe2\x94\x80\xe2\x94\x80 2018-04-12_batchdata_updated_struct_errorcorrect.mat\n| \xe2\x94\x94\xe2\x94\x80\xe2\x94\x80 2017-06-30_batchdata_updated_struct_errorcorrect.mat\n```\n2. Make sure *data* is otherwise empty. 
Then from the base directory run\n```\npython -m trainer.data_preprocessing\n```\nto create a *processed_data.pkl* file.\n\n3. Then run\n```\npython -m trainer.data_pipeline\n```\nto recreate the *.tfrecord* files.\n\n\n## Train Model\n\nMake sure the *.tfrecord* files are saved in this structure:\n```\nlong-live-the-battery\n\xe2\x94\x9c\xe2\x94\x80\xe2\x94\x80 data\n| \xe2\x94\x9c\xe2\x94\x80\xe2\x94\x80 tfrecords\n| | \xe2\x94\x9c\xe2\x94\x80\xe2\x94\x80 scaling_factors.csv\n| | \xe2\x94\x9c\xe2\x94\x80\xe2\x94\x80 train\n| | \xe2\x94\x9c\xe2\x94\x80\xe2\x94\x80 test\n| | \xe2\x94\x94\xe2\x94\x80\xe2\x94\x80 secondary_test\n```\nTo start training the model, run this command from the base directory:\n```\npython -m trainer.task\n```\nThe default is set to three epochs which is okay for testing, but too short to train a reasonably fit model. Use the above command with the *--num-epochs* flag to set a higher number. To get a list of other parameters, use the *--help* flag.\n\nTo run the model in Google Cloud Platform (team members only):\n1. Make sure you have access to the ion-age project.\n2. Install [GCloud SDK](https://cloud.google.com/sdk/docs/). \n3. Run from base directory (with -h to see configurable options):\n```\n./train.sh\n```\nFollow the output URL to stream logs.\n\n\n## Predict\n\nEvery training run saves a TensorBoard logfile and at least one model checkpoint by default in the *Graph* directory. One way to test your model\'s performance without writing your own [TensorFlow Keras code](https://www.tensorflow.org/beta/guide/keras/training_and_evaluation) is to start a local Flask server that serves predictions:\n1. Copy any model from a *checkpoints* directory within *Graph* to the *server* directory.\n2. Rename that model folder to *saved_model*.\n3. Create sample data to predict on:\n```\npython generate_json_samples.py\n```\n4. Go into the server and start it from there:\n```\ncd server\npython server.py\n```\n5. Now visit ""localhost:5000"" in your browser and you should see the start page with a prompt to upload battery data in json-format. The site also lets you select the sample data randomly.\n'",,"2019/05/16, 13:14:55",1623,MIT,0,360,"2023/03/25, 00:14:51",6,37,38,2,214,2,0.1,0.4593639575971732,,,0,3,false,,false,false,,,https://github.com/dsr-18,,,,,https://avatars.githubusercontent.com/u/50706126?v=4,,, ISEAFramework,Allows coupled electrical-thermal simulations of single storage systems (e.g. lithium ion batteries or double layer capacitors) or complete storage system packs.,isea,,custom,,Battery,,,,,,,,,,https://git.rwth-aachen.de/isea/framework,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,, Ampere,Advanced Model Package for ElectRochemical Experiments.,nealde,https://github.com/nealde/Ampere.git,github,,Battery,"2021/04/22, 16:47:32",20,3,2,false,Jupyter Notebook,,,"Jupyter Notebook,C,Python",,"b""\n\n\n\n\nAmpere - Advanced Model Package for ElectRochemical Experiments\n------------\n\n`Ampere` is a Python module for working with battery models.\n\nUsing a [scikit-learn-like API](https://arxiv.org/abs/1309.0238), we hope to make visualizing, fitting, and analyzing impedance spectra more intuitive and reproducible.\n\nAmpere is currently in the alpha phase and new features are rapidly being added.\nIf you have a feature request or find a bug, please feel free to [file an issue](https://github.com/nealde/Ampere/issues) or, better yet, make the code improvements and [submit a pull request](https://help.github.com/articles/creating-a-pull-request-from-a-fork/)! 
The goal is to build an open-source tool that the entire electrochemical community can use and improve.\n\nAmpere currently provides:\n- A simple API for fitting, predicting, and plotting discharge curves\n- A simple API for generating data, or fitting with arbitrary charge / discharge patterns.\n\n\n## Installation\n### Dependencies\n\nAmpere requires:\n\n- Python (>=3.5)\n- SciPy (>=1.0)\n- NumPy (>=1.14)\n- Matplotlib (>=2.0)\n- Cython (>=0.29)\n\n\nSeveral example notebooks are provided in the examples/ directory. Opening these will require Jupyter notebook or Jupyter lab.\n\n### User Installation\n\nThe easiest way to install Ampere is using pip:\n\n`pip install ampere`\n\n\nHowever, on Windows it depends on Cython and the Microsoft C++ libraries in order to install. Those should be added as follows:\n\n`pip install --upgrade cython setuptools`\n\nfollow [these instructions](https://docs.microsoft.com/en-us/answers/questions/136595/error-microsoft-visual-c-140-or-greater-is-require.html) to install the proper C++ libraries using Microsoft tools.\n\nThat may or may not work, depending upon your system. An alternative method of installation that works is:\n\n`git clone https://github.com/nealde/ampere`\n\nI've recently added the Cython-generated C files back to the repo, so it may be as simple as:\n\n`cd ampere`\n`python setup.py install`\n\nHowever, if that doesn't work, the following will rebuild the files:\n\n`cd ampere/models/P2D`\n\n`python setup.py build_ext --inplace`\n\n`cd ../SPM`\n\n`python setup.py build_ext --inplace`\n\nThis will build the local C code that is needed by the main compiler. Then, you can cd back up to the main folder and\n\n`python setup.py install`\n\nThat will typically work. I'm still working on getting pip installation working, and it will likely require some package modifications, following scikit-learn as a guide.\n\n## Examples and Documentation\n\nExamples and documentation will be provided after my defense, which is set for the end of May.\n\n### On the Horizon\n\n- Currently, all models are solved with Finite Difference discretization. I would love to use some higher-order spatial discretizations.\n- Currently, the results have not been verified with external models. That is still on the to-do list, and to incorporate those values into the test suite would be excellent.\n- Some of my published work regarding surrogate models for solving and fitting will be implemented once they are fully fleshed out.\n\n- Add ability to serialize / deserialize models from disk, to save the result of an optimization\n- add ability to have custom Up / Un functions for different battery chemistries\n- add documentation / fix docstrings to be accurate\n- add Latex equations and node spacings""",",https://arxiv.org/abs/1309.0238","2018/09/28, 19:23:50",1853,MIT,0,32,"2021/04/25, 02:53:24",8,0,3,0,913,2,0,0.0,,,0,1,false,,true,false,"narest-qa/repo12,vduseev/number-encoding,nealde/dash-flask-application",,,,,,,,,, offgridsystems,"Data sheet and assembly manual, component data sheets, busbars and files needed to build no-weld wireless BMS DKblock style battery packs.",offgridsystems,https://github.com/offgridsystems/Documents.git,github,"battery,cell,bms,lithium,wireless,welding,cc1310,powerwall,ev,electric-vehicle,dkblock,no-weld",Battery,"2020/05/18, 19:08:51",35,0,2,false,,,,,,"b'# DKblock documents\nHere is the DK website for more information: https://dkblock922508958.wordpress.com \n\nHere is the ebay store for some parts. 
Let me know at the ""Contact"" tab of Wordpress (or ebay) if there is something else you would like.\nhttps://www.ebay.com/sch/i.html?_from=R40&_trksid=p2380057.m570.l1313.TR0.TRC0.A0.H0.Xdkblock.TRS4&_nkw=dkblock&_sacat=0\n\nAll documents can be opened in OpenOffice format or Adobe Acrobat Reader. Some drawing files are in Alibre Design (for example AD_DRW extension). Please note that if your CAD program cannot read these files, not to worry because most CAD programs can output formats that can be read by other CAD programs, so please ask if you would like another format, we will try to accomodate.\n\n![blocks connected2](https://user-images.githubusercontent.com/6006120/69199353-8ea59f80-0aec-11ea-82b3-e0e048fd2252.png)\n\n\n\nAll DKblock software and hardware is released as open source hardware (OSHW) as defined by the OSHWA: https://www.oshwa.org/definition/ and under the JSON license defined at https://www.json.org/license.html\n'",,"2019/10/10, 21:00:55",1475,MIT,0,94,"2022/10/11, 07:23:44",0,0,2,0,379,0,0,0.10989010989010994,,,0,2,false,,false,false,,,,,,,,,,, 3d_milp,Energy Arbitrage Optimization With Battery Storage.,ElektrikAkar,https://github.com/ElektrikAkar/3d_milp.git,github,,Battery,"2022/07/22, 01:16:19",4,0,1,false,MATLAB,,,MATLAB,,"b'
\n\n# Energy Arbitrage Optimization With Battery Storage: 3D-MILP for Electro-Thermal Performance and Semi-Empirical Aging Models\n\n\nThe tool was created by [Volkan Kumtepeli](https://scholar.google.com/citations?user=Z43mRIsAAAAJ&hl=en) at the Energy Research Institute at Nanyang Technological University\nin collaboration with the Institute for Electrical Energy Storage Technology at the Technical University of Munich.\n\n\n## How to cite: \n\nV. Kumtepeli, H.C. Hesse, M. Schimpe, A. Tripathi, Y. Wang, and A. Jossen,\nEnergy Arbitrage Optimization With Battery Storage: 3D-MILP for Electro-Thermal Performance and Semi-Empirical Aging Models.\nIEEE Access, vol. 8, pp. 204325-204341, 2020. [Online]. Available:\nhttps://doi.org/10.1109/ACCESS.2020.3035504\n\n```\n@article{kumtepeli2020energy,\n title={Energy Arbitrage Optimization With Battery Storage: 3D-MILP for Electro-Thermal Performance and Semi-Empirical Aging Models},\n author={Kumtepeli, Volkan and Hesse, Holger C and Schimpe, Michael and Tripathi, Anshuman and Youyi, Wang and Jossen, Andreas},\n journal={IEEE Access},\n volume={8},\n pages={204325--204341},\n year={2020},\n publisher={IEEE},\n doi={10.1109/ACCESS.2020.3035504},\n ISSN={2169-3536},\n}\n\n```\n\n\n## Dependencies / Requirements: \n\n* [Gurobi](https://www.gurobi.com/) 9.03\n* [YALMIP](https://yalmip.github.io/download/) R20200116\n* MATLAB >=2017a for string operations and >=2019a for the readmatrix function.\n* [Robust Statistical Toolbox](https://github.com/CPernet/Robust_Statistical_Toolbox) (not used but may be necessary for some functions in the RainCloudPlots library) \n* Partially provided external libraries: \n - [RainCloudPlots](https://github.com/RainCloudPlots/RainCloudPlots)\n - [cbrewer](https://www.mathworks.com/matlabcentral/fileexchange/34087-cbrewer-colorbrewer-schemes-for-matlab)\n - [Custom Colormap](https://www.mathworks.com/matlabcentral/fileexchange/69470-custom-colormap)\n - [MATLAB-Dataspace-to-Figure-Units](https://github.com/michellehirsch/MATLAB-Dataspace-to-Figure-Units)\n - [Tight Subplot](https://www.mathworks.com/matlabcentral/fileexchange/27991-tight_subplot-nh-nw-gap-marg_h-marg_w)\n\n\n## How to use: \n\nRun the Optimization_single.m or Optimization_batch.m file. Default settings are given in simulationSettings, which can be called with additional settings.'",,"2020/11/06, 13:20:26",1083,CUSTOM,0,10,"2020/11/19, 13:04:05",0,2,2,0,1070,0,0.0,0.0,,,0,1,false,,false,false,,,,,,,,,,, LIONSIMBA,"A Matlab framework based on a finite volume model suitable for Li-ion battery design, simulation, and control.",lionsimbatoolbox,https://github.com/lionsimbatoolbox/LIONSIMBA.git,github,,Battery,"2021/05/14, 18:16:06",90,0,23,true,MATLAB,,,MATLAB,,"b'\n# LIONSIMBA - Lithium-ION SIMulation BAttery Toolbox\n\n\n\n A Matlab framework based on a finite volume model suitable for Li-ion battery design, simulation, and control\n\n
\n\n## Official Web Page\n\nConnect to the official web page to get the latest news:\n\n[http://sisdin.unipv.it/labsisdin/lionsimba.php](http://sisdin.unipv.it/labsisdin/lionsimba.php)\n\n-----------------------------------------------------------------\n## Installation and requirements\n\nPlease refer to the [Wiki](https://github.com/lionsimbatoolbox/LIONSIMBA/wiki) of this project for information about installation and requirements.\n\n-----------------------------------------------------------------\n## Authors\n\n+ [Marcello Torchio](mailto:marcello.torchio01@ateneopv.it)\n+ [Lalo Magni](http://sisdin.unipv.it/labsisdin/people/maglal/maglal.php)\n+ [Bhushan R. Gopaluni](http://www.chbe.ubc.ca/profile/bhushan-gopaluni/)\n+ [Richard D. Braatz](https://cheme.mit.edu/profile/richard-d-braatz/)\n+ [Davide M. Raimondo](http://sisdin.unipv.it/labsisdin/raimondo/raimondo.php)\n\n## Contributors to LIONSIMBA 2.0 besides the previous authors\n+ [Ian Campbell](http://www.imperial.ac.uk/electrochem-sci-eng/people/)\n+ [Krishnakumar Gopalakrishnan](https://www.edx.org/bio/krishnakumar-gopalakrishnan)\n+ [Gregory Offer](https://www.imperial.ac.uk/people/gregory.offer)\n\n\n## Acknowledgments\n\n+ [Andrea Pozzi](https://scholar.google.com/citations?user=RLCmMM8AAAAJ) for his extensive LIONSIMBA 2.0 beta testing and continuous support\n+ **Alessio Stefanini** for his contribution to the maintenance of the LIONSIMBA 2.0 user\'s guide\n-----------------------------------------------------------------\n## Citations\n\nIf the LIONSIMBA Toolbox is used for research purposes, the authors would like to have it mentioned. The necessary information can be found below:\n+ **Title:** LIONSIMBA: A Matlab framework based on a finite volume model suitable for Li-ion battery design, simulation, and control\n\n+ **Journal:** Journal of The Electrochemical Society\n\n+ **Volume:** 163\n\n+ **Number:** 7\n\n+ **Pages:** A1192-A1205\n\n+ **Year:** 2016\n\n##### **Download here the [BibTeX](http://sisdin.unipv.it/labsisdin/mtorchio/lionsimba.bib) file**\n\n##### **Read the Journal paper** [here](https://web.mit.edu/braatzgroup/Torchio_JElectSoc_2016.pdf)\n\n### Typos\n\nThe equation for the ionic flux reported in Table I and Table III has a typo in the formulation. The version in the paper reports Faraday\'s constant (F) in the denominator and the universal gas constant (R) in the numerator; these should be inverted, i.e., the correct equation has F in the numerator and R in the denominator.\n\nThanks to [sarasyha](https://github.com/sarasyha) for pointing it out in [Issue#11](https://github.com/lionsimbatoolbox/LIONSIMBA/issues/11)\n\n-----------------------------------------------------------------\n\n## How to start using LIONSIMBA\n\nYou can get LIONSIMBA in two ways:\n\n### 1 - Download the latest version in zip format\n\n Download the latest zip package from [HERE](https://github.com/lionsimbatoolbox/LIONSIMBA/archive/master.zip)\n\n### 2 - Clone the repository\n ```sh\n$ git clone https://github.com/lionsimbatoolbox/LIONSIMBA.git\n```\n## Bug reports\n\nPlease feel free to use the \'issue\' section on GitHub or write to\n\ndavide (**dot**) raimondo (**at**) unipv (**dot**) it\n\n## Forks\nFeel free to fork the project and modify it at your convenience. 
The framework is continuously under development, and contributions through pull requests are welcome.\n\n-----------------------------------------------------------------\n\n## Changelog\n\n### Last Update 01/19/2020 - V 2.1 Released (Now supports Octave)\n**Major changes**\n+ Fixed bug in the analytical initialisation of the model equations that was not allowing simulations with ageing (issue#6, thanks to mariapaygani for spotting it)\n+ Implemented functions to support execution in Octave\n\n### Last Update 04/02/2018 - V 2.0 Released\n**Major changes**\n+ Constant and variable profile power input mode added\n+ Analytical initialisation of the model equations\n+ Added thermal lumped model\n+ Added stoichiometry indices for SOC calculation\n+ Added possibility to initialize cell SOC through Parameters_init call\n+ Added solid phase diffusion scheme based on spectral methods (provides proper results, but still in beta version)\n\n\n**Minor changes**\n+ General code review, polishing and variable renaming\n+ Added possibility to choose the interpolation scheme at the control volume edges\n+ Normalized the finite-difference numerical scheme for the solid phase diffusion (it reduces numerical inaccuracies)\n\n**Known bugs/issues**\n+ Thermal diffusivities are different when considering the thermal-enabled or isothermal scenario\n+ SOC initialization through initial cell (dis)charge and through Parameters_init leads to different results due to numerical inaccuracies\n\n### Last Update 06/24/2017 - V 1.024 Released\n\n+ Feedback-based custom current profile\n+ New examples\n+ Handling of input current discontinuities\n+ Minor changes\n\n\n### Last Update 04/04/2017\n\n+ Minor fixes and bug corrections (thanks to Jeesoon Choi for pointing the bugs out)\n\n### Last Update 01/03/2017 - V 1.023 Released\n\n+ The code has been reorganized and some functions have been modularized for better maintenance.\n+ Added support in the user\'s guide for the installation and configuration of the SUNDIALS Matlab interface.\n\n### Last Update 09/23/2016 - V 1.022 Released\n\n+ Added support for an analytical Jacobian. LIONSIMBA is now able to automatically derive the analytical form of the Jacobian describing the P2D dynamics. This is then exploited by the integration process to speed up the resolution of the DAEs. (Thanks to Dr. Sergio Lucia and Prof. 
Rolf Findeisen for pointing out to us the automatic differentiation provided by the [CasADi](https://github.com/casadi/casadi/wiki) toolbox)\n+ Minor fixes in the examples.\n\n### 08/27/2016\n\n+ Fixed bug in multicell simulation (Thanks to Chintan Pathak for pointing out the bug)\n\n### V 1.021b\n+ Fixed SOC calculation bug for Fick\'s diffusion\n+ Minor fixes\n'",,"2016/08/27, 07:55:59",2615,MIT,0,68,"2023/04/17, 17:08:53",4,0,22,2,191,0,0,0.0,"2020/01/19, 20:56:52",2.1,0,1,false,,false,false,,,,,,,,,,, emobpy,An open tool for creating battery-electric vehicle time series from empirical data.,diw-evu/emobpy,https://gitlab.com/diw-evu/emobpy/emobpy,gitlab,,Battery,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,, BattMo,The Battery Modelling Toolbox (BattMo) is a resource for continuum modelling of electrochemical devices in MATLAB.,BattMoTeam,https://github.com/BattMoTeam/BattMo.git,github,,Battery,"2023/10/16, 09:32:25",22,0,7,true,MATLAB,BatteryModel.com,BattMoTeam,"MATLAB,HTML,Python,M",,"b'========================================================================\n BattMo is a framework for continuum modelling of electrochemical devices.\n========================================================================\n\n.. image:: https://zenodo.org/badge/410005581.svg\n :target: https://zenodo.org/badge/latestdoi/410005581\n\nThe Battery Modelling Toolbox (**BattMo**) is a resource for continuum modelling of electrochemical devices in MATLAB. The initial development features a pseudo X-dimensional (PXD) framework for the Doyle-Fuller-Newman model of lithium-ion battery cells. However, the development plan for BattMo includes extensions to other battery chemistries (e.g. metal-air) and eventually hydrogen systems (i.e. electrolyzers and fuel cells).\n\n**BattMo** offers users a flexible framework for building fully coupled electrochemical-thermal simulations of electrochemical devices using 1D, 2D, or 3D geometries. **BattMo** is implemented in MATLAB and builds on the open-source MATLAB Reservoir Simulation Toolbox (MRST) developed at SINTEF. MRST provides a solid basis for finite volume mesh generation of complex geometries and advanced numerical solvers that enable fast simulations for large systems.\n\nFor the latest information, including video tutorials and a project gallery, please visit the project webpage:\n`https://batterymodel.com `_\n\nWe are also working on a `documentation webpage `_. Even though it is now at a\npreliminary stage, you may be interested in having a look at it.\n\nInstallation\n------------\n\nBefore cloning this repository you must make sure you have **Git LFS** installed. See `https://git-lfs.com` for instructions on downloading and installation. \n\nBattMo is based on `MRST `_, which provides a general unstructured grid format,\ngeneric MATLAB automatic differentiation tools and Newton solvers. The MRST source code will be installed directly via\n**git submodules**. To install BattMo, you therefore have to clone this repository with the submodule option\n``--recurse-submodules``, as follows:\n\n``git clone --recurse-submodules https://github.com/BattMoTeam/BattMo.git``\n\nThen start MATLAB and, in the directory where you cloned the repository, run:\n\n``startupBattMo``\n\nYou can check that your installation is set up correctly by running one of the example scripts:\n\n``runBattery1D``\n\nIterative solvers\n-----------------\n\nIterative solvers are needed to solve large problems with many degrees of freedom. 
The **open source** version from **2012** of the `AGMG `_ iterative solver is provided as a `submodule\n`_. We plan to integrate newer open source iterative solvers such as `AMGCL\n`_.\n\nTutorials\n---------\n\nTutorials are presented in the `documentation `_ (in progress ...)\n\nNaming Conventions (TBC)\n------------------------\nClass names are nouns in UpperCamelCase. \nFunction names are verbs or phrases in lowerCamelCase. \nInstance names are nouns in lower_snake_case. \nCommon variable names are represented by Latin letters (case set according to convention) or spelled-out lowercase Greek letters (e.g. phi). \nOther variable names may be nouns in lowerCamelCase. \n\nContributors, in alphabetical order\n-----------------------------------\n\n* Dr. Simon Clark, SINTEF Industry \n* Dr. Mike Gerhardt, SINTEF Industry \n* Dr. Halvor M\xc3\xb8ll Nilsen, SINTEF Digital\n* Dr. Xavier Raynaud, SINTEF Digital \n* Dr. Roberto Scipioni, SINTEF Industry \n\nAcknowledgements\n-----------------\nBattMo has received funding from the European Union\xe2\x80\x99s Horizon 2020 innovation program under grant agreement numbers:\n\n* 875527 HYDRA \n* 957189 BIG-MAP \n'",",https://zenodo.org/badge/latestdoi/410005581\n\nThe","2021/09/24, 15:03:02",761,GPL-3.0,642,2045,"2023/10/16, 09:33:20",5,3,9,3,9,0,0.0,0.13936557462298493,"2023/03/05, 21:48:28",v0.2.1-beta,0,5,false,,false,false,,,https://github.com/BattMoTeam,www.batterymodel.com,Norway,,,https://avatars.githubusercontent.com/u/91334501?v=4,,, LiBRA,Create reduced-order state-space models for lithium-ion batteries utilising realisation algorithms.,BradyPlanden,https://github.com/BradyPlanden/LiiBRA.jl.git,github,"battery,battery-models,bms,control-systems,control,lithium-ion,reduced-order-models,julia",Battery,"2023/05/12, 15:37:18",22,0,15,true,Julia,,,Julia,,"b'# Lithium-ion Battery Realisation Algorithms (LiiBRA)\n\n[![Build Status](https://github.com/BradyPlanden/LiiBRA.jl/workflows/CI/badge.svg)](https://github.com/BradyPlanden/LiiBRA.jl/actions)\n[![SciML Code Style](https://img.shields.io/static/v1?label=code%20style&message=SciML&color=9558b2&labelColor=389826)](https://github.com/SciML/SciMLStyle)\n[![ColPrac: Contributor\'s Guide on Collaborative Practices for Community Packages](https://img.shields.io/badge/ColPrac-Contributor\'s%20Guide-blueviolet)](https://github.com/SciML/ColPrac)\n[![DOI:10.1016/j.est.2022.105637](http://img.shields.io/badge/DOI-10.1016/j.est.2022.105637-blue.svg)](https://doi.org/10.1016/j.est.2022.105637)\n\n

\n\n

\n\n## Create and Simulate Reduced Order Lithium-Ion Battery Models\nLiiBRA provides an open-source implementation of realisation algorithms used for generating reduced-order state-space models. This work aims to develop real-time capable physics-informed models for deployment onto embedded hardware. LiiBRA provides capabilities in offline and online model creation, as well as a framework for lithium-ion degradation predictions. For more information on LiiBRA and the computationally-informed discrete realisation algorithm (CI-DRA), please refer to the publication above.\n\nFurther examples are located in the ""examples"" directory. Please open an [issue] if you have requests or ideas for additional examples.\n\nInstall (Julia 1.7 and later)\n-----------------------------\n\n```julia\n(v1.7) pkg> add LiiBRA\n```\n\n(Type `]` to enter package mode.)\n\n## Example Usage\n\n```julia\nusing LiiBRA, Plots\n```\n\nSetup:\n```julia\nSₑ = 4\nSₛ = 2\nCell = Construct(""LG M50"")\nSpatial!(Cell, Sₑ, Sₛ)\nŜ = collect(1.0:-0.25:0.0)\nSOC = 0.75\nCell.Const.T = 298.15\n```\n\nRealisation:\n```julia\nA,B,C,D = Realise(Cell,Ŝ);\n```\n\nHPPC Simulation:\n```julia\nResults = HPPC(Cell,Ŝ,SOC,4.0,-3.0,A,B,C,D);\n```\n\nPlotting Results:\n```julia\nplotly()\nplot(Results.t, Results.Cell_V;\n legend=:topright,\n color=:blue,\n bottom_margin=5Plots.mm,\n left_margin = 5Plots.mm,\n right_margin = 15Plots.mm,\n ylabel = ""Terminal Voltage (V)"",\n xlabel = ""Time (s)"",\n title=""HPPC Voltage"",\n label=""Voltage"",\n size=(1280,720)\n )\n```\n\n

\n\n

\n\n```julia\nplot(Results.t, Results.Ce;\n legend=:topright,\n bottom_margin=5Plots.mm, \n left_margin = 5Plots.mm, \n right_margin = 15Plots.mm, \n ylabel = ""Electrolyte Concen. (mol/m\xc2\xb3)"", \n xlabel = ""Time (s)"",\n title=""Electrolyte Concentration"",\n label=[""Neg. Separator Interface"" ""Neg. Current Collector"" ""Pos. Current Collector"" ""Pos. Separator Interface""], \n size=(1280,720)\n )\n```\n\n

\n\n

\n\n```julia\nplot(Results.t, Results.Cse\xe2\x82\x9a;\n legend=:topright,\n bottom_margin=5Plots.mm, \n left_margin = 5Plots.mm, \n right_margin = 15Plots.mm, \n ylabel = ""Concentration (mol/m\xc2\xb3)"", \n xlabel = ""Time (s)"",\n title=""Positive Electrode Concentration"",\n label=[""Current Collector"" ""Separator Interface""], \n size=(1280,720)\n )\n```\n\n

\n\n

\n\n```julia\nplot(Results.t, Results.Cse\xe2\x82\x99;\n legend=:topright,\n bottom_margin=5Plots.mm, \n left_margin = 5Plots.mm, \n right_margin = 15Plots.mm, \n ylabel = ""Concentration (mol/m\xc2\xb3)"", \n xlabel = ""Time [s]"", \n title=""Negative Electrode Concentration"",\n label=[""Current Collector"" ""Separator Interface""],\n size=(1280,720)\n )\n```\n\n

\n\n

\n\n\n## Bug Tracking\n\nPlease report any issues using the Github [issue tracker]. All feedback is welcome.\n\n[issue tracker]: https://github.com/BradyPlanden/LiiBRA/issues\n[issue]: https://github.com/BradyPlanden/LiiBRA/issues\n'",",https://doi.org/10.1016/j.est.2022.105637","2020/11/04, 14:44:10",1085,MIT,26,261,"2023/05/12, 15:50:44",4,17,30,18,166,0,0.0,0.008968609865470878,"2023/01/10, 16:17:53",v0.3.3,0,3,false,,false,false,,,,,,,,,,, PyBOP,Provides a comprehensive suite of tools for parameterisation and optimisation of battery models.,pybop-team,https://github.com/pybop-team/PyBOP.git,github,,Battery,"2023/10/25, 16:12:20",14,0,14,true,Python,,pybop-team,Python,,"b'
Python Battery Optimisation and Parameterisation

(badge table omitted)
\n\n\n## PyBOP\nPyBOP provides a comprehensive suite of tools for parameterisation and optimisation of battery models. It aims to implement Bayesian and frequentist techniques with example workflows to guide the user. PyBOP can be applied to parameterise a wide range of battery models, including the electrochemical and equivalent circuit models available in [PyBaMM](https://pybamm.org/). A major emphasis in PyBOP is understandable and actionable diagnostics for the user, while still providing extensibility for advanced probabilistic methods. By building on the state-of-the-art battery models and leveraging Python\'s accessibility, PyBOP enables agile and robust parameterisation and optimisation.\n\nThe figure below gives PyBOP\'s current conceptual structure. The living software specification of PyBOP can be found [here](https://github.com/pybop-team/software-spec). This package is under active development, expect API (Application Programming Interface) evolution with releases.\n\n\n

*(Figure: PyBOP conceptual structure)*

\n\n\n## Getting Started\n\n\n### Prerequisites\nTo use and/or contribute to PyBOP, you must first install Python 3 (specifically, 3.8-3.11). For example, on a Debian-based distribution (Debian, Ubuntu - including via WSL, Linux Mint), open a terminal and enter:\n\n```bash\nsudo apt update\nsudo apt install python3 python3-virtualenv\n```\n\nFor further information, please refer to the similar [installation instructions for PyBaMM](https://docs.pybamm.org/en/latest/source/user_guide/installation/GNU-linux.html).\n\n### Installation\n\nCreate a virtual environment called `pybop-env` within your current directory using:\n\n```bash\nvirtualenv pybop-env\n```\n\nActivate the environment with:\n\n```bash\nsource pybop-env/bin/activate\n```\n\nYou can check which version of python is installed within the virtual environment by typing:\n\n```bash\npython --version\n```\n\nLater, you can deactivate the environment and go back to your original system using:\n\n```bash\ndeactivate\n```\n\nNote that there are alternative packages that can be used to create and manage [virtual environments](https://realpython.com/python-virtual-environments-a-primer/), for example [pyenv](https://github.com/pyenv/pyenv#installation) and [pyenv-virtualenv](https://github.com/pyenv/pyenv-virtualenv#installation). In this case, follow the instructions to install these packages and then to create, activate and deactivate a virtual environment, use:\n\n```bash\npyenv virtualenv pybop-env\npyenv activate pybop-env\npyenv deactivate\n```\n\nWithin your virtual environment, install the `develop` branch of PyBOP:\n\n```bash\npip install git+https://github.com/pybop-team/PyBOP.git@develop\n```\n\nTo alternatively install PyBOP from a local directory, use the following template, substituting in the relevant path:\n\n```bash\npip install -e ""PATH_TO_PYBOP""\n```\n\nNow, with PyBOP installed in your virtual environment, you can run Python scripts that import and use the functionality of this package.\n\n\n### Usage\nPyBOP has two classes of intended use cases:\n1. parameter estimation from battery test data\n2. design optimisation subject to battery manufacturing/usage constraints\n\nThese classes encompass a wide variety of optimisation problems, which depend on the choice of battery model, the available data and/or the choice of design parameters.\n\n### Parameter estimation\nThe example below shows a simple fitting routine that starts by generating synthetic data from a single particle model with modified parameter values. An RMSE cost function using the terminal voltage as the optimised signal is completed to determine the unknown parameter values. 
First, the synthetic data is generated:\n\n```python\nimport pybop\nimport pybamm\nimport pandas as pd\nimport numpy as np\n\ndef getdata(x0):\n model = pybamm.lithium_ion.SPM()\n params = model.default_parameter_values\n\n params.update(\n {\n ""Negative electrode active material volume fraction"": x0[0],\n ""Positive electrode active material volume fraction"": x0[1],\n }\n )\n experiment = pybamm.Experiment(\n [\n (\n ""Discharge at 2C for 5 minutes (1 second period)"",\n ""Rest for 2 minutes (1 second period)"",\n ""Charge at 1C for 5 minutes (1 second period)"",\n ""Rest for 2 minutes (1 second period)"",\n ),\n ]\n * 2\n )\n sim = pybamm.Simulation(model, experiment=experiment, parameter_values=params)\n return sim.solve()\n\n\n# Form observations\nx0 = np.array([0.55, 0.63])\nsolution = getdata(x0)\n```\nNext, the observed variables are defined, with the model construction and parameter definitions following. Finally, the parameterisation class is constructed and parameter fitting is completed. \n```python\nobservations = [\n pybop.Observed(""Time [s]"", solution[""Time [s]""].data),\n pybop.Observed(""Current function [A]"", solution[""Current [A]""].data),\n pybop.Observed(""Voltage [V]"", solution[""Terminal voltage [V]""].data),\n]\n\n# Define model\nmodel = pybop.models.lithium_ion.SPM()\nmodel.parameter_set = model.pybamm_model.default_parameter_values\n\n# Fitting parameters\nparams = [\n pybop.Parameter(\n ""Negative electrode active material volume fraction"",\n prior=pybop.Gaussian(0.5, 0.05),\n bounds=[0.35, 0.75],\n ),\n pybop.Parameter(\n ""Positive electrode active material volume fraction"",\n prior=pybop.Gaussian(0.65, 0.05),\n bounds=[0.45, 0.85],\n ),\n]\n\nparameterisation = pybop.Parameterisation(\n model, observations=observations, fit_parameters=params\n)\n\n# get RMSE estimate using NLOpt\nresults, last_optim, num_evals = parameterisation.rmse(\n signal=""Voltage [V]"", method=""nlopt"" # results = [0.54452026, 0.63064801]\n)\n```\n\n\n## Code of Conduct\n\nPyBOP aims to foster a broad consortium of developers and users, building on and\nlearning from the success of the [PyBaMM](https://pybamm.org/) community. Our values are:\n\n- Open-source (code and ideas should be shared)\n\n- Inclusivity and fairness (those who want to contribute may do so, and their input is appropriately recognised)\n\n- Inter-operability (aiming for modularity to enable maximum impact and inclusivity)\n\n- User-friendliness (putting user requirements first, thinking about user-assistance & workflows)\n\n\n\n## Contributors \xe2\x9c\xa8\n\nThanks goes to these wonderful people ([emoji key](https://allcontributors.org/docs/en/emoji-key)):\n\n\n\n\n\n \n \n \n \n \n \n \n \n

* Brady Planden: 🚇 ⚠️ 💻 💡
* NicolaCourtier: 💻 👀
* David Howey: 🤔 🧑‍🏫
* Martin Robinson: 🤔 🧑‍🏫
\n\n\n\n\n\n\nThis project follows the [all-contributors](https://github.com/all-contributors/all-contributors) specifications. Contributions of any kind are welcome! See `contributing.md` for ways to get started.'",,"2023/06/13, 10:44:32",134,BSD-3-Clause,141,141,"2023/10/25, 16:12:27",25,27,40,40,0,5,1.0,0.14912280701754388,,,2,4,false,,false,true,,,https://github.com/pybop-team,,,,,https://avatars.githubusercontent.com/u/136452226?v=4,,, OPEM,A modeling tool for evaluating the performance of proton exchange membrane fuel cells.,ECSIM,https://github.com/ECSIM/opem.git,github,"chemistry,pem,fuel-cell,opem,script,python,electrochemistry,dynamic-analysis,simulation,static-analysis,static-analyzer,simulator,physics,physics-simulation",Hydrogen,"2021/08/18, 18:14:20",159,1,34,true,Python,ECSIM,ECSIM,"Python,MATLAB,TeX,Shell,Dockerfile,Batchfile",http://opem.ecsim.ir,"b'
\n\n
\n\n\n \n\n\n\n\n\n\n
\n\t\n----------\t\t\t\t\n\n## Table of Contents\n * [What is PEM?](http://physics.oregonstate.edu/~hetheriw/energy/topics/doc/electrochemistry/fc/basic/The_Polymer_Electrolyte_Fuel_Cell.htm)\t\t\t\t\t\n * [Overview](https://github.com/ECSIM/opem#overview)\n * [Installation](https://github.com/ECSIM/opem/blob/master/INSTALL.md)\n * [Usage](https://github.com/ECSIM/opem#usage)\n \t\t* [Executable](https://github.com/ECSIM/opem#executable)\n \t\t* [Library](https://github.com/ECSIM/opem#library)\t\n \t\t* [Telegram Bot](https://github.com/ECSIM/opem#telegram-bot)\n \t\t* [Try OPEM in Your Browser!](https://github.com/ECSIM/opem#try-opem-in-your-browser)\n \t\t* [MATLAB](https://github.com/ECSIM/opem/tree/master/MATLAB)\n * [Issues & Bug Reports](https://github.com/ECSIM/opem#issues--bug-reports)\n * [Contribution](https://github.com/ECSIM/opem/blob/master/.github/CONTRIBUTING.md)\n * [Todo](https://github.com/ECSIM/opem/blob/master/TODO.md)\n * [Outputs](https://github.com/ECSIM/opem#outputs)\n * [Dependencies](https://github.com/ECSIM/opem#dependencies)\n * [Thanks](https://github.com/ECSIM/opem#thanks)\n * [Reference](https://github.com/ECSIM/opem#reference)\n * [Cite](https://github.com/ECSIM/opem#cite)\n * [Authors](https://github.com/ECSIM/opem/blob/master/AUTHORS.md)\n * [License](https://github.com/ECSIM/opem#license)\n * [Show Your Support](https://github.com/ECSIM/opem#show-your-support)\n * [Changelog](https://github.com/ECSIM/opem/blob/master/CHANGELOG.md)\n * [Code of Conduct](https://github.com/ECSIM/opem/blob/master/.github/CODE_OF_CONDUCT.md)\n\n## Overview\t\t\n\n\n

\nModeling and simulation of proton-exchange membrane fuel cells (PEMFC) can be a powerful tool in the research and development of renewable energy sources. The Open-Source PEMFC Simulation Tool (OPEM) is a modeling tool for evaluating the performance of proton exchange membrane fuel cells. The package is a combination of static and dynamic models that predict the optimum operating parameters of a PEMFC. OPEM contains generic models that accept as input not only the values of operating variables, such as anode and cathode feed-gas pressures and compositions, cell temperature and current density, but also cell parameters, including the active area and membrane thickness. Some of the PEMFC models provided in OPEM focus on one particular FC stack, while others take into account some or all of the auxiliaries, such as reformers. OPEM is a platform for collaborative development of PEMFC models.\n
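As a quick orientation before the detailed usage below, the static and dynamic model families live in separate subpackages; a minimal sketch (both import paths appear verbatim in the Library section of this README):

```python
# Static models live under opem.Static; dynamic models under opem.Dynamic.
from opem.Static.Amphlett import Static_Analysis
from opem.Dynamic.Padulles1 import Dynamic_Analysis
```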

*Fig1. OPEM Block Diagram*

(project badges omitted)
## Usage

### Executable
- Open `CMD` (Windows) or `Terminal` (UNIX)
- Run `python -m opem` or `python3 -m opem` (or run `OPEM.exe`)
- Enter PEM cell parameters (or run standard test vectors); the input keys map onto the library test vectors, as shown in the sketch after this list
	1. Amphlett Static Model

		| Input | Description | Unit |
		| ----- | ----------- | ---- |
		| T | Cell operation temperature | K |
		| PH2 | Partial pressure | atm |
		| PO2 | Partial pressure | atm |
		| i-start | Cell operating current start point | A |
		| i-step | Cell operating current step | A |
		| i-stop | Cell operating current end point | A |
		| A | Active area | cm^2 |
		| l | Membrane thickness | cm |
		| lambda | An adjustable parameter with a min value of 14 and max value of 23 | -- |
		| R | R-Electronic (*Optional) | ohm |
		| JMax | Maximum current density | A/(cm^2) |
		| N | Number of single cells | -- |
		* For more information about this model visit here
	2. Larminie-Dicks Static Model

		| Input | Description | Unit |
		| ----- | ----------- | ---- |
		| E0 | Fuel cell reversible no loss voltage | V |
		| A | The slope of the Tafel line | V |
		| T | Cell operation temperature | K |
		| i-start | Cell operating current start point | A |
		| i-step | Cell operating current step | A |
		| i-stop | Cell operating current end point | A |
		| i_n | Internal current | A |
		| i_0 | Exchange current at which the overvoltage begins to move from zero | A |
		| i_L | Limiting current | A |
		| RM | The membrane and contact resistances | ohm |
		| N | Number of single cells | -- |
		* For more information about this model visit here
	3. Chamberline-Kim Static Model

		| Input | Description | Unit |
		| ----- | ----------- | ---- |
		| E0 | Open circuit voltage | V |
		| b | Tafel's parameter for the oxygen reduction | V |
		| R | Resistance | ohm.cm^2 |
		| i-start | Cell operating current start point | A |
		| i-step | Cell operating current step | A |
		| i-stop | Cell operating current end point | A |
		| A | Active area | cm^2 |
		| m | Diffusion's parameters | V |
		| n | Diffusion's parameters | (A^-1)(cm^2) |
		| N | Number of single cells | -- |
		* For more information about this model visit here
	4. Padulles Dynamic Model I

		| Input | Description | Unit |
		| ----- | ----------- | ---- |
		| E0 | No load voltage | V |
		| T | Fuel cell temperature | K |
		| KH2 | Hydrogen valve constant | kmol.s^(-1).atm^(-1) |
		| KO2 | Oxygen valve constant | kmol.s^(-1).atm^(-1) |
		| tH2 | Hydrogen time constant | s |
		| tO2 | Oxygen time constant | s |
		| B | Activation voltage constant | V |
		| C | Activation constant parameter | A^(-1) |
		| Rint | Fuel cell internal resistance | ohm |
		| rho | Hydrogen-Oxygen flow ratio | -- |
		| qH2 | Molar flow of hydrogen | kmol/s |
		| N0 | Number of cells | -- |
		| i-start | Cell operating current start point | A |
		| i-step | Cell operating current step | A |
		| i-stop | Cell operating current end point | A |
		* For more information about this model visit here
	5. Padulles Dynamic Model II

		| Input | Description | Unit |
		| ----- | ----------- | ---- |
		| E0 | No load voltage | V |
		| T | Fuel cell temperature | K |
		| KH2 | Hydrogen valve constant | kmol.s^(-1).atm^(-1) |
		| KH2O | Water valve constant | kmol.s^(-1).atm^(-1) |
		| KO2 | Oxygen valve constant | kmol.s^(-1).atm^(-1) |
		| tH2 | Hydrogen time constant | s |
		| tH2O | Water time constant | s |
		| tO2 | Oxygen time constant | s |
		| B | Activation voltage constant | V |
		| C | Activation constant parameter | A^(-1) |
		| Rint | Fuel cell internal resistance | ohm |
		| rho | Hydrogen-Oxygen flow ratio | -- |
		| qH2 | Molar flow of hydrogen | kmol/s |
		| N0 | Number of cells | -- |
		| i-start | Cell operating current start point | A |
		| i-step | Cell operating current step | A |
		| i-stop | Cell operating current end point | A |
		* For more information about this model visit here
	6. Padulles-Hauer Dynamic Model

		| Input | Description | Unit |
		| ----- | ----------- | ---- |
		| E0 | No load voltage | V |
		| T | Fuel cell temperature | K |
		| KH2 | Hydrogen valve constant | kmol.s^(-1).atm^(-1) |
		| KH2O | Water valve constant | kmol.s^(-1).atm^(-1) |
		| KO2 | Oxygen valve constant | kmol.s^(-1).atm^(-1) |
		| tH2 | Hydrogen time constant | s |
		| tH2O | Water time constant | s |
		| tO2 | Oxygen time constant | s |
		| t1 | Reformer time constant | s |
		| t2 | Reformer time constant | s |
		| B | Activation voltage constant | V |
		| C | Activation constant parameter | A^(-1) |
		| CV | Conversion factor | -- |
		| Rint | Fuel cell internal resistance | ohm |
		| rho | Hydrogen-Oxygen flow ratio | -- |
		| qMethanol | Molar flow of methanol | kmol/s |
		| N0 | Number of cells | -- |
		| i-start | Cell operating current start point | A |
		| i-step | Cell operating current step | A |
		| i-stop | Cell operating current end point | A |
		* For more information about this model visit here
	7. Padulles-Amphlett Dynamic Model

		| Input | Description | Unit |
		| ----- | ----------- | ---- |
		| E0 | No load voltage | V |
		| T | Fuel cell temperature | K |
		| KH2 | Hydrogen valve constant | kmol.s^(-1).atm^(-1) |
		| KH2O | Water valve constant | kmol.s^(-1).atm^(-1) |
		| KO2 | Oxygen valve constant | kmol.s^(-1).atm^(-1) |
		| tH2 | Hydrogen time constant | s |
		| tH2O | Water time constant | s |
		| tO2 | Oxygen time constant | s |
		| t1 | Reformer time constant | s |
		| t2 | Reformer time constant | s |
		| A | Active area | cm^2 |
		| l | Membrane thickness | cm |
		| lambda | An adjustable parameter with a min value of 14 and max value of 23 | -- |
		| R | R-Electronic (*Optional) | ohm |
		| JMax | Maximum current density | A/(cm^2) |
		| CV | Conversion factor | -- |
		| rho | Hydrogen-Oxygen flow ratio | -- |
		| qMethanol | Molar flow of methanol | kmol/s |
		| N0 | Number of cells | -- |
		| i-start | Cell operating current start point | A |
		| i-step | Cell operating current step | A |
		| i-stop | Cell operating current end point | A |
		* For more information about this model visit here
	8. Chakraborty Dynamic Model

		| Input | Description | Unit |
		| ----- | ----------- | ---- |
		| E0 | No load voltage | V |
		| T | Cell operation temperature | K |
		| KH2 | Hydrogen valve constant | kmol.s^(-1).atm^(-1) |
		| KH2O | Water valve constant | kmol.s^(-1).atm^(-1) |
		| KO2 | Oxygen valve constant | kmol.s^(-1).atm^(-1) |
		| rho | Hydrogen-Oxygen flow ratio | -- |
		| Rint | Fuel cell internal resistance | ohm |
		| N0 | Number of cells | -- |
		| u | Fuel utilization ratio | -- |
		| i-start | Cell operating current start point | A |
		| i-step | Cell operating current step | A |
		| i-stop | Cell operating current end point | A |
		* For more information about this model visit here

	- Find your reports in the `Model_Name` folder

	#### Screen Record

	*(screen recording omitted)*
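The input tables above map one-to-one onto the keys of the test vectors passed to the library API in the next section; a minimal sketch using the standard Amphlett test vector from the Library section below:

```python
# Standard Amphlett test vector; each key mirrors one row of the input table above.
amphlett_inputs = {
    'T': 343.15,      # cell operation temperature [K]
    'PH2': 1,         # hydrogen partial pressure [atm]
    'PO2': 1,         # oxygen partial pressure [atm]
    'i-start': 0,     # current sweep start [A]
    'i-step': 0.1,    # current sweep step [A]
    'i-stop': 75,     # current sweep end [A]
    'A': 50.6,        # active area [cm^2]
    'l': 0.0178,      # membrane thickness [cm]
    'lambda': 23,     # adjustable parameter (14 to 23)
    'R': 0,           # optional R-Electronic [ohm]
    'JMax': 1.5,      # maximum current density [A/(cm^2)]
    'N': 1,           # number of single cells
    'Name': 'Amphlett_Test',
}
```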
### Library

1. Amphlett Static Model
	```pycon
	>>> from opem.Static.Amphlett import Static_Analysis
	>>> Test_Vector={""T"": 343.15,""PH2"": 1,""PO2"": 1,""i-start"": 0,""i-stop"": 75,""i-step"": 0.1,""A"": 50.6,""l"": 0.0178,""lambda"": 23,""N"": 1,""R"": 0,""JMax"": 1.5,""Name"": ""Amphlett_Test""}
	>>> data=Static_Analysis(InputMethod=Test_Vector,TestMode=True,PrintMode=False,ReportMode=False)
	```

	| Key | Description | Type |
	| --- | ----------- | ---- |
	| Status | Simulation status | Bool |
	| P | Power | List |
	| I | Cell operating current | List |
	| V | FC voltage | List |
	| EFF | Efficiency | List |
	| Ph | Thermal power | List |
	| V0 | Linear-Apx intercept | Float |
	| K | Linear-Apx slope | Float |
	| Eta_Active | Eta activation | List |
	| Eta_Conc | Eta concentration | List |
	| Eta_Ohmic | Eta ohmic | List |
	| VE | Estimated FC voltage | List |
	- For more information about this model visit here
2. Larminie-Dicks Static Model
	```pycon
	>>> from opem.Static.Larminie_Dicks import Static_Analysis
	>>> Test_Vector = {""A"": 0.06,""E0"": 1.178,""T"": 328.15,""RM"": 0.0018,""i_0"": 0.00654,""i_L"": 100.0,""i_n"": 0.23,""N"": 23,""i-start"": 0.1,""i-stop"": 98,""i-step"": 0.1,""Name"": ""Larminiee_Test""}
	>>> data=Static_Analysis(InputMethod=Test_Vector,TestMode=True,PrintMode=False,ReportMode=False)
	```

	| Key | Description | Type |
	| --- | ----------- | ---- |
	| Status | Simulation status | Bool |
	| P | Power | List |
	| I | Cell operating current | List |
	| V | FC voltage | List |
	| EFF | Efficiency | List |
	| Ph | Thermal power | List |
	| V0 | Linear-Apx intercept | Float |
	| K | Linear-Apx slope | Float |
	| VE | Estimated FC voltage | List |
	- For more information about this model visit here
3. Chamberline-Kim Static Model
	```pycon
	>>> from opem.Static.Chamberline_Kim import Static_Analysis
	>>> Test_Vector = {""A"": 50.0,""E0"": 0.982,""b"": 0.0689,""R"": 0.328,""m"": 0.000125,""n"": 9.45,""N"": 1,""i-start"": 1,""i-stop"": 42.5,""i-step"": 0.1,""Name"": ""Chamberline_Test""}
	>>> data=Static_Analysis(InputMethod=Test_Vector,TestMode=True,PrintMode=False,ReportMode=False)
	```

	| Key | Description | Type |
	| --- | ----------- | ---- |
	| Status | Simulation status | Bool |
	| P | Power | List |
	| I | Cell operating current | List |
	| V | FC voltage | List |
	| EFF | Efficiency | List |
	| Ph | Thermal power | List |
	| V0 | Linear-Apx intercept | Float |
	| K | Linear-Apx slope | Float |
	| VE | Estimated FC voltage | List |
	- For more information about this model visit here
4. Padulles Dynamic Model I
	```pycon
	>>> from opem.Dynamic.Padulles1 import Dynamic_Analysis
	>>> Test_Vector = {""T"": 343,""E0"": 0.6,""N0"": 88,""KO2"": 0.0000211,""KH2"": 0.0000422,""tH2"": 3.37,""tO2"": 6.74,""B"": 0.04777,""C"": 0.0136,""Rint"": 0.00303,""rho"": 1.168,""qH2"": 0.0004,""i-start"": 0,""i-stop"": 100,""i-step"": 0.1,""Name"": ""PadullesI_Test""}
	>>> data=Dynamic_Analysis(InputMethod=Test_Vector,TestMode=True,PrintMode=False,ReportMode=False)
	```

	| Key | Description | Type |
	| --- | ----------- | ---- |
	| Status | Simulation status | Bool |
	| P | Power | List |
	| I | Cell operating current | List |
	| V | FC voltage | List |
	| EFF | Efficiency | List |
	| PO2 | Partial pressure | List |
	| PH2 | Partial pressure | List |
	| Ph | Thermal power | List |
	| V0 | Linear-Apx intercept | Float |
	| K | Linear-Apx slope | Float |
	| VE | Estimated FC voltage | List |
	- For more information about this model visit here
5. Padulles Dynamic Model II
	```pycon
	>>> from opem.Dynamic.Padulles2 import Dynamic_Analysis
	>>> Test_Vector = {""T"": 343,""E0"": 0.6,""N0"": 5,""KO2"": 0.0000211,""KH2"": 0.0000422,""KH2O"": 0.000007716,""tH2"": 3.37,""tO2"": 6.74,""tH2O"": 18.418,""B"": 0.04777,""C"": 0.0136,""Rint"": 0.00303,""rho"": 1.168,""qH2"": 0.0004,""i-start"": 0.1,""i-stop"": 100,""i-step"": 0.1,""Name"": ""Padulles2_Test""}
	>>> data=Dynamic_Analysis(InputMethod=Test_Vector,TestMode=True,PrintMode=False,ReportMode=False)
	```

	| Key | Description | Type |
	| --- | ----------- | ---- |
	| Status | Simulation status | Bool |
	| P | Power | List |
	| I | Cell operating current | List |
	| V | FC voltage | List |
	| EFF | Efficiency | List |
	| PO2 | Partial pressure | List |
	| PH2 | Partial pressure | List |
	| PH2O | Partial pressure | List |
	| Ph | Thermal power | List |
	| V0 | Linear-Apx intercept | Float |
	| K | Linear-Apx slope | Float |
	| VE | Estimated FC voltage | List |
	- For more information about this model visit here
6. Padulles-Hauer Dynamic Model
	```pycon
	>>> from opem.Dynamic.Padulles_Hauer import Dynamic_Analysis
	>>> Test_Vector = {""T"": 343,""E0"": 0.6,""N0"": 5,""KO2"": 0.0000211,""KH2"": 0.0000422,""KH2O"": 0.000007716,""tH2"": 3.37,""tO2"": 6.74,""t1"": 2,""t2"": 2,""tH2O"": 18.418,""B"": 0.04777,""C"": 0.0136,""Rint"": 0.00303,""rho"": 1.168,""qMethanol"": 0.0002,""CV"": 2,""i-start"": 0.1,""i-stop"": 100,""i-step"": 0.1,""Name"": ""Padulles_Hauer_Test""}
	>>> data=Dynamic_Analysis(InputMethod=Test_Vector,TestMode=True,PrintMode=False,ReportMode=False)
	```

	| Key | Description | Type |
	| --- | ----------- | ---- |
	| Status | Simulation status | Bool |
	| P | Power | List |
	| I | Cell operating current | List |
	| V | FC voltage | List |
	| EFF | Efficiency | List |
	| PO2 | Partial pressure | List |
	| PH2 | Partial pressure | List |
	| PH2O | Partial pressure | List |
	| Ph | Thermal power | List |
	| V0 | Linear-Apx intercept | Float |
	| K | Linear-Apx slope | Float |
	| VE | Estimated FC voltage | List |
	- For more information about this model visit here
7. Padulles-Amphlett Dynamic Model
	```pycon
	>>> from opem.Dynamic.Padulles_Amphlett import Dynamic_Analysis
	>>> Test_Vector = {""A"": 50.6,""l"": 0.0178,""lambda"": 23,""JMax"": 1.5,""T"": 343,""N0"": 5,""KO2"": 0.0000211,""KH2"": 0.0000422,""KH2O"": 0.000007716,""tH2"": 3.37,""tO2"": 6.74,""t1"": 2,""t2"": 2,""tH2O"": 18.418,""rho"": 1.168,""qMethanol"": 0.0002,""CV"": 2,""i-start"": 0.1,""i-stop"": 75,""i-step"": 0.1,""Name"": ""Padulles_Amphlett_Test""}
	>>> data=Dynamic_Analysis(InputMethod=Test_Vector,TestMode=True,PrintMode=False,ReportMode=False)
	```

	| Key | Description | Type |
	| --- | ----------- | ---- |
	| Status | Simulation status | Bool |
	| P | Power | List |
	| I | Cell operating current | List |
	| V | FC voltage | List |
	| EFF | Efficiency | List |
	| PO2 | Partial pressure | List |
	| PH2 | Partial pressure | List |
	| PH2O | Partial pressure | List |
	| Ph | Thermal power | List |
	| V0 | Linear-Apx intercept | Float |
	| K | Linear-Apx slope | Float |
	| Eta_Active | Eta activation | List |
	| Eta_Conc | Eta concentration | List |
	| Eta_Ohmic | Eta ohmic | List |
	| VE | Estimated FC voltage | List |
	- For more information about this model visit here

8. Chakraborty Dynamic Model
	```pycon
	>>> from opem.Dynamic.Chakraborty import Dynamic_Analysis
	>>> Test_Vector = {""T"": 1273,""E0"": 0.6,""u"":0.8,""N0"": 1,""R"": 3.28125 * 10**(-3),""KH2O"": 0.000281,""KH2"": 0.000843,""KO2"": 0.00252,""rho"": 1.145,""i-start"": 0.1,""i-stop"": 300,""i-step"": 0.1,""Name"": ""Chakraborty_Test""}
	>>> data=Dynamic_Analysis(InputMethod=Test_Vector,TestMode=True,PrintMode=False,ReportMode=False)
	```

	| Key | Description | Type |
	| --- | ----------- | ---- |
	| Status | Simulation status | Bool |
	| P | Power | List |
	| I | Cell operating current | List |
	| V | FC voltage | List |
	| EFF | Efficiency | List |
	| PO2 | Partial pressure | List |
	| PH2 | Partial pressure | List |
	| PH2O | Partial pressure | List |
	| Ph | Thermal power | List |
	| Nernst Gain | Nernst gain | List |
	| Ohmic Loss | Ohmic loss | List |
	| V0 | Linear-Apx intercept | Float |
	| K | Linear-Apx slope | Float |
	| VE | Estimated FC voltage | List |
	- For more information about this model visit here

	#### Modes

	1. `TestMode` : Activates test mode; data is returned as a `dict` (Default: `False`)
	2. `ReportMode` : Generates reports (`.csv`, `.opem`, `.html`) and prints results in the console (Default: `True`)
	3. `PrintMode` : Controls printing in the console (Default: `True`)

	#### Note

	- Return type: `dict`
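As a minimal sketch of consuming `TestMode` output (the keys follow the output tables above; the specific prints are only illustrative):

```python
# Run the Amphlett model in TestMode and inspect the returned dict.
from opem.Static.Amphlett import Static_Analysis

test_vector = {'T': 343.15, 'PH2': 1, 'PO2': 1, 'i-start': 0, 'i-stop': 75,
               'i-step': 0.1, 'A': 50.6, 'l': 0.0178, 'lambda': 23, 'N': 1,
               'R': 0, 'JMax': 1.5, 'Name': 'Amphlett_Test'}
data = Static_Analysis(InputMethod=test_vector, TestMode=True,
                       PrintMode=False, ReportMode=False)

print(data['Status'])   # simulation status (bool)
print(max(data['P']))   # peak power over the current sweep
print(data['V'][:5])    # first few FC voltage samples
```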
### Telegram Bot
- Send the `/start` command to [OPEM BOT](https://t.me/opembot)
- Choose a model from the menu
- Send your test vector according to the template
- Download your results

### Try OPEM in Your Browser!
OPEM can be used online in interactive Jupyter Notebooks via the Binder service. Try it out now:

[![Binder](https://mybinder.org/badge_logo.svg)](https://mybinder.org/v2/gh/ECSIM/opem/master)

- Check the `.ipynb` files in the `Documents` folder
- Edit and execute each part of the notes, step by step, using the run button in the top panel
- To execute a complete simulation, edit `Test_Vector` in the `Full Run` section

## Issues & Bug Reports

Just fill an issue and describe it. We'll check it ASAP!
Or send an email to [opem@ecsim.ir](mailto:opem@ecsim.ir ""opem@ecsim.ir"").

Gitter is another option:

[![Gitter](https://badges.gitter.im/ECSIM/opem.svg)](https://gitter.im/ECSIM/opem?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge)

## Outputs

1. [HTML](http://www.ecsim.ir/opem/outputs/test.html)
2. [CSV](https://github.com/ECSIM/opem/blob/master/otherfile/test.csv)
3. [OPEM](https://github.com/ECSIM/opem/blob/master/otherfile/test.opem)

## Dependencies

(requirements status badges for the `master` and `develop` branches omitted)

## Thanks

* [Chart.js](https://github.com/chartjs/Chart.js ""Chartjs"")
* [PyInstaller](https://github.com/pyinstaller/pyinstaller)
* [Draw.io](https://www.draw.io/)

## Reference
1- J. C. Amphlett, R. M. Baumert, R. F. Mann, B. A. Peppley, and P. R. Roberge. 1995. ""Performance Modeling of the Ballard Mark IV Solid Polymer Electrolyte Fuel Cell."" J. Electrochem. Soc. (The Electrochemical Society, Inc.) 142 (1): 9-15. doi:10.1149/1.2043959.

2- Jeferson M. Correa, Felix A. Farret, Vladimir A. Popov, Marcelo G. Simoes. 2005. ""Sensitivity Analysis of the Modeling Parameters Used in Simulation of Proton Exchange Membrane Fuel Cells."" IEEE Transactions on Energy Conversion (IEEE) 20 (1): 211-218. doi:10.1109/TEC.2004.842382.

3- Junbom Kim, Seong-Min Lee, Supramaniam Srinivasan, Charles E. Chamberlin. 1995. ""Modeling of Proton Exchange Membrane Fuel Cell Performance with an Empirical Equation."" Journal of The Electrochemical Society (The Electrochemical Society) 142 (8): 2670-2674. doi:10.1149/1.2050072.

4- I. Sadli, P. Thounthong, J.-P. Martin, S. Rael, B. Davat. 2006. ""Behaviour of a PEMFC supplying a low voltage static converter."" Journal of Power Sources (Elsevier) 156: 119-125. doi:10.1016/j.jpowsour.2005.08.021.

5- J. Padulles, G.W. Ault, J.R. McDonald. 2000. ""An integrated SOFC plant dynamic model for power systems simulation."" Journal of Power Sources (Elsevier) 86 (1-2): 495-500. doi:10.1016/S0378-7753(99)00430-9.

6- Hauer, K.-H. 2001. ""Analysis tool for fuel cell vehicle hardware and software (controls) with an application to fuel economy comparisons of alternative system designs."" Ph.D. dissertation, Transportation Technology and Policy, University of California Davis.

7- A. Saadi, M. Becherif, A. Aboubou, M.Y. Ayad. 2013. ""Comparison of proton exchange membrane fuel cell static models."" Renewable Energy (Elsevier) 56: 64-71. doi:10.1016/j.renene.2012.10.012.

8- Diego Feroldi, Marta Basualdo. 2012. ""Description of PEM Fuel Cells System."" Green Energy and Technology (Springer) 49-72. doi:10.1007/978-1-84996-184-4_2.

9- Gottesfeld, Shimshon. n.d. The Polymer Electrolyte Fuel Cell: Materials Issues in a Hydrogen Fueled Power Source. http://physics.oregonstate.edu/~hetheriw/energy/topics/doc/electrochemistry/fc/basic/The_Polymer_Electrolyte_Fuel_Cell.htm

10- Mohamed Becherif, Aïcha Saadi, Daniel Hissel, Abdennacer Aboubou, Mohamed Yacine Ayad. 2011. ""Static and dynamic proton exchange membrane fuel cell models."" Journal of Hydrocarbons Mines and Environmental Research 2 (1).

11- Larminie, J., Dicks, A., & McDonald, M. S. 2003. Fuel cell systems explained (Vol. 2, pp. 207-225). Chichester, UK: J. Wiley. doi:10.1002/9781118706992.

12- Rho, Y. W., Srinivasan, S., & Kho, Y. T. 1994. ""Mass transport phenomena in proton exchange membrane fuel cells using O2/He, O2/Ar, and O2/N2 mixtures II. Theoretical analysis."" Journal of the Electrochemical Society, 141 (8), 2089-2096. doi:10.1149/1.2055066.

13- U. Chakraborty. 2019. ""A New Model for Constant Fuel Utilization and Constant Fuel Flow in Fuel Cells."" Appl. Sci. 9: 1066. doi:10.3390/app9061066.
## Cite

If you use OPEM in your research, please cite this paper:
\n\n@article{Haghighi2018,\n  doi = {10.21105/joss.00676},\n  url = {https://doi.org/10.21105/joss.00676},\n  year  = {2018},\n  month = {jul},\n  publisher = {The Open Journal},\n  volume = {3},\n  number = {27},\n  pages = {676},\n  author = {Sepand Haghighi and Kasra Askari and Sarmin Hamidi and Mohammad Mahdi Rahimi},\n  title = {{OPEM} : Open Source {PEM} Cell Simulation Tool},\n  journal = {Journal of Open Source Software}\n}\n\n\n
Download [OPEM.bib](http://www.ecsim.ir/opem/OPEM.bib) (BibTeX format)

(JOSS, Zenodo and Researchgate badges omitted)
\t\t\t\t\t\t\t\t\t\n\n\n\n\n\n\n## License\n[![FOSSA Status](https://app.fossa.io/api/projects/git%2Bgithub.com%2FECSIM%2Fopem.svg?type=large)](https://app.fossa.io/projects/git%2Bgithub.com%2FECSIM%2Fopem?ref=badge_large)\n\n\n## Show Your Support\n\t\t\t\t\t\t\t\t\n

Star This Repo

Give a ⭐️ if this project helped you!

Donate to Our Project

\n\t\t\t\t\t\t\t\t\nIf you do like our project and we hope that you do, can you please support us? Our project is not and is never going to be working for profit. We need the money just so we can continue doing what we do ;-) .\n\n\n\n'",",https://doi.org/10.3390/app9061066.\n,https://doi.org/10.21105/joss.00676,https://doi.org/10.21105/joss.00676,https://doi.org/10.5281/zenodo.1133110","2017/12/16, 15:42:52",2139,MIT,0,1137,"2023/09/25, 04:15:06",7,132,191,14,30,0,0.6,0.10962821734985706,"2021/06/30, 15:57:43",v1.3,0,9,true,custom,true,true,ECSIM/gopem,,https://github.com/ECSIM,https://ecsim.ir,"Tehran, Iran",,,https://avatars.githubusercontent.com/u/34425602?v=4,,, gopem,GOPEM is a graphical user interface of OPEM.,ECSIM,https://github.com/ECSIM/gopem.git,github,"opem,python,matplotlib,pyqt5,qt5,simulation,chemistry,fuel-cell,physics,electrochemistry,physics-simulation",Hydrogen,"2021/08/18, 09:04:33",23,0,5,true,Python,ECSIM,ECSIM,"Python,Inno Setup,Shell,Batchfile,Dockerfile",http://gopem.ecsim.ir/,"b'
\n\n
\n\n\n\n
\n\n--------\n\n## Table of Contents\t\t\t\t\n * [Overview](https://github.com/ECSIM/gopem#overview)\n * [Installation](https://github.com/ECSIM/gopem#installation)\n * [Usage](https://github.com/ECSIM/gopem#usage)\n * [Issues & Bug Reports](https://github.com/ECSIM/gopem#issues--bug-reports)\n * [Contribution](https://github.com/ECSIM/gopem/blob/master/.github/CONTRIBUTING.md)\n * [Dependencies](https://github.com/ECSIM/gopem#dependencies)\n * [Thanks](https://github.com/ECSIM/gopem#thanks)\n * [Cite](https://github.com/ECSIM/gopem#cite)\n * [Authors](https://github.com/ECSIM/gopem/blob/master/AUTHORS.md)\n * [License](https://github.com/ECSIM/gopem#license)\n * [Show Your Support](https://github.com/ECSIM/gopem#show-your-support)\n * [Changelog](https://github.com/ECSIM/gopem/blob/master/CHANGELOG.md)\n * [Code of Conduct](https://github.com/ECSIM/gopem/blob/master/.github/CODE_OF_CONDUCT.md)\n\n## Overview\t\t\n\nGOPEM is a graphical user interface of [OPEM (Open Source PEM Fuel Cell Simulation Tool)](https://github.com/ECSIM/opem ""OPEM"").\n\n\n\t \n\t\t\n\t\t\t\n\t\t\t\n\t\n\t\n\t\t\n\t\t\n\t\t\n\t\n
\n\n## Installation\t\n\n### Source Code\n- Download and install [Python3.x](https://www.python.org/downloads/) (>=3.6)\n\t- [x] Select `Add to PATH` option\n\t- [x] Select `Install pip` option\n- Download [Version 0.7](https://github.com/ecsim/gopem/archive/v0.7.zip) or [Latest Source ](https://github.com/ecsim/gopem/archive/develop.zip)\n- Run `pip install -r requirements.txt` or `pip3 install -r requirements.txt` (Need root access)\n- Run `python3 setup.py install` or `python setup.py install` (Need root access)\t\t\t\t\n\n### PyPI\n\n\n- Check [Python Packaging User Guide](https://packaging.python.org/installing/) \n- Run `pip install gopem` or `pip3 install gopem` (Need root access)\n\n### Easy Install\n\n- Run `easy_install --upgrade gopem` (Need root access)\n\n\n### Docker\t\n\n- Run `docker pull ecsim/gopem` (Need root access)\n- Configuration :\n\t- Ubuntu 16.04\n\t- Python 3.6\n\n\n### Exe Version (Only Windows)\n- Download [Installer-Version 0.7](https://github.com/ECSIM/gopem/releases/download/v0.7/GOPEM-0.7.exe) or [Portable-Version 0.7](https://github.com/ECSIM/gopem/releases/download/v0.7/GOPEM-Portable-0.7.exe)\n- Run and install\n\n\xe2\x9a\xa0\xef\xb8\x8f The portable build is slower to start\n\n### DMG Version (MacOS)\n- Download [DMG-Version 0.7](https://github.com/ECSIM/gopem/releases/download/v0.7/GOPEM-0.7.dmg)\n- Open DMG file\n- Copy `GOPEM` into your system\n- Run `GOPEM`\n\n\n### Exe Version Note\nFor GOPEM targeting Windows < 10, the user needs to take special care to include the Visual C++ run-time .dlls: Python >=3.5 uses Visual Studio 2015 run-time, which has been renamed into \xe2\x80\x9cUniversal CRT\xe2\x80\x9c and has become part of Windows 10. For Windows Vista through Windows 8.1 there are Windows update packages, which may or may not be installed in the target-system. So you have the following options:\n\n1. Use [OPEM](https://github.com/ECSIM/opem) (Without GUI)\n2. Use [Source Code](https://github.com/ECSIM/gopem#source-code)\n3. Download and install [Visual C++ Redistributable for Visual Studio 2015](https://www.microsoft.com/en-us/download/details.aspx?id=48145)\n\n### System Requirements\nGOPEM will likely run on a modern dual core PC. Typical configuration is:\n\n- Dual Core CPU (2.0 Ghz+)\n- 2GB of RAM\n\n\xe2\x9a\xa0\xef\xb8\x8f Note that it may run on lower end equipment though good performance is not guaranteed.\n\n## Usage\n\n
\n\n\n

*(demo GIF and screenshots omitted)*
\n\n
\t\n\n- Open `CMD` (Windows) or `Terminal` (UNIX)\n- Run `python -m gopem` or `python3 -m gopem` (or run `GOPEM.exe`)\n- Wait about 4-15 seconds (depends on your system specification)\n- Enter PEM cell parameters (or run standard test vectors)\t\n- For more information about parameters visit [OPEM (Open Source PEM Fuel Cell Simulation Tool)](https://github.com/ECSIM/opem ""OPEM"")\n## Issues & Bug Reports\t\t\t\n\nJust fill an issue and describe it. We\'ll check it ASAP!\t\t\t\t\t\t\t\nor send an email to [opem@ecsim.ir](mailto:opem@ecsim.ir ""opem@ecsim.ir""). \n\nGitter is another option :\t\t\t\t\n\n[![Gitter](https://badges.gitter.im/ECSIM/opem.svg)](https://gitter.im/ECSIM/opem?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge)\n\n\n## Dependencies\t\t\n\n\n\n\t \n\t\t\t\n\t\t\t\n\t\n\t\n\t\t\n\t\t\n\t\n
(requirements status badges for the `master` and `develop` branches omitted)
## Thanks

* [PyInstaller](https://github.com/pyinstaller/pyinstaller)

## Cite

If you use OPEM in your research, please cite this paper:
\n\n@article{Haghighi2018,\n  doi = {10.21105/joss.00676},\n  url = {https://doi.org/10.21105/joss.00676},\n  year  = {2018},\n  month = {jul},\n  publisher = {The Open Journal},\n  volume = {3},\n  number = {27},\n  pages = {676},\n  author = {Sepand Haghighi and Kasra Askari and Sarmin Hamidi and Mohammad Mahdi Rahimi},\n  title = {{OPEM} : Open Source {PEM} Cell Simulation Tool},\n  journal = {Journal of Open Source Software}\n}\n\n\n
Download [OPEM.bib](http://www.ecsim.ir/opem/OPEM.bib) (BibTeX format)

(JOSS, Zenodo and Researchgate badges omitted)
\n\n## License\n[![FOSSA Status](https://app.fossa.com/api/projects/git%2Bgithub.com%2FECSIM%2Fgopem.svg?type=large)](https://app.fossa.com/projects/git%2Bgithub.com%2FECSIM%2Fgopem?ref=badge_large)\t\t\n\n## Show Your Support\t\t\t\n\n

Star This Repo

Give a ⭐️ if this project helped you!

Donate to Our Project

\n\t\t\t\t\t\t\t\t\nIf you do like our project, and we hope that you do, can you please support us? Our project is not, and is never going to be, working for profit. We need the money just so we can continue doing what we do ;-) .\n\n\n'",",https://doi.org/10.21105/joss.00676,https://doi.org/10.21105/joss.00676,https://doi.org/10.5281/zenodo.1133110","2018/08/26, 17:59:31",1886,MIT,0,521,"2023/08/28, 12:35:04",8,135,150,25,58,4,1.2,0.3521739130434782,"2021/08/18, 13:09:04",v0.7,0,6,true,custom,true,true,,,https://github.com/ECSIM,https://ecsim.ir,"Tehran, Iran",,,https://avatars.githubusercontent.com/u/34425602?v=4,,, pem-dataset1,Proton Exchange Membrane Fuel Cell Dataset.,ECSIM,https://github.com/ECSIM/pem-dataset1.git,github,"pem,fuel-cell,dataset,polarization,proton-exchange-membrane,mea,data,data-science,chemistry,electrochemistry,energy,power,science,science-research,physics,activation-procedure,open-source,open-science,nafion,impedance",Hydrogen,"2022/08/13, 13:41:28",66,0,20,false,Jupyter Notebook,ECSIM,ECSIM,"Jupyter Notebook,Python",,"b'# Proton Exchange Membrane (PEM) Fuel Cell Dataset\n\n## Overview\n\nThis dataset covers Nafion 112 membrane standard tests and MEA activation\ntests of a PEM fuel cell under various operating conditions. The dataset includes two general electrochemical\nanalysis methods, polarization and impedance curves. The effects of different H2/O2 gas\npressures, different voltages and various humidity conditions are considered in several steps.\nThe behavior of the PEM fuel cell during distinct operating-condition tests, the activation procedure and different operating conditions\nbefore and after activation can be concluded from the data. In the polarization curves, voltage\nand power density change as a function of H2/O2 flows and relative humidity. The resistance of the\nequivalent circuit used for the fuel cell can be calculated from the impedance data. The experimental\nresponse of the cell is thus evident in the presented data, which is useful for in-depth analysis, simulation\nand material performance investigation in PEM fuel cell research.\n\nFor more information about the MEA (Membrane Electrode Assembly) activation procedure visit [here](https://github.com/ECSIM/pem-dataset1/tree/master/MEA.md)\n\n## Tests\n\n\n1. [Activation Test MEA Constant Current 0.25A](https://github.com/ECSIM/pem-dataset1/tree/master/Activation%20Test%20MEA%20Constant%20Current%200.25A)\n2. [Activation Test MEA Constant Voltage 0.6V](https://github.com/ECSIM/pem-dataset1/tree/master/Activation%20Test%20MEA%20Constant%20Voltage%200.6V)\n3. [Activation Test MEA Constant Voltage 0.6V-2](https://github.com/ECSIM/pem-dataset1/tree/master/Activation%20Test%20MEA%20Constant%20Voltage%200.6V-2)\n4. [Activation Test MEA Constant Voltage 0.6V-3](https://github.com/ECSIM/pem-dataset1/tree/master/Activation%20Test%20MEA%20Constant%20Voltage%200.6V-3)\n5. [Activation Test MEA Constant Voltage 0.6V-4](https://github.com/ECSIM/pem-dataset1/tree/master/Activation%20Test%20MEA%20Constant%20Voltage%200.6V-4)\n6. [Activation Test MEA Cycling Potential](https://github.com/ECSIM/pem-dataset1/tree/master/Activation%20Test%20MEA%20Cycling%20Potential)\n7. [Activation Test MEA Standard Protocol](https://github.com/ECSIM/pem-dataset1/tree/master/Activation%20Test%20MEA%20Standard%20Protocol)\n8. [Activation Test MEA Standard Protocol (Repeat)](https://github.com/ECSIM/pem-dataset1/tree/master/Activation%20Test%20MEA%20Standard%20Protocol%20(Repeat))\n9. 
[Standard Test of Nafion Membrane 112](https://github.com/ECSIM/pem-dataset1/tree/master/Standard%20Test%20of%20Nafion%20Membrane%20112)\n\n\n
\n
## Notebooks

We have provided some **Jupyter Notebooks** to visualize the data; visit [here](https://github.com/ECSIM/pem-dataset1/tree/master/Notebooks)
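For readers who prefer a quick script to the notebooks, a minimal sketch of plotting one polarization curve with pandas and matplotlib; the file and column names below are hypothetical placeholders, not actual names from this dataset:

```python
# A minimal sketch, not from the dataset docs: plot voltage against current
# from one polarization test. File/column names are hypothetical.
import pandas as pd
import matplotlib.pyplot as plt

df = pd.read_csv('polarization_test.csv')       # hypothetical file name
plt.plot(df['I (A)'], df['V (V)'], marker='o')  # hypothetical column names
plt.xlabel('Current (A)')
plt.ylabel('Voltage (V)')
plt.title('PEM fuel cell polarization curve')
plt.show()
```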
## Issues & Bug Reports

Just fill an issue and describe it. We'll check it ASAP!
Or send an email to [data@ecsim.ir](mailto:data@ecsim.ir ""data@ecsim.ir"").

You can also join our Discord server.

## Cite

If you use this dataset in your research, we would appreciate citations to the following paper:

S. Hamidi, S. Haghighi, K. Askari, Dataset of Standard Tests of Nafion 112 Membrane and Membrane Electrode Assembly (MEA) Activation Tests of Proton Exchange Membrane (PEM) Fuel Cell, ChemRxiv, (2020). doi:10.26434/chemrxiv.11902023.
\n
\n\n@article{Hamidi2020,\n  doi = {10.26434/chemrxiv.11902023},\n  url = {https://doi.org/10.26434/chemrxiv.11902023},\n  year = {2020},\n  month = feb,\n  publisher = {American Chemical Society ({ACS})},\n  author = {Sarmin Hamidi and Sepand Haghighi and Kasra Askari},\n  title = {Dataset of Standard Tests of Nafion 112 Membrane and Membrane Electrode Assembly ({MEA}) Activation Tests of Proton Exchange Membrane ({PEM}) Fuel Cell}\n}\n\n\n
\n\n## License\n\n\n\n\n## Show Your Support\n\n

Star This Repo

Give a ⭐️ if this project helped you!

Donate to Our Project

\t\n\nIf you do like our project and we hope that you do, can you please support us? Our project is not and is never going to be working for profit. We need the money just so we can continue doing what we do ;-) .\n\n'",",https://doi.org/10.26434/chemrxiv.11902023","2020/01/04, 08:57:29",1390,CC-BY-4.0,0,456,"2022/08/13, 13:41:33",0,6,24,0,438,0,1.8333333333333333,0.4311111111111111,"2020/03/19, 14:08:33",v1.1,0,4,true,custom,true,true,,,https://github.com/ECSIM,https://ecsim.ir,"Tehran, Iran",,,https://avatars.githubusercontent.com/u/34425602?v=4,,, HIM,Hydrogen Infrastructure model for the analysis of spatially resolved hydrogen infrastructure pathways.,FZJ-IEK3-VSA,https://github.com/FZJ-IEK3-VSA/HIM.git,github,,Hydrogen,"2020/06/18, 16:15:34",12,0,6,false,Jupyter Notebook,FZJ-IEK3,FZJ-IEK3-VSA,"Jupyter Notebook,Python",,"b' \n\n# HIM- Hydrogen Infrastructure Model for Python\n\nHSC offers the functionality to calculate predefined hydrogen supply chain architectures with respect to spatial resolution for the analysis of explicit nationwide infrastructures.\n\n## Installation and application\n\nFirst, download and install [Anaconda](https://www.anaconda.com/). Then, clone a local copy of this repository to your computer with git\n\n\tgit clone https://github.com/FZJ-IEK3-VSA/HSCPathways.git\n\t\nor download it directly. Move to the folder\n\n\tcd HIM\n\nand install the required Python environment via\n\n\tconda env create -f environment.yml \n\nTo determine the optimal pipeline design, a mathematical optimization solver is required. [Gurobi](https://www.gurobi.com/) is used as default solver, but other optimization solvers can be used as well.\n\n## Examples\n\nA number of [**examples**](apps/) shows the capabilities of HIM. Either for [abstract costs analyses](apps/Example%20-%20Abstract%20analysis%20without%20geoferenced%20locations.ipynb) \n\n \n \nor for [exact infrastructure design](apps/Example%20Hydrogen%20Supply%20Chain%20Cost%20Generation.ipynb) \n\n \n\n\n## License\n\nMIT License\n\nCopyright (C) 2016-2019 Markus Reuss (FZJ IEK-3), Thomas Grube (FZJ IEK-3), Martin Robinius (FZJ IEK-3), Detlef Stolten (FZJ IEK-3)\n\nYou should have received a copy of the MIT License along with this program.\nIf not, see https://opensource.org/licenses/MIT\n\n## About Us \n \n\nWe are the [Techno-Economic Energy Systems Analysis](http://www.fz-juelich.de/iek/iek-3/EN/Forschung/_Process-and-System-Analysis/_node.html) department at the [Institute of Energy and Climate Research: Electrochemical Process Engineering (IEK-3)](http://www.fz-juelich.de/iek/iek-3/EN/Home/home_node.html) belonging to the [Forschungszentrum J\xc3\xbclich](www.fz-juelich.de/). Our interdisciplinary department\'s research is focusing on energy-related process and systems analyses. Data searches and system simulations are used to determine energy and mass balances, as well as to evaluate performance, emissions and costs of energy systems. The results are used for performing comparative assessment studies between the various systems. 
Our current priorities include the development of energy strategies, in accordance with the German Federal Government\xe2\x80\x99s greenhouse gas reduction targets, by designing new infrastructures for sustainable and secure energy supply chains and by conducting cost analysis studies for integrating new technologies into future energy market frameworks.\n\n\n## Acknowledgment\n\nThis work was supported by the Helmholtz Association under the Joint Initiative [""Energy System 2050 \xe2\x80\x93 A Contribution of the Research Field Energy""](https://www.helmholtz.de/en/research/energy/energy_system_2050/).\n\n\n'",,"2019/09/25, 11:01:02",1491,MIT,0,13,"2020/06/18, 16:15:35",1,1,1,0,1224,1,0.0,0.33333333333333337,,,0,3,false,,true,true,,,https://github.com/FZJ-IEK3-VSA,https://www.fz-juelich.de/iek/iek-3/EN/Home/home_node.html,Forschungszentrum Jülich,,,https://avatars.githubusercontent.com/u/28654423?v=4,,, pandapipes,"A pipeflow calculation tool that complements pandapower in the simulation of multi energy grids, in particular heat and gas networks.",e2nIEE,https://github.com/e2nIEE/pandapipes.git,github,"district-heating,gas-network-simulation,hydrogen,multi-energy-systems,pipe-network,distribution-networks,water-network-distribution",Hydrogen,"2023/09/19, 05:00:30",96,9,35,true,Python,,e2nIEE,"Python,Jupyter Notebook",https://www.pandapipes.org,"b'\n.. image:: https://www.pandapipes.org/images/pp.svg\n :target: https://www.pandapipes.org\n :width: 300em\n :alt: logo\n\n|\n\n.. image:: https://badge.fury.io/py/pandapipes.svg\n :target: https://badge.fury.io/py/pandapipes\n :alt: PyPI\n\n.. image:: https://img.shields.io/pypi/pyversions/pandapipes.svg\n :target: https://pypi.python.org/pypi/pandapipes\n :alt: versions\n\n.. image:: https://readthedocs.org/projects/pandapipes/badge/\n :target: http://pandapipes.readthedocs.io/\n :alt: docs\n\n.. image:: https://codecov.io/gh/e2nIEE/pandapipes/branch/master/graph/badge.svg\n :target: https://codecov.io/github/e2nIEE/pandapipes?branch=master\n :alt: codecov\n\n.. image:: https://api.codacy.com/project/badge/Grade/86c876ab23fc40d98e85f7d59bdef928\n :target: https://app.codacy.com/gh/e2nIEE/pandapipes/dashboard\n :alt: Codacy Badge\n\n.. image:: https://img.shields.io/badge/License-BSD%203--Clause-blue.svg\n :target: https://github.com/e2nIEE/pandapipes/blob/master/LICENSE\n :alt: BSD\n\n.. image:: https://pepy.tech/badge/pandapipes\n :target: https://pepy.tech/project/pandapipes\n :alt: pepy\n\n.. image:: https://mybinder.org/badge_logo.svg\n :target: https://mybinder.org/v2/gh/e2nIEE/pandapipes/master?filepath=tutorials\n :alt: binder\n\n\nA pipeflow calculation tool that complements `pandapower `_ in the\nsimulation of multi energy grids, in particular heat and gas networks. More information can be found on `www.pandapipes.org `_.\n\nGetting started:\n\n- `Installation Notes `_\n- `Documentation `_\n- `Tutorials on github `_\n- `Interactive tutorials on Binder `_\n\n\n\npandapipes is a development of the Department for Distribution System Operation at the Fraunhofer\nInstitute for Energy Economics and Energy System Technology (IEE), Kassel, and the research group\nEnergy Management and Power System Operation, University of Kassel.\n\n\n.. image:: https://upload.wikimedia.org/wikipedia/commons/e/ee/FraunhoferIEELogo.gif\n :target: https://www.iee.fraunhofer.de/en.html\n :width: 350\n\n|\n\n.. image:: https://www.uni-kassel.de/uni/fileadmin/sys/resources/images/logo/logo-main.svg\n :target: https://www.uni-kassel.de/\n :width: 350\n\n|\n\n.. 
image:: https://www.uni-kassel.de/eecs/index.php?eID=dumpFile&t=p&p=1674&token=b0509afe2e7c4e41d7cdad83edd0ce49af9fceaa\n :target: https://www.uni-kassel.de/eecs/en/sections/energiemanagement-und-betrieb-elektrischer-netze/home\n :width: 250\n\n|\n\nWe welcome contributions to pandapipes of any kind - if you want to contribute, please check out\nthe `pandapipes contribution guidelines `_.\n'",,"2020/02/03, 15:04:57",1360,CUSTOM,373,1635,"2023/09/15, 13:11:50",61,396,468,139,40,17,1.8,0.6677384780278671,"2023/07/31, 12:00:03",v0.8.5,1,16,false,,false,true,"Nieuwe-Warmte-Nu/simulator-core,Nieuwe-Warmte-Nu/template-python,ERIGrid2/toolbox_doe_sa,e2nIEE/pandapower-qgis,ERIGrid2/JRA-2.1.3-STL,ERIGrid2/dhnsim_pandapipes,e2nIEE/pandahub,hoangtranthe/JRA-1.1-multi-energy,ERIGrid2/benchmark-model-multi-energy-networks",,https://github.com/e2nIEE,,"Kassel, Germany",,,https://avatars.githubusercontent.com/u/40853245?v=4,,, The Hydrogen Risk Assessment Models,"The first-ever software toolkit that integrates deterministic and probabilistic models for quantifying accident scenarios, predicting physical effects, and characterizing hydrogen hazards impact on people and structures.",sandialabs,https://github.com/sandialabs/hyram.git,github,snl-applications,Hydrogen,"2022/11/15, 19:21:29",34,0,8,true,Python,Sandia National Laboratories,sandialabs,"Python,C#,Rich Text Format",,"b'# Hydrogen Plus Other Alternative Fuels Risk Assessment Models (HyRAM+)\nThe Hydrogen Plus Other Alternative Fuels Risk Assessment Models (HyRAM+) toolkit integrates deterministic and probabilistic models for quantifying accident scenarios, predicting physical effects, and characterizing the impact on people from hydrogen and other alternative fuels.\n\nAdditional descriptions and documentation, as well as a Windows installer, can be found at https://hyram.sandia.gov/.\n\n \n## Copyright and License\nThe copyright language is available in the [COPYRIGHT.txt](./COPYRIGHT.txt) file.\nThe license, as well as terms and conditions, are available in the [COPYING.txt](./COPYING.txt) file. \n\n \n## Contributing\nThe application comprises a frontend GUI written in C# and a backend module written in Python.\nAnyone who wants to contribute to the development of the open-source HyRAM+ project should refer to the details in the [CONTRIBUTING](./CONTRIBUTING.md) document. \n\n \n## Documentation\nThe [HyRAM+ Technical Reference Manual](https://hyram.sandia.gov/) contains descriptions of the models and calculations used within HyRAM+. It also contains references to the original works that these models and calculations are based on.\n\nThe [HyRAM 2.0 User Guide](https://energy.sandia.gov/download/44669/) contains details and examples on how to use the HyRAM+ software through the graphical user interface (GUI), with example calculations updated with changes to the interface and improved calculation options. This document more references how to use the software interface, rather than specifics on the models and calculations themselves. While there have been many changes to the current HyRAM+ version of the code, many of the examples are still applicable even though the User Guide is based on the previous version; a new version of the User Guide will be published in the future. 
\n\n \n## Repository Layout\nThe HyRAM+ repository includes both the C# frontend GUI and the backend Python module.\nApplication code is organized in directories in the git repository in the following way:\n\n```\n$\n\xe2\x94\x94\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80src\n \xe2\x94\x9c\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80gui\n \xe2\x94\x82 \xe2\x94\x9c\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80Hyram.gui\n \xe2\x94\x82 \xe2\x94\x9c\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80Hyram.PythonApi\n \xe2\x94\x82 \xe2\x94\x9c\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80Hyram.PythonDirectory\n \xe2\x94\x82 \xe2\x94\x9c\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80Hyram.Setup\n \xe2\x94\x82 \xe2\x94\x9c\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80Hyram.SetupBootstrapper\n \xe2\x94\x82 \xe2\x94\x9c\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80Hyram.State\n \xe2\x94\x82 \xe2\x94\x9c\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80Hyram.Units\n \xe2\x94\x82 \xe2\x94\x94\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80Hyram.Utilities\n \xe2\x94\x9c\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80cs_api\n \xe2\x94\x94\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80hyram\n \xe2\x94\x9c\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80tests\n \xe2\x94\x94\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80hyram\n \xe2\x94\x9c\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80phys\n \xe2\x94\x9c\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80qra\n \xe2\x94\x94\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80utilities\n```\n\n* `src` - Project source code, including C# GUI and python modules\n* `src/gui` - Front-end C# interface providing convenient access to HyRAM+ tools\n* `src/cs_api` - Python functions providing C# access to HyRAM+ python code via the python.NET library.\n* `src/hyram` - Python module of HyRAM+ tools including physics, quantitative risk assessment, and miscellaneous utilities\n * Additional information on the usage and development of the HyRAM+ Python module can be found in the [README](./src/hyram/README.md) of that directory\n* `src/hyram/hyram` - Python source code for physics and risk models\n * This directory contains the code for the risk and physics model calculations that are accessible through the front-end GUI\n'",,"2019/05/07, 16:06:52",1632,GPL-3.0,1,12,"2023/10/03, 22:36:15",3,7,9,3,21,3,0.0,0.25,,,0,2,false,,false,true,,,https://github.com/sandialabs,https://software.sandia.gov,United States,,,https://avatars.githubusercontent.com/u/4993680?v=4,,, GasModels.jl,A Julia/JuMP Package for Gas Network Optimization.,lanl-ansi,https://github.com/lanl-ansi/GasModels.jl.git,github,"gas-network-formulations,optimization,gas-flow,network",Hydrogen,"2022/11/22, 22:16:01",62,0,9,true,Julia,advanced network science initiative,lanl-ansi,"Julia,MATLAB",https://lanl-ansi.github.io/GasModels.jl/latest/,"b'# GasModels.jl\n\n\n\nRelease: [![](https://img.shields.io/badge/docs-stable-blue.svg)](https://lanl-ansi.github.io/GasModels.jl/stable)\n\nDev:\n[![Build Status](https://travis-ci.org/lanl-ansi/GasModels.jl.svg?branch=master)](https://travis-ci.org/lanl-ansi/GasModels.jl)\n[![codecov](https://codecov.io/gh/lanl-ansi/GasModels.jl/branch/master/graph/badge.svg)](https://codecov.io/gh/lanl-ansi/GasModels.jl)\n[![](https://img.shields.io/badge/docs-latest-blue.svg)](https://lanl-ansi.github.io/GasModels.jl/latest)\n\nGasModels.jl is a Julia/JuMP package for Steady-State Gas Network Optimization.\nIt is designed to enable computational evaluation of emerging Gas network formulations and algorithms in a common platform.\nThe code is engineered to decouple problem specifications (e.g. Gas Flow, Expansion planning, ...) from the gas network formulations (e.g. 
DWP, CRDWP, ...)\nThis enables the definition of a wide variety of gas network formulations and their comparison on common problem specifications.\n\n**Core Problem Specifications**\n* Gas Flow (gf)\n* Expansion Planning (ne)\n* Load Shed (ls)\n\n**Core Network Formulations**\n* DWP\n* WP\n* CRDWP\n* LRDWP\n* LRWP\n\n## Basic Usage\n\n\nOnce GasModels is installed, an optimizer is installed, and a network data file has been acquired, a Gas Flow can be executed with,\n```\nusing GasModels\nusing <solver package>\n\nrun_gf(""foo.m"", FooGasModel, FooSolver())\n```\n\nSimilarly, an expansion planning problem can be executed with,\n```\nrun_ne(""foo.m"", FooGasModel, FooSolver())\n```\n\nwhere FooGasModel is the implementation of the mathematical program of the Gas equations you plan to use (e.g. DWPGasModel) and FooSolver is the JuMP optimizer you want to use to solve the optimization problem (e.g. IpoptSolver).\n\n\n## Acknowledgments\n\nThis code has been developed as part of the Advanced Network Science Initiative at Los Alamos National Laboratory.\nThe primary developer is Russell Bent, with significant contributions from Conrado Borraz-Sanchez, Hassan Hijazi, and Pascal van Hentenryck.\n\nSpecial thanks to Miles Lubin for his assistance in integrating with Julia/JuMP.\n\n\n## License\n\nThis code is provided under a BSD license as part of the Multi-Infrastructure Control and Optimization Toolkit (MICOT) project, LA-CC-13-108.\n'",,"2016/08/29, 16:01:10",2613,CUSTOM,1,1011,"2022/11/22, 22:16:02",48,106,204,1,336,1,1.2,0.5879682179341656,"2022/07/25, 23:24:15",v0.9.2,0,10,false,,false,false,,,https://github.com/lanl-ansi,https://lanl-ansi.github.io/,"Los Alamos, NM",,,https://avatars.githubusercontent.com/u/17053288?v=4,,, SciGRID_gas,Methods to create an automated network model of the European gas transportation network.,,,custom,,Hydrogen,,,,,,,,,,https://www.gas.scigrid.de/,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,, Vehicle with Fuel Cell Powertrain,Fuel cell electric vehicle with battery model and cooling system.,mathworks,https://github.com/mathworks/Fuel-Cell-Vehicle-Model-Simscape.git,github,,Hydrogen,"2023/05/13, 20:59:23",36,0,8,true,MATLAB,MathWorks,mathworks,"MATLAB,HTML",,"b'# **Vehicle with Fuel Cell Powertrain**\nCopyright 2020 The MathWorks(TM), Inc.\n\nThis example shows a fuel cell powertrain modeled in Simscape. A single\nfuel cell stack in parallel with a battery powers a single motor that\npropels the vehicle. The fuel cell is modeled using a custom domain to\ntrack the different species of gas that are used in the fuel cell. 
The\nvehicle can be tested on custom drive cycles or using the Drive Cycle\nSource from Powertrain Blockset.\n\nOpen the project Fuel_Cell_Vehicle.prj to get started.\n\nPlease visit the [Simscape Fluids](https://www.mathworks.com/products/simscape-fluids.html) \npage to learn more about modeling fluid systems.\n\n[![View Fuel Cell Vehicle Model in Simscape on File Exchange](https://www.mathworks.com/matlabcentral/images/matlab-file-exchange.svg)](https://www.mathworks.com/matlabcentral/fileexchange/82340-fuel-cell-vehicle-model-in-simscape)\n\n## **Fuel Cell Vehicle Model**\n![](Overview/html/ssc_car_fuel_cell_1motor_modelLevel.png)\n\n## **Fuel Cell Vehicle Model, Powertrain System**\n![](Overview/html/ssc_car_fuel_cell_1motor_powertrainLevel.png)'",,"2020/11/06, 12:29:55",1083,CUSTOM,1,10,"2022/11/22, 22:16:02",2,0,0,0,336,0,0,0.0,"2023/05/14, 11:34:43",23.1.1.5,0,1,false,,false,false,,,https://github.com/mathworks,https://mathworks.github.io,,,,https://avatars.githubusercontent.com/u/8590076?v=4,,, VirtualFCS,A Modelica library for hybrid hydrogen fuel cell and battery power systems.,Virtual-FCS,https://github.com/Virtual-FCS/VirtualFCS.git,github,,Hydrogen,"2023/03/24, 15:33:12",25,0,12,true,Modelica,Virtual-FCS H2020,Virtual-FCS,"Modelica,Motoko",,"b'# VirtualFCS Library\nVirtualFCS is a Modelica library for fuel cell system modelling developed through the EU H2020 research project Virtual-FCS.\n\n[![2023: v2.0.0](https://img.shields.io/badge/Release-2023%3A%20v2.0.0-blue)![DOI](https://zenodo.org/badge/DOI/10.5281/zenodo.7552220.svg)](https://doi.org/10.5281/zenodo.7552220)\n[![OpenModelica v1.20 win64](https://img.shields.io/badge/OpenModelica-v1.20%20win64-blue)](https://openmodelica.org/)\n\nNote that this library might also work with other platforms than win64, but that has not been tested during development.\n\n
\nCiting other library versions\n\nNote that library versions older than v2.0.0 require OpenModelica v1.14.\n\n| Released | Version | DOI |\n|------------|---------------|-----|\n| 2023-01-19 | v2.0.0 | [![DOI](https://zenodo.org/badge/DOI/10.5281/zenodo.7552220.svg)](https://doi.org/10.5281/zenodo.7552220) |\n| 2022-02-16 | v2022.1.0-beta| [![DOI](https://zenodo.org/badge/DOI/10.5281/zenodo.6104901.svg)](https://doi.org/10.5281/zenodo.6104901) |\n| 2021-12-15 | v0.2.1-beta | [![DOI](https://zenodo.org/badge/DOI/10.5281/zenodo.7566040.svg)](https://doi.org/10.5281/zenodo.7566040) |\n| 2021-09-22 | v0.2.0-beta | [![DOI](https://zenodo.org/badge/DOI/10.5281/zenodo.7566037.svg)](https://doi.org/10.5281/zenodo.7566037) |\n| 2021-07-01 | v0.1.0-beta | |\n\n
\n\n## Library Description\n\nThe objective of the complete hybrid system model is to reproduce and simulate the dynamic behavior of all the components according to the desired architecture. Where possible, degradation mechanisms of the components are considered in order to predict the performance losses of the entire system.\n\n![picture](img/hydrogen-high-res.jpg)\n\nThe model is primarily intended for transport applications, but it should remain reliable for other applications as well. Consequently, the model must consider the dynamic phenomena relevant to all of them. The scope of the model is limited to the hybrid fuel cell system, which comprises the fuel cell stack, the battery, and ancillaries. Figure 2 highlights the system considered in the project. \n\n![picture](img/VirtualFCS_Model_Scope.png)\n\n\n## System Requirements and Installation\nThe VirtualFCS library is designed to work with OpenModelica and supports version 1.20. To install OpenModelica, please visit their website:\n\n[Information about OpenModelica](https://www.openmodelica.org/)
\n[Download OpenModelica v1.20 for Windows](https://build.openmodelica.org/omc/builds/windows/releases/1.20/0/)
\n[OpenModelica on GitHub](https://github.com/OpenModelica)
\n\nTo use the VirtualFCS library, follow these steps:\n1. Clone this repository to your computer\n2. Open the OpenModelica Connection Editor\n3. Open the file VirtualFCS\\package.mo\n4. The VirtualFCS library will load in the library browser on the left of the Connection Editor (a scripted alternative using OMPython is sketched after the naming conventions below)\n\nDevelopment and conventions\n------------------------\n\n### Workflow\nThe VirtualFCS library is currently in development by members of the Virtual-FCS project. The project started in 2020 and will continue through 2022. Minor releases are planned every 3 months of the project. Development should always take place on a side branch. Never push directly to main. Contributions submitted as [Pull Requests](https://github.com/Virtual-FCS/VirtualFCS/pulls) are welcome. Recommended guidance for contributors on writing git commit messages can be found here: [How to Write a Git Commit Message](https://cbea.ms/git-commit/). \n\nIssues can be reported using the [Issues](https://github.com/Virtual-FCS/VirtualFCS/issues) button. \n\n### Naming conventions\nNaming conventions are laid out below:\n\nClasses. Classes should be nouns in UpperCamelCase (e.g. FuelCellStack).
\nInstance. Instance names should be nouns in lowerCamelCase. An underscore at the end of the name may be used to characterize an upper or lower index (e.g. automotiveStack, pin_a).
\nMethod. Methods should be verbs in lowerCamelCase (e.g. updateFuelCellStack).
\nVariables. Local variables, instance variables, and class variables are also written either as single letters or in lowerCamelCase (e.g. U, cellVoltage).
\nConstants. Constants should be written in uppercase characters separated by underscores (e.g. T_REF).
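As a scripted alternative to the Connection Editor steps above, the library can also be loaded from Python through OpenModelica's OMPython package. OMPython is not part of VirtualFCS and is not mentioned in this README, so treat the following as a minimal sketch under that assumption; the path to `package.mo` mirrors step 3 of the installation instructions.

```python
# Minimal sketch: loading the VirtualFCS library via OMPython.
# Assumes OpenModelica and OMPython are installed; OMPython is not part
# of VirtualFCS itself, and the repository path below is illustrative.
from OMPython import OMCSessionZMQ

omc = OMCSessionZMQ()  # start an OpenModelica compiler session

# Load the library (the same file opened in step 3 of the GUI instructions).
loaded = omc.sendExpression('loadFile("VirtualFCS/package.mo")')
print("Library loaded:", loaded)

# List the top-level classes to confirm the load succeeded.
print(omc.sendExpression("getClassNames(VirtualFCS)"))
```

The same session can drive simulations through OpenModelica's scripting API (e.g. `simulate(...)` expressions), which may be convenient for the batch studies described above.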
\n\nLicense\n-------\nVirtual-FCS is shared under a MIT license. For more information, please see the file LICENSE.\n\nAttributions and credits\n------------------------\n\n### Contributors (in alphabetical order)\nAmelie Pinard, \t\tSINTEF Industry, Trondheim, Norway
\nBenjamin Synnev\xc3\xa5g, \t\tSINTEF Industry, Trondheim, Norway
\nDr. Loic Vichard, \tUBFC, Belfort, France
\nDr. Mike Gerhardt, \tSINTEF Industry, Trondheim, Norway
\nDr. Nadia Steiner, \tUBFC, Belfort, France
\nDr. Roberto Scipioni, \tSINTEF Industry Trondheim, Norway
\nDr. Simon Clark, \tSINTEF Industry, Trondheim, Norway
\nDr. Yash Raka, \t\tSINTEF Industry, Trondheim, Norway
\n\n### Projects\n- [Virtual-FCS](http://www.virtual-fcs.eu/); Grant Agreement No 875087.\n\n### Acknowledgements\n This code repository is part of a project that has received funding from the Fuel Cells and Hydrogen 2 Joint Undertaking under Grant Agreement No 875087. This Joint Undertaking receives support from the European Union\xe2\x80\x99s Horizon 2020 Research and Innovation programme, Hydrogen Europe and Hydrogen Europe Research.\n'",",https://doi.org/10.5281/zenodo.7552220,https://doi.org/10.5281/zenodo.7552220,https://doi.org/10.5281/zenodo.6104901,https://doi.org/10.5281/zenodo.7566040,https://doi.org/10.5281/zenodo.7566037","2021/04/19, 04:31:30",919,MIT,20,179,"2023/03/24, 15:40:30",17,19,26,7,215,5,0.3,0.6190476190476191,"2023/01/19, 18:15:36",v2.0.0,0,6,false,,false,false,,,https://github.com/Virtual-FCS,virtual-fcs.eu,,,,https://avatars.githubusercontent.com/u/73799930?v=4,,, OpenTerrace,A pure Python framework for thermal energy storage packed bed simulations.,OpenTerrace,https://github.com/OpenTerrace/openterrace-python.git,github,"energy-storage,heat-transfer,multiphase-flow,python,cuda,phase-change-materials,packed-bed,numerical",Thermal Energy Storage,"2023/10/06, 11:02:45",11,0,7,true,Python,OpenTerrace,OpenTerrace,"Python,TeX",,"b'[![Logo](docs/_figures/logo-banner-paths-grey.svg)](#)\n\nOpenTerrace is a pure Python framework for thermal energy storage packed bed simulations. It is built from the ground up to be flexible and extendable on modern Python 3.x with speed in mind. It utilises Nvidia CUDA cores to harness the power of modern GPUs and has automatic fallback to CPU cores.\n\nOpenTerrace uses awesome open-source software such as\n[Numba](https://numba.pydata.org), [NumPy](https://numpy.org/) and [SciPy](https://scipy.org/):grey_exclamation:\n\n### [Read the docs](https://openterrace.github.io/openterrace-python/)\n\n## Why OpenTerrace?\n- **FAST** \nBy making use of modern compilers and optimised tri-diagonal matrix solvers, OpenTerrace approaches the speed of compiled C or FORTRAN code with the added convenience of easy-to-read Python language.\n\n- **FLEXIBLE** \nOpenTerrace is built from the ground up to be flexible for easy integration in system models or optimisation loops.\n\n- **EXTENDABLE** \nModules for new materials such as non-spherical rocks or exotic Phase Change Materials (PCM) can easily be plugged into the OpenTerrace framework.\n\n## Want to contribute?\nContributions are welcome :pray: Feel free to send pull requests or get in touch with me to discuss how to collaborate. 
More details in the [docs](https://openterrace.github.io/openterrace-python/).\n\n## Code contributors\n* Jakob H\xc3\xa6rvig, Associate Professor, AAU Energy, Aalborg University, Denmark'",,"2022/01/01, 12:23:59",662,GPL-3.0,176,475,"2023/10/06, 11:02:45",0,77,78,51,19,0,0.0,0.0,"2023/06/27, 08:35:31",v0.0.7,0,2,false,,false,false,,,https://github.com/OpenTerrace,,Denmark,,,https://avatars.githubusercontent.com/u/96943321?v=4,,, Open Energy System Models,Used to explore future energy systems and are often applied to questions involving energy and climate policy.,wiki,,custom,,Energy Modeling and Optimization,,,,,,,,,,https://en.wikipedia.org/wiki/Open_energy_system_models,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,, Open Energy System Databases,"Employ open data methods to collect, clean, and republish energy-related datasets for open use.",wiki,,custom,,Energy Modeling and Optimization,,,,,,,,,,https://en.wikipedia.org/wiki/Open_energy_system_databases,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,, Open Models,This page lists energy models published under open source licenses.,wiki,,custom,,Energy Modeling and Optimization,,,,,,,,,,https://wiki.openmod-initiative.org/wiki/Open_Models,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,, System Advisor Model,"A simulation program for electricity generation projects. It has models for different kinds of renewable energy systems and financial models for residential, commercial, and utility-scale projects.",NREL,https://github.com/NREL/SAM.git,github,,Energy Modeling and Optimization,"2023/10/25, 05:34:36",294,0,64,true,PowerBuilder,National Renewable Energy Laboratory,NREL,"PowerBuilder,HTML,C++,C,Jupyter Notebook,JavaScript,Python,CSS,CMake,ASP.NET,PowerShell,Shell,Dockerfile",,"b'# System Advisor Model (SAM)\n![Build](https://github.com/NREL/SAM/actions/workflows/ci.yml/badge.svg)\n[![FOSSA Status](https://app.fossa.io/api/projects/git%2Bgithub.com%2FNREL%2FSAM.svg?type=shield)](https://app.fossa.io/projects/git%2Bgithub.com%2FNREL%2FSAM?ref=badge_shield)\n\nThe SAM Open Source Project repository contains the source code, tools, and instructions to build a desktop version of the National Renewable Energy Laboratory\'s System Advisor Model (SAM). SAM is a simulation program for electricity generation projects. It has models for different kinds of renewable energy systems and financial models for residential, commercial, and utility-scale projects. For more details about SAM\'s capabilities, see the SAM website at [https://sam.nrel.gov/](https://sam.nrel.gov/).\n\nFor a short video describing the SAM repositories, see https://youtu.be/E5z1iiZfZ3M.\n\nThe [SAM release notes](https://nrel.github.io/SAM/doc/releasenotes.html) are in https://github.com/NREL/SAM/blob/gh-pages/doc/releasenotes.html.\n\nThe desktop version of SAM for Windows, Mac, or Linux builds from the following open source projects:\n\n* [SSC](https://github.com/nrel/ssc) is a set of ""compute modules"" that simulate different kinds of power systems and financial structures. It can be run directly using the [SAM Software Development Kit](https://sam.nrel.gov/sdk). **This is the source code for SAM\'s models, and is the repository to use for researching the algorithms underlying the models.**\n\n* [LK](https://github.com/nrel/lk) is a scripting language that is integrated into SAM. SAM\'s user interface uses LK to calculate values to display on input pages. 
The user interface includes a script editor that allows users to write their own scripts from the user interface.\n\n* [wxWidgets](https://www.wxwidgets.org/) is a cross-platform graphical user interface platform used for SAM\'s user interface, and for the development tools included with SSC (SDKtool) and LK (LKscript). The current version of SAM uses wxWidgets 3.1.5.\n\n* [WEX](https://github.com/nrel/wex) is a set of extensions to wxWidgets for custom user-interface elements developed specifically for SAM, LK script, and DView.\n\n* [Google Test](https://github.com/google/googletest) is a C++ test framework that enables comprehensive unit-testing of software. Contributions to the project will eventually be required to have associated unit tests written in this framework.\n\n* [jsoncpp](https://github.com/open-source-parsers/jsoncpp) is a C++ library that allows manipulating JSON values, including serialization and deserialization to and from strings.\n\n* [Python](https://www.python.org)/[Miniconda](https://docs.conda.io/) is for integration of Python scripts with the SAM user interface.\n\nThis repository, **SAM**, contains the code for SAM\'s user interface that assigns values to inputs of the SSC compute modules, runs the modules in the correct order, and displays simulation results. It also includes tools for editing LK scripts, viewing time series results, and generating shade data from a 3-dimensional representation of a photovoltaic array or solar hot water collector and nearby shading objects.\n\nThe SAM repository also includes [two libraries](https://github.com/NREL/SAM/tree/develop/Sandia) from Sandia National Laboratories, [stepwise](https://dakota.sandia.gov/content/packages/stepwise), and [LHS](https://dakota.sandia.gov/content/packages/lhs), which are distributed as part of the Dakota platform, licensed under [LGPL](https://www.gnu.org/licenses/lgpl-3.0.en.html).\n\n# Quick Steps for Building SAM\n\nFor detailed build instructions see the [wiki](https://github.com/NREL/SAM/wiki) with specific instructions for:\n\n * [Windows](https://github.com/NREL/SAM/wiki/Windows-Build-Instructions)\n * [Mac](https://github.com/NREL/SAM/wiki/Mac-Build-Instructions)\n * [Linux](https://github.com/NREL/SAM/wiki/Linux-Build-Instructions)\n\nThese are the general quick steps you need to follow to set up your computer for developing SAM:\n\n1. Set up your development tools:\n\n * Windows: Visual Studio 2019 Community or other editions available at https://www.visualstudio.com/.\n * Mac: Apple Command Line Tools, available at https://developer.apple.com/download/more/ (requires Apple ID and password).\n * Linux: g++ compiler available at [https://gcc.gnu.org](https://gcc.gnu.org/) or installed via your Linux distribution.\n\n2. Download the wxWidgets 3.1.5 source code for your operating system from https://www.wxwidgets.org/downloads/.\n\n3. Build wxWidgets.\n\n5. In Windows, create the WXMSW3 environment variable on your computer to point to the wxWidgets installation folder, or in MacOS and Linux, create the dynamic link `/usr//local/bin/wx-config-3` to point to `/path/to/wxWidgets/bin/wx-config`.\n\n6. As you did for wxWidgets, for each of the following projects, clone (download) the repository, build the project, and then (Windows only) create an environment variable pointing to the project folder. Build the projects in the following order, and assign the environment variable for each project before you build the next one:\n\n\n\n\n\n\n\n\n
| Project | Repository URL | Windows Environment Variable |
| ------- | -------------- | ---------------------------- |
| LK | https://github.com/NREL/lk | LKDIR |
| WEX | https://github.com/NREL/wex | WEXDIR |
| SSC | https://github.com/NREL/ssc | SSCDIR |
| SAM | https://github.com/NREL/SAM | SAMNTDIR |
| Google Test | https://github.com/google/googletest | GTEST |
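Beyond the desktop build described above, the SSC compute modules can also be exercised directly from Python via NREL's PySAM package (`pip install nrel-pysam`). PySAM is not covered in this README, so the snippet below is a hedged sketch rather than part of the documented build process; the weather-file path is a placeholder.

```python
# Hedged sketch: running an SSC compute module (PVWatts) from Python via
# NREL's PySAM package. PySAM wraps the same SSC modules that the desktop
# SAM application uses; the weather-file path below is a placeholder.
import PySAM.Pvwattsv8 as pvwatts

model = pvwatts.default("PVWattsNone")  # load a default input configuration
model.SolarResource.solar_resource_file = "path/to/weather.csv"  # placeholder
model.execute()
print("Annual AC output (kWh):", model.Outputs.ac_annual)
```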
\n\n# Contributing\n\nIf you would like to report an issue with SAM or make a feature request, please let us know by adding a new issue on the [issues page](https://github.com/NREL/SAM/issues).\n\nIf you would like to submit code to fix an issue or add a feature, you can use GitHub to do so. Please see [Contributing](CONTRIBUTING.md) for instructions.\n\n# License\nSAM\'s open source code is copyrighted by the Alliance for Sustainable Energy and licensed with BSD-3-Clause terms, found [here](https://github.com/NREL/SAM/blob/develop/LICENSE).\n\nThe stepwise and LHS [LGPL](https://www.gnu.org/licenses/lgpl-3.0.en.html) licensed libraries from Sandia National Laboratories are pre-compiled Fortran libraries that are included in the SAM repository as binaries in the [Sandia folder](https://github.com/NREL/SAM/tree/develop/Sandia). You can replace the binaries with different versions by compiling your own version and replacing the binary/executable viles in the Sandia folder.\n\n# Citing this package\n\nSystem Advisor Model Version 2022.11.21 (2022). SAM source code. National Renewable Energy Laboratory. Golden, CO. Accessed November 28, 2022. https://github.com/NREL/SAM\n'",,"2013/01/10, 02:52:47",3940,BSD-3-Clause,999,15705,"2023/10/25, 05:34:40",148,735,1366,320,0,5,1.1,0.6580807141528733,"2023/06/04, 10:22:22",2022.11.21.r3.ssc.280,2,28,false,,false,true,,,https://github.com/NREL,http://www.nrel.gov,"Golden, CO",,,https://avatars.githubusercontent.com/u/1906800?v=4,,, openTEPES,"Determines the investment plans of new facilities (generators, ESS and lines) for supplying the forecasted demand at minimum cost.",IIT-EnergySystemModels,https://github.com/IIT-EnergySystemModels/openTEPES.git,github,,Energy Modeling and Optimization,"2023/10/22, 16:25:33",25,0,8,true,Python,Comillas Pontifical University,IIT-EnergySystemModels,"Python,Jupyter Notebook",,"b'\n.. image:: https://github.com/IIT-EnergySystemModels/openTEPES/blob/master/doc/img/openTEPES.png\n :target: https://pascua.iit.comillas.edu/aramos/openTEPES/index.html\n :alt: logo\n :align: center\n\n|\n\n.. image:: https://badge.fury.io/py/openTEPES.svg\n :target: https://badge.fury.io/py/openTEPES\n :alt: PyPI\n\n.. image:: https://img.shields.io/pypi/pyversions/openTEPES.svg\n :target: https://pypi.python.org/pypi/openTEPES\n :alt: versions\n\n.. image:: https://img.shields.io/readthedocs/opentepes\n :target: https://opentepes.readthedocs.io/en/latest/index.html\n :alt: docs\n\n.. image:: https://img.shields.io/badge/License-AGPL%20v3-blue.svg\n :target: https://github.com/IIT-EnergySystemModels/openTEPES/blob/master/LICENSE\n :alt: AGPL\n\n.. image:: https://static.pepy.tech/badge/openTEPES\n :target: https://pepy.tech/project/openTEPES\n :alt: pepy\n\n.. 
image:: https://mybinder.org/badge_logo.svg\n :target: https://mybinder.org/v2/gh/IIT-EnergySystemModels/openTEPES-tutorial/HEAD\n\n**Open Generation, Storage, and Transmission Operation and Expansion Planning Model with RES and ESS (openTEPES)**\n\n*Simplicity and Transparency in Power Systems Planning*\n\n\n\nThe **openTEPES** model has been developed at the `Instituto de Investigaci\xc3\xb3n Tecnol\xc3\xb3gica (IIT) `_ of the `Universidad Pontificia Comillas `_.\n\nIt is integrated in the `open energy system modelling platform `_ helping modelling Europe\'s energy system.\n\nIt has been used by the **Ministry for the Ecological Transition and the Demographic Challenge (MITECO)** to analyze the electricity sector in the latest Spanish `National Energy and Climate Plan (NECP) 2023-2030 `_ in June 2023.\n\nReference\n############\n**A. Ramos, E. Quispe, S. Lumbreras** `\xe2\x80\x9cOpenTEPES: Open-source Transmission and Generation Expansion Planning\xe2\x80\x9d `_ SoftwareX 18: June 2022 `10.1016/j.softx.2022.101070 `_.\n\nDescription\n############\n**openTEPES** determines the investment plans of new facilities (generators, ESS, and lines)\nfor supplying the forecasted demand at minimum cost. Tactical planning is concerned with time horizons of 10-20 years. Its objective is to evaluate the future generation, storage, and network needs.\nThe main results are the guidelines for the future structure of the generation, storage, and transmission systems.\n\nThe **openTEPES** model presents a decision support system for defining the integrated generation, storage, and transmission expansion plan of a large-scale electric system at a tactical level,\ndefined as a set of generation, storage, and network investment decisions for future years. The expansion candidate, generators, ESS and lines, are pre-defined by the user, so the model determines\nthe optimal decisions among those specified by the user.\n\nIt determines automatically optimal expansion plans that satisfy simultaneously several attributes. Its main characteristics are:\n\n- **Dynamic**: the scope of the model corresponds to several periods (years) at a long-term horizon, 2030 to 2040 for example.\n\n It represents hierarchically the different time scopes to take decisions in an electric system:\n\n - Load level: 01-01 00:00:00+01:00 to 12-30 23:00:00+01:00\n\n The time division allows a user-defined flexible representation of the periods for evaluating the system operation. Additionally, it can be run with chronological periods of several consecutive hours (bi-hourly, tri-hourly resolution)\n to allow decreasing the computational burden without accuracy loss.\n\n- **Stochastic**: several stochastic parameters that can influence the optimal generation, storage, and transmission expansion decisions are considered. The model considers stochastic\n medium-term yearly uncertainties (scenarios) related to the system operation. 
These operation scenarios are associated with renewable energy sources and electricity demand.\n\nThe objective function incorporates the two main quantifiable costs: **generation, storage, and transmission investment cost (CAPEX)** and **expected variable operation costs (including generation emission and reliability costs) (system OPEX)**.\n\nThe model formulates a stochastic optimization problem including generation, storage, and network binary investment/retirement decisions, generation operation decisions (commitment, startup and shutdown decisions are also binary) and line switching decisions.\nThe capacity expansion considers adequacy system reserve margin constraints.\n\nThe operation model is a **network constrained unit commitment (NCUC)** based on a **tight and compact** formulation including operating reserves with a\n**DC power flow (DCPF)** including **line switching** decisions. Network ohmic losses are considered proportional to the line flow. It considers different **energy storage systems (ESS)**, e.g., pumped-storage hydro,\nbattery, etc. It allows analyzing the trade-off between the investment in generation/storage/transmission and the investment or use of storage capacity.\n\nThe main results of the model can be structured in these topics:\n\n- **Investment**: investment decisions and cost\n- **Operation**: unit commitment, startup, and shutdown of non-renewable units, unit output and aggregation by technologies (thermal, storage hydro, pumped-hydro storage, RES), RES curtailment, line flows, line ohmic losses, node voltage angles, upward and downward operating reserves, ESS inventory levels\n- **Emissions**: CO2 emissions by unit\n- **Marginal**: Locational Short-Run Marginal Costs (LSRMC), water energy value\n- **Economic**: operation, emission, and reliability costs and revenues from operation and operating reserves\n- **Flexibility**: flexibility provided by demand, by the different generation and consumption technologies, and by power not served\n\nResults are shown in csv files and graphical plots.\n\nA careful implementation has been done to avoid numerical problems by scaling parameters, variables and equations of the optimization problem allowing the model to be used for large-scale cases, e.g., the European system with hourly detail.\n\nInstallation\n############\nThere are 2 ways to get all required packages under Windows. We recommend using the Python distribution Miniconda. If you don\'t want to use it or already have an existing Python (version 3.8 | 3.9 **recommended**, 2.7 is supported as well) installation, you can also download the required packages by yourself.\n\n\nMiniconda (recommended)\n=======================\n1. `Miniconda `_. Choose the 64-bit installer if possible.\n\n 1. During the installation procedure, keep both checkboxes ""modify the PATH"" and ""register Python"" selected! If only higher Python versions are available, you can switch to a specific Python Version by typing ``conda install python=``\n 2. **Remark:** if Anaconda or Miniconda was installed previously, please check that python is registered in the environment variables.\n2. **Packages and Solver**:\n\n 1. Launch a new command prompt (Windows: Win+R, type ""cmd"", Enter)\n 2. Install `CBC solver `_ via `Conda `_ by ``conda install -c conda-forge coincbc``. If you have any problem about the installation, you can also follow the steps that are shown in this `link `_.\n 3. 
Install openTEPES via pip by ``pip install openTEPES``\n\nContinue at `Get Started <#get-started>`_ and see the `Tips <#tips>`_.\n\n\nGitHub Repository (the hard way)\n================================\n1. Clone the `openTEPES `_ repository.\n2. Launch the command prompt (Windows: Win+R, type ""cmd"", Enter), or the Anaconda prompt\n3. Set up the path by ``cd ""C:\\Users\\\\...\\openTEPES""``. (Note that the path is where the repository was cloned.)\n4. Install openTEPES via pip by ``pip install .``\n\nSolvers\n###########\n\nGLPK\n================================\nAs an easy option for installation, we have the free and open-source `GLPK solver `_. However, it takes too much time for large-scale problems. It can be installed using: ``conda install -c conda-forge glpk``.\n\nCBC\n================================\nThe `CBC solver `_ is our recommendation if you want a free and open-source solver. For Windows users, the effective way to install the CBC solver is to download the binaries from `this link `_ and copy the *cbc.exe* file to the PATH, i.e. the ""bin"" directory of the Anaconda or Miniconda environment. It can be installed using: ``conda install -c conda-forge coincbc``.\n\nGurobi\n================================\nAnother recommendation is the use of the `Gurobi solver `_. It is a commercial solver, but more powerful than GLPK and CBC for large-scale problems.\nAs a commercial solver it needs a license, which is free of charge for academic usage by signing up on the `Gurobi webpage `_.\nIt can be installed using: ``conda install -c gurobi gurobi``; then request an academic or commercial license. Activate the license on your computer using the ``grbgetkey`` command (you need to be in the university domain if you are installing an academic license).\n\nMosek\n================================\nAnother alternative is the `Mosek solver `_. Note that it is a commercial solver and you need a license for it. Mosek is a good alternative for dealing with QP, SOCP, and SDP problems. You only need to use ``conda install -c mosek mosek`` for installation and request a license (academic or commercial).\nAn academic license can be requested `here `_. Moreover, Mosek provides a `license guide `_. If you request an academic license, you will receive it by email; you only need to place it in the path ``C:\\Users\\(your user)\\mosek`` on your computer.\n\nGet started\n###########\n\nDevelopers\n==========\nBy cloning the `openTEPES `_ repository, you can create branches and propose pull requests. Any help will be much appreciated.\n\nFor simple execution, continue as described for users below.\n\nUsers\n=====\n\nIf you are not planning on developing, please follow the instructions of the `Installation <#installation>`_ section.\n\nOnce installation is complete, `openTEPES `_ can be executed in a test mode by using a command prompt.\nIn the directory of your choice, open and execute the openTEPES_run.py script by using the following on the command prompt (Windows) or Terminal (Linux). 
(Depending on what your standard python version is, you might need to call `python3` instead of `python`.):\n\n ``openTEPES_Main``\n\nThen, four parameters (case, dir, solver, and console log) will be asked for.\n\n**Remark:** at this step, just press Enter for each input and openTEPES will be executed with the default parameters.\n\nAfter this, in a directory of your choice, make a copy of the `9n `_ or `sSEP `_ case to create a new case of your choice but using the current format of the CSV files.\nA proper execution by ``openTEPES_Main`` can be made by introducing the new case and the directory of your choice. Note that the solver is **glpk** by default, but it can be changed to other solvers that Pyomo supports (e.g., gurobi, mosek).\n\nThen, the **results** should be written in a folder named after the case. The results contain plots and summary spreadsheets for multiple optimised energy scenarios, periods and load levels as well as the investment decisions.\n\n**Note that** there is an alternative way to run the model: create a new script **script.py** and write the following:\n\n ``from openTEPES.openTEPES import openTEPES_run``\n\n ``openTEPES_run(<case>, <dir>, <solver>)``\n\nTips\n####\n\n1. Complete documentation of the openTEPES model can be found at ``_, which presents the mathematical formulation, input data and output results.\n2. Try modifying the **TimeStep** in **oT_Data_Parameter_.csv** and see its effect on the results.\n3. Using **0** or **1**, the optimization options can be activated or deactivated in **oT_Data_Option_.csv**.\n4. If you need a nice python editor, think about using `PyCharm `_. It has many features including project management, etc.\n5. We also suggest the use of `Gurobi `_ (for Academics and Researchers) as a solver to deal with MIP and LP problems instead of GLPK.\n\nRun the Tutorial\n################\n\nIt can be run in Binder: \n\n.. image:: https://mybinder.org/badge_logo.svg\n :target: https://mybinder.org/v2/gh/IIT-EnergySystemModels/openTEPES-tutorial/HEAD\n\nExpected Results\n################\n.. 
image:: doc/img/oT_Map_Network_TF2030.png\n :scale: 50 %\n :align: center\n :alt: Network map with investment decisions\n'",",https://doi.org/10.1016/j.softx.2022.101070","2020/07/24, 21:11:41",1187,AGPL-3.0,471,2061,"2023/08/07, 16:00:51",2,52,52,1,79,1,0.1,0.4860365198711063,"2022/02/09, 18:43:24",v4.3.6,0,5,false,,false,false,,,https://github.com/IIT-EnergySystemModels,https://www.iit.comillas.edu/research-area/sadse,Comillas Pontifical University,,,https://avatars.githubusercontent.com/u/68757227?v=4,,, PowerGenome,A tool to quickly and easily create inputs for power systems models.,PowerGenome,https://github.com/PowerGenome/PowerGenome.git,github,"python,capacity-expansion-planning,power-systems",Energy Modeling and Optimization,"2023/09/08, 17:42:19",172,0,27,true,Python,,PowerGenome,"Python,Jupyter Notebook",,"b""# PowerGenome\n\n[![The project has reached a stable, usable state and is being actively developed.](https://www.repostatus.org/badges/latest/active.svg)](https://www.repostatus.org/#active)\n[![code style black](https://img.shields.io/badge/code%20style-black-000000.svg)](https://github.com/psf/black)\n[![DOI](https://zenodo.org/badge/DOI/10.5281/zenodo.4426097.svg)](https://doi.org/10.5281/zenodo.4426096)\n[![pytest](https://github.com/PowerGenome/PowerGenome/actions/workflows/pytest.yml/badge.svg)](https://github.com/PowerGenome/PowerGenome/actions/workflows/pytest.yml)\n[![codecov](https://codecov.io/gh/PowerGenome/PowerGenome/branch/master/graph/badge.svg?token=7KJYLE3jOW)](https://codecov.io/gh/PowerGenome/PowerGenome)\n\nThe code and data for PowerGenome are under active development and some changes may break existing functions. Keep up to date with major code and data releases by joining [PowerGenome on groups.io](https://groups.io/g/powergenome). And **check out the growing documentation on the [Wiki](https://github.com/PowerGenome/PowerGenome/wiki)** for helpful background information.\n\nPower system optimization models can be used to explore the cost and emission implications of different regulations in future energy systems. One of the most difficult parts of running these models is assembling all the data. A typical model will define several regions, each of which need data such as:\n\n- All existing generating units (perhaps grouped into a few discrete clusters within each region)\n- Transmission constraints between regions\n- Hourly load profiles (including new loads from vehicle and building electrification)\n- Hourly generation profiles for wind & solar\n- Cost estimates for new generating units\n\nBecause computational complexity and run times increase as the number of regions and generating unit clusters increases, a user might want only want to disaggregate regions and generating units close to the primary region of interest. For example, a study focused on clean electricity regulations in New Mexico might combine several states in the Pacific Northwest into a single region while also splitting Arizona combined cycle units into multiple clusters.\n\nThe goal of PowerGenome is to let a user make all of these choices in a settings file and then run a single script that generates input files for the power system model. PowerGenome currently generates input files for [GenX](https://energy.mit.edu/wp-content/uploads/2017/10/Enhanced-Decision-Support-for-a-Changing-Electricity-Landscape.pdf), and we hope to expand to other models in the near future.\n\n## Data\n\nPowerGenome uses data from a number of different sources, including EIA, NREL, and EPA. 
The data are accessed through a combination of sqlite databases, CSV files, and parquet data files. All data files [are available here](https://drive.google.com/drive/folders/1K5GWF5lbe-mKSTUSuJxnFdYGCdyDJ7iE?usp=sharing). \n\n1. EIA data on existing generating units are already compiled into a [single sqlite database (PUDL)](https://doi.org/10.5281/zenodo.3653158) (see instructions for using it below). This file is available at the link above or you can download it from the Zenodo repository.\n2. A second sqlite database (`pg_misc_tables_efs.sqlite`) has tables with new resource costs from NREL ATB, transmission constraints between IPM regions from EIA, and hourly demand within each IPM region derived from NREL or FERC data.\n3. The hourly incremental demand for different flexible demand technologies, and stock values across a range of projection scenarios (`efs_files_utc`).\n\n## PUDL Dependency\n\nThis project pulls data from [PUDL](https://github.com/catalyst-cooperative/pudl). As such, it requires installation of PUDL to access a normalized sqlite database and some of the convienience PUDL functions.\n\n`catalystcoop.pudl` is included in the `environment.yml` file and will be installed automatically in the conda environment (see instructions below). Catalyst Cooperative will be creating versioned data releases of PUDL, which can be [accessed on Zenodo](https://doi.org/10.5281/zenodo.3653158). Download the zip file from Zenodo, unzip it, and find the sqlite database under `pudl_data/sqlite/pudl.sqlite`. Note that the version of `catalystcoop.pudl` software may change based on the database version you use. Look on the right-hand side of the zenodo archive to see what software version was used to compile the data. If the version in your conda environment does not match the version used to compile the data, you can change it in the `environment.yml` file or install a [different version](https://docs.conda.io/projects/conda/en/latest/user-guide/tasks/manage-pkgs.html#installing-packages) using `mamba install catalystcoop.pudl=`.\n\n![PUDL software version for database](/docs/_static/pudl_version.png)\n\n## Installation from GitHub\n\n1. Clone this repository to your local machine and navigate to the top level (PowerGenome) folder.\n\n2. Create a conda environment named `powergenome` using the provided `environment.yml` file. If you don't already use conda it is easiest to download and install [Mambaforge](https://github.com/conda-forge/miniforge#mambaforge), which will install conda with mamba in the `base` environment. See [this description](https://bioconda.github.io/faqs.html#what-s-the-difference-between-miniconda-miniforge-mambaforge-micromamba) for more information on the difference between different ways to install conda and mamba. Conda usually fail to resolve dependencies in under a day so I highly recommend that you either start with Mambaforge or [install mamba in your `base` environment](https://mamba.readthedocs.io/en/latest/installation.html#existing-conda-install) and use it instead.\n\n```sh\nmamba env create -f environment.yml\n```\n\n3. Activate the `powergenome` environment.\n\n```sh\nconda activate powergenome\n```\n\n4. pip-install an editable version of this project\n\n```sh\npip install -e .\n```\n\n5. 
Download the PUDL database [from Zenodo](https://doi.org/10.5281/zenodo.3653158) or the [PowerGenome data repository](https://drive.google.com/drive/folders/1K5GWF5lbe-mKSTUSuJxnFdYGCdyDJ7iE?usp=sharing), unzip it, and copy the `/pudl_data/sqlite/pudl.sqlite` to wherever you would like to store PowerGenome data on your computer. The zip file contains other data sets that aren't needed for PowerGenome and can be deleted. Note that as of May 2023 the most recent version of this database (v2022.11.30) is compatible with `catalystcoop.pudl` version v2022.11.30 and may not work if an earlier software version is included in your conda environment.\n\n6. Download the additional PowerGenome database from the [PowerGenome data repository](https://drive.google.com/drive/folders/1K5GWF5lbe-mKSTUSuJxnFdYGCdyDJ7iE?usp=sharing). It includes NREL ATB cost data, transmission constraints between IPM regions, and hourly demand for each IPM region. Hourly demand is based on a 2012 weather year and was constructed either directly from FERC 714 data (`load_curves_ferc`) or from NREL EFS data (`load_curves_nrel_efs`) that also sources back to FERC 714. The NREL load curves, which separate hourly demand by sector and subsector, are now the default source for load curves in PowerGenome. See [the wiki](https://github.com/PowerGenome/PowerGenome/wiki/Settings-files#demand) for more information. These files will eventually be provided through a data repository with citation information.\n\n7. Download the appropriate renewable resource data files from the [PowerGenome data repository](https://drive.google.com/drive/folders/1K5GWF5lbe-mKSTUSuJxnFdYGCdyDJ7iE?usp=sharing). There is a single set of generation profiles and resource group folders specific to different regional aggregations. Read through the [included README](https://docs.google.com/document/d/1p_zDk6ng4tvgL1v1ZAXkvY9b34FmaLqdWFlJmUHP1Eo/edit?usp=share_link) for more background. This folder contains:\n\n- `generation_profiles` can be saved in a single place and used across multiple studies.\n- Each of the folders under `resource_groups` has CSV files that tell PowerGenome the metro that each potential wind/solar site will deliver power to based on a set of regional aggregations. Use the corresponding regional aggregations in your settings file. You can request new resource group files for different regional aggregations on the PowerGenome [repository discussion page](https://github.com/PowerGenome/PowerGenome/discussions)\n\n8. Download data files derived from NREL's EFS from the [PowerGenome data repository](https://drive.google.com/drive/folders/1K5GWF5lbe-mKSTUSuJxnFdYGCdyDJ7iE?usp=sharing). These provide hourly demand profiles for growing electrification technologies like electric vehicles and heat pumps and are used to both build up demand profiles in the future and create flexible demand resources that can shift their load.\n\n9. Download distributed generation profiles from the [PowerGenome data repository](https://drive.google.com/drive/folders/1K5GWF5lbe-mKSTUSuJxnFdYGCdyDJ7iE?usp=sharing) compiled from NREL Cambium 2022 scenarios.\n\n9. Create the file `PowerGenome/powergenome/.env`. 
In this file, add:\n\n- `PUDL_DB=YOUR_PATH_HERE` (your path to the PUDL database downloaded in step 5)\n- `PG_DB=YOUR_PATH_HERE` (your path to the additional PowerGenome data downloaded in step 6)\n- `RESOURCE_GROUP_PROFILES=YOUR_PATH_HERE` (your path to the folder with hourly wind/solar generation parquet files)\n- `EFS_DATA=YOUR_PATH_HERE` (your path to the folder with EFS derived data files)\n- `DISTRIBUTED_GEN_DATA=YOUR_PATH_HERE` (your path to the folder with distributed generation profiles)\n- OPTIONAL: `RESOURCE_GROUPS=YOUR_PATH_HERE` (your path to the resource groups data for a project -- **this can be included in your settings file instead of the .env file**)\n\nQuotation marks are only needed if your values contain spaces. The `.env` file is included in `.gitignore` and will not be synced with the repository.\n\n## Installation with a packaged version (pip/conda-forge)\n\nInstalling Powergenome with pip has only been tested within a conda environment but it should work in other environment management systems. Make sure that you have an updated version of pip installed. If you hit dependency errors I suggest trying to install them using mamba or conda. PowerGenome has `catalystcoop.pudl` as a dependency, which has a large number of its own dependencies. I have not (yet) had to install `catalystcoop.pudl` using mamba but doing so may help clear up errors.\n\nDepending on your operating system you might also have issues installing some other packages from pip. The example code below is what works for me on a Mac, where python-snappy fails to build wheels.\n\n```sh\n(base) conda create -n powergenome python=3.10 pip python-snappy\n(base) conda activate powergenome\n(powergenome) pip install powergenome\n```\n\nPowerGenome has been submitted to conda-forge but is not yet available.\n\n\nIf you are installing a packaged version of PowerGenome you won't be able to easily use a .env file. Instead, add the environment parameters (`PUDL_DB`, `PG_DB`, etc) to a YAML file in the same folder as the rest of your settings. It doesn't really matter which file these parameters are included in but creating a new file such as `env_params.yml` will help keep them separate from other settings parameters that might be shared with other PowerGenome users.\n\n## Running code\n\n### Suggested folder structure\n\nIt is best practice to set up project folders outside of the cloned repository so that git doesn't track any new/changed files within the upper-level `PowerGenome` folder. Try copying one of the example systems (settings file and extra inputs) and modifying it. Copy the `notebooks` folder into your project folder, change the path to the settings file as needed, and run code in the notebooks. This can also be a good way to learn how data are created in PowerGenome and debug problem.\n\nKeeping project folders separate from the cloned `PowerGenome` folder will also make it easier to pull changes as they are released.\n\n### Example systems\n\nA few example systems are included under `PowerGenome/example_systems`. Each system has settings files in a folder (`settings`) and a folder with extra user inputs (`extra_inputs`). The different example systems are not meant to be accurate for real-world analysis, so please do not blindly use the external data files included with them in your own studies!\n\n### Settings\n\nSettings are controlled in a set of YAML files within a folder or combined into a single file. 
An example folder of settings files (`settings`) and a folder with extra user inputs (`extra_inputs`) are included in each of the example systems. Scenario options across different planning years are defined in the file `test_scenario_inputs.csv`. Documentation on extra inputs is included in the folder of each example system.\n\n### Example notebooks\n\nA series of example notebooks in [`PowerGenome/notebooks`](/notebooks) describes how to access different functions within PowerGenome to create resource clusters, variable generation profiles, fuel costs, hourly demand, and transmission constraints. They include a description of how the data are compiled and the settings parameters that are required for each type of data.\n\n### Command line interface\n\nThe outputs are all formatted for GenX; we hope to make the data formatting code more modular to allow users to easily switch between outputs for different power system models.\n\nFunctions from each module can be imported and used in an interactive environment (e.g. JupyterLab). Examples of how to load data in this way are included in `PowerGenome/notebooks`. To run from the command line, navigate to a project folder that contains a settings file and extra inputs (e.g. `myproject/powergenome`), activate the `powergenome` conda environment, and use the command `run_powergenome_multiple` with flags for the settings file name and where the results should be saved. Since the `powergenome` package is installed in the `powergenome` conda environment, you can run the command line function from anywhere on your computer (not just within the cloned `PowerGenome` folder).\n\n```sh\nrun_powergenome_multiple --settings_file settings --results_folder test_system\n```\n\nThe command line arguments `--settings_file` and `--results_folder` can be shortened to `-sf` and `-rf` respectively. For all options, run:\n\n```sh\nrun_powergenome_multiple --help\n```\n\nA folder with extra user inputs is required when using the `run_powergenome_multiple` command. The name of this folder is defined in the settings YAML file with the `input_folder` parameter. Look at the files in each example system for test cases to follow.\n\nIf you have previously installed PowerGenome and the `run_powergenome_multiple` command doesn't work, try reinstalling it using `pip install -e .` as described above. If you downloaded the custom PUDL database before May of 2020, some errors may be resolved by downloading a new version.\n\n## Licensing\n\nPowerGenome is released under the [MIT License](https://opensource.org/licenses/MIT). Most data inputs are from US government sources (EIA, EPA, FERC, etc), which should not be [subject to copyright in the US](https://www.usa.gov/government-works). Hourly FERC demand data has been cleaned using [techniques](https://github.com/truggles/EIA_Cleaned_Hourly_Electricity_Demand_Code) developed by Tyler Ruggles and David Farnham, and allocated to IPM regions using [methods developed](https://github.com/catalyst-cooperative/electricity-demand-mapping) by Catalyst Cooperative. Hourly generation profiles for wind and solar resources were created by [Vibrant Clean Energy](https://www.vibrantcleanenergy.com/) and provided without usage restrictions. All PowerGenome data outputs are released under the [CC-BY-4.0](https://creativecommons.org/licenses/by/4.0/legalcode) license.\n\n## Contributing\n\nContributions are welcome! There is significant work to do on this project and additional perspective on user needs will help make it better. 
If you see something that needs to be improved, [open an issue](https://github.com/gschivley/PowerGenome/issues). If you have questions or need assistance, join [PowerGenome on groups.io](https://groups.io/g/powergenome) and post a message there.\n\nPull requests are always welcome. To start modifying/adding code, make a fork of this repository, create a new branch, and [submit a pull request](https://help.github.com/en/github/collaborating-with-issues-and-pull-requests/creating-a-pull-request-from-a-fork).\n\nAll code added to the project should be formatted with [black](https://black.readthedocs.io/en/stable/). After making a fork and cloning it to your own computer, run `pre-commit install` to [install the git hook scripts](https://pre-commit.com/#3-install-the-git-hook-scripts) that will run every time you make a commit. These hooks will automatically run `black` (in case you forgot), fix trailing whitespace, check YAML formatting, etc.\n""",",https://doi.org/10.5281/zenodo.4426096,https://doi.org/10.5281/zenodo.3653158,https://doi.org/10.5281/zenodo.3653158,https://doi.org/10.5281/zenodo.3653158","2019/07/18, 17:33:40",1560,MIT,284,1557,"2023/10/24, 14:12:37",62,149,217,48,1,6,0.0,0.2347560975609756,"2023/09/08, 17:28:41",v0.6.1,2,11,false,,false,false,,,https://github.com/PowerGenome,,,,,https://avatars.githubusercontent.com/u/76453163?v=4,,, load_forecasting,"Load forecasting on Delhi area electric power load using ARIMA, RNN, LSTM and GRU models.",pyaf,https://github.com/pyaf/load_forecasting.git,github,"machine-learning,rnn,lstm,gru,arima,wma,ses,sma,electric-load-forecasting,time-series-forecasting",Energy Modeling and Optimization,"2019/07/16, 13:13:40",415,0,90,true,Jupyter Notebook,,,"Jupyter Notebook,Python,HTML,JavaScript,CSS,Shell",,"b'# Electric Load Forecasting\n\nUndergraduate project on short-term electric load forecasting. 
Data was taken from the [State Load Despatch Center, Delhi](www.delhisldc.org/) website, and multiple time series algorithms were implemented during the course of the project.\n\n### Models implemented:\n\nThe `models` folder contains all the algorithms/models implemented during the course of the project:\n\n* Feed forward Neural Network [FFNN.ipynb](models/FFNN.ipynb)\n* Simple Moving Average [SMA.ipynb](models/SMA.ipynb)\n* Weighted Moving Average [WMA.ipynb](models/WMA.ipynb)\n* Simple Exponential Smoothing [SES.ipynb](models/SES.ipynb)\n* Holt-Winters [HW.ipynb](models/HW.ipynb)\n* Autoregressive Integrated Moving Average [ARIMA.ipynb](models/ARIMA.ipynb)\n* Recurrent Neural Networks [RNN.ipynb](models/RNN.ipynb)\n* Long Short Term Memory cells [LSTM.ipynb](models/LSTM.ipynb)\n* Gated Recurrent Unit cells [GRU.ipynb](models/GRU.ipynb)\n\nScripts:\n\n* `aws_arima.py` fits an ARIMA model on the last month\'s data and forecasts the load for each day.\n* `aws_rnn.py` fits RNN, LSTM and GRU models on the last 2 months\' data and forecasts the load for each day.\n* `aws_smoothing.py` fits SES, SMA and WMA models on the last month\'s data and forecasts the load for each day.\n* `aws.py` a scheduler to run all three scripts above every day at 00:30 IST.\n* `pdq_search.py` for grid search of the ARIMA model\'s hyperparameters on the last month\'s data.\n* `load_scrap.py` scrapes day-wise load data of Delhi from the [SLDC](https://www.delhisldc.org/Loaddata.aspx?mode=17/01/2018) site and stores it in CSV format.\n* `wheather_scrap.py` scrapes day-wise weather data of Delhi from the [wunderground](https://www.wunderground.com/history/airport/VIDP/2017/8/1/DailyHistory.html) site and stores it in CSV format.\n\nThe `server` folder contains Django webserver code, developed to show the implemented algorithms and compare their performance. All the implemented algorithms are being used to forecast today\'s Delhi electricity load [here](http://forecast.energyandsystems.com) [now deprecated]. The project report can be found in the [Report](Report) folder. \n\n![A screenshot of the website](screenshots/website.png ""A screenshot of the website"")\n\n\n### Team Members:\n\n* Ayush Kumar Goyal\n* Boragapu Sunil Kumar\n* Srimukha Paturi\n* Rishabh Agrahari\n'",,"2018/01/19, 06:26:42",2105,MIT,0,70,"2022/11/21, 21:22:54",16,40,42,1,337,12,0.0,0.0,,,0,1,false,,false,false,,,,,,,,,,, REopt_Lite_API,"Offers a subset of features from NREL's more comprehensive REopt model. Both models provide concurrent, multiple technology integration and optimization capabilities to help organizations meet their cost savings and energy performance goals.",NREL,https://github.com/NREL/REopt_API.git,github,"nrel,renewable-energy,optimization,reopt-lite-api,reopt-api,reoptjl",Energy Modeling and Optimization,"2023/10/06, 22:52:59",71,0,14,true,Python,National Renewable Energy Laboratory,NREL,"Python,PowerBuilder,Julia,Shell,HTML,Ruby,Roff,Dockerfile,Procfile",https://developer.nrel.gov/docs/energy-optimization/reopt,"b""REopt\xc2\xae API\n=========\nThe REopt\xc2\xae model in this repository is a free, open-source, development version of the [REopt API](https://developer.nrel.gov/docs/energy-optimization/reopt/). A production version of the REopt API lies behind the [REopt Web Tool](https://reopt.nrel.gov/tool).\n\nThe REopt API provides concurrent, multiple technology integration and optimization capabilities to help organizations meet their cost savings, energy performance, resilience, and emissions reduction goals. 
Formulated as a mixed integer linear program, the REopt model recommends an optimally sized mix of renewable energy, conventional generation, and energy storage technologies; estimates the net present value of implementing those technologies; and provides a dispatch strategy for operating the technology mix at maximum economic efficiency. A list of the REopt model capabilities is provided [here](https://reopt.nrel.gov/about/capabilities.html). Example projects using REopt can be viewed [here](https://reopt.nrel.gov/projects/).\n\n## Should I be using or modifying the REopt API or the REopt Julia Package? \n\nThe REopt Julia package will soon become the backend of the REopt API. That means that the optimization model will be contained in [REopt.jl](https://github.com/NREL/REopt.jl), and that a user could supply the same inputs to the API and Julia package and get the same results. So which should you use? \n\n**1. When and how to _use_ the REopt Julia package:**\n- You want to be able to use the REopt model without incorporating an API call (and associated rate limits).\n- You want slightly more flexibility in how you interact with model inputs, optimization parameters, and run types.\n- You can install an optimization solver for use with REopt.\n- You do not need your results saved in an external database. \n- **How do I use the REopt Julia package?:** see instructions [here](https://nrel.github.io/REopt.jl/dev/).\n \n**2. When and how to _modify_ the REopt Julia package:**\n- You want to make changes to the REopt model beyond modifying input values (e.g., add a new technology).\n- You want to suggest a bug fix in the REopt model.\n- **How do I modify the REopt Julia package?:** get the (free, open-source) model [here](https://github.com/NREL/REopt.jl) and see additional instructions [here](https://nrel.github.io/REopt.jl/dev/).\n \n**3. When and how to _use_ the REopt_API:**\n- You do not want to modify the code or host the API on your own server. \n- You do not want to install or use your own optimization solver (simply POSTing to the REopt API does not require a solver, whereas using the Julia package does).\n- You want to be able to access or share results saved in a database using a runuuid.\n- You want to be able to view your API results in the REopt web tool using a runuuid.\n- **How do I use the REopt API?:** you can access our production version of the API via the [NREL Developer Network](https://developer.nrel.gov/docs/energy-optimization/reopt/). You can view examples of using the API in the [REopt-API-Analysis Repo](https://github.com/NREL/REopt-API-Analysis/wiki).\n\n**4. When and how to _modify_ the REopt_API:**\n- You have made changes to the REopt Julia package that include modified inputs or outputs, and want to reflect those in the REopt API.\n- You want to suggest a bug fix in the REopt API or add or modify validation or API endpoints.\n- You want to host the API on your own servers.\n- **How do I modify the REopt API?:** See this repo's [Wiki](https://github.com/NREL/reopt_api/wiki) for detailed instructions on installing and developing the API. 
Also, our [contributing guidelines](https://github.com/NREL/reopt_api/blob/develop/CONTRIBUTING.md) provide guidelines for suggesting improvements, creating pull requests, and more.\n""",,"2018/12/17, 21:24:59",1772,CUSTOM,1250,4541,"2023/10/24, 03:12:01",52,442,482,149,1,11,0.5,0.6827859978347167,"2023/10/11, 17:51:27",v3.1.1,0,18,false,,false,true,,,https://github.com/NREL,http://www.nrel.gov,"Golden, CO",,,https://avatars.githubusercontent.com/u/1906800?v=4,,, pandapower,"An easy to use open source tool for power system modeling, analysis and optimization with a high degree of automation.",e2nIEE,https://github.com/e2nIEE/pandapower.git,github,"powerflow,python,power,system,analysis,optimization,short-circuit,state-estimation,loadflow",Energy Modeling and Optimization,"2023/10/24, 08:03:29",675,192,126,true,Python,,e2nIEE,"Python,Jupyter Notebook,MATLAB,HTML,Shell,Julia",https://www.pandapower.org,"b""\n.. image:: https://www.pandapower.org/images/pp.svg\n :target: https://www.pandapower.org\n :alt: logo\n\n|\n\n.. image:: https://badge.fury.io/py/pandapower.svg\n :target: https://pypi.python.org/pypi/pandapower\n :alt: PyPI\n \n.. image:: https://img.shields.io/pypi/pyversions/pandapower.svg\n :target: https://pypi.python.org/pypi/pandapower\n :alt: versions\n\n.. image:: https://readthedocs.org/projects/pandapower/badge/\n :target: http://pandapower.readthedocs.io/\n :alt: docs\n\n.. image:: https://codecov.io/github/e2nIEE/pandapower/coverage.svg?branch=master\n :target: https://app.codecov.io/github/e2nIEE/pandapower?branch=master\n :alt: codecov\n \n.. image:: https://api.codacy.com/project/badge/Grade/e2ce960935fd4f96b4be4dff9a0c76e3\n :target: https://app.codacy.com/gh/e2nIEE/pandapower?branch=master\n :alt: codacy\n \n.. image:: https://img.shields.io/badge/License-BSD%203--Clause-blue.svg\n :target: https://github.com/e2nIEE/pandapower/blob/master/LICENSE\n :alt: BSD\n\n.. image:: https://pepy.tech/badge/pandapower\n :target: https://pepy.tech/project/pandapower\n :alt: pepy\n\n.. image:: https://mybinder.org/badge_logo.svg\n :target: https://mybinder.org/v2/gh/e2nIEE/pandapower/master?filepath=tutorials\n :alt: binder\n \n\n\n\n \npandapower is an easy to use network calculation program aimed to automate the analysis and optimization of power\nsystems. It uses the data analysis library `pandas `_ and is compatible with the commonly\nused MATPOWER / PYPOWER case format. pandapower allows using different solvers including an improved Newton-Raphson\npower flow implementation, all `PYPOWER `_ solvers, the Newton-Raphson power\nflow solvers in the C++ library `lightsim2grid `_, and the\n`PowerModels.jl `_ library.\n\nMore information about pandapower can be found on `www.pandapower.org `_:\n\nAbout pandapower:\n\n- `Power System Modeling `_\n- `Power System Analysis `_\n- `Citing pandapower `_\n\nGetting Started:\n\n- `Installation Notes `_\n- `Minimal Example `_\n- `Interactive Tutorials `_\n- `Documentation `_\n\nIf you are interested in the latest pandapower developments, subscribe to our `mailing list `_!\n\n.. image:: https://simbench.de/wp-content/uploads/2019/01/logo.png\n :target: https://www.simbench.net\n :alt: SimBench_logo\n\nTo get realistic load profile data and grid models across all voltage levels that are ready to\nbe used in pandapower, have a look at the *SimBench* `project website `_ or\n`on GitHub `_.\n\n.. 
image:: https://www.pandapipes.org/images/pp.svg\n :target: https://www.pandapipes.org\n :width: 270pt\n :alt: pandapipes_logo\n\nIf you want to model pipe networks (heat, gas or water) as well, we recommend\npandapower's sibling project *pandapipes* (`website `_, `GitHub repository `_).\n\n|\n\npandapower is a joint development of the research group Energy Management and Power System Operation, University of Kassel and the Department for Distribution System\nOperation at the Fraunhofer Institute for Energy Economics and Energy System Technology (IEE), Kassel.\n\n.. image:: http://www.pandapower.org/images/contact/Logo_e2n.png\n :target: https://www.uni-kassel.de/eecs/en/sections/energiemanagement-und-betrieb-elektrischer-netze/home\n :width: 500\n\n|\n\n.. image:: http://www.pandapower.org/images/contact/Logo_Fraunhofer_IEE.png\n :target: https://www.iee.fraunhofer.de/en.html\n :width: 500\n\n|\n\nWe welcome contributions to pandapower of any kind - if you want to contribute, please check out the `pandapower contribution guidelines `_.\n""",,"2017/01/12, 13:27:53",2477,CUSTOM,1066,8105,"2023/10/24, 08:03:33",181,1142,1857,370,1,17,1.3,0.8036717062634989,"2023/05/12, 15:47:47",v2.13.1,0,88,false,,false,true,"bdonon/powerdata-view,Dellintel98/MILP-Nanogrid-optimisation,skloibhofer/fh-pandapower-cosimulation,AlexDeLos/GNN_OPF,ktehranchi/ScenarioSelection_PCS,TTyyds-art/MPP_Powersystem,gridcapacitymap/gridcapacity,JaHeRoth/SimplifiedDOPFSolver,heig-vd-iese/pandapower-heig-ui,heig-vd-iese/test_sphinx,KenHeLiqin/outreach-automation-streamlit,Nieuwe-Warmte-Nu/template-python,marichka-dobko/Modal-Hackathon-2023,leonlan/tree-partitioning,fkie-cad/powerowl,sustainable-computing/VPP-Contracts,EsaLaboratory/MARLEM,aiplan4eu/AIPlan4Grid,tomwebmaster/python_test,chaimaa-ess/project,viktor-ktorvi/thesis-proposal,codet-glitch/DC_Power_Flow,chaimaa-ess/oplem1,Fernando3161/pandapower_ml,vjugor1/OptimalControlScenarioApproximation,kravitsjacob/mocot,facafile/Honeypot-Bachelor-Thesis,EV4EU/ev-profiling,smartgridadsc/CyberRange,ERIGrid2/toolbox_doe_sa,KIT-IAI/ThreePhaseHouseGrid,e2nIEE/pandapower-qgis,israelsgalaxy/load-flow,patrik-bartak/L2RPN-Delft-2023-Team-Conceptual,ERIGrid2/JRA-2.1.3-STL,J27avier/EvCharge,leeraiyan/-EH2745-Assignment-2-raiyan-soundhariya-,tum-ens/pylovo,vjugor1/IS-SA,PowerGridModel/IEC-CIM-conversion-services-beta,EsaLaboratory/OPLEM,GridMaster2022/event-chain,GridMaster2022/initialization,mile888/hybrid_gp,fkie-cad/wattson,mukeswebrian/Parallel-PSO-OPF-Scalability,leeraiyan/EH2745-Assignment-1-raiyan-soundhariya,Without-wax/Split-based-sampling,bdonon/powerdata-gen,renatotomaz/fullStackPowerFlow,CarachinoAlessio/ML-techniques-for-State-Estimation,xiaohongri/grid_control,antoshka17/tph,stbalduin/pyrate-analyse,aloytag/electrical-grid-simulator,nensanc/PowerSystem,laure-crochepierre/ml4ps,bdonon/ml4ps,DKMahto/opf,the-guti/opf,aqwrfv/grid2viz,thanever/grid2viz,1217720084/mgrid,djmartingale/mgrid,qytreinforcementlearning/grid2viz,halbupt/grid2viz,fmarten99/grid2viz,RanZhu1989/grid2viz,marota/grid2viz,kod-rs/attest_topology,batrdos/pandaPowerFP,floracharbo/MARL_local_electricity,joaojosemfigueiredo/Sistema-Automatizado-para-Analise-de-Solicitacoes-de-Acesso-de-Minigeradores-em-Geracao-Distribuida,arpkoirala/grid_visualization_tool,FlorianDe/power-grid-gans,jinningwang/andes,cristian-castro-a/global-avg-temperature-forecasting,cjdjr/StateGridDispatch,NonlinearArtificialIntelligenceLab/NonlinearArtificialIntelligenceLab.github.io,marota/Grid2viz-dataset-ICAPS,bdonon/updating_case60nordic,manuv
arkey/GElectrical,EsaLaboratory/OPENGUI,evgenytsydenov/ieee118_power_flow_data,aeonu/single-line-diagram,LeonardoCampos-EE/power_sym,dpinney/omf,IRT-SystemX/LIPS,prasang-gupta/L2RPN-2020-robustness,Quentin36250/PRE_Quentin_Garsault,FraunhoferIEE/curriculumagent,MauricioSalazare/tensorpowerflow,heartymcfly/pandapipes,AzamatIlyasov/pandafp,jamilsonjr/AI-to-forecast-constraints-in-the-energy-systems,aaavendanop/HEPISA,NVIDIA/energy-sdk-l2rpn,hat-edu/hat-aimm-anomaly,emiliefro/pandapower-notebook,HDApowersystems/PowerSystemDemonstator,hat-edu/hat-aimm-forecast,Mleyliabadi/LIPS,seifou23i/Grid2Bench,CURENT2/andes,e2nIEE/pandahub,CURENT/andes,jnmelchorg/pyensys,edxu96/mgrid,eddie-atkinson/opendss-comparison,DecodEPFL/eiv-grid-id,LeonardoCampos-EE/MHeuristicORPD,lduran2/cis4xxx-cyber_physical_systems_intro,cmacana/pandapower-exporter,hoangtranthe/JRA-1.1-multi-energy,enlite-ai/maze-l2rpn-2021-submission,Adrianonsare/Geospatial-PowerAnalysis,marota/ExpertOp4Grid,jasjit4u/Evolve,zepben/evolve-tutorials,horacioMartinez/L2RPN,rppiot2021/power-grid-simulator,Adrianonsare/EnergyAnalytics,laure-crochepierre/reinforcement-based-grammar-guided-symbolic-regression,juuhnei/impact_of_distance_based_value_loss,ERIGrid2/benchmark-model-multi-energy-networks,PowerGridModel/power-grid-model-benchmark,eddie-atkinson/honours-research,andrevks/microblog-flask,maximilianboehm/ba_boehm,amahoro12/anne,marin-jovanovic/power-grid-simulator,dgusain1/energysim,m-junaidaslam/Pandapower-Youtube-Tutorials,eh-tien/L2RPN_submission_simple,marin-jovanovic/scada-demo,SeafyLiang/deepLearning_study,JuanDavidG1997/iota_microgrid,russelljjarvis/sirg_geo_net,hepengli/multiagent-powergrid,wobniarin/OPF_MV_basic,JuanDavidG1997/microgridDLT,hat-open/aimm,hepengli/multiagent-microgrid-envs,Luiz-Phillip/State_Estimation,pnzo/pandapower,nitbharambe/cap-map,cmacana/power-system-utils,zepben/pp-translator,annieefu/power-flow-tool,russelljjarvis/CoauthorNetVis,vidakDK/gs-power,AlenBernadic/pandaRL,ckittl/transformerCalculationValidation,rte-france/grid2viz,benjaminpillot/greece,AsprinChina/L2RPN_NIPS_2020_a_PPO_Solution,tkarndt/commtailment,jurasofish/loadflow_nr,mugoh/l2rpn,rossrochford/coding_assignments,vinault/Grid2Kpi,ElectricBlocks/ebpp,EfimovIN/L2RPN,ZM-Learn/L2RPN_WCCI_a_Solution,jurasofish/diff_loadflow,vermashresth/power-grid-neurips,Jst3p/ASCIIDeluxe,roksikonja/thesis-code,BDonnot/ChroniX2Grid,cerealkill/pandapower_api,thediavel/RL-ThesisProject-ABB,ChangoBuitrago/panda_sim,ErnestoZarza/pandaspower_simulation,raselTarikul/simulation,BDonnot/lightsim2grid,thomaswolgast/gaopf,e2nIEE/pandapipes,EPGOxford/OPEN,Pyosch/vpplib,BDonnot/data_generation,gridsingularity/gsy-e,0x3bfc/Pflow,rushivarun/Short_circuit_analysis,Damowerko/opf,rte-france/Grid2Op,marcialguerra/pandapower-testing,comnetstud/ARIES,e2nIEE/simbench,bmeyers/VirtualMicrogridSegmentation,taqen/asgrids,e2nIEE/pandapower-paper,AIT-IES/pandapowerFMU",,https://github.com/e2nIEE,,"Kassel, Germany",,,https://avatars.githubusercontent.com/u/40853245?v=4,,, urbs,A linear optimization model for distributed energy systems.,tum-ens,https://github.com/tum-ens/urbs.git,github,"python,pyomo,pandas,optimisation-model,mathematical-modelling,linear-programming,energy-system",Energy Modeling and Optimization,"2023/07/18, 13:18:39",166,0,17,true,Python,Chair of Renewable and Sustainable Energy Systems,tum-ens,"Python,Jupyter Notebook",,"b'# urbs\n\nurbs is a [linear programming](https://en.wikipedia.org/wiki/Linear_programming) optimisation model for capacity expansion planning and unit 
commitment for distributed energy systems. Its name, Latin for city, stems from its origin as a model for optimisation for urban energy systems. Since then, it has been adapted to multiple scales from neighbourhoods to continents.\n\n[![Documentation Status](https://readthedocs.org/projects/urbs/badge/?version=latest)](http://urbs.readthedocs.io/en/latest/?badge=latest)\n[![DOI](https://zenodo.org/badge/DOI/10.5281/zenodo.594200.svg)](https://doi.org/10.5281/zenodo.594200)\n\n## Features\n\n * urbs is a linear programming model for multi-commodity energy systems with a focus on optimal storage sizing and use.\n * It finds the minimum cost energy system to satisfy given demand time series for possibly multiple commodities (e.g. electricity).\n * By default, operates on hourly-spaced time steps (configurable).\n * Thanks to [Pandas](https://pandas.pydata.org), complex data analysis is easy.\n * The model itself is quite small thanks to relying on package [Pyomo](http://www.pyomo.org/).\n * The small codebase includes reporting and plotting functionality.\n\n## Screenshots\n\n\n\n\n\n## Installation\n\nThere are two ways to get all required packages under Windows. We recommend using the Python distribution Anaconda. If you don\'t want to use it or already have an existing Python (version 3.6 **recommended**, 2.7 is supported as well) installation, you can also download the required packages by yourself.\n\n### Anaconda/Miniconda (recommended)\n\n 1. **[Anaconda (Python 3)](http://continuum.io/downloads)/[Miniconda](https://docs.conda.io/en/latest/miniconda.html)**. Choose the 64-bit installer if possible.\n During the installation procedure, keep both checkboxes ""modify PATH"" and ""register Python"" selected! If only higher Python versions are available, you can switch to a specific Python version by typing `conda install python=`\n 2. **Packages and Solver**: [GLPK](http://winglpk.sourceforge.net/).\n 1. Download the [environment file](https://github.com/tum-ens/urbs/blob/master/urbs-env.yml).\n 2. Launch a new command prompt (Windows: Win+R, type ""cmd"", Enter)\n 3. Install it via conda by `conda env create -f urbs-env.yml`.\n 4. Each time you open a new terminal for running urbs, you can activate the environment by `conda activate urbs`.\n\nContinue at [Get Started](#get-started).\n\n### Manually (the hard way)\n\nFor all packages, best take the latest release or release candidate version. Both 32 bit and 64 bit versions work, though 64 bit is recommended. The list of packages can be found in the [environment file](https://github.com/tum-ens/urbs/blob/master/urbs-env.yml).\n \n## Get started\n\n### Developers\nOnce installation is complete, finally [install git (for version control)](http://git-scm.com/). **Remark:** at step ""Adjusting your PATH environment"", select ""Run Git from the Windows Command Prompt"".\n\nThen, in a directory of your choice, clone this repository by:\n\n git clone https://github.com/tum-ens/urbs.git\n \nContinue like the users after they downloaded the zip file. \n\n### Users\n\nIf you are not planning on developing urbs, pick the [latest release](https://github.com/tum-ens/urbs/releases) and download the zip file.\n\nIn the downloaded directory, execute the runme script by using the following on the command prompt (Windows) or Terminal (Linux). 
(Depending on what your standard python version is, you might need to call `python3` instead of `python`.):\n \n python runme.py\n\nSome minutes later, the subfolder `result` should contain plots and summary spreadsheets for multiple optimised energy supply scenarios, whose definitions are contained in the run script (watch out for `def scenario` lines). *Not working at the moment:* To get a graphical and tabular summary over all scenarios, execute\n\n python comp.py\n\nand look at the new files `result/mimo-example-.../comparison.xlsx` and `result/mimo-example-.../comparison.png` for a quick comparison. This script parses the summary spreadsheets for all scenarios.\n\n## Next steps and tips\n\n 1. Head over to the tutorial at http://urbs.readthedocs.io, which goes through runme.py step by step. \n 2. Read the source code of `runme.py` and `comp.py`. \n 3. Try adding/modifying scenarios in `scenarios.py` and see their effect on results.\n 4. If you need a nice Python editor, think about using [PyCharm](https://www.jetbrains.com/pycharm/download). It has many features including easy Git integration, package management, etc.\n 5. Fire up IPython (`ipython3`) and run the scripts from there using the run command: `run runme` and `run comp`. Then use `whos` to inspect the workspace afterwards. See what you can do (analyses, plotting) with the DataFrames. Take the `urbs.get_constants`, `urbs.get_timeseries` and `urbs.plot` functions as inspiration and the [Pandas docs](http://pandas.pydata.org/pandas-docs/stable/) as reference.\n \n## Further reading\n\n - If you do not know anything about the command line, read [Command Line Crash Course](https://learnpythonthehardway.org/book/appendixa.html). Python programs are scripts that are executed from the command line, similar to MATLAB scripts that are executed from the MATLAB command prompt.\n - If you do not know Python, try one of the following resources:\n * The official [Python Tutorial](https://docs.python.org/3/tutorial/index.html) walks you through the language\'s basic features.\n * [Learn Python the Hard Way](https://learnpythonthehardway.org/book/preface.html). It is meant for programming beginners.\n - The book [Python for Data Analysis](http://shop.oreilly.com/product/0636920023784.do) best summarises the capabilities of the packages installed here. It starts with IPython, then adds NumPy, slowly fades to pandas and then shows first basic, then advanced data conversion and analysis recipes. Visualisation with matplotlib is given its own chapter, both with and without pandas.\n - For a huge buffet of appetizers showing the capabilities of Python for scientific computing, I recommend browsing this [gallery of interesting IPython Notebooks](https://github.com/ipython/ipython/wiki/A-gallery-of-interesting-IPython-Notebooks).\n \n## Example uses\n\n - Branch [1node](https://github.com/ojdo/urbs/tree/1node) in the forked repository [ojdo/urbs](https://github.com/ojdo/urbs) shows a small example of a real-world usage of the model. It includes a [`scenario_generator`](https://github.com/ojdo/urbs/blob/dfa9cf0ad7b03289bf7c64d79ea93c7886a00a96/run1node.py#L10-L37) function in its run script, which is useful for extensive parameter sweeps.\n - Branch [1house](https://github.com/ojdo/urbs/tree/1house) in the forked repository [ojdo/urbs](https://github.com/ojdo/urbs) shows another (newer) example of a small-scale application of the model. 
It demonstrates the use of two demand commodities (electricity and heat) for a single consumer (a single site named \'house\'). It also shows how to create a very customized comparison script:\n \n\n \n - Branch [haag15](https://github.com/ojdo/urbs/tree/haag15) in the forked repository [ojdo/urbs](https://github.com/ojdo/urbs) shows a larger example of a real-world use. Its input file contains a town divided into 12 regions, 12 process types, and 2 demand commodities (electricity and heat). Patience and RAM (64 GB or more) are needed to run these scenarios with 8760 timesteps. The branch also contains three IPython notebooks that are used for result analysis and coupling to the model [rivus](https://github.com/tum-ens/rivus).\n \n## List of branches\n - ASEAN\n - CoTraDis\n - decensys\n - extremos\n - MIQCP\n - near_optimal\n - urbs_gui\n\n## Copyright\n\nCopyright (C) 2014-2019 TUM ENS\n\nThis program is free software: you can redistribute it and/or modify\nit under the terms of the GNU General Public License as published by\nthe Free Software Foundation, either version 3 of the License, or\n(at your option) any later version.\n\nThis program is distributed in the hope that it will be useful,\nbut WITHOUT ANY WARRANTY; without even the implied warranty of\nMERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\nGNU General Public License for more details.\n\nYou should have received a copy of the GNU General Public License\nalong with this program. If not, see <http://www.gnu.org/licenses/>.\n'",",https://doi.org/10.5281/zenodo.594200","2014/07/19, 13:40:41",3385,GPL-3.0,6,829,"2023/07/18, 13:18:39",36,172,282,3,99,14,0.0,0.6489533011272142,"2019/07/02, 11:54:28",1.0.1,0,22,false,,false,false,,,https://github.com/tum-ens,http://www.ens.ei.tum.de,Technical University of Munich,,,https://avatars.githubusercontent.com/u/8157454?v=4,,, Dispa-SET,"Allows to model a power system at any level of detail e.g. micro-grid, region, country, continent.",energy-modelling-toolkit,https://github.com/energy-modelling-toolkit/Dispa-SET.git,github,"python,dispatch,power",Energy Modeling and Optimization,"2023/03/14, 02:57:41",76,0,15,true,Python,Energy Modelling Toolkit ,energy-modelling-toolkit,"Python,Jupyter Notebook,GAMS,Batchfile",,"b'![dispaset logo](https://raw.githubusercontent.com/energy-modelling-toolkit/Dispa-SET/master/Docs/figures/logo.png)\n===================\n ![Documentation](https://img.shields.io/badge/python-2.7,%203.7-blue.svg) [![License](https://img.shields.io/badge/License-EUPL--1.2-blue.svg)](https://opensource.org/licenses/EUPL-1.2) [![Documentation](https://readthedocs.org/projects/dispa-set/badge/?branch=master)](http://dispa-set.readthedocs.io/en/latest/) [![Build Status](https://travis-ci.org/energy-modelling-toolkit/Dispa-SET.svg?branch=master)](https://travis-ci.org/energy-modelling-toolkit/Dispa-SET)\n\n### Description\nThe Dispa-SET model is a unit commitment and dispatch model developed within the \xe2\x80\x9cJoint Research Centre\xe2\x80\x9d and focused on balancing and flexibility problems in the European context. It is written in GAMS with advanced input/output data handling and visualization routines in Python.\n\nDifferent formulations are available, offering a trade-off between accuracy and computational complexity (Linear Programming (LP), Mixed-Integer Linear Programming (MILP)). This allows\n modelling a power system at any level of detail, e.g. micro-grid, region, country, continent. 
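\n\nTo make the LP/MILP trade-off concrete, below is a toy sketch in Python/Pyomo (purely illustrative, since Dispa-SET itself is written in GAMS) of a single unit whose commitment variable is binary in the MILP and relaxed to the interval [0, 1] in the LP:\n\n```python\n# Toy LP-vs-MILP illustration (not Dispa-SET code). Assumes Pyomo + GLPK.\nimport pyomo.environ as pyo\n\ndef unit_model(relax):\n    m = pyo.ConcreteModel()\n    # Commitment: binary in the MILP, continuous in [0, 1] in the LP.\n    m.u = pyo.Var(domain=pyo.UnitInterval if relax else pyo.Binary)\n    m.p = pyo.Var(bounds=(0, 100))                      # dispatched power [MW]\n    m.min_load = pyo.Constraint(expr=m.p >= 40 * m.u)   # minimum stable load\n    m.max_load = pyo.Constraint(expr=m.p <= 100 * m.u)  # capacity limit\n    m.demand = pyo.Constraint(expr=m.p >= 30)           # load to be served\n    m.cost = pyo.Objective(expr=50 * m.p + 500 * m.u)   # fuel + no-load cost\n    return m\n\nfor relax in (True, False):\n    m = unit_model(relax)\n    pyo.SolverFactory(\'glpk\').solve(m)\n    print(\'LP\' if relax else \'MILP\', pyo.value(m.p), pyo.value(m.u))\n```\n\nThe LP relaxation solves faster and yields a lower cost, but can return a physically meaningless fractional commitment (here u = 0.3), which is exactly the accuracy/complexity trade-off mentioned above.\n\n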
A Pan-European scenario is included with the model as of version 2.3.\n \n### Features\nThe model is expressed as an optimization problem. \nContinuous variables include the individual unit dispatched power, the shed load and the curtailed power generation. The binary variables are the commitment status of each unit. The main model features can be summarized as follows:\n\n- Minimum and maximum power for each unit\n- Power plant ramping limits\n- Reserves up and down\n- Minimum up/down times\n- Load Shedding\n- Curtailment\n- Pumped-hydro storage\n- Non-dispatchable units (e.g. wind turbines, run-of-river, etc.)\n- Start-up, ramping and no-load costs\n- Multi-nodes with capacity constraints on the lines (congestion)\n- Constraints on the targets for renewables and/or CO2 emissions\n- Yearly schedules for the outages (forced and planned) of each unit\n- CHP power plants and thermal storage\n\nThe demand is assumed to be inelastic to the price signal. The MILP objective function is therefore the total generation cost over the optimization period. \n\n### Quick start\n\nIf you want to download the latest version from GitHub for use or development purposes, make sure that you have git and the [Anaconda distribution](https://www.anaconda.com/distribution/) installed and type the following:\n\n```bash\ngit clone https://github.com/energy-modelling-toolkit/Dispa-SET.git\ncd Dispa-SET\nconda env create # Automatically creates environment based on environment.yml\nconda activate dispaset # Activate the environment\npip install -e . # Install editable local version\npytest # Ensure that the test cases are running\n```\n\nThe above commands create a dedicated environment so that your Anaconda configuration remains clean of the required dependencies.\nTo check that everything runs fine, you can build and run a test case by typing:\n```bash\ndispaset -c ConfigFiles/ConfigTest.xlsx build simulate\n```\n\n### Documentation\nThe documentation and the stable releases are available on the main Dispa-SET website: http://www.dispaset.eu\n \n### Get involved\nThis is an open-source project. Interested users are therefore invited to test, comment on, or [contribute](CONTRIBUTING.md) to the tool. Submitting issues is the best way to get in touch with the development team, which will address your comment, question, or development request in the best possible way. We are also looking for contributors to the main code, willing to contribute to its capabilities, computational efficiency, formulation, etc. Finally, we are willing to collaborate with national agencies, research centers, or academic institutions on the use of the model with different datasets relating to EU countries.\n\n### License\nDispa-SET is free software licensed under the \xe2\x80\x9cEuropean Union Public Licence\xe2\x80\x9d EUPL v1.2. It \ncan be redistributed and/or modified under the terms of this license.\n\n### Main developers\nThis software was initially developed within the Directorate C Energy, Transport and Climate, one of the 7 scientific directorates of the Joint Research Centre (JRC) of the European Commission. Directorate C is based both in Petten, the Netherlands, and Ispra, Italy. 
\nCurrently the main developers are the following:\n\n- Sylvain Quoilin (KU Leuven, Belgium)\n- Konstantinos Kavvadias (Joint Research Centre, European Commission)\n- Matija Pavi\xc4\x8devi\xc4\x87 (KU Leuven, Belgium)\n- Matthias Zech (Deutsches Zentrum f\xc3\xbcr Luft-und Raumfahrt, DLR)\n- Matteo De Felice (Joint Research Centre, European Commission)\n\n'",,"2017/10/13, 16:22:18",2203,EUPL-1.2,2,718,"2023/10/10, 12:46:21",18,21,86,6,15,2,0.5,0.556060606060606,"2022/10/05, 17:50:19",v2.5.0,0,14,false,,true,true,,,https://github.com/energy-modelling-toolkit,,,,,https://avatars.githubusercontent.com/u/32776077?v=4,,, Calliope,"A framework to develop energy system models, with a focus on flexibility, high spatial and temporal resolution, the ability to execute many runs based on the same base model, and a clear separation of framework and model.",calliope-project,https://github.com/calliope-project/calliope.git,github,"python,pyomo,energy,optimisation,energy-system",Energy Modeling and Optimization,"2023/10/25, 16:48:30",242,0,54,true,Python,Calliope,calliope-project,"Python,Makefile",https://www.callio.pe,"b'[![Chat on Gitter](https://img.shields.io/gitter/room/calliope-project/calliope.svg?style=flat-square)](https://app.gitter.im/#/room/#calliope-project_calliope:gitter.im)\n[![Main branch build status](https://github.com/calliope-project/calliope/actions/workflows/commit-ci.yml/badge.svg?branch=main)](https://github.com/calliope-project/calliope/actions/workflows/commit-ci.yml)\n[![Documentation build status](https://img.shields.io/readthedocs/calliope.svg?style=flat-square)](https://readthedocs.org/projects/calliope/builds/)\n[![Test coverage](https://codecov.io/gh/calliope-project/calliope/graph/badge.svg?token=UM542yaYrh)](https://codecov.io/gh/calliope-project/calliope)\n[![PyPI version](https://img.shields.io/pypi/v/calliope.svg?style=flat-square)](https://pypi.python.org/pypi/calliope)\n[![Anaconda.org/conda-forge version](https://img.shields.io/conda/vn/conda-forge/calliope.svg?style=flat-square&label=conda)](https://anaconda.org/conda-forge/calliope)\n[![JOSS DOI](https://img.shields.io/badge/JOSS-10.21105/joss.00825-green.svg?style=flat-square)](https://doi.org/10.21105/joss.00825)\n\n---\n\n\n\n*A multi-scale energy systems modelling framework* | [www.callio.pe](http://www.callio.pe/)\n\n---\n\n## Contents\n\n- [Contents](#contents)\n- [About](#about)\n- [Quick start](#quick-start)\n- [Documentation](#documentation)\n- [Contributing](#contributing)\n- [What\'s new](#whats-new)\n- [Citing Calliope](#citing-calliope)\n- [License](#license)\n\n---\n\n## About\n\nCalliope is a framework to develop energy system models, with a focus on flexibility, high spatial and temporal resolution, the ability to execute many runs based on the same base model, and a clear separation of framework (code) and model (data). Its primary focus is on planning energy systems at scales ranging from urban districts to entire continents. In an optional operational mode it can also test a pre-defined system under different operational conditions.\n\nA Calliope model consists of a collection of text files (in YAML and CSV formats) that fully define a model, with details on technologies, locations, resource potentials, etc. Calliope takes these files, constructs an optimization problem, solves it, and reports back results. 
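\n\nAs a minimal sketch of that build-solve-report loop, using one of the bundled example models (the API calls follow the Calliope 0.6.x series, so names may differ in other versions):\n\n```python\n# Minimal sketch of the Calliope workflow, assuming the 0.6.x API\n# (calliope.examples and Model.run() as documented for that series).\nimport calliope\n\n# Build: load a bundled example model (a collection of YAML/CSV files).\nmodel = calliope.examples.national_scale()\n\n# Solve: construct the optimisation problem and run the solver.\nmodel.run()\n\n# Report: save results for further processing or inspect them directly.\nmodel.to_csv(\'results_national_scale\')  # directory with one CSV per variable\nmodel.to_netcdf(\'results.nc\')           # single NetCDF file\nprint(model.results)                    # results as an xarray Dataset\n```\n\n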
Results can be saved to CSV or NetCDF files for further processing, or analysed directly in Python using its extensive scientific data processing capabilities provided by libraries like [Pandas](http://pandas.pydata.org/) and [xarray](https://docs.xarray.dev/en/stable/).\n\nCalliope comes with several built-in analysis and visualisation tools. Having some knowledge of the Python programming language helps when running Calliope and using these tools, but is not a prerequisite.\n\n## Quick start\n\nCalliope can run on Windows, macOS and Linux. Installing it is quickest with the `mamba` package manager by running a single command: `mamba create -c conda-forge -n calliope calliope`.\n\nSee the documentation for more [information on installing](https://calliope.readthedocs.io/en/stable/user/installation.html).\n\nSeveral easy-to-understand example models are [included with Calliope](calliope/example_models) and accessible through the `calliope.examples` submodule.\n\nThe [tutorials in the documentation run through these examples](https://calliope.readthedocs.io/en/stable/user/tutorials.html). A good place to start is to look at these tutorials to get a feel for how Calliope works, and then to read the ""Introduction"", ""Building a model"", ""Running a model"", and ""Analysing a model"" sections in the online documentation.\n\nMore fully-featured examples that have been used in peer-reviewed scientific publications are available in our [model gallery](https://www.callio.pe/model-gallery/).\n\n## Documentation\n\nDocumentation is available on Read the Docs:\n\n- [Read the documentation online (recommended)](https://calliope.readthedocs.io/en/stable/)\n- [Download all documentation in a single PDF file](https://readthedocs.org/projects/calliope/downloads/pdf/stable/)\n\n## Contributing\n\nTo contribute changes:\n\n1. Fork the project on GitHub\n2. Create a feature branch to work on in your fork (`git checkout -b new-feature`)\n3. Add your name to the AUTHORS file\n4. Commit your changes to the feature branch\n5. Push the branch to GitHub (`git push origin new-feature`)\n6. On GitHub, create a new pull request from the feature branch\n\nSee our [contribution guidelines](https://github.com/calliope-project/calliope/blob/main/CONTRIBUTING.md) for more information -- and [join us on Gitter](https://app.gitter.im/#/room/#calliope-project_calliope:gitter.im) to ask questions or discuss code.\n\n## What\'s new\n\nSee changes made in recent versions in the [changelog](https://github.com/calliope-project/calliope/blob/main/changelog.rst).\n\n## Citing Calliope\n\nIf you use Calliope for academic work, please cite:\n\nStefan Pfenninger and Bryn Pickering (2018). Calliope: a multi-scale energy systems modelling framework. *Journal of Open Source Software*, 3(29), 825. [doi: 10.21105/joss.00825](https://doi.org/10.21105/joss.00825)\n\n## License\n\nCopyright since 2013 Calliope contributors listed in AUTHORS\n\nLicensed under the Apache License, Version 2.0 (the ""License""); you\nmay not use this file except in compliance with the License. 
You may\nobtain a copy of the License at\n\nhttp://www.apache.org/licenses/LICENSE-2.0\n\nUnless required by applicable law or agreed to in writing, software\ndistributed under the License is distributed on an ""AS IS"" BASIS,\nWITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\nSee the License for the specific language governing permissions and\nlimitations under the License.\n'",",https://doi.org/10.21105/joss.00825,https://doi.org/10.21105/joss.00825","2013/09/18, 17:58:41",3689,Apache-2.0,55,1152,"2023/10/25, 16:48:33",69,225,424,97,0,7,1.2,0.2416918429003021,"2023/01/18, 14:11:52",v0.6.10,0,13,false,,true,true,,,https://github.com/calliope-project,https://www.callio.pe/,,,,https://avatars.githubusercontent.com/u/10073020?v=4,,, Euro-Calliope,A model of the European electricity system built using Calliope.,calliope-project,https://github.com/calliope-project/euro-calliope.git,github,"calliope-models,conda,europe,research,energy,renewable-energy",Energy Modeling and Optimization,"2023/06/16, 12:28:23",30,0,5,true,Python,Calliope,calliope-project,Python,https://euro-calliope.readthedocs.io,"b""# Euro-Calliope\n\nModels of the European electricity system for _Calliope_.\n\nThis repository contains the workflow routines that automatically build models from raw data. As an alternative to building models yourself, you can use [pre-built models](https://doi.org/10.5281/zenodo.3949552) that run out-of-the-box. You can find a more detailed description of the first application in a [scientific article in Joule](https://doi.org/10.1016/j.joule.2020.07.018).\n\n[![article DOI](https://img.shields.io/badge/article-10.1016/j.joule.2020.07.018-blue)](https://doi.org/10.1016/j.joule.2020.07.018)\n[![pre-built models DOI](https://img.shields.io/badge/prebuilts-10.5281%2Fzenodo.3949552-blue)](https://doi.org/10.5281/zenodo.3949552)\n[![Documentation Status](https://readthedocs.org/projects/euro-calliope/badge/?version=latest)](https://euro-calliope.readthedocs.io/en/latest/?badge=latest)\n[![Check Markdown links](https://github.com/calliope-project/euro-calliope/actions/workflows/externallinks.yaml/badge.svg)](https://github.com/calliope-project/euro-calliope/actions/workflows/externallinks.yaml)\n[![Tests of YAML configuration and schema](https://github.com/calliope-project/euro-calliope/actions/workflows/schemavalidation.yaml/badge.svg)](https://github.com/calliope-project/euro-calliope/actions/workflows/schemavalidation.yaml)\n[![Tests of eurocalliopelib and scripts](https://github.com/calliope-project/euro-calliope/actions/workflows/pythonpackage.yaml/badge.svg)](https://github.com/calliope-project/euro-calliope/actions/workflows/pythonpackage.yaml)\n\n## At a glance\n\nEuro-Calliope models the European electricity system with each location representing an administrative unit. It is built on three spatial resolutions: on the continental level as a single location, on the national level with 34 locations, and on the regional level with 497 locations. At each location, renewable generation capacities (wind, solar, bioenergy) and balancing capacities (battery, hydrogen) can be built. In addition, hydro electricity and pumped hydro storage capacities can be built up to the extent to which they exist today. All capacities are used to satisfy electricity demand at all locations, where demand is based on historic data. Locations are connected through transmission lines of either unrestricted capacity or projections. 
Using [Calliope](https://www.callio.pe), the model is formulated as a linear optimisation problem with total monetary cost of all capacities as the minimisation objective. Due to the flexibility of Calliope and the availability of the routines that build the model, all components can be adapted to the modeller's needs.\n\n## More information\n\nHere is where you can find more information:\n\n* [Full documentation](https://euro-calliope.readthedocs.io/)\n* [Release notes](./CHANGELOG.md)\n* [Citation information](./docs/about/citation.md)\n* [Contributing information](./CONTRIBUTING.md)\n* [License](./LICENSE.md)\n\nIf you are unable to access the full documentation via ReadTheDocs following the link above, or otherwise want to build the documentation locally, you can run the following from the repository top-level directory (assuming you have [conda](https://conda.io) installed):\n\n\n```bash\nconda env create -f requirements-docs.yaml\nconda activate docs\nmkdocs build --no-directory-urls\n```\n\nThe documentation can then be accessed by opening `build/docs/index.html`.\n\n## License\n\nEuro-Calliope is developed and maintained within the [Calliope project](https://www.callio.pe). The code in this repository is [MIT licensed](./LICENSE.md).\n""",",https://doi.org/10.5281/zenodo.3949552,https://doi.org/10.1016/j.joule.2020.07.018,https://doi.org/10.1016/j.joule.2020.07.018,https://doi.org/10.5281/zenodo.3949552","2018/10/15, 14:03:05",1836,MIT,33,575,"2023/06/07, 19:52:15",77,113,193,18,140,5,4.0,0.08417508417508412,"2021/12/23, 16:41:14",v1.1,0,7,false,,false,true,,,https://github.com/calliope-project,https://www.callio.pe/,,,,https://avatars.githubusercontent.com/u/10073020?v=4,,, OSeMOSYS,"An open source modeling system for long-run integrated assessment and energy planning. It has been employed to develop energy systems models from the scale of continents (African Power Pools, South America, EU28+2) down to the scale of countries, regions and villages.",OSeMOSYS,https://github.com/OSeMOSYS/OSeMOSYS.git,github,"energy-model,energy,osemosys,energy-planners,investment,students",Energy Modeling and Optimization,"2023/06/20, 09:05:01",131,0,20,true,,OSeMOSYS,OSeMOSYS,,http://www.osemosys.org,"b'# OSeMOSYS - Open Source Energy Modelling System\n\n[![Build Status](https://travis-ci.com/OSeMOSYS/OSeMOSYS.svg?branch=master)](https://travis-ci.com/OSeMOSYS/OSeMOSYS)\n[![Documentation Status](https://readthedocs.org/projects/osemosys/badge/?version=latest)](https://osemosys.readthedocs.io/en/latest/?badge=latest)\n\nWelcome to OSeMOSYS - the Open Source energy MOdelling SYStem. 
This source code\nrepository contains the Apache-2.0 licensed source code for the different\nimplementations of OSeMOSYS - GNU MathProg, Pyomo, PuLP and GAMS.\n\nFor an in-depth introduction to the underlying model and its structure, you can\nread the [original paper](https://www.sciencedirect.com/science/article/abs/pii/S0301421511004897)\n(needs access to Elsevier ScienceDirect).\n\nThe different versions are contained in subfolders, together with readme files\nwhich provide information on how to install and run the code.\n\n## Getting the OSeMOSYS code\n\n### Modellers\n\nThe OSeMOSYS code packages you need for writing your own models are released on the\n[website](http://www.osemosys.org/get-started.html), along with a lot of useful\ninformation on how to get started.\n\n### Developers\n\nOSeMOSYS consists of this repository and several submodules, which contain the different language implementations of\nthe OSeMOSYS formulation.\n\nTo obtain all the OSeMOSYS code including the language implementations\nfor development purposes, run the following commands from your command line:\n\n```bash\ncd <your_path>\ngit clone https://github.com/OSeMOSYS/OSeMOSYS # obtain the OSeMOSYS repository code\ngit submodule init # initialize your local submodule configuration file\ngit submodule update # fetch all the data from project and check out correct commit\n```\n\nIf successful, this should download all the code to the folder you specified in\nthe first step.\n\nAlternatively, use the `--recurse-submodules` argument to the `git clone` command:\n\n```bash\ncd <your_path>\ngit clone https://github.com/OSeMOSYS/OSeMOSYS --recurse-submodules\n# obtain the OSeMOSYS repository code and submodules all in one line\n```\n\n## Contributing\n\nPlease view our separate [contributing](https://github.com/OSeMOSYS/OSeMOSYS/blob/master/contributing.md)\ndocument to find out how to contribute to the OSeMOSYS community.\n\n## Background\n\nOSeMOSYS is a full-fledged systems optimization model generator for long-term\nenergy planning.\nUnlike long-established energy systems models,\nsuch as MARKAL/TIMES (ETSAP, 2010), MESSAGE (IAEA, 2010), PRIMES (NTUA, 2010),\nEFOM (Van der Voort, 1982) and POLES (Enerdata, 2010),\nOSeMOSYS potentially requires a less significant learning curve and time\ncommitment to build and operate.\nAdditionally, by not using proprietary software or commercial programming\nlanguages and solvers, OSeMOSYS requires no upfront financial investment.\nThese two advantages extend the availability of energy modeling\nto large communities of students, business analysts, government specialists\nand energy researchers in developing countries.\n\n## Motivation\n\nOSeMOSYS is designed to fill a gap in the analytical toolbox available to the energy research community and energy planners in developing countries. At present there exists a useful, but limited set of accessible energy system models. These tools often require significant investment in terms of human resources, training and software purchases in order to apply or further develop them. In addition, their structure is often such that integration with other tools, when possible, can be difficult.\n\n## Energy Specialists\n\nThe OSeMOSYS code is relatively straightforward and transparent and allows for simple refinements and sophisticated analyses. 
As models are made to generate insights, OSeMOSYS provides a test-bed for new energy model developments.\n\n## Education\n\nEnabling graduate students to build and iteratively develop formal energy models will impart this knowledge base to a very wide range of energy market roles and positions. Extending the human capacity of private and public policy makers to use and understand energy models is a key step in the effective use and interpretation of formal analytical tools. And growing human capacity in energy modeling in developing countries \xe2\x80\x93 whose institutions have relatively fewer research resources \xe2\x80\x93 is particularly important, given the growth of developing countries in energy-related emissions, resource use, and demand for energy services.\n\n## Community\n\nThe OSeMOSYS community welcomes professionals and experts from different levels: decision makers, policy officers, energy planners, developers of new model functionalities, programmers.\n\n## The OpTIMUS Community, Practice 3\n\nOSeMOSYS is part of the OpTIMUS Community, Practice 3: Open Software, together with other world-class, peer-reviewed open-source tools and data.\n\nOpTIMUS aims at promoting quantitative analysis to inform sustainable development policy, through the coordination of networks to advance open source software, knowledge development and capacity building. It is organized in three practices: modeling and capacity building for policy support, expert review and quality control, and software development.\nFor more information on the OpTIMUS Community, please visit the related website: http://www.optimus.community/.\n'",,"2016/10/03, 08:53:55",2578,Apache-2.0,1,336,"2023/07/12, 13:21:08",44,33,75,5,105,0,0.6,0.5757575757575757,,,0,12,false,,true,true,,,https://github.com/OSeMOSYS,http://www.osemosys.org/,,,,https://avatars.githubusercontent.com/u/14215860?v=4,,, REVUB,"The main objective is to model how flexible operation of hydropower plants can help renewable electricity mixes with variable solar and wind power to provide reliable electricity supply and load-following services.",VUB-HYDR,https://github.com/VUB-HYDR/REVUB.git,github,"standalone-software,renewable,electricity,solar,wind,hydro,balancing,flexibility",Energy Modeling and Optimization,"2023/09/21, 11:25:43",13,0,3,true,Python,HYDR,VUB-HYDR,Python,,"b'\n# REVUB (Renewable Electricity Variability, Upscaling and Balancing) \n\nAuthors: Sebastian Sterl\n\n\nContact author: sebastian.sterl@vub.be\n\n# 1. Introduction\n---\nThe main objective of REVUB is to model how the operation of hydropower plants can be hybridised with variable solar and wind power (VRE) plants, allowing the combination of hydro with VRE to operate ""as a single unit"" to provide reliable electricity supply and load-following services. The model can be used, for instance, in due diligence processes for power plant financing.\n\nThis model was first introduced in the paper ""Smart renewable electricity portfolios in West Africa"" by Sterl et al. (2020; https://www.nature.com/articles/s41893-020-0539-0); hereafter referred to as ""the publication"". 
It has since been used for several more peer-reviewed publications.\n\nA detailed description of all involved principles and equations can be found in the dedicated Manual (https://github.com/VUB-HYDR/REVUB/blob/master/manual/REVUB_manual.pdf).\n\nThe REVUB code simulates dedicated hydropower plant operation to provide an effective capacity credit to VRE, and allows the user to derive:\n\n* Suitable mixes of hydro, solar and wind power to maximise load-following under user-defined constraints;\n* Reliable operating rules for hydropower reservoirs to enable this load-following across wet- and dry-year conditions;\n* Hourly to decadally resolved hydro, solar and wind power generation.\n\n# 2. Installation\n---\nThe most recent version of the REVUB model was written for Python 3.9. The files given in this GitHub folder contain code and data needed to run a minimum working example. \n\nIn the past, a MATLAB version (written for MATLAB R2017b) of the REVUB model existed, which can be obtained upon request but is no longer being supported by code updates.\n\n# 3. Tool\'s structure\n---\n\n### Scripts\nThe code is divided into four scripts: one for initialisation (A), one containing the core code (B), and two for plotting (C). For a detailed explanation of the purpose of each file, the user is referred to the Manual.\n\n* **A_REVUB_initialise_minimum_example**\n\nThis script initialises the data needed for the minimum working example to run (which covers Bui hydropower plant in Ghana, and Buyo hydropower plant in C\xc3\xb4te d\'Ivoire). \n\nIt reads in an Excel file with overall modelling parameters (""parameters_simulation.xlsx""), and several Excel files with tabulated time series and other data (""data_xxx.xlsx""; in this case, these datasets are themselves the results of external computations, described in the publication). \n\nThese datasets are given in the folder ""data"". These data files should be downloaded and placed in the same folder in which this script is located. The names of the worksheets of all files named ""data_xxx.xlsx"" must be linked to the corresponding hydropower plant with the parameters ""HPP_name_data_xxx"" in the file ""parameters_simulation.xlsx"".\n\nThe folder ""data"" contains a sub-folder with an auxiliary script to parse monthly-scale time series to hourly-scale, the latter being the needed timescale for REVUB simulations. This is relevant for quantities such as inflow, evaporation flux and precipitation flux, which may often not be available at an hourly timescale and for which the hourly detail is of limited importance for reservoir operation.\n\n* **B_REVUB_main_code**\n\nThis script runs the actual REVUB model simulation and optimisation.\n \n* **C_REVUB_plotting_individual**\n\nThis script produces figure outputs for the individually simulated plants, in this case Bui or Buyo, chosen by the user from an Excel file named ""plotting_settings.xlsx"". Most of these figures can also be found in the publication or its SI (for the same example). \n\nThe figures include (i) time series of hydropower lake levels and reservoir outflows without and with complementary hydro-VRE operation, (ii) power generation curves from the selected hydropower plant alongside solar and wind power at hourly, seasonal and multiannual scales, and (iii) hydropower release rule curves for given months and times of the day.\n\n* **C_REVUB_plotting_multiple**\n\nThis script produces figure outputs of the overall power mix of a given region/country/grid. 
\n\nFor a user-defined ensemble of the simulated plants, which the user can set in the Excel file ""plotting_settings.xlsx"" (in this minimum example, the options for this ensemble are (i) only Bui, (ii) only Buyo, and (iii) Bui + Buyo together), the script plots overall hydro-solar-wind power generation from this ensemble at hourly, seasonal and multiannual time scales, and compares it to a user-set overall hourly power demand curve (representing overall demand in the country/region/grid). \n\nThe difference between hydro-VRE and this overall demand is assumed to be covered by other power sources (thermal power sources are used as the default in the script). Thus, this script can be used to provide insights into the overall power mix of a country/region/grid upon implementing hydro-VRE complementary operation.\n\nTo produce the figure outputs, simply run the scripts in the order A-B-C.\n\n## Versions\nVersion 0.1.0 - January 2020\n\nVersion 1.0.0 - August 2023\n\nVersion 1.0.1 - September 2023\n\n## License\nSee also the [LICENSE](./LICENSE.md) file.\n\n'",,"2019/09/23, 14:38:22",1493,MIT,36,144,"2023/07/12, 13:21:08",0,0,0,0,105,0,0,0.0,"2023/09/01, 11:52:19",v1.0.1,0,1,false,,false,false,,,https://github.com/VUB-HYDR,http://www.hydr.vub.ac.be/,"Brussels, Belgium",,,https://avatars.githubusercontent.com/u/35221117?v=4,,, FINE,"Provides a framework for modeling, optimizing and assessing energy systems.",FZJ-IEK3-VSA,https://github.com/FZJ-IEK3-VSA/FINE.git,github,,Energy Modeling and Optimization,"2023/10/12, 15:22:34",58,0,11,true,Python,FZJ-IEK3,FZJ-IEK3-VSA,"Python,Dockerfile",,"b'[![Build Status](https://travis-ci.com/FZJ-IEK3-VSA/FINE.svg?branch=master)](https://travis-ci.com/FZJ-IEK3-VSA/FINE)\n[![Version](https://img.shields.io/pypi/v/FINE.svg)](https://pypi.python.org/pypi/FINE)\n[![Conda Version](https://img.shields.io/conda/vn/conda-forge/fine.svg)](https://anaconda.org/conda-forge/fine)\n[![Documentation Status](https://readthedocs.org/projects/vsa-fine/badge/?version=latest)](https://vsa-fine.readthedocs.io/en/latest/)\n[![PyPI - License](https://img.shields.io/pypi/l/FINE)](https://github.com/FZJ-IEK3-VSA/FINE/blob/master/LICENSE.txt)\n[![codecov](https://codecov.io/gh/FZJ-IEK3-VSA/FINE/branch/master/graph/badge.svg)](https://codecov.io/gh/FZJ-IEK3-VSA/FINE)\n\n\n \n\n# ETHOS.FINE - Framework for Integrated Energy System Assessment\n\nThe ETHOS.FINE Python package provides a framework for modeling, optimizing and assessing energy systems. With the provided framework, systems with multiple regions, commodities and time steps can be modeled. The target of the optimization is the minimization of the total annual cost while considering technical and environmental constraints. Besides using the full temporal resolution, an interconnected typical period storage formulation can be applied, which reduces the complexity and computational time of the model.\n\nETHOS.FINE is used for the modelling of a diverse group of optimization problems within the [Energy Transformation PatHway Optimization Suite (ETHOS) at IEK-3](https://www.fz-juelich.de/de/iek/iek-3/leistungen/model-services). \n\nIf you want to use ETHOS.FINE in a published work, please [**kindly cite the following publication**](https://www.sciencedirect.com/science/article/pii/S036054421830879X) which gives a description of the first stages of the framework. 
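\n\nThe typical period formulation mentioned above builds on time series aggregation. Below is a hedged sketch of what such an aggregation step can look like, using the tsam package referenced in the next sentence; the input file name is hypothetical and the argument names follow tsam\'s documented example, so verify them against its documentation:\n\n```python\n# Hedged sketch of typical-period time series aggregation with tsam\n# (argument names as in tsam\'s documented example; input file hypothetical).\nimport pandas as pd\nimport tsam.timeseriesaggregation as tsam\n\n# Hourly input data, e.g. one year of demand and solar profiles.\nraw = pd.read_csv(\'input_profiles.csv\', index_col=0)\n\naggregation = tsam.TimeSeriesAggregation(\n    raw,\n    noTypicalPeriods=8,     # reduce 365 days to 8 representative periods\n    hoursPerPeriod=24,      # one period = one day\n    clusterMethod=\'hierarchical\',\n)\ntypical_periods = aggregation.createTypicalPeriods()\n```\n\n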
The Python package which provides the time series aggregation module and its corresponding literature can be found [here](https://github.com/FZJ-IEK3-VSA/tsam).\n\n## Features\n* representation of an energy system by multiple locations, commodities and time steps\n* complexity reducing storage formulation based on typical periods\n\n## Documentation\nA ""Read the Docs"" documentation of ETHOS.FINE can be found [here](https://vsa-fine.readthedocs.io/en/latest/).\n\n## Requirements\nThe installation process uses a Conda-based Python package manager. We highly recommend using [(Micro-)Mamba](https://mamba.readthedocs.io/en/latest/) instead of Anaconda. The recommended way to use Mamba on your system is to install the [Miniforge distribution](https://github.com/conda-forge/miniforge#miniforge3). They offer installers for Windows, Linux and OS X. Have a look at the Miniforge Readme for further details.\n\nThe project environment includes [GLPK](https://sourceforge.net/projects/winglpk/files/latest/download) as a Mixed Integer Linear Programming (MILP) solver. If you want to solve large problems, it is highly recommended to install [GUROBI](http://www.gurobi.com/). See [""Installation of an optimization solver""](#installation-of-an-optimization-solver) for more information.\n\n## Installation\n\n### Installation via conda-forge\nThe simplest way is to install FINE into a fresh environment from `conda-forge` with:\n```bash\nmamba create -n fine -c conda-forge fine\n```\n\n### Installation from local folder\nAlternatively, you can first clone the content of this repository and perform the installation from there: \n\n1. Clone the content of this repository \n```bash\ngit clone https://github.com/FZJ-IEK3-VSA/FINE.git \n```\n2. Move into the FINE folder with\n```bash\ncd fine\n```\n3. It is recommended to create a clean environment with conda to use FINE because it requires many dependencies. \n```bash\nmamba env create -f requirements.yml\n```\n4. Activate the new environment. You should see `(fine)` in front of your command prompt to indicate that you are now in the virtual environment.\n```bash\nmamba activate fine\n```\n\n### Installation for developers\nIf you want to work on the FINE codebase, you need to run:\n```bash\nmamba env create -f requirements_dev.yml\n```\nThis installs additional dependencies such as `pytest` and installs FINE from the folder in editable mode with `pip install -e`. Changes in the folder are then reflected in the package installation.\n\nTo run the test suite, run the following command in the project root folder:\n```\npytest\n```\n\n## Installation of an optimization solver\n\nFINE requires an MILP solver which can be accessed using [PYOMO](https://pyomo.readthedocs.io/en/stable/index.html). 
It searches for the following solvers in this order:\n- [GUROBI](http://www.gurobi.com/)\n - Recommended due to better performance but requires a license (free academic version available)\n - Set as standard solver\n- [GLPK](https://sourceforge.net/projects/winglpk/files/latest/download)\n - This solver is installed with the FINE environment.\n - Free version available \n- [CBC](https://projects.coin-or.org/Cbc)\n - Free version available\n\n### Gurobi installation\nThe installation requires the following three components:\n- Gurobi Optimizer\n - In order to [download](https://www.gurobi.com/downloads/gurobi-optimizer-eula/) the software, you need to create an account and obtain a license.\n- Gurobi license\n - The license needs to be installed according to the instructions in the registration process.\n- Gurobi Python API\n - The Python API can be installed according to [these instructions](https://support.gurobi.com/hc/en-us/articles/360044290292-How-do-I-install-Gurobi-for-Python-).\n\n### GLPK installation\nComplete installation instructions for Windows can be found [here](http://winglpk.sourceforge.net/).\n\n### CBC\nThe installation procedure can be found [here](https://projects.coin-or.org/Cbc).\n\n## Examples\n\nA number of [examples](https://github.com/FZJ-IEK3-VSA/FINE/tree/master/examples) show the capabilities of FINE.\n\n## License\n\nMIT License\n\nCopyright (C) 2016-2023 FZJ-IEK-3\n\nActive Developers: Theresa Gro\xc3\x9f, Kevin Knosala, Noah Pflugradt, Johannes Behrens, Julian Belina, Arne Burdack, Toni Busch, Philipp Dunkel, Patrick Freitag, Thomas Grube, Heidi Heinrichs, Maximilian Hoffmann, Shitab Ishmam, Stefan Kraus, Felix Kullmann, Jochen Lin\xc3\x9fen, Rachel Maier, Peter Markewitz, Lars Nolting, Shruthi Patil, Jan Priesmann, Stanley Risch, Julian Sch\xc3\xb6nau, Bismark Singh, Maximilian Stargardt, Christoph Winkler, Michael Zier, Detlef Stolten\n\nAlumni: Robin Beer, Henrik B\xc3\xbcsing, Dilara Caglayan, Timo Kannengie\xc3\x9fer, Leander Kotzur, Martin Robinius, Andreas Smolenko, Peter Stenzel, Chloi Syranidou, Johannes Th\xc3\xbcrauf, Lara Welder\n\nYou should have received a copy of the MIT License along with this program.\nIf not, see https://opensource.org/licenses/MIT\n\n\n## About Us \n\nWe are the Institute of Energy and Climate Research - Techno-economic Systems Analysis (IEK-3) belonging to the Forschungszentrum J\xc3\xbclich. Our interdisciplinary department\'s research focuses on energy-related process and systems analyses. Data searches and system simulations are used to determine energy and mass balances, as well as to evaluate performance, emissions and costs of energy systems. The results are used for performing comparative assessment studies between the various systems. Our current priorities include the development of energy strategies, in accordance with the German Federal Government\xe2\x80\x99s greenhouse gas reduction targets, by designing new infrastructures for sustainable and secure energy supply chains and by conducting cost analysis studies for integrating new technologies into future energy market frameworks.\n\n## Contributions and Users\n\nFrom 2018 to 2022 we developed methods and models for ETHOS.FINE together with the RWTH-Aachen ([Prof. 
Aaron Praktiknjo](http://www.wiwi.rwth-aachen.de/cms/Wirtschaftswissenschaften/Die-Fakultaet/Institute-und-Lehrstuehle/Professoren/~jgfr/Praktiknjo-Aaron/?allou=1&lidx=1)), the [EDOM Team at FAU](https://www.math.fau.de/wirtschaftsmathematik/) and the [J\xc3\xbclich Supercomputing Centre](http://www.fz-juelich.de/ias/jsc/DE/Home/home_node.html) within the BMWi funded project [METIS](http://www.metis-platform.net/).\n\n
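As a quick, self-contained orientation to the modelling workflow described above (this sketch is not taken from the ETHOS.FINE documentation; the import name `fine`, the flat profiles and the cost figures are illustrative assumptions), a one-region, electricity-only model with a capacity-expandable solar source and a fixed demand could look roughly as follows:

```python
import pandas as pd
import fine as fn  # assumed import name; older releases used `import FINE as fn`

# One region, one commodity, a full year at hourly resolution
esM = fn.EnergySystemModel(
    locations={'region1'},
    commodities={'electricity'},
    numberOfTimeSteps=8760,
    commodityUnitsDict={'electricity': 'GW_el'},
    hoursPerTimeStep=1,
    costUnit='1e9 Euro',
)

# Hypothetical flat profiles; a real model would use measured time series
pv_profile = pd.DataFrame({'region1': [0.3] * 8760})  # hourly capacity factor
demand = pd.DataFrame({'region1': [10.0] * 8760})     # hourly demand in GW_el

# Capacity-expandable PV source with illustrative cost parameters
esM.add(fn.Source(esM=esM, name='PV', commodity='electricity',
                  hasCapacityVariable=True, operationRateMax=pv_profile,
                  investPerCapacity=0.65, opexPerCapacity=0.01,
                  interestRate=0.08, economicLifetime=25))

# Fixed electricity demand that must be served in every hour
esM.add(fn.Sink(esM=esM, name='electricity demand', commodity='electricity',
                hasCapacityVariable=False, operationRateFix=demand))

esM.optimize(timeSeriesAggregation=False, solver='glpk')
```

Setting `timeSeriesAggregation=True` (after first clustering the input time series into typical periods) switches to the complexity-reducing typical period formulation mentioned above.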
\n\n## Acknowledgement\n\nThis work was supported by the Helmholtz Association under the Joint Initiative [""Energy System 2050 A Contribution of the Research Field Energy""](https://www.helmholtz.de/en/research/energy/energy_system_2050/).\n\n
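The typical period formulation relies on the time series aggregation provided by the tsam package linked above. The following is a minimal sketch of that clustering step in isolation, assuming a hypothetical hourly input file with one column per series:

```python
import pandas as pd
import tsam.timeseriesaggregation as tsam

# Hypothetical hourly input: one column per time series (load, wind, solar, ...)
raw = pd.read_csv('time_series.csv', index_col=0)

# Cluster the year into 8 typical days of 24 hours each
aggregation = tsam.TimeSeriesAggregation(
    raw,
    noTypicalPeriods=8,
    hoursPerPeriod=24,
    clusterMethod='hierarchical',
)

typical_periods = aggregation.createTypicalPeriods()  # aggregated series
print(typical_periods.head())
```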
'",,"2018/07/02, 14:48:55",1941,CUSTOM,192,1774,"2023/01/17, 13:53:47",0,36,46,4,281,0,0.1,0.7465007776049767,"2023/10/12, 13:10:45",v2.3.0,0,25,false,,false,false,,,https://github.com/FZJ-IEK3-VSA,https://www.fz-juelich.de/iek/iek-3/EN/Home/home_node.html,Forschungszentrum Jülich,,,https://avatars.githubusercontent.com/u/28654423?v=4,,, CoMPAS,Formed to develop open source software components related to IEC 61850 model implementation (profile management) and configuration of a power industry Protection Automation and Control System.,com-pas,https://github.com/com-pas/compas-architecture.git,github,,Energy Modeling and Optimization,"2023/02/08, 12:18:16",8,0,0,true,CSS,CoMPAS - (Co)nfiguration (M)odules for (P)ower industry (A)utomation (S)ystems,com-pas,"CSS,XQuery,HTML",,"b'\n\n# CoMPAS Architecture\n\n[![REUSE status](https://api.reuse.software/badge/github.com/com-pas/compas-architecture)](https://api.reuse.software/info/github.com/com-pas/compas-architecture)\n[![CII Best Practices](https://bestpractices.coreinfrastructure.org/projects/5925/badge)](https://bestpractices.coreinfrastructure.org/projects/5925)\n[![Slack](public/LFEnergy-slack.svg)](http://lfenergy.slack.com/)\n\nThis site provides the architectural description of CoMPAS. It describes the [Functional Architecture](FUNCTIONAL_ARCHITECTURE.md), presenting the logical view and design decisions, as well as the [Technical Architecture](technical/TECHNICAL_ARCHITECTURE.md), presenting the deployment view. It describes the [Technology Choices](TECHNOLOGY.md) made for the project together with the motivation.\nGeneral information about CoMPAS can be found [on the wiki at the LFE site](https://wiki.lfenergy.org/display/HOME/CoMPAS).\n\n## Open Community Calls\nWe hold an open community call on the first Monday of every month (always check the calendar). The start time right now is:\n\n3:00pm - 4:00pm CET\n\nIf you wish to participate, please join the [CoMPAS mailing list](https://lists.lfenergy.org/g/CoMPAS). Or contact us in the slack #compas channel on [public LFEnergy slack](http://lfenergy.slack.com/)/\n\nThe schedule for the next calls can be found in the mailing list [calendar](https://lists.lfenergy.org/g/CoMPAS/calendar).\nIf you want to subscribe and stay up to date about upcoming events, consider subscribing to the [CoMPAS mailing list](https://lists.lfenergy.org/g/CoMPAS).\n\n## Contributing\nInterested in contributing? Please read carefully the [CONTRIBUTING guidelines](https://com-pas.github.io/contributing/).\n\n## GitHub Pages\nThis site is provided as a [gitHub pages site](https://com-pas.github.io/compas-architecture/). \nThe content is maintained and edited on [GitHub](https://github.com/com-pas/compas-architecture). \nContributors are only allowed to contribute by editing the content on GitHub and must do so by presenting their modifications as *pull-request* to the community. \nThe diagrams on this page are created using [DrawIO](https://github.com/jgraph/drawio-desktop/releases) \nand follow [Unified Modeling Language (UML)](https://www.omg.org/spec/UML/). \nThe drawIO design file is available on this site: [/blob-files/CoMPAS.drawio](blob-files/CoMPAS.drawio). 
\nModification made to UML diagrams on this site must be made in this file and the modified file must be part of the pull request.\n'",,"2020/04/22, 12:41:08",1281,CC-BY-4.0,6,280,"2023/10/04, 08:59:00",24,91,159,13,21,0,0.8,0.22959183673469385,,,0,10,false,,false,false,,,https://github.com/com-pas,https://www.lfenergy.org/all-projects/compas/,,,,https://avatars.githubusercontent.com/u/63782197?v=4,,, PowerSimulations.jl,A Julia package for power system modeling and simulation of Power Systems operations.,NREL-SIIP,https://github.com/NREL-Sienna/PowerSimulations.jl.git,github,"julia,powersystems,optimization,energy,electricity,analysis,simulations",Energy Modeling and Optimization,"2023/10/18, 23:26:32",236,0,57,true,Julia,NREL-Sienna,NREL-Sienna,Julia,https://www.nrel.gov/analysis/sienna.html,"b'# PowerSimulations.jl\n\n[![Main - CI](https://github.com/NREL-Sienna/PowerSimulations.jl/actions/workflows/main-tests.yml/badge.svg)](https://github.com/NREL-Sienna/PowerSimulations.jl/actions/workflows/main-tests.yml)\n[![codecov](https://codecov.io/gh/NREL-Sienna/PowerSimulations.jl/branch/main/graph/badge.svg)](https://codecov.io/gh/NREL-Sienna/PowerSimulations.jl)\n[![Documentation](https://github.com/NREL-Sienna/PowerSimulations.jl/workflows/Documentation/badge.svg)](https://nrel-sienna.github.io/PowerSimulations.jl/latest)\n[![DOI](https://zenodo.org/badge/109443246.svg)](https://zenodo.org/badge/latestdoi/109443246)\n[](https://join.slack.com/t/nrel-sienna/shared_invite/zt-glam9vdu-o8A9TwZTZqqNTKHa7q3BpQ)\n[![PowerSimulations Downloads](https://shields.io/endpoint?url=https://pkgs.genieframework.com/api/v1/badge/PowerSimulations)](https://pkgs.genieframework.com?packages=PowerSimulations)\n\n`PowerSimulations.jl` is a Julia package for power system modeling and simulation of Power Systems operations. The objectives of the package are:\n\n- Provide a flexible modeling framework that can accommodate problems of different complexity and at different time-scales.\n\n- Streamline the construction of large scale optimization problems to avoid repetition of work when adding/modifying model details.\n\n- Exploit Julia\'s capabilities to improve computational performance of large scale power system quasi-static simulations.\n\nThe flexible modeling framework is enabled through a modular set of capabilities that enable scalable power system analysis and exploration of new analysis methods. 
The modularity of PowerSimulations results from the structure of the simulations enabled by the package:\n\n- _Simulations_ define a set of problems that can be solved using numerical techniques.\n\nFor example, an annual production cost modeling simulation can be created by formulating a unit commitment model against system data to assemble a set of 365 daily time-coupled scheduling problems.\n\n## Simulations enabled by PowerSimulations\n\n- Integrated Resource Planning\n- Production Cost Modeling\n\n## Operation model formulations contained in PowerSimulations\n\n- [Unit Commitment](https://en.wikipedia.org/wiki/Unit_commitment_problem_in_electrical_power_production)\n- [Economic Dispatch](https://en.wikipedia.org/wiki/Economic_dispatch)\n\n## Installation\n\n```julia\njulia> ]\n(v1.9) pkg> add PowerSystems\n(v1.9) pkg> add PowerSimulations\n```\n\n## Usage\n\n`PowerSimulations.jl` uses [PowerSystems.jl](https://github.com/NREL-Sienna/PowerSystems.jl) to handle the data used in the simulations.\n\n```julia\nusing PowerSimulations\nusing PowerSystems\n```\n\n## Development\n\nContributions to the development and enhancement of PowerSimulations are welcome. Please see [CONTRIBUTING.md](https://github.com/NREL-Sienna/PowerSimulations.jl/blob/main/CONTRIBUTING.md) for code contribution guidelines.\n\n## License\n\nPowerSimulations is released under a BSD [license](https://github.com/NREL-Sienna/PowerSimulations.jl/blob/main/LICENSE). PowerSimulations has been developed as part of the Scalable Integrated Infrastructure Planning (SIIP) initiative at the U.S. Department of Energy\'s National Renewable Energy Laboratory ([NREL](https://www.nrel.gov/)).\n'",",https://zenodo.org/badge/latestdoi/109443246","2017/11/03, 21:11:22",2181,BSD-3-Clause,961,8166,"2023/10/18, 23:26:33",23,730,981,124,6,2,3.3,0.2628037896471235,"2023/09/23, 00:12:42",v0.23.3,0,22,false,,false,true,,,https://github.com/NREL-Sienna,https://www.nrel.gov/analysis/sienna.html,"Golden, CO",,,https://avatars.githubusercontent.com/u/44615001?v=4,,, PowerSystems.jl,Provides a rigorous data model using Julia structures to enable power systems analysis and modeling.,NREL-SIIP,https://github.com/NREL-Sienna/PowerSystems.jl.git,github,"julia,powersystems,energy-system,electrical,nrel",Energy Modeling and Optimization,"2023/10/12, 22:16:41",263,0,45,true,Julia,NREL-Sienna,NREL-Sienna,"Julia,Python",https://www.nrel.gov/analysis/sienna.html,"b'# PowerSystems.jl\n\n[![Main - CI](https://github.com/NREL-Sienna/PowerSystems.jl/workflows/Main%20-%20CI/badge.svg)](https://github.com/NREL-Sienna/PowerSystems.jl/actions/workflows/main-tests.yml)\n[![codecov](https://codecov.io/gh/NREL-Sienna/PowerSystems.jl/branch/main/graph/badge.svg)](https://codecov.io/gh/NREL-Sienna/PowerSystems.jl)\n[![Documentation Build](https://github.com/NREL-Sienna/PowerSystems.jl/workflows/Documentation/badge.svg?)](https://nrel-sienna.github.io/PowerSystems.jl/stable)\n[![DOI](https://zenodo.org/badge/114039584.svg)](https://zenodo.org/badge/latestdoi/114039584)\n[](https://join.slack.com/t/nrel-sienna/shared_invite/zt-glam9vdu-o8A9TwZTZqqNTKHa7q3BpQ)\n[![PowerSystems.jl Downloads](https://shields.io/endpoint?url=https://pkgs.genieframework.com/api/v1/badge/PowerSystems)](https://pkgs.genieframework.com?packages=PowerSystems)\n\nThe `PowerSystems.jl` package provides a rigorous data model using Julia structures to enable power systems analysis and modeling. 
In addition to stand-alone system analysis tools and data model building, the `PowerSystems.jl` package is used as the foundational data container for the [PowerSimulations.jl](https://github.com/NREL/PowerSimulations.jl) and [PowerSimulationsDynamics.jl](https://github.com/NREL-Sienna/PowerSimulationsDynamics.jl) packages. `PowerSystems.jl` supports a limited number of data file formats for parsing.\n\n## Version Advisory\n\n- PowerSystems will work with Julia v1.6+.\n- If you are planning to use `PowerSystems.jl` in your package, check the [roadmap to version 4.0](https://github.com/NREL-Sienna/PowerSystems.jl/projects/4) for upcoming changes\n\n## Device data enabled in PowerSystems\n\n- Generators (Thermal, Renewable and Hydro)\n- Transmission (Lines, and Transformers)\n- Active Flow control devices (DC Lines and Phase Shifting Transformers)\n- TwoTerminal and Multiterminal HVDC\n- Topological elements (Buses, Arcs, Areas)\n- Storage (Batteries)\n- Load (Static, and Curtailable)\n- Services (Reserves, Transfers)\n- TimeSeries (Deterministic, Scenarios, Probabilistic)\n- Dynamic Generators Models\n- Dynamic Inverter Models\n\nFor a more exhaustive list, check the [Documentation](https://nrel-sienna.github.io/PowerSystems.jl/stable).\n\n## Parsing capabilities in PowerSystems\n\n- MATPOWER CaseFormat\n- PSS/e - PTI Format v30 and v33(.raw and .dyr files) \n- [RTS-GMLC](https://github.com/GridMod/RTS-GMLC/tree/main/RTS_Data/SourceData) table data format\n\n## Development\n\nContributions to the development and enhancement of PowerSystems are welcome. Please see\n[CONTRIBUTING.md](https://github.com/NREL/PowerSystems.jl/blob/main/CONTRIBUTING.md) for\ncode contribution guidelines.\n\n## Citing PowerSystems.jl\n\n[Paper describing `PowerSystems.jl`](https://www.sciencedirect.com/science/article/pii/S2352711021000765)\n\n```bibtex\n@article{LARA2021100747,\ntitle = {PowerSystems.jl \xe2\x80\x94 A power system data management package for large scale modeling},\njournal = {SoftwareX},\nvolume = {15},\npages = {100747},\nyear = {2021},\nissn = {2352-7110},\ndoi = {https://doi.org/10.1016/j.softx.2021.100747},\nurl = {https://www.sciencedirect.com/science/article/pii/S2352711021000765},\nauthor = {Jos\xc3\xa9 Daniel Lara and Clayton Barrows and Daniel Thom and Dheepak Krishnamurthy and Duncan Callaway},\nkeywords = {Power Systems, Julia, Energy},\n```\n\n## License\n\nPowerSystems is released under a BSD [license](https://github.com/NREL/PowerSystems.jl/blob/main/LICENSE).\nPowerSystems has been developed as part of the Scalable Integrated Infrastructure Planning (SIIP)\ninitiative at the U.S. 
Department of Energy\'s National Renewable Energy Laboratory ([NREL](https://www.nrel.gov/)).\n'",",https://zenodo.org/badge/latestdoi/114039584,https://doi.org/10.1016/j.softx.2021.100747","2017/12/12, 21:11:06",2142,BSD-3-Clause,243,4726,"2023/09/12, 01:00:52",40,655,975,98,43,1,2.5,0.42905405405405406,"2023/09/13, 03:12:49",v3.0.1,0,21,false,,false,true,,,https://github.com/NREL-Sienna,https://www.nrel.gov/analysis/sienna.html,"Golden, CO",,,https://avatars.githubusercontent.com/u/44615001?v=4,,, Balmorel,A partial equilibrium model for analyzing the electricity and combined heat and power sectors in an international perspective.,balmorelcommunity,https://github.com/balmorelcommunity/Balmorel.git,github,"energy-system-model,gams,optimisation-model",Energy Modeling and Optimization,"2022/03/29, 12:41:13",23,0,1,false,Jupyter Notebook,Balmorel Community,balmorelcommunity,"Jupyter Notebook,GAMS,HTML,Python,MATLAB",http://www.balmorel.com,"b'# Balmorel\n\n## What is Balmorel?\n\nBalmorel is a partial equilibrium model for analysing the electricity and combined heat and power sectors in an international perspective. It is highly versatile and may be applied for long range planning as well as shorter time operational analysis. Balmorel is implemented as a mainly linear programming optimisation problem.\n\nThe model is developed in a model language, and the source code is readily available under open source conditions, thus providing complete documentation of the functionalities. Moreover, the user may modify the model according to specific requirements, making the model suited for any purpose within the focus parts of the energy system.\n\n## What can Balmorel be used for?\n\nThe Balmorel model has been applied in projects or other activities in a number of countries, e.g., in Denmark, Norway, Sweden, Estonia, Latvia, Lithuania, Poland, Germany, Austria, Ghana, Mauritius, Canada and China. It has been used for analyses of, i.a., security of electricity supply, the role of flexible electricity demand, hydrogen technologies, wind power development, the role of natural gas, development of international electricity markets, market power, heat transmission and pricing, expansion of electricity transmission, international markets for green certificates and emission trading, electric vehicles in the energy system, environmental policy evaluation.\n\nSee ""Activities"" and ""Publications"" sections in the menu for description of ongoing and past projects using the Balmorel model.\n\n## Who can use Balmorel?\n\nBalmorel is a modelling tool that can be used by energy system experts, energy companies, authorities, transmission system operators, researchers and others for the analyses of future developments of a regional energy sector.\n\n## How is Balmorel supported and further developed?\n\nThe model is developed and distributed under open source ideals. The source code has been provided on its homepage since 2001 and was assigned the [ISC license](https://opensource.org/licenses/ISC) in 2017. Ample documentation is available in the folder [within this repository](base/documentation). Application examples and contact information can be found on the [Balmorel homepage](https://balmorel.com). 
Presently the model development is mainly project driven, with a users\' network around it, supporting the open source development idea.\n'",,"2017/07/06, 15:22:21",2302,BSD-3-Clause,0,431,"2021/12/08, 10:22:50",2,17,17,0,686,1,0.4,0.1486146095717884,"2019/12/13, 09:34:34",REgas_191215,0,6,false,,false,false,,,https://github.com/balmorelcommunity,,,,,https://avatars.githubusercontent.com/u/29952453?v=4,,, DistAIX,A simulator for cyber-physical power systems that makes use of high performance computing techniques to scale up the simulation.,acs/public/simulation/DistAIXFramework,,custom,,Energy Modeling and Optimization,,,,,,,,,,https://git.rwth-aachen.de/acs/public/simulation/DistAIXFramework/distaix,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,, The Open Energy Ontology,A domain ontology of the energy-system modeling context.,OpenEnergyPlatform,https://github.com/OpenEnergyPlatform/ontology.git,github,"ontology,energy,open-energy-family",Energy Modeling and Optimization,"2023/10/25, 11:38:00",91,0,33,true,Python,Open Energy Family,OpenEnergyPlatform,"Python,Makefile,Shell",,"b'\xef\xbb\xbf\n\n[![License: CC0-1.0](https://img.shields.io/badge/License-CC0%201.0-lightgrey.svg)](http://creativecommons.org/publicdomain/zero/1.0/)\n![GitHub release (latest by date)](https://img.shields.io/github/v/release/OpenEnergyPlatform/ontology)\n![Coverage Badge](https://img.shields.io/endpoint?url=https://gist.githubusercontent.com/areleu/6d00affa9fbc89c79684d62091d96551/raw/open_energy_ontology__heads_feature-1419-competency-question-coverage-report.json)\n\n# Open Energy Family - Open Energy Ontology (OEO)\n\nDeveloping a common ontology for the domain of energy system analysis.\n\n## Introduction\n\nThe **Open Energy Ontology (OEO)** is a domain ontology of the energy system analysis context. It is developed as part of the [Open Energy Family](https://github.com/OpenEnergyPlatform). The OEO is published on GitHub under an open source license. The language used is the Manchester OWL Syntax, which was chosen because it is user-friendly for editing and viewing differences of edited files. The OEO is constantly being extended. The first version of the OEO has been released on June 11th 2020. A Steering Committee (OEO-SC) was created to accompany the development, increase awareness of the ontology and include it in current projects.\n\n## Scope of this ontology\n\nThis domain ontology is a collaborative effort to represent the context of **energy system analysis** based on standard terminologies used by human experts in this field of research. It is designed to improve transparency and facilitate comparability and transferability of energy system modelling and scenario analysis. This ontology makes use of the Basic Formal Ontology ([BFO](https://github.com/OpenEnergyPlatform/ontology/wiki/BFO-Upper-Ontology-Classes)) and its principles. It re-uses several other ontologies as described in the [GitHub Wiki](https://github.com/OpenEnergyPlatform/ontology/wiki/use-of-external-ontologies).\n\n## License / Copyright / Citation\n\nThis repository is licensed under `CC0 1.0 Universal (CC0 1.0) Public Domain Dedication`.
\nFor a scientific citation, please see the [CITATION.cff](CITATION.cff).
\n\nTo cite a specific class of the ontology and its definition please use the following convention:\n> \'class label\' (FUll-URI) from the [Open Energy Ontology (OEO)](https://github.com/OpenEnergyPlatform/ontology)\n\nExample:\n> \'energy system\' (https://openenergy-platform.org/ontology/oeo/OEO_00030024) from the [Open Energy Ontology (OEO)](https://github.com/OpenEnergyPlatform/ontology)\n\n\n## Releases and installation\n\nThe latest version of the OEO can be accessed on the [Open Energy Platform](https://openenergy-platform.org/ontology/oeo) and the [Master Branch](https://github.com/OpenEnergyPlatform/ontology/tree/master).
\nAll released versions can be downloaded directly from the [GitHub Releases](https://github.com/OpenEnergyPlatform/ontology/releases/).
\nThe version currently under development is available on the default [dev Branch](https://github.com/OpenEnergyPlatform/ontology/).\n\nThe source code of the ontology is found in the folder `src/ontology/`
\nThe main file is `src/ontology/oeo.omn`
\n\nAll own modules are collected in the folder `src/ontology/edits/`
\nThe following diagram illustrates the modular file structure of the OEO. It depicts the import and file hierarchy from external imports (right) to the main file oeo.omn (left).\n![Structure of the OEO](https://user-images.githubusercontent.com/38690039/275459325-1c6eb63d-287a-45b5-a107-839c8c09bfe0.png)\n\nThe imported modules are under `src/ontology/imports/`
\nTo get an overview of the existing modules, take a look at the following wiki article: [GitHub Wiki](https://github.com/OpenEnergyPlatform/ontology/wiki/Modules-of-the-OEO)\nWe recommend using the software [Prot\xc3\xa9g\xc3\xa9](https://protege.stanford.edu/) to open and edit the ontology. Additionally, an ontology viewer is available on the [Open Energy Platform](https://openenergy-platform.org/viewer/oeo/).\n\n\n## Collaboration\nThis is an interdisciplinary open source project; help is always welcome.
\nEveryone is invited to develop this repository with good intentions.\n\nThe development of the ontology happens mainly on [GitHub](https://github.com/OpenEnergyPlatform/ontology) and is supplemented by regular (online) developer meetings to review the progress and discuss more complicated topics. \n\nIf you\'re new to GitHub or ontologies, check out our [""How to participate""](https://github.com/OpenEnergyPlatform/ontology/wiki/Welcome!-How-to-participate) article for initial advice and helpful links.\nThe workflow is described in the [CONTRIBUTING.md](https://github.com/OpenEnergyPlatform/ontology/blob/dev/CONTRIBUTING.md) file. Please check it out before you start working on this project. Points that are not clear in the file can be discussed in a [GitHub Issue](https://github.com/OpenEnergyPlatform/ontology/issues/new/choose).\nPlease read the [GitHub Wiki](https://github.com/OpenEnergyPlatform/ontology/wiki) for more information about the ontology, its standards, its best practice principles and the BFO classification.\n \n## Teams\nExperts in ontology engineering, economy and energy-system modelling work together collaboratively.
\nWe combine domain knowledge with knowledge about how an ontology should be designed.\n\nIf you have a specific question about ontology design, energy system modelling or economy (in context of this ontology), you can [tag](https://github.com/OmahaGirlsWhoCode/OmahaGirlsWhoCode/wiki/How-to-tag-someone-in-a-pull-request) one of these teams (or persons) to notify its members that you need their feedback or help.\n\nThe OEO is organised in a general team and several [subteams](https://github.com/orgs/OpenEnergyPlatform/teams/oeo-dev/teams):\n \n- [@oeo-dev](https://github.com/orgs/OpenEnergyPlatform/teams/oeo-dev)\n - All developers of the OEO\n\n### Organisation\n\n- [@oeo-community-manager](https://github.com/orgs/OpenEnergyPlatform/teams/oeo-community-manager)\n - Contact point for personal and team related concerns\n- [@oeo-concept-owner](https://github.com/orgs/OpenEnergyPlatform/teams/oeo-concept-owner)\n - Strategic and long-term development and coordination of developers\n- [@oeo-steering-committee](https://github.com/orgs/OpenEnergyPlatform/teams/oeo-steering-committee)\n - Members of the [Steering Committee (OEO-SC)](https://openenergy-platform.org/ontology/oeo-steering-committee/)\n- [@oeo-release-team](https://github.com/orgs/OpenEnergyPlatform/teams/oeo-release-team)\n - Coordinates the periodic releases\n\n### Domain Experts\n\n- [@oeo-domain-expert-energy-modelling](https://github.com/orgs/OpenEnergyPlatform/teams/oeo-domain-expert-energy-modelling)\n - Knowledge related to energy system modelling and simulation\n- [@oeo-domain-expert-economy](https://github.com/orgs/OpenEnergyPlatform/teams/oeo-domain-expert-economy)\n - Knowledge related to economic system, costs, monetary issues\n- [@oeo-domain-expert-linked-open-data](https://github.com/orgs/OpenEnergyPlatform/teams/oeo-domain-expert-linked-open-data)\n - Knowledge related to linked open data\n- [@oeo-domain-expert-meteorology](https://github.com/orgs/OpenEnergyPlatform/teams/oeo-domain-expert-meteorology)\n - Knowledge related to meteorology and weather\n\n### Ontology experts\n\n- [@oeo-general-expert-formal-ontology](https://github.com/orgs/OpenEnergyPlatform/teams/oeo-general-expert-formal-ontology)\n - Knowledge related to formal ontology expertise and BFO\n'",,"2018/06/06, 12:04:47",1967,CC0-1.0,1646,5246,"2023/10/25, 11:38:04",128,848,1590,366,0,3,2.1,0.7087719298245614,"2023/10/25, 09:57:21",v.2.0.0,4,34,false,,true,true,,,https://github.com/OpenEnergyPlatform,https://github.com/OpenEnergyPlatform/organisation/blob/master/README.md,"Magdeburg, Germany",,,https://avatars.githubusercontent.com/u/37101913?v=4,,, nempy,Aims to enhance the Australian electricity industries modeling and analytical capabilities.,UNSW-CEEM,https://github.com/UNSW-CEEM/nempy.git,github,,Energy Modeling and Optimization,"2023/10/17, 10:51:35",36,6,10,true,Python,Collaboration on Energy and Environmental Markets (CEEM),UNSW-CEEM,Python,,"b'# Nempy\n\n[![Current build](https://github.com/UNSW-CEEM/nempy/actions/workflows/test.yml/badge.svg)](https://github.com/UNSW-CEEM/nempy/actions/workflows/test.yml)\n[![Documentation](https://readthedocs.org/projects/nempy/badge/?version=latest)](https://nempy.readthedocs.io/en/latest/?badge=latest)\n[![DOI](https://joss.theoj.org/papers/10.21105/joss.03596/status.svg)](https://doi.org/10.21105/joss.03596)\n\n## Table of Contents\n- [Introduction](https://github.com/UNSW-CEEM/nempy#introduction)\n- [Installation](https://github.com/UNSW-CEEM/nempy#installation)\n- 
[Documentation](https://github.com/UNSW-CEEM/nempy#documentation)\n- [Community](https://github.com/UNSW-CEEM/nempy#community)\n- [Author](https://github.com/UNSW-CEEM/nempy#author)\n- [Citation](https://github.com/UNSW-CEEM/nempy#citation)\n- [License](https://github.com/UNSW-CEEM/nempy#license)\n- [Examples](https://github.com/UNSW-CEEM/nempy#examples)\n\n## Introduction\n\nNempy is a Python package for modelling the dispatch procedure of the Australian National Electricity Market (NEM). The idea is \nthat you can start simple and grow the complexity of your model by adding features such as \nramping constraints, interconnectors, FCAS markets and more. See the [examples](https://github.com/UNSW-CEEM/nempy#examples) below.\n\n| ![nempy-accuracy](https://github.com/prakaa/nempy/assets/40549624/6a994cee-3255-4e3d-b04b-6d4d7e155065) | \n|:--:| \n| *Dispatch price results from the New South Wales region for 1000 randomly selected intervals in the 2019 calendar year. The actual prices, prior to scaling or capping, are also shown for comparison. Results from two Nempy models are shown, one with a full set of dispatch features, and one without FCAS markets or generic constraints (network and security constraints). Actual prices, results from the full-featured model, and results from the simpler model are shown in descending order of actual prices; results from the simpler model are also shown re-sorted.* |\n\nFor further details, refer to the [documentation](https://nempy.readthedocs.io/en/latest/intro.html#).\n\nFor a brief introduction to the NEM, refer to this [document](https://aemo.com.au/-/media/Files/Electricity/NEM/National-Electricity-Market-Fact-Sheet.pdf).\n\n## Installation\nInstalling Nempy to use in your project is easy.\n\n```bash\npip install nempy\n```\n\n## Documentation\n\nA more detailed introduction to Nempy, examples, and reference documentation can be found on the \n[readthedocs](https://nempy.readthedocs.io/en/latest/) page.\n\n## Community\n\nNempy is open-source and we welcome all forms of community engagement.\n\n### Support\n\nYou can seek support for using Nempy using the [discussion tab on GitHub](https://github.com/UNSW-CEEM/nempy/discussions), checking the [issues register](https://github.com/UNSW-CEEM/nempy/issues), or by contacting Nick directly (n.gorman at unsw.edu.au).\n\nIf you cannot find a pre-existing issue related to your enquiry, you can submit a new one via the [issues register](https://github.com/UNSW-CEEM/nempy/issues). Issue submissions do not need to adhere to any particular format.\n\n### Future support and maintenance\n\nPlanning is currently underway to continue supporting and maintaining Nempy after the PhD project is complete. If Nempy\nis useful to your work, research, or business, please reach out and inform us so we can consider your use case and\nneeds.\n\n### Contributing\n\nContributions via pull requests are welcome. Contributions should:\n\n1. Follow the PEP8 style guide (with the exception of line length, which may be up to 120 rather than 80)\n2. Ensure that all existing automated tests continue to pass (unless you are explicitly changing intended behaviour; if you are, please highlight this in your pull request description)\n3. Implement automated tests for new features\n4. Provide docstrings for public interfaces\n\n#### Installation for development\n\nNempy uses [`poetry`](https://python-poetry.org/docs/) as a dependency and project management tool. 
To install Nempy for development, clone or fork the repo and then run the following command in the main directory to install required dependencies and the source code as an editable project:\n\n```bash\npoetry install --with=dev\n```\nYou can then work within the virtual environment using `poetry shell` or run commands within it using `poetry run`.\n\n## Author\n\nNempy\'s development is being led by Nick Gorman as part of his PhD candidature at the Collaboration on Energy and Environmental\nMarkets at the University of New South Wales\' School of Photovoltaics and Renewable Energy Engineering. (https://www.ceem.unsw.edu.au/). \n\n## Citation\n\nIf you use Nempy, please cite the package via the [JOSS paper](https://doi.org/10.5281/zenodo.7397514) (suggested citation below):\n> Gorman et al., (2022). Nempy: A Python package for modelling the Australian National Electricity Market dispatch procedure. Journal of Open Source Software, 7(70), 3596, https://doi.org/10.21105/joss.03596\n\n## License\n\nNempy was created by Nicholas Gorman. It is licensed under the terms of [the BSD 3-Clause Licence](./LICENSE).\n\n## Examples\n
\n\nA simple example\n\n```python\nimport pandas as pd\nfrom nempy import markets\n\n# Volume of each bid, number of bands must equal number of bands in price_bids.\nvolume_bids = pd.DataFrame({\n \'unit\': [\'A\', \'B\'],\n \'1\': [20.0, 50.0], # MW\n \'2\': [20.0, 30.0], # MW\n \'3\': [5.0, 10.0] # More bid bands could be added.\n})\n\n# Price of each bid, bids must be monotonically increasing.\nprice_bids = pd.DataFrame({\n \'unit\': [\'A\', \'B\'],\n \'1\': [50.0, 50.0], # $/MW\n \'2\': [60.0, 55.0], # $/MW\n \'3\': [100.0, 80.0] # . . .\n})\n\n# Other unit properties\nunit_info = pd.DataFrame({\n \'unit\': [\'A\', \'B\'],\n \'region\': [\'NSW\', \'NSW\'],\n})\n\n# The demand in the region(s) being dispatched\ndemand = pd.DataFrame({\n \'region\': [\'NSW\'],\n \'demand\': [120.0] # MW\n})\n\n# Create the market model\nmarket = markets.SpotMarket(unit_info=unit_info, \n market_regions=[\'NSW\'])\nmarket.set_unit_volume_bids(volume_bids)\nmarket.set_unit_price_bids(price_bids)\nmarket.set_demand_constraints(demand)\n\n# Calculate dispatch and pricing\nmarket.dispatch()\n\n# Return the total dispatch of each unit in MW.\nprint(market.get_unit_dispatch())\n# unit service dispatch\n# 0 A energy 40.0\n# 1 B energy 80.0\n\n# Return the price of energy in each region.\nprint(market.get_energy_prices())\n# region price\n# 0 NSW 60.0\n```\n\n
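Reading the output above: the 120 MW of demand is met in merit order, with 70 MW clearing at $50/MW (unit A's 20 MW plus unit B's 50 MW first bands), then B's 30 MW second band at $55/MW, and the final 20 MW coming from A's second band at $60/MW. That marginal band is why unit A dispatches 40 MW, unit B dispatches 80 MW, and the NSW energy price settles at 60.0 $/MW.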
\n\nA detailed example\n\nThe example demonstrates the broad range of market features that can be implemented with Nempy and the use of auxiliary \nmodelling tools for accessing historical market data published by AEMO and preprocessing it for compatibility with Nempy.\n\n> [!WARNING] \n> This example downloads approximately 54 GB of data from AEMO.\n\n```python\n# Notice:\n# - This script downloads large volumes of historical market data (~54 GB) from AEMO\'s nemweb\n# portal. You can also reduce the data usage by restricting the time window given to the\n# xml_cache_manager and in the get_test_intervals function. The boolean on line 23 can\n# also be changed to prevent this happening repeatedly once the data has been downloaded.\n\nimport sqlite3\nfrom datetime import datetime, timedelta\nimport random\nimport pandas as pd\nfrom nempy import markets\nfrom nempy.historical_inputs import loaders, mms_db, \\\n xml_cache, units, demand, interconnectors, constraints, rhs_calculator\nfrom nempy.help_functions.helper_functions import update_rhs_values\n\ncon = sqlite3.connect(\'D:/nempy_2021/historical_mms.db\')\nmms_db_manager = mms_db.DBManager(connection=con)\n\nxml_cache_manager = xml_cache.XMLCacheManager(\'D:/nempy_2021/xml_cache\')\n\n# The second time this example is run on a machine this flag can\n# be set to false to save downloading the data again.\ndownload_inputs = True\n\nif download_inputs:\n # This requires approximately 4 GB of storage.\n mms_db_manager.populate(start_year=2021, start_month=1,\n end_year=2021, end_month=1)\n\n # This requires approximately 50 GB of storage.\n xml_cache_manager.populate_by_day(start_year=2021, start_month=1, start_day=1,\n end_year=2021, end_month=2, end_day=1)\n\nraw_inputs_loader = loaders.RawInputsLoader(\n nemde_xml_cache_manager=xml_cache_manager,\n market_management_system_database=mms_db_manager)\n\n\n# A list of intervals we want to recreate historical dispatch for.\ndef get_test_intervals(number=100):\n start_time = datetime(year=2021, month=12, day=1, hour=0, minute=0)\n end_time = datetime(year=2021, month=12, day=31, hour=0, minute=0)\n difference = end_time - start_time\n difference_in_5_min_intervals = difference.days * 12 * 24\n random.seed(1)\n intervals = random.sample(range(1, difference_in_5_min_intervals), number)\n times = [start_time + timedelta(minutes=5 * i) for i in intervals]\n times_formatted = [t.isoformat().replace(\'T\', \' \').replace(\'-\', \'/\') for t in times]\n return times_formatted\n\n\n# List for saving outputs to.\noutputs = []\nc = 0\n# Create and dispatch the spot market for each dispatch interval.\nfor interval in get_test_intervals(number=100):\n c += 1\n print(str(c) + \' \' + str(interval))\n raw_inputs_loader.set_interval(interval)\n unit_inputs = units.UnitData(raw_inputs_loader)\n interconnector_inputs = interconnectors.InterconnectorData(raw_inputs_loader)\n constraint_inputs = constraints.ConstraintData(raw_inputs_loader)\n demand_inputs = demand.DemandData(raw_inputs_loader)\n rhs_calculation_engine = rhs_calculator.RHSCalc(xml_cache_manager)\n\n unit_info = unit_inputs.get_unit_info()\n market = markets.SpotMarket(market_regions=[\'QLD1\', \'NSW1\', \'VIC1\',\n \'SA1\', \'TAS1\'],\n unit_info=unit_info)\n\n # Set bids\n volume_bids, price_bids = unit_inputs.get_processed_bids()\n market.set_unit_volume_bids(volume_bids)\n market.set_unit_price_bids(price_bids)\n\n # Set bid in capacity limits\n unit_bid_limit = unit_inputs.get_unit_bid_availability()\n 
market.set_unit_bid_capacity_constraints(unit_bid_limit)\n cost = constraint_inputs.get_constraint_violation_prices()[\'unit_capacity\']\n market.make_constraints_elastic(\'unit_bid_capacity\', violation_cost=cost)\n\n # Set limits provided by the unconstrained intermittent generation\n # forecasts. Primarily for wind and solar.\n unit_uigf_limit = unit_inputs.get_unit_uigf_limits()\n market.set_unconstrained_intermitent_generation_forecast_constraint(\n unit_uigf_limit)\n cost = constraint_inputs.get_constraint_violation_prices()[\'uigf\']\n market.make_constraints_elastic(\'uigf_capacity\', violation_cost=cost)\n\n # Set unit ramp rates.\n def set_ramp_rates(run_type):\n ramp_rates = unit_inputs.get_ramp_rates_used_for_energy_dispatch(run_type=run_type)\n market.set_unit_ramp_up_constraints(\n ramp_rates.loc[:, [\'unit\', \'initial_output\', \'ramp_up_rate\']])\n market.set_unit_ramp_down_constraints(\n ramp_rates.loc[:, [\'unit\', \'initial_output\', \'ramp_down_rate\']])\n cost = constraint_inputs.get_constraint_violation_prices()[\'ramp_rate\']\n market.make_constraints_elastic(\'ramp_up\', violation_cost=cost)\n market.make_constraints_elastic(\'ramp_down\', violation_cost=cost)\n\n\n set_ramp_rates(run_type=\'fast_start_first_run\')\n\n # Set unit FCAS trapezium constraints.\n unit_inputs.add_fcas_trapezium_constraints()\n cost = constraint_inputs.get_constraint_violation_prices()[\'fcas_max_avail\']\n fcas_availability = unit_inputs.get_fcas_max_availability()\n market.set_fcas_max_availability(fcas_availability)\n market.make_constraints_elastic(\'fcas_max_availability\', cost)\n cost = constraint_inputs.get_constraint_violation_prices()[\'fcas_profile\']\n regulation_trapeziums = unit_inputs.get_fcas_regulation_trapeziums()\n market.set_energy_and_regulation_capacity_constraints(regulation_trapeziums)\n market.make_constraints_elastic(\'energy_and_regulation_capacity\', cost)\n contingency_trapeziums = unit_inputs.get_contingency_services()\n market.set_joint_capacity_constraints(contingency_trapeziums)\n market.make_constraints_elastic(\'joint_capacity\', cost)\n\n\n def set_joint_ramping_constraints(run_type):\n cost = constraint_inputs.get_constraint_violation_prices()[\'fcas_profile\']\n scada_ramp_down_rates = unit_inputs.get_scada_ramp_down_rates_of_lower_reg_units(\n run_type=run_type)\n market.set_joint_ramping_constraints_lower_reg(scada_ramp_down_rates)\n market.make_constraints_elastic(\'joint_ramping_lower_reg\', cost)\n scada_ramp_up_rates = unit_inputs.get_scada_ramp_up_rates_of_raise_reg_units(\n run_type=run_type)\n market.set_joint_ramping_constraints_raise_reg(scada_ramp_up_rates)\n market.make_constraints_elastic(\'joint_ramping_raise_reg\', cost)\n\n\n set_joint_ramping_constraints(run_type=""fast_start_first_run"")\n\n # Set interconnector definitions, limits and loss models.\n interconnectors_definitions = \\\n interconnector_inputs.get_interconnector_definitions()\n loss_functions, interpolation_break_points = \\\n interconnector_inputs.get_interconnector_loss_model()\n market.set_interconnectors(interconnectors_definitions)\n market.set_interconnector_losses(loss_functions,\n interpolation_break_points)\n\n # Calculate rhs constraint values that depend on the basslink frequency controller from scratch so there is\n # consistency between the basslink switch runs.\n # Find the constraints that need to be calculated because they depend on the frequency controller status.\n constraints_to_update = (\n 
rhs_calculation_engine.get_rhs_constraint_equations_that_depend_value(\'BL_FREQ_ONSTATUS\', \'W\'))\n initial_bl_freq_onstatus = rhs_calculation_engine.scada_data[\'W\'][\'BL_FREQ_ONSTATUS\'][0][\'@Value\']\n # Calculate new rhs values for the constraints that need updating.\n new_rhs_values = rhs_calculation_engine.compute_constraint_rhs(constraints_to_update)\n\n # Add generic constraints and FCAS market constraints.\n fcas_requirements = constraint_inputs.get_fcas_requirements()\n fcas_requirements = update_rhs_values(fcas_requirements, new_rhs_values)\n market.set_fcas_requirements_constraints(fcas_requirements)\n violation_costs = constraint_inputs.get_violation_costs()\n market.make_constraints_elastic(\'fcas\', violation_cost=violation_costs)\n generic_rhs = constraint_inputs.get_rhs_and_type_excluding_regional_fcas_constraints()\n generic_rhs = update_rhs_values(generic_rhs, new_rhs_values)\n market.set_generic_constraints(generic_rhs)\n market.make_constraints_elastic(\'generic\', violation_cost=violation_costs)\n\n unit_generic_lhs = constraint_inputs.get_unit_lhs()\n market.link_units_to_generic_constraints(unit_generic_lhs)\n interconnector_generic_lhs = constraint_inputs.get_interconnector_lhs()\n market.link_interconnectors_to_generic_constraints(\n interconnector_generic_lhs)\n\n # Set the operational demand to be met by dispatch.\n regional_demand = demand_inputs.get_operational_demand()\n market.set_demand_constraints(regional_demand)\n\n # Set tiebreak constraint to equalise dispatch of equally priced bids.\n cost = constraint_inputs.get_constraint_violation_prices()[\'tiebreak\']\n market.set_tie_break_constraints(cost)\n\n # Get unit dispatch without fast start constraints and use it to\n # make fast start unit commitment decisions.\n market.dispatch()\n dispatch = market.get_unit_dispatch()\n fast_start_profiles = unit_inputs.get_fast_start_profiles_for_dispatch(dispatch)\n set_ramp_rates(run_type=\'fast_start_second_run\')\n set_joint_ramping_constraints(run_type=\'fast_start_second_run\')\n market.set_fast_start_constraints(fast_start_profiles)\n if \'fast_start\' in market._constraints_rhs_and_type.keys():\n cost = constraint_inputs.get_constraint_violation_prices()[\'fast_start\']\n market.make_constraints_elastic(\'fast_start\', violation_cost=cost)\n\n # First run of Basslink switch runs\n market.dispatch() # First dispatch without allowing over constrained dispatch re-run to get objective function.\n objective_value_run_one = market.objective_value\n if constraint_inputs.is_over_constrained_dispatch_rerun():\n market.dispatch(allow_over_constrained_dispatch_re_run=True,\n energy_market_floor_price=-1000.0,\n energy_market_ceiling_price=15000.0,\n fcas_market_ceiling_price=1000.0)\n prices_run_one = market.get_energy_prices() # If this is the lowest cost run these will be the market prices.\n\n # Re-run dispatch with Basslink Frequency controller off.\n # Set frequency controller to off in rhs calculations\n rhs_calculation_engine.update_spd_id_value(\'BL_FREQ_ONSTATUS\', \'W\', \'0\')\n new_bl_freq_onstatus = rhs_calculation_engine.scada_data[\'W\'][\'BL_FREQ_ONSTATUS\'][0][\'@Value\']\n # Find the constraints that need to be updated because they depend on the frequency controller status.\n constraints_to_update = (\n rhs_calculation_engine.get_rhs_constraint_equations_that_depend_value(\'BL_FREQ_ONSTATUS\', \'W\'))\n # Calculate new rhs values for the constraints that need updating.\n new_rhs_values = 
rhs_calculation_engine.compute_constraint_rhs(constraints_to_update)\n # Update the constraints in the market.\n fcas_requirements = update_rhs_values(fcas_requirements, new_rhs_values)\n violation_costs = constraint_inputs.get_violation_costs()\n market.set_fcas_requirements_constraints(fcas_requirements)\n market.make_constraints_elastic(\'fcas\', violation_cost=violation_costs)\n generic_rhs = update_rhs_values(generic_rhs, new_rhs_values)\n market.set_generic_constraints(generic_rhs)\n market.make_constraints_elastic(\'generic\', violation_cost=violation_costs)\n\n # Reset ramp rate constraints for first run of second Basslink switchrun\n set_ramp_rates(run_type=\'fast_start_first_run\')\n set_joint_ramping_constraints(run_type=\'fast_start_first_run\')\n\n # Get unit dispatch without fast start constraints and use it to\n # make fast start unit commitment decisions.\n market.remove_fast_start_constraints()\n market.dispatch()\n dispatch = market.get_unit_dispatch()\n fast_start_profiles = unit_inputs.get_fast_start_profiles_for_dispatch(dispatch)\n set_ramp_rates(run_type=\'fast_start_second_run\')\n set_joint_ramping_constraints(run_type=\'fast_start_second_run\')\n market.set_fast_start_constraints(fast_start_profiles)\n if \'fast_start\' in market.get_constraint_set_names():\n cost = constraint_inputs.get_constraint_violation_prices()[\'fast_start\']\n market.make_constraints_elastic(\'fast_start\', violation_cost=cost)\n\n market.dispatch() # First dispatch without allowing over constrained dispatch re-run to get objective function.\n objective_value_run_two = market.objective_value\n if constraint_inputs.is_over_constrained_dispatch_rerun():\n market.dispatch(allow_over_constrained_dispatch_re_run=True,\n energy_market_floor_price=-1000.0,\n energy_market_ceiling_price=15000.0,\n fcas_market_ceiling_price=1000.0)\n prices_run_two = market.get_energy_prices() # If this is the lowest cost run these will be the market prices.\n\n prices_run_one[\'time\'] = interval\n prices_run_two[\'time\'] = interval\n\n # Getting historical prices for comparison. Note, ROP price, which is\n # the regional reference node price before the application of any\n # price scaling by AEMO, is used for comparison.\n historical_prices = mms_db_manager.DISPATCHPRICE.get_data(interval)\n\n # The prices from the run with the lowest objective function value are used.\n if objective_value_run_one < objective_value_run_two:\n prices = prices_run_one\n else:\n prices = prices_run_two\n\n prices[\'time\'] = interval\n prices = pd.merge(prices, historical_prices,\n left_on=[\'time\', \'region\'],\n right_on=[\'SETTLEMENTDATE\', \'REGIONID\'])\n\n outputs.append(prices)\n\ncon.close()\n\noutputs = pd.concat(outputs)\n\noutputs[\'error\'] = outputs[\'price\'] - outputs[\'ROP\']\n\nprint(\'\\n Summary of error in energy price volume weighted average price. 
\\n\'\n \'Comparison is against ROP, the price prior to \\n\'\n \'any post dispatch adjustments, scaling, capping etc.\')\nprint(\'Mean price error: {}\'.format(outputs[\'error\'].mean()))\nprint(\'Median price error: {}\'.format(outputs[\'error\'].quantile(0.5)))\nprint(\'5% percentile price error: {}\'.format(outputs[\'error\'].quantile(0.05)))\nprint(\'95% percentile price error: {}\'.format(outputs[\'error\'].quantile(0.95)))\n\n# Summary of error in energy price volume weighted average price.\n# Comparison is against ROP, the price prior to\n# any post dispatch adjustments, scaling, capping etc.\n# Mean price error: -0.3284696359015098\n# Median price error: 0.0\n# 5% percentile price error: -0.5389930178124978\n# 95% percentile price error: 0.13746097842649457\n```\n
\n \n'",",https://doi.org/10.21105/joss.03596,https://doi.org/10.5281/zenodo.7397514,https://doi.org/10.21105/joss.03596\n\n##","2020/04/14, 04:36:37",1289,BSD-3-Clause,67,372,"2023/10/24, 08:55:06",7,5,14,7,1,5,0.2,0.06779661016949157,"2023/10/17, 10:59:02",v2.0.1,0,4,false,,false,false,"EllieKallmier/ppa_analysis,dylanjmcconnell/NEMED,ShayanNaderi/NEMED,dec-heim/NEMED,dec-heim/NEMGLO,UNSW-CEEM/NEMED",,https://github.com/UNSW-CEEM,http://ceem.unsw.edu.au/,Sydney Australia,,,https://avatars.githubusercontent.com/u/33536784?v=4,,, NEMO,The National Electricity Market Optimizer is a chronological dispatch model for testing and optimizing different portfolios of conventional and renewable electricity generation technologies.,bje-,https://github.com/bje-/NEMO.git,github,,Energy Modeling and Optimization,"2023/10/04, 10:39:19",33,0,7,true,Python,,,"Python,Makefile",,"b'# National Electricity Market Optimiser (NEMO)\n\n![Build status\nbadge](https://github.com/bje-/NEMO/actions/workflows/buildtest.yml/badge.svg)\n[![Coverage\nStatus](https://coveralls.io/repos/github/bje-/NEMO/badge.svg?branch=master)](https://coveralls.io/github/bje-/NEMO?branch=master)\n[![CodeFactor](https://www.codefactor.io/repository/github/bje-/nemo/badge)](https://www.codefactor.io/repository/github/bje-/nemo)\n[![Bandit](https://img.shields.io/badge/security-bandit-yellow.svg)](https://github.com/PyCQA/bandit)\n\nNEMO is a chronological production-cost and capacity expansion model\nfor testing and optimising different portfolios of renewable and\nfossil electricity generation technologies. It has been developed and\nimproved over the past decade and has a growing number of users.\n\n![NEMO dispatch results](http://nemo.ozlabs.org/theworks.png)\n\nIt requires no proprietary software to run, making it particularly\naccessible to the governments of developing countries, academic\nresearchers and students. The model is available for others to inspect\nand, importantly, to validate the results.\n\n## Installation\n\n```bash\npip install nemopt\n```\n\n## Features\n\nFor a set of given (or default) generation or demand traces, users can:\n\n 1. Specify & simulate a custom resource mix, or;\n 2. ""Evolve"" a resource mix using pre-configured scenarios, or\n configure their own scenario\n\n### Evolution strategy\n\nThe benefit of an evolutionary approach is that while NEMO is\nsearching for the least-cost solution, NEMO can also explore\n""near-optimal"" resource mixes.\n\nNEMO no longer uses genetic algorithms, but has adopted the better\nperforming [CMA-ES](https://en.wikipedia.org/wiki/CMA-ES) method.\n\n### Resource models\n\nNEMO has models for the following resources: wind (including\noffshore), photovoltaics, concentrating solar power (CSP), hydropower,\npumped storage hydro, biomass, black coal, open cycle gas turbines\n(OCGTs), combined cycle gas turbines (CCGTs), diesel generators, coal\nwith carbon capture and storage (CCS), CCGT with CCS, geothermal,\ndemand response, batteries, electrolysers, hydrogen fuelled gas\nturbines, and more.\n\n## Documentation\n\nDocumentation is progressively being added to a [User\'s\nGuide](https://nbviewer.org/urls/nemo.ozlabs.org/guide.ipynb?flush_cache=1)\nin the form of a Jupyter notebook.\n\n[API documentation](http://nemo.ozlabs.org/pdoc/index.html) exists for\nthe `nemo` module. 
This is useful when building new tools that use the\nsimulation framework.\n\nThe model is described in an Energy Policy paper titled [Least cost\n100% renewable electricity scenarios in the Australian National\nElectricity\nMarket](http://ceem.unsw.edu.au/sites/default/files/documents/LeastCostElectricityScenariosInPress2013.pdf)\nby Elliston, MacGill and Diesendorf (2013).\n\n## System requirements\n\nNEMO should run on any operating system where Python 3 is available\n(eg, Windows, Mac OS X, Linux). It utilises some add-on packages:\n\n- [DEAP](https://deap.readthedocs.io/en/master/),\n- [Gooey](https://pypi.org/project/Gooey/),\n- [Matplotlib](http://matplotlib.org/),\n [Numpy](http://www.numpy.org/), [Pandas](http://pandas.pydata.org/)\n and\n- [Pint](https://pint.readthedocs.io).\n\n### Scaling up\n\nFor simple simulations or scripted sensitivity analyses, a laptop or\ndesktop PC will be adequate. However, for optimising larger systems, a\ncluster of compute nodes is desirable. The model is scalable and you\ncan devote as many locally available CPU cores to the model as you\nwish.\n\n> #### Note\n>\n> Due to a lack of active development, support for\n> [SCOOP](https://pypi.python.org/pypi/scoop) has been removed. It\n> will be soon replaced with something like [Ray](https://ray.io/).\n\n## Citation\n\nIf you use NEMO, please cite the following paper:\n\n> Ben Elliston, Mark Diesendorf, Iain MacGill, [Simulations of\n> scenarios with 100% renewable electricity in the Australian National\n> Electricity\n> Market](https://www.sciencedirect.com/science/article/pii/S0301421512002169?via=ihub#s0010),\n> Energy Policy, Volume 45, 2012, Pages 606-613, ISSN 0301-4215,\n> \n\n## Community\n\nThe [nemo-devel](https://lists.ozlabs.org/listinfo/nemo-devel) mailing\nlist is where users and developers can correspond.\n\n## Contributing\n\nEnhancements and bug fixes are very welcome. Please report bugs in the\n[issue tracker](https://github.com/bje-/NEMO/issues). Authors retain\ncopyright over their work.\n\n## License\n\nNEMO was first developed by [Dr Ben\nElliston](https://www.ceem.unsw.edu.au/staff/ben-elliston) in 2011 at\nthe [Collaboration for Energy and Environmental Markets, UNSW\nSydney](https://www.ceem.unsw.edu.au/).\n\nNEMO is free software and the source code is licensed under the [GPL version 3 license](COPYING).\n\n## Useful references\n\nAustralian cost data are taken from the [Australian Energy Technology\nAssessments](https://www.industry.gov.au/Office-of-the-Chief-Economist/Publications/Pages/Australian-energy-technology-assessments.aspx)\n(2012, 2013), the [Australian Power Generation Technology\nReport](http://www.co2crc.com.au/publication-category/reports/) (2015)\nand the CSIRO [GenCost\nreports](https://data.csiro.au/collections/collection/CIcsiro:44228)\n(2021, 2022, 2023). The GenCost reports provide the basis of the input\ncost assumptions for the AEMO [Integrated System\nPlan](https://aemo.com.au/en/energy-systems/major-publications/integrated-system-plan-isp).\nCosts for other countries may be added in time.\n\nRenewable energy trace data covering the Australian National\nElectricity Market territory are taken from the AEMO 100% Renewables\nStudy. 
An accompanying\n[report](http://content.webarchive.nla.gov.au/gov/wayback/20140211194248/http://www.climatechange.gov.au/sites/climatechange/files/files/reducing-carbon/APPENDIX3-ROAM-report-wind-solar-modelling.pdf)\ndescribes the method of generating the traces.\n\n## Acknowledgements\n\nEarly development of NEMO was financially supported by the [Australian\nRenewable Energy Agency](http://www.arena.gov.au/) (ARENA). Thanks to\nundergraduate and postgraduate student users at UNSW who have provided\nvaluable feedback on how to improve (and document!) the model.\n'",",https://doi.org/10.1016/j.enpol.2012.03.011","2015/11/23, 00:43:29",2893,GPL-3.0,94,1589,"2023/07/10, 08:55:31",0,1,5,1,107,0,0.0,0.0009661835748792091,,,0,2,false,,false,false,,,,,,,,,,, GlobalEnergyGIS,Generates input data for energy models on renewable energy in arbitrary world regions using public datasets.,niclasmattsson,https://github.com/niclasmattsson/GlobalEnergyGIS.git,github,,Energy Modeling and Optimization,"2023/04/28, 17:28:32",56,0,5,true,Julia,,,Julia,,"b'# GlobalEnergyGIS.jl\n\nAutomatic generation of renewable energy input data for energy models in arbitrary world regions using\npublic datasets. Written in Julia.\n\n## Paper\n\nThe work here has been described in a\n[scientific paper](paper/Mattsson%20et%20al.%202019%20-%20An%20autopilot%20for%20energy%20models.pdf)\nsubmitted to Energy Strategy Reviews along with its\n[supplementary material](paper/Mattsson%20et%20al.%202019%20-%20Supplementary%20-%20An%20autopilot%20for%20energy%20models.pdf).\n\n## Disk space requirements\n\nThis package uses several large datasets and requires a lot of disk space: roughly 10 GB + 29 GB/year of\nreanalysis data stored. Also, at least 50 GB of **additional** disk space will be required temporarily. Please\nensure that you have enough space available (perhaps on a secondary hard drive) before proceeding with the\ndata download. You also need a minimum of 8 GB of RAM.\n\n## Installation\n\nMake sure you are on Julia v1.6 or higher. Type `]` to enter Julia\'s package mode, then:\n\n```\n(@v1.6) pkg> add https://github.com/niclasmattsson/GlobalEnergyGIS\n``` \n\nGrab some coffee, because installing and compiling dependencies can take quite some time. If you don\'t\nyet have a Copernicus account, you can create one while you wait for the compilation to complete.\n\n## List of datasets and terms of use\n\nThe GlobalEnergyGIS package makes use of the following datasets. By using this package, you agree to abide by\ntheir terms of use. Please click the links to open the terms of use in your browser.\n\n* ECMWF ERA5 reanalysis and Copernicus download service: https://apps.ecmwf.int/datasets/licences/copernicus/\n* Global Wind Atlas version 1: https://globalwindatlas.info/about/TermsOfUse\n* World Database of Protected Areas: https://www.protectedplanet.net/c/terms-and-conditions\n* GADM (Global Administrative Areas): https://gadm.org/license.html\n* Eurostat NUTS (Administrative areas in Europe): https://ec.europa.eu/eurostat/web/gisco/geodata/reference-data/administrative-units-statistical-units\n* USGS MODIS 500m Land Cover: https://www.usgs.gov/centers/eros/data-citation\n* ETOPO1 Topography: https://www.ngdc.noaa.gov/mgg/global/dem_faq.html#sec-2\n* Population scenarios downscaled to 1 km resolution: http://www.cgd.ucar.edu/iam/modeling/spatial-population-scenarios.html\n* Global population & GDP. Original data: http://www.cger.nies.go.jp/gcp/population-and-gdp.html. 
Raster converted: https://github.com/Nowosad/global_population_and_gdp.\n\n## Setup and data preparation\n\n### 1. Create Copernicus account\n\nGlobalEnergyGIS is based on several public datasets, most notably the\n[ERA5 reanalysis](https://www.ecmwf.int/en/forecasts/datasets/reanalysis-datasets/era5) by the ECMWF\n(European Centre for Medium-Range Weather Forecasts). The reanalysis data is used to calculate hourly solar\ninsolation and wind speeds for any location on earth. To download ERA5 data you need to [create a free\naccount at the Copernicus Data Service (CDS)](https://cds.climate.copernicus.eu/user/register).\n\n### 2. Create config files and agree to dataset terms\n\nNow we will create two config files that will keep track of your preferred data storage path and your\nCopernicus ID.\n\nSince the datasets will require a lot of disk space (see hardware requirements above), the package allows\nyou to store all data on a secondary disk drive if you have one available. For example, if you have a fast\nSSD as a boot drive and a larger HDD, then consider storing the data on the HDD. It may also be a good idea\nto choose a storage location which doesn\'t get backed up automatically if your backup storage or bandwidth is\nlimited.\n\n[Login to your Copernicus account](https://cds.climate.copernicus.eu/user/login?destination=%2F%23!%2Fhome)\nand go to your profile page (click your name in the upper right), then scroll down to the API key section.\nHere you will find your user ID (UID) and the long API key string that needs to be copy/pasted into the command\nbelow.\n\nRun `saveconfig(folder_path, Copernicus_UID, ""your API key"", agree_terms=true)` and substitute your own\nCopernicus data to create a small configuration file (.cdsapirc) in your home directory. Use forward slash\n`/` or double backslashes `\\\\` as folder delimiters in the path string. For example:\n\n```\njulia> using GlobalEnergyGIS\n\njulia> saveconfig(""D:/GISdata"", 12345, ""abcdef123-ab12-cd34-ef56-abcdef123456"", agree_terms=true)\n```\n\nThe argument `agree_terms=true` is required to continue. By including it you agree to the terms of use of all\ndatasets listed above. The first time you run `using GlobalEnergyGIS` there will be another delay (a minute\nor two) while Julia precompiles the dependencies.\n\nTo agree to Copernicus terms, you **must** download a small amount of test data once using the web interface\n(otherwise you will get errors in step 5 below). Visit [the CDS web interface](https://cds.climate.copernicus.eu/cdsapp#!/dataset/reanalysis-era5-pressure-levels-monthly-means?tab=form),\nselect one checkbox in each parameter section (e.g. Product type, Variable, Pressure level, etc.),\nand finally click the ""Agree terms"" and ""Submit form"" buttons as in the screenshot below. If you have any problem with this, follow\n[step 2 in these detailed instructions](https://confluence.ecmwf.int/display/CKB/How+to+download+ERA5#HowtodownloadERA5-1-Prerequisites).\n\n![screenshot of CDS terms](https://github.com/niclasmattsson/GlobalEnergyGIS/blob/master/CDSterms.png)\n\n### 3. Download auxiliary datasets\n\nNow we will download the auxiliary public datasets listed above. The following command will download them all\nto the data folder you supplied when you created the configuration file.\n\n```\njulia> download_datasets()\n```\n\nA total of 17 files will be downloaded and unpacked. This will probably take 30 minutes or so depending on\nyour internet connection.\n\n### 4. 
Rasterization\n\nSeveral of the datasets are vector data (shapefiles). To speed up and simplify the subsequent processing, we\nwill rasterize them all to a common resolution (0.01 degrees, or roughly 1 km at the equator).\n\n```\njulia> rasterize_datasets(cleanup=:all)\n```\n\nThis command will automatically delete the original datasets to save disk space. Use the argument\n`cleanup=:limited` to keep the original files, or `cleanup=:none` to also keep intermediate raster files.\n\n### 5. Download and convert ERA5 data\n\nRun `download_and_convert_era5(data_year)` to begin downloading the ERA5 wind, solar and temperature data for a given year. About 26 GB of raw data must be downloaded (split into 24 files of 1.1 GB) *per variable, per year*. So 72 downloads in total. It will likely take several hours to download, depending on your internet connection and whether or not there is congestion in the CDS download queue.\n\n```\njulia> download_and_convert_era5(2018)\n```\n\nAfter the raw data for a variable has been downloaded, it is immediately aggregated and converted into a more suitable file format (HDF5) and recompressed to save disk space. To minimize disk usage we also throw away data we won\'t need (by default we discard wind direction, far offshore wind speeds and solar insolation over oceans). This will reduce disk usage to about 31 GB per year of ERA5 data in total (on average 14.7 GB for solar, 8.6 GB for wind and 7.4 GB for temperatures - solar needs roughly twice the space since we need to store both direct and diffuse solar insolation). \n\n## Usage (creating renewable energy input data for arbitrary model regions)\n\nThere are three main steps:\n\n1. Create economic background scenario (run only once per combination of SSP scenario and year)\n2. Create region files (run only once per set of model regions)\n3. Calculate potential capacities and hourly capacity factors for solar-, wind- and hydropower.\n\nThe output of steps 2 and 3 is read directly by the [Supergrid](https://github.com/niclasmattsson/Supergrid) model\n(a companion energy system model designed to work with this GIS package).\n\n### Create economic background scenario\n\nRun `create_scenario_datasets(SSPscenario, target_year)`, where `SSPscenario` is one of `""SSP1""`, `""SSP2""`\nor `""SSP3""` and `target_year` is one of 2020, 2030, ..., 2100.\n\n```\njulia> create_scenario_datasets(""SSP2"", 2050)\n```\n\n### Create region files\n\nRun `saveregions(regionset_name, region_definitions)`, where `regionset_name` is a string and\n`region_definitions` is a matrix as specified below. Then run `makedistances(regionset_name)` to determine\nwhich regions are connected onshore and offshore and calculate distances between population-weighted region\ncenters.\n\n```\njulia> saveregions(""Europe13"", europe13)\n\njulia> makedistances(""Europe13"")\n\n```\n\nHere `europe13` is a region matrix defined in `regiondefinitions.jl`, but you can refer to your own region\nmatrices defined elsewhere. See the next section for syntax examples. To get visual confirmation of the\nresults, run `createmaps(regionset_name)` to create images of onshore and offshore region territories\n(in `/GISdata_folder_path/output`).\n\n```\njulia> createmaps(""Europe13"")\n```\n\n### Region definition matrix syntax\n\nRegions are specified using an n x 2 matrix, where the first column contains names of model regions, and the\nsecond column contains information on which subregions are included in each main region using GADM or NUTS\nsubregion names. 
This is facilitated using the `GADM()` and `NUTS()` helper functions. Here\'s a simple\nexample of making a 4-region definition matrix of Scandinavian countries using both administrative border\ndatasets:\n\n```\nscandinavia4 = [\n ""SWE"" GADM(""Sweden"")\n ""NOR"" GADM(""Norway"")\n ""DEN"" GADM(""Denmark"")\n ""FIN"" GADM(""Finland"", ""\xc3\x85land"") # in GADM, the island of \xc3\x85land is a separate level-0 region\n]\n\nscandinavia4_nuts = [\n ""SWE"" NUTS(""SE"")\n ""NOR"" NUTS(""NO"")\n ""DEN"" NUTS(""DK"")\n ""FIN"" NUTS(""FI"")\n]\n```\n\nSubregions of the same level can be concatenated by listing multiple arguments to the `GADM()` or `NUTS()`\ncall. For example, mainland Portugal can be defined by `NUTS(""PT11"",""PT15"",""PT16"",""PT17"",""PT18"")`. This will\nexclude islands that are otherwise included in `NUTS(""PT"")`.\n\nSubregion names (codes) are unique in NUTS but not always in GADM. For this reason, GADM subregions must\nindicate their parent regions. Here are two examples:\n\n```\ngadm_subregions = [\n ""India_E"" GADM([""India""], ""Odisha"", ""Jharkhand"", ""West Bengal"", ""Bihar"")\n ""\xc3\x96land"" GADM([""Sweden"",""Kalmar""], ""Borgholm"", ""M\xc3\xb6rbyl\xc3\xa5nga"")\n]\n```\n\nIf the first argument to a `GADM()` call is a vector, then the remaining arguments are subregions to the last\nvector element. Here India_E (Eastern India) is defined using four level-1 subregions of the ""India"" level-0\nregion. Next we define the Swedish island of \xc3\x96land using its two municipalities that are level-2 subregions\nof Kalmar, which is itself a level-1 subregion of Sweden. If the first argument to a `GADM()` call is not a\nvector, then all arguments are assumed to be level-0 regions.\n\nThis vector syntax is not used in `NUTS()` calls since all subregion code names are unique.\n\nTo concatenate different levels of subregions or to mix NUTS and GADM calls in the same region, use a tuple\nof GADM and NUTS calls by enclosing them in parentheses:\n\n```\nconcatenation_examples = [\n ""China_SC"" (GADM([""China""], ""Henan"",""Hubei"",""Hunan"",""Guangdong"",""Guangxi"",""Hainan""), GADM(""Hong Kong"",""Macao""))\n ""France"" (NUTS(""FR""), GADM(""Monaco""))\n]\n```\n\nHere China_SC (south-central) is defined using six level-1 subregions of China, in addition to Hong Kong and\nMacao which are level-0 regions in GADM. Next, we define a France NUTS region that includes Monaco (which is\nnot included in NUTS) by concatenating the GADM definition of Monaco.\n\nFor more syntax examples, see `regiondefinitions.jl` (in the GlobalEnergyGIS /src folder).\n[Maps of NUTS regions can be found here](https://ec.europa.eu/eurostat/web/nuts/nuts-maps).\n\n### Subregion helper function\n\nThere is a simple helper function `subregions()` to facilitate finding the names of subregions. Note that the\nGADM version takes multiple subregion arguments while the NUTS version only takes a single argument (and matches\nthe beginning of the subregion name). 
The function will return a vector of subregion names.\n\n```\njulia> subregions(GADM)\n256-element Array{String,1}:\n ""Afghanistan""\n ""Akrotiri and Dhekelia""\n ""Albania""\n ""Algeria""\n[...]\n\njulia> subregions(GADM, ""France"")\n13-element Array{String,1}:\n ""Auvergne-Rh\xc3\xb4ne-Alpes""\n ""Bourgogne-Franche-Comt\xc3\xa9""\n ""Bretagne""\n ""Centre-Val de Loire""\n [...]\n\njulia> subregions(GADM, ""France"", ""Bretagne"")\n4-element Array{String,1}:\n ""C\xc3\xb4tes-d\'Armor""\n ""Finist\xc3\xa8re""\n ""Ille-et-Vilaine""\n ""Morbihan""\n\njulia> subregions(NUTS)\n37-element Array{String,1}:\n ""AL""\n ""AT""\n ""BE""\n ""BG""\n [...]\n\njulia> subregions(NUTS, ""UK"")\n179-element Array{String,1}:\n ""UKC11""\n ""UKC12""\n ""UKC13""\n ""UKC14""\n [...]\n\njulia> subregions(NUTS, ""UKN"")\n11-element Array{String,1}:\n ""UKN06""\n ""UKN07""\n ""UKN08""\n ""UKN09""\n [...]\n\n```\n\n### The actual GIS analysis\n\nFinally we have everything we need to actually calculate potential capacities and hourly capacity factors for\nsolar-, wind- and hydropower. This is the basic syntax, which assumes default values for all unlisted GIS\nparameters.\n\n```\njulia> GISsolar(gisregion=""Europe13"")\n\njulia> GISwind(gisregion=""Europe13"")\n\njulia> GIShydro(gisregion=""Europe13"")\n\n```\n\nHere is a call that changes some parameters:\n\n```\njulia> GISwind(gisregion=""Europe13"", scenarioyear=""ssp2_2020"", era_year=2016, persons_per_km2=100,\n\t\t\t\tmax_depth=60, min_shore_distance=2, area_onshore=0.05, area_offshore=0.20)\n```\n\nThe parameters that can be changed are listed in the section ""GIS options"" below. `GISwind()` and\n`GISsolar()` also take an optional boolean parameter `plotmasks=true` that will generate .png images\nof the dataset masks resulting from the other parameters. This will increase run times by a minute\nor two. Images will be placed in `/GISdata_folder_path/output`.\n\n```\njulia> GISwind(gisregion=""Europe13"", ..., plotmasks=true)\n\njulia> GISsolar(gisregion=""Europe13"", ..., plotmasks=true)\n```\n\n## Synthetic electricity demand\n\nThe synthetic demand module estimates the profile of hourly electricity demand in each model region given the\ntotal annual demand (determined by current national electricity demand per capita extrapolated using the SSP\nbackground scenario and target year). This is done using machine learning, specifically a method called\ngradient boosting tree regression. This is similar to ordinary regression, except that underlying mathematical\nrelationships between variables are determined automatically using a black box approach.\n\nWe train the model based on real electricity demand in 44 countries for the year 2015. Regression variables\ninclude calendar effects (e.g. hour of day and weekday/weekend indicators), temperature variables (e.g. hourly\ntemperature series in the most populated areas of each model region, or monthly averages as seasonality\nindicators) and economic indicators, e.g. 
local GDP per capita or electricity demand per capita (using the\nlatter variable is not ""cheating"", since we are merely interested in predicting hourly profiles of normalized\ndemand, not the demand level).\n\n### Easy version using our default parameters and regression variables\n\nAssuming you have already downloaded the requisite temperature data for the year you want to study\n(see `era5download()` and `maketempera5()`), and created population and GDP datasets for the SSP scenario\n(see `create_scenario_datasets()`):\n\n```\njulia> predictdemand(gisregion=""Europe8"", sspscenario=""ssp2-26"", sspyear=2050, era_year=2018)\n```\n\nThis will create a matrix (size 8760x`number_of_regions`) with the predicted electricity demand for each\nmodel region and hour of the year. This data is saved in a new JLD file in `/GISdata_folder_path/output`. Here the full SSP scenario variant must be specified including the 2-digit code representing radiative forcing target (e.g. 19, 26, 34, 45).\n\n### Selecting variables to train on\n\nThese are the default nine variables (so this will produce the exact same result as the previous example).\n\n```\njulia> selectedvars = [:hour, :weekend01, :temp_monthly, :ranked_month, :temp_top3,\n :temp1_mean, :temp1_qlow, :temp1_qhigh, :demandpercapita]\n\njulia> predictdemand(variables=selectedvars, gisregion=""Europe8"", sspscenario=""ssp2-26"", sspyear=2050, era_year=2018)\n```\n\nAnd here is a simpler example using seven variables:\n\n```\njulia> selectedvars = [:hour, :weekend01, :ranked_month, :temp_top3, :temp1_qlow, :temp1_qhigh, :gdppercapita]\n\njulia> predictdemand(variables=selectedvars, gisregion=""Europe8"", sspscenario=""ssp2-26"", sspyear=2050, era_year=2018)\n```\n\nCurrently we calculate data for 12 different variables. Any combination of these can be used to train on.\nThe full list along with brief explanations appears below near the bottom of this README.\n\n### Selecting custom learning parameters\n\nThese are the default parameters:\n\n```\njulia> predictdemand(variables=defaultvariables, gisregion=""Europe8"", sspscenario=""ssp2-34"", sspyear=2050, era_year=2018,\n nrounds=100, max_depth=7, eta=0.05, subsample=0.75, metrics=[""mae""]) \n```\n\nAnd here we modify them:\n\n```\njulia> predictdemand(variables=defaultvariables, gisregion=""Europe8"", sspscenario=""ssp1-45"", sspyear=2030, era_year=2018,\n nrounds=40, max_depth=8, eta=0.30, subsample=0.85, metrics=[""rmse""]) \n```\n\nThese parameters are explained briefly below. Additionally, any other\n[XGBoost parameters](https://xgboost.readthedocs.io/en/latest/parameter.html) can be specified.\n\n### Cross-validation of the training demand dataset (44 countries)\n\nCross-validation of the training dataset can help determine which variables to train on and what values of\nlearning parameters to use. This will predict the demand for all 44 countries in the demand dataset, but the\nmodel built for each country will only use data from the other 43 countries.\n\nIterations appear significantly slower than `predictdemand()` because it trains 44 models in parallel. \n\n```\njulia> crossvalidate(variables=defaultvariables, nrounds=100, max_depth=7, eta=0.05, subsample=0.75, metrics=[""mae""])\n```\n\nNote that there is no `gisregion` argument since we are both training and predicting the same 44 country\ndataset. The log will show two columns of training errors. 
The right column `cv-train-mae` will have lower\nerrors, but this results from evaluating the model on the same data it trained on (i.e. ""cheating""). The\ncolumn `cv-test-mae` on the left is the real (non-cheating) result. Resist the temptation to adapt\nparameters to the right column.\n\n## GIS options\n\n### Default GISsolar() options\n\n```\nsolaroptions() = Dict(\n :gisregion => ""Europe8"", # ""Europe8"", ""Eurasia38"", ""Scand3""\n :filenamesuffix => """", # e.g. ""_landx2"" to save high land availability data as ""GISdata_solar2018_Europe8_landx2.mat"" \n\n :pv_density => 45, # Solar PV land use 45 Wp/m2 = 45 MWp/km2 (includes PV efficiency & module spacing, add latitude dependency later)\n :csp_density => 35, # CSP land use 35 W/m2\n\n :pvroof_area => .05, # area available for rooftop PV after the masks have been applied\n :plant_area => .05, # area available for PV or CSP plants after the masks have been applied\n\n :distance_elec_access => 300, # max distance to grid [km] (for solar classes of category B)\n :plant_persons_per_km2 => 150, # not too crowded, max X persons/km2 (both PV and CSP plants)\n :pvroof_persons_per_km2 => 200, # only in populated areas, so AT LEAST x persons/km2\n # US census bureau requires 1000 ppl/mile^2 = 386 ppl/km2 for ""urban"" (half in Australia)\n # roughly half the people of the world live at density > 300 ppl/km2\n :exclude_landtypes => [0,1,2,3,4,5,8,12], # exclude water, forests and croplands. See codes in table below.\n :protected_codes => [1,2,3,4,5,8], # IUCN codes to be excluded as protected areas. See codes in table below.\n\n :scenarioyear => ""ssp2_2050"", # default scenario and year for population and grid access datasets\n :era_year => 2018, # which year of the ERA5 time series to use \n\n :res => 0.01, # resolution of auxiliary datasets [degrees per pixel]\n :erares => 0.28125, # resolution of ERA5 datasets [degrees per pixel]\n\n :pvclasses_min => [0.08,0.14,0.18,0.22,0.26], # lower bound on annual PV capacity factor for class X [0:0.01:0.49;]\n :pvclasses_max => [0.14,0.18,0.22,0.26,1.00], # upper bound on annual PV capacity factor for class X [0.01:0.01:0.50;]\n :cspclasses_min => [0.10,0.18,0.24,0.28,0.32], # lower bound on annual CSP capacity factor for class X\n :cspclasses_max => [0.18,0.24,0.28,0.32,1.00] # upper bound on annual CSP capacity factor for class X\n)\n```\n\n### Default GISwind() options\n\n```\nwindoptions() = Dict(\n :gisregion => ""Europe8"", # ""Europe8"", ""Eurasia38"", ""Scand3""\n :filenamesuffix => """", # e.g. 
""_landx2"" to save high land availability data as ""GISdata_solar2018_Europe8_landx2.mat"" \n\n :onshore_density => 5, # about 30% of existing farms have at least 5 W/m2, will become more common\n :offshore_density => 8, # varies a lot in existing parks (4-18 W/m2)\n # For reference: 10D x 5D spacing of 3 MW turbines (with 1D = 100m) is approximately 6 MW/km2 = 6 W/m2\n :area_onshore => .08, # area available for onshore wind power after the masks have been applied\n :area_offshore => .33, # area available for offshore wind power after the masks have been applied\n\n :distance_elec_access => 300, # max distance to grid [km] (for wind classes of category B and offshore)\n :persons_per_km2 => 150, # not too crowded, max X persons/km2\n # US census bureau requires 1000 ppl/mile^2 = 386 ppl/km2 for ""urban"" (half in Australia)\n # roughly half the people of the world live at density > 300 ppl/km2\n :max_depth => 40, # max depth for offshore wind [m]\n :min_shore_distance => 5, # minimum distance to shore for offshore wind [km]\n :exclude_landtypes => [0,11,13], # exclude water, wetlands and urban areas. See codes in table below.\n :protected_codes => [1,2,3,4,5,8], # IUCN codes to be excluded as protected areas. See codes in table below.\n\n :scenarioyear => ""ssp2_2050"", # default scenario and year for population and grid access datasets\n :era_year => 2018, # which year of the ERA5 time series to use \n :rescale_to_wind_atlas => true, # rescale the ERA5 time series to fit annual wind speed averages from the Global Wind Atlas\n\n :res => 0.01, # resolution of auxiliary datasets [degrees per pixel]\n :erares => 0.28125, # resolution of ERA5 datasets [degrees per pixel]\n\n :onshoreclasses_min => [2,5,6,7,8], # lower bound on annual onshore wind speeds for class X [0:0.25:12.25;]\n :onshoreclasses_max => [5,6,7,8,99], # upper bound on annual onshore wind speeds for class X [0.25:0.25:12.5;]\n :offshoreclasses_min => [3,6,7,8,9], # lower bound on annual offshore wind speeds for class X\n :offshoreclasses_max => [6,7,8,9,99] # upper bound on annual offshore wind speeds for class X\n)\n```\n\n### Default GIShydro() options\n\n```\nhydrooptions() = Dict(\n :gisregion => ""Europe8"", # ""Europe8"", ""Eurasia38"", ""Scand3""\n\n :costclasses_min => [ 0, 50, 100], # US $/MWh\n :costclasses_max => [50, 100, 999],\n\n :storageclasses_min => [ 0, 1e-6, 12], # weeks (discharge time)\n :storageclasses_max => [1e-6, 12, 9e9]\n)\n```\n\n### Land types\n\n```\n 0 \'Water\' \n 1 \'Evergreen Needleleaf Forests\'\n 2 \'Evergreen Broadleaf Forests\' \n 3 \'Deciduous Needleleaf Forests\'\n 4 \'Deciduous Broadleaf Forests\' \n 5 \'Mixed Forests\' \n 6 \'Closed Shrublands\' \n 7 \'Open Shrublands\' \n 8 \'Woody Savannas\' \n 9 \'Savannas\' \n10 \'Grasslands\' \n11 \'Permanent Wetlands\' \n12 \'Croplands\' \n13 \'Urban\' \n14 \'Cropland/Natural\' \n15 \'Snow/Ice\' \n16 \'Barren\'\n``` \n\n### Protected areas (IUCN codes from the WDPA)\n\n``` \n1 \'Ia\' \'Strict Nature Reserve\' \n2 \'Ib\' \'Wilderness Area\' \n3 \'II\' \'National Park\' \n4 \'III\' \'Natural Monument\' \n5 \'IV\' \'Habitat/Species Management\' \n6 \'V\' \'Protected Landscape/Seascape\' \n7 \'VI\' \'Managed Resource Protected Area\'\n8 \'Not Reported\' \'Not Reported\' \n9 \'Not Applicable\' \'Not Applicable\' \n10 \'Not Assigned\' \'Not Assigned\' \n``` \n\n\n\n### Synthetic demand: list of training variables\n\nList of variables in the training dataset that can be use for the regression:\n\n```\nCalendar variables\n :hour hour of day\n :month 
month of year\n :weekend01 weekend indicator\n\nHourly temperatures:\n :temp1 temperature in the largest population center of each region\n :temp_top3 average temperature in the three largest population centers of each region\n\nMonthly temperatures (season indicators):\n :temp_monthly average monthly temperature in the largest population center of each region\n :ranked_month rank of the average monthly temperature of each month (1-12)\n\nAnnual temperature levels and variability:\n :temp1_mean average annual temperature in the largest population center of each region\n :temp1_qlow low annual temperature - 5% quantile of hourly temperatures\n :temp1_qhigh high annual temperature - 95% quantile of hourly temperatures\n\nEconomic indicators:\n :demandpercapita level of annual average electricity demand [MWh/year/capita] in each region\n :gdppercapita level of annual average GDP per capita [USD(2010)/capita] in each region\n```\n\n### Default learning parameters for the synthetic demand regression\n\nOur default values are adapted to our data and differ from the default XGBoost parameters. The parameters\nalso need to be adapted to each other. For example, a lower `eta` value may require a higher `nrounds` value\nto reach full benefit. \n\n```\nnrounds=100 # number of rounds of learning improvements\nmax_depth=7 # tree depth, i.e. complexity of the underlying black box model. Increasing this may lead to overfitting.\neta=0.05 # learning rate. Higher values will improve faster, but may ultimately lead to a less efficient model.\nsubsample=0.75 # how much of the training data to use in each iteration. Same tradeoff as \'eta\' parameter.\nmetrics=[""mae""] # ""mae"" = mean absolute error, ""rmse"" = root mean square error, or both. Note the brackets (it\'s a vector).\n```\n\nA slightly better but much more computationally demanding set of parameters: `nrounds=1000`, `max_depth=7`, `eta=0.005`, and `subsample=0.05`. 
These were the parameters used to produce figures 1 and 2 in the paper above.\n\nFor more information on these and other selectable parameters, see https://xgboost.readthedocs.io/en/latest/parameter.html.\n'",,"2019/05/21, 08:27:31",1618,MIT,3,362,"2020/05/11, 12:19:09",16,0,3,0,1262,5,0,0.0,,,0,1,false,,false,false,,,,,,,,,,, Antares Simulator,"An Open Source power system simulator to quantify the adequacy or the economic performance of interconnected energy systems, at short or remote time horizons.",AntaresSimulatorTeam,https://github.com/AntaresSimulatorTeam/Antares_Simulator.git,github,"adequacy,simulation,power-systems",Energy Modeling and Optimization,"2023/10/25, 10:15:18",50,0,15,true,C,AntaresSimulatorTeam,AntaresSimulatorTeam,"C,C++,CMake,Python,Ruby,Shell,NSIS",https://antares-simulator.org,"b'# Antares Simulator\n[![Status][ubuntu_precompiled_svg]][ubuntu_precompiled_link] [![Status][windows_precompiled_svg]][windows_precompiled_link] [![Status][centos_precompiled_svg]][centos_precompiled_link] [![Quality Gate Status](https://sonarcloud.io/api/project_badges/measure?project=AntaresSimulatorTeam_Antares_Simulator&metric=alert_status)](https://sonarcloud.io/dashboard?id=AntaresSimulatorTeam_Antares_Simulator)\n\n[![License: GPL v3](https://img.shields.io/badge/License-GPLv3-blue.svg)](https://www.gnu.org/licenses/gpl-3.0)\n\n\n\nAntares Simulator is an open source power system simulator meant to be\nused by anybody placing value in quantifying the adequacy or the \neconomic performance of interconnected power systems, at short or \nremote time horizons: \n\nTransmission system Operators, Power Producers, Regulators, Academics,\nConsultants, NGOs and all other actors concerned by energy policy issues\nare welcome to use the software.\n\nThe Antares Simulator project was initiated by RTE (French Electricity \nTransmission system Operator) in 2007. It was developed from the start\nas a cross-platform application (Windows, GNU/Linux, Unix). 
\n\nUntil 2018 it was distributed under the terms of a proprietary license.\n\nIn May 2018 RTE decided to release the project under the GPLv3 license.\n\n[linux_system_svg]: https://github.com/AntaresSimulatorTeam/Antares_Simulator/workflows/Linux%20CI%20(system%20libs)/badge.svg\n\n[linux_system_link]: https://github.com/AntaresSimulatorTeam/Antares_Simulator/actions?query=workflow%3A""Linux%20CI%20(system%20libs)""\n\n[windows_vcpkg_svg]: https://github.com/AntaresSimulatorTeam/Antares_Simulator/workflows/Windows%20CI%20(VCPKG)/badge.svg\n\n[windows_vcpkg_link]: https://github.com/AntaresSimulatorTeam/Antares_Simulator/actions?query=workflow%3A""Windows%20CI%20(VCPKG)""\n\n[centos7_system_svg]: https://github.com/AntaresSimulatorTeam/Antares_Simulator/workflows/Centos7%20CI%20(system%20libs)/badge.svg\n\n[centos7_system_link]: https://github.com/AntaresSimulatorTeam/Antares_Simulator/actions?query=workflow%3A""Centos7%20CI%20(system%20libs)""\n\n\n# Links:\n\n- Antares web site : https://antares-simulator.org\n- RTE web site : http://www.rte-france.com/\n- Doxygen code documentation : https://antaressimulatorteam.github.io/Antares_Simulator/\n\n\n# Installation\n\nThis software suite has been tested under:\n\n* Ubuntu 20.04 [![Status][ubuntu_precompiled_svg]][ubuntu_precompiled_link] \n* Microsoft Windows with Visual Studio 2019 (64-bit) [![Status][windows_precompiled_svg]][windows_precompiled_link]\n* Centos7 [![Status][centos_precompiled_svg]][centos_precompiled_link] \n\nAntares Simulator is built using CMake.\nFor installation instructions, please visit [INSTALL.md](INSTALL.md)\n\n# Source Code Content\n\n* [AUTHORS](AUTHORS.txt) - Antares Simulator authors\n* [CERTIFICATE](CERTIFICATE.txt)\t - A standard DCO that has to be signed by every contributor \n* [CONTRIBUTING](CONTRIBUTING.txt)\t - How to submit patches and discuss code evolutions\n* [COPYING](COPYING.txt) - The GPL v3 license.\n* [INSTALL](INSTALL.md) - Installation and building instructions.\n* [NEWS](NEWS.md) - Important modifications between the releases.\n* [README](README.md) - This file.\n* [ROADMAP](ROADMAP.txt) - Main orientations for further developments \n* [THANKS](THANKS.txt) - Attribution notices for external libraries and contributors.\n* [resources/](resources)\t - Free sample data sets. \n* [src/analyzer/](src/analyzer) - source code for the statistical analysis of historical time-series.\n* [src/cmake/](src/cmake) - files for initializing a solution ready for compilation. \n* [src/distrib/](src/distrib) - system redistributable libraries Win(x64,x86),unix. \n* [src/ext/](src/ext) \t - third party libraries used by Antares_Simulator: libYuni, Sirius_Solver.\n* [src/libs/](src/libs)\t\t - miscellaneous Antares_Simulator libraries.\n* [src/solver/](src/solver) - simulation and optimization part.\n* [src/tools/](src/tools) - miscellaneous tools for dataset management. 
\n\n\n[ubuntu_precompiled_svg]: https://github.com/AntaresSimulatorTeam/Antares_Simulator/workflows/Ubuntu%20CI%20(pre-compiled)/badge.svg\n[ubuntu_precompiled_link]: https://github.com/AntaresSimulatorTeam/Antares_Simulator/actions?query=workflow%3A""Ubuntu%20CI%20(pre-compiled)""\n\n[windows_precompiled_svg]: https://github.com/AntaresSimulatorTeam/Antares_Simulator/workflows/Windows%20CI%20(VCPKG%20and%20pre-compiled)/badge.svg\n[windows_precompiled_link]: https://github.com/AntaresSimulatorTeam/Antares_Simulator/actions?query=workflow%3A""Windows%20CI%20(VCPKG%20and%20pre-compiled)""\n\n[centos_precompiled_svg]: https://github.com/AntaresSimulatorTeam/Antares_Simulator/workflows/Centos7%20CI%20(pre-compiled)/badge.svg\n[centos_precompiled_link]: https://github.com/AntaresSimulatorTeam/Antares_Simulator/actions?query=workflow%3A""Centos7%20CI%20(pre-compiled)""\n'",,"2018/07/03, 14:05:14",1940,CUSTOM,383,1668,"2023/10/25, 10:15:19",143,1122,1558,787,0,13,1.8,0.4617330803289058,"2023/10/13, 10:30:54",v8.8.0-rc2,0,21,false,,false,true,,,https://github.com/AntaresSimulatorTeam,,,,,https://avatars.githubusercontent.com/u/40797821?v=4,,, HELICS,"Today the core uses are in the energy domain, where there is extensive and growing support for a wide-range of electric power system, natural gas, communications and control-schemes, transportation, buildings, and related domain tools.",GMLC-TDC,https://github.com/GMLC-TDC/HELICS.git,github,"simulation,co-simulation,power-grids,simulation-framework",Energy Modeling and Optimization,"2023/07/11, 15:59:48",101,0,22,true,C++,GMLC-TDC,GMLC-TDC,"C++,C,Java,CMake,Shell,Python,MATLAB,SWIG,C#,Makefile,M,CSS",https://docs.helics.org/en/latest/,"b'


\n\nA multi-language, cross-platform library that enables different simulators to easily exchange data and stay synchronized in time. Scalable from two simulators on a laptop to 100,000+ running on supercomputers, the cloud, or a mix of these platforms.\n\n[![](https://badges.gitter.im/GMLC-TDC/HELICS.png)](https://gitter.im/GMLC-TDC/HELICS)\n[![](https://img.shields.io/badge/docs-ready-blue.svg)](https://helics.readthedocs.io/en/latest)\n[![](https://img.shields.io/conda/pn/gmlc-tdc/helics.svg)](https://anaconda.org/gmlc-tdc/helics/)\n[![](https://ci.appveyor.com/api/projects/status/9rnwrtelsa68k5lt/branch/develop?svg=true)](https://ci.appveyor.com/project/HELICS/helics/history)\n[![Cirrus Status](https://api.cirrus-ci.com/github/GMLC-TDC/HELICS.svg)](https://cirrus-ci.com/github/GMLC-TDC/HELICS)\n[![](https://codecov.io/gh/GMLC-TDC/HELICS/branch/develop/graph/badge.svg)](https://codecov.io/gh/GMLC-TDC/HELICS/branch/develop)\n[![Releases](https://img.shields.io/github/tag-date/GMLC-TDC/HELICS.svg)](https://github.com/GMLC-TDC/HELICS/releases)\n[![](https://img.shields.io/badge/License-BSD-blue.svg)](https://github.com/GMLC-TDC/HELICS/blob/main/LICENSE)\n[![pre-commit.ci status](https://results.pre-commit.ci/badge/github/GMLC-TDC/HELICS/main.svg)](https://results.pre-commit.ci/latest/github/GMLC-TDC/HELICS/main)\n\n## Table of contents\n\n- [Introduction](#introduction)\n - [Philosophy](#philosophy-of-helics)\n- [Getting Started](#getting-started)\n - [Language Bindings](#language-bindings)\n- [Documentation](#documentation)\n - [Changelog](#changelog)\n - [RoadMap](#roadmap)\n - [Installation](#installation)\n - [Quick links](#quick-links)\n- [Tools with HELICS support](#tools-with-helics-support)\n- [Contributing](#contributing)\n- [Build Status](#build-status)\n- [Publications](#publications)\n- [In the News](#in-the-news)\n- [History and Motivation](#history-and-motivation)\n- [Release](#release)\n\n## Introduction\n\nWelcome to the repository for the Hierarchical Engine for Large-scale Infrastructure Co-Simulation (HELICS). HELICS provides a general-purpose, modular, highly-scalable co-simulation framework that runs cross-platform and has bindings for multiple languages. It is a library that enables multiple existing simulation tools (and/or instances of the same tool), known as ""federates"", to exchange data during runtime and stay synchronized in time such that together they act as one large simulation, or ""federation"". This enables bringing together simulation tools from multiple domains to form a complex software simulation without having to change the individual tools.\n\nIt is important to note that HELICS cannot in and of itself simulate anything; rather, it is a framework to make it easy to bring together other existing (or novel) simulation tools to tackle problems that can\'t readily be solved by a single tool alone. After all, ""simulations are better together,"" and HELICS is designed to help get you there easily and quickly. HELICS has also already worked out many of the more subtle aspects of synchronizing simulations so you don\'t have to.\n\nToday the core uses of HELICS are in the energy domain, where there is extensive and growing support for a wide range of electric power system, natural gas, communications and control-schemes, transportation, buildings, and related domain tools ([Supported Tools](docs/references/Tools_using_HELICS.md)). 
However, it is possible to use HELICS for co-simulation in any domain; the HELICS API and language bindings make it straightforward to connect any simulation tool that provides a scripting interface or access to source code.\n\nPrevious and existing use cases have stretched across a wide range of scales in time and spatial area, from transient dynamics to long-term planning studies, and from individual appliance behavior to nation-wide simulations.\n\n### Philosophy of HELICS\n\nThe design and development of HELICS is driven by a number of philosophical considerations that have a clear path to design decisions in the code and reflect the needs of the use cases that drive HELICS development.\n\n- Make it as easy as possible for federates of all kinds to work together\n- Federates cannot impose restrictions or requirements on other federates\n- Federates should maintain control and autonomy\n- The design should be layered and modular to be adaptable to a wide variety of circumstances\n- Centralized control should be minimized\n\nThese design priorities directed much of the design of HELICS and supporting tools, including operation as a library vs a run time interface that requires simulations be loaded as modules into HELICS, the use of distributed timing and control, and giving federates fine-grained control over time management and operations, independent of the operations of other federates. These core philosophies support an underlying belief driving co-simulation that ""Simulations are Better Together"".\n\n## Getting Started\n\nA [User Guide](https://docs.helics.org/en/latest/) is available with some tutorial examples. We suggest starting there if you are looking for more information on HELICS; whether you are just getting started or learning about more advanced features, the documentation should have something for everyone (please let us know if it doesn\'t via [Gitter](https://gitter.im/GMLC-TDC/HELICS) or by [creating an issue on GitHub](https://github.com/GMLC-TDC/HELICS/issues/new/choose)).\n\nThe [Orientation](https://docs.helics.org/en/latest/user-guide/orientation.html) goes through a series of examples that step through the basic usage and concepts of HELICS.\n\nYou can also [Try HELICS online](https://mybinder.org/v2/gh/kdheepak/openmod-2019-helics-tutorial/master?urlpath=lab/tree/notebooks/cosimulation-introduction.ipynb) without having to install any software.\n\nEarlier we also created a series of roughly 10-minute mini-tutorial videos that discuss various design topics, concepts, and interfaces, including how to use the tool. They can be found on our [YouTube channel](https://www.youtube.com/channel/UCPa81c4BVXEYXt2EShTzbcg). These videos do not reflect recent HELICS advances but do introduce some basic concepts.\n\nSeveral examples of HELICS federates and projects are located in HELICS-Examples with corresponding documentation in the [User Guide](https://docs.helics.org/en/latest/user-guide/examples/examples_index.html). 
This repo provides a number of examples using the different libraries and interfaces, including those used in the user guide.\n\nThe [HELICS-Tutorial repository](https://github.com/GMLC-TDC/HELICS-Tutorial) provides a series of tutorials using HELICS to build a co-simulation using domain-specific external modeling tools that is built around an electric power system use case with integrated transmission-distribution-market-communication quasi-steady-state-timeseries (QSTS) simulation.\n\nThe [HELICS-Use-Cases repository](https://github.com/GMLC-TDC/HELICS-Use-Cases) includes examples for a growing range of research use cases for inspiration.\n\nA [Tutorial](https://github.com/GMLC-TDC/pesgm-2019-helics-tutorial) was prepared for the IEEE PES General meeting in Atlanta. The example materials are available on Binder.\n\nThe HELICS team holds office hours [every other Thursday](https://helics.org/HELICSOfficeHours.ics); bring your questions and get help from the development team.\n\n### Language Bindings\n\nHELICS provides a rich set of APIs for other languages including [Python](https://github.com/GMLC-TDC/pyhelics), C, Java, Octave, [Julia](https://github.com/GMLC-TDC/HELICS.jl), and [Matlab](https://github.com/GMLC-TDC/matHELICS). [Nim](https://github.com/GMLC-TDC/helics.nim) and C# APIs are available on an experimental basis, and with an active open-source community, the set of supported languages is growing all the time. See [Language bindings](https://docs.helics.org/en/latest/user-guide/installation/language.html) for additional details; a minimal Python sketch appears below.\n\n## Documentation\n\nOur [ReadTheDocs](https://docs.helics.org/en/latest/) site provides a set of documentation including a set of introductory [examples](https://docs.helics.org/en/latest/user-guide/examples/examples_index.html), a [developers guide](https://docs.helics.org/en/latest/developer-guide/index.html), complete Doxygen-generated [API documentation](https://helics.readthedocs.io/en/latest/doxygen/annotated.html), and [API references for the supported languages](https://docs.helics.org/en/latest/references/api-reference/index.html#c-api-doxygen). 
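\n\nAs a small, concrete taste of those APIs, the sketch below publishes a single value from one federate using the Python binding (`pip install helics`). It is a sketch only: the core type and init string are assumptions, and a broker (e.g. `helics_broker -f 1`) must already be running for it to connect.\n\n```python\n# Minimal HELICS value-federate sketch (Python binding); illustrative only.\nimport helics as h\n\nfedinfo = h.helicsCreateFederateInfo()\nh.helicsFederateInfoSetCoreTypeFromString(fedinfo, ""zmq"")\nh.helicsFederateInfoSetCoreInitString(fedinfo, ""--federates=1"")\nfed = h.helicsCreateValueFederate(""demo_fed"", fedinfo)\n\n# Register a publication, enter executing mode, publish at t=1.0, clean up.\npub = h.helicsFederateRegisterGlobalTypePublication(fed, ""demo/value"", ""double"", """")\nh.helicsFederateEnterExecutingMode(fed)\ngranted = h.helicsFederateRequestTime(fed, 1.0)\nh.helicsPublicationPublishDouble(pub, 3.14)\nh.helicsFederateFinalize(fed)\nh.helicsCloseLibrary()\n```\n\nIn a real federation a second federate would subscribe to ""demo/value"" and advance time in step with this one.\n\n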
A few more questions and answers are available on the [Wiki](https://github.com/GMLC-TDC/HELICS/wiki).\n\n[Installation Guide](https://docs.helics.org/en/latest/user-guide/installation/index.html)\n\n### Documentation downloads\n\n- [PDF](https://docs.helics.org/_/downloads/en/latest/pdf/)\n- [HTML Zip file](https://docs.helics.org/_/downloads/en/latest/htmlzip/)\n- [EPUB](https://docs.helics.org/_/downloads/en/latest/epub/)\n\nAdditionally, our initial design requirements document can be found [here](docs/introduction/original_specification.md), which describes a number of our early design considerations and some directions that might be possible in the future.\n\n### [CHANGELOG](CHANGELOG.md)\n\nA history of changes to HELICS\n\n### [ROADMAP](docs/ROADMAP.md)\n\nA snapshot of some current plans for what is to come.\n\n### [Installation](https://docs.helics.org/en/latest/user-guide/installation/index.html)\n\nA guide to installing HELICS on different platforms\n\n### Quick links\n\n- [configuration option reference](docs/references/configuration_options_reference.md)\n- [Queries](docs/user-guide/advanced_topics/queries.md)\n- [Environment variables](docs/user-guide/advanced_topics/environment_variables.md)\n- [C function reference](https://docs.helics.org/en/latest/doxygen/C_api_index.html)\n- [CMake Variables](docs/user-guide/installation/helics_cmake_options.md)\n- [HELICS Apps](docs/references/apps/index.md)\n\n### Docker\n\nSome of the HELICS apps are available from [docker](https://cloud.docker.com/u/helics/repository/docker/helics/helics). This image does not include any libraries for linking, just the executables: `helics_broker`, `helics_app`, `helics_recorder`, `helics_player`, and `helics_broker_server`. Other images are expected to be available in the future. See [Docker](https://docs.helics.org/en/latest/user-guide/installation/docker.html) for a few more details.\n\n## Tools with HELICS support\n\nAs a co-simulation framework, HELICS is designed to bring together domain-specific modeling tools so they interact during run time. It effectively tries to build on the shoulders of giants by not reinventing trusted simulation tools, but instead, merely acting as a mediator to coordinate such interactions. HELICS\'s full power is only apparent when you use it to combine these domain-specific tools.\n\nThankfully the HELICS API is designed to be minimally invasive and make it straightforward to connect almost any tool that provides either a scripting interface or access to source code. As listed on [Tools using HELICS](docs/references/Tools_using_helics.md), a growing set of energy domain tools have HELICS support either natively or through an external interface. We also provide a set of helper apps for various utility and testing purposes.\n\nWe are always looking for help adding support for more tools, so please contact us if you have any additions.\n\n[Supported Tools](docs/references/Tools_using_HELICS.md)\n\n### HELICS helper Apps\n\n- [HELICS CLI](https://github.com/GMLC-TDC/helics-cli) provides a simple way to automate configuring, starting, and stopping HELICS co-simulations. This helps in overcoming the challenges associated with successfully sequencing and starting simulations of all sizes and is particularly helpful for larger simulations.\n- [Broker](./docs/references/apps/Broker.md), which is a command line tool for running a Broker, the core hub in HELICS for data exchange. One or more brokers are what tie the simulation tools together in a HELICS federation. 
There is also a [Broker Server](https://helics.readthedocs.io/en/latest/user-guide/simultaneous_cosimulation) which can generate brokers as needed, and can include a REST API.\n- [Player](./docs/references/apps/Player.md), which acts as a send-only federate that publishes a stream of timed HELICS messages from a user-defined file. This can be very useful when testing a federate in isolation by mimicking the data that will eventually come from other sources, and in assembling or debugging federations to stand in for any federates which might not be quite ready or that take a long time to run. The Player can also readily play back the files created by the HELICS Recorder (see below). HELICS Player is included in the HELICS distribution.\n- [Recorder](./docs/references/apps/Recorder.md), which acts as a receive-only federate that prints out or saves messages from one or more subscribed streams. This makes it easy to monitor some or all of the data exchanged via HELICS and can also be part of debugging and modular workflows. For example, it can record the data exchanged during a (partly?) successful run to play back (see Player above) to other federates without having to launch those parts again or to isolate/test changes to a subset of a federation. HELICS Recorder is included in the HELICS distribution.\n- [App](./docs/references/apps/App.md) is a general app executable which can run a number of other apps including Player and Recorder, as well as a [Tracer](./docs/references/apps/Tracer.md), [Echo](./docs/references/apps/Echo.md), [Source](./docs/references/apps/Source.md), and [Clone](./docs/references/apps/Clone.md).\n\n## Contributing\n\nContributors are welcome; see the [Contributing](CONTRIBUTING.md) guidelines for more details on the process of contributing. See the [Code of Conduct](.github/CODE_OF_CONDUCT.md) for guidelines on the community expectations. All prior contributors can be found [here](CONTRIBUTORS.md) along with a listing of included and optional components to HELICS.\n\n## Build Status\n\n
\n[Build status table: Azure, Circle-CI, and Docs results for the main and develop branches.]\n
\n\n## Publications\n\nIf you use HELICS in your research, please cite:\n\n\\[1\\] B. Palmintier, D. Krishnamurthy, P. Top, S. Smith, J. Daily, and J. Fuller, \xe2\x80\x9cDesign of the HELICS High-Performance Transmission-Distribution-Communication-Market Co-Simulation Framework,\xe2\x80\x9d in _Proc. of the 2017 Workshop on Modeling and Simulation of Cyber-Physical Energy Systems_, Pittsburgh, PA, 2017. [pre-print](https://www.nrel.gov/docs/fy17osti/67928.pdf) | [published](https://ieeexplore.ieee.org/document/8064542/)\n\n## In the News\n\nHELICS was selected as an [R&D 100 Award Finalist](https://www.rdworldonline.com/finalists-announced-for-2019-rd-100-awards/).\n\n## History and Motivation\n\n**Brief History:** HELICS began as the core software development of the Grid Modernization Laboratory Consortium ([GMLC](https://gridmod.labworks.org/)) project on integrated Transmission-Distribution-Communication simulation (TDC, GMLC project 1.4.15) supported by the U.S. Department of Energy\'s Offices of Electricity ([OE](https://www.energy.gov/oe/office-electricity-delivery-and-energy-reliability)) and Energy Efficiency and Renewable Energy ([EERE](https://www.energy.gov/eere/office-energy-efficiency-renewable-energy)). As such, its first use cases were around modern electric power systems, though today it is used for a much larger range of applications. HELICS\'s layered, high-performance, co-simulation framework builds on the collective experience of multiple national labs.\n\n**Motivation:** Energy systems and their associated information and communication technology systems are becoming increasingly intertwined. As a result, effectively designing, analyzing, and implementing modern energy systems increasingly relies on advanced modeling that simultaneously captures both the cyber and physical domains in combined simulations.\n\n## Source Repo\n\nThe HELICS source code is hosted on GitHub: [https://github.com/GMLC-TDC/HELICS](https://github.com/GMLC-TDC/HELICS)\n\n## Release\n\nHELICS is distributed under the terms of the BSD-3 clause license. All new\ncontributions must be made under this license. [LICENSE](LICENSE)\n\nSPDX-License-Identifier: BSD-3-Clause\n\nportions of the code written by LLNL with release number\nLLNL-CODE-739319\n'",,"2017/06/01, 17:03:19",2337,BSD-3-Clause,36,3417,"2023/10/25, 16:48:26",82,1900,2452,92,0,9,2.5,0.5679245283018868,"2023/01/20, 21:48:22",v3.4.0,1,32,false,,true,true,,,https://github.com/GMLC-TDC,https://helics.org,,,,https://avatars.githubusercontent.com/u/22627455?v=4,,, oemof-solph,A model generator for energy system modeling and optimization.,oemof,https://github.com/oemof/oemof-solph.git,github,"energy,energy-system,modelling,modelling-framework",Energy Modeling and Optimization,"2023/10/25, 11:30:50",252,0,33,true,Python,oemof community,oemof,"Python,Batchfile",https://oemof.org,"b'\n|tox-pytest| |tox-checks| |appveyor| |coveralls| |codecov|\n\n|scrutinizer| |codacy| |codeclimate|\n\n|wheel| |packaging| |supported-versions|\n\n|docs| |zenodo|\n\n|version| |commits-since| |chat|\n\n\n------------------------------\n\n.. |tox-pytest| image:: https://github.com/oemof/oemof-solph/workflows/tox%20pytests/badge.svg?branch=dev\n :target: https://github.com/oemof/oemof-solph/actions?query=workflow%3A%22tox+checks%22\n\n.. |tox-checks| image:: https://github.com/oemof/oemof-solph/workflows/tox%20checks/badge.svg?branch=dev\n :target: https://github.com/oemof/oemof-solph/actions?query=workflow%3A%22tox+checks%22\n\n.. 
|packaging| image:: https://github.com/oemof/oemof-solph/workflows/packaging/badge.svg?branch=dev\n :target: https://github.com/oemof/oemof-solph/actions?query=workflow%3Apackaging\n\n.. |docs| image:: https://readthedocs.org/projects/oemof-solph/badge/?style=flat\n :target: https://readthedocs.org/projects/oemof-solph\n :alt: Documentation Status\n\n.. |appveyor| image:: https://ci.appveyor.com/api/projects/status/github/oemof/oemof-solph?branch=dev&svg=true\n :alt: AppVeyor Build Status\n :target: https://ci.appveyor.com/project/oemof-developer/oemof-solph\n\n.. |coveralls| image:: https://coveralls.io/repos/oemof/oemof-solph/badge.svg?branch=dev&service=github\n :alt: Coverage Status\n :target: https://coveralls.io/github/oemof/oemof-solph\n\n.. |codecov| image:: https://codecov.io/gh/oemof/oemof-solph/branch/dev/graphs/badge.svg?branch=dev\n :alt: Coverage Status\n :target: https://codecov.io/gh/oemof/oemof-solph\n\n.. |codacy| image:: https://api.codacy.com/project/badge/Grade/a6e5cb2dd2694c73895e142e4cf680d5\n :target: https://app.codacy.com/gh/oemof/oemof-solph/dashboard\n :alt: Codacy Code Quality Status\n\n.. |codeclimate| image:: https://codeclimate.com/github/oemof/oemof-solph/badges/gpa.svg\n :target: https://codeclimate.com/github/oemof/oemof-solph\n :alt: CodeClimate Quality Status\n\n.. |version| image:: https://img.shields.io/pypi/v/oemof.solph.svg\n :alt: PyPI Package latest release\n :target: https://pypi.org/project/oemof.solph\n\n.. |wheel| image:: https://img.shields.io/pypi/wheel/oemof.solph.svg\n :alt: PyPI Wheel\n :target: https://pypi.org/project/oemof.solph\n\n.. |supported-versions| image:: https://img.shields.io/pypi/pyversions/oemof.solph.svg\n :alt: Supported versions\n :target: https://pypi.org/project/oemof.solph\n\n.. |supported-implementations| image:: https://img.shields.io/pypi/implementation/oemof.solph.svg\n :alt: Supported implementations\n :target: https://pypi.org/project/oemof.solph\n\n.. |commits-since| image:: https://img.shields.io/github/commits-since/oemof/oemof-solph/v0.5.1/dev\n :alt: Commits since latest release\n :target: https://github.com/oemof/oemof-solph/compare/v0.5.1...dev\n\n.. |zenodo| image:: https://zenodo.org/badge/DOI/10.5281/zenodo.596235.svg\n :alt: Zenodo DOI\n :target: https://doi.org/10.5281/zenodo.596235\n\n.. |scrutinizer| image:: https://img.shields.io/scrutinizer/quality/g/oemof/oemof-solph/dev.svg\n :alt: Scrutinizer Status\n :target: https://scrutinizer-ci.com/g/oemof/oemof-solph/\n\n.. |chat| image:: https://img.shields.io/badge/chat-oemof:matrix.org-%238ADCF7\n :alt: matrix-chat\n :target: https://matrix.to/#/#oemof:matrix.org\n\n\n.. figure:: https://raw.githubusercontent.com/oemof/oemof-solph/492e3f5a0dda7065be30d33a37b0625027847518/docs/_logo/logo_oemof_solph_FULL.svg\n :align: center\n\n------------------------------\n\n===========\noemof.solph\n===========\n\n**A model generator for energy system modelling and optimisation (LP/MILP)**\n\n.. 
contents::\n :depth: 2\n :local:\n :backlinks: top\n\n\nIntroduction\n============\n\nThe oemof.solph package is part of the\n`Open energy modelling framework (oemof) `_.\nThis is an organisational framework to bundle tools for energy (system) modelling.\noemof-solph is a model generator for energy system modelling and optimisation.\n\nThe package ``oemof.solph`` is very often called just ``oemof``.\nThis is because installing the ``oemof`` meta package was once the best way to get ``oemof.solph``.\nNotice that you should preferably install ``oemof.solph`` instead of ``oemof``\nif you want to use ``solph``.\n\n\nEverybody is welcome to use and/or develop oemof.solph.\nRead our `contribution `_ section.\n\nContribution is already possible on a low level by simply fixing typos in\noemof\'s documentation or rephrasing sections which are unclear.\nIf you want to support us that way please fork the oemof-solph repository to your own\nGitHub account and make changes as described in the `github guidelines `_.\n\nIf you have questions regarding the use of oemof including oemof.solph you can visit the openmod forum (`tag oemof `_ or `tag oemof-solph `_) and open a new thread if your question hasn\'t already been answered.\n\nKeep in touch! You can become a watcher at our `github site `_,\nbut this will bring you quite a few mails and might be more interesting for developers.\nIf you just want to get the latest news, like when the next oemof meeting takes place,\nyou can follow our news-blog at `oemof.org `_.\n\nDocumentation\n=============\nThe `oemof.solph documentation `_ is powered by readthedocs. Use the `project site `_ of oemof.solph to choose the version of the documentation. Go to the `download page `_ to download different versions and formats (pdf, html, epub) of the documentation.\n\n\n.. _installation_label:\n\nInstallation\n============\n\n\nIf you have a working Python installation, use pypi to install the latest version of oemof.solph.\nPython >= 3.8 is recommended. Lower versions may work but are not tested.\n\nWe highly recommend using virtual environments.\nPlease refer to the documentation of your Python distribution (e.g. Anaconda,\nMicromamba, or the version of Python that came with your Linux installation)\nto learn how to set up and use virtual environments.\n\n::\n\n (venv) pip install oemof.solph\n\nIf you want to use the latest features, you might want to install the **developer version**. The developer version is not recommended for productive use::\n\n (venv) pip install https://github.com/oemof/oemof-solph/archive/dev.zip\n\n\nFor running an oemof-solph optimisation model, you need to install a solver.\nBelow you will find guidelines for the installation process for different operating systems.\n\n.. _windows_solver_label:\n.. _linux_solver_label:\n\nInstalling a solver\n-------------------\n\nThere are several solvers that can work with oemof, both open source and commercial.\nTwo open source solvers are widely used (CBC and GLPK), but oemof suggests CBC (Coin-or branch and cut).\nIt may be useful to compare results of different solvers to see which performs best.\nCommercial solvers, like Gurobi or Cplex, are also options.\nHave a look at the `pyomo docs `_ to learn about which solvers are supported.\n\nCheck the solver installation by executing the test_installation example below (see section Installation Test).\n\n**Linux**\n\nTo install the solvers have a look at the package repository of your Linux distribution or search for precompiled packages. 
**Windows**\n\n 1. Download `CBC `_\n 2. Download `GLPK (64/32 bit) `_\n 3. Unpack CBC/GLPK to any folder (e.g. C:/Users/Somebody/my_programs)\n 4. Add the path of the executable files of both solvers to the PATH variable using `this tutorial `_\n 5. Restart Windows\n\nCheck the solver installation by executing the test_installation example (see the `Installation test` section).\n\n\n**macOS**\n\nPlease follow the installation instructions on the respective homepages for details.\n\nCBC-solver: https://projects.coin-or.org/Cbc\n\nGLPK-solver: http://arnab-deka.com/posts/2010/02/installing-glpk-on-a-mac/\n\nIf you install the CBC solver via brew (highly recommended), it should work without additional configuration.\n\n\n**conda**\n\nProvided you are using Linux or macOS, the CBC-solver can also be installed in a `conda` environment. Please note that it is highly recommended to `use pip after conda `_, so:\n\n.. code:: console\n\n (venv) conda install -c conda-forge coincbc\n (venv) pip install oemof.solph\n\n\n.. _check_installation_label:\n\nInstallation test\n-----------------\n\nTest the installation and the installed solver by running the installation test\nin your virtual environment:\n\n.. code:: console\n\n (venv) oemof_installation_test\n\nIf the installation was successful, you will receive an output like this:\n\n.. code:: console\n\n *********\n Solver installed with oemof:\n glpk: working\n cplex: not working\n cbc: working\n gurobi: not working\n *********\n oemof.solph successfully installed.\n\nContributing\n============\n\nA warm welcome to all who want to join the developers and contribute to\noemof.solph.\n\nInformation on the details and how to approach us can be found\n`in the oemof documentation `_.\n\nCiting\n======\n\nFor explicitly citing solph, you might want to refer to\n`DOI:10.1016/j.simpa.2020.100028 `_,\nwhich gives an overview of the capabilities of solph.\nThe core ideas of oemof as a whole are described in\n`DOI:10.1016/j.esr.2018.07.001 `_\n(preprint at `arXiv:1808.08070 `_).\nTo allow citing specific versions, we use the Zenodo project to get a DOI for each version.\n\n\n.. _solph_examples_label:\n\nExamples\n========\n\nThe combination of specific modules (often including other packages) is called an\napplication (app). 
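An application can, for example, depict a concrete energy system model. As a purely illustrative sketch (not taken from this README; it assumes oemof.solph >= 0.5 and an installed CBC solver), a minimal application that dispatches a costly grid source against a fixed demand might look like this:\n\n.. code:: python\n\n from oemof import solph\n\n # three-hour toy system: one bus, a grid source, a fixed demand\n index = solph.create_time_index(2023, number=3)\n energy_system = solph.EnergySystem(timeindex=index)\n\n el_bus = solph.buses.Bus(label=""electricity"")\n grid = solph.components.Source(\n     label=""grid"", outputs={el_bus: solph.flows.Flow(variable_costs=30)}\n )\n demand = solph.components.Sink(\n     label=""demand"",\n     inputs={el_bus: solph.flows.Flow(fix=[1, 2, 3], nominal_value=1)},\n )\n energy_system.add(el_bus, grid, demand)\n\n model = solph.Model(energy_system)\n model.solve(solver=""cbc"")\n results = solph.processing.results(model)\n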
You can find a large variety of helpful examples in the documentation.\nThe examples show the optimisation of different energy systems and are intended\nto help new users understand the framework\'s structure.\nThere is some elaboration on the examples in the respective repository.\nThe repository has sections for each major release.\n\nYou are welcome to contribute your own examples via a `pull request `_\nor by e-mailing us (see `here `_ for contact information).\n\nLicense\n=======\n\nCopyright (c) 2023 oemof developer group\n\nPermission is hereby granted, free of charge, to any person obtaining a copy\nof this software and associated documentation files (the ""Software""), to deal\nin the Software without restriction, including without limitation the rights\nto use, copy, modify, merge, publish, distribute, sublicense, and/or sell\ncopies of the Software, and to permit persons to whom the Software is\nfurnished to do so, subject to the following conditions:\n\nThe above copyright notice and this permission notice shall be included in all\ncopies or substantial portions of the Software.\n\nTHE SOFTWARE IS PROVIDED ""AS IS"", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\nIMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\nFITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE\nAUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\nLIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\nOUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE\nSOFTWARE.\n'",",https://doi.org/10.5281/zenodo.596235\n\n,https://doi.org/10.1016/j.simpa.2020.100028,https://doi.org/10.1016/j.esr.2018.07.001,https://arxiv.org/abs/1808.08070v1","2015/11/24, 13:38:22",2892,MIT,678,6847,"2023/10/23, 07:42:12",98,455,904,145,2,21,3.2,0.6477536793183578,"2023/08/31, 10:23:31",v0.5.1,0,43,false,,true,true,,,https://github.com/oemof,https://oemof.org,Germany,,,https://avatars.githubusercontent.com/u/8503379?v=4,,, oemof-thermal,"Provides tools to model thermal energy components as an extension of oemof.solph, e.g. compression heat pumps, concentrating solar plants, thermal storage and solar thermal collectors.",oemof,https://github.com/oemof/oemof-thermal.git,github,,Energy Modeling and Optimization,"2023/06/12, 17:37:43",25,5,5,true,Python,oemof community,oemof,Python,https://oemof.org,"b'|badge_pypi| |badge_travis| |badge_docs| |badge_coverage| |link-latest-doi|\n\n#############\noemof.thermal\n#############\n\nThis package provides tools to model thermal energy components as an extension of\noemof.solph, e.g. compression heat pumps, concentrating solar plants, thermal\nstorages and solar thermal collectors.\n\n.. contents::\n\nAbout\n=====\n\nThe aim of oemof.thermal is to create a toolbox for building models of\nthermal energy systems. Modeling thermal energy systems requires specific preprocessing\nand postprocessing steps whose detail exceeds the generic formulation of components in\noemof.solph. Currently, most of the functions collected here are intended to be used\ntogether with oemof.solph. However, in some instances they may be useful independently\nof oemof.solph.\n\noemof.thermal is under active development. Contributions are welcome.\n\nQuickstart\n==========\n\nInstall oemof.thermal by running\n\n.. code:: bash\n\n pip install oemof.thermal\n\nin your virtualenv. In your code, you can then import individual modules, e.g.:\n\n.. 
code:: python\n\n from oemof.thermal import concentrating_solar_power\n\nAlso, have a look at the\n`examples `_.\n\nDocumentation\n=============\n\nFind the documentation at ``_.\n\nContributing\n============\n\nEverybody is welcome to contribute to the development of oemof.thermal. Find here the `developer\nguidelines of oemof `_.\n\nLicense\n=======\n\nMIT License\n\nCopyright (c) 2019 oemof developing group\n\nPermission is hereby granted, free of charge, to any person obtaining a copy\nof this software and associated documentation files (the ""Software""), to deal\nin the Software without restriction, including without limitation the rights\nto use, copy, modify, merge, publish, distribute, sublicense, and/or sell\ncopies of the Software, and to permit persons to whom the Software is\nfurnished to do so, subject to the following conditions:\n\nThe above copyright notice and this permission notice shall be included in all\ncopies or substantial portions of the Software.\n\nTHE SOFTWARE IS PROVIDED ""AS IS"", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\nIMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\nFITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE\nAUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\nLIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\nOUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE\nSOFTWARE.\n\n\n.. |badge_pypi| image:: https://badge.fury.io/py/oemof.thermal.svg\n :target: https://badge.fury.io/py/oemof.thermal\n :alt: PyPI version\n\n.. |badge_docs| image:: https://readthedocs.org/projects/oemof-thermal/badge/?version=stable\n :target: https://oemof-thermal.readthedocs.io/en/stable/\n :alt: Documentation status\n\n.. |badge_coverage| image:: https://coveralls.io/repos/github/oemof/oemof-thermal/badge.svg?branch=dev&service=github\n :target: https://coveralls.io/github/oemof/oemof-thermal?branch=dev\n :alt: Test coverage\n\n.. |badge_travis| image:: https://travis-ci.org/oemof/oemof.svg?branch=dev\n :target: https://travis-ci.org/oemof/oemof-thermal\n :alt: Build status\n\n.. 
|link-latest-doi| image:: https://zenodo.org/badge/DOI/10.5281/zenodo.3606385.svg\n :target: https://doi.org/10.5281/zenodo.3606385\n :alt: Zenodo DOI\n'",",https://doi.org/10.5281/zenodo.3606385\n","2016/01/11, 10:29:40",2844,MIT,20,1188,"2023/06/12, 17:31:23",20,101,166,4,135,3,0.5,0.7097186700767264,"2023/02/20, 20:46:20",v0.0.6.dev2,0,12,false,,false,false,"GregorBecker/SESMG,in-RET/in.RET-EnSys-open-plan-GUI,mtress/mtress,moritz-reuter/ESEM-EE,SESMG/SESMG",,https://github.com/oemof,https://oemof.org,Germany,,,https://avatars.githubusercontent.com/u/8503379?v=4,,, dpsim,A real-time power system simulator that operates in the dynamic phasor as well as electromagnetic transient domain.,acs/public/simulation/dpsim,,custom,,Energy Modeling and Optimization,,,,,,,,,,https://git.rwth-aachen.de/acs/public/simulation/dpsim/dpsim,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,, VILLASnode,Connecting real-time power grid simulation equipment.,acs/public/villas,,custom,,Energy Modeling and Optimization,,,,,,,,,,https://git.rwth-aachen.de/acs/public/villas/node,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,, openENTRANCE,"The Horizon 2020 openENTRANCE project aims at developing, using and disseminating an open, transparent and integrated modeling platform for assessing low-carbon transition pathways in Europe.",openENTRANCE,https://github.com/openENTRANCE/openentrance.git,github,,Energy Modeling and Optimization,"2023/10/02, 16:05:04",28,0,3,true,Python,H2020 openENTRANCE,openENTRANCE,"Python,Jupyter Notebook",http://openentrance.eu,"b'# Project definitions for the openENTRANCE project\n\nCopyright 2020-2023 openENTRANCE consortium\n\nThis repository is licensed under the Apache License, Version 2.0 (the ""License""); see\nthe [LICENSE](LICENSE) for details.\n\n[![license](https://img.shields.io/badge/License-Apache%202.0-black)](https://github.com/openENTRANCE/openentrance/blob/main/LICENSE)\n[![python](https://img.shields.io/badge/python-3.7_|_3.8_|_3.9-blue?logo=python&logoColor=white)](https://github.com/openENTRANCE/openentrance)\n[![Code style:\nblack](https://img.shields.io/badge/code%20style-black-000000.svg)](https://github.com/psf/black)\n\n## Aim and scope of this repository\n\n\n\nThe [Horizon 2020 openENTRANCE project](https://openentrance.eu) aims at\ndeveloping, using and disseminating an open, transparent and integrated\nmodelling platform for assessing low-carbon transition pathways in Europe.\nA key requirement for an effective linking of models and consistent analysis is\na common ""nomenclature"", i.e., shared lists of variables, regions and units\nused across the entire project.\n\nThis repository makes available the nomenclature used within the consortium and\nserves as a discussion platform for extending the lists of terms.\n\n*We invite other modelling teams to contribute and join the discussion,\nhoping to facilitate increased cooperation across research projects\non (European) energy and climate policy!*\n\n## How to work with this repository\n\nThere are several ways to interact with the nomenclature and definitions\nprovided in this repository. 
The simplest approach is to just read the `yaml`\nfiles on GitHub - see the links [below](#Timeseries-data-dimensions).\n\nThe repository is structured so that it can be parsed by the\nPython package **nomenclature** for scenario ensemble validation and processing.\nRead more on [GitHub](https://github.com/iamconsortium/nomenclature)!\n\n### An installable Python package\n\n\n\nTo facilitate using the definitions in data processing workflows and scripts,\nthere is an installable Python package with several utility\nfunctions and dictionaries. [More information](openentrance)\n\n## Data format structure\n\nThe openENTRANCE project uses a **common data format** based on a template\ndeveloped by the [Integrated Assessment Modeling Consortium (IAMC)](https://www.iamconsortium.org/)\nand already in use in many model comparison projects at the global and national\nlevel. While the IAMC comprises (mostly) integrated-assessment teams, the data\nformat is generic and can be used for a wide range of applications, including\nenergy-systems analysis or modelling of specific sectors like transport,\nindustry or the building stock.\n\n### Timeseries data dimensions\n\nIn the data format, every timeseries is described by six dimensions (codes):\n\n1.\tModel - [more information](definitions/model)\n2.\tScenario - [more information](definitions/scenario)\n3.\tRegion - [more information](definitions/region)\n4.\tVariable - [more information](definitions/variable)\n5.\tUnit - see the section on [variables](definitions/variable)\n for details\n6.\tSubannual (optional, default \'Year\')[1] -\n [more information](definitions/subannual)\n\nIn addition to these six dimensions, every timeseries is described by\na set of **year-value** pairs.\n\nThe resulting table can be either shown as\n- **wide format** (see example below, with *years* as columns), or\n- **long format** (two columns *year* and *value*).\n\n| **model** | **scenario** | **region** | **variable** | **unit** | **subannual** | **2015** | **2020** | **2025** |\n|-------------|---------------------|------------|----------------|----------|---------------|---------:|---------:|---------:|\n| GENeSYS-MOD | Societal Commitment | Europe | Primary Energy | EJ/y | Year | 69.9 | 65.7 | ... |\n| ... | ... | ... | ... | ... | ... | ... | ... | ... |\n\nData via the [IAMC 1.5\xc2\xb0C scenario explorer](https://data.ene.iiasa.ac.at/iamc-1.5c-explorer),\n showing a scenario from the [CD-LINKS](https://www.cd-links.org) project.\n\n[1] *The index \'Subannual\' is an extension of the original format introduced by\nthe openENTRANCE project to accommodate data at a subannual temporal resolution.*\n\n### Recommended usage of this data format\n\n\n\nThe Python package **pyam** was developed to facilitate working with timeseries\ndata conforming to this structure. Features include validation of values,\naggregation and downscaling of data, and import/export with various file formats\n(`xlsx`, `csv`, ...) and table layouts (wide vs. long data).\n\n[Read the docs](https://pyam-iamc.readthedocs.io) for more information!\n
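As a minimal illustration of the two layouts, the following sketch builds the example rows from the table above with **pyam** (the numbers are illustrative only; nothing here is prescribed by this repository):\n\n```python\nimport pandas as pd\nimport pyam\n\n# one timeseries in long format: the identifying dimensions plus year/value pairs\ndf = pyam.IamDataFrame(pd.DataFrame(\n    [\n        [""GENeSYS-MOD"", ""Societal Commitment"", ""Europe"", ""Primary Energy"", ""EJ/y"", 2015, 69.9],\n        [""GENeSYS-MOD"", ""Societal Commitment"", ""Europe"", ""Primary Energy"", ""EJ/y"", 2020, 65.7],\n    ],\n    columns=[""model"", ""scenario"", ""region"", ""variable"", ""unit"", ""year"", ""value""],\n))\n\nprint(df.timeseries())  # wide format: one column per year\n```\n\n## Funding acknowledgement\n\n\nThis project has received funding from the European Union\xe2\x80\x99s Horizon 2020 research\nand innovation programme under grant agreement No. 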
835896.\n'",,"2020/03/13, 14:21:37",1321,Apache-2.0,66,236,"2023/10/24, 05:01:16",34,245,279,81,1,7,1.2,0.5809128630705394,"2021/11/18, 09:08:34",v0.2,0,33,false,,false,false,,,https://github.com/openENTRANCE,https://openentrance.eu,,,,https://avatars.githubusercontent.com/u/60662804?v=4,,, Joulia.jl,A Large-Scale Spatial Power System Model for Julia.,JuliaEnergy,https://github.com/JuliaEnergy/Joulia.jl.git,github,,Energy Modeling and Optimization,"2019/03/28, 17:01:01",30,0,2,false,Julia,JuliaEnergy,JuliaEnergy,Julia,,"b'# Joulia.jl\n\nJoulia.jl: A Large-Scale Spatial Power System Model for Julia\n\n[Weibezahn, Jens; Kendziorski, Mario. 2019. ""Illustrating the Benefits of Openness: A Large-Scale Spatial Economic Dispatch Model Using the Julia Language."" Energies 12, no. 6: 1153.](https://doi.org/10.3390/en12061153)\n\nSee [JouliaExamples](https://github.com/JuliaEnergy/JouliaExamples) for an example instance.\n'",",https://doi.org/10.3390/en12061153","2019/03/24, 17:08:17",1676,MIT,0,2,"2023/10/24, 05:01:16",2,0,0,0,1,1,0,0.0,"2019/03/28, 17:07:39",v0.1.0,0,1,false,,false,false,,,https://github.com/JuliaEnergy,,,,,https://avatars.githubusercontent.com/u/42609159?v=4,,, The IDAES Toolkit,"Aims to provide multi-scale, simulation-based, open source computational tools and models to support the design, analysis, optimization, scale-up, operation and troubleshooting of innovative, advanced energy systems.",IDAES,https://github.com/IDAES/idaes-pse.git,github,"process-systems-engineering,process-modeling,chemical-engineering",Energy Modeling and Optimization,"2023/10/16, 12:37:18",168,16,44,true,Python,IDAES,IDAES,"Python,PureBasic,Jupyter Notebook,Dockerfile,Shell",https://idaes-pse.readthedocs.io/,"b'# IDAES Toolkit\r\n\r\nThe IDAES Toolkit aims to provide multi-scale, simulation-based, open source\r\ncomputational tools and models to support the design, analysis, optimization,\r\nscale-up, operation and troubleshooting of innovative, advanced energy systems.\r\n\r\n\r\n## Project Build and Download Statuses\r\n[![Tests](https://github.com/IDAES/idaes-pse/actions/workflows/core.yml/badge.svg)](https://github.com/IDAES/idaes-pse/actions/workflows/core.yml)\r\n[![Integration](https://github.com/IDAES/idaes-pse/actions/workflows/integration.yml/badge.svg)](https://github.com/IDAES/idaes-pse/actions/workflows/integration.yml)\r\n[![codecov](https://codecov.io/gh/IDAES/idaes-pse/branch/main/graph/badge.svg?token=1lNQNbSB29)](https://codecov.io/gh/IDAES/idaes-pse)\r\n[![Documentation Status](https://readthedocs.org/projects/idaes-pse/badge/?version=latest)](https://idaes-pse.readthedocs.io/en/latest/?badge=latest)\r\n[![Services](https://github.com/Pyomo/jenkins-status/blob/main/idaes_services.svg)](https://pyomo-jenkins.sandia.gov/)\r\n[![GitHub contributors](https://img.shields.io/github/contributors/IDAES/idaes-pse.svg)](https://github.com/IDAES/idaes-pse/graphs/contributors)\r\n[![Merged PRs](https://img.shields.io/github/issues-pr-closed-raw/IDAES/idaes-pse.svg?label=merged+PRs)](https://github.com/IDAES/idaes-pse/pulls?q=is:pr+is:merged)\r\n[![Issue stats](http://isitmaintained.com/badge/resolution/IDAES/idaes-pse.svg)](http://isitmaintained.com/project/IDAES/idaes-pse)\r\n[![Downloads](https://pepy.tech/badge/idaes-pse)](https://pepy.tech/project/idaes-pse)\r\n\r\n\r\n## Getting Started\r\n\r\nOur [complete documentation is online](https://idaes-pse.readthedocs.io/en/stable/) but here is a summarized set of steps to get started using the framework.\r\n\r\nWhile not required, we encourage 
the installation of [Anaconda](https://www.anaconda.com/products/individual#Downloads) or [Miniconda](https://docs.conda.io/en/latest/miniconda.html) and using the `conda` command to create a separate python environment in which to install the IDAES Toolkit.\r\n\r\nUse conda to create a new ""idaes-pse"" (could be any name you like) environment then activate that environment:\r\n```bash\r\nconda create --name idaes-pse python=3.10\r\nconda activate idaes-pse\r\n```\r\n\r\nNow, in that ""idaes-pse"" environment, install the IDAES Toolkit using either `pip install` or `conda install` (but not both):\r\n\r\n```bash\r\n# install latest stable release\r\npip install idaes-pse\r\n# install latest stable release with one set of optional dependencies, e.g. `ui` for the user interface\r\npip install ""idaes-pse[ui]""\r\n# install latest stable release with multiple sets of optional dependencies\r\npip install ""idaes-pse[ui,dmf,omlt,grid,coolprop]""\r\n# install latest version from the main branch of this repository\r\npip install ""idaes-pse @ git+https://github.com/IDAES/idaes-pse@main""\r\n# install from the `mybranch` branch of the fork belonging to `myuser`\r\npip install ""idaes-pse @ git+https://github.com/myuser/idaes-pse@mybranch""\r\n```\r\n\r\nYou can check the version installed with the command:\r\n\r\n```bash\r\nidaes --version\r\n```\r\n\r\nNow install the pre-built extensions (binary solvers):\r\n\r\n```bash\r\nidaes get-extensions\r\n```\r\n\r\nThe IDAES examples can be installed by running:\r\n\r\n```bash\r\npip install idaes-examples\r\n```\r\n\r\nFor more information, refer to the [IDAES/examples](https://github.com/IDAES/examples) repository, as well as the online static version of the examples available at .\r\n\r\nFinally, refer to the [complete idaes-pse documentation](https://idaes-pse.readthedocs.io/en/latest) for detailed [installation instructions](https://idaes-pse.readthedocs.io/en/latest/tutorials/getting_started/index.html), examples, guides, and reference.\r\n\r\n## System requirements\r\n\r\nThe code and examples have been tested with the following operating systems:\r\n\r\n|Operating system|Supported versions |\r\n|----------------|--------------------|\r\n| Linux | Any modern Linux |\r\n| Windows | Windows 10 |\r\n| macOS | Partly supported* |\r\n\r\n*HSL is not currently provided for macOS on Intel processors, so some features may be limited or not available.\r\n\r\nMost of the functionality is implemented in Python. In accordance with\r\nthe end-of-life for many Python 2 libraries, the IDAES Toolkit is written\r\nfor Python 3. 
The following sub-versions are supported:\r\n\r\n* Python 3.8\r\n* Python 3.9\r\n* Python 3.10\r\n* Python 3.11\r\n\r\nNote that Python 3.6 is *not* supported.\r\n\r\n## Contacts and more information\r\n\r\nGeneral, background and overview information is available at the [IDAES main website](https://www.idaes.org).\r\nFramework development happens at our [GitHub repo](https://github.com/IDAES/idaes-pse) where you can ask questions by starting a [discussion](https://github.com/IDAES/idaes-pse/discussions), [report issues/bugs](https://github.com/IDAES/idaes-pse/issues) or [make contributions](https://github.com/IDAES/idaes-pse/pulls).\r\nFor further enquiries, send an email to: \r\n\r\n## Funding acknowledgements\r\n\r\nThis work was conducted as part of the [Institute for the Design of Advanced Energy Systems (IDAES)](https://idaes.org)\r\nwith support through the [Simulation-Based Engineering, Crosscutting Research Program](https://netl.doe.gov/coal/simulation-based-engineering)\r\nwithin the U.S. Department of Energy\xe2\x80\x99s [Office of Fossil Energy and Carbon Management (FECM)](https://www.energy.gov/fecm/office-fossil-energy-and-carbon-management).\r\nAs of 2021, additional support was provided by FECM\xe2\x80\x99s [Solid Oxide Fuel Cell Program](https://www.energy.gov/fecm/science-innovation/clean-coal-research/solid-oxide-fuel-cells),\r\nand [Transformative Power Generation Program](https://www.energy.gov/fecm/science-innovation/office-clean-coal-and-carbon-management/advanced-energy-systems/transformative).\r\n\r\n## Contributing\r\n\r\nPlease see our [Advanced User Installation](https://idaes-pse.readthedocs.io/en/stable/tutorials/advanced_install/) and [How-to Guides](https://idaes-pse.readthedocs.io/en/stable/how_to_guides/) on how to work with the idaes-pse source code and contribute changes to the project.\r\n\r\n**By contributing to this repository, you are agreeing to all the terms set out in the LICENSE.md and COPYRIGHT.md files in this directory.**\r\n'",,"2019/02/01, 01:12:51",1727,CUSTOM,184,9249,"2023/10/05, 20:06:24",76,788,1145,330,20,7,2.5,0.6456999085086916,"2023/09/23, 00:22:38",2.2.0,1,55,false,,false,false,"IDAES/idaes-ui,IDAES/idaes-compatibility,andrewlee94/idaes-compatibility,MarcusHolly/examples,lbianchi-lbl/examples,tarnold17/examples,dangunter/idaes-examples,IDAES/examples,dangunter/examples-sandbox,armtfgh/automation,doyle-lab-ucla/edboplus,project-pareto/pareto-ui,IDAES/publications,htran7-uw/Covid19_Pipeline,project-pareto/project-pareto,watertap-org/watertap",,https://github.com/IDAES,https://idaes.org,,,,https://avatars.githubusercontent.com/u/16325231?v=4,,, Temoa,Tools for Energy Model Optimization and Analysis (Temoa) is an open source modeling framework for conducting energy system analysis.,TemoaProject,https://github.com/TemoaProject/temoa.git,github,,Energy Modeling and Optimization,"2022/11/06, 14:18:05",74,0,8,true,Python,,,"Python,Jupyter Notebook,Shell",http://temoacloud.com,"b""# Overview\n\nThe 'energysystem' branch is the current master branch of\nTemoa. The five subdirectories are:\n\n1. `temoa_model/`\nContains the core Temoa model code.\n\n2. `data_files/`\nContains simple input data (DAT) files for Temoa. Note that the file \n'utopia-15.dat' represents a simple system called 'Utopia', which \nis packaged with the MARKAL model generator and has been used \nextensively for benchmarking exercises.\n\n3. `data_processing/`\nContains several modules to make output graphs, network diagrams, and \nresults spreadsheets.\n\n4. 
`tools/`\nContains scripts used to conduct sensitivity and uncertainty analysis. \nSee the READMEs inside each subfolder for more information.\n\n5. `docs/`\nContains the source code for the Temoa project manual, in reStructuredText\n(ReST) format.\n\n## Creating a Temoa Environment\n\nTemoa requires several software elements, and it is most convenient to create \na conda environment in which to run the model. To begin, you need to have conda \ninstalled either via miniconda or anaconda. Next, download the environment.yml file, \nand place it in a new directory named 'temoa-py3'. Create this new directory in \na location where you wish to store the environment. From the command line:\n\n```$ conda env create```\n\nThen activate the environment as follows:\n\n```$ source activate temoa-py3```\n\nThis new conda environment contains several elements, including Python 3, a \ncompatible version of Pyomo, matplotlib, numpy, scipy, and two free solvers \n(GLPK and CBC). A note for Windows users: the CBC solver is not available for Windows through conda. Thus, in order to install the environment properly, the last line of the 'environment.yml' file specifying 'coincbc' should be deleted.\n\nTo download the Temoa source code, either clone the repository or download from GitHub \nas a zip file.\n\n## Running Temoa\n\nTo run Temoa, you have a few options. All commands below should be executed from the \ntop-level 'temoa' directory.\n\n**Option 1 (full-featured):**\nInvokes python directly, and gives the user access to \nseveral model features via a configuration file:\n\n```$ python temoa_model/ --config=temoa_model/config_sample```\n\nRunning the model with a config file allows the user to (1) use a sqlite \ndatabase for storing input and output data, (2) create a formatted Excel \noutput file, (3) specify the solver to use, (4) return the log file produced during model execution, (5) return the lp file utilized by the solver, and (6) execute modeling-to-generate alternatives (MGA). Note that if you do not have access to a commercial solver, it may be faster to run cplex on the NEOS server. To do so, simply specify cplex as the solver and uncomment the '--neos' flag.\n\n\n**Option 2 (basic):**\nUses Pyomo's own scripts and provides basic solver output:\n\n```$ pyomo solve --solver=<solver> temoa_model/temoa_model.py path/to/dat/file```\n\nThis option will only work with a text ('DAT') file as input. \nResults are placed in a yml file within the top-level 'temoa' directory.\n\n\n**Option 3 (basic +):**\nCopies the relevant Temoa model files into an executable archive \n(this only needs to be done once):\n\n```$ python create_archive.py```\n\nThis makes the model more portable by placing all contents in a \nsingle zipped file. Now it is possible to execute the model with the \nfollowing simple command:\n\n```$ python temoa.py path/to/dat/file```\n\nFor general help use --help:\n\n```$ python temoa_model/ --help```\n\n\n\n""",,"2015/01/10, 19:22:06",3210,GPL-2.0,1,878,"2023/10/12, 12:32:43",7,39,48,20,13,2,0.2,0.5555555555555556,"2023/03/08, 18:13:15",v2.0,0,11,false,,false,true,,,,,,,,,,, PowerSystemDataModel,Provides an extensive data model capable of modeling energy systems with high granularity e.g. 
for bottom-up simulations.,ie3-institute,https://github.com/ie3-institute/PowerSystemDataModel.git,github,"simulation,datamodel,powersystem",Energy Modeling and Optimization,"2023/10/20, 20:32:47",15,0,1,true,Java,"Institute of Energy Systems, Energy Efficiency and Energy Economics - ie3",ie3-institute,"Java,Groovy",,"b'# PowerSystemDataModel\n[![Build Status](https://simona.ie3.e-technik.tu-dortmund.de/ci/buildStatus/icon?job=ie3-institute%2FPowerSystemDataModel%2Fdev)](https://simona.ie3.e-technik.tu-dortmund.de/ci/job/ie3-institute/job/PowerSystemDataModel/job/dev/)\n[![Quality Gate Status](https://simona.ie3.e-technik.tu-dortmund.de/sonar/api/project_badges/measure?project=edu.ie3%3APowerSystemDataModel&metric=alert_status)](https://simona.ie3.e-technik.tu-dortmund.de/sonar/dashboard?id=edu.ie3%3APowerSystemDataModel)\n[![codecov](https://codecov.io/gh/ie3-institute/PowerSystemDataModel/branch/master/graph/badge.svg)](https://codecov.io/gh/ie3-institute/PowerSystemDataModel)\n[![Codacy Badge](https://api.codacy.com/project/badge/Grade/d1d73fb87e084904993f968178274835)](https://www.codacy.com/gh/ie3-institute/PowerSystemDataModel?utm_source=github.com&utm_medium=referral&utm_content=ie3-institute/PowerSystemDatamodel&utm_campaign=Badge_Grade)\n[![License](https://img.shields.io/github/license/ie3-institute/powersystemdatamodel)](https://github.com/ie3-institute/powersystemdatamodel/blob/master/LICENSE)\n[![Maven Central](https://img.shields.io/maven-central/v/com.github.ie3-institute/PowerSystemDataModel.svg?label=Maven%20Central)](https://search.maven.org/search?q=g:%22com.github.ie3-institute%22%20AND%20a:%22PowerSystemDataModel%22)\n\nProvides an extensive data model capable of modelling energy systems with high granularity, e.g. for bottom-up simulations. Additionally, useful functions to process, augment and furnish model i/o information are provided. Effective handling of geographic information related to power grids is also possible. Currently, i/o processing capabilities are provided for *.csv* files.\n\n**Supported physical models:**\n\n\t- Power Grids containing nodes, lines, switches and transformers\n\t- Conventional and renewable generating components such as fixed feed, biomass plants, wind turbines and photovoltaics\n\t- Power to heat units e.g. 
combined heat and power plants and heat pumps\n\t- Electrical storages, electric vehicles and charging stations\n\t- Thermal units consisting of thermal building and cylindrical storage models\n\n**Supported simulation data:**\n\n\tLoad profiles, weather data etc.\n\nFor more information visit [ReadTheDocs](https://powersystemdatamodel.readthedocs.io/en/latest/) or the [API docs](https://ie3-institute.github.io/PowerSystemDataModel/).\n'",,"2020/01/23, 16:00:59",1371,BSD-3-Clause,678,3392,"2023/10/20, 20:32:49",90,571,822,225,5,18,0.9,0.7017751479289941,"2023/08/01, 14:20:05",4.0.0,13,22,false,,false,true,,,https://github.com/ie3-institute,https://ie3.etit.tu-dortmund.de/,"Dortmund, Germany ",,,https://avatars.githubusercontent.com/u/58265273?v=4,,, PyPSA-Eur-Sec,A Sector-Coupled Open Optimisation Model of the European Energy System.,PyPSA,https://github.com/PyPSA/pypsa-eur-sec.git,github,"pypsa,energy,energy-system,energy-model,energy-system-model",Energy Modeling and Optimization,"2023/03/18, 12:51:31",82,0,22,true,Python,PyPSA,PyPSA,Python,https://pypsa-eur-sec.readthedocs.io/,"b""![GitHub release (latest by date including pre-releases)](https://img.shields.io/github/v/release/pypsa/pypsa-eur-sec?include_prereleases)\n[![Documentation](https://readthedocs.org/projects/pypsa-eur-sec/badge/?version=latest)](https://pypsa-eur-sec.readthedocs.io/en/latest/?badge=latest)\n![GitHub](https://img.shields.io/github/license/pypsa/pypsa-eur-sec)\n![Size](https://img.shields.io/github/repo-size/pypsa/pypsa-eur-sec)\n[![Zenodo](https://zenodo.org/badge/DOI/10.5281/zenodo.3938042.svg)](https://doi.org/10.5281/zenodo.3938042)\n[![Gitter](https://badges.gitter.im/PyPSA/community.svg)](https://gitter.im/PyPSA/community?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge)\n\n# PyPSA-Eur-Sec: A Sector-Coupled Open Optimisation Model of the European Energy System\n\n**PyPSA-Eur-Sec v0.7.0 has been merged into PyPSA-Eur with version [v0.8.0](https://pypsa-eur.readthedocs.io/en/latest/release_notes.html). Please go to [PyPSA-Eur](https://github.com/pypsa/pypsa-eur) to run sector-coupling studies for the European energy system with PyPSA. This repository is now deprecated!**\n\nPyPSA-Eur-Sec is an open model dataset of the European energy system at the\ntransmission network level that covers the full ENTSO-E area.\n\nPyPSA-Eur-Sec builds on the electricity generation and transmission\nmodel [PyPSA-Eur](https://github.com/PyPSA/pypsa-eur) to add demand\nand supply for the following sectors: transport, space and water\nheating, biomass, industry and industrial feedstocks, agriculture,\nforestry and fishing. This completes the energy system and includes\nall greenhouse gas emitters except waste management and land use.\n\n**WARNING**: PyPSA-Eur-Sec is under active development and has several\n[limitations](https://pypsa-eur-sec.readthedocs.io/en/latest/limitations.html) which\nyou should understand before using the model. 
The github repository\n[issues](https://github.com/PyPSA/pypsa-eur-sec/issues) collect known\ntopics we are working on (please feel free to help or make suggestions).\nThe [documentation](https://pypsa-eur-sec.readthedocs.io/) remains somewhat\npatchy.\nYou can find showcases of the model's capabilities in the preprint\n[Benefits of a Hydrogen Network in Europe](https://arxiv.org/abs/2207.05816),\na [paper in Joule with a description of the industry\nsector](https://arxiv.org/abs/2109.09563), or in [a 2021\npresentation at EMP-E](https://nworbmot.org/energy/brown-empe.pdf).\nWe cannot support this model if you choose to use it.\n\nPlease see the [documentation](https://pypsa-eur-sec.readthedocs.io/)\nfor installation instructions and other useful information about the snakemake workflow.\n\nThis diagram gives an overview of the sectors and the links between\nthem:\n\n![sector diagram](graphics/multisector_figure.png)\n\nEach of these sectors is built up on the transmission network nodes\nfrom [PyPSA-Eur](https://github.com/PyPSA/pypsa-eur):\n\n![network diagram](https://github.com/PyPSA/pypsa-eur/blob/master/doc/img/base.png?raw=true)\n\nFor computational reasons the model is usually clustered down\nto 50-200 nodes.\n\n\nPyPSA-Eur-Sec was initially based on the model PyPSA-Eur-Sec-30 described\nin the paper [Synergies of sector coupling and transmission\nreinforcement in a cost-optimised, highly renewable European energy\nsystem](https://arxiv.org/abs/1801.05290) (2018) but it differs by\nbeing based on the higher resolution electricity transmission model\n[PyPSA-Eur](https://github.com/PyPSA/pypsa-eur) rather than a\none-node-per-country model, and by including biomass, industry,\nindustrial feedstocks, aviation, shipping, better carbon management,\ncarbon capture and usage/sequestration, and gas networks.\n\n\nPyPSA-Eur-Sec includes PyPSA-Eur as a\n[snakemake](https://snakemake.readthedocs.io/en/stable/index.html)\n[subworkflow](https://snakemake.readthedocs.io/en/stable/snakefiles/modularization.html#snakefiles-sub-workflows). PyPSA-Eur-Sec\nuses PyPSA-Eur to build the clustered transmission model along with\nwind, solar PV and hydroelectricity potentials and time series. 
Then\nPyPSA-Eur-Sec adds other conventional generators, storage units and\nthe additional sectors.\n\n\n# Licence\n\nThe code in PyPSA-Eur-Sec is released as free software under the\n[MIT License](https://opensource.org/licenses/MIT), see `LICENSE.txt`.\nHowever, different licenses and terms of use may apply to the various\ninput data.\n""",",https://doi.org/10.5281/zenodo.3938042,https://arxiv.org/abs/2207.05816,https://arxiv.org/abs/2109.09563,https://arxiv.org/abs/1801.05290","2019/04/17, 15:43:45",1652,MIT,211,1055,"2023/05/16, 08:38:48",1,197,197,82,162,1,0.5,0.7162162162162162,"2023/02/16, 20:39:50",v0.7.0,0,20,false,,false,false,,,https://github.com/PyPSA,www.pypsa.org,,,,https://avatars.githubusercontent.com/u/32890768?v=4,,, antaresViz,"Visualize the results of Antares, an Open Source power system simulator meant to be used by anybody placing value in quantifying the adequacy or the economic performance of interconnected energy systems, at short or remote time horizons.",rte-antares-rpackage,https://github.com/rte-antares-rpackage/antaresViz.git,github,"r,monte-carlo-simulation,simulation,optimization,linear-programming,stochastic-simulation-algorithm,energy,electric,renewable-energy,adequacy,shiny,shiny-apps,manipulatewidge,dygraphs,plotly,leaflet,rte,bilan,previsionnel,tyndp",Energy Modeling and Optimization,"2023/10/17, 07:52:27",20,0,2,true,R,rte-antares-rpackage,rte-antares-rpackage,"R,HTML,JavaScript,CSS",https://rte-antares-rpackage.github.io/antaresViz,"b'\r\n
\r\n\r\n# antaresViz \r\n\r\n> `antaresViz` is the package to visualize the results of your Antares simulations that you have imported into the R session with package `antaresRead`. It provides some functions that generate interactive visualisations. Moreover, by default, these functions launch a shiny widget that provides some controls to dynamically choose what data is displayed in the graphics.\r\n\r\n\r\n[![R build status](https://github.com/rte-antares-rpackage/antaresViz/workflows/R-CMD-check/badge.svg)](https://github.com/rte-antares-rpackage/antaresViz/actions)\r\n[![Codecov test coverage](https://codecov.io/gh/rte-antares-rpackage/antaresViz/branch/master/graph/badge.svg)](https://app.codecov.io/gh/rte-antares-rpackage/antaresViz?branch=master)\r\n[![CRAN status](https://www.r-pkg.org/badges/version/antaresViz)](https://CRAN.R-project.org/package=antaresViz)\r\n[![R-CMD-check](https://github.com/rte-antares-rpackage/antaresViz/actions/workflows/R-CMD-check.yaml/badge.svg)](https://github.com/rte-antares-rpackage/antaresViz/actions/workflows/R-CMD-check.yaml)\r\n\r\n\r\n\r\n## Installation\r\n\r\nYou can install the stable version from CRAN with:\r\n\r\n```r\r\ninstall.packages(""antaresViz"")\r\n```\r\n\r\nTo install the latest development version:\r\n\r\n```r\r\ndevtools::install_github(""rte-antares-rpackage/antaresViz"")\r\n```\r\n\r\nTo display the help of the package and see all the functions it provides, use:\r\n\r\n```r \r\nhelp(package=""antaresViz"")\r\n```\r\n\r\n\r\n## Basic plots\r\n\r\n`antaresViz` provides a plot method for tables generated with `antaresRead`. This method is for visualizing a single variable in different formats (time series, barplot, monotone, distribution and cumulative distribution). For instance, the following code displays the distribution of marginal price in different areas.\r\n\r\n```\r\nmydata <- readAntares(areas = ""all"")\r\nplot(mydata, variable = ""MRG. PRICE"")\r\n```\r\n\r\nFor more information, run:\r\n\r\n```r\r\n?plot.antaresDataTable\r\n```\r\n\r\n\r\n## Stacks\r\n\r\nFunction `prodStack` generates a production stack for a set of areas. Different stacks have been defined. One can see their definition with command `productionStackAliases()`.\r\n\r\nWith function `exchangesStack`, one can visualize the evolution and origin/destination of imports and exports for a given area.\r\n\r\n\r\n\r\n## Maps\r\n\r\nThe construction of maps first requires associating geographic coordinates with the areas of a study. 
antaresViz provides function `mapLayout` to make this association interactively.\r\n\r\n```r\r\n# Get the coordinates of the areas as they have been placed in the antaresSoftware\r\nlayout <- readLayout()\r\n\r\n# Associate geographical coordinates\r\nmyMapLayout <- mapLayout(layout)\r\n\r\n# This mapping should be done once and the result saved on disk.\r\nsave(myMapLayout, file = ""myMapLayout.rda"")\r\n\r\n```\r\n\r\nThen a map can be generated with function `plotMap`:\r\n\r\n```r\r\nmyData <- readAntares(areas = ""all"", links = ""all"")\r\nplotMap(myData, myMapLayout)\r\n```\r\n\r\nYou can use `spMaps` to set a map background or download some files at https://gadm.org/download_country_v3.html.\r\n\r\n\r\n## Contributing:\r\n\r\nContributions to the library are welcome and can be submitted in the form of pull requests to this repository.\r\n\r\n## ANTARES:\r\n Antares is powerful software developed by RTE to simulate and study electric power systems (more information about Antares here: ).\r\n \r\n ANTARES is now an open-source project (since 2018); you can download the sources [here](https://github.com/AntaresSimulatorTeam/Antares_Simulator) if you want to use this package. \r\n\r\n\r\n## License Information:\r\n\r\nCopyright 2015-2016 RTE (France)\r\n\r\n* RTE: https://www.rte-france.com/\r\n\r\nThis Source Code is subject to the terms of the GNU General Public License, version 2 or any higher version. If a copy of the GPL-v2 was not distributed with this file, You can obtain one at https://www.gnu.org/licenses/old-licenses/gpl-2.0.en.html.\r\n'",,"2016/11/15, 16:47:23",2535,MIT,13,932,"2023/10/17, 07:52:46",40,40,166,13,8,1,0.0,0.6235446313065977,"2022/11/25, 15:54:43",v0.17.1,0,11,false,,false,false,,,https://github.com/rte-antares-rpackage,https://rte-antares-rpackage.github.io/rPackagesRte/,,,,https://avatars.githubusercontent.com/u/23478907?v=4,,, antaresRead,"Import, manipulate and explore the results of an Antares simulation.",rte-antares-rpackage,https://github.com/rte-antares-rpackage/antaresRead.git,github,"r,monte-carlo-simulation,optimisation,simulation,linear-algebra,energy,electricity,adequacy,rhdf5,hdf5,rte,tyndp,bilan,previsionnel",Energy Modeling and Optimization,"2023/10/03, 12:05:01",11,0,2,true,R,rte-antares-rpackage,rte-antares-rpackage,R,,"b'\n
\n\n# antaresRead\n\n> Read data from an Antares study with R package \'antaresRead\'\n\n\n[![CRAN_Status_Badge](https://www.r-pkg.org/badges/version/antaresRead)](https://cran.r-project.org/package=antaresRead)\n[![Lifecycle: stable](https://img.shields.io/badge/lifecycle-stable-brightgreen.svg)](https://www.tidyverse.org/lifecycle/#stable)\n[![Project Status: Active \xe2\x80\x93 The project has reached a stable, usable state and is being actively developed.](https://www.repostatus.org/badges/latest/active.svg)](https://www.repostatus.org/#active)\n[![R-CMD-check](https://github.com/rte-antares-rpackage/antaresRead/actions/workflows/R-CMD-check.yaml/badge.svg)](https://github.com/rte-antares-rpackage/antaresRead/actions/workflows/R-CMD-check.yaml)\n[![Codecov test coverage](https://codecov.io/gh/rte-antares-rpackage/antaresRead/branch/master/graph/badge.svg)](https://app.codecov.io/gh/rte-antares-rpackage/antaresRead?branch=master)\n\n\n\n## Installation\n\nYou can install the package from CRAN:\n```r\ninstall.packages(""antaresRead"")\n```\n\nYou can also install the latest development version from GitHub:\n```r\ndevtools::install_github(""rte-antares-rpackage/antaresRead"")\n```\n\nTo display the help of the package and see all the functions it provides, type:\n```r \nhelp(package=""antaresRead"")\n```\n\nTo see a practical example of use of the package, look at the vignette:\n```r\nvignette(""antares"")\n```\n\nFinally, you can download a cheat sheet that summarizes in a single page how to use the package: https://github.com/rte-antares-rpackage/antaresRead/raw/master/cheat_sheet/antares_cheat_sheet_en.pdf .\n\nSee website for more documentation: https://rte-antares-rpackage.github.io/antaresRead/\n\n\n\n## Initialisation\n\nLoad the package\n\n```r\nlibrary(antaresRead)\n```\n\nSelect an Antares simulation interactively.\n\n```r\nsetSimulationPath()\n```\n\nYou can also select it programmatically:\n\n```r\nsetSimulationPath(""study_path"", simulation)\n```\n\nThe parameter `simulation` can be the name of a simulation, the name of the folder containing the simulation results, or the index of the simulation. `1` corresponds to the oldest simulation, `-1` to the newest one, `0` to the inputs.\n\n\n## Read data from a simulation\n\nMost data from a simulation can be imported into the R session with function `readAntares()`. It has many parameters that control what data is imported. Here are a few examples: \n\n```r\n# Read synthetic results of all areas of a study with hourly time step.\nareaData <- readAntares(areas = ""all"")\n\n# Same but with a daily time step:\nareaData <- readAntares(areas = ""all"", timeStep = ""daily"")\n\n# Read all Monte Carlo scenarios for a given area.\nmyArea <- readAntares(areas = ""my_area"", mcYears = ""all"")\n\n# Same but add miscellaneous production time series to the result \nmyArea <- readAntares(areas = ""my_area"", mcYears = ""all"", misc = TRUE)\n\n# Read only columns ""LOAD"" and ""MRG. PRICE""\nareaData <- readAntares(areas = ""all"", select = c(""LOAD"", ""MRG. PRICE""))\n```\n\nFunctions `getAreas` and `getLinks` are helpful to create a selection of areas or links of interest. 
Here are a few examples:\n\n```r\n# select areas containing ""fr""\nmyareas <- getAreas(""fr"")\n\n# Same but remove areas containing ""hvdc""\nmyareas <- getAreas(""fr"", exclude = ""hvdc"")\n\n# Get the links that connect two of the previous areas\nmylinks <- getLinks(myareas, internalOnly = FALSE)\n\n# Get the results for these areas and links\nmydata <- readAntares(areas = myareas, links = mylinks)\n```\n\n## Work with the imported data\n\nWhen only one type of element is imported (only areas or only links, etc.), `readAntares()` returns a `data.table` with some extra attributes. A `data.table` is a table with some enhanced capabilities offered by package `data.table`. In particular it provides a special syntax to manipulate its content:\n\n```r\nname_of_the_table[filter_rows, select_columns, group_by]\n```\n\nHere are some examples:\n\n```r\n# Select lines based on some criteria\nmydata[area == ""fr"" & month == ""JUL""]\n\n# Select columns, and compute new ones\nmydata[, .(area, month, load2 = LOAD^2)]\n\n# Aggregate data by some variables\nmydata[, .(total = sum(LOAD)), by = .(month)]\n\n# All three operations can be done with a single line of code\nmydata[area == ""fr"", .(total = sum(LOAD)), by = .(month)]\n```\n\nIf you are not familiar with package `data.table`, you should have a look at the documentation and especially at the vignettes of the package:\n\n```r\nhelp(package=""data.table"")\nvignette(""datatable-intro"")\n```\n\n## Contributing:\n\nContributions to the library are welcome and can be submitted in the form of pull requests to this repository.\n\nThe folder test_case contains a test Antares study used to run automatic tests. If you modify it, you need to run the following command to include the modifications in the tests:\n\n```r\nsaveWd<-getwd()\nsetwd(\'inst/testdata/\')\ntar(\n tarfile = ""antares-test-study.tar.gz"", \n files = ""test_case"", \n compression = ""gzip""\n)\n\nsetwd(saveWd)\n```\n\nYou must also change the h5 file [here](https://github.com/rte-antares-rpackage/antaresRead/blob/master/tests/testthat/helper_init.R#L35).\n\n## ANTARES:\n Antares is powerful software developed by RTE to simulate and study electric power systems (more information about Antares here: ).\n \nANTARES is now an open-source project (since 2018); you can download the sources [here](https://github.com/AntaresSimulatorTeam/Antares_Simulator) if you want to use this package. \n\n## License Information:\n\nCopyright 2015-2016 RTE (France)\n\n* RTE: https://www.rte-france.com\n\nThis Source Code is subject to the terms of the GNU General Public License, version 2 or any higher version. 
If a copy of the GPL-v2 was not distributed with this file, You can obtain one at https://www.gnu.org/licenses/old-licenses/gpl-2.0.en.html.\n'",,"2016/11/15, 15:26:32",2535,MIT,51,1001,"2023/10/03, 12:05:11",8,99,213,44,22,3,0.0,0.5989974937343359,"2023/09/04, 12:52:46",v2.6.0,0,14,false,,false,false,,,https://github.com/rte-antares-rpackage,https://rte-antares-rpackage.github.io/rPackagesRte/,,,,https://avatars.githubusercontent.com/u/23478907?v=4,,, Spine-Toolbox,"An application to define, manage, and execute various energy system simulation models.",Spine-project,https://github.com/spine-tools/Spine-Toolbox.git,github,"spine-toolbox,python,energy,anaconda,miniconda,simulation-model,workflow,data",Energy Modeling and Optimization,"2023/10/09, 10:46:46",54,0,13,true,Python,Spine tools,spine-tools,"Python,Inno Setup,GAMS,Batchfile,Julia",https://www.tools-for-energy-system-modelling.org/,"b'# Spine Toolbox\nLink to the documentation: [https://spine-toolbox.readthedocs.io/en/latest/?badge=latest](https://spine-toolbox.readthedocs.io/en/latest/?badge=latest)\n\n[![Python](https://img.shields.io/badge/python-3.8%20|%203.9%20|%203.10%20|%203.11-blue.svg)](https://www.python.org/downloads/release/python-379/)\n[![Documentation Status](https://readthedocs.org/projects/spine-toolbox/badge/?version=latest)](https://spine-toolbox.readthedocs.io/en/latest/?badge=latest)\n[![Test suite](https://github.com/spine-tools/Spine-Toolbox/actions/workflows/test_runner.yml/badge.svg)](https://github.com/spine-tools/Spine-Toolbox/actions/workflows/test_runner.yml)\n[![codecov](https://codecov.io/gh/spine-tools/Spine-Toolbox/branch/master/graph/badge.svg)](https://codecov.io/gh/spine-tools/Spine-Toolbox)\n[![PyPI version](https://badge.fury.io/py/spinetoolbox.svg)](https://badge.fury.io/py/spinetoolbox)\n[![Join the chat at https://gitter.im/spine-tools/Spine-Toolbox](https://badges.gitter.im/spine-tools/Spine-Toolbox.svg)](https://gitter.im/spine-tools/Spine-Toolbox?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge&utm_content=badge)\n\nSpine Toolbox is an open source Python package to manage data, scenarios and workflows for modelling and simulation. \nYou can have your local workflow, but work as a team through version control and SQL databases.\n\n
## Programming language\n\n- Python 3.8*\n- Python 3.9\n- Python 3.10**\n- Python 3.11**\n\n*Python 3.8.0 is not supported (use Python 3.8.1 or later).
\n**Python 3.10 and Python 3.11 require Microsoft Visual C++ 14.0 or greater on Windows.\n\n## License\n\nSpine Toolbox is released under the GNU Lesser General Public License (LGPL) license. \nAll accompanying documentation, original graphics and other material are released under the \n[Creative Commons BY-SA 4.0 license](https://creativecommons.org/licenses/by-sa/4.0/).\nLicenses of all packages used by Spine Toolbox are listed in the Spine Toolbox User \nGuide.\n\n## Attribution\n\nIf you use Spine Toolbox in a published work, please cite the following publication (Chicago/Turabian Style).\n\nKiviluoma Juha, Pallonetto Fabiano, Marin Manuel, Savolainen Pekka T., Soininen Antti, Vennstr\xc3\xb6m Per, Rinne Erkka, \nHuang Jiangyi, Kouveliotis-Lysikatos Iasonas, Ihlemann Maren, Delarue Erik, O\xe2\x80\x99Dwyer Ciara, O\xe2\x80\x99Donnel Terence, \nAmelin Mikael, S\xc3\xb6der Lennart, and Dillon Joseph. 2022. ""Spine Toolbox: A flexible open-source workflow management \nsystem with scenario and data management"" SoftwareX, Vol. 17, 100967, https://doi.org/10.1016/j.softx.2021.100967.\n\n## Installation\n\nWe provide three options for installing Spine Toolbox: \n- [Python/pipx](#installation-with-python-and-pipx) (we intend to make stable releases every month or so)\n- [From source files](#installation-from-sources-using-git) (this is the cutting edge - and more likely to have bugs)\n- [Windows installation package](#windows-64-bit-installer-package) (these are quite old - not recommended)\n\n### Installation with Python and pipx\n\nThis works best for users that just want to use Spine Toolbox and keep it updated \nwith new releases. The instructions below are written for Windows, but they also \napply to Linux and Mac where applicable.\n\n1. If you don\'t have Python installed, please install e.g. **Python 3.9**\n from [Python.org](https://www.python.org/downloads/release/python-3913/).\n\n2. If you want to use Python 3.10 or 3.11 on Windows, please install **Microsoft Visual C++ 14.0 or greater** on \n Windows. Get it with *Microsoft C++ Build Tools*: \n https://visualstudio.microsoft.com/visual-cpp-build-tools/. (Earlier versions of Python use earlier versions of \n C++ libraries that should be available with Windows.)\n \n3. If you don\'t have Git, Windows version can be found here: https://git-scm.com/download/win.\n\n4. Open a terminal (e.g., Command Prompt). Windows: If you have issues with Python and/or git not found in path, \n you can add the paths to environment variables manually. This can be done from Windows Control Panel \n (use find with \'environmental\') or from a command prompt using `set PATH=%PATH%;[path-to-executable]` e.g. \n `set PATH=%PATH%;C:\\Users\\my_user_name\\AppData\\Local\\Programs\\Git\\Cmd`. \n\n5. Get the latest version of `pip` (pip is a package manager for Python)\n\n python -m pip install --upgrade pip\n\n6. Install [pipx](https://pypa.github.io/pipx/). pipx helps in creating an isolated \n environment for Spine Toolbox to avoid package conflicts.\n\n python -m pip install --user pipx\n python -m pipx ensurepath\n\n7. Restart the terminal or re-login for the changes of the latest command to take effect.\n\n8. Choose which Toolbox version to install (**NOTE: There is no release version for Python 3.11, yet**). Latest \n *release* version is installed using \n\n python -m pipx install spinetoolbox\n\n or get the latest *development* version using\n\n python -m pipx install git+https://github.com/spine-tools/spinetoolbox-dev\n\nThat\xe2\x80\x99s it! 
To launch Spine Toolbox, open a terminal and run\n\n spinetoolbox\n\nIf for some reason the command is not found, the executable can be found under `~/.local/bin` \n(`%USERPROFILE%\\.local\\bin` on Windows).\n\nTo update Spine Toolbox to the latest available release, open a terminal and run\n\n python -m pipx upgrade spinetoolbox\n\nHere, replace `spinetoolbox` with `spinetoolbox-dev` if you installed the latest\ndevelopment version.\n\n### Installation from sources using Git\n\nThis option is for developers and other contributors who want to debug or edit Spine Toolbox source code. First, \nfollow the instructions above to install Python and get the latest version of pip.\n\n1. Make sure you have git: https://git-scm.com/download/win\n\n2. Clone or download the source code from this repository.\n \n3. Browse to the folder where Spine Toolbox was cloned and create a Python virtual environment using\n\n python -m venv .venv\n \n Make sure you have the right Python version in the system path, or use the full path of the Python \n version you want to use.\n
\n Instead of venv, one can also use a \n [miniconda](https://docs.conda.io/projects/conda/en/stable/glossary.html#miniconda-glossary) environment. You \n can [download miniconda from here](https://docs.conda.io/en/latest/miniconda.html). **Note: Anaconda \n environments are not supported.** Create a new miniconda environment without linking packages from the base \n environment for e.g. Python 3.9 using\n\n conda create -n spinetoolbox python=3.9\n \n4. Activate the environment using `.venv\\Scripts\\activate.bat` (Windows cmd.exe) \n or `source .venv/bin/activate` (bash, zsh) or `conda activate spinetoolbox`. \n\n5. Make sure that the terminal prompt indicates the active environment\n and get the latest version of `pip` (pip is a package manager for Python)\n\n python -m pip install --upgrade pip\n\n6. Install Spine Toolbox along with its dependencies with\n\n python -m pip install -r requirements.txt\n \n7. (Optional) Install additional development packages with\n\n python -m pip install -r dev-requirements.txt\n\nYou can now launch Spine Toolbox by calling `spinetoolbox` when the environment \nis active. \n\n**To upgrade**, pull (or copy) the latest changes from the Spine Toolbox repository \n\n git pull\n \nand run (to upgrade the other Spine Toolbox packages)\n\n python -m pip install -U -r requirements.txt\n\n### Windows 64-bit installer package\n\nThere are old Windows installer packages available for a quick install, but they are\nat this point (26.1.2023) quite obsolete and cannot be recommended for anything but \na quick look at how Spine Toolbox looks and feels (although even that has changed).\nDownload the installer package from \n[here](https://github.com/spine-tools/Spine-Toolbox/releases),\nrun it, and follow the instructions to install Spine Toolbox.\n\n### About requirements\n\nPython 3.8.1-3.11 is required. Python 3.8.0 is not supported due to problems in DLL loading on Windows.\n\nSee file `setup.cfg` and `requirements.txt` for packages required to run Spine Toolbox.\n(Additional packages needed for development are listed in `dev-requirements.txt`.)\n\nThe requirements include three packages ([`spinedb_api`](https://github.com/spine-tools/Spine-Database-API),\n[`spine_engine`](https://github.com/spine-tools/spine-engine), and \n[`spine_items`](https://github.com/spine-tools/spine-items)), developed by the Spine project consortium.\n\n### Building the User Guide\n\nYou can find the latest documentation on [readthedocs](https://spine-toolbox.readthedocs.io/en/latest/index.html).\nIf you want to build the documentation yourself,\nsource files for the User Guide can be found in `docs/source` directory. In order to \nbuild the HTML docs, you need to install the *optional requirements* (see section \n\'Installing requirements\' above). This installs Sphinx (among other things), which \nis required in building the documentation. When Sphinx is installed, you can build the \nHTML pages from the user guide source files by using the `bin/build_doc.bat` script on \nWindows or the `bin/build_doc.sh` script on Linux and Mac. After running the script, the \nindex page can be found in `docs/build/html/index.html`. The User Guide can also \nbe opened from Spine Toolbox menu Help->User Guide (F2).\n\n### Troubleshooting\n\n#### Obscure crashes that may produce a traceback related to PySide6\'s model classes\n\nThe first thing is to make sure that you are not using Anaconda. Only Miniconda is supported. 
\n#### Installation fails\n\nPlease make sure you are using Python 3.8, 3.9, 3.10, or 3.11 to install the requirements.\n\nIf you are on **Python 3.10 or 3.11**, please install **Microsoft Visual C++ 14.0 or greater** on Windows. \nGet it with *Microsoft C++ Build Tools*: https://visualstudio.microsoft.com/visual-cpp-build-tools/.\n\n#### \'No Python\' error when installing with pipx\n\nIf you see the following error when running the command `python -m pipx install spinetoolbox`\n\n```\nNo Python at \'c:\\python38\\python.exe\'\nFatal error from pip prevented installation. Full pip output in file:\n```\n\nHere, `c:\\python38\\python.exe` may be some other path. To fix this, delete the folder \n`C:\\Users\\<username>\\.local\\pipx\\shared` and run the `python -m pipx install spinetoolbox` command again.\n\n#### Installation fails on Linux\nIf Python runs into errors while installing on Linux systems, running the \nfollowing command in a terminal may help:\n\n```shell\n$ sudo apt install libpq-dev\n```\n\n#### Problems in starting the application\n\nIf there are problems in starting Spine Toolbox, chances are that the required \npackages were not installed successfully. In case this happens, the first thing you \nshould check is that you don\'t have the `Qt`, `PyQt4`, `PyQt5`, `PySide`, `PySide2`, and \n`PySide6` packages installed in the same environment. These do not play nice together \nand may introduce conflicts. In addition, make sure that you do not have multiple versions \nof these `Qt` related packages installed in the same environment. The easiest way \nto solve this problem is to create a blank Python environment (e.g. a venv virtual \nenvironment) just for `PySide6` applications and install the requirements again.\n\n**Warning: Using the *conda-forge* channel for installing the requirements is not \nrecommended.**\n\nThe required `qtconsole` package from the ***conda-forge*** channel also\ninstalls the `qt` and `PyQt` packages. Since this is a `PySide6` application, those \nare not needed and there is a chance of conflicts between the packages.\n\n**Note**: Python 3.8.0 is not supported. Use Python 3.8.1 or later.\n\n## Recorded Webinars showing the use of Spine Tools\n\n### Spine Toolbox: Data, workflow and scenario management for modelling\n*Wednesday Sep 8, 17:00-18:30 CEST (11:00-12:30 EDT) - Organized together with G-PST*\n\nSpine Toolbox is open source software for managing data, scenarios and workflows for modelling and simulation. You can have your local workflow, but work as a team through version control and SQL databases. 
This webinar gives a quick overview of the different functionalities and showcases them through two examples.\n\nRecording Chapters:\n- [00:00-00:05](https://www.youtube.com/watch?v=jaDIxonOmfY) Relation to G-PST Pillar 5 (Clayton Barrows, NREL)\n- [00:05-00:35](https://youtu.be/jaDIxonOmfY?t=1350) Building a workflow in Spine Toolbox [PDF](http://www.spine-model.org/pdf/webinar/Spine%20Toolbox%20-%20Building%20a%20workflow%20in%20Toolbox%20by%20Juha%20Kiviluoma.pdf), *(Juha Kiviluoma, VTT)*\n- [00:35-00:55](https://youtu.be/jaDIxonOmfY?t=2445) Example workflow from Canada to manage lots of input sources [PDF](http://www.spine-model.org/pdf/webinar/Spine%20Toolbox%20-%20Case%20in%20energy%20system%20integration%20with%20Toolbox%20by%20Madeleine%20McPherson%20and%20Jacob%20Monroe.pdf), *(Madeleine McPherson and Jake Monroe, University of Victoria)*\n- [01:05-01:15](https://youtu.be/jaDIxonOmfY?t=3722) Example workflow from EU project TradeRES to serve\nmultiple models *(Milos Cvetkovic and Jim Hommes, TU Delft)*\n- [01:15-01:30](https://youtu.be/jaDIxonOmfY?t=4500) Moderated Q&A \n\n### SpineOpt: A flexible energy system modelling framework in Julia\n*Tuesday Sep 7, 14:00-15:30 CEST (8:00-9:30 EDT) - Organized together with EERA ESI*\n\nThe importance of operational details in planning future energy systems has been rapidly increasing. One driver for this is the increasing role of variable power generation, which requires that the energy system models consider higher temporal granularity, longer time series and new aspects in power system stability. Another important driver is a consequence of sector coupling through the electrification of transport, space heating and industries. As the other sectors become more integrated with electricity, they need to be modelled at a granularity that is sufficient for describing the potential flexibility they can bring to the power system dominated by variable and uncertain power generation.\n\nThis webinar will present the open source Julia based energy system modelling framework SpineOpt, which has been built with these challenges in mind. It can represent different energy sectors using representation typically available only in sector specific models and with highly adaptable temporal and stochastic structures available both for planning and operations. 
More information at https://spine-project.github.io/SpineOpt.jl/latest/index.html.\n\nRecording Chapters:\n- [00:00-00:08](https://www.youtube.com/watch?v=FiiqZNcx7Ds) Introduction [PDF](http://www.spine-model.org/pdf/webinar/SpineOpt_Introduction%20by%20Erik%20Delarue.pdf)\n- [00:08-00:31](https://www.youtube.com/watch?v=FiiqZNcx7Ds&t=475s) Basic elements of SpineOpt and the flexible spatial structure [PDF](http://www.spine-model.org/pdf/webinar/SpineOpt_Basic%20elements%20of%20SpineOpt%20by%20Maren%20Ihlemann.pdf)\n- [00:31-00:40](https://youtu.be/FiiqZNcx7Ds?t=1876) Adaptable temporal and stochastic structures [PDF](http://www.spine-model.org/pdf/webinar/SpineOpt_Flexible%20temporal%20and%20stochastic%20structure%20by%20Topi%20Rasku.pdf)\n- [00:50-01:30](https://youtu.be/FiiqZNcx7Ds?t=3608) Representation of different energy sectors &\nAttempts to make the model faster [PDF](http://www.spine-model.org/pdf/webinar/SpineOpt_Different%20energy%20sectors%20and%20accomodating%20complexity%20by%20Jody%20Dillon.pdf)\n- [01:25-01:35](https://youtu.be/FiiqZNcx7Ds?t=5065) Q&A + Step-by-step 10 minute demo on how to build a simple model with SpineOpt using Spine Toolbox\n\n### Demonstration of Spine modelling tools through selected case studies\n*Thursday Sep 9, 14:00-16:00 CEST (8:00-10:00 EDT)*\n\n[Full recording](https://youtu.be/i2fxDwsMuF8), all presentations slides [PDF](http://www.spine-model.org/pdf/webinar/Case_Studies_all_presentations.pdf)\n\nOver the past 4 years, the EU project Spine has developed a set of open-source tools for modelling complex energy systems. This webinar demonstrates the Spine software through six selected case studies, covering topics such as sector coupling, co-optimization of operation and investments, stochastic modelling, and rolling horizon optimization. Each subsection described below consists of a brief introduction followed by a live demonstration of the particular case, where some of the outstanding features of Spine are highlighted and discussed.\n\nRecording Chapters:\n- [00:05-00:15](https://youtu.be/i2fxDwsMuF8?t=252) Introduction to Spine: This section uses a simple example to demonstrate the SpineOpt modelling principle. First, the user defines the different objects in their system, such as units and nodes, as well as the relationships between them, such as which units are connected to which nodes. Then, they specify values for certain pre-defined parameters such as node demand, unit capacity, cost, and conversion ratio. The resulting dataset is passed to SpineOpt which generates the corresponding optimisation model, optimizes it, and delivers the results.\n- [00:15-00:30](https://youtu.be/i2fxDwsMuF8?t=939) Hydro: This section demonstrates hydropower modelling in Spine as performed in Case study A5. The objective is to model part of the Swedish hydropower system, namely the Skellefte river with its 15 power stations, by coupling the river system with the power system. The model maximizes income from electricity supply over one week with an hourly resolution, while respecting basic hydrological constraints.\n- [00:29-00:45](https://youtu.be/i2fxDwsMuF8?t=1745) Building heating: This section demonstrates building heating modelling in Spine as performed in Case study A4. The objective is to model the Finnish power and district heating system coupled with an electrically heated residential building stock. The result is a rolling unit commitment and economic dispatch model, that optimizes system performance over one year at hourly resolution. 
\n- [00:45-00:57] Break\n- [00:57-01:06](https://youtu.be/i2fxDwsMuF8?t=3411) Gas grid: This section demonstrates gas grid modelling in Spine as performed in Case study A2. The objective is to model a natural gas transmission system with pressure-driven gas and couple it with an electricity system to capture the flexibility provided by the gas network. The result is a dispatch model that co-optimizes operations in both systems over a day at hourly resolution.\n- [01:06-01:22](https://youtu.be/i2fxDwsMuF8?t=3992) Stochastic: This section demonstrates stochastic modelling in Spine using a simple example system. Three different stochastic structures are demonstrated: purely deterministic, stochastic fan, and converging fan.\n- [01:22-01:35](https://youtu.be/i2fxDwsMuF8?t=4970) Power grid investments: This section demonstrates power grid investment modelling in Spine as performed in case study C2. The objective is twofold: (i) to model the Nordic synchronous power system (Norway, Sweden, Finland, and Denmark) with high operational detail; and (ii) to find optimal transmission line investment decisions over a long-term horizon of 10 years, for three different wind penetration scenarios. \n- [01:35-01:40](https://youtu.be/i2fxDwsMuF8?t=5652) Q&A \n\n### SpineInterface: How to quickly and easily create new optimization models in Julia\n*Friday Sep 10, 14:00-15:30 CEST (8:00-9:30 EDT)*\n\n[Full recording](https://youtu.be/cUopRUTzXpY), all presentations [PDF](http://www.spine-model.org/pdf/webinar/SpineInterface_all_presentations.pdf)\n\nCreating new optimisation models requires a lot of work to get the data into the models and the results out of them. Spine Toolbox is an open source data and workflow management tool that assists with these tasks and can work with models written in any language. Meanwhile, SpineInterface is a Julia package that links the data management capabilities of Spine Toolbox with the Julia/JuMP modelling environment in a very direct way.\n\nThe data interfaces of Spine Toolbox together with SpineInterface simplify the process of developing optimization models by allowing the model developer to focus on the constraint equations. The required data structures and data are defined with a graphical interface in Spine Toolbox and are immediately available inside the constraint equation code without any action or code required by the model developer. SpineInterface supports the full range of data parameter types supported by Toolbox and provides a mechanism for representing time and time-based data, whether time series, time patterns or arbitrarily varying temporal data.\n\nThis session will be of interest to model developers and/or students who want a significant head start in developing optimization models. It will also be of interest to model developers who may wish to translate existing models developed on other platforms, such as GAMS, into the Spine framework using SpineInterface. 
The power of SpineInterface will be demonstrated in an interactive session where the full modelling workflow will be illustrated, from data structure design and implementation to constraint equation development.\n\nAgenda:\n- Overview of SpineInterface\n- Toolbox concepts and data structures including the Spine data API\n- SpineInterface: convenience functions and accessing Spine Toolbox data in Julia\n- Defining a model data structure in Spine Toolbox\n- Building and solving an optimization model using SpineInterface\n- Q&A + live demo [00:45-01:21](https://youtu.be/cUopRUTzXpY?t=2737)\n\n## Contribution Guide\n\nAll are welcome to contribute!\n\nSee detailed contribution instructions in the \n[Spine Toolbox User Guide](https://spine-toolbox.readthedocs.io/en/latest/contribution_guide.html).\n\nBelow are the bare minimum things you need to know.\n\n### Setting up development environment\n\n1. Install the developer requirements:\n\n python -m pip install -r dev-requirements.txt\n\n2. Optionally, run `pre-commit install` in the project\'s root directory. This sets up some git hooks.\n\n### Coding style\n\n- [Black](https://github.com/python/black) is used for Python code formatting.\n The project\'s GitHub page includes instructions on how to integrate Black in IDEs.\n- Google-style docstrings are used.\n\n### Linting\n\nIt is advisable to run [`pylint`](https://pylint.readthedocs.io/en/latest/) \nregularly on files that have been changed.\nThe project root includes a configuration file for `pylint`.\n`pylint`\'s user guide includes instructions on how to \n[integrate the tool in IDEs](https://pylint.readthedocs.io/en/latest/user_guide/ide-integration.html#pylint-in-pycharm).\n\n### Unit tests\n\nUnit tests are located in the `tests` directory.\nYou can run the entire test suite from the project root with\n\n python -m unittest\n\n### Reporting bugs\nIf you think you have found a bug, please check the following before creating a new \nissue:\n1. **Make sure you\xe2\x80\x99re on the latest version.** \n2. **Try older versions.**\n3. **Try upgrading/downgrading the dependencies.**\n4. **Search the project\xe2\x80\x99s bug/issue tracker to make sure it\xe2\x80\x99s not a known issue.**\n\nWhat to put in your bug report (a snippet for collecting items 1 and 2 follows after this guide):\n1. **Python version**. What version of the Python interpreter are you using? 32-bit \n or 64-bit?\n2. **OS**. What operating system are you on?\n3. **Application Version**. Which version or versions of the software are you using? \n If you have forked the project from Git, which branch and which commit? Otherwise, \n supply the application version number (Help->About menu).\n4. **How to recreate**. How can the developers recreate the bug? A screenshot \n demonstrating the bug is usually the most helpful thing you can report. Relevant \n output from the Event Log and debug messages from the console of your run should \n also be included.\n\n### Feature requests\nThe developers of Spine Toolbox are happy to hear feature requests or ideas for improving \nexisting functionality. The format for requesting new features is free-form. Just fill \nout the required fields on the issue tracker and give a description of the new feature. \nA picture accompanying the description is a good way to get your idea into development\nfaster. But before you make a new issue, please check that there isn\'t a related idea \nalready open in the issue tracker.\n
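\nFor items 1 and 2 of the bug-report checklist above, the interpreter and OS details can be collected with a few lines of standard-library Python. A minimal helper sketch, not part of Spine Toolbox:\n\n```python\nimport platform\nimport struct\nimport sys\n\nprint(""Python:"", sys.version)                      # interpreter version\nprint(""Bitness:"", struct.calcsize(""P"") * 8, ""bit"")  # 32- vs 64-bit build\nprint(""OS:"", platform.platform())                  # operating system\n```\n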
\n
\nThis work has been partially supported by EU project Mopo (2023-2026), which has received funding \nfrom the European Climate, Infrastructure and Environment Executive Agency under the European Union\xe2\x80\x99s HORIZON Research and \nInnovation Actions under grant agreement N\xc2\xb0101095998.
\nThis work has been partially supported by EU project Spine (2017-2021), which has received funding \nfrom the European Union\xe2\x80\x99s Horizon 2020 research and innovation programme under grant agreement No 774629.
\n
\n'",",https://doi.org/10.1016/j.softx.2021.100967.\n\n##","2018/09/26, 07:24:52",1855,LGPL-3.0,560,6627,"2023/10/25, 06:20:50",250,212,2076,549,0,3,0.4,0.622357463164638,"2023/08/25, 10:51:26",0.7.0,17,16,false,,false,false,,,https://github.com/spine-tools,http://www.tools-for-energy-system-modelling.org,,,,https://avatars.githubusercontent.com/u/42807090?v=4,,, demandlib,With the demandlib you can create power and heat profiles for various sectors by scaling them to your desired demand.,oemof,https://github.com/oemof/demandlib.git,github,,Energy Modeling and Optimization,"2023/06/26, 15:36:16",46,21,7,true,Python,oemof community,oemof,Python,https://oemof.org,"b'========\nOverview\n========\n\n.. start-badges\n\n.. list-table::\n :stub-columns: 1\n\n * - docs\n - |docs|\n * - tests\n - | |tox-pytest| |tox-checks| |coveralls|\n * - package\n - | |version| |wheel| |supported-versions| |supported-implementations| |commits-since| |packaging|\n\n\n.. |tox-pytest| image:: https://github.com/oemof/demandlib/workflows/tox%20pytests/badge.svg?branch=dev\n :target: https://github.com/oemof/demandlib/actions?query=workflow%3A%22tox+checks%22\n\n.. |tox-checks| image:: https://github.com/oemof/demandlib/workflows/tox%20checks/badge.svg?branch=dev\n :target: https://github.com/oemof/demandlib/actions?query=workflow%3A%22tox+checks%22\n\n.. |packaging| image:: https://github.com/oemof/demandlib/workflows/packaging/badge.svg?branch=dev\n :target: https://github.com/oemof/demandlib/actions?query=workflow%3Apackaging\n\n.. |docs| image:: https://readthedocs.org/projects/demandlib/badge/?style=flat\n :target: https://demandlib.readthedocs.io/\n :alt: Documentation Status\n\n.. |coveralls| image:: https://coveralls.io/repos/oemof/demandlib/badge.svg?branch=dev&service=github\n :alt: Coverage Status\n :target: https://coveralls.io/github/oemof/demandlib?branch=dev\n\n.. |version| image:: https://img.shields.io/pypi/v/demandlib.svg\n :alt: PyPI Package latest release\n :target: https://pypi.org/project/demandlib\n\n.. |wheel| image:: https://img.shields.io/pypi/wheel/demandlib.svg\n :alt: PyPI Wheel\n :target: https://pypi.org/project/demandlib\n\n.. |supported-versions| image:: https://img.shields.io/pypi/pyversions/demandlib.svg\n :alt: Supported versions\n :target: https://pypi.org/project/demandlib\n\n.. |supported-implementations| image:: https://img.shields.io/pypi/implementation/demandlib.svg\n :alt: Supported implementations\n :target: https://pypi.org/project/demandlib\n\n.. |commits-since| image:: https://img.shields.io/github/commits-since/oemof/demandlib/v0.1.8.svg\n :alt: Commits since latest release\n :target: https://github.com/oemof/demandlib/compare/v0.1.9...dev\n\n\n\n.. end-badges\n\nCreating heat and power demand profiles from annual values.\n\n* Free software: MIT license\n\nInstallation\n============\n\n::\n\n pip install demandlib\n\nYou can also install the in-development version with::\n\n pip install https://github.com/oemof/demandlib/archive/master.zip\n\n\nDocumentation\n=============\n\n\nhttps://demandlib.readthedocs.io/\n\n\nDevelopment\n===========\n\nTo run all the tests run::\n\n tox\n\nNote, to combine the coverage data from all the tox environments run:\n\n.. 
list-table::\n :widths: 10 90\n :stub-columns: 1\n\n - - Windows\n - ::\n\n set PYTEST_ADDOPTS=--cov-append\n tox\n\n - - Other\n - ::\n\n PYTEST_ADDOPTS=--cov-append tox\n'",,"2016/04/08, 06:17:48",2756,MIT,11,241,"2023/03/29, 11:52:18",12,21,43,2,210,2,1.5,0.42647058823529416,"2021/01/27, 16:20:30",v0.1.8,0,11,false,,false,true,"stadt-land-energie-projekt/oemof-B3-robustness-module,in-RET/inretensys-fastapi,jowie1508/oemof-B3,sharijide/Nigeria_oemof,rl-institut/oemof-B3-methane,oemof/oemof,moritz-reuter/ESEM-EE,rl-institut/Uganda_oemof,MarBrandt/MBTools,inecmod/LAEND_v031,rl-institut/oemof-B3,SESMG/SESMG,lilychau1/smart_meter_data_analysis,brizett/reegis_hp,greco-project/pvcompare,reegis/berlin_hp,reegis/deflex,rl-institut/appBBB,reegis/reegis,openego/data_processing,openego/eDisGo",,https://github.com/oemof,https://oemof.org,Germany,,,https://avatars.githubusercontent.com/u/8503379?v=4,,, dieter_py,An open source power sector optimization model that has been developed to investigate the role of electricity storage and sector coupling options in future scenarios with high shares of renewable energy sources.,diw-evu/dieter_public,https://gitlab.com/diw-evu/dieter_public/dieterpy,gitlab,,Energy Modeling and Optimization,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,, OMEGAlpes,"Aims to be an energy systems modeling tool for linear optimization (LP, MILP).",omegalpes,,custom,,Energy Modeling and Optimization,,,,,,,,,,https://gricad-gitlab.univ-grenoble-alpes.fr/omegalpes/omegalpes,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,, deflex,"Flexible multi-regional energy system model for heat, power and mobility.",reegis,https://github.com/reegis/deflex.git,github,,Energy Modeling and Optimization,"2022/06/22, 19:47:21",11,2,0,false,Python,reegis,reegis,Python,,"b'.. start-badges\n\n| |workflow_pytests| |workflow_checks| |coveralls| |docs| |packaging|\n| |lgt_general| |lgt_alerts| |codacy| |requires|\n\n\\\n\n| |version| |wheel| |supported-versions| |supported-implementations|\n| |commits-since| |licence| |code_Style| |zenodo|\n\n\n.. |docs| image:: https://readthedocs.org/projects/deflex/badge/?style=flat\n :target: https://readthedocs.org/projects/deflex\n :alt: Documentation Status\n\n.. |workflow_pytests| image:: https://github.com/reegis/deflex/workflows/tox%20pytests/badge.svg?branch=master\n :target: https://github.com/reegis/deflex/actions?query=workflow%3A%22tox+pytests%22\n\n.. |workflow_checks| image:: https://github.com/reegis/deflex/workflows/tox%20checks/badge.svg?branch=master\n :target: https://github.com/reegis/deflex/actions?query=workflow%3A%22tox+checks%22\n\n.. |packaging| image:: https://github.com/reegis/deflex/workflows/packaging/badge.svg?branch=master\n :target: https://github.com/reegis/deflex/actions?query=workflow%3Apackaging\n\n.. |requires| image:: https://requires.io/github/reegis/deflex/requirements.svg?branch=master\n :alt: Requirements Status\n :target: https://requires.io/github/reegis/deflex/requirements/?branch=master\n\n.. |coveralls| image:: https://coveralls.io/repos/github/reegis/deflex/badge.svg?branch=master\n :alt: Coverage Status\n :target: https://coveralls.io/github/reegis/deflex?branch=master\n\n.. |version| image:: https://img.shields.io/pypi/v/deflex.svg\n :alt: PyPI Package latest release\n :target: https://pypi.org/project/deflex\n\n.. |wheel| image:: https://img.shields.io/pypi/wheel/deflex.svg\n :alt: PyPI Wheel\n :target: https://pypi.org/project/deflex\n\n.. 
|supported-versions| image:: https://img.shields.io/pypi/pyversions/deflex.svg\n :alt: Supported versions\n :target: https://pypi.org/project/deflex\n\n.. |supported-implementations| image:: https://img.shields.io/pypi/implementation/deflex.svg\n :alt: Supported implementations\n :target: https://pypi.org/project/deflex\n\n.. |commits-since| image:: https://img.shields.io/github/commits-since/reegis/deflex/v0.3.0.svg\n :alt: Commits since latest release\n :target: https://github.com/reegis/deflex/compare/v0.3.0...master\n\n.. |lgt_general| image:: https://img.shields.io/lgtm/grade/python/g/reegis/deflex.svg?logo=lgtm&logoWidth=18\n :target: https://lgtm.com/projects/g/reegis/deflex/context:python\n\n.. |lgt_alerts| image:: https://img.shields.io/lgtm/alerts/g/reegis/deflex.svg?logo=lgtm&logoWidth=18\n :target: https://lgtm.com/projects/g/reegis/deflex/alerts/\n\n.. |code_style| image:: https://img.shields.io/badge/automatic%20code%20style-black-blueviolet\n :target: https://black.readthedocs.io/en/stable/\n\n.. |codacy| image:: https://api.codacy.com/project/badge/Grade/b91ed03ffa8e407ab3e69a10c5115efa\n :target: https://app.codacy.com/gh/reegis/deflex?utm_source=github.com&utm_medium=referral&utm_content=reegis/deflex&utm_campaign=Badge_Grade\n\n.. |licence| image:: https://img.shields.io/badge/licence-MIT-blue\n :target: https://spdx.org/licenses/MIT.html\n\n.. |zenodo| image:: https://zenodo.org/badge/DOI/10.5281/zenodo.3572594.svg\n :target: https://doi.org/10.5281/zenodo.3572594\n\n\n------------------------------------------------\n\n.. end-badges\n\n\\\n\n.. image:: https://raw.githubusercontent.com/reegis/deflex/master/docs/images/logo_deflex_big.svg\n :target: https://github.com/reegis/deflex\n :width: 600pt\n\n=================================================================================\ndeflex - flexible multi-regional energy system model for heat, power and mobility\n=================================================================================\n\n++++++ multi sectoral energy system of Germany/Europe ++++++ dispatch\noptimisation ++++++ highly configurable and adaptable ++++++ multiple analyses\nfunctions +++++\n\nThe following README gives you a brief overview about deflex. Read the full\n`documentation `_ for all\ninformation.\n\n.. contents::\n :depth: 1\n :local:\n :backlinks: top\n\nInstallation\n------------\n\nTo run `deflex` you have to install the Python package and a solver:\n\n* deflex is available on `PyPi `_ and can be\n installed using ``pip install deflex``.\n* an LP-solver is needed such as CBC (default), GLPK, Gurobi*, Cplex*\n* for some extra functions additional packages and are needed\n\n\\* Proprietary solver\n\n\nExamples\n--------\n\n1. Run ``pip install deflex[example]`` to get all dependencies.\n2. Create a local directory (e.g. /home/user/deflex_examples).\n3. Browse the `examples `_ for deflex v0.4.x or\n download all examples as `zip file `_ and copy/extract them to your local directory.\n4. Read the comments of each example, execute it and modify it to your needs.\n Do not forget to set a local path in the examples if needed.\n5. In parallel you should read the ``usage guide`` of the documentation to get\n the full picture.\n\nThe example scripts will download the example scenarios to the $HOME/deflex\nfolder. It is also possible to browse the\n`example scenarios `_.\n\nImprove deflex\n--------------\n\nWe are warmly welcoming all who want to contribute to the deflex library. 
This\nincludes the following actions:\n\n* Write bug reports or comments\n* Improve the documentation (including typos and grammar)\n* Add features or improve the code (open an issue first)\n\n\nCiting deflex\n-------------\n\nGo to the `Zenodo page of deflex `_ to find the DOI of your version. To cite all deflex versions use:\n\n.. image:: https://zenodo.org/badge/DOI/10.5281/zenodo.3572594.svg\n :target: https://doi.org/10.5281/zenodo.3572594\n\nGallery\n-------\n\nThe following figures will give you a brief impression of deflex.\n\n.. image:: https://raw.githubusercontent.com/reegis/deflex/master/docs/images/model_regions.svg\n\n**Figure 1:** Use one of the included region sets or create your own. You\ncan also include other European countries.\n\n-------------------------------------------------------------------------------\n\n.. image:: https://raw.githubusercontent.com/reegis/deflex/master/docs/images/spreadsheet_examples.png\n :width: 950pt\n\n**Figure 2:** The input data can be organised in spreadsheets or csv files.\n\n-------------------------------------------------------------------------------\n\n.. image:: https://raw.githubusercontent.com/reegis/deflex/master/docs/images/mcp.svg\n\n**Figure 3:** The resulting system costs of deflex have been compared with the\nday-ahead prices from the Entso-e downloaded from `Open Power System Data\n`_. The plot shows three different periods\nof the year.\n\n-------------------------------------------------------------------------------\n\n.. image:: https://raw.githubusercontent.com/reegis/deflex/master/docs/images/emissions.svg\n\n**Figure 4:** It is also possible to get a time series of the average emissions. The plot also shows\nthe emissions of the most expensive power plant, which would be\nreplaced by an additional feed-in.\n\n-------------------------------------------------------------------------------\n\n.. image:: https://raw.githubusercontent.com/reegis/deflex/master/docs/images/transmission.svg\n\n**Figure 5:** The plot shows the fraction of time during which the utilisation of the\npower lines between the regions exceeds 90% of their maximum capacity.\n\nDocumentation\n-------------\n\nThe `full documentation of deflex `_\nis available on readthedocs.\n\nGo to the `download page `_\nto download different versions and formats (pdf, html, epub) of the\ndocumentation.\n\nLicense\n-------\n\nCopyright (c) 2016-2021 Uwe Krien\n\nPermission is hereby granted, free of charge, to any person obtaining a copy\nof this software and associated documentation files (the ""Software""), to deal\nin the Software without restriction, including without limitation the rights\nto use, copy, modify, merge, publish, distribute, sublicense, and/or sell\ncopies of the Software, and to permit persons to whom the Software is\nfurnished to do so, subject to the following conditions:\n\nThe above copyright notice and this permission notice shall be included in all\ncopies or substantial portions of the Software.\n\nTHE SOFTWARE IS PROVIDED ""AS IS"", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\nIMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\nFITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. 
IN NO EVENT SHALL THE\nAUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\nLIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\nOUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE\nSOFTWARE.\n'",",https://doi.org/10.5281/zenodo.3572594\n\n\n------------------------------------------------\n\n,https://doi.org/10.5281/zenodo.3572594,https://doi.org/10.5281/zenodo.3572594\n\nGallery\n-------\n\nThe","2018/01/29, 14:07:48",2095,MIT,0,939,"2022/02/25, 08:45:03",9,27,27,0,607,0,0.0,0.07692307692307687,"2022/02/25, 14:08:07",v0.4.0rc2,1,5,false,,false,true,"reegis/deflex,uvchik/reegis_phd",,https://github.com/reegis,,,,,https://avatars.githubusercontent.com/u/35924869?v=4,,, energy-py-linear,Optimizing energy systems using mixed integer linear programming.,ADGEfficiency,https://github.com/ADGEfficiency/energy-py-linear.git,github,"python,optimization,energy",Energy Modeling and Optimization,"2023/10/21, 02:39:04",53,2,15,true,Python,,,"Python,Makefile",,"b'# energy-py-linear\n\n [![Checked with mypy](https://www.mypy-lang.org/static/mypy_badge.svg)](https://mypy-lang.org/)\n\n---\n\nDocumentation: [energypylinear.adgefficiency.com](https://energypylinear.adgefficiency.com/latest)\n\n---\n\nA Python library for optimizing energy assets with mixed-integer linear programming:\n\n- electric batteries,\n- combined heat & power (CHP) generators,\n- electric vehicle smart charging,\n- heat pumps,\n- renewable (wind & solar) generators.\n\nAssets & sites can be optimized to either maximize profit or minimize carbon emissions.\n\nEnergy balances are performed on electricity, high & low temperature heat.\n\n## Setup\n\nRequires Python 3.10+:\n\n```shell-session\n$ pip install energypylinear\n```\n\n## Quick Start\n\n### Asset API\n\nThe asset API allows optimizing a single asset at once:\n\n```python\nimport energypylinear as epl\n\n# 2.0 MW, 4.0 MWh battery\nasset = epl.Battery(\n power_mw=2,\n capacity_mwh=4,\n efficiency_pct=0.9,\n electricity_prices=[100.0, 50, 200, -100, 0, 200, 100, -100],\n export_electricity_prices=40\n)\n\nsimulation = asset.optimize()\n```\n\n### Site API\n\nThe site API allows optimizing multiple assets together:\n\n```python\nimport energypylinear as epl\n\nassets = [\n # 2.0 MW, 4.0 MWh battery\n epl.Battery(\n power_mw=2.0,\n capacity_mwh=4.0\n ),\n # 30 MW open cycle generator\n epl.CHP(\n electric_power_max_mw=100,\n electric_power_min_mw=30,\n electric_efficiency_pct=0.4\n ),\n # 2 EV chargers & 4 charge events\n epl.EVs(\n chargers_power_mw=[100, 100],\n charge_events_capacity_mwh=[50, 100, 30, 40],\n charge_events=[\n [1, 0, 0, 0, 0],\n [0, 1, 1, 1, 0],\n [0, 0, 0, 1, 1],\n [0, 1, 0, 0, 0],\n ],\n ),\n epl.Boiler(),\n epl.Valve()\n]\n\nsite = epl.Site(\n assets=assets,\n electricity_prices=[100, 50, 200, -100, 0],\n high_temperature_load_mwh=[105, 110, 120, 110, 105],\n low_temperature_load_mwh=[105, 110, 120, 110, 105]\n)\n\nsimulation = site.optimize()\n```\n\n## Documentation\n\n[See more asset types & use cases in the documentation](https://energypylinear.adgefficiency.com/latest).\n\n## Test\n\n```shell\n$ make test\n```\n'",,"2019/01/15, 21:59:41",1743,MIT,25,143,"2023/10/17, 00:52:10",3,31,46,28,8,1,0.0,0.2063492063492064,"2023/10/21, 05:27:48",v1.1.0,0,3,false,,false,false,"ADGEfficiency/space-between-money-and-the-planet,ADGEfficiency/energy-py-experiments",,,,,,,,,, switch-model,Optimal planning model for power systems with large shares of renewable 
energy.,switch-model,https://github.com/switch-model/switch.git,github,,Energy Modeling and Optimization,"2022/11/01, 23:56:15",109,0,13,true,Python,Switch Power System Planning Model,switch-model,Python,http://switch-model.org/,"b'This contains version 2 of the Switch electricity planning model.\nThis optimization model is modular and can be used with varying levels\nof complexity. Look in the examples directory for demonstrations of\nusing Switch for investment planning or production cost simulation. The\nexamples enable varying levels of model complexity by choosing which\nmodules to include.\n\nSee INSTALL for installation instructions.\n\nTo generate documentation, go to the doc folder and run ./make_doc.sh.\nThis will build html documentation files from python doc strings which\nwill include descriptions of each module, their intentions, model\ncomponents they define, and what input files they expect.\n\nTESTING\nTo test the entire codebase, run this command from the root directory:\n\tpython run_tests.py\n\nEXAMPLES\nTo run an example, navigate to an example directory and run the command:\n\tswitch solve --verbose --log-run\n\nCONFIGURING YOUR OWN MODELS\n\nAt a minimum, each model requires a list of SWITCH modules to define the model\nand a set of input files to provide the data. The SWITCH framework and\nindividual modules also accept command-line arguments to change their behavior.\n\nEach SWITCH model or collection of models is defined in a specific directory\n(e.g., examples/3zone_toy). This directory contains one or more subdirectories\nthat hold input data and results (e.g., ""inputs"" and ""outputs""). The models in\nthe examples directory show the type of text files used to provide inputs for a\nmodel. You can change any of the model\'s input data by editing the *.csv files\nin the input directory.\n\nSWITCH contains a number of different modules, which can be selected and\ncombined to create models with different capabilities and amounts of detail.\nYou can look through the *.py files within switch_mod and its subdirectories to\nsee the standard modules that are available and the columns that each one will\nread from the input files. You can also add modules of your own by creating\nPython files in the main model directory and adding their name (without the\n"".py"") to the module list, discussed below. These should define the same\nfunctions as the standard modules (e.g., define_components()).\n\nEach model has a text file which lists the modules that will be used for that\nmodel. Normally this file is called ""modules.txt"" and is stored in the main\nmodel directory or in an inputs subdirectory. SWITCH will automatically look in\nthose locations for this list; alternatively, you can specify a different file\nwith the ""--module-list"" argument.\n\nUse ""switch --help"", ""switch solve --help"" or ""switch solve-scenarios --help""\nto see a list of command-line arguments that are available.\n\nYou can specify these arguments on the command line when you solve the model\n(e.g., ""switch solve --solver cplex""). You can also place frequently used\narguments in a file called ""options.txt"" in the main model directory. These can\nall be on one line, or they can be placed on multiple lines for easier\nreadability (be sure to include the ""--"" part in all the argument names in\noptions.txt). 
The ""switch solve"" command first reads all the arguments from\noptions.txt, and then applies any arguments you specified on the command line.\nIf the same argument is specified multiple times, the last one takes priority.\n\nYou can also define scenarios, which are sets of command-line arguments to\ndefine different models. These additional arguments can be placed in a scenario\nlist file, usually called ""scenarios.txt"" in the main model directory (or you\ncan use a different file specified by ""--scenario-list""). Each scenario should\nbe defined on a single line, which includes a ""--scenario-name"" argument and\nany other arguments needed to define the scenario. ""switch solve-scenarios""\nwill solve all the scenarios listed in this file. For each scenario, it will\nfirst apply all arguments from options.txt, then arguments from the relevant\nline of scenarios.txt, then any arguments specified on the command line.\n\nAfter the model runs, results will be written in tab-separated text files (with\nextension .tsv or .tab) in the ""outputs"" directory (or some other directory\nspecified via the ""--outputs-dir"" argument).\n'",,"2015/04/08, 00:59:34",3122,CUSTOM,18,744,"2023/01/29, 17:30:41",27,99,119,6,269,11,0.2,0.3429003021148036,"2022/11/02, 00:53:23",2.0.7,0,9,false,,false,false,,,https://github.com/switch-model,http://switch-model.org,,,,https://avatars.githubusercontent.com/u/11792892?v=4,,, AnyMOD.jl,Creating large scale energy system models with multiple periods of capacity expansion formulated as linear optimization problems.,leonardgoeke,https://github.com/leonardgoeke/AnyMOD.jl.git,github,"capacity-expansion-planning,energy,julia,linear-programming,optimization",Energy Modeling and Optimization,"2022/10/14, 17:09:00",60,0,10,true,Julia,,,Julia,,"b'[![Build Status](https://travis-ci.org/leonardgoeke/AnyMOD.jl.svg?branch=master)](https://travis-ci.org/leonardgoeke/AnyMOD.jl)\n[![codecov](https://codecov.io/gh/leonardgoeke/AnyMOD.jl/branch/master/graph/badge.svg)](https://codecov.io/gh/leonardgoeke/AnyMOD.jl)\n[![Gitter chat](https://badges.gitter.im/AnyMOD-jl/typedoc.svg)](https://gitter.im/AnyMOD-jl/community)\n[![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT)\n\n\n\n[AnyMOD.jl](https://github.com/leonardgoeke/AnyMOD.jl) is a [Julia](https://julialang.org/) framework for creating large scale energy system models with multiple periods of capacity expansion formulated as linear optimization problems. It was developed to address the challenges in modelling high-levels of intermittent generation and sectoral integration. A comprehensive description of the framework\'s graph based methodology can found in the working paper [G\xc3\xb6ke (2020), AnyMOD - A graph-based framework for energy system modelling with high levels of renewables and sector integration](https://arxiv.org/abs/2004.10184). The software itself is seperately introduced in [G\xc3\xb6ke (2020), AnyMOD.jl: A Julia package for creating energy system models](https://arxiv.org/abs/2011.00895).\n\nAny questions, suggestions, feature requests, or contributions are welcome. 
To get in touch open an [issue](https://github.com/leonardgoeke/AnyMOD.jl/issues) or use the [chat](https://gitter.im/AnyMOD-jl/community).\n\n## Documentation\n\n- [**STABLE**](https://leonardgoeke.github.io/AnyMOD.jl/stable/) \xe2\x80\x94 **last thoroughly tested and fully documented version**\n- [**DEV**](https://leonardgoeke.github.io/AnyMOD.jl/dev/) \xe2\x80\x94 *in-development version of the tool*\n\n\n## Acknowledgement\n\nThe development of AnyMOD is receiving funding from the European Union\xe2\x80\x99s Horizon 2020 research and innovation programme within the [OSMOSE project](https://www.osmose-h2020.eu/) under grant agreement No 773406.\n\n\n'",",https://arxiv.org/abs/2004.10184,https://arxiv.org/abs/2011.00895","2019/09/11, 19:08:21",1505,MIT,0,427,"2023/10/13, 15:48:14",5,12,16,3,12,2,0.0,0.0,"2022/12/28, 23:06:03",flexibleElectrificationWorkingPaper,0,1,false,,false,false,,,,,,,,,,, FlexiGIS,"Extracts, filters and categorizes the geo-referenced urban energy infrastructure and allocates the required decentralized storage in urban settings.",FlexiGIS,https://github.com/FlexiGIS/FlexiGIS.git,github,,Energy Modeling and Optimization,"2023/02/22, 10:10:26",17,0,5,true,Python,,,"Python,Makefile",,"b'# FlexiGIS: an open source GIS-based platform for modelling energy systems and flexibility options in urban areas.\n\nFlexiGIS stands for Flexibilisation in Geographic Information Systems (GIS). It extracts, filters and categorises the geo-referenced urban energy infrastructure, simulates the local electricity consumption and power generation from on-site renewable energy resources, and allocates the required decentralised storage in urban settings. FlexiGIS systematically investigates different scenarios of self-consumption and analyses the characteristics and roles of flexibilisation technologies in promoting higher autarky levels in cities. The extracted urban energy infrastructure is based mainly on OpenStreetMap data.\n\n[![Documentation Status](https://readthedocs.org/projects/flexigis/badge/?version=latest)](https://flexigis.readthedocs.io/en/latest/?badge=latest)[![DOI](https://zenodo.org/badge/DOI/10.5281/zenodo.4497334.svg)](https://doi.org/10.5281/zenodo.4497334)\n\n# FlexiGIS Components\n\n## Module I: FlexiGIS urban spatial platform\n\nThis package establishes the urban energy infrastructure. It extracts, acquires and processes urban georeferenced data from OpenStreetMap. To extract the georeferenced datasets of the urban energy infrastructure and the required features, Module I derives an automatised extraction procedure. First, the raw OpenStreetMap data is downloaded from the OpenStreetMap database for the investigated urban space from `Geofabrik`. Second, the OpenStreetMap datasets are filtered for the respective case study defined by the provided `.POLY` file using the open source java tool `osmosis`. The OpenStreetMap data are filtered for the following OSM elements (a small Python sketch of this tag filtering follows the list):\n\n- `landuse = *` tag provides information about the human use of land in the respective area (see [Figure 1](data/04_Visualisation/landuse.png))\n- `building = *` tag describes all mapped objects considered as buildings of different categories, e.g. residential, schools, etc. (see [Figure 2](data/04_Visualisation/buildings.png))\n- `highway = *` tag describes all lines considered as streets, roads, paths, etc. (see [Figure 3](data/04_Visualisation/highway.png))\n
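\nFlexiGIS itself performs this filtering with the java tool `osmosis` and a `.POLY` file, as described above. Purely as an illustration of the same idea in Python, the [pyosmium](https://osmcode.org/pyosmium/) library can count the ways carrying these tags in an OSM extract (the file name below is hypothetical):\n\n```python\nimport osmium\n\nclass TagCounter(osmium.SimpleHandler):\n    # Count OSM ways carrying the tags that FlexiGIS extracts.\n    def __init__(self):\n        super().__init__()\n        self.counts = {""landuse"": 0, ""building"": 0, ""highway"": 0}\n\n    def way(self, w):\n        for key in self.counts:\n            if key in w.tags:\n                self.counts[key] += 1\n\nhandler = TagCounter()\nhandler.apply_file(""oldenburg.osm.pbf"")  # hypothetical Geofabrik extract\nprint(handler.counts)\n```\n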
\n![FlexiGIS Buildings](data/04_Visualisation/landuse.png)\nFig. 1. Extracted OpenStreetMap `landuse` datasets for the city of Oldenburg. Credits: OpenStreetMap contributors.\n ![FlexiGIS Buildings](data/04_Visualisation/buildings.png)\n Fig. 2. Extracted OpenStreetMap `building` datasets for the city of Oldenburg. Credits: OpenStreetMap contributors.\n ![FlexiGIS Highway](data/04_Visualisation/highway.png)\n Fig. 3. Extracted OpenStreetMap `highway` datasets for the city of Oldenburg. Credits: OpenStreetMap contributors.\n\nAfter filtering the OSM raw data, the georeferenced building and road infrastructure in the city of Oldenburg is exported to a relational postgis-enabled database using the open source `osm2pgsql`. These datasets can be exported as CSV files or visualised as maps (see Figures 1-3).\n\n## Module II: FlexiGIS temporal dimension\n\nThis module simulates the urban energy requirements (consumption and generation). The spatio-temporal electricity consumption and renewable energy generation from PV and wind in the defined urban area are modelled. This component models the combined spatial parameters of urban geometries (Module I) and links them to real-world applications using GIS. Here, a bottom-up simulation approach is developed to calculate local urban electricity demand and power generation from available renewable energy resources. For instance, using open source datasets like Standardised Load Profiles and publicly available weather data, [Figure 4](data/04_Visualisation/Energy_Requirments.png) shows the generated quarter-hourly time series of the aggregated load and PV power supply profile for the investigated case study. The PV and wind power supply are simulated using the [feedinlib](https://feedinlib.readthedocs.io/en/latest/) python package.\n\n![FlexiGIS Simulated Energy_requirements](data/04_Visualisation/Energy_Requirments.png)\nFig. 4. Simulated electricity consumption (green) and solar power generation (red) for the city of Oldenburg.\n\n## Module III: Optimisation of flexibility options\n\nThe spatial-temporal simulation outputs from Modules I and II are time series of electricity demand and supply. These generated datasets are used by the [oemof/solph](https://oemof-solph.readthedocs.io/en/latest/index.html) model as inputs to the linear optimisation problem. This module aims to determine the minimum system costs at the given spatial urban scale while simultaneously matching the simulated electricity demand. In addition, it aims to allocate and optimise distributed storage and other flexibility options in urban energy systems.\n![Optimal supply and Storage](data/04_Visualisation/om.png)\nFig. 5. Example result of optimal energy requirements, which minimises investment cost, simulated for the city of Oldenburg.\n
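\nThe actual Module III model is built with oemof.solph and solved with the cbc solver (see the installation notes below). The toy linear program below, written with [PuLP](https://github.com/coin-or/pulp) and entirely made-up numbers, only sketches the shape of the storage-allocation problem that Module III solves:\n\n```python\nimport pulp\n\ndemand = [30.0, 50.0, 40.0]  # made-up hourly demand (kWh)\npv = [60.0, 20.0, 10.0]      # made-up hourly PV generation (kWh)\nT = range(len(demand))\n\nprob = pulp.LpProblem(""storage_allocation"", pulp.LpMinimize)\ncap = pulp.LpVariable(""storage_capacity_kwh"", lowBound=0)\nsoc = [pulp.LpVariable(f""soc_{t}"", lowBound=0) for t in T]\ngrid = [pulp.LpVariable(f""grid_import_{t}"", lowBound=0) for t in T]\nspill = [pulp.LpVariable(f""curtailment_{t}"", lowBound=0) for t in T]\n\n# objective: annualised storage cost plus grid import cost (toy prices)\nprob += 0.05 * cap + 0.30 * pulp.lpSum(grid)\n\nfor t in T:\n    prev = soc[t - 1] if t > 0 else 0\n    # hourly energy balance: PV + grid import + discharge = demand + charge + curtailment\n    prob += pv[t] + grid[t] + prev - soc[t] - spill[t] == demand[t]\n    prob += soc[t] <= cap  # state of charge limited by the built capacity\n\nprob.solve()\nprint(""optimal storage capacity:"", cap.value(), ""kWh"")\n```\n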
\n## System requirements\n\nFlexiGIS is developed and tested on Linux (Ubuntu 16.04.6). The tools and software used in FlexiGIS and their versions are listed in the following:\n\n- Operating system: Ubuntu 16.04.6 LTS, Release: 16.04, Codename: xenial\n- PostgreSQL version: 11.2 (64-bit)\n- PostGIS version: 1.5.3\n- osmosis version: 0.44.1\n- osm2pgsql version: 0.88.1 (64bit id space)\n- GNU Make version: 4.2.1\n- Python: 3.6.7\n- GNU bash: 4.3.48(1)-release (x86_64-pc-linux-gnu)\n\nPostgreSQL: To install PostgreSQL, refer to the [PostgreSQL](http://www.postgresql.org/) webpage. Please note that a sufficiently recent version is required to run FlexiGIS.\n\nOsmosis: To use [Osmosis](http://wiki.openstreetmap.org/wiki/Osmosis), unzip the distribution in the location of your choice. On unix/linux systems, make the bin/osmosis script executable (i.e. `chmod u+x osmosis`). If desired, create a symbolic link to the osmosis script somewhere on your path (e.g. `ln -s appdir/bin/osmosis ~/bin/osmosis`).\n\nosm2pgsql: Instructions on how to download and install osm2pgsql for Linux systems are available on the [osm2pgsql](http://wiki.openstreetmap.org/wiki/Osm2pgsql) webpage.\n\nPython: Ensure you can run python (version 3 and above) on your OS. Python can be downloaded following this [link](https://www.python.org/downloads/), or download the [Anaconda](https://www.anaconda.com/distribution/) distro.\n\ncbc solver: See [here](https://github.com/coin-or/Cbc) for cbc installation instructions.\n\n## Getting Started\n\nTo use the FlexiGIS spatial-temporal platform, download the FlexiGIS code and data folder as a zip file or clone the repository from the FlexiGIS GitHub repo. After downloading the FlexiGIS code, unzip the folder FlexiGIS in the location of your choice. The file structure of the FlexiGIS code folder is as follows:\n\n- FlexiGIS\n - \xe2\x94\x9c\xe2\x94\x80\xe2\x94\x80 code\n - \xe2\x94\x9c\xe2\x94\x80\xe2\x94\x80 data\n - \xe2\x94\x82\xc2\xa0\xc2\xa0 \xe2\x94\x9c\xe2\x94\x80\xe2\x94\x80 01_raw_input_data\n - \xe2\x94\x82\xc2\xa0\xc2\xa0 \xe2\x94\x9c\xe2\x94\x80\xe2\x94\x80 02_urban_output_data\n - \xe2\x94\x82\xc2\xa0\xc2\xa0 \xe2\x94\x9c\xe2\x94\x80\xe2\x94\x80 03_urban_energy_requirements\n - \xe2\x94\x82\xc2\xa0\xc2\xa0 \xe2\x94\x94\xe2\x94\x80\xe2\x94\x80 04_Visualisation\n - \xe2\x94\x9c\xe2\x94\x80\xe2\x94\x80 doc\n - \xe2\x94\x9c\xe2\x94\x80\xe2\x94\x80 README.md\n - \xe2\x94\x94\xe2\x94\x80\xe2\x94\x80 requirements.txt\n\n## Installation\n\nAfter making sure all system requirements are satisfied, clone the FlexiGIS repository and create a Python virtual environment where the required python dependencies can be installed. A Python virtual environment can be created by following the tutorial steps from [here](https://packaging.python.org/tutorials/installing-packages/).\n\nFirst clone the FlexiGIS code from the GitHub repository. In the next step, create a Python virtual environment (give it a name, e.g. \\_env_name) where the required python dependencies can be installed using pip, then activate the virtual environment.\n\n```console\nuser@terminal:~$ git clone https://github.com/FlexiGIS/FlexiGIS.git\nuser@terminal:~$ python3 -m venv _env_name\nuser@terminal:~$ source _env_name/bin/activate\n```\n\nAfter creating and activating the python virtual environment, cd into the cloned FlexiGIS directory and install the required python dependencies.\n\n```console\n(_env_name) user@terminal:~$ cd ../FlexiGIS\n(_env_name) user@terminal:~/FlexiGIS$ pip install -r requirements.txt\n```\n\nClone the [oemof/feedinlib](https://feedinlib.readthedocs.io/en/latest/) package from the FlexiGIS GitHub repository (recommended) and install it locally for the renewable feed-in simulations. Also install the [oemof/solph](https://oemof-solph.readthedocs.io/en/latest/index.html) python package for the modelling and optimization of energy systems. 
An additional oemof package, [oemof_visio](https://github.com/oemof/oemof-visio), is required as a dependency for generating nice plots of the optimization results.\n**_Note: The default solver used by FlexiGIS for the linear optimization is the cbc solver._**\n\n```console\n(_env_name) user@terminal:~/FlexiGIS$ git clone https://github.com/FlexiGIS/feedinlib.git\n(_env_name) user@terminal:~/FlexiGIS$ pip install -e feedinlib\n(_env_name) user@terminal:~/FlexiGIS$ pip install oemof.solph\n(_env_name) user@terminal:~/FlexiGIS$ pip install git+https://github.com/oemof/oemof_visio.git\n```\n\nNow you are ready to execute FlexiGIS using the make commands.\n\n## Running FlexiGIS\n\nTo run the first two components of the FlexiGIS package, execute the following steps:\n\n1. After completing the setup of the FlexiGIS environment as described in the steps above, go to the folder code and check the parameters in the config.mk file.\n\n2. Ensure that your database parameters are properly set in the config.mk file, as well as the OSM data link of the spatial location of choice.\n\n3. Ensure the poly file of the spatial location of choice is available in the data/01_raw_input_data directory before running FlexiGIS.\n\n###### The available makefile options are:\n\nTo run the available makefile options, go into the code folder of the FlexiGIS directory in your linux terminal, then run\n\n```console\n(_env_name) user@terminal:~/FlexiGIS/code$ make all\n```\n\n\'make all\' executes multiple make options, from download to the simulation of load and PV profiles for the urban infrastructures, and finally the optimization of electricity supply and the allocated storage system. However, instead of running \'make all\', you can run the various steps one after the other as described in the [documentation](https://flexigis.readthedocs.io/en/latest/flexigis_install/install.html#running-flexigis).\n\n```console\n(_env_name) user@terminal:~/FlexiGIS/code$ make weather_data\n```\n\n\'make weather_data\' downloads ECMWF ERA5 weather data of a location or region of interest from the climate data store ([CDS](https://cds.climate.copernicus.eu/#!/home)) using the feedinlib interface.\n\n```console\n(_env_name) user@terminal:~/FlexiGIS/code$ make feedin_data_format\n```\n\n\'make feedin_data_format\' prepares the downloaded weather data in the feedinlib format for the feed-in simulation.\n\n```console\n(_env_name) user@terminal:~/FlexiGIS/code$ make example\n```\n\n\'make example\' can be run to generate an example simulation of the aggregated load and PV profile for Oldenburg and also to model the optimal allocated storage and onsite renewable supply. The example imports spatially filtered OSM highway, landuse and building data stored as csv files in the ../data/01_raw_input_data/example_OSM_data folder. In other words, it runs all steps using the provided example data. Hence, \'make example\' can be used as a test run of the FlexiGIS package. After running the FlexiGIS package using the makefile commands, the resulting\naggregated load and PV profiles, urban infrastructure data, and optimization results are stored in the folder data/03_urban_energy_requirements; static plots of the urban infrastructures and the simulated load and PV profiles are created and stored in the data/04_Visualisation folder. To visualise the extracted georeferenced urban infrastructure data interactively, the generated shape file of the extracted urban infrastructures can be used in [QGIS](https://www.qgis.org/en/site/) to generate interactive plots.\n
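\nOnce \'make example\' (or \'make all\') has finished, the generated CSV time series can be given a quick look with pandas; this sketch is not part of FlexiGIS and the file name is hypothetical:\n\n```python\nimport matplotlib.pyplot as plt\nimport pandas as pd\n\n# hypothetical output file in data/03_urban_energy_requirements/\ndf = pd.read_csv(\n    ""../data/03_urban_energy_requirements/load_profile.csv"",\n    index_col=0,\n    parse_dates=True,\n)\nprint(df.describe())  # summary statistics of the simulated profiles\ndf.plot()\nplt.show()\n```\n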
\n## Documentation\n\nJump to the [documentation](https://flexigis.readthedocs.io/en/latest/) for more details about FlexiGIS.\n\n## Publications List\n\nTo see the list of publications jump to [publications](https://flexigis.readthedocs.io/en/latest/#publications-list).\n\n## License\n\nFlexiGIS is licensed under the [BSD-3-Clause](https://opensource.org/licenses/BSD-3-Clause), ""New BSD License"" or ""Modified BSD License"".\n\nThe OpenStreetMap (OSM) data is available under the Open Database License (ODbL). A description of the ODbL license is available [here](http://opendatacommons.org/licenses/odbl). OpenStreetMap cartography is licensed as CC BY-SA. For more information on the copyright of OpenStreetMap please visit [here](http://www.openstreetmap.org/copyright).\n\n## Acknowledgments\n\nThe main author of FlexiGIS acknowledges the financial support provided by the [Heinrich Boell Foundation](https://www.boell.de/en) through a PhD scholarship.\n\nThis open source tool is currently supported by [e-shape](https://e-shape.eu/) - EuroGEO Showcases: Applications powered by Europe. This project receives funding from the European Union\xe2\x80\x99s Horizon 2020 research and innovation programme under grant agreement 820852 within a pilot study on [High photovoltaic penetration at urban scale](https://e-shape.eu/index.php/showcases/pilot3-2-high-photovoltaic-penetration-at-urban-scale)\n\n\n\n## Contact\n\nAuthor: Alaa Alhamwi alaa.alhamwi@dlr.de\n\nOrganisation: German Aerospace Center ([DLR](https://www.dlr.de/ve/)) - Institute of Networked Energy Systems, Department for Energy System Analysis (ESY).\n\n## Project Team\n\nDr. Wided Medjroubi (project leader) and Dr. Alaa Alhamwi (FlexiGIS main author)\n'",",https://doi.org/10.5281/zenodo.4497334","2019/12/02, 08:47:33",1423,MIT,1,133,"2022/08/23, 05:06:01",1,6,6,0,428,1,0.0,0.16129032258064513,"2021/02/03, 08:41:08",v1.0.0,0,4,false,,false,false,,,,,,,,,,, EMMA,"A techno-economic model of the north-west European power market covering France, Benelux, Germany and Poland.",,,custom,,Energy Modeling and Optimization,,,,,,,,,,https://neon.energy/emma/,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,, reVX,Renewable Energy Potential(V) eXchange Tool.,NREL,https://github.com/NREL/reVX.git,github,,Energy Modeling and Optimization,"2023/10/05, 17:06:02",15,1,4,true,Python,National Renewable Energy Laboratory,NREL,"Python,Tcl,Dockerfile",https://nrel.github.io/reVX/,"b'************************************************************************\nWelcome to the `reV `_ eXchange (reVX) tool!\n************************************************************************\n\n.. image:: https://github.com/NREL/reVX/workflows/Documentation/badge.svg\n :target: https://nrel.github.io/reVX/\n\n.. image:: https://github.com/NREL/reVX/workflows/Pytests/badge.svg\n :target: https://github.com/NREL/reVX/actions?query=workflow%3A%22Pytests%22\n\n.. image:: https://github.com/NREL/reVX/workflows/Lint%20Code%20Base/badge.svg\n :target: https://github.com/NREL/reVX/actions?query=workflow%3A%22Lint+Code+Base%22\n\n.. image:: https://img.shields.io/pypi/pyversions/NREL-reVX.svg\n :target: https://pypi.org/project/NREL-reVX/\n\n.. image:: https://badge.fury.io/py/NREL-reVX.svg\n :target: https://badge.fury.io/py/NREL-reVX\n\n.. 
image:: https://codecov.io/gh/nrel/reVX/branch/main/graph/badge.svg?token=3J5M44VAA9\n :target: https://codecov.io/gh/nrel/reVX\n\n.. image:: https://zenodo.org/badge/201337735.svg\n :target: https://zenodo.org/badge/latestdoi/201337735\n\n.. inclusion-intro\n\nreVX command line tools\n=======================\n\n- `reVX `_\n- `reV-rpm `_\n- `reV-plexos `_\n- `plexos-plants `_\n- `mean-wind-dirs `_\n- `prominent-wind-dirs `_\n- `setbacks `_\n- `offshore-assembly-areas `_\n- `offshore-dist-to-ports `_\n- `offshore-inputs `_\n\nInstalling reVX\n===============\n\nNOTE: The installation instruction below assume that you have python installed\non your machine and are using `conda `_\nas your package/environment manager.\n\n#. Create a new environment:\n ``conda create --name revx python=3.8``\n\n#. Activate your new environment:\n ``conda activate revx``\n\n#. Clone the repo:\n From your home directory ``/home/{user}/`` or another directory that you have permissions in, run the command ``git clone git@github.com:NREL/reVX.git`` and then go into your cloned repository: ``cd reVX``\n\n#. Install reVX:\n 1) Follow the installation commands installation process that we use for our automated test suite `here `_. Make sure that you call ``pip install -e .`` from within the cloned repository directory e.g. ``/home/{user}/reVX/``\n\n - NOTE: If you install using pip and want to run `exclusion setbacks `_ you will need to install rtree manually:\n * ``conda install rtree``\n * `pip installation instructions `_\n\nRecommended Citation\n====================\n\nUpdate with current version and DOI:\n\nMichael Rossol, Grant Buster, and Robert Spencer. The Renewable Energy\nPotential(V) eXchange Tool: reVX. https://github.com/NREL/reVX\n(version v0.3.20), 2021. https://doi.org/10.5281/zenodo.4507580.\n'",",https://zenodo.org/badge/latestdoi/201337735\n\n,https://doi.org/10.5281/zenodo.4507580.\n","2019/08/08, 21:07:31",1538,BSD-3-Clause,271,1848,"2023/10/05, 17:06:06",7,139,191,35,20,2,2.4,0.47492625368731567,"2023/08/15, 15:41:09",v0.3.53,0,8,false,,false,false,NREL-Sienna/reV-PowerSystems,,https://github.com/NREL,http://www.nrel.gov,"Golden, CO",,,https://avatars.githubusercontent.com/u/1906800?v=4,,, CapacityExpansion.jl,Future energy system planning (Generation and Transmission Capacity Expansion Planning) in Julia.,YoungFaithful,https://github.com/YoungFaithful/CapacityExpansion.jl.git,github,"capacity-expansion-planning,energy,energy-optimization-model,julia,jump,clustering,california,germany",Energy Modeling and Optimization,"2021/04/27, 07:58:56",22,0,6,false,Julia,,,"Julia,TeX",https://youngfaithful.github.io/CapacityExpansion.jl/stable,"b'![CapacityExpansion logo](docs/src/assets/cep_text.svg)\n===\n[![](https://img.shields.io/badge/docs-stable-blue.svg)](https://YoungFaithful.github.io/CapacityExpansion.jl/stable)\n[![](https://img.shields.io/badge/docs-dev-blue.svg)](https://YoungFaithful.github.io/CapacityExpansion.jl/dev)\n[![Build Status](https://travis-ci.com/YoungFaithful/CapacityExpansion.jl.svg?branch=master)](https://travis-ci.com/YoungFaithful/CapacityExpansion.jl)\n[![DOI](https://zenodo.org/badge/178281868.svg)](https://zenodo.org/badge/latestdoi/178281868)\n[![DOI](https://joss.theoj.org/papers/10.21105/joss.02034/status.svg)](https://doi.org/10.21105/joss.02034)\n\n[CapacityExpansion](https://github.com/YoungFaithful/CapacityExpansion.jl) is a [julia](https://julialang.org/) implementation of an input-data-scaling capacity expansion modeling framework.\n\nThe primary purpose of the 
package is providing an extensible, simple-to-use generation and transmission capacity expansion model that allows addressing a diverse set of research questions in the area of energy systems planning. The secondary purposes are:\n1) Providing a simple process to integrate (clustered) time-series input data, geographical input data, cost input data, and technology input data.\n2) Providing a model configuration, a modular model setup and model optimization.\n3) Providing an interface between the optimization result and further analysis.\n\nPlease refer to the [documentation](https://YoungFaithful.github.io/CapacityExpansion.jl/stable) for details on how to use this software.\n\n| Model Information | |\n| ----------------- | ------------------------------------------------------------------------------------------ |\n| Model class | Capacity Expansion Problem |\n| Model type | Optimization, Linear optimization model input-data depending energy system |\n| Carriers | Electricity, Hydrogen,... |\n| Technologies | dispatchable and non-dispatchable Generation, Conversion, Storage (seasonal), Transmission |\n| Decisions | investment and dispatch |\n| Objective | Total system cost |\n| Variables | Cost, Capacities, Generation, Storage, Lost-Load, Lost-Emissions |\n\n| Input Data Depending | Provided Input Data |\n| --------------------- | ----------------------------------------------------------------------------------- |\n| Regions | California, USA (single and multi-node) and Germany, Europe (single and multi-node) |\n| Geographic Resolution | aggregated regions |\n| Time resolution | hourly |\n| Network coverage | transmission, DCOPF load flow |\n\nThe package uses [TimeSeriesClustering](https://github.com/holgerteichgraeber/TimeSeriesClustering.jl) as a basis for its time-series aggregation.\n\nElias Kuepper [@YoungFaithful](https://github.com/youngfaithful) and Holger Teichgraeber [@holgerteichgraeber](https://github.com/holgerteichgraeber) developed this package.\n\n## Installation\nThis package runs under julia v1.0 and higher.\nIt depends on multiple packages, which are also listed in the [`Project.toml`](https://github.com/YoungFaithful/CapacityExpansion.jl/blob/master/Project.toml). The julia package manager automatically installs the packages:\n- `JuMP.jl` - for the modeling environment\n- `CSV.jl` - for handling of `.csv`-Files\n- `DataFrames.jl` - for handling of tables\n- `StatsBase.jl` - for handling of basic statistics\n- `JLD2` - for saving your result data\n- `FileIO` - for file accessing\n- `TimeSeriesClustering.jl` - for time-series data\n\nInstall `CapacityExpansion` using the package mode:\n```julia\n]\nadd CapacityExpansion\n```\nor using the `Pkg.add` function:\n```julia\nusing Pkg\nPkg.add(""CapacityExpansion"")\n```\n\nA solver is required to run an optimization, as explained in section [Solver](https://youngfaithful.github.io/CapacityExpansion.jl/stable/opt_cep/#Solver-1).\nInstall e.g. 
`Clp` using the package mode:\n```julia\n]\nadd Clp\n```\nor using the `Pkg.add` function:\n```julia\nusing Pkg\nPkg.add(""Clp"")\n```\n## Example Workflow\n```julia\nusing CapacityExpansion\nusing Clp\noptimizer=Clp.Optimizer # select optimizer\n\n## LOAD DATA ##\n# load ts-data\nts_input_data = load_timeseries_data_provided(""GER_1""; T=24, years=[2016])\n# load cep-data\ncep_data = load_cep_data_provided(""GER_1"")\n\n## OPTIMIZATION ##\n# run a simple optimization\nrun_opt(ts_input_data,cep_data,optimizer)\n```\n\n## Testing\nThe model is tested against a capacity expansion model presented in the paper [`On representation of temporal variability in electricity capacity\nplanning models` by Merrick 2016](http://dx.doi.org/10.1016/j.eneco.2016.08.001). The model additionally tests itself against previously calculated data to detect new errors.\n\n## Links\n- [Documentation of the stable version](https://YoungFaithful.github.io/CapacityExpansion.jl/stable)\n- [Documentation of the development version](https://YoungFaithful.github.io/CapacityExpansion.jl/dev)\n- [Contributing guidelines](https://github.com/YoungFaithful/CapacityExpansion.jl/blob/master/CONTRIBUTING.md)\n- [CapacityExpansion: A capacity expansion modeling framework in Julia](https://doi.org/10.21105/joss.02034)\n\n## Citing CapacityExpansion\nIf you find CapacityExpansion useful in your work, we kindly request that you cite the following [paper](https://doi.org/10.21105/joss.02034):\n```\n@article{Kuepper2020,\n doi = {10.21105/joss.02034},\n url = {https://doi.org/10.21105/joss.02034},\n year = {2020},\n publisher = {The Open Journal},\n volume = {5},\n number = {52},\n pages = {2034},\n author = {Lucas Elias Kuepper and Holger Teichgraeber and Adam R. Brandt},\n title = {CapacityExpansion: A capacity expansion modeling framework in Julia},\n journal = {Journal of Open Source Software}\n}\n```\n'",",https://zenodo.org/badge/latestdoi/178281868,https://doi.org/10.21105/joss.02034,https://doi.org/10.21105/joss.02034,https://doi.org/10.21105/joss.02034,https://doi.org/10.21105/joss.02034","2019/03/28, 21:02:36",1671,MIT,0,543,"2021/04/27, 07:58:56",6,45,60,0,911,1,0.6,0.49629629629629635,"2020/09/30, 18:35:18",v0.2.2,0,3,false,,false,true,,,,,,,,,,, DPsim,A solver library for dynamic power system simulation.,sogno-platform,https://github.com/sogno-platform/dpsim.git,github,"simulation,real-time,dynamic-phasors,powerflow,electromagnetic-transient,power-systems,emt,quasi-stationary",Energy Modeling and Optimization,"2023/10/18, 15:15:10",52,4,18,true,C++,SOGNO,sogno-platform,"C++,CMake,C,Shell,Python,Dockerfile,Makefile",https://sogno.energy/dpsim/,"b'# DPsim\n\n[![Build & Test CentOS](https://github.com/sogno-platform/dpsim/actions/workflows/build_test_linux_centos.yaml/badge.svg)](https://github.com/sogno-platform/dpsim/actions/workflows/build_test_linux_centos.yaml)\n\n[![Build & Test Fedora](https://github.com/sogno-platform/dpsim/actions/workflows/build_test_linux_fedora.yaml/badge.svg)](https://github.com/sogno-platform/dpsim/actions/workflows/build_test_linux_fedora.yaml)\n\n[![Build & Test Fedora Minimal](https://github.com/sogno-platform/dpsim/actions/workflows/build_test_linux_fedora_minimal.yaml/badge.svg)](https://github.com/sogno-platform/dpsim/actions/workflows/build_test_linux_fedora_minimal.yaml)\n\n[![Build & Test Windows](https://github.com/sogno-platform/dpsim/actions/workflows/build_test_windows.yaml/badge.svg)](https://github.com/sogno-platform/dpsim/actions/workflows/build_test_windows.yaml)\n\n[![License: MPL 
2.0](https://img.shields.io/badge/License-MPL%202.0-brightgreen.svg)](https://opensource.org/licenses/MPL-2.0)\n\nDPsim is a solver library for dynamic power system simulation.\n\n- It supports both the electromagnetic transient (EMT) and dynamic phasor (DP) domain for dynamic simulation.\n- A powerflow solver is included for standalone usage or initialization of dynamic simulations.\n- It provides a Python module which can be embedded in any Python 3 application or script.\n- The simulation core is implemented in highly-efficient C++ code.\n- It supports real-time execution with time-steps down to 50 microseconds.\n- It can load models in the IEC61970 CIM / CGMES XML format.\n- It can be interfaced to a variety of protocols and interfaces via [VILLASnode](https://fein-aachen.org/projects/villas-node/).\n\n## Documentation\n\nThe [documentation](https://dpsim.fein-aachen.org/) has build / installation instructions, links to examples and explains the concepts implemented in DPsim as well as its architecture.\n\n## License\n\nThe project is released under the terms of the [MPL 2.0](https://mozilla.org/MPL/2.0/).\n\n## Contact\n\n- Markus Mirz \n- Steffen Vogel \n- Jan Dinkelbach \n'",,"2020/01/29, 08:49:59",1365,MPL-2.0,424,4413,"2023/10/18, 15:15:11",93,124,163,90,7,31,0.9,0.666210295728368,,,0,20,false,,false,false,"sogno-platform/dpsim,JTS22/dpsim,gnakti/dpsim,pipeacosta/dpsim",,https://github.com/sogno-platform,https://sogno.energy/,,,,https://avatars.githubusercontent.com/u/73550268?v=4,,, GenX,"A highly-configurable, open source electricity resource capacity expansion model that incorporates several state-of-the-art practices in electricity system planning to offer improved decision support for a changing electricity landscape.",GenXProject,https://github.com/GenXProject/GenX.git,github,,Energy Modeling and Optimization,"2023/08/04, 18:51:45",201,0,67,true,Julia,,,Julia,https://genxproject.github.io/GenX/dev/,"b'# GenX \n[![Build Status](https://travis-ci.com/GenXProject/GenX.svg?branch=main)](https://travis-ci.com/GenXProject/GenX)\n[![Coverage Status](https://coveralls.io/repos/github/GenXProject/GenX/badge.svg?branch=main)](https://coveralls.io/github/GenXProject/GenX?branch=main)\n\n\n[![Dev](https://img.shields.io/badge/docs-dev-blue.svg)](https://genxproject.github.io/GenX/dev)\n[![ColPrac: Contributor\'s Guide on Collaborative Practices for Community Packages](https://img.shields.io/badge/ColPrac-Contributor\'s%20Guide-blueviolet)](https://github.com/SciML/ColPrac)\n## Overview\nGenX is a highly-configurable, [open source](https://github.com/GenXProject/GenX/blob/main/LICENSE) electricity resource capacity expansion model \nthat incorporates several state-of-the-art practices in electricity system planning to offer improved decision support for a changing electricity landscape. \n\nThe model was [originally developed](https://energy.mit.edu/publication/enhanced-decision-support-changing-electricity-landscape/) by \n[Jesse D. Jenkins](https://mae.princeton.edu/people/faculty/jenkins) and \n[Nestor A. Sepulveda](https://energy.mit.edu/profile/nestor-sepulveda/) at the Massachusetts Institute of Technology and is now jointly maintained by \n[a team of contributors](https://github.com/GenXProject/GenX#genx-team) at the MIT Energy Initiative (led by [Dharik Mallapragada](https://energy.mit.edu/profile/dharik-mallapragada/)) and the Princeton University ZERO Lab (led by Jenkins). 
\n\nGenX is a constrained linear or mixed integer linear optimization model that determines the portfolio of electricity generation, \nstorage, transmission, and demand-side resource investments and operational decisions to meet electricity demand in one or more future planning years at lowest cost,\nwhile subject to a variety of power system operational constraints, resource availability limits, and other imposed environmental, market design, and policy constraints.\n\nGenX features a modular and transparent code structure developed in [Julia](http://julialang.org/) + [JuMP](http://jump.dev/).\nThe model is designed to be highly flexible and configurable for use in a variety of applications from academic research and technology evaluation to public policy and regulatory analysis and resource planning.\nDepending on the planning problem or question to be studied,\nGenX can be configured with varying levels of model resolution and scope, with regards to:\n(1) temporal resolution of time series data such as electricity demand and renewable energy availability;\n(2) power system operational detail and unit commitment constraints;\nand (3) geospatial resolution and transmission network representation.\nThe model is also capable of representing a full range of conventional and novel electricity resources,\nincluding thermal generators, variable renewable resources (wind and solar), run-of-river, reservoir and pumped-storage hydroelectric generators,\nenergy storage devices, demand-side flexibility, demand response, and several advanced technologies such as long-duration energy storage.\n\nThe \'main\' branch is the current master branch of GenX. The various subdirectories are described below:\n\n1. `src/` Contains the core GenX model code for reading inputs, model generation, solving and writing model outputs.\n\n2. `Example_Systems/` Contains fully specified examples that users can use to test GenX and get familiar with its various features. Within this folder, we have two main sets of examples:\n - `SmallNewEngland/` , a simplified system consisting of 4 different resources per zone.\n - `RealSystemExample/`, a detailed system representation based on ISO New England and including many different resources (up to 58)\n\n3. `docs/` Contains source files for documentation pertaining to the model.\n\n## Requirements\n\nGenX currently exists in version 0.3.6 and runs only on Julia v1.5.x, 1.6.x, 1.7.x, 1.8.x, and 1.9.x, where x>=0 and a minimum version of JuMP v1.1.1. We recommend the users to either stick to a particular version of Julia to run GenX. 
## Requirements\n\nGenX currently exists in version 0.3.6 and runs only on Julia v1.5.x, 1.6.x, 1.7.x, 1.8.x, and 1.9.x, where x>=0, with a minimum version of JuMP v1.1.1. We recommend users stick to a particular version of Julia to run GenX. If, however, users decide to switch between versions, it\'s very important to delete the old Manifest.toml file and do a fresh build of GenX when switching between Julia versions.\nThere is also an older version of GenX, which is also currently maintained and runs on the Julia 1.3.x and 1.4.x series.\nFor those users who have previously cloned GenX, and have been running it successfully so far,\nand therefore might be unwilling to run it on the latest version of Julia:\nplease look into the GitHub branch, [old_version](https://github.com/GenXProject/GenX/tree/old_version).\nIt is currently set up to use one of the following open-source freely available solvers:\n(A) the default solver: [HiGHS](https://github.com/jump-dev/HiGHS.jl) for linear programming and MILP,\n(B) [Clp](https://github.com/jump-dev/Clp.jl) for linear programming (LP) problems,\n(C) [Cbc](https://github.com/jump-dev/Cbc.jl) for mixed integer linear programming (MILP) problems.\nWe also provide the option to use one of these two commercial solvers: \n(D) [Gurobi](https://www.gurobi.com), or \n(E) [CPLEX](https://www.ibm.com/analytics/cplex-optimizer).\nNote that using Gurobi and CPLEX requires a valid license on the host machine.\nThere are two ways to run GenX with either type of solver option (open-source free or licensed commercial) as detailed in the section, `Running an Instance of GenX`.\n\nThe file `Project.toml` in the parent directory lists all of the packages and their versions needed to run GenX.\nYou can see all of the packages installed in your Julia environment and their version numbers by running `pkg> status` on the package manager command line in the Julia REPL.\n\n## Documentation\n\nDetailed documentation for GenX can be found [here](https://genxproject.github.io/GenX/dev).\nIt includes details of each of GenX\'s methods, required and optional input files, and outputs.\nInterested users may also want to browse through [prior publications](https://energy.mit.edu/genx/#publications) that have used GenX to understand the various features of the tool.\n\n## Running an Instance of GenX\n1. Download or clone the GenX repository on your machine.\nFor this tutorial it will be assumed to be within your home directory: `/home/youruser/GenX`.\n### Creating the Julia environment and installing dependencies\nYou could either start from a default terminal or a Julia REPL terminal. \n#### For a default terminal:\n2. Start a terminal and navigate into the `GenX` folder.\n3. Type `julia --project=.` to start an instance of the `julia` kernel with the `project` set to the current folder.\nThe `.` indicates the current folder. On Windows the location of Julia can also be specified as e.g., \'C:\\julia-1.6.0\\bin\\julia.exe --project=.\'\n\n If it\'s your first time running GenX (or if you have pulled after some major upgrades/release/version) execute steps 3-6.\n\n4. Type `]` to bring up the package system `(GenX) pkg >` prompt. This indicates that the GenX project was detected. If you see `(@v1.6) pkg>` as the prompt, then the `project` was not successfully set.\n5. Type `instantiate` from the `(GenX) pkg` prompt.\n On Windows there is an issue with the prepopulated MUMPS_seq_jll v5.5.1 that prevents compilation of the solvers. To avoid this issue type \'add MUMPS_seq_jll@5.4.1\' after running instantiate.\n6. Type `st` to check that the dependencies have been installed. If there is no error, it has been successful.\n7. 
Type the back key to come back to the `julia>` prompt.\n\n These steps can be skipped on subsequent runs.\n\n Steps 2-5 are shown in Figure 1 and Steps 6-8 are shown in Figure 2.\n \n ![Creating the Julia environment and installing dependencies: Steps 2-7](docs/src/assets/GenX_setup_tutorial_part_1.png)\n *Figure 1. Creating the Julia environment and installing dependencies from Project.toml file from inside the GenX folder: Steps 2-5*\n\n8. Since we have already started Julia, we can run a case by executing the command `julia> include(""/Run.jl"")`. \n\nFor example, in order to run the OneZone case within the `Example_Systems/SmallNewEngland` folder,\ntype `include(""Example_Systems/SmallNewEngland/OneZone/Run.jl"")` from the `julia>` prompt.\n\n![Creating the Julia environment and installing dependencies: Steps 6-8](docs/src/assets/GenX_setup_tutorial_part_2.png)\n*Figure 2. Creating the Julia environment and installing dependencies from Project.toml file from inside the GenX folder: Steps 6-8*\n\nAfter the script runs to completion, results will be written to a folder called ""Results"", located in the same directory as `Run.jl`.\n\n#### For a Julia REPL terminal:\n2. Open your desired version of Julia.\n3. In the Julia terminal, enter pkg manager mode by typing `]`.\n Activate the project by typing `activate /path/to/GenX`.\n4. Type `instantiate` from the `(GenX) pkg` prompt.\n On Windows there is an issue with the prepopulated MUMPS_seq_jll v5.5.1 that prevents compilation of the solvers. To avoid this issue type \'add MUMPS_seq_jll@5.4.1\' after running instantiate.\n5. Type `st` to check that the dependencies have been installed. If there is no error, it has been successful.\n6. Type the back key to come back to the `julia>` prompt.\n7. Since we have already started Julia, we can run a case by executing the command `julia> include(""/Run.jl"")`. \n\nFor example, in order to run the OneZone case within the `Example_Systems/SmallNewEngland` folder,\ntype `include(""Example_Systems/SmallNewEngland/OneZone/Run.jl"")` from the `julia>` prompt.\n\n\n### Running a case\n\nOnce Steps 1-6 have been performed, a case can be run from the terminal in a single line.\nThere\'s no need to be in a certain folder to run a case, but it is required to point `julia` to the project that you created.\n\nFor example, from inside the `GenX` folder:\n`/home/youruser/GenX > julia --project=. /home/youruser/GenX/Example_Systems/SmallNewEngland/OneZone/Run.jl`\n\nOr from another folder:\n\n`/arbitrary/location > julia --project=""/home/youruser/GenX"" /home/youruser/GenX/Example_Systems/SmallNewEngland/OneZone/Run.jl`\n\nIn fact, a best practice is to place your cases outside of the GenX repository:\n\n`/arbitrary/location > julia --project=""/home/youruser/GenX"" /your/custom/case/Run.jl`\n\n### What happens when you run a case\n\nThe Run.jl file in each of the example systems calls a function `run_genx_case!(""path/to/case"")` which is suitable for capacity expansion modeling of several varieties.\nThe following are the main steps performed in that function:\n\n1. Establish path to environment setup files and GenX source files.\n2. Read in model settings `genx_settings.yml` from the example directory.\n3. Configure solver settings.\n4. Load the model inputs from the example directory and perform time-domain clustering if required.\n5. Generate a GenX model instance.\n6. Solve the model.\n7. 
Write the output files to a specified directory.\n\nIf your needs are more complex, it is possible to use a customized run script in place of simply calling `run_genx_case!`; the contents of that function could be a starting point. \n\n### Using commercial solvers: Gurobi or CPLEX\nIf you want to use the commercial solvers Gurobi or CPLEX:\n\n1. Make sure you have a valid license and the actual solvers for either Gurobi or CPLEX installed on your machine.\n2. Add Gurobi or CPLEX to the Julia Project.\n\n```\n> julia --project=/home/youruser/GenX\n\njulia> \n(GenX) pkg> add Gurobi\n-or-\n(GenX) pkg> add CPLEX\n```\n\n3. At the beginning of the `GenX/src/GenX.jl` file, uncomment `using Gurobi` and/or `using CPLEX`.\n4. Set the appropriate solver in the `genx_settings.yml` file of your case.\n\nNote that if you have not already installed the required Julia packages or you do not have a valid Gurobi license on your host machine, you will receive an error message and Run.jl will not run to completion.\n\n\n## Running Modeling to Generate Alternatives with GenX\nGenX includes a modeling to generate alternatives (MGA) package that can be used to automatically enumerate a diverse set of near cost-optimal solutions to electricity system planning problems. To use the MGA algorithm, users will need to perform the following tasks:\n\n1. Add a `Resource_Type` column in the `Generators_data.csv` file denoting the type of each technology.\n2. Add a `MGA` column in the `Generators_data.csv` file denoting the availability of the technology.\n3. Set the `ModelingToGenerateAlternatives` flag in the `GenX_Settings.yml` file to 1.\n4. Set the `ModelingtoGenerateAlternativeSlack` flag in the `GenX_Settings.yml` file to the desirable level of slack.\n5. Create a `Rand_mga_objective_coefficients.csv` file to provide random objective function coefficients for each MGA iteration.\n For each iteration, the number of rows in the `Rand_mga_objective_coefficients.csv` file represents the number of distinct technology types, while the number of columns represents the number of model zones.\n6. 
Solve the model using the `Run.jl` file.\n\nResults from the MGA algorithm will be saved in `MGA_max` and `MGA_min` folders in the `Example_Systems/` folder.\n\n# Limitations of the GenX Model\n\nWhile the benefits of an openly available generation and transmission expansion model are high, many approximations have been made due to missing data or to manage computational tractability.\nThe assumptions of the GenX model are listed below.\nThis list serves as a caveat to the user and as an encouragement to improve the approximations.\n\n## Time period\nGenX makes the simplifying assumption that each time period contains n copies of a single, representative year.\nGenX optimizes generation and transmission capacity for just this characteristic year within each time period, assuming the results for different years in the same time period are identical.\nHowever, the GenX objective function accounts only for the cost of the final model time period.\n\n## Cost\nThe GenX objective function assumes that the cost of powerplants is specified in the unit of currency per unit of capacity.\nGenX also assumes that the capital cost of technologies is paid through loans.\n\n## Market\nGenX is a bottom-up (technology-explicit), partial equilibrium model that assumes perfect markets for commodities.\nIn other words, each commodity is produced such that the sum of producer and consumer surplus is maximized.\n\n## Technology\nBehavioral response and acceptance of new technology are often modeled simplistically as a discount rate or by externally fixing the technology capacity.\nA higher, technology-specific discount rate represents consumer reluctance to accept newer technologies.\n\n## Uncertainty\nBecause each model realization assumes a particular state of the world based on the input values drawn, the parameter uncertainty is propagated through the model in the case of myopic model runs.\n\n## Decision-making\nGenX assumes rational decision making, with perfect information and perfect foresight, and simultaneously optimizes all decisions over the user-specified time horizon.\n\n## Demand\nGenX assumes price-elastic demand segments that are represented using piece-wise approximation rather than an inverse demand curve to keep the model linear.\n\n# How to cite GenX\n\nWe request that users of GenX cite it in their academic publications and patent filings.\n\n```\nMIT Energy Initiative and Princeton University ZERO lab. GenX: a configurable power system capacity expansion model for studying low-carbon energy futures, n.d. https://github.com/GenXProject/GenX\n```\n\n# pygenx: Python interface for GenX\n\nPython users can now run GenX from a thin Python wrapper interface, developed by [Daniel Olsen](https://github.com/danielolsen).\nThis tool is called `pygenx` and can be cloned from the GitHub page: [pygenx](https://github.com/danielolsen/pygenx).\nIt needs installation of Julia 1.3 and a clone of the GenX repo along with your Python installation. A wrapper-free way to launch a case from Python is sketched below. 
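The pygenx API itself is not reproduced here; as a wrapper-free alternative, the following minimal sketch launches a GenX case from Python simply by shelling out to Julia. All paths are illustrative and need to be adapted to your own setup:

```python
# Run a GenX case from Python by invoking Julia directly (no wrapper needed).
import subprocess
from pathlib import Path

genx_dir = Path.home() / ""GenX""  # assumed location of your GenX clone
run_jl = genx_dir / ""Example_Systems"" / ""SmallNewEngland"" / ""OneZone"" / ""Run.jl""

# Equivalent to: julia --project=/home/youruser/GenX .../OneZone/Run.jl
subprocess.run([""julia"", f""--project={genx_dir}"", str(run_jl)], check=True)
```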
\n\n## Simple GenX Case Runner: For automated sequential batch run for GenX\n\nIt is now possible to run a list of GenX cases as separate batch jobs.\nAlternatively, they can also be run locally in sequence, as one job.\nIt has been developed by [Jacob Schwartz](https://github.com/cfe316).\nThis tool is called `SimpleGenXCaseRunner` and can be cloned from the github page: [SimpleGenXCaseRunner](https://github.com/cfe316/SimpleGenXCaseRunner)\n\n## Bug and feature requests and contact info\nIf you would like to report a bug in the code or request a feature, please use our [Issue Tracker](https://github.com/GenXProject/GenX/issues).\nIf you\'re unsure or have questions on how to use GenX that are not addressed by the above documentation, please reach out to Sambuddha Chakrabarti (sc87@princeton.edu), Jesse Jenkins (jdj2@princeton.edu) or Dharik Mallapragada (dharik@mit.edu).\n\n## GenX Team\nGenX has been developed jointly by researchers at the [MIT Energy Initiative](https://energy.mit.edu/) and the ZERO lab at Princeton University.\nKey contributors include [Nestor A. Sepulveda](https://energy.mit.edu/profile/nestor-sepulveda/),\n[Jesse D. Jenkins](https://mae.princeton.edu/people/faculty/jenkins),\n[Dharik S. Mallapragada](https://energy.mit.edu/profile/dharik-mallapragada/),\n[Aaron M. Schwartz](https://idss.mit.edu/staff/aaron-schwartz/),\n[Neha S. Patankar](https://www.linkedin.com/in/nehapatankar),\n[Qingyu Xu](https://www.linkedin.com/in/qingyu-xu-61b3567b),\n[Jack Morris](https://www.linkedin.com/in/jack-morris-024b37121),\n[Sambuddha Chakrabarti](https://www.linkedin.com/in/sambuddha-chakrabarti-ph-d-84157318).\n'",,"2021/05/19, 17:57:57",889,GPL-2.0,122,823,"2023/10/04, 21:34:35",62,353,504,272,20,7,14.0,0.6493288590604027,"2023/08/01, 19:35:41",v0.3.6,0,18,false,,false,false,,,,,,,,,,, Open Energy Platform,"Aims to ensure quality, transparency and reproducibility in energy system research. It is a collection of various tools and information and that help working with energy-related data.",OpenEnergyPlatform,https://github.com/OpenEnergyPlatform/oeplatform.git,github,"oep,django,rest-api,energy,database,open-energy-family,open-data",Energy Modeling and Optimization,"2023/10/23, 18:30:39",56,0,9,true,Python,Open Energy Family,OpenEnergyPlatform,"Python,JavaScript,HTML,SCSS,CSS,Shell,Dockerfile,Mako,Less",http://openenergyplatform.org/,"b'\n\n| Workflow | Status-Badge & Link |\n| ---------------------------------------------- | ---------------------------------------------------------------------------------------------------------- |\n| Automated tests | [![Automated tests](https://github.com/OpenEnergyPlatform/oeplatform/actions/workflows/automated-testing.yaml/badge.svg)](https://github.com/OpenEnergyPlatform/oeplatform/actions/workflows/automated-testing.yaml) |\n| Docs | [![Documentation Status](https://github.com/OpenEnergyPlatform/oeplatform/actions/workflows/deploy-docs.yaml/badge.svg)](https://openenergyplatform.github.io/oeplatform/)\n\n\n\n# Open Energy Family - Open Energy Platform (OEP)\n\nRepository for the code of the Open Energy Platform (OEP) website [https://openenergy-platform.org/](https://openenergy-platform.org/). 
This repository does not contain data; for data access, please consult [this page](https://github.com/OpenEnergyPlatform/organisation/blob/master/README.md).\n\n## License / Copyright\n\nThis repository is licensed under the [GNU Affero General Public License v3.0 (AGPL-3.0)](https://www.gnu.org/licenses/agpl-3.0.en.html).\n\n# Installation & Setup\n\nFollow the detailed [installation guide](https://openenergyplatform.github.io/oeplatform/install/installation/).\n\n## Development & Code contribution\n\nPlease read carefully the `CONTRIBUTING.md` [file](https://github.com/OpenEnergyPlatform/oeplatform/blob/develop/CONTRIBUTING.md) before you start contributing!\n\nFor further information visit our [development & contribution guide](https://openenergyplatform.github.io/oeplatform/dev/developement/).\n'",,"2015/11/20, 14:21:02",2896,AGPL-3.0,1429,4725,"2023/10/23, 15:15:19",245,496,1149,304,2,9,0.1,0.5477621910487642,"2023/10/23, 16:08:22",v0.15.3,0,36,false,,false,true,,,https://github.com/OpenEnergyPlatform,https://github.com/OpenEnergyPlatform/organisation/blob/master/README.md,"Magdeburg, Germany",,,https://avatars.githubusercontent.com/u/37101913?v=4,,, PyPSA,"A free software toolbox for simulating and optimizing modern power systems that include features such as conventional generators with unit commitment, variable wind and solar generation, storage units, coupling to other energy sectors, and mixed alternating and direct current networks.",PyPSA,https://github.com/PyPSA/PyPSA.git,github,"python,loadflow,optimisation,pyomo,energy-system,power-systems-analysis,optimal-power-flow,powerflow,energy,energy-systems,pypsa,power-flow,power-systems,renewable-energy,electrical-engineering,capacity-expansion-planning,clean-energy,climate-change,renewables",Energy Modeling and Optimization,"2023/10/25, 15:50:25",945,85,234,true,Python,PyPSA,PyPSA,"Python,Makefile",https://pypsa.readthedocs.io,"b'# PyPSA - Python for Power System Analysis\n\n\n[![PyPI version](https://img.shields.io/pypi/v/pypsa.svg)](https://pypi.python.org/pypi/pypsa)\n[![Conda version](https://img.shields.io/conda/vn/conda-forge/pypsa.svg)](https://anaconda.org/conda-forge/pypsa)\n[![CI](https://github.com/pypsa/pypsa/actions/workflows/CI.yml/badge.svg)](https://github.com/pypsa/pypsa/actions/workflows/CI.yml)\n[![CI with micromamba](https://github.com/pypsa/pypsa/actions/workflows/CI-micromamba.yml/badge.svg)](https://github.com/pypsa/pypsa/actions/workflows/CI-micromamba.yml)\n[![Code coverage](https://codecov.io/gh/PyPSA/PyPSA/branch/master/graph/badge.svg?token=kCpwJiV6Jr)](https://codecov.io/gh/PyPSA/PyPSA)\n[![Documentation Status](https://readthedocs.org/projects/pypsa/badge/?version=latest)](https://pypsa.readthedocs.io/en/latest/?badge=latest)\n[![License](https://img.shields.io/pypi/l/pypsa.svg)](LICENSE.txt)\n[![Zenodo](https://zenodo.org/badge/DOI/10.5281/zenodo.3946412.svg)](https://doi.org/10.5281/zenodo.3946412)\n[![Examples of use](https://mybinder.org/badge_logo.svg)](https://mybinder.org/v2/gh/PyPSA/PyPSA/master?filepath=examples%2Fnotebooks)\n[![pre-commit.ci status](https://results.pre-commit.ci/badge/github/PyPSA/PyPSA/master.svg)](https://results.pre-commit.ci/latest/github/PyPSA/PyPSA/master)\n[![Code style: black](https://img.shields.io/badge/code%20style-black-000000.svg)](https://github.com/psf/black)\n[![Discord](https://img.shields.io/discord/911692131440148490?logo=discord)](https://discord.gg/AnuJBk23FU)\n[![Contributor 
Covenant](https://img.shields.io/badge/Contributor%20Covenant-2.1-4baaaa.svg)](CODE_OF_CONDUCT.md)\n[![Stack Exchange questions](https://img.shields.io/stackexchange/stackoverflow/t/pypsa)](https://stackoverflow.com/questions/tagged/pypsa)\n\nPyPSA stands for ""Python for Power System Analysis"". It is pronounced\n""pipes-ah"".\n\nPyPSA is an open source toolbox for simulating and optimising modern power and\nenergy systems that include features such as conventional generators with unit\ncommitment, variable wind and solar generation, storage units, coupling to other\nenergy sectors, and mixed alternating and direct current networks. PyPSA is\ndesigned to scale well with large networks and long time series.\n\nThis project is maintained by the [Department of Digital Transformation in\nEnergy Systems](https://tub-ensys.github.io) at the [Technical University of\nBerlin](https://www.tu.berlin). Previous versions were developed by the Energy\nSystem Modelling group at the [Institute for Automation and Applied\nInformatics](https://www.iai.kit.edu/english/index.php) at the [Karlsruhe\nInstitute of Technology](http://www.kit.edu/english/index.php) funded by the\n[Helmholtz Association](https://www.helmholtz.de/en/), and by the [Renewable\nEnergy\nGroup](https://fias.uni-frankfurt.de/physics/schramm/renewable-energy-system-and-network-analysis/)\nat [FIAS](https://fias.uni-frankfurt.de/) to carry out simulations for the\n[CoNDyNet project](http://condynet.de/), financed by the [German Federal\nMinistry for Education and Research (BMBF)](https://www.bmbf.de/en/index.html)\nas part of the [Stromnetze Research\nInitiative](http://forschung-stromnetze.info/projekte/grundlagen-und-konzepte-fuer-effiziente-dezentrale-stromnetze/).\n\n## Functionality\n\nPyPSA can calculate:\n\n- static power flow (using both the full non-linear network equations and the\n linearised network equations)\n- linear optimal power flow (least-cost optimisation of power plant and\n storage dispatch within network constraints, using the linear network\n equations, over several snapshots)\n- security-constrained linear optimal power flow\n- total electricity/energy system least-cost investment optimisation (using\n linear network equations, over several snapshots and investment periods\n simultaneously for optimisation of generation and storage dispatch and\n investment in the capacities of generation, storage, transmission and other\n infrastructure)\n\nIt has models for:\n\n- meshed multiply-connected AC and DC networks, with controllable converters\n between AC and DC networks\n- standard types for lines and transformers following the implementation in\n [pandapower](https://www.pandapower.org/)\n- conventional dispatchable generators and links with unit commitment\n- generators with time-varying power availability, such as wind and solar\n generators\n- storage units with efficiency losses\n- simple hydroelectricity with inflow and spillage\n- coupling with other energy carriers (e.g. 
resistive Power-to-Heat (P2H),\n Power-to-Gas (P2G), battery electric vehicles (BEVs), Fischer-Tropsch,\n direct air capture (DAC))\n- basic components out of which more complicated assets can be built, such as\n Combined Heat and Power (CHP) units and heat pumps.\n\n## Documentation\n\n[Documentation](https://pypsa.readthedocs.io/en/latest/index.html)\n\n[Quick start](https://pypsa.readthedocs.io/en/latest/quick_start.html)\n\n[Examples](https://pypsa.readthedocs.io/en/latest/examples-basic.html)\n\n[Known users of\nPyPSA](https://pypsa.readthedocs.io/en/latest/users.html)\n\n## Installation\n\npip:\n\n```pip install pypsa```\n\nconda/mamba:\n\n```conda install -c conda-forge pypsa```\n\nAdditionally, install a solver.\n\n## Usage\n\n```py\nimport pypsa\n\n# create a new network\nn = pypsa.Network()\nn.add(""Bus"", ""mybus"")\nn.add(""Load"", ""myload"", bus=""mybus"", p_set=100)\nn.add(""Generator"", ""mygen"", bus=""mybus"", p_nom=100, marginal_cost=20)\n\n# load an example network\nn = pypsa.examples.ac_dc_meshed()\n\n# run the optimisation\nn.optimize()\n\n# plot results\nn.generators_t.p.plot()\nn.plot()\n\n# get statistics\nn.statistics()\nn.statistics.energy_balance()\n```\n\nThere are [more extensive\nexamples](https://pypsa.readthedocs.io/en/latest/examples-basic.html) available\nas [Jupyter notebooks](https://jupyter.org/). They are also described in the\n[doc/examples.rst](doc/examples.rst) and are available as Python scripts in\n[examples/](examples/). A further dispatch sketch over several snapshots appears further below.\n\n## Screenshots\n\n[PyPSA-Eur](https://github.com/PyPSA/pypsa-eur) optimising capacities of\ngeneration, storage and transmission lines (9% line volume expansion allowed)\nfor a 95% reduction in CO2 emissions in Europe compared to 1990 levels.\n\n![image](doc/img/elec_s_256_lv1.09_Co2L-3H.png)\n\n[SciGRID model](https://power.scigrid.de/) simulating the German power system\nfor 2015.\n\n![image](doc/img/stacked-gen_and_storage-scigrid.png)\n\n![image](doc/img/lmp_and_line-loading.png)\n\n## Dependencies\n\nPyPSA is written and tested to be compatible with Python 3.7 and above.\nThe last release supporting Python 2.7 was PyPSA 0.15.0.\n\nIt leans heavily on the following Python packages:\n\n- [pandas](http://pandas.pydata.org/) for storing data about\n components and time series\n- [numpy](http://www.numpy.org/) and [scipy](http://scipy.org/) for\n calculations, such as linear algebra and sparse matrix calculations\n- [networkx](https://networkx.github.io/) for some network\n calculations\n- [matplotlib](https://matplotlib.org/) for static plotting\n- [linopy](https://github.com/PyPSA/linopy) for preparing optimisation problems\n (currently only linear and mixed integer linear optimisation)\n- [cartopy](https://scitools.org.uk/cartopy) for plotting the\n baselayer map\n- [pytest](http://pytest.org/) for unit testing\n- [logging](https://docs.python.org/3/library/logging.html) for\n managing messages\n\nThe optimisation uses interface libraries like `linopy` which are\nindependent of the preferred solver. You can use e.g. one of the free\nsolvers [GLPK](https://www.gnu.org/software/glpk/) and\n[CLP/CBC](https://github.com/coin-or/Cbc/) or the commercial solver\n[Gurobi](http://www.gurobi.com/) for which free academic licenses are\navailable.\n\n## Documentation\n\nPlease check the [documentation](https://pypsa.readthedocs.io).\n\n
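As a small extension of the usage example above, the sketch below dispatches two generators against a time-varying load over several snapshots. The component names and numbers are illustrative, and a solver (e.g. GLPK or HiGHS) must be installed:

```python
import pypsa

n = pypsa.Network()
n.set_snapshots(range(3))  # three illustrative snapshots

n.add(""Bus"", ""mybus"")
n.add(""Load"", ""myload"", bus=""mybus"", p_set=[100, 120, 90])
n.add(""Generator"", ""cheap"", bus=""mybus"", p_nom=80, marginal_cost=10)
n.add(""Generator"", ""peaker"", bus=""mybus"", p_nom=100, marginal_cost=60)

n.optimize()             # least-cost dispatch across all snapshots
print(n.generators_t.p)  # dispatch per generator and snapshot
```

The cheap generator runs at its 80 MW limit whenever demand exceeds it, and the peaker covers the remainder.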
## Contributing and Support\n\nWe strongly welcome anyone interested in contributing to this project. If you have any ideas, suggestions or encounter problems, feel invited to file issues or make pull requests on GitHub.\n\n- In case of code-related **questions**, please post on [Stack Overflow](https://stackoverflow.com/questions/tagged/pypsa).\n- For non-programming related and more general questions please refer to the [mailing list](https://groups.google.com/group/pypsa).\n- To **discuss** with other PyPSA users, organise projects, share news, and get in touch with the community you can use the [discord server](https://discord.gg/AnuJBk23FU).\n- For **bugs and feature requests**, please use the [PyPSA Github Issues page](https://github.com/PyPSA/PyPSA/issues).\n- For **troubleshooting**, please check the [troubleshooting](https://pypsa.readthedocs.io/en/latest/troubleshooting.html) in the documentation.\n\n## Code of Conduct\n\nPlease respect our [code of conduct](CODE_OF_CONDUCT.md).\n\n## Citing PyPSA\n\nIf you use PyPSA for your research, we would appreciate it if you would\ncite the following paper:\n\n- T. Brown, J. H\xc3\xb6rsch, D. Schlachtberger, [PyPSA: Python for Power\n System Analysis](https://arxiv.org/abs/1707.09913), 2018, [Journal\n of Open Research\n Software](https://openresearchsoftware.metajnl.com/), 6(1),\n [arXiv:1707.09913](https://arxiv.org/abs/1707.09913),\n [DOI:10.5334/jors.188](https://doi.org/10.5334/jors.188)\n\nPlease use the following BibTeX:\n\n @article{PyPSA,\n author = {T. Brown and J. H\\""orsch and D. Schlachtberger},\n title = {{PyPSA: Python for Power System Analysis}},\n journal = {Journal of Open Research Software},\n volume = {6},\n issue = {1},\n number = {4},\n year = {2018},\n eprint = {1707.09913},\n url = {https://doi.org/10.5334/jors.188},\n doi = {10.5334/jors.188}\n }\n\nIf you want to cite a specific PyPSA version, each release of PyPSA is\nstored on [Zenodo](https://zenodo.org/) with a release-specific DOI. 
The\nrelease-specific DOIs can be found linked from the overall PyPSA Zenodo\nDOI for Version 0.17.1 and onwards:\n\n[![image](https://zenodo.org/badge/DOI/10.5281/zenodo.3946412.svg)](https://doi.org/10.5281/zenodo.3946412)\n\nor from the overall PyPSA Zenodo DOI for Versions up to 0.17.0:\n\n[![image](https://zenodo.org/badge/DOI/10.5281/zenodo.786605.svg)](https://doi.org/10.5281/zenodo.786605)\n\n# Licence\n\nCopyright 2015-2023 [PyPSA\nDevelopers](https://pypsa.readthedocs.io/en/latest/developers.html)\n\nPyPSA is licensed under the open source [MIT\nLicense](https://github.com/PyPSA/PyPSA/blob/master/LICENSE.txt).\n'",",https://doi.org/10.5281/zenodo.3946412,https://arxiv.org/abs/1707.09913,https://arxiv.org/abs/1707.09913,https://doi.org/10.5334/jors.188,https://doi.org/10.5334/jors.188,https://zenodo.org/,https://doi.org/10.5281/zenodo.3946412,https://doi.org/10.5281/zenodo.786605","2016/01/11, 09:04:18",2844,MIT,763,2314,"2023/10/25, 15:08:53",84,450,666,257,0,16,2.0,0.7764768493879723,"2023/09/30, 06:48:17",v0.25.2,0,58,false,,true,false,"cshearer1977/EnergySystemModelling,maribjorn/pypsa-earth-bo,zhizhiyuyu/pypsa-tide,centrefornetzero/pypsa-fes,LukasFrankenQ/pypsa-fes,markushal/pypsa-reserves-dashboard,BishtArvind/pypsa-meets-earth,LaraNonino/Just_CILlin,alfiyaks/pypsa-kz-my,ktehranchi/ScenarioSelection_PCS,ricnogfer/pypsatopo,PyPSA/pypsa-usa,pypsa-meets-earth/pypsa-earth-lit,fneum/data-science-for-esm,drifter089/pypsaLit-container,clairehalloran/GeoHeat-GB,Tomkourou/feo-viz,2050plus/2050plus,yerbol-akhmetov/pypsa-kaz,ESI-FAR/hydrogen-bypass-example,BKashfutdinov/pypsa_check,nightswall/PyPSA-Simulations,cgegy5/dyryrwyryr,patrik-bartak/L2RPN-Delft-2023-Team-Conceptual,koen-vg/kjernekraft-noreg,GeloSalva/Visayas-Transmission-Simulator-VTS-,drifter089/pypsa-workflow,GeloSalva/Project-A.I-ssurance,LukasBNordentoft/pypsa_netview,pz-max/pypsa-earth-test,NOWUM/smartdso,toznyigit/data-simulator,drifter089/streamlit_vis,PlanQK/EnergyUnitCommitment,tZ3ma/tessif-phd,tZ3ma/tessif-pypsa-0-19-3,fneum/highs-pypsa-progress,SEL-Columbia/PreREISE,patryk-kubiczek/pypsa-pl,NOWUM/dmas,Pikugcp22/https-github.com-pypsa-meets-earth-pypsa-earth,Hasal-Pallewaththe/optimization_of_mathematical_problems_and_systems,ekatef/pypsa-earth,Breakthrough-Energy/PostREISE,Breakthrough-Energy/PreREISE,aeonu/single-line-diagram,lod531/smoothing,pypsa-meets-africa/pypsa-africa-archived,fonsp/grid-analysis,pascaluetz/adversarial-attack-modeling-on-energy-system-design,linz94/birdcatdetection,carlosfv92/pypsa-earth-BO,mikelperez01/Mike,eym55/power_grid_sim,LukasFrankenQ/easter-bush-energy,Breakthrough-Energy/PowerSimData,enlite-ai/maze-l2rpn-2021-submission,Adrianonsare/Geospatial-PowerAnalysis,pypsa-meets-earth/pypsa-earth,instrat-pl/pypsa-pl,openego/eGon-data,maxnutz/res_aut,montefesp/EPIPPy,dgusain1/energysim,eh-tien/L2RPN_submission_simple,PyPSA/pypsa-eur,jnmelchorg/pyensys,martynvdijke/Stargazer,Luiz-Phillip/State_Estimation,AncillaryServicesAcquisitionModel/ASAM,ChillkroeteTTS/ccf-analysis-of-two-coupled-ou-processes,BDonnot/ChroniX2Grid,ik4o5/docker,PyPSA/whobs-server,lukasnacken/pypsa-sec-mga,daanbakker1995/PyPSAServerApi,FRESNA/netallocation,mbardwell/intelligent-simulation-handler,mathildebadoual/training,martynvandijke/Stargazer,openego/eGo,openego/eTraGo,openego/ding0,openego/ego.powerflow,openego/eDisGo",,https://github.com/PyPSA,www.pypsa.org,,,,https://avatars.githubusercontent.com/u/32890768?v=4,,, oemof,"Open Energy Modelling Framework - A Python toolbox for energy system modeling and optimization. 
A community driven, modular, flexible and generic software project.",oemof,https://github.com/oemof/oemof.git,github,,Energy Modeling and Optimization,"2023/08/18, 11:22:18",80,38,25,true,Python,oemof community,oemof,"Python,Batchfile",https://oemof.org,"b""=======================================\nOpen Energy Modelling Framework (oemof)\n=======================================\n\n.. figure:: https://raw.githubusercontent.com/oemof/oemof/master/logo/logo_oemof_big.svg\n\nThe Open Energy Modelling Framework (oemof) is a Python toolbox for energy system modelling and optimisation.\n\nThe oemof project aims to be a loose organisational frame for tools in the wide field of (energy) system modelling.\nEvery project is managed by their own developer team but we share some developer and design rules to make it easier to understand each other's tools. All project libraries are free software licenced under the MIT license.\n\nAll projects are in different stages of implementation, some even may not have a stable release, but all projects are open to be joined by interested people.\nWe do not belong to a specific institution and everybody is free to join the developer teams and will have the same rights.\nThere is no higher decision level.\n\noemof community\n===============\n`This repository `_ is also used to organise everything for the oemof community.\n\n- Webconference dates\n- Real life meetings\n- Website and Mailinglist\n- General communication\n\nYou can find recent topics of discussion in the `issues `_.\n\nReal life meetings\n------------------\nThe oemof community meets in person on a regular basis. Find the latest information on the next meeting(s) on this wiki page: https://github.com/oemof/oemof/wiki\n""",,"2016/04/20, 12:33:33",2744,MIT,20,147,"2023/08/18, 11:22:21",31,14,82,13,68,2,9.4,0.6141732283464567,"2022/11/09, 16:37:07",v1.0.0,0,11,false,,false,true,"tZ3ma/tessif-examples,UU-ER/EHUB-Py_Training,moritz-reuter/ESEM-EE,dpinney/wiires,rl-institut/oemof-B3,oemof/marketlib,znes/oemof-barbados,oemof-heat/educational_project,brizett/reegis_hp,znes/HESYSOPT,oemof-heat/energy-system-planning-workshop,greco-project/pvcompare,FlexiGIS/FlexiGIS,znes/angus-scenarios,jakob-wo/CoolProp_Examples,modex-flexmex/oemof-flexmex,rl-institut/smooth,oemof/oemof-thermal,oemof/oemof-db,open-fred/cli,rl-institut/multi-vector-simulator,smartie2076/oemof_workshop,rl-institut/workshop,rl-institut/offgridders,rl-institut/WAM_APP_vdi,oemof-heat/released_examples,oemof/oemof-tabular,rl-institut/WAM_APP_stemp_abw,uvchik/reegis_phd,gplssm/elesplan_m_EMP-2018,reegis/berlin_hp,windnode/WindNODE_ABW,reegis/deflex,rl-institut/appBBB,rl-institut/smenos,windnode/WindNODE_KWUM,gplssm/europepstrans,reegis/reegis",,https://github.com/oemof,https://oemof.org,Germany,,,https://avatars.githubusercontent.com/u/8503379?v=4,,, pyGRETA,Python Generator of REnewable Time series and mAps: a tool that generates high-resolution potential maps and time series for user-defined regions within the globe.,tum-ens,https://github.com/tum-ens/pyGRETA.git,github,"renewable-energy,renewable-timeseries,potentials,wind,pv,csp,gis,high-resolution",Energy Modeling and Optimization,"2022/03/23, 10:38:57",34,0,8,false,Python,Chair of Renewable and Sustainable Energy Systems,tum-ens,Python,,"b'
\n\n[![Documentation Status](https://readthedocs.org/projects/pygreta/badge/?version=latest)](http://pyGRETA.readthedocs.io/en/latest/?badge=latest)\n[![DOI](https://zenodo.org/badge/174577484.svg)](https://zenodo.org/badge/latestdoi/174577484)\n[![Code style: black](https://img.shields.io/badge/code%20style-black-000000.svg)](https://github.com/psf/black)\n[![License: GPL v3](https://img.shields.io/badge/License-GPLv3-blue.svg)](https://www.gnu.org/licenses/gpl-3.0)\n[![All Contributors](https://img.shields.io/badge/all_contributors-8-orange.svg?style=flat-square)](#contributors)\n\n**py**thon **G**enerator of **RE**newable **T**ime series and m**A**ps: a tool that generates high-resolution potential maps and time series for user-defined regions within the globe.\n\n## Features\n\n* Generation of potential maps and time series for user-defined regions within the globe\n* Modeled technologies: onshore wind, offshore wind, PV, CSP (user-defined technology characteristics)\n* Use of MERRA-2 reanalysis data, with the option to detect and correct outliers\n* High resolution potential taking into account the land use suitability/availability, topography, bathymetry, slope, distance to urban areas, etc.\n* Statistical reports with summaries (available area, maximum capacity, maximum energy output, etc.) for each user-defined region\n* Generation of several time series for each technology and region, based on user\'s preferences\n* Possibility to combine the time series into one using linear regression to match given full-load hours and temporal fluctuations\n\n## Applications\n\nThis code is useful if:\n\n* You want to estimate the theoretical and/or technical potential of an area, which you can define through a shapefile\n* You want to obtain high resolution maps\n* You want to define your own technology characteristics\n* You want to generate time series for an area after excluding parts of it that are not suitable for renewable power plants\n* You want to generate multiple time series for the same area (best site, upper 10%, median, lower 25%, etc.)\n* You want to match historical capacity factors of countries from the IRENA database\n\nYou do not need to use the code (*but you can*) if:\n\n* You do not need to exclude unsuitable areas - use the [Global Solar Atlas](https://globalsolaratlas.info/) or [Global Wind Atlas](https://globalwindatlas.info/)\n* You only need time series for specific points - use other webtools such as [Renewables.ninja](https://www.renewables.ninja/)\n* You only need time series for administrative divisions (countries, NUTS-2, etc.), for which such data is readily available - see [Renewables.ninja](https://www.renewables.ninja/) or [EMHIRES](https://ec.europa.eu/jrc/en/scientific-tool/emhires)\n\n## Outputs\n\nPotential maps for solar PV and onshore wind in Australia, using weather data for 2015:\n
*(potential map images omitted)*
\n \n## Contributors \xe2\x9c\xa8\n\nThanks goes to these wonderful people ([emoji key](https://allcontributors.org/docs/en/emoji-key)):\n\n- kais-siala \xf0\x9f\x92\xac \xf0\x9f\x90\x9b \xf0\x9f\x92\xbb \xf0\x9f\x93\x96 \xf0\x9f\xa4\x94 \xf0\x9f\x9a\xa7 \xf0\x9f\x91\x80 \xe2\x9a\xa0\xef\xb8\x8f \xf0\x9f\x93\xa2\n- HoussameH \xf0\x9f\x92\xac \xf0\x9f\x92\xbb \xf0\x9f\x93\x96\n- Pierre Grimaud \xf0\x9f\x90\x9b\n- thushara2020 \xf0\x9f\x91\x80\n- lodersky \xf0\x9f\x93\x96 \xf0\x9f\x92\xbb \xf0\x9f\x91\x80\n- sonercandas \xf0\x9f\x93\x96\n- patrick-buchenberg \xf0\x9f\x93\xa6\n- molarana \xf0\x9f\x8e\xa8
\n\nThis project follows the [all-contributors](https://github.com/all-contributors/all-contributors) specification. Contributions of any kind welcome!\n\n## Please cite as:\n\nKais Siala, & Houssame Houmy. (2020, June 1). tum-ens/pyGRETA: python Generator of REnewable Time series and mAps (Version v1.1.0). Zenodo. https://doi.org/10.5281/zenodo.3727416\n'",",https://zenodo.org/badge/latestdoi/174577484,https://doi.org/10.5281/zenodo.3727416\n","2019/03/08, 17:08:33",1692,GPL-3.0,0,852,"2021/10/11, 10:38:26",10,127,179,0,744,1,0.2,0.5329249617151608,"2022/04/20, 09:45:25",v2.1,0,7,false,,false,false,,,https://github.com/tum-ens,http://www.ens.ei.tum.de,Technical University of Munich,,,https://avatars.githubusercontent.com/u/8157454?v=4,,, RESKit,A toolkit to help generate renewable energy generation time series for energy systems analysis.,FZJ-IEK3-VSA,https://github.com/FZJ-IEK3-VSA/RESKit.git,github,,Energy Modeling and Optimization,"2021/08/12, 14:00:00",23,0,5,false,Python,FZJ-IEK3,FZJ-IEK3-VSA,Python,,"b'\n\n# RESKit - **R**enewable **E**nergy **S**imulation tool**kit** for Python\n\nRESKit aids with the broad-scale simulation of renewable energy systems, primarily for the purpose of input generation to Energy System Design Models. Simulation tools currently exist for onshore and offshore wind turbines, as well as for solar PV systems, in addition to general weather-data manipulation tools. Simulations are performed in the context of singular units; however, high computational performance is nevertheless maintained. As a result, this tool allows for the simulation of millions of individual turbines and PV systems in a matter of minutes (on the right hardware).\n\n## Features\n\n- High performance unit-level wind turbine and PV module simulations\n - Can generate synthetic wind turbine power curves\n - Access to all PV modules in the most recent databases from Sandia and the CEC\n- Configurable to make use of climate model datasets\n- Flexible & modular function designs\n\n## Installation\n\nThe primary dependencies of RESKit are:\n\n1. netCDF4>=1.5.3\n2. xarray\n3. PVLib>=0.7.2\n4. gdal>2.0.0,<3.0.0\n5. GeoKit >= 1.2.4\n\nIf you can install these modules on your own, then the reskit module should be easily installable with:\n\n```\npip install git+https://github.com/FZJ-IEK3-VSA/reskit.git#egg=reskit\n```\n\nIf, on the other hand, you prefer an automated installation using Anaconda, then you should be able to follow these steps:\n\n1. First clone a local copy of the repository to your computer, and move into the created directory\n\n```\ngit clone https://github.com/FZJ-IEK3-VSA/reskit.git\ncd reskit\n```\n\n1. (Alternative) If you want to use the \'dev\' branch (or another branch) then use:\n\n```\ngit checkout dev\n```\n\n2. RESKit should be installable to a new environment with:\n\n```\nconda env create --file requirements.yml\n```\n\n2. (Alternative) Or into an existing environment with:\n\n```\nconda env update --file requirements.yml -n \n```\n\n2. (Alternative) If you want to install RESKit in editable mode, and also with jupyter notebook and with testing functionalities, use:\n\n```\nconda env create --file requirements-dev.yml\n```\n\n## Examples\n\nSee the [Examples page](Examples/)\n\n## Docker\n\nWe are looking into making RESKit accessible in a docker container. 
Check back later for more info!\n\n## Citation\n\nIf you decide to use RESKit anywhere in a published work related to wind energy, please kindly cite us using the following:\n\n```bibtex\n@article{RybergWind2019,\n author = {Ryberg, David Severin and Caglayan, Dilara Gulcin and Schmitt, Sabrina and Lin{\\ss}en, Jochen and Stolten, Detlef and Robinius, Martin},\n doi = {10.1016/j.energy.2019.06.052},\n issn = {03605442},\n journal = {Energy},\n month = {sep},\n pages = {1222--1238},\n title = {{The future of European onshore wind energy potential: Detailed distribution and simulation of advanced turbine designs}},\n url = {https://linkinghub.elsevier.com/retrieve/pii/S0360544219311818},\n volume = {182},\n year = {2019}\n}\n\n```\n\n## License\n\nMIT License\n\nCopyright (c) 2019 David Severin Ryberg (FZJ IEK-3), Heidi Heinrichs (FZJ IEK-3), Martin Robinius (FZJ IEK-3), Detlef Stolten (FZJ IEK-3)\n\nYou should have received a copy of the MIT License along with this program. \nIf not, see \n\n## About Us\n\n\n\nWe are the [Process and Systems Analysis](http://www.fz-juelich.de/iek/iek-3/EN/Forschung/_Process-and-System-Analysis/_node.html) department at the [Institute of Energy and Climate Research: Techno-economic Systems Analysis (IEK-3)](http://www.fz-juelich.de/iek/iek-3/EN/Home/home_node.html) belonging to the Forschungszentrum J\xc3\xbclich. Our interdisciplinary department\'s research focuses on energy-related process and systems analyses. Data searches and system simulations are used to determine energy and mass balances, as well as to evaluate performance, emissions and costs of energy systems. The results are used for performing comparative assessment studies between the various systems. Our current priorities include the development of energy strategies, in accordance with the German Federal Government\xe2\x80\x99s greenhouse gas reduction targets, by designing new infrastructures for sustainable and secure energy supply chains and by conducting cost analysis studies for integrating new technologies into future energy market frameworks.\n\n## Acknowledgment\n\nThis work was supported by the Helmholtz Association under the Joint Initiative [""Energy System 2050 \xe2\x80\x93 A Contribution of the Research Field Energy""](https://www.helmholtz.de/en/research/energy/energy_system_2050/).\n\n\n'",,"2020/01/03, 14:15:33",1391,MIT,0,449,"2020/07/09, 09:06:28",6,10,22,0,1203,2,0.5,0.11764705882352944,"2021/08/13, 06:17:41",v0.3.0,0,10,false,,false,false,,,https://github.com/FZJ-IEK3-VSA,https://www.fz-juelich.de/iek/iek-3/EN/Home/home_node.html,Forschungszentrum Jülich,,,https://avatars.githubusercontent.com/u/28654423?v=4,,, PowSyBl,"An open source framework written in Java, that makes it easy to write complex software for power systems simulations and analysis.",powsybl,https://github.com/powsybl/powsybl-core.git,github,"powsybl,power-systems,power-system-simulation,energy-system,modular,extensible,cim,java,groovy",Energy Modeling and Optimization,"2023/10/25, 09:28:22",107,52,22,true,Java,PowSyBl,powsybl,"Java,Groovy,Shell,Batchfile,JavaScript",https://www.powsybl.org,"b'# PowSyBl Core\n\n[![CII Best Practices](https://bestpractices.coreinfrastructure.org/projects/4795/badge)](https://bestpractices.coreinfrastructure.org/projects/4795)\n[![Actions Status](https://github.com/powsybl/powsybl-core/workflows/CI/badge.svg)](https://github.com/powsybl/powsybl-core/actions)\n[![Coverage 
Status](https://sonarcloud.io/api/project_badges/measure?project=com.powsybl%3Apowsybl-core&metric=coverage)](https://sonarcloud.io/component_measures?id=com.powsybl%3Apowsybl-core&metric=coverage)\n[![Quality Gate](https://sonarcloud.io/api/project_badges/measure?project=com.powsybl%3Apowsybl-core&metric=alert_status)](https://sonarcloud.io/dashboard?id=com.powsybl%3Apowsybl-core)\n[![MPL-2.0 License](https://img.shields.io/badge/license-MPL_2.0-blue.svg)](https://www.mozilla.org/en-US/MPL/2.0/)\n[![Join the community on Spectrum](https://withspectrum.github.io/badge/badge.svg)](https://spectrum.chat/powsybl)\n[![Slack](https://img.shields.io/badge/slack-powsybl-blueviolet.svg?logo=slack)](https://join.slack.com/t/powsybl/shared_invite/zt-rzvbuzjk-nxi0boim1RKPS5PjieI0rA)\n[![Javadocs](https://www.javadoc.io/badge/com.powsybl/powsybl-core.svg?color=blue)](https://www.javadoc.io/doc/com.powsybl/powsybl-core)\n\nPowSyBl (**Pow**er **Sy**stem **Bl**ocks) is an open source framework written in Java, that makes it easy to write complex\nsoftware for power systems\xe2\x80\x99 simulations and analysis. Its modular approach allows developers to extend or customize its\nfeatures.\n\nPowSyBl is part of the LF Energy Foundation, a project of The Linux Foundation that supports open source innovation projects\nwithin the energy and electricity sectors.\n\n
\n\nRead more at https://www.powsybl.org !\n\nThis project and everyone participating in it is governed by the [PowSyBl Code of Conduct](https://github.com/powsybl/.github/blob/main/CODE_OF_CONDUCT.md).\nBy participating, you are expected to uphold this code. Please report unacceptable behavior to [powsybl-tsc@lists.lfenergy.org](mailto:powsybl-tsc@lists.lfenergy.org).\n\n## PowSyBl vs PowSyBl Core\n\nThis document describes how to build the code of PowSyBl Core. If you just want to run PowSyBl demos, please visit\nhttps://www.powsybl.org/ where downloads will be available soon. If you want guidance on how to start building your own\napplication based on PowSyBl, please visit the http://www.powsybl.org/docs/tutorials/ page.\n\nThe PowSyBl Core project is not a standalone project. Read on to learn how to modify the core code, be it for fun, for\ndiagnosing bugs, for improving your understanding of the framework, or for preparing pull requests to suggest improvements!\nPowSyBl Core provides library code to build all kinds of applications for power systems: a complete and extendable grid\nmodel, support for common exchange formats, APIs for power simulations and analysis, and support for local or distributed\ncomputations. For deployment, powsybl-core also provides iTools, a tool to build cross-platform integrated command-line\napplications. To build cross-platform graphical applications, please visit the PowSyBl GSE repository page at\nhttps://github.com/powsybl/powsybl-gse.\n\n
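For a quick taste of these library APIs without setting up a Java project, the [pypowsybl](https://github.com/powsybl/pypowsybl) bindings expose the same grid model and simulators from Python. A minimal sketch, assuming `pip install pypowsybl` and current pypowsybl naming (this example is not part of this repository):\n\n```python\n# Minimal sketch: load a bundled IEEE 14-bus test network and run an AC load\n# flow on the PowSyBl engine (assumes the pypowsybl package is installed).\nimport pypowsybl as pp\n\nnetwork = pp.network.create_ieee14()\nresults = pp.loadflow.run_ac(network)\nprint(results[0].status)  # convergence status of the main synchronous component\n```\n\n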
## Environment requirements\n\nThe powsybl-core project is fully written in Java, so you only need a few requirements:\n- JDK *(17 or greater)*\n- Maven *(3.8.1 or greater)* - you could use the embedded maven wrapper instead if you prefer (see [Using Maven Wrapper](#using-maven-wrapper))\n\nTo run all the tests, simply launch the following command from the root of the repository:\n```\n$> mvn package\n```\n\nModify some existing tests or create your own new tests to experiment with the framework! If it suits you better, import\nthe project in an IDE and use the IDE to launch your own main classes. If you know Java and Maven and want to do things\nmanually, you can also use Maven directly to compute the classpath of all the project jars and run anything you want with it.\n\nRead [Contributing.md](https://github.com/powsybl/.github/blob/main/CONTRIBUTING.md) for more in-depth explanations\non how to run code.\n\nRead [Install](#install) to generate an installed iTools distribution, a standalone external folder that contains all\nthe built objects required to run powsybl programs.\n\n## Install\nAn iTools distribution can be generated and installed. The installation is a standalone external folder that contains all\nthe built objects required to run powsybl programs through the itools command-line interface. This repository contains\nthe `install.sh` script to do so easily. By default, the `install.sh` script will compile code and copy the resulting iTools\ndistribution to the install folder.\n```\n$> ./install.sh\n```\n\nA more detailed description of the install.sh script options follows:\n\n### Targets\n\n| Target | Description |\n| ------ | ----------- |\n| clean | Clean modules |\n| compile | Compile modules |\n| package | Compile modules and create a distributable package |\n| __install__ | __Compile modules and install them__ |\n| docs | Generate the documentation (Javadoc) |\n| help | Display this help |\n\n### Options\n\nThe install.sh script options are saved in the *install.cfg* configuration file. This configuration file is loaded and\nupdated each time you use the `install.sh` script.\n\n#### Global options\n\n| Option | Description | Default value |\n| ------ | ----------- | ------------- |\n| --help | Display this help | |\n| --prefix | Set the installation directory | $HOME/powsybl |\n| --mvn | Set the maven command to use | mvn | \n\n### Default configuration file\n```\n# -- Global options --\npowsybl_prefix=$HOME/powsybl\npowsybl_mvn=mvn\n```\n\n## Using Maven Wrapper\nIf you don\'t have a proper Maven installation, you could use the [Apache Maven Wrapper](https://maven.apache.org/wrapper/)\nscripts provided. They will download a compatible maven distribution and use it automatically.\n\n### Configuration\n#### Configure the access to the maven distributions\nIn order to work properly, Maven Wrapper needs to download 2 artifacts: the maven distribution and the maven wrapper\ndistribution. By default, these are downloaded from the online Maven repository, but you could use an internal repository instead.\n\n##### Using a Maven Repository Manager\nIf you prefer to use an internal Maven Repository Manager instead of retrieving the artifacts from the internet, you should define the following variable in your environment:\n- `MVNW_REPOURL`: the URL to your repository manager (for instance `https://my_server/repository/maven-public`)\n\nNote that if you need to use this variable, it must be set for **each maven command**. Otherwise, the Maven Wrapper will try to\nretrieve the maven distribution from the online Maven repository (even if one was already downloaded from another location).\n\n##### Using a proxy to access the Internet\nIf you don\'t use an internal Maven Repository, and need to use a proxy to access the Internet, you should:\n1. configure the proxy in your terminal (on Linux/MacOS, you can do it via the `http_proxy` and `https_proxy` environment variables).\nThis is needed to download the Maven Wrapper distribution;\n\n2. 
execute **at least once** the following command:\n```shell\n./mvnw -DproxyHost=XXX -DproxyPort=XXX -Dhttp.proxyUser=XXX -Dhttp.proxyPassword=XXX -Djdk.http.auth.tunneling.disabledSchemes= clean\n```\nNotes:\n- The 4 `XXX` occurrences should be replaced with your configuration;\n- The `-Djdk.http.auth.tunneling.disabledSchemes=` option should be left empty;\n- Windows users should use `mvnw.cmd` instead of `./mvnw`.\n\nThis second step is required to download the Maven distribution.\n\nOnce both distributions are retrieved, the proxy configuration isn\'t needed anymore to use the `./mvnw` or `mvnw.cmd` commands.\n\n\n##### Checking your access configuration\nYou can check your configuration with the following command:\n```shell\n./mvnw -version\n```\n\nIf you encounter any problem, you can set `MVNW_VERBOSE=true` and relaunch the command to get\nfurther information.\n\n#### Configuring `install.sh` to use maven wrapper\nTo make `install.sh` use the Maven Wrapper, you need to configure it with the `--mvn` option:\n```shell\n./install.sh clean --mvn ./mvnw\n```\n\nYou can revert this configuration with the following command:\n```shell\n./install.sh clean --mvn mvn\n```\n\n### Usage\nOnce the configuration is done, you just need to use `./mvnw` instead of `mvn` in your commands.\n'",,"2017/09/29, 14:51:18",2217,MPL-2.0,308,2975,"2023/10/25, 06:11:56",231,2235,2520,428,0,49,1.7,0.7593429158110883,"2023/10/06, 13:53:42",v6.0.1,12,75,false,,false,false,"farao-community/gridcapa-core-cc,farao-community/farao-flow-decomposition,powsybl/powsybl-network-store,powsybl/powsybl-case-server,gridsuite/dynamic-simulation-server,farao-community/gridcapa-cse-valid,powsybl/powsybl-network-area-diagram,nvoskos/powsybl-metrix,powsybl/powsybl-gse,BenoitJeanson/powsybl-power-grid-model,nicolas-pierr/dynawaltz-example,farao-community/gridcapa-swe,gridsuite/cgmes-gl-server,farao-community/farao-dependencies,farao-community/farao-legacy,farao-community/gridcapa-core-valid,zamarrenolm/network-viz,farao-community/gridcapa-cse,gridsuite/dynamic-mapping-server,itesla/network-diff-core,cesar-luis-galli/powsybl-incubator,powsybl/powsybl-tutorials,powsybl/powsybl-entsoe,powsybl/powsybl-eurostag,powsybl/powsybl-metrix,farao-community/farao-virtual-hubs,farao-community/farao-dichotomy,JLLeitschuh/ipst,powsybl/pypowsybl,itesla/network-diff-study-server,itesla/network-diff-server,murgeyseb/powsybl-flow-transfer,gridsuite/dependencies,gridsuite/balances-adjustment-server,gridsuite/loadflow-server,murgeyseb/ramuh-grid-overlay-provider,gridsuite/network-modification-server,gridsuite/odre-server,powsybl/powsybl-afs,mathbagu/powsybl-live-basics,powsybl/powsybl-diagram,powsybl/powsybl-open-loadflow,powsybl/powsybl-dynawo,itesla/ipst-dynamic-database,itesla/ipst-dynamic-simulation,powsybl/powsybl-incubator,farao-community/farao-core,powsybl/powsybl-core,itesla/ipst,itesla/PSM,murgeyseb/pypsa-loadflow,itesla/CGMES",,https://github.com/powsybl,https://www.powsybl.org,,,,https://avatars.githubusercontent.com/u/29916668?v=4,,, PowSyBl Open Load Flow,An open source implementation of the load flow API that can be found in PowSyBl Core. 
It supports AC Newton-Raphson and linear DC calculation methods.,powsybl,https://github.com/powsybl/powsybl-open-loadflow.git,github,"powsybl,klu,power-systems,power-system-simulation,energy-system,java,load-flow,power-flow,loadflow,powerflow",Energy Modeling and Optimization,"2023/10/23, 20:39:12",33,37,17,true,Java,PowSyBl,powsybl,Java,,"b'# PowSyBl Open Load Flow\n\n[![Actions Status](https://github.com/powsybl/powsybl-open-loadflow/workflows/CI/badge.svg)](https://github.com/powsybl/powsybl-open-loadflow/actions)\n[![Coverage Status](https://sonarcloud.io/api/project_badges/measure?project=com.powsybl%3Apowsybl-open-loadflow&metric=coverage)](https://sonarcloud.io/component_measures?id=com.powsybl%3Apowsybl-open-loadflow&metric=coverage)\n[![Quality Gate](https://sonarcloud.io/api/project_badges/measure?project=com.powsybl%3Apowsybl-open-loadflow&metric=alert_status)](https://sonarcloud.io/dashboard?id=com.powsybl%3Apowsybl-open-loadflow)\n[![MPL-2.0 License](https://img.shields.io/badge/license-MPL_2.0-blue.svg)](https://www.mozilla.org/en-US/MPL/2.0/)\n[![Slack](https://img.shields.io/badge/slack-powsybl-blueviolet.svg?logo=slack)](https://join.slack.com/t/powsybl/shared_invite/zt-rzvbuzjk-nxi0boim1RKPS5PjieI0rA)\n\nPowSyBl (**Pow**er **Sy**stem **Bl**ocks) is an open source library written in Java, that makes it easy to write complex\nsoftware for power systems\xe2\x80\x99 simulations and analysis. Its modular approach allows developers to extend or customize its\nfeatures.\n\nPowSyBl is part of the LF Energy Foundation, a project of The Linux Foundation that supports open source innovation projects\nwithin the energy and electricity sectors.\n\n
\n\nRead more at https://www.powsybl.org !\n\nThis project and everyone participating in it is governed by the [PowSyBl Code of Conduct](https://github.com/powsybl/.github/blob/main/CODE_OF_CONDUCT.md).\nBy participating, you are expected to uphold this code. Please report unacceptable behavior to [powsybl-tsc@lists.lfenergy.org](mailto:powsybl-tsc@lists.lfenergy.org).\n\n## PowSyBl vs PowSyBl Open Load Flow\n\nPowSyBl Open Load Flow provides:\n- An open source implementation of the [LoadFlow API from PowSyBl Core](https://www.powsybl.org/pages/documentation/simulation/powerflow/); both DC and AC calculations are supported.\n- An open source implementation of the [SecurityAnalysis API from PowSyBl Core](https://www.powsybl.org/pages/documentation/simulation/securityanalysis/); both DC and AC calculations are supported.\n- An open source implementation of the [SensitivityAnalysis API from PowSyBl Core](https://www.powsybl.org/pages/documentation/simulation/sensitivity/); both DC and AC calculations are supported.\n\nAlmost all of the code is written in Java. It only relies on native code for the [KLU](http://faculty.cse.tamu.edu/davis/suitesparse.html) sparse linear solver. Linux, Windows and MacOS are supported.\n\n### Common features\n\nThe AC calculations are based on the full Newton-Raphson algorithm. The DC calculations are based on the direct current linear approximation. Open Load Flow relies on:\n - a fast and robust convergence, based on the [KLU](http://faculty.cse.tamu.edu/davis/suitesparse.html) sparse solver.\n - a distributed slack (on generation, on loads, or on conform loads); slack bus selection can be automatic or explicit as explained [here](https://www.powsybl.org/pages/documentation/simulation/powerflow/openlf.html#parameters).\n - support of generators\' active and reactive power limits, including support of reactive capability curves.\n - 5 starting point modes: flat, warm, only voltage angle initialization based on a DC load flow, only voltage magnitude initialization based on a specific initializer, or both voltage angle and magnitude initialization based on the two previous methods.\n - support of non-impedant branches, including complex non-impedant sub-networks.\n - a multiple synchronous component calculation, generally linked to HVDC lines.\n\n### About controls\n\nOpen Load Flow supports:\n - a generator and static var compensator voltage remote control through PQV bus modelling. It supports any kind of shared voltage control between controllers that can be generators, static var compensators or VSC converter stations.\n - a static var compensator local voltage control involving a slope (supporting the powsybl-core extension [```VoltagePerReactivePowerControl```](https://www.powsybl.org/pages/documentation/grid/model/extensions.html)).\n - a local and remote phase control: phase tap changers can regulate active power flows or limit currents at given terminals.\n - a local and remote voltage control by transformers. It also supports shared controls between them. In case of a controlled bus that has both a voltage control by a generator and a transformer, we have decided in a first approach to discard the transformer control.\n - a local and remote voltage control by shunts. We also support shared controls between them. In case of a controlled bus that has both a voltage control by a generator and a shunt, we have decided in a first approach to discard the shunt voltage control. 
In case of a controlled bus that has both a voltage control by a transformer and a shunt, we have decided in a first approach to discard the shunt control. Several shunts on a controller bus are supported. \n - a remote reactive power control of a branch by a single generator connected to a bus.\n\n### Security analysis implementation \n\n - Network in node/breaker topology and in bus/breaker topology.\n - Contingency on branches and on shunt compensators. Note that for shunt compensators, we don\'t support a contingency on one combined with a global voltage control by shunts at this stage.\n - All kinds of operational limit violations detection on branches (permanent and temporary limits): current limits, apparent power limits, active power limits.\n - High and low voltage violations detection on buses.\n - Complex cases where the contingency leads to another synchronous component where a new resolution has to be performed are not supported at this stage.\n - The active and reactive power flows on branches, and the angle or voltage at buses, can be monitored and collected for later analysis after the base case and after each contingency.\n\n### Sensitivity analysis implementation \n\nOpen Load Flow supports both AC and DC calculations. Even if it comes from the same powsybl-core API, the calculations behind are radically different. The AC post-contingency sensitivity calculation is based on the same principles as the AC security analysis. The DC post-contingency sensitivity calculation is highly optimized and fully documented [here](https://www.powsybl.org/pages/documentation/simulation/sensitivity/openlf.html).\n\nIt supports all types of sensitivity factors that can be found in the API: \n- Variables: injection increase, phase angle shift, HVDC set point increase, and, for AC calculations only, generator, static var compensator, transformer or shunt voltage target increase.\n- Functions: the active flow or the current on a branch, and, for AC calculations only, the voltage on a bus.\n\nIt supports contingencies of type:\n- branch contingencies,\n- load and generator contingencies,\n- HVDC line contingency.\n\n## Getting started\n\nRunning a load flow with PowSyBl Open Load Flow is easy. First, let\'s start by loading an IEEE 14-bus network. We first add a few Maven \ndependencies to have access to the network model, the IEEE test networks and simple logging capabilities, respectively:\n\n```xml\n<dependency>\n <groupId>com.powsybl</groupId>\n <artifactId>powsybl-iidm-impl</artifactId>\n <version>6.0.0</version>\n</dependency>\n<dependency>\n <groupId>com.powsybl</groupId>\n <artifactId>powsybl-ieee-cdf-converter</artifactId>\n <version>6.0.0</version>\n</dependency>\n<dependency>\n <groupId>org.slf4j</groupId>\n <artifactId>slf4j-simple</artifactId>\n <version>1.7.22</version>\n</dependency>\n```\n\nWe are now able to load the IEEE 14-bus network:\n```java\nNetwork network = IeeeCdfNetworkFactory.create14();\n```\n\nAfter adding a last Maven dependency on the Open Load Flow implementation:\n```xml\n<dependency>\n <groupId>com.powsybl</groupId>\n <artifactId>powsybl-open-loadflow</artifactId>\n <version>1.3.0</version>\n</dependency>\n```\n\nWe can run the load flow with default parameters on the network:\n```java\nLoadFlow.run(network);\n```\n\nState variables and power flows computed by the load flow have been updated inside the network model, and we can for instance \nprint the buses\' voltage magnitude and angle on standard output:\n\n```java\nnetwork.getBusView().getBusStream().forEach(b -> System.out.println(b.getId() + "" "" + b.getV() + "" "" + b.getAngle()));\n```\n## Contributing to PowSyBl Open Load Flow\n\nPowSyBl Open Load Flow could support more features. The following list is not exhaustive and is an invitation to collaborate.\n\nWe can always add or improve features and implementations. 
We have thought about:\n\n- Transformer outer loop: support of transformers that have reached an extreme tap after the first Newton-Raphson iteration.\n- Shunt outer loop: support of shunts that have reached an extreme section after the first Newton-Raphson iteration.\n- Support of all types of contingencies present in the security analysis API of PowSyBl Core.\n- Improving the performance of the AC security and sensitivity analysis implementations. \n\n\nFor more details, to report bugs or if you need more features, visit our [github](https://github.com/powsybl/powsybl-open-loadflow/issues) and do not hesitate to open new issues.\n\n\n## Using Maven Wrapper\nIf you don\'t have a proper Maven installation, you could use the provided Apache Maven Wrapper scripts.\nThey will download a compatible maven distribution and use it automatically.\n\nYou can see the [Using Maven Wrapper](https://github.com/powsybl/powsybl-core/tree/main#using-maven-wrapper) section of the [powsybl-core](https://github.com/powsybl/powsybl-core) documentation if you want further information on this subject.\n'",,"2019/10/01, 13:48:47",1485,MPL-2.0,219,757,"2023/10/17, 15:05:34",28,797,859,249,8,14,3.0,0.5139056831922613,"2023/09/28, 16:35:18",v1.3.0,0,29,false,,false,false,"farao-community/gridcapa-swe-csa,farao-community/gridcapa-core-cc,powsybl/powsybl-optimizer,powsybl/powsybl-dev-tools,farao-community/farao-flow-decomposition,powsybl/powsybl-balances-adjustment,powsybl/powsybl-network-area-diagram,powsybl/powsybl-integration-test,jeandemanged/powsybl-playground,powsybl/powsybl-diagram,farao-community/gridcapa-swe,gridsuite/sensitivity-analysis-server,farao-community/farao-core,farao-community/gridcapa-cse-valid,powsybl/powsybl-entsoe,geofjamg/olf-micro-benchmark,powsybl/powsybl-starter,powsybl/powsybl-dependencies,G-PST/power-flow-exercise,geofjamg/powsybl-test,powsybl/powsybl-benchmark,farao-community/gridcapa-core-valid,farao-community/gridcapa-rao-runner,farao-community/farao-dependencies,farao-community/gridcapa-cse,cesar-luis-galli/powsybl-incubator,gridsuite/security-analysis-server,gridsuite/merge-orchestrator-server,powsybl/powsybl-tutorials,geofjamg/pypowsybl,powsybl/pypowsybl,gridsuite/dependencies,gridsuite/case-validation-server,gridsuite/balances-adjustment-server,powsybl/powsybl-distribution,gridsuite/loadflow-server,powsybl/powsybl-incubator",,https://github.com/powsybl,https://www.powsybl.org,,,,https://avatars.githubusercontent.com/u/29916668?v=4,,, matpower,"A package of M-files for solving power flow, continuation power flow and optimal power flow problems using MATLAB or Octave.",MATPOWER,https://github.com/MATPOWER/matpower.git,github,"matpower,matpower-github",Energy Modeling and Optimization,"2023/05/31, 17:11:01",344,0,72,true,MATLAB,MATPOWER Development,MATPOWER,"MATLAB,TeX,Python,Shell,Batchfile,Dockerfile,Makefile,M",https://matpower.org,"b'![MATPOWER][logo]\n\nA Power System Simulation Package for MATLAB and Octave\n-------------------------------------------------------\n\n- **MATPOWER Website** - https://matpower.org\n- **MATPOWER GitHub Project** - https://github.com/MATPOWER/matpower\n\nMATPOWER is a package of M-files for solving power flow, continuation\npower flow and optimal power flow problems using MATLAB or Octave. It\nis intended as a simulation tool for researchers and educators that is\neasy to use and modify. 
MATPOWER is designed to give the best\nperformance possible while keeping the code simple to understand and\nmodify.\n\nMATPOWER releases can be downloaded from the [MATPOWER website][1],\nand the latest stable and work-in-progress versions can always be\ndownloaded or cloned from the [MATPOWER GitHub project][2]. The\n`master` branch should always contain a stable version.\n\n\nSystem Requirements\n-------------------\n\nFor all features, including those based on the new MP-Core:\n* [MATLAB][3] version 9.1 (R2016b) or later, or\n* [GNU Octave][4] version 6.2 or later\n\nLegacy features only (from 7.1 and earlier) are also available on:\n* [MATLAB][3] version 7.5 (R2007b) or later, or\n* [GNU Octave][4] version 4 or later\n\n\nGetting MATPOWER\n----------------\n\nYou can either download an official *versioned release* or you can obtain\nthe *current development version*, which\nwe also attempt to keep stable enough for everyday use. The development\nversion includes new features and bug fixes added since the last\nversioned release.\n\n#### Versioned Releases\n\nDownload the ZIP file of the latest official versioned release from the\n[MATPOWER website][1].\n**Note:** This _does_ include the [MATPOWER Extras][7d].\n\n#### Current Development Version\n\nThere are also two options for obtaining the most recent development version\nof MATPOWER from the `master` branch on GitHub.\n**Note:** This does _not_ include the [MATPOWER Extras][7d].\n\n1. Clone the [MATPOWER repository from GitHub][2].\n *Use this option if you want to be able to easily update to the current\n development release, with the latest bug fixes and new features, using a\n simple `git pull` command, or if you want to help with testing\n or development. This requires that you have a [Git client][5] (GUI\n or command-line) installed.*\n - From the command line:\n - `git clone https://github.com/MATPOWER/matpower.git`\n - Or, from the [MATPOWER GitHub repository page][2]:\n - Click the green **Clone or download** button, then **Open in Desktop**.\n\n2. Download a ZIP file of the MATPOWER repository from GitHub.\n *Use this option if you need features or fixes introduced since\n the latest versioned release, but you do not have access to or\n are not ready to begin using Git (but don\'t be afraid to\n [give Git a try][6]).*\n - Go to the [MATPOWER GitHub repository page][2].\n - Click the green **Clone or download** button, then **Download ZIP**.\n\nSee [CONTRIBUTING.md][7] for information on how to get a local copy\nof your own MATPOWER fork, if you are interested in contributing\nyour own code or modifications.\n\n#### MATPOWER Docker Image\n\nMATPOWER is also available on [Docker Hub][7a] as the pre-packaged\n[Docker][7b] image tagged [matpower/matpower][7c], providing a Linux\nenvironment with Octave, MATPOWER, and the [MATPOWER Extras][7d]\npre-installed. See the [MATPOWER-Docker page][7e] for more details.\n\nDocker images are provided for both versioned releases and\ndevelopment versions.\n\n\nInstallation\n------------\n\nInstallation and use of MATPOWER requires familiarity with the basic\noperation of MATLAB or Octave. Make sure you follow the installation\ninstructions for the version of MATPOWER you are installing. The process\nwas simplified with an install script following version 6.0.\n\n1. **Get a copy of MATPOWER** as described above. 
Clone the repository\n or download and extract the ZIP file of the MATPOWER distribution\n and place the resulting directory in the location of your choice\n and call it anything you like. We will use `` as a\n placeholder to denote the path to this directory (the one\n containing `install_matpower.m`). The files in `` should\n not need to be modified, so it is recommended that they be kept\n separate from your own code.\n\n2. **Run the installer.**\n - Open MATLAB or Octave and change to the `` directory.\n - Run the installer and follow the directions to add the\n required directories to your MATLAB or Octave path, by typing:\n\n install_matpower\n\n3. **That\'s it.** There is no step 3.\n - But, if you chose not to have the installer run the test suite for\n you in step 2, you can run it now to verify that MATPOWER is\n installed and functioning properly, by typing:\n\n test_matpower\n\n\nRunning MATPOWER\n----------------\nTo run a simple Newton power flow on the 9-bus system specified in\nthe file `case9.m`, with the default algorithm options, at the\nMATLAB or Octave prompt, type:\n\n```matlab\nrunpf(\'case9\')\n```\n\nTo load the 30-bus system data from `case30.m`, increase its real power\ndemand at bus 2 to 30 MW, then run an AC optimal power flow with\ndefault options, type:\n\n```matlab\ndefine_constants;\nmpc = loadcase(\'case30\');\nmpc.bus(2, PD) = 30;\nrunopf(mpc);\n```\n\nBy default, the results of the simulation are pretty-printed to the\nscreen, but the solution can also be optionally returned in a `results`\nstruct. The following example shows how simple it is, after running a DC\nOPF on the 118-bus system in `case118.m`, to access the final objective\nfunction value, the real power output of generator 6 and the power flow\nin branch 51.\n\n```matlab\nresults = rundcopf(\'case118\');\nfinal_objective = results.f;\ngen6_output = results.gen(6, PG);\nbranch51_flow = results.branch(51, PF);\n```\n\nFor additional info, see the [MATPOWER User\'s Manual][8], the [on-line\nfunction reference][9], or the built-in help documentation for the various\nMATPOWER functions. For example:\n\n help runpf\n help runopf\n help mpoption\n help caseformat\n\n\nDocumentation\n-------------\n\nThere are a number of sources of documentation for MATPOWER\n\n#### User\'s Manuals\n\nThe User\'s Manuals are available as PDF files in the MATPOWER distribution\nas well as online.\n - [MATPOWER User\'s Manual][8] -- `docs/MATPOWER-manual.pdf`\n - [MOST User\'s Manual][10] -- `most/docs/MOST-manual.pdf`\n - [MP-Opt-User\'s Manual][10a] -- `mp-opt-model/docs/MP-Opt-Model-manual.pdf`\n - [MIPS Manual][10b] -- `mips/docs/MIPS-manual.pdf`\n - [MP-Test README][10c] -- `mptest/README.md`\n\nCurrent and past versions of the manuals are also available online at:\n - [https://matpower.org/doc/manuals][10d]\n\n#### MATPOWER Documentation website\n\nThe new [MATPOWER Documentation site][10e] is intended to be the home for all\nfuture MATPOWER documentation. It is very much a work-in-progress and\ncurrently contains only the new:\n - [MATPOWER Developer\'s Manual][10f]\n\nAs new documentation is written and legacy manuals are rewritten, they\nwill be found here in HTML and PDF formats. 
The site is generated by\n[Sphinx][10g] and the content is written in reStructuredText (reST) format.\n\n#### [MATPOWER Online Function Reference][9]\n\n#### Built-in Help\n\nEach M-file has its own documentation which can be accessed by typing at\nthe MATLAB/Octave prompt:\n\n help \n\nDocumentation for the case data file format can be found by typing:\n\n help caseformat\n\nIf something is still unclear after checking the manual and the help,\nthe source code *is* the documentation. :wink:\n\n#### Changes\n\nChanges to MATPOWER in each released version are summarized in the\n[release notes](docs/relnotes), found in `docs/relnotes` and in\nAppendix H of the [MATPOWER User\'s Manual][8]. A complete, detailed\nchange log, even for unreleased versions, is available in the\n[`CHANGES.md`][11] file.\n\n\nContributing\n------------\n\nPlease see our [contributing guidelines][7] for details on how to\ncontribute to the project or report issues.\n\n\nSponsoring the MATPOWER Project\n-------------------------------\n\nIf you have found MATPOWER to be valuable, please consider supporting\nthe project by [becoming a sponsor](https://matpower.org/sponsor).\nMATPOWER development and support require significant resources. Any\ncontributions from the community or other sponsors free us to focus on\nthat support and the development of valuable new features.\n\n\nPublications and Tech Notes\n---------------------------\n\n1. R. D. Zimmerman, C. E. Murillo-Sanchez, and R. J. Thomas,\n [""MATPOWER: Steady-State Operations, Planning and Analysis Tools\n for Power Systems Research and Education,""][12] *Power Systems, IEEE\n Transactions on*, vol. 26, no. 1, pp. 12\xe2\x80\x9319, Feb. 2011. \n doi: [10.1109/TPWRS.2010.2051168][13].\n\n2. R. D. Zimmerman, C. E. Murillo-Sanchez, and R. J. Thomas,\n [""MATPOWER\'s Extensible Optimal Power Flow Architecture,""][14]\n *Power and Energy Society General Meeting, 2009 IEEE*, pp. 1-7,\n July 26-30 2009. \n doi: [10.1109/PES.2009.5275967][15].\n - [slides of presentation][16]\n\n3. H. Wang, C. E. Murillo-S\xc3\xa1nchez, R. D. Zimmerman, R. J. Thomas,\n [""On Computational Issues of Market-Based Optimal Power Flow,""][17]\n *Power Systems, IEEE Transactions on*, vol. 22, no. 3,\n pp. 1185-1193, Aug. 2007. \n doi: [10.1109/TPWRS.2007.901301][17].\n\n4. C. E. Murillo-Sanchez, R. D. Zimmerman, C. L. Anderson, and\n R. J. Thomas, [""Secure Planning and Operations of Systems with\n Stochastic Sources, Energy Storage and Active Demand,""][18]\n *Smart Grid, IEEE Transactions on*, vol. 4, no. 4, pp. 2220\xe2\x80\x932229,\n Dec. 2013. \n doi: [10.1109/TSG.2013.2281001][18].\n\n5. A. J. Lamadrid, D. Munoz-Alvarez, C. E. Murillo-Sanchez,\n R. D. Zimmerman, H. D. Shin and R. J. Thomas, [""Using the MATPOWER\n Optimal Scheduling Tool to Test Power System Operation Methodologies\n Under Uncertainty,""][19] *Sustainable Energy, IEEE Transactions on*,\n vol. 10, no. 3, pp. 1280-1289, July 2019.\n doi: [10.1109/TSTE.2018.2865454][19].\n\n6. R. D. Zimmerman, [""Uniform Price Auctions and Optimal\n Power Flow,""][20] *MATPOWER Technical Note 1*, February 2010. \n Available: https://matpower.org/docs/TN1-OPF-Auctions.pdf \n doi: [10.5281/zenodo.3237850](https://doi.org/10.5281/zenodo.3237850).\n\n7. R. D. Zimmerman, [""AC Power Flows, Generalized OPF Costs\n and their Derivatives using Complex Matrix Notation,""][21]\n *MATPOWER Technical Note 2*, February 2010. 
\n Available:\n https://matpower.org/docs/TN2-OPF-Derivatives.pdf \n doi: [10.5281/zenodo.3237866](https://doi.org/10.5281/zenodo.3237866).\n\n8. B. Sereeter and R. D. Zimmerman, [""Addendum to AC Power Flows and\n their Derivatives using Complex Matrix Notation: Nodal Current\n Balance,""][22] *MATPOWER Technical Note 3*, April 2018. \n Available: https://matpower.org/docs/TN3-More-OPF-Derivatives.pdf \n doi: [10.5281/zenodo.3237900](https://doi.org/10.5281/zenodo.3237900).\n\n9. B. Sereeter and R. D. Zimmerman, [""AC Power Flows, Generalized\n OPF Costs and their Derivatives using Complex Matrix Notation\n and Cartesian Coordinate Voltages,""][23] *MATPOWER Technical\n Note 4*, April 2018. \n Available:\n https://matpower.org/docs/TN4-OPF-Derivatives-Cartesian.pdf \n doi: [10.5281/zenodo.3237909](https://doi.org/10.5281/zenodo.3237909).\n\n\n[Citing MATPOWER][31]\n---------------------\n\nWe request that publications derived from the use of MATPOWER, or the\nincluded data files, explicitly acknowledge that fact by citing the\nappropriate paper(s) and the software itself.\n\n#### Papers\n\nAll publications derived from the use of MATPOWER, or the included data\nfiles, should cite the 2011 MATPOWER paper:\n\n> R. D. Zimmerman, C. E. Murillo-Sanchez, and R. J. Thomas, ""MATPOWER:\n Steady-State Operations, Planning and Analysis Tools for Power Systems\n Research and Education,"" *Power Systems, IEEE Transactions on*, vol. 26,\n no. 1, pp. 12-19, Feb. 2011. \n doi: [10.1109/TPWRS.2010.2051168][13]\n\nPublications derived from the use of the [MATPOWER Optimal Scheduling\nTool (MOST)][24] should cite the 2013 MOST paper, in addition to the\n2011 MATPOWER paper above.\n\n> C. E. Murillo-Sanchez, R. D. Zimmerman, C. L. Anderson, and R. J. Thomas,\n ""Secure Planning and Operations of Systems with Stochastic Sources,\n Energy Storage and Active Demand,"" *Smart Grid, IEEE Transactions on*,\n vol. 4, no. 4, pp. 2220-2229, Dec. 2013. \n doi: [10.1109/TSG.2013.2281001][18]\n\nWork making specific reference to the [MATPOWER Interior Point Solver\n(MIPS)][32] should also cite:\n\n> H. Wang, C. E. Murillo-S\xc3\xa1nchez, R. D. Zimmerman, R. J. Thomas, ""On\n Computational Issues of Market-Based Optimal Power Flow,"" *Power Systems,\n IEEE Transactions on*, vol. 22, no. 3, pp. 1185-1193, Aug. 2007. \n doi: [10.1109/TPWRS.2007.901301][17]\n\nNOTE: Some of the case files included with MATPOWER request the citation\nof additional publications. This includes the ACTIVSg, PEGASE, and RTE\ncases. Details are available in the help text at the top of the\ncorresponding case files.\n\n#### Software\n\nFor the sake of reproducibility of research results, it is best to cite\nthe specific version of the software used, with the version-specific DOI.\nFor example, for version 7.1 of MATPOWER, use:\n\n> R. D. Zimmerman, C. E. Murillo-Sanchez (2020). *MATPOWER (Version 7.1)*\n [Software]. Available: https://matpower.org \n doi: [10.5281/zenodo.4074135](https://doi.org/10.5281/zenodo.4074135)\n\nTo cite the MATPOWER software generally, without reference to a specific\nversion, use the following citation and DOI, with *\\* replaced by the\nyear of the most recent release:\n\n> R. D. Zimmerman, C. E. Murillo-Sanchez (*\\*). *MATPOWER*\n [Software]. 
Available: https://matpower.org \n doi: [10.5281/zenodo.3236535][33]\n\nA list of versions with release dates and version-specific DOI\'s can be\nfound via the general DOI at https://doi.org/10.5281/zenodo.3236535.\n\n#### User\'s Manuals\n\nThe MATPOWER, MIPS and MOST User\'s Manuals should also be cited\nexplicitly in work that refers to or is derived from their content. As\nwith the software, the citation and DOI can be version-specific or\ngeneral, as appropriate. For version 7.1 of the [MATPOWER User\'s Manual][8],\nuse:\n\n> R. D. Zimmerman, C. E. Murillo-Sanchez. *MATPOWER User\'s Manual,\n Version 7.1.* 2020. \n [Online]. Available: https://matpower.org/docs/MATPOWER-manual-7.1.pdf \n doi: [10.5281/zenodo.4074122](https://doi.org/10.5281/zenodo.4074122)\n\nFor a version non-specific citation, use the following citation and DOI,\nwith *\\* replaced by the year of the most recent release:\n\n> R. D. Zimmerman, C. E. Murillo-Sanchez. *MATPOWER User\'s Manual.* *\\*. \n [Online]. Available: https://matpower.org/docs/MATPOWER-manual.pdf \n doi: [10.5281/zenodo.3236519][34]\n\nA list of versions of the User\'s Manual with release dates and\nversion-specific DOI\'s can be found via the general DOI at\nhttps://doi.org/10.5281/zenodo.3236519.\n\nFor information on citing the MIPS or MOST User\'s Manuals, please see\nthe [`mips/CITATION`][35] and [`most/CITATION`][36] files, respectively.\n\n#### Recommendation\n\nIn the interest of facilitating research reproducibility and thereby\nincreasing the value of your MATPOWER-related research publications, we\nstrongly encourage you to also publish, whenever possible, all of the\ncode and data required to generate the results you are publishing.\n[Zenodo/GitHub][37] and [IEEE DataPort][38] are two of [many available\noptions][39].\n\n\nE-mail Lists\n------------\n\nThere are two e-mail lists available to serve the MATPOWER community:\n\n- [**Discussion List**][26] ([MATPOWER-L][26]) \xe2\x80\x93 to facilitate discussion\n among MATPOWER users and provide a forum for help with MATPOWER\n related questions\n\n- [**Developer List**][27] ([MATPOWER-DEV-L][27]) \xe2\x80\x93 to provide a forum\n for discussion among MATPOWER users and developers related to the\n development of the MATPOWER software or proposed contributions\n\nFor details see the [Mailing Lists section][28] of the\n[MATPOWER website][1].\n\nPlease select the most appropriate list for your post and do *not*\ncross-post to both Discussion and Developer lists. Bug reports,\nsoftware patches, proposed enhancements, etc. should be submitted to\nthe [issue tracker on GitHub][29].\n\n\nOptional Packages\n-----------------\n\nThere are numerous optional packages to enhance the performance of\nMATPOWER that must be installed separately. The terms of use and\nlicense agreements vary. Some are free of charge for all to use,\nothers are only free for academic use, and others may require a\ncommercial license. 
Please see Appendix G of the [MATPOWER User\'s\nManual][8] for details.\n\n\nLicense and Terms of Use\n------------------------\n\nMATPOWER is distributed as open-source under the [3-clause BSD license][30].\n\n---\n\n[1]: https://matpower.org\n[2]: https://github.com/MATPOWER/matpower\n[3]: https://www.mathworks.com/\n[4]: https://www.gnu.org/software/octave/\n[5]: https://git-scm.com/downloads\n[6]: https://git-scm.com\n[7]: CONTRIBUTING.md\n[7a]: https://hub.docker.com/\n[7b]: https://www.docker.com\n[7c]: https://hub.docker.com/r/matpower/matpower\n[7d]: https://github.com/MATPOWER/matpower-extras\n[7e]: docker/MATPOWER-Docker.md\n[8]: docs/MATPOWER-manual.pdf\n[9]: https://matpower.org/docs/ref/\n[10]: most/docs/MOST-manual.pdf\n[10a]: mp-opt-model/docs/MP-Opt-Model-manual.pdf\n[10b]: mips/docs/MIPS-manual.pdf\n[10c]: mptest/README.md\n[10d]: https://matpower.org/doc/manuals\n[10e]: https://matpower.org/documentation/\n[10f]: https://matpower.org/documentation/dev-manual/\n[10g]: https://www.sphinx-doc.org/\n[11]: CHANGES.md\n[12]: https://matpower.org/docs/MATPOWER-paper.pdf\n[13]: https://doi.org/10.1109/TPWRS.2010.2051168\n[14]: https://matpower.org/docs/MATPOWER-OPF.pdf\n[15]: https://doi.org/10.1109/PES.2009.5275967\n[16]: https://matpower.org/docs/MATPOWER-OPF-slides.pdf\n[17]: https://doi.org/10.1109/TPWRS.2007.901301\n[18]: https://doi.org/10.1109/TSG.2013.2281001\n[19]: https://doi.org/10.1109/TSTE.2018.2865454\n[20]: https://matpower.org/docs/TN1-OPF-Auctions.pdf\n[21]: https://matpower.org/docs/TN2-OPF-Derivatives.pdf\n[22]: https://matpower.org/docs/TN3-More-OPF-Derivatives.pdf\n[23]: https://matpower.org/docs/TN4-OPF-Derivatives-Cartesian.pdf\n[24]: https://github.com/MATPOWER/most\n[26]: https://matpower.org/mailing-lists/#discusslist\n[27]: https://matpower.org/mailing-lists/#devlist\n[28]: https://matpower.org/mailing-lists\n[29]: https://github.com/MATPOWER/matpower/issues\n[30]: LICENSE\n[31]: CITATION\n[32]: https://github.com/MATPOWER/mips\n[33]: https://doi.org/10.5281/zenodo.3236535\n[34]: https://doi.org/10.5281/zenodo.3236519\n[35]: mips/CITATION\n[36]: most/CITATION\n[37]: https://guides.github.com/activities/citable-code/\n[38]: https://ieee-dataport.org\n[39]: https://www.re3data.org\n\n[logo]: docs/src/images/MATPOWER-md.png\n'",",https://doi.org/10.5281/zenodo.3237850,https://doi.org/10.5281/zenodo.3237866,https://doi.org/10.5281/zenodo.3237900,https://doi.org/10.5281/zenodo.3237909,https://doi.org/10.5281/zenodo.4074135,https://doi.org/10.5281/zenodo.3236535.\n\n####,https://doi.org/10.5281/zenodo.4074122,https://doi.org/10.1109/TPWRS.2010.2051168\n,https://doi.org/10.1109/PES.2009.5275967\n,https://doi.org/10.1109/TPWRS.2007.901301\n,https://doi.org/10.1109/TSG.2013.2281001\n,https://doi.org/10.1109/TSTE.2018.2865454\n,https://doi.org/10.5281/zenodo.3236535\n,https://doi.org/10.5281/zenodo.3236519\n","2016/12/16, 19:12:30",2504,CUSTOM,78,2311,"2023/09/22, 07:58:26",29,35,177,44,33,3,0.2,0.01890189018901889,"2022/12/23, 04:58:25",8.0b1,0,9,false,,false,true,,,https://github.com/MATPOWER,,,,,https://avatars.githubusercontent.com/u/22669039?v=4,,, energyRt,Making Energy Systems Modeling as simple as a linear regression in R.,energyRt,https://github.com/energyRt/energyRt.git,github,"energy-models,gams,glpk,pyomo,julia",Energy Modeling and Optimization,"2022/12/05, 20:58:11",19,0,2,true,R,,,"R,Python,GAMS,Julia,AMPL,TeX",http://www.energyRt.org,"b'# energyRt: energy systems modeling toolbox in R\n\n**_NEWS: [documentation in 
progress.](https://energyrt.github.io/book/)_**\n\n**energyRt** is a package for [R](https://www.r-project.org/) to develop Reference Energy System (RES) models (also known as Capacity Expansion Models (CEM), or ""Bottom-Up"" technological energy models), and analyze energy technologies.\n\nThe **energyRt** package provides tools to formulate the main ""bricks"" of an energy system model in **R**, and solve the model with one of the mainstream mathematical programming languages: \n* [GAMS](http://www.gams.com/), \n* [GLPK/Mathprog](https://www.gnu.org/software/glpk/), \n* [Python/Pyomo](http://www.pyomo.org/), \n* [Julia/JuMP](http://www.juliaopt.org/JuMP.jl/stable/). \n\nThe RES/CEM model has similarities with [TIMES/MARKAL](http://iea-etsap.org/web/tools.asp) and [OSeMOSYS](http://www.osemosys.org/), but has its own specifics, e.g. in the definition of technologies. \n\nThe **energyRt** package is a set of _classes_, _methods_, and _functions_ in [R](https://www.r-project.org/) which are designed to: \n- handle data and assist in defining RES models, \n- help to analyze data and check for errors and bugs before parsing it to the solver, \n- parse your dataset to GAMS, GLPK, Python/Pyomo, or Julia/JuMP and run them to solve the model, \n- read the solution and import results back to R, \n- assist with the analysis of results and reporting. \n\n### Motivation\n\n- minimize the time needed to develop and apply RES/Bottom-Up models,\n- shorten the learning curve in energy modeling, \n- improve transparency and understanding of energy models,\n- use the power of open source to improve energy models and their application,\n- make reproducible research (see [Reproducible Research with R and R Studio](https://github.com/christophergandrud/Rep-Res-Book) by @christophergandrud and/or [Dynamic Documents with R and knitr](https://github.com/yihui/knitr-book) by @yihui) accessible in RES modeling,\n- integrate with other models and software.\n\n### Development status\n\nThe current functionality allows development of multi-regional RES models from a basic to a well-advanced level of complexity, including multiple regions, exogenous or endogenous interregional trade routes (for example, an electricity grid), multilevel/nested time-slices, as well as flexible definition of technologies and storages. The package documentation is in development. By now, the best way to test the functionality of the package is to check the fully functional examples of the model (see *Examples* below). \n\n## Installation\n\n### Prerequisites\n \n#### R and RStudio\nAssuming that R is already installed (if not, please download and install it from https://www.r-project.org/), we also recommend RStudio (https://www.rstudio.com/), a powerful IDE (Integrated Development Environment) for R. It simplifies usage of R and provides a number of features such as reproducible research (integration with Markdown, Sweave) and integration with version control (github, svn). \n\n#### GAMS or GLPK or Python or Julia to solve the model \nThe cost-minimising linear programming model (the set of equations for the LP problem) embodied in the *energyRt* package requires additional software to solve it. Currently the *energyRt* model code is written in several languages: *GAMS*, *GLPK*, *Python/Pyomo*, *Julia/JuMP*. At least one of them is required to solve the model.\n\nThe General Algebraic Modeling System (*GAMS*, http://gams.com/) is a powerful proprietary modeling system. Suitable LP solvers: CBC (included in the basic GAMS version, a very powerful open source solver) or CPLEX. 
Other LP solvers have not been tested, but may work as well.\n\nThe GAMS path should also be added to the environment variables in your operating system. \n\n*GLPK* is an open source Linear Programming Kit which includes a powerful LP and MIP solver and a basic language for creating mathematical programming models (Mathprog or GMPL \xe2\x80\x93 for details see https://en.wikibooks.org/wiki/GLPK/GMPL_%28MathProg%29). \n\nGLPK/GMPL is an open source alternative to GAMS, but only for LP and MIP problems. GLPK/GMPL is a bit slower than GAMS for small models, and significantly slower for large models, partially because of the slower Mathprog (GMPL) language processor. \n\n##### Installing GLPK on PC/Windows systems \nDownload the GLPK binaries for Windows:\nhttps://sourceforge.net/projects/winglpk/\nFollow the installation instructions, and add the path to the Windows environment variables. \n\n##### Installing GLPK on Mac systems\nWe are not aware of any GLPK binaries/installers for Mac OS X. Therefore the following example installs from source with the standard procedure:\n```\ngzip -d glpk-4.57.tar.gz\ntar -x < glpk-4.57.tar\ncd glpk-4.57\n./configure\nmake\nmake check\nmake install\nmake distclean\n```\n\nAfter installation, run one of the following checks:\n```\nwhich glpsol\nglpsol -v\n```\n\nA response from glpsol will be an indicator of a successful installation. \n\nAlternatively, GLPK is included in the homebrew-science installer library. \nSee: http://brew.sh/ and https://github.com/Homebrew/homebrew-science for details. \n\n##### Installing Python/Pyomo \nPlease follow one of the standard procedures to install Python, make it available in your system\'s terminal/cmd, and install the Pyomo package and LP solver(s). CPLEX or Gurobi are recommended for large-scale models. \n\n##### Installing Julia/JuMP \nSimilarly, follow the standard procedure of installing Julia and the JuMP package, as well as the solvers and links to the solvers. *Currently the Julia/JuMP version of energyRt is suitable for small-scale models and is recommended for testing only; the code for large-scale models is in progress.*\n\n### energyRt\nCurrently the package is hosted only on GitHub. To install the package: \n```r\ndevtools::install_github(""olugovoy/energyRt"")\n```\n \n# Examples\n* **UTOPIA** -- a model with up to 11 regions, saved in the vignettes of the project (`energyRt/vignettes/`). \n* **USENSYS** -- a large-scale model of the US energy system, in progress. First version(s) are available here (https://github.com/usensys/usensys).\n'",,"2016/03/17, 16:08:29",2778,AGPL-3.0,9,1321,"2022/11/24, 17:23:03",0,28,33,3,335,0,0.0,0.18365627632687442,"2020/01/17, 08:33:17",0.01.04.9000,0,5,false,,false,false,,,,,,,,,,, MVS,"The multi-vector simulator allows the evaluation of local sector-coupled energy systems that include the energy carriers electricity, heat and/or gas.",rl-institut,https://github.com/rl-institut/multi-vector-simulator.git,github,oemof,Energy Modeling and Optimization,"2023/06/07, 08:49:41",21,2,4,true,Python,Reiner Lemoine Institut,rl-institut,"Python,CSS",,"b'##################################################\nMVS - Multi-Vector Simulator of the E-LAND toolbox\n##################################################\n\n|badge_docs| |badge_CI| |badge_coverage| |badge_zenodo| |badge_pypi| |badge_gpl2| |badge_black|\n\nRights: `Reiner Lemoine Institut (Berlin) `__\n\nThe Multi-Vector Simulator (MVS) allows the evaluation of local sector-coupled energy systems that include the energy carriers electricity, heat and/or gas. 
The MVS has three main features:\n\n- Analysis of an energy system model, which can be defined from csv or json files, including its costs and performance parameters.\n- Near-future investments into power generation and storage assets can be optimized, aiming at a least-cost supply of electricity and heat.\n- Future energy supply scenarios that integrate emerging technologies helping to meet sustainability goals and decrease adverse climate effects can be evaluated, e.g. through high renewable energy shares or sector-coupling technologies.\n\nThe tool is being developed within the scope of the H2020 project E-LAND (Integrated multi-vector management system for\nEnergy isLANDs, `project homepage `__).\nA graphical user interface for the MVS will be integrated.\n\n*Latest release*: Check the `latest release `__.\nPlease check the `CHANGELOG.md `__ for past updates and changes.\n\nYou find advanced documentation of the MVS on `readthedocs `__\n(stable version, latest developments `here `__).\n\n*Disclaimer*: As the MVS is still under development, changes might still occur in the code as well as in the code structure.\nIf you want to try the MVS, please make sure to check this project regularly.\n\nIf you are interested in trying out the code, please feel free to do so! In case you are planning to use it for a specific or a larger-scale\nproject, we would be very happy if you would get in contact with us, e.g. by creating a github issue.\nMaybe you have ideas that can help the MVS move forward? Maybe you noticed a bug that we can resolve?\n\nFor advanced programmers: You can also use the ``dev`` branch that includes the latest updates and changes.\nYou find the changelog `HERE `__.\n\n.. |badge_docs| image:: https://readthedocs.org/projects/multi-vector-simulator/badge/?version=latest\n :target: https://multi-vector-simulator.readthedocs.io/en/latest/?badge=latest\n :alt: Documentation Status\n\n.. |badge_CI| image:: https://github.com/rl-institut/multi-vector-simulator/workflows/CI/badge.svg\n :alt: Build status\n\n.. |badge_coverage| image:: https://coveralls.io/repos/github/rl-institut/multi-vector-simulator/badge.svg\n :target: https://coveralls.io/github/rl-institut/multi-vector-simulator\n :alt: Test coverage\n\n.. |badge_zenodo| image:: https://zenodo.org/badge/DOI/10.5281/zenodo.4610237.svg\n :target: https://doi.org/10.5281/zenodo.4610237\n :alt: Zenodo DOI\n\n.. |badge_gpl2| image:: https://img.shields.io/badge/License-GPL%20v2-blue.svg\n :target: https://img.shields.io/badge/License-GPL%20v2-blue.svg\n :alt: License gpl2\n\n.. |badge_pypi| image:: https://badge.fury.io/py/multi-vector-simulator.svg\n :target: https://pypi.org/project/multi-vector-simulator/\n :alt: Pypi version\n\n.. |badge_black| image:: https://img.shields.io/badge/code%20style-black-000000.svg\n :target: https://github.com/psf/black\n :alt: black linter\n\n========================\nGetting started with MVS\n========================\n\nSetup\n=====\n\nTo set up the MVS, follow the steps below:\n\n- If python3 is not pre-installed: Install miniconda (for python 3.7: https://docs.conda.io/en/latest/miniconda.html)\n\n- WINDOWS USERS: Using an Anaconda virtual environment is highly recommended for being able to fully utilize the tool. Venv \n environments work only for running the optimization tool (mvs_tool). For this, updating Pandas to at least version 1.3.5 \n and installing the package pygraphviz as indicated in this link https://pygraphviz.github.io/documentation/stable/install.html\n is necessary. 
However, it is not possible to run the interactive report (mvs_report) with venv, as it gives an error. \n Therefore, it is best to use conda environments.\n\n- Open Anaconda prompt (or other software such as PyCharm) to create and activate a virtual environment\n\n ``conda create -n [your_env_name] python=3.6`` followed by ``conda activate [your_env_name]``\n\n- Install the latest `MVS release `__:\n\n ``pip install multi-vector-simulator``\n\n- Download the `cbc-solver `__ into your system from https://ampl.com/dl/open/cbc/\n and integrate it in your system, i.e. unzip, place into a chosen path, add the path to your system variables\n (Windows: \xe2\x80\x9cSystem Properties\xe2\x80\x9d -->\xe2\x80\x9dAdvanced\xe2\x80\x9d--> \xe2\x80\x9cEnvironment Variables\xe2\x80\x9d, requires admin rights).\n\n You can also follow the `steps `__\n from the oemof setup instructions\n\n- Test that the cbc solver is properly installed by typing\n\n ``oemof_installation_test``\n\n You should at least get a confirmation that the cbc solver is working\n\n ::\n\n *****************************\n Solver installed with oemof:\n\n cbc: working\n glpk: not working\n gurobi: not working\n cplex: not working\n\n *****************************\n oemof successfully installed.\n *****************************\n\n- Test that the MVS installation was successful by executing\n\n ``mvs_tool``\n\nThis should create a folder ``MVS_outputs`` with the example simulation\'s results.\n\nYou can always check which version you installed with the following command\n\n ``mvs_tool --version``\n\n\nUsing the MVS\n=============\n\nTo run the MVS with custom inputs you have several options:\n\nUse the command line\n--------------------\n\nEdit the json input file (or csv files) and run\n\n::\n\n mvs_tool -i path_input_folder -ext json -o path_output_folder\n\nWith ``path_input_folder``: path to the folder with input data,\n\n``ext``: json for using a json file and csv for using csv files,\n\nand ``path_output_folder``: path of the folder where simulation results should be stored.\n\nFor more information about the possible command line options\n\n::\n\n mvs_tool -h\n\nUse the ``main()`` function\n---------------------------\n\nYou can also execute the MVS within a script; for this you need to import\n\n::\n\n from multi_vector_simulator.cli import main\n\nThe possible arguments to this function are:\n\n- ``overwrite`` (bool): Determines whether to replace existing results in ``path_output_folder`` with the results of the current simulation (True) or not (False) (Command line ""-f""). Default: ``False``.\n\n- ``input_type`` (str): Defines whether the input is taken from the ``mvs_config.json`` file (""json"") or from csv files (\'csv\') located within /csv\\_elements/ (Command line ""-ext""). Default: ``json``.\n\n- ``path_input_folder`` (str): The path to the directory where the input CSVs/JSON files are located. Default: ``inputs/`` (Command line ""-i"").\n\n- ``path_output_folder`` (str): The path to the directory where the results of the simulation such as the plots, time series, results JSON files are saved by MVS (Command line ""-o""). Default: ``MVS_outputs/``.\n\n- ``display_output`` (str): Sets the level of displayed logging messages. Options: ""debug"", ""info"", ""warning"", ""error"". Default: ""info"".\n\n- ``lp_file_output`` (bool): Specifies whether the linear equation system generated is saved as an lp file. Default: False.\n\n- ``pdf_report`` (bool): Specify whether a pdf report of the simulation\'s results is generated or not (Command line ""-pdf""). Default: False.\n\n- ``save_png`` (bool): Specify whether png figures with the simulation\'s results are generated or not (Command line ""-png""). Default: False.\n\nEdit the csv files (or, for devs, the json file) and call the ``main()`` function with the ``kwargs`` above, for instance as in the sketch below.\n
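\nA minimal sketch of such a script (the import path and keyword arguments are the ones documented above; the csv input folder ``inputs`` is an assumption for illustration):\n\n::\n\n    from multi_vector_simulator.cli import main\n\n    # Run a simulation from csv inputs and store the results in MVS_outputs/\n    main(\n        path_input_folder=""inputs"",\n        input_type=""csv"",\n        path_output_folder=""MVS_outputs"",\n        overwrite=True,  # same effect as the command line ""-f""\n        display_output=""info"",\n    )\n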
\nDefault settings\n----------------\n\nIf you execute the ``mvs_tool`` command in a path where there is a folder named ``inputs`` (you can use the\nfolder ``input_template`` for inspiration), this folder will be taken as the default input folder and you can simply run\n\n::\n\n mvs_tool\n\nA default output folder will be created. If you run the same simulation\nseveral times, you have to either overwrite the existing output folder\nwith\n\n::\n\n mvs_tool -f\n\nOr provide another output folder\'s path\n\n::\n\n mvs_tool -o \n\n.. _pdf-report-commands:\n\nGenerate pdf report or an app in your browser to visualise the results of the simulation\n----------------------------------------------------------------------------------------\n\nTo use the report feature you need to install extra dependencies first\n\n::\n\n pip install multi-vector-simulator[report]\n\nIf you are using zsh terminals and receive the error message ""no matches found"", you might need to run \n\n::\n\n pip install \'multi-vector-simulator[report]\'\n\n \nUse the option ``-pdf`` in the command line ``mvs_tool`` to generate a pdf report in a simulation\'s output folder\n(by default in ``MVS_outputs/report/simulation_report.pdf``):\n\n::\n\n mvs_tool -pdf\n\nUse the option ``-png`` in the command line ``mvs_tool`` to generate png figures of the results in the simulation\'s\noutput folder (by default in ``MVS_outputs/``):\n\n::\n\n mvs_tool -png\n\n\nTo generate a report of the simulation\'s results, run the following command **after** a simulation generated an output folder:\n\n::\n\n mvs_report -i path_simulation_output_folder -o path_pdf_report\n\nwhere ``path_simulation_output_folder`` should link to the folder of your simulation\'s output, or directly to a\njson file (default ``MVS_outputs/json_input_processed.json``), and ``path_pdf_report`` is the path where the report should be saved as a pdf file.\n\nThe report should appear in your browser (at http://127.0.0.1:8050) as an interactive Plotly Dash app.\n\nYou can then print the report via your browser\'s print functionality (ctrl+p); however, the layout of the pdf report is\nonly well optimized for chrome or chromium browsers.\n\nIt is also possible to automatically save the report as pdf by using the option ``-pdf``\n\n::\n\n mvs_report -i path_simulation_output_folder -pdf\n\nBy default, it will save the report in a ``report`` folder within your simulation\'s output folder\n(default ``MVS_outputs/report/``). See ``mvs_report -h`` for more information about possible options.\nThe css and images used to make the report pretty should be located under ``report/assets``.\n\nContributing and additional information for developers\n======================================================\n\nIf you want to contribute to this project, please read\n`CONTRIBUTING.md `__. For less experienced\ngithub users, we propose a `workflow `__.\n\nFor advanced programmers: please check out the `dev` branch that includes the latest updates and changes. 
You can find out about the latest changes in the `CHANGELOG.md file `__.\n'",",https://doi.org/10.5281/zenodo.4610237\n","2019/07/29, 14:57:11",1549,GPL-2.0,32,4994,"2023/09/07, 13:22:30",129,475,837,9,48,7,1.0,0.6533084808946878,"2021/05/31, 15:11:08",v1.0.0,3,13,false,,false,true,"open-plan-tool/simulation-server,rl-institut/mvs_eland_api",,https://github.com/rl-institut,http://www.reiner-lemoine-institut.de,Berlin/Germany,,,https://avatars.githubusercontent.com/u/18393972?v=4,,, PowNet,A least-cost optimization model for simulating the Unit Commitment and Economic Dispatch of large-scale (regional to country) power systems.,kamal0013,https://github.com/Critical-Infrastructure-Systems-Lab/PowNet.git,github,"power-system-analysis,unit-commitment,economic-dispatch,transmission,dc-flow,n-1-criterion,water-energy-nexus,python,dispatchable-units,renewable-resources,electricity-supply,substations",Energy Modeling and Optimization,"2023/05/20, 08:33:00",63,0,15,true,Jupyter Notebook,CRITICAL Infrastructure Systems Lab,Critical-Infrastructure-Systems-Lab,"Jupyter Notebook,Python",,"b""[![DOI](https://zenodo.org/badge/DOI/10.5281/zenodo.4020167.svg)](https://doi.org/10.5281/zenodo.4020167) ![license MIT](https://img.shields.io/github/license/kamal0013/PowNet) \n# PowNet: Unit Commitment / Economic Dispatch model in Python\nPowNet is a least-cost optimization model for simulating the Unit Commitment and Economic Dispatch (UC/ED) of large-scale (regional to country) power systems. In PowNet, a power system is represented by a set of nodes that include power plants, high-voltage substations, and import/export stations (for cross-border systems). The model schedules and dispatches the electricity supply from power plant units to meet hourly electricity demand in substations (at a minimum cost). It considers the techno-economic constraints of both generating units and high-voltage transmission network. The power flow calculation is based on a Direct Current (DC) network (with N-1 criterion), which provides a reasonable balance between modelling accuracy and data and computational requirements. PowNet can easily integrate information about variable renewable resources (e.g., hydro, wind, solar) into the UC/ED process. For example, it can be linked with models that estimate the electricity supply available from renewable resources as a function of the climatic conditions. In addition, PowNet has provision to account for the effect of droughts on the generation of dispatchable thermal units (e.g., coal, oil, gas fired units) that depend on freshwater availability. These features facilitate the application of PowNet to problems in the water-energy nexus domain that investigate the impact of water availability on electricity supply and demand. More details about the functionalities of PowNet are provided in [Chowdhury et al. (2020a)](https://openresearchsoftware.metajnl.com/articles/10.5334/jors.302/).\n\n# Versions and implementations\nThe latest and previous versions of PowNet are listed below. Please, check the [release notes](https://github.com/kamal0013/PowNet/releases) for a list of modifications made in each version. 
\n### Current version\nPowNet v1.3 ([GitHub](https://github.com/kamal0013/PowNet/tree/v1.3) | [Zenodo](https://zenodo.org/record/4688309#.YHc5euhKguU))\n### Previous versions\nPowNet v1.1 ([GitHub](https://github.com/kamal0013/PowNet/tree/v1.1) | [Zenodo](https://zenodo.org/record/3756750))\n\nPowNet v1.2 ([GitHub](https://github.com/kamal0013/PowNet/tree/v1.2) | [Zenodo](https://zenodo.org/record/4020167#.X1hqrGhKguU))\n\n### Implementations\n1.\t[PowNet-Cambodia](https://github.com/kamal0013/PowNet) \xe2\x80\x93 implementation of PowNet for Cambodian power system with data for 2016\n\n2.\t[PowNet-Laos](https://github.com/kamal0013/PowNet-Laos) \xe2\x80\x93 implementation of PowNet for Laotian power system with data for 2016\n\n3.\t[PowNet-Thailand]( https://github.com/kamal0013/PowNet-Thailand) \xe2\x80\x93 implementation of PowNet for Thai power system with data for 2016\n\nComputational requirements and instructions on how to run and customize the model are presented below.\n\n# Requirements\nPowNet is written in Python 3.5. It requires the following Python packages: (i) Pyomo, (ii) NumPy, (iii) Pandas, and (iv) Matplotlib (optional for plotting). It also requires an optimization solver (e.g., Gurobi, CPLEX). Note that the Python packages are freely available, while academic users can obtain a free license of Gurobi or CPLEX. PowNet has been tested on both Windows 10 and Linux Ubuntu 16.04 operating systems.\n\n# How to run\nPowNet is implemented in three Python scripts, namely pownet_model.py, pownet_datasetup.py, and pownet_solver.py. pownet_model.py contains the main model structure, which is based on the Pyomo optimization package. The data concerning dispatchable units and transmission lines, hourly electricity demand at substations, and hourly electricity availability through variable renewable resources (hydropower in this specific example) are provided in separate .csv files. PowNet can be run as follows:\n\n1.\tRun the pownet_datasetup.py. The script reads the .csv data files and generates a .dat file (sample provided), in which all data are written in a format that is executable by Pyomo;\n2.\tRun the pownet_solver.py that executes the model with the prepared data. The script also generates .csv files containing the values of each decision variable, namely (i) operational status of generating units, (ii) electricity supplied by the generators and variable renewable resources, (iii) voltage angles at each node (required to estimate transmission through the lines), and (iv) spinning and non-spinning reserves.\n\nThe repository also includes sample output files. A few additional Jupyter notebooks are provided to help users perform some standard analyses on the output variables. Such analyses include estimation of (i) generation mix, (ii) operating costs, (iii) CO2 emissions, (iv) usage and N-1 violations of transmission lines, and (v) reserves. Note that full forms of the abbreviated node-names are provided in Appendix. The Appendix also includes an Excel file with sample estimation of transmission parameters (capacity and susceptance) from physical specifications of the lines (e.g., voltage level, length, size, number of circuits, capacity per circuit etc.).\n\n# How to customize\nThe implementation of PowNet for any other power system requires the customization of the three Python scripts. 
pownet_datasetup.py, pownet_model.py, and pownet_solver.py are labelled with Segments A.1-A.9, B.1-B.13, and C.1-C.5, respectively, to facilitate the following step-by-step operations:\n\n1.\tPrepare the system-specific data in .csv files;\n2.\tProvide information regarding the simulation, such as simulation period, planning horizon, transmission losses (expressed as a percentage), N-1 criterion, reserve margins etc. in Segment A.1;\n3.\tDeclare the lists of nodes and types of dispatchable units in Segment A.3;\n4.\tDeclare the set of dispatchable generators by nodes and types in Segment A.4 and B.1. Also, in Segment B.1, declare the types of generators that must ensure the minimum reserve;\n5.\tAdd type-specific cost functions in the objective function in Segment B.8; \n6.\tCustomize B.11.3 according to the number of nodes with dispatchable generating units;\n7.\tTo include or exclude any variable renewable resource (e.g., hydro, wind, solar), uncomment or comment the code provided in Segment A.2, A.5, A.9, B.2, B.6, B.7, B.10, B.11.2, and C.5.\n\n# Schematics\nA basic framework of PowNet is shown in the figure below (adapted from [Chowdhury et al., 2021](https://doi.org/10.1029/2020EF001814)).\n\n![]( https://github.com/kamal0013/PowNet/blob/master/Appendix/fig_pownet_model.PNG)\n\nFigure below (adapted from [Chowdhury et al., 2020a](https://openresearchsoftware.metajnl.com/articles/10.5334/jors.302/)) shows the main generation and transmission components of the Cambodian power system (as of 2016), used to describe PowNet in this repository. The data are mostly extracted from publicly available technical reports, published by Electricite Du Cambodge (EDC).\n\n![]( https://github.com/kamal0013/PowNet/blob/master/Appendix/fig_Cambodia_grid.jpg)\n\n# Citation\nIf you use PowNet for your research, please cite the following paper:\n\nChowdhury, A.F.M.K., Kern, J., Dang, T.D. and Galelli, S., 2020. PowNet: A Network-Constrained Unit Commitment/Economic Dispatch Model for Large-Scale Power Systems Analysis. Journal of Open Research Software, 8(1), p.5. DOI: http://doi.org/10.5334/jors.302.\n\nIn addition, each release of PowNet is archived on Zenodo with a DOI, that can be found [here](https://zenodo.org/record/4020167#.X1hsSWhKguU).\n\n# License\nPowNet is released under the MIT license. \n\n# Contact\nFor questions and feedback related to PowNet, please send an email to afm.chowdhury@uon.edu.au (AFM Kamal Chowdhury) or stefano_galelli@sutd.edu.sg (Stefano Galelli).\n\n# Acknowledgment\nPowNet development is supported by Singapore's Ministry of Education (MoE) through the Tier 2 project \xe2\x80\x9cLinking water availability to hydropower supply \xe2\x80\x93 an engineering systems approach\xe2\x80\x9d (Award No. MOE2017-T2-1-143).\n\n# Publications\nFollowing is a list of papers that used PowNet:\n1.\tGalelli, S., Dang, T.D., Ng, J.Y., Chowdhury, A.F.M.K., Arias, M.E. (2022) Curbing hydrological alterations in the Mekong\xe2\x80\x93limits and opportunities of dam re-operation. Nature Sustainability, [Link](https://www.nature.com/articles/s41893-022-00971-z)\n2.\tKoh, R., Kern, J., Galelli, S. (2022) Hard-coupling water and power system models increases the complementarity of renewable energy sources. Applied Energy, 321, 119386. [Link](https://www.sciencedirect.com/science/article/abs/pii/S0306261922007255)\n3.\tChowdhury, A.K., Dang, T.D., Nguyen, H.T., Koh, R., and Galelli, S., (2021). 
The Greater Mekong's climate-water-energy nexus: how ENSO-triggered regional droughts affect power supply and CO2 emissions. Earth\xe2\x80\x99s Future, 9, e2020EF001814, [Link](https://doi.org/10.1029/2020EF001814).\n4.\tChowdhury, A.K., Dang, T.D., Bagchi, A., and Galelli, S., (2020b). Expected benefits of Laos' hydropower development curbed by hydro-climatic variability and limited transmission capacity\xe2\x80\x94opportunities to reform. Journal of Water Resources Planning and Management, [Link](https://doi.org/10.1061/(ASCE)WR.1943-5452.0001279) \n5.\tChowdhury, A.K., Kern, J., Dang, T.D. and Galelli, S., (2020a). PowNet: A Network-Constrained Unit Commitment/Economic Dispatch Model for Large-Scale Power Systems Analysis. Journal of Open Research Software, 8(1), p.5. [Link](http://doi.org/10.5334/jors.302).\n""",",https://doi.org/10.5281/zenodo.4020167,https://zenodo.org/record/4688309#.YHc5euhKguU,https://zenodo.org/record/3756750,https://zenodo.org/record/4020167#.X1hqrGhKguU,https://doi.org/10.1029/2020EF001814,http://doi.org/10.5334/jors.302.\n\nIn,https://zenodo.org/record/4020167#.X1hsSWhKguU,https://doi.org/10.1029/2020EF001814,https://doi.org/10.1061/(ASCE)WR.1943-5452.0001279,http://doi.org/10.5334/jors.302","2019/09/27, 03:03:01",1489,MIT,2,95,"2021/04/13, 12:33:19",0,0,4,0,925,0,0,0.30107526881720426,"2021/04/14, 18:40:23",v1.3,0,2,false,,false,false,,,https://github.com/Critical-Infrastructure-Systems-Lab,https://galelli.cee.cornell.edu,United States of America,,,https://avatars.githubusercontent.com/u/133989297?v=4,,, OpenIPSL,"A library of power system component models written in the Modelica language that can be used for power system dynamic analysis, such as phasor time-domain simulations.",OpenIPSL,https://github.com/OpenIPSL/OpenIPSL.git,github,"hacktoberfest,modelica,power-grid,power-grids,power-system-simulation,power-systems,power-systems-analysis,gridcal,power-system-dynamic-modeling,power-system-dynamics,power-system-stability,power-system-stabilizer,smart-grids,energy,energy-system,energy-system-modeling",Energy Modeling and Optimization,"2023/09/27, 07:18:01",61,0,15,true,Modelica,OpenIPSL,OpenIPSL,"Modelica,Motoko,Python,TeX,Dockerfile",https://doc.openipsl.org/,"b'\n[![Build Status](https://github.com/openipsl/openipsl/actions/workflows/checkCI.yml/badge.svg?branch=master)](https://github.com/OpenIPSL/OpenIPSL/actions)\n\n# **OpenIPSL**: Open-Instance Power System Library\nThe OpenIPSL or Open-Instance Power System Library is a library of power system component models written in the [Modelica](http://modelica.org) language that can be used for power system dynamic analysis, such as phasor time-domain simulations.\n\nThe OpenIPSL is currently developed and maintained by Prof. 
[Luigi Vanfretti\'s](https://github.com/lvanfretti) research group [ALSETLab](https://github.com/ALSETLab) at [Rensselaer Polytechnic Institute](http://rpi.edu), Troy, NY, collaborators and friends, such as [Dietmar Winkler](https://github.com/dietmarw) and [FOSSEE](https://om.fossee.in/fellowship2018) (contributions are welcome!).\n\n## Scope\nThe OpenIPSL is developed to be used for research and education (therefore frequent releases may occur), with maximum compatibility with [OpenModelica](https://openmodelica.org/) (to provide a free/libre and cost-free alternative for power system dynamic simulation), to provide as many typical ""test networks"" as possible for use in research and teaching, and to be developed in such a way that the library can efficiently be used for power system simulation within Modelica-based workflows (i.e., helping to give reference power system models for development and testing of Modelica back-end compilers) when faced with power system simulation challenges.\n\nPlease note that the library contains only the models that can be used for dynamic studies. As such, there are no solver tools provided in this repository (no Power Flow Solvers, no Time-Domain solvers, etc.).\nThe user should use a Modelica-compliant tool for simulation of models in this library.\n\n## History\nThe iPSL is a [Modelica](https://www.modelica.org) library developed during the [iTesla project](https://cordis.europa.eu/project/id/283012/reporting).\nThe members of this project (OpenIPSL) at SmarTS Lab (now [ALSETLab](https://github.com/ALSETLab)) were key developers of the iPSL until March 31, 2016, when the iTesla project was completed.\nProf. [Luigi Vanfretti](https://github.com/lvanfretti) led the development of a large number of the models of the library (particularly those that replicate results from PSAT and PSS/E).\niPSL is part of the [iTesla Tool](https://github.com/itesla/ipst), and thus, it is subject to the needs of the consortium that develops the iTesla Tool.\nTherefore, the SmarTS Lab / ALSETLab team decided to create the OpenIPSL fork in order to develop the library in a direction that is more suitable for researchers and teachers/professors, and in a transparent, open source software approach.\n\n## Documentation\nDocumentation is provided within the library and can be accessed when loading OpenIPSL in any Modelica-compliant tool. Tutorials on OpenIPSL have been given at many conferences, and are available under [Release](https://github.com/OpenIPSL/OpenIPSL/releases). We recommend you start from there.\n\nOur documentation assumes that you have working knowledge of the Modelica language, are familiar with a Modelica-compliant modeling and simulation environment, that you have proficient knowledge of power system steady state analysis (i.e., the so-called ""power flow""), and knowledge of power system dynamic modeling (i.e., the so-called ""transient"" and ""small-signal"" stability). As such, the documentation is limited, and aims to provide very concise information for people who fulfill the requirements above. If you do not fulfill these requirements, we recommend that you first get acquainted with both Modelica and Power Systems.\n\n## Citing OpenIPSL in Publications\nIf you use OpenIPSL in your work or research, all we ask you in exchange is that you **cite the reference publications**, according to your use. Please consult our publication list, located within the User\'s Guide package, in the Publications page, for browsing the reference publications. 
Preferably, please cite this repository by using our preferred reference, as seen in the GitHub GUI.\n\nYou are also welcome to submit your contributions as stated below.\n\n## Contributing\n\nContributions to the library are welcome and can be submitted in the form of pull requests to this repository. Please consult the [contribution guidelines](.github/CONTRIBUTING.md) for more information on how to contribute to the development of this library. Information about our current and past contributors can be found when the library is loaded into a Modelica-compliant software. The information is located within the Users\' Guide package, in the Contact page.\n\nIf you want to submit your contributions to the OpenIPSL, note that we make use of an automated assistant for dealing with Contributor\'s License Agreements (CLAs). Please read the [CLA instructions](.github/legal/README.md) beforehand. If you have any questions, please don\'t hesitate to contact us.\n\n## Copyright and License Information\n\n**OpenIPSL:** Copyright April 2016 - current [Luigi Vanfretti](https://github.com/lvanfretti), [ALSETLab](https://github.com/ALSETLab), Troy, NY (Formerly SmarTS Lab, Stockholm, Sweden).\n\nThe authors can be contacted by email: luigi.vanfretti@gmail.com.\n\nThis Source Code Form is subject to the terms of the [3-Clause BSD license](https://opensource.org/licenses/BSD-3-Clause).\n\n## Acknowledgements\n\nThis work was supported primarily by the following grants and institutions, in reverse chronological order:\n- 2018-2023: Dominion Energy Virginia through sponsored research projects: (2018-2019) Flexible Alternating Current Transmission System Modeling and Performance Analysis using Measurement Data, (2020-2021) Model Validation of Generator Power Plants, and (2021-2022) Cloud-Based Integrated Model-and-Measurement Analytics for Power System Applications, at Rensselaer Polytechnic Institute.\n- 2019-2021: New York State Energy Research and Development Authority (NYSERDA) through the Electric Power Transmission and Distribution (EPTD) PON 3770 High Performing Grid Program together with the New York Power Authority (NYPA).\n- 2018-2020: This work was also supported in part by the ERC Program of the National Science Foundation and DOE under NSF Award Number EEC-1041877 and in part by the CURENT Industry Partnership Program.\n'",,"2016/04/16, 09:09:47",2748,BSD-3-Clause,159,1736,"2023/08/17, 15:45:03",15,218,317,25,69,2,4.6,0.5839662447257383,"2022/06/23, 13:05:59",v3.0.1,0,20,false,,true,true,,,https://github.com/OpenIPSL,http://openipsl.org,Cyber Space and All Around the World!,,,https://avatars.githubusercontent.com/u/29949865?v=4,,, RAMP,"A bottom-up stochastic model for the generation of high-resolution multi-energy profiles, conceived for application in contexts where only rough information about users' behaviour is obtainable.",RAMP-project,https://github.com/RAMP-project/RAMP.git,github,,Energy Modeling and Optimization,"2023/10/08, 18:53:05",47,0,19,true,Python,RAMP,RAMP-project,Python,,"b'.. image:: https://img.shields.io/gitter/room/RAMP-project/RAMP\n :target: https://gitter.im/RAMP-project/community\n\n.. image:: https://img.shields.io/badge/code%20style-black-000000.svg\n :target: https://github.com/psf/black\n\n.. image:: https://badge.fury.io/py/rampdemand.svg\n :target: https://badge.fury.io/py/rampdemand\n\n.. image:: https://readthedocs.org/projects/rampdemand/badge/?version=latest\n :target: https://rampdemand.readthedocs.io/en/latest/?badge=latest\n :alt: Documentation Status\n\n.. 
image:: https://github.com/RAMP-project/RAMP/blob/documentation/docs/source/_static/RAMP_logo_basic.png?raw=true\n :width: 300\n\n\n*An open-source bottom-up stochastic model for generating multi-energy load profiles* (`RAMP Website `_, `RAMP Documentation `_)\n\n\nWhat is RAMP\n============\nRAMP is an open-source software suite for the stochastic simulation of any user-driven energy demand time series based on a few simple inputs.\n\nThe project aims to provide synthetic data wherever metered data does not exist, such as when designing systems in remote areas. Check out the `documentation `_ and learn more about the RAMP world from our `website `_! \n\n.. image:: https://github.com/RAMP-project/RAMP/blob/master/docs/figures/Example_output.jpg?raw=true\n :width: 600\n\nRecommended installation method\n===============================\n\nThe easiest way to get the RAMP software working is to use the free conda package manager, which can install the current and future RAMP\ndependencies in an easy and user-friendly way.\n\nTo get conda, `download and install ""Anaconda Distribution"" `_, or `""miniconda"" `_, which is lighter.\nYou can install RAMP using pip, conda or from source code.\n\nInstalling through pip\n----------------------\n1. To install the RAMP software, we suggest creating a new environment by running the following command in the anaconda prompt:\n\n.. code-block:: bash\n\n conda create -n ramp python=3.8\n\n2. If you create a new environment for RAMP, you\'ll need to activate it each time before using it, by writing\nthe following line in the *Anaconda Prompt*:\n\n.. code-block:: bash\n\n conda activate ramp\n\n3. Now you can use pip to install `rampdemand` in your environment as follows:\n\n.. code-block:: bash\n\n pip install rampdemand\n\n\nInstalling through source code\n------------------------------\nYou can also install RAMP from the source code! To do so, you first need to download the source code:\n\n1. You can use git to clone the repository using:\n\n.. code-block:: bash\n\n git clone https://github.com/RAMP-project/RAMP.git\n\n2. You may download the source code directly from:\n\n`""RAMP GitHub Repository"" `_.\n\nIn this case, the source code will be downloaded as a zip file, so you need to extract the files.\n\nAfter downloading the source code using either of the abovementioned ways, you need to use your **anaconda prompt** to install the code.\nYou can follow the first two steps mentioned in **Installing through pip**. Then you need to change the directory of the prompt to the folder where the source code is saved (where you can find the *setup.py* file). To install the RAMP software use:\n\n.. code-block:: bash\n\n python setup.py install\n\nAlternatively, you may use:\n\n.. code-block:: bash\n\n conda env create -f requirements.yml\n\nRequirements\n============\nRAMP has been tested on macOS, Windows and Linux.\n\nFor running RAMP, you\'ll need a few packages:\n\n#. The Python programming language, version 3.6 or higher\n#. A number of Python add-on packages:\n\n * `Pandas `_\n * `Numpy `_\n * `Matplotlib `_\n * `Openpyxl `_\n\nThe requirements are specified in the `requirements.txt` file.\n\n
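Once RAMP is installed through any of the above methods, a quick sanity check is to import it from python (a minimal sketch; the imported names are the same ones used in the quick-start example below):\n\n.. code-block:: python\n\n    # check that the installation worked; these names are imported\n    # again in the quick-start example below\n    from ramp import User, calc_peak_time_range, yearly_pattern\n\n    print(""RAMP imports OK"")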
Quick start\n===========\nThere are different ways to build a model using RAMP! Here, we provide a first example, but you can find more information in our `documentation `_.\n\nExample python input files\n--------------------------\nThree different input files are provided as examples, representing three different categories of appliances that can be modelled with RAMP.\nTo have a look at the python files, you can download them using the ""download_example"" function:\n\n.. code-block:: python\n\n from ramp import download_example\n\n download_example(""the specific folder directory to save the files"")\n\n- ``input_file_1.py``: represents the most basic electric appliances,\n is an example of how to model lightbulbs, radios, TVs, fridges, and\n other electric appliances. This input file is based on the ones used\n for `this\n publication `__.\n\n- ``input_file_2.py``: shows how to model thermal loads, with the\n example of a \xe2\x80\x9cshower\xe2\x80\x9d appliance. The peculiarity of thermal appliances\n is that the nominal power can be provided as external input as a\n \xe2\x80\x9ccsv\xe2\x80\x9d file (in this case, ``shower_P.csv``). For the example \xe2\x80\x9cshower\xe2\x80\x9d\n appliance, the varying nominal power accounts for the effect of\n groundwater temperature variation throughout the year. This input\n file is based on that used for `this\n publication `__.\n\n- ``input_file_3.py``: represents an example of how to model electric\n cooking appliances. In this input file two different kinds of meals\n are modelled: 1) short and repetitive meals (e.g.\xc2\xa0breakfast); and 2)\n main meals (e.g.\xc2\xa0lunch, dinner). Repetitive meals do not vary across\n days, whilst main meals do so. In particular, every household can\n randomly choose between 3 different types of main meal every day.\n Such variability in meal preferences is modelled by means of two\n parameters: the ``user preference`` and the ``preference index``. The\n ``user preference`` defines how many types of meal are available for\n each user to choose every day (e.g.\xc2\xa03). Then, each of the available\n meal options is modelled separately, with a different\n ``preference index`` attached. The stochastic process randomly varies\n the meal preference of each user every day, deciding whether they\n want a \xe2\x80\x9ctype 1\xe2\x80\x9d meal, or a \xe2\x80\x9ctype 2\xe2\x80\x9d, etc. on a given day. This input\n file is used in `this\n publication `__\n\nSpreadsheet input files\n-----------------------\n\nIt is also possible to use spreadsheets as input files. To do so you\nneed to run the ``ramp`` command with the option ``-i``:\n\n.. code-block:: bash\n\n ramp -i \n\n.. note:: You can input several files, separated from each other by a single blank space\n\nIf you already know\nhow many profiles you want to simulate you can indicate it with the\n``-n`` option:\n\n.. code-block:: bash\n\n ramp -i -n 10\n\nwill simulate 10 profiles. Note that you can use this option without\nproviding a ``.xlsx`` input file with the ``-i`` option; this will then\nbe equivalent to running ``python ramp_run.py`` from the ``ramp`` folder\nwithout being prompted for the number of profiles within the console.\n\nIf you want to save ramp results to a custom file, you can provide it with the option `-o`\n\n.. code-block:: bash\n\n ramp -i -o \n\n.. note:: You can provide a number of output files, separated from each other by a single blank space, matching the number of input files.\n\nOther options are documented in the help of `ramp`, which you access with the ``-h`` option\n\n
.. code-block:: bash\n\n ramp -h\n\n\nIf you have existing python input files, you can convert them to\nspreadsheets. To do so, go to the ``ramp`` folder and run\n\n.. code-block:: bash\n\n python ramp_convert_old_input_files.py -i \n\nFor other examples of command line options, such as setting date ranges, please visit `the dedicated section `_ of the documentation.\n\nBuilding a model with a python script\n-------------------------------------\n\n.. code-block:: python\n\n # importing functions\n from ramp import User, calc_peak_time_range, yearly_pattern\n\n # Create a user category\n low_income_household = User(\n user_name = ""low_income_household"", # an optional feature for the User class\n num_users = 10, # the number of users of this category in the community\n )\n\nYou can add appliances to a user category by:\n\n.. code-block:: python\n\n # adding some appliances for the household\n radio = low_income_household.add_appliance(\n name = ""Small Radio"", # optional feature for the appliance class\n number = 1, # how many radios each low-income household holds\n power = 10, # appliance power; RAMP does not track units of measure, watts here\n func_time = 120, # total functioning time of the appliance in minutes\n num_windows = 2, # in how many time-windows the appliance is used\n )\n\nThe use time frames can be specified using the ``windows`` method for each appliance of the user category:\n\n.. code-block:: python\n\n # Specifying the functioning windows\n radio.windows(\n window_1 = [480,540], # from 8 AM to 9 AM\n window_2 = [1320,1380], # from 10 PM to 11 PM\n )\n\nNow you can generate your **stochastic Profiles**:\n\n.. code-block:: python\n\n # generating load curves\n load = low_income_household.generate_aggregated_load_profiles(\n prof_i = 1, # the ith day profile\n peak_time_range = calc_peak_time_range(), # the peak time range\n Year_behaviour = yearly_pattern(), # defining the yearly pattern (like weekdays/weekends)\n )\n\n
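The returned profile can then be inspected directly. For instance (a minimal sketch, assuming that ``load`` from the step above behaves as a one-dimensional array of minute-resolution power values, and that Matplotlib, listed among the requirements above, is installed):\n\n.. code-block:: python\n\n    # plot the generated daily profile; assumes `load` from the step above\n    import matplotlib.pyplot as plt\n\n    plt.plot(load)\n    plt.xlabel(""Minute of the day"")\n    plt.ylabel(""Load (W)"")\n    plt.show()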
Contributing\n============\nThis project is open-source. Interested users are therefore invited to test, comment or contribute to the tool. Submitting issues is the best way to get in touch with the development team, which will address your comment, question, or development request in the best possible way. We are also looking for contributors to the main code, willing to contribute to its capabilities, computational efficiency, formulation, etc.\n\nTo contribute changes:\n\n#. Fork the project on GitHub\n#. Create a feature branch (e.g. named ""add-this-new-feature"") to work on in your fork\n#. Add your name to the `AUTHORS `_ file\n#. Commit your changes to the feature branch\n#. Push the branch to GitHub\n#. On GitHub, create a new pull request from the feature branch\n\nWhen committing new changes, please also take care of checking code stability by means of the `qualitative testing `_ functionality.\n\n\nHow to cite\n===========\nPlease cite the original Journal publication if you use RAMP in your research:\n\n*F. Lombardi, S. Balderrama, S. Quoilin, E. Colombo, Generating high-resolution multi-energy load profiles for remote areas with an open-source stochastic model, Energy, 2019,*\n`https://doi.org/10.1016/j.energy.2019.04.097 `_\n\nMore information\n================\nWant to know more about the possible applications of RAMP, the studies that relied on it and much more? Then take a look at the `RAMP Website `_!\n\nLicense\n=======\nCopyright 2019-2023 RAMP, contributors listed in **Authors**\n\nLicensed under the European Union Public Licence (EUPL), Version 1.2-or-later; you may not use this file except in compliance with the License.\n\nUnless required by applicable law or agreed to in writing, software distributed under the License is distributed on an **""AS IS"" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND**, either express or implied. See the License for the specific language governing permissions and limitations under the License.\n\n\n.. note::\n\n This project is under active development!\n'",",https://doi.org/10.1016/j.energy.2019.04.097,https://doi.org/10.3390/app10217445,https://doi.org/10.1109/PTC.2019.8810571,https://doi.org/10.1016/j.energy.2019.04.097,https://doi.org/10.1016/j.energy.2019.04.097","2019/02/20, 17:02:32",1708,EUPL-1.2,85,346,"2023/10/24, 17:10:04",25,40,66,33,1,2,1.4,0.4571428571428572,"2023/02/17, 16:12:03",v0.4.0,0,6,false,,true,true,,,https://github.com/RAMP-project,,,,,https://avatars.githubusercontent.com/u/65850039?v=4,,, POMATO,An easy to use tool for the comprehensive analysis of the modern electricity market.,richard-weinhold,https://github.com/richard-weinhold/pomato.git,github,,Energy Modeling and Optimization,"2021/06/25, 14:20:50",56,0,11,false,Python,,,"Python,Julia",,"b' POMATO - Power Market Tool \n=========================================================================================================================================================\n\nMain Branch: ![Python package](https://github.com/richard-weinhold/pomato/workflows/Python%20package/badge.svg?branch=master) [![codecov](https://codecov.io/gh/richard-weinhold/pomato/branch/main/graph/badge.svg?token=1K2PHOjJmC)](https://codecov.io/gh/richard-weinhold/pomato)\n\n\nConstruction Branch: ![Python package](https://github.com/richard-weinhold/pomato/workflows/Python%20package/badge.svg?branch=construction) [![codecov](https://codecov.io/gh/richard-weinhold/pomato/branch/construction/graph/badge.svg?token=1K2PHOjJmC)](https://codecov.io/gh/richard-weinhold/pomato)\n\n\nDocumentation Status: [![Documentation Status](https://readthedocs.org/projects/pomato/badge/?version=latest)](https://pomato.readthedocs.io/en/latest/?badge=latest)\n\nOverview\n--------\n\nPOMATO stands for POwer MArket TOol and is an easy to use tool for the comprehensive\nanalysis of the modern electricity market. It comprises the necessary power\nengineering framework to account for power flow physics, thermal transport\nconstraints and security policies of the underlying transmission\ninfrastructure, depending on the requirements defined by the user.\nPOMATO was specifically designed to realistically model Flow-Based\nMarket-Coupling (FBMC) and is therefore equipped with a fast security\nconstrained optimal power flow algorithm and allows zonal market clearing\nwith endogenously generated flow-based parameters, and redispatch.\n\nDocumentation\n-------------\n\nComprehensive documentation is available at [pomato.readthedocs.io](https://pomato.readthedocs.io/).\n\nInstallation\n------------\n\nPOMATO is written in python and julia. Python takes care of the data processing\nand julia runs the economic dispatch and N-1 redundancy removal algorithm. \n\nThe recommended way to install POMATO is with python and pip:\n\n - Install [python](https://www.python.org/downloads/) for your operating system. 
On linux\n based operating systems python is often already installed and available under the python3\n command. For Windows, install python into a folder of your choice. POMATO is written and tested\n in python 3.7, but any version >= 3.6 should be compatible. \n \n - Install [julia](https://julialang.org/downloads/) for your operating system. POMATO is\n written and tested with julia 1.5; the newest version 1.6 works as well, but throws some\n warnings. \n\n - Add *python* and *julia* to the system Path; this allows you to start *python* and *julia*\n directly from the command line without typing out the full path of the installation. Platform-specific\n instructions on how to do this are part of the [julia installation instructions](https://julialang.org/downloads/platform/) and work analogously for python. \n \n - Install POMATO through *pip* in python. It is recommended, but not necessary, to create a virtual environment and\n install pomato into it:\n \n python -m venv pomato\n ./pomato/Scripts/activate\n pip install git+https://github.com/richard-weinhold/pomato.git\n\n\nThis will not only clone the master branch of this repository into the local python environment, but\nalso pull the master branch of the MarketModel and RedundancyRemoval julia packages which are\nrequired to run POMATO. This process can take a few minutes to complete.\n\nAfter this is completed, pomato can be imported in python:\n\n from pomato import POMATO\n\nSee the [POMATO Documentation](https://pomato.readthedocs.io/en/latest/installation.html) for\nfurther information on the installation process. \n\nExamples\n--------\nThis release includes two examples in the *examples* folder. Including the contents of this folder in a pomato working directory will allow their execution:\n\n - The IEEE 118 bus network, which contains a single timestep. The data is available under \n open license at [https://power-grid-lib.github.io/](https://power-grid-lib.github.io/) and re-hosted in this repository.\n\n $ python /run_pomato_ieee.py\n\n - The DE case study, based on data from openly available data sources. The file can be run via\n\n $ python /run_pomato_de.py\n\nMore in-depth descriptions of these two case studies are part of the [POMATO Documentation](https://pomato.readthedocs.io/).\n\nThe *examples* folder also contains the two examples as Jupyter notebooks. Another possibility is to\naccess the functionality of POMATO through an interactive REPL/Console, by running POMATO inside an IDE with\nan interactive IPython Console (e.g. Spyder), which gives access to POMATO objects and variables.\n\nRelease Status\n--------------\n\nPOMATO is part of my PhD and actively developed by Robert and myself. This means it will keep \nchanging to include new functionality or to improve existing features. The existing examples, which\nare also part of the Getting Started guide in the documentation, are part of a testing suite to \nensure some robustness. However, we are not software engineers, thus the ""program"" is not written \nwith robustness in mind and our experience is limited when it comes to common best practices. \nExpect errors, bugs, funky behavior and code structures from the minds of two engineering economists. 
\n\nRelated Publications\n--------------------\n- (*preprint*) [Weinhold and Mieth (2020), Power Market Tool (POMATO) for the Analysis of Zonal \n Electricity Markets](https://arxiv.org/abs/2011.11594)\n- [Weinhold and Mieth (2020), Fast Security-Constrained Optimal Power Flow through \n Low-Impact and Redundancy Screening](https://ieeexplore.ieee.org/document/9094021)\n- [Sch\xc3\xb6nheit, Weinhold, Dierstein (2020), The impact of different strategies for generation \n shift keys (GSKs) on the flow-based market coupling domain: A model-based analysis of Central Western Europe](https://www.sciencedirect.com/science/article/pii/S0306261919317544)\n\nAcknowledgments\n---------------\n\nRichard and Robert would like to acknowledge the support of Reiner Lemoine-Foundation, the Danish\nEnergy Agency and Federal Ministry for Economic Affairs and Energy (BMWi). Robert Mieth is funded by\nthe Reiner Lemoine-Foundation scholarship. Richard Weinhold is funded by the Danish Energy Agency.\nThe development of POMATO and its applications was funded by BMWi in the project \xe2\x80\x9cLong-term Planning\nand Short-term Optimization of the German Electricity System Within the European Context\xe2\x80\x9d (LKD-EU,\n03ET4028A).\n\n\n'",",https://arxiv.org/abs/2011.11594","2020/05/12, 14:42:17",1261,CUSTOM,0,420,"2023/10/24, 17:10:04",5,0,0,0,1,1,0,0.022332506203473934,"2021/06/15, 20:21:21",v0.4,0,2,false,,false,false,,,,,,,,,,, PowerGAMA,A lightweight simulation tool for high level analyses of renewable energy integration in large power systems.,harald_g_svendsen/powergama/wiki,,custom,,Energy Modeling and Optimization,,,,,,,,,,https://bitbucket.org/harald_g_svendsen/powergama/wiki/Home,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,, Power System Analysis Toolbox,A Matlab toolbox for electric power system analysis and simulation.,,,custom,,Energy Modeling and Optimization,,,,,,,,,,http://faraday1.ucd.ie/psat.html,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,, USelectricity,Forecast the US demand for electricity.,RamiKrispin,https://github.com/RamiKrispin/USelectricity.git,github,,Energy Modeling and Optimization,"2022/02/11, 22:10:48",96,0,7,false,R,,,"R,Shell,Dockerfile",https://ramikrispin.github.io/USelectricity/,"b'\n\n\n# USelectricity\n\n\n\n\nWIP\xe2\x80\xa6\n\nThe [US Electricity\nDashboard](https://ramikrispin.github.io/USelectricity/) provides\nreal-time tracking and forecasting for US (lower 48 states) hourly\nelectricity demand. This project\xe2\x80\x99s primary goal is to demonstrate a data\nscience project\xe2\x80\x99s deployment into production using open source and free\ntools such as R, h2o, Github Actions, Docker, and others. 
That includes\nthe following components:\n\n- Data - pulling the US hourly demand and generation of electricity\n data from the Energy Information Administration (EIA) API with the\n use of R and jq\n- Forecast - generating a 72-hour forecast for the total demand with\n a Generalized Linear Model (GLM) from the h2o package\n- Dashboard - communicating the demand data and forecast with the use\n of the flexdashboard package\n- Automation - the data on the dashboard is refreshed every\n hour, and the forecast is regenerated every 72 hours with the use of\n Github Actions and Docker\n\n### Data\n\nThe dashboard provides an overview of the overall hourly demand and\ngeneration of electricity in the US (lower 48).\n\nDemand\n\nNet generation\n\n\n\n'",,"2021/01/14, 13:03:45",1014,CUSTOM,0,7356,"2023/10/24, 17:10:04",1,0,0,0,1,0,0,0.0,,,0,1,false,,false,false,,,,,,,,,,, reV,"Enables the efficient and scalable computation of renewable energy generation, levelized cost of energy, application of geospatial exclusion layers, and generation of renewable energy supply curves.",NREL,https://github.com/NREL/reV.git,github,,Energy Modeling and Optimization,"2023/10/16, 16:25:55",56,4,13,true,Python,National Renewable Energy Laboratory,NREL,"Python,PowerBuilder,Dockerfile",https://nrel.github.io/reV/,"b'.. raw:: html\n\n
\n\n---------\n\n.. image:: https://github.com/NREL/reV/workflows/Documentation/badge.svg\n :target: https://nrel.github.io/reV/\n\n.. image:: https://github.com/NREL/reV/workflows/Pytests/badge.svg\n :target: https://github.com/NREL/reV/actions?query=workflow%3A%22Pytests%22\n\n.. image:: https://github.com/NREL/reV/workflows/Lint%20Code%20Base/badge.svg\n :target: https://github.com/NREL/reV/actions?query=workflow%3A%22Lint+Code+Base%22\n\n.. image:: https://img.shields.io/pypi/pyversions/NREL-reV.svg\n :target: https://pypi.org/project/NREL-reV/\n\n.. image:: https://badge.fury.io/py/NREL-reV.svg\n :target: https://badge.fury.io/py/NREL-reV\n\n.. image:: https://codecov.io/gh/nrel/reV/branch/main/graph/badge.svg?token=U4ZU9F0K0Z\n :target: https://codecov.io/gh/nrel/reV\n\n.. image:: https://zenodo.org/badge/201343076.svg\n :target: https://zenodo.org/badge/latestdoi/201343076\n\n.. image:: https://mybinder.org/badge_logo.svg\n :target: https://mybinder.org/v2/gh/nrel/reV/HEAD\n\n|\n\n.. inclusion-intro\n\n**reV** (the Renewable Energy Potential model)\nis an open-source geospatial techno-economic tool that\nestimates renewable energy technical potential (capacity and generation),\nsystem cost, and supply curves for solar photovoltaics (PV),\nconcentrating solar power (CSP), geothermal, and wind energy.\nreV allows researchers to include exhaustive spatial representation\nof the built and natural environment into the generation and cost estimates\nthat it computes.\n\nreV is highly dynamic, allowing analysts to assess potential at varying levels\nof detail \xe2\x80\x94 from a single site up to an entire continent at temporal resolutions\nranging from five minutes to hourly, spanning a single year or multiple decades.\nThe reV model can (and has been used to) provide broad coverage across large spatial\nextents, including North America, South and Central Asia, the Middle East, South America,\nand South Africa to inform national and international-scale analyses. Still, reV is\nequally well-suited for regional infrastructure and deployment planning and analysis.\n\n\nFor a detailed description of reV capabilities and functionality, see the\n`NREL reV technical report `_.\n\nHow does reV work?\n==================\nreV is a set of `Python classes and functions `_\nthat can be executed on HPC systems using `CLI commands `_.\nA full reV execution consists of one or more compute modules\n(each consisting of their own Python class/CLI command)\nstrung together using a `pipeline framework `_,\nor configured using `batch `_.\n\nA typical reV workflow begins with input wind/solar/geothermal resource data\n(following the `rex data format `_)\nthat is passed through the generation module. This output is then collected across space and time\n(if executed on the HPC), before being sent off to be aggregated under user-specified land exclusion scenarios.\nExclusion data is typically provided via a collection of high-resolution spatial data layers stored in an HDF5 file.\nThis file must be readable by reV\'s\n`ExclusionLayers `_\nclass. 
See the `reVX Setbacks utility `_\nfor instructions on generating setback exclusions for use in reV.\nNext, transmission costs are computed for each aggregated\n""supply-curve point"" using user-provided transmission cost tables.\nSee the `reVX transmission cost calculator utility `_\nfor instructions on generating transmission cost tables.\nFinally, the supply curves and initial generation data can be used to\nextract representative generation profiles for each supply curve point.\n\nA visual summary of this process is given below:\n\n\n.. inclusion-flowchart\n\n.. raw:: html\n\n
\n
\n\n|\n\n.. inclusion-get-started\n\nTo get up and running with reV, first head over to the `installation page `_,\nthen check out some of the `Examples `_ or\ngo straight to the `CLI Documentation `_!\n\n\n.. inclusion-install\n\n\nInstalling reV\n==============\n\nNOTE: The installation instructions below assume that you have python installed\non your machine and are using `conda `_\nas your package/environment manager.\n\nOption 1: Install from PIP (recommended for analysts):\n---------------------------------------------------------------\n\n1. Create a new environment:\n ``conda create --name rev python=3.9``\n\n2. Activate the environment:\n ``conda activate rev``\n\n3. Install reV:\n 1) ``pip install NREL-reV``\n\n - NOTE: If you install using conda and want to use `HSDS `_\n you will also need to install h5pyd manually: ``pip install h5pyd``\n\nOption 2: Clone repo (recommended for developers)\n-------------------------------------------------\n\n1. from home dir, ``git clone git@github.com:NREL/reV.git``\n\n2. Create ``reV`` environment and install package\n 1) Create a conda env: ``conda create -n rev``\n 2) Run the command: ``conda activate rev``\n 3) cd into the repo cloned in 1.\n 4) prior to running ``pip`` below, make sure the branch is correct (install\n from main!)\n 5) Install ``reV`` and its dependencies by running:\n ``pip install .`` (or ``pip install -e .`` if running a dev branch\n or working on the source code)\n\n3. Check that ``reV`` was installed successfully\n 1) From any directory, run the following commands. This should return the\n help pages for the CLIs.\n\n - ``reV``\n\n\nreV command line tools\n======================\n\n- `reV `_\n- `reV template-configs `_\n- `reV batch `_\n- `reV pipeline `_\n- `reV project-points `_\n- `reV bespoke `_\n- `reV generation `_\n- `reV econ `_\n- `reV collect `_\n- `reV multiyear `_\n- `reV supply-curve-aggregation `_\n- `reV supply-curve `_\n- `reV rep-profiles `_\n- `reV hybrids `_\n- `reV nrwal `_\n- `reV qa-qc `_\n- `reV script `_\n- `reV status `_\n- `reV reset-status `_\n\n\nLaunching a run\n---------------\n\nTips\n\n- Only use a screen session if running the pipeline module: `screen -S rev`\n- `Full pipeline execution `_\n\n.. code-block:: bash\n\n reV -c ""/scratch/user/rev/config_pipeline.json"" pipeline\n\n- Running just generation or econ can be done directly from the console:\n\n.. code-block:: bash\n\n reV -c ""/scratch/user/rev/config_gen.json"" generation\n\nGeneral Run times and Node configuration on Eagle\n-------------------------------------------------\n\n- WTK Conus: 10-20 nodes per year, walltime 1-4 hours\n- NSRDB Conus: 5 nodes, walltime 2 hours\n\n`Eagle node requests `_\n\n\n.. inclusion-citation\n\n\nRecommended Citation\n====================\n\nPlease cite both the technical paper and the software with the version and\nDOI you used:\n\nMaclaurin, Galen J., Nicholas W. Grue, Anthony J. Lopez, Donna M. Heimiller,\nMichael Rossol, Grant Buster, and Travis Williams. 2019. \xe2\x80\x9cThe Renewable Energy\nPotential (reV) Model: A Geospatial Platform for Technical Potential and Supply\nCurve Modeling.\xe2\x80\x9d Golden, Colorado, United States: National Renewable Energy\nLaboratory. NREL/TP-6A20-73067. https://doi.org/10.2172/1563140.\n\nGrant Buster, Michael Rossol, Paul Pinchuk, Brandon N Benton, Robert Spencer,\nMike Bannister, & Travis Williams. (2023).\nNREL/reV: reV 0.8.0 (v0.8.0). Zenodo. 
https://doi.org/10.5281/zenodo.8247528\n'",",https://zenodo.org/badge/latestdoi/201343076\n\n,https://doi.org/10.2172/1563140.\n\nGrant,https://doi.org/10.5281/zenodo.8247528\n","2019/08/08, 21:54:39",1538,BSD-3-Clause,402,3232,"2023/10/12, 18:19:14",5,248,425,49,13,0,2.4,0.6014925373134328,"2023/10/16, 18:59:01",v0.8.2,0,7,false,,false,false,"nicolejkeeney/geo-py-docker,pswild/king_pine,NREL/sup3r,NREL/reVX",,https://github.com/NREL,http://www.nrel.gov,"Golden, CO",,,https://avatars.githubusercontent.com/u/1906800?v=4,,, openCEM,Capacity Expansion Model and Optimiser for the Australian National Energy Market.,openCEMorg,https://github.com/openCEMorg/openCEM.git,github,,Energy Modeling and Optimization,"2021/06/03, 00:58:26",15,0,0,false,Python,openCEM,openCEMorg,"Python,Dockerfile",,"b'[![Build Status](https://travis-ci.com/openCEMorg/openCEM.svg?token=YPwjEg4ZHVHXyJ2xeA7b&branch=master)](https://travis-ci.com/openCEMorg/openCEM)\n\n# openCEM\n\nWelcome to the repository for openCEM\n\n## What is this repository for?\n\nThis repository contains the development version of openCEM. You can download and try it on your computer. Please report issues either via:\n\n- Log an issue in the [issue tracker](https://github.com/openCEMorg/openCEM/issues)\n- Email [analyticsinfo-at-itpau.com.au](mailto:analyticsinfo@itpau.com.au)\n\n## Requirements\n\n* A computer with at least 16 GB of RAM (32 GB or more recommended)\n* Windows, MacOS, or Linux OS\n* 4 GB available on your hard drive (for full result sets)\n* An active internet connection for the duration of the run\n\n## Installation\n\nTo run openCEM, you need to install [Python 3](https://www.python.org/download/releases/3.0/) and additional dependencies.\nSee the [Install](https://github.com/openCEMorg/openCEM/wiki/Install) page for more information.\n\n## Documentation\n\nThe [Wiki](https://github.com/openCEMorg/openCEM/wiki) for this repository is the main and most up to date source of information for openCEM.\n\n## Examples\n\nExample input files for openCEM can be found in the [Examples](https://github.com/openCEMorg/openCEM_examples) repository.\n'",,"2018/11/13, 03:41:55",1807,GPL-3.0,0,346,"2021/06/03, 00:58:33",3,14,22,0,874,2,0.2,0.12222222222222223,"2021/06/03, 00:03:59",v1.1.0,0,5,false,,true,true,,,https://github.com/openCEMorg,,,,,https://avatars.githubusercontent.com/u/44991012?v=4,,, energy-py,Reinforcement learning for energy systems.,ADGEfficiency,https://github.com/ADGEfficiency/energy-py.git,github,"reinforcement-learning,energy",Energy Modeling and Optimization,"2022/02/05, 03:15:20",169,0,10,true,Python,,,"Python,Makefile",,"b'# energy-py\n\n[![Build Status](https://travis-ci.org/ADGEfficiency/energy-py.svg?branch=master)](https://travis-ci.org/ADGEfficiency/energy-py)\n\nenergy-py is a framework for running reinforcement learning experiments on energy environments.\n\nThe library is focused on electric battery storage, and offers an implementation of many batteries operating in parallel.\n\nenergy-py includes an implementation of the Soft Actor-Critic reinforcement learning agent, implemented in Tensorflow 2:\n\n- test & train episodes based on historical Australian electricity price data,\n- checkpoints & restarts,\n- logging in Tensorboard.\n\nenergy-py is built and maintained by Adam Green - adam.green@adgefficiency.com.\n\n\n## Setup\n\n```bash\n$ make setup\n```\n\n\n## Test\n\n```bash\n$ make test\n```\n\n\n## Running experiments\n\n`energypy` has a high-level API to run an experiment from a `JSON` 
config file.\n\nThe most interesting experiment is to run battery storage for price arbitrage in the Australian electricity market. This requires grabbing some data from S3. The command below will download a pre-made dataset and unzip it to `./dataset`:\n\n```bash\n$ make pulls3-dataset\n```\n\nYou can then run the experiment from a JSON file:\n\n```bash\n$ energypy benchmarks/nem-battery.json\n```\n\nResults are saved into `./experiments/{env_name}/{run_name}`:\n\n```bash\n$ tree -L 3 experiments\nexperiments/\n\xe2\x94\x94\xe2\x94\x80\xe2\x94\x80 battery\n \xe2\x94\x9c\xe2\x94\x80\xe2\x94\x80 nine\n \xe2\x94\x82\xc2\xa0\xc2\xa0 \xe2\x94\x9c\xe2\x94\x80\xe2\x94\x80 checkpoints\n \xe2\x94\x82\xc2\xa0\xc2\xa0 \xe2\x94\x9c\xe2\x94\x80\xe2\x94\x80 hyperparameters.json\n \xe2\x94\x82\xc2\xa0\xc2\xa0 \xe2\x94\x9c\xe2\x94\x80\xe2\x94\x80 logs\n \xe2\x94\x82\xc2\xa0\xc2\xa0 \xe2\x94\x94\xe2\x94\x80\xe2\x94\x80 tensorboard\n \xe2\x94\x94\xe2\x94\x80\xe2\x94\x80 random.pkl\n```\n\nenergy-py also provides wrappers around two `gym` environments - Pendulum and Lunar Lander:\n\n```bash\n$ energypy benchmarks/pendulum.json\n```\n\nRunning the Lunar Lander experiment has a dependency on Swig and pybox2d - which can require a bit of elbow-grease to set up depending on your environment.\n\n```bash\n$ energypy benchmarks/lunar.json\n```\n'",,"2017/04/03, 10:14:01",2396,MIT,0,17,"2023/03/25, 01:04:44",6,22,59,2,214,2,0.0,0.0,,,0,1,false,,false,false,,,,,,,,,,, glaes,Geospatial Land Availability for Energy Systems.,FZJ-IEK3-VSA,https://github.com/FZJ-IEK3-VSA/glaes.git,github,"geospatial,geospatial-analysis,renewable-energy,energy,python",Energy Modeling and Optimization,"2023/01/18, 07:46:01",36,0,8,true,Python,FZJ-IEK3,FZJ-IEK3-VSA,"Python,Dockerfile",,"b'\n\n# Geospatial Land Availability for Energy Systems (GLAES)\n\nGLAES is a framework for conducting land eligibility analyses and is designed to easily incorporate disparate geospatial information from a variety of sources into a unified solution.\nCurrently, the main purpose of GLAES is performing land eligibility (LE) analyses which, in short, are used to determine which areas within a region are deemed \'eligible\' for some purpose (such as placing a wind turbine).\nAlthough initially intended to operate in the context of distributed renewable energy systems, such as onshore wind and open-field solar parks, the work flow of GLAES is applicable to any context where a constrained indication of land is desired.\nExcept in the context of Europe, GLAES only provides a framework for conducting these types of analyses, and so the underlying data sources which are used will need to be provided.\nFortunately, GLAES is built on top of the Geospatial Data Abstraction Library (GDAL) and so is capable of incorporating information from any geospatial dataset which GDAL can interpret; including common GIS formats such as .shp and .tif files.\nIn this way, GLAES affords a high degree of flexibility, such that very specific considerations can be incorporated while still maintaining a consistent application method between studies.\n\n[![DOI](https://zenodo.org/badge/114907468.svg)](https://zenodo.org/badge/latestdoi/114907468)\n\n## Features\n\n- Standardized approach to land eligibility analyses\n- Applicable in any geographic region and at any resolution\n- Can flexibly incorporate most geospatial datasets: including the common .shp and .tif formats\n- Simple visualization and storage of results as common image or raster dataset\n- Simple integration of results into other analyses (via numpy array)
\n\n## European Priors\n\nA number of precomputed (Prior) datasets which constitute the most commonly considered criteria used for LE analyses have been constructed for the European context.\nThese datasets are formatted to be used directly with the GLAES framework and, in doing so, drastically reduce the time requirements, data management, and overall complexity of conducting these analyses.\nThe Priors also have the added benefit of providing a common data source to all LE researchers, which further promotes consistency between independent LE evaluations.\nMost importantly, usage of these datasets is just as easy as applying exclusions from other geospatial data sources.\nAlthough the Prior datasets are not included when cloning this repository, they can be downloaded from [Mendeley Data](https://data.mendeley.com/datasets/trvfb3nwt2) and installed by unzipping (or placing if downloaded one-by-one) the files in the repo directory `glaes/data/priors`.\n\n---\n\n## Example\n\n### A simple LE workflow using GLAES would go as follows:\n\nObjective:\n\n- Determine land eligibility for photovoltaic (PV) modules in the Aachen administration region considering that...\n 1. PV modules should not cover agricultural areas (because people need to eat)\n 2. PV modules should not be within 200 meters of a major roadway (because they may get dirty)\n 3. PV modules should not be within 1000 meters of a settlement area (because they are too shiny)\n\n```python\n from glaes import ExclusionCalculator\n\n # aachenRegion: a vector file outlining the region of interest (not defined here)\n ec = ExclusionCalculator(aachenRegion, srs=3035, pixelSize=100)\n ec.excludePrior(""agriculture_proximity"", value=0)\n ec.excludePrior(""settlement_proximity"", value=(None,1000))\n ec.excludePrior(""roads_main_proximity"", value=(None,200))\n ec.draw()\n```\n\n\n\n### More Examples\n\n1. [Basic Workflow](Examples/00_basic_workflow.ipynb)\n2. [Placement Algorithm](Examples/01_Placement_algorithm.ipynb)\n\n---\n\n## Installation\n\nThe primary dependencies of GLAES are:\n\n1. gdal>2.0.0,<3.0.0\n2. GeoKit >= 1.2.4\n\nIf you can install these modules on your own, then the glaes module should be easily installable with:\n\n```\npip install git+https://github.com/FZJ-IEK3-VSA/glaes.git#egg=glaes\n```\n\nIf, on the other hand, you prefer an automated installation using Anaconda, then you should be able to follow these steps:\n\n1. First clone a local copy of the repository to your computer, and move into the created directory:\n\n```\ngit clone https://github.com/FZJ-IEK3-VSA/glaes.git\ncd glaes\n```\n\n1. (Alternative) If you want to use the \'dev\' branch (or another branch) then use:\n\n```\ngit checkout dev\n```\n\n2. GLAES should be installable to a new environment with:\n\n```\nconda env create --file requirements.yml\n```\n\n2. (Alternative) Or into an existing environment with:\n\n```\nconda env update --file requirements.yml -n \n```\n\n2. (Alternative) If you want to install GLAES in editable mode, and also with jupyter notebook and with testing functionalities, use:\n\n```\nconda env create --file requirements-dev.yml\n```\n\n---\n\n## Docker\n\nWe are trying to get GLAES to work within a Docker container. 
Try it out!\n\n- First pull the image with:\n\n```bash\ndocker pull sevberg/glaes:latest\n```\n\n- You can then start a basic python interpreter with:\n\n```bash\ndocker run -it sevberg/glaes:latest -c ""python""\n```\n\n- Or you can start a jupyter notebook using:\n\n```bash\ndocker run -it \\\n -p 8888:8888 \\\n sevberg/glaes:latest \\\n -c ""jupyter notebook --ip=\'*\' --port=8888 --no-browser --allow-root""\n```\n\n- You can then connect to the notebook at the address ""localhost:8888""\n- The access token can be found in the output of the earlier command\n- Finally, you might want to mount a volume to access geospatial data. For this you can use:\n\n```bash\ndocker run -it \\\n --mount target=/notebooks,type=bind,src= \\\n -p 8888:8888 \\\n sevberg/glaes:latest \\\n -c ""jupyter notebook --notebook-dir=/notebooks --ip=\'*\' --port=8888 --no-browser --allow-root""\n```\n\n---\n\n## Associated papers\n\nIf you would like to see a **much** more detailed discussion of land eligibility analysis, and why a framework such as GLAES is not only helpful but a requirement, please see:\n\nThe Background Paper\n\nExamples of Land Eligibility evaluation and applications:\n\n- [Uniformly constrained land eligibility for onshore European wind power](https://doi.org/10.1016/j.renene.2019.06.127)\n\n- [The techno-economic potential of offshore wind energy with optimized future turbine designs in Europe](https://doi.org/10.1016/j.apenergy.2019.113794)\n\n- [Linking the Power and Transport Sectors\xe2\x80\x94Part 2: Modelling a Sector Coupling Scenario for Germany](http://www.mdpi.com/1996-1073/10/7/957/htm)\n\n---\n\n## Example applications of external institutions\n\n- [Cost-potential curves of onshore wind energy including disamenity costs](https://link.springer.com/article/10.1007/s10640-022-00746-2) \n\n---\n \n## Citation\n\nIf you decide to use GLAES anywhere in a published work, please kindly cite us using the following:\n\n```bibtex\n@article{Ryberg2018,\n author = {Ryberg, David and Robinius, Martin and Stolten, Detlef},\n doi = {10.3390/en11051246},\n issn = {1996-1073},\n journal = {Energies},\n month = {may},\n number = {5},\n pages = {1246},\n title = {{Evaluating Land Eligibility Constraints of Renewable Energy Sources in Europe}},\n url = {http://www.mdpi.com/1996-1073/11/5/1246},\n volume = {11},\n year = {2018}\n}\n```\n\n---\n\n## License\n\nMIT License\n\nCopyright (c) 2017-2022 David Severin Ryberg (FZJ IEK-3), Jochen Lin\xc3\x9fen (FZJ IEK-3), Martin Robinius (FZJ IEK-3), Detlef Stolten (FZJ IEK-3)\n\nYou should have received a copy of the MIT License along with this program. \nIf not, see https://opensource.org/licenses/MIT\n\n## About Us\n
\nWe are the Institute of Energy and Climate Research - Techno-economic Systems Analysis (IEK-3) belonging to the Forschungszentrum J\xc3\xbclich. Our interdisciplinary department\'s research is focusing on energy-related process and systems analyses. Data searches and system simulations are used to determine energy and mass balances, as well as to evaluate performance, emissions and costs of energy systems. The results are used for performing comparative assessment studies between the various systems. Our current priorities include the development of energy strategies, in accordance with the German Federal Government\xe2\x80\x99s greenhouse gas reduction targets, by designing new infrastructures for sustainable and secure energy supply chains and by conducting cost analysis studies for integrating new technologies into future energy market frameworks.\n\n## Acknowledgment\n\nThis work was supported by the Helmholtz Association under the Joint Initiative [""Energy System 2050 \xe2\x80\x93 A Contribution of the Research Field Energy""](https://www.helmholtz.de/en/research/energy/energy_system_2050/).\n\n\n'",",https://zenodo.org/badge/latestdoi/114907468,https://doi.org/10.1016/j.renene.2019.06.127,https://doi.org/10.1016/j.apenergy.2019.113794","2017/12/20, 16:19:36",2135,MIT,2,157,"2023/01/18, 07:46:02",5,5,6,2,280,0,0.2,0.18666666666666665,"2020/04/29, 12:45:38",v1.1.6,0,7,false,,false,false,,,https://github.com/FZJ-IEK3-VSA,https://www.fz-juelich.de/iek/iek-3/EN/Home/home_node.html,Forschungszentrum Jülich,,,https://avatars.githubusercontent.com/u/28654423?v=4,,, onsset,A GIS based optimization tool that has been developed to support electrification planning and decision making for the achievement of energy access goals in currently unserved locations.,OnSSET,https://github.com/OnSSET/onsset.git,github,,Energy Modeling and Optimization,"2023/03/28, 15:04:40",24,1,5,true,Python,The Open Source Spatial Electrification Tool,OnSSET,"Python,Jupyter Notebook",http://www.onsset.org,"b'onsset : Open Source Spatial Electrification Tool\n=================================================\n\n[![PyPI version](https://badge.fury.io/py/onsset.svg)](https://badge.fury.io/py/onsset)\n[![Build Status](https://travis-ci.com/OnSSET/onsset.svg?branch=master)](https://travis-ci.com/OnSSET/onsset)\n[![Coverage Status](https://coveralls.io/repos/github/OnSSET/onsset/badge.svg?branch=master)](https://coveralls.io/github/OnSSET/onsset?branch=master)\n[![Documentation Status](https://readthedocs.org/projects/onsset/badge/?version=latest)](https://onsset.readthedocs.io/en/latest/?badge=latest)\n\n# Scope\n\nThis repository contains the source code of the Open Source Spatial Electrification Tool\n([OnSSET](http://www.onsset.org/)).\n\n\nThe repository also includes sample test files available in ```.\\test_data```\nand sample output files available in ```.\\sample_output```.\n\n## Installation\n\n### Requirements\n\nOnSSET requires Python > 3.5 with the following packages installed:\n- et-xmlfile\n- jdcal\n- numpy\n- openpyxl\n- pandas\n- python-dateutil\n- pytz\n- six\n- xlrd\n- notebook\n- seaborn\n- matplotlib\n- scipy\n\n### Install with pip\n\nInstall onsset from the Python Packaging Index (PyPI):\n\n```\npip install onsset\n```\n\n### Install from GitHub\n\nDownload or clone the repository and install the package in `develop`\n(editable) mode:\n\n```\ngit clone https://github.com/onsset/onsset.git\ncd onsset\npython setup.py develop\n```\n\n## Contact\nFor more information regarding the tool, its 
functionality and implementation\nplease visit https://www.onsset.org or contact the development team\nat seap@desa.kth.se.\n'",,"2019/05/27, 15:17:54",1612,CUSTOM,2,776,"2022/06/22, 01:37:21",27,72,128,0,490,5,0.0,0.6871287128712871,"2023/01/31, 10:20:16",Somalia-1.0,0,8,false,,false,true,Adrianonsare/EnergyAnalytics,,https://github.com/OnSSET,www.onsset.org,,,,https://avatars.githubusercontent.com/u/57988767?v=4,,, whobs-server,"This is the code for the online optimization of zero-direct-emission electricity systems with wind, solar and storage (using batteries and electrolysed hydrogen gas) to provide a baseload electricity demand, using the cost and other assumptions of your choice.",PyPSA,https://github.com/PyPSA/whobs-server.git,github,,Energy Modeling and Optimization,"2023/08/30, 09:08:36",32,0,5,true,Python,PyPSA,PyPSA,"Python,JavaScript,HTML,CSS",,"b'\n# model.energy: online optimisation of energy systems\n\nThis is the code for the online optimisation of zero-direct-emission\nelectricity systems with wind, solar and storage (using batteries and\nelectrolysed hydrogen gas) to provide a baseload electricity demand,\nusing the cost and other assumptions of your choice. It uses only free\nsoftware and open data, including [Python for Power System Analysis\n(PyPSA)](https://github.com/PyPSA/PyPSA) for the optimisation\nframework, the European Centre for Medium-Range Weather Forecasts\n(ECMWF) [ERA5\ndataset](https://cds.climate.copernicus.eu/cdsapp#!/dataset/reanalysis-era5-single-levels)\nfor the open weather data, the [atlite\nlibrary](https://github.com/FRESNA/atlite) for converting weather data\nto generation profiles, [Clp](https://projects.coin-or.org/Clp) for\nthe solver, [D3.js](https://d3js.org/) for graphics,\n[Mapbox](https://www.mapbox.com/), [Leaflet](http://leafletjs.com/)\nand [Natural Earth](https://www.naturalearthdata.com/) for maps, and\nfree software for the server infrastructure (GNU/Linux, nginx, Flask,\ngunicorn, Redis).\n\nYou can find a live version at:\n\n\n\n\n## Requirements\n\n### Software\n\nThis software has only been tested on the Ubuntu distribution of GNU/Linux.\n\nUbuntu packages:\n\n`sudo apt install coinor-clp coinor-cbc redis-server`\n\nTo install, we recommend using [miniconda](https://docs.conda.io/en/latest/miniconda.html) in combination with [mamba](https://github.com/QuantStack/mamba).\n\n\tconda install -c conda-forge mamba\n\tmamba env create -f environment.yaml\n\nFor (optional) server deployment:\n\n\tsudo apt install nginx\n\tmamba install gunicorn\n\n\n### Automatic preparation\n\nAfter installing the dependencies above, run the following line of code:\n\n\tpython prepare.py\n\nThis helps you:\n\n1. Fetch the pre-processed wind and solar data for the globe (around 6.5 GB per weather year specified in `config.yaml`)\n1. Create folders for results\n1. Fetch static files not included in this repository\n\nNow you are ready to [run the server locally](#run-server-locally-on-your-own-computer).\n\n### Generating wind and solar data yourself\n\nThe script `prepare.py` will download everything you need to get\nstarted, including the pre-processed wind and solar data for the\nglobe. If you want to build this data from scratch from wind and solar\ndata, follow these instructions. 
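In outline, the download-and-convert steps described below follow atlite's standard cutout workflow. A minimal sketch (assuming atlite's current `Cutout` API and a registered CDS API key; the bounds and year are illustrative, and this is not the repo's exact `build_cutouts.py` code):

```python
import atlite

# One 90-degree-longitude quadrant of the globe for a single weather year;
# build_cutouts.py reads the weather years and bounds from config.yaml instead.
cutout = atlite.Cutout(
    path="quadrant_0.nc",
    module="era5",
    x=slice(-180, -90),  # longitude bounds
    y=slice(-90, 90),    # latitude bounds
    time="2011",
)
cutout.prepare()  # downloads ERA5 data from the CDS; the slow, disk-hungry step

# Convert the weather data to capacity factors for a specific turbine and panel.
wind = cutout.wind(turbine="Vestas_V112_3MW", capacity_factor=True)
pv = cutout.pv(panel="CSi", orientation={"slope": 35.0, "azimuth": 180.0}, capacity_factor=True)
```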
Be warned that the global datasets\ntake up 444 GB per weather year.\n\nFor the wind and solar generation time series, we use the European\nCentre for Medium-Range Weather Forecasts (ECMWF) [ERA5\ndataset](https://cds.climate.copernicus.eu/cdsapp#!/dataset/reanalysis-era5-single-levels).\n\nFirst you need to download the weather data (e.g. wind speeds, direct\nand diffuse solar radiation) as cutouts, then you need to convert them\nto power system data for particular wind turbines and solar\npanels. The weather data is in a 0.25 by 0.25 degree spatial\nresolution grid for the whole globe, but to save space, we downscale\nit to 0.5 by 0.5 degrees.\n\nData is downloaded from the European [Climate Data Store\n(CDS)](https://cds.climate.copernicus.eu/) with the [atlite\nlibrary](https://github.com/FRESNA/atlite) using the script:\n\n`python build_cutouts.py`\n\nNote that you need to register an account on the CDS first in order to\nget a CDS API key.\n\nAs of 19.03.2023 the atlite master cannot cope with such large\ncutouts, so you need to use the [monthly retrieval\nbranch](https://github.com/PyPSA/atlite/tree/feat/era5-monthly-retrieveal)\nof atlite. If you have shapely 2.0 you will need to backport [this bug\nfix](https://github.com/PyPSA/atlite/blob/ad6c9f5a076054e2b953666076447729e33c2fb0/atlite/gis.py#L150)\nby hand in the code.\n\n\nSet the `weather_years` you want to download in `config.yaml`. For\neach year it will download 4 quadrant cutouts (4 slices of 90 degrees\nof longitude) to cover the whole globe. Each quadrant takes up 111 GB,\nso you will need 444 GB per year.\n\nTo build the power system data, i.e. wind and solar generation time\nseries for each point on the globe, run the script:\n\n`python convert_and_downscale_cutout.py`\n\nEach quadrant is split into two octants, one for the northern half of\nthe quadrant with solar panels facing south, and the other for the\nsouthern half with solar panels facing north (with a slope of 35\ndegrees against the horizontal in both cases). The script downscales\nthe spatial resolution to 0.5 by 0.5 degrees to save disk space. Each\noctant takes up 820 MB for both technologies (solar and onshore wind),\nso in total for a year we have 820 MB times 8 octants, i.e. 6.5 GB.\n\n\n## Run without server\n\nSee the regular [WHOBS](https://github.com/PyPSA/WHOBS) repository.\n\n## Run server locally on your own computer\n\nTo run locally you need to start the Python Flask server in one terminal, and an RQ worker in another:\n\nStart the Flask server in one terminal with:\n\n`python server.py`\n\nThis will serve at the local address:\n\nhttp://127.0.0.1:5002/\n\nIn the second terminal start the RQ worker:\n\n`rq worker whobs`\n\nwhere `whobs` is the name of the queue. No jobs will be solved until\nthis is run. You can run multiple workers to process jobs in parallel.\n\n\n## Deploy on a publicly-accessible server\n\nUse nginx, gunicorn for the Python server, rq, and manage with supervisor.\n\nSee [nginx server configuration](nginx-configuration.txt).\n\n\n## License\n\nCopyright 2018-2023 Tom Brown \n\nThis program is free software: you can redistribute it and/or modify\nit under the terms of the GNU Affero General Public License as\npublished by the Free Software Foundation; either [version 3 of the\nLicense](LICENSE.txt), or (at your option) any later version.\n\nThis program is distributed in the hope that it will be useful, but\nWITHOUT ANY WARRANTY; without even the implied warranty of\nMERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. 
See the [GNU\nAffero General Public License](LICENSE.txt) for more details.\n'",,"2019/01/01, 15:58:29",1758,AGPL-3.0,23,164,"2023/03/19, 19:48:13",10,4,11,1,220,0,0.0,0.037735849056603765,,,0,2,false,,false,false,,,https://github.com/PyPSA,www.pypsa.org,,,,https://avatars.githubusercontent.com/u/32890768?v=4,,, CityLearn,Official reinforcement learning environment for demand response and load shaping.,intelligent-environments-lab,https://github.com/intelligent-environments-lab/CityLearn.git,github,,Energy Modeling and Optimization,"2023/10/23, 21:00:51",380,14,106,true,Python,Intelligent Environments Laboratory,intelligent-environments-lab,"Python,Shell",,"b'# CityLearn\nCityLearn is an open source OpenAI Gym environment for the implementation of Multi-Agent Reinforcement Learning (RL) for building energy coordination and demand response in cities. A major challenge for RL in demand response is the ability to compare algorithm performance. Thus, CityLearn facilitates and standardizes the evaluation of RL agents such that different algorithms can be easily compared with each other.\n\n![Demand-response](https://github.com/intelligent-environments-lab/CityLearn/blob/master/assets/images/dr.jpg)\n\n## Environment Overview\n\nCityLearn includes energy models of buildings and distributed energy resources (DER) including air-to-water heat pumps, electric heaters and batteries. A collection of building energy models makes up a virtual district (a.k.a neighborhood or community). In each building, space cooling, space heating and domestic hot water end-use loads may be independently satisfied through air-to-water heat pumps. Alternatively, space heating and domestic hot water loads can be satisfied through electric heaters.\n\n![Citylearn](https://github.com/intelligent-environments-lab/CityLearn/blob/master/assets/images/citylearn_systems.png)\n\n## Installation\nInstall latest release in PyPi with `pip`:\n```console\npip install CityLearn\n```\n\n## Documentation\nRefer to the [docs](https://intelligent-environments-lab.github.io/CityLearn/) for documentation of the CityLearn API.'",,"2019/06/30, 02:41:48",1578,MIT,491,889,"2023/10/20, 12:17:15",4,52,89,63,5,2,0.0,0.20238095238095233,"2023/10/20, 12:19:47",v2.1b12,0,6,false,,true,false,"huyq/offline-hmarl,Roberock/citylearn2023_kit,itaysegev/citylearn_utils,instadeepai/og-marl,Tobi-Tob/CityLearn2023,mal84emma/VoI_in_CAS,narest-qa/repo68,intelligent-environments-lab/merlin-apen-2023,kenzeng24/March-Madness-Optimization,SVJayanthi/SynaptikApp,ludwigbald/msc-thesis,EECi/Annex_37,Demosthen/ActiveRL,RaOneK/math509Final2022",,https://github.com/intelligent-environments-lab,http://nagy.caee.utexas.edu,"Austin, TX",,,https://avatars.githubusercontent.com/u/29540963?v=4,,, rl-testbed-for-energyplus,Reinforcement Learning Testbed for Power Consumption Optimization using EnergyPlus.,IBM,https://github.com/IBM/rl-testbed-for-energyplus.git,github,,Energy Modeling and Optimization,"2023/09/05, 13:07:29",157,0,27,true,Python,International Business Machines,IBM,"Python,Dockerfile",,"b'[![unit tests](https://github.com/IBM/rl-testbed-for-energyplus/actions/workflows/test.yml/badge.svg)](https://github.com/IBM/rl-testbed-for-energyplus/actions/workflows/test.yml)\n\n# Project Description\nReinforcement Learning Testbed for Power Consumption Optimization.\n\n## Contributing to the project\nWe welcome contributions to this project in many forms. There\'s always plenty to do! 
Full details of how to contribute to this project are documented in the [CONTRIBUTING.md](CONTRIBUTING.md) file.\n\n## Maintainers\nThe project\'s [maintainers](MAINTAINERS.txt): are responsible for reviewing and merging all pull requests and they guide the over-all technical direction of the project.\n\n## Supported platforms\n\nWe have tested on the following platforms.\n- macOS High Sierra (Version 10.13.6)\n- macOS Catalina (Version 10.15.3)\n- Ubuntu 20.04 LTS\n\n## Installation\n\n### Docker\n\nThe easiest way to setup a training environment is to use the docker image. See instructions [here](docker/README.md).\nFor manual installation, see below.\n\n### Building from source\n\nInstallation of rl-testbed-for-energyplus consists of three parts:\n\n- Install EnergyPlus prebuilt package\n- Build patched EnergyPlus\n- Install built executables\n\n#### Install EnergyPlus prebuilt package\n\nFirst, download pre-built package of EnergyPlus and install it.\nThis is not for executing normal version of EnergyPlus, but to get some pre-compiled binaries and data files that can not be generated from source code.\n\nSupported EnergyPlus versions:\n\n| | Linux | MacOS |\n|-------|--------------------------------------------------------------------------------------------------------------------------------------------------------|------------------------------------------------------------------------------------------------------------------------------------------------------------|\n| 8.8.0 | [EnergyPlus-8.8.0-7c3bbe4830-Linux-x86_64.sh](https://github.com/NREL/EnergyPlus/releases/download/v8.8.0/EnergyPlus-8.8.0-7c3bbe4830-Linux-x86_64.sh) | [EnergyPlus-8.8.0-7c3bbe4830-Darwin-x86_64.dmg](https://github.com/NREL/EnergyPlus/releases/download/v8.8.0/EnergyPlus-8.8.0-7c3bbe4830-Darwin-x86_64.dmg) |\n| 9.1.0 | [EnergyPlus-9.1.0-08d2e308bb-Linux-x86_64.sh](https://github.com/NREL/EnergyPlus/releases/download/v9.1.0/EnergyPlus-9.1.0-08d2e308bb-Linux-x86_64.sh) | [EnergyPlus-9.1.0-08d2e308bb-Darwin-x86_64.dmg](https://github.com/NREL/EnergyPlus/releases/download/v9.1.0/EnergyPlus-9.1.0-08d2e308bb-Darwin-x86_64.dmg) |\n| 9.2.0 | [EnergyPlus-9.2.0-921312fa1d-Linux-x86_64.sh](https://github.com/NREL/EnergyPlus/releases/download/v9.2.0/EnergyPlus-9.2.0-921312fa1d-Linux-x86_64.sh) | [EnergyPlus-9.2.0-921312fa1d-Darwin-x86_64.dmg](https://github.com/NREL/EnergyPlus/releases/download/v9.2.0/EnergyPlus-9.2.0-921312fa1d-Darwin-x86_64.dmg) |\n| 9.3.0 | [EnergyPlus-9.3.0-baff08990c-Linux-x86_64.sh](https://github.com/NREL/EnergyPlus/releases/download/v9.3.0/EnergyPlus-9.3.0-baff08990c-Linux-x86_64.sh) | [EnergyPlus-9.3.0-baff08990c-Darwin-x86_64.dmg](https://github.com/NREL/EnergyPlus/releases/download/v9.3.0/EnergyPlus-9.3.0-baff08990c-Darwin-x86_64.dmg) |\n| 9.4.0 | [EnergyPlus-9.4.0-998c4b761e-Linux-Ubuntu20.04-x86_64.sh](https://github.com/NREL/EnergyPlus/releases/download/v9.4.0/EnergyPlus-9.4.0-998c4b761e-Linux-Ubuntu20.04-x86_64.sh) | [EnergyPlus-9.4.0-998c4b761e-Darwin-macOS10.15-x86_64.dmg](https://github.com/NREL/EnergyPlus/releases/download/v9.4.0/EnergyPlus-9.4.0-998c4b761e-Darwin-macOS10.15-x86_64.dmg) |\n| 9.5.0 | [EnergyPlus-9.5.0-de239b2e5f-Linux-Ubuntu20.04-x86_64.sh](https://github.com/NREL/EnergyPlus/releases/download/v9.5.0/EnergyPlus-9.5.0-de239b2e5f-Linux-Ubuntu20.04-x86_64.sh) | [EnergyPlus-9.5.0-de239b2e5f-Darwin-macOS11.2-arm64.dmg](https://github.com/NREL/EnergyPlus/releases/download/v9.5.0/EnergyPlus-9.5.0-de239b2e5f-Darwin-macOS11.2-arm64.dmg) |\n\nand up to EnergyPlus 22.2.0. 
See the `EnergyPlus` folder, which provides a list of available patches.\n\nYou can also download the installer at https://github.com/NREL/EnergyPlus/releases/.\n\n##### Ubuntu\n\n1. Go to the web page shown above.\n2. Right-click the relevant link in the supported versions table and select `Save link As` from the menu to download the installation image.\n3. (9.1.0, Linux only) Apply the patch to the downloaded file (the EnergyPlus 9.1.0 installation script unpacks in /usr/local instead of /usr/local/EnergyPlus-9.1.0)\n```\n$ cd \n$ patch -p0 < rl-testbed-for-energyplus/EnergyPlus/EnergyPlus-9.1.0-08d2e308bb-Linux-x86_64.sh.patch\n```\n4. Execute the installation image. The example below is for EnergyPlus 9.1.0\n```\n$ sudo bash /EnergyPlus-9.1.0-08d2e308bb-Linux-x86_64.sh\n```\n\nEnter your admin password if required.\nSpecify `/usr/local` as the install directory.\nRespond with `/usr/local/bin` if asked for the symbolic link location.\nThe package will be installed at `/usr/local/EnergyPlus-`.\n\n##### macOS\n\n1. Go to the web page shown above.\n2. Right-click in the supported versions table and select `Save link As` from the menu to download the installation image.\n3. Double-click the downloaded package, and follow the instructions.\nThe package will be installed in `/Applications/EnergyPlus-`.\n\n#### Build patched EnergyPlus\n\nDownload the source code of EnergyPlus and rl-testbed-for-energyplus. In the lines below, replace ``\nwith the version you\'re using (for instance, `9.3.0`)\n\n```\n$ cd \n$ git clone -b v git@github.com:NREL/EnergyPlus.git\n$ git clone git@github.com:ibm/rl-testbed-for-energyplus.git\n```\n\nApply the patch to EnergyPlus and build. Replace ``\nwith the version you\'re using (for instance, `9-3-0`)\n\n```\n$ cd /EnergyPlus\n$ patch -p1 < ../rl-testbed-for-energyplus/EnergyPlus/RL-patch-for-EnergyPlus-.patch\n$ mkdir build\n$ cd build\n$ cmake -DCMAKE_INSTALL_PREFIX=/usr/local/EnergyPlus- .. # Ubuntu case (please don\'t forget the two dots at the end)\n$ cmake -DCMAKE_INSTALL_PREFIX=/Applications/EnergyPlus- .. # macOS case (please don\'t forget the two dots at the end)\n$ make -j4\n```\n\n#### Install built executables\n\n```\n$ sudo make install\n```\n\n#### Install Python dependencies\n\nPython3 >= 3.8 is required.\n\n##### OpenAI Baselines\n\n```\n$ pip3 install -r requirements/baselines.txt\n```\n\nMain dependencies:\n\n- tensorflow 2.5\n- baselines 0.1.6\n- gym 0.15.7\n\nNotes on the baselines dependency:\n\n- baselines 0.1.5 fails to install when MuJoCo can\'t be found, which is why 0.1.6 is required (available from source only)\n- if the baselines 0.1.6 installation fails because TensorFlow is missing, install tensorflow manually first, then retry.\n\nFor more information on baselines requirements, see https://github.com/openai/baselines.\n\nOlder versions:\n\nTo run on Ubuntu 18.04, you\'ll need the following pip dependencies:\n\n```\nscipy==1.5.4\ntensorflow==1.15.4\n```\n\n##### Ray RLlib\n\n```\n$ pip3 install -r requirements/ray.txt\n```\n\n## How to run\n\n### Set up\n\nSome environment variables must be defined. 
`ENERGYPLUS_VERSION` must be adapted to your version.\n\nIn `$(HOME)/.bashrc`\n```\n# Specify the top directory\nTOP=/rl-testbed-for-energyplus\nexport PYTHONPATH=${PYTHONPATH}:${TOP}\n\nif [ `uname` == ""Darwin"" ]; then\n\tenergyplus_instdir=""/Applications""\nelse\n\tenergyplus_instdir=""/usr/local""\nfi\nENERGYPLUS_VERSION=""8-8-0""\n#ENERGYPLUS_VERSION=""9-1-0""\n#ENERGYPLUS_VERSION=""9-2-0""\n#ENERGYPLUS_VERSION=""9-3-0""\nENERGYPLUS_DIR=""${energyplus_instdir}/EnergyPlus-${ENERGYPLUS_VERSION}""\nWEATHER_DIR=""${ENERGYPLUS_DIR}/WeatherData""\nexport ENERGYPLUS=""${ENERGYPLUS_DIR}/energyplus""\nMODEL_DIR=""${TOP}/EnergyPlus/Model-${ENERGYPLUS_VERSION}""\n\n# Weather file.\n# Single weather file or multiple weather files separated by comma character.\nexport ENERGYPLUS_WEATHER=""${WEATHER_DIR}/USA_CA_San.Francisco.Intl.AP.724940_TMY3.epw""\n#export ENERGYPLUS_WEATHER=""${WEATHER_DIR}/USA_CO_Golden-NREL.724666_TMY3.epw""\n#export ENERGYPLUS_WEATHER=""${WEATHER_DIR}/USA_FL_Tampa.Intl.AP.722110_TMY3.epw""\n#export ENERGYPLUS_WEATHER=""${WEATHER_DIR}/USA_IL_Chicago-OHare.Intl.AP.725300_TMY3.epw""\n#export ENERGYPLUS_WEATHER=""${WEATHER_DIR}/USA_VA_Sterling-Washington.Dulles.Intl.AP.724030_TMY3.epw""\n#export ENERGYPLUS_WEATHER=""${WEATHER_DIR}/USA_CA_San.Francisco.Intl.AP.724940_TMY3.epw,${WEATHER_DIR}/USA_CO_Golden-NREL.724666_TMY3.epw,${WEATHER_DIR}/USA_FL_Tampa.Intl.AP.722110_TMY3.epw""\n\n# Output directory ""openai-YYYY-MM-DD-HH-MM-SS-mmmmmm"" is created in\n# the directory specified by ENERGYPLUS_LOGBASE or in the current directory if not specified.\nexport ENERGYPLUS_LOGBASE=""${HOME}/eplog""\n\n# Model file. Uncomment one.\n#export ENERGYPLUS_MODEL=""${MODEL_DIR}/2ZoneDataCenterHVAC_wEconomizer_Temp.idf"" # Temp. setpoint control\nexport ENERGYPLUS_MODEL=""${MODEL_DIR}/2ZoneDataCenterHVAC_wEconomizer_Temp_Fan.idf"" # Temp. setpoint and fan control\n\n# Run command (example)\n# $ time python3 -m baselines_energyplus.trpo_mpi.run_energyplus --num-timesteps 1000000000\n\n# Monitoring (example)\n# $ python3 -m common.plot_energyplus\n```\n\n### Running\n\nThe simulation process starts with the following command. The only applicable option is `--num-timesteps`.\n\n#### OpenAI Baselines\n\n```\n$ time python3 -m baselines_energyplus.trpo_mpi.run_energyplus --num-timesteps 1000000000\n```\n\n#### Ray RLlib\n\n```\n$ time python3 -m ray_energyplus.ppo.run_energyplus --num-timesteps 1000000000\n```\n\nOutput files are generated under the directory `${ENERGYPLUS_LOGBASE}/openai-YYYY-MM-DD-HH-MM-SS-mmmmmm`. These include:\n- log.txt Log file generated by baselines Logger.\n- progress.csv Log file generated by baselines Logger.\n- output/episode-NNNNNNNN/ Episode data\n\nEpisode data contains the following files:\n- 2ZoneDataCenterHVAC_wEconomizer_Temp_Fan.idf A copy of the model file used in the simulation of the episode\n- USA_CA_San.Francisco.Intl.AP.724940_TMY3.epw A copy of the weather file used in the simulation of the episode\n- eplusout.csv.gz Simulation result in CSV format\n- eplusout.err Error messages. You need to make sure that there are no Severe errors.\n- eplusout.htm Human-readable report file\n
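Because every episode directory carries its full simulation trace as `eplusout.csv.gz`, results are easy to post-process outside the bundled tools. A minimal sketch with pandas (the file path follows the layout above; actual column names depend on the IDF model and EnergyPlus version):

```python
import pandas as pd

# Path to one episode's simulation trace (see the directory layout above).
df = pd.read_csv("eplusout.csv.gz")

# Discover which output variables the model reports ...
print(df.columns.tolist()[:10])

# ... and summarize them, similar to the statistics that plot_energyplus prints.
print(df.describe())
```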
\n### Monitoring\n\nYou can monitor the progress of the simulation using the plot_energyplus utility.\n\n```\n$ python3 -m common.plot_energyplus\nOptions:\n- -l Specify log directory (usually openai-YYYY-MM-DD-HH-MM-SS-mmmmmm)\n- -c Specify single CSV file to view\n- -d Dump every timestep in CSV file (dump_timesteps.csv)\n- -D Dump episodes in CSV file (dump_episodes.dat)\n```\n\nIf neither the `-l` nor the `-c` option is specified, plot_energyplus tries to open the latest directory under the `${ENERGYPLUS_LOGBASE}` directory.\nIf neither `-d` nor `-D` is specified, the progress window is opened.\n\n![EnergyPlus monitor](/images/energyplus_plot.png)\n\nSeveral graphs are shown.\n1. Zone temperature and outdoor temperature\n2. West zone return air temperature and west zone setpoint temperature\n3. Mixed air, fan, and DEC outlet temperatures\n4. IEC, CW, DEC outlet temperatures\n5. Electric demand power (whole building, facility, HVAC)\n6. Reward\n\nOnly the current episode is shown in graphs 1 to 5. The current episode is selected by pushing one of the ""First"", ""Prev"", ""Next"", or\n""Last"" buttons, or by directly clicking the appropriate point on the episode bar at the bottom.\nIf you\'re at the last episode, the current episode moves automatically to the latest one as each new episode is completed.\n\nNote: The reward value shown in graph 6 is retrieved from the ""progress.csv"" file generated by the TRPO baseline, which is not\nnecessarily the same as the reward value computed by our reward function.\n\nYou can pan or zoom each graph by entering pan/zoom mode by clicking the cross-arrows on the bottom left of the window.\n\nWhen a new episode is shown in the window, some statistical information is shown as follows:\n\n```\nepisode 362\nread_episode: file=/home/moriyama/eplog/openai-2018-07-04-10-48-46-712881/output/episode-00000362/eplusout.csv.gz\nReward ave= 0.77, min= 0.40, max= 1.33, std= 0.22\nwestzone_temp ave=22.93, min=21.96, max=23.37, std= 0.19\neastzone_temp ave=22.94, min=22.10, max=23.51, std= 0.17\nPower consumption ave=102,243.47, min=65,428.31, max=135,956.47, std=18,264.50\npue ave= 1.27, min= 1.02, max= 1.63, std= 0.13\nwestzone_temp distribution\n degree 0.0-0.9 0.0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9\n -------------------------------------------------------------------------\n 18.0C 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0%\n 19.0C 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0%\n 20.0C 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0%\n 21.0C 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0%\n 22.0C 50.8% 0.0% 0.1% 0.7% 3.4% 5.6% 2.2% 0.7% 1.0% 0.9% 36.4%\n 23.0C 49.2% 49.0% 0.2% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0%\n 24.0C 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0%\n 25.0C 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0%\n 26.0C 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0%\n 27.0C 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0%\n```\nThe reward value shown above is computed by applying the reward function to the simulation result.\n\n## License \nThe Reinforcement Learning Testbed for Power Consumption Optimization Project uses the [MIT License](LICENSE) software license.\n\n## How to cite\nFor citing the use or extension of this testbed, you may cite our paper at AsiaSim 2018, which can be found at [Springer](https://link.springer.com/chapter/10.1007/978-981-13-2853-4_4) or as a slightly revised version at 
[Arxiv](https://arxiv.org/abs/1808.10427). You may use the following BibTeX entry:\n```\n@InProceedings{10.1007/978-981-13-2853-4_4,\nauthor=""Moriyama, Takao and De Magistris, Giovanni and Tatsubori, Michiaki and Pham, Tu-Hoa and Munawar, Asim and Tachibana, Ryuki"",\ntitle=""Reinforcement Learning Testbed for Power-Consumption Optimization"",\nbooktitle=""Methods and Applications for Modeling and Simulation of Complex Systems"",\nyear=""2018"",\npublisher=""Springer Singapore"",\naddress=""Singapore"",\npages=""45--59"",\nisbn=""978-981-13-2853-4""\n}\n```\n\n## Related information\n- A pre-print version of AsiaSim2018 paper on arXiv: https://arxiv.org/abs/1808.10427\n- EnergyPlus: https://github.com/NREL/EnergyPlus\n- OpenAI Gym: https://github.com/OpenAI/gym\n- OpenAI Baselines: https://github.com/OpenAI/baselines\n- Ray: https://github.com/ray-project/ray\n'",",https://arxiv.org/abs/1808.10427,https://arxiv.org/abs/1808.10427\n-","2018/08/03, 12:38:06",1909,MIT,14,219,"2023/09/05, 13:07:29",21,51,94,6,50,0,0.0,0.22754491017964074,,,0,8,false,,false,true,,,https://github.com/IBM,https://www.ibm.com/opensource/,"Armonk, New York, U.S.",,,https://avatars.githubusercontent.com/u/1459110?v=4,,, tsam,A Python package which uses different machine learning algorithms for the aggregation of time series.,FZJ-IEK3-VSA,https://github.com/FZJ-IEK3-VSA/tsam.git,github,"clustering,timeseries,energy-system,typical-periods,optimization,aggregation,python,time-series",Energy Modeling and Optimization,"2023/08/25, 07:10:36",130,28,25,true,Python,FZJ-IEK3,FZJ-IEK3-VSA,Python,https://tsam.readthedocs.io/,"b'\xef\xbb\xbf[![pytest master status](https://github.com/FZJ-IEK3-VSA/tsam/actions/workflows/pytest.yml/badge.svg?branch=master)](https://github.com/FZJ-IEK3-VSA/tsam/actions) [![Version](https://img.shields.io/pypi/v/tsam.svg)](https://pypi.python.org/pypi/tsam) [![Documentation Status](https://readthedocs.org/projects/tsam/badge/?version=latest)](https://tsam.readthedocs.io/en/latest/) [![PyPI - License](https://img.shields.io/pypi/l/tsam)]((https://github.com/FZJ-IEK3-VSA/tsam/blob/master/LICENSE.txt)) 
[![codecov](https://codecov.io/gh/FZJ-IEK3-VSA/tsam/branch/master/graph/badge.svg)](https://codecov.io/gh/FZJ-IEK3-VSA/tsam)\n[![badge](https://img.shields.io/badge/launch-binder-579aca.svg?logo=data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAAFkAAABZCAMAAABi1XidAAAB8lBMVEX///9XmsrmZYH1olJXmsr1olJXmsrmZYH1olJXmsr1olJXmsrmZYH1olL1olJXmsr1olJXmsrmZYH1olL1olJXmsrmZYH1olJXmsr1olL1olJXmsrmZYH1olL1olJXmsrmZYH1olL1olL0nFf1olJXmsrmZYH1olJXmsq8dZb1olJXmsrmZYH1olJXmspXmspXmsr1olL1olJXmsrmZYH1olJXmsr1olL1olJXmsrmZYH1olL1olLeaIVXmsrmZYH1olL1olL1olJXmsrmZYH1olLna31Xmsr1olJXmsr1olJXmsrmZYH1olLqoVr1olJXmsr1olJXmsrmZYH1olL1olKkfaPobXvviGabgadXmsqThKuofKHmZ4Dobnr1olJXmsr1olJXmspXmsr1olJXmsrfZ4TuhWn1olL1olJXmsqBi7X1olJXmspZmslbmMhbmsdemsVfl8ZgmsNim8Jpk8F0m7R4m7F5nLB6jbh7jbiDirOEibOGnKaMhq+PnaCVg6qWg6qegKaff6WhnpKofKGtnomxeZy3noG6dZi+n3vCcpPDcpPGn3bLb4/Mb47UbIrVa4rYoGjdaIbeaIXhoWHmZYHobXvpcHjqdHXreHLroVrsfG/uhGnuh2bwj2Hxk17yl1vzmljzm1j0nlX1olL3AJXWAAAAbXRSTlMAEBAQHx8gICAuLjAwMDw9PUBAQEpQUFBXV1hgYGBkcHBwcXl8gICAgoiIkJCQlJicnJ2goKCmqK+wsLC4usDAwMjP0NDQ1NbW3Nzg4ODi5+3v8PDw8/T09PX29vb39/f5+fr7+/z8/Pz9/v7+zczCxgAABC5JREFUeAHN1ul3k0UUBvCb1CTVpmpaitAGSLSpSuKCLWpbTKNJFGlcSMAFF63iUmRccNG6gLbuxkXU66JAUef/9LSpmXnyLr3T5AO/rzl5zj137p136BISy44fKJXuGN/d19PUfYeO67Znqtf2KH33Id1psXoFdW30sPZ1sMvs2D060AHqws4FHeJojLZqnw53cmfvg+XR8mC0OEjuxrXEkX5ydeVJLVIlV0e10PXk5k7dYeHu7Cj1j+49uKg7uLU61tGLw1lq27ugQYlclHC4bgv7VQ+TAyj5Zc/UjsPvs1sd5cWryWObtvWT2EPa4rtnWW3JkpjggEpbOsPr7F7EyNewtpBIslA7p43HCsnwooXTEc3UmPmCNn5lrqTJxy6nRmcavGZVt/3Da2pD5NHvsOHJCrdc1G2r3DITpU7yic7w/7Rxnjc0kt5GC4djiv2Sz3Fb2iEZg41/ddsFDoyuYrIkmFehz0HR2thPgQqMyQYb2OtB0WxsZ3BeG3+wpRb1vzl2UYBog8FfGhttFKjtAclnZYrRo9ryG9uG/FZQU4AEg8ZE9LjGMzTmqKXPLnlWVnIlQQTvxJf8ip7VgjZjyVPrjw1te5otM7RmP7xm+sK2Gv9I8Gi++BRbEkR9EBw8zRUcKxwp73xkaLiqQb+kGduJTNHG72zcW9LoJgqQxpP3/Tj//c3yB0tqzaml05/+orHLksVO+95kX7/7qgJvnjlrfr2Ggsyx0eoy9uPzN5SPd86aXggOsEKW2Prz7du3VID3/tzs/sSRs2w7ovVHKtjrX2pd7ZMlTxAYfBAL9jiDwfLkq55Tm7ifhMlTGPyCAs7RFRhn47JnlcB9RM5T97ASuZXIcVNuUDIndpDbdsfrqsOppeXl5Y+XVKdjFCTh+zGaVuj0d9zy05PPK3QzBamxdwtTCrzyg/2Rvf2EstUjordGwa/kx9mSJLr8mLLtCW8HHGJc2R5hS219IiF6PnTusOqcMl57gm0Z8kanKMAQg0qSyuZfn7zItsbGyO9QlnxY0eCuD1XL2ys/MsrQhltE7Ug0uFOzufJFE2PxBo/YAx8XPPdDwWN0MrDRYIZF0mSMKCNHgaIVFoBbNoLJ7tEQDKxGF0kcLQimojCZopv0OkNOyWCCg9XMVAi7ARJzQdM2QUh0gmBozjc3Skg6dSBRqDGYSUOu66Zg+I2fNZs/M3/f/Grl/XnyF1Gw3VKCez0PN5IUfFLqvgUN4C0qNqYs5YhPL+aVZYDE4IpUk57oSFnJm4FyCqqOE0jhY2SMyLFoo56zyo6becOS5UVDdj7Vih0zp+tcMhwRpBeLyqtIjlJKAIZSbI8SGSF3k0pA3mR5tHuwPFoa7N7reoq2bqCsAk1HqCu5uvI1n6JuRXI+S1Mco54YmYTwcn6Aeic+kssXi8XpXC4V3t7/ADuTNKaQJdScAAAAAElFTkSuQmCC)](https://mybinder.org/v2/gh/FZJ-IEK3-VSA/voila-tsam/HEAD?urlpath=voila/render/Time-Series-Aggregation-Module.ipynb)\n\n \n\n# tsam - Time Series Aggregation Module\ntsam is a python package which uses different machine learning algorithms for the aggregation of time series. 
The data aggregation can be performed in two freely combinable dimensions: by representing the time series by a user-defined number of typical periods or by decreasing the temporal resolution.\ntsam was originally designed for reducing the computational load of large-scale energy system optimization models by aggregating their input data, but it is applicable to all types of time series, e.g., weather data, load data, both simultaneously, or other arbitrary groups of time series.\n\nThe documentation of the tsam code can be found [**here**](https://tsam.readthedocs.io/en/latest/index.html).\n\n## Features\n* flexible handling of multidimensional time-series via the pandas module\n* different aggregation methods implemented (averaging, k-means, exact k-medoids, hierarchical, k-maxoids, k-medoids with contiguity), which are based on scikit-learn, or self-programmed with pyomo\n* hypertuning of aggregation parameters to find the optimal combination of the number of segments inside a period and the number of typical periods\n* novel representation methods, keeping statistical attributes, such as the distribution \n* flexible integration of extreme periods as own cluster centers\n* weighting for the case of multidimensional time-series to represent their relevance\n\n\n## Installation\nDirectly install via pip as follows:\n\n\tpip install tsam\n\nAlternatively, clone a local copy of the repository to your computer\n\n\tgit clone https://github.com/FZJ-IEK3-VSA/tsam.git\n\t\nThen install tsam via pip as follows\n\t\n\tcd tsam\n\tpip install . \n\t\nOr install directly via python as \n\n\tpython setup.py install\n\t\nIn order to use the k-medoids clustering, make sure that you have installed a MILP solver. By default, [HiGHS](https://github.com/ERGO-Code/HiGHS) is used. Nevertheless, in case you have access to a license, we recommend commercial solvers (e.g. Gurobi or CPLEX) since they offer better performance.\n\t\n\t\n## Examples\n\n### Basic workflow\n\nA small example of how tsam can be used is described as follows\n```python\n\timport pandas as pd\n\timport tsam.timeseriesaggregation as tsam\n```\n\n\nRead in the time series data set with pandas\n```python\n\traw = pd.read_csv(\'testdata.csv\', index_col = 0)\n```\n\nInitialize an aggregation object and define the length of a single period, the number of typical periods, the number of segments in each period, the aggregation method, and the representation method - here duration/distribution representation, which contains the minimum and maximum value of the original time series \n```python\n\taggregation = tsam.TimeSeriesAggregation(raw, \n\t\t\t\t\t\tnoTypicalPeriods = 8, \n\t\t\t\t\t\thoursPerPeriod = 24, \n\t\t\t\t\t\tsegmentation = True,\n\t\t\t\t\t\tnoSegments = 8,\n\t\t\t\t\t\trepresentationMethod = ""distributionAndMinMaxRepresentation"",\n\t\t\t\t\t\tdistributionPeriodWise = False,\n\t\t\t\t\t\tclusterMethod = \'hierarchical\'\n\t\t\t\t\t\t)\n```\n\nRun the aggregation to typical periods\n```python\n\ttypPeriods = aggregation.createTypicalPeriods()\n```\n\nStore the results as a .csv file\n\t\n```python\n\ttypPeriods.to_csv(\'typperiods.csv\')\n```\n\n### Detailed examples\n\nA [**first example**](/examples/aggregation_example.ipynb) shows the capabilities of tsam as a jupyter notebook. \n\nA [**second example**](/examples/aggregation_optiinput.ipynb) shows in more detail how to access the relevant aggregation results required for parametrizing e.g. an optimization.
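Beyond the aggregated `typPeriods` frame, a few attributes of the aggregation object carry the bookkeeping that an optimization model typically needs. A short sketch continuing the basic-workflow snippet above (attribute names as in recent tsam versions; the second example notebook is the authoritative reference):

```python
# Continues the basic-workflow example above (requires the `aggregation` object).
weights = aggregation.clusterPeriodNoOccur  # how often each typical period occurs
order = aggregation.clusterOrder            # maps each original period to its typical period

# Re-expand the typical periods onto the original index to judge the aggregation error.
predicted = aggregation.predictOriginalData()
print(aggregation.accuracyIndicators())     # error metrics (e.g. RMSE) per time series
```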
\n\nThe example time series are based on a department [publication](https://www.mdpi.com/1996-1073/10/3/361) and the [test reference years of the DWD](https://www.dwd.de/DE/leistungen/testreferenzjahre/testreferenzjahre.html).\n\n## License\n\nMIT License\n\nCopyright (C) 2016-2022 Leander Kotzur (FZJ IEK-3), Maximilian Hoffmann (FZJ IEK-3), Peter Markewitz (FZJ IEK-3), Martin Robinius (FZJ IEK-3), Detlef Stolten (FZJ IEK-3)\n\nYou should have received a copy of the MIT License along with this program.\nIf not, see https://opensource.org/licenses/MIT\n\nThe core developer team sits in the [Institute of Energy and Climate Research - Techno-Economic Energy Systems Analysis (IEK-3)](https://www.fz-juelich.de/iek/iek-3/EN/Home/home_node.html) belonging to the [Forschungszentrum J\xc3\xbclich](https://www.fz-juelich.de/).\n\n## Citing and further reading\n\nIf you want to use tsam in a published work, **please kindly cite** our latest journal articles:\n* Hoffmann et al. (2022):\\\n[**The Pareto-Optimal Temporal Aggregation of Energy System Models**](https://www.sciencedirect.com/science/article/abs/pii/S0306261922004342)\n\n\nIf you are further interested in the impact of time series aggregation on the cost-optimal results of different energy system use cases, you can find a publication which validates the methods and describes their capabilities via the following [**link**](https://www.sciencedirect.com/science/article/pii/S0960148117309783). A second publication introduces a method for modeling state variables (e.g. the state of charge of energy storage components) between the aggregated typical periods, which can be found [**here**](https://www.sciencedirect.com/science/article/pii/S0306261918300242). Finally, the potential of time series aggregation to simplify mixed integer linear problems is investigated [**here**](https://www.mdpi.com/1996-1073/12/14/2825).\n\nThe publications about time series aggregation for energy system optimization models published alongside the development of tsam are listed below:\n* Hoffmann et al. (2021):\\\n[**The Pareto-Optimal Temporal Aggregation of Energy System Models**](https://www.sciencedirect.com/science/article/abs/pii/S0306261922004342)\\\n(open access manuscript to be found [**here**](https://arxiv.org/abs/1710.07593))\n* Hoffmann et al. (2021):\\\n[**Typical periods or typical time steps? A multi-model analysis to determine the optimal temporal aggregation for energy system models**](https://www.sciencedirect.com/science/article/abs/pii/S0306261921011545)\n* Hoffmann et al. (2020):\\\n[**A Review on Time Series Aggregation Methods for Energy System Models**](https://www.mdpi.com/1996-1073/13/3/641)\n* Kannengie\xc3\x9fer et al. (2019):\\\n[**Reducing Computational Load for Mixed Integer Linear Programming: An Example for a District and an Island Energy System**](https://www.mdpi.com/1996-1073/12/14/2825)\n* Kotzur et al. (2018):\\\n[**Time series aggregation for energy system design: Modeling seasonal storage**](https://www.sciencedirect.com/science/article/pii/S0306261918300242)\\\n(open access manuscript to be found [**here**](https://arxiv.org/abs/1710.07593))\n* Kotzur et al. 
(2018):\\\n[**Impact of different time series aggregation methods on optimal energy system design**](https://www.sciencedirect.com/science/article/abs/pii/S0960148117309783)\\\n(open access manuscript to be found [**here**](https://arxiv.org/abs/1708.00420))\n\n\n\n## Acknowledgement\n\nThis work is supported by the Helmholtz Association under the Joint Initiative [""Energy System 2050 A Contribution of the Research Field Energy""](https://www.helmholtz.de/en/research/energy/energy_system_2050/) and the program [""Energy System Design""](https://www.esd.kit.edu/index.php) and within the [BMWi/BMWk](https://www.bmwk.de/Navigation/DE/Home/home.html) funded project [**METIS**](http://www.metis-platform.net/).\n\n\n\n\n'",",https://arxiv.org/abs/1710.07593,https://arxiv.org/abs/1710.07593,https://arxiv.org/abs/1708.00420","2017/05/15, 08:36:11",2354,MIT,45,393,"2023/08/25, 07:06:50",2,50,78,13,61,0,0.2,0.5078125,"2023/08/25, 07:12:15",v2.3.1,0,14,false,,false,false,"AncillaryServicesAcquisitionModel/ASAM,KhanhQuy/API_django,tZ3ma/tessif-phd,tZ3ma/tessif-fine-2-2-2,znes/oemof-jordan,felixarjuna/EnSysMod,rl-institut/Uganda_oemof,rl-institut/oemof_api,rl-institut/oemof-B3,openego/eGon-data,archgyn/porfo,FZJ-IEK3-VSA/voila-tsam,fhac-ewi/gas-wasserstoff,OfficialCodexplosive/FINE-GL,znes/oemof-barbados,samuelduchesne/energy-pandas,louisleroy5/trnslator,sbruche/aristopy,andreharewood/open_source_barbados_pub,znes/angus-scenarios,FZJ-IEK3-VSA/tsib,samuelduchesne/archetypal,znes/carpeDIEM,ZNES-datapackages/Status-quo-2015,oemof/oemof-tabular,ZNES-datapackages/100-sea-2050,openego/eGo,openego/eTraGo",,https://github.com/FZJ-IEK3-VSA,https://www.fz-juelich.de/iek/iek-3/EN/Home/home_node.html,Forschungszentrum Jülich,,,https://avatars.githubusercontent.com/u/28654423?v=4,,, TimeSeriesClustering,"Provides simple integration of multi-dimensional time-series data (e.g. multiple attributes such as wind availability, solar availability, and electricity demand) in a single aggregation process.",holgerteichgraeber,https://github.com/holgerteichgraeber/TimeSeriesClustering.jl.git,github,"clustering,optimization,energy-systems,k-means-clustering,k-medoids-clustering,hierarchical-clustering,representative-days,time-series-aggregation,julia",Energy Modeling and Optimization,"2021/01/20, 11:21:15",72,0,10,false,Julia,,,"Julia,TeX",,"b'\xef\xbb\xbf![TimeSeriesClustering](docs/src/assets/clust_for_opt_text.svg)\n===\n[![](https://img.shields.io/badge/docs-stable-blue.svg)](https://holgerteichgraeber.github.io/TimeSeriesClustering.jl/stable)\n[![](https://img.shields.io/badge/docs-dev-blue.svg)](https://holgerteichgraeber.github.io/TimeSeriesClustering.jl/dev)\n[![License](http://img.shields.io/badge/license-MIT-brightgreen.svg?style=flat)](LICENSE)\n[![Build Status](https://travis-ci.com/holgerteichgraeber/TimeSeriesClustering.jl.svg?token=HRFemjSxM1NBCsbHGNDG&branch=master)](https://travis-ci.com/holgerteichgraeber/TimeSeriesClustering.jl)\n[![codecov](https://codecov.io/gh/holgerteichgraeber/TimeSeriesClustering.jl/branch/master/graph/badge.svg)](https://codecov.io/gh/holgerteichgraeber/TimeSeriesClustering.jl)\n[![DOI](https://joss.theoj.org/papers/10.21105/joss.01573/status.svg)](https://doi.org/10.21105/joss.01573)\n\n[TimeSeriesClustering](https://github.com/holgerteichgraeber/TimeSeriesClustering.jl) is a [Julia](https://julialang.org) implementation of unsupervised learning methods for time series datasets. 
It provides functionality for clustering and aggregating, detecting motifs, and quantifying similarity between time series datasets.\nThe software provides a type system for temporal data and an implementation of the most commonly used clustering methods and extreme value selection methods for temporal data.\nIt provides simple integration of multi-dimensional time-series data (e.g. multiple attributes such as wind availability, solar availability, and electricity demand) in a single aggregation process.\nThe software is applicable to general time series datasets and lends itself well to a multitude of application areas within the field of time series data mining.\n\nThe TimeSeriesClustering package was originally developed to perform time series aggregation for energy systems optimization problems. By reducing the number of time steps used in the optimization model, using representative periods leads to significant reductions in the computational complexity of these problems.\nThe package was previously known as `ClustForOpt.jl`.\n\nThe package has three main purposes:\n1) Provide a simple process of finding representative periods (reducing the number of observations) for time-series input data, with implementations of the most commonly used clustering methods and extreme value selection methods.\n2) Provide an interface between representative period data and the application (e.g. an optimization problem) by having representative period data stored in a generalized type system.\n3) Provide a generalized import feature for time series, where variable names, attributes, and node names are automatically stored and can then be used later when the reduced time series is used in the application at hand (e.g. in the definition of sets of the optimization problem).\n\nIn the domain of energy systems optimization, an example problem that uses TimeSeriesClustering for its input data is the package [CapacityExpansion](https://github.com/YoungFaithful/CapacityExpansion.jl), which implements a scalable generation and transmission capacity expansion problem.\n\nThe TimeSeriesClustering package follows the clustering framework presented in [Teichgraeber and Brandt, 2019](https://doi.org/10.1016/j.apenergy.2019.02.012).\nThe package is actively developed, and new features are continuously added.\nFor a reproducible version of the methods and data of the original paper by [Teichgraeber and Brandt, 2019](https://doi.org/10.1016/j.apenergy.2019.02.012), please refer to [v0.1](https://github.com/holgerteichgraeber/TimeSeriesClustering.jl/tree/v0.1) (including shape-based methods such as `k-shape` and `dynamic time warping barycenter averaging`).\n\nThis package is developed by Holger Teichgraeber [@holgerteichgraeber](https://github.com/holgerteichgraeber) and Elias Kuepper [@YoungFaithful](https://github.com/youngfaithful).\n\n## Installation\nThis package runs under julia v1.0 and higher.\nInstall using:\n\n```julia\nimport Pkg\nPkg.add(""TimeSeriesClustering"")\n```\n\n## Documentation\n[Documentation (Stable)](https://holgerteichgraeber.github.io/TimeSeriesClustering.jl/stable): Please refer to this documentation for details on how to use the current version of TimeSeriesClustering. This is the documentation of the default version of the package. 
The default version is on the `master` branch.\n\n[Documentation (Development)](https://holgerteichgraeber.github.io/TimeSeriesClustering.jl/dev): If you like to try the development version of TimeSeriesClustering, please refer to this documentation. The development version is on the `dev` branch.\n\n**See [NEWS](NEWS.md) for significant breaking changes when updating from one version of TimeSeriesClustering to another.**\n\n## Citing TimeSeriesClustering\nIf you find TimeSeriesClustering useful in your work, we kindly request that you cite the following paper ([link](https://doi.org/10.21105/joss.01573)):\n\n```\n @article{Teichgraeber2019joss,\n author = {Teichgraeber, Holger and Kuepper, Lucas Elias and Brandt, Adam R},\n doi = {https://doi.org/10.21105/joss.01573},\n journal = {Journal of Open Source Software},\n number = {41},\n pages = {1573},\n title = {TimeSeriesClustering : An extensible framework in Julia},\n volume = {4},\n year = {2019}\n }\n```\n\nIf you find this package useful, our [paper](https://doi.org/10.1016/j.apenergy.2019.02.012) on comparing clustering methods for energy systems optimization problems may additionally be of interest.\n\n## Quick Start Guide\n\nThis quick start guide introduces the main concepts of using TimeSeriesClustering. The examples are taken from problems in the domain of scenario reduction for energy systems optimization. For more detail on the different functionalities that TimeSeriesClustering provides, please refer to the subsequent chapters of the documentation or the examples in the [examples](https://github.com/holgerteichgraeber/TimeSeriesClustering.jl/tree/master/examples) folder, specifically [workflow_introduction.jl](https://github.com/holgerteichgraeber/TimeSeriesClustering.jl/blob/master/examples/workflow_introduction.jl).\n\nGenerally, the workflow consists of three steps:\n- load data\n- find representative periods (clustering + extreme period selection)\n- optimization\n\n## Example Workflow\nAfter TimeSeriesClustering is installed, you can use it by saying:\n```@repl workflow\nusing TimeSeriesClustering\n```\n\nThe first step is to load the data. The following example loads hourly wind, solar, and demand data for Germany (1 region) for one year.\n```@repl workflow\nts_input_data = load_timeseries_data(:CEP_GER1)\n```\nThe output `ts_input_data` is a `ClustData` data struct that contains the data and additional information about the data.\n```@repl workflow\nts_input_data.data # a dictionary with the data.\nts_input_data.data[""wind-germany""] # the wind data (choose solar, el_demand as other options in this example)\nts_input_data.K # number of periods\n```\n\nThe second step is to cluster the data into representative periods. Here, we use k-means clustering and get 5 representative periods.\n```@repl workflow\nclust_res = run_clust(ts_input_data;method=""kmeans"",n_clust=5)\nts_clust_data = clust_res.clust_data\n```\nThe `ts_clust_data` is a `ClustData` data struct, this time with clustered data (i.e. less representative periods).\n```@repl workflow\nts_clust_data.data # the clustered data\nts_clust_data.data[""wind-germany""] # the wind data. 
Note the dimensions compared to ts_input_data\nts_clust_data.K # number of periods\n```\n\nIf this package is used in the domain of energy systems optimization, the clustered input data can be used as input to an [optimization problem](https://www.juliaopt.org).\nThe optimization problem formulated in the package [CapacityExpansion](https://github.com/YoungFaithful/CapacityExpansion.jl) can be used with the data clustered in this example.\n'",",https://doi.org/10.21105/joss.01573,https://doi.org/10.1016/j.apenergy.2019.02.012,https://doi.org/10.1016/j.apenergy.2019.02.012,https://doi.org/10.21105/joss.01573,https://doi.org/10.21105/joss.01573,https://doi.org/10.1016/j.apenergy.2019.02.012","2018/09/19, 22:59:26",1861,MIT,0,565,"2021/01/20, 11:19:21",17,78,116,0,1008,2,0.7,0.4043583535108959,"2019/09/10, 08:02:20",v0.5.3,0,7,false,,false,true,,,,,,,,,,, GridPath,A versatile simulation and optimization platform for power-system planning and operations.,blue-marble,https://github.com/blue-marble/gridpath.git,github,"energy,electricity,power,renewables,renewable-energy,planning,power-systems,power-system-simulation,power-system-analysis,optimization",Energy Modeling and Optimization,"2023/01/24, 20:38:40",74,3,19,true,Python,Blue Marble Analytics,blue-marble,"Python,TypeScript,HTML,JavaScript,CSS",https://www.gridpath.io,"b'[![GridPath Test Suite Status](https://github.com/blue-marble/gridpath/actions/workflows/test_gridpath.yml/badge.svg?branch=main)](https://github.com/blue-marble/gridpath/actions/workflows/test_gridpath.yml)\n[![Documentation Status](https://readthedocs.org/projects/gridpath/badge/?version=latest)](https://gridpath.readthedocs.io/en/latest/?badge=latest)\n[![Coverage Status](https://coveralls.io/repos/github/blue-marble/gridpath/badge.svg?branch=main)](https://coveralls.io/github/blue-marble/gridpath?branch=main)\n[![Code style: black](https://img.shields.io/badge/code%20style-black-000000.svg)](https://github.com/psf/black)\n[![Lint Black](https://github.com/blue-marble/gridpath/actions/workflows/black.yml/badge.svg?branch=main)](https://github.com/blue-marble/gridpath/actions/workflows/black.yml)\n[![DOI](https://zenodo.org/badge/65574330.svg)](https://zenodo.org/badge/latestdoi/65574330)\n\n# Welcome to GridPath\n\n
\n\n![Approaches](doc/graphics/approaches.png)\n\n\nGridPath is a versatile power-system planning platform capable of a range of\nplanning approaches including production-cost, capacity-expansion, \nasset-valuation, and reliability modeling.\n\n# Documentation\nGridPath\'s documentation is hosted on [Read the Docs](https://gridpath.readthedocs.io/en/latest/).\n\n# Installation\n\n## Python\nGridPath is tested on Python 3.8, 3.9, and 3.10. We recommend using Python 3.9. Get Python\n[here](https://www.python.org/downloads/ ""Python download"").\n\n## GridPath Python environment\nYou should create a Python environment for your GridPath installation, e.g. via \n`venv`, [a lightweight environment manager](https://docs.python.org/3/library/venv.html ""venv"") \nthat is part of the standard Python distribution. Make sure to [create](https://docs.python.org/3/library/venv.html#creating-virtual-environments ""create"") and [activate](https://docs.python.org/3/library/venv.html#how-venvs-work ""activate"") the environment before installing GridPath.\n\n## Install GridPath\n\nOnce you have _created and activated_ the GridPath Python environment, you can install GridPath and the Python packages it uses.\n\nFor most users, installing GridPath\'s base set of Python packages and those needed \nto use the graphical user interface will be sufficient. You can do so by navigating to the GridPath root \ndirectory (which is where this `README.md` file is located) and running:\n```bash\npip install .[ui]\n```\n\nYou can install all needed Python \npackages, including the developer extras, by running:\n```bash\npip install .[all]\n```\n\n**NOTE:** If you plan to edit the GridPath code, you should install with the `-e` flag.\n\n## Solver\nYou will need a solver to use this platform. GridPath assumes you will be using Cbc (Coin-or branch and cut) by default, but you can specify a \ndifferent solver.\n\n## Testing your installation\n\nTo test the GridPath codebase, [make sure the GridPath environment you installed to \nis activated](https://docs.python.org/3/library/venv.html#how-venvs-work ""activate"") and use the unittest module as follows from the root directory:\n```bash\npython -m unittest discover tests\n```\n\n# Usage\n\n## The gridpath_run and gridpath_run_e2e commands\nIf you install GridPath via the setup script following the instructions above, \nyou can use the command `gridpath_run` to run a scenario from any directory \n-- as long as your GridPath Python environment is enabled -- as follows:\n```bash\ngridpath_run --scenario SCENARIO_NAME --scenario_location /PATH/TO/SCENARIO\n```\n\nIf you are using the database, you can use the command `gridpath_run_e2e` to \nrun GridPath end-to-end, i.e. get inputs for the scenario from the database, \nsolve the scenario problem, import the results into the database, and \nprocess them. Refer to the documentation for how to build the database.\n\n```bash\ngridpath_run_e2e --scenario SCENARIO_NAME --scenario_location /PATH/TO/SCENARIO\n```\n\nTo see usage and other optional arguments, e.g. 
how to specify a \nsolver, check the help menu, e.g.:\n```bash\ngridpath_run --help\n```\n\n\n## Help\nIn general, you can check usage of GridPath\'s scripts by calling the `--help` \noption, e.g.:\n```bash\npython get_scenario_inputs.py --help\n```\n'",",https://zenodo.org/badge/latestdoi/65574330","2016/08/12, 18:13:28",2630,Apache-2.0,16,1354,"2023/10/23, 16:59:04",77,731,979,122,2,12,0.0,0.22754946727549463,"2023/01/24, 21:18:20",v0.15.0,0,7,false,,false,false,"Kiriti96-Ray/EAY01---Unit-Commitment-,vikipedia/prayas-gridpath,blue-marble/gridpath",,https://github.com/blue-marble,https://www.bluemarble.run,"San Francisco, CA",,,https://avatars.githubusercontent.com/u/22780527?v=4,,, Peaky Finders,A Plotly Dash application with helpful peak load visualizations and a day ahead forecasting model for five different ISOs.,kbaranko,https://github.com/kbaranko/peaky-finders.git,github,,Energy Modeling and Optimization,"2022/09/08, 22:27:34",34,0,13,true,Python,,,"Python,Procfile",,"b'# Peaky-Finders\n\nPeaky Finders is a Plotly Dash application with helpful peak load visualizations and a day-ahead forecasting model for five different ISOs. It does not demonstrate cutting-edge peak load forecasting methods -- there are a handful of high-tech companies and millions of dollars spent trying to solve this problem -- but rather illustrates core concepts and explores how well a model can do with just historical load and temperature data.\n\nThe application has been deployed on Heroku: https://peaky-finders.herokuapp.com/\n\n## Stack\n\n- Python \n- Pandas\n- Matplotlib\n- Scikit-Learn\n- Dash \n- Plotly\n\n## Data\n\nHistorical load data was collected using the Pyiso python library, which provides clean API interfaces to make scraping ISO websites easy. The Darksky API, which provides historical temperature readings for a given latitude and longitude, was used for weather data. For this model, I picked one central coordinate in each ISO territory to make API requests.\n\n## Features\n\n- Day of week (seven days)\n- Holiday (yes or no)\n- Hour of Day (24 hours)\n- Temperature Reading (hourly)\n- Previous Day\xe2\x80\x99s Load (t-24)\n\n## Results \n\nHow well does each model perform? It depends on the ISO. Mean Absolute Error (MAE) for the month of February 2021 in Megawatts (MW):\n\n- CAISO: 455.91\n- MISO: 2,382.66 \n- PJM: 2,886.66\n- NYISO: 347.62\n- ISONE: 522.43\n'",,"2020/01/04, 01:27:52",1390,Apache-2.0,0,174,"2022/08/23, 18:25:03",15,21,21,1,428,15,0.0,0.0,,,0,1,false,,false,false,,,,,,,,,,, Renewcast,"Forecasting renewable energy generation in EU countries with machine learning algorithms, based on Streamlit and sktime.",derevirn,https://github.com/derevirn/renewcast.git,github,"time-series,renewable-energy,machine-learning,streamlit,time-series-forecasting,pycaret",Energy Modeling and Optimization,"2023/02/18, 18:26:35",32,0,5,true,Jupyter Notebook,,,"Jupyter Notebook,Python,CSS,Procfile",http://renewcast.giannis.io,"b'# Renewcast\nA dashboard app that provides forecasts for renewable electricity generation in EU countries, based on Streamlit and PyCaret. Users can select the country and the forecasting model of their preference, ranging from classical approaches to machine learning models. 
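Under the hood, an app like this fits a univariate forecaster per country and generation type. A minimal sketch of that pattern with sktime (synthetic data and a seasonal-naive baseline stand in for the app's real data and models):

```python
import numpy as np
import pandas as pd
from sktime.forecasting.naive import NaiveForecaster

# Synthetic hourly generation series (MW), standing in for real per-country data.
idx = pd.period_range("2023-01-01", periods=24 * 28, freq="H")
y = pd.Series(np.random.default_rng(0).gamma(2.0, 150.0, size=len(idx)), index=idx)

# Seasonal-naive baseline: repeat the last observed day over the next 48 hours.
forecaster = NaiveForecaster(strategy="last", sp=24)
forecaster.fit(y)
y_pred = forecaster.predict(fh=np.arange(1, 49))
print(y_pred.head())
```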
The app has been deployed to Oracle Cloud and is available [here](http://renewcast.giannis.io/).\n\n![Renewcast](images/forecast.png)\n### Towards Data Science Article:\n[Forecasting Renewable Energy Generation with Streamlit and sktime](https://towardsdatascience.com/forecasting-renewable-energy-generation-with-streamlit-and-sktime-ab789ef1299f)\n\n(The source code has been significantly updated after I published this article, so [click here](https://github.com/derevirn/renewcast/tree/sktime_old) for the old version.)\n'",,"2020/08/08, 20:43:09",1173,Apache-2.0,8,61,"2022/08/23, 18:25:03",0,0,0,0,428,0,0,0.0,,,0,1,false,,false,false,,,,,,,,,,, ANDES,Power system transient dynamics simulation with symbolic modeling and numerical analysis.,curent,https://github.com/CURENT/andes.git,github,"simulation,analysis,package,powerflow,timedomain,small-signal,toolbox,tool,library,modeling-dae,eigenvalue-analysis,andes,power-system,power-system-simulation,power-system-analysis,power-system-dynamics",Energy Modeling and Optimization,"2023/08/11, 02:03:34",175,0,44,true,Python,CURENT LTB,CURENT,"Python,Shell,Dockerfile",https://ltb.curent.org,"b'# LTB ANDES\n\n\n\nPython software for symbolic power system modeling and numerical analysis, serving as the core simulation engine for the [CURENT Largescale Testbed][LTB Repository].\n\n| | Latest | Stable |\n|---------------|-----------------------------------------------------------------------------------------------------------------------------------------------|-----------------------------------------------------------------------------------------------------------------------------------------------|\n| Documentation | [![Latest Documentation](https://readthedocs.org/projects/andes/badge/?version=latest)](https://andes.readthedocs.io/en/latest/?badge=latest) | [![Documentation Status](https://readthedocs.org/projects/andes/badge/?version=stable)](https://andes.readthedocs.io/en/stable/?badge=stable) |\n\n| Badges | | |\n|---------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|\n| Downloads | [![PyPI Version](https://img.shields.io/pypi/v/andes.svg)](https://pypi.python.org/pypi/andes) | [![Conda Downloads](https://anaconda.org/conda-forge/andes/badges/downloads.svg)](https://anaconda.org/conda-forge/andes) |\n| Try on Binder | [![Binder](https://mybinder.org/badge_logo.svg)](https://mybinder.org/v2/gh/cuihantao/andes/master) | |\n| Code Quality | [![Codacy Badge](https://api.codacy.com/project/badge/Grade/17b8e8531af343a7a4351879c0e6b5da)](https://app.codacy.com/app/cuihantao/andes?utm_source=github.com&utm_medium=referral&utm_content=cuihantao/andes&utm_campaign=Badge_Grade_Dashboard) | [![Codecov Coverage](https://codecov.io/gh/cuihantao/andes/branch/master/graph/badge.svg)](https://codecov.io/gh/cuihantao/andes) |\n| Build Status | [![GitHub Action Status](https://github.com/cuihantao/andes/workflows/Python%20application/badge.svg)](https://github.com/cuihantao/andes/actions) | [![Azure Pipeline build 
status](https://dev.azure.com/hcui7/hcui7/_apis/build/status/cuihantao.andes?branchName=master)](https://dev.azure.com/hcui7/hcui7/_build/latest?definitionId=1&branchName=master) |\n\n# Why ANDES\nThis software could be of interest to you if you are working on\nDAE modeling, simulation, and control for power systems.\nIt has features that may be useful if you are applying\ndeep (reinforcement) learning to such systems.\n\nANDES is by far easier to use for developing differential-algebraic\nequation (DAE) based models for power system dynamic simulation\nthan other tools such as\n[PSAT](http://faraday1.ucd.ie/psat.html),\n[Dome](http://faraday1.ucd.ie/dome.html) and\n[PST](https://www.ecse.rpi.edu/~chowj/),\nwhile maintaining high numerical efficiency.\n\nANDES comes with a rich set of commercial-grade dynamic models\nwith all details implemented, including limiters, saturation,\nand zeroing out time constants.\n\nANDES produces credible simulation results. The following table\nshows that\n\n1. For the Northeast Power Coordinating Council (NPCC) 140-bus system\n(with GENROU, GENCLS, TGOV1 and IEEEX1),\nANDES results match perfectly with that from TSAT.\n\n2. For the Western Electricity Coordinating Council (WECC) 179-bus\nsystem (with GENROU, IEEEG1, EXST1, ESST3A, ESDC2A, IEEEST and\nST2CUT), ANDES results match closely with those from TSAT and PSS/E.\nNote that TSAT and PSS/E results are not identical, either.\n\n| NPCC Case Study | WECC Case Study |\n| --------------------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------- |\n| ![](https://raw.githubusercontent.com/cuihantao/andes/master/docs/source/images/example-npcc/omega.png) | ![](https://raw.githubusercontent.com/cuihantao/andes/master/docs/source/images/example-wecc/omega.png) |\n\nANDES provides a descriptive modeling framework in a scripting environment.\nModeling DAE-based devices is as simple as describing the mathematical equations.\nNumerical code will be automatically generated for fast simulation.\n\n| Controller Model and Equation | ANDES Code |\n| ----------------------------- | ---------- |\n| Diagram:
![](https://raw.githubusercontent.com/cuihantao/andes/master/docs/source/modeling/example-tgov1/tgov1.png)
Write into DAEs:
![](https://raw.githubusercontent.com/cuihantao/andes/master/docs/source/modeling/example-tgov1/tgov1_eqns.png) | ![](https://raw.githubusercontent.com/cuihantao/andes/master/docs/source/modeling/example-tgov1/tgov1_class.png) |\n\nIn ANDES, what you simulate is what you document.\nANDES automatically generates model documentation, and the docs always stay up to date.\nThe screenshot below is the generated documentation for the implemented IEEEG1 model.\n\n![](https://raw.githubusercontent.com/cuihantao/andes/master/docs/source/images/misc/ieeeg1-screenshot.png)\n\nIn addition, ANDES features\n\n* a rich library of transfer functions and discontinuous components (including limiters, deadbands, and\n saturation functions) available for model prototyping and system analysis.\n* industry-grade second-generation renewable models (solar PV, type 3 and type 4 wind),\n distributed PV and energy storage models.\n* routines including Newton method for power flow calculation, implicit trapezoidal method for time-domain\n simulation, and full eigenvalue analysis.\n* developed with performance in mind. While written in Python, ANDES can\n finish a 20-second transient simulation of a 2000-bus system in a few seconds on a typical desktop computer.\n* out-of-the-box PSS/E raw and dyr data support for available models. Once a model is developed, inputs from a\n dyr file can be immediately supported.\n\nANDES is currently under active development.\nUse the following resources to get involved.\n\n+ Start from the [documentation][readthedocs] for installation and tutorial.\n+ Check out examples in the [examples folder][examples]\n+ Read the model verification results in the [examples/verification folder][verification]\n+ Try in Jupyter Notebook on [Binder][Binder]\n+ Ask a question in the [GitHub Discussions][Github Discussions]\n+ Report bugs or issues by submitting a [GitHub issue][GitHub issues]\n+ Submit contributions using [pull requests][GitHub pull requests]\n+ Read release notes highlighted [here][release notes]\n+ Check out and cite our [paper][arxiv paper]\n\n# Citing ANDES\n\nIf you use ANDES for research or consulting, please cite the following paper in your publication that uses\nANDES:\n\n```\nH. Cui, F. Li and K. Tomsovic, ""Hybrid Symbolic-Numeric Framework for Power System Modeling and Analysis,"" in IEEE Transactions on Power Systems, vol. 36, no. 2, pp. 
1373-1384, March 2021, doi: 10.1109/TPWRS.2020.3017019.\n```\n\n# Who is Using ANDES?\nPlease let us know if you are using ANDES for research or projects.\nWe kindly request you to cite our [paper][arxiv paper] if you find ANDES useful.\n\n![National Science Foundation](https://raw.githubusercontent.com/CURENT/andes/master/docs/source/images/sponsors/nsf.jpg)\n![US Department of Energy](https://raw.githubusercontent.com/CURENT/andes/master/docs/source/images/sponsors/doe.png)\n![CURENT ERC](https://raw.githubusercontent.com/CURENT/andes/master/docs/source/images/sponsors/curent.jpg)\n![Lawrence Livermore National Laboratory](https://raw.githubusercontent.com/CURENT/andes/master/docs/source/images/sponsors/llnl.jpg)\n![Idaho National Laboratory](https://raw.githubusercontent.com/CURENT/andes/master/docs/source/images/sponsors/inl.jpg)\n\n# Sponsors and Contributors\nThis work was supported in part by the Engineering Research Center\nProgram of the National Science Foundation and the Department of Energy\nunder NSF Award Number EEC-1041877 and the CURENT Industry Partnership\nProgram.\n\nThis work was supported in part by the Advanced Grid Research and Development Program\nin the Office of Electricity at the U.S. Department of Energy.\n\nSee [GitHub contributors][GitHub contributors] for the contributor list.\n\n# License\n\nANDES is licensed under the [GPL v3 License](./LICENSE).\n\n* * *\n\n[GitHub releases]: https://github.com/CURENT/andes/releases\n[GitHub issues]: https://github.com/CURENT/andes/issues\n[Github Discussions]: https://github.com/CURENT/andes/discussions\n[GitHub insights]: https://github.com/CURENT/andes/pulse\n[GitHub pull requests]: https://github.com/CURENT/andes/pulls\n[GitHub contributors]: https://github.com/CURENT/andes/graphs/contributors\n[readthedocs]: https://andes.readthedocs.io\n[release notes]: https://andes.readthedocs.io/en/latest/release-notes.html\n[arxiv paper]: https://arxiv.org/abs/2002.09455\n[tutorial]: https://andes.readthedocs.io/en/latest/tutorial.html#interactive-usage\n[examples]: https://github.com/CURENT/andes/tree/master/examples\n[verification]: https://github.com/CURENT/andes/tree/master/examples/verification\n[Binder]: https://mybinder.org/v2/gh/cuihantao/andes/master\n[LTB Repository]: https://github.com/CURENT'",",https://arxiv.org/abs/2002.09455\n","2016/11/07, 01:04:50",2543,CUSTOM,147,4406,"2023/10/21, 02:52:36",7,262,308,39,4,2,0.2,0.0898558187435633,"2022/03/27, 17:07:13",v1.6.2,0,16,false,,true,true,,,https://github.com/CURENT,https://ltb.curent.org,United States of America,,,https://avatars.githubusercontent.com/u/127251657?v=4,,, REISE.jl,Renewable Energy Integration Simulation Engine.,Breakthrough-Energy,https://github.com/Breakthrough-Energy/REISE.jl.git,github,,Energy Modeling and Optimization,"2023/02/22, 20:39:53",25,0,6,true,Julia,Breakthrough Energy,Breakthrough-Energy,"Julia,Python,Dockerfile",https://breakthrough-energy.github.io/docs/,"b'![logo](https://raw.githubusercontent.com/Breakthrough-Energy/docs/master/source/_static/img/BE_Sciences_RGB_Horizontal_Color.svg)\n\n\n[![Code Style: Blue](https://img.shields.io/badge/code%20style-blue-4495d1.svg)](https://github.com/invenia/BlueStyle)\n[![Documentation](https://github.com/Breakthrough-Energy/docs/actions/workflows/publish.yml/badge.svg)](https://breakthrough-energy.github.io/docs/)\n![GitHub contributors](https://img.shields.io/github/contributors/Breakthrough-Energy/REISE.jl?logo=GitHub)\n![GitHub commit 
activity](https://img.shields.io/github/commit-activity/m/Breakthrough-Energy/REISE.jl?logo=GitHub)\n![GitHub last commit (branch)](https://img.shields.io/github/last-commit/Breakthrough-Energy/REISE.jl/develop?logo=GitHub)\n![GitHub pull requests](https://img.shields.io/github/issues-pr/Breakthrough-Energy/REISE.jl?logo=GitHub)\n[![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT)\n[![Code of Conduct](https://img.shields.io/badge/code%20of-conduct-ff69b4.svg?style=flat)](https://breakthrough-energy.github.io/docs/communication/code_of_conduct.html)\n[![DOI](https://zenodo.org/badge/DOI/10.5281/zenodo.4538590.svg)](https://doi.org/10.5281/zenodo.4538590)\n\n\n# REISE.jl\n**REISE.jl** is the Renewable Energy Simulation Engine developed by [Breakthrough\nEnergy Sciences](https://science.breakthroughenergy.org/) (BES) to solve DC optimal\npower flow problems. It is written in Julia and can be used with the BES software\necosystem (see [PowerSimData]) or in a standalone mode (see [Zenodo] for sample data).\n\n\n## Main Features\nHere are a few things that **REISE.jl** can do:\n* Formulate a Production Cost Model (PCM) into an optimization problem that can be\n solved by professional solvers compatible with [JuMP]\n* Decompose a model that cannot be solved all at once in memory (due to high spatial\n or temporal resolution or both) into a series of shorter-timeframe intervals that are\n automatically run sequentially.\n* Model operational decisions by energy storage and price-responsive flexible demand\n alongside those made by thermal and renewable generators\n* Handle adaptive infeasible/suboptimal/numeric issues via the homogenous barrier\n algorithm (when using the [Gurobi] solver) and involuntary load shedding\n\n\n## Where to get it\nFor now, only the source code is available. Clone or Fork the code here on GitHub.\n\n\n## Dependencies\n**REISE.jl** relies on several Julia packages. The list can be found in the\n***Project.toml*** file located at the root of this package.\n\nThis program builds an optimization problem, but still relies on an external solver to\ngenerate results. Any solver compatible with [JuMP] can be used, although performance\nwith open-source solvers (e.g. Clp, GLPK) may be significantly slower than with\ncommercial solvers.\n\n## Installation\nThere are two options, either install all the dependencies yourself or setup the engine\nwithin a Docker image. Detailed installation notes can be found [here][docs]. You will\nalso find in this document instructions to use **REISE.jl** in the standalone mode or\nin combination with **PowerSimData**.\n\n\n## License\n[MIT](LICENSE)\n\n\n## Communication Channels\n[Sign up](https://science.breakthroughenergy.org/#get-updates) to our email list and\nour Slack workspace to get in touch with us.\n\n\n## Contributing\nAll contributions (bug report, documentation, feature development, etc.) are welcome. An\noverview on how to contribute to this project can be found in our [Contribution\nGuide](https://breakthrough-energy.github.io/docs/dev/contribution_guide.html).\n\nThis package is formatted following the Blue Style conventions. Pull requests will be\nautomatically checked against consistency to this style guide. Formatting is as easy as:\n```julia\njulia> using JuliaFormatter\n\njulia> format(FILE_OR_DIRECTORY)\n```\nIf an individual file is passed, that file will be formatted. If a directory is passed,\nall Julia files in that directory and subdirectories will be formatted. 
Use\n`format(""."")` from the root of the package to format all files.\n\n\n\n[PowerSimData]: https://github.com/Breakthrough-Energy/PowerSimData\n[Gurobi]: https://www.gurobi.com/\n[docs]: https://breakthrough-energy.github.io/docs/reisejl/index.html\n[JuMP]: https://jump.dev/\n[Zenodo]: https://doi.org/10.5281/zenodo.4538590\n'",",https://doi.org/10.5281/zenodo.4538590,https://doi.org/10.5281/zenodo.4538590\n","2020/01/17, 22:41:38",1376,MIT,41,410,"2023/02/22, 20:39:55",16,142,181,11,245,0,5.3,0.4421364985163204,"2021/06/14, 21:09:55",v0.2.1,0,9,false,,false,false,,,https://github.com/Breakthrough-Energy,https://breakthrough-energy.github.io/docs/,,,,https://avatars.githubusercontent.com/u/68243594?v=4,,, ESDL,A modelling language created for the components in an energy system and their relations towards each other.,EnergyTransition,https://github.com/EnergyTransition/ESDL.git,github,"esdl,energy,energy-transition,energy-transition-calculation,dsl,description-language,language,energy-information,interoperability",Energy Modeling and Optimization,"2023/04/05, 12:53:37",11,0,2,true,Shell,,EnergyTransition,Shell,,"b'# Energy System Description Language (ESDL)\r\n\r\nThe Energy System Description Language (ESDL) is a modelling language created for modelling the components in an energy system and their relations towards each other. Furthermore, ESDL is capable of expressing the dynamic behavior of components in the energy system, for instance the power consumption of a neighborhood. ESDL describes components by their basic functionalities (Energy Capabilities), which are modelled in 5 abstract categories: Production, Consumption, Storage, Transport and Conversion. ESDL enables energy modellers to model a complex energy system in a generic way. The language is machine readable, so makers of energy transition calculation tools and GIS applications can support ESDL in order to ensure the interoperability of their products (a minimal reading example appears below the application areas).\r\n\r\n# Why ESDL\r\nThe energy system is in a transition towards a sustainable, less CO2-emitting system. Achieving this requires large adaptations of the structure and behavior of the energy system, as well as a comprehensive insight into the system. However, the energy system is complex: it consists of a vast number of assets/components which are connected to each other via various types of infrastructures. Besides that, to fully understand the dynamics of an energy system, comprehending the dynamic behavior of its assets/components is required. ESDL aims to capture this complex structure and behavior of the energy system in one generic language, resulting in a harmonized way of modeling energy data that enables reusability and interoperability.\r\n\r\n# Example application areas\r\nESDL can be used for: \r\n\r\n* Energy transition calculation tools: A common language for energy transition calculation tools, to describe the inputs and outputs of those tools. \r\n* Energy Information System: A basis for a central energy information system where the energy system of a certain region is registered. \r\n* ESDL can be used as a language for (local) governments to model and share their (local) energy system information. \r\n* Monitoring evolution of an energy system: Furthermore, multiple ESDL snapshots of a certain area over time provide insight into the evolution of an energy system. 
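Because ESDL files are XML documents, a first look at a model needs nothing beyond the Python standard library. The sketch below is hypothetical (the file name is made up, and real integrations would typically go through TNO's pyESDL tooling instead):

```python
# Hypothetical sketch: list the named objects in an ESDL file. ESDL is
# XML-based, so xml.etree suffices for simple inspection.
import xml.etree.ElementTree as ET

tree = ET.parse("neighbourhood.esdl")  # made-up file name
for elem in tree.getroot().iter():
    # Production, consumption, storage, transport and conversion assets
    # typically carry a human-readable 'name' attribute.
    if "name" in elem.attrib:
        print(elem.tag, elem.attrib["name"])
```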
\r\n\r\n# Getting started\r\n- Detailed documentation of ESDL can be found here:\r\nhttps://energytransition.gitbook.io/esdl/\r\n\r\n- The ESDL model reference documentation can be found here:\r\nhttps://energytransition.github.io/\r\nThis website describes the classes, attributes and references in detail and is easy navigatable.\r\n\r\n- Information on tooling for ESDL can be found here:\r\nhttps://energytransition.gitbook.io/esdl/esdl-based-tools\r\nThose pages describe (graphical)editors for ESDL, Java integration and Python integration examples\r\n\r\n# Contribute to ESDL\r\nContributions to ESDL can be done by pull requests.\r\n\r\nIf you want to contact the ESDL team, please follow this [link](https://www.tno.nl/nl/aandachtsgebieden/informatie-communicatie-technologie/expertisegroepen/monitoring-control-services/grip-op-de-energietransitie-met-esdl/)\r\n\r\n'",,"2018/06/08, 11:40:18",1965,Apache-2.0,9,248,"2023/02/10, 17:48:02",16,3,7,1,257,1,0.0,0.5,"2019/05/03, 14:06:34",v1905,0,6,false,,false,false,,,https://github.com/EnergyTransition,,,,,https://avatars.githubusercontent.com/u/40061405?v=4,,, Transactive Energy Service System,"A platform to design, deploy, and operate transactive energy systems in electric utility retail environments.",slacgismo,https://github.com/slacgismo/TESS.git,github,,Energy Modeling and Optimization,"2022/04/13, 19:14:59",11,0,3,true,Python,SLAC GISMo,slacgismo,"Python,Jupyter Notebook,JavaScript,CSS,C,Objective-C,Ruby,HTML,Java,Starlark,Mako,Shell",,"b'# Transactive Energy Service System (TESS)\n\nThe Transactive Energy Service System (TESS) is a platform to design, deploy, and operate transactive energy systems in electric utility retail environments. TESS provides retail market clearing mechanisms for peer-to-peer trading of behind-the-meter distributed energy resources based on ramping, capacity, and storage prices.\n\n## Introduction\nAs the share of renewable resources grows, the marginal cost of energy resources tends to zero, and the long term average cost of energy is increasingly dominated by cost of flexibility resources, and the cost of associated capacity. Nearly all the existing work on Transactive Energy Systems is based on the retail analogy to wholesale energy markets, which are fundamentally designed around marginal cost pricing of energy resources (and constraints on associated capacity), not on the cost of other grid services. The goal of the Transactive Energy Service System (TESS) project to design, develop, test, and validate retail-level Transactive Energy systems that are dominated by behind-the-meter renewable energy resources and energy storage resources. \n\nSome of the research questions the project seeks to address include the following:\n1. Is it possible to use the current model of Transactive Energy systems when the marginal cost of energy is often zero?\n2. How can a Transactive system design reflect ramping and capacity costs in real time?\n3. How do alternative Transactive Energy market designs affect the stability, reliability and resilience of power systems?\n4. How do Transactive Energy systems compare to and work with flat rate or subscription billing in retail settings?\n5. What new outcomes, features, and benefits emerge for utilities and customers who subscribe to Transactive Energy tariffs?\n6. 
What are the economic impacts, e.g., distributional outcomes, that arise from these market design choices?\n\n### Code Organization\n|Path | Description |\n---------------|---------------------------------------------------------------\n|[/agents](../master/agents) | agent code for participation in bidding|\n|[/analysis](../master/analysis) | \'add description here\' |\n|[/cloud](../master/cloud) | Infrastructure and container orchestration deployments configurations and templates |\n|[/control](../master/control) | Control Room app |\n|[/docs](../master/docs) | User docs |\n|[/mobile](../master/mobile) | Member-facing mobile |\n|[/scripts](../master/scripts) | Database scripts |\n|[/simulation](../master/simulation) | Simulation models |\n|[/vendor](../master/vendor) | 3rd party dependencies |\n\n### Contributing\n\nPlease read [CONTRIBUTING.md](../docs/CONTRIBUTING.md) for details on our code of conduct, and the process for submitting pull requests to us.\n\n### Versioning\n\nWe use [SemVer](https://semver.org/) for versioning. For the versions available, see the [tags on this repository](https://github.com/slacgismo/TESS/tags).\n\n### Authors\n* Anna Peery - [Github](https://github.com/avpeery)\n* David Chassin - [Github](https://github.com/dchassin)\n* Gustavo Cezar - [Github](https://github.com/gcezar)\n* Jonathan Goncalves - [Github](https://github.com/jongoncalves)\n* Marie-Louise Arlt - [Github](https://github.com/mlamlamla)\n* Mayank Malik - [Github](https://github.com/malikmayank)\n* Wan-Lin Hu - [Github](https://github.com/honeymilktea)\n* Derin Serbetcioglu - [GitHub](https://github.com/derins)\n\n### Acknowledgements\n\nTESS is funded by the U.S. Department of Energy Office of Electricity. For more information contact Chris Irwin at christopher.irwin@hq.doe.gov.\n\nSLAC National Accelerator Laboratory is operated for the U.S. Department of Energy by Stanford University under Contract No. DE-AC02-76SF00515.\n\n### References\n\n* [TESS White Paper](https://s3.us-east-2.amazonaws.com/tess.slacgismo.org/Chassin+et+al%2C+TESS+White+Paper+(2019).pdf)\n* [GridLAB-D Transactive Module](https://github.com/slacgismo/gridlabd/pull/430)\n* [Transactive Orderbook Design](https://github.com/slacgismo/gridlabd/blob/transactive_orderbook/transactive/Transactive%20Orderbook.ipynb)\n* [Raspberry Pi Info](https://github.com/slacgismo/TESS/tree/master/edge_devices/README.md)\n\n## Publications\n\n1. [Arlt ML, DP Chassin, LL Kiesling, ""Opening Up Transactive Systems: Introducing TESS and Specification in a Field Deployment"", *Energies* **2021**, 14(13), 3970](https://www.mdpi.com/1996-1073/14/13/3970). DOI: https://doi.org/10.3390/en14133970\n1. Arlt ML, DP Chassin, C Rivetta, and J Sweeney (2020): ""Willingness to Pay for HVAC Operations for Automated Dispatch by Smart Home Systems"", presented at the Wirtschaftsinformatik 2021, Community Workshop ""Energy Informatics and Electro Mobility ICT"", online, March 8, 2021.\n1. Arlt ML, DP Chassin, C Rivetta, and J Sweeney (2020): ""Automated Bidding in and Welfare Effects of Local Electricity Markets"", Working paper, March 5, 2021. 
URL: https://marielouisearlt.files.wordpress.com/2021/03/wp_lems_210305.pdf.\n\n## License\n'",",https://doi.org/10.3390/en14133970\n1","2019/07/16, 13:50:17",1562,GPL-3.0,0,357,"2022/11/15, 22:50:33",46,116,127,4,343,42,0.1,0.7089041095890412,,,0,9,false,,false,false,,,https://github.com/slacgismo,https://gismo.slac.stanford.edu/,"SLAC National Accelerator Laboratory, Menlo Park, CA 94025",,,https://avatars.githubusercontent.com/u/19895500?v=4,,, Minpower,An open source toolkit for students and researchers in power systems.,adamgreenhall,https://github.com/adamgreenhall/minpower.git,github,,Energy Modeling and Optimization,"2021/07/22, 22:29:35",70,0,2,true,Python,,,"Python,Shell,Makefile",adamgreenhall.github.io/minpower,"b'h1. minpower \n\nh2. power systems optimization using python\n\n* Solves:\n - Economic Dispatch\n - Optimal Power Flow\n - Unit Commitment\n - Stochastic Unit Commitment\n* Problems can be made in simple spreadsheets\n* Many solvers supported\n* ""Full documentation and tutorials"":http://adamgreenhall.github.io/minpower/\n\nBuild Status !https://travis-ci.com/adamgreenhall/minpower.svg?branch=master!:https://travis-ci.com/adamgreenhall/minpower\n'",,"2011/04/04, 18:50:48",4587,GPL-3.0,0,886,"2021/07/21, 22:51:29",4,6,22,2,825,1,0.0,0.0,"2021/07/21, 22:52:05",v5.0.1,0,1,false,,false,false,,,,,,,,,,, TIMES-Ireland Model,Information on the Irish energy system as it is today and the best available projections for what the future technology and fuel options and demands will be.,MaREI-EPMG,https://github.com/MaREI-EPMG/times-ireland-model.git,github,"energy-system-model,times-model,gams,scenario-analysis,ireland,energy-planning",Energy Modeling and Optimization,"2022/10/13, 20:37:41",7,0,0,true,,MaREI-EPMG,MaREI-EPMG,,,"b'\n
This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.\n\n# TIMES-Ireland Model (TIM)\n\n[![DOI](https://zenodo.org/badge/429173600.svg)](https://zenodo.org/badge/latestdoi/429173600)\n\n## Purpose of the model\nThe TIMES-Ireland Model (TIM) is being developed at UCC to inform future possible decarbonisation pathways for the Irish energy system. We give it information on the Irish energy system as it is today, a set of constraints, including on greenhouse-gas emissions, and the best available projections for what the future technology and fuel options and demands will be.\n\nIt then finds the lowest-cost pathway to re-architect and restructure Ireland\xe2\x80\x99s entire energy system, for electricity, transport, industry, residential and commercial, and novel fuels like hydrogen and bioenergy, to reduce emissions to meet the target. It accounts for all the linkages in the system; rather than transform it one piece at a time, it transforms the entire system, accounting for all the sector couplings and trade-offs, even between distant parts of the system.\n\nRather than offering a single prescriptive plan, the model helps structure our discussions of the trade-offs and uncertainties; and helps us develop meaningful, consistent narratives of energy transformation, while considering a huge range of possible futures.\n\nAlternatively, TIM can be used to assess the implications of certain policies, namely regulatory or technology target-setting (for example, biofuels blending obligation or sales/stock share target for electric vehicles).\n\n## Documentation\nMore information on the TIMES model generator and specific information about TIM can be found in the [Documentation](https://doi.org/10.5194/gmd-15-4991-2022).\n\n## About the developers\nThe list of developers, contributors and reviewers is described in the [Acknowledgements](/ACKNOWLEDGEMENT.md) section. \n\nTIM is the successor model to the Irish TIMES Model, which has been developed by the MaREI Energy Policy and Modelling Group (EPMG) at University College Cork since 2010 and funded through various projects by the EPA, SEAI, SFI and the NTR Foundation, and has played a significant role in informing the evidence base of Irish climate target setting.\n\n## Climate policy use\nThis model has been built to better inform increased national climate mitigation ambition: Ireland now has one of the most ambitious near-term decarbonisation targets in the world, with a new carbon budget process to underpin action. The new model also takes into account the changing energy technology landscape and new advances in energy systems optimisation modelling techniques.\n\nThe first set of scenarios developed with TIM has explored the implications of alternative climate mitigation, technology and demand pathways. This analysis formed a significant part of [the evidence base](https://www.climatecouncil.ie/carbonbudgets/technicalreport/) used by the Irish Climate Change Advisory Council to develop the first set of carbon budget recommendations. 
\n- Zenodo [repository of scenarios](https://doi.org/10.5281/zenodo.5517363)\n- [Web app](https://tim-carbon-budgets-2021.netlify.app/results/) visualising results from a selection of scenarios\n- [Presentation of TIM findings](https://www.youtube.com/watch?v=lBShCV0rKNk) to Engineers Ireland, Nov 3rd 2021.\n\n## Scenario descriptions\n\n- No mitigation - No GHG constraint\n- WAM - overall energy system GHG emissions constrained to the Environmental Protection Agency\'s [""With Additional Measures"" scenario](https://www.epa.ie/publications/monitoring--assessment/climate-change/air-emissions/irelands-greenhouse-gas-emissions-projections-2020-2040.php)\n- CB - energy system GHG emissions are constrained to meet [sectoral emissions ceilings](https://www.gov.ie/en/publication/76864-sectoral-emissions-ceilings/). \n\n## Peer-reviewed publications\n\n- [TIM: modelling pathways to meet Ireland\'s long-term energy system challenges with the TIMES-Ireland Model (v1.0)](https://doi.org/10.5194/gmd-15-4991-2022). 2022. *Geoscientific Model Development*.\n- [Low energy demand scenario for feasible deep decarbonisation: Whole energy systems modelling for Ireland](https://doi.org/10.1016/j.rset.2022.100024). 2022. *Renewable and Sustainable Energy Transition*.\n- [Decarbonisation of passenger light-duty vehicles using spatially resolved TIMES-Ireland Model](https://doi.org/10.1016/j.apenergy.2022.119078). 2022. *Applied Energy*.\n'",",https://zenodo.org/badge/latestdoi/429173600,https://doi.org/10.5194/gmd-15-4991-2022,https://doi.org/10.5281/zenodo.5517363,https://doi.org/10.5194/gmd-15-4991-2022,https://doi.org/10.1016/j.rset.2022.100024,https://doi.org/10.1016/j.apenergy.2022.119078","2021/11/17, 19:24:31",707,CUSTOM,0,237,"2023/02/24, 19:30:22",7,27,35,1,243,0,0.1,0.5046728971962617,"2023/07/06, 14:37:32",v1.0.2,0,3,false,,false,false,,,https://github.com/MaREI-EPMG,https://www.marei.ie/energy-policy-modelling/,Ireland,,,https://avatars.githubusercontent.com/u/69536806?v=4,,, Open Modeling Framework,A set of Python libraries for simulating power systems behavior with an emphasis on cost-benefit analysis of emerging technologies.,dpinney,https://github.com/dpinney/omf.git,github,,Energy Modeling and Optimization,"2023/10/24, 20:42:01",110,15,13,true,Python,,,"Python,JavaScript,HTML,CSS,MATLAB,Shell,Dockerfile",https://omf.coop,"b""### Overview\n\nThe Open Modeling Framework (OMF) is a set of Python libraries for simulating power systems behavior with an emphasis on cost-benefit analysis of emerging technologies: distributed generation, storage, networked controls, etc.\n\nFull documentation is available on our [OMF wiki](https://github.com/dpinney/omf/wiki).\n\n### Installation\n\nAnyone can sign up for a free account on our hosted production site https://www.omf.coop/.\n\nIf you'd like to host your own copy of the OMF, please follow the [developer installation instructions](https://github.com/dpinney/omf/wiki/Dev-~-Installation-Instructions).\n\n### Example Screenshots\n\nGraphical circuit editor:\n\n![Screenshot 1](https://raw.githubusercontent.com/wiki/dpinney/omf/images/readme_circuitEditor.png)\n\nPower quality metrics calculation from a solar distributed generation model:\n\n![Screenshot 
2](https://raw.githubusercontent.com/wiki/dpinney/omf/images/readme_distributionPower.png)\n\nTransmission powerflow results:\n\n![Screenshot 3](https://raw.githubusercontent.com/wiki/dpinney/omf/images/readme_transmissionPowerflow.png)\n\nFinancial analysis of the impact of distributed generation:\n\n![Screenshot 4](https://raw.githubusercontent.com/wiki/dpinney/omf/images/readme_financialModeling.png)\n""",,"2012/09/17, 17:15:57",4055,GPL-2.0,278,7710,"2023/10/17, 13:35:22",2,47,405,8,8,1,0.0,0.5280433397068196,,,0,42,false,,false,false,"ghas-results/lfview-api-client,ghas-results/lfview-resources-spatial,elphick/mass-composition,dimitri-feniou/reidentification_faciale,OpenGeoVis/GeothermalDesignChallenge,seequent/lfview-resources-spatial,ericbdaniels/oomf,tkoyama010/PVGeo-doc-translations,kaufmanno/GSDMA,marado/finance_python,wblong/PVGeo-Copy,dray89/finance_python,seequent/lfview-api-client,OpenGeoVis/PVGeo,OpenGeoVis/omfvista",,,,,,,,,, PSP-UFU,Open-Source Software with advanced GUI features and CAD tools for electrical power system studies.,Thales1330,https://github.com/Thales1330/PSP.git,github,"psp-ufu,power-system-simulation,power-flow,short-circuit,stability,harmonics,educational-software,research,industry-aplication,graphical-user-interface,computer-aided-design,synchronous-machine,user-defined-control,induction-motors,free-and-open-source-software,open-source-project,free-libre-open-source-software",Energy Modeling and Optimization,"2023/09/29, 22:31:53",38,0,12,true,C++,,,"C++,C,JavaScript,SourcePawn,HTML,CSS,Objective-C,CMake,GLSL",https://thales1330.github.io/PSP/,"b'[![Build Status](https://travis-ci.org/Thales1330/PSP.svg?branch=master)](https://travis-ci.org/Thales1330/PSP)\n[![Codacy Badge](https://api.codacy.com/project/badge/Grade/d32eae214f2341c7b1dfc004274cd5d1)](https://www.codacy.com/manual/Thales1330/PSP?utm_source=github.com&utm_medium=referral&utm_content=Thales1330/PSP&utm_campaign=Badge_Grade)\n[![CII Best Practices](https://bestpractices.coreinfrastructure.org/projects/3287/badge)](https://bestpractices.coreinfrastructure.org/projects/3287)\n[![License: GPL v2](https://img.shields.io/badge/License-GPL%20v2-blue.svg)](https://www.gnu.org/licenses/old-licenses/gpl-2.0.en.html)\n[![DOI](https://zenodo.org/badge/64333860.svg)](https://zenodo.org/badge/latestdoi/64333860)\n\n![PSP-UFU](docs/doxygen/html/logoHeader.png)\n\n[**PSP-UFU website**](https://thales1330.github.io/PSP/)\n\nPSP-UFU (Power Systems Platform of Federal University of Uberl\xc3\xa2ndia) is a **cross-platform**, **multilingual**, **Free and Open-Source Software** (FOSS) with **advanced GUI** (Graphical User Interface) features and **CAD** (Computer-Aided Design) tools for power system studies.\n\nThe software allows for the construction of any electric **transmission network** and **control systems** through the deployment of visual elements.\n\nFor the visualization of results, the program offers linked text elements in the main screen, and also table and graph editors.\n\nThe PSP-UFU aims to provide efficient computer simulation tools for **research and education purposes**, in addition to **industrial applications** in electrical power systems.\n\nThe software can perform the following studies:\n\n- **Power Flow**\n - Newton-Raphson\n - Gauss-Seidel\n - Hybrid Newton-Gauss\n - Three-phase induction motors included in power flow studies\n- **Short-Circuit calculation**\n - Balanced\n - Unbalanced\n - Short-Circuit power in all system buses\n- **Harmonics**\n - Harmonic voltages and THD (Total 
Harmonic Distortion) calculation\n - Frequency scan\n- **Transient and Dynamic Stability**\n - Several synchronous machine models automatically selected\n - Three-phase induction motors\n - User-defined machine controls, exciters and prime movers created using block diagrams (Exciters, AVR, PSS, Hydro and Thermal turbines, Speed Governor, etc.)\n \n## [](#header-2)Published Papers\nFurther details can be found in the published papers:\n\n>Oliveira, T. L., Guimar\xc3\xa3es, G. C., & Silva, L. R. C. (2019). PSP-UFU: An open-source, graphical, and multiplatform software for power system studies. _International Transactions on Electrical Energy Systems_, e12185. doi: [10.1002/2050-7038.12185](https://doi.org/10.1002/2050-7038.12185)\n\n>Oliveira, T. L., Guimar\xc3\xa3es, G. C., Silva, L. R., & Rezende, J. O. (2019). Power system education and research applications using free and open-source, graphical and multiplatform PSP-UFU software. _The International Journal of Electrical Engineering & Education_, 0020720919879058. doi: [10.1177/0020720919879058](https://doi.org/10.1177/0020720919879058)\n\n## [](#header-2)Code Documentation\n\nAll detailed descriptions of the source-code can be found at [**Online Documentation**](https://thales1330.github.io/PSP/doxygen/html/index.html), generated by [Doxygen](http://www.doxygen.org).\n\n## [](#header-2)Overview\n\n![](docs/images/ss_1.png)\n\n![](docs/images/ss_1_1.png)\n\n![](docs/images/ss_1_2.png)\n\n![](docs/images/ss_2.png)\n\n![](docs/images/ss_3.png)\n\n![](docs/images/ss_5.png)\n\n![](docs/images/ss_4.png)\n'",",https://zenodo.org/badge/latestdoi/64333860,https://doi.org/10.1002/2050-7038.12185,https://doi.org/10.1177/0020720919879058","2016/07/27, 18:51:54",2646,GPL-2.0,2,480,"2022/11/16, 02:51:41",21,39,54,1,343,11,0.0,0.004555808656036442,"2020/08/01, 22:12:21",2020w31a-beta,0,2,false,,true,true,,,,,,,,,,, Energy Policy Simulator,The open-source United States Energy Policy Simulator estimates environmental and economic impacts of hundreds of climate and energy policies.,Energy-Innovation,https://github.com/EnergyInnovation/eps-us.git,github,,Energy Modeling and Optimization,"2023/06/08, 16:40:01",21,0,4,true,Python,Energy Innovation,EnergyInnovation,Python,,"",,"2019/10/19, 03:02:43",1467,GPL-3.0,33,1458,"2023/08/17, 17:10:54",79,0,207,31,69,0,0,0.48766328011611026,"2023/06/08, 16:51:35",3.4.7,0,7,false,,false,false,,,https://github.com/EnergyInnovation,https://energyinnovation.org,United States of America,,,https://avatars.githubusercontent.com/u/112431470?v=4,,, Open Energy Outlook,Examining U.S. energy futures to inform future energy and climate policy efforts.,TemoaProject,https://github.com/TemoaProject/oeo.git,github,,Energy Modeling and Optimization,"2023/04/25, 17:53:03",36,0,7,true,Jupyter Notebook,,,"Jupyter Notebook,Python",,"b'\n\n[![Binder](https://mybinder.org/badge_logo.svg)](https://mybinder.org/v2/gh/TemoaProject/oeo/master/?urlpath=tree)\n\n## Overview\nWelcome to the GitHub repository associated with the [Open Energy Outlook (OEO) for the United States](https://openenergyoutlook.org/). This project aims to bring U.S. energy system modeling into the twenty-first century by applying the gold standards of policy-focused academic modeling, maximizing transparency, building a networked community, and working towards a common goal: examining U.S. 
energy futures to inform future energy and climate policy efforts.\n\nThis repository includes several elements, which are described below.\n\n## Project Roadmap\nIn January 2019, the [Open Energy Outlook Team](https://openenergyoutlook.org/?page_id=12081) met in Raleigh to discuss the aims and objectives of the project. Ideas and feedback gathered during the workshop were subsequently used to draft a [project roadmap](OEO_Roadmap.md), which appears in the main OEO directory. The project roadmap addresses a variety of topics, including research objectives, observed limitations in prevailing approaches, and the modeling approach in this project, which covers the choice of model, spatio-temporal resolution, sector-specific considerations, and uncertainty analysis.\n\n[](OEO_Roadmap.md)\n\n## Tools for Energy Model Optimization and Analysis (Temoa)\n[Temoa](https://temoacloud.com/) is an open source energy system optimization model, and serves as the backbone of the OEO analysis. The Temoa source code is included in this repository as a submodule. \n\nTemoa simultaneously balances supply and demand of energy commodities across the energy system and performs capacity expansion over time. The model employs linear programming techniques, and is driven by an objective function that minimizes the cost of energy supply over a user-defined time horizon. The decision variables include the installed capacity and energy output of each technology within the user-defined energy system network. The user-defined model time horizon typically spans multiple decades and consists of a set of time periods, which are further decomposed into time slices over which short-term variations in supply and demand must be balanced. Time slices in Temoa are user-defined and can be used to represent temporal resolutions ranging from large blocks of time (e.g., \'summer-day\') to every hour of the year. As noted below, the OEO project includes multiple versions of the database with different temporal resolutions.\n\nModel constraints enforce rules governing energy system performance, and user-defined constraints can be added to represent limits on technology expansion, fuel availability, and system-wide emissions. As with other energy system optimization models, Temoa source code and data are independent, and thus users can construct their own databases for different regions with varying technologies, demands, and spatio-temporal resolution. The [Temoa source code repository](https://github.com/TemoaProject/temoa) has been on GitHub for a decade and has been used to perform a number of analyses. The [model website](https://temoacloud.com/) contains additional information, including detailed model documentation.\n\nIf you\'re interested in running Temoa for your own analysis or to work with the databases provided as part of this project, check out these [Tutorial videos](https://youtube.com/embed/XYoxUGuZG2A), which can help get new users up and running.\n\n[!](https://www.youtube.com/watch?v=XYoxUGuZG2A&list=PLTxJN2lIFcQl9BhObJ7Sqgm542o2uttfp)\n\nFor users who may not wish to install Temoa on their local machine, you can also run the model on the cloud via [TemoaCloud](https://model.temoacloud.com/). 
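To make the optimization structure described above concrete, here is a deliberately tiny least-cost sketch (hypothetical numbers, and scipy in place of Temoa's actual Pyomo formulation):

```python
# Toy least-cost supply LP (made-up data, not Temoa code): serve 100 MWh
# of demand from two technologies at minimum total cost.
from scipy.optimize import linprog

cost = [60.0, 20.0]                 # $/MWh for gas and wind (assumed)
A_eq, b_eq = [[1.0, 1.0]], [100.0]  # generation must equal demand
bounds = [(0, None), (0, 80.0)]     # wind output capped at 80 MWh
res = linprog(cost, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
print(res.x)                        # -> [20., 80.]: max out cheap wind
```

Temoa builds the same kind of problem at vastly larger scale, spanning multiple decades, regions, and technologies, and hands it to an external LP solver.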
Note that the cloud-based interface currently uses the [COIN-OR cbc](https://github.com/coin-or/Cbc) solver, which is slower than commercial solvers like Gurobi or CPLEX, and thus may not be able to solve the large OEO databases.\n\n## Temoa-Compatible Input Databases\nTo store input data, Temoa makes use of [sqlite](https://sqlite.org/index.html), a widely used, open source, self-contained relational database system. The databases developed as part of this project are listed in the main OEO directory. The naming convention is as follows: the first designator after the first underscore indicates the number of regions in the database and the designator after the second underscore indicates the temporal resolution of the database. Note that while the temporal resolution of a single year varies across the different database versions, all have a time horizon that spans 2020-2050 in 5-year increments. The following databases are currently included in the repo:\n\n* US_1R_TS: This database is an older, single region version that includes 12 timeslices that represent combinations of seasons (winter, intermediate, summer) and times of day (morning, afternoon, evening, night). We use this database for diagnostic purposes.\n\n* US_9R_4D: This is the 9-region version of the OEO database with a temporal resolution of 4 representative days (96 hours), which are selected by [PowerGenome](https://github.com/PowerGenome/PowerGenome). (See the electric sector Jupyter Notebook for more details on PowerGenome.) This version of the database is connected to the Jupyter Notebook documentation in the [```database_documentation```](database_documentation/) folder.\n\n* US_9R_12D: This is the 9-region version of the OEO database with a temporal resolution of 12 representative days (288 hours), which are selected by PowerGenome. This database is provided as a point of comparison, and we are currently experimenting with versions up to 48 representative days for the eventual analysis.\n\nFor each database version, the ```.sql``` file is a text file, and the ```.sqlite``` file represents the compiled binary sqlite database. Providing the ```.sql``` file allows us to track line-by-line changes with each commit, and the binary database versions are provided for convenience and are ready to run without compilation.\n\n## Data Aggregation\nRaw input data and processing scripts are stored in the [```data_aggregation```](database_aggregation/) folder. This raw input data is stored in CSV files or manually entered into the ```US_BASE.sql``` file, which is used to store input data that is later programmatically extracted and entered into the input databases listed in the main OEO directory. Additional documentation describing the data aggregation process will be added.\n\n## Data Documentation\nInput data is documented in a series of Jupyter Notebooks in the [```database_documentation```](database_documentation/) folder. The notebooks include a combination of markdown cells that provide a narrative describing data sources and assumptions, along with more interactive features that query the latest database version and render the input data as tables, graphs, and network diagrams. As noted above, the current documentation draws on the ```US_9R_4D``` database.\n\nThere are a couple ways to view the rendered notebooks. First, you can click on the [```launch binder```](https://mybinder.org/v2/gh/TemoaProject/oeo/master/?urlpath=tree) badge here or at the top of the README. 
We make use of [binder](https://mybinder.org/) to render the notebooks on the cloud for viewing. Note that it takes several minutes to process the notebooks for viewing. Second, if you have the Temoa environment set up on your local machine and have cloned this repo, you can simply execute the following command from your shell or the Anaconda prompt:\n\n```$ jupyter notebook```\n\nOnce the folder has been rendered as html on the cloud or your local machine, navigate to the [```database_documentation```](database_documentation/) folder and open one of the sector-specific notebooks to begin reviewing documentation. The screenshot below illustrates the network diagram lookup tool, which allows users to enter a commodity or technology name from the OEO database and visualize its connections to the rest of the modeled system.\n\n\n'",,"2020/03/21, 14:49:07",1313,GPL-3.0,11,185,"2023/09/21, 01:10:40",3,77,79,25,34,0,0.0,0.3027027027027027,,,0,6,false,,false,false,,,,,,,,,,, OpenSTEF,A Python package which is used to make short term forecasts for the energy sector.,OpenSTEF,https://github.com/OpenSTEF/openstef.git,github,"forecasting,energy,python,data-science,machine-learning,time-series",Energy Modeling and Optimization,"2023/10/10, 17:57:21",63,5,37,true,HTML,,OpenSTEF,"HTML,Python",https://openstef.github.io/openstef,"b""\n\n\n[![Python Build](https://github.com/openstef/openstef/actions/workflows/python-build.yaml/badge.svg)](https://github.com/openstef/openstef/actions/workflows/python-build.yaml)\n[![REUSE Compliance Check](https://github.com/openstef/openstef/actions/workflows/reuse-compliance.yaml/badge.svg)](https://github.com/openstef/openstef/actions/workflows/reuse-compliance.yaml)\n\n[![Bugs](https://sonarcloud.io/api/project_badges/measure?project=OpenSTEF_openstef&metric=bugs)](https://sonarcloud.io/dashboard?id=OpenSTEF_openstef)\n[![Code Smells](https://sonarcloud.io/api/project_badges/measure?project=OpenSTEF_openstef&metric=code_smells)](https://sonarcloud.io/dashboard?id=OpenSTEF_openstef)\n[![Coverage](https://sonarcloud.io/api/project_badges/measure?project=OpenSTEF_openstef&metric=coverage)](https://sonarcloud.io/dashboard?id=OpenSTEF_openstef)\n[![Duplicated Lines (%)](https://sonarcloud.io/api/project_badges/measure?project=OpenSTEF_openstef&metric=duplicated_lines_density)](https://sonarcloud.io/dashboard?id=OpenSTEF_openstef)\n[![Maintainability Rating](https://sonarcloud.io/api/project_badges/measure?project=OpenSTEF_openstef&metric=sqale_rating)](https://sonarcloud.io/dashboard?id=OpenSTEF_openstef)\n[![Reliability Rating](https://sonarcloud.io/api/project_badges/measure?project=OpenSTEF_openstef&metric=reliability_rating)](https://sonarcloud.io/dashboard?id=OpenSTEF_openstef)\n[![Security Rating](https://sonarcloud.io/api/project_badges/measure?project=OpenSTEF_openstef&metric=security_rating)](https://sonarcloud.io/dashboard?id=OpenSTEF_openstef)\n[![Technical Debt](https://sonarcloud.io/api/project_badges/measure?project=OpenSTEF_openstef&metric=sqale_index)](https://sonarcloud.io/dashboard?id=OpenSTEF_openstef)\n[![Vulnerabilities](https://sonarcloud.io/api/project_badges/measure?project=OpenSTEF_openstef&metric=vulnerabilities)](https://sonarcloud.io/dashboard?id=OpenSTEF_openstef)\n[![CII Best Practices](https://bestpractices.coreinfrastructure.org/projects/5585/badge)](https://bestpractices.coreinfrastructure.org/projects/5585)\n\n# OpenSTEF\n\nOpenSTEF is a Python package which is used to make short term forecasts for the energy sector. 
This repository contains all components for the machine learning pipeline required to make a forecast. In order to use the package you need to provide your own data storage and retrieval interface.\n\nFind the latest information on the project on [the project's website](https://www.lfenergy.org/projects/openstef/).\n\nThe `openstef` Python package is available at: https://pypi.org/project/openstef/.\n\nDocumentation is available at: https://openstef.github.io/openstef/index.html.\n\nYou can also watch a [video about OpenSTEF](https://www.lfenergy.org/forecasting-to-create-a-more-resilient-optimized-grid/) instead of reading about the project.\n\n# Installation\n\n## Install the openstef package\n\n```shell\npip install openstef\n```\n\n_**Optional**_: if you would like to use the proloaf model with OpenSTEF install the proloaf dependencies by running:\n```shell\npip install openstef[proloaf]\n```\n### Remark regarding installation within a **conda environment on Windows**:\n\nA version of the pywin32 package will be installed as a secondary dependency along with the installation of the openstef package. Since conda relies on an old version of pywin32, the new installation can break conda's functionality. The following command can solve this issue:\n```shell\npip install pywin32==300\n```\nFor more information on this issue see the [readme of pywin32](https://github.com/mhammond/pywin32#installing-via-pip) or [this Github issue](https://github.com/mhammond/pywin32/issues/1865#issue-1212752696).\n\n# Usage\n\nTo run a task use:\n\n```shell\npython -m openstef task \n```\n\n## Reference Implementation\nA complete implementation including databases, user interface, example data, etc. is available at: https://github.com/OpenSTEF/openstef-reference\n\n![screenshot](https://user-images.githubusercontent.com/60883372/146760483-29af3ac7-62af-4f13-98c7-982a79c517d1.jpg)\nScreenshot of the operational dashboard showing the key functionality of OpenSTEF.\nDashboard documentation can be found [here](https://github.com/OpenSTEF/.github/blob/main/profile/README.md).\n\n## License\nThis project is licensed under the Mozilla Public License, version 2.0 - see LICENSE for details.\n\n## Licenses third-party libraries\nThis project includes third-party libraries, which are licensed under their own respective Open-Source licenses. SPDX-License-Identifier headers are used to show which license is applicable. 
The corresponding license files can be found in the LICENSES directory.\n\n## Contributing\nPlease read [CODE_OF_CONDUCT.md](https://github.com/OpenSTEF/.github/blob/main/CODE_OF_CONDUCT.md), [CONTRIBUTING.md](https://github.com/OpenSTEF/.github/blob/main/CONTRIBUTING.md) and [PROJECT_GOVERNANCE.md](https://github.com/OpenSTEF/.github/blob/main/PROJECT_GOVERNANCE.md) for details on the process for submitting pull requests to us.\n\n## Contact\nPlease read [SUPPORT.md](https://github.com/OpenSTEF/.github/blob/main/SUPPORT.md) for how to get in contact with the OpenSTEF project.\n""",,"2021/02/09, 08:48:47",988,MPL-2.0,249,2613,"2023/10/10, 17:57:01",23,381,457,77,15,2,1.2,0.7801875732708089,"2023/10/10, 17:57:23",v3.3.3,2,23,false,,false,false,"OpenSTEF/openstef-reference,alliander-opensource/openstef-api-preliminary,OpenSTEF/openstef-dbc,alliander-opensource/openstef-api,OpenSTEF/openstef-offline-example",,https://github.com/OpenSTEF,,,,,https://avatars.githubusercontent.com/u/93986058?v=4,,, EIAdata,Provides programmatic access to the Energy Information Administration's API.,Matt-Brigida,https://github.com/Matt-Brigida/EIAdata.git,github,,Energy Modeling and Optimization,"2023/09/08, 23:17:05",16,0,0,true,R,,,R,,"b'EIAdata\n=======\n\nR Wrapper for the Energy Information Administration (EIA) API. \n\nEIAdata has been updated for v2 of the EIA API. To pull an energy series you\'ll have to know the Series ID from the version 1 API. If you don\'t know the Series IDs for the data you want, do the following:\n\n1. Go to [EIA\'s Opendata website](https://www.eia.gov/opendata/).\n2. Go to the ""Bulk File Downloads"" by scrolling down a bit and looking on the right margin.\n3. Choose the bulk download for the category that contains the series you want. For example, if it is a Crude Oil series download the ""Petroleum"" file.\n4. After the file downloads, unzip it and find the Series ID you want. \n\nNote you only have to do the above once to get the Series ID.\n\nThis package provides programmatic access to the Energy Information Administration\'s (EIA) API (See http://www.eia.gov/beta/api/). There are currently over a million unique time series available through the API. To use the package you\'ll need a *free* API key from here: http://www.eia.gov/beta/api/register.cfm\n\nThe package also contains a function to return the latest EIA Weekly Natural Gas Storage Report. This function does not require an API key.\n\nThe package has ~~2~~ 1 main function~~s~~, ~~getCatEIA~~ and getEIA.\n\n* ~~getCatEIA: allows you to query the parent and subcategories of a given category, as well as any series IDs within that category.~~\n\n* getEIA: allows you to pull a time series (given by the series ID). The resulting object is of class xts. \n\nThis is the development version. 
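Because the package wraps the EIA v2 API, the same data can also be fetched directly over HTTP, which can help when debugging Series IDs. A hypothetical Python sketch (the key is a placeholder; the series ID is the daily Henry Hub spot price, and the v2 'seriesid' route is assumed to accept such v1-style IDs):

```python
# Hypothetical sketch of the kind of EIA v2 API call that getEIA wraps.
import requests

API_KEY = "YOUR_EIA_API_KEY"     # placeholder; free key from EIA registration
SERIES_ID = "NG.RNGWHHD.D"       # e.g. Henry Hub natural gas spot price, daily
resp = requests.get(
    f"https://api.eia.gov/v2/seriesid/{SERIES_ID}",
    params={"api_key": API_KEY},
    timeout=30,
)
resp.raise_for_status()
data = resp.json()["response"]["data"]  # list of period/value records
```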
You may prefer to install the version on CRAN here: http://cran.r-project.org/web/packages/EIAdata/index.html\n\nIf you would like to install this development version, install the `devtools` package and use:\n\n```\nlibrary(devtools)\ninstall_github(""Matt-Brigida/EIAdata"")\n```\n\nIf you have the CRAN version and simply want to test the development version you can use:\n\n```\nlibrary(devtools)\ndev_mode(on = T)\ninstall_github(""Matt-Brigida/EIAdata"")\n## test the package\n## and when you are done\ndev_mode(on = F)\n## and you are again using the CRAN version\n```\n\n'",,"2014/08/07, 21:35:17",3365,MPL-2.0,24,97,"2023/09/08, 23:22:51",1,6,21,5,46,0,0.16666666666666666,0.101123595505618,"2023/06/29, 16:22:55",v0.2.0,0,5,false,,false,false,,,,,,,,,,, Energy System Technology Data,"Compilation of assumptions about energy system technologies such as cost, efficiency and lifetime that can be read by energy system modelling software.",PyPSA,https://github.com/PyPSA/technology-data.git,github,"energy,costs,energy-system-model",Energy Modeling and Optimization,"2023/09/26, 10:26:21",34,0,19,true,TeX,PyPSA,PyPSA,"TeX,Python",https://technology-data.readthedocs.io,"b'![GitHub release (latest by date including pre-releases)](https://img.shields.io/github/v/release/pypsa/technology-data?include_prereleases)\n[![Documentation](https://readthedocs.org/projects/technology-data/badge/?version=latest)](https://technology-data.readthedocs.io/en/latest/?badge=latest)\n![Licence](https://img.shields.io/github/license/pypsa/technology-data)\n![Size](https://img.shields.io/github/repo-size/pypsa/technology-data)\n[![Zenodo](https://zenodo.org/badge/DOI/10.5281/zenodo.3994163.svg)](https://doi.org/10.5281/zenodo.3994163)\n[![Gitter](https://badges.gitter.im/PyPSA/community.svg)](https://gitter.im/PyPSA/community?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge)\n\n\n# Energy System Technology Data\n\nThis script compiles assumptions on energy system technologies (such\nas costs, efficiencies, lifetimes, etc.) for chosen years\n(e.g. [2020, 2030, 2050]) from a variety of sources into CSV files to\nbe read by energy system modelling software. The merged outputs have\nstandardized cost years, technology names, units and source information.\nFor further information about the structure and how to add new technologies, see the [documentation](https://technology-data.readthedocs.io/en/latest/).
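\nAs a quick illustration of how these compiled CSV outputs can be consumed downstream, the snippet below loads one cost file with `pandas`. The file name and the two-level index reflect our reading of the output convention (e.g. `outputs/costs_2030.csv`, indexed by technology and parameter) and should be treated as assumptions, not a documented interface:\n\n```python\n# Minimal sketch: load a compiled cost-assumptions file with pandas.\nimport pandas as pd\n\n# ""outputs/costs_2030.csv"" is an assumed file name for the chosen year.\ncosts = pd.read_csv(""outputs/costs_2030.csv"", index_col=[0, 1])\n\n# All compiled parameters (investment, lifetime, efficiency, ...) for one technology.\nprint(costs.loc[""solar""])\n```\n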
\n\nThe outputs are used in\n[PyPSA-Eur](https://github.com/PyPSA/pypsa-eur) and\n[PyPSA-Eur-Sec](https://github.com/PyPSA/pypsa-eur-sec).\n\n\n## Licence\n\nCopyright 2019-2020 Marta Victoria (Aarhus University), Kun Zhu\n(Aarhus University), Elisabeth Zeyen (TUB), Tom Brown (TUB)\n\nThe code in `scripts/` is released as free software under the\n[GPLv3](http://www.gnu.org/licenses/gpl-3.0.en.html), see LICENSE.txt.\nHowever, different licenses and terms of use may apply to the various\ninput data.\n'",",https://doi.org/10.5281/zenodo.3994163","2020/05/06, 16:46:01",1267,GPL-3.0,144,441,"2023/09/26, 11:59:23",26,62,79,32,29,4,0.5,0.7257142857142858,"2023/08/07, 10:52:27",v0.6.2,0,9,false,,false,false,,,https://github.com/PyPSA,www.pypsa.org,,,,https://avatars.githubusercontent.com/u/32890768?v=4,,, "Asset-level Transition Risk in the Global Coal, Oil, and Gas Supply Chains","The global fossil fuel supply chain, mapped at the asset-level.",Lkruitwagen,https://github.com/Lkruitwagen/global-fossil-fuel-supply-chain.git,github,,Energy Modeling and Optimization,"2022/09/14, 10:01:24",169,0,11,false,Jupyter Notebook,,,"Jupyter Notebook,Python,Shell",,"b""# Asset-level Transition Risk in the Global Coal, Oil, and Gas Supply Chains\n\n**Kruitwagen, L., Klaas, J., Baghaei Lakeh, A., & Fan, J.**\n\nA repo for building a network model of the global fossil fuel supply chain using asset-level data. A project in support of the [Oxford Martin Programme on the Post-Carbon Transition](https://www.oxfordmartin.ox.ac.uk/post-carbon/) generously supported by [Microsoft AI for Earth](https://www.microsoft.com/en-us/ai/ai-for-earth).\n\n## Abstract\n\nClimate change risks manifest in the real economy, with grave consequences for welfare of human populations and the biosphere; the economic returns of industrial sectors; and geopolitical stability. Understanding the diffusion of risks in real infrastructure networks is an urgent priority for delivering climate change mitigation, adaptation, and resiliency. The oil, gas, and coal supply chains are the most salient progenitors and inheritors of these environmental risks.\nWe prepare a geospatial arrangement of the global oil, gas, and coal supply chains using open-source asset-level data. The resulting complex network has 6.09mn nodes and 15.70mn edges and is implemented in a graph database. With estimates of annual coal, gas, and oil demand in 13,229 global population centres and 8,165 global power stations, we use a minimum-cost flow method to estimate global asset-level energy flows. We develop a method for cross-validating and tuning our network flow simulation using aggregate country-level import and export statistics. We demonstrate two analyses of asset-level transition risk: a counter-factual demand shock scenario consistent with the IEA Sustainable Development Scenario; and supply shock scenarios developed by interdicting regionally-aggregated coal, oil, and gas supplies. Our contribution lies in the global scope of our asset-level supply chain and the novelty of our minimum-cost flow method. 
We conclude with a discussion of further research directions and make the graph database and supporting code publicly available.\n\nThe full paper is available [here](https://papers.ssrn.com/abstract=3783412).\n\n**Figure 1: Global coal, oil, and gas asset-level data**\n![Global fossil fuel infrastructure](image_assets/all_assets.png)\n\n## Data\n\nOur work uses only open-source data in order to maximise accessibility and reproducibility. We build on the substantial efforts of public sector organisations to develop and publish open datasets of energy system infrastructure and activity, particularly the work of the Global Energy Monitor, the World Resources Institute ResourceWatch, the Global Oil and Gas Features Database, the European Commission Joint Research Centre, and OpenStreetMap.\n\n| Name | N_installations | N_sources | Quality | Icon |\n| --------------------------------- | ---------------:| ----------:|:----------:| ----------------------------------:|\n| Well Pads | 9,845 | 24 | Good | ![](image_assets/OILWELL.png) |\n| Oil Fields | 25,236 | 63 | Good | ![](image_assets/OILFIELD.png) |\n| Coal Mines | 3,099 | 32 | Adequate | ![](image_assets/COALMINE.png) |\n| Processing & Refining Facilities | 2,501 | 55 | Good | ![](image_assets/REFINERY.png) |\n| LNG Liquefaction & Regasification | 329 | 15 | Excellent | ![](image_assets/LNGTERMINAL.png) | \n| Pipelines | 94,448 | 82 | Good | ![](image_assets/PIPELINE.png) |\n| Ports | 3,702 | 10 | Excellent | ![](image_assets/PORT.png) |\n| Shipping Routes | 8,273 | 1 | Excellent | ![](image_assets/SHIPPINGROUTE.png)|\n| Railways | 496,808 | 52 | Excellent | ![](image_assets/RAILWAY.png) |\n| Power Stations | 28,664 | 1 | Good | ![](image_assets/POWERSTATION.png) |\n| Population Centres | 13,229 | 2 | Excellent | ![](image_assets/CITY.png) |\n\nData must be [downloaded](https://drive.google.com/file/d/1LWXT3WyNpMS8xmdFzStbUyQlzdPLGhv_/view?usp=sharing) and unzipped in a folder `data/` in the main directory.\n\n## Setup\n\n#### Environment\n\nOn a fresh Linux install you will require the following:\n\n    sudo apt-get install python3-dev build-essential libspatialindex-dev openjdk-8-jre\n\nWe use [conda](https://docs.conda.io/en/latest/miniconda.html) for environment management. Create a new environment:\n\n    conda create -n ffsc python=3.8\n\nActivate your conda environment:\n\n    conda activate ffsc\n\nClone and change directory into this repo:\n\n    git clone https://github.com/Lkruitwagen/global-fossil-fuel-supply-chain.git\n    cd global-fossil-fuel-supply-chain\n\nInstall the pip package manager to the environment if it isn't already present:\n\n    conda install pip\n\nInstall the project packages. Conda is used to install geospatial packages with C binary dependencies:\n\n    pip install -r requirements.txt\n    conda install -c conda-forge --file conda_reqs.txt\n\n#### Environment Variables\n\nSave the environment variables we need in activation and deactivation scripts in the conda environment. 
Follow the [conda instructions](https://docs.conda.io/projects/conda/en/latest/user-guide/tasks/manage-environments.html#setting-environment-variables) for your OS, and adapt the following:\n\n    cd CONDA_PREFIX\n    mkdir -p ./etc/conda/activate.d\n    mkdir -p ./etc/conda/deactivate.d\n    touch ./etc/conda/activate.d/env_vars.sh\n    touch ./etc/conda/deactivate.d/env_vars.sh\n\nEdit `./etc/conda/activate.d/env_vars.sh` as follows:\n\n    #!/bin/sh\n    export USE_PYGEOS=1\n    export PYTHONPATH=$PWD\n    export NUMEXPR_MAX_THREADS=24 # or something suiting your resources\n\nEdit `./etc/conda/deactivate.d/env_vars.sh` as follows:\n\n    #!/bin/sh\n\n    unset PYTHONPATH\n    unset USE_PYGEOS\n    unset NUMEXPR_MAX_THREADS\n\nSave and close both files.\n\n\n#### Community Detection\n\nClone [the DirectedLouvain repository](https://github.com/nicolasdugue/DirectedLouvain)\n\n    git clone https://github.com/nicolasdugue/DirectedLouvain.git\n    \nEnter the repository and run the Makefile:\n\n    cd DirectedLouvain\n    make\n\n#### PyPy\n\nWe use [PyPy](https://www.pypy.org/index.html) to speed up the Dijkstra minimum-cost path search. PyPy uses a just-in-time compiler to significantly speed up Python code execution. [Download](https://www.pypy.org/download.html) the correct version of PyPy to `bin/`.\n\n\n#### (Optional) Neo4J Database\n1. Note: if SSH tunnelling, you will need to redirect ports 8678 and 8888.\n2. [Install neo4j server](https://neo4j.com/docs/operations-manual/current/installation/linux/). We use Neo4j 3.5 in our experiments, \nbut everything described here should also work with Neo4j 4.0\n\n\n\n## Usage \n\n### Running the computational pipeline\n\nThe main pipeline entrypoint is `cli.py`. The commands available to `cli.py` can be found with:\n\n    python cli.py --help\n\nThe following top-level commands are available: `network-assembly`, `solve-flow`, `shock-analysis`, and `visualisation`. \n\nThe pipeline makes extensive use of [Kedro](https://kedro.readthedocs.io/en/stable/), which is a computational graph manager. Kedro uses a data catalog, which is here: [conf/base/catalog.yml](conf/base/catalog.yml). The catalog can also be used in other scripts and environments:\n\n    from kedro.io import DataCatalog\n    import yaml\n\n    catalog = DataCatalog(yaml.load(open('/path/to/conf/base/catalog.yml','r'),Loader=yaml.SafeLoader))\n    dataframe = catalog.load('<dataset_name>')\n\nEach pipeline is a computational graph with computation nodes and dataset edges, creating a version-controlled, reproducible workflow from raw data through to analysis. Computation nodes are labelled with *tags* so the pipeline can be run one-node-at-a-time, if desired. The computational graph for the `network-assembly` pipeline is shown in the figure below. Each pipeline can be run using:\n\n    python cli.py <command> --tags=<tags>\n\nTags are nested such that top-level tags will execute a whole pipeline step, and nested tags will execute substeps for certain asset types or energy carriers.\n\n**Figure A-1: Network assembly computational graph**\n![Computational graph](image_assets/dag.png)\n\n### Assembling the infrastructure network\n\nThis pipeline assembles the asset-level infrastructure network. It can be run with `python cli.py network-assembly`. A comma-separated list of `--tags` can also be passed. Passing `--help` produces the following documentation:\n\n    Usage: cli.py network-assembly [OPTIONS]\n\n      Assemble the basic network from asset-level data. 
See\n      ffsc.pipeline.pipeline.py for detailed tags.\n\n      AVAILABLE TOP-LEVEL TAGS:\n      -------------------------\n      --preprocess : Preprocessing and homogenisation of all raw asset data.\n      --prep :       Geospatial preparation operations on all data.\n      --sjoin :      Spatial join operations matching linear and point assets.\n      --flmile :     First- and last-mile matching operations to gapfill missing data.\n      --explode :    Geospatial post-processing of joining and matching.\n      --simplify :   Simplification operations to reduce the number of nodes.\n\n    Options:\n      --tags TEXT  Optionally specify any individual node tags you want to run in\n                   a comma-separated list.\n\n      --help       Show this message and exit.\n\n### Solving energy flow on the network\n\nThis pipeline calculates asset-level energy flow through the network, from energy sources (i.e. coal mines, and oil and gas fields and wells) through to energy sinks (population centres and power stations). It can be run with `python cli.py solve-flow`. A comma-separated list of `--tags` can also be passed. Passing `--help` produces the following documentation:\n\n    Usage: cli.py solve-flow [OPTIONS]\n\n      Assemble the basic network from asset-level data. See\n      ffsc.flow.flow_pipeline.py, ffsc.communities.community_pipeline.py, and\n      ffsc.interdiction.interdiction_pipeline.py for detailed tags.\n\n      AVAILABLE TOP-LEVEL TAGS:\n      -------------------------\n      --flow_edges :             Prepare network edges dataframe.\n      --flow_nodes :             Prepare network nodes dataframe.\n      --flow_nx :                Test network connectivity and prepared for flow calculations.\n      --community-prep :         Prepare to add communities to network.\n      --community-run :          Run community detection algorithm.\n      --community-post-nodes :   Post-process community detection onto node dataframe.\n      --community-post-edges :   Post-process community detection onto edge dataframe.\n      --dijkstra-pickle :        Pickle edges in preparation for dijkstra mincost path.\n      --dijkstra-paths :         Run async dijkstra mincost path.\n      --dijkstra-adj :           Post-process dijkstra to mincost adjacency matrix.\n      --dijkstra-flow :          Solve flow using iterative cost-scaling.\n\n    Options:\n      --tags TEXT  Optionally specify any individual node tags you want to run\n      --help       Show this message and exit.\n\n### Analysing demand and supply shocks\n\nThe energy infrastructure network can be subjected to demand and supply shocks. Flow can be recalculated to see the effect of the shocks on asset-level energy flow and costs. The pipeline can be run with `python cli.py shock-analysis`. A comma-separated list of `--tags` can also be passed. Passing `--help` produces the following documentation:\n\n    Usage: cli.py shock-analysis [OPTIONS]\n\n      Prepare demand and supply shock analysis. See\n      ffsc.interdiction.interdiction_pipeline.py for detailed tags.\n\n      AVAILABLE TOP-LEVEL TAGS:\n      -------------------------\n      --sds_counterfactual :        Prepare Sustainable Development Scenario demand shock analysis.\n      --supply-interdiction :       Prepare supply interdiction shock analysis.\n      --post-supply-interdiction :  Post-process supply interdiction shock analysis.\n\n    Options:\n      --tags TEXT  Optionally specify any individual node tags you want to run\n      --help       Show this message and exit.\n\n### Visualising the data, flow, and analysis\n\nThe asset-level data can be visualised on a world map, and scaled and colored by the amount of energy flow passing through each asset. The pipeline can be run with `python cli.py visualisation`. A comma-separated list of `--tags` can also be passed. 
Passing `--help` produces the following documentation:\n\n    Usage: cli.py visualisation [OPTIONS]\n\n      Prepare visualisation of assets, flow, and demand shock counterfactual.\n      See ffsc.visualisation.visualise_pipeline.py for detailed tags.\n\n      AVAILABLE TOP-LEVEL TAGS:\n      -------------------------\n      --visualise-assets :      Visualise all assets.\n      --visualise-iso2 :        Add iso2 country codes to dataframes.\n      --visualise-trade-prep :  Prepare trade dataframes for comparison.\n      --visualise-trade :       Visualise actual trade and production vs simulated.\n      --visualise-flow :        Visualise energy flow.\n      --compare-flow :          Compare energy flow to SDS demand shock energy flow.\n\n    Options:\n      --tags TEXT  Optionally specify any individual node tags you want to run\n      --help       Show this message and exit.\n\n### Importing files into Neo4j\n1. Make sure Neo4j is shut down. The installer might start up neo4j under a different user (e.g. Neo4j). \nIn this case, you might want to find the process under which Neo4j runs using `sudo ps -a | grep neo4j`. Find the PID of the process and kill it using `sudo kill <PID>`.\n2. As Neo4j's files may be restricted, you will want to do the next steps as root.\n3. Delete Neo4j's data folder from the old database. On Linux, this is stored under `/var/lib/neo4j/data`.\n4. Import the data by executing the `bin/import.sh` script, which you can find under `src/neo4j_commands` in this repository.\n5. After the import is complete, restart Neo4j using `neo4j start`.\n\n\n""",,"2020/01/23, 16:59:07",1371,MIT,0,103,"2021/02/01, 20:02:20",12,10,11,0,996,3,0.5,0.03296703296703296,,,0,4,false,,false,false,,,,,,,,,,, draf,Analysis and decision support framework for local multi-energy hubs focusing on demand response.,DrafProject,https://github.com/DrafProject/draf.git,github,"decarbonization,decision-support,demand-response,flexibility-modeling,optimization,energy-system-modeling",Energy Modeling and Optimization,"2023/08/22, 07:13:17",10,0,5,true,Python,The Draf Project,DrafProject,"Python,Jupyter Notebook,Shell,Batchfile",,"b'
\n\n[![paper](https://img.shields.io/badge/Paper-doi.org/h3s2-brightgreen)][draf demo paper]\n[![License: LGPL v3](https://img.shields.io/badge/License-LGPL%20v3-blue.svg)](https://www.gnu.org/licenses/lgpl-3.0)\n[![python](https://img.shields.io/badge/python-3.9-blue?logo=python&logoColor=white)](https://github.com/DrafProject/draf)\n[![Imports: isort](https://img.shields.io/badge/%20imports-isort-%231674b1)](https://pycqa.github.io/isort/)\n[![Code style: black](https://img.shields.io/badge/code%20style-black-000000.svg)](https://github.com/psf/black)\n\n**d**emand **r**esponse **a**nalysis **f**ramework (`draf`) is an analysis and decision support framework for local multi-energy hubs focusing on demand response.\nIt uses the power of ([mixed integer]) [linear programming] optimization, [`pandas`], [`Plotly`], [`Matplotlib`], [`elmada`], [`GSEE`], [`Jupyter`] and more to help users along the energy system analysis process.\nThe software is described and demonstrated in the open-access [draf demo paper].\n`draf` runs on Windows, macOS, and Linux.\n\n## Features\n\n![`draf` process](doc/images/draf_process.svg)\n\n- **Time series analysis tools:**\n  - `DemandAnalyzer` - Analyze energy demand profiles\n  - `PeakLoadAnalyzer` - Analyze peak loads or run simple battery simulation\n- **Easily parameterizable [component templates](draf/components/component_templates.py):**\n  - battery energy storage (`BES`), battery electric vehicle (`BEV`), combined heat and power (`CHP`), heat-only boiler (`HOB`), heat pump (`HP`), power-to-heat (`P2H`), photovoltaic (`PV`), wind turbine (`WT`), thermal energy storage (`TES`), fuel cell (`FC`), electrolyzer (`Elc`), hydrogen storage (`H2S`), production process (`PP`), product storage (`PS`), direct air capture (`DAC`), and more.\n  - Sensible naming conventions for parameters and variables, see section [Naming conventions](#naming-conventions).\n- **Parameter preparation tools:**\n  - `TimeSeriesPrepper` - For time series data\n    - Electricity prices via [`elmada`]\n    - Carbon emission factors via [`elmada`]\n    - Standard load profiles from [BDEW]\n    - PV profiles via [`GSEE`] (In Germany, using weather data from [DWD])\n  - [`DataBase`](draf/prep/data_base.py) - For scientific data such as cost or efficiency factors\n- **Scenario generation tools:** Easily build individual scenarios or sensitivity analyses\n- **Multi-objective mathematical optimization** with support for different model languages and solvers:\n  - [`Pyomo`] - A free and open-source modeling language in Python that supports multiple solvers.\n  - [`GurobiPy`] - The Python interface to Gurobi, the fastest MILP solver (see [Mittelmann benchmark]).\n- **Plotting tools:** Convenient plotting of heatmaps, Sankeys, tables, pareto plots, etc. using [`Plotly`], [`Matplotlib`], and [`seaborn`].\n  - Support of meta data such as `unit`, `doc`, `src`, and `dims`\n  - Automatic unit conversion\n- **Export tools:**\n  - `CaseStudy` objects including all parameters, meta data, and results can be saved to files.\n  - Data can be exported to [xarray] format.\n\n## Quick start\n\n1. Install [miniconda] or [anaconda]\n\n1. Open a terminal in the directory where you want to place `draf`.\n\n1. Clone `draf`:\n\n   ```sh\n   git clone https://github.com/DrafProject/draf\n   cd draf\n   ```\n\n1. 
Create and activate the `draf` conda environment (`conda env create` creates a conda environment based on [environment.yml](environment.yml), installing the newest versions of the required packages, including a full editable local version of `draf`):\n\n   ```sh\n   conda env create\n   conda activate draf\n   ```\n\n1. (OPTIONAL) If the draf environment causes issues, you can install an older but more specific conda environment, e.g.:\n\n   ```sh\n   conda env create --file environments/environment_py39all_mac.yml --force\n   conda activate draf39\n   ```\n\n1. (OPTIONAL) To use Gurobi (fast optimization), install a valid Gurobi license (it's [free for academics](https://www.gurobi.com/academia/academic-program-and-licenses)).\n\n1. Open Jupyter notebook:\n\n   ```sh\n   jupyter notebook\n   ```\n\n1. Check if the imports work:\n\n   ```py\n   import draf\n   import elmada\n   ```\n\n1. (OPTIONAL) To use the latest electricity prices and carbon emission factors from [`elmada`], request an [ENTSO-E API key] and set it in elmada:\n\n   ```py\n   # You have to run this Python code only once (it writes to a permanent file):\n   import elmada\n   elmada.set_api_keys(entsoe=""YOUR_ENTSOE_KEY"")\n   ```\n\n1. Start modeling. Have a look at the [examples](examples).\n   Start with the [`minimal`](examples/minimal.py) example if you want to write your own component.\n   Start with the [`PV`](examples/pv.py) example if you want to import existing components.\n   For more advanced modeling look at the [draf_demo_case_studies].\n   Consider the [DRAF CHEAT SHEET](draf_cheat_sheet.md).\n\n## Structure\n\nA `CaseStudy` object can contain several `Scenario` instances:\n\n![`draf` architecture](doc/images/draf_architecture.svg)\n\n### Naming conventions\n\nAll parameter and variable names must satisfy the structure `<type>_<component>_<descriptor>_<dims>`.\nE.g. in `P_EG_buy_T`, `P` is the entity type (here: electrical power), `EG` the component (here: Electricity Grid), `buy` the descriptor and `T` the dimension (here: time).\nDimensions are denoted with individual capital letters, so `<dims>` is `TE` if the entity has the dimensions `T` and `E`.\nSee [conventions.py](draf/conventions.py) for examples of types, components, and descriptors.\n\n## Contributing\n\nContributions in any form are welcome!\nPlease contact [Markus Fleschutz].\n\n## Citing draf\n\nIf you use `draf` for academic work please cite the [draf demo paper]: \n\n```bibtex\n@article{Fleschutz2022,\n  author = {Markus Fleschutz and Markus Bohlayer and Marco Braun and Michael D. Murphy},\n  title = {Demand Response Analysis Framework ({DRAF}): An Open-Source Multi-Objective Decision Support Tool for Decarbonizing Local Multi-Energy Systems},\n  publisher = {{MDPI} {AG}},\n  journal = {Sustainability},\n  year = {2022},\n  volume = {14},\n  number = {13},\n  pages = {8025},\n  url = {https://doi.org/10.3390/su14138025},\n  doi = {10.3390/su14138025},\n}\n```\n\n## License and status\n\nCopyright (c) 2022 Markus Fleschutz\n\nLicense: [LGPL v3]\n\nThe development of `draf` was initiated by [Markus Fleschutz] in 2017 and continued in a cooperative PhD between the [MeSSO Research Group] of the [Munster Technological University], Ireland and the [Energy System Analysis Research Group] of the [Karlsruhe University of Applied Sciences], Germany.\nThis project was supported by the MTU R\xc3\xadsam PhD Scholarship scheme and by the Federal Ministry for Economic Affairs and Climate Action (BMWK) on the basis of a decision by the German Bundestag.\n\nThank you [Dr. Markus Bohlayer], [Dr. Ing. Adrian B\xc3\xbcrger], [Andre Leippi], [Dr. Ing. 
Marco Braun], and [Dr. Michael D. Murphy] for your valuable feedback.\n\n\n\nTHE SOFTWARE IS PROVIDED ""AS IS"", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.\n\n\n[`elmada`]: https://github.com/DrafProject/elmada\n[`GSEE`]: https://github.com/renewables-ninja/gsee\n[`GurobiPy`]: https://pypi.org/project/gurobipy\n[`Jupyter`]: https://jupyter.org\n[`Matplotlib`]: https://matplotlib.org\n[`pandas`]: https://pandas.pydata.org\n[`Plotly`]: https://plotly.com\n[`Pyomo`]: https://github.com/Pyomo/pyomo\n[`seaborn`]: https://seaborn.pydata.org\n[anaconda]: https://www.anaconda.com/products/individual\n[Andre Leippi]: https://www.linkedin.com/in/andre-leippi-3187a81a7\n[BDEW]: https://www.bdew.de\n[Dr. Ing. Adrian B\xc3\xbcrger]: https://www.linkedin.com/in/adrian-b%C3%BCrger-251205236/\n[Dr. Ing. Marco Braun]: https://www.h-ka.de/en/about-hka/organization-people/staff-search/person/marco-braun\n[Dr. Markus Bohlayer]: https://www.linkedin.com/in/markus-bohlayer\n[Dr. Michael D. Murphy]: https://www.linkedin.com/in/michael-d-murphy-16134118\n[draf demo paper]: https://doi.org/10.3390/su14138025\n[draf_demo_case_studies]: https://github.com/DrafProject/draf_demo_case_studies\n[DWD]: https://www.dwd.de\n[Energy System Analysis Research Group]: https://www.h-ka.de/en/ikku/energy-system-analysis\n[ENTSO-E API key]: https://transparency.entsoe.eu/content/static_content/Static%20content/web%20api/Guide.html#_authentication_and_authorisation\n[Karlsruhe University of Applied Sciences]: https://www.h-ka.de/en\n[LGPL v3]: https://www.gnu.org/licenses/lgpl-3.0.de.html\n[linear programming]: https://en.wikipedia.org/wiki/Linear_programming\n[Markus Fleschutz]: https://mfleschutz.github.io\n[MeSSO Research Group]: https://messo.cit.ie\n[miniconda]: https://docs.conda.io/en/latest/miniconda.html\n[Mittelmann benchmark]: http://plato.asu.edu/ftp/milp.html\n[mixed integer]: https://en.wikipedia.org/wiki/Integer_programming\n[Munster Technological University]: https://www.mtu.ie\n[xarray]: http://xarray.pydata.org/en/stable'",",https://doi.org/10.3390/su14138025,https://doi.org/10.3390/su14138025\n","2021/07/22, 11:34:54",825,LGPL-3.0,43,187,"2023/09/01, 13:10:40",1,1,6,3,54,0,0.0,0.0,"2023/01/14, 22:07:10",v0.3.0,0,1,false,,false,false,,,https://github.com/DrafProject,,Germany,,,https://avatars.githubusercontent.com/u/62054152?v=4,,, GENeSYS-MOD,"An open-source energy system model, originally based on the Open-Source Energy Modeling System (OSeMOSYS) framework, with various additions.",genesysmod,,custom,,Energy Modeling and Optimization,,,,,,,,,,https://git.tu-berlin.de/genesysmod/genesys-mod-public,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,, PyPSA-Earth,An Open Optimisation Model of the Earth Energy System.,pypsa-meets-earth,https://github.com/pypsa-meets-earth/pypsa-earth.git,github,"pypsa-africa,pypsa-earth,energy-system-model,python,investment-optimization,operational-optimization,power-system-model,scenario-analysis,energy-system-planning,power-system-planning",Energy Modeling and Optimization,"2023/10/24, 10:48:53",135,0,75,true,Python,PyPSA meets Earth,pypsa-meets-earth,Python,https://pypsa-earth.readthedocs.io/en/latest/,"b'\n\n# 
PyPSA-Earth\n
\n\n## Development Status: **Stable and Active**\n\n[![Status Linux](https://github.com/pypsa-meets-earth/pypsa-earth/actions/workflows/ci-linux.yaml/badge.svg?branch=main&event=push)](https://github.com/pypsa-meets-earth/pypsa-earth/actions/workflows/ci-linux.yaml)\n[![Status Mac](https://github.com/pypsa-meets-earth/pypsa-earth/actions/workflows/ci-mac.yaml/badge.svg?branch=main&event=push)](https://github.com/pypsa-meets-earth/pypsa-earth/actions/workflows/ci-mac.yaml)\n[![Status Windows](https://github.com/pypsa-meets-earth/pypsa-earth/actions/workflows/ci-windows.yaml/badge.svg?branch=main&event=push)](https://github.com/pypsa-meets-earth/pypsa-earth/actions/workflows/ci-windows.yaml)\n[![Documentation Status](https://readthedocs.org/projects/pypsa-earth/badge/?version=latest)](https://pypsa-earth.readthedocs.io/en/latest/?badge=latest)\n![Size](https://img.shields.io/github/repo-size/pypsa-meets-earth/pypsa-earth)\n[![License: AGPL v3](https://img.shields.io/badge/License-AGPLv3-blue.svg)](https://www.gnu.org/licenses/agpl-3.0)\n[![REUSE status](https://api.reuse.software/badge/github.com/pypsa-meets-earth/pypsa-earth)](https://api.reuse.software/info/github.com/pypsa-meets-earth/pypsa-earth)\n[![Code style: black](https://img.shields.io/badge/code%20style-black-000000.svg)](https://github.com/psf/black)\n[![pre-commit.ci status](https://results.pre-commit.ci/badge/github/pypsa-meets-earth/pypsa-earth/main.svg)](https://results.pre-commit.ci/latest/github/pypsa-meets-earth/pypsa-earth/main)\n[![Discord](https://img.shields.io/discord/911692131440148490?logo=discord)](https://discord.gg/AnuJBk23FU)\n[![Google Drive](https://img.shields.io/badge/Google%20Drive-4285F4?style=flat&logo=googledrive&logoColor=white)](https://drive.google.com/drive/folders/1U7fgktbxlaGzWxT2C0-Xv-_ffWCxAKZz)\n\n**PyPSA-Earth is the first open-source global energy system model with data in high spatial and temporal resolution.** It enables large-scale collaboration by providing a tool that can model the world energy system or any subset of it. This work is derived from the European [PyPSA-Eur](https://pypsa-eur.readthedocs.io/en/latest/) model using new data and functions. It is suitable for operational as well as combined generation, storage and transmission expansion studies. The model provides two main features: (1) customizable data extraction and preparation scripts with global coverage and (2) a [PyPSA](https://pypsa.readthedocs.io/en/latest/) energy modelling framework integration. The data includes electricity demand, generation and medium to high-voltage networks from open sources, and additional data can be integrated. A broad range of clustering and grid meshing strategies help adapt the model to computational and practical needs.\n\nThe model is described in the Applied Energy article ""PyPSA-Earth. A new global open energy system optimization model demonstrated in Africa"", 2023, https://doi.org/10.1016/j.apenergy.2023.121096 [(BibTeX)](https://pypsa-earth.readthedocs.io/en/latest/talks_and_papers.html#publications). The [documentation](https://pypsa-earth.readthedocs.io/en/latest/index.html) provides additional information.\n\n**PyPSA meets Earth is a free and open source software initiative aiming to develop a powerful energy system model for Earth.** We work on open data, open source modelling, open source solver support and open communities. Stay tuned and join our mission - we look for users, co-developers and leaders! Check out our [website for results and our projects](https://pypsa-meets-earth.github.io/projects.html). Happy coding!
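\n\nAs a small illustration of the PyPSA integration mentioned above, a solved network produced by the workflow can be inspected directly with PyPSA. This is a sketch, not part of the official documentation; the file name is purely illustrative (real result files are named by the scenario wildcards of the Snakemake workflow):\n\n```python\n# Minimal sketch: open a solved PyPSA-Earth network and summarise it.\nimport pypsa\n\n# Illustrative path; adjust to the network file your run produced.\nn = pypsa.Network(""results/networks/solved_network.nc"")\n\n# Optimised generator capacity per carrier (MW).\nprint(n.generators.groupby(""carrier"").p_nom_opt.sum())\n```\n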
\n\n## Livetracker: most popular global models\n\n## Get involved\n\nThere are multiple ways to get involved and learn more about our work. This is how we organise ourselves:\n\n- [**Discord NEW! (Open)**](https://discord.gg/AnuJBk23FU)\n  - chat with the community, team up on features, exchange with developers, code in voice channels\n  - registration and usage is for free
\n- **General initiative meeting (Open)**\n  - every fourth Thursday of the month, 16:00-17:00 (UK time)\n  \n    `download .ics`\n  \n  - join for project news and high-level code updates\n  - meeting hosted on Discord\n  - [open agenda](https://docs.google.com/document/d/1r6wm2RBe0DWFngmItpFfSFHA-CnUmVcVTkIKmthdW3g/edit?usp=sharing). See what we will discuss. Invited members have edit rights.\n- **Buddy talk (Open)**\n  - book a 30min meeting with Max to discuss anything you like\n  - booking link: [calendly.com/pypsa-meets-earth](https://calendly.com/max-parzen/pypsa-meets-earth-exchange-30min)\n- **Specific code meeting (Open)**\n  - meeting hosted on Discord\n  - join updates, demos, Q&A\'s, discussions and the coordination of each work package\n  1. Demand creation and prediction meeting, on demand\n  2. AI asset detection meeting, on demand\n  3. Sector coupling meeting, every Thursday 09:00 (UK time), `download .ics`\n  4. PyPSA-Earth meeting, every Thursday 16:00 (UK time), `download .ics`\n- **Outreach meeting (Open)**\n  - every second week, Tuesday 17:00 (UK time)\n  - planning, discussing events, workshops, communication, community activities\n- [**Google Drive**](https://drive.google.com/drive/folders/13Z8Y9zgsh5IZaDNkkRyo1wkoMgbdUxT5?usp=sharing)\n  - access to minutes, presentations, lists, documents\n\n## Installation\n\n1. Open your terminal at a location where you want to install pypsa-earth. Type the following in your terminal to download the package from GitHub:\n\n   ```bash\n   .../some/path/without/spaces % git clone https://github.com/pypsa-meets-earth/pypsa-earth.git\n   ```\n2. The python package requirements are curated in the `envs/environment.yaml` file.\n   The environment can be installed using:\n\n   ```bash\n   .../pypsa-earth % conda env create -f envs/environment.yaml\n   ```\n\n   If the above takes longer than 30min, you might want to try mamba for faster installation:\n\n   ```bash\n   (base) conda install -c conda-forge mamba\n\n   .../pypsa-earth % mamba env create -f envs/environment.yaml\n   ```\n\n3. For running the optimization, one has to install a solver. We recommend the open-source HiGHS solver, for which an installation manual is given [here](https://github.com/PyPSA/PyPSA/blob/633669d3f940ea256fb0a2313c7a499cbe0122a5/pypsa/linopt.py#L608-L632).\n4. To use jupyter lab (new jupyter notebooks), **continue** with the [ipython kernel installation](http://echrislynch.com/2019/02/01/adding-an-environment-to-jupyter-notebooks/) and test if your jupyter lab works:\n\n   ```bash\n   .../pypsa-earth % ipython kernel install --user --name=pypsa-earth\n   .../pypsa-earth % jupyter lab\n   ```\n5. Verify or install a Java distribution from the [official website](https://www.oracle.com/java/technologies/downloads/) or equivalent.\n   To verify a successful installation, the following command can be run from bash:\n\n   ```bash\n   .../pypsa-earth % java -version\n   ```\n\n   The expected output should resemble the following:\n\n   ```bash\n   java version ""1.8.0_341""\n   Java(TM) SE Runtime Environment (build 1.8.0_341-b10)\n   Java HotSpot(TM) 64-Bit Server VM (build 25.341-b10, mixed mode)\n   ```\n
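\nBefore moving on to the tutorial below, a quick sanity check of the new environment can be done from Python. This is a sketch, not an official installation step; the imports reflect core dependencies of the workflow (PyPSA, atlite and snakemake):\n\n```python\n# Run inside the activated pypsa-earth environment.\nimport pypsa      # the underlying energy system modelling framework\nimport atlite     # renewable potentials and time series from weather data\nimport snakemake  # drives the workflow defined in the Snakefile\n\nprint(""pypsa version:"", pypsa.__version__)\n```\n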
## Test run on tutorial\n\n- In the folder, open a terminal/command window located at the path `~/pypsa-earth/`\n- Activate the environment: `conda activate pypsa-earth`\n- Rename config.tutorial.yaml to config.yaml. For instance, on Linux:\n  ```bash\n  mv config.tutorial.yaml config.yaml\n  ```\n- Run a dry run of the Snakemake workflow by simply typing in the terminal:\n  ```bash\n  snakemake -j 1 solve_all_networks -n\n  ```\n\n  Remove the -n to do a real run. Follow the tutorials of PyPSA-Eur 1 and 2 on [YouTube](https://www.youtube.com/watch?v=ty47YU1_eeQ) to continue with an analysis.\n\n## Training\n\n- We recently updated some [hackathon material](https://github.com/pypsa-meets-earth/documentation) for PyPSA-Earth. The hackathon contains jupyter notebooks with exercises. After going through the one-day theoretical and practical material, you should have a suitable coding setup and feel confident about contributing.\n- To get a general feeling for the PyPSA functionality, we further recommend going through the [PyPSA](https://github.com/PyPSA/PyPSA/tree/master/examples) and [Atlite](https://github.com/PyPSA/atlite/tree/master/examples) examples.\n\n## Questions and Issues\n\n- We are happy to answer questions and help with issues **if they are public**. Through being public, the wider community can benefit from the raised points. Some tips: **bugs** and **feature requests** should be raised in the [**GitHub Issues**](https://github.com/pypsa-meets-earth/pypsa-earth/issues/new/choose). **General workflow** or **user questions** as well as discussion points should be posted at the [**GitHub Discussions**](https://github.com/pypsa-meets-earth/pypsa-earth/discussions/categories/q-a) tab. Happy coding.\n\n## Documentation\n\nThe documentation is available here: [documentation](https://pypsa-earth.readthedocs.io/en/latest/index.html).\n\n## Collaborators\n
\nHazemful, Fabian Neumann, Ekaterina, Euronion, Justus Ilemobayo, Mnm-matin, Martha Frysztacki, Lukas Franken, Max Parzen, Davide-f, Koen Van Greevenbroek, Hazem, EnergyLS, AnasAlgarei, Yerbol Akhmetov, DeniseGiub, Thomas Kouroughli, Emre_Yorat, Giacomo Falchetta, Ekaterina-Vo, Cschau-ieg, Tobias, Anton Achhammer, Carlos Fernandez, EmreYorat, GridGrapher, HanaElattar, Jarrad Wright, Jess, Katherine M. Antonio, Pietro Monticone, Sylvain Quoilin, Juli-a-ko, Stephen J Lee\n
\n\n'",",https://doi.org/10.1016/j.apenergy.2023.121096","2021/02/28, 19:26:35",969,LGPL-3.0,1061,3303,"2023/10/24, 10:48:54",121,497,731,351,1,14,0.8,0.7190207156308851,"2023/10/19, 10:33:35",v0.2.3,19,38,false,,false,false,,,https://github.com/pypsa-meets-earth,https://pypsa-meets-earth.github.io/,,,,https://avatars.githubusercontent.com/u/84225086?v=4,,, tell,An open-source Python package to model future hourly total electricity loads.,IMMM-SFA,https://github.com/IMMM-SFA/tell.git,github,,Energy Modeling and Optimization,"2023/08/18, 19:07:57",15,23,10,true,Python,Integrated Multisector Multiscale Modeling,IMMM-SFA,"Python,Jupyter Notebook,PowerShell,TeX,Batchfile",https://immm-sfa.github.io/tell/,"b'[![build](https://github.com/IMMM-SFA/tell/actions/workflows/build.yml/badge.svg)](https://github.com/IMMM-SFA/tell/actions/workflows/build.yml) [![DOI](https://zenodo.org/badge/305802399.svg)](https://zenodo.org/badge/latestdoi/305802399)\n\n\n## tell\n\n#### `tell` is an open-source Python package to model future hourly total electricity loads.\n\n### Purpose\n`tell` was created to:\n\n - Project the short- and long-term evolution of hourly electricity demand in response to changes in weather and climate.\n\n - Work at a spatial resolution adequate for input to a unit commitment/economic dispatch (UC/ED) model.\n\n - Maintain consistency with the long-term growth and evolution of annual state-level electricity demand projected by an economically driven human-Earth system model.\n\n### Install `tell`\n\n`tell` is available on GitHub and can be installed with pip. `tell` requires Python >= 3.8 and < 4.0, and has been tested on\nWindows and Mac platforms. (Note: For those installing on Windows, `tell` depends on GeoPandas functionality. Please see suggestions for installing GeoPandas on Windows here: \nhttps://geopandas.org/en/stable/getting_started/install.html)\n\n```bash\npip install tell\n```\n\n### Check out a quickstarter tutorial to run `tell`\n\nRun `tell` using the quickstarter tutorial: [Quickstarter](https://immm-sfa.github.io/tell/tell_quickstarter.html).\n\n### Getting started\n\nNew to `tell`? Get familiar with what `tell` is all about in our [Getting Started](https://immm-sfa.github.io/tell/index.html#) documentation.\n\n### User guide\n\nOur [User Guide](https://immm-sfa.github.io/tell/user_guide.html) provides in-depth information on the key concepts of `tell` and how the model works. \n\n### Contributing to `tell`\n\nWhether you find a typo in the documentation, find a bug, or want to develop functionality that you think will make `tell` more robust, you are welcome to contribute. Please see our [Contribution Guidelines](https://immm-sfa.github.io/tell/contributing.html) for more details.\n\n### API reference\nThe [API Reference](https://immm-sfa.github.io/tell/modules.html) contains a detailed description of the `tell` API. The reference describes how the methods work and which parameters can be used. It assumes that you have an understanding of the key concepts.\n\n### Contact/Help\nNeed help with `tell` or have a comment? Please open a [new Issue](https://github.com/IMMM-SFA/tell/issues/new/choose) with your question/comments. 
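\nAs a quick, hedged smoke test (this snippet is not from the `tell` documentation; it only verifies that the pip-installed package is importable and reports its version):\n\n```python\n# Verify a pip-installed copy of tell (requires Python >= 3.8).\nfrom importlib.metadata import version\n\nimport tell  # importing exercises the package and its dependencies\n\nprint(""tell version:"", version(""tell""))\n```\n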
\n'",",https://zenodo.org/badge/latestdoi/305802399","2020/10/20, 18:42:14",1100,BSD-2-Clause,66,593,"2023/08/18, 19:09:13",0,36,46,3,68,0,0.8,0.5620437956204379,"2023/08/18, 19:13:04",v1.1.0,0,6,false,,false,true,"GoDoG-app/project-GoDoG-server,bokyung124/GSDC_HTCondor,mindmeand/serverless-app,hyunsungKR/aws-posting-app,Yunwltn/aws-posting_server,Yunwltn/aws_S3_upload_server,hyunsungKR/aws_s3_upload_server,hyunsungKR/aws_movie_app,Yunwltn/aws_movie_server,Yunwltn/aws_memo_server,hyunsungKR/aws_memo_app,Yunwltn/aws_recipe_server,hyunsungKR/aws_recipe_server,pethotel-app/pethotel-serverless-app,hyunsungKR/AWS_Rekognition,eyoo95/perform-server-aws,IFIF3526/aws-memo-server,jkong72/serverless-flask-movie-server,jphines/PredictingPopGov,REDIVIOUS/UGP-HustLab-CMU15210,whatofit/LevelWordWithFreq,BitspleaseSliit/moocrec-coursera-service,BitspleaseSliit/topics-and-complexity",,https://github.com/IMMM-SFA,https://im3.pnnl.gov/,"Richland, WA",,,https://avatars.githubusercontent.com/u/31457237?v=4,,, AMIRIS,An agent-based simulation of electricity markets and their actors enabling researchers to analyse and evaluate energy policy instruments and their impact on the actors.,dlr-ve/esy/amiris,https://gitlab.com/dlr-ve/esy/amiris/amiris,gitlab,,Energy Modeling and Optimization,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,, AMIRIS-Py,Python tools for the electricity market model AMIRIS.,dlr-ve/esy/amiris,https://gitlab.com/dlr-ve/esy/amiris/amiris-py,gitlab,,Energy Modeling and Optimization,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,, PREP-SHOT,"A transparent, modular, and open-source Energy Capacity Expansion Model.",PREP-NexT,https://github.com/PREP-NexT/PREP-SHOT.git,github,"renewable-energy,reservoir-modeling,hydropower",Energy Modeling and Optimization,"2023/10/16, 12:38:08",48,0,48,true,Python,PREP-NexT,PREP-NexT,Python,https://prep-next.github.io/PREP-SHOT/,"b'## Overview\n\n**PREP-SHOT** (**P**athways for **R**enewable **E**nergy **P**lanning coupling **S**hort-term **H**ydropower **O**pera**T**ion) is a transparent, modular, and open-source energy expansion model, offering advanced solutions for multi-scale, intertemporal, and cost-effective expansion of energy systems and transmission lines. It\'s developed by [Zhanwei Liu](https://www.researchgate.net/profile/Zhanwei-Liu-4) and [Xiaogang He](http://hydro.iis.u-tokyo.ac.jp/~hexg/) from the [PREP-NexT](https://github.com/PREP-NexT) Lab at the [National University of Singapore](https://nus.edu.sg/).\n\nFor more information, please visit our [Official Documentation](https://prep-next.github.io/PREP-SHOT/).\n\nThis project is licensed under the [GNU General Public License 3.0](https://github.com/PREP-NexT/PREP-SHOT/blob/main/LICENSE).\n\n## Key Features\n\n- Optimization model based on linear programming for multi-zone energy systems.\n- Cost minimization while meeting given demand time series.\n- Adjustable operation on hourly-spaced time steps.\n- Input data in Excel format and output data in NetCDF format using ``Xarray``.\n- Support for multiple solvers like Gurobi, CPLEX, MOSEK, and GLPK via `Pyomo`.\n- Allows input of multiple scenarios for specific parameters.\n- A pure Python program, leveraging ``pandas`` and ``Xarray`` for simplified complex data analysis and extensibility.\n\n## Getting Started\n\nThis section includes a brief tutorial on running your first PREP-SHOT model.\n\n1. Clone the repo\n\n   ```bash\n   git clone https://github.com/PREP-NexT/PREP-SHOT.git\n   ```\n\n2. Create the Conda environment and install the dependencies\n\n   ```bash\n   conda env create -f prep-shot.yml\n   ```\n\n3. Activate the Conda environment\n\n   ```bash\n   conda activate prep-shot-test\n   ```\n\n4. Run your first model\n\n   ```bash\n   python run.py\n   ```\n\nThis example is inspired by real-world data. For a detailed elaboration of this tutorial, check out the [Tutorial Page](https://prep-next.github.io/PREP-SHOT/Tutorial.html) in our documentation.
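\nSince PREP-SHOT writes its results to NetCDF via ``Xarray`` (see the key features above), a finished run can be inspected along the following lines. This is a sketch; the file path is an assumption for illustration, not a documented interface:\n\n```python\n# Sketch: inspect a PREP-SHOT NetCDF result file with xarray.\nimport xarray as xr\n\nds = xr.open_dataset(""output/results.nc"")  # assumed output location\nprint(ds)  # dimensions, coordinates and data variables\n```\n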
\n## How to Contribute\n\nTo contribute to this project, please read our [Contributing Guidelines](https://prep-next.github.io/PREP-SHOT/Changelog.html#contributing-to-prep-shot).\n\n## Citation\n\nIf you use PREP-SHOT in a scientific publication, we would appreciate citations. You can use the following BibTeX entry:\n\n```bibtex\n@article{liu2023,\n  title = {Balancing-oriented hydropower operation makes the clean energy transition more affordable and simultaneously boosts water security},\n  author = {Liu, Zhanwei and He, Xiaogang},\n  journal = {Nature Water},\n  volume = {1},\n  pages = {778--789},\n  year = {2023},\n  doi = {10.1038/s44221-023-00126-0},\n}\n```\n\n## Contact Us\n\nIf you have any questions, comments, or suggestions that aren\'t suitable for public discussions in the Issues section, please feel free to reach out to [Zhanwei Liu](mailto:liuzhanwei@u.nus.edu).\n\nPlease use the GitHub Issues for public discussions related to bugs, enhancements, or other project-related discussions.\n\n## Roadmap\n\n+ `Benders` decomposition-based fast solution framework\n+ `JuMP`-based low-memory and fast modelling engine\n+ Support for input of cost\xe2\x80\x93supply curves of technologies\n+ Support for expanding conventional hydropower plants\n+ Support for refurbishing conventional hydropower plants to pumped-storage schemes\n+ Support for refurbishing carbon-emission plants to carbon capture and storage (CCS) schemes\n\n## Disclaimer\n\nThe PREP-SHOT model is an academic project and is not intended to be used as a precise prediction tool for specific hydropower operations or energy planning. The developers will not be held liable for any decisions made based on the use of this model. We recommend applying it in conjunction with expert judgment and other modeling tools in a decision-making context.\n\n---\n\n![Repo Analytics](https://repobeats.axiom.co/api/embed/159a603ee4c6124a5addc35d47b3cb02e3fc39f0.svg ""Repo analytics"")\n'",,"2022/05/29, 12:03:21",514,GPL-3.0,177,179,"2023/10/16, 12:38:08",0,13,13,13,9,0,0.1,0.308641975308642,,,0,4,false,,false,false,,,https://github.com/PREP-NexT,,Singapore,,,https://avatars.githubusercontent.com/u/93474760?v=4,,, HYBRID,A modeling toolset to assess the integration and economic viability of Integrated Energy Systems.,idaholab,https://github.com/idaholab/HYBRID.git,github,,Energy Modeling and Optimization,"2023/10/11, 15:15:21",26,0,8,true,C,Idaho National Laboratory,idaholab,"C,Modelica,Python,Motoko,CSS,Shell,Makefile,Batchfile,HTML",,"b'# HYBRID\r\n\r\nHYBRID is a collection of transient process models developed in the Modelica language capable of representing the physical dynamics of various integrated energy systems and processes. The models are developed in a modular way so as to allow users to quickly assemble and test various configurations and control systems. \r\n\r\nThe systems studied are modular and made of an assembly of components. For example, a system could contain a hybrid nuclear reactor, a gas turbine, a battery and some renewables. This system would correspond to the size of a balance area, but in theory any size of system is imaginable. The system is modeled in the \xe2\x80\x98Modelica/Dymola\xe2\x80\x99 language.\r\n\r\nTo assess the economics of the system, an optimization procedure varying different parameters can be run using the INL [FORCE](https://ies.inl.gov/SitePages/Technology%20-%20System%20Simulation.aspx) platform that combines HYBRID with HERON and RAVEN. \r\n\r\nCurrently, the HYBRID repository is based on:\r\n\r\n* RAVEN : RAVEN v 1.1 (and newer). See [GITHUB WEBSITE](https://github.com/idaholab/raven)\r\n* TRANSFORM-Library : TRANsient Simulation Framework of Reconfigurable Models. 
See [GITHUB WEBSITE](https://github.com/ORNL-Modelica/TRANSFORM-Library)\r\n* DYMOLA: DYMOLA 2020x or newer. See [Dymola Website](https://www.3ds.com/products-services/catia/products/dymola/?woc=%7B%22category%22%3A%5B%22category%2Fdymola%22%5D%7D&wockw=card_content_cta_1_url%3A%22https%3A%2F%2Fblogs.3ds.com%2Fcatia%2F%22)\r\n\r\n\r\nPublications\r\n-----\r\n### Manual\r\n* [HYBRID User Manual](https://www.osti.gov/biblio/1863262-hybrid-user-manual)\r\n### Technical Reports\r\n* [Evaluation of Hydrogen Production Feasibility for a Light Water Reactor in the Midwest](https://www.osti.gov/biblio/1569271-evaluation-hydrogen-production-feasibility-light-water-reactor-midwest)\r\n* [Status Report on FY2022 Model Development within the Integrated Energy Systems HYBRID Repository](https://www.osti.gov/biblio/1844226-status-report-fy2022-model-development-within-integrated-energy-systems-hybrid-repository)\r\n* [Thermal Energy Storage Model Development within the Integrated Energy Systems HYBRID Repository](https://www.osti.gov/biblio/1787041-thermal-energy-storage-model-development-within-integrated-energy-systems-hybrid-repository)\r\n* [Reverse Osmosis and Sensible Heat Thermal Energy Storage](https://www.osti.gov/biblio/1468648-status-report-component-models-developed-modelica-framework-reverse-osmosis-desalination-plant-thermal-energy-storage)\r\n### Journal Publications\r\n* [Development of the NuScale Power Module in the INL Modelica Ecosystem](https://www.tandfonline.com/doi/full/10.1080/00295450.2020.1781497)\r\n* [Modeling the Idaho National Laboratory Thermal-Energy Distribution System (TEDS) in the Modelica Ecosystem](https://www.mdpi.com/1996-1073/13/23/6353)\r\n* [Modeling, control, and dynamic performance analysis of a reverse osmosis desalination plant integrated within hybrid energy systems](https://www.sciencedirect.com/science/article/pii/S0360544216306600)\r\n* [Dynamic performance analysis of a high-temperature steam electrolysis plant integrated within nuclear-renewable hybrid energy systems](https://www.sciencedirect.com/science/article/pii/S0306261918310870)\r\n### Dissertations\r\n* [Nuclear Integrated Energy System Preliminary Design and Analysis](https://repository.lib.ncsu.edu/bitstream/handle/1840.20/38548/etd.pdf?sequence=1)\r\n* Modeling and Experimental Validation of Latent Heat Thermal Energy Storage System\r\n\r\n### License\r\n\r\nCopyright 2020 Battelle Energy Alliance, LLC\r\n\r\nLicensed under the Apache 2.0 (the ""License"");\r\nyou may not use this file except in compliance with the License.\r\nYou may obtain a copy of the License at\r\n\r\n https://opensource.org/licenses/Apache-2.0\r\n\r\nUnless required by applicable law or agreed to in writing, software\r\ndistributed under the License is distributed on an ""AS IS"" BASIS,\r\nWITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\r\nSee the License for the specific language governing permissions and\r\nlimitations under the License.\r\n\r\n\r\n\r\n\r\nDevelopers\r\n-----\r\nBy contributing to this software project, you are agreeing to the following terms and conditions for your contributions:\r\n\r\nYou agree your contributions are submitted under the Apache-2 license. You represent you are authorized to make the contributions and grant the license. 
If your employer has rights to intellectual property that includes your contributions, you represent that you have received permission to make contributions and grant the required license on behalf of that employer.\r\n\r\nAuthors\r\n-----\r\n* Andrea Alfonsi\r\n* Aaron Epiney\r\n* Cristian Rabiti \r\n* Jong Suk Kim\r\n* Konor Frick (Technical Lead/ Owner)\r\n* Paul Talbot\r\n* Robert Kinoshita\r\n* Derek Stucki\r\n* Michael Greenwood\r\n* Roberto Ponciroli\r\n* Yu Tang\r\n* Daniel Mikkelson (Maintainer)\r\n* Amey Shigrekar\r\n\r\n\r\n### Other Software\r\nIdaho National Laboratory is a cutting-edge research facility which is constantly producing high-quality research and software. Feel free to take a look at our other software and scientific offerings at:\r\n\r\n[Primary Technology Offerings Page](https://www.inl.gov/inl-initiatives/technology-deployment)\r\n\r\n[Supported Open Source Software](https://github.com/idaholab)\r\n\r\n[Raw Experiment Open Source Software](https://github.com/IdahoLabResearch)\r\n\r\n[Unsupported Open Source Software](https://github.com/IdahoLabCuttingBoard)\r\n'",,"2021/02/10, 20:17:48",987,Apache-2.0,120,319,"2023/10/11, 15:15:22",9,55,59,30,14,6,0.5,0.6956521739130435,,,0,8,false,,false,false,,,https://github.com/idaholab,https://inl.gov,"Idaho, US",,,https://avatars.githubusercontent.com/u/3855370?v=4,,, FAME,Its purpose is supporting the rapid development and fast execution of complex agent-based energy system simulations.,fame-framework,https://gitlab.com/fame-framework/fame-core,gitlab,,Energy Modeling and Optimization,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,, REMix,"The main focus is on the broad techno-economical assessment of possible future energy system designs and analysis of interactions between technologies.",dlr-ve/esy/remix,https://gitlab.com/dlr-ve/esy/remix/framework,gitlab,,Energy Modeling and Optimization,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,, StorageVET,A valuation model for analysis of energy storage technologies and some other energy resources paired with storage.,epri-dev,https://github.com/epri-dev/StorageVET.git,github,,Energy Modeling and Optimization,"2023/02/02, 22:51:48",57,0,20,true,Python,EPRI,epri-dev,Python,https://www.storagevet.com,"b'# StorageVET 2.0\n\nStorageVET 2.0 is a valuation model for analysis of energy storage technologies and some other energy resources paired with storage. The tool can be used as a standalone model, or integrated with other power system models, thanks to its open-source Python framework. Download the executable environment and learn more at https://www.storagevet.com.\n\n## Getting Started\n\nThese instructions will get you a copy of the project up and running on your local machine for development and testing purposes. See deployment for notes on how to deploy the project on a live system.\n\n### Prerequisites & Installing\n\n#### 1. Install [Anaconda](https://www.anaconda.com/download/) for Python 3\n\n#### 2. Open Anaconda Prompt\n\n#### 3. Activate a Python 3.8 environment\n\n    It is recommended that the latest Python 3.8 version be used. As of this writing, that version is Python 3.8.16.\n    We give the user two routes to create a Python environment for Python 3.8.16.\n    >Most Windows users have success with the Conda route.\n\n    Each route results in a siloed python environment, but with different properties.\n    Choose the conda OR pip route and stick to it. 
Commands are not interchangeable.\n    >Please remember the route which created the python environment in order to activate it again later.\n    > **You will need to activate the python environment to run the model, always.**\n\n    **Conda Route - Recommended route for Windows OS**\n\nNote that the python version is specified, meaning conda does not have to be associated with a Python 3.8 installation\n```\nconda create -n storagevet-venv python=3.8.16\nconda activate storagevet-venv\n```\n\n**Pip Route**\n\n    If you have Python 3.8.16 installed directly on your computer, then we recommend trying this route.\n    >This route lets you open the prompt of your choice.\nNote that pip should be associated with a Python 3.8 installation\n\nOn Linux/Mac\n\n```\npip install virtualenv\nvirtualenv storagevet-venv\nsource storagevet-venv/bin/activate\n```\nOn Windows\n\n```\npip install virtualenv\nvirtualenv storagevet-venv\n""./storagevet-venv/Scripts/activate""\n```\n\n#### 4. Install project dependencies\n\n**Conda Route**\n```\npip install setuptools==52.0.0\nconda install conda-forge::blas=*=openblas --file requirements.txt --file requirements-dev.txt\npip install numpy_financial==1.0.0\n```\n\n**Pip Route**\n```\npip install setuptools==52.0.0\npip install -r requirements.txt -r requirements-dev.txt\npip install numpy_financial==1.0.0\n```\n\n## Running the tests\n\nTo run tests, activate the Python environment. Then enter the following into your terminal:\n```\npython -m pytest test\n```\n\n## Deployment\n\nTo use this project as a dependency in your own, clone this repo directly into the root of your project.\nOpen a terminal or command prompt from your project root, and input the following command:\n```\npip install -e ./storagevet\n```\n\n## Versioning\n\nFor the versions available, please\nsee the [list of releases](https://github.com/epri-dev/StorageVET/releases) on our GitHub repository.\nThis is version 1.2.3.\n\n## Authors\n\n* **Miles Evans**\n* **Andres Cortes**\n* **Halley Nathwani**\n* **Ramakrishnan Ravikumar**\n* **Evan Giarta**\n* **Thien Nguyen**\n* **Micah Botkin-Levy**\n* **Yekta Yazar**\n* **Kunle Awojinrin**\n* **Giovanni Damato**\n* **Andrew Etringer**\n\n## License\n\nThis project is licensed under the BSD (3-clause) License - see the [LICENSE.txt](./LICENSE.txt) file for details\n\n'",,"2020/05/21, 15:25:31",1252,BSD-3-Clause,2,30,"2022/03/12, 14:40:11",5,9,9,0,592,0,0.0,0.5,"2023/02/02, 23:40:18",v1.2.3,0,4,false,,false,false,,,https://github.com/epri-dev,http://www.epri.com,"Palo Alto, CA",,,https://avatars.githubusercontent.com/u/10049875?v=4,,, OpenDER,Aims to accurately represent steady-state and dynamic behaviors of inverter-based distributed energy resources.,epri-dev,https://github.com/epri-dev/OpenDER.git,github,,Energy Modeling and Optimization,"2023/10/03, 14:50:19",40,0,22,true,Python,EPRI,epri-dev,Python,,"b'.. figure:: https://raw.githubusercontent.com/epri-dev/OpenDER/develop_req_SQA/docs/logo.png\n   :alt: Open-source Distributed Energy Resources (OpenDER) Model\n\nEPRI\xe2\x80\x99s OpenDER model aims to accurately represent steady-state and dynamic behaviors of inverter-based distributed\nenergy resources (DERs). The model follows interconnection standards or grid-codes and is informed by the observed\nbehaviors of commercial products. 
Currently, model version 2.0 includes photovoltaic (PV) and battery energy storage\nsystem (BESS) DER behaviors according to the capabilities and functionalities required by the IEEE standard 1547-2018.\nThis first-of-its-kind model can be used to run snapshot, Quasi-Static Time Series (QSTS), and a variety of dynamic\nanalyses to study the impacts of DERs on distribution operations and planning.\n\nThis project is licensed under the terms of the BSD-3 clause license.\n\n\n.. |GitHub license| image:: https://img.shields.io/badge/License-BSD_3--Clause-blue.svg\n :target: https://github.com/epri-dev/opender/blob/master/LICENSE.txt\n\nResources\n---------\nOpenDER is under active development. Use the following resources to get involved.\n\n* EPRI OpenDER homepage (`link `__)\n\n* Model specification: IEEE 1547-2018 OpenDER Model: Version 2.0, EPRI, Palo Alto, CA: 2022. 3002025583\n (`link `__)\n\nDevelopment Objective\n---------------------\n* Harmonize accurate interpretations of the IEEE Std 1547-2018 DER interconnection standard among all the stakeholders,\n including utilities, distribution analysis tool developers, and original equipment manufacturers (OEMs).\n\n* Build consensus through an open-to-all DER Model User\xe2\x80\x99s Group (DERMUG), which will utilize EPRI-developed model\n specifications and codes and provide feedback for continuous improvement of the OpenDER model.\n\n* Help the industry properly model the DERs that are (or are to be) grid-interconnected and evaluate the associated impacts\n on distribution circuits accurately.\n\nOverall Block Diagram\n---------------------\n.. figure:: https://raw.githubusercontent.com/epri-dev/OpenDER/develop_req_SQA/docs/blockdiagram.png\n :width: 900\n\nDependencies\n------------\nPython >= 3.7\n\nnumpy\n\npandas\n\nmatplotlib\n\nDependencies of the package are auto-installed by the pip command below.\n\nInstallation\n------------\npip install opender\n\n\nExample of Using the DER Model\n------------------------------\nExample script: main.py\n\nThis example generates DER output power in a dynamic simulation to demonstrate DER trip and enter-service behavior.\nThe grid voltage is set to alternate between 1 and 1.11 per unit every ~10 minutes.\nThe DER should be observed to enter service and trip periodically.\n\nOther examples can be found in the `Examples `_ directory.\nClick the .ipynb files to see example scripts and execution results.\n\nUnit tests\n----------\nDependency: pytest\n\nExecution command: pytest path-to-package\\\\tests\n\n\n'",,"2022/04/26, 22:09:28",546,CUSTOM,45,105,"2023/10/03, 14:50:19",0,7,8,7,22,0,0.0,0.061224489795918324,"2023/10/03, 14:51:44",v2.1.2,0,2,false,,false,false,,,https://github.com/epri-dev,http://www.epri.com,"Palo Alto, CA",,,https://avatars.githubusercontent.com/u/10049875?v=4,,, HOPP,"Assesses optimal designs for the deployment of utility-scale hybrid energy plants, particularly considering wind, solar and storage.",NREL,https://github.com/NREL/HOPP.git,github,,Energy Modeling and Optimization,"2023/10/13, 15:23:46",18,1,8,true,Jupyter Notebook,National Renewable Energy Laboratory,NREL,"Jupyter Notebook,PowerBuilder,Python,Shell",,"b'# Hybrid Optimization and Performance Platform\n\n![CI Tests](https://github.com/NREL/HOPP/actions/workflows/ci.yml/badge.svg)\n\nAs part of NREL\'s [Hybrid Energy Systems Research](https://www.nrel.gov/wind/hybrid-energy-systems-research.html), this\nsoftware assesses optimal designs for the deployment of utility-scale hybrid energy plants, particularly considering 
wind,\nsolar and storage.\n\n## Software requirements\n- Python version 3.5+ 64-bit\n\n## Installing from Source\n1. Using Git, navigate to a local target directory and clone the repository:\n ```\n git clone https://github.com/NREL/HOPP.git\n ```\n\n2. Open a terminal and navigate to /HOPP\n\n3. Create a new virtual environment and change to it. Using Conda and naming it \'hopp\':\n ```\n conda create --name hopp python=3.8 -y\n conda activate hopp\n ```\n\n4. Install requirements:\n ```\n conda install -c conda-forge coin-or-cbc -y\n conda install -c conda-forge shapely==1.7.1 -y\n pip install -r requirements.txt\n ```\n \n Note that if you are on Windows, you will have to manually install Cbc: https://github.com/coin-or/Cbc\n\n5. Run the install script:\n ```\n python setup.py develop\n ```\n\n6. The functions that download resource data require an NREL API key. Obtain a key from:\n \n [https://developer.nrel.gov/signup/](https://developer.nrel.gov/signup/)\n \n\n7. To set up the `NREL_API_KEY` required for resource downloads, you can create an Environment Variable called \n `NREL_API_KEY`. Otherwise, you can keep the key in a new file called "".env"" in the root directory of this project. \n\n Create a file "".env"" that contains the single line:\n ```\n NREL_API_KEY=key\n ```\n\n8. Verify the setup by running an example:\n ```\n python examples/simulate_hybrid.py\n ```\n\n## Installing from Package Repositories\n1. HOPP is available as a PyPI package:\n\n ```\n pip install HOPP\n ```\n\n or as a conda package:\n\n ```\n conda install hopp -c nrel -c conda-forge -c sunpower\n ```\n\n NOTE: If you install from conda you will need to install `global-land-mask` from PyPI:\n\n ```\n pip install global-land-mask\n ```\n\n2. To set up `NREL_API_KEY` for resource downloads, first refer to steps 7 and 8 above. But for the `.env` file method,\n the file should go in the working directory of your Python project, e.g. the directory from which you run `python`.
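\n\n If you prefer to set the key from within Python for the current process only, a minimal standard-library sketch (the value shown is a placeholder, not a real key) is:\n```python\n# Hedged sketch: export NREL_API_KEY for the current process before using\n# the resource-download functions. Replace the placeholder with a key\n# obtained from https://developer.nrel.gov/signup/.\nimport os\n\nos.environ[""NREL_API_KEY""] = ""<your-api-key>""\n```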
\n\n## Examples\n\nThe examples can be run by installing HOPP, then cloning the repo and calling each example file.\n\n##### Basic Simulation\n`python examples/simulate_hybrid.py`\n\n##### Flicker Map\n`python examples/flicker.py`\n\n##### Single Location Analysis\n`python examples/analysis/single_location.py`\n\n##### Wind Layout Optimization\n`python examples/optimization/layout_opt/wind_run.py`\n\n##### Hybrid Layout Optimization\n`python examples/optimization/layout_opt/hybrid_run.py`\n\n## HOPP-demos\n\nThe https://github.com/dguittet/HOPP-demos repo contains a more fully featured example with detailed technical and financial inputs, a few scenarios, and the optimal PV, Wind, and Battery design results.\n\n'",,"2019/11/04, 17:13:57",1451,BSD-3-Clause,28,370,"2023/10/25, 16:58:39",26,167,216,174,0,7,0.6,0.4901315789473685,"2023/09/22, 19:36:01",v.0.1.0,0,10,false,,false,false,NREL/reVX,,https://github.com/NREL,http://www.nrel.gov,"Golden, CO",,,https://avatars.githubusercontent.com/u/1906800?v=4,,, ETM Pro,Professional interface of the Energy Transition model.,quintel,https://github.com/quintel/etmodel.git,github,,Energy Modeling and Optimization,"2023/10/23, 15:11:15",23,0,2,true,Ruby,Quintel,quintel,"Ruby,CoffeeScript,JavaScript,Haml,Sass,Liquid,HTML,TypeScript,Dockerfile,SCSS",pro.energytransitionmodel.com,"b'# The Energy Transition Model (ETM) Professional\n\n\n![](https://docs.energytransitionmodel.com/img/docs/20181031_etmodel_screenshot.png)\n\nThis is the source code of the [ETM Pro](https://pro.energytransitionmodel.com/):\nan online web app that lets you create a future energy scenario for various countries, municipalities, neighbourhoods and more.\nThis software is [Open Source](LICENSE.txt), so you can fork it and alter it at will.\n\nIf you have any questions, please [contact us](http://quintel.com/contact).\n\n## Build Status\n\n### Master\n[![Build Status](https://quintel.semaphoreci.com/badges/etmodel/branches/master.svg)](https://quintel.semaphoreci.com/projects/etmodel)\n\n### Production\n[![Build Status](https://quintel.semaphoreci.com/badges/etmodel/branches/production.svg)](https://quintel.semaphoreci.com/projects/etmodel)\n\n## License\n\nThe ETM Pro is released under the [MIT License](LICENSE.txt).\n\n## Branches\n\n* **master**: Working branch, tracked by [the ETM beta server](https://beta-pro.energytransitionmodel.com/)\n* **production**: Tracks [the ETM production server](https://pro.energytransitionmodel.com/)\n\n## Installation with Docker\n\nNew users are recommended to use Docker to run ETModel. Doing so will avoid the need to install additional dependencies.\n\n1. Get a copy of [ETModel](https://github.com/quintel/etmodel). You may choose to clone the repository using Git, or download a ZIP archive from GitHub.\n\n2. Build the ETModel image:\n\n ```sh\n docker-compose build\n ```\n\n3. Install dependencies and seed the database:\n\n ```sh\n docker-compose run --rm web bash -c \'bin/rails db:drop && bin/setup\'\n ```\n\n This command drops any existing ETModel database; be sure only to run this during the initial setup! This step will also provide you with an e-mail address and password for an administrator account.\n\n4. By default, ETModel will send requests to the beta (staging) version of ETEngine. 
This is used for testing purposes and is more frequently updated than the live (production) version.\n\n #### Run ETEngine locally\n\n If you wish to run [a local copy of ETEngine](https://github.com/quintel/etengine#installation-with-docker), ETModel must be told where to find its API. You must first find your machine\'s local/private IP address; ETModel will use this address to send messages directly to ETEngine, and your browser will use it as well when you are using the ETModel application to create scenarios. To get your IP address, run:\n\n ```sh\n ipconfig getifaddr en0 # on macOS\n hostname -I # on Linux\n ipconfig # on Windows\n ```\n\n * [macOS](https://www.hellotech.com/guide/for/how-to-find-ip-address-on-mac)\n * [Ubuntu](https://help.ubuntu.com/stable/ubuntu-help/net-findip.html.en)\n\n Create a file called `config/settings.local.yml` containing:\n\n ```yaml\n api_url: http://YOUR_IP_ADDRESS:3000\n ```\n\n #### Branches\n\n When running ETEngine locally, be sure to use the same branch or tag for ETModel, ETEngine, and ETSource. You are likely to encounter errors if you fail to do so.\n\n For example, if you wish to run the latest version, all three should be set to the `master` branch. If you wish to run the production release from March 2022, they should all use the same tag:\n\n ```sh\n cd ../etengine && git checkout 2022.03\n cd ../etsource && git checkout 2022.03\n cd ../etmodel && git checkout 2022.03\n ```\n\n5. Launch the containers:\n\n ```\n docker-compose up\n ```\n\n After starting, the application will become available at http://localhost:3001 within a few seconds. This is indicated by the message ""Listening on http://0.0.0.0:3001"".\n\n## Installation without Docker\n\n### Prerequisites\n\nMac users should be able to install the following prerequisites with [Homebrew](brew.sh); Ubuntu users can use `apt-get`.\n* Ruby 2.6.6 and a Ruby version manager such as [rbenv](https://github.com/rbenv/rbenv)\n* Mysql database server\n* Yarn 1.22.5\n\n### Installing\n\n* Pull this repository with `git clone git@github.com:quintel/etmodel.git`\n * **Local Engine** You can communicate with either a local version of ETEngine, or one of our servers, by specifying the `api_url` in `config.yml`. To use a local version, change the standard beta server URL to `http://localhost:` based on which port you are running the Engine on.\n * **Database password** If you added a username and password to your mysql service, please replace the standard login info in `database.yml` with your own credentials.\n\n* Run `bundle install` and `yarn install` to install all the dependencies\n* Create and fill the local database with `rake db:setup` and `RAILS_ENV=test rake db:setup`\n* Fire up your local server with `rails server -p3001`\n* Go to [localhost:3001](http://localhost:3001) and you should see the ETM Pro!\n\n## Admin access\n\nIf you want to get admin access to your own page, the easiest way to do so\nis to create an Admin User through the console and follow the instructions:\n\n rake db:create_admin\n\n## Bugs and feature requests\n\nIf you encounter a bug or if you have a feature request, you can either let us\nknow by creating an [Issue](http://github.com/quintel/etmodel/issues) *or* you\ncan try to fix it yourself and create a\n[pull request](http://github.com/quintel/etmodel/pulls).\n\n## With thanks...\n\nThe Energy Transition Model is built by [Quintel](https://quintel.com/). 
It is made possible by\nopen source software, and assets kindly provided for free by many wonderful people and\norganisations.\n\n#### Software\n\n* [Backbone.js](https://backbonejs.org/)\n* [D3.js](https://d3js.org/)\n* [Ruby on Rails](https://rubyonrails.org/)\n* [jQuery](https://jquery.com/)\n* and [many](https://github.com/quintel/etmodel/blob/master/Gemfile), [many](https://github.com/quintel/etmodel/blob/master/package.json) more ...\n\n#### Icons and images\n\n* [Emily J\xc3\xa4ger, OpenMoji](https://openmoji.org/)\n* [FontAwesome](https://fontawesome.com/)\n* [FreePik, Flaticon](https://www.flaticon.com/)\n* [Phosphor](https://phosphoricons.com/)\n\n#### Wiki\n\n[Wiki](../../wiki)\n'",,"2011/05/03, 08:55:13",4558,MIT,418,8973,"2023/10/23, 15:11:17",101,1027,4045,122,2,4,0.8,0.7138748116696343,"2023/06/06, 14:49:58",2023.06,0,36,false,,false,false,,,https://github.com/quintel,quintel.com,Amsterdam,,,https://avatars.githubusercontent.com/u/2242291?v=4,,, MESMO,"An open-source Python tool for the modeling, simulation and optimization of multi-scale electric and thermal distribution systems along with distributed energy resources.",mesmo-dev,https://github.com/mesmo-dev/mesmo.git,github,"optimal-power-flow,smart-grid,multi-energy-systems",Energy Modeling and Optimization,"2023/06/26, 22:12:23",37,0,18,true,Python,,mesmo-dev,"Python,Dockerfile",https://mesmo-dev.github.io/mesmo,"b'![](docs/assets/mesmo_logo.png)\n\n[![](https://zenodo.org/badge/201130660.svg)](https://zenodo.org/badge/latestdoi/201130660)\n[![](https://img.shields.io/github/release-date/mesmo-dev/mesmo?label=last%20release)](https://github.com/mesmo-dev/mesmo/releases)\n[![](https://img.shields.io/github/last-commit/mesmo-dev/mesmo?label=last%20commit%20%28develop%29)](https://github.com/mesmo-dev/mesmo/commits/develop)\n[![](https://img.shields.io/github/actions/workflow/status/mesmo-dev/mesmo/pythontests.yml?branch=develop)](https://github.com/mesmo-dev/mesmo/actions/workflows/pythontests.yml?query=branch%3Adevelop)\n\n> Work in progress: The repository is under active development and interfaces may change without notice. Please use [GitHub issues](https://github.com/mesmo-dev/mesmo/issues) for raising problems, questions, comments and feedback.\n\n# What is MESMO?\n\nMESMO stands for ""Multi-Energy System Modeling and Optimization"" and is an open-source Python tool for the modeling, simulation and optimization of multi-scale electric and thermal distribution systems along with distributed energy resources (DERs), such as flexible building loads, electric vehicle (EV) chargers, distributed generators (DGs) and energy storage systems (ESS).\n\n## Features\n\nMESMO implements 1) non-linear models for simulation-based analysis and 2) convex models for optimization-based analysis of electric grids, thermal grids and DERs. Through high-level interfaces, MESMO enables modeling operation problems for both traditional scenario-based simulation as well as optimization-based decision support. An emphasis of MESMO is on the modeling of multi-energy systems, i.e. the coupling of multi-commodity and multi-scale energy systems.\n\n1. **Electric grid modeling**\n - Simulation: Non-linear modeling of steady-state nodal voltage / branch flows / losses, for multi-phase / unbalanced AC networks.\n - Optimization: Linear approximate modeling via global or local approximation, for multi-phase / unbalanced AC networks.\n2. 
**Thermal grid modeling**\n - Simulation: Non-linear modeling of steady-state nodal pressure head / branch flow / pump losses, for radial district heating / cooling systems.\n - Optimization: Linear approximate modeling via global or local approximation, for radial district heating / cooling systems.\n3. **Distributed energy resource (DER) modeling**\n - Simulation & optimization: Time series models for non-dispatchable / fixed DERs.\n - Optimization: Linear state-space models for dispatchable / flexible DERs.\n - Currently implemented DER models: Conventional fixed loads, generic flexible loads, flexible thermal building loads, non-dispatchable generators, controllable electric / thermal generators, electric / thermal energy storage systems, combined heat-and-power plants.\n4. **Solution interfaces**\n - Simulation: Solution of non-linear power flow problems for electric / thermal grids.\n - Optimization: Solution of convex optimization problems for electric / thermal grids and DERs, through third-party numerical optimization solvers.\n - Generic optimization problem interface: Supports defining custom constraints and objective terms to augment the built-in models. Enables retrieving duals / DLMPs for the study of decentralized / distributed control architectures for energy systems.\n - High-level problem interfaces: Nominal operation problem for simulation-based studies; Optimal operation problem for optimization-based studies.\n\n## Documentation\n\nThe documentation is located at [mesmo-dev.github.io/mesmo](https://mesmo-dev.github.io/mesmo).\n\n## Installation\n\nMESMO has not yet been deployed to Python `pip` / `conda` package indexes, but can be installed in a local development environment as follows:\n\n1. Install a `conda`-based Python distribution\xc2\xb9 such as [Anaconda](https://www.anaconda.com/distribution/), [Miniconda](https://docs.conda.io/en/latest/miniconda.html) or [Miniforge](https://github.com/conda-forge/miniforge).\n2. Clone or download the repository. Ensure that the `cobmo` submodule directory is loaded as well.\n3. In a `conda`-enabled shell (e.g. 
Anaconda Prompt), run:\n - `cd path_to_mesmo_repository`\n - `conda create -n mesmo -c conda-forge python=3.10 contextily cvxpy numpy pandas scipy`\n - `conda activate mesmo`\n - `python development_setup.py`\n - On Intel CPUs\xc2\xb2: `conda install -c conda-forge ""libblas=*=*mkl""`\n\nMESMO ships with [HiGHS](https://highs.dev/) as default optimization solver\xc2\xb3, but also supports [Gurobi](http://www.gurobi.com/) and [any CVXPY-supported solvers](https://www.cvxpy.org/tutorial/advanced/index.html#choosing-a-solver).\n\nFor notes \xc2\xb9/\xc2\xb2/\xc2\xb3 and alternative installation guide, see [docs/installation.md](docs/installation.md).\n\n## Contributing\n\nIf you are keen to contribute to this project, please see [docs/contributing.md](./docs/contributing.md).\n\n## Publications\n\nInformation on citing MESMO and a list of related publications is available at [docs/publications.md](docs/publications.md).\n\n## Acknowledgements\n\n- MESMO is developed in collaboration between [TUMCREATE](https://www.tum-create.edu.sg/), the [Institute for High Performance Computing, A*STAR](https://www.a-star.edu.sg/ihpc) and the [Chair of Renewable and Sustainable Energy Systems, TUM](https://www.ei.tum.de/en/ens/).\n- Sebastian Troitzsch implemented the initial version of MESMO and maintains this repository.\n- Sarmad Hanif and Kai Zhang developed the underlying electric grid modeling, fixed-point power flow solution and electric grid approximation methodologies.\n- Arif Ahmed implemented the implicit Z-bus power flow solution method & overhead line type definitions.\n- Mischa Grussmann developed the thermal grid modeling and approximation methodologies.\n- Verena Kleinschmidt implemented several multi-energy DER models, such as the heating plant and CHP plant models. \n- Sebastian Troitzsch and Tom Schelo implemented the optimization problem class.\n- This work was financially supported by the Singapore National Research Foundation under its Campus for Research Excellence And Technological Enterprise (CREATE) programme.\n'",",https://zenodo.org/badge/latestdoi/201130660","2019/08/07, 21:30:17",1539,MIT,5,1688,"2023/09/10, 19:30:53",13,7,16,2,45,0,0.2857142857142857,0.02179656538969621,"2021/11/11, 05:49:21",0.5.0,0,3,false,,false,false,,,https://github.com/mesmo-dev,,,,,https://avatars.githubusercontent.com/u/86105350?v=4,,, ASSUME,"An open-source toolbox for agent-based simulations of European electricity markets, with a primary focus on the German market setup.",assume-framework,https://github.com/assume-framework/assume.git,github,,Energy Modeling and Optimization,"2023/10/25, 16:29:02",10,0,10,true,Python,ASSUME,assume-framework,"Python,Dockerfile",https://assume.readthedocs.io,"b'# ASSUME: Agent-Based Electricity Markets Simulation Toolbox\n\n![Lint Status](https://github.com/assume-framework/assume/actions/workflows/lint-pytest.yaml/badge.svg)\n[![Code Coverage](https://codecov.io/gh/assume-framework/assume/branch/main/graph/badge.svg?token=CZ4FO7P57H)](https://codecov.io/gh/assume-framework/assume)\n[![DOI](https://zenodo.org/badge/DOI/10.5281/zenodo.8088760.svg)](https://doi.org/10.5281/zenodo.8088760)\n[![Open Tutorials In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1LISiM1QvDIMXU68pJH-NqrMw5w7Awb24?usp=sharing)\n\n**ASSUME** is an open-source toolbox for agent-based simulations of European electricity markets, with a primary focus on the German market setup. 
Developed as an open-source model, its primary objectives are to ensure usability and customizability for a wide range of users and use cases in the energy system modeling community.\n\n## Introduction\n\nA unique feature of the ASSUME toolbox is its integration of **Deep Reinforcement Learning** methods into the behavioral strategies of market agents. The model offers various predefined agent representations for both the demand and generation sides, which can be used as plug-and-play modules, simplifying the development of reinforcement learning strategies. This setup enables research into new market designs and dynamics in energy markets.\n\n\n## Documentation\n\n- [User Documentation](https://assume.readthedocs.io/en/latest/)\n- [Installation Guide](https://assume.readthedocs.io/en/latest/installation.html)\n\n## Installation\n\nYou can install ASSUME using pip. Choose the appropriate installation method based on your needs:\n\n### Using pip\n\nTo install the core package:\n\n```bash\npip install assume-framework\n```\n\nTo install with testing capabilities:\n\n```bash\npip install assume-framework[test]\n```\n\n### Timescale Database and Grafana Dashboards\n\nIf you want to benefit from a supported database and integrated Grafana dashboards for scenario analysis, you can use the provided Docker Compose file.\n\nFollow these steps:\n\n1. Clone the repository and navigate to its directory:\n\n```bash\ngit clone https://github.com/assume-framework/assume.git\ncd assume\n```\n\n2. Start the database and Grafana using the following command:\n\n```bash\ndocker-compose up -d\n```\n\nThis will launch a container for TimescaleDB and Grafana with preconfigured dashboards for analysis. You can access the Grafana dashboards at `http://localhost:3000`.\n\n### Using Learning Capabilities\n\nIf you intend to use the reinforcement learning capabilities of ASSUME and train your agents, make sure to install Torch. Detailed installation instructions can be found [here](https://pytorch.org/get-started/locally/).\n\n\n\n## Trying out ASSUME and the provided Examples\n\nTo ease your way into ASSUME, we provide some examples and tutorials. The former are helpful if you would like to get an impression of how ASSUME works, and the latter introduce you to the development of ASSUME.\n\n### The Tutorials\n\nThe tutorials run completely detached from your own machine, on Google Colab. They provide code snippets and tasks that show you how you can work with the software package on your own. We have two tutorials prepared: one for introducing a new unit and one for getting reinforcement learning ready on ASSUME.\n\nHow to configure a new unit in ASSUME?\n**Coming Soon**\n\nHow to introduce reinforcement learning to ASSUME?\n\n[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1LISiM1QvDIMXU68pJH-NqrMw5w7Awb24?usp=sharing)\n\n\n\n### The Examples\n\nTo explore the provided examples, follow these steps:\n\n1. Clone the repository and navigate to its directory:\n\n```bash\ngit clone https://github.com/assume-framework/assume.git\ncd assume\n```\n\n2. Quick Start:\n\nThere are three ways to run a simulation:\n\n- Local:\n\n```bash\npython examples/examples.py\n```\n\n- Using the provided Docker setup:\n\nIf you have installed Docker and set up the Docker Compose file previously, you can select \'timescale\' in `examples.py` before running the simulation. 
This will save the simulation results in a Timescale database, and you can access the Dashboard at `http://localhost:3000`.\n\n- Using the CLI to run simulations:\n\n```bash\nassume -s example_01b -db ""postgresql://assume:assume@localhost:5432/assume""\n```\n\nFor additional CLI options, run `assume -h`.\n\n## Development\n\nIf you\'re contributing to the development of ASSUME, follow these steps:\n\n1. Install pre-commit:\n\n```bash\npip install pre-commit\npre-commit install\n```\n\nTo run pre-commit checks directly, use:\n\n```bash\npre-commit run --all-files\n```\n\n## Creating Documentation\n\nFirst, create an environment that includes the documentation dependencies:\n\n```bash\nconda env create -f environment_docs.yaml\n```\n\nTo generate or update the automatically created docs in `docs/source/assume*`, run:\n\n```bash\nsphinx-apidoc -o docs/source -Fa assume\n```\n\nTo create and serve the documentation locally, use:\n\n```bash\nmake -C docs html && python -m http.server --directory docs/build/html\n```\n\n## Contributors and Funding\n\nThe project is developed by a collaborative team of researchers from INATECH at the University of Freiburg, IISM at Karlsruhe Institute of Technology, Fraunhofer Institute for Systems and Innovation Research, Fraunhofer Institution for Energy Infrastructures and Geothermal Energy, and FH Aachen - University of Applied Sciences. Each contributor brings valuable expertise in electricity market modeling, deep reinforcement learning, demand side flexibility, and infrastructure modeling.\n\nASSUME is funded by the Federal Ministry for Economic Affairs and Climate Action (BMWK). We are grateful for their support in making this project possible.\n\n## License\n\nCopyright 2022-2023 [ASSUME developers](https://assume.readthedocs.io/en/latest/developers.html).\n\nASSUME is licensed under the [GNU Affero General Public License v3.0](./LICENSE). This license is a strong copyleft license that requires that any derivative work be licensed under the same terms as the original work. It is approved by the [Open Source Initiative](https://opensource.org/licenses/AGPL-3.0).\n'",",https://doi.org/10.5281/zenodo.8088760","2023/04/19, 09:20:48",189,AGPL-3.0,460,461,"2023/10/25, 16:29:10",9,161,213,213,0,2,1.0,0.5177111716621253,"2023/09/30, 09:33:39",v0.2.0,0,8,false,,false,false,,,https://github.com/assume-framework,https://assume-project.de/,,,,https://avatars.githubusercontent.com/u/131251735?v=4,,, NemoMod.jl,"A high performance, open-source energy system optimization modeling tool developed in Julia.",sei-international,https://github.com/sei-international/NemoMod.jl.git,github,,Energy Modeling and Optimization,"2023/10/21, 15:16:36",30,0,9,true,Julia,Stockholm Environment Institute,sei-international,Julia,,"b""![|nemo logo](docs/src/assets/nemo_logo_small.png)\n\n# NEMO: Next Energy Modeling system for Optimization\n\n[![](https://img.shields.io/badge/docs-stable-blue.svg)](https://sei-international.github.io/NemoMod.jl/stable)\n[![](https://img.shields.io/badge/docs-dev-blue.svg)](https://sei-international.github.io/NemoMod.jl/dev)\n\nNEMO is a high performance, open-source energy system optimization modeling tool developed in [Julia](https://julialang.org/). It is intended for users who seek substantial optimization capabilities without the financial burden of proprietary software or the performance bottlenecks of common open-source alternatives. 
Key features of NEMO include:\n\n- Least-cost optimization of energy supply and demand\n- Support for multiple regions and regional trade\n- Modeling of energy storage\n- Nodal network simulations and modeling of power and pipeline flow\n- Modeling of emissions and emission constraints (including carbon pricing and pollutant externalities)\n- Modeling of renewable energy targets\n- Support for simulating selected years in a modeling period\n- Parallel processing\n- Support for multiple solvers: [Cbc](https://github.com/coin-or/Cbc), [CPLEX](https://www.ibm.com/analytics/cplex-optimizer), [GLPK](https://www.gnu.org/software/glpk/), [Gurobi](https://www.gurobi.com/), [HiGHS](https://highs.dev/), [Mosek](https://www.mosek.com/), and [Xpress](https://www.fico.com/en/products/fico-xpress-optimization)\n- Optimization warm starts\n- [SQLite](https://www.sqlite.org/) data store\n- Numerous performance tuning options\n\nNEMO can be used in command line mode or with the [Low Emissions Analysis Platform](https://leap.sei.org/) (LEAP - formerly the Long-range Energy Alternatives Planning system) as a graphical user interface.\n\nDevelopment of NEMO is led by the Energy Modeling Program at the [Stockholm Environment Institute (SEI)](https://www.sei.org/).\n\n# Getting started with NEMO\n\nFor instructions on installing and using NEMO, see the [documentation](https://sei-international.github.io/NemoMod.jl/).\n\n# Contributing to NEMO\n\nIf you are interested in contributing to NEMO, please contact [Jason Veysey](https://www.sei.org/people/jason-veysey/).\n\n# Licensing and attribution\n\nNEMO's Julia code is made available under the Apache License, Version 2.0. See [LICENSE.md](LICENSE.md) for details, including attribution requirements and limitations on use.\n\nThe initial versions of NEMO were informed by version 2017_11_08 of the [Open Source Energy Modelling System (OSeMOSYS)](https://github.com/OSeMOSYS/OSeMOSYS), which was also released under the Apache License, Version 2.0.\n\n# For more information\n\nThe NEMO team includes several SEI staff: [Jason Veysey](https://www.sei.org/people/jason-veysey/), [Charlie Heaps](https://www.sei.org/people/charles-heaps/), [Eric Kemp-Benedict](https://www.sei.org/people/eric-kemp-benedict/), and [Taylor Binnington](https://www.sei.org/people/taylor-binnington/). Please feel free to contact any of us for more information or if you have questions.\n""",,"2018/06/22, 06:49:34",1951,CUSTOM,12,430,"2023/07/27, 12:16:46",1,3,7,1,90,1,0.0,0.0,"2022/08/22, 23:01:20",v1.9,0,1,false,,false,false,,,https://github.com/sei-international,http://www.sei.org/,,,,https://avatars.githubusercontent.com/u/14920927?v=4,,, Electricity Maps,A real-time visualization of the CO2 emissions from electricity consumption.,tmrowco,https://github.com/electricitymaps/electricitymaps-contrib.git,github,"sustainability,data-visualization,climate-change,hacktoberfest",Energy Distribution and Grids,"2023/10/25, 17:36:26",3088,0,567,true,Python,Electricity Maps,electricitymaps,"Python,HTML,TypeScript,JavaScript,Ruby,CSS,Swift,Earthly,Java,Dockerfile,Shell",https://app.electricitymaps.com,"b'

\n# Electricity Maps\n\nA real time and historical visualisation of the Greenhouse Gas Intensity (in terms of CO2 equivalent) of electricity production and consumption around the world.\n\napp.electricitymaps.com \xc2\xbb\n
\n\n![image](web/public/images/electricitymap_social_image.png#gh-light-mode-only)\n![image](web/public/images/electricitymap_social_image_dark.png#gh-dark-mode-only)\n\n## Introduction\n\nThis project aims to provide a free, open-source, and transparent visualisation of the carbon intensity of electricity consumption around the world.\n\nWe fetch the raw production data from public, free, and official sources. They include official government and transmission system operators\' data. We then run [our flow-tracing algorithm](https://www.electricitymaps.com/blog/flow-tracing) to calculate the actual carbon intensity of a country\'s electricity consumption.\n\n_Try it out at [app.electricitymaps.com](https://app.electricitymaps.com), or download the app on [Google Play](https://play.google.com/store/apps/details?id=com.tmrow.electricitymap&utm_source=github) or [App store](https://itunes.apple.com/us/app/electricity-map/id1224594248&utm_source=github)._\n\n## Contributing\n\nThe Electricity Maps app is a community project and we welcome contributions from anyone!\n\nWe are always looking for help to build parsers for new countries, fix broken parsers, improve the frontend app, improve accuracy of data sources, discuss new potential data sources, update region capacities, and much more.\n\nRead our [contribution guidelines](/CONTRIBUTING.md) to get started.\n\n## Community & Support\n\nUse these channels to be part of the community, ask for help while using Electricity Maps, or just learn more about what\'s going on:\n\n- [Slack](https://slack.electricitymaps.com): This is the main channel to join the community. You can ask for help, showcase your work, and stay up to date with everything happening.\n- [GitHub Issues](https://github.com/electricitymaps/electricitymaps-contrib/issues): Raise any issues you encounter with the data or bugs you find while using the app.\n- [GitHub Discussions](https://github.com/electricitymaps/electricitymaps-contrib/discussions): Join discussions and share new ideas for features.\n- [GitHub Wiki](https://github.com/electricitymaps/electricitymaps-contrib/wiki): Learn more about methodology, guides for how to set up development environment, etc.\n- [FAQ](https://app.electricitymaps.com/FAQ): Get your questions answered in our FAQ.\n- [Our Commercial Website](https://electricitymaps.com/): Learn more about how you or your company can use the data too.\n- [Our Blog](https://electricitymaps.com/blog/): Read about the green transition and how Electricity Maps is helping to accelerate it.\n- [Twitter](https://twitter.com/electricitymaps): Follow for latest news\n- [LinkedIn](https://www.linkedin.com/company/electricitymaps): Follow for latest news\n\n## License\n\nThis repository is licensed under GNU-AGPLv3 since v1.5.0, find our license [here](https://github.com/electricitymaps/electricitymaps-contrib/blob/master/LICENSE.md). Contributions prior to commit [cb9664f](https://github.com/electricitymaps/electricitymaps-contrib/commit/cb9664f43f0597bedf13e832047c3fc10e67ba4e) were licensed under [MIT license](https://github.com/electricitymaps/electricitymaps-contrib/blob/master/LICENSE_MIT.txt)\n\n## Frequently asked questions\n\n_We also have a lot more questions answered on [app.electricitymaps.com/faq](https://app.electricitymaps.com/faq)!_\n\n**Where does the data come from?**\nThe data comes from many different sources. 
You can check them out [here](https://github.com/electricityMaps/electricitymaps-contrib/blob/master/DATA_SOURCES.md)\n\n**Why do you calculate the carbon intensity of _consumption_?**\nIn short, citizens should not be responsible for the emissions associated with all the products they export, but only for what they consume.\nConsumption-based accounting (CBA) is a very important aspect of climate policy and allows assigning responsibility to consumers instead of producers.\nFurthermore, this method is robust to governments relocating dirty production to neighboring countries in order to green their image while still importing from them.\nYou can read more in our blog post [here](https://electricitymaps.com/blog/flow-tracing/).\n\n**Why don\'t you show emissions per capita?**\nA country that has few inhabitants but a lot of factories will appear high on CO2/capita.\nThis means you can ""trick"" the numbers by moving your factory abroad and importing the produced _good_ instead of the electricity itself.\nThat country now has a low CO2/capita number because we only count CO2 for electricity (not for imported/exported goods).\nThe CO2/capita metric, by involving the size of the population, and by not integrating all CO2 emission sources, is thus an incomplete metric.\nCO2 intensity, on the other hand, describes where the best place to put that factory is (and when it is best to use electricity), enabling proper decisions.\n\n**CO2 emission factors look high \xe2\x80\x94 what do they cover exactly?**\nThe carbon intensity of each type of power plant takes into account emissions arising from the whole life cycle of the plant (construction, fuel production, operational emissions and decommissioning). Read more on the [Emissions Factor Wiki page](https://github.com/electricitymaps/electricitymaps-contrib/wiki/Emission-factors).\n\n**How can I get access to historical data or the live API?**\nAll this and more can be found **[here](https://electricitymaps.com/)**.\nYou can also visit our **[data portal](https://www.electricitymaps.com/data-portal)** to download historical datasets.\n'",,"2016/05/21, 16:36:17",2713,AGPL-3.0,780,4647,"2023/10/25, 17:36:26",234,3275,5602,1247,0,34,1.8,0.6313217169570761,"2023/10/23, 15:27:01",v1.79.0,4,318,false,,true,true,,,https://github.com/electricitymaps,https://www.electricitymaps.com,Denmark,,,https://avatars.githubusercontent.com/u/24733017?v=4,,, Open Grid Emissions Initiative,"Seeks to fill a critical need for high-quality, publicly-accessible, hourly grid emissions data that can be used for GHG accounting, policymaking, academic research, and energy attribute certificate markets.",singularity-energy,https://github.com/singularity-energy/open-grid-emissions.git,github,"carbon-accounting,climate,electricity,emissions,ghg,open-data,eia,epa,power-systems,python,ghg-emissions,carbon-emissions,climate-change,decarbonization",Energy Distribution and Grids,"2023/08/22, 16:30:37",50,0,22,true,Python,Singularity Energy,singularity-energy,"Python,Jupyter Notebook",,"b'# Open Grid Emissions Initiative\n[![Project Status: Active \xe2\x80\x93 The project has reached a stable, usable state and is being actively developed.](https://www.repostatus.org/badges/latest/active.svg)](https://www.repostatus.org/#active)\n[![Code style: black](https://img.shields.io/badge/code%20style-black-000000.svg)](https://github.com/psf/black)\n[![DOI](https://zenodo.org/badge/DOI/10.5281/zenodo.7062459.svg)](https://doi.org/10.5281/zenodo.7062459)\n\nThe Open Grid Emissions Initiative seeks 
to fill a critical need for high-quality, publicly-accessible, hourly grid emissions data that can be used for GHG accounting, policymaking, academic research, and energy attribute certificate markets. The initiative includes this repository of open-source grid emissions data processing tools that use peer-reviewed, well-documented, and validated methodologies to create the accompanying public dataset of hourly, monthly, and annual U.S. electric grid generation, GHG, and air pollution data.\n\nPlease check out [our documentation](https://docs.singularity.energy/docs/open-grid-emissions-docs) for more details about the Open Grid Emissions methodology.\n\nThe Open Grid Emissions Dataset can be [downloaded here](https://singularity.energy/open-grid-emissions). An archive of previous versions of the dataset and intermediate data outputs (for research and validation purposes) can be found on [Zenodo](https://zenodo.org/communities/singularity-energy?page=1&size=20).\n\n## Installing and running the data pipeline\n\nTo install and run the pipeline on your computer, open an Anaconda prompt, navigate to the folder where you want to save the repository, and run the following commands:\n\n```\nconda install git\ngit clone https://github.com/singularity-energy/open-grid-emissions.git\nconda update conda\ncd open-grid-emissions\nconda env create -f environment.yml\nconda activate open_grid_emissions\ncd src\npython data_pipeline.py --year 2021\n```\n\nA more detailed walkthrough of these steps can be found below in the ""Development Setup"" section.\n\n## Data Availability and Release Schedule\nThe latest release includes data for years 2019-2021 covering the contiguous United States, Alaska, and Hawaii. In future releases, we plan to expand the geographic coverage to additional U.S. territories (dependent on data availability), and to expand the historical coverage of the data. \n\nParts of the input data used for the Open Grid Emissions dataset are released by the U.S. Energy Information Administration in the autumn following the end of each year (2022 data should be available in autumn 2023). Each release will include the most recent year of available data as well as updates of all previously available years based on any updates to the OGEI methodology. 
All previous versions of the data will be archived on Zenodo.\n\nUpdated datasets will also be published whenever a new version of the open-grid-emissions repository is released.\n\n## Contribute\nThere are many ways that you can contribute!\n - Tell us how you are using the dataset or python tools\n - Request new features or data outputs by submitting a feature request or emailing us at <>\n - Tell us how we can make the datasets even easier to use\n - Ask a question about the data or methods in our [discussion forum](https://github.com/singularity-energy/open-grid-emissions/discussions)\n - [Submit an issue](https://github.com/singularity-energy/open-grid-emissions/issues) if you\'ve identified a way the methods or assumptions could be improved\n - Contribute your subject matter expertise to the discussion about [open issues and questions](https://github.com/singularity-energy/open-grid-emissions/issues?q=is%3Aissue+is%3Aopen+label%3Aquestion)\n - Submit a pull request to help us fix open issues\n\n# Repository Structure\n### Modules\n- `column_checks`: functions that check that all data outputs have the correct column names\n- `data_pipeline`: main script for running the data pipeline from start to finish\n- `download_data`: functions that download data from the internet\n- `data_cleaning`: functions that clean loaded data\n- `eia930`: functions for cleaning and formatting EIA-930 data\n- `emissions`: functions used for imputing emissions data\n- `filepaths`: Used to identify where repository files are located on the user\'s computer\n- `gross_to_net_generation`: Functions for identifying subplants and gross to net generation conversion factors\n- `impute_hourly_profiles`: functions related to assigning an hourly profile to monthly data\n- `load_data`: functions for loading data from downloaded files\n- `output_data`: functions for writing intermediate and final data to csvs\n- `validation`: functions for testing and validating data outputs\n- `visualization`: functions for visualizing data in notebooks\n\n### Notebooks\nNotebooks are organized into directories based on their purpose:\n- `explore_data`: notebooks used for exploring data outputs and results\n- `explore_methods`: notebooks that can be used to explore specific methods step-by-step\n- `manual_data`: notebooks that are used to create/update certain files in `data/manual`\n- `validation`: notebooks related to validating results\n- `visualization`: notebooks used to visualize data\n- `work_in_progress`: temporary notebooks being used for development purposes on specific branches\n\n### Data Structure\n- `data/downloads` contains all files that are downloaded by functions in `load_data`\n- `data/manual` contains all manually-created files, including the egrid static tables\n- `data/outputs` contains intermediate outputs from the data pipeline... any files created by our code that are not final results\n- `data/results` contains all final output files that will be published\n\n# Development Setup\n\nIf you would like to run the code on your own computer and/or contribute updates to the code, the following steps can help get you started.\n\n## Users unfamiliar with git / python\n\n### Install conda and python\n\nWe suggest using miniconda or Anaconda to manage the packages needed to run the Open Grid Emissions code. Anaconda and Miniconda install a similar environment, but Anaconda installs more packages by default and Miniconda installs them as needed. 
These can be downloaded from [miniconda](https://docs.conda.io/en/latest/miniconda.html) or [Anaconda](https://www.anaconda.com/products/distribution)\n\n### Install a code editor\n\nIf you want to edit the code and do not already have an integrated development environment (IDE) installed, one good option is Visual Studio Code (download: https://code.visualstudio.com/). \n\n### Install and set up the git software manager\n\nIn order to download the repository, you will need to use git. You can either install Git Bash from https://git-scm.com/downloads, or you can install it using conda. To do so, after installing Anaconda or Miniconda, open an Anaconda Command Prompt (Windows) or Terminal.app (Mac) and type the following command:\n\n```\nconda install git\n```\n\nThen you will need to set up git following these instructions: https://docs.github.com/en/get-started/quickstart/set-up-git\n\n## Once you have git and conda installed\n\n### Download the codebase to a local repository\n\nUsing Anaconda command prompt or Git Bash, use the `cd` and `mkdir` commands to create and/or enter the directory where you would like to download the code (e.g. ""Users/myusername/GitHub""). Then run:\n\n```\ngit clone https://github.com/singularity-energy/open-grid-emissions.git\n```\n\n### Setup the conda environment\n\nOpen anaconda prompt, use `cd` to navigate to the directory where your local files are stored (e.g. ""GitHub/open-grid-emissions""), and then run:\n\n```\nconda update conda\nconda env create -f environment.yml\n```\n\nInstallation requires that the conda channel-priority be set to ""flexible"". This is the default behavior, \nso if you\'ve never manually changed this, you shouldn\'t have to worry about it. However, \nif you receive an error message like ""Found conflicts!"" when trying to install the environment,\ntry setting your channel priority to flexible by running the following command:\n`conda config --set channel_priority flexible` and then re-running the above commands.\n\n## Running the complete data pipeline\n\nIf you would like to run the full data pipeline to generate all intermediate outputs and results files, open anaconda prompt, navigate to `open-grid-emissions/src`, and run the following (replacing 2021 with whichever year you want to run):\n\n```\nconda activate open_grid_emissions\npython data_pipeline.py --year 2021\n```\n\n## Keeping the code updated\n\nFrom time to time, the code will be updated on GitHub. To ensure that you are keeping your local version of the code up to date, open git bash and follow these steps:\n```\n# change the directory to where ever your local git repository is saved\n# after hitting enter, it should show the name of the git branch (e.g. ""(main)"")\ncd GitHub/open-grid-emissions \n\n# save any changes that you might have made locally to your copy of the code\ngit add .\n\n# fetch and merge the updated code from github\ngit pull origin main\n```\n\n# Contribution Guidelines\n\nIf you plan on contributing edits to the codebase that will be merged into the main branch, please follow these best practices:\n\n1. Please do not make edits directly to the main branch. Any new features or edits should be completed in a new branch. To do so, open git bash, navigate to your local repo (e.g. `cd GitHub/open-grid-emissions`), and create a new branch, giving it a descriptive name related to the edit you will be doing:\n\n\t`git checkout -b branch_name`\n\n2. 
As you code, it is a good practice to \'save\' your work frequently by opening git bash, navigating to your local repo (`cd GitHub/open-grid-emissions`), making sure that your current feature branch is active (you should see the feature name in parentheses next to the command line), and running \n\t\n\t`git add .`\n\n3. You should commit your work to the branch whenever you have working code or whenever you stop working on it using:\n\n\t`git add .` \n\t`git commit -m ""short message about updates""`\n\n4. Once you are done with your edits, save and commit your code using step #3 and then push your changes:\n\n\t`git push`\n\n5. Now open the GitHub repo web page. You should see the branch you pushed up in a yellow bar at the top of the page with a button to ""Compare & pull request"". \n\t- Click ""Compare & pull request"". This will take you to the ""Open a pull request"" page. \n\t- From here, you should write a brief description of what you actually changed. \n\t- Click ""Create pull request""\n\t- The changes will be reviewed and discussed. Once any edits have been made, the code will be merged into the main branch.\n\n## Conventions and standards\n- We generally follow the naming conventions used by the Public Utility Data Liberation Project: https://catalystcoop-pudl.readthedocs.io/en/latest/dev/naming_conventions.html\n- Functions should include descriptive docstrings (using the Google style guide https://google.github.io/styleguide/pyguide.html#383-functions-and-methods), inline comments should be used to describe individual steps, and variable names should be made descriptive (e.g. `cems_plants_with_missing_co2_data` not `cems_missing` or `cpmco2`)\n- All pandas merge operations should include the `validate` parameter to ensure that unintentional duplicate entries are not created (https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.merge.html)\n- All pandas groupby operations should include the `dropna=False` parameter so that data with missing groupby keys are not unintentionally dropped from the data.\n- All code should be formatted using `black`\n- Clear all outputs from notebooks before committing your work. \n- Any manual changes to reported categorical data, conversion factors, or manual data mappings should be loaded from a .csv file `data/manual` rather than stored in a dictionary or variable in the code.\n'",",https://doi.org/10.5281/zenodo.7062459,https://zenodo.org/communities/singularity-energy?page=1&size=20","2022/02/17, 20:59:40",615,MIT,129,616,"2023/08/22, 16:30:39",93,128,206,47,64,6,1.0,0.4023255813953488,"2023/03/02, 17:32:25",0.2.2,0,4,false,,false,false,,,https://github.com/singularity-energy,https://singularity.energy,United States of America,,,https://avatars.githubusercontent.com/u/49131244?v=4,,, gridemissions,Displays the hourly carbon footprint of the US electricity system.,jdechalendar,https://github.com/jdechalendar/gridemissions.git,github,,Energy Distribution and Grids,"2023/08/04, 22:16:02",23,0,5,true,Python,,,"Python,Jupyter Notebook,Shell,Makefile,Batchfile",,"b'# gridemissions: Tools for power sector emissions tracking\n\n\nThe tools in this repository power the visualization at [energy.stanford.edu/gridemissions](https://energy.stanford.edu/gridemissions), updated hourly. Associated datasets on electricity and emissions are made publicly available. In addition to tools to create the data, the `gridemissions` package provides a module to retrieve data from the API and methods to load and manipulate the data. 
This README file serves as technical documentation for the tools in this repository.\n\n# Contents\n* [How the datasets are created](https://github.com/jdechalendar/gridemissions#how-the-datasets-are-created)\n* [Retrieving data from the API](https://github.com/jdechalendar/gridemissions#Retrieving-data-from-the-API)\n* [FAQ](https://github.com/jdechalendar/gridemissions#FAQ)\n* [Installation](https://github.com/jdechalendar/gridemissions#Installation)\n* [The `GraphData` class](https://github.com/jdechalendar/gridemissions#The-GraphData-class)\n\n\n## How the datasets are created\nTwo main operations are needed to create the datasets for the visualization at [energy.stanford.edu/gridemissions](https://energy.stanford.edu/gridemissions).\n\n### 1. Consumption-based emissions\nElectric grid data on production, consumption and exchanges, along with the emissions associated with electricity production, are used to compute the emissions embodied in electricity **consumption**. By default, we are using IPCC Life-Cycle Assessment emissions factors to compute the emissions associated with generating electricity from different sources, so the CO2 data we release are in units of CO2-eq. If you wish to use different emissions factors, or factors for other quantities (e.g. SO2, NOx, PM2.5, or H2O), you can use the tools in this package to generate corresponding consumption-based data. A tutorial on how to do this will be available soon.\n\nFor more on this operation, see ""Tracking emissions in the US electricity system"", by Jacques A. de Chalendar, John Taggart and Sally M. Benson. Proceedings of the National Academy of Sciences Dec 2019, 116 (51) 25497-25502; DOI: 10.1073/pnas.1912950116\n\n### 2. Physics-based data reconciliation\nRaw electric grid data typically have errors and inconsistencies, but we need ""clean"" data to compute consumption-based emissions. We use an optimization-based algorithm to reconcile the raw data while enforcing certain physical constraints, e.g. conservation of energy. We publish both the raw and reconciled electric data that we use.\n\nFor more on this operation, see ""Physics-informed data reconciliation framework for real-time electricity and emissions tracking"", by Jacques A. de Chalendar and Sally M. Benson. Applied Energy Dec 2021; DOI: 10.1016/j.apenergy.2021.117761 [ArXiv preprint](https://arxiv.org/abs/2103.05663).\n\n## Retrieving data from the API\nFor a quick introduction to the package, see the notebooks in the `notebooks/demo` folder. The [API Demo.ipynb](https://colab.research.google.com/drive/1HYHqiA2iA-vVMuqFHrKtUUkdPLN5UJYS) notebook can also be loaded on Colab and shows how data can be retrieved from the API and then manipulated using the `GraphData` methods. Note that only one month of historical data is available from the API. To retrieve earlier data, download datasets in bulk from [here](https://gridemissions.jdechalendar.su.domains/#/code)\n\nA download script is provided to quickly download data and can be used after installing the package (see below):\n```bash\n# Downloads data for CAISO for a year\ngridemissions_download --variable=co2 --region=CISO --start=20190101 --end=20200101\n\n# Print help\ngridemissions_download -h\n\n# Download one of the datasets in bulk\ngridemissions_download --variable co2 --all\n```\nNote that the data are downloaded to the `DATA_PATH` you configured during setup (see Installation notes below).\n\nYou can also use the `api` module to retrieve data programmatically. 
This is what the `gridemissions_download` script uses under the hood.\n```python\nfrom gridemissions import api\n\n# Download CO2 emissions embodied in electricity consumption in the California ISO (CISO) for a year:\ndata = api.retrieve(dataset=""co2"", region=""CISO"", start=""20190101"", end=""20200101"", field=""D"")\n\n# Download electricity generated by the Electric Reliability Council of Texas (ERCOT) and in the Bonneville Power Administration (BPAT) for a year:\ndata = api.retrieve(dataset=""elec"", region=[""ERCOT"", ""BPAT""], start=""20190101"", end=""20200101"", field=""NG"")\n```\nBy default, the `api.retrieve` function returns data in a pandas DataFrame. We also provide an abstraction to load and manipulate data called `GraphData`. This object is a light wrapper around a pandas DataFrame that provides convenient functionality to represent and access data from different fields on a graph. More on that below.\n\n### Data naming conventions\nIn the datasets generated from this work, we use the following conventions for naming columns (see `eia_api.py`). Replace `%s` in the following dictionaries with the balancing area acronyms listed [here](https://www.eia.gov/electricity/gridmonitor/about).\n* For electricity, we follow the naming convention of the US EIA data source we are using:\n```python\n""E"": {\n ""D"": ""EBA.%s-ALL.D.H"", # Demand (Consumption)\n ""NG"": ""EBA.%s-ALL.NG.H"", # Generation\n ""TI"": ""EBA.%s-ALL.TI.H"", # Total Interchange\n ""ID"": ""EBA.%s-%s.ID.H"", # Interchange\n }\n```\nFor example, `""EBA.CISO-ALL.D.H""` is the column for demand in the California ISO.\n* For all other variables, we use a different convention that only uses underscores as separators. For example, for carbon dioxide:\n```python\n""CO2"": {\n ""D"": ""CO2_%s_D"", # Demand (Consumption)\n ""NG"": ""CO2_%s_NG"", # Generation\n ""TI"": ""CO2_%s_TI"", # Total Interchange\n ""ID"": ""CO2_%s-%s_ID"", # Interchange\n}\n```\nFor example, `""CO2_CISO_D""` is the column for consumed emissions in the California ISO, and `""CO2_CISO_NG""` is the column for produced emissions in the California ISO.\n\n## FAQ\n### I tried to retrieve data from the API from 2018 without success\nThe backend API only stores a month\'s worth of data (to save on AWS costs). You can download data in bulk instead from [here](https://gridemissions.jdechalendar.su.domains/#/code).\n\n### Where are the emissions factors coming from?\nThese are life-cycle emissions factors from the [IPCC](https://www.ipcc.ch/report/renewable-energy-sources-and-climate-change-mitigation/) (Table A.II.4 on page 982 of the report at the link). If you want, you can use other emissions factors. [This](https://github.com/jdechalendar/gridemissions/blob/main/src/gridemissions/emissions.py#L14-L28) is where they are being read in by the codebase. If you pass in custom emissions factors, you can then re-run the code to generate estimates using your favorite ones. It would also not be too difficult to modify this code to make the emissions factors depend on the balancing area and time of year, although that would require a bit more work.
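\n\nAs an aside on the naming conventions documented above: the column templates are plain `%`-format strings, so the expected column names can be reconstructed without importing the package. A minimal sketch, using only the patterns from this README:\n```python\n# Expand the documented column-name templates for a couple of balancing areas.\nelec = {""D"": ""EBA.%s-ALL.D.H"", ""NG"": ""EBA.%s-ALL.NG.H"", ""TI"": ""EBA.%s-ALL.TI.H"", ""ID"": ""EBA.%s-%s.ID.H""}\nco2 = {""D"": ""CO2_%s_D"", ""NG"": ""CO2_%s_NG"", ""TI"": ""CO2_%s_TI"", ""ID"": ""CO2_%s-%s_ID""}\n\nprint(elec[""D""] % ""CISO"")            # EBA.CISO-ALL.D.H (demand in the California ISO)\nprint(co2[""ID""] % (""CISO"", ""BPAT""))  # CO2_CISO-BPAT_ID (interchange emissions)\n```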
\n\n## FAQ\n### I tried to retrieve data from the API from 2018 without success\nThe backend API only stores a month\'s worth of data (to save on AWS costs). You can download data in bulk instead from [here](https://gridemissions.jdechalendar.su.domains/#/code).\n\n### Where are the emissions factors coming from?\nThese are life-cycle emissions factors from the [IPCC](https://www.ipcc.ch/report/renewable-energy-sources-and-climate-change-mitigation/) (Table A.II.4 on page 982 of the report at the link). If you want, you can use other emissions factors. [This](https://github.com/jdechalendar/gridemissions/blob/main/src/gridemissions/emissions.py#L14-L28) is where they are read in by the codebase. If you pass in custom emissions factors, you can then re-run the code to generate estimates using your favorite ones. It would also not be too difficult to modify this code to make the emissions factors depend on the balancing area and time of year, although that would require a bit more work.\n\n## Installation\nClone this repository on your machine using HTTPS:\n```\ngit clone https://github.com/jdechalendar/gridemissions.git\n```\nor using SSH (your GitHub account needs to have been configured in this case):\n```\ngit clone git@github.com:jdechalendar/gridemissions.git\n```\nFrom the `gridemissions` directory (the one that contains this README file), install this repository:\n```\npip install .\n```\nInstalling the project in this way means that you can now use statements like `import gridemissions` to make the code in this repository accessible anywhere on your system.\nTo install the optional dependencies as well (needed if you would like to run the automated data cleaning workflows):\n```\npip install .[all]\n```\nIf you intend to modify the code, you may want to install with the editable flag:\n```\npip install -e .[all]\n```\nAs explained [here](https://pip.pypa.io/en/stable/cli/pip_install/#editable-installs), this installs the package in setuptools\' [""Development mode""](https://setuptools.readthedocs.io/en/latest/userguide/development_mode.html) so that you don\'t need to re-build the project every time you edit the code.\nOpen a Python interpreter and import the package to create the default configuration files for the project. When you import the package, it checks whether configuration files exist; if not, they are created and a message is printed to the screen to tell you where they live on your system.\nOptionally, you can customize the configuration files. See the configuration section for details.\n\n### Configuration\nSome configuration is needed for this project, to hold settings like data paths, API keys and passwords. The recommended option is to use a configuration file (`config.json`). A default is created for you the first time you import the package, in the folder `~/.config/gridemissions`. `~` stands for your home directory. On Linux, this is what `$HOME` evaluates to; on Windows/OSX, this is the directory that contains your Documents, Desktop and Downloads folders.\n\nAlternatively, configuration settings can be read from environment variables, e.g. with\n```bash\nexport GRIDEMISSIONS_CONFIG_DIR_PATH=""$HOME/.config/gridemissions_test""\n```\n\n#### The config.json file\nWhenever you import the `gridemissions` module, the key-value pairs stored in `config.json` are loaded into a dictionary that then becomes available to you as the dictionary `gridemissions.config`. These can also be modified at runtime. At a minimum, your `config.json` should contain:\n* `DATA_PATH`: path to local data store, by default `~/data/gridemissions`\n* `TMP_PATH`: for scratch data (e.g. when downloading data), by default `~/tmp/gridemissions`
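\n\nFor example, here is a short sketch of accessing and overriding these settings at runtime (it assumes only the `gridemissions.config` dictionary described above):\n```python\nimport gridemissions\n\n# The key-value pairs from config.json are exposed as a plain dictionary.\ndata_path = gridemissions.config[""DATA_PATH""]\nprint(f""Datasets are stored under: {data_path}"")\n\n# Values can also be overridden at runtime, e.g. for a one-off script.\ngridemissions.config[""TMP_PATH""] = ""/tmp/gridemissions_scratch""\n```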
\n\n#### Supported Environment Variables\n\n```text\nGRIDEMISSIONS_CONFIG_DIR_PATH: the configuration directory (default: ""$HOME/.config/gridemissions"")\nGRIDEMISSIONS_LOG_CONFIG_FILE_PATH: the file used to configure logging (default: ""$HOME/.config/gridemissions/logging.conf"")\nGRIDEMISSIONS_CONFIG_FILE_PATH: the configuration file (default: ""$HOME/.config/gridemissions/config.json"")\nGRIDEMISSIONS_DEFAULT_LOGGING_CONF: the default logging configuration (the default can be read in ./src/gridemissions/configure.py)\nGRIDEMISSIONS_DATA_DIR_PATH: the data directory (default: ""$HOME/data/gridemissions"")\nGRIDEMISSIONS_TMP_DIR_PATH: the temporary data directory (default: ""$HOME/tmp/gridemissions"")\n```\n\n## The `GraphData` class\n*Important note: this class will progressively replace the `BaData` class, which will be deprecated.*\n\nAn abstraction to represent timeseries data on a graph. This class is a light wrapper around a pd.DataFrame, with convenience functions for accessing data. In the underlying pd.DataFrame, the index represents time (UTC) and columns represent data for different fields on the graph.\n\nA graph consists of regions (nodes) and trade links (edges). Trade links are defined as ordered region pairs.\n\nThe class supports holding data for one variable and multiple fields. Examples of variables are electricity, co2, so2, etc. Examples of fields are demand, generation, interchange. Field data can be for regions or for links.\n\nThe regions, variable, and fields are inferred from the underlying data columns.\n\n### `GraphData.get_cols`\nRetrieve the column name(s) corresponding to given region(s) and field(s).\n\n### `GraphData.get_data`\nConvenience function to get the data from a call to `get_cols`.\n\n### `GraphData.check_*`\nThese functions can be used to check that certain constraints are met for different fields. By convention, if one of the fields is missing, the check is `True`.
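\n\nTo make the contract of the two accessors concrete, here is a minimal pandas-only emulation (this is not the package\'s implementation, just an illustration of the behavior described above, using the CO2 naming convention):\n```python\nimport pandas as pd\n\n# A tiny frame following the CO2 naming convention described earlier.\nindex = pd.date_range(""2020-01-01"", periods=3, freq=""h"", tz=""UTC"")\ndf = pd.DataFrame({""CO2_CISO_D"": [1.0, 2.0, 3.0],\n                   ""CO2_CISO_NG"": [0.8, 1.9, 2.7]}, index=index)\n\ndef get_cols(df, region, field):\n    # Emulates GraphData.get_cols: column names for a region/field pair.\n    return [c for c in df.columns if c == f""CO2_{region}_{field}""]\n\ndef get_data(df, region, field):\n    # Emulates GraphData.get_data: the data behind those columns.\n    return df[get_cols(df, region, field)]\n\nprint(get_cols(df, ""CISO"", ""D""))   # [\'CO2_CISO_D\']\nprint(get_data(df, ""CISO"", ""NG""))\n```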
\n'",",https://arxiv.org/abs/2103.05663","2020/09/19, 21:55:28",1130,MIT,14,53,"2023/08/04, 22:16:05",1,12,15,7,81,0,0.0,0.045454545454545414,,,0,2,false,,false,false,,,,,,,,,,, SimBench,The objective of the research project SimBench is the development of a benchmark dataset to support research in grid planning and operation.,e2nIEE,https://github.com/e2nIEE/simbench.git,github,,Energy Distribution and Grids,"2023/10/11, 08:34:03",75,9,12,true,Python,,e2nIEE,Python,https://simbench.de/en/,"b'\n.. image:: https://simbench.de/wp-content/uploads/2019/01/logo.png\n :target: https://www.simbench.net\n :alt: SimBench logo\n\n|\n\n.. image:: https://badge.fury.io/py/simbench.svg\n :target: https://pypi.python.org/pypi/simbench\n :alt: PyPI\n\n.. image:: https://img.shields.io/pypi/pyversions/simbench.svg\n :target: https://pypi.python.org/pypi/simbench\n :alt: versions\n\n.. image:: https://readthedocs.org/projects/simbench/badge/?version=stable\n :target: http://simbench.readthedocs.io/?badge=stable\n :alt: Documentation Status\n\n.. image:: https://github.com/e2nIEE/simbench/actions/workflows/github_test_action.yml/badge.svg\n :target: https://github.com/e2nIEE/simbench/actions/\n :alt: GitHub Actions\n\n.. image:: https://codecov.io/github/e2nIEE/simbench/coverage.svg?branch=master\n :target: https://app.codecov.io/github/e2nIEE/simbench?branch=master\n :alt: codecov\n\n.. image:: https://pepy.tech/badge/simbench\n :target: https://pepy.tech/project/simbench\n :alt: pepy\n\n.. image:: https://img.shields.io/badge/License-ODbL-brightgreen.svg\n :target: https://opendatacommons.org/licenses/odbl\n :alt: ODbL\n\n.. image:: https://img.shields.io/badge/License-BSD%203--Clause-blue.svg\n :target: https://github.com/e2nIEE/simbench/blob/master/LICENSE\n :alt: BSD\n\nSimBench (www.simbench.net) is a research project to create a ""simulation database for uniform comparison of innovative solutions in the field of network analysis, network planning and operation"", which was conducted for three and a half years from 1.11.2015 to 30.04.2019. It was part of the German Federal Government\'s 6th Energy Research Program ""Research for an Environmentally Friendly, Reliable and Affordable Energy Supply"". The project was carried out by the University of Kassel, the Fraunhofer IEE, the RWTH Aachen University and the Technical University of Dortmund in accordance with the authors mentioned above. The project, coordinated by the University of Kassel, was supported by professional advice from six German distribution network operators: DREWAG NETZ GmbH, Energie Netz Mitte GmbH, ENSO NETZ GmbH, Netze BW GmbH, Syna GmbH and Westnetz GmbH.\n\nThe objective of the research project SimBench is the development of a benchmark dataset to support research in grid planning and operation. SimBench grids differ from existing grids in the following key aspects:\n\n- Consideration of a wide range of use cases during the development of datasets\n- Provision of grid data for the low voltage (LV), medium voltage (MV), high voltage (HV) and extra-high voltage (EHV) levels, as well as data for a suitable interconnection of grids across voltage levels for cross-level simulations\n- Ensuring high reproducibility and comparability by providing clearly assigned load and generation time series\n- Validation of the suitability of the datasets through simulation, with deliberately determined grid states and suitable dimensioning of grid assets\n\nThis repository provides data and code to use SimBench within the software pandapower (www.pandapower.org).
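\n\nSince the SimBench grids are distributed as pandapower networks, loading and solving one takes a few lines. A sketch (the SimBench code string is an assumed example; the list of valid codes is available via the package and its documentation):\n```python\nimport simbench as sb\nimport pandapower as pp\n\n# Load a SimBench grid as a pandapower network and run a power flow.\n# ""1-LV-rural1--0-sw"" is an example code; see the documentation for others.\nnet = sb.get_simbench_net(""1-LV-rural1--0-sw"")\npp.runpp(net)\nprint(net.res_bus.head())\n```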
\n'",,"2019/05/13, 14:10:08",1626,CUSTOM,21,101,"2023/10/11, 08:34:03",3,9,33,10,14,0,0.0,0.032258064516129004,"2023/05/12, 09:42:42",v1.4.0,0,3,false,,false,true,"skloibhofer/fh-pandapower-cosimulation,CarachinoAlessio/ML-techniques-for-State-Estimation,e2nIEE/pandahub,maximilianboehm/ba_boehm,m-junaidaslam/Pandapower-Youtube-Tutorials,nitbharambe/cap-map,Pyosch/vpplib,tkarndt/commtailment,Jst3p/ASCIIDeluxe",,https://github.com/e2nIEE,,"Kassel, Germany",,,https://avatars.githubusercontent.com/u/40853245?v=4,,, Egret,A Python-based package for electrical grid optimization based on the Pyomo optimization modeling language.,grid-parity-exchange,https://github.com/grid-parity-exchange/Egret.git,github,"snl-applications,snl-science-libs,optimization,python,power,milp,nlp,minlp,energy-system,powerflow",Energy Distribution and Grids,"2023/05/23, 19:30:40",108,3,16,true,Python,Grid Parity Exchange,grid-parity-exchange,"Python,MATLAB",,"b'[![EGRET GitHub CI](https://github.com/grid-parity-exchange/Egret/workflows/EGRET%20GitHub%20CI/badge.svg)](https://github.com/grid-parity-exchange/Egret/actions/workflows/egret.yml)\n\n## EGRET Overview\n\nEGRET is a Python-based package for electrical grid optimization based on the Pyomo optimization modeling language. EGRET is designed to be friendly for performing high-level analysis (e.g., as an engine for solving different optimization formulations), while also providing flexibility for researchers to rapidly explore new optimization formulations.\n\nMajor features:\n* Solution of Unit-Commitment problems\n* Solution of Economic Dispatch (optimal power flow) problems (e.g., DCOPF, ACOPF)\n* Library of different problem formulations and approximations\n* Generic handling of data across model formulations\n* Declarative model representation to support formulation development\n\nEGRET is available under the BSD License (see [LICENSE.txt](https://github.com/grid-parity-exchange/Egret/blob/main/LICENSE.txt))\n\n### Installation\n\n* EGRET is a Python package and therefore requires a Python installation. We recommend using Anaconda with the latest Python (https://www.anaconda.com/distribution/).\n* These installation instructions assume that you have a recent version of Pyomo installed, in addition to a suite of relevant solvers (see www.pyomo.org for additional details).\n* Download (or clone) EGRET from this GitHub site.\n* From the main EGRET folder (i.e., the folder containing setup.py), use a terminal (or the Anaconda prompt for Windows users) to run setup.py to install EGRET into your Python installation - as follows:\n\n pip install -e .\n\n### Requirements\n\n* Python 3.7 or later\n* Pyomo version 6.4.0 or later\n* pytest\n* Optimization solvers for Pyomo - specific requirements depend on the models being solved. EGRET is tested with Gurobi or CPLEX for MIP-based problems (e.g., unit commitment) and Ipopt (with HSL linear solvers) for NLP problems.\n\nWe additionally recommend that EGRET users install the open source CBC MIP solver. The specific mechanics of installing CBC are platform-specific. When using Anaconda on Linux and Mac platforms, this can be accomplished simply by:\n\n conda install -c conda-forge coincbc\n\nThe COIN-OR organization - which develops CBC - also provides pre-built binaries for a full range of platforms on https://bintray.com/coin-or/download.\n\n### Testing the Installation\n\nTo test the functionality of the unit commitment aspects of EGRET, execute the following command from the EGRET models/tests sub-directory:\n\n pytest test_unit_commitment.py\n\nIf EGRET can find a commercial MIP solver on your system via Pyomo, EGRET will execute a large test suite including solving several MIPs to optimality. If EGRET can only find an open-source solver, it will execute a more limited test suite which mostly relies on solving LP relaxations. Example output is below.\n\n```\n=================================== test session starts ==================================\nplatform darwin -- Python 3.7.7, pytest-5.4.2, py-1.8.1, pluggy-0.13.0\nrootdir: /home/some-user/egret\ncollected 21 items\n\ntest_unit_commitment.py s.................... 
[100%]\n\n========================= 20 passed, 1 skipped in 641.80 seconds =========================\n```\n\n### How to Cite EGRET in Your Research\n\nIf you are using the unit commitment functionality of EGRET, please cite the following paper: \n\nOn Mixed-Integer Programming Formulations for the Unit Commitment Problem\nBernard Knueven, James Ostrowski, and Jean-Paul Watson.\nINFORMS Journal on Computing (Ahead of Print)\nhttps://pubsonline.informs.org/doi/10.1287/ijoc.2019.0944\n'",,"2019/01/28, 18:41:03",1731,CUSTOM,35,1058,"2023/08/09, 01:18:36",48,227,265,17,77,4,0.4,0.4072164948453608,"2023/04/03, 23:11:48",0.5.5,0,14,false,,false,true,"PrincetonUniversity/Vatic,darrylmelander/Prescient,grid-parity-exchange/Prescient",,https://github.com/grid-parity-exchange,,,,,https://avatars.githubusercontent.com/u/47119915?v=4,,, PyPSA-Eur,An Open Optimization Model of the European Transmission System.,PyPSA,https://github.com/PyPSA/pypsa-eur.git,github,"snakemake,energy,energy-system,power-systems,energy-model,pypsa,energy-system-model,energy-systems",Energy Distribution and Grids,"2023/10/24, 06:49:56",236,0,77,true,Python,PyPSA,PyPSA,"Python,Shell",https://pypsa-eur.readthedocs.io/,"b'\n\n![GitHub release (latest by date including pre-releases)](https://img.shields.io/github/v/release/pypsa/pypsa-eur?include_prereleases)\n[![Build Status](https://github.com/pypsa/pypsa-eur/actions/workflows/ci.yaml/badge.svg)](https://github.com/PyPSA/pypsa-eur/actions)\n[![Documentation](https://readthedocs.org/projects/pypsa-eur/badge/?version=latest)](https://pypsa-eur.readthedocs.io/en/latest/?badge=latest)\n![Size](https://img.shields.io/github/repo-size/pypsa/pypsa-eur)\n[![Zenodo PyPSA-Eur](https://zenodo.org/badge/DOI/10.5281/zenodo.3520874.svg)](https://doi.org/10.5281/zenodo.3520874)\n[![Zenodo PyPSA-Eur-Sec](https://zenodo.org/badge/DOI/10.5281/zenodo.3938042.svg)](https://doi.org/10.5281/zenodo.3938042)\n[![Snakemake](https://img.shields.io/badge/snakemake-\xe2\x89\xa57.7.0-brightgreen.svg?style=flat)](https://snakemake.readthedocs.io)\n[![REUSE status](https://api.reuse.software/badge/github.com/pypsa/pypsa-eur)](https://api.reuse.software/info/github.com/pypsa/pypsa-eur)\n[![Stack Exchange questions](https://img.shields.io/stackexchange/stackoverflow/t/pypsa)](https://stackoverflow.com/questions/tagged/pypsa)\n\n# PyPSA-Eur: A Sector-Coupled Open Optimisation Model of the European Energy System\n\nPyPSA-Eur is an open model dataset of the European energy system at the\ntransmission network level that covers the full ENTSO-E area. 
The model is suitable both for operational studies and generation and transmission expansion planning studies.\nThe continental scope and highly resolved spatial scale enable a proper description of the long-range\nsmoothing effects for renewable power generation and their varying resource availability.\n\nThe model is described in the [documentation](https://pypsa-eur.readthedocs.io)\nand in the paper\n[PyPSA-Eur: An Open Optimisation Model of the European Transmission\nSystem](https://arxiv.org/abs/1806.01613), 2018,\n[arXiv:1806.01613](https://arxiv.org/abs/1806.01613).\nThe model building routines are defined through a snakemake workflow.\nPlease see the [documentation](https://pypsa-eur.readthedocs.io/)\nfor installation instructions and other useful information about the snakemake workflow.\nThe model is designed to be imported into the open toolbox\n[PyPSA](https://github.com/PyPSA/PyPSA).\n\n**WARNING**: PyPSA-Eur is under active development and has several\n[limitations](https://pypsa-eur.readthedocs.io/en/latest/limitations.html) which\nyou should understand before using the model. The GitHub repository\n[issues](https://github.com/PyPSA/pypsa-eur/issues) collect known topics we are\nworking on (please feel free to help or make suggestions). The\n[documentation](https://pypsa-eur.readthedocs.io/) remains somewhat patchy. You\ncan find showcases of the model\'s capabilities in the Joule paper [The potential\nrole of a hydrogen network in\nEurope](https://doi.org/10.1016/j.joule.2023.06.016), another [paper in Joule\nwith a description of the industry\nsector](https://doi.org/10.1016/j.joule.2022.04.016), or in [a 2021 presentation\nat EMP-E](https://nworbmot.org/energy/brown-empe.pdf). We do not recommend\nusing the full-resolution network model for simulations. At high granularity the\nassignment of loads and generators to the nearest network node may not be a\ncorrect assumption, depending on the topology of the underlying distribution\ngrid, and local grid bottlenecks may cause unrealistic load-shedding or\ngenerator curtailment. We recommend clustering the network to a couple of\nhundred nodes to remove these local inconsistencies. See the discussion in\nSection 3.4 ""Model validation"" of the paper.\n\n\n![PyPSA-Eur Grid Model](doc/img/elec.png)\n\nThe dataset consists of:\n\n- A grid model based on a modified [GridKit](https://github.com/bdw/GridKit)\n extraction of the [ENTSO-E Transmission System\n Map](https://www.entsoe.eu/data/map/). The grid model contains 6763 lines\n (alternating current lines at and above 220kV voltage level and all high\n voltage direct current lines) and 3642 substations.\n- The open power plant database\n [powerplantmatching](https://github.com/FRESNA/powerplantmatching).\n- Electrical demand time series from the\n [OPSD project](https://open-power-system-data.org/).\n- Renewable time series based on ERA5 and SARAH, assembled using the [atlite tool](https://github.com/FRESNA/atlite).\n- Geographical potentials for wind and solar generators based on land use (CORINE) and excluding nature reserves (Natura2000) are computed with the [atlite library](https://github.com/PyPSA/atlite).\n\nA sector-coupled extension adds demand\nand supply for the following sectors: transport, space and water\nheating, biomass, industry and industrial feedstocks, agriculture,\nforestry and fishing. 
This completes the energy system and includes\nall greenhouse gas emitters except waste management and land use.\n\nThis diagram gives an overview of the sectors and the links between\nthem:\n\n![sector diagram](graphics/multisector_figure.png)\n\nEach of these sectors is built up on the transmission network nodes\nfrom [PyPSA-Eur](https://github.com/PyPSA/pypsa-eur):\n\n![network diagram](https://github.com/PyPSA/pypsa-eur/blob/master/doc/img/base.png?raw=true)\n\nFor computational reasons the model is usually clustered down\nto 50-200 nodes.\n\nAlready-built versions of the model can be found in the accompanying [Zenodo\nrepository](https://doi.org/10.5281/zenodo.3601881).\n\n# Contributing and Support\nWe strongly welcome anyone interested in contributing to this project. If you have any ideas, suggestions or encounter problems, feel invited to file issues or make pull requests on GitHub.\n- In case of code-related **questions**, please post on [stack overflow](https://stackoverflow.com/questions/tagged/pypsa).\n- For non-programming related and more general questions please refer to the [mailing list](https://groups.google.com/group/pypsa).\n- To **discuss** with other PyPSA users, organise projects, share news, and get in touch with the community you can use the [discord server](https://discord.com/invite/AnuJBk23FU).\n- For **bugs and feature requests**, please use the [PyPSA-Eur Github Issues page](https://github.com/PyPSA/pypsa-eur/issues).\n\n# Licence\n\nThe code in PyPSA-Eur is released as free software under the\n[MIT License](https://opensource.org/licenses/MIT), see `LICENSE.txt`.\nHowever, different licenses and terms of use may apply to the various\ninput data.\n'",",https://doi.org/10.5281/zenodo.3520874,https://doi.org/10.5281/zenodo.3938042,https://arxiv.org/abs/1806.01613,https://arxiv.org/abs/1806.01613,https://doi.org/10.1016/j.joule.2023.06.016,https://doi.org/10.1016/j.joule.2022.04.016,https://doi.org/10.5281/zenodo.3601881","2017/10/11, 23:54:41",2204,CUSTOM,1153,3213,"2023/10/24, 06:49:57",151,398,616,248,1,31,0.7,0.6976292265837544,"2023/07/27, 09:57:00",v0.8.1,0,46,false,,false,false,,,https://github.com/PyPSA,www.pypsa.org,,,,https://avatars.githubusercontent.com/u/32890768?v=4,,, Energy Transition Engine,Calculation engine for the Energy Transition Model.,quintel,https://github.com/quintel/etengine.git,github,,Energy Distribution and Grids,"2023/10/23, 15:08:18",12,0,0,true,Ruby,Quintel,quintel,"Ruby,JavaScript,HTML,Haml,CSS,Shell,CoffeeScript,Dockerfile",https://pro.energytransitionmodel.com,"b'# Energy Transition Engine (ETE)\n\nThis is the source code for the Calculation Engine that is used by the\n[Energy Transition Model](http://energytransitionmodel.com) and its various\ninterfaces (clients).\n\nIt is an online web app that lets you create a future energy scenario for\nvarious countries. 
This software is [open source](LICENSE.txt), so you can\nfork it and alter it at will.\n\nETEngine does not contain an easy-to-use frontend for creating and editing\nthese energy scenarios; that role is instead fulfilled by separate applications\nsuch as [ETModel][etmodel], [ETFlex][etflex], and the [EnergyMixer][energymixer],\nwhich each use ETEngine\'s REST API for manipulating and calculating scenarios.\n\n[![Build Status](https://quintel.semaphoreci.com/badges/etengine/branches/master.svg)](https://quintel.semaphoreci.com/projects/etengine)\n\n## License\n\nThe ETE is released under the [MIT License](LICENSE.txt).\n\n## Installation with Docker\n\nNew users are recommended to use Docker to run ETEngine. Doing so will avoid the need to install additional dependencies.\n\n1. Get a copy of [ETEngine](https://github.com/quintel/etengine) and [ETSource](https://github.com/quintel/etsource); placing them in the same parent directory:\n\n ```\n \xe2\x94\x9c\xe2\x94\x80 parent_dir\n \xe2\x94\x82 \xe2\x94\x9c\xe2\x94\x80 etengine\n \xe2\x94\x82 \xe2\x94\x94\xe2\x94\x80 etsource\n ```\n\n Place the ETSource decryption password in a file called `.password` in the ETSource directory. This is required to decrypt a small number of datasets for which we\'re not authorised to publicly release the source data.\n\n ```\n \xe2\x94\x9c\xe2\x94\x80 parent_dir\n \xe2\x94\x82 \xe2\x94\x9c\xe2\x94\x80 etengine\n \xe2\x94\x82 \xe2\x94\x94\xe2\x94\x80 etsource\n \xe2\x94\x82 \xe2\x94\x9c\xe2\x94\x80 .password # <- password goes here\n \xe2\x94\x82 \xe2\x94\x9c\xe2\x94\x80 carriers\n \xe2\x94\x82 \xe2\x94\x9c\xe2\x94\x80 config\n \xe2\x94\x82 \xe2\x94\x9c\xe2\x94\x80 datasets\n \xe2\x94\x82 \xe2\x94\x9c\xe2\x94\x80 ...\n ```\n\n2. Build the ETEngine image:\n\n ```sh\n docker-compose build\n ```\n\n3. Install dependencies and seed the database:\n\n ```sh\n docker-compose run --rm web bash -c \'bin/rails db:drop && bin/setup\'\n ```\n\n The command drops any existing ETEngine database; be sure only to run this during the initial setup! This step will also provide you with an e-mail address and password for an administrator account.\n\n4. Launch the containers:\n\n ```sh\n docker-compose up\n ```\n\n After starting, the application will become available at http://localhost:3000 within a few seconds. This is indicated by the message ""Listening on http://0.0.0.0:3000"".\n\nBefore the application can start serving scenarios, it must calculate the default dataset (Netherlands). This process will begin the first time a scenario is requested and will take several seconds. Signing in to the administrator account will also begin the calculation. Please be patient! Further requests to ETEngine will happen much faster.\n\n## Installation without Docker\n\nInstalling ETEngine on a local machine can be a bit involved, owing to the\nnumber of dependencies. Assuming you can run a \'normal\' Rails application on your local machine,\nyou have to follow these steps to run ETEngine.\n\n1. Install the ""Graphviz"" library\n * Mac users with [Homebrew][homebrew]: `brew install graphviz`\n * Ubuntu: `sudo apt-get install graphviz libgraphviz-dev`\n\n1. Install ""MySQL"" server\n * Mac: Install latest version using the [Native Package][mysql] (choose the 64-bit DMG version)\n * Ubuntu: `sudo apt-get install mysql-server-5.5 libmysqlclient-dev`\n\n1. Clone this repository with `git clone git@github.com:quintel/etengine.git`\n\n1. Run `bundle install` to install the dependencies required by ETEngine.\n\n1. 
Create your personal configuration files from the samples with\n ```\n cp -vn config/database.sample.yml config/database.yml\n cp -vn config/config.sample.yml config/config.yml\n ```\n Then make any changes -- particularly to the database configuration -- as you see fit.\n * Probably set ""standalone"" to `true` in ""config/config.yml""\n\n1. Clone a copy of [ETSource][etsource] \xe2\x80\x93\xe2\x80\x93 which contains the data for each\n region:\n 1. `cd ..; git clone git@github.com:quintel/etsource.git`\n 1. `cd etsource; bundle install`\n 1. Edit ""config/config.yml"" and enter the ETSource directory into the\n ""etsource_export"" and ""etsource_working_copy"" options \xe2\x80\x93\xe2\x80\x93 or leave at the default if possible.\n\n\n1. Create the database you specified in your ""database.yml"" file, and\n 1. run `bundle exec rake db:setup db:seed` to create the tables and add an\n administrator account \xe2\x80\x93\xe2\x80\x93 whose name and password will be output at the end \xe2\x80\x93\xe2\x80\x93 OR\n 1. run `bundle exec rake db:create` to create your database and\n `bundle exec cap staging db2local` to fill your database with records from the staging server\n\n1. You\'re now ready to go! Fire up the Rails process with `rails s`\n or use [Pow][pow].\n\n1. If you run into a dataset error, check out this\n [explanation](https://github.com/quintel/etsource#csv-documents ""Explanation on etsource CSV files"") on CSV files\n\n## Technical Design\n\n### Caching\n\nETEngine relies heavily on caching of calculated values, using the\n[fetch](https://github.com/quintel/etengine/blob/51b321f6d43a2d2a626aa268845b775fca051ae0/app/models/qernel/dataset_attributes.rb#L205-L237)\nfunction to store and retrieve them. This has some drawbacks,\nbut is necessary to keep performance up.\n\n### Scenario\n\nWhen the user starts a new scenario, the user has to choose the `end_year`\nand the `area` to which this scenario applies. This can/should *not* be\naltered later.\n\n### Present and future\n\nThe ETEngine uses *two* graphs that store all the data: one for the present\nyear and one for the future year. In this sense, the ETEngine is a \'two\nstate\' model: everything is calculated twice: once for the start year, and\nonce for the end year. It is important to note that ETEngine therefore does\n*not* calculate intermediate years. An exception to this is\n[Merit](http://github.com/quintel/merit), a module for ETEngine (that can\nalso be used independently) which contains time series at a one-hour resolution\nfor one year.\n\n### Inputs\n\nA user can alter the start scenario with the use of **inputs**. Every input has\na key and a value, which can be sent to ETEngine. For example, a user can tell ETEngine:\n\n number_of_energy_power_nuclear_gen3_uranium_oxide = 2\n\nThis means that the user wants to \'set\' the number of nuclear power plants to `2`\nin his/her current scenario.\n\nThe current set of inputs can be found on\n[ETSource][etsource].\n\nEvery time the user requests some output, **all** the inputs that have been\ntouched by that user for that scenario are applied again. The order in which\nthey are applied can be controlled if necessary.\n\nThe priority of every input defaults to 0, and can be manually set to a higher value\n(e.g. 100) on inputs which need to be executed first. 
For example, an input\nwith `priority=100` gets executed before an input with `priority=99`, etc.\n\nThis is something to keep in mind when designing your input statements.\n\n#### Competing inputs\n\nFor example, when you have two inputs:\n\n* input `A`: update attribute `X` to have value `1`\n* input `B`: update attribute `X` to have value `2`\n\nThe outcome for `X` will be `1` **or** `2`, depending on the priority of\nthese inputs; if they both have no priority or the same priority, the outcome\nis effectively random.\n\n#### Complementary inputs\n\nFor example, when you have two inputs:\n\n* input `A`: update attribute `X` to **increase** by `1%`\n* input `B`: update attribute `X` to **increase** by `2%`\n\nThen the outcome for `X` will be its original value multiplied by 1.01 * 1.02.
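\n\nThe two behaviors follow from applying inputs in priority order: absolute updates overwrite each other, while relative updates compose. A small Python sketch of these semantics (purely illustrative; ETEngine itself is a Ruby application):\n```python\n# Apply inputs in priority order; higher priority runs first.\ninputs = [\n    {""priority"": 0, ""kind"": ""set"", ""value"": 1.0},       # competing input A\n    {""priority"": 0, ""kind"": ""set"", ""value"": 2.0},       # competing input B: last one applied wins\n    {""priority"": 0, ""kind"": ""increase"", ""value"": 0.01},  # complementary input\n    {""priority"": 0, ""kind"": ""increase"", ""value"": 0.02},  # complementary input\n]\n\nx = 10.0\nfor i in sorted(inputs, key=lambda i: -i[""priority""]):\n    if i[""kind""] == ""set"":\n        x = i[""value""]\n    else:  # relative increases compose multiplicatively\n        x *= 1 + i[""value""]\nprint(x)  # 2.0 * 1.01 * 1.02\n```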
\n\n### Output\n\nThe user can request output from his/her scenario with the use of\n*gqueries*. A gquery always returns the *present* and the *future*\noutput value, although there are exceptions to this.\n\nE.g. when the user sends the `dashboard_co2_emissions` query to\nETEngine, it will receive the following feedback:\n\n* present: 123\n* future: 456\n* unit: MJ\n\nA **gquery** is nothing more than a stored statement. These statements are\nwritten in our own language called the *Graph Query Language* (GQL) and\na recent list can be found on [ETSource][etsource].\n\n### Auto-reloading your changes to etsource\n\nSometimes you want to play around with or tweak some gqueries. Then you don\'t\nwant to create commits every time and import them, because by the time you are\nsatisfied you\'ll probably have 10 commits that need to be cleaned up and\nsquashed.\n\nYou can add the option `etsource_live_reload: true` in your `config.yml`\nfile.\n\nChange queries, inputs, datasets, gqueries or the topology directory\nin your **etsource_export** folder, and ETEngine reloads your changes\nautomatically!\n\nBy the way, by default your *etsource_export* directory is not under version control.\nIn order to gain the advantages of Git, just point *etsource_export* to the\n*etsource* directory, either by using a symbolic link or using the same directory\nin your config.yml file. But **be careful** NOT to use the interface\'s\n\'import\' action on /etsource: that will delete/overwrite your etsource_export\ndirectory!\n\n## GQL\n\n[GQL Functions](http://beta.et-engine.com/doc/Gql/Grammar/Sandbox.html)\n\n[Node methods](http://beta.et-engine.com/doc/Qernel/NodeApi.html)\n\n## Screencasts\n\nPassword for all the screencasts below is `quintel`.\n\n#### [GQL Console](http://vimeo.com/40660438)\n\n#### [GQL Docs](http://vimeo.com/40663213)\n\nHow to use this documentation.\n\n#### [GQL Console and ETSource](http://vimeo.com/40707436)\n\nHow to work with different etsource directories, make changes and load them in\nthe GQL console.\n\n#### [ETSource: Create a new basic etmodel](http://vimeo.com/40709640)\n\nWe build a new etmodel with 3 nodes from scratch. This helps you\nunderstand how the etsource works.\n\nYou can find the result in: etsource/models/sample\n\n[etsource]: http://github.com/quintel/etsource ""ETSource: database for the ETM.""\n[etmodel]: http://github.com/quintel/etmodel\n[etflex]: http://github.com/quintel/etflex\n[energymixer]: http://github.com/quintel/energymixer\n[homebrew]: http://brew.sh\n[pow]: http://pow.cx\n[mysql]: http://dev.mysql.com/downloads/mysql/5.5.html#macosx-dmg\n\n'",,"2011/05/03, 08:55:40",4558,MIT,258,5256,"2023/10/23, 15:08:19",33,391,1329,64,2,3,0.6,0.5957446808510638,,,0,30,false,,false,false,,,https://github.com/quintel,quintel.com,Amsterdam,,,https://avatars.githubusercontent.com/u/2242291?v=4,,, Open Smart Grid Platform,"An open, generic, scalable and independent 'Internet of Things' platform, which enables various connected smart objects in the public space to be easily controlled and monitored.",OSGP,https://github.com/OSGP/open-smart-grid-platform.git,github,"iot-platform,smartmetering,public-lighting,distribution-automation,dlms,iec61850,oslp,cucumber-tests,java,springframework",Energy Distribution and Grids,"2023/10/25, 16:22:21",87,0,8,true,Java,Grid eXchange Fabric (GXF): formerly known as the Open Smart Grid Platform,OSGP,"Java,Gherkin,Groovy,Shell,PLpgSQL,Dockerfile",https://www.lfenergy.org/projects/gxf/,"b'\n\n[![Dependabot Status](https://api.dependabot.com/badges/status?host=github&repo=OSGP/open-smart-grid-platform)](https://dependabot.com) [![Build Status](https://ci.opensmartgridplatform.org/buildStatus/icon?job=OSGP_open-smart-grid-platform_development)](https://ci.opensmartgridplatform.org/job/OSGP_open-smart-grid-platform_development/) [![Quality Gate Status](https://sonar.osgp.cloud/api/project_badges/measure?project=org.opensmartgridplatform%3Aopen-smart-grid-platform&metric=alert_status)](https://sonar.osgp.cloud/dashboard?id=org.opensmartgridplatform%3Aopen-smart-grid-platform)\n\n[![CII Best Practices](https://bestpractices.coreinfrastructure.org/projects/4104/badge)](https://bestpractices.coreinfrastructure.org/projects/4104)\n\n# Code for Grid eXchange Fabric (GXF)\n\n### Project Description\n\nThis repository contains all code for the Grid eXchange Fabric (GXF) project, formerly known as Open Smart Grid Platform (OSGP). The name OSGP has been deprecated.\n\n- osgp. This project contains all code components needed to build the platform.\n- integration-tests. This project contains all the tests to verify the platform.\n- public-lighting-demo-app. This project contains a simple demo app for the platform.\n\n\n## Grid eXchange Fabric information and news\n\nHigh-level project information and news can be found on the GXF section of the LF Energy website: \n* [www.lfenergy.org/projects/gxf](https://www.lfenergy.org/projects/gxf/)\n\nGXF wiki with detailed project information:\n* [LF Energy wiki](https://wiki.lfenergy.org/display/GEF/Grid+eXchange+Fabric+-+GXF)\n\nGXF detailed documentation:\n* [grid-exchange-fabric.gitbook.io](https://grid-exchange-fabric.gitbook.io/gxf/)\n\nGXF issue tracker:\n* [github.com/OSGP/Documentation/issues](https://github.com/OSGP/Documentation/issues)\n\n## License\n\nThis project is licensed under the Apache 2.0 license - see the [LICENSE](LICENSE) file for details.\n\n## Licenses of third-party libraries\nThis project uses third-party libraries, which are licensed under their own respective Open-Source licenses. 
\n\n## Contributing\n\nPlease read [CONTRIBUTING.md](CONTRIBUTING.md) for details on our code of conduct, and the process for submitting pull requests to us.\n\n## Contact\n\nIf you have a question, please read the [GXF wiki contact page](https://grid-exchange-fabric.gitbook.io/gxf/opensourcecommunity/communication-and-contact) to find out how best to contact us.\n'",,"2018/10/23, 09:41:12",1828,Apache-2.0,695,15393,"2023/10/25, 16:22:22",27,1076,1076,164,0,27,1.6,0.8604381816426986,"2022/09/19, 13:11:54",release-5.30.0-15,0,67,false,,false,true,,,https://github.com/OSGP,https://www.lfenergy.org/projects/gxf/,Vianen,,,https://avatars.githubusercontent.com/u/11352045?v=4,,, PowerModels.jl,Designed to enable computational evaluation of emerging power network formulations and algorithms in a common platform.,lanl-ansi,https://github.com/lanl-ansi/PowerModels.jl.git,github,"optimization,network,power-network,optimal-power-flow",Energy Distribution and Grids,"2023/08/28, 16:56:29",349,0,60,true,Julia,advanced network science initiative,lanl-ansi,"Julia,MATLAB",https://lanl-ansi.github.io/PowerModels.jl/stable/,"b'# PowerModels.jl\n\n\n\nStatus:\n[![CI](https://github.com/lanl-ansi/PowerModels.jl/workflows/CI/badge.svg)](https://github.com/lanl-ansi/PowerModels.jl/actions?query=workflow%3ACI)\n[![codecov](https://codecov.io/gh/lanl-ansi/PowerModels.jl/branch/master/graph/badge.svg)](https://codecov.io/gh/lanl-ansi/PowerModels.jl)\n[![Documentation](https://github.com/lanl-ansi/PowerModels.jl/workflows/Documentation/badge.svg)](https://lanl-ansi.github.io/PowerModels.jl/stable/)\n

\n\nPowerModels.jl is a Julia/JuMP package for Steady-State Power Network Optimization.\nIt is designed to enable computational evaluation of emerging power network formulations and algorithms in a common platform.\nThe code is engineered to decouple problem specifications (e.g. Power Flow, Optimal Power Flow, ...) from the power network formulations (e.g. AC, DC-approximation, SOC-relaxation, ...).\nThis enables the definition of a wide variety of power network formulations and their comparison on common problem specifications.\n\n**Core Problem Specifications**\n* Power Flow (pf)\n* Optimal Power Flow (opf)\n* Optimal Transmission Switching (ots)\n* Transmission Network Expansion Planning (tnep)\n\n**Core Network Formulations**\n* AC (polar and rectangular coordinates)\n* DC Approximation (polar coordinates)\n* LPAC Approximation (polar coordinates)\n* SDP Relaxation (W-space)\n* SOC Relaxation (W-space)\n* QC Relaxation (W+L-space)\n* IV (rectangular coordinates)\n\n**Network Data Formats**\n* Matpower "".m"" files\n* PTI "".raw"" files (PSS(R)E v33 specification)\n\n\n## Documentation\n\nThe package [documentation](https://lanl-ansi.github.io/PowerModels.jl/stable/) includes a variety of useful information including a [quick-start guide](https://lanl-ansi.github.io/PowerModels.jl/stable/quickguide/), [network model specification](https://lanl-ansi.github.io/PowerModels.jl/stable/network-data/), and [baseline results](https://lanl-ansi.github.io/PowerModels.jl/stable/experiment-results/).\n\nAdditionally, these presentations provide a brief introduction to various aspects of PowerModels,\n- [Network Model Update, v0.6](https://youtu.be/j7r4onyiNRQ)\n- [PSCC 2018](https://youtu.be/AEEzt3IjLaM)\n- [JuMP Developers Meetup 2017](https://youtu.be/W4LOKR7B4ts)\n\n\n## Development\n\nCommunity-driven development and enhancement of PowerModels are welcome and encouraged. Please fork this repository and share your contributions to the master with pull requests. 
See [CONTRIBUTING.md](https://github.com/lanl-ansi/PowerModels.jl/blob/master/CONTRIBUTING.md) for code contribution guidelines.\n\n\n## Acknowledgments\n\nThis code has been developed as part of the Advanced Network Science Initiative at Los Alamos National Laboratory.\nThe primary developer is Carleton Coffrin (@ccoffrin) with support from the following contributors,\n- Per Aaslid (@peraaslid) SINTEF ER, Branch flow storage model and linear branch flow formulation\n- Juan Luis Barber\xc3\xada (@jbarberia) UTN-BA, PSS(R)E v33 data export, Jacobian support for basic network data\n- Russell Bent (@rb004f) LANL, Matpower export, TNEP problem specification\n- Jose Daniel Lara (@jd-lara) Berkeley, Julia v1.0 compatibility\n- Jay Dave (@jay-dave) KU Leuven, LPAC for TNEP and OTS problems\n- Hakan Ergun (@hakanergun) KU Leuven, HVDC lines\n- David Fobes (@pseudocubic) LANL, PSS(R)E v33 data support\n- Rory Finnegan (@rofinn) Invenia, Memento Logging\n- Frederik Geth (@frederikgeth) CSIRO, storage modeling advice, Branch Flow and current-voltage formulation\n- Jonas Kersulis (@kersulis) University of Michigan, Sparse SDP formulation\n- Miles Lubin (@mlubin) MIT, Julia/JuMP advice\n- Yeesian Ng (@yeesian) MIT, Documenter.jl setup\n- Kaarthik Sundar (@kaarthiksundar) LANL, OBBT utility\n- Mathieu Tanneau (@mtanneau) Georgia Tech, PTDF matrix computation\n- Byron Tasseff (@tasseff) LANL, multi-infrastructure updates\n\n\n## Citing PowerModels\n\nIf you find PowerModels useful in your work, we kindly request that you cite the following [publication](https://ieeexplore.ieee.org/document/8442948/):\n```\n@inproceedings{8442948,\n author = {Carleton Coffrin and Russell Bent and Kaarthik Sundar and Yeesian Ng and Miles Lubin},\n title = {PowerModels.jl: An Open-Source Framework for Exploring Power Flow Formulations},\n booktitle = {2018 Power Systems Computation Conference (PSCC)},\n year = {2018},\n month = {June},\n pages = {1-8},\n doi = {10.23919/PSCC.2018.8442948}\n}\n```\nCitation of the original works for problem definitions (e.g. OPF) and [power flow formulations](https://lanl-ansi.github.io/PowerModels.jl/stable/formulation-details/) (e.g. 
SOC) is also encouraged when publishing works that use PowerModels.\n\n\n## License\n\nThis code is provided under a BSD license as part of the Multi-Infrastructure Control and Optimization Toolkit (MICOT) project, LA-CC-13-108.\n'",,"2016/08/01, 20:30:54",2641,CUSTOM,20,959,"2023/10/03, 18:32:34",92,422,801,54,22,2,0.1,0.15977011494252868,"2023/05/28, 00:57:08",v0.19.9,0,25,false,,false,true,,,https://github.com/lanl-ansi,https://lanl-ansi.github.io/,"Los Alamos, NM",,,https://avatars.githubusercontent.com/u/17053288?v=4,,, PowerModelsAnnex.jl,An extension of PowerModels.jl that provides a home for open source sharing of preliminary and/or exploratory methods in power system optimization.,lanl-ansi,https://github.com/lanl-ansi/PowerModelsAnnex.jl.git,github,"optimization,network,power-network,optimal-power-flow",Energy Distribution and Grids,"2023/07/24, 15:13:31",17,0,3,true,Julia,advanced network science initiative,lanl-ansi,Julia,,"b'# PowerModelsAnnex.jl\nDev:\n[![CI](https://github.com/lanl-ansi/PowerModelsAnnex.jl/workflows/CI/badge.svg)](https://github.com/lanl-ansi/PowerModelsAnnex.jl/actions/workflows/ci.yml)\n[![codecov](https://codecov.io/gh/lanl-ansi/PowerModelsAnnex.jl/branch/master/graph/badge.svg)](https://codecov.io/gh/lanl-ansi/PowerModelsAnnex.jl)\n\n[PowerModels.jl](https://github.com/lanl-ansi/PowerModels.jl) provides an implementation reference for *established* formulations and methods in power system optimization, and hence is not an appropriate location for more exploratory work. PowerModelsAnnex.jl is an extension of PowerModels.jl that provides a home for open-source sharing of preliminary and/or exploratory methods in power system optimization.\n\nDue to the exploratory nature of PowerModelsAnnex,\n- there is minimal documentation and testing\n- there are limited code quality and reliability standards\n- anything goes in the annex, more-or-less\n\nUsers should be prepared for features that break.\n\nPull Requests to PowerModelsAnnex are always welcome and not subject to significant scrutiny.\n\n## Acknowledgments\n\nThis code has been developed as part of the Advanced Network Science Initiative at Los Alamos National Laboratory.\nThe primary developer is Carleton Coffrin.\n\n\n## License\nThis code is provided under a BSD license as part of the Multi-Infrastructure Control and Optimization Toolkit (MICOT) project, LA-CC-13-108.\n'",,"2017/07/27, 17:16:19",2281,CUSTOM,4,151,"2023/05/05, 00:58:45",9,69,88,2,173,3,0.0,0.18181818181818177,"2023/07/24, 15:56:07",v0.9.0,0,12,false,,false,false,,,https://github.com/lanl-ansi,https://lanl-ansi.github.io/,"Los Alamos, NM",,,https://avatars.githubusercontent.com/u/17053288?v=4,,, Power Grid Lib,This benchmark library is curated and maintained by the IEEE PES Task Force on Benchmarks for Validation of Emerging Power System Algorithms and is designed to evaluate a well established version of the AC Optimal Power Flow problem.,power-grid-lib,https://github.com/power-grid-lib/pglib-opf.git,github,"optimal-power-flow,matpower,benchmark,dataset",Energy Distribution and Grids,"2023/07/24, 16:12:04",250,0,55,true,MATLAB,A Library of IEEE PES Power Grid Benchmarks,power-grid-lib,"MATLAB,TeX",,"b'# Power Grid Lib - Optimal Power Flow\n\nThis benchmark library is curated and maintained by the [IEEE PES Task Force on Benchmarks for Validation of Emerging Power System Algorithms](https://power-grid-lib.github.io/) and is designed to evaluate a well-established version of the AC Optimal Power Flow problem. 
This [introductory video](https://youtu.be/fC3hzddCJ2c) and [detailed report](https://arxiv.org/abs/1908.02788) present the motivations and goals of this benchmark library. In particular, these cases are designed for benchmarking algorithms that solve the following Non-Convex Nonlinear Program,\n\n![The Mathematical Model of the Optimal Power Flow Problem](MODEL.png?raw=true)\n\nA detailed description of this mathematical model is available [here](https://arxiv.org/abs/1502.07847). All of the case files are curated in the [MATPOWER](https://matpower.org) data format. Open-source reference implementations are available in [MATPOWER](https://matpower.org) and [PowerModels.jl](https://github.com/lanl-ansi/PowerModels.jl) and baseline results are reported in [BASELINE.md](BASELINE.md).\n\n### Problem Variants\n\nThese cases may also be useful for benchmarking the following variants of the Optimal Power Flow problem,\n* DC Optimal Power Flow\n* AC Optimal Transmission Switching\n* DC Optimal Transmission Switching\n\nThat said, these cases are curated with the AC Optimal Power Flow problem in mind. Application to other domains and problem variants should be done with discretion.\n\n## Case File Overview\n\nA forthcoming technical report will detail the sources, motivations, and procedures for curating these case files.\n\nIn this repository the network data files are organized into the following three broad groups:\n\n* /*.m - base case benchmarks as originally specified\n* /api/*.m - heavily loaded test cases (i.e. binding thermal limit constraints)\n* /sad/*.m - small phase angle difference cases (i.e. binding phase angle difference constraints)\n\n## Contributions\n\nAll case files are provided under a [Creative Commons Attribution License](http://creativecommons.org/licenses/by/4.0/), which allows anyone to share or adapt these cases as long as they give appropriate credit to the original author, provide a link to the license, and indicate if changes were made.\n\nCommunity-based recommendations and contributions are welcome and encouraged in all PGLib repositories. Please feel free to submit comments and questions in the [issue tracker](https://github.com/power-grid-lib/pglib-opf/issues). Corrections and new network contributions are welcome via pull requests. All data contributions are subject to a quality assurance review by the repository curator(s).\n\n## Citation Guidelines\n\nThis repository is not static. 
Consequently, it is critically important to indicate the version number when referencing this repository in scholarly work.\n\nUsers of these cases are encouraged to cite the original source documents that are indicated in the file headers and the [archive report](https://arxiv.org/abs/1908.02788).\n'",",https://arxiv.org/abs/1908.02788,https://arxiv.org/abs/1502.07847,https://arxiv.org/abs/1908.02788","2017/06/27, 19:02:06",2311,CUSTOM,1,15,"2023/08/11, 01:46:57",13,10,28,4,75,1,0.0,0.0625,"2023/07/24, 16:17:02",v23.07,0,1,false,,false,false,,,https://github.com/power-grid-lib,https://power-grid-lib.github.io/,,,,https://avatars.githubusercontent.com/u/21188896?v=4,,, pypownet,A power network simulator with a Reinforcement Learning-focused usage.,MarvinLer,https://github.com/MarvinLer/pypownet.git,github,"power-network,reinforcement-learning,simulator,powergrid,reinforcement-learning-environments,gym-environments",Energy Distribution and Grids,"2023/07/07, 16:08:45",100,0,14,true,Python,,,"Python,Dockerfile",https://pypownet.readthedocs.io/,"b'# pypownet\npypownet stands for Python Power Network, which is a simulator for power (electrical) networks.\n\nThe simulator is able to emulate a power grid (of any size or characteristics) subject to a set of temporal injections (productions and consumptions) for discretized timesteps. Loadflow computations rely on Matpower and can be run under the AC or DC models. The simulator is able to simulate cascading failures, where successively overflowed lines are switched off and a loadflow is computed on the subsequent grid.\n\n![Video capture of the renderer of the simulator in action](https://github.com/MarvinLer/pypownet/blob/master/doc/source/default14.gif)\n\n*Illustration of a running power grid with our renderer on the default IEEE14 grid environment.\nNB: the renderer drastically slows the performance of pypownet: it takes ~40s to compute 1000 timesteps without renderer mode with this environment.*\n\nThe simulator comes with a Reinforcement Learning-focused environment, which implements states (observations), actions (reduced to node-splitting and line status switches) as well as a reward signal. 
Finally, a renderer is available, such that the observations of the network can be plotted in real-time (synchronized with the game time).\n\nOfficial documentation: https://pypownet.readthedocs.io/\n\n\n* [1 Installation](#installation)\n * [1.1 Using Docker](#using-docker)\n * [1.2 Without using Docker](#without-using-docker)\n * [1.2.1 Requirements](#requirements)\n * [1.2.2 Instructions](#instructions)\n* [2 Basic usage](#basic-usage)\n * [2.1 Without using Docker](#without-using-docker-1)\n * [2.2 Using Docker](#using-docker-1)\n* [3 Main features of pypownet](#main-features)\n* [4 Generate the documentation](#generate-the-documentation)\n* [5 License information](#license-information)\n\n## Installation\n### Using Docker\nRetrieve the Docker image:\n```\nsudo docker pull marvinler/pypownet:2.2.8-light\n```\n\n### Without using Docker\n#### Requirements:\n* Python >= 3.6\n\n\nFor the Octave backend (the default is the Python backend):\n\n* Octave >= 4.0.6\n* Matpower >= 6.0\n\n#### Instructions\n\nThese instructions allow you to run the simulator with a Python backend; for the Octave backend, please refer to the documentation for installation instructions.\n\n##### Step 1: Install Python3.6\n```\nsudo apt-get update\nsudo apt-get install python3.6\n```\nIf you have any trouble with this step, please refer to [the official webpage of Python](https://www.python.org/downloads/release/python-366/).\n\n##### (Optional, recommended) Step 1bis: Create a virtual environment\n```\nvirtualenv -p python3.6 --system-site-packages venv\nsource venv/bin/activate\n```\n\n##### Step 2: Clone pypownet\n```\ngit clone https://github.com/MarvinLer/pypownet\n```\nThis should create a folder pypownet with the current sources.\n\n##### Step 3: Run the installation script of pypownet\nFinally, run the following Python command to install the current simulator (including the Python library dependencies):\n```\ncd pypownet/\npython3.6 setup.py install\n```\nAfter this, the simulator is available under the name pypownet (e.g. ```import pypownet```).\n\n## Basic usage\n### Without using Docker\nExperiments can be conducted using the CLI.\n#### Using CLI arguments\nThe CLI can be used to run simulations:\n```\npython -m pypownet.main -v\n```\nYou can use `python -m pypownet.main --help` for further information about these runner arguments. Example running 1000 iterations (here, ~40 days) of the do-nothing (default) agent on a grid with 14 substations:\n```\npython -m pypownet.main --parameters parameters/default14 --niter 1000 --verbose --render\n```\nWith the default14/ parameters (emulating a grid with 14 substations, 5 productions, 11 consumptions and 20 lines), it takes ~100 seconds to run 1000 timesteps (on an old i5).\n### Using Docker\nYou can use the command line of the image with shared display (for running the renderer):\n```\nsudo docker run -it --privileged --net=host --env=""DISPLAY"" --volume=""$HOME/.Xauthority:/root/.Xauthority:rw"" marvinler/pypownet:2.2.0 sh\n```\nThis will open a terminal of the image. The usage is then identical to the non-Docker usage, by doing the steps within this terminal.\n\n## Main features\npypownet is a power grid simulator that emulates a power grid subject to pre-computed injections, planned maintenance as well as random external hazards. 
Here is a list of pypownet main features:\n* emulates a grid of any size and electrical properties in a game discretized in timesteps of any (fixed) size\n* computes and applies the cascading failure process: at each timestep, overflowed lines meeting certain conditions are switched off, with a consequent loadflow computation to retrieve the new grid steady-state, and the process is reiterated\n* has an RL-focused interface, where players or controllers can play actions (node-splitting or line status switches) on the current grid, based on a partial observation of the grid (high dimension), with a customizable reward signal (and game over options)\n* has a renderer that enables the user to see the grid evolving in real-time, as well as the actions of the controller currently playing and further grid state details (works only for pypownet official grid cases)\n* has a runner that enables you to use pypownet fully by simply coding an agent (with a method act(observation)); see the sketch after this list\n* possesses some baseline models (including tree searches) illustrating how to use the furnished environment\n* can be launched from the CLI with the possibility of managing certain parameters (such as renderer toggling or the agent to be played)\n* functions in both DC and AC modes\n* has a set of parameters that can be customized (including AC or DC mode, or the hard-overflow coefficient), associated with sets of injections, planned maintenance and random hazards of the various chronics\n* handles node-splitting (at the moment only max 2 nodes per substation) and line switch-offs for topology management
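\n\nThe agent contract mentioned above is small. A minimal sketch (illustrative only; the exact base class and how the runner consumes the agent are described in the official documentation):\n```python\n# Minimal sketch of a pypownet-style agent. The only contract assumed from\n# the feature list above is a method act(observation).\nclass DoNothingAgent:\n    def act(self, observation):\n        # Inspect the (partial) observation of the grid here and return an\n        # action; doing nothing is the usual baseline.\n        return None\n```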
\n\n## Generate the documentation\nThe stable official documentation is available at https://pypownet.readthedocs.io/.\nAlternatively, a copy of the master documentation can be built: you will need Sphinx, a documentation building tool, and a nice-looking custom [Sphinx theme similar to the one of readthedocs.io](https://sphinx-rtd-theme.readthedocs.io/en/latest/):\n```\npip install sphinx sphinx_rtd_theme\n```\nThis installs both the Sphinx package and the custom template. Then:\n```\ncd doc\nsphinx-build -b html ./source ./build\n```\nThe html will be available within the folder [doc/build](doc/build/index.html).\n\n## Tests\npypownet is provided with a series of tests developed by @ZergD and RTE. These tests are designed to verify some behavior of the game as a whole, including some expected grid values based on perfectly controlled injections/topology. Tests can be run with `pytest` in the current directory.\n\nSee [tests/README.md](tests/README.md) for more information about the testing module.\n\n## License information\n\nCopyright 2017-2019 RTE and INRIA (France)\n\n RTE: http://www.rte-france.com\n INRIA: https://www.inria.fr/\n\nThis Source Code is subject to the terms of the GNU Lesser General Public License v3.0. If a copy of the LGPL-v3 was not distributed with this file, You can obtain one at https://www.gnu.org/licenses/lgpl-3.0.fr.html.\n\n## Citation\n\nIf you use this repo or find it useful, please consider citing:\n\n```BibTeX\n@article{lerousseau2021design,\n title={Design and implementation of an environment for Learning to Run a Power Network (L2RPN)},\n author={Lerousseau, Marvin},\n journal={arXiv preprint arXiv:2104.04080},\n year={2021}\n}\n```\n'",,"2018/08/03, 11:03:55",1909,LGPL-3.0,4,318,"2023/07/07, 16:08:54",2,12,52,3,110,0,0.0,0.030303030303030276,"2020/10/26, 19:34:05",v2.2.9,0,5,false,,true,false,,,,,,,,,,, Grid2Op,A testbed platform to model sequential decision making in power systems.,rte-france,https://github.com/rte-france/Grid2Op.git,github,"grid2op,reinforcement-learning,reinforcement-learning-environments,powergrid,powergrid-operation,gym-environments",Energy Distribution and Grids,"2023/09/18, 13:57:18",239,0,53,true,Python,Réseau de transport d'électricité,rte-france,"Python,Jupyter Notebook,Dockerfile,Shell,Makefile",https://grid2op.readthedocs.io/,"b'# Grid2Op\n[![Downloads](https://pepy.tech/badge/grid2op)](https://pepy.tech/project/grid2op)\n[![PyPi_Version](https://img.shields.io/pypi/v/grid2op.svg)](https://pypi.org/project/Grid2Op/)\n[![PyPi_Compat](https://img.shields.io/pypi/pyversions/grid2op.svg)](https://pypi.org/project/Grid2Op/)\n[![LICENSE](https://img.shields.io/pypi/l/grid2op.svg)](https://www.mozilla.org/en-US/MPL/2.0/)\n[![Documentation Status](https://readthedocs.org/projects/grid2op/badge/?version=latest)](https://grid2op.readthedocs.io/en/latest/?badge=latest)\n[![circleci](https://circleci.com/gh/rte-france/Grid2Op.svg?style=shield)](https://circleci.com/gh/rte-france/Grid2Op)\n[![discord](https://discord.com/api/guilds/698080905209577513/embed.png)]( https://discord.gg/cYsYrPT)\n[![Binder](https://mybinder.org/badge_logo.svg)](https://mybinder.org/v2/gh/rte-france/Grid2Op/master)\n\nGrid2Op is a platform, built with modularity in mind, that allows you to perform powergrid operations.\nAnd that\'s what it stands for: Grid To Operate.\nGrid2Op acts as a replacement for [pypownet](https://github.com/MarvinLer/pypownet) \nas a library used for the Learning To Run Power Network [L2RPN](https://l2rpn.chalearn.org/). 
\n\nThis framework allows you to perform most kinds of powergrid operations, from modifying the setpoint of generators,\nto load shedding, performing maintenance operations or modifying the *topology* of a powergrid\nto solve security issues.\n\nThe official documentation is available at \n[https://grid2op.readthedocs.io/](https://grid2op.readthedocs.io/).\n\n* [1 Installation](#installation)\n * [1.1 Setup a Virtualenv (optional)](#setup-a-virtualenv-optional)\n * [1.2 Install from PyPI](#install-from-pypi)\n * [1.3 Install from source](#install-from-source)\n * [1.4 Install for contributors](#install-for-contributors)\n * [1.5 Docker](#docker)\n* [2 Main features of Grid2Op](#main-features-of-grid2op)\n* [3 Getting Started](#getting-started)\n * [0 Basic features](getting_started/0_basic_functionalities.ipynb)\n * [1 BaseObservation Agents](getting_started/1_Observation_Agents.ipynb)\n * [2 BaseAction Grid Manipulation](getting_started/2_Action_GridManipulation.ipynb)\n * [3 Training A BaseAgent](getting_started/3_TrainingAnAgent.ipynb)\n * [4 Study Your BaseAgent](getting_started/4_StudyYourAgent.ipynb)\n* [4 Citing](#Citing)\n* [5 Documentation](#documentation)\n* [6 Contribute](#contributing)\n* [7 Tests and known issues](#tests-and-known-issues)\n* [8 License information](#license-information)\n\n# Installation\n## Requirements:\n* Python >= 3.6\n\n## Setup a Virtualenv (optional)\n### Create a virtual environment \n```commandline\ncd my-project-folder\npip3 install -U virtualenv\npython3 -m virtualenv venv_grid2op\n```\n### Enter virtual environment\n```commandline\nsource venv_grid2op/bin/activate\n```\n\n## Install from PyPI\n```commandline\npip3 install grid2op\n```\n\n## Install from source\n```commandline\ngit clone https://github.com/rte-france/Grid2Op.git\ncd Grid2Op\npip3 install -U .\ncd ..\n```\n\n## Install for contributors\n```commandline\ngit clone https://github.com/rte-france/Grid2Op.git\ncd Grid2Op\npip3 install -e .\npip3 install -e .[optional]\npip3 install -e .[docs]\n```\n\n## Docker\nGrid2Op docker containers are available on [dockerhub](https://hub.docker.com/r/bdonnot/grid2op/tags).\n\nTo install the latest Grid2Op container locally, use the following:\n```commandline\ndocker pull bdonnot/grid2op:latest\n```\n\n# Main features of Grid2Op\n## Core functionalities\nBuilt with modularity in mind, Grid2Op is a library used for the ""Learning To Run Power Network"" [L2RPN](https://l2rpn.chalearn.org/)\ncompetition series. 
Its main features are:\n* emulates the behavior of a powergrid of any size and any format (provided that a *backend* is properly implemented)\n* allows for grid modifications (active and reactive load values, generator voltage setpoints, active production, and most \n importantly grid topology beyond powerline connection / disconnection)\n* allows for maintenance operations and powergrid topological changes\n* can adopt any powergrid modeling, especially the Alternating Current (AC) and Direct Current (DC) approximations, \n when performing the computations\n* supports changes of powerflow solvers, actions and observations to better suit any need in performing power system operations modeling\n* has an RL-focused interface, compatible with [OpenAI-gym](https://gym.openai.com/): same interface for the\n Environment class.\n* parameters, game rules or types of actions are fully parametrizable\n* can adapt to any kind of input data, in various formats (might require the rewriting of a class)\n\n## Powerflow solver\nGrid2Op relies on an open source powerflow solver ([PandaPower](https://www.pandapower.org/)),\nbut is also compatible with other *Backends*. If you have another powerflow solver at your disposal, \nthe documentation of [grid2op/Backend](grid2op/Backend/Backend.py) can help you integrate it into a proper ""Backend""\nand have Grid2Op use this powerflow solver instead of PandaPower.\n\n# Getting Started\nSome Jupyter notebooks are provided as tutorials for the Grid2Op package. They are located in the \n[getting_started](getting_started) directory. \n\nTODO: some of these need to be redone, refactored and better explained.\n\nThese notebooks will help you understand how this framework is used and cover the most\ninteresting parts of this framework:\n\n* [00_Introduction](getting_started/00_Introduction.ipynb) \n [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/rte-france/Grid2Op/blob/master/getting_started/00_Introduction.ipynb)\n and [00_SmallExample](getting_started/00_SmallExample.ipynb) \n [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/rte-france/Grid2Op/blob/master/getting_started/00_SmallExample.ipynb)\n describe what is \n addressed by the grid2op framework (with a tiny introduction to both power systems and reinforcement learning) \n and give an introductory example of a small powergrid manipulation.\n* [01_Grid2opFramework](getting_started/01_Grid2opFramework.ipynb)\n [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/rte-france/Grid2Op/blob/master/getting_started/01_Grid2opFramework.ipynb)\n covers the basics \n of the\n Grid2Op framework. It also covers how to create a valid environment and how to use the \n `Runner` class to rapidly assess how well an agent is performing.\n* [02_Observation](getting_started/02_Observation.ipynb)\n [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/rte-france/Grid2Op/blob/master/getting_started/02_Observation.ipynb)\n details how to create \n an ""expert agent"" that will take pre-defined actions based on the observation it gets from \n the environment. 
This notebook also covers the functioning of the BaseObservation class.\n* [03_Action](getting_started/03_Action.ipynb)\n [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/rte-france/Grid2Op/blob/master/getting_started/03_Action.ipynb)\n demonstrates \n how to use the BaseAction class and how to manipulate the powergrid.\n* [04_TrainingAnAgent](getting_started/04_TrainingAnAgent.ipynb)\n [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/rte-france/Grid2Op/blob/master/getting_started/04_TrainingAnAgent.ipynb)\n shows how to get started with \n reinforcement learning with the grid2op environment. It shows the basics of how to train a ""PPO"" model operating the grid, relying on the ""stable baselines 3"" PPO implementation.\n* [05_StudyYourAgent](getting_started/05_StudyYourAgent.ipynb)\n [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/rte-france/Grid2Op/blob/master/getting_started/05_StudyYourAgent.ipynb)\n shows how to study a BaseAgent, for example\n how to reload a saved experiment, or how to plot the powergrid given an observation. This is an introductory notebook. A more user-friendly graphical interface should\n come soon.\n* [06_Redispatching_Curtailment](getting_started/06_Redispatching_Curtailment.ipynb)\n [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/rte-france/Grid2Op/blob/master/getting_started/06_Redispatching_Curtailment.ipynb)\n explains what \n ""redispatching"" and curtailment are from the point \n of view of a company that is in charge of keeping the powergrid safe (aka a Transmission System Operator) and how to \n manipulate these concepts in grid2op. Redispatching (and curtailment) allows you to perform **continuous** \n actions on the powergrid \n problem.\n* [07_MultiEnv](getting_started/07_MultiEnv.ipynb)\n [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/rte-france/Grid2Op/blob/master/getting_started/07_MultiEnv.ipynb)\n details how grid2op natively supports a single agent interacting\n with multiple environments at the same time. This is particularly handy for training ""asynchronous"" agents, as done in the \n Reinforcement Learning community for example.\n* [08_PlottingCapabilities](getting_started/08_PlottingCapabilities.ipynb)\n [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/rte-france/Grid2Op/blob/master/getting_started/08_PlottingCapabilities.ipynb)\n shows the different ways in which you \n can represent (visually) the grid your agent interacts with. A renderer is available, as in many OpenAI gym \n environments. You also have the possibility to post-process an agent and make some movies out of it, and we also\n developed a Graphical User Interface (GUI) called ""[grid2viz](https://github.com/mjothy/grid2viz)"" that allows\n you to perform in-depth studies of your agent\'s behaviour on different scenarios and even to compare it with baselines. 
\n* [09_EnvironmentModifications](getting_started/09_EnvironmentModifications.ipynb)\n [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/rte-france/Grid2Op/blob/master/getting_started/09_EnvironmentModifications.ipynb)\n elaborates on maintenance, \n hazards\n and attacks. All three of these represent external events that can disconnect some powerlines. This notebook\n covers how to spot when such events happen and what can be done when the maintenance or the attack is over.\n* [10_StorageUnits](getting_started/10_StorageUnits.ipynb)\n [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/rte-france/Grid2Op/blob/master/getting_started/10_StorageUnits.ipynb)\n details the usage and behaviour of the storage units\n in grid2op. \n* [11_IntegrationWithExistingRLFrameworks](getting_started/11_IntegrationWithExistingRLFrameworks.ipynb)\n [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/rte-france/Grid2Op/blob/master/getting_started/11_IntegrationWithExistingRLFrameworks.ipynb)\n explains how to use grid2op with other reinforcement learning frameworks. TODO: this needs to be redone\n \nTry them out in your own browser without installing \nanything with the help of mybinder: \n[![Binder](https://mybinder.org/badge_logo.svg)](https://mybinder.org/v2/gh/rte-france/Grid2Op/master)\n\nOr thanks to google colab (all links are provided near the notebook descriptions)\n\n# Citing\n\nIf you use this package in one of your works, please cite:\n```\n@misc{grid2op,\n author = {B. Donnot},\n title = {{Grid2op- A testbed platform to model sequential decision making in power systems. }},\n year = {2020},\n publisher = {GitHub},\n journal = {GitHub repository},\n howpublished = {\\url{https://GitHub.com/rte-france/grid2op}},\n}\n```\n\n# Documentation\n\nThe official documentation is available at \n[https://grid2op.readthedocs.io/](https://grid2op.readthedocs.io/).\n\n## Build the documentation locally\n\nA copy of the documentation can be built if the project is installed *from source*:\nyou will need Sphinx, a documentation building tool, and a nice-looking custom\n[Sphinx theme similar to the one of readthedocs.io](https://sphinx-rtd-theme.readthedocs.io/en/latest/). These\ncan be installed with:\n```commandline\npip3 install -U grid2op[docs]\n```\nThis installs both the Sphinx package and the custom template. \n\nThen, on systems where `make` is available (mainly gnu-linux and macos) the documentation can be built with the command:\n```commandline\nmake html\n```\n\nFor windows, or systems where `make` is not available, use the command:\n```commandline\nsphinx-build -b html docs documentation\n```\n\n\nThis will create a ""documentation"" subdirectory and the main entry point of the document will be located at \n[index.html](documentation/html/index.html).\n\nIt is recommended to build this documentation locally, for convenience.\nFor example, the ""getting started"" notebooks reference some pages of the help.\n\n\n\n# Contributing\n\nWe welcome contributions from everyone. They can take the form of pull requests for smaller changes. \nIn case of a major change (or if you have a doubt about what counts as ""a small change""), please open an issue first \nto discuss what you would like to change.\n\nTo contribute to this code, you need to:\n\n1. fork the repository located at https://github.com/rte-france/Grid2Op\n2. 
sync your fork with the latest development branch of grid2op. For example, if the latest grid2op release\n on pypi is `1.6.5` you need to sync your repo with the branch named `dev_1.6.6` or `dev_1.7.0` (if \n the branch `dev_1.6.6` does not exist). It will be the branch with the highest number among the `dev_*` branches on the\n official grid2op github repository.\n3. implement your functionality / code your modifications or anything else\n4. make sure to add tests and documentation if applicable\n5. once it is developed, sync your repo with the latest development branch again (see point 2 above) and\n make sure to solve any possible conflicts\n6. write a pull request and make sure to target the right branch (the latest development branch)\n\n\nCode in the contribution should pass all the tests, and have some dedicated tests for the new feature (if applicable)\nand documentation (if applicable).\n\nBefore implementing any major feature, please write a github issue first.\n\n# Tests and known issues\n\n## Tests performed currently\nGrid2op is currently tested on windows, linux and macos.\n\nThe unit tests include testing, on linux machines, the correct integration of grid2op with:\n\n- python 3.8\n- python 3.9\n- python 3.10\n- python 3.11\n\nIn all of these cases, we test grid2op with all available numpy versions >= 1.20 (**nb** available numpy versions depend\non the python version).\n\nThe complete test suite is run on linux with the latest numpy version on python 3.8.\n\n## Known issues\n\nDue to the underlying behaviour of the ""multiprocessing"" package on windows based python versions,\nthe ""multiprocessing"" of the grid2op ""Runner"" is not supported on windows. This might change in the future, \nbut it is currently not among our priorities.\n\nA quick fix that is known to work is to set `experimental_read_from_local_dir` when creating the\nenvironment with `grid2op.make(..., experimental_read_from_local_dir=True)` (see the doc for more information)\n\n## Perform tests locally\nProvided that Grid2Op is installed *from source*:\n\n### Install additional dependencies\n```commandline\npip3 install -U grid2op[optional]\n```\n### Launch tests\n```commandline\ncd grid2op/tests\npython3 -m unittest discover\n```\n\n# License information\nCopyright 2019-2020 RTE France\n\n RTE: http://www.rte-france.com\n\nThis Source Code is subject to the terms of the Mozilla Public License (MPL) v2 also available \n[here](https://www.mozilla.org/en-US/MPL/2.0/)\n'",,"2019/10/10, 17:04:46",1476,MPL-2.0,618,2759,"2023/10/24, 11:10:40",42,223,487,173,1,0,0.3,0.35333018422295703,"2023/09/18, 13:59:06",v1.9.5,1,23,false,,false,false,,,https://github.com/rte-france,http://www.rte-france.com/,,,,https://avatars.githubusercontent.com/u/19531752?v=4,,, eDisGo,Optimization of flexibility options and grid expansion for distribution grids based on PyPSA.,openego,https://github.com/openego/eDisGo.git,github,,Energy Distribution and Grids,"2023/05/27, 19:22:59",33,3,6,true,Python,,openego,"Python,Julia",,"b'\n\n\n# Overview\n\n[![Coverage Status](https://coveralls.io/repos/github/openego/eDisGo/badge.svg?branch=dev)](https://coveralls.io/github/openego/eDisGo?branch=dev)\n[![Tests & coverage](https://github.com/openego/eDisGo/actions/workflows/tests-coverage.yml/badge.svg)](https://github.com/openego/eDisGo/actions/workflows/tests-coverage.yml)\n\n\n# eDisGo\n\nThe python package eDisGo serves as a toolbox to evaluate flexibility measures\nas an economic alternative to conventional grid expansion in\nmedium and low voltage grids.\nSee 
[documentation](https://edisgo.readthedocs.io/en/dev/) for further information.\n\n\n# LICENSE\n\nCopyright (C) 2017 Reiner Lemoine Institut gGmbH\n\nThis program is free software: you can redistribute it and/or modify it under\nthe terms of the GNU Affero General Public License as published by the Free\nSoftware Foundation, either version 3 of the License, or (at your option) any\nlater version.\n\nThis program is distributed in the hope that it will be useful, but WITHOUT\nANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS\nFOR A PARTICULAR PURPOSE. See the GNU Affero General Public License for more\ndetails.\n\nYou should have received a copy of the GNU Affero General Public License along with\nthis program. If not, see https://www.gnu.org/licenses/.\n'",,"2017/04/05, 12:16:17",2394,AGPL-3.0,993,4063,"2023/05/09, 18:29:15",53,156,334,60,169,7,2.4,0.3632122625791403,"2023/04/02, 14:25:18",v0.2.1,0,13,false,,false,false,"nailend/eDisGo-lobaflex,windnode/WindNODE_ABW,openego/eGo",,https://github.com/openego,https://openegoproject.wordpress.com/,,,,https://avatars.githubusercontent.com/u/15909192?v=4,,, offgridders,"Models and optimizes capacity & dispatch of electricity supply systems, off-grid or connected to a (weak) central grid.",rl-institut,https://github.com/rl-institut/offgridders.git,github,oemof,Energy Distribution and Grids,"2022/03/31, 08:59:09",17,0,2,false,Python,Reiner Lemoine Institut,rl-institut,Python,,"b'# Tool description\n\nThe simulation tool **Offgridders** * generates a model of a user-defined electricity supply system, optimizes the capacities of the system\'s generation, storage and electrical components and then performs a dispatch optimization of the optimized capacities.\n \nOffgridders is written in python3 and utilizes the Open Energy Modelling Framework ([Website](https://oemof.org/)) ([Code](https://github.com/oemof)) \nand as such uses linearized component models. \nThe electricity system can include AC as well as DC demand, inverters/rectifiers, \na connection to a central electricity grid (optional: with blackouts), a diesel generator, \nPV panels, a wind plant and storage. \nIt is possible to allow a defined annual shortage or to enforce a renewable share or system stability constraint. \nFor a visualization of the components and demands to be included, \nsee the [Readthedocs: Definition of an electricity supply system](https://offgridders.readthedocs.io/en/latest/Definition.html).\n\nExamples of electricity systems that can be simulated with Offgridders: \n* Off-grid micro grid, purely fossil-fuelled or hybridized\n* On-grid micro grid, either only consuming or also feeding into the central grid\n* Off-grid SHS\n* Backup systems (diesel generator, SHS, ...) 
to ensure reliable supply of consumers connected to weak national grids\n\nIf you have questions regarding the tool\'s execution or its code pieces, please drop an issue so that, as time goes by, I can build an FAQ for offgridders as well as improve its features.\n\n*) previous working name: oesmot - Open Electricity System Modelling and Optimization Tool\n\n# Setup\n* Download and integrate the CBC solver.\n* Open an Anaconda prompt, create an environment with `python==3.6`\n* Run: `pip install -r requirements.txt`\n* Execute the test data: `python Offgridders.py`\n* Run your own simulations by defining the path to your input excel file: `python Offgridders.py ./inputs/test_input_template.xlsx`\n\nWhen working as a dev, you need to install additional packages with `pip install -r requirements_dev.txt`\n\nFor details, see [Readthedocs: Installation](https://offgridders.readthedocs.io/en/latest/Installation.html)\n\n# Literature\n\nFor further reading please refer to [Readthedocs: Literature](https://offgridders.readthedocs.io/en/latest/Literature.html)\n\n# Change log\n\n## MicroGridDesignTool_V3.0\n* New excel template - not compatible with previous versions\n* Taking into account investments into storage power\n* **currently working with oemof 0.2.2**\n\n## MicroGridDesignTool_V2.1\n* Error messages\n* Bugfix: Working renewable constraint\n* Bugfix: Excel-issues with max_shortage==\'default\' error (from columns=\'unnamed\')\n\n## MicroGridDesignTool_V2.0\nMajor changes:\n* New excel template\n* DC and AC bus, connected with inverters/rectifiers, possible AC/DC demand\n* Forced battery charge criteria (linearized)\n* Minimal renewable share criteria not working!\n* Console execution via ""python3 A_main_script.py FILE.xlsx""\n\n## MicroGridDesignTool_V1.1\n* Fixed termination due to undefined \'comments\', occurring when a simulation without sensitivity analysis is performed\n* New constraint: Renewable share (testing needed)\n* Added DC bus including rectifier/inverter (testing needed -> Flows, calculated values)\n* Enabled demand AC + demand DC (testing needed -> Flows, calculated values)\n* PV charge only through battery can be enabled by not including a rectifier (testing needed -> Flows, calculated values)\n* New constraint: Linearized forced charge when national grid available\n* New constraint: Discharge of battery only when maingrid experiences blackout\n* New constraint: Inverter from DC to AC bus only active when blackout occurs\n\n## MicroGridDesignTool_V1.0\n* Simulation of off- or on-grid energy systems (not only MG)\n* 1 hr timesteps, 1 to 365 days evaluation time\n* All input data via excel sheet\n* Easy case definition\n\n# Open issues\n* Timestep length 15 min\n* Include generation of network diagram \n* Demand shortage per timestep\n'",,"2018/10/16, 15:06:05",1835,GPL-3.0,0,1230,"2022/06/22, 04:01:45",45,90,129,0,490,6,1.0,0.3543233082706767,"2020/11/07, 11:43:28",v4.6.1,2,5,false,,false,true,,,https://github.com/rl-institut,http://www.reiner-lemoine-institut.de,Berlin/Germany,,,https://avatars.githubusercontent.com/u/18393972?v=4,,, RTS-GMLC,Reliability Test System of the Grid Modernization Lab Consortium.,GridMod,https://github.com/GridMod/RTS-GMLC.git,github,"rts,energy-system,energy-data,power-systems,power-systems-analysis,matpower",Energy Distribution and Grids,"2022/09/28, 02:03:33",136,0,23,false,HTML,,GridMod,"HTML,Jupyter Notebook,Python,MATLAB,R",,"b'# RTS-GMLC\nReliability Test System - Grid Modernization Lab Consortium \n\n### This repository is for the Reliability Test 
System Grid Modernization Lab Consortium (RTS-GMLC), which is an updated version of the RTS-96 test system. A summary of updates can be found in [RTS-GMLC_updates.md](https://github.com/GridMod/RTS-GMLC/blob/master/RTS-GMLC_updates.md).\nThis repository and the associated data have been developed to facilitate Production Cost Modeling. Reliability calculations using the updated RTS-GMLC data in this repository and the RTS3 program, provided by [Gene Preston](http://egpreston.com), can be found [here](http://egpreston.com/NEWRTS.zip).\n![RTS-GMLC-layers](https://github.com/GridMod/RTS-GMLC/blob/master/rts_layers.png)\n\n#### The [RTS_Data](https://github.com/GridMod/RTS-GMLC/tree/master/RTS_Data) folder contains data in an open `csv` format, and in grid modeling tool specific formats: \n\n1. [SourceData](https://github.com/GridMod/RTS-GMLC/tree/master/RTS_Data/SourceData) contains several `csv` files that describe all the RTS-GMLC data.\n2. [FormattedData](https://github.com/GridMod/RTS-GMLC/tree/master/RTS_Data/FormattedData) contains folders for each tool specific data format. Each tool specific folder is also intended to contain a script that automates the conversion from `SourceData`, in addition to solutions obtained from each tool in the `FormattedData/*tool*/*tool*_Solution` folder. Currently, datasets are included in the following formats:\n - [MATPOWER](http://www.pserc.cornell.edu/matpower/)\n - [PowerWorld](https://www.powerworld.com/)\n - [PSS/E](https://www.siemens.com/global/en/home/products/energy/services/transmission-distribution-smart-grid/consulting-and-planning/pss-software/pss-e.html) v31 and v33\n - [PLEXOS](https://energyexemplar.com/)\n - [Prescient](https://energy.sandia.gov/tag/prescient/)\n - [RTS3](http://egpreston.com/)\n - [pandapower](http://www.pandapower.org)\n - [ANDES](https://github.com/cuihantao/andes)\n - [SIIP](https://github.com/nrel-siip)/[PowerSystems.jl](https://github.com/nrel-siip/PowerSystems.jl)\n\n## TODO and Identified Areas of Improvement:\n - Conventional Plant Data:\n - Add information about different operating configurations for combined cycle plants\n - Evaluate/address the realism of the relative sizes of units in different categories\n - Wind and Solar Data:\n - Update to the state-of-the-art datasets ([NSRDB](https://nsrdb.nrel.gov/) and [WindToolkit](https://www.nrel.gov/grid/wind-toolkit.html)) and add raw weather data\n\n\n## Setup\n```bash\ngit clone git@github.com:GridMod/RTS-GMLC.git\ncd RTS-GMLC\ngit submodule init\ngit submodule update\n```\n\n## Contributing\n\nContributions to the development and enhancement of the RTS data are welcome. Please see [CONTRIBUTING.md](https://github.com/GridMod/RTS-GMLC/blob/master/CONTRIBUTING.md) for contribution guidelines.\n\n## DATA USE DISCLAIMER AGREEMENT\n*(“Agreement”)*\n\nThese data (“Data”) are provided by the National Renewable Energy Laboratory (“NREL”), which is operated by Alliance for Sustainable Energy, LLC (“ALLIANCE”) for the U.S. Department Of Energy (“DOE”).\n\nAccess to and use of these Data shall impose the following obligations on the user, as set forth in this Agreement. The user is granted the right, without any fee or cost, to use, copy, and distribute these Data for any purpose whatsoever, provided that this entire notice appears in all copies of the Data. Further, the user agrees to credit DOE/NREL/ALLIANCE in any publication that results from the use of these Data. 
The names DOE/NREL/ALLIANCE, however, may not be used in any advertising or publicity to endorse or promote any products or commercial entities unless specific written permission is obtained from DOE/NREL/ ALLIANCE. The user also understands that DOE/NREL/Alliance is not obligated to provide the user with any support, consulting, training or assistance of any kind with regard to the use of these Data or to provide the user with any updates, revisions or new versions of these Data.\n\n**YOU AGREE TO INDEMNIFY DOE/NREL/ALLIANCE, AND ITS SUBSIDIARIES, AFFILIATES, OFFICERS, AGENTS, AND EMPLOYEES AGAINST ANY CLAIM OR DEMAND, INCLUDING REASONABLE ATTORNEYS\' FEES, RELATED TO YOUR USE OF THESE DATA. THESE DATA ARE PROVIDED BY DOE/NREL/Alliance ""AS IS"" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL DOE/NREL/ALLIANCE BE LIABLE FOR ANY SPECIAL, INDIRECT OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER, INCLUDING BUT NOT LIMITED TO CLAIMS ASSOCIATED WITH THE LOSS OF DATA OR PROFITS, WHICH MAY RESULT FROM AN ACTION IN CONTRACT, NEGLIGENCE OR OTHER TORTIOUS CLAIM THAT ARISES OUT OF OR IN**\n\n'",,"2017/01/13, 22:10:37",2475,GPL-3.0,0,502,"2022/09/27, 18:12:22",26,30,127,0,393,4,0.8,0.5948051948051948,"2022/03/07, 22:04:45",v0.2.2,0,18,false,,false,true,,,https://github.com/GridMod,,,,,https://avatars.githubusercontent.com/u/23124369?v=4,,, openmodelica-microgrid-gym,An OpenAI Gym Environment for Microgrids.,upb-lea,https://github.com/upb-lea/openmodelica-microgrid-gym.git,github,"reinforcement-learning,openai-gym,openai-gym-environments,machine-learning,control,simulation,modelica,openmodelica,power-electronics,power-systems,microgrid,energy-system-modeling,power-supply,smart-grids,engineering,electrical-engineering,python",Energy Distribution and Grids,"2022/02/24, 12:41:07",157,0,37,true,Modelica,Paderborn University - LEA,upb-lea,"Modelica,Python,Motoko,Makefile",,"b""==========================\nOpenModelica Microgrid Gym\n==========================\n\n| |build| |cov| |nbsp| |nbsp| |python| |pypi| |download| |nbsp| |nbsp| |license|\n| |doc| |whitepaper| |joss|\n\n.. |nbsp| unicode:: U+00A0 .. NO-BREAK SPACE\n\n.. |build| image:: https://github.com/upb-lea/openmodelica-microgrid-gym/actions/workflows/build_and_test.yml/badge.svg\n :target: https://github.com/upb-lea/openmodelica-microgrid-gym/actions/workflows/build_and_test.yml\n\n.. |cov| image:: https://codecov.io/gh/upb-lea/openmodelica-microgrid-gym/branch/master/graph/badge.svg\n :target: https://codecov.io/gh/upb-lea/openmodelica-microgrid-gym\n\n.. |license| image:: https://img.shields.io/github/license/upb-lea/openmodelica-microgrid-gym\n :target: LICENSE\n\n.. |python| image:: https://img.shields.io/pypi/pyversions/openmodelica-microgrid-gym\n :target: https://pypi.python.org/pypi/openmodelica_microgrid_gym\n\n.. |pypi| image:: https://img.shields.io/pypi/v/openmodelica_microgrid_gym\n :target: https://pypi.python.org/pypi/openmodelica_microgrid_gym\n\n.. |download| image:: https://img.shields.io/pypi/dw/openmodelica-microgrid-gym\n :target: https://pypistats.org/packages/openmodelica-microgrid-gym\n\n.. |doc| image:: https://img.shields.io/badge/doc-success-success\n :target: https://upb-lea.github.io/openmodelica-microgrid-gym\n\n.. |whitepaper| image:: https://img.shields.io/badge/arXiv-whitepaper-informational\n :target: https://arxiv.org/pdf/2005.04869.pdf\n \n.. 
|joss| image:: https://joss.theoj.org/papers/10.21105/joss.02435/status.svg\n :target: https://doi.org/10.21105/joss.02435\n\n\n\n.. figure:: https://github.com/upb-lea/openmodelica-microgrid-gym/raw/develop/docs/pictures/omg_flow.png\n\n**The OpenModelica Microgrid Gym (OMG) package is a software toolbox for the\nsimulation and control optimization of microgrids based on energy conversion by power electronic converters.**\n\nThe main characteristics of the toolbox are the plug-and-play grid design and simulation in OpenModelica as well as\nthe ready-to-go application of intuitive reinforcement learning (RL) approaches through a Python interface.\n\nThe OMG toolbox is built upon the `OpenAI Gym`_ environment definition framework.\nTherefore, the toolbox is specifically designed for running reinforcement\nlearning algorithms to train agents controlling power electronic converters in microgrids. Nevertheless, arbitrary classical control approaches can also be combined and tested using the OMG interface.\n\n.. _OpenAI Gym: https://gym.openai.com/\n\n* Free software: GNU General Public License v3\n* Documentation: https://upb-lea.github.io/openmodelica-microgrid-gym\n\n\nVideo Tutorial\n--------------\n\nFollowing is a short YouTube video introduction, to get a first impression of how to use OMG.\n\n\n\n- https://www.youtube.com/watch?v=rwBNFvCi_dY\n\nInstallation\n------------\n\n\nInstall Python Environment\n^^^^^^^^^^^^^^^^^^^^^^^^^^\n\nThis is the short installation guide for Windows and Linux. OpenModelica_ support for Mac is very limited; the developers suggest installing it in a Linux VM. For this reason, running OMG in a Linux VM is strongly recommended for Mac users!\n\nSince it is not possible to install PyFMI_, a package which is necessary for the communication between the python interface and the environment, via pip, we recommend installing this package in advance in a conda environment.\nAs of now, only Windows and Linux are supported officially.\n\n- If conda is NOT installed on your PC, install miniconda_ for python 3.8\n- Create a new conda environment (e.g. in PyCharm)\n- Install PyFMI from the conda-forge channel in the terminal::\n\n $ conda install -c conda-forge pyfmi\n\n\n- Install OpenModelica MicrogridGym from PyPI (recommended)::\n\n $ pip install openmodelica_microgrid_gym\n\n.. _OpenModelica: https://openmodelica.org/download/download-mac\n.. _miniconda: https://conda.io/en/latest/miniconda.html\n.. _PyFMI: https://github.com/modelon-community/PyFMI\n\nInstallation of OpenModelica\n^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n\nOMG was created using OMEdit_ v1.16.\n\nIn case of installation issues you can resort to their pre-built `virtual machine`_.\n\n.. _OMEdit: https://openmodelica.org/download/download-windows\n.. _virtual machine: https://openmodelica.org/download/virtual-machine\n\nGetting started\n---------------\n\nThe environment is initialized and run like any other OpenAI Gym environment:\n\n.. code-block:: python\n\n import gym\n\n if __name__ == '__main__':\n env = gym.make('openmodelica_microgrid_gym:ModelicaEnv-v1',\n max_episode_steps=None,\n net='../net/net.yaml',\n model_path='../omg_grid/grid.network.fmu')\n\n env.reset()\n for _ in range(1000):\n env.render()\n env.step(env.action_space.sample()) # take a random action\n env.close()\n\n\n\n\nOMG uses the `FMI standard`_ for the exchange of the model between OpenModelica and Python.\n\n.. _FMI standard: https://fmi-standard.org/\n\nAn example network consisting of two inverters, three filters and an inductive load:\n\n.. 
figure:: https://github.com/upb-lea/openmodelica-microgrid-gym/raw/master/docs/pictures/omedit.jpg\n\nYou can either use one of the provided FMUs (Windows and Linux, 64-bit, both included in grid.network.fmu) or create your own by running::\n\n openmodelica_microgrid_gym\\fmu> omc create_fmu.mos\n\nWindows users might need to open the terminal from OpenModelica by clicking 'tools' => 'OpenModelica Command Prompt' to make sure that the command 'omc' gets recognized.\n\nRunning ``staticctrl.py`` starts a simulation with a manually tuned cascaded PIPI controller\n\n.. figure:: https://github.com/upb-lea/openmodelica-microgrid-gym/raw/master/docs/pictures/control.jpg\n :scale: 70%\n :align: center\n\nA safe Bayesian approach of a reinforcement learning agent is provided under examples/berkamkamp.py.\n\n.. figure:: https://github.com/upb-lea/openmodelica-microgrid-gym/raw/master/docs/pictures/kp_kp_J.png\n :figwidth: 60%\n :align: center\n\nUsing pytest\n^^^^^^^^^^^^\n\nOMG provides a wide range of tests to ensure the toolbox keeps working correctly after changes are made.\nOn some windows machines, the tests can only be started from the terminal via 'pytest'.\n\nThe standard test OS for development is Linux. In some cases, we have noticed that test_modelica.py on windows PCs might throw an error.\nSince on Linux everything works fine, it seems to be a numerical issue connected with the FMUs.\n\n\nCitation & white paper\n----------------------\n\nPlease find a white paper on the OMG toolbox including an exemplary usage scenario here:\n\n- https://arxiv.org/abs/2005.04869\n\nPlease use the following BibTeX entry for citing us::\n\n @article{OMG-code2020,\n title = {OMG: A Scalable and Flexible Simulation and Testing Environment Toolbox for Intelligent Microgrid Control},\n author = {Stefan Heid and Daniel Weber and Henrik Bode and Eyke Hüllermeier and Oliver Wallscheid},\n year = {2020},\n doi = {10.21105/joss.02435},\n url = {https://doi.org/10.21105/joss.02435},\n publisher = {The Open Journal},\n volume = {5},\n number = {54},\n pages = {2435},\n journal = {Journal of Open Source Software}\n }\n\n @article{OMG-whitepaper2020,\n title={Towards a Scalable and Flexible Simulation and\n Testing Environment Toolbox for Intelligent Microgrid Control},\n author={Henrik Bode and Stefan Heid and Daniel Weber and Eyke Hüllermeier and Oliver Wallscheid},\n year={2020},\n eprint={http://arxiv.org/abs/2005.04869},\n archivePrefix={arXiv},\n primaryClass={eess.SY}\n }\n\n\nContributing\n------------\n\nPlease refer to the `contribution guide`_.\n\n.. _`contribution guide`: https://github.com/upb-lea/openmodelica-microgrid-gym/blob/master/CONTRIBUTING.rst\n\n\nCredits\n-------\n\nThis package was created with Cookiecutter_ and the `audreyr/cookiecutter-pypackage`_ project template.\n\n.. _Cookiecutter: https://github.com/audreyr/cookiecutter\n.. 
_`audreyr/cookiecutter-pypackage`: https://github.com/audreyr/cookiecutter-pypackage\n""",",https://arxiv.org/pdf/2005.04869.pdf\n,https://doi.org/10.21105/joss.02435\n\n\n\n,https://arxiv.org/abs/2005.04869\n\nPlease,https://doi.org/10.21105/joss.02435,http://arxiv.org/abs/2005.04869","2020/04/15, 10:45:50",1288,GPL-3.0,0,610,"2023/04/18, 12:55:08",13,54,125,1,190,1,0.8,0.5157894736842106,"2022/02/02, 10:08:04",v0.4.0,0,5,false,,false,true,,,https://github.com/upb-lea,https://ei.uni-paderborn.de/en/lea/,"Paderborn, Germany",,,https://avatars.githubusercontent.com/u/55782224?v=4,,, OpenDSS,An electric power Distribution System Simulator for supporting distributed resource integration and grid modernization efforts.,projects,,custom,,Energy Distribution and Grids,,,,,,,,,,https://sourceforge.net/projects/electricdss/,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,, PowerDynamics.jl,Provides all the tools you need to create a dynamic power grid model and analyze it.,JuliaEnergy,https://github.com/JuliaEnergy/PowerDynamics.jl.git,github,,Energy Distribution and Grids,"2023/05/30, 14:12:12",92,0,15,true,Julia,JuliaEnergy,JuliaEnergy,"Julia,Makefile",https://juliaenergy.github.io/PowerDynamics.jl/latest/,"b'[![codecov](https://codecov.io/gh/JuliaEnergy/PowerDynamics.jl/branch/main/graph/badge.svg)](https://codecov.io/gh/JuliaEnergy/PowerDynamics.jl)\n[![Stable Docs](https://img.shields.io/badge/docs-stable-blue.svg)](https://juliaenergy.github.io/PowerDynamics.jl/stable/)\n[![Dev Docs](https://img.shields.io/badge/docs-dev-blue.svg)](https://juliaenergy.github.io/PowerDynamics.jl/dev/)\n\n# PowerDynamics.jl\n\n\nPowerDynamics.jl: An Open-Source Framework Written in Julia for Dynamic Power Grid Modeling and Analysis. [Please check out the Docs.](https://juliaenergy.github.io/PowerDynamics.jl/stable/)\n\n## Citation\n\nIf you use PowerDynamics.jl in your research publications, [please cite our paper](https://www.sciencedirect.com/science/article/pii/S2352711021001345).\n\n```latex\n@article{PowerDynamics2022,\n title={PowerDynamics.jl--An experimentally validated open-source package for the dynamical analysis of power grids},\n author={Plietzsch, Anton and Kogler, Raphael and Auer, Sabine and Merino, Julia and Gil-de-Muro, Asier and Li{\\ss}e, Jan and Vogel, Christina and Hellmann, Frank},\n journal = {SoftwareX},\n volume = {17},\n pages = {100861},\n year = {2022},\n publisher={Elsevier}\n}\n```\n'",,"2018/10/10, 13:31:25",1841,GPL-3.0,4,336,"2023/06/23, 10:36:11",12,148,198,5,124,3,2.0,0.7293447293447293,"2023/05/30, 14:48:19",v3.1.5,1,17,false,,false,false,,,https://github.com/JuliaEnergy,,,,,https://avatars.githubusercontent.com/u/42609159?v=4,,, InfrastructureSystems.jl,Provides utilities to support data models for infrastructure modeling in NREL-SIIP.,NREL-SIIP,https://github.com/NREL-Sienna/InfrastructureSystems.jl.git,github,"julia,nrel",Energy Distribution and Grids,"2023/10/12, 22:15:33",27,0,0,true,Julia,NREL-Sienna,NREL-Sienna,Julia,,"b""# InfrastructureSystems.jl\n\n![Master - 
CI](https://github.com/NREL-Sienna/InfrastructureSystems.jl/workflows/Master%20-%20CI/badge.svg)\n[![codecov](https://codecov.io/gh/NREL-Sienna/InfrastructureSystems.jl/branch/master/graph/badge.svg)](https://codecov.io/gh/NREL-Sienna/InfrastructureSystems.jl)\n[![Documentation](https://github.com/NREL-Sienna/InfrastructureSystems.jl/workflows/Documentation/badge.svg)](https://nrel-sienna.github.io/InfrastructureSystems.jl/stable/)\n[![DOI](https://zenodo.org/badge/202787784.svg)](https://zenodo.org/badge/latestdoi/202787784)\n\nThe `InfrastructureSystems.jl` package provides utilities to support data models for infrastructure modeling in [NREL-Sienna](https://github.com/NREL-Sienna). The `InfrastructureSystems.jl` package is used to support functionalities in [PowerSystems.jl](https://github.com/NREL-Sienna/PowerSystems.jl), [PowerSimulations.jl](https://github.com/NREL-Sienna/PowerSimulations.jl), [PowerSimulationsDynamics.jl](https://github.com/NREL-Sienna/PowerSimulationsDynamics.jl) and other modeling packages in the Sienna ecosystem. \n\nThis package is only compatible with Julia 1.6 or higher.\n\n## Development\n\nContributions to the development and enhancement of InfrastructureSystems are welcome. Please see [CONTRIBUTING.md](https://github.com/NREL-Sienna/InfrastructureSystems.jl/blob/master/CONTRIBUTING.md) for code contribution guidelines.\n\n## License\n\nInfrastructureSystems is released under a BSD [license](https://github.com/NREL-Sienna/InfrastructureSystems.jl/blob/master/LICENSE). InfrastructureSystems has been developed as part of the Scalable Integrated Infrastructure Planning (SIIP) initiative at the U.S. Department of Energy's National Renewable Energy Laboratory ([NREL](https://www.nrel.gov/))\n""",",https://zenodo.org/badge/latestdoi/202787784","2019/08/16, 19:31:11",1531,BSD-3-Clause,54,1473,"2023/09/12, 04:41:16",3,283,310,20,43,0,0.8,0.4500419815281276,"2023/09/12, 04:07:23",v1.22.0,0,14,false,,false,true,,,https://github.com/NREL-Sienna,https://www.nrel.gov/analysis/sienna.html,"Golden, CO",,,https://avatars.githubusercontent.com/u/44615001?v=4,,, openleadr,Open Automated Demand Response (OpenADR) is an open and interoperable information exchange model and emerging smart grid standard.,projects,,custom,,Energy Distribution and Grids,,,,,,,,,,https://www.lfenergy.org/projects/openleadr/,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,, GridCal,Aims to be a complete platform for power systems research and simulation.,SanPen,https://github.com/SanPen/GridCal.git,github,"python,powerflow,electrical-engineering,electrical,helm,newton-raphson,latin-hypercube,monte-carlo-simulation,cim,comon-information-model,stochastic-power-flow,multi-terminal,acdc,optimization,power-systems",Energy Distribution and Grids,"2023/08/21, 09:26:51",348,7,63,true,Python,,,"Python,Jupyter Notebook,HTML,Shell,Batchfile",https://www.advancedgridinsights.com/gridcal,"b'[![Codacy Badge](https://api.codacy.com/project/badge/Grade/75e794c9bcfd49bda1721b9ba8f6c790)](https://app.codacy.com/app/SanPen/GridCal?utm_source=github.com&utm_medium=referral&utm_content=SanPen/GridCal&utm_campaign=Badge_Grade_Dashboard)\n[![Documentation Status](https://readthedocs.org/projects/gridcal/badge/?version=latest)](https://gridcal.readthedocs.io/en/latest/?badge=latest) [![Build 
Status](https://travis-ci.org/SanPen/GridCal.svg?branch=master)](https://travis-ci.org/SanPen/GridCal)\n[![DOI](https://www.zenodo.org/badge/49583206.svg)](https://www.zenodo.org/badge/latestdoi/49583206)\n[![Downloads](https://static.pepy.tech/personalized-badge/gridcal?period=total&units=abbreviation&left_color=grey&right_color=green&left_text=Downloads)](https://pepy.tech/project/gridcal)\n\n# What is this?\n\n![](https://github.com/SanPen/GridCal/blob/master/pics/GridCal_banner.png)\n\n![](https://github.com/SanPen/GridCal/blob/master/pics/GridCal.png)\n\nThis software aims to be a complete platform for power systems research and simulation.\n\n- [Watch the video](https://youtu.be/SY66WgLGo54)\n- Check out the [Documentation](https://gridcal.readthedocs.io/en/latest/about.html)\n- Explore the [Tutorials](https://gridcal.readthedocs.io/en/latest/tutorials/tutorials_module.html)\n- Submit questions or comments to our [form](https://forms.gle/MpjJAntAwZiLwE6B6)\n- Join the [Discord GridCal community](https://discord.com/invite/dzxctaNbvu)\n\n# Installation\n\nYou can choose to install GridCal through pip or just get a standalone setup ready to run.\n\n- From your python distribution on any OS: `pip install GridCal`\n\n- [GridCal standalone for windows x64](https://www.advancedgridinsights.com/gridcal)\n\n\nFor more options and details follow the\n[installation instructions](https://gridcal.readthedocs.io/en/latest/getting_started/install.html).\n\n\n## Execution\n\nIf you have just installed GridCal in your python distribution, \nyou can call the GUI with the following command:\n\n`python3 -c ""from GridCal.ExecuteGridCal import run; run()""`\n\n### Troubleshooting Ubuntu (maybe other Linux distributions too)\n\nUnder Ubuntu you may need to install xcb via `sudo apt-get install libxcb-xinerama0`; this will solve the following error:\n\n```\nWarning: Ignoring XDG_SESSION_TYPE=wayland on Gnome. Use QT_QPA_PLATFORM=wayland to run on Wayland anyway.\n```\n\n## Running tests\n\n python3 -m venv venv\n venv/bin/python -m pip install --upgrade -r requirements_venv.txt\n venv/bin/python -m tox\n\nFor detailed instructions, follow the\n[instructions](https://gridcal.readthedocs.io/en/latest/getting_started.html)\nfrom the project\'s documentation.\n\n\n\n# Batteries included\n\nIn an effort to ease the simulation and construction of grids, \nwe have included extra materials to work with. 
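For instance, one of the bundled grids can be opened and solved entirely from a script. The following is a rough sketch assuming a GridCal 4.x-style scripting API (`FileOpen`, `PowerFlowOptions`, `PowerFlowDriver`) and an illustrative file name from the Grids_and_profiles folder; exact module paths and file names may differ between versions:

```python
from GridCal.Engine import FileOpen, PowerFlowOptions, PowerFlowDriver, SolverType

# open one of the bundled example grids (the file name here is illustrative)
main_circuit = FileOpen('Grids_and_profiles/grids/IEEE 30 Bus.gridcal').open()

# run a Newton-Raphson power flow on the loaded circuit
options = PowerFlowOptions(SolverType.NR, verbose=False)
power_flow = PowerFlowDriver(main_circuit, options)
power_flow.run()

print('Converged:', power_flow.results.converged)
print('Bus voltage magnitudes:', abs(power_flow.results.voltage))
```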
These extra materials are included in the standalone setups.\n\n[Here](https://github.com/SanPen/GridCal/tree/master/Grids_and_profiles) you can find:\n- Load profiles for your projects\n- Standard IEEE grids as well as grids from open projects\n- [Equipment catalogue](https://gridcal.readthedocs.io/en/latest/data_sheets.html) (Wires, Cables and Transformers) ready to use in GridCal\n\n\n## Tutorials and examples\n\n- [Tutorials](https://gridcal.readthedocs.io/en/latest/tutorials/tutorials_module.html)\n\n- [Cloning the repository (video)](https://youtu.be/59W_rqimB6w)\n\n- [Standalone GridCal setup (video)](https://youtu.be/SY66WgLGo54)\n\n- [Making a grid with profiles (video)](https://youtu.be/H2d_2bMsIS0)\n\n- [GridCal PlayGround repository](https://github.com/yasirroni/GridCalPlayground) with some notebooks and examples.\n\n- The [tests](https://github.com/SanPen/GridCal/tree/master/src/tests) may serve as a valuable source of examples.\n\n\n\n\n# Features overview\n\nIt is written in pure Python and works on Windows, Linux and OSX.\n\nSome features you\'ll already find are:\n\n- Compatible with other formats:\n - **Import** (Drag & Drop)\n - CIM (Common Information Model v16)\n - PSS/e RAW versions 29, 30, 32, 33 and 34.\n - Matpower (might not be fully compatible, notify me if not).\n - DigSilent .DGS (not fully compatible: only positive sequence and devices like loads, generators, etc.)\n \n - **Export**\n - Zip file `.gridcal` with CSV inside (fastest, normal GridCal format) \n - Sqlite\n - Excel\n - Custom JSON\n - CIM (Common Information Model v16)\n\n- **Power flow**:\n - State of the art multi-terminal AC/DC Newton Raphson in power and current equations.\n - Newton Raphson Iwamoto (optimal acceleration).\n - Fast Decoupled Power Flow\n - AC/DC multi-terminal Levenberg-Marquardt (Works very well with large ill-conditioned grids)\n - Holomorphic Embedding Power Flow (Unicorn under investigation...)\n - DC approximation.\n - Linear AC approximation.\n \n- **Optimal power flow (OPF)** and generation dispatch:\n - Linear (DC) with losses.\n - Linear (AC) with losses.\n - Loss-less simple generation dispatch. 
\n - All the modes can split the runs into hours, days, weeks or months!\n\n- **Time series** with profiles for all the objects\' physical magnitudes.\n\n- **PTDF** approximated branch flow time series for super fast estimation of the flows.\n\n- Bifurcation point with predictor-corrector Newton-Raphson.\n\n- **Monte Carlo / Latin Hypercube** stochastic power flow based on the input profiles.\n\n- **Blackout cascading** in simulation and step by step mode.\n\n- Three-phase and unbalanced **short circuit**.\n\n- Includes the Z-I-P load model, which means that the power flows can handle both power and current.\n\n- The ability to handle island grids in all the simulation modes.\n\n- **Profile editor** and importer from Excel and CSV.\n\n- **Grid elements\' analysis** to discover data problems.\n\n- **Overhead line construction** from wire scheme.\n\n- Device **templates** (lines and transformers).\n\n- **Grid reduction** based on branch type and filtering by impedance values\n\n- **Export** the schematic in SVG and PNG formats.\n\n[Check out the documentation](https://gridcal.readthedocs.io) to learn more and to get started.\n\n# Collaborators\n\n- Michel Lavoie (Transformer automation)\n- Bengt Lüers (Better testing)\n- Josep Fanals Batllori (HELM, Sequence Short circuit)\n- Manuel Navarro Catalán (Better documentation)\n- Paul Schultz (Grid Generator)\n- Andrés Ramiro (Optimal net transfer capacity)\n- Ameer Carlo Lubang (Sequence short-circuit)\n\n# Contact\n\nSend feedback and requests to [santiago@gridcal.org](mailto:santiago@gridcal.org).\n\n'",,"2016/01/13, 15:40:10",2842,LGPL-3.0,191,2814,"2023/08/21, 09:26:44",23,102,180,16,65,1,0.0,0.1534419337887546,"2023/04/22, 16:30:54",4.7.0,0,15,false,,false,true,"SanPen/Grid2OpBackends,yasirroni/GridCalPlayground,jinningwang/andes,CURENT2/andes,CURENT/andes,SanPen/GridCalTutorials,SanPen/GridCal-verification",,,,,,,,,, pyehub,"A Python-based, modular and nestable implementation of the Energy Hub model (balancing demand and supply, system capacity sizing and network flows using Mixed-Integer Linear Programming).",energyincities,https://gitlab.com/energyincities/python-ehub,gitlab,,Energy Distribution and Grids,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,, origin,A set of toolkits that together provide a system for issuance and management of Energy Attribute Certificates.,energywebfoundation,https://github.com/energywebfoundation/origin.git,github,,Energy Distribution and Grids,"2023/02/14, 13:17:32",97,2,10,true,TypeScript,Energy Web Foundation,energywebfoundation,"TypeScript,Solidity,JavaScript,Shell,Makefile,Dockerfile,HTML",https://energyweb.org/technology/ew-origin/,"b'
# EnergyWeb Origin
\n\n**Origin** is a set of toolkits that together provide a system for issuance and management of Energy Attribute Certificates (EACs). This repository is an entry point to the Origin systems. It aims to briefly explain the whole system and to give you pointers on where to explore next.\n\n
\n\n:construction: Documentation available at [https://energy-web-foundation-origin.readthedocs-hosted.com/en/latest/](https://energy-web-foundation-origin.readthedocs-hosted.com/en/latest/) :construction:\n\n## Table of Contents\n\n- [Table of Contents](#table-of-contents)\n- [Packages](#packages)\n - [SDK Releases](#sdk-releases)\n - [Applications, Infrastructure and Demo](#applications-infrastructure-and-demo)\n - [Packages types](#packages-types)\n - [Stable](#stable)\n - [Canary](#canary)\n - [Preview](#preview)\n- [Preparation](#preparation)\n- [Installation](#installation)\n- [Build](#build)\n- [Test](#test)\n- [Run demo](#run-demo)\n - [Heroku environment provisioning](#heroku-environment-provisioning)\n- [Energy Attribute Certificates](#energy-attribute-certificates)\n- [Deployment](#deployment)\n- [Contribution guidelines](#contribution-guidelines)\n\n## Packages\n\n### SDK Releases\n\n| Package | Stable | Canary | Description |\n| ------------------------------------------------------------------------------------------------------------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | --------------------------------------------------------------------------------------- |\n| [`@energyweb/origin-device-registry-api`](/packages/devices/origin-device-registry-api) | [![npm](https://img.shields.io/npm/v/@energyweb/origin-device-registry-api.svg)](https://www.npmjs.com/package/@energyweb/origin-device-registry-api) | [![npm](https://img.shields.io/npm/v/@energyweb/origin-device-registry-api/canary)](https://www.npmjs.com/package/@energyweb/origin-device-registry-api) | Generic implementation of API working with Origin device registry |\n| [`@energyweb/origin-device-registry-irec-local-api`](/packages/devices/origin-device-registry-irec-local-api) | [![npm](https://img.shields.io/npm/v/@energyweb/origin-device-registry-irec-local-api.svg)](https://www.npmjs.com/package/@energyweb/origin-device-registry-irec-local-api) | [![npm](https://img.shields.io/npm/v/@energyweb/origin-device-registry-irec-local-api/canary)](https://www.npmjs.com/package/@energyweb/origin-device-registry-irec-local-api) | API for local version of I-REC compatible registry |\n| [`@energyweb/origin-energy-api`](/packages/devices/origin-energy-api) | [![npm](https://img.shields.io/npm/v/@energyweb/origin-energy-api.svg)](https://www.npmjs.com/package/@energyweb/origin-energy-api) | [![npm](https://img.shields.io/npm/v/@energyweb/origin-energy-api/canary)](https://www.npmjs.com/package/@energyweb/origin-energy-api) | API for Smart meter reads |\n| [`@energyweb/origin-organization-irec-api`](/packages/devices/origin-organization-irec-api) | [![npm](https://img.shields.io/npm/v/@energyweb/origin-organization-irec-api.svg)](https://www.npmjs.com/package/@energyweb/origin-organization-irec-api) | [![npm](https://img.shields.io/npm/v/@energyweb/origin-organization-irec-api/canary)](https://www.npmjs.com/package/@energyweb/origin-organization-irec-api) | API for I-REC based organizations |\n| [`@energyweb/origin-backend`](/packages/origin-backend) | [![npm](https://img.shields.io/npm/v/@energyweb/origin-backend.svg)](https://www.npmjs.com/package/@energyweb/origin-backend) | 
[![npm](https://img.shields.io/npm/v/@energyweb/origin-backend/canary)](https://www.npmjs.com/package/@energyweb/origin-backend) | Example backend necessary for running Origin |\n| [`@energyweb/issuer`](/packages/traceability/issuer) | [![npm](https://img.shields.io/npm/v/@energyweb/issuer.svg)](https://www.npmjs.com/package/@energyweb/issuer) | [![npm](https://img.shields.io/npm/v/@energyweb/issuer/canary)](https://www.npmjs.com/package/@energyweb/issuer) | Energy Attribute Certificates Issuer Module |\n| [`@energyweb/issuer-api`](/packages/traceability/issuer-api) | [![npm](https://img.shields.io/npm/v/@energyweb/issuer-api.svg)](https://www.npmjs.com/package/@energyweb/issuer-api) | [![npm](https://img.shields.io/npm/v/@energyweb/issuer-api/canary)](https://www.npmjs.com/package/@energyweb/issuer-api) | NestJS module for interacting with renewable energy certificates |\n| [`@energyweb/issuer-irec-api`](/packages/traceability/issuer-irec-api) | [![npm](https://img.shields.io/npm/v/@energyweb/issuer-irec-api.svg)](https://www.npmjs.com/package/@energyweb/issuer-irec-api) | [![npm](https://img.shields.io/npm/v/@energyweb/issuer-irec-api/canary)](https://www.npmjs.com/package/@energyweb/issuer-irec-api) | NestJS module for interacting with renewable energy certificates with IREC connectivity |\n| [`@energyweb/exchange`](/packages/trade/exchange) | [![npm](https://img.shields.io/npm/v/@energyweb/exchange.svg)](https://www.npmjs.com/package/@energyweb/exchange) | [![npm](https://img.shields.io/npm/v/@energyweb/exchange/canary)](https://www.npmjs.com/package/@energyweb/exchange) | A service project hosting order book based exchange |\n| [`@energyweb/exchange-irec`](/packages/trade/exchange-irec) | [![npm](https://img.shields.io/npm/v/@energyweb/exchange-irec.svg)](https://www.npmjs.com/package/@energyweb/exchange-irec) | [![npm](https://img.shields.io/npm/v/@energyweb/exchange-irec/canary)](https://www.npmjs.com/package/@energyweb/exchange-irec) | A service project hosting order book based I-REC specific exchange |\n| [`@energyweb/exchange-core`](/packages/trade/exchange-core) | [![npm](https://img.shields.io/npm/v/@energyweb/exchange-core.svg)](https://www.npmjs.com/package/@energyweb/exchange-core) | [![npm](https://img.shields.io/npm/v/@energyweb/exchange-core/canary)](https://www.npmjs.com/package/@energyweb/exchange-core) | Generic EACs order book product and matching |\n| [`@energyweb/exchange-core-irec`](/packages/trade/exchange-core-irec) | [![npm](https://img.shields.io/npm/v/@energyweb/exchange-core-irec.svg)](https://www.npmjs.com/package/@energyweb/exchange-core-irec) | [![npm](https://img.shields.io/npm/v/@energyweb/exchange-core-irec/canary)](https://www.npmjs.com/package/@energyweb/exchange-core-irec) | An IREC based EACs product and matching |\n| [`@energyweb/exchange-io-erc1888`](/packages/trade/exchange-io-erc1888) | [![npm](https://img.shields.io/npm/v/@energyweb/exchange-io-erc1888.svg)](https://www.npmjs.com/package/@energyweb/exchange-io-erc1888) | [![npm](https://img.shields.io/npm/v/@energyweb/exchange-io-erc1888/canary)](https://www.npmjs.com/package/@energyweb/exchange-io-erc1888) | ERC1888 withdrawal/deposit processing for exchange |\n| [`@energyweb/utils-general`](/packages/utils-general) | [![npm](https://img.shields.io/npm/v/@energyweb/utils-general.svg)](https://www.npmjs.com/package/@energyweb/utils-general) | [![npm](https://img.shields.io/npm/v/@energyweb/utils-general/canary)](https://www.npmjs.com/package/@energyweb/utils-general) | General Utilities |\n| 
[`@energyweb/origin-ui-core`](/packages/ui/libs/ui/core) | [![npm](https://img.shields.io/npm/v/@energyweb/origin-ui-core.svg)](https://www.npmjs.com/package/@energyweb/origin-ui-core) | [![npm](https://img.shields.io/npm/v/@energyweb/origin-ui-core/canary)](https://www.npmjs.com/package/@energyweb/origin-ui-core) | React components library for building Origin marketplace user interface |\n| [`@energyweb/origin-ui-localization`](/packages/ui/libs/ui/localization) | [![npm](https://img.shields.io/npm/v/@energyweb/origin-ui-localization.svg)](https://www.npmjs.com/package/@energyweb/origin-ui-localization) | [![npm](https://img.shields.io/npm/v/@energyweb/origin-ui-localization/canary)](https://www.npmjs.com/package/@energyweb/origin-ui-localization) | Localization library for building Origin marketplace user interface |\n| [`@energyweb/origin-ui-theme`](/packages/ui/libs/ui/theme) | [![npm](https://img.shields.io/npm/v/@energyweb/origin-ui-theme.svg)](https://www.npmjs.com/package/@energyweb/origin-ui-theme) | [![npm](https://img.shields.io/npm/v/@energyweb/origin-ui-theme/canary)](https://www.npmjs.com/package/@energyweb/origin-ui-theme) | Material-UI theme configuration and styling utilities |\n| [`@energyweb/origin-ui-utils`](/packages/ui/libs/ui/utils) | [![npm](https://img.shields.io/npm/v/@energyweb/origin-ui-utils.svg)](https://www.npmjs.com/package/@energyweb/origin-ui-utils) | [![npm](https://img.shields.io/npm/v/@energyweb/origin-ui-utils/canary)](https://www.npmjs.com/package/@energyweb/origin-ui-utils) | UI general utilities |\n\n### Applications, Infrastructure and Demo\n\n| Package | Description |\n| ------------------------------------------------------------------------------ | --------------------------------------------------------------- |\n| [`@energyweb/origin-backend-irec-app`](/packages/apps/origin-backend-irec-app) | Bootstrap project for Origin API that uses I-REC API connection |\n| [`@energyweb/origin-ui`](/packages/ui/apps/origin-ui) | Root of UI for Origin |\n| [`@energyweb/migrations-irec`](/packages/tools/migrations-irec) | Deployment and configuration utilities |\n\n### Packages types\n\nThe Origin monorepo produces 3 types of packages, each meant for a different use case:\n\n#### Stable\n\nStable Origin SDK packages are created during `release` branch builds.\n\nInstall using `yarn add @energyweb/{package}`\n\n#### Canary\n\nCanary packages are created during `master` branch builds. Canary reflects the current state of the `master` branch; these should be working versions, considered `alpha` quality.\n\nInstall using `yarn add @energyweb/{package}@canary`\n\n#### Preview\n\nPreview packages are built on a special `preview` branch; this is mostly used as an internal tool for tests, demos, and discussions.\n\nInstall using `yarn add @energyweb/{package}@preview`\n\n## Preparation\n\n1. Make sure you are using Node 14.x.x\n2. Make sure you have the latest `@microsoft/rush` package manager installed.\n\n```shell\nnpm install -g @microsoft/rush\n```\n\n3. Make sure you have a Java runtime installed\n4. Install [Postgres](https://www.postgresql.org/download/) 12.x+ and create a new database named `origin`.\n\nWe recommend using a Docker based setup as follows (requires the psql command line tool to be installed):\n\n```\ndocker pull postgres\ndocker run --name origin-postgres -e POSTGRES_PASSWORD=postgres -e POSTGRES_DB=origin -d -p 5432:5432 postgres\n```\n\n5. 
Make sure you have created a `.env` file in the root of the monorepo and that all necessary variables are set.\n Use [`.env.example`](.env.example) as an example of how the `.env` file should look.\n\n6. Create InfluxDB to store smart meter readings:\n\n```\ndocker run --rm --env-file ./.env -v $PWD/influxdb-local:/var/lib/influxdb influxdb:1.8 /init-influxdb.sh\n```\n\nRun the InfluxDB instance:\n\n```\ndocker run --name energy-influxdb --env-file ./.env -d -p 8086:8086 -v $PWD/influxdb-local:/var/lib/influxdb -v $PWD/influxdb.conf:/etc/influxdb/influxdb.conf:ro influxdb:1.8\n```\n\n7. For custom DB credentials, ports, DB name, etc., refer to https://github.com/energywebfoundation/origin/tree/master/packages/apps/origin-backend-irec-app#development\n\n## Installation\n\n```shell\nrush update\n```\n\n## Build\n\n```shell\nrush build\n```\n\n## Test\n\n```shell\nrush test:e2e\n```\n\n## Run demo\n\nAfter you have created the `.env` file, installed dependencies (`rush install`) and completed the build (`rush build`), run the following command:\n\n```shell\nrush run:origin\n```\n\nVisit the UI at: http://localhost:3000.\n\n### Heroku environment provisioning\n\nFor fast deployment to Heroku you can run the available script `provision-heroku-origin`\n\n```\nPREFIX= STAGE= TEAM= ./provision-heroku-origin.sh\n```\n\nThe naming convention for apps is:\n\n```\n${PREFIX}-origin-ui-${STAGE}\n${PREFIX}-origin-api-${STAGE}\n```\n\nFor example, in order to create `ptt-origin-ui-stable`, run the script with:\n\n```\nPREFIX=ptt STAGE=stable TEAM= ./provision-heroku-origin.sh\n```\n\nNote: This script assumes that the Heroku CLI tool is installed and that you are logged in. See https://devcenter.heroku.com/articles/heroku-cli\n\n## Energy Attribute Certificates\n\nAn Energy Attribute Certificate (EAC) is an official document which guarantees that produced energy comes from a renewable source. There are different standards that regulate how data is stored and validated. In Europe, this document is called Guarantee of Origin (GO), in North America, it\'s called Renewable Energy Certificate (REC), and in parts of Asia, Africa, the Middle East, and Latin America the governing standard is the International REC (I-REC). Standards do vary, but they all share the same core principles.\n\nThe main purpose of EACs is to act as an accounting vehicle to prove that consumed energy came from a renewable source. EACs are mostly used to address sustainability reports regarding [Scope 2 emissions](https://en.wikipedia.org/wiki/Carbon_emissions_reporting#Scope_2:_Electricity_indirect_GHG_emissions).\n\n## Deployment\n\nFor deployment instructions please refer to the [Deployment](https://github.com/energywebfoundation/origin/wiki/Origin-Deployment) wiki page.\n\n## Contribution guidelines\n\nIf you want to contribute to Origin, be sure to follow classic open source contribution guidelines (described below).\n\n1. Committing a change\n - Fork the repository\n - Make a change to repo code\n - Commit the change to the `master` branch\n2. 
Pull request\n - Open a pull request from your fork\'s `master` branch\n - Request code reviews from [@JosephBagaric](https://github.com/JosephBagaric), [@kosecki123](https://github.com/kosecki123), [@alexworker23](https://github.com/alexworker23) or [@ioncreature](https://github.com/ioncreature)\n - Once the PR is approved and the build passes, it will be merged into the master branch\n'",,"2019/08/14, 14:06:18",1533,MIT,3,7761,"2023/08/03, 08:09:57",47,3290,3332,17,83,17,1.6,0.7073089145342031,"2022/03/16, 12:19:56",ui-packages@0.4.1,2,36,false,,false,true,"energywebfoundation/pjm-origin-monorepo,energywebfoundation/origin",,https://github.com/energywebfoundation,http://energyweb.org/,Germany,,,https://avatars.githubusercontent.com/u/32361864?v=4,,, Grid Singularity Energy Exchange,"An interface to download and deploy interconnected, grid-aware energy marketplaces.",gridsingularity,https://github.com/gridsingularity/gsy-e.git,github,,Energy Distribution and Grids,"2023/10/24, 14:04:46",75,0,8,true,Python,Grid Singularity,gridsingularity,"Python,HTML,SCSS,JavaScript,CSS,Shell,Makefile,Dockerfile",,"b'====================================\nGrid Singularity Energy Exchange\n====================================\n\n.. image:: https://codecov.io/gh/gridsingularity/gsy-e/branch/master/graph/badge.svg?token=XTWK3DAKUA\n :target: https://codecov.io/gh/gridsingularity/gsy-e\n\nThe Grid Singularity Energy Exchange Engine is developed by `Grid Singularity `__ as an interface (`Singularity Map `__) and open source codebase (see `Licensing `__) to model, simulate, optimize and (coming soon) download and deploy interconnected, grid-aware energy marketplaces.\nGrid Singularity has been proclaimed the `World Tech Pioneer by the World Economic Forum `__ and is also known as a co-founder of the `Energy Web Foundation `__ that gathers leading energy corporations globally co-developing a shared blockchain-based platform.\n\nCode of Conduct\n===============\nPlease refer to: https://github.com/gridsingularity/gsy-e/blob/master/CODE_OF_CONDUCT.md\n\nHow to contribute:\n==================\nPlease refer to: https://github.com/gridsingularity/gsy-e/blob/master/CONTRIBUTING.md\n\n\nBasic setup\n===========\n\n(For instructions using `Docker`_ see below)\n\nAfter cloning this project, set up a Python 3.8 virtualenv and install `fabric3`_::\n\n ~# pip install fabric3\n \nWithout using virtualenv (e.g.
using conda envs) you can just install gsy-e using\n\n ~# pip install -e .\n\nThe Simulation\n==============\n\nRunning the simulation\n----------------------\n\nAfter installation, the simulation can be run with the following command::\n\n ~# gsy-e run\n\nThere are various options available to control the simulation run.\nHelp on these is available via::\n\n ~# gsy-e run --help\n\n\nControlling the simulation\n--------------------------\n\nWhile running a simulation, the following keyboard commands are available:\n\n=== =======\nKey Command\n=== =======\ni Show information about simulation\np Pause simulation\nq Quit simulation\nr Reset and restart simulation\nR Start a Python REPL at the current simulation step\ns Save current state of simulation to file (see below for resuming)\n=== =======\n\nDevelopment\n===========\n\nUpdating requirements\n---------------------\n\nWe use `pip-tools`_ managed by `fabric3`_ to handle requirements.\nTo update the pinned requirements use the following command::\n\n ~# fab compile\n\n\n\nThere is also a command to compile and sync in one step::\n\n ~# fab reqs\n\n\n.. _pip-tools: https://github.com/nvie/pip-tools\n.. _fabric3: https://pypi.python.org/pypi/Fabric3\n\n\nTesting\n-------\n\nWe use `py.test`_ managed by `tox`_ to run the (unit) tests.\nTo run the test suite simply run the following command::\n\n ~# tox\n\n\n.. _py.test: https://pytest.org\n.. _tox: https://tox.testrun.org\n\n\nDocker\n------\n\nThe repository contains a `docker`_ Dockerfile. To build an image use the\nfollowing command (change into repository folder first)::\n\n ~# docker build -t gsy-e .\n\n\nAfter building is complete you can run the image with::\n\n ~# docker run --rm -it gsy-e\n\n\nCommand line parameters can be given normally after the image name::\n\n ~# docker run --rm gsy-e --help\n ~# docker run --rm gsy-e run --help\n ~# docker run --rm gsy-e run --setup default_2a -t15s\n\n\nThere is also a handy script that deals with the building of the image and running the provided command::\n\n ~# ./run_gsy_e_on_docker.sh ""$docker_command"" $export_path\n\n\nwhere you can provide the gsy-e command and the export path where the simulation results are stored.\nFor example::\n\n ~# ./run_gsy_e_on_docker.sh ""gsy-e -l ERROR run --setup default_2a -t 15s"" $HOME/gsy_e-simulation\n\n\nbuilds a gsy-e docker image (if not already present),\nruns the simulation with setup-file default_2a, tick-length 15s\nand stores the simulation output data into $HOME/gsy_e-simulation.\nIf no export_path is provided, simulation results will be stored in $HOME/gsy_e-simulation.\n\n\n.. _docker: https://docker.io\n\n\nDetailed Documentation\n======================\nPlease refer to: https://gridsingularity.github.io/gsy-e/documentation/\n'",,"2016/11/09, 14:01:56",2541,GPL-3.0,514,9254,"2023/10/24, 11:39:03",11,1673,1695,169,1,10,6.9,0.726680320238317,"2022/03/23, 11:28:57",v1.3.0,0,33,false,,true,true,,,https://github.com/gridsingularity,https://gridsingularity.com/,Berlin,,,https://avatars.githubusercontent.com/u/15871858?v=4,,, Backbone,A generic energy network optimization tool written in GAMS.,backbone,,custom,,Energy Distribution and Grids,,,,,,,,,,https://gitlab.vtt.fi/backbone/backbone,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,, APIS,Build a microgrid that utilizes variable renewable energy as the main power source and enhances the resilience of the power system.,SonyCSL,https://github.com/SonyCSL/APIS.git,github,,Energy Distribution and Grids,"2021/01/27, 06:51:09",27,0,0,false,Makefile,"Sony Computer Science 
Laboratories, Inc.",SonyCSL,"Makefile,Shell",,"b'# Autonomous Power Interchange System (APIS)\n\n# APIS features \nBy accommodating P2P energy sharing between distributed batteries, it is possible to build a microgrid that utilizes variable renewable energy(VRE) as the main power source and enhances the resilience of the power system. These distributed batteries absorb fluctuations of VRE generation and improve community self-sufficiency by balancing supply and demand within the community. \n
\nClick [here](https://www.sonycsl.co.jp/tokyo/11481/) for details\n
\n# Technology\n\n## Physical Peer to Peer (PP2P) energy sharing\nThis technology achieves a fixed amount of energy sharing between batteries by constant current control. It offsets a shortage in one battery by delivering the necessary amount from the surplus of another battery. It is possible to send a fixed amount of power between specific users (batteries), which was difficult to realize with voltage control, and it is possible to transact PP2P energy trading between users on the condition of the required energy amount and energy price. \n\n![キャプチャ](https://user-images.githubusercontent.com/71874910/95694571-c0c47080-0c6d-11eb-9935-89d62e43228c.PNG)\n\n## Autonomous distributed control\nSoftware with the same functions is installed in each battery system, and the software implements energy sharing according to the transaction conditions (time window, energy amount, energy price, etc.) of each battery system. It is a flexible energy trading system that allows various transaction conditions to be set for each battery system and lets those conditions be changed dynamically for each time window (see the illustrative sketch below). \n\n![キャプチャ](https://user-images.githubusercontent.com/71874910/95833927-3ff19b80-0d77-11eb-9bc7-1994e641d5fd.PNG)\n\n
\n
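An illustrative sketch of the transaction-condition matching described above (this is not APIS source code: the `TradeCondition` class and `match_trades` function are hypothetical names invented for this example, and the real negotiation logic lives in apis-main): \n\n```python\n# Illustrative sketch only -- not APIS source code; all names are hypothetical.\nfrom dataclasses import dataclass\n\n@dataclass\nclass TradeCondition:\n    node: str          # battery system identifier\n    window: tuple      # (start_hour, end_hour) time window of the deal\n    amount_wh: float   # energy offered (+) or requested (-)\n    price: float       # unit price the node accepts\n\ndef match_trades(conditions):\n    # Naively pair each request with the cheapest overlapping offer.\n    offers = [c for c in conditions if c.amount_wh > 0]\n    deals = []\n    for req in [c for c in conditions if c.amount_wh < 0]:\n        for offer in sorted(offers, key=lambda o: o.price):\n            overlaps = offer.window[0] < req.window[1] and req.window[0] < offer.window[1]\n            if offer.price > req.price or not overlaps or offer.amount_wh <= 0:\n                continue\n            traded = min(offer.amount_wh, -req.amount_wh)\n            deals.append((offer.node, req.node, traded, offer.price))\n            offer.amount_wh -= traded\n            req.amount_wh += traded\n            if req.amount_wh >= 0:\n                break\n    return deals\n\n# Example: battery-B requests 300 Wh between 10:00 and 13:00 at up to 0.12/Wh;\n# battery-A offers 500 Wh between 9:00 and 12:00 at 0.10/Wh.\nprint(match_trades([TradeCondition(""battery-A"", (9, 12), 500.0, 0.10),\n                    TradeCondition(""battery-B"", (10, 13), -300.0, 0.12)]))\n```\n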
\n\n## Software \nThe software that realizes the above technology and makes it possible to easily construct a DC microgrid is released. \n\n### ▼ Software required to simulate energy exchange using a hardware emulator \n - [apis-main](https://github.com/SonyCSL/apis-main) \n Software installed on each node to provide bi-directional energy exchange with autonomous decentralized control \n See apis-main\'s [Documentation](https://github.com/SonyCSL/apis-main/blob/master/doc/en/apis-main_specification_en.md) for more information. \n - [apis-main_controller](https://github.com/SonyCSL/apis-main_controller) \n Software that visualizes the status of apis-main installed on each node and the status of energy exchange between nodes \n See apis-main_controller\'s [Documentation](https://github.com/SonyCSL/apis-main_controller/blob/master/doc/en/apis-main-controller_specification_en.md) for more information. \n - [apis-web](https://github.com/SonyCSL/apis-web) \n Software that acquires necessary information for visualization from apis-main and provides it to apis-main_controller \n See apis-web\'s [Documentation](https://github.com/SonyCSL/apis-web/blob/master/doc/en/apis-web_specification_en.md) for more information. \n - [apis-emulator](https://github.com/SonyCSL/apis-emulator) \n Software to emulate hardware such as DC/DC converters and batteries \n See apis-emulator\'s [Documentation](https://github.com/SonyCSL/apis-emulator/blob/master/doc/en/apis-emulator_specification_en.md) for more information. \n - [apis-service_center](https://github.com/SonyCSL/apis-service_center) (Added on December 24, 2020) \n Software to provide information required by the administrators and users of clusters constructed of apis-main services installed in each unit. \n See apis-service_center\'s [Documentation](https://github.com/SonyCSL/apis-service_center/blob/main/doc/en/apis-service_center_specification_EN.md) for more information. \n - [apis-ccc](https://github.com/SonyCSL/apis-ccc) (Added on December 24, 2020) \n Software to upload information related to energy sharing to apis-service_center. \n See apis-ccc\'s [Documentation](https://github.com/SonyCSL/apis-ccc/blob/main/doc/en/apis-ccc_specification_EN.md) for more information. \n - [apis-log](https://github.com/SonyCSL/apis-log) (Added on December 24, 2020) \n Software to receive information from apis-main by multicast via a communication line and store that information in a database. \n See apis-log\'s [Documentation](https://github.com/SonyCSL/apis-log/blob/main/doc/en/apis-log_specification_EN.md) for more information. \n - [apis-tester](https://github.com/SonyCSL/apis-tester) (Added on December 24, 2020) \n Software to test and evaluate apis-main. \n See apis-tester\'s [Documentation](https://github.com/SonyCSL/apis-tester/blob/main/doc/en/apis-tester_specification_EN.md) for more information. \n \n ## Installation \n \n [Operating Environment] \n The above software has been tested on the following operating systems. \n - Ubuntu 18.04, 20.04 \n - CentOS 7, 8 \n - macOS Catalina, Big Sur\n \n * Virtual environments are not supported. \n \n [Preparation] \n It is assumed that the necessary software is installed. \n The following is an example of advance preparation for Ubuntu 18.04. \n * Install the JDK if it\'s not already installed. \n * Python 3.6.9 or later is required. \n * Sqlite 3.8.3 or later is required. 
(CentOS 7)\n \n```bash\n$ sudo apt install git\n$ sudo apt install make\n$ sudo apt install maven\n$ sudo apt install groovy\n$ sudo apt install python3-venv\n$ sudo apt install python3-pip\n$ sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv 9DA31620334BD75D9DCB49F368818C72E52529D4\n$ echo ""deb [ arch=amd64 ] https://repo.mongodb.org/apt/ubuntu bionic/mongodb-org/4.0 multiverse"" | sudo tee /etc/apt/sources.list.d/mongodb-org-4.0.list\n$ sudo apt install mongodb-org\n```\n\n[APIS related software batch installation] \n Install all of the above APIS related software at one time. \n See each software repository for the individual installation and execution of each software. \n \n ```bash\n$ git clone https://github.com/SonyCSL/APIS.git\n$ cd APIS\n$ make build\n```\n\n* If an error occurs with the ""make build"" or ""make run"" command, open the terminal again and re-run it. \n \n## Running \nRun all of the above APIS related software at once. \n\n ```bash\n$ make run\n```\n\n  After running each software, access the following in your browser. \n \n     0.0.0.0:4382/   -> apis-main_controller \n     0.0.0.0:4390/   -> apis-emulator \n     0.0.0.0:10000/   -> apis-tester \n     http://127.0.0.1:8000/static/ui_example/staff/visual.html   -> apis-service_center (account/pwd = admin/admin) \n 
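\nA quick way to confirm that the services came up is to poll the endpoints listed above. The snippet below is a minimal sketch, not part of APIS; it assumes the default ports from this README and the Python 3 installation from the preparation steps. \n\n```python\n# Minimal availability check for the default APIS demo endpoints listed above.\n# Sketch only -- adjust hosts/ports if you changed the configuration.\nfrom urllib.request import urlopen\nfrom urllib.error import URLError\n\nSERVICES = {\n    ""apis-main_controller"": ""http://0.0.0.0:4382/"",\n    ""apis-emulator"": ""http://0.0.0.0:4390/"",\n    ""apis-tester"": ""http://0.0.0.0:10000/"",\n    ""apis-service_center"": ""http://127.0.0.1:8000/static/ui_example/staff/visual.html"",\n}\n\nfor name, url in SERVICES.items():\n    try:\n        with urlopen(url, timeout=5) as resp:\n            print(name, ""-> HTTP"", resp.status)\n    except (URLError, OSError) as exc:\n        print(name, ""-> not reachable:"", exc)\n```\n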
\n \n  The following picture shows apis-main_controller.\n  Clear the browser cache once if the page does not render.\n \n ![キャプチャ](https://user-images.githubusercontent.com/71874910/97250475-602a5b80-1849-11eb-95bd-b8c1cac57c61.PNG)\n \n  Simulate energy exchange by setting Global Mode to ""Run"". \n \n ![キャプチャ](https://user-images.githubusercontent.com/71874910/96272423-0932b400-1009-11eb-9a90-f9e5bd49baef.PNG)\n \n
\n \n \n ## Stopping\nStop all of the above APIS related software at once. \n\n ```bash\n$ make stop\n```\n \n### ▼ Software and hardware information required for energy exchange using actual DC/DC converters and batteries\nUse apis-dcdc_batt_comm instead of apis-emulator. \n \n - [apis-dcdc_batt_comm](https://github.com/SonyCSL/apis-dcdc_batt_comm) \n Sample device driver to control DC/DC converters and batteries \n See apis-dcdc_batt_comm\'s [Documentation](https://github.com/SonyCSL/apis-dcdc_batt_comm/blob/master/doc/en/apis-dcdc_batt_comm_specification_en.md) for more information. \n - [apis-build_version_up_system](https://github.com/SonyCSL/apis-build_version_up_system) \n This tool clones all of the software needed to build the APIS evaluation environment from GitHub/Sony CSL in a single batch, builds the software, generates the various configuration files required for APIS operation according to the configuration file, and then installs all of the software on multiple nodes.\n See apis-build_version_up_system\'s [Documentation](https://github.com/SonyCSL/apis-build_version_up_system/blob/main/doc/en/apis-build_version_up_system_specification_EN.md) for more information. \n - [apis-hw-info](https://github.com/SonyCSL/apis-hw-info) \n Hardware reference information \n See apis-hw-info\'s [Documentation](https://github.com/SonyCSL/apis-hw-info/blob/main/MAIN-DOCUMENT_EN.md) for more information.\n\n ## License\n [Apache License Version 2.0](https://github.com/oes-github/APIS/blob/master/LICENSE)\n\n\n## Notice\n [Notice](https://github.com/oes-github/APIS/blob/master/NOTICE.md)\n'",,"2020/09/29, 15:18:57",1121,Apache-2.0,0,178,"2020/10/16, 14:03:15",0,1,1,0,1104,0,0.0,0.0,,,0,1,false,,false,false,,,https://github.com/SonyCSL,http://www.sonycsl.co.jp/,,,,https://avatars.githubusercontent.com/u/2343888?v=4,,, pymgrid,A Python library to generate and simulate a large number of microgrids.,Total-RD,https://github.com/Total-RD/pymgrid.git,github,"microgrid,reinforcement-learning,control,energy-management-systems,microgrid-model",Energy Distribution and Grids,"2023/05/02, 01:59:45",148,3,44,true,Python,TotalEnergies-R&D,Total-RD,Python,https://pymgrid.readthedocs.io/,"b'# pymgrid\n\n## Important Notice\n\n### The person who has been maintaining pymgrid since 2020 has moved future development to [python-microgrid](https://github.com/ahalev/python-microgrid), a drop-in replacement for pymgrid, for the foreseeable future, and pymgrid may not be receiving any future updates. Please open any new issues in python-microgrid.\n\n\n![Build](https://github.com/Total-RD/pymgrid/workflows/build/badge.svg?dummy=unused)\n\npymgrid (PYthon MicroGRID) is a Python library to generate and simulate a large number of microgrids.\n\nFor more context, please see the [presentation](https://www.climatechange.ai/papers/neurips2020/3) done at Climate Change AI\nand the [documentation](https://pymgrid.readthedocs.io).\n\n## Installation\n\nThe easiest way to install pymgrid is with pip:\n\n`pip install -U pymgrid`\n\nAlternatively, you can install from source. First clone the repo:\n \n```bash\ngit clone https://github.com/Total-RD/pymgrid.git\n``` \nThen navigate to the root directory of pymgrid and call\n\n```bash\npip install .\n```\n## Getting Started\n\nMicrogrids are straightforward to generate from scratch. 
Simply define some modules and pass them\nto a microgrid:\n```python\nimport numpy as np\nfrom pymgrid import Microgrid\nfrom pymgrid.modules import GensetModule, BatteryModule, LoadModule, RenewableModule\n\n\ngenset = GensetModule(running_min_production=10,\n running_max_production=50,\n genset_cost=0.5)\n\nbattery = BatteryModule(min_capacity=0,\n max_capacity=100,\n max_charge=50,\n max_discharge=50,\n efficiency=1.0,\n init_soc=0.5)\n\n# Using random data\nrenewable = RenewableModule(time_series=50*np.random.rand(100))\n\nload = LoadModule(time_series=60*np.random.rand(100),\n loss_load_cost=10)\n\nmicrogrid = Microgrid([genset, battery, (""pv"", renewable), load])\n```\n\nThis creates a microgrid with the modules defined above, as well as an unbalanced energy module -- \nwhich reconciles situations when energy demand cannot be matched to supply.\n\nPrinting the microgrid gives us its architecture:\n\n```python\n>> microgrid\n\nMicrogrid([genset x 1, load x 1, battery x 1, pv x 1, balancing x 1])\n```\n\nA microgrid is composed of fixed modules and flex modules. Some modules can be both -- `GridModule`, for example\n-- but not at the same time.\n\n\nA *fixed* module requires a request of a certain amount of energy ahead of time, and then attempts to \nproduce or consume said amount. `LoadModule` is an example of this; you must tell it to consume a certain amount of energy\nand it will then do so.\n\n A *flex* module, on the other hand, is able to adapt to meet demand. `RenewableModule` is an example of this as\n it allows for curtailment of any excess renewable produced.\n \n A microgrid will tell you which modules are which:\n \n ```python\n>> microgrid.fixed_modules\n\n{\n ""genset"": ""[GensetModule(running_min_production=10, running_max_production=50, genset_cost=0.5, co2_per_unit=0, cost_per_unit_co2=0, start_up_time=0, wind_down_time=0, allow_abortion=True, init_start_up=True, raise_errors=False, provided_energy_name=genset_production)]"",\n ""load"": ""[LoadModule(time_series=, loss_load_cost=10, forecaster=NoForecaster, forecast_horizon=0, forecaster_increase_uncertainty=False, raise_errors=False)]"",\n ""battery"": ""[BatteryModule(min_capacity=0, max_capacity=100, max_charge=50, max_discharge=50, efficiency=1.0, battery_cost_cycle=0.0, battery_transition_model=None, init_charge=None, init_soc=0.5, raise_errors=False)]""\n}\n\n>> microgrid.flex_modules\n\n{\n ""pv"": ""[RenewableModule(time_series=, raise_errors=False, forecaster=NoForecaster, forecast_horizon=0, forecaster_increase_uncertainty=False, provided_energy_name=renewable_used)]"",\n ""balancing"": ""[UnbalancedEnergyModule(raise_errors=False, loss_load_cost=10, overgeneration_cost=2)]""\n}\n\n```\n\n\nRunning the microgrid is straightforward. Simply pass an action for each fixed module to `microgrid.run`. The microgrid\ncan also provide you with a random action by calling `microgrid.sample_action`. Once the microgrid has been run for a\ncertain number of steps, results can be viewed by calling `microgrid.get_log`.\n\n```python\n>> for j in range(10):\n>> action = microgrid.sample_action(strict_bound=True)\n>> microgrid.run(action)\n\n>> microgrid.get_log(drop_singleton_key=True)\n\n genset ... balance\n reward ... fixed_absorbed_by_microgrid\n0 -5.000000 ... 10.672095\n1 -14.344353 ... 50.626726\n2 -5.000000 ... 17.538018\n3 -0.000000 ... 15.492778\n4 -0.000000 ... 35.748724\n5 -0.000000 ... 30.302300\n6 -5.000000 ... 36.451662\n7 -0.000000 ... 66.533872\n8 -0.000000 ... 20.645077\n9 -0.000000 ... 
10.632957\n```\n\n## Benchmarking\n\n`pymgrid` also comes pre-packaged with a set of 25 microgrids for benchmarking.\nThe config files for these microgrids are available in `data/scenario/pymgrid25`.\nSimply deserialize one of the yaml files to load one of the saved microgrids; for example,\nto load the zeroth microgrid:\n\n```python\nimport yaml\nfrom pymgrid import PROJECT_PATH\n\nyaml_file = PROJECT_PATH / \'data/scenario/pymgrid25/microgrid_0/microgrid_0.yaml\'\nmicrogrid = yaml.safe_load(yaml_file.open(\'r\'))\n```\n\nAlternatively, `Microgrid.load(yaml_file.open(\'r\'))` will perform the same deserialization.\n\n\n## Citation\n\nIf you use this package for your research, please cite the following paper:\n\n```\n@misc{henri2020pymgrid,\n title={pymgrid: An Open-Source Python Microgrid Simulator for Applied Artificial Intelligence Research}, \n author={Gonzague Henri, Tanguy Levent, Avishai Halev, Reda Alami and Philippe Cordier},\n year={2020},\n eprint={2011.08004},\n archivePrefix={arXiv},\n primaryClass={cs.AI}\n}\n```\n\nYou can find it on Arxiv here: https://arxiv.org/abs/2011.08004\n\n## Data\n\nData in pymgrid are based on TMY3 (data based on representative weather). The PV data comes from DOE/NREL/ALLIANCE (https://nsrdb.nrel.gov/about/tmy.html) and the load data comes from OpenEI (https://openei.org/doe-opendata/dataset/commercial-and-residential-hourly-load-profiles-for-all-tmy3-locations-in-the-united-states)\n\nThe CO2 data is from Jacques de Chalendar and his gridemissions API.\n\n## Contributing\nPull requests are welcome for bug fixes. For new features, please open an issue first to discuss what you would like to add.\n\nPlease make sure to update tests as appropriate.\n\n## License\n\nThis repo is under the GNU LGPL 3.0 (https://github.com/total-sa/pymgrid/edit/master/LICENSE)\n'",,https://arxiv.org/abs/2011.08004\n\n##","2020/03/18, 23:37:39",1315,LGPL-3.0,743,1421,"2023/05/19, 17:51:46",3,191,246,133,159,1,0.0,0.1105263157894737,"2023/01/25, 21:12:26",v1.2.2,0,8,false,,false,true,"YannBerthelot/PymgridExperiments,hi-paris/HiPARIS-Hickathon-2021,nicosquare/ml-707-project",,https://github.com/Total-RD,,,,,https://avatars.githubusercontent.com/u/61286436?v=4,,, SciGRID,"The focus will be on the European transmission grids, but the methods will be applicable more generally.",,,custom,,Energy Distribution and Grids,,,,,,,,,,https://power.scigrid.de/,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,, mosaik,A flexible Smart Grid co-simulation framework.,mosaik,https://gitlab.com/mosaik/mosaik,gitlab,,Energy Distribution and Grids,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,, SmartGridToolbox,Designed to provide an extensible and flexible starting point for developing a wide variety of smart grid simulations and other applications.,SmartGridToolbox,https://gitlab.com/SmartGridToolbox/SmartGridToolbox,gitlab,,Energy Distribution and Grids,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,, OPEN,"The framework combines distributed energy resource modelling (e.g. 
for PV generation sources, battery energy storage systems, electric vehicles), energy market modelling, power flow simulation and multi-period optimisation for scheduling flexible energy resources.",EPGOxford,https://github.com/EPGOxford/OPEN.git,github,,Energy Distribution and Grids,"2020/07/16, 08:07:29",53,0,7,false,Python,Energy and Power Group,EPGOxford,Python,,"b'# Open Platform for Energy Networks (OPEN)\n\n## Overview\n\nOxford University\'s Energy and Power Group\'s Open Platform for Energy Networks (OPEN) provides a python toolset for modelling, simulation and optimisation of smart local energy systems.\nThe framework combines distributed energy resource modelling (e.g. for PV generation sources, battery energy storage systems, electric vehicles), energy market modelling, power flow simulation and multi-period optimisation for scheduling flexible energy resources.\n\nOPEN and the methods used are presented in detail in the following publication:\n\nT. Morstyn, K. Collett, A. Vijay, M. Deakin, S. Wheeler, S. M. Bhagavathy, F. Fele and M. D. McCulloch; *""An Open-Source Platform for Developing Smart Local Energy System Applications""*; University of Oxford Working Paper, 2019\n\n## Documentation\n\nThe full OPEN documentation can be found [here](https://open-platform-for-energy-networks.readthedocs.io).\n\n## Installation\n\nDownload the OPEN source code.\n\nIf using conda, we suggest creating a new virtual environment from the requirements.txt file.\nFirst, add the following channels to your conda distribution if not already present:\n\n conda config --add channels invenia\n conda config --add channels picos\n conda config --add channels conda-forge\n\nTo create the new virtual environment, run:\n\n conda create --name <environment-name> --file requirements.txt python=3.6\n\n## Getting started\n\nThe simplest way to start is to duplicate one of the case study main.py files:\n- OxEMF_EV_case_study_v6.py\n- Main_building_casestudy.py\n\n\n## Contributors\n\n* Thomas Morstyn\n* Avinash Vijay\n* Katherine Collett\n* Filiberto Fele\n* Matthew Deakin\n* Sivapriya Mothilal Bhagavathy\n* Scot Wheeler\n* Malcolm McCulloch\n'",,"2019/12/02, 18:54:17",1423,Apache-2.0,0,6,"2020/01/06, 11:18:35",1,2,2,0,1388,1,0.0,0.16666666666666663,,,0,2,false,,false,false,,,https://github.com/EPGOxford,https://epg.eng.ox.ac.uk/,"Oxford, UK",,,https://avatars.githubusercontent.com/u/54850669?v=4,,, GridLAB-D,A distribution level power system simulator designed to allow users to create and analyze smart grid technologies.,slacgismo,https://github.com/arras-energy/gridlabd-old.git,github,"power-systems-analysis,power-systems,smartgrid,power-simulator,grid-simulation,transactive-energy",Energy Distribution and Grids,"2023/06/26, 23:01:49",31,0,7,false,C++,Arras,arras-energy,"C++,Jupyter Notebook,Python,HTML,XSLT,C,Shell,M4,Makefile,JavaScript,CSS,HCL,Awk,Batchfile,MATLAB,TeX,Dockerfile",https://docs.gridlabd.us,"b'---\n\n

**IMPORTANT NOTICE**

\n\nThis is the last release of HiPAS GridLAB-D before it is transferred to LF Energy\'s [Arras Energy project](https://github.com/arras-energy).\n\n---\n\n| Repository | Build | Deploy |\n| :---: | :---: | :---: |\n| [GridLAB-D](https://github.com/arras-energy/gridlabd) | ![master](https://github.com/arras-energy/gridlabd/actions/workflows/master.yml/badge.svg?branch=master) ![develop](https://github.com/arras-energy/gridlabd/workflows/develop/badge.svg?branch=develop) | [![master-images](https://github.com/arras-energy/gridlabd/actions/workflows/master-image.yml/badge.svg)](https://github.com/arras-energy/gridlabd/actions/workflows/master-image.yml) [![develop-images](https://github.com/arras-energy/gridlabd/actions/workflows/develop-image.yml/badge.svg)](https://github.com/arras-energy/gridlabd/actions/workflows/develop-image.yml) |\n| [Templates](https://github.com/arras-energy/gridlabd-template) | [![master](https://github.com/arras-energy/gridlabd-template/actions/workflows/master.yml/badge.svg)](https://github.com/arras-energy/gridlabd-template/actions/workflows/master.yml) [![develop](https://github.com/arras-energy/gridlabd-template/actions/workflows/develop.yml/badge.svg)](https://github.com/arras-energy/gridlabd-template/actions/workflows/develop.yml)\n| [Weather](https://github.com/arras-energy/gridlabd-weather) | [![validate](https://github.com/arras-energy/gridlabd-weather/actions/workflows/validate.yml/badge.svg)](https://github.com/arras-energy/gridlabd-weather/actions/workflows/validate.yml)\n| [Library](https://github.com/arras-energy/gridlabd-library) | [![validate](https://github.com/arras-energy/gridlabd-library/actions/workflows/master.yml/badge.svg)](https://github.com/arras-energy/gridlabd-library/actions/workflows/master.yml) [![validate](https://github.com/arras-energy/gridlabd-library/actions/workflows/develop.yml/badge.svg)](https://github.com/arras-energy/gridlabd-library/actions/workflows/develop.yml)\n| [Models](https://github.com/arras-energy/gridlabd-models) | [![validate](https://github.com/arras-energy/gridlabd-models/actions/workflows/validate.yml/badge.svg)](https://github.com/arras-energy/gridlabd-models/actions/workflows/validate.yml)\n| [Benchmarks](https://github.com/arras-energy/gridlabd-benchmarks) | Manual test (see [README.md](https://github.com/arras-energy/gridlabd-benchmarks/blob/main/README.md))\n| [Examples](https://github.com/arras-energy/gridlabd-examples) | Manual test (see [README.md](https://github.com/arras-energy/gridlabd-examples/blob/master/README.md))\n\n\nThe documentation for this project is located at http://docs.gridlabd.us/.\n\nThis repository contains the source code to HiPAS GridLAB-D, which is being developed by SLAC National Accelerator Laboratory for the California Energy Commission under grant [EPC-17-046](https://www.energy.ca.gov/filebrowser/download/1147). This version of GridLAB-D is intended to be a commercial-grade version of the [US Department of Energy\'s research version of GridLAB-D developed by Pacific Northwest National Laboratory](https://github.com/gridlab-d/gridlab-d).\n\n*Note*: This fork of [GridLAB-D](https://github.com/gridlab-d/gridlab-d) does not support MS Windows directly. You must use Docker or a virtual machine running Linux.\n\n# Quick start using Docker\n\nThe preferred method for running HiPAS GridLAB-D is to download the master image from Docker Hub (see https://hub.docker.com/repository/docker/arras-energy/gridlabd). You must install the docker daemon to use docker images. 
See https://www.docker.com/get-started for details.\n\nOnce you have installed docker, you may issue the following commands to run GridLAB-D at the command line:\n\n~~~\ndocker run -it -v $PWD:/model arras-energy/gridlabd:latest gridlabd -W /model [LOADOPTIONS] [FILENAME.EXT] [RUNOPTIONS] \n~~~ \n\nOn many systems, an alias can be used to make this a simple command that resembles the command you would normally issue to run a host-based installation:\n\n~~~\nalias gridlabd=\'docker run -it -v $PWD:/tmp arras-energy/gridlabd:latest gridlabd\'\n~~~\n\nNote that this alias will interfere with any host-based installation. You may use the `gridlabd docker` command to manage the use of docker images concurrently with host-based installations.\n\n# Downloading pre-built images\n\nInstallation from downloads may require `sudo` privileges and always requires `curl`. The `install` script will automatically download and install the latest production image for your system if you use the following command:\n\n~~~\ncurl -sL http://install.gridlabd.us/install.sh | [sudo] sh\n~~~\n\nYou can download the latest development image using the command:\n\n~~~\ncurl -sL http://install-dev.gridlabd.us/install.sh | [sudo] sh\n~~~\n\nIf you must use `sudo`, then don\'t forget to grant user permission to access the build and runtime virtual environments created by the installer, e.g.,\n\n~~~\nsudo chmod -R g+rwx ~root /usr/local\nsudo adduser $USER root\n~~~\n\nIf you want to use a more secure approach to sharing the install among multiple users, see [install/README.md](install/README.md#Security).\n\nThe installer recognizes the following environment variables:\n\n| Variable | Default | Description\n| -------- | ------- | -----------\n| `INSTALL_SOURCE` | `http://install.gridlabd.us` | URL from which image is downloaded\n| `INSTALL_TARGET` | `/usr/local/opt` | Folder in which image is installed\n| `INSTALL_STDERR` | `/dev/stderr` | File to which error messages are sent\n| `INSTALL_STDOUT` | `/dev/stdout` | File to which output messages are sent\n| `GRIDLABD_IMAGE` | *varies* | Install image name, e.g., `$OSNAME_$VERMAJOR-$MACHINE`\n\nThis procedure may also be used in AWS EC2 instances and Docker containers.\n\nIf you have installed the AWS CLI, you can use the following command to get a list of available images:\n\n~~~\naws s3 ls s3://install.gridlabd.us | grep tarz\n~~~\n\nNote that the installer only works with image names that conform to the name pattern `VERSION-BUILD-BRANCH-SYSTEM-MACHINE.tarz`.\n\n# Build from source\n\nThe prerequisites for building HiPAS GridLAB-D from source include `git` and `curl`. In general you can use the `setup.sh` script to verify and update your system so that the prerequisites are satisfied. \n\nOn most systems, the process is as follows:\n\n~~~\ngit clone https://code.gridlabd.us/ [-b BRANCH] gridlabd\ncd gridlabd\n./setup.sh --local\n./build.sh --system --validate\n~~~\n\nIf you want to clone an alternate repository, use the following `git` command instead:\n\n~~~\ngit clone https://github.com/ORG/REPO [-b BRANCH] gridlabd\n~~~\n\nIf you do not specify `--local`, then by default the `setup.sh` source will match the `git` repository origin and branch, if any. Otherwise the default source will be `arras-energy/gridlabd/master`. If you want to set up from a different origin, use the command `export GRIDLABD_ORIGIN=ORG/REPO/BRANCH` to specify an alternate source for `setup.sh`. 
The `build.sh` will also match the current `git` repository.\n\n*Do not* run the `setup.sh` and `build.sh` scripts with `sudo`, as that will usually create a broken install. If necessary, you should give yourself permission to write `/usr/local` and `brew`\'s install folder. If you have not already done so, add `brew` to your path.\n\nTo upload the image to the AWS installer you must install the AWS CLI, and obtain credentials to access the installer\'s S3 buckets before using the command:\n\n~~~\n./build.sh --upload\n~~~\n\nTo make the image the latest release, use the command:\n\n~~~\n./build.sh --release\n~~~\n\nWhen you are working in a master branch, these commands will update `install.gridlabd.us`; otherwise the upload will go to `install-dev.gridlabd.us`.\n\n## Docker\n\nDevelopers should use the following command to build GridLAB-D in a Docker container:\n\n~~~\ndocker/build.sh\n~~~\n\nNote that Docker will build the currently checked out branch *from the repository rather than from your local code*.\n\nTo push the docker image to your personal Dockerhub, use the command:\n\n~~~\ndocker/build.sh --push\n~~~\n\nThe Dockerhub account is assumed to match the name of your GitHub account.\n\nTo release the docker image, use the command:\n\n~~~\ndocker/build.sh --release\n~~~\n\n## AWS EC2\n\nThe latest development and master builds of HiPAS GridLAB-D are available as community AMIs.\nSimply launch an EC2 instance, browse the community AMIs, and search for HiPAS GridLAB-D.\n\nIf you want to build GridLAB-D yourself, use the AWS Ubuntu AMI on AWS EC2 using the commands\n\n~~~\ngit clone https://code.gridlabd.us/ [-b BRANCH] gridlabd\ncd gridlabd\n./setup.sh --local\n./build.sh --system --validate\n~~~\n\n## Windows WSL\n\nGenerally, running HiPAS GridLAB-D on Docker is preferred because it is usually faster. Building, running and installing GridLAB-D in WSL is not that different from a normal Linux installation. You can follow Microsoft\'s instructions on setting up WSL and adding/changing distros [here](https://learn.microsoft.com/en-us/windows/wsl/install). These instructions work for both cases on supported operating systems, which you can find in the build-aux directory.\n\n1) Open PowerShell as administrator or run the WSL (Ubuntu) from the start menu to open a dedicated terminal\n2) Run `wsl` (Using Ubuntu)\n3) Follow the Linux build procedure above.\n\n## Manual Build\n\nYou can build HiPAS GridLAB-D manually by running the following commands in the top level repository folder:\n\n1. Create the target folder:\n\n~~~\nmkdir -p /usr/local/opt/gridlabd\n~~~\n\n2. Activate the python build environment\n\n~~~\n. $HOME/.gridlabd/bin/activate\n~~~\n\n3. Create the configuration script\n\n~~~\nautoreconf -isf\n~~~\n\n4. Run the configuration script\n\n~~~\n./configure\n~~~\n\n5. Compile everything\n\n~~~\nmake\n~~~\n\n6. Install everything\n\n~~~\nmake install\n~~~\n\n7. Validate the install\n\n~~~\nmake validate\n~~~\n\n8. Release install to all users\n\n~~~\nmake system\n~~~\n\n## Pro Tips\n\n1. If you accumulate a lot of local branches that no longer exist on the remote repo, you can use the following command to purge them:\n\n~~~\nhost% git fetch -p && git branch -vv | awk \'/: gone]/{print $1}\' | xargs git branch -D\n~~~\n\n2. You can manage multiple installs using the `gridlabd version` command. See `gridlabd version help` for details\n\n3. You can prevent `./configure` from using the configure cache by deleting the `config.cache` folder.\n\n4. 
You can start a clean build using `--clean` option with `./build.sh`. Note that this will delete any new files not added with `git add`.\n\n5. You can change the install prefix using the `--prefix FOLDER` option with `./build.sh`.\n\n## Citation\n\nIf you use this fork of GridLAB-D for a publication you are required to cite it, e.g.,\n\nChassin, D.P., et al., ""GridLAB-D Version _major_._minor_._patch_-_build_ (_branch_) _platform_"", (_year_) [online]. Available at _url_, Accessed on: _month_ _day_, _year_.\n\nYou may use the `--cite` command option to obtain the correct citation for your version:\n\n~~~\nhost% gridlabd --cite\nChassin, D.P., et al. ""GridLAB-D 4.2.0-191008 (fix_python_validate) DARWIN"", (2019) [online]. Available at https://source.gridlabd.us/commit/dfc392dc0208419ce9be0706f699fdd9a11e3f5b, Accessed on: Oct. 8, 2019.\n~~~\n\nThis will allow anyone to identify the exact version you are using to obtain it from GitHub.\n\n## US Government Rights\n\nThis version of GridLAB-D is derived from the original US Department of Energy version of GridLAB-D developed at Pacific Northwest National Laboratory. The US Government retains certain rights as described in [the original GridLAB-D license](https://raw.githubusercontent.com/gridlab-d/gridlab-d/master/LICENSE).\n\n## Contributions\n\nPlease see https://source.gridlabd.us/blob/master/CONTRIBUTING.md for information on making contributions to this repository.\n\n'",,"2016/12/16, 17:19:39",2504,BSD-3-Clause,4,5079,"2023/09/22, 00:00:26",6,827,1254,163,33,6,0.6,0.3392105263157895,"2023/07/20, 14:38:16",4.3.2,0,26,false,,false,false,,,https://github.com/arras-energy,https://www.arras.energy,United States of America,,,https://avatars.githubusercontent.com/u/118282969?v=4,,, Gym-ANM,Design Reinforcement Learning environments that model Active Network Management tasks in electricity distribution networks.,robinhenry,https://github.com/robinhenry/gym-anm.git,github,"reinforcement-learning,electricity-networks,gym-environments",Energy Distribution and Grids,"2023/06/18, 12:25:25",119,4,26,true,Python,,,"Python,JavaScript,HTML,CSS",https://gym-anm.readthedocs.io/en/latest/,"b""# Gym-ANM\n[![Downloads](https://pepy.tech/badge/gym-anm)](https://pepy.tech/project/gym-anm)\n[![Documentation Status](https://readthedocs.org/projects/ansicolortags/badge/?version=latest)](https://gym-anm.readthedocs.io/en/latest/)\n[![codecov](https://codecov.io/gh/robinhenry/gym-anm/branch/master/graph/badge.svg?token=7JSMJPPIQ7)](https://codecov.io/gh/robinhenry/gym-anm)\n[![Checks](https://github.com/robinhenry/gym-anm/actions/workflows/ci_checks.yml/badge.svg)](https://github.com/robinhenry/gym-anm/actions/workflows/ci_checks.yml)\n[![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT)\n\n`gym-anm` is a framework for designing reinforcement learning (RL) environments that model Active Network\nManagement (ANM) tasks in electricity distribution networks. It is built on top of the\n[OpenAI Gym](https://github.com/openai/gym) toolkit.\n\nThe `gym-anm` framework was designed with one goal in mind: **bridge the gap between research in RL and in\nthe management of power systems**. 
We attempt to do this by providing RL researchers with an easy-to-work-with\nlibrary of environments that model decision-making tasks in power grids.\n\n**Papers:**\n* [Gym-ANM: Reinforcement Learning Environments for Active Network Management Tasks in Electricity Distribution Systems](https://doi.org/10.1016/j.egyai.2021.100092)\n* [Gym-ANM: Open-source software to leverage reinforcement learning for power system management in research and education](https://doi.org/10.1016/j.simpa.2021.100092)\n\n## Key features\n* Very little background in electricity systems modelling is required. This makes `gym-anm` an ideal starting point\n for RL students and researchers looking to enter the field.\n* The environments (tasks) generated by `gym-anm` follow the [OpenAI Gym](https://github.com/openai/gym)\n framework, with which a large part of the RL community is already familiar.\n* The flexibility of `gym-anm`, with its different customizable components, makes it a suitable framework\n to model a wide range of ANM tasks, from simple ones that can be used for educational purposes, to complex ones\n designed to conduct advanced research.\n\n## Documentation\nDocumentation is provided online at [https://gym-anm.readthedocs.io/en/latest/](https://gym-anm.readthedocs.io/en/latest/).\n\n## Installation\n\n### Requirements\n`gym-anm` requires Python 3.8+ and can run on Linux, macOS, and Windows. Some rendering features may not work properly\non Windows (not tested).\n\nWe recommend installing `gym-anm` in a Python environment (e.g., [virtualenv](https://virtualenv.pypa.io/en/latest/)\nor [conda](https://conda.io/en/latest/#)).\n\n### Using pip\nUsing pip (preferably after activating your virtual environment):\n```\npip install gym-anm\n```\n\n### Building from source\nAlternatively, you can build `gym-anm` directly from source:\n```\ngit clone https://github.com/robinhenry/gym-anm.git\ncd gym-anm\npip install -e .\n```\n\n## Example\nThe following code snippet illustrates how `gym-anm` environments can be used. In this example,\nactions are randomly sampled from the action space of the environment `ANM6Easy-v0`. For more information\nabout the agent-environment interface, see the official [OpenAI Gym documentation](https://github.com/openai/gym).\n```\nimport gym\nimport time\n\ndef run():\n    env = gym.make('gym_anm:ANM6Easy-v0')\n    o = env.reset()\n\n    for i in range(100):\n        a = env.action_space.sample()\n        o, r, done, info = env.step(a)\n        env.render()\n        time.sleep(0.5)  # otherwise the rendering is too fast for the human eye.\n\n    env.close()\n\nif __name__ == '__main__':\n    run()\n```\nThe above code would render the environment in your default web browser as shown in the image below:\n![alt text](https://github.com/robinhenry/gym-anm/blob/master/docs/source/images/anm6-easy-example.png?raw=true)\n\nAdditional example scripts can be found in [examples/](examples).\n\n## Testing the installation\nAll unit tests in `gym-anm` can be run from the project root directory with:\n```\npython -m pytest tests\n```\n\n## Contributing\nContributions are always welcome! 
Please read the [contribution guidelines](CONTRIBUTING.md) first.\n\n## Citing the project\nAll publications derived from the use of `gym-anm` should cite the following two 2021 papers:\n```\n@article{HENRY2021100092,\n title = {Gym-ANM: Reinforcement learning environments for active network management tasks in electricity distribution systems},\n journal = {Energy and AI},\n volume = {5},\n pages = {100092},\n year = {2021},\n issn = {2666-5468},\n doi = {https://doi.org/10.1016/j.egyai.2021.100092},\n author = {Robin Henry and Damien Ernst},\n}\n```\n```\n@article{HENRY2021100092,\n title = {Gym-ANM: Open-source software to leverage reinforcement learning for power system management in research and education},\n journal = {Software Impacts},\n volume = {9},\n pages = {100092},\n year = {2021},\n issn = {2665-9638},\n doi = {https://doi.org/10.1016/j.simpa.2021.100092},\n author = {Robin Henry and Damien Ernst}\n}\n```\n\n## Maintainers\n`gym-anm` is currently maintained by [Robin Henry](https://www.robinxhenry.com/).\n\n## License\nThis project is licensed under the MIT License - see the [LICENSE.md](LICENSE.md) file for details.\n""",",https://doi.org/10.1016/j.egyai.2021.100092,https://doi.org/10.1016/j.simpa.2021.100092,https://doi.org/10.1016/j.egyai.2021.100092,https://doi.org/10.1016/j.simpa.2021.100092","2019/07/17, 23:19:13",1560,MIT,43,235,"2023/04/14, 19:32:24",0,9,19,15,194,0,0.0,0.004484304932735439,"2023/04/14, 19:58:26",1.1.6,0,2,false,,true,true,"EVERGi/treec-paper-results,paulhenryyopa/nevergrad,facebookresearch/nevergrad,robinhenry/gym-anm-exp",,,,,,,,,, SEAPATH,Industrial grade open source real-time platform that can run virtualized automation and protection applications for the power grid industry.,projects,,custom,,Energy Distribution and Grids,,,,,,,,,,https://www.lfenergy.org/projects/seapath/,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,, Tools for the iTEM databases,"Contains tools for two databases maintained by iTEM, the International Transport Energy Modeling consortium.",transportenergy,https://github.com/transportenergy/database.git,github,,Energy Distribution and Grids,"2021/05/04, 17:39:29",20,0,2,false,Python,iTEM: International Transport Energy Modeling,transportenergy,"Python,R",https://transportenergy.rtfd.io,"b'Tools for the iTEM databases\n============================\n\n[![Build Status](https://travis-ci.org/transportenergy/database.svg?branch=master)](https://travis-ci.org/transportenergy/database)\n![Codecov](https://img.shields.io/codecov/c/gh/transportenergy/database.svg)\n[![Documentation Status](https://readthedocs.org/projects/transportenergy/badge/?version=latest)](https://transportenergy.readthedocs.io/en/latest/?badge=latest)\n[![DOI](https://zenodo.org/badge/DOI/10.5281/zenodo.4271789.svg)](https://doi.org/10.5281/zenodo.4271789)\n\n\nThis repository contains tools for two databases maintained by [iTEM](https://transportenergy.org), the **International Transport Energy Modeling** consortium:\n\n1. A *historical database* to form a common, public, \xe2\x80\x9cbest available\xe2\x80\x9d baseline for model calibration and projections.\n The historical database is under continuous development.\n\n2. 
A *model database* of transport energy projections assembled as part of the iTEM model intercomparison projects (MIPs) linked to [iTEM workshops](https://transportenergy.org/workshops/).\n To meet the intellectual property concerns of workshop participants, the model database is currently not public, and only available on request; however, the tools used to prepare it are public.\n These tools are developed periodically, during sequential MIPs.\n\nFor details on installation, usage, contributing, etc., see the **online documentation** at https://transportenergy.readthedocs.io, automatically built from the contents of this repository.\n\nLicense\n-------\n\nCopyright \xc2\xa9 2017\xe2\x80\x932021, [iTEM contributors](https://github.com/transportenergy/database/graphs/contributors)\n\nLicensed under the GNU General Public License, version 3.\nThe full text of the license is available in the file `LICENSE`.\n\nSee the online documentation for [citation of this software](https://transportenergy.readthedocs.io/#citation) in scientific publications that use the software *or* the resulting database.\n\nRelated repositories\n--------------------\n\n- [transportenergy/item_mip_data_processing](https://github.com/transportenergy/item_mip_data_processing): tools for iTEM MIP3.\n- [transportenergy/metadata](https://github.com/transportenergy/metadata): shared metadata about models and historical data sources.\n'",",https://doi.org/10.5281/zenodo.4271789","2017/05/15, 18:45:15",2354,GPL-3.0,0,583,"2021/05/04, 15:48:18",15,42,60,0,904,0,0.0,0.20733944954128436,"2021/05/04, 14:28:11",v2021.5.4,0,6,false,,false,false,,,https://github.com/transportenergy,https://transportenergy.org,Worldwide,,,https://avatars.githubusercontent.com/u/25395954?v=4,,, PowerSimData,Is part of a Python software ecosystem developed by Breakthrough Energy Sciences to carry out power flow study in the U.S. 
electrical grid.,Breakthrough-Energy,https://github.com/Breakthrough-Energy/PowerSimData.git,github,,Energy Distribution and Grids,"2023/02/23, 22:49:43",48,0,13,true,Python,Breakthrough Energy,Breakthrough-Energy,"Python,Jupyter Notebook,Dockerfile",https://breakthrough-energy.github.io/docs/,"b'![logo](https://raw.githubusercontent.com/Breakthrough-Energy/docs/master/source/_static/img/BE_Sciences_RGB_Horizontal_Color.svg)\n\n[![PyPI](https://img.shields.io/pypi/v/powersimdata?color=purple)](https://pypi.org/project/powersimdata/)\n[![codecov](https://codecov.io/gh/Breakthrough-Energy/PowerSimData/branch/develop/graph/badge.svg?token=5A20TCV5XL)](https://codecov.io/gh/Breakthrough-Energy/PowerSimData)\n[![made-with-python](https://img.shields.io/badge/Made%20with-Python-1f425f.svg)](https://www.python.org/)\n[![Code style: black](https://img.shields.io/badge/code%20style-black-000000.svg)](https://github.com/psf/black)\n![Tests](https://github.com/Breakthrough-Energy/PowerSimData/workflows/Pytest/badge.svg)\n[![Documentation](https://github.com/Breakthrough-Energy/docs/actions/workflows/publish.yml/badge.svg)](https://breakthrough-energy.github.io/docs/)\n![GitHub contributors](https://img.shields.io/github/contributors/Breakthrough-Energy/PowerSimData?logo=GitHub)\n![GitHub commit activity](https://img.shields.io/github/commit-activity/m/Breakthrough-Energy/PowerSimData?logo=GitHub)\n![GitHub last commit (branch)](https://img.shields.io/github/last-commit/Breakthrough-Energy/PowerSimData/develop?logo=GitHub)\n![GitHub pull requests](https://img.shields.io/github/issues-pr/Breakthrough-Energy/PowerSimData?logo=GitHub)\n[![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT)\n[![Code of Conduct](https://img.shields.io/badge/Code_of_conduct-ff69b4.svg)](https://breakthrough-energy.github.io/docs/communication/code_of_conduct.html)\n[![DOI](https://zenodo.org/badge/DOI/10.5281/zenodo.4538590.svg)](https://doi.org/10.5281/zenodo.4538590)\n[![Slack](https://img.shields.io/badge/Community_Slack-sign_up-1f425f.svg?logo=slack)](https://science.breakthroughenergy.org/#get-updates)\n\n# PowerSimData\n**PowerSimData** is part of a Python software ecosystem developed by [Breakthrough\nEnergy Sciences](https://science.breakthroughenergy.org/) to carry out power flow study\nin the U.S. electrical grid.\n\n\n## Main Features\nHere are a few things that **PowerSimData** can do:\n* Provide a flexible modeling tool to create complex scenarios\n* Perform investment cost studies\n* Run power flow study using interface to external simulation engine\n* Manage data throughout the lifecycle of a simulation\n\nA detailed tutorial can be found on our [docs].\n\n\n## Where to get it\n* Clone or Fork the source code on [GitHub](https://github.com/Breakthrough-Energy/PowerSimData)\n* Get latest release from PyPi: `pip install powersimdata`\n\n\n## Dependencies\n**PowerSimData** relies on several Python packages all available on\n[PyPi](https://pypi.org/). The list can be found in the ***requirements.txt*** or\n***Pipfile*** files both located at the root of this package.\n\n\n## Installation\nTo take full advantage of our software, we recommend that you clone/fork\n**[plug](https://github.com/Breakthrough-Energy/plug)** and follow the information\ntherein to get our containerized framework up and running. A client/server installation\nis also possible and outlined in our [Installation\nGuide](https://breakthrough-energy.github.io/docs/user/installation_guide.html). 
Either\nway, you will need a powerful solver, e.g. Gurobi, to run complex scenarios.\n\nOnly a limited set of features is available when solely installing **PowerSimData**. If you choose this option, we recommend that you use `pipenv`:\n```sh\npipenv sync\npipenv shell\n```\nsince the dependencies will be installed in an isolated environment. It is of course\npossible to install the dependencies using the requirements file:\n```sh\npip install -r requirements.txt\n```\n\n\n## License\n[MIT](LICENSE)\n\n\n## Documentation\n[Code documentation][docstrings] in the form of Python docstrings along with an overview of\nthe [package][docs] are available on our [website][website].\n\n\n## Communication Channels\n[Sign up](https://science.breakthroughenergy.org/#get-updates) to our email list and\nour Slack workspace to get in touch with us.\n\n\n## Contributing\nAll contributions (bug reports, documentation, feature development, etc.) are welcome. An\noverview of how to contribute to this project can be found in our [Contribution\nGuide](https://breakthrough-energy.github.io/docs/dev/contribution_guide.html).\n\n\n\n[docs]: https://breakthrough-energy.github.io/docs/powersimdata/index.html\n[docstrings]: https://breakthrough-energy.github.io/docs/powersimdata.html\n[website]: https://breakthrough-energy.github.io/docs/\n'",",https://doi.org/10.5281/zenodo.4538590","2018/11/12, 19:45:02",1808,MIT,97,1778,"2023/08/02, 01:30:20",25,545,707,43,84,3,1.9,0.5532915360501567,"2022/12/10, 01:11:22",v0.5.5,1,18,false,,false,false,,,https://github.com/Breakthrough-Energy,https://breakthrough-energy.github.io/docs/,,,,https://avatars.githubusercontent.com/u/68243594?v=4,,, Digital Twins Definition Language ontology for Energy Grid,"A global standard for energy grid asset management, power system operations modeling and the physical energy commodity market.",Azure,https://github.com/Azure/opendigitaltwins-energygrid.git,github,,Energy Distribution and Grids,"2021/06/11, 21:38:00",48,0,9,false,,Microsoft Azure,Azure,,,"b'\n# Digital Twins Definition Language (DTDL) ontology for Energy Grid \n\nDomain ontologies are the foundational components to develop global solutions with industry standards. The Azure IoT engineering team has been collaborating with customers, domain experts, and industry-standard organizations to develop DTDL ontologies by leveraging the existing industry ontologies and best practices. Earlier, we published the DTDL ontologies for [smart buildings](https://github.com/Azure/opendigitaltwins-building) based on RealEstateCore, and for [smart cities](https://github.com/Azure/opendigitaltwins-smartcities) based on ETSI NGSI-LD. Today, we are releasing the energy grid ontology adapted from the [Common Information Model (CIM)](https://cimug.ucaiug.org/Pages/About.aspx), a global standard for energy grid asset management, power system operations modeling and the physical energy commodity market. The CIM-based DTDL ontology provides contextual understanding of data by identifying the properties of various grid entities and the relationships among them. Power & Utilities customers and partners can leverage as well as extend this open-source repository for their solutions and contribute their learnings to the repository for others to benefit from. \n\nThe CIM organizes entities into distinct packages. In this first iteration, we have included core, wire, and generation packages, and prosumer-related entities from metering, customer, and Distributed Energy Resource (DER) packages. 
We have selected them based on the implemented and in-progress solutions, and on the collaborative decision of the extended team members of Agder Energi, Statnett, Sirus, FiWare, and Azure IoT. \n- **Core Package** contains the PowerSystemResource, ConductingEquipment, and common collections of those entities shared by all applications. Most of the other packages have associations and generalizations that depend on the core package. \n- **Wire Package** is an extension to the Core that provides model information on the electrical characteristics of transmission and distribution networks. This package is used by network applications, such as state estimation, load flow, and optimal power flow. \n- **Generation Package** has information for unit commitment and economic dispatch of hydro and thermal generating units, load forecasting, automatic generation control, and unit modeling for training simulation. \nLast but not least \xe2\x80\x93 **prosumer** \xe2\x80\x93 we included various entities related to consumers and DER in the prosumer folder. For example, EquivalentLoad, UsagePoints, and MeterReading.\n\nIf you need additional entities before the next iteration, please contact us. If you already have DTDL grid entities, feel free to contribute them to the repo. \n\n![Energy grid models](EnergyGridOntologyModel.png)\n\n\n\n# How To Use\n\nUsing these models, you can now build an Azure Digital Twins based solution and bring it to life in a live execution environment.\n\nYou can use Azure Digital Twins Explorer to create a sample easily: upload models, instantiate entities in a twins graph, visualize the graph and run queries against the graph. \n\n\n# Contributing\n\nThis project welcomes contributions and suggestions. Most contributions require you to agree to a\nContributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us\nthe rights to use your contribution. For details, visit https://cla.opensource.microsoft.com.\n\nWhen you submit a pull request, a CLA bot will automatically determine whether you need to provide\na CLA and decorate the PR appropriately (e.g., status check, comment). Simply follow the instructions\nprovided by the bot. You will only need to do this once across all repos using our CLA.\n\nThis project has adopted the [Microsoft Open Source Code of Conduct](https://opensource.microsoft.com/codeofconduct/).\nFor more information see the [Code of Conduct FAQ](https://opensource.microsoft.com/codeofconduct/faq/) or\ncontact [opencode@microsoft.com](mailto:opencode@microsoft.com) with any additional questions or comments.\n\n# Modeling guidelines\n\nBefore creating new entities, [check if they exist already in the repo](https://github.com/Azure/opendigitaltwins-energygrid). 
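As a programmatic complement to the Azure Digital Twins Explorer workflow described under "How To Use" above, the sketch below uploads one ontology model and instantiates a twin with the `azure-digitaltwins-core` Python SDK. The endpoint, the model file path, and the DTMI are hypothetical placeholders, not the ontology's real identifiers; treat this as a hedged illustration rather than official guidance.

```python
# Hedged sketch: upload a DTDL model and create a twin via the Python SDK.
# The endpoint, file path, and DTMI below are hypothetical placeholders.
import json

from azure.identity import DefaultAzureCredential
from azure.digitaltwins.core import DigitalTwinsClient

client = DigitalTwinsClient(
    "https://<your-instance>.api.weu.digitaltwins.azure.net",
    DefaultAzureCredential(),
)

# Upload one DTDL model from a clone of the ontology repo.
with open("Core/Substation.json") as f:  # illustrative path
    client.create_models([json.load(f)])

# Instantiate a twin of that model, then query the twins graph.
client.upsert_digital_twin(
    "substation-1",
    {"$metadata": {"$model": "dtmi:example:grid:Substation;1"}},  # hypothetical DTMI
)
for twin in client.query_twins("SELECT * FROM digitaltwins"):
    print(twin["$dtId"])
```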
To check for existing entities, you can look under each folder of the repo.\n\nTo learn how to adopt the ontology for your project, refer to [How to use the ontology](https://github.com/Azure/opendigitaltwins-smartcities#how-to-use).\n\n# Syntax\n\n- Use English terms, preferably American English.\n- Use camel case syntax for attribute names (camelCase).\n- Entity Type names must start with a capital letter, for example: Substation.\n- Use nouns, not verbs, for Attributes of type Property, for example: nominalVoltage, EquipmentContainer.\n- Use verbs for Relationships, optionally with an object, for example: locatedAt, partOf.\n\n# Data Types\nDTDL provides a full set of [primitive data types, along with support for a variety of complex schemas](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/DTDL/v2/dtdlv2.md#schemas).\n\n# Validation\nUse the [DTDL Validator tool](https://docs.microsoft.com/en-us/samples/azure-samples/dtdl-validator/dtdl-validator/) to validate the model document to make sure the DTDL is valid.\n\n# Resources\n- [Azure Digital Twins product page](https://azure.microsoft.com/services/digital-twins/)\n- [Azure Digital Twins documentation](https://docs.microsoft.com/en-us/azure/digital-twins/)\n- [Azure Digital Twins Tech Deep Dive](https://www.youtube.com/watch?v=5Ku55g1GQG8&feature=youtu.be)\n- [Digital Twins Definition Language specification](https://github.com/Azure/opendigitaltwins-dtdl)\n- [DTDL Ontologies](https://docs.microsoft.com/en-us/azure/digital-twins/concepts-ontologies)\n- [ADT Explorer](https://github.com/Azure-Samples/digital-twins-explorer)\n- [Azure Digital Twins Model Uploader](https://github.com/Azure/opendigitaltwins-tools/tree/master/ADTTools#uploadmodels)\n'",,"2020/09/01, 23:19:08",1148,MIT,0,147,"2021/05/31, 07:58:29",2,54,58,0,877,1,0.0,0.1348314606741573,,,0,6,false,,true,false,,,https://github.com/Azure,https://docs.microsoft.com/en-us/azure/,"Redmond, WA & the cloud ☁️",,,https://avatars.githubusercontent.com/u/6844498?v=4,,, SIMONA,Provides a simulation toolbox to run and implement large-scale agent-based electricity grid simulations with a focus on distribution grids.,ie3-institute,https://github.com/ie3-institute/simona.git,github,"simulation,powersystem,electricity,energy,agent-based-simulation,energy-transition,research",Energy Distribution and Grids,"2023/10/23, 05:39:38",22,0,6,true,Scala,"Institute of Energy Systems, Energy Efficiency and Energy Economics - ie3",ie3-institute,"Scala,Groovy,Shell,Java,Dockerfile",,"b'
\n\n# SIMONA\n[![Build Status](https://simona.ie3.e-technik.tu-dortmund.de/ci/buildStatus/icon?job=ie3-institute%2Fsimona%2Fdev)](https://simona.ie3.e-technik.tu-dortmund.de/ci/job/ie3-institute/job/simona/job/dev/)\n[![Quality Gate Status](https://simona.ie3.e-technik.tu-dortmund.de/sonar/api/project_badges/measure?project=edu.ie3%3Asimona&metric=alert_status)](https://simona.ie3.e-technik.tu-dortmund.de/sonar/dashboard?id=edu.ie3%3Asimona)\n[![codecov](https://codecov.io/gh/ie3-institute/simona/branch/main/graph/badge.svg?token=pDg4Pbbp9L)](https://codecov.io/gh/ie3-institute/simona)\n[![Documentation Status](https://readthedocs.org/projects/simona/badge/?version=latest)](https://simona.readthedocs.io/en/latest/?badge=latest)\n[![License](https://img.shields.io/github/license/ie3-institute/simona)](https://github.com/ie3-institute/simona/blob/main/LICENSE)\n[![Maven Central](https://img.shields.io/maven-central/v/com.github.ie3-institute/simona.svg?label=Maven%20Central)](https://search.maven.org/search?q=g:%22com.github.ie3-institute%22%20AND%20a:%22simona%22)\n\nThe agent-based simulation environment SIMONA provides a simulation toolbox to run and implement large-scale agent-based\nelectricity grid simulations with a focus on distribution grids. As a result, close-to-reality time series are\ngenerated from various system participants and grid assets that can be used to analyze a given power grid.\nApplication cases include, for example, distribution grid planning, but also flexibility analysis or coupled\nsector interdependency analysis. The framework contains several out-of-the-box models for a wide variety of grid participants as well as their\noperational behavior.\n\nMore information is provided in the project\'s [documentation](http://simona.readthedocs.io/).\n\n## Usage and Contribution\nSIMONA is part of several ongoing research projects and will be part of future research projects. Hence, the codebase\nis continuously under development, driven by different perspectives, needs and developers.\n\nWe invite everyone to use SIMONA for their own research or for usage in a research project. If you use SIMONA for your\nown projects or research, please provide a reference to this repository. Furthermore, if you publish your scientific work,\nplease give appropriate credit by citing one of the introduction papers of SIMONA. \n\nWe\'re also happy for any feedback and contributions. 
For details on how to contribute, please take a look at the\nCONTRIBUTING.md file in the root directory of this repository.\n\n## Questions\nFor all SIMONA-related questions, please feel free to contact the people involved in the development and maintenance of SIMONA.\nFor the moment, these people are:\n\n- Feismann, Daniel - [daniel.feismann@tu-dortmund.de](mailto:daniel.feismann@tu-dortmund.de)\n- Peter, Sebastian - [sebastian.peter@tu-dortmund.de](mailto:sebastian.peter@tu-dortmund.de)\n- Oberlie\xc3\x9fen, Thomas - [thomas.oberliessen@tu-dortmund.de](mailto:thomas.oberliessen@tu-dortmund.de)\n- Sen Sarma, Debopama - [debopama-sen.sarma@tu-dortmund.de](mailto:debopama-sen.sarma@tu-dortmund.de)\n- Bao, Johannes - [johannes.bao@tu-dortmund.de](mailto:johannes.bao@tu-dortmund.de)\n- Hohmann, Julian - [julian.hohmann@tu-dortmund.de](mailto:julian.hohmann@tu-dortmund.de)\n- Kittl, Chris - [chris.kittl@tu-dortmund.de](mailto:chris.kittl@tu-dortmund.de)\n- Hiry, Johannes - [johannes.hiry@tu-dortmund.de](mailto:johannes.hiry@tu-dortmund.de)\n'",,"2021/11/19, 11:25:24",705,BSD-3-Clause,716,1638,"2023/10/23, 05:39:39",128,379,497,270,2,36,1.0,0.6641975308641975,"2023/08/07, 16:40:27",3.0.0,12,11,false,,false,true,,,https://github.com/ie3-institute,https://ie3.etit.tu-dortmund.de/,"Dortmund, Germany ",,,https://avatars.githubusercontent.com/u/58265273?v=4,,, Power Grid Model, A library for steady-state distribution power system analysis distributed for Python and C.,PowerGridModel,https://github.com/PowerGridModel/power-grid-model.git,github,"powersystem,powerflow,stateestimation,python,cpp,numpy,eigen3",Energy Distribution and Grids,"2023/10/25, 13:19:37",93,3,43,true,C++,Power Grid Model,PowerGridModel,"C++,Python,C,CMake,Jinja,Shell",,"b'\n[![PyPI version](https://badge.fury.io/py/power-grid-model.svg?no-cache)](https://badge.fury.io/py/power-grid-model)\n[![Anaconda-Server Badge](https://anaconda.org/conda-forge/power-grid-model/badges/version.svg?no-cache)](https://anaconda.org/conda-forge/power-grid-model)\n[![License: MPL-2.0](https://img.shields.io/badge/License-MPL2.0-informational.svg)](https://github.com/PowerGridModel/power-grid-model/blob/main/LICENSE)\n[![Build and Test C++ and Python](https://github.com/PowerGridModel/power-grid-model/actions/workflows/main.yml/badge.svg)](https://github.com/PowerGridModel/power-grid-model/actions/workflows/main.yml)\n[![Check Code Quality](https://github.com/PowerGridModel/power-grid-model/actions/workflows/check-code-quality.yml/badge.svg)](https://github.com/PowerGridModel/power-grid-model/actions/workflows/check-code-quality.yml)\n[![Clang Tidy](https://github.com/PowerGridModel/power-grid-model/actions/workflows/clang-tidy.yml/badge.svg)](https://github.com/PowerGridModel/power-grid-model/actions/workflows/clang-tidy.yml)\n[![REUSE Compliance Check](https://github.com/PowerGridModel/power-grid-model/actions/workflows/reuse-compliance.yml/badge.svg)](https://github.com/PowerGridModel/power-grid-model/actions/workflows/reuse-compliance.yml)\n[![docs](https://readthedocs.org/projects/power-grid-model/badge/)](https://power-grid-model.readthedocs.io/en/stable/)\n[![Downloads](https://static.pepy.tech/badge/power-grid-model)](https://pepy.tech/project/power-grid-model)\n[![Downloads](https://static.pepy.tech/badge/power-grid-model/month)](https://pepy.tech/project/power-grid-model)\n\n[![Quality Gate 
Status](https://sonarcloud.io/api/project_badges/measure?project=PowerGridModel_power-grid-model&metric=alert_status)](https://sonarcloud.io/summary/new_code?id=PowerGridModel_power-grid-model)\n[![Coverage](https://sonarcloud.io/api/project_badges/measure?project=PowerGridModel_power-grid-model&metric=coverage)](https://sonarcloud.io/summary/new_code?id=PowerGridModel_power-grid-model)\n[![Maintainability Rating](https://sonarcloud.io/api/project_badges/measure?project=PowerGridModel_power-grid-model&metric=sqale_rating)](https://sonarcloud.io/summary/new_code?id=PowerGridModel_power-grid-model)\n[![Reliability Rating](https://sonarcloud.io/api/project_badges/measure?project=PowerGridModel_power-grid-model&metric=reliability_rating)](https://sonarcloud.io/summary/new_code?id=PowerGridModel_power-grid-model)\n[![Security Rating](https://sonarcloud.io/api/project_badges/measure?project=PowerGridModel_power-grid-model&metric=security_rating)](https://sonarcloud.io/summary/new_code?id=PowerGridModel_power-grid-model)\n[![Vulnerabilities](https://sonarcloud.io/api/project_badges/measure?project=PowerGridModel_power-grid-model&metric=vulnerabilities)](https://sonarcloud.io/summary/new_code?id=PowerGridModel_power-grid-model)\n\n[![DOI](https://zenodo.org/badge/DOI/10.5281/zenodo.8054429.svg)](https://zenodo.org/record/8054429)\n\n[![](https://github.com/PowerGridModel/.github/blob/main/artwork/svg/color.svg)](#)\n\n# Power Grid Model\n\n`power-grid-model` is a library for steady-state distribution power system analysis distributed for Python and C.\nThe core of the library is written in C++.\nCurrently, it supports the following calculations:\n\n* Power Flow\n* State Estimation\n* Short Circuit\n\nSee the [power-grid-model documentation](https://power-grid-model.readthedocs.io/en/stable/) for more information.\nFor various conversions to the power-grid-model, refer to the [power-grid-model-io](https://github.com/PowerGridModel/power-grid-model-io) repository.\n\n```{note}\nWant to be updated on the latest news and releases? Subscribe to the Power Grid Model mailing list by sending an (empty) email to: powergridmodel+subscribe@lists.lfenergy.org\n```\n\n## Installation\n\n### Install from PyPI\n\nYou can directly install the package from PyPI.\n\n```\npip install power-grid-model\n```\n\n### Install from Conda\n\nIf you are using `conda`, you can directly install the package from `conda-forge` channel.\n\n```\nconda install -c conda-forge power-grid-model\n```\n\n### Build and install from Source\n\nTo install the library from source, refer to the [Build Guide](https://power-grid-model.readthedocs.io/en/stable/advanced_documentation/build-guide.html).\n\n## Examples\n\nPlease refer to [Examples](https://github.com/PowerGridModel/power-grid-model-workshop/tree/main/examples) for more detailed examples for power flow and state estimation. \nNotebooks for validating the input data and exporting input/output data are also included.\n\n## License\n\nThis project is licensed under the Mozilla Public License, version 2.0 - see [LICENSE](https://github.com/PowerGridModel/power-grid-model/blob/main/LICENSE) for details.\n\n## Licenses third-party libraries\n\nThis project includes third-party libraries, \nwhich are licensed under their own respective Open-Source licenses.\nSPDX-License-Identifier headers are used to show which license is applicable. 
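To ground the installation steps and the Examples pointer above, here is a condensed symmetric power-flow run. It closely follows the quickstart in the power-grid-model documentation; the two-node network and all parameter values are purely illustrative.

```python
# Condensed power-flow run, closely following the documented quickstart.
# The two-node network and all parameter values are illustrative only.
from power_grid_model import LoadGenType, PowerGridModel, initialize_array

# node 1 (with source) -- line -- node 2 (with load)
node = initialize_array("input", "node", 2)
node["id"] = [1, 2]
node["u_rated"] = [10.5e3, 10.5e3]  # V

line = initialize_array("input", "line", 1)
line["id"] = [3]
line["from_node"], line["to_node"] = [1], [2]
line["from_status"], line["to_status"] = [1], [1]
line["r1"], line["x1"], line["c1"], line["tan1"] = [0.25], [0.2], [10e-6], [0.0]

source = initialize_array("input", "source", 1)
source["id"], source["node"], source["status"], source["u_ref"] = [4], [1], [1], [1.0]

sym_load = initialize_array("input", "sym_load", 1)
sym_load["id"], sym_load["node"], sym_load["status"] = [5], [2], [1]
sym_load["type"] = [LoadGenType.const_power]
sym_load["p_specified"], sym_load["q_specified"] = [2e6], [0.5e6]  # W, var

model = PowerGridModel(
    {"node": node, "line": line, "source": source, "sym_load": sym_load}
)
result = model.calculate_power_flow()
print(result["node"]["u_pu"])  # per-unit voltage magnitude at each node
```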
\nThe corresponding license files can be found in the [LICENSES](https://github.com/PowerGridModel/power-grid-model/tree/main/LICENSES) directory.\n\n## Contributing\n\nPlease read [CODE_OF_CONDUCT](https://github.com/PowerGridModel/.github/blob/main/CODE_OF_CONDUCT.md), [CONTRIBUTING](https://github.com/PowerGridModel/.github/blob/main/CONTRIBUTING.md), [PROJECT GOVERNANCE](https://github.com/PowerGridModel/.github/blob/main/GOVERNANCE.md) and [RELEASE](https://github.com/PowerGridModel/.github/blob/main/RELEASE.md) for details on the process \nfor submitting pull requests to us.\n\nVisit [Contribute](https://github.com/PowerGridModel/power-grid-model/contribute) for a list of good first issues in this repo.\n\n## Citations\n\nIf you are using Power Grid Model in your research work, please consider citing our library using the following references.\n\n[![DOI](https://zenodo.org/badge/DOI/10.5281/zenodo.8054429.svg)](https://zenodo.org/record/8054429)\n\n```bibtex\n@software{Xiang_PowerGridModel_power-grid-model,\n author = {Xiang, Yu and Salemink, Peter and Bharambe, Nitish and Govers, Martinus and van den Bogaard, Jonas and Stoeller, Bram and Jagutis, Laurynas and Wang, Chenguang and {Contributors from the LF Energy project Power Grid Model}},\n doi = {10.5281/zenodo.8054429},\n license = {MPL-2.0},\n title = {{PowerGridModel/power-grid-model}},\n url = {https://github.com/PowerGridModel/power-grid-model}\n}\n@inproceedings{Xiang2023,\n author = {Xiang, Yu and Salemink, Peter and Stoeller, Bram and Bharambe, Nitish and van Westering, Werner},\n booktitle = {CIRED 2023 - The 27th International Conference and Exhibition on Electricity Distribution},\n title = {Power grid model: A high-performance distribution grid calculation library},\n year = {2023},\n volume={2023},\n number = {},\n pages={1-5}\n}\n```\n\n## Contact\n\nPlease read [SUPPORT](https://github.com/PowerGridModel/.github/blob/main/SUPPORT.md) for how to get in contact with the Power Grid Model project.\n'",",https://zenodo.org/record/8054429,https://zenodo.org/record/8054429","2022/01/20, 12:31:31",643,MPL-2.0,2592,3557,"2023/10/25, 11:41:04",37,317,374,256,0,3,2.6,0.5997838616714697,"2023/10/25, 12:35:56",v1.6.18,7,11,false,,false,false,"PowerGridModel/benchmark-pgm-vision,PowerGridModel/power-grid-model-workshop,PowerGridModel/power-grid-model-benchmark",,https://github.com/PowerGridModel,,Netherlands,,,https://avatars.githubusercontent.com/u/128388838?v=4,,, ElectricGrid.jl,A time domain electrical energy grid modeling and simulation tool with a focus on the control of power electronics converters.,upb-lea,https://github.com/upb-lea/ElectricGrid.jl.git,github,,Energy Distribution and Grids,"2023/10/05, 14:34:53",27,0,23,true,Julia,Paderborn University - LEA,upb-lea,"Julia,QML",https://upb-lea.github.io/ElectricGrid.jl/,"b'\n# ElectricGrid.jl\n\n\n\n| [**Reference docs**](https://upb-lea.github.io/ElectricGrid.jl/dev/)\n| [**Install guide**](#installation)\n| [**Quickstart**](#getting-started)\n| [**Release notes**](https://github.com/upb-lea/ElectricGrid.jl/releases/new)\n\n[![DOI](https://joss.theoj.org/papers/10.21105/joss.05616/status.svg)](https://doi.org/10.21105/joss.05616)\n[![Build 
Status](https://github.com/upb-lea/ElectricGrid.jl/actions/workflows/CI.yml/badge.svg)](https://github.com/upb-lea/ElectricGrid.jl/actions/workflows/CI.yml)\n[![License](https://img.shields.io/github/license/mashape/apistatus.svg?maxAge=2592000)](https://github.com/upb-lea/ElectricGrid.jl/blob/main/LICENSE)\n\n\n\n\nElectricGrid.jl is a library for setting up realistic electric grid simulations with extensive support for control options. With ElectricGrid.jl you can\n- create a simulation environment for an electric grid by defining its sources, loads, and cable connections,\n- set detailed parameters of your electric components or let them be auto-generated,\n- choose different control modes for each power electronic converter in your system and\n- use the agent architecture of [ReinforcementLearning.jl](https://juliareinforcementlearning.org/) to either train RL agents as controllers or write your own.\n\n\n![ElectricGrid Framework](docs/src/assets/Overview_EG.png)\n\n## Installation\n- Installation using the Julia package manager (recommended if you want to use ElectricGrid in your project):\n - In a Julia terminal run the following:\n```\nimport Pkg\nPkg.add(""ElectricGrid"")\n```\nor press `]` in the Julia REPL to enter Pkg mode and then run\n```\nadd ElectricGrid\n```\n\n- Install from GitHub source (recommended if you want to run the example notebooks and scripts):\n - Clone the repository and navigate to the directory\n```\ngit clone https://github.com/upb-lea/ElectricGrid.jl.git\n```\n\n## Getting Started\n\nTo get started with ElectricGrid.jl, the following interactive notebooks are useful. They show how to use the ElectricGrid.jl framework to build and simulate the dynamics of an electric power grid controlled via classic controllers or train common RL agents for different control tasks:\n* [Create an environment with ElectricGrid.jl](https://github.com/upb-lea/ElectricGrid.jl/blob/main/examples/notebooks/Env_Create.ipynb)\n* [Theory behind ElectricGrid.jl - Modelling Dynamics using Linear State-Space Systems](https://github.com/upb-lea/ElectricGrid.jl/blob/main/examples/notebooks/NodeConstructor_Theory.ipynb)\n* [Classic Controlled Electric Power Grids - State-of-the-Art](https://github.com/upb-lea/ElectricGrid.jl/blob/main/examples/notebooks/Classical_Controllers_1_Swing.ipynb)\n* [Use RL Agents in the ElectricGrid.jl Framework](https://github.com/upb-lea/ElectricGrid.jl/blob/main/examples/notebooks/RL_Single_Agent.ipynb)\n\nAn overview of all parameters defining the experiment setting with regard to the electric grid can be found here:\n* [Default Parameters](https://github.com/upb-lea/ElectricGrid.jl/blob/main/examples/notebooks/Default_Parameters.ipynb)\n\n\nTo run a simple example, the following few lines of code can be executed:\n\n```\nusing ElectricGrid\n\nenv = ElectricGridEnv(num_sources = 1, num_loads = 1)\nMulti_Agent = SetupAgents(env)\nhook = Simulate(Multi_Agent, env)\nRenderHookResults(hook = hook)\n```\n\nThis is a minimal example of a full ElectricGrid.jl setup. 
\nThere should also appear a plot that looks like this:\n![output of the minimal example](docs/src/assets/output1.png)\n\n\n## Using the GUI\n\nThe current version of ElectricGrid features a graphical user interface (GUI) that helps with setting up a simulation.\nThis is built on the library [QML.jl](https://github.com/JuliaGraphics/QML.jl), which, at the time of writing, stopped working in its current release version.\nFor that reason it is **required to clone this codebase** and install `QML.jl` in its GitHub main state manually if you want to use the GUI.\n\n```\nimport Pkg\nPkg.add(name=""QML"", rev=""main"")\n```\nor press `]` in the Julia REPL to enter Pkg mode and then run\n```\nadd QML#main\n```\n\n![GUI example](docs/src/assets/gui_example.png)\n\nUsage of the GUI is explained in the [GUI section in the docs](https://upb-lea.github.io/ElectricGrid.jl/dev/Gui/).\n'",",https://doi.org/10.21105/joss.05616","2021/12/10, 11:45:03",684,MIT,832,1277,"2023/10/05, 14:34:54",2,87,134,120,20,0,0.5,0.7057086614173229,"2023/10/05, 06:55:48",v1.1,0,9,false,,false,false,,,https://github.com/upb-lea,https://ei.uni-paderborn.de/en/lea/,"Paderborn, Germany",,,https://avatars.githubusercontent.com/u/55782224?v=4,,, PowerModelsACDC.jl,"A Julia/JuMP/PowerModels package with models for DC lines, meshed DC networks, and AC DC converters.",Electa-Git,https://github.com/Electa-Git/PowerModelsACDC.jl.git,github,,Energy Distribution and Grids,"2023/07/03, 02:05:29",41,0,12,true,MATLAB,,,"MATLAB,Julia",,"b'# PowerModelsACDC.jl\n\nStatus:\n[![CI](https://github.com/Electa-Git/PowerModelsACDC.jl/workflows/CI/badge.svg)](https://github.com/Electa-Git/PowerModelsACDC.jl/actions?query=workflow%3ACI)\n\n\n\n\nPowerModelsACDC.jl is a Julia/JuMP/PowerModels package with models for DC lines, meshed DC networks, and AC DC converters.\nBuilding upon the PowerModels architecture, the code is engineered to decouple problem specifications (e.g. Power Flow, Optimal Power Flow, ...) from the power network formulations (e.g. AC, DC-approximation, SOC-relaxation, ...).\n\n**Installation**\nThe latest stable release of PowerModelsACDC can be installed using the Julia package manager with\n\n```julia\nPkg.add(""PowerModelsACDC"")\n```\n\n\n**Core Problem Specifications**\n* Optimal Power Flow with both point-to-point and meshed ac and dc grid support\n* Power Flow with both point-to-point and meshed ac and dc grid support\n* TNEP problem for point-to-point and meshed ac and dc grids\n\n\n**Core Formulations**\nAll AC formulations of PowerModels are re-used.\nTherefore, the core formulations in this package are twofold: those for the DC network and those for the AC/DC converters.\n\nDC network connecting dc nodes:\n* DC nonlinear nonconvex formulation (NLP)\n* Convexified (SOC) bus injection model and branch flow model for the DC grid (which can be used with both the SDP and SOC convex relaxation formulations for the AC side)\n* Linearized (LP) active power only formulation, extending the linearized \'DC\' approximation of AC grids to DC grids\n\nAC/DC converter stations, connecting ac nodes and dc nodes, are composed of a transformer, filter, phase reactor and LCC/VSC converter. The passive components can be removed/disabled. Convex relaxation and linearized models for the passive components have been described; therefore, the converter station formulation is categorized by converter model complexity. 
The converter model includes constant losses and losses proportional to the current magnitude as well as current magnitude squared.\n* Nonlinear nonconvex formulation (NLP)\n* Convexified formulation (SOC)\n* Linearized formulation (LP)\n\n**Network Data Formats**\n* MatACDC-style "".m"" files (matpower "".m""-derived).\n* Matpower-style "".m"" files, including matpower\'s dcline extensions.\n* PTI "".raw"" files, using the PowerModels.jl parser\n\nNote that the matpower-style `dcline` is transformed internally to two converters + a dcline connecting them. Such a transformation is exact for the \'dc\'-style linearized models, but not for the AC models.\n\nFor further information, consult the PowerModels [documentation](https://lanl-ansi.github.io/PowerModels.jl/stable/).\n\n\n## Acknowledgments\n\nThe developers thank Carleton Coffrin (LANL) for his support.\n\n## Contributors\n\n- Hakan Ergun (KU Leuven / EnergyVille): Main developer\n- Frederik Geth (KU Leuven / EnergyVille): Formulations & relaxations of the OPF problem\n- Jay Dave (KU Leuven / EnergyVille): Transmission expansion planning\n- Ghulam Mohy Ud Din (CSIRO): ACR formulation of the OPF problem\n\n## Citing PowerModelsACDC\n\nIf you find PowerModelsACDC useful in your work, we kindly request that you cite the following publications:\n[AC/DC OPF Core](https://ieeexplore.ieee.org/document/8636236):\n```\n@ARTICLE{8636236,\nauthor={H. {Ergun} and J. {Dave} and D. {Van Hertem} and F. {Geth}},\njournal={IEEE Transactions on Power Systems},\ntitle={Optimal Power Flow for AC\xe2\x80\x93DC Grids: Formulation, Convex Relaxation, Linear Approximation, and Implementation},\nyear={2019},\nvolume={34},\nnumber={4},\npages={2980-2990},\nkeywords={AC-DC power convertors;approximation theory;HVDC power convertors;HVDC power transmission;power grids;power transmission control;reactive power control;AC-DC grids;linear approximation;active power control capabilities;reactive power control capabilities;HVDC converter stations;power systems;ancillary services;optimal power flow model;convex relaxation formulation;parameterized ac-dc converter model;common ac optimal power flow formulations;dc nodes;converter station technologies;ac nodes;ancillary security;open-source tool;Mathematical model;HVDC transmission;AC-DC power converters;Numerical models;Inductors;Impedance;Linear approximation;HVDC transmission;flexible ac transmission systems;power system analysis computing},\ndoi={10.1109/TPWRS.2019.2897835},\nISSN={0885-8950},\nmonth={July},}\n```\n[TNEP Extension 1](https://digital-library.theiet.org/content/journals/10.1049/iet-gtd.2019.0383):\n```\n@ARTICLE{\n iet:/content/journals/10.1049/iet-gtd.2019.0383,\n author = {Jay Dave and Hakan Ergun and Ting An and Jingjing Lu and Dirk Van Hertem},\n keywords = {power systems;meshed HVDC grids;increased utilisation;presented formulations;convex formulations;second-order cone convex relaxation;multiple HVDC links;linear approximation;dc grids;transmission network expansion planning problem;high-voltage direct current;traditional ac grid;TNEP problem;nonlinear formulation;},\n ISSN = {1751-8687},\n title = {TNEP of meshed HVDC grids: \xe2\x80\x98AC\xe2\x80\x99, \xe2\x80\x98DC\xe2\x80\x99 and convex formulations},\n journal = {IET Generation, Transmission & Distribution},\n issue = {24}, \n volume = {13},\n year = {2019},\n month = {December},\n pages = {5523-5532(9)},\n publisher ={Institution of Engineering and Technology},\n copyright = {\xc2\xa9 The Institution 
of Engineering and Technology},\n url = {https://digital-library.theiet.org/content/journals/10.1049/iet-gtd.2019.0383}\n}\n```\n[TNEP Extension 2](https://doi.org/10.1016/j.epsr.2020.106683):\n```\n@ARTICLE{dave2021relaxations,\n title={Relaxations and approximations of HVdc grid TNEP problem},\n author={Dave, Jay and Ergun, Hakan and Van Hertem, Dirk},\n journal={Electric Power Systems Research},\n volume={192},\n pages={106683},\n year={2021},\n publisher={Elsevier}\n}\n```\n## License\n\nThis code is provided under a BSD license.\n'",",https://doi.org/10.1016/j.epsr.2020.106683","2018/01/11, 14:48:32",2113,BSD-3-Clause,22,393,"2022/12/13, 12:55:48",16,37,63,8,316,0,0.0,0.10097719869706845,"2023/07/03, 02:48:35",v0.6.3,0,9,false,,false,false,,,,,,,,,,, FlexPlan.jl,An open-source Julia tool for transmission and distribution expansion planning considering storage and demand flexibility.,Electa-Git,https://github.com/Electa-Git/FlexPlan.jl.git,github,"julia,optimization,planning-tool,storage,demand-flexibility,distribution-grid,transmission-grid",Energy Distribution and Grids,"2023/08/30, 04:27:43",18,0,9,true,Julia,,,"Julia,MATLAB",,"b'# FlexPlan.jl\n\nStatus:\n[![CI](https://github.com/Electa-Git/FlexPlan.jl/workflows/CI/badge.svg)](https://github.com/Electa-Git/FlexPlan.jl/actions?query=workflow%3ACI)\n\n\n[![DOI](https://zenodo.org/badge/293785598.svg)](https://zenodo.org/badge/latestdoi/293785598)\n\n\n## Overview\n\nFlexPlan.jl is a Julia/JuMP package to carry out transmission and distribution network planning considering AC and DC technology, storage and demand flexibility as possible expansion candidates.\nUsing time series input on renewable generation and demand, as well as a list of candidates for grid expansion, a mixed-integer linear problem is constructed which can be solved with any commercial or open-source MILP solver.\nThe package builds upon the [PowerModels](https://github.com/lanl-ansi/PowerModels.jl) and [PowerModelsACDC](https://github.com/Electa-Git/PowerModelsACDC.jl) packages, and uses a similar structure.\n\nModelling features provided by the package include:\n\n- Joint multistage, multiperiod formulation to model a number of planning years, and planning hours within years for a sequential grid expansion plan.\n- Stochastic formulation of the planning problem, based on scenario probabilities for a number of different time series.\n- Extensive, parametrized models for storage, demand flexibility and DC grids.\n- Linearized DistFlow model for radial distribution networks, considering reactive power and voltage magnitudes.\n- Support of networks composed of transmission and distribution (T&D), with the possibility of using two different power flow models.\n- Heuristic procedure for efficient, near-optimal planning of T&D networks.\n- Basic implementations of the Benders decomposition algorithm to efficiently solve the stochastic planning problem.\n\n\n## Documentation\n\nThe package [documentation](https://electa-git.github.io/FlexPlan.jl/dev/) includes useful information comprising links to [example scripts](https://electa-git.github.io/FlexPlan.jl/dev/examples/) and a [tutorial](https://electa-git.github.io/FlexPlan.jl/dev/tutorial/).\n\nAdditionally, these presentations provide a brief introduction to various aspects of FlexPlan:\n\n- Network expansion planning with FlexPlan.jl [[PDF](/docs/src/assets/20230216_flexplan_seminar_energyville.pdf)] \xe2\x80\x93 EnergyVille, 16/02/2023\n- FlexPlan.jl \xe2\x80\x93 An open-source Julia tool for holistic transmission and 
distribution grid planning [[PDF](/docs/src/assets/20230328_osmses2023_conference.pdf)] \xe2\x80\x93 OSMSES 2023 conference, Aachen, 28/03/2023\n\nAll notable changes to the source code are documented in the [changelog](/CHANGELOG.md).\n\n## Installation of FlexPlan\n\nFrom Julia, FlexPlan can be installed using the built-in package manager:\n```julia\nusing Pkg\nPkg.add(""FlexPlan"")\n```\n\n## Development\n\nFlexPlan.jl is research-grade software and is constantly being improved and extended.\nIf you have suggestions for improvement, please contact us via the Issues page on the repository.\n\n## Acknowledgements\n\nThis code has been developed as part of the European Union\xe2\x80\x99s Horizon 2020 research and innovation programme under the FlexPlan project (grant agreement no. 863819).\n\nDeveloped by:\n\n- Hakan Ergun (KU Leuven / EnergyVille)\n- Matteo Rossini (RSE)\n- Marco Rossi (RSE)\n- Damien Lepage (N-Side)\n- Iver Bakken Sperstad (SINTEF)\n- Espen Flo B\xc3\xb8dal (SINTEF)\n- Merkebu Zenebe Degefa (SINTEF)\n- Reinhilde D\'Hulst (VITO / EnergyVille)\n\nThe developers thank Carleton Coffrin (Los Alamos National Laboratory) for his countless design tips.\n\n## Citing FlexPlan.jl\n\nIf you find FlexPlan.jl useful in your work, we kindly request that you cite the following [publication](https://doi.org/10.1109/osmses58477.2023.10089624) ([preprint](https://doi.org/10.5281/zenodo.7705908)):\n\n```bibtex\n@inproceedings{FlexPlan.jl,\n author = {Matteo Rossini and Hakan Ergun and Marco Rossi},\n title = {{FlexPlan}.jl \xe2\x80\x93 An open-source {Julia} tool for holistic transmission and distribution grid planning},\n booktitle = {2023 Open Source Modelling and Simulation of Energy Systems ({OSMSES})},\n year = {2023},\n month = {mar},\n publisher = {{IEEE}},\n doi = {10.1109/osmses58477.2023.10089624},\n url = {https://doi.org/10.1109/osmses58477.2023.10089624}\n}\n```\n\n## License\n\nThis code is provided under a [BSD 3-Clause License](/LICENSE.md).\n'",",https://zenodo.org/badge/latestdoi/293785598,https://doi.org/10.1109/osmses58477.2023.10089624,https://doi.org/10.5281/zenodo.7705908,https://doi.org/10.1109/osmses58477.2023.10089624","2020/09/08, 11:09:37",1142,BSD-3-Clause,55,742,"2023/01/04, 10:05:14",1,118,136,14,294,0,0.2,0.281618887015177,"2023/06/14, 03:32:07",v0.3.1,0,9,false,,false,false,,,,,,,,,,, Easy SimAuto,An easy-to-use Power System Analysis Automation Platform atop PowerWorld's Simulator Automation Server.,mzy2240,https://github.com/mzy2240/ESA.git,github,"python,powerworld,simulator,simauto,automation,powerworld-simulator,esa,simulator-automation-server,smart-grid,numpy,pandas,powersystem,contingency-analysis,hpc,numba,pythran,power-flow,transient-stability,graph-analysis",Energy Distribution and Grids,"2023/06/05, 21:15:21",37,0,7,true,Python,,,"Python,TeX",https://mzy2240.github.io/ESA/,"b""Easy SimAuto (ESA)\n==================\n.. image:: https://img.shields.io/pypi/v/esa.svg\n :target: https://pypi.org/project/esa/\n.. image:: https://img.shields.io/pypi/pyversions/esa.svg\n :target: https://pypi.org/project/esa/\n.. image:: https://img.shields.io/discord/1114563747651006524\n :target: https://discord.gg/V9v8NRCT\n.. image:: https://joss.theoj.org/papers/10.21105/joss.02289/status.svg\n :target: https://doi.org/10.21105/joss.02289\n.. image:: https://img.shields.io/pypi/l/esa.svg\n :target: https://github.com/mzy2240/ESA/blob/master/LICENSE\n.. image:: https://pepy.tech/badge/esa/month\n :target: https://pepy.tech/project/esa\n.. 
image:: https://img.shields.io/badge/coverage-100%25-brightgreen\n :target: https://pypi.org/project/esa/\n\n\nEasy SimAuto (ESA) is an easy-to-use Power System Analysis Automation\nPlatform atop PowerWorld's Simulator Automation Server (SimAuto).\nESA wraps all PowerWorld SimAuto functions, supports Auxiliary scripts,\nprovides helper functions to further simplify working with SimAuto, and\nalso ships native implementations of state-of-the-art (SOTA) algorithms. Wherever\npossible, data is returned as Pandas DataFrames, making analysis a breeze.\nESA is well tested and fully `documented`_.\n\n`Documentation`_\n----------------\n\nFor quick-start directions, installation instructions, API reference,\nexamples, and more, please check out ESA's `documentation`_.\n\nIf you have your own copy of the ESA repository, you can also view the\ndocumentation locally by navigating to the directory ``docs/html`` and\nopening ``index.html`` with your web browser.\n\nIf you want to use ESA or SimAuto from Julia, definitely check `EasySimauto.jl `__\nfor a Julia wrapper of ESA.\n\nCitation\n--------\n\nIf you use ESA in any of your work, please use the citation below.\n\n.. code:: latex\n\n @article{ESA,\n doi = {10.21105/joss.02289},\n url = {https://doi.org/10.21105/joss.02289},\n year = {2020},\n publisher = {The Open Journal},\n volume = {5},\n number = {50},\n pages = {2289},\n author = {Brandon L. Thayer and Zeyu Mao and Yijing Liu and Katherine Davis and Thomas J. Overbye},\n title = {Easy SimAuto (ESA): A Python Package that Simplifies Interacting with PowerWorld Simulator},\n journal = {Journal of Open Source Software}\n }\n\nInstallation\n------------\n\nPlease refer to ESA's `documentation `__ for full, detailed installation\ndirections. In many cases, ESA can simply be installed by:\n\n.. code:: bat\n\n python -m pip install esa\n \nSimulator Compatibility\n-----------------------\n\nCurrently, ESA supports PowerWorld Simulator V17, V18, V19, V20, V21, V22 and V23.\n\n\nTesting Coverage\n----------------\n\nThe ESA team works hard to ensure ESA is well tested, and we strive for\n100% testing coverage (sometimes we cannot reach it due to the lack of specific add-ons). The table below shows the most up-to-date\ntesting coverage data for ESA, using `coverage\n`__.\n\n.. table:: ESA's testing coverage as of 2023-05-31 (Git commit: 7180cc9)\n :widths: auto\n :align: left\n\n +-----------------+-------------------+-----------------+-----------------+--------------------+\n | Name            | Num. Statements   | Missing Lines   | Covered Lines   | Percent Coverage   |\n +=================+===================+=================+=================+====================+\n | esa/__init__.py | 2                 | 0               | 2               | 100                |\n +-----------------+-------------------+-----------------+-----------------+--------------------+\n | esa/saw.py      | 1206              | 2               | 1204            | 99.8342            |\n +-----------------+-------------------+-----------------+-----------------+--------------------+\n\nLicense\n-------\n\n`Apache License 2.0 `__\n\nContributing\n------------\n\nWe welcome contributions! Please read ``contributing.md``.\n\n.. _documentation: https://mzy2240.github.io/ESA/\n.. 
_documented: https://mzy2240.github.io/ESA/\n""",",https://doi.org/10.21105/joss.02289\n,https://doi.org/10.21105/joss.02289","2019/10/08, 21:37:23",1478,Apache-2.0,33,663,"2023/08/11, 21:59:58",16,14,97,13,74,1,0.3,0.20738636363636365,"2023/06/05, 21:18:51",v1.3.5,4,6,false,,false,true,,,,,,,,,,, GElectrical,A free and open-source electrical system analysis software for LV/MV electrical distribution networks.,manuvarkey,https://github.com/manuvarkey/GElectrical.git,github,"electrical,electrical-engineering,electrical-grid,electrical-schematics,electrical-system,electrical-systems,pandapower,powerflow,short-circuit-analysis,voltage-drop-analysis,coordination-study,electrical-rules-check,electrical-network-analysis,electrical-networks",Energy Distribution and Grids,"2023/10/16, 11:08:50",32,0,32,true,Jupyter Notebook,,,"Jupyter Notebook,Python,NSIS,CSS,HTML,Shell",https://manuvarkey.github.io/GElectrical/,"b'# GElectrical\n\n[Website](https://manuvarkey.github.io/GElectrical) \xe2\x80\xa2 \n[Documentation](https://gelectrical.readthedocs.io) \xe2\x80\xa2\n[Forum](https://github.com/manuvarkey/GElectrical/discussions/) \xe2\x80\xa2\n[Bug tracker](https://github.com/manuvarkey/GElectrical/issues) \xe2\x80\xa2\n[Git repository](https://github.com/manuvarkey/GElectrical)\n\n[![Release](https://img.shields.io/github/release/manuvarkey/GElectrical.svg)](https://github.com/manuvarkey/GElectrical/releases/latest)\n![License](https://img.shields.io/github/license/manuvarkey/GElectrical)\n\n \n \n \n\nGElectrical is a free and open-source electrical system analysis software for LV/MV electrical distribution networks. The following features are currently implemented.\n\n* Schematic capture.\n* Pandapower network generation from schematic.\n* Power flow time series analysis (Symmetric and Asymmetric).\n* Power flow with diversity factors (Symmetric and Asymmetric).\n* Voltage drop analysis.\n* Short circuit analysis (Symmetric and SLG).\n* Coordination analysis for power supply protection devices with support for CB and fuse protection curves; damage curves for transformers, cables and motors.\n* Support for daily load curves for load elements.\n* Support for arriving at network parameters for custom geometry OH lines.\n* Support for modeling networks with mixed TN-S/ TN-C/ TT/ IT earthing systems.\n* Electrical rules check for checking conformity with IEC/ local electrical guidelines.\n* Print and export of drawings to pdf.\n* Generation of analysis reports.\n\nGElectrical uses pandapower as the backend for implementing power flow related functionality like voltage drop and short circuit analysis.\n\n> **Please note that the program is in active development and bugs are expected. Cross checking of generated calculations is recommended. 
See [Roadmap](https://github.com/manuvarkey/GElectrical/issues/1) for current limitations.**\n\n## Screenshots\n\n**See [Screencasts](https://www.youtube.com/playlist?list=PLyFdF5OlDZHI8DBi42qsmUeiiD2Cd0eLU) for application screencasts**\n\n![Properties display](https://raw.githubusercontent.com/manuvarkey/GElectrical/master/screenshots/1.png)\n![Results display](https://raw.githubusercontent.com/manuvarkey/GElectrical/master/screenshots/2.png)\n![Electrical rules check](https://raw.githubusercontent.com/manuvarkey/GElectrical/master/screenshots/3.png)\n![Protection curve display](https://raw.githubusercontent.com/manuvarkey/GElectrical/master/screenshots/4.png)\n![Load profile display](https://raw.githubusercontent.com/manuvarkey/GElectrical/master/screenshots/5.png)\n![Database display](https://raw.githubusercontent.com/manuvarkey/GElectrical/master/screenshots/6.png)\n\n## Samples\n\n[Sample schematic](https://raw.githubusercontent.com/manuvarkey/GElectrical/master/sample_files/sample_drawing.pdf)\n\n[Sample report](https://raw.githubusercontent.com/manuvarkey/GElectrical/master/sample_files/sample_report.pdf)\n\n[Sample project file](https://github.com/manuvarkey/GElectrical/raw/master/sample_files/sample.gepro)\n\n## Installation\n\nThe application can be installed for use on your OS as described below.\n\n> It is recommended to install `osifont` for schematic capture. This can be downloaded from [https://github.com/hikikomori82/osifont/blob/master/osifont.ttf](https://github.com/hikikomori82/osifont/blob/master/osifont.ttf).\n\n### Binary\n\n#### Windows\n\nUse the `.EXE` installation package available under the latest [Release](https://github.com/manuvarkey/GElectrical/releases/latest).\n\n#### Linux\n\nThe application is published on the `Flathub` repository at [GElectrical](https://flathub.org/apps/details/com.kavilgroup.gelectrical). \n\nIt should be possible to install the application using the default package manager on most Linux systems if Flathub is set up. Please see [https://flatpak.org/setup/](https://flatpak.org/setup/) to set up Flathub for your Linux distribution.\n\n### Source\n\n#### Linux\n\n* Install GTK3 from your distribution package manager.\n* Run `pip install appdirs pycairo numpy numba scipy pandas mako networkx matplotlib pandapower jinja2 weasyprint openpyxl shapely`.\n* Clone this repository: `git clone https://github.com/manuvarkey/GElectrical.git`\n* Run `gelectrical_launcher.py` from the cloned directory.\n\n#### Windows\n\n* Install `git`, `msys2`, `visualstudio2022-workload-vctools` and `gvs_build` by following this link [gvsbuild](https://github.com/wingtk/gvsbuild).\n* Set up the GTK3 and PyGObject development environment using `gvs_build` by running `gvsbuild build --enable-gi --py-wheel gtk3 pygobject adwaita-icon-theme`.\n* Add required environment variables as suggested in the above link. 
Please see [Create and Modify Environment Variables on Windows](https://docs.oracle.com/en/database/oracle/machine-learning/oml4r/1.5.1/oread/creating-and-modifying-environment-variables-on-windows.html) for more information about setting up environment variables.\n* Run `pip install appdirs pycairo numpy numba scipy pandas mako networkx matplotlib pandapower jinja2 weasyprint openpyxl shapely` in PowerShell.\n* Clone this repository using `git clone https://github.com/manuvarkey/GElectrical.git`.\n* Run `python gelectrical_launcher.py` from the cloned directory.\n\n#### Dependencies:\n\n##### Python 3 (v3.10+)\n\nPython Modules:\n\n* undo - Included along with distribution.\n* appdirs (v1.4.4) - Not included\n* openpyxl - Not included\n* mako - Not included\n* numba - Not included\n* pandapower (v2.10.1) - Not included\n* numpy - Not included\n* pandas - Not included\n* networkx - Not included\n* shapely - Not included\n* matplotlib (v3.5.1) - Not included\n* jinja2 - Not included\n* weasyprint - Not included\n* pycairo - Not included\n* PyGObject - Not included\n\n##### GTK3 (v3.36+)\n'",,"2021/01/18, 13:25:54",1010,GPL-3.0,380,412,"2023/10/12, 10:35:49",1,0,8,8,13,0,0,0.0,"2023/10/16, 11:46:31",v1.1,0,1,false,,false,false,,,,,,,,,,, Electra,Sovereign blockchain solution that enables a local micro-grid to operate smoothly between trustless actors enabling a real circular economy based on the exchange of electricity units.,Alkia,https://github.com/Alkia/electra.git,github,"blockchain,circular-economy,cosmos-sdk,electrical-engineering,electricity,eolian,grid-system,ignite-cli,microgrid,off-grid,power,solar,islanding,grid,power-meter,monophased,3-phase-inverter,3-phases,cosmos-sdk-proto,minigrid",Energy Distribution and Grids,"2023/10/08, 04:13:26",74,0,12,true,TypeScript,,,"TypeScript,Go,Shell,Roff,Vue,Python,HTML,JavaScript",,"b'![Electra Logo](/vue/public/Electra.png ""Electra"")\r\n\r\n# Electra\r\n**Electra** is a sovereign blockchain solution that enables a local micro-grid to operate smoothly between trustless actors, enabling a real circular economy based on the exchange of electricity units (kWh) by means of pre-purchased tokens.\r\n\r\nMore about [Electra](https://linktr.ee/alkia) on its link tree. \r\n\r\n## The BlockChain based Micro-Grid Architecture\r\n\r\nA **micro-grid** is a local transport structure semi-isolated from the main transport backbone.\r\n\r\n![Electra Logo](/vue/public/Concept.jpg ""Architecture"")\r\n\r\n**BlockChain Meters** are the key element of the solution. \r\n- They precisely measure the power exchanged\r\n- They are a circuit breaker for the microgrid\r\n- They embed a secure wallet having the capacity to validate the blocks\r\n- They store a full copy of the local chain \r\n\r\n**The Grid Gateway** \r\nThe grid gateway synchronizes the micro-grid and imports extra energy as needed. This gateway is the master node validating the blocks. In some cases the national grid gateway can export energy too, should the microgrid and the regulation enable it. 
\r\n\r\nThe grid gateway provides:\r\n- All the features of a BlockChain meter, plus\r\n- Demand response \r\n- Stake/Validate the local Layer 2 BlockChain as well as the Layer 1 main chain.\r\n\r\n![Electra Logo](/vue/public/Concept.png ""The concept"") \r\n\r\n## Responsible Disclosure\r\nSend us an email to report any security-related bugs or vulnerabilities at security@alkia.net\r\n\r\nYou can encrypt your email message using our public PGP key: [Public key](https://raw.githubusercontent.com/Alkia/electra/master/public_key.pgp)\r\n\r\n## Get started\r\n\r\n**electra** is built using Cosmos SDK and Tendermint.\r\n\r\n```\r\nignite chain serve\r\n```\r\n\r\nThe `serve` command installs dependencies, builds, initializes, and starts your Electra blockchain in development.\r\n\r\n### Configure\r\n\r\n\r\nYour Electra blockchain in development can be configured with `config.yml`. \r\n\r\n## Test from CLI \r\n\r\nTo check the current cycle ID your Electra chain reached, run the following command:\r\n```\r\nelectrad query meter currentcycle-id \r\n```\r\nTo prepare a bill:\r\n```\r\nelectrad tx meter prepare-bill [cycle-id] [record]\r\n```\r\n## Release\r\n\r\nTo release a new version of your Electra blockchain, create and push a new tag with a `v` prefix. A new draft release with the configured targets will be created.\r\n\r\n```\r\ngit tag v0.1.5\r\ngit push origin v0.1.5\r\n```\r\n\r\nAfter a draft release is created, make your final changes from the release page and publish it.\r\n\r\n#### Code review format\r\nOpening a pull request (PR) will automatically create Summary and Test plan fields in the description. In the summary, add a high-level summary of what the change entails. For pull requests that scaffold ignite code, include the ignite scaffold commands run.\r\n###### Summary\r\nAdd a summary of the pull request here (*E.g. This pull request adds XYZ feature to the x/ABC module and associated unit tests.*)\r\n###### Unit tests\r\n\r\nTo run unit tests for the whole project, execute:\r\n`make unit-test`\r\nTo run unit tests for a particular module (e.g. the meter module), execute:\r\n`make unit-test path=meter`\r\nTo run unit tests for a particular package (e.g. the meter/types package), execute:\r\n`make unit-test path=meter/types`\r\nTo inspect unit test coverage, execute:\r\n`make test-cover`\r\n\r\n### Install\r\n\r\nTo install the latest version of your Electra blockchain node\'s binary, execute the following command on your machine:\r\n\r\n```\r\ncurl https://get.ignite.com/alkia/electra@latest! | sudo bash\r\n```\r\n`alkia/electra` are the `username` and `repo_name` of the GitHub repository to which the source code is pushed. \r\n\r\nEdit `/etc/security/limits.conf` or raise the open-file limits for the current session:\r\n```\r\n$ ulimit -Sn 16384\r\n$ ulimit -Hn 65536\r\n```\r\n\r\n### Web Frontend\r\n\r\nElectra has a Vue.js-based web app in the `vue` directory. 
Run the following commands to install dependencies and start the app:\r\n\r\n```\r\ncd vue\r\nnpm install\r\nnpm run serve\r\n```\r\n\r\nThe frontend app is built using the `@starport/vue` and `@starport/vuex` packages.\r\n\r\n## Deploying an Electra node\r\nTo deploy a node on testnet-0, please follow the guide: [Electra Node](https://github.com/Alkia/testnet-0/blob/main/README.md)\r\n\r\n## Implementing the blockchain meter\r\n![Electra Logo](/vue/public/ElectraSmartMeter.png ""Electra BlockChain Smart Meter"")\r\n\r\nThe smart meter specs are under validation before the first prototypes move to production.\r\n\r\nMost standard IEC energy meters will be upgradeable to blockchain meters using the rugged Electra blockchain appliance. The Electra blockchain appliance connects to legacy meters with its infrared data reading cable following the IEC 62056-21 standard.\r\n\r\nMore on the [IEC 62056-21 standard](https://community.openhab.org/t/reading-power-consumption-of-the-electricity-meter-with-the-ir-interface/94996).\r\n\r\n## Testnets\r\n\r\n- [electra-testnet-0](https://github.com/Alkia/electra-testnet-0)\r\n\r\n## Disclaimer | No Liability\r\nThis project is bleeding-edge and does not conform to the Poetry package structure.\r\n\r\nAs far as the law allows, this software comes as is, without any warranty or condition, and no contributor will be liable to anyone for any damages related to this software or this license, under any kind of legal claim.\r\n\r\n## References\r\n\r\nhttps://docs.tendermint.com/master/tendermint-core/validators.html\r\nhttps://hub.cosmos.network/master/validators/overview.html\r\n\r\n## Attribution\r\n\r\nElectra is proud to be an open-source project, and we welcome all other projects to use our repo. We use modules from the cosmos-sdk and other open source projects.\r\n\r\nWe have ourselves used the following modules from fellow Cosmos projects. 
Big thank you to these projects!\r\n\r\nWe use the following modules from [Osmosis](https://github.com/osmosis-labs/osmosis) provided under [this License](https://github.com/osmosis-labs/osmosis/blob/main/LICENSE):\r\n - x/epochs\r\n - x/mint\r\n - x/interchainqueries\r\n\r\n## Learn more\r\n- [Alkia IT Services Co., Ltd.](https://alkia.net)\r\n- [Link tree](linktr.ee/alkia)\r\n- [Cosmos SDK docs](https://docs.cosmos.network)\r\n- [Developer Chat](https://discord.gg/ignite)\r\n\r\n\r\n![Link tree QR](/vue/public/LinktreeQR.png ""QR to Link Tree"")\r\n\r\nDone with love for a carbon free world\r\n'",,"2022/08/11, 07:25:22",440,BSD-3-Clause,248,248,"2023/10/08, 04:13:27",3,69,77,70,17,1,0.0,0.08444444444444443,"2023/01/14, 07:50:46",v0.1.4.5,0,4,false,,true,false,,,,,,,,,,, SESMG,An energy system model generator with the focus on the optimization of urban energy systems.,SESMG,https://github.com/SESMG/SESMG.git,github,,Energy Distribution and Grids,"2023/09/21, 18:23:00",18,0,6,true,Python,SESMG,SESMG,"Python,Shell,Batchfile",,"b'# Spreadsheet Energy System Model Generator (SESMG) \n[![Generic badge](https://img.shields.io/badge/content-what/why-darkgreen.svg)](https://spreadsheet-energy-system-model-generator.readthedocs.io/en/latest/#)\n[![Generic badge](https://img.shields.io/badge/content-how-green.svg)](https://spreadsheet-energy-system-model-generator.readthedocs.io/en/latest/#)\n\n\n[![Maintainability](https://api.codeclimate.com/v1/badges/5ab50cca9d852028f3df/maintainability)](https://codeclimate.com/github/SESMG/SESMG/maintainability)\n[![Documentation Status](https://readthedocs.org/projects/spreadsheet-energy-system-model-generator/badge/?version=latest)](https://spreadsheet-energy-system-model-generator.readthedocs.io/en/latest/?badge=latest)\n[![License: GPL v3](https://img.shields.io/badge/License-GPLv3-blue.svg)](https://www.gnu.org/licenses/gpl-3.0)\n[![DOI](https://joss.theoj.org/papers/10.21105/joss.05519/status.svg)](https://doi.org/10.21105/joss.05519)\n\n\n## Software Description\n\nThe **SESMG** is an energy system model generator with the focus on the optimization of urban energy systems, which can, however, also be used for the modeling of other types of energy systems. 
The **SESMG** is based on the \'Open Energy Modelling Framework\' (oemof) and comes with advantages over other modeling tools regarding user-friendliness, as\n \n * the model definition is based on spreadsheets, therefore no programming skills are required for the entire modeling process,\n * urban energy system models of any size can be conceptualized automatically,\n * visualizations of complex results are created automatically in the form of system graphs, Pareto fronts, energy amounts, capacity diagrams, and many more, as well as\n * a set of standard (but still customizable) parameters is given, including detailed descriptions and references.\n \nFurthermore, the **SESMG** comes with important modeling methods, enabling holistic, spatially high-resolution modeling of mixed-use multi-energy systems, such as\n \n * considering the multi-energy system (MES) approach,\n * applying multi-objective optimization using the epsilon-constraint method, as well as\n * providing several methods for model-based reduction of computational requirements (run-time and RAM).\n\n![workflow_graph_SESMG](/docs/images/readme/workflow_graph.png)\n\n## Quick Start \n[![Generic badge](https://img.shields.io/badge/content-how-green.svg)](https://spreadsheet-energy-system-model-generator.readthedocs.io/en/latest/#)\n\nA detailed description of the installation process for Windows, MacOS and Linux can be found in the [documentation (chapter installation)](\nhttps://spreadsheet-energy-system-model-generator.readthedocs.io/en/latest/02.01.00_installation.html). \n\n## SESMG Features & Releases \n[![Generic badge](https://img.shields.io/badge/content-what/why-darkgreen.svg)](https://spreadsheet-energy-system-model-generator.readthedocs.io/en/latest/#)\n\n### Examples\nExamples are stored in a separate Git repository: https://github.com/SESMG/SESMG-Examples\n\n### Project status\n\xe2\x9c\x93 Draft (alpha, beta) State
\n\xe2\x9c\x93 Modeling and Optimization of holistic energy systems
\n\xe2\x9c\x93 Automated modeling of urban energy systems
\n\xe2\x9c\x93 Several result plotting opportunities
\n\xe2\x9c\x93 Usable on Windows, MacOS and Linux
\n\n\xe2\x9c\x98 More time to code other things ... wait \xe2\x9c\x93! \n\n## Detailed Documentation! \n[![Generic badge](https://img.shields.io/badge/content-references-orange.svg)](https://spreadsheet-energy-system-model-generator.readthedocs.io/en/latest/#)\n\nThe [documentation](https://spreadsheet-energy-system-model-generator.readthedocs.io/en/latest/),\nwhich includes detailed instructions for **installation** and **use**, **troubleshooting** \nand much more, can be accessed via the following link:\n\nhttps://spreadsheet-energy-system-model-generator.readthedocs.io/en/latest/\n\n## Questions? \n[![Generic badge](https://img.shields.io/badge/content-who-yellow.svg)](https://spreadsheet-energy-system-model-generator.readthedocs.io/en/latest/#)\n[![Generic badge](https://img.shields.io/badge/content-references-orange.svg)](https://spreadsheet-energy-system-model-generator.readthedocs.io/en/latest/#)\n\n[Use the Discussions Section](https://github.com/SESMG/SESMG/discussions) and let\'s chat!\n\n## Credits \n[![Generic badge](https://img.shields.io/badge/content-who-yellow.svg)](https://spreadsheet-energy-system-model-generator.readthedocs.io/en/latest/#)\n\n\n### Contact and Code of Conduct \n\nThe Code of Conduct can be found [here](https://github.com/SESMG/SESMG/blob/master/CODE_OF_CONDUCT.md).\n\n#### Contact information \nM\xc3\xbcnster University of Applied Sciences\n\nChristian Klemm - christian.klemm@fh-muenster.de\n\n### Acknowledgments\n\nThe Spreadsheet Energy System Model Generator was developed within the research project [R2Q ""Resource Planning for Urban Districts""](https://www.fh-muenster.de/forschungskooperationen/r2q/index.php). The project was funded by the Federal Ministry of Education and Research (BMBF) funding program [RES:Z ""Resource-Efficient Urban Districts""](https://ressourceneffiziente-stadtquartiere.de). The funding measure is part of the flagship initiative ""City of the Future"" within the BMBF\'s framework programme ""Research for Sustainable Development - FONA3"". The contributors gratefully acknowledge the support of BMBF (grant number 033W102).\n\n### License\n\nThis project is published under the GNU GPL-3.0 license; click [here](https://github.com/SESMG/SESMG/blob/master/LICENSE) for more details.\n\n## Contributing \n[![Generic badge](https://img.shields.io/badge/content-contribution-blue.svg)](https://spreadsheet-energy-system-model-generator.readthedocs.io/en/latest/#)\n\n\nIssues and Pull Requests are greatly appreciated. 
If you\'ve never contributed to an open source project before, I\'m more than happy to walk you through how to create a pull request.\n\nA detailed description of the contribution procedure as well as the project\'s coding standards can be found [here](https://github.com/SESMG/SESMG/blob/master/CONTRIBUTING.md).\n'",",https://doi.org/10.21105/joss.05519","2020/08/07, 10:14:16",1174,GPL-3.0,1331,3385,"2023/09/19, 08:58:56",22,65,169,75,36,2,0.0,0.4284804077731762,"2023/09/14, 12:05:47",v1.0.0,0,15,false,,true,true,,,https://github.com/SESMG,www.fh.ms/esym,Germany,,,https://avatars.githubusercontent.com/u/122264954?v=4,,, GridPACK,An open-source high-performance package for simulation of large-scale electrical grids.,GridOPTICS,https://github.com/GridOPTICS/GridPACK.git,github,,Energy Distribution and Grids,"2023/09/05, 14:55:20",34,0,11,true,C++,,GridOPTICS,"C++,Fortran,CMake,Python,Shell,Perl,UnrealScript,MATLAB,Makefile,C",https://www.gridpack.org/,"b'\n# GridPACK: High-Performance Electric Grid Simulation\n\nGridPACK is an open-source high-performance (HPC) package for simulation of large-scale electrical grids. Powered by distributed (parallel) computing and high-performance numerical solvers, GridPACK offers several applications for fast simulation of electrical transmission systems. GridPACK includes a number of prebuilt applications that can be directly used. The most commonly used and well-developed are:\n- AC Power Flow\n- Dynamics Simulation\n- Contingency Analysis\n\nOther applications that are under development or not fully featured are\n- Dynamic security assessment\n- State estimation\n\nIn addition, GridPACK is also a framework to simplify the development of new applications on HPC platforms. To ease the development, GridPACK offers several building blocks such as setting up and distributing (partitioning) power grid networks, support for custom components on buses and branches, converting the network models to the corresponding algebraic equations, parallel routines for manipulating and solving large algebraic systems, and input and output modules as well as basic profiling and error management. GridPACK is written in C++ with Python wrappers available.\n\n## Installation\nSee the [instructions](docs/markdown/BASIC_INSTALL.md) for installing GridPACK, prerequisite software, and installation notes for different platforms.\n\n## Usage\nSee the [User manual](docs/user_manual/GridPACK.pdf) for a deep dive on GridPACK internals and/or refer to the [tutorials](docs/markdown/TUTORIALS.md) for more info. \n\n- Quick Guide (To do)\n\n## Documentation\n- [User manual](docs/user_manual/GridPACK.pdf)\n- [Tutorials](docs/markdown/TUTORIALS.md)\n- [FAQS](docs/markdown/FAQS.md)\n- [License](docs/markdown/LICENSE.md)\n\n\n## Contact us\nThe best (and fastest) way to reach us for any technical questions is by posting an issue [here](https://github.com/GridOPTICS/GridPACK/issues). 
You can also reach us via email at gridpack.account@pnnl.gov.\n\n## Citing GridPACK\n```\n@article{doi:10.1177/1094342015607609, \nauthor = {Bruce Palmer and William Perkins and Yousu Chen and Shuangshuang Jin and David Callahan and Kevin Glass and Ruisheng Diao and Mark Rice and Stephen Elbert and Mallikarjuna Vallem and Zhenyu Huang}, \ntitle = {GridPACKTM: A framework for developing power grid simulations on high-performance computing platforms}, \njournal = {The International Journal of High Performance Computing Applications}, \nvolume = {30}, \nnumber = {2}, \npages = {223-240}, \nyear = {2016}, \ndoi = {10.1177/1094342015607609}, \nURL = {https://doi.org/10.1177/1094342015607609}, \neprint = {https://doi.org/10.1177/1094342015607609}\n}\n```\n\n## Authors\n- Bruce Palmer\n- William Perkins\n- Yousu Chen\n- Renke Huang\n- Yuan Liu\n- Shuangshuang Jin\n- Shrirang Abhyankar\n\n## Acknowledgement\nGridPACK has been developed through funding from various sources over the years.\n- PNNL LDRD Future Grid Initiative\n- DOE OE [Advanced Grid Modeling (AGM)](https://www.energy.gov/oe/advanced-grid-modeling) program\n- [Grid Modernization Laboratory Consortium](https://www.energy.gov/gmi/grid-modernization-lab-consortium)\n- DOE EERE [Solar Energy Technologies Office](https://www.energy.gov/eere/solar/solar-energy-technologies-office)\n- DOE EERE [Wind Energy Technologies Office](https://www.energy.gov/eere/wind/wind-energy-technologies-office)\n\n## Copyright\nCopyright © 2013, Battelle Memorial Institute.\n\nGridPACKTM is free software distributed under a BSD 2-clause license. You may reuse, modify, and redistribute the software. \n\nSee the [license](src/LICENSE.md) file for details.\n\n\n## Disclaimer\nThe Software was produced by Battelle under Contract No. DE-AC05-76RL01830 with\nthe Department of Energy. For five years from October 10, 2013, the Government is granted\nfor itself and others acting on its behalf a nonexclusive, paid-up, irrevocable worldwide license in this data to reproduce, prepare derivative works, and perform publicly and display\npublicly, by or on behalf of the Government. There is provision for the possible extension\nof the term of this license. Subsequent to that period or any extension granted, the Government is granted for itself and others acting on its behalf a nonexclusive, paid-up, irrevocable\nworldwide license in this data to reproduce, prepare derivative works, distribute copies to\nthe public, perform publicly and display publicly, and to permit others to do so. The specific\nterm of the license can be identified by inquiry made to Battelle or DOE. 
Neither the United\nStates nor the United States Department of Energy, nor any of their employees, makes any\nwarranty, express or implied, or assumes any legal liability or responsibility for the accuracy,\ncompleteness or usefulness of any data, apparatus, product or process disclosed, or represents that its use would not infringe privately owned rights.\n'",",https://doi.org/10.1177/1094342015607609,https://doi.org/10.1177/1094342015607609","2014/07/22, 21:01:57",3382,GPL-3.0,70,3099,"2023/09/05, 14:55:20",48,59,127,60,50,6,0.0,0.3211488250652742,"2020/01/31, 21:18:21",v3.4,0,13,false,,false,false,,,https://github.com/GridOPTICS,,,,,https://avatars.githubusercontent.com/u/6787372?v=4,,, CLOVER,A minigrid simulation and optimisation for supporting rural electrification in developing countries.,CLOVER-energy,https://github.com/CLOVER-energy/CLOVER.git,github,,Energy Distribution and Grids,"2023/10/13, 17:13:18",14,0,8,true,Python,,CLOVER-energy,"Python,TeX,Shell,Rich Text Format",,"b'# CLOVER\n\nCLOVER minigrid simulation and optimisation for supporting rural electrification in developing countries.\n\n[![DOI](https://zenodo.org/badge/DOI/10.5281/zenodo.7638702.svg)](https://doi.org/10.5281/zenodo.7638702)\n\nThe quick start guide below provides step-by-step instructions for downloading, setting up, and using CLOVER. For full documentation containing further information about CLOVER and more detailed descriptions of its functionality, please [visit the Wiki](https://github.com/CLOVER-energy/CLOVER/wiki).\n\n#### Table Of Contents\n\n[Quick start guide](#quick-start-guide)\n\n\xe2\x8f\xac [Downloading CLOVER](#downloading-clover)\n * [Stable installation](#stable-installation)\n * [Upgrading](#upgrading)\n * [Downloading as a developer](#downloading-as-a-developer)\n\n\xf0\x9f\x90\x8d [Setting up your Python environment](#setting-up-your-python-environment)\n * [Anaconda method](#anaconda-method)\n * [Pip install](#pip-install)\n\n\xe2\x9b\x85 [Setting up a new location](#setting-up-a-new-location)\n * [Updating an existing location](#updating-an-existing-location)\n\n\xf0\x9f\x8c\xa6\xef\xb8\x8f [Renewables ninja](#renewables-ninja)\n\n:memo: [Completing input files](#completing-input-files)\n* [Simulation and optimisation files](#simulation-and-optimisation-files)\n* [Optimisation only files](#optimisation-only-files)\n\n\xf0\x9f\x8d\x80 [Running CLOVER](#running-clover)\n* [Profile generation](#profile-generation)\n* [Running a simulation](#running-a-simulation)\n* [Running an optimisation](#running-an-optimisation)\n* [Analysis](#analysis)\n\n\xf0\x9f\x8e\x93 [Running CLOVER on Imperial College London\'s high-performance computers](#running-clover-on-imperial-college-londons-high-performance-computers)\n\n# \xf0\x9f\x9a\xa4 Quick start guide\n\nThis guide provides a very brief introduction to get your CLOVER installation up and running as quickly as possible following the initial download. The file structure has two main branches:\n* a Python branch, `src`, which contains CLOVER\'s source code which is used to perform simulations and generate outputs,\n* and a data branch, `locations`, which contains information describing the locations being modelled and contains parameters to outline the simulations and optimisations that should be run.\n\nAn example location, ""Bahraich,"" in India, is included in the initial download for reference.\n\n## Downloading CLOVER\n\nCLOVER can be downloaded from Github or installed via the Python package manager. 
If you intend to use CLOVER, but not develop or edit any of its code, then it is recommended that you install CLOVER from the Python package manager as this will guarantee that you install a stable version. If you intend to develop or edit any of the code contained within CLOVER as part of your research, then it is recommended that you download CLOVER directly from Github.\n\n### Stable installation\n\nFor a stable version of CLOVER, it is recommended that you directly install the latest version of CLOVER via the `clover-energy` package. This can be installed using the Python package manager, `pip`, in the usual way:\n```bash\npython -m pip install clover-energy\n```\n\nThis will download and install the latest version of CLOVER into the current virtual environment that you have running. If you are using Anaconda, please note that this will install CLOVER only for the virtual environment that you are currently in, not for your system as a whole. CLOVER can now be run by calling `clover` from a terminal anywhere on your system, though you will need to set up a location in order for it to run successfully. See \'Setting up a new location\' below.\n\nThis should install all of the relevant dependencies for CLOVER as well as providing four installable executable files: `new-clover-location`, `update-api-token`, `clover-hpc` and `clover`, which are described in more detail below.\n\nNote, installing CLOVER in this way will install the package to your conda environment or local computer and will not provide you with easy access to the source code files. To develop CLOVER and have access to the source code, ensure that you download the code from GitHub.\n\n#### Upgrading\n\nTo update the version of CLOVER that you have installed, from anywhere on your system, run:\n```bash\npython -m pip install clover-energy --upgrade\n```\nThis will fetch the latest stable version of CLOVER and install it into your current virtual environment.\n\n### Downloading as a developer\n\nTo download the CLOVER source, with a view to editing and helping to develop the code, simply click the green `Code` button near the top of this page, copy the URL, and, in your local terminal, run `git clone ` to get your local copy of CLOVER. From there, check out a new branch for any of your edits:\n```\ngit checkout -b \n```\n\n### \xe2\x9a\xa0\xef\xb8\x8f One-time download from Github\n\nTo download the CLOVER source code directly from Github, simply click the green `Code` button near the top of this page, and select `Download ZIP`. Once downloaded, unpack the zip file into a directory of your choice. You will now be able to run CLOVER from a terminal in this directory. Use the `cd` command to change the directory of your terminal to the extracted folder in order to run CLOVER.\n\n**Note:** this is not recommended, as the version you will download will not be easily updatable from Github. It is recommended that you either [install as a developer](#downloading-as-a-developer) or [install with the Python package manager](#stable-installation).\n\n## Setting up your Python environment\n\nCLOVER is a scientific package and, as such, uses Python packages that may not have come installed by default on your system. These packages can be easily installed, provided that you are connected to the internet, either using `pip`, the Python package manager, or `conda`, a virtual-environment-based system. 
Instructions for `conda` are provided below.\n\n**Note:** If you have installed CLOVER following the instructions in the [Stable Installation](#stable-installation) section, then you do not need to install any dependencies, and you can skip straight to [Setting up a new location](#setting-up-a-new-location).\n\n### Anaconda method\n\nTo install using `conda`, from the root of the repository, run:\n```bash\nconda install --file requirements.txt\n```\nNote, on some systems, Anaconda is unable to find the requirements.txt file. In these cases, it is necessary to use the full and absolute path to the file, e.g.,\n```bash\nconda install --file C:\\\\Users\\\\...\\requirements.txt\n```\n\n### Pip install\nWhether you are in an Anaconda environment, or are using your native Python, you can use Python\'s native package manager to install any dependencies. From the root of the repository, run:\n```bash\npython -m pip install -r requirements.txt\n```\n\n## Setting up a new location\n\nNew locations can be set up in one of two ways:\n* By creating a new location from scratch and inputting all necessary information. To do this, call the `new_location` helper script with just the name of your new location.\n If you have installed CLOVER via a `git clone` command:\n ```bash\n python -m src.clover.scripts.new_location \n ```\n\n if you are on a Linux machine, you can use the launch scripts provided:\n ```bash\n ./bin/new_location.sh \n ```\n\n or, if you have installed the `clover-energy` package, either\n ```bash\n new-clover-location \n ```\n or\n ```bash\n python -m new-clover-location \n ```\n\n* By basing the location on an existing location. To do this, call the `new_location` helper script with the `--from-existing` flag.\n If you have installed CLOVER via a `git clone` command:\n ```bash\n python -m src.clover.scripts.new_location --from-existing \n ```\n\n if you are on a Linux machine, you can use the launch scripts provided with the additional `from-existing` flag:\n ```bash\n ./bin/new_location.sh --from-existing \n ```\n\n or, if you have installed the `clover-energy` package, either\n ```bash\n new-clover-location --from-existing \n ```\n or\n ```bash\n python -m new-clover-location --from-existing \n ```\n\n\n### Updating an existing location\n\nAs part of the ongoing development of CLOVER, new features will be introduced. In order to incorporate these into existing CLOVER locations on your system, you can use the `new_location` script provided to update these locations:\n```\npython -m src.clover.scripts.new_location --update\n```\nor, if you have installed the `clover-energy` package, either\n```\nnew-clover-location --update\n```\nor\n```bash\npython -m new-clover-location --update\n```\n\nCLOVER will search through your location and attempt to replace missing files and include new files that have been brought in by an update. Note, CLOVER will not correct missing or invalid fields within files; these must be corrected manually, and the User Guide should be consulted for more information.\n\n## Renewables ninja\n\nGo to https://www.renewables.ninja/register to register a free account to gain your API token. 
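For orientation, this is roughly what CLOVER does with the token behind the scenes: the renewables.ninja API accepts it in an `Authorization` header. The sketch below is illustrative only (it is not CLOVER's internal code, and the coordinates and PV parameters are invented examples); in normal use you only need to store the token as described next.

```python
# Illustrative sketch of a renewables.ninja PV request (not CLOVER's
# internal code). The token is the one obtained from the registration
# page above; coordinates and PV parameters are invented examples.
import requests

API_URL = "https://www.renewables.ninja/api/data/pv"
TOKEN = "your-api-token"

session = requests.Session()
session.headers = {"Authorization": "Token " + TOKEN}

params = {
    "lat": 27.57,              # example: Bahraich district, India
    "lon": 81.60,
    "date_from": "2020-01-01",
    "date_to": "2020-12-31",
    "dataset": "merra2",
    "capacity": 1.0,           # kW of installed PV
    "system_loss": 0.1,
    "tracking": 0,
    "tilt": 29,
    "azim": 180,
    "format": "json",
}

response = session.get(API_URL, params=params)
response.raise_for_status()
hourly_pv = response.json()["data"]  # hourly values keyed by timestamp
```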
The token will be needed in order for CLOVER to correctly fetch and utilise solar profiles.\n\nOnce you have created a new location, you can input your API token using a CLOVER helper script.\nIf you have downloaded CLOVER using the `git clone` command:\n```bash\npython -m src.clover.scripts.update_api_token --location --token \n```\n\nor, if you have installed the `clover-energy` package, either\n```bash\nupdate-api-token --location --token \n```\n\nor\n```bash\npython -m update-api-token --location --token \n```\n\n## Completing input files\n\nWithin your location folder you will find a subfolder named `inputs`. This contains the various input files which are used by CLOVER. These need to be completed in order for CLOVER to run. Some files are needed only for optimisations while some are needed for both optimisations and simulations.\n\n### Simulation and optimisation files\n\n* Ensure that `inputs/generation/generation_inputs.yaml` contains your renewables.ninja API token and that the other parameters within the file are set correctly;\n* Complete `inputs/location_data/location_inputs.yaml` with the details of your location;\n* Complete the `inputs/generation/grid/grid_times.csv` template with the details of your location:\n * Grid profiles are a 1x24 matrix of hourly probabilities (0-1) that the grid is available (illustrated in the sketch at the end of this section),\n * Input all grid profiles at the same time;\n* Complete `inputs/generation/diesel/diesel_inputs.yaml` with information about your diesel generator;\n* Complete `inputs/load/devices.yaml`\twith the devices that your location needs and the parameters as appropriate. **NOTE:** CLOVER considers kerosene as a mitigated source. The best practice for leaving kerosene out of your location is to set the `initial_ownership` and `final_ownership` of the kerosene device included by default to `0`.\n* In the `inputs/load/device_utilisation` folder, complete the utilisation profiles for each device e.g. `light_times.csv`:\n * Utilisation profiles are a 12x24 (monthly x hourly) matrix of probabilities that the specified device is in use in that hour (also illustrated in the sketch below),\n * Each device in \xe2\x80\x9cDevices.csv\xe2\x80\x9d must have a corresponding utilisation profile;\n* In the `inputs/simulation` folder, complete the `energy_system.yaml` file with the details of your location\'s energy system;\n* In the `inputs/simulation` folder, complete the `simulations.yaml` file with the details of the simulation bounds that you wish to run.\n\n### Optimisation-only files\n\n* Complete the `inputs/impact/finance_inputs.yaml` with the financial details of your location;\n* Complete the `inputs/impact/ghg_inputs.yaml` with the GHG-emission details of your location;\n* Complete the `inputs/optimisation/optimisation_inputs.yaml` with the various parameters used to define the scope of the optimisations;\n\nSee the user guide, available within the repository, for more information on these input files.\n\n## Running CLOVER\n\nThe operation of CLOVER can be broken down into two steps:\n1. Fetching and generating profiles\n2. Carrying out simulations and optimisations as appropriate.\n\nWhen running a CLOVER simulation or optimisation, profiles will be generated if they are not present. However, these can also be generated on their own, without running a simulation.\n\n### Profile generation\n\nTo generate the profiles on their own, run CLOVER with the name of the location only; the platform-specific commands follow the short sketch below. 
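First, an illustration of the two profile shapes described in the input-file section above. This is a hypothetical pandas sketch, not CLOVER code: the real templates ship with each new location, and the probability values here are invented.

```python
# Hypothetical sketch of the profile shapes CLOVER expects (values invented).
import numpy as np
import pandas as pd

hours = list(range(24))

# 1x24 grid-availability profile: probability (0-1) that the grid is on
# in each hour, e.g. reliable during the day, patchy at night.
grid_times = pd.DataFrame(
    [[0.9 if 6 <= h <= 22 else 0.3 for h in hours]], columns=hours
)
grid_times.to_csv("grid_times.csv", index=False)

# 12x24 device-utilisation matrix: P(device in use) per month (rows) and
# hour (columns), e.g. lights mostly used in the evening, all year round.
evening = np.where((np.array(hours) >= 18) | (np.array(hours) <= 5), 0.8, 0.05)
light_times = pd.DataFrame(
    np.tile(evening, (12, 1)), index=range(1, 13), columns=hours
)
light_times.to_csv("light_times.csv", index=False)
```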
If you have downloaded CLOVER from GitHub using the `git clone` command:\n```bash\npython -m src.clover --location \n```\nor, if you are on a Linux machine,\n```bash\n./bin/clover.sh --location \n```\nIf you have installed the `clover-energy` package, run either\n```bash\nclover --location \n```\nor\n```bash\npython -m clover --location \n```\n\n### Running a simulation\n\nWhen running a CLOVER simulation, the sizes of the PV and storage systems need to be specified on the command-line:\n```bash\npython -m src.clover --location --simulation --pv-system-size --storage-size \n```\nor, if you are on a Linux machine,\n```bash\n./bin/clover.sh --location --simulation --pv-system-size --storage-size \n```\nIf you have installed the `clover-energy` package, either\n```bash\nclover --location --simulation --pv-system-size --storage-size \n```\nor\n```bash\npython -m clover --location --simulation --pv-system-size --storage-size \n```\nwhere `` indicates that a floating point object, i.e., a number, is an acceptable input. The number should not have quotation marks around it.\n\n### Running an optimisation\n\nWhen running a CLOVER optimisation, the sizes of the PV and storage systems are optimised based on the information inputted in the `optimisation_inputs.yaml` file. To run an optimisation, simply call CLOVER from the command line:\n```bash\npython -m src.clover --location --optimisation\n```\nor, if you are on a Linux machine:\n```bash\n./bin/clover.sh --location --optimisation\n```\nIf you have installed the `clover-energy` package, either\n```bash\nclover --location --optimisation\n```\nor\n```bash\npython -m clover --location --optimisation\n```\n\n### Analysis\n\nWhen running CLOVER simulations, in-built graph plotting can be carried out by CLOVER. To activate this functionality, simply use the `--analyse` flag when initialising a CLOVER simulation from the command-line interface. You can run the analysis __without__ any plots by including the `--skip-analysis` flag.\n\n## Running CLOVER on Imperial College London\'s high-performance computers\n\nThe operation of CLOVER can be broken down into the same steps as per running CLOVER on a local machine. These are described in [Running CLOVER](#running-clover). On Imperial\'s high-performance computers (HPCs), this functionality is wrapped up in such a way that a single entry point is provided for launching runs and a single additional input file is required in addition to those described in [Completing input files](#completing-input-files). 
Consult the user guide or wiki pages for more information on what is required of the input jobs file.\n\n### Launching jobs\n\nOnce you have completed your input runs file, jobs are launched to the HPC by calling CLOVER\'s launch script from the command-line:\n```bash\npython -m src.clover.scripts.clover_hpc --runs \n```\nor, if you have installed the `clover-energy` package, either\n```bash\nclover-hpc --runs \n```\nor\n```bash\npython -m clover-hpc --runs \n```\n\n\n***\n\nFor more information, contact Phil Sandwell (philip.sandwell@gmail.com) or Ben Winchester (benedict.winchester@gmail.com).\n'",",https://doi.org/10.5281/zenodo.7638702","2022/04/01, 12:03:47",572,MIT,153,1143,"2023/10/13, 17:13:18",40,62,143,72,12,4,1.2,0.12067156348373553,"2023/07/25, 13:22:23",v5.1.1,2,3,false,,true,true,,,https://github.com/CLOVER-energy,,,,,https://avatars.githubusercontent.com/u/102805199?v=4,,, Open Energy Dashboard,Open Energy Dashboard is a user-friendly way to display energy information from smart energy meter.,OpenEnergyDashboard,https://github.com/OpenEnergyDashboard/OED.git,github,"education,javascript,nodejs,open-source,postgresql,climate-change,environmental,plotly",Energy Monitoring and Management,"2023/10/03, 20:42:39",57,0,11,true,JavaScript,Open Energy Dashboard,OpenEnergyDashboard,"JavaScript,TypeScript,PLpgSQL,Shell,CSS,Dockerfile,HTML",,"b""# Open Energy Dashboard #\n\n![Github Build](https://github.com/OpenEnergyDashboard/OED/workflows/Build/badge.svg)\n\nOpen Energy Dashboard is a user-friendly way to display energy information from smart energy meters or from data uploaded via CSV files. It is available to anyone and is optimized for non-technical users with a simple interface that provides access to your organization's energy data. To learn more, see [our website](https://openenergydashboard.github.io/).\n\nOpen Energy Dashboard is available under the Mozilla Public License v2, and contributions, in the form of bug reports, feature requests, and code contributions, are welcome.\n\n## Installation and Usage ##\n\nSee [USAGE.md](USAGE.md).\n\n## Built With ##\n\nPlotly.js - JavaScript library used to generate data charts ([plotly.com](https://plotly.com/javascript/))\n\nPostgreSQL - Database management system ([postgresql.org](https://www.postgresql.org))\n\nNode.js - JavaScript runtime environment ([nodejs.org](https://nodejs.org/en/))\n\nand lots of other software/packages.\n\n## Authors ##\n\nThis application has been developed by many volunteer developers (mostly students) and is an independent open source project.\n\nFor a list of contributors, [click here](https://github.com/OpenEnergyDashboard/OED/graphs/contributors)\n\n## Licensing ##\n\nThis project is licensed under the MPL version 2.0.\n\nSee the full licensing agreement [here](LICENSE.txt)\n\n## Contributions ##\n\nWe welcome others to contribute to this project by writing code for submission or collaborating with us. Before contributing, please sign our [Contributor License Agreement](https://openenergydashboard.github.io/developer/cla.html). Web pages with [information for developers](https://openenergydashboard.github.io/developer/) are available. If you have any questions or concerns, feel free to contact us at engage@OpenEnergyDashboard.org.\n\n## Code of Conduct ##\n\nOED is based on the idea of sharing so everyone benefits from our combined efforts. To benefit everyone, we need to maintain a welcoming and appropriate community. 
\nToward that end, OED has a [code of conduct](CODE_OF_CONDUCT.md) that follows the [Contributor Covenant](https://www.contributor-covenant.org/) used by many\nopen source projects. It specifies\nour expectations and how to report any concerns. If, for any reason, you do not feel comfortable with any aspect of the OED project, then you are encouraged to \ncontact the OED leadership so we can work to make this a welcoming community. This includes formal complaints, informal concerns and suggestions for how OED can\nimprove what it does. We commit to a timely response to input that clearly articulates what actions may\nbe taken and why. Whatever our decision, you will be informed. We want everyone to know that we take having a welcoming community seriously and will work for and\nrespond appropriately to any concern or ideas for improvement.\n\n## Security Concerns ##\n\nIf you think there is a security concern in the OED software, please visit our [security reporting page](SECURITY.md) for information on reporting it to the OED project.\n\n## Contact ##\n\nTo contact us, see our [contact web page](https://openenergydashboard.github.io/contact.html), send an email to info@OpenEnergyDashboard.org or open an issue on GitHub.\n""",,"2016/10/24, 17:30:37",2557,MPL-2.0,936,4715,"2023/10/20, 15:55:56",92,642,955,248,5,13,0.0,0.7815758980301275,"2023/10/18, 16:32:18",v1.0.0,0,93,false,,true,true,,,https://github.com/OpenEnergyDashboard,https://openenergydashboard.github.io/,,,,https://avatars.githubusercontent.com/u/32343363?v=4,,, OpenEMS,Open Source Energy Management System.,OpenEMS,https://github.com/OpenEMS/openems.git,github,"climatechange,energy-management,photovoltaics,energy-storage,electric-vehicle-charging-station,heatpump",Energy Monitoring and Management,"2023/10/25, 11:46:16",524,0,199,true,Java,OpenEMS,OpenEMS,"Java,HTML,TypeScript,Shell,SCSS,JavaScript,Dockerfile",,"b'[![Build Status](https://github.com/OpenEMS/openems/actions/workflows/build.yml/badge.svg)](https://github.com/OpenEMS/openems/actions/workflows/build.yml)\n[![Gitpod live-demo](https://img.shields.io/badge/Gitpod-live--demo-blue?logo=gitpod)](https://gitpod.io/#https://github.com/OpenEMS/openems/tree/main)\n[![Cite via Zenodo](https://zenodo.org/badge/DOI/10.5281/zenodo.4440884.svg)](https://doi.org/10.5281/zenodo.4440883)\n\n

Open Source Energy Management System\n
\n\nOpenEMS - the Open Source Energy Management System - is a modular platform for energy management applications. It was developed around the requirements of monitoring, controlling, and integrating energy storage together with renewable energy sources and complementary devices and services like electric vehicle charging stations, heat-pumps, electrolysers, time-of-use electricity tariffs and more.\n\nIf you plan to use OpenEMS for your own projects, please consider joining the [OpenEMS Association e.V.](https://openems.io/association), a network of universities, hardware manufacturers, software companies as well as commercial and private owners, and get in touch in the [OpenEMS Community forum](https://community.openems.io). \n\n### OpenEMS in \xc2\xbbLocal Energy Management\xc2\xab\n\n![alt text](./doc/modules/ROOT/assets/images/local-energy-management.png ""Local Energy Management"")\n\n### OpenEMS in \xc2\xbbAreal Energy Management\xc2\xab\n\n![alt text](./doc/modules/ROOT/assets/images/areal-energy-management.png ""Areal Energy Management"")\n\n## OpenEMS IoT stack\n\nThe OpenEMS \'Internet of Things\' stack contains three main components:\n\n * **OpenEMS Edge** runs on site, communicates with devices and services, collects data and executes control algorithms\n * **OpenEMS UI** is the real-time user interface for web browsers and smartphones\n * **OpenEMS Backend** runs on a (cloud) server, connects the decentralized Edge systems and provides aggregation, monitoring and control via internet\n\n## Features\n\nThe OpenEMS software architecture was designed to leverage some features that are required by a modern and flexible Energy Management System:\n\n * Fast, PLC-like control of devices\n * Easily extendable due to the use of modern programming languages and modular architecture\n * Reusable, device independent control algorithms due to clear device abstraction\n * Wide range of supported devices and protocols\n\n## OpenEMS UI Screenshots\n\n![alt text](./doc/modules/ROOT/assets/images/ui-live.png ""OpenEMS UI Live View"")\n![alt text](./doc/modules/ROOT/assets/images/ui-history.png ""OpenEMS UI History View"")\n\n## System architecture\n\nOpenEMS is generally used in combination with external hardware and software components\n(the exception is a simulated development environment - see [Getting Started](https://openems.github.io/openems.io/openems/latest/gettingstarted.html)). As a brief overview, this is how OpenEMS is used in production setups:\n![alt text](./doc/modules/ROOT/assets/images/system-architecture.png ""OpenEMS System Architecture"")\n\n## Getting Started\n\n* Open up a [Live-Demo on Gitpod](https://gitpod.io/#https://github.com/OpenEMS/openems)\n* Follow the [Getting Started](https://openems.github.io/openems.io/openems/latest/gettingstarted.html) guide to setup OpenEMS on your own computer\n\n## Documentation\n\n* [Latest version of documentation](https://openems.github.io/openems.io/openems/latest/introduction.html)\n* [Javadoc](https://openems.github.io/openems.io/javadoc/)\n\n## Open Source philosophy\n\nThe OpenEMS project is driven by the [OpenEMS Association e.V.](https://openems.io/association), a network of users, vendors and scientific institutions from all kinds of areas like hardware manufacturers, software companies, grid operators and more. 
They share the common target of developing a free and open-source platform for energy management that supports the 100 % energy transition.\n\nWe are inviting third parties to use OpenEMS for their own projects and are glad to support them with their first steps. In any case, if you are interested in OpenEMS, we would be glad to hear from you in the [OpenEMS Community forum](https://community.openems.io).\n\nOpenEMS development was started by [FENECON GmbH](https://www.fenecon.de), a German company specialized in manufacturing and project development of energy storage systems. It is the software stack behind [FEMS - FENECON Energy Management System](https://fenecon.de/page/fems) and is widely used in private, commercial and industrial applications.\n\nOpenEMS is funded by several federal and EU funding projects. If you are a developer and you would like to get hired by one of the partner companies or universities for working on OpenEMS, please send your motivation letter to info@openems.io.\n\n## Scientific Research\n\nIf you use OpenEMS in your scientific research, please use our Zenodo Digital Object Identifier (DOI) as reference:\n\n[![Cite via Zenodo](https://zenodo.org/badge/DOI/10.5281/zenodo.4440884.svg)](https://doi.org/10.5281/zenodo.4440883)\n\n## License\n\n* OpenEMS Edge \n* OpenEMS Backend\n\nCopyright (C) 2016-2022 OpenEMS Association e.V.\n\nThis product includes software developed at FENECON GmbH: you can\nredistribute it and/or modify it under the terms of the [Eclipse Public License version 2.0](LICENSE-EPL-2.0). \n\n * OpenEMS UI\n\nCopyright (C) 2016-2022 OpenEMS Association e.V.\n\nThis product includes software developed at FENECON GmbH: you can\nredistribute it and/or modify it under the terms of the [GNU Affero General Public License version 3](LICENSE-AGPL-3.0).\n'",,https://doi.org/10.5281/zenodo.4440883,https://doi.org/10.5281/zenodo.4440883","2016/10/16, 14:57:33",2565,AGPL-3.0,334,5625,"2023/10/24, 08:18:02",127,2155,2274,404,1,91,1.0,0.35954356846473035,"2023/10/01, 20:31:55",2023.10.0,0,58,false,,false,false,,,https://github.com/OpenEMS,http://openems.io,,,,https://avatars.githubusercontent.com/u/20765902?v=4,,, eemeter,An open source Python package for implementing and developing standard methods for calculating normalized metered energy consumption and avoided energy use.,openeemeter,https://github.com/openeemeter/eemeter.git,github,"energy,efficiency,energy-data,energy-efficiency,building-energy",Energy Monitoring and Management,"2023/09/01, 18:39:21",192,16,24,true,Python,OpenEEmeter,openeemeter,"Python,Shell,Dockerfile",http://eemeter.openee.io/,"b'EEmeter: tools for calculating metered energy savings\n=====================================================\n\n.. image:: https://travis-ci.org/openeemeter/eemeter.svg?branch=master\n :target: https://travis-ci.org/openeemeter/eemeter\n :alt: Build Status\n\n.. image:: https://img.shields.io/github/license/openeemeter/eemeter.svg\n :target: https://github.com/openeemeter/eemeter\n :alt: License\n\n.. image:: https://readthedocs.org/projects/eemeter/badge/?version=master\n :target: https://eemeter.readthedocs.io/?badge=master\n :alt: Documentation Status\n\n.. image:: https://img.shields.io/pypi/v/eemeter.svg\n :target: https://pypi.python.org/pypi/eemeter\n :alt: PyPI Version\n\n.. image:: https://codecov.io/gh/openeemeter/eemeter/branch/master/graph/badge.svg\n :target: https://codecov.io/gh/openeemeter/eemeter\n :alt: Code Coverage Status\n\n.. 
image:: https://img.shields.io/badge/code%20style-black-000000.svg\n :target: https://github.com/ambv/black\n :alt: Code Style\n\n---------------\n\n**EEmeter** \xe2\x80\x94 an open source toolkit for implementing and developing standard\nmethods for calculating normalized metered energy consumption (NMEC) and\navoided energy use.\n\nBackground - why use the EEMeter library\n----------------------------------------\n\nAt time of writing (Sept 2018), the OpenEEmeter, as implemented in the eemeter\npackage and sibling `eeweather package `_, contains the\nmost complete open source implementation of the\n`CalTRACK Methods `_, which\nspecify a family of ways to calculate and aggregate estimates of avoided energy\nuse at a single meter, particularly suitable for use in pay-for-performance\n(P4P) programs.\n\nThe eemeter package contains a toolkit written in the Python language which may\nhelp in implementing a CalTRACK-compliant analysis.\n\nIt contains a modular set of functions, parameters, and classes which can be\nconfigured to run the CalTRACK methods and close variants.\n\n.. note::\n\n Please keep in mind that use of the OpenEEmeter is neither necessary nor\n sufficient for compliance with the CalTRACK method specification. For example,\n while the CalTRACK methods set specific hard limits for the purpose of\n standardization and consistency, the EEmeter library can be configured to edit\n or entirely ignore those limits. This is because the eemeter package is used not\n only for compliance with, but also for *development of* the CalTRACK methods.\n\n Please also keep in mind that the EEmeter assumes that certain data cleaning\n tasks specified in the CalTRACK methods have occurred prior to usage with the\n eemeter. The package proactively exposes warnings to point out issues of this\n nature where possible.\n\nInstallation\n------------\n\nEEmeter is a Python package and can be installed with pip.\n\n::\n\n $ pip install eemeter\n\nFeatures\n--------\n\n- Reference implementation of standard methods\n\n - CalTRACK Daily Method\n - CalTRACK Monthly Billing Method\n - CalTRACK Hourly Method\n\n- Flexible sources of temperature data. See `EEweather `_.\n- Candidate model selection\n- Data sufficiency checking\n- Model serialization\n- First-class warnings reporting\n- Pandas dataframe support\n- Visualization tools\n\nRoadmap for 2020 development\n----------------------------\n\nThe OpenEEmeter project growth goals for the year fall into two categories:\n\n1. Community goals - we want to help our community thrive and continue to grow.\n2. Technical goals - we want to keep building the library in new ways that make it\n as easy as possible to use.\n\nCommunity goals\n~~~~~~~~~~~~~~~\n\n1. Develop project documentation and tutorials\n\nA number of users have expressed how hard it is to get started when tutorials are\nout of date. We will dedicate time and energy this year to help create high quality\ntutorials that build upon the API documentation and existing tutorials.\n\n2. Make it easier to contribute\n\nAs our user base grows, the need and desire for users to contribute back to the library\nalso grows, and we want to make this as seamless as possible. This means writing and\nmaintaining contribution guides, and creating checklists to guide users through the\nprocess.\n\n\nTechnical goals\n~~~~~~~~~~~~~~~\n\n1. Implement new CalTRACK recommendations\n\nThe CalTRACK process continues to improve the underlying methods used in the\nOpenEEmeter. 
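For orientation, the core workflow that these methods flow through looks
roughly like this. This is a sketch following the eemeter 3.x tutorial API on
the bundled sample data; exact function names may differ between versions::

    import eemeter

    # Load the bundled sample of daily meter readings and temperatures.
    meter_data, temperature_data, sample_metadata = eemeter.load_sample(
        "il-electricity-cdd-hdd-daily"
    )
    intervention_date = sample_metadata["blackout_start_date"]

    # Select a 365-day baseline period ending at the intervention date.
    baseline_meter_data, warnings = eemeter.get_baseline_data(
        meter_data, end=intervention_date, max_days=365
    )

    # Fit the CalTRACK daily (usage-per-day) model on the baseline period.
    design_matrix = eemeter.create_caltrack_daily_design_matrix(
        baseline_meter_data, temperature_data
    )
    baseline_model = eemeter.fit_caltrack_usage_per_day_model(design_matrix)

Reporting-period savings are then derived by comparing observed usage against
the fitted model's counterfactual (see ``eemeter.metered_savings``). As the
CalTRACK methods evolve, this API evolves with them.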
Our primary technical goal is to keep up with these changes and continue\nto be a resource for testing and experimentation during the CalTRACK methods setting\nprocess.\n\n2. Hourly model visualizations\n\nThe hourly methods implemented in the OpenEEMeter library are not yet packaged with\nhigh quality visualizations like the daily and billing methods are. As we build and\npackage new visualizations with the library, more users will be able to understand,\ndeploy, and contribute to the hourly methods.\n\n3. Weather normal and unusual scenarios\n\nThe EEweather package, which supports the OpenEEmeter, comes packaged with publicly\navailable weather normal scenarios, but one feature that could help make that easier\nwould be to package methods for creating custom weather year scenarios.\n\n4. Greater weather coverage\n\nThe weather station coverage in the EEweather package includes full coverage of US and\nAustralia, but with some technical work, it could be expanded to include greater, or\neven worldwide coverage.\n\nLicense\n-------\n\nThis project is licensed under [Apache 2.0](LICENSE).\n\nOther resources\n---------------\n\n- `CONTRIBUTING `_: how to contribute to the project.\n- `MAINTAINERS `_: an ordered list of project maintainers.\n- `CHARTER `_: open source project charter.\n- `CODE_OF_CONDUCT `_: Code of conduct for contributors.\n'",,"2016/08/19, 23:54:36",2622,Apache-2.0,53,2301,"2023/09/01, 18:39:22",15,429,463,24,54,10,0.4,0.23902768399729912,"2017/06/30, 18:52:07",v1.0.0,0,31,false,,true,true,"occamssafetyrazor/deps,pombredanne/5000-deps,recurve-inc/thermostat,kfiramar/baldar,Lisandro79/Measurement-Verification,datamade/oeem-energy-datastore,ZaninMarco/Copy-of-repository,brandonwillard/amimodels,buds-lab/the-building-data-genome-project,impactlab/oeem-uploader,impactlab/oeem-energy-datastore,impactlab/oeem-etl-fake-projects,james-prior/epathermostat-sandbox,impactlab/thermostat,impactlab/oeem-etl,EPAENERGYSTAR/epathermostat",,https://github.com/openeemeter,https://lfenergy.org/projects/openeemeter/,,,,https://avatars.githubusercontent.com/u/19336002?v=4,,, OperatorFabric,"A modular, extensible, industrial-strength and field-tested platform for use in electricity, water and other utility operations.",opfab,https://github.com/opfab/operatorfabric-core.git,github,"energy,platform,collaboration,linux-foundation,hypervision,alerting",Energy Monitoring and Management,"2023/10/25, 07:48:25",32,0,4,true,TypeScript,OperatorFabric,opfab,"TypeScript,Java,JavaScript,Gherkin,Handlebars,HTML,Mustache,Shell,SCSS,CSS,Dockerfile,Scala",https://opfab.github.io,"b'// Copyright (c) 2018-2022 RTE (http://www.rte-france.com)\n// See AUTHORS.txt\n// This document is subject to the terms of the Creative Commons Attribution 4.0 International license.\n// If a copy of the license was not distributed with this\n// file, You can obtain one at https://creativecommons.org/licenses/by/4.0/.\n// SPDX-License-Identifier: CC-BY-4.0\n\n\n:imagesdir: src/docs/asciidoc/images\n\n:sectnums:\n:icons: font\n:hide-uri-scheme:\n\n= OperatorFabric 
README\n\nimage:https://img.shields.io/badge/license-MPL_2.0-blue.svg[MPL-2.0\nLicense,link=https://www.mozilla.org/en-US/MPL/2.0/]\nimage:https://img.shields.io/github/workflow/status/opfab/operatorfabric-core/CI/develop[Build\nStatus,link=https://github.com/opfab/operatorfabric-core/actions]\nimage:https://sonarcloud.io/api/project_badges/measure?project=org.lfenergy.operatorfabric%3Aoperatorfabric-core&metric=alert_status[Quality\nGate,link=https://sonarcloud.io/dashboard?id=org.lfenergy.operatorfabric%3Aoperatorfabric-core]\nimage:https://sonarcloud.io/api/project_badges/measure?project=org.lfenergy.operatorfabric%3Aoperatorfabric-core&metric=coverage[Coverage,link=https://sonarcloud.io/component_measures?id=org.lfenergy.operatorfabric%3Aoperatorfabric-core&metric=Coverage]\nimage:https://sonarcloud.io/api/project_badges/measure?project=org.lfenergy.operatorfabric%3Aoperatorfabric-core&metric=code_smells[Code\nSmells,link=https://sonarcloud.io/component_measures?id=org.lfenergy.operatorfabric%3Aoperatorfabric-core&metric=Maintainability]\nimage:https://bestpractices.coreinfrastructure.org/projects/4806/badge[CII Best Practices,link=https://bestpractices.coreinfrastructure.org/projects/4806]\nimage:https://img.shields.io/badge/Join_us_on-Slack-blueviolet[OperatorFabric Slack Channel,link=https://lfenergy.slack.com/archives/C025ZGJPXM4]\n\nSee our website link:http://opfab.github.io/[opfab.github.io] for the complete documentation.\n\n== Introduction\n\n//tag::short_description[]\nOperatorFabric is a modular, extensible, industrial-strength platform for use in electricity, water, and other utility operations.\n\nIt aims to facilitate operational activities for utilities in two ways:\n\n* Centralize real-time business events in a single place to avoid having multiple screens/software. To do so, OperatorFabric provides \n** event notifications named ""cards"" with filtering features (severity, date, process...)\n** event dispatching rules on a per-user basis (based on groups, organizational entities, processes...)\n** event-sending endpoints for business applications \n** a templating mechanism to customize event rendering (using HTML5)\n** a view of the events on a timeline or a calendar \n** storage of all the events (archive feature)\n** notifications via sounds \n** possibilities to integrate screens from other applications\n\n \n* Facilitate interactions between operational control centers:\n** Share information in real time, as pre-formatted cards that can be sent either manually by operators or automatically by external solutions.\n** Introduce pre-formatted question/response exchanges between control centers. This can be used to implement operational processes (with the notion of ""last time to respond""). 
\n** Share events in calendars (also allowing repeating events)\n\nIn addition, the following features are available: internationalization, light/dark mode for the UI, real-time supervision of connected users, and an authorization mechanism.\n\nIntegration with existing IT systems is an overarching concern: support of Firefox and Chromium-based browsers, docker deployment, communication with business applications via REST API or Kafka, integration with external authentication systems (via OAuth2), monitoring via Prometheus endpoints.\n\n\nOperatorFabric is part of the https://www.lfenergy.org/[LF Energy] coalition, a project of The Linux Foundation that\nsupports open source innovation projects within the energy and electricity sectors.\n\nOpFab is an open source platform licensed under https://www.mozilla.org/en-US/MPL/2.0/[Mozilla Public License V2].\nThe source code is hosted on GitHub in this repository: https://github.com/opfab/operatorfabric-core[operatorfabric-core].\n\nDocumentation is available at https://opfab.github.io/ \n\n//end::short_description[]\n\nimage::feed_screenshot.png[UI screenshot]\n\n== Try it!\n\nIf you want to try OperatorFabric (see what the UI looks like with some test cards) in a few minutes, follow the steps below.\n\n. Clone this repository\n+\n----\ngit clone https://github.com/opfab/operatorfabric-core.git\ncd operatorfabric-core\n----\n\n. Launch our demo docker-compose file\n+\n----\ncd ./config/docker\n./startOpfab.sh\n----\n\n. Once the script is finished, log into the application UI at *localhost:2002/* using operator1_fr/test as credentials.\n\n. Push basic configuration and cards using the following test scripts\n+\n[source,shell]\n----\n./src/test/resources/loadTestConf.sh\n./src/test/resources/send6TestCards.sh\n----\n\nTIP: If you want to experiment in more depth and have more details on how it works (as well as some troubleshooting), check out our\nlink:https://opfab.github.io/documentation/current/getting_started/[Getting Started guide]!\n\n== Technology stack\n\n=== Development\n\nOperatorFabric is mostly written in Java and based on the Spring framework. 
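Business applications do not need to be written in Java, though: as an illustration of the event-sending endpoints mentioned in the introduction, a card can be pushed to the cards-publication service with any HTTP client. The sketch below is hypothetical: the port (2102 in the demo docker setup), the token handling and the exact set of required fields must be checked against the current API documentation.

[source,python]
----
# Hypothetical sketch: pushing a card to OperatorFabric's cards-publication
# service. Port, auth and required fields depend on your deployment; check
# the current API documentation before relying on this.
import time

import requests

card = {
    "publisher": "myBusinessApp",          # example publisher id
    "processVersion": "1",
    "process": "exampleProcess",
    "processInstanceId": "instance-001",
    "state": "messageState",
    "severity": "INFORMATION",             # INFORMATION | COMPLIANT | ACTION | ALARM
    "startDate": int(time.time() * 1000),  # epoch milliseconds
    "title": {"key": "message.title"},
    "summary": {"key": "message.summary"},
    "groupRecipients": ["Dispatcher"],
}

resp = requests.post(
    "http://localhost:2102/cards",                # demo docker setup
    json=card,
    headers={"Authorization": "Bearer <token>"},  # OAuth2 access token
)
resp.raise_for_status()
----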
The Java and Spring foundation makes writing and integrating software components straightforward and simplifies coordination.\n\nimage:https://img.shields.io/badge/Using-Java-%237473C0.svg?style=for-the-badge[Using Java,link=https://www.java.com]\nimage:https://img.shields.io/badge/Using-Spring-%236db33f.svg?style=for-the-badge[Using Spring,link=https://spring.io/]\nimage:https://img.shields.io/badge/Using-Angular-%237473C0.svg?style=for-the-badge[Using Angular,link=https://angular.io/]\nimage:https://img.shields.io/badge/Using-MongoDB-%236db33f.svg?style=for-the-badge[Using MongoDB,link=https://www.mongodb.com/community/]\nimage:https://img.shields.io/badge/Using-Swagger-%237473C0.svg?style=for-the-badge[Using Swagger,link=https://swagger.io/]\nimage:https://img.shields.io/badge/Using-RabbitMQ-%236db33f.svg?style=for-the-badge[Using RabbitMQ,link=https://www.rabbitmq.com/]\n\n\n=== Continuous Integration / Continuous Delivery\n\nOperatorFabric is built and integrated using battle-tested tools and (open) platforms.\n\nimage:https://img.shields.io/badge/Built%20with-Gradle-%23410099.svg?style=for-the-badge[Built with Gradle,link=https://gradle.org/]\nimage:https://img.shields.io/badge/Using-Github%20Actions-%23FF647D.svg?style=for-the-badge[Using Github Actions,link=https://github.com/opfab/operatorfabric-core/actions]\nimage:https://img.shields.io/badge/Using-SonarCloud-%23FF647D.svg?style=for-the-badge[Using SonarCloud,link=https://sonarcloud.io/dashboard?id=org.lfenergy.operatorfabric%3Aoperatorfabric-core]\n\n== Licensing\n\nThis project and all its sub-projects are licensed under\nhttps://www.mozilla.org/en-US/MPL/2.0/[Mozilla Public License V2.0]. See\nlink:LICENSE.txt[LICENSE.txt]\n\n== Contributing\n\nRead our link:https://opfab.github.io/documentation/current/community/[Community Documentation] for more information on\nhow to contribute to the project.\n'",,"2018/09/27, 09:36:25",1854,MPL-2.0,1136,6495,"2023/10/25, 07:46:41",98,3706,5115,1514,0,15,0.5,0.7796084512502424,"2023/10/23, 17:04:59",4.0.1.RELEASE,5,36,false,,false,true,,,https://github.com/opfab,,,,,https://avatars.githubusercontent.com/u/43637906?v=4,,, energy-sparks,An open source application that is designed to help schools improve their energy efficiency.,Energy-Sparks,https://github.com/Energy-Sparks/energy-sparks.git,github,"energy,bath,school,rails,ruby,ruby-on-rails,data",Energy Monitoring and Management,"2023/10/25, 16:04:12",22,0,4,true,Ruby,Energy Sparks,Energy-Sparks,"Ruby,HTML,JavaScript,SCSS,Shell,Handlebars,Procfile",http://www.energysparks.uk,"b'[![Build Status](https://travis-ci.org/Energy-Sparks/energy-sparks.svg?branch=master)](https://travis-ci.org/BathHacked/energy-sparks)\n[![Maintainability](https://api.codeclimate.com/v1/badges/1d4f9219bfa9e5848154/maintainability)](https://codeclimate.com/github/BathHacked/energy-sparks/maintainability)\n[![Test Coverage](https://api.codeclimate.com/v1/badges/1d4f9219bfa9e5848154/test_coverage)](https://codeclimate.com/github/BathHacked/energy-sparks/test_coverage)\n\n# Energy Sparks\n\nEnergy Sparks is an open source application that is designed to help schools improve their energy efficiency.\n\nThe application collects and presents gas and electricity usage data in a way that is accessible to staff, students and parents. Supported by educational resources, the application will support teachers in helping children understand more about energy usage, how to be more efficient and see how actions they take in the school, e.g. 
switching off lighting, have an effect on usage.\r\n\r\nCombining access to data, the ability to log interventions and a competitive element between schools, the goal is not just to save schools money by reducing energy consumption through long-term changes; it is hoped that the application will also help educate children about what it means to be energy efficient.\r\n\r\nThe application is open source and is powered by open data. It is being designed to be easy to deploy and run for minimal cost, allowing it to be run by local councils and/or community groups around the UK.\r\n\r\n# For Users\r\n\r\nDevelopment of the application and documentation is in progress. Please check back later for more information.\r\n\r\nFor now you may wish to read the evolving documentation in [the project wiki](https://github.com/BathHacked/energy-sparks/wiki).\r\n\r\n# For Developers\r\n\r\nThe application uses Ruby on Rails.\r\n\r\nRead the [developer guide](https://github.com/Energy-Sparks/energy-sparks/wiki/Setting-up-a-developer-environment) in the wiki for how to get started and the CONTRIBUTING.md guidelines.\r\n\r\n\r\n\r\n## Browser testing provided by:\r\n\r\n[![Browserstack](https://raw.githubusercontent.com/Energy-Sparks/energy-sparks/master/markdown_pages/browserstack-logo.png)](https://www.browserstack.com/)\r\n'",,"2016/08/16, 16:26:52",2626,MIT,1593,10529,"2023/10/25, 16:04:15",12,2974,3167,1027,0,12,1.6,0.7057766367137356,,,0,10,false,,false,true,,,https://github.com/Energy-Sparks,https://energysparks.uk/,UK,,,https://avatars.githubusercontent.com/u/66954989?v=4,,, emonpi,"The OpenEnergyMonitor system has the capability to monitor electrical energy use / generation, temperature and humidity.",openenergymonitor,https://github.com/openenergymonitor/emonpi.git,github,"emonpi,hardware-designs,raspberry-pi,energy-monitor,arduino,emoncms",Energy Monitoring and Management,"2023/09/07, 11:57:33",260,0,14,true,C++,OpenEnergyMonitor,openenergymonitor,"C++,Shell,Python,C,PHP,Makefile",https://guide.openenergymonitor.org/setup,"b'# emonPi\n\nRaspberry Pi based energy Monitoring Unit\n\n![emonPi](docs/img/emonPi_shop_photo.png)\n\n## Documentation\n\n- [Install Guide](https://docs.openenergymonitor.org/emonpi/install.html)\n- [Connect](https://docs.openenergymonitor.org/emonpi/connect.html)\n- [Pulse counting](https://docs.openenergymonitor.org/emonpi/pulse_counting.html)\n- [Temperature sensing](https://docs.openenergymonitor.org/emonpi/temperature_sensing.html)\n- [Firmware](https://docs.openenergymonitor.org/emonpi/firmware.html)\n- [Configuration](https://docs.openenergymonitor.org/emonpi/configuration.html)\n- [Use in North America](https://docs.openenergymonitor.org/emonpi/north-america.html)\n- [Modification](https://docs.openenergymonitor.org/emonpi/modification.html)\n- [Technical](https://docs.openenergymonitor.org/emonpi/technical.html)\n\nOr view directly on github [here](docs).\n\n## [emonSD image download](https://docs.openenergymonitor.org/emonsd/download.html)\n\n[Software Stack Build & Documentation](https://github.com/openenergymonitor/emonscripts)\n\n## Open-Hardware:\n\n- [Schematic & Board CAD Design Files](https://github.com/openenergymonitor/emonpi/tree/master/hardware)\n\n## Community & Support\n\n- [OpenEnergyMonitor Forums](https://community.openenergymonitor.org)\n- OpenEnergyMonitor Shop Support: support@openenergymonitor.zendesk.com\n\n## License\n\n- The hardware designs (schematics and CAD files) are licensed under a Creative Commons Attribution-ShareAlike 3.0 Unported License.\n- The firmware is released under the GNU 
GPL V3 license. The documentation is subject to the GNU Free Documentation License.\n- The hardware designs follow the terms of the OSHW (Open-source hardware) Statement of Principles 1.0.\n\n## Disclaimer\n\n```\nOUR PRODUCTS AND ASSEMBLY KITS MAY BE USED BY EXPERIENCED, SKILLED USERS, AT THEIR OWN RISK. TO THE FULLEST EXTENT PERMISSIBLE BY THE APPLICABLE LAW, WE HEREBY DISCLAIM ANY AND ALL RESPONSIBILITY, RISK, LIABILITY AND DAMAGES ARISING OUT OF DEATH OR PERSONAL INJURY RESULTING FROM ASSEMBLY OR OPERATION OF OUR PRODUCTS.\n\nYour safety is your own responsibility, including proper use of equipment and safety gear, and determining whether you have adequate skill and experience. OpenEnergyMonitor and Megni registered partnership disclaims all responsibility for any resulting damage, injury, or expense. It is your responsibility to make sure that your activities comply with applicable laws, including copyright. Always check the webpage associated with each unit before you get started. There may be important updates or corrections. All use of the instructions, kits, projects and suggestions given both by megni.co.uk, openenergymonitor.org and shop.openenergymonitor.org are to be used at your own risk. The technology (hardware, firmware and software) is constantly changing; documentation (including the build guide and instructions) may not be complete or correct.\n\nIf you feel uncomfortable with assembling or using any part of the system, return it to us for a full refund.\n```\n'",,"2015/02/13, 22:57:13",3175,MIT,41,1509,"2023/06/28, 06:34:56",18,88,129,3,119,0,0.0,0.23253757736516356,"2023/04/20, 08:42:04",20-04-23,0,19,false,,false,false,,,https://github.com/openenergymonitor,https://openenergymonitor.org,United Kingdom,,,https://avatars.githubusercontent.com/u/758944?v=4,,, EmonLib,Arduino Energy Monitoring Library.,openenergymonitor,https://github.com/openenergymonitor/EmonLib.git,github,,Energy Monitoring and Management,"2019/05/31, 16:32:27",552,0,50,true,C++,OpenEnergyMonitor,openenergymonitor,C++,openenergymonitor.org,"b"" _ _ _\n | | (_) |\n ___ _ __ ___ ___ _ __ | | _| |__\n / _ \\ '_ ` _ \\ / _ \\| '_ \\| | | | '_ \\\n | __/ | | | | | (_) | | | | |____| | |_) |\n \\___|_| |_| |_|\\___/|_| |_|______|_|_.__/\n\nArduino Energy Monitoring Library - compatible with Arduino 1.0\n*****************************************************************\n\nDesigned for use with emonTx: http://openenergymonitor.org/emon/Modules\n\nDownload to Arduino IDE 'libraries' folder. 
A restart of the IDE is required.\n\nGit clone and git pull can easily be used to keep the library up-to-date and manage changes.\nJeeLabs has done a good post on the topic: http://jeelabs.org/2011/12/29/out-with-the-old-in-with-the-new/\n\n\n\nUpdate: 5th January 2014: Support added for Arduino Due (ARM Cortex-M3, 12-bit ADC) by icboredman.\n\nTo enable this feature on the Arduino Due, add the following statement to the setup() function in your main sketch:\n\nanalogReadResolution(ADC_BITS);\n\nADC_BITS is set to 12 for the Arduino Due; EmonLib will otherwise default to 10 bits.\nSee blog post on using the Arduino Due as an energy monitor: http://boredomprojects.net/index.php/projects/home-energy-monitor\n\n""",,"2012/02/27, 22:24:43",4257,AGPL-3.0,0,55,"2023/05/08, 18:50:14",31,21,44,1,170,5,0.0,0.2894736842105263,,,0,11,false,,false,false,,,https://github.com/openenergymonitor,https://openenergymonitor.org,United Kingdom,,,https://avatars.githubusercontent.com/u/758944?v=4,,, Emoncms,"A powerful open source web application for processing, logging and visualizing energy, temperature and other environmental data.",emoncms,https://github.com/emoncms/emoncms.git,github,"emoncms,php,openenergymonitor,energy-monitor,dashboards,sustainability",Energy Monitoring and Management,"2023/10/19, 07:52:27",1155,0,88,true,PHP,Emoncms,emoncms,"PHP,JavaScript,CSS,HTML,Shell,Python,Makefile",https://emoncms.org,"b'# Emoncms\n\nEmoncms is an open-source web application for processing, logging and visualising energy, temperature and other environmental data and is part of the [OpenEnergyMonitor project](http://openenergymonitor.org).\n\n![Emoncms](emoncms_graphic.png)\n\n## Requirements\n\n- PHP (tested with 8.1.12) \n- MySQL or MariaDB (tested with 10.5.15) \n- Apache (tested with 2.4.54)\n- Redis* (tested with 6.0.16)\n\n_*Redis is recommended because it reduces the number of disk writes and therefore prolongs disk life (noticeably on SD cards e.g. Raspberry Pi). Some input-processors also require Redis and fail silently if Redis is not installed. 
Some environments such as shared hosting or as far as we have tried Windows servers don\'t support Redis hence why Emoncms has a fall back mode that allows core operation without Redis._\n\n## Documentation\n\n**View the Emoncms documentation at: [https://docs.openenergymonitor.org/emoncms](https://docs.openenergymonitor.org/emoncms)**\n\n- [Getting started emonPi/Base](https://docs.openenergymonitor.org/emoncms/intro-rpi.html)\n- [Getting started emoncms.org](https://docs.openenergymonitor.org/emoncms/intro-remote.html)\n- [Emoncms Core Concepts](https://docs.openenergymonitor.org/emoncms/coreconcepts.html)\n- [Posting data](https://docs.openenergymonitor.org/emoncms/postingdata.html)\n- [MQTT](https://docs.openenergymonitor.org/emoncms/mqtt.html)\n- [View Graphs](https://docs.openenergymonitor.org/emoncms/graphs.html)\n- [Dashboard Builder](https://docs.openenergymonitor.org/emoncms/dashboards.html)\n- [Application dashboards](https://docs.openenergymonitor.org/emoncms/dashboards.html)\n- [Octopus Agile app](https://docs.openenergymonitor.org/emoncms/agileapp.html)\n- [Calculating Daily kWh](https://docs.openenergymonitor.org/emoncms/daily-kwh.html)\n- [Calculating Averages](https://docs.openenergymonitor.org/emoncms/daily-averages.html)\n- [Pulse counting](https://docs.openenergymonitor.org/emoncms/pulse-counting.html)\n- [Exporting CSV](https://docs.openenergymonitor.org/emoncms/export-csv.html)\n- [Histograms](https://docs.openenergymonitor.org/emoncms/histograms.html)\n- [Post Process module](https://docs.openenergymonitor.org/emoncms/postprocess.html)\n- [DemandShaper module](https://docs.openenergymonitor.org/emoncms/demandshaper.html)\n- [Import / Backup](https://docs.openenergymonitor.org/emoncms/import.html)\n- [Update & Upgrade](https://docs.openenergymonitor.org/emoncms/update.html)\n- [Remote Access](https://docs.openenergymonitor.org/emoncms/remoteaccess.html)\n- [Troubleshooting](https://docs.openenergymonitor.org/emoncms/troubleshooting.html)\n\n**Design**\n\n- [Emoncms architecture](docs/design/architecture.md)\n- [Input processing implementation](docs/design/input-processing.md)\n- [Developing a new Module](docs/design/developing-a-new-module.md)\n- [Global variables in Emoncms](docs/design/global-variables.md)\n\n**Emoncms timeseries database design (feed storage)**\n\n- [Emoncms time series database development history](docs/timeseries/History.md)\n- [Fixed interval time series](docs/timeseries/Fixed-interval.md)\n- [Variable interval time series](docs/timeseries/Variable-interval.md)\n- [Improving write performance with buffering](docs/timeseries/Write-load-investigation.md)\n\n**Other**\n\n- [Backup](docs/Backup.md)\n- [CLI](docs/CLI.md)\n- [Encrypted Input](docs/input_encrypted.md)\n\n**Emoncms Terminology**\n\n- **Input:** An incoming datasource. Each input has an associated ""node"" identifier and a ""key"" sub-identifier. Inputs are entry points, only the last value and time of the input is recorded. To record historic data a feed needs to be created from an input.\n- **Input: Node:** A grouping identifier for an input or feed.\n- **Input: Key:** A sub-identifier for items within each Node.\n- **Input process list (or input processing):** A list of processes* performed sequentially on each input value as it is received on that input.\n- **Process:** A function that can be attached to the process list of an input to change the value or to save the value to a feed*.\n- **Feed:** A place where data is recorded, a time-series of datapoints. 
The standard time-series databases used by Emoncms are PHPFina and PHPTimeSeries and were written as part of the Emoncms project.\n\n* For a description of what each input process does in Emoncms, see the helper note within the Emoncms input processing configuration interface.\n\n**Emoncms.org API Reference**\n\n- [Input API reference](https://emoncms.org/site/api#input)\n- [Feed API reference](https://emoncms.org/site/api#feed)\n\n## Install\n\nEmoncms is designed and tested to run on either Ubuntu Linux (Local, Dedicated machine or VPS) or RaspberryPi OS. It should work on other Debian Linux systems though we dont test or provide documentation for installation on these. \n\nWe do not recommend and are unable to support installation on shared hosting or XAMPP servers, shared hosting in particular has no or limited capabilities for running some of the scripts used by emoncms. There is now a large choice of low cost miniature Linux VPS hosting solutions that provide a much better installation environment at similar cost.\n\nRecommended: \n\n* [Install with emonScripts](https://docs.openenergymonitor.org/emonsd/install.html)\n* [Pre built emonSD SD-card Image Download](https://docs.openenergymonitor.org/emonsd/download.html)\n* [Purchase pre-loaded SD card](http://shop.openenergymonitor.com/emonsd-pre-loaded-raspberry-pi-sd-card/)\n\nExperimental (not currently up to date):\n\n* [Multi-platform using Docker Container](https://github.com/emoncms/emoncms-docker)\n\n## Modules\n\nModules can be installed by downloading or git cloning into the emoncms/Modules folder. Be sure to check for database updates in Administration menu after installing new modules. The following core modules are included on the emonSD image:\n\n- [Graph module](https://github.com/emoncms/graph) - Advanced graphing module that integrates with the emoncms feed list, highly recommended; examples of use can be found in emoncms guide [[1]](http://guide.openenergymonitor.org/setup/daily-kwh)[[2]](http://guide.openenergymonitor.org/setup/daily-averages/)[[3]](http://guide.openenergymonitor.org/setup/export-csv/)[[4]](http://guide.openenergymonitor.org/setup/histograms).\n\n- [Device module](https://github.com/emoncms/device) - Automatically configure inputs and feeds using device templates.\n\n- [Dashboards module](https://github.com/emoncms/dashboard) - Required for creating, viewing and publishing dashboards.\n\n- [App module](https://github.com/emoncms/app.git) - Application specific dashboards e.g. MyElectric, MySolar.\n\n- [Config]( https://github.com/emoncms/config.git) - In-browser emonhub.conf editor and emonhub.log log viewer. Use `git clone` to install.\n\n- [Wifi module]( https://github.com/emoncms/wifi.git) - [Wifi configuration interface designed for use on the emonPi](https://guide.openenergymonitor.org/setup/connect/)\n\n- [Raspberry Pi Backup / Restore module](https://github.com/emoncms/backup) (emonPi / emonBase)\n\n- [Sync module](https://github.com/emoncms/sync)\n\n- [Usefulscripts](https://github.com/emoncms/usefulscripts): Not strictly a module, more a collection of useful scripts for use with emoncms.\n\n- [DemandShaper module]( http://github.com/emoncms/demandshaper) - Schedule smartplugs, EmonEVSE smart EV chargers, heatpumps to run at best time in terms of: carbon, cost, grid strain. 
Based on day ahead forecasts.\n\nThere are many other available modules such as the event module and openbem (open source building energy modelling module): check out the [Emoncms repo list](https://github.com/emoncms).\n\n## Branches\n\n* [master](https://github.com/emoncms/emoncms) - The latest and greatest developments. Potential bugs, use at your own risk! All pull-requests should be made to the *master* branch.\n\n* [stable](https://github.com/emoncms/emoncms/tree/stable) - emonPi/emonBase release branch, regularly merged from master. Slightly more tried and tested. [See release change log](https://github.com/emoncms/emoncms/releases).\n\n## Tools\n\n* [PHPFina data file viewer](https://github.com/trystanlea/phpfinaview) - Easily explore phpfina timeseries feed engine data files directly without a full Emoncms installation. Useful for checking backups and archived data.\n\n#### Android App\n\n[Google Play](https://play.google.com/store/apps/details?id=org.emoncms.myapps&hl=en_GB)\n\n[GitHub Repo](https://github.com/emoncms/AndroidApp)\n\n[Development Forum](https://community.openenergymonitor.org/c/emoncms/mobile-app)\n\n## More information\n\n- Cloud hosted platform - http://emoncms.org\n- [OpenEnergyMonitor Forums](https://community.openenergymonitor.org)\n- [OpenEnergyMonitor Homepage](https://openenergymonitor.org)\n'",,"2012/10/15, 20:28:36",4027,AGPL-3.0,124,4946,"2023/10/18, 23:00:37",93,1060,1771,119,6,16,0.3,0.7936073059360731,"2023/01/18, 11:52:50",11.3.0,0,120,false,,false,false,,,https://github.com/emoncms,http://emoncms.org,,,,https://avatars.githubusercontent.com/u/2532311?v=4,,, FlexMeasures,"A platform for building energy flexibility services with forecasting and scheduling, written in Python & offering a USEF-conform API.",SeitaBV,https://github.com/FlexMeasures/flexmeasures.git,github,"energy,energy-data,energy-flexibility,backend,machine-learning",Energy Monitoring and Management,"2023/10/18, 14:02:09",116,1,51,true,Python,FlexMeasures,FlexMeasures,"Python,HTML,CSS,JavaScript,Shell,Makefile,Jinja,Dockerfile,Mako",https://flexmeasures.io,"b'![FlexMeasures Logo Light](https://github.com/FlexMeasures/screenshots/blob/main/logo/flexmeasures-horizontal-color.svg#gh-light-mode-only)\n![FlexMeasures Logo Dark](https://github.com/FlexMeasures/screenshots/blob/main/logo/flexmeasures-horizontal-dark.svg#gh-dark-mode-only)\n\n[![License](https://img.shields.io/github/license/seitabv/flexmeasures?color=blue)](https://github.com/FlexMeasures/flexmeasures/blob/main/LICENSE)\n![lint-and-test](https://github.com/FlexMeasures/flexmeasures/workflows/lint-and-test/badge.svg)\n[![Pypi Version](https://img.shields.io/pypi/v/flexmeasures.svg)](https://pypi.python.org/pypi/flexmeasures)\n[![](https://img.shields.io/badge/python-3.8+-blue.svg)](https://www.python.org/downloads/)\n[![Code style: black](https://img.shields.io/badge/code%20style-black-000000.svg)](https://github.com/psf/black)\n[![Documentation Status](https://readthedocs.org/projects/flexmeasures/badge/?version=latest)](https://flexmeasures.readthedocs.io/en/latest/?badge=latest)\n[![Coverage](https://coveralls.io/repos/github/FlexMeasures/flexmeasures/badge.svg)](https://coveralls.io/github/FlexMeasures/flexmeasures)\n[![CII Best Practices](https://bestpractices.coreinfrastructure.org/projects/6095/badge)](https://bestpractices.coreinfrastructure.org/projects/6095)\n\nThe *FlexMeasures Platform* is the intelligent & developer-friendly EMS (energy management system) to support real-time energy flexibility apps, rapidly and 
scalable. \n\nIn a nutshell, FlexMeasures turns data into optimized schedules for flexible assets like batteries and heat pumps, or for flexible industry processes:\n\n![The most simple view of FlexMeasures, turning data into schedules](https://raw.githubusercontent.com/FlexMeasures/screenshots/main/architecture/simple-flexEMS.png)\n\n\nHere is why using FlexMeasures is a great idea:\n\n- Developing energy flexibility apps & services (e.g. to enable demand response) is crucial, but expensive.\n- FlexMeasures reduces development costs with real-time data intelligence & integrations, uncertainty models and developer support such as API/UI and plugins.\n\n![High-level overview of FlexMeasures as an EMS for energy flexibility apps, using plugins to fit a given use case](https://raw.githubusercontent.com/FlexMeasures/screenshots/main/architecture/overview-flexEMS.png)\n\n\nSo why optimise the schedules of flexible assets? Because planning ahead allows flexible assets to serve the whole system with their flexibility, e.g. by shifting energy consumption to other times.\nFor the asset owners, this creates CO\xe2\x82\x82 savings but also monetary value (e.g. through self-consumption, dynamic tariffs and grid incentives). FlexMeasures thrives to be applicable in cases with multiple sources of value (""value stacking"") and multiple types of assets (e.g. home/office/factory).\n\nAs possible users, we see energy service companies (ESCOs) who want to build real-time apps & services around energy flexibility for their customers, or medium/large industrials who are looking for support in their internal digital tooling. However, even small companies and hobby projects might find FlexMeasures useful!\n\n## How does FlexMeasures enable rapid development of energy flexibility apps?\n\nFlexMeasures is designed to help with three basic needs of developers in the energy flexibility domain:\n\n### I need help with integrating real-time data and continuously computing new data\n\nFlexMeasures is designed to make decisions based on data in an automated way. Data pipelining and dedicated machine learning tooling is crucial.\n\n- API/CLI functionality to read in time series data\n- Extensions for integrating 3rd party data, e.g. from [ENTSO-E](https://github.com/SeitaBV/flexmeasures-entsoe) or [OpenWeatherMap](https://github.com/SeitaBV/flexmeasures-openweathermap)\n- Forecasting for the upcoming hours\n- Schedule optimization for flexible assets\n\n\n### It\'s hard to correctly model data with different sources, resolutions, horizons and even uncertainties\n\nMuch developer time is spent correcting data and treating it correctly, so that you know you are computing on the right knowledge.\n\nFlexMeasures is built on the [timely-beliefs framework](https://github.com/SeitaBV/timely-beliefs), so we model this real-world aspect accurately:\n\n- Expected data properties are explicit (e.g. unit, time resolution)\n- Incoming data is converted to fitting unit and time resolution automatically\n- FlexMeasures also stores who thought that something happened (or that it will happen), and when they thought so\n- Uncertainty can be modelled (useful for forecasting)\n\n\n### I want to build new features quickly, not spend days solving basic problems\n\nBuilding customer-facing apps & services is where developers make impact. 
We make their work easy.\n\n- FlexMeasures has well-documented API endpoints and CLI commands to interact with its model and data\n- You can extend it easily with your own logic by writing plugins\n- A backend UI shows you your assets in maps and your data in plots. There is also support for plots to be available per API, for integration in your own frontend\n- Multi-tenancy \xe2\x80\x95 model multiple accounts on one server. Data is only seen/editable by authorized users in the right account\n\n\n## Getting started\n\nHead over to our [documentation](https://flexmeasures.readthedocs.io), e.g. the [getting started guide](https://flexmeasures.readthedocs.io/en/latest/getting-started.html) or the [5-minute tutorial](https://flexmeasures.readthedocs.io/en/latest/tut/toy-example-from-scratch.html). Or find more information on [FlexMeasures.io](https://flexmeasures.io).\n\nSee also [Seita\'s Github profile](https://github.com/SeitaBV), e.g. for FlexMeasures plugin examples.\n\n\n## Development & community\n\nFlexMeasures was initiated by [Seita BV](https://www.seita.nl) in The Netherlands in order to make sure that smart backend software is available to all parties working with energy flexibility, no matter where they are working on their local energy transition.\n\nWe made FlexMeasures freely available under the Apache2.0 licence and it is now [an incubation project at the Linux Energy Foundation](https://www.lfenergy.org/projects/flexmeasures/).\n\nWithin the FlexMeasures project, [we welcome contributions](https://github.com/FlexMeasures/tsc/blob/main/CONTRIBUTING.md). You can also [learn more about our governance](https://github.com/Flexmeasures/tsc/blob/main/GOVERNANCE.md).\n\nYou can connect with the community here on GitHub (e.g. by creating an issue), on [the mailing list](https://lists.lfenergy.org/g/flexmeasures), on [the FlexMeasures channel within the LF Energy Slack](https://slack.lfenergy.org/) or [by contacting the current maintainers](https://seita.nl/who-we-are/#contact).\n'",,"2020/12/04, 13:11:50",1055,Apache-2.0,212,1077,"2023/10/18, 14:02:11",106,530,760,329,7,11,2.3,0.5245762711864407,"2023/10/02, 17:47:03",v0.16.1,15,13,false,,false,false,SeitaBV/flexmeasures-openweathermap,,https://github.com/FlexMeasures,https://flexmeasures.io,,,,https://avatars.githubusercontent.com/u/75485874?v=4,,, STM32 Energy Monitoring,"The following resources are a work in progress guide to using the STM32 platform for energy monitoring, being written as part of development work into the next generation of OpenEnergyMonitor hardware.",openenergymonitor,https://github.com/openenergymonitor/STM32.git,github,,Energy Monitoring and Management,"2021/04/16, 14:42:35",70,0,9,true,C++,OpenEnergyMonitor,openenergymonitor,"C++,C,Assembly,Makefile,Rich Text Format,Shell,Python",,"b""# STM32 Energy Monitoring\n\nThe STM32 platform is a family of microcontrollers based on the Arm Cortex M processor, offering among many powerful features, plenty of 12-bit analog inputs and high speed sampling, making them particularly suitable for energy monitoring applications.\n\nThe following resources are a work in progress guide to using the STM32 platform for energy monitoring, being written as part of development work into the next generation of OpenEnergyMonitor hardware. 
They are to be included as a section in [http://learn.openenergymonitor.org](http://learn.openenergymonitor.org)\n\n### OpenEnergyMonitor Forum threads:\n\n- [STM32 Development thread](https://community.openenergymonitor.org/t/stm32-development)\n- [STM32 Hardware Development](https://community.openenergymonitor.org/t/stm32-hardware-development/7135)\n- [STM32 PlatformIO](https://community.openenergymonitor.org/t/stm32-platformio/7015)\n\n### Getting started: STM32 (Arduino integration)\n\nIf you are familiar with the Arduino platform, getting the basics working using the familiar Arduino IDE and the STM32Duino Arduino integration is a good place to start.\n\n- [1. Blinking an LED using the NUCLEO-F303RE Development board & STM32Duino](docs/STM32Duino/Blink.md)\n- [2. Basic NUCLEO-F303RE energy monitor using an EmonTxShield & EmonLib discrete sampling STM32Duino library](docs/STM32Duino/EmonLib.md)\n\n### Introducing STM32CubeMX\n\nADC access using the Arduino analogRead command gives limited performance; it is possible to sample much faster across many channels by using the lower-level STM32 HAL (Hardware Abstraction Layer) provided by ST. The development pathway to access these features is different and can be quite daunting if you are primarily familiar with the Arduino platform. There is a tool called STM32CubeMX, a kind of project builder that you can use to generate the initial outline of your project; from there you can enter your own 'user code' into the relevant placeholders in the generated project. The following set of guides gives an introduction to this process:\n\n- [1. Blink](docs/Blink.md)\n- [2. Serial](docs/Serial.md)\n- [3. Analog](docs/Analog.md)\n- [4. DMA](docs/DMA.md)\n- [5. RFM69](docs/RFM69.md)\n\n\n### Hardware\n\nNotes on hardware development and initial designs:\n\n- [1. ST-LINK nucleo](docs/ST-LINK.md)\n- [2. ST-LINK adapters](docs/st-link2.md)\n- [3. Serial/UART Upload](docs/uartupload.md)\n- [4. RaspberryPi UART Upload + autoreset](docs/rpiautoupload.md)\n\n**STM32-pi_basic**\n\n- [Eagle design 01](Hardware/stm32-pi_basic/1)\n- [Prototype 1, breadboard, voltage follower & anti-alias](docs/prototype1.md)\n- [STM32-pi_basic eagle design 02](Hardware/stm32-pi_basic/2)\n- [STM32-pi_basic eagle design 03](Hardware/stm32-pi_basic/3)\n- [STM32-pi_basic eagle design 04](Hardware/stm32-pi_basic/4)\n- [Design Notes v4](docs/stm32notes.md)\n- [STM32-pi_basic eagle design 05](Hardware/stm32-pi_basic/5)\n\n**STM32-pi_full**\n\n- [Hardware/stm32-pi_full](Hardware/stm32-pi_full)\n\n**Misc**\n\n- [Flashing a new chip](docs/Blink-fresh-chip.md)\n\n### Firmware Examples\n\nFirmware examples included in this repository:\n\n- [1. Blink](Software/Blink)\n- [2. ADC](Software/ADC)\n- [3. DMA](Software/DMA)\n- [4. Emon](Software/Emon): EmonTxShield Voltage and CT1 current measurement, single ADC example.\n- [5. Emon1CT](Software/Emon1CT): EmonTxShield Voltage (ADC1) and CT3 current measurement (ADC2) example.\n- [6. Emon1CT_ds18b20](Software/Emon1CT_ds18b20): EmonTxShield Voltage (ADC1) and CT3 current measurement (ADC2) example with DS18B20 temperature measurement.\n- [7. Emon3CT](Software/Emon3CT): EmonTxShield Voltage (ADC1) and 3x CT inputs on ADC2.\n- [8. Emon3CT_CB](Software/Emon3CT_CB): Firmware for [Hardware/stm32-pi_basic/5](Hardware/stm32-pi_basic/5)\n- [9. Emon3CT_CB_v2](Software/Emon3CT_CB_v2): Firmware for [Hardware/stm32-pi_basic/5](Hardware/stm32-pi_basic/5) v2.\n- [10. 
Emon3CT_RFM69](Software/Emon3CT_RFM69): EmonTxShield Voltage (ADC1), 3x CT inputs on ADC2 and RFM69 support.\n- [11. Emon3CT_VET](Software/Emon3CT_VET): Basic firmware for [Hardware/stm32-pi_full](Hardware/stm32-pi_full) v2 by Trystan Lea.\n- [12. emonTxshield_dBC (v13)](Software/emonTxshield_dBC): Multi-channel energy monitor example firmware thanks to @dBC, see [https://community.openenergymonitor.org/t/stm32-development/6815/232](https://community.openenergymonitor.org/t/stm32-development/6815/232)\n- [13. RFM69](Software/RFM69): RFM69 library and examples.\n- [14. MBUS](Software/MBUS): Example of reading data from an MBUS meter using Serial and DMA.\n\n**STM32 Pi Full**\n\n- [STM32 pi v0.7 firmware basics](docs/stm32-pi.md)\n- [Sombrero_VB_Blink](Software/Sombrero_VB_Blink)\n- [Sombrero_VE_ADC-test](Software/Sombrero_VE_ADC-test)\n- [Sombrero_VE_Blink](Software/Sombrero_VE_Blink)\n- [Sombrero_VE_Working5](Sombrero_VE_Working5)\n\n### Other:\n\n- [STM32F103 BluePill Blink](docs/bluepill.md)\n- [CAD files from ST](docs/cad-files.md)\n- [CAD file of the enclosure](https://a360.co/3kcYF79)\n""",,"2018/03/17, 09:39:37",2048,Apache-2.0,0,383,"2022/11/11, 09:32:09",0,17,20,1,348,0,0.0,0.25409836065573765,,,0,3,false,,false,false,,,https://github.com/openenergymonitor,https://openenergymonitor.org,United Kingdom,,,https://avatars.githubusercontent.com/u/758944?v=4,,, EMHASS,"Energy Management for Home Assistant, a Python module designed to optimize your home energy, interfacing with Home Assistant.",davidusb-geek,https://github.com/davidusb-geek/emhass.git,github,"energy,home-automation,linear-programming,management,model-predictive-control,optimization",Energy Monitoring and Management,"2023/10/19, 19:01:03",184,1,136,true,Python,,,"Python,CSS,HTML,Dockerfile,Shell,Makefile",,"b'
Energy Management for Home Assistant

If you like this work please consider buying a coffee ;-)
\n\nEMHASS is a Python module designed to optimize your home energy, interfacing with Home Assistant.\n\n## Introduction\n\nEMHASS (Energy Management for Home Assistant) is an optimization tool designed for residential households. The package uses a Linear Programming approach to optimize energy usage while considering factors such as electricity prices, power generation from solar panels, and energy storage from batteries. EMHASS provides a high degree of configurability, making it easy to integrate with Home Assistant and other smart home systems. Whether you have solar panels, energy storage, or just a controllable load, EMHASS can provide an optimized daily schedule for your devices, allowing you to save money and minimize your environmental impact.\n\nThe complete documentation for this package is [available here](https://emhass.readthedocs.io/en/latest/).\n\n## What is Energy Management for Home Assistant (EMHASS)?\n\nEMHASS and Home Assistant provide a comprehensive energy management solution that can optimize energy usage and reduce costs for households. By integrating these two systems, households can take advantage of advanced energy management features that provide significant cost savings, increased energy efficiency, and greater sustainability.\n\nEMHASS is a powerful energy management tool that generates an optimization plan based on variables such as solar power production, energy usage, and energy costs. The plan provides valuable insights into how energy can be better managed and utilized in the household. Even if households do not have all the necessary equipment, such as solar panels or batteries, EMHASS can still provide a minimal use case solution to optimize energy usage for controllable/deferrable loads.\n\nHome Assistant provides a platform for the automation of household devices based on the optimization plan generated by EMHASS. This includes devices such as batteries, pool pumps, hot water heaters, and electric vehicle (EV) chargers. By automating EV charging and other devices, households can take advantage of off-peak energy rates and optimize their EV charging schedule based on the optimization plan generated by EMHASS.\n\nOne of the main benefits of integrating EMHASS and Home Assistant is the ability to customize and tailor the energy management solution to the specific needs and preferences of each household. With EMHASS, households can define their energy management objectives and constraints, such as maximizing self-consumption or minimizing energy costs, and the system will generate an optimization plan accordingly. Home Assistant provides a platform for the automation of devices based on the optimization plan, allowing households to create a fully customized and optimized energy management solution.\n\nOverall, the integration of EMHASS and Home Assistant offers a comprehensive energy management solution that provides significant cost savings, increased energy efficiency, and greater sustainability for households. 
By leveraging advanced energy management features and automation capabilities, households can achieve their energy management objectives while enjoying the benefits of a more efficient and sustainable energy usage, including optimized EV charging schedules.\n\nThe package flow can be graphically represented as follows:\n\n![](https://raw.githubusercontent.com/davidusb-geek/emhass/master/docs/images/ems_schema.png)\n\n## Configuration and Installation\n\nThe package is meant to be highly configurable, with an object-oriented modular approach and a main configuration file defined by the user.\nEMHASS was designed to be integrated with Home Assistant, hence its name. \nInstallation instructions and example Home Assistant automation configurations are given below.\n\nYou must follow these steps to make EMHASS work properly:\n\n1) Define all the parameters in the configuration file according to your installation. See the description for each parameter in the **configuration** section.\n\n2) Most notably, you will need to define the main data entering EMHASS: `sensor_power_photovoltaics` for the name of your hass variable containing the PV produced power, and `sensor_power_load_no_var_loads` for the load power of your household excluding the power of the deferrable loads that you want to optimize.\n\n3) Launch the actual optimization and check the results. This can be done manually using the buttons in the web ui or with a `curl` command like this: `curl -i -H \'Content-Type:application/json\' -X POST -d \'{}\' http://localhost:5000/action/dayahead-optim`.\n\n4) If you\xe2\x80\x99re satisfied with the optimization results then you can set the optimization and data publish task commands in an automation. You can read more about this in the **usage** section below.\n\n5) The final step is to link the deferrable load variables to real switches on your installation. Example code for this, using automations and the shell command integration, is presented below in the **usage** section.\n\nA more detailed workflow is given below:\n\n![](https://raw.githubusercontent.com/davidusb-geek/emhass/master/docs/images/workflow.png)\n\n### Method 1) The EMHASS add-on for Home Assistant OS and supervised users\n\nFor Home Assistant OS and HA Supervised users, I\'ve developed an add-on that will help you use EMHASS. The add-on is more user-friendly, as the configuration can be modified directly in the add-on options pane and, as with the standalone docker, it exposes a web ui that can be used to inspect the optimization results and manually trigger a new optimization.\n\nYou can find the add-on with the installation instructions here: [https://github.com/davidusb-geek/emhass-add-on](https://github.com/davidusb-geek/emhass-add-on)\n\nThe add-on usage instructions can be found on the documentation pane of the add-on once installed or directly here: [EMHASS Add-on documentation](https://github.com/davidusb-geek/emhass-add-on/blob/main/emhass/DOCS.md)\n\nThese architectures are supported: `amd64`, `armv7`, `armhf` and `aarch64`.\n\n### Method 2) Using Docker in standalone mode\n\nYou can also install EMHASS using docker. This can be on the same machine as Home Assistant (if using the supervised install method) or on a different, remote machine. To install, first pull the latest image from docker hub:\n```\ndocker pull davidusb/emhass-docker-standalone\n```\n\nYou can also build your image locally. 
For this, clone this repository, set up your `config_emhass.yaml` file and use the provided make file with this command:\n```\nmake -f deploy_docker.mk clean_deploy\n```\nThen load the image in the .tar file:\n```\ndocker load -i .tar\n```\nFinally check your image tag with `docker images` and launch the docker itself:\n```\ndocker run -it --restart always -p 5000:5000 -e ""LOCAL_COSTFUN=profit"" -v $(pwd)/config_emhass.yaml:/app/config_emhass.yaml -v $(pwd)/secrets_emhass.yaml:/app/secrets_emhass.yaml --name DockerEMHASS \n```\n\n### Method 3) Legacy method using a Python virtual environment\n\nWith this method it is recommended to install in a virtual environment.\nFor this you will need `virtualenv`; install it using:\n```\nsudo apt install python3-virtualenv\n```\nThen create and activate the virtual environment:\n```\nvirtualenv -p /usr/bin/python3 emhassenv\ncd emhassenv\nsource bin/activate\n```\nInstall using the distribution files:\n```\npython3 -m pip install emhass\n```\nClone this repository to obtain the example configuration files.\nWe will suppose that this repository is cloned to:\n```\n/home/user/emhass\n```\nThis will be the root path containing the yaml configuration files (`config_emhass.yaml` and `secrets_emhass.yaml`) and the different needed folders (a `data` folder to store the optimization results and a `scripts` folder containing the bash scripts described further below).\n\nTo upgrade the installation in the future just use:\n```\npython3 -m pip install --upgrade emhass\n```\n\n## Usage\n\n### Method 1) Add-on and docker standalone\n\nIf using the add-on or the standalone docker installation, it exposes a simple webserver on port 5000. You can access it directly using your browser, e.g. http://localhost:5000.\n\nWith this web server you can perform RESTful POST commands on multiple ENDPOINTS with prefix `action/*`:\n\n- A POST call to `action/perfect-optim` to perform a perfect optimization task on the historical data.\n- A POST call to `action/dayahead-optim` to perform a day-ahead optimization task of your home energy.\n- A POST call to `action/naive-mpc-optim` to perform a naive Model Predictive Controller optimization task. 
If using this option you will need to define the correct `runtimeparams` (see further below).\n- A POST call to `action/publish-data` to publish the optimization results data for the current timestamp.\n- A POST call to `action/forecast-model-fit` to train a machine learning forecaster model with the passed data (see the [dedicated section](https://emhass.readthedocs.io/en/latest/mlforecaster.html) for more help).\n- A POST call to `action/forecast-model-predict` to obtain a forecast from a pre-trained machine learning forecaster model (see the [dedicated section](https://emhass.readthedocs.io/en/latest/mlforecaster.html) for more help).\n- A POST call to `action/forecast-model-tune` to optimize the machine learning forecaster model\'s hyperparameters using Bayesian optimization (see the [dedicated section](https://emhass.readthedocs.io/en/latest/mlforecaster.html) for more help).\n\nA `curl` command can then be used to launch an optimization task like this: `curl -i -H \'Content-Type:application/json\' -X POST -d \'{}\' http://localhost:5000/action/dayahead-optim`.\n\n### Method 2) Legacy method using a Python virtual environment\n\nTo run a command simply use the `emhass` CLI command followed by the needed arguments.\nThe available arguments are:\n- `--action`: Used to set the desired action; options are: `perfect-optim`, `dayahead-optim`, `naive-mpc-optim`, `publish-data`, `forecast-model-fit`, `forecast-model-predict` and `forecast-model-tune`.\n- `--config`: Define the path to the config.yaml file (including the yaml file itself).\n- `--costfun`: Define the type of cost function; this is optional and the options are: `profit` (default), `cost`, `self-consumption`.\n- `--log2file`: Define whether we should log to a file or not; this is optional and the options are: `True` or `False` (default).\n- `--params`: Configuration as JSON. \n- `--runtimeparams`: Data passed at runtime. This can be used to pass your own forecast data to EMHASS.\n- `--debug`: Use `True` for testing purposes.\n- `--version`: Show the current version of EMHASS.\n\nFor example, the following command line can be used to perform a day-ahead optimization task:\n```\nemhass --action \'dayahead-optim\' --config \'/home/user/emhass/config_emhass.yaml\' --costfun \'profit\'\n```\nBefore running any real command you need to modify the `config_emhass.yaml` and `secrets_emhass.yaml` files. These files should contain the information adapted to your own system. To do this take a look at the dedicated section in the [documentation](https://emhass.readthedocs.io/en/latest/config.html).\n\n## Home Assistant integration\n\nTo integrate with Home Assistant we will need to define some shell commands in the `configuration.yaml` file and some basic automations in the `automations.yaml` file.\nIn the next few paragraphs we are going to consider the `dayahead-optim` optimization strategy, which is also the first that was implemented, and we will also cover how to publish the results.\nAdditional optimization strategies were later developed that can be used in combination with, or as a replacement for, the `dayahead-optim` strategy, such as MPC, or that expand the functionalities, such as the Machine Learning method to predict your household consumption. 
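As an aside before detailing those strategies: every `action/*` endpoint above is a plain HTTP POST, so it can be driven from any HTTP client, not only `curl`. Below is a minimal Python sketch using the `requests` library, assuming the webserver is reachable on `localhost:5000`; the `trigger_action` helper and the placeholder forecast values are illustrative and not part of EMHASS itself.

```python
# Minimal sketch: calling EMHASS action endpoints from Python instead of curl.
# Assumes the EMHASS webserver (add-on or standalone docker) listens on localhost:5000.
# The helper name and the forecast numbers are illustrative only.
import requests

BASE_URL = "http://localhost:5000/action"

def trigger_action(action, runtimeparams=None):
    """POST to an EMHASS action endpoint, mirroring the documented curl calls."""
    response = requests.post(
        f"{BASE_URL}/{action}",
        json=runtimeparams or {},  # an empty dict mirrors the curl -d '{}' body
        timeout=60,
    )
    response.raise_for_status()
    return response

# Day-ahead optimization, equivalent to the curl example above:
trigger_action("dayahead-optim")

# The same action, passing your own PV power forecast at runtime (placeholder values):
trigger_action("dayahead-optim", {"pv_power_forecast": [0, 70, 141.22, 246.18, 513.5]})

# Publish the optimization results to Home Assistant:
trigger_action("publish-data")
```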
Each of them has some specificities and features and will be considered in dedicated sections.\n\n### Dayahead Optimization - Method 1) Add-on and docker standalone\n\nIn `configuration.yaml`:\n```\nshell_command:\n dayahead_optim: ""curl -i -H \\""Content-Type:application/json\\"" -X POST -d \'{}\' http://localhost:5000/action/dayahead-optim""\n publish_data: ""curl -i -H \\""Content-Type:application/json\\"" -X POST -d \'{}\' http://localhost:5000/action/publish-data""\n```\n### Dayahead Optimization - Method 2) Legacy method using a Python virtual environment\n\nIn `configuration.yaml`:\n```\nshell_command:\n dayahead_optim: /home/user/emhass/scripts/dayahead_optim.sh\n publish_data: /home/user/emhass/scripts/publish_data.sh\n```\nCreate the file `dayahead_optim.sh` with the following content:\n```\n#!/bin/bash\n. /home/user/emhassenv/bin/activate\nemhass --action \'dayahead-optim\' --config \'/home/user/emhass/config_emhass.yaml\'\n```\nAnd the file `publish_data.sh` with the following content:\n```\n#!/bin/bash\n. /home/user/emhassenv/bin/activate\nemhass --action \'publish-data\' --config \'/home/user/emhass/config_emhass.yaml\'\n```\nThen specify user rights and make the files executables:\n```\nsudo chmod -R 755 /home/user/emhass/scripts/dayahead_optim.sh\nsudo chmod -R 755 /home/user/emhass/scripts/publish_data.sh\nsudo chmod +x /home/user/emhass/scripts/dayahead_optim.sh\nsudo chmod +x /home/user/emhass/scripts/publish_data.sh\n```\n### Common for any installation method\n\nIn `automations.yaml`:\n```\n- alias: EMHASS day-ahead optimization\n trigger:\n platform: time\n at: \'05:30:00\'\n action:\n - service: shell_command.dayahead_optim\n- alias: EMHASS publish data\n trigger:\n - minutes: /5\n platform: time_pattern\n action:\n - service: shell_command.publish_data\n```\nIn these automations the day-ahead optimization is performed everyday at 5:30am and the data is published every 5 minutes.\n\nThe final action will be to link a sensor value in Home Assistant to control the switch of a desired controllable load. For example imagine that I want to control my water heater and that the `publish-data` action is publishing the optimized value of a deferrable load that I want to be linked to my water heater desired behavior. In this case we could use an automation like this one below to control the desired real switch:\n```\nautomation:\n- alias: Water Heater Optimized ON\n trigger:\n - minutes: /5\n platform: time_pattern\n condition:\n - condition: numeric_state\n entity_id: sensor.p_deferrable0\n above: 0.1\n action:\n - service: homeassistant.turn_on\n entity_id: switch.water_heater_switch\n```\nA second automation should be used to turn off the switch:\n```\nautomation:\n- alias: Water Heater Optimized OFF\n trigger:\n - minutes: /5\n platform: time_pattern\n condition:\n - condition: numeric_state\n entity_id: sensor.p_deferrable0\n below: 0.1\n action:\n - service: homeassistant.turn_off\n entity_id: switch.water_heater_switch\n```\n\n## The publish-data specificities\n\nThe `publish-data` command will push to Home Assistant the optimization results for each deferrable load defined in the configuration. For example if you have defined two deferrable loads, then the command will publish `sensor.p_deferrable0` and `sensor.p_deferrable1` to Home Assistant. When the `dayahead-optim` is launched, after the optimization, a csv file will be saved on disk. 
The `publish-data` command will load the latest csv file and look for the closest timestamp that matches the current time using the `datetime.now()` method in Python. This means that if EMHASS is configured for 30min time step optimizations, the csv will be saved with timestamps 00:00, 00:30, 01:00, 01:30, ... and so on. If the current time is 00:05, then the closest timestamp of the optimization results that will be published is 00:00. If the current time is 00:25, then the closest timestamp of the optimization results that will be published is 00:30.\n\nThe `publish-data` command will also publish PV and load forecast data on sensors `p_pv_forecast` and `p_load_forecast`. If using a battery, then the battery optimized power and the SOC will be published on sensors `p_batt_forecast` and `soc_batt_forecast`. On these sensors the future values are passed as nested attributes.\n\nIt is possible to provide custom sensor names for all the data exported by the `publish-data` command. For this, when using the `publish-data` endpoint just add some runtime parameters as dictionaries like this:\n```\nshell_command:\n publish_data: ""curl -i -H \\""Content-Type:application/json\\"" -X POST -d \'{\\""custom_load_forecast_id\\"": {\\""entity_id\\"": \\""sensor.p_load_forecast\\"", \\""unit_of_measurement\\"": \\""W\\"", \\""friendly_name\\"": \\""Load Power Forecast\\""}}\' http://localhost:5000/action/publish-data""\n```\n\nThese keys are available to modify: `custom_pv_forecast_id`, `custom_load_forecast_id`, `custom_batt_forecast_id`, `custom_batt_soc_forecast_id`, `custom_grid_forecast_id`, `custom_cost_fun_id`, `custom_deferrable_forecast_id`, `custom_unit_load_cost_id` and `custom_unit_prod_price_id`.\n\nIf you provide the `custom_deferrable_forecast_id` then the passed data should be a list of dictionaries, like this:\n```\nshell_command:\n publish_data: ""curl -i -H \\""Content-Type:application/json\\"" -X POST -d \'{\\""custom_deferrable_forecast_id\\"": [{\\""entity_id\\"": \\""sensor.p_deferrable0\\"",\\""unit_of_measurement\\"": \\""W\\"", \\""friendly_name\\"": \\""Deferrable Load 0\\""},{\\""entity_id\\"": \\""sensor.p_deferrable1\\"",\\""unit_of_measurement\\"": \\""W\\"", \\""friendly_name\\"": \\""Deferrable Load 1\\""}]}\' http://localhost:5000/action/publish-data""\n```\nAnd you should be careful that the list of dictionaries has the correct length, which is the number of defined deferrable loads.\n\n## Passing your own data\n\nIn EMHASS we have basically 4 forecasts to deal with:\n\n- PV power production forecast (internally based on the weather forecast and the characteristics of your PV plant). This is given in Watts.\n\n- Load power forecast: how much power your house will demand over the next 24h. This is given in Watts.\n\n- Load cost forecast: the price of the energy from the grid over the next 24h. This is given in EUR/kWh.\n\n- PV production selling price forecast: at what price you are selling your excess PV production over the next 24h. This is given in EUR/kWh.\n\nThe sensor containing the load data should be specified in parameter `var_load` in the configuration file. As we want to optimize the household energy use, we need to forecast the load power consumption. The default method for this is a naive approach using 1-day persistence. The load data variable should not contain the data from the deferrable loads themselves. For example, let\'s say that you set your deferrable load to be the washing machine. 
The variable that you should enter in EMHASS will be: `var_load: \'sensor.power_load_no_var_loads\'` and `sensor_power_load_no_var_loads = sensor_power_load - sensor_power_washing_machine`. This is supposing that the overall load of your house is contained in variable: `sensor_power_load`. The sensor `sensor_power_load_no_var_loads` can be easily created with a new template sensor in Home Assistant.\n\nIf you are implementing a MPC controller, then you should also need to provide some data at the optimization runtime using the key `runtimeparams`.\n\nThe valid values to pass for both forecast data and MPC related data are explained below.\n\n### Forecast data\n\nIt is possible to provide EMHASS with your own forecast data. For this just add the data as list of values to a data dictionary during the call to `emhass` using the `runtimeparams` option. \n\nFor example if using the add-on or the standalone docker installation you can pass this data as list of values to the data dictionary during the `curl` POST:\n```\ncurl -i -H \'Content-Type:application/json\' -X POST -d \'{""pv_power_forecast"":[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 70, 141.22, 246.18, 513.5, 753.27, 1049.89, 1797.93, 1697.3, 3078.93, 1164.33, 1046.68, 1559.1, 2091.26, 1556.76, 1166.73, 1516.63, 1391.13, 1720.13, 820.75, 804.41, 251.63, 79.25, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]}\' http://localhost:5000/action/dayahead-optim\n```\nOr if using the legacy method using a Python virtual environment:\n```\nemhass --action \'dayahead-optim\' --config \'/home/user/emhass/config_emhass.yaml\' --runtimeparams \'{""pv_power_forecast"":[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 70, 141.22, 246.18, 513.5, 753.27, 1049.89, 1797.93, 1697.3, 3078.93, 1164.33, 1046.68, 1559.1, 2091.26, 1556.76, 1166.73, 1516.63, 1391.13, 1720.13, 820.75, 804.41, 251.63, 79.25, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]}\'\n```\n\nThe possible dictionary keys to pass data are:\n\n- `pv_power_forecast` for the PV power production forecast.\n\n- `load_power_forecast` for the Load power forecast.\n\n- `load_cost_forecast` for the Load cost forecast.\n\n- `prod_price_forecast` for the PV production selling price forecast.\n\n### Passing other data\n\nIt is possible to also pass other data during runtime in order to automate the energy management. For example, it could be useful to dynamically update the total number of hours for each deferrable load (`def_total_hours`) using for instance a correlation with the outdoor temperature (useful for water heater for example). \n\nHere is the list of the other additional dictionary keys that can be passed at runtime:\n\n- `num_def_loads` for the number of deferrable loads to consider.\n\n- `P_deferrable_nom` for the nominal power for each deferrable load in Watts.\n\n- `def_total_hours` for the total number of hours that each deferrable load should operate.\n\n- `treat_def_as_semi_cont` to define if we should treat each deferrable load as a semi-continuous variable.\n\n- `set_def_constant` to define if we should set each deferrable load as a constant fixed value variable with just one startup for each optimization task.\n\n- `solcast_api_key` for the SolCast API key if you want to use this service for PV power production forecast.\n\n- `solcast_rooftop_id` for the ID of your rooftop for the SolCast service implementation.\n\n- `solar_forecast_kwp` for the PV peak installed power in kW used for the solar.forecast API call. 
\n\n- `SOCtarget` for the desired target value of initial and final SOC.\n\n- `publish_prefix` use this key to pass a common prefix to all published data. This will add a prefix to the sensor name but also to the forecast attributes keys within the sensor.\n\n## A naive Model Predictive Controller\n\nAn MPC controller was introduced in v0.3.0. This is an informal/naive representation of an MPC controller. It can be used in combination with, or as a replacement for, the Dayahead Optimization.\n\nAn MPC controller performs the following actions:\n\n- Set the prediction horizon and receding horizon parameters.\n- Perform an optimization on the prediction horizon.\n- Apply the first element of the obtained optimized control variables.\n- Repeat at a relatively high frequency, e.g. every 5 min.\n\nThis is the receding horizon principle.\n\nWhen applying this controller, the following `runtimeparams` should be defined:\n\n- `prediction_horizon` for the MPC prediction horizon. Set this to at least 5 times the optimization time step.\n\n- `soc_init` for the initial value of the battery SOC for the current iteration of the MPC. \n\n- `soc_final` for the final value of the battery SOC for the current iteration of the MPC. \n\n- `def_total_hours` for the list of deferrable load functioning hours. These values can decrease as the day advances to take into account receding horizon daily energy objectives for each deferrable load.\n\nA correct call for an MPC optimization should look like:\n\n```\ncurl -i -H \'Content-Type:application/json\' -X POST -d \'{""pv_power_forecast"":[0, 70, 141.22, 246.18, 513.5, 753.27, 1049.89, 1797.93, 1697.3, 3078.93], ""prediction_horizon"":10, ""soc_init"":0.5,""soc_final"":0.6,""def_total_hours"":[1,3]}\' http://localhost:5000/action/naive-mpc-optim\n```\n\n## A machine learning forecaster\n\nStarting in v0.4.0 a new machine learning forecaster class was introduced.\nThis is intended to provide a new and alternative method to forecast your household consumption and use it when such a forecast is needed to optimize your energy through the available strategies.\nCheck the dedicated section in the documentation here: [https://emhass.readthedocs.io/en/latest/mlforecaster.html](https://emhass.readthedocs.io/en/latest/mlforecaster.html)\n\n## Development\n\nPull requests are very welcome on this project. For development you can find some instructions here: [Development](./docs/develop.md)\n\n## Troubleshooting\n\nSome problems may arise from solver-related issues in the Pulp package. It was found that for arm64 architectures (i.e. Raspberry Pi 4, 64 bits) the default solver is not available. A workaround is to use another solver. The `glpk` solver is an option.\n\nThis can be controlled in the configuration file with parameters `lp_solver` and `lp_solver_path`. The options for `lp_solver` are: \'PULP_CBC_CMD\', \'GLPK_CMD\' and \'COIN_CMD\'. 
If using \'COIN_CMD\' as the solver you will need to provide the correct path to this solver in parameter `lp_solver_path`, ex: \'/usr/bin/cbc\'.\n\n\n## License\n\nMIT License\n\nCopyright (c) 2021-2023 David HERNANDEZ\n\nPermission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the ""Software""), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:\n\nThe above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.\n\nTHE SOFTWARE IS PROVIDED ""AS IS"", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.\n'",,"2021/09/12, 19:49:39",773,MIT,283,593,"2023/10/20, 13:07:44",11,58,113,90,5,1,0.4,0.0855855855855856,"2023/10/19, 19:03:16",v0.5.1,0,7,false,,true,true,davidusb-geek/emhass-add-on,,,,,,,,,, Open Energy View,The goal of this project is to analyze and present resource consumption data to users empowering them to conserve and save money.,JPHutchins,https://github.com/JPHutchins/open-energy-view.git,github,"iot,energy,energy-monitor,energy-consumption,energy-data,electricity,electricity-meter,electricity-consumption,electricity-consumption-analysis,electricity-consumption-forecasting,home-automation,climate-change",Energy Monitoring and Management,"2023/01/21, 22:27:34",57,0,22,true,JavaScript,,,"JavaScript,Python,CSS,Shell,HTML",https://www.openenergyview.com,"b'# Open Energy View\n\nThe goal of this project is to analyze and present resource consumption data to users empowering them to conserve and save money. \n\n## Open Beta!\n\nhttps://www.openenergyview.com\n\nIf you are a PG&E customer you can link your account now! If you are not a PG&E customer you can try the demo and talk to J.P. about integrating your utility.\n\n## User Interface\n\n![Interface](/docs/open-energy-view-dashboard.PNG)\n\n## Design\n\n![Design](/docs/PGESMD_sketch_full.png)\n\n# Development\n\n## Environment Setup (Ubuntu 20.04)\n\nProcess notes here: https://github.com/JPHutchins/open-energy-view/issues/31\n\nThe following notes are for setting up the environment with a Windows 10 host and Ubuntu 20.04 on WSL2. 
Please submit a PR if you find adaptations necessary for your environment.\n\nPersonally I use VSCode from the Windows host utilizing the ""Remote - SSH"" and ""Remote - WSL"" extensions.\n\n### Clone this repository\n```\ngit clone git@github.com:JPHutchins/open-energy-view.git\ncd open-energy-view\n```\n### Install backend dependencies\n* **Install python requirements**\n\n Note: check your python3 version\n ```\n sudo apt install python3.8-venv build-essential python3-dev\n ```\n* **Create the virtual environment and install packages**\n ```\n python3 -m venv venv\n source venv/bin/activate\n pip3 install -r requirements.txt\n ```\n* **Install and configure rabbitmq**\n * Install erlang:\n ```\n sudo apt update\n sudo apt install software-properties-common apt-transport-https\n wget -O- https://packages.erlang-solutions.com/ubuntu/erlang_solutions.asc | sudo apt-key add -\n echo ""deb https://packages.erlang-solutions.com/ubuntu focal contrib"" | sudo tee /etc/apt/sources.list.d/rabbitmq.list\n sudo apt update\n sudo apt install erlang\n ```\n * Install rabbitmq\n ```\n curl -s https://packagecloud.io/install/repositories/rabbitmq/rabbitmq-server/script.deb.sh | sudo bash\n sudo apt install rabbitmq-server\n ```\n * Start rabbitmq-server\n ```\n sudo service rabbitmq-server start\n ```\n * Verify that rabbitmq-server is running\n ```\n sudo service rabbitmq-server status\n ```\n * Configure rabbitmq\n ```\n sudo rabbitmqctl add_user jp admin\n sudo rabbitmqctl set_user_tags jp administrator\n sudo rabbitmqctl add_vhost myvhost\n sudo rabbitmqctl set_permissions -p myvhost jp "".*"" "".*"" "".*""\n ```\n\n### Install frontend dependencies and build\n* **Install nvm** (if you don\'t have it)\n\n notes: https://github.com/nvm-sh/nvm\n ```\n curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.39.1/install.sh | bash\n ```\n* **Install Node.js (includes npm)**\n ```\n nvm install 10.19.0\n ```\n* **Install frontend packages**\n ```\n cd open_energy_view/frontend\n nvm use 10\n npm install\n ```\n* **Build frontend**\n \n Assumes you are at path: `*/open-energy-view/open_energy_view/frontend`\n ```\n nvm use 10\n npm run build\n ```\n\n### Run the development server and workers\n* **Start the server and workers**\n * Open four terminals (example from VSCode)\n\n ![Four-Terminals](/docs/four-terminals.png)\n * First terminal: `./run-wsgi-dev`\n * Second terminal: `./run-io-worker`\n * Third terminal: `./run-cpu-worker`\n* **Open the development site in a browser**\n * Fourth terminal: `ip a`\n * Note the IP address of your WSL2 instance, in this case `172.31.30.203`\n\n ![ip-a](/docs/ip-a.png)\n * On your host OS, open a Chrome or Firefox web browser and navigate to `http://<WSL2 IP>:5000`\n\n ![browser-address](/docs/browser-address.png)\n\n### Example account setup\nFor first time setup you must register a user to your local database. Use something easily memorable and keep in mind that you can register as many users as you need while testing.\n* Click ""Register now!""\n * For email use: `dev@dev.com`\n * For password use: `admin`\n* You are prompted to add energy sources. This is local development so there would be no way to add a real PGE account here. In the dropdown select ""Fake Utility"" and click ""Authorize"".\n* You are prompted to name the fake energy source. 
Enter `dev`, for example, then click ""Add Source"".\n\nAfter simulating API calls and parsing the retrieved ESPI data (J.P.\'s old data), you will be greeted with an OEV instance that will respond to changes in your local Python/Flask/Celery backend and React frontend.\n\n## Data Analysis\n\n### Averages\nData is always presented in Watt-hours (Wh), so that the user can compare different time intervals to one another. It is not meaningful to compare the 630,000 Watt-hours consumed in the month of June to the 30,000 Watt-hours consumed last Tuesday. Rather, we would like to understand the intensity of usage (the average) of different time intervals. Usefully, we can see that the average 1,250 Watt-hours consumed last Tuesday is higher than the average 875 Watt-hours consumed during June.\n\n### Partitions\nA partition is a time interval that recurs each day. The default partitions are:\n- Night: 12AM -> 7AM\n- Day: 7AM -> 6PM\n- Evening: 6PM -> 12AM\n\nThese partitions allow the user to develop conclusions about which activities are using the most resources.\n\n### Passive Consumption\nPassive consumption is the amount of a resource a building uses even when no person is actively using energy. It is calculated statistically using a rolling mean and rolling standard deviation. This passive consumption metric can account for an outsized share of the electricity used in a building, since passive loads are by definition always consuming energy.\n\nPresentation of this very useful metric allows users to understand the impact of passive appliances on their resource consumption and empowers them to find and disable devices that they do not need running 24/7.\n\n#### Activities Pie Chart\nThe activities pie chart shows the user how much power each activity consumed over the current time window.\n\n### Trends\nVarious trends are calculated to give the user real-time feedback on their resource conservation efforts.\n#### Seasonal Trend\nThis trend shows approximately how much power the user is using during this ""time of the year"" this year vs. the same ""time of the year"" last year. The time range is +/- 14 days to mitigate the impact of statistical outliers such as unseasonably hot or cold weather.
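\n\nAs a minimal sketch of that comparison (a hypothetical helper, not the project\'s actual backend code; it assumes an hourly Wh series with a timezone-aware `DatetimeIndex`):\n\n```python\nimport pandas as pd\n\ndef seasonal_trend(hourly_wh: pd.Series, today: pd.Timestamp) -> float:\n    # Mean usage in a +/- 14 day window around today, compared with\n    # the same window centered one year earlier.\n    window = pd.Timedelta(days=14)\n    this_year = hourly_wh.loc[today - window : today + window].mean()\n    center = today - pd.DateOffset(years=1)\n    last_year = hourly_wh.loc[center - window : center + window].mean()\n    # A ratio above 1 means more intense usage than this time last year.\n    return this_year / last_year\n```\n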
#### Active Use Trend\nThis trend shows active use up to this point.\nExamples of active use:\n- Appliances\n- TVs\n- Computers\n- Lighting\n#### Background (Passive) Trend\nThis trend shows passive use up to this point.\nExamples of passive use:\n- Network Equipment\n- Security Systems\n- HVAC\n- IoT devices\n\n## Resources\nGreen Button and ESPI\n\nhttps://www.energy.gov/data/green-button\n\nhttps://green-button.github.io/developers/\n\nhttp://www.greenbuttondata.org/\n\nhttps://github.com/GreenButtonAlliance\n\nhttps://github.com/JPHutchins/pgesmd_self_access\n'",,"2019/09/03, 03:37:49",1513,CUSTOM,4,859,"2022/09/11, 01:51:01",36,6,25,2,409,27,0.3333333333333333,0.0011947431302270495,,,0,2,false,,false,false,,,,,,,,,,, OpenNEM,Aims to make the wealth of public Australian Electricity Market data more accessible to a wider audience.,opennem,https://github.com/opennem/opennem-fe.git,github,"national-electricity-market,vue,bulma-css,d3,nuxtjs,nuxt,vuejs,d3js,australia",Energy System Data Access,"2023/10/20, 02:45:34",60,0,9,true,Vue,OpenNEM,opennem,"Vue,JavaScript,SCSS,HTML",https://opennem.org.au,"b'# OpenNEM Energy Market Platform\n\n![logo](https://developers.opennem.org.au/_static/logo.png)\n\n**NOTE: This is the frontend project.** For the core project and any issues, see [opennem/opennem](https://github.com/opennem/opennem)\n\nThe OpenNEM project aims to make the wealth of public National Electricity Market (NEM) data more accessible to a wider audience.\n\nOpenNEM is a project of the [Energy Transition Hub](http://energy-transition-hub.org/).\n\nProject homepage at https://opennem.org.au\n\nFind us on [twitter](https://twitter.com/OpenNEM)\n\nDeveloped by:\n\n- [Dylan McConnell (@dylanjmcconnell) | Twitter](https://twitter.com/dylanjmcconnell)\n- [simon holmes \xc3\xa0 court (@simonahac) | Twitter](https://twitter.com/simonahac)\n- [Steven Tan (@chienleng) | Twitter](https://twitter.com/chienleng)\n- [Nik Cubrilovic (@dir) | Twitter](https://twitter.com/dir) [Website](https://nikcub.me)\n\n---\n\n## Development\n\nThis project uses [Yarn (v1 - classic)](https://classic.yarnpkg.com/lang/en/) for package management; ensure that Yarn is installed globally first.\n\n```sh\n$ yarn install\n```\n\nThis will install the required Node packages.\n\n### Run the dev server\n\n```sh\n$ yarn dev\n```\n\nThis will run the local [`Nuxt`](https://nuxtjs.org/) dev server; you should be able to open `http://localhost:3000/` in your browser. By default, the public-facing OpenNEM API will be used.\n\n---\n\n## Issues\n\nFile issues at the main [OpenNEM Repository](https://github.com/opennem/opennem) and label them as frontend.\n\n---\n\n## License\n\nOpenNEM is MIT licensed.\n'",,"2017/09/16, 01:54:29",2230,MIT,106,3032,"2023/10/06, 04:25:00",59,108,173,74,19,2,0.0,0.024517087667161985,"2023/10/20, 02:45:39",v4.22.0,0,6,false,,false,false,,,https://github.com/opennem,https://opennem.org.au,"Melbourne, Australia",,,https://avatars.githubusercontent.com/u/39894852?v=4,,, Open Power System Data,A list of primary data sources that are helpful for power system modeling of Europe.,,,custom,,Energy System Data Access,,,,,,,,,,https://open-power-system-data.org/data-sources,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,, The Public Utility Data Liberation Project,Makes the US' energy data easier to access and 
use.,catalyst-cooperative,https://github.com/catalyst-cooperative/pudl.git,github,"open-data,ferc,eia,energy,utility,climate,electricity,epa,coal,natural-gas,eia923,eia860,cems,ddj,ghg,emissions,pudl,python,etl,sqlite",Energy System Data Access,"2023/10/24, 16:02:42",399,0,77,true,Python,Catalyst Cooperative,catalyst-cooperative,"Python,Jinja,Shell,Dockerfile,HCL,Mako",https://catalyst.coop/pudl,"b""===============================================================================\nThe Public Utility Data Liberation Project (PUDL)\n===============================================================================\n\n.. readme-intro\n\n.. image:: https://www.repostatus.org/badges/latest/active.svg\n :target: https://www.repostatus.org/#active\n :alt: Project Status: Active\n\n.. image:: https://github.com/catalyst-cooperative/pudl/workflows/tox-pytest/badge.svg\n :target: https://github.com/catalyst-cooperative/pudl/actions?query=workflow%3Atox-pytest\n :alt: Tox-PyTest Status\n\n.. image:: https://img.shields.io/codecov/c/github/catalyst-cooperative/pudl?style=flat&logo=codecov\n :target: https://codecov.io/gh/catalyst-cooperative/pudl\n :alt: Codecov Test Coverage\n\n.. image:: https://img.shields.io/readthedocs/catalystcoop-pudl?style=flat&logo=readthedocs\n :target: https://catalystcoop-pudl.readthedocs.io/en/latest/\n :alt: Read the Docs Build Status\n\n.. image:: https://img.shields.io/pypi/v/catalystcoop.pudl\n :target: https://pypi.org/project/catalystcoop.pudl/\n :alt: PyPI Latest Version\n\n.. image:: https://img.shields.io/conda/vn/conda-forge/catalystcoop.pudl\n :target: https://anaconda.org/conda-forge/catalystcoop.pudl\n :alt: conda-forge Version\n\n.. image:: https://img.shields.io/pypi/pyversions/catalystcoop.pudl\n :target: https://pypi.org/project/catalystcoop.pudl/\n :alt: Supported Python Versions\n\n.. image:: https://img.shields.io/badge/code%20style-black-000000.svg\n :target: https://github.com/psf/black\n :alt: Any color you want, so long as it's black.\n\n.. image:: https://results.pre-commit.ci/badge/github/catalyst-cooperative/pudl/main.svg\n :target: https://results.pre-commit.ci/latest/github/catalyst-cooperative/pudl/main\n :alt: pre-commit CI\n\n.. image:: https://zenodo.org/badge/80646423.svg\n :target: https://zenodo.org/badge/latestdoi/80646423\n :alt: Zenodo DOI\n\n.. image:: https://img.shields.io/badge/calend.ly-officehours-darkgreen\n :target: https://calend.ly/catalyst-cooperative/pudl-office-hours\n :alt: Schedule a 1-on-1 chat with us about PUDL.\n\nWhat is PUDL?\n-------------\n\nThe `PUDL `__ Project is an open source data processing\npipeline that makes US energy data easier to access and use programmatically.\n\nHundreds of gigabytes of valuable data are published by US government agencies, but\nit's often difficult to work with. PUDL takes the original spreadsheets, CSV files,\nand databases and turns them into a unified resource. This allows users to spend more\ntime on novel analysis and less time on data preparation.\n\nWhat data is available?\n-----------------------\n\nPUDL currently integrates data from:\n\n* `EIA Form 860 `__: 2001-2022\n* `EIA Form 860m `__: 2023-06\n* `EIA Form 861 `__: 2001-2022\n* `EIA Form 923 `__: 2001-2022\n* `EPA Continuous Emissions Monitoring System (CEMS) `__: 1995-2022\n* `FERC Form 1 `__: 1994-2021\n* `FERC Form 714 `__: 2006-2020\n* `US Census Demographic Profile 1 Geodatabase `__: 2010\n\nThanks to support from the `Alfred P. 
Sloan Foundation Energy & Environment\nProgram `__, from\n2021 to 2024, we will be integrating the following data as well:\n\n* `EIA Form 176 `__\n (The Annual Report of Natural Gas Supply and Disposition)\n* `FERC Electric Quarterly Reports (EQR) `__\n* `FERC Form 2 `__\n (Annual Report of Major Natural Gas Companies)\n* `PHMSA Natural Gas Annual Report `__\n* Machine Readable Specifications of State Clean Energy Standards\n\nWho is PUDL for?\n----------------\n\nThe project is focused on serving researchers, activists, journalists, policy makers,\nand small businesses that might not otherwise be able to afford access to this data\nfrom commercial sources and who may not have the time or expertise to do all the\ndata processing themselves from scratch.\n\nWe want to make this data accessible and easy to work with for as wide an audience as\npossible: anyone from grassroots youth climate organizers working with Google\nSheets to university researchers with access to scalable cloud computing\nresources and everyone in between!\n\nHow do I access the data?\n-------------------------\n\nThere are several ways to access PUDL outputs. For more details, you'll want\nto check out `the complete documentation\n`__, but here's a quick overview:\n\nDatasette\n^^^^^^^^^\nWe publish a lot of the data on https://data.catalyst.coop using a tool called\n`Datasette `__ that lets us wrap our databases in a relatively\nfriendly web interface. You can browse and query the data, make simple charts and\nmaps, and download portions of the data as CSV files or JSON so you can work with it\nlocally. For a quick introduction to what you can do with the Datasette interface,\ncheck out `this 17 minute video `__.\n\nThis access mode is good for casual data explorers or anyone who just wants to grab a\nsmall subset of the data. It also lets you share links to a particular subset of the\ndata and provides a REST API for querying the data from other applications (a small\nexample appears at the end of this section).\n\nDocker + Jupyter\n^^^^^^^^^^^^^^^^\nWant access to all the published data in bulk? If you're familiar with Python\nand `Jupyter Notebooks `__ and are willing to install Docker, you\ncan:\n\n* `Download a PUDL data release `__ from\n CERN's `Zenodo `__ archiving service.\n* `Install Docker `__\n* Run the archived image using ``docker-compose up``\n* Access the data via the resulting Jupyter Notebook server running on your machine.\n\nIf you'd rather work with the PUDL `SQLite `__ Databases and\n`Apache Parquet `__ files directly, they are accessible\nwithin the same Zenodo archive.\n\nThe `PUDL Examples repository `__\nhas more detailed instructions on how to work with the Zenodo data archive and Docker\nimage.\n\nThe PUDL Development Environment\n^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\nIf you're more familiar with the Python data science stack and are comfortable working\nwith git, ``conda`` environments, and the Unix command line, then you can set up the\nwhole PUDL Development Environment on your own computer. This will allow you to run the\nfull data processing pipeline yourself, tweak the underlying source code, and (we hope!)\nmake contributions back to the project.\n\nThis is by far the most involved way to access the data and isn't recommended for\nmost users. You should check out the `Development section `__\nof the main `PUDL documentation `__ for more\ndetails.
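\n\nAs a small illustration of the Datasette REST API mentioned above, something like the\nfollowing should work against any Datasette instance (a sketch only; the database and\ntable names here are hypothetical, so browse https://data.catalyst.coop to find real ones):\n\n.. code-block:: python\n\n    import requests\n\n    # Datasette exposes every table as JSON at /<database>/<table>.json.\n    # 'pudl' and 'plants' are illustrative names, not a confirmed schema.\n    url = 'https://data.catalyst.coop/pudl/plants.json'\n    resp = requests.get(url, params={'_size': 5, '_shape': 'objects'}, timeout=30)\n    resp.raise_for_status()\n    for row in resp.json()['rows']:\n        print(row)\n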
Nightly Data Builds\n^^^^^^^^^^^^^^^^^^^\nIf you are less concerned with reproducibility and want the freshest possible data,\nwe automatically upload the outputs of our nightly builds to public S3 storage buckets\nas part of the `AWS Open Data Registry\n`__. This data is based on\nthe `dev branch `__ of PUDL, and\nis updated most weekday mornings. It is also the data used to populate Datasette.\n\nThe nightly build outputs can be accessed using the AWS CLI, the S3 API, or downloaded\ndirectly via the web. See `Accessing Nightly Builds `__\nfor links to the individual SQLite, JSON, and Apache Parquet outputs.\n\nContributing to PUDL\n--------------------\nFind PUDL useful? Want to help make it better? There are lots of ways to help!\n\n* First, be sure to read our `Code of Conduct `__.\n* You can file a bug report, make a feature request, or ask questions in the\n `Github issue tracker `__.\n* Feel free to fork the project and make a pull request with new code, better\n documentation, or example notebooks.\n* `Make a recurring financial contribution `__\n to support our work liberating public energy data.\n* `Hire us to do some custom analysis `__ and\n allow us to integrate the resulting code into PUDL.\n* For more information, check out the Contributing section of the\n `PUDL Documentation `__\n\nLicensing\n---------\n\nIn general, our code, data, and other work are permissively licensed for use by anybody,\nfor any purpose, so long as you give us credit for the work we've done.\n\n* The PUDL software is released under\n `the MIT License `__.\n* The PUDL data and documentation are published under the\n `Creative Commons Attribution License v4.0 `__\n (CC-BY-4.0).\n\nContact Us\n----------\n\n* For bug reports, feature requests, and other software or data issues, please make a\n `GitHub Issue `__.\n* For more general support, questions, or other conversations around the project\n that might be of interest to others, check out the\n `GitHub Discussions `__\n* If you'd like to get occasional updates about the project,\n `sign up for our email list `__.\n* Want to schedule a time to chat with us one-on-one about your PUDL use case, ideas\n for improvement, or get some personalized support? Join us for\n `Office Hours `__\n* Follow us on Twitter: `@CatalystCoop `__\n* More info on our website: https://catalyst.coop\n* To hire us to provide customized data\n extraction and analysis, you can email the maintainers:\n `hello@catalyst.coop `__\n\nAbout Catalyst Cooperative\n--------------------------\n\n`Catalyst Cooperative `__ is a small group of data wranglers\nand policy wonks organized as a worker-owned cooperative consultancy. Our goal is a\nmore just, livable, and sustainable world. We integrate public data and perform\ncustom analyses to inform public policy\n(`Hire us! `__). 
Our focus is primarily on\nmitigating climate change and improving electric utility regulation in the United\nStates.\n""",",https://zenodo.org/badge/latestdoi/80646423\n,https://zenodo.org/record/3653158,https://zenodo.org","2017/02/01, 17:45:40",2457,MIT,3572,9111,"2023/10/24, 16:02:48",372,1215,2527,927,1,28,0.0,0.6127734877734878,"2023/04/06, 21:29:01",v2022.11.30.post1,0,32,true,"open_collective,github,custom",false,false,,,https://github.com/catalyst-cooperative,https://catalyst.coop,"Boulder, CO",,,https://avatars.githubusercontent.com/u/25487946?v=4,,, Global Power Plant Database,"A comprehensive, global and open source database of power plants.",wri,https://github.com/wri/global-power-plant-database.git,github,"open-data,open-datasets,energy,energy-data,free-datasets,climate,climate-data",Energy System Data Access,"2022/01/26, 16:44:20",290,0,32,false,HTML,World Resources Institute,wri,"HTML,Jupyter Notebook,Python",,"b'# Global Power Plant Database\n\n**This project is not currently maintained by WRI. There are no planned updates as of this time (early 2022). The last version of this database is version 1.3.0. If we learn of active forks or maintained versions of the code and database, we will attempt to provide links in the future.**\n\n\n\nThis project aims to build [an open database of all the power plants in the world](http://www.wri.org/publication/global-power-plant-database). It is the result of a large collaboration involving many partners, coordinated by the [World Resources Institute](https://www.wri.org/) and [Google Earth Outreach](https://www.google.com/earth/outreach/index.html). If you would like to get involved, please [email the team](mailto:powerexplorer@wri.org) or fork the repo and code! To learn more about how to contribute to this repository, read the [`CONTRIBUTING`](https://github.com/wri/global-power-plant-database/blob/master/.github/CONTRIBUTING.md) document.\n\nThe latest database release (v1.3.0) is available in CSV format [here](http://datasets.wri.org/dataset/globalpowerplantdatabase) under a [Creative Commons-Attribution 4.0 (CC BY 4.0) license](https://creativecommons.org/licenses/by/4.0/). A bleeding-edge version is in the [`output_database`](https://github.com/wri/global-power-plant-database/blob/master/output_database) directory of this repo.\n\nAll Python source code is available under an [MIT license](https://opensource.org/licenses/MIT).\n\nThis work is made possible and supported by [Google](https://environment.google/), among other organizations.\n\n## Database description\n\nThe Global Power Plant Database is built in several steps.\n\n* The first step involves gathering and processing country-level data. In some cases, these data are read automatically from official government websites; the code to implement this is in the `build_databases` directory.\n* In other cases, we gather country-level data manually. These data are saved in `raw_source_files/WRI` and processed with the `build_database_WRI.py` script in the `build_database` directory. \n* The second step is to integrate data from different sources, particularly for geolocation of power plants and annual total electricity generation. Some of these different sources are multi-national databases. 
For this step, we rely on offline work to match records; the concordance table mapping record IDs across databases is saved in resources/master_plant_concordance.csv.\n\nThroughout the processing, we represent power plants as instances of the `PowerPlant` class, defined in `powerplant_database.py`. The final database is in a flat-file CSV format.\n\n## Key attributes of the database\n\nThe database includes the following indicators:\n\n* Plant name\n* Fuel type(s)\n* Generation capacity\n* Country\n* Ownership\n* Latitude/longitude of plant\n* Data source & URL\n* Data source year\n* Annual generation\n\nWe will expand this list in the future as we extend the database.\n\n### Fuel Type Aggregation\n\nWe define the ""Fuel Type"" attribute of our database based on common fuel categories. In order to parse the different fuel types used in our various data sources, we map fuel name synonyms to our fuel categories [here](https://github.com/wri/global-power-plant-database/blob/master/resources/fuel_type_thesaurus). We plan to expand the database in the future to report more disaggregated fuel types.\n\n## Combining Multiple Data Sources\n\nA major challenge for this project is that data come from a variety of sources, including government ministries, utility companies, equipment manufacturers, crowd-sourced databases, financial reports, and more. The reliability of the data varies, and in many cases there are conflicting values for the same attribute of the same power plant from different data sources. To handle this, we match and de-duplicate records and then develop rules for which data sources to report for each indicator. We provide a clear [data lineage](https://en.wikipedia.org/wiki/Data_lineage) for each datum in the database. We plan to ultimately allow users to choose alternative rules for which data sources to draw on.\n\nTo the maximum extent possible, we read data automatically from trusted sources, and integrate it into the database. Our current strategy involves these steps:\n\n* Automate data collection from machine-readable national data sources where possible. \n* For countries where machine-readable data are not available, gather and curate power plant data by hand, and then match these power plants to plants in other databases, including GEO and CARMA (see below) to determine their geolocation.\n* For a limited number of countries with small total power-generation capacity, use data directly from Global Energy Observatory (GEO). \n\nA table describing the data source(s) for each country is listed below.\n\nFinally, we are examining ways to automatically incorporate data from the following supra-national data sources:\n\n* [Clean Development Mechanism](https://cdm.unfccc.int/Projects/projsearch.html)\n* [ENTSO-E](https://www.entsoe.eu/Pages/default.aspx)\n* [E-PRTR](http://prtr.ec.europa.eu/)\n* [CARMA](http://carma.org/)\n* [Arab Union of Electricity](http://www.auptde.org/Default.aspx?lang=en)\n* [IAEA PRIS](https://www.iaea.org/pris/)\n* [Industry About](http://www.industryabout.com/energy)\n* [Think Geo Energy](http://www.thinkgeoenergy.com/map/)\n* [WEC Global Hydropower Database](https://www.worldenergy.org/data/resources/resource/hydropower/)\n\n## ID numbers\n\nWe assign a unique ID to each line of data that we read from each source. In some cases, these represent plant-level data, while in other cases they represent unit-level data. In the case of unit-level data, we commonly perform an aggregation step and assign a new, unique plant-level ID to the result. 
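\n\nFor illustration, a unit-to-plant aggregation step of this kind could look roughly like the following in pandas (a sketch with assumed column names, not the project\'s actual code):\n\n```python\nimport pandas as pd\n\n# Hypothetical unit-level records; the column names are illustrative only.\nunits = pd.DataFrame({\n    \'plant_name\': [\'Alpha\', \'Alpha\', \'Beta\'],\n    \'country\': [\'USA\', \'USA\', \'USA\'],\n    \'capacity_mw\': [120.0, 80.0, 50.0],\n})\n\n# Sum unit capacities per plant, then assign a new plant-level ID.\nplants = units.groupby([\'country\', \'plant_name\'], as_index=False)[\'capacity_mw\'].sum()\nplants[\'plant_id\'] = [\'USA%07d\' % i for i in range(1, len(plants) + 1)]\nprint(plants)\n```\n\n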
For plants drawn from machine-readable national data sources, the reference ID is formed by a three-letter country code [ISO 3166-1 alpha-3](http://unstats.un.org/unsd/tradekb/Knowledgebase/Country-Code) and a seven-digit number. For plants drawn from other databases (including the manually-maintained dataset by WRI), the reference ID is formed by a variable-size prefix code and a seven-digit number.\n\n## Power plant matching\n\nIn many cases, our data sources do not include power plant geolocation information. To address this, we attempt to match these plants with the GEO and CARMA databases, in order to use that geolocation data. We use an [elastic search matching technique](https://github.com/cbdavis/enipedia-search) developed by Enipedia to perform the matching based on plant name, country, capacity, and location, with confirmed matches stored in a concordance file. This matching procedure is complex, and the algorithm we employ can sometimes wrongly match two power plants or fail to match two entries for the same power plant. We are investigating using the Duke framework for matching, which allows us to do the matching offline.\n\n\n## Build Instructions\nThe build system is as follows:\n\n- Create a virtual environment with Python 2.7 and the third-party packages in `requirements.txt`\n- `cd` into `build_databases/`\n- run each `build_database_*.py` file for each data source or processing method that changed (when making a database update)\n- run `build_global_power_plant_database.py`, which reads from the pickled store/sub-databases.\n- `cd` into `../utils`\n- run `database_country_summary.py` to produce the summary table\n- `cd` into `../output_database`\n- copy `global_power_plant_database.csv` to the [`gppd-ai4earth-api`](https://github.com/wri/gppd-ai4earth-api) repository. 
Look at the `Makefile` in that repo to understand where it should be located\n- build new generation estimations as needed based on plant changes and updates compared to the stored and calculated values - this is not automatic, but there are some helper scripts for making the estimates\n- run the `make_gppd.py` script in `gppd-ai4earth-api` to construct a new version of the database with the full estimation data\n- copy the new merged dataset back to this repo, increment the `DATABASE_VERSION` file, commit, etc...\n\n \n## Related repos\n\n* [Open Power Systems Data](https://github.com/Open-Power-System-Data/)\n* [Public Utility Data Liberation Project](https://github.com/catalyst-cooperative/pudl)\n* [Global Energy Observatory](https://github.com/hariharshankar/pygeo)\n* [GeoNuclearData](https://github.com/cristianst85/GeoNuclearData)\n* [Duke](https://github.com/larsga/Duke)\n'",,"2018/04/09, 22:00:17",2024,MIT,0,51,"2018/12/04, 22:22:07",26,0,4,0,1785,3,0,0.21999999999999997,"2018/06/11, 04:34:54",v1.1.0,0,2,false,,false,true,,,https://github.com/wri,https://wri.org,"Washington, DC",,,https://avatars.githubusercontent.com/u/4615146?v=4,,, entsoe-py,A Python client for the ENTSO-E API (European Network of Transmission System Operators for Electricity).,EnergieID,https://github.com/EnergieID/entsoe-py.git,github,,Energy System Data Access,"2023/10/18, 13:27:52",328,91,109,true,Python,EnergieID cvba-so,EnergieID,Python,,"b'# entsoe-py\nPython client for the ENTSO-E API (European Network of Transmission System Operators for Electricity)\n\nDocumentation of the API can be found at https://transparency.entsoe.eu/content/static_content/Static%20content/web%20api/Guide.html\n\n## Installation\n`python3 -m pip install entsoe-py`\n\n## Usage\nThe package comes with two clients:\n- [`EntsoeRawClient`](#EntsoeRawClient): Returns data in its raw format, usually XML or a ZIP file containing XML files\n- [`EntsoePandasClient`](#EntsoePandasClient): Returns data parsed as a Pandas Series or DataFrame\n### EntsoeRawClient\n```python\nfrom entsoe import EntsoeRawClient\nimport pandas as pd\n\nclient = EntsoeRawClient(api_key=)\n\nstart = pd.Timestamp(\'20171201\', tz=\'Europe/Brussels\')\nend = pd.Timestamp(\'20180101\', tz=\'Europe/Brussels\')\ncountry_code = \'BE\' # Belgium\ncountry_code_from = \'FR\' # France\ncountry_code_to = \'DE_LU\' # Germany-Luxembourg\ntype_marketagreement_type = \'A01\'\ncontract_marketagreement_type = \'A01\'\n\n# methods that return XML\nclient.query_day_ahead_prices(country_code, start, end)\nclient.query_net_position(country_code, start, end, dayahead=True)\nclient.query_load(country_code, start, end)\nclient.query_load_forecast(country_code, start, end)\nclient.query_wind_and_solar_forecast(country_code, start, end, psr_type=None)\nclient.query_intraday_wind_and_solar_forecast(country_code, start, end, psr_type=None)\nclient.query_generation_forecast(country_code, start, end)\nclient.query_generation(country_code, start, end, psr_type=None)\nclient.query_generation_per_plant(country_code, start, end, psr_type=None)\nclient.query_installed_generation_capacity(country_code, start, end, psr_type=None)\nclient.query_installed_generation_capacity_per_unit(country_code, start, end, psr_type=None)\nclient.query_crossborder_flows(country_code_from, country_code_to, start, end)\nclient.query_scheduled_exchanges(country_code_from, country_code_to, start, end, dayahead=False)\nclient.query_net_transfer_capacity_dayahead(country_code_from, country_code_to, start, 
end)\nclient.query_net_transfer_capacity_weekahead(country_code_from, country_code_to, start, end)\nclient.query_net_transfer_capacity_monthahead(country_code_from, country_code_to, start, end)\nclient.query_net_transfer_capacity_yearahead(country_code_from, country_code_to, start, end)\nclient.query_intraday_offered_capacity(country_code_from, country_code_to, start, end, implicit=True)\nclient.query_offered_capacity(country_code_from, country_code_to, start, end, contract_marketagreement_type, implicit=True)\nclient.query_contracted_reserve_prices(country_code, start, end, type_marketagreement_type, psr_type=None)\nclient.query_contracted_reserve_amount(country_code, start, end, type_marketagreement_type, psr_type=None)\nclient.query_procured_balancing_capacity(country_code, start, end, process_type, type_marketagreement_type=None)\nclient.query_aggregate_water_reservoirs_and_hydro_storage(country_code, start, end)\n\n# methods that return ZIP (bytes)\nclient.query_imbalance_prices(country_code, start, end, psr_type=None)\nclient.query_unavailability_of_generation_units(country_code, start, end, docstatus=None, periodstartupdate=None, periodendupdate=None)\nclient.query_unavailability_of_production_units(country_code, start, end, docstatus=None, periodstartupdate=None, periodendupdate=None)\nclient.query_unavailability_transmission(country_code_from, country_code_to, start, end, docstatus=None, periodstartupdate=None, periodendupdate=None)\nclient.query_withdrawn_unavailability_of_generation_units(country_code, start, end)\n```\n#### Dump result to file\n```python\nxml_string = client.query_day_ahead_prices(country_code, start, end)\nwith open(\'outfile.xml\', \'w\') as f:\n f.write(xml_string)\n\nzip_bytes = client.query_unavailability_of_generation_units(country_code, start, end)\nwith open(\'outfile.zip\', \'wb\') as f:\n f.write(zip_bytes)\n```\n#### Making another request\nIf the API call you want is not in the list, you can look up the parameters yourself in the API documentation\n```python\nparams = {\n \'documentType\': \'A44\',\n \'in_Domain\': \'10YBE----------2\',\n \'out_Domain\': \'10YBE----------2\'\n}\nresponse = client._base_request(params=params, start=start, end=end)\nprint(response.text)\n```\n\n### EntsoePandasClient\nThe Pandas Client works similarly to the Raw Client, with extras:\n- Time periods that span more than 1 year are automatically dealt with\n- Requests of large numbers of files are split over multiple API calls\n\nPlease note that this client requires you to explicitly set `start=` and `end=` parameters, which must be timezone-aware pandas Timestamps.\nIf not, it will throw an exception\n```python\nfrom entsoe import EntsoePandasClient\nimport pandas as pd\n\nclient = EntsoePandasClient(api_key=)\n\nstart = pd.Timestamp(\'20171201\', tz=\'Europe/Brussels\')\nend = pd.Timestamp(\'20180101\', tz=\'Europe/Brussels\')\ncountry_code = \'BE\' # Belgium\ncountry_code_from = \'FR\' # France\ncountry_code_to = \'DE_LU\' # Germany-Luxembourg\ntype_marketagreement_type = \'A01\'\ncontract_marketagreement_type = \'A01\'\n\n# methods that return Pandas Series\nclient.query_day_ahead_prices(country_code, start=start,end=end)\nclient.query_net_position(country_code, start=start, end=end, dayahead=True)\nclient.query_crossborder_flows(country_code_from, country_code_to, start, end)\nclient.query_scheduled_exchanges(country_code_from, country_code_to, start, end, dayahead=False)\nclient.query_net_transfer_capacity_dayahead(country_code_from, country_code_to, start, 
end)\nclient.query_net_transfer_capacity_weekahead(country_code_from, country_code_to, start, end)\nclient.query_net_transfer_capacity_monthahead(country_code_from, country_code_to, start, end)\nclient.query_net_transfer_capacity_yearahead(country_code_from, country_code_to, start, end)\nclient.query_intraday_offered_capacity(country_code_from, country_code_to, start, end,implicit=True)\nclient.query_offered_capacity(country_code_from, country_code_to, start, end, contract_marketagreement_type, implicit=True)\nclient.query_aggregate_water_reservoirs_and_hydro_storage(country_code, start, end)\n\n# methods that return Pandas DataFrames\nclient.query_load(country_code, start=start,end=end)\nclient.query_load_forecast(country_code, start=start,end=end)\nclient.query_load_and_forecast(country_code, start=start, end=end)\nclient.query_generation_forecast(country_code, start=start,end=end)\nclient.query_wind_and_solar_forecast(country_code, start=start,end=end, psr_type=None)\nclient.query_intraday_wind_and_solar_forecast(country_code, start, end, psr_type=None)\nclient.query_generation(country_code, start=start,end=end, psr_type=None)\nclient.query_generation_per_plant(country_code, start=start,end=end, psr_type=None)\nclient.query_installed_generation_capacity(country_code, start=start,end=end, psr_type=None)\nclient.query_installed_generation_capacity_per_unit(country_code, start=start,end=end, psr_type=None)\nclient.query_imbalance_prices(country_code, start=start,end=end, psr_type=None)\nclient.query_contracted_reserve_prices(country_code, start, end, type_marketagreement_type, psr_type=None)\nclient.query_contracted_reserve_amount(country_code, start, end, type_marketagreement_type, psr_type=None)\nclient.query_unavailability_of_generation_units(country_code, start=start,end=end, docstatus=None, periodstartupdate=None, periodendupdate=None)\nclient.query_unavailability_of_production_units(country_code, start, end, docstatus=None, periodstartupdate=None, periodendupdate=None)\nclient.query_unavailability_transmission(country_code_from, country_code_to, start, end, docstatus=None, periodstartupdate=None, periodendupdate=None)\nclient.query_withdrawn_unavailability_of_generation_units(country_code, start, end)\nclient.query_physical_crossborder_allborders(country_code, start, end, export)\nclient.query_generation_import(country_code, start, end)\nclient.query_procured_balancing_capacity(country_code, start, end, process_type, type_marketagreement_type=None)\n\n```\n#### Dump result to file\nSee a list of all IO methods at https://pandas.pydata.org/pandas-docs/stable/io.html\n```python\nts = client.query_day_ahead_prices(country_code, start=start, end=end)\nts.to_csv(\'outfile.csv\')\n```\n\n### Mappings\nThese lists are always evolving, so let us know if something\'s inaccurate!\n\nAll mappings can be found in ```mappings.py``` [here](https://github.com/EnergieID/entsoe-py/blob/master/entsoe/mappings.py)\n\nFor bidding zones that have changed (split/merged), some codes are only valid for certain periods. 
The below table shows these cases.\n\n| | 2015 | 2016 | 2017 | 2018 | 2019 | 2020 | 2021 |\n| -- | -- | -- | -- | -- | -- | -- | -- |\n| DE_AT_LU | yes | yes | yes | yes | No Value | No Value | No Value |\n| DE | No Value | No Value | No Value | No Value | No Value | No Value | No Value |\n| DE_LU | No Value | No Value | No Value | yes | yes | yes | yes |\n| AT | No Value | No Value | No Value | yes | yes | yes | yes |'",,"2017/07/12, 13:17:39",2296,MIT,27,310,"2023/10/25, 08:10:01",33,79,245,59,0,14,0.2,0.6,"2023/07/20, 16:00:47",V0.5.10,0,37,false,,false,false,"copperwire/fbmc-quality-examples,SanteriLindfors/MQTTE-DJANGO,philipsidu/TSF_electricity_spot_price,copperwire/fbmc-quality,MainakRepositor/Activation-Infopedia,Kasra-Aliyon/Deepforkit,pekkon/EnergiaDataApp,NOWUM/open-energy-data-server,silvanmurre/electricity-demand-fc,shubhvjain/codegreen-prediction-tool,rojekpatryk/dashboard,Anttko/electric-spot-downloader,FredrikBakken/openai_functions,gertix22/entsoe,hubsti/Stromzeiten_datacollector,dodofit/electricity-price-forecasting_day-ahead-DNN,magnesyljuasen/streamlit-internside,LunarZephyr/ECS171_Project,apoxnen/haproxy,kerstinforster/electricity-price-forecasting,pleberer/spotON-sdk,RamyaRagu2506/MedSales_report_analysis,lottalassila/EC3,conflict-investigations/entso-e-electricity-ukraine,MarvinLiebisch/forecasting-electricity-prices,terop/env-logger,DVRed/gpe_pratice,StefaE/PVForecast,RomainBes/DataVizChallenge,shahbazbaig38/IFTTT_webhook_for_entsoe_e,jgd0x/entsoe,traunio/sahkonhinta,corneel27/day-ahead,rafzul/entsoe-pipelines,IsacLorentz/ID2223Proj-Streamlit-2,antonbn/ID2223Project,RubenVanEldik/entsoe-downloader,benoitputzeys/naive_model_uncertainty,Olarm/power-price-mqtt,amcaw/energizor,VanillaLattA/SpainEnergyConsumptionData,mireiaplalis/power_prediction,niklasw/spotprices-test,Daniel-FD/Data-Science,Enpal/interview-python,jyrkih/eu-energy-live-bot,chrklemm/electricity_mix_traffic_light,flyingstick22/electricity-saver-I,Causb1A/developer-to-cloud-workshop,alangibson/homeassistant-entsoe-transparency,otsaloma/electricity-price-chart,Mansi501/Renewcastapp,pekkon/EnergiaBotti,philippsommer27/opendc-eesr,bwatremetz/data_api,RubenVanEldik/PEIROCOM,NOWUM/entso-monitor,rseng/rsepedia-analysis,skortmann/fbmc-eur,vimaleshraja/calldrop,Olarm/power-price-dashboard,mendelvantriet/python-entsoefacade,kranjcevblaz/avto_net_alerts,spisokgit/ml_jupyter_user,spisokgit/tf2,mohcinemadkour/renewcastapp,gianlucamancini7/case_study_2_alpiq,stoicol/entsoe-dexter-api,DrafProject/elmada,muellermax/electricity,mam619/SEF_Thesis,fboerman/dash-cwe-prices,nemes1s/dexterenergy-assignment,Unstopable-Team/backend,mxkus/webpage-api,ren4kast/REN4KAST,drumilji-tech/Forecast_Energy,ewoken/electricity-analysis,CRCDApp/REN4KAST,obedsims/forecast_app,Frenz86/renew_streamlit,derevirn/renewcast,kolasniwash/entsoe-etl-pipeline,sandeshbhatjr/energy-prediction,kolasniwash/electrical-forecast-prediction-service,muralikrishnat29/smart-grids,FrancisDinh/Smart-Energy-Project,Jonathan56-archives/online_CSC_optimization,PyPSA/powerplantmatching,BONSAMURAIS/bentso,samber/powEUr",,https://github.com/EnergieID,http://www.energieid.be,"Antwerp, Belgium",,,https://avatars.githubusercontent.com/u/13538977?v=4,,, time series,Contains scripts that compile time series data of the European power system.,Open-Power-System-Data,https://github.com/Open-Power-System-Data/time_series.git,github,,Energy System Data Access,"2020/10/06, 17:56:21",110,0,15,false,Python,Open Power System Data,Open-Power-System-Data,"Python,Jupyter 
Notebook",http://data.open-power-system-data.org/time_series/,b'# Time series data package\n\nThis repository contains scripts that compile time series data of the European power system.\n\nSee the [main Jupyter notebook](main.ipynb) for further details.\n\n## Preparation\n\nTo work on the Notebooks locally, see the installation instructions in the\n[wiki](https://github.com/Open-Power-System-Data/common/wiki/Tutorial-to-run-OPSD-scripts).\n\n## License\n\nThis notebook, as well as all other documents in this repository, is published under the [MIT License](LICENSE.md).\n',,"2015/10/30, 14:58:41",2917,MIT,0,212,"2020/10/05, 17:53:27",5,14,25,0,1115,0,0.0,0.36190476190476195,"2020/10/06, 18:04:49",2020-10-06,0,8,false,,false,false,,,https://github.com/Open-Power-System-Data,http://open-power-system-data.org/,,,,https://avatars.githubusercontent.com/u/14346309?v=4,,, renewable power plant,"Contains scripts to create lists of renewable power plants in Germany, Denmark, France and Poland, and daily time series of cumulative installed capacity per energy source type for Germany.",Open-Power-System-Data,https://github.com/Open-Power-System-Data/renewable_power_plants.git,github,,Energy System Data Access,"2020/08/23, 05:32:33",59,0,9,false,Jupyter Notebook,Open Power System Data,Open-Power-System-Data,"Jupyter Notebook,Python,HTML",http://data.open-power-system-data.org/renewable_power_plants/,"b'# Open Power System Data: Renewable Energy Power Plants\n\nThis repository contains scripts to create lists of renewable power plants in Germany, Denmark, France and Poland, and daily time series of cumulative installed capacity per energy source type for Germany.\n\nSee [main.ipynb](main.ipynb) for further details.\n\n## License\n\nThe scripts in this data package are published under the [MIT license](LICENSE.md).\n\nThe scripts in this data package are developed by the project [Open Power System Data (OPSD)](http://open-power-system-data.org).\n'",,"2015/10/26, 17:16:36",2921,MIT,0,216,"2020/10/15, 16:18:32",8,20,23,0,1105,0,0.0,0.47887323943661975,"2020/08/26, 05:37:56",2020-08-25,0,7,false,,false,false,,,https://github.com/Open-Power-System-Data,http://open-power-system-data.org/,,,,https://avatars.githubusercontent.com/u/14346309?v=4,,, conventional power plants,Contains data on conventional power plants for Germany as well as other selected European countries. The data include individual power plants with their technical characteristics.,Open-Power-System-Data,https://github.com/Open-Power-System-Data/conventional_power_plants.git,github,,Energy System Data Access,"2020/10/01, 15:21:31",48,0,3,false,Jupyter Notebook,Open Power System Data,Open-Power-System-Data,"Jupyter Notebook,Python",http://data.open-power-system-data.org/conventional_power_plants/,"b'# 1. About Open Power System Data \nThis notebook is part of the project [Open Power System Data](http://open-power-system-data.org). Open Power System Data develops a platform for free and open data for electricity system modeling. We collect, check, process, document, and provide data that are publicly available but currently inconvenient to use. \nMore info on Open Power System Data:\n- [Information on the project on our website](http://open-power-system-data.org)\n- [Data and metadata on our data platform](http://data.open-power-system-data.org)\n- [Data processing scripts on our GitHub page](https://github.com/Open-Power-System-Data)\n\n# 2. About Jupyter Notebooks and GitHub\nThis file is a [Jupyter Notebook](http://jupyter.org/). 
A Jupyter Notebook is a file that combines executable programming code with visualizations and comments in markdown format, allowing for intuitive documentation of the code. We use Jupyter Notebooks for combined coding and documentation. We use Python 3 as the programming language. All Notebooks are stored on [GitHub](https://github.com/), a platform for software development, and are publicly available. More information on our IT concept can be found [here](http://open-power-system-data.org/it). See also our [step-by-step manual](http://open-power-system-data.org/step-by-step) on how to use the data platform.\n\n# 3. About this Data Package\nWe provide data in different chunks, or [datapackages](http://frictionlessdata.io/data-packages/). The one you are looking at is on [conventional power plants](http://data.open-power-system-data.org/convetional_power_plants/).\n\nThis notebook processes data on conventional power plants for Germany as well as other European countries. The data includes individual power plants with their technical characteristics. These include installed capacity, main energy source, type of technology, CHP capability, and geographical information.\n\n\n# 4. Data sources\nWe use publicly available data sources, which include national statistical offices, ministries, regulatory authorities, transmission system operators, and other associations. All data sources are listed in the datapackage.json file, including their links.\n\n## 4.1 Germany\n- ""BNetzA Kraftwerksliste"" [Download](http://www.bundesnetzagentur.de/DE/Sachgebiete/ElektrizitaetundGas/Unternehmen_Institutionen/Versorgungssicherheit/Erzeugungskapazitaeten/Kraftwerksliste/kraftwerksliste-node.html)\n- ""Umweltbundesamt Datenbank Kraftwerke in Deutschland"" [Download](http://www.umweltbundesamt.de/dokument/datenbank-kraftwerke-in-deutschland)\n- For efficiency estimation: Jonas Egerer, Clemens Gerbaulet, Richard Ihlenburg, Friedrich Kunz, Benjamin Reinhard, Christian von Hirschhausen, Alexander Weber, Jens Weibezahn (2014): **Electricity Sector Data for Policy-Relevant Modeling: Data Documentation and Applications to the German and European Electricity Markets**. DIW Data Documentation 72, Berlin, Germany. [Download](https://www.diw.de/documents/publikationen/73/diw_01.c.440963.de/diw_datadoc_2014-072.pdf)\n- Other sources, e.g. for efficiency and georeferencing, are provided in the file\n\n## 4.2 Selected European countries\n- **AT**: **Verbund AG** (Austrian utility), Our hydro power plants [Download](https://www.verbund.com/en-at/about-verbund/power-plants/our-power-plants). 
Source links for conventional units are given in the column ""source"" of the power plant list\n- **BE**: **ELIA** (Belgian transmission system operator), Generation facilities [Download](http://publications.elia.be/upload/ProductionParkOverview.xls?TS=20120416193815)\n- **CH**: **BFE** (Swiss Federal Office of Energy), Statistik der Wasserkraftanlagen der Schweiz [Download](http://www.bfe.admin.ch/php/modules/publikationen/stream.php?extlang=de&name=de_416798061.zip&endung=Statistik%20der%20Wasserkraftanlagen%20der%20Schweiz) and Nuclear energy [Download](http://www.bfe.admin.ch/themen/00511/index.html?lang=en)\n- **CZ**: **CEPS** (Czech transmission system operator), Available capacity [Download](http://www.ceps.cz/_layouts/15/Ceps/_Pages/GraphData.aspx?mode=xlsx&from=1/1/2010%2012:00:00%20AM&to=12/31/2015%2011:59:59%20PM&hasinterval=False&sol=9&lang=ENG&ver=YF&)\n- **DK**: **Energinet.dk** (Danish transmission system operator), Energinet.dk\'s assumptions for analysis [Download](https://www.energinet.dk/SiteCollectionDocuments/Engelske%20dokumenter/El/Energinet%20dk%27s%20assumptions%20for%20analysis%202014-2035,%20September%202014.xlsm)\n- **ES**: **SEDE** (Ministry of Industry, Energy and Tourism), Productores (in Conjunto de Datos) [Download](http://www6.mityc.es/aplicaciones/electra/ElectraExp.csv.zip)\n- **FI**: **Energy Authority**, Power plant register [Download](http://www.energiavirasto.fi/documents/10191/0/Energiaviraston+Voimalaitosrekisteri+010117.xlsx)\n- **FR**: **RTE** (French transmission system operator), List of production units of more than 100 MW [Download](http://clients.rte-france.com/servlets/CodesEICServlet)\n- **IT**: **TERNA** (Italian transmission network operator), Installed generation capacity 2014 [Download](http://download.terna.it/terna/0000/0216/16.XLSX)\n- **NL**: **TenneT** (Dutch transmission system operator), Available capacity in 2016 [Download](http://www.tennet.org/english/operational_management/export_data.aspx)\n- **NO**: **Nordpool** (Power exchange), Power plant units (installed generation capacity larger than 100 MW) [Download](http://www.nordpoolspot.com/globalassets/download-center/tso/generation-capacity_norway_valid-from-2-december-2013_larger-than-100mw.pdf) (Link is not working as data has been deleted by Nordpool)\n- **PL**: **GPI** (Exchange Information Platform by the Polish Power Exchange), List of generation units [Download](http://gpi.tge.pl/en/wykaz-jednostek?p_p_id=powerunits_WAR_powerunitsportlet&p_p_lifecycle=2&p_p_state=normal&p_p_mode=view&p_p_cacheability=cacheLevelPage&p_p_col_id=column-1&p_p_col_count=1)\n- **SE**: **Nordpool** (Power exchange), Installed generation capacity larger than 100 MW per unit in Sweden (17.12.2014) [Download](http://www.nordpoolspot.com/globalassets/download-center/tso/generation-capacity_sweden_larger-than-100mw-per-unit_17122014.pdf) (Link is not working as data has been deleted by Nordpool)\n- **SI**: **Several sources**, Source links of data are given in the column ""source"" of the power plant list\n- **SK**: **SEAS** (Slovakian utility), Power plants [Download](https://www.seas.sk/power-plants)\n- **UK**: **Statistical Office**, Power stations in the United Kingdom, May 2015 (DUKES 5.10) [Download](https://www.gov.uk/government/uploads/system/uploads/attachment_data/file/446457/dukes5_10.xls)\n\n\nBesides the listed publicly available sources, additional but decentralized information on individual power plants is available (e.g. on operators\' webpages). We therefore aim to continuously extend the lists with this information.
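\n\nAs a rough illustration of how the standardized notation defined in section 5 below can be used, installed capacity per energy source could be summarized along these lines (a sketch; the file name and column names are assumptions, so check datapackage.json for the real ones):\n\n```python\nimport pandas as pd\n\n# Assumed local copy of the datapackage CSV; the actual file and\n# column names may differ - check datapackage.json.\nplants = pd.read_csv(\'conventional_power_plants_DE.csv\')\n\n# \'energy_source\' is assumed to hold the standardized names from\n# section 5.1, e.g. \'coal\', \'natural_gas\', \'lignite\', \'uranium\'.\ncapacity_mw = plants.groupby(\'energy_source\')[\'capacity\'].sum()\nprint(capacity_mw.sort_values(ascending=False))\n```\n\n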
# 5. Model Output\nThe following standardized notation is used in this datapackage for energy sources and technology types:\n\n## 5.1 Energy sources\nOriginal Name in BNetzA-List|model output|Full name\n:-:|:-:|:-:\nSteinkohle|coal|Hard coal\nErdgas|natural_gas|Natural Gas\nBraunkohle|lignite|Lignite\nKernenergie|uranium|Uranium\nPumpspeicher|pumped_storage|Pumped Storage (Water)\nBiomasse|biomass|Biomass\nMineral\xc3\xb6lprodukte|oil|Mineral oil products \nLaufwasser|hydro|Water (run of river)\nSonstige Energietr\xc3\xa4ger (nicht erneuerbar) |other_non_renewable|Other Fuels (not renewable)\nAbfall|waste|Waste\nSpeicherwasser (ohne Pumpspeicher)|reservoir|Reservoir\nUnbekannter Energietr\xc3\xa4ger (nicht erneuerbar)|unknown_non_renewable|Unknown (not renewable)\nMehrere Energietr\xc3\xa4ger (nicht erneuerbar)|multiple_non_renewable|Multiple (not renewable)\nDeponiegas|gas_landfill|Landfill gas\nWindenergie (Onshore-Anlage)|wind_onshore|Onshore wind\nWindenergie (Offshore-Anlage)|wind_offshore|Offshore Wind\nSolare Strahlungsenergie|solar|Solar energy\nKl\xc3\xa4rgas|gas_sewage|Sewage Gas\nGeothermie|geothermal|Geothermal energy\nGrubengas|gas_mine|Mine Gas\n\n## 5.2 Energy source structure\n![OPSD-Tree](http://open-power-system-data.org/2016-10-25-opsd_tree.svg)\n\n## 5.3 CHP type\nCHP Type abbreviation|Full name\n:-:|:-:\nCHP|Combined heat and power\nIPP|Industrial power plant\n\n\n# 6. License\nThis notebook, as well as all other documents in this repository, is published under the [MIT License](LICENSE.md).\n\n'",,"2015/11/17, 21:28:13",2899,MIT,0,233,"2020/10/01, 14:37:13",4,8,13,0,1119,0,0.0,0.5196078431372548,"2020/10/01, 15:22:50",2020-10-01,0,10,false,,false,false,,,https://github.com/Open-Power-System-Data,http://open-power-system-data.org/,,,,https://avatars.githubusercontent.com/u/14346309?v=4,,, open-MaStR,Download and process German energy data from BNetzA database Marktstammdatenregister.,OpenEnergyPlatform,https://github.com/OpenEnergyPlatform/open-MaStR.git,github,"open-energy-family,oep,marktstammdatenregister,python,energy,energy-data",Energy System Data Access,"2023/08/07, 11:26:48",63,0,22,true,Python,Open Energy Family,OpenEnergyPlatform,Python,https://open-mastr.readthedocs.io/en/latest/,"b'\n.. image:: https://user-images.githubusercontent.com/14353512/199113556-4b53660f-c628-4138-8d01-3719595ecda1.png\n :align: left\n :target: https://github.com/OpenEnergyPlatform/open-MaStR\n :alt: MaStR logo\n\n==========\nopen-mastr\n==========\n\n**A package that provides an interface for downloading and processing the Marktstammdatenregister (MaStR)**\n\n.. list-table::\n :widths: 10, 50\n\n * - License\n - |badge_license|\n * - Documentation\n - |badge_rtd|\n * - Tests\n - |badge_ci|\n * - Publication\n - |badge_pypi| |badge_zenodo|\n * - Development\n - |badge_issue_open| |badge_issue_closes| |badge_pr_open| |badge_pr_closes|\n * - Community\n - |badge_contributing| |badge_contributors| |badge_repo_counts| |PyPI download month|\n \n\n.. contents::\n :depth: 2\n :local:\n :backlinks: top\n\nIntroduction\n============\n\nThe `Marktstammdatenregister (MaStR) `_ is a German register \nprovided by the German Federal Network Agency (Bundesnetzagentur / BNetzA) that keeps track of all power and gas units located in Germany.\n\nThe MaStR data can be\n \n#. browsed and filtered `online `_\n#. taken from `daily provided dumps `_\n#. 
be accessed via the `web service `_\n\n| The Python package ``open-mastr`` provides an interface for accessing the data. \n| It contains methods to download and parse the XML files (bulk) and the SOAP web service (API).\n| In this repository we are developing methods to analyze, validate and enrich the data.\n| We want to collect and compile post-processing scripts to improve data quality.\n\n\nDocumentation\n=============\n\n| The documentation is in `sphinx `_ reStructuredText format in the ``doc`` sub-folder of the repository.\n| Find the `documentation `_ hosted on ReadTheDocs.\n\n| The original API documentation can be found on the `Webhilfe des Marktstammdatenregisters `_.\n| If you are interested in browsing the MaStR online, check out the privately hosted `Marktstammdatenregister.dev `_.\n| Also see the `bundesAPI/Marktstammdaten-API `_ for another implementation.\n\n\nInstallation\n============\n\n| It is recommended to use a virtual Python environment, for example `conda `_ or `virtualenv `_.\n| The package is intended to be used with ``Python >=3.8``.\n\n\nPyPI\n----\n\nInstall the current release of ``open-mastr`` with ``pip``:\n\n.. code-block:: bash\n\n pip install open-mastr\n\nGitHub\n------\n\nFor development, clone this repository manually.\n\n.. code-block:: bash\n\n git clone git@github.com:OpenEnergyPlatform/open-MaStR.git\n cd open-MaStR\n\nSet up the conda environment with\n\n.. code-block:: bash\n\n conda env create -f environment.yml\n\nInstall the package with\n\n.. code-block:: bash\n\n python setup.py install\n\n\nExamples of Usage\n==================\nIf you want to see your project in this list, write an \n`Issue `_ or add\nchanges in a `Pull Request `_.\n\n- `PV- und Windfl\xc3\xa4chenrechner `_\n- `Wasserstoffatlas `_\n- `EE-Status App `_\n\n\n\nCollaboration\n=============\n| Everyone is invited to develop this repository with good intentions.\n| Please follow the workflow described in the `CONTRIBUTING.md `_.\n\n\nLicense and Citation\n====================\n\nSoftware\n--------\n\n| This repository is licensed under the **GNU Affero General Public License v3.0 or later** (AGPL-3.0-or-later).\n| See `LICENSE.md `_ for rights and obligations.\n| See the *Cite this repository* function or `CITATION.cff `_ for citation of this repository.\n| Copyright: `open-MaStR `_ \xc2\xa9 `Reiner Lemoine Institut `_ \xc2\xa9 `fortiss `_ | `AGPL-3.0-or-later `_\n\nData\n----\n| The data has the license **Datenlizenz Deutschland \xe2\x80\x93 Namensnennung \xe2\x80\x93 Version 2.0** (DL-DE-BY-2.0)\n| Copyright: `Marktstammdatenregister `_ - \xc2\xa9 Bundesnetzagentur f\xc3\xbcr Elektrizit\xc3\xa4t, Gas, Telekommunikation, Post und Eisenbahnen | `DL-DE-BY-2.0 `_\n\n\n.. |badge_license| image:: https://img.shields.io/github/license/OpenEnergyPlatform/open-MaStR\n :target: LICENSE.txt\n :alt: License\n\n.. |badge_rtd| image:: https://readthedocs.org/projects/open-mastr/badge/?style=flat\n :target: https://open-mastr.readthedocs.io/en/latest/\n :alt: Read the Docs\n\n.. |badge_ci| image:: https://github.com/OpenEnergyPlatform/open-MaStR/workflows/CI/badge.svg\n :target: https://github.com/OpenEnergyPlatform/open-MaStR/actions?query=workflow%3ACI\n :alt: GitHub Actions\n\n.. |badge_pypi| image:: https://img.shields.io/pypi/v/open-mastr.svg\n :target: https://pypi.org/project/open-mastr/\n :alt: PyPI\n\n.. |badge_zenodo| image:: https://zenodo.org/badge/DOI/10.5281/zenodo.6807426.svg\n :target: https://doi.org/10.5281/zenodo.6807426\n :alt: zenodo\n\n.. 
|badge_issue_open| image:: https://img.shields.io/github/issues-raw/OpenEnergyPlatform/open-MaStR\n :alt: open issues\n\n.. |badge_issue_closes| image:: https://img.shields.io/github/issues-closed-raw/OpenEnergyPlatform/open-MaStR\n :alt: closed issues\n\n.. |badge_pr_open| image:: https://img.shields.io/github/issues-pr-raw/OpenEnergyPlatform/open-MaStR\n :alt: open pull requests\n\n.. |badge_pr_closes| image:: https://img.shields.io/github/issues-pr-closed-raw/OpenEnergyPlatform/open-MaStR\n :alt: closed pull requests\n\n.. |badge_contributing| image:: https://img.shields.io/badge/contributions-welcome-brightgreen.svg?style=flat\n :alt: contributions\n\n.. |badge_contributors| image:: https://img.shields.io/badge/all_contributors-1-orange.svg?style=flat-square\n :alt: contributors\n\n.. |badge_repo_counts| image:: http://hits.dwyl.com/OpenEnergyPlatform/open-MaStR.svg\n :alt: counter\n \n.. |PyPI download month| image:: https://img.shields.io/pypi/dm/open-mastr?label=PyPi%20Downloads\n :target: https://pypi.org/project/open-mastr/\n'",",https://doi.org/10.5281/zenodo.6807426\n","2019/08/21, 14:11:11",1526,AGPL-3.0,250,1870,"2023/08/07, 11:55:42",24,193,447,118,79,7,0.9,0.6559040590405905,"2023/08/07, 11:37:18",v0.13.2,0,16,false,,false,true,,,https://github.com/OpenEnergyPlatform,https://github.com/OpenEnergyPlatform/organisation/blob/master/README.md,"Magdeburg, Germany",,,https://avatars.githubusercontent.com/u/37101913?v=4,,, powerplantmatching,"A toolset for cleaning, standardizing and combining multiple power plant databases.",FRESNA,https://github.com/PyPSA/powerplantmatching.git,github,,Energy System Data Access,"2023/07/25, 12:00:39",121,25,33,true,Python,PyPSA,PyPSA,Python,https://powerplantmatching.readthedocs.io/en/latest/,"b'# powerplantmatching\n\n [![pypi](https://img.shields.io/pypi/v/powerplantmatching.svg)](https://pypi.org/project/powerplantmatching/) [![conda](https://img.shields.io/conda/vn/conda-forge/powerplantmatching.svg)](https://anaconda.org/conda-forge/powerplantmatching) ![pythonversion](https://img.shields.io/pypi/pyversions/powerplantmatching) ![LICENSE](https://img.shields.io/pypi/l/powerplantmatching.svg) [![DOI](https://zenodo.org/badge/DOI/10.5281/zenodo.3358985.svg)](https://zenodo.org/record/3358985#.XUReFPxS_MU) [![doc](https://readthedocs.org/projects/powerplantmatching/badge/?version=latest)](https://powerplantmatching.readthedocs.io/en/latest/) [![pre-commit.ci status](https://results.pre-commit.ci/badge/github/FRESNA/powerplantmatching/master.svg)](https://results.pre-commit.ci/latest/github/FRESNA/powerplantmatching/master)\n[![Stack Exchange questions](https://img.shields.io/stackexchange/stackoverflow/t/pypsa)](https://stackoverflow.com/questions/tagged/pypsa)\n\nA toolset for cleaning, standardizing and combining multiple power\nplant databases.\n\nThis package provides ready-to-use power plant data for the European power system.\nStarting from openly available power plant datasets, the package cleans, standardizes\nand merges the input data to create a new combined dataset, which includes all the important information.\nThe package makes it easy to update the combined data as soon as new input datasets are released.\n\nYou can directly [download the current version of the data](https://downgit.github.io/#/home?url=https://github.com/PyPSA/powerplantmatching/blob/master/powerplants.csv) as a CSV file.\n\nInitially, powerplantmatching was developed by the\n[Renewable Energy Group](https://fias.uni-frankfurt.de/physics/schramm/complex-renewable-energy-networks/)\nat [FIAS](https://fias.uni-frankfurt.de/) and is now maintained by the [Digital Transformation in Energy Systems Group](https://tub-ensys.github.io/) at the Technical University of Berlin to build power plant data\ninputs to [PyPSA](http://www.pypsa.org/)-based models for carrying\nout simulations.
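\n\nAs a programmatic alternative to downloading the CSV, the package\'s documented entry point can be used along these lines (a sketch; see the powerplantmatching documentation for the authoritative API):\n\n```python\nimport powerplantmatching as pm\n\n# Fetch the pre-built combined dataset (downloads the published\n# powerplants.csv instead of re-running the full matching process).\ndf = pm.powerplants(from_url=True)\n\n# The result is a pandas DataFrame of standardized plant records.\nprint(df[[\'Name\', \'Country\', \'Fueltype\', \'Capacity\']].head())\n```\n\n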
Group](https://fias.uni-frankfurt.de/physics/schramm/complex-renewable-energy-networks/)\nat [FIAS](https://fias.uni-frankfurt.de/) and is now maintained by the [Digital Transformation in Energy Systems Group](https://tub-ensys.github.io/) at the Technical University of Berlin to build power plant data\ninputs to [PyPSA](http://www.pypsa.org/)-based models for carrying\nout simulations.\n\n### Main Features\n\n- clean and standardize power plant data sets\n- aggregate power plant units which belong to the same plant\n- compare and combine different data sets\n- create lookups and give statistical insight into power plant data quality\n- provide cleaned data from different sources\n- choose between gross/net capacity\n- provide an already merged data set of multiple different open data sources\n- scale the power plant capacities in order to match country-specific statistics about total power plant capacities\n- visualize the data\n- export your powerplant data to a [PyPSA](https://github.com/PyPSA/PyPSA)-based model\n\n## Installation\n\nUsing pip\n\n```bash\npip install powerplantmatching\n```\n\nor conda\n\n```bash\nconda install -c conda-forge powerplantmatching\n```\n\n# Contributing and Support\nWe strongly welcome anyone interested in contributing to this project. If you have any ideas, suggestions or encounter problems, feel invited to file issues or make pull requests on GitHub.\n- In case of code-related **questions**, please post on [stack overflow](https://stackoverflow.com/questions/tagged/pypsa).\n- For non-programming related and more general questions please refer to the [PyPSA mailing list](https://groups.google.com/group/pypsa).\n- To **discuss** with other PyPSA & powerplantmatching users, organise projects, share news, and get in touch with the community you can use the [discord server](https://discord.gg/JTdvaEBb).\n- For **bugs and feature requests**, please use the [powerplantmatching Github Issues page](https://github.com/PyPSA/powerplantmatching/issues).\n\n\n## Citing powerplantmatching\n\nIf you want to cite powerplantmatching, use the following paper\n\n- F. Gotzens, H. Heinrichs, J. H\xc3\xb6rsch, and F. Hofmann, [Performing energy modelling exercises in a transparent way - The issue of data quality in power plant databases](https://www.sciencedirect.com/science/article/pii/S2211467X18301056?dgcid=author), Energy Strategy Reviews, vol. 23, pp. 1\xe2\x80\x9312, Jan. 
2019.\n\nwith bibtex\n\n```\n@article{gotzens_performing_2019,\n title = {Performing energy modelling exercises in a transparent way - {The} issue of data quality in power plant databases},\n volume = {23},\n issn = {2211467X},\n url = {https://linkinghub.elsevier.com/retrieve/pii/S2211467X18301056},\n doi = {10.1016/j.esr.2018.11.004},\n language = {en},\n urldate = {2018-12-03},\n journal = {Energy Strategy Reviews},\n author = {Gotzens, Fabian and Heinrichs, Heidi and H\xc3\xb6rsch, Jonas and Hofmann, Fabian},\n month = jan,\n year = {2019},\n pages = {1--12}\n}\n```\n\nand/or the current release stored on Zenodo with a release-specific DOI:\n\n[![DOI](https://zenodo.org/badge/DOI/10.5281/zenodo.3358985.svg)](https://zenodo.org/record/3358985#.XURat99fjRY)\n\n## Licence\n\nCopyright 2018-2022 Fabian Hofmann (EnSys TU Berlin), Fabian Gotzens (FZ J\xc3\xbclich), Jonas H\xc3\xb6rsch (KIT),\n\npowerplantmatching is released as free software under the\n[GPLv3](http://www.gnu.org/licenses/gpl-3.0.en.html), see\n[LICENSE](LICENSE) for further information.\n'",",https://zenodo.org/record/3358985#.XUReFPxS_MU,https://zenodo.org/record/3358985#.XURat99fjRY","2016/08/15, 16:36:25",2627,GPL-3.0,132,839,"2023/07/25, 12:00:40",24,58,112,47,92,5,0.5,0.37662337662337664,"2023/05/30, 09:24:13",v0.5.7,0,17,false,,false,false,"cshearer1977/EnergySystemModelling,maribjorn/pypsa-earth-bo,zhizhiyuyu/pypsa-tide,centrefornetzero/pypsa-fes,LukasFrankenQ/pypsa-fes,BishtArvind/pypsa-meets-earth,alfiyaks/pypsa-kz-my,fneum/data-science-for-esm,drifter089/pypsaLit-container,clairehalloran/GeoHeat-GB,2050plus/2050plus,yerbol-akhmetov/pypsa-kaz,koen-vg/kjernekraft-noreg,drifter089/pypsa-workflow,pz-max/pypsa-earth-test,Pikugcp22/https-github.com-pypsa-meets-earth-pypsa-earth,ekatef/pypsa-earth,pypsa-meets-africa/pypsa-africa-archived,carlosfv92/pypsa-earth-BO,mikelperez01/Mike,Eugenio2192/autumnopen,pypsa-meets-earth/pypsa-earth,openego/eGon-data,maxnutz/res_aut,PyPSA/pypsa-eur",,https://github.com/PyPSA,www.pypsa.org,,,,https://avatars.githubusercontent.com/u/32890768?v=4,,, GeoNuclearData,Database with information about Nuclear Power Plants worldwide.,cristianst85,https://github.com/cristianst85/GeoNuclearData.git,github,"database,nuclear,reactor,npps,power,plant,powerplant,geolocation,open-datasets,open-data,energy",Energy System Data Access,"2023/08/28, 07:24:49",42,0,10,true,,,,,,"b""# GeoNuclearData\n\nThis repository contains a database with information about Nuclear Power Plants worldwide.\n\n### Version\n\nDatabase version: **0.17.0** (**2020/04/19**) \nDataset last updated in version: **0.17.16** (**2023/08/28**)\n\n### Changelog\n\nSee [CHANGELOG](https://github.com/cristianst85/GeoNuclearData/blob/master/CHANGELOG.md) file for details.\n\n### Data formats\n\nData is available in multiple formats (MySQL, JSON, and CSV).\n\n### Quick database summary (by reactor status)\n\n|**Status** |**Count**|**Diff**|\n|-------------------------|--------:|-------:|\n|Unknown | 1| |\n|Planned | 80| -2|\n|Under Construction | 60| |\n|Operational | 411| -11|\n|Suspended Operation | 27| +10|\n|Shutdown | 209| +5|\n|Suspended Construction | 6| |\n|Cancelled Construction | 4| |\n|Never Commissioned | 2| |\n|Decommissioning Completed| 3| 0|\n|**Total** | **803**| +2|\n\n## Tables structure\n\n### countries\n- `code` - ISO 3166-1 alpha-2 country code\n- `name` - country name in English\n \n### nuclear_power_plant_status_type\n- `id` - numeric id key\n- `type` - nuclear power plant status\n\n### nuclear_reactor_type\n- `id` - 
numeric id key\n- `type` - nuclear reactor type acronym\n- `description` - nuclear reactor type long form\n \n### nuclear_power_plants\n- `id` - numeric id key\n- `name` - nuclear power plant name\n- `latitude` - latitude in decimal format\n- `longitude` - longitude in decimal format\n- `country_code` - ISO 3166-1 alpha-2 country code\n- `status_id` - nuclear power plant status id\n- `reactor_type_id` - nuclear reactor type id\n- `reactor_model` - nuclear reactor model\n- `construction_start_at` - date when nuclear power plant construction was started\n- `operational_from` - date when nuclear power plant became operational (also known as commercial operation date)\n- `operational_to` - date when nuclear power plant was shut down (also known as permanent shutdown date)\n- `capacity` - nuclear power plant capacity (design net capacity in MWe)\n- `source` - source of the information\n- `last_updated_at` - date and time when information was last updated\n- `iaea_id` - IAEA PRIS reactor id\n \n## Notes\nData in the `source`, `last_updated_at`, and `iaea_id` columns is for maintenance purposes only and is not recommended for use.\n\n### Known Inconsistencies (GeoNuclearData vs. WNA vs. IAEA PRIS)\n\n_Operational Reactors_\n\n- there are currently 411 reactors listed as being operational in the GeoNuclearData database, including China Experimental Fast Reactor (CEFR);\n- the PRIS database lists only 410 reactors as being operational (China Experimental Fast Reactor (CEFR) is not listed anymore) while the WNA's database has a slightly distinct category named _Operable Reactors_ that probably also includes reactors in Suspended Operation.\n\n_Reactors Under Construction_\n\n- The number of reactors listed as being under construction in the GeoNuclearData database matches neither the number of reactors under construction in WNA's database nor the number in the PRIS database:\n - [BALTIC-1](https://pris.iaea.org/PRIS/CountryStatistics/ReactorDetails.aspx?current=968) reactor (Russia) is shown as under construction in PRIS, but it was removed from the WNA's database in November 2000 ([see here](https://www.world-nuclear.org/information-library/country-profiles/countries-o-s/russia-nuclear-power.aspx));\n- In addition to the list of reactors under construction from the PRIS database, the GeoNuclearData database also contains the following reactors (as in WNA's database): \n - CAP1400-1 ([Shidaowan 1](https://www.world-nuclear.org/reactor/default.aspx/SHIDAOWAN-1));\n - CAP1400-2 ([Shidaowan 2](https://www.world-nuclear.org/reactor/default.aspx/SHIDAOWAN-2));\n - Xiapu-2 ([Xiapu 2](https://www.world-nuclear.org/reactor/default.aspx/XIAPU-2)).\n\n_Naming_\n\n- The GeoNuclearData database usually follows the naming conventions from PRIS for reactors, but in WNA's database some nuclear reactors have completely different names. 
Some examples are:\n - [BELARUSIAN-1](https://pris.iaea.org/PRIS/CountryStatistics/ReactorDetails.aspx?current=1056) and [2](https://pris.iaea.org/PRIS/CountryStatistics/ReactorDetails.aspx?current=1061) reactors (Belarus) are named in WNA's database [Ostrovets 1](https://www.world-nuclear.org/reactor/default.aspx/BELARUSIAN-1) and [2](https://www.world-nuclear.org/reactor/default.aspx/BELARUSIAN-1), respectively;\n - [CAP1400-1](https://pris.iaea.org/PRIS/CountryStatistics/ReactorDetails.aspx?current=1085) and [2](https://pris.iaea.org/PRIS/CountryStatistics/ReactorDetails.aspx?current=1086) reactors (China) (they were unlisted from PRIS as of 30 April 2021) are named in WNA's database [Shidaowan 1](https://www.world-nuclear.org/reactor/default.aspx/SHIDAOWAN-1) and [2](https://www.world-nuclear.org/reactor/default.aspx/SHIDAOWAN-2), respectively;\n - [KANUPP-1](https://pris.iaea.org/PRIS/CountryStatistics/ReactorDetails.aspx?current=427), [2](https://pris.iaea.org/PRIS/CountryStatistics/ReactorDetails.aspx?current=1067), and [3](https://pris.iaea.org/PRIS/CountryStatistics/ReactorDetails.aspx?current=1068) reactors (Pakistan) are named in WNA's database [Karachi 1](https://www.world-nuclear.org/reactor/default.aspx/KANUPP-1), [2](https://www.world-nuclear.org/reactor/default.aspx/KANUPP-2), and [3](https://www.world-nuclear.org/reactor/default.aspx/KANUPP-3).\n\n_Coordinates_\n\n- the coordinates found in GeoNuclearData database are approximate;\n- the original source for the existing coordinates was an old Google Fusion Table dating back to March 2012 (probably sourced from WNA). Since the inception of this database some of the coordinates were manually corrected using Wikipedia/GeoHack and/or satellite imagery from Google Maps;\n- the operational [Akademik Lomonosov-1](https://www.world-nuclear.org/reactor/default.aspx/AKADEMIK%20LOMONOSOV-1) and [2](https://www.world-nuclear.org/reactor/default.aspx/AKADEMIK%20LOMONOSOV-2) reactors (Russia) and planned Bohai Shipyard FNPP and Jiaodong Shipyard FNPP (China) reactors are floating nuclear power plants thus the coordinates from this database may not necessarily indicate their current location.\n\n## Usage\n SELECT npp.`id`\n , npp.`name`\n , npp.latitude\n , npp.longitude\n , c.`name` 'country'\n , s.type 'status'\n , r.type 'reactor_type'\n , npp.reactor_model\n , npp.construction_start_at\n , npp.operational_from\n , npp.operational_to\n FROM nuclear_power_plants npp\n INNER JOIN countries AS c ON npp.country_code = c.`code`\n INNER JOIN nuclear_power_plant_status_type AS s ON npp.status_id = s.id\n LEFT OUTER JOIN nuclear_reactor_type AS r ON npp.reactor_type_id = r.id\n ORDER BY npp.`id`\n\n## License\nThe GeoNuclearData database is made available under the Open Database License whose full text can be found at https://opendatacommons.org/licenses/odbl/1.0/.\n \nAny rights in individual contents of the database are licensed under the Database Contents License whose full text can be found at https://opendatacommons.org/licenses/dbcl/1.0/.\n \n## Sources\nCountries data is taken from [Unicode Common Locale Data Repository](https://github.com/unicode-org/cldr-json/blob/main/cldr-json/cldr-localenames-full/main/en/territories.json). \nNuclear power plants data is taken from [WNA](http://www.world-nuclear.org/information-library/facts-and-figures/reactor-database.aspx)/[IAEA](https://www.iaea.org/pris/), but some other sources are used, e.g., [Wikipedia](https://en.wikipedia.org/wiki/List_of_nuclear_power_stations). 
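\n\nFor users of the CSV distribution, the joins in the Usage query above can be reproduced in pandas. A minimal sketch; the file names and locations below are assumptions for illustration, not paths documented by this repository:\n\n```python\nimport pandas as pd\n\n# Hypothetical paths -- adjust to wherever the CSV exports live in your checkout.\nplants = pd.read_csv('data/csv/nuclear_power_plants.csv')\ncountries = pd.read_csv('data/csv/countries.csv')\nstatuses = pd.read_csv('data/csv/nuclear_power_plant_status_type.csv')\n\n# Mirror the SQL joins above: attach country names and status labels to each plant.\ndf = (\n plants\n .merge(countries, left_on='country_code', right_on='code', suffixes=('', '_country'))\n .merge(statuses, left_on='status_id', right_on='id', suffixes=('', '_status'))\n)\nprint(df[['name', 'latitude', 'longitude', 'name_country', 'type']].head())\n```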
\n\nWNA data is also taken from the\xc2\xa0IAEA PRIS reactor database, with more recent information added if available ([see here](https://www.world-nuclear.org/information-library/facts-and-figures/reactor-database-guide.aspx)).\n\n""",,"2015/05/24, 16:14:16",3076,CUSTOM,3,82,"2021/12/10, 19:47:36",0,2,5,0,684,0,0.0,0.03749999999999998,"2023/08/28, 06:47:04",0.17.16,0,2,false,,false,false,,,,,,,,,,, pyEIA,An Energy Information Administration API Python client for researchers who need data.,thomastu,https://github.com/thomastu/pyEIA.git,github,"eia,python,eia-api,energy-data,energy",Energy System Data Access,"2022/01/26, 23:22:08",25,0,2,false,Python,,,Python,,"b'# Configuration\n\nYou can configure pyeia with your API key in any of the following ways:\n\n- Declare `EIA_APIKEY=""myapikey""` in a `.env` file\n- Set an environment variable explicitly, `export EIA_APIKEY=""myapikey""`\n- If you are using dynaconf, you can include an `[eia]` environment in your `settings.toml` file (or any other configured settings files).\n\n```toml\n[eia]\napikey = ""my apikey""\n```\n\n# About\n\nThe U.S. Energy Information Administration provides an API for access to commonly used datasets for policy makers\nand researchers. See the [EIA API documentation](http://www.eia.gov/opendata/commands.cfm) for more information.\n\nWarning: This package is a work in progress! A substantial update is expected in January 2020, with a published version on PyPI. The author took a break from this domain area, but is returning! Hoping to have a similar or identical R interface/API as well, but that may be much farther down the pipeline.\n\n# Basic Usage\n\nSince this package is still under active development, it has not been pushed to PyPI. That said, I believe it is\nstable and reliable enough for immediate use. You can install this via git+https, i.e.:\n\n```bash\npip install git+https://github.com/thomastu/pyEIA.git\npip show pyeia\n```\n\nThere are two main strategies for interacting with this package.\n\n## EIA Browser\n\n[EIA provides a web-based data browser](http://www.eia.gov/opendata/qb.cfm).\nSince most interactions for discovering data via the API will likely occur\nthrough this browser, this motivated a programmatic version.\n\nThe general strategy is to traverse a datapath or multiple datapaths, and\nwhen you arrive at the desired node, you flag one or more dataseries. \nThere is also the ability to add in meta information as you flag a dataseries.\n\nRunning the `export` method on a Browser object will make a request to the\n`Series` API to collect data you\'ve flagged.\n\nThere\'s currently a separate class for each dataset which is mostly syntactic.\nIn the future, there will likely be built-in methods and visualizations that are\nspecific to the datasets described at the root category level from EIA.\n\n1. [Browser Quickstart to Collect AEO data](examples/aeo_quickstart.py)\n2. [Computing Marginal Values for AEO data](examples/aeo_marginal_values.py)\n\n## Direct API usage\n\nEach endpoint has a corresponding class in `eia.api`. Every class has a `query` method that makes a call to EIA.\nThe returned result is always the response body. Metadata about the request is dropped. 
The `Series` and `Geoset`\nclasses have a special `query_df` method since their response bodies have a naturally tabular schema.\n\n\n```python\nfrom eia import api\n\nmyapikey = """" # Register here : www.eia.gov/opendata/register.cfm\n\n# Make a call to the Category endpoint\ncategory = api.Category(myapikey)\ncategory.query()\n\n# Make a call to the Series endpoint\nseries = api.Series(\n ""AEO.2015.REF2015.CNSM_DEU_TOTD_NA_DEU_NA_ENC_QBTU.A"",\n ""AEO.2015.REF2015.CNSM_ENU_ALLS_NA_DFO_DELV_ENC_QBTU.A"",\n api_key=myapikey,\n)\nseries.to_dict() # Export data from its json response\n# Make the same query, but get results as a pandas DataFrame\nseries.to_dataframe()\n\n# Make a call to the Geoset endpoint\ngeoset = api.Geoset(""ELEC.GEN.ALL-99.A"", ""USA-CA"", ""USA-FL"", ""USA-MN"", api_key=myapikey)\ngeoset.to_dict()\ngeoset.query_df()\n\n# Make a call to the SeriesCategory endpoint\n\nseriescategory = api.SeriesCategory(\n ""AEO.2015.REF2015.CNSM_DEU_TOTD_NA_DEU_NA_ENC_QBTU.A"",\n ""AEO.2015.REF2015.CNSM_ENU_ALLS_NA_DFO_DELV_ENC_QBTU.A"",\n api_key=myapikey,\n)\nseriescategory.to_dict()\n\n# Make a call to the Updates endpoint\n\nupdates = api.Updates(\n category_id=2102358,\n rows=0,\n firstrow=""currently_not_used"",\n deep=False,\n api_key=myapikey,\n)\nupdates.to_dict()\n\n# Make a call to the Search endpoint\nsearch = api.Search(api_key=myapikey)\n\n# Make a series_id search\nsearch.to_dict(""series_id"", ""EMI_CO2_COMM_NA_CL_NA_NA_MILLMETNCO2.A"", ""all"")\n\n# Make a name search\nsearch.to_dict(""name"", ""crude oil"", 25)\n\n# Make a date-range search\n# Dates can be input as a list/tuple of any valid pd.to_datetime argument\nsearch.to_dict(""last_updated"", [""Dec. 1st, 2014"", ""06/14/2015 3:45PM""])\n```\n'",,"2015/05/11, 01:43:04",3089,CUSTOM,0,63,"2022/01/26, 23:21:21",5,6,20,0,636,3,0.0,0.0625,"2022/01/26, 23:24:02",0.1.6,0,2,false,,false,false,,,,,,,,,,, EIA,An R package wrapping the US Energy Information Administration open data API.,ropensci,https://github.com/ropensci/eia.git,github,"eia-api,r-package,energy-data,energy-information-administration,open-data,eia,cran",Energy System Data Access,"2021/02/20, 16:18:22",42,0,6,true,R,rOpenSci,ropensci,"R,CSS",https://docs.ropensci.org/eia,"b'\n\n\n# eia \n\n**Author:** [Matthew Leonawicz](https://github.com/leonawicz)\n\n\n
**License:** [MIT](https://opensource.org/licenses/MIT)
\n\n[![Project Status: Active \xe2\x80\x93 The project has reached a stable, usable\nstate and is being actively\ndeveloped.](https://www.repostatus.org/badges/latest/active.svg)](https://www.repostatus.org/#active)\n[![Travis build\nstatus](https://travis-ci.org/ropensci/eia.svg?branch=master)](https://travis-ci.org/ropensci/eia)\n[![AppVeyor Build\nStatus](https://ci.appveyor.com/api/projects/status/github/ropensci/eia?branch=master&svg=true)](https://ci.appveyor.com/project/leonawicz/eia)\n[![Codecov test\ncoverage](https://codecov.io/gh/ropensci/eia/branch/master/graph/badge.svg)](https://codecov.io/gh/ropensci/eia?branch=master)\n\n[![](https://badges.ropensci.org/342_status.svg)](https://github.com/ropensci/software-review/issues/342)\n[![CRAN\nstatus](https://www.r-pkg.org/badges/version/eia)](https://cran.r-project.org/package=eia)\n[![CRAN RStudio mirror\ndownloads](https://cranlogs.r-pkg.org/badges/eia)](https://cran.r-project.org/package=eia)\n[![Github\nStars](https://img.shields.io/github/stars/ropensci/eia.svg?style=social&label=Github)](https://github.com/ropensci/eia/)\n\nThe `eia` package provides API access to data from the US [Energy\nInformation Administration](https://www.eia.gov/) (EIA).\n\nPulling data from the US Energy Information Administration (EIA) API\nrequires a registered API key. A key can be obtained at no cost\n[here](https://www.eia.gov/opendata/register.php). A valid email and\nagreement to the API Terms of Service is required to obtain a key.\n\n`eia` includes functions for searching EIA API data categories and\nimporting time series and geoset time series datasets. Datasets returned\nby these functions are provided in a tidy format or alternatively in\nmore raw form. It also offers helper functions for working with EIA API\ndate strings and time formats and for inspecting different summaries of\nseries metadata. The package also provides control over API key storage\nand caching of API request results.\n\n## Installation\n\nInstall the CRAN release of `eia` with\n\n``` r\ninstall.packages(""eia"")\n```\n\nTo install the development version from GitHub use\n\n``` r\n# install.packages(""remotes"")\nremotes::install_github(""ropensci/eia"")\n```\n\n## Example\n\nTo begin, store your API key. You can place it somewhere like your\n`.Renviron` file and never have to do anything with the key when you use\nthe package. You can set it with `eia_set_key` in your R session. You\ncan always pass it explicitly to the `key` argument of a function.\n\n``` r\nlibrary(eia)\n\n# not run\neia_set_key(""yourkey"") # set API key if not already set globally\n```\n\nLoad a time series of net electricity generation.\n\n``` r\nid <- ""ELEC.GEN.ALL-AK-99.A""\n(d <- eia_series(id, n = 10))\n#> # A tibble: 1 x 13\n#> series_id name units f description copyright source iso3166 geography start end updated data \n#> \n#> 1 ELEC.GEN.ALL-~ Net generation : all fuel~ thousand me~ A ""Summation of all fuels used for e~ None EIA, U.S. Energy Inf~ USA-AK USA-AK 2001 2019 2020-10-27T1~ # A tibble: 10 x 3\n#> value date year\n#> \n#> 1 6071. 2019-01-01 2019\n#> 2 6247. 2018-01-01 2018\n#> 3 6497. 2017-01-01 2017\n#> 4 6335. 2016-01-01 2016\n#> 5 6285. 2015-01-01 2015\n#> 6 6043. 2014-01-01 2014\n#> 7 6497. 2013-01-01 2013\n#> 8 6946. 2012-01-01 2012\n#> 9 6871. 2011-01-01 2011\n#> 10 6760. 
2010-01-01 2010\n\nlibrary(ggplot2)\nlibrary(tidyr)\nunnest(d, cols = data) %>% ggplot(aes(factor(year), value)) + geom_col() + \n labs(x = ""Year"", y = d$units, title = d$name, caption = d$description)\n```\n\n\n\n## References\n\nSee the collection of vignette tutorials and examples as well as\ncomplete package documentation available at the `eia` package\n[website](https://docs.ropensci.org/eia/).\n\n-----\n\nPlease note that the `eia` project is released with a [Contributor Code\nof\nConduct](https://github.com/ropensci/eia/blob/master/CODE_OF_CONDUCT.md).\nBy contributing to this project, you agree to abide by its terms.\n\n[![ropensci\\_footer](https://ropensci.org/public_images/ropensci_footer.png)](https://ropensci.org)\n'",,"2019/06/26, 17:24:27",1582,CUSTOM,0,110,"2023/10/19, 22:24:59",2,3,12,5,5,1,0.0,0.6666666666666667,"2021/02/24, 17:02:29",v0.3.7,0,3,false,,true,false,,,https://github.com/ropensci,https://ropensci.org/,"Berkeley, CA",,,https://avatars.githubusercontent.com/u/1200269?v=4,,, atlite,Light-weight version of Aarhus RE Atlas for converting weather data to power systems data.,PyPSA,https://github.com/PyPSA/atlite.git,github,"python,energy,energy-system,energy-systems,wind,pv,era5,renewable-timeseries,renewable-energy,potentials,gis,reanalysis,solar,csp,heat-pump,dynamic-line-rating",Energy System Data Access,"2023/10/25, 14:24:31",209,30,63,true,Python,PyPSA,PyPSA,Python,https://atlite.readthedocs.io,"b' .. SPDX-FileCopyrightText: 2016 - 2023 The Atlite Authors\n\n .. SPDX-License-Identifier: CC-BY-4.0\n\n======\nAtlite\n======\n\n|PyPI version| |Conda version| |Documentation Status| |ci| |codecov| |standard-readme compliant| |MIT-image| |reuse| |black| |pre-commit.ci| |joss| |discord| |stackoverflow|\n\nAtlite is a `free software`_, `xarray`_-based Python library for\nconverting weather data (like wind speeds, solar influx) into energy systems data.\nIt is designed to be lightweight, keeping computing resource requirements (CPU, RAM) usage low.\nIt is therefore well suited to be used with big weather datasets.\n\n.. Atlite is designed to be modular, so that it can work with any weather\n.. datasets. It currently has modules for the following datasets:\n\n.. * `NCEP Climate Forecast System `_ hourly\n.. historical reanalysis weather data available on a 0.2 x 0.2 degree global grid\n.. * `ECMWF ERA5\n.. `_ hourly\n.. historical reanalysis weather data on an approximately 0.25 x 0.25 deg global\n.. grid\n.. * `EURO-CORDEX Climate Change Projection `_\n.. three-hourly up until 2100, available on a 0.11 x 0.11 degree grid for Europe\n.. * `CMSAF SARAH-2\n.. `_\n.. half-hourly historical surface radiation on a 0.05 x 0.05 deg grid available\n.. for Europe and Africa (automatically interpolated to a 0.2 deg grid and\n.. combined with ERA5 temperature).\n\n\nAtlite can process the following weather data fields and can convert them into following power-system relevant time series for any subsets of a full weather database.\n\n.. image:: doc/workflow_chart.png\n\n.. * Temperature\n.. * Downward short-wave radiation\n.. * Upward short-wave radiation\n.. * Wind\n.. * Runoff\n.. * Surface roughness\n.. * Height maps\n.. * Soil temperature\n\n\n.. * Wind power generation for a given turbine type\n.. * Solar PV power generation for a given panel type\n.. * Solar thermal collector heat output\n.. * Hydroelectric inflow (simplified)\n.. 
* Heating demand (based on the degree-day approximation)\n\n\nAtlite was initially developed by the `Renewable Energy Group\n`_\nat `FIAS `_ to carry out simulations\nfor the `CoNDyNet project `_, financed by the\n`German Federal Ministry for Education and Research (BMBF)\n`_ as part of the `Stromnetze\nResearch Initiative\n`_.\n\n\nInstallation\n============\n\nTo install you need a working installation running Python 3.6 or above\nand we strongly recommend using either miniconda or anaconda for package\nmanagement.\n\nTo install the current stable version:\n\nwith ``conda`` from `conda-forge`_\n\n.. code:: shell\n\n conda install -c conda-forge atlite\n\nwith ``pip`` from `pypi`_\n\n.. code:: shell\n\n pip install atlite\n\nto install the most recent upstream version from GitHub\n\n.. code:: shell\n\n pip install git+https://github.com/pypsa/atlite.git\n\n\nDocumentation\n===============\n.. * Install atlite from conda-forge or pypi.\n.. * Download one of the weather datasets listed above (ERA5 is downloaded\n.. automatically on-demand after the ECMWF\n.. `cdsapi` client is\n.. properly installed)\n.. * Create a cutout, i.e. a geographical rectangle and a selection of\n.. times, e.g. all hours in 2011 and 2012, to narrow down the scope -\n.. see `examples/create_cutout.py `_\n.. * Select a sparse matrix of the geographical points inside the cutout\n.. you want to aggregate for your time series, and pass it to the\n.. appropriate converter function - see `examples/ `_\n\n\nPlease check the `documentation `_.\n\n\nSupport & Contributing\n======================\n* In case of code-related **questions**, please post on `stack overflow `_.\n* For non-programming related and more general questions please refer to the `pypsa mailing list `_.\n* To **discuss** with other PyPSA and atlite users, organise projects, share news, and get in touch with the community you can use the `discord server `_.\n* For **bugs and feature requests**, please use the `issue tracker `_.\n* We strongly welcome anyone interested in providing **contributions** to this project. If you have any ideas, suggestions or encounter problems, feel invited to file issues or make pull requests on the `Github repository `_.\n\nAuthors and Copyright\n---------------------\n\nCopyright (C) 2016 - 2023 The Atlite Authors.\n\nSee the `AUTHORS`_ for details.\n\nLicence\n=======\n\n|MIT-image|\n\nThis work is licensed under multiple licences:\n\n- All original source code is licensed under `MIT`_\n- Auxiliary code from SPHINX is licensed under `BSD-2-Clause`_.\n- The documentation is licensed under `CC-BY-4.0`_.\n- Configuration and data files are mostly licensed under `CC0-1.0`_.\n\nSee the individual files for license details.\n\n.. _free software: http://www.gnu.org/philosophy/free-sw.en.html\n.. _xarray: http://xarray.pydata.org/en/stable/\n\n.. _conda-forge: https://anaconda.org/conda-forge/atlite\n.. _pypi: https://pypi.org/project/atlite/%3E\n.. _GitHub: https://github.com/pypsa/atlite\n\n.. _documentation on getting started: https://atlite.readthedocs.io/en/latest/getting-started.html\n\n.. _AUTHORS: AUTHORS.rst\n\n.. _MIT: LICENSES/MIT.txt\n.. _BSD-2-Clause: LICENSES/BSD-2-Clause.txt\n.. _CC-BY-4.0: LICENSES/CC-BY-4.0.txt\n.. _CC0-1.0: LICENSES/CC0-1.0.txt\n\n.. |PyPI version| image:: https://img.shields.io/pypi/v/atlite.svg\n :target: https://pypi.python.org/pypi/atlite\n.. |Conda version| image:: https://img.shields.io/conda/vn/conda-forge/atlite.svg\n :target: https://anaconda.org/conda-forge/atlite\n.. 
|Documentation Status| image:: https://readthedocs.org/projects/atlite/badge/?version=master\n :target: https://atlite.readthedocs.io/en/master/?badge=master\n.. |standard-readme compliant| image:: https://img.shields.io/badge/readme%20style-standard-brightgreen.svg?style=flat\n :target: https://github.com/RichardLitt/standard-readme\n.. |MIT-image| image:: https://img.shields.io/pypi/l/atlite.svg\n :target: LICENSES/MIT.txt\n.. |codecov| image:: https://codecov.io/gh/PyPSA/atlite/branch/master/graph/badge.svg?token=TEJ16CMIHJ\n :target: https://codecov.io/gh/PyPSA/atlite\n.. |ci| image:: https://github.com/PyPSA/atlite/actions/workflows/CI.yaml/badge.svg\n :target: https://github.com/PyPSA/atlite/actions/workflows/CI.yaml\n.. |reuse| image:: https://api.reuse.software/badge/github.com/pypsa/atlite\n :target: https://api.reuse.software/info/github.com/pypsa/atlite\n.. |black| image:: https://img.shields.io/badge/code%20style-black-000000.svg\n :target: https://github.com/psf/black\n :alt: Code style: black\n.. |pre-commit.ci| image:: https://results.pre-commit.ci/badge/github/PyPSA/atlite/master.svg\n :target: https://results.pre-commit.ci/latest/github/PyPSA/atlite/master\n :alt: pre-commit.ci status\n.. |joss| image:: https://joss.theoj.org/papers/10.21105/joss.03294/status.svg\n :target: https://doi.org/10.21105/joss.03294\n.. |discord| image:: https://img.shields.io/discord/911692131440148490?logo=discord\n :target: https://discord.gg/AnuJBk23FU\n.. |stackoverflow| image:: https://img.shields.io/stackexchange/stackoverflow/t/pypsa\n :target: https://stackoverflow.com/questions/tagged/pypsa\n :alt: Stackoverflow\n'",",https://doi.org/10.21105/joss.03294\n","2016/11/03, 23:59:25",2546,CUSTOM,190,691,"2023/10/24, 16:05:36",26,218,302,73,1,3,0.8,0.7267950963222417,"2023/10/25, 10:29:04",v0.2.12,0,24,false,,false,true,"cshearer1977/EnergySystemModelling,maribjorn/pypsa-earth-bo,zhizhiyuyu/pypsa-tide,centrefornetzero/pypsa-fes,LukasFrankenQ/pypsa-fes,BishtArvind/pypsa-meets-earth,alfiyaks/pypsa-kz-my,PyPSA/pypsa-usa,fneum/data-science-for-esm,drifter089/pypsaLit-container,clairehalloran/GeoHeat-GB,2050plus/2050plus,yerbol-akhmetov/pypsa-kaz,narest-qa/repo22,wago-stiftung/workshop_data-analytics,koen-vg/kjernekraft-noreg,drifter089/pypsa-workflow,pz-max/pypsa-earth-test,Pikugcp22/https-github.com-pypsa-meets-earth-pypsa-earth,ekatef/pypsa-earth,pypsa-meets-africa/pypsa-africa-archived,vttresearch/ArchetypeBuildingModel,carlosfv92/pypsa-earth-BO,mikelperez01/Mike,pypsa-meets-earth/pypsa-earth,maxnutz/res_aut,PyPSA/pypsa-eur,richard-weinhold/PomatoData,openego/eGon-data,ZNES-datapackages/100-sea-2050",,https://github.com/PyPSA,www.pypsa.org,,,,https://avatars.githubusercontent.com/u/32890768?v=4,,, NYISOToolkit,"A collection of modules for accessing power system data, generating statistics, and creating visualizations from the New York Independent System Operator.",m4rz910,https://github.com/m4rz910/NYISOToolkit.git,github,"electricity,nyiso,energy,analysis,data,datasets,datascience,decarbonization,visualization,clcpa,newyork,clean-energy,renewable-energy,kaggle,kaggle-competition,kaggle-dataset,machine-learning,ml",Energy System Data Access,"2023/08/22, 17:30:00",42,0,11,true,Python,,,"Python,Jupyter Notebook",,"b'## NYISOToolkit\n\nA package for accessing power system data (`NYISOData`), generating statistics (`NYISOStat`), and creating visualizations (`NYISOVis`) from the [New York Independent System Operator (NYISO)](https://www.nyiso.com/).\n\nCheck out the [NYISOToolkit Web 
App!](http://viosimos.com/nyisotoolkit/)\n\n## How to Install\n\n```python\npip install git+https://github.com/m4rz910/NYISOToolkit#egg=nyisotoolkit\n```\n \n## NYISOData\n\n**Example:**\n```python\nfrom nyisotoolkit import NYISOData, NYISOStat, NYISOVis\ndf = NYISOData(dataset=\'load_h\', year=\'2019\').df # year argument in local time, but returns dataset in UTC\n\n#If you need to work with data in local time, then convert time zone\ndf = df.tz_convert(\'US/Eastern\')\n\n#Construct datasets for certain years\nyears = [\'2013\',\'2019\',\'2020\']\ndatasets = [\'load_h\',\'interface_flows_5m\']\nNYISOData.construct_databases(years=years, datasets=datasets, redownload=True, reconstruct=True, create_csvs=False)\n```\n\nRaw Data Source: http://mis.nyiso.com/public/\n\nDataset Name | Resolution | Description\n--- | --- | --\n`load_h` | hourly | day-ahead load by NYISO region\n`load_5m` | 5-min | real-time load by NYISO region\n`load_forecast_h` | hourly | load forecast by NYISO region\n`fuel_mix_5m` | 5-min | real-time aggregated fuel mix data\n`interface_flows_5m` | 5-min | real-time flows between regions\n`lbmp_dam_h` | hourly | day-ahead zonal Locational Based Marginal Price (LBMP)\n`lbmp_rt_5m` | 5-min | real-time zonal LBMP\n`lbmp_dam_h_refbus` | hourly | day-ahead reference bus marginal cost of energy\n`lbmp_rt_h_refbus` | hourly | time weighted average rt reference bus marginal cost of energy\n`asp_rt` | 5-min | real-time zonal ancillary service prices\n`asp_dam` | hourly | day-ahead zonal ancillary service prices\n\nAll datasets:\n\n* Timezone: Coordinated Universal Time [UTC]\n* Frequency: Hourly or 5-mins. The raw data sometimes has higher or lower frequency than intended, but this library uses mean values to resample at the intended frequency. When interpolations are necessary, they are made. Some datasets only come in one frequency.\n* Datetime Convention: Start. The value(s)/measurement(s) associated with each timestamp occurred in the time period before the start of the next timestamp.\n\n### More Dataset Information\n\n#### Load (`load_h`)\n\n* ""Integrated Real-Time Actual Load is posted after each hour and represents the timeweighted hourly load for each zone"" (NYISO Market Participant Guide p.62)\n* Units: Power [MW]\n* Frequency: Hour\n\n#### Load (`load_5m`)\n\n* ""Real-Time Actual Load posts the actual measured load for each RTD interval (5 minutes) by zone.\nActual loads are calculated as generation plus net interchange for each zone, based on real-time telemetered data."" (NYISO Market Participant Guide p.62)\n* Units: Power [MW]\n* Frequency: 5-min\n\n#### Load Forecast (`load_forecast_h`)\n\n* ""Weather forecast information grouped by zone is input into a neural network forecaster tool to produce a preliminary zonal load forecast for each hour of the following day. The tool makes use of historical load and weather patterns."" (NYISO Market Participant Guide p.25)\n* Units: Power [MW]\n* Frequency: Hour\n\n#### Fuel Mix (`fuel_mix_5m`)\n\n* Units: Power [MW]\n* Frequency: 5-min\n\n#### Interface Flows (`interface_flows_5m`)\n\n* ""Internal/ External Interface Limits and Flows consist of hourly limits (for all major internal interfaces, HQ, NE, PJM, and OH) and flows (for HQ, NE, PJM, and OH) in SCUC and time-weighted average hourly flows (for the same interfaces) in RTD. 
The data is posted at least day-after or sooner."" (NYISO Market Participant Guide p.59)\n* Units: Power [MW] (Note: The raw datafile column is mislabeled as MWH, but it is correct on the NYISO Dashboard)\n* Frequency: 5-min\n\nInterface Name | Type | Mapping Name | Notes\n--- | --- | --- | ---\nCENTRAL EAST | Internal | `CENTRAL EAST - VC`\nDYSINGER EAST | Internal | `DYSINGER EAST`\nMOSES SOUTH | Internal | `MOSES SOUTH`\nSPR/DUN-SOUTH | Internal | `SPR/DUN-SOUTH`\nTOTAL EAST | Internal | `TOTAL EAST`\nUPNY CONED | Internal | `UPNY CONED`\nWEST CENTRAL | Internal | `WEST CENTRAL`\nHQ CHATEAUGUAY | External | `SCH - HQ - NY`\nHQ CEDARS | External | `SCH - HQ_CEDARS`\nHQ Import Export | External | `SCH - HQ_IMPORT_EXPORT` | subset of HQ CHATEAUGUAY, excludes wheel-through\nNPX NEW ENGLAND (NE) | External | `SCH - NE - NY`\nNPX 1385 NORTHPORT (NNC) | External | `SCH - NPX_1385`\nNPX CROSS SOUND CABLE (CSC) | External | `SCH - NPX_CSC`\nIESO | External | `SCH - OH - NY`\nPJM KEYSTONE | External | `SCH - PJ - NY`\nPJM HUDSON TP | External | `SCH - PJM_HTP`\nPJM NEPTUNE | External | `SCH - PJM_NEPTUNE`\nPJM LINDEN VFT | External | `SCH - PJM_VFT`\n\n#### LBMP (`lbmp_dam_h`)\n\n* NYISO Market Participant Guide\n* Units: Price [$/MWh]\n* Frequency: Hour\n\n#### LBMP (`lbmp_rt_5m`)\n\n* NYISO Market Participant Guide\n* Units: Price [$/MWh]\n* Frequency: 5-min\n\n#### Ancillary Service Price (`asp_rt`)\n\n* Units: Price [$/MWh]\n* Frequency: 5-min\n\n#### Ancillary Service Price (`asp_dam`)\n\n* Units: Price [$/MWh]\n* Frequency: Hour\n\n## NYISOVis\nThere are several visualizations currently supported - browse them on the [NYISOToolkit Web App](http://viosimos.com/nyisotoolkit/) or in the nyisotoolkit/nyisovis/visualizations folder. The visualizations are focused on communicating New York\'s status toward achieving the power sector decarbonization goals outlined by the Climate Leadership and Community Protection Act (CLCPA). \n\n> No later than [June 13, 2021], the commission shall establish a program to require that:\n>\n> * (A) A minimum of [70%] of the state wide electric generation secured by jurisdictional load serving entities to meet the electrical energy requirements of all end-use customers in New York State in [2030] shall be generated by renewable energy systems;\n> * (B) and that by [2040] the statewide electrical demand system will be zero emissions.""\n\n**Source:** [CLCPA p.17](https://www.nysenate.gov/legislation/bills/2019/s6599)\n\n**Example:**\n\n```python\nfrom nyisotoolkit import NYISOData, NYISOStat, NYISOVis\nnv = NYISOVis(year=\'2019\') #figures saved in nyisotoolkit/nyisovis/visualization folder by default. 
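\n# Sketch (not part of the original example): per the note printed below, the\n# output folder can be changed by passing a pathlib object to out_dir:\n# from pathlib import Path\n# nv = NYISOVis(year=\'2019\', out_dir=Path(\'my_figures\'))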
\nnv.fig_carbon_free_timeseries(f=\'D\') # daily (D) or monthly (M) frequency is recommended\nprint(f""Figures saved by default to: {nv.out_dir} \\nYou can change this by passing a pathlib object to the out_dir parameter in the NYISOVis object initialization."")\n```\n![CLCPA](nyisotoolkit/nyisovis/visualizations/2021_carbon_free_timeseries_D.png)\n'",,"2020/07/18, 14:37:20",1194,MIT,2,210,"2022/06/24, 02:33:17",8,9,21,0,488,0,0.0,0.10106382978723405,"2020/10/25, 20:07:17",2.0.0,1,3,false,,false,false,,,,,,,,,,, Photovoltaic time series for European countries,"Comprises 38-year-long hourly time series representing the photovoltaic capacity factors in every European country (EU-28 plus Serbia, Bosnia-Herzegovina, Norway, and Switzerland).",record,,custom,,Energy System Data Access,,,,,,,,,,https://zenodo.org/record/2613651#.XRtJRP7Rapo,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,, SolarData,Download and manipulate some publicly available solar datasets.,dazhiyang,https://github.com/dazhiyang/SolarData.git,github,solar-energy,Energy System Data Access,"2021/04/19, 00:46:08",25,0,4,false,R,,,R,,"b'# Access and manipulate some publicly available solar data\n\nThere are many publicly available solar datasets. This package contains functions to download and manipulate these datasets. Currently available ones include: \n- NREL [Physical Solar Model (PSM)](https://nsrdb.nrel.gov/current-version) version 3, gridded satellite-derived irradiance data\n- NREL [Oahu Solar Measurement Grid (OSMG)](https://midcdmz.nrel.gov/oahu_archive/), dense sensor network in Oahu, Hawaii\n- NOAA [Surface Radiation (SURFRAD)](https://www.esrl.noaa.gov/gmd/grad/surfrad/), long-term high-resolution ground-based irradiance data\n- NASA [Shuttle Radar Topography Mission (SRTM)](https://www2.jpl.nasa.gov/srtm/cbanddataproducts.html), digital elevation model data\n- SoDa [Linke Turbidity Factor (LTF)](http://www.soda-pro.com/help/general-knowledge/linke-turbidity-factor), Linke turbidity data\n- WRMC [Baseline Surface Radiation Network (BSRN)](https://bsrn.awi.de/), long-term high-resolution ground-based irradiance data\n\n## Getting Started\n\nThese instructions will get you a copy of the project up and running on your local machine for development and testing purposes. \n\n### Prerequisites\n\nThis is an R package, so you need to install [R](https://www.r-project.org/) on your computer first. In addition, [RStudio](https://www.rstudio.com/) is an integrated development environment (IDE) for R; it is highly recommended.\n\n### Installing\n\nOnce R and RStudio are installed, open R or RStudio and install the [devtools](https://cran.r-project.org/web/packages/devtools/index.html) package, which allows you to install R packages from GitHub\n\n```\ninstall.packages(""devtools"")\n```\n\nLoad the package that you just installed\n\n```\nlibrary(""devtools"")\n```\n\nNow, you can install the SolarData package, using\n\n```\ninstall_github(""dazhiyang/SolarData"")\n```\n\n## Running the tests\n\nThis code segment gives an example of how to run transposition modeling (horizontal to tilt) using a variety of models. 
(This is not up to date)\n\n```\nlibrary(""SolarData"")\n\n#get SURFRAD data from Goodwin_Creek_MS (gwn) station, for the first three days in 2004\nSURFRAD.get(station = \'Goodwin_Creek_MS\', year = \'2004\', day_of_year = c(1:3))\n\n#get PSM data for two locations\nPSM.get(lat = c(42.05, 44), lon = c(-124.02, -110), api_key <- \'FVltdchrxzBCHiSNF6M7R4ua6BFe4j81fbPp8dDP\', attributes <- \'ghi,dhi,dni,clearsky_dhi,clearsky_dni,clearsky_ghi,solar_zenith_angle\', name = \'John+Smith\', affiliation = \'Some+Institute\', year = \'2016\', leap_year = \'true\', interval = \'30\', utc = \'false\', reason_for_use = \'research\', email = \'yangdazhi.nus@gmail.com\', mailing_list = \'false\')\n\n#get SRTM, i.e., digital elevation model, data for two boxes with resolution 3 arcsec\nSRTM.list(3, want.plot = TRUE) #check available files\nfiles <- c(""Eurasia/N00E072.hgt.zip"", ""Eurasia/N00E073.hgt.zip"")\nSRTM.get(resolution = 3, files = files)\n```\n\n## License\n\nThis package is under the GPL-2 license.\n'",,"2018/04/09, 06:55:17",2025,MIT,0,62,"2022/06/24, 02:33:17",0,0,0,0,488,0,0,0.0,"2019/02/20, 08:01:23",v1.0,0,1,false,,false,false,,,,,,,,,,, UKgrid,An R data package with the UK National Grid historical demand for electricity between April 2005 and October 2019.,RamiKrispin,https://github.com/RamiKrispin/UKgrid.git,github,,Energy System Data Access,"2020/07/04, 10:31:29",27,0,1,false,R,,,R,https://ramikrispin.github.io/UKgrid/,"b'# UKgrid \n\n[![lifecycle](https://img.shields.io/badge/lifecycle-maturing-blue.svg)](https://www.tidyverse.org/lifecycle/#maturing)\n[![CRAN status](https://www.r-pkg.org/badges/version/UKgrid)](https://cran.r-project.org/package=UKgrid)\n[![License: MIT](https://img.shields.io/badge/License-MIT-blue.svg)](https://opensource.org/licenses/MIT)\n\n\n\n\n\nAn R data package with the UK [National Grid](https://en.wikipedia.org/wiki/National_Grid_(Great_Britain)) historical demand for electricity between April 2005 and October 2019\n\n\nOverview\n--------\nThe UKgrid dataset is an example of a multiple seasonality time series. This time series captures the demand for electricity and its components in the UK since April 2005 using half-hour intervals. In addition, the package provides a function to extract, subset and aggregate the series into `tsibble`, `ts`, `xts`, `zoo`, `data.frame`, `data.table`, or `tbl`. 
\n\nThe data was sourced from the National Grid UK [website](https://www.nationalgrid.com/uk)\n\n\n\n\nInstallation\n------------\n\nInstall the stable version from [CRAN](https://CRAN.R-project.org/package=UKgrid):\n\n``` r\ninstall.packages(""UKgrid"")\n```\n\nor install the development version from [Github](https://github.com/RamiKrispin/UKgrid):\n\n``` r\n# install.packages(""remotes"")\nremotes::install_github(""RamiKrispin/UKgrid"")\n```\n\n\nUsage\n-----\n\n``` r\nlibrary(UKgrid)\n\n# Load the full dataset (data.frame format)\ndata(""UKgrid"")\n\n# Extract only the demand field (ND - National Demand) using tsibble format\nextract_grid(type = ""tsibble"", \n columns = ""ND"") \n\n# Extract the demand between 2016 and 2017 using tbl format\nextract_grid(type = ""tbl"", \n columns = ""ND"", \n start = 2016, \n end = 2017)\n\n# Extract the first 10 days in 2018 and aggregate to hourly using zoo format\nextract_grid(type = ""zoo"", \n columns = ""ND"", \n start = as.Date(""2018-01-01""), \n end = as.Date(""2018-01-10""),\n aggregate = ""hourly"")\n``` \n\nMore details available on the package [site](https://ramikrispin.github.io/UKgrid/) and [vignette](https://ramikrispin.github.io/UKgrid/articles/UKgrid_vignette.html)\n'",,"2018/07/15, 06:57:40",1928,CUSTOM,0,2,"2019/11/27, 21:52:50",0,0,6,0,1427,0,0,0.0,"2019/12/10, 18:31:09",0.1.2,0,1,false,,false,false,,,,,,,,,,, USgrid,The hourly demand and supply of electricity in the US.,RamiKrispin,https://github.com/RamiKrispin/USgrid.git,github,,Energy System Data Access,"2021/03/21, 18:19:33",24,0,2,false,R,,,R,https://ramikrispin.github.io/USgrid/,"b'\n\n\n# USgrid \n\n\n\n[![lifecycle](https://img.shields.io/badge/lifecycle-experimental-orange.svg)](https://lifecycle.r-lib.org/articles/stages.html)\n[![CRAN\nstatus](https://www.r-pkg.org/badges/version/USgrid)](https://cran.r-project.org/package=USgrid)\n[![License:\nMIT](https://img.shields.io/badge/License-MIT-blue.svg)](https://opensource.org/licenses/MIT)\n\n\nThe USgrid R package provides a set of high frequency (hourly)\ntime-series datasets, describing the demand and generation of\nelectricity in the US (lower-48 states, excluding Alaska and Hawaii).\nThat includes the following series:\n\n- `US_elec` - the total hourly demand and supply (generation) for\n electricity in the US since July 2015\n\n- `US_source` - the US net generation of electricity by energy source\n (natural gas, coal, solar, etc.) since July 2018\n\n- `Cal_elec` - The California subregion hourly demand by operator\n since July 2018\n\nAll datasets are in [tsibble](https://tsibble.tidyverts.org/index.html)\nformat\n\n**Source:** [US Energy Information\nAdministration](https://www.eia.gov/), Mar 2021\n\n## Installation\n\nInstall the stable version from\n[CRAN](https://CRAN.R-project.org/package=USgrid):\n\n``` r\ninstall.packages(""USgrid"")\n```\n\nor install the development version from\n[Github](https://github.com/RamiKrispin/USgrid):\n\n``` r\n# install.packages(""remotes"")\nremotes::install_github(""RamiKrispin/USgrid"")\n```\n\n## Examples\n\nThe hourly demand and generation (supply) of electricity in the US:\n\n``` r\nlibrary(USgrid)\nlibrary(plotly)\n\ndata(US_elec)\n\nplot_ly(data = US_elec,\n x = ~ date_time,\n y = ~ series,\n color = ~ type,\n colors = c(""#66C2A5"",""#8DA0CB""),\n type = ""scatter"",\n mode = ""lines"") %>%\n layout(title = ""US Electricity Demand vs. 
Supply (Hourly)"",\n yaxis = list(title = ""Mwh""),\n xaxis = list(title = ""Source: US Energy Information Administration (Mar 2021)""))\n```\n\n\n\nThe hourly generation (supply) of electricity in the US by source:\n\n``` r\ndata(""US_source"")\n\nplot_ly(data = US_source,\n x = ~ date_time,\n y = ~ series,\n color = ~ source,\n type = ""scatter"",\n mode = ""lines"") %>%\n layout(title = ""US Electricity Generation by Energy Source"",\n yaxis = list(title = ""Mwh""),\n xaxis = list(title = ""Source: US Energy Information Administration (Mar 2021)""))\n```\n\n\n\nThe California subregion hourly demand by operator:\n\n``` r\ndata(""Cal_elec"")\n\nplot_ly(data = Cal_elec,\n x = ~ date_time,\n y = ~ series,\n color = ~ operator,\n type = ""scatter"",\n mode = ""lines"") %>%\n layout(title = ""California Hourly Demand by Operator"",\n yaxis = list(title = ""Mwh""),\n xaxis = list(title = ""Source: US Energy Information Administration (Mar 2021)""))\n```\n\n\n'",,"2019/11/06, 06:17:49",1449,CUSTOM,0,136,"2019/11/27, 23:14:25",0,0,6,0,1427,0,0,0.0,"2021/05/26, 12:03:11",v0.1.2,0,1,false,,false,false,,,,,,,,,,, ESIOS,Comprehensive library to access the Spanish electricity market entity in Python.,SanPen,https://github.com/SanPen/ESIOS.git,github,,Energy System Data Access,"2023/10/13, 10:51:54",42,0,7,true,Jupyter Notebook,,,"Jupyter Notebook,Python,Makefile",,"b""# ESIOS\nAccess to the ESIOS data, the Spanish electricity market entity, in python 3 \n(python 2.7 might work but it is not supported)\n\nThis API is made to make it painless to access the data published by the market.\n\nFirst you need a token string. You should request yours from: Consultas Sios \nIt looks like this:\n`'615e6d8c80629b8eef25c8f3d0c36094e23db4ed50ce5458f3462129d7c46dba'`\n\nTo use the ESIOS module, just do:\n\n```\nfrom ESIOS import *\n\ntoken = '615e6d8c80629b8eef25c8f3d0c36094e23db4ed50ce5458f3462129d7c46dba'\n\nesios = ESIOS(token)\n\nindicators_ = [1293, 600] # demand (MW) and SPOT price (\xe2\x82\xac)\n\nnames = esios.get_names(indicators_)\n\n# define start_ and end_ (the date range to query) before this call\ndfmul, df_list, names = esios.get_multiple_series(indicators_, start_, end_)\ndf = dfmul[names] # get the actual series and neglect the rest of the info\n```\n\nThis is an example of what you can get:\n\n![Image of some indicators on December 2015](https://github.com/SanPen/ESIOS/blob/master/example.png)\n\nIf you have any suggestions please write to: (Spanish and English)\n\nTo install the ESIOS package\n\n```\npip install pyesios\n\n# To build the graphs in the examples\npip install pyesios[graphs]\n```\n\n## [Contributing](./CONTRIBUITING.md) \n\n\n\n[![Ruff](https://img.shields.io/endpoint?url=https://raw.githubusercontent.com/astral-sh/ruff/main/assets/badge/v2.json)](https://github.com/astral-sh/ruff)""",,"2016/09/02, 15:04:20",2609,GPL-3.0,17,36,"2023/10/13, 10:46:36",4,3,8,7,12,1,0.6666666666666666,0.3548387096774194,,,0,3,false,,false,false,,,,,,,,,,, energy-data,"Data on global energy consumption (primary energy, per capita, and growth rates), energy mix, electricity mix and other relevant metrics.",owid,https://github.com/owid/energy-data.git,github,"energy,electricity",Energy System Data Access,"2023/07/18, 13:21:58",216,0,93,true,Python,Our World in Data,owid,Python,https://ourworldindata.org/energy,"b'# Data on Energy by *Our World in Data*\n\nOur complete Energy dataset is a collection of key metrics maintained by [*Our World in Data*](https://ourworldindata.org/energy). 
It is updated regularly and includes data on energy consumption (primary energy, per capita, and growth rates), energy mix, electricity mix and other relevant metrics.\n\n## The complete *Our World in Data* Energy dataset\n\n### \xf0\x9f\x97\x82\xef\xb8\x8f Download our complete Energy dataset : [CSV](https://nyc3.digitaloceanspaces.com/owid-public/data/energy/owid-energy-data.csv) | [XLSX](https://nyc3.digitaloceanspaces.com/owid-public/data/energy/owid-energy-data.xlsx) | [JSON](https://nyc3.digitaloceanspaces.com/owid-public/data/energy/owid-energy-data.json)\n\nThe CSV and XLSX files follow a format of 1 row per location and year. The JSON version is split by country, with an array of yearly records.\n\nThe variables represent all of our main data related to energy consumption, energy mix, electricity mix as well as other variables of potential interest.\n\nWe will continue to publish updated data on energy as it becomes available. Most metrics are published on an annual basis.\n\nA [full codebook](https://github.com/owid/energy-data/blob/master/owid-energy-codebook.csv) is made available, with a description and source for each variable in the dataset.\n\n## Our source data and code\n\nThe dataset is built upon a number of datasets and processing steps:\n- Statistical review of world energy (Energy Institute, EI):\n - [Source data](https://www.energyinst.org/statistical-review)\n - [Ingestion code](https://github.com/owid/etl/blob/master/snapshots/energy_institute/2023-06-26/statistical_review_of_world_energy.py)\n - [Basic processing code](https://github.com/owid/etl/blob/master/etl/steps/data/meadow/energy_institute/2023-06-26/statistical_review_of_world_energy.py)\n - [Further processing code](https://github.com/owid/etl/blob/master/etl/steps/data/garden/energy_institute/2023-06-26/statistical_review_of_world_energy.py)\n- International energy data (U.S. 
Energy Information Administration, EIA):\n - [Source data](https://www.eia.gov/opendata/bulkfiles.php)\n - [Ingestion code](https://github.com/owid/etl/blob/master/snapshots/eia/2023-07-10/international_energy_data.py)\n - [Basic processing code](https://github.com/owid/etl/blob/master/etl/steps/data/meadow/eia/2023-07-10/energy_consumption.py)\n - [Further processing code](https://github.com/owid/etl/blob/master/etl/steps/data/garden/eia/2023-07-10/energy_consumption.py)\n- Energy from fossil fuels (The Shift Dataportal):\n - [Source data](https://www.theshiftdataportal.org/energy)\n - [Ingestion code](https://github.com/owid/etl/blob/master/snapshots/shift/2023-07-10/energy_production_from_fossil_fuels.py)\n - [Basic processing code](https://github.com/owid/etl/blob/master/etl/steps/data/meadow/shift/2023-07-10/energy_production_from_fossil_fuels.py)\n - [Further processing code](https://github.com/owid/etl/blob/master/etl/steps/data/garden/shift/2023-07-10/energy_production_from_fossil_fuels.py)\n- Yearly Electricity Data (Ember):\n - [Source data](https://ember-climate.org/data-catalogue/yearly-electricity-data/)\n - [Ingestion code](https://github.com/owid/etl/blob/master/snapshots/ember/2023-07-10/yearly_electricity.py)\n - [Basic processing code](https://github.com/owid/etl/blob/master/etl/steps/data/meadow/ember/2023-07-10/yearly_electricity.py)\n - [Further processing code](https://github.com/owid/etl/blob/master/etl/steps/data/garden/ember/2023-07-10/yearly_electricity.py)\n- European Electricity Review (Ember):\n - [Source data](https://ember-climate.org/insights/research/european-electricity-review-2022/)\n - [Ingestion code](https://github.com/owid/walden/blob/master/owid/walden/index/ember/2022-02-01/european_electricity_review.json)\n - [Basic processing code](https://github.com/owid/etl/blob/master/etl/steps/data/meadow/ember/2022-08-01/european_electricity_review.py)\n - [Further processing code](https://github.com/owid/etl/blob/master/etl/steps/data/garden/ember/2022-08-01/european_electricity_review.py)\n- Combined Electricity (Our World in Data based on Ember\'s Yearly Electricity Data and European Electricity Review):\n - [Processing code](https://github.com/owid/etl/blob/master/etl/steps/data/garden/ember/2023-07-10/combined_electricity.py)\n- Energy mix (Our World in Data based on EI\'s Statistical review of world energy):\n - [Processing code](https://github.com/owid/etl/blob/master/etl/steps/data/garden/energy/2023-07-10/energy_mix.py)\n- Fossil fuel production (Our World in Data based on EI\'s Statistical review of world energy & The Shift Dataportal\'s Energy from fossil fuels):\n - [Processing code](https://github.com/owid/etl/blob/master/etl/steps/data/garden/energy/2023-07-10/fossil_fuel_production.py)\n- Primary energy consumption (Our World in Data based on EI\'s Statistical review of world energy & EIA\'s International energy data):\n - [Processing code](https://github.com/owid/etl/blob/master/etl/steps/data/garden/energy/2023-07-10/primary_energy_consumption.py)\n- Electricity mix (Our World in Data based on EI\'s Statistical Review & Ember\'s Combined Electricity):\n - [Processing code](https://github.com/owid/etl/blob/master/etl/steps/data/garden/energy/2023-07-10/electricity_mix.py)\n- Energy dataset (Our World in Data based on all sources above):\n - [Processing code](https://github.com/owid/etl/blob/master/etl/steps/data/garden/energy/2023-07-10/owid_energy.py)\n - [Exporting 
code](https://github.com/owid/energy-data/blob/master/scripts/make_dataset.py)\n - [Uploading code](https://github.com/owid/energy-data/blob/master/scripts/upload_datasets_to_s3.py)\n\nAdditionally, to construct region aggregates and variables per capita and per GDP, we use the following datasets and processing steps:\n- Regions (Our World in Data).\n - [Processing code](https://github.com/owid/etl/blob/master/etl/steps/data/garden/regions/2023-01-01/regions.py)\n- Population (Our World in Data based on [a number of different sources](https://ourworldindata.org/population-sources)).\n - [Processing code](https://github.com/owid/etl/blob/master/etl/steps/data/garden/demography/2023-03-31/population/__init__.py)\n- Income groups (World Bank).\n - [Processing code](https://github.com/owid/etl/blob/master/etl/steps/data/garden/wb/2023-04-30/income_groups.py)\n- GDP (University of Groningen GGDC\'s Maddison Project Database, Bolt and van Zanden, 2020).\n - [Source data](https://www.rug.nl/ggdc/historicaldevelopment/maddison/releases/maddison-project-database-2020)\n - [Ingestion code](https://github.com/owid/walden/blob/master/ingests/ggdc_maddison.py)\n - [Processing code](https://github.com/owid/etl/blob/master/etl/steps/data/garden/ggdc/2020-10-01/ggdc_maddison.py)\n\n## Changelog\n\n- On July 7, 2023:\n - Replaced BP\'s data by the new Energy Institute Statistical Review of World Energy 2023.\n - Updated Ember\'s yearly electricity data.\n - Updated all datasets accordingly.\n- On June 1, 2023:\n - Updated Ember\'s yearly electricity data.\n - Renamed countries \'East Timor\' and \'Faroe Islands\', and added \'Middle East (Ember)\'.\n - Population and per capita variables are now calculated using an updated version of our population dataset.\n- On March 1, 2023:\n - Updated Ember\'s yearly electricity data and fixed some minor issues.\n- On December 30, 2022:\n - Fixed some minor issues with BP\'s dataset. 
Regions like ""Other North America (BP)"" have been removed from the data, since, in the original Statistical Review of World Energy, these regions represented different sets of countries for different variables.\n- On December 16, 2022:\n - The column `electricity_share_energy` (electricity as a share of primary energy) was added to the dataset.\n - Fixed some minor inconsistencies in electricity data between Ember and BP, by prioritizing data from Ember.\n - Updated Ember\'s yearly electricity data.\n- On August 9, 2022:\n - All inconsistencies due to different definitions of regions among different datasets (especially Europe) have been fixed.\n - Now all regions follow [Our World in Data\'s definitions](https://ourworldindata.org/world-region-map-definitions).\n - We also include data for regions as defined in the original datasets; for example, `Europe (BP)` corresponds to Europe as defined by BP.\n - All data processing now occurs outside this repository; the code has been migrated to be part of the [etl repository](https://github.com/owid/etl).\n - Variable `fossil_cons_per_capita` has been renamed `fossil_elec_per_capita` for consistency, since it corresponds to electricity generation.\n - The codebook has been updated following these changes.\n- On April 8, 2022:\n - Electricity data from Ember was updated (using the Global Electricity Review 2022).\n - Data on greenhouse-gas emissions in electricity generation was added (`greenhouse_gas_emissions`).\n - Data on emissions intensity is now provided for most countries in the world.\n- On March 25, 2022:\n - Data on net electricity imports and electricity demand was added.\n - BP data was updated (using the Statistical Review of the World Energy 2021).\n - Maddison data on GDP was updated (using the Maddison Project Database 2020).\n - EIA data on primary energy consumption was included in the dataset.\n - Some issues in the dataset were corrected (for example some missing data in production by fossil fuels).\n- On February 14, 2022:\n - Some issues were corrected in the electricity data, and the energy dataset was updated accordingly.\n - The json and xlsx dataset files were removed from GitHub in favor of an external storage service, to keep this repository at a reasonable size.\n - The `carbon_intensity_elec` column was added back into the energy dataset.\n- On February 3, 2022, we updated the [Ember global electricity data](https://ember-climate.org/data/global-electricity/), combined with the [European Electricity Review from Ember](https://ember-climate.org/project/european-electricity-review-2022/).\n - The `carbon_intensity_elec` column was removed from the energy dataset (since no updated data was available).\n - Columns for electricity from other renewable sources excluding bioenergy were added (namely `other_renewables_elec_per_capita_exc_biofuel`, and `other_renewables_share_elec_exc_biofuel`).\n - Certain countries and regions have been removed from the dataset, because we identified significant inconsistencies in the original data.\n- On March 31, 2021, we updated 2020 electricity mix data.\n- On September 9, 2020, the first version of this dataset was made available.\n\n## Data alterations\n\n- **We standardize names of countries and regions.** Since the names of countries and regions are different in different data sources, we harmonize all names to the [*Our World in Data* standard entity names](https://ourworldindata.org/world-region-map-definitions).\n- **We create aggregate data for regions (e.g. 
Africa, Europe, etc.).** Since regions are defined differently by our sources, we create our own aggregates following [*Our World in Data* region definitions](https://ourworldindata.org/world-region-map-definitions).\n - We also include data for regions as defined in the original datasets; for example, `Europe (EI)` corresponds to Europe as defined by the Energy Institute.\n- **We recalculate primary energy in terawatt-hours.** The primary data sources on energy\xe2\x80\x94the Energy Institute Statistical review of world energy, for example\xe2\x80\x94typically report consumption in terms of exajoules. We have recalculated these figures as terawatt-hours using a conversion factor of 277.8.\n - Primary energy for renewable sources is reported using [the \'substitution method\'](https://ourworldindata.org/energy-substitution-method).\n- **We calculate per capita figures.** All of our per capita figures are calculated from our `population` metric, which is included in the complete dataset.\n - We also calculate energy consumption per gdp, and include the corresponding `gdp` metric used in the calculation as part of the dataset.\n- **We remove inconsistent data.** Certain data points have been removed because their original data presented anomalies. They may be included again in further data releases if the anomalies are amended.\n\n## License\n\nAll visualizations, data, and code produced by _Our World in Data_ are completely open access under the [Creative Commons BY license](https://creativecommons.org/licenses/by/4.0/). You have the permission to use, distribute, and reproduce these in any medium, provided the source and authors are credited.\n\nThe data produced by third parties and made available by _Our World in Data_ is subject to the license terms from the original third-party authors. We will always indicate the original source of the data in our database, and you should always check the license of any such third-party data before use.\n\n## Authors\n\nThis data has been collected, aggregated, and documented by Hannah Ritchie, Pablo Rosado, Edouard Mathieu, Max Roser.\n\n*Our World in Data* makes data and research on the world\xe2\x80\x99s largest problems understandable and accessible. 
[Read more about our mission](https://ourworldindata.org/about).\n\n## How to cite this data?\n\nIf you are using this dataset, please cite both [Our World in Data](https://ourworldindata.org/energy#citation) and the underlying data source(s).\n\nPlease follow [the guidelines in our FAQ](https://ourworldindata.org/faqs#citing-work-produced-by-third-parties-and-made-available-by-our-world-in-data) on how to cite our work.\n'",,"2020/09/08, 15:21:25",1142,GPL-3.0,28,234,"2023/09/23, 09:00:07",0,25,35,15,32,0,0.4,0.25742574257425743,,,0,6,false,,false,false,,,https://github.com/owid,https://ourworldindata.org,,,,https://avatars.githubusercontent.com/u/14187135?v=4,,, OpenEI,A knowledge-sharing online community dedicated to connecting people with the latest information and data on energy resources from around the world.,wiki,,custom,,Energy System Data Access,,,,,,,,,,https://openei.org/wiki/Main_Page,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,, Power grid frequency data base,This data set contains precisely time stamped (GPS referenced) frequency data from several power grids around the world in one second resolution and 1 hour excerpts of raw data.,,,custom,,Energy System Data Access,,,,,,,,,,https://osf.io/by5hu/,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,, EnergyDataSimulationChallenge,Allows applicants to demonstrate their ability to analyze and develop software that makes use of big energy production data.,enechange,https://github.com/enechange/EnergyDataSimulationChallenge.git,github,,Energy System Data Access,"2021/06/01, 02:59:15",67,0,4,true,HTML,ENECHANGE,enechange,"HTML,Jupyter Notebook,Ruby,JavaScript,Python,CSS,MATLAB,SCSS,Slim,R,CoffeeScript,Vue,Haml,Dockerfile,Shell,Go,Less,TypeScript,HCL,Makefile",https://enechange.co.jp,"b'# EnergyDataSimulationChallenge\n\nWelcome to EnergyDataSimulationChallenge!\n\nThis project allows applicants to demonstrate their ability to analyze and develop software that makes use of big energy production data. Please complete one (or more) of the several challenges we have prepared. Pull-requests are also welcome.\n\n## Instructions\n\n### Steps\n\n1. Fork this repository\n2. Create a new branch\n(please name the branch `challengeX/YOURNAME` (eg. `challenge1/shirakia`))\n3. Create your development directory inside the `analysis/` or `webapp/` directory\n4. Design, write and commit your program to the above branch\n5. Push the branch\n6. 
Create a Pull Request\n\n### Attention\n- Avoid working on any branch except your own branch\n- Avoid committing files other than those in your own directory\n\n### Reward\n\n* For Full-Time Employment\n * A chance at a final interview with the CTO and engineers.\n* For Paid Internship\n * A paid internship offer (3 month internship programme with competitive salary)\n * Includes FREE accommodation (private room) and own desk at coworking space in London if you wish to work with us here.\n * Optionally, you may work remotely from a location of your choice.\n * After successful 3 month programme, you may be offered an extension, or another position (permanent, longer contract, additional intern, etc..)\n\n## Challenge 1 - Energy Production Data Simulation\n\nWe have prepared energy production data for 500 houses.\nFor each house, there is monthly data from July, 2011 to June, 2013.\nThe data contains temperature and daylight data.\n\nPlease make a model for predicting EnergyProduction using data from July 2011 to May 2013.\nOn that basis, predict EnergyProduction on June 2013 for each house, and calculate MAPE (Mean Absolute Percentage Error).\nYou can use any algorithm, including multiple-variables regression, polynomial regression, Neural network, SVM, etc...\n\nWe will check the following:\n* accuracy of prediction (MAPE)\n* algorithm choice\n* parameter tuning\n* programming skill\n\nPlease make sure to consider all of the above criteria (not just accuracy of prediction) when completing each challenge.\n\n### Input\n\nThe input dataset file inside the `data/` directory has the following format:\n```\n$ head data/dataset_500.csv | column -s, -t\nID Label House Year Month Temperature Daylight EnergyProduction\n0 0 1 2011 7 26.2 178.9 740\n1 1 1 2011 8 25.8 169.7 731\n2 2 1 2011 9 22.8 170.2 694\n3 3 1 2011 10 16.4 169.1 688\n4 4 1 2011 11 11.4 169.1 650\n5 5 1 2011 12 4.2 199.5 763\n6 6 1 2012 1 1.8 203.1 765\n7 7 1 2012 2 2.8 178.2 706\n8 8 1 2012 3 6.7 172.7 788\n```\n\nThe first line of the file gives the format name.\nThe rest of the file describes EnergyProduction data for 500 houses.\nEach dataset consists of 24 lines showing monthly temperature and daylight EnergyProduction data.\n\n`training_dataset_500.csv` and `test_dataset_500.csv` are subsets of `dataset_500.csv`.\n`test_dataset_500.csv` includes only June 2013 data of each house (the rest can be found in `training_dataset_500.csv`).\n\nYou can use any of the given data you like; but please **do not forget** that you can use only data from July 2011 to May 2013 for training.\n\n### Output\n\nOutput is `predicted_energy_production.csv`, `mape.txt` and other files.\nPlease place these files in `challenge1/analysis/YOURNAME/`.\n\n1. **predicted_energy_production.csv**\nMust include House column and EnergyProduction column for each line.\nAny csv file that we can find which columns means House and EnergyProduction is also acceptable.\n2. **mape.txt**\nNeed to include just MAPE value. Minimize it.\n3. **another files**\nShould include files you use, edit or write -- like R source code, batch Python file, excel file, etc..\nThese files will help us understand your thought process.\nYou are not required to commit any files that contain sensitive information.\n\n\n## Challenge 2 - Visualization of Energy Consumptions\n\nThe following task is intended to give us an idea of your data visualisation skills. Please use the tools and programming language with which you are most familiar.\n\n\n### Steps\n1. 
Download the data-set `total-watt.csv`\n2. The data-set consists of two columns: a time stamp and the energy consumption\n3. visualise the data-set\n5. visualise the data-set as values per day\n6. cluster the values per day into 3 groups: low, medium, and high energy consumption\n7. visualise the clusters (How you visualize the data is up to you. Please show us your imagination and creativity!)\n\n### Input\ndataset file is in `data/` directory as follows.\n\n```\n$ head data/total_watt.csv| column -s, -t\n2011-04-18 13:22:00 925.840613752523\n2011-04-18 13:52:00 483.295891812865\n2011-04-18 14:22:00 915.761633660131\n2011-04-18 14:52:00 609.043490935672\n2011-04-18 15:22:00 745.155434458509\n2011-04-18 15:52:00 409.855947368421\n2011-04-18 16:22:00 434.084038321073\n2011-04-18 16:52:00 152.684299188514\n2011-04-18 17:22:00 327.579073188405\n2011-04-18 17:52:00 156.826945856169\n```\n\n### Output\nPlease place output files in `challenge2/analysis/YOURNAME/`.\n\n1. visualization of the data-set as values per 30mins\n2. visualization of the data-set as values per day\n3. visualization of the data-set as clusters\n\n## Challenge 3 - Web Application\n\nPlease create a web application to show house energy production.\n\n1. Insert csv files into SQL database. (MySQL, postgreSQL, etc..)\n2. Load data from DB and show it on the web with a web framework. (Rails preferred)\n3. Show 1 or 2 types of charts of the data. (no more than 2 types)\n4. (Option) Deploy it somewhere. (AWS, Heroku, your own server, etc...)\n\nWe will review basic programming skill, data modelling and what to show. We will **not** review your web design skill.\n\n### Input\n\nInput dataset files in the `challenge3/data/` directory contain data in the following format:\n```\n$ ls data/\ndataset_50.csv house_data.csv\n\n$ head data/house_data.csv | column -s, -t\nID Firstname Lastname City num_of_people has_child\n1 Carolyn Flores London 2 Yes\n2 Jennifer Martinez Cambridge 3 No\n3 Larry Robinson London 4 Yes\n4 Paul Wright Oxford 3 No\n5 Frances Ramirez London 3 Yes\n6 Pamela Lee Oxford 3 Yes\n7 Patricia Taylor London 3 Yes\n8 Denise Lewis Oxford 4 Yes\n9 Kelly Clark Cambridge 4 No\n```\n(Names are by Random Name Generator http://random-name-generator.info/ )\n\n`dataset_50.csv` is almost same to Challenge1\'s Input. It is smaller and its ID starts with 1 rather than 0. Please refer to Challenge1.\n`house_data.csv` is household data related to `dataset_50.csv`.\nThe first line gives the format name. `ID` column values in this file are same to `House` column values in `dataset_50.csv`. `City` column includes \'London\', \'Cambridge\' and \'Oxford\'. `has_child` column has only \'Yes\' or \'No\'.\n\n### Output\n\n1. Fork this repository.\n2. Place all source code in `challenge3/webapp/YOURNAME/`.\n3. Create a pull request.\n4. (Option) Write deployed url in pull request comment.\n\nYou can refer to sample implementation in `challenge3/webapp/sample/`, but please bear in mind that it is a rough implementation and may be broken in some places.\n\n## Challenge 4 - WEB-API Server\n\nPlease create a web api server to calculate electricity charges.\n\n1. see TEPCO\'s explanation of electricity charges.\n - http://www.tepco.co.jp/e-rates/individual/data/chargelist/chargelist04-j.html\n - http://www.tepco.co.jp/e-rates/individual/menu/home/home02-j.html\n - http://www.tepco.co.jp/e-rates/individual/menu/home/home08-j.html\n - http://www.tepco.co.jp/en/customer/guide/ratecalc-e.html\n\n2. 
you have to calculate `Energy Charge` of ""Meter-Rate Lighting B"" and ""Yoru Toku Plan"",\n and write WEB-API server with your favorite web framework ( Ruby on Rails preferred )\n `Energy Charge` grows when the energy consumption ( kWh ) is bigger.\n\n3. deploy it to somewhere ( AWS, heroku, your own server, etc...)\n\nWe will review basic programming skill, API design and performance.\n\n### Input\n\nThe input dataset files in `challenge4/data/` contain data in the following format:\n\n```\n $ ls data/\nsample-consumption.json plans.json\n\n $ cat data/sample-consumption.json\n[\n [ 0.2, 0.3, 0.2, ... ], # 24 values for 1st day, 1 am, 2 am .. 12 am, 1 pm .. 12 pm\n [ 0.2, 0.3, 0.2, ... ], # 24 values for 2nd day\n ...\n [ 0.2, 0.3, 0.2, ... ] # 24 values for 31st day\n]\n```\n\n `sample-consumptions.json` is a JSON array of arrays of float values.\n Each float value is a energy consumption(kWh).\n First value of a day is a consumption from 0 am to 1 am.\n\n```\n $ cat data/plans.json\n{\n ""Meter-Rate Lighting B"": {\n ""Day time"": [\n [ null, 120, 19.43],\n [ 120, 300, 25.91],\n [ 300, null, 29.93]\n ],\n ""Night time"": null,\n ""Night time range"": null\n },\n\n ""Yoru Toku Plan"": {\n ""Day time"": [\n [ null, 90, 24.03],\n [ 90, 230, 32.03],\n [ 230, null, 37.00]\n ],\n ""Night time"": [\n [ null, null, 12.48]\n ],\n ""Night time range"":\n [ true, true, true, true,\n true, false, false, false,\n false, false, false, false,\n false, false, false, false,\n false, false, false, false,\n false, true, true, true ]\n }\n}\n```\n ""Day time"" and ""Night time"" values are array [ from kWh, to kWh, unit price tax included ]\n\n```\n[ null, 120, 19.43 ] :\nmeans the unit price is \xc2\xa519.43 per kilo watt hour upto initial 120 kWh.\n[ 300, null, 29.93 ] :\nmeans the unit price is \xc2\xa529.93 when the energy consumption is larger than 300 kWh.\n```\n\n When ""Night time"" attribute is `null`, the plan has only day time.\n ""Night time range"" is 24 boolean values which represent 24 hours night time and day time.\n\n (Yoru Toku Plan offers discount rate in night time.)\n\n Output of the API is just one float number of `Energy Charge` with tax.\n\n### Output\n\n1. Please set ALL source codes in `challenge4/webapp/YOURNAME/`\n2. Write deployed URL in Pull Request Comment.\n'",,"2013/08/28, 05:08:59",3710,GPL-3.0,0,1666,"2023/07/26, 04:45:25",13,306,307,2,91,12,3.6,0.8878737541528239,,,0,99,false,,false,false,,,https://github.com/enechange,https://enechange.co.jp,"Tokyo, Japan",,,https://avatars.githubusercontent.com/u/12653157?v=4,,, disaggregator,"A set of tools for processing of spatial and temporal disaggregations of demands of electricity, heat and natural gas.",DemandRegioTeam,https://github.com/DemandRegioTeam/disaggregator.git,github,,Energy System Data Access,"2021/11/26, 13:57:49",26,0,7,false,Jupyter Notebook,,DemandRegioTeam,"Jupyter Notebook,Python",,"b'# DemandRegio\n\nThis project aims at setting up both a database and a python toolkit called `disaggregator` for\n- temporal and\n- spatial disagregation\n\nof demands of \n- electricity,\n- heat and\n- natural gas\n\nof the final energy sectors\n- private households,\n- commerce, trade & services (CTS) and\n- industry.\n\n\n## Installation\n\nBefore we really start, please install `conda` through the latest [Anaconda package](https://www.anaconda.com/distribution/) or via [miniconda](https://docs.conda.io/en/latest/miniconda.html). After successfully installing `conda`, open the **Anaconda Powershell Prompt**. 
\nFor experts: You can also open a bash shell (Linux) or command prompt (Windows), but then make sure that your local environment variable `PATH` points to your anaconda installation directory.\n\nNow, in the root folder of the project create an environment to work in that will be called `disaggregator` via\n\n```bash\n$ conda env create -f environment.yml\n```\n\nwhich installs all required packages. Then activate the environment\n\n```bash\n$ conda activate disaggregator\n```\n\n## How to start\n\nOnce the environment is activated, you can start a Jupyter Notebook from there\n\n```bash\n(disaggregator) $ jupyter notebook\n```\n\nAs soon as the Jupyter Notebook opens in your browser, click on the `01_Demo_data-and-config.ipynb` file to start with a demonstration:\n\n![Jupyter_View][img_01]\n\n[img_01]: img/jupyter_notebook.png ""Jupyter Notebook View""\n\n## Results\n\n![Jupyter_View][img_02]\n\n[img_02]: img/spatial_elc_by_household_sizes.png ""Year Electricity Consumption of Private Households""\n\n## How does it work?\n\nFor each of the three sectors \'private households\', \'commerce, trade & services\' and \'industry\' the spatial and temporal disaggregation is accomplished through application of various functions. These functions take input data from a database and return the desired output as shwon in the diagram. There are four Demo-Notebooks to present these functions and demonstrate their execution.\n\n![Jupyter_View][img_03]\n\n[img_03]: img/model_overview.png ""Schematic diagram of modelling approach""\n\n## Acknowledgements\n\nThe development of disaggregator was part of the joint [DemandRegio-Project](https://www.ffe.de/en/topics-and-methods/production-and-market/736-harmonization-and-development-of-methods-for-a-spatial-and-temporal-resolution-of-energy-demands-demandregio) which was carried out by\n\n- Forschungszentrum J\xc3\xbclich GmbH (Simon Burges, Bastian Gillessen, Fabian Gotzens)\n- Forschungsstelle f\xc3\xbcr Energiewirtschaft e.V. (Tobias Schmid)\n- Technical University of Berlin (Stephan Seim, Paul Verwiebe)\n\n## License\n\nCurrent version of software written and maintained by Paul A. Verwiebe (TUB)\n\nOriginal version of software written by Fabian P. Gotzens (FZJ), Paul A. Verwiebe (TUB), Maike Held (TUB), 2019/20.\n\ndisaggregator is released as free software under the [GPLv3](http://www.gnu.org/licenses/gpl-3.0.en.html), see [LICENSE](LICENSE) for further information.\n'",,"2019/10/16, 09:06:28",1470,GPL-3.0,0,267,"2022/06/13, 09:05:56",11,4,8,0,499,3,0.0,0.01869158878504673,,,0,3,false,,false,false,,,https://github.com/DemandRegioTeam,,,,,https://avatars.githubusercontent.com/u/57136345?v=4,,, The FfE Open Data Portal,Offers an overview of free datasets for modelling energy demand and generation.,,,custom,,Energy System Data Access,,,,,,,,,,http://opendata.ffe.de/,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,, DKA Solar Centre,"Online hub for sharing solar-related knowledge and data from the Northern Territory, Australia.",,,custom,,Energy System Data Access,,,,,,,,,,http://dkasolarcentre.com.au/,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,, eiapy,A simple wrapper for the U.S. 
Energy Information Administration API.,systemcatch,https://github.com/systemcatch/eiapy.git,github,"api-wrapper,energy-data,python,eia",Energy System Data Access,"2022/02/18, 10:13:29",22,232,4,false,Python,,,Python,https://pypi.org/project/eiapy/,"b'# eiapy\n[![PyPI](https://img.shields.io/pypi/v/eiapy.svg)](https://pypi.org/project/eiapy/) [![PyPI - License](https://img.shields.io/pypi/l/eiapy.svg)](https://pypi.org/project/eiapy/) [![PyPI - Python Version](https://img.shields.io/pypi/pyversions/eiapy.svg)](https://pypi.org/project/eiapy/) \n\nPython 3 wrapper for the U.S. Energy Information Administration API. \n\n### Quick start\n```bash\npip install eiapy\n```\n\nGet the last 5 measurements of the electricity flow between California and Mexico.\n\n```python3\n>>> from eiapy import Series\n>>> cal_to_mex = Series(\'EBA.CISO-CFE.ID.H\')\n>>> cal_to_mex.last(5)\n{\'request\': {\'command\': \'series\', \'series_id\': \'EBA.CISO-CFE.ID.H\'},\n \'series\': [{\'data\': [[\'20180401T07Z\', -11],\n [\'20180401T06Z\', -16],\n [\'20180401T05Z\', -11],\n [\'20180401T04Z\', -7],\n [\'20180401T03Z\', -5]],\n \'description\': \'Timestamps follow the ISO8601 standard \'\n \'(https://en.wikipedia.org/wiki/ISO_8601). Hourly \'\n \'representations are provided in Universal Time.\',\n \'end\': \'20180401T07Z\',\n \'f\': \'H\',\n \'name\': \'Actual Net Interchange for California Independent System \'\n \'Operator (CISO) to Comision Federal de Electricidad \'\n \'(CFE), Hourly\',\n \'series_id\': \'EBA.CISO-CFE.ID.H\',\n \'start\': \'20150701T00Z\',\n \'units\': \'megawatthours\',\n \'updated\': \'2018-04-02T08:43:16-0400\'}]}\n\n```\n\nFurther examples can be found [in this gist](https://gist.github.com/systemcatch/019cf50302093b9b51838c62b99623df).\n\nTo find more details about the API go to the EIA\'s [Open Data](https://www.eia.gov/opendata/) page. To find interesting data (and identifiers) [browse the data sets](https://www.eia.gov/opendata/qb.php).\n\nFor specific information about the [real-time grid display](https://www.eia.gov/beta/electricity/gridmonitor/dashboard/electric_overview/US48/US48) please see [this guide](https://www.eia.gov/realtime_grid/docs/userguide-knownissues.pdf).\n\nGo [here](https://www.eia.gov/opendata/register.cfm#terms_of_service) to see the\nAPI terms of service and [here](https://www.eia.gov/about/copyrights_reuse.cfm)\nfor an explanation of copyright and reuse of their data.\n\n### Setting up your API key\nAn API key is needed to access the EIA\'s data, you can get one [here](https://www.eia.gov/opendata/register.php). eiapy needs this key to be manually set in the operating system environmental variables.\n\n**Mac & Linux** \nOpen a terminal and enter the following;\n```bash\nexport EIA_KEY=type_your_api_key_here\n```\nTo set it permanently follow the instructions on this [stackexchange question](https://unix.stackexchange.com/questions/117467/how-to-permanently-set-environmental-variables).\n\n**Windows** \nOpen a Command Prompt and enter the following;\n```bat\nsetx EIA_KEY ""type_your_api_key_within_the_quotes""\n```\n\n### Notes on API behaviour\n- When providing invalid time limits for a series data request an empty data list is returned.\n- For data requests num & start cannot be used together but num & end can.\n- When an invalid series id is passed this is the response.\n```python3\n{\'request\': {\'series_id\': \'eba.ciso-cfe.id.\', \'command\': \'series\', \'num\': \'5\'},\n \'data\': {\'error\': \'invalid series_id. 
For key registration, documentation, and\n examples see https://www.eia.gov/developer/\'}}\n```\n- The API expects timestamps in ISO 8601 format (YYYYMMDDTHHZ) with Z meaning UTC, [bad timestamps](https://github.com/systemcatch/eiapy/issues/16) will not raise errors.\n\n### Changelog\n**0.1.6**\n- Changed URLs to https as http is no longer supported by EIA.\n- Added python 3.9 and 3.10 to classifiers, removed 3.5.\n\n**0.1.5**\n- Added Python 3.8 to supported versions.\n- Updated readme with advice about bad timestamps.\n- Disabled broken Relation class.\n- Made handling of no api key more human friendly.\n\n**0.1.4**\n- Fixed broken Search `repr`.\n- Added Python 3.7 to supported versions.\n- Mention real-time grid in readme.\n\n**0.1.3**\n- Simplify construction and use of the Search class.\n- Explain how to set up the API key.\n- Added gist of examples to readme.\n\n**0.1.2**\n- Rename several methods for extra clarity.\n- data -> get_data\n- get -> get_updates\n\n**0.1.1** \n- Started using requests session functionality to improve performance.\n- Fixed a mistake in the MultiSeries class that stopped it working entirely.\n- Added a version attribute to the package.\n- Overhaul of readme.\n'",,"2018/04/02, 14:10:48",2032,MIT,0,30,"2022/02/18, 10:13:29",10,14,19,0,614,2,0.1,0.033333333333333326,"2022/02/18, 10:24:23",v0.1.6,0,2,false,,false,false,"snarfed/electricitymap-contrib,tkatsoulas/electricitymaps-contrib,rajatksud/electricitymaps-contrib,DrRoad/electricitymaps-contrib,spinnaker999/electricitymap-contrib,LenaChornovol/electricitymap-contrib,Tim810306/electricitymap-contrib,NectarioT/electricitymap-contrib,goncaloluis89/electricitymap-contrib,ddOGbb/electricitymap-contrib,cameronjohnston/electricitymap-contrib,Ponchia/electricitymap-contrib,pyshx/electricitymap-contrib,alexisbarreaux/electricitymap-contrib,Raader/electricitymap-contrib,Teniola-theDev/electricitymap-contrib-teniola,meme1255/electricitymap-contrib,Crawnicles/electricitymap-contrib,ammar257ammar/electricitymap-contrib,juanvillasanteg/electricitymap-contrib,Preisschild/electricitymap-contrib,waltonzt/electricitymap-contrib,j-jayes/electricitymap-contrib,CatBia/electricitymap-contrib,shtevie/electricitymap-contrib,xmichele/electricitymap-contrib,MitchellJThomas/electricitymap-contrib,1Plouis/electricitymap-contrib,sebzz/electricitymap-contrib,shuuji3/electricitymap-contrib,xomanuel/electricitymap-contrib,mhybal/electricitymap-contrib,gh56123/electricitymap-contrib,vincentdum/electricitymap-contrib,danielmatsuda/electricitymap-contrib,strassburger/electricitymap-contrib,VictorHaine/electricitymap-contrib,pratapdd/electricitymap-contrib,du-phan/electricitymap-contrib,mzhuang1/electricitymap-contrib,bzmw/electricitymap-contrib,HuBaX/electricitymap-contrib,rafzul/electricitymap-contrib,pmensalt/electricitymap-contrib,mjdhasan/electricitymap-contrib,davincee/electricitymap-contrib,nicksdawe/electricitymap-contrib,EarthlyAlien/electricitymap-contrib,whysosocold/electricitymap-contrib,pierresegonne/electricitymap-contrib,ollawone/electricitymap-contrib,diptyaroop/electricitymap-contrib,menghaniv/electricitymap-contrib,Harry3167/electricitymap-contrib,sokolsaiti/electricitymap-contrib,Henrik575/electricitymap-contrib,aekrylov/electricitymap-contrib,Zhufanpo/electricitymap-contrib,dnzengou/electricitymap-contrib,DavidKarlas/electricitymap-contrib,atjohans/electricitymap-contrib,maxfire2008/electricitymap-contrib,notuntoward/electricitymap-contrib,SDIAlliance/electricitymap-contrib,wmeingott/electricitymap-contrib,TorRydberg/elect
ricitymap-contrib,Pheelix/electricitymap-contrib,eldk/electricitymap-contrib,neilfulwiler/electricitymap-contrib,Eric-Santos/electricitymap-contrib,oblyn/electricitymap-contrib,ERDALTUFANET/electricitymap-contrib,Macquaria/electricitymap-contrib,tumi0629/electricitymap-contrib,florestan92/electricitymap-contrib,aemartinez/electricitymap-contrib,haraldgroven/electricitymap-contrib,Paquito86/electricitymap-contrib,kube-csc/electricitymap-contrib,clawfire/electricitymap-contrib,Nico1320/electricitymap-contrib,agdolla/electricitymap-contrib,dounaisiji/electricitymap-contrib,Fastaxx/electricitymap-contrib,jacekz123/electricitymap-contrib,Ruckdaschel/electricitymap-contrib,pz-max/electricitymap-contrib,GeraldSoellinger/electricitymap-contrib,TheChizler/electricitymap-contrib,morganchristiansson/electricitymap-contrib,intermittentnrg/electricitymap-contrib,lisebohr/electricitymap-contrib,lvm1/electricitymap-contrib,m45555/electricitymap-contrib,bharathpgp/electricitymap-contrib,phanthe1/electricitymap-contrib,eaxsi/electricitymap-contrib,darryl-revok/electricitymap-contrib,aebenw/electricitymap-contrib,r24mille/electricitymap-contrib,milesjag/electricitymap-contrib,MindFreeze/electricitymap-contrib,drc38/electricitymap-contrib,Wexford1/electricitymap-contrib,melissamarcher/electricitymap-contrib,bchapuis/electricitymap-contrib,RobinFrcd/electricitymap-contrib,sackofguts/electricitymap-contrib,ManuGirault/electricitymap-contrib,leighadennis/electricitymap-contrib,ckyvra/electricitymap-contrib,MackyDIARRA/electricitymap-contrib,ayushsubedi/electricitymap-contrib,xneo1/electricitymap-contrib,MelHiQ/electricitymap-contrib,Ricarrr/electricitymap-contrib,Rprojet/electricitymap-contrib,Celestin-Poux/electricitymap-contrib,Suwailem1/electricitymap-contrib,JeromeBobe/electricitymap-contrib,piekar294/electricitymap-contrib,rathbonz/electricitymap-contrib,AlliBusa/electricitymap-contrib,oscarmrom/electricitymap-contrib,Rodeobe/electricitymap-contrib,sdrshnptl/electricitymap-contrib,gregorywalton/electricitymap-contrib,simon-brooke/electricitymap-contrib,kkreine/electricitymap-contrib,CloCkWeRX/electricitymap-contrib,Paulojorge3011/electricitymap-contrib,Tomkourou/electricitymap-contrib,RichChang963/electricitymap-contrib,christianhvejsel/electricitymap-contrib,arturofc/electricitymap-contrib,vignesh-ponraj/electricitymap-contrib,pierreSeroul/electricitymap-contrib,migdard/electricitymap-contrib,iZND/electricitymap-contrib,Tangjas20/electricitymap-contrib,olsson17/electricitymap-contrib,Nahdus/electricitymap-contrib,phiphou/electricitymap-contrib,johntharian/electricitymap-contrib,cieciurm/electricitymap-contrib,bengouma/electricitymap-contrib,Shriever/electricitymap-contrib,mutantmonkey/electricitymap-contrib,davidcole1340/electricitymap-contrib,piotrek124-1/electricitymap-contrib,krdslv/electricitymap-contrib,vamsi963601/electricitymap-contrib,chelseagreen/electricitymap-contrib,dhilbig/electricitymap-contrib,steren/electricitymap-contrib,ashiq4836/electricitymap-contrib,gwg313/electricitymap-contrib,hdatteln/electricitymap-contrib,vburckhardt/electricitymap-contrib,stashayancho/electricitymap-contrib,pjakobsen/electricitymap-contrib,nickvallee/electricitymap-contrib,potaufeuman/electricitymap-contrib,ClrGe/electricitymap-contrib,K-Class/electricitymap-contrib,df6ih/electricitymap-contrib,c-m-a/electricitymap-contrib,victormachadogp/electricitymap-contrib,JudeWells/electricitymap-contrib,glnsagar/electricitymap-contrib,bin0al/electricitymap-contrib,sjiekak/electricitymap-contrib,brandongalbraith/electricity
map-contrib,DominicHerrmann/electricitymap-contrib,Jironah/electricitymap-contrib,MiNi33/electricitymap-contrib,bgkyer/electricitymap-contrib,Evelyn0/electricitymap-contrib,fransixles/electricitymap-contrib,Kizuu/electricitymap-contrib,matrix0123456789/electricitymap-contrib,jakedorne/electricitymap-contrib,padila50/electricitymap-contrib,xuberance137/electricitymap-contrib,fagan2888/electricitymap-contrib,hybridcattt/electricitymap-contrib,Cr7SIU/electricitymap-contrib,leohuisman/electricitymap-contrib,flobz/electricitymap-contrib,MinzChen/electricitymap-contrib,HungerHan/electricitymap-contrib,sidneythekidney/electricitymap-contrib,nedgar/electricitymap-contrib,Prathap671/electricitymap-contrib,chiefymuc/electricitymap-contrib,SunweiWang/electricitymap-contrib,hassaku/electricitymap-contrib,awrgold/electricitymap-contrib,joshjauregi/electricitymap-contrib,autipial/electricitymap-contrib,liuyifei17/electricitymap-contrib,martin-laurent/electricitymap-contrib,byronwilliams/electricitymap-contrib,JeanBaptisteScellier/electricitymap-contrib,marciska/electricitymap-contrib,ssmssam/electricitymap-contrib,Kafkaese/electricitymap-contrib,veqtrus/electricitymap-contrib,larsschellhas/electricitymap-contrib,KabelWlan/electricitymap-contrib,rolanddosa/electricitymap-contrib-fork-rolanddosa,amkozlov/electricitymap-contrib,con-cat/electricitymap-contrib,saisha92/electricitymap-contrib,provokateurin/electricitymap-contrib,willbeaufoy/electricitymap-contrib,mzaharie/electricitymap-contrib,lorrieq/electricitymap,soreva/electricitymap,systemcatch/electricitymap,AlixFachin/electricitymap-contrib,leon-v/electricitymap-contrib,aeorxc/oilanalytics,SeppPenner/electricitymap-contrib,Mikma03/Time_Series_Packt,zstarpak/electricitymap-contrib,jorgermurillo/electricitymap,electricitymaps/electricitymaps-contrib,chzhong25346/Skylantern,IacobEd/MachineLearningProject,mhilmiasyrofi/carbonmap,antifa-ev/electro",,,,,,,,,, Power Station Dictionary,A power station dictionary that enables mapping between various naming conventions and associated plant metadata.,OSUKED,https://github.com/OSUKED/Power-Station-Dictionary.git,github,"energy-data,power-stations",Energy System Data Access,"2023/02/12, 23:23:22",16,0,7,true,Jupyter Notebook,Open Source UK Energy Data,OSUKED,"Jupyter Notebook,Python,Batchfile",https://osuked.github.io/Power-Station-Dictionary/,"b'# Power Station Dictionary\n\n> The *Power Station Dictionary* is a [site](https://osuked.github.io/Power-Station-Dictionary/) that enables mapping between various power plant ids and automatically extracts data relating to those plants from Frictionless Data packages.\n\nN.b. This project is currently in active development.\n\n
\n\n>**Any and all contributions are very welcome. If you are less comfortable editing via GitHub, please contribute via [this Google Sheet](https://docs.google.com/spreadsheets/d/1cn4zJ3Eyn9tqMdPal_QnC4eZUqq6nUCJXtQOyJYXChY/edit?usp=sharing) (GitHub is still preferred if possible).**\n\n
\n
\n\n### Motivation\n\nExisting work into increasing the visibility of energy data has focused on improving the ability of humans to find datasets, which has historically been a key issue within a highly fragmented energy data landscape. Groups such as the Energy Data Taskforce have prompted a new wave of metadata standardisation and data cataloguing initiatives which have gone a long way to solving this issue, opening up new opportunities such as the creation of digital twins of the power grid. However, these new opportunities bring new challenges. To enable a digital twin of the energy system we need to be able to [""describe relationships between assets and datasets""](https://docs.google.com/document/d/1X8PIP4f0K2abKjyQiGJQaxdcflQ36GeATBfhJqFevxA), requiring two core extensions to our existing toolset:\n\n1. Field-level metadata that describes the contents of individual columns in a dataset\n2. ""Mapping"" datasets that are able to express the relationships between other datasets\n\n![Dictionary Diagram](img/dictionary_diagram.png)\n\nThese additions enable us to move from improving the ability of humans to discover datasets to making it easier for machines to automatically find and extract relevant data - a need that will only increase as the number and size of datasets continues to grow. The benefits extend beyond our digital colleagues though - by pivoting data exploration to be about finding objects/assets, which then reveal the datasets (and attributes) they are linked to, we can create a more intuitive search experience. Similar to Google\xe2\x80\x99s move from searching for [""Things not Strings""](https://blog.google/products/search/introducing-knowledge-graph-things-not/), the data dictionary lets us search for ""Assets not Datasets"".\n\nTo illustrate the benefits of such a framework we are building a pilot dictionary focused on improving the discoverability, linkage, and automated extraction of data relating to power stations on the GB system. Power stations were chosen due to the high number of datasets they relate to, the wide range of ids used to describe them, and the current duplication in efforts to link them across industry and academia. We will then demonstrate how the dictionary can be used for analysis with two case studies: one researching the carbon intensity of individual generators by matching power output and carbon emission datasets, the second linking wholesale price and renewable subsidy data to help explain why wind subsidies have fallen below the average market price.\n\n
\n
\n\n### Dictionary Framework\n\n##### Dictionary Schema & Core Dataset\n\nThe dictionary is composed of two files, a [csv containing ids](https://raw.githubusercontent.com/OSUKED/Power-Station-Dictionary/shiro/data/dictionary/ids.csv) that relate to different power stations and a [json containing metadata](https://raw.githubusercontent.com/OSUKED/Power-Station-Dictionary/shiro/data/dictionary/datapackage.json) written as an extension to the [Frictionless Data Tabular schema](https://specs.frictionlessdata.io/table-schema/). ""Frictionless Data (FD) is an open-source toolkit that brings simplicity to the data experience"" through an open-source standard that defines a specification for describing metadata relating to different types of datasets. Once a dataset has been described using the specification it then becomes incredibly easy to load it using different programming languages as well as export it into a wide range of different formats. What makes FD different to most other specifications is that they provide a comprehensive way to describe individual columns within a dataset, including their formats and constraints.\n\nThe majority of the schema is the same as the Tabular Schema published by FD. The core change is the use of `foreignKeys` to link to external datasets that use ids specified in the dictionary, a separate `attributes` entry then describes the columns which should be extracted from the dataset. The `hierarchy` attribute for each column then describes whether the ids in that column have a `same-as` or `part-of` relationship with the asset they\xe2\x80\x99re linked to. A further `url_format` entry then provides a way to convert specific IDs into urls (e.g. with wikidata ids).\n\nThe datasets linked to the dictionary must be described using the FD tabular schema, however, the metadata does not need to be stored adjacent to the raw source and could be generated by a third party rather than the original data provider. Data-providers from within the energy sector already using this format include [Public Utility Data Library](https://catalyst.coop/pudl/) and [Open Power System Data](https://open-power-system-data.org/). As well as being able to link into the dictionary by publishing your datasets using this standard you can make use of a [wider ecosystem of data tools](https://frictionlessdata.io/software/).\n\n
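As a rough sketch of how these two files fit together (assuming a standard `pandas` stack; the exact nesting of the `foreignKeys` entries below is inferred from the standard Frictionless Tabular layout rather than confirmed against this repository), the dictionary can be loaded and its dataset links inspected as follows:\n\n```python\nimport json\nimport urllib.request\n\nimport pandas as pd\n\n# The csv of ids: one row per power station, one column per id scheme\nids_url = \'https://raw.githubusercontent.com/OSUKED/Power-Station-Dictionary/shiro/data/dictionary/ids.csv\'\ndf_ids = pd.read_csv(ids_url)\n\n# The datapackage.json metadata written as an extension of the Frictionless Tabular schema\nmeta_url = \'https://raw.githubusercontent.com/OSUKED/Power-Station-Dictionary/shiro/data/dictionary/datapackage.json\'\nwith urllib.request.urlopen(meta_url) as fp:\n    metadata = json.load(fp)\n\n# Each foreignKeys entry links an id column in the dictionary to an external dataset\n# (nesting assumed to follow the standard Frictionless layout)\nfor resource in metadata.get(\'resources\', []):\n    for fk in resource.get(\'schema\', {}).get(\'foreignKeys\', []):\n        print(fk)\n```\n\n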
\n\n##### Building the Knowledge Graph/Website\n\nOnce the dictionary has been created, a Python library uses it to programmatically identify the assets it contains and to extract data relating to those assets from the datasets linked to the dictionary. The generation steps are as follows:\n\n1. Each row of the dictionary is iterated over, with the associated ids extracted for each asset\n2. The datasets linked to the dictionary which contain an id relating to the current asset are identified\n3. The relevant attributes for each asset which are contained in the linked datasets are then extracted\n4. For each asset, the extracted linked ids, datasets, and attributes are used to populate a markdown template which forms the basis of a webpage within the dictionary site\n\n\n
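In rough pseudo-Python, that loop looks like the sketch below; every helper function here is hypothetical, named only to mirror the four steps above, and none of them are part of the `powerdict` API:\n\n```python\n# Hypothetical sketch of the site-generation loop described above,\n# reusing df_ids and metadata from the previous sketch\nfor _, asset_ids in df_ids.iterrows():                  # step 1: iterate over the assets\n    linked = find_linked_datasets(asset_ids, metadata)  # step 2: hypothetical - datasets sharing an id\n    attrs = extract_attributes(asset_ids, linked)       # step 3: hypothetical - pull the relevant columns\n    page = populate_template(asset_ids, linked, attrs)  # step 4: hypothetical - fill the markdown template\n    write_page(page)                                    # hypothetical - one webpage per asset on the site\n```\n\n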
\n
\n\n### Installation\n\nTo install the `powerdict` library, please run:\n\n```bash\npip install powerdict\n```\n\n
\n\n### Development Set-Up\n\nTo set up a new environment, run the following from the batch_scripts directory:\n\n```bash\nsetup_env\n```\n\nAlternatively, you can run these commands:\n\n```bash\ncall conda env create -f environment.yml\ncall conda activate PowerDict\ncall ipython kernel install --user --name=PowerDict\n```\n\n
\n\n### Publishing to PyPi\n\nTo publish the `powerdict` module to PyPi simply run the following from the batch_scripts directory\n\n```bash\npypi_publish\n```\n\nor follow these commands\n\n```bash\ncall conda activate PowerDict\ncall python setup.py sdist bdist_wheel\ncall twine upload --skip-existing dist/*\n```\n\nWhen prompted you should enter your PyPi username and password\n\nAfter this you will be able to install the latest version of powerdict using `pip install powerdict`\n'",,"2020/12/17, 20:36:14",1042,MIT,9,236,"2023/04/16, 10:45:54",23,3,8,5,192,3,0.0,0.0,"2020/12/24, 01:47:23",v1.3.0,1,2,false,,false,false,,,https://github.com/OSUKED,https://osuked.com/,,,,https://avatars.githubusercontent.com/u/75696139?v=4,,, ElexonDataPortal,Wrapper for the Balancing Mechanism Reporting Service API to balance power flowing on to and off from the electricity Transmission System in Great Britain.,OSUKED,https://github.com/OSUKED/ElexonDataPortal.git,github,,Energy System Data Access,"2023/04/01, 18:55:39",47,0,11,true,Jupyter Notebook,Open Source UK Energy Data,OSUKED,"Jupyter Notebook,Python,JavaScript,Batchfile",https://osuked.github.io/ElexonDataPortal,"b""# Elexon Data Portal\n\n[![DOI](https://zenodo.org/badge/189842391.svg)](https://zenodo.org/badge/latestdoi/189842391) [![Binder](https://notebooks.gesis.org/binder/badge_logo.svg)](https://notebooks.gesis.org/binder/v2/gh/OSUKED/ElexonDataPortal/master?urlpath=lab%2Ftree%2Fnbs%2F08-quick-start.ipynb) [![PyPI version](https://badge.fury.io/py/ElexonDataPortal.svg)](https://badge.fury.io/py/ElexonDataPortal)\n\nThe `ElexonDataPortal` library is a Python Client for retrieving data from the Elexon/BMRS API. The library significantly reduces the complexity of interfacing with the Elexon/BMRS API through the standardisation of parameter names and orchestration of multiple queries when making requests over a date range. To use the `ElexonDataPortal` you will have to register for an Elexon API key which can be done [here](https://www.elexonportal.co.uk/registration/newuser). \n\n
\n
\n\n### Installation\n\nThe library can be installed from PyPI using:\n\n```bash\npip install ElexonDataPortal\n```\n\n
\n
\n\n### Getting Started\n\nWe'll begin by initialising the API `Client`. The key parameter to pass here is the `api_key`; alternatively, it can be set via the environment variable `BMRS_API_KEY`, which will then be loaded automatically.\n\n```python\nfrom ElexonDataPortal import api\n\nclient = api.Client('your_api_key_here')\n```\n\n
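If you prefer the environment variable route, here is a minimal sketch (assuming, per the note above, that the key is loaded automatically when the `api_key` argument is omitted):\n\n```python\nimport os\n\n# Normally you would export BMRS_API_KEY in your shell; it is set here\n# only to keep the sketch self-contained\nos.environ['BMRS_API_KEY'] = 'your_api_key_here'\n\nfrom ElexonDataPortal import api\n\nclient = api.Client()  # assumed to pick the key up from the environment\n```\n\n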
\n\nNow that the client has been initialised we can make a request! \n\nOne of the key abstractions within the `ElexonDataPortal` library is the handling of multiple requests over a date range specified through the `start_date` and `end_date` parameters. Each response will be automatically cleaned and parsed, then concatenated into a single Pandas DataFrame. If a settlement period and date column can be identified in the returned data then a new column will be added with the local datetime for each data-point. N.b. that if passed as a string the start and end datetimes will be assumed to be in the local timezone for the UK\n\n```python\nstart_date = '2020-01-01'\nend_date = '2020-01-01 1:30'\n\ndf_B1610 = client.get_B1610(start_date, end_date)\n\ndf_B1610.head(3)\n```\n\n| | documentType | businessType | processType | timeSeriesID | curveType | settlementDate | powerSystemResourceType | registeredResourceEICCode | marketGenerationUnitEICCode | marketGenerationBMUId | marketGenerationNGCBMUId | bMUnitID | nGCBMUnitID | activeFlag | documentID | documentRevNum | resolution | start | end | settlementPeriod | quantity | local_datetime |\n|---:|:------------------|:---------------|:--------------|:----------------------|:----------------------------|:-----------------|:--------------------------|:----------------------------|:------------------------------|:------------------------|:---------------------------|:------------|:--------------|:-------------|:------------------------|-----------------:|:-------------|:-----------|:-----------|-------------------:|-----------:|:--------------------------|\n| 0 | Actual generation | Production | Realised | ELX-EMFIP-AGOG-TS-212 | Sequential fixed size block | 2020-01-01 | Generation | 48W000CAS-BEU01F | 48W000CAS-BEU01F | M_CAS-BEU01 | CAS-BEU01 | M_CAS-BEU01 | CAS-BEU01 | Y | ELX-EMFIP-AGOG-22495386 | 1 | PT30M | 2020-01-01 | 2020-01-01 | 1 | 18.508 | 2020-01-01 00:00:00+00:00 |\n| 1 | Actual generation | Production | Realised | ELX-EMFIP-AGOG-TS-355 | Sequential fixed size block | 2020-01-01 | Generation | 48W00000STLGW-3A | 48W00000STLGW-3A | T_STLGW-3 | STLGW-3 | T_STLGW-3 | STLGW-3 | Y | ELX-EMFIP-AGOG-22495386 | 1 | PT30M | 2020-01-01 | 2020-01-01 | 1 | 28.218 | 2020-01-01 00:00:00+00:00 |\n| 2 | Actual generation | Production | Realised | ELX-EMFIP-AGOG-TS-278 | Sequential fixed size block | 2020-01-01 | Generation | 48W00000GNFSW-1H | 48W00000GNFSW-1H | T_GNFSW-1 | GNFSW-1 | T_GNFSW-1 | GNFSW-1 | Y | ELX-EMFIP-AGOG-22495386 | 1 | PT30M | 2020-01-01 | 2020-01-01 | 1 | 29.44 | 2020-01-01 00:00:00+00:00 |\n\n
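Requesting any other stream works in the same way. As an illustrative (untested) example using the FUELHH stream from the [Data Stream Descriptions](#data-stream-descriptions) table below, where client methods follow the `get_{stream-name}` naming convention:\n\n```python\n# Half Hourly Outturn Generation by Fuel Type over the same date range;\n# get_FUELHH is inferred from the get_{stream-name} convention\ndf_FUELHH = client.get_FUELHH(start_date, end_date)\n\ndf_FUELHH.head(3)\n```\n\n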
\n\nIf you've previously written your own code for extracting data from the Elexon/BMRS API then you may be wondering where some of the usual parameters you pass have gone. The reduction in the parameters passed is due to 4 core drivers:\n\n* Standardisation of date range parameter names\n* Removal of the need to specify `ServiceType`\n* Automatic passing of `APIKey` after client initialisation\n* Sensible defaults shipped for all remaining parameters\n\nThe full list of data streams that can be requested is given [here](#data-stream-descriptions). If you wish to make requests using the raw methods, these are available through the `ElexonDataportal.dev.raw` module.\n\nFurther information can be found in the [Quick Start guide](https://osuked.github.io/ElexonDataPortal/08-quick-start/).\n\n
\n
\n\n### What's Changed in v2\n\nThe latest release of the library includes a full rewrite of the code-base. We have endeavoured to make the new API as intuitive as possible, but that has required breaking changes from v1; if you wish to continue using the historic library, use `pip install ElexonDataPortal==1.0.4`. N.b. v1 will not be maintained going forward; you are advised to change over to v2.0.0+.\n\nThe key feature changes are:\n\n* Coverage of more BMRS streams\n* Automated default values\n* Cleaner client API\n* A larger range of request types is compatible with the date range orchestrator\n\n
\n
\n\n### Programmatic Library Generation\n\nOne of the core features of the `ElexonDataPortal` library is that it is *self-generating*, by which we mean it can rebuild itself (including any new API request methods) from scratch using only the `endpoints.csv` spreadsheet. As well as generating the Python client library, this process creates a `BMRS_API.yaml` file, which provides an OpenAPI specification of the Elexon/BMRS API. In turn this allows us to automatically generate documentation and run tests on the API itself to ensure that everything is working as expected; during this process we identified and corrected several small errors in the API documentation provided by Elexon.\n\nTo rebuild the library, simply run the following in the root directory:\n\n```bash\npython -m ElexonDataPortal.rebuild\n```\n\n
\n\nN.b. If you wish to develop the library further or use any of the programmatic library generation functionality, please install the development version of the library using:\n\n```bash\npip install ElexonDataPortal[dev]\n```\n\nIf you are not installing into a fresh environment, it is recommended that you install `pyyaml` and `geopandas` using conda to avoid any dependency conflicts. In future we are looking to release `ElexonDataPortal` as a conda package to avoid these issues.\n\n
\n
\n\n### Data Stream Descriptions\n\nThe following table describes the data streams that are currently retreivable through the API. The client method to retrieve data from a given stream follows the naming convention `get_{stream-name}`.\n\n| Stream | Description |\n|:-----------------------|:---------------------------------------------------------------|\n| B0610 | Actual Total Load per Bidding Zone |\n| B0620 | Day-Ahead Total Load Forecast per Bidding Zone |\n| B0630 | Week-Ahead Total Load Forecast per Bidding Zone |\n| B0640 | Month-Ahead Total Load Forecast Per Bidding Zone |\n| B0650 | Year Ahead Total Load Forecast per Bidding Zone |\n| B0710 | Planned Unavailability of Consumption Units |\n| B0720 | Changes In Actual Availability Of Consumption Units |\n| B0810 | Year Ahead Forecast Margin |\n| B0910 | Expansion and Dismantling Projects |\n| B1010 | Planned Unavailability In The Transmission Grid |\n| B1020 | Changes In Actual Availability In The Transmission Grid |\n| B1030 | Changes In Actual Availability of Offshore Grid Infrastructure |\n| B1320 | Congestion Management Measures Countertrading |\n| B1330 | Congestion Management Measures Costs of Congestion Management |\n| B1410 | Installed Generation Capacity Aggregated |\n| B1420 | Installed Generation Capacity per Unit |\n| B1430 | Day-Ahead Aggregated Generation |\n| B1440 | Generation forecasts for Wind and Solar |\n| B1510 | Planned Unavailability of Generation Units |\n| B1520 | Changes In Actual Availability of Generation Units |\n| B1530 | Planned Unavailability of Production Units |\n| B1540 | Changes In Actual Availability of Production Units |\n| B1610 | Actual Generation Output per Generation Unit |\n| B1620 | Actual Aggregated Generation per Type |\n| B1630 | Actual Or Estimated Wind and Solar Power Generation |\n| B1720 | Amount Of Balancing Reserves Under Contract Service |\n| B1730 | Prices Of Procured Balancing Reserves Service |\n| B1740 | Accepted Aggregated Offers |\n| B1750 | Activated Balancing Energy |\n| B1760 | Prices Of Activated Balancing Energy |\n| B1770 | Imbalance Prices |\n| B1780 | Aggregated Imbalance Volumes |\n| B1790 | Financial Expenses and Income For Balancing |\n| B1810 | Cross-Border Balancing Volumes of Exchanged Bids and Offers |\n| B1820 | Cross-Border Balancing Prices |\n| B1830 | Cross-border Balancing Energy Activated |\n| BOD | Bid Offer Level Data |\n| CDN | Credit Default Notice Data |\n| DERSYSDATA | Derived System Data |\n| DETSYSPRICES | Detailed System Prices |\n| DEVINDOD | Daily Energy Volume Data |\n| DISBSAD | Balancing Services Adjustment Action Data |\n| FORDAYDEM | Forecast Day and Day Ahead Demand Data |\n| FREQ | Rolling System Frequency |\n| FUELHH | Half Hourly Outturn Generation by Fuel Type |\n| MELIMBALNGC | Forecast Day and Day Ahead Margin and Imbalance Data |\n| MID | Market Index Data |\n| MessageDetailRetrieval | REMIT Flow - Message List Retrieval |\n| MessageListRetrieval | REMIT Flow - Message List Retrieval |\n| NETBSAD | Balancing Service Adjustment Data |\n| NONBM | Non BM STOR Instructed Volume Data |\n| PHYBMDATA | Physical Data |\n| SYSDEM | System Demand |\n| SYSWARN | System Warnings |\n| TEMP | Temperature Data |\n| WINDFORFUELHH | Wind Generation Forecast and Out-turn Data |""",",https://zenodo.org/badge/latestdoi/189842391","2019/06/02, 12:14:48",1606,MIT,4216,23336,"2023/03/25, 14:47:47",7,6,17,3,214,2,0.16666666666666666,0.000514933058702316,"2022/05/31, 
21:32:53",v2.0.15,0,3,false,,false,false,,,https://github.com/OSUKED,https://osuked.com/,,,,https://avatars.githubusercontent.com/u/75696139?v=4,,, Open Energy Tracker,An open data platform for monitoring and visualizing energy policy targets.,diw-evu/oet,https://gitlab.com/diw-evu/oet/openenergytracker,gitlab,,Energy System Data Access,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,, gridstatus,Provides standardized API to access energy data from the major Independent System Operators in the United States.,kmax12,https://github.com/kmax12/gridstatus.git,github,"co2-emissions,decarbonization,electrical-grid,energy,python",Energy System Data Access,"2023/10/11, 18:02:35",200,2,118,true,Python,,,"Python,Shell,Makefile",https://docs.gridstatus.io,"b'


\n\n`gridstatus` is a Python library that provides a uniform API for accessing electricity supply, demand, and pricing data for the major Independent System Operators (ISOs) in the United States. It currently supports data from CAISO, SPP, ISONE, MISO, Ercot, NYISO, and PJM.\n\n## GridStatus.io and Hosted API\nTo preview some of the data this library provide access to, visit [GridStatus.io](https://www.gridstatus.io/).\n\nIf you are trying to use our hosted API, you might want to check out the gridstatusio library [here](https://github.com/gridstatus/gridstatusio). \n\nTo learn more about our hosted API visit: https://www.gridstatus.io/api.\n\n## Community\n\n- Need Help? Post a [GitHub issue](https://github.com/kmax12/gridstatus/issues)\n- Want to chat? Join our [Slack](https://join.slack.com/t/gridstatus/shared_invite/zt-1jk6vlzt2-Lzz4pdpjkJYVUJkynOiIvQ)\n- Want to stay updated? Follow us on Twitter [@grid_status](https://twitter.com/grid_status)\n- Want to contribute? Read our [Contributing Guide](CONTRIBUTING.md)\n\n## Installation\n\n`gridstatus` supports python 3.8+. Install with pip\n\n```\npython -m pip install gridstatus\n```\n\nUpgrade using the following command\n\n```\npython -m pip install --upgrade gridstatus\n```\n\n## Documentation and Examples\n\nTo learn more, visit the [documentation](https://docs.gridstatus.io/) and view [example notebooks](https://docs.gridstatus.io/en/latest/Examples/index.html).\n\n## Get Help\n\nWe\'d love to answer any usage or data access questions! Please let us know by posting a GitHub issue.\n'",,"2022/07/28, 19:24:20",454,BSD-3-Clause,161,304,"2023/10/11, 18:02:36",58,184,225,178,14,13,3.5,0.14715719063545152,"2023/09/13, 01:21:57",0.23.0,0,10,false,,false,true,"mpgentleman/GolfStats,gridstatus/gridstatusio",,,,,,,,,, sup3r,The Super Resolution for Renewable Resource Data software uses generative adversarial networks to create synthetic high-resolution wind and solar spatiotemporal data from coarse low-resolution inputs.,NREL,https://github.com/NREL/sup3r.git,github,"generative-adversarial-network,machine-learning,renewable-energy,deep-learning,climate-change,climate-data,solar-energy,tensorflow,wind-energy",Energy System Data Access,"2023/10/18, 15:16:05",25,0,17,true,Python,National Renewable Energy Laboratory,NREL,Python,https://nrel.github.io/sup3r/,"b""#################\nWelcome to SUP3R!\n#################\n\n.. image:: https://github.com/NREL/sup3r/workflows/Documentation/badge.svg\n :target: https://nrel.github.io/sup3r/\n\n.. image:: https://github.com/NREL/sup3r/workflows/Pytests/badge.svg\n :target: https://github.com/NREL/sup3r/actions?query=workflow%3A%22Pytests%22\n\n.. image:: https://github.com/NREL/sup3r/workflows/Lint%20Code%20Base/badge.svg\n :target: https://github.com/NREL/sup3r/actions?query=workflow%3A%22Lint+Code+Base%22\n\n.. image:: https://img.shields.io/pypi/pyversions/NREL-sup3r.svg\n :target: https://pypi.org/project/NREL-sup3r/\n\n.. image:: https://badge.fury.io/py/NREL-sup3r.svg\n :target: https://badge.fury.io/py/NREL-sup3r\n\n.. image:: https://codecov.io/gh/nrel/sup3r/branch/main/graph/badge.svg\n :target: https://codecov.io/gh/nrel/sup3r\n\n.. image:: https://zenodo.org/badge/422324608.svg\n :target: https://zenodo.org/badge/latestdoi/422324608\n\n.. inclusion-intro\n\nThe Super Resolution for Renewable Resource Data (sup3r) software uses\ngenerative adversarial networks to create synthetic high-resolution wind and\nsolar spatiotemporal data from coarse low-resolution inputs. 
sup3r,The Super Resolution for Renewable Resource Data software uses generative adversarial networks to create synthetic high-resolution wind and solar spatiotemporal data from coarse low-resolution inputs.,NREL,https://github.com/NREL/sup3r.git,github,"generative-adversarial-network,machine-learning,renewable-energy,deep-learning,climate-change,climate-data,solar-energy,tensorflow,wind-energy",Energy System Data Access,"2023/10/18, 15:16:05",25,0,17,true,Python,National Renewable Energy Laboratory,NREL,Python,https://nrel.github.io/sup3r/,"b""#################\nWelcome to SUP3R!\n#################\n\n.. image:: https://github.com/NREL/sup3r/workflows/Documentation/badge.svg\n :target: https://nrel.github.io/sup3r/\n\n.. image:: https://github.com/NREL/sup3r/workflows/Pytests/badge.svg\n :target: https://github.com/NREL/sup3r/actions?query=workflow%3A%22Pytests%22\n\n.. image:: https://github.com/NREL/sup3r/workflows/Lint%20Code%20Base/badge.svg\n :target: https://github.com/NREL/sup3r/actions?query=workflow%3A%22Lint+Code+Base%22\n\n.. image:: https://img.shields.io/pypi/pyversions/NREL-sup3r.svg\n :target: https://pypi.org/project/NREL-sup3r/\n\n.. image:: https://badge.fury.io/py/NREL-sup3r.svg\n :target: https://badge.fury.io/py/NREL-sup3r\n\n.. image:: https://codecov.io/gh/nrel/sup3r/branch/main/graph/badge.svg\n :target: https://codecov.io/gh/nrel/sup3r\n\n.. image:: https://zenodo.org/badge/422324608.svg\n :target: https://zenodo.org/badge/latestdoi/422324608\n\n.. inclusion-intro\n\nThe Super Resolution for Renewable Resource Data (sup3r) software uses\ngenerative adversarial networks to create synthetic high-resolution wind and\nsolar spatiotemporal data from coarse low-resolution inputs. To get started,\ncheck out the sup3r command line interface (CLI) `here\n`_.\n\nInstalling sup3r\n================\n\nNOTE: The installation instructions below assume that you have Python installed\non your machine and are using `conda `_\nas your package/environment manager.\n\nOption 1: Install from PIP (recommended for analysts):\n------------------------------------------------------\n\n1. Create a new environment: ``conda create --name sup3r python=3.9``\n\n2. Activate environment: ``conda activate sup3r``\n\n3. Install sup3r: ``pip install NREL-sup3r``\n\n4. Run this if you want to train models on GPUs: ``conda install -c anaconda tensorflow-gpu``\n\nOption 2: Clone repo (recommended for developers)\n-------------------------------------------------\n\n1. From your home directory, run ``git clone git@github.com:NREL/sup3r.git``\n\n2. Create ``sup3r`` environment and install package\n 1) Create a conda env: ``conda create -n sup3r``\n 2) Run the command: ``conda activate sup3r``\n 3) ``cd`` into the repo cloned in 1.\n 4) Prior to running ``pip`` below, make sure the branch is correct (install\n from main!)\n 5) Install ``sup3r`` and its dependencies by running:\n ``pip install .`` (or ``pip install -e .`` if running a dev branch\n or working on the source code)\n 6) Run this if you want to train models on GPUs: ``conda install -c anaconda tensorflow-gpu``\n On Eagle HPC, you will also need to run ``pip install protobuf==3.20.*`` and ``pip install chardet``\n 7) *Optional*: Set up the pre-commit hooks with ``pip install pre-commit`` and ``pre-commit install``\n\nRecommended Citation\n====================\n\nUpdate with current version and DOI:\n\nBrandon Benton, Grant Buster, Andrew Glaws, Ryan King. Super Resolution for Renewable Resource Data (sup3r). https://github.com/NREL/sup3r (version v0.0.3), 2022. DOI: 10.5281/zenodo.6808547\n\nAcknowledgments\n===============\n\nThis work was authored by the National Renewable Energy Laboratory, operated by Alliance for Sustainable Energy, LLC, for the U.S. Department of Energy (DOE) under Contract No. DE-AC36-08GO28308. Funding provided by the DOE Grid Deployment Office (GDO), the DOE Advanced Scientific Computing Research (ASCR) program, the DOE Solar Energy Technologies Office (SETO), the DOE Wind Energy Technologies Office (WETO), the United States Agency for International Development (USAID), and the Laboratory Directed Research and Development (LDRD) program at the National Renewable Energy Laboratory. The research was performed using computational resources sponsored by the Department of Energy's Office of Energy Efficiency and Renewable Energy and located at the National Renewable Energy Laboratory. The views expressed in the article do not necessarily represent the views of the DOE or the U.S. Government. The U.S. Government retains and the publisher, by accepting the article for publication, acknowledges that the U.S. Government retains a nonexclusive, paid-up, irrevocable, worldwide license to publish or reproduce the published form of this work, or allow others to do so, for U.S. 
Government purposes.\n""",",https://zenodo.org/badge/latestdoi/422324608\n\n","2021/10/28, 19:05:05",727,BSD-3-Clause,600,1859,"2023/10/18, 15:16:09",0,151,171,66,7,0,4.5,0.3837965700768776,"2023/10/13, 16:44:58",v0.1.1,0,4,false,,false,false,,,https://github.com/NREL,http://www.nrel.gov,"Golden, CO",,,https://avatars.githubusercontent.com/u/1906800?v=4,,, EnergyData.Info,An open data platform from the World Bank Group providing access to datasets and data analytics that are relevant to the energy sector.,,,custom,,Energy System Data Access,,,,,,,,,,https://energydata.info/,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,, scout,"A tool for estimating the future energy use, carbon emissions, and capital and operating cost impacts of energy efficiency and demand flexibility technologies in the U.S. residential and commercial building sectors.",trynthink,https://github.com/trynthink/scout.git,github,"building-energy,energy-data,energy-consumption,energy-efficiency,demand-side-management,carbon-emissions",Buildings and Heating,"2023/10/23, 19:27:23",49,0,15,true,Python,,,"Python,JavaScript",https://scout.energy.gov,"b'scout [![Scout test status](https://github.com/trynthink/scout/actions/workflows/tests.yml/badge.svg)](https://github.com/trynthink/scout/actions/workflows/tests.yml)\n======\n\n*The contents of this repository are all in-progress and should not be expected to be free of errors or to perform any specific functions. Use only with care and caution.*\n\nScout is a software program that estimates the impacts of various energy conservation measures (ECMs) in the U.S. residential and commercial building sectors. Scout evaluates the energy savings, avoided CO2 emissions, operating cost reductions, and cost-effectiveness (using several metrics) of each ECM under multiple technology adoption scenarios. These results are obtained for the entire U.S., and also broken out by climate zone, building class (i.e., new/existing, residential/commercial), and end use.\n\n### Getting Started\n\nScout is currently a command line-based tool. Follow the [Quick Start Guide](http://scout-bto.readthedocs.io/en/latest/quick_start_guide.html#quick-start-guide) to start using Scout on your computer.\n\nScout is free and open-source software and can be used by anyone, subject to the [license terms](https://github.com/trynthink/scout/blob/master/LICENSE.md).\n\n### Documentation\n\nDocumentation for Scout is [available online](http://scout-bto.readthedocs.io/en/latest/). The documentation includes instructions for setting up a computer to run Scout, tutorials on how to use the components of Scout, a primer on the modeling approach, and reference materials for ECM definitions.\n\n### Future Updates\n\nYou can track on-going development of Scout in this repository. 
If you find any errors in the model or opportunities for improvement, contribute to the issue tracker by [commenting on an existing issue](https://github.com/trynthink/scout/issues) or [submitting a new one](https://github.com/trynthink/scout/issues/new).\n\n## Scout Web App\n\nThe [Scout web app](https://scout.energy.gov) allows users to [review ECM definitions](https://scout.energy.gov/ecms.html) from the default portfolio in a convenient tabular format; [visualize ECM impacts](https://scout.energy.gov/energy.html) on energy, CO2 emissions, and operating costs for the default ECM portfolio; and [query the baseline energy database](https://scout.energy.gov/baseline-energy-calculator.html).\n\n### Baseline Energy Calculator\n\nThe [Baseline Energy Calculator](https://scout.energy.gov/baseline-energy-calculator.html) is part of the Scout web app. It allows users to explore the Scout baseline data. Based on user selections, the calculator yields total baseline U.S. energy use and CO2 emissions for one or several energy use segments, which can help users evaluate the national impact potential for an ECM of interest.'",,"2014/11/04, 22:04:43",3276,CUSTOM,48,827,"2023/10/23, 19:27:23",55,97,298,76,2,3,0.7,0.35822784810126584,"2023/10/10, 23:34:44",v0.9,0,12,false,,false,false,,,,,,,,,,, BOPTEST,The Building Optimization Testing (BOPTEST) Framework enables the assessment and benchmarking of control algorithms for building energy management.,ibpsa,https://github.com/ibpsa/project1-boptest.git,github,,Buildings and Heating,"2023/10/05, 13:03:52",76,0,31,true,Modelica,IBPSA,ibpsa,"Modelica,Python,HTML,Motoko,Makefile,Dockerfile",,"b'# IBPSA Project 1 - BOPTEST\n\n[![Build Status](https://travis-ci.com/ibpsa/project1-boptest.svg?branch=master)](https://travis-ci.com/ibpsa/project1-boptest)\n\nBuilding Optimization Performance Tests\n\nVisit the [BOPTEST Home Page](https://ibpsa.github.io/project1-boptest/) for more information about the project, software, and documentation.\n\nThis repository contains code for the Building Optimization Performance Test framework (BOPTEST)\nthat is being developed as part of the [IBPSA Project 1](https://ibpsa.github.io/project1/).\n\n\n## Structure\n- ``/testcases`` contains test cases, including docs, models, and configuration settings.\n- ``/examples`` contains code for interacting with a test case and running example tests with simple controllers. Those controllers are implemented in Python (Version 2.7 and 3.9), Julia (Version 1.0.3), and JavaScript (Version ECMAScript 2018).\n- ``/parsing`` contains code for a script that parses a Modelica model using signal exchange blocks and outputs a wrapper FMU and KPI json.\n- ``/testing`` contains code for unit and functional testing of this software. See the README there for more information about running these tests.\n- ``/data`` contains code for generating and managing data associated with test cases. 
This includes boundary conditions, such as weather, schedules, and energy prices, as well as a map of test case FMU outputs needed to calculate KPIs.\n- ``/forecast`` contains code for returning boundary condition forecasts, such as weather, schedules, and energy prices.\n- ``/kpis`` contains code for calculating key performance indicators.\n- ``/docs`` contains design documentation and delivered workshop content.\n- ``/bacnet`` contains code for a BACnet interface.\n\n## Quick-Start to Deploy a Test Case\n1) Download this repository.\n2) Install [Docker](https://docs.docker.com/get-docker/) and [Docker Compose](https://docs.docker.com/compose/install/).\n3) To build and deploy a test case, use the following commands within the root directory of the extracted software:\n\n * Linux or macOS: ``$ TESTCASE=<testcase> docker-compose up``\n * Windows PowerShell: ``> ($env:TESTCASE=""<testcase>"") -and (docker-compose up)``\n * A couple notes:\n * Replace ``<testcase>`` with the name of the test case you wish to deploy. Test case names can be found in the [""testcases"" directory](https://github.com/ibpsa/project1-boptest/tree/master/testcases) or on the [""Test Cases"" web page](https://ibpsa.github.io/project1-boptest/testcases/index.html).\n * The first time this command is run, the image ``boptest_base`` will be built. This takes about a minute. Subsequent usage will use the already-built image and deploy much faster.\n * If you update your BOPTEST repository, use the command ``docker rmi boptest_base`` to remove the image so it can be re-built with the updated repository upon next deployment.\n * ``TESTCASE`` is simply an environment variable. Consistent with use of docker-compose, you may also edit the value of this variable in the ``.env`` file and then use ``docker-compose up``.\n\n4) In a separate process, use the test case API defined below to interact with the test case using your test controller. Alternatively, view and run an example test controller as described below.\n5) Shut down the test case with the command ``docker-compose down``, executed in the root directory of this repository.\n\n## Run an example test controller:\n\n* For Python-based example controllers:\n * Optionally, add the directory path to the root of this repository to the ``PYTHONPATH`` environment variable. Use ``export PYTHONPATH=$(pwd):$PYTHONPATH``. Note: The Python example updates the ``PYTHONPATH`` just in time.\n * Build and deploy ``testcase1``. Then, in a separate terminal, use ``$ cd examples/python/ && python testcase1.py`` to test a simple proportional feedback controller on this test case over a two-day period.\n * Build and deploy ``testcase1``. Then, in a separate terminal, use ``$ cd examples/python/ && python testcase1_scenario.py`` to test a simple proportional feedback controller on this test case over a test period defined using the ``/scenario`` API.\n * Build and deploy ``testcase2``. Then, in a separate terminal, use ``$ cd examples/python/ && python testcase2.py`` to test a simple supervisory controller on this test case over a two-day period.\n\n* For Julia-based example controllers:\n * Build and deploy ``testcase1``. Then, in a separate terminal, use ``$ cd examples/julia && make build Script=testcase1 && make run Script=testcase1`` to test a simple proportional feedback controller on this test case over a two-day period. Note that the Julia-based controller is run in a separate Docker container.\n * Build and deploy ``testcase2``. 
Then, in a separate terminal, use ``$ cd examples/julia && make build Script=testcase2 && make run Script=testcase2`` to test a simple supervisory controller on this test case over a two-day period. Note that the Julia-based controller is run in a separate Docker container.\n * Once either test is done, use ``$ make remove-image Script=testcase1`` or ``$ make remove-image Script=testcase2`` to remove containers, networks, volumes, and images associated with these Julia-based examples.\n\n* For JavaScript-based example controllers:\n * In a separate terminal, use ``$ cd examples/javascript && make build Script=testcase1 && make run Script=testcase1`` to test a simple proportional feedback controller on testcase1 over a two-day period.\n * In a separate terminal, use ``$ cd examples/javascript && make build Script=testcase2 && make run Script=testcase2`` to test a simple supervisory controller on testcase2 over a two-day period.\n * Once the test is done, use ``$ make remove-image Script=testcase1`` or ``$ make remove-image Script=testcase2`` to remove containers, networks, volumes, and images, and use ``$ cd examples/javascript && rm geckodriver`` to remove the geckodriver file.\n * Note that those two controllers can also be executed by web browsers, such as Chrome or Firefox.\n\n## Test Case RESTful API\n- To interact with a deployed test case, use the API defined in the table below by sending RESTful requests to: ``http://127.0.0.1:5000/``\n- The API will return a JSON in the form ``{""status"": <code>, ""message"": <message>, ""payload"": <data>}``. Status codes in ``""status""`` are integers: ``200`` for success with or without warnings, ``400`` for a bad input error, or ``500`` for an internal error. Data returned in ``""payload""`` is the data of interest relevant to the specific API request, while the string in ``""message""`` will report any warnings or error messages to help debug encountered problems.\n\nExample RESTful interaction:\n\n- Receive a list of available measurement names and their metadata: ``$ curl http://127.0.0.1:5000/measurements``\n- Receive a forecast of boundary condition data: ``$ curl http://127.0.0.1:5000/forecast``\n- Advance simulation of test case 2 with new heating and cooling temperature setpoints: ``$ curl http://127.0.0.1:5000/advance -d \'{""oveTSetRooHea_u"":293.15,""oveTSetRooHea_activate"":1, ""oveTSetRooCoo_activate"":1,""oveTSetRooCoo_u"":298.15}\' -H ""Content-Type: application/json""``. Send an empty JSON to advance the simulation using the setpoints embedded in the model.\n
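The same interaction can also be scripted from Python. Below is a minimal sketch using the `requests` package against a locally deployed test case; the endpoint names come from the API table that follows, while the exact payload encoding is an assumption to verify against the BOPTEST documentation.

```python
import requests

BASE = "http://127.0.0.1:5000"

# Initialize the test case at time 0 with a one-day warmup period
# (PUT /initialize with start_time and warmup_period, per the table below).
res = requests.put(f"{BASE}/initialize",
                   json={"start_time": 0, "warmup_period": 24 * 3600})
print(res.json()["message"])

# Set the communication step to one hour, then advance one step with
# new heating and cooling setpoints (values in Kelvin).
requests.put(f"{BASE}/step", json={"step": 3600})
res = requests.post(f"{BASE}/advance",
                    json={"oveTSetRooHea_activate": 1, "oveTSetRooHea_u": 293.15,
                          "oveTSetRooCoo_activate": 1, "oveTSetRooCoo_u": 298.15})
measurements = res.json()["payload"]
```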
\n| Interaction | Request |\n|-----------------------------------------------------------------------|-----------------------------------------------------------|\n| Advance simulation with control input and receive measurements. | POST ``advance`` with optional json data ""{<name>:<value>}"" |\n| Initialize simulation to a start time using a warmup period in seconds. Also resets point data history and KPI calculations. | PUT ``initialize`` with required arguments ``start_time=``, ``warmup_period=``|\n| Receive communication step in seconds. | GET ``step`` |\n| Set communication step in seconds. | PUT ``step`` with required argument ``step=`` |\n| Receive sensor signal point names (y) and metadata. | GET ``measurements`` |\n| Receive control signal point names (u) and metadata. | GET ``inputs`` |\n| Receive test result data for the given point names between the start and final time in seconds. | PUT ``results`` with required arguments ``point_names=``, ``start_time=``, ``final_time=``|\n| Receive test KPIs. | GET ``kpi`` |\n| Receive test case name. | GET ``name`` |\n| Receive boundary condition forecast from current communication step for the given point names for the horizon and at the interval in seconds. | PUT ``forecast`` with required arguments ``point_names=``, ``horizon=``, ``interval=``|\n| Receive boundary condition forecast available point names and metadata. | GET ``forecast_points`` |\n| Receive current test scenario. | GET ``scenario`` |\n| Set test scenario. Setting the argument ``time_period`` performs an initialization with predefined start time and warmup period and will only simulate for predefined duration. | PUT ``scenario`` with optional arguments ``electricity_price=``, ``time_period=``. See README in [/testcases](https://github.com/ibpsa/project1-boptest/tree/master/testcases) for options and test case documentation for details.|\n| Receive BOPTEST version. | GET ``version`` |\n| Submit KPIs, other test information, and optional string tags (up to 10) to online dashboard. Requires a formal test scenario to be completed, initialized using the PUT ``scenario`` API. | POST ``submit`` with required argument ``api_key=`` and optional arguments ``tag#=`` where # is an integer between 1 and 10. The API key can be obtained from the user account registered with the online dashboard.|\n\n## Development\nCommunity development is welcome through reporting [issues](https://github.com/ibpsa/project1-boptest/issues) and/or making pull requests. If making a pull request,\nmake sure an issue is opened first, name the development branch according to the convention ``issue_``, and cite in the pull request which issue is being addressed.\n\nThis repository uses pre-commit to ensure that the files meet standard formatting conventions (such as line spacing, layout, etc.).\nPresently only a handful of checks are enabled; these will be expanded in the near future. To run pre-commit, first install\npre-commit into your Python environment using `pip install pre-commit`. Pre-commit can either be run manually by calling\n`pre-commit run --all-files` from within the BOPTEST checkout directory, or you can install pre-commit to be run automatically\nas a hook on all commits by calling `pre-commit install` in the root directory of the BOPTEST GitHub checkout.\n\n## Additional Software\n\n### Deployment as a Web-Service\nBOPTEST is deployed as a web-service using [BOPTEST-Service](https://github.com/NREL/boptest-service).\nSee the related [section in the user guide](https://ibpsa.github.io/project1-boptest/docs-userguide/getting_started.html#public-web-service) for getting started.\n\n### OpenAI-Gym Environment\nAn OpenAI-Gym environment for BOPTEST is implemented in [ibpsa/project1-boptest-gym](https://github.com/ibpsa/project1-boptest-gym).\nSee the documentation there for getting started.\n\n### BACnet Interface\nA BACnet interface for BOPTEST is implemented in the ``/bacnet`` directory of this repository. See the ``/bacnet/README.md`` there for getting started.\n\n### Results Dashboard\nA proposed BOPTEST home page and dashboard for creating accounts and sharing results is published here: https://xd.adobe.com/view/0e0c63d4-3916-40a9-5e5c-cc03f853f40a-783d/.\n\n## Use Cases and Development Requirements\nSee the [wiki](https://github.com/ibpsa/project1-boptest/wiki) for use cases and development requirements.\n\n## Publications\n\n### To cite, please use:\nD. Blum, J. Arroyo, S. Huang, J. 
Drgona, F. Jorissen, H.T. Walnum, Y. Chen, K. Benne, D. Vrabie, M. Wetter, and L. Helsen. (2021). [""Building optimization testing framework (BOPTEST) for simulation-based benchmarking of control strategies in buildings.""](https://doi.org/10.1080/19401493.2021.1986574) *Journal of Building Performance Simulation*, 14(5), 586-610.\n\n### Additional publications:\nSee the [Publications](https://ibpsa.github.io/project1-boptest/publications/index.html) page.\n'",",https://doi.org/10.1080/19401493.2021.1986574","2018/05/11, 18:18:07",1993,CUSTOM,222,2492,"2023/10/05, 13:03:57",62,283,520,76,20,8,0.1,0.3541121005762179,"2023/10/04, 19:02:13",v0.5.0,0,17,false,,false,false,,,https://github.com/ibpsa,http://www.ibpsa.org,Worldwide,,,https://avatars.githubusercontent.com/u/16223588?v=4,,, BOPTEST-Gym,The OpenAI-Gym interface of the BOPTEST framework facilitates the assessment and benchmarking of RL algorithms for building energy management.,ibpsa,https://github.com/ibpsa/project1-boptest-gym.git,github,,Buildings and Heating,"2023/08/11, 15:05:44",25,0,14,true,Python,IBPSA,ibpsa,"Python,Makefile,Dockerfile",,"b""# BOPTEST-Gym\r\n\r\nBOPTEST-Gym is the [OpenAI-Gym](https://gym.openai.com/) environment for the [BOPTEST](https://github.com/ibpsa/project1-boptest) framework. This repository adapts the BOPTEST API to the OpenAI-Gym convention in order to facilitate the implementation, assessment and benchmarking of reinforcement learning (RL) algorithms for their application in building energy management. RL algorithms from the [Stable-Baselines 3](https://github.com/DLR-RM/stable-baselines3) repository are used to exemplify and test this framework. \r\n\r\nThe environment is described in [this paper](https://www.researchgate.net/publication/354386346_An_OpenAI-Gym_environment_for_the_Building_Optimization_Testing_BOPTEST_framework). \r\n\r\n## Structure\r\n- `boptestGymEnv.py` contains the core functionality of this Gym environment.\r\n- `environment.yml` contains the dependencies required to run this software. \r\n- `/examples` contains prototype code for the interaction of RL algorithms with an emulator building model from BOPTEST. \r\n- `/testing` contains code for unit testing of this software. \r\n\r\n## Quick-Start (using BOPTEST-Service)\r\nBOPTEST-Service allows you to directly access BOPTEST test cases in the cloud, without the need to run them locally. Interacting with BOPTEST-Service requires less configuration effort but is considerably slower because of the communication overhead between the agent and the test case running in the cloud. Use this approach when you want to quickly check out the functionality of this repository. \r\n\r\n1) Create a conda environment from the `environment.yml` file provided (instructions [here](https://docs.conda.io/projects/conda/en/latest/user-guide/tasks/manage-environments.html#creating-an-environment-from-an-environment-yml-file)). \r\n2) Check out the `boptest-gym-service` branch and run the example below that uses the [Bestest hydronic case with a heat-pump](https://github.com/ibpsa/project1-boptest/tree/master/testcases/bestest_hydronic_heat_pump) and the [DQN algorithm](https://stable-baselines3.readthedocs.io/en/master/modules/dqn.html) from Stable-Baselines: \r\n\r\n```python\r\nfrom boptestGymEnv import BoptestGymEnv, NormalizedObservationWrapper, DiscretizedActionWrapper\r\nfrom stable_baselines3 import DQN\r\n\r\n# url for the BOPTEST service. 
\r\nurl = 'https://api.boptest.net' \r\n\r\n# Decide the state-action space of your test case\r\nenv = BoptestGymEnv(\r\n url = url,\r\n testcase = 'bestest_hydronic_heat_pump',\r\n actions = ['oveHeaPumY_u'],\r\n observations = {'time':(0,604800),\r\n 'reaTZon_y':(280.,310.),\r\n 'TDryBul':(265,303),\r\n 'HDirNor':(0,862),\r\n 'InternalGainsRad[1]':(0,219),\r\n 'PriceElectricPowerHighlyDynamic':(-0.4,0.4),\r\n 'LowerSetp[1]':(280.,310.),\r\n 'UpperSetp[1]':(280.,310.)}, \r\n predictive_period = 24*3600, \r\n regressive_period = 6*3600, \r\n random_start_time = True,\r\n max_episode_length = 24*3600,\r\n warmup_period = 24*3600,\r\n step_period = 3600)\r\n\r\n# Normalize observations and discretize action space\r\nenv = NormalizedObservationWrapper(env)\r\nenv = DiscretizedActionWrapper(env,n_bins_act=10)\r\n\r\n# Instantiate an RL agent\r\nmodel = DQN('MlpPolicy', env, verbose=1, gamma=0.99,\r\n learning_rate=5e-4, batch_size=24, \r\n buffer_size=365*24, learning_starts=24, train_freq=1)\r\n\r\n# Main training loop\r\nmodel.learn(total_timesteps=10)\r\n\r\n# Loop for one episode of experience (one day)\r\ndone = False\r\nobs, _ = env.reset()\r\nwhile not done:\r\n action, _ = model.predict(obs, deterministic=True) \r\n obs,reward,terminated,truncated,info = env.step(action)\r\n done = (terminated or truncated)\r\n\r\n# Obtain KPIs for evaluation\r\nenv.get_kpis()\r\n\r\n```\r\n\r\n## Quick-Start (running BOPTEST locally)\r\nRunning BOPTEST locally is substantially faster.\r\n\r\n1) Create a conda environment from the `environment.yml` file provided (instructions [here](https://docs.conda.io/projects/conda/en/latest/user-guide/tasks/manage-environments.html#creating-an-environment-from-an-environment-yml-file)). \r\n2) Run a BOPTEST case with the building emulator model to be controlled (instructions [here](https://github.com/ibpsa/project1-boptest/blob/master/README.md)). \r\n3) Check out the `master` branch of this repository and run the example above, replacing the url with `url = 'http://127.0.0.1:5000'` and omitting the `testcase` argument to the `BoptestGymEnv` class. \r\n\r\n\r\n## Citing the project\r\n\r\nPlease use the following reference if you use this repository for your research.\r\n\r\n```\r\n@inproceedings{boptestgym2021,\r\n\tauthor = {Javier Arroyo and Carlo Manna and Fred Spiessens and Lieve Helsen},\r\n\ttitle = {{An OpenAI-Gym environment for the Building Optimization Testing (BOPTEST) framework}},\r\n\tyear = {2021},\r\n\tmonth = {September},\r\n\tbooktitle = {Proceedings of the 17th IBPSA Conference},\r\n\taddress = {Bruges, Belgium},\r\n}\r\n\r\n```\r\n\r\n\r\n\r\n""",,"2020/11/05, 16:15:12",1084,CUSTOM,134,461,"2023/10/23, 12:41:28",17,58,117,15,2,2,0.0,0.0024875621890547706,"2023/07/17, 13:44:53",v0.4.0,0,2,false,,false,false,,,https://github.com/ibpsa,http://www.ibpsa.org,Worldwide,,,https://avatars.githubusercontent.com/u/16223588?v=4,,, hpxml,Home Performance XML is a data transfer standard for the home performance industry.,hpxmlwg,https://github.com/hpxmlwg/hpxml.git,github,,Buildings and Heating,"2023/10/05, 20:08:30",33,0,4,true,Python,HPXML Working Group,hpxmlwg,"Python,Makefile,Batchfile",https://www.hpxmlonline.com,"b'HPXML\n=====\n\nHome Performance XML (HPXML) is a data transfer standard for the home performance industry. This repository is where the development of the schemas happens. 
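Since HPXML is an XML schema, day-to-day use typically means validating documents against it. A minimal sketch using `lxml` follows; the file names are hypothetical, with the `.xsd` standing in for a schema file from a release of this repository and `home.xml` for any HPXML document.

```python
from lxml import etree

# Hypothetical paths: point these at a released HPXML schema file
# and at the document you want to check.
schema = etree.XMLSchema(etree.parse("HPXML.xsd"))
doc = etree.parse("home.xml")

if schema.validate(doc):
    print("valid HPXML")
else:
    # error_log lists each schema violation with its location.
    for error in schema.error_log:
        print(error.line, error.message)
```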
\n\n* `Official HPXML Website `_ - includes press releases and general information about the project.\n* `HPXML Guide `_ - Draft documentation including program administrator and software implementer guides. Includes examples and explanation of how to use HPXML.\n* `HPXML Toolbox `_ - Validator and Data Dictionary (schema explorer).\n\n'",,"2014/10/14, 14:53:49",3298,CUSTOM,289,1511,"2023/10/05, 20:09:04",38,231,358,61,20,9,0.6,0.4757981462409887,"2023/10/17, 19:07:12",v4.0-rc2,0,12,false,,false,false,,,https://github.com/hpxmlwg,http://hpxmlonline.com,,,,https://avatars.githubusercontent.com/u/9216350?v=4,,, HPXML to Home Energy Score Translator,This translator script takes an HPXML file or directory of files as an input and generates HEScore inputs from it.,NREL,https://github.com/NREL/hescore-hpxml.git,github,,Buildings and Heating,"2023/06/20, 16:58:58",18,0,1,true,Python,National Renewable Energy Laboratory,NREL,Python,,"b'HPXML to Home Energy Score Translator\n=====================================\n\n[![CircleCI](https://circleci.com/gh/NREL/hescore-hpxml.svg?style=svg)](https://circleci.com/gh/NREL/hescore-hpxml)\n\nThis translator script takes an HPXML file or directory of files as an\ninput and generates HEScore inputs from it. The HEScore inputs are\nexported as json.\n\nDetails of the translation assumptions as well as instructions for use can\nbe found in [the\ndocumentation](http://hescore-hpxml.readthedocs.org/en/latest/).\n\nInstallation\n------------\n\nUse a\n[virtualenv](http://docs.python-guide.org/en/latest/dev/virtualenvs/).\n(Good idea, but not strictly required.)\n\nInstall using pip:\n\n pip install hescore-hpxml\n\nTo get the latest and greatest, clone this repository, cd into the\ndirectory and install as follows:\n\n pip install -e .\n\nHow to use\n----------\n\nUse the command line script:\n\n hpxml2hescore examples/house1.xml\n\nTo get some guidance on how to use the script:\n\n hpxml2hescore -h\n'",,"2014/09/18, 16:59:56",3324,BSD-2-Clause,173,984,"2023/05/31, 18:48:26",15,92,211,11,147,1,4.2,0.5203955500618047,"2023/05/18, 16:48:04",hescore-hpxml-2023.05.0,0,5,false,,false,false,,,https://github.com/NREL,http://www.nrel.gov,"Golden, CO",,,https://avatars.githubusercontent.com/u/1906800?v=4,,, LoadProfileGenerator,A program for generating load curves for residential consumers. Agent-based and extremely detailed.,FZJ-IEK3-VSA,https://github.com/FZJ-IEK3-VSA/LoadProfileGenerator.git,github,,Buildings and Heating,"2022/12/14, 08:01:08",28,0,14,true,C#,FZJ-IEK3,FZJ-IEK3-VSA,"C#,TeX,Smalltalk,Batchfile,Dockerfile",,"b'\n\n# LoadProfileGenerator\n\nThis repository contains the full source code for the LoadProfileGenerator. \n\nBinaries are available at https://www.loadprofilegenerator.de\n\nThe manual is available [here](https://nbn-resolving.org/urn:nbn:de:bsz:ch1-qucosa-209036), in the second part of the author\'s PhD thesis.\n\n## Contributions\n\nContributions are highly welcome. Feel free to send me pull requests.\n\n## Plans\n\n- Improve electromobility\n- Speed improvements\n- International profiles\n\n## License\n\nMIT License\n\nCopyright (c) 2010-2022 Noah Pflugradt (FZJ IEK-3), Peter Stenzel (FZJ IEK-3), Martin Robinius (FZJ IEK-3), Detlef Stolten (FZJ IEK-3)\n\nYou should have received a copy of the MIT License along with this program. \nIf not, see \n\n## Citation\n\nIf you want to use the LoadProfileGenerator for a publication, please cite the following paper:\n```\nPflugradt et al., (2022). 
LoadProfileGenerator: An Agent-Based Behavior Simulation for Generating Residential Load Profiles. Journal of Open Source Software, 7(71), 3574, https://doi.org/10.21105/joss.03574\n```\n\n## External Data\n\nThe LoadProfileGenerator uses solar radiation profiles from Deutscher Wetterdienst (DWD, www.dwd.de) and from the Photovoltaic Geographical Information System (PVGIS, https://ec.europa.eu/jrc/en/pvgis).\n\n## About Us\n\nWe are the Institute of Energy and Climate Research - Techno-economic Systems Analysis (IEK-3) belonging to the Forschungszentrum Jülich. Our interdisciplinary department\'s research focuses on energy-related process and systems analyses. Data searches and system simulations are used to determine energy and mass balances, as well as to evaluate performance, emissions and costs of energy systems. The results are used for performing comparative assessment studies between the various systems. Our current priorities include the development of energy strategies, in accordance with the German Federal Government\'s greenhouse gas reduction targets, by designing new infrastructures for sustainable and secure energy supply chains and by conducting cost analysis studies for integrating new technologies into future energy market frameworks.\n\n\n# Acknowledgements\n\n### 2010-2016\n\nThis software was first developed at\n\n__Technische Universität Chemnitz - Professur Technische Thermodynamik__\n\n### 2016-2020\n\n__Berner Fachhochschule - Labor für Photovoltaik-Systeme__\n\nPart of the development was funded by the\n\n__Swiss Federal Office of Energy__\n\n### Since March 2020\n\nDevelopment is currently funded by the Forschungszentrum Jülich - IEK 3.\n\n\n\n'",",https://doi.org/10.21105/joss.03574\n```\n\n##","2020/06/18, 09:41:13",1224,MIT,29,285,"2022/12/14, 08:01:15",6,25,27,12,315,2,0.6,0.5,"2022/12/13, 15:27:28",v10.9.0,0,5,false,,false,false,,,https://github.com/FZJ-IEK3-VSA,https://www.fz-juelich.de/iek/iek-3/EN/Home/home_node.html,Forschungszentrum Jülich,,,https://avatars.githubusercontent.com/u/28654423?v=4,,, The-building-data-genome-project,A collection of non-residential buildings for performance analysis and algorithm benchmarking.,buds-lab,https://github.com/buds-lab/the-building-data-genome-project.git,github,"open-data,jupyter-notebook,electricity-meter,commercial-building,energy-efficiency,electrical-meters,smart-meter,temporal-data,feature-extraction,feature-engineering",Buildings and Heating,"2021/03/30, 12:28:32",171,0,19,false,Jupyter Notebook,Building and Urban Data Science (BUDS) Group,buds-lab,"Jupyter Notebook,Python,Makefile",http://www.buildingdatagenome.org,"b'# Check out the Building Data Genome 2 - the latest version that supersedes this one: https://github.com/buds-lab/building-data-genome-project-2\n\n\n\n![building data genome logo](https://raw.githubusercontent.com/buds-lab/the-building-data-genome-project/master/figures/buildingdatagenome1.png)\n\n- Does your data science technique actually scale across hundreds of buildings?\n- Is it actually faster or more accurate?\n\nThese are questions that researchers should ask when developing data-driven methods. Building performance prediction, classification, and clustering algorithms are becoming an essential part of analysis for anomaly detection, control optimization, and demand response. But how do we actually compare each individual technique against previously created methods?\n\nThe time-series data mining community identified this problem as early as 2003: “Much of this work has very little utility because the contribution made”...“offer an amount of improvement that would have been completely dwarfed by the variance that would have been observed by testing on many real world datasets, or the variance that would have been observed by changing minor (unstated) implementation details.” ([Keogh, E. 
and Kasetty, S.: On the need for time series data mining benchmarks: A survey and empirical demonstration. Data Mining and Knowledge Discovery, 7(4):349–371, Oct. 2003.](https://link.springer.com/article/10.1023/A:1024988512476))\n\n[They created the time-series data benchmarking set](http://www.cs.ucr.edu/~eamonn/time_series_data/). This data set enables testing of new techniques on an assortment of real world data sets. For commercial building data, we are doing the same!\n\n## The need for a Benchmarking Data Set for Non-residential Building Data Analytics\n\n### Most of the existing building performance data science studies rely on each individual researcher creating their own methods, finding a case study data set, and determining efficacy on their own. Not surprisingly, most of those researchers find positive, yet questionably meaningful results.\n\n![old way](https://raw.githubusercontent.com/buds-lab/the-building-data-genome-project/master/figures/Oldway.png)\n\n\n### Using a large, consistent benchmark data set from hundreds (or thousands) of buildings, a researcher can determine how well their methods actually perform across a heterogeneous data set. If multiple researchers use the same data set, then there can be meaningful comparisons of accuracy, speed and ease-of-use.\n\n![new way](https://raw.githubusercontent.com/buds-lab/the-building-data-genome-project/master/figures/NewWay.png)\n\n## Introducing the Building Data Genome Project\nIt is an open data set from 507 non-residential buildings that includes hourly whole building electrical meter data for one year. Each of the buildings has metadata such as area, weather, and primary use type. This data set can be used to benchmark various statistical learning algorithms and other data science techniques. It can also be used simply as a teaching or learning tool to practice dealing with measured performance data from large numbers of non-residential buildings. 
The charts below illustrate the breakdown of the buildings according to location, building industry, sub-industry, and primary use type.\n\n![meta data](https://raw.githubusercontent.com/buds-lab/the-building-data-genome-project/master/figures/allbars.png)\n\n### Please contribute new data sets or provide analysis examples in Jupyter or R markdown using the data\n\n\nCitation of Data-Set\n------------\n\n[Clayton Miller, Forrest Meggers, The Building Data Genome Project: An open, public data set from non-residential building electrical meters, Energy Procedia, Volume 122, September 2017, Pages 439-444, ISSN 1876-6102, https://doi.org/10.1016/j.egypro.2017.07.400.](http://www.sciencedirect.com/science/article/pii/S1876610217330047) \n\n[ResearchGate](https://www.researchgate.net/publication/319507342_The_Building_Data_Genome_Project_An_open_public_data_set_from_non-residential_building_electrical_meters)\n\n```\nBibTex:\n@article{Miller2017439,\ntitle = ""The Building Data Genome Project: An open, public data set from non-residential building electrical meters "",\njournal = ""Energy Procedia "",\nvolume = ""122"",\nnumber = """",\npages = ""439 - 444"",\nyear = ""2017"",\nnote = ""\\{CISBAT\\} 2017 International Conference Future Buildings & Districts – Energy Efficiency from Nano to Urban Scale "",\nissn = ""1876-6102"",\ndoi = ""https://doi.org/10.1016/j.egypro.2017.07.400"",\nurl = ""http://www.sciencedirect.com/science/article/pii/S1876610217330047"",\nauthor = ""Clayton Miller and Forrest Meggers"",\nkeywords = ""Open Data"",\nkeywords = ""Non-Residential Building Meter Data"",\nkeywords = ""Benchmark Data Set"",\nkeywords = ""Big Data"",\nkeywords = ""Machine Learning "",\nabstract = ""Abstract As of 2015, there are over 60 million smart meters installed in the United States; these meters are at the forefront of big data analytics in the building industry. However, only a few public data sources of hourly non-residential meter data exist for the purpose of testing algorithms. This paper describes the collection, cleaning, and compilation of several such data sets found publicly on-line, in addition to several collected by the authors. There are 507 whole building electrical meters in this collection, and a majority are from buildings on university campuses. This group serves as a primary repository of open, non-residential data sources that can be built upon by other researchers. An overview of the data sources, subset selection criteria, and details of access to the repository are included. Future uses include the application of new, proposed prediction and classification models to compare performance to previously generated techniques. ""\n}\n```\n\nGetting Started\n------------\n\nWe recommend you download the [Anaconda Python Distribution](https://www.continuum.io/downloads) and use Jupyter to get an understanding of the data.\n- Raw temporal and meta data are found in `/data/raw/`\n\nExample notebooks are found in `/notebooks/` -- a few good overview examples:\n- [Meta data overview](https://github.com/buds-lab/the-building-data-genome/blob/master/notebooks/00_Meta%20Data%20Exploration.ipynb)\n- [Temporal data overview](https://github.com/buds-lab/the-building-data-genome/blob/master/notebooks/00_Temporal%20Data%20Exploration%20--%20Subset.ipynb)\n
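To get a first look at the meter data, a short pandas sketch; the file name below is hypothetical, so check `/data/raw/` in the repository for the actual CSV names.

```python
import pandas as pd

# Hypothetical file name; substitute one of the CSVs shipped in /data/raw/.
meters = pd.read_csv("data/raw/meter_data.csv", index_col=0, parse_dates=True)

# Expect one column per building and one row per hourly timestamp.
print(meters.shape)
print(meters.iloc[:5, :3])
```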
\nPublications or Projects that use this data-set:\n------------\n\nPlease update this list if you add notebooks or R-Markdown files to the ``notebook`` folder.\n\n- [Miller, Clayton. “Screening Meter Data: Characterization of Temporal Energy Data from Large Groups of Non-Residential Buildings.” ETH Zürich, 2017.](https://www.research-collection.ethz.ch/handle/20.500.11850/125778) - [ResearchGate](https://www.researchgate.net/publication/313720565_Screening_Meter_Data_Characterization_of_Temporal_Energy_Data_from_Large_Groups_of_Non-Residential_Buildings)\n- [Temporal Data Mining Library for Buildings](https://github.com/buds-lab/temporal-features-for-nonres-buildings-library)\n\n\n# Contact -- (Add yours if you contribute to the data set)\nDr. Clayton Miller\nBuilding and Urban Data Science (BUDS) Group \nNational University of Singapore\nclayton@nus.edu.sg \nhttp://budslab.org/\n\n\nDr. Forrest Meggers\nCooling and Heating for Architecturally Optimized System (CHAOS) Lab\nPrinceton University\nfmeggers@princeton.edu\nhttp://chaos.princeton.edu/\n\n\nAnjukan Kathirgamanathan\nPhD Student, Energy Institute\nUniversity College Dublin\nanjukan.kathirgamanathan@ucdconnect.ie\nhttps://energyinstitute.ucd.ie/\n\n\nProject Organization\n------------\n\n ├── LICENSE\n ├── Makefile <- Makefile with commands like `make data` or `make train`\n ├── README.md <- The top-level README for developers using this project.\n ├── data\n │ ├── external <- Data from third party sources.\n │ ├── interim <- Intermediate data that has been transformed.\n │ ├── processed <- The final, canonical data sets for modeling.\n │ └── raw <- The original, immutable data dump.\n │\n ├── notebooks <- Jupyter notebooks. Naming convention is a number (for ordering),\n │ the creator\'s initials, and a short `-` delimited description, e.g.\n │ `1.0-jqp-initial-data-exploration`.\n │\n ├── references <- Data dictionaries, manuals, and all other explanatory materials.\n └── requirements.txt <- The requirements file for reproducing the analysis environment, e.g.\n generated with `pip freeze > requirements.txt`\n\n\nLicense\n------------\nThe MIT License (MIT)\nCopyright (c) 2016, Clayton Miller\n\nPermission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the ""Software""), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:\n\nThe above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.\n\nTHE SOFTWARE IS PROVIDED ""AS IS"", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. 
IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.\n\n'",",https://doi.org/10.1016/j.egypro.2017.07.400,https://doi.org/10.1016/j.egypro.2017.07.400"",\nurl","2016/05/04, 04:07:20",2730,MIT,0,87,"2019/01/13, 12:58:35",4,0,4,0,1746,0,0,0.12068965517241381,,,0,3,false,,false,false,,,https://github.com/buds-lab,www.budslab.org,Singapore,,,https://avatars.githubusercontent.com/u/26264086?v=4,,, VOLTTRON,A platform that provides services for collecting and storing data from buildings and devices. It provides an environment for developing applications that interact with data.,VOLTTRON,https://github.com/VOLTTRON/volttron.git,github,"buildings,bacnet,modbus,python,message-bus,office-hours,volttron,volttron-applications,volttron-instance",Buildings and Heating,"2022/10/03, 18:24:10",431,109,37,true,Python,,VOLTTRON,"Python,JavaScript,HTML,Shell,CSS,Dockerfile",https://volttron.readthedocs.io/,"b'![image](docs/source/files/VOLLTRON_Logo_Black_Horizontal_with_Tagline.png)\n[![Codacy Badge](https://api.codacy.com/project/badge/Grade/fcf58045b4804edf8f4d3ecde3016f76)](https://app.codacy.com/gh/VOLTTRON/volttron?utm_source=github.com&utm_medium=referral&utm_content=VOLTTRON/volttron&utm_campaign=Badge_Grade_Settings)\n\n\nVOLTTRON™ is an open source platform for distributed sensing and control. The\nplatform provides services for collecting and storing data from buildings and\ndevices and provides an environment for developing applications which interact\nwith that data.\n\n## Upgrading to VOLTTRON 8.x\n\nVOLTTRON 8 introduces four changes that require an explicit upgrade step when upgrading from an earlier VOLTTRON version:\n\n 1. Dynamic RPC authorization feature - This requires a modification to the auth file. If you have a pre-existing\n instance of VOLTTRON running on an older version, the auth file will need to be updated.\n 2. Historian agents now store the cache database (backup.sqlite file) in\n /agents///.agent-data directory instead of\n /agents// directory. In future all core agents will write data only\n to the .agent-data subdirectory. This is because vctl install --force backs up and restores\n only the contents of this directory.\n 3. SQLHistorians (historian version 4.0.0 and above) now use a new database schema where metadata is stored in\n the topics table instead of a separate metadata table. SQLHistorians with version >= 4.0.0 can work with an existing\n database with the older schema; however, the historian agent code should be upgraded to the newer version (>=4.0.0) to run\n with the VOLTTRON 8 core.\n 4. The VOLTTRON feature to run individual agents as unique Unix users is now named ""agent-isolation-mode"" and is \n consistently referred to using this name in code, configuration, and documentation. Before VOLTTRON 8.2 this \n configuration parameter was called ""secure-agent-users"" and related documentation referred to this mode as \n ""secure mode"". \n\nTo upgrade:\n\n 1. If upgrading a historian, make sure historians are not in auto-start mode. To remove any historian from auto-start\n mode use the command \'vctl disable \'. This is necessary so that the old\n sqlhistorian does not automatically start after step 5. \n 2. Update the VOLTTRON source code version to VOLTTRON 8\n 3. Activate the volttron environment, and run ```python bootstrap.py --force```. 
If you have \n any additional bootstrap options that you need (rabbitmq, web, drivers, etc.) include these in the above command.\n 4. Run ```volttron-upgrade``` to update the auth file, move historian cache files into the agent-data directory, and \n rename the config parameter ""secure-agent-users"" in VOLTTRON_HOME/config to ""agent-isolation-mode""\n **Note** that the upgrade script will only move the backup.sqlite file and will not move the sqlite historian\'s db \n file if it is within the install directory. If using a SQLite historian, please back up the database file of the \n sqlite historian before upgrading to the latest historian version.\n 5. Start VOLTTRON\n 6. Run ```vctl install --force --vip-identity --agent-config ``` to upgrade \n to the latest historian version. vctl install --force will back up the cache in the .agent-data \n folder, install the latest version of the historian, and restore the contents of \n the .agent-data folder.\n\n### Upgrading aggregate historians\n\nVOLTTRON 8 also comes with an updated SQL aggregate historian schema. However, there is no automated upgrade path for\nthe aggregate historian. To upgrade an existing aggregate historian please refer to the CHANGELOG.md within the \nSQLAggregateHistorian source directory.\n\n## Features\n\n- [Message Bus](https://volttron.readthedocs.io/en/latest/platform-features/message-bus/index.html) allows agents to subscribe to data sources and publish results and messages.\n- [Driver framework](https://volttron.readthedocs.io/en/latest/driver-framework/drivers-overview.html) for collecting data from and sending control actions to buildings and devices.\n- [Historian framework](https://volttron.readthedocs.io/en/latest/agent-framework/historian-agents/historian-framework.html) for storing data.\n- [Agent lifecycle management](https://volttron.readthedocs.io/en/latest/platform-features/control/agent-management-control.html) in the platform\n- [Web UI](https://volttron.readthedocs.io/en/latest/agent-framework/core-service-agents/volttron-central/volttron-central-overview.html) for managing deployed instances from a single central instance.\n\n## Installation\n\nVOLTTRON is written in Python 3.6+ and runs on Linux operating systems. For\nusers unfamiliar with those technologies, the following resources are recommended:\n\n- \n- \n\n### 1. Install prerequisites\n\n[Requirements Reference](https://volttron.readthedocs.io/en/latest/introduction/platform-install.html#step-1-install-prerequisites)\n\nFrom version 7.0, VOLTTRON requires Python 3 with a minimum version of 3.6; it is tested only on systems supporting that as a native package.\nOn Debian-based systems (Ubuntu bionic, debian buster, raspbian buster), these can all be installed with the following commands:\n\n```sh\nsudo apt-get update\nsudo apt-get install build-essential libffi-dev python3-dev python3-venv openssl libssl-dev libevent-dev git\n ```\n(Note: `libffi-dev` seems to only be required on arm-based systems.)\n\n On Redhat or CentOS systems, these can all be installed with the following command:\n```sh\nsudo yum update\nsudo yum install make automake gcc gcc-c++ kernel-devel python3.6-devel python3.6-venv openssl openssl-devel libevent-devel git\n ```\n\n### 2. Clone VOLTTRON code\n\nFrom version 6.0, VOLTTRON supports two message buses - ZMQ and RabbitMQ. \n\n```sh\ngit clone https://github.com/VOLTTRON/volttron --branch \n```\n\n### 3. 
Setup virtual environment\n\n#### Steps for ZMQ\n\nRun the following command to install all required packages\n\n```sh\ncd \npython3 bootstrap.py\nsource env/bin/activate\n```\n\nProceed to step 4.\n\nYou can deactivate the environment at any time by running `deactivate`.\n\n#### Steps for RabbitMQ\n\n##### 1. Install Erlang version 24 packages\n\nFor RabbitMQ based VOLTTRON, some RabbitMQ specific software packages must be installed.\n\n###### On Debian based systems and CentOS 6/7\n\nIf you are running a Debian or CentOS system, you can install the RabbitMQ dependencies by running the rabbit \n dependencies script, passing in the OS name and appropriate distribution as parameters. The following are supported:\n\n- `debian focal` (for Ubuntu 20.04)\n\n- `debian bionic` (for Ubuntu 18.04)\n\n- `debian stretch` (for Debian Stretch)\n\n- `debian buster` (for Debian Buster)\n\n- `raspbian buster` (for Raspbian/Raspberry Pi OS buster)\n\nExample command:\n\n```sh\n./scripts/rabbit_dependencies.sh debian buster\n```\n\n###### Alternatively\n\nYou can download and install Erlang from [Erlang Solutions](https://www.erlang-solutions.com/resources/download.html).\nPlease include OTP/components - ssl, public_key, asn1, and crypto.\nAlso lock your version of Erlang using the [yum-plugin-versionlock](https://access.redhat.com/solutions/98873).\n\n##### 2. Configure hostname\n\nMake sure that your hostname is correctly configured in /etc/hosts (see [this StackOverflow post](https://stackoverflow.com/questions/24797947/os-x-and-rabbitmq-error-epmd-error-for-host-xxx-address-cannot-connect-to-ho)).\nIf you are testing with VMs, please make sure to provide unique host names for each of the VMs you are using. \n\nThe hostname should be resolvable to a valid IP when running on bridged mode. RabbitMQ checks for this during initial \nboot. Without this (for example, when running on a VM in NAT mode) RabbitMQ startup would fail with the error ""unable to \nconnect to epmd (port 4369) on ."" Note: the RabbitMQ startup error would show up in the syslog (/var/log/messages) file\nand not in the RabbitMQ logs (/var/log/rabbitmq/rabbitmq@hostname.log)\n\n##### 3. Bootstrap\n\n```sh\ncd volttron\npython3 bootstrap.py --rabbitmq [optional install directory. defaults to\n/rabbitmq_server]\n```\n\nThis will build the platform and create a virtual Python environment and\ndependencies for RabbitMQ. It also installs the RabbitMQ server as the current user.\nIf an install path is provided, that path should exist and the user should have \nwrite permissions. RabbitMQ will be installed under `/rabbitmq_server-`.\nThe rest of the documentation refers to the directory `/rabbitmq_server-` as\n`$RABBITMQ_HOME`\n\nYou can check if the RabbitMQ server is installed by checking its status. Please\nnote, the `RABBITMQ_HOME` environment variable can be set in ~/.bashrc. If doing so,\nit needs to be set to the RabbitMQ installation directory (default path is\n`/rabbitmq_server/rabbitmq_server-`)\n\n```sh\necho \'export RABBITMQ_HOME=$HOME/rabbitmq_server/rabbitmq_server-3.9.7\'|sudo tee --append ~/.bashrc\nsource ~/.bashrc\n\n$RABBITMQ_HOME/sbin/rabbitmqctl status\n```\n\n##### 4. Activate the environment\n\n```sh\nsource env/bin/activate\n```\n\nYou can deactivate the environment at any time by running `deactivate`.\n\n##### 5. 
Create RabbitMQ setup for VOLTTRON:\n\n```sh\nvcfg rabbitmq single [--config optional path to rabbitmq_config.yml]\n```\n\nRefer to [examples/configurations/rabbitmq/rabbitmq_config.yml](examples/configurations/rabbitmq/rabbitmq_config.yml)\nfor a sample configuration file.\nAt a minimum you will need to provide the hostname and a unique common-name\n(under certificate-data) in the configuration file. Note: common-name must be\nunique. The general convention is to use `-root-ca`.\n\nRunning the above command without the optional configuration file parameter will\ncause the user to be prompted for all the required data in the command prompt. \n`vcfg` will use that data to generate a rabbitmq_config.yml file in the `VOLTTRON_HOME` \ndirectory.\n\nIf the above configuration file is being used as a basis, be sure to update it with \nthe hostname of the deployment (this should be the fully qualified domain name\nof the system).\n\nThis script creates a new virtual host and creates the SSL certificates needed\nfor this VOLTTRON instance. These certificates get created under the subdirectory \n""certificates"" in your VOLTTRON home (typically in ~/.volttron). It\nthen creates the main VIP exchange named ""volttron"" to route messages between\nthe platform and agents and an alternate exchange to capture unroutable messages.\n\nNOTE: We configure the RabbitMQ instance for a single volttron_home and\nvolttron_instance. This script will confirm with the user the volttron_home to\nbe configured. The VOLTTRON instance name will be read from volttron_home/config\nif available, if not the user will be prompted for the VOLTTRON instance name. To\nrun the scripts without any prompts, save the VOLTTRON instance name in the\nvolttron_home/config file and pass the VOLTTRON home directory as a command line\nargument. For example: `vcfg --vhome /home/vdev/.new_vhome rabbitmq single`\n\nThe following are example inputs for the `vcfg rabbitmq single` command. Since no\nconfig file is passed the script prompts for necessary details.\n\n```sh\nYour VOLTTRON_HOME currently set to: /home/vdev/new_vhome2\n\nIs this the volttron you are attempting to setup? [Y]:\nCreating rmq config yml\nRabbitMQ server home: [/home/vdev/rabbitmq_server/rabbitmq_server-3.9.7]:\nFully qualified domain name of the system: [cs_cbox.pnl.gov]:\n\nEnable SSL Authentication: [Y]:\n\nPlease enter the following details for root CA certificates\nCountry: [US]:\nState: Washington\nLocation: Richland\nOrganization: PNNL\nOrganization Unit: Volttron-Team\nCommon Name: [volttron1-root-ca]:\nDo you want to use default values for RabbitMQ home, ports, and virtual host: [Y]: N\nName of the virtual host under which RabbitMQ VOLTTRON will be running: [volttron]:\nAMQP port for RabbitMQ: [5672]:\nhttp port for the RabbitMQ management plugin: [15672]:\nAMQPS (SSL) port RabbitMQ address: [5671]:\nhttps port for the RabbitMQ management plugin: [15671]:\nINFO:rmq_setup.pyc:Starting rabbitmq server\nWarning: PID file not written; -detached was passed.\nINFO:rmq_setup.pyc:**Started rmq server at /home/vdev/rabbitmq_server/rabbitmq_server-3.9.7\nINFO:requests.packages.urllib3.connectionpool:Starting new HTTP connection (1): localhost\nINFO:requests.packages.urllib3.connectionpool:Starting new HTTP connection (1): localhost\nINFO:requests.packages.urllib3.connectionpool:Starting new HTTP connection (1): localhost\nINFO:rmq_setup.pyc:\nChecking for CA certificate\n\nINFO:rmq_setup.pyc:\nRoot CA (/home/vdev/new_vhome2/certificates/certs/volttron1-root-ca.crt) NOT Found. 
Creating root ca for volttron instance\nCreated CA cert\nINFO:requests.packages.urllib3.connectionpool:Starting new HTTP connection (1): localhost\nINFO:requests.packages.urllib3.connectionpool:Starting new HTTP connection (1): localhost\nINFO:rmq_setup.pyc:**Stopped rmq server\nWarning: PID file not written; -detached was passed.\nINFO:rmq_setup.pyc:**Started rmq server at /home/vdev/rabbitmq_server/rabbitmq_server-3.9.7\nINFO:rmq_setup.pyc:\n\n#######################\n\nSetup complete for volttron home /home/vdev/new_vhome2 with instance name=volttron1\nNotes:\n\n- Please set environment variable `VOLTTRON_HOME` to `/home/vdev/new_vhome2` before starting volttron\n\n- On production environments, restrict write access to\n /home/vdev/new_vhome2/certificates/certs/volttron1-root-ca.crt to only the admin user. For example: sudo chown root /home/vdev/new_vhome2/certificates/certs/volttron1-root-ca.crt\n\n- A new admin user was created with user name: volttron1-admin and password=default_passwd.\n You could change this user\'s password by logging in. Please update /home/vdev/new_vhome2/rabbitmq_config.yml if you change the password\n\n#######################\n```\n\n### 4. Test\n\nWe are now ready to start the VOLTTRON instance. If configured with a RabbitMQ message bus, a config file would have been\n generated in `$VOLTTRON_HOME/config` with the entry `message-bus=rmq`. If you need to revert to ZeroMQ based \n VOLTTRON, you will have to either remove the ""message-bus"" parameter or set it to the default ""zmq"" in `$VOLTTRON_HOME/config`\n and restart the volttron process. The following command starts the VOLTTRON process in the background:\n\n```sh\nvolttron -vv -l volttron.log &\n```\n\nThis command causes the shell to enter the virtual Python environment and then starts the platform in debug (vv) mode \nwith a log file named volttron.log.\n\nNext, start an example listener to see it publish and subscribe to the message bus:\n\n```sh\nvctl install examples/ListenerAgent\n```\n\nThis script handles several commands for installing and starting an agent after removing an old copy. This \nsimple agent publishes a heartbeat message and listens to everything on the message bus. 
Look at the VOLTTRON log to see \nthe activity:\n\n```sh\ntail volttron.log\n```\n\nListener agent heartbeat publications appear in the log as:\n\n```sh\n2020-04-20 18:49:31,395 (listeneragent-3.3 13458) __main__ INFO: Peer: pubsub, Sender: listeneragent-3.2_1:, Bus: , Topic: heartbeat/listeneragent-3.2_1, Headers: {\'TimeStamp\': \'2020-04-20T18:49:31.393651+00:00\', \'min_compatible_version\': \'3.0\', \'max_compatible_version\': \'\'}, Message:\n\'GOOD\'\n2020-04-20 18:49:36,394 (listeneragent-3.3 13458) __main__ INFO: Peer: pubsub, Sender: listeneragent-3.2_1:, Bus: , Topic: heartbeat/listeneragent-3.2_1, Headers: {\'TimeStamp\': \'2020-04-20T18:49:36.392294+00:00\', \'min_compatible_version\': \'3.0\', \'max_compatible_version\': \'\'}, Message:\n\'GOOD\'\n```\n\nTo stop the platform, run the following command:\n\n```sh\n./stop-volttron\n```\n\n## Next Steps\n\nThere are several walkthroughs to explore additional aspects of the platform:\n\n- [Agent Development Walkthrough](https://volttron.readthedocs.io/en/latest/developing-volttron/developing-agents/agent-development.html)\n- Demonstration of the [management UI](https://volttron.readthedocs.io/en/latest/deploying-volttron/multi-platform/volttron-central-deployment.html)\n- [RabbitMQ setup with Federation and Shovel plugins](https://volttron.readthedocs.io/en/latest/deploying-volttron/multi-platform/multi-platform-rabbitmq-deployment.html)\n- [Backward compatibility with the RabbitMQ message bus](https://volttron.readthedocs.io/en/latest/deploying-volttron/multi-platform/multi-platform-multi-bus.html)\n\n\n## Acquiring Third Party Agent Code\n\nThird party agents are available under the volttron-applications repository. In\norder to use those agents, clone the volttron-applications repository into the same\ndirectory as the VOLTTRON source code:\n\n```sh\ncd \ngit clone https://github.com/VOLTTRON/volttron-applications.git develop\n```\n\n## Contribute\n\nHow to [contribute](https://volttron.readthedocs.io/en/latest/developing-volttron/contributing-code.html) back:\n\n- [Issue Tracker](https://github.com/VOLTTRON/volttron/issues)\n- [Source Code](https://github.com/VOLTTRON/volttron) \n\n## Support\n\nThere are several options for VOLTTRON™ [support](https://volttron.readthedocs.io/en/latest/developing-volttron/community.html).\n\n- A VOLTTRON™ office hours telecon takes place every other Friday at 11am Pacific over Zoom.\n- A mailing list for announcements and reminders\n- The VOLTTRON™ contact email for being added to office hours, the mailing list, and for inquiries is: volttron@pnnl.gov\n- The preferred method for questions is through [StackOverflow](https://stackoverflow.com/questions/tagged/volttron) since this is easily discoverable by others who may have the same issue.\n- [GitHub issue tracker](https://github.com/VOLTTRON/volttron/issues) for feature requests, bug reports, and following development activities\n- VOLTTRON now has a [Slack channel](https://volttron-community.slack.com/signup)\n\n## License\n\nThe project is [licensed](LICENSE.md) under Apache 2.\n'",,"2013/11/07, 02:07:46",3639,CUSTOM,0,10353,"2023/10/23, 21:17:29",393,1667,2719,54,2,9,1.6,0.5905114401076716,"2022/09/30, 
23:05:56",8.2,0,63,false,,true,true,"eclipse-volttron/volttron-economizer-rcx,eclipse-volttron/volttron-ilc,davidraker/volttron-app-intelligent-load-control,ACE-IoT-Solutions/volttron-yolo-occupancy,davidraker/volttron-app-economizer-rcx,thakorneyp11/volttron-iot-demo,eclipse-volttron/volttron-boptest,rlutes/volttron-pnnl-applications,AntonLED/macroproject,ACE-IoT-Solutions/VisualBacnetCapture,eclipse-volttron/volttron-dnp3-outstation,lazlop/VolttronSemantics,Matammanjunath/VOLTTRON_pnnl_applications_All_scripts,kefeimo/dev-volttron-modular,ACE-IoT-Solutions/volttron-azure-iot-hub-historian,riley206/VOLTTRON2HomeAssistant,davidraker/volttron-actuator,kefeimo/volttron-openadr-ven,schandrika/volttron-testing,bonicim/volttron-lib-web,davidraker/volttron-lib-base-driver,craig8/volttron-lib-base-driver,bonicim/volttron-actuator,craig8/volttron-testing,craig8/volttron-lib-web,bonicim/volttron-bacnet-proxy,bonicim/volttron-lib-base-driver,davidraker/volttron-lib-web,eclipse-volttron/volttron-openadr-ven,FraunhoferCSE/GlobalSchedulerCore,IntwineConnect-archive/integrate-DelayAgent,eclipse-volttron/volttron-bacnet-proxy,eclipse-volttron/volttron-listener,eclipse-volttron/volttron-lib-base-historian,eclipse-volttron/volttron-testing,mlockwo3/Volttronclone,eclipse-volttron/volttron-lib-web,eclipse-volttron/volttron-lib-base-driver,davidraker/TransactiveNodeAgentOld,IntwineConnect-archive/integrate-TranslatorAgent,Bowriverstudio/volttron-kirkland,sankanire201/MITLL_BEMS_controller,sankanire201/ESTCP_OPAL,sankanire201/ESTCP_EMS,bbartling/hvac_volttron_agents,ajfar-bem/wisebldg,VOLTTRON/volttron-applications-contrib,VOLTTRON/volttron-pnnl-applications,TPponmat/enerkey_os,ChargePoint/volttron,ChargePoint/volttron-applications,kwarodom/bemoss_os-2,LBNL-ETA/LPDM-Volttron,RoshanLKini/pnnl-project,wietlabs/volttron-optimizer,VOLTTRON-UI-API/volttron-ui-api,Soulweed/Agent,mihaiscutaru/drimpacVEN,dzimmanck/volttron-homeassistant,pyffpyff/ACMG_lastest,hlngo/tns,pyffpyff/new-changing,pyffpyff/ACMGAgent,Sgkamnuanchai/bems,SenHuang19/VOLTTRON_communication,simplco/Volttron-Monitor-Agent,VOLTTRON/volttron-GS,pyffpyff/ACMGAgent_old-version-backup,VOLTTRON/EnergyPlus-Volttron-Toolkit,mohantysubhendu/Bemoss_New,BEMOSSPlus/BEMOSSx,kwarodom/bemoss_os-1,IntwineConnect/volttron-CTAagent,OpenBMS/volttron-demo,IntwineConnect-archive/integrate-HomeownerAgent,kruthikarshankar/bemoss_web_ui,cyrus19901/volttron-homeassistant,bruskauff/BBB-VOLTTRON,IntwineConnect-archive/integrate-TestMessageOriginatorAgent,cdcorbin-pnnl/volttron-models,cdcorbin-pnnl/volttron-pubsub,cdcorbin-pnnl/volttron-market,IntwineConnect/volttron-CPRAgent,kmorri09/openevse_volttron,cyrus19901/VOLTTRON-azure,cyrus19901/VOLTTRON-Cloud,cyrus19901/Volttron-transactive,kgegner/BeagleBoneCode,nikithark/DMS-BMS,VOLTTRON/volttron-models,IntwineConnect-archive/integrate-UtilityAgent,VOLTTRON/volttron-pubsub,VOLTTRON/volttron-trxhvac,cdcorbin-pnnl/volttron-trxhvac,rborde/volttron,VOLTTRON/econ-dispatch,cdcorbin-pnnl/volttron-energyplus,bemoss/BEMOSS3.5,joeldevlearning/volt-sensor-prototype,NREL/volttime,Aditya23456/BEMOSS3.5,Jordan87M/DCMGAgents,VOLTTRON/deprecated-volttron-applications,miraabid/bemoss,so3500/volttron-kafka,TPponmat/TP,IntwineConnect/volttron-OSISoftPi,rajeee/bemoss_os,VOLTTRON/volttron",,https://github.com/VOLTTRON,http://volttron.org,,,,https://avatars.githubusercontent.com/u/5875448?v=4,,, EnergyPlus,"A whole building energy simulation program that engineers, architects, and researchers use to model both energy consumption and water usage in 
buildings.",NREL,https://github.com/NREL/EnergyPlus.git,github,,Buildings and Heating,"2023/10/18, 13:24:02",940,0,167,true,C++,National Renewable Energy Laboratory,NREL,"C++,Fortran,Jupyter Notebook,Python,Visual Basic 6.0,CMake,C,REALbasic,Batchfile,Shell,Qt Script,Hack,HTML,Ruby,Xojo",https://energyplus.net,"b""EnergyPlus [![](https://img.shields.io/github/release/NREL/energyplus.svg)](https://github.com/NREL/EnergyPlus/releases/latest)\n==========\n\n[![](https://img.shields.io/github/downloads/NREL/EnergyPlus/latest/total?color=5AC451)](https://github.com/NREL/EnergyPlus/releases/latest)\n[![](https://img.shields.io/github/downloads/nrel/energyplus/total.svg?color=5AC451&label=downloads_since_v8.1)](https://github.com/NREL/EnergyPlus/releases)\n\nThis is the EnergyPlus Development Repository. EnergyPlus\xe2\x84\xa2 is a whole building energy simulation program that engineers, architects, and researchers use to model both energy consumption and water use in buildings.\n\n## Contact/Support\n\n - The Department of Energy maintains a [public website for EnergyPlus](https://energyplus.net) where you can find much more information about the program.\n - For detailed developer information, consult the [wiki](https://github.com/nrel/EnergyPlusTeam/wiki).\n - Many users (and developers) of EnergyPlus are active on [Unmet Hours](https://unmethours.com/), so that's a great place to start if you have a question about EnergyPlus or building simulation.\n - For more in-depth, developer-driven support, please utilize the [EnergyPlus Helpdesk](https://energyplushelp.freshdesk.com/).\n\n## Testing\n\n[![](https://github.com/NREL/EnergyPlus/workflows/Custom%20Check/badge.svg)](https://github.com/NREL/EnergyPlus/actions/workflows/custom_check.yml) \n[![](https://github.com/NREL/EnergyPlus/workflows/Documentation/badge.svg)](https://github.com/NREL/EnergyPlus/actions/workflows/documentation.yml) \n[![](https://github.com/NREL/EnergyPlus/workflows/CppCheck/badge.svg)](https://github.com/NREL/EnergyPlus/actions/workflows/cppcheck.yml)\n\nEvery commit and every release of EnergyPlus undergoes rigorous testing.\nThe testing consists of building EnergyPlus, of course, then there are unit tests, integration tests, API tests, and regression tests.\nSince 2014, most of the testing has been performed by our bots ([Tik-Tok](https://github.com/nrel-bot), [Gort](https://github.com/nrel-bot-2), and [Marvin](https://github.com/nrel-bot-3)), using a fork of the [Decent CI](https://github.com/lefticus/decent_ci) continuous integration system.\nWe are now adapting our efforts to use the Github Actions system to handle more of our testing processes.\nIn the meantime, while Decent CI is still handling the regression and bulkier testing, results from Decent CI are still available on the testing [dashboard](https://myoldmopar.github.io/EnergyPlusBuildResults/).\n\n## Releases\n\n[![](https://github.com/NREL/EnergyPlus/workflows/Windows%20Releases/badge.svg)](https://github.com/NREL/EnergyPlus/actions/workflows/windows_release.yml) \n[![](https://github.com/NREL/EnergyPlus/workflows/Mac%20Releases/badge.svg)](https://github.com/NREL/EnergyPlus/actions/workflows/mac_release.yml) \n[![](https://github.com/NREL/EnergyPlus/workflows/Linux%20Releases/badge.svg)](https://github.com/NREL/EnergyPlus/actions/workflows/linux_release.yml)\n\nEnergyPlus is released twice annually, usually in March and September.\nIt is recommended all use of EnergyPlus is production workflows use these formal, public releases.\nIteration 
**(pre-)releases** may be created during a development cycle; however, users should generally avoid these, as input syntax may change in ways that the major release version transition tools won't support, which could require manual intervention to remedy.\nIf an interim release is intended for active use by users, such as a bug-fix-only or performance-only re-release, it will be clearly specified in the release notes and a public announcement will accompany this type of release.\nOur releases are now built by GitHub Actions.\n\n## Core Documentation\n\nProgram documentation is installed alongside the program, with the PDFs also available [online](https://energyplus.net/documentation).\nBig Ladder also produces HTML-based documentation [online](http://bigladdersoftware.com/epx/docs/).\n\n## API Documentation\n\n[![Read the Docs](https://img.shields.io/readthedocs/energyplus?label=docs%20%28latest%29&color=5AC451)](https://energyplus.readthedocs.io/en/latest/)\n[![Read the Docs](https://img.shields.io/readthedocs/energyplus?label=docs%20%28stable%29&color=5AC451)](https://energyplus.readthedocs.io/en/stable/)\n\nAn API has been developed to allow access to internal EnergyPlus functionality and open up the possibility for new workflow opportunities around EnergyPlus.\nA C API exposes the C++ functions, and Python bindings are built on top of it to maximize accessibility.\nDocumentation is being built and posted on ReadTheDocs and that documentation will continue to be expanded over time as the API grows.\nThe badges above show the status of, and link out to, the `latest` documentation (most recent commit to the `develop` branch) as well as the `stable` documentation (most recent release tag).\n\n## License & Contributing Development\n\n[![](https://img.shields.io/badge/license-BSD--3--like-5AC451.svg)](https://github.com/NREL/EnergyPlus/blob/develop/LICENSE.txt)\n\nEnergyPlus is available under a BSD-3-like license.\nFor more information, check out the [license file](https://github.com/NREL/EnergyPlus/blob/develop/LICENSE.txt).\nThe EnergyPlus team accepts contributions to EnergyPlus source, utilities, test files, documentation, and other materials distributed with the program.\nThe current EnergyPlus contribution policy is available on the EnergyPlus [contribution policy page](https://www.energyplus.net/contributing).\nIf you are interested in contributing, please start there, but feel free to reach out to the team.\n\n## Building EnergyPlus\n\nA detailed description of compiling EnergyPlus on multiple platforms is available on the [wiki](https://github.com/NREL/EnergyPlus/wiki/Building-EnergyPlus).\nAlso, as we are adapting to using GitHub Actions, the recipes for building EnergyPlus can be found in our [workflow files](https://github.com/NREL/EnergyPlus/tree/develop/.github/workflows).\n""",,"2013/11/22, 14:47:34",3624,CUSTOM,3748,36211,"2023/10/18, 13:24:07",868,3047,9368,497,7,38,0.8,0.8627215504389598,"2023/09/29, 02:10:25",v23.2.0,0,80,false,,false,true,,,https://github.com/NREL,http://www.nrel.gov,"Golden, CO",,,https://avatars.githubusercontent.com/u/1906800?v=4,,, OpenStudio,A cross-platform collection of software tools to support whole building energy modeling using EnergyPlus and advanced daylight analysis using Radiance.,NREL,https://github.com/NREL/OpenStudio.git,github,,Buildings and Heating,"2023/10/18, 18:35:57",447,12,57,true,C++,National Renewable Energy Laboratory,NREL,"C++,Ruby,XSLT,CMake,Jupyter Notebook,SWIG,Python,C#,Shell,Qt 
Script,Batchfile",https://www.openstudio.net/,"b'OpenStudio\n==========\n\nOpenStudio is a cross-platform (Windows, Mac, and Linux) collection of software tools to support whole building energy modeling using [EnergyPlus](https://github.com/NREL/EnergyPlus) and advanced daylight analysis using [Radiance](https://github.com/NREL/Radiance/). OpenStudio is an open source project to facilitate community development, extension, and private sector adoption.\n\nThe OpenStudio SDK allows building researchers and software developers to quickly get started through its multiple entry levels, including access through C++, Ruby, Python, and C#.\n\nMore information and documentation is available at the [OpenStudio website](https://www.openstudio.net/). User support is available via the community moderated question and answer resource [unmethours.com](https://unmethours.com/questions/).\n'",,"2013/07/03, 19:34:37",3766,CUSTOM,1405,20452,"2023/10/16, 22:38:20",186,1967,4728,279,8,5,0.5,0.7019565630342557,"2023/10/18, 19:15:33",v3.7.0-rc1,0,46,false,,false,true,"bruadam/ICIEE_DTU_Heat_stress_Africa,intelligent-environments-lab/DOE_XStock,mechyai/rl_bca,saeranv/thiru,canmet-energy/btap_batch,MatthewSteen/openstudio-metadata-utility,jmarrec/geomeffibem,melanie-ensta/CityLearn,juliet29/cee256_final_2022,intelligent-environments-lab/CityLearn,DimitrisMantas/ADAPT,TShapinsky/openstudio-metadata-utility",,https://github.com/NREL,http://www.nrel.gov,"Golden, CO",,,https://avatars.githubusercontent.com/u/1906800?v=4,,, BEMServer,An open source Python server to deploy energy management solutions for buildings.,HIT2GAP-EU-PROJECT,https://github.com/HIT2GAP-EU-PROJECT/bemserver.git,github,,Buildings and Heating,"2022/10/27, 16:05:33",39,0,2,true,Python,HIT2GAP,HIT2GAP-EU-PROJECT,"Python,Shell,Dockerfile,Mako",http://bemserver.org,"b'# BEMServer is being fully refactorized! A newer version is under development [here](https://github.com/BEMServer). The present version is now abandonned (on our side at least).\n\n\n\n

\n\n\n**Table of Contents**\n\n1. [Description](#description)\n 1. [Technologies used](#technologies)\n 2. [Installation](#installation)\n2. [User guidelines](#user)\n3. [Developer guidelines](#developer)\n4. [Related work](#documentation)\n\n\n\n## Description \n\nBEMServer is an open source platform to ease the deployment of energy management software in monitored buildings.\n\nBased on standard technologies (REST APIs, formal ontologies...) it is a Python software that is used to\n\n- ***collect data*** from buildings. Data are currently pushed to BEMServer through a specific REST API. Therefore, for smart meters, adapters need to be developed to connect the meters to BEMServer. For BMS, specific adapters need to be developed, or a third-party solution need to be used to collect data from proprietary protocols (KNX, BACNet...)\n- ***agregate data*** collected. Data in buildings are heterogeneous by nature, and come from a variety of sources (meters, sensors, but also human interactions, building descriptions, IFC files...). To ease data access, all data need first to be aggregated and aligned according to a specific model. We use our specific [BEMOnt](https://github.com/HIT2GAP-EU-PROJECT/BEMOnt) formal ontology to do so.\n- ***preprocess data***. Data from sensors may not be as reliable as expected. In BEMServer we are continuously developing new algorithms to facilitate data access for software developers: data cleansing to avoid blanks and outliers, unit conversion, airhtmetic operations... are at the disposal to get the data as you need them.\n- ***present data***. Through its [REST APIs](https://h2g-platform-core.nobatek.com/api/v0/api-docs/redoc), data are exposed in a standard way to thrid-party developers.\n\n### Technologies used \n\nBEMServer is a **Python**-based software. As a server is it developed using the [flask](https://palletsprojects.com/p/flask/) library.\n\nAdditionally, its storage system uses 3 different technologies:\n- Apache Jena to store the metadata (i.e. data used to describe all relevant information, from the building, to the measures made by a sensor)\n- HDF5 file format to store timeseries, which are a big part of the data stored, coming from sensors and meters.\n- SQLite to store events. An event is typically generated by a service connected to BEMServer, and can be an alert (e.g. an abnormal energy consumption), or an advice (e.g. potential for energy saving).\n\n### Installation \n\nFollow the guideline in [INSTALL.md](INSTALL.md).\n\n\n## User guidelines \n\nBEMServer is mainly dedicated to people and company who want to develop smart energy services for building. As such, it was already used as the support for load forecasting, fault detection and diagnosis, or confort simulation tools. In brief, BEMServer is a tool to be used for domain expert and software developers who do not want to become experts on how to collect data and access them.\n\nIn order to install BEMServer, please check the ***INSTALL.md*** file. Once up and running, simply use the online [REST APIs](https://h2g-platform-core.nobatek.com/api/v0/api-docs/redoc) to interact with your BEMServer instance.\n\n## Developer guidelines \n\nWant to be part of the developing team? Want to contribute to the project and join effort in providing the community with an open source tool to deploy energy management softwares? 
Then, just download the project, fork it, start developing and make a pull request.\n\nAlso check the [CONTRIBUTING.md](CONTRIBUTING.md) file.\n\n## Related work \n\n- Pierre Bourreau, Richard Chbeir, Yudith Cardinale, Aitor Corchero, Khouloud Salameh, J\xc3\xa9r\xc3\xb4me Lafr\xc3\xa9choux, David Fr\xc3\xa9d\xc3\xa9rique, Rafael Constantinou - ***BEMServer: An Open Source Platform for Building Energy Performance Management*** - EC3 (European Conference on Computing in Construction), July 2019 - https://www.researchgate.net/publication/334626054_BEMServer_An_Open_Source_Platform_for_Building_Energy_Performance_Management\n\n\n- Lara Kallab, Richard Chbeir, Pierre Bourreau, Pascale Brassier, Michael Mrissa - ***HIT2GAP: Towards a better building energy management***, Energy Procedia, Volume 122, 2017 - http://www.sciencedirect.com/science/article/pii/S1876610217330035\n\nAlso, see the [presentation on SlideShare](https://www.slideshare.net/pbourreau/bemserver-open-source-platform-for-building-energy-management)\n'",,"2019/07/16, 14:45:14",1562,CUSTOM,1,67,"2022/02/14, 12:49:56",5,2,4,0,618,0,0.0,0.5,"2019/08/21, 16:23:01",v0.1,0,3,false,,false,true,,,https://github.com/HIT2GAP-EU-PROJECT,http://www.hit2gap.eu/,,,,https://avatars.githubusercontent.com/u/27004362?v=4,,, SEED,Standard Energy Efficiency Data Platform™ is a web-based application that helps organizations easily manage data on the energy performance of large groups of buildings.,SEED-platform,https://github.com/SEED-platform/seed.git,github,"energy,commercial,buildings",Buildings and Heating,"2023/10/25, 02:13:34",101,0,11,true,Python,Standard Energy Efficiency Data Platform™,SEED-platform,"Python,JavaScript,HTML,SCSS,PLpgSQL,Shell,Dockerfile,Mustache,Makefile",,"b'## Standard Energy Efficiency Data (SEED) Platform\xe2\x84\xa2\n\n[![Build Status][build-img]][build-url] [![Coverage Status][coveralls-img]][coveralls-url]\n\nThe SEED Platform is a web-based application that helps organizations easily\nmanage data on the energy performance of large groups of buildings. Users can\ncombine data from multiple sources, clean and validate it, and share the\ninformation with others. The software application provides an easy, flexible,\nand cost-effective method to improve the quality and availability of data to\nhelp demonstrate the economic and environmental benefits of energy efficiency,\nto implement programs, and to target investment activity.\n\nThe SEED application is written in Python/Django, with AngularJS, Bootstrap,\nand other JavaScript libraries used for the front-end. The back-end database\nis required to be PostgreSQL.\n\nThe SEED web application provides both a browser-based interface for users to\nupload and manage their building data, as well as a full set of APIs that app\ndevelopers can use to access these same data management functions. 
From a\nrunning server, the Swagger API documentation can be found at `/api/swagger`\nor from the front end by clicking the API documentation link in the sidebar.\n\n### Installation\n\n- Production on Amazon Web Services: See [Installation Notes][production-aws-url]\n- Development on Mac OSX: [Installation Notes][development-mac-osx]\n- Development using Docker: [Installation Notes][development-docker]\n\n### Starting SEED Platform\n\nIn production, run the web server (uWSGI) and\nthe background task manager (Celery) with the following two commands:\n\n```\nbin/start_uwsgi.sh\nbin/start_celery.sh\n```\n\nIn development mode, you can start the web server and the background\ntask manager (Celery) with:\n\n```\n./manage.py runserver\ncelery -A seed worker -l INFO -c 4 --max-tasks-per-child 1000 -EBS django_celery_beat.schedulers:DatabaseScheduler\n```\n\n### Developer Resources\n\n- Source code documentation is on the [SEED website][code-documentation] and there are links to [older versions][code-documentation-links] as needed.\n- Several notes regarding Django and AngularJS integration: See [Developer Resources][developer-resources]\n\n#### Testing\n\n- Running tests: See [Testing Notes][developer-testing-notes]\n\n### Copyright\n\nSee the information in the [LICENSE.md](LICENSE.md) file.\n\n[code-documentation]: https://seed-platform.org/code_documentation/latest/\n[code-documentation-links]: https://seed-platform.org/developer_resources/\n[development-docker]: https://github.com/SEED-platform/seed/blob/develop/docs/source/setup_docker.rst\n[development-mac-osx]: https://github.com/SEED-platform/seed/blob/develop/docs/source/setup_osx.rst\n[production-aws-url]: http://www.github.com/seed-platform/seed/wiki/Installation\n[developer-resources]: https://github.com/SEED-platform/seed/blob/develop/docs/source/developer_resources.rst\n[developer-testing-notes]: https://github.com/SEED-platform/seed/blob/develop/docs/source/developer_resources.rst#testing\n[build-img]: https://github.com/SEED-platform/seed/workflows/CI/badge.svg?branch=develop\n[build-url]: https://github.com/SEED-platform/seed/actions?query=branch%3Adevelop\n[coveralls-img]: https://coveralls.io/repos/github/SEED-platform/seed/badge.svg?branch=HEAD\n[coveralls-url]: https://coveralls.io/github/SEED-platform/seed?branch=HEAD\n'",,"2014/10/20, 04:26:53",3292,CUSTOM,332,11081,"2023/10/25, 02:13:35",307,1887,3989,599,0,14,1.3,0.7364788958433639,"2023/10/06, 17:00:51",v2.20.0,0,40,false,,false,true,,,https://github.com/SEED-platform,http://energy.gov/eere/buildings/standard-energy-efficiency-data-platform,United States of America,,,https://avatars.githubusercontent.com/u/7445100?v=4,,, HPWHsim,An open source simulation model for Heat Pump Water Heaters (HPWH).,EcotopeResearch,https://github.com/EcotopeResearch/HPWHsim.git,github,,Buildings and Heating,"2023/05/23, 14:53:27",11,0,1,true,C++,"Ecotope, Inc.",EcotopeResearch,"C++,CMake",,"b""# HPWHsim\n\nAn open source simulation model for Heat Pump Water Heaters (HPWH).\n\nHPWHsim was developed with whole house simulation in mind; it is intended to be run independently of the overarching simulation's time steps and other parameters, and it does not aggregate its own outputs. It was also designed to run quickly, as the typical use case would see many simulations run, each a year long or more.\n\n### Development\n\nHPWHsim is configured as a CMake project. 
Currently, CMake is only configured to generate Microsoft Visual Studio solutions compiled with Microsoft Visual C++ (other generators and compilers will not work). CMake also handles version control via Git.\n\n### Dependencies\n\n- Microsoft Visual Studio 2017 with Visual C++ (which can be installed afterwards from the Microsoft Visual Studio Installer)\n- CMake 3.5 or later\n- Git\n- Btwxt 0.2.0\n\n### Building HPWHsim from source\n\n1. Clone the git repository, or download and extract the source code.\n2. Make a directory called `build` inside the top level of your source.\n3. Open a console in the `build` directory.\n4. Type `cmake ..`.\n5. Type `cmake --build . --config Release`.\n6. Type `ctest -C Release` to run the test suite and ensure that your build is working properly.\n""",,"2015/12/04, 01:24:35",2882,BSD-3-Clause,49,793,"2023/05/23, 14:53:27",12,154,157,14,155,2,1.0,0.541501976284585,"2023/05/23, 15:00:07",v1.22.0,0,7,false,,false,false,,,https://github.com/EcotopeResearch,www.ecotope.com,"Seattle, WA",,,https://avatars.githubusercontent.com/u/10763891?v=4,,, OpenStudio-ERI,Calculates an Energy Rating Index (ERI) via an OpenStudio/EnergyPlus-based workflow. Building information is provided through an HPXML file.,NREL,https://github.com/NREL/OpenStudio-ERI.git,github,,Buildings and Heating,"2023/10/24, 14:53:56",12,0,1,true,Ruby,National Renewable Energy Laboratory,NREL,"Ruby,Python,CSS,Batchfile,Makefile",,"b""OpenStudio-ERI\n==============\n\n[![GitHub release (latest by date including pre-releases)](https://img.shields.io/github/v/release/NREL/OpenStudio-ERI?include_prereleases)](https://github.com/NREL/OpenStudio-ERI/releases)\n[![ci](https://github.com/NREL/OpenStudio-ERI/workflows/ci/badge.svg)](https://github.com/NREL/OpenStudio-ERI/actions)\n[![Documentation Status](https://readthedocs.org/projects/openstudio-eri/badge/?version=latest)](https://openstudio-eri.readthedocs.io/en/latest/?badge=latest)\n\n\nThe OpenStudio-ERI project allows calculating an Energy Rating Index (ERI) using the Department of Energy's open-source [OpenStudio](https://www.openstudio.net/)/[EnergyPlus](https://energyplus.net/) simulation platform.\nThe building description is provided in an [HPXML file](https://hpxml.nrel.gov/) format.\nOpenStudio-ERI is intended to be used by user interfaces or other automated software workflows that automatically produce the HPXML file.\n\nThe project supports:\n- ANSI/RESNET/ICC 301\xc2\xa9 Standard for the Calculation and Labeling of the Energy Performance of Dwelling and Sleeping Units using an Energy Rating Index\n- ENERGY STAR Certification System for Homes and Apartments Using an ERI Compliance Path\n- IECC ERI Compliance Alternative (Section R406)\n- DOE ZERH Certification Using an ERI Compliance Path\n\n\nFor more information on running simulations, generating HPXML files, etc., please visit the [documentation](https://openstudio-eri.readthedocs.io/en/latest).\n\n## License\n\nThis workflow is available under a BSD-3-like license, which is a free, open-source, and permissive license.\nFor more information, check out the [license file](https://github.com/NREL/OpenStudio-ERI/blob/master/LICENSE.md).\n\n## Disclaimer\n\nDownloading and using this software from this website does not constitute accreditation of the final software product by RESNET.\nIf you are seeking to develop RESNET Accredited Rating Software, you will need to submit your final software product to RESNET for accreditation.\n\nAny reference herein to RESNET, its activities, products, or 
services, or any linkages from this website to RESNET's website, does not constitute or imply the endorsement, recommendation, or favoring of the U.S. Government, the Alliance for Sustainable Energy, or any of their employees or contractors acting on their behalf.\n\n""",,"2017/07/28, 17:08:03",2280,CUSTOM,372,8094,"2023/10/24, 14:54:00",10,608,685,47,1,5,0.1,0.20997880947794256,"2023/08/14, 15:39:46",v1.6.2,0,13,false,,false,false,,,https://github.com/NREL,http://www.nrel.gov,"Golden, CO",,,https://avatars.githubusercontent.com/u/1906800?v=4,,, OpenStudio-HPXML,Modeling of residential buildings in EnergyPlus using OpenStudio/HPXML.,NREL,https://github.com/NREL/OpenStudio-HPXML.git,github,,Buildings and Heating,"2023/10/25, 00:40:03",29,0,9,true,Ruby,National Renewable Energy Laboratory,NREL,"Ruby,Python",,"b'# OpenStudio-HPXML\n\n[![GitHub release (latest by date including pre-releases)](https://img.shields.io/github/v/release/NREL/OpenStudio-HPXML?include_prereleases)](https://github.com/NREL/OpenStudio-HPXML/releases)\n[![ci](https://github.com/NREL/OpenStudio-HPXML/workflows/ci/badge.svg)](https://github.com/NREL/OpenStudio-HPXML/actions)\n[![Documentation Status](https://readthedocs.org/projects/openstudio-hpxml/badge/?version=latest)](https://openstudio-hpxml.readthedocs.io/en/latest/?badge=latest)\n\nOpenStudio-HPXML allows running residential EnergyPlus simulations using an [HPXML file](https://hpxml.nrel.gov/) for the building description.\nIt is intended to be used by user interfaces or other automated software workflows that automatically produce the HPXML file. A [Schematron](http://schematron.com/) document for the EnergyPlus use case is used to validate that the appropriate HPXML inputs are provided to run EnergyPlus.\n\nOpenStudio-HPXML can accommodate a wide range of different building technologies and geometries.\nEnd-to-end simulations typically run in 3-10 seconds, depending on complexity, computer platform and speed, etc.\n\nFor more information on running simulations, generating HPXML files, etc., please visit the [documentation](https://openstudio-hpxml.readthedocs.io/en/latest).\n\n## Workflows\n\nA simple `run_simulation.rb` script is provided to run a residential EnergyPlus simulation from an HPXML file.\nSee the [Usage Instructions](https://openstudio-hpxml.readthedocs.io/en/latest/usage_instructions.html) for documentation on running the workflow.\n\nSince [OpenStudio measures](http://nrel.github.io/OpenStudio-user-documentation/getting_started/about_measures/) are used for model generation, additional OpenStudio-based workflows and interfaces can instead be used if desired.\n\n## Measures\n\nThis repository contains several OpenStudio measures:\n- `BuildResidentialHPXML`: A measure that generates an HPXML file from a set of building description inputs (including, e.g., simplified geometry inputs).\n- `BuildResidentialScheduleFile`: A measure that generates a CSV of detailed schedules (e.g., stochastic occupancy) for use in the simulation.\n- `HPXMLtoOpenStudio`: A measure that translates an HPXML file to an OpenStudio model.\n- `ReportSimulationOutput`: A reporting measure that generates a variety of simulation-based annual/timeseries outputs in CSV/JSON/MessagePack format.\n- `ReportUtilityBills`: A reporting measure that generates utility bill outputs in CSV/JSON/MessagePack format.\n\n## Projects\n\nThe OpenStudio-HPXML workflow is used by a number of other residential projects, including:\n- [BEopt](https://beopt.nrel.gov)\n- [Energy Rating Index 
(ERI)](https://github.com/NREL/OpenStudio-ERI)\n- [Home Energy Score](https://betterbuildingssolutioncenter.energy.gov/home-energy-score)\n- [ResStock](https://resstock.nrel.gov/)\n- [URBANopt](https://www.nrel.gov/buildings/urbanopt.html)\n- [Weatherization Assistant](https://weatherization.ornl.gov/softwaredescription/) (pending)\n\nIt is also used by several private-sector software tools.\n\n## License\n\nThis project is available under a BSD-3-like license, which is a free, open-source, and permissive license. For more information, check out the [license file](https://github.com/NREL/OpenStudio-HPXML/blob/master/LICENSE.md).\n'",,"2018/11/02, 23:02:25",1817,CUSTOM,1740,11544,"2023/10/25, 00:40:06",123,1155,1402,295,0,22,1.2,0.46535094227630214,"2023/05/23, 04:09:20",v1.6.0,0,12,false,,false,false,,,https://github.com/NREL,http://www.nrel.gov,"Golden, CO",,,https://avatars.githubusercontent.com/u/1906800?v=4,,, AixLib,A Modelica model library for building performance simulations.,RWTH-EBC,https://github.com/RWTH-EBC/AixLib.git,github,hacktoberfest,Buildings and Heating,"2023/10/25, 14:41:09",158,0,23,true,Modelica,RWTH Aachen University - E.ON Energy Research Center - Institute for Energy Efficient Buildings and Indoor Climate,RWTH-EBC,"Modelica,Python,HTML,C,CSS,JavaScript,Java,TeX,Batchfile,Shell,Makefile,C++",https://ebc-tools.eonerc.rwth-aachen.de/aixlib,"b'![E.ON EBC RWTH Aachen University](./AixLib/Resources/Images/EBC_Logo.png)\n[![OM](https://ebc.pages.rwth-aachen.de/EBC_all/github_ci/AixLib/development/badge_file/om_readyness_badge.svg)](https://ebc.pages.rwth-aachen.de/EBC_all/github_ci/AixLib/development/badge_file/om_readyness_badge.svg)\n\n# AixLib\n\n**AixLib** is a Modelica model library for building performance simulations. \nThe library contains models of HVAC systems as well as high and reduced order building models. \nIt is being developed at [RWTH Aachen University, E.ON Energy Research Center, Institute for Energy Efficient Buildings and Indoor Climate (EBC)](http://www.ebc.eonerc.rwth-aachen.de/cms/~dmzz/E-ON-ERC-EBC/?lidx=1) in Aachen, Germany.\n\nAs the library is developed at RWTH Aachen University\'s EBC, the library\'s name **AixLib** is derived from the city\'s French name Aix-la-Chapelle, which the people of Aachen are very fond of and use a lot. With the name **AixLib** we follow this local tradition.\n\nIf you have any questions regarding **AixLib**, feel free to contact us at aixlib@eonerc.rwth-aachen.de.\n\n## Clone repository\n\n* To clone the repository for the first time, run: \n ``git clone --recurse-submodules https://github.com/RWTH-EBC/AixLib.git``\n* If you have already cloned the repository, run: \n ``git submodule update --init --recursive``\n* The default branch of AixLib is the ``development`` branch. This means that after cloning the repository, you will have the ``development`` branch checked out.\n\n## Release versions\n\nThe latest version is always available on the [release page](https://github.com/RWTH-EBC/AixLib/releases) and defined in [AixLib\'s package.mo](https://github.com/RWTH-EBC/AixLib/blob/master/AixLib/package.mo).\n\n## How to cite AixLib\n\nWe continuously improve **AixLib** and try to keep the community up-to-date with citable papers.\nPlease use the following article for citations when using or enhancing AixLib.\n\n@article{doi:10.1080/19401493.2023.2250521,
\nauthor = {Laura Maier and David Jansen and Fabian W\xc3\xbcllhorst and Martin Kremer and Alexander K\xc3\xbcmpel and Tobias Blacha and Dirk M\xc3\xbcller},
\ntitle = {AixLib: an open-source Modelica library for compound building energy systems from component to district level with automated quality management},
\njournal = {Journal of Building Performance Simulation},
\nvolume = {0},
\nnumber = {0},
\npages = {1-24},
\nyear = {2023},
\npublisher = {Taylor & Francis},
\ndoi = {10.1080/19401493.2023.2250521},
\nURL = {https://doi.org/10.1080/19401493.2023.2250521 },
\neprint = {https://doi.org/10.1080/19401493.2023.2250521 }
\n}\n\n## Publications using AixLib\n\nPlease see the [publications list](https://github.com/RWTH-EBC/AixLib/blob/master/PUBLICATIONS.md)\n\n## How to contribute to the development of AixLib\n\nYou are invited to contribute to the development of **AixLib**.\nIssues can be reported using this site\'s [Issues section](https://github.com/RWTH-EBC/AixLib/issues).\nFurthermore, you are welcome to contribute via [Pull Requests](https://github.com/RWTH-EBC/AixLib/pulls). The workflow for changes is described in our [Wiki](https://github.com/RWTH-EBC/AixLib/wiki).\n\n## License\n\nThe **AixLib** Library is released by RWTH Aachen University, E.ON Energy Research Center, Institute for Energy Efficient Buildings and Indoor Climate and is available under a 3-clause BSD license.\nSee [AixLib Library license](https://htmlpreview.github.io/?https://github.com/rwth-ebc/aixlib/blob/master/AixLib/legal.html).\n\n## Acknowledgements\n\nParts of **AixLib** have been developed within publicly funded projects and with financial support by BMWi (German Federal Ministry for Economic Affairs and Energy).\n'",",https://doi.org/10.1080/19401493.2023.2250521,https://doi.org/10.1080/19401493.2023.2250521","2014/07/10, 08:04:06",3394,CUSTOM,303,5605,"2023/10/25, 08:28:40",78,761,1394,67,0,26,0.7,0.7573127611700418,"2023/02/09, 13:31:31",v1.3.2,1,60,false,,false,true,,,https://github.com/RWTH-EBC,http://www.ebc.eonerc.rwth-aachen.de/,"RWTH Aachen University, Aachen, Germany",,,https://avatars.githubusercontent.com/u/8121773?v=4,,, TEASER,Tool for Energy Analysis and Simulation for Efficient Retrofit.,RWTH-EBC,https://github.com/RWTH-EBC/TEASER.git,github,"python,simulation,buildings,urban-energy-modeling,hacktoberfest",Buildings and Heating,"2023/09/14, 15:31:25",99,11,21,true,Python,RWTH Aachen University - E.ON Energy Research Center - Institute for Energy Efficient Buildings and Indoor Climate,RWTH-EBC,Python,,"b'![E.ON EBC RWTH Aachen University](docs/source/_static/EBC_Logo.png)\n\n# TEASER - Tool for Energy Analysis and Simulation for Efficient Retrofit\n\n[![License](http://img.shields.io/:license-mit-blue.svg)](http://doge.mit-license.org)\n[![Coverage Status](https://coveralls.io/repos/github/RWTH-EBC/TEASER/badge.svg)](https://coveralls.io/github/RWTH-EBC/TEASER)\n[![Build Status](https://travis-ci.org/RWTH-EBC/TEASER.svg?branch=master)](https://travis-ci.org/RWTH-EBC/TEASER.svg?branch=master)\n[![Binder](https://mybinder.org/badge_logo.svg)](https://mybinder.org/v2/gh/RWTH-EBC/TEASER/master?labpath=docs%2Fjupyter_notebooks)\n\nTEASER (Tool for Energy Analysis and Simulation for Efficient Retrofit) allows\nfast generation of archetype buildings with low input requirements and the\nexport of individual dynamic simulation models for the below-mentioned Modelica\nlibraries. These libraries all use the framework of the [Modelica IBPSA\nlibrary](https://github.com/ibpsa/modelica). 
TEASER is being developed at the\n[RWTH Aachen University, E.ON Energy Research Center, Institute for Energy\nEfficient Buildings and Indoor\nClimate](https://www.ebc.eonerc.rwth-aachen.de/cms/~dmzz/E-ON-ERC-EBC/?lidx=1).\n\n * [AixLib](https://github.com/RWTH-EBC/AixLib)\n * [Buildings](https://github.com/lbl-srg/modelica-buildings)\n * [BuildingSystems](https://github.com/UdK-VPT/BuildingSystems)\n * [IDEAS](https://github.com/open-ideas/IDEAS).\n\nThe full documentation of TEASER including examples and description of modules,\nclasses and functions can be found at the website:\n\n * http://rwth-ebc.github.io/TEASER/\n\nThis GitHub page will be used to further develop the package and make it\navailable under the\n[MIT License](https://github.com/RWTH-EBC/TEASER/blob/master/License.md).\n\nIf you have any questions regarding TEASER, feel free to contact us at\n[ebc-teaser@eonerc.rwth-aachen.de](mailto:ebc-teaser@eonerc.rwth-aachen.de).\n\n\n## Description\n\nThe energy supply of buildings in an urban context is currently undergoing significant\nchanges. The increase of renewable energy sources for electrical and thermal\nenergy generation will require flexible and secure energy storage and\ndistribution systems. To reflect and consider these changes in energy systems\nand buildings, dynamic simulation is one key element, in particular when it\ncomes to thermal energy demand on a minutely or hourly scale.\nSparse and limited access to detailed building information as well as computing\ntimes are challenges for building simulation on an urban scale. In addition,\ndata acquisition and modeling for Building Performance Simulation (BPS) are\ntime consuming and error-prone. To enable the use of BPS on an urban scale, we\npresent the TEASER tool, an open framework for urban energy modeling of\nbuilding stocks. TEASER provides an easy interface for multiple data sources,\ndata enrichment (where necessary) and export of ready-to-run Modelica simulation\nmodels for all libraries supporting the\n[Modelica IBPSA library](https://github.com/ibpsa/modelica).\n\n\n## Version\n\nTEASER is an ongoing research project; the current version is still a pre-release.\n\n## How to use TEASER\n\n### Examples and jupyter notebooks\n\nWe provide different examples to show the usage of TEASER.\nCheck out the files under teaser/examples or the jupyter-notebooks available here: docs/jupyter-notebooks.\nIf you just want to read the examples on GitHub, check them here: docs/examples.\n\n### Dependencies\n\nTEASER is currently tested against Python 3.6 and 3.7. Older versions of Python may\nstill work, but are no longer actively supported.\nUsing a Python distribution is recommended as they already contain (or easily\nsupport installation of) many Python packages (e.g. SciPy, NumPy, pip, PyQT,\netc.) that are used in the TEASER code. Two examples of those distributions are:\n\n1. https://winpython.github.io/ WinPython comes along with a lot of Python\npackages (e.g. SciPy, NumPy, pip, PyQT, etc.).\n2. http://conda.pydata.org/miniconda.html Conda is an open source package\nmanagement system and environment management system for installing multiple\nversions of software packages and their dependencies and switching easily\nbetween them.\n\nIn addition, TEASER requires some specific Python packages:\n\n1. Mako: template engine\n install on a python-enabled command line with `pip install -U mako`\n2. pandas: popular data analysis library\n install on a python-enabled command line with `pip install -U pandas`\n3. 
pytest: unit test engine\n install on a python-enabled command line with `pip install -U pytest`\n\n### Installation\n\nThe best option to install TEASER is to use pip:\n\n`pip install teaser`\n\nIf you actively develop TEASER, you can clone this repository by using:\n\n `git clone [SSH-Key/Https]`\n\nand then run:\n\n `pip install -e [Path/to/your/Teaser/Clone]` which will install the local version of TEASER.\n\n\n### How to contribute to the development of TEASER\nYou are invited to contribute to the development of TEASER. You may report any issues by using the [Issues](https://github.com/RWTH-EBC/TEASER/issues) button.\nFurthermore, you are welcome to contribute via [Pull Requests](https://github.com/RWTH-EBC/TEASER/pulls).\nThe workflow for changes is described in our [Wiki](https://github.com/RWTH-EBC/TEASER/wiki).\n\n## How to cite TEASER\n\n+ TEASER: an open tool for urban energy modelling of building stocks. Remmen P., Lauster M., Mans M., Fuchs M., Osterhage T., M\xc3\xbcller D.. Journal of Building Performance Simulation, February 2017,\n[pdf](http://dx.doi.org/10.1080/19401493.2017.1283539),\n[bibtex](https://github.com/RWTH-EBC/TEASER/tree/master/doc/cite_jbps.bib)\n\n### TEASER related publications\n\n+ CityGML Import and Export for Dynamic Building Performance Simulation in Modelica. Remmen P.,\nLauster M., Mans M., Osterhage T., M\xc3\xbcller D.. BSO16, p.329-336, September 2016,\n[pdf](http://www.ibpsa.org/proceedings/BSO2016/p1047.pdf),\n[bibtex](https://github.com/RWTH-EBC/TEASER/tree/master/doc/cite.bib)\n\n+ Scalable Design-Driven Parameterization of Reduced Order Models Using Archetype Buildings with TEASER.\nLauster M., Mans M., Remmen P., Fuchs M., M\xc3\xbcller D.. BauSIM2016, p.535-542, September 2016,\n[pdf](https://www.researchgate.net/profile/Moritz_Lauster/publication/310465372_Scalable_Design-Driven_Parameterization_of_Reduced_Order_Models_using_Archetype_Buildings_with_TEASER/links/582ee96908ae004f74be1fb0.pdf?origin=publication_detail&ev=pub_int_prw_xdl&msrp=eEyK6WYemhC8wK7xkMEPRDO4obE4uxBN4-0BdBy1Ldwhy9FhCe1pXfNObJYubvC_aZN0IWDPf9uayBo3u79bsZvg3hzUoLoYRatES2ARH8c.B2cYwSICt0IOa7lD-4oAiEa_3TtrO-7k-1W9chuNQwr_VNMCpZ5ubSb-eY2D77rGUP4S6wS8m6vudUUbMlXbQQ.Cledgd1Q9fPp11nYGpcpKNhSS6bVTqAEXeMZPkiV3HsJxcVWTFj4Hr_jmLZ0MOzDxbDEZObcGiKfmTL_9k_59A)\n\n+ Refinement of Dynamic Non-Residential Building Archetypes Using Measurement Data and Bayesian Calibration\nRemmen P., Sch\xc3\xa4fer J., M\xc3\xbcller D.. Building Simulation 2019, September 2019,\n[pdf](https://www.researchgate.net/publication/337925776_Refinement_of_Dynamic_Non-Residential_Building_Archetypes_Using_Measurement_Data_and_Bayesian_Calibration)\n\n+ Selecting statistical indices for calibrating building energy models. Vogt, M., Remmen P., Lauster M., Fuchs M. , M\xc3\xbcller D.. Building and Environment 144, pages 94-107, October 2018. [bibtex](https://github.com/RWTH-EBC/TEASER/tree/master/doc/cite_be.bib)\n\n+ The [Institute of Energy Efficiency and Sustainable Building](https://www.e3d.rwth-aachen.de/go/id/iyld/?) published a parametric study of TEASER where all functions and parameters used in TEASER are gathered and explained. 
The publication can be found [here](https://publications.rwth-aachen.de/record/749801/files/749801.pdf).\n\n\n## License\n\nTEASER is released by RWTH Aachen University, E.ON Energy\nResearch Center, Institute for Energy Efficient Buildings and Indoor Climate,\nunder the\n[MIT License](https://github.com/RWTH-EBC/TEASER/blob/master/License.md).\n\n## Acknowledgements\n\nThis work was supported by the Helmholtz Association under the Joint Initiative \xe2\x80\x9cEnergy System 2050 \xe2\x80\x93 A Contribution of the Research Field Energy\xe2\x80\x9d.\n\nParts of TEASER have been developed within publicly funded projects\nand with financial support by BMWi (German Federal Ministry for Economic\nAffairs and Energy).\n\n\n'",,"2015/10/22, 13:57:56",2925,CUSTOM,56,2776,"2023/08/31, 14:59:00",56,345,687,8,55,5,1.7,0.7479784366576819,"2021/07/23, 13:12:18",v0.7.6,0,21,false,,false,false,"modelica-tools/geojson-modelica-translator,Galactromeda/geojson-modelica-translator,ahmadikalkhorani/geojson-modelica-translator,ChengnanShi-Work/geojson-modelica-translator,mingzhe37/geojson-modelica-translator,nllong/des-example-analysis,RWTH-EBC/districtgenerator,urbanopt/geojson-modelica-translator-examples,Helarga/geojson-modelica-translator,urbanopt/geojson-modelica-translator,RWTH-EBC/X-HD",,https://github.com/RWTH-EBC,http://www.ebc.eonerc.rwth-aachen.de/,"RWTH Aachen University, Aachen, Germany",,,https://avatars.githubusercontent.com/u/8121773?v=4,,, pyCity,A Python package for data handling and scenario generation of city districts and urban energy systems.,RWTH-EBC,https://github.com/RWTH-EBC/pyCity.git,github,"python,urban,city,modeling,urban-energy-modeling",Buildings and Heating,"2021/05/21, 14:32:02",20,0,1,true,Python,RWTH Aachen University - E.ON Energy Research Center - Institute for Energy Efficient Buildings and Indoor Climate,RWTH-EBC,"Python,Jupyter Notebook",,"b'![E.ON EBC RWTH Aachen University](./doc/_static/EBC_Logo.png)\n\n\n[![Build Status](https://travis-ci.org/RWTH-EBC/pyCity.svg?branch=master)](https://travis-ci.org/RWTH-EBC/pyCity)\n[![Coverage Status](https://coveralls.io/repos/github/RWTH-EBC/pyCity/badge.svg)](https://coveralls.io/github/RWTH-EBC/pyCity)\n[![License](http://img.shields.io/:license-mit-blue.svg)](http://doge.mit-license.org)\n\n\n# pycity_base\n\nPython package for data handling and scenario generation of city districts and urban energy systems.\n\n## Contributing\n\n1. Clone repository: `git clone git@github.com:RWTH-EBC/pyCity.git` (for SSH usage)\nAlternatively: Clone via https: `git clone https://github.com/RWTH-EBC/pyCity.git`\n2. Open an issue at [https://github.com/RWTH-EBC/pyCity/issues](https://github.com/RWTH-EBC/pyCity/issues)\n3. Checkout development branch: `git checkout development` \n4. Update local development branch (if necessary): `git pull origin development`\n5. Create your feature branch: `git checkout -b issueXY_explanation`\n6. Commit your changes: `git commit -m ""Add some feature #XY""`\n7. Push to the branch: `git push origin issueXY_explanation`\n8. Submit a pull request from issueXY_explanation to development branch via [https://github.com/RWTH-EBC/pyCity/pulls](https://github.com/RWTH-EBC/pyCity/pulls)\n\n## Installation\n\n*One important issue at the beginning: Please do NOT confuse pycity_base with\nthe pycity package on pypi! This (other) pycity package is installable via \npip. 
However, if you want to install pycity_base, follow these instructions.*\n\npycity_base requires the following Python packages:\n- numpy==1.19.5\n- matplotlib==3.3.4\n- pandas==1.1.5\n- pytest==6.2.4\n- xlrd==1.2.0\n- networkx==2.5.1\n- Shapely==1.7.1\n- pyproj==3.0.1\n\nas well as the EBC Python packages:\n\n- richardsonpy==0.2.1\n\nwhich is available at [https://github.com/RWTH-EBC/richardsonpy](https://github.com/RWTH-EBC/richardsonpy)\n\n- uesgraphs==0.6.4\n(with dependencies on shapely and pyproj)\n\nwhich is available at [https://github.com/RWTH-EBC/uesgraphs](https://github.com/RWTH-EBC/uesgraphs)\n\nrichardsonpy and uesgraphs can be installed via pip.\n\n### Installation of pycity_base\n\nThe latest version of pycity_base is 0.3.2.\n\nWhen uesgraphs and its dependencies are installed, you should be able to install\npycity_base via pip:\n\n`pip install pycity_base`\n\nor:\n\n`pip install -e \'\'`\n\nor:\n\n` -m pip install -e \'\'`\n\n\nYou can check whether the installation has been successful\nby creating a new .py file and trying to import uesgraphs and pycity_base.\n\n`import uesgraphs`\n\n`import pycity_base`\n\nImport should be possible without errors.\n\n## Example usage\n\n```Python\nimport shapely.geometry.point as point\nimport matplotlib.pyplot as plt\n\nimport uesgraphs.visuals as uesvis\n\nimport pycity_base.classes.timer as time\nimport pycity_base.classes.weather as weath\nimport pycity_base.classes.prices as price\nimport pycity_base.classes.environment as env\nimport pycity_base.classes.demand.apartment as apart\nimport pycity_base.classes.demand.occupancy as occ\nimport pycity_base.classes.demand.domestic_hot_water as dhw\nimport pycity_base.classes.demand.electrical_demand as eldem\nimport pycity_base.classes.demand.space_heating as spaceheat\nimport pycity_base.classes.building as build\nimport pycity_base.classes.city_district as citydist\nimport pycity_base.classes.supply.building_energy_system as besys\nimport pycity_base.classes.supply.boiler as boil\nimport pycity_base.classes.supply.photovoltaic as pvsys\n\n\ndef main():\n # Define the time discretization for the timer object\n timestep = 3600 # in seconds\n\n # Define the total number of timesteps (in this case for one year)\n nb_timesteps = int(365 * 24 * 3600 / timestep)\n\n # Generate environment with timer, weather, and prices objects\n # ######################################################################\n timer = time.Timer(timeDiscretization=timestep,\n timestepsTotal=nb_timesteps)\n weather = weath.Weather(timer=timer)\n prices = price.Prices()\n\n environment = env.Environment(timer=timer, weather=weather, prices=prices)\n\n # Generate city district object\n # ######################################################################\n city_district = citydist.CityDistrict(environment=environment)\n # Annotations: To prevent some methods of subclasses uesgraph / nx.Graph\n # from failing (e.g. \'.subgraph()) environment is set as optional input\n # parameter. 
However, it is necessary to use an environment object as\n # input parameter to initialize a working cityDistrict object!\n\n # Empty dictionary for building positions\n dict_pos = {}\n\n # Generate shapely point positions\n dict_pos[0] = point.Point(0, 0) # (x, y)\n dict_pos[1] = point.Point(20, 0)\n\n # Use for loop to generate two identical building objects for city\n # district\n # ######################################################################\n for i in range(2):\n living_area = 200 # in m^2\n spec_sh_dem = 160 # Specific space heating demand in kWh/m^2\n number_occupants = 3 # Total number of occupants\n\n # Generate space heating demand object (holding loadcurve attribute\n # with space heating power)\n heat_demand = spaceheat.SpaceHeating(\n environment=environment,\n method=1, # Standard load profile\n livingArea=living_area, # in m^2\n specificDemand=spec_sh_dem) # in kWh/m^2\n\n # Generate occupancy object with stochastic user profile\n occupancy = occ.Occupancy(environment=environment,\n number_occupants=number_occupants)\n\n # Generate electrical demand object\n el_dem_stochastic = eldem.ElectricalDemand(\n environment=environment,\n method=2, # stochastic Richardson profile (richardsonpy)\n total_nb_occupants=number_occupants, # Number of occupants\n randomizeAppliances=True, # Random choice of installed appliances\n lightConfiguration=10, # Light bulb configuration nb.\n occupancy=occupancy.occupancy, # Occupancy profile (600 sec resolution)\n prev_heat_dev=True, # Prevent space heating and hot water devices\n annualDemand=None, # Annual el. demand in kWh could be used for\n do_normalization=False) # rescaling (if do_normalization is True)\n # Annotation: The calculation of stochastic electric load profiles\n # is time consuming. 
If you prefer a faster method, you can either\n # hand over your own array-like load curve (method=0) or generate a\n # standardized load profile (SLP) (method=1)\n\n # Generate domestic hot water demand object\n dhw_obj = dhw.DomesticHotWater(\n environment=environment,\n tFlow=60, # DHW output temperature in degree Celsius\n method=2, # Stochastic dhw profile\n supplyTemperature=25, # DHW inlet flow temperature in degree C.\n occupancy=occupancy.occupancy) # Occupancy profile (600 sec resolution)\n\n # Generate apartment and add demand curves\n apartment = apart.Apartment(environment)\n apartment.addMultipleEntities([heat_demand,\n el_dem_stochastic,\n dhw_obj])\n\n # Generate building and add apartment\n building = build.Building(environment)\n building.addEntity(apartment)\n\n # Add buildings to city district\n city_district.addEntity(entity=building,\n position=dict_pos[i])\n\n # Access information on city district object instance\n # ######################################################################\n print(\'Get number of building entities:\')\n print(city_district.get_nb_of_building_entities())\n print()\n\n print(\'Get list with node ids of building entities:\')\n print(city_district.get_list_build_entity_node_ids())\n print()\n\n print(\'Get city district overall space heating power load curve:\')\n print(city_district.get_aggr_space_heating_power_curve(current_values=True))\n print()\n \n print(\'Get city district overall space cooling power load curve:\')\n print(city_district.get_aggr_space_cooling_power_curve(current_values=True))\n print()\n\n # We can use the Visuals class of uesgraphs to plot the city district\n\n # Generate uesgraphs visuals object instance\n uesvisuals = uesvis.Visuals(uesgraph=city_district)\n\n fig = plt.figure()\n ax = fig.gca()\n ax = uesvisuals.create_plot_simple(ax=ax)\n plt.show()\n plt.close()\n\n # Access buildings\n # ######################################################################\n # As city_district is a networkx graph object, we can access the building\n # entities with the corresponding building node.\n # Pointer to building object with id 1001:\n building_1001 = city_district.nodes[1001][\'entity\']\n\n print(\'Get building 1001 electric load curve:\')\n print(building_1001.get_electric_power_curve())\n print()\n\n # Add energy systems to buildings\n # ######################################################################\n # We can also add building energy systems (BES) to each building object\n\n # Generate boiler object\n boiler = boil.Boiler(environment=environment,\n qNominal=10000, # Boiler thermal power in Watt\n eta=0.85) # Boiler efficiency\n\n # Generate PV module object\n pv = pvsys.PV(environment=environment,\n area=30, # Area in m^2\n eta=0.15) # Electrical efficiency at NOCT conditions\n\n # Instantiate BES (container object for all energy systems)\n bes = besys.BES(environment)\n\n # Add energy systems to bes\n bes.addMultipleDevices([boiler, pv])\n\n # Add bes to building 1001\n building_1001.addEntity(entity=bes)\n\n print(\'Does building 1001 have a building energy system (BES)?\')\n print(building_1001.hasBes)\n\n # Access boiler nominal thermal power\n print(\'Nominal thermal power of boiler in kW:\')\n print(building_1001.bes.boiler[0].qNominal / 1000)\n\n\nif __name__ == \'__main__\':\n # Run program\n main()\n\n```\n\n## Tutorial\n\npycity_base also has a jupyter notebook tutorial script under pycity/examples/tutorials/... 
\n\n## Tutorial\n\npycity_base also has a jupyter notebook tutorial script under pycity/examples/tutorials/...\nTo open the jupyter notebook, open a command/terminal window and change to the directory\nwhere tutorial_pycity_calc_1.ipynb is stored. Then type \'jupyter notebook\' (without the quotes) and press Enter.\nJupyter notebook should open in your browser (such as Firefox). Click on a notebook to start.\nIf your Python path does not point at your Python installation, you have to\nopen jupyter notebook directly, e.g. by looking for jupyter.exe in your distribution.\n\n## How to cite pycity_base\n\n+ Schiefelbein, J., Rudnick, J., Scholl, A., Remmen, P., Fuchs, M., M\xc3\xbcller, D. (2019),\nAutomated urban energy system modeling and thermal building simulation based on OpenStreetMap data sets,\nBuilding and Environment,\nVolume 149,\nPages 630-639,\nISSN 0360-1323\n[pdf](https://doi.org/10.1016/j.buildenv.2018.12.025),\n[bibtex](https://github.com/RWTH-EBC/pyCity/tree/master/doc/S0360132318307686.bib)\n\nIf you require a reference in German:\n+ Schiefelbein, J., Javadi, A., Fuchs, M., M\xc3\xbcller, D., Monti, A. and Diekerhof, M. (2017), Modellierung und Optimierung von Mischgebieten. Bauphysik, 39: 23-32. doi:10.1002/bapi.201710001\n[pdf](https://doi.org/10.1002/bapi.201710001),\n[bibtex](https://github.com/RWTH-EBC/pyCity/tree/master/doc/pericles_1437098039.bib)\n\n## License\n\npyCity is released by RWTH Aachen University\'s E.ON Energy Research Center (E.ON ERC),\nInstitute for Energy Efficient Buildings and Indoor Climate (EBC) and\nInstitute for Automation of Complex Power Systems (ACS)\nunder the [MIT License](https://opensource.org/licenses/MIT)\n\n## Acknowledgements\n\nWe gratefully acknowledge the financial support by BMWi \n(German Federal Ministry for Economic Affairs and Energy) \nunder promotional references 03ET1138D and 03ET1381A.\n\n\n'",",https://doi.org/10.1016/j.buildenv.2018.12.025,https://doi.org/10.1002/bapi.201710001","2015/03/06, 14:08:28",3155,MIT,0,470,"2023/09/23, 14:04:07",4,161,288,1,32,0,0.5,0.5862068965517242,"2021/05/21, 14:12:24",v0.3.2,0,7,false,,false,false,,,https://github.com/RWTH-EBC,http://www.ebc.eonerc.rwth-aachen.de/,"RWTH Aachen University, Aachen, Germany",,,https://avatars.githubusercontent.com/u/8121773?v=4,,, IDEAS,A Modelica model library for integrated building or district energy simulations.,open-ideas,https://github.com/open-ideas/IDEAS.git,github,,Buildings and Heating,"2023/10/18, 17:10:57",110,0,16,true,Modelica,OpenIDEAS,open-ideas,"Modelica,Motoko,Python,HTML,CSS,JavaScript,C,Java,Jupyter Notebook,TeX,Makefile,Batchfile,Shell,Dockerfile",,"b""IDEAS v3.0.0\n============\nModelica model environment for Integrated District Energy Assessment Simulations (IDEAS), allowing simultaneous transient simulation of thermal and electrical systems at both building and feeder level. This Modelica library was originally developed by KU Leuven and [3E](https://3e.eu) and is currently developed and maintained by the [Thermal Systems Simulation (The SySi)](http://the.sysi.be) research group of KU Leuven. It includes significant contributions by the [Building Physics and Sustainable Design Section](https://bwk.kuleuven.be/bwf) of KU Leuven, the [Building Physics Research Group](https://www.ugent.be/ea/architectuur/en/research/research-groups/building-physics) of UGent, [IBPSA project 1](https://ibpsa.github.io/project1/), [IEA EBC Annex 60](https://iea-annex60.org) and the [Electrical Energy Systems and Applications Section](https://www.esat.kuleuven.be/electa) of KU Leuven. 
\n\n## Release history\n+ May 3rd, 2022: IDEAS v3.0.0 has been released. This includes an update to MSL 4.0.0.\n+ April 2nd, 2022: IDEAS v2.2.2 has been released. This is the final release before updating MSL 4.0.0.\n+ September 20th, 2021: IDEAS v2.2.1 has been released.\n+ June 9th, 2021: IDEAS v2.2 has been released.\n+ February 28th, 2019: IDEAS v2.1 has been released.\n+ September 28th, 2018: IDEAS v2.0 has been released.\n+ May 5th, 2017: IDEAS v1.0 has been released. \n  February 16th 2018: A [paper describing IDEAS v1.0](http://www.tandfonline.com/doi/full/10.1080/19401493.2018.1428361) has been published online.\n+ September 2nd, 2015: IDEAS v0.3 has been released.\n\n## Contributions and community\nWe love to hear what you are using IDEAS for. Feel free to open an issue to provide feedback or contact us by email. If you like our library, you can support us by adding a star at the top right of our GitHub page.\n\n## Getting started\nThe following packages contain examples that can help you get started with IDEAS:\n - IDEAS.Buildings.Components.Examples (Simple examples of individual features)\n - IDEAS.Buildings.Examples (Simple examples)\n - IDEAS.Examples.TwinHouses (The Holzkirchen twin house validation experiment)\n - IDEAS.Examples.PPD12 (A model of a terraced house, including heating and ventilation.)\n - IDEAS.Examples.IBPSA (Models for BOPTEST.)\n - IDEAS.Examples.Tutorial (A tutorial.) \nSee the documentation sections of the respective models for more details.\n\n## Tool support\nWe aim to adhere to the full Modelica specification and provide support for Dymola and OpenModelica, so any tool that supports the full specification should be able to run our models. Feel free to file a bug report in case we do not adhere to the Modelica specification.\n\n## Release notes\n[This is a link to detailed release notes.](https://github.com/open-ideas/IDEAS/blob/master/ReleaseNotes.md)\n\n## Unit tests\nThe library is unit tested using BuildingsPy. Automated unit tests are run on GitHub Actions with a self-hosted runner at KU Leuven.\n\n## License\nIDEAS is licensed by [KU Leuven](http://www.kuleuven.be) and [3E](http://www.3e.eu) under a [BSD 3 license](https://htmlpreview.github.io/?https://github.com/open-ideas/IDEAS/blob/master/IDEAS/legal.html).\n\n\n\n## References\n### Development of IDEAS\n - F. Jorissen, G. Reynders, R. Baetens, D. Picard, D. Saelens, and L. Helsen. (2018) [Implementation and Verification of the IDEAS Building Energy Simulation Library.](http://www.tandfonline.com/doi/full/10.1080/19401493.2018.1428361) *Journal of Building Performance Simulation*, **11** (6), 669-688, doi: 10.1080/19401493.2018.1428361.\n - R. Baetens, R. De Coninck, F. Jorissen, D. Picard, L. Helsen, D. Saelens (2015). OpenIDEAS - An Open Framework for Integrated District Energy Simulations. In Proceedings of Building Simulation 2015, Hyderabad, 347--354.\n - R. Baetens. (2015) On externalities of heat pump-based low-energy dwellings at the low-voltage distribution grid. PhD thesis, Arenberg Doctoral School, KU Leuven.\n - F. Jorissen, W. Boydens, and L. Helsen. (2017) Validated air handling unit model using indirect evaporative cooling. *Journal of Building Performance Simulation*, **11** (1), 48\xe2\x80\x9364, doi: 10.1080/19401493.2016.1273391\n - R. Baetens, D. Saelens. (2016) Modelling uncertainty in district energy simulations by stochastic residential occupant behaviour. *Journal of Building Performance Simulation* **9** (4), 431\xe2\x80\x93447, doi:10.1080/19401493.2015.1070203.\n - M. Wetter, M. 
Fuchs, P. Grozman, L. Helsen, F. Jorissen, M. Lauster, D. M\xc3\xbcller, C. Nytsch-Geusen, D. Picard, P. Sahlin, and M. Thorade. (2015) IEA EBC Annex 60 Modelica Library - An International Collaboration to Develop a Free Open-Source Model Library for Buildings and Community Energy Systems. In Proceedings of Building Simulation 2015, Hyderabad, 395\xe2\x80\x93402.\n - B. van der Heijde, M. Fuchs, C. Ribas Tugores, G. Schweiger, K. Sartor, D. Basciotti, D. M\xc3\xbcller, C. Nytsch-Geusen, M. Wetter, L. Helsen (2017). Dynamic equation-based thermo-hydraulic pipe model for district heating and cooling systems. *Energy Conversion and Management*, **151**, 158-169.\n - D. Picard, L. Helsen (2014). Advanced Hybrid Model for Borefield Heat Exchanger Performance Evaluation, an Implementation in Modelica. In Proceedings of the 10th International Modelica Conference. Lund, 857-866.\n - D. Picard, L. Helsen (2014). A New Hybrid Model For Borefield Heat Exchangers Performance Evaluation. 2014 ASHRAE ANNUAL CONFERENCE: Vol. 120 (2). ASHRAE: Ground Source Heat Pumps: State of the Art Design, Performance and Research. Seattle, 1-8.\n - D. Picard, F. Jorissen, L. Helsen (2015). Methodology for Obtaining Linear State Space Building Energy Simulation Models. 11th International Modelica Conference. Paris, 21-23 September 2015 (pp. 51-58).\n - F. Jorissen, L. Helsen (2019). Integrated Modelica Model and Model Predictive Control of a Terraced House Using IDEAS. 13th International Modelica Conference. Regensburg, 4-6 March 2019.\n - K. De Jonge, F. Jorissen, L. Helsen, J. Laverge (2021). Wind-Driven Air Flow Modelling in Modelica: Validation and Implementation in the IDEAS Library. In Proceedings of the 17th IBPSA Conference. Bruges, Belgium, 1-3 September\n - F. Jorissen, M. Wetter, L. Helsen (2018). Simplifications for hydronic system models in Modelica. *Journal of Building Performance Simulation* **11** (6). 639-654\n\n### Applications of IDEAS\n - D. Picard. (2017) Modeling, optimal control and HVAC design of large buildings using ground source heat pump systems. PhD thesis, Arenberg Doctoral School, KU Leuven.\n - G. Reynders. (2015) Quantifying the impact of building design on the potential of structural storage for active demand response in residential buildings. PhD thesis, Arenberg Doctoral School, KU Leuven.\n - R. De Coninck. (2015) Grey-box based optimal control for thermal systems in buildings - Unlocking energy efficiency and flexibility. PhD thesis, Arenberg Doctoral School, KU Leuven.\n - G. Reynders, T. Nuytten, D. Saelens. (2013) Potential of structural thermal mass for demand-side management in dwellings. *Building and Environment* **64**, 187\xe2\x80\x93199, doi:10.1016/j.buildenv.2013.03.010.\n - R. De Coninck, R. Baetens, D. Saelens, A. Woyte, L. Helsen (2014). Rule-based demand side management of domestic hot water production with heat pumps in zero energy neighbourhoods. *Journal of Building Performance Simulation*, **7** (4), 271-288.\n - R. Baetens, R. De Coninck, J. Van Roy, B. Verbruggen, J. Driesen, L. Helsen, D. Saelens (2012). Assessing electrical bottlenecks at feeder level for residential net zero-energy buildings by integrated system simulation. *Applied Energy*, **96**, 74-83.\n - G. Reynders, J. Diriken, D. Saelens. (2014) Quality of grey-box models and identified parameters as function of the accuracy of input and observation signals. *Energy & Buildings* **82**, 263\xe2\x80\x93274, doi:10.1016/j.enbuild.2014.07.025.\n - F. Jorissen, L. Helsen, M. 
Wetter (2015). Simulation Speed Analysis and Improvements of Modelica Models for Building Energy Simulation. In Proceedings of the 11th International Modelica Conference. Paris, 59-69.\n - C. Protopapadaki, G. Reynders, D. Saelens (2014). Bottom-up modeling of the Belgian residential building stock: impact of building stock descriptions. In Proceedings of the 9th International Conference on System Simulation in Buildings. Li\xc3\xa8ge.\n - G. Reynders, J. Diriken, D. Saelens (2014). Bottom-up modeling of the Belgian residential building stock: impact of model complexity. In Proceedings of the 9th International Conference on System Simulation in Buildings. Li\xc3\xa8ge.\n - E. Van Kenhove, A. Aertgeerts, J. Laverge, A. Janssens (2015). Energy Efficient Renovation of Heritage Residential Buildings Using Modelica Simulations. In Proceedings of Building Simulation 2015: 14th Conference of IBPSA. Hyderabad, 535\xe2\x80\x93542.\n - G. Reynders, J. Diriken, D. Saelens (2015). Impact of the heat emission system on the identification of grey-box models for residential buildings. *Energy Procedia* **78**, 3300-3305, doi: 10.1016/j.egypro.2015.11.740.\n - I. De Jaeger, G. Reynders, D. Saelens (2017). Impact of spatial accuracy on district energy simulations. *Energy Procedia* **132**, 561-566, doi: 10.1016/j.egypro.2017.09.741\n - G. Reynders, R. Andriamamonjy, R. Klein, D. Saelens (2017). Towards an IFC-Modelica Tool Facilitating Model Complexity Selection for Building Energy Simulation. In Proceedings of the 15th Conference of IBPSA. California.\n - G. Reynders, J. Diriken, D. Saelens (2017). Generic characterization method for energy flexibility: Applied to structural thermal storage in residential buildings. *Applied Energy* **198**, 192-202, doi: 10.1016/j.apenergy.2017.04.061\n - F. Jorissen. (2018) Toolchain for optimal control and design of energy systems in buildings. PhD thesis, Arenberg Doctoral School, KU Leuven.\n - F. Jorissen, W. Boydens, L. Helsen (2019) TACO, an automated toolchain for model predictive control of building systems: implementation and verification. *Journal of Building Performance Simulation* **12** (2). 180-192.\n - Picard D., Sourbron M., Jorissen F., Vana Z., Cigler J., Ferkl L., Helsen L. (2016). Comparison of Model Predictive Control Performance Using Grey-Box and White-Box Controller Models of a Multi-zone Office Building. International High Performance Buildings Conference. West Lafayette, 11-14 July 2016 (art.nr. 203).\n - F. Jorissen, W. Boydens, L. Helsen (2019) Model implementation and verification of the envelope, HVAC and controller of an office building in Modelica. *Journal of Building Performance Simulation* **12** (4), 445-464\n - F. Jorissen, D. Picard, K. Six, L. Helsen (2021) Detailed White-Box Non-Linear Model Predictive Control for Scalable Building HVAC Control. In Proceedings of the 14th Modelica Conference 2021. Online.\n - F. Jorissen, D. Picard, L. Helsen (2021). Strengths of Non-Linear White-Box MPC for Building HVAC Control. In Proceedings of the 17th IBPSA Conference. Bruges, Belgium, 1-3 September 2021.\n - A. Erfani Beyzaee, X. Yu, T.M. Kull, P. Bacher, T. Jafarinejad, and S. Roels (2021). Analysis of the impact of predictive models on the quality of the model predictive control for an experimental building. In Proceedings of the 17th IBPSA Conference. Bruges, 1-3 September 2021.\n - G. Reynders, A. Erfani Beyzaee, D. Saelens (2021). 
IEA EBC Annex71: Building energy performance assessment based on in-situ measurements.\n - D. Blum, J. Arroyo, S. Huang, J. Drgona, F. Jorissen, H. Taxt Walnum, C. Yan, K. Benne, D. Vrabie, M. Wetter, L. Helsen (2021). Building Optimization Testing Framework (BOPTEST) for Simulation-Based Benchmarking of Control Strategies in Buildings. *Journal of Building Performance Simulation* **14** (5), 586\xe2\x80\x93610.\n - W. Boydens, S. Feyaerts, A. Vandermeulen, L. Helsen (2021). Control strategy assessment of a small GSHP sourced DH system with end user DHW booster heat pumps. In Proceedings of the 13th IEA Heat Pump Conference. Jeju, art.nr. 301.\n - J. Jansen, F. Maertens, W. Boydens, L. Helsen (2021). Living lab 'De Schipjes': a zero-fossil-fuel energy concept in the historic city center of Bruges. In Proceedings of Building Simulation 2021: 17th Conference of IBPSA. Bruges.\n - J. Arroyo, C. Manna, F. Spiessens, L. Helsen (2021). An OpenAI-Gym environment for the Building Optimization Testing (BOPTEST) framework. In Proceedings of the 17th IBPSA Conference. Bruges, Belgium, 1-3 September 2021.\n - J. Arroyo, F. Spiessens, L. Helsen (2022). Comparison of Model Complexities in Optimal Control Tested in a Real Thermally Activated Building System. *Buildings*, **12** (5).\n - J. Arroyo, F. Spiessens, L. Helsen (2022). Comparison of Optimal Control Techniques for Building Energy Management. *Frontiers in Built Environment, section Indoor Environment. Research Topic: Artificial Intelligence Applications in Building\xe2\x80\x99s Thermal Management*, **8**.\n - J. Arroyo, C. Manna, F. Spiessens, L. Helsen (2022). Reinforced Model Predictive Control (RL-MPC) for Building Energy Management. *Applied Energy*, **309**.\n - J. Jansen, L. Helsen (2022). Non-linear model predictive control of a small-scale 4th generation district heating network with on/off heat pumps. In Proceedings of the 2nd International Sustainable Energy Conference. Graz, 204-212.\n - B. Merema, D. Saelens, H. Breesch (2021). Analysing modelling challenges of smart controlled ventilation systems in educational buildings. *Journal Of Building Performance Simulation* **14** (2), 116-131.\n - S. Meunier, C. Protopapadaki, R. Baetens, D. Saelens (2021). Impact of residential low-carbon technologies on low-voltage grid reinforcements. *Applied Energy* **297**, art.nr. 117057, 1-15.\n - J.E. Goncalves, H. Montazeri, T. van Hooff, D. Saelens (2021). Performance of building integrated photovoltaic facades: Impact of exterior convective heat transfer. *Applied Energy* **287**, art.nr. 116538\n - J.E. Goncalves, T. van Hooff, D. Saelens (2021). Simulating building integrated photovoltaic facades: Comparison to experimental data and evaluation of modelling complexity. *Applied Energy* **281**, art.nr. 116032.\n - J.E. Goncalves, T. van Hooff, D. Saelens (2020). Understanding the behaviour of naturally-ventilated BIPV modules: A sensitivity analysis. *Renewable Energy* **161**, 133-148\n - J.E, Goncalves, T. van Hooff, D. Saelens (2020). A physics-based high-resolution BIPV model for building performance simulations. *Solar Energy* **204**, 585-599\n - K. Spiliotis, J.E. Goncalves, D. Saelens, K. Baert, J. Driesen (2020). Electrical system architectures for building-integrated photovoltaics: A comparative analysis using a modelling framework in Modelica. *Applied Energy* **261**, art.nr. 114247. doi: 10.1016/j.apenergy.2019.114247\n - I. De Jaeger, A. Vandermeulen, B. van der Heijde, L. Helsen, D. Saelens (2020). 
Aggregating set-point temperature profiles for archetype-based simulations of the space heat demand within residential districts. *Journal of Building Performance Simulation* **13** (3).\n - C. Protopapadaki, D. Saelens (2019). Towards metamodeling the neighborhood-level grid impact of low-carbon technologies. *Energy and Buildings* **194**, 273-288.\n - V. Reinbold, C. Protopapadaki, J.P. Tavella, D. Saelens (2019). Assessing scalability of a low-voltage distribution grid co-simulation through functional mock-up interface. *Journal of Building Performance Simulation* **12** (5), 637-649.\n - R.A.L. Andriamamonjy, D. Saelens, R. Klein (2018). An auto-deployed model-based fault detection and diagnosis approach for Air Handling Units using BIM and Modelica. *Automation in Construction* **96**, 508-526.\n - R.A.L. Andriamamonjy, D. Saelens, R. Klein (2018). An automated IFC-based workflow for building energy performance simulation with Modelica. *Automation in Construction* **91**, 166-181. doi: 10.1016/j.autcon.2018.03.019\n - I. De Jaeger, G. Reynders, Y. Ma, D. Saelens (2018). Impact of building geometry description within district energy simulations. *Energy* **158**, 1060-1069\n - C. Protopapadaki, D. Saelens (2017). Heat pump and PV impact on residential low-voltage distribution grids as a function of building and district properties. *Applied Energy* **192**, 268-281.\n - B.J. Merema, D. Saelens, H. Breesch (2021). Co-Simulation approach to evaluate MPC strategies for all-air systems: case study. In Proceedings of the 17th IBPSA Conference. Bruges, Belgium, 1-3 September 2021\n - B. Merema, Q. Carton, D. Saelens, H. Breesch (2021). Implementation of MPC for an all-air system in an educational building. In: J. Kurnitski, M. Thalfeldt (Eds.), COLD CLIMATE HVAC & ENERGY 2021: vol. 246, art.nr. 11007. Presented at the 10th International SCANVAC Cold Climate Conference, Tallinn, Estonia, 18-21 April 2021.\n - F. Gonzalez, S. Meunier, C. Protopapadaki, Y. Perez, D. Saelens, M. Petit (2021). Impact of distributed energy resources and electric vehicle smart charging on low voltage grid stability. In CIRED 2021.\n - R. Claeys, C. Protopapadaki, D. Saelens, J. Desmet (2020). A Data-Driven Approach to Assessing and Improving Stochastic Residential Load Modeling for District-Level Simulations and PV Integration. In 2020 International Conference on Probabilistic Methods Applied to Power Systems (PMAPS) (pp. 1-6). IEEE.\n - D. Saelens, I. De Jaeger, F. B\xc3\xbcnning, M. Mans, A. Vandermeulen, B. van der Heijde, E. Garreau, A. Maccarini, \xc3\x98. R\xc3\xb8nneseth, I. Sartori, L. Helsen (2019). Towards a DESTEST: a District Energy Simulation Test Developed in IBPSA Project 1. Presented at the Building Simulation Conference 2019, Rome, 2-4 Sep 2019\n - T. Jafarinejad, I. De Jaeger, A. Erfani, D. Saelens (2021). Evaluating data-driven building stock heat demand forecasting models for energy optimization. art.nr. 30569. In Proceedings of the 17th IBPSA Conference. Bruges, Belgium, 1-3 September 2021 \n - A. Erfani, X. Yu, T.M. Kull, P. Bacher, T. Jafarinejad, S. Roels, D. Saelens (2021). Analysis of the impact of predictive models on the quality of the model predictive control for an experimental building. art.nr. 30566. In Proceedings of the 17th IBPSA Conference. Bruges, Belgium, 1-3 September 2021 \n - M. Delwati, D. Saelens, P. Geyer (2019). Multi-Scale Simulation of a Thermochemical District Network. art.nr. 210652, In Proceedings of the 17th IBPSA Conference. 
Bruges, Belgium, 1-3 September 2021\n - C. Protopapadaki, D. Saelens (2018). Sensitivity of low-voltage grid impact indicators to weather conditions in residential district energy modeling. In 2018 Building Performance Modeling Conference and SimBuild co-organized by ASHRAE and IBPSA-USA\n - B.J. Merema, H. Breesch, D. Saelens (2019). Comparison of model identification techniques for MPC in all-air HVAC systems in an educational building. In: Clima 2019 congress, Bucharest.\n - B.J. Merema, H. Breesch, D. Saelens (2018). Validation of a BES model of an all-air HVAC educational building. In Proceedings of the Tenth International Conference on System Simulation in Buildings - SSB 2018, art.nr. 38, Li\xc3\xa8ge, Belgium, 10-12 Dec 2018\n - B.J. Merema, H. Breesch (sup.), D. Saelens (sup.) (2021). An MPC framework for all-air systems in non-residential buildings. PhD thesis.\n - I. De Jaeger, D. Saelens (sup.) (2021). On the Impact of Input Data Uncertainty on the Reliability of Urban Building Energy Models. PhD thesis, Arenberg Doctoral School, KU Leuven.\n - J.E. Gon\xc3\xa7alves, D. Saelens (sup.), T.A. J. van Hooff (cosup.) (2021). Understanding the Behaviour of Building Integrated Photovoltaic Facades. Numerical and Experimental Analysis. PhD thesis, Arenberg Doctoral School, KU Leuven.\n - C. Protopapadaki, D. Saelens (sup.) (2018). A Probabilistic Framework Towards Metamodeling the Impact of Residential Heat Pumps and PV on Low-voltage Grids. PhD thesis, Arenberg Doctoral School, KU Leuven.\n - R. Andriamamonjy, R. Klein (sup.), D. Saelens (cosup.) (2018). Automated workflows for building design and operation using openBIM and Modelica. PhD thesis\n - K. Spiliotis (2020) Electrical system architectures for building-integrated PV. Multi-scale, multi-domain modeling and simulation. PhD thesis, Arenberg Doctoral School, KU Leuven.\n\n\n### Bibtex entry for citing IDEAS\nPlease cite IDEAS using the information below.\n\n```\n@article{Jorissen2018ideas, \nauthor = {Jorissen, Filip and Reynders, Glenn and Baetens, Ruben and Picard, Damien and Saelens, Dirk and Helsen, Lieve}, \njournal = {Journal of Building Performance Simulation}, \ntitle = {{Implementation and Verification of the IDEAS Building Energy Simulation Library}}, \nvolume = {11},\nissue = {6}, \npages = {669-688},\ndoi={10.1080/19401493.2018.1428361}, \nyear = {2018} \n}\n```\n""",,"2014/02/18, 13:02:55",3536,MIT,99,5302,"2023/10/18, 17:10:58",33,666,1301,28,7,4,0.4,0.31273408239700373,"2022/05/03, 13:34:18",v3.0,0,19,false,,false,false,,,https://github.com/open-ideas,,Belgium,,,https://avatars.githubusercontent.com/u/5467967?v=4,,, tespy,"Provides a powerful simulation toolkit for thermal engineering plants such as power plants, district heating systems or heat pumps.",oemof,https://github.com/oemof/tespy.git,github,"thermodynamics,process-engineering,cooling,heating,energy-system,powerplant,python,simulation,exergy,refrigeration,thermodynamic-cycles",Buildings and Heating,"2023/10/22, 09:42:11",202,20,50,true,Python,oemof community,oemof,"Python,TeX",https://tespy.readthedocs.io,"b'Thermal Engineering Systems in Python\n=====================================\nTESPy stands for ""Thermal Engineering Systems in Python"" and provides a\npowerful simulation toolkit for thermal engineering plants such as power\nplants, district heating systems or heat pumps. It is an external extension\nmodule within the Open Energy Modelling Framework `oemof `_\nand can be used as a standalone package.\n\n.. 
figure:: https://raw.githubusercontent.com/oemof/tespy/9915f013c40fe418947a6e4c1fd0cd0eba45893c/docs/api/_images/logo_tespy_big.svg\n    :align: center\n\nWith the TESPy package you can calculate stationary operation in order\nto design the process of thermal energy systems. From that point it is possible\nto simulate the offdesign behavior of your plant using underlying\ncharacteristics for each of the plant\'s components. The package includes basic\ncomponents, such as turbines, pumps, compressors, heat exchangers, pipes,\nmixers and splitters, as well as some advanced components (derivatives of heat\nexchangers, drum).\n\nEverybody is welcome to use and/or develop TESPy. Contribution is already\npossible on a low level by simply fixing typos in TESPy\'s documentation or\nrephrasing sections which are unclear. If you want to support us that way,\nplease fork the TESPy repository to your own GitHub account and make changes\nas described in the GitHub guidelines:\nhttps://guides.github.com/activities/hello-world/\n\nKey Features\n============\n* **Open** Source\n* **Generic** thermal engineering applications\n* **Automatic** model documentation in LaTeX for high transparency and\n  reproducibility\n* **Extendable** framework for the implementation of custom components and\n  component groups\n* **Postprocessing** features like exergy analysis and fluid property plotting\n\n.. start-badges\n\n.. list-table::\n    :stub-columns: 1\n\n    * - docs\n      - |docs|\n    * - tests\n      - |pytests| |checks| |packaging| |coveralls|\n    * - package\n      - | |version| |wheel| |supported-versions| |commits-since|\n    * - reference\n      - |joss| |zenodo|\n\n.. |docs| image:: https://readthedocs.org/projects/tespy/badge/?style=flat\n    :target: https://readthedocs.org/projects/tespy\n    :alt: Documentation Status\n\n.. |pytests| image:: https://github.com/oemof/tespy/workflows/tox%20pytests/badge.svg\n    :target: https://github.com/oemof/tespy/actions?query=workflow%3A%22tox+pytests%22\n    :alt: tox pytest\n\n.. |checks| image:: https://github.com/oemof/tespy/workflows/tox%20checks/badge.svg\n    :target: https://github.com/oemof/tespy/actions?query=workflow%3A%22tox+checks%22\n    :alt: tox checks\n\n.. |packaging| image:: https://github.com/oemof/tespy/workflows/packaging/badge.svg\n    :target: https://github.com/oemof/tespy/actions?query=workflow%3Apackaging\n    :alt: packaging\n\n.. |coveralls| image:: https://coveralls.io/repos/oemof/tespy/badge.svg?branch=main&service=github\n    :alt: Coverage Status\n    :target: https://coveralls.io/r/oemof/tespy\n\n.. |version| image:: https://img.shields.io/pypi/v/tespy.svg\n    :alt: PyPI Package latest release\n    :target: https://pypi.org/project/tespy\n\n.. |wheel| image:: https://img.shields.io/pypi/wheel/tespy.svg\n    :alt: PyPI Wheel\n    :target: https://pypi.org/project/tespy\n\n.. |supported-versions| image:: https://img.shields.io/pypi/pyversions/tespy.svg\n    :alt: Supported Python versions\n    :target: https://pypi.org/project/tespy\n\n.. |commits-since| image:: https://img.shields.io/github/commits-since/oemof/tespy/latest/dev\n    :alt: Commits since latest release\n    :target: https://github.com/oemof/tespy/compare/main...dev\n\n.. |zenodo| image:: https://zenodo.org/badge/DOI/10.5281/zenodo.2555866.svg\n    :alt: Release archive\n    :target: https://doi.org/10.5281/zenodo.2555866\n\n.. |joss| image:: https://joss.theoj.org/papers/590b0b4767606bce4d0ebe397d4b7a4f/status.svg\n    :alt: Software Paper in JOSS\n    :target: https://joss.theoj.org/papers/590b0b4767606bce4d0ebe397d4b7a4f\n\n.. 
end-badges\n\nDocumentation\n=============\nYou can find the full documentation at\n`readthedocs `_. Use the\n`project site `_ of readthedocs to\nchoose the version of the documentation. Go to the\n`download page `_ to\ndownload different versions and formats (pdf, html, epub) of the documentation.\n\nTo get the latest news, visit and follow our `website `_.\n\nInstalling TESPy\n================\nIf you have a working Python3 environment, use PyPI to install the latest\ntespy version:\n\n.. code:: bash\n\n    pip install tespy\n\nIf you want to use the latest features, you might want to install the\n**developer version**. See section\n`Developing TESPy `_\nfor more information. The developer version is not recommended for productive\nuse.\n\nGet in touch\n============\n\nOnline ""Stammtisch""\n-------------------\n\nWe have decided to start a recurring ""Stammtisch"" meeting for all interested\nTESPy users and (potential) developers. You are invited to join us every third\nMonday of the month at 17:00 CE(S)T for a casual get-together. The first meeting\nwill be held on June 20, 2022. The intent of this meeting is to establish a\nmore active and well-connected network of TESPy users and developers.\n\nIf you are interested, you can simply join the meeting at\nhttps://meet.jit.si/tespy_user_meeting. We are looking forward to seeing you!\n\nUser forum\n----------\nWe have implemented a\n`discussion room on GitHub `__ as a\nuser forum. If you have issues with setting up your model or any other question\nabout using the software, you are invited to start a discussion there.\n\nExamples\n========\n\nFor a short introduction on how TESPy works and how you can use it, we provide\nan extensive `user guide `__. You can\ndownload all Python scripts of the examples and tutorials from this GitHub\nrepository. They are included in the ""tutorial"" directory.\n\nCitation\n========\nThe scope and functionalities of TESPy have been documented in a paper\npublished in the Journal of Open Source Software with an Open-Access license.\nDownload the paper from https://doi.org/10.21105/joss.02178. As TESPy is free\nsoftware, we kindly ask that you add a reference to TESPy if you use the\nsoftware for your scientific work. Please cite the article with the BibTeX\ncitation below.\n\nBibTeX citation::\n\n    @article{Witte2020,\n        doi = {10.21105/joss.02178},\n        year = {2020},\n        publisher = {The Open Journal},\n        volume = {5},\n        number = {49},\n        pages = {2178},\n        author = {Francesco Witte and Ilja Tuschy},\n        title = {{TESPy}: {T}hermal {E}ngineering {S}ystems in {P}ython},\n        journal = {Journal of Open Source Software}\n    }\n\nFurthermore, a paper on the exergy analysis feature has been published in\nthe MDPI journal Energies. You can download the pdf at\nhttps://doi.org/10.3390/en15114087. If you are using this feature specifically,\nyou can reference it with the following BibTeX citation:\n\nBibTeX citation::\n\n    @article{Witte2022,\n        doi = {10.3390/en15114087},\n        year = {2022},\n        publisher = {The Open Journal},\n        volume = {15},\n        number = {11},\n        article-number = {4087},\n        issn = {1996-1073},\n        author = {Witte, Francesco and Hofmann, Mathias and Meier, Julius and Tuschy, Ilja and Tsatsaronis, George},\n        title = {Generic and Open-Source Exergy Analysis—Extending the Simulation Framework TESPy},\n        journal = {Energies}\n    }\n\n\nAdditionally, you can cite a specific version of TESPy to\nmake your work reproducible. The source code of every version is published on\nzenodo. 
Find your version here: https://doi.org/10.5281/zenodo.2555866.\n\nLicense\n=======\nCopyright (c) 2017-2023 Francesco Witte\n\nPermission is hereby granted, free of charge, to any person obtaining a copy\nof this software and associated documentation files (the ""Software""), to deal\nin the Software without restriction, including without limitation the rights\nto use, copy, modify, merge, publish, distribute, sublicense, and/or sell\ncopies of the Software, and to permit persons to whom the Software is\nfurnished to do so, subject to the following conditions:\n\nThe above copyright notice and this permission notice shall be included in all\ncopies or substantial portions of the Software.\n\nTHE SOFTWARE IS PROVIDED ""AS IS"", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\nIMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\nFITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE\nAUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\nLIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\nOUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE\nSOFTWARE.\n'",",https://doi.org/10.5281/zenodo.2555866\n\n,https://doi.org/10.21105/joss.02178,https://doi.org/10.5281/zenodo.2555866.\n\nLicense\n=======\nCopyright","2017/12/11, 08:44:12",2144,MIT,275,3305,"2023/08/21, 16:56:11",22,236,368,43,65,3,0.5,0.09816799170411339,"2023/10/20, 13:09:49",v0.7.0.post1,0,25,false,,true,true,"maBeigi98/refrigeration_cycle_exergy,in-RET/inretensys-fastapi,jfreissmann/heatpumps,fwitte/TESPy_teaching_exergy,oemof/oemof,ChaofanChen/ORCSimulator,fwitte/chp_orc,NicholasFry/Heat-pump-DH-web-application,fwitte/refrigeration_cycle_exergy,fwitte/sCO2_exergy,fwitte/tespy_tutorial_usermeeting_2021,fwitte/SEGS_exergy,mohankmn/congenial-train,lodetomasi/Covid-Tweets-Analysis,ufz/ogs-data,lkqhvac/ogs6,jbathmann/ogs,fwitte/fluprodia,project-angus/IF_HeatSys_GeoStorage,project-angus/IF_PPlant_GeoStorage",,https://github.com/oemof,https://oemof.org,Germany,,,https://avatars.githubusercontent.com/u/8503379?v=4,,, RC_BuildingSimulator,A Resistance Capacitance Model for an Energetic Simulation of a Building.,architecture-building-systems,https://github.com/architecture-building-systems/RC_BuildingSimulator.git,github,,Buildings and Heating,"2021/03/07, 06:06:51",81,0,15,false,Python,Architecture and Building Systems,architecture-building-systems,"Python,Shell",,"b'# 5R1C Building Simulation Model\n\n![Python application](https://github.com/architecture-building-systems/RC_BuildingSimulator/workflows/Python%20application/badge.svg)\n\n## 10min Guide to your first simulation\n\nSee the [10min guide in the wiki page](https://github.com/architecture-building-systems/RC_BuildingSimulator/wiki/10min-guide-to-your-first-simulation)\n\n## Developer Material\n\nCan be found in [the wiki](https://github.com/architecture-building-systems/RC_BuildingSimulator/wiki)\n\n## Academic Citations\n\nIf citing this simulator in research, please use the following reference:\n\nJayathissa, Prageeth, et al. ""Optimising building net energy demand with dynamic BIPV shading."" Applied Energy 202 (2017): 726-735.\n\n## References\n\nMadsen, Henrik, and Jan Holst. ""Estimation of continuous-time models for the heat dynamics of a building."" Energy and Buildings 22.1 (1995): 67-79.\n\nBacher, Peder, and Henrik Madsen. ""Identifying suitable models for the heat dynamics of buildings."" Energy and Buildings 43.7 (2011): 1511-1522.\n\nSonderegger, Robert. ""Diagnostic tests determining the thermal response of a house."" Lawrence Berkeley National Laboratory (2010).\n\n'",,"2016/02/29, 13:35:47",2795,CUSTOM,0,235,"2021/03/21, 01:13:21",10,23,49,0,948,1,0.0,0.24285714285714288,"2020/08/09, 06:52:43",v0.3,0,7,false,,false,false,,,https://github.com/architecture-building-systems,http://systems.arch.ethz.ch,,,,https://avatars.githubusercontent.com/u/8478952?v=4,,, City Energy Analyst,"Helps you to analyze the effects of building retrofits, land-use planning, district heating and cooling and renewable energy on the future costs, emissions and energy consumption of neighborhoods and districts.",architecture-building-systems,https://github.com/architecture-building-systems/CityEnergyAnalyst.git,github,,Buildings and Heating,"2023/10/25, 13:37:24",171,1,36,true,Python,Architecture and Building Systems,architecture-building-systems,"Python,Jupyter Notebook,HTML,NSIS,Shell,Batchfile,Dockerfile",,"b'|license| |repo_size| |lines_of_code| |zenodo|\n\n.. |license| image:: https://img.shields.io/badge/License-MIT-blue.svg\n    :alt: GitHub license\n.. |repo_size| image:: https://img.shields.io/github/repo-size/architecture-building-systems/CityEnergyAnalyst\n    :alt: Repo Size\n.. |lines_of_code| image:: https://img.shields.io/tokei/lines/github/architecture-building-systems/CityEnergyAnalyst\n    :alt: Lines of code\n.. |zenodo| image:: https://zenodo.org/badge/DOI/10.5281/zenodo.8327389.svg\n    :target: https://doi.org/10.5281/zenodo.8327389\n\n.. image:: cea_logo.png\n    :height: 420 px\n    :width: 1500 px\n    :scale: 25 %\n    :alt: City Energy Analyst (CEA) logo\n    :target: https://www.cityenergyanalyst.com\n\nCity Energy Analyst (CEA)\n--------------------------\n\nThe `City Energy Analyst (CEA) `_ is an urban building energy simulation platform and one of the first open-source initiatives providing computational tools for the design of low-carbon and highly efficient cities. The CEA combines knowledge of urban planning and energy systems engineering in an integrated simulation platform. This allows the study of the effects, trade-offs, and synergies of urban design scenarios and energy infrastructure plans. At CEA we are committed to empowering practitioners and researchers to plan future low-carbon cities. \n\n\n* Click `here `__ for the installation manual and tutorials\n\n* Click `here `__ to report an issue\n\n* Click `here `__ to contact us\n\n\n.. attention:: We ended support for the ArcGIS and Rhino/Grasshopper interfaces on 1 May 2019. 
\nWe invite all CEA users to get acquainted with the CEA Dashboard and CEA Console.\n \nCite us:\n--------\n\nFor V3.34.1 (stable): https://doi.org/10.5281/zenodo.8327389\n'",",https://doi.org/10.5281/zenodo.8327389\n\n,https://doi.org/10.5281/zenodo.8327389\n","2016/01/12, 10:02:17",2843,MIT,420,13320,"2023/10/24, 16:03:03",102,1122,3276,124,1,2,0.1,0.7135647337521887,"2023/09/19, 20:41:29",v3.34.2,0,32,false,,false,false,narest-qa/repo68,,https://github.com/architecture-building-systems,http://systems.arch.ethz.ch,,,,https://avatars.githubusercontent.com/u/8478952?v=4,,, Modelica Buildings library,A free and open source library with dynamic simulation models for building energy and control systems.,lbl-srg,https://github.com/lbl-srg/modelica-buildings.git,github,"modelica,buildings,control,energy-efficiency",Buildings and Heating,"2023/10/25, 05:37:32",190,0,31,true,Modelica,Berkeley Lab - Modeling & Simulation,lbl-srg,"Modelica,Motoko,C,HTML,Python,Java,CSS,Batchfile,Makefile,MAXScript,Shell,TeX,CMake,JavaScript",,"b'# Modelica Buildings library\n\n[![Build Status](https://travis-ci.com/lbl-srg/modelica-buildings.svg?branch=master)](https://travis-ci.com/lbl-srg/modelica-buildings)\n\nThis is the development site for the Modelica _Buildings_ library and its user guide.\n\nStable releases including all previous releases are available from the main project site\nat http://simulationresearch.lbl.gov/modelica.\n\nInstructions for developers are available on the [wiki](https://github.com/lbl-srg/modelica-buildings/wiki).\n\n## Library description\n\nThe Modelica Buildings library is a free open-source library with dynamic simulation models for building energy and control systems. The library contains models for\n\n- HVAC systems,\n- energy storage,\n- controls, including a reference implementation of ASHRAE Standard 231P,\n- heat transfer among rooms and the outside, either\n - natively in Modelica with a detailed or a reduced order model, or\n - integrated run-time coupling with EnergyPlus, aka, Spawn of EnergyPlus\n- multizone airflow, including natural ventilation and contaminant transport,\n- single-zone computational fluid dynamics coupled to heat transfer and HVAC systems,\n- data-driven load prediction for demand response applications, and\n- electrical DC and AC systems with two- or three-phases that can be balanced and unbalanced.\n\n\nThe main project site is http://simulationresearch.lbl.gov/modelica.\n\n## Current release\n\nDownload [Buildings Library 10.0.0 (2023-09-05)](https://github.com/lbl-srg/modelica-buildings/releases/download/v10.0.0/Buildings-v10.0.0.zip)\n\n## License\n\nThe Modelica _Buildings_ Library is available under a 3-clause BSD-license.\nSee [Modelica Buildings Library license](https://htmlpreview.github.io/?https://github.com/lbl-srg/modelica-buildings/blob/master/Buildings/legal.html).\n\nPython modules are available under a 3-clause BSD-license. 
See [BuildingsPy license](http://simulationresearch.lbl.gov/modelica/buildingspy/legal.html).\n\n## Development and contribution\nYou may report any issues with using the [Issues](https://github.com/lbl-srg/modelica-buildings/issues) button.\n\nContributions in the form of [Pull Requests](https://github.com/lbl-srg/modelica-buildings/pulls) are always welcome.\nPrior to issuing a pull request, make sure your code follows the [style guide and coding conventions](https://github.com/lbl-srg/modelica-buildings/wiki/Style-Guide).\n\n## Building binaries\n\nThe distribution at https://simulationresearch.lbl.gov/modelica/download.html\ncontains all binaries.\n\nDevelopers may build the binaries as follows.\n\n### Spawn of EnergyPlus\n\nThe Buildings library already contains the compiled binaries that are needed to link to EnergyPlus.\n\nTo rebuild the Spawn of EnergyPlus binaries, CMake is required. The binaries\nconsist of the fmi-library, and a library that connects Modelica to EnergyPlus.\n\nTo build the fmi-library, which is only needed if https://github.com/modelon-community/fmi-library is updated, run\n```\ncd Buildings/Resources/src/fmi-library\nrm -rf build && mkdir build && \\\n cd build && cmake .. && cmake --build . && \\\n cd .. && rm -rf build\n```\n\nTo build the Modelica to EnergyPlus library, run\n```\ncd modelica-buildings\nrm -rf build && mkdir build && cd build && \\\n cmake ../ && cmake --build . --target install && \\\n cd .. && rm -rf build\n```\n\nTo install the EnergyPlus binaries for the Spawn interface for the current operating system, run\n```\nBuildings/Resources/src/ThermalZones/install.py --binaries-for-os-only\n```\nTo install the binaries for all operating systems, omit the flag `--binaries-for-os-only`\n\n## Citation\n\nTo cite the library, use\n\nMichael Wetter, Wangda Zuo, Thierry S. Nouidui and Xiufeng Pang.\nModelica Buildings library.\n_Journal of Building Performance Simulation_, 7(4):253-270, 2014.\n\n```\n@Article{WetterZuoNouiduiPang2014,\n author = {Michael Wetter and Wangda Zuo and Thierry S. Nouidui and Xiufeng Pang},\n title = {Modelica {Buildings} library},\n journal = {Journal of Building Performance Simulation},\n volume = {7},\n number = {4},\n pages = {253--270},\n year = {2014},\n doi = {10.1080/19401493.2013.765506},\n url = ""https://doi.org/10.1080/19401493.2013.765506""\n}\n\n```'",",https://doi.org/10.1080/19401493.2013.765506""\n","2013/03/03, 16:36:03",3888,MIT,178,13985,"2023/10/25, 05:37:33",150,2042,3401,429,0,26,0.3,0.36572969608961725,"2023/09/05, 16:09:22",v10.0.0,0,38,false,,false,false,,,https://github.com/lbl-srg,https://buildings.lbl.gov/modeling-simulation,,,,https://avatars.githubusercontent.com/u/3753398?v=4,,, StROBe,An open web tool developed at the KU Leuven Building Physics Section to model the pervasive space for residential integrated district energy assessment simulations in the openIDEAS modeling environment.,open-ideas,https://github.com/open-ideas/StROBe.git,github,,Buildings and Heating,"2021/02/03, 15:16:31",34,0,8,false,Python,OpenIDEAS,open-ideas,Python,,"b'# StROBe\n\n\nCurrently in beta.\n\n**StROBe** (Stochastic Residential Occupancy Behaviour) is an open web tool developed at the [KU Leuven Building Physics Section](http://bwk.kuleuven.be/bwf/) to model the pervasive space for residential integrated district energy assessment simulations in the\n[**openIDEAS**](https://github.com/open-ideas) modeling environment (among others). 
Primarily conceived as a tool for scientific researchers, **StROBe** aims at providing missing boundary conditions in integrated district energy assessment simulations related to human behavior, such as the use of appliances and lighting, space heating settings and domestic hot water redrawals.\n**StROBe** is also highly customizable and extensible, accepting model changes or extensions defined by users. \n\n\n## Dependencies\n\n**StROBe** is implemented in **Python 3.7** and uses the packages *os*, *numpy*, *random*, *time*, *datetime*, *calendar*, *pickle*, *itertools*, and *json*, which are all generally available. An old Python 2.7 version is available in branch [`python2.7`](https://github.com/open-ideas/StROBe/tree/python2.7), with changes up to February 03, 2021. \n\n## Examples\n\nIn [example.py](https://github.com/open-ideas/StROBe/blob/master/example.py) you can find simple examples for: \n- simulation of individual households using `class Household()` from [`Corpus/residential.py`](https://github.com/open-ideas/StROBe/blob/master/Corpus/residential.py), and \n- simulation of sets of households as inputs for **IDEAS** using `class IDEAS_Feeder()` from [`Corpus/feeder.py`](https://github.com/open-ideas/StROBe/blob/master/Corpus/feeder.py).\n\nA minimal usage sketch of these classes is included below, after the references.\n\n## Revision history\n\n**Feb 03, 2021**\n(See [pull request](https://github.com/open-ideas/StROBe/pull/38) for details.)\n\n- Changes to make the code compatible with Python 3, checked at least with Python 3.7.\n- [`irradiance.txt`](https://github.com/open-ideas/StROBe/blob/master/Data/Climate/irradiance.txt) file changed to simple text format to avoid reading problems with cpickle. Corresponding changes in [`Corpus/residential.py`](https://github.com/open-ideas/StROBe/blob/master/Corpus/residential.py#L413).\n\n**Oct 22, 2020**\n(See [pull request](https://github.com/open-ideas/StROBe/pull/35) for details.)\n\n- Changed occupancy generation in [`Corpus/residential.py`](https://github.com/open-ideas/StROBe/blob/master/Corpus/residential.py). Now a typical week is created with all different days (previously all weekdays were the same), which is then copied for the year.\n- Changed the way the 4h shift is implemented. One extra day is simulated in the front, of which 20 hours are then cut, so that the data starts at midnight of the first day of the year. The last 4h are also cut. This replaces a previous [solution](https://github.com/open-ideas/StROBe/pull/10) which took the last 4h of the year and put them in the front.\n- Other minor fixes.\n\n**Oct 9, 2020**\n(See [pull request](https://github.com/open-ideas/StROBe/pull/33) for details.)\n\n- Fixed problem with wrong occupancy cluster selection, in [`Corpus/data.py`](https://github.com/open-ideas/StROBe/blob/master/Corpus/data.py#L33). 
Now all different occupancy patterns should be correctly represented.\n\n**May 8, 2020**\n(See [pull request](https://github.com/open-ideas/StROBe/pull/31) for details.)\n\n- Added in-line comments throughout the code, mainly in [`Corpus/residential.py`](https://github.com/open-ideas/StROBe/blob/master/Corpus/residential.py).\n\n**May 8, 2020**\n(See [pull request](https://github.com/open-ideas/StROBe/pull/30) for details.)\n\n- Added the 4h shift to the occupancy results `occ` and `occ_m` and included the function `Household.roundUp()`, used to perform this shift, in the execution of `Household.simulate()`, to guarantee the correct time shifting also when someone simulates independent households.\n\n**May 8, 2020**\n(See [pull request](https://github.com/open-ideas/StROBe/pull/29) for details.)\n\n- Added revision history and references and updated dependencies in [`README.md`](https://github.com/open-ideas/StROBe/blob/master/README.md), and updated [`example.py`](https://github.com/open-ideas/StROBe/blob/master/example.py).\n\n**Apr 27, 2020**\n(See [pull request](https://github.com/open-ideas/StROBe/pull/28) for details.)\n\n- Added option to save output files generated by [`Corpus/feeder.py`](https://github.com/open-ideas/StROBe/blob/master/Corpus/feeder.py) for temperature set-points in K (default) instead of Celsius. In this way these outputs will be consistent with the **StROBe** input readers in **IDEAS**. See also [related thread](https://github.com/open-ideas/IDEAS/pull/1127) in **IDEAS**. \n\n**Mar 9, 2020**\n(See [pull request](https://github.com/open-ideas/StROBe/pull/26) for details.)\n\n- Fixed problem of appliances being used independently by more than one occupant at a time, which was generating higher loads. Note that this can influence the total electricity demand, which is not verified in any way.\n\n**Mar 9, 2020**\n(See [pull request](https://github.com/open-ideas/StROBe/pull/25) for details.)\n\n- Changed delay times of cycle loads in [`Data/Appliances.py`](https://github.com/open-ideas/StROBe/blob/master/Data/Appliances.py) such that the specified annual demand and number of cycles are obtained on average, instead of fixing the number of cycles and delay only (which led to high annual demand).\n\n\n**Feb 13, 2020**\n(See [pull request](https://github.com/open-ideas/StROBe/pull/21) for details.)\n\n- Added extra rules for cold appliances ownership such that there is always at least one fridge and not more than two freezers. Note that this can influence the total electricity demand, which is not verified in any way.\n\n\n**Feb 4, 2020**\n(See [pull request](https://github.com/open-ideas/StROBe/pull/20) for details.)\n\n- Changed [`Corpus/feeder.py`](https://github.com/open-ideas/StROBe/blob/master/Corpus/feeder.py) output function to only load pickled household files once for all outputted variables. This reduces output time, while keeping the same outputs and file formats. \n\n\n**Jul 18, 2019**\n(See [pull request](https://github.com/open-ideas/StROBe/pull/18) for details.)\n\n- Fixed cycling loads implementation to account for cycle length, as before the power was only applied for one minute. \n\n\n**Jun 15, 2018**\n(See [pull request](https://github.com/open-ideas/StROBe/pull/14) for details.)\n\n- Added a minimal working example to simulate a single household and a group of buildings (feeder). 
See [`example.py`](https://github.com/open-ideas/StROBe/blob/master/example.py).\n\n\n**Jul 3, 2018**\n(See [pull request](https://github.com/open-ideas/StROBe/pull/12) for details.)\n\n- Fixed problem in lighting adjustment.\n- Fixed standby-use of appliances and DHW, which was previously only included in the single time step before the appliance/water use is turned on.\n\n\n**Jun 15, 2018**\n(See [pull request](https://github.com/open-ideas/StROBe/pull/11) for details.)\n\n- Fixed problem in activity probabilities in `class DTMC` in [`Corpus/stats.py`](https://github.com/open-ideas/StROBe/pull/11/files#diff-766109a870ede664c022e3f24738863d), where Sunday was used twice instead of Saturday.\n\n**Jun 15, 2018**\n(See [pull request](https://github.com/open-ideas/StROBe/pull/10) for details.)\n\n- Brought the last 4h of the resulting time series to the front so that the data starts at midnight instead of 4am (which is when the survey statistics begin). This is applied in the `roundUp()` function of [`Corpus/residential.py`](https://github.com/open-ideas/StROBe/blob/63da1fc06db9ebe683a69b879436104f1ffdfa11/Corpus/residential.py#L572-L581).\n- Updated the [`Corpus/__calibrate__.py`](https://github.com/open-ideas/StROBe/blob/master/Corpus/__calibrate__.py) file to automatically perform the calibration of `cal` values of appliances, and added a check for the annual electricity load.\n\n**Sep 28, 2017**\n(See [pull request](https://github.com/open-ideas/StROBe/pull/7) for details.)\n\n- Fixed space heating type definition that previously produced many unheated household profiles.\n\n**Apr 20, 2017** \n(See [pull request](https://github.com/open-ideas/StROBe/pull/4) for details.)\n\n- Added a function to generate several scenarios of demands for the same household and period: [`Corpus/simulation.py`](https://github.com/open-ideas/StROBe/blob/master/Corpus/simulation.py).\n- Fixed the generation of lighting loads, which were on while occupants were asleep or away.\n- Fixed the selection of appliances, which previously selected only the less probable devices.\n- Updated paths, data import and initialization files.\n\n**Mar 13, 2014** \n\n- First full working version. See [repository](https://github.com/open-ideas/StROBe/tree/84921ca49841b40de53273ed08cbf0f49f849e41) at that point.\n\n**Oct 1, 2013** \n\n- Initial commit.\n\n## References\n\n1. Baetens, R., & Saelens, D. (2016). Modelling uncertainty in district energy simulations by stochastic residential occupant behaviour. *Journal of Building Performance Simulation*, 9(4), 431\xe2\x80\x93447. https://doi.org/10.1080/19401493.2015.1070203\n2. Baetens, R. (2015). On externalities of heat pump based low-energy dwellings at the low-voltage distribution grid. Ph.D. thesis, Arenberg Doctoral School, KU Leuven. 
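\n\nAs a quick orientation, the following is a minimal sketch of the two entry points listed under Examples above, assuming the repository root is on the Python path. The constructor arguments shown for `IDEAS_Feeder` are assumptions; check [`example.py`](https://github.com/open-ideas/StROBe/blob/master/example.py) for the exact call signatures.\n\n```python\n# Minimal sketch (see example.py for the authoritative version).\nfrom Corpus.residential import Household\nfrom Corpus.feeder import IDEAS_Feeder\n\n# Simulate a single household and inspect its occupancy results\n# (the occ attribute is mentioned in the May 8, 2020 entry above).\nhousehold = Household(\'Example household\')\nhousehold.simulate()\nprint(household.occ)\n\n# Write input profiles for a set of IDEAS buildings; the argument\n# names below are assumptions, see Corpus/feeder.py.\nfeeder = IDEAS_Feeder(name=\'Example\', nBui=10, path=\'./output\')\n```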
\n\nPlease cite **StROBe** using the information below.\n```\n@article{Baetens2016,\nauthor = {Baetens, Ruben and Saelens, Dirk},\ntitle = {{Modelling uncertainty in district energy simulations by stochastic residential occupant behaviour}},\njournal = {Journal of Building Performance Simulation},\nvolume = {9},\nnumber = {4},\npages = {431--447},\npublisher = {Taylor {\\&} Francis},\ndoi = {10.1080/19401493.2015.1070203},\nyear = {2016}\n}\n\n```\n'",",https://doi.org/10.1080/19401493.2015.1070203\n2","2013/10/01, 12:45:56",3676,MIT,0,208,"2022/02/16, 12:02:53",10,21,30,0,616,2,0.0,0.15172413793103445,,,0,9,false,,false,false,,,https://github.com/open-ideas,,Belgium,,,https://avatars.githubusercontent.com/u/5467967?v=4,,, NYCBuildingEnergyUse,"Predict the emission of greenhouse gases from buildings by looking at their age, and water consumption as well as other energy consumption metrics.",mdh266,https://github.com/mdh266/NYCBuildingEnergyUse.git,github,"energy-efficiency,exploratory-data-analysis,regression-models,regression,bokeh,data-science,outlier-detection,outlier-removal,missing-values,missing-data,scikit-learn,google-app-engine,xgboost",Buildings and Heating,"2021/07/29, 00:35:04",18,0,4,true,Jupyter Notebook,,,"Jupyter Notebook,Python,Dockerfile",http://michael-harmon.com/blog/GreenBuildings1.html,"b'# About\n-------------\nI originally started this project a while back with a goal of taking the 2016 NYC Benchmarking Law data about building energy usage and doing something interesting with it. After a few iterations I thought it might be interesting to see if I could predict the emission of greenhouse gases from buildings by looking at their age, and water consumption as well as other energy consumption metrics. In the end the point of this project was to build and deploy a model on the cloud using a real-world dataset with outliers and missing values, using state-of-the-art tools such as:\n\n* [Seaborn](http://seaborn.pydata.org/)\n* [Scikit-Learn](https://scikit-learn.org)\n* [XGBoost](https://xgboost.readthedocs.io/en/latest/)\n* [BigQuery](https://cloud.google.com/bigquery)\n* [MLflow](https://www.mlflow.org/) \n* [Docker](https://www.docker.com/)\n* [Google App Engine](https://cloud.google.com/appengine)\n\n\n## Notebook Overviews\n--------------------------\n\n\n### GreenBuildings1: Exploratory Analysis & Outlier Removal\n---------------------\nIn this first blogpost I will cover how to perform the basics of data cleaning including:\n\n- Exploratory data analysis\n- Identifying and removing outliers\n\nIn identifying outliers I will cover both visual inspection as well as a machine learning method called [Isolation Forests](https://en.wikipedia.org/wiki/Isolation_forest). Since I will be completing this project over multiple days and using [Google Cloud](https://cloud.google.com/), I will go over the basics of using [BigQuery](https://cloud.google.com/bigquery) for storing the datasets so I won\'t have to start all over again each time I work on it. At the end of this blogpost I will summarize the findings, and give some specific recommendations to reduce multifamily and office building energy usage.\n\n\n\n### GreenBuildings2: Imputing Missing Values With Scikit-Learn\n---------------------\nIn this second post I cover [imputation techniques](https://en.wikipedia.org/wiki/Imputation_(statistics)#Regression) for missing data using Scikit-Learn\'s [impute module](https://scikit-learn.org/stable/modules/impute.html) using both point estimates (i.e. 
mean, median) using the **[SimpleImputer](https://scikit-learn.org/stable/modules/generated/sklearn.impute.SimpleImputer.html)** class as well as more complicated regression models (e.g. KNN) using the **[IterativeImputer](https://scikit-learn.org/stable/modules/generated/sklearn.impute.IterativeImputer.html)** class. The latter requires that the features in the model are correlated. This is indeed the case for our dataset; in our particular case we also need to [transform](https://en.wikipedia.org/wiki/Data_transformation_(statistics)) the features in order to discern a more meaningful and predictive relationship between them. As we will see, the transformation of the features also gives us much better results for imputing missing values. (A rough sketch of these imputers appears at the end of this README.)\n\n\n### GreenBuildings3: Build & Deploy Models With MLflow, Docker & Google App Engine\n---------------------\nThis last post will deal with model building and model deployment. Specifically I will build a model of New York City building greenhouse gas emissions based on the building energy usage metrics. After I build a sufficiently accurate model I will convert the model to a [REST API](https://restfulapi.net/) for serving and then deploy the REST API to the cloud. The processes of model development and deployment are made a lot easier with the [MLflow](https://mlflow.org/) library. Specifically, I will cover using the [MLflow Tracking](https://www.mlflow.org/docs/latest/tracking.html) framework to log all the different models I developed as well as their performance. MLflow tracking acts as a great way to memorialize and document the development process. I will then use [MLflow Models](https://www.mlflow.org/docs/latest/models.html) to convert the selected model into a [REST API](https://restfulapi.net/) for model serving and show how to deploy the API to the cloud using [Docker](https://www.docker.com/) and [Google App Engine](https://cloud.google.com/appengine). \n\n\n### Using The Notebooks\n----------------------\n\nYou can install the dependencies and access the first two notebooks (`GreenBuildings1` & `GreenBuildings2`) using Docker by building the Docker image with the following:\n\n\tdocker build -t greenbuildings .\n\nFollowed by running the container:\n\n\tdocker run -ip 8888:8888 -v `pwd`:/home/jovyan -t greenbuildings\n\nSee here for more info. Otherwise without Docker, make sure to use Python 3.7 and install GeoPandas (0.3.0) using Conda as well as the additional libraries listed in requirements.txt. These can be installed with the command:\n\n\tpip install -r requirements.txt\n\nThe last notebook (`GreenBuildings3`) I ran locally on my machine with the dependencies in `requirements.txt`.\n\n\n### The Dataset \n------------------\n\nThe NYC Benchmarking Law requires owners of large buildings to annually measure their energy and water consumption in a process called benchmarking. The law standardizes this process by requiring building owners to enter their annual energy and water use in the U.S. Environmental Protection Agency\'s (EPA) online tool, ENERGY STAR Portfolio Manager\xc2\xae, and use the tool to submit data to the City. This data gives building owners information about a building\'s energy and water consumption compared to similar buildings, and tracks progress year over year to help in energy efficiency planning.\n\nI used the 2016 Benchmarking data which is disclosed publicly and can be found here. 
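\n\nAs a rough sketch of the imputation approach from GreenBuildings2 (the column names and values below are made up for illustration; the scikit-learn classes are real):\n\n\timport numpy as np\n\timport pandas as pd\n\tfrom sklearn.impute import SimpleImputer\n\t# IterativeImputer is experimental and must be enabled before import:\n\tfrom sklearn.experimental import enable_iterative_imputer  # noqa\n\tfrom sklearn.impute import IterativeImputer\n\t# Toy frame with missing entries (column names are made up):\n\tdf = pd.DataFrame({\'site_eui\': [102.5, np.nan, 87.0, 120.3],\n\t                   \'water_use\': [3200.0, 2900.0, np.nan, 4100.0]})\n\t# Point estimate: fill missing entries with the column median.\n\tmedian_filled = SimpleImputer(strategy=\'median\').fit_transform(df)\n\t# Regression-based: model each feature from the others; works best on\n\t# correlated features, e.g. after the log transform discussed above.\n\timputed = np.exp(IterativeImputer(random_state=0).fit_transform(np.log(df)))\n\nThe point here is the workflow, not the numbers; see the notebooks for the real feature engineering. 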
\n\n'",,"2017/03/23, 19:11:56",2407,MIT,0,46,"2023/05/01, 13:48:29",1,4,4,2,177,1,0.0,0.0,,,0,1,false,,false,false,,,,,,,,,,, Smart-Energy-Monitor,The goal is to accurately predict the monthly electricity bill of the household using minimum hardware and by acquiring electrical data at a single location.,jonathanrjpereira,https://github.com/jonathanrjpereira/Smart-Energy-Monitor.git,github,"arduino,raspberry-pi,energy-monitor,energy-consumption,energy-disaggregation,smarthome,electronics,machine-learning,naive-bayes-classifier,appliance-level-consumption,hardware",Buildings and Heating,"2019/12/20, 23:37:41",20,0,4,false,Python,,,"Python,C++,HTML",https://photos.app.goo.gl/EcPnu42qcf3pCxSX9,"b'![Banner](https://github.com/jonathanrjpereira/Smart-Energy-Monitor/blob/master/img/Banner.svg)\n\nThe goal of the Smart Energy Monitor is to accurately predict the monthly electricity bill of the household using minimum hardware & by acquiring electrical data at a single location (instead of individual sensors per appliance). We do this by analyzing the current & power signatures of all the active devices and pass this information through a Naive Bayes classifier which helps us obtain the Active devices which can further be used to calculate the number of units consumed by individual load appliances.\n\n\n## Features\n - Detect when a particular appliance has changed its state (On/Off/Other).\n - Classify the active load appliances by analyzing the measured electrical signals which have been obtained from a single location.\n - Tracking of individual appliance-level consumption.\n - Electricity bill prediction.\n - Hardware used to obtain current, active and reactive power signatures of individual devices to create high quality datasets.\n\n\n## Prerequisites\n**Hardware:**\n 1. Raspberry Pi running Raspbian\n 2. Arduino Nano\n 3. Current & Voltage Transformers\n 4. Additional Electronic components required for the Zero Cross Detection circuit.\n\n**Software:**\n 1. Flask - Web App Microframework\n 2. Dash by Plotly - Analytical Web App Framework (Optional)\n\n\n## Working\n**Combinatorial Algorithm (CA):** \nThe combinatorial algorithm is a brute-force method to determine the active appliances. CA finds the optimal combination of appliance states, which\nminimizes the difference between the sum of the predicted appliance power and the observed aggregate power, subject to a set of appliance models. The complexity of disaggregation for T time slices is:\n\n
`O(T · K^N)`
\n\nwhere N is the number of appliances and K is the number of appliance states.\nSince the complexity of CA is exponential in the number of appliances, the approach is only computationally tractable for a small number of modelled appliances. Hence, we chose to use a Naive Bayes classifier to determine the active appliances. A [toy demonstration](https://github.com/jonathanrjpereira/Smart-Energy-Monitor/blob/master/Demo/co.py ""toy demonstration"") example is used to visualize how the number of computations increases as the number of appliances increases.
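A minimal sketch of the brute-force enumeration for a single time slice (the appliance names and power values are hypothetical, not from the project dataset):

```python
from itertools import product

# hypothetical appliance models: possible power draw (W) per state
appliances = {"kettle": [0, 2000], "fridge": [0, 80, 120], "lamp": [0, 15]}

def disaggregate(observed_w):
    """Try every one of the K^N state combinations and keep the one whose
    summed power best matches the observed aggregate reading."""
    best, best_err = None, float("inf")
    for combo in product(*appliances.values()):
        err = abs(sum(combo) - observed_w)
        if err < best_err:
            best, best_err = dict(zip(appliances, combo)), err
    return best

print(disaggregate(2095))  # kettle 2000 W + fridge 80 W + lamp 15 W
```

Repeating this for T time slices gives the O(T · K^N) cost noted above, which is why the project falls back to a Naive Bayes classifier.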
\n\n**Power Measurement:** \nWe can measure peak current by sampling the mains power line until we obtain the maximum and minimum peak voltage values (i.e. the maximum value measured in each direction). We can then compute the RMS voltage value from the peak-to-peak voltage. For the ACS712 we can convert the RMS voltage value into an RMS current value using the scale factor given in the datasheet [[1]]. We can then measure the power being consumed using the voltage and current RMS values.
\n\nSteps to find Power for a sine wave with a zero volt offset:\n1. Find the Peak to Peak voltage (Vpp) of the ACS712 current sensor.\n2. Divide Vpp by 2 to get the peak voltage in one direction.\n3. Multiply the peak voltage by 0.707 to get the RMS voltage for the ACS712.\n4. Convert the RMS voltage into RMS current by multiplying by the scale factor given for the particular ACS712 model.\n5. Multiply the measured mains RMS voltage by the RMS current to find the power being drawn by the loads.\n\n
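A minimal sketch of these five steps (the 0.185 V/A sensitivity is the datasheet value for the 5 A ACS712 variant; the function names are illustrative, not project code):

```python
import numpy as np

def rms_current_from_acs712(sensor_volts, scale_v_per_a=0.185):
    """Steps 1-4: ACS712 output samples (V) -> RMS current (A)."""
    vpp = sensor_volts.max() - sensor_volts.min()  # 1. peak-to-peak voltage
    v_peak = vpp / 2.0                             # 2. peak voltage in one direction
    v_rms = v_peak * 0.707                         # 3. RMS voltage of the sensor output
    return v_rms / scale_v_per_a                   # 4. scale factor -> RMS current

def power_drawn(v_rms_mains, i_rms):
    """Step 5: multiply the mains RMS voltage by the RMS current."""
    return v_rms_mains * i_rms

# synthetic sensor waveform centred on the 2.5 V ACS712 offset
samples = 2.5 + 0.3 * np.sin(np.linspace(0, 2 * np.pi, 200))
print(power_drawn(230.0, rms_current_from_acs712(samples)))
```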
\n\n**Steady State Analysis:** \nReal power and reactive power are two of the most commonly used steady state signatures in NILM for tracking On/Off operation of appliances. Real power is the rate at which an appliance consumes energy during its operation. If the load is purely resistive then the current and voltage waveforms will always be in phase and there will be no reactive energy. For a purely reactive load the phase shift will be 90 degrees, and there will be no transfer of real power. In practice, the inductive and capacitive elements of a load introduce a phase shift between the current and voltage waveforms, consuming or generating reactive power respectively [[2]].\n\n**Monitoring Changes in Current and Power:** \nThe current drawn by the load is the first parameter used to classify the various load appliances. The current sensor will provide us with the instantaneous value of the total current drawn by all loads connected to the\nSmart Energy Monitor. In order to get the value of the current drawn by each individual load appliance we must calculate the difference between consecutive current samples.\n\n
*(Figure: aggregate current over time as load appliances A and B switch on and off)*
\n\nAs shown in the above figure, when load appliance A is turned ON, the change in current is 10; when B is turned ON after A, the change in current is 25 while the total current is 35; and finally, when B is turned OFF, the change in current is 25 while the total current drawn is 10.\n\nThe total change in current that occurs when a load appliance is switched ON/OFF may not be fully reflected between two consecutive samples. Instead, the total change in current may be spread over more than two consecutive current samples, depending on the switching speed or transient time of the load appliance. Hence, occasionally large errors are produced when measuring the current difference between any two consecutive current samples. In order to reduce the error, the sampling rate may be adjusted accordingly. But calibrating the sampling rate is difficult, as load appliances have different transient times and the execution time of software instructions varies across hardware and sensors.\n\n
*(Figure: the transient of load appliance A spread over three consecutive current samples)*
\n\nA more efficient method for reducing this error is to find the sum of N consecutive current differences such that the total error is zero (or close to zero). The user may have to wait for a very short duration before the next load appliance can be switched ON/OFF. The small error that remains will not come into effect, as the classification algorithm will rule it out based on the mean and standard deviation values of all the labelled data. As shown in the above figure, load appliance A has a total current change of 10 during its transient state. But this value is not reflected between any two consecutive current samples. Instead, the changes in current observed between consecutive current samples for load appliance A are 2, 3 and 5 respectively. Hence the error produced will be either 3 or 5. But if we take the sum of N consecutive samples, where N in this case is equal to 3, then we get an error value of 0.
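A small sketch of this summation idea, using the toy numbers from the figure (a hypothetical helper, not project code):

```python
import numpy as np

def windowed_changes(samples, n=3):
    """Sum consecutive-sample differences over windows of N samples so that
    a transient spread across several samples appears as one net change."""
    diffs = np.diff(samples)
    pad = (-len(diffs)) % n                    # pad so length is a multiple of n
    diffs = np.concatenate([diffs, np.zeros(pad)])
    return diffs.reshape(-1, n).sum(axis=1)    # one net change per window

# the appliance A transient arrives as +2, +3, +5 across three samples
samples = np.array([0.0, 2.0, 5.0, 10.0, 10.0, 10.0])
print(windowed_changes(samples))               # net change of 10, then 0
```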
\n\n**Active and Reactive Power:** \nUsing only the current drawn by a load as a classification parameter will cause problems when two or more completely different load appliances (with different applications) draw the same amount of current. This is because the current difference between N consecutive samples may be equal for both devices, which may produce an incorrect result.\n\nHence, in order to more accurately differentiate between load appliances which draw the same amount of current, we also consider the value of active power and reactive power drawn by each individual load appliance. It is highly unlikely that two completely different load appliances with different applications have the same current draw as well as the same active and reactive power values.
\n\nIn order to calculate the Active Power and Reactive Power drawn by load appliances, we must first find the phase difference between the voltage and current. We do this by implementing a simple Zero Cross Detector (ZCD) for both the voltage and current.\n\nThe ZCD is built using the LM339. The amplitude of the measured current and voltage signals is reduced to meet the maximum input value permitted by the LM339. A screenshot of the phase angle measurement circuit is shown below.\n\n
*(Screenshot: phase angle measurement circuit built around the LM339 zero cross detectors)*
\n\nThe output of the ZCD is fed to a single Ex-OR gate of a 7486 EXOR IC, which produces pulses whenever there is a phase shift, i.e. whenever the two logic levels at the inputs of the EXOR gate differ.\n\n
\n\nThe width of these pulses, as a fraction of the mains period, gives the phase angle between the voltage and current waveforms.\n\n
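A short sketch of that conversion and the resulting active/reactive power split (assuming 50 Hz mains; the function names are illustrative):

```python
import math

def phase_angle_deg(pulse_width_s, mains_freq_hz=50.0):
    """XOR pulse width -> phase angle: one full mains period is 360 degrees."""
    return pulse_width_s * mains_freq_hz * 360.0

def active_reactive_power(v_rms, i_rms, phi_deg):
    phi = math.radians(phi_deg)
    return (v_rms * i_rms * math.cos(phi),   # active power P (W)
            v_rms * i_rms * math.sin(phi))   # reactive power Q (VAR)

phi = phase_angle_deg(0.002)                 # a 2 ms pulse at 50 Hz -> 36 degrees
print(phi, active_reactive_power(230.0, 1.0, phi))
```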
\n\n**Creating the Dataset:** \nEach appliance will have at least two class labels associated with it, depending upon the number of \'Activity States\' it may undergo during normal operation. Typically, appliances have only two states, e.g. a light bulb has only two states - On and Off - whereas a kitchen mixer may have several states depending on the adjustable mixer mode. Each state is defined by its own current and power attributes. Each sample is a measurement of the change in either current, active power or reactive power, with the ground-truth label being one of the appliance activity states.\n\n**Determining the Appliance State:** \n\n
\n\nWe use a Naive Bayes classification algorithm to determine which appliances are active and in which activity state they are operating. Predictions are made by choosing the class whose per-feature mean and standard deviation, estimated from the training data, best match the observation under a Gaussian probability density function, i.e. we combine the per-feature likelihoods to determine the class probability.\n\nIn the [Demo Example](https://github.com/jonathanrjpereira/Smart-Energy-Monitor/blob/master/Demo/beproject.py ""Demo Example"") we use a demo training dataset which contains the measured features for 3 devices (two CFL bulbs and an electric drill) with a total of 6 states.\n\nYou can find the video of this Demo Example: [Demo Video](https://photos.app.goo.gl/EcPnu42qcf3pCxSX9 ""Demo Video"")
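A compact sketch of this Gaussian Naive Bayes step using scikit-learn (the feature values and labels below are invented; the Demo Example applies the same idea):

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB

# rows: (change in current, change in active power, change in reactive power)
X_train = np.array([[0.10,  22.0,   3.0],
                    [0.11,  23.0,   3.2],
                    [2.40, 510.0, 190.0],
                    [2.35, 505.0, 185.0]])
y_train = ["CFL_ON", "CFL_ON", "DRILL_ON", "DRILL_ON"]

# fits a per-class mean and standard deviation for each feature, then
# combines the per-feature Gaussian likelihoods into a class probability
clf = GaussianNB().fit(X_train, y_train)
print(clf.predict([[0.09, 21.5, 2.9]]))  # predicts CFL_ON
```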
\n\n**Additional Classification Features:** \nWe can use additional features such as Weather, Time and Room to improve the prediction accuracy.
\n\nThese additional features can be used within a decision tree, forming an ensemble together with the existing Naive Bayes classifier.\n\n**Estimating Electricity Bill:** \nOnce we have determined the active devices, we can measure the total energy they individually consume by recording their Start and Stop times (Total Active Time = Stop Time - Start Time) and multiplying the power consumed by the appliance in each activity state by the total active time for that state. We can then estimate the monthly electricity bill by multiplying the energy consumed over the entire month by the rate per unit. An example of a blank table for the Demo Example is shown below.\n\n
*(Table: blank bill-estimation table for the Demo Example)*
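A minimal sketch of the bill estimate described above (all figures hypothetical):

```python
RATE_PER_KWH = 0.12  # tariff in currency units per kWh (hypothetical)

# per activity state: rated power (W) and total active time this month (h),
# where active time = stop time - start time, accumulated over the month
usage = {
    "CFL_ON":   {"power_w": 15,  "hours": 120},
    "DRILL_ON": {"power_w": 500, "hours": 4},
}

energy_kwh = sum(s["power_w"] * s["hours"] / 1000.0 for s in usage.values())
print(f"{energy_kwh:.1f} kWh -> estimated bill {energy_kwh * RATE_PER_KWH:.2f}")
```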
\n\n**User Interface:** \nThe user interface is built with Flask, a Python micro-framework for web applications. Flask encodes the real-time Python variables into a JavaScript Object Notation (JSON) string. The JSON string is then read into the HTML file using Asynchronous JavaScript And XML (AJAX) requests. These requests are periodically refreshed using the Auto-Refresh function in AJAX. The [Basic UI Auto Data Generator Demo](https://github.com/jonathanrjpereira/Smart-Energy-Monitor/blob/master/Software%20Test/test.py ""Basic UI Demo"") uses auto-updating date/time information as dummy data sent to the [HTML Dashboard Demo](https://github.com/jonathanrjpereira/Smart-Energy-Monitor/blob/master/Software%20Test/index.html ""HTML Dashboard Demo"").\n\n\n## Future Work\n- The classification accuracy can be improved through pattern analysis by\nmonitoring the shape of the VI trajectory. Shape features such as the area under the VI curve as well as the peaks of segments can be further analyzed. [[2]]\n- Analysis of Steady State Voltage Noise such as EMI signatures can improve the detection of motor-based devices like fans, food mixers and washing machines, although this would require additional EMI sensors for each appliance, which would contradict the initial goal of this project.\n- Using Hidden Markov Models [[3]] and Neural Networks [[4]] to determine active devices.\n- Graphs for energy consumed per month per appliance.\n\n\n## Contributing\nAre you a programmer, engineer or hobbyist who has a great idea for a new feature in this project? Maybe you have a good idea for a bug fix? Feel free to grab our code from Github and tinker with it. Don\'t forget to smash those \xe2\xad\x90\xef\xb8\x8f & Pull Request buttons. [Contributor List](https://github.com/jonathanrjpereira/Smart-Energy-Monitor/graphs/contributors)\n\nMade with \xe2\x9d\xa4\xef\xb8\x8f by [Jonathan Pereira](https://github.com/jonathanrjpereira)\n\n## References\n1. [ACS712 Datasheet](https://www.sparkfun.com/datasheets/BreakoutBoards/0712.pdf)\n2. [Non-Intrusive Load Monitoring Approaches for Disaggregated Energy Sensing: A Survey](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3571813/)\n3. [NILM using Hidden Markov Models](https://www.youtube.com/watch?v=9a8dR9NEe6w)\n4. 
[Neural NILM](https://www.youtube.com/watch?v=PC60fysLScg)\n\n\n[1]: https://www.sparkfun.com/datasheets/BreakoutBoards/0712.pdf ""ACS712 Datasheet""\n[2]: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3571813/ ""Non-Intrusive Load Monitoring Approaches for Disaggregated Energy Sensing""\n[3]: https://www.youtube.com/watch?v=9a8dR9NEe6w ""NILM using HMMs""\n[4]: https://www.youtube.com/watch?v=PC60fysLScg ""Neural NILM""\n'",,"2017/09/25, 09:52:30",2221,MIT,0,61,"2019/12/20, 15:50:27",0,2,3,0,1405,0,0.0,0.037735849056603765,,,0,2,false,,false,false,,,,,,,,,,, Hotmaps,The open source mapping and planning tool for heating and cooling.,HotMaps,https://github.com/HotMaps/Hotmaps-toolbox-service.git,github,,Buildings and Heating,"2020/10/20, 12:49:22",4,0,0,false,Python,,HotMaps,"Python,Dockerfile,Shell",,"b""# HotMaps-toolbox Docker image\n\n![Build Status](https://vlheasilab.hevs.ch/buildStatus/icon?job=Hotmaps-toolbox-service%2Fdevelop)\n\nThe Hotmaps toolbox is built around several services:\n- frontend (Angular, typescript)\n- backend (Flask, python)\n- database (Postgresql + PostGIS)\n- map server (Geoserver)\n- calculation modules (Flask, python)\n- reverse proxy (Nginx)\n- wiki (Gollum)\n\nAll these services are built around Docker.\nA docker-compose.yml file allows you to configure and run the whole project in one place. Only the database is run separately.\n\n## Build, configure and run:\n### Build\nDockerfiles are available to build the project manually. \nA docker-compose.yml file is available to build all the services.\n\n### Configure\n\nFirst configure all the services in docker-compose.yml.\nMake sure the volume bindings, build and environment paths fit your project on your host machine.\nIf `.env` files are mentioned in the `docker-compose.yml`, update their content to match your own configuration (credentials, URLs, ...). \n\nThe `docker-compose.yml` file should be placed in the root directory of your project (where all your repositories are cloned).\n\nYou can also run the toolbox without reverse proxy, wiki and geoserver using the `docker-compose-local.yml` (and place it also at the root of your project).\n\n`.env` config. files should be placed at the root of each repository (if necessary) to set the configuration of each service.\n\n\n#### Project structure\n\n\n- root /\n - toolbox-service /\n - [code]\n - .env (config. file)\n - toolbox-client /\n - [code]\n - .env (config. file)\n - wiki /\n - [code]\n - .env (config. file)\n - geoserver /\n - web.xml (config. file)\n - nginx / \n - calculation_modules /\n - CM1 /\n - CM2 /\n - docker-compose.yml\n - docker-compose-local.yml\n - nginx.tmpl (config. file for dockergen service)\n\n**Notes**\n\nWiki:\n- the wiki image is pulled by the docker-compose\n - image and doc: [hotmaps/gollum](https://hub.docker.com/r/hotmaps/gollum)\n- pull the wiki repository (the content of the wiki) to the root of your project according to structure\n - repository: [Official Hotmaps wiki](https://github.com/HotMaps/wiki/)\n - you can of course configure your own repository for the content\n- you should provide a valid ssh key to your wiki in order to push the modifications to the remote\n - config. 
in docker-compose.yml/wiki/volumes/.ssh\n- `.env`: \n - there is an .env.example in the wiki content [repository](https://github.com/HotMaps/wiki/)\n - you can find all options and environment variables available in the readme of the docker image on [hotmaps/gollum](https://hub.docker.com/r/hotmaps/gollum)\n\n### Run\n\nFirst, run the database, either using Docker or using an external service. The database should have 4 schemas:\n- geo\n- public\n- stat\n- users\n\nTo populate the database, refer to the official Hotmaps [Wiki](https://wiki.hotmaps.eu/en/Developers#dataset-integration).\n\n*If you host geoserver differently, make sure it's accessible.*\n\nRun the project using docker-compose:\n`docker-compose up -d --build`\n\n## Release\n\nTo release the project to a server, use the file `docker-compose.yml`.\nThis file uses a reverse proxy (nginx) automatically.\nYour server should have: \n- 4 subdomains 'wiki', 'geoserver', 'api' and 'www'\n - edit `docker-compose.yml`: replace all VIRTUAL_HOST, VIRTUAL_PORT and LETSENCRYPT_HOST + LETSENCRYPT_EMAIL to match your own configuration\n- min ports to open: 80/443 \n- 1 PostGIS database set up somewhere\n- 1 geoserver set up somewhere (or use the one in the docker-compose)\n - edit `web.xml` to match your own domain (`web.xml` is the one shared in the `docker-compose.yml`, especially the CORS)\n- 1 gurobi v8 license for some calculation modules (configuration [documentation](https://github.com/HotMaps/base_calculation_module/tree/gurobi))\n - if running the toolbox on your own server, use your own license key and make sure your gurobi server can be reached by the toolbox\n- nginx.tmpl configured for the server URL""",,"2017/02/13, 08:46:11",2445,Apache-2.0,0,694,"2022/07/06, 20:24:40",22,4,5,0,476,14,0.0,0.61328125,,,0,13,false,,false,false,,,https://github.com/HotMaps,https://www.hotmaps-project.eu/,,,,https://avatars.githubusercontent.com/u/25738377?v=4,,, BuildSysPro,"EDF's Modelica library for buildings, districts and energy systems modeling.",EDF-TREE,https://github.com/EDF-Lab/BuildSysPro.git,github,,Buildings and Heating,"2023/03/08, 14:40:28",40,0,4,true,Modelica,,EDF-Lab,"Modelica,Motoko,Python,HTML,CSS,C,Java,JavaScript,TeX,Batchfile,Makefile,Shell",,"b""# ![BuildSysPro](https://raw.githubusercontent.com/EDF-TREE/BuildSysPro/master/BuildSysPro/Resources/Images/Logo-BuildSysPro.png)\n*BuildSysPro open source* is [EDF](https://www.edf.fr/en/the-edf-group/who-we-are/activities/research-and-development)'s Modelica library for buildings, districts and energy systems modelling. 
This is BuildSysPro's official repository.\n\n### Release updates\nCurrent release is version 3.5.0.\n\nThis release provides:\n\n- Migration from Modelica version 3.2.3 to 4.0.0.\n- Correction of models following the BuildSysPro and OpenModelica compatibility study.\n- Adding new building models: \n - Simple air renewal model for a single zone with variable air properties, \n - Enthalpy transfer through a door, \n - Ventilation model with default values from the 3CL DPE v1.3 method, \n - Records that save the parameters needed to calculate the ventilation flow using the 3CL-DPE method.\n- Adding new system models: \n - On/Off control model for a heat generator, according to the hydraulic circuit temperature, \n - Generic model of air temperature control based on a PID model,\n - Wood stove,\n - Heat pump models with their components,\n - A solar water heater model,\n - Solar wall models,\n - Storage: Lithium-Ion battery,\n - Distribution systems: HydraulicPipe, ThreeWayValveFlow, fan.\n- Adding new boundary conditions:\n - Weather: cold water temperature reader,\n - Scenarios: occupancy schedule.\n- Adding new building stock models:\n - GV calculation of BuildingR2,\n - GenericFloor model,\n - BuildingR2 model,\n - Building date, assembly and settings of collective housing: Gaugun, Picasso and unheated room,\n - Collective housing building R+2 of 9 apartments and R+5 of 34 apartments.\n- Adding new utilities models:\n - Description of the battery characteristics with different configurations,\n - Cases of DHW analysis (Detect change in a signal value, Domestic hot water drawing queue, Measure of cold discomfort of DHW temperature relative to the setpoint).\n- Other minor error corrections.\n\nEDF is part of [IBPSA Project 1](https://ibpsa.github.io/project1/), and the [IBPSA library](https://github.com/ibpsa/modelica-ibpsa) is now integrated into BuildSysPro open source 3.5.0.\n\nBuildSysPro open source 3.5.0 is compatible with OpenModelica 1.17.0. When working with OpenModelica, specify your initial conditions carefully and preferably use the Dassl, Euler or Runge-Kutta solvers.\n\n### License\nThe BuildSysPro open source library is licensed by EDF under the [3-Clause BSD-License](https://opensource.org/licenses/BSD-3-Clause).\n\n### Documentation\nA set of [self-training documents](https://github.com/edf-enerbat/buildsyspro-doc) for the BuildSysPro open source library is available.\n\n### References\n1. Plessis G., Kaemmerlen A., Lindsay A. (2014) [BuildSysPro: a Modelica library for modelling buildings and energy systems](https://www.modelica.org/events/modelica2014/proceedings/html/submissions/ECP140961161_PlessisKaemmerlenLindsay.pdf). Modelica Conference 2014.\n2. Schumann M. (2015) [Vers une plate-forme de mod\xc3\xa9lisation du b\xc3\xa2timent au quartier multiphysique avec Modelica et BuildSysPro](http://ibpsa.fr/jdownloads/Simurex/2015/Presentations/29_01_mathieuschumann.pdf) (*Towards a multiphysics modelling platform for buildings and districts with Modelica and BuildSysPro*), IBPSA France SIMUREX 2015 Conference.\n3. Bouquerel M., Bermes S., Brun A., Bouia H., Lecussan R., Charrier B. (2019) [Building Energy Modeling at District Scale through BIM Based Automatic Model Generation - Towards Building Envelope Optimization](http://www.ibpsa.org/proceedings/BS2019/BS2019_211008.pdf), Building Simulation Conference 2019.\n4. Bouquerel M., Ruben Deutz K., Charrier B., Duforestel T., Rousset M., Erich B., van Riessen G., Braun T. 
(2021) [Application of MyBEM, a BIM to BEM platform, to a building renovation concept with solar harvesting technologies](https://publications.ibpsa.org/proceedings/bs/2021/papers/bs2021_30153.pdf), Building Simulation Conference 2021.\n""",,"2016/01/29, 12:43:53",2826,CUSTOM,2,90,"2023/06/23, 09:29:00",5,4,10,3,124,3,0.0,0.6385542168674698,"2022/03/21, 12:24:40",v3.5.0,0,7,false,,false,false,,,https://github.com/EDF-Lab,https://www.edf.fr/en/the-edf-group/inventing-the-future-of-energy/r-d-global-expertise,France,,,https://avatars.githubusercontent.com/u/16956822?v=4,,, MPCPy,The Python-based open source platform for model predictive control in buildings.,lbl-srg,https://github.com/lbl-srg/MPCPy.git,github,,Buildings and Heating,"2023/03/07, 23:17:15",107,0,18,true,Python,Berkeley Lab - Modeling & Simulation,lbl-srg,"Python,Modelica",,"b'![](doc/userGuide/source/images/logo.png)\n\nThis is the development site for MPCPy, the python-based open-source platform for model predictive control in buildings.\n\n## General\nMPCPy is a python package that facilitates the testing and implementation of occupant-integrated model predictive control (MPC) for building systems. The package focuses on the use of data-driven, simplified physical or statistical models to predict building performance and optimize control. Four main modules contain object classes to import data, interact with real or emulated systems, estimate and validate data-driven models, and optimize control input.\n\n## Third Party Software\nWhile MPCPy provides an integration platform, it relies on free, open-source, third-party software packages for model implementation, simulators, parameter estimation algorithms, and optimization solvers. This includes python packages for scripting and data manipulation as well as other more comprehensive software packages for specific purposes. \n\nIn particular, modeling and optimization for physical systems currently relies on the Modelica language specification (https://www.modelica.org/) and FMI standard (http://fmi-standard.org/) in order to leverage model library and tool development on these standards occurring elsewhere within the building and other industries.\n\nA note to users: Per https://jmodelica.org/, Modelon stopped supporting the open-source JModelica environment as of December 2019. MPCPy can still continue to work with the public open-source version for compilation and optimization of Modelica models. Alternative solutions are being explored for longer-term maintenance.\n\n## Getting Started\n**Users** can [**download v0.1.0**](https://github.com/lbl-srg/MPCPy/releases/tag/v0.1.0).\n\n**Developers** can ``> git clone https://github.com/lbl-srg/MPCPy.git``.\n\nThen, follow the installation instructions and introductory tutorial in Section 2 of the [User Guide](https://github.com/lbl-srg/MPCPy/tree/master/doc/userGuide), located in /doc/userGuide.\n\nMPCPy uses Python 2.7 and has been tested on Ubuntu 16.04.\n\n**Join**, **follow**, and **participate** in the conversation with the [**google group**](https://groups.google.com/forum/#!forum/mpcpy)!\n\n## Contributing\nIf you are interested in contributing to this project:\n\n- You are welcome to report any issues in [Issues](https://github.com/lbl-srg/MPCPy/issues).\n- You are welcome to make a contribution by following the steps outlined on the [Contribution Workflow](https://github.com/lbl-srg/MPCPy/wiki/Contribution-Workflow) page.\n\nResearch has shown that MPC can address emerging control challenges faced by buildings. 
However, there exists no standard practice or method for implementing MPC in buildings. Implementation is defined here as model structure, complexity, and training methods, data resolution and amount, optimization problem structure and algorithm, and transfer of the optimal control solution to real building control. In fact, different applications likely require different implementations. Therefore, we aim for MPCPy to be flexible enough to accommodate different and new approaches to MPC in buildings as research approaches a consensus on best-practice methods.\n\n## License\nMPCPy is available under the following open-source [license](https://github.com/lbl-srg/MPCPy/blob/master/license.txt).\n\n## Cite\nTo cite MPCPy, please use:\n\nBlum, D. H. and Wetter, M. \xe2\x80\x9cMPCPy: An Open-Source Software Platform for Model Predictive Control in Buildings.\xe2\x80\x9d Proceedings of the 15th Conference of International Building Performance Simulation, Aug 7 \xe2\x80\x93 9, 2017. San Francisco, CA.\n'",,"2017/02/22, 15:26:54",2436,CUSTOM,3,633,"2023/03/07, 23:17:18",40,88,176,1,231,1,0.1,0.035502958579881616,"2017/08/07, 02:57:48",v0.1.0,0,5,false,,false,false,,,https://github.com/lbl-srg,https://buildings.lbl.gov/modeling-simulation,,,,https://avatars.githubusercontent.com/u/3753398?v=4,,, obc,"Performance Evaluation, Specification, Deployment and Verification of Building Control Sequences.",lbl-srg,https://github.com/lbl-srg/obc.git,github,,Buildings and Heating,"2023/09/12, 22:52:32",22,0,3,true,HTML,Berkeley Lab - Modeling & Simulation,lbl-srg,"HTML,Python,CSS,TeX,Modelica,Makefile,Shell",http://obc.lbl.gov,"b'# OpenBuildingControl\n\n[![Build Status](https://travis-ci.org/lbl-srg/obc.svg?branch=master)](https://travis-ci.org/lbl-srg/obc)\n\nThis is the development site for the OpenBuildingControl project. The project web site is at https://obc.lbl.gov/\n\nOpenBuildingControl will develop tools and processes for the\nperformance evaluation, specification and verification of building control sequences.\n'",,"2016/12/15, 16:22:04",2505,CUSTOM,46,722,"2023/09/11, 22:09:30",9,80,132,12,43,0,0.9,0.40619902120717777,,,0,11,false,,false,false,,,https://github.com/lbl-srg,https://buildings.lbl.gov/modeling-simulation,,,,https://avatars.githubusercontent.com/u/3753398?v=4,,, The Application Domain Extension,"Defines a standardized data model based on CityGML format for urban energy analyses, aiming to be a reference exchange data format between different urban modeling tools and expert databases.",cstb,https://github.com/cstb/citygml-energy.git,github,,Buildings and Heating,"2020/10/13, 10:00:57",38,0,2,false,Python,CSTB,cstb,Python,,"b'[![Build Status](https://travis-ci.org/cstb/citygml-energy.svg?branch=v0.9.0)](https://travis-ci.org/cstb/citygml-energy)\n# CityGML Energy ADE\n\n## Abstract\nThe Application Domain Extension (ADE) Energy defines a standardized data model based on CityGML format for urban energy analyses, aiming to be a reference exchange data format between different urban modelling tools and expert databases.\n\nDocumentation of each class and parameter is in development as an [Excel sheet](./doc/Definitions_Energy-ADE.xlsx?raw=true).\n\nThe latest ADE release is version 0.6.0. 
All releases can be found and downloaded on the [releases page](https://github.com/cstb/citygml-energy/releases) of the repository.\n\nIt has been developed since May 2014 by an international consortium of urban energy simulation developers and users:\n* Special Interest Group 3D (SIG3D)\n* University of Applied Sciences Stuttgart\n* Technische Universit\xc3\xa4t M\xc3\xbcnchen\n* Karlsruhe Institute f\xc3\xbcr Technologie\n* RWTH Aachen University / E.ON Energy Research Center\n* HafenCityUniversit\xc3\xa4t Hamburg\n* European Institute for Energy Research\n* Ecole Polytechnique F\xc3\xa9d\xc3\xa9rale de Lausanne\n* Centre Scientifique et Technique du Batiment\n* Electricit\xc3\xa9 de France\n* Dedagroup Public Services\n* M.O.S.S Computer Grafik Systeme\n* Austrian Institute of Technology\n\n![Special Interest Group 3D](./doc/logos/201309_SIG3D_Logo.png) ![University of Applied Sciences Stuttgart logo](./doc/logos/hft.jpg) ![Technische Universit\xc3\xa4t M\xc3\xbcnchen logo](./doc/logos/tum.png) ![Karlsruhe Institute f\xc3\xbcr Technologie logo](./doc/logos/kit.jpg)\n![RWTH Aachen University / E.ON Energy Research Center logo](./doc/logos/rwth_eon.jpg) ![HafenCityUniversit\xc3\xa4t Hamburg logo](./doc/logos/hcu.png) ![European Institute for Energy Research logo](./doc/logos/eifer.png)\n![Ecole Polytechnique F\xc3\xa9d\xc3\xa9rale de Lausanne logo](./doc/logos/epfl.png) ![Centre Scientifique et Technique du Batiment logo](./doc/logos/cstb.png) ![Electricit\xc3\xa9 de France logo](./doc/logos/edf.jpg)\n![DedagroupPS logo](./doc/logos/logoPS.png) ![M.O.S.S Computer Grafik Systeme logo](./doc/logos/moss.jpg) ![Austrian Institute of Technology](./doc/logos/ait.jpg)\n\n\n## Guidelines\nA document which describes the guidelines can be downloaded from [this link](./doc/guidelines/Guidelines_EnergyADE.pdf). The guidelines can also be read [online](./doc/guidelines/Guidelines_EnergyADE.md).\n\n## UML diagrams\nThe CityGML Energy ADE currently (v0.6.0) is implemented in a single XSD schema. [This pdf file](./doc/UML-Diagrams_Energy-ADE.pdf) gives an overview of the class diagrams of the whole model. For more details about the modules, a complete description can be found in the [guidelines](./doc/guidelines/Guidelines_EnergyADE.md).\n\n\n'",,"2015/01/13, 10:42:50",3207,CUSTOM,0,323,"2017/11/27, 13:53:30",15,13,135,0,2158,0,0.0,0.44047619047619047,"2016/01/20, 15:36:34",v0.6.0,0,10,false,,false,false,,,https://github.com/cstb,http://www.cstb.fr,"290 route des lucioles, 06904 Sophia Antipolis, FRANCE",,,https://avatars.githubusercontent.com/u/1405833?v=4,,, PLANHEAT Tool,"A QGIS plug-in based on an open source code, whose goal is to analyze, plan and simulate low carbon Heating & Cooling scenarios.",Planheat,https://github.com/Planheat/Planheat-Tool.git,github,,Buildings and Heating,"2020/04/30, 17:26:33",2,0,0,false,Python,,,"Python,HTML,Julia,TeX,Makefile,Batchfile,NSIS,Shell,QML",,"b''"
,,"2019/11/15, 15:53:09",1440,MIT,0,8,"2017/11/27, 13:53:30",2,0,0,0,2158,0,0,0.1428571428571429,,,0,2,false,,false,false,,,,,,,,,,, Energy Signature Analyser,A toolbox to analyze energy signatures of buildings and compare the signatures of all buildings within an entire building stock.,energyincities,https://gitlab.com/energyincities/energy-signature-analyser,gitlab,,Buildings and Heating,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,, BuildingSystems,"The Modelica open source BuildingSystems library is developed for dynamic simulation of the energetic behavior of single rooms, buildings and whole districts.",UdK-VPT,https://github.com/UdK-VPT/BuildingSystems.git,github,"modelica,building,energy,simulation",Buildings and Heating,"2023/06/23, 16:51:44",66,0,8,true,Modelica,UdK-VPT,UdK-VPT,"Modelica,Motoko,Python,HTML,C,CSS,JavaScript,Java,TeX,Batchfile,Makefile,Shell",http://modelica-buildingsystems.de/,"b'BuildingSystems\n===============\n\nThis is the development site for the Modelica [BuildingSystems](https://www.modelica-buildingsystems.de) library and its user guide.\nInstructions for developers are available on the [wiki](https://github.com/UdK-VPT/BuildingSystems/wiki).\n\n## Library description\nThe Modelica BuildingSystems library is a free and open-source library for energy-related building and plant simulation.\n\n## Current release\nThis library is still under heavy development.\nA release will be published when it is done.\n\n## License\nThe Modelica BuildingSystems library is available under the terms of the BSD 3-Clause License.\nA copy of the license is available at\n[BuildingSystems/LICENSE.md](https://github.com/UdK-VPT/BuildingSystems/blob/master/LICENSE.md).\n\n## Development and contribution\nYou may report any issues by using the [Issues](https://github.com/UdK-VPT/BuildingSystems/issues) button. 
\nIf you just have a question on how to use the library, please ask it on the BuildingSystems mailing list:\n[https://groups.google.com/forum/#!forum/modelica-buildingsystems](https://goo.gl/d4ERld)\n\nContributions in the form of [Pull Requests](https://github.com/UdK-VPT/BuildingSystems/pulls) are always welcome.\nPrior to issuing a pull request, make sure your code follows\nthe [style guide and coding conventions](https://github.com/UdK-VPT/BuildingSystems/wiki/Guidelines).\n'",,"2015/11/09, 09:17:44",2907,BSD-3-Clause,6,739,"2023/05/24, 16:07:10",31,93,147,2,154,3,0.1,0.40445859872611467,,,0,10,false,,false,false,,,https://github.com/UdK-VPT,http://www.arch.udk-berlin.de/vpt,Berlin,,,https://avatars.githubusercontent.com/u/10708399?v=4,,, modelica-ibpsa,A Modelica library for building and district energy systems developed within IBPSA Project 1.,ibpsa,https://github.com/ibpsa/modelica-ibpsa.git,github,,Buildings and Heating,"2023/10/25, 12:07:36",125,0,17,true,Modelica,IBPSA,ibpsa,"Modelica,Motoko,Python,C,CSS,Java,TeX,Batchfile,Makefile,HTML,Shell",https://ibpsa.github.io/project1,"b'# Modelica IBPSA library\n\n[![Build Status](https://travis-ci.com/ibpsa/modelica-ibpsa.svg?branch=master)](https://travis-ci.com/ibpsa/modelica-ibpsa)\n\nThis is the development site for the _Modelica IBPSA Library_ and its user guide.\n\nInstructions for developers are available on the [wiki](https://github.com/ibpsa/modelica-ibpsa/wiki).\n\n## Library description\n\nThe Modelica _IBPSA_ library is a free open-source library with basic models that codify\nbest practices for the implementation of models for building and community energy and control systems.\n\nThe development of the IBPSA library is organized through \nthe IBPSA Modelica Working Group (https://github.com/ibpsa/modelica-working-group).\nThe development was organized from 2017 to 2022 through\nthe IBPSA Project 1 (https://ibpsa.github.io/project1)\nof the International Building Performance Simulation Association (IBPSA),\nand from 2012 to 2017 through the\nAnnex 60 project (http://www.iea-annex60.org) of the\nEnergy in Buildings and Communities Programme\nof the International Energy Agency (IEA EBC).\n\nThis library is typically not used directly by end-users. 
Rather, it\nis integrated by developers of other Modelica libraries for building and\ncommunity energy systems, who then distribute it to end-users as part of their\nrespective library.\nCurrently, the _IBPSA_ library is used as the core of these libraries:\n\n * _AixLib_, from RWTH Aachen University, Germany: https://github.com/RWTH-EBC/AixLib\n * _Buildings_, from LBNL, Berkeley, CA, USA: http://simulationresearch.lbl.gov/modelica\n * _BuildingSystems_, from UdK Berlin, Germany: http://www.modelica-buildingsystems.de\n * _IDEAS_ from KU Leuven, Belgium: https://github.com/open-ideas/IDEAS\n\n## License\n\nThe Modelica _IBPSA_ library is available under a 3-clause BSD-license.\nSee [Modelica IBPSA Library license ](https://htmlpreview.github.io/?https://github.com/ibpsa/modelica-ibpsa/blob/master/IBPSA/legal.html).\n\n## Development and contribution\nYou may report any issues by using the [Issues](https://github.com/ibpsa/modelica-ibpsa/issues) button.\n\nContributions in the form of [Pull Requests](https://github.com/ibpsa/modelica-ibpsa/pulls) are always welcome.\nPrior to issuing a pull request, make sure your code follows\nthe [style guide and coding conventions](https://github.com/ibpsa/modelica-ibpsa/wiki/Style-Guide).\n'",,"2013/09/21, 05:49:03",3686,BSD-3-Clause,630,7278,"2023/10/25, 12:07:38",39,967,1769,160,0,4,1.6,0.44851586489252815,"2018/09/27, 19:52:29",v3.0.0,0,47,false,,false,false,,,https://github.com/ibpsa,http://www.ibpsa.org,Worldwide,,,https://avatars.githubusercontent.com/u/16223588?v=4,,, project1,Creates open source software that builds the basis of next generation computing tools for the design and operation of building and district energy and control systems.,ibpsa,https://github.com/ibpsa/project1.git,github,"ibpsa,modelica,bim,simulation,optimization,mpc",Buildings and Heating,"2022/09/14, 12:25:07",48,0,1,true,Modelica,IBPSA,ibpsa,"Modelica,HTML,Python,CSS,TeX,Batchfile,Jupyter Notebook,Makefile,JavaScript,SCSS,Shell",,"b""# IBPSA Project 1\n\nThis public repository is for the administration of the IBPSA Project 1.\nIBPSA Project 1 is conducted under the umbrella of the\nInternational Building Performance Simulation Association (http://ibpsa.org/).\nIt will create open-source software that builds the basis of\nnext generation computing tools for the design and operation of\nbuilding and district energy and control systems.\n\nFor the main web site, visit https://ibpsa.github.io/project1/.\n\nFor the wiki that contains meeting announcements and agendas, visit https://github.com/ibpsa/project1/wiki.\n\nFiles for the individual work packages are in the directories `wp_x_y_z`.\nTo avoid the repository becoming very large, please don't upload multiple versions of\nlarge binary files (such as PowerPoint or Excel files).\n""",,"2016/07/29, 05:49:31",2644,BSD-3-Clause,0,679,"2022/07/19, 12:55:26",7,34,43,1,463,0,0.1,0.5492957746478873,,,0,29,false,,false,false,,,https://github.com/ibpsa,http://www.ibpsa.org,Worldwide,,,https://avatars.githubusercontent.com/u/16223588?v=4,,, teb,A library to calculate the urban surface energy balance at neighborhood scale assuming a simplified canyon geometry.,TEB-model,https://github.com/TEB-model/teb.git,github,"energy-balance-model,urban-meteorology,land-surface-model,meteorology,building-energy,energy-model,energy-consumption,energy-simulation,urban-planning,atmospheric-modelling,environmental-modelling,cmake,teb,town-energy-balance,urban-energy-budget,forecasting-model",Buildings and Heating,"2021/12/13, 
10:03:17",22,0,5,false,Fortran,,TEB-model,"Fortran,Python,TeX,CMake",,"b'
\n\n\n# The Town Energy Balance (TEB) model\n\n[![GitHub release (latest by date)](https://img.shields.io/github/v/release/TEB-model/teb)](https://github.com/TEB-model/teb/releases/latest) [![CI](https://github.com/TEB-model/teb/workflows/CI/badge.svg)](https://github.com/TEB-model/teb/actions) [![DOI](https://joss.theoj.org/papers/10.21105/joss.02008/status.svg)](https://doi.org/10.21105/joss.02008) [![DOI](https://zenodo.org/badge/DOI/10.5281/zenodo.3887080.svg)](https://doi.org/10.5281/zenodo.3887080)\n\n[Overview](#overview) | [Prerequisites](#prerequisites) | [Installation](#installation) | [Documentation](#documentation) | [Example application](#example-application) | [Testing](#testing) | [How to cite](#how-to-cite) | [Contributing](#contributing) | [Copyright and license](#copyright-and-license)\n
\n\n## Overview\n\nThis enhanced software and platform for TEB (Town Energy Balance; [Masson, 2000](https://dx.doi.org/10.1023/A:1002463829265) and subsequent papers) is intended to help scientists and practitioners wishing to use the TEB model in their research as a standalone software application or as a library (e.g. [WRF-TEB](https://doi.org/10.1029/2019MS001961)) to calculate the urban surface energy balance at neighborhood scale assuming a simplified canyon geometry.\n\n\n## Prerequisites\n\n- [Git](https://git-scm.com/) >= 2\n- [CMake](https://cmake.org/) >= 3.1\n- A recent version of the GNU/Intel/Cray Fortran compiler\n- [Python](https://www.python.org/) >= 3.7 [*Optional for testing and tutorial*]\n\n\n## Installation\n\nTo build the Town Energy Balance (TEB) executable and library on Windows, Linux and macOS, clone this repository and run the following commands from your command prompt:\n\n```\nmkdir build\ncd build\ncmake -DCMAKE_BUILD_TYPE=Release ..\ncmake --build .\n```\n\nBy default, we set the real type to 8 bytes wide. This behavior is controlled by the optional `USE_REAL8` flag (default ON). [Ninja](https://ninja-build.org/) support is available and can be specified with the generator flag `-G` at configure time (e.g. `cmake -GNinja -DCMAKE_BUILD_TYPE=Release ..`).\n\n
\nNote for Windows Users\n\nMake sure you have installed the Intel\xc2\xae Visual Studio Integration plugins, or CMake will not be able to identify your compiler (`No CMAKE_Fortran_COMPILER could be found` error).\nMake sure that you use the Intel\xc2\xae Command-Line Window when launching CMake - the Intel\xc2\xae compiler provides a command-line window with the appropriate environment variables already set (see: [Using the Intel\xc2\xae Command-Line Window](https://software.intel.com/en-us/node/522358)).\nYou may also need to specify the generator flag `-G` in CMake; for example, if you are using the Intel\xc2\xae Command-Line Window for Visual Studio 2017, then the CMake command should be `cmake -G ""Visual Studio 15 2017 Win64"" ..`. For more information on how to specify generators in CMake see [cmake-generators](https://cmake.org/cmake/help/latest/manual/cmake-generators.7.html#visual-studio-generators).\n\nE.g. on Windows using the Intel\xc2\xae Command-Line Window for Visual Studio 2017:\n\n```powershell\nmkdir build && cd build\ncmake -G ""Visual Studio 15 2017 Win64"" ..\ncmake --build .\n```\n
\n\n\n## Documentation\n\nThis section includes links to software and model documentation. If you are new to TEB and simply looking to easily get started, please refer to the [example application](#example-application) instead.\n\n### Software\n\nSee [software documentation](docs/software-docs.md) for a general overview. For configuration options available at runtime please refer to [namelist options](docs/namelist-options.md).\n\n### Scientific\nThe complete scientific documentation is included in the [SURFEX scientific documentation](http://www.umr-cnrm.fr/surfex/IMG/pdf/surfex_scidoc_v8.1.pdf).\n\n\n### Code browser\nThe code browser is available at [https://teb-model.github.io/teb](https://teb-model.github.io/teb).\n\n\n## Example application\n\nTo get started with TEB, see [`examples/CAPITOUL/README.md`](examples/CAPITOUL/README.md). The folder contains a simple tutorial in [Jupyter Notebook](https://jupyter.org/) to estimate the buildings\' power demand for cooling using TEB and data and parameters from the CAPITOUL (Canopy and Aerosol Particles Interactions in TOulouse Urban Layer; [Masson et al., 2008](https://doi.org/10.1007/s00703-008-0289-4)) campaign.\n\n\n## Testing\n\nTests are run using the CAPITOUL data provided in `examples/CAPITOUL`. Tests are found in `tests/tests.py` and an overview is given in [`tests/README.md`](tests/README.md). All tests are automatically run at every commit using Continuos Integration. If you are looking to run your tests locally on Linux or macOS, first make sure you have installed all the [prerequisites](#prerequisites), then from the command prompt:\n\n``` bash\npython -m pip install -r requirements.txt\npython tests/test.py --build_type= --case=\n```\n\nwhere `` is either `Debug` or `Release` and `` is the test case currently supported in `tests/test.py` -- see [`tests/README.md`](tests/README.md) for more information. The output files are written to `temp`. Plots are written to `plots`.\n\n\nE.g.\n\n``` bash\npython -m pip install -r requirements.txt\npython tests/test.py --build_type=Debug --case=integration\n```\n\n## How to cite\n\nWhen using the TEB software, please cite both model, and software (with version) as follows:\n\n| Physical model | This software | Version* |\n| ------------------------------------------------------- | --------------------------------------------------------- | ---------------------------------------------------- |\n| [Masson, 2000](https://doi.org/10.1023/A:1002463829265) | [Meyer et al., 2020](https://doi.org/10.21105/joss.02008) | [see Zenodo](https://doi.org/10.5281/zenodo.3887080) |\n\n\nThe corresponding reference list should be as follows:\n\n> Masson, V., 2000: A Physically-Based Scheme For The Urban Energy Budget In Atmospheric Models. Boundary-Layer Meteorology, 94, 357\xe2\x80\x93397, https://doi.org/10.1023/A:1002463829265.\n\n> Meyer, D., Schoetter, R., Masson, V., Grimmond, S., 2020: Enhanced software and platform for the Town Energy Balance (TEB) model. Journal of Open Source Software, 5(50), 2008. https://doi.org/10.21105/joss.02008.\n\n*please make sure to cite the same version you are using with the correct DOI. For a list of all available versions see the list of versions on [Zenodo](https://doi.org/10.5281/zenodo.3887080).\n\n\n## Contributing\n\nPlease see the [CONTRIBUTING.md](CONTRIBUTING.md) file.\n\n\n## Copyright and license\n\nCopyright stated at the top of source files. 
Software released under [CeCILL version 2.1](Licence_CeCILL_V2.1-en.txt).\n'",",https://doi.org/10.21105/joss.02008,https://doi.org/10.5281/zenodo.3887080,https://doi.org/10.1029/2019MS001961,https://doi.org/10.1007/s00703-008-0289-4,https://doi.org/10.1023/A:1002463829265,https://doi.org/10.21105/joss.02008,https://doi.org/10.5281/zenodo.3887080,https://doi.org/10.1023/A:1002463829265.\n\n,https://doi.org/10.21105/joss.02008.\n\n*please,https://doi.org/10.5281/zenodo.3887080","2019/04/14, 10:30:15",1655,BSD-3-Clause,0,73,"2021/12/13, 09:51:26",2,35,43,0,681,0,0.4,0.028169014084507005,"2021/12/13, 10:08:38",4.1.2,0,2,false,,false,true,,,https://github.com/TEB-model,,,,,https://avatars.githubusercontent.com/u/44439852?v=4,,, tsib,A Python package that builds up on different databases and models for creating consistent demand and production time series of residential buildings.,FZJ-IEK3-VSA,https://github.com/FZJ-IEK3-VSA/tsib.git,github,,Buildings and Heating,"2023/03/22, 09:29:28",17,0,2,true,Python,FZJ-IEK3,FZJ-IEK3-VSA,"Python,Jupyter Notebook,Makefile",,"b'[![Build Status](https://img.shields.io/gitlab/pipeline/l-kotzur/tsib/master.svg)](https://gitlab.com/l-kotzur/tsib/pipelines)\n[![Version](https://img.shields.io/pypi/v/tsib.svg)](https://pypi.python.org/pypi/tsib)\n\n\n\n# tsib - Time Series Initialization for Buildings\n\ntsib is a python package that builds on different databases and models for creating consistent demand and production time series of residential buildings. This could be either occupancy behavior, electricity demand or heat demand time series, as well as photovoltaic (PV) and solar thermal production time series.\n\n\nIf you want to use tsib in a published work, please [**cite the following publication**](http://juser.fz-juelich.de/record/858675), which applies tsib for the creation of time series for residential buildings in Germany. \n\n\n## Features\n* flexible configuration of single buildings by different input arguments\n* simple building definition based on an archetype building catalogue\n* consideration of the occupancy behavior\n* derivation of the electric device load or the demand for thermal comfort\n* calculation of the heat load based on a thermal building model\n* provision of location specific time series for solar irradiation and temperature based on weather data\n\n\n## Applied databases and models\ntsib is a flexible tool which allows the use of different models and databases for the generation of time series for buildings. In version 0.1.0 the following databases and models are included in tsib:\n* [CREST](https://www.lboro.ac.uk/research/crest/demand-model/) demand model for the simulation of the occupancy behavior\n* [5R1C](https://www.sciencedirect.com/science/article/abs/pii/S0306261916314933) thermal building model \n* [pvlib](https://github.com/pvlib/pvlib-python) for solar irradiance calculation and photovoltaic simulation\n* [TABULA/EPISCOPE](http://episcope.eu/) archetype building catalogue\n* [DWD Testreferenzjahre](https://www.dwd.de/DE/leistungen/testreferenzjahre/testreferenzjahre.html) for providing weather data\n\n\n## Installation\nDirectly install via pip as follows:\n\n\tpip install tsib\n\nAlternatively, clone a local copy of the repository to your computer\n\n\tgit clone https://github.com/FZJ-IEK3-VSA/tsib.git\n\t\nThen install tsib via pip as follows:\n\t\n\tcd tsib\n\tpip install . 
\n\t\nOr install directly via python with\n\n\tpython setup.py install\n\t\nIn order to use the 5R1C thermal building model, make sure that you have installed a MILP solver. By default the coin-cbc solver is used, which can be installed either with\n\n\tsudo apt-get install coinor-cbc\n\nor, for Anaconda under Windows, with\n\n\tconda install -c conda-forge coincbc\n\nOther solvers can be selected by setting the environment variable $SOLVER. \n\t\nTo get flexible weather data from the Climate Data Store, register [here](https://cds.climate.copernicus.eu/api-how-to) and follow the instructions to get your own key. Make sure that you have agreed to the [license terms](https://cds.climate.copernicus.eu/cdsapp/#!/terms/licence-to-use-copernicus-products).\n\n\t\n## Examples\n\nThis [jupyter notebook](examples/showcase.ipynb) shows the capabilities of tsib to create all relevant time series. \n\n\n## License\n\nMIT License\n\nCopyright (C) 2016-2022 Leander Kotzur (FZJ IEK-3), Timo Kannengie\xc3\x9fer (FZK-IEK-3), Kevin Knosala (FZJ IEK-3), Peter Stenzel (FZJ IEK-3), Peter Markewitz (FZJ IEK-3), Martin Robinius (FZJ IEK-3), Detlef Stolten (FZJ IEK-3)\n\nYou should have received a copy of the MIT License along with this program.\nIf not, see https://opensource.org/licenses/MIT\n\n## About Us\n
\nWe are the Institute of Energy and Climate Research - Techno-economic Systems Analysis (IEK-3) belonging to the Forschungszentrum J\xc3\xbclich. Our interdisciplinary department\'s research focuses on energy-related process and systems analyses. Data searches and system simulations are used to determine energy and mass balances, as well as to evaluate performance, emissions and costs of energy systems. The results are used for performing comparative assessment studies between the various systems. Our current priorities include the development of energy strategies, in accordance with the German Federal Government\xe2\x80\x99s greenhouse gas reduction targets, by designing new infrastructures for sustainable and secure energy supply chains and by conducting cost analysis studies for integrating new technologies into future energy market frameworks.\n\n## Acknowledgement\n\nThis work was supported by the Helmholtz Association under the Joint Initiative [""Energy System 2050 A Contribution of the Research Field Energy""](https://www.helmholtz.de/en/research/energy/energy_system_2050/).\n\n\n'",,"2019/11/17, 21:09:35",1438,MIT,12,131,"2022/12/15, 08:28:41",0,2,2,1,314,0,0.0,0.14655172413793105,,,0,5,false,,false,false,,,https://github.com/FZJ-IEK3-VSA,https://www.fz-juelich.de/iek/iek-3/EN/Home/home_node.html,Forschungszentrum Jülich,,,https://avatars.githubusercontent.com/u/28654423?v=4,,, DHNx,This package provides an open toolbox for district heating and cooling network optimization and simulation models.,oemof,https://github.com/oemof/DHNx.git,github,,Buildings and Heating,"2023/10/23, 13:41:37",24,3,9,true,Python,oemof community,oemof,Python,,"b'|badge_coverage| |readthedocs| |zenodo|\r\n\r\n~~~~\r\nDHNx\r\n~~~~\r\n\r\nThis package provides an open toolbox for district heating and cooling network\r\noptimization and simulation models.\r\n\r\n.. contents::\r\n\r\nAbout\r\n=====\r\n\r\nThe aim of DHNx is to provide a toolbox for building models of\r\ndistrict heating/cooling systems. \r\n\r\nQuickstart\r\n==========\r\n\r\nIf you have a working Python3 environment, install the latest DHNx version from PyPI:\r\n\r\n.. code:: bash\r\n\r\n    pip install dhnx\r\n\r\nInstall the developer version of DHNx by cloning DHNx to your computer and running\r\n\r\n.. code:: bash\r\n\r\n    pip install -e <path-to-DHNx>\r\n\r\nin your virtualenv.\r\n\r\nCheck out the\r\n`examples `_ to get started.\r\n\r\nDocumentation\r\n=============\r\n\r\nThe documentation (work in progress) can be found at\r\nhttps://dhnx.readthedocs.io.\r\nTo build the docs locally using sphinx-build, run the following in a terminal:\r\n\r\n.. code:: bash\r\n\r\n    sphinx-build docs <build-dir>\r\n\r\nContributing\r\n============\r\n\r\nEverybody is welcome to contribute to the development of DHNx. 
The `developer\r\nguidelines of oemof `_\r\nare in most parts equally applicable to DHNx.\r\n\r\n\r\nCiting\r\n======\r\n\r\nWe use the zenodo project to get a DOI for each version.\r\n`Search zenodo for the right citation of your DHNx version `_.\r\n\r\nIf you want to refer specifically to the district heating network optimization\r\npart of DHNx, you can also cite\r\n`https://doi.org/10.5278/ijsepm.6248 `_.\r\n\r\n\r\nLicense\r\n=======\r\n\r\nMIT License\r\n\r\nCopyright (c) 2020 oemof developing group\r\n\r\nPermission is hereby granted, free of charge, to any person obtaining a copy\r\nof this software and associated documentation files (the ""Software""), to deal\r\nin the Software without restriction, including without limitation the rights\r\nto use, copy, modify, merge, publish, distribute, sublicense, and/or sell\r\ncopies of the Software, and to permit persons to whom the Software is\r\nfurnished to do so, subject to the following conditions:\r\n\r\nThe above copyright notice and this permission notice shall be included in all\r\ncopies or substantial portions of the Software.\r\n\r\nTHE SOFTWARE IS PROVIDED ""AS IS"", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\r\nIMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\r\nFITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE\r\nAUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\r\nLIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\r\nOUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE\r\nSOFTWARE.\r\n\r\n\r\n.. |badge_coverage| image:: https://coveralls.io/repos/github/oemof-heat/DHNx/badge.svg?branch=dev&service=github\r\n :target: https://coveralls.io/github/oemof-heat/DHNx?branch=dev\r\n :alt: Test coverage\r\n\r\n.. |zenodo| image:: https://zenodo.org/badge/DOI/10.5281/zenodo.7844753.svg\r\n :target: https://doi.org/10.5281/zenodo.7844753\r\n\r\n.. |readthedocs| image:: https://readthedocs.org/projects/dhnx/badge/?version=latest\r\n :target: https://dhnx.readthedocs.io/en/latest/?badge=latest\r\n :alt: Documentation Status\r\n'",",https://zenodo.org/search?page=1&size=20&q=dhnx,https://doi.org/10.5278/ijsepm.6248,https://doi.org/10.5278/ijsepm.6248,https://doi.org/10.5281/zenodo.7844753\r\n\r\n","2019/06/12, 11:28:54",1596,MIT,75,1106,"2023/10/23, 12:57:53",35,54,92,22,2,9,0.2,0.34090909090909094,"2023/04/19, 08:31:04",v0.0.3,0,7,false,,false,false,"in-RET/inretensys-fastapi,oemof/oemof,SESMG/SESMG",,https://github.com/oemof,https://oemof.org,Germany,,,https://avatars.githubusercontent.com/u/8503379?v=4,,, The Building Data Genome 2 Data-Set,Whole building non-residential hourly energy meter data from the Great Energy Predictor III competition.,buds-lab,https://github.com/buds-lab/building-data-genome-project-2.git,github,"open-source,open-data,open-data-science,energy-efficiency,energy-consumption,building-energy,building-automation,smart-city,smart-meter,electricity-meter,electricity-consumption",Buildings and Heating,"2023/10/14, 03:26:45",143,0,40,true,Jupyter Notebook,Building and Urban Data Science (BUDS) Group,buds-lab,Jupyter Notebook,https://www.budslab.org/,"b'![logo](figures/buildingdatagenome2.png)\n\n[![DOI](https://zenodo.org/badge/247690451.svg)](https://zenodo.org/badge/latestdoi/247690451)\n\n# The Building Data Genome 2 (BDG2) Data-Set\n## Data-set description\nBDG2 is an open data set made up of 3,053 energy meters from 1,636 buildings. 
The time range of the time-series data spans two full years (2016 and 2017), with hourly measurements from electricity, heating and cooling water, steam, and irrigation meters. A subset of the data was used in the [Great Energy Predictor III (GEPIII) competition hosted by the ASHRAE organization in late 2019](https://www.kaggle.com/c/ashrae-energy-prediction). A full overview of the GEPIII competition can be [found in the Science and Technology for the Built Environment journal](https://www.tandfonline.com/doi/full/10.1080/23744731.2020.1795514) - [Preprint found on arXiv](https://arxiv.org/abs/2007.06933)\n\nThe GEPIII subset includes hourly data from 2,380 meters from 1,449 buildings that were used in a machine learning competition for long-term prediction with an application to measurement and verification in the building energy analysis domain. This data set can be used to benchmark various statistical learning algorithms and other data science techniques. It can also be used simply as a teaching or learning tool to practice dealing with measured performance data from large numbers of non-residential buildings. The charts below illustrate the breakdown of the buildings according to primary use category and subcategory, industry and subindustry, timezone and meter type.
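For orientation, a minimal sketch of pulling one of the meter files into pandas (the file name and column layout are illustrative assumptions; check `data/meters/` for the actual files):

```python
import pandas as pd

# Load hourly electricity readings; assumed layout: a "timestamp" column
# plus one column of readings per building.
elec = pd.read_csv(
    "data/meters/cleaned/electricity_cleaned.csv",
    index_col="timestamp",
    parse_dates=["timestamp"],
)

# Quick sanity check: resample the first building's meter to daily totals.
daily = elec.iloc[:, 0].resample("D").sum()
print(daily.head())
```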
\n\n![cat_features](figures/metadata_features.png)\n\n## Getting Started\nWe recommend you download the [Anaconda Python Distribution](https://www.continuum.io/downloads) and use Jupyter to get an understanding of the data.\n- Temporal meters data are found in `/data/meters/`\n- Metadata is found in `data/metadata/`\n- To join all meters raw data into one dataset follow [this](/notebooks/00_All-meters-dataset.ipynb) notebook\n\nExample notebooks are found in `/notebooks/` -- a few good overview examples:\n- [Exploratory Data Analysis of metadata](notebooks/01_EDA-metadata.ipynb)\n- [Exploratory Data Analysis of weather](notebooks/02_EDA-weather.ipynb)\n- [Exploratory Data Analysis of meter reading](notebooks/03_EDA-meter-reading.ipynb)\n\n## Detailed Documentation\nThe detailed documentation of how this data set was created can be found in the [repository\'s wiki](https://github.com/buds-lab/building-data-genome-project-2/wiki) and in the following publication:\n\n### Citation of BDG2 Data-Set\n* [Nature Scientific Data (open access)](https://www.nature.com/articles/s41597-020-00712-x)\n\nMiller, C., Kathirgamanathan, A., Picchetti, B. et al. The Building Data Genome Project 2, energy meter data from the ASHRAE Great Energy Predictor III competition. Sci Data 7, 368 (2020). https://doi.org/10.1038/s41597-020-00712-x\n\n```\n\n\n@ARTICLE{Miller2020-yc,\n title = ""The Building Data Genome Project 2, energy meter data from the\n {ASHRAE} Great Energy Predictor {III} competition"",\n author = ""Miller, Clayton and Kathirgamanathan, Anjukan and Picchetti,\n Bianca and Arjunan, Pandarasamy and Park, June Young and Nagy,\n Zoltan and Raftery, Paul and Hobson, Brodie W and Shi, Zixiao\n and Meggers, Forrest"",\n abstract = ""This paper describes an open data set of 3,053 energy meters\n from 1,636 non-residential buildings with a range of two full\n years (2016 and 2017) at an hourly frequency (17,544\n measurements per meter resulting in approximately 53.6 million\n measurements). These meters were collected from 19 sites across\n North America and Europe, with one or more meters per building\n measuring whole building electrical, heating and cooling water,\n steam, and solar energy as well as water and irrigation meters.\n Part of these data was used in the Great Energy Predictor III\n (GEPIII) competition hosted by the American Society of Heating,\n Refrigeration, and Air-Conditioning Engineers (ASHRAE) in\n October-December 2019. GEPIII was a machine learning competition\n for long-term prediction with an application to measurement and\n verification. This paper describes the process of data\n collection, cleaning, and convergence of time-series meter data,\n the meta-data about the buildings, and complementary weather\n data. 
This data set can be used for further prediction\n benchmarking and prototyping as well as anomaly detection,\n energy analysis, and building type classification.\n Machine-accessible metadata file describing the reported data:\n https://doi.org/10.6084/m9.figshare.13033847"",\n journal = ""Scientific Data"",\n publisher = ""Nature Publishing Group"",\n volume = 7,\n pages = ""368"",\n month = oct,\n year = 2020,\n language = ""en""\n}\n\n\n```\n\n### Preprints\n* [arXiv](https://arxiv.org/abs/2006.02273)\n* [ResearchGate](https://www.researchgate.net/publication/341895125_The_Building_Data_Genome_Project_2_Hourly_energy_meter_data_from_the_ASHRAE_Great_Energy_Predictor_III_competition)\n\n# Publications or Projects that use BDG2 data-set\nPlease update this list if you add notebooks or R-Markdown files to the ``notebook`` folder. Naming convention is a number (for ordering), the creator\'s initials, and a short `-` delimited description, e.g. `1.0-jqp-initial-data-exploration`.\n\n- (publication here)\n\n## Repository structure\n```\nbuilding-data-genome-project-2\n\xe2\x94\x9c\xe2\x94\x80 README.md <- BDG2 README for developers using this data-set\n\xe2\x94\x94\xe2\x94\x80 data\n| \xe2\x94\x9c\xe2\x94\x80metadata <- buildings metadata\n| \xe2\x94\x9c\xe2\x94\x80 weather <- weather data\n| \xe2\x94\x94\xe2\x94\x80 meters\n| \xe2\x94\x94\xe2\x94\x80 raw <- all meter reading datasets\n| \xe2\x94\x94\xe2\x94\x80 cleaned <- cleaned meter data based on several filtering steps\n| \xe2\x94\x94\xe2\x94\x80 kaggle <- the 2017 meter data that aligns with the Kaggle competition\n\xe2\x94\x9c\xe2\x94\x80 notebooks <- Jupyter notebooks, named after the naming convention\n\xe2\x94\x94\xe2\x94\x80 figures <- figures created during exploration of BDG 2.0 Data-set\n```\n\n\n'",",https://zenodo.org/badge/latestdoi/247690451,https://arxiv.org/abs/2007.06933,https://doi.org/10.1038/s41597-020-00712-x\n\n```\n\n\n@ARTICLE,https://doi.org/10.6084/m9.figshare.13033847"",\n,https://arxiv.org/abs/2006.02273","2020/03/16, 11:57:32",1318,CUSTOM,1,84,"2020/06/23, 08:19:41",4,2,24,0,1219,0,0.0,0.19753086419753085,"2020/06/10, 00:36:17",v1.0,0,2,false,,false,false,,,https://github.com/buds-lab,www.budslab.org,Singapore,,,https://avatars.githubusercontent.com/u/26264086?v=4,,, BESOS,A collection of modules for the simulation and optimization of buildings and urban energy systems.,energyincities,https://gitlab.com/energyincities/besos,gitlab,,Buildings and Heating,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,, pythermalcomfort,"Package to calculate several thermal comfort indices (e.g. PMV, PPD, SET, adaptive) and convert physical variables.",CenterForTheBuiltEnvironment,https://github.com/CenterForTheBuiltEnvironment/pythermalcomfort.git,github,,Buildings and Heating,"2023/10/24, 07:11:19",113,35,32,true,Python,,CenterForTheBuiltEnvironment,"Python,Batchfile",https://pythermalcomfort.readthedocs.io/en/latest/,"b'========\nOverview\n========\n\n.. start-badges\n\n.. list-table::\n :stub-columns: 1\n\n * - docs\n - |docs|\n * - license\n - |license|\n * - downloads\n - |downloads|\n * - tests\n - | |appveyor| |codecov|\n * - package\n - | |version| |wheel|\n | |supported-ver|\n | |package-health|\n\n.. |package-health| image:: https://snyk.io/advisor/python/pythermalcomfort/badge.svg\n :target: https://snyk.io/advisor/python/pythermalcomfort\n :alt: pythermalcomfort\n\n.. 
|license| image:: https://img.shields.io/pypi/l/pythermalcomfort?color=brightgreen\n :target: https://github.com/CenterForTheBuiltEnvironment/pythermalcomfort/blob/master/LICENSE\n :alt: pythermalcomfort license\n\n.. |docs| image:: https://readthedocs.org/projects/pythermalcomfort/badge/?style=flat\n :target: https://readthedocs.org/projects/pythermalcomfort\n :alt: Documentation Status\n\n.. |downloads| image:: https://img.shields.io/pypi/dm/pythermalcomfort?color=brightgreen\n :alt: PyPI - Downloads\n\n.. |appveyor| image:: https://ci.appveyor.com/api/projects/status/github/CenterForTheBuiltEnvironment/pythermalcomfort?branch=master&svg=true\n :alt: AppVeyor Build Status\n :target: https://ci.appveyor.com/project/CenterForTheBuiltEnvironment/pythermalcomfort\n\n.. |codecov| image:: https://codecov.io/github/CenterForTheBuiltEnvironment/pythermalcomfort/coverage.svg?branch=master\n :alt: Coverage Status\n :target: https://codecov.io/github/CenterForTheBuiltEnvironment/pythermalcomfort\n\n.. |version| image:: https://img.shields.io/pypi/v/pythermalcomfort.svg\n :alt: PyPI Package latest release\n :target: https://pypi.org/project/pythermalcomfort\n\n.. |wheel| image:: https://img.shields.io/pypi/wheel/pythermalcomfort.svg\n :alt: PyPI Wheel\n :target: https://pypi.org/project/pythermalcomfort\n\n.. |supported-ver| image:: https://img.shields.io/pypi/pyversions/pythermalcomfort.svg\n :alt: Supported versions\n :target: https://pypi.org/project/pythermalcomfort\n\n.. |supported-implementations| image:: https://img.shields.io/pypi/implementation/pythermalcomfort.svg\n :alt: Supported implementations\n :target: https://pypi.org/project/pythermalcomfort\n\n.. end-badges\n\nPackage to calculate several thermal comfort indices (e.g. PMV, PPD, SET, adaptive) and convert physical variables.\n\nPlease cite us if you use this package: `Tartarini, F., Schiavon, S., 2020. pythermalcomfort: A Python package for thermal comfort research. SoftwareX 12, 100578. https://doi.org/10.1016/j.softx.2020.100578 `_\n\n* Free software: MIT license\n\nInstallation\n============\n\n::\n\n pip install pythermalcomfort\n\nYou can also install the in-development version with::\n\n pip install https://github.com/CenterForTheBuiltEnvironment/pythermalcomfort/archive/master.zip\n\n\nDocumentation\n=============\n\n\nhttps://pythermalcomfort.readthedocs.io/\n\n\nExamples and Tutorials\n======================\n\n`Examples`_ files on how to use some of the functions\n\n.. _Examples: https://pythermalcomfort.readthedocs.io/en/latest/usage.html\n\n\nContributing\n============\n\nContributions are welcome, and they are greatly appreciated! Every little bit helps, and credit will always be given. Click `here`_ to learn more on how to contribute to the project.\n\n.. _here: https://pythermalcomfort.readthedocs.io/en/latest/contributing.html\n\n\nDeployment\n==========\n\nI am using travis to test the code. 
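For a quick usage orientation before the deployment details, a minimal PMV/PPD calculation, sketched against the `pmv_ppd` model as documented for the 2.x releases (the input values are arbitrary examples):

```python
from pythermalcomfort.models import pmv_ppd

# PMV/PPD per ISO 7730 for a typical office condition.
# tdb/tr in degC, vr in m/s, rh in %, met in met units, clo in clo units.
result = pmv_ppd(tdb=25, tr=25, vr=0.1, rh=50, met=1.2, clo=0.5, standard="ISO")
print(result)  # dict with 'pmv' and 'ppd' keys
```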
In addition, I have enabled GitHub actions.\nEvery time the code is pushed or pulled to the `master` repository then the GitHub action tests the code and if the tests pass, a new version of the package is published automatically on PyPI.\nSee file in `.github/workflows/` for more information.\n'",",https://doi.org/10.1016/j.softx.2020.100578","2020/02/10, 18:55:51",1353,MIT,231,638,"2023/10/24, 07:11:20",8,21,73,41,1,0,1.7,0.23880597014925375,,,0,6,false,,true,true,"night-crawler/sensor-hub-ble-collector,heidless-stillwater/streamlit-app,Pancakeme/Dynamic-Pricing,fulvio9999/PROGETTO-BD,bruadam/ICIEE_DTU_Heat_stress_Africa,andamanopal/test_jenkins,samwesleyptl/data-science-portfolio,ant-daq/climate_analysis,hura2/switchbot_comfort_control,hermmanhender/energyplus-gymEnv,deepaknagaraj2021/comfort_tool-master,mohamedrks/python-flask-webapp-msft,sebastienlanglois/dash-gis,GabeLR/GCPTutorial,ddegrave/sh,akikhub/comfort_tool,akikhub/example-youtube,FedericoTartarini/dorn-longitudinal-tc-study,FedericoTartarini/tool-risk-scale-football-nsw,Ramonfi/ModellingOccupantBehavior,buds-lab/ComfortLearn,mohamedrks/thermal-py,noohauqu/Comfort,buds-lab/abm-demo,FedericoTartarini/paper-cobee-2022,SupKCH/thermal-comfort-calculation,covetool/clima,dxdc/comfort_tool,GTsimogiannis/Flask-App-GCP-Deployment,jbonett/thermal-comfort-tflite,sh-s/comfortpy,sh-s/compytest10,sh-s/compytest9,CenterForTheBuiltEnvironment/clima,CenterForTheBuiltEnvironment/comfort_tool",,https://github.com/CenterForTheBuiltEnvironment,http://cbe.berkeley.edu,"Berkeley, CA",,,https://avatars.githubusercontent.com/u/6592546?v=4,,, comfort_tool,"A web interface for comfort model calculations and visualizations according to ASHRAE Standard-55, EN Standard 16798 and ISO Standard 7730.",CenterForTheBuiltEnvironment,https://github.com/CenterForTheBuiltEnvironment/comfort_tool.git,github,"comfort,air-temperature,thermal-comfort,pmv-prediction,pmv",Buildings and Heating,"2023/10/20, 00:05:40",81,0,5,true,JavaScript,,CenterForTheBuiltEnvironment,"JavaScript,HTML,CSS,Python,Dockerfile,Procfile",http://comfort.cbe.berkeley.edu,"b'# CBE Thermal Comfort Tool\n\nA web interface for comfort model calculations and visualizations according to ASHRAE Standard-55, EN Standard 16798 and ISO Standard 7730. \n\n[Live deployment of the tool](http://comfort.cbe.berkeley.edu/).\n\n[Official documentation](https://center-for-the-built-environment.gitbook.io/thermal-comfort-tool/)\n\n[Contribute to the project](https://center-for-the-built-environment.gitbook.io/thermal-comfort-tool/contributing/contributing)\n\n[How to deploy](https://cbe-berkeley.gitbook.io/thermal-comfort-tool/contributing/contributing#deploying)'",,"2014/09/04, 17:51:59",3338,CUSTOM,43,572,"2023/10/20, 12:47:00",4,35,71,4,5,0,0.2,0.1917808219178082,"2017/11/20, 18:53:42",1.1,0,8,false,,false,false,,,https://github.com/CenterForTheBuiltEnvironment,http://cbe.berkeley.edu,"Berkeley, CA",,,https://avatars.githubusercontent.com/u/6592546?v=4,,, resstock,"Helping states, municipalities, utilities, and manufacturers identify which building stock improvements save the most energy and money.",NREL,https://github.com/NREL/resstock.git,github,,Buildings and Heating,"2023/10/24, 15:47:01",87,0,16,true,Ruby,National Renewable Energy Laboratory,NREL,"Ruby,Python,CSS,Batchfile,Makefile",https://resstock.nrel.gov,"b'\n\nThe `develop` branch is under active development. 
Find the latest release [here](https://github.com/NREL/resstock/releases).\n\n[![GitHub release (latest by date including pre-releases)](https://img.shields.io/github/v/release/NREL/resstock?include_prereleases)](https://github.com/NREL/resstock/releases)\n[![ci](https://github.com/NREL/resstock/workflows/ci/badge.svg)](https://github.com/NREL/resstock/actions)\n[![Documentation Status](https://readthedocs.org/projects/resstock/badge/?version=latest)](https://resstock.readthedocs.io/en/latest/?badge=latest)\n\n[ResStock\xe2\x84\xa2](https://www.nrel.gov/buildings/resstock.html), built on the [OpenStudio platform](http://openstudio.net), is a project geared at modeling existing residential building stocks at national, regional, or local scales with a high-degree of granularity (e.g., one physics-based simulation model for every 200 dwelling units), using the [EnergyPlus simulation engine](http://energyplus.net). Information about ComStock\xe2\x84\xa2, a sister tool for modeling the commercial building stock, can be found [here](https://www.nrel.gov/buildings/comstock.html). \n\nThis repository contains:\n\n- [Housing characteristics of the U.S. residential building stock](https://github.com/NREL/resstock/tree/main/project_national/housing_characteristics), in the form of conditional probability distributions stored as tab-separated value (.tsv) files. Comments at the bottom of each file document data sources and assumptions for each.\n- [A library of housing characteristic ""options""](https://github.com/NREL/resstock/blob/main/resources/options_lookup.tsv) that translate high-level characteristic parameters into arguments for OpenStudio measures, and which are referenced by the housing characteristic .tsv files and building energy upgrades defined in project definition files\n- Project definition files:\n - v2.3.0 and later: [buildstockbatch YML files openable in any text editor](https://github.com/NREL/resstock/blob/main/project_national/national_baseline.yml)\n - v2.2.5 and prior: [Project folder openable in PAT](https://github.com/NREL/resstock/tree/v2.2.5/project_singlefamilydetached)\n- Unit-level OpenStudio Measures for automatically constructing OpenStudio Models of each representative dwelling unit model:\n - v3.0.0 and later: [OpenStudio-HPXML Measures](https://github.com/NREL/resstock/tree/main/resources/hpxml-measures)\n - v2.5.0 and prior: [OpenStudio Measures](https://github.com/NREL/resstock/tree/v2.5.0/resources/measures)\n- [Higher-level OpenStudio Measures](https://github.com/NREL/resstock/tree/main/measures) for controlling simulation inputs and outputs\n\nThis repository does not contain software for running ResStock simulations, which can be found as follows:\n\n - [Versions 2.3.0](https://github.com/NREL/resstock/releases/tag/v2.3.0) and later only support the use of [buildstockbatch](https://github.com/NREL/buildstockbatch) for deploying simulations on high-performance or cloud computing. Version 2.3.0 also removed separate projects for single-family detached and multifamily buildings, in lieu of a combined `project_national` representing the U.S. residential building stock. See the [changelog](https://github.com/NREL/resstock/blob/main/CHANGELOG.md) for more details. \n - [Versions 2.2.5](https://github.com/NREL/resstock/releases/tag/v2.2.5) and prior support the use of the publicly available [OpenStudio-PAT](https://github.com/NREL/OpenStudio-PAT) software as an interface for deploying simulations on cloud computing. 
Read the [documentation for v2.2.5](https://resstock.readthedocs.io/en/v2.2.5/).\n'",,"2016/04/11, 15:37:56",2753,CUSTOM,1067,9547,"2023/10/18, 23:15:29",72,802,1059,136,6,28,0.6,0.37504204507231753,"2023/05/26, 00:27:29",v3.1.0,0,22,false,,false,false,,,https://github.com/NREL,http://www.nrel.gov,"Golden, CO",,,https://avatars.githubusercontent.com/u/1906800?v=4,,, DSMR-reader,Used for reading the smart meter DSMR (Dutch Smart Meter Requirements) P1 port yourself at your home.,dsmrreader,https://github.com/dsmrreader/dsmr-reader.git,github,"raspberry-pi,non-commercial,dsmr5,dsmr4,dsmrp1,dsmr-reader,telegram-data-storage,energy-consumption-visualizer,p1",Buildings and Heating,"2023/02/02, 06:39:12",446,0,36,true,Python,DSMR-reader,dsmrreader,"Python,CSS,JavaScript,HTML,Shell",https://dsmr-reader.readthedocs.io,"b""[![Python](https://img.shields.io/badge/python-3.7%20|%203.8%20|%203.9|%203.10|%203.11-brightgreen.svg?style=for-the-badge)](https://devguide.python.org/versions/#versions)\n[![GitHub Workflow Status](https://img.shields.io/github/actions/workflow/status/dsmrreader/dsmr-reader/automated-tests.yml?branch=v5&style=for-the-badge)](https://github.com/dsmrreader/dsmr-reader/actions)\n[![Read the Docs](https://img.shields.io/readthedocs/dsmr-reader/v5?style=for-the-badge)](https://dsmr-reader.readthedocs.io/)\n\n\n# DSMR-reader\n*DSMR-protocol reader, telegram data storage and energy consumption visualizer. \nCan be used for reading the smart meter DSMR (Dutch Smart Meter Requirements) P1 port yourself at your home. \nYou will need a cable and hardware that can run Linux software. \n**Free for non-commercial use**.*\n\n----\n\n## Docker\nA third party Docker implementation [can be found here](https://github.com/xirixiz/dsmr-reader-docker).\nCourtesy of [@Xirixiz](https://github.com/xirixiz).\n\n## Docs\n- **Documentation**: [English](https://dsmr-reader.readthedocs.io/en/v5/index.html) / [Nederlands](https://dsmr-reader.readthedocs.io/nl/v5/index.html)\n- **Installation**: [English](https://dsmr-reader.readthedocs.io/en/v5/tutorial/installation/step-by-step.html) / [Nederlands](https://dsmr-reader.readthedocs.io/nl/v5/tutorial/installation/step-by-step.html)\n- **How-to's**: [English](https://dsmr-reader.readthedocs.io/en/v5/how-to/index.html) / [Nederlands](https://dsmr-reader.readthedocs.io/nl/v5/how-to/index.html)\n\n[Check out the documentation](https://dsmr-reader.readthedocs.io/en/v5/explained/about.html) for a tour and screenshots.\n\n## Preview\n\n![Dashboard](https://github.com/dsmrreader/dsmr-reader/blob/v5/docs/_static/screenshots/v5/frontend/dashboard.png)\n\n----\n\n![Live](https://github.com/dsmrreader/dsmr-reader/blob/v5/docs/_static/screenshots/v5/frontend/live.png)\n\n----\n\n![Archive](https://github.com/dsmrreader/dsmr-reader/blob/v5/docs/_static/screenshots/v5/frontend/archive.png)\n\n----\n\n![Compare](https://github.com/dsmrreader/dsmr-reader/blob/v5/docs/_static/screenshots/v5/frontend/compare.png)\n\n----\n\n[Check out the documentation for more screenshots](https://dsmr-reader.readthedocs.io/en/v5/explained/about.html#screenshots).\n""",,"2016/02/07, 13:14:21",2817,CUSTOM,108,3182,"2023/10/19, 19:23:13",12,189,1812,1161,6,0,0.0,0.018620689655172384,"2023/02/01, 21:21:07",v5.10.3,0,29,false,,false,false,,,https://github.com/dsmrreader,https://dsmr-reader.readthedocs.io,Netherlands,,,https://avatars.githubusercontent.com/u/57727360?v=4,,, Multiscale Solar Water Heating,Solar water heating system modeling and simulation for individual and community scale 
projects.,LBNL-ETA,https://github.com/LBNL-ETA/MSWH.git,github,,Buildings and Heating,"2022/05/12, 15:59:52",10,0,2,false,Jupyter Notebook,LBNL Energy Technologies Area,LBNL-ETA,"Jupyter Notebook,JavaScript,Python,HTML,TeX,CSS",,"b'# Multiscale Solar Water Heating\n**Solar water heating system modeling and simulation for individual and community scale projects**\n\n## Repository Content\n\nFolder | Content\n------ | ------\n[mswh](mswh) | Python module to calculate solar irradiation on a tilted surface ([mswh/system/source_and_sink.py](mswh/system/source_and_sink.py)). Python module with simplified component models ([mswh/system/components.py](mswh/system/components.py)) for Converter (solar collectors, electric resistance heater, gas burner, photovoltaic panels, heat pump), Storage (solar thermal tank, heat pump thermal tank, conventional gas tank water heater), and Distribution (distribution and solar pump, piping losses) components. Python module with preconfigured system simulation models ([mswh/system/models.py](mswh/system/models.py)) for: base case gas tank water heaters, solar thermal water heaters (solar collector feeding a storage tank, with a tankless gas water heater backup in new installation cases and a base case gas tank water heater in a retrofit case) and solar electric water heaters (heat pump storage tank with an electric resistance backup). Database with component performance parameters, California specific weather data and domestic hot water end-use load profiles ([mswh/comm/mswh_system_input.db](mswh/comm/mswh_system_input.db)).
Modules to communicate with the database ([mswh/comm/sql.py](mswh/comm/sql.py)), unit conversion and plotting modules in [mswh/tools](mswh/tools).\n[scripts](scripts) | Jupyter notebooks with preconfigured models and any side analysis if applicable. Navigate to scripts in [![Binder](https://mybinder.org/badge_logo.svg)](https://mybinder.org/v2/gh/LBNL-ETA/MSWH/632fd9860c66e3d5b5cafe0af61a6b42f4c6b4f7) to try them out quickly.\n[web](web) | Django web framework to configure project, parametrize components and run simulation from a web browser.\n[docs](docs) | API documentation, including a short methodology documentation can be found [here](https://lbnl-eta.github.io/MSWH/). To build HTML or LaTeX use `make html` or `make latex`. A `pdf` version of the Code Documentation can be viewed and downloaded [here](https://github.com/LBNL-ETA/MSWH/blob/v2.0.0/docs/MSWH.pdf).\n\n## Statement of Need\n\nWe envision four main groups of users for the MSWH software:\n\n* Researchers and policy developers.\n* Solar water heating planners, designers and contractors.\n* Homeowners.\n* Educators.\n\nThe policy developers and researchers could utilize the existing MSWH software by embedding it into some larger analysis framework they construct such that it provides answers to their specific research questions.\n\nThe professional planners, designers, and contractors of solar thermal water heating systems might find it useful to have access to a freely available simulation tool such as the MSWH software, that they can use to evaluate alternative system designs.\n\nHomeowners considering transitioning to a solar water heating system may be interested in doing the math before seeking further professional help, or just for their own education and curiosity about both solar water heating systems and system simulation in general.\n\nEducators may wish and find it useful to utilize the MSWH simulation tool in the classroom when teaching the basics of energy simulation.\n\n## Usage\n\nThe fastest way to explore the preset simulations is to use the [`MSWH System Tool`](scripts/MSWH System Tool.ipynb) notebook. In the notebook the user provides a climate zone for a project, an occupancy for each household and whether any of the occupants stay at home during the day. The notebook can then load a set of example California specific hourly domestic hot water end-use load profiles from a database, size and locate the systems. The user can now simulate the hourly system performance over a period of one representative year, visualize and explore the simulation results using time-series plots for temperature profiles, heat and power rates, or look at annual summaries. Similarly the user can model individual household solar water heating projects and base case conventional gas tank water heater systems, such that the results can be compared between the individual, community and base case systems. All simulation and sizing parameters are exposed in the notebook and the user can easily change them if needed.\n\nIf you opt to use the web framework the shortest path to explore the simulaton results after [setting up a local server](#django-web-framework-deployment) is to:\n\n* Click on `Configurations` on the landing page.\n* Click on `Simulate` for any of the example preconfigured systems (`Solar Thermal New` or `Solar Electric`). 
This leads the user to a visualization page with hourly timeseries results for a representative year.\n* Play with sizes and performance parameters of preconfigured components.\n\nTo configure new system types in the web framework (such as `Solar Thermal Retrofit`) one would need to map it through the backend analogously to the currently preconfigured systems.\n\nAn example demonstrating usage of the simulation models for an additional climate outside\nof California, that is Banja Luka in Bosnia & Herzegovina, is provided in [this notebook](scripts/MSWH System Tool - Additional Climate.ipynb).\n\n## Setup and Installation\n\n1. Make sure that `pip` [is installed](https://pip.pypa.io/en/stable/installing/).\n\n2. Unless you already have [`conda`](https://docs.conda.io/en/latest/) installed, please install the lightweight option [`Miniconda`](https://docs.conda.io/en/latest/miniconda.html) or [`Anaconda`](https://docs.anaconda.com/anaconda/install/) software.\n\n### Simple Installation Using `Conda`\n\n1. If you are familiar with `conda` and experienced with virtual environments\n you can perform the package installation using the following set of commands:\n\n conda create -n mswh -c conda-forge -c plotly python=3.8 pip git-lfs jupyterlab plotly-orca\n conda activate mswh\n git lfs install\n git clone https://github.com/LBNL-ETA/MSWH.git\n cd MSWH\n pip install -e .\n\n To ensure functionality of the example notebooks install the following:\n\n python -m ipykernel install --user --name mswh\n jupyter labextension install jupyterlab-plotly\n\nThe examples are best explored using `JupyterLab`. Please check out the\n[JupyterLab documentation](https://jupyterlab.readthedocs.io/en/latest/)\nfor further help as needed.\n\n### Detailed Installation Steps\n\nIf for any reason a user encounters difficulties with the simple installation\ninstructions, the user is encouraged to consult a [more detailed installation guide that is\nposted with the code documentation](https://lbnl-eta.github.io/MSWH/source/installation.html).\n\n## Django Web Framework Deployment\n\n### 1. Local\n\nIf the installation succeeded, to run the Django application navigate to the `web` folder (there should be a `manage.py` file) and start the development server on your local machine with:\n\n python manage.py runserver\n\n Now you can open your browser and type in `localhost:8000` (or `127.0.0.1:8000` if you are on a Windows machine) to start the web interface.\n\n Note that to build python extensions one needs to have `python3.x-dev` installed.\n\n Make sure that `DEBUG = True` in `settings.py`, this will ensure that the development server is able to serve local static files.\n\n### 2. 
Public\n\n#### Override settings locally\n\nTo deploy publicly, rename the file `local_settings_TEMPLATE.py` to `local_settings.py` and update the constants.\n\n* `SECRET_KEY = \'\'`\n\n The random string should be 50 characters long and can created (on Linux) by using the following command as super user:\n\n For detailed documentation on how to serve static files, see the official Django documentation:\n> * [Managing static files](https://docs.djangoproject.com/en/3.1/howto/static-files/)\n> * [Deploying static files](https://docs.djangoproject.com/en/3.1/howto/static-files/deployment/)\n\nAs the Django devlopment server is not meant for production and only serves static files if `Debug` is set to `True`, the static files used in the Django project need to be served another way.\n\nAt this point, two important aspects regarding how to deploy static files in production will be named:\n\n1. Running this command, will create a folder `static` that will contain a copy of all static files from different directories across the Django project.\n ```\n python manage.py collectstatic\n ```\n > :warning: Run this command every time you update one of the static files in their respective location in the Django project folder.\n\n2. Configure `nginx` to serve static files from the generated `static` folder by adding a `location /static` block to the server block of the `nginx` config file for the domain you serve the Django app with. This is an example `nginx` server block:\n ```\n server {\n listen 80;\n server_name ;\n\n access_log /var/log/nginx/access.log;\n error_log /var/log/nginx/error.log;\n\n location / {\n # For testing, using the django development server:\n # proxy_pass http://127.0.0.1:8000/;\n # For production, using gunicorn:\n proxy_pass http://unix:/run/swhweb.sock;\n }\n\n # Run \'python manage.py collectstatic\' command in Django root project folder, so this folder will be created\n location /static {\n root /MSWH/web;\n try_files $uri $uri/ =404;\n }\n }\n ```\n Replace `` with the actual path to the MSWH repository and `` with your domain.\n\n## Contributing\n\nAll are invited to contribute to the MSWH software through following the [Guidelines for Contributors](contributing.md).\n\n### Automated tests\n\nTo run tests, from the `MSWH` folder use the following command modified according to the test module and method you intend to run:\n\n python -m unittest mswh.{my_module}.tests.{test_my_module}.{MyModuleTests}.{test_my_method}\n\n## Publications\n\nThe code was used for the following publications:\n* Coughlin, Katie, Milica Grahovac, Mohan Ganeshalingam, Robert Hosbach, and Vagelis Vossos. 2020. Costs and Benefits of Community versus Individual End-use Infrastructure for Solar Water Heating. California Energy Commission. CEC-XXX-2020-XXX. (in press)\n\n* Grahovac, Milica, Katie Coughlin, Mohan Ganeshalingam, Robert Hosbach and Vagelis Vossos. 2020. Costs and Benefits of Community Scale Solar Water Heating. 2020 ACEEE Study on Energy Efficiency in Buildings. Pacific Grove, California. [Link to the paper with a video presentation](https://aceee2020.conferencespot.org/event-data/pdf/catalyst_activity_10923/catalyst_activity_paper_20200812133157248_498ce455_3a9c_4278_9088_6e3fdce5745b)\n\n* Milica Grahovac, Katie Coughlin, Robert Hosbach, Hannes Gerhart, (2020). Multiscale Solar Water Heating. Journal of Open Source Software, 5(56), 2695, [![DOI](https://joss.theoj.org/papers/10.21105/joss.02695/status.svg)](https://doi.org/10.21105/joss.02695)\n\n* Gerhart, H. (2019). 
Implementation of a Flexible Web Framework for Simulating Python System Models (p. 82). Technical University of Munich; Technical University of Munich. Research performed at LBNL. [Download link](https://gerhart.xyz/thesis.pdf)\n\n* Web deployed version of the Django app is under construction [on this publicly available private website](https://solar.floweragenda.org/).\n\n## About\n\nThe software may be distributed under the copyright and a BSD license provided in [legal.md](legal.md).\n\nMilica Grahovac, Robert Hosbach, Katie Coughlin, Mohan Ganeshalingam and Hannes Gerhart created the contents of this repo\nin the scope of the CEC ""Costs and Benefits of Community vs. Individual End-Use Infrastructure for Solar Water Heating"" project.\n\nTo cite use format provided at the [DOE CODE](https://www.osti.gov/doecode/biblio/26000) MSWH record.\n\n## Acknowledgements\n\nThis work was supported by the California Energy Commission, Public Interest Energy Research Program, under Contract No. PIR-16-022.\n\nWe thank the reviewers and the editor of [The Journal of Open Source Software (JOSS)](https://joss.theoj.org/), [Bryn Pickering](https://github.com/brynpickering), [Nithiya Streethran](https://github.com/nmstreethran), and [Stefan Pfenninger](https://github.com/sjpfenninger) for their contributions in improving the code, the examples and the code documentation for the code release 2.0.0.\n'",",https://doi.org/10.21105/joss.02695","2019/05/22, 21:13:45",1617,CUSTOM,0,267,"2021/04/04, 21:24:17",2,16,44,0,934,0,0.3,0.10822510822510822,"2020/12/04, 08:17:01",v2.0.0,0,4,false,,false,true,,,https://github.com/LBNL-ETA,http://eta.lbl.gov,"Berkeley, CA",,,https://avatars.githubusercontent.com/u/14825882?v=4,,, eplusr,"Provides a rich toolkit of using whole building energy simulation program EnergyPlus directly in R, which enables programmatic navigation, modification of EnergyPlus, conducts parametric simulations and retrieves outputs.",hongyuanjia,https://github.com/hongyuanjia/eplusr.git,github,"energyplus,r,idf,idd,eplus,r6,parametric-simulation,epw,energyplus-models,simulation,energy-simulation",Buildings and Heating,"2023/08/25, 15:21:56",63,0,8,true,R,,,"R,Shell,Dockerfile",https://hongyuanjia.github.io/eplusr,"b'\n\n\n# eplusr \n\n\n\n[![R build\nstatus](https://github.com/hongyuanjia/eplusr/workflows/R-CMD-check/badge.svg)](https://github.com/hongyuanjia/eplusr/actions)\n[![codecov](https://codecov.io/gh/hongyuanjia/eplusr/branch/master/graph/badge.svg?token=HoBA0Qm6k2)](https://app.codecov.io/gh/hongyuanjia/eplusr)\n[![CRAN_Status_Badge](http://www.r-pkg.org/badges/version/eplusr)](https://cran.r-project.org/package=eplusr)\n[![CRAN\nChecks](https://badges.cranchecks.info/worst/eplusr.svg)](https://badges.cranchecks.info/worst/eplusr.svg)\n[![CRAN Download\nBadge](https://cranlogs.r-pkg.org/badges/eplusr)](https://cran.r-project.org/package=eplusr)\n\n\n> A Toolkit for Using EnergyPlus in R.\n\neplusr provides a rich toolkit of using whole building energy simulation\nprogram [EnergyPlus](https://energyplus.net) directly in R, which\nenables programmatic navigation, modification of EnergyPlus, conducts\nparametric simulations and retrieves outputs. 
More information about\nEnergyPlus can be found at [its website](https://energyplus.net).\n\nA comprehensive introduction to eplusr can be found using\n[`vignette(""eplusr"")`](https://hongyuanjia.github.io/eplusr/articles/eplusr.html).\nThere is also an online slides here ([Interfacing EnergyPlus Using\nR](https://hongyuanjia.github.io/eplusrIntro/)). You can learn more\nabout eplusr at , along with full\npackage documentation.\n\n## How to cite\n\n``` r\ncitation(""eplusr"")\n#> \n#> To cite eplusr in publications use:\n#> \n#> Hongyuan Jia, Adrian Chong (2021). eplusr: A framework for\n#> integrating building energy simulation and data-driven analytics.\n#> Energy and Buildings 237: 110757.\n#> https://doi.org/10.1016/j.enbuild.2021.110757\n#> \n#> A BibTeX entry for LaTeX users is\n#> \n#> @Article{,\n#> title = {eplusr: A framework for integrating building energy simulation and data-driven analytics},\n#> author = {Hongyuan Jia and Adrian Chong},\n#> year = {2020},\n#> journal = {Energy and Buildings},\n#> volume = {237},\n#> url = {https://CRAN.R-project.org/package=eplusr},\n#> doi = {10.1016/j.enbuild.2021.110757},\n#> }\n```\n\n## Installation\n\nYou can install the latest stable release of eplusr from CRAN.\n\n``` r\ninstall.packages(""eplusr"")\n```\n\nAlternatively, you can install the development version from GitHub.\n\n``` r\ninstall.packages(""eplusr"", repos = ""https://hongyuanjia.r-universe.dev"")\n```\n\nSince running the IDF files requires EnergyPlus\n(), EnergyPlus has to be installed if you want\nto run EnergyPlus models in R. There are helper functions in eplusr to\ndownload and install it automatically on major operating systems\n(Windows, macOS and Linux):\n\n``` r\n# install the latest version (currently v23.1.0)\neplusr::install_eplus(""latest"")\n\n# OR download the latest version (currently v23.1.0) and run the installer\n# manually by yourself\neplusr::download_eplus(""latest"", dir = tempdir())\n```\n\nNote that the installation process in `install_eplus()` requires\n**administrative privileges**. 
You have to run R with administrator (or\nwith sudo if you are on macOS or Linux) to make it work if you are not\nin interactive mode.\n\n## Features\n\n- Download, install EnergyPlus in R\n- Read, parse and modify EnergyPlus:\n - Input Data File (IDF)\n - Weather File (EPW)\n - Report Data Dictionary (RDD) & Meter Data Dictionary (MDD)\n - Error File (ERR)\n- Modify multiple versions of IDFs and run corresponding EnergyPlus both\n in the background and in the front\n- Rich-featured interfaces to query and modify IDFs\n- Automatically handle referenced fields and validate input during\n modification\n- Take fully advantage of most common used data structure for data\n science in R \xe2\x80\x93 data.frame\n - Extract model, weather data into data.frames\n - Modify multiple objects via data.frames input\n - Query output via SQL in Tidy format which is much better for data\n analysis and visualization\n- Provide a simple yet extensible prototype of conducting parametric\n simulations and collect all results in one go\n- A pure R-based version updater which is more than\n [20X](https://hongyuanjia.github.io/eplusr/articles/transition.html)\n faster than VersionUpdater distributed with EnergyPlus\n- Fast 3D geometry visualization\n\n**View IDF geometry in 3D** \n\n\n**Turn RStudio into a model editor via autocompletion** \n\n\n**Query and modify weather file** \n\n\n**Query output via SQL in Tidy format which is much better for data\nanalysis** \n\n\n## Resources\n\n### Articles\n\n- Hongyuan Jia, Adrian Chong (2020). eplusr: A framework for integrating\n building energy simulation and data-driven analytics. doi:\n 10.13140/RG.2.2.34326.16966\n - [Source code and data to reproduce figures in the\n article](https://github.com/ideas-lab-nus/eplusr-paper)\n\n### Vignettes\n\nPlease see these vignettes and articles about {eplusr}\n\n- [Introduction to\n eplusr](https://hongyuanjia.github.io/eplusr/articles/eplusr.html)\n- [Run simulation and data\n exploration](https://hongyuanjia.github.io/eplusr/articles/job.html)\n- [Parametric\n simulations](https://hongyuanjia.github.io/eplusr/articles/param.html)\n- [Update IDF\n version](https://hongyuanjia.github.io/eplusr/articles/transition.html)\n- [Work with weather\n files](https://hongyuanjia.github.io/eplusr/articles/epw.html)\n- [Work with `Schedule:Compact`\n objects](https://hongyuanjia.github.io/eplusr/articles/schedule.html)\n- [Work with\n geometries](https://hongyuanjia.github.io/eplusr/articles/geom.html)\n- [Frequently asked\n questions](https://hongyuanjia.github.io/eplusr/articles/faq.html)\n\n### Slides\n\n- [Slides: Interfacing EnergyPlus using\n R](https://hongyuanjia.github.io/eplusrIntro/)\n\n## Additional resources\n\n- eplusr manual: \n- eplusr Docker image: \n- [epwshiftr](https://CRAN.R-project.org/package=epwshiftr) for creating\n future EnergyPlus weather files using CMIP6 data\n- [epluspar](https://github.com/hongyuanjia/epluspar) for conducting\n parametric analysis on EnergyPlus models, including sensitivity\n analysis, Bayesian calibration and optimization.\n\n## Acknowledgement\n\nI would like to thank many open source projects who have heavily\ninspired the development of eplusr package, especially these below:\n\n- [EnergyPlus](https://energyplus.net): A whole building energy\n simulation program.\n- [OpenStudio](https://openstudio.net): A cross-platform collection of\n software tools to support whole building energy modeling using\n EnergyPlus and advanced daylight analysis using Radiance.\n- 
[eppy](https://github.com/santoshphilip/eppy): Scripting language for\n E+, EnergyPlus.\n- [JEplus](http://www.jeplus.org): An EnergyPlus simulation manager for\n parametrics.\n\n## Author\n\nHongyuan Jia and Adrian Chong\n\n## License\n\nThe project is released under the terms of MIT License.\n\nCopyright \xc2\xa9 2016-2023 Hongyuan Jia and Adrian Chong\n\n------------------------------------------------------------------------\n\nPlease note that the \xe2\x80\x98eplusr\xe2\x80\x99 project is released with a [Contributor\nCode of\nConduct](https://github.com/hongyuanjia/eplusr/blob/master/.github/CODE_OF_CONDUCT.md).\nBy contributing to this project, you agree to abide by its terms.\n'",",https://doi.org/10.1016/j.enbuild.2021.110757\n#","2017/04/26, 15:16:34",2373,CUSTOM,30,1862,"2023/08/24, 02:05:23",33,250,540,51,62,1,0.0,0.0008591065292096189,"2023/08/26, 14:22:27",v0.16.2,0,2,false,,true,false,,,,,,,,,,, Brick,"An open-source effort to standardize semantic descriptions of the physical, logical and virtual assets in buildings and the relationships between them.",BrickSchema,https://github.com/BrickSchema/Brick.git,github,,Buildings and Heating,"2023/09/27, 18:58:57",266,0,48,true,Python,Brick Schema,BrickSchema,"Python,Makefile",http://brickschema.org/,"b""# Brick\n\n[![Build Status](https://github.com/BrickSchema/Brick/workflows/Build/badge.svg)](https://github.com/BrickSchema/Brick/actions)\n[![Python 3.6](https://img.shields.io/badge/python-3.6+-blue.svg)](https://www.python.org/downloads/release/python-360/)\n\nBrick is an open-source, BSD-licensed development effort to create a uniform schema for representing metadata in buildings. Brick has three components:\n\n* An RDF class hierarchy describing the various building subsystems and the entities and equipment therein\n* A minimal, principled set of relationships for connecting these entities together into a directed graph representing a building\n* A method of encapsulation for composing complex components from a set of lower-level ones\n\nThe official Brick website, [http://brickschema.org/](http://brickschema.org/), contains documentation and other information about the Brick schema.\n\nThis repository tracks the main schema development of Brick.\n\n\n## Discussion\n\nDiscussion takes place primarily on the Brick User Forum: [https://groups.google.com/forum/#!forum/brickschema](https://groups.google.com/forum/#!forum/brickschema)\n\n## Questions and Issues\n\nIf you have an issue with Brick's coverage, utility or usability, or any other Brick-related question:\n\n1. First check the [Brick user forum](https://groups.google.com/forum/#!forum/brickschema) and the [Brick issue tracker](https://github.com/BuildSysUniformMetadata/Brick/issues)\n to check if anyone has asked your question already.\n2. If you find a previously submitted issue that closely mirrors your own, feel free to jump in on the conversation. 
Otherwise, please file a new issue or submit a new thread on the forum.\n\n## Examples\n\nThe `examples/` directory contains executable code samples with extensive documentation that introduce Brick concepts and idioms.\n\n- `example1`: getting familiar with RDFlib, namespaces, Brick models and when and when not to import the Brick ontology definition\n- `simple_apartment`: uses Python to programmatically build a Brick model of a small apartment\n- `g36`: contains Brick implementations of several figures from ASHRAE Guideline 36\n\n## Versioning\n\nBrick uses a semantic versioning scheme for its version numbers: `major.minor.patch`. The [releases page](https://github.com/BrickSchema/Brick/releases) contains links to each published Brick release for easy download.\n\nWe target a minor version release (e.g. `1.1`, `1.2`, `1.3`) roughly every 6 months. Minor releases will contain largely backwards-compatible extensions and new features to the ontology. Due to the significance of these changes, minor releases will be developed in their own branch; PRs for those releases will be merged into the minor version branch, and then ultimately merged into the main branch when the minor release is published.\n\nPatch releases (e.g. `1.2.1`, `1.2.2`) contain smaller, incremental, backwards-compatible changes to the ontology. Commits and PRs for the next patch release will be merged directly into `master`. Every evening, a `nightly` build is produced containing the latest commits. **There may be bugs or errors in the nightly release**, however these bugs will be removed by the time a patch release is published.\n\n## How To Contribute\n\nSee [CONTRIBUTING.md](https://github.com/BrickSchema/Brick/blob/master/CONTRIBUTING.md)\n\n## Tests\n\nTests go in the `tests/` directory and should be implemented using [pytest](https://pytest.readthedocs.io/en/latest/getting-started.html#getstarted).\n[`tests/test_inference.py`](https://github.com/BrickSchema/Brick/blob/master/tests/test_inference.py) is a good example.\n\nRun tests by executing `pytest` or `make test` in the top-level directory of this repository.\n* Before running `pytest` the Brick.ttl file needs to be created using either `make` or `python generate_brick.py`.\n\n## Python Framework\n\nRather than getting lost in the Sisyphean bikeshedding of how to format everything as YAML, we're\njust using Python dictionaries so we don't have to worry about any (well, not that much) parsing logic.\n\nFor now, the code is the documentation. Look at `bricksrc/equipment.py`, `bricksrc/point.py`, etc. for examples and how to add to each of the class hierarchies.\n\n## Other Tools\n\n### Version Comparison\n\nWe can track the different classes between versions. The below scripts produces comparison files.\n- `python tools/compare_versions/compare_versions.py --oldbrick 1.0.3 https://brickschema.org/schema/1.0.3/Brick.ttl --newbrick 1.1.0 ./Brick.ttl`\n\nIt will produce three files inside `history/{old_version}-{new_version}`.\n- `added_classes.txt`: A list of new classes introduced in the current version compared to the previous version.\n- `removed_classes.txt`: A list of old classes removed in the current version compared to the previous version.\n- `possible_mapping.json`: A map of candidate classes that can replace removed classes. 
Keys are removed classes and the values are candidate correspondants in the new vesion.\n""",,"2016/08/31, 16:11:29",2611,CUSTOM,79,834,"2023/09/20, 22:23:40",82,344,497,108,34,18,0.7,0.543421052631579,"2023/10/25, 02:24:23",nightly,0,26,false,,false,true,,,https://github.com/BrickSchema,https://brickschema.org/,,,,https://avatars.githubusercontent.com/u/41708328?v=4,,, BETTER,Building Efficiency Targeting Tool for Energy Retrofits.,LBNL-JCI-ICF,https://github.com/LBNL-JCI-ICF/better.git,github,,Buildings and Heating,"2021/08/25, 17:30:28",35,0,6,true,Python,,LBNL-JCI-ICF,"Python,Batchfile",,"b""# Building Efficiency Targeting Tool for Energy Retrofits (BETTER)\n\n## Quick Start \nDownload the [latest release](https://github.com/LBNL-JCI-ICF/better/releases/), see [Installation](#installation) for how to install the tool or download [Introduction to BETTER Presentation](https://github.com/LBNL-JCI-ICF/better/releases/download/v0.4-alpha/BETTER.Training.Slides.pptx) to start.\n\n## Background\nThe lack of public-access, data-driven tools requiring minimal inputs and short run time to benchmark against peers, quantify energy/cost savings, and recommend energy efficiency (EE) improvements is one of the main barriers to capturing untapped EE opportunities in the United States and globally. To fill the gap, and simultaneously address the need for automated, cost-effective, and standardized EE assessment of large volumes of buildings in U.S. state and municipal benchmarking and transparency programs, an automated, open-source, virtual building EE retrofit targeting tool has been developed with support from the U.S. Department of Energy (DOE) Office of Energy Efficiency and Renewable Energy (EERE) Building Technologies Office (BTO) by Lawrence Berkeley National Laboratory (LBNL) and Johnson Controls, with assistance from ICF. \n\nBETTER requires very simple building data inputs; minimum manual work; and provides fast, \xe2\x80\x9cno-cost/no-touch\xe2\x80\x9d building EE upgrade targeting (equipment and operations) with an acceptable accuracy. It implements the ASHRAE Inverse Modeling Toolkit (IMT) to find piece-wise linear regression models between building energy consumption (electricity and fossil fuel) and outdoor air temperature. The model coefficients of each individual building are then benchmarked against the coefficients of buildings in the same space type category. Johnson Controls\xe2\x80\x99 LEAN Energy Analysis is used to identify the EE measures for the building. Finally, the potential energy, cost, and greenhouse gas (GHG) emissions reductions are estimated with the EE measures.\n\nBETTER is made possible by support from the U.S. DOE EERE BTO.\n\nBETTER is being developed under Cooperative Research and Development Agreement (CRADA) No. FP00007338 between the Regents of the University of California Ernest Orlando Lawrence Berkeley National Laboratory, under its U.S. DOE Contract No. DE-AC02-05CH11231, and Johnson Controls, with assistance from ICF.\n\n\n## Getting Started\n\n### Software Prerequisites\nBETTER is developed using Python 3.6. We recommend using Anaconda to manage Python environments. If you'd rather not install Anaconda, you can download Python 3.6 [here](https://www.python.org/downloads/).\n\n### Data Requirements\n\nThe BETTER source code posted here can provide analysis on a building-by-building and portfolio basis as long as the following data points for at least 30 buildings of an identical type are provided:\n1. Building Location (City, State)\n2. 
Gross Floor Area in m2 (exclude parking)\n3. Building Primary/Secondary Space Use Type\n4. Monthly Utility Bill Consumption and Cost Data (by fuel type)\n - Electricity and fossil fuel (if any) consumption and cost\n - Minimum of 1 year is required (2-5 years of data is desirable)\n - For each consumption point, start and end dates (\xe2\x80\x9cbill dates\xe2\x80\x9d) are required\n - Consumption units are required (e.g., kWh, therms, etc.)\n - Cost units are required (e.g., US Dollars)
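As a toy illustration of the IMT-style change-point regression that BETTER builds on (made-up data; this is not BETTER's own implementation), a three-parameter cooling model can be fitted with scipy:

```python
import numpy as np
from scipy.optimize import curve_fit

def three_param_cooling(t_out, base, slope, t_cp):
    # Flat baseload below the change-point temperature t_cp,
    # linear cooling response above it.
    return base + slope * np.maximum(t_out - t_cp, 0.0)

# Toy monthly pairs: mean outdoor temperature [degC] vs. energy use intensity.
t_out = np.array([2, 5, 9, 14, 18, 22, 26, 25, 20, 14, 8, 4], dtype=float)
eui = np.array([50, 51, 50, 52, 58, 70, 82, 79, 64, 53, 50, 51], dtype=float)

(base, slope, t_cp), _ = curve_fit(three_param_cooling, t_out, eui, p0=[50.0, 3.0, 15.0])
print(f"baseload={base:.1f}, cooling slope={slope:.2f}, change point={t_cp:.1f} degC")
```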
\n\n#### Input Data Format\nPlease note that the BETTER source code was initially developed for a global market and therefore the current version uses metric/SI units (such as square meters instead of square feet for building area) for some inputs and outputs.\n\nSample data for 10 buildings are included in `./data/portfolio.xlsx`. Metadata for each building to be analyzed should be entered in the \xe2\x80\x9cMetadata\xe2\x80\x9d tab, one row per building. Utility data for all fuel types should be entered on the \xe2\x80\x9cUtility\xe2\x80\x9d tab. Be sure to double-check that the building ID, fuel type, and units are accurate for each utility bill entry, and be sure to save the file as `portfolio.xlsx`. Overwrite the file to suit your needs.\n\n#### Benchmark Statistics\nA sample benchmark statistic is provided in `./better/constants.py`. The team is working to create a database of U.S. buildings to allow the benchmarking and analysis of individual buildings. If you have a portfolio of at least 30 buildings, you may choose to benchmark individual buildings against your own data set. For smaller portfolios, your benchmark will be based on buildings in the demo. See \xe2\x80\x9c[How to Use](#how-to-use)\xe2\x80\x9d for information on how to select your benchmark data set.\n\n#### Weather Data\nWeather data is downloaded from the [NOAA website](https://governmentshutdown.noaa.gov/?page=gsod.html) for the building location. To reuse previously downloaded weather data on later runs, set `cached_weather` to `True` in `run.py`.\n\n### Installation\n1. Download and install [Python >=3.6](https://www.python.org/downloads/)\n2. Download the source code from the [latest release](https://github.com/LBNL-JCI-ICF/better/releases/)\n3. Extract and navigate to the downloaded release\n4. Install dependencies by clicking on `install.bat` or running `python setup.py install` from your cmd\n\n*Note: The current release is an alpha version. The tool will be packaged and setup files will be provided in future releases.*\n\n## How to Use\nThe focus of the development is the building energy benchmarking and EE targeting analytical core, not the user interface. The demo below illustrates the data input/output and the use of the tool.\n\n### Demo\n1. From your cmd or terminal, change your working directory to `./better`\n2. Run `python demo.py`. It will run the sample of 10 buildings provided in `./data/portfolio.xlsx`\n3. 
Output is stored in `./outputs`.\n\nOnce you have run the demo and familiarized yourself with the tool, you can use your own building data and follow the steps below to run analyses on either a single building or on a portfolio of buildings.\n\n### Run Single Building\n1.\tChange building information and utility data in `./data/portfolio.xlsx` and save the file.\n2.\tOpen the `./better/run.py` file using a text editor and ensure that line **11** (`run_single(...)`) is uncommented, and line **13** (`run_batch(...)`) is commented out (i.e., has a \xe2\x80\x9c#\xe2\x80\x9d at the beginning of the line).\n3.\tSet the target building ID based on the ID in `portfolio.xlsx` (e.g., `bldg_id = 1` \xe2\x80\x93 change the **1** to match the ID of the building you wish to analyze).\n4.\tSet the saving target level (1 = conservative, 2 = nominal, 3 = aggressive).\n5.\tRun the analysis by running `python run.py` from your cmd or terminal.\n\n### Run Portfolio\n1.\tChange building information and utility data in `./data/portfolio.xlsx` and save the file.\n2.\tOpen the `./better/run.py` file using a text editor and ensure that line 11 (\xe2\x80\x9crun_single\xe2\x80\x9d) is commented out (i.e., has a \xe2\x80\x9c#\xe2\x80\x9d at the beginning of the line), and line 13 (\xe2\x80\x9crun_batch\xe2\x80\x9d) is uncommented.\n3.\tSet the start and end building IDs based on the IDs in `portfolio.xlsx` (e.g., `start_id=1` and `end_id=20` \xe2\x80\x93 change the **1** and **20** to match the first and last IDs of the buildings you wish to analyze).\n4.\tSet the saving target level (1 = conservative, 2 = nominal, 3 = aggressive).\n5. Run the analysis by running `python run.py` from your cmd or terminal.\n\n\n## Interpreting Results\nThe analysis results are in the `./outputs` folder. Comprehensive reports are provided in .html format for each individual building, and results are explained within those html files. For portfolio analyses, a separate Portfolio html output is also provided.\n\n## Copyright\n\nBuilding Efficiency Targeting Tool for Energy Retrofits (BETTER) Copyright (c) 2018, The Regents of the University of California, through Lawrence Berkeley National Laboratory (subject to receipt of any required approvals from the U.S. Dept. of Energy). All rights reserved.\n\nIf you have questions about your rights to use or distribute this software, please contact Berkeley Lab's Intellectual Property Office at IPO@lbl.gov.\n\nNOTICE. This Software was developed under funding from the U.S. Department of Energy and the U.S. Government consequently retains certain rights. As such, the U.S. 
Government has been granted for itself and others acting on its behalf a paid-up, nonexclusive, irrevocable, worldwide license in the Software to reproduce, distribute copies to the public, prepare derivative works, and perform publicly and display publicly, and to permit others to do so.\n""",,"2019/01/04, 17:14:05",1755,CUSTOM,0,72,"2023/03/11, 16:07:02",11,3,7,1,228,1,0.0,0.4776119402985075,"2019/01/19, 00:23:16",v0.4-alpha,0,2,false,,false,false,,,https://github.com/LBNL-JCI-ICF,,,,,https://avatars.githubusercontent.com/u/43547106?v=4,,, NILM,Non-Intrusive Load Monitoring is the process of estimating the energy consumed by individual appliances given just a whole-house power meter reading.,nilmtk,https://github.com/nilmtk/nilmtk.git,github,"disaggregation,python,nilm,energy,forecasting,algorithms,ipython-notebook,energy-disaggregation,nilmtk,nilm-algorithms",Buildings and Heating,"2023/06/07, 06:59:36",756,0,79,true,Python,,nilmtk,"Python,Jupyter Notebook,Shell,Batchfile",http://nilmtk.github.io,"b'[![Build Status](https://travis-ci.org/nilmtk/nilmtk.svg?branch=master)](https://travis-ci.org/nilmtk/nilmtk) [![Install with conda](https://anaconda.org/nilmtk/nilmtk/badges/installer/conda.svg)](https://anaconda.org/nilmtk/nilmtk) [![conda package version](https://anaconda.org/nilmtk/nilmtk/badges/version.svg)](https://anaconda.org/nilmtk/nilmtk)\n\n# NILMTK: Non-Intrusive Load Monitoring Toolkit\n\nNon-Intrusive Load Monitoring (NILM) is the process of estimating the\nenergy consumed by individual appliances given just a whole-house\npower meter reading. In other words, it produces an (estimated)\nitemised energy bill from just a single, whole-house power meter.\n\nNILMTK is a toolkit designed to help **researchers** evaluate the accuracy of NILM algorithms. If you are a new Python user, it is recommended to educate yourself on [Pandas](https://pandas.pydata.org/), [Pytables](http://www.pytables.org/) and other tools from the Python ecosystem.\n\n**\xe2\x9a\xa0\xef\xb8\x8fIt may take time for the NILMTK authors to get back to you regarding queries/issues. However, you are more than welcome to propose changes and offer support!** Remember to check existing issue tickets, especially the open ones.\n\n# Documentation\n\n[NILMTK Documentation](https://github.com/nilmtk/nilmtk/tree/master/docs/manual)\n\nIf you are a new user, read the [install instructions here](https://github.com/nilmtk/nilmtk/blob/master/docs/manual/user_guide/install_user.md). It came to our attention that some users follow third-party tutorials to install NILMTK. Always remember to check the dates of such tutorials; many are very outdated and don\'t reflect NILMTK\'s current version or the recommended/supported setup.\n\n# Why a toolkit for NILM?\n\nWe quote our [NILMTK paper](http://arxiv.org/pdf/1404.3878v1.pdf)\nexplaining the need for a NILM toolkit:\n\n > Empirically comparing disaggregation algorithms is currently\n > virtually impossible. This is due to the different data sets used,\n > the lack of reference implementations of these algorithms and the\n > variety of accuracy metrics employed.\n\n\n# What NILMTK provides\n\nTo address this challenge, we present the Non-intrusive Load Monitoring\nToolkit (NILMTK); an open source toolkit designed specifically to enable\nthe comparison of energy disaggregation algorithms in a reproducible\nmanner. 
This work is the first research to compare multiple\ndisaggregation approaches across multiple publicly available data sets.\nNILMTK includes:\n\n- parsers for a range of existing data sets (8 and counting)\n- a collection of preprocessing algorithms\n- a set of statistics for describing data sets\n- a number of [reference benchmark disaggregation algorithms](https://github.com/nilmtk/nilmtk/wiki/NILM-Algorithms)\n- a common set of accuracy metrics\n- and much more!\n\n# Publications\n\nIf you use NILMTK in academic work then please consider citing our papers. Here are some of the publications (contributors, please update this as required):\n\n1. Nipun Batra, Jack Kelly, Oliver Parson, Haimonti Dutta, William Knottenbelt, Alex Rogers, Amarjeet Singh, Mani Srivastava. NILMTK: An Open Source Toolkit for Non-intrusive Load Monitoring. In: 5th International Conference on Future Energy Systems (ACM e-Energy), Cambridge, UK. 2014. DOI:[10.1145/2602044.2602051](http://dx.doi.org/10.1145/2602044.2602051). arXiv:[1404.3878](http://arxiv.org/abs/1404.3878).\n2. Nipun Batra, Jack Kelly, Oliver Parson, Haimonti Dutta, William Knottenbelt, Alex Rogers, Amarjeet Singh, Mani Srivastava. NILMTK: An Open Source Toolkit for Non-intrusive Load Monitoring. In: NILM Workshop, Austin, US. 2014 \\[[pdf](http://nilmworkshop14.files.wordpress.com/2014/05/batra_nilmtk.pdf)\\]\n3. Jack Kelly, Nipun Batra, Oliver Parson, Haimonti Dutta, William Knottenbelt, Alex Rogers, Amarjeet Singh, Mani Srivastava. Demo Abstract: NILMTK v0.2: A Non-intrusive Load Monitoring Toolkit for Large Scale Data Sets. In the first ACM Workshop On Embedded Systems For Energy-Efficient Buildings, 2014. DOI:[10.1145/2674061.2675024](http://dx.doi.org/10.1145/2674061.2675024). arXiv:[1409.5908](http://arxiv.org/abs/1409.5908).\n4. Nipun Batra, Rithwik Kukunuri, Ayush Pandey, Raktim Malakar, Rajat Kumar, Odysseas Krystalakos, Mingjun Zhong, Paulo Meira, and Oliver Parson. 2019. Towards reproducible state-of-the-art energy disaggregation. In Proceedings of the 6th ACM International Conference on Systems for Energy-Efficient Buildings, Cities, and Transportation (BuildSys \'19). Association for Computing Machinery, New York, NY, USA, 193\xe2\x80\x93202. DOI:[10.1145/3360322.3360844](https://doi.org/10.1145/3360322.3360844)\n\nPlease note that NILMTK has evolved *a lot* since most of these papers were published! Please use the [online docs](https://github.com/nilmtk/nilmtk/tree/master/docs/manual)\nas a guide to the current API. \n\n# Brief history\n\n* August 2019: v0.4 released with the new API. 
See also [NILMTK-Contrib](https://github.com/nilmtk/nilmtk-contrib).\n* June 2019: v0.3.1 released on [Anaconda Cloud](https://anaconda.org/nilmtk/nilmtk/).\n* Jan 2018: Initial Python 3 support on the v0.3 branch\n* Nov 2014: NILMTK wins best demo award at [ACM BuildSys](http://www.buildsys.org/2014/)\n* July 2014: v0.2 released\n* June 2014: NILMTK presented at [ACM e-Energy](http://conferences.sigcomm.org/eenergy/2014/)\n* April 2014: v0.1 released\n\nFor more detail, please see our [changelog](https://github.com/nilmtk/nilmtk/blob/master/docs/manual/development_guide/changelog.md).\n'",",http://arxiv.org/pdf/1404.3878v1.pdf,http://arxiv.org/abs/1404.3878,http://arxiv.org/abs/1409.5908,https://doi.org/10.1145/3360322.3360844","2013/12/03, 11:39:12",3613,Apache-2.0,2,1872,"2023/06/07, 07:15:15",113,106,858,7,140,7,0.3,0.44918444165621074,"2020/08/27, 01:27:47",0.4.2,0,31,false,,false,false,,,https://github.com/nilmtk,,,,,https://avatars.githubusercontent.com/u/6094836?v=4,,, volkszaehler.org,A free smart meter implementation with focus on data privacy.,volkszaehler,https://github.com/volkszaehler/volkszaehler.org.git,github,"smarthome,smartmeter,monitoring,volkszaehler,privacy,logging,php",Buildings and Heating,"2023/10/23, 21:52:19",189,1,18,true,PHP,volkszaehler.org project,volkszaehler,"PHP,JavaScript,CSS,Python,HTML,Shell,Dockerfile",https://volkszaehler.org,"b'# volkszaehler.org\n\n[![Build](https://github.com/volkszaehler/volkszaehler.org/actions/workflows/build.yml/badge.svg)](https://github.com/volkszaehler/volkszaehler.org/actions/workflows/build.yml)\n\nvolkszaehler.org is a free smart meter implementation with focus on data privacy.\n\n\n## Demo\n\n[demo.volkszaehler.org](https://demo.volkszaehler.org)\n\n![Screenshot](misc/docs/screenshot.png?raw=true)\n\n\n## Quickstart\n\nThe easiest way to try out volkszaehler is using Docker:\n\n docker-compose up -d\n\nwhich will create a database, initialize it and start volkszaehler at port 8080.\n\n## Installation\n\nFor local installation, run the install script from the shell:\n\n wget https://raw.github.com/volkszaehler/volkszaehler.org/master/bin/install.sh\n bash install.sh\n\nOr follow the detailed installation instructions at http://wiki.volkszaehler.org/software/middleware/installation\n\n\n## Documentation\n\n* Website: [volkszaehler.org](http://volkszaehler.org)\n* Wiki: [wiki.volkszaehler.org](http://wiki.volkszaehler.org)\n\n\n## Support\n\n* Users mailing list: https://demo.volkszaehler.org/mailman/listinfo/volkszaehler-users\n* Developers mailing list: https://demo.volkszaehler.org/mailman/listinfo/volkszaehler-dev\n\n\n\n## Repository structure\n\n volkszaehler.org/\n |_ etc/ configuration files\n |_ bin/ scripts for imports, installation etc.\n |_ htdocs/ web UI\n | \\_ middleware.php middleware\n |\n |_ lib/ middleware libraries\n |_ test/ unit tests\n \\_ misc/\n |_ docs/ documentation\n |_ graphics/ graphics for docs, etc.\n \\_ sql/ database schema dumps\n \\_ demo/ demo data\n\n\n## Copyright\n\nCopyright \xc2\xa9 2011-2020 volkszaehler.org\nLicensed under the GNU General Public License Version 3 (https://opensource.org/licenses/GPL-3.0).\n'",,"2010/07/22, 09:35:12",4843,GPL-3.0,22,1909,"2023/10/23, 21:50:54",30,667,921,29,2,2,0.0,0.56998556998557,"2019/03/11, 10:34:58",1.0,0,49,false,,false,false,illacceptanything/illacceptanything,,https://github.com/volkszaehler,https://volkszaehler.org,Germany,,,https://avatars.githubusercontent.com/u/340617?v=4,,, ModBus Measurement Daemon,A daemon for collecting measurement 
data from smart meters and grid inverters over modbus.,volkszaehler,https://github.com/volkszaehler/mbmd.git,github,"modbus,smart-meter,openhab,golang,volkszaehler,modbus-meters,grid-inverters,sunspec",Buildings and Heating,"2023/10/01, 15:51:17",196,19,62,true,Go,volkszaehler.org project,volkszaehler,"Go,HTML,JavaScript,Dockerfile,Roff,Makefile,CSS",,"b'# ModBus Measurement Daemon\n\n[![Build Status](https://travis-ci.org/volkszaehler/mbmd.svg?branch=master)](https://travis-ci.org/volkszaehler/mbmd)\n\nA daemon for collecting measurement data from smart meters and grid inverters over modbus.\n\n`mbmd` provides an http interface to smart meters and grid inverters with modbus interface.\nMeter readings are made accessible through REST API and MQTT.\nModbus communication is possible over RS485 connections as well as TCP sockets.\n\n`mbmd` was originally developed by Mathias Dalheimer under the name of `gosdm`. Previous releases are still [available](https://github.com/gonium/gosdm630).\n\n# Table of Contents\n\n* [Requirements](#requirements)\n* [Installation](#installation)\n * [Raspberry Pi](#raspberry-pi)\n * [Detecting connected meters](#detecting-connected-meters)\n* [API](#api)\n * [Rest API](#rest-api)\n * [Websocket API](#websocket-api)\n * [MQTT API](#mqtt-api)\n* [Supported Devices](#supported-devices)\n* [Releases](#releases)\n\n\n## Requirements\n\nYou\'ll need:\n* A supported Modbus/RTU smart meter OR a supported Modbus/TCP SunSpec-compatible grid inverter.\n* In case of Modbus/RTU: A USB RS485 adapter. See [USB-ISO-RS485 project](https://github.com/gonium/usb-iso-rs485) for a home-grown adapter.\n* Optionally an RS485 to Ethernet converter (see [SO discussion](https://stackoverflow.com/questions/59459877/is-rtu-over-tcp-a-spec-conforming-modbus-application))\n\n\n## Installation\n\n### Using the precompiled binaries\n\nPrecompiled release packages are [available](https://github.com/volkszaehler/mbmd/releases). Download the right package for the target platform and unzip.\n\n### Building from source\n\n`mbmd` is developed in [Go](http://golang.org) and requires Go ^1.16. To build from source, two steps are needed:\n\n- use `make install` to install the build tools (make sure `$GOPATH/bin` is part of the path to make the installed tools accessible for the next step)\n- then run `make build` which creates the `./mbmd` binary\n\nTo cross-build for a different architecture (e.g. Raspberry Pi), use\n\n GOOS=linux GOARCH=arm GOARM=5 make build\n\n### Running\n\nTo get help on the various command line options run\n\n\tmbmd -h\n\nThe full documentation is available in the [docs](docs/mbmd.md) folder.\nA typical invocation looks like this:\n\n $ ./bin/mbmd run -a /dev/ttyUSB0 -d janitza:26,sdm:1\n 2017/01/25 16:34:26 config: creating RTU connection via /dev/ttyUSB0 (9600baud, 8N1)\n 2017/01/25 16:34:26 httpd: starting api at :8080\n\nThis call queries a Janitza B23 meter with ID 26 and an Eastron SDM\nmeter at ID 1. 
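Once the daemon is running, the readings it exposes can also be consumed programmatically through the REST API described in the API section below. A minimal sketch, assuming mbmd is listening on its default port 8080 and Python with the `requests` package is available:

```python
# Minimal sketch: poll the mbmd REST API for the latest meter readings.
# Assumes mbmd is running locally with its HTTP API on port 8080 (adjust
# BASE_URL to your setup); /api/last without a device ID returns data for
# all connected devices, as noted in the API section below.
import time
import requests

BASE_URL = "http://localhost:8080/api"

def latest_readings() -> dict:
    """Fetch the most recent readings for all connected devices."""
    response = requests.get(f"{BASE_URL}/last", timeout=5)
    response.raise_for_status()
    return response.json()

if __name__ == "__main__":
    for _ in range(3):
        print(latest_readings())
        time.sleep(10)  # /api/avg/{ID} offers one-minute averages instead
```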
Note that not all devices are configured to use ID 1 by default.\nThe default device IDs depend on the meter type and are documented in the meter\'s manual.\n\nTo use RTU devices with RS485/Ethernet adapters, add the `--rtu` switch to configure `mbmd` to use the TCP connection with RTU data format:\n\n\t\xe2\x9d\xaf ./bin/mbmd run -a rs485.fritz.box:23 --rtu -d sdm:1\n\t2020/01/02 10:43:53 mbmd unknown version (unknown commit)\n\t2020/01/02 10:43:53 config: creating RTU over TCP connection for rs485.fritz.box:23\n\t2020/01/02 10:43:53 initialized device SDM1.1: {SDM Eastron SDM meters }\n\t2020/01/02 10:43:53 httpd: starting api at :8080\n\nIf you use the ``-v`` commandline switch you can see\nmodbus traffic and the current readings on the command line. At\n[http://localhost:8080](http://localhost:8080) you can see an embedded\nweb page that updates itself with the latest values:\n\n![realtime view of incoming measurements](img/realtimeview.png)\n\n\n### Run using Docker\n\nAlternatively run `mbmd` using the Docker image:\n\n\tdocker run -p 8080:8080 --device=/dev/ttyUSB0 volkszaehler/mbmd run -a /dev/ttyUSB0 -u 0.0.0.0:8080 -d sdm:1\n\nTo mount the config file into the docker container use `-v $(pwd)/mbmd.yaml:/etc/mbmd.yaml`.\n\n## Raspberry Pi\n\nDownload the ARM package for usage with Raspberry Pi and copy the binary\ninto `/usr/local/bin`. The following systemd unit can be used to\nstart `mbmd` as a service (put this into a new file ``/etc/systemd/system/mbmd.service``):\n\n [Unit]\n Description=mbmd\n After=syslog.target\n After=network-online.target\n [Service]\n ExecStart=/usr/local/bin/mbmd run -a /dev/ttyAMA0\n Restart=always\n [Install]\n WantedBy=multi-user.target\n\nYou might need to adjust the ``-a`` parameter depending on where your\nRS485 adapter is connected. Then, use\n\n systemctl start mbmd\n\nto test your installation. If you\'re satisfied, use\n\n systemctl enable mbmd\n\nto start the service at boot time automatically.\n\n*WARNING:* When using an FTDI-based USB-RS485 adaptor the\nRaspberry Pi might become unreachable after a while. This is most likely not\nan issue with the RS485-USB adaptor or this software, but because of [a\nbug in the Raspberry Pi kernel](https://github.com/raspberrypi/linux/issues/1187).\nTo fix this, switch the internal `dwc` USB hub of the Raspberry Pi to\nUSB1.1 by adding the following parameter to `/boot/cmdline.txt`:\n\n dwc_otg.speed=1\n\n\n## Detecting connected meters\n\nMODBUS/RTU does not provide a mechanism to discover devices. There is no\nreliable way to detect all attached devices.\nAs a workaround, `mbmd scan` attempts to read the L1 voltage from all\ndevice IDs and reports which ones replied correctly (i.e. 110/230V +/-10%):\n\n````\n./mbmd scan -a /dev/ttyUSB0\n2017/06/21 10:22:34 Starting bus scan\n2017/06/21 10:22:35 Device 1: n/a\n...\n2017/07/27 16:16:39 Device 21: SDM type device found, L1 voltage: 234.86\n2017/07/27 16:16:40 Device 22: n/a\n2017/07/27 16:16:40 Device 23: n/a\n2017/07/27 16:16:40 Device 24: n/a\n2017/07/27 16:16:40 Device 25: n/a\n2017/07/27 16:16:40 Device 26: Janitza type device found, L1 voltage: 235.10\n...\n2017/07/27 16:17:25 Device 247: n/a\n2017/07/27 16:17:25 Found 2 active devices:\n2017/07/27 16:17:25 * slave address 21: type SDM\n2017/07/27 16:17:25 * slave address 26: type JANITZA\n2017/07/27 16:17:25 WARNING: This lists only the devices that responded to a known L1 voltage request. 
Devices with different function code definitions might not be detected.\n````\n\n\n# API\n\n## Rest API\n\n`mbmd` provides a convenient REST API. Supported endpoints under `/api` are:\n\n* `/api/last/{ID}` latest data for device\n* `/api/avg/{ID}` averaged data over last minute\n* `/api/status` daemon status\n\nBoth device APIs can also be called without the device id to return data for all connected devices.\n\n\n### Monitoring\n\nThe `/api/status` endpoint provides the following information:\n\n $ curl http://localhost:8080/api/status\n {\n ""StartTime"": ""2017-01-25T16:35:50.839829945+01:00"",\n ""UpTime"": 65587.177092186,\n ""Goroutines"": 11,\n ""Memory"": {\n ""Alloc"": 1568344,\n ""HeapAlloc"": 1568344\n },\n ""Modbus"": {\n ""TotalModbusRequests"": 1979122,\n ""ModbusRequestRatePerMinute"": 1810.5264666764785,\n ""TotalModbusErrors"": 738,\n ""ModbusErrorRatePerMinute"": 0.6751319688261972\n },\n ""ConfiguredMeters"": [\n {\n ""Id"": 26,\n ""Type"": ""JANITZA"",\n ""Status"": ""available""\n }\n ]\n }\n\nThis is a snapshot of a process running overnight, along with the error\nstatistics during that timeframe. The process queries continuously;\nthe cabling is not a shielded, twisted wire but something that I had lying\naround. With proper cabling the error rate should be lower, though.\n\n\n## Websocket API\n\nData read from the meters can be observed by clients in realtime using the Websocket API.\nAs soon as new readings are available, they are pushed to connected websocket clients.\n\nThe websocket API is available on `/ws`. All connected clients receive status and\nmeter updates for all connected meters without further subscription.\n\n\n## MQTT API\n\nAnother option for receiving client updates is by using the built-in MQTT publisher.\nBy default, readings are published under the `mbmd` topic prefix. Rate limiting is possible.\n\n\n## Homie API\n\n[Homie](https://homieiot.github.io) is an MQTT convention for IoT/M2M. `mbmd` publishes all devices and readings using the Homie protocol. This allows systems like OpenHAB to auto-discover devices operated by `mbmd`:\n\n![auto-discovery of things in OpenHAB](img/openhab.png)\n\n## InfluxDB support\n\nThere is also the option to directly insert the data into an InfluxDB database by using the available command-line options. InfluxDB 1.8 and 2.0 are currently supported. To enable this, add the `--influx-database` and `--influx-url` command-line parameters. More advanced configuration is available; to learn more, check out the [mbmd_run.md](docs/mbmd_run.md) documentation.\n\n# Supported Devices\n\n`mbmd` supports a range of DIN rail meters and grid inverters.\n\n## Modbus RTU Meters\n\nThe meters have slightly different capabilities. The EASTRON SDM630 offers\na lot of features, while the smaller devices only support basic\nfeatures. 
The table below gives an overview (please consult the\nmanuals for definitive guidance):\n\n| Meter | Phases | Voltage | Current | Power | Power Factor | Total Import | Total Export | Per-phase Import/Export | Line/Neutral THD |\n|---|---|---|---|---|---|---|---|---|---|\n| SDM72 | 3 | - | - | + | - | + | + | - | - |\n| SDM120/220 | 1 | + | + | + | + | + | + | - | - |\n| SDM530 | 3 | + | + | + | + | + | + | - | - |\n| SDM630 | 3 | + | + | + | + | + | + | + | + |\n| Inepro PRO1/2 | 1 | + | + | + | + | + | + | - | - |\n| Inepro PRO380 | 3 | + | + | + | + | + | + | + | - |\n| Janitza B23-312 | 3 | + | + | + | + | + | + | - | - |\n| DZG DVH4013 | 3 | + | + | - | - | + | + | - | - |\n| SBC ALE3 | 3 | + | + | + | + | + | + | - | - |\n| ABB A/B-Series | 3 | + | + | + | + | + | + | + | + |\n| BE MPM3PM | 3 | + | + | + | + | + | + | - | - |\n| KOSTAL Smart Energy Meter | 3 | + | + | + | + | + | + | + | - |\n| ORNO WE-504/514/515 | 1 | + | + | + | + | + | - | - | - |\n| ORNO WE-516/517 | 3 | + | + | + | + | + | + | + | - |\n| iEM3000 Series | 3 | + | + | + | + | + | + | (+) | + |\n\n- **SDM72**: Compact (4TE), 3P meter with a bare minimum of total measurements, no currents. Can be configured using the builtin display.\n- **SDM120**: Cheap and small (1TE), but communication parameters can only be set over MODBUS, which is currently not supported by this project.\nYou can use e.g. [SDM120C](https://github.com/gianfrdp/SDM120C) to change parameters.\n- **SDM220, SDM230**: More comfortable (2TE), can be configured using the builtin display.\n- **SDM530**: Very big (7TE) - takes up a lot of space, but all connections are\non the underside of the meter.\n- **SDM630**: v1 and v2, both MID and non-MID. Compact (4TE) and with lots\nof features. Can be configured for 1P2 (single phase with neutral), 3P3\n(three phase without neutral) and 3P4 (three phase with neutral) systems.\n- **Inepro PRO1/2**: Small (1TE) MID meter up to 100A (Pro2). External tariff input possible (2T versions).\n- **Inepro PRO380**: Compact (4TE) MID meter with extensive features.\nCan be connected 3P4W, 3P3W and 1P2W. Includes per-direction active/reactive energy consumption and supports two tariffs. Energy resolution is 2 digits per kWh.\n- **Janitza B23-312**: These meters have a higher update rate than the Eastron\ndevices, but they are more expensive. The -312 variant is the one with a MODBUS interface.\n- **DZG DVH4013**: This meter does not provide raw phase power measurements\nand only aggregated import/export measurements. The meter is only\npartially implemented and not recommended.\nBy default, the meter communicates using 9600 8E1. The meter ID\nis derived from the serial number by taking the last two numbers of the\nserial number (top right of the device), e.g. 23, and adding one (24).\nAssume this is a hexadecimal number and convert it to decimal (36). Use\nthis as the meter ID.\n- **SBC ALE3**: This compact Saia Burgess Controls meter is comparable to the SDM630.\nIt has two tariffs, supports both import and export depending on the meter version, and is compact (4TE). It\'s often used with Viessmann heat pumps.\n- **BE MPM3PM**: Compact (4TE) three phase meter.\n- **KOSTAL Smart Energy Meter**: Slave device for Kostal grid inverters. Known [bug](https://github.com/volkszaehler/mbmd/pull/61#issuecomment-570081618) in inverter firmware with Total Export Energy.\n- **ORNO WE-504/514/515**: Low cost single phase meter.\nBy default, the meter communicates using 9600 8E1. The meter ID is 1. 
Meter ID, bus speed and other parameters are configurable via [Software (Windows only)](https://www.partner.orno.pl/grafiki2/PC%20softwre170621.rar).\nWE-515 has a lithium battery and multi-tariff support; WE-514 does not support tariff zones.\n- **ORNO WE-516/517**: Low cost three phase meter.\nBy default, the meter communicates using 9600 8E1. The meter ID is 1. Meter ID, bus speed and other parameters are configurable via [Software (Windows only)](https://www.partner.orno.pl/grafiki2/PC%20softwre170621.rar).\nWE-517 has a lithium battery and multi-tariff support; WE-516 does not support tariff zones.\n- **Schneider Electric iEM3000 Series**: Professional meter with loads of configurable max/average measurements with timestamp functionality.\n\n## Modbus TCP Grid Inverters\n\nApart from meters, SunSpec-compatible grid inverters connected over TCP\nare supported, too. SunSpec defines a default register layout for accessing\nthe devices.\n\nSupported inverters include popular devices from SolarEdge (SE3000, SE9000)\nand SMA (Sunny Boy and Sunny TriPower).\n\nIn case of a TCP connection, the adapter parameter becomes the hostname and port:\n\n\t./mbmd run -a 192.168.0.44:502 -d SMA:23\n\nSunSpec devices can host multiple subdevices, e.g. to expose a meter attached to an inverter. To access a subdevice, append its id to the slave id:\n\n\t./mbmd run -a 192.168.0.44:502 -d FRONIUS:1.0 -d FRONIUS:1.1\n\n\n# Releases\n\nDownload the latest release from [github.com/volkszaehler/mbmd/releases](https://github.com/volkszaehler/mbmd/releases).\n'",,"2019/05/31, 17:17:13",1608,BSD-3-Clause,16,549,"2023/08/19, 07:36:24",49,195,281,27,67,11,0.1,0.38247011952191234,"2020/08/09, 13:50:01",0.13,0,28,false,,false,false,"ngehrsitz/evcc,thommyho/robotui,andoma93/ocpp-evcc-simulator,hoermto/evcc-tho,sarvex/evcc,jonilala796/evcc,chr4/enyaq_exporter,goebelmeier/evcc,nexeck/modbus-proxy,orgTestCodacy11KRepos110MB/repo-5479-evcc,abeamstart/evcc,eforsthofer/eeg-sophiensiedlung,Bucky2k/evcc-soc,mikenye/sunspec2chargehq,andig/evcc-vehicle-demo,cathiele/evcc-config,Kiwi173/Easee,cathiele/evcc,evcc-io/evcc",,https://github.com/volkszaehler,https://volkszaehler.org,Germany,,,https://avatars.githubusercontent.com/u/340617?v=4,,, HiSim,Simulation and analysis of household scenarios using modern components as alternative to fossil fuel based ones.,FZJ-IEK3-VSA,https://github.com/FZJ-IEK3-VSA/HiSim.git,github,,Buildings and Heating,"2023/10/25, 12:13:57",22,1,9,true,Python,FZJ-IEK3,FZJ-IEK3-VSA,"Python,Jupyter Notebook,Batchfile,TeX,Makefile,Dockerfile",,"b' [![PyPI Version](https://img.shields.io/pypi/v/hisim.svg)](https://pypi.python.org/pypi/hisim)\n [![PyPI - License](https://img.shields.io/pypi/l/hisim)](LICENSE)\n \n \n\n# HiSim - Household Infrastructure and Building Simulator\n\nHiSim is a Python package for simulation and analysis of household scenarios and building systems using modern\ncomponents as alternative to fossil fuel based ones. This package integrates load profile generation for electricity\nconsumption, heating demand and electricity generation, as well as smart control strategies for modern components such as\nheat pumps, batteries, electric vehicles or thermal energy storage. HiSim is a package under development by\nForschungszentrum J\xc3\xbclich and Hochschule Emden/Leer. 
For detailed documentation, please\naccess [ReadTheDocs](https://household-infrastructure-simulator.readthedocs.io/en/latest/) of this repository.\n\n\n# Install Graphviz\n\nIf you want to use the feature that generates system charts, you need to install GraphViz on your system. If you don\'t\nhave Graphviz installed, you will experience error messages about a missing dot.exe under Windows.\n\nFollow the installation instructions from here:\nhttps://www.graphviz.org/download/\n\n(or simply disable the system charts)\n\nClone repository\n-----------------------\nTo clone this repository, enter the following command in your terminal:\n\n```console\ngit clone https://github.com/FZJ-IEK3-VSA/HiSim.git\n```\n\nVirtual Environment\n-----------------------\nBefore installing `hisim`, it is recommended to set up a Python virtual environment. Let `hisimvenv` be the name of the\nvirtual environment to be created. For Windows users, setting up the virtual environment in the path `\\hisim` is done with\nthe command line:\n\n```console\npython -m venv hisimvenv\n```\n\nAfter its creation, the virtual environment can be activated in the same directory:\n\n```console\nhisimvenv\\Scripts\\activate\n```\n\nFor Linux/Mac users, the virtual environment is set up and activated as follows:\n\n```console\npython -m venv hisimvenv\nsource hisimvenv/bin/activate\n```\n\nAlternatively, Anaconda can be used to set up and activate the virtual environment:\n\n```console\nconda create -n hisimvenv python=3.9\nconda activate hisimvenv\n```\n\nAfter successful activation, `hisim` is ready to be installed locally.\n\nInstall package\n------------------------\nAfter setting up the virtual environment, install the package to your local libraries:\n\n```console\npip install -e .\n```\n\nRun Simple Examples\n-----------------------\nRun the following command from the `hisim/examples` directory:\n\n```console\npython ../hisim/hisim_main.py simple_examples.py first_example\n```\n\nThis command executes `hisim_main.py` on the setup function `first_example` implemented in the file `simple_examples.py` that\nis stored in `hisim/examples`. The same file contains another setup function that can be used: `second_example`. The\nresults can be visualized in the `results` directory created under the same directory where the script with the setup\nfunction is located.\n\nRun Basic Household Example\n-----------------------\nThe directory `hisim\\examples` also contains a basic household configuration in the script `basic_household.py`. The\nfirst setup function (`basic_household_explicit`) can be executed with the following command:\n\n```console\npython ../hisim/hisim_main.py basic_household.py basic_household_explicit\n```\n\nThe system is set up with the following elements:\n\n* Occupancy (Residents\' Demands)\n* Weather\n* Photovoltaic System\n* Building\n* Heat Pump\n\nHence, the photovoltaic modules and the heat pump are responsible for covering the electricity and thermal energy demands as\nwell as possible. As the name of the setup function says, the components are explicitly connected to each other, with each\ninput bound to its corresponding output in sequence. This differs from connecting inputs and outputs automatically\nbased on similarity. For a better understanding of explicit connection, proceed to the section `IO Connecting Functions`.\n\nGeneric Setup Function Walkthrough\n---------------------\nThe basic structure of a setup function follows:\n\n1. 
Set the simulation parameters (see the `SimulationParameters` class in `hisim/hisim/component.py`)\n1. Create a `Component` object and add it to the `Simulator` object\n 1. Create a `Component` object from one of the child classes implemented in `hisim/hisim/components`\n 1. Check if the `Component` class has been correctly imported\n 1. If necessary, connect your object\'s inputs with previously created `Component` objects\' outputs.\n 1. Finally, add your `Component` object to the `Simulator` object\n1. Repeat step 2 until all the necessary components have been created, connected and added to the `Simulator` object.\n\nOnce you are done, you can run the setup function according to the description in the simple example run.\n\nPackage Structure\n-----------\nThe main program is executed from `hisim/hisim/hisim_main.py`. The `Simulator` (`simulator.py`) object groups the `Component`s\ndeclared and added from the setup functions. The `ComponentWrapper` (`simulator.py`) gathers the `Component`s\ninside a `Simulator` object. The `Simulator` object performs the entire simulation under the\nfunction `run_all_timesteps` and stores the results in a Python pickle `data.pkl` in a subdirectory\nof `hisim/hisim/results` named after the executed setup function. Plots and the report are automatically generated from\nthe pickle by the class `PostProcessor` (`hisim/hisim/postprocessing/postprocessing.py`).\n\nComponent Class\n-----------\nA child class inherits from the `Component` class in `hisim/hisim/component.py` and has to have the following methods\nimplemented:\n\n* i_save_state: updates the previous state variable with the current state variable\n* i_restore_state: updates the current state variable with the previous state variable\n* i_simulate: performs a timestep iteration for the `Component`\n* i_doublecheck: checks if the values are as expected throughout the iteration\n\nThese methods are used by the `Simulator` to execute the simulation and generate the results. A standalone sketch of this method contract is shown at the end of this README.\n\nList of `Component` children\n-----------\nThese classes inherit from the `Component` class (`component.py`) and can be used in your setup function to customize\ndifferent configurations. All `Component` class children are stored in the `hisim/hisim/components` directory. Some of these\nclasses are:\n\n- `RandomNumbers` (`random_numbers.py`)\n- `SimpleController` (`simple_controller.py`)\n- `SimpleSotrage` (`simple_storage.py`)\n- `Transformer` (`transformer.py`)\n- `PVSystem` (`pvs.py`)\n- `CHPSystem` (`chp_system.py`)\n- `Csvload` (`csvload.py`)\n- `SumBuilderForTwoInputs` (`sumbuilder.py`)\n- `SumBuilderForThreeInputs` (`sumbuilder.py`)\n- ToDo: more components to be added\n\nConnecting Input/Outputs\n-----------\nLet `my_home_electricity_grid` and `my_appliance` be Component objects used in the setup function. The\nobject `my_appliance` has an output `ElectricityOutput` that has to be connected to an object `ElectricityGrid`. The\nobject `my_home_electricity_grid` has an input `ElectricityInput`, where this connection takes place. 
In the setup\nfunction, the connection is performed with the method `connect_input` from the `Component` class:\n\n```python\nmy_home_electricity_grid.connect_input(input_fieldname=my_home_electricity_grid.ELECTRICITY_INPUT,\n src_object_name=my_appliance.component_name,\n src_field_name=my_appliance.ELECTRICITY_OUTPUT)\n```\n\nConfiguration Automator\n-----------\nA configuration automator is under development; it aims to reduce connection calls among similar components.\n\nPost Processing\n-----------\nAfter the simulator runs all time steps, the post processing (`postprocessing.py`) reads the persisted results,\nplots the data and\ngenerates a report.\n\n## License\n\nMIT License\n\nCopyright (C) 2020-2021 Noah Pflugradt, Vitor Zago, Frank Burkard, Tjarko Tjaden, Leander Kotzur, Detlef Stolten\n\nYou should have received a copy of the MIT License along with this program.\nIf not, see https://opensource.org/licenses/MIT\n\n## About Us\n\n\n\nWe are\nthe [Institute of Energy and Climate Research - Techno-economic Systems Analysis (IEK-3)](https://www.fz-juelich.de/iek/iek-3/DE/Home/home_node.html)\nbelonging to the [Forschungszentrum J\xc3\xbclich](www.fz-juelich.de/). Our interdisciplinary institute\'s research focuses\non energy-related process and systems analyses. Data searches and system simulations are used to determine energy and\nmass balances, as well as to evaluate performance, emissions and costs of energy systems. The results are used for\nperforming comparative assessment studies between the various systems. Our current priorities include the development of\nenergy strategies, in accordance with the German Federal Government\xe2\x80\x99s greenhouse gas reduction targets, by designing new\ninfrastructures for sustainable and secure energy supply chains and by conducting cost analysis studies for integrating\nnew technologies into future energy market frameworks.\n\n## Contributions and Users\n\nDevelopment Partners:\n\n**Hochschule Emden/Leer** inside the project ""Piegstrom"".\n\n**4ward Energy** inside the EU project ""WHY"" \n\n## Acknowledgement\n\nThis work was supported by the Helmholtz Association under the Joint\nInitiative [""Energy System 2050 A Contribution of the Research Field Energy""](https://www.helmholtz.de/en/research/energy/energy_system_2050/).\n\nFor this work, weather data is based on data from [""German Weather Service (Deutscher Wetterdienst-DWD)""](https://www.dwd.de/DE/Home/home_node.html/) and [""NREL National Solar Radiation Database""](https://nsrdb.nrel.gov/data-viewer/download/intro/) (License: Creative Commons Attribution 3.0 United States License); individual values are averaged.\n\n\n\n\n\nThis project has received funding from the European Union\xe2\x80\x99s Horizon 2020 research and innovation programme under grant agreement No 891943. 
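As a complement to the walkthrough and the `Component` method list above, here is a standalone sketch of the state-handling contract those methods describe. It deliberately does not import hisim: the class below is a toy stand-in, and real components must inherit from the `Component` class in `hisim/hisim/component.py` with its actual signatures.

```python
# Standalone sketch (not a real HiSim component): illustrates the
# i_save_state / i_restore_state / i_simulate / i_doublecheck contract
# described in the "Component Class" section above.
import random


class RandomNumbersSketch:
    """Toy stand-in for a Component child emitting random numbers."""

    def __init__(self, minimum: float, maximum: float) -> None:
        self.minimum = minimum
        self.maximum = maximum
        self.state = minimum           # current state variable
        self.previous_state = minimum  # checkpoint used between iterations

    def i_save_state(self) -> None:
        # Update the previous state variable with the current one.
        self.previous_state = self.state

    def i_restore_state(self) -> None:
        # Update the current state variable with the previous one.
        self.state = self.previous_state

    def i_simulate(self, timestep: int) -> None:
        # Perform one timestep iteration for the component.
        self.state = random.uniform(self.minimum, self.maximum)

    def i_doublecheck(self, timestep: int) -> None:
        # Check that the values are as expected throughout the iteration.
        assert self.minimum <= self.state <= self.maximum


if __name__ == "__main__":
    component = RandomNumbersSketch(100.0, 200.0)
    for step in range(5):
        component.i_save_state()
        component.i_simulate(step)
        component.i_doublecheck(step)
        print(step, round(component.state, 2))
```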
\n\n\n\n\n'",,"2021/10/05, 11:08:23",750,MIT,272,882,"2023/10/25, 10:52:12",28,198,218,142,0,4,1.0,0.6302211302211302,"2023/10/25, 12:11:13",v1.0.0,0,15,false,,false,true,FZJ-IEK3-VSA/HiSim-Building-Sizer,,https://github.com/FZJ-IEK3-VSA,https://www.fz-juelich.de/iek/iek-3/EN/Home/home_node.html,Forschungszentrum Jülich,,,https://avatars.githubusercontent.com/u/28654423?v=4,,, hplib,Database with efficiency parameters from public Heatpump Keymark datasets as well as parameter-sets and functions in order to simulate heat pumps.,RE-Lab-Projects,https://github.com/FZJ-IEK3-VSA/hplib.git,github,"energy,simulation,heatpump",Buildings and Heating,"2023/04/27, 18:38:16",56,6,28,true,HTML,FZJ-IEK3,FZJ-IEK3-VSA,"HTML,Jupyter Notebook,Python",,"b'\n\n# hplib - heat pump library\n\nRepository with code to\n \n- build a **database** with relevant data from public Heatpump Keymark Datasets.\n- identify **efficiency parameters** from the database with a least-square regression model, comparable to Schwamberger [1]. \n- **simulate** heat pump efficiency (COP) as well as electrical (P_el) & thermal power (P_th) and massflow (m_dot) as time series.\n\nFor the simulation, it is possible to calculate outputs of a **specific manufacturer + model** or alternatively for one of **6 different generic heat pump types**.\n\n[1] *K. Schwamberger: \xe2\x80\x9eModellbildung und Regelung von Geb\xc3\xa4udeheizungsanlagen mit W\xc3\xa4rmepumpen\xe2\x80\x9c, VDI Verlag, D\xc3\xbcsseldorf, Fortschrittsberichte VDI Reihe 6 Nr. 263, 1991.*\n\n**For reference purposes:**\n- DOI: [![DOI](https://zenodo.org/badge/DOI/10.5281/zenodo.5521597.svg)](https://doi.org/10.5281/zenodo.5521597)\n- Citation: Tjarko Tjaden, Hauke Hoops, Kai R\xc3\xb6sken. (2021). RE-Lab-Projects/hplib: heat pump library (v2.0). Zenodo. https://doi.org/10.5281/zenodo.5521597\n\n## Documentation\n\nIf you\'re interested in how the database and parameters were calculated, have a look into the Documentation [HTML](http://htmlpreview.github.io/?https://github.com/FZJ-IEK3-VSA/hplib/blob/main/notebooks/documentation.html) or [Jupyter-Notebook](https://github.com/FZJ-IEK3-VSA/hplib/blob/main/notebooks/documentation.ipynb). 
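For orientation, here is a minimal sketch of that workflow using the functions named under "Usage" further below; the argument names shown are illustrative assumptions, so consult the documentation for the exact signatures.

```python
# Minimal sketch of the hplib workflow (load database, fetch fitted
# parameters, simulate one operating point). Argument names are
# illustrative assumptions -- see the documentation for exact signatures.
from hplib import hplib

# Browse the database of fitted heat pump models.
database = hplib.load_database()
print(database.head())

# Fetch the fit parameters for a model, then simulate a single
# operating point (temperatures in deg C, thermal power in W).
parameters = hplib.get_parameters(model="Generic", group_id=1,
                                  t_in=-7, t_out=52, p_th=10000)
heat_pump = hplib.HeatPump(parameters)
result = heat_pump.simulate(t_in_primary=-7, t_in_secondary=47,
                            t_amb=-7, mode=1)
print(result)
```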
In the documentation you will also find **simulation examples** and a **validation**.\n\n\n\n---\n\n## Heat pump models and Group IDs\nThe hplib_database.csv contains the following number of heat pump models, sorted by Group ID:\n\n| [Group ID]: Count | Regulated | On-Off |\n| :--- | :--- | :--- |\n| Outdoor Air / Water | [1]: 5812 | [4]: 40 |\n| Brine / Water | [2]: 283 | [5]: 194 |\n| Water / Water | [3]: 6| [6]: 6 |\n\n## Database\n\nAll resulting database CSV files are under [![License: CC BY 4.0](https://img.shields.io/badge/License-CC%20BY%204.0-lightgrey.svg)](https://creativecommons.org/licenses/by/4.0/).\n\nThe following columns are available for every heat pump of this library:\n\n| Column | Description | Comment |\n| :--- | :--- | :--- |\n| Manufacturer | Name of the manufacturer | 30 manufacturers |\n| Model | Name of the heat pump model | 506 models |\n| Titel | Name of the heat pump submodel | use the Titel name for simulating |\n| Date | heat pump certification date | 2016-07-27 to 2021-03-10 |\n| Type | Type of heat pump model | Outdoor Air/Water, Brine/Water, Water/Water |\n| Subtype | Subtype of heat pump model | On-Off, Regulated|\n| Group ID | ID for combination of type and subtype | 1 - 6|\n| Rated Power low T [kW] | Rated Power for low temperature level | -7/34 \xc2\xb0C |\n| Rated Power medium T [kW] | Rated Power for medium temperature level | -7/52 \xc2\xb0C|\n| Refrigerant | Refrigerant Type | R134a, R290, R32, R407c, R410a, other |\n| Mass of Refrigerant [kg]| Mass of Refrigerant | 0.15 to 17.5 kg |\n| SPL indoor [dBA]| Sound emissions indoor| 15 - 68 dBA|\n| SPL outdoor [dBA]| Sound emissions outdoor| 33 - 78 dBA|\n| Bivalence temperature [\xc2\xb0C] | Minimum temperature heat pump is running without supplementary heater| *T_biv not used in simulation|\n| Tolerance temperature [\xc2\xb0C] | Minimum temperature heat pump is running with supplementary heater| *TOL not used in simulation|\n| Max. water heating temperature [\xc2\xb0C] | Maximum heating temperature | *T_max not used in simulation|\n| Poff [W] | Electrical power consumption, ? | *P_off not used in simulation (0-110 W)|\n| PTOS [W] | Electrical power consumption, ? | *P_tos not used in simulation (0-404 W)|\n| PSB [W] | Electrical power consumption, standby mode | *P_sb not used in simulation (0-110 W)|\n| PCKS [W] | Electrical power consumption, ? | *P_cks not used in simulation (0-99 W)|\n| eta low T [%] | Efficiency for low temperature level| 105-300% |\n| eta medium T [%] | Efficiency for medium temperature level| 107-202% |\n| SCOP | seasonal COP | 2,7-7,7 |\n| SEER low T | seasonal EER for low Temperature Level | 3,39-12,93 |\n| SEER medium T | seasonal EER for medium Temperature Level | 5,04-13,87 |\n| P_th_h_ref [W]| Thermal heating power at -7\xc2\xb0C / 52\xc2\xb0C | 2400 to 69880 W |\n| P_th_c_ref [W]| Thermal cooling power at ? | 3000 to 53200 W |\n| P_el_h_ref [W]| Electrical power at -7\xc2\xb0C / 52\xc2\xb0C | 881 to 29355 W |\n| P_el_c_ref [W]| Electrical power at ? | 881 to 17647 W |\n| COP_ref | COP at -7\xc2\xb0C / 52\xc2\xb0C | 1,53 to 7,95 |\n| EER_ref | EER at ? | 1,99 to 10,8 |\n| p1-p4_P_th | Fit-Parameters for thermal power | - |\n| p1-p4_P_el | Fit-Parameters for electrical power | P_el = P_el_ref * (p1*T_in + p2*T_out + p3 + p4*T_amb) |\n| p1-p4_COP | Fit-Parameters for COP | COP = p1*T_in + p2*T_out + p3 + p4*T_amb|\n| MAPE_P_th | mean absolute percentage error for thermal power (simulation vs. 
measurement) | average = 19,7 % |\n| MAPE_P_el | mean absolute percentage error for electrical input power (simulation vs. measurement) | average = 16,3 % |\n| MAPE_COP | mean absolute percentage error for the coefficient of performance (simulation vs. measurement) | average = 9,8 % |\n| MAPE_P_dc | mean absolute percentage error for thermal cooling power (simulation vs. measurement) | average = 19,7 % |\n| MAPE_P_el | mean absolute percentage error for electrical input power (simulation vs. measurement) | average = 16,3 % |\n| MAPE_EER | mean absolute percentage error for the EER (simulation vs. measurement) | average = 16,3 % |\n\n## Usage\n\n- Install with pip:\n - `pip install hplib`\n\nor: \n\n- Download or clone the repository:\n - `git clone https://github.com/RE-Lab-Projects/hplib.git`\n - Create the environment:\n - `conda env create --name hplib --file requirements.txt`\n\nCreate some code with `from hplib import hplib` and use the included functions `hplib.load_database()`, `hplib.get_parameters`, `hplib.HeatPump()`, `hplib.HeatPump.simulate()`, `hplib.HeatingSystem.calc_brine_temp()` and `hplib.HeatingSystem.calc_heating_dist_temp()`.\n\n\n**Hint:** The csv files in the `output` folder are for documentation and validation purposes. The code and database files, which are meant to be used for simulations, are located in the `hplib` folder. \n\n---\n\n## Input-Data\nThe European Heat Pump Association (EHPA) hosts a website with the results of laboratory measurements from the keymark certification process. For every heat pump model a pdf file can be downloaded from https://keymark.eu/en/products/heatpumps/certified-products.\n\nThis repository is based on all pdf files that were downloaded for every manufacturer on 2023-04-17.\n\n## Further development & possibilities to collaborate\n\nIf you find errors or are interested in developing together on the heat pump library, please create an ISSUE and/or FORK this repository and create a PULL REQUEST.\n\n\n## License\nMIT License\n\nCopyright (c) 2023\n\nYou should have received a copy of the MIT License along with this program.\nIf not, see https://opensource.org/licenses/MIT\n\n## About Us\n

\nWe are the Institute of Energy and Climate Research - Techno-economic Systems Analysis (IEK-3) belonging to the Forschungszentrum J\xc3\xbclich. Our interdisciplinary department\'s research focuses on energy-related process and systems analyses. Data searches and system simulations are used to determine energy and mass balances, as well as to evaluate performance, emissions and costs of energy systems. The results are used for performing comparative assessment studies between the various systems. Our current priorities include the development of energy strategies, in accordance with the German Federal Government\xe2\x80\x99s greenhouse gas reduction targets, by designing new infrastructures for sustainable and secure energy supply chains and by conducting cost analysis studies for integrating new technologies into future energy market frameworks.\n'",",https://doi.org/10.5281/zenodo.5521597,https://doi.org/10.5281/zenodo.5521597\n\n##","2021/04/06, 08:55:39",932,MIT,35,341,"2023/06/30, 16:04:25",6,19,26,8,117,0,0.4,0.43769968051118213,"2022/07/03, 19:08:28",v1.9,0,4,false,,false,false,"blazdob/consmodel,hpharmsen/jirasimp,hpharmsen/jira2simplicate,ttjaden/vdi4657.app,RE-Lab-Projects/prosumer-wp-trj,FZJ-IEK3-VSA/HiSim",,https://github.com/FZJ-IEK3-VSA,https://www.fz-juelich.de/iek/iek-3/EN/Home/home_node.html,Forschungszentrum Jülich,,,https://avatars.githubusercontent.com/u/28654423?v=4,,, Thermofeel,A library to calculate human thermal comfort indexes.,ecmwf-projects,https://github.com/ecmwf-projects/thermofeel.git,github,,Buildings and Heating,"2022/10/31, 07:40:42",53,0,13,true,Python,ECMWF projects,ecmwf-projects,Python,,"b'.. image:: https://raw.githubusercontent.com/ecmwf-projects/thermofeel/master/thermofeel.png\n :width: 600\n :alt: thermofeel logo\n\n|license| |tag_release| |commits_since_release| |last_commit| |docs|\n\n**thermofeel** (pronounced *thermo-feel*)\n\nA library to calculate human thermal comfort indexes.\n\nCurrently calculates the thermal indexes:\n * Universal Thermal Climate Index\n * Mean Radiant Temperature\n * Mean Radiant Temperature from Wet Bulb Globe Temperature\n * Heat Index Simplified\n * Heat Index Adjusted\n * Humidex\n * Apparent Temperature\n * Wind Chill\n * Net Effective Temperature\n * Wet Bulb Globe Temperature Simple\n * Wet Bulb Globe Temperature\n\nIn support of the above indexes, it also calculates:\n * Solar Declination Angle\n * Solar Zenith Angle\n * Relative Humidity Percentage\n * Saturation vapour pressure\n\n\nPyPi\n====\n\n|pypi_status| |pypi_release| |pypi_downloads| |code_size|\n\nInstall with::\n\n $ pip install thermofeel\n\nTesting\n=======\n\n|ci| |codecov|\n\nSystem dependencies\n===================\n\nthermofeel core functions depend on:\n * numpy\n\nOptionally, thermofeel depends on:\n * pytest - for unit testing\n * numba - automatically detect and use Numba JIT to accelerate function execution\n\nContributing\n============\n\nThe main repository is hosted on GitHub; testing, bug reports and contributions are highly welcomed and appreciated:\n\nhttps://github.com/ecmwf-projects/thermofeel\n\nPlease see the Contributing_ document for the best way to help.\n\n.. 
_Contributing: https://github.com/ecmwf-projects/thermofeel/blob/master/CONTRIBUTING.rst\n\nMain contributors:\n\n- Chloe Brimicombe - `ECMWF <https://www.ecmwf.int>`_\n- Tiago Quintino - `ECMWF <https://www.ecmwf.int>`_\n\nSee also the `contributors <https://github.com/ecmwf-projects/thermofeel/graphs/contributors>`_ for a more complete list.\n\n\nLicense\n=======\n\nCopyright 2021 European Centre for Medium-Range Weather Forecasts (ECMWF)\n\nLicensed under the Apache License, Version 2.0 (the ""License"");\nyou may not use this file except in compliance with the License.\nYou may obtain a copy of the License at\n\n http://www.apache.org/licenses/LICENSE-2.0\n\nUnless required by applicable law or agreed to in writing, software\ndistributed under the License is distributed on an ""AS IS"" BASIS,\nWITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\nSee the License for the specific language governing permissions and\nlimitations under the License.\n\nIn applying this licence, ECMWF does not waive the privileges and immunities\ngranted to it by virtue of its status as an intergovernmental organisation nor\ndoes it submit to any jurisdiction.\n\nCiting\n======\n\n\nIn publications, please use our paper in SoftwareX as the main citation for **thermofeel**:\n\nBrimicombe C., Di Napoli C., Quintino T., Pappenberger F., Cornforth R. and Cloke H., 2022.\n*thermofeel: a python thermal comfort indices library*, SoftwareX.\nhttps://doi.org/10.1016/j.softx.2022.101005\n\nTo cite the **thermofeel** code itself, please use:\n\nBrimicombe C., Di Napoli C., Quintino T., Pappenberger F., Cornforth R. and Cloke H., 2021.\n*thermofeel: a python thermal comfort indices library*. https://doi.org/10.21957/mp6v-fd16\n\n\nFor referring to the latest release of **thermofeel**, please use this DOI: https://doi.org/10.21957/mp6v-fd16\n\n\n\nAcknowledgements\n================\nPast and current funding and support for **thermofeel** are listed in the adjoining Acknowledgements_\n\n\n.. _Acknowledgements: https://github.com/ecmwf-projects/thermofeel/blob/master/ACKNOWLEDGEMENTS.rst\n\n\n.. |last_commit| image:: https://img.shields.io/github/last-commit/ecmwf-projects/thermofeel\n :target: https://github.com/ecmwf-projects/thermofeel\n\n.. |commits_since_release| image:: https://img.shields.io/github/commits-since/ecmwf-projects/thermofeel/latest?sort=semver\n :target: https://github.com/ecmwf-projects/thermofeel\n\n.. |license| image:: https://img.shields.io/github/license/ecmwf-projects/thermofeel\n :target: https://www.apache.org/licenses/LICENSE-2.0.html\n\n.. |pypi_release| image:: https://img.shields.io/pypi/v/thermofeel?color=green\n :target: https://pypi.org/project/thermofeel\n\n.. |pypi_status| image:: https://img.shields.io/pypi/status/thermofeel\n :target: https://pypi.org/project/thermofeel\n\n.. |tag_release| image:: https://img.shields.io/github/v/release/ecmwf-projects/thermofeel?sort=semver\n :target: https://github.com/ecmwf-projects/thermofeel\n\n.. |codecov| image:: https://codecov.io/gh/ecmwf-projects/thermofeel/branch/master/graph/badge.svg\n :target: https://codecov.io/gh/ecmwf-projects/thermofeel\n\n.. |ci| image:: https://img.shields.io/github/workflow/status/ecmwf-projects/thermofeel/ci\n :target: https://github.com/ecmwf-projects/thermofeel/actions\n\n.. |pypi_downloads| image:: https://img.shields.io/pypi/dm/thermofeel\n :target: https://pypi.org/project/thermofeel\n\n.. |code_size| image:: https://img.shields.io/github/languages/code-size/ecmwf-projects/thermofeel?color=green\n :target: https://github.com/ecmwf-projects/thermofeel\n \n.. 
|docs| image:: https://readthedocs.org/projects/thermofeel/badge/?version=latest\n :target: https://thermofeel.readthedocs.io/en/latest/?badge=latest\n\n'",",https://doi.org/10.21957/mp6v-fd16\n\n\nFor,https://doi.org/10.21957/mp6v-fd16\n\n\n\nAcknowledgements\n================\nPast","2021/06/17, 15:52:26",860,Apache-2.0,10,432,"2023/04/17, 10:49:56",4,12,30,15,191,2,0.0,0.36641221374045807,"2022/10/31, 08:07:30",1.3.0,0,8,false,,false,true,,,https://github.com/ecmwf-projects,www.ecmwf.int,United Kingdom,,,https://avatars.githubusercontent.com/u/75486320?v=4,,, CBE Clima Tool,A web-based application built to support the needs of architects and engineers interested in climate-adapted design.,CenterForTheBuiltEnvironment,https://github.com/CenterForTheBuiltEnvironment/clima.git,github,"clima,epw,epw-files,climate-analysis,building-design,python",Buildings and Heating,"2023/10/05, 06:55:22",40,0,14,true,Python,,CenterForTheBuiltEnvironment,"Python,CSS,Dockerfile,Procfile",https://clima.cbe.berkeley.edu,"b'# CBE Clima Tool\n\nThe CBE Clima Tool is a web-based application built to support climate analysis, specifically designed to meet the needs of architects and engineers interested in climate-adapted design. \nIt allows users to analyze the climate data of more than 27,500 locations worldwide from both [Energy Plus](https://energyplus.net/weather) and [Climate.One.Building.org](http://climate.onebuilding.org/). \nYou can, however, also choose to upload your own EPW weather file. Our tool can be used to analyze and visualize data contained in EnergyPlus Weather \\(EPW\\) files. \nIt furthermore calculates a number of climate-related values \\(e.g.,\nsolar azimuth and altitude, Universal Thermal Climate Index \\(UTCI\\), comfort indices, etc.\\) that are not contained in the EPW files but can be derived from the information contained therein. 
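For readers who want to inspect an EPW file directly before uploading it, the hourly records are plain comma-separated rows after eight header lines, with dry-bulb temperature in the seventh data column. A generic sketch follows (this is not the Clima Tool's own code; the file name is an example):

```python
# Generic sketch: peek at an EnergyPlus Weather (EPW) file with pandas.
# EPW files have 8 header lines followed by hourly CSV records; column
# index 6 holds dry-bulb temperature in deg C. The path is an example.
import pandas as pd

epw = pd.read_csv("my_location.epw", skiprows=8, header=None)
epw = epw.rename(columns={0: "year", 1: "month", 2: "day",
                          3: "hour", 6: "dry_bulb_C"})

# Monthly mean dry-bulb temperature, the kind of summary Clima plots.
print(epw.groupby("month")["dry_bulb_C"].mean().round(1))
```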
\nIt can be freely accessed at [clima.cbe.berkeley.edu](http://clima.cbe.berkeley.edu).\n\nIf you use this tool, please consider citing us.\n\n## Official documentation\n\n[Official documentation](https://center-for-the-built-environment.gitbook.io/clima/).\n\n## Authors\n* [Giovanni Betti](https://www.linkedin.com/in/gbetti/)\n* [Federico Tartarini](https://www.linkedin.com/in/federico-tartarini-3991995b/)\n* [Christine Nguyen](https://chrlng.github.io/)\n\n## Built with\n\n* [Dash](https://plotly.com/dash/) - Framework for building the web app\n* [Plotly Python](https://plotly.com/python/) - Used to create the interactive plots \n\n'",,"2021/01/12, 20:53:32",1016,MIT,192,726,"2023/05/19, 03:11:07",25,28,175,57,159,1,1.6,0.44725111441307575,,,0,9,false,,false,false,,,https://github.com/CenterForTheBuiltEnvironment,http://cbe.berkeley.edu,"Berkeley, CA",,,https://avatars.githubusercontent.com/u/6592546?v=4,,, Kiva,Used to calculate heat loss and gain on a timestep basis from building foundations.,bigladder,https://github.com/bigladder/kiva.git,github,"engineering,heat-transfer,energy,kiva,foundation,building,simulation",Buildings and Heating,"2023/07/31, 18:20:07",24,0,1,true,C++,Big Ladder Software,bigladder,"C++,CMake,Ruby,Shell,Batchfile",,"b""![](docs/images/kiva-logo.png)\n\n[![Documentation Status](https://readthedocs.org/projects/kiva/badge/?version=latest)](http://kiva.readthedocs.org/en/latest/?badge=latest)\n[![Build and Test](https://github.com/bigladder/kiva/actions/workflows/build_and_test.yml/badge.svg)](https://github.com/bigladder/kiva/actions/workflows/build_and_test.yml)\n[![codecov](https://codecov.io/gh/bigladder/kiva/branch/develop/graph/badge.svg)](https://codecov.io/gh/bigladder/kiva)\n\nKiva\n====\n\nKiva is a free and open source ground heat transfer calculation tool written in\nC++. Specifically, Kiva is used to calculate heat loss and gain on a timestep\nbasis from building foundations. The goal is to create a tool that can integrate\nthe multi-dimensional heat transfer into standard building energy simulation\nengines.\n\nDocumentation\n-------------\n\nSee the [online documentation](http://kiva.readthedocs.org/en/latest/) for information on using Kiva and creating Kiva input files.\n\nContributing\n------------\n\nKiva is configured as a cross-platform CMake project. To build Kiva, you'll need to clone the git repository and use CMake (pointing to the kiva root directory).\n\nPre-requisites:\n\n1. A C++ compiler (e.g., Clang, GCC, MSVC)\n2. CMake\n\nBuilding Kiva from source\n-------------------------\n\n1. Clone the git repository.\n2. Make a directory called `build` inside the top level of your source.\n3. Open a console in the `build` directory.\n4. Type `cmake ..`.\n5. Type `cmake --build . --config Release`.\n6. 
The Kiva executable (`kiva` or `kiva.exe`) will appear in your build directory.\n\nIf you'd like to contribute to this code or if you have questions, send an email to Neal\nKruis (neal.kruis AT bigladdersoftware DOT com).\n""",,"2013/07/02, 16:06:44",3767,BSD-3-Clause,30,857,"2023/07/31, 18:20:11",11,52,55,7,86,2,0.4,0.3328591749644382,"2023/03/14, 15:07:48",v0.6.6,0,6,false,,false,false,,,https://github.com/bigladder,https://bigladdersoftware.com,"Denver, CO",,,https://avatars.githubusercontent.com/u/11000150?v=4,,, Macquette,"A whole house energy assessment tool, which models a building to produce a report to help householders understand how their home performs now in terms of energy use and how it might be improved.",retrofitcoop,https://gitlab.com/retrofitcoop/macquette,gitlab,,Buildings and Heating,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,, Urban Multi-scale Environmental Predictor,"A climate service tool, designed for researchers and service providers presented as a plugin for QGIS.",UMEP-dev,https://github.com/UMEP-dev/UMEP.git,github,"qgis,urban-climate,urban-planning",Buildings and Heating,"2023/09/12, 14:40:35",51,0,16,true,Python,,UMEP-dev,"Python,JavaScript,CSS,Makefile,QML,GAP,HTML,Shell,Batchfile",https://umep-docs.readthedocs.io/,b'# UMEP: the Urban Multi-scale Environmental Predictor\n\nThis is the official repository for the Urban Multi-scale Environmental Predictor. \nUMEP includes various climate-sensitive planning tools and comes as a plugin \nfor QGIS (www.qgis.org). UMEP is available as a plugin from the QGIS \nofficial plugin repository. It can also be downloaded/cloned/forked from this repository. \n\n## Useful links\n- [Instructions on how to install UMEP](https://umep-docs.readthedocs.io/en/latest/Getting_Started.html)\n\n- [Email list](https://www.lists.rdg.ac.uk/mailman/listinfo/met-umep)\n\n- [UMEP website](https://umep-docs.readthedocs.io/en/latest/index.html)\n\n',,"2020/03/25, 22:57:46",1308,BSD-3-Clause,11,240,"2023/10/13, 11:42:48",14,4,352,35,12,0,0.0,0.26428571428571423,,,0,4,false,,false,false,,,https://github.com/UMEP-dev,,,,,https://avatars.githubusercontent.com/u/62675320?v=4,,, Urban Weather Generator,A Python application for modeling the urban heat island effect.,ladybug-tools,https://github.com/ladybug-tools/uwg.git,github,,Buildings and Heating,"2023/10/01, 16:27:53",40,8,10,true,Python,Ladybug Tools,ladybug-tools,"Python,Shell",https://www.ladybug.tools/uwg/docs/,"b'[![Build Status](https://github.com/ladybug-tools/uwg/workflows/CI/badge.svg)](https://github.com/ladybug-tools/uwg/actions)\n[![Coverage Status](https://coveralls.io/repos/github/ladybug-tools/uwg/badge.svg?branch=master)](https://coveralls.io/github/ladybug-tools/uwg)\n\n[![Python 3.6](https://img.shields.io/badge/python-3.6-blue.svg)](https://www.python.org/downloads/release/python-360/) [![Python 2.7](https://img.shields.io/badge/python-2.7-green.svg)](https://www.python.org/downloads/release/python-270/) [![IronPython](https://img.shields.io/badge/ironpython-2.7-red.svg)](https://github.com/IronLanguages/ironpython2/releases/tag/ipy-2.7.8/)\n\n# uwg\n\nThe Urban Weather Generator (uwg) is a Python application for modeling the [urban heat island effect](https://en.wikipedia.org/wiki/Urban_heat_island). 
Specifically, it morphs rural [EnergyPlus weather (.epw) files](http://www.ladybug.tools/epwmap/) to reflect average conditions within the urban canyon using a range of properties including:\n\n* Building geometry (including building height, ground coverage, window:wall area, and facade:site area)\n* Building use (including program type, HVAC systems, and occupancy/equipment scheduling)\n* Cooling system heat rejection to the outdoors (for Summer)\n* Indoor heat leakage to the outdoors (for Winter)\n* Urban materials (including the thermal mass, albedo and emissivity of roads, walls, and roofs)\n* Anthropogenic heat from traffic (including traffic schedules)\n* Vegetation coverage (both trees and shrubs)\n* Atmospheric heat transfer from urban boundary and canopy layers\n\nThe [original Urban Weather Generator](http://urbanmicroclimate.scripts.mit.edu/uwg.php) was developed by Bruno Bueno for [his PhD thesis at MIT](https://dspace.mit.edu/handle/1721.1/59107). Since this time, it has been validated 3 times and has been [enhanced by Aiko Nakano](https://dspace.mit.edu/handle/1721.1/108779). In 2016, Joseph Yang also [improved the engine and added a range of building templates](https://dspace.mit.edu/handle/1721.1/107347).\n\nThis repository is a Python translation of the original [MATLAB Urban Weather Generator](https://github.com/hansukyang/UWG_Matlab).\n\n# Example\nHere is a Python example that shows how to create and run an Urban Weather Generator object.\n\n```python\nfrom uwg import UWG\n\n# Define the .epw, .uwg paths to create an uwg object.\nepw_path = ""resources/SGP_Singapore.486980_IWEC.epw"" # available in resources directory.\n\n\n# Initialize the UWG model by passing parameters as arguments, or relying on defaults\nmodel = UWG.from_param_args(epw_path=epw_path, bldheight=10, blddensity=0.5,\n vertohor=0.8, grasscover=0.1, treecover=0.1, zone=\'1A\')\n\n# Uncomment these lines to initialize the UWG model using a .uwg parameter file\n# param_path = ""initialize_singapore.uwg"" # available in resources directory.\n# model = UWG.from_param_file(param_path, epw_path=epw_path)\n\nmodel.generate()\nmodel.simulate()\n\n# Write the simulation result to a file.\nmodel.write_epw()\n```\n\n## Installation\n```console\npip install uwg\n```\n\n## QuickStart\n```python\nimport uwg\n\n```\n\n## [API Documentation](http://ladybug-tools.github.io/uwg/docs)\n\n## Local Development\n1. Clone this repo locally\n```console\ngit clone git@github.com:ladybug-tools/uwg\n\n# or\n\ngit clone https://github.com/ladybug-tools/uwg\n```\n2. Install dependencies:\n```console\ncd uwg\npip install -r dev-requirements.txt\npip install -r requirements.txt\n```\n\n3. Run Tests:\n```console\npython -m pytest tests/\n```\n\n4. 
Generate Documentation:\n```console\nsphinx-apidoc -f -e -d 4 -o ./docs ./uwg\nsphinx-build -b html ./docs ./docs/_build/docs\n```\n'",,"2017/04/18, 03:32:01",2381,GPL-3.0,5,639,"2023/10/01, 16:39:15",20,219,262,7,24,0,0.0,0.28816793893129766,"2023/10/01, 16:38:49",v5.8.13,0,6,false,,true,true,"FilhoRicardo/parametric-geometry-app,BHoM/LadybugTools_Toolkit,Eliewiii/Building_Urban_Analysis,Eliewiii/Elie_UBEM_tool,saeranv/astrobot,3riccc/graduation_design,Aldrich-TB/mysite,ladybug-tools/dragonfly-uwg",,https://github.com/ladybug-tools,ladybug.tools,Worldwide,,,https://avatars.githubusercontent.com/u/14942270?v=4,,, eensight,This Python package implements the measurement and verification (M&V) methodology that has been developed by the H2020 project SENSEI - Smart Energy Services to Improve the Energy Efficiency of the European Building Stock.,hebes-io,https://github.com/hebes-io/eensight.git,github,"building-energy,pipelines,energy-efficiency,energy-data",Buildings and Heating,"2023/01/22, 11:56:03",14,0,4,true,Python,,,Python,https://hebes-io.github.io/rethinking/,"b'![logo](https://github.com/hebes-io/eensight/blob/master/logo.png)\n

\n\n[![PyPI version](https://badge.fury.io/py/eensight.svg)](https://badge.fury.io/py/eensight)\n\n## The `eensight` tool for measurement and verification of energy efficiency improvements\n\nThe `eensight` Python package implements the measurement and verification (M&V) methodology that has been developed by the H2020 project [SENSEI - Smart Energy Services to Improve the Energy Efficiency of the European Building Stock](https://senseih2020.eu/). \n\nThe online book *Rethinking Measurement and Verification of Energy Savings* (accessible [here](https://hebes-io.github.io/rethinking/index.html)) explains in detail both the methodology and its implementation.\n\n## Installation\n\n`eensight` can be installed by pip:\n\n```bash\npip install eensight\n```\n\n## Usage\n\n### 1. Through the command line\n\nAll the functionality in `eensight` is organized around data pipelines. Each pipeline consumes data and other artifacts (such as models) produced by a previous pipeline, and produces new data and artifacts for its successor pipelines.\n\nThere are four (4) pipelines in `eensight`. The names of the pipelines and the associations between pipelines and namespaces are summarized below:\n\n| \t| train \t| test \t| apply |\n|------------\t|----------\t|----------\t|---------|\n| preprocess \t| ✔ \t| ✔ \t| ✔|\n| predict \t| ✔ \t| ✔\t| ✔|\n| evaluate \t| \t| ✔ | ✔|\n| adjust \t| \t| | ✔|\n\nThe primary way of using `eensight` is through the command line. The first argument is always the name of the pipeline to run, such as:\n\n```bash\neensight run predict --namespace train\n```\nThe command\n\n```bash\neensight run --help\n```\nprints the documentation for all the options that can be passed to the command line.\n\n### 2. As a library\n\nThe pipelines of `eensight` are separate from the methods that implement them, so that the latter can be used directly:\n\n```python\nimport pandas as pd\n\nfrom eensight.methods.prediction.baseline import UsagePredictor\nfrom eensight.methods.prediction.activity import estimate_activity\n\nnon_occ_features = [""temperature"", ""dew point temperature""]\n\nactivity = estimate_activity(\n X, \n y, \n non_occ_features=non_occ_features, \n exog=""temperature"",\n assume_hurdle=False,\n\n)\n\nX_act = pd.concat([X, activity.to_frame(""activity"")], axis=1)\nmodel = UsagePredictor(skip_calendar=True).fit(X_act, y)\n```\n\n
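A minimal end-to-end sketch of how the inputs above might be assembled (the file name and column layout are illustrative assumptions, not part of `eensight`, and the scikit-learn-style `predict` call is inferred from the `fit` API shown above):\n\n```python\nimport pandas as pd\n\nfrom eensight.methods.prediction.activity import estimate_activity\nfrom eensight.methods.prediction.baseline import UsagePredictor\n\n# Hourly data with a datetime index; file name and columns are illustrative.\ndata = pd.read_csv(\'building.csv\', index_col=\'timestamp\', parse_dates=True)\ny = data[\'consumption\']  # metered energy use\nX = data[[\'temperature\', \'dew point temperature\']]  # weather features\n\n# Estimate occupancy-driven activity, append it as a feature, fit a baseline.\nactivity = estimate_activity(X, y, non_occ_features=list(X.columns), exog=\'temperature\')\nX_act = pd.concat([X, activity.to_frame(\'activity\')], axis=1)\nmodel = UsagePredictor().fit(X_act, y)\nbaseline = model.predict(X_act)  # counterfactual baseline consumption\n```\n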
\n\n'",,"2020/05/16, 16:56:19",1257,Apache-2.0,7,129,"2023/01/21, 15:52:48",0,2,3,2,277,0,0.0,0.0,"2023/01/21, 16:06:11",v1.0,0,1,false,,false,false,,,,,,,,,,, PointER,A LiDAR-Derived Point Cloud Dataset of One Million English Buildings Linked to Energy Characteristics.,kdmayer,https://github.com/kdmayer/PointER.git,github,"building-energy,deep-learning,lidar,point-cloud,dataset",Buildings and Heating,"2023/10/06, 14:32:00",8,0,8,true,Python,,,"Python,Jupyter Notebook",https://www.nature.com/articles/s41597-023-02544-x,"b'# Points for Energy Renovation (PointER): \n## A LiDAR-Derived Point Cloud Dataset of One Million English Buildings Linked to Energy Characteristics\n\n## Getting Started\n- Please see our [setup documentation](documentation/DB_CONTAINER_SETUP.md) for a step by step description.\n- Please check our [related open access paper](https://www.nature.com/articles/s41597-023-02544-x) for information about the method and the resulting dataset. \n- A dataset comprising one million building point clouds with half of the buildings linked to energy features is available [here](https://mediatum.ub.tum.de/1713501).\n\n## Prerequisites\n- Required packages are documented in the [environment.yml](environment.yml) file. \n- The [environment_for_analysis.yml](environment_for_analysis.yml) includes some more packages required for visualization and analysis.\n\n## Running the Code\n- To run an example point cloud generation, please use the [jupyter notebook](experimentation/building_pointcloud_generation.ipynb).\n- To run the point cloud generation for an entire area of interest, please see the [point cloud generation documentation](documentation/RUN_POINTCLOUD_GENERATION.md).\n- The main program can be found [here](src/building_pointcloud_main.py). Please note, that the point cloud generation process involves some upfront data preparation.\n\nThe process involves 6 steps:\n\n![img](/assets/images/overview.png)\n\nDue to the size of the point cloud files, it is recommended to set up the container on a machine with a large working memory. 
\nWe ran the code without problems on a machine with 48 GB, but a machine with 16 GB or more should work.\n\n## Dataset\nThe [dataset](https://mediatum.ub.tum.de/1713501) contains one million building point clouds for 16 Local Authority Districts in England.\nThese Local Authority Districts are representative of the English building stock and selected across the country (see image).\n\n![img](/assets/images/LAD_selected.png)\n\nThis is an example of a resulting point cloud:\n![img](/assets/images/example.png)\n\n## Data Sources\n- Point cloud data (.laz): [UK National LiDAR Programme](https://www.data.gov.uk/dataset/f0db0249-f17b-4036-9e65-309148c97ce4/national-lidar-programme)\n - Open Government Licence\n- We use [Verisk UKBuildings database](https://www.verisk.com/en-gb/3d-visual-intelligence/products/ukbuildings/) (.gpkg format) as building footprints\n - License for personal use only\n - Alternatively, we can use OSM data\n- [Local Authority District Boundaries](https://geoportal.statistics.gov.uk/) (.shp format) \n - Open Government Licence\n- [Unique Property Reference Numbers](https://www.ordnancesurvey.co.uk/business-government/products/open-uprn) (UPRN) including coordinates (.gpkg format) \n - Open Government Licence\n\n\n## Versioning\nV0.1 Initial version\n\n## Citation\n\n @article{Krapf2023,\n doi = {10.1038/s41597-023-02544-x},\n url = {https://doi.org/10.1038/s41597-023-02544-x},\n year = {2023},\n publisher = {Springer Science and Business Media {LLC}},\n volume = {10},\n author = {Sebastian Krapf and Kevin Mayer and Martin Fischer},\n title = {Points for energy renovation ({PointER}): A point cloud dataset of a million buildings linked to energy features},\n journal = {Scientific Data}\n }\n\n## License\nThis project is licensed under the [MIT License](LICENSE).\n'",",https://doi.org/10.1038/s41597-023-02544-x","2022/08/02, 22:22:37",449,MIT,141,241,"2023/06/17, 01:10:36",0,4,6,5,130,0,0.0,0.3850267379679144,,,0,2,false,,false,false,,,,,,,,,,, predyce,"The natural evolution of the conventional Energy Performance Certification into real-time optimization of building performance and comfort: it captures the building's dynamic behaviour while providing transparent feedback through an intuitive interface.",polito-edyce-prelude,https://gitlab.com/polito-edyce-prelude/predyce,gitlab,,Buildings and Heating,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,, EUReCA,"Provides an efficient and reliable Urban Building Energy Modeling platform, entirely developed in Python, aiming at simulating and predicting the energy consumption of cities and urban areas.",BETALAB-team,https://github.com/BETALAB-team/EUReCA.git,github,"building,building-simu,ubem",Buildings and Heating,"2023/10/20, 08:53:53",8,0,6,true,Python,BETALAB,BETALAB-team,Python,https://research.dii.unipd.it/betalab/,"b'# EUReCA \n\n![Insert caption here](https://research.dii.unipd.it/betalab/wp-content/uploads/sites/33/2021/03/EUReCA_logo_300x300.jpg)\n\nThe Energy Urban Resistance Capacitance Approach provides an efficient and reliable Urban Building Energy Modeling platform, entirely developed in Python, aiming at simulating and predicting the energy consumption of cities and urban areas. 
The tool exploits a bottom-up modeling methodology, creating simple and useful dynamic building energy models.\n\nThis research project has been developed within the [BETALAB](https://research.dii.unipd.it/betalab/) research group of the University of Padua, Italy.\n\n## Python environment set up\nThe tool is distributed via the GitHub repository. As a first step, you must create a new conda or venv environment. You can name it eureca.\n\n`conda create -n eureca python=3.9`\n\nand activate it:\n\n``conda activate eureca``\n\nThen clone the following package in a separate folder:\n\n``\ngit clone https://github.com/BETALAB-team/EUReCA.git\n``\nand install it in the same environment:\n``\npip install -e *your_path_to_the_folder/eureca-ubem*\n``\n## Preparing and running a simulation\n### Input files\n\nThe [eureca_ubem/Input](https://github.com/BETALAB-team/EUReCA/tree/main/eureca_ubem/Input) folder has some example files to run the simulation. \nTo simulate cities' energy consumption in EUReCA, some input files must be prepared:\n - A `weather_data.epw` weather file. These files are available at the [EnergyPlus](https://www.energyplus.net/weather) website.\n - An `EnvelopeTypes.xlsx` spreadsheet. It includes the thermo-physical properties of building envelopes. An example is available in the `materials_and_construction_test.xlsx`.\n - A `Schedules.xlsx` spreadsheet. It includes the operational schedules of occupancy, appliances, temperature and humidity setpoints, and HVAC usage for different end-uses. Example in `Schedules.xlsx`.\n - The `config.json` file, which defines the simulation parameters. Example in [config.json](https://github.com/BETALAB-team/EUReCA/blob/main/eureca_ubem/Input/config.json).\n - The `city.json` model. See the next section for further info on the alternatives.\n\n### The JSON city model\nCurrently, EUReCA can handle two types of JSON city models. 
The recommended methodology consists of importing buildings\' geometries via semantic [CityJSON](https://www.cityjson.org/) files, but 2D shapefiles, encoded in GeoJSON format, can also be exploited to build up the city.\n\nThe required attributes are:\n- CityJSON: \n ```\n ""End Use"": ""schedule_type_name"", \n ""Envelope"": ""envelope_type_name"", \n ""Heating System"": ""heating_system_name"", \n ""Cooling System"": ""cooling_system_name""\n ```\n- GeoJSON: \n ```\n ""id"": integer, \n ""Name"": ""name"", \n ""End Use"": ""schedule_archetype_name"", \n ""Envelope"": ""envelope_archetype_name"", \n ""Height"": float, ""Nfloors"": integer, \n ""Floors"": float, \n ""Heating System"": ""heating_system_name"", \n ""Cooling System"": ""cooling_system_name""\n ```\n \nThe strings in the city model\'s attribute table (`End Use` and `Envelope`) must match the labels of the End Uses and Envelope types listed in the `Schedules.xlsx` and `EnvelopeTypes.xlsx`.\n\n`Heating System` and `Cooling System` must match one of the following items:\nList of available heating systems:\n- IdealLoad\n- CondensingBoiler\n- TraditionalBoiler\n- Traditional Gas Boiler, Centralized, Low Temp Radiator\n- Traditional Gas Boiler, Single, Low Temp Radiator\n- Traditional Gas Boiler, Centralized, High Temp Radiator\n- Traditional Gas Boiler, Single, High Temp Radiator\n- Traditional Gas Boiler, Centralized, Fan coil\n- Traditional Gas Boiler, Single, Fan coil\n- Traditional Gas Boiler, Centralized, Radiant surface\n- Traditional Gas Boiler, Single, Radiant surface\n- Condensing Gas Boiler, Centralized, Low Temp Radiator\n- Condensing Gas Boiler, Single, Low Temp Radiator\n- Condensing Gas Boiler, Centralized, High Temp Radiator\n- Condensing Gas Boiler, Single, High Temp Radiator\n- Condensing Gas Boiler, Centralized, Fan coil\n- Condensing Gas Boiler, Single, Fan coil\n- Condensing Gas Boiler, Centralized, Radiant surface\n- Condensing Gas Boiler, Single, Radiant surface\n- Oil Boiler, Centralized, High Temp Radiator\n- Oil Boiler, Single, High Temp Radiator\n- Stove\n- A-W Heat Pump, Centralized, Low Temp Radiator\n- A-W Heat Pump, Single, Low Temp Radiator\n- A-W Heat Pump, Centralized, Fan coil\n- A-W Heat Pump, Single, Fan coil\n- A-W Heat Pump, Centralized, Radiant surface\n- A-W Heat Pump, Single, Radiant surface\n\nList of available cooling systems:\n- IdealLoad\n- SplitAirCooler\n- ChillerAirtoWater\n- SplitAirConditioner\n- A-A split\n- A-W chiller, Centralized, Fan coil\n- A-W chiller, Centralized, Radiant surface\n- A-W chiller, Single, Fan coil\n- A-W chiller, Single, Radiant surface\n\nThe Input folder provides some examples for the city of Padua.\n\n### Simulation\n\nAfter setting up all the input files, you can run the simulation through a Python script, as follows:\n\n```\nimport os\nimport time as tm\n\n\n# CONFIG FILE LOADING\nfrom eureca_building.config import load_config\nload_config(""path_to_your_config\\\\config.json"")\n\nfrom eureca_ubem.city import City\n\n# SET INPUT FILES\nweather_file = os.path.join(""."",""path_to_your_input"",""weather_file.epw"")\nschedules_file = os.path.join(""."",""path_to_your_input"",""Schedules.xlsx"")\nmaterials_file = os.path.join(""."",""path_to_your_input"",""materials_and_construction_test.xlsx"")\ncity_model_file = os.path.join(""."",""path_to_your_input"",""citymodel.geojson"")\n\n# Creation of the City object and simulation\ncity_geojson = City(\n city_model=city_model_file,\n epw_weather_file=weather_file,\n end_uses_types_file=schedules_file,\n 
envelope_types_file=materials_file,\n shading_calculation=True,\n output_folder=os.path.join(""."",""your_output_folder"")\n)\ncity_geojson.loads_calculation()\ncity_geojson.simulate(print_single_building_results=True)\n```\n\n### Output report\nIf `output_folder=os.path.join(""."",""your_output_folder"")` is set, outputs are printed in the output folder.\nEach file is a csv with the main output variables of each building.\n\n### How to cite EUReCA\nIn case you want to use EUReCA for your own research project, please cite the following paper: \n\n@article{\\\nPRATAVIERA2021544,\\\ntitle = {EUReCA: An open-source urban building energy modelling tool for the efficient evaluation of cities energy demand},\\\njournal = {Renewable Energy},\\\nvolume = {173},\\\npages = {544-560},\\\nyear = {2021},\\\nissn = {0960-1481},\\\ndoi = {https://doi.org/10.1016/j.renene.2021.03.144}, \\\nurl = {https://www.sciencedirect.com/science/article/pii/S0960148121005085}, \\\nauthor = {Enrico Prataviera and Pierdonato Romano and Laura Carnieletto and Francesco Pirotti and Jacopo Vivian and Angelo Zarrella},\\\nkeywords = {Urban building energy modelling, Lumped-capacitance thermal networks, Semantic georeferenced data, EUReCA, District simulation}\\\n}\n'",",https://doi.org/10.1016/j.renene.2021.03.144","2021/03/02, 13:57:27",967,GPL-3.0,160,278,"2023/04/11, 14:10:52",1,5,5,2,197,0,0.0,0.048387096774193505,,,0,3,false,,false,false,,,https://github.com/BETALAB-team,https://research.dii.unipd.it/betalab/,"Padua, Italy",,,https://avatars.githubusercontent.com/u/79995108?v=4,,, stplanr,A package for sustainable transport planning with R.,ropensci,https://github.com/ropensci/stplanr.git,github,"r,transport,spatial,rstats,r-package,peer-reviewed,transport-planning,walking,cycling,pubic-transport,origin-destination,desire-lines,routes,routing,route-network,transportation,cycle",Mobility and Transportation,"2023/10/05, 23:19:19",407,0,26,true,R,rOpenSci,ropensci,"R,JavaScript,TeX",https://docs.ropensci.org/stplanr,"b'\n\n\n# stplanr \n\n\n\n[![rstudio mirror\ndownloads](https://cranlogs.r-pkg.org/badges/stplanr)](https://github.com/r-hub/cranlogs.app)\n[![](https://cranlogs.r-pkg.org/badges/grand-total/stplanr)](https://cran.r-project.org/package=stplanr)\n[![CRAN_Status_Badge](https://www.r-pkg.org/badges/version/stplanr)](https://cran.r-project.org/package=stplanr)\n[![lifecycle](https://img.shields.io/badge/lifecycle-maturing-blue.svg)](https://lifecycle.r-lib.org/articles/stages.html)\n[![](https://badges.ropensci.org/10_status.svg)](https://github.com/ropensci/software-review/issues/10)\n[![R-CMD-check](https://github.com/ropensci/stplanr/workflows/R-CMD-check/badge.svg)](https://github.com/ropensci/stplanr/actions)\n\n**stplanr** is a package for sustainable transport planning with R.\n\nIt provides functions for solving common problems in transport planning\nand modelling, such as how to best get from point A to point B. 
The\noverall aim is to provide a reproducible, transparent and accessible\ntoolkit to help people better understand transport systems and inform\npolicy, as outlined in a\n[paper](https://journal.r-project.org/archive/2018/RJ-2018-053/index.html)\nabout the package, and the potential for open source software in\ntransport planning in general, published in the [R\nJournal](https://journal.r-project.org/).\n\nThe initial work on the project was funded by the Department of\nTransport\n([DfT](https://www.gov.uk/government/organisations/department-for-transport))\nas part of the development of the Propensity to Cycle Tool (PCT), a web\napplication to explore current travel patterns and cycling potential at\nzone, desire line, route and route network levels (see\n[www.pct.bike](https://www.pct.bike/) and click on a region to try it\nout). The basis of the methods underlying the PCT is origin-destination\ndata, which are used to highlight where many short distance trips are\nbeing made, and estimate how many could switch to cycling. The results\nhelp identify where cycleways are most needed, an important component of\nsustainable transport planning infrastructure engineering and policy\n[design](https://www.icevirtuallibrary.com/doi/abs/10.1680/dfct.63495.001).\n\nSee the package vignette (e.g.\xc2\xa0via `vignette(""introducing-stplanr"")`) or\nan [academic paper on the Propensity to Cycle Tool\n(PCT)](https://dx.doi.org/10.5198/jtlu.2016.862) for more information on\nhow it can be used. This README provides some basics.\n\nMuch of the work supports research undertaken at the Leeds\xe2\x80\x99 Institute\nfor Transport Studies ([ITS](https://environment.leeds.ac.uk/transport))\nbut **stplanr** should be useful to transport researchers and\npractitioners needing free, open and reproducible methods for working\nwith geographic data everywhere.\n\n## Key functions\n\nData frames representing flows between origins and destinations must be\ncombined with geo-referenced zones or points to generate meaningful\nanalyses and visualisations of \xe2\x80\x98flows\xe2\x80\x99 or origin-destination (OD) data.\n**stplanr** facilitates this with `od2line()`, which takes flow and\ngeographical data as inputs and outputs spatial data. Some example data\nis provided in the package:\n\n``` r\nlibrary(stplanr)\n```\n\nLet\xe2\x80\x99s take a look at this data:\n\n``` r\nod_data_sample[1:3, 1:3] # typical form of flow data\n#> # A tibble: 3 \xc3\x97 3\n#> geo_code1 geo_code2 all\n#> \n#> 1 E02002361 E02002361 109\n#> 2 E02002361 E02002363 38\n#> 3 E02002361 E02002367 10\ncents_sf[1:3,] # points representing origins and destinations\n#> geo_code MSOA11NM percent_fem avslope geometry\n#> 1708 E02002384 Leeds 055 0.458721 2.856563 -1.546463, 53.809517\n#> 1712 E02002382 Leeds 053 0.438144 2.284782 -1.511861, 53.811611\n#> 1805 E02002393 Leeds 064 0.408759 2.361707 -1.524205, 53.804098\n```\n\nThese datasets can be combined as follows:\n\n``` r\ntravel_network <- od2line(flow = od_data_sample, zones = cents_sf)\nw <- od_data_sample$all / max(od_data_sample$all) * 10 # scale line widths by flow\nplot(travel_network, lwd = w)\n```\n\n\n\n**stplanr** has many functions for working with OD data. 
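\n\nFor example, because the desire lines created above are standard `sf` objects, generic spatial\noperations apply to them; the following sketch (an illustration, not from the package\ndocumentation) measures line length to flag short trips with cycling potential:\n\n``` r\n# travel_network was created above with od2line(); st_length() returns\n# metres for unprojected coordinates.\nlibrary(sf)\ntravel_network$length_m <- as.numeric(st_length(travel_network))\nshort_trips <- travel_network[travel_network$length_m < 5000, ] # under 5 km\nnrow(short_trips)\n```\n\n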
See the\n[`stplanr-od`](https://docs.ropensci.org/stplanr/articles/stplanr-od.html)\nvignette for details.\n\nThe package can also allocate flows to the road network, e.g.\xc2\xa0with\n[CycleStreets.net](https://www.cyclestreets.net/api/) and the\nOpenStreetMap Routing Machine\n([OSRM](https://github.com/Project-OSRM/osrm-backend)) API interfaces.\nThese are supported in `route_*()` functions such as\n`route_cyclestreets` and `route_osrm()`:\n\nRouting can be done using a range of back-ends and using lat/lon or\ndesire line inputs with the `route()` function, as illustrated by the\nfollowing command, which calculates the route between Fleet Street and\nSouthwark Street over the River Thames on Blackfriars Bridge in London:\n\n``` r\nlibrary(osrm)\n#> Data: (c) OpenStreetMap contributors, ODbL 1.0 - http://www.openstreetmap.org/copyright\n#> Routing: OSRM - http://project-osrm.org/\ntrip <- route(\n from = c(-0.11, 51.514),\n to = c(-0.10, 51.506),\n route_fun = osrmRoute,\n returnclass = ""sf""\n )\n#> Warning: ""returnclass"" is deprecated.\n#> Most common output is sf\nplot(trip)\n```\n\n\n\nYou can also use place names, found using the Google Map API:\n\n``` r\ntrip2 <- route(\n from = ""Leeds"",\n to = ""Bradford"",\n route_fun = osrmRoute,\n returnclass = ""sf""\n )\n#> Warning: ""returnclass"" is deprecated.\n#> Most common output is sf\nplot(trip2)\n```\n\n\n\nWe can replicate this call multiple times with the `l` argument in\n`route()`:\n\n``` r\ndesire_lines <- travel_network[2:6, ]\n```\n\nNext, we\xe2\x80\x99ll calculate the routes:\n\n``` r\nroutes <- route(\n l = desire_lines,\n route_fun = osrmRoute,\n returnclass = ""sf""\n )\n#> Warning: ""returnclass"" is deprecated.\n\n#> Warning: ""returnclass"" is deprecated.\n\n#> Warning: ""returnclass"" is deprecated.\n\n#> Warning: ""returnclass"" is deprecated.\n\n#> Warning: ""returnclass"" is deprecated.\nplot(sf::st_geometry(routes))\nplot(desire_lines, col = ""red"", add = TRUE)\n#> Warning in plot.sf(desire_lines, col = ""red"", add = TRUE): ignoring all but the\n#> first attribute\n```\n\n\n\n\n\nFor more examples, run `example(""route"")`.\n\n`overline()` takes a series of route-allocated lines, splits them into\nunique segments and aggregates the values of overlapping lines. This can\nrepresent where there will be most traffic on the transport system, as\ndemonstrated in the following code chunk.\n\n``` r\nroutes$foot <- desire_lines$foot\nrnet <- overline(routes, attrib = ""foot"")\n```\n\nThe resulting route network, with segment totals calculated from\noverlapping parts for the routes for walking, can be visualised as\nfollows:\n\n``` r\nplot(rnet[""foot""], lwd = rnet$foot)\n```\n\n\n\nThe above plot represents the number of walking trips made (the \xe2\x80\x98flow\xe2\x80\x99)\nalong particular segments of a transport network.\n\n\n\n## Policy applications\n\nThe examples shown above, based on tiny demonstration datasets, may not\nseem particularly revolutionary. 
At the city scale, however, this type\nof analysis can be used to inform sustainable transport policies, as\ndescribed in papers [describing the Propensity to Cycle\nTool](https://www.jtlu.org/index.php/jtlu/article/view/862/859) (PCT),\nand its [application to calculate cycling to school\npotential](https://doi.org/10.1016/j.jth.2019.01.008) across England.\n\nResults generated by **stplanr** are now part of national government\npolicy: the PCT is the recommended tool for local and regional\nauthorities developing strategic cycle networks under the Cycling and\nWalking Infrastructure Strategy\n([CWIS](https://www.gov.uk/government/publications/cycling-and-walking-investment-strategy)),\nwhich is part of the Infrastructure Act\n[2015](https://www.legislation.gov.uk/ukpga/2015/7/contents/enacted).\n**stplanr** is helping dozens of local authorities across the UK to\nanswer the question: where to prioritise investment in cycling? In\nessence, stplanr was designed to support sustainable transport policies.\n\nThere are many other research and policy questions that functions in\n**stplanr**, and other open source software libraries and packages, can\nhelp answer. At a time of climate, health and social crises, it is\nimportant that technology is not only sustainable itself (e.g.\xc2\xa0as\nenabled by open source communities and licenses) but that it contributes\nto a sustainable future.\n\n## Installation\n\nTo install the stable version, use:\n\n``` r\ninstall.packages(""stplanr"")\n```\n\nThe development version can be installed using **devtools**:\n\n``` r\n# install.packages(""devtools"") # if not already installed\ndevtools::install_github(""ropensci/stplanr"")\nlibrary(stplanr)\n```\n\n### Installing stplanr on Linux and Mac\n\n**stplanr** depends on **sf**. Installation instructions for Mac, Ubuntu\nand other Linux distros can be found here:\n\n\n## Functions, help and contributing\n\nThe current list of available functions can be seen on the package\xe2\x80\x99s\nwebsite at\n[docs.ropensci.org/stplanr/](https://docs.ropensci.org/stplanr/), or\nwith the following command:\n\n``` r\nlsf.str(""package:stplanr"", all = TRUE)\n```\n\nTo get internal help on a specific function, use the standard way.\n\n``` r\n?od2line\n```\n\nTo contribute, report bugs or request features, see the [issue\ntracker](https://github.com/ropensci/stplanr/issues).\n\n## Further resources / tutorials\n\nWant to learn how to use open source software for reproducible\nsustainable transport planning work? 
Now is a great time to learn.\nTransport planning is a relatively new field of application in R.\nHowever, there are already some good resources on the topic, including\n(further suggestions welcome):\n\n- The Transport chapter of *Geocomputation with R*, which provides a\n broad introduction from a geographic data perspective:\n \n- The **stplanr** paper, which describes the context in which the\n package was developed:\n \n (please cite this if you use **stplanr** in your work)\n- The `dodgr` vignette, which provides an introduction to routing in R:\n \n\n## Meta\n\n- Please report issues, feature requests and questions to the [github\n issue tracker](https://github.com/ropensci/stplanr/issues)\n- License: MIT\n- Get citation information for **stplanr** in R by running\n `citation(package = \'stplanr\')`\n- This project is released with a [Contributor Code of\n Conduct](https://github.com/ropensci/stplanr/blob/master/CONDUCT.md).\n By participating in this project you agree to abide by its terms.\n\n[![rofooter](https://ropensci.org/public_images/github_footer.png)](https://ropensci.org)\n'",",https://doi.org/10.1016/j.jth.2019.01.008","2015/01/30, 08:34:49",3190,CUSTOM,77,2056,"2023/10/05, 11:29:29",24,283,515,48,20,2,0.5,0.15284827975183302,"2023/09/15, 08:54:09",v1.1.2,0,27,false,,false,false,,,https://github.com/ropensci,https://ropensci.org/,"Berkeley, CA",,,https://avatars.githubusercontent.com/u/1200269?v=4,,, CO2MPAS-TA,CO2MPAS is a backward-looking longitudinal-dynamics CO2 and fuel-consumption simulator for light-duty vehicles.,JRCSTU,https://github.com/JRCSTU/CO2MPAS-TA.git,github,"co2,fuel-consumption,vehicle,automotive,wltp,nedc,eu,jrc,simulator",Mobility and Transportation,"2023/01/30, 12:35:33",22,5,1,true,Python,"STU-IET, JRC-EC",JRCSTU,"Python,Dockerfile,Shell,Smarty",https://co2mpas.readthedocs.io/,"b"".. image:: doc/_static/image/banner.png\n :width: 100%\n\n.. _start-info:\n\n######################################################################\n|co2mpas|: Vehicle simulator predicting NEDC |CO2| emissions from WLTP\n######################################################################\n:release: 4.3.4\n:rel_date: 2022-11-09 15:45:00\n:home: http://co2mpas.readthedocs.io/\n:repository: https://github.com/JRCSTU/CO2MPAS-TA\n:pypi-repo: https://pypi.org/project/co2mpas/\n:keywords: |CO2|, fuel-consumption, WLTP, NEDC, vehicle, automotive,\n EU, JRC, IET, STU, correlation, back-translation, policy,\n monitoring, M1, N1, simulator, engineering, scientific\n:mail box: \n:team: .. include:: AUTHORS.rst\n:copyright: 2015-2023 European Commission (`JRC `_)\n:license: `EUPL 1.1+ `_\n\n.. _end-info:\n.. _start-intro:\n\nWhat is |co2mpas|?\n==================\n|co2mpas| is a backward-looking longitudinal-dynamics |CO2| and fuel-consumption\nsimulator for light-duty M1 & N1 vehicles (cars and vans), specially crafted to\n*estimate the CO2 emissions of vehicles undergoing NEDC* testing based on the\nemissions produced during *WLTP testing* at :term:`type-approval`, according to\nthe :term:`EU legislation`\\s *1152/EUR/2017 and 1153/EUR/2017* (see `History`_\nsection, below).\n\nIt is an open-source project\n(`EUPL 1.1+ `_) developed for\nPython-3.6+. 
It runs either as a *console command* or as a\n*desktop GUI application*, and it uses Excel-files or pure python structures\n(dictionary and lists) for its input & output data.\n\nHistory\n-------\nThe *European Commission* has introduced the *WLTP* as the test procedure for\nthe type I test of the European type-approval of Light-duty vehicles as of\nSeptember 2017. Its introduction has required the adaptation of |CO2|\ncertification and monitoring procedures set by European regulations (443/2009,\n510/2011, 1152/EUR/2017 and 1153/EUR/2017). European Commission\xe2\x80\x99s *Joint\nResearch Centre* (JRC) has been assigned the development of this vehicle\nsimulator to facilitate this adaptation.\n\nThe European Regulation setting the conditions for using |co2mpas| can be\nfound in `the Comitology Register\n`_\nafter its adoption by the *Climate Change Committee* which took place on\nJune 23, 2016, and its 2nd vote for modifications, in April 27, 2017.\n\n.. _end-intro:\n.. _start-install:\n\nInstallation\n============\n.. _start-install-dev:\n\nTo install |co2mpas| use (with root privileges):\n\n.. code-block:: console\n\n $ pip install co2mpas\n\nOr download the latest git version and use (with root privileges):\n\n.. code-block:: console\n\n $ python setup.py install\n\n\nInstall extras\n^^^^^^^^^^^^^^\nSome additional functionality is enabled installing the following extras:\n\n- ``cli``: enables the command line interface.\n- ``sync``: enables the time series synchronization tool (i.e.,\n `syncing `_ previously named\n ``datasync``).\n- ``gui``: enables the graphical user interface.\n- ``plot``: enables to plot the |co2mpas| model and the workflow of each run.\n- ``io``: enables to read/write excel files.\n- ``driver``: enables the driver model (currently is not available).\n\nTo install co2mpas and all extras, do:\n\n.. code-block:: console\n\n $ pip install 'co2mpas[all]'\n\n.. _end-install-dev:\n.. _end-install:\n.. _start-quick:\n\nQuick Start\n===========\nThe following steps are basic commands to get familiar with |co2mpas| procedural\nworkflow using the command line interface:\n\n- `Run`_\n- `Input file`_\n- `Data synchronization`_\n\nRun\n---\nTo run |co2mpas| with some sample data, you have to:\n\n1. Generate some demo files inside the ``./input`` folder, to get familiar with\n the input data (for more info check\n the `link <_build/co2mpas/co2mpas.cli.html#co2mpas-demo>`__)::\n\n ## Generate the demo files and open a demo file.\n $ co2mpas demo ./input\n $ start ./input/co2mpas_conventional.xlsx\n\n2. Run |co2mpas| and inspect the results in the ``./output`` folder.\n The workflow is plotted on the browser (for more info check the\n `link <_build/co2mpas/co2mpas.cli.html#co2mpas-run>`__)::\n\n ## Run co2mpas and open the output folder.\n $ co2mpas run ./input/co2mpas_conventional.xlsx -O ./output -PL\n $ start ./output\n\n.. image:: _static/image/output_workflow.png\n :width: 100%\n :alt: Output workflow\n :align: center\n\nInput file\n----------\nTo create an input file with your data, you have to:\n\n1. Generate an empty input template file (i.e., ``vehicle.xlsx``) inside\n the ``./input`` folder::\n\n ## Generate template file.\n $ co2mpas template ./input/vehicle.xlsx -TT input\n\n2. Follow the instructions provided in the excel file to fill the required\n inputs::\n\n ## Open the input template.\n $ start ./input/vehicle.xlsx\n\n.. 
image:: _static/image/input_template.png\n :width: 100%\n :alt: Input template\n :align: center\n\nData synchronization\n--------------------\nTo synchronize the `dyno` and `OBD` data with the theoretical cycle, you have\nto:\n\n1. Generate a `synchronization template` file ``wltp.xlsx``::\n\n ## Generate template file.\n $ co2mpas syncing template ./to_sync/wltp.xlsx -CT wltp -WC class3b -GB automatic\n\n .. note::\n With the command above, the file contains the theoretical ``WLTP``\n velocity profile for an ``automatic`` vehicle of ``class3b``. For more\n info type ``co2mpas syncing template -h`` or click the\n `link <_build/co2mpas/co2mpas.cli.html#co2mpas-syncing-template>`__\n2. Fill the ``dyno`` and ``obd`` sheets with the relative data collected in the\n laboratory::\n\n ## Open the input template.\n $ start ./to_sync/wltp.xlsx\n\n3. Synchronize the data with the theoretical velocity profile::\n\n $ co2mpas syncing sync ./to_sync/wltp.xlsx ./sync/wltp.sync.xlsx\n\n4. Copy/Paste the synchronized data (``wltp.sync.xlsx``) contained in the\n ``synced`` sheet into the relative sheet of the input template::\n\n ## Open the synchronized data.\n $ start ./sync/wltp.sync.xlsx\n\n.. _end-quick:\n.. _start-sub:\n.. |co2mpas| replace:: CO\\ :sub:`2`\\ MPAS\n.. |CO2| replace:: CO\\ :sub:`2`\n.. _end-sub:\n""",,"2016/09/15, 12:56:52",2596,EUPL-1.1,8,5017,"2021/03/12, 09:36:53",31,0,20,0,957,0,0,0.4929720920757792,"2019/11/08, 19:35:43",v4.1.10,0,10,false,,false,true,"narest-qa/repo72,JRCSTU/co2wui,saikashyap6433/Fuel-Estimator-For-Vehicles,stefanocorsi/co2wui,JRCSTU/CO2MPAS-TA",,https://github.com/JRCSTU,https://ec.europa.eu/jrc/en/about/institutes-and-directorates/jrc-iet,"Ispra (VA), Italy",,,https://avatars.githubusercontent.com/u/13638890?v=4,,, wltp,Generate WLTC gear-shifts based on vehicle characteristics.,JRCSTU,https://github.com/JRCSTU/wltp.git,github,"python,vehicle,emissions,simulator,wltp,nedc,fuel-consumption,engine,driving,unece",Mobility and Transportation,"2021/10/08, 09:11:32",11,7,1,false,VBScript,"STU-IET, JRC-EC",JRCSTU,"VBScript,Python,M,Jupyter Notebook,MATLAB,VBA,Shell,PowerShell,Visual Basic .NET",https://wltp.readthedocs.org,"b'################################################################\nwltp: generate WLTC gear-shifts based on vehicle characteristics\n################################################################\n:versions: |pypi-version| |conda-version| |gh-version| |proj-version| |rel-date|\n (build-version: |release|, build-date: |today|)\n |dev-status| |python-ver| |conda-plat|\n:documentation: https://wltp.readthedocs.org/ |br|\n |docs-status|\n:live-demo: |binder| |binder-dev|\n:sources: https://github.com/JRCSTU/wltp |br|\n |travis-status| |appveyor-status| |downloads-count| |codestyle| |br|\n |gh-watch| |gh-star| |gh-fork| |gh-issues|\n:keywords: UNECE, automotive, car, cars, driving, engine, emissions, fuel-consumption,\n gears, gearshifts, rpm, simulation, simulator, standard, vehicle, vehicles, WLTC, NEDC\n:copyright: 2013-2020 European Commission (`JRC-IET `_) |br|\n |proj-lic|\n\nA python-3.6+ package to generate the *gear-shifts* of Light-duty vehicles\nrunning the :term:`WLTP` driving-cycles, according to :term:`UNECE`\'s :term:`GTR`\\s.\n\n.. figure:: docs/_static/wltc_class3b.png\n :align: center\n\n **Figure 1:** :ref:`annex-2:cycles` for class-3b Vehicles\n\n\n.. 
Attention::\n This *wltp* python project is still in *alpha* stage, in the sense that\n its results are not ""correct"" by the standard, and no WLTP dyno-tests should rely\n currently on them.\n\n Some of the known limitations are described in these places:\n\n * In the :doc:`CHANGES`.\n * Compare results with AccDB in ``Notebooks/CarsDB-compare.ipynb`` notebook;\n launch your private *demo-server* (|binder|) to view it.\n\n.. _end-opening:\n.. contents:: Table of Contents\n :backlinks: top\n.. _begin-intro:\n\nIntroduction\n============\n\nOverview\n--------\nThe calculator accepts as input the vehicle\'s technical data, along with parameters for modifying the execution\nof the :term:`WLTC` cycle, and it then spits-out the gear-shifts of the vehicle, the attained speed-profile,\nand any warnings. It does not calculate any |CO2| emissions.\n\n\nAn ""execution"" or a ""run"" of an experiment is depicted in the following diagram::\n\n .-----------------. .------------------.\n : Input : : Output :\n ;-----------------; ;------------------;\n ; +--test_mass ; ____________ ; +--pmr ;\n ; +--n_idle ; | | ; +--wltc_class ;\n ; +--f0,f1,f2 ; ==> | Cycle | ==> ; +--... ;\n ; +--wot/ ; | Generator | ; +--cycle ;\n ; +-- ; |____________| ; | +-- ;\n ; +--n2vs ; ; +--gwots ;\n ; +-- ; ; +-- ;\n \'-----------------\' \'------------------\'\n\nThe *Input*, *Output* and all its contents are instances of :term:`datamodel`\n(trees of strings, numbers & pandas objects)\n\n\nQuick-start\n-----------\n- Launch the example *jupyter notebooks* in a private *demo server* (|binder|).\n- Otherwise, install it locally, preferably from the sources (instructions below).\n- ``pip install`` :abbr:`""extras"" (e.g. pip install wltp[all])`:\n\n - ``plot, excel, all, dev, notebook, test, doc``\n\nPrerequisites:\n^^^^^^^^^^^^^^\n**Python-3.6+** is required and **Python-3.7** or **Python-3.8** recommended.\nIt requires **numpy/scipy** and **pandas** libraries with native backends.\n\n.. Tip::\n On *Windows*, it is preferable to use the `miniconda `_\n distribution; although its `conda` command adds another layer of complexity on top of ``pip``,\n unlike standard Python, it has pre-built all native libraries required\n (e.g. **numpy/scipy** and **pandas**).\n\n If nevertheless you choose the *standard Python*, and some packages fail to build when `pip`-installing them,\n download these packages from `Gohlke\'s ""Unofficial Windows Binaries""\n `_ and install them manually with::\n\n pip install \n\nDownload:\n^^^^^^^^^\nDownload the sources,\n\n- either with *git*, by giving this command to the terminal::\n\n git clone https://github.com/JRCSTU/wltp/ --depth=1\n\n- or download and extract the project-archive from the release page:\n https://github.com/JRCSTU/wltp/archive/v1.1.0.dev0.zip\n\n\nInstall:\n^^^^^^^^\nFrom within the project directory, run one of these commands to install it:\n\n- for standard python, installing with ``pip`` is enough (but might)::\n\n pip install -e .[test]\n\n- for *conda*, prefer to install the conda-packages listed in :file:`Notebooks/conda/conda-reqs.txt`,\n before running the same `pip` command, like this::\n\n conda install --override-channels -c ankostis -c conda-forge -c defaults --file Notebooks/conda/conda-reqs.txt\n pip install -e .[dev]\n\n\n- Check installation:\n\n .. 
code-block:: bash\n\n $ wltp --version\n ...\n\n $ wltp --help\n ...\n\n See: :ref:`wltp-usage`\n\n- Recreate jupyter notebooks from the paired ``*.py`` ""py:percent"" files\n (only these files are stored in git-repo),\n by executing the bash-script::\n\n Notebooks/recreate_ipynbs.sh\n\n- Run pyalgo on all AccDB cars to re-create the H5 file\n needed for ``CarsDB-compare`` notebook, etc::\n\n Notebooks/recreate_pyalgo_h5.sh\n\n\nUsage:\n^^^^^^\n.. code-block:: python\n\n import pandas as pd\n from wltp import datamodel\n from wltp.experiment import Experiment\n\n inp_mdl = datamodel.get_model_base()\n inp_mdl.update({\n ""unladen_mass"": None,\n ""test_mass"": 1100, # in kg\n ""p_rated"": 95.3, # in kW\n ""n_rated"": 3000, # in RPM\n ""n_idle"": 600,\n ""n2v_ratios"": [122.88, 75.12, 50.06, 38.26, 33.63],\n\n ## For giving absolute P numbers,\n # rename `p_norm` column to `p`.\n #\n ""wot"": pd.DataFrame(\n [[600, 0.1],\n [2500, 1],\n [3500, 1],\n [5000, 0.7]], columns=[""n"", ""p_norm""]\n ),\n \'f0\': 395.78,\n \'f1\': 0,\n \'f2\': 0.15,\n })\n datamodel.validate_model(inp_mdl, additional_properties=True)\n exp = Experiment(inp_mdl, skip_model_validation=True)\n\n # exp = Experiment(inp_mdl)\n out_mdl = exp.run()\n print(f""Available values: \\n{list(out_mdl.keys())}"")\n print(f""Cycle: \\n{out_mdl[\'cycle\']}"")\n\nSee: :ref:`python-usage`\n\n\n\nProject files and folders\n-------------------------\nThe files and folders of the project are listed below (see also :ref:`architecture:Architecture`)::\n\n +--bin/ # (shell-scripts) Utilities & preprocessing of WLTC data on GTR and the wltp_db\n | +--bumpver.py # (script) Update project\'s version-string\n +--wltp/ # (package) python-code of the calculator\n | +--cycles/ # (package) code & data for the WLTC data\n | +--experiment # top-level code running the algo\n | +--datamodel # schemas & defaults for data of algo\n | +--cycler # code for generating the cycle\n | +--engine # formulae for engine power & revolutions and gear-box\n | +--vehicle # formulae for cycle/vehicle dynamics\n | +--vmax # formulae estimating `v_max` from wot\n | +--downscale # formulae downscaling cycles based on pmr/test_mass ratio\n | +--invariants # definitions & idempotent formulae for physics/engineering\n | +--io # utilities for starting-up, parsing, naming and spitting data\n | +--utils # software utils unrelated to physics or engineering\n | +--cli # (OUTDATED) command-line entry-point for launching this wltp tool\n | +--plots # (OUTDATED) code for plotting diagrams related to wltp cycles & results\n | +--idgears # (OUTDATED) reconstructs the gears-profile by identifying the actual gears\n +--tests/ # (package) Test-TestCases\n +--vehdb # Utils for manipulating h5db with accdb & pyalgo cases.\n +--docs/ # (folder) documentation\n | +--pyplots/ # (DEPRECATED by notebooks) scripts plotting the metric diagrams embedded in the README\n +--Notebooks/ # Jupyter notebooks for running & comparing results (see `Notebooks/README.md`)\n +--AccDB_src/ # AccDB code & queries extracted and stored as text\n +--setup.py # (script) The entry point for `setuptools`, installing, testing, etc\n +--requirements/ # (txt-files) Various pip-dependencies for tools.\n +--README.rst\n +--CHANGES.rst\n +--LICENSE.txt\n\n\n\n.. _wltp-usage:\n\nUsage\n=====\n.. _python-usage:\n\nPython usage\n------------\nFirst run :command:`python` or :command:`ipython` :abbr:`REPL (Read-Eval-Print Loop)` and\ntry to import the project to check its version:\n\n.. 
doctest::\n\n >>> import wltp\n\n >>> wltp.__version__ ## Check version once more.\n \'1.1.0.dev0\'\n\n >>> wltp.__file__ ## To check where it was installed. # doctest: +SKIP\n /usr/local/lib/site-package/wltp-...\n\n\n.. Tip::\n The use of :command:`ipython` is preferred over :command:`python` since it offers various user-friendly\n facilities, such as pressing :kbd:`Tab` for completions, or allowing you to suffix commands with ``?`` or ``??``\n to get help and read their source-code.\n\n Additionally you can copy any python commands starting with ``>>>`` and ``...`` and paste them directly\n into the ipython interpreter; it will remove these prefixes.\n But in :command:`python` you have to remove them yourself.\n\nIf everything works, create the :term:`datamodel` of the experiment.\nYou can assemble the model-tree by the use of:\n\n* sequences,\n* dictionaries,\n* :class:`pandas.DataFrame`,\n* :class:`pandas.Series`, and\n* URI-references to other model-trees.\n\n\nFor instance:\n\n.. doctest::\n\n >>> from wltp import datamodel\n >>> from wltp.experiment import Experiment\n\n >>> mdl = {\n ... ""unladen_mass"": 1430,\n ... ""test_mass"": 1500,\n ... ""v_max"": 195,\n ... ""p_rated"": 100,\n ... ""n_rated"": 5450,\n ... ""n_idle"": 950,\n ... ""n_min"": None, ## Manufacturers may override it\n ... ""n2v_ratios"": [120.5, 75, 50, 43, 37, 32],\n ... ""f0"": 100,\n ... ""f1"": 0.5,\n ... ""f2"": 0.04,\n ... }\n >>> mdl = datamodel.upd_default_load_curve(mdl) ## need some WOT\n\n\nFor information on the accepted model-data, check the :ref:`code:Schemas`:\n\n.. doctest::\n\n >>> from wltp import utils\n >>> utils.yaml_dumps(datamodel.model_schema(), indent=2) # doctest: +SKIP\n $schema: http://json-schema.org/draft-07/schema#\n $id: /wltc\n title: WLTC data\n type: object\n additionalProperties: false\n required:\n - classes\n properties:\n classes:\n ...\n\n\nYou then have to feed this model-tree to the :class:`~wltp.experiment.Experiment`\nconstructor. Internally the :class:`pandalone.pandel.Pandel` resolves URIs, fills-in default values and\nvalidates the data based on the project\'s pre-defined :term:`JSON-schema`:\n\n.. doctest::\n\n >>> processor = Experiment(mdl) ## Fills-in defaults and Validates model.\n\n\nAssuming validation passes without errors, you can now inspect the defaulted-model\nbefore running the experiment:\n\n.. doctest::\n\n >>> mdl = processor.model ## Returns the validated model with filled-in defaults.\n >>> sorted(mdl) ## The ""defaulted"" model now includes the `params` branch.\n [\'driver_mass\', \'f0\', \'f1\', \'f2\', \'f_dsc_decimals\', \'f_dsc_threshold\', \'f_inertial\',\n \'f_n_clutch_gear2\', \'f_n_min\', \'f_n_min_gear2\', \'f_running_threshold\', \'f_safety_margin\',\n \'f_up_threshold\', \'n2v_ratios\', \'n_idle\', \'n_min_drive1\', \'n_min_drive2\', \'n_min_drive2_stopdecel\',\n \'n_min_drive2_up\', \'n_min_drive_down\', \'n_min_drive_down_start\', \'n_min_drive_set\',\n \'n_min_drive_up\', \'n_min_drive_up_start\', \'n_rated\', \'p_rated\', \'t_cold_end\', \'test_mass\',\n \'unladen_mass\', \'v_cap\', \'v_max\', \'v_stopped_threshold\', \'wltc_data\', \'wot\']\n\nNow you can run the experiment:\n\n.. 
doctest::\n\n >>> mdl = processor.run() ## Runs experiment and augments the model with results.\n >>> sorted(mdl) ## Print the top-branches of the ""augmented"" model.\n [`cycle`, \'driver_mass\', \'f0\', \'f1\', \'f2\', `f_dsc`, \'f_dsc_decimals\', `f_dsc_raw`,\n \'f_dsc_threshold\', \'f_inertial\', \'f_n_clutch_gear2\', \'f_n_min\', \'f_n_min_gear2\',\n \'f_running_threshold\', \'f_safety_margin\', \'f_up_threshold\', `g_vmax`, `is_n_lim_vmax`,\n \'n2v_ratios\', `n95_high`, `n95_low`, \'n_idle\', `n_max`, `n_max1`, `n_max2`, `n_max3`,\n \'n_min_drive1\', \'n_min_drive2\', \'n_min_drive2_stopdecel\', \'n_min_drive2_up\', \'n_min_drive_down\',\n \'n_min_drive_down_start\', \'n_min_drive_set\', \'n_min_drive_up\', \'n_min_drive_up_start\',\n \'n_rated\', `n_vmax`, \'p_rated\', `pmr`, \'t_cold_end\', \'test_mass\', \'unladen_mass\', \'v_cap\',\n \'v_max\', \'v_stopped_threshold\', `wltc_class`, \'wltc_data\', \'wot\', `wots_vmax`]\n\nTo access the time-based cycle-results it is better to use a :class:`pandas.DataFrame`:\n\n.. doctest::\n\n >>> import pandas as pd, wltp.cycler as cycler, wltp.io as wio\n >>> df = pd.DataFrame(mdl[\'cycle\']); df.index.name = \'t\'\n >>> df.shape ## ROWS(time-steps) X COLUMNS.\n (1801, 107)\n >>> wio.flatten_columns(df.columns)\n [\'t\', \'V_cycle\', \'v_target\', \'a\', \'phase_1\', \'phase_2\', \'phase_3\', \'phase_4\', \'accel_raw\',\n \'run\', \'stop\', \'accel\', \'cruise\', \'decel\', \'initaccel\', \'stopdecel\', \'up\', \'p_inert\', \'n/g1\',\n \'n/g2\', \'n/g3\', \'n/g4\', \'n/g5\', \'n/g6\', \'n_norm/g1\', \'n_norm/g2\', \'n_norm/g3\', \'n_norm/g4\',\n \'n_norm/g5\', \'n_norm/g6\', \'p/g1\', \'p/g2\', \'p/g3\', \'p/g4\', \'p/g5\', \'p/g6\', \'p_avail/g1\',\n \'p_avail/g2\', \'p_avail/g3\', \'p_avail/g4\', \'p_avail/g5\', \'p_avail/g6\', \'p_avail_stable/g1\',\n \'p_avail_stable/g2\', \'p_avail_stable/g3\', \'p_avail_stable/g4\', \'p_avail_stable/g5\',\n \'p_avail_stable/g6\', \'p_norm/g1\', \'p_norm/g2\', \'p_norm/g3\', \'p_norm/g4\', \'p_norm/g5\',\n \'p_norm/g6\', \'p_resist\', \'p_req\', \'P_remain/g1\', \'P_remain/g2\', \'P_remain/g3\',\n \'P_remain/g4\', \'P_remain/g5\', \'P_remain/g6\', \'ok_p/g3\', \'ok_p/g4\', \'ok_p/g5\', \'ok_p/g6\',\n \'ok_gear0/g0\', \'ok_max_n/g1\', \'ok_max_n/g2\', \'ok_max_n/g3\', \'ok_max_n/g4\', \'ok_max_n/g5\',\n \'ok_max_n/g6\', \'ok_min_n_g1/g1\', \'ok_min_n_g1_initaccel/g1\', \'ok_min_n_g2/g2\',\n \'ok_min_n_g2_stopdecel/g2\', \'ok_min_n_g3plus_dns/g3\', \'ok_min_n_g3plus_dns/g4\',\n \'ok_min_n_g3plus_dns/g5\', \'ok_min_n_g3plus_dns/g6\', \'ok_min_n_g3plus_ups/g3\',\n \'ok_min_n_g3plus_ups/g4\', \'ok_min_n_g3plus_ups/g5\', \'ok_min_n_g3plus_ups/g6\', \'ok_n/g1\',\n \'ok_n/g2\', \'ok_n/g3\', \'ok_n/g4\', \'ok_n/g5\', \'ok_n/g6\', \'ok_gear/g0\', \'ok_gear/g1\',\n \'ok_gear/g2\', \'ok_gear/g3\', \'ok_gear/g4\', \'ok_gear/g5\', \'ok_gear/g6\', \'G_scala/g0\', \'G_scala/g1\',\n \'G_scala/g2\', \'G_scala/g3\', \'G_scala/g4\', \'G_scala/g5\', \'G_scala/g6\', \'g_min\', \'g_max0\']\n >>> \'Mean engine_speed: %s\' % df.n.mean() # doctest: +SKIP\n \'Mean engine_speed: 1908.9266796224322\'\n >>> df.describe() # doctest: +SKIP\n v_class v_target ... rpm_norm v_real\n count 1801.000000 1801.000000 ... 1801.000000 1801.000000\n mean 46.361410 46.361410 ... 0.209621 50.235126\n std 36.107745 36.107745 ... 0.192395 32.317776\n min 0.000000 0.000000 ... -0.205756 0.200000\n 25% 17.700000 17.700000 ... 0.083889 28.100000\n 50% 41.300000 41.300000 ... 0.167778 41.300000\n 75% 69.100000 69.100000 ... 
0.285556 69.100000\n max 131.300000 131.300000 ... 0.722578 131.300000\n \n [8 rows x 10 columns]\n\n >>> processor.driveability_report() # doctest: +SKIP\n ...\n 12: (a: X-->0)\n 13: g1: Revolutions too low!\n 14: g1: Revolutions too low!\n ...\n 30: (b2(2): 5-->4)\n ...\n 38: (c1: 4-->3)\n 39: (c1: 4-->3)\n 40: Rule e or g missed downshift(40: 4-->3) in acceleration?\n ...\n 42: Rule e or g missed downshift(42: 3-->2) in acceleration?\n ...\n\nYou can export the cycle-run results in a CSV-file with the following pandas command:\n\n.. code-block:: pycon\n\n >>> df.to_csv(\'cycle.csv\') # doctest: +SKIP\n\n\nFor more examples, download the sources and check the test-cases\nfound under the :file:`/tests/` folder.\n\n.. _cmd-line-usage:\n\nCmd-line usage\n--------------\n.. Warning:: Not implemented yet.\n\nThe command-line usage below requires the Python environment to be installed, and provides for\nexecuting an experiment directly from the OS\'s shell (i.e. :program:`cmd` in windows or :program:`bash` in POSIX),\nand in a *single* command. To have precise control over the inputs and outputs\n(i.e. experiments in a ""batch"" and/or in a design of experiments)\nyou have to run the experiments using the python API, as explained above.\n\n\nThe entry-point script is called :program:`wltp`, and it must have been placed in your :envvar:`PATH`\nduring installation. This script can construct a *model* by reading input-data\nfrom multiple files and/or overriding specific single-value items. Conversely,\nit can output multiple parts of the resulting-model into files.\n\nTo get help for this script, use the following commands:\n\n.. code-block:: bash\n\n $ wltp --help ## to get generic help for cmd-line syntax\n $ wltcmdp.py -M vehicle/full_load_curve ## to get help for specific model-paths\n\n\nand then, assuming ``vehicle.csv`` is a CSV file with the vehicle parameters\nfor which you want to override the ``n_idle`` only, run the following:\n\n.. code-block:: bash\n\n $ wltp -v \\\n -I vehicle.csv file_frmt=SERIES model_path=params header@=None \\\n -m vehicle/n_idle:=850 \\\n -O cycle.csv model_path=cycle\n\n\n.. _excel-usage:\n\nExcel usage\n-----------\n.. Attention:: OUTDATED!!! Excel-integration requires Python 3 and *Windows* or *OS X*!\n\nIn *Windows* and *OS X* you may utilize the excellent `xlwings `_ library\nto use Excel files for providing input and output to the experiment.\n\nTo create the necessary template-files in your current-directory you should enter:\n\n.. code-block:: shell\n\n $ wltp --excel\n\n\nYou could type instead :samp:`wltp --excel {file_path}` to specify a different destination path.\n\nIn *windows*/*OS X* you can type :samp:`wltp --excelrun` and the files will be created in your home-directory\nand the excel will open them in one-shot.\n\nAll the above commands create two files:\n\n:file:`wltp_excel_runner.xlsm`\n The python-enabled excel-file where input and output data are written, as seen in the screenshot below:\n\n .. 
image:: docs/xlwings_screenshot.png\n :scale: 50%\n :alt: Screenshot of the `wltp_excel_runner.xlsm` file.\n\n After opening it the first time, enable the macros on the workbook, select the python-code at the left and click\n the :menuselection:`Run Selection as Python` button; one sheet per vehicle should be created.\n\n The excel-file additionally contains appropriate *VBA* modules allowing you to invoke *Python code*\n present in *selected cells* with a click of a button, and python-functions declared in the python-script, below,\n using the ``mypy`` namespace.\n\n To add more input-columns, you need to set as column *Headers* the *json-pointers* path of the desired\n model item (see :ref:`python-usage` below).\n\n:file:`wltp_excel_runner.py`\n Utility python functions used by the above xls-file for running a batch of experiments.\n\n The particular functions included read multiple vehicles from the input table with various\n vehicle characteristics and/or experiment parameters, and then add a new worksheet containing\n the cycle-run of each vehicle.\n Of course you can edit it to further fit your needs.\n\n\n.. Note:: You may reverse the procedure described above and run the python-script instead.\n The script will open the excel-file, run the experiments and add the new sheets, but in case any errors occur,\n this time you can debug them, if you had executed the script through *LiClipse*, or *IPython*!\n\nSome general notes regarding the python-code from excel-cells:\n\n* On each invocation, the predefined VBA module ``pandalon`` executes a dynamically generated python-script file\n in the same folder where the excel-file resides, which, among others, imports the ""sister"" python-script file.\n You can read & modify the sister python-script to import libraries such as \'numpy\' and \'pandas\',\n or pre-define utility python functions.\n* The name of the sister python-script is automatically calculated from the name of the Excel-file,\n and it must be valid as a python module-name. Therefore do not use non-alphanumeric characters such as\n spaces(`` ``), dashes(``-``) and dots(``.``) on the Excel-file.\n* On errors, a log-file is written in the same folder where the excel-file resides,\n for as long as **the message-box is visible, and it is deleted automatically after you click \'ok\'!**\n* Read http://docs.xlwings.org/quickstart.html\n\n\n.. _architecture:\n\nArchitecture\n============\nThe Python code is highly modular, with `testability in mind\n`_,\nso that specific parts can run in isolation.\nThis facilitates studying tough issues, such as `double-precision reproducibility\n`_, boundary conditions,\ncomparison of numeric outputs, and studying the code in sub-routines.\n\n.. tip::\n Run test-cases with ``pytest`` command.\n\nData Structures:\n----------------\n.. default-role:: term\n\nComputations are vectorial, based on `hierarchical dataframes\n`_,\nall of them stored in a single structure, the `datamodel`.\nIn case the computation breaks, you can still retrieve all intermediate results\ntill that point.\n\n.. TODO::\n Almost all of the names of the `datamodel` and `formulae` can be remapped.\n For instance, it is possible to run the tool on data containing ``n_idling_speed``\n instead of ``n_idle`` (which is the default), without renaming the input data.\n\n.. 
glossary::\n\n mdl\n datamodel\n The container of all the scalar Input & Output values, the WLTC constants factors,\n and 3 matrices: `WOT`, `gwots`, and the `cycle run` time series.\n\n It is composed of a stack of mergeable `JSON-schema` abiding trees of *string, numbers & pandas objects*,\n formed with python *sequences & dictionaries, and URI-references*.\n It is implemented in :mod:`~wltp.datamodel`, supported by :class:`pandalone.pandata.Pandel`.\n\n\n WOT\n Full Load Curve\n An *input* array/dict/dataframe with the full load power curves, with (at least) 2 columns for ``(n, p)``\n or their normalized values ``(n_norm, p_norm)``.\n See also https://en.wikipedia.org/wiki/Wide_open_throttle\n\n gwots\n grid WOTs\n A dataframe produced from `WOT` for all gear-ratios, indexed by a grid of rounded velocities,\n and with 2-level columns ``(item, gear)``.\n It is generated by :func:`~wltp.engine.interpolate_wot_on_v_grid()`, and augmented\n by :func:`~wltp.engine.attach_p_avail_in_gwots()` & :func:`~wltp.vehicle.calc_p_resist()`.\n\n .. TODO::\n Move `grid WOTs` code into its own module :mod:`~wltp.gwots`.\n\n cycle\n cycle run\n A dataframe with all the time-series, indexed by the time of the samples.\n The velocities for each time-sample must exist in the `gwots`.\n The columns are the same 2-level columns as in *gwots*.\n It is implemented in :mod:`~wltp.cycler`.\n\nCode Structure:\n---------------\nThe computation code is roughly divided into these python modules:\n\n.. glossary::\n\n formulae\n Physics and engineering code, implemented in modules:\n\n - :mod:`~wltp.engine`\n - :mod:`~wltp.vmax`\n - :mod:`~wltp.downscale`\n - :mod:`~wltp.vehicle`\n\n orchestration\n The code producing the actual gear-shifting, implemented in modules:\n\n - :mod:`~wltp.datamodel`\n - :mod:`~wltp.cycler`\n - :mod:`~wltp.gridwots` (TODO)\n - :mod:`~wltp.scheduler` (TODO)\n - :mod:`~wltp.experiment` (TO BE DROPPED, :mod:`~wltp.datamodel` will assume all functionality)\n\n scheduler\n graphtik\n The internal software component :mod:`graphtik` which decides which\n `formulae` to execute based on given inputs and requested outputs.\n\nThe blueprint for the underlying software ideas is given with this diagram:\n\n.. image:: docs/_static/WLTP_architecture.png\n :alt: Software architectural concepts underlying WLTP code structure.\n\nNote that currently there is no `scheduler` component, which would allow executing the tool\nwith a varying list of available inputs & required data, and automatically compute\nonly what is not already given.\n\n\nSpecs & Algorithm\n-----------------\nThis program imitates to some degree the `MS Access DB` (as of July 2019),\nfollowing this *08.07.2019_HS rev2_23072019 GTR specification*\n(:download:`docs/_static/WLTP-GS-TF-41 GTR 15 annex 1 and annex 2 08.07.2019_HS rev2_23072019.docx`,\nincluded in the :file:`docs/_static` folder).\n\n.. Note::\n There is a distinctive difference between this implementation and the `AccDB`:\n\n All computations are *vectorial*, meaning that all intermediate results are calculated & stored,\n for all time sample-points,\n and not just the side of the conditions that evaluate to *true* on each sample.\n\nThe latest official version of this GTR, along\nwith other related documents, may be found at UNECE\'s site:\n\n* http://www.unece.org/trans/main/wp29/wp29wgs/wp29grpe/grpedoc_2013.html\n* https://www2.unece.org/wiki/pages/viewpage.action?pageId=2523179\n\n
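To make the *vectorial* point above concrete, here is a toy sketch in plain *pandas*/*numpy* (an illustration only, not project code): both sides of a condition are computed for *all* samples, and only then combined:\n\n.. code-block:: pycon\n\n    >>> import numpy as np, pandas as pd\n    >>> V = pd.Series([0.0, 5.0, 10.0, 5.0, 0.0])  # a toy velocity trace\n    >>> decelerating = V.diff() < 0                # condition evaluated on every sample\n    >>> np.where(decelerating, -1, 1)              # both branches exist for all samples\n    array([ 1,  1,  1, -1, -1])\n\n.. default-role:: obj\n.. 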
_begin-cycles:\n\n\nCycles\n======\nThe WLTC-profiles for the various classes were generated from the tables\nof the specs above using the :file:`devtools/csvcolumns8to2.py` script, but it still requires\nan intermediate manual step involving a spreadsheet to copy the tables into it and save them as CSV.\n\n\n.. image:: docs/_static/wltc_class1.png\n :align: center\n.. image:: docs/_static/wltc_class2.png\n :align: center\n.. image:: docs/_static/wltc_class3a.png\n :align: center\n.. image:: docs/_static/wltc_class3b.png\n :align: center\n\n.. _phasings:\n\nPhases (the problem)\n--------------------\nGTR\'s ""V"" phasings\n^^^^^^^^^^^^^^^^^^\nThe :term:`GTR`\'s velocity traces have overlapping split-time values, i.e. belonging to 2 phases,\nand e.g. for *class1* these are the sample-values @ times 589 & 1022:\n\n.. table:: GTR\'s **""V""** phasing scheme for Velocities\n\n ============ ======== =========== ============ =========\n *class1* phase-1 phase-2 phase-3 *cycle*\n ============ ======== =========== ============ =========\n Boundaries [0, 589] [589, 1022] [1022, 1611] [0, 1611]\n Duration 589 433 589 1611\n # of samples 590 434 590 1612\n ============ ======== =========== ============ =========\n\n""Semi-VA"" phasings\n^^^^^^^^^^^^^^^^^^\nSome programs and most spreadsheets do not handle overlapping split-time values\nlike that (i.e. keeping a separate column for each class-phase),\nand assign split-times either to the earlier or the later phase, thus distorting\nthe duration & number of time samples some phases contain!\n\nFor instance, Access-DB tables assign split-times on the lower parts,\ndistorting the start-times & durations for all phases except the 1st one\n(deviations from GTR **in bold**):\n\n.. table::\n Access-DB, a **""semi-VA1""** phasing scheme (all but 1st phases shorter)\n\n ============ ======== =============== ================ =========\n *class1* phase-1 phase-2 phase-3 *cycle*\n ============ ======== =============== ================ =========\n Boundaries [0, 589] [**590**, 1022] [**1023**, 1611] [0, 1611]\n Duration 589 **432** **588** 1611\n # of samples 590 **433** **589** 1612\n ============ ======== =============== ================ =========\n\n.. Note::\n The algorithms contained in Access DB are carefully crafted to do the right thing.\n\nThe inverse distortion (assigning split-times on the higher parts) would preserve\nphase starting times (hint: the downscaling algorithm depends on those absolute timings\nbeing precisely correct):\n\n.. 
table::\n ""Inverted"" Access-DB, a **""semi-VA0""** phasing schema (all but last phases shorter)\n\n ============= =============== =============== ================ =========\n *class1* phase-1 phase-2 phase-3 *cycle*\n ============= =============== =============== ================ =========\n Boundaries [0, **588**] [589, **1021**] [1022, 1611] [0, 1611]\n Duration **588** **432** 589 1611\n # of samples **589** **433** 590 1612\n ============= =============== =============== ================ =========\n\n""VA"" phasings\n^^^^^^^^^^^^^\nOn a related issue, GTR\'s formula for Acceleration (Annex 1 3.1) produces\n**one less value** than the number of velocity samples\n(like the majority of the distorted phases above).\nGTR prescribes to (optionally) append an extra *A=0* sample at the end,\nto equalize Acceleration & Velocity lengths, but that is not totally ok\n(hint: mean Acceleration values do not add up like mean-Velocities do,\nsee next point about averaging).\n\nSince most calculated and measured quantities (like cycle Power) are tied\nto the acceleration, we could **refrain from adding the extra 0**, and leave\nall phases with -1 samples, without any overlapping split-times:\n\n.. table:: **""VA0""** phasings\n\n ============= =============== =============== ================ =============\n *class1* phase-1 phase-2 phase-3 *cycle*\n ============= =============== =============== ================ =============\n Boundaries [0, **588**] [589, **1021**] [1022, **1610**] [0, **1610**]\n Duration **588** **432** **588** **1610**\n # of samples **589** **433** **589** **1611**\n ============= =============== =============== ================ =============\n\nActually this is the **""semi-VA0""** phasing, above, with the last phase equally distorted\nby -1 @ 1610.\nBut now the whole cycle has (disturbingly) -1 # of samples & duration.\n\nWe can resolve this, conceptually, by assuming that **each Acceleration-dependent sample\nsignifies a time-duration**, so that although the # of samples is still -1,\nthe phase & cycle durations (in sec) are as expected:\n\n.. 
table:: **""VA0+""** phasings, with 1 sec step duration\n\n ============= =============== ================ ================= ==============\n *class1* phase-1 phase-2 phase-3 *cycle*\n ============= =============== ================ ================= ==============\n Boundaries [0, 589 **)** [589, 1022 **)** [1022, 1611 **)** [0, 1611 **)**\n Duration 589 433 589 1611\n # of samples **589** **433** **589** **1611**\n ============= =============== ================ ================= ==============\n\nSummarizing the last **""VA0+""** phasing scheme:\n\n - each step signifies a ""duration"" of 1 sec,\n - the duration of the final sample @ 1610 reaches just before 1611sec,\n - # of samples for all phases are symmetrically -1 compared to Velocity phases,\n - it is valid for Acceleration-dependent quantities only,\n - it is **valid for any sampling frequency (not just 1Hz)**,\n - respects the `Dijkstra counting\n `_\n (notice the parenthesis signifying *open right* intervals, in the above table),\n BUT ...\n - Velocity-related quantities cannot utilize this phasing scheme,\n must stick to the original, with overlapping split-times.\n\n\nAveraging over phases\n^^^^^^^^^^^^^^^^^^^^^\nCalculating mean values for Acceleration-related quantities produce correct results\nonly with non-overlapping split-times.\n\nIt\'s easier to demonstrate the issues with a hypothetical 4-sec cycle,\xc2\xa0\ncomposed of 2 symmetrical ramp-up/ramp-down\xc2\xa02-sec phases\n(the ""blue"" line in the plot, below):\n\n.. table:: ramp-up/down cycle\n\n ===== ======== ======== ===== ========= ========= =========\n t V-phase1 V-phase2 V Distance VA-phase A\n [sec] [kmh] [m x 3.6] [m/sec\xc2\xb2]\n ===== ======== ======== ===== ========= ========= =========\n 0 X 0 0 1 5\n 1 X 5 2.5 1 5\n 2 X X 10 10 2 -5\n 3 X 5 17.5 2 -5\n 4 X 0 20 ** **\n ===== ======== ======== ===== ========= ========= =========\n\n- The final *A* value has been kept blank, so that mean values per-phase\n add up, and phases no longer overlap.\n\n.. raw:: html\n :file: docs/_static/2ramps.svg\n\n.. table:: mean values for ramp-up/down cycle, above\n\n =========== ======== ========= ========\n \\ mean(V) mean(S) mean(A)\n \\ [kmh] [m x 3.6] [m/sec\xc2\xb2]\n =========== ======== ========= ========\n **phase1:** 5 10 5\n **phase2:** 5 10 -5\n =========== ======== ========= ========\n\n- Applying the *V-phasings* and the extra 0 on *mean(A)* would have arrived\n to counterintuitive values, that don\'t even sum up to 0:\n\n - up-ramp: :math:`\\left(\\frac{5 + 5 + (-5)}{3} =\\right) 1.66m/sec^2`\n - down-ramp: :math:`\\left(\\frac{(-5) + (-5) + 0}{3} =\\right) -3.33m/sec^2`\n\n\nPractical deliberations\n^^^^^^^^^^^^^^^^^^^^^^^\nAll phases in WLTC begin and finish with consecutive zeros(0),\ntherefore the deliberations above do not manifest as problems;\nbut at the same time, discovering off-by-one errors & shifts in time\n(wherever this really matters e.g. for syncing data), on arbitrary files containing\ncalculated and/or measured traces is really hard:\nSUMs & CUMSUMs do not produce any difference at all.\n\nThe tables in the next section, along with accompanying CRC functions developed in Python,\ncome as an aid to the problems above.\n\n.. 
_checksums:\n\nPhase boundaries\n^^^^^^^^^^^^^^^^\nAs reported by :func:`wltp.cycles.cycle_phases()`, and neglecting the advice\nto *optionally* add a final 0 when calculating the cycle Acceleration (Annex 1 2-3.1),\nthe following 3 *phasings* are identified from velocity traces of 1Hz:\n\n- **V:** phases for quantities dependent on **Velocity** samples, overlapping\n split-times.\n- **VA0:** phases for **Acceleration**\\-dependent quantities, -1 length,\n NON overlapping split-times, starting on *t=0*.\n- **VA1:** phases for **Acceleration**\\-dependent quantities, -1 length,\n NON overlapping split-times, starting on *t=1*.\n (e.g. Energy in Annex 7).\n\n======= ======== ======== =========== ============ ============\nclass phasing phase-1 phase-2 phase-3 phase-4\n======= ======== ======== =========== ============ ============\nclass1 **V** [0, 589] [589, 1022] [1022, 1611]\n\\ **VA0** [0, 588] [589, 1021] [1022, 1610]\n\\ **VA1** [1, 589] [590, 1022] [1023, 1611]\nclass2 **V** [0, 589] [589, 1022] [1022, 1477] [1477, 1800]\n\\ **VA0** [0, 588] [589, 1021] [1022, 1476] [1477, 1799]\n\\ **VA1** [1, 589] [590, 1022] [1023, 1477] [1478, 1800]\nclass3a **V** [0, 589] [589, 1022] [1022, 1477] [1477, 1800]\n\\ **VA0** [0, 588] [589, 1021] [1022, 1476] [1477, 1799]\n\\ **VA1** [1, 589] [590, 1022] [1023, 1477] [1478, 1800]\nclass3b **V** [0, 589] [589, 1022] [1022, 1477] [1477, 1800]\n\\ **VA0** [0, 588] [589, 1021] [1022, 1476] [1477, 1799]\n\\ **VA1** [1, 589] [590, 1022] [1023, 1477] [1478, 1800]\n======= ======== ======== =========== ============ ============\n\nChecksums\n^^^^^^^^^\n* The :func:`~wltp.cycles.crc_velocity()` function has been specially crafted to consume\n series of floats with 2-digit precision (:data:`~wltp.invariants.v_decimals`)\n denoting *Velocity-traces*, spitting out a hexadecimal string, the *CRC*,\n without neglecting any zeros(0) in the trace.\n* The checksums for all :ref:`phasings` are reported by :func:`~wltp.cycles.cycle_checksums()`,\n and the table below is constructed. The original checksums of the :term:`GTR` are\n also included in the final 2 columns.\n* Based on this table of CRCs, the :func:`~wltp.cycles.identify_cycle_v_crc()` function\n tries to match and identify any given Velocity-trace:\n\n.. 
table:: CRCs & CUMSUMs for all phases over different ""phasings""\n\n ================== ===== ===== ===== ==== ===== ===== ========== ============\n \\ *CRC32* *SUM*\n ------------------ --------------------------------------- ------------------------\n \\ *by_phase* *cumulative* *by_phase* *cumulative*\n ------------------ ------------------- ------------------ ---------- ------------\n *phasing⇨ phase⇩* V VA0 VA1 V VA0 VA1 V V\n ================== ===== ===== ===== ==== ===== ===== ========== ============\n **class1**\n --------------------------------------------------------------------------------------\n *phase1* 9840 4438 97DB 9840 4438 97DB 11988.4 11988.4\n *phase2* 8C34 8C8D D9E8 DCF2 090B 4295 17162.8 29151.2\n *phase3* 9840 4438 97DB 6D1D 4691 F523 11988.4 41139.6\n **class2**\n --------------------------------------------------------------------------------------\n *phase1* 8591 CDD1 8A0A 8591 CDD1 8A0A 11162.2 11162.2\n *phase2* 312D 391A 64F1 A010 606E 3E77 17054.3 28216.5\n *phase3* 81CD E29E 9560 28FB 9261 D162 24450.6 52667.1\n *phase4* 8994 0D25 2181 474B 262A F70F 28869.8 81536.9\n **class3a**\n --------------------------------------------------------------------------------------\n *phase1* 48E5 910C 477E 48E5 910C 477E 11140.3 11140.3\n *phase2* 1494 D93B 4148 403D 2487 DE5A 16995.7 28136.0\n *phase3* 8B3B 9887 9F96 D770 3F67 2EE9 25646.0 53782.0\n *phase4* F962 1A0A 5177 9BCE 9853 2B8A 29714.9 83496.9\n **class3b**\n --------------------------------------------------------------------------------------\n *phase1* 48E5 910C 477E 48E5 910C 477E 11140.3 11140.3\n *phase2* AF1D E501 FAC1 FBB4 18BD 65D3 17121.2 28261.5\n *phase3* 15F6 A779 015B 43BC B997 BA25 25782.2 54043.7\n *phase4* F962 1A0A 5177 639B 0B7A D3DF 29714.9 83758.6\n ================== ===== ===== ===== ==== ===== ===== ========== ============\n\n... where if some cycle-phase is identified as:\n\n* **V phasing**, it contains all samples;\n* **VA0 phasing**, it lacks -1 sample *from the end*;\n* **VA1 phasing**, it lacks -1 sample *from the start*.\n\n\nPractical example\n^^^^^^^^^^^^^^^^^\nFor instance, let\'s identify the *V-trace* of *class1*\'s full cycle:\n\n>>> from wltp.datamodel import get_class_v_cycle as wltc\n>>> from wltp.cycles import identify_cycle_v as crc\n>>> V = wltc(""class1"")\n\n>>> crc(V) # full cycle\n(\'class1\', None, \'V\')\n>>> crc(V[:-1]) # -1 (last) sample\n(\'class1\', None, \'VA0\')\n\nThe :func:`crc()` function returns a 3-tuple:\n``(i-class, i-phase, i-kind)``:\n\n* When ``i-phase`` is None, the trace was a full-cycle.\n\nNow let\'s identify the phases of ""Access-DB"":\n\n>>> crc(V[:590]) # AccDB phase1 respects GTR\n(\'class1\', \'phase-1\', \'V\')\n>>> crc(V[590:1023]) # AccDB phase2 has -1 (first) sample\n(\'class1\', \'phase-2\', \'VA1\')\n>>> crc(V[:1023]) # cumulative AccDB phase2 respects GTR\n(\'class1\', \'PHASE-2\', \'V\')\n\n* When ``i-phase`` is CAPITALIZED, the trace was cumulative.\n* *Phase2* is missing -1 sample from the start (``i-kind == VA1``).\n\n>>> crc(V[1023:]) # AccDB phase3\n(\'class1\', \'phase-1\', \'VA1\')\n\n* *Phase3* was identified again as *phase1*, since they are identical.\n\nFinally, clipping samples from both start & end matches no CRC:\n\n>>> crc(V[1:-1])\n(None, None, None)\n\nNote that all cases above would have had identical *CUMSUM* (GTR\'s) CRCs.\n\n
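A raw checksum can also be taken directly with :func:`~wltp.cycles.crc_velocity()`; for the full *class1* V-trace it should match the *cumulative* CRC of the last phase in the table above (a sketch, with the output elided):\n\n>>> from wltp.cycles import crc_velocity\n>>> crc_velocity(V) # doctest: +SKIP\n\n\n.. 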
_begin-contribute:\n\nGetting Involved\n================\nThis project is hosted on **github**.\nTo provide feedback about bugs and errors or questions and requests for enhancements,\nuse `github\'s Issue-tracker `_.\n\nDevelopment procedure\n---------------------\nFor submitting code, use ``UTF-8`` everywhere, unix-eol(``LF``) and set ``git config core.autocrlf input``.\n\nThe typical development procedure is like this:\n\n0. Install and arm a `pre-commit hook `_\n with *black* to auto-format your python-code.\n\n1. Modify the sources in small, isolated and well-defined changes, i.e.\n adding a single feature, or fixing a specific bug.\n\n2. Add test-cases ""proving"" your code.\n\n3. Rerun all test-cases to ensure that you didn\'t break anything,\n and check that their *coverage* remains above the limit set in :file:`setup.cfg`.\n\n4. If you made a rather important modification, update also the :doc:`CHANGES` file and/or\n other documents (e.g. README.rst). To see the rendered results of the documents,\n issue the following commands and read the result html at :file:`build/sphinx/html/index.html`:\n\n .. code-block:: shell\n\n python setup.py build_sphinx # Builds html docs\n python setup.py build_sphinx -b doctest # Checks if python-code embedded in comments runs ok.\n\n5. If there are no problems, commit your changes with a descriptive message.\n\n6. Repeat this cycle for other bugs/enhancements.\n\n7. When you are finished, push the changes upstream to *github* and make a *merge_request*.\n You can check whether your merge-request indeed passed the tests by checking\n its build-status |travis-status| on the integration-server\'s site (TravisCI).\n\n .. Hint:: Skim through the small IPython developer\'s documentation on the matter:\n `The perfect pull request `_\n\n8. Generate the Sphinx documents in :file:`./wltp.git/docs/_build/html/`\n with this command::\n\n python setup.py build_sphinx\n\n Access the generated documents through a web-server, for :term:`graphtik` graphs\n to work correctly, with this bash-command (remove the final ``&`` on *Windows*)::\n\n python -m http.server 8080 --directory ./wltp.git/docs/_build/html/ &\n\n.. _dev-team:\n\nDevelopment team\n----------------\n\n* Author:\n * Kostis Anagnostopoulos\n* Contributing Authors:\n * Heinz Steven (test-data, validation and review)\n * Georgios Fontaras (simulation, physics & engineering support)\n * Alessandro Marotta (policy support)\n * Jelica Pavlovic (policy support)\n * Eckhard Schlichte (discussions & advice)\n\n\n.. _begin-glossary:\n\nGlossary\n========\nSee also :ref:`architecture:Architecture`.\n\n.. default-role:: term\n\n.. glossary::\n\n WLTP\n The `Worldwide harmonised Light duty vehicles Test Procedure `_,\n a `GRPE` informal working group\n\n UNECE\n The United Nations Economic Commission for Europe, which has assumed the steering role\n on the `WLTP`.\n\n GRPE\n `UNECE` Working party on Pollution and Energy - Transport Programme\n\n GTR\n Any of the *Global Technical Regulation* documents of the `WLTP`.\n\n GS Task-Force\n The Gear-shift Task-force of the `GRPE`. It is the team of automotive experts drafting\n the gear-shifting strategy for vehicles running the `WLTP` cycles.\n\n WLTC\n The family of pre-defined *driving-cycles* corresponding to vehicles with different\n :abbr:`PMR (Power to Mass Ratio)`. 
Classes 1,2, 3a/b are split into 3, 4 and 4 *parts* respectively.\n\n AccDB\n MS Access DB\n The original implementation of the algorithm in *MS Access* by Heinz Steven.\n\n To facilitate searching and cross-referencing the existing routines,\n all the code & queries of the database have been extracted and stored as text\n under the `Notebooks/AccDB_src/\n `_ folder\n of this project.\n\n MRO\n Mass in running order\n The mass of the vehicle, with its fuel tank(s) filled to at least 90 per cent\n of its or their capacity/capacities, including the mass of the driver and the liquids,\n fitted with the standard equipment in accordance with the manufacturer\'s specifications and,\n where they are fitted, the mass of the bodywork, the cabin,\n the coupling and the spare wheel(s) as well as the tools when they are fitted.\n\n UM\n Kerb mass\n Curb weight\n Unladen mass\n The `Mass in running order` minus the `Driver mass`.\n\n Driver weight\n Driver mass\n 75 kg\n\n TM\n Test mass\n The representative weight of the vehicle used as input for the calculations of the simulation,\n derived by interpolating between high and low values for the |CO2|-family of the vehicle.\n\n Downscaling\n Reduction of the top-velocity of the original drive trace to be followed, to ensure that the vehicle\n is not driven in an unduly high proportion of ""full throttle"".\n\n JSON-schema\n The `JSON schema `_ is an `IETF draft `_\n that provides a *contract* for what JSON-data is required for a given application and how to interact\n with it. JSON Schema is intended to define validation, documentation, hyperlink navigation, and\n interaction control of JSON data.\n\n The schema of this project has its own section: :ref:`code:Schemas`\n\n You can learn more about it from this `excellent guide `_,\n and experiment with this `on-line validator `_.\n\n JSON-pointer\n JSON Pointer (:rfc:`6901`) defines a string syntax for identifying a specific value within\n a JavaScript Object Notation (JSON) document. It aims to serve the same purpose as *XPath* from the XML world,\n but it is much simpler.\n\n sphinx\n The text-oriented language, a superset of `Restructured Text `_,\n used to write the documentation for this project, with similar capabilities to *LaTeX*,\n but for humans; e.g. the Linux kernel adopted this textual format in 2016.\n http://sphinx-doc.org/\n\n notebook\n jupyter notebook\n Jupyter\n *Jupyter* is a web-based interactive computational environment for creating *Jupyter notebook* documents.\n The ""notebook"" term can colloquially refer to many different entities,\n mainly the Jupyter web application, Jupyter Python web server, or Jupyter document format,\n depending on context.\n\n A *Jupyter Notebook* document is composed of an ordered list of input/output *cells*\n which contain code in various languages, text (using Markdown), mathematics, plots and\n rich media; such documents usually end with the "".ipynb"" extension.\n\n.. _begin-replacements:\n\n.. |br| raw:: html\n\n    <br/>
\n\n.. |CO2| replace:: CO\\ :sub:`2`\n\n.. |virtualenv| replace:: *virtualenv* (isolated Python environment)\n.. _virtualenv: http://docs.python-guide.org/en/latest/dev/virtualenvs/\n\n.. |binder| image:: https://img.shields.io/badge/demo-stable-579ACA.svg?logo=data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAAFkAAABZCAMAAABi1XidAAAB8lBMVEX///9XmsrmZYH1olJXmsr1olJXmsrmZYH1olJXmsr1olJXmsrmZYH1olL1olJXmsr1olJXmsrmZYH1olL1olJXmsrmZYH1olJXmsr1olL1olJXmsrmZYH1olL1olJXmsrmZYH1olL1olL0nFf1olJXmsrmZYH1olJXmsq8dZb1olJXmsrmZYH1olJXmspXmspXmsr1olL1olJXmsrmZYH1olJXmsr1olL1olJXmsrmZYH1olL1olLeaIVXmsrmZYH1olL1olL1olJXmsrmZYH1olLna31Xmsr1olJXmsr1olJXmsrmZYH1olLqoVr1olJXmsr1olJXmsrmZYH1olL1olKkfaPobXvviGabgadXmsqThKuofKHmZ4Dobnr1olJXmsr1olJXmspXmsr1olJXmsrfZ4TuhWn1olL1olJXmsqBi7X1olJXmspZmslbmMhbmsdemsVfl8ZgmsNim8Jpk8F0m7R4m7F5nLB6jbh7jbiDirOEibOGnKaMhq+PnaCVg6qWg6qegKaff6WhnpKofKGtnomxeZy3noG6dZi+n3vCcpPDcpPGn3bLb4/Mb47UbIrVa4rYoGjdaIbeaIXhoWHmZYHobXvpcHjqdHXreHLroVrsfG/uhGnuh2bwj2Hxk17yl1vzmljzm1j0nlX1olL3AJXWAAAAbXRSTlMAEBAQHx8gICAuLjAwMDw9PUBAQEpQUFBXV1hgYGBkcHBwcXl8gICAgoiIkJCQlJicnJ2goKCmqK+wsLC4usDAwMjP0NDQ1NbW3Nzg4ODi5+3v8PDw8/T09PX29vb39/f5+fr7+/z8/Pz9/v7+zczCxgAABC5JREFUeAHN1ul3k0UUBvCb1CTVpmpaitAGSLSpSuKCLWpbTKNJFGlcSMAFF63iUmRccNG6gLbuxkXU66JAUef/9LSpmXnyLr3T5AO/rzl5zj137p136BISy44fKJXuGN/d19PUfYeO67Znqtf2KH33Id1psXoFdW30sPZ1sMvs2D060AHqws4FHeJojLZqnw53cmfvg+XR8mC0OEjuxrXEkX5ydeVJLVIlV0e10PXk5k7dYeHu7Cj1j+49uKg7uLU61tGLw1lq27ugQYlclHC4bgv7VQ+TAyj5Zc/UjsPvs1sd5cWryWObtvWT2EPa4rtnWW3JkpjggEpbOsPr7F7EyNewtpBIslA7p43HCsnwooXTEc3UmPmCNn5lrqTJxy6nRmcavGZVt/3Da2pD5NHvsOHJCrdc1G2r3DITpU7yic7w/7Rxnjc0kt5GC4djiv2Sz3Fb2iEZg41/ddsFDoyuYrIkmFehz0HR2thPgQqMyQYb2OtB0WxsZ3BeG3+wpRb1vzl2UYBog8FfGhttFKjtAclnZYrRo9ryG9uG/FZQU4AEg8ZE9LjGMzTmqKXPLnlWVnIlQQTvxJf8ip7VgjZjyVPrjw1te5otM7RmP7xm+sK2Gv9I8Gi++BRbEkR9EBw8zRUcKxwp73xkaLiqQb+kGduJTNHG72zcW9LoJgqQxpP3/Tj//c3yB0tqzaml05/+orHLksVO+95kX7/7qgJvnjlrfr2Ggsyx0eoy9uPzN5SPd86aXggOsEKW2Prz7du3VID3/tzs/sSRs2w7ovVHKtjrX2pd7ZMlTxAYfBAL9jiDwfLkq55Tm7ifhMlTGPyCAs7RFRhn47JnlcB9RM5T97ASuZXIcVNuUDIndpDbdsfrqsOppeXl5Y+XVKdjFCTh+zGaVuj0d9zy05PPK3QzBamxdwtTCrzyg/2Rvf2EstUjordGwa/kx9mSJLr8mLLtCW8HHGJc2R5hS219IiF6PnTusOqcMl57gm0Z8kanKMAQg0qSyuZfn7zItsbGyO9QlnxY0eCuD1XL2ys/MsrQhltE7Ug0uFOzufJFE2PxBo/YAx8XPPdDwWN0MrDRYIZF0mSMKCNHgaIVFoBbNoLJ7tEQDKxGF0kcLQimojCZopv0OkNOyWCCg9XMVAi7ARJzQdM2QUh0gmBozjc3Skg6dSBRqDGYSUOu66Zg+I2fNZs/M3/f/Grl/XnyF1Gw3VKCez0PN5IUfFLqvgUN4C0qNqYs5YhPL+aVZYDE4IpUk57oSFnJm4FyCqqOE0jhY2SMyLFoo56zyo6becOS5UVDdj7Vih0zp+tcMhwRpBeLyqtIjlJKAIZSbI8SGSF3k0pA3mR5tHuwPFoa7N7reoq2bqCsAk1HqCu5uvI1n6JuRXI+S1Mco54YmYTwcn6Aeic+kssXi8XpXC4V3t7/ADuTNKaQJdScAAAAAElFTkSuQmCC\n :target: https://mybinder.org/v2/gh/JRCSTU/wltp/master?urlpath=lab/tree/Notebooks/README.md\n :alt: JupyterLab for WLTP (stable)\n\n.. 
|binder-dev| image:: https://img.shields.io/badge/demo-dev-579ACA.svg?logo=data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAAFkAAABZCAMAAABi1XidAAAB8lBMVEX///9XmsrmZYH1olJXmsr1olJXmsrmZYH1olJXmsr1olJXmsrmZYH1olL1olJXmsr1olJXmsrmZYH1olL1olJXmsrmZYH1olJXmsr1olL1olJXmsrmZYH1olL1olJXmsrmZYH1olL1olL0nFf1olJXmsrmZYH1olJXmsq8dZb1olJXmsrmZYH1olJXmspXmspXmsr1olL1olJXmsrmZYH1olJXmsr1olL1olJXmsrmZYH1olL1olLeaIVXmsrmZYH1olL1olL1olJXmsrmZYH1olLna31Xmsr1olJXmsr1olJXmsrmZYH1olLqoVr1olJXmsr1olJXmsrmZYH1olL1olKkfaPobXvviGabgadXmsqThKuofKHmZ4Dobnr1olJXmsr1olJXmspXmsr1olJXmsrfZ4TuhWn1olL1olJXmsqBi7X1olJXmspZmslbmMhbmsdemsVfl8ZgmsNim8Jpk8F0m7R4m7F5nLB6jbh7jbiDirOEibOGnKaMhq+PnaCVg6qWg6qegKaff6WhnpKofKGtnomxeZy3noG6dZi+n3vCcpPDcpPGn3bLb4/Mb47UbIrVa4rYoGjdaIbeaIXhoWHmZYHobXvpcHjqdHXreHLroVrsfG/uhGnuh2bwj2Hxk17yl1vzmljzm1j0nlX1olL3AJXWAAAAbXRSTlMAEBAQHx8gICAuLjAwMDw9PUBAQEpQUFBXV1hgYGBkcHBwcXl8gICAgoiIkJCQlJicnJ2goKCmqK+wsLC4usDAwMjP0NDQ1NbW3Nzg4ODi5+3v8PDw8/T09PX29vb39/f5+fr7+/z8/Pz9/v7+zczCxgAABC5JREFUeAHN1ul3k0UUBvCb1CTVpmpaitAGSLSpSuKCLWpbTKNJFGlcSMAFF63iUmRccNG6gLbuxkXU66JAUef/9LSpmXnyLr3T5AO/rzl5zj137p136BISy44fKJXuGN/d19PUfYeO67Znqtf2KH33Id1psXoFdW30sPZ1sMvs2D060AHqws4FHeJojLZqnw53cmfvg+XR8mC0OEjuxrXEkX5ydeVJLVIlV0e10PXk5k7dYeHu7Cj1j+49uKg7uLU61tGLw1lq27ugQYlclHC4bgv7VQ+TAyj5Zc/UjsPvs1sd5cWryWObtvWT2EPa4rtnWW3JkpjggEpbOsPr7F7EyNewtpBIslA7p43HCsnwooXTEc3UmPmCNn5lrqTJxy6nRmcavGZVt/3Da2pD5NHvsOHJCrdc1G2r3DITpU7yic7w/7Rxnjc0kt5GC4djiv2Sz3Fb2iEZg41/ddsFDoyuYrIkmFehz0HR2thPgQqMyQYb2OtB0WxsZ3BeG3+wpRb1vzl2UYBog8FfGhttFKjtAclnZYrRo9ryG9uG/FZQU4AEg8ZE9LjGMzTmqKXPLnlWVnIlQQTvxJf8ip7VgjZjyVPrjw1te5otM7RmP7xm+sK2Gv9I8Gi++BRbEkR9EBw8zRUcKxwp73xkaLiqQb+kGduJTNHG72zcW9LoJgqQxpP3/Tj//c3yB0tqzaml05/+orHLksVO+95kX7/7qgJvnjlrfr2Ggsyx0eoy9uPzN5SPd86aXggOsEKW2Prz7du3VID3/tzs/sSRs2w7ovVHKtjrX2pd7ZMlTxAYfBAL9jiDwfLkq55Tm7ifhMlTGPyCAs7RFRhn47JnlcB9RM5T97ASuZXIcVNuUDIndpDbdsfrqsOppeXl5Y+XVKdjFCTh+zGaVuj0d9zy05PPK3QzBamxdwtTCrzyg/2Rvf2EstUjordGwa/kx9mSJLr8mLLtCW8HHGJc2R5hS219IiF6PnTusOqcMl57gm0Z8kanKMAQg0qSyuZfn7zItsbGyO9QlnxY0eCuD1XL2ys/MsrQhltE7Ug0uFOzufJFE2PxBo/YAx8XPPdDwWN0MrDRYIZF0mSMKCNHgaIVFoBbNoLJ7tEQDKxGF0kcLQimojCZopv0OkNOyWCCg9XMVAi7ARJzQdM2QUh0gmBozjc3Skg6dSBRqDGYSUOu66Zg+I2fNZs/M3/f/Grl/XnyF1Gw3VKCez0PN5IUfFLqvgUN4C0qNqYs5YhPL+aVZYDE4IpUk57oSFnJm4FyCqqOE0jhY2SMyLFoo56zyo6becOS5UVDdj7Vih0zp+tcMhwRpBeLyqtIjlJKAIZSbI8SGSF3k0pA3mR5tHuwPFoa7N7reoq2bqCsAk1HqCu5uvI1n6JuRXI+S1Mco54YmYTwcn6Aeic+kssXi8XpXC4V3t7/ADuTNKaQJdScAAAAAElFTkSuQmCC\n :target: https://mybinder.org/v2/gh/ankostis/wltp/master?urlpath=lab/tree/Notebooks/README.md\n :alt: JupyterLab for WLTP (dev)\n\n.. |pypi| replace:: *PyPi* repo\n.. _pypi: https://pypi.python.org/pypi/wltp\n\n.. |winpython| replace:: *WinPython*\n.. _winpython: http://winpython.github.io/\n\n.. |anaconda| replace:: *Anaconda*\n.. _anaconda: http://docs.continuum.io/anaconda/\n\n.. |travis-status| image:: https://travis-ci.org/JRCSTU/wltp.svg\n :alt: Travis continuous integration testing ok? (Linux)\n :target: https://travis-ci.org/JRCSTU/wltp/builds\n\n.. |appveyor-status| image:: https://ci.appveyor.com/api/projects/status/0e2dcudyuku1w1gd?svg=true\n :alt: Appveyor continuous integration testing ok? (Windows)\n :target: https://ci.appveyor.com/project/JRCSTU/wltp\n\n.. |cover-status| image:: https://coveralls.io/repos/JRCSTU/wltp/badge.png?branch=master\n :target: https://coveralls.io/r/JRCSTU/wltp?branch=master\n\n.. |docs-status| image:: https://readthedocs.org/projects/wltp/badge/\n :alt: Documentation status\n :target: https://readthedocs.org/projects/wltp/builds/\n\n.. 
|gh-version| image:: https://img.shields.io/github/v/release/JRCSTU/wltp.svg?label=GitHub%20release&include_prereleases\n :target: https://github.com/JRCSTU/wltp/releases\n :alt: Latest version in GitHub\n\n.. |pypi-version| image:: https://img.shields.io/pypi/v/wltp.svg?label=PyPi%20version\n :target: https://pypi.python.org/pypi/wltp/\n :alt: Latest version in PyPI\n\n.. |conda-version| image:: https://img.shields.io/conda/v/ankostis/wltp?label=conda%20version\n :target: https://anaconda.org/ankostis/wltp\n :alt: Latest version in Anaconda cloud\n\n.. |proj-version| image:: https://img.shields.io/badge/project--version-1.1.0.dev0-orange.svg\n :target: https://github.com/JRCSTU/wltp/releases\n :alt: Version grafted in project\'s package coordinates\n\n.. |rel-date| image:: https://img.shields.io/badge/rel--date-2020--08--04_00:00:00-orange.svg\n :target: https://github.com/JRCSTU/wltp/releases\n :alt: Release date grafted in project\'s package coordinates\n\n.. |python-ver| image:: https://img.shields.io/pypi/pyversions/wltp.svg?label=PyPi%20Python\n :target: https://pypi.python.org/pypi/wltp/\n :alt: Supported Python versions of latest release in PyPi\n\n.. |conda-plat| image:: https://img.shields.io/conda/pn/ankostis/wltp.svg?label=conda%20platforms\n :target: https://anaconda.org/ankostis/wltp\n :alt: Supported conda platforms\n\n.. |dev-status| image:: https://pypip.in/status/wltp/badge.svg\n :target: https://pypi.python.org/pypi/wltp/\n :alt: Development Status\n\n.. |downloads-count| image:: https://pypip.in/download/wltp/badge.svg?period=month&label=PyPi%20downloads\n :target: https://pypi.python.org/pypi/wltp/\n :alt: PyPi downloads\n\n.. |codestyle| image:: https://img.shields.io/badge/code%20style-black-black.svg\n :target: https://github.com/ambv/black\n :alt: Code Style\n\n.. |gh-watch| image:: https://img.shields.io/github/watchers/JRCSTU/wltp.svg?style=social\n :target: https://github.com/JRCSTU/wltp\n :alt: Github watchers\n\n.. |gh-star| image:: https://img.shields.io/github/stars/JRCSTU/wltp.svg?style=social\n :target: https://github.com/JRCSTU/wltp\n :alt: Github stargazers\n\n.. |gh-fork| image:: https://img.shields.io/github/forks/JRCSTU/wltp.svg?style=social\n :target: https://github.com/JRCSTU/wltp\n :alt: Github forks\n\n.. |gh-issues| image:: http://img.shields.io/github/issues/JRCSTU/wltp.svg?style=social\n :target: https://github.com/JRCSTU/wltp/issues\n :alt: Issues count\n\n.. 
|proj-lic| image:: https://img.shields.io/pypi/l/wltp.svg\n :target: https://joinup.ec.europa.eu/software/page/eupl\n :alt: EUPL 1.1+\n'",,"2014/07/31, 14:34:27",3373,CUSTOM,0,1310,"2020/03/31, 20:33:24",7,3,7,0,1303,3,0.0,0.0007668711656442229,"2019/08/30, 17:06:48",v1.0.0.dev12,0,2,false,,false,true,"JRCSTU/co2mpas_driver,JRCSTU/co2wui,saikashyap6433/Fuel-Estimator-For-Vehicles,stefanocorsi/co2wui,ashenafimenza/binder_co2mpas_driver,weihangChen/pythontest,JRCSTU/CO2MPAS-TA",,https://github.com/JRCSTU,https://ec.europa.eu/jrc/en/about/institutes-and-directorates/jrc-iet,"Ispra (VA), Italy",,,https://avatars.githubusercontent.com/u/13638890?v=4,,, The Open Charge Point Protocol,A network protocol for communication between electric vehicle chargers and a central backoffice system.,NewMotion,https://github.com/ShellRechargeSolutionsEU/ocpp.git,github,"ocpp,scala,electric-vehicles,charging-stations,chargingstation,websocket,emobility",Mobility and Transportation,"2021/10/08, 09:39:37",180,0,33,false,Scala,Shell Recharge Solutions EU,ShellRechargeSolutionsEU,Scala,,"b'# Open Charge Point Protocol for Scala [![Build Status](https://secure.travis-ci.org/NewMotion/ocpp.png)](http://travis-ci.org/NewMotion/ocpp) [![Coverage Status](https://coveralls.io/repos/github/NewMotion/ocpp/badge.svg?branch=master)](https://coveralls.io/github/NewMotion/ocpp?branch=master) [![Codacy Badge](https://api.codacy.com/project/badge/Grade/eab36abac62e4e33845a10a1485f35c6)](https://www.codacy.com/app/reinierl/ocpp)\n\nThe Open Charge Point Protocol (OCPP) is a network protocol for communication\nbetween electric vehicle chargers and a central backoffice system. It is\ndeveloped by the Open Charge Alliance (OCA). You can find more details on the\n[official website of the OCA](http://openchargealliance.org/).\n\n## Note to open source users\nOpen source users of this library will want to use the\n[IHomer fork](https://github.com/IHomer/scala-ocpp/) which is more actively\nsupported and published to Maven Central.\n\n## Functionality\n\nThis library is the implementation of OCPP developed and used by NewMotion, one\nof Europe\'s largest Electric Vehicle Charge Point Operators.\n\nThis library only implements the network protocol. That is, it provides data\ntypes for the OCPP messages, remote procedure call using those request and\nresponse messages, and error reporting about those remote procedure calls. It\ndoes _not_ provide any actual handling of the message contents. For an actual\napp speaking OCPP using this library, see\n[docile-charge-point](https://github.com/NewMotion/docile-charge-point).\n\nThe library is designed with versatility in mind. OCPP comes in 4 versions (1.2,\n1.5, 1.6 and 2.0), two transport variants (SOAP/XML aka OCPP-S and WebSocket/JSON aka\nOCPP-J), and two roles (""Charge Point"" and ""Central System""). This library will\nhelp you with 1.5 and 1.6 over JSON. For 1.2 and 1.5 over SOAP, there is [a\nseparate library by NewMotion](https://github.com/NewMotion/ocpp-soap) that\ndepends on this one. Some OCPP 2.0 support is present, but not a full\nimplementation yet. The main body of this README will be writing about the OCPP\n1.5 and 1.6 support; for OCPP 2.0 see [here](#ocpp2).\n\nVersion 2.0 with SOAP/XML is not possible. 
Version 1.2 with\nWebSocket/JSON and version 1.6 with SOAP/XML are not supported by this\nlibrary.\n\nUsers of this library probably want to use different WebSocket libraries for\ndifferent scenarios: a production back-office server with tens of thousands of\nconcurrent connections, a client in a load testing tool, or a simple one-off\nscript to test a certain behavior. This library uses the\n[cake pattern](http://www.cakesolutions.net/teamblogs/2011/12/19/cake-pattern-in-depth)\nto make it easy to swap out the underlying WebSocket implementation while still\nusing the same concise high-level API.\n\n## How to use\n\n### Setup\n\nThe library is divided into three separate modules so applications using it\nwon\'t get too many dependencies dragged in. Those are:\n\n * `ocpp-j-api`: high-level interface to OCPP-J connections\n * `ocpp-json`: serialization of OCPP messages to/from JSON\n * `ocpp-messages`: The definitions of OCPP messages, independent from\n the transport variant used\n\nSo if you want to use the high-level OCPP-J connection interface, and you\'re\nusing SBT, you can declare the dependency by adding this to your `build.sbt` after publishing the library:\n\n```\nlibraryDependencies += ""com.thenewmotion.ocpp"" %% ""ocpp-j-api"" % ""9.2.2""\n```\n\nWith Maven, add this to your dependencies:\n\n```xml\n <dependency>\n <groupId>com.thenewmotion.ocpp</groupId>\n <artifactId>ocpp-j-api_2.11</artifactId>\n <version>9.2.2</version>\n </dependency>\n```\n\n### Using the simple client API\n\nAn example OCPP-J client application is included. You can run it like this:\n\n sbt ""project example-json-client"" ""run 01234567 ws://localhost:8017/ocppws 1.5,1.6""\n\nThis means: connect to the Central System running at\n`ws://localhost:8017/ocppws`, as a charge point with ID 01234567, using OCPP\nversion 1.5 and if that is not supported try 1.6 instead. If you don\'t specify\na version, 1.6 is used by default.\n\nIf you look at the code of the example by clicking [here](example-json-client/src/main/scala/com/thenewmotion/ocpp/json/example/JsonClientTestApp.scala),\nyou can see how the client API is used:\n\n * A connection is established by creating an instance of `OcppJsonClient`\n using the `OcppJsonClient.forVersion1x` factory method.\n The server endpoint URI, charge point ID and OCPP version to use are passed\n to the method, followed by a handler for incoming OCPP requests in a\n second parameter list.\n\n * To send OCPP messages to the Central System, you call the `send` method on\n the `OcppJsonClient` instance. You will get a `Future` back that will be\n completed with the Central System\'s response. If the Central System fails\n to respond to your request, the `Future` will fail.\n\n * `OcppJsonClient` is an instance of the [`OutgoingOcppEndpoint`](ocpp-j-api/src/main/scala/com/thenewmotion/ocpp/json/api/OutgoingOcppEndpoint.scala)\n trait. This trait defines its interface.\n\n#### Handling requests\n\nTo specify the request handler, we use a [_magnet pattern_](http://spray.io/blog/2012-12-13-the-magnet-pattern/).\nYou can specify the request handler in different ways. After the\n`val requestHandler: ChargePointRequestHandler =`, you see a\n[`ChargePoint`](ocpp-messages/src/main/scala/com/thenewmotion/ocpp/messages/ChargePoint.scala)\ninstance in the example program. 
But you can also specify the request handler\nas a function from `ChargePointReq` to `Future[ChargePointRes]`:\n\n```scala\n\n val ocppJsonClient = OcppJsonClient.forVersion1x(chargerId, new URI(centralSystemUri), versions) {\n (req: ChargePointReq) =>\n req match {\n case GetConfigurationReq(keys) =>\n System.out.println(s""Received GetConfiguration for $keys"")\n Future.successful(GetConfigurationRes(\n values = List(),\n unknownKeys = keys\n ))\n case x =>\n val opName = x.getClass.getSimpleName\n Future.failed(OcppException(\n PayloadErrorCode.NotSupported,\n s""Demo app doesn\'t support $opName""\n ))\n }\n}\n```\n\nThe behavior of this request handler is more or less equivalent to that of the\none in the example app. It is shorter at the price of being less type-safe:\nthis code does not check that you generate the right response type for the\nrequest, e.g. a `GetConfigurationRes` in response to a\n`GetConfigurationReq`.\n\n#### Sending requests\n\nSending requests is simple, as explained. You call the `send` method of your\nendpoint and off you go, like this:\n\n```scala\n connection.send(HeartbeatReq)\n```\n\nThe set of messages you can send with OCPP 1.x connections is defined in\n[ocpp-messages](ocpp-messages/src/main/scala/com/thenewmotion/ocpp/messages/v1x/Message.scala).\nFor every request type, you represent requests as instances of a case class\nwhose name ends in `Req`, e.g. `StatusNotificationReq`, `HeartbeatReq`.\n\nFor OCPP 1.x, these case classes in `ocpp-messages` are designed\naccording to two principles:\n * They are independent of OCPP version, so you have one interface to charging\n stations that use different versions\n * They sometimes group and rearrange fields to make it impossible to specify\n nonsense messages (e.g., no `vendorErrorCode` in status notifications that\n are not about errors). This makes it easier to write the code dealing with\n those requests, which does not have to validate things first.\n\nThis does mean that sometimes the way these case classes are defined may be a\nbit surprising to people familiar with the OCPP specification. So be it. Use\nthe link to the file above, or use ⌘P in IntelliJ IDEA, to see how to give these\ncase classes the right parameters to formulate the request you want to send.\n\nThis also means that it is possible to send requests that cannot be represented\nin the OCPP version that is used for the connection you send them over. In that\ncase `send` will return a failed future with an `OcppError` with error code\n`NotSupported`.\n\nThe result of the `send` method is a `Future[RES]`, where `RES` is the type\nof the response that belongs to the request you sent. So the type of this\nexpression:\n\n```scala\n connection.send(AuthorizeReq(idTag = ""12345678""))\n```\n\nis `Future[AuthorizeRes]`.\n\nAnd if you want to do something with the result, the code could look like this:\n\n```scala\n connection.send(AuthorizeReq(idTag = ""12345678"")).map { res =>\n if (res.idTag.status == AuthorizationStatus.Accepted)\n System.out.println(""12345678 is authorized."")\n else\n System.out.println(""12345678 has been rejected. No power to you!"")\n }\n```\n\nNote that the library does not by itself\nenforce the OCPP requirement that you wait for the response before sending the\nnext request. 
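A minimal sketch of respecting that requirement, assuming an implicit `ExecutionContext` is in scope (the `connection` val and request values are illustrative):\n\n```scala\n for {\n auth <- connection.send(AuthorizeReq(idTag = ""12345678""))\n _ <- connection.send(HeartbeatReq) // runs only after the Authorize response arrives\n } yield auth\n```\n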
Chaining the send operations in a `for` comprehension, as in the sketch above,\nis a simple way to obey it; the example app does the same.\n\n#### Error handling\n\nIf the remote side responds to your OCPP requests with a `CALLERROR` message\nindicating a failure to process your request, the future returned from `.send`\nwill be failed. The exception in there will be an `OcppException` object, which\ncontains an `OcppError` object, which contains the error code and description\nsent from the other side.\n\nIt works the same way in your own request handlers. You can return a failed\nfuture with an `OcppException`, and the library will turn this into a\n`CALLERROR` message and send it back to the remote side.\n\n### Using the cake pattern directly\n\nIf you want to build an OCPP-J client using a different WebSocket\nimplementation, or an OCPP-J server, you\'ll have to use the\n[cake](http://www.cakesolutions.net/teamblogs/2011/12/19/cake-pattern-in-depth)\nlayers directly.\n\nThe OCPP cake has three layers:\n\n * [`OcppConnectionComponent`](ocpp-j-api/src/main/scala/com/thenewmotion/ocpp/json/api/OcppConnection.scala): handles serialization and deserialization between OCPP request/response objects and SRPC messages\n * [`SrpcComponent`](ocpp-j-api/src/main/scala/com/thenewmotion/ocpp/json/api/SrpcConnection.scala): matches requests to responses, and serializes and deserializes SRPC messages to JSON\n * [`WebSocketComponent`](ocpp-j-api/src/main/scala/com/thenewmotion/ocpp/json/api/WebSocketConnection.scala): reads and writes JSON messages from and to a WebSocket connection\n\nThere are default implementations for the `OcppConnectionComponent` and\n`SrpcComponent`. The `WebSocketComponent` you will have to create yourself to\nintegrate with your WebSocket implementation of choice.\n\nTo put it in a diagram:\n\n```\n (your application logic)\n\n +--------------V--------------------^--------------+\n | com.thenewmotion.ocpp.messages.v1x.{Req, Res} |\n | |\n | OcppConnectionComponent layer |\n | |\n +--------------V--------------------^--------------+\n | com.thenewmotion.ocpp.json.TransportMessage |\n | |\n | SrpcComponent layer |\n | |\n +--------------V--------------------^--------------+\n | org.json4s.JValue |\n | |\n | WebSocketComponent layer |\n | |\n +--------------V--------------------^--------------+\n\n (WebSocket lib specific types)\n```\n\nSo the `OcppConnectionComponent` layer exchanges OCPP requests and responses\nwith your app. It exchanges SRPC messages, represented as\n[TransportMessage](ocpp-json/src/main/scala/com/thenewmotion/ocpp/json/TransportMessageProtocol.scala)\nobjects, with the `SrpcComponent` layer. The `SrpcComponent` layer exchanges\nJSON messages, represented as `org.json4s.JValue` objects, with the WebSocket\nlayer. The WebSocket layer then speaks to the WebSocket library, using whatever\ntypes that library uses, usually just `String`s.\n\nNow this is not the whole picture yet: besides just OCPP messages, the layers\nalso exchange connection commands and events, like ""close this connection!"" or\n""an error occurred sending that request"". The traits also define methods to\nexchange those, and so the total number of methods in your\n`OcppConnectionComponent with SrpcComponent with WebSocketComponent` instance\nmay be intimidating at first. 
Let\'s make a version of the above diagram that\nshows the methods defined by each layer:\n\n```\n OcppConnectionComponent.ocppConnection.sendRequest (call) OcppConnectionComponent.onRequest (override)\n +------------------V-----------------------------------------------^-----------------------------------------------------------+\n | |\n | OcppConnectionComponent layer |\n | |\n | SrpcConnectionComponent.srpcConnection.sendCall (call) |\n | SrpcConnectionComponent.srpcConnection.close (call) |\n | SrpcConnectionComponent.srpcConnection.forceClose (call) |\n | SrpcConnectionComponent.srpcConnection.onClose (call) SrpcConnectionComponent.onSrpcCall (override) |\n +------------------V-----------------------------------------------^-----------------------------------------------------------+\n | |\n | SrpcComponent layer |\n | |\n | WebSocketConnectionComponent.onMessage (override) |\n | WebSocketConnectionComponent.webSocketConnection.send (call) WebSocketConnectionComponent.onWebSocketDisconnect (override) |\n | WebSocketConnectionComponent.webSocketConnection.close (call) WebSocketConnectionComponent.onError (override) |\n +------------------V-----------------------------------------------^-----------------------------------------------------------+\n | |\n | WebSocketComponent layer |\n | |\n | (WebSocket library dependent) (WebSocket library dependent) |\n +------------------V-----------------------------------------------^-----------------------------------------------------------+\n```\n\nSo each layer defines the interface that the higher layers can use to\ncommunicate with it. For every layer, you see on the left how the higher layer\ncan give it information, and on the right how the higher layer gets information\nfrom it.\n\nFor instance, on top you see that the user of the whole cake can give\ninformation to the OCPP layer by calling the `ocppConnection.sendRequest`\nmethod. And on the lower middle right you see that the SRPC cake layer\ncan get information from the WebSocket layer by overriding the\n`onMessage`, `onDisconnect` and `onError` methods of the\n`WebSocketComponent`.\n\nThere has to be one instance of the whole cake for every open WebSocket\nconnection in the system. A server would typically maintain a mapping of\nWebSocket connection IDs from the underlying library to\n`OcppConnectionComponent with SrpcComponent with WebSocketComponent` instances\nfor them. When it receives an incoming WebSocket message, it will look up the\ncake for that connection, and pass the message to the WebSocket layer of that\ncake.\n\nSo now with this background information, the steps to constructing your cake would be:\n\n * Determine the kind of interface you want to the logic in the rest of your app\n\n * Create a trait extending `WebSocketComponent` that uses your WebSocket\n implementation of choice\n\n * Create the cake:\n\n * Do either `new CentralSystemOcpp1XConnectionComponent with DefaultSrpcComponent with MyWebSocketComponent { ... }`\n or `new ChargePointOcpp1XConnectionComponent with DefaultSrpcComponent with MyWebSocketComponent { ... }`\n\n * Define in it a `val webSocketConnection`, `val srpcConnection` and\n `val ocppConnection`. For `ocppConnection`, use one of the\n `defaultChargePointOcppConnection` and\n `defaultCentralSystemOcppConnection` methods defined by the\n `*Ocpp1XConnectionComponent` traits. 
For `val srpcConnection`, use\n `new DefaultSrpcConnection`.\n\n * Define all the ""(override)"" methods shown at the top of the cake to connect\n your app\'s request and response processing to the OCPP cake\n\n * Make the WebSocket layer call your WebSocket library to send messages over\n the socket, and make your WebSocket library call the cake for it to receive messages\n\n#### Putting this together for a server\n\nBecause that bit about the cake pattern is still quite abstract, let\'s\nlook at how we can implement an OCPP server using the cake pattern.\n\nAccording to the list of steps above, we first have to determine the\ninterface that we want our OCPP server component to have.\n\nThe interface I am thinking of here is something like this:\n\n```\nabstract class OcppJsonServer(listenPort: Int, ocppVersion: Version) {\n\n type OutgoingEndpoint = OutgoingOcppEndpoint[ChargePointReq, ChargePointRes, ChargePointReqRes]\n\n def handleConnection: OutgoingEndpoint => CentralSystemRequestHandler\n}\n```\n\nSo if someone writes a back-office system using this server component,\nshe has to provide three things:\n\n * the TCP port to listen on, as a constructor argument\n\n * The OCPP version to use, as a constructor argument (I\'m too lazy to worry\n about version negotiation right now)\n\n * Her own logic for how to handle and send OCPP messages over the\n connections to this server. She should specify this as a method\n `handleConnection` that gets an endpoint for sending outgoing\n requests as an argument, and returns to the server component a\n request handler for handling incoming requests\n\nSo now we have decided on an interface and we move to step two: create\na trait extending `WebSocketComponent` that uses our WebSocket\nimplementation of choice. Here we are using java-websocket, so we\'ll\nend up using `WebSocketServer`.\n\nIf we look at the WebSocketServer API, we see that it is based on\ninstantiating a `WebSocketServer` object with overridden methods to\nhandle incoming messages. The interface looks like this:\n\n```java\npublic abstract class WebSocketServer extends AbstractWebSocket implements Runnable {\n\n // These methods are provided by the implementation...\n\n public WebSocketServer(InetSocketAddress address) { ... }\n\n public void start() { ... }\n\n public void stop() { ... }\n\n // ...and these are to be overridden by the user\n public abstract void onStart();\n\n public abstract void onOpen( WebSocket conn, ClientHandshake handshake );\n\n public abstract void onClose( WebSocket conn, int code, String reason, boolean remote );\n\n public abstract void onMessage( WebSocket conn, String message );\n\n public abstract void onError( WebSocket conn, Exception ex );\n}\n```\n\nSo for each connection, `WebSocketServer` creates a `WebSocket` object,\nand then passes that into the callback. We will create our\n`WebSocketComponent` instance so that it calls the `send` method on such\na `WebSocket` to send outgoing messages. That means a first stab at our\n`WebSocketComponent` looks like this:\n\n```scala\nimport org.java_websocket.WebSocket\nimport org.json4s.JValue\nimport org.json4s.native.JsonMethods.{compact, render}\n\ntrait SimpleServerWebSocketComponent extends WebSocketComponent {\n\n trait SimpleServerWebSocketConnection extends WebSocketConnection {\n\n def webSocket: WebSocket\n\n def send(msg: JValue): Unit = webSocket.send(compact(render(msg)))\n\n def close(): Unit = webSocket.close()\n }\n}\n```\n\nThat brings us to step 3: creating the cake. 
Now we want to create a\ncake for every incoming WebSocket connection to the server, so that\nmeans we also have to create a WebSocket server that will create a cake\nin its `onOpen` method:\n\n```scala\nimport java.net.InetSocketAddress\nimport scala.concurrent.ExecutionContext\nimport org.java_websocket.WebSocket\nimport org.java_websocket.handshake.ClientHandshake\nimport org.java_websocket.server.WebSocketServer\nimport messages.v1x._\n\nabstract class OcppJsonServer(listenPort: Int, ocppVersion: Version)\n extends WebSocketServer(new InetSocketAddress(listenPort)) {\n\n type OutgoingEndpoint = OutgoingOcppEndpoint[ChargePointReq, ChargePointRes, ChargePointReqRes]\n\n def handleConnection: OutgoingEndpoint => CentralSystemRequestHandler\n\n override def onStart(): Unit = {}\n\n override def onOpen(conn: WebSocket, hndshk: ClientHandshake): Unit = {\n\n val ocppConnection = new CentralSystemOcpp1XConnectionComponent with DefaultSrpcComponent with SimpleServerWebSocketComponent {\n override val ocppConnection: DefaultOcppConnection = defaultCentralSystemOcppConnection\n\n override val srpcConnection: DefaultSrpcConnection = new DefaultSrpcConnection()\n\n override val webSocketConnection: SimpleServerWebSocketConnection = new SimpleServerWebSocketConnection {\n val webSocket: WebSocket = conn\n }\n\n def onRequest[REQ <: CentralSystemReq, RES <: CentralSystemRes](req: REQ)(implicit reqRes: CentralSystemReqRes[REQ, RES]) = ???\n\n implicit val executionContext: ExecutionContext = ???\n\n def ocppVersion: Version = ???\n }\n }\n\n override def onClose(\n conn: WebSocket,\n code: Int,\n reason: String,\n remote: Boolean\n ): Unit = ???\n\n override def onMessage(conn: WebSocket, message: String): Unit = ???\n\n override def onError(conn: WebSocket, ex: Exception): Unit = ???\n}\n```\n\nOuch, that\'s a big load of code there. Still, it came about after a few\nsimple steps:\n\n 1. Take the interface template for OcppJsonServer that we started this\n example with\n 2. Make it extend WebSocketServer, and add `???` implementations of\n its abstract methods\n 3. In the `onOpen` method from the `WebSocketServer` abstract class,\n create a cake with the three layers: `CentralSystemOcpp1XConnectionComponent with DefaultSrpcComponent with SimpleServerWebSocketComponent`\n 4. Add `???` implementations of all the abstract methods in the cake\n 5. Add actual definitions of the `ocppConnection`, `srpcConnection`\n and `webSocketConnection` members of the three cake layers.\n\nNote that at step 5, we have passed the `WebSocket` argument of the\n`onOpen` method on into the `SimpleServerWebSocketConnection` so that\nthe cake can later use this `WebSocket` to send OCPP messages over.\n\nSo by now we\'re at the fourth point of the five-step cake plan: connect our\napp\'s logic to the OCPP cake. The app\'s logic, in our case, is the\n`handleConnection` that the library user specifies. 
And then, _inside_\nthe OCPP cake definition in `onOpen`, we create the outgoing endpoint,\nand call the user-defined `handleConnection` on it so a request handler\nfor incoming messages is created:\n\n```scala\nprivate val outgoingEndpoint = new OutgoingEndpoint {\n def send[REQ <: ChargePointReq, RES <: ChargePointRes](req: REQ)(implicit reqRes: ChargePointReqRes[REQ, RES]): Future[RES] =\n ocppConnection.sendRequest(req)\n\n def close(): Future[Unit] = srpcConnection.close()\n}\n\nprivate val requestHandler = handleConnection(outgoingEndpoint)\n```\n\nNow that we have the incoming request handler, we can also fill\nin definitions for the `onRequest` handler in the OCPP cake:\n\n```scala\ndef onRequest[REQ <: CentralSystemReq, RES <: CentralSystemRes](req: REQ)(implicit reqRes: CentralSystemReqRes[REQ, RES]) =\n requestHandler(req)\n```\n\nAnd then, there is the `ocppVersion` method in `OcppConnectionComponent`\nthat will tell it which OCPP version to use to serialize and deserialize\nOCPP messages to JSON. We have to fill in the OCPP version that was\npassed in to the `OcppJsonServer` constructor. To avoid name clashes,\nwe rename the argument to `requestedOcppVersion`:\n\n```scala\nabstract class OcppJsonServer(listenPort: Int, requestedOcppVersion: Version)\n```\n\nand then fill in this `requestedOcppVersion` argument in the cake\'s\n`ocppVersion` method:\n\n```scala\ndef ocppVersion: Version = requestedOcppVersion\n```\n\nBy now, the OCPP cake definition body is free of any reference to ""???"". The\ncake is fully linked to the app\'s business logic.\n\nThat means we\'re at the fifth and last step of the five-step plan: We have to\nmake sure that when a new connection is opened, we will call this function to\ncreate a `CentralSystemRequestHandler`, and that we later call this `CentralSystemRequestHandler`\nwhenever a message comes in on the connection. 
To make this happen, we will need\na map from `WebSocket` instances that the server gets in its `onMessage` method,\nto the OCPP cake instance that will process messages for that connection.\n\nSo let\'s add this to the `OcppJsonServer` class: link the WebSocket connections\nwe\'re using to their OCPP cakes.\n\n```scala\n private type OcppCake = CentralSystemOcpp1XConnectionComponent with DefaultSrpcComponent with SimpleServerWebSocketComponent\n\n object connectionMap {\n private val ocppConnections: mutable.Map[WebSocket, OcppCake] =\n mutable.HashMap[WebSocket, OcppCake]()\n\n def put(conn: WebSocket, cake: OcppCake): Unit = connectionMap.synchronized {\n ocppConnections.put(conn, cake)\n ()\n }\n\n def remove(conn: WebSocket): Option[OcppCake] = connectionMap.synchronized {\n ocppConnections.remove(conn)\n }\n\n def get(conn: WebSocket): Option[OcppCake] = connectionMap.synchronized {\n ocppConnections.get(conn)\n }\n }\n```\n\nWe use a mutable map, wrapped in its own little `object connectionMap`\nthat assures thread-safety.\n\nTo fill the map, we add this line at the bottom of `onOpen` in our\n`OcppJsonServer`:\n\n```scala\nconnectionMap.put(conn, ocppConnection)\n```\n\nand as responsible professionals, let\'s also remove the entry from the map again\nwhen a connection is closed, by changing the definition of `onClose` to this:\n\n```scala\noverride def onClose(\n conn: WebSocket,\n code: Int,\n reason: String,\n remote: Boolean\n): Unit = {\n connectionMap.remove(conn)\n ()\n}\n```\n\nNow the last thing to be done on the way to a working server is making the\n`onMessage`, `onClose` and `onError` callbacks of `WebSocketServer`\nactually call into the connection\'s OCPP cake to let it process the message. It\nturns out that to do this, we have to add three methods to the\n`SimpleServerWebSocketComponent`:\n\n```scala\ndef feedIncomingMessage(msg: String) = self.onMessage(org.json4s.native.JsonMethods.parse(msg))\n\ndef feedIncomingDisconnect(): Unit = self.onWebSocketDisconnect()\n\ndef feedIncomingError(err: Exception) = self.onError(err)\n```\n\nTo make that work, we also have to make SimpleServerWebSocketComponent aware\nthat it is to be mixed into something that is also an SrpcComponent, by adding\na self-type:\n\n```scala\ntrait SimpleServerWebSocketComponent extends WebSocketComponent {\n\n self: SrpcComponent =>\n\n ...\n```\n\nand back in `OcppJsonServer`, we change the implementation of `onMessage`,\n`onClose` and `onError` to feed those events into the cake for the right\nconnection:\n\n```scala\noverride def onClose(\n conn: WebSocket,\n code: Int,\n reason: String,\n remote: Boolean\n): Unit = {\n connectionMap.remove(conn) foreach { c =>\n c.feedIncomingDisconnect()\n }\n}\n\noverride def onMessage(conn: WebSocket, message: String): Unit =\n connectionMap.get(conn) foreach { c =>\n c.feedIncomingMessage(message)\n }\n\noverride def onError(conn: WebSocket, ex: Exception): Unit =\n connectionMap.get(conn) foreach { c =>\n c.feedIncomingError(ex)\n }\n```\n\nThat\'s it, it should work now!\n\nIn fact, upon testing it, I realize that with this interface, the server-side\ncode doesn\'t know the ChargePointIdentity of the client. That\'s not very\nhelpful; an OCPP back-office system will probably want to know which charge\npoints are connected to it.
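Recall how a charge point connects: in OCPP-J, the client identifies itself through the last segment of the URL path it connects to, so a charge point with identity `RDAM123` connecting to our server would use a URL like this (hypothetical host):

```
ws://cs.example.org:2345/RDAM123
```

That identity is right there in the connection handshake; we just never look at it.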
So let\'s change the definition of `handleConnection`\nin `OcppJsonServer` to this:\n\n```scala\ndef handleConnection(clientChargePointIdentity: String, remote: OutgoingEndpoint): CentralSystemRequestHandler\n```\n\nand let\'s pass the ChargePointIdentity into it by changing `onOpen` to this:\n\n```scala\n override def onOpen(conn: WebSocket, hndshk: ClientHandshake): Unit = {\n\n val uri = hndshk.getResourceDescriptor\n uri.split(""/"").lastOption match {\n\n case None =>\n conn.close(1003, ""No ChargePointIdentity in path"")\n\n case Some(chargePointIdentity) =>\n onOpenWithCPIdentity(conn, chargePointIdentity)\n }\n }\n\n private def onOpenWithCPIdentity(conn: WebSocket, chargePointIdentity: String): Unit = {\n val ocppConnection = new CentralSystemOcpp1XConnectionComponent with DefaultSrpcComponent with SimpleServerWebSocketComponent {\n ... // continues as in earlier definition of onOpen\n```\n\nSo there we check if the URL includes the charge point identity. If it doesn\'t,\nwe immediately close the WebSocket connection with an error message. If we do\nhave a ChargePointIdentity, we proceed to handle the open as we did before. And\nnow it should really be done.\n\nIn order to save you the typing and bugfixing, an actual tested version of the\nserver developed while writing this [is\nincluded](ocpp-j-api/src/main/scala/com/thenewmotion/ocpp/json/api/server/OcppJsonServer.scala).\nWith the OCPP 2.0 work though, it has become a bit more involved because it is\nnow an abstract class that is extended by 1.x and 2.0 specific classes.\n\nAnd there is also a small\n[example app](example-json-server/src/main/scala/com/thenewmotion/ocpp/json/example/ExampleServerTestApp.scala)\nthat shows how to use the `OcppJsonServer` interface. It will listen on port\n2345, and return `NotImplemented` to any request except BootNotification. In\nresponse to a BootNotification, it will send a response and also send a\nGetConfiguration request back to the client.
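Paraphrased, the heart of that example app is a request handler built inside `handleConnection`, so that it can use the `remote` endpoint to talk back to the charge point. The sketch below is not the exact code of the example app, and `OcppException`, `PayloadErrorCode` and the exact response field names are assumptions; see the linked file for the real thing:

```scala
(req: CentralSystemReq) =>
  req match {
    case _: BootNotificationReq =>
      // fire a GetConfiguration back at the client, ignoring its answer here
      remote.send(GetConfigurationReq(Nil))
      BootNotificationRes(
        status = RegistrationStatus.Accepted,
        currentTime = ZonedDateTime.now(),
        interval = FiniteDuration(5, "minutes")
      )
    case _ =>
      throw OcppException(PayloadErrorCode.NotImplemented, "only BootNotification here")
  }
```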
To run it, you do:\n\n```\n$ sbt ""project example-json-server"" run\n```\n\nThere is also an [OCPP 2.0 version of this server](example-json-server-20/src/main/scala/com/thenewmotion/ocpp/json/example/ExampleServerTestApp.scala).\nYou can run it with:\n\n```\n$ sbt ""project example-json-server-20"" run\n```\n\n### Just serializing\n\nIf you do not need the connection management provided by the high-level API,\nyou can still use the `ocpp-json` module for serializing and deserializing OCPP\nmessages that you will send or receive using other libraries.\n\nTo do so, call the methods in the [Serialization](ocpp-json/src/main/scala/com/thenewmotion/ocpp/json/v1x/Serialization.scala)\nobject after importing either\n`com.thenewmotion.ocpp.json.v1x.v15.SerializationV15._`\nor `com.thenewmotion.ocpp.json.v1x.v16.SerializationV16._` to select which OCPP\nversion to use:\n\n```scala\n\n import com.thenewmotion.ocpp.json.OcppJ\n import com.thenewmotion.ocpp.messages.{v1x => messages}\n import com.thenewmotion.ocpp.Version\n import com.thenewmotion.ocpp.json.v1x.v16.SerializationV16._\n\n OcppJ.write(messages.AuthorizeReq(idTag = ""ABCDEF012""))\n // this results in:\n // res6: String = {""idTag"":""ABCDEF012""}\n\n OcppJ.read[messages.AuthorizeReq, Version.V16.type](""""""{""idTag"":""ABCDEF012""}"""""")\n // this results in:\n // res10: com.thenewmotion.ocpp.messages.AuthorizeReq = AuthorizeReq(ABCDEF012)\n```\n\nThere are also `serialize` and `deserialize` methods on the `Serialization` object that\nuse json4s `JValue`s as the representation of JSON instead of raw `String`s.\nYou can use those to build the [SRPC](http://www.gir.fr/ocppjs/ocpp_srpc_spec.shtml)\nmessages that are sent over the WebSocket. See [TransportMessage](ocpp-json/src/main/scala/com/thenewmotion/ocpp/json/TransportMessageProtocol.scala)\nand [TransportMessageJsonSerializers](ocpp-json/src/main/scala/com/thenewmotion/ocpp/json/TransportMessageJsonSerializers.scala)\nfor how to work with those.\n\n\n## OCPP 2.0 support\n\nOCPP 2.0 presents a major revision of the OCPP protocol, vastly improving\nauthentication and encryption of the connection, and bringing new, more refined\nmodels of the charging station\'s configuration and the lifecycle of a charge\ntransaction.\n\nThis means that it is not possible to write code that will transparently work\nwith both OCPP 2.0 and older OCPP versions, as is possible with this library\namong the 1.x versions of OCPP.\n\nYou can use this library already to exchange OCPP 2.0 messages, but not all\nmessage types, error codes, security mechanisms etc. added by 2.0 are supported\nyet. We\'re ""glass half full"" people, so we\'ll first explain what works, and then\nmove on to explain what still remains to be done.\n\n### What this library offers\n\n#### Creating an OCPP 2.0 connection\n\nUsing `OcppJsonClient.forVersion20`, you can create an instance of\n`OcppJsonClient`, more specifically `Ocpp20JsonClient`, that will let you send\nand receive OCPP 2.0 messages.\n\n#### OCPP 2.0 messages\n\nFor OCPP 2.0, the message case classes are named `Request` and\n`Response`, e.g. `BootNotificationRequest` and\n`BootNotificationResponse`. They live in the\n`com.thenewmotion.ocpp.messages.v20` package. They extend the\n`com.thenewmotion.ocpp.messages.Request` and\n`com.thenewmotion.ocpp.messages.Response` types depending on whether they are\nrequests or responses.
They also extend the `CsRequest`, `CsResponse`,\n`CsmsRequest` and `CsmsResponse` types defined in the\n`com.thenewmotion.ocpp.messages.v20` package, depending on which side is\nexecuting the request. `CsRequest` and `CsResponse` are for operations executed\nby the ""Cs"" (Charging Station, equivalent to the ""Charge Point"" of OCPP 1.x);\n`CsmsRequest` and `CsmsResponse` are for operations executed by the ""Csms""\n(Charging Station Management System, equivalent to the ""Central System"" of OCPP\n1.x).\n\nUnlike the 1.5/1.6 message case classes, the OCPP 2.0 case classes for requests\nand responses directly reflect the message structures defined in the OCPP 2.0\nspecification. This is done to reduce the cognitive load when comparing\nmessage definitions in Scala code to the OCPP specification, and because OCPP\n2.0 as of now has no protocol versions that are similar enough to write code\nthat can transparently deal with multiple protocol versions.\n\nThe price of giving up the version-independent message case classes is\nthat some counterintuitive idiosyncrasies of the OCPP specification, like\nstarting to count at 1 instead of 0 and using 0 as a sentinel, are now\ninflicted upon unsuspecting Scala developers.\n\n#### Supported operations\n\nCurrently the library has message case classes in place for the following\noperations:\n\nCharging Station operations:\n\n * GetBaseReport\n * GetTransactionStatus\n * GetVariables\n * RequestStartTransaction\n * RequestStopTransaction\n * SendLocalList\n * SetVariables\n\nCharging Station Management System operations:\n\n * Authorize\n * BootNotification\n * Heartbeat\n * StatusNotification\n * TransactionEvent\n\n### What remains to be done\n\n * Support new RPC-level error codes\n\n * Although this is actually about SRPC, perhaps the simplest implementation\n would be in the OCPP layer, where we already distinguish the versions.\n The Ocpp1XConnectionComponent could filter error codes sent and received\n and map the ones unsupported by OCPP 1.x to the closest alternative that\n is supported. That would save us the complexity of multiple SrpcComponent\n implementations and choosing between them.\n\n * Message case classes and serializers for all operations\n\n * To add a message, you should:\n * Add Request and Response case classes [here](ocpp-messages/src/main/scala/com/thenewmotion/ocpp/messages/v20/Message.scala)\n * Add a ReqRes instance [here](ocpp-messages/src/main/scala/com/thenewmotion/ocpp/messages/v20/ReqRes.scala)\n * Add an Ocpp20Procedure [here](ocpp-json/src/main/scala/com/thenewmotion/ocpp/json/v20/Ocpp20Procedure.scala)\n * Add necessary JSON serializers (e.g. 
for new `Enumerable`s) [here](ocpp-json/src/main/scala/com/thenewmotion/ocpp/json/v20/serialization)\n * Add a JSON schema validation test [here](ocpp-json/src/test/scala/com/thenewmotion/ocpp/json/v20/JsonSchemaValidationSpec.scala)\n * Add a JSON serialization/deserialization round-trip test [here](ocpp-json/src/test/scala/com/thenewmotion/ocpp/json/v20/SerializationSpec.scala)\n\n * Maybe we can find a way to make it easier to add these messages, or at\n least make it straightforward so that you can just follow the compiler\n errors until it compiles and then it also works.\n\n * Factory method on OcppJsonClient to give caller a 1.x or 2.0 client\n depending on negotiation with server\n\n * Similarly on `OcppJsonServer`, create a factory method that lets people\n create a server with two request handlers, one for 1.x and one for 2.0, and\n let the server negotiate the OCPP version to use with the client.\n\n * Add a mechanism for the library to report standardized security events\n about the connection\n\n * Add support for the 3 security profiles for authenticated encryption\n\n * Perhaps then add a note that the ready-made server class and app in this\n library are not intended for production use, do not support the\n authenticated encryption of the OCPP channel, and are as such insecure.\n\n * Support for JSON web signatures in the RPC-level encoding\n\n## Changelog\n\n### Changes in 9.2.2\n\n - JsonOperations.reqRes: handle both serialization and deserialization within the Future context\n\n### Changes in 9.2.1\n\n - Remove the Scala 2.13 build until dependencies are fixed\n\n### Changes in 9.2.0\n\n - Added support for `v1x.ChargePointDataTransferReq` and `v1x.ChargePointDataTransferRes` to\n `com.thenewmotion.ocpp.json.v1x.v15.SerializationV15` and `com.thenewmotion.ocpp.json.v1x.v16.SerializationV16`\n\n - Add a Scala 2.13 build\n\n - Add OpenJDK 8 and 11 builds\n\n### Changes in 9.1.0\n\n - Added more OCPP 2.0 messages\n\n - Added an OCPP 2.0 example server app\n\n### Changes in 9.0.1\n\n - Renamed `com.thenewmotion.ocpp.VersionFamily.V1XCentralSystemRequest` to\n `com.thenewmotion.ocpp.VersionFamily.V1XCentralSystemMessages` as it should\n have been named all along.\n\n### Changes in 9.0.0\n\n - A start was made with support for OCPP 2.0\n\n - `OcppJsonServer` has now been split into two classes, depending on\n whether you want to serve OCPP 2.0 or OCPP 1.5/1.6:\n `Ocpp1XJsonServer` and `Ocpp20JsonServer`\n\n - `OcppJsonClient` has similarly been split into `Ocpp1XJsonClient` and\n `Ocpp20JsonClient`. Factory methods `OcppJsonClient.forVersion1x`\n and `OcppJsonClient.forVersion20` are available for easier\n instantiation of `OcppJsonClient` instances.\n\n - The `connection` member of `OcppJsonClient` was made private. 
To\n see the version of OCPP used by an `OcppJsonClient`, you can now use\n the `ocppVersion` method on the `OcppJsonClient` class.\n\n - The interfaces of the `ocpp-messages` and `ocpp-json` projects dealing\n with OCPP 1.x messages have moved to the `com.thenewmotion.ocpp.messages.v1x`\n and `com.thenewmotion.ocpp.json.v1x` packages, respectively.\n\n So code that used `com.thenewmotion.ocpp.messages.AuthorizationReq`\n must be changed to use `com.thenewmotion.ocpp.messages.v1x.AuthorizationReq`,\n and code that used `com.thenewmotion.ocpp.json.JsonOperation` must be changed\n to use `com.thenewmotion.ocpp.json.v1x.JsonOperation`.\n\n - `com.thenewmotion.ocpp.json.OcppJ` was renamed to\n `com.thenemwotion.ocpp.json.v1x.Serialization`\n\n### Changes in 8.0.0\n\n - Move the code for handling OCPP over SOAP to another project\n\n - Add a Scala 2.12 build\n\n### Changes in 7.0.0\n\n - Wait for pending incoming requests to be answered before closing a WebSocket\n connection\n\n - `OcppJsonClient.close` is now asynchronous; it returns a future that is\n completed once the connection is closed.\n\n - `OcppJsonClient` now has an apply method to construct\n `OcppJsonClient` instances without overriding any members\n\n - The `IncomingOcppEndpoint` trait is gone because the `onError` and\n `onDisconnect` methods were removed, so that only the\n `requestHandler` member was left and can simply take a\n `RequestHandler` in every place where the library previously\n expected an `IncomingOcppEndpoint`.\n\n - The rudimentary `onError` method is removed from `OcppJsonClient`.\n All OCPP errors are reported as failed futures returned from\n `OcppJsonClient.send`.\n\n - As part of the same dead code removal, the `onOcppError` method in\n `OcppComponent` is also gone\n\n - The `onDisconnect` method is removed from `OcppJsonClient` because\n closes are now signaled via an `onClose` member which returns a\n future which is completed once the connection is closed.\n\n - Because it is no longer needed to implement\n `IncomingOcppEndpoint.onDisconnect`, the\n `SrpcComponent.onSrpcDisconnect` method is removed.\n\n - The more robust closing involved changing some method names in the `SrpcConnectionComponent` and `WebSocketComponent`:\n\n * `SrpcComponent#SrpcConnection.send` is now `SrpcComponent#SrpcConnection.sendCall`\n * `SrpcComponent#SrpcConnection.close` now works asynchronously and returns a `Future[Unit]`\n * `SrpcComponent#SrpcConnection.forceClose` was added and works like the old `.close`, immediately closing the underlying WebSocket without waiting for processing to complete\n * `SrpcComponent.onSrpcRequest` is now `SrpcComponent.onSrpcCall`\n * `WebSocketComponent.onDisconnect` was renamed to `WebSocketComponent.onWebSocketDisconnect`\n\n - The case classes for SRPC messages were renamed to reflect the names used in the specification\n\n - Fixed a bug in the serialization of SendLocalListReq, where a member was\n called ""localAuthorisationList"" instead of ""localAuthorizationList""\n\n### Changes in 6.0.3\n\n - Support DataTransfer messages from Charge Point to Central System also over SOAP\n\n### Changes in 6.0.2\n\n - Support DataTransfer messages from Charge Point to Central System\n\n### Changes in 6.0.1\n\n - Throw more meaningful exceptions instead of always `VersionMismatch` when an `OcppJsonClient` fails to connect\n\n### Changes in 6.0.0 compared to version 4.x\n\nThis library had been stable for a few years between 2014 and 2017, with 4.x.x\nversion numbers, supporting OCPP-S 1.2 and 1.5, 
and OCPP-J 1.5, but not 1.6. Now\nthat 1.6 support has been added with version 6.0.0, many wildly incompatible\nchanges to the library interface were made while we were at it. The most\nimportant ones to be aware of when porting older code:\n\n - The `CentralSystem` and `ChargePoint` traits were renamed to\n `SyncCentralSystem` and `SyncChargePoint`. The names `CentralSystem` and\n `ChargePoint` are now used for asynchronous versions of these traits that\n return `Future`s.\n - In the high-level JSON API, request-response-handling has become more\n type-safe. Your request handler is no longer just a function from requests\n to responses, but now a `RequestHandler` which will also verify that you\n produce the right response type for the given request.\n - The library now uses [enum-utils](https://github.com/NewMotion/enum-utils)\n instead of Scala\'s `Enumeration`s\n - The library now uses Java 8\'s `java.time` for date and time handling instead\n of `com.thenewmotion.time`.\n - `JsonDeserializable` was renamed to `JsonOperation` and now handles not only\n deserialization but also serialization of OCPP messages for OCPP-J.\n - `OcppJsonClient` now takes a version parameter\n\n## Licensing and acknowledgements\n\nThe contents of this repository are \xc2\xa9 2012 - 2018 The New Motion B.V., licensed under the [GPL version 3](LICENSE), except:\n\n * [The example messages for OCPP 1.5](ocpp-json/src/test/resources/com/thenewmotion/ocpp/json/v1x/ocpp15/without_srpc) in the ocpp-json unit tests, which were taken from [GIR ocppjs](http://www.gir.fr/ocppjs/).\n\n * The [JSON schema files](ocpp-json/src/test/resources/com/thenewmotion/ocpp/json/v1x/v16/schemas/) for OCPP 1.6 are part of the OCPP 1.6 Specification, distributed under the following conditions:\n\n ```\n Copyright \xc2\xa9 2010 \xe2\x80\x93 2015 Open Charge Alliance. All rights reserved.\n This document is made available under the *Creative Commons Attribution- NoDerivatives 4.0 International Public License* (https://creativecommons.org/licenses/by-nd/4.0/legalcode).\n ```\n\n * The [JSON schema files](ocpp-json/src/test/resources/com/thenewmotion/ocpp/json/v20/schemas/) for OCPP 2.0 are part of the OCPP 2.0 Specification, distributed under the following conditions:\n\n ```\n Copyright \xc2\xa9 2010 \xe2\x80\x93 2018 Open Charge Alliance. All rights reserved.\n This document is made available under the *Creative Commons Attribution-NoDerivatives 4.0 International Public License*\n (https://creativecommons.org/licenses/by-nd/4.0/legalcode).\n ```\n\n\n'",,"2012/08/01, 06:14:34",4102,GPL-3.0,0,582,"2021/10/08, 09:39:37",7,29,38,0,747,0,0.4,0.2272727272727273,,,0,8,false,,false,false,,,https://github.com/ShellRechargeSolutionsEU,https://shellrecharge.com,"Amsterdam, Netherlands",,,https://avatars.githubusercontent.com/u/799802?v=4,,, ocpp,Python implementation of the Open Charge Point Protocol.,mobilityhouse,https://github.com/mobilityhouse/ocpp.git,github,"ocpp,framework,client,server,electric-vehicles",Mobility and Transportation,"2023/10/24, 13:33:08",565,63,227,true,Python,The Mobility House,mobilityhouse,"Python,Makefile",,"b'.. image:: https://github.com/mobilityhouse/ocpp/actions/workflows/pull-request.yml/badge.svg?style=svg\n :target: https://github.com/mobilityhouse/ocpp/actions/workflows/pull-request.yml\n\n.. image:: https://img.shields.io/pypi/pyversions/ocpp.svg\n :target: https://pypi.org/project/ocpp/\n\n.. 
image:: https://img.shields.io/readthedocs/ocpp.svg\n :target: https://ocpp.readthedocs.io/en/latest/\n\nOCPP\n----\n\nPython package implementing the JSON version of the Open Charge Point Protocol\n(OCPP). Currently OCPP 1.6 (errata v4), OCPP 2.0 and OCPP 2.0.1 (Final Version)\nare supported.\n\nYou can find the documentation on `rtd`_.\n\nInstallation\n------------\n\nYou can either install the project from PyPI:\n\n.. code-block:: bash\n\n $ pip install ocpp\n\nOr clone the project and install it manually using:\n\n.. code-block:: bash\n\n $ pip install .\n\nQuick start\n-----------\n\nBelow you can find examples of how to create a simple OCPP 2.0 central system as\nwell as an OCPP 2.0 charge point.\n\n.. note::\n\n To run these examples, the dependency websockets_ is required! Install it by running:\n\n .. code-block:: bash\n\n $ pip install websockets\n\nCentral system\n~~~~~~~~~~~~~~\n\nThe code snippet below creates a simple OCPP 2.0 central system which is able\nto handle BootNotification calls. You can find a detailed explanation of the\ncode in the `Central System documentation`_.\n\n\n.. code-block:: python\n\n import asyncio\n import logging\n import websockets\n from datetime import datetime\n\n from ocpp.routing import on\n from ocpp.v201 import ChargePoint as cp\n from ocpp.v201 import call_result\n from ocpp.v201.enums import RegistrationStatusType\n\n logging.basicConfig(level=logging.INFO)\n\n\n class ChargePoint(cp):\n @on(\'BootNotification\')\n async def on_boot_notification(self, charging_station, reason, **kwargs):\n return call_result.BootNotificationPayload(\n current_time=datetime.utcnow().isoformat(),\n interval=10,\n status=RegistrationStatusType.accepted\n )\n\n\n async def on_connect(websocket, path):\n """""" For every new charge point that connects, create a ChargePoint\n instance and start listening for messages.\n """"""\n try:\n requested_protocols = websocket.request_headers[\n \'Sec-WebSocket-Protocol\']\n except KeyError:\n logging.info(""Client hasn\'t requested any Subprotocol. ""\n ""Closing Connection"")\n return await websocket.close()\n\n if websocket.subprotocol:\n logging.info(""Protocols Matched: %s"", websocket.subprotocol)\n else:\n # In the websockets lib if no subprotocols are supported by the\n # client and the server, it proceeds without a subprotocol,\n # so we have to manually close the connection.\n logging.warning(\'Protocols Mismatched | Expected Subprotocols: %s,\'\n \' but client supports %s | Closing connection\',\n websocket.available_subprotocols,\n requested_protocols)\n return await websocket.close()\n\n charge_point_id = path.strip(\'/\')\n cp = ChargePoint(charge_point_id, websocket)\n\n await cp.start()\n\n\n async def main():\n server = await websockets.serve(\n on_connect,\n \'0.0.0.0\',\n 9000,\n subprotocols=[\'ocpp2.0.1\']\n )\n logging.info(""WebSocket Server Started"")\n await server.wait_closed()\n\n if __name__ == \'__main__\':\n asyncio.run(main())\n\nCharge point\n~~~~~~~~~~~~\n\n.. 
code-block:: python\n\n import asyncio\n\n from ocpp.v201.enums import RegistrationStatusType\n import logging\n import websockets\n\n from ocpp.v201 import call\n from ocpp.v201 import ChargePoint as cp\n\n logging.basicConfig(level=logging.INFO)\n\n\n class ChargePoint(cp):\n\n async def send_boot_notification(self):\n request = call.BootNotificationPayload(\n charging_station={\n \'model\': \'Wallbox XYZ\',\n \'vendor_name\': \'anewone\'\n },\n reason=""PowerUp""\n )\n response = await self.call(request)\n\n if response.status == RegistrationStatusType.accepted:\n print(""Connected to central system."")\n\n\n async def main():\n async with websockets.connect(\n \'ws://localhost:9000/CP_1\',\n subprotocols=[\'ocpp2.0.1\']\n ) as ws:\n cp = ChargePoint(\'CP_1\', ws)\n\n await asyncio.gather(cp.start(), cp.send_boot_notification())\n\n\n if __name__ == \'__main__\':\n asyncio.run(main())\n\nDebugging\n---------\n\nPython\'s default log level is `logging.WARNING`. As a result, most of the logs\ngenerated by this package are discarded. To see the log output of this package,\nlower the log level to `logging.DEBUG`.\n\n.. code-block:: python\n\n import logging\n logging.basicConfig(level=logging.DEBUG)\n\nHowever, this approach defines the log level for the complete logging system.\nIn other words: the log level of all dependencies is set to `logging.DEBUG`.\n\nTo lower the log level for this package only, use the following code:\n\n.. code-block:: python\n\n import logging\n logging.getLogger(\'ocpp\').setLevel(level=logging.DEBUG)\n logging.getLogger(\'ocpp\').addHandler(logging.StreamHandler())\n\nLicense\n-------\n\nExcept for the documents in `docs/v16` and `docs/v201`, everything is licensed under MIT_.\n\xc2\xa9 `The Mobility House`_\n\nThe documents in `docs/v16` and `docs/v201` are licensed under Creative Commons\nAttribution-NoDerivatives 4.0 International Public License.\n\n.. _Central System documentation: https://ocpp.readthedocs.io/en/latest/central_system.html\n.. _MIT: https://github.com/mobilityhouse/ocpp/blob/master/LICENSE\n.. _rtd: https://ocpp.readthedocs.io/en/latest/index.html\n.. _The Mobility House: https://www.mobilityhouse.com/int_en/\n.. 
_websockets: https://pypi.org/project/websockets/\n'",,"2019/05/09, 07:25:09",1630,MIT,36,167,"2023/10/19, 08:06:57",78,155,413,132,6,18,1.0,0.5263157894736843,"2023/10/19, 09:54:11",0.21.0,2,31,false,,false,false,"joaonpsilva/IT_CSMS,heroyooki/ocpp-garage,jeongsooh/grecsms,amartinezEtraid/ocpp,sh-101/docker-ocpp,tomazinhal/vcp-api,heroyooki/ocpp-fastapi-vue3,tomwebmaster/python_test,Oliver-65535/ocpp-cs-with-redis,vincenzo-suraci-ares2t/charge_advisor,ovchars2/charging-station-mfrc522-pzem004,Singizin/ChargePi-cutted,PabloTToledano/OCPP-Honeypot,rebelmc/TWCManager,larrykluger/central_system,mase-git/smart-charging,chrisK824/ocpp_charging_point_operator,kelseymok/charge-point-live-status,buihieu2k1/ocpp,kelseymok/charge-point-simulator-v1.6,aws-samples/aws-ocpp-gateway,mokhairy2019/ocpp-simulator-python,Vinayak-Gaonkar/ocpp-asgi,geniuz/rev_ocpp,elamaran619/rev_ocpp,elamaran619/ocpp-asgi,alertor/rev_ocpp,alertor/ocpp-serverless-example,BristiJana/Home-assistance,p32929/ocpp_tests,tuliosbhz/sistemas_distribuidos_paxos_state_machine,libpoet1312/ocpp-project,chrisK824/ocpp_charging_point_simulator,juanjoqg/TWCManager2,iconnor/ocpp_charge_profile,TECHS-Technological-Solutions/ocpp-simulator,Quohen-Leth/csms_ocpp,OrangeTux/zeegat,Quohen-Leth/charging-station-mocking,aliiabdii/OCPP-SampleClientServer,Quohen-Leth/try_ocpp,fxacout/ocpp-monitoring,xhx5188/ChargingPile_OCPP1.6J,SergeiVorobev/EVCharging,villekr/ocpp-serverless-example,david-woelfle/test,fzi-forschungszentrum-informatik/BEMCom,pj635/ocpp_test,xBlaz3kx/ChargePi,DevApoorv/ev-charging-central,stefan2811/port-16,villekr/ocpp-asgi,atDTSystemsLtd/ocpp_test,lbbrhzn/ocpp,harsh8088/py_ocpp16,uwa-rev/rev_ocpp,Jonathan-GC/API-OCPP-PORT,id-griff/CPC,netoalceu/OCPP_PUCCAMP,comunitaria/Patio_ocpp,Go4Green/Parity-Platform,mtoliv/camera-web-interface-demo,diegogonzalezmaneyro/car-charge-handler",,https://github.com/mobilityhouse,https://mobilityhouse.com/,"Munich, Germany",,,https://avatars.githubusercontent.com/u/5763064?v=4,,, docile-charge-point,Scriptable OCPP charge point simulator and test tool.,NewMotion,https://github.com/ShellRechargeSolutionsEU/docile-charge-point.git,github,"ocpp,emobility,testing,charging-station,dsl,scala",Mobility and Transportation,"2021/08/17, 13:37:10",76,0,20,true,Scala,Shell Recharge Solutions EU,ShellRechargeSolutionsEU,"Scala,Dockerfile",,"b'# docile-charge-point [![Codacy Badge](https://api.codacy.com/project/badge/Grade/1a97f5dd32e24653b9b45d5c8b4a14b7)](https://www.codacy.com/app/reinierl/docile-charge-point)\n\n\n\nA scriptable [OCPP](https://www.openchargealliance.org/protocols/ocpp-20/)\ncharge point simulator. Supports OCPP 1.5, 1.6 and 2.0 using JSON over\nWebSocket as the transport.\n\nNot as continuously ill-tempered as\n[abusive-charge-point](https://github.com/chargegrid/abusive-charge-point), but\nit can be mean if you script it to be.\n\nThe aims for this thing:\n\n * Simulated charge point behaviors expressed as simple scripts, that can be redistributed separately from the simulator that executes them\n\n * Simulate lots of charge points at once that follow given behavior scripts\n\n * Checkable test assertions in the scripts\n\n * Non-interactive command line interface, which combined with the test assertions makes it useful for use in CI/CD pipelines\n\nScripts are expressed as Scala files, in which you can use predefined functions\nto send OCPP messages, make expectations about incoming messages or declare the\ntest case failed. And next to that, all of Scala is at your disposal! 
Examples\nof behavior scripts it can run already are a simple heartbeat script ([OCPP\n1.x](examples/ocpp1x/heartbeat.scala) / [OCPP\n2.0](examples/ocpp20/heartbeat.scala)) and a full simulation of a charge\nsession ([OCPP 1.x](examples/ocpp1x/do-a-transaction.scala) / [OCPP\n2.0](examples/ocpp20/do-a-transaction.scala)). The full set of OCPP and testing\nspecific functions can be found in\n[CoreOps](core/src/main/scala/chargepoint/docile/dsl/CoreOps.scala),\n[expectations.Ops](core/src/main/scala/chargepoint/docile/dsl/expectations/Ops.scala)\nand `shortsend.Ops` ([OCPP\n1.x](core/src/main/scala/chargepoint/docile/dsl/shortsend/OpsV1X.scala) / [OCPP\n2.0](core/src/main/scala/chargepoint/docile/dsl/shortsend/OpsV20.scala)). For OCPP\n2.0, there is also a [special set of\noperations](core/src/main/scala/chargepoint/docile/dsl/ocpp20transactions/Ops.scala)\nto deal with the complicated stateful transaction management.\n\nThere are by now four ways to run the simulator:\n\n * On the command line, with a behavior script given as a file\n\n * On the command line, directly controlling the charge point behavior using an interactive prompt\n\n * In a Docker container, which allows you to have a simulated charge point execute a behavior script somewhere in the cloud, testing your system continuously with as little work as possible on your part\n\n * As a library dependency of another application, which allows you to combine it with other test tools, and write tests that interface with other network services besides just OCPP central systems.\n\n## Running on the command line\n\nThe simplest way to run docile-charge-point is on the command line, so we will discuss that first.\n\nTo run the simulator, you first have to compile it with this command:\n\n```bash\nsbt assembly\n```\n\nWhen that completes successfully, you can run the simulator like this, from the root directory of the project:\n\n```bash\njava -jar cmd/target/scala-2.12/docile.jar -c <charge point ID> -v <OCPP version> <Central System endpoint URL> <behavior script>\n```\n\nso e.g.:\n\n```bash\njava -jar cmd/target/scala-2.12/docile.jar -c chargepoint0123 -v 1.6 ws://example.org/ocpp-j-endpoint examples/ocpp1x/heartbeat.scala\n```\n\nSee `java -jar cmd/target/scala-2.12/docile.jar --help` for more options.\n\nIf you\'re looking for a Central System to run docile-charge-point against, check [SteVe](https://github.com/RWTH-i5-IDSG/steve) or [OCPP 1.6 Backend](https://github.com/gertjana/ocpp16-backend).\n\n## Script structure\n\nThe charge point scripts that you specify on the command line are ordinary\nScala files that will be loaded and executed at runtime by\ndocile-charge-point. To write these files, besides the standard Scala\nlibrary, you can use a DSL in which you can send OCPP messages and expect a\ncertain behavior in return from the Central System.\n\n### Simple one-line scripts\n\nAs a very simple example of the DSL, consider this script:\n\n```scala\nheartbeat()\n```\n\nSimple as it looks, this script already does two things:\n\n * It sends an OCPP heartbeat request\n * It asserts that the Central System responds with an OCPP heartbeat response\n\nIn fact, there are such functions doing these two things for every request in OCPP 1.6 that a Charge Point can send to a Central System. There is a `statusNotification` that will, indeed, send a StatusNotification request.\n\nWhere these requests contain fields with data, these data can be given by supplying values for certain named arguments of those functions. By default, `statusNotification()` will mark the charge point as _Available_. 
To make it seem _Charging_ instead, do this:\n\n```\nstatusNotification(status = ChargePointStatus.Occupied(Some(OccupancyKind.Charging)))\n```\n\nOuch, that\'s quite some code to express the OCPP 1.6 charging state. Let me\ndigress to explain it. The reason for it is that this tool uses the case class\nhierarchy from the [NewMotion OCPP library](https://github.com/NewMotion/ocpp/)\nto express OCPP messages and their constituent parts. Those case classes,\nhowever, are different from the data types found in the OCPP specification, to\nencode the information in a more type-safe manner and to provide an abstract\nrepresentation that can be transmitted in either a 1.6 or a 1.5 format. Here it\nis the compatibility with 1.5 that means that we can\'t just write\n`ChargePointStatus.Charging`, because that couldn\'t be serialized for 1.5.\n`ChargePointStatus.Occupied(Some(OccupancyKind.Charging))` means: _Occupied_ in\n1.5 terms, but if you want to encode it for 1.6, you can be more precise and\nmake it _Charging_.\n\nThis kind of abstraction from version-specific messages can be a useful feature\nin some scenarios: you can now easily test that a back-office handles a certain\nbehavior correctly both with 1.5 and 1.6.\n\nAn interactive session with tab completion (see below) can come in handy to\nexplore the ways you can specify OCPP messages.\n\n### Stringing operations together\n\nAs an example of how you can string DSL operations together to specify a\nmeaningful behavior, let\'s look at the ""do a transaction"" example in its full\nglory. There are comments explaining what happens where:\n\n```scala\n// The idTag which can be used to look up who started the transaction\n// This is ordinary Scala defining a string value; no docile DSL so far.\nval chargeTokenId = ""01234567""\n\n// Now let\'s send an authorize request to the Central System and expect a\n// response. If the expected response comes, it is returned from the\n// `authorize` function.\n// This response object then contains an `idTag` field with the authorization\n// information from the Central System. We call that authorization info `auth`.\nval auth = authorize(chargeTokenId).idTag\n\n// We check whether the charge token we sent is authorized. This is just plain\n// Scala again.\nif (auth.status == AuthorizationStatus.Accepted) {\n\n // If it\'s authorized, we start a transaction. Starting a transaction in OCPP\n // means: first set the status to Preparing...\n statusNotification(status = ChargePointStatus.Occupied(Some(OccupancyKind.Preparing)))\n\n // ...then start a transaction. The startTransaction function again returns\n // the StartTransaction response from the Central System, from which we take\n // the transaction ID and assign it the name `transId`\n val transId = startTransaction(meterStart = 300, idTag = chargeTokenId).transactionId\n\n // ... and then, we notify the Central System that this charge point has\n // started charging.\n statusNotification(status = ChargePointStatus.Occupied(Some(OccupancyKind.Charging)))\n\n // Another DSL operation: prompt. This allows us to prompt the user for some\n // keyboard input. Here, we just want them to press ENTER when\n // docile-charge-point should stop the transaction.\n // This `prompt` function will block until the user presses ENTER.\n prompt(""Press ENTER to stop charging"")\n\n // Okay, the user has apparently pressed ENTER. 
Let\'s stop.\n // First notify that the status of our connector is going to Finishing...\n statusNotification(status = ChargePointStatus.Occupied(Some(OccupancyKind.Finishing)))\n\n // ...then send a StopTransaction request...\n stopTransaction(transactionId = transId, idTag = Some(chargeTokenId))\n\n // ...and notify that our connector is Available again\n statusNotification(status = ChargePointStatus.Available())\n\n// Oh yeah, it\'s also possible that the Central System does not authorize the\n// transaction\n} else {\n // In that case we consider this script failed.\n fail(""Not authorized"")\n}\n```\n\nThe important take-aways here are:\n\n * DSL operations are just Scala function calls\n\n * Typically, DSL operations will block until the user input or Central System\n response has come, and will return this result from the function call\n\n### Writing scripts with autocomplete in your favourite IDE\n\nI admit it\'s quite inconvenient that in the script files you have the full power of Scala, but you don\'t have IDE support to suggest methods to call or to highlight mistakes. So I\'ve added a project in which you can edit your docile-charge-point script with IDE support.\n\nHow does that work? Well, docile-charge-point actually loads its script files by adding a bunch of imports and other boilerplate around the script file contents before the code is compiled. So you would have IDE support if you were editing the file with the boilerplate in place. I don\'t want to make it the standard way of working to add this boilerplate in the scripts though, because that would make it harder for outsiders to read the scripts and it would lead to more version compatibility troubles between versions of docile-charge-point.\n\nTo still allow you to have autocomplete, there is a special template project in which you can edit a script with the boilerplate included and also execute it. In that project, docile is loaded as a library so that it is possible to run the interpreter on your compiled code without running the script file loader that adds the boilerplate.\n\nTo use it, open the template project in the [autocomplete-template-project](autocomplete-template-project/) directory. The project already has an sbt file setting up the library dependencies\non the docile-charge-point DSL core. IntelliJ IDEA will import this project just fine. You\'ll then find a file [TestScript.scala](autocomplete-template-project/src/main/scala/TestScript.scala) that contains all the boilerplate and a comment that says `// INSERT SCRIPT HERE`. If you start editing at the place where that comment is, you can type anything that you can also type in a script file without the boilerplate. IDEA will offer you all its suggestion magic. To run your code, just execute `TestScript` as a main class in IDEA. To distribute your code as a reusable docile-charge-point script, just copy the part you added out of the surrounding boilerplate and put it in a file of its own.\n\n### Scripts with expectations\n\nThe penultimate line of the transaction example is interesting: `fail(""Not authorized"")`.\n\nThis shows that docile-charge-point scripts don\'t just run. They run, and in the\nend docile-charge-point will consider them either _failed_ or _passed_. 
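A script does not have to rely on the built-in expectations alone; it can check any condition it likes and fail explicitly, as the example did. For instance (a made-up check; I am assuming the start transaction response carries its authorization info in an `idTag` field, like the authorize response above):

```scala
val res = startTransaction(idTag = "ABCDEF01")
// fail the script unless the Central System accepted our charge token
if (res.idTag.status != AuthorizationStatus.Accepted)
  fail("charge token was not accepted")
```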
Also, if\na script inadvertently fails to run at all, docile-charge-point will consider\nthe outcome an _error_.\n\nIf I for instance run both the example heartbeat script and the example\ndo-a-transaction script, against a back-office that does not authorize the\ntransaction, I will see that one script failed and the other one passed. In the\nconsole, that looks like this:\n\n```\njava -jar cmd/target/scala-2.12/docile.jar -c \'03000001\' ws://example.com/ocpp examples/heartbeat.scala examples/do-a-transaction.scala\nLoading settings from plugins.sbt ...\nLoading project definition from /Users/reinier/Documents/Programs/docile-charge-point/project\nLoading settings from build.sbt ...\nSet current project to docile-charge-point (in build file:/Users/reinier/Documents/Programs/docile-charge-point/)\nCredentials file /Users/reinier/.ivy2/.credentials does not exist\nPackaging /Users/reinier/Documents/Programs/docile-charge-point/target/scala-2.11/docile-charge-point_2.11-0.1-SNAPSHOT.jar ...\nDone packaging.\nRunning (fork) chargepoint.docile.Main -c 03000001 ws://example.com/ocpp examples/heartbeat.scala examples/do-a-transaction.scala\nGoing to run heartbeat\n>> HeartbeatReq\n<< HeartbeatRes(2018-04-02T20:38:13.342Z[UTC])\nGoing to run do-a-transaction\n>> AuthorizeReq(01234567)\n<< AuthorizeRes(IdTagInfo(IdTagInvalid,None,Some(01234567)))\nheartbeat: \xe2\x9c\x85\ndo-a-transaction: \xe2\x9d\x8c Not authorized\n```\n\nSo docile-charge-point will show that the heartbeat script passed, and the\ndo-a-transaction script failed with the message ""Not authorized"".\n\nAlso, the command will return success (exit status 0) if all scripts passed, and\nfailure (exit status 1) otherwise.\n\nIt is now also time to come back to the statement about `statusNotification()`,\nsaying that this simple call did two things. In fact, this function will send\nthe message, and then wait for the response, and make the script fail if the\nfirst incoming message is not a StatusNotification response. This is usually\nuseful in order to get a response object to work with, but sometimes you\'d\nwant your script to be more flexible about how the Central System can respond.\n\nFor those cases you have the `send` and `expectIncoming` functions in the DSL.\n`send` sends a message to the Central System, and immediately returns without\nwaiting for a response. `expectIncoming` in turn looks if a message has been\nreceived from the Central System, and if not, will block until one arrives.\n\nThe `statusNotification()` call turns out to be equivalent to:\n\n```\nsend(StatusNotificationReq(\n scope = ConnectorScope(0),\n status = ChargePointStatus.Available(),\n timestamp = Some(ZonedDateTime.now()),\n vendorId = None\n))\nexpectIncoming(matching { case res@StatusNotificationRes => res })\n```\n\nSo this `expectIncoming(matching ...)` line is in the end also an expression that\nreturns the response that was just received.\n\nWhat `expectIncoming` does comes down to:\n\n * Get the first incoming message that has not been expected before by the script, waiting for it if there is no such incoming message yet\n\n * See if this message matches the partial function that\'s given after\n `matching`\n\n * If so, return the result of the partial function. If not, fail the script.\n\n\nIn order to feed `expectIncoming`, docile-charge-point keeps a queue of messages\nthat have been received. The `expectIncoming` call is always evaluated against\nthe head of the queue. 
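Suppose, for instance, that the Central System sends us a ChangeConfiguration request right after we send a heartbeat. If that request arrives before the heartbeat response, it sits at the head of the queue, and this hypothetical script fails even though the heartbeat response does come in right behind it:

```scala
send(HeartbeatReq)
// the ChangeConfigurationReq is at the head of the queue, so the
// expectation below is matched against it, and the script fails:
expectIncoming(matching { case res: HeartbeatRes => res })
```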
So you _have_ to expect all the messages the Central System\nsends to you, in the order in which they arrive!\n\nTo make this order requirement easier to deal with, you can also expect\nmultiple messages at once, and docile-charge-point will accept them no matter\nin which order they arrive:\n\n```\nexpectInAnyOrder(\n remoteStartTransactionReq.respondingWith(RemoteStartTransactionRes(true)),\n changeConfigurationReq.respondingWith(ChangeConfigurationRes(ConfigurationStatus.Accepted))\n)\n```\n\nAs a variant of the `expectIncoming(matching ...)` idiom, there is also an\n`expectIncoming requestMatching ...` variant that lets you expect incoming\nrequests from the Central System, and respond to them, like so:\n\n```\n expectIncoming(\n requestMatching({case r: RemoteStopTransactionReq => r.transactionId == transId})\n .respondingWith(RemoteStopTransactionRes(_))\n )\n```\n\nThis bit waits for an incoming RemoteStopTransaction request, and fails if the next\nincoming message is not a RemoteStopTransaction request. If it is, it returns whether\nthe transaction ID in that message matches the `transId` value. Also, it\nresponds to the Central System with a RemoteStopTransaction response.\n\nThe argument to `respondingWith` can either be a literal value, or it can be a\nfunction from the result of the partial function to a response. Here the latter\noption is used in order to tell the Central System whether the remote stop\nrequest is accepted, based on whether the remote stop request\'s transaction ID\nmatched the one that the script had started.\n\nSee the remote start/stop example [for OCPP 1.x](examples/ocpp1x/remote-transaction.scala) or [for OCPP 2.0](examples/ocpp20/remote-transaction.scala) for the\nfull script using all these features.\n\nAs you can see in the handling of the remote start request there, there is also\na shorthand for expecting an incoming request of a certain type, without caring\nmore about the specific message contents. So this bit:\n\n```\nexpectIncoming(remoteStartTransactionReq.respondingWith(RemoteStartTransactionRes(true)))\n```\n\nis equivalent to:\n\n```\nexpectIncoming(\n requestMatching({case r: RemoteStartTransactionReq => r})\n .respondingWith(RemoteStartTransactionRes(true))\n)\n```\n\n## Running on the command line with an interactive prompt\n\nYou can also go into an interactive testing session on the command line. To do that, pass the `-i` command line flag:\n\n```\njava -jar cmd/target/scala-2.12/docile.jar -i -v 1.6 -c chargepoint0123 ws://example.com/ocpp\n```\n\nThe `-i` option here tells `docile-charge-point` to go into interactive mode.\n\nThe app will start and write something like this to the console:\n\n```\n[info, chargepoint.docile.test.InteractiveRunner] Going to run Interactive test\nCompiling (synthetic)/ammonite/predef/interpBridge.sc\nCompiling (synthetic)/ammonite/predef/replBridge.sc\nCompiling (synthetic)/ammonite/predef/DefaultPredef.sc\nCompiling (synthetic)/ammonite/predef/ArgsPredef.sc\nCompiling (synthetic)/ammonite/predef/CodePredef.sc\nWelcome to the Ammonite Repl 1.0.3\n(Scala 2.11.11 Java 1.8.0_144)\nIf you like Ammonite, please support our development at www.patreon.com/lihaoyi\n@\n```\n\nThe `@` sign on that last line is your prompt. 
You can now type expressions in the `docile-charge-point` DSL, like:\n\n```\nstatusNotification()\n```\n\nand you\'ll see the docile-charge-point and the back-office exchange messages:\n\n```\n[info, chargepoint.docile.test.InteractiveOcppTest$$anon$1] >> StatusNotificationReq(ConnectorScope(0),Occupied(Some(Charging),None),Some(2018-01-01T15:12:43.251+01:00[Europe/Paris]),None)\n[info, chargepoint.docile.test.InteractiveOcppTest$$anon$1] << StatusNotificationRes\n```\n\nLet\'s see what happens if we send a timestamp from before the epoch...\n\n```\nstatusNotification(timestamp = Some(ZonedDateTime.of(1959, 1, 1, 12, 0, 0, 0, ZoneId.of(""Z""))))\n```\n\nTurns out it works surprisingly well :-):\n\n```\n[info, chargepoint.docile.test.InteractiveOcppTest$$anon$1] >> StatusNotificationReq(ConnectorScope(0),Available(None),Some(1959-01-01T12:00Z),None)\n[info, chargepoint.docile.test.InteractiveOcppTest$$anon$1] << StatusNotificationRes\n```\n\nYou\'ll also see that the interactive mode prints something like this:\n\n```\nres0: StatusNotificationRes.type = StatusNotificationRes\n```\n\nThat\'s the return value of the expression you entered, which, in this case, is the StatusNotification response object. And because you\'re in a full-fledged Scala REPL using [Ammonite](https://ammonite.io), nothing is stopping you from doing fancy stuff with that. So you can, for instance, use values from responses in subsequent requests:\n\n```\n@ startTransaction(idTag = ""ABCDEF01"")\n[info, chargepoint.docile.test.InteractiveOcppTest$$anon$1] >> StartTransactionReq(ConnectorScope(0),ABCDEF01,2018-01-01T15:22:30.122+01:00[Europe/Paris],0,None)\n[info, chargepoint.docile.test.InteractiveOcppTest$$anon$1] << StartTransactionRes(177,IdTagInfo(Accepted,None,Some(ABCDEF01)))\nres3: StartTransactionRes = StartTransactionRes(177, IdTagInfo(Accepted, None, Some(""ABCDEF01"")))\n\n@ stopTransaction(transactionId = res3.tr\ntransactionId\n@ stopTransaction(transactionId = res3.transactionId)\n[info, chargepoint.docile.test.InteractiveOcppTest$$anon$1] >> StopTransactionReq(177,Some(ABCDEF01),2018-01-01T15:22:50.457+01:00[Europe/Paris],16000,Local,List())\n[info, chargepoint.docile.test.InteractiveOcppTest$$anon$1] << StopTransactionRes(Some(IdTagInfo(Accepted,None,Some(ABCDEF01))))\nres4: StopTransactionRes = StopTransactionRes(Some(IdTagInfo(Accepted, None, Some(""ABCDEF01""))))\n```\n\nNote also that between those two requests, I used tab completion to look up the name of the `transactionId` field in the StopTransaction request.\n\n## Running in a Docker container\n\nThere is now a Dockerfile included, so you can run it in Docker if you want. You can also use the Docker image as a basis for your own images that encode certain charge point behaviors.\n\nTo run it in Docker, do:\n\n```\n$ sbt assembly\n\n$ docker build -t docile-charge-point:latest .\n\n$ docker run --rm -it docile-charge-point:latest\n```\n\nThe Docker container will execute docile-charge-point, executing a script that\nwaits for OCPP remote start and remote stop requests and reports charge\ntransactions accordingly.\n\nSee [the Dockerfile](Dockerfile) for available environment variables to control the image.\n\n## Running inside another app, embedded as a library\n\nFor maximum flexibility, you can embed docile-charge-point as a library dependency in your own Scala code. 
In that case,\nyou can use the docile-charge-point DSL while also calling other libraries and code of your own.\n\nTo make docile-charge-point a dependency of your Scala project, add this to your library dependencies in your `build.sbt`:\n\n```scala\n""com.newmotion"" %% ""docile-charge-point"" % ""0.5.1""\n```\n\nThen, in your code:\n 1. Create tests as instances of `chargepoint.docile.dsl.OcppTest`\n 1. Combine each of them with a test case name into a [`chargepoint.docile.test.TestCase`](core/src/main/scala/chargepoint/docile/test/TestCase.scala)\n 1. Instantiate a [`chargepoint.docile.test.Runner`](core/src/main/scala/chargepoint/docile/test/Runner.scala) wrapping the test cases\n 1. Call the `.run()` method on the `Runner`, passing a [`chargepoint.docile.test.RunnerConfig`](core/src/main/scala/chargepoint/docile/test/Runner.scala) to specify how you\'d like the test to be executed\n\n### Loading test cases distributed separately as files\n\nTo load test cases distributed as text files, you need another library as a dependency:\n\n```scala\n""com.newmotion"" %% ""docile-charge-point-loader"" % ""0.5.1""\n```\n\nThen, besides all the classes for defining and running test cases\nmentioned above, you\'ll also have a [`chargepoint.docile.test.Loader`](loader/src/main/scala/chargepoint/docile/test/Loader.scala)\nthat has a few methods, all called `runnerFor`, that will give you a `Runner`\ninstance based on a file, `String` or `Array[Byte]` for a test case.\n\nOne example where this is done is the AWS Lambda and S3 integration in the [lambda](aws-lambda/) subproject in this repository. Run `sbt lambda/run` to compile and run that code.\n\nAt the moment, unfortunately, the only documentation is this and the [source code](aws-lambda/src/main/scala/chargepoint/docile/Lambda.scala).\n\nAlso, at the moment the library is only published for Scala 2.11 and 2.12. The\nreason is that docile-charge-point depends on Ammonite, and Ammonite is a very\nfinicky thing when it comes to dependency versioning. We are aware of this, and if we\nkeep working on docile-charge-point we will split the library and the\ninteractive executable, so that we can also build the library for Scala 2.13\nand later versions.\n\n## TODOs\n\nIt\'s far from finished now. The next steps I plan to develop:\n\n * Make it able to take both Charging Station Management System and Charging Station roles\n\n * Nicer syntax for constructing OCPP messages to send or expect\n\n * For OCPP 2.0, build some autonomous, stateful management of\n components, variables and transactions. This stuff is too\n complicated to manage in interactive mode otherwise. This autonomous\n default behavior should be overridable, of course.\n\n * Show incoming messages that caused errors in parsing or processing\n (this might entail changes to the NewMotion OCPP library)\n\n * Add a command line flag to drop to interactive mode, instead of exiting,\n when an assertion fails in a script\n\n * Add a command in interactive mode to run a script from a file or URL\n\n * Add functionality to automatically respond to messages matching a\n certain pattern in a certain way\n\n * Messages of OCPP 2.0 that seem to be in demand:\n * ChangeAvailability\n * Reset\n\n## Other ideas\n\n * Web interface: click together a test: 150 CPs behaving like this, 300 like that, ..., GO!\n\n * Live demo on the web?\n\n## Legal\n\nThe contents of this repository are \xc2\xa9 2017-2019 The New Motion B.V. 
and other\ncontributors, licensed under the terms of the [GNU General Public License version 3](LICENSE).\n'",,"2018/04/04, 15:33:31",2030,GPL-3.0,0,183,"2022/12/10, 17:09:42",9,16,19,1,319,3,0.4,0.3571428571428571,,,0,7,false,,false,false,,,https://github.com/ShellRechargeSolutionsEU,https://shellrecharge.com,"Amsterdam, Netherlands",,,https://avatars.githubusercontent.com/u/799802?v=4,,, MaaS Global,"Mobility as a Service API - data model, tests and validation.",maasglobal,https://github.com/maasglobal/maas-schemas.git,github,"mobility-as-a-service,mobility,transportation",Mobility and Transportation,"2023/10/24, 11:44:16",17,2,2,true,TypeScript,MaaS Global Ltd,maasglobal,"TypeScript,JavaScript,Python",https://maasglobal.github.io/,"b'# Shared MaaS Global Type Definitions\n\nThis repository contains MaaS Global shared type definitions.\n\n## Index\n\nThe code is divided into several independent npm packages.\n\n* [maas-schemas](maas-schemas) contains schemas, type definitions, and related utilities\n* [maasglobal-json-schema-validator](maasglobal-json-schema-validator) contains our ajv validator configuration\n* [maasglobal-schema-generator-ajv](maasglobal-schema-generator-ajv) contains build utils for ensuring ajv compatibility\n* [maasglobal-schema-generator-io-ts](maasglobal-schema-generator-io-ts) contains build utils for generating io-ts codecs\n* [maasglobal-schema-package](maasglobal-schema-package) contains general definitions for our schema package format\n\n## Devops\n\nThe following commands should work in all packages where applicable.\n\n```\nnpm install -g yarn # install yarn\nyarn # install dependencies\nyarn lint # run linters\nyarn typecheck # run static type checks\nyarn test # run tests\nyarn prettify # auto format code base\nyarn ci # perform a local CI test run\nyarn build # create a production build\nyarn clean # remove build artefacts\nyarn readme-ts # extract readme code examples\nyarn deploy-npm # deploy npm package\nyarn deploy-alpha # deploy prelease npm package\n```\n'",,"2016/07/21, 10:39:44",2652,MIT,224,2400,"2023/10/24, 11:44:17",20,743,743,63,1,20,0.9,0.8151835718730553,"2019/05/23, 08:15:50",v7.10.0-dev,0,24,false,,false,false,"maasglobal/maas-schemas,maasglobal/maas-tsp-api",,https://github.com/maasglobal,https://maas.global,Finland,,,https://avatars.githubusercontent.com/u/17618385?v=4,,, CoopCycle,A self-hosted platform to order meals in your neighborhood and get them delivered by bike couriers.,coopcycle,https://github.com/coopcycle/coopcycle-web.git,github,"symfony,apiplatform,coop,platformcoop",Mobility and Transportation,"2023/10/25, 12:17:47",550,0,58,true,PHP,CoopCycle,coopcycle,"PHP,JavaScript,Gherkin,Twig,SCSS,CSS,Dockerfile,Shell,Makefile,HTML",https://coopcycle.org,"b'CoopCycle\n=========\n\n![Build Status](https://github.com/coopcycle/coopcycle-web/actions/workflows/test.yml/badge.svg)\n\nCoopCycle is a **self-hosted** platform to order meals in your neighborhood and get them delivered by bike couriers. The only difference with proprietary platforms as Deliveroo or UberEats is that this software is [reserved to co-ops](#license).\n\nThe main idea is to **decentralize** this kind of service and to allow couriers to **own the platform** they are working for.\nIn each city, couriers are encouraged to organize into co-ops, and to run their very own version of the software.\n\nThe software is under active development. If you would like to contribute we will be happy to hear from you! 
All instructions are [in the Contribute file](CONTRIBUTING.md).\n\nCoopcycle-web is the main repo, containing the web API, the front-end for the website and the dispatch algorithm: [Technical Overview](https://github.com/coopcycle/coopcycle-web/wiki/Technical-Overview). You can see it in action & test it here: https://demo.coopcycle.org\n\nYou can find a comprehensive list of our repos here: [Our repos comprehensive list](https://github.com/coopcycle/coopcycle-web/wiki/Our-repos-comprehensive-list).\n\nHow to run a local instance\n---------------------------\n\n### Prerequisites\n\nInstall [Docker](https://www.docker.com/) and [Docker Compose](https://docs.docker.com/compose/install).\n\n#### OSX\n\nUse [Docker for Mac](https://www.docker.com/docker-mac) which will provide you both `docker` and `docker-compose`.\n\n#### Windows\n\nUse [Docker for Windows](https://www.docker.com/docker-windows) which will provide you both `docker` and `docker-compose`.\nDepending on your platform, Docker can be installed natively, or you may have to install Docker Toolbox, which uses VirtualBox instead of Hyper-V and causes a number of differences in behaviour.\nIf your CPU supports native Docker you can [share your hard disk as a virtual volume for your appliances](https://blogs.msdn.microsoft.com/stevelasker/2016/06/14/configuring-docker-for-windows-volumes/).\n\nDocker does not run natively under Windows; it needs a virtualized Linux. Follow the recommendations below to activate the necessary features under Windows 11, and make sure you have an administrator account:\nhttps://docs.docker.com/desktop/troubleshoot/topics/\n\nDownload Docker:\nhttps://www.docker.com/products/docker-desktop/\n\nCheck in the BIOS that the following are enabled:\n- virtualization (Hyper-V)\n- Data Execution Prevention (DEP)\n\nYou can also use the following procedure for DEP:\n- press Windows + R and search for sysdm.cpl\n- open Advanced system settings\n- under Performance, select Settings > Data Execution Prevention, choose ""enable for all except those I select..."" and click Apply\n\nInstall WSL 2 from a PowerShell terminal:\nhttps://learn.microsoft.com/en-us/windows/wsl/install\n\nThen configure your WSL 2 environment by creating a Linux administrator account. The password is not displayed while typing (this is normal), so remember it.\nhttps://learn.microsoft.com/fr-fr/windows/wsl/setup/environment#file-storage\n\n#### Linux\n\nFollow [the instructions for your distribution](https://docs.docker.com/install/). 
The `docker-compose` binary is to be installed independently.\nMake sure:\n- to install `docker-compose` [following instructions](https://docs.docker.com/compose/install/) to get the **latest version**.\n- to follow the [post-installation steps](https://docs.docker.com/install/linux/linux-postinstall/).\n\n#### Setup OpenStreetMap geocoders (optional)\n\nCoopCycle uses [OpenStreetMap](https://www.openstreetmap.org/) to geocode addresses and provide autocomplete features.\n\n##### Address autocomplete\n\nTo configure address autocomplete, choose a provider below, grab the credentials, and configure environment variables accordingly.\n\n```\nLOCATIONIQ_ACCESS_TOKEN\nGEOCODE_EARTH_API_KEY\n```\n\n- For [Geocode Earth](https://geocode.earth/), set `COOPCYCLE_AUTOCOMPLETE_ADAPTER=geocode-earth`\n- For [LocationIQ](https://locationiq.com/), set `COOPCYCLE_AUTOCOMPLETE_ADAPTER=locationiq`\n\n##### Geocoding\n\nTo configure geocoding, create an account on [OpenCage](https://opencagedata.com/), and configure the `OPENCAGE_API_KEY` environment variable.\n\n### Run the application\n\n#### Pull the Docker containers (optional)\n\nWe have prebuilt some images and uploaded them to [Docker Hub](https://hub.docker.com/u/coopcycle).\nTo avoid building those images locally, you can pull them first.\n\n```\ndocker-compose pull\n```\n\n#### Start the Docker containers\n\n```\ncp .env.dist .env\ndocker-compose up\n```\n\nAt this step, the platform should be up & running, but the database is still empty.\nTo create the schema & initialize the platform with demo data, run:\n```sh\nmake install\n```\n\n#### Open the platform in your browser\n```\nopen http://localhost\n```\n\nTesting\n-------\n\n#### Create the test database\n\n```\ndocker-compose run php bin/console doctrine:schema:create --env=test\n```\n\n#### Launch the PHPUnit tests\n\n```\nmake phpunit\n```\n\n#### Launch the Behat tests\n\n```\nmake behat\n```\n\n#### Launch the Mocha tests\n\n```\nmake mocha\n```\n\nDebugging\n------------------\n#### 1. Install and enable xdebug in the php container\n\n```\nmake enable-xdebug\n```\n> **Note:** If you\'ve been working with this stack before you\'ll need to rebuild the php image for this command to work:\n> ```\n> docker-compose build php\n> docker-compose restart php nginx\n> ```\n\n#### 2. Enable php debug in VSCode\n\n1. Install a PHP Debug extension; this was tested with the [felixfbecker.php-debug](https://marketplace.visualstudio.com/items?itemName=felixfbecker.php-debug) extension.\n2. Add the following configuration to the `.vscode/launch.json` of your workspace:\n\n```json\n{\n\t""configurations"": [\n {\n ""name"": ""Listen for XDebug"",\n ""type"": ""php"",\n ""request"": ""launch"",\n ""port"": 9001,\n ""pathMappings"": {\n ""/var/www/html"": ""${workspaceFolder}""\n },\n ""xdebugSettings"": {\n ""max_data"": 65535,\n ""show_hidden"": 1,\n ""max_children"": 100,\n ""max_depth"": 5\n }\n }\n ]\n}\n```\n\n3. If you\'re having issues connecting the debugger, you can restart the nginx and php containers to reload the xdebug extension.\n\n```\ndocker-compose restart php nginx\n```\n\nRunning migrations\n------------------\n\nWhen pulling changes from the remote, the database models may have changed. 
To apply the changes, you will need to run a database migration.\n\n```\nmake migrations-migrate\n```\n\nLicense\n-------\n\nThe code is licensed under the [Coopyleft License](https://wiki.coopcycle.org/en:license), meaning you can use this software provided:\n\n- You are matching with the social and common company\xe2\x80\x99s criteria as defined by their national law, or by the European Commission in its [October 25th, 2011 communication](http://www.europarl.europa.eu/meetdocs/2009_2014/documents/com/com_com(2011)0681_/com_com(2011)0681_en.pdf), or by default by Article 1 of the French law [n\xc2\xb02014-856 of July 31st, 2014](https://www.legifrance.gouv.fr/affichTexte.do?cidTexte=JORFTEXT000029313296&categorieLien=id) \xe2\x80\x9crelative \xc3\xa0 l\xe2\x80\x99\xc3\xa9conomie sociale et solidaire\xe2\x80\x9d\n- You are using a cooperative model in which workers are employees\n'",,"2016/11/29, 12:10:32",2521,CUSTOM,635,10003,"2023/10/11, 15:42:15",445,2170,3337,450,14,51,0.8,0.188484642411878,"2023/10/25, 12:19:16",v1.13.12,1,72,true,"open_collective,custom",false,true,,,https://github.com/coopcycle,https://coopcycle.org,"Paris, France",,,https://avatars.githubusercontent.com/u/24247863?v=4,,, EVNotify,Allows you to monitor your electric vehicle and notifies you when the specified preset state of charge has been achieved.,EVNotify,https://github.com/EVNotify/EVNotify.git,github,,Mobility and Transportation,"2022/10/23, 14:24:31",192,0,23,true,Vue,,EVNotify,"Vue,JavaScript,CSS,HTML",https://evnotify.com,"b'# EVNotify\nEVNotify allows you to control your electric vehicle and notifies you when the specified preset state of charge has been achieved.\n\n\xf0\x9f\x9a\xa7 \xf0\x9f\x9a\xa7 \xf0\x9f\x9a\xa7 \n\nThis repository contains the ""v2"" code of the app. EVNotify is under active development for the next generation ""v3"". The corresponding repository of v3 is currently still private. I just want to let you know that in the upcoming weeks the first official tests will start with you. Thanks for all of your patience. The project is alive, and v3 will be ready to test soon (and will be open source later).\n\n\xf0\x9f\x9a\xa7 \xf0\x9f\x9a\xa7 \xf0\x9f\x9a\xa7\n\n### Note:\nThis repository contains the frontend/client source code. For the backend please visit: https://github.com/EVNotify/EVNotifyBackend.\nEVNotify is still in an early stage of development. Errors or unexpected behavior may occur. Furthermore, not all features may be implemented yet.\nStay tuned and please report any issues or suggestions.\n\n### The idea behind EVNotify\nOriginally this application was developed to provide a way to remotely see the charging state of the Hyundai IONIQ, which isn\'t possible in Europe due to the missing BlueLink connection.\nThen I decided to enhance it so you\'ll get notified when the desired state of charge has been achieved. Also, I want to support more cars in the future.\n\nImagine the following situation:\nYou are charging your electric vehicle on a fast charger. To proceed with your road trip, you need to charge to 80%. You are drinking a coffee in the meantime. But you want to drive away as fast as possible, because every minute that passes costs time and money. Normally you have to check the charging state every few minutes, leave everything, and physically go to the car, which is annoying.\nWith EVNotify you can just check the state of charge remotely - or simply get notified when the desired state of charge has been reached so you can go, without having to worry about your state of charge every few minutes. Simply enjoy!
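The notify-at-threshold idea described above is easy to picture in code. Below is a minimal sketch of such a polling loop, not EVNotify's actual implementation; `read_soc` and `send_notification` are hypothetical placeholders for the OBD2 readout and the notification channel (mail, push, Telegram).

```python
import time

TARGET_SOC = 80  # desired state of charge, in percent

def read_soc():
    """Hypothetical placeholder: read the state of charge via the OBD2 dongle."""
    raise NotImplementedError

def send_notification(soc):
    """Hypothetical placeholder: notify via mail, push or Telegram."""
    raise NotImplementedError

def watch_charge(poll_interval_s=60):
    # Poll the state of charge until the preset threshold is reached,
    # then send a single notification and stop watching.
    while True:
        soc = read_soc()
        if soc >= TARGET_SOC:
            send_notification(soc)
            break
        time.sleep(poll_interval_s)
```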
### Features and benefits of EVNotify\n- free to use - available on the [Google Play Store](https://play.google.com/store/apps/details?id=com.evnotify.app)\n- state of charge monitoring\n- multiple notification possibilities if the desired state of charge has been achieved (e.g. Mail, Push, Telegram)\n- easy to use\n- use your existing hardware\n- connect multiple devices together\n- continuous development\n- open source\n- API documentation for developers\n- integrated charging stations finder\n- more features and supported cars will be added soon\n\n### Prerequisites\nTo use EVNotify you need to download the Android application linked above. The app must be installed on an Android device with an Android version greater than 4.1 (Android 5 or greater recommended). The Android device must have a Bluetooth connection and optionally an internet connection. The app also runs on Android TV sticks.\nEVNotify communicates over the OBD2 interface of the car, so a Bluetooth-capable OBD2 dongle is required. For supported OBD2 dongles, please have a look at the wiki.\n\n### Contributing\nFeel free to help and to contribute to this repository. Even if you can\'t code, feel free to create issues if you have discovered a bug or strange behavior. If you want to commit code, please create a pull request for it in a separate branch (dev or a feature branch, not master!).\nIf you are a developer and want to contribute, have a look at the CONTRIBUTING.md file.\n\n### How it works\nEVNotify interacts with the electric vehicle over a Bluetooth-capable OBD2 dongle. The device interacting with the dongle must be an Android device with Bluetooth support.\nFor full functionality, an internet connection is required.\nThe readout and monitoring of the state of charge works locally; the notifications are sent over the internet.\n\nBoth the EVNotify app and the backend have an integrated mechanism to track errors with a tool called Rollbar. This allows uncaught errors to be tracked faster.\n\n\n\n\n### Need help?\nHave a look at the EVNotify wiki.\n\n### Additional notes\nThe use of the software is at your own risk. 
I am not liable for damage caused by improper use or cheap, fake OBD2 dongles.\n'",,"2017/06/09, 19:01:25",2329,CUSTOM,0,679,"2023/04/08, 05:58:35",71,62,231,6,200,0,0.3,0.1394611727416799,"2022/10/23, 14:28:58",2.4.0,0,12,true,"github,patreon,open_collective,ko_fi,tidelift,community_bridge,custom",true,true,,,https://github.com/EVNotify,,,,,https://avatars.githubusercontent.com/u/45699245?v=4,,, icare,An open source carpooling platform used as a basis for our commercial product Company Carpool.,diowa,https://github.com/diowa/icare.git,github,"carpooling,rails,ridesharing,travel,ruby,heroku",Mobility and Transportation,"2023/10/22, 08:07:08",230,0,15,true,Ruby,diowa,diowa,"Ruby,Slim,JavaScript,SCSS,HTML,Handlebars,Dockerfile,Shell,Procfile",,"b'# icare\n[![Build Status](https://github.com/diowa/icare/actions/workflows/ci.yml/badge.svg)](https://github.com/diowa/icare/actions) [![Maintainability](https://api.codeclimate.com/v1/badges/b5c7bd31597d298a5d6e/maintainability)](https://codeclimate.com/github/diowa/icare/maintainability) [![Coverage Status](https://coveralls.io/repos/diowa/icare/badge.svg?branch=main)](https://coveralls.io/r/diowa/icare?branch=main)\n\n[![Gitter](https://badges.gitter.im/diowa/icare.svg)](https://gitter.im/diowa/icare?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge)\n\n[![Deploy](https://www.herokucdn.com/deploy/button.svg)](https://heroku.com/deploy)\n\n**icare** is an open source [carpooling](https://en.wikipedia.org/wiki/Carpool) platform used as a basis for our commercial product [Company Carpool](https://www.companycarpool.com).\n\nCarpooling (also known as car-sharing, ride-sharing, lift-sharing and covoiturage) is the sharing of car journeys so that more than one person travels in a car.\nBy having more people using one vehicle, carpooling reduces each person\xe2\x80\x99s travel costs such as fuel costs, tolls, and the stress of driving. Carpooling is also seen as a more environmentally friendly and sustainable way to travel as sharing journeys reduces carbon emissions, traffic congestion on the roads, and the need for parking spaces. Authorities often encourage carpooling, especially during periods of high pollution and high fuel prices. (From Wikipedia)\n\n**icare** uses the following technologies:\n\n* [Ruby on Rails][:rails_url]\n* [PostgreSQL][:postgresql]\n* [Shakapacker][:shakapacker_url]\n* [Handlebars.js][:handlebarsjs_url] (JavaScript semantic templates)\n* [SLIM][:slim_url]\n* [Bootstrap][:bootstrap_url]\n* [Font Awesome][:fa_url] (vectorial icons)\n* [Devise][:devise_url]\n* Asynchronous tasks with [Sucker Punch][:sucker_punch_url]\n* OAuth login with [Auth0][:auth0_url]\n* Google Maps API\n* [RSpec][:rspec_url]\n* [Heroku][:heroku_url] Cloud Application Platform\n* [Multi-Environment configuration][:simpleconfig_url]\n* [Airbrake][:airbrake_url] Exception Notification\n* [New Relic][:newrelic_url] Application Performance Management service\n\n## Name and logo\n\n**icare** name and logo are temporary. **icare** is a portmanteau of ""I care"", ""Car"" and ""Environment"". No copyright violation is intended.\n\n## Roadmap\n\nImmediate: Check out our [To Do](https://github.com/diowa/icare/wiki/To-Do) list.\nLong-term: TODO\n\n## Internationalization (i18n)\n\n**icare** uses the standard [Rails Internationalization (I18n) API](https://guides.rubyonrails.org/i18n.html). If you have translated **icare** into your own language, make a pull request.\n\n## Contributing\n\nPlease read through our [contributing guidelines](CONTRIBUTING.md). 
Included are directions for opening issues, coding standards, and notes on development.\n\nMoreover, if your pull request contains patches or features, you must include relevant unit tests.\n\nEditor preferences are available in the [editor config](.editorconfig) for easy use in common text editors. Read more and download plugins at .\n\nIf you are interested in feature development, we have priorities. Check out our [To Do](https://github.com/diowa/icare/wiki/To-Do) list.\n\n## Authors\n\n**Geremia Taglialatela**\n\n+ https://github.com/tagliala\n+ https://twitter.com/gtagliala\n\n**Cesidio Di Landa**\n\n+ https://github.com/cesidio\n+ https://twitter.com/cesid\n\n## Copyright and license\n\n**icare** is licensed under the BSD 2-Clause License.\n\nCheck the LICENSE file for more information.\n\n## Thanks\n\nSpecial thanks to all developers of open source libraries used in this project.\n\n## Docker (Experimental)\n\nExperimental Docker support. Please do not ask for support; PRs to improve the\ncurrent implementation are very welcome.\n\nTODO:\n- [ ] Fix Puma exit status (puma/puma#1673)\n- [ ] Multi-environment support\n- [ ] Add Sidekiq container\n\nGenerate SSL requirements:\n\n```sh\nopenssl req -subj \'/CN=localhost\' -x509 -newkey rsa:4096 -nodes -keyout docker/nginx/ssl/app_key.pem -out docker/nginx/ssl/app_cert.pem -days 825\nopenssl genpkey -genparam -algorithm DH -out docker/nginx/ssl/app_dhparam4096.pem -pkeyopt dh_paramgen_prime_len:4096\n```\n\nCopy `docker/icare/variables.env.example` to `docker/icare/variables.env` and\nrun `docker compose up`\n\nicare will be accessible on `https://localhost:3443`\n\n### Start Rails outside of Docker with SSL\n\nAfter generating the SSL requirements, run:\n\n```sh\nrails s -b ""ssl://0.0.0.0:3443?key=docker/nginx/ssl/app_key.pem&cert=docker/nginx/ssl/app_cert.pem""\n```\n\nicare will be accessible on `https://localhost:3443`\n\n## Donations\n\nIf you like this project or you are considering using it (or any part of it) for commercial purposes, please make a donation to the authors.\n\n[![Donate once-off to this project using Bitcoin](https://img.shields.io/badge/bitcoin-donate-blue.svg)](bitcoin:1L6sqoG8xXhYziH9NGjPzgR1dEP2SbJrfM)\n\n[:airbrake_url]: https://github.com/airbrake/airbrake\n[:auth0_url]: https://auth0.com/\n[:bootstrap_url]: https://getbootstrap.com\n[:devise_url]: https://github.com/plataformatec/devise\n[:fa_url]: https://fontawesome.com\n[:handlebarsjs_url]: https://handlebarsjs.com/\n[:heroku_url]: https://www.heroku.com/\n[:newrelic_url]: https://newrelic.com/\n[:postgresql]: https://www.postgresql.org/\n[:rails_url]: https://rubyonrails.org/\n[:rspec_url]: https://rspec.info/\n[:shakapacker_url]: https://github.com/shakacode/shakapacker\n[:simpleconfig_url]: https://github.com/lukeredpath/simpleconfig\n[:slim_url]: https://slim-template.github.io/\n[:sucker_punch_url]: https://github.com/brandonhilkert/sucker_punch\n'",,"2012/08/26, 18:09:24",4077,BSD-2-Clause,625,4531,"2023/10/21, 19:35:22",14,1482,1554,369,4,2,0.0,0.5332577475434619,,,0,6,false,,false,true,,,https://github.com/diowa,diowa.com,United Kingdom,,,https://avatars.githubusercontent.com/u/2099801?v=4,,, Carpoolear,The first Argentine Facebook app that allows the users of this social network to share car trips with other users.,STS-Rosario,https://github.com/STS-Rosario/carpoolear.git,github,"vuejs,frontend,carpooling,app,cordova,mobile,web",Mobility and Transportation,"2023/05/17, 13:56:48",88,0,12,true,Vue,STS 
Rosario,STS-Rosario,"Vue,CSS,JavaScript,HTML,Dockerfile",https://carpoolear.com.ar,"b'# Carpoolear frontend\n\n# Espa\xc3\xb1ol\n\nCarpoolear es la primera aplicaci\xc3\xb3n argentina de Facebook que permite a los usuarios de dicha red social compartir viajes en autom\xc3\xb3vil con otros usuarios de su entorno.\n\nEs una customizaci\xc3\xb3n ad-hoc para Argentina de la filosof\xc3\xada carpooling, la cual consiste en compartir nuestros viajes en auto con otras personas de forma cotidiana. El carpooling es una pr\xc3\xa1ctica popular en Estados Unidos y Europa, donde se realiza de manera organizada para lograr aumentar el n\xc3\xbamero de viajes compartidos y que estos sean concretados con otras personas adem\xc3\xa1s de nuestros vecinos y amigos.\n\n# English\n\nCarpoolear is the first Argentine Facebook app that allows the users of this social network to share car trips with other users.\n\nIt is an ad-hoc customization for Argentina of the carpooling philosophy, which consists of sharing our car trips with other people on a daily basis. Carpooling is a popular practice in the USA and Europe, where it is practiced in an organized way in order to increase the number of trips shared with new people in addition to our neighbors and friends.\n\n## Start coding\n\n``` bash\n# git clone\ngit clone https://github.com/STS-Rosario/carpoolear.git\n\n# install dependencies\nnpm install\n\n# serve with hot reload at localhost:8080\nnpm run dev\nnpm run prod (NOT WORKING)\n\n# serve with hot reload and with prod.env\nnpm run prod\n\n```\n\n## Selecting project\n\n``` bash\n\n# select project; if no project is selected, the default project is ""default""\n# linux and osx\nTARGET_APP=myProject\n\n# windows powershell\n$env:TARGET_APP = ""myProject""\n\n```\n\n## Mobile apps\n\n\n``` bash\n\n# in root folder\nnpm run build:android\nnpm run build:ios\nnpm run build-dev:android\nnpm run build-dev:ios\n\n# the apk will be created in the dist folder\n\n```\n\n## Creating new projects\n\nThis branch is multi-project. You can handle multiple apps in only one source code. To create a new project, first choose its name, for example ""YOUR-PROJECT-NAME"".\n\n1. Go to the ./projects folder and clone the default folder. Change the name of the folder to ""YOUR-PROJECT-NAME"" and customize all the assets (google-services.js, config.xml and images). Remember that in config.xml, in ""cordova-plugin-facebook4"", you must put your Facebook APP_ID and APP_NAME.\n\n2. If you want to customize some CSS files or any Vue module, e.g. main.css, copy the file in the same folder and name it ""main.YOUR-PROJECT-NAME.css"". When compiling the project, webpack will resolve the correct file.\n\n3. Finally, in the ./config folder, clone the files dev.env.js and prod.env.js and save them as dev.YOUR-PROJECT-NAME.env.js and prod.YOUR-PROJECT-NAME.env.js. Personalize the files with your values. Your new project is ready.\n\nHappy coding!\n\n## Config\n\nIn the config table in your carpoolear DB you can configure the following parameters:\n\n| Property | Type | Description |\n| -------- | ---- | ----------- |\n| admin_email | STRING | Email of the admin.|\n| name_app | STRING | Name of the app. |\n| target_app | STRING | Only in development. (target project name) |\n| osm_country | STRING | Country locale for Open Street Map. |\n| country_name | STRING | Your country name. |\n| locale | STRING | Country locale. |\n| home_redirection | STRING | Your home website url. |\n| module_validated_drivers | BOOLEAN | If enabled, drivers must be validated. 
|\n| trip_card_design | STRING | \'default\' or \'light\'. \'default\' is the custom Carpoolear theme; \'light\' is an optional Carpoolear theme. You can make your own themes. |\n| trip_stars | BOOLEAN | If enabled, the rating of users is shown as stars; if not, as numbers.|\n| max_cards_per_row | INT | How many trip cards must be shown per row. Default: 4 |\n| disable_user_hints | BOOLEAN | If enabled, user hints are hidden. |\n| login_custom_header | BOOLEAN | If enabled, you can use a custom header. |\n| enable_footer | BOOLEAN | If enabled, the footer is shown. |\n| donation | JSON | With donation you can configure donation campaigns. Object must be: {""ammount_needed"": 1000, ""month_days"": 0, ""trips_count"": 20, ""trips_offset"": 0, ""trips_rated"": 2} |\n| module_trip_seats_payment | BOOLEAN | If enabled, online payment is required to travel. |\n| module_user_request_limited | JSON | Object must be: {""enabled"": true, ""hours_range"": 8} |\n| api_crice | BOOLEAN | If enabled, activates the API that calculates the trip price. |\n| fuel_price | FLOAT | The local fuel price, used to estimate the trip price. |\n| enable_facebook | BOOLEAN | If enabled, you can log in with Facebook. |\n| module_on_boarding_new_user | JSON | Object must be: {""enabled"": true, ""cards"": 4} |\n| allow_rating_reply | BOOLEAN | If enabled, you can reply to a comment from another user |\n| module_coordinate_by_message | BOOLEAN | If enabled, the trip coordination is in the chat view |\n| module_references | BOOLEAN | If enabled, references can be made in another user\'s profile |\n---\n\n## Important\n\nYou must respect this linting configuration: /*jshint esversion: 6*/\n\nAll variable and method names must be in English.\n\n\n## License\n\nThe Carpoolear frontend is open-sourced software licensed under the [GPL 3.0](https://github.com/STS-Rosario/carpoolear_backend/blob/master/LICENSE).\n'",,"2016/12/10, 16:11:17",2510,LGPL-3.0,18,757,"2023/05/17, 13:56:49",13,76,80,28,161,7,0.3,0.5717054263565892,,,0,12,false,,false,false,,,https://github.com/STS-Rosario,http://www.stsrosario.org.ar/index.html,Rosario,,,https://avatars.githubusercontent.com/u/24493883?v=4,,, UTD19,Largest multi-city traffic dataset publicly available.,,,custom,,Mobility and Transportation,,,,,,,,,,https://utd19.ethz.ch/,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,, OpenEVSE,Firmware for OpenEVSE: Open Source Hardware J1772 Electric Vehicle Supply Equipment.,OpenEVSE,https://github.com/OpenEVSE/open_evse.git,github,,Mobility and Transportation,"2023/10/10, 17:44:42",101,3,27,true,C++,OpenEVSE,OpenEVSE,"C++,Roff,C,Objective-C,Shell,Batchfile",,"b""# OpenEVSE\n\nFirmware for the OpenEVSE controller used in OpenEVSE Charging Stations sold in the USA, and OpenEnergyMonitor EmonEVSE units sold in the UK/EU.\n\n- OpenEVSE: \n- EmonEVSE: \n\nBased on OpenEVSE: Open Source Hardware J1772 Electric Vehicle Supply Equipment\n\n## USA\n\nTODO: add notes about USA OpenEVSE\n\n## UK/EU\n\n- Disable `AUTOSVCLEVEL` (autodetection is designed for split-phase)\n- Charging level defaults to `L2`\n- Set `MAX_CURRENT_CAPACITY_L2 32` (limit for single-phase charging in UK/EU)\n- Add '.EU' to version number\n- Enable LCD Redraw every couple of minutes (required for EMC/CE)\n\n### EmonEVSE\n\nEmonEVSE (non-tethered type-2 EVSE unit)\n\n- `PP_AUTO_AMPACITY` enabled to set max current based on the non-tethered cable connected\n- Three-phase option with `THREEPHASE` enabled to calculate three-phase energy (unneeded with ESP32_WiFi firmware >= 4.2)\n\n## API Documentation\n\n- WIFI API: \n- 
RAPI API: \n\n## Resources\n\n- [OpenEnergyMonitor OpenEVSE Setup Guide](https://guide.openenergymonitor.org/integrations/openevse)\n- [OpenEnergyMonitor OpenEVSE Shop](https://shop.openenergymonitor.com/ev-charging/)\n\n- [OpenEVSE Controller Datasheet](https://github.com/OpenEVSE/OpenEVSE_PLUS/blob/master/OpenEVSE_PLUS_v5/OpenEVSE_Plus_v5.pdf)\n- [OpenEVSE Controller Hardware Repo](https://github.com/OpenEVSE/OpenEVSE_PLUS)\n- [OpenEVSE Project Homepage](https://openevse.com)\n\n***\n\nFirmware compile & upload help: [firmware/open_evse/LoadingFirmware.md](firmware/open_evse/LoadingFirmware.md)\n\nNOTES:\n\n- Working versions of the required libraries are included with the firmware code. This avoids potential issues related to using the wrong versions of the libraries.\n- We highly recommend using the tested pre-compiled firmware (see releases page)\n\n## Flash pre-compiled using avrdude\n\n`$ avrdude -p atmega328p -B6 -c usbasp -P usb -e -U flash:w:firmware.hex`\n\nAn ISP programmer is required, e.g. [USBASP](https://www.amazon.co.uk/Hobby-Components-USBASP-Programmer-Adapter/dp/B06XYV162N)\n\n### Set AVR fuses\n\nThis only needs to be done once, in the factory.\n\n`avrdude -c USBasp -p m328p -U lfuse:w:0xFF:m -U hfuse:w:0xDF:m -U efuse:w:0xFD:m -B6`\n\nIf writing the eFuse fails, the USBasp may need a [firmware update](https://www.vishnumaiea.in/articles/electronics/how-to-solve-usbasp-avr-efuse-write-problem-on-progisp)\n\n***\n\nTip Jar: I developed/maintain this firmware on a volunteer basis. Any donation, no matter how small, is greatly appreciated.\n\n[![Donate](https://img.shields.io/badge/Donate-PayPal-green.svg)](https://www.paypal.me/lincomatic)\n\n```text\nOpen EVSE is free software; you can redistribute it and/or modify\nit under the terms of the GNU General Public License as published by\nthe Free Software Foundation; either version 3, or (at your option)\nany later version.\n\nOpen EVSE is distributed in the hope that it will be useful,\nbut WITHOUT ANY WARRANTY; without even the implied warranty of\nMERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\nGNU General Public License for more details.\n\nYou should have received a copy of the GNU General Public License\nalong with Open EVSE; see the file COPYING. 
If not, write to the\nFree Software Foundation, Inc., 59 Temple Place - Suite 330,\nBoston, MA 02111-1307, USA.\n\n* Open EVSE is distributed in the hope that it will be useful,\n* but WITHOUT ANY WARRANTY; without even the implied warranty of\n* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.\n```\n""",,"2015/11/10, 21:46:15",2906,GPL-3.0,29,709,"2023/10/10, 17:44:42",3,17,17,7,15,3,0.6,0.20262664165103195,"2023/10/10, 17:46:18",latest,0,16,false,,false,false,"Tifaifai/open_evse,jeremypoulter/open_evse,OpenEVSE/open_evse",,https://github.com/OpenEVSE,http://www.openevse.com,,,,https://avatars.githubusercontent.com/u/14914533?v=4,,, OpenEVSE WiFi gateway,Uses an ESP8266 (ESP-12) which communicates with the OpenEVSE controller via serial utilizing the existing RAPI API serial interface.,OpenEVSE,https://github.com/OpenEVSE/ESP8266_WiFi_v2.x.git,github,"esp8266,openevse,openenergymonitor,evse,wifi,emoncms,mqtt-client",Mobility and Transportation,"2021/10/07, 14:45:06",73,0,1,true,C,OpenEVSE,OpenEVSE,"C,C++,JavaScript,Python,Shell",https://openevse.openenergymonitor.org,"b'# OpenEVSE WiFi Gateway\n\n**OpenEVSE WiFi V2.x is now archived, the current version is [OpenEVSE V4.x ](https://github.com/openevse/ESP32_WiFi_V4.x), upgrading from V2.x to V4.x requires a [WiFi module hardware upgrade to ESP32](https://shop.openenergymonitor.com/openevse-wifi-gateway/).**\n\n![main](docs/main2.png)\n\nThe WiFi gateway uses an ESP8266 (ESP-12) which communicates with the OpenEVSE controller via serial, utilizing the existing RAPI API serial interface. The web interface UI is served directly from the ESP8266 web server and can be controlled via a connected device over the network.\n\nLive demo: https://openevse.openenergymonitor.org\n\n## Features\n\n- Web UI to view & control all OpenEVSE functions\n - Start / pause\n - Delay timer\n - Time limit\n - Energy Limit\n - Adjust charging current\n- MQTT status & control\n- Log to an Emoncms server e.g. [data.openevse.org](http://data.openevse.org) or [emoncms.org](https://emoncms.org)\n- \'Eco\' mode: automatically adjusts charging current based on availability of power from solar PV or grid export\n- OhmConnect integration (California USA only)\n\n## Requirements\n\n### OpenEVSE charging station\n - Purchase via: [OpenEVSE Store (USA/Canada)](https://store.openevse.com) | [OpenEnergyMonitor (UK / EU)](https://shop.openenergymonitor.com/openevse-deluxe-ev-charge-controller-kit/)\n - OpenEVSE FW [V4.8.0+ recommended](https://github.com/OpenEVSE/open_evse/releases)\n - All new OpenEVSE units are shipped with V4.8.0 pre-loaded (October 2017 onwards)\n - OpenEVSE FW V3.10.4 will work with the latest WiFi FW with some minor issues e.g. 
LCD text corruption\n\n### WiFi Module\n\n- ESP8266 (ESP-12) e.g. Adafruit Huzzah\n- Purchase via: [OpenEVSE Store (USA/Canada)](https://store.openevse.com/collections/frontpage/products/openevse-wifi-kit) | [OpenEnergyMonitor (UK / EU)](http://shop.openenergymonitor.com/openevse-wifi-kit/)\n- See [OpenEVSE WiFi setup guide](https://openevse.dozuki.com/Guide/OpenEVSE+WiFi+%28Beta%29/14) for WiFi module connection instructions\n\n***\n\n## Contents\n\n\n\n- [User Guide](#user-guide)\n * [WiFi Setup](#wifi-setup)\n * [OpenEVSE Web Interface](#openevse-web-interface)\n * [Charging Mode: Eco](#charging-mode-eco)\n + [Solar PV Divert Example](#solar-pv-divert-example)\n + [Setup](#setup)\n + [Operation](#operation)\n * [Services](#services)\n + [Emoncms data logging](#emoncms-data-logging)\n + [MQTT](#mqtt)\n - [OpenEVSE Status via MQTT](#openevse-status-via-mqtt)\n + [RAPI](#rapi)\n - [RAPI via web interface](#rapi-via-web-interface)\n - [RAPI over MQTT](#rapi-over-mqtt)\n - [RAPI over HTTP](#rapi-over-http)\n + [OhmConnect](#ohmconnect)\n * [System](#system)\n + [Authentication](#authentication)\n + [Firmware update](#firmware-update)\n + [Hardware reset](#hardware-reset)\n * [Firmware Compile & Upload](#firmware-compile--upload)\n + [Using PlatformIO](#using-platformio)\n - [a. Install PlatformIO command line](#a-install-platformio-command-line)\n - [b. And / Or use PlatformIO IDE](#b-and--or-use-platformio-ide)\n - [1. Clone this repo](#1-clone-this-repo)\n - [2. Compile & upload](#2-compile--upload)\n + [Using Arduino IDE](#using-arduino-ide)\n - [1. Install ESP for Arduino with Boards Manager](#1-install-esp-for-arduino-with-boards-manager)\n - [2. Compile and Upload](#2-compile-and-upload)\n + [Troubleshooting Upload](#troubleshooting-upload)\n - [Erase Flash](#erase-flash)\n - [Fully erase ESP](#fully-erase-esp)\n * [About](#about)\n * [Licence](#licence)\n\n\n\n***\n\n# User Guide\n\n## WiFi Setup\n\nOn first boot, OpenEVSE should broadcast a WiFi access point (AP) `OpenEVSE_XXX`. Connect to this AP (default password: `openevse`) and the [captive portal](https://en.wikipedia.org/wiki/Captive_portal) should forward you to the log-in page. If this does not happen, navigate to [http://openevse](http://openevse), [http://openevse.local](http://openevse.local) or [http://192.168.4.1](http://192.168.4.1)\n\n*Note: You may need to disable mobile data if connecting via a mobile*\n\n*Note: Use of Internet Explorer 11 or earlier is not recommended*\n\n![Wifi connect](docs/wifi-connect.png) ![Wifi setup](docs/wifi-scan.png)\n\n\n- Select your WiFi network from the list of available networks\n- Enter the WiFi PSK key then click `Connect`\n\n- OpenEVSE should now connect to the local WiFi network\n- Re-connect your device to the local WiFi network and connect to OpenEVSE using [http://openevse.local](http://openevse.local), [http://openevse](http://openevse) or the local IP address.\n\n**If connection / re-connection fails (e.g. network cannot be found or password is incorrect) the OpenEVSE will automatically revert to WiFi access point (AP) mode after a short while to allow a new network to be re-configured if required. Re-connection to the existing network will be attempted every 5 minutes.**\n\n*Holding the `boot / GPIO0` button on the ESP8266 module for about 5s will force WiFi access point mode. This is useful when trying to connect the unit to a new WiFi network. 
If the unit cannot connect to a WiFi network it will return to AP mode before retrying to connect.*\n\n***\n\n## OpenEVSE Web Interface\n\nAll functions of the OpenEVSE can be viewed and controlled via the web interface. Here is a screen grab showing the \'advanced\' display mode:\n\n![advanced](docs/adv.png)\n\nThe interface has been optimised to work well for both desktop and mobile. Here is an example setting a charging delay timer using an Android device:\n\n![android-clock](docs/mobile-clock.png)\n\n## Charging Mode: Eco\n\n\'Eco\' charge mode allows the OpenEVSE to adjust the charging current automatically based on an MQTT feed. This feed could be the amount of solar PV generation or the amount of excess power (grid export). \'Normal\' charge mode charges the EV at the maximum rate set.\n\n### Solar PV Divert Example\n\nThis is best illustrated using an Emoncms MySolar graph. The solar generation is shown in yellow and OpenEVSE power consumption in blue:\n\n![divert](docs/divert.png)\n\n- OpenEVSE is initially sleeping with an EV connected\n- Once solar PV generation reaches 6A (1.5kW @ 240V) the OpenEVSE initiates charging\n- Charging current is adjusted based on available solar PV generation\n- Once charging has begun, even if generation drops below 6A, the EV will continue to charge*\n\n**The decision was made not to pause charging if generation current drops below 6A since repeatedly starting / stopping a charge causes excess wear to the OpenEVSE relay contactor.*\n\nIf a Grid +I/-E (positive import / negative export) feed was used, the OpenEVSE would adjust its charging rate based on the *excess* power that would otherwise be exported to the grid; for example, if solar PV was producing 4kW and 1kW was being used on-site, the OpenEVSE would charge at 3kW and the amount exported to the grid would be 0kW. If on-site consumption increased to 2kW the OpenEVSE would reduce its charging rate to 2kW (a small sketch of this calculation follows the setup notes below).\n\nAn [OpenEnergyMonitor solar PV energy monitor](https://guide.openenergymonitor.org/applications/solar-pv/) with an AC-AC voltage sensor adaptor is required to monitor the direction of current flow.\n\n### Setup\n\n![eco](docs/eco.png)\n\n- To use \'Eco\' charging mode, MQTT must be enabled and \'Solar PV divert\' MQTT topics must be entered.\n- Integration with an OpenEnergyMonitor emonPi is straightforward:\n - Connect to the emonPi MQTT server; [emonPi MQTT credentials](https://guide.openenergymonitor.org/technical/credentials/#mqtt) should be pre-populated\n - Enter the solar PV generation / Grid (+I/-E) MQTT topic e.g. if solar PV is being monitored by emonPi CT channel 1 enter `emon/emonpi/power1`\n - The [MQTT lens Chrome extension](https://chrome.google.com/webstore/detail/mqttlens/hemojaaeigabkbcookmlgmdigohjobjm?hl=en) can be used to view MQTT data e.g. subscribe to `emon/#` for all OpenEnergyMonitor MQTT data. To learn more about MQTT see the [MQTT section of OpenEnergyMonitor user guide](https://guide.openenergymonitor.org/technical/mqtt/)\n - If using a Grid +I/-E (positive import / negative export) MQTT feed, ensure the notation positive import / negative export is correct; the CT sensor can be physically reversed on the cable to invert the reading.
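To make the divert rule of thumb from this section concrete, here is a minimal sketch of the calculation it describes (6A J1772 minimum, 1A steps, clamped to the configured maximum). This is an illustration under those stated assumptions, not the gateway's actual firmware logic; note the real unit also avoids pausing a charge that has already started, as explained above.

```python
MIN_CURRENT_A = 6      # lowest charging current the SAE J1772 protocol supports
NOMINAL_VOLTAGE = 240  # voltage assumed in the 1.5kW @ 240V examples above

def divert_current(excess_power_w, max_current_a):
    """Map excess (exported) power to a charging current in whole amps.

    Returns 0 while the available current is below the 6A minimum;
    otherwise clamps to the configured maximum charging current.
    """
    available_a = int(excess_power_w / NOMINAL_VOLTAGE)  # 1A increments
    if available_a < MIN_CURRENT_A:
        return 0
    return min(available_a, max_current_a)

# Example from the text: 4kW of solar with 1kW used on-site leaves ~3kW excess.
print(divert_current(3000, 32))  # -> 12 (amps, roughly 3kW @ 240V)
```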
### Operation\n\nTo enable \'Eco\' mode charging:\n\n- Connect the EV and ensure the EV\'s internal charging timer is switched off\n- Pause the charge; OpenEVSE should display \'sleeping\'\n- Enable \'Eco\' mode using the web interface or via MQTT\n- The EV will now begin charging when generation / excess current reaches 6A (1.4kW @ 240V)\n\n- During \'Eco\' charging, changes to charging current are temporary (not saved to EEPROM)\n- After an \'Eco mode\' charge the OpenEVSE will revert to \'Normal\' when the EV is disconnected and the previous \'Normal\' charging current will be reinstated.\n- Current is adjusted in 1A increments between 6A* (1.5kW @ 240V) and the max charging current (as set in OpenEVSE setup)\n- 6A is the lowest charging current that the SAE J1772 EV charging protocol supports\n- The OpenEVSE does not adjust the current itself but rather requests that the EV adjusts its charging current by varying the duty cycle of the pilot signal, see [theory of operation](https://openev.freshdesk.com/support/solutions/articles/6000052070-theory-of-operation) and [Basics of SAE J1772](https://openev.freshdesk.com/support/solutions/articles/6000052074-basics-of-sae-j1772).\n- Charging mode can be viewed and set via MQTT: `{base-topic}/divertmode/set` (1 = normal, 2 = eco).\n\n\\* *OpenEVSE controller firmware [V4.8.0](https://github.com/OpenEVSE/open_evse/releases/tag/v4.8.0) has a bug which restricts the lowest charging current to 10A. The J1772 protocol can go down to 6A. This ~~will~~ has been fixed with a firmware update. See [OpenEnergyMonitor OpenEVSE FW releases](https://github.com/openenergymonitor/open_evse/releases/). An ISP programmer is required to update the OpenEVSE controller FW.*\n\n***\n\n## Services\n\n![services](docs/services.png)\n\n### Emoncms data logging\n\nOpenEVSE can post its status values (e.g. amp, temp1, temp2, temp3, pilot, status) to [emoncms.org](https://emoncms.org) or any other Emoncms server (e.g. emonPi) using the [Emoncms API](https://emoncms.org/site/api#input). Data will be posted every 30s.\n\nData can be posted using HTTP or HTTPS. For HTTPS the Emoncms server must support HTTPS (emoncms.org does, the emonPi does not). Due to the limited resources on the ESP, the SSL SHA-1 fingerprint for the Emoncms server must be manually entered and regularly updated.\n\n*Note: the emoncms.org fingerprint will change every 90 days when the SSL certificate is renewed.*\n\n\n### MQTT\n\n#### OpenEVSE Status via MQTT\n\nOpenEVSE can post its status values (e.g. amp, wh, temp1, temp2, temp3, pilot, status) to an MQTT server. Data will be published as a sub-topic of the base topic, e.g. `/amp`. Data is published to MQTT every 30s.\n\nMQTT setup is pre-populated with the OpenEnergyMonitor [emonPi default MQTT server credentials](https://guide.openenergymonitor.org/technical/credentials/#mqtt).\n\n- Enter the MQTT server host and base-topic\n- (Optional) Enter server authentication details if required\n- Click connect\n- After a few seconds `Connected: No` should change to `Connected: Yes` if the connection is successful. Re-connection will be attempted every 10s. A refresh of the page may be needed.\n\n*Note: `emon/xxxx` should be used as the base-topic when posting to the emonPi MQTT server if you want the data to appear in emonPi Emoncms. See [emonPi MQTT docs](https://guide.openenergymonitor.org/technical/mqtt/).*
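As a concrete illustration of the MQTT surface described above, here is a minimal sketch using the third-party paho-mqtt Python package (an assumption of this example, not something shipped with the firmware). It subscribes to the status sub-topics and switches the divert mode via the `{base-topic}/divertmode/set` topic mentioned in the Operation section; the broker address and base-topic are placeholders for your own setup.

```python
# Minimal sketch using paho-mqtt (pip install paho-mqtt), 1.x callback API.
import paho.mqtt.client as mqtt

BASE_TOPIC = "openevse"  # assumed base-topic; match your MQTT configuration

def on_connect(client, userdata, flags, rc):
    # Status values (amp, temp1, pilot, ...) are published as sub-topics
    # of the base topic every 30s; subscribe to all of them.
    client.subscribe(BASE_TOPIC + "/#")

def on_message(client, userdata, msg):
    print(msg.topic, msg.payload.decode())

client = mqtt.Client()
client.on_connect = on_connect
client.on_message = on_message
client.connect("192.168.0.10", 1883)  # placeholder broker address

# Switch to 'Eco' divert mode (1 = normal, 2 = eco).
client.publish(BASE_TOPIC + "/divertmode/set", "2")

client.loop_forever()
```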
MQTT can also be used to control the OpenEVSE, see RAPI over MQTT below.\n\n### RAPI\n\nRAPI commands can be used to control and check the status of all OpenEVSE functions. RAPI commands can be issued via direct serial, the web interface, HTTP and MQTT. We recommend using RAPI over MQTT.\n\n**A full list of RAPI commands can be found in the [OpenEVSE plus source code](https://github.com/openenergymonitor/open_evse/blob/master/firmware/open_evse/rapi_proc.h).**\n\n#### RAPI via web interface\n\nEnter RAPI commands directly into the web interface (dev mode must be enabled); the RAPI response is printed in return:\n\n![enable-rapi](docs/enable-rapi.png)\n\n![rapi-web](docs/rapi-web.png)\n\n#### RAPI over MQTT\n\nRAPI commands can be issued via MQTT messages. The RAPI command should be published to the following MQTT topic:\n\n`/rapi/in/<$ rapi-command> payload`\n\ne.g. assuming a base-topic of `openevse`, the following command will set the current to 13A:\n\n`openevse/rapi/in/$SC 13`\n\nThe payload can be left blank if the RAPI command does not require a payload, e.g.\n\n`openevse/rapi/in/$GC`\n\nThe response from the RAPI command is published by the OpenEVSE back to the same sub-topic and can be received by subscribing to:\n\n`/rapi/out/#`\n\ne.g. `$OK`\n\n[See video demo of RAPI over MQTT](https://www.youtube.com/watch?v=tjCmPpNl-sA&t=101s)\n\n#### RAPI over HTTP\n\nRAPI (rapid API) commands can also be issued directly via a single HTTP request.\n\n*Assuming `192.168.0.108` is the local IP address of the OpenEVSE ESP.*\n\nE.g. the RAPI command to set the charging rate to 13A:\n\n[http://192.168.0.108/r?rapi=%24SC+13](http://192.168.0.108/r?rapi=%24SC+13)\n\nTo sleep (pause a charge) issue RAPI command `$FS`\n\n[http://192.168.0.108/r?rapi=%24FS](http://192.168.0.108/r?rapi=%24FS)\n\nTo enable (start / resume a charge) issue RAPI command `$FE`\n\n[http://192.168.0.108/r?rapi=%24FE](http://192.168.0.108/r?rapi=%24FE)\n\n\nThere is also an [OpenEVSE RAPI command python library](https://github.com/tiramiseb/python-openevse).
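The same HTTP endpoint is easy to drive from a script. A minimal sketch using the Python `requests` package (an assumption of this example; the IP address and the `$SC`/`$FS`/`$FE` commands are taken from the URLs above):

```python
# Minimal sketch: issue RAPI commands over HTTP with requests (pip install requests).
import requests

OPENEVSE = "http://192.168.0.108"  # example address from above; use your unit's IP

def rapi(command):
    # The /r endpoint takes the RAPI command in the 'rapi' query parameter;
    # requests URL-encodes the '$' and the space for us.
    response = requests.get(OPENEVSE + "/r", params={"rapi": command})
    response.raise_for_status()
    return response.text

print(rapi("$SC 13"))  # set charging rate to 13A
print(rapi("$FS"))     # sleep (pause a charge)
print(rapi("$FE"))     # enable (start / resume a charge)
```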
### OhmConnect\n\n**USA California only**\n[Join here](https://ohm.co/openevse)\n\n**Video - How does it Work**\nhttps://player.vimeo.com/video/119419875\n\n- Sign Up\n- Enter Ohm Key\n\nThe Ohm Key can be obtained by logging in to OhmConnect, entering Settings and locating the link in ""Open Source Projects""\nExample: https://login.ohmconnect.com/verify-ohm-hour/OpnEoVse\nKey: OpnEoVse\n\n## System\n\n![system](docs/system.png)\n\n### Authentication\n\nAdmin HTTP Authentication (highly recommended) can be enabled by saving the admin config with a username and password.\n\n**HTTP authentication is required for all HTTP requests, including the input API**\n\n\n### Firmware update\n\nPre-compiled .bin files can be uploaded via the web interface, see [OpenEVSE Wifi releases](https://github.com/OpenEVSE/ESP8266_WiFi_v2.x/releases) for the latest updates.\n\n### Hardware reset\n\nA hardware reset can be made (all WiFi and services config will be lost) by pressing and holding the GPIO0 hardware button (on the Huzzah WiFi module) for 10s.\n\nNote: Holding the GPIO0 button for 5s will put the WiFi unit into AP (access point) mode to allow the WiFi network to be changed without losing all the service config\n\n***\n\n## Upload pre-compiled firmware \n\n- Download pre-compiled FW from the [OpenEVSE Wifi releases](https://github.com/OpenEVSE/ESP8266_WiFi_v2.x/releases) page\n\n- Either flash using esptool:\n\n`esptool.py write_flash 0x000000 firmware.bin`\n\n- Or the OpenEnergyMonitor [emonUpload](https://github.com/openenergymonitor/emonupload) tool can be used to automatically download and flash the latest firmware. Be sure to select the correct version. \n\n- Or use the [Pyflasher](https://github.com/marcelstoer/nodemcu-pyflasher) GUI available on Windows/Mac \n\n## Firmware Compile & Upload\n\n**The ESP should be shipped with the latest firmware pre-installed; firmware can be updated via the HTTP web interface.**\n\n**Updating from V1: it\'s possible to update from V1 to V2 firmware using the HTTP web interface uploader, just upload the latest .bin pre-compiled firmware release.**\n\nIf required, firmware can also be uploaded via serial using a USB to UART cable.\n\nThe code for the ESP8266 can be compiled and uploaded using PlatformIO or the Arduino IDE. We recommend PlatformIO for its ease of use.\n\n\n### Compile & Upload Using PlatformIO\n\nFor more detailed ESP8266 Arduino core specific PlatformIO notes see: https://github.com/esp8266/Arduino#using-platformio\n\n#### a. Install PlatformIO command line\n\nThe easiest way if running Linux is to install using the install script. See [PlatformIO installation docs](http://docs.platformio.org/en/latest/installation.html#installer-script). Or the PlatformIO IDE can be used:\n\n`$ sudo python -c ""$(curl -fsSL https://raw.githubusercontent.com/platformio/platformio/master/scripts/get-platformio.py)""`\n\n#### b. And / Or use PlatformIO IDE\n\nA standalone IDE built on GitHub\'s Atom editor, or use the PlatformIO Atom IDE plug-in if you already have Atom installed. The IDE is nice, easy and self-explanatory.\n\n[Download PlatformIO IDE](http://platformio.org/platformio-ide)\n\n#### 1. Clone this repo\n\n`$ git clone https://github.com/OpenEVSE/ESP8266_WiFi_v2.x`\n\n\n#### 2. Compile & upload\n\n- Put the ESP into bootloader mode\n- On ESP boards such as the Adafruit HUZZAH, press and hold the `boot` button then press `reset`; the red LED should light dimly to indicate bootloader mode.\n- Compile and upload using PlatformIO\n\n```\npio run -t upload\n```\n\n*To enable OTA upload, first upload via serial using the dev environment; this sets the OTA-enable build flag. See `platformio.ini`*\n\n*Note: uploading SPIFFS is no longer required since web resources are [now embedded in the firmware](https://github.com/OpenEVSE/ESP8266_WiFi_v2.x/pull/87)*\n\n### Building the GUI\n\nThe GUI files are minified and compiled into the firmware using a combination of Webpack and a custom build script. You will also need Node.JS and NPM installed.\n\nIn addition, the GUI is now maintained in a separate repository and included as a Git submodule. If the `gui` directory is empty, use the following to retrieve the GUI source and fetch the dependencies.\n\n```shell\ngit submodule update --init\ncd gui\nnpm install\n```\n\nTo \'build\' the GUI use the following:\n\n```shell\ncd gui\nnpm run build\n```\n\nYou can then just compile and upload as above.\n\nFor more details see the [GUI documentation](gui/readme.md)\n\n***\n\n### Compile & Upload Using Arduino IDE\n\n#### 1. Install ESP for Arduino with Boards Manager\n\nFrom: https://github.com/esp8266/Arduino\n\nStarting with 1.6.4, Arduino allows installation of third-party platform packages using Boards Manager. 
ESP Arduino packages are available for Windows, Mac OS, and Linux (32 and 64 bit).\n\n- Install Arduino 1.6.8 from the Arduino website.\n- Start Arduino and open the Preferences window.\n- Enter http://arduino.esp8266.com/stable/package_esp8266com_index.json into the Additional Board Manager URLs field. You can add multiple URLs, separating them with commas.\n- Open Boards Manager from the Tools > Board menu and install the esp8266 platform (and don\'t forget to select your ESP8266 board from the Tools > Board menu after installation).\n\n\n#### 2. Compile and Upload\n\n- Open `src.ino` in the Arduino IDE.\n- Compile and upload as normal\n\n***\n\n### Troubleshooting\n\n\n### WiFi Connection issues \n\nIt\'s [been reported](https://community.openenergymonitor.org/t/wifi-connection-openevse/11039/19?u=glyn.hudson) that the OpenEVSE WiFi has issues with non-standard MTU settings; we recommend using the standard 1500 bytes for the router MTU setting \n\n#### Uploading issues\n\n- Double check the device is in bootloader mode\n- Try reducing the ESP upload baud rate\n- Erase flash: If you are experiencing the ESP hanging in a reboot loop after upload, it may be that the ESP flash has remnants of previous code (which may have used the ESP memory in a different way). The ESP flash can be fully erased using [esptool](https://github.com/themadinventor/esptool). With the unit in bootloader mode run:\n\n`$ esptool.py erase_flash`\n\nOutput:\n\n```\nesptool.py v1.2-dev\nConnecting...\nRunning Cesanta flasher stub...\nErasing flash (this may take a while)...\nErase took 8.0 seconds\n```\n\n**Fully erase ESP**\n\nTo fully erase all memory locations on an ESP-12 (4Mb) we need to upload a blank file to each memory location\n\n`esptool.py write_flash 0x000000 blank_1MB.bin 0x100000 blank_1MB.bin 0x200000 blank_1MB.bin 0x300000 blank_1MB.bin`\n\n#### View serial debug\n\nTo help debug, it may be useful to enable serial debug output. To do this, upload using the `openevse_dev` environment e.g.\n\n`pio run -t upload -eopenevse_dev`\n\nThe default is to enable serial debug on Serial1, the ESP\'s 2nd serial port. You will need to connect a debugger to the ESP Serial1 Tx pin (GPIO2).\n\nTo change to use Serial0 (the ESP\'s main serial port) change `-DDEBUG_PORT=Serial1` to `-DDEBUG_PORT=Serial` in `platformio.ini`. Note that using serial 0 will adversely affect RAPI communication with the OpenEVSE controller.\n\n***\n\n## About\n\nCollaboration of [OpenEnergyMonitor](http://openenergymonitor.org) and [OpenEVSE](https://openevse.com).\n\nContributions by:\n\n- @glynhudson\n- @chris1howell\n- @trystanlea\n- @jeremypoulter\n- @sandeen\n- @lincomatic\n- @joverbee\n\n## Licence\n\nGNU General Public License (GPL) V3\n'",,"2017/03/02, 20:44:50",2428,GPL-3.0,0,852,"2022/05/16, 16:53:54",20,109,246,1,527,0,0.1,0.528443113772455,"2020/07/29, 19:08:08",2.9.1,0,15,false,,false,false,,,https://github.com/OpenEVSE,http://www.openevse.com,,,,https://avatars.githubusercontent.com/u/14914533?v=4,,, navitia,"An open source web API, initially built to provide traveler information on urban transportation networks.",CanalTP,https://github.com/hove-io/navitia.git,github,"public-transportation,navitia,open-api,journey-planner,gtfs,trip-planner,mobility,mobility-as-a-service,trip-planning",Mobility and Transportation,"2023/10/24, 12:06:46",414,0,37,true,C++,Hove,hove-io,"C++,Python,CMake,Shell,C,Scala,Makefile,Mako,HTML,Procfile",https://www.navitia.io/,"b"".. 
image:: documentation/diagrams/kraken.jpg\n :alt: navitia\n :align: center\n\nDear member of the Navitia community,\n\nThank you for your interest in our products, and in particular in our traveler information platform.\n\nWe are pleased to inform you that **a new version of Navitia will soon be available**.\n\nAs part of the launch of this new version, we have decided to restrict access to our source code. As a consequence, our repositories on GitHub will be progressively closed between now and the end of September. This decision is above all motivated by the desire to offer our customers a better quality of service, both functionally and in terms of performance. It also addresses a number of cyber security requirements.\n\nThe historical version of Navitia, opened in 2014, will remain available. This version is also available through our freemium offer via https://navitia.io , an offer that will progressively move to our new version of Navitia from the end of 2023.\nUnder certain conditions, this offer gives you access to support.\n\nOur sales department sales@hove.com naturally remains at your disposal to work out with you a formula adapted to your needs.\n\nBest regards,\nHove\n\n=========\n Navitia\n=========\n``(pronounce [navi-sia])``\n\n\n.. |Version Logo| image:: https://img.shields.io/github/v/tag/hove-io/navitia?logo=github&style=flat-square\n :target: https://github.com/hove-io/navitia/releases\n :alt: version\n\n.. |Build Status| image:: https://img.shields.io/github/workflow/status/hove-io/navitia/Build%20Navitia%20Packages%20For%20Release?logo=github&style=flat-square\n :target: https://github.com/hove-io/navitia/actions?query=workflow%3A%22Build+Navitia+Packages+For+Release%22\n :alt: Last build\n\n.. |License| image:: https://img.shields.io/github/license/hove-io/navitia?color=9873b9&style=flat-square\n :alt: license\n\n.. |Chat| image:: https://img.shields.io/matrix/navitia:matrix.org?logo=riot&style=flat-square\n :target: https://app.element.io/#/room/#navitia:matrix.org\n :alt: chat\n\n.. |Code Coverage| image:: https://sonarcloud.io/api/project_badges/measure?project=Hove_navitia&metric=coverage\n :alt: SonarCloud Coverage\n\n.. |Vulnerabilities| image:: https://sonarcloud.io/api/project_badges/measure?project=Hove_navitia&metric=vulnerabilities\n :alt: SonarCloud Vulnerabilities\n\n.. |Security Rating| image:: https://sonarcloud.io/api/project_badges/measure?project=Hove_navitia&metric=security_rating\n :alt: SonarCloud Security Rating\n\n\n\n\n+----------------+----------------+-----------+--------+-----------------+-------------------+-------------------+\n| Version | Build status | License | Chat | Code Coverage | Vulnerabilities | Security Rating |\n+----------------+----------------+-----------+--------+-----------------+-------------------+-------------------+\n| |Version Logo| | |Build Status| | |License| | |Chat| | |Code Coverage| | |Vulnerabilities| | |Security Rating| |\n+----------------+----------------+-----------+--------+-----------------+-------------------+-------------------+\n\n\n.. 
|Maintainability Rating| image:: https://sonarcloud.io/api/project_badges/measure?project=Hove_navitia&metric=sqale_rating\n :alt: SonarCloud Maintainability Rating\n\n.. |Quality Gate Status| image:: https://sonarcloud.io/api/project_badges/measure?project=Hove_navitia&metric=alert_status\n :alt: SonarCloud Quality Gate Status\n\n.. |Duplicated Lines (%)| image:: https://sonarcloud.io/api/project_badges/measure?project=Hove_navitia&metric=duplicated_lines_density\n :alt: SonarCloud Duplicated Lines (%)\n\n.. |Reliability Rating| image:: https://sonarcloud.io/api/project_badges/measure?project=Hove_navitia&metric=reliability_rating\n :alt: SonarCloud Reliability Rating\n\n.. |Bugs| image:: https://sonarcloud.io/api/project_badges/measure?project=Hove_navitia&metric=bugs\n :alt: SonarCloud Bugs\n\n.. |Lines of Code| image:: https://sonarcloud.io/api/project_badges/measure?project=Hove_navitia&metric=ncloc\n :alt: SonarCloud Lines of Code\n\n+--------------------------+-----------------------+------------------------+----------------------+--------+-------------------+\n| Maintainability | Quality Gate | Duplicated Lines (%) | Reliability | Bugs | Lines of Code |\n+--------------------------+-----------------------+------------------------+----------------------+--------+-------------------+\n| |Maintainability Rating| | |Quality Gate Status| | |Duplicated Lines (%)| | |Reliability Rating| | |Bugs| | |Lines of Code| |\n+--------------------------+-----------------------+------------------------+----------------------+--------+-------------------+\n\n\nPresentation\n============\nWelcome to the Navitia repository!\n\nNavitia is a webservice providing:\n\n#. multi-modal journeys computation\n\n#. line schedules\n\n#. next departures\n\n#. exploration of public transport data\n\n#. search & autocomplete on places\n\n#. sexy things such as isochrones\n\n\nApproach\n--------\n\n| Navitia is an open-source web API, **initially** built to provide traveler information on urban\n transportation networks.\n|\n| Its main purpose is to provide day-to-day information to travelers.\n| Over time, Navitia has been able to do way more, *sometimes* for technical and debugging purposes\n *or* because other functional needs fit quite well in what Navitia can do *or* just because it was\n quite easy and super cool.\n|\n| Technically, Navitia is a HATEOAS_ API that returns JSON-formatted results.\n\n.. _HATEOAS: https://en.wikipedia.org/wiki/HATEOAS\n\n\nWho's who\n----------\n\n| Navitia is instantiated and exposed publicly through api.navitia.io_.\n| Developments on Navitia are led by Hove (previously Kisio Digital and CanalTP).\n| Hove is a subsidiary of Keolis (itself a subsidiary of SNCF, the French national railway company).\n\n.. 
\n\nMore information\n----------------\n\n* main web site https://www.navitia.io\n* playground https://playground.navitia.io\n* integration documentation https://doc.navitia.io\n* technical documentation https://github.com/hove-io/navitia/tree/dev/documentation/rfc\n* twitter @navitia https://twitter.com/navitia\n\n\nGetting started\n===============\n\nWant to test the API?\n----------------------\n\n| The easiest way is to go to `navitia.io `_.\n| `Signup `_, grab a token, read the `doc `_\n and start using the API!\n\nFor a friendlier interface you can use the API through\n`navitia playground `_ (no matter the server used).\n\n\nWant to dev and contribute to navitia?\n---------------------------------------\n\nIf you want to build navitia, develop in it or read more about technical details please refer to\n`CONTRIBUTING.md `_.\n\nCurious who's contributing? :play_or_pause_button: https://www.youtube.com/watch?v=GOLfMTMGVFI\n\nArchitecture overview\n=====================\nNavitia is made of 3 main modules:\n\n#. *Kraken* is the C++ core (heavy computation)\n\n#. *Jörmungandr* is the Python frontend (webservice and lighter computation)\n\n#. *Ed* is the PostgreSQL database (used for preliminary binarization)\n\n*Kraken* and *Jörmungandr* communicate with each other through protocol buffer messages sent over ZMQ.\n\n| Transportation data (in the `NTFS `_,\n or `GTFS `_ format) or routing data\n (mainly from `OpenStreetMap `_ for the moment) can be given to *Ed*.\n| *Ed* produces a binary file used by *Kraken*.\n\n.. image:: documentation/diagrams/Navitia_simple_architecture.png\n\nMore information here: https://github.com/hove-io/navitia/wiki/Architecture\n\nAlternatives?\n=============\nNavitia is written in C++ / Python; here are some alternatives:\n\n* | `OpenTripPlanner `_ : written in Java.\n | More information here https://github.com/hove-io/navitia/wiki/OpenTripPlanner-and-Navitia-comparison.\n* `rrrr `_ : the lightest one, written in Python/C\n* `Motis `_ : multi-objective algorithm, similar to Navitia in its approach\n* `Mumoro `_ : an R&D MUltiModal MUltiObjective ROuting algorithm\n""",,"2013/06/19, 15:34:30",3780,AGPL-3.0,1388,21195,"2023/10/24, 13:05:38",62,3959,4093,290,1,30,0.9,0.8361537105587873,"2023/10/24, 13:20:52",v15.46.1,0,69,false,,false,true,,,https://github.com/hove-io,https://www.hove.com,Paris,,,https://avatars.githubusercontent.com/u/2743985?v=4,,, DeepMove,Predicting Human Mobility with Attentional Recurrent Networks.,vonfeng,https://github.com/vonfeng/DeepMove.git,github,"mobility-trajectory,attention,www,prediction",Mobility and Transportation,"2019/08/03, 10:09:27",133,0,18,true,Python,,,Python,,"b""# DeepMove\nPyTorch implementation of the WWW'18 paper DeepMove: Predicting Human Mobility with Attentional Recurrent Networks [link](https://dl.acm.org/citation.cfm?id=3178876.3186058)\n\n# Datasets\nThe sample data to evaluate our model can be found in the data folder, which contains 800+ users and is ready for direct use. Raw mobility data similar to that used in the paper can be found in this public [link](https://sites.google.com/site/yangdingqi/home/foursquare-dataset).\n\n# Requirements\n- Python 2.7\n- [PyTorch](https://pytorch.org/previous-versions/) 0.2.0\n\ncPickle is used in the project to store the preprocessed data and parameters. Although it produces some warnings, PyTorch 0.3.0 can also be used.
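\n\nFor orientation, loading the pickled sample data might look like the sketch below (the exact file name under /data is a guess, not taken from this README):\n\n```python\n# Python 2.7, matching the requirements above.\nimport cPickle as pickle\n\n# Hypothetical file name; substitute the pickle file that ships in /data.\nwith open('data/foursquare.pk', 'rb') as f:\n    data = pickle.load(f)\n\nprint data.keys()\n```\n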
\n\n# Project Structure\n- /codes\n - [main.py](https://github.com/vonfeng/DeepMove/blob/master/codes/main.py)\n - [model.py](https://github.com/vonfeng/DeepMove/blob/master/codes/model.py) # define models\n - [sparse_traces.py](https://github.com/vonfeng/DeepMove/blob/master/codes/sparse_traces.py) # foursquare data preprocessing \n - [train.py](https://github.com/vonfeng/DeepMove/blob/master/codes/train.py) # tools for training the model\n- /pretrain\n - /simple\n - [res.m](https://github.com/vonfeng/DeepMove/blob/master/pretrain/simple/res.m) # pretrained model file\n - [res.json](https://github.com/vonfeng/DeepMove/blob/master/pretrain/simple/res.json) # detailed evaluation results\n - [res.txt](https://github.com/vonfeng/DeepMove/blob/master/pretrain/simple/res.txt) # evaluation results\n - /simple_long\n - /attn_local_long\n - /attn_avg_long_user\n- /data # preprocessed foursquare sample data (pickle file)\n- /docs # paper and presentation file\n- /results # the default save path when training the model\n\n# Usage\n1. Load a pretrained model:\n> ```bash\n> python main.py --model_mode=attn_avg_long_user --pretrain=1\n> ```\n\nThe code contains four network models (simple, simple_long, attn_avg_long_user, attn_local_long) and a baseline model (Markov). The parameter settings for these models can be found in their respective [res.txt](https://github.com/vonfeng/DeepMove/blob/master/pretrain/simple/res.txt) files.\n\n|model_in_code | model_in_paper | top-1 accuracy (pre-trained)|\n|:---:|:---:|:---:|\n|markov | markov | 0.082|\n|simple | RNN-short | 0.096|\n|simple_long | RNN-long | 0.118|\n|attn_avg_long_user | Ours attn-1 | 0.133|\n|attn_local_long | Ours attn-2 | 0.145|\n\n2. Train a new model:\n> ```bash\n> python main.py --model_mode=attn_avg_long_user --pretrain=0\n> ```\n\nOther parameters (refer to [main.py](https://github.com/vonfeng/DeepMove/blob/master/codes/main.py)):\n- for training: \n - learning_rate, lr_step, lr_decay, L2, clip, epoch_max, dropout_p\n- model definition: \n - loc_emb_size, uid_emb_size, tim_emb_size, hidden_size, rnn_type, attn_type\n - history_mode: avg, avg, whole\n\n# Others\nA batch version of this project will come soon.""",,"2018/06/15, 16:31:08",1958,GPL-2.0,0,7,"2023/04/27, 06:41:09",8,0,3,1,181,1,0,0.0,,,0,1,false,,false,false,,,,,,,,,,, mobility-data-specification,A data standard to enable communication between mobility companies and local governments.,openmobilityfoundation,https://github.com/openmobilityfoundation/mobility-data-specification.git,github,"mds,scooters,cities,carshare,bikesharing,scooter-sharing,mobility-as-a-service,bike-share,bike-sharing,mobility,mobility-data,micromobility,passenger-services,delivery-robot,geofencing,right-of-way,policy-as-code,delivery,taxi,open-source",Mobility and Transportation,"2023/05/09, 12:46:39",651,0,51,true,,Open Mobility Foundation,openmobilityfoundation,,https://www.openmobilityfoundation.org/about-mds/,"b'# Mobility Data Specification\n\n## Table of Contents\n\n- [About](#about)\n- [Endpoints](#endpoints)\n - [Modularity](#modularity)\n - [GBFS Requirement](#gbfs-requirement)\n- [Modes](#modes)\n- [Versions](#versions)\n - [Technical Information](#technical-information)\n - [Data Validation](#data-validation)\n- [Get Involved](#get-involved)\n - [Membership](#membership)\n- [Cities Using MDS](#cities-using-mds)\n- [Providers Using MDS](#providers-using-mds)\n- [Software Companies Using MDS](#software-companies-using-mds)\n- [Data 
Privacy](#data-privacy)\n- [Use Cases](#use-cases)\n- [Related Projects](#related-projects)\n\n# About\n\nThe Mobility Data Specification (**MDS**), a project of the [Open Mobility Foundation](http://www.openmobilityfoundation.org) (**OMF**), is a set of Application Programming Interfaces (APIs) that helps cities better manage transportation in the public right of way, standardizing communication and data-sharing between cities and mobility providers, allowing cities to share and validate policy digitally, and enabling vehicle management and better outcomes for residents. Inspired in part by projects like [GTFS](https://developers.google.com/transit/gtfs/reference/) and [GBFS](https://github.com/NABSA/gbfs), MDS is focused on managing mobility services such as dockless scooters, bicycles, mopeds, car share, delivery robots, and passenger services. \n\n**MDS** is a key piece of digital infrastructure that supports the effective implementation of mobility policies in cities around the world. For a high level overview and visuals, see the [About MDS](https://www.openmobilityfoundation.org/about-mds/) page on the OMF website.\n\n![MDS Main Logo](https://i.imgur.com/AiUedl3.png)\n\n**MDS** is an open-source project originally created by the [Los Angeles Department of Transportation](http://ladot.io) (LADOT). In November 2019, stewardship of MDS and the ownership of this repository were transferred to the [Open Mobility Foundation](http://www.openmobilityfoundation.org). GitHub automatically redirects any links to this repository from the `CityOfLosAngeles` organization to the `openmobilityfoundation` instead. MDS continues to be used by LADOT and [many other municipalities](#cities-using-mds) and companies.\n\n[Top][toc]\n\n# Endpoints\n\n**MDS** is comprised of six distinct APIs, with multiple endpoints under each API:\n\n\n\nThe [`provider`][provider] API endpoints are intended to be implemented by mobility providers and consumed by regulatory agencies. Data is **pulled** from providers by agencies. When a municipality queries information from a mobility provider, the Provider API provides historical and recent views of operations. First released in June 2018.\n\n---\n\n\n\nThe [`agency`][agency] API endpoints are intended to be implemented by regulatory agencies and consumed by mobility providers. Data is **pushed** to agencies by providers. Providers query the Agency API when events (such as a trip start or vehicle status change) occur in their systems. First released in April 2019.\n\n---\n\n\n\nThe [`policy`][policy] API endpoints are intended to be implemented by regulatory agencies and consumed by mobility providers. Providers query the Policy API to get information about local rules that may affect the operation of their mobility service or which may be used to determine compliance. First released in October 2019.\n\n---\n\n\n\nThe [`geography`][geography] API endpoints are intended to be implemented by regulatory agencies and consumed by mobility providers. Providers query the Geography API to get information about geographical regions for regulatory and other purposes. First released in October 2019, as part of the Policy specification.\n\n---\n\n\n\nThe [`jurisdiction`][jurisdiction] API endpoints are intended to be implemented by regulatory agencies that have a need to coordinate with each other. The Jurisdiction API endpoints allow cities to communicate boundaries between one another and to mobility providers. 
First released in March 2021.\n\n---\n\n\n\nThe [`metrics`](/metrics) API endpoints are intended to be implemented by regulatory agencies or their appointed third-party representatives to have a standard way to consistently describe available metrics, and create an extensible interface for querying MDS metrics. First released in March 2021.\n\n---\n\n## Modularity\n\nMDS is designed to be a modular kit-of-parts. Regulatory agencies can use the components of the API that are appropriate for their needs. An agency may choose to use only `agency`, `provider`, and/or `policy`. Other APIs like `geography`, `jurisdiction`, and/or `metrics` can be used in coordination as described with these APIs or sometimes on their own. Or agencies may select specific elements (endpoints) from each API to help them implement their goals. Development of the APIs takes place under the guidance of the OMF\'s [MDS Working Group](https://github.com/openmobilityfoundation/mobility-data-specification/wiki/MDS-Working-Group).\n\nMany parts of the MDS definitions and APIs align across each other. In these cases, consolidated information can be found on the [General Information](/general-information.md) page.\n\nYou can read more in our **[Understanding the different MDS APIs](https://github.com/openmobilityfoundation/governance/blob/main/technical/Understanding-MDS-APIs.md)** guide. \n\n![MDS APIs and Endpoints](https://i.imgur.com/i27Mmfw.png)\n\n## GBFS Requirement\n\nAll MDS compatible Provider and/or Agency feeds must also expose a public [GBFS](https://github.com/NABSA/gbfs) feed for the micromobility and car share [modes](/modes) (passenger services and delivery robots may be optionally supported at the discretion of the agency running the program). Compatibility with [GBFS 2.2](https://github.com/NABSA/gbfs/blob/v2.2/gbfs.md) or greater is advised, or the version recommended per MobilityData\'s [supported releases](https://github.com/MobilityData/gbfs#past-version-releases) guidance. Read MobilityData\'s RFP recommendations and required files list in their [GBFS and Shared Mobility Data Policy guide](https://mobilitydata.org/gbfs-and-shared-mobility-data-policy/).\n\nSee our [MDS Vehicles Guide](https://github.com/openmobilityfoundation/mobility-data-specification/wiki/MDS-Vehicles) for how MDS Provider/Agency `/vehicles` can be used by regulators instead of the public GBFS `/free_bike_status`. Additional information on MDS and GBFS can be found in this [guidance document](https://github.com/openmobilityfoundation/governance/blob/main/technical/GBFS_and_MDS.md).\n\n[Top][toc]\n\n# Modes\n\nMDS supports multiple ""modes"", defined as a distinct regulatory framework for a type of mobility service. See the [modes overview](/modes) or get started with a specific mode:\n\n- **[Micromobility](/modes/micromobility.md)** - dockless or docked small devices such as e-scooters and bikes.\n- **[Passenger services](/modes/passenger-services.md)** - transporting individuals with a vehicle driven by another entity, including taxis, TNCs, and microtransit\n- **[Car share](/modes/car-share.md)** - shared point-to-point and station-based multi-passenger vehicles.\n- **[Delivery robots](/modes/delivery-robots.md)** - autonomous and remotely driven goods delivery devices\n\n

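\nAs a concrete flavor of the pull pattern used throughout these APIs, the hedged sketch below shows an agency querying a provider\'s `/vehicles` endpoint. The base URL, the token, and the exact `Accept` media type are placeholders to adapt to your provider and MDS version; none of them come from this spec.\n\n```python\nimport requests\n\n# Hypothetical provider base URL and agency bearer token.\nBASE_URL = "https://mds.example-provider.com"\nTOKEN = "your-agency-token"\n\nresponse = requests.get(\n    BASE_URL + "/vehicles",\n    headers={\n        "Authorization": "Bearer " + TOKEN,\n        # MDS versions its APIs via a vendored media type; adjust to your version.\n        "Accept": "application/vnd.mds+json;version=2.0",\n    },\n)\nresponse.raise_for_status()\nfor vehicle in response.json().get("vehicles", []):\n    print(vehicle.get("device_id"), vehicle.get("vehicle_type"))\n```\n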
\n\n[Top][toc]\n\n# Versions\n\nMDS has a **current release** (version 2.0.0), **previous releases** (both recommended and no longer recommended for use), and **upcoming releases** in development. For a full list of releases, their status, recommended versions, and timelines, see the [Official MDS Releases](https://github.com/openmobilityfoundation/governance/wiki/Releases) page.\n\nThe OMF provides guidance on upgrading for cities, providers, and software companies, and sample permit language for cities. See our [MDS Version Guidance](https://github.com/openmobilityfoundation/governance/blob/main/technical/OMF-MDS-Version-Guidance.md) for best practices on how and when to upgrade MDS as new versions become available. Our complementary [MDS Policy Language Guidance](https://github.com/openmobilityfoundation/governance/blob/main/technical/OMF-MDS-Policy-Language-Guidance.md) document is for cities writing MDS into their operating policy and includes sample policy language.\n\n## Technical Information\n\nThe latest MDS release is in the [`main`](https://github.com/openmobilityfoundation/mobility-data-specification/tree/main) branch, and development for the next release occurs in the [`dev`](https://github.com/openmobilityfoundation/mobility-data-specification/tree/dev) branch.\n\nThe MDS specification is versioned using Git tags and [semantic versioning](https://semver.org/). See prior [releases](https://github.com/openmobilityfoundation/mobility-data-specification/releases) and the [Release Guidelines](https://github.com/openmobilityfoundation/governance/blob/main/technical/ReleaseGuidelines.md) for more information and [version support](https://github.com/openmobilityfoundation/governance/blob/main/technical/ReleaseGuidelines.md#ongoing-version-support).\n\n* [Latest Release Branch](https://github.com/openmobilityfoundation/mobility-data-specification/tree/main) (main)\n* [Development Branch](https://github.com/openmobilityfoundation/mobility-data-specification/tree/dev) (dev)\n* [All GitHub Releases](https://github.com/openmobilityfoundation/mobility-data-specification/releases)\n* [MDS Releases](https://github.com/openmobilityfoundation/governance/wiki/Releases) - current/recommended versions, timeline\n* [Release Guidelines](https://github.com/openmobilityfoundation/governance/blob/main/technical/ReleaseGuidelines.md)\n\n## Data Validation\n\nTo help with MDS data and feed validation, please see our OpenAPI schema description in the OMF [mds-openapi](https://github.com/openmobilityfoundation/mds-openapi) repository. Browsable interactive documentation is also linked in that repository.\n\nStarting with MDS 2.0, OpenAPI documents describe MDS endpoints and allow for [schema](/schema) validation, expanding on the JSON Schemas formerly housed in this repository.\n\n[Top][toc]\n\n# Get Involved\n\nTo stay up to date on MDS, please **subscribe to the [mds-announce](https://groups.google.com/a/groups.openmobilityfoundation.org/g/mds-announce) mailing list** for general updates, the **[wg-mds](https://groups.google.com/a/groups.openmobilityfoundation.org/g/wg-mds) mailing list** for Working Group details and meetings, and read our **[Community Wiki](https://github.com/openmobilityfoundation/mobility-data-specification/wiki)**.\n\nThe Mobility Data Specification is an open source project with all development taking place on GitHub and public meetings through our [MDS Working Group](https://github.com/openmobilityfoundation/mobility-data-specification/wiki/MDS-Working-Group). 
Comments and ideas can be shared by [starting a discussion](https://github.com/openmobilityfoundation/mobility-data-specification/discussions), [creating an issue](https://github.com/openmobilityfoundation/mobility-data-specification/issues), suggesting changes with [a pull request](https://github.com/openmobilityfoundation/mobility-data-specification/pulls), and attending meetings. Before contributing, please review our OMF [CONTRIBUTING](https://github.com/openmobilityfoundation/governance/blob/main/CONTRIBUTING.md) and [CODE OF CONDUCT](https://github.com/openmobilityfoundation/governance/blob/main/CODE_OF_CONDUCT.md) pages to understand guidelines and policies for participation.\n\n**Read our [How to Get Involved in the Open Mobility Foundation](https://www.openmobilityfoundation.org/how-to-get-involved-in-the-open-mobility-foundation/) blog post for more detail and an overview of how the OMF is organized.**\n\nFor other questions about MDS or media inquiries please contact the OMF directly [on our website](https://www.openmobilityfoundation.org/get-in-touch/). \n\n## Membership\n\nOMF Members (public agencies and commercial companies) have additional participation opportunities with leadership roles within our OMF [governance](https://github.com/openmobilityfoundation/governance#omf-scope-of-work):\n\n- [Board of Directors](https://www.openmobilityfoundation.org/about/)\n- [Privacy, Security, and Transparency Committee](https://github.com/openmobilityfoundation/privacy-committee)\n- [Technology Council](https://github.com/openmobilityfoundation/governance/wiki/Technology-Council)\n- [Strategy Committee](https://github.com/openmobilityfoundation/governance/wiki/Strategy-Committee)\n- [Advisory Committee](https://github.com/openmobilityfoundation/governance/wiki/Advisory-Committee)\n- Steering committees of all [Working Groups](https://github.com/openmobilityfoundation/mobility-data-specification/wiki#omf-meetings), currently:\n - [MDS Working Group](https://github.com/openmobilityfoundation/mobility-data-specification/wiki/MDS-Working-Group)\n - [Curb Working Group](https://github.com/openmobilityfoundation/curb-data-specification/wiki)\n\nRead about [how to become an OMF member](https://www.openmobilityfoundation.org/how-to-become-a-member/), [how to get involved and our governance model](https://www.openmobilityfoundation.org/how-to-get-involved-in-the-open-mobility-foundation/), and [contact us](https://mailchi.mp/openmobilityfoundation/membership) for more details. \n\n[Top][toc]\n\n# Cities Using MDS\n\nMore than 200 cities and public agencies across 21 countries around the world are known to use MDS, and it has been implemented by most major [mobility service providers](#providers-using-mds). 
\n- See our **[list of cities using MDS](https://www.openmobilityfoundation.org/mds-users/#cities-using-mds)** with links to public mobility websites and policy/permit documents.\n\nPlease let us know [via our website](https://www.openmobilityfoundation.org/get-in-touch/) or in the [public discussion area](https://github.com/openmobilityfoundation/mobility-data-specification/discussions) if you are an agency using MDS so we can add you to the city resource list, especially if you have published your policies or documents publicly.\n\nTo add yourself to the [agency list](/agencies.csv) and add your [Policy Requirement link](/provider.md#requirements), please let us know [via our website](https://www.openmobilityfoundation.org/get-in-touch/) or open an [Issue](https://github.com/openmobilityfoundation/mobility-data-specification/issues) or [Pull Request](https://github.com/openmobilityfoundation/mobility-data-specification/pulls). Find out how in our [Adding an Agency ID](https://github.com/openmobilityfoundation/mobility-data-specification/wiki/Adding-an-MDS-Agency-ID) help document.\n\n[Top][toc]\n\n# Providers Using MDS\n\nOver four dozen mobility service providers (MSPs) around the world use MDS, allowing them to create tools around a single data standard for multiple cities. \n\n- See our **[list of providers using MDS](https://www.openmobilityfoundation.org/mds-users/#mobility-providers-using-mds)**. For a table list with unique IDs, see the MDS [provider list](/providers.csv), which includes both service operators and data solution providers.\n- A provider needs a unique ID for each [mode](#modes) they operate under.\n\nTo add yourself to the provider list, please let us know [via our website](https://www.openmobilityfoundation.org/get-in-touch/) or open an [Issue](https://github.com/openmobilityfoundation/mobility-data-specification/issues) or [Pull Request](https://github.com/openmobilityfoundation/mobility-data-specification/pulls). Find out how in our [Adding a Provider ID](https://github.com/openmobilityfoundation/mobility-data-specification/wiki/Adding-an-MDS-Provider-ID) help document.\n\n[Top][toc]\n\n# Software Companies Using MDS\n\nAn open source approach to data specifications benefits cities and companies by creating a space for collaborative development, reducing costs, and nurturing a healthy, competitive ecosystem for mobility services and software tools. This open model supports dozens of software companies providing their services and tools to cities, agencies, and providers.\n\n- See our **[list of third party software companies using MDS](https://www.openmobilityfoundation.org/mds-users/#software-companies-using-mds)** and an article about the [benefits of an open approach](https://www.openmobilityfoundation.org/why-open-behind-omfs-unique-open-source-model/). \n- For a table list with unique IDs, see the MDS [provider list](/providers.csv), which includes both service operators and data solution providers.\n\nTo add yourself to the provider list (as a data solution provider), please let us know [via our website](https://www.openmobilityfoundation.org/get-in-touch/) or open an [Issue](https://github.com/openmobilityfoundation/mobility-data-specification/issues) or [Pull Request](https://github.com/openmobilityfoundation/mobility-data-specification/pulls). 
Find out how in our [Adding a Provider ID](https://github.com/openmobilityfoundation/mobility-data-specification/wiki/Adding-an-MDS-Provider-ID) help document.\n\nPlease [let us know](https://www.openmobilityfoundation.org/get-in-touch/) if you are using MDS in your company so we can add you to the list.\n\n[Top][toc]\n\n# Data Privacy\n\nMDS includes information about vehicles, their location, and trips taken on those vehicles to allow agencies to regulate their use in the public right of way and to conduct transportation planning and analysis. While MDS is not designed to convey personal information about the users of shared mobility services, data collected about mobility can be sensitive. The OMF and MDS community have created a number of resources to help cities, mobility providers, and software companies handle vehicle data safely:\n\n* [MDS Privacy Guide for Cities](https://github.com/openmobilityfoundation/governance/raw/main/documents/OMF-MDS-Privacy-Guide-for-Cities.pdf) - guide that covers essential privacy topics and best practices\n* [The Privacy Principles for Mobility Data](https://www.mobilitydataprivacyprinciples.org/) - principles endorsed by the OMF and other mobility organizations to guide the mobility ecosystem in the responsible use of data and the protection of individual privacy\n* [Mobility Data State of Practice](https://github.com/openmobilityfoundation/privacy-committee/blob/main/products/state-of-the-practice.md) - real-world examples related to the handling and protection of MDS and other types of mobility data\n* [Understanding the Data in MDS](https://github.com/openmobilityfoundation/mobility-data-specification/wiki/Understanding-the-Data-in-MDS) - technical document outlining what data is (and is not) in MDS\n* [Use Case Database](https://www.openmobilityfoundation.org/whats-possible-with-mds/) - a starting point for understanding how MDS can be used, and which parts of MDS are required to meet those use cases\n* [Policy Requirements](https://github.com/openmobilityfoundation/mobility-data-specification/tree/main/policy#requirement) - built into MDS, allowing agencies to specify only the endpoints and fields needed for program regulation\n* [Using MDS Under GDPR](https://www.openmobilityfoundation.org/using-mds-under-gdpr/) - how to use MDS in the context of GDPR in Europe\n\nThe OMF\'s [Privacy, Security, and Transparency Committee](https://github.com/openmobilityfoundation/privacy-committee#welcome-to-the-privacy-security-and-transparency-committee) creates many of these resources, and advises the OMF on principles and practices that ensure the secure handling of mobility data. The committee, which is composed of both private and public sector OMF members, also holds regular public meetings, which provide additional resources and an opportunity to discuss issues related to privacy and mobility data. Learn more [here](https://github.com/openmobilityfoundation/privacy-committee#welcome-to-the-privacy-security-and-transparency-committee).\n\n[Top][toc]\n\n# Use Cases\n\nHow cities use MDS depends on a variety of factors: their transportation goals, existing services and infrastructure, and the unique needs of their communities. Cities are using MDS to create policy, enforce rules, manage hundreds of devices, and ensure the safe operation of vehicles in the public right of way. 
Some examples of how cities are using MDS in practice are:\n\n- **Vehicle Caps:** Determine the total number of vehicles per operator in the right of way\n- **Distribution Requirements:** Ensure vehicles are distributed according to equity requirements\n- **Injury Investigation:** Investigate injuries and collisions with other objects and cars to determine roadway accident causes\n- **Restricted Area Rides:** Find locations where vehicles are operating in or passing through restricted areas\n- **Resident Complaints:** Investigate and validate complaints from residents about operations, parking, riding, speed, etc., usually reported through 311\n- **Infrastructure Planning:** Determine where to place new bike/scooter lanes and drop zones based on usage and demand, start and end points, and trips taken\n\nA list of use cases is useful to show what\'s possible with MDS, to list what other cities are accomplishing with the data, to surface privacy considerations up front, and to inform policy discussions and policy language. More details and examples can be seen on the [OMF website](https://www.openmobilityfoundation.org/whats-possible-with-mds/) and our [Wiki Database](https://github.com/openmobilityfoundation/governance/wiki/MDS-Use-Cases). An agency may align its program to specific use cases by publishing [Policy Requirement use cases](/policy#requirement-apis).\n\nPlease [let us know](https://docs.google.com/forms/d/e/1FAIpQLScrMPgeb1TKMYCjjKsJh3y1TPTJO8HR_y1NByrf1rDmTEJS7Q/viewform) if you have recommended updates or use cases to add.\n\n[Top][toc]\n\n# Related Projects\n\nCommunity projects are those efforts by individual contributors or informal groups that take place outside the Open Mobility Foundation\'s formalized process, complementing MDS. These related projects often push new ideas forward through experimental or locally-focused development, and are an important part of a thriving open source community. Some of these projects may eventually be contributed to and managed by the Open Mobility Foundation.\n\nThe OMF\'s [Community Projects](https://www.openmobilityfoundation.org/community-projects/) page has an ever-growing list of projects related to MDS; see our [Privacy Committee\'s State of Practice](https://github.com/openmobilityfoundation/privacy-committee/blob/main/products/state-of-the-practice.md) for more examples.\n\nPlease [let us know](https://www.openmobilityfoundation.org/get-in-touch/) if you create open source or private tools for implementing or working with MDS data.\n\n[Top][toc]\n\n[agency]: /agency/README.md\n[provider]: /provider/README.md\n[policy]: /policy/README.md\n[geography]: /geography/README.md\n[jurisdiction]: /jurisdiction/README.md\n[metrics]: /metrics/README.md\n[modes]: /modes/README.md\n[toc]: #table-of-contents\n'",,"2018/05/03, 18:21:02",2001,CUSTOM,326,2023,"2023/06/16, 00:48:35",73,471,777,125,131,11,1.0,0.5334957369062119,"2023/05/09, 12:52:57",2.0.0,0,94,false,,true,true,,,https://github.com/openmobilityfoundation,http://www.openmobilityfoundation.org/,,,,https://avatars.githubusercontent.com/u/52187191?v=4,,, OpenConcept,A toolkit for conceptual MDAO of aircraft with unconventional propulsion architectures.,mdolab,https://github.com/mdolab/openconcept.git,github,,Mobility and Transportation,"2023/09/18, 18:10:17",27,1,6,true,Python,MDO Lab,mdolab,Python,,"b'# OpenConcept - A conceptual design toolkit with efficient gradients implemented in the OpenMDAO framework\n\n### Authors: Benjamin J. Brelje and Eytan J. 
Adler\n\n[![Build](https://github.com/mdolab/openconcept/workflows/Build/badge.svg?branch=main)](https://github.com/mdolab/openconcept/actions?query=branch%3Amain)\n[![Coverage](https://codecov.io/gh/mdolab/openconcept/branch/main/graph/badge.svg?token=RR8CN3IOSL)](https://codecov.io/gh/mdolab/openconcept)\n[![Documentation](https://readthedocs.com/projects/mdolab-openconcept/badge/?version=latest)](https://mdolab-openconcept.readthedocs-hosted.com/en/latest/?badge=latest)\n[![PyPI](https://img.shields.io/pypi/v/openconcept)](https://pypi.org/project/openconcept/)\n[![PyPI - Downloads](https://img.shields.io/pypi/dm/openconcept)](https://pypi.org/project/openconcept/)\n\nOpenConcept is a new toolkit for the conceptual design of aircraft. OpenConcept was developed in order to model and optimize aircraft with electric propulsion at low computational cost. The tools are built on top of NASA Glenn\'s [OpenMDAO](http://openmdao.org/) framework, which in turn is written in Python.\n\nOpenConcept is capable of modeling a wide range of propulsion systems, including detailed thermal management systems.\nThe following figure (from [this paper](https://doi.org/10.3390/aerospace9050243)) shows one such system that is modeled in the `N3_HybridSingleAisle_Refrig.py` example.\n\n

\n*(figure: propulsion system with thermal management, as modeled in the N3_HybridSingleAisle_Refrig.py example)*\n
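\nSince the analyses are built as OpenMDAO components, here is a minimal, hedged sketch of that pattern with hand-coded analytic partial derivatives (a toy component, not code from OpenConcept):\n\n```python\nimport openmdao.api as om\n\n\nclass QuadraticDrag(om.ExplicitComponent):\n    # Toy component: f = 0.5 * rho * v**2, with analytic partials.\n    def setup(self):\n        self.add_input("v", val=1.0)\n        self.add_output("f", val=0.0)\n\n    def setup_partials(self):\n        self.declare_partials("f", "v")\n\n    def compute(self, inputs, outputs):\n        outputs["f"] = 0.5 * 1.225 * inputs["v"] ** 2\n\n    def compute_partials(self, inputs, partials):\n        partials["f", "v"] = 1.225 * inputs["v"]\n\n\nprob = om.Problem()\nprob.model.add_subsystem("drag", QuadraticDrag(), promotes=["*"])\nprob.setup()\nprob.set_val("v", 10.0)\nprob.run_model()\nprint(prob.get_val("f"))  # 61.25\n```\n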

\n\nThe following charts show more than 250 individually optimized hybrid-electric light twin aircraft (similar to a King Air C90GT). Optimizing hundreds of configurations can be done in a couple of hours on a standard laptop computer.\n\n![Example charts](https://raw.githubusercontent.com/mdolab/openconcept/main/doc/_static/images/readme_charts.png)\n\nThe reason for OpenConcept\'s efficiency is the analytic derivatives built into each analysis routine and component. Accurate, efficient derivatives enable the use of Newton nonlinear equation solutions and gradient-based optimization at low computational cost.\n\n## Documentation\n\nAutomatically-generated documentation is available at (https://mdolab-openconcept.readthedocs-hosted.com/en/latest/).\n\nTo build the docs locally, install the `sphinx_mdolab_theme` via `pip`. Then enter the `doc` folder in the root directory and run `make html`. The built documentation can be viewed by opening `_build/html/index.html`. OpenAeroStruct is required (also installable via `pip`) to build the OpenAeroStruct portion of the source docs.\n\n## Getting Started\n\nOpenConcept can be pip installed directly from PyPI\n\n```shell\npip install openconcept\n```\n\nTo run the examples or edit the source code:\n1. Clone the repo to disk (`git clone https://github.com/mdolab/openconcept`)\n2. Navigate to the root `openconcept` folder\n3. Run `pip install -e .` to install the package (the `-e` can be omitted if not editing the source)\n\nGet started by following the tutorials in the documentation to learn the most important parts of OpenConcept.\nThe features section of the documentation describes most of the components and system models available in OpenConcept.\n\n### Requirements\n\n\n\nOpenConcept is tested regularly on builds with the oldest and latest supported package versions. The package versions in the oldest and latest builds are the following:\n\n| Package | Oldest | Latest |\n| ------- | ------- | ------ |\n| Python | 3.8 | 3.11 |\n| OpenMDAO | 3.10 | latest |\n| NumPy | 1.20 | latest |\n| SciPy | 1.6.0 | latest |\n| OpenAeroStruct | latest | latest |\n\n## Citation\n\nPlease cite this software by reference to the [conference paper](https://www.researchgate.net/publication/326263660_Development_of_a_Conceptual_Design_Model_for_Aircraft_Electric_Propulsion_with_Efficient_Gradients):\n\nBenjamin J. Brelje and Joaquim R. R. A. Martins, ""Development of a Conceptual Design Model for Aircraft Electric Propulsion with Efficient Gradients"", 2018 AIAA/IEEE Electric Aircraft Technologies Symposium, AIAA Propulsion and Energy Forum, (AIAA 2018-4979) DOI: 10.2514/6.2018-4979\n\n```\n@inproceedings{Brelje2018a,\n\taddress = {{C}incinnati,~{OH}},\n\tauthor = {Benjamin J. Brelje and Joaquim R. R. A. Martins},\n\tbooktitle = {Proceedings of the AIAA/IEEE Electric Aircraft Technologies Symposium},\n\tdoi = {10.2514/6.2018-4979},\n\tmonth = {July},\n\ttitle = {Development of a Conceptual Design Model for Aircraft Electric Propulsion with Efficient Gradients},\n\tyear = {2018}\n}\n```\n\nIf using the integrated OpenAeroStruct VLM or aerostructural aerodynamic models, please cite the following [conference paper](https://www.researchgate.net/publication/357559489_Aerostructural_wing_design_optimization_considering_full_mission_analysis):\n\nEytan J. Adler and Joaquim R. R. A. Martins, ""Efficient Aerostructural Wing Optimization Considering Mission Analysis"", Journal of Aircraft, 2022. DOI: 10.2514/1.c037096\n\n```\n@article{Adler2022d,\n\tauthor = {Adler, Eytan J. 
and Martins, Joaquim R. R. A.},\n\tdoi = {10.2514/1.c037096},\n\tissn = {1533-3868},\n\tjournal = {Journal of Aircraft},\n\tmonth = {December},\n\tpublisher = {American Institute of Aeronautics and Astronautics},\n\ttitle = {Efficient Aerostructural Wing Optimization Considering Mission Analysis},\n\tyear = {2022}\n}\n```\n'",",https://doi.org/10.3390/aerospace9050243","2018/06/28, 17:01:48",1945,MIT,7,165,"2023/09/18, 18:10:25",1,47,56,13,37,0,0.4,0.1823899371069182,"2023/03/23, 12:23:43",v1.1.0,0,4,false,,false,false,mid2SUPAERO/LCA4MDAO,,https://github.com/mdolab,mdolab.engin.umich.edu,,,,https://avatars.githubusercontent.com/u/26934866?v=4,,, Open Charge Map,The global public registry of electric vehicle charging locations.,openchargemap,https://github.com/openchargemap/ocm-system.git,github,,Mobility and Transportation,"2023/10/23, 06:00:39",89,0,16,true,C#,Open Charge Map,openchargemap,"C#,HTML,JavaScript,TypeScript,CSS,SCSS,Dockerfile",https://openchargemap.org,"b'Open Charge Map (OCM)\n==========\n\n### About the project\n\n[Open Charge Map](https://openchargemap.org) is the global public registry of electric vehicle charging locations.\nOCM was first established in 2011 and aims to crowdsource a high-quality, well-maintained open data set with the greatest breadth possible. Our data set is a mixture of manually entered information and imported open data sources (such as government-run registries and charging networks with an open data license). OCM provides data to drivers (via hundreds of apps and sites), as well as to researchers, EV market analysts, and government policy makers. \n\nThe code in this repository represents the backend systems ([API](https://openchargemap.org/site/develop/), [Web Site](https://openchargemap.org) and server-side import processing) for the project. Server-side code is developed mostly in C#, currently building under Visual Studio 2022 Community Edition (or higher) with .NET 6. Data is primarily stored in SQL Server using Entity Framework Core, with an additional caching layer using MongoDB. Most API reads are served by the MongoDB cache.\n\nDevelopers can use our [API](https://openchargemap.org/site/develop/) to access the data set and build their own [apps](https://openchargemap.org/site/develop/apps/). The map [app](https://map.openchargemap.io) source (using the latest Ionic/Angular/TypeScript) can be found in its own repo at https://github.com/openchargemap/ocm-app\n\n\n### Basic build/dev prerequisites\n\n- dotnet 6.x sdk (windows/linux)\n- SQL Server Express\n- MongoDB 5.x\n\n### Local Dev Setup\n- Restore SQL database clone https://github.com/openchargemap/ocm-docs/tree/master/Database/Clone\n- Run the OCM.Web (main website) project and create a user account; in the User table, set the Permissions field for your account to administrator: `{""Permissions"":[{""Level"":100},{""Level"":1000}],""LegacyPermissions"":""[CountryLevel_Editor=All];[Administrator=true];""}`\n- Once you have the Admin menu in the web UI, use `Admin > Dashboard > Check POI Cache Status > Refresh Cache (All)` to rebuild the local MongoDB cache. Periodically run Refresh Cache (Modified) to ensure the cache does not become stale (otherwise the SQL database will be used for queries).
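\n\nTo illustrate the read-only [API](https://openchargemap.org/site/develop/) mentioned above, here is a hedged sketch of a query for nearby charging locations. The `/v3/poi` endpoint and parameters follow the public API docs rather than this README, and the key is a placeholder.\n\n```python\nimport requests\n\n# Placeholder API key; register on openchargemap.org to obtain one.\nparams = {"output": "json", "countrycode": "DE", "maxresults": 5, "key": "YOUR_API_KEY"}\nresp = requests.get("https://api.openchargemap.io/v3/poi", params=params)\nresp.raise_for_status()\n\nfor poi in resp.json():\n    print(poi.get("AddressInfo", {}).get("Title"))\n```\n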
\n\n### Deployment \n\n - Configure MongoDB as a service and initialise the ocm_mirror database\n - Enable read/write for the app pool user for \\Temp folders\n - Configure web.config\n\n To run an API mirror, see the OCM.API.Worker readme.\n\n### Contributing\n\nPlease contribute in any way you can:\n - Improve the data (anyone can do this):\n - Submit comments, checkins, and photos via the website\n - Submit new data, become an editor for your country\n - Grow the user base\n - Advocacy, tell people about [Open Charge Map](https://openchargemap.org) and help them use it.\n - Improve the system:\n - Help translate the app into other languages, see https://github.com/openchargemap/ocm-app/issues/35\n - Help write documentation\n - Web App (HTML/CSS/JavaScript)\n - Website development (MVC)\n - Map widget for embedding\n - Sample Code for developers\n - Graphic Design\n - Testing\n\n\n'",,"2013/05/31, 08:48:24",3799,MIT,62,986,"2023/07/11, 11:10:48",36,60,185,59,106,0,0.0,0.042735042735042694,,,0,11,false,,true,false,,,https://github.com/openchargemap,http://openchargemap.org,,,,https://avatars.githubusercontent.com/u/2486335?v=4,,, EVCC,An extensible EV Charge Controller with PV integration implemented in Go.,andig,https://github.com/evcc-io/evcc.git,github,"mqtt,golang,pv,wallbox,emobility,charger,home-automation,modbus,sunspec,ocpp,eebus,semp",Mobility and Transportation,"2023/10/25, 13:37:17",2010,29,1190,true,Go,,evcc-io,"Go,Vue,JavaScript,Shell,Smarty,CSS,Makefile,Dockerfile,HTML",https://evcc.io,"b'# evcc 🚘☀️\n\n[![Build Status](https://github.com/evcc-io/evcc/workflows/Build/badge.svg)](https://github.com/evcc-io/evcc/actions?query=workflow%3ABuild)\n[![Code Quality](https://goreportcard.com/badge/github.com/evcc-io/evcc)](https://goreportcard.com/report/github.com/evcc-io/evcc)\n[![Translation](https://hosted.weblate.org/widgets/evcc/-/evcc/svg-badge.svg)](https://hosted.weblate.org/engage/evcc/)\n[![Open in Visual Studio Code](https://img.shields.io/static/v1?logo=visualstudiocode&label=&message=Open%20in%20VS%20Code&labelColor=2c2c32&color=007acc&logoColor=007acc)](https://open.vscode.dev/evcc-io/evcc)\n[![OSS hosting by cloudsmith](https://img.shields.io/badge/OSS%20hosting%20by-cloudsmith-blue?logo=cloudsmith)](https://cloudsmith.io/~evcc/packages/)\n[![Latest Version](https://img.shields.io/github/release/evcc-io/evcc.svg)](https://github.com/evcc-io/evcc/releases)\n\nevcc is an extensible EV Charge Controller with PV integration implemented in [Go][1]. 
Featured in [PV magazine](https://www.pv-magazine.de/2021/01/15/selbst-ist-der-groeoenlandhof-wallbox-ladesteuerung-selbst-gebaut/).\n\n![Screenshot](docs/screenshot.png)\n\n## Features\n\n- simple and clean user interface\n- wide range of supported [chargers](https://docs.evcc.io/docs/devices/chargers):\n - ABL eMH1, Alfen (Eve), Bender (CC612/613), cFos (PowerBrain), Daheimladen, Ebee (Wallbox), Ensto (Chago Wallbox), [EVSEWifi/ smartWB](https://www.evse-wifi.de), Garo (GLB, GLB+, LS4), go-eCharger, HardyBarth (eCB1, cPH1, cPH2), Heidelberg (Energy Control), Innogy (eBox), Juice (Charger Me), KEBA/BMW, Mennekes (Amedio, Amtron Premium/Xtra, Amtron ChargeControl), NRGkick, [openWB (includes Pro)](https://openwb.de/), Optec (Mobility One), PC Electric (includes Garo), Siemens, TechniSat (Technivolt), [Tinkerforge Warp Charger](https://www.warp-charger.com), Ubitricity (Heinz), Vestel, Wallbe, Webasto (Live), Mobile Charger Connect and many more\n - experimental EEBus support (Elli, PMCC)\n - experimental OCPP support\n - Build-your-own: Phoenix Contact (includes ESL Walli), [EVSE DIN](http://evracing.cz/simple-evse-wallbox)\n - Smart-Home outlets: FritzDECT, Shelly, Tasmota, TP-Link\n- wide range of supported [meters](https://docs.evcc.io/docs/devices/meters) for grid, PV, battery and charger:\n - Modbus: Eastron SDM, MPM3PM, ORNO WE, SBC ALE3 and many more; see the docs for a complete list\n - Integrated systems: SMA Sunny Home Manager and Energy Meter, KOSTAL Smart Energy Meter (KSEM, EMxx)\n - SunSpec-compatible inverter or home battery devices: Fronius, SMA, SolarEdge, KOSTAL, STECA, E3DC, ...\n - and various others: Discovergy, Tesla PowerWall, LG ESS HOME, OpenEMS (FENECON)\n- [vehicle](https://docs.evcc.io/docs/devices/vehicles) integration (state of charge, remote charge, battery and preconditioning status):\n - Audi, BMW, Citroën, Dacia, Fiat, Ford, Hyundai, Jaguar, Kia, Landrover, ~~Mercedes~~, Mini, Nissan, Opel, Peugeot, Porsche, Renault, Seat, Smart, Skoda, Tesla, Volkswagen, Volvo, ...\n - Services: OVMS, Tronity\n - Scooters: Niu, ~~Silence~~\n- [plugins](https://docs.evcc.io/docs/reference/plugins) for integrating with any charger/ meter/ vehicle:\n - Modbus, HTTP, MQTT, JavaScript, WebSockets and shell scripts\n- status [notifications](https://docs.evcc.io/docs/reference/configuration/messaging) using [Telegram](https://telegram.org), [PushOver](https://pushover.net) and [many more](https://containrrr.dev/shoutrrr/)\n- logging using [InfluxDB](https://www.influxdata.com) and [Grafana](https://grafana.com/grafana/)\n- granular charge power control down to mA steps with supported chargers (labeled by e.g. smartWB as [OLC](https://board.evse-wifi.de/viewtopic.php?f=16&t=187))\n- REST and MQTT [APIs](https://docs.evcc.io/docs/reference/api) for integration with home automation systems\n- Add-ons for [HomeAssistant](https://github.com/evcc-io/evcc-hassio-addon) and [OpenHAB](https://www.openhab.org/addons/bindings/evcc) (not maintained by the evcc core team)\n\n## Getting Started\n\nYou\'ll find everything you need in our [documentation](https://docs.evcc.io/).\n\n## Contribute\n\nTo build evcc from source, [Go][1] 1.21 and [Node][2] 18 are required.\n\nBuild and run the Go backend; the UI becomes available at http://127.0.0.1:7070/\n\n```sh\nmake install-ui\nmake ui\nmake install\nmake\n./evcc\n```\n\n### UI development\n\nFor frontend development, start the Vue toolchain in dev mode. Open http://127.0.0.1:7071/ to get to the live-reloading development server. It pulls its data from port 7070 (see above).\n\n```sh\nnpm install\nnpm run dev\n```\n
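\nAs a quick taste of the REST API mentioned under Features, the hedged sketch below polls the backend started above. The `/api/state` path and the field names are assumptions based on the public API docs, not this README.\n\n```python\nimport requests\n\n# Local evcc backend as started above (port 7070).\nstate = requests.get("http://127.0.0.1:7070/api/state", timeout=5).json()\n\n# Hypothetical field names; inspect the JSON to see what your setup exposes.\nprint(state.get("siteTitle"), state.get("gridPower"))\n```\n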
\n### Integration tests\n\nWe use Playwright for end-to-end integration tests. They start a local evcc instance with different configuration yamls and prefilled databases. To run them, you have to do a local build first.\n\n```sh\nmake ui build\nnpm run playwright\n```\n\n#### Simulating device state\n\nSince we don\'t want to run tests against real devices or cloud services, we\'ve built a simple simulator that lets you emulate meters, vehicles and loadpoints. The simulator\'s web interface runs on http://localhost:7072.\n\n```\nnpm run simulator\n```\n\nRun an evcc instance that uses simulator data. This configuration runs with a very high refresh interval to speed up testing.\n\n```\nmake ui build\n./evcc --config tests/simulator.evcc.yaml\n```\n\n### Code formatting\n\nWe use linters (golangci-lint, Prettier) to keep source code formatting coherent. It\'s recommended to use the format-on-save feature of your editor. For VSCode use the [Go](https://marketplace.visualstudio.com/items?itemName=golang.Go), [Prettier](https://marketplace.visualstudio.com/items?itemName=esbenp.prettier-vscode) and [Vetur](https://marketplace.visualstudio.com/items?itemName=octref.vetur) extensions. You can manually reformat your code by running:\n\n```sh\nmake lint\nmake lint-ui\n```\n\n### Changing templates\n\nevcc supports a massive number of different devices. To keep our documentation and website in sync with the latest software, the core project (this repo) generates meta-data that\'s pushed to the `docs` and `evcc.io` repositories. Make sure to update this meta-data every time you make changes to a template.\n\n```sh\nmake docs\n```\n\nIf you miss one of the above steps, GitHub Actions will likely trigger a **Porcelain** error.\n\n### Adding or modifying translations\n\nevcc already includes many translations for the UI. Hosted Weblate is used to maintain all languages. Feel free to add more languages or verify and edit existing translations. Weblate will automatically push all modifications to the evcc repository on a regular basis.\n\n[![Weblate Hosted](https://hosted.weblate.org/widgets/evcc/-/evcc/287x66-grey.png)](https://hosted.weblate.org/engage/evcc/)\n[![Languages](https://hosted.weblate.org/widgets/evcc/-/evcc/multi-auto.svg)](https://hosted.weblate.org/engage/evcc/)\n\nhttps://hosted.weblate.org/projects/evcc/evcc/\n\n## Sponsorship\n\n\n\nevcc believes in open source software. We\'re committed to providing a best-in-class EV charging experience.\nMaintaining evcc consumes time and effort. With the vast number of different devices to support, we depend on community and vendor support to keep evcc alive.\n\nWhile evcc is open source, we would also like to encourage vendors to provide open source hardware devices, public documentation, and support open source projects like ours that provide additional value to otherwise closed hardware. 
Where this is not the case, evcc requires a ""sponsor token"" to finance ongoing development and support.\n\nThe personal sponsor token requires a [Github Sponsorship](https://github.com/sponsors/evcc-io) and can be requested at [sponsor.evcc.io](https://sponsor.evcc.io/).\n\n[1]: https://golang.org\n[2]: https://nodejs.org/\n'",,"2019/12/06, 16:27:04",1419,MIT,1439,3868,"2023/10/25, 08:26:32",54,2554,4920,2448,0,26,0.1,0.34262544348707547,"2023/10/23, 12:08:43",0.121.2,0,132,true,"github,patreon,open_collective,ko_fi,tidelift,community_bridge,liberapay,issuehunt,otechie,custom",false,false,"ngehrsitz/evcc,hoermto/evcc-tho,sarvex/evcc,jonilala796/evcc,goebelmeier/evcc,orgTestCodacy11KRepos110MB/repo-5479-evcc,abeamstart/evcc,opensprinklershop/evcc,matspi/evcc,jkmssoft/evcc,MrJayC/evcc,fawick/evcc,premultiply/evcc,mclane/evccWOPhases,Pjg-Git/evcc,SirkoVZ/evcc,int5749/evcc,waldtraut1981/evcc,lukasbisdorf/evcc,texperiri/evcc,Chris591/evcc,bboehmke/evcc,StefanSchoof/evcc,PanNwt/evcc,bicaluv/evcc,thierolm/evcc,rivengh/evcc,schenlap/evcc,evcc-io/evcc",,https://github.com/evcc-io,https://evcc.io,Germany,,,https://avatars.githubusercontent.com/u/81383504?v=4,,, SteVe,"Provides basic functions for the administration of charge points, user data and RFID cards for user authentication and was tested successfully in operation.",RWTH-i5-IDSG,https://github.com/steve-community/steve.git,github,"steve,ocpp,java,emobility,chargingstation,mobility,smarthome",Mobility and Transportation,"2023/10/15, 22:23:59",593,0,210,true,Java,SteVe Community,steve-community,"Java,CSS,JavaScript,Dockerfile",,"b'![SteVe](src/main/resources/webapp/static/images/logo.png) \n\n[![build and run tests](https://github.com/steve-community/steve/actions/workflows/main.yml/badge.svg)](https://github.com/steve-community/steve/actions/workflows/main.yml)\n\n\n# Introduction\n\nSteVe started its life at the RWTH Aachen University [in 2013](https://github.com/steve-community/steve/issues/827). \nThe name is derived from _Steckdosenverwaltung_ in German (in English: socket administration). \nThe aim of SteVe is to support the deployment and popularity of electric mobility, so it is designed to be easy to install and use. \nIt provides basic functions for the administration of charge points, user data, and RFID cards for user authentication, and was tested successfully in operation.\n\nSteVe is intended as an open platform to implement, test and evaluate novel ideas for electric mobility, like authentication protocols, reservation mechanisms for charge points, and business models for electric mobility. \nThe project is distributed under [GPL](LICENSE.txt) and is free to use. \nIf you are going to deploy it, we are happy to see the [logo](website/logo/managed-by-steve.pdf) on a charge point.\n\n### Charge Point Support\n\nElectric charge points using the following OCPP versions are supported:\n\n* OCPP1.2S\n* OCPP1.2J\n* OCPP1.5S\n* OCPP1.5J\n* OCPP1.6S\n* OCPP1.6J\n\nFor Charging Station compatibility please check:\nhttps://github.com/steve-community/steve/wiki/Charging-Station-Compatibility\n\n### System Requirements\n\nSteVe requires \n* JDK 11 (both Oracle JDK and Adoptium are supported)\n* Maven \n* MySQL or MariaDB. You should use [one of these](.github/workflows/main.yml#L11) supported versions.\n\nto build and run. \n\nSteVe is designed to run standalone; a Java servlet container / web server (e.g. Apache Tomcat) is **not** required.\n\n# Configuration and Installation\n\n1. 
Database preparation:\n\n **Important**: Make sure that the time zone of the MySQL server is the same as [the time zone of SteVe](src/main/java/de/rwth/idsg/steve/SteveConfiguration.java#L46). Since `UTC` is strongly recommended by OCPP, it is the default in SteVe and you should set it in MySQL, accordingly.\n\n Make sure MySQL is reachable via TCP (e.g., remove `skip-networking` from `my.cnf`).\n The following MySQL statements can be used as database initialization (adjust database name and credentials according to your setup).\n \n * For MariaDB (all LTS versions) and MySQL 5.7:\n ```\n CREATE DATABASE stevedb CHARACTER SET utf8 COLLATE utf8_unicode_ci;\n CREATE USER \'steve\'@\'localhost\' IDENTIFIED BY \'changeme\';\n GRANT ALL PRIVILEGES ON stevedb.* TO \'steve\'@\'localhost\';\n GRANT SELECT ON mysql.proc TO \'steve\'@\'localhost\';\n ```\n \n * For MySQL 8:\n ```\n CREATE DATABASE stevedb CHARACTER SET utf8 COLLATE utf8_unicode_ci;\n CREATE USER \'steve\'@\'localhost\' IDENTIFIED BY \'changeme\';\n GRANT ALL PRIVILEGES ON stevedb.* TO \'steve\'@\'localhost\';\n GRANT SUPER ON *.* TO \'steve\'@\'localhost\';\n ```\n Note: The statement `GRANT SUPER [...]` is only necessary to execute some of the previous migration files and is only needed for the initial database setup. Afterwards, you can remove this privilege by executing \n ```\n REVOKE SUPER ON *.* FROM \'steve\'@\'localhost\';\n ```\n \n2. Download and extract tarball:\n\n You can download and extract the SteVe releases using the following commands (replace X.X.X with the desired version number):\n ```\n wget https://github.com/steve-community/steve/archive/steve-X.X.X.tar.gz\n tar xzvf steve-X.X.X.tar.gz\n cd steve-X.X.X\n ```\n\n3. Configure SteVe **before** building:\n\n The basic configuration is defined in [main.properties](src/main/resources/config/prod/main.properties):\n - You _must_ change [database configuration](src/main/resources/config/prod/main.properties#L9-L13)\n - You _must_ change [the host](src/main/resources/config/prod/main.properties#L22) to the correct IP address of your server\n - You _must_ change [web interface credentials](src/main/resources/config/prod/main.properties#L17-L18)\n - You _can_ access the application via HTTPS, by [enabling it and setting the keystore properties](src/main/resources/config/prod/main.properties#L32-L35)\n \n For advanced configuration please see the [Configuration wiki](https://github.com/steve-community/steve/wiki/Configuration)\n\n4. Build SteVe:\n\n To compile SteVe simply use Maven. A runnable `jar` file containing the application and configuration will be created in the subdirectory `steve/target`.\n\n ```\n # ./mvnw package\n ```\n\n5. Run SteVe:\n\n To start the application run (please do not run SteVe as root):\n\n ```\n # java -jar target/steve.jar\n ```\n\n# Docker\n\nIf you prefer to build and start this project via docker (you can skip the steps 1, 4 and 5 from above), this can be done as follows: `docker-compose up -d`\n\nBecause the docker-compose file is written to build the project for you, you still have to change the project configuration settings from step 3.\nInstead of changing the [main.properties in the prod directory](src/main/resources/config/prod/main.properties), you have to change the [main.properties in the docker directory](src/main/resources/config/docker/main.properties). 
There you have to change all the configuration values described in step 3.\nThe database password for the user ""steve"" has to match the one configured in the docker-compose file.\n\nWith the default docker-compose configuration, the web interface will be accessible at: `http://localhost:8180`\n\n# Kubernetes\n\nFirst build your image and push it to a registry your K8S cluster can access. Make sure the build args in the docker build command are set with the same database configuration that the main deployment will use.\n\n`docker build --build-arg DB_HOST=<host> --build-arg DB_PORT=<port> --build-arg DB_USERNAME=<username> --build-arg DB_PASSWORD=<password> --build-arg DB_DATABASE=<database> -f k8s/docker/Dockerfile -t <your-image-tag> .`\n\n`docker push <your-image-tag>`\n\n\nThen go to `k8s/yaml/Deployment.yaml` and change `### YOUR BUILT IMAGE HERE ###` to your image tag, and fill in the environment variables with the same database connection that you used at build time.\n\nAfter this, create the namespace using `kubectl create ns steve` and apply your yaml with `kubectl apply -f k8s/yaml/Deployment.yaml` followed by `kubectl apply -f k8s/yaml/Service.yaml`\n\n\nTo access this publicly, you\'ll also have to set up an ingress using something like nginx or traefik. \n\n# Ubuntu\n\nYou\'ll find a tutorial on how to prepare Ubuntu for SteVe here: https://github.com/steve-community/steve/wiki/Prepare-Ubuntu-VM-for-SteVe\n\n# AWS\n\nYou\'ll find a tutorial on how to set up SteVe in AWS using Lightsail here: https://github.com/steve-community/steve/wiki/Create-SteVe-Instance-in-AWS-Lightsail\n\n# First Steps\n\nAfter SteVe has successfully started, you can access the web interface using the configured credentials under:\n\n http://<host>:<port>/steve/manager\n \nThe default port number is 8080.\n\n### Add a charge point\n\n1. In order for SteVe to accept messages from a charge point, the charge point must first be registered. To add a charge point to SteVe select *Data Management* >> *Charge Points* >> *Add*. Enter the ChargeBox ID configured in the charge point and confirm.\n\n2. The charge points must be configured to communicate with the following addresses. Depending on the OCPP version of the charge point, SteVe will automatically route messages to the version-specific implementation.\n - SOAP: `http://<host>:<port>/steve/services/CentralSystemService`\n - WebSocket/JSON: `ws://<host>:<port>/steve/websocket/CentralSystemService`\n\n\nAs soon as a heartbeat is received, you should see the status of the charge point in the SteVe Dashboard.\n \n*Have fun!*\n
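\nFor a quick connectivity check against the WebSocket/JSON endpoint above, the hedged sketch below sends a single OCPP 1.6J heartbeat. Appending the ChargeBox ID to the endpoint path and the `websockets` usage follow general OCPP-J conventions and are not taken from this README.\n\n```python\nimport asyncio\nimport json\n\nimport websockets  # pip install websockets\n\n\nasync def heartbeat():\n    # OCPP-J charge points identify themselves by appending their ChargeBox ID.\n    uri = "ws://localhost:8080/steve/websocket/CentralSystemService/MyChargeBox"\n    async with websockets.connect(uri, subprotocols=["ocpp1.6"]) as ws:\n        # OCPP-J CALL frame: [MessageTypeId=2, UniqueId, Action, Payload]\n        await ws.send(json.dumps([2, "1", "Heartbeat", {}]))\n        print(await ws.recv())  # expect [3, "1", {"currentTime": "..."}]\n\n\nasyncio.run(heartbeat())\n```\n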
[Settings](website/screenshots/settings.png)\n\nGDPR\n-----\nIf you are in the EU and offer vehicle charging to other people using SteVe, keep in mind that you have to comply with the General Data Protection Regulation (GDPR), as SteVe processes charging transactions, which can be considered personal data.\n\nAre you having issues?\n-----\nSee the [FAQ](https://github.com/steve-community/steve/wiki/FAQ)\n\nAcknowledgments\n-----\n[goekay](https://github.com/goekay) thanks to\n- [JetBrains](https://jb.gg/OpenSourceSupport) who support this project by providing a free All Products Pack license, and\n- ej-technologies GmbH who support this project by providing a free license for their [Java profiler](https://www.ej-technologies.com/products/jprofiler/overview.html).\n'",,"2013/09/05, 12:31:46",3702,GPL-3.0,273,2004,"2023/10/25, 09:02:54",115,704,1162,294,0,21,0.0,0.38637896156439644,"2023/06/04, 06:41:47",steve-3.6.0,0,22,true,github,false,false,,,https://github.com/steve-community,,Germany,,,https://avatars.githubusercontent.com/u/112710077?v=4,,, RISE-V2G,The only fully-featured reference implementation of the Vehicle-2-Grid communication interface ISO 15118.,V2GClarity,https://github.com/SwitchEV/RISE-V2G.git,github,,Mobility and Transportation,"2022/06/02, 08:57:28",196,0,46,false,Java,Switch,SwitchEV,"Java,Batchfile,Shell",,"b'![RISE V2G logo](https://v2g-clarity.com/test/rise-v2g-logo-2/ ""RISE V2G logo"")\n\nThe open source reference implementation of the Vehicle-2-Grid communication interface ISO 15118\n\n### About RISE V2G\n\nRISE V2G is the **R**eference **I**mplementation **S**upporting the **E**volution of the **V**ehicle-**2**-**G**rid communication interface ISO 15118.\nThe international standard ISO 15118, entitled ""Road vehicles - Vehicle to grid communication interface"", defines a digital IP-based communication interface between an electric vehicle (EV) and a charging station (named Electric Vehicle Supply Equipment - EVSE). It allows for a user-friendly ""Plug And Charge"" mechanism for authentication, authorisation, billing, and flexible load control based on a wide set of information exchanged between the EV and EVSE.\nA rise in the wide application of this standard is essential for reaching the goal of integrating EVs as flexible energy storage devices into a smart grid.\n\n\n### Support Update\nThe RISE V2G project is no longer supported. Switch\'s new [Josev Community](https://github.com/SwitchEV/josev) project is replacing RISE V2G.\n\n\n### License\nRISE V2G is published under the [MIT License](https://github.com/V2GClarity/RISE-V2G/blob/master/LICENSE).\n\n'",,"2017/07/20, 18:59:25",2288,MIT,0,182,"2022/01/28, 13:52:00",20,33,76,0,635,5,0.0,0.5714285714285714,"2020/07/23, 16:44:22",1.2.6,0,11,false,,false,false,,,https://github.com/SwitchEV,https://www.switch-ev.com,"London, UK",,,https://avatars.githubusercontent.com/u/30324321?v=4,,, simobility,A human-friendly Python framework that helps scientists and engineers to prototype and compare fleet optimization algorithms (autonomous and human-driven vehicles).,sash-ko,https://github.com/sash-ko/simobility.git,github,"mobility,mobility-modeling,autonomous-vehicles,optimization-algorithms,simulation-framework,simulator,ridesharing,fleet-management,python,ridehailing,transportation",Mobility and Transportation,"2020/12/29, 14:07:49",35,0,2,false,Python,,,"Python,Makefile,Shell",,"b'# simobility\n\n**simobility** is a light-weight mobility simulation framework. 
Best for quick prototyping\n\n**simobility** is a human-friendly Python framework that helps scientists and engineers to prototype and compare fleet optimization algorithms (autonomous and human-driven vehicles). It provides a set of building blocks that can be used to design different simulation scenarios, run simulations and calculate metrics. It is easy to plug in custom demand models, customer behavior models, fleet types, spatio-temporal models (for example, use [OSRM](http://project-osrm.org/) for routing vehicles and machine learning models trained on historical data to predict [ETA](https://en.wikipedia.org/wiki/Estimated_time_of_arrival)).\n\n### Motivation\n\nCreate an environment for experiments with machine learning algorithms for decision-making problems in mobility services and compare them to classical solutions.\n\n\n\nSome examples:\n* [Deep Reinforcement Learning with Applications in Transportation](https://outreach.didichuxing.com/tutorial/AAAI2019/)\n\n* [T. Oda and C. Joe-Wong, ""Movi: A model-free approach to dynamic fleet management"". 2018](https://arxiv.org/pdf/1804.04758.pdf)\n\n* [A. Alabbasi, A. Ghosh, and V. Aggarwal, ""DeepPool: Distributed model-free algorithm for ride-sharing using deep reinforcement learning"", IEEE Trans. Intelligent Transportation Systems (to appear). 2019](https://arxiv.org/pdf/1903.03882)\n\n* [C. Wang, Y. Hou, M. Barth, ""Data-Driven Multi-step Demand Prediction for Ride-hailing Services Using Convolutional Neural Network"". 2019](https://arxiv.org/pdf/1911.03441.pdf)\n\n* [J. Ke, F. Xiao, H. Yang, and J. Ye. Optimizing online matching for ride-sourcing services with multi-agent deep reinforcement learning. 2019](https://arxiv.org/abs/1902.06228)\n\n### Installation\n\n`pip install simobility`\n\n### Contributions and thanks\n\nThanks to all who contributed to the concept/code:\n\n* [Steffen H\xc3\xa4u\xc3\x9fler](https://www.linkedin.com/in/steffenhaeussler/)\n* [Stephen Privitera](https://www.linkedin.com/in/stephen-privitera/)\n* [Sultan Imanhodjaev](https://www.linkedin.com/in/imanhodjaev/)\n* [Y\xc3\xa1bir Benchakhtir](https://www.linkedin.com/in/yabirgb/)\n\n### Examples\n\n[Grid world simulation](./examples/grid_world.py)\n\n[Simple simulation](./examples/simple_simulation.py)\n\n[Taxi service](./examples/taxi_service.py)\n\n[Log example](./examples/simulation_output_example.csv)\n\n\n### Benchmarks\n\nBenchmark simulations with `LinearRouter` and `GreedyMatcher`. 
Simulations will run slower with `OSRMRouter` because `OSRM` cannot process requests as fast as the linear router.\n\n_Processor: 2.3 GHz Dual-Core Intel Core i5; Memory: 8 GB 2133 MHz LPDDR3_\n\nSimulated time | Simulation step | Vehicles | Bookings per hour | Execution time | Generated events | Pickup rate\n--- | --- | --- | --- | --- | --- | ---\n|1 hour | 10 sec | 50 | 100 | 4 sec | 1082 | 96.97%\n|24 hours | 1 min | 50 | 100 | 12 sec | 23745 | 88.37%\n|24 hours | 10 sec | 50 | 100 | 20 sec | 23880 | 88.84%\n|12 hours | 10 sec | 200 | 100 | 18 sec | 13337 | 99.89%\n|12 hours | 10 sec | 50 | 500 | 31 sec | 40954 | 53.92%\n|12 hours | 10 sec | 200 | 500 | 46 sec | 65444 | 99.3%\n|12 hours | 10 sec | 1000 | 500 | 1 min 48 sec | 66605 | 99.98%\n|1 hour | 1 min | 1000 | 1000 | 14 sec | 11486 |\n|1 hour | 10 sec | 1000 | 1000 | 18 sec | 11631 |\n|24 hours | 1 min | 1000 | 1000 | 5 min 1 sec | 262384 |\n|24 hours | 10 sec | 1000 | 1000 | 6 min 20 sec | 262524 |\n\nA heuristic for estimating the maximum number of bookings a fleet of N vehicles can handle: assuming an average trip duration of 15 minutes, one vehicle can handle at most 4 bookings per hour, so the upper limit for 1000 vehicles is 4000 bookings per hour.\n\n### Metrics example\n\n```json\n{\n ""avg_paid_utilization"": 63.98,\n ""avg_utilization"": 96.87,\n ""avg_waiting_time"": 292.92,\n ""created"": 3998,\n ""dropoffs"": 589,\n ""empty_distance"": 640.37,\n ""empty_distance_pcnt"": 33.67,\n ""fleet_paid_utilization"": 63.98,\n ""fleet_utilization"": 96.87,\n ""num_vehicles"": 50,\n ""pickup_rate"": 15.48,\n ""pickups"": 619,\n ""total_distance"": 1902.04\n}\n```\n\n### Simulation logs\n\nThere are multiple ways to collect simulation logs - use the CSV or in-memory log handlers, or implement your own: [loggers](https://github.com/sash-ko/simobility/blob/master/simobility/core/loggers.py)\n\n\nRead CSV logs with pandas:\n\n```python\nimport pandas as pd\n\ndata = pd.read_csv(\n ""simulation_output.csv"",\n sep="";"",\n # the ""details"" column holds stringified Python dicts; parse them on load\n converters={""details"": lambda v: eval(v)},\n)\n\ndetails = data.details.apply(pd.Series)\n```\n\n### Run OSRM\n\n```bash\nwget http://download.geofabrik.de/north-america/us/new-york-latest.osm.pbf\ndocker run -t -v ""${PWD}:/data"" osrm/osrm-backend osrm-extract -p /opt/car.lua /data/new-york-latest.osm.pbf\ndocker run -t -v ""${PWD}:/data"" osrm/osrm-backend osrm-partition /data/new-york-latest.osrm\ndocker run -t -v ""${PWD}:/data"" osrm/osrm-backend osrm-customize /data/new-york-latest.osrm\ndocker run -d -t -i -p 5010:5000 -v ""${PWD}:/data"" osrm/osrm-backend osrm-routed --algorithm mld /data/new-york-latest.osrm\n```\n'",",https://arxiv.org/pdf/1804.04758.pdf,https://arxiv.org/pdf/1903.03882,https://arxiv.org/pdf/1911.03441.pdf,https://arxiv.org/abs/1902.06228","2020/01/22, 18:23:34",1372,MIT,0,237,"2020/12/29, 14:07:50",6,4,5,0,1030,0,0.0,0.008658008658008698,"2020/03/01, 11:43:07",v0.3.0,0,2,false,,false,false,,,,,,,,,,, MobiVoc,An open vocabulary for future-oriented mobility solutions.,vocol,https://github.com/vocol/mobivoc.git,github,"public-transportation,vocabulary,schema,ontology,mobility-concepts,parking,charging-stations,limbo-project",Mobility and Transportation,"2019/09/02, 09:04:50",26,0,1,false,,VoCol - Vocabulary collaboration and build environment,vocol,,http://mobivoc.org,"b'![logo](https://www.mobivoc.org/static/img/logo-www.mobivoc.org.png)\n\nMobiVoc is an open vocabulary for future-oriented mobility solutions.\n\nNew mobility concepts and better data networking are both 
crucial factors for\nglobal economic development. To invent innovative and sustainable mobility\nconcepts, new data-based value-added services are required.\n\nOur goal is to significantly improve the data mobility between all stakeholders by providing a standardized vocabulary using Semantic Web technologies and ontologies. \nFor the open vocabulary covering various mobility aspects we use RDF (Resource Description Framework) - a recommended specification of the World Wide Web Consortium (W3C) and the so-called lingua franca for the integration of data on the Web. \nWe invite everyone who is interested to join our MobiVoc initiative and to participate in the development of the Open Mobility Vocabulary.\n\nMobiVoc was extended in the following research projects: [LIMBO](https://www.limbo-project.org/), [bIoTope](https://biotope-project.eu/).\n\n## Table of Contents\n\n* [Latest Release](#latest-release)\n* [Links](#links)\n* [Class diagram](#class-diagram)\n* [Example data](#example-data)\n* [Repository Files](#repository-files)\n * [Schema](#schema)\n * [Examples](#examples)\n\n## Latest Release\n\n* [mobivoc_v1.1.4.ttl](diagrams/mobivoc_v1.1.4.ttl) - Ontology (Turtle document)\n* [mobivoc_v1.1.4.png](diagrams/mobivoc_v1.1.4.png) - Class diagram (PNG)\n* [mobivoc_v1.1.4.svg](diagrams/mobivoc_v1.1.4.svg) - Class diagram (SVG)\n\n## Links\n\n* Homepage: [mobivoc.org](http://www.mobivoc.org/#)\n* Schema page and namespace: [schema.mobivoc.org](http://schema.mobivoc.org/) (suggested prefix: `mv`)\n* LOV entry: [@lov.okfn.org](http://lov.okfn.org/dataset/lov/vocabs/mv)\n* WebVOWL Visualization: [@visualdataweb.de](http://www.visualdataweb.de/webvowl/#iri=http://schema.mobivoc.org/)\n* OOPS! Report: [@oops.linkeddata.es](http://oops.linkeddata.es/response.jsp?uri=http://schema.mobivoc.org/#) (slow)\n\n## Class diagram\n\n![Class diagram](diagrams/mobivoc_v1.1.4.png ""Mobivoc class diagram"")\n\nFurther diagrams can be found in the [diagrams folder](diagrams).\n\n## Example data\n\nAn example dataset on how to represent charging points is given in [openchargemap.ttl](examples/openchargemap.ttl). Data is taken from [OpenChargeMap](https://openchargemap.org/) for the cities of Brussels, Lyon and Helsinki. 
The dataset is licensed CC BY-SA 4.0.\n\nThe API call used to retrieve the individual datasets is:\n\n`https://api.openchargemap.io/v2/poi/?output=json&maxresults=1000&opendata=true&latitude=50.8504500&longitude=4.3487800&distance=20&distanceunit=km`\n\n## Repository Files\n\n### Schema\n\n* [schema/Metadata.ttl](schema/Metadata.ttl) - ontology metadata\n* [schema/Core.ttl](schema/Core.ttl) - core classes and properties\n* [schema/Parking.ttl](schema/Parking.ttl) - parking facilities and parking places\n* [schema/ChargingPoints.ttl](schema/ChargingPoints.ttl) - charging stations\n* [schema/Roadworks.ttl](schema/Roadworks.ttl) - highway roadworks\n* [schema/Deprecated.ttl](schema/Deprecated.ttl) - deprecated resources, not included in the schema anymore (for documentation reasons only)\n\n### Examples\n\n* [examples/openchargemap.ttl](examples/openchargemap.ttl) - Example instances for charging stations\n* [examples/parkingfacility.ttl](examples/parkingfacility.ttl) - Example instances for parking facilities\n* [examples/roadworks.ttl](examples/roadworks.ttl) - Example instances for highway roadworks\n'",,"2014/08/05, 14:18:24",3368,CC-BY-4.0,0,394,"2019/08/16, 12:16:48",9,8,56,0,1531,0,0.25,0.7137096774193548,"2019/08/22, 11:52:54",v1.1.4,0,12,false,,false,true,,,https://github.com/vocol,https://github.com/vocol/vocol,,,,https://avatars.githubusercontent.com/u/8235117?v=4,,, Transportr,The public transport companion that respects your privacy and your freedom.,grote,https://github.com/grote/Transportr.git,github,"public-transportation,android-app,map",Mobility and Transportation,"2023/08/16, 03:17:59",932,0,117,true,Kotlin,,,"Kotlin,Java,Python,Ruby,Shell",https://transportr.app,"b'Transportr\n==========\n\nThe public transport companion that respects your privacy and your freedom.\nTransportr is a non-profit app developed by people around the world to make using public transport as easy as possible wherever you are. \n\n[![Transportr Logo](/app/src/main/res/mipmap-xhdpi/ic_launcher.png)](https://transportr.app)\n[![Build and test](https://github.com/grote/Transportr/actions/workflows/build.yml/badge.svg)](https://github.com/grote/Transportr/actions/workflows/build.yml)\n\nPlease **[visit the website](https://transportr.app)** for more information!\n\nIf you find any issues with this app, please report them at [the issue tracker](https://github.com/grote/Transportr/issues). Contributions are both encouraged and appreciated. 
If you would like to contribute, please [check the website](https://transportr.app/contribute) for more information.\n\nThe upstream repository is at: https://github.com/grote/Transportr\n\n[![Follow @TransportrApp](artwork/twitter.png)](https://twitter.com/TransportrApp)\n\nGet Transportr\n--------------\n\n[![Available on F-Droid](/artwork/f-droid.png)](https://f-droid.org/repository/browse/?fdid=de.grobox.liberario)\n[![Available on Google Play](/artwork/google-play.png)](https://play.google.com/store/apps/details?id=de.grobox.liberario)\n\nPre-releases and beta versions for advanced users are available via [a special F-Droid repository](http://grobox.de/fdroid/).\n\nScreenshots\n-----------\n[](fastlane/metadata/android/en-US/images/phoneScreenshots/1_FirstStart.png)\n[](fastlane/metadata/android/en-US/images/phoneScreenshots/2_SavedSearches.png)\n[](fastlane/metadata/android/en-US/images/phoneScreenshots/3_Trips.png)\n[](fastlane/metadata/android/en-US/images/phoneScreenshots/4_TripDetails.png)\n[](fastlane/metadata/android/en-US/images/phoneScreenshots/5_Station.png)\n[](fastlane/metadata/android/en-US/images/phoneScreenshots/6_Departures.png)\n\n\nBuilding From Source\n--------------------\n\nIf you want to start working on Transportr and haven\'t done so already, you should [familiarize yourself with Android development](https://developer.android.com/training/basics/firstapp/index.html) and [set up a development environment](https://developer.android.com/sdk/index.html).\n\nThe next step is to clone the source code repository.\n\n $ git clone https://github.com/grote/Transportr.git\n\nIf you don\'t want to use an IDE like Android Studio, you can build Transportr on the command line as follows.\n\n $ cd Transportr\n $ ./gradlew assembleRelease\n\nLicense\n-------\n\n[![GNU GPLv3 Image](https://www.gnu.org/graphics/gplv3-127x51.png)](https://www.gnu.org/licenses/gpl-3.0.html)\n\nThis program is Free Software: You can use, study, share and improve it at your\nwill. 
Specifically, you can redistribute and/or modify it under the terms of the\n[GNU General Public License](https://www.gnu.org/licenses/gpl.html) as\npublished by the Free Software Foundation, either version 3 of the License, or\n(at your option) any later version.\n'",,"2013/09/06, 10:00:57",3701,GPL-3.0,23,1286,"2023/10/06, 16:56:16",129,186,742,55,19,10,3.5,0.17905675459632298,"2023/06/09, 13:09:24",2.1.5,0,45,true,"github,liberapay,custom",false,true,,,,,,,,,,, OneBusAway,The Open Source platform for Real Time Transit Info.,OneBusAway,https://github.com/OneBusAway/onebusaway-android.git,github,"android,onebusaway,java,public-transportation,transit,open-transit-software-foundation",Mobility and Transportation,"2023/10/22, 18:40:09",437,0,22,true,Java,OneBusAway,OneBusAway,"Java,Kotlin,HTML,PowerShell,Python,Shell",http://www.onebusaway.org/,"b'# OneBusAway for Android [![Android CI Build](https://github.com/OneBusAway/onebusaway-android/actions/workflows/android.yml/badge.svg)](https://github.com/OneBusAway/onebusaway-android/actions/workflows/android.yml) [![Join the OneBusAway chat](https://onebusaway.herokuapp.com/badge.svg)](https://onebusaway.herokuapp.com/)\n\nThis is the official Android / Fire Phone app for [OneBusAway](https://onebusaway.org/), a project of the non-profit [Open Transit Software Foundation](https://opentransitsoftwarefoundation.org/)!\n\n[](https://play.google.com/store/apps/details?id=com.joulespersecond.seattlebusbot)\n\n[](http://www.amazon.com/gp/mas/dl/android?p=com.joulespersecond.seattlebusbot)\n\n\n\nOneBusAway for Android provides:\n\n1. Real-time arrival/departure information for public transit\n2. A browse-able map of nearby stops\n3. A list of favorite bus stops\n4. Reminders to notify you when your bus is arriving or departing\n5. The ability to search for nearby stops or routes\n6. Real-time multimodal trip planning, using real-time transit and bike share information (requires a regional [OpenTripPlanner](http://www.opentripplanner.org/) server)\n7. Bike share map layer, which includes real-time availability information for floating bikes and bike rack capacity (requires a regional [OpenTripPlanner](http://www.opentripplanner.org/) server)\n8. Issue reporting to any Open311-compliant issue management system (see [this page](ISSUE_REPORTING.md) for details)\n\nOneBusAway for Android automatically keeps track of your most used stops and routes, and allows you to put shortcuts on your phone\'s home screen for any stop or route you choose.\n\n## Alpha and Beta Testing\n\nGet early access to new OneBusAway Android versions, and help us squash bugs! See our [Testing Guide](https://github.com/OneBusAway/onebusaway-android/blob/master/BETA_TESTING.md) for details.\n\n## Build Setup\n\nWant to build the project yourself and test some changes? See our [build documentation](BUILD.md).\n\n## Contributing\n\nWe welcome contributions to the project! Please see our [Contributing Guide](https://github.com/OneBusAway/onebusaway-android/blob/master/.github/CONTRIBUTING.md) for details, including Code Style Guidelines and Template.\n\n## System Architecture\n\nCurious what servers power certain features in OneBusAway Android? Check out the [System Architecture page](SYSTEM_ARCHITECTURE.md).\n\n## Deploying OneBusAway Android in Your City\n\nThere are two ways to deploy OneBusAway Android in your city:\n\n1. 
**Join the OneBusAway [multi-region project](https://github.com/OneBusAway/onebusaway/wiki/Multi-Region)** - The easiest way to get started - simply set up your own OneBusAway server with your own transit data, and get added to the OneBusAway apps! See [this page](https://github.com/OneBusAway/onebusaway/wiki/Multi-Region) for details.\n2. **Deploy a rebranded version of OneBusAway Android as your own app on Google Play** - Requires a bit more maintenance, but it allows you to set up your own app on Google Play / Amazon App Store based on the OneBusAway Android source code. See [rebranding page](https://github.com/OneBusAway/onebusaway-android/blob/master/REBRANDING.md) for details.\n\n## Testing Your Own OneBusAway/OpenTripPlanner servers\n\nDid you just set up your own [OneBusAway](https://github.com/OneBusAway/onebusaway-application-modules/wiki) and/or [OpenTripPlanner](http://www.opentripplanner.org/) server? You can test both in this app without compiling any Android code. Just download the app from [Google Play](https://play.google.com/store/apps/details?id=com.joulespersecond.seattlebusbot), and see our [Custom Server Setup Guide](CUSTOM_SERVERS.md) for details.\n\n## Permissions\n\nIn order to support certain features in OneBusAway, we need to request various permissions to access information on your device. See an explanation of why each permission is needed [here](PERMISSIONS.md).\n\n## Troubleshooting\n\nThings not going well building the project? See our [Troubleshooting](TROUBLESHOOTING.md) section. If you\'re a user of the app, check out our [FAQ](FAQ.md).\n\n## OneBusAway Project\n\nWant to learn more about the [OneBusAway project](https://onebusaway.org/), a project of the non-profit [Open Transit Software Foundation](https://opentransitsoftwarefoundation.org/)? [Read up on the entire Application Suite](https://github.com/OneBusAway/onebusaway-application-modules) and/or [learn more about the mobile apps](https://github.com/OneBusAway/onebusaway-application-modules/wiki/Mobile-App-Design-Considerations).\n'",,"2011/06/10, 01:07:10",4520,CUSTOM,1,2385,"2023/10/22, 18:40:09",177,390,917,1,3,13,0.1,0.32156488549618323,"2022/06/14, 21:24:42",v2.10.1,0,28,false,,false,true,,,https://github.com/OneBusAway,https://github.com/OneBusAway/onebusaway/wiki,,,,https://avatars.githubusercontent.com/u/1428806?v=4,,, transitfeed,"A Python library for reading, validating and writing transit schedule information in the GTFS format.",google,https://github.com/google/transitfeed.git,github,,Mobility and Transportation,"2022/05/23, 16:23:53",665,36,19,false,Python,Google,google,"Python,JavaScript,CSS,HTML,VBScript",https://github.com/google/transitfeed/wiki,"b'> \xe2\x9a\xa0\xef\xb8\x8f **NOTE:** This project is no longer actively maintained. For up-to-date GTFS validation tools, see the https://github.com/MobilityData/gtfs-validator project. \xe2\x9a\xa0\xef\xb8\x8f\n\n## transitfeed\n\nProvides a library to help you parse, validate, and generate [General Transit Feed Spec (GTFS)](https://developers.google.com/transit/gtfs/) feed files. 
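\n\nFor example, a minimal load-and-validate sketch (hedged: this assumes the classic `transitfeed` API from the library\'s Python 2 era, and the feed path is hypothetical):\n\n```python\nimport transitfeed\n\n# Load a GTFS feed (zip file or extracted directory) and run the built-in validator.\nloader = transitfeed.Loader(\'google_transit.zip\')\nschedule = loader.Load()\nschedule.Validate()\n\n# List the routes found in the feed.\nfor route in schedule.GetRouteList():\n print(\'%s %s\' % (route.route_id, route.route_long_name))\n```\n\n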
See INSTALL for installation instructions.\n\nFor the latest documentation, see:\n\nhttps://github.com/google/transitfeed/wiki\n\nFor the latest release and downloads, see:\n\nhttps://github.com/google/transitfeed/releases/latest\n\nFor general questions, send a message to the mailing list:\n\nhttps://groups.google.com/forum/#!forum/transitfeed\n'",,"2014/09/15, 15:16:32",3327,Apache-2.0,0,434,"2021/10/16, 08:29:50",194,48,327,0,739,21,0.4,0.4968553459119497,"2018/01/24, 22:09:07",1.2.16,0,23,false,,false,true,"gis-ops/gtfs-fetcher,evansiroky/oba_deployer,informaticacba/gtfseditor,avilaton/gtfseditor,etalab/transport-validator-python,tulsawebdevs/tulsa-transit-google,YoRyan/busbook,Thynix/aaata-buspage,jonathanzhang99/livesubway,brennv/gtfs-pipeline,SamTrans/gtfs-qa,zacs/victoriaclipper-gtfs,ultranaut/jitney,bcpearce/nyc-train-arrival-server,NoxHarmonium/transport-heartbeat,egofer/organicity-risum,yinnonsanders/tcat_gtfs,philnielsen/alexa-metra,bcpearce/nyc-train-arrival,askogvold/shapes-editor,AaronMorais/textYRT,guidj/transit,kdheepak/RTDT,azavea/gtfs-feed-fetcher,flibbertigibbet/gtfs-feed-fetcher,publictransit-in/livetrains,grote/osm2gtfs,fgcarto/gtfsdb,peterlau/gtfsdb-versioned,opentripplanner/OTPSetup,demotera/mobirennes,Tristramg/mumoro,novalis/BusTracker,jcabannes/Projet_tut,ed-g/transitfeed_web,evansiroky/OnTransit",,https://github.com/google,https://opensource.google/,,,,https://avatars.githubusercontent.com/u/1342004?v=4,,, node-gtfs,"Import GTFS transit data into SQLite and query routes, stops, times, fares and more.",BlinkTagInc,https://github.com/BlinkTagInc/node-gtfs.git,github,"gtfs,node-gtfs,transit,transit-data,public-transportation,sqlite,geojson",Mobility and Transportation,"2023/09/19, 17:50:21",405,0,44,true,JavaScript,BlinkTag,BlinkTagInc,"JavaScript,TypeScript,Shell",,"b'

\n Installation |\n Quick Start |\n TypeScript Support |\n Configuration |\n Query Methods\n\n Import and Export GTFS transit data into SQLite. Query or change routes, stops, times, fares and more.\n
\n\n`node-GTFS` loads transit data in [GTFS format](https://developers.google.com/transit/) into a SQLite database and provides some methods to query for agencies, routes, stops, times, fares, calendars and other GTFS data. It also offers spatial queries to find nearby stops, routes and agencies and can convert stops and shapes to geoJSON format. Additionally, this library can export data from the SQLite database back into GTFS (csv) format.\n\nThe library also supports importing GTFS-Realtime data into the same database. To keep the realtime data fresh, it uses `SQLITE REPLACE`, which makes updates very efficient.\n\nYou can use it as a [command-line tool](#command-line-examples) or as a [node.js module](#code-example).\n\nThis library has four parts: the [GTFS import script](#gtfs-import-script), the [GTFS export script](#gtfs-export-script), the [GTFS-Realtime update script](#gtfsrealtime-update-script) and the [query methods](#query-methods).\n\n## Breaking changes in Version 4\n\nVersion 4 of node-gtfs switched to using the better-sqlite3 library. This allowed all query methods to become synchronous and speeds up import and export.\n\n- All query methods are now synchronous.\n\n```js\n// Version 3\nconst routes = await getRoutes();\n\n// Version 4\nconst routes = getRoutes();\n```\n\n- `runRawQuery` has been removed. Use [Raw SQLite Query](#raw-sqlite-query) instead.\n- `execRawQuery` has been removed. Use [Raw SQLite Query](#raw-sqlite-query) instead.\n- `getDb` has been removed. Use `openDb` instead.\n\n## Installation\n\nTo use this library as a command-line utility, install it globally with [npm](https://npmjs.org):\n\n npm install gtfs -g\n\nThis will add the `gtfs-import` and `gtfs-export` scripts to your path.\n\nIf you are using this as a node module as part of an application, include it in your project\'s `package.json` file.\n\n npm install gtfs\n\n## Quick Start\n\n### Command-line examples\n\n gtfs-import --gtfsUrl http://www.bart.gov/dev/schedules/google_transit.zip\n\nor\n\n gtfs-import --gtfsPath /path/to/your/gtfs.zip\n\nor\n\n gtfs-import --gtfsPath /path/to/your/unzipped/gtfs\n\nor\n\n gtfs-import --configPath /path/to/your/custom-config.json\n\n gtfs-export --configPath /path/to/your/custom-config.json\n\n### Code example\n\n```js\nimport { importGtfs } from \'gtfs\';\nimport { readFile } from \'fs/promises\';\n\nconst config = JSON.parse(\n await readFile(new URL(\'./config.json\', import.meta.url))\n);\n\ntry {\n await importGtfs(config);\n} catch (error) {\n console.error(error);\n}\n```\n\n### Example Applications\n\n
- GTFS-to-HTML uses `node-gtfs` for downloading, importing and querying GTFS data. It provides a good example of how to use this library and is used by over a dozen transit agencies to generate the timetables on their websites.
- GTFS-to-geojson creates geoJSON files for transit routes for use in mapping. It uses `node-gtfs` for downloading, importing and querying GTFS data. It provides a good example of how to use this library.
- GTFS-to-chart generates a stringline chart in D3 for all trips for a specific route using data from an agency\'s GTFS. It uses `node-gtfs` for downloading, importing and querying GTFS data.
- GTFS-Text-to-Speech app tests GTFS stop name pronunciation for text-to-speech. It uses `node-gtfs` for loading stop names from GTFS data.
- Transit Departures Widget creates a realtime transit departures widget from GTFS and GTFS-Realtime data.
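\nPutting the pieces together, here is a minimal end-to-end sketch combining the import and query methods documented below (the file paths are hypothetical):\n\n```js\nimport { importGtfs, openDb, getRoutes, closeDb } from \'gtfs\';\n\n// Hypothetical config: import one local GTFS file into a SQLite database on disk.\nconst config = {\n sqlitePath: \'/tmp/gtfs.db\',\n agencies: [{ path: \'/path/to/gtfs.zip\' }],\n};\n\nawait importGtfs(config);\n\n// Query methods are synchronous as of version 4.\nconst db = openDb(config);\nconst routes = getRoutes({}, [\'route_id\', \'route_long_name\']);\nconsole.log(routes.length);\ncloseDb(db);\n```\n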
\n\n## Command-Line Usage\n\nThe `gtfs-import` command-line utility will import GTFS into SQLite3.\n\nThe `gtfs-export` command-line utility will create GTFS from data previously imported into SQLite3.\n\n### gtfs-import Command-Line options\n\n`configPath`\n\nAllows specifying a path to a configuration json file. By default, `node-gtfs` will look for a `config.json` file in the directory it is being run from. Using a config.json file allows you to specify more options than CLI arguments alone - see below.\n\n gtfs-import --configPath /path/to/your/custom-config.json\n\n`gtfsPath`\n\nSpecify a local path to GTFS, either zipped or unzipped.\n\n gtfs-import --gtfsPath /path/to/your/gtfs.zip\n\nor\n\n gtfs-import --gtfsPath /path/to/your/unzipped/gtfs\n\n`gtfsUrl`\n\nSpecify a URL to a zipped GTFS file.\n\n gtfs-import --gtfsUrl http://www.bart.gov/dev/schedules/google_transit.zip\n\n## TypeScript Support\n\nBasic TypeScript typings are included with this library. Please [open an issue](https://github.com/blinktaginc/node-gtfs/issues) if you find any inconsistencies between the declared types and underlying code.\n\n## Configuration\n\nCopy `config-sample.json` to `config.json` and then add your project\'s configuration to `config.json`.\n\n cp config-sample.json config.json\n\n| option | type | description |\n| --- | --- | --- |\n| [`agencies`](#agencies) | array | An array of GTFS files to be imported, and which files to exclude. |\n| [`csvOptions`](#csvOptions) | object | Options passed to `csv-parse` for parsing GTFS CSV files. Optional. |\n| [`exportPath`](#exportPath) | string | A path to a directory to put exported GTFS files. Optional, defaults to `gtfs-export/<agency_name>`. |\n| [`ignoreDuplicates`](#ignoreduplicates) | boolean | Whether or not to ignore unique constraints on ids when importing GTFS, such as `trip_id`, `calendar_id`. Optional, defaults to false. |\n| [`sqlitePath`](#sqlitePath) | string | A path to an SQLite database. Optional, defaults to using an in-memory database. |\n| [`verbose`](#verbose) | boolean | Whether or not to print output to the console. Optional, defaults to true. |\n\n### agencies\n\n{Array} Specify the GTFS files to be imported in an `agencies` array. GTFS files can be imported via a `url` or a local `path`.\n\nFor GTFS files that contain more than one agency, you only need to list each GTFS file once in the `agencies` array, not once per agency that it contains.\n\nTo find an agency\'s GTFS file, visit [transitfeeds.com](http://transitfeeds.com).\n\n#### agencies options\n\n| option | type | description |\n| --- | --- | --- |\n| `url` | string | The URL to a zipped GTFS file. Required if `path` not present. |\n| `path` | string | A path to a zipped GTFS file or a directory of unzipped .txt files. Required if `url` is not present. |\n| `headers` | object | An object of HTTP headers in key:value format to use when fetching GTFS from the `url` specified. Optional. |\n| `prefix` | string | A prefix to be added to every ID field to maintain uniqueness when importing multiple GTFS from multiple agencies. Optional. |\n| `exclude` | array | An array of GTFS file names (without `.txt`) to exclude when importing. Optional. |
\n| `realtimeUrls` | array | An array of GTFS-Realtime URLs to import. Optional. |\n| `realtimeHeaders` | object | An object of HTTP headers in key:value format to use when fetching GTFS-Realtime data from the `realtimeUrls` specified. Optional. |\n\n- Specify a `url` to download GTFS:\n\n```json\n{\n ""agencies"": [\n {\n ""url"": ""http://countyconnection.com/GTFS/google_transit.zip""\n }\n ]\n}\n```\n\n- Specify a download URL with custom headers using the `headers` field:\n\n```json\n{\n ""agencies"": [\n {\n ""url"": ""http://countyconnection.com/GTFS/google_transit.zip"",\n ""headers"": {\n ""Content-Type"": ""application/json"",\n ""Authorization"": ""bearer 1234567890""\n }\n }\n ]\n}\n```\n\n- Specify a `path` to a zipped GTFS file:\n\n```json\n{\n ""agencies"": [\n {\n ""path"": ""/path/to/the/gtfs.zip""\n }\n ]\n}\n```\n\n- Specify a `path` to an unzipped GTFS file:\n\n```json\n{\n ""agencies"": [\n {\n ""path"": ""/path/to/the/unzipped/gtfs/""\n }\n ]\n}\n```\n\n- If you don\'t want all GTFS files to be imported, you can specify an array of files to `exclude`. This can save a lot of time for larger GTFS.\n\n```json\n{\n ""agencies"": [\n {\n ""path"": ""/path/to/the/unzipped/gtfs/"",\n ""exclude"": [""shapes"", ""stops""]\n }\n ]\n}\n```\n\n- Specify URLs for GTFS-Realtime updates. `realtimeUrls` allows an array of GTFS-Realtime URLs. For example, a URL for trip updates, a URL for vehicle updates and a URL for service alerts. In addition, a `realtimeHeaders` parameter allows adding additional HTTP headers to the request.\n\n```json\n{\n ""agencies"": [\n {\n ""url"": ""http://countyconnection.com/GTFS/google_transit.zip"",\n ""realtimeUrls"": [\n ""https://opendata.somewhere.com/gtfs-rt/VehicleUpdates.pb"",\n ""https://opendata.somewhere.com/gtfs-rt/TripUpdates.pb""\n ],\n ""realtimeHeaders"": {\n ""Authorization"": ""bearer 1234567890""\n }\n }\n ]\n}\n```\n\n- Specify multiple agencies to be imported into the same database:\n\n```json\n{\n ""agencies"": [\n {\n ""path"": ""/path/to/the/gtfs.zip""\n },\n {\n ""path"": ""/path/to/the/othergtfs.zip""\n }\n ]\n}\n```\n\n- When importing multiple agencies, their IDs may overlap. Specify a `prefix` to be added to every ID field to maintain uniqueness.\n\n```json\n{\n ""agencies"": [\n {\n ""path"": ""/path/to/the/gtfs.zip"",\n ""prefix"": ""A""\n },\n {\n ""path"": ""/path/to/the/othergtfs.zip"",\n ""prefix"": 10000\n }\n ]\n}\n```\n\n### csvOptions\n\n{Object} Add options to be passed to [`csv-parse`](https://csv.js.org/parse/) with the key `csvOptions`. This is an optional parameter.\n\nFor instance, if you wanted to skip importing invalid lines in the GTFS file:\n\n```json\n ""csvOptions"": {\n ""skip_lines_with_error"": true\n }\n```\n\nSee [full list of options](https://csv.js.org/parse/options/).\n\n### exportPath\n\n{String} A path to a directory to put exported GTFS files. If the directory does not exist, it will be created. Used when running `gtfs-export` script or `exportGtfs()`. Optional, defaults to `gtfs-export/<agency_name>` where `<agency_name>` is a sanitized, [snake-cased](https://en.wikipedia.org/wiki/Snake_case) version of the first `agency_name` in `agency.txt`.\n\n```json\n{\n ""agencies"": [\n {\n ""path"": ""/path/to/the/unzipped/gtfs/""\n }\n ],\n ""exportPath"": ""~/path/to/export/gtfs""\n}\n```\n\n### ignoreDuplicates\n\n{Boolean} Whether node-GTFS should skip duplicate ids on GTFS import instead of throwing an error. 
If `true`, it will skip importing duplicate records where unique constraints are violated, such as `trip_id`, `stop_id`, `calendar_id`. Useful if importing GTFS from multiple sources that share routes or stops into one SQLite database. Defaults to `false`.\n\n```json\n{\n ""agencies"": [\n {\n ""path"": ""/path/to/the/unzipped/gtfs/""\n }\n ],\n ""ignoreDuplicates"": false\n}\n```\n\n### sqlitePath\n\n{String} A path to an SQLite database. Optional, defaults to using an in-memory database with a value of `:memory:`.\n\n```json\n ""sqlitePath"": ""/dev/sqlite/gtfs""\n```\n\n### verbose\n\n{Boolean} If you don\'t want the import script to print any output to the console, you can set `verbose` to `false`. Defaults to `true`.\n\n```json\n{\n ""agencies"": [\n {\n ""path"": ""/path/to/the/unzipped/gtfs/""\n }\n ],\n ""verbose"": false\n}\n```\n\nIf you want to route logs to a custom function, you can pass a function that takes a single `text` argument as `logFunction`. This can\'t be defined in `config.json` but must instead be passed in a config object to `importGtfs()`. For example:\n\n```js\nimport { importGtfs } from \'gtfs\';\n\nconst config = {\n agencies: [\n {\n url: \'http://countyconnection.com/GTFS/google_transit.zip\',\n exclude: [\'shapes\'],\n },\n ],\n logFunction: function (text) {\n // Do something with the logs here, like save it or send it somewhere\n console.log(text);\n },\n};\n\nawait importGtfs(config);\n```\n\n## `gtfs-import` Script\n\nThe `gtfs-import` script reads from a JSON configuration file and imports the GTFS files specified to a SQLite database. [Read more on setting up your configuration file](#configuration).\n\n### Run the `gtfs-import` script from command-line\n\n gtfs-import\n\nBy default, it will look for a `config.json` file in the project root. To specify a different path for the configuration file:\n\n gtfs-import --configPath /path/to/your/custom-config.json\n\n### Use `importGtfs` script in code\n\nUse `importGtfs()` in your code to run an import of a GTFS file specified in a config.json file.\n\n```js\nimport { importGtfs } from \'gtfs\';\nimport { readFile } from \'fs/promises\';\n\nconst config = JSON.parse(\n await readFile(new URL(\'./config.json\', import.meta.url))\n);\n\nawait importGtfs(config);\n```\n\nConfiguration can also be a JSON object in your code:\n\n```js\nimport { importGtfs } from \'gtfs\';\n\nconst config = {\n sqlitePath: \'/dev/sqlite/gtfs\',\n agencies: [\n {\n url: \'http://countyconnection.com/GTFS/google_transit.zip\',\n exclude: [\'shapes\'],\n },\n ],\n};\n\nawait importGtfs(config);\n```\n\n## `gtfsrealtime-update` Script\n\nThe `gtfsrealtime-update` script requests GTFS-Realtime data and imports it into a SQLite database. [GTFS-Realtime data](https://gtfs.org/realtime/reference/) can complement static GTFS data. [Read more about GTFS-Realtime configuration](#configuration).\n\n### Run the `gtfsrealtime-update` script from command-line\n\n gtfsrealtime-update\n\nBy default, it will look for a `config.json` file in the project root. 
To specify a different path for the configuration file:\n\n gtfsrealtime-update --configPath /path/to/your/custom-config.json\n\n### Use `updateGtfsRealtime` script in code\n\nUse `updateGtfsRealtime()` in your code to run an update of the GTFS-Realtime data specified in a config.json file.\n\n```js\nimport { updateGtfsRealtime } from \'gtfs\';\nimport { readFile } from \'fs/promises\';\n\nconst config = JSON.parse(\n await readFile(new URL(\'./config.json\', import.meta.url))\n);\n\nawait updateGtfsRealtime(config);\n```\n\n## `gtfs-export` Script\n\nThe `gtfs-export` script reads from a JSON configuration file and exports data in GTFS format from a SQLite database. [Read more on setting up your configuration file](#configuration).\n\nThis could be used to export a GTFS file from SQLite after manual changes have been made to the data in the database.\n\n### Make sure to import GTFS data into SQLite first\n\nNothing will be exported if there is no data to export. See the [GTFS import script](#gtfs-import-script).\n\n### Run the `gtfs-export` script from Command-line\n\n gtfs-export\n\nBy default, it will look for a `config.json` file in the project root. To specify a different path for the configuration file:\n\n gtfs-export --configPath /path/to/your/custom-config.json\n\n### Command-Line options\n\n#### Specify path to config JSON file\n\nYou can specify the path to a config file to be used by the export script.\n\n gtfs-export --configPath /path/to/your/custom-config.json\n\n#### Show help\n\nShow all command-line options:\n\n gtfs-export --help\n\n### Use `exportGtfs` script in code\n\nUse `exportGtfs()` in your code to run an export of a GTFS file specified in a config.json file.\n\n```js\nimport { exportGtfs } from \'gtfs\';\n\nconst config = {\n sqlitePath: \'/dev/sqlite/gtfs\',\n agencies: [\n {\n url: \'http://countyconnection.com/GTFS/google_transit.zip\',\n exclude: [\'shapes\'],\n },\n ],\n};\n\nawait exportGtfs(config);\n```\n\n## Query Methods\n\nThis library includes many methods you can use in your project to query GTFS data. 
In addition to standard static GTFS, `node-gtfs` supports the following extensions to GTFS:\n\n- [GTFS-Realtime](https://gtfs.org/realtime/) - Realtime alerts, vehicle positions and predictions\n- [GTFS-Ride](https://gtfsride.org) - Passenger counts\n- [Operational Data Standard (ODS)](https://docs.calitp.org/operational-data-standard/) - Deadheads and personnel info\n- [GTFS-Timetables](https://gtfstohtml.com) - Information for creating human-readable timetables\n\nThere are also methods for retrieving stops and shapes in geoJSON format.\n\nMost query methods accept four optional arguments: `query`, `fields`, `sortBy` and `options`.\n\nFor more advanced queries, you can use `advancedQuery` or raw SQL queries using the query methods from [better-sqlite3](#raw-sqlite-query).\n\n### Database Setup\n\nTo use any of the query methods, first open the database using `openDb` before making any queries:\n\n```js\nimport { openDb } from \'gtfs\';\nimport { readFile } from \'fs/promises\';\nconst config = JSON.parse(\n await readFile(new URL(\'./config.json\', import.meta.url))\n);\nconst db = openDb(config);\n```\n\nIf you no longer need a database (especially if using an in-memory database) you can use `closeDb`:\n\n```js\nimport { closeDb, openDb } from \'gtfs\';\nconst db = openDb(config);\n\n// Do some stuff here\n\n// Close database connection when done.\ncloseDb(db);\n```\n\n### Examples\n\nFor example, to get a list of all routes with just `route_id`, `route_short_name` and `route_color` sorted by `route_short_name`:\n\n```js\nimport { closeDb, openDb, getRoutes } from \'gtfs\';\nimport { readFile } from \'fs/promises\';\nconst config = JSON.parse(\n await readFile(new URL(\'./config.json\', import.meta.url))\n);\n\nconst db = openDb(config);\nconst routes = getRoutes(\n {}, // No query filters\n [\'route_id\', \'route_short_name\', \'route_color\'], // Only return these fields\n [[\'route_short_name\', \'ASC\']], // Sort by this field and direction\n { db: db } // Options for the query. Can specify which database to use if more than one are open\n);\n\ncloseDb(db);\n```\n\nTo get a list of all trip_ids for a specific route:\n\n```js\nimport { closeDb, openDb, getTrips } from \'gtfs\';\nimport { readFile } from \'fs/promises\';\nconst config = JSON.parse(\n await readFile(new URL(\'./config.json\', import.meta.url))\n);\n\nconst db = openDb(config);\nconst trips = getTrips(\n {\n route_id: \'123\',\n },\n [\'trip_id\']\n);\n\ncloseDb(db);\n```\n\nTo get a few stops by specific stop_ids:\n\n```js\nimport { closeDb, openDb, getStops } from \'gtfs\';\nimport { readFile } from \'fs/promises\';\nconst config = JSON.parse(await readFile(new URL(\'./config.json\', import.meta.url)));\n\nconst db = openDb(config);\nconst stops = getStops(\n {\n stop_id: [\n \'123\',\n \'234\',\n \'345\'\n ]\n }\n);\n\ncloseDb(db);\n```\n\n### Static GTFS Files\n\n#### getAgencies(query, fields, sortBy, options)\n\nReturns an array of agencies that match query parameters. [Details on agency.txt](https://gtfs.org/schedule/reference/#agencytxt)\n\n```js\nimport { getAgencies } from \'gtfs\';\n\n// Get all agencies\nconst agencies = getAgencies();\n\n// Get a specific agency\nconst agencies = getAgencies({\n agency_id: \'caltrain\',\n});\n```\n\n#### getAreas(query, fields, sortBy, options)\n\nReturns an array of areas that match query parameters. 
[Details on areas.txt](https://gtfs.org/schedule/reference/#areastxt)\n\n```js\nimport { getAreas } from \'gtfs\';\n\n// Get all areas\nconst areas = getAreas();\n\n// Get a specific area\nconst areas = getAreas({\n area_id: \'area1\',\n});\n```\n\n#### getAttributions(query, fields, sortBy, options)\n\nReturns an array of attributions that match query parameters. [Details on attributions.txt](https://gtfs.org/schedule/reference/#attributionstxt)\n\n```js\nimport { getAttributions } from \'gtfs\';\n\n// Get all attributions\nconst attributions = getAttributions();\n\n// Get a specific attribution\nconst attributions = getAttributions({\n attribution_id: \'123\',\n});\n```\n\n#### getRoutes(query, fields, sortBy, options)\n\nReturns an array of routes that match query parameters. [Details on routes.txt](https://gtfs.org/schedule/reference/#routestxt)\n\n```js\nimport { getRoutes } from \'gtfs\';\n\n// Get all routes, sorted by route_short_name\nconst routes = getRoutes({}, [], [[\'route_short_name\', \'ASC\']]);\n\n// Get a specific route\nconst routes = getRoutes({\n route_id: \'Lo-16APR\',\n});\n\n/*\n * `getRoutes` allows passing a `stop_id` as part of the query. This will\n * query stoptimes and trips to find all routes that serve that `stop_id`.\n */\nconst routes = getRoutes(\n {\n stop_id: \'70011\',\n },\n [],\n [[\'stop_name\', \'ASC\']]\n);\n```\n\n#### getStops(query, fields, sortBy, options)\n\nReturns an array of stops that match query parameters. [Details on stops.txt](https://gtfs.org/schedule/reference/#stopstxt)\n\n```js\nimport { getStops } from \'gtfs\';\n\n// Get all stops\nconst stops = getStops();\n\n// Get a specific stop by stop_id\nconst stops = getStops({\n stop_id: \'70011\',\n});\n\n/*\n * `getStops` allows passing a `route_id` in the query and it will\n * query trips and stoptimes to find all stops served by that `route_id`.\n */\nconst stops = getStops({\n route_id: \'Lo-16APR\',\n});\n\n/*\n * `getStops` allows passing a `trip_id` in the query and it will query\n * stoptimes to find all stops on that `trip_id`.\n */\nconst stops = getStops({\n trip_id: \'37a\',\n});\n\n/*\n * `getStops` allows passing a `shape_id` in the query and it will query\n * trips and stoptimes to find all stops that use that `shape_id`.\n */\nconst stops = getStops({\n shape_id: \'cal_sf_tam\',\n});\n```\n\n#### getStopsAsGeoJSON(query, options)\n\nReturns geoJSON object of stops that match query parameters. Stops will include all properties of each stop from stops.txt and stop_attributes.txt if present. All valid queries for `getStops()` work for `getStopsAsGeoJSON()`.\n\n```js\nimport { getStopsAsGeoJSON } from \'gtfs\';\n\n// Get all stops for an agency as geoJSON\nconst stopsGeojson = getStopsAsGeoJSON();\n\n// Get all stops for a specific route as geoJSON\nconst stopsGeojson = getStopsAsGeoJSON({\n route_id: \'Lo-16APR\',\n});\n```\n\n#### getStoptimes(query, fields, sortBy, options)\n\nReturns an array of stop_times that match query parameters. 
[Details on stop_times.txt](https://gtfs.org/schedule/reference/#stop_timestxt)\n\n```js\nimport { getStoptimes } from \'gtfs\';\n\n// Get all stoptimes\nconst stoptimes = getStoptimes();\n\n// Get all stoptimes for a specific stop\nconst stoptimes = getStoptimes({\n stop_id: \'70011\',\n});\n\n// Get all stoptimes for a specific trip, sorted by stop_sequence\nconst stoptimes = getStoptimes(\n {\n trip_id: \'37a\',\n },\n [],\n [[\'stop_sequence\', \'ASC\']]\n);\n\n// Get all stoptimes for a specific stop and service_id\nconst stoptimes = getStoptimes({\n stop_id: \'70011\',\n service_id: \'CT-16APR-Caltrain-Weekday-01\',\n});\n```\n\n#### getTrips(query, fields, sortBy, options)\n\nReturns an array of trips that match query parameters. [Details on trips.txt](https://gtfs.org/schedule/reference/#tripstxt)\n\n```js\nimport { getTrips } from \'gtfs\';\n\n// Get all trips\nconst trips = getTrips();\n\n// Get trips for a specific route and direction\nconst trips = getTrips({\n route_id: \'Lo-16APR\',\n direction_id: 0\n});\n\n// Get trips for direction \'\' or null\nconst trips = getTrips({\n route_id: \'Lo-16APR\',\n direction_id: null\n});\n\n// Get trips for a specific route and direction limited by a service_id\nconst trips = getTrips({\n route_id: \'Lo-16APR\',\n direction_id: 0,\n service_id: \'CT-16APR-Caltrain-Weekday-01\',\n});\n```\n\n#### getShapes(query, fields, sortBy, options)\n\nReturns an array of shapes that match query parameters. [Details on shapes.txt](https://gtfs.org/schedule/reference/#shapestxt)\n\n```js\nimport { getShapes } from \'gtfs\';\n\n// Get all shapes for an agency\nconst shapes = getShapes();\n\n/*\n * `getShapes` allows passing a `route_id` in the query and it will query\n * trips to find all shapes served by that `route_id`.\n */\nconst shapes = getShapes({\n route_id: \'Lo-16APR\',\n});\n\n/*\n * `getShapes` allows passing a `trip_id` in the query and it will query\n * trips to find all shapes served by that `trip_id`.\n */\nconst shapes = getShapes({\n trip_id: \'37a\',\n});\n\n/*\n * `getShapes` allows passing a `service_id` in the query and it will query\n * trips to find all shapes served by that `service_id`.\n */\nconst shapes = getShapes({\n service_id: \'CT-16APR-Caltrain-Sunday-02\',\n});\n```\n\n#### getShapesAsGeoJSON(query, options)\n\nReturns a geoJSON object of shapes that match query parameters. Shapes will include all properties of each route from routes.txt and route_attributes.txt if present. All valid queries for `getShapes()` work for `getShapesAsGeoJSON()`.\n\n```js\nimport { getShapesAsGeoJSON } from \'gtfs\';\n\n// Get geoJSON of all routes in an agency\nconst shapesGeojson = getShapesAsGeoJSON();\n\n// Get geoJSON of shapes for a specific route\nconst shapesGeojson = getShapesAsGeoJSON({\n route_id: \'Lo-16APR\',\n});\n\n// Get geoJSON of shapes for a specific trip\nconst shapesGeojson = getShapesAsGeoJSON({\n trip_id: \'37a\',\n});\n\n// Get geoJSON of shapes for a specific `service_id`\nconst shapesGeojson = getShapesAsGeoJSON({\n service_id: \'CT-16APR-Caltrain-Sunday-02\',\n});\n\n// Get geoJSON of shapes for a specific `shape_id`\nconst shapesGeojson = getShapesAsGeoJSON({\n shape_id: \'cal_sf_tam\',\n});\n```\n\n#### getCalendars(query, fields, sortBy, options)\n\nReturns an array of calendars that match query parameters. 
[Details on calendar.txt](https://gtfs.org/schedule/reference/#calendartxt)\n\n```js\nimport { getCalendars } from \'gtfs\';\n\n// Get all calendars for an agency\nconst calendars = getCalendars();\n\n// Get calendars for a specific `service_id`\nconst calendars = getCalendars({\n service_id: \'CT-16APR-Caltrain-Sunday-02\',\n});\n```\n\n#### getCalendarDates(query, fields, sortBy, options)\n\nReturns an array of calendar_dates that match query parameters. [Details on calendar_dates.txt](https://gtfs.org/schedule/reference/#calendar_datestxt)\n\n```js\nimport { getCalendarDates } from \'gtfs\';\n\n// Get all calendar_dates for an agency\nconst calendarDates = getCalendarDates();\n\n// Get calendar_dates for a specific `service_id`\nconst calendarDates = getCalendarDates({\n service_id: \'CT-16APR-Caltrain-Sunday-02\',\n});\n```\n\n#### getFareAttributes(query, fields, sortBy, options)\n\nReturns an array of fare_attributes that match query parameters. [Details on fare_attributes.txt](https://gtfs.org/schedule/reference/#fare_attributestxt)\n\n```js\nimport { getFareAttributes } from \'gtfs\';\n\n// Get all `fare_attributes` for an agency\nconst fareAttributes = getFareAttributes();\n\n// Get `fare_attributes` for a specific `fare_id`\nconst fareAttributes = getFareAttributes({\n fare_id: \'123\',\n});\n```\n\n#### getFareLegRules(query, fields, sortBy, options)\n\nReturns an array of fare_leg_rules that match query parameters. [Details on fare_leg_rules.txt](https://gtfs.org/schedule/reference/#fare_leg_rulestxt)\n\n```js\nimport { getFareLegRules } from \'gtfs\';\n\n// Get all fare leg rules\nconst fareLegRules = getFareLegRules();\n\n// Get fare leg rules for a specific fare product\nconst fareLegRules = getFareLegRules({\n fare_product_id: \'product1\',\n});\n```\n\n#### getFareProducts(query, fields, sortBy, options)\n\nReturns an array of fare_products that match query parameters. [Details on fare_products.txt](https://gtfs.org/schedule/reference/#fare_productstxt)\n\n```js\nimport { getFareProducts } from \'gtfs\';\n\n// Get all fare products\nconst fareProducts = getFareProducts();\n\n// Get a specific fare product\nconst fareProducts = getFareProducts({\n fare_product_id: \'product1\',\n});\n```\n\n#### getFareRules(query, fields, sortBy, options)\n\nReturns an array of fare_rules that match query parameters. [Details on fare_rules.txt](https://gtfs.org/schedule/reference/#fare_rulestxt)\n\n```js\nimport { getFareRules } from \'gtfs\';\n\n// Get all `fare_rules` for an agency\nconst fareRules = getFareRules();\n\n// Get fare_rules for a specific route\nconst fareRules = getFareRules({\n route_id: \'Lo-16APR\',\n});\n```\n\n#### getFareTransferRules(query, fields, sortBy, options)\n\nReturns an array of fare_transfer_rules that match query parameters. [Details on fare_transfer_rules.txt](https://gtfs.org/schedule/reference/#fare_transfer_rulestxt)\n\n```js\nimport { getFareTransferRules } from \'gtfs\';\n\n// Get all fare transfer rules\nconst fareTransferRules = getFareTransferRules();\n\n// Get all fare transfer rules for a specific fare product\nconst fareTransferRules = getFareTransferRules({\n fare_product_id: \'product1\',\n});\n```\n\n#### getFeedInfo(query, fields, sortBy, options)\n\nReturns an array of feed_info that match query parameters. 
[Details on feed_info.txt](https://gtfs.org/schedule/reference/#feed_infotxt)\n\n```js\nimport { getFeedInfo } from \'gtfs\';\n\n// Get feed_info\nconst feedInfo = getFeedInfo();\n```\n\n#### getFrequencies(query, fields, sortBy, options)\n\nReturns an array of frequencies that match query parameters. [Details on frequencies.txt](https://gtfs.org/schedule/reference/#frequenciestxt)\n\n```js\nimport { getFrequencies } from \'gtfs\';\n\n// Get all frequencies\nconst frequencies = getFrequencies();\n\n// Get frequencies for a specific trip\nconst frequencies = getFrequencies({\n trip_id: \'1234\',\n});\n```\n\n#### getLevels(query, fields, sortBy, options)\n\nReturns an array of levels that match query parameters. [Details on levels.txt](https://gtfs.org/schedule/reference/#levelstxt)\n\n```js\nimport { getLevels } from \'gtfs\';\n\n// Get all levels\nconst levels = getLevels();\n```\n\n#### getPathways(query, fields, sortBy, options)\n\nReturns an array of pathways that match query parameters. [Details on pathways.txt](https://gtfs.org/schedule/reference/#pathwaystxt)\n\n```js\nimport { getPathways } from \'gtfs\';\n\n// Get all pathways\nconst pathways = getPathways();\n```\n\n#### getTransfers(query, fields, sortBy, options)\n\nReturns an array of transfers that match query parameters. [Details on transfers.txt](https://gtfs.org/schedule/reference/#transferstxt)\n\n```js\nimport { getTransfers } from \'gtfs\';\n\n// Get all transfers\nconst transfers = getTransfers();\n\n// Get transfers for a specific stop\nconst transfers = getTransfers({\n from_stop_id: \'1234\',\n});\n```\n\n#### getTranslations(query, fields, sortBy, options)\n\nReturns an array of translations that match query parameters. [Details on translations.txt](https://gtfs.org/schedule/reference/#translationstxt)\n\n```js\nimport { getTranslations } from \'gtfs\';\n\n// Get all translations\nconst translations = getTranslations();\n```\n\n#### getStopAreas(query, fields, sortBy, options)\n\nReturns an array of stop_areas that match query parameters. [Details on stop_areas.txt](https://gtfs.org/schedule/reference/#stop_areastxt)\n\n```js\nimport { getStopAreas } from \'gtfs\';\n\n// Get all stop areas\nconst stopAreas = getStopAreas();\n```\n\n### GTFS-Timetables files\n\n#### getTimetables(query, fields, sortBy, options)\n\nReturns an array of timetables that match query parameters. This is for the non-standard `timetables.txt` file used in GTFS-to-HTML. [Details on timetables.txt](https://gtfstohtml.com/docs/timetables)\n\n```js\nimport { getTimetables } from \'gtfs\';\n\n// Get all timetables for an agency\nconst timetables = getTimetables();\n\n// Get a specific timetable\nconst timetables = getTimetables({\n timetable_id: \'1\',\n});\n```\n\n#### getTimetableStopOrders(query, fields, sortBy, options)\n\nReturns an array of timetable_stop_orders that match query parameters. This is for the non-standard `timetable_stop_order.txt` file used in GTFS-to-HTML. [Details on timetable_stop_order.txt](https://gtfstohtml.com/docs/timetable-stop-order)\n\n```js\nimport { getTimetableStopOrders } from \'gtfs\';\n\n// Get all timetable_stop_orders\nconst timetableStopOrders = getTimetableStopOrders();\n\n// Get timetable_stop_orders for a specific timetable\nconst timetableStopOrders = getTimetableStopOrders({\n timetable_id: \'1\',\n});\n```\n\n#### getTimetablePages(query, fields, sortBy, options)\n\nReturns an array of timetable_pages that match query parameters. 
This is for the non-standard `timetable_pages.txt` file used in GTFS-to-HTML. [Details on timetable_pages.txt](https://gtfstohtml.com/docs/timetable-pages)\n\n```js\nimport { getTimetablePages } from \'gtfs\';\n\n// Get all timetable_pages for an agency\nconst timetablePages = getTimetablePages();\n\n// Get a specific timetable_page\nconst timetablePages = getTimetablePages({\n timetable_page_id: \'2\',\n});\n```\n\n#### getTimetableNotes(query, fields, sortBy, options)\n\nReturns an array of timetable_notes that match query parameters. This is for the non-standard `timetable_notes.txt` file used in GTFS-to-HTML. [Details on timetable_notes.txt](https://gtfstohtml.com/docs/timetable-notes)\n\n```js\nimport { getTimetableNotes } from \'gtfs\';\n\n// Get all timetable_notes for an agency\nconst timetableNotes = getTimetableNotes();\n\n// Get a specific timetable_note\nconst timetableNotes = getTimetableNotes({\n note_id: \'1\',\n});\n```\n\n#### getTimetableNotesReferences(query, fields, sortBy, options)\n\nReturns an array of timetable_notes_references that match query parameters. This is for the non-standard `timetable_notes_references.txt` file used in GTFS-to-HTML. [Details on timetable_notes_references.txt](https://gtfstohtml.com/docs/timetable-notes-references)\n\n```js\nimport { getTimetableNotesReferences } from \'gtfs\';\n\n// Get all timetable_notes_references for an agency\nconst timetableNotesReferences = getTimetableNotesReferences();\n\n// Get all timetable_notes_references for a specific timetable\nconst timetableNotesReferences = getTimetableNotesReferences({\n timetable_id: \'4\',\n});\n```\n\n### GTFS-Realtime\n\nIn order to use GTFS-Realtime query methods, you must first configure GTFS-Realtime import in node-gtfs.\n\n#### getServiceAlerts(query, fields, sortBy, options)\n\nReturns an array of GTFS Realtime service alerts that match query parameters. [Details on Service Alerts](https://gtfs.org/realtime/feed-entities/service-alerts/)\n\n```js\nimport { getServiceAlerts } from \'gtfs\';\n\n// Get service alerts\nconst serviceAlerts = getServiceAlerts();\n```\n\n#### getTripUpdates(query, fields, sortBy, options)\n\nReturns an array of GTFS Realtime trip updates that match query parameters. [Details on Trip Updates](https://gtfs.org/realtime/feed-entities/trip-updates/)\n\n```js\nimport { getTripUpdates } from \'gtfs\';\n\n// Get all trip updates\nconst tripUpdates = getTripUpdates();\n```\n\n#### getStopTimesUpdates(query, fields, sortBy, options)\n\nReturns an array of GTFS Realtime stop time updates that match query parameters. [Details on Stop Time Updates](https://gtfs.org/realtime/feed-entities/trip-updates/#stoptimeupdate)\n\n```js\nimport { getStopTimesUpdates } from \'gtfs\';\n\n// Get all stop times updates\nconst stopTimesUpdates = getStopTimesUpdates();\n```\n\n#### getVehiclePositions(query, fields, sortBy, options)\n\nReturns an array of GTFS Realtime vehicle positions that match query parameters. 
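As with the static tables, a query object can be used to filter these results. A hedged sketch, assuming the vehicle positions table exposes a `trip_id` column (see the link below for the fields defined by the spec):\n\n```js\nimport { getVehiclePositions } from \'gtfs\';\n\n// Illustrative only: positions for a single trip\nconst tripPositions = getVehiclePositions({\n trip_id: \'1234\',\n});\n```\n\n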
[Details on Vehicle Positions](https://gtfs.org/realtime/feed-entities/vehicle-positions/)\n\n```js\nimport { getVehiclePositions } from \'gtfs\';\n\n// Get all vehicle position data\nconst vehiclePositions = getVehiclePositions();\n```\n\n### GTFS+ Files\n\n#### getCalendarAttributes(query, fields, sortBy, options)\n\nReturns an array of calendar_attributes that match query parameters.\n\n```js\nimport { getCalendarAttributes } from \'gtfs\';\n\n// Get all calendar attributes\nconst calendarAttributes = getCalendarAttributes();\n\n// Get calendar attributes for a specific service\nconst calendarAttributes = getCalendarAttributes({\n service_id: \'1234\',\n});\n```\n\n#### getDirections(query, fields, sortBy, options)\n\nReturns an array of directions that match query parameters.\n\n```js\nimport { getDirections } from \'gtfs\';\n\n// Get all directions\nconst directions = getDirections();\n\n// Get directions for a specific route\nconst directions = getDirections({\n route_id: \'1234\',\n});\n\n// Get directions for a specific route and direction\nconst directions = getDirections({\n route_id: \'1234\',\n direction_id: 1,\n});\n```\n\n#### getRouteAttributes(query, fields, sortBy, options)\n\nReturns an array of route_attributes that match query parameters.\n\n```js\nimport { getRouteAttributes } from \'gtfs\';\n\n// Get all route attributes\nconst routeAttributes = getRouteAttributes();\n\n// Get route attributes for a specific route\nconst routeAttributes = getRouteAttributes({\n route_id: \'1234\',\n});\n```\n\n#### getStopAttributes(query, fields, sortBy, options)\n\nReturns an array of stop_attributes that match query parameters.\n\n```js\nimport { getStopAttributes } from \'gtfs\';\n\n// Get all stop attributes\nconst stopAttributes = getStopAttributes();\n\n// Get stop attributes for a specific stop\nconst stopAttributes = getStopAttributes({\n stop_id: \'1234\',\n});\n```\n\n### GTFS-Ride Files\n\nSee the full [documentation of GTFS Ride](https://gtfsride.org).\n\n#### getBoardAlights(query, fields, sortBy, options)\n\nReturns an array of board_alight that match query parameters. [Details on board_alight.txt](http://gtfsride.org/specification#board_alighttxt)\n\n```js\nimport { getBoardAlights } from \'gtfs\';\n\n// Get all board_alight\nconst boardAlights = getBoardAlights();\n\n// Get board_alight for a specific trip\nconst boardAlights = getBoardAlights({\n trip_id: \'123\',\n});\n```\n\n#### getRideFeedInfos(query, fields, sortBy, options)\n\nReturns an array of ride_feed_info that match query parameters. [Details on ride_feed_info.txt](http://gtfsride.org/specification#ride_feed_infotxt)\n\n```js\nimport { getRideFeedInfos } from \'gtfs\';\n\n// Get all ride_feed_info\nconst rideFeedInfos = getRideFeedInfos();\n```\n\n#### getRiderTrips(query, fields, sortBy, options)\n\nReturns an array of rider_trip that match query parameters. [Details on rider_trip.txt](http://gtfsride.org/specification#rider_triptxt)\n\n```js\nimport { getRiderTrips } from \'gtfs\';\n\n// Get all rider_trip\nconst riderTrips = getRiderTrips();\n\n// Get rider_trip for a specific trip\nconst riderTrips = getRiderTrips({\n trip_id: \'123\',\n});\n```\n\n#### getRiderships(query, fields, sortBy, options)\n\nReturns an array of ridership that match query parameters. 
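Query and field selection work here as for the other tables. A hedged sketch, with column names assumed from the GTFS-Ride spec linked below:\n\n```js\nimport { getRiderships } from \'gtfs\';\n\n// Illustrative only: boardings and alightings for one route\nconst routeRiderships = getRiderships(\n { route_id: \'123\' }, // query\n [\'route_id\', \'total_boardings\', \'total_alightings\'], // fields (assumed column names)\n);\n```\n\n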
[Details on ridership.txt](http://gtfsride.org/specification#ridershiptxt)\n\n```js\nimport { getRiderships } from \'gtfs\';\n\n// Get all ridership\nconst riderships = getRiderships();\n\n// Get ridership for a specific route\nconst riderships = getRiderships({\n route_id: \'123\',\n});\n```\n\n#### getTripCapacities(query, fields, sortBy, options)\n\nReturns an array of trip_capacity that match query parameters. [Details on trip_capacity.txt](http://gtfsride.org/specification#trip_capacitytxt)\n\n```js\nimport { getTripCapacities } from \'gtfs\';\n\n// Get all trip_capacity\nconst tripCapacities = getTripCapacities();\n\n// Get trip_capacity for a specific trip\nconst tripCapacities = getTripCapacities({\n trip_id: \'123\',\n});\n```\n\n### Operational Data Standard (ODS) Files\n\n#### getDeadheads(query, fields, sortBy, options)\n\nReturns an array of deadheads that match query parameters. [Details on deadheads.txt](https://docs.calitp.org/operational-data-standard/spec/#deadheadstxt)\n\n```js\nimport { getDeadheads } from \'gtfs\';\n\n// Get all deadheads\nconst deadheads = getDeadheads();\n\n// Get deadheads for a specific block\nconst deadheads = getDeadheads({\n block_id: \'123\',\n});\n```\n\n#### getDeadheadTimes(query, fields, sortBy, options)\n\nReturns an array of deadhead_times that match query parameters. [Details on deadhead_times.txt](https://docs.calitp.org/operational-data-standard/spec/#deadhead_timestxt)\n\n```js\nimport { getDeadheadTimes } from \'gtfs\';\n\n// Get all deadhead_times\nconst deadheadTimes = getDeadheadTimes();\n\n// Get deadhead_times for a specific deadhead\nconst deadheadTimes = getDeadheadTimes({\n deadhead_id: \'123\',\n});\n```\n\n#### getOpsLocations(query, fields, sortBy, options)\n\nReturns an array of ops_locations that match query parameters. [Details on ops_locations.txt](https://docs.calitp.org/operational-data-standard/spec/#ops_locationstxt)\n\n```js\nimport { getOpsLocations } from \'gtfs\';\n\n// Get all ops_locations\nconst opsLocations = getOpsLocations();\n\n// Get a specific ops_location\nconst opsLocations = getOpsLocations({\n ops_location_id: \'123\',\n});\n```\n\n#### getRunsPieces(query, fields, sortBy, options)\n\nReturns an array of runs_pieces that match query parameters. [Details on runs_pieces.txt](https://docs.calitp.org/operational-data-standard/spec/#runs_piecestxt)\n\n```js\nimport { getRunsPieces } from \'gtfs\';\n\n// Get all runs_pieces\nconst runsPieces = getRunsPieces();\n```\n\n#### getRunEvents(query, fields, sortBy, options)\n\nReturns an array of run_events that match query parameters. [Details on run_events.txt](https://docs.calitp.org/operational-data-standard/spec/#run_eventstxt)\n\n```js\nimport { getRunEvents } from \'gtfs\';\n\n// Get all run_events\nconst runEvents = getRunEvents();\n\n// Get run_events for a specific piece\nconst runEvents = getRunEvents({\n piece_id: \'123\',\n});\n```\n\n### Other Non-standard GTFS Files\n\n#### getTripsDatedVehicleJourneys(query, fields, sortBy, options)\n\nReturns an array of trips_dated_vehicle_journey that match query parameters. This is for the non-standard `trips_dated_vehicle_journey.txt` file. 
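Since it is queried like any other table, specific columns can be filtered too. A hedged sketch (the `operating_day_date` column name is assumed from the Trafiklab extra-files page linked below):\n\n```js\nimport { getTripsDatedVehicleJourneys } from \'gtfs\';\n\n// Illustrative only: journeys for a single operating day\nconst journeysForDay = getTripsDatedVehicleJourneys({\n operating_day_date: \'20230101\', // assumed column name\n});\n```\n\n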
[Details on trips_dated_vehicle_journey.txt](https://www.trafiklab.se/api/trafiklab-apis/gtfs-regional/extra-files/)\n\n```js\nimport { getTripsDatedVehicleJourneys } from \'gtfs\';\n\n// Get all trips_dated_vehicle_journey\nconst tripsDatedVehicleJourneys = getTripsDatedVehicleJourneys();\n```\n\n### Advanced Query Methods\n\n#### advancedQuery(table, advancedQueryOptions)\n\nQueries the database with support for table joins and custom tables and returns an array of data.\n\n```js\nimport { advancedQuery } from \'gtfs\';\n\n// Example `advancedQuery` joining stop_times with trips.\nconst advancedQueryOptions = {\n query: {\n \'stop_times.trip_id\': tripId,\n },\n fields: [\'stop_times.trip_id\', \'arrival_time\'],\n join: [\n {\n type: \'INNER\',\n table: \'trips\',\n on: \'stop_times.trip_id=trips.trip_id\',\n },\n ],\n};\n\nconst stoptimes = advancedQuery(\'stop_times\', advancedQueryOptions);\n```\n\n#### Raw SQLite Query\n\nUse the `openDb` function to get the db object, and then use any query method from [better-sqlite3](https://github.com/WiseLibs/better-sqlite3) to query GTFS data.\n\n```js\nimport { openDb } from \'gtfs\';\nconst db = openDb(config);\n\n// Get a specific trip\nconst trip = db.prepare(\'SELECT * FROM trips WHERE trip_id = ?\').get(\'123\');\n\n// Get all stops\nconst stops = db.prepare(\'SELECT * from stops\').all();\n\n// Get all calendar_ids for specific date\nconst calendarIds = db\n .prepare(\n \'SELECT service_id from calendar WHERE start_date <= $date AND end_date >= $date\'\n )\n .all({ date: 20150101 });\n\n// Find all stops for route_id=18 by joining tables\nconst stopIds = db\n .prepare(\n \'SELECT DISTINCT stops.stop_id from stops INNER JOIN stop_times ON stops.stop_id = stop_times.stop_id INNER JOIN trips on trips.trip_id = stop_times.trip_id WHERE trips.route_id = ?\'\n )\n .all(\'18\');\n\n// Execute raw SQL\nconst sql = ""DELETE FROM trips where trip_id = \'329\'"";\ndb.exec(sql);\n```\n\n## Contributing\n\nPull requests are welcome, as is feedback and [reporting issues](https://github.com/blinktaginc/node-gtfs/issues).\n\n### Tests\n\nTo run tests:\n\n npm test\n\nTo run a specific test:\n\n NODE_ENV=test mocha ./test/mocha/gtfs.get-stoptimes.js\n\n### Linting\n\n npm run lint\n'",,"2012/02/13, 17:38:00",4272,MIT,75,759,"2023/09/19, 17:50:22",7,68,142,13,36,1,0.4,0.11527377521613835,"2023/08/23, 23:23:09",4.5.0,0,37,false,,false,false,,,https://github.com/BlinkTagInc,https://blinktag.com,"San Francisco, CA",,,https://avatars.githubusercontent.com/u/12618433?v=4,,, Public Transport Enabler,Unleash public transport data in your Java project.,schildbach,https://github.com/schildbach/public-transport-enabler.git,github,"java,library,public-transportation,navitia,hafas,efa",Mobility and Transportation,"2023/09/26, 21:23:42",359,0,37,true,Java,,,Java,https://groups.google.com/forum/#!forum/public-transport-enabler-discuss,"b'Public Transport Enabler\n========================\n\nThis is a Java library allowing you to get data from public transport providers.\nLook into [NetworkProvider.java](https://github.com/schildbach/public-transport-enabler/blob/master/src/de/schildbach/pte/NetworkProvider.java) for an overview of the API.\n\nUsing providers that require secrets\n------------------------------------\n\nFor some providers a secret like an API key is required to use their API.\nCopy the `secrets.properties.template` file to `secrets.properties` like so:\n\n $ cp test/de/schildbach/pte/live/secrets.properties.template 
test/de/schildbach/pte/live/secrets.properties\n\nYou need to request the secrets directly from the provider. For Navitia based providers, you can [request a secret here](https://www.navitia.io/register).\n\nHow to run live tests?\n----------------------\n\nMake sure the test you want to run does not require a secret; if it does, see above for how to get one.\nOnce you have the secret or if your provider does not need one, you can run the tests in your IDE.\nBoth IntelliJ and Eclipse have excellent support for JUnit tests.\n\nIf you prefer to run tests from the command line, you can comment out the test exclude at the end of\n[build.gradle](https://github.com/schildbach/public-transport-enabler/blob/master/build.gradle#L30)\nand use this command to only execute a test for a single provider:\n\n    $ gradle -Dtest.single=ParisProviderLive test\n\nThis uses the `ParisProvider` as an example.\nJust replace it with the provider you want to test.\n'",,"2013/05/27, 15:31:24",3803,GPL-3.0,40,2504,"2023/10/08, 08:37:33",254,137,284,10,17,31,1.2,0.06921555702043503,,,0,36,false,,false,false,,,,,,,,,,, osm2gtfs,Turn OpenStreetMap data and schedule information into GTFS.,grote,https://github.com/grote/osm2gtfs.git,github,"openstreetmap,gtfs,schedule,public-transportation,python",Mobility and Transportation,"2020/02/03, 07:14:37",94,0,13,true,Python,,,Python,,"b'osm2gtfs\n========\n\n[![Build Status](https://travis-ci.org/grote/osm2gtfs.svg?branch=master)](https://travis-ci.org/grote/osm2gtfs)\n\nUse public transport data from [OpenStreetMap](http://www.openstreetmap.org/)\nand external schedule information\nto create a General Transit Feed ([GTFS](https://developers.google.com/transit/gtfs/)).\n\nThe official source code repository is at [github.com/grote/osm2gtfs](https://github.com/grote/osm2gtfs).\n\nHow does it work?\n-----------------\n\nThe script retrieves current data about public transport networks directly from\nOpenStreetMap via the Overpass API. It stores the data in Python objects and\ncaches it on disk for efficient re-use. Then the data is combined with another\nsource of schedule (time) information in order to create a GTFS file using the\ntransitfeed library.\n\nFor every new city a new [configuration file](https://github.com/grote/osm2gtfs/wiki/Configuration)\nneeds to be created. Additionally, schedule information should be provided. By default the schedule information is expected to be provided in a [certain format](https://github.com/grote/osm2gtfs/wiki/Schedule). However, other formats are supported through extending the code. 
For any city and schedule format the script can be easily extended; see the\n[developer documentation](https://github.com/grote/osm2gtfs/wiki/Development)\nfor more information.\n\nIncluded cities\n-----------------\n\n* [Florian\xc3\xb3polis, Brazil](./osm2gtfs/creators/br_florianopolis/config.json)\n* [Suburban trains in Costa Rica](./osm2gtfs/creators/cr_gam/config.json)\n* [Accra, Ghana](./osm2gtfs/creators/gh_accra/readme.md)\n* [Managua, Ciudad Sandino](./osm2gtfs/creators/ni_managua/config.json) and [Estel\xc3\xad](./osm2gtfs/creators/ni_esteli/config.json) in Nicaragua\n* [Abidjan, Ivory Coast](./osm2gtfs/creators/ci_abidjan/README.md)\n\n*Soon, also in your city*\n\nInstall\n------------\n\nInstall by running\n\n    pip install -e .\n\nRequirements\n------------\nAutomatically installed by the previous step:\n* https://github.com/DinoTools/python-overpy\n* https://github.com/google/transitfeed\n\nUse\n------------\n\n    osm2gtfs -c <config-file>\n\nExample:\n\n    osm2gtfs -c osm2gtfs/creators/br_florianopolis/config.json\n\nLicense\n-------\n\n![GNU GPLv3 Image](https://www.gnu.org/graphics/gplv3-127x51.png)\n\nThis program is Free Software: You can use, study, share and improve it at your\nwill. Specifically you can redistribute and/or modify it under the terms of the\n[GNU General Public License](https://www.gnu.org/licenses/gpl.html) as\npublished by the Free Software Foundation, either version 3 of the License, or\n(at your option) any later version.\n'",,"2016/07/11, 21:07:57",2662,GPL-3.0,0,111,"2023/08/08, 17:57:31",32,83,134,2,78,3,1.5,0.6296296296296297,,,0,9,false,,false,false,,,,,,,,,,, Quetzal,A modeling library designed for transport planning and traffic forecasts.,systragroup,https://github.com/systragroup/quetzal.git,github,,Mobility and Transportation,"2023/10/23, 14:55:09",38,1,13,true,Jupyter Notebook,SYSTRA,systragroup,"Jupyter Notebook,Python,QML,HCL,Shell,Batchfile,Dockerfile,HTML",,"b'\n# quetzal\n## What is it?\n**quetzal** is a Python package providing flexible models for transport planning and traffic forecasting.\n## Copyright\n(c) SYSTRA\n## License\n[CeCILL-B](LICENSE.md)\n## Documentation\nThe official documentation is hosted on https://systragroup.github.io/quetzal\n## Backward compatibility\nIn order to improve the ergonomics, the code may be re-factored and a few method calls may be re-designed. As a consequence, the backward compatibility of the library is not guaranteed. Therefore, the version of quetzal used for a project should be specified in its requirements.\n## Installation from sources\nIt is preferred to first create and use a virtual environment.\n### For Linux\nOne should choose between Virtualenv and Pipenv or use Anaconda 3.\n\n#### Pipenv\n```bash\npipenv install\npipenv shell\npython -m ipykernel install --user --name=quetzal_env\n\n```\n\n#### Virtualenv\nVirtual environment: `virtualenv .venv -p python3.8; source .venv/bin/activate` or any equivalent command.\n\n```bash\npip install -e .\n```\n\n\n\n#### Anaconda\nIn order to use Python notebooks, Anaconda 3 + Python 3.8 must be installed.\nThen create and activate the quetzal environment:\n```bash\nconda init\nconda create -n quetzal_env -y python=3.8\nconda activate quetzal_env\npip install -e . -r requirements.txt\npython -m ipykernel install --user --name=quetzal_env\n```\n\n... Or use the `linus-install.sh` script.\n\n### For Windows\n`Anaconda 3 + Python 3.8` is assumed to be installed. 
You must edit the `Path` user environment variable, adding several folders where Anaconda is installed:\n- `path-to-anaconda3\\`\n- `path-to-anaconda3\\Scripts`\n- `path-to-anaconda3\\Library\\bin`\n- `path-to-anaconda3\\Library\\usr\\bin`\n\n#### PIP with Wheels (recommended)\n```bash\n(base) C:users\\you\\path\\to\\quetzal>windows-install-whl.bat\n```\nPress enter to accept the default environment name.\n#### PIP and Anaconda \nTo create quetzal_env automatically and install quetzal:\n```bash\n(base) C:users\\you\\path\\to\\quetzal> windows-install.bat\n```\nPress enter to accept the default environment name.\n#### If you are facing SSL issues\n```bash\n(base) pip config set global.trusted-host ""pypi.org files.pythonhosted.org""\n(base) C:users\\you\\path\\to\\quetzal> windows-install.bat\n```\nSecurity warning: the host is added to pip.ini.\n\n#### If you are facing DLL or dependencies issues\nAnaconda and Pip do not get along well; your Anaconda install may have been corrupted at some point.\n- Remove your envs\n- Uninstall Anaconda\n- Delete your Python and Anaconda folders (users\\you\\Anaconda3, users\\you\\Appdata\\Roaming\\Python, ...etc)\n- Install Anaconda \n'",,"2018/11/20, 08:30:37",1800,CUSTOM,121,673,"2023/09/26, 14:42:13",16,89,98,19,29,11,0.7,0.5426356589147288,"2021/01/29, 09:13:16",v2.0.0,0,9,false,,false,false,systragroup/quetzal_santo_domingo,,https://github.com/systragroup,https://www.systra.com,,,,https://avatars.githubusercontent.com/u/45179084?v=4,,, quetzal_germany,A four step transport model for Germany using the quetzal transport modeling suite.,marlinarnz,https://github.com/marlinarnz/quetzal_germany.git,github,,Mobility and Transportation,"2023/06/20, 15:39:40",5,0,2,true,Jupyter Notebook,,,Jupyter Notebook,,"b'# quetzal_germany\n\nThis open source project is a macroscopic passenger transport model for the region of Germany. It supports research aimed at designing an integrated, renewable energy system with mobility behaviour insights. The reference publication can be found here: https://doi.org/10.1186/s12544-022-00568-9\n\nIt uses the quetzal transport modelling framework: https://github.com/systragroup/quetzal\n\n## Structure\n\nThe method is oriented towards classical four-step transport modelling with emphasis on the demand side.\n\n![Structure of quetzal_germany](input/quetzal_germany_structure_chart.PNG ""Structure of quetzal_germany"")\n\nThe directory structure is as follows:\n> quetzal_germany/
\n> -- input/
\n> -- input_static/
\n> -- model/
\n> ---- base/
\n> ---- scenarioX/
\n> -- notebooks/
\n> ---- log/
\n> -- output/
\n> ---- base/
\n> ---- scenarioX/
\n\nWhile input and output data as well as (temporary) model files are stored in separate folders, Jupyter Notebooks contain all data management and modelling. Briefly, they are structured as follows (`X` as wildcard):\n* ``prep1X``: Generation of transport demand zones and all transport networks in high resolution\n* ``prep2X``: Aggregation of PT network graph and connection to transport demand sources and sinks\n* ``prep3X``: Calculation of shortest paths and enrichment with performance attributes for PT and cars, respectively\n* ``prep4X``: Data preparation for generation and destination choice models\n* ``calX``: Generation of calibration dataset and estimation of demand model parameters (only applicable with access to calibration data (see below))\n* ``model_generation_X``: Trip generation choice for non-compulsory trips either from exogenous data (MiD2017) or endogenously with a logit model\n* ``model_destination``: Destination choice for non-compulsory trips\n* ``model_volumes_X``: Generation of OD matrix either from exogenous data (VP2030) or endogenously from previous steps\n* ``model_mode``: Mode choice and calculation of composite cost for generation and destination choice\n* ``model_assignment``: Route assignment and results validation\n* ``model_inner-zonal``: Calculation of transport system indicators for inner-zonal traffic\n* ``post_X``: Calculation of emissions or energy demand for the entire passenger transport sector\n* ``00_launcher``: Automatically runs all preparation and modelling steps in order\n* ``00_test_environment``: Run it to see whether your virtual environment is properly set up\n* ``val_X``: Various validation notebooks for model results\n\nAll scenario parameters are saved in the `input/parameters.xls` file. Other malleable input files are located in the same folder, while unchanged input data sits in `input_static/`.\n\n## Installation\n\n1. Create a virtual environment for quetzal models. Choose one of these methods:\n + Either clone the quetzal package into a local folder and create a virtual environment as described here[^1]: https://github.com/systragroup/quetzal\n + Or create a virtual environment manually with all dependencies using conda:\n * Create an environment with the desired Python version, e.g.: `conda create -n quetzal python=3.10`\n\t * Activate this environment (`conda activate quetzal`)\n\t * Install necessary dependencies: `conda install -c conda-forge geopandas contextily osmnx geopy rtree notebook matplotlib xlsxwriter cython numba scikit-learn scipy xlrd tqdm ray-default pytables sqlalchemy openpyxl==3.0.7`\n\t * Install quetzal dependencies that are not available on conda for Python > 3.6: `pip install simpledbf biogeme`\n\t * Clone the quetzal repository to a desired location (`git clone https://github.com/systragroup/quetzal`) and install it as a development version with `pip install -e quetzal/`\n2. Activate your quetzal environment, if not done yet\n3. Clone this repository into a local folder: In your terminal, navigate to the position where you want to store the code. Type `git clone https://github.com/marlinarnz/quetzal_germany.git`. Navigate into the folder `quetzal_germany`.\n4. Download static input files from Zenodo[^2] into a folder named `input_static/` within the `quetzal_germany` repository (see directory structure): [![DOI](https://zenodo.org/badge/DOI/10.5281/zenodo.4518680.svg)](https://doi.org/10.5281/zenodo.4518680)\n5. Open the local project in Jupyter Notebook (in your terminal type `jupyter notebook`) and start running the notebooks\n6. 
*OPTIONAL* Install the car ownership module: Download the module from Zenodo into a folder named `car_ownership`, just as done with `input_static/`: [![DOI](https://zenodo.org/badge/DOI/10.5281/zenodo.7809061.svg)](https://doi.org/10.5281/zenodo.7809061)\n\nYou can test your virtual environment by running the `00_test_environment` notebook. Read potential error messages and install the packages concerned using conda, or refer to the [quetzal issues](https://github.com/systragroup/quetzal/issues) page to see if this error has occurred before.\n\n[^1]: If you face problems importing geopandas, consider uninstalling package `rtree` and reinstalling a version up to 0.9.3 (`conda install -c conda-forge rtree=0.9.3`) or browsing through quetzal [issues](https://github.com/systragroup/quetzal/issues) for discussions on installation.\n[^2]: Large input data files are not hosted in this very repository, as they require different handling in git and licensing.\n\n## First model run\n\nYou can execute your first model run from the `00_launcher` notebook by running only `prep10`, `prep3X` and `model_X` notebooks in the order provided there. Note: If you want to run the model on a laptop with at least 8GB RAM, you should sparsify the OD set or use the NUTS3-level zoning system. Please read the corresponding section in `00_launcher`. Otherwise, you\'ll need 80GB RAM.\n\nThis repository (together with static input data) contains road and public transport (incl. air) networks ready for simulation, and estimation results for the demand models. It also includes OSM data from `prep4X` notebooks. Thus, you can simply generate the level-of-service (LoS)-shortest-paths-stack and then run the classic transport modelling steps. However, the order shifts, as you need the composite cost from mode choice in generation and destination choice steps (see order in `00_launcher`).\n\nRunning notebooks with the `00_launcher` generates log files for printed output and error messages, respectively. These can be found under `notebooks/log/`.\n\nYou can adjust all assumptions in the `parameters.xls` file, if you want to simulate an alternative transport system (see Scenarios).\n\nDetailed descriptions of what the notebooks do can be found as comments. Briefly: Your StepModel object (always abbreviated with `sm`) is where the magic happens. It saves all tables as attributes (pandas `DataFrame`s) and provides all transport modelling specific functions from the quetzal library. Quetzal provides wrapper functions for classic steps in aggregated transport modelling (trip generation, assignment, etc.), which execute a set of more specific functions. Due to a higher degree of customisation, this model uses quetzal\'s specific functions in many places.\n\n## Results\n\nResults of the transport model are computed in `model_assignment`. If you have access to validation data, this notebook will also visualise validation plots. Files and figures are saved under the respective scenario name in `output`.\n\nSome relevant results are aggregated numbers that are printed in the corresponding model step (e.g. average yearly mileage per car in `model_assignment`). These statements can be found in the output log file under `/notebooks/log/out__.txt`\n\n## Scenarios\n\nYou can define your own scenarios ""the quetzal way"": Open the `parameters.xls` file and add a new column with your scenario name. Name it under ""general/description"" and refer to ""base"" as a ""general/parent"" scenario. 
All values which you don\'t change in your new column are taken from the parent column.\n\nYou can now adjust parameters and run the model with new values. To do so, either use the `00_launcher`, typing your scenario name (column name in `parameters.xls`) into the list of scenarios (fourth cell); all scenario names in this list will be executed in parallel. The other option is to run the notebooks manually, setting the variable `scenario` to your scenario name (very first cell).\n\nIf you installed the car ownership module, you can compute changed car availabilities by running the correspondingly named notebook from the launcher (or manually). Further details are given in the notebook and in the Readme of the car ownership module.\n\n## Network generation and example for custom region\n\nNotebook `prep10` creates the four step model (`sm`) with a zones table that you specify. By default, it contains all NUTS3 zones of Germany, but you can limit it to the desired region or refine it with higher resolution data. The disaggregated notebook uses an aggregation of ""Gemeindeverband""-zones, which constitute the default model.\n\nNotebooks `prep11` to `prep14` create road and PT networks from OpenStreetMap and German-wide GTFS feeds, respectively. They will be saved in `sm.road_links`/`sm.road_nodes` and `sm.links`/`sm.nodes`, respectively. Additionally, a list `sm.pt_route_types` is created. Make sure you uncomment the cell where you spatially restrict the network graph, if you want a smaller region. Notebook `prep15` creates distances from all population points in the latest census to your PT stops (make sure to spatially restrict this one too). This data is used to parametrise PT access and egress links between zones\' demand centroids and transport networks.\n\nNotebooks `prep2X` aggregate your network and create access/egress links between zones\' demand centroids and the PT stops or road nodes, respectively. There are two methods used for PT network aggregation, which is necessary in order to reduce computation time for path finders and all other methods:\n* Clustering short-distance stops\n* Aggregation of PT network to relevant trips and stops with simultaneous connection to zone centroids (size and quality of the network depend on your definition of \'relevant\')\n* Subsequently, the road network gets connected\n\nThe rest is straightforward, following the notebooks\' comments, and should work for every self-defined region with minor adjustments. In the \'save\' cell(s) at the end of each notebook you find all DataFrames (as `sm`\'s attributes) that will be relevant in later steps. 
One additional attribute is always present: `sm.epsg`, which defines the coordinate reference system.\n\n## Data accessibility\n\nThis repository together with externally hosted data packages contains all openly licensed data sources which are necessary for transport modelling in Germany.\n\nHowever, for estimating calibration parameters anew (beyond those given in `input/`), you need a German-wide mobility survey with trips on ""Gemeindeverband""-level: ""[Mobilit\xc3\xa4t in Deutschland 2017](http://www.mobilitaet-in-deutschland.de/) B2"" (MiD2017).\n\nOptionally, you can generate the origin destination matrix exogenously, using origin destination matrices from the underlying model of the German federal government\'s transport study ""[Bundesverkehrswegeplan 2030](https://www.bmvi.de/SharedDocs/DE/Artikel/G/BVWP/bundesverkehrswegeplan-2030-inhalte-herunterladen.html)"".\n\nYou can apply for access to both data sets using the national [Clearing House Transport order form](https://daten.clearingstelle-verkehr.de/order-form.html). All `csv` and `xlsx` data tables go into the folder `input/transport_demand`, which is added to the `.gitignore`.\n\n'",",https://doi.org/10.1186/s12544-022-00568-9\n\nIt,https://doi.org/10.5281/zenodo.4518680,https://doi.org/10.5281/zenodo.7809061","2021/02/19, 15:12:46",978,MIT,144,223,"2023/09/26, 14:42:13",0,0,0,0,29,0,0,0.024691358024691357,"2023/02/27, 06:47:06",v2.1.0,0,2,false,,false,false,,,,,,,,,,, OpenMobility,Driving the Evolution and Broad Adoption of Open Source Mobility Modeling and Simulation Technologies.,,,custom,,Mobility and Transportation,,,,,,,,,,https://openmobility.eclipse.org/,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,, NoiseModelling,A free and open source model to compute noise maps.,Ifsttar,https://github.com/Universite-Gustave-Eiffel/NoiseModelling.git,github,"propagation,acoustics,gis,modelling,noise,java,cnossos-eu,noise-modelling,pathfinder,pathfinding,research,sound-propagation,noise-propagation",Mobility and Transportation,"2023/09/05, 12:57:19",135,0,39,true,Java,Université Gustave Eiffel,Universite-Gustave-Eiffel,"Java,Groovy,CSS,JavaScript,Python,HTML,Shell,Batchfile",https://noisemodelling.readthedocs.io ,"b'[![CI](https://github.com/Universite-Gustave-Eiffel/NoiseModelling/actions/workflows/CI.yml/badge.svg?branch=4.X)](https://github.com/Universite-Gustave-Eiffel/NoiseModelling/actions/workflows/CI.yml)\n[![License: GPL v3](https://img.shields.io/badge/License-GPLv3-blue.svg)](https://www.gnu.org/licenses/gpl-3.0)\n[![Documentation Status](https://readthedocs.org/projects/noisemodelling/badge/?version=latest)](https://noisemodelling.readthedocs.io/en/latest/?badge=latest)\n[![GitHub release](https://img.shields.io/github/release/Universite-Gustave-Eiffel/NoiseModelling)](https://github.com/Universite-Gustave-Eiffel/NoiseModelling/releases/)\n\n\n\nNoiseModelling\n======\n\nNoiseModelling is a library capable of producing noise maps. 
\nIt can be freely used for research and education, as well as by experts for professional use.\n\nA general overview of the model (v3.4.5 - September 2020) can be found in [this video](https://www.youtube.com/watch?v=V1-niMT9cYE&t=1s).\n\n* for **more information** on NoiseModelling, [visit the official NoiseModelling website](https://noise-planet.org/noisemodelling.html)\n* to **contribute to NoiseModelling** source code, follow the [""Get Started Dev""](https://noisemodelling.readthedocs.io/en/latest/Get_Started_Dev.html) page\n* to **contact the support / development team**, \n - open an issue: https://github.com/Universite-Gustave-Eiffel/NoiseModelling/issues or write a message: https://github.com/Universite-Gustave-Eiffel/NoiseModelling/discussions *(we prefer these two options)*\n - send us an email at ``contact@noise-planet.org``\n* follow us on Twitter @Noise_Planet [![Twitter Follow](https://img.shields.io/twitter/follow/noise_planet.svg?style=social&label=Follow)](https://twitter.com/Noise_Planet?lang=en)\n\nStable release\n---------------------------\n\nThe current stable version is v4.0.0 (see the [latest release](https://github.com/Universite-Gustave-Eiffel/NoiseModelling/releases/latest) page)\n\n\nDocumentation\n---------------------------\n\nOnline documentation is available [here](https://noisemodelling.readthedocs.io/en/latest/).\n\nAuthors\n---------------------------\n\nThe NoiseModelling project is led by acousticians from the *Joint Research Unit in Environmental Acoustics* ([UMRAE](https://www.umrae.fr/), Universit\xc3\xa9 Gustave Eiffel - Cerema) and Geographic Information Science specialists from [Lab-STICC](https://labsticc.fr) laboratory (CNRS - DECIDE Team).\n\nThe NoiseModelling team owns the majority of the authorship of this application, but any external contributions are warmly welcomed.\n\nLicence\n---------------------------\n\nNoiseModelling and its documentation are distributed for free under [GPL v3](https://noisemodelling.readthedocs.io/en/latest/License.html). \n\nPublications\n---------------------------\n\nNoiseModelling was initially developed in a research context, which has led to numerous scientific publications. For more information, have a look at the [""Scientific production""](https://noisemodelling.readthedocs.io/en/v4.0.0/Scientific_production.html) page. \n\nTo cite this tool, please use the following bibliographic reference: Erwan Bocher, Gwena\xc3\xabl Guillaume, Judica\xc3\xabl Picaut, Gwendall Petit, Nicolas Fortin. *NoiseModelling: An Open Source GIS Based Tool to Produce Environmental Noise Maps*. ISPRS International Journal of Geo-Information, MDPI, 2019, 8 (3), pp.130. 
- [10.3390/ijgi8030130](https://www.mdpi.com/2220-9964/8/3/130)\n\nFundings\n---------------------------\n\n*Research projects:*\n- ANR [Eval-PDU](https://anr.fr/Projet-ANR-08-VILL-0005) (ANR-08-VILL-0005) 2008-2011\n- ANR [Veg-DUD](https://anr.fr/Projet-ANR-09-VILL-0007) (ANR-09-VILL-0007) 2009-2014\n- ANR [CENSE](https://anr.fr/Projet-ANR-16-CE22-0012) (ANR-16-CE22-0012) 2017-2021\n- [Nature4cities](https://www.nature4cities.eu/) (N4C) project, funded by European Union\xe2\x80\x99s Horizon 2020 research and innovation programme under grant agreement No 730468\n- [PlaMADE](https://www.cerema.fr/fr/projets/plamade-plate-forme-mutualisee-aide-au-diagnostic) 2020-2022\n\n*Institutional (public) fundings:*\n- [Universit\xc3\xa9 Gustave Eiffel](https://www.univ-gustave-eiffel.fr/) (formerly Ifsttar, formerly LCPC), [CNRS](https://www.cnrs.fr), [Cerema](https://www.cerema.fr/), [Universit\xc3\xa9 Bretagne Sud](https://www.univ-ubs.fr/), [Ecole Centrale de Nantes](https://www.ec-nantes.fr/)\n\n*Private fundings:*\n- Airbus Urban Mobility\n\n'",,"2012/07/12, 11:21:06",4122,GPL-3.0,90,2495,"2023/09/25, 13:22:07",62,307,495,52,30,0,0.0,0.5255172413793103,"2023/09/05, 09:36:07",v4.0.5,0,16,false,,true,false,,,https://github.com/Universite-Gustave-Eiffel,https://www.univ-gustave-eiffel.fr/,France,,,https://avatars.githubusercontent.com/u/7008647?v=4,,, NoiseCapture,Android App dedicated to the measurement of environmental noise.,Ifsttar,https://github.com/Universite-Gustave-Eiffel/NoiseCapture.git,github,"measurements,noise-maps,collaborative-science,android",Mobility and Transportation,"2022/07/28, 07:59:00",81,0,19,true,Java,Université Gustave Eiffel,Universite-Gustave-Eiffel,"Java,Groovy,HTML,CSS,JavaScript,Scheme,Python,MATLAB",http://noise-planet.org/fr/noisecapture.html,"b'# About NoiseCapture App\n\n[![Build Status](https://travis-ci.org/Ifsttar/NoiseCapture.svg?branch=master)](https://travis-ci.org/Ifsttar/NoiseCapture) \n\n**NoiseCapture App** is an Android App dedicated to the measurement of environmental noise.\n\n## Description\n**NoiseCapture App** is an Android App project for measuring environmental noise using a smartphone. The goal is to **produce relevant noise indicators from audio measurements, including a geospatial representation**. Measurements can be shared with the community in order to produce participatory noise maps. **NoiseCapture App** is a component of a global infrastructure, _i.e._ a Spatial Data Infrastructure (SDI), called the **OnoMap SDI**, which allows processing and representing geospatial information, such as noise maps.\n\n* A [**full description**](https://github.com/Ifsttar/NoiseCapture/wiki) of the whole OnoMap SDI, including the NoiseCapture App, is given in the [wiki pages](https://github.com/Ifsttar/NoiseCapture/wiki).\n* A **user guide** for the NoiseCapture App is provided within the App itself (see the \'Help\' page in the menu of NoiseCapture App).\n\n## Features\n\nNoiseCapture App features are divided into 3 parts:\n\n - Measurement - Once the sound level calibration is done, the user starts the measurement, which records each second the LAeq, the average sound energy over a period of 1 s. The spectral content of the sound is analysed and stored using the Fourier transform. The device location is recorded while measuring the sound level. 
The user can also provide their own feedback about how they perceive the noise environment.\n\n - Extended report - Advanced statistics are computed locally on the phone and shown to the user. For each of the user\'s measurements, the locations of the noise levels are displayed on a map.\n\n - Share results with the community - Anonymous results are transferred to Virtual Hubs (web server) and post-processed in order to build a noise map that merges all community results. Participative noise maps can be displayed within the NoiseCapture App, or online at https://onomap.noise-planet.org/.\n\n## Developments\nNoiseCapture App is a collaboration between the [Environmental Acoustic Research unit](http://www.umrae.fr/en/) ([Ifsttar](http://www.ifsttar.fr)) and the [Lab-STICC](http://www.lab-sticc.fr/) CNRS. If you need more information about the projects developed by the Environmental Acoustic Research unit and the Lab-STICC on this topic, go to [http://www.noise-planet.org](http://noise-planet.org).\n\n## Download\n\n[](https://f-droid.org/packages/org.noise_planet.noisecapture/)\n[](https://play.google.com/store/apps/details?id=org.noise_planet.noisecapture)\n\n## Funding\nThis application was developed under the initial funding of the European project [ENERGIC-OD](http://www.energic-od.eu/), with the help of the [GEOPAL](http://www.geopal.org/accueil) program.\n\n## License\nNoiseCapture App is released under the GENERAL PUBLIC LICENSE Version 3. Please refer to GPLv3 for more details.\n\n## Follow us\nFollow the development of NoiseCapture App (and more...) on Twitter at @Noise_Planet\n[![Twitter URL](https://img.shields.io/twitter/url/http/shields.io.svg?style=social)](https://twitter.com/Noise_Planet)\n\n'",,"2015/03/25, 14:45:36",3136,GPL-3.0,0,1002,"2023/09/25, 09:06:05",67,178,325,23,30,1,0.0,0.15000000000000002,"2022/10/27, 12:31:07",v1.2.23,0,8,false,,false,false,,,https://github.com/Universite-Gustave-Eiffel,https://www.univ-gustave-eiffel.fr/,France,,,https://avatars.githubusercontent.com/u/7008647?v=4,,, bikedata,Aims to enable ready importing of historical trip data from all public bicycle hire systems which provide data. 
Will be expanded on an ongoing basis as more systems publish open data.,ropensci,https://github.com/ropensci/bikedata.git,github,"bicycle-hire-systems,r,rstats,bike-hire,bicycle-hire,database,bike-data,r-package,peer-reviewed",Mobility and Transportation,"2022/03/23, 14:55:25",79,0,2,false,R,rOpenSci,ropensci,"R,C++,C,TeX,Makefile,Shell",https://docs.ropensci.org/bikedata,"b'\n\n[![R build\nstatus](https://github.com/ropensci/bikedata/workflows/R-CMD-check/badge.svg)](https://github.com/ropensci/bikedata/actions?query=workflow%3AR-CMD-check)\n[![codecov](https://codecov.io/gh/ropensci/bikedata/branch/master/graph/badge.svg)](https://codecov.io/gh/ropensci/bikedata)\n[![Project Status:\nActive](http://www.repostatus.org/badges/latest/active.svg)](https://www.repostatus.org/)\n[![CRAN_Status_Badge](https://www.r-pkg.org/badges/version/bikedata)](https://cran.r-project.org/package=bikedata)\n[![CRAN\nDownloads](https://cranlogs.r-pkg.org/badges/grand-total/bikedata?color=orange)](https://cran.r-project.org/package=bikedata)\n[![](http://badges.ropensci.org/116_status.svg)](https://github.com/ropensci/software-review/issues/116)\n[![status](https://joss.theoj.org/papers/10.21105/joss.00471/status.svg)](https://joss.theoj.org/papers/10.21105/joss.00471)\n\nThe `bikedata` package aims to enable ready importing of historical trip\ndata from all public bicycle hire systems which provide data, and will\nbe expanded on an ongoing basis as more systems publish open data.\nCities and names of associated public bicycle systems currently\nincluded, along with numbers of bikes and of docking stations (from\n[wikipedia](https://en.wikipedia.org/wiki/List_of_bicycle-sharing_systems#Cities)),\nare\n\n| City | Hire Bicycle System | Number of Bicycles | Number of Docking Stations |\n|--------------------------------|-----------------------------------------------------------------------|--------------------|----------------------------|\n| London, U.K. | [Santander Cycles](https://tfl.gov.uk/modes/cycling/santander-cycles) | 13,600 | 839 |\n| San Francisco Bay Area, U.S.A. | [Ford GoBike](https://www.fordgobike.com/) | 7,000 | 540 |\n| New York City NY, U.S.A. | [citibike](https://www.citibikenyc.com/) | 7,000 | 458 |\n| Chicago IL, U.S.A. | [Divvy](https://www.divvybikes.com/) | 5,837 | 576 |\n| Montreal, Canada | [Bixi](https://bixi.com/) | 5,220 | 452 |\n| Washington DC, U.S.A. | [Capital BikeShare](https://www.capitalbikeshare.com/) | 4,457 | 406 |\n| Guadalajara, Mexico | [mibici](https://www.mibici.net/) | 2,116 | 242 |\n| Minneapolis/St Paul MN, U.S.A. | [Nice Ride](https://www.niceridemn.com/) | 1,833 | 171 |\n| Boston MA, U.S.A. | [Hubway](https://www.bluebikes.com/) | 1,461 | 158 |\n| Philadelphia PA, U.S.A. | [Indego](https://www.rideindego.com) | 1,000 | 105 |\n| Los Angeles CA, U.S.A. | [Metro](https://bikeshare.metro.net/) | 1,000 | 65 |\n\nThese data include the places and times at which all trips start and\nend. Some systems provide additional demographic data including years of\nbirth and genders of cyclists. The list of cities may be obtained with\nthe `bike_cities()` function, and details of which systems include demographic\ndata with `bike_demographic_data()`.\n\nThe following provides a brief overview of package functionality. 
For\nmore detail, see the\n[vignette](https://docs.ropensci.org/bikedata/articles/bikedata.html).\n\n------------------------------------------------------------------------\n\n## 1 Installation\n\nCurrently only a development version is available, which can be installed with the\nfollowing command,\n\n``` r\ndevtools::install_github(""ropensci/bikedata"")\n```\n\nand then loaded the usual way\n\n``` r\nlibrary (bikedata)\n```\n\n## 2 Usage\n\nData may be downloaded for a particular city and stored in an `SQLite3`\ndatabase with the simple command,\n\n``` r\nstore_bikedata (city = \'nyc\', bikedb = \'bikedb\', dates = 201601:201603)\n# [1] 2019513\n```\n\nwhere the `bikedb` parameter provides the name for the database, and the\noptional argument `dates` can be used to specify a particular range of\ndates (Jan-March 2016 in this example). The `store_bikedata` function\nreturns the total number of trips added to the specified database. The\nprimary objects returned by the `bikedata` package are \xe2\x80\x98trip matrices\xe2\x80\x99\nwhich contain aggregate numbers of trips between each pair of stations.\nThese are extracted from the database with:\n\n``` r\ntm <- bike_tripmat (bikedb = \'bikedb\')\ndim (tm); format (sum (tm), big.mark = \',\')\n```\n\n #> [1] 518 518\n #> [1] ""2,019,513""\n\nDuring the specified time period there were just over 2 million trips\nbetween 518 bicycle docking stations. Note that the associated databases\ncan be very large, particularly in the absence of `dates` restrictions,\nand extracting these data can take quite some time.\n\nData can also be aggregated as daily time series with\n\n``` r\nbike_daily_trips (bikedb = \'bikedb\')\n```\n\n #> # A tibble: 87 x 2\n #> date numtrips\n #> \n #> 1 2016-01-01 11172\n #> 2 2016-01-02 14794\n #> 3 2016-01-03 15775\n #> 4 2016-01-04 19879\n #> 5 2016-01-05 18326\n #> 6 2016-01-06 24922\n #> 7 2016-01-07 28215\n #> 8 2016-01-08 29131\n #> 9 2016-01-09 21140\n #> 10 2016-01-10 14481\n #> # \xe2\x80\xa6 with 77 more rows\n\nA summary of all data contained in a given database can be produced as\n\n``` r\nbike_summary_stats (bikedb = \'bikedb\')\n#> num_trips num_stations first_trip last_trip latest_files\n#> ny 2019513 518 2016-01-01 00:00 2016-03-31 23:59 FALSE\n```\n\nThe final field, `latest_files`, indicates whether the files in the\ndatabase are up to date with the latest published files.\n\n### 2.1 Filtering trips by dates, times, and weekdays\n\nTrip matrices can be constructed for trips filtered by dates, days of\nthe week, times of day, or any combination of these. The temporal extent\nof a `bikedata` database is given in the above `bike_summary_stats()`\nfunction, or can be directly viewed with\n\n``` r\nbike_datelimits (bikedb = \'bikedb\')\n```\n\n #> first last \n #> ""2016-01-01 00:00"" ""2016-03-31 23:59""\n\nAdditional temporal arguments which may be passed to the `bike_tripmat`\nfunction include `start_date`, `end_date`, `start_time`, `end_time`, and\n`weekday`. Dates and times may be specified in almost any format, but\nlarger units must always precede smaller units (so years before months\nbefore days; hours before minutes before seconds). 
The following\nexamples illustrate the variety of acceptable formats for these\narguments.\n\n``` r\ntm <- bike_tripmat (\'bikedb\', start_date = ""20160102"")\ntm <- bike_tripmat (\'bikedb\', start_date = 20160102, end_date = ""16/02/28"")\ntm <- bike_tripmat (\'bikedb\', start_time = 0, end_time = 1) # 00:00 - 01:00\ntm <- bike_tripmat (\'bikedb\', start_date = 20160101, end_date = ""16,02,28"",\n start_time = 6, end_time = 24) # 06:00 - 23:59\ntm <- bike_tripmat (\'bikedb\', weekday = 1) # 1 = Sunday\ntm <- bike_tripmat (\'bikedb\', weekday = c(\'m\', \'Th\'))\ntm <- bike_tripmat (\'bikedb\', weekday = 2:6,\n start_time = ""6:30"", end_time = ""10:15:25"")\n```\n\n### 2.2 Filtering trips by demographic characteristics\n\nTrip matrices can also be filtered by demographic characteristics\nthrough specifying the three additional arguments of `member`, `gender`,\nand `birth_year`. `member = 0` is equivalent to `member = FALSE`, and\n`1` equivalent to `TRUE`. `gender` is specified numerically such that\nvalues of `2`, `1`, and `0` respectively translate to female, male, and\nunspecified. The following lines demonstrate this functionality\n\n``` r\nsum (bike_tripmat (\'bikedb\', member = 0))\nsum (bike_tripmat (\'bikedb\', gender = \'female\'))\nsum (bike_tripmat (\'bikedb\', weekday = \'sat\', birth_year = 1980:1990,\n gender = \'unspecified\'))\n```\n\n### 3. Citation\n\n``` r\ncitation (""bikedata"")\n#> \n#> To cite bikedata in publications use:\n#> \n#> Mark Padgham, Richard Ellison (2017). bikedata Journal of Open Source Software, 2(20). URL\n#> https://doi.org/10.21105/joss.00471\n#> \n#> A BibTeX entry for LaTeX users is\n#> \n#> @Article{,\n#> title = {bikedata},\n#> author = {Mark Padgham and Richard Ellison},\n#> journal = {The Journal of Open Source Software},\n#> year = {2017},\n#> volume = {2},\n#> number = {20},\n#> month = {Dec},\n#> publisher = {The Open Journal},\n#> url = {https://doi.org/10.21105/joss.00471},\n#> doi = {10.21105/joss.00471},\n#> }\n```\n\n### 4. Code of Conduct\n\nPlease note that this project is released with a [Contributor Code of\nConduct](https://ropensci.org/code-of-conduct/). By contributing to this\nproject you agree to abide by its terms.\n\n[![ropensci_footer](https://ropensci.org//public_images/github_footer.png)](https://ropensci.org/)\n'",",https://doi.org/10.21105/joss.00471\n#,https://doi.org/10.21105/joss.00471","2016/12/12, 16:14:07",2508,GPL-3.0,0,678,"2022/02/01, 12:13:51",19,15,85,0,631,0,0.0,0.04173106646058733,"2019/05/09, 13:38:38",v0.2.3,0,9,false,,false,true,,,https://github.com/ropensci,https://ropensci.org/,"Berkeley, CA",,,https://avatars.githubusercontent.com/u/1200269?v=4,,, CyclOSM,A CartoCSS map style designed with cycling in mind.,cyclosm,https://github.com/cyclosm/cyclosm-cartocss-style.git,github,"osm,cartocss,cycling,openstreetmap,map,tiles",Mobility and Transportation,"2023/08/01, 10:50:33",203,0,34,true,CartoCSS,CyclOSM,cyclosm,"CartoCSS,Python,JavaScript,Shell,Dockerfile",https://www.cyclosm.org,"b'CyclOSM\n=======\n\nCyclOSM is a [CartoCSS](https://carto.com/developers/styling/cartocss/) map style\ndesigned with cycling in mind. 
It leverages\n[OpenStreetMap](https://www.openstreetmap.org/) data to create a beautiful and\npractical cycling map!\n\n[![Build Status](https://api.travis-ci.org/cyclosm/cyclosm-cartocss-style.svg?branch=master)](https://travis-ci.org/cyclosm/cyclosm-cartocss-style)\n\n[![CyclOSM](https://www.cyclosm.org/images/social_media.png)](https://www.cyclosm.org/)\n\n\n## Demonstration\n\nA demonstration of this style is available at [https://cyclosm.org](https://cyclosm.org).\n\nThe tile server url is\n`https://{s}.tile-cyclosm.openstreetmap.fr/cyclosm/{z}/{x}/{y}.png`. Tiles can\nbe reused under the general OpenStreetMap [tile usage\npolicy](https://operations.osmfoundation.org/policies/tiles/).\n\nThe map is available by default in the following smartphone applications:\n- [OSMAnd](https://osmand.net/)\n- [OpenMultiMaps](https://framagit.org/tom79/openmaps)\n- [All-In-One Offline Maps](https://www.offline-maps.net) and [AlpineQuest Rando GPS](https://alpinequest.net)\n\nThe tile server is provided by\n[OpenStreetMap-France](https://www.openstreetmap.fr), many thanks to them for\nthe support!\n\n## Philosophy\n\nCyclOSM is a new cycle-oriented renderer. Contrary to\n[OpenCycleMap](http://opencyclemap.org/), this renderer is free and open-source\nsoftware and aims to be more complete, taking into account a wider\ndiversity of cycling habits.\n\nIn urban areas, it renders the main different types of cycle tracks and lanes,\non each side of the road, to help you plan your bike-to-work route. It also\nfeatures essential POIs, bicycle parking spots (or spots shared with\nmotorbikes), specific infrastructure (elevators / ramps), road speeds and\nsurfaces (to help avoid streets with pavings and bumps), bike boxes, etc.\n\nThe same renderer also lets you visualize main bicycle touring routes as well as\nessential POIs when touring (emergency services, shelters, tourism, shops).\n\n\n## Features\n\nRendered features:\n\n* Cycleway tracks, lanes, cycle-bus lanes\n* Motor one-way roads that are two-way for bicycles\n* Cycle routes (local, regional, national, international)\n* Parking for bicycles (or motorcycle parking open to bicycles)\n* Steps with bicycle-friendly ramps\n* Bicycle shops and repair stations\n* First aid amenities: shelter, hospital, pharmacy, police station, water, food store\n* Travel amenities: camping, hotel, train station, museum, picnic table, peaks...\n* Emphasis on low-speed roads (<= 30km/h)\n* Elevation contours and shading\n* Smoothness of the roads\n* Traffic calming\n* \xe2\x80\xa6\n\nA full list of rendered features is available in [the\nlegend](https://www.cyclosm.org/legend.html).\n\nA list of the tags considered by this renderer is available in [Taginfo JSON\nformat](https://wiki.openstreetmap.org/wiki/Taginfo/Projects) in [`taginfo.json`](taginfo.json).\n\nSome extra information about the way OSM tags are rendered is available in\n[the wiki](https://github.com/cyclosm/cyclosm-cartocss-style/wiki/Tag-to-Render).\n\n\n## Getting started\n\nGetting started instructions are available in the [`docs/INSTALL.md`](docs/INSTALL.md) file.\n\n\n## Printing\n\nInstructions for printing maps with a CyclOSM rendering are available in\nthe [`docs/PRINT.md`](docs/PRINT.md) file.\n\n\n## Contributing\n\nSome getting started information for contributing is available in the\n[`CONTRIBUTING.md`](CONTRIBUTING.md) file.\n\n\n## Changelog\n\nChanges to this theme are listed in the [`CHANGELOG.md`](CHANGELOG.md) file.\nVersions are tagged with Git tags and are available through the GitHub releases\nfeature.\n\n\n## MapCSS validators
\n\nWe also offer some MapCSS checkers for bicycle tags, which can be used with\n[JOSM](https://josm.openstreetmap.de/wiki/Help/Preferences/Validator) for\ninstance, in the [`validator`](validator) folder of this repository.\n\n\n## Licenses\n\nSee the [`LICENSE.md`](LICENSE.md) file.\n\n## Links\n\n* http://www.cyclosm.org, official website.\n* http://www.cyclosm.org/legend.html, full detailed key.\n* https://wiki.openstreetmap.org/wiki/CyclOSM, wiki page on the OSM wiki.\n* A list of the tags considered by CyclOSM is available in [Taginfo JSON format](https://wiki.openstreetmap.org/wiki/Taginfo/Projects) in [`taginfo.json`](https://github.com/cyclosm/cyclosm-cartocss-style/blob/master/taginfo.json).\n\n\n## Related projects\n\n* An unofficial Docker image to deploy a CyclOSM tile server is available at https://github.com/mhajder/openstreetmap-tile-server-cyclosm.\n* A gravel-oriented fork from CxBerlin is available at https://github.com/cxberlin/gravel-cartocss-style\n* A high-quality (especially DEM) tile server for Belgium is available from\n Champs-Libres, see https://www.champs-libres.coop/blog/post/2020-09-17-cyclosm/.\n'",,"2019/01/23, 15:03:34",1736,CUSTOM,9,945,"2023/10/21, 09:45:49",63,159,597,47,4,6,0.3,0.44415917843388963,"2021/07/22, 17:49:55",v0.6,3,13,false,,false,true,,,https://github.com/cyclosm,,,,,https://avatars.githubusercontent.com/u/47216240?v=4,,, Gym Electric Motor,An OpenAI Gym Environment for Electric Motors.,upb-lea,https://github.com/upb-lea/gym-electric-motor.git,github,"reinforcement-learning,openai-gym-environments,machinelearning,openai-gym,openai,gym-environment,pmsm,motor-models,converters,benchmark,electrical-engineering,electric-drive",Mobility and Transportation,"2023/08/15, 08:17:37",249,0,55,true,Python,Paderborn University - LEA,upb-lea,Python,https://upb-lea.github.io/gym-electric-motor/,"b'# Gym Electric Motor\n![](docs/plots/Motor_Logo.png)\n\n\n[**Overview paper**](https://joss.theoj.org/papers/10.21105/joss.02498)\n| [**Reinforcement learning paper**](https://arxiv.org/abs/1910.09434)\n| [**Quickstart**](#getting-started)\n| [**Install guide**](#installation)\n| [**Reference docs**](https://upb-lea.github.io/gym-electric-motor/)\n| [**Release notes**](https://github.com/upb-lea/gym-electric-motor/releases)\n\n[![Build Status](https://github.com/upb-lea/gym-electric-motor/actions/workflows/build_and_test.yml/badge.svg)](https://github.com/upb-lea/gym-electric-motor/actions/workflows/build_and_test.yml)\n[![codecov](https://codecov.io/gh/upb-lea/gym-electric-motor/branch/master/graph/badge.svg)](https://codecov.io/gh/upb-lea/gym-electric-motor)\n[![PyPI version shields.io](https://img.shields.io/pypi/v/gym-electric-motor.svg)](https://pypi.python.org/pypi/gym-electric-motor/)\n[![License](https://img.shields.io/github/license/mashape/apistatus.svg?maxAge=2592000)](https://github.com/upb-lea/gym-electric-motor/blob/master/LICENSE)\n[![DOI Zenodo](https://zenodo.org/badge/DOI/10.5281/zenodo.4355691.svg)](https://doi.org/10.5281/zenodo.4355691)\n[![DOI JOSS](https://joss.theoj.org/papers/10.21105/joss.02498/status.svg)](https://doi.org/10.21105/joss.02498)\n\n## Overview\nThe gym-electric-motor (GEM) package is a Python toolbox for the simulation and control of various electric motors.\nIt is built upon [Farama Gymnasium Environments](https://gym.openai.com/), and, therefore, can be used for both classical control simulation and [reinforcement learning](https://github.com/upb-lea/reinforcement_learning_course_materials) experiments. 
It allows you to construct a typical drive train with the usual building blocks, i.e., supply voltages, converters, electric motors and load models, and to obtain not only a closed-loop simulation of this physical structure, but also a rich interface for plugging in any decision-making algorithm, from linear feedback control to [Deep Deterministic Policy Gradient](https://spinningup.openai.com/en/latest/algorithms/ddpg.html) agents.\n\n## Getting Started\nAn easy way to get started with GEM is by playing around with the following interactive notebooks in Google Colaboratory. They showcase the most important features of GEM as well as application demonstrations, and they give engineers in industry and academia a kickstart.\n\n* [GEM cookbook](https://colab.research.google.com/github/upb-lea/gym-electric-motor/blob/master//examples/environment_features/GEM_cookbook.ipynb)\n* [Keras-rl2 example](https://colab.research.google.com/github/upb-lea/gym-electric-motor/blob/master/examples/reinforcement_learning_controllers/keras_rl2_dqn_disc_pmsm_example.ipynb)\n* [Stable-baselines3 example](https://colab.research.google.com/github/upb-lea/gym-electric-motor/blob/master/examples/reinforcement_learning_controllers/stable_baselines3_dqn_disc_pmsm_example.ipynb)\n* [MPC example](https://colab.research.google.com/github/upb-lea/gym-electric-motor/blob/master/examples/model_predictive_controllers/pmsm_mpc_dq_current_control.ipynb)\n\nThere is also a list of [standalone example scripts](examples/) for minimalistic demonstrations.\n\nA basic routine is as simple as:\n```py\nimport gym_electric_motor as gem\n\nif __name__ == \'__main__\':\n    env = gem.make(""Finite-CC-PMSM-v0"")  # instantiate a discretely controlled PMSM\n    env.reset()\n    for _ in range(10000):\n        # pick random control actions\n        (states, references), rewards, done, _ = env.step(env.action_space.sample())\n        if done:\n            (states, references), _ = env.reset()\n    env.close()\n```\n\n## Installation\n- Install gym-electric-motor from PyPI (recommended):\n\n```\npip install gym-electric-motor\n```\n\n- Install from GitHub source:\n\n```\ngit clone git@github.com:upb-lea/gym-electric-motor.git \ncd gym-electric-motor\n# Then either\npython setup.py install\n# or alternatively\npip install -e .\n```\n\n## Building Blocks\nA GEM environment consists of the following building blocks:\n- Physical structure:\n  - Supply voltage\n  - Converter\n  - Electric motor\n  - Load model\n- Utility functions for reference generation, reward calculation and visualization\n \n### Information Flow in a GEM Environment\n![](docs/plots/SCML_Overview.png)\n\nBesides various DC-motor models, the following AC motors - together with their power electronic counterparts - are available:\n- Permanent magnet synchronous motor (PMSM)\n- Synchronous reluctance motor (SynRM)\n- Squirrel cage induction motor (SCIM)\n- Doubly-fed induction motor (DFIM)\n\nThe converters can be driven by means of a duty cycle (continuous control set) or switching commands (finite control set).\n\n
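The chosen control set is reflected directly in the environment's action space. The short sketch below is ours rather than from the GEM documentation; the `Cont-CC-PMSM-v0` environment ID is an assumption extrapolated from the naming scheme of the `Finite-CC-PMSM-v0` example above, and the comments only indicate the expected kind of space:

```py
import gym_electric_motor as gem

# Finite control set: the agent selects one of the inverter's discrete
# switching states, so the action space is discrete.
finite_env = gem.make('Finite-CC-PMSM-v0')
print(finite_env.action_space)   # e.g. a Discrete space of switching states

# Continuous control set: the agent sets duty cycles directly, so the
# action space is a continuous Box (environment ID assumed here).
cont_env = gem.make('Cont-CC-PMSM-v0')
print(cont_env.action_space)     # e.g. a Box space of duty cycles
```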
### Citation\nA white paper for the general toolbox in the context of drive simulation and control prototyping can be found in the [Journal of Open Source Software (JOSS)](https://joss.theoj.org/papers/10.21105/joss.02498). Please use the following BibTeX entry for citing it:\n```\n@article{Balakrishna2021,\n doi = {10.21105/joss.02498},\n url = {https://doi.org/10.21105/joss.02498},\n year = {2021},\n publisher = {The Open Journal},\n volume = {6},\n number = {58},\n pages = {2498},\n author = {Praneeth {Balakrishna} and Gerrit {Book} and Wilhelm {Kirchg\xc3\xa4ssner} and Maximilian {Schenke} and Arne {Traue} and Oliver {Wallscheid}},\n title = {gym-electric-motor (GEM): A Python toolbox for the simulation of electric drive systems},\n journal = {Journal of Open Source Software}\n}\n\n```\n\nA white paper for the utilization of this framework within reinforcement learning is available at [IEEE-Xplore](https://ieeexplore.ieee.org/document/9241851) (preprint: [arxiv.org/abs/1910.09434](https://arxiv.org/abs/1910.09434)). Please use the following BibTeX entry for citing it:\n```\n@article{9241851, \n author={Traue, Arne and Book, Gerrit and Kirchg\xc3\xa4ssner, Wilhelm and Wallscheid, Oliver},\n journal={IEEE Transactions on Neural Networks and Learning Systems}, \n title={Toward a Reinforcement Learning Environment Toolbox for Intelligent Electric Motor Control}, \n year={2022},\n volume={33},\n number={3},\n pages={919-928},\n doi={10.1109/TNNLS.2020.3029573}}\n```\n\n### Running Unit Tests with Pytest\nTo run the unit tests, `pytest` is required.\nAll tests can be found in the `tests` folder.\nExecute pytest in the project\'s root folder:\n```\npytest\n```\nor with test coverage:\n```\npytest --cov=./\n```\nAll tests shall pass.\n'",",https://arxiv.org/abs/1910.09434,https://doi.org/10.5281/zenodo.4355691,https://doi.org/10.21105/joss.02498,https://doi.org/10.21105/joss.02498,https://arxiv.org/abs/1910.09434","2019/10/21, 14:04:01",1465,MIT,85,976,"2023/08/15, 08:10:05",14,103,205,17,71,0,0.5,0.6753731343283582,"2023/08/15, 08:27:28",v2.0.0,0,15,false,,true,true,,,https://github.com/upb-lea,https://ei.uni-paderborn.de/en/lea/,"Paderborn, Germany",,,https://avatars.githubusercontent.com/u/55782224?v=4,,, BEAM,"The Framework for Modeling Behavior, Energy, Autonomy, and Mobility in Transportation Systems.",LBNL-UCB-STI,https://github.com/LBNL-UCB-STI/beam.git,github,,Mobility and Transportation,"2023/10/25, 14:49:32",136,0,12,true,Jupyter Notebook,LBNL/UCB - Sustainable Transportation Initiative,LBNL-UCB-STI,"Jupyter Notebook,Scala,Python,Java,R,Shell,Batchfile,Dockerfile,Smarty,Makefile,JavaScript",,"b'# BEAM\n\n[![Build Status](https://beam-ci.tk/job/master/badge/icon)](https://beam-ci.tk/job/master/)\n\n[![Documentation Status](https://readthedocs.org/projects/beam/badge/?version=latest)](http://beam.readthedocs.io/en/latest/?badge=latest)\n\nBEAM stands for Behavior, Energy, Autonomy, and Mobility. The model is being developed as a framework for a series of research studies in sustainable transportation at Lawrence Berkeley National Laboratory. 
\n\nBEAM extends the [Multi-Agent Transportation Simulation Framework](https://github.com/matsim-org/matsim) (MATSim) to enable powerful and scalable analysis of urban transportation systems.\n\n[Read more about BEAM](http://beam.readthedocs.io/en/latest/about.html) \n\nor \n\ntry running BEAM with our simple [getting started guide](http://beam.readthedocs.io/en/latest/users.html#getting-started) \n\nor \n\ncheck out the [developer guide](http://beam.readthedocs.io/en/latest/developers.html) to get serious about power use or contributing.\n\n## Project website: \nhttp://beam.lbl.gov/\n\n'",,"2016/11/07, 20:38:29",2543,CUSTOM,701,15980,"2023/10/04, 14:41:09",155,1881,3647,127,21,73,0.9,0.8313479623824451,"2019/11/14, 15:11:09",pilates-runs-10-28-2019,0,48,false,,false,false,,,https://github.com/LBNL-UCB-STI,,"Berkeley, CA",,,https://avatars.githubusercontent.com/u/26527414?v=4,,, WoBike,"Public transport and multimodal routing apps could benefit from showing nearby bikes from bikesharing services. So here's a list showing the APIs of a few of these platforms.",ubahnverleih,https://github.com/ubahnverleih/WoBike.git,github,"bike,sharing,api,map,location,documentation,opendata,scooter,bike-share,scooter-sharing",Mobility and Transportation,"2023/06/27, 09:09:28",907,0,45,true,,,,,,"b""# WoBike\n\nDocumentation of Bike Sharing APIs\n\nPublic transport and multimodal routing apps could benefit from showing nearby bikes from bikesharing services. So here's a list showing the APIs of a few of these platforms.\n\n## Bikes\n### Active\n- [Nextbike (Worldwide)](Nextbike.md)\n- [Call-a-Bike (Germany / Deutsche Bahn)](Call-a-Bike.md)\n- [Motivate (US; Powers FordGoBike, Biketown, Capital Bikeshare, Bike Chattanooga, Citi Bike, CoGo, Divvy, Hubway)](Motivate.md)\n- [BYKE/WIND](Wind.md)\n- [Dropbike (Canada)](Dropbike.md)\n- [SocialBicycles (USA, Canada, Czech Republic & Poland)](SocialBicycles.md)\n- [Bixi (Montreal)](Bixi.md)\n- [Donkey Republic](Donkey.md)\n- [CityBike Wien (Vienna/Austria)](CityBikeWien.md)\n- [Velo Antwerpen (Antwerp/Belgium)](VeloAntwerpen.md)\n- [Avocargo (Berlin)](Avocargo.md)\n\n### Dead (e.g., bankrupt)\n- [oBike (Worldwide)](Obike.md) - Closed June 2018\n- [ofo bike (China, UK, US, Austria, Thailand, Singapore, France & India)](Ofo.md) - Closed January 2019\n- [Bluegogo (China, US)](Bluegogo.md) - Closed November 2017\n- [OnzO (New Zealand)](Onzo.md) - Closed March 2020\n- [Mobike](Mobike.md) - Closed December 2020\n- [Gobee bike (Hong Kong & Italy)](Gobee.md) - Closed July 2018\n- [Yobike/Ohbike/Indigoweel](Yobike.md)\n- [EUBIKE (Sweden)](EUBike.md)\n- [Lidl-Bike](Lidl-Bike.md)\n\n\n## E-Scooters (\xf0\x9f\x9b\xb4, the kind you stand on)\n### Active\n- [Bird](Bird.md)\n- [VOI](Voi.md)\n- [Helbiz](Helbiz.md)\n- [TIER](Tier.md)\n- [Beam](Beam.md)\n- [Flamingo](Flamingo.md)\n- [Blip](Blip.md)\n- [Vaimoo](Vaimoo.md)\n- [Go Sharing](Go-Sharing.md)\n- [Lime & JUMP](Lime.md)\n- [Spin](Spin.md)\n- [Pony](Pony.md)\n\n### Dead (e.g., bankrupt)\n- [JUMP (USA, Europe, Australia & NZ)](Jump.md) - Closed 16.6.2020\n- [Zagster](Zagster.md) - Closed June 2020\n- [Circ](Circ.md) - Bought by Bird 27.1.2020\n- [Zero (Germany)](Zero.md)\n- [Ufo (Europe)](Ufo.md)\n- [Scoota](Scoota.md)\n- [Hive](Hive.md)\n\n\n## Scooters/Mopeds (\xf0\x9f\x9b\xb5, the kind you sit on)\n### Active\n- [Emmy (Germany)](Emmy.md)\n- [Stella (Germany (Stuttgart))](Stella.md)\n- [Felyx (Europe)](Felyx.md)\n- [Eddy (D\xc3\xbcsseldorf, Germany)](Eddy.md)\n\n### Dead (e.g., bankrupt)\n- [Coup (Germany, Spain & France)](Coup.md) - 
Closed December 2019\n\n## Cars\n- [Cambio (Germany & Belgium)](Cambio.md)\n- [WeShare (Germany)](WeShare.md)\n- [Miles (Germany)](Miles.md)\n\n\n## More...\n* Also have a look at [this project](https://github.com/eskerda/pybikes/tree/master/pybikes)\n* [GBFS (General Bikeshare Feed Specification)](https://github.com/NABSA/gbfs)\n* [Bikesharing World Map](https://www.google.com/maps/d/u/0/viewer?mid=1UxYw9YrwT_R3SGsktJU3D-2GpMU&ll=50.01042750703113%2C35.03132237929685&z=2)\n* [Open Bike Share Data](https://bikeshare-research.org/)\n* For more documentation and some updates on the APIs, a look into the issues section could be helpful. Also feel free to add APIs from the issues into the documentation by creating a Pull Request.\n\n## Todo\n\nSome new providers are listed in [issues with the new provider tag](https://github.com/ubahnverleih/WoBike/issues?q=is%3Aissue+is%3Aopen+label%3A%22new+provider%22). Some of them are already documented there and just need to be put into their own file in this repo.\n""",,"2017/08/04, 15:12:14",2273,CUSTOM,23,318,"2023/10/09, 19:56:36",100,94,174,23,16,3,0.0,0.7352941176470589,,,0,51,false,,false,false,,,,,,,,,,, multicycles,"Aggregates on one map, more than 100 share vehicles like bikes, scooters, mopeds and cars.",PierrickP,https://github.com/PierrickP/multicycles.git,github,"velib,freefloating,bike,bikesharing,bikeshare,mobike,bike-rental-services,lime,bird,voi,tier,api,carsharing,scooters,bikes,cars",Mobility and Transportation,"2023/10/24, 08:11:19",84,0,3,true,Vue,,,"Vue,JavaScript,SCSS,Shell,Dockerfile,HTML",https://multicycles.org/,"b'# Multicycles\n\n[Multicycles.org](http://multicycles.org) aggregates, on one map, more than 100 shared-vehicle services: bikes, scooters, mopeds and cars.\n\nSee [fluctuo Data Flow](https://flow.fluctuo.com/) for the API endpoint.\n\n## Supported Providers\n\nAll from [fluctuo Data Flow](https://flow.fluctuo.com/)\n\nWant to add one? [https://en.wikipedia.org/wiki/List_of_bicycle-sharing_systems](https://en.wikipedia.org/wiki/List_of_bicycle-sharing_systems)\nOr submit an [Issue](https://github.com/PierrickP/multicycles/issues/new)\n\n## Contact\n\nEmail: contact@fluctuo.com\nTwitter: https://twitter.com/fluctuo\n'",,"2017/12/27, 09:43:15",2128,MIT,7,620,"2023/02/16, 15:32:33",43,92,165,12,251,43,0.0,0.10992907801418439,,,0,7,false,,false,false,,,,,,,,,,, pybikes,"Provides a set of tools to scrape bike sharing data from different websites and APIs, thus providing a coherent and generalized set of classes and methods to access this sort of information.",eskerda,https://github.com/eskerda/pybikes.git,github,,Mobility and Transportation,"2023/10/20, 15:51:52",519,0,32,true,Python,,,"Python,HTML,Makefile",https://citybik.es,"b'pybikes [![Build Status](https://github.com/eskerda/pybikes/actions/workflows/test.yml/badge.svg)](https://github.com/eskerda/pybikes/actions/workflows/test.yml)\n=======\n![pybikes](http://citybik.es/files/pybikes.png)\n\npybikes provides a set of tools to scrape bike sharing data from different\nwebsites and APIs, thus providing a coherent and generalized set of classes\nand methods to access this sort of information.\n\nThe library is distributed and intended mainly for statistics and data\nsharing projects. 
More importantly, it powers the [CityBikes][1] project, and\nis composed of a set of classes and a pack of data files that provide instances\nfor all the different systems.\n\nInstallation\n------------\n\nInstall directly from GitHub:\n```bash\npip install git+https://github.com/eskerda/pybikes.git\n```\n\nOr after downloading/cloning the source:\n```bash\npython setup.py install\n```\n\nThe following dependencies are required (example using the Ubuntu package manager):\n```\nsudo apt-get install python\nsudo apt-get install python-setuptools\nsudo apt-get install libxml2 libxml2-dev libxslt1-dev libgeos-dev\n```\n\nUsage\n-----\n```python\n>>> import pybikes\n\n# Capital BikeShare instantiation data is in the bixi.json file\n>>> capital_bikeshare = pybikes.get(\'capital-bikeshare\')\n\n# The instance contains all possible metadata regarding this system\n>>> print(capital_bikeshare.meta)\n{\n \'name\': \'Capital BikeShare\',\n \'city\': \'Washington, DC - Arlington, VA\',\n \'longitude\': -77.0363658,\n \'system\': \'Bixi\',\n \'company\': [\'PBSC\'],\n \'country\': \'USA\',\n \'latitude\': 38.8951118\n}\n# The update method retrieves the list of stations\n>>> print(len(capital_bikeshare.stations))\n0\n>>> capital_bikeshare.update()\n>>> print(len(capital_bikeshare.stations))\n191\n>>> print(capital_bikeshare.stations[0])\n--- 31000 - 20th & Bell St ---\nbikes: 7\nfree: 4\nlatlng: 38.8561,-77.0512\nextra: {\n \'installed\': True,\n \'uid\': 1,\n \'locked\': False,\n \'removalDate\': \'\',\n \'installDate\': \'1316059200000\',\n \'terminalName\': \'31000\',\n \'temporary\': False,\n \'name\': \'20th & Bell St\',\n \'latestUpdateTime\': \'1353454305589\'\n}\n```\n\nSome systems might require an API key to work (for instance, Cyclocity). In\nthese cases, the instance factory can take an extra API key parameter.\n\n```python\n>>> key = ""This is not an API key""\n>>> dublinbikes = pybikes.get(\'dublinbikes\', key)\n```\n\nNote that pybikes works as an instance factory and, alternatively, instances can be\ngenerated by passing the right arguments to the desired class:\n\n```python\n>>> from pybikes.cyclocity import BixiSystem\n>>> capital_bikeshare = BixiSystem(\n tag = \'foo_tag\',\n root_url = \'http://capitalbikeshare.com/data/stations/\',\n meta = {\'foo\':\'bar\'}\n )\n```\n\nThe way information is retrieved can be tweaked using the PyBikesScraper class\nincluded in the utils module, thus allowing session reuse and niceties such as\nusing a proxy. 
This class uses the [Requests][2] module internally.\n\n```python\n>>> scraper = pybikes.utils.PyBikesScraper()\n>>> scraper.enableProxy()\n>>> scraper.setProxies({\n ""http"" : ""127.0.0.1:8118"",\n ""https"": ""127.0.0.1:8118""\n })\n>>> scraper.setUserAgent(""Walrus\xe2\x84\xa2 v3.0"")\n>>> scraper.headers[\'Foo\'] = \'bar\'\n>>> capital_bikeshare.update(scraper)\n```\n\n[1]: http://www.citybik.es ""CityBikes""\n[2]: http://docs.python-requests.org ""Requests""\n\nTests\n-----\nTests are separated into unit tests and integration tests against the different\nsources supported.\n\nTo run the unit tests, simply run\n\n```bash\nmake test\n```\n\nTo run the integration tests\n\n```bash\nmake test-update\n```\n\nNote that some systems require authorization keys; the tests expect these to be\nset as environment variables, like:\n\n```bash\nPYBIKES_CYCLOCITY=\'some-api-key\'\nPYBIKES_DEUTSCHEBAHN_CLIENT_ID=\'some-client-id\'\nPYBIKES_DEUTSCHEBAHN_CLIENT_SECRET=\'some-client-secret\'\n\n# or if using an .env file\n# source .env\n\nmake test-update\n```\n\nThis project uses pytest for tests. Test a particular network by passing a\nfilter expression\n\n```bash\npytest -k bicing\npytest -k gbfs\n```\n\nTo speed up test execution, install [pytest-xdist][3] to specify the number of\nCPUs to use\n\n```bash\npytest -k gbfs -n auto\n```\n\nTo use the Makefile steps and pass along pytest arguments, append to the `T_FLAGS`\nvariable\n\n```bash\nmake test-update T_FLAGS+=\'-n 10 -k gbfs\'\n```\n\nIntegration tests can generate a JSON report file with all extracted data stored\nas GeoJSON. Using this JSON report file, further useful reports can be generated,\nlike a summary of the overall health of the library or a map visualization of\nall the information.\n\nFor more information on reports see [utils/README.md][4]\n\n[3]: https://pypi.org/project/pytest-xdist/\n[4]: utils/README.md\n'",,"2010/07/16, 10:07:59",4849,LGPL-3.0,293,1400,"2023/10/20, 15:35:24",48,443,591,144,5,13,0.3,0.3890577507598785,,,0,88,false,,false,false,,,,,,,,,,, sustainable-mobility-api,Consists of a Python library and HTTP API for estimating the environmental impact of personal mobility.,mshepherd,https://gitlab.com/mshepherd/sustainable-mobility-api,gitlab,,Mobility and Transportation,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,, ChargyDesktopApp,"Chargy is a transparency software for secure and transparent e-mobility charging processes, as defined by the German ""Eichrecht"".",OpenChargingCloud,https://github.com/OpenChargingCloud/ChargyDesktopApp.git,github,"transparency,eichrecht,electron,chargy,smartmetering,evroamingdeluxe,premiumeichrecht,evroaming,e-mobility",Mobility and Transportation,"2023/10/25, 11:26:06",10,0,3,true,TypeScript,Open Charging Cloud,OpenChargingCloud,"TypeScript,SCSS,HTML,CSS,JavaScript,Shell",,"b'# Chargy Desktop App\n\nChargy is a transparency software for secure and transparent e-mobility charging processes, as defined by the German ""Eichrecht"". The software allows you to verify the cryptographic signatures of energy measurements within charge detail records, and it comes with a couple of useful extensions to simplify the entire process for end users and operators.\n\n![](documentation/Screenshot02.png)\n\n## Benefits of Chargy\n\n1. Chargy comes with __*meta data*__. True charging transparency is more than just signed smart meter values. Chargy allows you to group multiple signed smart meter values to entire charging sessions and to add additional meta data like EVSE information, geo coordinates, tariffs, ... 
within your backend in order to improve the user experience for the EV drivers.\n2. Chargy is __*secure*__. Chargy implements a public key infrastructure for managing certificates of smart meters, EVSEs, charging stations, charging station operators and e-mobility providers. By this, the EV driver will always retrieve the correct public key to verify a charging process automatically and without complicated manual lookups in external databases.\n3. Chargy is __*platform agnostic*__. The entire software is available for desktop and smartphone operating systems and .NET. If you want ports to other platforms or programming languages, we will support your efforts.\n4. Chargy is __*Open Source*__. In contrast to other vendors in e-mobility, we believe that true transparency is only trustworthy if the entire process and the required software is open and reusable under a fair copyleft license (AGPL).\n5. Chargy is __*open for your contributions*__. We currently support adapters for the protocols of different charging station vendors like chargeIT mobility, ABL (OCMF), chargepoint. The certification at the Physikalisch-Technische Bundesanstalt (PTB) is provided by chargeIT mobility. If you want to add your protocol or a protocol adapter, feel free to read the contributor license agreement and to send us a pull request.\n6. Chargy is __*white label*__. If you are a supporter of the Chargy project, you can even use the entire software project under the free Apache 2.0 license. This allows you to create proprietary forks implementing your own corporate design or to include Chargy as a library within your existing application (this limitation was introduced to avoid discussions with too many black sheep in the e-mobility market. We are sorry...).\n7. Chargy is __*accessible*__. For public sector bodies, Chargy fully supports the [EU directive 2016/2102](https://eur-lex.europa.eu/legal-content/EN/TXT/PDF/?uri=CELEX:32016L2102) on the accessibility of websites and mobile applications and provides a context-sensitive feedback mechanism and methods for dispute resolution.\n\n\n## Versions and Milestones\n\nVersion 1.0.x of the Chargy Transparency Software was reviewed and certified by the [Physikalisch-Technische Bundesanstalt (PTB)](https://www.ptb.de). If you are a charge point vendor and want to use this software to verify compliance with the German Eichrecht, you can talk to our partner [chargeIT mobility](https://www.chargeit-mobility.com) and obtain the required legal documents.\n\nVersion 1.2.x of the Chargy Transparency Software was reviewed and certified by the [Verband der Elektrotechnik Elektronik Informationstechnik e.V. (VDE)](https://www.vde.com/de). If you are a charge point vendor and want to use this software to verify compliance with the German Eichrecht, you can talk to our partner [ChargePoint](https://www.chargepoint.com/de-de/) and obtain the required legal documents.\n\nIf you need help with the Chargy Transparency Software or want to include your smart energy meter or transparency data format, talk to [us](https://open.charging.cloud).\n\nThe development of version [v1.3](https://github.com/OpenChargingCloud/ChargyDesktopApp/tree/v1.3) has already started and will focus on enhanced security concepts, more digital certificates and pricing information.\n\n## Credits\n\n- Christian Meusel for some more BSM validations.\n\n## Awards\n\nThe Chargy Transparency Software is one of the winners of the [1. 
Thuringia\'s Open-Source Prize](https://www.it-leistungsschau.de/programm/TOSP2019/) in March 2019. This prize was awarded by [Wolfgang Tiefensee](https://de.wikipedia.org/wiki/Wolfgang_Tiefensee), [Thuringia\xe2\x80\x99s Secretary of Commerce](https://www.thueringen.de/th6/tmwwdg/), in conjunction with the board of directors of the IT industry network [ITNet Thuringia](https://www.itnet-th.de).\n\n \n'",,"2018/11/29, 21:52:31",1791,AGPL-3.0,27,598,"2023/10/25, 11:29:56",26,23,54,5,0,0,0.0,0.02651515151515149,"2020/06/29, 11:04:11",v1.2,0,3,true,"github,patreon,open_collective,ko_fi,tidelift,community_bridge,liberapay,issuehunt,otechie,custom",true,true,,,https://github.com/OpenChargingCloud,https://open.charging.cloud,"Jena, Germany",,,https://avatars.githubusercontent.com/u/17928234?v=4,,, WWCP_OCPP,Connectivity between the World Wide Charging Protocol (WWCP) and the Open Charge Point Protocol (OCPP v1.6/v2.0).,OpenChargingCloud,https://github.com/OpenChargingCloud/WWCP_OCPP.git,github,"ocpp,e-mobility,charging,chargingstation",Mobility and Transportation,"2023/10/23, 22:35:20",32,0,9,true,C#,Open Charging Cloud,OpenChargingCloud,"C#,SCSS,CSS,TypeScript,HTML,JavaScript",,"b'# WWCP OCPP\n\nThis software allows the communication between World Wide Charging\nProtocol (WWCP) entities and entities implementing the\n_Open ChargePoint Protocol Version 1.6/2.0.1/2.1_, which is defined by the\n_Open Charge Alliance (OCA)_. The focus of this protocol is on the communication\naspects between e-mobility charging stations, local nodes and charging station\noperator backends.\n\nFor more details on this protocol please visit http://www.openchargealliance.org.\n\n## Versions\n\n- **OCPP v2.1** is based on an internal OCA specification (Draft 2) and currently under development. This version also comes with our [OCPP CSE](OCPP_CSE) extensions for cryptographically signed messages and security policies.\n- **OCPP v2.0.1** is fully implemented, and at least one test exists for\nevery charging station or CSMS message.\n- **OCPP v1.6** and the **Security Whitepaper** extensions are fully implemented,\nand at least one test exists for every charging station or CSMS message. This\nversion was also tested on multiple *Open Charge Alliance Plugfests*.\n- **OCPP v1.5** is no longer maintained. If you still need this version, please\nsend us an e-mail.\n\n## Content\n\n- **Implementation Details and Differences** for [OCPP v1.6](WWCP_OCPPv1.6/README.md), [OCPP v2.0.1](WWCP_OCPPv2.0.1/README.md) and [OCPP v2.1](WWCP_OCPPv2.1/README.md) relative to the official protocol specification. The OCPP specification unfortunately has many flaws and security issues. This implementation provides extensions and work-arounds for most of these issues to simplify the daily operations business and high availability, or to support additional concepts/methods like the *European General Data Protection Regulation (GDPR)* and the *German Calibration Law (Eichrecht)*.\n\n\n### Your participation\n\nThis software is Open Source under the **Apache 2.0 license** and in some parts the\n**Affero GPL 3.0 license**. We appreciate your participation in this\nongoing project, and your help to improve it and the e-mobility ICT in\ngeneral. If you find bugs, want to request a feature or send us a pull\nrequest, feel free to use the normal GitHub features to do so. 
For this\nplease read the Contributor License Agreement carefully and send us a signed\ncopy or use a similar free and open license.\n'",,"2016/05/23, 17:07:44",2711,Apache-2.0,236,378,"2020/06/28, 03:44:21",5,0,1,0,1214,0,0,0.0,,,0,1,true,"github,patreon,open_collective,ko_fi,tidelift,community_bridge,liberapay,issuehunt,otechie,custom",false,false,,,https://github.com/OpenChargingCloud,https://open.charging.cloud,"Jena, Germany",,,https://avatars.githubusercontent.com/u/17928234?v=4,,, WWCP_Core,The World Wide Charging Protocol Suite is a collection of protocols in order to connect market actors in the field of e-mobility solutions via scalable and secure Internet protocols.,OpenChargingCloud,https://github.com/OpenChargingCloud/WWCP_Core.git,github,"wwcp,e-mobility,ev-roaming,charging,chargingstation",Mobility and Transportation,"2023/10/22, 11:49:17",7,0,1,true,C#,Open Charging Cloud,OpenChargingCloud,"C#,Smalltalk",,"b'# WWCP Core\n\nThe *World Wide Charging Protocol Suite* is a suite of protocols that provides a unified architecture,\ndesigned to link all stakeholders in electric mobility and energy networks through scalable, secure\nand semantic Internet protocols.\n\nUnlike traditional e-mobility protocols, the WWCP Suite prioritizes IT security, privacy, transparency,\nmetrology, and semantics in an extensively connected open environment. Therefore the suite also\nemphasizes interoperability with existing protocols and both backward and forward compatibility within\nits own framework.\n\nThis repository lays down the bedrock concepts, entities, and data structures, and provides a reference\nimplementation of virtual components like Electric Vehicle Supply Equipment (EVSEs), charging stations,\ncharging pools, charging station operators, e-mobility service providers, roaming networks, national\naccess points and more. 
These components are designed to facilitate continuous integration testing and\npenetration testing within e-mobility roaming networks.\n\n#### Related Projects\n\nSeveral projects make use of this core library:\n - [WWCP OICP](https://github.com/OpenChargingCloud/WWCP_OICP) defines a mapping between WWCP and the [Open InterCharge Protocol](http://www.intercharge.eu) v2.3 as defined by [Hubject GmbH](http://www.hubject.com) and thus allows you to enable EMP and CPO roaming.\n - [WWCP OCPI](https://github.com/OpenChargingCloud/WWCP_OCPI) defines a mapping between WWCP and the [Open Charge Point Interface](https://github.com/ocpi/ocpi) v2.1.1 and v2.2 and thus allows you to enable EMP and CPO roaming via direct connections between those entities.\n - [WWCP OCPP](https://github.com/OpenChargingCloud/WWCP_OCPP) defines a mapping between WWCP and the [Open Charge Point Protocol](http://www.openchargealliance.org) v1.6 and v2.0.1 and thus allows you to attach OCPP charging stations.\n - [WWCP ISO15118](https://github.com/OpenChargingCloud/WWCP_ISO15118) defines a mapping between WWCP and [ISO/IEC 15118](https://de.wikipedia.org/wiki/ISO_15118) and thus allows you to communicate between electric vehicles and charging stations.\n\n\nThe following projects are compatible solutions:\n- [ChargySharp](https://github.com/OpenChargingCloud/ChargySharp) The C# reference implementation of the Chargy e-mobility transparency software.\n- [Chargy Desktop App](https://github.com/OpenChargingCloud/ChargyDesktopApp) An e-mobility transparency software for Windows, Mac OS X and Linux (based on the [Electron framework](https://electronjs.org/)).\n- [Chargy Mobile App](https://github.com/OpenChargingCloud/ChargyMobilepApp) An e-mobility transparency software for iOS and Android (based on [Apache Cordova](https://cordova.apache.org)).\n\nThe following projects are no longer in active development, but are left for educational purposes:\n\n - [WWCP OCHP](https://github.com/OpenChargingCloud/WWCP_OCHP) defines a mapping between WWCP and the [Open Clearing House Protocol](http://www.ochp.eu) and thus allows you to enable EMP and CPO roaming.\n - [WWCP OIOI](https://github.com/OpenChargingCloud/WWCP_OIOI) defines a mapping between WWCP and the [OIOI Protocol](https://docs.plugsurfing.com) as defined by [PlugSurfing](https://www.plugsurfing.com) and thus allows you to enable EMP and CPO roaming.\n - [WWCP eMIP](https://github.com/OpenChargingCloud/WWCP_eMIP) defines a mapping between WWCP and the [eMobility Protocol Inter-Operation](https://www.gireve.com/wp-content/uploads/2017/02/Gireve_Tech_eMIP-V0.7.4_ProtocolDescription_1.0.2_en.pdf) as defined by [Gireve](https://www.gireve.com) and thus allows you to enable EMP and CPO roaming.\n\n\n#### Requirements & Configuration\n\n1. You will need .NET 7\n2. (Stress) tested under Debian GNU/Linux running in a KVM environment on AMD Ryzen 9 16-Core Zen3 machines\n\n\n#### Your contributions\n\nThis software is developed by [GraphDefined GmbH](http://www.graphdefined.com).\nWe appreciate your participation in this ongoing project, and your help to improve it.\nIf you find bugs, want to request a feature or send us a pull request, feel free to\nuse the normal GitHub features to do so. 
For this please read the\n[Contributor License Agreement](Contributor%20License%20Agreement.txt)\ncarefully and send us a signed copy.\n'",,"2015/05/05, 12:15:42",3095,AGPL-3.0,176,1171,"2020/06/28, 03:44:21",4,0,0,0,1214,0,0,0.0,,,0,1,true,"github,patreon,open_collective,ko_fi,tidelift,community_bridge,liberapay,issuehunt,otechie,custom",false,false,,,https://github.com/OpenChargingCloud,https://open.charging.cloud,"Jena, Germany",,,https://avatars.githubusercontent.com/u/17928234?v=4,,, openv2g,"The objective to start this project is primarily to support the ISO and IEC standardization process to specify the so called ""Vehicle 2 Grid Communication Interface"" (V2G CI) which became the ISO IEC 15118 specification by now.",,,custom,,Mobility and Transportation,,,,,,,,,,http://openv2g.sourceforge.net/,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,, RAMP-mobility,A novel application of the RAMP main engine for generating bottom-up stochastic electric vehicles load profiles.,RAMP-project,https://github.com/RAMP-project/RAMP-mobility.git,github,"electric-vehicles-mobility,charging-profiles,ramp-model,charging-strategies,demand",Mobility and Transportation,"2022/10/05, 18:23:03",19,0,11,false,Python,RAMP,RAMP-project,Python,,"b'\n\n*RAMP-mobility: a RAMP application for generating bottom-up stochastic electric vehicles load profiles.*\n\n---\n\n## Overview\nRAMP-mobility is an original application of the open-source **[RAMP software engine](https://github.com/RAMP-project/RAMP)**, tailored to the generation of European electric vehicle mobility and charging profiles at high temporal resolution (1 min).\n\nThis repository contains the complete RAMP-mobility model, entirely developed in Python 3.7. The model is currently released as v0.3.1. It includes minimal \'quick-start\' documentation (see below), complemented by the code being fully commented line by line to allow a complete understanding of it. \n\nFurther details about the conceptual and mathematical model formulation of the RAMP software engine can be found in the original [RAMP](https://doi.org/10.1016/j.energy.2019.04.097) and in the specific [RAMP-mobility](https://doi.org/10.1016/j.apenergy.2022.118676) publications. What is more, you are welcome to join our **[Gitter chat](https://gitter.im/RAMP-project/community)** to discuss doubts and ask questions about the code!\n\n
\n\n## Quick start\n\nPlease refer to the complete **[getting started](https://github.com/RAMP-project/RAMP-mobility/blob/master/docs/getting_started.md)** guide for instructions on how to run RAMP-Mobility. This includes information about installation and Python dependencies, as well as a minimum walkthrough of model structure and usage.\n\n## Model description\n\nRAMP-mobility covers 28 European countries, namely: EU27 minus Cyprus and Malta, plus Norway, Switzerland and the UK. \nThe model consists of 2 main modules:\n\n**Module 1:** bottom-up stochastic simulation of electric vehicle mobility profiles\n\n
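To make the "bottom-up stochastic" idea of Module 1 concrete, here is a toy sketch. It is not RAMP code, and every name, distribution and number in it is invented for illustration only: it draws random departure times and trip durations for a fleet and aggregates them into a 1-min "vehicle in use" profile, which is the kind of output Module 1 produces with far richer user categories and trip statistics.

```python
import numpy as np

rng = np.random.default_rng(seed=42)
N_CARS = 1000
MINUTES = 24 * 60                      # one day at 1-min resolution
in_use = np.zeros(MINUTES, dtype=int)

for _ in range(N_CARS):
    # Toy assumptions: one trip per car, departure around 08:00 +/- 1 h,
    # duration around 30 +/- 10 min.
    start = int(rng.normal(8 * 60, 60)) % MINUTES
    duration = max(1, int(rng.normal(30, 10)))
    in_use[start:min(start + duration, MINUTES)] += 1

# in_use[t] is the number of vehicles driving during minute t.
print(in_use.max())
```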
\n\n**Module 2:** simulation, for each electric vehicle, of a charging profile based on the previously obtained mobility pattern\n\nFour pre-defined charging strategies are implemented to simulate different plausible scenarios (a simplified sketch of the first rule is shown after this list): \n\n1. *Uncontrolled*: The base case, where no control over the user behaviour is applied. If the charging point is available, the battery is charged immediately at the nominal power, until a user-defined value of SOCmax.\n2. *Perfect Foresight*: A strategy aiming at quantifying the potential of a vehicle-to-grid solution. If the charging point is available, the car is charged right before the end of the parking, at the nominal power, until the SOC satisfies the needs of the following journey. This makes it possible to compute the part of the vehicle\'s battery that is available to the system, without affecting the user\'s driving range. \n3. *Night Charge*: The first smart charging strategy. It aims at shifting the charging events to the night period. The car is charged only if the charging point is available and the parking happens during nighttime.\n4. *RES Integration*: The second smart charging method. It aims to couple renewable power generation with the transport sector. The car is charged only if the charging point is available and the parking happens during periods with an excess of renewable power production. As this condition is evaluated through the residual load curve, a file containing it should be provided in the folder ""Input_data/Residual Load duration curve"".\n\n
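As a rough illustration of strategy 1 above, the sketch below applies the *Uncontrolled* rule in 1-min steps. It is ours, not model code; the function name, parameters and the 50 kWh / 3.7 kW example values are all hypothetical:

```python
def uncontrolled_charge(soc, plugged_in, p_nom_kw, capacity_kwh,
                        soc_max=1.0, dt_h=1 / 60):
    """Toy 'Uncontrolled' rule: whenever the car is plugged in and below
    SOCmax, charge at nominal power for one time step (default 1 min)."""
    if plugged_in and soc < soc_max:
        soc = min(soc_max, soc + p_nom_kw * dt_h / capacity_kwh)
    return soc

# Hypothetical example: a 50 kWh battery at 40% SOC on a 3.7 kW charger.
soc = 0.4
for _ in range(60):  # one hour in 1-min steps, matching the 1-min resolution
    soc = uncontrolled_charge(soc, plugged_in=True,
                              p_nom_kw=3.7, capacity_kwh=50.0)
print(round(soc, 3))  # ~0.474 after one hour
```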
\n\n## Authors\nThe model has been developed by:\n\n**Andrea Mangipinto**
\nPolitecnico di Milano, Italy
\n\n**Francesco Lombardi**
\nTU Delft, Netherlands
\n [@FrLomb](https://twitter.com/FrLomb)\n(Correspondence should be sent to: f.lombardi@tudelft.nl)
\n\n**Francesco Sanvito**
\nPolitecnico di Milano, Italy
\n [@FrancescoSanvi2](https://twitter.com/FrancescoSanvi2)\n\n**Sylvain Quoilin**
\nKU Leuven, Belgium
\n\n**Matija Pavi\xc4\x8devi\xc4\x87**
\nKU Leuven, Belgium
\n\n**Emanuela Colombo**
\nPolitecnico di Milano, Italy
\n\n## Citing\n\nPlease cite the related Journal publicaton if you use RAMP-mobility in your research:\n*A. Mangipinto, F. Lombardi, F. Sanvito, M. Pavi\xc4\x8devi\xc4\x87, S. Quoilin, E. Colombo, Impact of mass-scale deployment of electric vehicles and benefits of smart charging across all European countries, Applied Enery, 2022, https://doi.org/10.1016/j.apenergy.2022.118676. *\n\nAdditionally, you may cite the EMP-E presentation of RAMP-mobility:\n*A. Mangipinto, F. Lombardi, F. Sanvito, S. Quoilin, M. Pavi\xc4\x8devi\xc4\x87, E. Colombo, RAMP-mobility: time series of electric vehicle consumption and charging strategies for all European countries, EMP-E, 2020, https://doi.org/10.13140/RG.2.2.29560.26880*\n\nOr the publication of the original RAMP software engine:\n*F. Lombardi, S. Balderrama, S. Quoilin, E. Colombo, Generating high-resolution multi-energy load profiles for remote areas with an open-source stochastic model, Energy, 2019, https://doi.org/10.1016/j.energy.2019.04.097.*\n\n## Contribute\nThis project is open-source. Interested users are therefore invited to test, comment or contribute to the tool. Submitting issues is the best way to get in touch with the development team, which will address your comment, question, or development request in the best possible way. We are also looking for contributors to the main code, willing to contibute to its capabilities, computational-efficiency, formulation, etc. \n\n## License\n\nCopyright 2020 RAMP-Mobility, contributors listed in **Authors**\n\nLicensed under the European Union Public Licence (EUPL), Version 1.2-or-later; you may not use this file except in compliance with the License. \n\nUnless required by applicable law or agreed to in writing, software distributed under the License is distributed on an ""AS IS"" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License\n'",",https://doi.org/10.1016/j.energy.2019.04.097,https://doi.org/10.1016/j.apenergy.2022.118676,https://doi.org/10.1016/j.apenergy.2022.118676,https://doi.org/10.13140/RG.2.2.29560.26880*\n\nOr,https://doi.org/10.1016/j.energy.2019.04.097.*\n\n##","2020/05/24, 09:47:42",1249,EUPL-1.2,0,132,"2022/10/05, 18:24:08",16,11,20,0,385,2,0.1,0.28037383177570097,"2021/05/28, 07:43:30",v0.3.1,0,4,false,,false,false,,,https://github.com/RAMP-project,,,,,https://avatars.githubusercontent.com/u/65850039?v=4,,, PCT,The goal is to increase the accessibility and reproducibility of the data produced by the Propensity to Cycle Tool (PCT).,ITSLeeds,https://github.com/ITSLeeds/pct.git,github,,Mobility and Transportation,"2023/05/20, 08:04:40",19,0,4,true,TeX,Institute for Transport Studies,ITSLeeds,"TeX,R",https://itsleeds.github.io/pct/,"b'\n\n\n\n\n[![](https://cranlogs.r-pkg.org/badges/grand-total/pct)](https://cran.r-project.org/package=pct)\n[![R build\nstatus](https://github.com/itsleeds/pct/workflows/R-CMD-check/badge.svg)](https://github.com/itsleeds/pct/actions)\n[![](https://www.r-pkg.org/badges/version/pct)](https://www.r-pkg.org/pkg/pct)\n[![R-CMD-check](https://github.com/itsleeds/pct/workflows/R-CMD-check/badge.svg)](https://github.com/itsleeds/pct/actions)\n\n\n\n\n\n\n# pct\n\nThe goal of pct is to make the data produced by the Propensity to Cycle\nTool (PCT) easier to access and reproduce. The PCT a research project\nand web application hosted at [www.pct.bike](https://www.pct.bike/). 
For\nan overview of the data provided by the PCT, clicking on the previous\nlink and trying it out is a great place to start. An academic\n[paper](https://www.jtlu.org/index.php/jtlu/article/view/862) on the PCT\nprovides detail on the motivations for and methods underlying the\nproject.\n\nA major motivation behind the project was making transport evidence more\naccessible, encouraging evidence-based transport policies. The code base\nunderlying the PCT is publicly available (see\n[github.com/npct](https://github.com/npct/)). However, the code hosted\nthere is not easy to run or reproduce, which is where this package comes\nin: it provides quick access to the data underlying the PCT and enables\nsome of the key results to be reproduced quickly. It was developed\nprimarily for educational purposes (including for upcoming PCT training\ncourses) but it may be useful for people to build on the methods,\nfor example to create a scenario of cycling uptake in their\ntown/city/region.\n\nIn summary, if you want to know how the PCT works, be able to reproduce some\nof its results, and build scenarios of cycling uptake to inform\ntransport policies enabling cycling in cities worldwide, this package is\nfor you\\!\n\n## Installation\n\n``` r\n# from CRAN\ninstall.packages(""pct"")\n```\n\nYou can install the development version of the package as follows:\n\n``` r\nremotes::install_github(""ITSLeeds/pct"")\n```\n\nLoad the package as follows:\n\n``` r\nlibrary(pct)\n```\n\n## Documentation\n\nProbably the best place to get further information on the PCT is from\nthe package\xe2\x80\x99s website at \n\nThere you will find the following vignettes, which we recommend reading,\nreproducing and experimenting with (the code contained within will\ndeepen your understanding of the package), in the following order:\n\n1. A \xe2\x80\x98get started\xe2\x80\x99 introduction to the PCT and associated R package:\n \n2. Getting and using PCT data, an\n [article](https://itsleeds.github.io/pct/articles/getting.html)\n showing how to get and use data from the PCT, based on a case study\n from North Yorkshire\n3. A [training\n vignette](https://itsleeds.github.io/pct/articles/pct_training.html)\n providing more detailed guidance on data provided by the PCT\n package, with interactive exercises based on a case study of the\n Isle of Wight\n4. A\n [vignette](https://itsleeds.github.io/pct/articles/cycling-potential-uk.html)\n showing how to use the data provided by the package to estimate cycling\n uptake in UK cities\n5. A\n [vignette](https://itsleeds.github.io/pct/articles/pct-international.html)\n demonstrating the international applicability of the PCT method,\n with help from this and other R packages\n\nYou will also find there documentation for each of the functions at\n[itsleeds.github.io/pct/reference/](https://itsleeds.github.io/pct/reference/index.html).\nBelow we describe some of the basics.\n\n## Get PCT data\n\nFrom feedback, we hear that the use of the data is critical in decision\nmaking. 
Therefore, one area where the package could be useful is making\nthe data \xe2\x80\x9ceasily\xe2\x80\x9d available to be processed.\n\n - `get_pct`: the basic function to obtain data available\n [here](https://itsleeds.github.io/pct/reference/get_pct.html).\n\nThe rest of these should be self explanatory.\n\n - `get_pct_centroids`\n - `get_pct_lines`\n - `get_pct_rnet`\n - `get_pct_routes_fast`\n - `get_pct_routes_quiet`\n - `get_pct_zones`\n - `uptake_pct_godutch`\n - `uptake_pct_govtarget`\n\nFor example, to get the centroids in West Yorkshire:\n\n``` r\ncentroids = get_pct_centroids(region = ""west-yorkshire"")\nplot(centroids[, ""geo_name""])\n```\n\n\n\nLikewise to download the desire lines for \xe2\x80\x9cwest-yorkshire\xe2\x80\x9d:\n\n``` r\nlines = get_pct_lines(region = ""west-yorkshire"")\nlines = lines[order(lines$all, decreasing = TRUE), c(""all"")]\nplot(lines[1:10,], lwd = 4)\n```\n\n\n\n``` r\n# view the lines on a map\n# mapview::mapview(lines[1:3000, c(""geo_name1"")])\n```\n\n## Estimate cycling uptake\n\nAn important part of the PCT is its ability to create model scenarios of\ncycling uptake. Key to the PCT uptake model is \xe2\x80\x98distance decay\xe2\x80\x99, meaning\nthat short trips are more likely to be cycled than long trips. The\nfunctions `uptake_pct_govtarget()` and `uptake_pct_godutch()` implement\nuptake models used in the PCT, which use distance and hilliness per\ndesire line as inputs and output the proportion of people who could be\nexpected to cycle if that scenario were realised. The scenarios of\ncycling uptake produced by these functions are not predictions of what\n*will* happen, but illustrative snapshots of what *could* happen if\noverall propensity to cycle reached a certain level. The uptake levels\nproduced by Go Dutch and Government Target scenarios (which represent\nincreases in cycling, not final levels) are illustrated in the graph\nbelow (other scenarios could be produced, see the [source\ncode](https://itsleeds.github.io/pct/reference/uptake_pct_govtarget.html)\nsee how these models work):\n\n``` r\nmax_distance = 50\ndistances = 1:max_distance\nmax_hilliness = 5\nhilliness = 0:max_hilliness\nuptake_df = data.frame(\n distances = rep(distances, times = max_hilliness + 1),\n hilliness = rep(hilliness, each = max_distance)\n)\np_govtarget = uptake_pct_govtarget(\n distance = uptake_df$distances,\n gradient = uptake_df$hilliness\n )\np_godutch = uptake_pct_godutch(\n distance = uptake_df$distances,\n gradient = uptake_df$hilliness\n )\nuptake_df = rbind(\n cbind(uptake_df, scenario = ""govtarget"", pcycle = p_govtarget),\n cbind(uptake_df, scenario = ""godutch"", pcycle = p_godutch)\n)\nlibrary(ggplot2)\nggplot(uptake_df) +\n geom_line(aes(\n distances,\n pcycle,\n linetype = scenario,\n colour = as.character(hilliness)\n )) +\n scale_color_discrete(""Gradient (%)"")\n```\n\n\n\nThe proportion of trips made by cycling along each origin-destination\n(OD) pair therefore depends on the trip distance and hilliness. 
The\nequivalent plot for hilliness is as follows:\n\n``` r\ndistances = c(1, 3, 6, 10, 15, 21)\nhilliness = seq(0, 10, by = 0.2)\nuptake_df = \n data.frame(\n expand.grid(distances, hilliness)\n )\nnames(uptake_df) = c(""distances"", ""hilliness"")\np_govtarget = uptake_pct_govtarget(\n distance = uptake_df$distances,\n gradient = uptake_df$hilliness\n )\np_godutch = uptake_pct_godutch(\n distance = uptake_df$distances,\n gradient = uptake_df$hilliness\n )\nuptake_df = rbind(\n cbind(uptake_df, scenario = ""govtarget"", pcycle = p_govtarget),\n cbind(uptake_df, scenario = ""godutch"", pcycle = p_godutch)\n)\nggplot(uptake_df) +\n geom_line(aes(\n hilliness,\n pcycle,\n linetype = scenario,\n colour = formatC(distances, flag = ""0"", width = 2)\n )) +\n scale_color_discrete(""Distance (km)"")\n```\n\n\n\nNote: if distances or gradient values appear to be provided in incorrect\nunits, they will automatically be updated:\n\n``` r\ndistances = uptake_df$distances * 1000\nhilliness = uptake_df$hilliness / 100\nres = uptake_pct_godutch(distances, hilliness, verbose = TRUE)\n#> Distance assumed in m, switching to km\n#> Gradient assumed to be gradient, switching to % (*100)\n```\n\nThe main input dataset into the PCT is OD data and, to convert each OD\npair into a geographic desire line, geographic zone or centroids.\nTypical input data is provided in packaged datasets `od_leeds` and\n`zones_leeds`, as shown in the next section.\n\n## Reproduce PCT for Leeds\n\nThis example shows how scenarios of cycling uptake, and how \xe2\x80\x98distance\ndecay\xe2\x80\x99 works (short trips are more likely to be cycled than long trips).\n\nThe input data looks like this (origin-destination data and geographic\nzone data):\n\n``` r\nclass(od_leeds)\n#> [1] ""tbl_df"" ""tbl"" ""data.frame""\nod_leeds[c(1:3, 12)]\n#> # A tibble: 10 \xc3\x97 4\n#> area_of_residence area_of_workplace all bicycle\n#> \n#> 1 E02002363 E02006875 922 43\n#> 2 E02002373 E02006875 1037 73\n#> 3 E02002384 E02006875 966 13\n#> 4 E02002385 E02006875 958 52\n#> 5 E02002392 E02006875 753 19\n#> 6 E02002404 E02006875 1145 10\n#> 7 E02002411 E02006875 929 27\n#> 8 E02006852 E02006875 1221 99\n#> 9 E02006861 E02006875 1177 56\n#> 10 E02006876 E02006875 1035 10\nclass(zones_leeds)\n#> [1] ""sf"" ""data.frame""\nzones_leeds[1:3, ]\n#> old-style crs object detected; please recreate object with a recent sf::st_crs()\n#> old-style crs object detected; please recreate object with a recent sf::st_crs()\n#> Simple feature collection with 3 features and 6 fields\n#> Geometry type: MULTIPOLYGON\n#> Dimension: XY\n#> Bounding box: xmin: -1.727245 ymin: 53.90046 xmax: -1.294313 ymax: 53.94589\n#> Geodetic CRS: WGS 84\n#> objectid msoa11cd msoa11nm msoa11nmw st_areasha st_lengths\n#> 2270 2270 E02002330 Leeds 001 Leeds 001 3460674 10002.983\n#> 2271 2271 E02002331 Leeds 002 Leeds 002 21870986 26417.665\n#> 2272 2272 E02002332 Leeds 003 Leeds 003 2811303 8586.548\n#> geometry\n#> 2270 MULTIPOLYGON (((-1.392046 5...\n#> 2271 MULTIPOLYGON (((-1.340405 5...\n#> 2272 MULTIPOLYGON (((-1.682211 5...\n```\n\nThe `stplanr` package can be used to convert the non-geographic OD data\ninto geographic desire lines as follows:\n\n``` r\nlibrary(sf)\n#> Linking to GEOS 3.11.1, GDAL 3.6.2, PROJ 9.1.1; sf_use_s2() is TRUE\ndesire_lines = stplanr::od2line(flow = od_leeds, zones = zones_leeds[2])\n#> Creating centroids representing desire line start and end points.\n#> old-style crs object detected; please recreate object with a recent sf::st_crs()\n#> old-style crs object 
detected; please recreate object with a recent sf::st_crs()\n#> old-style crs object detected; please recreate object with a recent sf::st_crs()\n#> old-style crs object detected; please recreate object with a recent sf::st_crs()\n#> old-style crs object detected; please recreate object with a recent sf::st_crs()\n#> old-style crs object detected; please recreate object with a recent sf::st_crs()\n#> old-style crs object detected; please recreate object with a recent sf::st_crs()\nplot(desire_lines[c(1:3, 12)])\n```\n\n\n\nWe can convert these straight lines into routes with a routing service,\ne.g.:\n\n``` r\nsegments_fast = stplanr::route(l = desire_lines, route_fun = cyclestreets::journey)\n#> Most common output is sf\n```\n\nWe got useful information from this routing operation; now we will convert\nthe route segments into complete routes with `dplyr`:\n\n``` r\nlibrary(dplyr)\n#> \n#> Attaching package: \'dplyr\'\n#> The following objects are masked from \'package:stats\':\n#> \n#> filter, lag\n#> The following objects are masked from \'package:base\':\n#> \n#> intersect, setdiff, setequal, union\nroutes_fast = segments_fast %>% \n group_by(area_of_residence, area_of_workplace) %>% \n summarise(\n all = unique(all),\n bicycle = unique(bicycle),\n length = sum(distances),\n av_incline = mean(gradient_smooth) * 100\n ) \n#> `summarise()` has grouped output by \'area_of_residence\'. You can override using\n#> the `.groups` argument.\n```\n\nThe results at the route level are as follows:\n\n``` r\nplot(routes_fast)\n```\n\n\n\nNow we estimate cycling uptake:\n\n``` r\nroutes_fast$uptake = uptake_pct_govtarget(distance = routes_fast$length, gradient = routes_fast$av_incline)\nroutes_fast$bicycle_govtarget = routes_fast$bicycle +\n round(routes_fast$uptake * routes_fast$all)\n```\n\nLet\xe2\x80\x99s see how many people started cycling:\n\n``` r\nsum(routes_fast$bicycle_govtarget) - sum(routes_fast$bicycle)\n#> [1] 408\n```\n\nMore than 400 additional people cycling to work, in just 10 desire lines, is not bad\\!\nWhat % cycling is this, for those routes?\n\n``` r\nsum(routes_fast$bicycle_govtarget) / sum(routes_fast$all)\n#> [1] 0.07985803\nsum(routes_fast$bicycle) / sum(routes_fast$all)\n#> [1] 0.03963324\n```\n\nIt\xe2\x80\x99s gone from 4% to 8%, a realistic increase if cycling were enabled\nby good infrastructure and policies.\n\nNow: where to prioritise that infrastructure and those policies?\n\n``` r\nroutes_fast_linestrings = sf::st_cast(routes_fast, ""LINESTRING"")\nrnet = stplanr::overline(routes_fast_linestrings, attrib = c(""bicycle"", ""bicycle_govtarget""))\nlwd = rnet$bicycle_govtarget / mean(rnet$bicycle_govtarget)\nplot(rnet[""bicycle_govtarget""], lwd = lwd)\n```\n\n\n\nWe can view the results in an interactive map and share with policy\nmakers, stakeholders, and the public\\! E.g. 
(see interactive map\n[here](https://rpubs.com/RobinLovelace/474074)):\n\n``` r\nmapview::mapview(rnet, zcol = ""bicycle_govtarget"", lwd = lwd * 2)\n```\n\n![](https://raw.githubusercontent.com/ITSLeeds/pct/master/pct-leeds-demo.png)\n\n## Limitations\n\n - This package does not contain code to estimate cycling uptake\n associated with intrazonal flows and people with no fixed job data,\n although the datasets downloaded with the `get_pct_centroids()`\n functions provide estimated uptake for intrazonal flows.\n - This package currently does not contain code to estimate health\n benefits.\n\n\n## Testing the package\n\nTest the package with the following code:\n\n``` r\nremotes::install_github(""ITSLeeds/pct"")\ndevtools::check()\n```\n'",,"2019/02/21, 13:48:11",1707,EUPL-1.2,30,585,"2023/05/20, 08:01:31",3,40,123,28,158,2,1.2,0.22605363984674332,"2023/05/20, 07:31:32",v0.9.9,0,8,false,,false,false,,,https://github.com/ITSLeeds,https://environment.leeds.ac.uk/transport,"Leeds, UK",,,https://avatars.githubusercontent.com/u/22447619?v=4,,, goat,A tool capable of modeling walking and cycling accessibility.,goat-community,https://github.com/goat-community/goat.git,github,"accessibility-analysis,isochrones,walking,cycling,accessiblity-scenarios,mobility",Mobility and Transportation,"2023/10/11, 19:54:39",83,0,17,true,Python,Plan4Better,goat-community,"Python,JavaScript,Vue,HTML,PLpgSQL,HCL,Shell,Cython,Makefile,Dockerfile,SCSS,Mako",,"b'
\n\n\n[![Build Status](https://github.com/goat-community/goat/actions/workflows/push.yaml/badge.svg)](https://github.com/goat-community/goat/actions/workflows/push.yaml)\n\n\n[![License: GPL v3](https://img.shields.io/badge/License-GPLv3-blue.svg)](https://www.gnu.org/licenses/gpl-3.0)\n\n
\n\n## About\n\nThis is the home base of the Geo Open Accessibility Tool (GOAT). GOAT is meant to be an open-source, interactive,\nflexible and useful web tool for accessibility planning. Currently the tool is maintained by the startup [Plan4Better](https://plan4better.de) and the [Technical University of Munich (TUM)](https://www.bgu.tum.de/en/sv/homepage/). \n\nFor more information:\n\n[GOAT Docs](https://plan4better.de/docs/background/)\n\n[GOAT demo versions](https://plan4better.de/goatlive/)\n\n[Join the GOAT User Group on Telegram](https://t.me/joinchat/EpAk7BYbIF72q7D3OTUCZQ)\n\n[Follow GOAT on LinkedIn](https://www.linkedin.com/company/plan4better)\n\n[Follow GOAT on Twitter](https://twitter.com/plan4better)\n\n## Scientific Publications\n\nIf you are interested in scientific publications related to GOAT, check out the following: \n\nPajares, E.; B\xc3\xbcttner, B.; Jehle, U.; Nichols, A.; Wulfhorst, G. Accessibility by proximity: Addressing the lack of interactive\naccessibility instruments for active mobility. J. Transp. Geogr. 2021, 93, 103080, https://doi.org/10.1016/j.jtrangeo.2021.103080.\n\nPajares, E.; Mu\xc3\xb1oz Nieto, R.; Meng, L.; Wulfhorst, G. Population Disaggregation on the Building Level Based on Outdated Census Data. ISPRS Int. J. Geo-Inf. 2021, 10, 662. https://doi.org/10.3390/ijgi10100662\n\nPajares, E., Jehle, U., 2021. GOAT: Ein interaktives Erreichbarkeitsinstrument zur Planung der 15-Minuten-Stadt, in: Fl\xc3\xa4chennutzungsmonitoring XIII: Fl\xc3\xa4chenpolitik - Konzepte\xc2\xa0- Analysen - Tools, I\xc3\x96R Schriften. Rhombos-Verlag, Berlin, pp. 265\xe2\x80\x93273. https://doi.org/10.26084/13dfns-p024\n\nPajares, E., Jehle, U., Hall, J., Miramontes, M., Wulfhorst, G., 2022. Assessment of the usefulness of the accessibility instrument GOAT for the planning practice. Journal of Urban Mobility 2, 100033. https://doi.org/10.1016/j.urbmob.2022.100033\n\nJehle, U., Pajares, E., Analyse der Fu\xc3\x9fwegequalit\xc3\xa4ten zu Schulen \xe2\x80\x93 Entwicklung von Indikatoren auf Basis von OpenData. In: Meinel, G.; Kr\xc3\xbcger, T.; Behnisch, M.; Ehrhardt, D. (Hrsg.): Fl\xc3\xa4chennutzungsmonitoring XIII: Fl\xc3\xa4chenpolitik - Konzepte - Analysen - Tools. Berlin: Rhombos-Verlag, 2021, (I\xc3\x96R-Schriften Band 79), S. 221-232, https://doi.org/10.26084/13dfns-p020\n\nPajares, E., 2022. Dissertation: Development, application, and testing of an accessibility instrument for planning active mobility. Technische Universit\xc3\xa4t M\xc3\xbcnchen. 
https://mediatum.ub.tum.de/680889?show_id=1688064\n\n'",",https://doi.org/10.1016/j.jtrangeo.2021.103080.\n\nPajares,https://doi.org/10.3390/ijgi10100662\n\nPajares,https://doi.org/10.26084/13dfns-p024\n\nPajares,https://doi.org/10.1016/j.urbmob.2022.100033\n\nJehle,https://doi.org/10.26084/13dfns-p020\n\nPajares","2018/09/30, 11:09:17",1851,GPL-3.0,957,6942,"2023/10/24, 19:09:31",305,1062,2415,960,1,7,0.8,0.5985487350460874,"2023/05/10, 10:39:21",v1.5,0,18,false,,true,true,,,https://github.com/goat-community,https://plan4better.de,Germany,,,https://avatars.githubusercontent.com/u/58163984?v=4,,, gtfs-router,An R package for routing with GTFS (General Transit Feed Specification) data.,ATFutures,https://github.com/UrbanAnalyst/gtfsrouter.git,github,"gtfs,public-transportation,router,r-package,gtfsrouter",Mobility and Transportation,"2023/09/25, 08:55:28",78,0,19,true,R,Urban Analyst,UrbanAnalyst,"R,C++,Makefile",https://urbananalyst.github.io/gtfsrouter/,"b'# gtfsrouter \n\n[![R build\nstatus](https://github.com/UrbanAnalyst/gtfsrouter/workflows/R-CMD-check/badge.svg)](https://github.com/UrbanAnalyst/gtfsrouter/actions?query=workflow%3AR-CMD-check)\n[![codecov](https://codecov.io/gh/UrbanAnalyst/gtfsrouter/branch/main/graph/badge.svg)](https://app.codecov.io/gh/UrbanAnalyst/gtfsrouter)\n[![Project Status:\nActive](https://www.repostatus.org/badges/latest/active.svg)](https://www.repostatus.org/#active)\n[![CRAN_Status_Badge](http://www.r-pkg.org/badges/version/gtfsrouter)](https://cran.r-project.org/package=gtfsrouter)\n[![CRAN\nDownloads](http://cranlogs.r-pkg.org/badges/grand-total/gtfsrouter?color=orange)](https://cran.r-project.org/package=gtfsrouter)\n\n**R** package for public transport routing with [GTFS (General Transit\nFeed Specification)](https://developers.google.com/transit/gtfs/) data.\n\n## Installation\n\nYou can install latest stable version of `gtfsrouter` from CRAN with:\n\n``` r\ninstall.packages (""gtfsrouter"")\n```\n\nAlternatively, the current development version can be installed using\nany of the following options:\n\n``` r\n# install.packages(""remotes"")\nremotes::install_git (""https://git.sr.ht/~mpadge/gtfsrouter"")\nremotes::install_git (""https://codeberg.org/UrbanAnalyst/gtfsrouter"")\nremotes::install_bitbucket (""urbananalyst/gtfsrouter"")\nremotes::install_gitlab (""UrbanAnalyst/gtfsrouter"")\nremotes::install_github (""UrbanAnalyst/gtfsrouter"")\n```\n\nTo load the package and check the version:\n\n``` r\nlibrary (gtfsrouter)\npackageVersion (""gtfsrouter"")\n```\n\n ## [1] \'0.0.5.158\'\n\n## Main functions\n\nThe main functions can be demonstrated with sample data included with\nthe package from Berlin (the \xe2\x80\x9cVerkehrverbund Berlin Brandenburg\xe2\x80\x9d, or\nVBB). 
GTFS data are always stored as `.zip` files, and these sample data\ncan be written to the temporary directory (`tempdir()`) of the current R\nsession with the function `berlin_gtfs_to_zip()`.\n\n``` r\nfilename <- berlin_gtfs_to_zip ()\nprint (filename)\n```\n\n ## [1] ""/tmp/RtmpeXCbTq/vbb.zip""\n\nFor normal package use, `filename` will specify the name of a local GTFS\n`.zip` file.\n\n### gtfs_route\n\nGiven the name of a GTFS `.zip` file, `filename`, routing is as simple\nas the following code:\n\n``` r\ngtfs <- extract_gtfs (filename)\ngtfs <- gtfs_timetable (gtfs, day = ""Wed"") # A pre-processing step to speed up queries\ngtfs_route (gtfs,\n from = ""Tegel"",\n to = ""Berlin Hauptbahnhof"",\n start_time = 12 * 3600 + 120\n) # 12:02 in seconds\n```\n\n| route_name | trip_name | stop_name | arrival_time | departure_time |\n|:-----------|:-----------------|:--------------------------------|:-------------|:---------------|\n| U8 | U Paracelsus-Bad | U Schonleinstr. (Berlin) | 12:04:00 | 12:04:00 |\n| U8 | U Paracelsus-Bad | U Kottbusser Tor (Berlin) | 12:06:00 | 12:06:00 |\n| U8 | U Paracelsus-Bad | U Moritzplatz (Berlin) | 12:08:00 | 12:08:00 |\n| U8 | U Paracelsus-Bad | U Heinrich-Heine-Str. (Berlin) | 12:09:30 | 12:09:30 |\n| U8 | U Paracelsus-Bad | S+U Jannowitzbrucke (Berlin) | 12:10:30 | 12:10:30 |\n| S5 | S Westkreuz | S+U Jannowitzbrucke (Berlin) | 12:15:24 | 12:15:54 |\n| S5 | S Westkreuz | S+U Alexanderplatz Bhf (Berlin) | 12:17:24 | 12:18:12 |\n| S5 | S Westkreuz | S Hackescher Markt (Berlin) | 12:19:24 | 12:19:54 |\n| S5 | S Westkreuz | S+U Friedrichstr. Bhf (Berlin) | 12:21:24 | 12:22:12 |\n| S5 | S Westkreuz | S+U Berlin Hauptbahnhof | 12:24:06 | 12:24:42 |\n\n### gtfs_traveltimes\n\nThe [`gtfs_traveltimes()`\nfunction\\`](https://UrbanAnalyst.github.io/gtfsrouter/reference/gtfs_traveltimes.html)\ncalculates minimal travel times from any nominated stop to all other\nstops within a feed. It requires the two parameters of start station,\nand a vector of two values specifying earliest and latest desired start\ntimes. 
The following code returns the fastest travel times to all\nstations within the feed for services which leave the nominated station\n(\xe2\x80\x9cAlexanderplatz\xe2\x80\x9d) between 12:00 and 13:00 on a Monday:\n\n``` r\ngtfs <- extract_gtfs (filename)\ngtfs <- gtfs_timetable (gtfs, day = ""Monday"")\nx <- gtfs_traveltimes (gtfs,\n from = ""Alexanderplatz"",\n start_time_limits = c (12, 13) * 3600\n)\n```\n\nThe function returns a simple table detailing all stations reachable\nwith services departing from the nominated station and start times:\n\n``` r\nhead (x)\n```\n\n| start_time | duration | ntransfers | stop_id | stop_name | stop_lon | stop_lat |\n|:-----------|:---------|-----------:|:-------------|:------------------------|---------:|---------:|\n| 12:00:42 | 00:14:42 | 1 | 060003102223 | S Bellevue (Berlin) | 13.34710 | 52.51995 |\n| 12:00:42 | 00:08:36 | 0 | 060003102224 | S Bellevue (Berlin) | 13.34710 | 52.51995 |\n| 12:00:42 | 00:15:06 | 1 | 060003103233 | S Tiergarten (Berlin) | 13.33624 | 52.51396 |\n| 12:00:42 | 00:10:42 | 0 | 060003103234 | S Tiergarten (Berlin) | 13.33624 | 52.51396 |\n| 12:00:42 | 00:14:18 | 1 | 060003201213 | S+U Berlin Hauptbahnhof | 13.36892 | 52.52585 |\n| 12:00:42 | 00:05:54 | 0 | 060003201214 | S+U Berlin Hauptbahnhof | 13.36892 | 52.52585 |\n\nFurther details are provided in a [separate\nvignette](https://UrbanAnalyst.github.io/gtfsrouter/articles/traveltimes.html).\n\n### gtfs_transfer_table\n\nFeeds should include a \xe2\x80\x9ctransfers.txt\xe2\x80\x9d table detailing all possible\ntransfers between nearby stations, yet many feeds omit these tables,\nrendering them unusable for routing because transfers between services\ncan not be calculated. The `gtfsrouter` package also includes a\nfunction,\n[`gtfs_transfer_table()`](https://UrbanAnalyst.github.io/gtfsrouter/reference/gtfs_transfer_table.html),\nwhich can calculate a transfer table for a given feed, with transfer\ntimes calculated either using straight-line distances (the default), or\nusing more realistic pedestrian times routed through the underlying\nstreet network.\n\nThis function can also be used to enable routing through multiple\nadjacent or overlapping GTFS feeds. The feeds need simply be merged\nthrough binding the rows of all tables, and the resultant aggregate feed\nsubmitted to the\n[`gtfs_transfer_table()`](https://UrbanAnalyst.github.io/gtfsrouter/reference/gtfs_transfer_table.html)\nfunction. This transfer table will retain all transfers specified in the\noriginal feeds, yet be augmented by all possible transfers between the\nmultiple systems up to a user-specified maximal distance. Further\ndetails of this function are also provided in another [separate\nvignette](https://UrbanAnalyst.github.io/gtfsrouter/articles/transfers.html).\n\n## Additional Functionality\n\nThere are many ways to construct GTFS feeds. For background information,\nsee [`gtfs.org`](https://gtfs.org), and particularly their [GTFS\nExamples](https://docs.google.com/document/d/16inL5BVcM1aU-_DcFJay_tC6Ni0wPa0nvQEstueG5k4/edit).\nFeeds may include a \xe2\x80\x9cfrequencies.txt\xe2\x80\x9d table which defines \xe2\x80\x9cservice\nperiods\xe2\x80\x9d, and overrides any schedule information during the specified\ntimes. 
The `gtfsrouter` package includes a function,\n[`frequencies_to_stop_times()`](https://UrbanAnalyst.github.io/gtfsrouter/reference/frequencies_to_stop_times.html),\nto convert “frequencies.txt” tables to equivalent “stop_times.txt”\nentries, to enable the feed to be used for routing.\n\n## Contributors\n\nAll contributions to this project are gratefully acknowledged using the [`allcontributors` package](https://github.com/ropenscilabs/allcontributors) following the [all-contributors](https://allcontributors.org) specification. Contributions of any kind are welcome!\n\n### Code\n\nmpadge, AlexandraKapp, stmarcin, dhersz, polettif\n\n### Issue Authors\n\nsridharraman, orlandombaa, Maxime2506, chinhqho, federicotallis, rafapereirabr, dcooley, bernd886, stefan-overkamp, luukvdmeer, szaboildi, cseveren, jh0ker, zamirD123, viajerus, jmertic, 5balls, pteridin\n\n### Issue Contributors\n\ntbuckl, tuesd4y, Robinlovelace, loanho23, abyrd, hansmib
\n\n\n\n\n'",,"2019/01/28, 08:28:13",1731,GPL-3.0,109,885,"2023/08/21, 12:56:02",12,11,99,21,65,0,0.2,0.03472222222222221,"2023/09/24, 11:04:07",v0.1.2,0,5,false,,false,false,,,https://github.com/UrbanAnalyst,https://UrbanAnalyst.city,Germany,,,https://avatars.githubusercontent.com/u/110684109?v=4,,, CityFlow,A Multi-Agent Reinforcement Learning Environment for Large Scale City Traffic Scenario.,cityflow-project,https://github.com/cityflow-project/CityFlow.git,github,"multiagent-systems,multiagent-reinforcement-learning,traffic-simulation,traffic-signal-control",Mobility and Transportation,"2020/12/04, 03:30:19",685,0,115,true,C++,,cityflow-project,"C++,Python,JavaScript,HTML,CMake,CSS,Dockerfile",https://cityflow-project.github.io,"b'CityFlow\n============\n\n.. image:: https://readthedocs.org/projects/cityflow/badge/?version=latest\n :target: https://cityflow.readthedocs.io/en/latest/?badge=latest\n :alt: Documentation Status\n\n.. image:: https://dev.azure.com/CityFlow/CityFlow/_apis/build/status/cityflow-project.CityFlow?branchName=master\n :target: https://dev.azure.com/CityFlow/CityFlow/_build/latest?definitionId=2&branchName=master\n :alt: Build Status\n\nCityFlow is a multi-agent reinforcement learning environment for large-scale city traffic scenario.\n\nCheckout these features!\n\n- A microscopic traffic simulator which simulates the behavior of each vehicle, providing highest level detail of traffic evolution.\n- Supports flexible definitions for road network and traffic flow\n- Provides friendly python interface for reinforcement learning\n- **Fast!** Elaborately designed data structure and simulation algorithm with multithreading. Capable of simulating city-wide traffic. See the performance comparison with SUMO [#sumo]_.\n\n.. figure:: https://user-images.githubusercontent.com/44251346/54403537-5ce16b00-470b-11e9-928d-76c8ba0ab463.png\n :align: center\n :alt: performance compared with SUMO\n\n Performance comparison between CityFlow with different number of threads (1, 2, 4, 8) and SUMO. From small 1x1 grid roadnet to city-level 30x30 roadnet. Even faster when you need to interact with the simulator through python API.\n\nScreencast\n----------\n\n.. figure:: https://user-images.githubusercontent.com/44251346/62375390-c9e98600-b570-11e9-8808-e13dbe776f1e.gif\n :align: center\n :alt: demo\n\nFeatured Research and Projects Using CityFlow\n---------------------------------------------\n- `PressLight: Learning Max Pressure Control to Coordinate Traffic Signals in Arterial Network (KDD 2019) `_\n- `CoLight: Learning Network-level Cooperation for Traffic Signal Control `_\n- `Traffic Signal Control Benchmark `_\n- `TSCC2050: A Traffic Signal Control Game by Tianrang Intelligence (in Chinese) `_ [#tianrang]_\n\nLinks\n-----\n\n- `WWW 2019 Demo Paper `_\n- `Home Page `_\n- `Documentation and Quick Start `_\n- `Docker `_\n\n\n.. [#sumo] `SUMO home page `_\n.. 
Screencast\n----------\n\n.. figure:: https://user-images.githubusercontent.com/44251346/62375390-c9e98600-b570-11e9-8808-e13dbe776f1e.gif\n :align: center\n :alt: demo\n\nFeatured Research and Projects Using CityFlow\n---------------------------------------------\n- `PressLight: Learning Max Pressure Control to Coordinate Traffic Signals in Arterial Network (KDD 2019) `_\n- `CoLight: Learning Network-level Cooperation for Traffic Signal Control `_\n- `Traffic Signal Control Benchmark `_\n- `TSCC2050: A Traffic Signal Control Game by Tianrang Intelligence (in Chinese) `_ [#tianrang]_\n\nLinks\n-----\n\n- `WWW 2019 Demo Paper `_\n- `Home Page `_\n- `Documentation and Quick Start `_\n- `Docker `_\n\n\n.. [#sumo] `SUMO home page `_\n.. [#tianrang] `Tianrang Intelligence home page `_\n'",",https://arxiv.org/abs/1905.05717,https://arxiv.org/abs/1905.05217","2019/02/26, 04:16:52",1702,Apache-2.0,0,74,"2023/09/27, 14:17:34",33,49,140,16,28,4,0.2,0.5675675675675675,,,0,8,false,,false,false,,,https://github.com/cityflow-project,,,,,https://avatars.githubusercontent.com/u/48002753?v=4,,, Complete_Street_Rule,An ArcGIS CityEngine scenario-oriented design tool intended to enable users to quickly create procedurally generated multimodal streets.,d-wasserman,https://github.com/d-wasserman/Complete_Street_Rule.git,github,"complete,streets,transportation,cityengine,procedural-generation,cga,treatments,street-rule,geometry,simulation,environmental-modelling,urban-planning,arcgis-cityengine,street-configuration,urban-design",Mobility and Transportation,"2023/04/10, 05:23:52",148,0,30,true,Python,,,Python,https://geonet.esri.com/docs/DOC-6915,"b'# What is the Complete Street Rule?\nThe Complete Street Rule is a scenario-oriented design tool intended to enable users to quickly create procedurally generated multimodal streets in ArcGIS CityEngine. The rule incorporates knowledge and ideas from various sources of transportation planning guidance including NACTO Design Guidelines, AASHTO Design Recommendations, and MUTCD standards. The goal of the rule is to enable the 3D representation of a diversity of street configurations to support multimodal planning in urban areas and provide some basis for before and after comparisons of street treatments and transportation investments in ArcGIS CityEngine. Alongside being a quick-response parametric visualization tool for streets, this street rule has dynamic performance metrics and reports that react to changes in a street\'s configuration and related parameters. These performance metrics provide a template for how procedural rules can create a responsive connection between design, metrics, and visualization that enables the rapid exploration and communication of different design scenarios. This street rule is well suited to representing transportation planning treatments for complete streets and common highway configurations that might include shoulders, jersey barriers, and HOV lanes. By being a part of ArcGIS CityEngine, the Complete Street Rule can create 3D models of streets that can be exported to different 3D formats, scene layer packages to be shared over the web, and even exports compatible with game engines such as Unity & Unreal to create virtual experiences as part of public outreach efforts. \n\nThis is an updated repository for a modified version of the ESRI Complete Street rule by the original rule author.\n\n## Scenario Oriented Design Tool\n\n![alt tag](/images/CSRuleCEDemo.gif)\n\n# Instructions\n\nIf you are new to using CityEngine, then the instructions on this [page](Instructions.md) provide step-by-step instructions on how to open the project or integrate the rule into an existing project. \n\n\n# Key Features of the Complete Street Rule\n\n* Enables Quick Visualization of Multiple Features of Complete Streets: The rule can be used to quickly iterate on high-level cross-sectional designs for streets through changes to its parameters. The features that can be visualized include bike lanes, bike lane buffers, shared-use lanes, bus lanes, HOV lanes, parking lanes, medians, two-way left turn lanes, and sidewalks featuring trees, street furniture, and other amenities. 
\n\n* Dynamic Performance Metrics & Analytics: The Complete Street rule includes a diversity of reports that can be leveraged inside of CityEngine to power dynamic dashboards that react to changes to a street\'s configuration and design. The supported metrics include modal preference metrics such as level of traffic stress for bicyclists, metrics related to curbside allocations and parking space counts, metrics related to how much space on the street overall is allocated to different modes of transport, vegetation & impervious cover amounts, and speed related metrics. \n\n* Support for Curbside Management: The rule\'s options for parallel parking include options to reallocate curb space to other uses. These curbspace management options provide a template for how cities can reallocate curbspace to support micro-mobility (scooters/bikeshare/DoBi), transit operations, freight loading zones, and passenger drop-off locations to support TNC/Taxi operations and in preparation for supporting shared autonomous vehicles.\n\n* Mode Focused Thematics: Allows a user to highlight specific improvements to a street with custom color choices. For example, if you add a bike lane and select ""Bicycle Highlight"" thematic, the solid color attribute will only highlight added bike lanes. Also, the addition of a All Mode Preference option helps visualize all the mode preference reports at once. There are also options for NACTO themed highlights of the street, and preliminary support for semantic highlighting for the purpose of supporting synthetic data generation for deep learning models. \n\n* CityEngine Handles Support: Local Edits allow randomly generated and spaced assets to be moved within a CityEngine model rather than post processed in Photoshop or some other 3D modeling software. Current assets and elements that can be edited with handles include: Street Lamps|Traffic Lights|Trees|Benches|Curbside Allocations.\n\n* Support for Multiple Levels of Detail (LOD): If LOD is set to High, the street will now pick default population parameters to make the street seem occupied. LOD Settings are now Low (Asset choice changes to reduce polygon count), Moderate (high polygon assets/choices), and High/Very High (high polygon assets and populated streets).\n\n* Support for Asset Replacement: Use of stencils instead of multi-color textures enable clean asset replacements in a variety of platforms. In addition, the rule will label relevant objects and shapes based on their location making it easier to replace assets in platforms such as TwinMotion, Unreal Engine, or Unity. \n\n[![alt tag](/images/Road_Diet_Update.jpg)](https://www.youtube.com/watch?v=6t4TYrB0TZ4)\n\n# Citations\nIf you use the complete street rule in academic research or as part of professional reports, please cite the rule as the following: \n\n\nWasserman, D. Complete Street Rule. 
(2015) GitHub repository, https://github.com/d-wasserman/Complete_Street_Rule.\n'",,"2015/09/01, 05:15:46",2976,Apache-2.0,7,300,"2023/01/12, 03:19:27",3,27,47,6,286,0,0.0,0.007299270072992692,"2023/01/12, 05:35:22",2.9.1,0,1,false,,true,false,,,,,,,,,,, tesla_powerwall,Python Tesla Powerwall API for consuming a local endpoint.,jrester,https://github.com/jrester/tesla_powerwall.git,github,"python,api,tesla,tesla-powerwall,battery,powerwall,powerwall-api,powerwall-status",Mobility and Transportation,"2023/09/16, 13:03:57",68,0,20,true,Python,,,Python,,"b'![Licence](https://img.shields.io/github/license/jrester/tesla_powerwall?style=for-the-badge)\n![PyPI - Downloads](https://img.shields.io/pypi/dm/tesla_powerwall?color=blue&style=for-the-badge)\n![PyPI](https://img.shields.io/pypi/v/tesla_powerwall?style=for-the-badge)\n\nPython Tesla Powerwall API for consuming a local endpoint.\n> Note: This is not an official API provided by Tesla and this project is not affilated with Tesla in any way.\n\nPowerwall Software versions from 1.47.0 to 1.50.1 as well as 20.40 to 22.9.2 are tested, but others will probably work too.\n\n# Table of Contents \n\n- [Installation](#installation)\n- [Limitations](#limitations)\n - [Adjusting Backup Reserve Percentage](#adjusting-backup-reserve-percentage)\n- [Usage](#usage)\n - [Setup](#setup)\n - [Authentication](#authentication)\n - [General](#general)\n - [Errors](#errors)\n - [Response](#response)\n - [Battery level](#battery-level)\n - [Capacity](#capacity)\n - [Battery Packs](#battery-packs)\n - [Powerwall Status](#powerwall-status)\n - [Sitemaster](#sitemaster)\n - [Siteinfo](#siteinfo)\n - [Meters](#meters)\n - [Aggregates](#aggregates)\n - [Current power supply/draw](#current-power-supplydraw)\n - [Energy exported/imported](#energy-exportedimported)\n - [Details](#details)\n - [Device Type](#device-type)\n - [Grid Status](#grid-status)\n - [Operation mode](#operation-mode)\n - [Powerwalls Serial Numbers](#powerwalls-serial-numbers)\n - [Gateway DIN](#gateway-din)\n - [VIN](#vin)\n - [Off-grid status](#off-grid-status-set-island-mode)\n\n## Installation\n\nInstall the library via pip:\n\n```bash\n$ pip install tesla_powerwall\n```\n\n## Limitations\n\n### Adjusting Backup Reserve Percentage\n\nCurrently it is not possible to control the Backup Percentage, because you need to be logged in as installer, which requires physical switch toggle. There is an ongoing discussion about a possible solution [here](https://github.com/vloschiavo/powerwall2/issues/55).\nHowever, if you believe there exists a solution, feel free to open an issue detailing the solution.\n\n## Usage\n\nFor a basic Overview of the functionality of this library you can take a look at `examples/example.py`:\n\n```bash\n$ export POWERWALL_IP=\n$ export POWERWALL_PASSWORD=\n$ python3 examples/example.py\n```\n\n### Setup\n\n```python\nfrom tesla_powerwall import Powerwall\n\n# Create a simple powerwall object by providing the IP\npowerwall = Powerwall("""")\n#=> \n\n# Create a powerwall object with more options\npowerwall = Powerwall(\n endpoint="""",\n # Configure timeout; default is 10\n timeout=10,\n # Provide a requests.Session or None. If None is provided, a Session will be created.\n http_session=None,\n # Whether to verify the SSL certificate or not\n verify_ssl=False,\n disable_insecure_warning=True\n)\n#=> \n```\n\n> Note: By default the API client does not verify the SSL certificate of the Powerwall. 
If you want to verify the SSL certificate, you can set `verify_ssl` to `True`.\n> The API client suppresses warnings about an insecure request (because we aren\'t verifying the certificate). If you want to enable those warnings, you can set `disable_insecure_warning` to `False`.\n\n### Authentication\n\nSince version 20.49.0, authentication is required for all methods. For that reason you must call `login` before making a request to the API.\nWhen you perform a request without being authenticated, an `AccessDeniedError` will be thrown.\n\nTo log in you can either use `login` or `login_as`. `login` logs you in as `User.CUSTOMER`, whereas with `login_as` you can choose a different user:\n\n```python\nfrom tesla_powerwall import User\n\n# Login as customer without email\n# The default value for the email is """"\npowerwall.login("""")\n#=> \n\n# Login as customer with email\npowerwall.login("""", """")\n#=> \n\n# Login with different user\npowerwall.login_as(User.INSTALLER, """", """")\n#=> \n\n# Check if we are logged in\n# This method only checks whether a cookie with a Bearer token exists\n# It does not verify whether this token is valid\npowerwall.is_authenticated()\n#=> True\n\n# Logout\npowerwall.logout()\npowerwall.is_authenticated()\n#=> False\n```\n\n### General\n\nThe API object directly maps the REST endpoints with a Python method in the form of `_`. So if you need the raw json responses you can use the API object. It can be either created manually or retrieved from an existing `Powerwall`:\n\n```python\nfrom tesla_powerwall import API\n\n# Manually create API object\napi = API(\'https:///\')\n# Perform get on \'system_status/soe\'\napi.get_system_status_soe()\n#=> {\'percentage\': 97.59281925744594}\n\n# From existing powerwall\napi = powerwall.get_api()\napi.get_system_status_soe()\n```\n\nThe `Powerwall` object provides a wrapper around the API and exposes common methods.\n\n### Battery level\n\nGet charge in percent:\n\n```python\npowerwall.get_charge()\n#=> 97.59281925744594 (%)\n```\n\nGet charge in watt-hours:\n\n```python\npowerwall.get_energy()\n#=> 14807 (Wh)\n```\n\n### Capacity\n\nGet the capacity of your powerwall in watt-hours:\n\n```python\npowerwall.get_capacity()\n#=> 28078 (Wh)\n```\n\n### Battery Packs\n\nGet information about the battery packs that are installed:\n\n```python\nbatteries = powerwall.get_batteries()\n#=> [, ]\nbatteries[0].part_number\n#=> ""XXX-G""\nbatteries[0].serial_number\n#=> ""TGXXX""\nbatteries[0].energy_remaining\n#=> 7378 (Wh)\nbatteries[0].capacity\n#=> 14031 (Wh)\nbatteries[0].energy_charged\n#=> 5525740 (Wh)\nbatteries[0].energy_discharged\n#=> 4659550 (Wh)\nbatteries[0].wobble_detected\n#=> False\n```\n\n### Powerwall Status\n\n```python\nstatus = powerwall.get_status()\n#=> \nstatus.version\n#=> \'1.49.0\'\nstatus.up_time_seconds\n#=> datetime.timedelta(days=13, seconds=63287, microseconds=146455)\nstatus.start_time\n#=> datetime.datetime(2020, 9, 23, 23, 31, 16, tzinfo=datetime.timezone(datetime.timedelta(seconds=28800)))\nstatus.device_type\n#=> DeviceType.GW2\n```\n\n### Sitemaster\n\n```python\nsm = powerwall.sitemaster\n#=> \nsm.status\n#=> StatusUp\nsm.running\n#=> true\nsm.connected_to_tesla\n#=> true\n```\n\nThe sitemaster can be started and stopped using `run()` and `stop()`.\n\n### Siteinfo\n\n```python\ninfo = powerwall.get_site_info()\n#=> \ninfo.site_name\n#=> \'Tesla Home\'\ninfo.country\n#=> \'Germany\'\ninfo.nominal_system_energy\n#=> 13.5 (kWh)\ninfo.timezone\n#=> \'Europe/Berlin\'\n```\n\n### Meters\n\n#### Aggregates\n\n```python\n
from tesla_powerwall import MeterType\n\nmeters = powerwall.get_meters()\n#=> \n\n# access meter, but may return None when meter is not available\nmeters.get_meter(MeterType.SOLAR)\n#=> \n\n# access meter, but may raise MeterNotAvailableError when the meter is not available at your powerwall (e.g. no solar panels installed)\nmeters.solar\n#=> \n\n# get all available meters at the current powerwall\nmeters.meters.keys()\n#=> [, , , ]\n```\n\nAvailable meters are: `solar`, `site`, `load`, `battery`, `generator`, and `busway`. Some of those meters might not be available based on the installation and raise MeterNotAvailableError when accessed.\n\n#### Current power supply/draw\n\n`Meter` provides different methods for checking current power supply/draw:\n\n```python\nmeters = powerwall.get_meters()\nmeters.solar.get_power()\n#=> 0.4 (kW)\nmeters.solar.instant_power\n#=> 409.941801071167 (W)\nmeters.solar.is_drawing_from()\n#=> True\nmeters.load.is_sending_to()\n#=> True\nmeters.battery.is_active()\n#=> False\n\n# Different precision settings might return different results\nmeters.battery.is_active(precision=5)\n#=> True\n```\n\n> Note: For MeterType.LOAD `is_drawing_from` **always** returns `False` because it cannot be drawn from `load`.\n\n#### Energy exported/imported\n\nGet energy exported/imported in watt-hours (Wh) with `energy_exported` and `energy_imported`. For the values in kilowatt-hours (kWh) use `get_energy_exported` and `get_energy_imported`:\n\n```python\nmeters.battery.energy_exported\n#=> 6394100 (Wh)\nmeters.battery.get_energy_exported()\n#=> 6394.1 (kWh)\nmeters.battery.energy_imported\n#=> 7576570 (Wh)\nmeters.battery.get_energy_imported()\n#=> 7576.6 (kWh)\n```\n\n### Details\n\nYou can retrieve more detailed information about the meters `site` and `solar`:\n\n```python\nmeter_details = powerwall.get_meter_site() # or get_meter_solar() for the solar meter\n#=> \nreadings = meter_details.readings\n#=> \nreadings.real_power_a # same for real_power_b and real_power_c\n#=> 619.13532458\nreadings.i_a_current # same for i_b_current and i_c_current\n#=> 3.02\nreadings.v_l1n # same for v_l2n and v_l3n\n#=> 235.82\nreadings.instant_power\n#=> -18.000023458\nreadings.is_sending()\n```\n\nAs `MeterDetailsReadings` inherits from `MeterResponse` (which is used in `MetersAggregatesResponse`) it exposes the same data and methods.\n\n> For the meters battery and grid no additional details are provided, therefore no methods exist for those meters.\n\n### Device Type\n\n```python\npowerwall.get_device_type()\n#=> \n```\n\n### Grid Status\n\nGet the current grid status.\n\n```python\npowerwall.get_grid_status()\n#=> \npowerwall.is_grid_services_active()\n#=> False\n```\n\n### Operation mode\n\n```python\npowerwall.get_operation_mode()\n#=> \npowerwall.get_backup_reserve_percentage()\n#=> 5.000019999999999 (%)\n```\n\n### Powerwalls Serial Numbers\n\n```python\nserials = powerwall.get_serial_numbers()\n#=> [""..."", ""..."", ...]\n```\n\n### Gateway DIN\n\n```python\ndin = powerwall.get_gateway_din()\n#=> 4159645-02-A--TGXXX\n```\n\n### VIN\n\n```python\nvin = powerwall.get_vin()\n```\n\n### Off-grid status (Set Island mode)\n\nTake your powerwall on- and off-grid similar to the ""Take off-grid"" button in the Tesla app.\n\n#### Set powerwall to off-grid (Islanded)\n\n```python\nfrom tesla_powerwall import IslandMode\n\npowerwall.set_island_mode(IslandMode.OFFGRID)\n```\n\n#### Set powerwall to on-grid (Connected)\n\n```python\npowerwall.set_island_mode(IslandMode.ONGRID)\n```\n
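Putting several of the calls above together, here is a minimal, hedged monitoring sketch. The gateway address and password are placeholders, and which meters exist depends on the installation:

```python
from tesla_powerwall import MeterType, Powerwall

powerwall = Powerwall(""192.168.91.1"")  # placeholder gateway address
powerwall.login(""password"")            # placeholder credentials

print(powerwall.get_charge())  #=> battery level in percent

meters = powerwall.get_meters()
for meter_type in (MeterType.SOLAR, MeterType.LOAD, MeterType.BATTERY):
    meter = meters.get_meter(meter_type)  # returns None if unavailable
    if meter is not None:
        print(meter_type, meter.get_power())  # power in kW

powerwall.logout()
```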
# Development\n\n## Building\n\n```sh\n$ python -m build\n```\n\n## Testing\n\n### Unit-Tests\n\nTo run unit tests, use tox:\n\n```sh\n$ tox -e unit\n```\n\n### Integration-Tests\n\n```sh\n$ tox -e integration\n```\n'",,"2019/02/12, 19:50:07",1716,MIT,6,143,"2023/09/17, 15:17:41",2,22,52,13,38,0,1.1,0.16000000000000003,"2023/09/16, 13:13:28",v0.4.0,0,13,false,,false,false,,,,,,,,,,, Vehicle Energy Dataset,A large-scale dataset for vehicle energy consumption research.,gsoh,https://github.com/gsoh/VED.git,github,,Mobility and Transportation,"2022/01/26, 21:05:07",73,0,21,false,,,,,,"b'# VED (Vehicle Energy Dataset)\nA novel large-scale database for fuel and energy use of diverse vehicles in the real world.\n\nVED captures GPS trajectories of vehicles along with their time-series data of fuel, energy, speed, and auxiliary power usage; the data was collected through onboard OBD-II loggers from Nov 2017 to Nov 2018.\nThe fleet consists of a total of 383 personal cars (264 gasoline vehicles, 92 HEVs, and 27 PHEV/EVs) in Ann Arbor, Michigan, USA. \nDriving scenarios range from highways to the traffic-dense downtown area in various driving conditions and seasons. \nIn total, VED accumulates approximately 374,000 miles. \n\nA number of examples were presented in the paper to demonstrate how VED can be utilized for vehicle energy and behavior studies. Potential research opportunities include data-driven vehicle energy consumption modeling, driver behavior modeling, machine and deep learning, calibration of traffic simulators, optimal route choice modeling, prediction of human driver behaviors, and decision making of self-driving cars.\n\nLink to the paper: \n[Vehicle Energy Dataset (VED), A Large-scale Dataset for Vehicle Energy Consumption Research](https://doi.org/10.1109/TITS.2020.3035596)\\\n**Geunseob (GS) Oh**, David J. LeBlanc, Huei Peng\\\nIEEE Transactions on Intelligent Transportation Systems (T-ITS), 2020.\\\nThe paper is also available on [Arxiv](https://arxiv.org/pdf/1905.02081.pdf).\n\n\nContact: gsoh@umich.edu.\n\nGS Oh, Ph.D. Candidate, University of Michigan.\n\n\n\n## Files\nVED consists of Dynamic Data (time-stamped naturalistic driving records of 383 vehicles) and Static Data (vehicle parameters for the 383 vehicles).\n\nDynamic Data: ""VED_DynamicData.7z"" contains a number of ""VED_mmddyy_week.csv"" files\n- Includes a week\'s worth of dynamic data, for mmddyy ~ (mmddyy + 7 days)\n- Columns represent:\n\tDayNum,\tVehId,\tTrip,\tTimestamp(ms),\tLatitude[deg],\tLongitude[deg],\tVehicle Speed[km/h],\tMAF[g/sec],\tEngine RPM[RPM],\tAbsolute Load[%],\tOutside Air Temperature[DegC],\tFuel Rate[L/hr],\tAir Conditioning Power[kW],\tAir Conditioning Power[Watts],\tHeater Power[Watts],\tHV Battery Current[A],\tHV Battery SOC[%],\tHV Battery Voltage[V],\tShort Term Fuel Trim Bank 1[%],\tShort Term Fuel Trim Bank 2[%],\tLong Term Fuel Trim Bank 1[%],\tLong Term Fuel Trim Bank 2[%]\n- Notes:\n\tEach combination of VehId, Trip is unique.\n\tDayNum represents elapsed days since a reference date (DayNum 1 = Nov 1st, 2017, 00:00:00; DayNum 1.5 = Nov 1st, 2017, 12:00:00).\n\tFor the details, refer to [the VED paper](https://arxiv.org/abs/1905.02081)\n
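The `DayNum` convention lends itself to a direct conversion into absolute timestamps. A hedged pandas sketch (the week file name below follows the `VED_mmddyy_week.csv` pattern but is hypothetical):

```python
import pandas as pd

# hypothetical week file from the extracted VED_DynamicData.7z
df = pd.read_csv(""VED_110117_week.csv"")

# DayNum 1 = Nov 1st, 2017, 00:00:00, so offset by (DayNum - 1) days
reference = pd.Timestamp(""2017-11-01 00:00:00"")
df[""AbsoluteTime""] = reference + pd.to_timedelta(df[""DayNum""] - 1, unit=""D"")

# each (VehId, Trip) combination identifies a single trip
trips = df.groupby([""VehId"", ""Trip""])
```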
Static Data: ""VED_Static_Data_ICE&HEV.xlsx"" and ""VED_Static_Data_PHEV&EV.xlsx""\n- Includes parameters of all 383 vehicles (264 gasoline vehicles, 92 HEVs, and 27 PHEV/EVs)\n\t- There are 3 pure EVs in the dataset. All of them are 2013 Nissan Leafs with an advertised battery capacity of 24 kWh.\n- Columns represent: \n\tVehId,\tEngineType,\tVehicle Class,\tEngine Configuration & Displacement,\tTransmission,\tDrive Wheels,\tGeneralized_Weight[lb]\n\n\n## License\n\nLicensed under the [Apache License 2.0](LICENSE)\n'",",https://doi.org/10.1109/TITS.2020.3035596,https://arxiv.org/pdf/1905.02081.pdf,https://arxiv.org/abs/1905.02081","2019/04/19, 06:55:51",1650,Apache-2.0,0,13,"2022/02/03, 03:16:39",0,0,4,0,629,0,0,0.0,,,0,1,false,,false,false,,,,,,,,,,, gbfs,"Documentation for the General Bikeshare Feed Specification, a standardized data feed for shared mobility system availability.",NABSA,https://github.com/MobilityData/gbfs.git,github,"gbfs,bikesharing,mobility,scooter-sharing,bike-share,bike-sharing,mobility-as-a-service,mobilitydata,carshare,carsharing,open-data,gbfs-documentation,shared-mobility,civic-tech,urban-mobility",Mobility and Transportation,"2023/10/25, 12:38:08",726,0,90,true,,MobilityData IO,MobilityData,,https://gbfs.org,"b""# General Bikeshare Feed Specification\nDocumentation for the General Bikeshare Feed Specification, a standardized data feed for shared mobility system availability.\n\n**Please note that GBFS is now hosted at [github.com/MobilityData/gbfs](https://github.com/MobilityData/gbfs).**\n\n## Table of Contents\n* [What is GBFS?](#what-is-gbfs)\n* [How to Participate](#how-to-participate)\n* [Current Version](#current-version-recommended)\n* [Guiding Principles](#guiding-principles)\n* [Specification Versioning](#specification-versioning)\n* [Systems Catalog - Systems Implementing GBFS](#systems-catalog---systems-implementing-gbfs)\n* [GBFS JSON Schemas](#gbfs-json-schemas)\n* [GBFS and Other Shared Mobility Resources](#gbfs-and-other-shared-mobility-resources)\n* [Relationship Between GBFS and MDS](#relationship-between-gbfs-and-mds)\n\n## What is GBFS?\nThe General Bikeshare Feed Specification, known as GBFS, is the open data standard for shared mobility. GBFS makes real-time data feeds in a uniform format publicly available online, with an emphasis on findability. GBFS is intended to make information publicly available online; therefore information that is personally identifiable is not currently and will not become part of the core specification.\n \nGBFS was created in 2014 by [Mitch Vars](https://github.com/mplsmitch) with collaboration from public, private sector and non-profit shared mobility system owners and operators, application developers, and technology vendors. [Michael Frumin](https://github.com/fruminator), [Jesse Chan-Norris](https://github.com/jcn) and others made significant contributions of time and expertise toward the development of v1.0 on behalf of Motivate International LLC (now Lyft). The [North American Bikeshare Association’s](http://www.nabsa.net) endorsement, support, and hosting was key to its success starting in 2015. In 2019, NABSA chose MobilityData to govern and facilitate the improvement of GBFS. MobilityData hosts a [GBFS Resource Center](https://gbfs.mobilitydata.org/) and a [public GBFS Slack channel](https://share.mobilitydata.org/slack) - you are welcome to contact us there or at [sharedmobility@mobilitydata.org](mailto:sharedmobility@mobilitydata.org) with questions. 
\n\nGBFS is intended as a specification for real-time, read-only data - any data being written back into individual shared mobility systems is excluded from this spec.\n\nThe specification has been designed with the following concepts in mind:\n*\tProvide the status of the system at this moment\n*\tDo not provide information whose primary purpose is historical\n\nThe data in the specification contained in this document is intended for consumption by clients intending to provide real-time (or semi-real-time) transit advice and is designed as such.\n
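To make the consumer side concrete: every GBFS system publishes an auto-discovery `gbfs.json` file that lists the URLs of its individual feeds. A hedged sketch of reading one (the discovery URL is a placeholder - real ones are listed in systems.csv - and the v2.x per-language layout is assumed):

```python
import json
import urllib.request

# placeholder auto-discovery URL; see systems.csv for real systems
DISCOVERY_URL = 'https://example.com/gbfs/gbfs.json'

with urllib.request.urlopen(DISCOVERY_URL) as response:
    discovery = json.load(response)

# in GBFS v2.x, feeds are grouped by language code
for feed in discovery['data']['en']['feeds']:
    print(feed['name'], '->', feed['url'])
```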
## How to Participate\nGBFS is an open source project developed under a consensus-based governance model. Contributors come from across the shared mobility industry, the public sector, civic technology and elsewhere. GBFS is not owned by any one person or organization. The specification is not fixed or unchangeable. As the shared mobility industry evolves, it is expected that the specification will be extended by the GBFS community to include new features and capabilities over time.

Comments or questions can be addressed to the community by [opening an issue](https://github.com/MobilityData/gbfs/issues). Proposals for changes or additions to the specification can be made through [pull requests](https://github.com/MobilityData/gbfs/pulls). Questions can also be addressed to the community via the [public GBFS Slack channel](https://bit.ly/mobilitydata-slack) or to the shared mobility staff at MobilityData: [sharedmobility@mobilitydata.org](mailto:sharedmobility@mobilitydata.org).\nIf you are new to engaging with the community on this repository, firstly welcome! Here is a brief overview of how to contribute to the specification:\n* Anyone can raise an issue.\n* Anyone can open a pull request - make sure PRs are in line with our [Guiding Principles](#guiding-principles).\n* If you want to open a pull request but don't know how, MobilityData is happy to help. Get in touch at [sharedmobility@mobilitydata.org](mailto:sharedmobility@mobilitydata.org).\n* Discussions on pull requests must remain open for a minimum of 7 calendar days.\n* Votes are open for a total of 10 calendar days; anyone can vote.\n* A successful vote must have at least 3 votes, not including the pull request author.\n* A successful vote must include a vote from a GBFS producer and a GBFS consumer.\n\nFind a real-world example of the governance in action [here](https://github.com/MobilityData/gbfs/pull/454). For a more in-depth look at the change and contribution process, go to [governance.md](https://github.com/MobilityData/gbfs/blob/master/governance.md).\n\n### Project Roadmap\nMobilityData has compiled a [project roadmap](https://github.com/MobilityData/gbfs/wiki/Project-Roadmap) with a list of major features, changes and other work coming up in the near future.\n\n## Current Version *(Recommended)* \n| Version | Type | Release Date | Status | JSON Schema | Release Notes |\n|:---:|:---:|---|---|---|---|\n| [v2.3](https://github.com/NABSA/gbfs/blob/v2.3/gbfs.md) | MINOR | April 5, 2022 | :white_check_mark:   *Current Version* | [v2.3 Schema](https://github.com/MobilityData/gbfs-json-schema/tree/master/v2.3) | |\n\n\n### Upcoming MAJOR Version \n| Version | Type | Release Target | Status |\n|---|:---:|---|---|\n| No current upcoming major versions | | | |\n\n### Release Candidates \nRelease Candidates will receive *Current Version* status when they have been fully implemented in public feeds.\n\n| Version | Type | Release Date | Status | JSON Schema | Release Notes |\n|:---:|:---:|---|---|---|---|\n| [v3.0-RC](https://github.com/NABSA/gbfs/blob/master/gbfs.md) | MAJOR | March 10, 2023 | ✅ Ready for implementation | [v3.0-RC Schema](https://github.com/MobilityData/gbfs-json-schema/tree/master/v3.0-RC) | [v3.0-RC Article](https://mobilitydata.org/2023-the-year-of-v3/) |\n\n### Past Version Releases \nPast versions with *Supported* status MAY be patched to correct bugs or vulnerabilities, but new features will not be introduced.
\nPast versions with *Deprecated* status will not be patched and their use SHOULD be discontinued.\n\n| Version | Type | Release Date | Status | JSON Schema | Release Notes |\n|:---:|:---:|---|---|---|---|\n| [v2.2](https://github.com/NABSA/gbfs/blob/v2.2/gbfs.md) | MINOR | March 19, 2021 |:white_check_mark:   *Supported* | [v2.2 Schema](https://github.com/MobilityData/gbfs-json-schema/tree/master/v2.2)| [v2.2 Article](https://mobilitydata.org/cities-gbfs-v2-2-is-here-for-you/)\n| [v2.1](https://github.com/NABSA/gbfs/blob/v2.1/gbfs.md) | MINOR | March 18, 2021 |:white_check_mark:   *Supported* | [v2.1 Schema](https://github.com/MobilityData/gbfs-json-schema/tree/master/v2.1)| [v2.1 Article](https://mobilitydata.org/gbfs-now-fully-supports-dockless-systems-%f0%9f%9b%b4%f0%9f%91%8f/)\n| [v2.0](https://github.com/NABSA/gbfs/blob/v2.0/gbfs.md) | MAJOR | March 16, 2020 | :white_check_mark:   *Supported* | [v2.0 Schema](https://github.com/MobilityData/gbfs-json-schema/tree/master/v2.0) | [v2.0 Article](https://mobilitydata.org/whats-new-in-gbfs-v2-0-%f0%9f%9a%b2%f0%9f%9b%b4/) |\n| [v1.1](https://github.com/NABSA/gbfs/blob/v1.1/gbfs.md) | MINOR | March 16, 2020 |:white_check_mark:   *Supported* | [v1.1 Schema](https://github.com/MobilityData/gbfs-json-schema/tree/master/v1.1) | |\n| [v1.0](https://github.com/NABSA/gbfs/blob/v1.0/gbfs.md) | MAJOR | Prior to October 2019 | :x:   *Deprecated* | [v1.0 Schema](https://github.com/MobilityData/gbfs-json-schema/tree/master/v1.0)| |\n \n### Full Version History \nThe complete GBFS version history is available on the [wiki](https://github.com/NABSA/gbfs/wiki/Complete-Version-History).\n\n## Specification Versioning\nTo enable the evolution of GBFS, including changes that would otherwise break backwards-compatibility with consuming applications, GBFS uses [semantic versioning](https://semver.org/).\nSemantic versions are established by a git tag in the form of `vX.Y` where `X.Y` is the version name. A whole integer increase is used for breaking changes (MAJOR changes). A decimal increase is used for non-breaking changes (MINOR changes or patches). MINOR versions may introduce new features as long as those changes are OPTIONAL and do not break backwards compatibility.\n\nExamples of breaking changes include:\n\n* Changes to requirements, like adding or removing a REQUIRED endpoint or field, or changing an OPTIONAL endpoint or field to REQUIRED.\n* Changing the data type or semantics of an existing field.\n\nExamples of non-breaking changes include:\n\n* Adding an OPTIONAL endpoint or field\n* Adding new enum values\n* Modifying documentation or specification language in a way that clarifies semantics or recommended practices\n\n\n#### Version Release Cycles - Release Deprecation\n* There is no strict limitation on the frequency of MAJOR releases, but the GBFS community aims to limit MAJOR releases to no more than one a 12 month period. To limit releases, multiple breaking changes SHOULD be batched together in a single release.\n* There is no guideline to limit the number of MINOR releases. MINOR changes may be applied at any time. MINOR changes MAY be batched together in single release or released immediately, based on the needs of the community.\n* GBFS documentation will include a list of current and supported MAJOR and MINOR versions. Supported versions SHALL NOT span more than two MAJOR versions. 
Past versions that are beyond the two most recent MAJOR versions will be deprecated 180 days after the latest MAJOR version becomes official.\n \n## Guiding Principles\nTo preserve the original vision of GBFS, the following guiding principles should be taken into consideration when proposing extensions to the spec:\n\n* **GBFS is a specification for real-time or semi-real-time, read-only data.**\nThe spec is not intended for historical or archival data such as trip records.\nThe spec is about public information intended for shared mobility users.\n\n* **GBFS is targeted at providing transit information to the shared mobility end user.**\n Its primary purpose is to power tools for riders that will make shared mobility more accessible to users. GBFS is about public information. Producers and owners of GBFS data should take licensing and discoverability into account when publishing GBFS feeds.\n\n* **Changes to the spec should be backwards-compatible, when possible.**\nCaution should be taken to avoid making changes to the spec that would render existing feeds invalid.\n\n* **Speculative features are discouraged.**\nEach new addition to the spec adds complexity. We want to avoid additions to the spec that do not provide additional value to the shared mobility end user.\n\n## Systems Catalog - Systems Implementing GBFS\nThere are now over 600 shared mobility systems publishing GBFS worldwide. This list contains all known systems publishing GBFS feeds and is maintained by the GBFS community. This is an incomplete list. If you have or are aware of a system that doesn\xe2\x80\x99t appear on the list please add it.\nIf you would like to add a system, please fork this repository and submit a pull request. Please keep this list alphabetized by country and system name. Alternatively, fill out [this contribution form](https://forms.gle/WSXFuXx9k4PSTfbC9) for a Github-less contribution. \n* [systems.csv](systems.csv)\n\n Field Name | REQUIRED | Definition \n --- | :---: | ---- \n|Country Code | Yes | ISO 3166-1 alpha-2 code designating the country where the system is located. For a list of valid codes [see here](https://en.wikipedia.org/wiki/ISO_3166-1).\n| Name | Yes| Name of the mobility system. This MUST match the `name` field in `system_information.json`\nLocation | Yes| Primary city in which the system is located followed by country or state.\nSystem ID | Yes | ID for the system. This MUST match the `system_id` field in `system_information.json`.\nURL | Yes | URL for the system from the `url` field in `system_information.json`. If the `url` field is not included in `system_information.json` this SHOULD be the primary URL for the system operator.\nAuto-Discovery URL | Yes | URL for the system's `gbfs.json` auto-discovery file.\nValidation Report | Optional | URL to the validation report of the system which can be obtained by running the Auto-Discovery URL into the [GBFS Validator](https://gbfs-validator.netlify.app/). This is not mandatory, MobilityData can add this link before merging the pull request. 
\n\n## GBFS JSON Schemas\nComplete JSON schemas for each version of GBFS can be found [here](https://github.com/MobilityData/gbfs-json-schema).\n## GBFS and Other Shared Mobility Resources\nIncluding APIs, datasets, validators, research, and software can be found [here](https://gbfs.mobilitydata.org/toolbox/resources/).\n## Relationship Between GBFS and MDS\nThere are many similarities between GBFS and [MDS](https://github.com/openmobilityfoundation/mobility-data-specification) (Mobility Data Specification), however, their intended use cases are different. GBFS is a real-time or near real-time specification for public data primarily intended to provide transit advice through consumer-facing applications. MDS is not public data and is intended for use only by mobility regulators. Publishing a public GBFS feed is a [requirement](https://github.com/openmobilityfoundation/mobility-data-specification#gbfs-requirement) of all MDS compatible *Provider* APIs.\n## Copyright\nThe copyright for GBFS is held by the [MobilityData](https://mobilitydata.org/). \n""",,"2015/12/04, 23:43:43",2881,CUSTOM,80,603,"2023/10/25, 12:38:08",25,375,531,129,0,9,1.2,0.8860103626943006,"2023/04/12, 17:40:40",v3.0-RC,0,111,false,,false,false,,,https://github.com/MobilityData,https://mobilitydata.org/,Canada,,,https://avatars.githubusercontent.com/u/41021710?v=4,,, Bike Index,"Bike registration that works: online, powerful, free.",bikeindex,https://github.com/bikeindex/bike_index.git,github,,Mobility and Transportation,"2023/10/25, 18:17:29",249,0,13,true,Ruby,Bike Index,bikeindex,"Ruby,Haml,HTML,SCSS,JavaScript,CSS,CoffeeScript,Shell,Procfile",https://bikeindex.org,"b'# [![BIKE INDEX][bike-index-logo]][bike-index] ![Cloud66 Deployment Status][cloud66-badge] [![CircleCI][circleci-badge]][circleci] [![Test Coverage][codeclimate-badge]][codeclimate] [![View performance data on Skylight][skylight-badge]][skylight]\n\n[bike-index-logo]: https://github.com/bikeindex/bike_index/blob/main/bike_index.png?raw=true\n[circleci]: https://circleci.com/gh/bikeindex/bike_index/tree/main\n[circleci-badge]: https://circleci.com/gh/bikeindex/bike_index/tree/main.svg?style=svg\n[codeclimate]: https://codeclimate.com/github/bikeindex/bike_index\n[codeclimate-badge]: https://codeclimate.com/github/bikeindex/bike_index/badges/coverage.svg\n[skylight]: https://oss.skylight.io/app/applications/j93iQ4K2pxCP\n[skylight-badge]: https://badges.skylight.io/status/j93iQ4K2pxCP.svg\n[bike-index]: https://www.bikeindex.org\n[cloud66-badge]: https://app.cloud66.com/stacks/badge/ff54cf1d55d7eb91ef09c90f125ae4f1.svg\n\nBike registration that works: online, powerful, free.\n\nRegistering a \xf0\x9f\x9a\xb2 only takes a few minutes and gives \xf0\x9f\x9a\xb4\xe2\x80\x8d\xe2\x99\x80\xef\xb8\x8f a permanent record linked to their identity that proves ownership in the case of a theft.\n\nWe\'re an open source project. Take a gander through our code, report bugs, or download it and run it locally.\n\n### Dependencies\n\n_We recommend [asdf-vm](https://asdf-vm.com/#/) for managing versions of Ruby and Node. 
Check the [.tool-versions](.tool-versions) file to see the versions of the following dependencies that Bike Index uses._\n\n- [Ruby 2.7](http://www.ruby-lang.org/en/)\n\n- [Rails 5.2](http://rubyonrails.org/)\n\n- [Node 12.18](https://nodejs.org/en/) & [yarn](https://yarnpkg.com/en/)\n\n- PostgreSQL >= 9.6\n\n- Imagemagick ([railscast](http://railscasts.com/episodes/374-image-manipulation?view=asciicast))\n\n- [Sidekiq](https://github.com/mperham/sidekiq), which requires [Redis](http://redis.io/).\n\n- Requires 1gb of ram (or at least more than 512mb)\n\n## Running Bike Index locally\n\nThis explanation assumes you\'re familiar with developing Ruby on Rails applications.\n\n- `bin/setup` sets up the application and seeds:\n - Three test user accounts: admin@example.com, member@example.com, user@example.com (all have password `pleaseplease12`)\n - Gives user@example.com 50 bikes\n\n- `./start` start the server.\n\n - [start](start) is a bash script. It starts redis in the background and runs foreman with the [dev procfile](Procfile_development). If you need/prefer something else, do that. If your ""something else"" isn\'t running at localhost:3001, change the appropriate values in [Procfile_development](Procfile_development) and [.env](.env)\n\n- Go to [localhost:3001](http://localhost:3001)\n\n| Toggle in development | command | default |\n| --------- | ------- | ------- |\n| Caching | `bundle exec rails dev:cache` | disabled |\n| [letter_opener][] | `bin/rake dev:letter_opener` | enabled |\n| logging with lograge | `bin/rake dev:lograge` | enabled |\n\n[letter_opener]: https://github.com/ryanb/letter_opener\n\n## Internationalization\n\nSee the [internationalization docs](docs/internationalization.markdown) for details.\n\n## Testing\n\nWe use [RSpec](https://github.com/rspec/rspec) and\n[Guard](https://github.com/guard/guard) for testing.\n\n- Run the test suite continuously in the background with `bin/guard` (watches for file changes/saves and runs those specs)\n\n- You may have to manually add the `fuzzystrmatch` extension, which we use for\n near serial searches, to your databases. The migration should take care of\n this but sometimes doesn\'t. Open the databases in postgres\n (`psql bikeindex_development` and `psql bikeindex_test`) and add the extension.\n\n ```sql\n CREATE EXTENSION fuzzystrmatch;\n ```\n\nWe use [`parallel_tests`](https://github.com/grosser/parallel_tests/) to run the test suite in parallel. By default, this will spawn one process per CPU in your computer.\n\n- Run all the tests in parallel with `bin/rake parallel:spec`\n\n- Run `bin/rake parallel:prepare` to synchronize the test db schema after migrations (rather than `db:test:prepare`).\n\n- Run specific files or test directories with `bin/parallel_rspec `\n\n- Run Guard with parallelism `bin/guard -G Guardfile_parallel`\n\n## Code Hygiene\n\nWe use the following tools to automate code formatting and linting:\n\n- [EditorConfig](https://editorconfig.org/)\n- [StandardRB](https://github.com/testdouble/standard)\n- [ESlint](https://eslint.org/)\n\nRun `bin/lint` to automatically lint the files and/or add auto formatters to your text editor (e.g. [prettier-standard](https://github.com/sheerun/prettier-standard))\n\n### EditorConfig\n\nEditorConfig ensures whitespace consistency. 
See the [Download a\nPlugin][editorconfig-plugin] section of the EditorConfig docs to find a plugin\nappropriate to your editor.\n\n[editorconfig-plugin]: https://editorconfig.org/#download\n\n### StandardRB\n\nStandardRB is an opinionated Ruby style guide, linter, and formatter - it is ""a spiritual port of [StandardJS](https://standardjs.com/)"".\n\nSee the [how do I run standard in my editor](standardrb-plugin) section of the StandardRB docs to find an appropriate plugin for on-the-fly linting.\n\n[standardrb-plugin]: https://github.com/testdouble/standard#how-do-i-run-standard-in-my-editor\n\n### ESLint\n\nESlint is configured to run on project JavaScript. To run it, issue `yarn lint`.\n\n## Bug tracker\n\nHave a bug or a feature request? [Open an issue](https://github.com/bikeindex/bike_index/issues/new).\n\n\n## Community\n\nKeep track of development and community news.\n\n- Follow [@bikeindex](http://twitter.com/bikeindex) on Twitter.\n- Read the [Bike Index Blog](https://bikeindex.org/blog).\n\n## Contributing\n\nOpen a Pull request!\n\nDon\'t wait until you have a finished feature before before opening the PR, unfinished pull requests are welcome! The earlier you open the pull request, the earlier it\'s possible to discuss the direction of the changes.\n\nOnce the PR is ready for review, request review from the relevant person.\n\nIf your pull request contains Ruby patches or features, you must include relevant Rspec tests.\n\n\n... and go hard\n'",,"2013/07/25, 21:52:26",3744,AGPL-3.0,150,4070,"2023/10/25, 18:17:30",74,2113,2385,130,0,22,0.0,0.12164626204740814,,,0,25,false,,false,false,,,https://github.com/bikeindex,https://bikeindex.org,,,,https://avatars.githubusercontent.com/u/4251544?v=4,,, go-ocpp,v1.5 and v1.6 Open Charge Point Protocol implementation in Golang.,voltbras,https://github.com/voltbras/go-ocpp.git,github,"ocpp,electric-vehicles,charging-stations,chargingstation,emobility",Mobility and Transportation,"2021/02/17, 00:52:08",45,0,7,true,Go,Voltbras,voltbras,Go,,"b'# go-ocpp\n\nOCPP(1.5/1.6) implementation in Golang.\n\n- v1.5, it\'s assumed it is SOAP\n- v1.6, it\'s assumed it is JSON\n\n## Usage\n\n### Central System\n\nJust pass a handler that takes in a `cpreq.ChargePointRequest` and returns a `(cpresp.ChargePointResponse, error)`.\n\nIn SOAP, error messages will be sent back via Fault, as specified in OCPP v1.5\n\nIn Websockets, error messages will be sent back as specified in OCPP-J v1.6\n\n```go\ncsys := cs.New()\ngo csys.Run("":12811"", func(req cpreq.ChargePointRequest, metadata cs.ChargePointRequestMetadata) (cpresp.ChargePointResponse, error) {\n // Return an error to the Station communicating to the Central System\n //\n // station, isAuthorized := getStation(metadata.ChargePointID)\n // if !isAuthorized {\n // return nil, errors.New(""charger not authorized to join network"")\n // }\n // ---\n // Or check some specific header in the underlying HTTP request:\n // \n // if shouldBlock(metadata.HTTPRequest) {\n // return nil, errors.New(""charger should send appropriate headers"")\n // }\n\n switch req := req.(type) {\n case *cpreq.BootNotification:\n // accept chargepoint in the network\n return &cpresp.BootNotification{\n Status: ""Accepted"",\n CurrentTime: time.Now(),\n Interval: 60,\n }, nil\n\n case *cpreq.Heartbeat:\n return &cpresp.Heartbeat{CurrentTime: time.Now()}, nil\n\n case *cpreq.StatusNotification:\n if req.Status != ""Available"" {\n // chargepoint is unavailable\n }\n return &cpresp.StatusNotification{}, nil\n\n default:\n return nil, 
errors.New(""Response not supported"")\n }\n}\n```\n\n### Charge Point\n\nPass the required parameters to the constructor function, and then just send any request(`cpreq.*`).\n\nTODO: assertion of the response type should be done inside the `.Send`?\n\n```go\nstationID := ""id01""\ncentralSystemURL := ""ws://localhost:12811""\nst, err := cp.NewChargePoint(stationID, centralSystemURL, ocpp.V16, ocpp.JSON, nil, handler) // or ocpp.SOAP\nif err != nil {\n fmt.Println(""could not create charge point:"", err)\n return\n}\nrawResp, err := st.Send(&cpreq.Heartbeat{})\nif err != nil {\n fmt.Println(""could\'t send heartbeat:"", err)\n return\n}\nresp, ok := rawResp.(*cpresp.Heartbeat)\nif !ok {\n fmt.Println(""response is not a heartbeat response"")\n return\n}\nfmt.Println(""got reply:"", resp)\n```\n\n### Logs\n\nFor more useful logging, do:\n\n```go\nocpp.SetDebugLogger(log.New(os.Stdout, ""DEBUG:"", log.Ltime))\nocpp.SetErrorLogger(log.New(os.Stderr, ""ERROR:"", log.Ltime))\n```\n'",,"2019/02/20, 17:11:54",1708,GPL-3.0,0,57,"2023/09/21, 15:19:06",4,5,11,1,34,0,0.6,0.0,"2021/03/05, 22:44:09",v1.1.0,0,2,false,,false,false,,,https://github.com/voltbras,https://voltbras.com.br/,Florianópolis,,,https://avatars.githubusercontent.com/u/64649105?v=4,,, EVMap,Android app to access the goingelectric.de electric vehicle charging station directory.,johan12345,https://github.com/ev-map/EVMap.git,github,"charging-stations,android,electric-vehicle,hacktoberfest,map",Mobility and Transportation,"2023/10/22, 17:05:16",149,0,59,true,Kotlin,EVMap,ev-map,"Kotlin,Java,Ruby",https://ev-map.app/,"b'EVMap [![Build Status](https://github.com/ev-map/EVMap/actions/workflows/tests.yml/badge.svg)](https://github.com/ev-map/EVMap/actions)\n=====\n\n\n\n\nAndroid app to find electric vehicle charging stations.\n\n\n\n\n\n\nFeatures\n--------\n\n- [Material Design](https://material.io/)\n- Shows all charging stations from the community-maintained [GoingElectric.de](https://www.goingelectric.de/stromtankstellen/) and [Open Charge Map](https://openchargemap.org) directories\n- Realtime availability information (only in Europe)\n- Search for places\n- Advanced filtering options, including saved filter profiles\n- Favorites list, also with availability information\n- Integrated price comparison using [Chargeprice.app](https://chargeprice.app) (only in Europe)\n- Android Auto & Android Automotive OS integration\n- No ads, fully open source\n- Compatible with Android 5.0 and above\n- Can use Google Maps or Mapbox (OpenStreetMap) as map backends - the version available on F-Droid only uses Mapbox.\n\nScreenshots\n-----------\n\n\n\nDevelopment setup\n-----------------\n\nThe App is developed using Android Studio and should pretty much work out-of-the-box when you clone\nthe Git repository and open the project with Android Studio.\n\nThe only exception is that you need to obtain some free API keys for the different data sources that\nEVMap uses and put them into the app in the form of a resource file called `apikeys.xml` under\n`app/src/main/res/values`. 
You can find more information on which API keys are necessary for which\nfeatures and how they can be obtained in our [documentation page](doc/api_keys.md).\n\nThere are four different build flavors: `fossNormal`, `fossAutomotive`, `googleNormal` and `googleAutomotive`.\n- The `foss` variants only use Mapbox data and should run on most Android devices, even without\n Google Play Services.\n - `fossNormal` is intended to run on smartphones and tablets, and also includes the Android\n Auto app for use on the car display (however, for that to work the Android Auto app is\n necessary, which in turn does require Google Play Services).\n - `fossAutomotive` can be installed directly on\n [Android Automotive OS (AAOS)](https://source.android.com/docs/automotive/start/what_automotive)\n headunits without Google services.\n It does not provide the usual smartphone UI, and requires an implementation of the\n [AOSP template app host](https://source.android.com/docs/automotive/hmi/aosp_host)\n to be installed. If you are an OEM and would like to distribute EVMap to your AAOS vehicles,\n please [get in touch](mailto:evmap@vonforst.net).\n- The `google` variants also include access to Google Maps data.\n - `googleNormal` is intended to run on smartphones and tablets, and also includes the Android\n Auto app for use on the car display.\n - `googleAutomotive` can be installed directly on car infotainment systems running the\n Google-flavored Android Automotive OS (Google Automotive Services /\n [""Google built-in""](https://built-in.google/cars/)).\n It does not provide the usual smartphone UI, and requires the\n [Google Automotive App Host](https://play.google.com/store/apps/details?id=com.google.android.apps.automotive.templates.host)\n to run, which should be preinstalled on those cars and can be updated through the Play Store.\n\nWe also have a special [documentation page](doc/android_auto.md) on how to test the Android Auto\napp.\n\nTranslations\n------------\n\nYou can use our [Weblate page](https://hosted.weblate.org/projects/evmap/) to help translate EVMap\ninto new languages.\n\n'",,"2020/03/23, 21:03:43",1311,MIT,287,1341,"2023/10/22, 17:05:54",47,48,258,60,3,4,1.7,0.06329113924050633,"2023/09/23, 16:56:00",1.7.0,5,11,true,"github,custom",false,false,,,https://github.com/ev-map,https://ev-map.app/,,,,https://avatars.githubusercontent.com/u/115927597?v=4,,, emobility-smart-charging,Smart charging algorithms with REST API for electric vehicle fleets.,SAP,https://github.com/SAP/emobility-smart-charging.git,github,,Mobility and Transportation,"2022/06/28, 08:30:43",40,0,11,true,Java,SAP,SAP,"Java,TypeScript,HTML,Makefile,JavaScript,CSS,Dockerfile",,"b'# emobility-smart-charging\n\n[![Build Status](https://travis-ci.com/SAP/emobility-smart-charging.svg?branch=master)](https://travis-ci.com/SAP/emobility-smart-charging)\n[![REUSE status](https://api.reuse.software/badge/github.com/SAP/emobility-smart-charging)](https://api.reuse.software/info/github.com/SAP/emobility-smart-charging)\n\n## Contents:\n1. [Description](#description)\n1. [Requirements](#requirements)\n1. [Download and Installation](#download-and-installation)\n1. [Getting Started](#getting-started)\n1. [Known Issues](#known-issues)\n1. [How to obtain support](#how-to-obtain-support)\n1. [To-Do (upcoming changes)](#to-do-upcoming-changes)\n\n\n### Description\nThis repository is an implementation of smart charging for electric vehicles (EVs). It contains a charging optimizer which schedules EVs for charging throughout the day. 
\nThe optimization algorithm addresses the following goals: \n- Main goal: Ensure all EVs are charged at the end of the day while respecting infrastructure constraints\n- Secondary goal: Minimize peak load (avoid peaks in power consumption)\n- Secondary goal: Minimize electricity prices (charge EVs at times when electricity is cheap if prices vary)\n\nRefer to [1] for a detailed explanation of the algorithm. \n\nOn a technical note, this repository contains the following components: \n- The algorithm for charging optimization (implemented in Java)\n- A server with a REST API for accessing the algorithm (implemented with Spring Boot)\n- A frontend ""playground"" application to test REST API input parameters and check results (implemented with Angular 8)\n- A Dockerfile to containerize the components described above\n\n\n[1] O. Frendo, N. Gaertner, and H. Stuckenschmidt, ""Real-Time Smart Charging Based on Precomputed Schedules"", IEEE Transactions on Smart Grid, vol. 10, no. 6, pp. 6921 \xe2\x80\x93 6932, 2019.\n\n\n### Requirements\nThe application may be run either with or without Docker. \n\n#### With Docker \nThe application can be containerized using [Docker](https://docs.docker.com/install/) and the `Dockerfile` in this repository. If the application is run via Docker the other requirements may be ignored. \n\n#### Without Docker\nThe server requires Java and the dependency management tool [Maven](https://maven.apache.org/). \nThe minimum required Java version is Java 8. \n\nEnter `java -version` and `mvn -version` in your command line to test your installation. \n\nThe frontend is optional. The server and its REST API will work without the frontend. \nThe requirements for the frontend are [Node.js](https://nodejs.org/en/) and its package manager NPM. \n\nEnter `node --version` and `npm --version` in your command line to test your installation. \n\n\n\n### Download and Installation\n#### With Docker\nThe simplest way to run this application is to use the public [Docker image](https://hub.docker.com/repository/docker/sapemob/evse_emobility-smart-charging). \nFirst, pull the Docker image: \n```\ndocker pull sapemob/evse_emobility-smart-charging\n```\n\nNext, start the application by running the Docker container (the server runs on port 8080). \n[Parameters](https://docs.docker.com/engine/reference/run/): \n- `-d` Detached mode: Run container in the background\n- `-p` Publish a container\'s port to the host: Change the first port in `8080:8080` to adjust which port you want the application to run on\n```\ndocker run -d -p 8080:8080 sapemob/evse_emobility-smart-charging\n```\n\n\nAlternatively, you can build the Docker image yourself using the `Dockerfile` in this repository. \nThis will compile the server and the frontend. \n[Parameters](https://docs.docker.com/engine/reference/commandline/build/): \n- `-t` Tag the image with a name\n``` \ndocker build -t sapemob/evse_emobility-smart-charging .\n```\n\n\n\n#### Without Docker\nThis section is relevant if the application should be run without Docker, for example for development purposes. \n\nFirst, compile the server: \n```\nmvn clean install\n```\n\n(Optional) Prepare and compile the frontend: \n```\ncd frontend/\nnpm install\nnpm run build:playground\n```\n\nStart the server (from the root directory of the repository): \n```\njava -jar target/emobility-smart-charging-0.0.1-SNAPSHOT.jar\n```\n\n#### Accessing the application\nAfter you have started the application it runs on `localhost:8080`. 
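Once the application is running, you can also exercise the optimizer from any HTTP client. Below is a minimal Python sketch; the endpoint path and the payload fields are assumptions based on the playground description in this README, so check the Swagger UI for the actual request schema. 
```python
# Minimal sketch (assumed endpoint and fields, not the documented API):
# request optimized charge plans from the local server and print the response.
import requests

request_body = {
    ""event"": {""eventType"": ""Reoptimize""},  # hypothetical trigger event
    ""state"": {
        ""currentTimeSeconds"": 12 * 3600,  # midday, the playground default
        ""fuseTree"": {},        # hierarchy of fuses and charging stations
        ""cars"": [],            # one entry per car / charging demand
        ""carAssignments"": [],  # which car is plugged into which station
    },
}

response = requests.post(
    ""http://localhost:8080/api/v1/OptimizeChargingProfilesRequest"",
    json=request_body,
    timeout=30,
)
response.raise_for_status()
print(response.json())  # expected to contain one charge plan per car
```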
\nThe frontend can be accessed via [/playground/index.html](http://localhost:8080/playground/index.html). \nThe API documentation is implemented via Swagger and can be accessed via [/swagger-ui.html](http://localhost:8080/swagger-ui.html). \n\n\n#### Generate TypeScript mappings (optional)\nIf you plan to use the API in a project which uses TypeScript, you can generate the expected types of API requests and responses using the following command: \n``` \nmvn typescript-generator:generate\n```\nThis approach is used in the frontend. Type declarations are generated in the file `frontend/src/assets/server_types.d.ts`. \n\n### Getting Started\nThe purpose of this section is to get you started on using the charging optimizer API. \n\nThe easiest way to understand the interface of the API is to tinker with the playground ([/playground/index.html](http://localhost:8080/playground/index.html)). The playground is a visual interface which lets you edit the input for the charging optimizer in a natural way. The playground translates your model into a JSON request which is the technical input to the charging optimizer. You can easily pick up how to assemble JSON requests for the optimizer by observing how your playground input is reflected in the generated request.\n\n#### Understanding charging optimizer input\nIn the top part of the playground screen you can edit the following input parameters:\n* **Current time**: This is the actual time of day assumed by the optimizer. The optimizer can only schedule charging sessions after the current time, not before. By default, the playground uses midday as the current time. \n\n* **Charging infrastructure**: The charging infrastructure consists of a hierarchy of fuses reflecting the technical installation of the charging hardware. In real life, fuses are installed in a tree structure. There is typically one fuse per charging station, another fuse for a set of charging stations, and then further fuses for sets of fuses. By default, the playground contains a charging infrastructure with two levels of fuses to illustrate the concept of the tree structure.\n\n* **Fuse**: Each fuse is characterized by the current at which the fuse cuts off the power supply. The charging optimizer assumes three-phase electrical circuits. Therefore, each fuse is defined by a triplet of current values, one per phase. The playground lets you add further fuses to the infrastructure by clicking the corresponding buttons. By default, the playground uses 32 Ampere per phase for new fuses.\n* **Charging station**: Each charging station is characterized by the current at which the built-in fuse cuts off the power supply. The playground lets you add further charging stations by clicking the corresponding buttons. By default, the playground uses charging stations with 32 Ampere fuses.\n\n* **Car**: In the playground, cars can be added to charging stations to express their arrival at the charging station. When you add cars via the corresponding button, semantically you create a charging demand. In the charging optimizer, the cars with their charging demands are the central items for the optimization process. The charging optimizer creates one charge plan per car. Therefore you need to have at least one car in your input for the charging optimizer to create a non-trivial output. The more cars you add to the input, the higher the competition for the scarce resource of charging current. 
With more cars, the available charging capacity is divided and more cars are assigned only partial or no charging opportunities.\nWhen you check out the generated JSON request you will notice the long list of parameters per car.\n\nUse the button \xe2\x80\x9cShow circuit diagram of fuse tree\xe2\x80\x9d to visualize the wiring of the configured charging infrastructure. \n\n#### Understanding charging optimizer output\nTo trigger the charging optimizer, use the button labelled **Optimize charge plans**. The resulting JSON response contains a list of charge plans, one per car. Note that the actual charge plan for the car is labelled **currentPlan** and consists of a list of 96 entries. Each entry corresponds to a 15-minute interval since midnight. The value of each entry specifies the charging current which the optimizer assigns to this car in the given interval.\n\nAdditionally, the playground visualizes the aggregated charge plans in a stacked diagram. Each vehicle\xe2\x80\x99s charge plan is represented by one color. The placement of the colored boxes along the horizontal axis indicates start and end of the charging intervals. The height of the boxes corresponds to the respective charging power (in watts).\n\n### Known Issues\nPlease refer to the list of [issues](../../issues) on GitHub.\n\n\n### How to obtain support\nPlease use the [GitHub issue tracker](../../issues) for any questions, bug reports, feature requests, etc.\n\n\n\n### To-Do (upcoming changes) \n- Provide translation of charging profiles to the Open Charge Point Protocol (OCPP) 1.6 or 2.0 \n- Provide parameters for standard EV models \n- Document API EV parameters in Swagger\n'",,"2019/12/18, 14:18:01",1407,Apache-2.0,0,71,"2022/06/28, 08:30:46",29,27,31,2,484,27,0.0,0.3278688524590164,,,0,9,false,,false,true,,,https://github.com/SAP,https://opensource.sap.com,"Walldorf, Germany",,,https://avatars.githubusercontent.com/u/2531208?v=4,,, open-ev-data,Open Dataset of Electric Vehicle specs.,chargeprice,https://github.com/chargeprice/open-ev-data.git,github,,Mobility and Transportation,"2022/04/07, 05:23:04",55,0,8,false,,Chargeprice,chargeprice,,,"b""\n\n# Open EV Data is now part of the Chargeprice API!\n\nAfter two years of being a separate project, we have decided to integrate Open\nEV Data directly into the Chargeprice API.\n\nYou can now access the EV data via the [/v2/vehicles\nendpoint](https://github.com/chargeprice/chargeprice-api-docs/blob/master/api/v2/vehicles/index.md)\nof the [Chargeprice API](https://github.com/chargeprice/chargeprice-api-docs).\n\n[Get access\nnow!](https://github.com/chargeprice/chargeprice-api-docs#getting-access)\n\n## FAQ\n\n### Why did we integrate Open EV Data into the Chargeprice API?\n\nThere are multiple reasons:\n\n1) Initially the idea of Open EV Data was to build an open dataset that everyone\n can use. This was achieved. We were also hoping to get support from the\n community to add new vehicles and keep the data up to date. Unfortunately\n this didn't really work out. So it was mainly (with a few exceptions,\n thanks!) on us, Chargeprice, to manage the data. While we love to do this,\n it's also resource intensive and it's not sustainable for us to provide data\n for free for any - even commercial - projects. In the end we believe that\n only sustainable projects can survive.\n\n2) Technically there has always been some manual effort to get Open EV Data up\n to date with the data that we are already using in Chargeprice. 
With the\n integration into the Chargeprice API this manual step is now gone.\n\n3) We have played around with multiple data management systems in the past and\n each one resulted in the need to adapt two systems: the Chargeprice API and\n Open EV Data. Now we have a single source of truth and this step is not\n needed anymore.\n\n### What will happen with this repository?\n\nWe will keep the `/data/ev-data.json` as it is, because it's accessible anyway\nvia the Git history and forks. Also we published this data with the MIT License\nthat grants free usage by anyone.\n\nHowever we won't push any updates anymore.\n\n### Can I still use the data for free?\n\nYou can still use the outdated `/data/ev-data.json` data set for free.\n\nIf you want to get regular updates, you need to subscribe to our API. You can\nfind the pricing [here](https://www.chargeprice.net/pricing).\n\n### How do I migrate?\n\nFollow the instructions on the [Chargeprice API\ndocs](https://github.com/chargeprice/chargeprice-api-docs) to get access to the\ndata.\n\nThen you need to call our API instead of fetching the file from GitHub directly.\n\nThe data format has slightly changed, but overall it's the same as before. \n\n### Can I take over the Open EV Data project/idea by fetching the data from your API and publish it here (or anywhere else) for usage by anyone?\n\nNo. Thanks for your understanding.\n\n### Are there any benefits for me with the new approach?\n\nBesides making sure that this project can also exist in the future, you will now\nalso get updates to the dataset much faster! We've usually updated Open EV Data\nonly on a monthly basis. The data from the Chargeprice API however will be\nupdated on a weekly or even daily basis!\n"",,"2019/08/08, 17:07:17",1539,MIT,0,78,"2021/04/18, 12:57:59",5,3,8,0,920,2,0.0,0.06666666666666665,,,0,2,false,,false,false,,,https://github.com/chargeprice,https://www.chargeprice.app,,,,https://avatars.githubusercontent.com/u/66462851?v=4,,, SmartEVSE,Smart EVSE Electric Vehicle Charging Station.,SmartEVSE,https://github.com/SmartEVSE/smartevse.git,github,,Mobility and Transportation,"2021/02/12, 20:15:10",87,0,7,false,C,,SmartEVSE,"C,Pawn,Assembly,Makefile,C++,OpenSCAD,Shell,Objective-C,CMake",,b'SmartEVSE v1\n=========\n\nThis repository is now for the older v1 of the module only.\nAll development for the current version v2 of the module is taking place here:\n[SmartEVSE-2](https://github.com/SmartEVSE/SmartEVSE-2)\n\n',,"2014/01/06, 12:05:03",3579,MIT,0,180,"2022/04/25, 09:11:37",14,8,32,0,548,3,0.25,0.4700854700854701,,,0,5,false,,false,false,,,https://github.com/SmartEVSE,,,,,https://avatars.githubusercontent.com/u/6329604?v=4,,, BikeshareClient,Dotnet library for integrating with GBFS bikeshare systems.,andmos,https://github.com/andmos/BikeshareClient.git,github,"gbfs,bikeshare-systems,bikeshare,netstandard",Mobility and Transportation,"2023/09/19, 12:23:54",5,5,1,true,C#,,,"C#,Dockerfile",,"b'BikeshareClient\n===\n\nDotnet client for the General Bikeshare Feed Specification ([GBFS](https://github.com/NABSA/gbfs)).\nMainly used against [Urban Infrastructure Partner](https://urbansharing.com/), with [Trondheim City Bike](https://trondheimbysykkel.no/en/open-data) and [Bergen City Bike](https://bergenbysykkel.no/en/apne-data).\n\nFor all available GBFS systems, [see the system overview from the GBFS project](https://github.com/NABSA/gbfs/blob/master/systems.csv).\n\nSupports the required fields in the GBFS standard for now.\n\n## Basic Usage\n\n```csharp\n\n// Create the 
client from a GBFS API URL.\nIBikeshareClient client = new Client(""http://gbfs.urbansharing.com/trondheim/gbfs.json"");\n\n// Or with an existing HttpClient\nIBikeshareClient client = new Client(""http://gbfs.urbansharing.com/trondheim/gbfs.json"", httpClient);\n\n// All available stations, containing name, id, lat, long, address and capacity\nvar stations = await client.GetStationsAsync();\n\n// Status for all stations, containing number of bikes and docks available, whether the station is renting / returning, etc.\nvar statuses = await client.GetStationsStatusAsync();\n\n```\n\nA simple [dotnet-script](https://github.com/filipw/dotnet-script) test script for the client can be seen [here](https://github.com/andmos/BikeshareClient/blob/master/src/TestScript/main.csx).\n\n## Microsoft.Extensions.DependencyInjection integration\n\n`BikeshareClient` can be registered to `IServiceCollection` by referencing the `BikeshareClient.DependencyInjection` [NuGet package](https://www.nuget.org/packages/BikeshareClient.DependencyInjection/):\n\n```csharp\nusing BikeshareClient.DependencyInjection;\nusing Microsoft.Extensions.DependencyInjection;\n\nservices.AddBikeshareClient(""http://gbfs.urbansharing.com/trondheim/gbfs.json"");\n```\n\n## Build and test script\n\nSimple build:\n\n```bash\ndocker run --rm -it -v $(pwd):/app mcr.microsoft.com/dotnet/sdk:7.0 dotnet pack app/src/BikeshareClient -o /app\n```\n\nRun test script:\n\n```bash\ndocker run --rm -it -v $(pwd)/src/TestScript/:/scripts andmos/dotnet-script main.csx ""Skansen""\n```\n\n[![CI / CD](https://github.com/andmos/BikeshareClient/actions/workflows/ci.yaml/badge.svg?branch=master)](https://github.com/andmos/BikeshareClient/actions/workflows/ci.yaml)\n\n[![codecov](https://codecov.io/gh/andmos/BikeshareClient/branch/master/graph/badge.svg)](https://codecov.io/gh/andmos/BikeshareClient)\n\n[![NuGet BikeshareClient](https://img.shields.io/nuget/v/BikeshareClient.svg)](https://www.nuget.org/packages/BikeshareClient/)\n\n[![NuGet BikeshareClient.DependencyInjection](https://img.shields.io/nuget/v/BikeshareClient.DependencyInjection.svg)](https://www.nuget.org/packages/BikeshareClient.DependencyInjection/)\n\n[![Dependabot Status](https://api.dependabot.com/badges/status?host=github&repo=andmos/BikeshareClient)](https://dependabot.com)\n\n>[GBFS](https://github.com/NABSA/gbfs) is a standard backed by the North American Bike Share Association ([NABSA](https://nabsa.net/)).\n'",,"2018/06/04, 12:37:55",1969,MIT,33,258,"2023/10/17, 04:18:01",5,138,159,38,8,3,0.3,0.33333333333333337,,,0,4,false,,false,false,"andmos/BikeshareClient,wdbprog/OsloBysykkel,andmos/BikeshareFunction,andmos/BikeDashboard,Sankra/DIPSbot",,,,,,,,,, Growing Urban Bicycle Networks,"Source code for the paper Growing Urban Bicycle Networks, exploring algorithmically the limitations of urban bicycle network growth.",mszell,https://github.com/mszell/bikenwgrowth.git,github,"cycling,gis,transportation-network,urban-planning,osmnx,network-analysis,bicycle-network",Mobility and Transportation,"2022/05/05, 20:06:06",59,0,19,false,Jupyter Notebook,,,"Jupyter Notebook,Python,Shell",,"b'# Growing Urban Bicycle Networks\n\nThis is the source code for the scientific paper [*Growing urban bicycle networks*](https://www.nature.com/articles/s41598-022-10783-y) by [M. Szell](http://michael.szell.net/), S. Mimar, T. Perlman, [G. Ghoshal](http://gghoshal.pas.rochester.edu/), and [R. Sinatra](http://www.robertasinatra.com/). 
The code downloads and pre-processes data from OpenStreetMap, prepares points of interest, runs simulations, measures and saves the results, creates videos and plots. \n\n**Paper**: [https://www.nature.com/articles/s41598-022-10783-y](https://www.nature.com/articles/s41598-022-10783-y) \n**Data repository**: [zenodo.5083049](https://zenodo.org/record/5083049) \n**Visualization**: [GrowBike.Net](https://growbike.net) \n**Videos & Plots**: [https://growbike.net/download](https://growbike.net/download)\n\n[![Growing Urban Bicycle Networks](readmevideo.gif)](https://growbike.net/city/paris)\n\n## Folder structure\nThe main folder/repo is `bikenwgrowth`, containing Jupyter notebooks (`code/`), preprocessed data (`data/`), parameters (`parameters/`), result plots (`plots/`), and HPC server scripts and jobs (`scripts/`).\n\nOther data files (network plots, videos, results, exports, logs) make up many GBs and are stored in the separate external folder `bikenwgrowth_external` due to GitHub\'s space limitations.\n\n## Setting up code environment\n### Conda yml\n[Download `.yml`](env.yml)\n\n### Manual procedure\n```\nconda create --override-channels -c conda-forge -n OSMNX python=3.8.2 osmnx=0.16.2 python-igraph watermark haversine rasterio tqdm geojson\nconda activate OSMNX\nconda install -c conda-forge ipywidgets\npip install opencv-python\nconda install -c anaconda gdal\npip install --user ipykernel\npython -m ipykernel install --user --name=OSMNX\n```\nRun Jupyter Notebook with kernel OSMNX (Kernel > Change Kernel > OSMNX)\n\n## Running the code on an HPC cluster with SLURM\nFor multiple, especially large, cities, running the code on a high performance computing cluster is strongly suggested as the tasks are easy to parallelize. The shell scripts are written for [SLURM](https://slurm.schedmd.com/overview.html). \n\n1. Populate `parameters/cities.csv`, see below.\n2. Run 01 and 02 once locally to download and prepare all networks and POIs (the alternative is server-side `sbatch scripts/download.job`, but OSMNX throws too many connection issues, so manual supervision is needed)\n3. Upload `code/*.py`, `parameters/*`, `scripts/*`\n4. Run: `./mastersbatch_analysis.sh`\n5. Run, if needed: `./mastersbatch_export.sh`\n6. After all is finished, run: `./cleanup.sh`\n7. Recommended, run: `./fixresults.sh` (to clean up results in case of amended data from repeated runs)\n\n## Running the code locally\nSingle (or few/small) cities could be run locally but require manual, step-by-step execution of Jupyter notebooks:\n\n1. Populate `parameters/cities.csv`, see below.\n2. Run 01 and 02 once to download and prepare all networks and POIs \n3. Run 03,04,05 for each parameter set (see below), set in `parameters/parameters.py` \n4. Run 06 or other steps as needed.\n\n### Parameter sets \n1. `prune_measure = ""betweenness""`, `poi_source = ""railwaystation""` \n2. `prune_measure = ""betweenness""`, `poi_source = ""grid""` \n3. `prune_measure = ""closeness""`, `poi_source = ""railwaystation""` \n4. `prune_measure = ""closeness""`, `poi_source = ""grid""` \n5. `prune_measure = ""random""`, `poi_source = ""railwaystation""` \n6. `prune_measure = ""random""`, `poi_source = ""grid""` \n\n## Populating cities.csv\n### Checking nominatimstring \n* Go to e.g. [https://nominatim.openstreetmap.org/search.php?q=paris%2C+france&polygon_geojson=1&viewbox=](https://nominatim.openstreetmap.org/search.php?q=paris%2C+france&polygon_geojson=1&viewbox=) and enter the search string. 
If a correct polygon (or multipolygon) pops up, it should be fine. If not, leave the field empty and acquire a shape file, see below.\n\n### Acquiring shape file \n* Go to [Overpass](https://overpass-turbo.eu), to the city, and run:\n `relation[""boundary""=""administrative""][""name:en""=""Copenhagen Municipality""]({{bbox}});(._;>;);out skel;`\n* Export: Download as GPX\n* Use QGIS to create a polygon, with Vector > Join Multiple Lines, and Processing Toolbox > Polygonize (see [Stackexchange answer 1](https://gis.stackexchange.com/questions/98320/connecting-two-line-ends-in-qgis-without-resorting-to-other-software) and [Stackexchange answer 2](https://gis.stackexchange.com/questions/207463/convert-a-line-to-polygon))\n'",",https://zenodo.org/record/5083049","2020/09/03, 12:35:05",1147,AGPL-3.0,0,189,"2021/02/26, 14:12:41",4,2,2,0,971,0,0.0,0.09202453987730064,"2021/07/08, 10:01:35",1.0.0,0,3,false,,false,false,,,,,,,,,,, A/B Street,"A traffic simulation game exploring how small changes to roads affect cyclists, transit users, pedestrians, and drivers.",a-b-street,https://github.com/a-b-street/abstreet.git,github,"traffic-simulation,game,openstreetmap,simulation,seattle",Mobility and Transportation,"2023/10/19, 08:34:46",7123,0,419,true,Rust,A/B Street,a-b-street,"Rust,Shell,Python,HTML,TypeScript,GLSL,Go,Makefile,CSS",https://a-b-street.github.io/docs/,"b'# A/B Street\n\n[![DOI](https://zenodo.org/badge/135952436.svg)](https://zenodo.org/badge/latestdoi/135952436)\n[![](https://dcbadge.vercel.app/api/server/nCvMD4xj4K?style=flat)](https://discord.gg/nCvMD4xj4K)\n\nEver been stuck in traffic on a bus, wondering why there is legal street parking\ninstead of a dedicated bus lane? A/B Street is a project to plan, simulate, and\ncommunicate visions for making cities friendlier to people walking, biking, and\ntaking public transit. 
We create software to\n[simulate traffic, edit streets and intersections](https://a-b-street.github.io/docs/software/abstreet.html),\n[plan bike networks](https://a-b-street.github.io/docs/software/ungap_the_map/index.html),\ncreate\n[low-traffic neighborhoods](https://a-b-street.github.io/docs/software/ltn/index.html),\nand educate the public about\n[15-minute neighborhoods through games](https://a-b-street.github.io/docs/software/santa.html).\nThe project works anywhere in the world, thanks to\n[OpenStreetMap](https://www.openstreetmap.org/about).\n\n- Run it on [your web browser](https://play.abstreet.org/0.3.48/abstreet.html),\n [Windows](https://github.com/a-b-street/abstreet/releases/download/v0.3.48/abstreet_windows_v0_3_48.zip),\n [Mac](https://github.com/a-b-street/abstreet/releases/download/v0.3.48/abstreet_mac_v0_3_48.zip),\n [Linux](https://github.com/a-b-street/abstreet/releases/download/v0.3.48/abstreet_linux_v0_3_48.zip),\n [FreeBSD](https://www.freshports.org/games/abstreet/), or\n [read all instructions](https://a-b-street.github.io/docs/user/index.html)\n- [build from source](https://a-b-street.github.io/docs/tech/dev/index.html)\n\n## Videos\n\n- [Alpha release trailer](https://www.youtube.com/watch?v=LxPD4n_1-LU)\n- [Presentations](https://a-b-street.github.io/docs/project/presentations.html)\n\n![](https://a-b-street.github.io/docs/project/history/retrospective/traffic_sim.gif)\n\n## Documentation\n\n- [User guide](https://a-b-street.github.io/docs/user/index.html)\n- Technical\n - [Developer guide](https://a-b-street.github.io/docs/tech/dev/index.html)\n - [How the traffic simulation works](https://a-b-street.github.io/docs/tech/trafficsim/discrete_event/index.html)\n - [Intersection geometry](https://a-b-street.github.io/docs/tech/map/geometry/index.html)\n- Project\n - [Roadmap](https://a-b-street.github.io/docs/software/ungap_the_map/plan.html#future-directions)\n - [Getting involved](https://a-b-street.github.io/docs/project/contributing.html)\n - [Accomplishments & challenges](https://a-b-street.github.io/docs/project/history/retrospective/index.html)\n\n## Project mission\n\nWe amplify the efforts of individuals and advocacy groups who campaign to\ntransition cities away from private motor vehicles. We believe in transparent\nand reproducible analysis, so all of our work is open source and based on public\ndata. We believe everybody should have a voice in shaping their city, so our\nsoftware aims to be easy to use.\n\nWhy not leave city planning to professionals? People are local experts on the\nsmall slice of the city they interact with daily -- the one left turn lane that\nalways backs up or a certain set of poorly timed walk signals.\n[Laura Adler](http://www.govtech.com/data/SimCities-Can-City-Planning-Mistakes-Be-Avoided-Through-Data-Driven-Simulations.html)\nwrites:\n\n> ""Only with simple, accessible simulation programs can citizens become active\n> generators of their own urban visions, not just passive recipients of options\n> laid out by government officials.""\n\nExisting urban planning software is either proprietary or hard to use. A/B\nStreet strives to be highly accessible, by being a fun, engaging game. 
See\n[here](https://a-b-street.github.io/docs/project/motivations.html) for more\nguiding principles.\n\n## Credits\n\nCore team:\n\n- Dustin Carlino\n- [Yuwen Li](https://www.yuwen-li.com/) (UX)\n- [Michael Kirk](https://github.com/michaelkirk)\n\n[See full credits](https://a-b-street.github.io/docs/project/team.html)\n\nContact or follow\n[@CarlinoDustin](https://twitter.com/CarlinoDustin) for updates.\n'",",https://zenodo.org/badge/latestdoi/135952436","2018/06/04, 00:44:43",1969,Apache-2.0,415,8396,"2023/10/09, 14:00:27",222,535,869,65,16,12,3.9,0.06293288750895631,"2023/10/09, 09:59:17",v0.3.48,28,41,true,github,false,false,,,https://github.com/a-b-street,abstreet.org,"Seattle, WA",,,https://avatars.githubusercontent.com/u/78323823?v=4,,, enviroCar,An Android App for collecting car sensor data for the enviroCar platform.,enviroCar,https://github.com/enviroCar/enviroCar-app.git,github,,Mobility and Transportation,"2023/08/23, 09:00:01",84,0,13,true,Java,enviroCar,enviroCar,"Java,HTML,Python",https://envirocar.org,"b""# enviroCar Android App\n\nThis is the app for the enviroCar platform. (www.envirocar.org)\n\n## Description\n\n### XFCD Mobile Data Collection and Analysis\n\n**Collecting and analyzing vehicle sensor data**\n\nenviroCar Mobile is an Android application for smartphones that can be used to collect Extended Floating Car Data (XFCD). The app communicates with an OBD2 Bluetooth adapter while the user drives. This enables read access to data from the vehicle\xe2\x80\x99s engine control. The data is recorded along with the smartphone\xe2\x80\x99s GPS position data. The driver can view statistics about their drives and publish their data as open data. The latter happens by uploading tracks to the enviroCar server, where the data is available under the ODbL license for further analysis and use. The data can also be viewed and analyzed via the enviroCar website. enviroCar Mobile is one of the enviroCar Citizen Science Platform\xe2\x80\x99s components (www.envirocar.org).\n\n\n**Key Technologies**\n\n-\tAndroid\n-\tJava\n\n**Benefits**\n\n-\tEasy collection of Extended Floating Car Data\n- Optional automation of data collection and upload\n- Estimation of fuel consumption and CO2 emissions\n- Publishing anonymized track data as Open Data\n- Map based visualization of track data and track statistics\n\n\n## Quick Start \n\n\n### Installation\n\nUse the [Google Play Store](https://play.google.com/store/apps/details?id=org.envirocar.app) to install the app on your device.\n\nWe are planning to include the project in F-Droid in the near future.\n\n## Development\n\nThis software uses the Gradle build system and is optimized to work within Android Studio 1.3+.\nThe setup of the source code should be straightforward. Just follow the Android Studio guidelines\nfor existing projects.\n\n### Setting up the Mapbox SDK\nThe enviroCar App project uses the ``Mapbox Maven repository``. **Mapbox is a mapping and location cloud platform for developers.**\nTo build the project you need a Mapbox account; you can create one for free [here](https://account.mapbox.com/auth/signup/). \nOnce you have created an account, you will need to configure credentials. \n\n### Configure credentials\n1. From your account's [tokens page](https://account.mapbox.com/access-tokens/), click the **Create a token** button.\n2. Give your token a name and make sure that you have checked the ``Downloads:Read`` scope.\n3. 
Make sure you copy your token and save it somewhere, as you will not be able to see the token again. \n\n### Configure your secret token\n1. This is a secret token, and we will use it in the ``gradle.properties`` file. You should not expose the token in public; that's why ``gradle.properties`` is added to ``.gitignore``. It's also possible to store the secret token in your local user's _gradle.properties_ file, usually stored at _\xc2\xabUSER_HOME\xc2\xbb/.gradle/gradle.properties_. \n2. Now open the ``gradle.properties`` file and add this line: ``MAPBOX_DOWNLOADS_TOKEN = ``. The secret token has to be pasted without any quote marks. \n``MAPBOX_DOWNLOADS_TOKEN=sk.dutaksgjdvlsayVDSADUTLASDs@!ahsvdaslud*JVAS@%DLUTSVgdJLA&&>Hdval.sujdvadvasuydgalisy`` (this is just a random string, not a real token)\n3. That's it. You are good to go!\n\nIf you are still facing any problem, check out the [Mapbox guide](https://docs.mapbox.com/android/maps/guides/install/) or feel free to [create an issue](https://github.com/enviroCar/enviroCar-app/issues/new).\n\n## License\n\nThe enviroCar App is licensed under the [GNU General Public License, Version 3](https://github.com/enviroCar/enviroCar-app/blob/master/LICENSE).\n\n## Recorded Parameters\n| Parameter name | Unit |\n|---|---|\n| Speed | km/h |\n| Mass-Air-Flow (MAF) | l/s |\n| Calculated (MAF) | g/s |\n| RPM | u/min |\n| Intake Temperature | \xc2\xb0C |\n| Intake Pressure | kPa |\n| CO2 | kg/h |\n| CO2 (GPS-based) | kg/h |\n| Consumption | l/h |\n| Consumption (GPS-based) | l/h |\n| Throttle Position | % |\n| Engine Load | % |\n| GPS Accuracy | % |\n| GPS Speed | km/h |\n| GPS Bearing | deg |\n| GPS Altitude | m |\n| GPS PDOP | precision |\n| GPS HDOP | precision |\n| GPS VDOP | precision |\n| Lambda Voltage | V |\n| Lambda Voltage ER | ratio |\n| Lambda Current | A |\n| Lambda Current ER | ratio |\n| Fuel System Loop | boolean |\n| Fuel System Status Code | category |\n| Long Term Trim 1 | % |\n| Short Term Trim 1 | % |\n\n\n## Changelog\n\nCheck out the [Changelog](https://github.com/enviroCar/enviroCar-app/blob/master/CHANGELOG.md) for current changes.\n\n## OBD simulator\n\nThe repository also contains a simple OBD simulator (dumb, nothing fancy) that can\nbe used on another Android device to mock the actual car adapter.\n\n## References\n\nThis app is in operational use in the [CITRAM - Citizen Science for Traffic Management](https://www.citram.de/) project. 
Check out the [enviroCar website](https://envirocar.org/) for more information about the enviroCar project.\n\n## How to Contribute\nFor contributing to the enviroCar Android App, please have a look at our [Contributor Guidelines](https://github.com/enviroCar/enviroCar-app/blob/master/CONTRIBUTING.md).\n\n\n## Contributors\n\nHere is the list of [contributors to this project](https://github.com/enviroCar/enviroCar-app/blob/master/CONTRIBUTORS.md).\n"",,"2013/04/17, 14:19:21",3843,GPL-3.0,42,2551,"2023/09/03, 04:41:35",88,468,908,35,52,38,1.3,0.7364831953239162,"2023/08/23, 09:07:06",v2.2.10,0,20,false,,false,true,,,https://github.com/enviroCar,https://envirocar.org/,Münster,,,https://avatars.githubusercontent.com/u/4232323?v=4,,, EVerest,"An open source software stack for EV charging infrastructure from firmware to cloud: OCPP, ISO 15118, SunSpec, Modbus, energy management and load balancing and an entire flexible middle-ware framework based on MQTT. Part of the Linux Foundation Energy ecosystem.",EVerest,https://github.com/EVerest/EVerest.git,github,,Mobility and Transportation,"2023/10/24, 08:38:55",112,0,63,true,,EVerest,EVerest,,,"b""[![OpenSSF Best Practices](https://bestpractices.coreinfrastructure.org/projects/6739/badge)](https://bestpractices.coreinfrastructure.org/projects/6739)\n\n![EVerest Logo](docs/img/everest_horizontal-color.svg)\n\nThe primary goal of EVerest is to develop and maintain an open source software stack for EV charging infrastructure. EVerest is developed with modularity and customizability in mind, so it consists of a framework to configure several interchangeable modules which are coupled with each other via MQTT. EVerest will help speed the adoption of e-mobility by bringing the advantages of open source to the EV charging world. It will also enable new features for local energy management, PV integration, and many more. \nThe EVerest project was initiated by PIONIX GmbH to help with the electrification of the mobility sector.\n\nFor questions and support please join the [EVerest mailing list](https://lists.lfenergy.org/g/everest).\nThere is also a calendar for weekly dev calls.\n\nComplete documentation can be found [here](https://everest.github.io).\n\n## Build & Install\n\n- [everest-core](https://github.com/EVerest/everest-core#readme)\n\n## Discussion\n\nYou can connect with the community in a variety of ways:\n\n- [EVerest mailing list](https://lists.lfenergy.org/g/everest)\n- [EVerest TSC mailing list / Sensitive Topics](https://lists.lfenergy.org/g/everest-tsc)\n- Join the [weekly meetings](https://everest.github.io/nightly/#weekly-tech-meetup)\n\n## Contributing\nAnyone can contribute to the EVerest project - learn more at [CONTRIBUTING.md](CONTRIBUTING.md). \nAll project management related documents incl. our roadmap can be found [here](tsc/README.md).\n\n## Governance\nEVerest is a project hosted by the [LF Energy Foundation](https://lfenergy.org). This project's technical charter is located in [CHARTER.md](tsc/CHARTER.md), and the project has established its own processes for managing day-to-day work at [GOVERNANCE.md](GOVERNANCE.md).\n\n## Reporting Issues\nTo report a problem, you can open an [issue](https://github.com/EVerest/everest/issues) in the repository against a specific workflow. 
If the issue is sensitive in nature or a security-related issue, please do not report it in the issue tracker but instead email everest-tsc@lists.lfenergy.org.\n\n## Licensing\nEVerest and its subprojects are licensed under the Apache License, Version 2.0. See the `LICENSE` file for the full license text.\n\n"",,"2021/12/20, 11:23:39",674,GPL-3.0,127,206,"2023/10/24, 08:39:01",4,79,107,80,1,3,1.4,0.7135678391959799,"2023/02/22, 07:58:00",2023.2.0,0,15,false,,true,true,,,https://github.com/EVerest,https://www.lfenergy.org/projects/everest/,,,,https://avatars.githubusercontent.com/u/73219292?v=4,,, Streetmix,Makes it easy for people to design public spaces together.,streetmix,https://github.com/streetmix/streetmix.git,github,"streetmix,civic-tech,urban-planning,city-builder,city-planning",Mobility and Transportation,"2023/10/24, 02:02:08",628,0,43,true,JavaScript,Streetmix,streetmix,"JavaScript,TypeScript,SCSS,Handlebars,API Blueprint,Shell,Procfile",https://streetmix.net,"b'

Streetmix is a collaborative process for communities and city planners to improve public spaces. \nDesign, remix, and share your neighborhood street at [streetmix.net](https://streetmix.net).\n\n:couple: :palm_tree: :oncoming_automobile: :oncoming_bus: :palm_tree: :dancer:\n\nJoin our community on Discord!\n\nWe welcome contributions! Please see our contributor guidelines.
\n\n## About\n\n#### What are street sections?\n\nA ""section"" is a shortened way of saying ""cross-section view"", a type of 2D non-perspectival drawing commonly used in engineering and architecture to show what something looks like when you take a slice of it and look at it head-on. Similarly, a street section is a cross section view of a street, showing the widths and placement of vehicle lanes, bike lanes, sidewalks, trees, street furniture or accessories (like benches or street lamps), as well as engineering information like how the road is sloped to facilitate drainage, or the locations of underground utilities. Although sections can be simplified line drawings, urban designers and landscape architects have created very colorful illustrative street sections, removing most of the engineering particulars to communicate how a street could be designed to feel safe, walkable or habitable.\n\n![example-sections](docs/static/thumb_sections.png ""Left to Right: (1) Existing conditions section of Market Street, from the Better Market Street Plan, San Francisco (2) Proposed one-way cycletrack design of Second Street, from the Great Second Street Plan, San Francisco (3) Example of an illustrative section, courtesy of Lou Huang"")\n\n#### Why does Streetmix exist?\n\nWhen city planners seek input from the public at community meetings on streetscape improvements, one common engagement activity is to create paper cut-outs depicting different street components (like bike lanes, sidewalks, trees, and so on) and allow attendees to reassemble them into their desired streetscape. Planners and city officials can then take this feedback to determine a course of action for future plans. By creating a web-based version of this activity, planners can reach a wider audience than they could at meetings alone, and allow community members to share and remix each other\'s creations.\n\nThe goal is to promote two-way communication between planners and the public, as well. Streetmix intends to communicate not just feedback to planners but also information and consequences of actions to the users that are creating streets. Kind of like SimCity did with its in-game advisors!\n\nStreetmix can be used as a tool to promote and engage citizens around streetscape and placemaking issues, such as [Complete Streets][completestreets] or the Project for Public Spaces\' [Rightsizing Streets Guide][rightsizing].\n\n[completestreets]: https://smartgrowthamerica.org/program/national-complete-streets-coalition/\n[rightsizing]: http://www.pps.org/reference/rightsizing/\n\n#### Why the name ""Streetmix""?\n\n""Streets"" + ""remix"" :-)\n\n#### How did this project start?\n\nStreetmix started as a [Code for America][cfa] hackathon project in January 2013, inspired by community meetings like the one described above.\n\n[cfa]: https://codeforamerica.org/\n\n#### How do I install / set up Streetmix myself?\n\nStreetmix is a [Node.js](https://nodejs.org/) based project. Set up your own by [following these instructions](https://docs.streetmix.net/contributing/code/local-setup)!\n\n## Sponsors\n\n

\n\n## Copyright\n\nCopyright (c) 2013-2018 Code for America and contributors. \nCopyright (c) 2019-2023 Streetmix LLC. \nSee [LICENSE][] for details.\n\n[license]: https://github.com/streetmix/streetmix/blob/main/LICENSE\n\nStreetmix is maintained by [Bad Idea Factory](https://biffud.com/) with the support of many contributors.\n'",,"2013/01/19, 21:03:45",3931,CUSTOM,313,7993,"2023/10/24, 02:00:44",218,1816,2528,321,1,7,0.0,0.45245106530400137,"2014/02/21, 20:50:04",v0.9,0,55,true,"github,patreon,open_collective,ko_fi,tidelift,community_bridge,liberapay,issuehunt,otechie,custom",true,true,,,https://github.com/streetmix,https://streetmix.net/,United States of America,,,https://avatars.githubusercontent.com/u/6877271?v=4,,, Transitland," An open data platform that collects GTFS, GTFS Realtime, and other open data feeds from transit providers around the world.",transitland,https://github.com/transitland/transitland-atlas.git,github,"gtfs,gtfs-realtime,gtfs-rt,transit,mobility,open-data,transportation,gbfs,mds,transitland",Mobility and Transportation,"2023/10/25, 13:03:46",94,0,27,true,Python,Transitland,transitland,"Python,Shell",https://www.transit.land/operators/,"b'\n# Transitland Atlas\n\nAn open catalog of transit/mobility data feeds and operators.\n\nThis catalog is used to power the canonical [Transitland](https://transit.land) platform, is available for distributed use of the [transitland-lib](https://github.com/interline-io/transitland-lib) tooling, and is open to use as a ""crosswalk"" within other transportation data systems.\n\n**Table of contents**:\n\n\n- [Feeds](#feeds)\n- [How to Add a New Feed](#how-to-add-a-new-feed)\n- [How to Update an Existing Feed](#how-to-update-an-existing-feed)\n- [Operators](#operators)\n- [Onestop IDs](#onestop-ids)\n- [License](#license)\n\n## Feeds\n\nPublic mobility/transit data feeds cataloged in the [Distributed Mobility Feed Registry](https://github.com/transitland/distributed-mobility-feed-registry) format.\n\nIncludes feeds in the following data specifications (specs):\n\n- [GTFS](https://gtfs.org/reference/static)\n- [GTFS Realtime](https://gtfs.org/reference/realtime/v2/)\n- [GBFS](https://github.com/NABSA/gbfs) - automatically synchronized from https://github.com/NABSA/gbfs/blob/master/systems.csv\n- [MDS](https://github.com/openmobilityfoundation/mobility-data-specification) - automatically synchronized from https://github.com/openmobilityfoundation/mobility-data-specification/blob/main/providers.csv\n\n## How to Add a New Feed\n\n1. Check if a `./feeds` file exists with the domain name for the feed URL. (ex. `http://bart.gov` -> `bart.gov.dmfr.json`)\n * If a file exists, use that file, otherwise create a new empty DMFR file.\n * To create a new file, you can use `example.com.dmfr.json` as a starting point, which contains the basic schema and an example feed.\n * Feeds exist as an array in the `feeds` property of a DMFR file.\n2. Propose a new Onestop ID for the feed (see [below](#onestop-ids))\n * Feed Onestop IDs begin with `f-` and continue with a unique string, like the transit operator\'s name\n * Use lowercase, alphanumeric unicode characters in the name component\n * Use `~` instead of spaces or other punctuation\n3. Add the appropriate URL to `static_current`\n4. Add license and/or authorization metadata if you are aware of it.\n5. Open a PR. Feel free to add any questions as a comment on the PR if you are uncertain about your DMFR file.\n6. 
GitHub Actions (continuous integration service) will run a basic validation check on your PR and report any errors.\n7. A moderator will review and comment on your PR. If you don\'t get a response shortly, feel free to ping us at [hello@transit.land](mailto:hello@transit.land).\n\nIf you are using the GitHub web interface, you can click ""Add a file -> Create a new file"" in the `./feeds` directory, or when viewing an individual existing file, the pencil icon in the upper right of the contents display. Make sure to select ""Create a new branch for this commit"" and begin creating a pull request to propose changes.\n\nFor more information on what can go into a DMFR file, see the [DMFR documentation](https://github.com/transitland/distributed-mobility-feed-registry).\n\n## How to Update an Existing Feed\n\n1. Find the DMFR file containing the feed.\n2. Update the URLs and other properties for that feed.\n * For static feeds, use `static_current` for the present URL.\n * Add the previous URL value to the `static_historic` array.\n3. Edit the file and open the PR as described above.\n\nOnestop ID values for feeds and operators are used to synchronize with existing values in the Transitland database. Editing the Onestop ID value will cause a new feed or operator record to be created; values in the database that are no longer present in the Transitland Atlas will be marked as soft-deleted. Use caution and clear intent when changing a Onestop ID value.\n\n## Operators\n\n[Operators](https://transit.land/operators) describe, annotate, and group data from different feed data sources. For example, `o-9q9-actransit` describes a transit operator, Alameda-Contra Costa Transit District, which pulls from two different data sources (one GTFS-RT, one static GTFS) and adds additional metadata such as a US National Transit Database ID.\n\nOperators can exist in the top-level `operators` property of a DMFR file, or nested within a feed. An operator defined in the top-level `operators` property requires an `associated_feeds` value to connect the operator with data sources. When an operator is nested within a feed, there is an implicit association that all GTFS agencies contained in that file are associated with that operator, which helps reduce complexity and maintenance.\n\nThe key properties for an operator are:\n* `onestop_id`: A Onestop ID value for this operator, starting with `o-`\n* `name`: A formal name for the operator, such as `Bay Area Rapid Transit`\n* `short_name`: A simpler, colloquial name for an operator, such as `BART`\n* `tags`: A set of key/value string pairs that provide additional metadata and references\n* `website`: A URL to find more information about this operator\n* `associated_feeds`: An array of feed association objects; for each entry, `feed_onestop_id` is required and `gtfs_agency_id` is optional\n\nValues for `onestop_id` and `name` are required; `associated_feeds` (either explicit or through nesting the operator in a feed) are highly recommended.\n\n## Onestop IDs\n\nEvery feed and operator record in the Atlas repository is identified by a unique [Onestop ID](https://transit.land/documentation/onestop-id-scheme/). Onestop IDs are meant to be globally unique (no duplicates in the world) and to be stable (no change over time).\n\nTo simplify the process of creating Onestop IDs, we now allow two different variants:\n\n- a three-part Onestop ID includes an entity prefix, a geohash, and a name. 
For example: `f-9q9-bart`\n- a two-part Onestop ID includes just the entity prefix and a name. For example: `f-banning~pass~transit`\n\nThe two-part Onestop ID is simpler to create if you are manually adding records to the Transitland Atlas repository.\n\nRules for Onestop IDs in this repository:\n\n- Feeds start with `f-` and operators start with `o-`\n- Geohash part is optional\n- Name can include any alphanumeric characters in UTF-8\n- The only separation or punctuation character allowed in the name component is a tilde (`~`)\n\n## License\n\nAll data files in this repository are made available under the [Community Data License Agreement \xe2\x80\x93 Permissive, Version 1.0](LICENSE.txt). This license allows you to:\n\n1. use this data for commercial, educational, or research purposes and be able to trust that it\'s cleanly licensed\n2. duplicate data, as long as you mention (attribute) this source\n3. use this data to create analyses and derived data (such as geocoding), without needing to provide attribution\n\nWe welcome you to contribute your edits and improvements directly to this repository. Please open a pull request!\n'",,"2019/11/15, 21:28:51",1440,CUSTOM,331,1290,"2023/10/24, 20:54:57",13,940,961,335,1,8,0.2,0.41903914590747326,,,0,83,false,,false,false,,,https://github.com/transitland,https://www.transit.land,,,,https://avatars.githubusercontent.com/u/9141652?v=4,,, cyclestreets,The goal of cyclestreets is to provide a simple R interface to the CycleStreets routing service.,cyclestreets,https://github.com/cyclestreets/cyclestreets-r.git,github,"cycling,routing,r,transport,transportation-planning",Mobility and Transportation,"2023/09/24, 21:25:04",26,0,12,true,R,CycleStreets,cyclestreets,R,https://rpackage.cyclestreets.net/,"b'\n\n\n[![R-CMD-check](https://github.com/cyclestreets/cyclestreets-r/workflows/R-CMD-check/badge.svg)](https://github.com/cyclestreets/cyclestreets-r/actions)\n[![CRAN\nstatus](https://www.r-pkg.org/badges/version/cyclestreets)](https://CRAN.R-project.org/package=cyclestreets)\n[![R-CMD-check](https://github.com/cyclestreets/cyclestreets-r/actions/workflows/R-CMD-check.yaml/badge.svg)](https://github.com/cyclestreets/cyclestreets-r/actions/workflows/R-CMD-check.yaml)\n\n\n\n\n# cyclestreets\n\nThe goal of cyclestreets is to provide a simple R interface to the\nCycleStreets routing service.\n\nIt was split out from **stplanr** for modularity.\n\n## Installation\n\nYou can install the released version of cyclestreets from\n[CRAN](https://CRAN.R-project.org) with:\n\n``` r\ninstall.packages(""cyclestreets"")\n```\n\nInstall the development version with **devtools** as follows:\n\n``` r\n# install.packages(""devtools"")\ndevtools::install_github(""cyclestreets/cyclestreets-r"")\n```\n\n## Example\n\nA common need is to get from A to B:\n\n``` r\nlibrary (""cyclestreets"")\n# stplanr::geo_code (""leeds rail station"") \nfrom = c(-1.544, 53.794)\n# stplanr::geo_code (""leeds university"") \nto = c(-1.551, 53.807)\nr = cyclestreets::journey(from, to, ""balanced"")\nsf:::plot.sf(r)\n#> Warning: plotting the first 10 out of 43 attributes; use max.plot = 43 to plot\n#> all\n```\n\n\n\nTo get a key go to \n\nSave the key as an environment variable, either with\n`export CYCLESTREETS=your_key_here` in your shell or by adding\n`CYCLESTREETS=your_key_here` as a new line in your `.Renviron` file,\ne.g.\xc2\xa0with the following command:\n\n``` r\nusethis::edit_r_environ()\n```\n\nCheck the map is good with leaflet:\n\n``` r\nlibrary(leaflet)\np = colorNumeric(""RdYlBu"", domain = r$quietness, 
reverse = TRUE)\nleaflet(r) %>% \n addTiles() %>% \n addPolylines(color = ~p(quietness), weight = 20, opacity = 0.9) %>% \n addLegend(pal = p, values = ~quietness)\n```\n\nOr **tmap**, highlighting the recently added \xe2\x80\x98quietness\xe2\x80\x99 variable:\n\n``` r\nlibrary(tmap)\ntmap_mode(""view"")\ntm_shape(r) + tm_lines(""quietness"", palette = ""RdYlBu"", lwd = 3)\n```\n\n #> The legacy packages maptools, rgdal, and rgeos, underpinning the sp package,\n #> which was just loaded, will retire in October 2023.\n #> Please refer to R-spatial evolution reports for details, especially\n #> https://r-spatial.org/r/2023/05/15/evolution4.html.\n #> It may be desirable to make the sf package available;\n #> package maintainers should consider adding sf to Suggests:.\n #> The sp package is now running under evolution status 2\n #> (status 2 uses the sf package in place of rgdal)\n #> Breaking News: tmap 3.x is retiring. Please test v4, e.g. with\n #> remotes::install_github(\'r-tmap/tmap\')\n #> tmap mode set to interactive viewing\n #> Interactive map saved to /home/robin/github/cyclestreets/cyclestreets-r/m.html\n\n\n\nSee an interactive version of this map, showing all variables per\nsegment, [here](https://rpubs.com/RobinLovelace/784236).\n\nOr **mapview**:\n\n``` r\nmapview::mapview(r)\n```\n\nRoute types available are: fastest, quietest, balanced. See help pages\nsuch as `?journey` for details.\n\nYou can also get streets by LTN status.\n\n``` r\nnetwork_ltns = ltns(r)\nplot(network_ltns)\n```\n\n\n'",,"2018/04/27, 05:38:21",2007,GPL-3.0,131,275,"2023/09/24, 21:20:29",4,27,81,44,31,2,0.3,0.16666666666666663,"2023/08/13, 13:36:26",v1.0.0,0,5,false,,false,false,,,https://github.com/cyclestreets,https://www.cyclestreets.org/,"Cambridge, UK",,,https://avatars.githubusercontent.com/u/361419?v=4,,, Transportation Fuels Reporting System,An online application for fuel suppliers to manage their compliance obligations under the Greenhouse Gas Reduction.,bcgov,https://github.com/bcgov/tfrs.git,github,"credit,transfer,low,carbon,fuel,tra,tran,transfers,award,f,fu,fue,fuels,trans,cre,cred,credi,credits,nrm,empr",Mobility and Transportation,"2023/10/24, 17:59:48",21,0,3,true,Python,Province of British Columbia,bcgov,"Python,JavaScript,CSS,Shell,Groovy,SCSS,Less,PLpgSQL,Dockerfile,Smarty,Go,HTML,Mustache,Batchfile",,"b'\n# Production release\n\n## Pre-production release\n\n* Update the description of the tracking pull request\n* Verify the changes made during the previous post-production release\n\n## Production release\n\n* Manually trigger the pipeline tfrs-release.yaml\n\n## Post-production release\n\n* Merge the tracking pull request to master\n* Create the release tag from master and make it the latest release (this is done automatically by pipeline create-release.yaml)\n* Create the new release branch from master\n* Update tfrs-release.yaml\n * name\n * branches\n * PR_NUMBER\n * RELEASE_NAME\n* Update .pipeline/lib/config.js\n * const version\n * releaseBranch\n* Update frontend/package.json\n * version\n* Update dev-release.yaml\n * name\n * branches\n * PR_NUMBER\n * RELEASE_NAME\n* Commit all the above changes and create the tracking pull request to merge the new release branch to master. The PR_NUMBER needs to be updated after the tracking pull request is created. 
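Most of these post-release edits are mechanical version bumps. As an illustration only (this helper is not part of the repository), a small Python sketch that bumps the version in `frontend/package.json`, one of the files listed above; the YAML pipelines and `.pipeline/lib/config.js` are edited by hand in the same spirit:

```python
# Hypothetical helper, not part of this repo: bump the release version in
# frontend/package.json as described in the post-production checklist above.
import json
from pathlib import Path

def bump_frontend_version(new_version: str) -> None:
    path = Path(""frontend/package.json"")
    package = json.loads(path.read_text())
    package[""version""] = new_version  # e.g. ""2.11.0"" for release-2.11.0
    path.write_text(json.dumps(package, indent=2))

if __name__ == ""__main__"":
    bump_frontend_version(""2.11.0"")  # example next version, not a real release
```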
\n\n# TFRS Pipelines\n\n## Primary Pipelines\n\n* dev-release.yaml (TFRS Dev release-2.10.0): the pipeline is automatically triggered when there is a commit to the release branch\n* tfrs-release.yaml (TFRS release-2.10.0): the pipeline builds the release and deploys on Test and Prod, it needs to be manually triggered\n* create-release.yaml (Create Release after merging to master): tag and create the release after merging release branch to master. The description of the tracking pull request becomes release notes\n\n## Other Pipelines\n\n* cleanup-cron-workflow-runs.yaml (Scheduled cleanup old workflow runs): a cron job to cleanup the old workflows\n* cleanup-workflow-runs.yaml (Cleanup old workflow runs): manually cleanup teh workflow runs\n\n'",,"2017/01/25, 21:12:26",2464,Apache-2.0,31,2498,"2023/10/25, 06:43:21",40,1486,2637,863,0,5,0.7,0.6611993849308047,"2023/09/21, 21:19:25",v2.10.0,0,30,false,,true,true,,,https://github.com/bcgov,https://github.com/bcgov/BC-Policy-Framework-For-GitHub,Canada,,,https://avatars.githubusercontent.com/u/916280?v=4,,, pycontrails,Python library for modeling aviation climate impacts.,contrailcirrus,https://github.com/contrailcirrus/pycontrails.git,github,,Mobility and Transportation,"2023/10/25, 12:51:18",35,0,35,true,Python,Contrails,contrailcirrus,"Python,Jupyter Notebook,Makefile,Cython",https://py.contrails.org/,"b'# pycontrails\n\n> Python library for modeling aviation climate impacts\n\n| | |\n|---------------|-------------------------------------------------------------------|\n| **Version** | [![PyPI version](https://img.shields.io/pypi/v/pycontrails.svg)](https://pypi.python.org/pypi/pycontrails) [![Supported python versions](https://img.shields.io/pypi/pyversions/pycontrails.svg)](https://pypi.python.org/pypi/pycontrails) |\n| **Citation** | [![DOI](https://zenodo.org/badge/617248930.svg)](https://zenodo.org/badge/latestdoi/617248930) |\n| **Tests** | [![Unit test](https://github.com/contrailcirrus/pycontrails/actions/workflows/test.yaml/badge.svg)](https://github.com/contrailcirrus/pycontrails/actions/workflows/test.yaml) [![Docs](https://github.com/contrailcirrus/pycontrails/actions/workflows/docs.yaml/badge.svg)](https://github.com/contrailcirrus/pycontrails/actions/workflows/docs.yaml) [![Release](https://github.com/contrailcirrus/pycontrails/actions/workflows/release.yaml/badge.svg)](https://github.com/contrailcirrus/pycontrails/actions/workflows/release.yaml) [![OpenSSF Scorecard](https://api.securityscorecards.dev/projects/github.com/contrailcirrus/pycontrails/badge)](https://securityscorecards.dev/viewer?uri=github.com/contrailcirrus/pycontrails)|\n| **License** | [![Apache License 2.0](https://img.shields.io/pypi/l/pycontrails.svg)](https://github.com/contrailcirrus/pycontrails/blob/main/LICENSE) |\n| **Community** | [![Github Discussions](https://img.shields.io/github/discussions/contrailcirrus/pycontrails)](https://github.com/contrailcirrus/pycontrails/discussions) [![Github Issues](https://img.shields.io/github/issues/contrailcirrus/pycontrails)](https://github.com/contrailcirrus/pycontrails/issues) [![Github PRs](https://img.shields.io/github/issues-pr/contrailcirrus/pycontrails)](https://github.com/contrailcirrus/pycontrails/pulls) |\n\n**pycontrails** is an open source project and Python package for modeling aircraft contrails and other\naviation related climate impacts.\n\n`pycontrails` defines common [data structures](https://py.contrails.org/api.html#data) and [interfaces](https://py.contrails.org/api.html#datalib) to 
efficiently build and run [models](https://py.contrails.org/api.html#models) of aircraft performance, emissions, and radiative forcing.\n\n## Documentation\n\nDocumentation and examples are available at [py.contrails.org](https://py.contrails.org/).\n\n\n\n## Install\n\nRequires Python (3.9 or later)\n\n```bash\n$ pip install pycontrails\n\n# install with all optional dependencies\n$ pip install ""pycontrails[complete]""\n```\n\nInstall the latest development version directly from GitHub:\n\n```bash\npip install git+https://github.com/contrailcirrus/pycontrails.git\n```\n\nSee more options in the [install documentation](https://py.contrails.org/install).\n\n## Get Involved\n\n- Ask questions, discuss models, and present ideas in [GitHub Discussions](https://github.com/contrailcirrus/pycontrails/discussions).\n- Report bugs or suggest changes in [GitHub Issues](https://github.com/contrailcirrus/pycontrails/issues).\n- Review the [contributing guidelines](https://py.contrails.org/contributing.html) and contribute improvements as [Pull Requests](https://github.com/contrailcirrus/pycontrails/pulls).\n\n## License\n\n[Apache License 2.0](https://github.com/contrailcirrus/pycontrails/blob/main/LICENSE)\n\nAdditional attributions in [NOTICE](https://github.com/contrailcirrus/pycontrails/blob/main/NOTICE).\n'",",https://zenodo.org/badge/latestdoi/617248930","2023/03/22, 01:43:27",217,Apache-2.0,941,941,"2023/10/23, 23:02:37",24,46,71,71,1,2,3.7,0.2595936794582393,"2023/10/24, 11:55:17",v0.48.0,0,6,false,,false,true,,,https://github.com/contrailcirrus,https://contrails.org,,,,https://avatars.githubusercontent.com/u/108766390?v=4,,, EV Footprint,A simulation of the true impact on climate and CO2 emissions of an electric car vs a traditional gasoline car.,Traace-co,https://github.com/Traace-co/ev-footprint.git,github,,Mobility and Transportation,"2023/08/09, 13:02:07",16,0,5,true,TypeScript,Traace,Traace-co,"TypeScript,HTML,JavaScript,Less,CSS",https://evfootprint.org,"b'# EV Footprint\n\n## What is it?\n\nThis simulator is based on the latest studies and we built it to:\n* compare lifecycle carbon emissions of the most common types of cars.\n* break down misconceptions about the electric car.\n* help you estimate if you should switch to an electric vehicle.\n\n## Who created this tool?\n\nThis simulator was originally created as a side project by team members of [Traace](https://traace.co).\nTraace is developing a software platform to help companies of all sizes decarbonize faster and more efficiently, thanks to advanced data analysis and forecast models. And they are hiring!\n\n## Is it really open source?\n\nYes. We believe that trust and transparency go together, and we want the model and the hypotheses to be as transparent as possible.\n\n## How can I contribute?\n\nJust open an issue in the project and describe the problem that you are facing or the feature that you would like to suggest. 
The project administrators will discuss it with you.'",,"2022/06/09, 11:37:06",503,MIT,21,24,"2023/08/09, 13:02:02",3,11,11,10,77,2,0.1,0.3571428571428571,,,0,3,false,,false,false,,,https://github.com/Traace-co,https://traace.co,"Paris, France",,,https://avatars.githubusercontent.com/u/78267248?v=4,,, Mobility,An open-source solution to compute the carbon emissions due to the mobility of a local population.,mobility-team,https://github.com/mobility-team/mobility.git,github,"carbon,carbon-footprint,mobility,open-source,transport,transportation",Mobility and Transportation,"2023/04/25, 12:02:57",12,1,4,true,Python,,mobility-team,Python,,"b""[![codecov](https://codecov.io/github/mobility-team/mobility/branch/main/graph/badge.svg?token=D31X32AZ43)](https://codecov.io/github/mobility-team/mobility)\n[![Python package](https://github.com/mobility-team/mobility/actions/workflows/python-package.yml/badge.svg?branch=main)](https://github.com/mobility-team/mobility/actions/workflows/python-package.yml)\n[![Code style: black][black-badge]][black-link]\n[![Documentation Status][rtd-badge]][rtd-link]\n\n# Mobility, an open-source library for mobility modelling\nMobility is an open-source solution to compute the carbon emissions due to the mobility of a local population.\n\nIt is developed mainly by [AREP](https://arep.fr) and [Elioth](https://elioth.com/) with [ADEME](https://wiki.resilience-territoire.ademe.fr/wiki/Mobility) support, but anyone can join us!\nFor now, it is mainly focused on French territories and data.\n\n[Documentation on mobility.readthedocs.io](https://mobility.readthedocs.io/en/latest/)\n\nFind more info (in French) on the [Mobility website](https://mobility-team.github.io/)\n\n# Contributors\n| Company/School | Participants |\n| :------------- | :------------- |\n| AREP | Capucine-Marin Dubroca-Voisin
Antoine Gauchot
F\xc3\xa9lix Pouchain |\n| Elioth | Louise Gontier
Arthur Haulon |\n| \xc3\x89cole Centrale de Lyon | Anas Lahmar
Ayoub Foundou
Charles Pequignot
Lyes Kaya
Zakariaa El Mhassani |\n\n# Uses\n| User | Date | Project |\n| :------------- | :------------- | :------------- |\n| AREP | 2020-2022 | [Luxembourg in Transition](https://www.arep.fr/nos-projets/luxembourg-in-transition-paysage-capital/) |\n| AREP | Ongoing (2022) | Study for [Grand Annecy](https://www.arep.fr/nos-projets/grand-annecy/) |\n\n# How to use Mobility?\n_Documentation in progress_\n\n# How to contribute?\n* You can look at our [issues](https://github.com/mobility-team/mobility/issues), especially those marked as [good-first-issue](https://github.com/mobility-team/mobility/issues?q=is%3Aissue+is%3Aopen+label%3A%22good+first+issue%22), and offer to contribute to them.\n* Testing the tool and telling us where the documentation could be improved is very helpful! Whether you have a suggestion or a problem, do not hesitate to [open an issue](https://github.com/mobility-team/mobility/issues/new).\n* We hope you will be able to use Mobility for your research and consulting work! We count on you to share the code you have used.\n* We follow PEP8 for our Python code. For other good practices, [follow the guide](https://github.com/mobility-team/mobility/tree/main/mobility)!\n\n[rtd-badge]: https://readthedocs.org/projects/mobility/badge/?version=latest\n[rtd-link]: https://mobility.readthedocs.io/en/latest/?badge=latest\n[black-badge]: https://img.shields.io/badge/code%20style-black-000000.svg\n[black-link]: https://github.com/ambv/black\n""",,"2022/05/09, 08:49:57",534,MIT,169,229,"2023/04/25, 12:02:57",21,48,63,53,183,2,0.1,0.487012987012987,"2022/06/10, 10:15:11",v0.0.1,1,6,false,,false,false,quentin-guignard-pinoncely/mod-mob,,https://github.com/mobility-team,,,,,https://avatars.githubusercontent.com/u/105203080?v=4,,, SpiceEV,Simulation Program for Individual Charging Events of Electric Vehicles.,rl-institut,https://github.com/rl-institut/spice_ev.git,github,,Mobility and Transportation,"2023/10/19, 12:18:11",16,0,7,true,Python,Reiner Lemoine Institut,rl-institut,Python,,"b'# SpiceEV - Simulation Program for Individual Charging Events of Electric Vehicles\n\n**Simulation program to generate scenarios of electric vehicle fleets and simulate different charging strategies.**\n\n[![Build Status](https://github.com/rl-institut/spice_ev/actions/workflows/pythonpackage.yaml/badge.svg?branch=dev)](https://github.com/rl-institut/spice_ev/actions/workflows/pythonpackage.yaml) [![Documentation Status](https://readthedocs.org/projects/spice-ev/badge/?version=latest)](https://spice-ev.readthedocs.io/en/latest/?badge=latest)\n\n# Introduction\n\nSpiceEV is a program to simulate different charging strategies for a defined set of vehicles and corresponding trips to and from charging stations. The output shows load profiles of the vehicle battery, the corresponding charging station, the grid connector as well as the electricity price and, if applicable, stationary batteries. Each vehicle is by default connected to a separate charging station. All charging stations of one location can be connected to one grid connector with a defined maximum power. 
Some charging strategies only allow for one grid connector; please check [charging strategies](https://spice-ev.readthedocs.io/en/latest/charging_strategies_incentives.html#charging-strategies) for more information.\n\nThe first step of SpiceEV is to generate a `scenario.json`. The `scenario.json` contains information about the vehicles and their specific attributes (e.g. battery capacity, charging curve, etc.) as well as their trips from and to a specific charging station (so-called vehicle events). Further, the charging station attributes, such as its maximum power, the attached grid connector and the corresponding electricity price, are defined. Depending on the scenario, a certain foresight can be applied for grid operator signals. If applicable, stationary batteries with corresponding capacities and c_rates can be defined, and fixed load profiles or local generation time series attached to a grid connector can be imported from CSV files. The input `scenario.json` can be generated by one of the [generate scripts](https://spice-ev.readthedocs.io/en/latest/code.html#generate-modules).\n\n# Documentation\n\nThe full documentation can be found [here](https://spice-ev.readthedocs.io/en/latest/index.html).\n\n# Installation\n\nClone this repository. SpiceEV has just one optional dependency:\nMatplotlib. Everything else uses the Python (>= 3.6) standard library.\n\nTo install `spice_ev` as a package run:\n```sh\npip install -e .\n```\n\n# Run Examples\n\nIn order to run a simulation with `simulate.py`, a scenario JSON has to be generated first using `generate.py`.\nFor this, three modes are available: \n\n* `statistics` Generate a scenario JSON with trips from statistical input parameters.\n* `csv` Generate a scenario JSON with trips listed in a CSV.\n* `simbev` Generate a scenario JSON from SimBEV results.\n\nShow all command line options:\n```sh\npython generate.py -h\npython simulate.py -h\n```\n\n## Quick Start\nGenerate a scenario and store it in a JSON. By default, the mode `statistics` is used.\n```sh\npython generate.py --output scenario.json\n```\n\nRun a simulation of this scenario using the `greedy` charging strategy and show\nplots of the results:\n```sh\npython simulate.py scenario.json --strategy greedy --visual\n```\n\n## Using Configuration Files\nThere are example configuration files in the folder **examples**. The required input/output must still be specified manually:\n```sh\npython generate.py --config examples/configs/generate.cfg\npython simulate.py --config examples/configs/simulate.cfg\n```\n\n## Generating Scenarios\nGenerate a 7-day scenario with 10 vehicles of different types and 15-minute timesteps:\n```sh\npython generate.py --days 7 --vehicles 6 golf --vehicles 4 sprinter --interval 15 --output scenario.json\n```\n\n## Including external Timeseries\nInclude a fixed load in the scenario:\n```sh\npython generate.py --include-fixed-load-csv fixed_load.csv --output scenario.json\n```\n\nPlease note that included file paths are relative to the scenario file location. 
Consider this directory structure:\n```sh\n\xe2\x94\x9c\xe2\x94\x80\xe2\x94\x80 scenarios\n\xe2\x94\x82 \xe2\x94\x9c\xe2\x94\x80\xe2\x94\x80 price\n\xe2\x94\x82 \xe2\x94\x82 \xe2\x94\x9c\xe2\x94\x80\xe2\x94\x80 price.csv\n\xe2\x94\x82 \xe2\x94\x9c\xe2\x94\x80\xe2\x94\x80 my_scenario\n\xe2\x94\x82 \xe2\x94\x82 \xe2\x94\x9c\xe2\x94\x80\xe2\x94\x80 fixed_load.csv\n\xe2\x94\x82 \xe2\x94\x82 \xe2\x94\x9c\xe2\x94\x80\xe2\x94\x80 scenario.json\n```\n\nThe file `fixed_load.csv` is in the same directory as the `scenario.json`, hence no relative path is specified.\nTo include the price and fixed load timeseries:\n```sh\npython generate.py --include-price-csv ../price/price.csv --include-fixed-load-csv fixed_load.csv --output scenario.json\n```\n\n## Adding a Schedule to a Scenario\nCalculate and include schedule:\n```sh\npython generate_schedule.py --scenario scenario.json --input examples/data/grid_situation.csv --output schedules/schedule_example.csv\n```\n\n# Integrating SimBEV\nSpiceEV supports scenarios generated by the [SimBEV](https://github.com/rl-institut/simbev) tool. Convert SimBEV output files to a SpiceEV scenario: \n```sh\npython generate.py simbev --simbev /path/to/simbev/output/ --output scenario.json\n```\n\n# License\nSpiceEV is licensed under the MIT License as described in the file [LICENSE](https://github.com/rl-institut/spice_ev/blob/dev/LICENSE)\n'",,"2021/05/05, 14:04:45",903,MIT,462,1433,"2023/10/19, 12:18:16",9,117,177,73,6,2,1.3,0.6931659693165969,"2023/08/10, 08:10:34",v1.0.1,0,10,false,,false,true,,,https://github.com/rl-institut,http://www.reiner-lemoine-institut.de,Berlin/Germany,,,https://avatars.githubusercontent.com/u/18393972?v=4,,, EV Fleet Simulator,Predict the energy usage of a fleet of electric vehicles.,eputs,https://gitlab.com/eputs/ev-fleet-sim,gitlab,,Mobility and Transportation,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,, btp-ai-sustainability-bootcamp,Showcasing SAP partners how to add Intelligence and Sustainability into your industry cloud solutions on SAP Business Technology Platform.,SAP-samples,https://github.com/SAP-samples/btp-ai-sustainability-bootcamp.git,github,"sample,sample-code,sap-ai-core,sap-ai-launchpad,sap-analytics-cloud,sac-planning,deep-learning,condition-monitoring,computer-vision,sustainability,defect-detection,image-segmentation,predictive-maintenance,sound-classification,machine-learning",Production and Industry,"2023/04/21, 12:36:38",33,0,22,true,Jupyter Notebook,SAP Samples,SAP-samples,"Jupyter Notebook,JavaScript,CAP CDS,Python,HTML,Dockerfile,CSS",https://blogs.sap.com/2022/05/19/building-intelligent-scenarios-and-sustainability-on-sap-btp/,"b""[![REUSE status](https://api.reuse.software/badge/github.com/sap-samples/btp-ai-sustainability-bootcamp)](https://api.reuse.software/info/github.com/sap-samples/btp-ai-sustainability-bootcamp)\n\n# Building Intelligent Scenarios and Sustainability on SAP BTP with SAP AI Core and SAP Analytics Cloud for Planning\n\n## Overview and Motivation\nSustainability is a hot topic for us all today, and for CEOs profitability is no longer the sole goal of a business, as they also have to carefully consider sustainability goals and take our Planet and People into consideration.
\n
\nAt SAP, we are not only acting now on sustainability, with goals of zero emissions, zero waste, and zero inequality by 2030, but also have a vision to enable every enterprise to become an intelligent, networked, sustainable enterprise with our technologies and solutions.
\n
\nSAP\xe2\x80\x99s Business Technology Platform provides the foundation for application development, integration, data, and AI. On top of this technology platform, SAP has created a new sustainability portfolio to help enterprises drive sustainable practices inside their organization and across their entire value chain, with products such as [SAP Product Footprint Management](https://help.sap.com/docs/SAP_BTP_PFM), [SAP Responsible Design and Production](https://www.sap.com/products/responsible-design-and-production.html) and [SAP Sustainability Control Tower](https://www.sap.com/products/sustainability-control-tower.html).
\n
\nWe would also like our SAP partners to create industry cloud solutions for end-to-end, industry-specific business processes that consider sustainability dimensions. Therefore, we have developed this AI & Sustainability Bootcamp to inspire and enable our partners to build intelligent scenarios and sustainability on BTP with AI and Planning. Please read more in [our blog series about this topic](https://blogs.sap.com/2022/05/19/building-intelligent-scenarios-and-sustainability-on-sap-btp/).
\n
\nProfitability and sustainability are two sides of the same coin for an intelligent, networked, and sustainable enterprise. To achieve the ultimate goals of making profitability sustainable and sustainability profitable, organizations need to embed sustainability goals into strategy planning and business operations. Artificial intelligence plays a critical role in this journey by helping businesses become more efficient and intelligent. AI and sustainability are at the very frontier of today's digital technologies, and they account for enormous opportunities in various industries.
\n
\nIn agriculture, AI can transform production by better monitoring and managing environmental conditions and by raising crop and livestock yields. For example, a drone can fly over and film a field, and a computer vision algorithm can be applied for automated pest and disease diagnosis of crops. As another example, in a vineyard, IoT sensors monitor environmental factors such as light, wind, humidity, and temperature, while AI algorithms help predict when to water, fertilize, and harvest.
\n
\nIn manufacturing, AI can help factories improve production efficiency and reduce waste, energy consumption, and greenhouse gas emissions, for example through automatic defect detection in production with computer vision and sound-based predictive maintenance of equipment.
\n\n## Description\nThis GitHub repository includes the sample code and exercises of the btp-ai-core-bootcamp, which is developed and delivered by the Partner Ecosystem Success organization (formerly known as GPO) of SAP SE, showing SAP partners how to add intelligence and sustainability to their industry cloud solutions on SAP Business Technology Platform with SAP AI Core/SAP AI Launchpad and SAP Analytics Cloud for Planning. The bootcamp uses an end-to-end storyline about a Sustainable Smart Factory filled with intelligence and sustainability:\n- Building a deep learning [Image Segmentation Model](src/ai-models/defect-detection) on product images with SAP AI Core for automatic defect detection in production lines\n- Building a deep learning [Sound Anomaly Classification Model](src/ai-models/predictive-maintenance) on acoustical sounds of machinery with SAP AI Core for condition-monitoring-based predictive maintenance \n- Configuring and deploying the [sustainable-smart-factory-app](src/sustainable-smart-factory-app) (CAP-based) to your own SAP HANA Cloud \n- Creating a Plant 360 story in SAP Analytics Cloud to analyze production and sustainability KPIs of the plant, such as production, defect rate, energy consumption, and CO2 emissions.\n- Maintenance Cost Planning with Predictive Planning of SAP Analytics Cloud for Planning. \n- Sustainability Planning and Energy Rate Prediction with Predictive Planning of SAP Analytics Cloud for Planning.\n\n### Target Audience\nThe bootcamp showcases an end-to-end process of building intelligence and sustainability scenarios on SAP BTP with AI and Planning. It involves the several personas below, reflecting how an industry cloud solution for intelligence and sustainability is built in real life.\n* Data Scientist or Machine Learning Engineer, who is responsible for building, training, and serving AI models as APIs.\n* Application Developer, who is in charge of creating a cloud-based application which extends the backend ERP with industry-related business processes in a side-by-side manner, records and collects sustainability data in daily business processes, and calls the AI model APIs. \n* Enterprise Planning Consultant or Analytics Consultant, who can help make sense of the sustainability KPIs along with business KPIs for corporate performance management, and ensure the planning and execution of sustainability goals.\n\n### Solution Architecture\n![Solution Architecture](resources/solution-architecture-full.png)\n- SAP AI Core and SAP AI Launchpad:
\nStreamline the execution and operations of Deep Learning Models in a standardized, scalable, and hyperscaler-agnostic way
\n * [Image Segmentation Model for Auto. Defect Detection](src/ai-models/defect-detection)\n * [Sound Anomaly Classification Model for Predictive Maintenance](src/ai-models/predictive-maintenance)\n\n- [Sustainable Smart Factory Application](src/sustainable-smart-factory-app):
\nA CAP-based application on BTP glues all the pieces together by running inference with the AI models on IoT streaming data (product images from cameras, machinery sound collected by microphones), and by recording daily plant operation data and sustainability KPIs, extending [Maintenance Management of SAP S/4HANA Cloud](https://help.sap.com/docs/SAP_S4HANA_CLOUD/2dfa044a255f49e89a3050daf3c61c11/f9f9400b81de4235b910887d91d925c4.html?version=2202.500) with [Predictive Maintenance](https://en.wikipedia.org/wiki/Predictive_maintenance)
\n -Quality records via computer vision\n * Sound-based Predictive Maintenance
\n -Historical conditions of plant and equipment, and sound anomalies
\n -Integration with Maintenance Management of SAP S/4HANA Cloud\n- SAP S/4HANA Cloud
\n * Central Master Data for Products and Equipment\n * Maintenance Management\n- SAP Analytics Cloud for Planning\n * Plant 360 story\n * Maintenance Cost Planning\n * Sustainability Planning and Energy Rate Prediction\n\n### Storyline\nBagnoli Co., based in Milan, Italy, has been a manufacturer of Light Guide Plates (LGP) used in LED panels since 2008. The company designs and manufactures LGP and has used SAP S/4HANA Cloud as its digital business platform since 2020. The Bagnoli brothers have a vision of becoming a sustainable smart LGP manufacturer by reducing waste and improving production efficiency and workplace safety. In 2021, an SAP gold partner was hired to implement their vision with SAP Business Technology Platform. SAP AI Core was proposed to optimize business process efficiency in quality inspection with computer vision and in sound-anomaly-based predictive maintenance, with sustainability key figures such as energy consumption and CO2 emissions recorded during daily plant operation. SAP Analytics Cloud for Planning was also suggested, to provide the plant manager with insights into production and sustainability KPIs and to support planning of maintenance cost and sustainability. Finally, a CAP-based Sustainable Smart Factory Application was created for end users to glue the different components together, extending SAP S/4HANA Cloud in quality inspection and maintenance management with intelligence and sustainability.\n- LGP Product
\nLight Guide Plates (LGP) are used in LED panels; they transform a line light source into a surface light source and are widely applied in liquid crystal display screens such as computer monitors, car navigation devices, and tablets.\n![LGP Product](resources/lgp-product.png)\n- Factory Layout\n![LGP Factory Layout](resources/factory-layout.png)\n- 2020, before implementing sustainable-smart-factory-app \n![2020](resources/lgp-factory-2020.png)\n- 2021, after implementing sustainable-smart-factory-app \n![2021](resources/lgp-factory-2021.png)\n### Final Outcomes\n#### Sustainable Smart Factory App\n![Sustainable Smart Factory App](resources/sustainable-smart-factory-app.png)\n#### Auto. Defect Detection\n![Auto. Defect Detection](resources/auto-defect-detection.gif)\n#### Predictive Maintenance\n![Predictive Maintenance](resources/predictive-maintenance.gif)\n#### Plant 360\n![Plant 360](resources/plant-360.gif)\n#### Maintenance Cost & Sustainability Planning\n![Maintenance Cost & Sustainability Planning](resources/maintenance-cost&sustainability-planning.gif)\n\n## Requirements\n### Software Requirements\nAccess to the systems below will be provided by SAP during the bootcamp, so no action is required from bootcamp participants.\n- SAP AI Core\n- SAP AI Launchpad\n- SAP Analytics Cloud for Planning\n- SAP S/4HANA Cloud\n### Other Requirements\n- Complete the [openSAP course](https://open.sap.com/courses/sac3) about Planning with SAP Analytics Cloud\n\n## Exercises\nPlease follow [this manual](exercises/README.md) to perform the exercises, which allow you to replicate the end-to-end Sustainable Smart Factory solution on your own SAP BTP account as described above. \n\n## More Materials\n### Blog post series on Building AI and Sustainability Solutions on SAP BTP\n* [An overview of sustainability on top of SAP BTP](https://blogs.sap.com/2022/05/19/building-intelligent-scenarios-and-sustainability-on-sap-btp/)\n* [Introduction of end-to-end ML ops with SAP AI Core](https://blogs.sap.com/2022/06/13/introduction-of-the-end-to-end-ml-ops-with-sap-ai-core/)\n* [BYOM with TensorFlow in SAP AI Core for Defect Detection](https://blogs.sap.com/2022/06/28/build-your-own-model-with-tensorflow-in-sap-ai-core-for-defect-detection/)\n* [BYOM with TensorFlow in SAP AI Core for Sound-based Predictive Maintenance](https://blogs.sap.com/2022/07/07/sound-based-predictive-maintenance-with-sap-ai-core-and-sap-ai-launchpad/)\n* [Embedding Intelligence and Sustainability into Custom Applications on SAP BTP](https://blogs.sap.com/2022/07/14/embedding-intelligence-and-sustainability-into-custom-applications-on-sap-btp/)\n* [Maintenance Cost & Sustainability Planning with SAP Analytics Cloud Planning](https://blogs.sap.com/2022/08/05/maintenance-cost-budgeting-sustainability-planning-with-sap-analytics-cloud/)\n\n### Demo videos recorded by SAP HANA Academy\nWe have worked closely with the SAP HANA Academy team on more deep-dive content about AI & Sustainability based on the bootcamp storyline. 
If you would like to learn more, please visit the following YouTube playlists prepared by the SAP HANA Academy team: \n* [SAP Artificial Intelligence Onboarding / Pre-reqs Playlist](https://youtube.com/playlist?list=PLkzo92owKnVyJ5bZXYHb8QUTNRaUMYNST)\n* [SAP Artificial Intelligence Defect Detection Playlist](https://youtube.com/playlist?list=PLkzo92owKnVzjYmJJMk17pu567BAKW5NL) \n* [SAP Artificial Intelligence Predictive Maintenance Playlist](https://youtube.com/playlist?list=PLkzo92owKnVw6OOhfauKAM7MJSamnWBLn)\n* [SAP Artificial Intelligence; Application Playlist](https://youtube.com/playlist?list=PLkzo92owKnVxcURT-afSJePEUTSqFYVP_)\n* [Maintenance Cost & Sustainability Planning with SAP Analytics Cloud Playlist](https://youtube.com/playlist?list=PLkzo92owKnVy2ED2ZfPzJcSLZVlcQ0jDw)\n\n### Useful links for SAP AI Core and SAP AI Launchpad\n* [SAP AI Core Help Center](https://help.sap.com/docs/AI_CORE?locale=en-US)\n* [SAP AI Launchpad Help Center](https://help.sap.com/docs/AI_LAUNCHPAD?locale=en-US)\n* [Learning Journey of SAP AI](https://learning.sap.com/learning-journey/discover-sap-business-technology-platform/describing-artificial-intelligence-ai-_f6c2ab8d-2fa1-45db-9895-ac84b635ced5)\n* [SAP Computer Vision Package](https://pypi.org/project/sap-computer-vision-package/)\n* [Metaflow library for SAP AI Core](https://pypi.org/project/sap-ai-core-metaflow/)\n\n### Useful links for SAP Analytics Cloud\n* [SAP Analytics Cloud Help Center](https://help.sap.com/docs/SAP_ANALYTICS_CLOUD)\n* [Role-based learning journey of SAP Analytics Cloud](https://help.sap.com/learning-journeys/overview?categories=SOL_10_01&search=SAP%20Analytics%20Cloud)\n* [openSAP course about Intelligent Decisions with SAP Analytics Cloud](https://open.sap.com/courses/sac1)\n* [openSAP course about SAP Analytics Cloud \xe2\x80\x93 Authentication and Data Connectivity](https://open.sap.com/courses/sac2)\n* [openSAP course about Planning with SAP Analytics Cloud](https://open.sap.com/courses/sac3)\n* [openSAP course about Planning with SAP Analytics Cloud \xe2\x80\x93 Advanced Topics](https://open.sap.com/courses/sac4)\n\n### Useful links for SAP Cloud for Sustainable Enterprise\n* [SAP Product Footprint Management Help Center](https://help.sap.com/docs/SAP_BTP_PFM)\n* [SAP Responsible Design and Production Help Center](https://help.sap.com/docs/SAP_RESPONSIBLE_DESIGN_AND_PRODUCTION)\n* [SAP Sustainability Control Tower Help Center](https://help.sap.com/docs/SAP_SUS_SCT)\n\n### Other useful links\n* [openSAP course about Helping Business Thrive in a Circular Economy](https://open.sap.com/courses/ce1)\n* [openSAP course about AI Ethics at SAP](https://open.sap.com/courses/aie1)\n\n## Download and Installation\nExercise manuals are available [here](exercises) if you would like to replicate the solution on your own BTP account.\n\n## Known Issues\nThe IoT gateway part is out of scope for this sample. However, in a real-life project, an IoT gateway is required for IoT sensor data streaming and ingestion.\n\n## How to obtain support\n[Create an issue](https://github.com/SAP-samples/btp-ai-core-bootcamp/issues) in this repository if you find a bug or have questions about the content.\n \nFor additional support, [ask a question in SAP Community](https://answers.sap.com/questions/ask.html).\n\n## Contributing\nIf you wish to contribute code, or offer fixes or improvements, please send a pull request. Due to legal reasons, contributors will be asked to accept a DCO when they create the first pull request to this project. 
This happens in an automated fashion during the submission process. SAP uses [the standard DCO text of the Linux Foundation](https://developercertificate.org/).\n\n## License\nCopyright (c) 2022 SAP SE or an SAP affiliate company. All rights reserved. This project is licensed under the Apache Software License, version 2.0 except as noted otherwise in the [LICENSE](LICENSES/Apache-2.0.txt) file.\n""",,"2022/02/14, 16:26:31",618,Apache-2.0,69,946,"2023/09/06, 10:12:15",2,10,17,5,49,2,0.0,0.6115007012622721,,,0,9,false,,false,false,,,https://github.com/SAP-samples,https://developers.sap.com/,"Walldorf, Germany",,,https://avatars.githubusercontent.com/u/50221243?v=4,,, AMO-Tools-Desktop," An energy efficiency calculation application for use with industrial equipment such as pumps, furnaces, fans, and motors, as well as for industrial systems such as steam.",ORNL-AMO,https://github.com/ORNL-AMO/AMO-Tools-Desktop.git,github,"energy,energy-efficiency,industrial,factory,plants,pump,furnace,steam,energy-assessment,measur,modeling,modeling-tool",Production and Industry,"2023/10/25, 13:23:55",34,0,12,true,TypeScript,Oak Ridge National Laboratory - Energy Management Software,ORNL-AMO,"TypeScript,HTML,CSS,JavaScript,Shell",,"b'# AMO-Tools-Desktop\n### Downloads ![Github Releases](https://img.shields.io/github/downloads/ORNL-AMO/AMO-Tools-Desktop/latest/total.svg?label=Current%20Release) ![Github All Releases](https://img.shields.io/github/downloads/ORNL-AMO/AMO-Tools-Desktop/total.svg?label=All%20Time&colorB=afdffe)\n\n## Dependencies\n- Node.js LTS (https://nodejs.org/en/)\n - Due to legacy dependencies required by MEASUR and the dependent AMO-Tool-Suite, **MEASUR\'s targeted Node version must be used**. This version can be found in package.json ""engines"".\n## Build for Development\n- To remove node modules, dist, and related package-lock: `npm run clean` from the root project directory\n- To install all required packages: `npm install`\n- To build for electron development with hot-reload: `npm run build-watch`\n - To start the electron app: `npm run electron`\n\n## Build Production Package\n- Clean and install:\n - `npm run clean`\n - `npm install`\n- To build desktop package:\n - `npm run build-prod-desktop` \n - `npm run dist`\n - The package will be placed in `../output`\n- To build web dist:\n - `npm run build-prod-web` \n\n'",,"2017/01/30, 16:26:56",2459,CUSTOM,702,12775,"2023/10/25, 16:05:28",303,3275,6244,521,0,1,0.9,0.7033087229969921,"2023/08/21, 16:28:58",v1.3.1,0,25,false,,true,false,,,https://github.com/ORNL-AMO,https://ornl-amo.github.io/,,,,https://avatars.githubusercontent.com/u/16310767?v=4,,, Industry Energy Tool,A calculator developed by NREL for projecting energy efficiency and fuel switching scenarios for the U.S. industrial sector energy use and emissions at the Census Region and county-level.,NREL,https://github.com/NREL/Industry-Energy-Tool.git,github,,Production and Industry,"2021/10/01, 15:56:20",14,0,3,false,Python,National Renewable Energy Laboratory,NREL,Python,,"b'# Industry-Energy-Tool (IET)\nThe Industry Energy Tool (IET) is a calculator developed by NREL for projecting energy efficiency and fuel switching scenarios for the U.S. industrial sector energy use and emissions at the Census Region and county-level. The IET is built on a foundation of county-level industry end use data primarily derived from facility-level emissions data and the U.S. 
Energy Information Administration\'s (EIA) 2010 Manufacturing Energy Consumption Survey.\nThe IET is currently composed of modules for calculating stock turnover and implementing changes to energy efficiency and the industry energy fuel mix (i.e., fuel switching). A material efficiency module, which includes a hybrid input-output (IO) model of U.S. energy use and an implementation of the RAS algorithm, has been developed, but is not yet integrated with the rest of the tool.\n\n## Data Foundation (Base-Year Dataset)\nThe foundation of IET\'s base-year (2014) dataset is facility-level combustion energy data calculated from greenhouse gas emissions reported under the [U.S. EPA\'s Greenhouse Gas Reporting Program (GHGRP)](https://www.epa.gov/ghgreporting). Please see the [Industrial Heat Demand Analysis repo](https://github.com/NREL/Industrial-Heat-Demand-Analysis) for more information about this methodology. Remaining industrial energy use is estimated from various publicly-available sources, including data from EIA, the U.S. Department of Agriculture, and the U.S. Census Bureau. Please consult the Data Foundation directory for additional discussion of the data sources and calculation methodology.\n\n## Greenhouse Gas Emissions (GHG)\nGHG emissions are calculated from projected energy use by fuel type. All non-electricity emissions are calculated using EPA default emission factors for CO2 and CH4. Electricity emission factors are based on [EPA eGRID](https://www.epa.gov/energy/emissions-generation-resource-integrated-database-egrid); [AEO 2017](https://www.eia.gov/outlooks/archive/aeo17/) grid mix projections are used to modify eGRID emission factors in future years. IET also includes the option to modify renewable electricity deployment by region.\n\n## Stock Turnover\nThe IET approximates capital equipment stock retirements and purchases by assuming a user-specified lifetime for pre- and post-2014 equipment and assuming linear retirement. Additionally, due to the difficulties of modeling industrial equipment stock (e.g., little data on physical stock, investment decisions unrelated to equipment age [cf. [Worrell and Biermans (2005)](https://doi.org/10.1016/j.enpol.2003.10.017), [Doms and Dunne (1998)](https://doi.org/10.1006/redy.1998.0011)]), the IET takes an approach similar to EIA\'s National Energy Modeling System (NEMS) by using annual value of shipments as a proxy for capital stock. IET does not capture stock vintages and does not distinguish stock behavior by equipment type/end use. \n\n## Energy Efficiency\nThe IET assumes a baseline energy efficiency improvement for pre- and post-2014 stock by industry and end use that is based on 2014 technical possibility curves (TPCs) used in the Industrial Demand Module of NEMS (see [Table 6.3](https://www.eia.gov/outlooks/archive/aeo14/assumptions/pdf/0554(2014).pdf)). Energy efficiency scenarios for post-2014 stock are implemented by scaling to either state-of-the-art or practical minimum efficiency levels identified by [DOE ""bandwidth studies""](https://www.energy.gov/eere/amo/energy-analysis-data-and-reports). Note that bandwidth studies do not cover all industrial sectors.\n\n## Fuel Switching\nThe IET assumes linear adoption of new equipment for fuel switching by 2050. Currently, only switching from combustion fuels to electricity (i.e., electrification) is captured. Users have the ability to specify fuel switching by industry, end use, and/or temperature range. 
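\n\nTo make the linear adoption assumption concrete, here is a small illustrative sketch (not from the IET codebase; the function and parameter names are invented for this example, with the 2014 base year and 2050 end point taken from the description above):\n\n```python\ndef electrified_share(year: int, start_year: int = 2014,\n                      end_year: int = 2050, final_share: float = 1.0) -> float:\n    # Fraction of the targeted combustion fuel use that has switched to\n    # electricity by a given year, under linear adoption between the\n    # base year and 2050.\n    if year <= start_year:\n        return 0.0\n    if year >= end_year:\n        return final_share\n    return final_share * (year - start_year) / (end_year - start_year)\n\n# Halfway through 2014-2050, half of the targeted fuel use is electrified.\nassert abs(electrified_share(2032) - 0.5) < 1e-9\n```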
\n\n## Material Efficiency / Circular Economy\nAlthough it is not currently integrated with the IET, a hybrid IO model (i.e., energy units for energy industry transactions and monetary units for transactions of all remaining industries) was developed to estimate the direct and indirect energy requirements of the U.S. economy. A RAS implementation is included for dynamic rebalancing. Ideally, this module will be populated with energy projections from the IET and expanded to include additional hybrid models for materials to allow exploration of the energy implications of energy efficiency and material efficiency strategies. See [McMillan (2018)](https://www.nrel.gov/docs/fy18osti/70609.pdf) for a discussion of the analysis intent. \n\nA new model, the [Hybrid Supply and Use Table (HSUT) Model](./HSUT_model/) has been developed as foundation to conduct input-output analysis. See the model\'s directory for more background and instruction on its use.\n'",",https://doi.org/10.1016/j.enpol.2003.10.017,https://doi.org/10.1006/redy.1998.0011","2018/05/07, 15:48:04",1997,CUSTOM,0,26,"2023/10/25, 16:05:28",1,0,0,0,0,1,0,0.0,,,0,1,false,,false,false,,,https://github.com/NREL,http://www.nrel.gov,"Golden, CO",,,https://avatars.githubusercontent.com/u/1906800?v=4,,, Industry Energy Data Book,"Summarizes the status of, and identifies the key trends in energy use and its underlying economic drivers across the four industrial subsectors: agriculture, construction, manufacturing, and mining.",NREL,https://github.com/NREL/Industry-energy-data-book.git,github,,Production and Industry,"2020/05/27, 16:53:17",6,0,0,false,Python,National Renewable Energy Laboratory,NREL,Python,,"b""# Industry Energy Data Book (IEDB)\nThe 2018 Industry Energy Data Book summarizes the status of, and\nidentifies the key trends in energy use and its underlying economic drivers\nacross the four industrial subsectors: agriculture, construction, manufacturing,\nand mining. In addition to aggregating and visualizing industrial data from\nacross multiple sectors, the IEDB also provides annual estimates of\ncombustion energy use for large energy-using facilities and county-level\nindustrial energy use. The landing page for the IEDB is here: (tbd)\n\nThis repository contains the source code for estimating combustion energy\nfor large energy users and for estimating industrial energy use at the county\nlevel. Both data sets are available for download from the\n[NREL Data Catalog](https://dx.doi.org/10.7799/1575074).\n\nThese estimates are a small part of the effort to improve the resolution and\ntimeliness of publicly-available industrial energy data for the United States.\nThe source code provided here is meant to be improved over time with the help\nof the developer and energy analyst communities. \n\n## Combustion Energy Estimates of Large Energy Users\nAnnual combustion energy is estimated from 2010 - 2017 for all industrial\nfacilities reporting combustion emissions under the [U.S. EPA's Greenhouse Gas Reporting Program (GHGRP)](https://www.epa.gov/ghgreporting). The basic approach\nis to back out combustion energy use from reported GHG emissions. The calculation\nmethodology is an evolution of the approach first outlined by\n[McMillan et al. 
(2016)](https://doi.org/10.2172/1335587) and\nsubsequently refined by [McMillan and Ruth (2019)](https://doi.org/10.1016/j.apenergy.2019.01.077).\nNote that because the GHGRP does not track estimates from purchased electricity,\nelectricity use is not estimated in this data set.\n\nThe most notable improvement in the methodology is using calculation\ntier-specific information (e.g., facility-reported higher heating values [HHV]\nand emission factors) instead of relying solely on EPA default emission factors.\nMethods have also been introduced to estimate the uncertainty of calculations,\nas discussed in McMillan and Ruth (2019).\n\n## County-Level Estimates of Industrial Energy\nAnnual energy use for the industrial sector (agriculture, construction, mining,\nand manufacturing) is estimated for 2010 - 2016 at the county level. The combustion energy\nestimates of large energy users provide the foundation of the county-level\nestimates. Energy use of industrial facilities that do not report to the GHGRP\nis estimated using data sources from the Energy Information Administration,\nthe Census Bureau, and the U.S. Department of Agriculture. The methodology was first\ndescribed by [McMillan and Narwade (2018)](https://doi.org/10.2172/1484348),\nwith the associated source code provided [here](https://github.com/NREL/Industry-Energy-Tool/).\n\nThe methodology has changed slightly in an effort to automate more calculation\nsteps.\n""",",https://doi.org/10.2172/1335587,https://doi.org/10.1016/j.apenergy.2019.01.077,https://doi.org/10.2172/1484348","2019/03/04, 17:25:41",1696,CUSTOM,0,93,"2023/10/25, 16:05:28",4,0,0,0,0,0,0,0.3058823529411765,,,0,2,false,,false,false,,,https://github.com/NREL,http://www.nrel.gov,"Golden, CO",,,https://avatars.githubusercontent.com/u/1906800?v=4,,, CalTRACK,Methods are developed in an open and transparent stakeholder process that uses empirical testing to define replicable methods for calculating normalized metered energy consumption using either monthly or interval data from an existing conditions baseline.,energy-market-methods,https://github.com/openeemeter/caltrack.git,github,,Production and Industry,"2023/05/18, 21:58:12",51,0,3,true,Dockerfile,OpenEEmeter,openeemeter,Dockerfile,http://docs.caltrack.org,"b'CalTRACK Technical Documentation\n================================\n\nCalTRACK methods are developed in an open and transparent stakeholder process that uses empirical testing to define replicable methods for calculating normalized metered energy consumption using either monthly or interval data from an existing conditions baseline.\n\nThis repository contains the CalTRACK 2.0 methods and the CalTRACK Technical Appendix, which explains how many of the methods were developed.\n\nFuture improvements are catalogued as ""Issues"" and are found in the Projects tab. These issues are considered ""closed"" until they are formally re-opened by the Working Group. \n\nFormal changes in methods will follow processes established under the JDF charter:\n\nDeliverable Development Process\n\nWorking Groups. The Project may have multiple Working Groups, and each Working Group will operate as set forth in this Section and its Working Group Charter.\n\nWorking Group Chair. Each Working Group will designate a chair for that Working Group. A Working Group may select a new chair upon Approval of the Working Group Participants.\n\nWorking Group Requirements. Each Working Group must be comprised of at least 2 Working Group Participants. 
No Working Group Participant will be permitted to participate in a Working Group without first Joining the Working Group.\n\nConditions for Contributions. A Steering Member, Associate, or Contributor may not make any Contribution unless that Steering Member, Associate or Contributor is the exclusive copyright owner of the Contribution or has sufficient copyright rights from the copyright owners to make the Contribution under the terms of this Project Charter and applicable Working Group Charter. The Steering Member, Associate, or Contributor must disclose the identities of all known copyright owners in the Contribution.\n\n**Deliverable Development Process**\n\nPre-Draft. Any Working Group Participant or Contributor may submit a proposed initial draft document as a candidate Draft Deliverable of that Working Group. The Working Group chair will designate each submission as a \xe2\x80\x9cPre-Draft\xe2\x80\x9d document. This [quick-start video](https://www.dropbox.com/s/n5r3ihq6eanyl7l/em2_github_issues.mp4?dl=0) shows how to submit an issue for consideration.\n\nDraft. Each Pre-Draft document of a Working Group must first be Approved by the Working Group Participants of that Working Group to become a Draft Deliverable. Once the Working Group approves a document as a Draft Deliverable, the Draft Deliverable becomes the basis for all going forward work on that deliverable.\n\nWorking Group Approval. Once a Working Group believes it has achieved the objectives for its deliverable as described in the Scope, it will progress its Draft Deliverable to \xe2\x80\x9cWorking Group Approved\xe2\x80\x9d status. \n\nFinal Approval. Upon a Draft Deliverable reaching Working Group Approved status, the Executive Director or his/her designee will present that Working Group Approved Draft Deliverable to all Steering Members for Approval. Upon Approval by the Steering Members, that Draft Deliverable will be designated an \xe2\x80\x9cApproved Deliverable.\xe2\x80\x9d\n\nPublication and Submission. Upon the designation of a Draft Deliverable as an Approved Deliverable, the Executive Director will publish the Approved Deliverable in a manner agreed upon by the Working Group Participants (i.e., Project Participant only location, publicly available location, Project maintained website, Project member website, etc.). The publication of an Approved Deliverable in a publicly accessible manner must include the terms under which the Approved Deliverable and/or source code is being made available under, as set forth in the applicable Working Group Charter.\n\nSubmissions to Standards Bodies. No Draft Deliverable or Approved Deliverable may be submitted to another standards development organization without Approval by the Steering Members. Upon Approval by the Steering Members, the Executive Director will coordinate the submission of the applicable Draft Deliverable or Approved Deliverable to another standards development organization with Joint Development Foundation Projects, LLC. 
Working Group Participants that developed that Draft Deliverable or Approved Deliverable agree to grant the copyright rights necessary to make those submissions.\n'",,"2018/01/24, 16:30:37",2100,CC0-1.0,5,307,"2023/05/18, 21:58:12",1,19,141,25,160,1,0.1,0.7201646090534979,"2019/11/14, 21:34:37",v2.0,0,16,false,,false,false,,,https://github.com/openeemeter,https://lfenergy.org/projects/openeemeter/,,,,https://avatars.githubusercontent.com/u/19336002?v=4,,, OpenModelica,An open source Modelica-based modeling and simulation environment intended for industrial and academic usage.,OpenModelica,https://github.com/OpenModelica/OpenModelica.git,github,,Production and Industry,"2023/10/25, 15:16:10",655,0,132,true,Modelica,OpenModelica,OpenModelica,"Modelica,C,C++,CMake,Makefile,GAP,Python,Yacc,M4,QMake,Shell,Java,D,Julia,XSLT,Lex,Perl,Groovy,Batchfile,TeX,MATLAB,ANTLR,Ruby,Mathematica,Roff,Emacs Lisp,JavaScript,Dockerfile,DTrace",https://openmodelica.org,"b'# OpenModelica [![License: OSMC-PL](https://img.shields.io/badge/license-OSMC--PL-lightgrey.svg)](OSMC-License.txt)\n\n[OpenModelica](https://openmodelica.org) is an open-source Modelica-based modeling and\nsimulation environment intended for industrial and academic usage.\n\n## OpenModelica User\'s Guide\n\nThe [User\'s Guide](https://openmodelica.org/doc/OpenModelicaUsersGuide/latest/) is\nautomatically generated from the documentation repository.\n\n## OpenModelica environment\n\nThe [OpenModelica Compiler](OMCompiler/) is the core of the OpenModelica project.\n[OMEdit](OMEdit/README.md) is the graphical user interface on top of the compiler.\n[OMSimulator](OMSimulator/README.md) is a capable FMI and SSP-based Co-Simulation environment,\navailable as a standalone version or integrated in OMEdit.\nIn addition there are interactive environments\n[OMNotebook](OMNotebook/README.md), [OMPlot](OMPlot/README.md) and [OMShell](OMShell/README.md)\ninteraction with the OMCompiler as well as various other tools:\n[OMOptim](OMOptim/README.md), [OMParser](OMParser/README.md), [OMSens](OMSens/README.md),\n[OMSense_Qt](OMSens_Qt/README.md).\n\n## Working with the repository\n\nOpenModelica.git is a superproject. Clone the project using one of:\n\n```bash\n# Faster pulling by using openmodelica.org read-only mirror (low latency in Europe; very important when updating all submodules)\n# Replace the openmodelica.org pull URL with https://github.com/OpenModelica/OpenModelica.git if you want to pull directly from github\n# The default choice is to push to your fork on github.com (SSH). 
Replace MY_FORK with OpenModelica to push directly to the OpenModelica repositories (if you have access)\nMY_FORK=\ngit clone --recurse-submodules https://openmodelica.org/git-readonly/OpenModelica.git\ncd OpenModelica\ngit remote set-url --push origin git@github.com:$MY_FORK/OpenModelica.git\ngit submodule foreach --recursive \'git remote set-url --push origin `git config --get remote.origin.url | sed s,^.*/,git@github.com:\'$MY_FORK\'/,`\'\n```\n\nIf you are a developer and want to update your local git repository to the latest\ndevelopments or latest heads, use:\n\n```bash\n# After cloning\ncd OpenModelica\ngit checkout master\ngit pull\n# To checkout the latest master on each submodule run\n# you will need to merge each submodule, but your changes will remain\ngit submodule foreach --recursive ""git checkout master && git pull""\n\n# Running master on all submodules might lead to build errors\n# so use this to make sure you force all submodules to the commits\n# from the OpenModelica glue project which are properly tested\ngit submodule update --force --init --recursive\n```\n\nIn order to push to the repository, you will push to your own fork of OpenModelica.git,\netc. You will need to create a fork of each repository that you want to push to (by\nclicking the Fork button in the GitHub web interface).\n\nIf you do not checkout the repositories for some GUI clients (such as OMOptim.git), these\ndirectories will be ignored by autoconf and skipped during compilation.\n\nTo checkout a specific version of OpenModelica, say tag v1.16.2 do:\n```bash\ngit clone --recurse-submodules https://github.com/OpenModelica/OpenModelica.git\ncd OpenModelica\ngit checkout v1.16.2\ngit submodule update --force --init --recursive\n```\n\nIf you have issues building you can try to clean and reset the repository using:\n\n```bash\ngit clean -fdx\ngit submodule foreach --recursive git clean -fdx\ngit reset --hard\ngit submodule foreach --recursive git reset --hard\ngit submodule update --init --recursive\n```\n\nTo check your working copy status and the hashes of the submodules, use:\n\n```bash\ngit status\ngit submodule status --recursive\n```\n\n### To checkout a minimal version of OpenModelica\n\n```bash\ngit clone https://openmodelica.org/git-readonly/OpenModelica.git OpenModelica-minimal\ncd OpenModelica-minimal\ngit submodule update --init --recursive libraries\n```\n\n## Build OpenModelica\n\n* [Linux/WSL/OSX Instructions](OMCompiler/README.Linux.md)\n* [Windows Instructions](OMCompiler/README.Windows.md)\n\nWe automatically generate nightly builds for\n[Windows](https://openmodelica.org/download/download-windows/) and for various flavours of\n[Linux](https://openmodelica.org/download/download-linux/). 
You can download and install\nthem directly if you just want to run the latest development version of OpenModelica without\nthe effort of compiling the sources yourself.\n\n## How to run\n\nHere is a short example session.\nThis example uses [OMShell-terminal](OMShell), but OMShell, mos-scripts, or OMNotebook\nwork the same way.\n\n```\n$ cd trunk/build/bin\n$ ./OMShell-terminal\nOMShell Copyright 1997-2015, Open Source Modelica Consortium (OSMC)\nDistributed under OMSC-PL and GPL, see www.openmodelica.org\n\nTo get help on using OMShell and OpenModelica, type ""help()"" and press enter\nStarted server using:omc -d=interactive > /tmp/omshell.log 2>&1 &\n>>> loadModel(Modelica)\ntrue\n>>> getErrorString()\n""""\n>> instantiateModel(Modelica.Electrical.Analog.Basic.Resistor)\n""class Modelica.Electrical.Analog.Basic.Resistor \\""Ideal linear electrical resistor\\""\n Real v(quantity = \\""ElectricPotential\\"", unit = \\""V\\"") \\""Voltage drop between the two pins (= p.v - n.v)\\"";\n Real i(quantity = \\""ElectricCurrent\\"", unit = \\""A\\"") \\""Current flowing from pin p to pin n\\"";\n Real p.v(quantity = \\""ElectricPotential\\"", unit = \\""V\\"") \\""Potential at the pin\\"";\n Real p.i(quantity = \\""ElectricCurrent\\"", unit = \\""A\\"") \\""Current flowing into the pin\\"";\n Real n.v(quantity = \\""ElectricPotential\\"", unit = \\""V\\"") \\""Potential at the pin\\"";\n Real n.i(quantity = \\""ElectricCurrent\\"", unit = \\""A\\"") \\""Current flowing into the pin\\"";\n parameter Boolean useHeatPort = false \\""=true, if HeatPort is enabled\\"";\n parameter Real T(quantity = \\""ThermodynamicTemperature\\"", unit = \\""K\\"", displayUnit = \\""degC\\"", min = 0.0, start = 288.15, nominal = 300.0) = T_ref \\""Fixed device temperature if useHeatPort = false\\"";\n Real LossPower(quantity = \\""Power\\"", unit = \\""W\\"") \\""Loss power leaving component via HeatPort\\"";\n Real T_heatPort(quantity = \\""ThermodynamicTemperature\\"", unit = \\""K\\"", displayUnit = \\""degC\\"", min = 0.0, start = 288.15, nominal = 300.0) \\""Temperature of HeatPort\\"";\n parameter Real R(quantity = \\""Resistance\\"", unit = \\""Ohm\\"", start = 1.0) \\""Resistance at temperature T_ref\\"";\n parameter Real T_ref(quantity = \\""ThermodynamicTemperature\\"", unit = \\""K\\"", displayUnit = \\""degC\\"", min = 0.0, start = 288.15, nominal = 300.0) = 300.15 \\""Reference temperature\\"";\n parameter Real alpha(quantity = \\""LinearTemperatureCoefficient\\"", unit = \\""1/K\\"") = 0.0 \\""Temperature coefficient of resistance (R_actual = R*(1 + alpha*(T_heatPort - T_ref))\\"";\n Real R_actual(quantity = \\""Resistance\\"", unit = \\""Ohm\\"") \\""Actual resistance = R*(1 + alpha*(T_heatPort - T_ref))\\"";\nequation\n assert(1.0 + alpha * (T_heatPort - T_ref) >= 1e-15, \\""Temperature outside scope of model!\\"");\n R_actual = R * (1.0 + alpha * (T_heatPort - T_ref));\n v = R_actual * i;\n LossPower = v * i;\n v = p.v - n.v;\n 0.0 = p.i + n.i;\n i = p.i;\n T_heatPort = T;\n p.i = 0.0;\n n.i = 0.0;\nend Modelica.Electrical.Analog.Basic.Resistor;\n""\n>> a:=1:5;\n>> b:=3:8\n{3,4,5,6,7,8}\n>>> a*b\n\n>>> getErrorString()\n""[:1:1-1:0:writable] Error: Incompatible argument types to operation scalar product in component , left type: Integer[5], right type: Integer[6]\n[:1:1-1:0:writable] Error: Incompatible argument types to operation scalar product in component , left type: Real[5], right type: Real[6]\n[:1:1-1:0:writable] Error: Cannot resolve type of expression a * b. 
The operands have types Integer[5], Integer[6] in component .\n""\n>> b:=3:7;\n>> a*b\n85\n>>> listVariables()\n{b, a}\n>>\n```\n\n## How to contribute to the OpenModelica Compiler\n\nThe long-term development of OpenModelica is supported by a non-profit organization - the\n[Open Source Modelica Consortium (OSMC)](https://openmodelica.org/home/consortium/).\n\nSee [CONTRIBUTING.md](CONTRIBUTING.md) on how to contribute to the development.\nIf you encounter any bugs, feel free to open a ticket about it.\nFor general questions regarding OpenModelica there is a\n[discussions section](https://github.com/OpenModelica/OpenModelica/discussions) available.\n\n## License\n\nSee [OSMC-License.txt](OSMC-License.txt).\n\n## How to cite\n\nSee the [CITATIONS](CITATION.cff) file for information on how to cite OpenModelica in\nany publications reporting work done using OpenModelica.\nFor a complete list of all publications related to OpenModelica see\n[doc/bibliography/openmodelica.bib](./doc/bibliography/openmodelica.bib).\n\n------------\nLast updated: 2023-06-21\n'",,"2015/05/03, 16:59:29",3097,CUSTOM,897,36303,"2023/10/25, 20:22:58",1761,3950,9484,1506,0,34,0.0,0.7545220549345932,"2023/10/24, 10:18:41",v1.22.0-dev.beta.6,0,76,false,,false,true,,,https://github.com/OpenModelica,https://openmodelica.org,"Linköping, Sweden",,,https://avatars.githubusercontent.com/u/4006504?v=4,,, Eco-CI,Estimating the energy consumption of CI / CD pipelines on GitHub and GitLab.,green-coding-berlin,https://github.com/green-coding-berlin/eco-ci-energy-estimation.git,github,,Computation and Communication,"2023/10/16, 08:57:18",13,9,13,true,Shell,Green Coding Berlin,green-coding-berlin,Shell,,"b'# Eco-CI\n\nEco-CI is a project aimed at estimating energy consumption in continuous integration (CI) environments. It provides functionality to calculate the energy consumption of CI jobs based on the power consumption characteristics of the underlying hardware.\n\n## Usage\n\nEco-CI supports both GitHub and GitLab as CI platforms. When you integrate it into your pipeline, you must call the start-measurement script to begin collecting power consumption data, then call the get-measurement script each time you wish to make a spot measurement. When you call get-measurement, you can also assign a label to it to more easily identify the measurement. At the end, call display-results to see all the measurement results and the overall total usage, and to export the data. \n\nFollow the instructions below to integrate Eco-CI into your CI pipeline:\n\n### Github:\nTo use Eco-CI in your GitHub workflow, call it with the relevant task name (start-measurement, get-measurement, or display-results). 
Here is a sample workflow that runs some python tests with Eco-CI integrated.\n\n```yaml\nname: Daily Tests with Energy Measurement\nrun-name: Scheduled - DEV Branch\non:\n schedule:\n - cron: \'0 0 * * *\'\n\npermissions: read-all\n\njobs:\n run-tests:\n runs-on: ubuntu-latest\n steps:\n - name: Initialize Energy Estimation\n uses: green-coding-berlin/eco-ci-energy-estimation@v2 # use hash or @vX here (See note below)\n with:\n task: start-measurement\n\n - name: \'Checkout repository\'\n uses: actions/checkout@v3\n with:\n ref: \'dev\'\n submodules: \'true\'\n\n - name: Checkout Repo Measurement\n uses: green-coding-berlin/eco-ci-energy-estimation@v2 # use hash or @vX here (See note below)\n with:\n task: get-measurement\n label: \'repo checkout\'\n\n - name: setup python\n uses: actions/setup-python@v4\n with:\n python-version: \'3.10\'\n cache: \'pip\'\n\n - name: pip install\n shell: bash\n run: |\n pip install -r requirements.txt\n\n - name: Setup Python Measurement\n uses: green-coding-berlin/eco-ci-energy-estimation@v2 # use hash or @vX here (See note below)\n with:\n task: get-measurement\n label: \'python setup\'\n\n - name: Run Tests\n shell: bash\n run: |\n pytest\n\n - name: Tests measurement\n uses: green-coding-berlin/eco-ci-energy-estimation@v2 # use hash or @vX here (See note below)\n with:\n task: get-measurement\n label: \'pytest\'\n\n - name: Show Energy Results\n uses: green-coding-berlin/eco-ci-energy-estimation@v2 # use hash or @vX here (See note below)\n with:\n task: display-results\n```\n\n#### Github Action Mandatory and Optional Variables:\n- `task`: (required) (options are `start-measurement`, `get-measurement`, `display-results`)\n - `start-measurement` - Initializes the action and starts the measurement. This must be called, and only once per job.\n - `get-measurement` - Measures the energy at this point in time, since either the start-measurement or the last get-measurement action call.\n - `display-results` - Outputs the energy results to the `$GITHUB_STEP_SUMMARY`. Creates a table that shows the energy results of all the get-measurement calls, and then a final row for the entire run. Displays the average cpu utilization, the total Joules used, and the average wattage for each measurement and for the total run. It will also display a graph of the energy used, and a badge for you to display.\n - This badge will always be updated to display the total energy of the most recent run of the workflow that generated this badge.\n - The total measurement of this task is provided as output `data-total-json` in json format (see example below).\n - Can be used with the `pr-comment` flag (see below) to post the results as a comment on the PR.\n- `branch`: (optional) (default: ${{ github.ref_name }})\n - Used with `get-measurement` and `display-results` to correctly identify this CI run for the Badge.\n- `label`: (optional) (default: \'measurement ##\')\n - Used with `get-measurement` and `display-results` to identify the measurement\n- `send-data`: (optional) (default: true)\n - Send metrics data to metrics.green-coding.berlin to create and display a badge, and see an overview of the energy of your CI runs. Set to false to send no data. The data we send are: the energy value and duration of the measurement; cpu model; repository name/branch/workflow_id/run_id; commit_hash; source (GitHub or GitLab).
We use this data to display in our green-metrics-tool front-end here: https://metrics.green-coding.berlin/ci-index.html\n- `display-table`: (optional) (default: true)\n - Used during the `display-results` step to show or hide the energy reading table in the output\n- `display-graph`: (optional) (default: true)\n - We use an ascii charting library written in go (https://github.com/guptarohit/asciigraph). GitHub hosted runner images come with go, so we do not install it. If you are using a private runner instance, however, your machine may not have go installed, and this will not work. As we want to minimize what we install on private runner machines so as not to interfere with your setup, we will not install go. Therefore, you will need to call `start-measurement` with the `display-graph` flag set to false, and that will skip the installation of this go library.\n- `display-badge`: (optional) (default: true)\n - used with display-results\n - Shows the badge for the ci run during the display-results step\n - automatically false if send-data is also false\n- `pr-comment`: (optional) (default: false)\n - used with display-results\n - if on, will post a comment on the PR issue with the Eco-CI results; only occurs if the triggering event is a pull_request\n - remember to set `pull-requests: write` in your workflow file\n\n\n#### Continuing on Errors\n\nWe recommend running our action with `continue-on-error: true`, as it is not critical to the success of your workflow, but rather a nice feature to have.\n\n```yaml\n - name: Eco CI Energy Estimation\n uses: green-coding-berlin/eco-ci-energy-estimation@v2\n with:\n task: display-results\n continue-on-error: true\n```\n\n#### Consuming the Measurements as JSON\n\nFor both tasks `get-measurement` and `display-results` the lap measurements and total measurement can be consumed in JSON format.\nYou can use the outputs `data-lap-json` or `data-total-json` respectively.\nHere is an example demonstrating how this can be achieved:\n\n```yaml\n # ...\n - name: \'Checkout repository\'\n uses: actions/checkout@v3\n with:\n ref: \'dev\'\n submodules: \'true\'\n\n - name: Checkout Repo Measurement\n uses: green-coding-berlin/eco-ci-energy-estimation@v2\n id: checkout-step\n with:\n task: get-measurement\n label: \'repo checkout\'\n\n - name: Print checkout data\n run: |\n echo ""lap json: ${{ steps.checkout-step.outputs.data-lap-json }}""\n\n - name: Show Energy Results\n uses: green-coding-berlin/eco-ci-energy-estimation@v2\n id: total-measurement-step\n with:\n task: display-results\n\n - name: Print total data\n run: |\n echo ""total json: ${{ steps.total-measurement-step.outputs.data-total-json }}""\n```\n\nNote that the steps whose measurements you want to consume need to have an `id` so that you can access the corresponding data from their outputs.
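\nIf you only need a single field rather than the whole JSON string, GitHub\'s built-in `fromJSON` expression function can parse the output inline. A small sketch (the `energy` field name is an illustrative assumption; check the actual JSON your run produces):\n\n```yaml\n - name: Print total energy only\n run: |\n echo ""energy: ${{ fromJSON(steps.total-measurement-step.outputs.data-total-json).energy }}""\n```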
\n#### Note on private repos\nIf you are running in a private repo, you must give your job actions `read` permissions for the GITHUB_TOKEN. This is because we make an API call to get your workflow_id, which uses your `$GITHUB_TOKEN`, and it needs the correct permissions to do so:\n```yaml\njobs:\n test:\n runs-on: ubuntu-latest\n permissions:\n actions: read\n steps:\n - name: Eco CI - Initialize\n uses: green-coding-berlin/eco-ci-energy-estimation@v2\n with:\n task: start-measurement\n```\n\n### GitLab:\nTo use Eco-CI in your GitLab pipeline, you must first include a reference to the eco-ci-gitlab.yml file as such:\n```\ninclude:\n remote: \'https://raw.githubusercontent.com/green-coding-berlin/eco-ci-energy-estimation/main/eco-ci-gitlab.yml\'\n```\n\nand then you call the various scripts in your pipeline with a `!reference` call like this:\n```\n- !reference [.script_name, script]\n```\nwhere `script_name` is one of the following:\n`initialize_energy_estimator` - used to set up the machine for measurement. Needs to be called once per VM job.\n`start_measurement` - begin the measurement\n`get_measurement` - make a spot measurement here. If you wish to label the measurement, you need to set the ECO_CI_LABEL environment variable right before this call.\n`display_results` - will print all the measurement values to the job\'s output and prepare the artifacts, which must be exported in the normal GitLab way.\n\nBy default, we send data to our API, which will allow us to present you with a badge, and a front-end display to review your results. The data we send are: the energy value and duration of the measurement; cpu model; repository name/branch/workflow_id/run_id; commit_hash; source (GitHub or GitLab). We use this data to display in our green-metrics-tool front-end here: https://metrics.green-coding.berlin/ci-index.html\n\nIf you do not wish to send us data, you can set this global variable in your pipeline:\n\n```\nvariables:\n ECO_CI_SEND_DATA: ""false""\n```\n\nThen, for each job you need to export the artifacts. We currently export the pipeline data as a regular artifact, as well as make use of GitLab\'s [Metric Report](https://docs.gitlab.com/ee/ci/testing/metrics_reports.html) artifact (which we output to the default metrics.txt):\n\n```\nartifacts:\n paths:\n - eco-ci-output.txt\n - eco-ci-total-data.json\n reports:\n metrics: metrics.txt\n```\n\nHere is a sample .gitlab-ci.yml file to illustrate:\n\n```\nimage: ubuntu:22.04\ninclude:\n remote: \'https://raw.githubusercontent.com/green-coding-berlin/eco-ci-energy-estimation/main/eco-ci-gitlab.yml\'\n\nstages:\n - test\n\ntest-job:\n stage: test\n script:\n - !reference [.initialize_energy_estimator, script]\n - !reference [.start_measurement, script]\n\n - sleep 10s # Your main pipeline logic here\n - export ECO_CI_LABEL=""measurement 1""\n - !reference [.get_measurement, script]\n\n - sleep 3s # more of your pipeline logic here\n - export ECO_CI_LABEL=""measurement 2""\n - !reference [.get_measurement, script]\n\n - !reference [.display_results, script]\n\n artifacts:\n paths:\n - eco-ci-output.txt\n reports:\n metrics: metrics.txt\n```\n\n\n### How does it work?\n- At its core, Eco-CI makes its energy estimations with an XGBoost machine learning model we have created from the SpecPower database. The model and further information can be found here: https://github.com/green-coding-berlin/spec-power-model\n- When you initialize Eco-CI, it downloads the XGBoost model onto the machine, as well as a small program to track the cpu utilization over a period of time. This tracking begins when you call the start_measurement function.
Then, each time you call get-measurement, it takes the cpu-utilization data collected (either from the start, or since the last get-measurement call) and makes an energy estimation based on the detected hardware (mainly cpu data) and the utilization.\n\n### Limitations\n- At the moment this will only work with Linux-based pipelines, mainly tested on ubuntu images.\n\n- If you have your pipeline split over multiple VMs (often the case with many jobs), you have to treat each VM as a separate machine for the purposes of measuring and setting up Eco-CI.\n\n- The XGBoost model requires the CPU to have a fixed frequency setting. This is typical for cloud testing, but not always the case.\n\n- The XGBoost model is trained on the SpecPower database, which was mostly collected on compute machines. Results will be less accurate for machines other than big cloud servers, for memory-heavy machines, and for machines that rely more heavily on their GPUs for computation.\n\n### Note on the integration\n- If you use dependabot and want to get updates, we recommend using the hash notation\n + `uses: green-coding-berlin/eco-ci-energy-estimation@06837b0b3b393a04d055979e1305852bda82f044 #v2.2`\n + Note that this hash is just an example. You can find the latest hash under *Tags*\n\n- If you want the extension to automatically update within a version number, use the convenient @v2 form\n + `uses: green-coding-berlin/eco-ci-energy-estimation@v2 # will pick the latest minor v2 release, for example v2.2`\n\n\n'",,"2023/01/16, 10:21:20",282,MIT,157,157,"2023/10/16, 08:57:18",4,16,38,38,9,1,0.4,0.3695652173913043,"2023/10/06, 09:45:45",2.4,0,5,false,,false,false,"DynamicsValue/github-action,dan-mm/test-repo-a,liamlaverty/ideal-umbrella,contributor-assistant/github-action,franciscotbjr/05-basic-example,franciscotbjr/gh-first-action,ceddlyburge/johnson-trotter,green-coding-berlin/example-applications,green-coding-berlin/green-metrics-tool",,https://github.com/green-coding-berlin,https://www.green-coding.berlin,Germany,,,https://avatars.githubusercontent.com/u/97227681?v=4,,, Green Metrics Tool,"An open source suite to measure, display and compare software energy and CO2 consumption for containerized software. External power meters as well as RAPL and also ML-estimation models are supported.",green-coding-berlin,https://github.com/green-coding-berlin/green-metrics-tool.git,github,,Computation and Communication,"2023/10/25, 12:07:31",87,1,62,true,Python,Green Coding Berlin,green-coding-berlin,"Python,JavaScript,C,HTML,Shell,CSS,Makefile,Dockerfile",,"b'[![Tests Status - Main](https://github.com/green-coding-berlin/green-metrics-tool/actions/workflows/tests-vm-main.yml/badge.svg)](https://github.com/green-coding-berlin/green-metrics-tool/actions/workflows/tests-vm-main.yml)\n\n\n[![Energy Used](https://api.green-coding.berlin/v1/ci/badge/get/?repo=green-coding-berlin/green-metrics-tool&branch=dev&workflow=45267392)](https://metrics.green-coding.berlin/ci.html?repo=green-coding-berlin/green-metrics-tool&branch=dev&workflow=45267392) (This is the energy cost of running our CI-Pipelines on Github.
[Find out more about Eco-CI](https://www.green-coding.berlin/projects/eco-ci/))\n\n# Introduction\n\nThe Green Metrics Tool is a developer tool intended for measuring the energy and CO2 consumption of software through a software life cycle analysis (SLCA).\n\nKey features are:\n- Reproducible measurements through configuration/setup-as-code\n- [POSIX style metric providers](https://docs.green-coding.berlin/docs/measuring/metric-providers/metric-providers-overview/) for many sensors (RAPL, IPMI, PSU, Docker, Temperature, CPU ...)\n- [Low overhead](https://docs.green-coding.berlin/docs/measuring/metric-providers/overhead-of-measurement-providers/)\n- Statistical frontend with charts - [DEMO](https://metrics.green-coding.berlin/stats.html?id=7169e39e-6938-4636-907b-68aa421994b2)\n- API - [DEMO](https://api.green-coding.berlin)\n- [Cluster setup](https://docs.green-coding.berlin/docs/installation/installation-cluster/)\n- [Free hosted service for more precise measurements](https://docs.green-coding.berlin/docs/measuring/measurement-cluster/)\n- Timeline-View: Monitor software projects over time - [DEMO for Wagtail](https://metrics.green-coding.berlin/timeline.html?uri=https://github.com/green-coding-berlin/bakerydemo-gold-benchmark&filename=usage_scenario_warm.yml&branch=&machine_id=7) / [DEMO Overview](https://metrics.green-coding.berlin/energy-timeline.html)\n- [Energy-ID Score-Cards](https://www.green-coding.berlin/projects/energy-id/) for software (Also see below)\n\nIt is designed to re-use existing infrastructure and testing files as much as possible, to be easily integrated into any software repository, and to create transparency around software energy consumption.\n\nIt can orchestrate Docker containers according to a given specification in a `usage_scenario.yml` file.\n\nThese containers will be set up on the host system, and the testing specification in the `usage_scenario.yml` will be\nrun by sending the commands to the containers accordingly.
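\nAs a rough illustration of the idea, a minimal `usage_scenario.yml` might look like the sketch below. The field names here are assumptions for illustration only, based on the compose-like style the tool follows; see the documentation linked below for the authoritative schema.\n\n```yaml\n# illustrative sketch only -- field names are assumptions\nname: Hello World scenario\nauthor: Jane Doe\ndescription: Measure a single command in a single container\n\nservices:\n my-service:\n image: alpine\n\nflow:\n - name: Say hello\n container: my-service\n commands:\n - type: console\n command: echo ""Hello World""\n```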
\nThis repository contains the command line tools to schedule and run the measurement report,\nas well as a web interface to view the measured metrics in some nice charts.\n\n# Frontend\nTo see the frontend in action and get an idea of what kind of metrics the tool can collect and display, go to our [Green Metrics Frontend](https://metrics.green-coding.berlin)\n\n# Documentation\n\nTo see the documentation and how to install and use the tool, please go to [Green Metrics Tool Documentation](https://docs.green-coding.berlin)\n\n# Screenshots of Single Run View\n\n![](https://www.green-coding.berlin/img/projects/gmt-screenshot-1.webp)\n![](https://www.green-coding.berlin/img/projects/gmt-screenshot-2.webp)\n![](https://www.green-coding.berlin/img/projects/gmt-screenshot-3.webp)\n![](https://www.green-coding.berlin/img/projects/gmt-screenshot-4.webp)\n\n\n# Screenshots of Comparison View\n![](https://www.green-coding.berlin/img/projects/gmt-screenshot-5.webp)\n![](https://www.green-coding.berlin/img/projects/gmt-screenshot-6.webp)\n\n# Energy-ID Scorecards\n\n\nDetails: [Energy-ID project page](https://www.green-coding.berlin/projects/energy-id/)\n\n\n'",,"2022/02/25, 13:45:29",607,AGPL-3.0,772,1134,"2023/10/23, 08:08:12",79,274,424,405,2,5,0.2,0.2678244972577697,"2023/10/02, 09:53:07",v0.20.2,18,7,false,,false,true,green-coding-berlin/example-applications,,https://github.com/green-coding-berlin,https://www.green-coding.berlin,Germany,,,https://avatars.githubusercontent.com/u/97227681?v=4,,, Scaphandre,An open source software agent to track energy consumption of ICT services from the servers.,hubblo-org,https://github.com/hubblo-org/scaphandre.git,github,"greenit,rust,rust-lang,energy,energy-consumption,energy-efficiency,energy-monitor,sustainability,electricity,electricity-consumption,electricity-meter,prometheus,watts,wattmeter,qemu,virtual-machines,carbon-footprint,virtual-machine,measure,hacktoberfest",Computation and Communication,"2023/08/14, 14:38:14",1270,28,451,true,Rust,Hubblo,hubblo-org,"Rust,Python,Jsonnet,Makefile,Dockerfile,Shell,Batchfile,Mustache",,"b'# Scaphandre\n\n*Your tech stack doesn\'t need so much energy \xe2\x9a\xa1*
\n\n---\n\nScaphandre *[skaf\xc9\x91\xcc\x83d\xca\x81]* is a metrology agent dedicated to electrical [power](https://en.wikipedia.org/wiki/Electric_power) consumption metrics. The goal of the project is to allow any company or individual to **measure** the power consumption of its tech services and get this data in a convenient form, sending it through any monitoring or data analysis toolchain.\n\n**Scaphandre** means *heavy* **diving suit** in [:fr:](https://fr.wikipedia.org/wiki/Scaphandre_%C3%A0_casque). It comes from the idea that tech related services often don\'t track their power consumption and thus don\'t expose it to their clients. Most of the time the reason is a presumed bad [ROI](https://en.wikipedia.org/wiki/Return_on_investment). Scaphandre makes it easier and cheaper for tech providers and tech users to go under the surface and bring back the desired power consumption metrics, make better sustainability-focused decisions, and then show the metrics to their clients to allow them to do the same.\n\nThis project was born from a deep sense of duty from tech workers. Please refer to the [why](https://hubblo-org.github.io/scaphandre-documentation/why.html) section to know more about its goals.\n\n**Warning**: this is still a very early-stage project. Any feedback or contribution will be highly appreciated. Please refer to the [contribution](https://hubblo-org.github.io/scaphandre-documentation/contributing.html) section.\n\n![Fmt+Clippy](https://github.com/hubblo-org/scaphandre/workflows/Tests/badge.svg?branch=main)\n[![](https://img.shields.io/crates/v/scaphandre.svg?maxAge=25920)](https://crates.io/crates/scaphandre)\n\n\nJoin us on [Gitter](https://gitter.im/hubblo-org/scaphandre) or [Matrix](https://app.element.io/#/room/#hubblo-org_scaphandre:gitter.im)!\n\n---\n\n## \xe2\x9c\xa8 Features\n\n- measuring power consumption on **bare metal hosts**\n- measuring power consumption of **qemu/kvm virtual machines** from the host\n- **exposing** power consumption metrics of a virtual machine, to allow **manipulating those metrics in the VM** as if it was a bare metal machine (relies on hypervisor features)\n- exposing power consumption metrics as a **[prometheus](https://prometheus.io) (HTTP) exporter**\n- sending power consumption metrics to **[riemann](http://riemann.io/)**\n- sending power consumption metrics to **[Warp10](http://warp10.io/)**\n- works on **[kubernetes](https://kubernetes.io/)**\n- storing power consumption metrics in a **JSON** file\n- showing basic power consumption metrics **in the terminal**\n\nHere is an example dashboard built with scaphandre: [https://metrics.hubblo.org](https://metrics.hubblo.org).\n
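\nFor a quick first taste of the terminal output mentioned in the feature list, a one-liner like the following is enough (a sketch assuming scaphandre is already installed; `stdout` is the exporter name and `-t` sets how many seconds to run, per the getting-started guide):\n\n```\n$ scaphandre stdout -t 15\n```\n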
\n## \xf0\x9f\x93\x84 How to ...?\n\nYou\'ll find everything you may want to know about scaphandre in the [documentation](https://hubblo-org.github.io/scaphandre-documentation), like:\n\n- \xf0\x9f\x8f\x81 [Getting started](https://hubblo-org.github.io/scaphandre-documentation/tutorials/getting_started.html)\n- \xf0\x9f\x92\xbb [Installation & compilation on GNU/Linux](https://hubblo-org.github.io/scaphandre-documentation/tutorials/installation-linux.html) or [on Windows](https://hubblo-org.github.io/scaphandre-documentation/tutorials/installation-windows.html)\n- \xf0\x9f\x91\x81\xef\xb8\x8f [Give a virtual machine access to its power consumption metrics, and break the opacity of being on the computer of someone else](https://hubblo-org.github.io/scaphandre-documentation/how-to_guides/propagate-metrics-hypervisor-to-vm_qemu-kvm.html)\n- \xf0\x9f\x8e\x89 [Contributing guide](https://hubblo-org.github.io/scaphandre-documentation/contributing.html)\n- [And much more](https://hubblo-org.github.io/scaphandre-documentation)\n\nIf you are only interested in the code documentation, [here it is](https://docs.rs/scaphandre).\n\n## \xf0\x9f\x93\x85 Roadmap\n\nThe ongoing roadmap can be seen [here](https://github.com/hubblo-org/scaphandre/projects/1). Feature requests are welcome, please join us.\n\n## \xe2\x9a\x96\xef\xb8\x8f Footprint\n\nDespite its name, scaphandre aims to be as light and clean as possible. One of the main focus areas of the project is to get as close as possible to zero overhead, in terms of both resource consumption and power consumption.\n'",,"2020/10/16, 14:10:05",1104,Apache-2.0,34,782,"2023/08/08, 16:34:45",80,143,250,97,78,23,0.0,0.18972895863052786,"2023/01/30, 11:52:13",v0.5.0,4,19,true,github,true,true,"CERIT-SC/scaphandre,Playfloor/scaphandre,jeromedruais/scaphandre,AAABBBCCCAAAA/scaphandre,metacosm/scaphandre,DerekStrickland/scaphandre,standardgalactic/scaphandre,hemanthnakkina-zz/scaphandre,da-ekchajzer/scaphandre,bbc/scaphandre,pseguret/scaphandre,arthurzenika/scaphandre,Canop/scaphandre,STDigitalDatacenter/scaphandre,slanglois/scaphandre,bobyali/scaphandre,jotak/scaphandre,tstrempel/scaphandre,Talebna/scaphandre,HurdmanBegins/scaphandre,Habibou-bot/scaphandre,demeringo/scaphandre,jdrouet/scaphandre,rossf7/scaphandre,uggla/scaphandre,florimondmanca/scaphandre,thegreenwebfoundation/scaphandre,hubblo-org/scaphandre",,https://github.com/hubblo-org,https://hubblo.org,France,,,https://avatars.githubusercontent.com/u/71067866?v=4,,, Tracarbon,Tracarbon tracks your device's energy consumption and calculates your carbon emissions using your location.,fvaleye,https://github.com/fvaleye/tracarbon.git,github,"energy,sustainability,energy-consumption,electricity-consumption,energy-efficiency,carbon-footprint",Computation and Communication,"2023/10/25, 19:33:49",83,0,15,true,Python,,,"Python,Makefile,Smarty,Dockerfile",https://fvaleye.github.io/tracarbon/documentation/,"b'![Tracarbon Logo](https://raw.githubusercontent.com/fvaleye/tracarbon/main/logo.png ""Tracarbon logo"")\n\n![example workflow](https://github.com/fvaleye/tracarbon/actions/workflows/build.yml/badge.svg)\n[![pypi](https://img.shields.io/pypi/v/tracarbon.svg?style=flat-square)](https://pypi.org/project/tracarbon/)\n[![doc](https://img.shields.io/badge/docs-python-blue.svg?style=for-the-badgee)](https://fvaleye.github.io/tracarbon)\n[![licence](https://img.shields.io/badge/license-Apache--2.0-green)](https://github.com/fvaleye/tracarbon/blob/main/LICENSE.txt)\n\n\n## \xf0\x9f\x93\x8c Overview\nTracarbon is a Python library that tracks your
device\'s energy consumption and calculates your carbon emissions.\n\nIt detects your location and your device automatically before starting to export measurements to an exporter.\nIt can be used as a CLI with already-defined metrics, or programmatically with the API by defining the metrics that you want to have.\n\nRead more in this [article](https://medium.com/@florian.valeye/tracarbon-track-your-devices-carbon-footprint-fb051fcc9009).\n\n## \xf0\x9f\x93\xa6 Where to get it\n\n```sh\n# Install Tracarbon\npip install tracarbon\n```\n\n```sh\n# Install one or more exporters from the list\npip install \'tracarbon[datadog,prometheus,kubernetes]\'\n```\n\n### \xf0\x9f\x94\x8c Devices: energy consumption\n| **Devices** | **Description** |\n|---|:---:|\n| Mac | \xe2\x9c\x85 Global energy consumption of your Mac (must be plugged into a wall adapter). |\n| Linux | \xe2\x9a\xa0\xef\xb8\x8f Only with [RAPL](https://web.eece.maine.edu/~vweaver/projects/rapl/). See [#1](https://github.com/fvaleye/tracarbon/issues/1). It works with containers on [Kubernetes](https://kubernetes.io/) using the [Metric API](https://kubernetes.io/docs/tasks/debug/debug-cluster/resource-metrics-pipeline/#metrics-api) if available. |\n| Windows | \xe2\x9d\x8c Not yet implemented. See [#184](https://github.com/hubblo-org/scaphandre/pull/184). |\n\n| **Cloud Provider** | **Description** |\n|---|:---:|\n| AWS | \xe2\x9c\x85 Use the hardware\'s usage with the EC2 instances carbon emissions datasets of [cloud-carbon-coefficients](https://github.com/cloud-carbon-footprint/ccf-coefficients/blob/main/data/aws-instances.csv). |\n| GCP | \xe2\x9d\x8c Not yet implemented. |\n| Azure | \xe2\x9d\x8c Not yet implemented. |\n\n## \xf0\x9f\x93\xa1 Exporters\n| **Exporter** | **Description** |\n|---|:---:|\n| Stdout | Print the metrics in Stdout. |\n| JSON | Write the metrics in a JSON file. |\n| Prometheus | Send the metrics to Prometheus. |\n| Datadog | Send the metrics to Datadog. |\n\n### \xf0\x9f\x97\xba\xef\xb8\x8f Locations\n| **Location** | **Description** | **Source** |\n|---|:---:|:---|\n| Worldwide | Get the latest co2g/kwh in near real-time using the CO2Signal or ElectricityMaps APIs. See [here](http://api.electricitymap.org/v3/zones) for the list of available zones. | [CO2Signal API](https://www.co2signal.com) or [ElectricityMaps](https://static.electricitymaps.com/api/docs/index.html) |\n| Europe | Static file created from the European Environment Agency emission intensity data for the co2g/kwh in European countries. | [EEA website](https://www.eea.europa.eu/data-and-maps/daviz/co2-emission-intensity-9#tab-googlechartid_googlechartid_googlechartid_googlechartid_chart_11111) |\n| AWS | Static file of the AWS grid emissions factors. | [cloud-carbon-coefficients](https://github.com/cloud-carbon-footprint/cloud-carbon-coefficients/blob/main/data/grid-emissions-factors-aws.csv) |\n\n### \xe2\x9a\x99\xef\xb8\x8f Configuration\nThe environment variables can be set from an environment file `.env`.\n\n| **Parameter** | **Description** |\n|---|:---|\n| TRACARBON_CO2SIGNAL_API_KEY | The API key received from [CO2Signal](https://www.co2signal.com) or [ElectricityMaps](https://static.electricitymaps.com/api/docs/index.html). |\n| TRACARBON_CO2SIGNAL_URL | The URL of [CO2Signal](https://docs.co2signal.com/#get-latest-by-country-code) is the default endpoint to retrieve the last known state of the zone, but it can be changed to [ElectricityMaps](https://static.electricitymaps.com/api/docs/index.html#live-carbon-intensity). |\n| TRACARBON_METRIC_PREFIX_NAME | The prefix to use in all metric names. |\n| TRACARBON_INTERVAL_IN_SECONDS | The interval in seconds to wait between metric evaluations. |\n| TRACARBON_LOG_LEVEL | The level to use for displaying the logs. |\n
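\nFor example, a minimal `.env` might look like this (the variable names are the ones documented above; the values are placeholders):\n\n```sh\n# .env -- placeholder values\nTRACARBON_CO2SIGNAL_API_KEY=your-api-key\nTRACARBON_METRIC_PREFIX_NAME=tracarbon\nTRACARBON_INTERVAL_IN_SECONDS=60\nTRACARBON_LOG_LEVEL=INFO\n```\n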
\n## \xf0\x9f\x94\x8e Usage\n\n**Request your API key**\n- Go to [CO2Signal](https://www.co2signal.com/) and get your free API key for non-commercial use, or go to [ElectricityMaps](https://static.electricitymaps.com/api/docs/index.html) for commercial use.\n- This API is used to retrieve the last known carbon intensity (in gCO2eq/kWh) of electricity consumed in your location.\n- Set your API key in the environment variables, in the `.env` file, or directly in the configuration.\n- If you would like to start without an API key, that is possible: the carbon intensity will then be loaded statically from a file.\n- Launch Tracarbon \xf0\x9f\x9a\x80\n\n**Command Line**\n```sh\ntracarbon run\n```\n\n**API**\n```python\nfrom tracarbon import TracarbonBuilder, TracarbonConfiguration\n\nconfiguration = TracarbonConfiguration() # Your configuration\ntracarbon = TracarbonBuilder(configuration=configuration).build()\ntracarbon.start()\n# Your code\ntracarbon.stop()\n\nwith tracarbon:\n # Your code\n\nreport = tracarbon.report() # Get the report\n```\n\n## \xf0\x9f\x92\xbb Development\n\n**Local: using Poetry**\n```sh\nmake init\nmake test-unit\n```\n\n## \xf0\x9f\x9b\xa1\xef\xb8\x8f Licence\n[Apache License 2.0](https://raw.githubusercontent.com/fvaleye/tracarbon/main/LICENSE.txt)\n\n## \xf0\x9f\x93\x9a Documentation\nThe documentation is hosted here: https://fvaleye.github.io/tracarbon/documentation\n'",,"2022/03/11, 20:22:45",593,Apache-2.0,317,437,"2023/10/23, 18:16:03",6,234,238,169,2,0,0.0,0.3023255813953488,"2023/07/17, 09:12:48",v0.7.1,0,3,false,,true,false,,,,,,,,,,, H2020 CATALYST,Converting data centres in energy flexibility ecosystems.,,https://gitlab.com/project-catalyst,gitlab,,Computation and Communication,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,, Energy-Languages,"The complete set of tools for energy consumption analysis of programming languages, using Computer Language Benchmark
Game.",greensoftwarelab,https://github.com/greensoftwarelab/Energy-Languages.git,github,"clbg,energy,programming,languages",Computation and Communication,"2022/02/27, 21:04:35",644,0,68,true,C,Green Software Lab,greensoftwarelab,"C,Pascal,Java,Makefile,Python,Common Lisp,C#,Lua,C++,Rust,Fortran,Swift,Chapel,Go,Dart,Hack,PHP,F#,Ruby,TypeScript,JavaScript,Perl,Shell",,"b'# Energy Efficiency in Programming Languages\n#### Checking Energy Consumption in Programming Languages Using the _Computer Language Benchmark Game_ as a case study.\n\n### What is this?\n\nThis repo contains the source code of 10 distinct benchmarks, implemented in 28 different languages (exactly as taken from the [Computer Language Benchmark Game](https://benchmarksgame-team.pages.debian.net/benchmarksgame/)).\n\nIt also contains tools which provide support, for each benchmark of each language, to 4 operations: *(1)* **compilation**, *(2)* **execution**, *(3)* **energy measuring** and *(4)* **memory peak detection**.\n\n### How is it structured and hows does it work?\n\nThis framework follows a specific folder structure, which guarantees the correct workflow when the goal is to perform and operation for all benchmarks at once.\nMoreover, it must be defined, for each benchmark, how to perform the 4 operations considered.\n\nNext, we explain the folder structure and how to specify, for each language benchmark, the execution of each operation.\n\n#### The Structure\nThe main folder contains 32 elements: \n1. 28 sub-folders (one for each of the considered languages); each folder contains a sub-folder for each considered benchmark.\n2. A `Python` script `compile_all.py`, capable of building, running and measuring the energy and memory usage of every benchmark in all considered languages.\n3. A `RAPL` sub-folder, containing the code of the energy measurement framework.\n4. A `Bash` script `gen-input.sh`, used to generate the input files for 3 benchmarks: `k-nucleotide`, `reverse-complement`, and `regex-redux`.\n\nBasically, the directories tree will look something like this:\n\n```Java\n| ...\n| \n\t| \n\t\t| \n\t\t| Makefile\n\t\t| [input]\n\t| ...\n\t| \n\t\t| \n\t\t| Makefile\n\t\t| [input]\n| ...\n| \n\t| \n\t| ...\n\t| \n| RAPL\n| compile_all.py\n| gen-input.sh\n\n```\n\nTaking the `C` language as an example, this is how the folder for the `binary-trees` and `k-nucleotide` benchmarks would look like:\n\n```Java\n| ...\n| C\n\t| binary-trees\n\t\t| binarytrees.gcc-3.c\n\t\t| Makefile\n\t| k-nucleotide\n\t\t| knucleotide.c\n\t\t| knucleotide-input25000000.txt\n\t\t| Makefile\n\t| ...\n| ...\n\n```\n\n#### The Operations\n\nEach benchmark sub-folder, included in a language folder, contains a `Makefile`.\nThis is the file where is stated how to perform the 4 supported operations: *(1)* **compilation**, *(2)* **execution**, *(3)* **energy measuring** and *(4)* **memory peak detection**.\n\nBasically, each `Makefile` **must** contains 4 rules, one for each operations:\n\n| Rule | Description |\n| -------- | -------- |\n| `compile` | This rule specifies how the benchmark should be compiled in the considered language; Interpreted languages don\'t need it, so it can be left blank in such cases. |\n| `run` | This rule specifies how the benchmark should be executed; It is used to test whether the benchmark runs with no errors, and the output is the expected. |\n| `measure` | This rule shows how to use the framework included in the `RAPL` folder to measure the energy of executing the task specified in the `run` rule. 
|\n| `mem` | Similar to `measure`, this rule executes the task specified in the `run` rule, but with support for memory peak detection. |\n\nTo better understand it, here\'s the `Makefile` for the `binary-trees` benchmark in the `C` language:\n\n```Makefile\ncompile:\n\t/usr/bin/gcc -pipe -Wall -O3 -fomit-frame-pointer -march=native -fopenmp -D_FILE_OFFSET_BITS=64 -I/usr/include/apr-1.0 binarytrees.gcc-3.c -o binarytrees.gcc-3.gcc_run -lapr-1 -lgomp -lm\n\t\nmeasure:\n\tsudo ../../RAPL/main ""./binarytrees.gcc-3.gcc_run 21"" C binary-trees\n\nrun:\n\t./binarytrees.gcc-3.gcc_run 21\n\nmem:\n\t/usr/bin/time -v ./binarytrees.gcc-3.gcc_run 21\n\n```\n\n### Running an example\n\n*First things first:* we must load the `msr` kernel module so that RAPL can access the energy registers (this requires sudo):\n```\nsudo modprobe msr\n```\nand then generate the input files, like this:\n```Makefile\n./gen-input.sh\n```\nThis will generate the necessary input files, which are valid for every language.\n\nWe included a main Python script, `compile_all.py`, that you can call either from the main folder or from inside a language folder, and it can be executed as follows:\n\n```PowerShell\npython compile_all.py [rule]\n```\n\nYou can provide one of the 4 rules referenced before, and the script will perform it using **every** `Makefile` found at the same folder level and below.\n\nThe default rule is `compile`, which means that if you run it with no arguments provided (`python compile_all.py`) the script will try to compile all benchmarks.\n\nThe results of the energy measurements will be stored in files named `<language>.csv`, where `<language>` is the name of the running language.\nYou will find each file inside the corresponding language folder.\n\nEach .csv will contain one line per benchmark, with the following format:\n\n```benchmark-name ; PKG (Joules) ; CPU (J) ; GPU (J) ; DRAM (J) ; Time (ms)```\n\nDo note that the availability of GPU/DRAM measurements depends on your machine\'s architecture. These are requirements from RAPL itself.
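\nSince the result files are plain `;`-separated text, post-processing them is straightforward. Here is a small illustrative Python sketch (not part of the repo) that assumes the `<language>.csv` layout described above:\n\n```python\nimport csv\nimport sys\n\n# Usage: python summarize.py C/C.csv\n# Prints package energy and runtime per benchmark.\nwith open(sys.argv[1]) as f:\n    for row in csv.reader(f, delimiter=\';\'):\n        name, pkg, cpu, gpu, dram, time_ms = (field.strip() for field in row)\n        print(f\'{name}: {float(pkg):.2f} J in {float(time_ms):.0f} ms\')\n```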
\n\n### Add your own example!\n#### Wanna know your own code\'s energy behavior? We can help you!\n#### Follow these steps:\n\n##### 1. Create a folder with the name of your benchmark, such as `test-benchmark`, inside the folder of the language you implemented it in.\n\n##### 2. Follow the instructions presented in the [Operations](#the-operations) section, and fill in the `Makefile`.\n\n##### 3. Use the `compile_all.py` script to compile, run, and/or measure what you want! Or run it yourself using the [`make`](https://linux.die.net/man/1/make) command.\n\n### Further Reading\nWanna know more? Check [this website](https://sites.google.com/view/energy-efficiency-languages)!\n\nThere you can find the results of a successful experimental setup using the contents of this repo, and the used machine and compiler specifications.\n\nYou can also find there the paper which includes such results and our discussion of them:\n\n>**""_Energy Efficiency across Programming Languages: How does Energy, Time and Memory Relate?_""**,\n>Rui Pereira, Marco Couto, Francisco Ribeiro, Rui Rua, J\xc3\xa1come Cunha, Jo\xc3\xa3o Paulo Fernandes, and Jo\xc3\xa3o Saraiva.\n>In *Proceedings of the 10th International Conference on Software Language Engineering (SLE \'17)*\n\n#### IMPORTANT NOTE:\nFor some cases, the `Makefiles` specify the path to the language\'s compiler/runner.\nIt is most likely that you will not have them at the same path on your machine.\nIf you would like to properly test every benchmark of every language, please make sure you have all compilers/runners installed, and adapt the `Makefiles` accordingly.\n\n### Contacts and References\n\n[Green Software Lab](http://greenlab.di.uminho.pt)\n\nMain contributors: [@Marco Couto](http://github.com/MarcoCouto) and [@Rui Pereira](http://haslab.uminho.pt/ruipereira)\n\n\n[The Computer Language Benchmark Game](https://benchmarksgame-team.pages.debian.net/benchmarksgame/)\n\n'",,"2017/08/28, 16:41:20",2249,MIT,0,23,"2023/10/12, 12:50:08",11,16,28,2,13,2,0.6,0.44999999999999996,,,0,7,false,,false,false,,,https://github.com/greensoftwarelab,http://greenlab.di.uminho.pt/,,,,https://avatars.githubusercontent.com/u/11410556?v=4,,, energyusage,A Python package that measures the environmental impact of computation.,responsibleproblemsolving,https://github.com/responsibleproblemsolving/energy-usage.git,github,,Computation and Communication,"2021/06/03, 19:58:58",148,10,11,false,Python,,responsibleproblemsolving,Python,,"b'# See [CodeCarbon](https://codecarbon.io/) instead!\nWe are no longer actively maintaining this project, since we have merged its functionality into another project that is being actively maintained and has more features than this one. We recommend using that project instead!\n\n# energyusage\n\nA Python package that measures the environmental impact of computation. Provides a function to\nevaluate the energy usage and related carbon emissions of another function.\nEmissions are calculated based on the user\'s location via the GeoJS API and that location\'s\nenergy mix data (sources: US E.I.A and eGRID for the year 2016).\n\n## Installation\n\nTo install, simply `$ pip install energyusage`.\n\n## Usage\n\nTo evaluate the emissions of a function, just call `energyusage.evaluate` with the function\nname and the arguments it requires. Use `python3` to run your code.\n\n```python\nimport energyusage\n\n# user function to be evaluated\ndef recursive_fib(n):\n if (n <= 2): return 1\n else: return recursive_fib(n-1) + recursive_fib(n-2)\n\nenergyusage.evaluate(recursive_fib, 40, pdf=True)\n# returns 102,334,155\n```\nIt will return the value of your function, while also printing out the energy usage report on the command line.\nOptional keyword arguments:\n* `pdf` (default = `False`): generates a PDF report, alongside the command-line utility\n* `powerLoss` (default = `0.8`): accounts for PSU loss; can be set by the user if known, for higher accuracy of results\n* `energyOutput` (default = `False`): determines whether the energy used and time taken are also returned. When set to true, the order is `time taken`, `energy used`, `return value of function`.\n* `printToScreen` (default = `True`): controls whether there is a terminal printout of the package running\n* `locations` (default = `[""Mongolia"", ""Iceland"", ""Switzerland""]`): allows selecting the countries in the emissions comparison section for the terminal printout and pdf. These can be set to the name of any country or US state.\n* `year` (default = `2016`): controls the year for the data. Default is `2016` as that is currently the most recent year of data from both of our sources.
Note that only this year of data is included in the package installation but more can be added in a process described later.\n\n### Energy Report\nThe report that will be printed out will look like the one below. The second and third lines will show a real-time reading that disappears once the process has finished evaluating.\n```\nLocation: Pennsylvania\n--------------------------------------------------------------------------------\n------------------------------- Final Readings -------------------------------\n--------------------------------------------------------------------------------\nAverage baseline wattage: 1.86 watts\nAverage total wattage: 19.42 watts\nAverage process wattage: 17.56 watts\nProcess duration: 0:00:01\n--------------------------------------------------------------------------------\n------------------------------- Energy Data -------------------------------\n--------------------------------------------------------------------------------\n Energy mix in Pennsylvania \nCoal: 25.42%\nOil: 0.17%\nNatural Gas: 31.64%\nLow Carbon: 42.52%\n--------------------------------------------------------------------------------\n------------------------------- Emissions -------------------------------\n--------------------------------------------------------------------------------\nEffective emission: 4.05e-06 kg CO2\nEquivalent miles driven: 1.66e-12 miles\nEquivalent minutes of 32-inch LCD TV watched: 2.51e-03 minutes\nPercentage of CO2 used in a US household/day: 1.33e-12%\n--------------------------------------------------------------------------------\n------------------------- Assumed Carbon Equivalencies -------------------------\n--------------------------------------------------------------------------------\nCoal: 995.725971 kg CO2/MWh\nPetroleum: 816.6885263 kg CO2/MWh\nNatural gas: 743.8415916 kg CO2/MWh\nLow carbon: 0 kg CO2/MWh\n--------------------------------------------------------------------------------\n------------------------- Emissions Comparison -------------------------\n--------------------------------------------------------------------------------\n Quantities below expressed in kg CO2 \n US Europe Global minus US/Europe\nMax: Wyoming 9.59e-06 Kosovo 9.85e-06 Mongolia 9.64e-06\nMedian: Tennessee 4.70e-06 Ukraine 6.88e-06 Korea, South 7.87e-06\nMin: Vermont 2.69e-07 Iceland 1.77e-06 Bhutan 1.10e-06\n--------------------------------------------------------------------------------\n--------------------------------------------------------------------------------\nProcess used: 1.04e-05 kWh\n```\nThe report is divided into several sections.\n* **Final Readings**: Presents an average of:\n\t* *Average baseline wattage*: your computer\'s average power usage minus the process, ran for 10 seconds before starting your process\n\t* *Average total wattage*: your computer\'s average power usage while the process runs\n\t* *Average process usage*: the difference between the baseline and total, highlighting the usage solely from the specific process you evaluated\n\t* *Process duration*: how long your program ran for\n\n* **Energy Data**: The energy mix of the location.\n\n* **Emissions**: The effective CO2 emissions of running the program one time and some real-world equivalents to those emissions.\n\n* **Assumed Carbon Equivalencies**: The formulas used to convert from kWh to CO2 based on the energy mix of the location (for international locations, see below for more information).\n\n* **Emissions Comparison**: What the emissions would be for the same energy 
used in a representative group of US states and countries. Note that if these locations are specified as described below, these default values are not shown.\n\n* **Process used**: The amount of energy running the program used in total.\n\nThe PDF report contains the same sections, but does not currently include the process duration or the emissions comparison.\n\n## Methodology\n### Power Measurement\n#### CPU\nWe calculate CPU power usage via the RAPL (Running Average Power Limit) interfaces found on Intel processors. These are non-architectural model-specific registers that provide power-related information\nabout the CPU. They are used primarily for limiting power consumption, but the Energy Status\nregister (MSR_PKG_ENERGY_STATUS) allows for power measurement.\n\nThe RAPL interface differentiates between several domains, based on the number of processors. For a single processor machine:\n * Package\n * Power planes:\n * Core\n * Uncore\n * DRAM\n\nFor a machine with multiple processors:\n * Package 0\n * Package 1\n * ...\n * Package n\n * DRAM\n\nPresently, we use the Package domain (or a sum of all of the Package domains, for a multi-processor machine), which represents the complete processor package.\n\nAs outlined by [Vince Weaver](http://web.eece.maine.edu/~vweaver/projects/rapl/), there are multiple ways to access the RAPL interface data, namely:\n * Using the perf_event interface\n * Reading the underlying MSR\n * Reading the files under `/sys/class/powercap/intel-rapl/`\n\nWe elected to use the final method because it is the only one that does not require sudo access. We read the `energy_uj` files inside the package folder(s) `intel-rapl:*`. These files report the energy used in microjoules, and they update roughly every millisecond. The value in the file increases to the point of overflow and then resets. We take 2 readings with a delay in between, and then calculate the wattage based on the difference (energy) and the delay (time). To avoid errors due to the reset of the file, we discard negative values.
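\nIn code, that approach boils down to something like the following sketch (our illustration, not the package\'s actual source; it assumes a Linux machine exposing the powercap interface described above):\n\n```python\nimport glob\nimport time\n\nDELAY_S = 1.0  # delay between the two readings\n\ndef total_energy_uj():\n    # Sum the package-level counters (intel-rapl:0, intel-rapl:1, ...).\n    total = 0\n    for path in glob.glob(\'/sys/class/powercap/intel-rapl:[0-9]/energy_uj\'):\n        with open(path) as f:\n            total += int(f.read())\n    return total\n\nfirst = total_energy_uj()\ntime.sleep(DELAY_S)\ndelta_uj = total_energy_uj() - first\nif delta_uj >= 0:  # discard negative readings caused by the counter reset\n    print(f\'{delta_uj / 1e6 / DELAY_S:.2f} W\')\n```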
\nFor more information on the RAPL interface, consult the [Intel\xc2\xae 64 and IA-32 Architectures Software Developer\'s Manual](https://software.intel.com/sites/default/files/managed/39/c5/325462-sdm-vol-1-2abcd-3abcd.pdf).\n\n#### GPU\nTo the package measurement we also add the power usage of the GPU, for machines that have an Nvidia GPU that supports the NVIDIA-smi program.\n\nNVIDIA-smi is a command-line utility that allows for the querying of information about the GPU. If the GPU is identified as valid, we use the built-in method to query the current wattage, and then convert the output into a float.\n\nMore information on NVIDIA-smi can be found on the [Nvidia website](https://developer.nvidia.com/nvidia-system-management-interface).\n\n\n### Calculating CO2 Emissions\n#### Location\nIn order to accurately calculate the CO\xe2\x82\x82 emissions associated with the computational power used, we determine the geographical location of the user via their IP address with the help of the [GeoJS](https://www.geojs.io/) API. If the location cannot be determined, we use the United States as the default.\n\nLocation is especially important as the emissions differ based on the country\'s (and, in the case of the United States, the state\'s) energy mix.\n\n#### Energy Mix Information\nWe obtained international energy mix data from the [U.S. Energy Information Administration data](https://www.eia.gov/beta/international/data/browser/#/?pa=0000000010000000000000000000000000000000000000000000000000u&c=ruvvvvvfvsujvv1vrvvvvfvvvvvvfvvvou20evvvfvrvvvvvvurs&ct=0&vs=INTL.44-2-AFG-QBTU.A&cy=2016&vo=0&v=H&start=2014&end=2016) for the year 2016. Specifically, we looked at the energy consumption of countries worldwide, broken down by energy source. For the data points labeled *(s)* (meaning that the value is too small for the number of decimal places shown), we approximated those amounts to 0. No data was available for, and thus we removed from consideration, the following: Former Czechoslovakia, Former Serbia and Montenegro, Former U.S.S.R., Former Yugoslavia, Hawaiian Trade Zone, East Germany and West Germany.\n\nOur United States energy mix and emissions data was obtained from the [U.S. Environmental Protection Agency eGRID data](https://www.epa.gov/sites/production/files/2018-02/egrid2016_summarytables.xlsx) for the year 2016. We used the *State Resource Mix* section for displaying the energy mix, and the *State Output Emission Rates* section for calculating emissions in the United States. We did not use the *otherFossil* data, as the values were predominantly 0 (and in cases in which the value was nonzero, it was below 1%).\n\nAs of July 2019, the most recent eGRID data was from the year 2016. We elected to use 2016 U.S. E.I.A. data for consistency between the data sources.\n\n#### Conversion to CO2\nSince the international data only contained an energy mix, and no emission data, we reverse-engineered the formulas used in the eGRID data. This gives us additional consistency between the separate datasets.\n* *Coal*: 2195.20 lbs CO2/MWh = 995.725971 kg CO2/MWh\n* *Petroleum*: 1800.49 lbs CO2/MWh = 816.6885263 kg CO2/MWh\n* *Natural gas*: 1639.89 lbs CO2/MWh = 743.8415916 kg CO2/MWh\n\n#### Using Different Years of Data\nIn case one wishes to compare energy usage between different years of data, we have included a script to allow for adding other years. If you navigate to the package directory and go into the `data` folder, you can use `raw_to_json.py`. First, you need to download the US and international data for the years of your choice from the links above and place them in `data/raw/""year of the data""` after creating the required year folder. Then, run the script with a flag for that year (for example, `python raw_to_json.py -2016`). This will allow selecting that year when using the package in the future, by using the `year` optional argument for `evaluate`.\n\n## Related Work\n* In their paper [*Energy and Policy Considerations for Deep Learning in NLP*](https://arxiv.org/abs/1906.02243), Strubell et al. not only analyze the computational power needed for training deep learning models in NLP, but further convert the data into carbon emissions and cost. Our tool aims to facilitate this analysis for developers in a single package. We do not consider cost, instead choosing to focus solely on the environmental impact. Further, we do not focus on a specific computational area.
We also extend their analysis of carbon emissions by including international data on energy consumption and CO2 emissions, for localized analysis of the carbon footprint of the tested program.\n\n## Limitations\n* Due to the methods by which the energy measurement is done (through the Intel RAPL\ninterface and NVIDIA-smi), our package is only available on Linux kernels that expose the\nRAPL interface and/or machines with an Nvidia GPU.\n\n* A country\xe2\x80\x99s overall energy consumption mix is not necessarily representative of the mix of energy sources used to produce electricity (and even electricity production is not necessarily representative of electricity consumption due to imports/exports). However, the E.I.A. data is the most geographically comprehensive that we have found. We are working on obtaining even more accurate data.\n\n\n## Acknowledgements\nWe would like to thank [Jon Wilson](https://www.haverford.edu/users/jwilson) for all his valuable insight with regard to the environmental aspect of our project.\n'",",https://arxiv.org/abs/1906.02243","2019/07/15, 17:17:20",1563,CUSTOM,0,268,"2022/02/09, 08:16:38",6,3,11,0,623,2,0.0,0.3076923076923077,,,0,6,false,,false,false,"exp-er/carbon-chain,olzama/neural-supertagging,gypark23/CS259,derlehner/IndoorAirQuality_DigitalTwin_Exemplar,rajatk0007/Car_prediction_deployment,EricSchles/pyscraper_ml,mhhabib/urlshortener,algakovic/Among-Us-auto-sorter,EricSchles/drifter_ml,Nathanlauga/transparentai",,https://github.com/responsibleproblemsolving,,,,,https://avatars.githubusercontent.com/u/52930580?v=4,,, Green Cost Explorer,"See how much of your cloud bill is spent on fossil fuels, so you can do the right thing and switch.",thegreenwebfoundation,https://github.com/thegreenwebfoundation/green-cost-explorer.git,github,,Computation and Communication,"2020/09/09, 14:49:21",165,0,14,false,JavaScript,The Green Web Foundation,thegreenwebfoundation,JavaScript,,"b'# Green Cost Explorer - climate related spend analysis for AWS\n[![All Contributors](https://img.shields.io/badge/all_contributors-1-orange.svg?style=flat-square)](#contributors)\n\nIf you work in technology, it\'s reasonable to think that you have some respect for science.\n\nAnd if you have some respect for science, then you\'ll understand why spending a significant chunk of your monthly AWS bills on fossil fuel powered infrastructure isn\'t a thing we can afford to do anymore.\n\nBecause Amazon provides a helpful breakdown of which [regions you use run on what they refer to as sustainable power, and which ones do not][1], and [because they provide a cost-explorer tool][2], you can combine this information to get an idea of where you might be spending money on fossil fuels without realising it.\n\n[1]: https://aws.amazon.com/about-aws/sustainability/\n[2]: https://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/CostExplorer.html#getCostAndUsage-property\n\nYou can also just look at this nice cartoon.
The ones which are notionally sustainable have the green leaf next to them:

![aws-geek-sustainable-regions](./AWS-Regions.png)

### What this does

TODO:

- [x] Sort your monthly spend into green vs grey spend
- [x] Create a basic table showing this
- [ ] Show this as a chart
- [ ] Project forward, using AWS's cost projection features, to help you see these against your own commitments

### Usage

#### Prerequisites

- [Enable AWS Cost Explorer](https://docs.aws.amazon.com/awsaccountbilling/latest/aboutv2/ce-enable.html): simply opening AWS Cost Explorer in the AWS console enables it
- An IAM user with CostExplorer read permissions

#### AWS Credential access

You can use `aws configure` to set up your keys, or export them manually:

```
AWS_ACCESS_KEY_ID='YOUR_KEY_ID'
AWS_SECRET_ACCESS_KEY='YOUR_SECRET_ACCESS_KEY'
```

This is a wrapper around the AWS NodeJS SDK, so by default it looks for credentials in your environment the way the AWS NodeJS SDK normally does. You can set the environment variables above to override these and try it out, and if you're not comfortable with this, the [AWS SDK lets you pass in credentials][creds] in a number of ways.

[creds]: https://docs.aws.amazon.com/sdk-for-javascript/v2/developer-guide/getting-started-nodejs.html

#### Usage

```
npx @tgwf/green-cost-explorer
```

If you have it installed and `npm link`-ed locally you can run:

```
npx greencost
```

If all goes well, you'll get something like this (sample data below):

```
┌──────────────────┬──────────────────┐
│ Total Green Cost │ Total Grey Cost  │
├──────────────────┼──────────────────┤
│ 49.0% ($146.66)  │ 51.0% ($152.48)  │
└──────────────────┴──────────────────┘
┌────────────┬─────────────────────┬─────────────────────┐
│ month      │ Green Cost by month │ Grey Cost by month  │
├────────────┼─────────────────────┼─────────────────────┤
│ 2018-08-01 │ 64.8% ($11.55)      │ 35.2% ($6.27)       │
├────────────┼─────────────────────┼─────────────────────┤
│ 2018-09-01 │ 27.5% ($13.42)      │ 72.5% ($35.47)      │
├────────────┼─────────────────────┼─────────────────────┤
│ 2018-10-01 │ 66.6% ($13.60)      │ 33.4% ($6.82)       │
├────────────┼─────────────────────┼─────────────────────┤
│ 2018-11-01 │ 68.0% ($13.59)      │ 32.0% ($6.39)       │
├────────────┼─────────────────────┼─────────────────────┤
│ 2018-12-01 │ 34.0% ($11.62)      │ 66.0% ($22.54)      │
├────────────┼─────────────────────┼─────────────────────┤
│ 2019-01-01 │ 52.0% ($19.43)      │ 48.0% ($17.94)      │
├────────────┼─────────────────────┼─────────────────────┤
│ 2019-02-01 │ 49.4% ($19.64)      │ 50.6% ($20.13)      │
├────────────┼─────────────────────┼─────────────────────┤
│ 2019-03-01 │ 52.0% ($21.61)      │ 48.0% ($19.92)      │
├────────────┼─────────────────────┼─────────────────────┤
│ 2019-04-01 │ 56.6% ($22.19)      │ 43.4% ($16.99)      │
└────────────┴─────────────────────┴─────────────────────┘
```

There are also flags to let you see a breakdown by service as well.

Note: only infrastructure costs are taken into consideration. Non-infrastructure global costs like taxes are ignored.
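Under the hood this boils down to a single Cost Explorer query grouped by region, with each region's spend bucketed as green or grey. The sketch below illustrates the idea; it is not the package's actual source, and `GREEN_REGIONS` is a placeholder you would fill in from the AWS sustainability page linked above:

```
// Sketch: split monthly spend into green vs grey, grouped by AWS region.
// GREEN_REGIONS is an assumption, not the package's real list.
const AWS = require('aws-sdk');

const GREEN_REGIONS = ['us-west-2', 'eu-west-1', 'eu-central-1', 'ca-central-1'];
const costExplorer = new AWS.CostExplorer({ region: 'us-east-1' });

async function greenVsGrey(start, end) {
  const { ResultsByTime } = await costExplorer
    .getCostAndUsage({
      TimePeriod: { Start: start, End: end }, // e.g. '2018-08-01', '2019-05-01'
      Granularity: 'MONTHLY',
      Metrics: ['UnblendedCost'],
      GroupBy: [{ Type: 'DIMENSION', Key: 'REGION' }],
    })
    .promise();

  return ResultsByTime.map(({ TimePeriod, Groups }) => {
    let green = 0;
    let grey = 0;
    for (const group of Groups) {
      const amount = parseFloat(group.Metrics.UnblendedCost.Amount);
      if (GREEN_REGIONS.includes(group.Keys[0])) green += amount;
      else grey += amount;
    }
    return { month: TimePeriod.Start, green, grey };
  });
}
```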
### Licensing

Feel free to use this commercially - part of your job as a professional in tech is to avoid unnecessary harm, and burning fossil fuels to run our infrastructure:

- objectively causes harm
- is avoidable, by switching regions, using a different provider, or contacting AWS about offsetting the emissions from running infra in their non-sustainable regions.

To be honest, given this is all about tracking your own spend, it's actually pretty hard to make this something you _couldn't use_ for commercial use.

So, Apache 2.0, yo.

## Contributors ✨

Thanks goes to these wonderful people ([emoji key](https://allcontributors.org/docs/en/emoji-key)):

- Franka (💻 code)

This project follows the [all-contributors](https://github.com/all-contributors/all-contributors) specification. Contributions of any kind welcome!
'",,"2019/08/07, 09:31:54",1540,Apache-2.0,0,33,"2020/09/09, 14:49:25",23,13,18,0,1141,12,0.4,0.4736842105263158,,,0,8,false,,false,false,,,https://github.com/thegreenwebfoundation,https://www.thegreenwebfoundation.org,The Internet,,,https://avatars.githubusercontent.com/u/8995024?v=4,,, CPU Energy Meter,A Linux tool for monitoring the power consumption of Intel CPUs at fine time intervals.,sosy-lab,https://github.com/sosy-lab/cpu-energy-meter.git,github,"rapl,energy,resource-measurement,linux",Computation and Communication,"2023/10/02, 05:43:55",101,0,19,true,C,SoSy-Lab,sosy-lab,"C,Ruby,Makefile,Shell,Dockerfile,Jinja",,"b'

CPU Energy Meter
================

[![Build Status](https://gitlab.com/sosy-lab/software/cpu-energy-meter/badges/main/pipeline.svg)](https://gitlab.com/sosy-lab/software/cpu-energy-meter/pipelines)
[![BSD-3-Clause License](https://img.shields.io/badge/license-BSD--3--clause-brightgreen.svg)](https://github.com/sosy-lab/cpu-energy-meter/blob/main/LICENSE)
[![Releases](https://img.shields.io/github/release/sosy-lab/cpu-energy-meter.svg)](https://github.com/sosy-lab/cpu-energy-meter/releases)
[![DOI via Zenodo](https://zenodo.org/badge/46493895.svg)](https://zenodo.org/badge/latestdoi/46493895)

CPU Energy Meter is a Linux tool that lets you monitor the power consumption of Intel CPUs at fine time granularities (a few tens of milliseconds).
Power monitoring is available for the following power domains:
- per package domain (CPU socket)
- per core domain (all the CPU cores on a package)
- per uncore domain (uncore components, e.g., integrated graphics on client CPUs)
- per memory node (memory local to a package, server CPUs only)
- per platform (all devices in the platform that receive power from the integrated power delivery mechanism, e.g., processor cores, SOC, memory, add-on or peripheral devices)

To do this, the tool uses a feature of Intel CPUs that is called [RAPL (Running Average Power Limit)](https://en.wikipedia.org/wiki/Running_average_power_limit), which is documented in the [Intel Software Developers Manual](https://software.intel.com/en-us/articles/intel-sdm), Volume 3B Chapter 14.9.
RAPL is available on CPUs from the generation [Sandy Bridge](https://en.wikipedia.org/wiki/Sandy_Bridge) and later.
Because CPU Energy Meter uses the maximal possible measurement interval (depending on the hardware this is between a few minutes and an hour), it causes negligible overhead.

CPU Energy Meter is a fork of the [Intel Power Gadget](https://software.intel.com/en-us/articles/intel-power-gadget-20) and is developed at the [Software Systems Lab](https://www.sosy-lab.org) of the [Ludwig-Maximilians-Universität München (LMU Munich)](https://www.uni-muenchen.de) under the [BSD-3-Clause License](https://github.com/sosy-lab/cpu-energy-meter/blob/main/LICENSE).

Installation
------------

For Debian or Ubuntu the easiest way is to install from our [PPA](https://launchpad.net/~sosy-lab/+archive/ubuntu/benchmarking):

    sudo add-apt-repository ppa:sosy-lab/benchmarking
    sudo apt install cpu-energy-meter

Alternatively, you can download our `.deb` package from [GitHub](https://github.com/sosy-lab/cpu-energy-meter/releases) and install it with `apt install ./cpu-energy-meter*.deb`.
Dependencies of CPU Energy Meter are [libcap](https://sites.google.com/site/fullycapable/), which is available on most Linux distributions in package `libcap` (e.g., Fedora) or `libcap2` (e.g., Debian and Ubuntu: `sudo apt install libcap2`), and a Linux kernel with the MSR and CPUID modules (available by default).

Alternatively, for running CPU Energy Meter from source (quick and dirty):

    sudo apt install libcap-dev
    sudo modprobe msr
    sudo modprobe cpuid
    make
    sudo ./cpu-energy-meter

It is also possible (and recommended) to run CPU Energy Meter without root. To do so, the following needs to be done:

- Load kernel modules `msr` and `cpuid`.
- Add a group `msr`.
- Add a udev rule that grants access to `/dev/cpu/*/msr` to group `msr` ([example](https://github.com/sosy-lab/cpu-energy-meter/blob/main/debian/additional_files/59-msr.rules)).
- Run `chgrp msr`, `chmod 2711`, and `setcap cap_sys_rawio=ep` on the binary (`make setup` is a shortcut for this).

The provided Debian package in our [PPA](https://launchpad.net/~sosy-lab/+archive/ubuntu/benchmarking) and on [GitHub](https://github.com/sosy-lab/cpu-energy-meter/releases) does these steps automatically and lets all users execute CPU Energy Meter.

How to use it
-------------

    cpu-energy-meter [-d] [-e sampling_delay_ms] [-r]

The tool will continue counting the cumulative energy use of all supported CPUs in the background and will report a key-value list of its measurements when it receives SIGINT (Ctrl+C):

```
+--------------------------------------+
|       CPU-Energy-Meter Socket 0      |
+--------------------------------------+
Duration  2.504502 sec
Package   3.769287 Joule
Core      0.317749 Joule
Uncore    0.010132 Joule
DRAM      0.727783 Joule
PSYS     29.792603 Joule
```

To get intermediate measurements, send signal `USR1` to the process.

Optionally, the tool can be executed with parameter `-r` to print the output as a raw (easily parsable) list:

```
cpu_count=1
duration_seconds=3.241504
cpu0_package_joules=4.971924
cpu0_core_joules=0.461182
cpu0_uncore_joules=0.053406
cpu0_dram_joules=0.953979
cpu0_psys_joules=38.904785
```

The parameter `-d` adds debug output. By default, CPU Energy Meter computes the necessary measurement interval automatically; this can be overridden with the parameter `-e`.
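The raw list is one `key=value` pair per line, so it is straightforward to consume from other tooling. As an illustration only (this helper is not part of CPU Energy Meter), a small Node.js sketch that turns the `-r` output into an object:

```
// Illustration: parse the raw (-r) key=value output into a nested object,
// e.g. { cpu_count: 1, duration_seconds: 3.24, cpus: { '0': { package_joules: 4.97, ... } } }
function parseRaw(output) {
  const result = { cpus: {} };
  for (const line of output.trim().split('\n')) {
    const [key, value] = line.split('=');
    const match = key.match(/^cpu(\d+)_(.+)$/);
    if (match) {
      const [, cpu, field] = match;
      if (!result.cpus[cpu]) result.cpus[cpu] = {};
      result.cpus[cpu][field] = parseFloat(value);
    } else {
      result[key] = parseFloat(value);
    }
  }
  return result;
}
```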
'",",https://zenodo.org/badge/latestdoi/46493895","2015/11/19, 13:28:07",2897,BSD-3-Clause,3,212,"2023/10/02, 05:44:32",1,2,31,1,23,0,0.0,0.5303030303030303,"2019/08/27, 13:27:58",1.2,0,3,false,,false,false,,,https://github.com/sosy-lab,https://www.sosy-lab.org,"LMU Munich, Germany",,,https://avatars.githubusercontent.com/u/16129993?v=4,,, PowerAPI,A middleware toolkit for building software-defined power meters.,powerapi-ng,https://github.com/powerapi-ng/powerapi.git,github,"power-meter,python,inria,green-computing,energy,energy-monitoring",Computation and Communication,"2023/10/19, 11:26:34",145,3,47,true,Python,PowerAPI,powerapi-ng,"Python,Shell,Dockerfile",https://powerapi.org,"b'

[![Join the chat at https://gitter.im/Spirals-Team/powerapi](https://badges.gitter.im/Join%20Chat.svg)](https://gitter.im/Spirals-Team/powerapi?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge&utm_content=badge)
[![License: BSD 3](https://img.shields.io/pypi/l/powerapi.svg)](https://opensource.org/licenses/BSD-3-Clause)
[![GitHub Workflow Status](https://img.shields.io/github/actions/workflow/status/powerapi-ng/powerapi/build.yml)](https://github.com/powerapi-ng/powerapi/actions/workflows/build.yml)
[![PyPI](https://img.shields.io/pypi/v/powerapi)](https://pypi.org/project/powerapi/)
[![Codecov](https://codecov.io/gh/powerapi-ng/powerapi/branch/master/graph/badge.svg)](https://codecov.io/gh/powerapi-ng/powerapi)

PowerAPI is a middleware toolkit for building software-defined power meters.
Software-defined power meters are configurable software libraries that can estimate the power consumption of software in real-time.
PowerAPI supports the acquisition of raw metrics from a wide diversity of sensors (*e.g.*, physical meters, processor interfaces, hardware counters, OS counters) and the delivery of power consumptions via different channels (including file system, network, web, graphical).
As a middleware toolkit, PowerAPI offers the capability of assembling power meters *«à la carte»* to accommodate user requirements.

# About

PowerAPI is an open-source project developed by the [Spirals project-team](https://team.inria.fr/spirals), a joint research group between the [University of Lille](https://www.univ-lille.fr) and [Inria](https://www.inria.fr).

The documentation of the project is available [here](http://powerapi.org).

## Mailing list
You can follow the latest news and ask questions by subscribing to our mailing list.

## Contributing
If you would like to contribute code you can do so through GitHub by forking the repository and sending a pull request.

When submitting code, please make every effort to [follow existing conventions and style](CONTRIBUTING.md) in order to keep the code as readable as possible.

## Publications
* **[SelfWatts: On-the-fly Selection of Performance Events to Optimize Software-defined Power Meters](https://hal.inria.fr/hal-03173410)**: G. Fieni, R. Rouvoy, L. Seinturier. *IEEE/ACM International Symposium on Cluster, Cloud and Grid Computing* (CCGrid). May 2021, Melbourne, Australia
* **[Power Budgeting of Big Data Applications in Container-based Clusters](https://hal.inria.fr/hal-02904300)**: J. Enes, G. Fieni, R. Expósito, R. Rouvoy, J. Tourino. *IEEE Cluster*, September 2020, Kobe, Japan
* **[SmartWatts: Self-Calibrating Software-Defined Power Meter for Containers](https://hal.inria.fr/hal-02470128)**: G. Fieni, R. Rouvoy, L. Seinturier. *IEEE/ACM International Symposium on Cluster, Cloud and Grid Computing* (CCGrid). May 2020, Melbourne, Australia
* **[Taming Energy Consumption Variations in Systems Benchmarking](https://hal.inria.fr/hal-02403379)**: Z. Ournani, M.C. Belgaid, R. Rouvoy, P. Rust, J. Penhoat, L. Seinturier. *ACM/SPEC International Conference on Performance Engineering* (ICPE). April 2020, Edmonton, Canada
* **[WattsKit: Software-Defined Power Monitoring of Distributed Systems](https://hal.inria.fr/hal-01439889)**: M. Colmant, P. Felber, R. Rouvoy, L. Seinturier. *IEEE/ACM International Symposium on Cluster, Cloud and Grid Computing* (CCGrid). April 2017, Madrid, Spain
* **[Process-level Power Estimation in VM-based Systems](https://hal.inria.fr/hal-01130030)**: M. Colmant, M. Kurpicz, L. Huertas, R. Rouvoy, P. Felber, A. Sobe. *European Conference on Computer Systems* (EuroSys). April 2015, Bordeaux, France
* **[Monitoring Energy Hotspots in Software](https://hal.inria.fr/hal-01069142)**: A. Noureddine, R. Rouvoy, L. Seinturier.
*Journal of Automated Software Engineering*, Springer, 2015
* **[Unit Testing of Energy Consumption of Software Libraries](https://hal.inria.fr/hal-00912613)**: A. Noureddine, R. Rouvoy, L. Seinturier. *International Symposium On Applied Computing* (SAC), March 2014, Gyeongju, South Korea
* **[Informatique : Des logiciels mis au vert](http://www.jinnove.com/Actualites/Informatique-des-logiciels-mis-au-vert)**: L. Seinturier, R. Rouvoy. *J'innove en Nord Pas de Calais*, [NFID](http://www.jinnove.com)
* **[PowerAPI: A Software Library to Monitor the Energy Consumed at the Process-Level](http://ercim-news.ercim.eu/en92/special/powerapi-a-software-library-to-monitor-the-energy-consumed-at-the-process-level)**: A. Bourdon, A. Noureddine, R. Rouvoy, L. Seinturier. *ERCIM News, Special Theme: Smart Energy Systems*, 92, pp.43-44. [ERCIM](http://www.ercim.eu), 2013
* **[Mesurer la consommation en énergie des logiciels avec précision](http://www.lifl.fr/digitalAssets/0/807_01info_130110_16_39.pdf)**: A. Bourdon, R. Rouvoy, L. Seinturier. *01 Business & Technologies*, 2013
* **[A review of energy measurement approaches](https://hal.inria.fr/hal-00912996v2)**: A. Noureddine, R. Rouvoy, L. Seinturier. *ACM SIGOPS Operating Systems Review*, ACM, 2013, 47 (3)
* **[Runtime Monitoring of Software Energy Hotspots](https://hal.inria.fr/hal-00715331)**: A. Noureddine, A. Bourdon, R. Rouvoy, L. Seinturier. *International Conference on Automated Software Engineering* (ASE), September 2012, Essen, Germany
* **[A Preliminary Study of the Impact of Software Engineering on GreenIT](https://hal.inria.fr/hal-00681560)**: A. Noureddine, A. Bourdon, R. Rouvoy, L. Seinturier. *International Workshop on Green and Sustainable Software* (GREENS), June 2012, Zurich, Switzerland

## Use Cases
PowerAPI is used in a variety of projects to address key challenges of GreenIT:
* [SmartWatts](https://github.com/powerapi-ng/smartwatts-formula) is a self-adaptive power meter that can estimate the energy consumption of software containers in real-time.
* [GenPack](https://hal.inria.fr/hal-01403486) provides a container scheduling strategy to minimize the energy footprint of cloud infrastructures.
* [VirtualWatts](https://github.com/powerapi-ng/virtualwatts-formula) provides process-level power estimation of applications running in virtual machines.
* [Web Energy Archive](http://webenergyarchive.com) ranks popular websites based on the energy footprint they impose on browsers.
* [Greenspector](https://greenspector.com) optimises the power consumption of software by identifying potential energy leaks in the source code.

## Acknowledgments
We all stand on the shoulders of giants and get by with a little help from our friends.
PowerAPI is written in [Python](https://www.python.org/) (under [PSF license](https://docs.python.org/3/license.html)) and built on top of:
* [pyzmq](https://github.com/zeromq/pyzmq) (under [3-Clause BSD license](https://opensource.org/licenses/BSD-3-Clause)) for inter-process communication.
* [pymongo](https://github.com/mongodb/mongo-python-driver) (under [Apache 2 license](https://github.com/mongodb/mongo-python-driver/blob/master/LICENSE)) for the MongoDB database (input/output) support.
'",,"2019/03/11, 14:27:02",1689,BSD-3-Clause,279,1277,"2023/10/19, 11:26:36",8,132,190,62,6,0,0.9,0.5567729083665338,"2023/07/18, 11:52:44",v2.1.0,0,12,false,,false,true,"powerapi-ng/smartwatts-formula,edward62740/rapl-monitor,maryamekhlasi/LOG8415-Advanced-Cloud",,https://github.com/powerapi-ng,https://powerapi.org,"Lille, France",,,https://avatars.githubusercontent.com/u/47974262?v=4,,, GreenFrame,"A tool to measure the carbon footprint of a user scenario on a given website application. GreenFrame is able to measure CPU, memory and network usage of Docker or Kubernetes containers. By measuring the resource consumption of dockerized E2E tests, GreenFrame makes it possible to compare the consumption of an app between its different versions.",marmelab,https://github.com/marmelab/greenframe-cli.git,github,,Computation and Communication,"2023/10/13, 10:35:46",194,0,191,true,TypeScript,marmelab,marmelab,"TypeScript,JavaScript,Shell,Makefile,Batchfile",https://greenframe.io,"b'# GreenFrame CLI

Estimate the carbon footprint of a user scenario on a web application. Full-stack analysis (browser, screen, network, server).

Can be used standalone, in a CI/CD pipeline, and in conjunction with the [greenframe.io](https://greenframe.io) service.

- [In A Nutshell](#in-a-nutshell)
- [Installation](#installation)
- [Usage](#usage)
- [How Does GreenFrame Work?](#how-does-greenframe-work)
- [Which Factors Influence The Carbon Footprint?](#which-factors-influence-the-carbon-footprint)
- [Commands](#commands)

# In A Nutshell

The share of digital technologies in global greenhouse gas emissions has passed air transport, and will soon pass car transport ([source](https://theshiftproject.org/wp-content/uploads/2019/03/Executive-Summary_Lean-ICT-Report_EN_lowdef.pdf)). At 4% of total emissions, and with a growth rate of 9% per year, the digital sector is a major contributor to global warming.

How do developers adapt their practices to build less energy intensive web applications?

GreenFrame is a command-line tool that estimates the carbon footprint of web apps at every stage of the development process. Put it in your Continuous Integration workflow to get warned about "carbon leaks", and force a threshold of maximum emissions.

For instance, to estimate the energy consumption and carbon emissions of a visit to a public web page, call `greenframe analyze`:

```
$ greenframe analyze https://marmelab.com
✅ main scenario completed
The estimated footprint is 0.038 g eq. co2 ± 1.3% (0.085 Wh).
```

# Installation

To install GreenFrame CLI, type the following command in your favorite terminal:

```
curl https://assets.greenframe.io/install.sh | bash
```

To verify that GreenFrame CLI has correctly been installed, type:

```
$ greenframe -v
enterprise-cli/1.5.0 linux-x64 node-v16.14.0
```

# Usage

By default, GreenFrame runs a "visit" scenario on a public web page and computes the energy consumption of the browser, the screen, and the public network.
But it can go further.

## Custom Scenario

You can run a custom scenario instead of the "visit" scenario by passing a scenario file to the `analyze` command:

```
$ greenframe analyze https://marmelab.com ./my-scenario.js
```

GreenFrame uses [PlayWright](https://playwright.dev/) to run scenarios. To discover what a custom PlayWright scenario looks like, you can refer to our [documentation](https://docs.greenframe.io/scenario/).

Check [the PlayWright documentation on writing tests](https://playwright.dev/docs/writing-tests) for more information.

You can test your scenario using the `greenframe open` command. It uses the local Chrome browser to run the scenario:

```
$ greenframe open https://marmelab.com ./my-scenario.js
```

You can write scenarios by hand, or use [the PlayWright Test Generator](https://playwright.dev/docs/codegen) to generate a scenario based on a user session.

## Full-Stack Analysis

You can monitor the energy consumption of other docker containers while running the scenario. This allows spawning an entire infrastructure and monitoring the energy consumption of the whole stack.

For instance, if you start a set of docker containers using `docker-compose`, containing the following services:

```
$ docker ps
CONTAINER ID   IMAGE         COMMAND                  CREATED         STATUS         PORTS                    NAMES
d94f1c458c19   node:16       "docker-entrypoint.s…"   7 seconds ago   Up 7 seconds   0.0.0.0:3003->3000/tcp   enterprise_app
f024c10e666b   node:16       "docker-entrypoint.s…"   7 seconds ago   Up 7 seconds   0.0.0.0:3006->3006/tcp   enterprise_api
b6b5f8eb9a6d   postgres:13   "docker-entrypoint.s…"   8 seconds ago   Up 8 seconds   0.0.0.0:5434->5432/tcp   enterprise_db
```

You can run an analysis on the full stack (the browser + the 3 server containers) by passing the `--containers` and `--databaseContainers` options:

```sh
$ greenframe analyze https://localhost:3000/ ./my-scenario.js --containers="enterprise_app,enterprise_api" --databaseContainers="enterprise_db"
```

GreenFrame needs to identify database containers because it computes the impact of network I/O differently between the client and the server, and within the server infrastructure.

## Using An Ad Blocker

Third-party tags can be a significant source of energy consumption. When you use the `--useAdblock` option, GreenFrame uses an Ad Blocker to let you estimate that cost.

Run two analyses, a normal one then an ad-blocked one, and compare the results:

```sh
$ greenframe analyze https://adweek.com
The estimated footprint is 0.049 g eq. co2 ± 1% (0.112 Wh).
$ greenframe analyze https://adweek.com --useAdblock
The estimated footprint is 0.028 g eq. co2 ± 1.1% (0.063 Wh).
```

In this example, the cost of ads and analytics is 0.049g - 0.028g = 0.021g eq. co2 (42% of the total footprint).

## Defining A Threshold

The `greenframe` CLI was designed to be used in a CI/CD pipeline. You can define a threshold in `g eq. co2` to fail the build if the carbon footprint is too high:

```sh
$ greenframe analyze https://cnn.com --threshold=0.045
❌ main scenario failed
The estimated footprint at 0.05 g eq. co2 ± 1.3% (0.114 Wh) passes the limit configured at 0.045 g eq. co2.
```

In case of a failed analysis, the CLI exits with exit code 1.
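Because failure is signalled through the exit code, you can wire the threshold check into any scripted pipeline, not just a ready-made CI integration. A minimal Node.js sketch (the URL and threshold are placeholders):

```js
// Sketch: fail a scripted build when the footprint exceeds a threshold.
// greenframe exits with code 1 when the configured limit is passed.
const { spawnSync } = require('node:child_process');

const result = spawnSync(
  'greenframe',
  ['analyze', 'https://example.com', '--threshold=0.045'],
  { stdio: 'inherit' }
);
process.exit(result.status ?? 1);
```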
## Syncing With GreenFrame.io

If you want to get more insights about your carbon footprint, you can sync your analysis with [GreenFrame.io](https://greenframe.io). This service provides:

- A dashboard to monitor your carbon footprint over time
- A detailed analysis of your carbon footprint, with a breakdown by scenario, container, scenario step, and component
- A comparison with previous analyses on the `main` branch (for Pull Request analysis)

![image](https://user-images.githubusercontent.com/99944/193788309-447a3006-4f05-4330-aa13-ab27d3cd8522.png)

To get started, [subscribe to GreenFrame.io](https://greenframe.io/#pricing) and create a new project. Then, get your token from the greenframe project page. Pass this token to each greenframe command using the `GREENFRAME_SECRET_TOKEN` environment variable:

```
$ GREENFRAME_SECRET_TOKEN=your-token-here greenframe analyze https://marmelab.com
✅ main scenario completed
The estimated footprint is 0.038 g eq. co2 ± 9.6% (0.086 Wh).
Check the details of your analysis at https://app.greenframe.io/analyses/7d7b7777-600c-4399-842f-b70db9408f53
```

When using a greenframe.io token, the `greenframe analyze` command generates an online report with much more detail than the estimated footprint, and outputs its URL on the console.

Alternatively, you can export this environment variable in your shell configuration file (`.bashrc`, `.zshrc`, etc.):

```
export GREENFRAME_SECRET_TOKEN=your-token-here
```

## Benchmarking Against Other Sites

How does the carbon footprint of your site compare to other sites?

GreenFrame.io runs a "visit" scenario over many websites in several categories. This allows you to compare your site to other sites in the same category.

If you're using a custom scenario, run the same scenario over another URL to compare the results.

The problem is that a given "scenario" may need adaptations to run on another site. For instance, the "add to cart" scenario may need to click on a different button to add an item to the cart. So the hard part of benchmarking is to define a scenario for each site.

## Diffing Against Previous Analyses

If you're using GreenFrame.io, you can compare your analysis with the previous one on the `main` branch. This allows you to monitor the evolution of your carbon footprint over time.

The greenframe CLI will automatically detect that you're in a git checkout, and store the commit hash in the analysis metadata. When run on a branch, it will also look for the latest analysis on the main branch, and compare the two. The results are visible on the analysis page on GreenFrame.io.

**Tip**: You can customize the name of the main branch using the `.greenframe.yml` config file.

## Using a Config File

Instead of passing all options on the command line, you can use a `.greenframe.yml` file to configure the CLI.
This file must be located in the same directory as the one where you run the `greenframe` CLI.

```yaml
baseURL: YOUR_APP_BASE_URL
scenarios:
    - path: PATH_TO_YOUR_SCENARIO_FILE
      name: My first scenario
      threshold: 0.1
projectName: YOUR_PROJECT_NAME
samples: 3
# distant: this option has been deprecated due to security issues
useAdblock: true
ignoreHTTPSErrors: true
locale: 'fr-FR'
timezoneId: 'Europe/Paris'
containers:
    - 'CONTAINER_NAME'
    - 'ANOTHER_CONTAINER_NAME'
databaseContainers:
    - 'DATABASE_CONTAINER_NAME'
envFile: PATH_TO_YOUR_ENVIRONMENT_VAR_FILE
envVar:
    - envVarA: 'An environment variable needed for the scenario (i.e. a secret key)'
    - envVarB: 'Another environment variable needed'
```

## More Information / Troubleshooting

Check the docs at greenframe.io:

[https://docs.greenframe.io/](https://docs.greenframe.io/)

# How Does GreenFrame Work?

GreenFrame relies on a [scientific model](./src/model/README.md) of the energy consumption of a digital system, built in collaboration with computer scientists at [Loria](https://www.loria.fr/en/).

While running the scenario, GreenFrame uses `docker stats` to collect system metrics (CPU, memory, network and disk I/O, scenario duration) every second from the browser and containers.

It then uses the GreenFrame Model to convert each of these metrics into energy consumption in Watt.hours. GreenFrame sums up the energy of all containers over time, taking into account a theoretical datacenter PUE (set to 1.4, and configurable) for server containers. This energy consumption is then converted into CO2 emissions using a configurable "carbon cost of energy" parameter (set to 442g/kWh by default).
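As a back-of-the-envelope illustration of that conversion step (the energy figures below are invented; the constants are the defaults just mentioned):

```js
// Illustration of the final conversion step, using the default constants above.
const PUE = 1.4; // theoretical datacenter PUE, applied to server containers
const CARBON_INTENSITY = 442; // g eq. co2 per kWh (default)

const browserWh = 0.06; // hypothetical browser + screen energy
const serverWh = 0.02; // hypothetical sum over server containers

const totalWh = browserWh + serverWh * PUE; // 0.088 Wh
const gramsCo2 = (totalWh / 1000) * CARBON_INTENSITY;
console.log(gramsCo2.toFixed(3)); // ~0.039 g eq. co2
```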
GreenFrame repeats the scenario 3 times and computes the average energy consumption and CO2 emissions. It also computes the standard deviation of energy consumption and CO2 emissions to provide a confidence interval.

For more details about the GreenFrame Model, check this article on the Marmelab blog:

[GreenFrame.io: What is the carbon footprint of a web page?](https://marmelab.com/blog/2021/04/08/greenframe-io-website-carbon.html)

# Which Factors Influence The Carbon Footprint?

Based on our research, the carbon footprint of a web page depends on:

- The duration of the scenario
- The size of the page (HTML, CSS, JS, images, fonts, etc.)
- The amount of JS executed on the browser
- The number of third-party tags (ads, analytics, etc.)
- The complexity of the page (number of DOM elements, number of layout changes, etc.)

Server containers have a low impact on the carbon footprint (around 5% in most cases).

This means that the lowest hanging fruit for optimizing the emissions of a web page is to use [Web Performance Optimization (WPO) techniques](https://developer.mozilla.org/en-US/docs/Web/Performance).

# Commands

* [`greenframe analyze [BASEURL] [SCENARIO]`](#greenframe-analyze-baseurl-scenario)
* [`greenframe kube-config`](#greenframe-kube-config)
* [`greenframe open [BASEURL] [SCENARIO]`](#greenframe-open-baseurl-scenario)
* [`greenframe update [CHANNEL]`](#greenframe-update-channel)

## `greenframe analyze [BASEURL] [SCENARIO]`

Create an analysis on GreenFrame server.

```
USAGE
  $ greenframe analyze [BASEURL] [SCENARIO] [-C <value>] [-K <value>] [-t <value>] [-p <value>] [-c <value>]
    [--commitId <value>] [-b <value>] [-s <value>] [-a] [-i] [--locale] [--timezoneId] [-e <value>] [-E <value>]
    [--dockerdHost <value>] [--dockerdPort <value>] [--containers <value>] [--databaseContainers <value>]
    [--kubeContainers <value>] [--kubeDatabaseContainers <value>]

ARGUMENTS
  BASEURL   Your baseURL website
  SCENARIO  Path to your GreenFrame scenario

FLAGS
  -C, --configFile=<value>            Path to config file
  -E, --envFile=<value>               File of environment vars
  -K, --kubeConfig=<value>            Path to kubernetes client config file
  -a, --useAdblock                    Use an adblocker during analysis
  -b, --branchName=<value>            Pass branch name manually
  -c, --commitMessage=<value>         Pass commit message manually
  -e, --envVar=<value>...             List of environment vars to read in the scenarios
  -i, --ignoreHTTPSErrors             Ignore HTTPS errors during analysis
  -p, --projectName=<value>           Project name
  -s, --samples=<value>               Number of runs done for the score computation
  -t, --threshold=<value>             Consumption threshold
  --commitId=<value>                  Pass commit id manually
  --containers=<value>                Pass containers manually
  --databaseContainers=<value>        Pass database containers manually
  --dockerdHost=<value>               Docker daemon host
  --dockerdPort=<value>               Docker daemon port
  --kubeContainers=<value>            Pass kubernetes containers manually
  --kubeDatabaseContainers=<value>    Pass kubernetes database containers manually
  --locale                            Set greenframe browser locale
  --timezoneId                        Set greenframe browser timezoneId

DESCRIPTION
  Create an analysis on GreenFrame server.
```

_See code: [dist/commands/analyze.ts](https://github.com/marmelab/greenframe-cli/blob/v1.7.0/dist/commands/analyze.ts)_

## `greenframe kube-config`

Configure kubernetes cluster to collect greenframe metrics

```
USAGE
  $ greenframe kube-config [-C <value>] [-K <value>] [-D]

FLAGS
  -C, --configFile=<value>  Path to config file
  -D, --delete              Delete daemonset and namespace from kubernetes cluster
  -K, --kubeConfig=<value>  Path to kubernetes client config file

DESCRIPTION
  Configure kubernetes cluster to collect greenframe metrics
  ...
  greenframe kube-config
```

_See code: [dist/commands/kube-config.ts](https://github.com/marmelab/greenframe-cli/blob/v1.7.0/dist/commands/kube-config.ts)_

## `greenframe open [BASEURL] [SCENARIO]`

Open browser to develop your GreenFrame scenario

```
USAGE
  $ greenframe open [BASEURL] [SCENARIO] [-C <value>] [-a] [--ignoreHTTPSErrors] [--locale] [--timezoneId]

ARGUMENTS
  BASEURL   Your baseURL website
  SCENARIO  Path to your GreenFrame scenario

FLAGS
  -C, --configFile=<value>  Path to config file
  -a, --useAdblock          Use an adblocker during analysis
  --ignoreHTTPSErrors       Ignore HTTPS errors during analysis
  --locale                  Set greenframe browser locale
  --timezoneId              Set greenframe browser timezoneId

DESCRIPTION
  Open browser to develop your GreenFrame scenario
  ...
  greenframe open https://greenframe.io ./yourScenario.js
```

_See code: [dist/commands/open.ts](https://github.com/marmelab/greenframe-cli/blob/v1.7.0/dist/commands/open.ts)_

## `greenframe update [CHANNEL]`

Update GreenFrame to the latest version

```
USAGE
  $ greenframe update [CHANNEL]

ARGUMENTS
  CHANNEL  [default: stable] Release channel

DESCRIPTION
  Update GreenFrame to the latest version
  ...
  greenframe update
```

_See code: [dist/commands/update.ts](https://github.com/marmelab/greenframe-cli/blob/v1.7.0/dist/commands/update.ts)_

## Development

The GreenFrame CLI is written in Node.js. Install dependencies with:

```sh
yarn
```

To run the CLI locally, you must compile the TypeScript files with:

```sh
$ yarn build
```

Then you can run the CLI:

```sh
$ ./bin/run analyze https://greenframe.io ./src/examples/visit.js
```

While developing, instead of running `yarn build` each time you make a change, you can watch for changes and automatically recompile with:

```sh
$ yarn watch
```

## License

GreenFrame is licensed under the [Elastic License v2.0](https://www.elastic.co/licensing/elastic-license).

This means you can use GreenFrame for free both in open-source projects and commercial projects. You can run GreenFrame in your CI, whether your project is open-source or commercial.

But you cannot build a competitor to [greenframe.io](https://greenframe.io), i.e. a paid service that runs the GreenFrame CLI on demand.
'",,"2022/09/05, 12:05:48",415,CUSTOM,115,220,"2023/10/13, 10:35:48",7,51,59,43,12,1,1.3,0.7650602409638554,"2023/07/25, 15:08:40",v1.6.8,0,13,false,,false,false,,,https://github.com/marmelab,http://marmelab.com,France,,,https://avatars.githubusercontent.com/u/3116319?v=4,,, ecometer,"Loads websites, computes metrics (from network activity, loaded payloads and the web page), and uses them to assess the website's ecodesign maturity based on a list of best practices.",ecoconceptionweb,https://gitlab.com/ecoconceptionweb/ecometer,gitlab,,Computation and Communication,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,, patch-node,The road to global carbon-neutrality will be through programmatic compensation.,patch-technology,https://github.com/patch-technology/patch-node.git,github,"javascript,carbon,carbon-offsetting,carbon-emissions,carbon-neutral,carbon-offsets",Computation and Communication,"2023/10/17, 13:49:32",69,5,2,true,JavaScript,Patch,patch-technology,"JavaScript,Makefile,Dockerfile",https://www.patch.io,"b""# Patch JavaScript SDK

![Test](https://github.com/patch-technology/patch-node/workflows/Test/badge.svg)
[![npm version](https://badge.fury.io/js/%40patch-technology%2Fpatch@2x.svg)](https://www.npmjs.com/package/@patch-technology/patch)
[![Discord](https://img.shields.io/discord/733029448558837792)](https://discord.gg/M23NnGR)

The official JavaScript package for the [Patch API](https://www.patch.io).

## Documentation

For a complete API reference, check out [Patch's API Reference.](https://docs.patch.io)

## Installation

### NPM

```shell
npm install @patch-technology/patch --save
```

### Yarn

```shell
yarn add @patch-technology/patch
```

### Requirements

- Node 10+

## Usage

### Configuration

After installing the package, you'll have to configure it with your API key, which is available from the API key page in the Patch dashboard:

```javascript
// ES6+
import Patch from '@patch-technology/patch';
const patch = Patch('key_test_1234');

// ES5
var patch = require('@patch-technology/patch').default('key_test_1234');
```

#### Peer dependencies

For environments that do not include the Node Standard Library, such as React Native, you will need to install the listed peer dependencies in order for the package to work as expected.
You can install the peer dependencies by running:

```
npm install install-peers
```

### Orders

In Patch, orders represent a purchase of carbon offsets or negative emissions by mass.
Place orders directly if you know the amount of carbon dioxide you would like to sequester.
If you do not know how much to purchase, use an estimate.
You can also create an order with a maximum desired price, and we'll allocate enough mass to fulfill the order for you.

[API Reference](https://docs.patch.io/#/?id=orders)

#### Examples

```javascript
// Create an order - you can create an order providing
// either amount (and unit) or total_price (and currency), but not both

// Create order with amount
const amount = 1_000_000; // Pass in the amount in the unit specified
const unit = 'g';
patch.orders.createOrder({ amount: amount, unit: unit });

// Create an order with total price
const totalPrice = 500; // Pass in the total price in the smallest currency unit (i.e. cents for USD)
const currency = 'USD';
patch.orders.createOrder({ total_price: totalPrice, currency: currency });

// Create order with the issued_to field (optional)
const issued_to = { email: 'issuee@companya.com', name: 'Olivia Jones' };
patch.orders.createOrder({ amount: amount, unit: unit, issued_to: issued_to });

// Retrieve an order
const orderId = 'ord_test_1234'; // Pass in the order's id
patch.orders.retrieveOrder(orderId);

// Place an order
patch.orders.placeOrder(orderId);

// Place an order with the issued_to field (optional)
patch.orders.placeOrder(orderId, { issued_to: issued_to });

// Cancel an order
patch.orders.cancelOrder(orderId);

// Retrieve a list of orders
const page = 1; // Pass in which page of orders you'd like
patch.orders.retrieveOrders({ page });
```

### Estimates

Estimates allow API users to get a quote for the cost of compensating a certain amount of CO2. When creating an estimate, an order in the `draft` state will also be created, reserving the allocation of a project for 5 minutes. If you don't place your draft order within those 5 minutes, the order will automatically be cancelled.

[API Reference](https://docs.patch.io/#/?id=estimates)
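This estimate-then-place flow means a complete purchase can be as little as two calls. A sketch, assuming the `patch` client configured above; the exact response shape (the draft order exposed as `data.order`) is an assumption, so verify it against the API reference:

```javascript
// Sketch: quote an amount of CO2, then place the draft order it reserved.
// Assumes the estimate response exposes the draft order as data.order;
// check the Patch API reference for the actual shape.
async function compensate(massG) {
  const estimate = await patch.estimates.createMassEstimate({ mass_g: massG });
  const draftOrder = estimate.data.order;
  return patch.orders.placeOrder(draftOrder.id); // must happen within 5 minutes
}
```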
#### Examples

```javascript
// Create a mass estimate
const mass_g = 1_000_000; // Pass in the mass in grams (i.e. 1 metric tonne)
patch.estimates.createMassEstimate({ mass_g });

// Create a flight estimate
const distance_m = 9_000_000; // Pass in the distance traveled in meters
patch.estimates.createFlightEstimate({ distance_m });

// Create an ecommerce estimate
// Pass in the shipping distance in meters, the transportation method, and the package mass
patch.estimates.createEcommerceEstimate({
  distance_m,
  package_mass_g: 1000,
  transportation_method: 'air'
});

// Create a bitcoin estimate
const transaction_value_btc_sats = 1000; // [Optional] Pass in the transaction value in satoshis
patch.estimates.createBitcoinEstimate({
  transaction_value_btc_sats
});

// Create a vehicle estimate
// Pass in the driving distance in meters and the make/model/year of the vehicle
patch.estimates.createVehicleEstimate({
  distance_m,
  make: 'Toyota',
  model: 'Corolla',
  year: 1995
});

// Create a hotel estimate
const country_code = 'US'; // ISO3166 alpha-2 country code
const city = 'New York'; // [Optional]
const region = 'New York'; // [Optional]
const star_rating = 4; // [Optional] Star rating of the hotel from 2 to 5
const number_of_nights = 2; // [Optional] Default value is 1
const number_of_rooms = 2; // [Optional] Default value is 1
patch.estimates.createHotelEstimate({
  country_code,
  city,
  region,
  star_rating,
  number_of_nights,
  number_of_rooms
});

// Retrieve an estimate
const estimateId = 'est_test_1234';
patch.estimates.retrieveEstimate(estimateId);

// Retrieve a list of estimates
const page = 1; // Pass in which page of estimates you'd like
patch.estimates.retrieveEstimates({ page });
```

### Projects

Projects are the ways Patch takes CO2 out of the air. They can represent reforestation, enhanced weathering, direct air carbon capture, etc. When you place an order via Patch, it is allocated to a project.

When fetching Projects, you can add filters to the query to narrow the result. Currently supported filters are:

- `country`
- `type`
- `minimumAvailableMass`

You can also set the `acceptLanguage` option to retrieve projects in a different language.

[API Reference](https://docs.patch.io/#/?id=projects)

#### Examples

```javascript
// Retrieve a project
const projectId = 'pro_test_1234'; // Pass in the project's ID
patch.projects.retrieveProject(projectId);

// Retrieve a list of projects
const page = 1; // Pass in which page of projects you'd like
patch.projects.retrieveProjects({ page });

// Retrieve a filtered list of projects
const country = 'CA'; // Pass in the country you'd like to get projects from
patch.projects.retrieveProjects({ country });

// Retrieve a filtered list of projects
const type = 'biomass'; // Pass in the project type you'd like to filter by
patch.projects.retrieveProjects({ type });

// Retrieve a filtered list of projects
const minimumAvailableMass = 100; // Pass in the minimum available inventory the projects should have
patch.projects.retrieveProjects({ minimumAvailableMass });

// Retrieve a project in another language
// See https://docs.patch.io/#/internationalization for more information and supported languages
patch.projects.retrieveProject(projectId, { acceptLanguage: 'fr' });
```

## Contributing

While we value open-source contributions to this SDK, the core of this library is generated programmatically.
Complex additions made directly to the library would have to be moved over to our generation code, otherwise they would be overwritten upon the next generated release. Feel free to open a PR as a proof of concept, but know that we will not be able to merge it as-is. We suggest opening an issue first to discuss with us!

On the other hand, contributions to the README, as well as new test cases, are always very welcome!

### Build and manually test

To build and test the package locally, run:

```sh
$ npm run build
```

This will generate a `dist` folder with the compiled code. Next, you want to link the package and use it in a different folder.

In the patch-node folder, run:

```sh
$ npm link
```

Navigate to a different, empty folder:

```sh
$ cd ..
$ mkdir test-patch-node
$ cd test-patch-node
```

In that repository, run the following command to use the locally built package:

```sh
$ npm link @patch-technology/patch
```

This will create a `node_modules` directory in your test repository which will symlink to your locally built package. To test out the package, open a node REPL, import the package, and run some queries.

```sh
SANDBOX_API_KEY=xxx node
```

```node
const Patch = require('@patch-technology/patch');
const patch = Patch.default(process.env.SANDBOX_API_KEY);
patch.projects.retrieveProjects().then((response) => console.log(response));
```

### Run the specs

Before running the tests, make sure you set the test API key! Please use test API keys and not production ones; they usually start with `key_test_`.
Be sure you navigate back to the root `patch-node` directory to run the tests.

```sh
$ export SANDBOX_API_KEY=
```

Then you are ready to run the tests:

```sh
$ npm run test
```
""",,"2020/07/26, 23:18:20",1185,CUSTOM,8,101,"2023/10/17, 13:49:34",0,92,92,13,8,0,0.8,0.7155963302752293,"2023/04/18, 21:44:21",2.1.1,0,15,false,,false,false,"varun2508/neutrl,edendao/members-app,CarbonBlocks/patch_purchase,myang1834/OffsetsProject,pcothenet/patch-subscription",,https://github.com/patch-technology,https://patch.io,San Francisco,,,https://avatars.githubusercontent.com/u/63677591?v=4,,, co2.js,"An npm module for accessing the green web API, and estimating the carbon emissions from using digital services.",thegreenwebfoundation,https://github.com/thegreenwebfoundation/co2.js.git,github,,Computation and Communication,"2023/10/09, 12:08:54",265,1,132,true,JavaScript,The Green Web Foundation,thegreenwebfoundation,"JavaScript,HTML,Shell",,"b'# CO2.js

[![All Contributors](https://img.shields.io/badge/all_contributors-13-orange.svg?style=flat-square)](#contributors-)

One day, the internet will be powered by renewable energy. Until that day comes, there'll be a CO2 cost that comes with every byte of data that's uploaded or downloaded.
By being able to calculate these emissions, developers can be empowered to create more efficient, lower carbon apps, websites, and software.

## [Documentation](https://developers.thegreenwebfoundation.org/co2js/overview/) | [Changelog](/CHANGELOG.md) | [Roadmap](https://github.com/orgs/thegreenwebfoundation/projects/3/views/1)

## What is CO2.js?

CO2.js is a JavaScript library that gives developers a way to estimate the emissions related to use of their apps, websites, and software.

## Why use it?

Being able to estimate the CO2 emissions associated with digital activities can be of benefit to both developers and users.

Internally, you may want to use this library to create a _carbon budget_ for your site or app. It is also useful for inclusion in dashboards and monitoring tools.

For user facing applications, CO2.js could be used to check & block the uploading of carbon intensive files. Or, to present users with information about the carbon impact of their online activities (such as browsing a website).

The above are just a few examples of the many and varied ways CO2.js can be applied to provide carbon estimates for data transfer. If you're using CO2.js in production we'd love to hear how! [Contact us](https://www.thegreenwebfoundation.org/support-form/) via our website.

## Installation

### Using NPM

You can install CO2.js as a dependency for your projects using NPM.

```bash
npm install @tgwf/co2
```

### Using Skypack

You can import the CO2.js library into projects using Skypack.

```js
import tgwf from "https://cdn.skypack.dev/@tgwf/co2";
```

## Using a JS CDN

You can get the latest version of CO2.js using one of the content delivery networks below.

### jsDelivr

You can find the package at [https://www.jsdelivr.com/package/npm/@tgwf/co2](https://www.jsdelivr.com/package/npm/@tgwf/co2).

- CommonJS compatible build `https://cdn.jsdelivr.net/npm/@tgwf/co2@latest/dist/cjs/index-node.min.js`
- ES Modules compatible build `https://cdn.jsdelivr.net/npm/@tgwf/co2@latest/dist/esm/index.js`
- IIFE compatible build `https://cdn.jsdelivr.net/npm/@tgwf/co2@latest/dist/iife/index.js`

### Unpkg

You can find the package at [https://unpkg.com/browse/@tgwf/co2@latest/](https://unpkg.com/browse/@tgwf/co2@latest/).

- CommonJS compatible build `https://unpkg.com/@tgwf/co2@latest/dist/cjs/index-node.min.js`
- ES Modules compatible build `https://unpkg.com/@tgwf/co2@latest/dist/esm/index.js`
- IIFE compatible build `https://unpkg.com/@tgwf/co2@latest/dist/iife/index.js`

### Build it yourself

You can also build the CO2.js library from the source code. To do this:

1. Go to the [CO2.js repository](https://github.com/thegreenwebfoundation/co2.js) on GitHub.
1. Clone or fork the repository.
1. Navigate to the folder on your machine and run `npm run build` in your terminal.
1. Once the build has finished running, you will find a `/dist` folder has been created. Inside you can find:

   - `dist/cjs` - A CommonJS compatible build.
   - `dist/esm` - An ES Modules compatible build.
   - `dist/iife` - An Immediately Invoked Function Expression (IIFE) version of the library.

## Marginal and average emissions intensity data

CO2.js includes yearly average grid intensity data from [Ember](https://ember-climate.org/data/data-explorer/), as well as marginal intensity data from the [UNFCCC](https://unfccc.int/) (United Nations Framework Convention on Climate Change). You can find the data in JSON and CommonJS Module format in the `data/output` folder.

### Using emissions intensity data

You can import annual, country-level marginal or average grid intensity data into your projects directly from CO2.js. For example, if we wanted to use the average grid intensity for Australia in our project, we could use the code below:

```js
import { averageIntensity } from "@tgwf/co2";
const { data } = averageIntensity;
const { AUS } = data;
console.log({ AUS });
```

All countries are represented by their respective [Alpha-3 ISO country code](https://www.iso.org/obp/ui/#search).
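To go from data transfer to an actual emissions figure, the package also exposes estimation models. A minimal sketch using the byte-based estimator (the byte count is illustrative; check the documentation linked above for the current API surface):

```js
import { co2 } from "@tgwf/co2";

// Estimate emissions in grams of CO2 for transferring one gigabyte.
const estimator = new co2(); // uses the package's default model
const gramsGreenHost = estimator.perByte(1e9, true); // host runs on green energy
const gramsGreyHost = estimator.perByte(1e9, false);

console.log({ gramsGreenHost, gramsGreyHost });
```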
You can find the data in JSON and CommonJS Module format in the `data/output` folder.\n\n### Using emissions intensity data\n\nYou can import annual, country-level marginal or average grid intensity data into your projects directly from CO2.js. For example, if we wanted to use the average grid intensity for Australia in our project, we could use the code below:\n\n```js\nimport { averageIntensity } from ""@tgwf/co2"";\nconst { data } = averageIntensity;\nconst { AUS } = data;\nconsole.log({ AUS });\n```\n\nAll countries are represented by their respective [Alpha-3 ISO country code](https://www.iso.org/obp/ui/#search).\n\n## Publishing to NPM\n\nWe use [`np`](https://www.npmjs.com/package/np) to publish new versions of this library to NPM. To do this:\n\n1. First login to NPM by running the `npm login` command in your terminal.\n2. Then run `npx np `.\n3. `np` will run several automated steps to publish the new package to NPM.\n4. If everything runs successfully, you can then add release notes to GitHub for the newly published package.\n\n## Release communication\n\nCO2.js releases will be communicated through the following channels:\n\n| Channel | Minor Release (0.xx) | Patch Release (0.xx.x) |\n| ----------------------------------------------------------------------------------------------- | -------------------- | ---------------------- |\n| [Github](https://github.com/thegreenwebfoundation/co2.js/releases) | \xe2\x9c\x85 | \xe2\x9c\x85 |\n| [Green Web Foundation website](https://www.thegreenwebfoundation.org/co2-js/#releases) | \xe2\x9c\x85 | \xe2\x9d\x8c |\n| W3C Slack Sustainability Channel | \xe2\x9c\x85 | \xe2\x9d\x8c |\n| ClimateAction.Tech Slack | \xe2\x9c\x85 | \xe2\x9d\x8c |\n| [Green Web Foundation LinkedIn Account](https://www.linkedin.com/company/green-web-foundation/) | \xe2\x9c\x85 | \xe2\x9d\x8c |\n\n## Licenses\n\nThe code for CO2.js is licensed Apache 2.0. ([What does this mean?]())\n\nThe average carbon intensity data from Ember is published under the Creative Commons ShareAlike Attribution Licence ([CC BY-SA 4.0](https://creativecommons.org/licenses/by-sa/4.0/)). ([What does this mean?](https://www.tldrlegal.com/license/creative-commons-attribution-share-alike-cc-sa))\n\nThe marginal intensity data is published by the Green Web Foundation, under the Creative Commons ShareAlike Attribution Licence ([CC BY-SA 4.0](https://creativecommons.org/licenses/by-sa/4.0/)). ([What does this mean?](https://www.tldrlegal.com/license/creative-commons-attribution-share-alike-cc-sa))\n\nSee LICENCE for more.\n\n## Contributors\n\n\n\n\n\n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n

- Chris Adams 💻
- fershad 💻 📖 🚧
- Peter Hedenskog 💻
- Dryden 💻
- Evan Hahn 💻 ⚠️
- Prathum Pandey 🐛 💻
- Frazer Smith 💻 ⚠️
- Hamish Fagg 💻
- Atul Varma 💻
- Piper 💻
- Raymundo Vásquez Ruiz 💻
- JamieB 💻
- p-gerard 🐛 💻
\n\n\n\n\n\n\n\n\n\n\n\n\n'",,"2020/02/23, 19:39:37",1340,CUSTOM,146,444,"2023/10/09, 05:22:53",18,112,156,54,16,3,0.2,0.5777777777777777,"2023/10/09, 12:08:28",v0.13.8,1,16,true,custom,false,false,MakakWasTaken/green-analytics,,https://github.com/thegreenwebfoundation,https://www.thegreenwebfoundation.org,The Internet,,,https://avatars.githubusercontent.com/u/8995024?v=4,,, nvidia-co2,Adds gCO2eq emissions to nvidia-smi.,kylemcdonald,https://github.com/kylemcdonald/nvidia-co2.git,github,,Computation and Communication,"2020/10/14, 05:23:45",109,0,7,false,Python,,,Python,,"b""# nvidia-co2\n\nShow gCO2eq emissions information with nvidia-smi, at the top right corner. For example: **79.2gCO2eq/h** or **23.76mm^2/h sea ice**.\n\nCopies code from [experiment-impact-tracker](https://github.com/Breakend/experiment-impact-tracker) for mapping geolocations to energy usage, which can be used to monitor and report on longer-running experiments.\n\nThis script doesn't take into account:\n\n- Carbon intensity changes with time of day.\n- Datacenters often have unique energy sources. `experiment-impact-tracker` tracks this information, and it can be accessed with their `scripts/lookup-cloud-region-info`. I would be happy to add this info if the script can automatically detect the provider and region, possibly from the IP address.\n- The state of California has more detailed information available via [California ISO](http://www.caiso.com/Pages/default.aspx) and this script does not use that data.\n- CPU usage is only monitored if it is tracked at `/sys/class/powercap/intel-rapl`. Doing this in a hardware-independent way requires a lot more code, with some first steps in `experiment-impact-tracker`.\n\nWhen running the first time at an IP address, the script will geolocate your IP address and estimate the local carbon intensity. This information will be cached between runs in `/tmp/nvidia-co2-cache.(dir|bak|dat)`. The first run might take 1 second, additional runs should take 200ms.\n\nThis script won't work by default on Google Cloud because I'm using `dig` to quickly get a public IP address. Permissions are also set up in a way where you would need to install it to `--user` and call `python -m nvidia-co2` or similar. But with a little work it could be done :)\n\n## Install\n\n`pip install git+https://github.com/kylemcdonald/nvidia-co2.git`\n\n## Example\n\n```\n$ nvidia-co2 -m ice\nSun Feb 16 14:44:50 2020 23.76mm^2/h sea ice\n+-----------------------------------------------------------------------------+\n| NVIDIA-CO2 435.21 Driver Version: 435.21 CUDA Version: 10.1 |\n|-------------------------------+----------------------+----------------------+\n| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |\n| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |\n|===============================+======================+======================|\n| 0 GeForce RTX 208... Off | 00000000:05:00.0 On | N/A |\n| 45% 59C P2 206W / 260W | 10975MiB / 11016MiB | 89% Default |\n+-------------------------------+----------------------+----------------------+\n| 1 GeForce RTX 208... 
Off | 00000000:09:00.0 Off | N/A |\n| 26% 34C P8 19W / 260W | 166MiB / 11019MiB | 0% Default |\n+-------------------------------+----------------------+----------------------+\n\n+-----------------------------------------------------------------------------+\n| Processes: GPU Memory |\n| GPU PID Type Process name Usage |\n|=============================================================================|\n| 0 1149 G /usr/lib/xorg/Xorg 85MiB |\n| 0 1359 G /usr/bin/gnome-shell 91MiB |\n| 0 21752 C ...e/kyle/anaconda3/envs/tf2gpu/bin/python 10787MiB |\n| 1 21752 C ...e/kyle/anaconda3/envs/tf2gpu/bin/python 155MiB |\n+-----------------------------------------------------------------------------+\n```\n\n## Command-line options\n\n```\n$ nvidia-co2 --help\nusage: nvidia-co2 [-h] [--mode MODE]\n\nShow gCO2eq emissions information with nvidia-smi. Combines CPU and GPU usage.\nEmissions are corrected for location using IP address geolocation.\n\noptional arguments:\n -h, --help show this help message and exit\n --mode MODE, -m MODE [ice|beef|tofu|car-mph|car-kph|bulb|cfl|watt|gco2eqph]\n `ice` shows how much sea ice is lost per hour due to\n your emissions. `beef` and `tofu` shows how many grams\n of each it takes to produce the same emissions. `car-\n mph` and `car-kph` show how fast a car would have to\n drive to produce the same emissions. `bulb` and `cfl`\n show how many incandescent lightbulbs or CFLs are\n required to use the same power. `watt` shows how many\n watts used, and `gco2eqph` shows gCOeq/hour used.\n (default: gco2eqph)\n```""",,"2020/02/16, 20:16:28",1347,MIT,0,17,"2020/10/14, 05:23:47",0,4,5,0,1106,0,0.0,0.3846153846153846,,,0,4,false,,false,false,,,,,,,,,,, The Low Impact Website,Reduces data transfer by up to 70% in comparison to our regular website.,Organic-Basics,https://github.com/Organic-Basics/ob-low-impact-website.git,github,"shopify,sustainability,co2,nuxtjs,climate",Computation and Communication,"2021/12/16, 14:12:10",223,0,9,true,Vue,Organic Basics ApS,Organic-Basics,"Vue,JavaScript,TypeScript,SCSS,Shell,Makefile",https://lowimpact.organicbasics.com,"b""[![Build Status](https://img.shields.io/static/v1.svg?label=CSL&message=software%20against%20climate%20change&color=green?style=flat&logo=github)](https://img.shields.io/static/v1.svg?label=CSL&message=software%20against%20climate%20change&color=green?style=flat&logo=github)\n\n# The Low Impact Website by Organic Basics\n\nThe internet is dirty. Data transfer requires electricity, which creates carbon emissions \xe2\x80\x94 and this leads to climate change. The Low Impact website reduces data transfer by up to 70% in comparison to our regular website.\n\n## Low Impact Manifesto\n\nTo build this website, we wrote down 10 rules for how to make a low impact website. We share these openly, so you can use it in your own project.\n\n### A low impact website:\n 1. Does not load any images before they are actively requested by the user.\n 2. Minimizes the power consumption on the users device.\n 3. Adapts to reflect the amount of renewable energy it\xe2\x80\x99s currently running on.\n 4. Informs the user of the impact of their browsing behavior.\n 5. Does not make use of videos.\n 6. Stores data locally on the user\xe2\x80\x99s device to minimize data transfer.\n 7. Compresses all data to the greatest extent possible.\n 8. Loads only the most crucial programming scripts, frameworks and cookies.\n 9. Limits the amount of light emitted by the screen.\n 10. 
Optimizes and limits the use of custom fonts.\n\n## Setup\n\nThe Low Impact Website uses Shopify's [Storefront API](https://shopify.dev/docs/storefront-api) to fetch products and basic ecommerce functionality.\nIt uses the [Nuxt.js framework](https://github.com/nuxt/nuxt.js).\n\n### Installation\n\nTo get started, you first need to install [Yarn](https://yarnpkg.com/), and then run `yarn install`.\n\n### Environment variables\n\n#### Shopify\n\nTo setup your environment variables, you should copy the `.env.example` file and rename it `.env`.\nThen you must fill it out with your own Shopify setup. \n\nOrganic Basics uses 4 different production Shopify shops and 1 for development, so we have defined 5 Apollo clients in `nuxt.config.js`: _eur_, _gbp_, _usd_, _dkk_ and _dev_.\n\nThese 5 different configs are used to navigate between different currencies, and thus Shopify shops.\n\nThis happens via the `_locale` part of the Nuxt routing, e.g. [https://lowimpact.organicbasics.com/eur](https://lowimpact.organicbasics.com/eur). \n\nYou might not need 5 different shops, and so you must adapt the `nuxt.config.js` and `.env` files.\n\n#### Contentful\n\nWe also use Contentful to display some content on the product pages. To fetch this, you must fill in `.contentful.json` with your own values.\n\n## Usage\n\nTo start the development server at `localhost:3000`, run `yarn dev`.\n\nTo build for production, run `yarn build`.\n\nTo launch the server and the production build, run `yarn start`\n\nFor a more detailed explanation on how things work, check out [Nuxt.js docs](https://nuxtjs.org).\n\n## Disclaimer\n\nThe Low Impact Website code is highly customized to the Organic Basics' website setup, and will more than likely not easily transfer to your setup.\n\nThus the main outcomes of making this code available to the public will likely be in sharing learnings and techniques, instead of providing a plug-and-play solution for low impact websites. But if you want to make such a thing, let us know! \n\n## Support\n\nOrganic Basics does not currently have the resources to provide any support to help you setup this project.\n\nIf you find any bugs, issues or similar, please create an issue on this Github repository.\n\n## License\n\nThe Low Impact Website code is released under the [Climate Strike License](https://github.com/climate-strike/license). """,,"2020/05/04, 15:30:49",1269,CUSTOM,0,302,"2023/01/26, 23:41:41",17,17,20,1,271,13,0.3,0.2247191011235955,,,0,2,false,,false,false,,,https://github.com/Organic-Basics,https://www.organicbasics.com,"Copenhagen, Denmark",,,https://avatars.githubusercontent.com/u/54848514?v=4,,, cryptoart-footprint,Estimate the total emissions for popular CryptoArt platforms.,kylemcdonald,https://github.com/kylemcdonald/ethereum-nft-activity.git,github,"ethereum,cryptoart,climate",Computation and Communication,"2022/07/20, 22:36:38",181,0,2,false,Jupyter Notebook,,,"Jupyter Notebook,Python,Shell",,"b'# ethereum-nft-activity\n\nHow much energy does it take to power popular Ethereum-backed CryptoArt platforms? And what emissions are associated with this energy use?\n\nThese questions do not have clear answers for two reasons:\n\n1. The overall energy usage and emissions of Ethereum are hard to estimate. I am working on this in a separate repo: [kylemcdonald/ethereum-energy](https://github.com/kylemcdonald/ethereum-energy)\n2. The portion for which a specific user, platform, or transaction might be considered ""responsible"" is more of a philosophical question than a technical one. 
Like many complex systems, there is an indirect relationship between the service and the emissions. I am working on different approaches in this notebook: [Per-Transaction Models](https://github.com/kylemcdonald/cryptoart-footprint/blob/main/Per-Transaction%20Models.ipynb)\n\nThis table represents one method for computing emissions, as of March 5, 2022. The methodology is described below.\n\n| Name | Fees | Transactions | kgCO2 |\n|---------------|---------|--------------|-------------|\n| Art Blocks | 12,006 | 244,594 | 21,531,626 |\n| Async | 224 | 27,403 | 332,657 |\n| Foundation | 8,602 | 661,074 | 14,568,164 |\n| KnownOrigin | 507 | 64,326 | 904,455 |\n| Makersplace | 1,840 | 144,163 | 3,010,383 |\n| Nifty Gateway | 1,621 | 151,950 | 2,385,675 |\n| OpenSea | 314,515 | 20,012,086 | 551,268,013 |\n| Rarible | 20,930 | 1,802,971 | 27,706,539 |\n| SuperRare | 2,215 | 320,697 | 3,172,169 |\n| Zora | 532 | 21,660 | 721,254 |\n\nUpdates:\n\n* March 5, 2022: Missing contracts were added to Art Blocks and OpenSea, massively incresaing their totals. Duplicate contracts were removed remove Nifty Gateway, halving the totals. The contracts were duplicated because they were found both when scraping Nifty Gateway, and also when pulling labeled contracts from Etherscan.\n\n\n## Preparation\n\nFirst, sign up for an API key at [Etherscan](https://etherscan.io/myapikey). Create `env.json` and add the API key. It should look like:\n\n```json\n{\n ""etherscan-api-key"": """"\n}\n```\n\nInstall dependencies:\n\n```sh\npip install -r requirements.txt\n```\n\nNote: this project requires Python 3.\n\n### `contracts_footprint.py`\n\nThis will pull all the transactions from Etherscan, sum the gas and transaction counts, and do a basic emissions estimate. Results are saved in the `/output` directory as JSON or TSV. Run the script with, for example: `python contracts_footprint.py --verbose --tsv data/contracts.json data/nifty-gateway-contracts.json`. \n\nThis may take longer the first time, while your local cache is updated. When updating after a week, it can take 5 minutes or more to download all new transactions. The entire cache can be multiple gigabytes.\n\nThis script has a few unique additional flags:\n\n* `--summary` to summarize the results in a format similar to the above table, combining multiple contracts into a single row of output.\n* `--startdate` and `--enddate` can be used to only analyze a specific date range, using the format `YYYY-MM-DD`.\n* `--tsv` will save the results of analysis as a TSV file instead of JSON.\n\n### `contracts_history.py`\n\nThis will pull all the transactions from Etherscan, sum the transaction fees and gas used, and group by day and platform. Results are saved in the `/output` directory as CSV files. Run the script with, for example: `python contracts_history.py --verbose data/contracts.json data/nifty-gateway-contracts.json`\n\nThe most recent results are [cached in the gh_pages branch](https://github.com/kylemcdonald/ethereum-nft-activity/tree/gh-pages/output).\n\n### Additional flags\n\nBoth scripts have these shared additional flags:\n\n* `--noupdate` runs from cached results. This will not make any requests to Nifty Gateway or Etherscan. 
When using the `Etherscan` class in code without an API key, this is the default behavior.\n* `--verbose` prints progress when scraping Nifty Gateway or pulling transactions from Etherscan.\n\n### Helper scripts\n\n* `python ethereum_stats.py` will pull stats from Etherscan like daily fees and block rewards and save them to `data/ethereum-stats.json`\n* `python nifty_gateway.py` will scrape all the contracts from Nifty Gateway and save them to `data/nifty-gateway-contracts.json`\n\n## Methodology\n\nThe footprint of a platform is the sum of the footprints for all artwork on the platform. Most platforms use a few Ethereum contracts and addresses to handle all artworks. For each contract, we download all the transactions associated with that address from Etherscan. Then for each day, we take the sum of all fees paid on all those transactions divided by the total fees paid across the whole network for that day. This ratio is multiplied by the daily [Ethereum emissions estimate](https://github.com/kylemcdonald/ethereum-emissions) to get the total emissions for that address. Finally, the total emissions for a platform are equal to the emissions for all addresses across all days.\n\n## Sources\n\nContracts are sourced from a combination of personal research, [DappRadar](https://dappradar.com/), and [Etherscan](https://etherscan.io/) tags.\n\nWhen possible, we have confirmed contract coverage directly with the marketplaces. Confirmed contracts include:\n\n* SuperRare: all confirmed\n* Foundation: all confirmed\n* OpenSea: some contracts on DappRadar have not been confirmed\n* Nifty Gateway: all confirmed\n\n## How to add more platforms\n\nTo modify this code so that it works with more platforms, add every possible contract and wallet for each platform to the `data/contracts.json` file, using the format:\n\n```js\n\'/\': \'<0xAddress>\'\n```\n\nThen [submit a pull request](https://docs.github.com/en/github/collaborating-with-issues-and-pull-requests/creating-a-pull-request) back to this repository. Thanks in advance!\n\n## Contracts and Addresses\n\nContracts and addresses used by each platform can be found in `data/contracts.json` and are also listed here using `python print_contracts.py` to generate Markdown. 
Nifty Gateway contracts are listed separately in `data/nifty-gateway-contracts.json`.\n\n### Art Blocks\n\n* [Deployer](https://etherscan.io/address/0x96dc73c8b5969608c77375f085949744b5177660) 2020-11-26 to 2021-08-13\n* [GenArt721](https://etherscan.io/address/0x059EDD72Cd353dF5106D2B9cC5ab83a52287aC3a) 2020-11-26 to 2022-01-09\n* [GenArt721Core](https://etherscan.io/address/0xa7d8d9ef8d8ce8992df33d8b8cf4aebabd5bd270) 2020-12-12 to 2022-01-09\n\n### Async\n\n* [ASYNC](https://etherscan.io/address/0x6c424c25e9f1fff9642cb5b7750b0db7312c29ad) 2020-02-25 to 2021-12-21\n* [ASYNC-V2](https://etherscan.io/address/0xb6dae651468e9593e4581705a09c10a76ac1e0c8) 2020-07-21 to 2022-01-09\n\n### Foundation\n\n* [ERC-721](https://etherscan.io/address/0xcda72070e455bb31c7690a170224ce43623d0b6f) 2021-01-13 to 2022-01-09\n* [FND NFT (FNDNFT) ERC-20](https://etherscan.io/address/0x3b3ee1931dc30c1957379fac9aba94d1c48a5405) 2021-01-13 to 2022-01-09\n\n### KnownOrigin\n\n* [ArtistAcceptingBids](https://etherscan.io/address/0x921ade9018eec4a01e41e80a7eeba982b61724ec) 2018-10-23 to 2020-11-13\n* [ArtistAcceptingBidsV2](https://etherscan.io/address/0x848b0ea643e5a352d78e2c0c12a2dd8c96fec639) 2019-02-26 to 2022-01-07\n* [ArtistEditionControls](https://etherscan.io/address/0x06c741e6df49d7fda1f27f75fffd238d87619ba1) 2018-11-07 to 2020-07-07\n* [ArtistEditionControlsV2](https://etherscan.io/address/0x5327cf8b4127e81013d706330043e8bf5673f50d) 2019-02-26 to 2022-01-09\n* [KnownOrigin Token](https://etherscan.io/address/0xfbeef911dc5821886e1dda71586d90ed28174b7d) 2018-09-04 to 2022-01-09\n* [KnownOriginDigitalAsset](https://etherscan.io/address/0xdde2d979e8d39bb8416eafcfc1758f3cab2c9c72) 2018-04-04 to 2021-12-21\n* [SelfServiceAccessControls](https://etherscan.io/address/0xec133df5d806a9069aee513b8be01eeee2f03ff0) 2019-04-25 to 2021-08-23\n* [SelfServiceEditionCuration](https://etherscan.io/address/0x8ab96f7b6c60df169296bc0a5a794cae90493bd9) 2019-04-08 to 2020-07-07\n* [SelfServiceEditionCurationV2](https://etherscan.io/address/0xff043a999a697fb1efdb0c18fd500eb7eab4e846) 2019-04-25 to 2020-07-07\n* [SelfServiceEditionCurationV3](https://etherscan.io/address/0x50782a63b7735483be07ef1c72d6d75e94b4a8f6) 2019-04-30 to 2020-07-07\n\n### Makersplace\n\n* [DigitalMediaCore](https://etherscan.io/address/0x2a46f2ffd99e19a89476e2f62270e0a35bbf0756) 2019-03-11 to 2022-01-09\n* [Unknown 1](https://etherscan.io/address/0x3981a1218a95becb4258305584bf2f24ff8dedf2) 2019-02-26 to 2020-07-29\n* [Unknown 2](https://etherscan.io/address/0x0ba51d9c015a7544e3560081ceb16ffe222dd64f) 2020-02-24 to 2022-01-08\n\n### Nifty Gateway\n\n* Many individual contracts defined in `data/nifty-gateway-contracts.json`\n* [Builder Shop](https://etherscan.io/address/0x431bd1297a1c7664d599364a427a2d926a1f58ae) 2020-03-21 to 2021-07-27\n* [First and Last](https://etherscan.io/address/0x11ab0243c57c6c1b39f2908aaebaed7ccf351491) 2021-03-13 to 2022-01-09\n* [GUSD cashout](https://etherscan.io/address/0x3e6722f32cbe5b3c7bd3dca7017c7ffe1b9e5a2a) 2020-01-31 to 2022-01-09\n* [Getaway Auctions](https://etherscan.io/address/0xf72136fb50f0c90324c9619aaab4289eeb277f3e) 2021-03-15 to 2021-12-06\n* [Hard To Explain](https://etherscan.io/address/0xef2883efa7bf4ecf169f3ad6c012994078608985) 2021-03-14 to 2022-01-07\n* [Insides](https://etherscan.io/address/0x934fdb5084d448de4c61c960c5f806689ae720b1) 2021-03-14 to 2022-01-04\n* [Omnibus](https://etherscan.io/address/0xe052113bd7d7700d623414a0a4585bcae754e9d5) 2020-01-31 to 2022-01-09\n\n### OpenSea\n\n* [OpenSea Shared 
Storefront (OPENSTORE)](https://etherscan.io/address/0x495f947276749ce646f68ac8c248420045cb7b5e) 2020-12-02 to 2022-01-09\n* [OpenSea Token (OPT)](https://etherscan.io/address/0x1129eb10812935593bf44fe0a9b62a59a9202f6d) 2021-02-05 to 2021-04-29\n* [OpenSeaENSResolver](https://etherscan.io/address/0x9c4e9cce4780062942a7fe34fa2fa7316c872956) 2019-06-27 to 2020-07-03\n* [SaleClockAuction](https://etherscan.io/address/0x1f52b87c3503e537853e160adbf7e330ea0be7c4) 2018-01-08 to 2021-10-18\n* [Unknown 1](https://etherscan.io/address/0x23b45c658737b12f1748ce56e9b6784b5e9f3ff8) 2018-02-15 to 2020-06-18\n* [Unknown 2](https://etherscan.io/address/0x78997e9e939daffe7eb9ed114fbf7128d0cfcd39) 2018-04-03 to 2021-08-16\n* [Wallet](https://etherscan.io/address/0x5b3256965e7c3cf26e11fcaf296dfc8807c01073) 2018-01-02 to 2022-01-09\n* [Wyvern Exchange](https://etherscan.io/address/0x7be8076f4ea4a4ad08075c2508e481d6c946d12b) 2018-06-12 to 2022-01-09\n* [WyvernProxyRegistry](https://etherscan.io/address/0xa5409ec958c83c3f309868babaca7c86dcb077c1) 2018-06-12 to 2022-01-09\n\n### Rarible\n\n* [Asset Contract ERC1155](https://etherscan.io/address/0xb66a603f4cfe17e3d27b87a8bfcad319856518b8) 2021-06-11 to 2022-01-09\n* [Asset Contract ERC721](https://etherscan.io/address/0xf6793da657495ffeff9ee6350824910abc21356c) 2021-06-11 to 2022-01-09\n* [Deployer](https://etherscan.io/address/0x3482549fca7511267c9ef7089507c0f16ea1dcc1) 2018-10-08 to 2021-12-16\n* [ERC1155 Factory](https://etherscan.io/address/0x81243681078bee8e251d02ee6872b1eaa6dd982a) 2021-08-05 to 2021-12-09\n* [ERC1155 Sale 1](https://etherscan.io/address/0x8c530a698b6e83d562db09079bc458d4dad4e6c5) 2020-05-27 to 2020-10-11\n* [ERC1155 Sale 2](https://etherscan.io/address/0x93f2a75d771628856f37f256da95e99ea28aafbe) 2020-09-03 to 2021-06-05\n* [ERC721 Factory](https://etherscan.io/address/0x6d9dd3547baf4c190ab89e0103c363feaf325eca) 2021-08-05 to 2021-10-22\n* [ERC721 Sale](https://etherscan.io/address/0x131aebbfe55bca0c9eaad4ea24d386c5c082dd58) 2020-09-03 to 2021-01-22\n* [Exchange 1](https://etherscan.io/address/0xcd4ec7b66fbc029c116ba9ffb3e59351c20b5b06) 2020-11-17 to 2022-01-05\n* [Exchange Proxy](https://etherscan.io/address/0x9757f2d2b135150bbeb65308d4a91804107cd8d6) 2021-06-11 to 2022-01-09\n* [External Royalties](https://etherscan.io/address/0xea90cfad1b8e030b8fd3e63d22074e0aeb8e0dcd) 2021-06-18 to 2022-01-09\n* [MintableToken 1](https://etherscan.io/address/0xf79ab01289f85b970bf33f0543e41409ed2e1c1f) 2019-10-18 to 2021-10-17\n* [MintableToken 2](https://etherscan.io/address/0x6a5ff3ceecae9ceb96e6ac6c76b82af8b39f0eb3) 2019-12-23 to 2022-01-06\n* [MintableToken 3](https://etherscan.io/address/0x60f80121c31a0d46b5279700f9df786054aa5ee5) 2020-05-27 to 2022-01-09\n* [RARI Token 1](https://etherscan.io/address/0xfca59cd816ab1ead66534d82bc21e7515ce441cf) 2020-07-14 to 2022-01-09\n* [RARI Token 2](https://etherscan.io/address/0x60f80121c31a0d46b5279700f9df786054aa5ee5) 2020-05-27 to 2022-01-09\n* [RaribleToken](https://etherscan.io/address/0xd07dc4262bcdbf85190c01c996b4c06a461d2430) 2020-05-27 to 2022-01-09\n* [TokenSale](https://etherscan.io/address/0xf2ee97405593bc7b6275682b0331169a48fedec7) 2019-10-24 to 2020-08-31\n* [Treasury](https://etherscan.io/address/0x1cf0df2a5a20cd61d68d4489eebbf85b8d39e18a) 2021-01-07 to 2021-12-09\n* [Unknown 1](https://etherscan.io/address/0xa5af48b105ddf2fa73cbaac61d420ea31b3c2a07) 2020-05-27 to 2020-09-12\n\n### SuperRare\n\n* [Bids](https://etherscan.io/address/0x2947f98c42597966a0ec25e92843c09ac17fbaa7) 2019-09-04 to 
2021-12-31\n* [SupeRare (SUPR)](https://etherscan.io/address/0x41a322b28d0ff354040e2cbc676f0320d8c8850d) 2018-04-01 to 2022-01-09\n* [SuperRareV2 (SUPR)](https://etherscan.io/address/0xb932a70a57673d89f4acffbe830e8ed7f75fb9e0) 2019-09-04 to 2022-01-09\n* [Unknown 1](https://etherscan.io/address/0x65b49f7aee40347f5a90b714be4ef086f3fe5e2c) 2020-12-05 to 2022-01-09\n* [Unknown 2](https://etherscan.io/address/0x8c9f364bf7a56ed058fc63ef81c6cf09c833e656) 2020-12-05 to 2022-01-09\n\n### Zora\n\n* [Market](https://etherscan.io/address/0xe5bfab544eca83849c53464f85b7164375bdaac1) 2020-12-31 to 2020-12-31\n* [Media (ZORA)](https://etherscan.io/address/0xabefbc9fd2f806065b4f3c237d4b59d9a97bcac7) 2020-12-31 to 2022-01-09'",,"2021/03/09, 01:50:23",960,MIT,0,111,"2022/03/15, 08:16:22",2,6,15,0,589,0,0.3333333333333333,0.196078431372549,,,0,3,false,,false,false,,,,,,,,,,, Cloud Carbon Footprint,A tool to estimate energy use (kilowatt-hours) and carbon emissions (metric tons CO2e) from public cloud usage.,cloud-carbon-footprint,https://github.com/cloud-carbon-footprint/cloud-carbon-footprint.git,github,"thoughtworks,climate,sustainability,cloud,carbon-footprint,carbon-emissions,hacktoberfest",Computation and Communication,"2023/10/17, 15:32:45",753,0,213,true,TypeScript,Cloud Carbon Footprint,cloud-carbon-footprint,"TypeScript,JavaScript,Shell,Handlebars,CSS,HCL,HTML,Smarty,Dockerfile",https://cloudcarbonfootprint.org,"b""# Cloud Carbon Footprint\n\n[![License](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](https://opensource.org/licenses/Apache-2.0)\n![CI](https://github.com/cloud-carbon-footprint/cloud-carbon-footprint/actions/workflows/ci.yml/badge.svg)\n[![codecov](https://codecov.io/gh/cloud-carbon-footprint/cloud-carbon-footprint/branch/trunk/graph/badge.svg)](https://codecov.io/gh/cloud-carbon-footprint/cloud-carbon-footprint)\n\n[Cloud Carbon Footprint](https://www.cloudcarbonfootprint.org) is an application that estimates the energy (kilowatt hours) and carbon emissions (metric tons CO2e) of public cloud provider utilization.\n\nIf you would like to learn more about the various calculations and constants that we use for the emissions estimates, check out the [Methodology page](https://www.cloudcarbonfootprint.org/docs/methodology).\n\n## Getting Started\n\nThe core logic is exposed through 2 applications: a CLI and a website. The CLI resides in `packages/cli/`, and the website is split between `packages/api/` and `packages/client/`\n\nFor instructions on how to get up and running, please visit the [Getting Started page](https://www.cloudcarbonfootprint.org/docs/getting-started).\n\n## Project\n\nPlease visit the [project board](https://github.com/orgs/cloud-carbon-footprint/projects/4/views/1) to get a glimpse of the roadmap or to submit an [issue or bug](https://github.com/cloud-carbon-footprint/cloud-carbon-footprint/issues).\n\n## Join Us!\n\nTo begin as a contributor, please see the [contributing page](CONTRIBUTING.md).\nPlease read through our [code of conduct](CODE_OF_CONDUCT.md) for our expectations around this community.\n\n\xe2\xad\x90\xef\xb8\x8fGive us a star if you like the project or find this work interesting!\n\n### Don\xe2\x80\x99t be shy\n\nIf you're using or planning to use CCF in any capacity, please add to our list of **[adopters](https://github.com/cloud-carbon-footprint/cloud-carbon-footprint/blob/trunk/ADOPTERS.md)**. Having an understanding of the use of CCF (what it's being used for, what setup is used, cloud providers used, size of org and data, etc.) 
will help us better evolve the tool to meet the needs of this community.\n\nAdditionally, please fill out our **[feedback form](https://forms.gle/Sp58KuwvGiYNS7ko6)** to provide us more insight about your use of CCF and how we can best support your needs.\n\nReach out with any questions, support requests, and further discussions in our **[discussion google group](https://groups.google.com/g/cloud-carbon-footprint)**.\n\n## License\n\nLicensed under the Apache License, Version 2.0: \n\n\xc2\xa9 2021 Thoughtworks, Inc.\n""",,"2020/11/17, 20:53:48",1072,Apache-2.0,527,3374,"2023/10/17, 15:21:17",157,633,1063,270,8,23,0.3,0.8342960288808664,"2023/10/17, 15:56:46",latest,1,90,false,,true,true,,,https://github.com/cloud-carbon-footprint,https://cloudcarbonfootprint.org,,,,https://avatars.githubusercontent.com/u/79275027?v=4,,, pyJoules,A software toolkit to measure the energy footprint of a host machine along the execution of a piece of Python code.,powerapi-ng,https://github.com/powerapi-ng/pyJoules.git,github,"rapl,intel-rapl,energy-consumption,energy,python,power",Computation and Communication,"2022/10/07, 09:45:52",48,0,20,true,Python,PowerAPI,powerapi-ng,Python,,"b'# PyJoules\n\n[![License: MIT](https://img.shields.io/pypi/l/pyRAPL)](https://spdx.org/licenses/MIT.html)\n[![Build Status](https://img.shields.io/circleci/build/github/powerapi-ng/pyJoules.svg)](https://circleci.com/gh/powerapi-ng/pyjoules)\n[![Doc Status](https://readthedocs.org/projects/pyjoules/badge/?version=latest)](https://pyjoules.readthedocs.io/en/latest/)\n\n# About\n**pyJoules** is a software toolkit to measure the energy footprint of a host machine along the execution of a piece of Python code.\nIt monitors the energy consumed by specific device of the host machine such as :\n\n- intel CPU socket package\n- RAM (for intel server architectures)\n- intel integrated GPU (for client architectures)\n- nvidia GPU\n\n## Limitation\n\n### CPU, RAM and integrated GPU\n**pyJoules** uses the Intel ""_Running Average Power Limit_"" (RAPL) technology that estimates power consumption of the CPU, ram and integrated GPU.\nThis technology is available on Intel CPU since the [Sandy Bridge generation](https://fr.wikipedia.org/wiki/Intel#Historique_des_microprocesseurs_produits)(2010).\n\n### Nvidia GPU\n**pyJoules** uses the nvidia ""_Nvidia Management Library_"" technology to measure energy consumption of nvidia devices. The energy measurement API is only available on nvidia GPU with [Volta architecture](https://en.wikipedia.org/wiki/Volta_(microarchitecture))(2018)\n\n### Windows and MacOS\nOnly GNU/Linux support is available for the moment. We are working on Mac support\n\n## Known issues\nRAPL energy counters overflow after several minutes or hours, potentially causing false-negative energy readings.\n\npyJoules takes this into account and adds the counter\'s maximum possible value, `max_energy_range_uj`, to negative energy measurements. However, if a counter overflows twice during a single energy measurement, the reported energy will be `max_energy_range_uj` less than the expected value.\n\n\n# Installation\n\n### Measurement frequency\nPyJoule use hardware measurement tools (intel RAPL, nvidia GPU tools, ...) to measure device energy consumption. Theses tools have a mesasurement frequency that depend of the device. Thus, you can\'t use Pyjoule to measure energy consumption during a period shorter than the device energy measurement frequency. 
Pyjoule will return null values if the measurement period is to short.\n\n## Requirements\n\n- python >= 3.7\n- [nvml](https://developer.nvidia.com/nvidia-management-library-nvml) (if you want nvidia GPU support)\n\n## Installation\nYou can install **pyJoules** with pip: `pip install pyJoules`\n\nif you want to use pyJoule to also measure nvidia GPU energy consumption, you have to install it with nvidia driver support using this command : `pip install pyJoules[nvidia]`.\n\n# Basic usage\n\nThis Readme describe basic usage of pyJoules. For more in depth description, read the documentation [here](https://pyjoules.readthedocs.io/en/latest/)\n\nHere are some basic usages of **pyJoules**. Please note that the reported energy consumption is not only the energy consumption of the code you are running. This includes the _global energy consumption_ of all the process running on the machine during this period, thus including the operating system and other applications.\nThat is why we recommend to eliminate any extra programs that may alter the energy consumption of the machine hosting experiments and to keep _only_ the code under measurement (_i.e._, no extra applications, such as graphical interface, background running task...). This will give the closest measure to the real energy consumption of the measured code.\n\n## Decorate a function to measure its energy consumption\n\nTo measure the energy consumed by the machine during the execution of the function `foo()` run the following code:\n```python\nfrom pyJoules.energy_meter import measure_energy\n\n@measure_energy\ndef foo():\n\t# Instructions to be evaluated.\n\nfoo()\n```\n\nThis will print on the console the recorded energy consumption of all the monitorable devices during the execution of function `foo`.\n\n### Output description\ndecorator basic usage will print iformation with this format : \n\n`begin timestamp : XXX; tag : YYY; duration : ZZZ;device_name: AAAA`\n\nwith : \n- `begin timestamp` : monitored function launching time\n- `tag`: tag of the measure, if nothing is specified, this will be the function name\n- `duration`: function execution duration\n- `device_name`: power consumption of the device `device_name` in uJ\n\nfor cpu and ram devices, device_name match the RAPL domain described on the image below plus the CPU socket id. Rapl domain are described [here](https://github.com/powerapi-ng/pyJoules/blob/master/README.md#rapl-domain-description)\n\n## Configure the decorator specifying the device to monitor\n\nYou can easily configure which device to monitor using the parameters of the `measureit` decorator. 
\nFor example, the following example only monitors the CPU power consumption on the CPU socket `1` and the Nvidia GPU `0`.\nBy default, **pyJoules** monitors all the available devices of the CPU sockets.\n```python\nfrom pyJoules.energy_meter import measure_energy\nfrom pyJoules.device.rapl_device import RaplPackageDomain\nfrom pyJoules.device.nvidia_device import NvidiaGPUDomain\n\t\n@measure_energy(domains=[RaplPackageDomain(1), NvidiaGPUDomain(0)])\ndef foo():\n\t# Instructions to be evaluated.\n\t\nfoo()\t\n```\n\nYou can append the following domain list to monitor them : \n\t\n- `pyJoules.device.rapl_device.RaplPackageDomain` : CPU (specify the socket id in parameter)\n- `pyJoules.device.rapl_device.RaplDramDomain` : RAM (specify the socket id in parameter)\n- `pyJoules.device.rapl_device.RaplUncoreDomain` : integrated GPU (specify the socket id in parameter)\n- `pyJoules.device.rapl_device.RaplCoreDomain` : RAPL Core domain (specify the socket id in parameter)\n- `pyJoules.device.nvidia_device.NvidiaGPUDomain` : Nvidia GPU (specify the socket id in parameter)\n\nto understand which par of the cpu each RAPL domain monitor, see this [section](https://github.com/powerapi-ng/pyJoules/blob/master/README.md#rapl-domain-description)\n\n## Configure the output of the decorator\n\nIf you want to handle data with different output than the standard one, you can configure the decorator with an `EnergyHandler` instance from the `pyJoules.handler` module.\n\nAs an example, if you want to write the recorded energy consumption in a .csv file:\n```python\nfrom pyJoules.energy_meter import measure_energy\nfrom pyJoules.handler.csv_handler import CSVHandler\n\t\ncsv_handler = CSVHandler(\'result.csv\')\n\t\n@measure_energy(handler=csv_handler)\ndef foo():\n\t# Instructions to be evaluated.\n\nfor _ in range(100):\n\tfoo()\n\t\t\ncsv_handler.save_data()\n```\n\nThis will produce a csv file of 100 lines. Each line containing the energy\nconsumption recorded during one execution of the function `foo`.\nOther predefined `Handler` classes exist to export data to *MongoDB* and *Panda*\ndataframe.\n\n## Use a context manager to add tagged ""_breakpoint_"" in your measurment\n\nIf you want to know where is the ""_hot spots_"" where your python code consume the\nmost energy you can add ""_breakpoints_"" during the measurement process and tag\nthem to know amount of energy consumed between this breakpoints.\n\nFor this, you have to use a context manager to measure the energy\nconsumption. It is configurable as the decorator. For example, here we use an\n`EnergyContext` to measure the power consumption of CPU `1` and nvidia gpu `0`\nand report it in a csv file : \n\n```python\nfrom pyJoules.energy_meter import EnergyContext\nfrom pyJoules.device.rapl_device import RaplPackageDomain\nfrom pyJoules.device.nvidia_device import NvidiaGPUDomain\nfrom pyJoules.handler.csv_handler import CSVHandler\n\t\ncsv_handler = CSVHandler(\'result.csv\')\n\nwith EnergyContext(handler=csv_handler, domains=[RaplPackageDomain(1), NvidiaGPUDomain(0)], start_tag=\'foo\') as ctx:\n\tfoo()\n\tctx.record(tag=\'bar\')\n\tbar()\n\ncsv_handler.save_data()\n```\n\nThis will record the energy consumed :\n\n- between the beginning of the `EnergyContext` and the call of the `ctx.record` method\n- between the call of the `ctx.record` method and the end of the `EnergyContext`\n\nEach measured part will be written in the csv file. 
One line per part.\n\n# RAPL domain description\n\nRAPL domains match part of the cpu socket as described in this image : \n\n![](https://raw.githubusercontent.com/powerapi-ng/pyJoules/master/rapl_domains.png)\n\n- Package : correspond to the wall cpu energy consumption\n- core : correpond to the sum of all cpu core energy consumption\n- uncore : correspond to the integrated GPU\n\n# Miscellaneous\n\n## About\n\n**pyJoules** is an open-source project developed by the [Spirals research group](https://team.inria.fr/spirals) (University of Lille and Inria) that is part of the [PowerAPI](http://powerapi.org) initiative.\n\nThe documentation is available [here](https://pyJoules.readthedocs.io/en/latest/).\n\n## Mailing list\n\nYou can follow the latest news and asks questions by subscribing to our mailing list.\n\n## Contributing\n\nIf you would like to contribute code, you can do so via GitHub by forking the repository and sending a pull request.\n\nWhen submitting code, please make every effort to follow existing coding conventions and style in order to keep the code as readable as possible.\n'",,"2019/11/19, 12:10:31",1436,MIT,0,124,"2023/09/25, 14:56:20",16,5,15,3,30,1,0.0,0.1709401709401709,"2021/10/05, 09:25:52",v0.5.2,0,6,false,,false,false,,,https://github.com/powerapi-ng,https://powerapi.org,"Lille, France",,,https://avatars.githubusercontent.com/u/47974262?v=4,,, Carbon free energy for Google Cloud regions,Contains sustainability characteristics of Google Cloud regions in a machine readable format.,GoogleCloudPlatform,https://github.com/GoogleCloudPlatform/region-carbon-info.git,github,"google-cloud,carbon-emissions,carbon-free",Computation and Communication,"2023/07/25, 07:41:40",65,0,6,true,,Google Cloud Platform,GoogleCloudPlatform,,,"b'# Carbon free energy for Google Cloud regions \n\nThis repository contains sustainability characteristics of Google Cloud regions in a machine readable format. Read more [on the Google Cloud website](https://cloud.google.com/sustainability/region-carbon).\n\n## Data\n\n* **[2019](data/yearly/2019.csv)**\n* **[2020](data/yearly/2020.csv)**\n* **[2021](data/yearly/2021.csv)**\n* **[2022](data/yearly/2022.csv)**\n\n## Understanding the data\n\n**Google CFE**: This is the average percentage of carbon free energy consumed in a particular location on an hourly basis, while taking into account the investments we have made in renewable energy in that location. This means that in addition to the carbon free energy that\xe2\x80\x99s already supplied by the grid, we have added renewable energy generation in that location to reach [our 24/7 carbon free energy objective](https://www.gstatic.com/gumdrop/sustainability/247-carbon-free-energy.pdf). As a customer, this represents the average percentage of time your application will be running on carbon-free energy.\n\n**Grid carbon intensity (gCO2eq/kWh)**: This metric indicates the average operational gross emissions per unit of energy from the grid. This metric should be used to compare the regions in terms of carbon intensity of their electricity from the local grid. For regions that are similar in CFE%, this will indicate the relative emissions for when your workload is not running on carbon free energy. 
As an example, Frankfurt and the Netherlands have similar CFE scores, but the Netherlands has a higher emissions factor.\n\n**Google Cloud net operational greenhouse gas (GHG) emissions**: After calculating our Scope 2 market-based emissions per the GHG Protocol including our renewable energy contracts, Google ensures any remaining Scope 2 emissions are neutralized by investments in carbon offsets; this brings our global net operational emissions to zero.\n'",,"2021/03/17, 03:42:01",952,Apache-2.0,1,34,"2023/08/22, 04:49:09",0,4,8,2,64,0,0.5,0.13793103448275867,,,0,3,false,,false,true,,,https://github.com/GoogleCloudPlatform,https://cloud.google.com,,,,https://avatars.githubusercontent.com/u/2810941?v=4,,, Carbon-API-2.0,Estimating the carbon emissions per page on thousands of sites by looking at the amount of data that they transfer on load.,wholegrain,https://gitlab.com/wholegrain/carbon-api-2-0,gitlab,,Computation and Communication,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,, FEEP, Improve the energy efficiency of free and open source software.,cschumac,,custom,,Computation and Communication,,,,,,,,,,https://invent.kde.org/cschumac/feep,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,, LEAF,"Simulator for modeling energy consumption in cloud, fog, and edge computing environments.",dos-group,https://github.com/dos-group/leaf.git,github,"simulation,modeling,fog-computing,edge-computing,energy-consumption",Computation and Communication,"2022/12/14, 11:23:01",88,3,26,true,Python,DOS Group at TU Berlin,dos-group,Python,https://leaf.readthedocs.io,"b'# LEAF [![PyPI version](https://img.shields.io/pypi/v/leafsim.svg?color=52c72b)](https://pypi.org/project/leafsim/) [![Supported versions](https://img.shields.io/pypi/pyversions/leafsim.svg)](https://pypi.org/project/leafsim/) [![License](https://img.shields.io/pypi/l/leafsim.svg)](https://pypi.org/project/leafsim/)\n\n\n\nLEAF is a simulator for analytical modeling of energy consumption in cloud, fog, or edge computing environments.\nIt enables the modeling of simple tasks running on a single node as well as complex application graphs in distributed, heterogeneous, and resource-constrained infrastructures.\nLEAF is based on [SimPy](https://simpy.readthedocs.io/en/latest/) for discrete-event simulation and [NetworkX](https://networkx.org/) for modeling infrastructure or application graphs.\n\nPlease have a look at out [examples](https://github.com/dos-group/leaf/tree/main/examples) and visit the official [documentation](https://leaf.readthedocs.io) for more information on this project.\n\nThis Python implementation was ported from the [original Java protoype](https://www.github.com/birnbaum/leaf).\nAll future development will take place in this repository.\n\n\n## \xe2\x9a\x99\xef\xb8\x8f Installation\n\nYou can install the [latest release](https://pypi.org/project/leafsim/) of LEAF via [pip](https://pip.pypa.io/en/stable/quickstart/):\n\n```\n$ pip install leafsim\n```\n\nAlternatively, you can also clone the repository (including all examples) and set up your environment via:\n\n```\n$ pip install -e .\n```\n\n\n## \xf0\x9f\x9a\x80 Getting started\n\nLEAF uses [SimPy](https://simpy.readthedocs.io/en/latest/) for process-based discrete-event simulation and adheres to their API.\nTo understand how to develop scenarios in LEAF, it makes sense to familiarize yourself with SimPy first.\n\n```python\nimport simpy\nfrom leaf.application import Task\nfrom leaf.infrastructure import Node\nfrom leaf.power import PowerModelNode, PowerMeter\n\n# Processes modify the 
model during the simulation\ndef place_task_after_2_seconds(env, node, task):\n """"""Waits for 2 seconds and places a task on a node.""""""\n yield env.timeout(2)\n task.allocate(node)\n\nnode = Node(""node1"", cu=100, power_model=PowerModelNode(max_power=30, static_power=10))\ntask = Task(cu=100)\npower_meter = PowerMeter(node, callback=lambda m: print(f""{env.now}: Node consumes {int(m)}W""))\n\nenv = simpy.Environment() \n# register our task placement process\nenv.process(place_task_after_2_seconds(env, node, task))\n# register power metering process (provided by LEAF)\nenv.process(power_meter.run(env))\nenv.run(until=5)\n```\n\nWhich will result in the output:\n\n```\n0: Node consumes 10W\n1: Node consumes 10W\n2: Node consumes 30W\n3: Node consumes 30W\n4: Node consumes 30W\n```\n\nFor other examples, please refer to the [examples folder](https://github.com/dos-group/leaf/blob/main/examples).\n\n\n## \xf0\x9f\x8d\x83 What can I do with LEAF?\n\nLEAF enables high-level simulation of computing scenarios, where experiments are easy to create and easy to analyze.\nBesides allowing research on scheduling and placement algorithms on resource-constrained environments, LEAF puts a special focus on:\n\n- **Dynamic networks**: Simulate mobile nodes which can join or leave the network during the simulation.\n- **Power consumption modeling**: Model the power usage of individual compute nodes, network traffic, and applications.\n- **Energy-aware algorithms**: Implement dynamically adapting task placement strategies, routing policies, and other energy-saving mechanisms.\n- **Scalability**: Model the execution of thousands of compute nodes and applications in magnitudes faster than real time.\n\nPlease visit the official [documentation](https://leaf.readthedocs.io) for more information and examples on this project.\n\n
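To make the "energy-aware algorithms" point above concrete, here is a minimal sketch building on the getting-started example: it assumes only the `Node`, `Task`, `PowerModelNode`, and `PowerMeter` API already shown, and the node names and power values are invented for illustration. A placement process allocates the task to whichever candidate we pre-selected as cheaper:

```python
import simpy
from leaf.application import Task
from leaf.infrastructure import Node
from leaf.power import PowerModelNode, PowerMeter

# Two candidate nodes with invented power characteristics.
efficient = Node("efficient", cu=100, power_model=PowerModelNode(max_power=20, static_power=5))
hungry = Node("hungry", cu=100, power_model=PowerModelNode(max_power=60, static_power=15))

def energy_aware_placement(env, task, candidates):
    """Waits 1 second, then allocates the task to the candidate we
    pre-selected as cheapest (here simply the first list entry)."""
    yield env.timeout(1)
    task.allocate(candidates[0])

env = simpy.Environment()
task = Task(cu=50)
env.process(energy_aware_placement(env, task, [efficient, hungry]))

# Meter both nodes, mirroring the PowerMeter usage shown above.
for label, node in [("efficient", efficient), ("hungry", hungry)]:
    meter = PowerMeter(node, callback=lambda m, label=label: print(f"{env.now}: {label} consumes {int(m)}W"))
    env.process(meter.run(env))

env.run(until=3)
```

With the values above, the idle node should keep reporting only its static power, while the node that receives the task shows its draw rise once allocation happens.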
\n\n\n## \xf0\x9f\x93\x96 Publications\n\nIf you use LEAF in your research, please cite our paper:\n\nPhilipp Wiesner and Lauritz Thamsen. ""[LEAF: Simulating Large Energy-Aware Fog Computing Environments](https://ieeexplore.ieee.org/document/9458907)"" In the Proceedings of the 2021 *5th IEEE International Conference on Fog and Edge Computing (ICFEC)*. IEEE. 2021 [[arXiv preprint]](https://arxiv.org/pdf/2103.01170.pdf) [[video]](https://youtu.be/G70hudAhd5M)\n\nBibtex:\n```\n@inproceedings{WiesnerThamsen_LEAF_2021,\n author={Wiesner, Philipp and Thamsen, Lauritz},\n booktitle={2021 IEEE 5th International Conference on Fog and Edge Computing (ICFEC)}, \n title={{LEAF}: Simulating Large Energy-Aware Fog Computing Environments}, \n year={2021},\n pages={29-36},\n doi={10.1109/ICFEC51620.2021.00012}\n}\n```\n\n## \xf0\x9f\x92\x9a Projects using LEAF\n\n- Rui Zhang, Xuesen Chu, Ruhui Ma, Meng Zhang, Liwei Lin, Honghao Gao, Haibing Guan. ""[OSTTD: Offloading of Splittable Tasks with Topological Dependence in Multi-Tier Computing Networks](https://ieeexplore.ieee.org/abstract/document/9978666)"". *IEEE Journal on Selected Areas in Communications*. 2022.\n- Zizheng Liu. ""A Web-Based User Interface for the LEAF Simulator"". *MSc IT+ Dissertation* at U of Glasgow. 2022 [[code](https://github.com/ZZZZZZZZZED/leaf-GUI)]\n- Yana Kernerman. ""Interactive Visualization of Energy Consumption in Edge and Fog Computing Simulations"". *Bachelor Thesis* at TU Berlin. 2022 [[code](https://github.com/dos-group/leaf/tree/gui)]\n- Sangeeta Kakati and Rupa Deka. ""[Computational and Adaptive Offloading in Edge/Fog based IoT Environments](https://ieeexplore.ieee.org/document/9847743)."" *2nd International Conference on Intelligent Technologies (CONIT)*. IEEE. 2022\n- Philipp Wiesner, Ilja Behnke, Dominik Scheinert, Kordian Gontarska, and Lauritz Thamsen. ""[Let\'s Wait Awhile: How Temporal Workload Shifting Can Reduce Carbon Emissions in the Cloud](https://arxiv.org/pdf/2110.13234.pdf)"". *22nd International Middleware Conference*. ACM. 2021 [[code](https://github.com/dos-group/lets-wait-awhile)]\n- Liam Brugger. ""An Evaluation of Carbon-Aware Load Shifting Techniques"". *Bachelor Thesis* at TU Berlin. 2021 [[code](https://gitlab.com/lbrugger72/Bachelor)]\n'",",https://arxiv.org/pdf/2103.01170.pdf,https://arxiv.org/pdf/2110.13234.pdf","2021/01/22, 17:00:06",1006,MIT,3,44,"2022/06/08, 12:39:12",0,8,9,0,504,0,0.0,0.025000000000000022,"2022/05/16, 14:57:54",0.4.0,0,2,false,,false,false,"ZZZZZZZZZED/leaf-GUI,Maky55-hub/soc_pybamm_simpy,ide3a/connecticity",,https://github.com/dos-group,https://tu.berlin/en/dos,Technische Universität Berlin,,,https://avatars.githubusercontent.com/u/5664005?v=4,,, ethereum-nft-activity,Estimate the total emissions for popular CryptoArt platforms.,kylemcdonald,https://github.com/kylemcdonald/ethereum-nft-activity.git,github,"ethereum,cryptoart,climate",Computation and Communication,"2022/07/20, 22:36:38",181,0,2,false,Jupyter Notebook,,,"Jupyter Notebook,Python,Shell",,"b'# ethereum-nft-activity\n\nHow much energy does it take to power popular Ethereum-backed CryptoArt platforms? And what emissions are associated with this energy use?\n\nThese questions do not have clear answers for two reasons:\n\n1. The overall energy usage and emissions of Ethereum are hard to estimate. I am working on this in a separate repo: [kylemcdonald/ethereum-energy](https://github.com/kylemcdonald/ethereum-energy)\n2. 
The portion for which a specific user, platform, or transaction might be considered ""responsible"" is more of a philosophical question than a technical one. Like many complex systems, there is an indirect relationship between the service and the emissions. I am working on different approaches in this notebook: [Per-Transaction Models](https://github.com/kylemcdonald/cryptoart-footprint/blob/main/Per-Transaction%20Models.ipynb)\n\nThis table represents one method for computing emissions, as of March 5, 2022. The methodology is described below.\n\n| Name | Fees | Transactions | kgCO2 |\n|---------------|---------|--------------|-------------|\n| Art Blocks | 12,006 | 244,594 | 21,531,626 |\n| Async | 224 | 27,403 | 332,657 |\n| Foundation | 8,602 | 661,074 | 14,568,164 |\n| KnownOrigin | 507 | 64,326 | 904,455 |\n| Makersplace | 1,840 | 144,163 | 3,010,383 |\n| Nifty Gateway | 1,621 | 151,950 | 2,385,675 |\n| OpenSea | 314,515 | 20,012,086 | 551,268,013 |\n| Rarible | 20,930 | 1,802,971 | 27,706,539 |\n| SuperRare | 2,215 | 320,697 | 3,172,169 |\n| Zora | 532 | 21,660 | 721,254 |\n\nUpdates:\n\n* March 5, 2022: Missing contracts were added to Art Blocks and OpenSea, massively incresaing their totals. Duplicate contracts were removed remove Nifty Gateway, halving the totals. The contracts were duplicated because they were found both when scraping Nifty Gateway, and also when pulling labeled contracts from Etherscan.\n\n\n## Preparation\n\nFirst, sign up for an API key at [Etherscan](https://etherscan.io/myapikey). Create `env.json` and add the API key. It should look like:\n\n```json\n{\n ""etherscan-api-key"": """"\n}\n```\n\nInstall dependencies:\n\n```sh\npip install -r requirements.txt\n```\n\nNote: this project requires Python 3.\n\n### `contracts_footprint.py`\n\nThis will pull all the transactions from Etherscan, sum the gas and transaction counts, and do a basic emissions estimate. Results are saved in the `/output` directory as JSON or TSV. Run the script with, for example: `python contracts_footprint.py --verbose --tsv data/contracts.json data/nifty-gateway-contracts.json`. \n\nThis may take longer the first time, while your local cache is updated. When updating after a week, it can take 5 minutes or more to download all new transactions. The entire cache can be multiple gigabytes.\n\nThis script has a few unique additional flags:\n\n* `--summary` to summarize the results in a format similar to the above table, combining multiple contracts into a single row of output.\n* `--startdate` and `--enddate` can be used to only analyze a specific date range, using the format `YYYY-MM-DD`.\n* `--tsv` will save the results of analysis as a TSV file instead of JSON.\n\n### `contracts_history.py`\n\nThis will pull all the transactions from Etherscan, sum the transaction fees and gas used, and group by day and platform. Results are saved in the `/output` directory as CSV files. Run the script with, for example: `python contracts_history.py --verbose data/contracts.json data/nifty-gateway-contracts.json`\n\nThe most recent results are [cached in the gh_pages branch](https://github.com/kylemcdonald/ethereum-nft-activity/tree/gh-pages/output).\n\n### Additional flags\n\nBoth scripts have these shared additional flags:\n\n* `--noupdate` runs from cached results. This will not make any requests to Nifty Gateway or Etherscan. 
When using the `Etherscan` class in code without an API key, this is the default behavior.\n* `--verbose` prints progress when scraping Nifty Gateway or pulling transactions from Etherscan.\n\n### Helper scripts\n\n* `python ethereum_stats.py` will pull stats from Etherscan like daily fees and block rewards and save them to `data/ethereum-stats.json`\n* `python nifty_gateway.py` will scrape all the contracts from Nifty Gateway and save them to `data/nifty-gateway-contracts.json`\n\n## Methodology\n\nThe footprint of a platform is the sum of the footprints for all artwork on the platform. Most platforms use a few Ethereum contracts and addresses to handle all artworks. For each contract, we download all the transactions associated with that address from Etherscan. Then for each day, we take the sum of all fees paid on all those transactions divided by the total fees paid across the whole network for that day. This ratio is multiplied by the daily [Ethereum emissions estimate](https://github.com/kylemcdonald/ethereum-emissions) to get the total emissions for that address. Finally, the total emissions for a platform are equal to the emissions for all addresses across all days.\n\n## Sources\n\nContracts are sourced from a combination of personal research, [DappRadar](https://dappradar.com/), and [Etherscan](https://etherscan.io/) tags.\n\nWhen possible, we have confirmed contract coverage directly with the marketplaces. Confirmed contracts include:\n\n* SuperRare: all confirmed\n* Foundation: all confirmed\n* OpenSea: some contracts on DappRadar have not been confirmed\n* Nifty Gateway: all confirmed\n\n## How to add more platforms\n\nTo modify this code so that it works with more platforms, add every possible contract and wallet for each platform to the `data/contracts.json` file, using the format:\n\n```js\n\'/\': \'<0xAddress>\'\n```\n\nThen [submit a pull request](https://docs.github.com/en/github/collaborating-with-issues-and-pull-requests/creating-a-pull-request) back to this repository. Thanks in advance!\n\n## Contracts and Addresses\n\nContracts and addresses used by each platform can be found in `data/contracts.json` and are also listed here using `python print_contracts.py` to generate Markdown. 
Nifty Gateway contracts are listed separately in `data/nifty-gateway-contracts.json`.\n\n### Art Blocks\n\n* [Deployer](https://etherscan.io/address/0x96dc73c8b5969608c77375f085949744b5177660) 2020-11-26 to 2021-08-13\n* [GenArt721](https://etherscan.io/address/0x059EDD72Cd353dF5106D2B9cC5ab83a52287aC3a) 2020-11-26 to 2022-01-09\n* [GenArt721Core](https://etherscan.io/address/0xa7d8d9ef8d8ce8992df33d8b8cf4aebabd5bd270) 2020-12-12 to 2022-01-09\n\n### Async\n\n* [ASYNC](https://etherscan.io/address/0x6c424c25e9f1fff9642cb5b7750b0db7312c29ad) 2020-02-25 to 2021-12-21\n* [ASYNC-V2](https://etherscan.io/address/0xb6dae651468e9593e4581705a09c10a76ac1e0c8) 2020-07-21 to 2022-01-09\n\n### Foundation\n\n* [ERC-721](https://etherscan.io/address/0xcda72070e455bb31c7690a170224ce43623d0b6f) 2021-01-13 to 2022-01-09\n* [FND NFT (FNDNFT) ERC-20](https://etherscan.io/address/0x3b3ee1931dc30c1957379fac9aba94d1c48a5405) 2021-01-13 to 2022-01-09\n\n### KnownOrigin\n\n* [ArtistAcceptingBids](https://etherscan.io/address/0x921ade9018eec4a01e41e80a7eeba982b61724ec) 2018-10-23 to 2020-11-13\n* [ArtistAcceptingBidsV2](https://etherscan.io/address/0x848b0ea643e5a352d78e2c0c12a2dd8c96fec639) 2019-02-26 to 2022-01-07\n* [ArtistEditionControls](https://etherscan.io/address/0x06c741e6df49d7fda1f27f75fffd238d87619ba1) 2018-11-07 to 2020-07-07\n* [ArtistEditionControlsV2](https://etherscan.io/address/0x5327cf8b4127e81013d706330043e8bf5673f50d) 2019-02-26 to 2022-01-09\n* [KnownOrigin Token](https://etherscan.io/address/0xfbeef911dc5821886e1dda71586d90ed28174b7d) 2018-09-04 to 2022-01-09\n* [KnownOriginDigitalAsset](https://etherscan.io/address/0xdde2d979e8d39bb8416eafcfc1758f3cab2c9c72) 2018-04-04 to 2021-12-21\n* [SelfServiceAccessControls](https://etherscan.io/address/0xec133df5d806a9069aee513b8be01eeee2f03ff0) 2019-04-25 to 2021-08-23\n* [SelfServiceEditionCuration](https://etherscan.io/address/0x8ab96f7b6c60df169296bc0a5a794cae90493bd9) 2019-04-08 to 2020-07-07\n* [SelfServiceEditionCurationV2](https://etherscan.io/address/0xff043a999a697fb1efdb0c18fd500eb7eab4e846) 2019-04-25 to 2020-07-07\n* [SelfServiceEditionCurationV3](https://etherscan.io/address/0x50782a63b7735483be07ef1c72d6d75e94b4a8f6) 2019-04-30 to 2020-07-07\n\n### Makersplace\n\n* [DigitalMediaCore](https://etherscan.io/address/0x2a46f2ffd99e19a89476e2f62270e0a35bbf0756) 2019-03-11 to 2022-01-09\n* [Unknown 1](https://etherscan.io/address/0x3981a1218a95becb4258305584bf2f24ff8dedf2) 2019-02-26 to 2020-07-29\n* [Unknown 2](https://etherscan.io/address/0x0ba51d9c015a7544e3560081ceb16ffe222dd64f) 2020-02-24 to 2022-01-08\n\n### Nifty Gateway\n\n* Many individual contracts defined in `data/nifty-gateway-contracts.json`\n* [Builder Shop](https://etherscan.io/address/0x431bd1297a1c7664d599364a427a2d926a1f58ae) 2020-03-21 to 2021-07-27\n* [First and Last](https://etherscan.io/address/0x11ab0243c57c6c1b39f2908aaebaed7ccf351491) 2021-03-13 to 2022-01-09\n* [GUSD cashout](https://etherscan.io/address/0x3e6722f32cbe5b3c7bd3dca7017c7ffe1b9e5a2a) 2020-01-31 to 2022-01-09\n* [Getaway Auctions](https://etherscan.io/address/0xf72136fb50f0c90324c9619aaab4289eeb277f3e) 2021-03-15 to 2021-12-06\n* [Hard To Explain](https://etherscan.io/address/0xef2883efa7bf4ecf169f3ad6c012994078608985) 2021-03-14 to 2022-01-07\n* [Insides](https://etherscan.io/address/0x934fdb5084d448de4c61c960c5f806689ae720b1) 2021-03-14 to 2022-01-04\n* [Omnibus](https://etherscan.io/address/0xe052113bd7d7700d623414a0a4585bcae754e9d5) 2020-01-31 to 2022-01-09\n\n### OpenSea\n\n* [OpenSea Shared 
Storefront (OPENSTORE)](https://etherscan.io/address/0x495f947276749ce646f68ac8c248420045cb7b5e) 2020-12-02 to 2022-01-09\n* [OpenSea Token (OPT)](https://etherscan.io/address/0x1129eb10812935593bf44fe0a9b62a59a9202f6d) 2021-02-05 to 2021-04-29\n* [OpenSeaENSResolver](https://etherscan.io/address/0x9c4e9cce4780062942a7fe34fa2fa7316c872956) 2019-06-27 to 2020-07-03\n* [SaleClockAuction](https://etherscan.io/address/0x1f52b87c3503e537853e160adbf7e330ea0be7c4) 2018-01-08 to 2021-10-18\n* [Unknown 1](https://etherscan.io/address/0x23b45c658737b12f1748ce56e9b6784b5e9f3ff8) 2018-02-15 to 2020-06-18\n* [Unknown 2](https://etherscan.io/address/0x78997e9e939daffe7eb9ed114fbf7128d0cfcd39) 2018-04-03 to 2021-08-16\n* [Wallet](https://etherscan.io/address/0x5b3256965e7c3cf26e11fcaf296dfc8807c01073) 2018-01-02 to 2022-01-09\n* [Wyvern Exchange](https://etherscan.io/address/0x7be8076f4ea4a4ad08075c2508e481d6c946d12b) 2018-06-12 to 2022-01-09\n* [WyvernProxyRegistry](https://etherscan.io/address/0xa5409ec958c83c3f309868babaca7c86dcb077c1) 2018-06-12 to 2022-01-09\n\n### Rarible\n\n* [Asset Contract ERC1155](https://etherscan.io/address/0xb66a603f4cfe17e3d27b87a8bfcad319856518b8) 2021-06-11 to 2022-01-09\n* [Asset Contract ERC721](https://etherscan.io/address/0xf6793da657495ffeff9ee6350824910abc21356c) 2021-06-11 to 2022-01-09\n* [Deployer](https://etherscan.io/address/0x3482549fca7511267c9ef7089507c0f16ea1dcc1) 2018-10-08 to 2021-12-16\n* [ERC1155 Factory](https://etherscan.io/address/0x81243681078bee8e251d02ee6872b1eaa6dd982a) 2021-08-05 to 2021-12-09\n* [ERC1155 Sale 1](https://etherscan.io/address/0x8c530a698b6e83d562db09079bc458d4dad4e6c5) 2020-05-27 to 2020-10-11\n* [ERC1155 Sale 2](https://etherscan.io/address/0x93f2a75d771628856f37f256da95e99ea28aafbe) 2020-09-03 to 2021-06-05\n* [ERC721 Factory](https://etherscan.io/address/0x6d9dd3547baf4c190ab89e0103c363feaf325eca) 2021-08-05 to 2021-10-22\n* [ERC721 Sale](https://etherscan.io/address/0x131aebbfe55bca0c9eaad4ea24d386c5c082dd58) 2020-09-03 to 2021-01-22\n* [Exchange 1](https://etherscan.io/address/0xcd4ec7b66fbc029c116ba9ffb3e59351c20b5b06) 2020-11-17 to 2022-01-05\n* [Exchange Proxy](https://etherscan.io/address/0x9757f2d2b135150bbeb65308d4a91804107cd8d6) 2021-06-11 to 2022-01-09\n* [External Royalties](https://etherscan.io/address/0xea90cfad1b8e030b8fd3e63d22074e0aeb8e0dcd) 2021-06-18 to 2022-01-09\n* [MintableToken 1](https://etherscan.io/address/0xf79ab01289f85b970bf33f0543e41409ed2e1c1f) 2019-10-18 to 2021-10-17\n* [MintableToken 2](https://etherscan.io/address/0x6a5ff3ceecae9ceb96e6ac6c76b82af8b39f0eb3) 2019-12-23 to 2022-01-06\n* [MintableToken 3](https://etherscan.io/address/0x60f80121c31a0d46b5279700f9df786054aa5ee5) 2020-05-27 to 2022-01-09\n* [RARI Token 1](https://etherscan.io/address/0xfca59cd816ab1ead66534d82bc21e7515ce441cf) 2020-07-14 to 2022-01-09\n* [RARI Token 2](https://etherscan.io/address/0x60f80121c31a0d46b5279700f9df786054aa5ee5) 2020-05-27 to 2022-01-09\n* [RaribleToken](https://etherscan.io/address/0xd07dc4262bcdbf85190c01c996b4c06a461d2430) 2020-05-27 to 2022-01-09\n* [TokenSale](https://etherscan.io/address/0xf2ee97405593bc7b6275682b0331169a48fedec7) 2019-10-24 to 2020-08-31\n* [Treasury](https://etherscan.io/address/0x1cf0df2a5a20cd61d68d4489eebbf85b8d39e18a) 2021-01-07 to 2021-12-09\n* [Unknown 1](https://etherscan.io/address/0xa5af48b105ddf2fa73cbaac61d420ea31b3c2a07) 2020-05-27 to 2020-09-12\n\n### SuperRare\n\n* [Bids](https://etherscan.io/address/0x2947f98c42597966a0ec25e92843c09ac17fbaa7) 2019-09-04 to 
2021-12-31\n* [SupeRare (SUPR)](https://etherscan.io/address/0x41a322b28d0ff354040e2cbc676f0320d8c8850d) 2018-04-01 to 2022-01-09\n* [SuperRareV2 (SUPR)](https://etherscan.io/address/0xb932a70a57673d89f4acffbe830e8ed7f75fb9e0) 2019-09-04 to 2022-01-09\n* [Unknown 1](https://etherscan.io/address/0x65b49f7aee40347f5a90b714be4ef086f3fe5e2c) 2020-12-05 to 2022-01-09\n* [Unknown 2](https://etherscan.io/address/0x8c9f364bf7a56ed058fc63ef81c6cf09c833e656) 2020-12-05 to 2022-01-09\n\n### Zora\n\n* [Market](https://etherscan.io/address/0xe5bfab544eca83849c53464f85b7164375bdaac1) 2020-12-31 to 2020-12-31\n* [Media (ZORA)](https://etherscan.io/address/0xabefbc9fd2f806065b4f3c237d4b59d9a97bcac7) 2020-12-31 to 2022-01-09'",,"2021/03/09, 01:50:23",960,MIT,0,111,"2022/03/15, 08:16:22",2,6,15,0,589,0,0.3333333333333333,0.196078431372549,,,0,3,false,,false,false,,,,,,,,,,, kube-green,A k8s operator to reduce CO2 footprint of your clusters.,kube-green,https://github.com/kube-green/kube-green.git,github,"kubernetes,k8s,green-software,resources,cloud-native,downscale,hacktoberfest,climate-change",Computation and Communication,"2023/10/19, 22:54:51",738,0,421,true,Go,,kube-green,"Go,Makefile,Shell,Dockerfile",https://kube-green.dev,"b'[![Go Report Card][go-report-svg]][go-report-card]\n[![Coverage][test-and-build-svg]][test-and-build]\n[![Security][security-badge]][security-pipelines]\n[![Coverage Status][coverage-badge]][coverage]\n[![Documentations][website-badge]][website]\n[![Adopters][adopters-badge]][adopters]\n\nHow many of your dev/preview pods stay on during weekends? Or at night? It\'s a waste of resources! And money! But fear not, *kube-green* comes to the rescue.\n\n*kube-green* is a simple **k8s addon** that automatically **shuts down** (some of) your **resources** when you don\'t need them.\n\nIf you already use *kube-green*, add yourself as an [adopter][add-adopters]!\n\n## Getting Started\n\nThese instructions will get you a copy of the project up and running on your local machine for development and testing purposes. See how to install the project on a live system in our [docs](https://kube-green.dev/docs/install/).\n\n### Prerequisites\n\nMake sure you have Go installed ([download](https://go.dev/dl/)). Version 1.19 or higher is required.\n\n## Installation\n\nTo have *kube-green* running locally, just clone this repository and install the dependencies by running:\n\n```sh\ngo get\n```\n\n## Running the tests\n\nThere are different types of tests in this repository.\n\nIt is possible to run all the unit tests with\n\n```sh\nmake test\n```\n\nTo run integration tests:\n\n```sh\nmake e2e-test\n```\n\n## Deployment\n\nTo deploy *kube-green* in live systems, follow the [docs](https://kube-green.dev/docs/install/).\n\nTo run kube-green for development purposes, you can use [ko](https://ko.build/) to deploy\nin a KinD cluster.\nIt is possible to start a KinD cluster by running `kind create cluster --name kube-green-development`.\nTo deploy kube-green using ko, run:\n\n```sh\nmake local-run clusterName=kube-green-development\n```\n\n## Usage\n\nThe use of this operator is very simple. 
Once installed on the cluster, configure the desired CRD to make it work.\n\nSee the documentation about configuring the CRD [here](https://kube-green.dev/docs/configuration/).\n\n### CRD Examples\n\nKeep pods running during working hours in the Europe/Rome timezone, suspend CronJobs, and exclude a deployment named `api-gateway`:\n\n```yaml\napiVersion: kube-green.com/v1alpha1\nkind: SleepInfo\nmetadata:\n  name: working-hours\nspec:\n  weekdays: ""1-5""\n  sleepAt: ""20:00""\n  wakeUpAt: ""08:00""\n  timeZone: ""Europe/Rome""\n  suspendCronJobs: true\n  excludeRef:\n    - apiVersion: ""apps/v1""\n      kind: Deployment\n      name: api-gateway\n```\n\nPods sleep every night, with no automatic wake-up:\n\n```yaml\napiVersion: kube-green.com/v1alpha1\nkind: SleepInfo\nmetadata:\n  name: working-hours-no-wakeup\nspec:\n  sleepAt: ""20:00""\n  timeZone: Europe/Rome\n  weekdays: ""*""\n```\n\nTo see other examples, go to [our docs](https://kube-green.dev/docs/configuration/#examples).\n\n## Contributing\n\nPlease read [CONTRIBUTING.md](https://gist.github.com/PurpleBooth/b24679402957c63ec426) for details on our code of conduct, and the process for submitting pull requests to us.\n\n## Versioning\n\nWe use [SemVer](http://semver.org/) for versioning. For the versions available, see the [releases on this repository](https://github.com/kube-green/kube-green/releases).\n\n### How to upgrade the version\n\nTo upgrade the version:\n\n1. `make release version=v{{NEW_VERSION_TO_TAG}}`, where `{{NEW_VERSION_TO_TAG}}` should be replaced with the next version to upgrade to. N.B.: the version must include `v` as its first character.\n2. `git push --tags origin v{{NEW_VERSION_TO_TAG}}`\n\n## API Reference documentation\n\nThe API reference is automatically generated with [this tool](https://github.com/ahmetb/gen-crd-api-reference-docs). To generate it, a `doc.go` file with the content of `groupversion_info.go` is added to the versioned API folder, and a `+genclient` comment is added to the resource type in the `sleepinfo_types.go` file.\n\n## License\n\nThis project is licensed under the MIT License - see the [LICENSE](LICENSE) file for details.\n\n## Acknowledgement\n\nSpecial thanks to [JGiola](https://github.com/JGiola) for the tech review.\n\n## Give a Star! \xe2\xad\x90\n\nIf you like or are using this project, please give it a star. 
Thanks!\n\n## Adopters\n\n[Here](https://kube-green.dev/docs/adopters/) is the list of adopters of *kube-green*.\n\nIf you already use *kube-green*, add yourself as an [adopter][add-adopters]!\n\n[go-report-svg]: https://goreportcard.com/badge/github.com/kube-green/kube-green\n[go-report-card]: https://goreportcard.com/report/github.com/kube-green/kube-green\n[test-and-build-svg]: https://github.com/kube-green/kube-green/actions/workflows/test.yml/badge.svg\n[test-and-build]: https://github.com/kube-green/kube-green/actions/workflows/test.yml\n[coverage-badge]: https://coveralls.io/repos/github/kube-green/kube-green/badge.svg?branch=main\n[coverage]: https://coveralls.io/github/kube-green/kube-green?branch=main\n[website-badge]: https://img.shields.io/static/v1?label=kube-green&color=blue&message=docs&style=flat\n[website]: https://kube-green.dev\n[security-badge]: https://github.com/kube-green/kube-green/actions/workflows/security.yml/badge.svg\n[security-pipelines]: https://github.com/kube-green/kube-green/actions/workflows/security.yml\n[adopters-badge]: https://img.shields.io/static/v1?label=ADOPTERS&color=blue&message=docs&style=flat\n[adopters]: https://kube-green.dev/docs/adopters/\n[add-adopters]: https://github.com/kube-green/kube-green.github.io/blob/main/CONTRIBUTING.md#add-your-organization-to-adopters\n'",,"2021/02/21, 17:50:33",976,MIT,179,549,"2023/10/21, 10:38:20",34,288,314,130,4,6,0.0,0.19529411764705884,"2023/10/07, 15:15:51",v0.5.2,1,10,false,,false,true,,,https://github.com/kube-green,https://kube-green.dev/,,,,https://avatars.githubusercontent.com/u/94223580?v=4,,, ecoCode,Reduce the environmental footprint of your software applications with this cutting-edge sonarQube plugin.,cnumr,https://github.com/cnumr/ecoCode.git,github,"green,code,quality,sustainability,ecology,environment,energy",Computation and Communication,"2023/01/13, 13:25:53",62,0,13,false,Groovy,Collectif Conception Numérique Responsable,cnumr,"Groovy,Java,HTML,Python,CSS,PHP,JavaScript,Shell,Dockerfile,Batchfile",,"b'![Logo](docs/resources/logo-large.png)\n\n---\n\n
\xe2\x9a\xa0\xef\xb8\x8f WARNING: This repository is no longer maintained \xe2\x9a\xa0\xef\xb8\x8f\n\nPlease use the latest ecoCode version in the new ecoCode repository.\n\nContinue to follow the project there, and join us on our public Slack.\n\n\xc2\xa9 The ecoCode team that continues to love cnumr \xe2\x99\xa5\n
\n---\n\n*ecoCode* is a collective project aiming to reduce the environmental footprint of software at the code level. The goal of the project is to provide a list of static code analyzers that highlight code structures which may have a negative ecological impact: energy and resource over-consumption, ""fatware"", shortening terminals\' lifespan, etc.\n\necoCode is based on evolving catalogs of [good practices](docs/rules) for various technologies. A SonarQube plugin then implements these catalogs as rules for scanning your projects.\n\n**Warning**: this is still a very early-stage project. Any feedback or contribution will be highly appreciated. Please refer to the contribution section.\n\n[![License: GPL v3](https://img.shields.io/badge/License-GPLv3-blue.svg)](https://www.gnu.org/licenses/gpl-3.0)\n\n## \xf0\x9f\x8c\xbf SonarQube Plugin\n\n5 technologies are supported by the plugin right now:\n- [Java](src/java-plugin/)\n- [PHP](src/php-plugin/)\n- [Python](src/python-plugin/)\n- [Android](src/android-plugin/)\n- [Eslint](src/ecolinter-plugin)\n\n![Screenshot](docs/resources/screenshot.PNG)\n\n## \xf0\x9f\x9a\x80 Getting Started\n\nYou can download each plugin separately, or you can directly use an [all-in-one docker-compose](src/INSTALL.md).\n\n## \xf0\x9f\xa4\x9d Contribution\n\nAre you a technical expert, a designer, a project manager, a CSR expert, an ecodesign expert...?\n\nDo you want to offer your company\'s help, help us organize, or communicate about the project?\n\nDo you have ideas to submit to us?\n\nWe are listening, so that the project can progress collectively, and maybe with you!\n\nWE NEED YOU!\n\nHere is the [starter-pack](./hackathon/starter-pack.md).\n\n## \xf0\x9f\xa4\x93 Main contributors\n\nAny questions? We are here for you!\n\n- Jules Delecour\n- [Geoffrey Lallou\xc3\xa9](https://github.com/glalloue)\n- Julien Hertout\n- [Justin Berque](https://www.linkedin.com/in/justin-berque-444412140)\n- [Olivier Le Goa\xc3\xabr](https://olegoaer.perso.univ-pau.fr)\n\n---\n## \xf0\x9f\xa7\x90 Core Team Emeriti\n\nHere we honor some no-longer-active core team members who made valuable contributions in the past.\n\n- Ga\xc3\xabl Pellevoizin\n- [Nicolas Daviet](https://github.com/NicolasDaviet)\n- [Mathilde Grapin](https://github.com/fkotd)\n\n---\nThey have contributed to the success of ecoCode:\n\n- [Davidson Consulting](https://www.davidson.fr/)\n- [Orange Business Services](https://www.orange-business.com/)\n- [Snapp\'](https://www.snapp.fr/)\n- [Universit\xc3\xa9 de Pau et des Pays de l\'Adour (UPPA)](https://www.univ-pau.fr/)\n\nThey supported the project:\n\n- [R\xc3\xa9gion Nouvelle-Aquitaine](https://www.nouvelle-aquitaine.fr/)\n'",,"2021/03/11, 14:36:58",958,GPL-3.0,94,492,"2023/01/13, 13:26:00",16,147,158,52,285,6,0.3,0.7718309859154929,,,1,40,false,,false,false,,,https://github.com/cnumr,https://collectif.greenit.fr,France,,,https://avatars.githubusercontent.com/u/52161143?v=4,,, Kepler,Uses eBPF to probe energy related system stats and exports as Prometheus metrics.,sustainable-computing-io,https://github.com/sustainable-computing-io/kepler.git,github,"kubernetes,sustainability,ebpf,prometheus-exporter,energy-consumption,energy-monitor,energy-efficiency,prometheus,cloud-native,machine-learning",Computation and Communication,"2023/10/25, 10:06:50",729,1,553,true,Go,Sustainable Computing,sustainable-computing-io,"Go,Shell,C,Makefile,Dockerfile,Ruby",https://sustainable-computing.io,"b'\n\n![GitHub Workflow Status 
(event)](https://img.shields.io/github/actions/workflow/status/sustainable-computing-io/kepler/unit_test.yml?branch=main&label=CI)\n![Coverage](https://img.shields.io/badge/Coverage-42.6%25-yellow)\n[![OpenSSF Best Practices](https://bestpractices.coreinfrastructure.org/projects/7391/badge)](https://bestpractices.coreinfrastructure.org/projects/7391)\n\n![GitHub](https://img.shields.io/github/license/sustainable-computing-io/kepler)\n\n[![Twitter URL](https://img.shields.io/twitter/url/https/twitter.com/KeplerProject.svg?style=social&label=Follow%20%40KeplerProject)](https://twitter.com/KeplerProject)\n\n# Kepler\nKepler (Kubernetes Efficient Power Level Exporter) uses eBPF to probe energy-related system stats and exports them as Prometheus metrics.\n\nAs a CNCF Sandbox project, Kepler follows the [CNCF Code of Conduct](https://github.com/cncf/foundation/blob/main/code-of-conduct.md).\n## Architecture\nKepler Exporter exposes a variety of [metrics](https://sustainable-computing.io/design/metrics/) about the energy consumption of Kubernetes components such as Pods and Nodes.\n\n![Architecture](doc/kepler-arch.png)\n\n## Install Kepler\nInstructions to install Kepler can be found in the [Kepler docs](https://sustainable-computing.io/installation/kepler/).\n\n## Visualise Kepler metrics with Grafana\nTo visualise the power consumption metrics made available by the Kepler Exporter, import the pre-generated [Kepler Dashboard](grafana-dashboards/Kepler-Exporter.json) into Grafana:\n![Sample Grafana dashboard](doc/dashboard.png)\n\n## Contribute to Kepler\nInterested in contributing to Kepler? Follow the [Contributing Guide](CONTRIBUTING.md) to get started!\n\n## Talks & Demos\n- [Kepler Demo](https://www.youtube.com/watch?v=P5weULiBl60)\n- [""Sustainability the Container Native Way"" - Open Source Summit NA 2022](doc/OSS-NA22.pdf)\n\nA full list of talks and demos about Kepler can be found [here](https://github.com/sustainable-computing-io/kepler-doc/tree/main/demos).\n\n## Community Meetings\nPlease join the biweekly community meetings. The meeting calendar and agenda can be found [here](https://github.com/sustainable-computing-io/community/blob/main/community-event.md).\n'",,"2022/02/01, 19:48:56",631,Apache-2.0,1157,1747,"2023/10/25, 10:05:21",44,642,945,681,0,11,3.6,0.7076537013801756,"2023/10/12, 15:15:09",v0.6.1,0,43,false,,false,true,sustainable-computing-io/label-exporter,,https://github.com/sustainable-computing-io,www.sustainable-computing.io,United States of America,,,https://avatars.githubusercontent.com/u/91567619?v=4,,, Software Carbon Intensity Specification,A specification that describes how to calculate a carbon intensity for software applications.,Green-Software-Foundation,https://github.com/Green-Software-Foundation/sci.git,github,,Computation and Communication,"2023/02/09, 11:18:08",215,0,95,true,,Green Software Foundation,Green-Software-Foundation,,,"b'# Software Carbon Intensity (SCI) Specification\n\nA specification that describes how to calculate a carbon intensity score for software applications.\n\nCreated and managed by the [Standards Working Group](https://github.com/Green-Software-Foundation/standards_wg) in the [greensoftware.foundation](https://greensoftware.foundation).\n\n## Project Scope\nThis document, the Software Carbon Intensity technical specification, describes how to calculate the carbon intensity of a software application. 
It describes the methodology of calculating the total carbon emissions and the selection criteria to turn the total into a rate that can be used to achieve real-world, physical emissions reductions, also known as abatement.\n\nElectricity has a carbon intensity depending on where and when it is consumed. An intensity is a rate: it has a numerator and a denominator. A rate provides you with helpful information when considering how to design, develop, and deploy software applications. This specification describes the carbon intensity of a software application or service.\n\n## Getting Started\n- The development version of the specification is [here](https://github.com/Green-Software-Foundation/software_carbon_intensity/blob/dev/Software_Carbon_Intensity/Software_Carbon_Intensity_Specification.md).\n- The latest published version of the specification is [here](https://github.com/Green-Software-Foundation/software_carbon_intensity/blob/main/Software_Carbon_Intensity/Software_Carbon_Intensity_Specification.md).\n- The `dev` branch contains the current version that is being worked on, and the `main` branch contains the latest published version.\n- Check the [issues tab](https://github.com/Green-Software-Foundation/software_carbon_intensity/issues) for active and closed conversations regarding the spec.\n\n## GitHub Training\n- [Getting started with GitHub](https://green-software-foundation.github.io/github-training/)\n\n## Contributing\nThe recommended approach for getting involved with the specification is to:\n- Read the [development version](https://github.com/Green-Software-Foundation/software_carbon_intensity/blob/dev/Software_Carbon_Intensity/Software_Carbon_Intensity_Specification.md) of the specification.\n- Raise an issue, question, or recommendation in the issues tab above and start a discussion with other members.\n- Once agreement has been reached, raise a pull request to update the specification with your recommended changes.\n- Let others know about your pull request by either commenting on the relevant issue or posting in the Standards Working Group slack channel.\n- Pull requests are reviewed and merged during Standards Working Group meetings.\n- Only chairs of the Standards Working Group can merge pull requests.\n\n## Versioning\n* We use [Semantic Versioning](http://semver.org/) for versioning.\n\n## Copyright\nStandards WG projects are copyrighted under [Creative Commons Attribution 4.0](https://creativecommons.org/licenses/by/4.0/).\n\n## License\nStandards WG projects are licensed under the MIT License - see the [LICENSE.md](Software_Carbon_Intensity/License.md) file for details.\n\n## Patent\nStandards WG projects operate under the W3C Patent Mode.\n\n## Feedback\n* [GitHub discussions](https://github.com/Green-Software-Foundation/software_carbon_intensity/discussions/new?category=sci-feedback)\n'",,"2021/06/30, 10:35:32",847,Apache-2.0,6,171,"2023/08/17, 15:37:51",10,120,232,24,69,1,1.6,0.6774193548387097,,,0,15,false,,false,false,,,https://github.com/Green-Software-Foundation,https://greensoftware.foundation,,,,https://avatars.githubusercontent.com/u/84547728?v=4,,, Principles of Green Software Engineering,"Are a core set of competencies needed to define, build and run sustainable software applications.",jawache,https://github.com/jawache/principles-green.git,github,,Computation and Communication,"2023/05/23, 23:03:09",170,0,19,true,SCSS,,,"SCSS,JavaScript,Nunjucks",,"b""# Principles.Green\n\nThe **Principles of Green Software Engineering** are a core set of 
competencies needed to define, build and run sustainable software applications.\n\nTo update and run the web site locally:\n1. Duplicate the repo\n2. `npm install markdown-serve`\n3. `npm install express`\n4. Create `server.js` in the root folder with this code\n\n```\nvar express = require('express'),\n    mds = require('markdown-serve'),\n    path = require('path');\n\nvar app = express();\n\napp.set('views', path.join(__dirname, 'views'));\napp.set('view engine', 'jade');\n\napp.use(mds.middleware({\n    rootDirectory: path.resolve(__dirname, '/'),\n    view: 'markdown'\n}));\n\n// Start a server (port is an assumption) so `npm run start` actually serves the site\napp.listen(3000);\n```\n5. `npm run start`""",,"2020/03/19, 22:00:35",1315,MIT,12,101,"2023/02/09, 20:13:06",16,31,32,1,258,12,0.0,0.5,,,0,15,false,,false,true,,,,,,,,,,, grid-intensity-go,A tool written in go to help you factor carbon intensity into decisions about where and when to run computing jobs.,thegreenwebfoundation,https://github.com/thegreenwebfoundation/grid-intensity-go.git,github,,Computation and Communication,"2023/09/21, 13:24:23",43,2,26,true,Go,The Green Web Foundation,thegreenwebfoundation,"Go,HCL,Shell,Dockerfile",,"b'[![GoDoc](https://godoc.org/github.com/thegreenwebfoundation/grid-intensity-go?status.svg)](http://godoc.org/github.com/thegreenwebfoundation/grid-intensity-go) ![go-unit-test](https://github.com/thegreenwebfoundation/grid-intensity-go/workflows/go-unit-test/badge.svg) ![docker](https://github.com/thegreenwebfoundation/grid-intensity-go/workflows/docker-integration-test/badge.svg) ![kubernetes](https://github.com/thegreenwebfoundation/grid-intensity-go/workflows/kubernetes-integration-test/badge.svg) ![nomad](https://github.com/thegreenwebfoundation/grid-intensity-go/workflows/nomad-integration-test/badge.svg)\n\n# grid-intensity-go\n\nA tool written in Go, designed to be integrated into Kubernetes, Nomad, and other schedulers, to help you factor carbon intensity into decisions about where and when to run jobs.\n\nThe tool has 3 components:\n\n- The `grid-intensity` CLI for interacting with carbon intensity data.\n- A [Prometheus](https://prometheus.io/) exporter with carbon intensity metrics that can be deployed via\nDocker, Nomad, or Kubernetes.\n- A Go library that can be integrated into your Go code.\n\n## Changelog\n\nSee [CHANGELOG.md](/CHANGELOG.md).\n\n## Background\n\nWe know that the internet runs on electricity. That electricity comes from a mix of energy sources, including wind and solar, nuclear power, biomass, fossil gas, oil, coal, and so on.\n\nWe call this the fuel mix, and the fuel mix can affect the carbon intensity of your code.\n\n## Move your code through time and space\n\nBecause the fuel mix will be different depending on when and where you run your code, you can influence the carbon intensity of the code you write by moving it through time and space - either by making it run when the grid is greener, or making it run where it\'s greener, like a CDN running on green power.\n\n## Inspired By\n\nThis tool builds on research and tools developed across the sustainable software community.\n\n### Articles\n\n- A carbon aware internet - Branch magazine - https://branch.climateaction.tech/issues/issue-2/a-carbon-aware-internet/\n- Carbon Aware Kubernetes - https://devblogs.microsoft.com/sustainable-software/carbon-aware-kubernetes/\n- Clean energy technologies threaten to overwhelm the grid. Here\xe2\x80\x99s how it can adapt. 
- https://www.vox.com/energy-and-environment/2018/11/30/17868620/renewable-energy-power-grid-architecture\n\n### Papers\n\n- A Tale of Two Visions: Designing a Decentralized Transactive Electric System - https://ieeexplore.ieee.org/document/7452738\n- Carbon Explorer - https://github.com/facebookresearch/CarbonExplorer/\n- Cucumber: Renewable-Aware Admission Control for Delay-Tolerant Cloud and Edge Workloads - https://arxiv.org/abs/2205.02895 \n- Let\'s Wait Awhile: How Temporal Workload Shifting Can Reduce Carbon Emissions in the Cloud - https://arxiv.org/abs/2110.13234\n\n### Tools\n\n- Carbon Aware Nomad - experimental branch - https://github.com/hashicorp/nomad/blob/h-carbon-meta/CARBON.md\n- Cloud Carbon Footprint - https://www.cloudcarbonfootprint.org/\n- Scaphandre - https://github.com/hubblo-org/scaphandre\n- Solar Protocol - http://solarprotocol.net/\n- The carbon aware scheduler - https://pypi.org/project/carbon-aware-scheduler/\n\n## Installing\n\n- Install via [brew](https://brew.sh/).\n\n```sh\nbrew install thegreenwebfoundation/carbon-aware-tools/grid-intensity\n```\n\n- Install via curl (feel free to do due diligence and check the [script](https://github.com/thegreenwebfoundation/grid-intensity-go/blob/main/install.sh) first).\n\n```sh\ncurl -fsSL https://raw.githubusercontent.com/thegreenwebfoundation/grid-intensity-go/main/install.sh | sudo sh \n```\n\n- Fetch a binary release from the [releases](https://github.com/thegreenwebfoundation/grid-intensity-go/releases) page.\n\n## grid-intensity CLI\n\nThe CLI allows you to interact with carbon intensity data from multiple providers.\n\n```sh\n$ grid-intensity\nProvider ember-climate.org needs an ISO country code as a location parameter.\nESP detected from your locale.\nESP\n[\n\t{\n\t\t""emissions_type"": ""average"",\n\t\t""metric_type"": ""absolute"",\n\t\t""provider"": ""Ember"",\n\t\t""location"": ""ESP"",\n\t\t""units"": ""gCO2e per kWh"",\n\t\t""valid_from"": ""2021-01-01T00:00:00Z"",\n\t\t""valid_to"": ""2021-12-31T23:59:00Z"",\n\t\t""value"": 193.737\n\t}\n]\n```\n\nThe `--provider` and `--location` flags allow you to select other providers and locations.\nYou can also set the `GRID_INTENSITY_PROVIDER` and `GRID_INTENSITY_LOCATION` environment\nvariables or edit the config file at `~/.config/grid-intensity/config.yaml`.\n\n```sh\n$ grid-intensity --provider CarbonIntensityOrgUK --location UK\n{\n\t""from"": ""2022-07-14T14:30Z"",\n\t""to"": ""2022-07-14T15:00Z"",\n\t""intensity"": {\n\t\t""forecast"": 184,\n\t\t""actual"": 194,\n\t\t""index"": ""moderate""\n\t}\n}\n```\n\nThe [providers](#providers) section shows how to configure other providers.\n\n## grid-intensity exporter\n\nThe `exporter` subcommand starts the prometheus exporter on port 8000.\n\n```sh\n$ grid-intensity exporter --provider Ember --location FR\nUsing provider ""Ember"" with location ""FR""\nMetrics available at :8000/metrics\n```\n\nView the metrics with curl.\n\n```\n$ curl -s http://localhost:8000/metrics | grep grid\n# HELP grid_intensity_carbon_average Average carbon intensity for the electricity grid in this location.\n# TYPE grid_intensity_carbon_average gauge\ngrid_intensity_carbon_average{provider=""Ember"",location=""FR"",units=""gCO2 per kWh""} 67.781\n```\n\n### Docker Image\n\nBuild the docker image to deploy the exporter.\n\n```sh\nCGO_ENABLED=0 GOOS=linux go build -o grid-intensity .\ndocker build -t thegreenwebfoundation/grid-intensity:latest .\n```\n\n### Kubernetes\n\nInstall the [helm](https://helm.sh/) chart in 
[/helm/grid-intensity-exporter](https://github.com/thegreenwebfoundation/grid-intensity-go/tree/main/helm/grid-intensity-exporter).\nThe Docker image needs to be available in the cluster.\n\n```sh\nhelm install --set gridIntensity.location=FR grid-intensity-exporter helm/grid-intensity-exporter\n```\n\n### Nomad\n\nEdit the Nomad job in [/nomad/grid-intensity-exporter.nomad](https://github.com/thegreenwebfoundation/grid-intensity-go/blob/main/nomad/grid-intensity-exporter.nomad) to set the\nenv vars `GRID_INTENSITY_LOCATION` and `GRID_INTENSITY_PROVIDER`.\n\nStart the Nomad job. The Docker image needs to be available in the cluster.\n\n```sh\nnomad run ./nomad/grid-intensity-exporter.nomad\n```\n\n## grid-intensity-go library\n\nSee the [/examples/](https://github.com/thegreenwebfoundation/grid-intensity-go/tree/main/examples)\ndirectory for examples of how to integrate each provider.\n\n## Providers\n\nCurrently these providers of carbon intensity data are integrated. If you would like\nus to integrate more providers, please open an [issue](https://github.com/thegreenwebfoundation/grid-intensity-go/issues).\n\n### Electricity Maps\n\n[Electricity Maps](https://app.electricitymaps.com/map) has carbon intensity data\nfrom multiple sources. You need to get an API token and URL from their\n[API portal](https://api-portal.electricitymaps.com/) to use the API. You can use\ntheir free tier for non-commercial use or sign up for a 30-day trial.\n\nThe `location` parameter needs to be set to a zone present in the public [zones](https://static.electricitymap.org/api/docs/index.html#zones) endpoint.\n\n```sh\nELECTRICITY_MAP_API_TOKEN=your-token \\\nELECTRICITY_MAP_API_URL=https://api-access.electricitymaps.com/free-tier/ \\\ngrid-intensity --provider=ElectricityMap --location=IN-KA\n```\n\n### WattTime\n\n[WattTime](https://www.watttime.org/) has carbon intensity data from multiple sources.\nYou need to [register](https://www.watttime.org/api-documentation/#authentication) to use the API.\n\nThe `location` parameter should be set to a supported location. The `/ba-from-loc`\nendpoint allows you to provide a latitude and longitude. 
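As a rough sketch only (the endpoint URLs and token flow below are our assumptions about the WattTime v2 API, not something this repository ships; verify them against the WattTime documentation), such a lookup could look like:\n\n```python\n# Hypothetical sketch: resolve a WattTime grid region from coordinates.\n# The login and ba-from-loc endpoints here are assumptions; check the WattTime docs.\nimport requests\n\ntoken = requests.get(\n    \'https://api2.watttime.org/v2/login\',\n    auth=(\'your-user\', \'your-password\'),\n).json()[\'token\']\n\nresp = requests.get(\n    \'https://api2.watttime.org/v2/ba-from-loc\',\n    headers={\'Authorization\': \'Bearer \' + token},\n    params={\'latitude\': 37.8, \'longitude\': -122.3},\n)\nprint(resp.json())\n```\n\n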
See the [docs](https://www.watttime.org/api-documentation/#determine-grid-region) for more details.\n\n```sh\nWATT_TIME_USER=your-user \\\nWATT_TIME_PASSWORD=your-password \\\ngrid-intensity --provider=WattTime --location=CAISO_NORTH\n```\n\n### Ember\n\nCarbon intensity data from [Ember](https://ember-climate.org/) is embedded in the binary,\nin accordance with their licensing: [CC-BY-SA 4.0](https://ember-climate.org/creative-commons/).\n\n```sh\ngrid-intensity --provider=Ember --location=DE\n```\n\nThe `location` parameter should be set to a 2- or 3-character ISO country code.\n\n### UK Carbon Intensity API\n\nThe UK Carbon Intensity API (https://carbonintensity.org.uk/) is a public API,\nand the only supported location is `UK`.\n\n```sh\ngrid-intensity --provider=CarbonIntensityOrgUK --location=UK\n```\n'",",https://arxiv.org/abs/2205.02895,https://arxiv.org/abs/2110.13234\n\n###","2020/11/03, 11:31:35",1086,Apache-2.0,10,65,"2023/09/21, 13:24:27",13,50,65,12,34,0,1.8,0.1384615384615384,"2023/07/13, 12:01:47",v0.5.0,1,4,false,,false,false,"rossf7/carbon-aware-karmada-operator,thegreenwebfoundation/grid-intensity-exporter",,https://github.com/thegreenwebfoundation,https://www.thegreenwebfoundation.org,The Internet,,,https://avatars.githubusercontent.com/u/8995024?v=4,,, Eco2AI,A Python library which accumulates statistics about power consumption and CO2 emission during running code.,sb-ai-lab,https://github.com/sb-ai-lab/Eco2AI.git,github,"carbon-emissions,carbon-footprint,esg,sustainability,ai,environment,co2-emissions,co2-monitoring,energy-consumption,power-consumption-measurement,emission-tracker,deep-learning,machine-learning,python,ghg",Computation and Communication,"2023/10/20, 13:55:26",165,10,94,true,Python,,sb-ai-lab,Python,,"b'\n\n\n![PyPI Downloads](https://img.shields.io/pypi/dm/eco2ai?color=brightgreen&label=PyPI%20downloads&logo=pypi&logoColor=yellow)\n[![PyPI all Downloads](https://img.shields.io/badge/All%20PyPI%20downloads-look%20in%20Colab-brightgreen)](https://colab.research.google.com/drive/1UoSHPRUHbg5B1U2x8p_ACo21X9N6n1im?authuser=1)\n\n\n[![PyPI - Downloads](https://img.shields.io/badge/%20PyPI%20-link%20for%20download-brightgreen)](https://pypi.org/project/eco2ai/)\n![PyPI - Downloads](https://img.shields.io/pypi/v/eco2ai?color=bright-green&label=PyPI&logo=pypi&logoColor=yellow)\n[![DOI](https://img.shields.io/badge/DOI-eco2AI%20article-brightgreen)](https://link.springer.com/article/10.1134/S1064562422060230)\n[![telegram support](https://img.shields.io/twitter/url?label=eco2ai%20support&logo=telegram&style=social&url=https%3A%2F%2Ft.me%2F%2BjsaoAgioprQ4Zjk6)](https://t.me/eco2ai)\n\n# Eco2AI\n\n+ [About Eco2AI :clipboard:](#1)\n+ [Installation :wrench:](#2)\n+ [Use examples :computer:](#3)\n+ [Important note :blue_book:](#4)\n+ [Citing](#5)\n+ [Feedback :envelope:](#6)\n\n## About Eco2AI :clipboard:\n\nEco2AI is a Python library for CO2 emission tracking. It monitors the energy consumption of CPU & GPU devices and estimates the equivalent carbon emissions, taking into account the regional emission coefficient. Eco2AI is applicable to any Python script; all you need to do is add a couple of lines to your code. All emissions data and information about your devices are recorded in a local file. 
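\n\nAs a back-of-the-envelope illustration of that estimate (the variable names and the energy figure below are made up; the global-average coefficient is the one mentioned in the important note further down):\n\n```python\n# Hypothetical illustration: emissions = measured energy * regional emission coefficient.\nenergy_kwh = 1.37                  # measured CPU + GPU + RAM energy, in kWh (made up)\ncoeff_kg_per_kwh = 436.529 / 1000  # 436.529 kg/MWh global average, expressed in kg/kWh\nprint(round(energy_kwh * coeff_kg_per_kwh, 2))  # ~0.6 kg of CO2\n```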
\n\nEvery run of Tracker() is accompanied by a session description added to the log file, including the following elements:\n\n+ project_name\n+ experiment_description\n+ start_time\n+ duration(s)\n+ power_consumption(kWh)\n+ CO2_emissions(kg)\n+ CPU_name\n+ GPU_name\n+ OS\n+ country\n\n## Installation\nTo install the eco2AI library, run the following command:\n\n```\npip install eco2ai\n```\n\n## Use examples\n\nAn example of eco2AI usage: [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1hn0DQiKHeyXwvOOR3UEXaGsD6DqVm6b7?authuser=1)\nYou can also find an eco2AI tutorial on YouTube [![utube](https://img.shields.io/youtube/views/-fegQpA2gPg?label=eco2AI&style=social)](https://www.youtube.com/watch?v=-fegQpA2gPg&ab_channel=AIRIInstitute)\n\nThe eco2AI interface is quite simple. Here is the simplest usage example:\n\n```python\nimport eco2ai\n\ntracker = eco2ai.Tracker(project_name=""YourProjectName"", experiment_description=""training the model"")\n\ntracker.start()\n\n# your code here\n\ntracker.stop()\n```\n\neco2AI also supports decorators. As soon as the decorated function is executed, the information about the emissions will be written to the emission.csv file:\n\n```python\nfrom eco2ai import track\n\n@track\ndef train_func(model, dataset, optimizer, epochs):\n    ...\n\ntrain_func(your_model, your_dataset, your_optimizer, your_epochs)\n```\n\nFor your convenience, every time you instantiate the Tracker object with your custom parameters, these settings will be saved until the library is deleted. Each new tracker will be created with your custom settings (if you create a tracker with new parameters, they will be saved instead of the old ones). For example:\n\n```python\nimport eco2ai\n\ntracker = eco2ai.Tracker(\n    project_name=""YourProjectName"",\n    experiment_description=""training model"",\n    file_name=""emission.csv""\n)\n\ntracker.start()\n\ntracker.stop()\n\n...\n\n# now, we want to create a new tracker for new calculations\ntracker = eco2ai.Tracker()\n# now, it\'s equivalent to:\n# tracker = eco2ai.Tracker(\n#     project_name=""YourProjectName"",\n#     experiment_description=""training the model"",\n#     file_name=""emission.csv""\n# )\ntracker.start()\n\ntracker.stop()\n```\n\nYou can also set parameters using the set_params() function, as in the example below:\n\n```python\nfrom eco2ai import set_params, Tracker\n\nset_params(\n    project_name=""My_default_project_name"",\n    experiment_description=""We trained..."",\n    file_name=""my_emission_file.csv""\n)\n\ntracker = Tracker()\n# now, it\'s equivalent to:\n# tracker = Tracker(\n#     project_name=""My_default_project_name"",\n#     experiment_description=""We trained..."",\n#     file_name=""my_emission_file.csv""\n# )\ntracker.start()\n\ntracker.stop()\n```\n\n## Important note\n\nIf for some reason the country cannot be determined, the emission coefficient is set to 436.529 kg/MWh, which is the global average; see the [Global Electricity Review](https://ember-climate.org/insights/research/global-electricity-review-2022/#supporting-material-downloads).\n\nFor a proper calculation of GPU and CPU power consumption, you should create a ""Tracker"" before any GPU or CPU usage.\n\nCreate a new \xe2\x80\x9cTracker\xe2\x80\x9d for every new calculation.\n\n# Usage of Eco2AI\n\nAn example of using the library is given in the [publication](https://arxiv.org/abs/2208.00406). 
In the paper we presented experiments tracking equivalent CO2 emissions using eco2AI while training [ruDALL-E](https://github.com/sberbank-ai/ru-dalle) models with 1.3 billion ([Malevich](https://habr.com/ru/company/sberbank/blog/589673/), ruDALL-E XL 1.3B) and 12 billion parameters ([Kandinsky](https://github.com/sberbank-ai/ru-dalle), ruDALL-E XL 12B). These are [multimodal](https://arxiv.org/abs/2202.10435) pre-trained transformers that learn the conditional distribution of images given a string of text, and are capable of generating arbitrary images from a Russian text prompt that describes the desired result.\nThe carbon emissions and power consumption recorded while fine-tuning Malevich and Kandinsky on the [Emojis dataset](https://arxiv.org/abs/2112.02448) are given in the table below.\n\n | **Model** | **Train time** | **Power, kWh** | **CO2, kg** | **GPU** | **CPU** | **Batch Size** |\n |:----------|:-------------:|:------:|:-----:|:-----:|:------:|:------:|\n | **Malevich** | 4h 19m | 1.37 | **0.33** | A100 Graphics, 1 | AMD EPYC 7742 64-Core | 4 |\n | **Kandinsky** | 9h 45m | 24.50 | **5.89** | A100 Graphics, 8 | AMD EPYC 7742 64-Core | 12 |\n\nWe also presented results for training Malevich with an optimized variation of the [GELU](https://arxiv.org/abs/1606.08415) activation function. Training Malevich with the [8-bit version of GELU](https://arxiv.org/abs/2110.02861) allowed us to spend about 10\\% less energy and, consequently, produce less equivalent CO2 emissions.\n\n# Citing Eco2AI\n[![DOI](https://img.shields.io/badge/DOI-eco2AI%20article-brightgreen)](https://link.springer.com/article/10.1134/S1064562422060230)\n\nEco2AI is licensed under the [Apache License 2.0](https://www.apache.org/licenses/LICENSE-2.0).\n\nPlease consider citing the following paper in any research manuscript using the Eco2AI library:\n\n```\n@inproceedings{budennyy2023eco2ai,\n  title={Eco2ai: carbon emissions tracking of machine learning models as the first step towards sustainable ai},\n  author={Budennyy, SA and Lazarev, VD and Zakharenko, NN and Korovin, AN and Plosskaya, OA and Dimitrov, DV and Akhripkin, VS and Pavlov, IV and Oseledets, IV and Barsola, IS and others},\n  booktitle={Doklady Mathematics},\n  pages={1--11},\n  year={2023},\n  organization={Springer}\n}\n```\n\n## In collaboration with\n[AIRI](https://airi.net/)\n'",",https://arxiv.org/abs/2208.00406,https://arxiv.org/abs/2202.10435,https://arxiv.org/abs/2112.02448,https://arxiv.org/abs/1606.08415,https://arxiv.org/abs/2110.02861","2022/05/31, 06:57:26",512,Apache-2.0,28,205,"2023/07/19, 09:20:56",3,2,4,4,98,2,0.0,0.475,,,0,4,false,,false,false,"software-energy-cost-studies/profiling,AIRI-Institute/eco4cast,Ahmed-Khaled-Saleh/DIN-Pytorch,agussomacal/ROMHighContrast,agussomacal/PerplexityLab,agussomacal/NonLinearRBA4PDEs,antoshka17/tph,Roman54228/lfw_facerec,pikvic/fefu-eco2ai,misanchz98/TFG-Project",,https://github.com/sb-ai-lab,,,,,https://avatars.githubusercontent.com/u/103759388?v=4,,, impact,Compute your ML model's emissions with our calculator and add the results to your paper with our generated LaTeX template.,mlco2,https://github.com/mlco2/impact.git,github,,Computation and Communication,"2023/01/25, 02:03:23",160,0,56,true,HTML,,mlco2,"HTML,SCSS,CSS,JavaScript",https://mlco2.github.io/impact,"b""*This repository is passively maintained: issues will be addressed -- especially if they come with a suggested solution -- but there are no active updates to the data or code. 
Contributions are welcome.*\n\n# Machine Learning's CO2 Impact\n\nCheck out the [**online GPU emissions calculator**](https://mlco2.github.io/impact)!\n\n[![](https://i.postimg.cc/pTqVSx7N/Capture-d-e-cran-2019-11-07-a-12-41-58.png)](https://mlco2.github.io/impact)\n\nBy A. Lacoste, A. Luccioni, V. Schmidt\n\nRead our paper on [**Quantifying Carbon Emissions of Machine Learning**](https://arxiv.org/pdf/1910.09700) (NeurIPS 2019, Climate Change AI Workshop).\n\nUse our **generated LaTeX template**, which automatically includes the calculator's output, to easily report your procedure's CO2 eq. emissions.\n\n[![](https://raw.githubusercontent.com/mlco2/impact/master/img/template.png)](https://mlco2.github.io/impact#publish)\n\n## Contributing\n\n### Setup\n\n1. [Install `yarn`](https://classic.yarnpkg.com/lang/en/docs/install/#mac-stable): Node's package manager\n2. [Install `gulp`](https://gulpjs.com/): a build tool\n3. Install dependencies: from the root of this repo, `$ yarn install`\n4. Run the local server: `$ gulp watch`\n5. Edit files! Gulp will watch for changes, build the differences and reload the browser.\n\n### Content\n\n`html` files are split by section in the `html/` folder, and then built into the `index.html` file.\n\nAfter editing content, if `gulp watch` was running, you're good; otherwise, run `gulp build` to apply your changes.\n\n### Data\n\nAnything to say or add? See [`data/`](https://github.com/mlco2/impact/tree/master/data)\n\n## Acknowledgements\n\nSpecial thanks to\n\n* https://sharingbuttons.io/\n* [prism.js](https://prismjs.com/)\n* https://unsplash.com/\n""",",https://arxiv.org/pdf/1910.09700","2019/08/13, 14:57:51",1534,MIT,19,166,"2023/09/18, 16:39:24",6,31,38,12,37,0,0.1,0.14432989690721654,,,1,11,false,,false,false,,,https://github.com/mlco2,,,,,https://avatars.githubusercontent.com/u/54071934?v=4,,, CodeCarbon,Track emissions from Compute and recommend ways to reduce their impact on the environment.,mlco2,https://github.com/mlco2/codecarbon.git,github,,Computation and Communication,"2023/10/15, 19:00:39",784,462,274,true,Jupyter Notebook,,mlco2,"Jupyter Notebook,Python,CSS,Dockerfile,Mako,Shell,Makefile",https://mlco2.github.io/codecarbon,"b'![banner](docs/edit/images/banner.png)\n\nEstimate and track carbon emissions from your computer, quantify and analyze their impact.\n\n[**Documentation**](https://mlco2.github.io/codecarbon)\n\n
\n\n[![](https://anaconda.org/conda-forge/codecarbon/badges/version.svg)](https://anaconda.org/conda-forge/codecarbon)\n[![](https://img.shields.io/pypi/v/codecarbon?color=024758)](https://pypi.org/project/codecarbon/)\n[![DOI](https://zenodo.org/badge/263364731.svg)](https://zenodo.org/badge/latestdoi/263364731)\n\n- [About CodeCarbon \xf0\x9f\x92\xa1](#about-codecarbon-)\n- [Quickstart \xf0\x9f\x9a\x80](#quickstart-)\n  - [Installation \xf0\x9f\x94\xa7](#installation-)\n  - [Start to estimate your impact \xf0\x9f\x93\x8f](#start-to-estimate-your-impact-)\n    - [Monitoring your whole machine](#monitoring-your-machine-)\n    - [In your python code](#in-your-python-code-)\n  - [Visualize](#visualize-)\n- [Contributing \xf0\x9f\xa4\x9d](#contributing-)\n- [Contact \xf0\x9f\x93\x9d](#contact-)\n\n# About CodeCarbon \xf0\x9f\x92\xa1\n\n**CodeCarbon** started with quite a simple question:\n\n**What is the carbon emission impact of my computer program? :shrug:**\n\nWe found some global data, like ""computing currently represents roughly 0.5% of the world\xe2\x80\x99s energy consumption"", but nothing on our individual or organisational impact.\n\nAt **CodeCarbon**, we believe, along with Niels Bohr, that ""Nothing exists until it is measured"". So we found a way to estimate how much CO2 we produce while running our code.\n\n*How?*\n\nWe created a Python package that estimates the electricity consumption of your hardware (GPU + CPU + RAM) and applies the carbon intensity of the region where the computing is done.\n\n![calculation Summary](docs/edit/images/calculation.png)\n\nWe explain more about this calculation in the [**Methodology**](https://mlco2.github.io/codecarbon/methodology.html#) section of the documentation.\n\nOur hope is that this package will be used widely for estimating the carbon footprint of computing, and for establishing best practices with regard to the disclosure and reduction of this footprint.\n\n**So ready to ""change the world one run at a time""? Let\'s start with a very quick set up.**\n\n# Quickstart \xf0\x9f\x9a\x80\n\n## Installation \xf0\x9f\x94\xa7\n\n**From the PyPI repository**\n```sh\npip install codecarbon\n```\n\n**From the Conda repository**\n```sh\nconda install -c conda-forge codecarbon\n```\nTo see more installation options, please refer to the documentation: [**Installation**](https://mlco2.github.io/codecarbon/installation.html#)\n\n## Start to estimate your impact \xf0\x9f\x93\x8f\n\nTo get an experiment_id, enter:\n```python\n! 
codecarbon init\n```\nYou can now store it in a **.codecarbon.config** file at the root of your project:\n```ini\n[codecarbon]\nlog_level = DEBUG\nsave_to_api = True\nexperiment_id = 2bcbcbb8-850d-4692-af0d-76f6f36d79b2 # the experiment_id you get with init\n```\nNow you have 2 main options:\n\n### Monitoring your machine \xf0\x9f\x92\xbb\n\nIn your command prompt, use:\n```sh\ncodecarbon monitor\n```\nThe package will track your emissions independently from your code.\n\n### In your Python code \xf0\x9f\x90\x8d\n```python\nfrom codecarbon import track_emissions\n\n@track_emissions()\ndef your_function_to_track():\n    # your code\n    ...\n```\nThe package will track the emissions generated by the execution of your function.\n\nThere are other ways to use the **codecarbon** package; please refer to the documentation to learn more: [**Usage**](https://mlco2.github.io/codecarbon/usage.html#)\n\n## Visualize \xf0\x9f\x93\x8a\n\nYou can now visualize your experiment emissions on the [dashboard](https://dashboard.codecarbon.io/).\n![dashboard](docs/edit/images/dashboard.png)\n\n*Note that for now, all emissions data sent to the codecarbon API are public.*\n\n> Hope you enjoy your first steps monitoring your carbon computing impact!\n> Thanks to the incredible codecarbon community \xf0\x9f\x92\xaa\xf0\x9f\x8f\xbc, a lot more options are available using *codecarbon*, including:\n> - offline mode\n> - cloud mode\n> - comet integration...\n>\n> Please explore the [**Documentation**](https://mlco2.github.io/codecarbon) to learn about them.\n> If what you are looking for is not yet implemented, let us know through the *issues*, and even better, become one of our \xf0\x9f\xa6\xb8\xf0\x9f\x8f\xbc\xe2\x80\x8d\xe2\x99\x80\xef\xb8\x8f\xf0\x9f\xa6\xb8\xf0\x9f\x8f\xbc\xe2\x80\x8d\xe2\x99\x82\xef\xb8\x8f contributors! More info \xf0\x9f\x91\x87\xf0\x9f\x8f\xbc\n\n# Contributing \xf0\x9f\xa4\x9d\n\nWe are hoping that the open-source community will help us edit the code and make it better!\n\nYou are welcome to open issues, suggest solutions, and, better still, contribute the fix/improvement! We can guide you if you\'re not sure where to start but want to help us out \xf0\x9f\xa5\x87\n\nIn order to contribute a change to our code base, please submit a pull request (PR) via GitHub, and someone from our team will go over it and accept it.\n\nCheck out our [contribution guidelines :arrow_upper_right:](https://github.com/mlco2/codecarbon/blob/master/CONTRIBUTING.md)\n\nContact [@vict0rsch](https://github.com/vict0rsch) to be added to our Slack workspace if you want to contribute regularly!\n\n# Contact \xf0\x9f\x93\x9d\n\nMaintainers are [@vict0rsch](https://github.com/vict0rsch), [@benoit-cty](https://github.com/benoit-cty) and [@SaboniAmine](https://github.com/saboniamine). 
Codecarbon is developed by volunteers from [**Mila**](http://mila.quebec) and the [**DataForGoodFR**](https://twitter.com/dataforgood_fr) community alongside donated professional time of engineers at [**Comet.ml**](https://comet.ml) and [**BCG GAMMA**](https://www.bcg.com/en-nl/beyond-consulting/bcg-gamma/default).\n'",",https://zenodo.org/badge/latestdoi/263364731","2020/05/12, 14:44:03",1261,MIT,277,1472,"2023/10/15, 19:00:39",65,233,397,125,10,10,1.1,0.7847222222222222,"2023/08/16, 12:19:46",v2.3.1,12,51,false,,false,true,"julesname/midtermCheck,ericaannnie/midtermtesttrial,ericaannnie/midterm,jackdoylejackdoyle/midterm,issamemari/real-estate-analysis,ocislyjrti/NeuralAttention,sayuh07/StreamLitProject,intel/osseu-llm-demo,MLOps-essi-upc/TAED2-clothing-reviews,ninaneens/YouTube,ME-ICA/fmripost-tedana,datarootsio/mlflow-emissions-sdk,yining610/in-context-generalization,EdVince/whisper-trtllm,MichelleElizabethK/bias-detection-on-audio-data,furqan-y-khan/TecheEmmisions,ashwincv0112/transformers,MLOps-essi-upc/MLOps-braint,meslubi2021/transformers,narutohyc/transformers,AliAl-Gburi/mlflow-emissions-sdk,alex-askr/llm-api,wujianP/ngc-workspace,pradeepmisal/TechEmmisions,MilaNLProc/interpretability-mt-gender-bias,skzhang1/IDEAL,abishekat/azure-xai-service,tjisousa/green_it_practical,MLOps-essi-upc/taed2-ML-Alphas,ALLIDOISWINFORYOU/transformers,KpKqwq/CHLS,BorrisonXiao/whisper-st,baler-collaboration/baler,TokisakiKurumi2001/transformers_mistral,MLOps-essi-upc/MLOps-TeamBeans,MLOps-essi-upc/MLOps-SentiBites,Rhine-AI-Lab/ThinkingMath,osmarks/transformers-patch-siglip,drasaadmoosa/LMOps,sarvex/LMOps,PingAnIntelligence/paii_transformers,EjbejaranosAI/EmotionUnify,ScorpionBytes/HF-Transformers,taheeraahmed/carbon-footprint-trackers,gkbharathy/LMOps,uvdhatri/dugdashdiscovery,jvzoov/huggingface,taneset/RAGLLM,Jay-Sung/transformers,Yousef-Mush/LM_Harness_Nemo,dariob95/Federated-learning,pouya-haghi/HF-autotrain,tmplxz/xpcr,beratcmn/autotrain-webui,leha-ux/Dashboard_GenerativeAI,kssteven418/SqueezeLLM-gradients,christophe-cerin/Ecoindex-Revisited,Binn37/bert-pytorch,himanshusin/img_test,umasolution/python_2,lih1130/newT5,mahdiabdollahpour/beam-search-with-rollouts,68thandMaine/deep_learning_for_coders,lq147258369/bert-learning,kang-tech/-,kilitary/dash-apps-gaming,kumulaor/test,ksquarekumar/jupyter-docker,Pratikshaa1216/CodeGreen,rudolfKischer/fastAssistant,peterjhwang/llama-api,donnate/gnumap,aramis-lab/clinicadl,joshbickett/basic-llama-convert,TinusAlsos/TDT4265_project,Hanpx20/Anchor_Data_Preprocess,ArtificialZeng/transformers-Explained,satyam5465/huggingface,SimengSun/alpaca_farm_lora,michaelmior/annotate-schema,JacobHeldt/SyntheticEye,Atharva7K/MMS-Code-Switching,GAISSA-UPC/energydl,Bobby-Hua/summarization-via-semantic-graph,fe1ixxu/ALMA,jprachir/image_to_text_converter,GAISSA-UPC/energydl-full,awsm-research/VQM,voidism/DoLa,gmrandazzo/RegressionPipeliner,software-energy-cost-studies/profiling,shinkenuu/rag,anshsarkar/transformers-langchain,alvarodr21/energydl,Jhj9/BRL-Chatbot-Demo,fxmarty/transformers-hard-fork,valexsyu/llama-recipes-NAT,IowaSanae/electrolyte-chatbot,BoyuanJackChen/transformers-v4.29,BoyuanJackChen/transformers-v4.32,markavale/llama-2-service,chau25102001/Intent_and_Slot,chengxuz/lm_eval_for_MLM,paniniDot/summarization-model,richard-urena/ecodreamers-api-lambda,Bit0r/fish-config,BaguHo/Llama2-7b-korean-using-QLoRA,yegcjs/DiffusionLLM,phimer/softDsim-green,apoorvakliv/fed2tier,VivianL292/CO2EmissionsModelTesting,islive233/transformers-4.31.0,SamKenX-Hub-Community/SAMkenXTransformers
,cmougan/SelectiveRegression,tammypi/llama-finetune-total,HivaMohammadzadeh1/feedback,mkingopng/nineveh,stan-hill-datatonic/ml-emissions-experiments,Hill-Research/TextExtraction,Hill-Research/FigureClassification,2lambda123/transformers,MLOps-essi-upc/MLOps2023Course-demo,navnit3366/transformers-main,kianwoon/autotrain,tooniez/transformers,sampangtf/ML-wLimited-Supervision-XLM_CLIP,OwenXu6/transformer,rgobinat/TamilGPT,XDeepAzure/nmt-corrector-src,iamrajatroy/Data-Science-Lab,Say383/transformers,jlaumonier/apnea,Eric3911/OpenLLM,alesimattia/Prog_DeepL-ZeroShotLearning-analogy-based,AnneHartebrodt/codegreen-client,OMoooMO/transformer-4.31.0.dev0,EliahKagan/transformers,mboissiere/CityNoise-project-28095,HayaRizel/transformers_project,syskn/transformers-4.30.1,XDeepAzure/Train_code,sovdevs/tubetranslate-api,Zuckerbird/transformerswithLoRA,mathislindner/log-summary,getalp/SmartComp2023-HAR-Supervised-Pretraining,JiaqiLi404/SemiSupervisedObjectDetection,karim-aboelazm/transformers,byungdoh/llm_decomposition,geminiwenxu/Tokenizers,Sonata165/ControllableLyricTranslation,MilaNLProc/simple-generation,ArtificialZeng/tranformers-expalined,tmhho/FixMatch,gaetanbrison/summerschool,Vito-Scaraggi/PoseEstimator,boettiger-lab/approx-model-or-approx-soln,JAEarly/MIL-Multires-EO,gnngo4/animalfmritools,alemolteni/codecarbon_project,RyozoMasukawa/Unilog_Reproduction,alexandrehsd/asgard,luca-rossi/analogy-based-zsl,ejhusom/d2m,allenai/efficiency-pentathlon,christopherburatti/CV-DeepLearning,jcsenciales/transformers,ielab/wandc,narest-qa/repo73,chachoutaieb/encoding_energy_co2,xiaojunjun65/transformers-mlu_4.27.1,SundayZhao/codebert_gec,Anonymous25645/CodePLAN,5tghrt/weefggr,tornede/py_experimenter,octo-technology/Formation-MLOps-3,Ellariel/train-test,GAISSA-UPC/ML-models-compression-for-energy-efficiency,189569400/huggingface_transformers,Pituel/CARLA_RL_TFM_UOC,gonzalo-cordova-pou/MLADHD,fitbenchmarking/fitbenchmarking,transmuteAI/trailmet,mukhal/grace,sainzunai/MUCSI_proyecto_DL_NLP,waleedhassankhan/Transformers,metaed-gauxplay/transformers-hugginface,huggingface/autotrain-advanced,microsoft/LMOps,mmweka/transformers-t5,omrisapir1/transformers,Saurabh1826/CNT_Research_Template,IPmu/transformers,RECeSS-EU-Project/stanscofi,interactivetech/deepspeed-mpt-test,ryfont/transformers,cwilldoner/practicalwork,meghanav13/cnt-research,SnowdenH/transformers_hzy_4290,dayu11/selective_pretraining_for_private_finetuning,griff4692/edu-sum,XMUDM/PIDPA,GliozzoJ/pathonet_compression,yfqiu-nlp/mfact-summ,manon-reusens/text-classification-benchmark,pranavajitnair/DAPA,debayan/sparql-vocab-substitution,UKPLab/2022-RAFT,greenpeace/gpi-techlab-ipr-protection-webapp,Fujitsu-Systems-Europe-FSE/Apheleia,chatprism/transformers,launchnlp/BOLT,gaetanbrison/Finance-app,snapADDY/transformers,fededge/framework,SundayZhao/newRepo,dani-kjh/TFG_replication_package,SaiS-TJHSST/Visual-BERT-Embeddings-Demo,TheMrSheldon/GBaRD,yueming-zhang/transformers,NeurIPS2023-7956/BiLD,MakakWasTaken/green-analytics-python,desmondlew556/ViTPointFuser,sxnohnarla/MTGP,JohannesGetzner/dl-energy-estimator,anupamkliv/FedERA,disi-unibo-nlp/easumm,congtuong/transformers,egegulerr/FederatedLearning,topwhere/transformers,anupam-kliv/fl_framework_initial,griff4692/calibrating-summaries,gaetanbrison/maternal,nhinguyen3499/gtao,SteadyBits/rai_av,CharlesBoydelaTour/EcoCode,dd-test5/transformers,kristinasisiakova/Data-Science---Final-Project,NYU-DS-4-Everyone/Hotel-Bookings,Kihansi95/Linkmedia_AttentionPlausibilityByConstraint,sarvex/transformers,Knarik1/Cross_Lingual_Domain
_Generalization,Hamsanand13/Mini-Project,szscer/huggingface,hzvolkan/transformers,yuchenbian/transformers_2020,liuyeah/transformers,gaetanbrison/crisp-test,Veronicium/AnchorDR,josephgiovanelli/mo-importance,haotian-liu/transformers_llava,ishaslavin/April17_Transformers_Custom,DFKI-NI/green_automl_for_plastic_litter_detection,GreenAI-Uppa/la-derniere-bibliotheque,cc0408/bart_emo,i-Eval/ieval-instruction,B-Prasanth3/Food-classification,wxjiao/ParroT,MMV-Lab/EfficientBioAI,UPennBJPrager/CNT_Research_Template,jessikamakinen/thesis_jessika,noggame/gpt4all-kor,goriri/alpaca-training,gaetanbrison/predictive-analytics-app,huangch/gpt,MikeGu721/EasyLLM,zhangbo2008/transformers_4.28_annotated,zentrum-lexikographie/eval-de-lemma,kesperinc/huggingface_transformer,KpKqwq/LSPG,Hornet-Developer/transformers,kssteven418/transformers-alpaca,pasqualedem/EzDL,tsaoni/others-work,doapply/transformers,GreenAITorch/GATorch,Berrylcm/transformers,trujillola/Data-Challenge,Arkai-t/Data-Battle-DatAvengers,Anon-Team/VIT-AVR,NeuroTechX/moabb,KseniaSycheva/LMInference,briancabbott/ChatNow,LocalLegend517/transformers,MJ2090/llama,pedrohmeiraa/TEDA-Regressor,chaoyi-wu/Finetune_LLAMA,mylu/transformers,GiuseppePerugia/Green-Coding-Competition-FY23,Mulatingz/API_PI-2,gymeee0715/ACSSR,lxe/transformers,Navya-Manya/MVP,melihogutcen/transformers,ermenkov/mlcrap,lolofo/AttentionGeometry,cedrickchee/transformers-llama,samkenxstream/SAMkenXTransformers,aleclagarde/tfg,Centaurioun/transformers,ijakenorton/Summary_Framework,neuralswarm/models,Pandafluff025/transformers,kssteven418/BigLittleDecoder,Mojino01/hugging_face,dude2033/data_synthesizer,lasigeBioTM/exposome_NER_NEL,neuralmagic/transformers,Zappandy/spoken_language_detector,buaa-hipo/mimose-transformers,keitokudo/dentaku_skill_tree,MSD-IRIMAS/CF-4-TSC,Lestropie/fmriprep,ShahadBakhsh-1/fmriprep,beggu2007/AIAI-eval,av1m/flask-sustainable,dataforgoodfr/python-dash-template,vladostp/an-experimental-comparison-of-software-based-power-meters,tsalo/fmriprep,trujillola/Detection-Wildfire,TimDettmers/transformers_private,hannawong/prompt_MBART,AashrayGupta2003/Custom_Transformer,w8988998ww/Xiaoshuodiyigwenjian,pinguskku/EnBee,DachengLi1/MPCFormer,SamiNenno/Domain-Adaptation-of-Claim-Detection,IlievskiV/Amusive-Blogging-N-Coding,manojkumartjpk/transformers,nipreps/fmriprep,ZeruiW/XAI-Service,hochschule-darmstadt/MetaAutoML,Xiefeng69/stance-detection-for-covid19-related-health-policies,adarraillan/Use_Case_SPIE,psykei/psyki-python,SamiNenno/Claim-Detection,EricssonResearch/spreadnet,shalevy1/pytorch-transformers,TadakaSuryaTeja/Automation_selenium,weimengmeng1999/Transformers--CLIPSeg,maira123g/projct,Nkluge-correa/teeny-tiny_castle,michelleespranita/mlmi-prototype,hamanhbui/reliable_ssl_baselines,Diaffat/SME,JingWang-RU/ALBUS_activelearningmrc,kamfonas/transformers,minimalparts/SemanticHashingFFA,bigcode-project/transformers,fani-lab/SEERa,BrightKang/transformers,Insomnia-y/news-sum,Aafiya-H/transformer-decoder,pedrohmeiraa/TEDA,disi-unibo-nlp/nlg-metricverse,CaffreyR/time-for-t5,pavankumarbannuru/transformers_huggingface,chooper1/transformers,franciszzj/transformers_mis,JAEarly/MIL-Land-Cover-Classification,dani-kjh/aws_fastapi_text,HPAI-BSC/tl-tradeoff,Kihansi95/ExplanationPairSentencesTasks,dani-kjh/heroku-fastapi,aarnphm/transformers,ComputationalResearchProjects/transformers,GreenAIproject/ICT4S22,William3Johnson/transformers,luismesalas/codecarbon-minidemo,gonzalo-cordova-pou/TextMood,albakoehler/taed2,samarawickrama/NLP-Transformers,qianzmolloy/transformers,disi-unibo-nlp/kg-emb-
link-pred,qzqdz/transformers4,mtran5/PubMedQA,stephaniebrandl/eyetracking-subgroups,mlco2/codecarbon,CrystalGazers/TAED2_CrystalGazers,ExoDAO-Network/transformers,jaws777/dash-sample-apps,graphcore/transformers-fork,Wuhn/efficiency-and-debugging-experiments,hatrungdung/transformers,taeyang916/SL_detection,Tharolzakariya/transformers,De30/transformers,chin-liang/chin-liang,autonomio/EasyEnergy,griff4692/abstract_gen,xlang-ai/icl-selective-annotation,DecBayComp/gratin,Prograf-UFF/ConformalLayers,iVincentHH/HuggingFace,alexandrainst/alexandra_ai,MeenaSunderam/responsible_ai,someshfengde/internship-tasks,yzc1114/DLProfiler,danilo-carastan-santos/ai-energy-consumption,alexandrainst/alexandra_ai_eval,jeffrey82221/monad_playground,PeARSearch/PeARS-fruit-fly,shiqichen17/knnlm,freezer2019/dash-sample-apps,discus0434/tweetgen-from-timeline,Nathanlauga/nbcodecarbon,ayansengupta17/transformers,misanchz98/TFG-Project,shiqichen17/housby-adapter,JasonArmitage-res/PM-VLN,JasonA1/PM-VLN-ID-672-Review,JasonA1/PM-VLN-Review,LeBenchmark/NeurIPS2021,PeARSearch/PeARS-multilingual-fly,greatdevaks/iiit-codecarbon,lanngoc10a/plotly-dash,manhtientran/transformers-v1,sajastu/MultiPScienceSum,mcantu-ghas-examples/transformers,tannonk/understanding_control_tokens,Jahb/REMA_Base,alexhroom/pytest-codecarbon,marianna13/pile_tokenizer,Llewe/Autonome-Systeme-Praktikum,remla2022/stackoverflow-tagger,tHrhxcv/transformers,Lemarais/semantic_parsing_transformers,Splend1d/hfDUAL,emrecncelik/zeroshot-turkish,sms821/BERT-squad-distributed,armored-guitar/dewarp_master_abby,KiriKoppelgaard/Classifying-Breast-Cancer-from-Mammograms-Using-CNNs-and-Transfer-Learning,SteveineiterTU/NLP_test,openfoodfacts/off-category-classification,JulianBiesheuvel/REMLA,MeenaSunderam/rAI,KimJaehee0725/Syllogistic-Commonsense-Reasoning,ekvall93/bookrecommendation,ielab/green-ir,griff4692/faith-sum,JoeyeS0/HFACE,ianng1/transformers,tony-rsa/mimtk-dash-ui,raphischer/carelabels_with_mlflow,huggingface/transformers,miszkur/data-efficient-NLP,dataforgoodfr/climatewatch,totocto/MLXChallenge,l-gonz/tfg-gitt-mlcost,lu1120/dash,plotly/dash-sample-apps,ArianeDlns/ML-Drinking_Water_Potability,UKPLab/conll2021-metaphoric-paraphrase-generation,Apoorvgarg-creator/AI-Image-Caption-Bot,Kerl1310/codecarbon_tutorial,jlaumonier/mlsurvey,clementpoiret/bagginghsf,Hstellar/FHIR_Hack,pierresegonne/SGGM,ValentinRicher/food-classification,cc-ai/climategan,mikaelberglund/pollution_map,kungfuai/d3m-segmentation-research",,https://github.com/mlco2,,,,,https://avatars.githubusercontent.com/u/54071934?v=4,,, experiment-impact-tracker,"Meant to be a simple drop-in method to track energy usage, carbon emissions, and compute utilization of your system.",Breakend,https://github.com/Breakend/experiment-impact-tracker.git,github,,Computation and Communication,"2021/06/04, 17:06:52",253,0,35,false,Python,,,"Python,CSS,HTML",,"b'# experiment-impact-tracker\n\nThe experiment-impact-tracker is meant to be a simple drop-in method to track energy usage, carbon emissions, and compute utilization of your system. Currently, on Linux systems with Intel chips (that support the RAPL or powergadget interfaces) and NVIDIA GPUs, we record: power draw from CPU and GPU, hardware information, python package versions, estimated carbon emissions information, etc. 
In California we even support realtime carbon emission information by querying caiso.com!\n\nOnce all this information is logged, you can generate an online appendix which shows off this information as seen here:\n\nhttps://breakend.github.io/RL-Energy-Leaderboard/reinforcement_learning_energy_leaderboard/pongnoframeskip-v4_experiments/ppo2_stable_baselines,_default_settings/0.html\n\n## Installation\n\nTo install:\n\n```bash\npip install experiment-impact-tracker\n```\n\n## Usage\n\nPlease go to the docs page for detailed info on the design, usage, and contributing: https://breakend.github.io/experiment-impact-tracker/ \n\nIf you think the docs aren\'t helpful or need more expansion, let us know with a Github Issue!\n\nBelow we will walk through an example together.\n\n### Add Tracking\nWe included a simple example in the project which can be found in ``examples/my_experiment.py``.\n\nAs shown in ``my_experiment.py``, you just need to add a few lines of code!\n\n```python\nfrom experiment_impact_tracker.compute_tracker import ImpactTracker\ntracker = ImpactTracker()\ntracker.launch_impact_monitor()\n```\n\nThis will launch a separate Python process that will gather compute/energy/carbon information in the background.\n\n**NOTE: Because of the way Python multiprocessing works, this process will not interrupt the main one even if the monitoring process errors out. To address this, you can add the following to periodically read the latest info from the log file and check for any errors that might\'ve occurred in the tracking process. If you have a better idea on how to handle exceptions in the tracking thread please open an issue or submit a pull request!**\n\n```python\ninfo = tracker.get_latest_info_and_check_for_errors()\n```\n\nAlternatively, you can use context management!\n\n```python\nimport tempfile\n\nexperiment1 = tempfile.mkdtemp()\nexperiment2 = tempfile.mkdtemp()\n\nwith ImpactTracker(experiment1):\n do_something()\n\nwith ImpactTracker(experiment2):\n do_something_else()\n```\n\nTo kick off our simple experiment, run ``python my_experiment.py``. You will see the training start, and at the end the script will output something like ``Please find your experiment logs in: /var/folders/n_/9qzct77j68j6n9lh0lw3vjqcn96zxl/T/tmpcp7sfese`` \n\nNow let\'s go over to the temp dir, where we can see our logging:\n```bash\n$ log_path=/var/folders/n_/9qzct77j68j6n9lh0lw3vjqcn96zxl/T/tmpcp7sfese\n$ cd $log_path\n$ tree \n.\n└── impacttracker\n ├── data.json\n ├── impact_tracker_log.log\n └── info.pkl\n```\n\nYou can then access the information via the DataInterface:\n\n```python\nfrom experiment_impact_tracker.data_interface import DataInterface\n\ndata_interface1 = DataInterface([experiment1_logdir])\ndata_interface2 = DataInterface([experiment2_logdir])\n\ndata_interface_both = DataInterface([experiment1_logdir, experiment2_logdir])\n\nassert data_interface1.kg_carbon + data_interface2.kg_carbon == data_interface_both.kg_carbon\nassert data_interface1.total_power + data_interface2.total_power == data_interface_both.total_power\n```\n\n
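Putting the pieces together, here is a minimal end-to-end sketch (the ``train_one_epoch`` function and the explicit log directory are hypothetical; only calls documented above are used) that also performs the periodic error check recommended in the note above:\n\n```python\nimport tempfile\n\nfrom experiment_impact_tracker.compute_tracker import ImpactTracker\nfrom experiment_impact_tracker.data_interface import DataInterface\n\nlogdir = tempfile.mkdtemp()\ntracker = ImpactTracker(logdir)\ntracker.launch_impact_monitor()\n\nfor epoch in range(10):\n train_one_epoch() # hypothetical training step\n # surface any exception raised in the background monitoring process\n info = tracker.get_latest_info_and_check_for_errors()\n\nprint(DataInterface([logdir]).kg_carbon) # carbon recorded so far, in kg\n```\n\n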
### Creating a carbon impact statement\n\nWe can also use a script to automatically generate a carbon impact statement for your paper! Just call this script; we\'ll find all the logfiles generated by the tool and calculate emissions information! Specify your ISO3 country code as well to get a dollar amount based on the per-country cost of carbon.\n\n```bash\ngenerate-carbon-impact-statement my_directories that_contain all_my_experiments ""USA""\n```\n\n#### Custom PUE\n\nSome people may know the PUE of their data center; by default, we use a PUE of 1.58 in our calculations. To set a\ndifferent PUE, do:\n \n```bash\nOVERRIDE_PUE=1.1 generate-carbon-impact-statement my_directories that_contain all_my_experiments ""USA""\n```\n\n\n### Generating an HTML appendix\n\nAfter logging all your experiments into a dir, we can automatically search for the impact tracker\'s \nlogs and generate an HTML appendix.\n\nFirst, create a json file with the structure of the website you\'d like to see \n(this lets you create hierarchies of experiments as web pages).\n\nFor an example of all the capabilities of the tool you can see the json structure \nhere: https://github.com/Breakend/RL-Energy-Leaderboard/blob/master/leaderboard_generation_format.json\n\n\nBasically, you can group several runs together and specify variables to summarize. You should probably just copy and paste the example above and remove what you don\'t need, but here are some descriptions of what is being specified:\n\n```javascript\n""Comparing Translation Methods"" : {\n # FILTER: this regex we use to look through the directory \n # you specify and find experiments with this in the directory structure,\n ""filter"" : ""(translation)"", \n \n # Use this to talk about your experiment\n ""description"" : ""An experiment on translation."", \n \n # executive_summary_variables: this will aggregate the sums and averages across these metrics.\n # you can see available metrics to summarize here: \n # https://github.com/Breakend/experiment-impact-tracker/blob/master/experiment_impact_tracker/data_info_and_router.py\n ""executive_summary_variables"" : [""total_power"", ""exp_len_hours"", ""cpu_hours"", ""gpu_hours"", ""estimated_carbon_impact_kg""], \n \n # The child experiments to group together\n ""child_experiments"" : \n {\n ""Transformer Network"" : {\n ""filter"" : ""(transformer)"",\n ""description"" : ""A subset of experiments for transformer experiments""\n },\n ""Conv Network"" : {\n ""filter"" : ""(conv)"",\n ""description"" : ""A subset of experiments for conv experiments""\n }\n \n }\n}\n```\n\nThen you just run this script, pointing to your data, the json file and an output directory. \n\n```bash\ncreate-compute-appendix ./data/ --site_spec leaderboard_generation_format.json --output_dir ./site/\n```\n\nTo see this in action, take a look at our RL Energy Leaderboard. \n\nThe specs are here: https://github.com/Breakend/RL-Energy-Leaderboard\n\nAnd the output looks like this: https://breakend.github.io/RL-Energy-Leaderboard/reinforcement_learning_energy_leaderboard/\n\n\n### Looking up cloud provider emission info\n\nBased on energy grid locations, we can estimate emissions from cloud providers using our tools. 
A script to do that is here:\n\n```bash\nlookup-cloud-region-info aws\n```\n\n### Or you can look up emissions information for your own address!\n\n```bash\n\n% get-region-emissions-info address --address ""Stanford, California""\n\n({\'geometry\': ,\n \'id\': \'US-CA\',\n \'properties\': {\'zoneName\': \'US-CA\'},\n \'type\': \'Feature\'},\n {\'_source\': \'https://github.com/tmrowco/electricitymap-contrib/blob/master/config/co2eq_parameters.json \'\n \'(ElectricityMap Average, 2019)\',\n \'carbonIntensity\': 250.73337617853463,\n \'fossilFuelRatio\': 0.48888711737336304,\n \'renewableRatio\': 0.428373256377554})\n \n ```\n\n### Asserting certain hardware\n\nIt may be the case that you\'re trying to run two sets of experiments and compare emissions/energy/etc. In this case, you generally want to ensure that there\'s parity between the two sets of experiments. If you\'re running on a cluster you might not want to accidentally use a different GPU/CPU pair. To get around this we provide an assertion check that you can add to your code that will kill a job if it\'s running on a wrong hardware combo. For example:\n\n```python\nfrom experiment_impact_tracker.gpu.nvidia import assert_gpus_by_attributes\nfrom experiment_impact_tracker.cpu.common import assert_cpus_by_attributes\n\nassert_gpus_by_attributes({ ""name"" : ""GeForce GTX TITAN X""})\nassert_cpus_by_attributes({ ""brand"": ""Intel(R) Xeon(R) CPU E5-2640 v3 @ 2.60GHz"" })\n```\n\n## Building docs\n\n```bash\nsphinx-build -b html docsrc docs\n```\n\n## Compatible Systems\n\nRight now, we\'re only compatible with Linux and Mac OS X systems running NVIDIA GPUs and Intel processors (which\nsupport RAPL or PowerGadget). \n\nIf you\'d like support for your use-case or encounter missing/broken functionality on your system specs, please open an issue or better yet submit a pull request! It\'s almost impossible to cover every combination on our own!\n\n### Mac OS X Support\n\nCurrently, we support only CPU and memory-related metrics on Mac OS X for Intel-based CPUs. However, these require the\n Intel PowerGadget driver and the Intel PowerGadget tool. The easiest way to install this is:\n \n```bash\n$ brew cask install intel-power-gadget\n$ which ""/Applications/Intel Power Gadget/PowerLog""\n```\n\nor for newer versions of OS X\n\n```bash\n$ brew install intel-power-gadget\n$ which ""/Applications/Intel Power Gadget/PowerLog""\n```\n\nYou can also see here: https://software.intel.com/content/www/us/en/develop/articles/intel-power-gadget.html\n\nThis will install a tool called PowerLog that we rely on to get power measurements on Mac OS X systems.\n\n### Tested Successfully On\n\nGPUs:\n+ NVIDIA Titan X\n+ NVIDIA Titan V\n\nCPUs:\n+ Intel(R) Xeon(R) CPU E5-2640 v3 @ 2.60GHz\n+ Intel(R) Xeon(R) CPU E5-2620 v3 @ 2.40GHz\n+ 2.7 GHz Quad-Core Intel Core i7\n\nOS:\n+ Ubuntu 16.04.5 LTS\n+ Mac OS X 10.15.6 \n\n## Testing\n\nTo test, run:\n\n```bash\npytest \n```\n\n## Citation\n\nIf you use this work, please cite our paper:\n\n```\n@misc{henderson2020systematic,\n title={Towards the Systematic Reporting of the Energy and Carbon Footprints of Machine Learning},\n author={Peter Henderson and Jieru Hu and Joshua Romoff and Emma Brunskill and Dan Jurafsky and Joelle Pineau},\n year={2020},\n eprint={2002.05651},\n archivePrefix={arXiv},\n primaryClass={cs.CY}\n}\n```\n\nAlso, we rely on a number of downstream packages and prior work to make this possible. 
For carbon accounting, we relied on open source code from https://www.electricitymap.org/ as an initial base. psutil provides many of the compute metrics we use. nvidia-smi and Intel RAPL provide energy metrics. \n\n'",,"2019/11/06, 05:44:17",1449,MIT,0,82,"2022/06/10, 19:17:15",34,20,46,0,502,8,0.3,0.2567567567567568,"2019/12/31, 20:48:09",0.1.3,0,5,false,,false,false,,,,,,,,,,, Let's Wait Awhile,Simulator and datasets for research on carbon-aware temporal workload shifting.,dos-group,https://github.com/dos-group/lets-wait-awhile.git,github,"carbon-aware-scheduling,green-computing,simulator,datasets",Computation and Communication,"2023/10/12, 08:47:31",19,0,10,true,Jupyter Notebook,DOS Group at TU Berlin,dos-group,"Jupyter Notebook,Python",,"b'# Let\'s Wait Awhile - Datasets, Simulator, Analysis\n\n\n\nThis repository contains datasets, simulation code, and analysis notebooks used in the paper ""Let\'s Wait Awhile: How Temporal Workload Shifting Can Reduce Carbon Emissions in the Cloud"":\n\n- `data/*`: Energy production and carbon intensity datasets for the regions Germany, Great Britain, France (all via the [ENTSO-E Transparency Platform](https://transparency.entsoe.eu/)) and California (via [California ISO](https://www.caiso.com/)) for the entire year 2020 +-10 days.\n- `compute_carbon_intensity.py`: The script used to convert energy production to carbon intensity data using energy source carbon intensity values provided by an [IPCC study](http://www.ipcc-wg3.de/report/IPCC_SRREN_Annex_II.pdf).\n- `simulate.py`: A simulator to experimentally evaluate temporal workload shifting approaches in data centers with the goal of consuming low-carbon energy.\n- `analysis.ipynb`: Notebook used to analyze the carbon intensity data.\n- `evaluation.ipynb`: Notebook used to analyze the simulation results.\n\nTo execute the code, install the libraries listed in `environment.yml`, e.g. by using a [conda environment](https://conda.io/).\n\n\n
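To make the idea concrete, here is a minimal sketch of temporal workload shifting (illustrative only, not the interface of `simulate.py`): given hourly carbon intensity values, a flexible job is started in the window with the lowest total intensity before its deadline.\n\n```python\n# Illustrative only: pick the lowest-carbon start hour for a shiftable job.\n# intensity: gCO2eq/kWh per hour; the job runs for `duration` hours\n# and must finish by hour `deadline`.\ndef best_start_hour(intensity, duration, deadline):\n candidates = range(deadline - duration + 1)\n return min(candidates, key=lambda s: sum(intensity[s:s + duration]))\n\nintensity = [420, 390, 310, 250, 260, 400] # made-up values\nprint(best_start_hour(intensity, duration=2, deadline=6)) # -> 3\n```\n\n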
""[Let\'s Wait Awhile: How Temporal Workload Shifting Can Reduce Carbon Emissions in the Cloud](https://arxiv.org/pdf/2110.13234.pdf)"" In the Proceedings of the *22nd International Middleware Conference*, ACM, 2021.\n\nBibTeX:\n```\n@inproceedings{Wiesner_LetsWaitAwhile_2021,\n author={Wiesner, Philipp and Behnke, Ilja and Scheinert, Dominik and Gontarska, Kordian and Thamsen, Lauritz},\n booktitle={Middleware\'21: 22nd International Middleware Conference}, \n title={Let\'s Wait Awhile: How Temporal Workload Shifting Can Reduce Carbon Emissions in the Cloud}, \n publisher = {{ACM}},\n year={2021},\n doi={10.1145/3464298.3493399}\n}\n```\n'",",https://arxiv.org/pdf/2110.13234.pdf","2021/10/21, 14:13:38",734,MIT,4,6,"2023/10/12, 08:47:31",0,2,3,3,13,0,0.5,0.4,,,0,2,false,,false,false,,,https://github.com/dos-group,https://tu.berlin/en/dos,Technische Universität Berlin,,,https://avatars.githubusercontent.com/u/5664005?v=4,,, Environmental Footprint Data,"Aims to reference as much data as possible to help organizations to evaluate the environmental footprint of their information systems, applications and digital services.",Boavizta,https://github.com/Boavizta/environmental-footprint-data.git,github,"ghg-emissions,footprint,environment,carbon-emissions,carbon-footprint,sustainability,digital-sustainability",Computation and Communication,"2023/02/23, 10:26:09",100,0,30,true,Python,Boavizta,Boavizta,Python,,"b'# Boavizta Project - Environmental Footprint Data\n\nThis data repository is maintained by [Boavizta](https://www.boavizta.org) and is complementary to Boavizta\'s environmental footprint evaluation methology. It aims to reference as much data as possible to help organizations to evaluate the environmental footprint of their information systems, applications and digital services.\n\nBoavizta database is quite exclusively derived from PCF (Product Carbon Footprint) sheets provided by the manufacturers. Methodologies used by manufactureres are not transparent and have very large margins of error and the purpose of making these data available is mainly to give ideas of orders of magnitude and to compare different models from the same manufacturer.\n\nTherefore **WE RECOMMAND NOT USING THESE DATA TO MAKE ACCURATE IMPACTS EVALUATIONS** or to compare the impacts of devices from different manufacturers.\n\nIn addition, most manufacturers rely on the [PAIA evaluation method](https://msl.mit.edu/projects/paia/main.html) developed by MIT. This method is based on data from non-public studies and Boavizta was therefore unable to evaluate its relevance.\n\nTo browse data, you can use https://dataviz.boavizta.org.\n\n## License\nThis dataset is made available under the Open Database License: http://opendatacommons.org/licenses/odbl/1.0/. 
Any rights in individual contents of the database are licensed under the Database Contents License: http://opendatacommons.org/licenses/dbcl/1.0/. This data can be freely used for any purpose, including without using Boavizta\'s methodology.\n\n## Data sets\nAt this time, we provide two CSV files grouping together data collected from manufacturers (mainly Product Carbon Footprint reports) that are publicly available:\n\n* `boavizta-data-fr.csv`: French version (`;` used as a delimiter, comma as a decimal separator)\n* `boavizta-data-us.csv`: English version (`,` used as a delimiter, dot as a decimal separator)\n\n
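For example, both files can be loaded with pandas (a minimal sketch; the column names used here are documented in the Data format section below):\n\n```python\nimport pandas as pd\n\n# English version: \',\' delimiter, \'.\' decimal separator\nus = pd.read_csv(""boavizta-data-us.csv"")\n# French version: \';\' delimiter, \',\' decimal separator\nfr = pd.read_csv(""boavizta-data-fr.csv"", sep="";"", decimal="","")\nprint(us[[""manufacturer"", ""name"", ""gwp_total""]].head())\n```\n\n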
We encourage all manufacturers to provide us with similar data or to correct potential errors in these files.\n\nThe Boavizta working group works actively to enrich these files with new data:\n* from manufacturers\n* resulting from its analyses and intended to provide ratios or average values that would simplify the evaluation\n\nPlease refer to [sources.md](sources.md) for a complete list of sources.\n\n## Contribute\nPeople are encouraged to contribute to these files.\n\nYou can easily contribute by:\n* forking this repo and submitting PRs\n* sending us an email at data@boavizta.org\n* submitting data through the [dedicated form on Boavizta\'s website](https://boavizta.org/data-form)\n\nIf any manufacturers wish to share data with us, we will be happy to discuss with them how we can efficiently synchronize this data.\n\n## Running the code\n\nDownload Chromedriver for your version of Chrome: https://chromedriver.chromium.org/downloads and move it to a folder on your PATH.\nFor Mac you can also run\n```sh\nbrew install chromedriver\n```\nand restart Chrome.\n\nThen create a python3.9 virtual environment, run\n```sh\npip install -r tools/requirements.txt\n```\nto install the required packages, and follow the instructions in the [spiders README.md](tools/spiders/README.md) to run a spider and parse the PDFs of the associated brand.\n\nWhen developing a new parser you can also follow the instructions in the [parsers README.md](tools/parsers/README.md).\n\n## Data format\n\n* `manufacturer`: Manufacturer name, e.g. ""Dell"" or ""HP""\n* `name`: Product name\n* `category`:\n * Workplace: product commonly used in a workplace\n * Datacenter: product commonly used in a data center (e.g. server, network switch, etc.)\n* `gwp_total`: GHG emissions (estimated as CO2 equivalent, the unit is kgCO2eq) through the total lifecycle of the product (Manufacturing, Transportation, Use phase and Recycling)\n* `gwp_use_ratio`: part of the GHG emissions coming from the use phase (the hypotheses for this use phase\n are detailed in the other columns, especially the `lifetime` and the `use_location`)\n* `yearly_tec`: Yearly estimated energy demand in kWh\n* `lifetime`: Expected lifetime (in years)\n* `use_location`: The region of the world in which the device usage footprint has been estimated.\n * US: United States of America\n * EU: Europe\n * DE: Germany\n * CN: China\n * WW: Worldwide\n* `report_date`: the date at which the Product Carbon Footprint report of the device was published\n* `sources`: the original URLs from which the data for this row was sourced\n* `gwp_error_ratio`: the datasheets commonly come with a diagram that shows the error margin for the footprint\n* `gwp_manufacturing_ratio`: part of the GHG emissions coming from the manufacturing phase\n* `weight`: product weight in kg\n* `assembly_location`: The region of the world in which the device is assembled\n * US: United States of America\n * EU: Europe\n * CN: China\n * Asia: Asia\n* `screen_size`: in inches\n* `server_type`: the type of server\n* `hard_drive`: the hard drive of the device if any\n* `memory`: RAM in GB\n* `number_cpu`: number of CPUs\n* `height`: the height of the device in a datacenter rack, in U\n* `added_date`: the date at which this row was added\n* `add_method`: how the data for this row was collected\n\n## About Boavizta.org\n\nBoavizta.org is a working group:\n\n* Working to improve and generalize environmental footprint evaluation in organizations\n* Federating and connecting stakeholders of the ""environmental footprint evaluation"" ecosystem\n* Helping members to improve their skills and to carry out their own projects\n* Leveraging group members\' initiatives\n'",,"2020/12/30, 15:01:59",1029,MIT,15,312,"2023/09/24, 20:33:49",18,61,73,7,31,0,0.2,0.39583333333333337,,,0,11,false,,false,false,,,https://github.com/Boavizta,https://boavizta.org/en,,,,https://avatars.githubusercontent.com/u/74682393?v=4,,, Carbonalyser,Allows you to visualize the electricity consumption and greenhouse gas emissions that your Internet browsing leads to.,carbonalyser,https://github.com/carbonalyser/Carbonalyser.git,github,"addon,ecology,carbon-emissions,firefox-addon,chrome-extension",Computation and Communication,"2022/01/03, 16:57:46",142,0,9,true,JavaScript,,,"JavaScript,HTML,CSS",https://theshiftproject.org/en/carbonalyser-browser-extension/,"b'# Carbonalyser\n\n## Installation\n\n* Firefox: https://addons.mozilla.org/fr/firefox/addon/carbonalyser/\n* Chrome and Edge: [see this issue](https://github.com/carbonalyser/Carbonalyser/issues/42)\n\n## Overview\n\nThe add-on ""Carbonalyser"" allows you to visualize the electricity consumption and greenhouse gas (GHG) emissions that your Internet browsing leads to.\n\nVisualizing these impacts helps you understand that the impacts of digital technologies on climate change and natural resources are not virtual, although they are hidden behind our screens.\n\nTo evaluate these impacts, the add-on:\n\n1. Measures the quantity of data travelling through your Internet browser,\n2. Calculates the electricity consumption this traffic leads to (with the ""1byte"" model, developed by The Shift Project),\n3. Calculates the GHG emissions this electricity consumption leads to, according to the selected location.\n\n
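In code, the three steps amount to something like this sketch (the conversion factors below are placeholders for illustration, not The Shift Project\'s actual ""1byte"" coefficients):\n\n```python\n# Placeholder factors only -- not the official ""1byte"" model values.\nKWH_PER_BYTE = 1.5e-10 # step 2: electricity per byte of traffic\nGCO2_PER_KWH = {""world"": 519, ""europe"": 276} # step 3: location carbon mixes\n\ndef ghg_for_traffic(bytes_transferred, location=""world""):\n kwh = bytes_transferred * KWH_PER_BYTE # step 1 measured the bytes\n return kwh * GCO2_PER_KWH[location] # returns grams of CO2eq\n\nprint(ghg_for_traffic(2e9)) # ~156 gCO2eq for 2 GB under these placeholders\n```\n\n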
## Features\n\n""Run analysis / Stop analysis"" - runs or stops the measuring of the data volume travelling through the Internet browser. Results shown in the pop-up window are calculated for the cumulative running time.\n\n""Reset data"" - erases the measurements and calculations made during the running time to reset the analysis to zero.\n\nChart area - features rankings of the top 5 websites in terms of data traffic and their share in the total data volume generated by browsing the Internet.\n\n""Select your location"" - allows you to choose the electrical mix to apply in the calculations of GHG emissions (by default, if none is selected, the average world mix is taken).\n\nResults area - features:\n\n* Time during which the device has been running, from the start of the analysis,\n* Quantity of data generated by Internet browsing,\n* Electricity consumption generated by data traffic,\n* GHG emissions this leads to,\n* Comparison of GHG emitted by browsing and GHG emitted by charging a smartphone,\n* Comparison of GHG emitted by browsing and GHG emitted by driving a car.\n\nThe box ""How to change that? What responsibility?"" links to publications from The Shift Project about the environmental impact of our digital uses and the solutions we have at individual and collective scales.\n\n## Privacy\n\nNone of your data are collected: all browsing data are analyzed directly on the user\'s device and are not sent or processed anywhere else in any way.\n\nThe source code of this program is available in open access, to ensure transparency and for any other purpose. \n\n## Methodology\n\nSee: https://theshiftproject.org/en/carbonalyser-browser-extension/\n\n## Support & source code\n\nQuestions, bug reports and feature requests are to be addressed to communication@theshiftproject.org\n\nSource code is available in open access: https://github.com/supertanuki/Carbonalyser\n \n## Contributors\n\nDeveloper: [Richard Hanna](https://twitter.com/richardhanna).\n\nDesigner: [Gauthier Roussilhe](http://gauthierroussilhe.com).\n\nIn partnership with Maxime Efoui-Hess for [The Shift Project](https://theshiftproject.org/en/home/).\n'",,"2019/03/06, 20:42:01",1694,MIT,0,91,"2023/03/19, 17:53:38",31,12,29,1,220,8,1.0,0.17500000000000004,"2020/01/10, 13:47:55",1.2.0,0,5,false,,false,false,,,,,,,,,,, carbontracker,Track and predict the energy consumption and carbon footprint of training deep learning models.,lfwa,https://github.com/lfwa/carbontracker.git,github,,Computation and Communication,"2023/09/13, 07:34:21",296,32,95,true,Python,,,Python,,"b'# **carbontracker**\n[![pypi](https://img.shields.io/pypi/v/carbontracker?label=pypi)](https://pypi.org/project/carbontracker/)\n[![Python](https://img.shields.io/badge/python-%3E%3D3.7-blue)](https://www.python.org/downloads/)\n[![build](https://github.com/lfwa/carbontracker/workflows/build/badge.svg)](https://github.com/lfwa/carbontracker/actions)\n[![License](https://img.shields.io/github/license/lfwa/carbontracker)](https://github.com/lfwa/carbontracker/blob/master/LICENSE)\n\n## About\n**carbontracker** is a tool for tracking and predicting the energy consumption and carbon footprint of training deep learning models as described in [Anthony et al. 
(2020)](https://arxiv.org/abs/2007.03051).\n\n## Citation\nKindly cite our work if you use **carbontracker** in a scientific publication:\n```\n@misc{anthony2020carbontracker,\n title={Carbontracker: Tracking and Predicting the Carbon Footprint of Training Deep Learning Models},\n author={Lasse F. Wolff Anthony and Benjamin Kanding and Raghavendra Selvan},\n howpublished={ICML Workshop on Challenges in Deploying and Monitoring Machine Learning Systems},\n month={July},\n note={arXiv:2007.03051},\n year={2020}}\n```\n\n## Installation\n### PyPi\n```\npip install carbontracker\n```\n\n## Basic usage\n\n#### Required arguments\n- `epochs`:\n Total epochs of your training loop.\n#### Optional arguments\n- `epochs_before_pred` (default=1):\n Epochs to monitor before outputting predicted consumption. Set to -1 for all epochs. Set to 0 for no prediction.\n- `monitor_epochs` (default=1):\n Total number of epochs to monitor. Outputs actual consumption when reached. Set to -1 for all epochs. Cannot be less than `epochs_before_pred` or equal to 0.\n- `update_interval` (default=10):\n Interval in seconds at which power usage measurements are taken.\n- `interpretable` (default=True):\n If set to True then the CO2eq are also converted to interpretable numbers such as the equivalent distance travelled in a car, etc. Otherwise, no conversions are done.\n- `stop_and_confirm` (default=False):\n If set to True then the main thread (with your training loop) is paused after `epochs_before_pred` epochs to output the prediction and the user will need to confirm to continue training. Otherwise, prediction is output and training is continued instantly.\n- `ignore_errors` (default=False):\n If set to True then all errors will cause energy monitoring to be stopped and training will continue. Otherwise, training will be interrupted as with regular errors.\n- `components` (default=""all""):\n Comma-separated string of which components to monitor. Options are: ""all"", ""gpu"", ""cpu"", or ""gpu,cpu"".\n- `devices_by_pid` (default=False):\n If True, only devices (under the chosen components) running processes associated with the main process are measured. If False, all available devices are measured (see Section \'Notes\' for jobs running on SLURM or in containers). Note that this requires your devices to have active processes before instantiating the `CarbonTracker` class.\n- `log_dir` (default=None):\n Path to the desired directory to write log files. If None, then no logging will be done.\n- `log_file_prefix` (default=""""):\n Prefix to add to the log file name.\n- `verbose` (default=1):\n Sets the level of verbosity.\n- `decimal_precision` (default=6):\n Desired decimal precision of reported values.\n\n
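As a quick illustration of these arguments, a tracker configured with several of the documented options might look like this (a sketch using only the arguments listed above):\n\n```python\nfrom carbontracker.tracker import CarbonTracker\n\ntracker = CarbonTracker(\n epochs=100,\n monitor_epochs=-1, # monitor every epoch\n update_interval=10, # seconds between power measurements\n components=""gpu,cpu"", # monitor both component types\n log_dir=""./logs/"", # write log files for later parsing\n)\n```\n\n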
#### Example usage\n\n```python\nfrom carbontracker.tracker import CarbonTracker\n\ntracker = CarbonTracker(epochs=max_epochs)\n\n# Training loop.\nfor epoch in range(max_epochs):\n tracker.epoch_start()\n \n # Your model training.\n\n tracker.epoch_end()\n\n# Optional: Add a stop in case of early termination before all monitor_epochs have\n# been monitored to ensure that actual consumption is reported.\ntracker.stop()\n```\n\n#### Example output\n##### Default settings\n```\nCarbonTracker: \nActual consumption for 1 epoch(s):\n Time: 0:00:10\n Energy: 0.000038 kWh\n CO2eq: 0.003130 g\n This is equivalent to:\n 0.000026 km travelled by car\nCarbonTracker: \nPredicted consumption for 1000 epoch(s):\n Time: 2:52:22\n Energy: 0.038168 kWh\n CO2eq: 4.096665 g\n This is equivalent to:\n 0.034025 km travelled by car\nCarbonTracker: Finished monitoring.\n```\n##### verbose=2\n```\nCarbonTracker: The following components were found: CPU with device(s) cpu:0.\nCarbonTracker: Average carbon intensity during training was 82.00 gCO2/kWh at detected location: Copenhagen, Capital Region, DK.\nCarbonTracker: \nActual consumption for 1 epoch(s):\n Time: 0:00:10\n Energy: 0.000041 kWh\n CO2eq: 0.003357 g\n This is equivalent to:\n 0.000028 km travelled by car\nCarbonTracker: Carbon intensity for the next 2:59:06 is predicted to be 107.49 gCO2/kWh at detected location: Copenhagen, Capital Region, DK.\nCarbonTracker: \nPredicted consumption for 1000 epoch(s):\n Time: 2:59:06\n Energy: 0.040940 kWh\n CO2eq: 4.400445 g\n This is equivalent to:\n 0.036549 km travelled by car\nCarbonTracker: Finished monitoring.\n```\n\n## Parsing log files\n\n### Aggregating log files\n**carbontracker** supports aggregating all log files in a specified directory to a single estimate of the carbon footprint.\n#### Example usage\n```python\nfrom carbontracker import parser\n\nparser.print_aggregate(log_dir=""./my_log_directory/"")\n```\n#### Example output\n```\nThe training of models in this work is estimated to use 4.494 kWh of electricity contributing to 0.423 kg of CO2eq. This is equivalent to 3.515 km travelled by car. 
Measured by carbontracker (https://github.com/lfwa/carbontracker).\n```\n\n### Convert logs to dictionary objects\nLog files can be parsed into dictionaries using `parser.parse_all_logs()` or `parser.parse_logs()`.\n#### Example usage\n```python\nfrom carbontracker import parser\n\nlogs = parser.parse_all_logs(log_dir=""./logs/"")\nfirst_log = logs[0]\n\nprint(f""Output file name: {first_log[\'output_filename\']}"")\nprint(f""Standard file name: {first_log[\'standard_filename\']}"")\nprint(f""Stopped early: {first_log[\'early_stop\']}"")\nprint(f""Measured consumption: {first_log[\'actual\']}"")\nprint(f""Predicted consumption: {first_log[\'pred\']}"")\nprint(f""Measured GPU devices: {first_log[\'components\'][\'gpu\'][\'devices\']}"")\n```\n#### Example output\n```\nOutput file name: ./logs/2020-05-17T19:02Z_carbontracker_output.log\nStandard file name: ./logs/2020-05-17T19:02Z_carbontracker.log\nStopped early: False\nMeasured consumption: {\'epochs\': 1, \'duration (s)\': 8.0, \'energy (kWh)\': 6.5e-05, \'co2eq (g)\': 0.019201, \'equivalents\': {\'km travelled by car\': 0.000159}}\nPredicted consumption: {\'epochs\': 3, \'duration (s)\': 25.0, \'energy (kWh)\': 1000.000196, \'co2eq (g)\': 10000.057604, \'equivalents\': {\'km travelled by car\': 10000.000478}}\nMeasured GPU devices: [\'Tesla T4\']\n```\n\n\n\n## Compatibility\n**carbontracker** is compatible with:\n- NVIDIA GPUs that support [NVIDIA Management Library (NVML)](https://developer.nvidia.com/nvidia-management-library-nvml)\n- Intel CPUs that support [Intel RAPL](http://web.eece.maine.edu/~vweaver/projects/rapl/rapl_support.html)\n- Slurm\n- Google Colab / Jupyter Notebook\n\n\n## Notes\n### Availability of GPUs and Slurm\n- Available GPU devices are determined by first checking the environment variable `CUDA_VISIBLE_DEVICES` (only if `devices_by_pid`=False; otherwise we find devices by PID). This ensures that for Slurm we only fetch GPU devices associated with the current job and not the entire cluster. If this fails we measure all available GPUs.\n- NVML cannot find processes for containers spawned without `--pid=host`. This affects the `devices_by_pid` parameter and means that it will never find any active processes for GPUs in affected containers. 
\n\n## Extending **carbontracker**\nSee [CONTRIBUTING.md](CONTRIBUTING.md).\n\n\n## carbontracker in media\n* Official press release from University of Copenhagen can be obtained here: [en](https://news.ku.dk/all_news/2020/11/students-develop-tool-to-predict-the-carbon-footprint-of-algorithms/) [da](https://nyheder.ku.dk/alle_nyheder/2020/11/studerende-opfinder-vaerktoej-der-forudsiger-algoritmers-co2-aftryk/)\n\n* Carbontracker has received some attention in popular science forums within, and outside of, Denmark [[1](https://videnskab.dk/teknologi-innovation/kunstig-intelligens-er-en-kaempe-klimasynder-men-unge-danskeres-nye-vaerktoej)][[2](https://www.anthropocenemagazine.org/2020/11/time-to-talk-about-carbon-footprint-artificial-intelligence/)][[3](https://www.theregister.com/2020/11/04/gpt3_carbon_footprint_estimate/)][[4](https://jyllands-posten.dk/nyviden/ECE12533278/kunstig-intelligens-er-en-kaempe-klimasynder-men-nyt-dansk-vaerktoej-skal-hjaelpe/)][[5](https://www.sciencenewsforstudents.org/article/training-ai-energy-emissions-climate-risk)][[6](https://www.digitaltrends.com/news/carbontracker-deep-learning-sustainability/)][[7](https://www.prosa.dk/artikel/detail/news/effektivt-vaaben-mod-klimaforandringer/)][[8](https://medium.com/techtalkers/artificial-intelligence-contributes-to-climate-change-heres-how-405ff919186e)]\n\n\n\n'",",https://arxiv.org/abs/2007.03051","2020/04/21, 12:01:38",1282,MIT,16,121,"2023/09/13, 07:41:44",11,2,49,27,42,1,0.0,0.49650349650349646,"2023/09/13, 06:36:30",v1.2.1,0,5,false,,false,true,"Anemosx/auto-sys-agent,donnate/gnumap,NoeGille/seg-models,aidausmanova/commonsense_qa,software-energy-cost-studies/profiling,Banking-Analytics-Lab/INFLECT,gaetanbrison/summerschool,ejhusom/d2m,narest-qa/repo54,bgmeulem/EmissionCommonDS,asch-billetbillet/MPS-nn-decomposition,vperifan/Federated-Time-Series-Forecasting,Pulkit-Khandelwal/upenn-picsl-brain-ex-vivo,vladostp/an-experimental-comparison-of-software-based-power-meters,olzama/neural-supertagging,AndySAnker/Brute-force-PDF-modelling,ITU-AI-ML-in-5G-Challenge/ITU-ML5G-PS-001-Euclid-Federated-Traffic-Prediction2022,neurodatascience/watts_up_compute,StrombergNLP/Low-Carbon-NLP,ConstanceDws/neural-audio-energy,misanchz98/TFG-Project,SonyCSLParis/AutoFX,anonymous9992/neural_audio_energy,NavneetSinghArora/Attention_and_Move,LeBenchmark/NeurIPS2021,Greazi-Times/IOT_MQTT,LukasHedegaard/pytorch-benchmark,kaiboon0216/covid-cases-analysis,mkonxd/HyperparameterPowerImpact,evantancy/ece4179,lucaspuvisdechavannes/LowCarbonNLP,perslev/U-Time",,,,,,,,,, green-ai,The Green AI Standard aims to develop a standard and raise awareness for best environmental practices in AI research and development.,daviddao,https://github.com/daviddao/green-ai.git,github,,Computation and Communication,"2020/10/13, 13:08:55",77,0,6,false,,,,,,"b'
\n\n# Green Artificial Intelligence Standard\n[![](https://tinyurl.com/greenai-pledge)](https://github.com/daviddao/green-ai)\n\nThe Green AI Standard aims to develop a standard and raise awareness for best environmental practices in AI research and development.\n\n## The climate issue in AI\nDeveloping machine learning models is extremely costly for the environment ([Strubell *et al.* (2019)](https://arxiv.org/abs/1906.02243)):\n- Training [BERT](https://arxiv.org/abs/1810.04805) on a GPU is roughly equivalent to a trans-American flight (650kg CO2)\n- One (512px) [BigGAN](https://arxiv.org/abs/1809.11096) experiment is equivalent to a trans-Atlantic roundtrip (~1 to 2t of CO2)\n- Neural architecture search experiments for the [Transformer](https://arxiv.org/abs/1706.03762) emit as much CO2 as 50 years of an average human life (~280t of CO2) \n\nInformation and communications technology is on track to create 3.5% of global emissions by 2020, which is more than the aviation and shipping industries, and could hit 14% by 2040 ([Guardian (2018)](https://www.theguardian.com/environment/2017/dec/11/tsunami-of-data-could-consume-fifth-global-electricity-by-2025)). We need to take a stand now!\n\n## Best practices in development\n\n1. Report time to retrain machine learning models (e.g. GigaFLOPS till convergence, [Strubell *et al.* (2019)](https://arxiv.org/abs/1906.02243))\n2. Report sensitivity of hyperparameters for machine learning models (e.g. variance with respect to hyperparameters searched, [Strubell *et al.* (2019)](https://arxiv.org/abs/1906.02243))\n3. Use more efficient alternatives to brute-force grid search for hyperparameter tuning (e.g. random or bayesian search, [Strubell *et al.* (2019)](https://arxiv.org/abs/1906.02243)); see the sketch below\n\n
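For instance, practice 3 with scikit-learn might look like the following (a sketch with a hypothetical model and search space):\n\n```python\nfrom scipy.stats import loguniform\nfrom sklearn.datasets import make_classification\nfrom sklearn.linear_model import LogisticRegression\nfrom sklearn.model_selection import RandomizedSearchCV\n\nX, y = make_classification(n_samples=500, random_state=0)\nsearch = RandomizedSearchCV(\n LogisticRegression(max_iter=1000),\n param_distributions={""C"": loguniform(1e-3, 1e3)},\n n_iter=20, # 20 sampled configurations instead of a full grid\n random_state=0,\n)\nsearch.fit(X, y)\nprint(search.best_params_)\n```\n\n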
## Best practices in infrastructure\n\nMinimize costs and carbon emissions by sharing local infrastructure instead of relying on on-demand cloud computing resources ([Strubell *et al.* (2019)](https://arxiv.org/abs/1906.02243)).\n\n## Best practices in deployment\n\nThe fossil fuel industry is responsible for most of the world\'s CO2 emissions by a large margin. Artificial intelligence has been a driving force in optimizing gas and oil extraction processes. By following the Standard, we pledge to not make developed applications available for fossil-fuel-focused usage.\n\n## Offset your resulting emissions\n\nWe recommend offsetting your emissions through certified carbon-neutrality projects if possible.\nOffsets can be calculated via [MyClimate](https://co2.myclimate.org/en/offset_further_emissions) and purchased here:\n\n- [MyClimate](https://www.myclimate.org/)\n- [Gold Standard](https://www.goldstandard.org/)\n- [ActForest](http://actforest.glideapp.io)\n\nYou can also measure how much power your deep learning model has consumed via [Power Meter](https://autoai-incubator.github.io/powermeter/). Note that it only covers GPU consumption.\n\n## Show your commitment with a badge on your repository\n\n| **👇 The Pledge Badge** | **👇 The Carbon Neutral Badge** |\n|-----------------|-----------------|\n| [![](https://tinyurl.com/greenai-pledge)](https://github.com/daviddao/green-ai) | ![](https://tinyurl.com/greenai-neutral) |\n\n**The pledge badge** shows your commitment to do your best to reduce the greenhouse gas emissions caused by your research by following the best practices developed by the Green AI Standard.\n\n**The Carbon Neutral Badge** shows that the greenhouse gas emissions caused by your code repository are offset. The badge should link to the offset certificate for verification.\n\n## Acknowledgement\n\nThe green ring is inspired by the [Climate Reality project](https://www.climaterealityproject.org/blog/why-does-al-gore-wear-green-ring-pin)\n'",",https://arxiv.org/abs/1906.02243,https://arxiv.org/abs/1810.04805,https://arxiv.org/abs/1809.11096,https://arxiv.org/abs/1706.03762,https://arxiv.org/abs/1906.02243,https://arxiv.org/abs/1906.02243,https://arxiv.org/abs/1906.02243,https://arxiv.org/abs/1906.02243","2019/10/16, 11:35:57",1470,MIT,0,7,"2020/10/13, 13:13:00",0,2,3,0,1107,0,0.0,0.25,,,0,2,true,github,false,false,,,,,,,,,,, Carbon Aware SDK,Helps you build carbon aware software solutions with the intelligence to use the greenest energy sources.,Green-Software-Foundation,https://github.com/Green-Software-Foundation/carbon-aware-sdk.git,github,,Computation and Communication,"2023/09/20, 12:48:35",357,2,207,true,C#,Green Software Foundation,Green-Software-Foundation,"C#,Shell,Dockerfile,PowerShell",,"b'# Carbon Aware SDK\n\nYou can reduce the carbon footprint of your application by just running things\nat different times and in different locations. That is because not all\nelectricity is produced in the same way. Most is produced through burning fossil\nfuels; some is produced using cleaner sources like wind and solar.\n\nWhen software does more when the electricity is clean and does less when the\nelectricity is dirty, or runs in a location where the energy is cleaner, we call\nthis **carbon aware software**.\n\nThe Carbon Aware SDK helps you build carbon aware software solutions with\nthe intelligence to use the greenest energy sources. Run them at the greenest\ntime, or in the greenest locations, or both! Capture consistent telemetry and\nreport on your emissions reduction and make informed decisions.\n\nWith the Carbon Aware SDK you can build software that chooses to run when the\nwind is blowing, enable systems to follow the sun, moving around the world to\nwhere energy is the greenest, and create tools that give insights and help\nsoftware innovators to make greener software decisions. All of this helps reduce\ncarbon emissions.\n\n
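The decision the SDK automates can be pictured in a few lines (illustrative logic only, not the SDK\'s API; the region names and intensity numbers are made up):\n\n```python\n# Pick the greenest region and time slot from forecast carbon intensities.\nforecast = { # hypothetical gCO2eq/kWh forecasts per region\n ""westeurope"": [310, 280, 190],\n ""norwayeast"": [40, 35, 38],\n ""eastus"": [420, 410, 400],\n}\nregion = min(forecast, key=lambda r: min(forecast[r]))\nslot = forecast[region].index(min(forecast[region]))\nprint(f""Schedule the job in {region} at forecast slot {slot}"") # norwayeast, slot 1\n```\n\n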
# Getting Started Overview\n\nHead on over to the [Getting Started Overview Guide](./docs/overview.md) to get up and running.\n\nGet started on creating sustainable software innovation for a greener future\ntoday!\n\n## What is the Carbon Aware SDK?\n\nAt its core the Carbon Aware SDK is a WebApi and Command Line Interface (CLI) to\nassist in building carbon aware software. The functionality across the CLI and\nWebApi is identical by design.\n\n### The WebApi\n\nThe WebApi is the preferred deployment within large organisations to centralise\nmanagement and increase control and auditability, especially in regulated\nenvironments. It can be deployed as a container for easy management, and can be\ndeployed alongside an application within a cluster or separately.\n\n![WebApi Screenshot](./images/screenshot_web_api.png)\n\n### The CLI\n\nThe CLI tends to be handy for legacy integration and non-cloud deployments,\nwhere a command-line can be used. This tends to be common with legacy DevOps\npipelines to drive deployment for integration testing where you can test your\ndeployment in the greenest location.\n\n![WebApi Screenshot](./images/screenshot_cli.png)\n\n## Who Is Using the Carbon Aware SDK?\n\nThe Carbon Aware SDK is being used by large and small companies around the\nworld, from some of the world\'s biggest enterprises and software companies through\nto start-ups.\n\nMachine Learning (ML) workloads are a great example of long running compute\nintensive workloads that often are also not time critical. By moving these\nworkloads to a different time, the carbon emissions from the ML training can be\nreduced by up to 15%, and by moving the location of the training this can be\nreduced even further, at times by up to 50% or more.\n\n## What does the SDK/API provide that 3rd party data providers such as WattTime or ElectricityMaps do not?\n\nMany of the benefits tend to relate to removing the tight coupling of an\napplication from the 3rd party data source it is using, allowing the\napplication to focus on the sustainability impact it is looking to drive. This\nabstraction allows for changing of data providers, data provider aggregation,\ncentralised management, auditability and traceability, and more.\n\n### Collaborative Effort\n\nThe Carbon Aware SDK is a collaborative effort between companies around the\nworld, with the intention of providing a platform that everyone can use. This\nmeans the API will be striving towards what solves the highest impact issues\nwith diverse perspectives from these organisations and contributors.\n\n### Standardization\n\nSomething we are driving towards with the Carbon Aware SDK is standardisation of\nthe interface into these data providers. This ultimately will help to drive SCI\ncalculations in the future, and also helps to drive innovation. The 3rd party\nAPIs do differ, and the results can vary in units, from lbCO2/kWh to gCO2/Wh.\nThe Carbon Aware SDK will take care of all conversions to a standardised\ngCO2/kWh, which becomes increasingly valuable with aggregated data sources.\n\nStandardisation also helps drive innovation. For example, if a 3rd party\ndevelops tools to scale Kubernetes clusters based on emissions, they can build\nagainst the Carbon Aware SDK. If you want to use this 3rd party tool, the SDK\nallows the tool to plug in _your_ choice of data providers, not _their_ choice\nof data provider. In this way the standardisation drives innovation and\nflexibility of choice.\n\nThe intention is to have other compatible tooling and software that leverages\nthe Carbon Aware SDK to obtain emissions data, while being agnostic to the data\nprovider.\n\n### Centralised secret and key management\n\nThe ability to manage keys to 3rd party APIs can be centralised with the Carbon\nAware API. 
This means that any changes to keys or rotation can be done in a\ncentralised and controlled manner without exposing the keys to application\ndevelopment teams.\n\nIt also can be upgraded across all applications within an organisation when\ncentralised, with new data sources being added without requiring consuming applications to\nmake any changes.\n\nIn addition, the need for the Carbon Aware SDK is something that has been\nidentified by some of the largest enterprises when looking to drive innovation\nwithin their own organisations by centralising the capability within their\nbusiness, creating green software engineering practices and providing the API\ninternally across their organisation.\n\n### Auditability\n\nBecause the API is centralised, you have the ability to audit a\ncontrolled environment for when decisions are made. With increasing regulatory\nneed, the ability to prove sustainability actions and impact will need to be\nfrom highly trusted sources, and having centralised management provides this\ncapability.\n\n### Aggregated Sources\n\nA feature we have in the roadmap is the ability to aggregate data sources across\nmultiple providers. Different data providers have different levels of\ngranularity depending on region, and it may be that data provider A is preferred\nin Japan, while data provider B is preferred in US regions.\n\nSimilarly, you may have your own data for your data centres that you would\nprefer to use for on premises workloads, which you can combine in aggregate with\n3rd party data providers.\n\n## Is it possible to retrieve energy mix information from the SDK?\n\nEnergy mix (the percentages that are from different energy sources, e.g. coal,\nnuclear, wind, gas, solar, tidal, hydro, etc.) is not provided in the API to date.\nThis may be a feature we will consider in the future. The SDK provides emissions\ninformation only at the moment.\n\n## Contributing\n\nThe Carbon Aware SDK is open for contribution! Want to contribute? Check out the\n[contribution guide](./CONTRIBUTING.md).\n\n## Green Software Foundation Project Summary\n\nThe Carbon Aware SDK is a project of the\n[Green Software Foundation](https://greensoftware.foundation/) (GSF) and the GSF\nOpen Source Working Group.\n\n### Appointments\n\n- Chair/Project lead - Vaughan Knight (Microsoft)\n- Vice Chair - Szymon Duchniewicz (Avanade)\n\n### GSF Project Scope\n\nFor developers to build carbon aware software, there is a need for a unified\nbaseline to be implemented. 
The Carbon Aware Core SDK is a project to build a\ncommon core that is flexible, agnostic, and open, allowing software and systems\nto build around carbon aware capabilities, and provide the information so those\nsystems themselves become carbon aware.\n\nThe Carbon Aware Core API will look to standardise and simplify carbon awareness\nfor developers through a unified API, command line interface, and modular\ncarbon-aware-logic plugin architecture.\n'",,"2021/09/22, 23:10:41",762,MIT,309,929,"2023/10/17, 07:46:51",24,160,349,249,8,9,1.3,0.8214285714285714,"2023/07/25, 07:37:55",v1.1.0,1,33,false,,false,true,"cloudyspells/azure-one-sub-lab,Green-Software-Foundation/carbon-aware-sdk",,https://github.com/Green-Software-Foundation,https://greensoftware.foundation,,,,https://avatars.githubusercontent.com/u/84547728?v=4,,, ecoCode,Reduce the environmental footprint of your programs with this cutting-edge SonarQube plugin.,green-code-initiative,https://github.com/green-code-initiative/ecoCode.git,github,"java,php,python,javascript",Computation and Communication,"2023/10/06, 19:43:40",76,0,75,true,Java,Green Code Initiative,green-code-initiative,"Java,Shell,Dockerfile",,"b'![Logo](docs/resources/logo-large.png)\n======================================\n\n_ecoCode_ is a collective project aiming to reduce the environmental footprint of software at the code level. The goal of\nthe project is to provide a list of static code analyzers to highlight code structures that may have a negative\necological impact: energy and resources over-consumption, ""fatware"", shortening terminals\' lifespan, etc.\n\n_ecoCode_ is based on evolving catalogs of [good practices](docs/rules), for various technologies. A SonarQube plugin\nthen implements these catalogs as rules for scanning your projects.\n\n**Warning**: this is still a very early stage project. Any feedback or contribution will be highly appreciated. Please\nrefer to the contribution section.\n\n[![License: GPL v3](https://img.shields.io/badge/License-GPLv3-blue.svg)](https://www.gnu.org/licenses/gpl-3.0)\n[![Contributor Covenant](https://img.shields.io/badge/Contributor%20Covenant-2.1-4baaaa.svg)](https://github.com/green-code-initiative/ecoCode-common/blob/main/doc/CODE_OF_CONDUCT.md)\n\n🌿 SonarQube Plugins\n-------------------\n\n4 technologies are supported by ecoCode right now:\n\n- [Java](java-plugin/)\n- [JavaScript](https://github.com/green-code-initiative/ecoCode-javascript)\n- [PHP](https://github.com/green-code-initiative/ecoCode-php)\n- [Python](https://github.com/green-code-initiative/ecoCode-python)\n\n![Screenshot](docs/resources/screenshot.PNG)\n\n### eco-design SonarQube plugin\n\n![Ekko logo](docs/resources/5ekko.png)\n\nThere are two kinds of plugins:\n\n- One for web / backoffice (PHP, Python, Java, JavaScript), using smells described in the 2nd edition of the repository\n published in September 2015.\n You can find all the\n rules [here (in french)](https://docs.google.com/spreadsheets/d/1nujR4EnajnR0NSXjvBW3GytOopDyTfvl3eTk2XGLh5Y/edit#gid=1386834576).\n The current repository is for web / backoffice.\n- One for mobile (Android), using [a set of smells](https://olegoaer.perso.univ-pau.fr/android-energy-smells/) theorised\n by Olivier Le Goaër for Android.\n You can find this plugin in the repository [here](https://github.com/green-code-initiative/ecocode-mobile)\n\n### How a SonarQube plugin works\n\nCode is parsed and transformed into an AST (abstract syntax tree). The AST allows you to access one or more nodes of your code.\nFor example, you\'ll be able to access all your `for` loops, explore their content, etc.\n\nTo better understand AST structure, you can use the [AST Explorer](https://astexplorer.net/).\n\n
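As a small illustration of AST-based analysis (using Python\'s standard `ast` module here, not the SonarQube plugin API):\n\n```python\nimport ast\n\n# Walk the AST of a snippet and flag every for-loop node.\ntree = ast.parse(""for i in range(3): print(i)"")\nfor node in ast.walk(tree):\n if isinstance(node, ast.For):\n print(f""for-loop at line {node.lineno}"")\n```\n\n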
🚀 Getting Started\n------------------\n\nYou can give it a try with a single docker command:\n\n```sh\ndocker run -ti --rm \\\n -v sq_ecocode_logs:/opt/sonarqube/logs \\\n -v sq_ecocode_data:/opt/sonarqube/data \\\n -p 9000:9000 \\\n --name sonarqube-ecocode ghcr.io/green-code-initiative/sonarqube-ecocode:latest\n```\n\nAnd add the `eco-design` tagged rules to Quality Profiles.\n\nYou can also download each plugin separately and copy the plugin (jar file) to `$SONAR_INSTALL_DIR/extensions/plugins` and restart SonarQube.\nThen you can use different test project repositories (please check `README.md` files inside) to test the environment. Example: [PHP test project](https://github.com/green-code-initiative/ecoCode-php-test-project)\n\nOr you can directly use an [all-in-one docker-compose](INSTALL.md)\n\n🛒 Distribution\n------------------\n\nReady-to-use binaries are available [from GitHub](https://github.com/green-code-initiative/ecoCode/releases).\n\n🧩 Plugins version compatibility\n------------------\n\n| Plugins Version | SonarQube version |\n|------------------|-----------------------------|\n| 1.4.+ | SonarQube 9.4.+ LTS to 10.1 |\n| 1.3.+ | SonarQube 9.4.+ LTS to 10.0 |\n| 1.2.+ | SonarQube 9.4.+ LTS to 10.0 |\n| 1.1.+ | SonarQube 9.4.+ LTS to 9.9 |\n| 1.0.+ | SonarQube 9.4.+ LTS to 9.9 |\n| 0.2.+ | SonarQube 9.4.+ LTS to 9.9 |\n| 0.1.+ | SonarQube 8.9.+ LTS to 9.3 |\n\n☕ Plugin Java part compatibility\n------------------\n\n| Plugins Version | Java version |\n|------------------|--------------|\n| 1.4.+ | 11 / 17 |\n| 1.3.+ | 11 / 17 |\n| 1.2.+ | 11 / 17 |\n| 1.1.+ | 11 / 17 |\n| 1.0.+ | 11 / 17 |\n| 0.2.+ | 11 / 17 |\n| 0.1.+ | 11 / 17 |\n\n🤝 Contribution\n---------------\n\nYou are a technical expert, a designer, a project manager, a CSR expert, an ecodesign expert...\n\nDo you want to offer the help of your company, or help us organize and communicate on the project?\n\nDo you have ideas to submit to us?\n\nWe are listening, so we can make the project progress collectively, and maybe with you!\n\nWE NEED YOU!\n\nHere is the [Starter pack](https://github.com/green-code-initiative/ecoCode-common/blob/main/doc/starter-pack.md)\n\n🤓 Main contributors\n--------------------\n\nAny questions? 
We are here for you!\nFirst, please create an issue.\nThen, if there is no answer, contact ...\n\n- [Jules Delecour](https://www.linkedin.com/in/jules-delecour-498680118/)\n- [Geoffrey Lalloué](https://github.com/glalloue)\n- [Julien Hertout](https://www.linkedin.com/in/julien-hertout-b1175449/)\n- [Justin Berque](https://www.linkedin.com/in/justin-berque-444412140)\n- [Olivier Le Goaër](https://olegoaer.perso.univ-pau.fr)\n- [Maxime DUBOIS](https://www.linkedin.com/in/maxime-dubois-%F0%9F%8C%B1-649a3a3/)\n- [David DE CARVALHO](https://www.linkedin.com/in/david%E2%80%8E-de-carvalho-8b395284/)\n- [Maxime MALGORN](https://www.linkedin.com/in/maximemalgorn/)\n\n🧐 Core Team Emeriti\n--------------------\n\nHere we honor some no-longer-active core team members who have made valuable contributions in the past.\n\n- Gaël Pellevoizin\n- [Nicolas Daviet](https://github.com/NicolasDaviet)\n- [Mathilde Grapin](https://github.com/fkotd)\n\nThey have contributed to the success of ecoCode:\n\n- [Davidson Consulting](https://www.davidson.fr/)\n- [Orange Business Services](https://www.orange-business.com/)\n- [Snapp\'](https://www.snapp.fr/)\n- [Université de Pau et des Pays de l\'Adour (UPPA)](https://www.univ-pau.fr/)\n- [Solocal](https://www.solocal.com/) / [PagesJaunes.fr](https://www.pagesjaunes.fr/)\n\nThey supported the project:\n\n- [Région Nouvelle-Aquitaine](https://www.nouvelle-aquitaine.fr/)\n\nLinks\n-----\n\n- https://docs.sonarqube.org/latest/analysis/overview/\n'",,"2022/11/28, 22:21:52",331,GPL-3.0,627,1025,"2023/10/06, 19:43:47",65,100,167,167,19,28,0.4,0.5981182795698925,"2023/08/08, 20:56:17",1.4.0,0,47,false,,false,true,,,https://github.com/green-code-initiative,,France,,,https://avatars.githubusercontent.com/u/117859860?v=4,,, Solar Protocol,A solar powered network of servers that host a distributed web platform.,alexnathanson,https://github.com/alexnathanson/solar-protocol.git,github,,Computation and Communication,"2023/09/02, 19:09:29",207,0,47,true,Python,,,"Python,HTML,C++,JavaScript,C,PHP,Cython,CMake,CSS,Shell,Fortran,Roff,Smarty,PowerShell,Batchfile,Xonsh",http://solarprotocol.net,"b'# Solar Protocol\n\nA system for load balancing and serving content based on photovoltaic logic: a repository in development for a solar powered network of servers that host a distributed web platform. Project by Tega Brain, Alex Nathanson and Benedetta Piantella. Supported by the Eyebeam Rapid Response for a Better Digital Future fellowship.\n\nContent at solarprotocol.net is served by whichever server in our network is in the most sunlight at a given time. (We base this on the solar module wattage.)\n\nIt is a decentralized network. Each server checks in with the other devices and independently determines if it should be the \'point of entry\' (poe) for the system.\n\n
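The point-of-entry logic can be sketched in a few lines (illustrative only; the field names and the way peers share data here are hypothetical, not Solar Protocol\'s actual API):\n\n```python\n# Each server runs this check independently: become the point of entry (poe)\n# only if no peer reports a higher solar module wattage.\npeers = { # hypothetical check-in data from the network\n ""server-nyc"": {""pv_wattage"": 12.4},\n ""server-syd"": {""pv_wattage"": 48.9},\n ""server-ldn"": {""pv_wattage"": 3.1},\n}\n\ndef i_should_be_poe(my_name, peers):\n mine = peers[my_name][""pv_wattage""]\n return all(p[""pv_wattage""] <= mine for p in peers.values())\n\nprint(i_should_be_poe(""server-syd"", peers)) # True\n```\n\n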
Each server checks in with the other devices and independently determines if it should be the \'point of entry\' (poe) for the system.\n\nServer stewards have the ability to host their own content on the devices as well.\n\nSolar Protocol is an art project exploring the poetics of internet infrastructure; as well as an education and research platform for exploring energy efficient and energy aware web design; and ecologically responsive internet protocols, among many other things.\n\nOur work is inspired by the great work done previously by the folks at Low Tech Magazine.\n\n## Installation\n\nhttps://github.com/alexnathanson/solar-protocol/blob/master/installation.md\n\n## API documentation\n\nhttps://github.com/alexnathanson/solar-protocol/blob/master/API.md\n\n## Hardware install notes \n\n[Notes in a doc here](https://docs.google.com/document/d/1hdcTf9xUmsjRPd3waJEkQf1Bjive8Z6RmyWv_p5n8Is/edit)\n\n## Hardware Troubleshooting & Maintenance\n\nhttps://github.com/alexnathanson/solar-protocol/blob/master/hardware-troubleshooting-and-maintenance.md\n\n\n\n## Collaborate with us!\n\nThis is a growing global collaborative project and there are many ways to contribute. Some tasks that a volunteer could take on are listed below. Please get in touch if you would like to contribute in some way.\n\n### Software development\n\n* Enable better network analytics\n* Refactor the admin console\n* Write a script to periodically run a software update automatically\n* Write a script to run the backend processes based on battery status, rather than just time\n\n### Design\n\n* Admin console redesign\n* Solar Protocol header for steward pages\n\n### Content\n\n* Do you have a great idea for something that could make use of this unique system? It could be an art project, research project, essay, etc.\n\n### Other\n\n* Can you conduct an LCA of the hardware we use?\n* Can you help identify an accurate way to quantify the energy consumed by transferring data across the internet?\n'",,"2020/08/03, 22:48:24",1178,GPL-3.0,299,2488,"2023/05/01, 15:24:13",26,36,40,28,177,5,0.0,0.16085678286434268,,,0,9,false,,false,false,,,,,,,,,,, PowerJoular,Allows monitoring power consumption of multiple platforms and processes.,joular,https://github.com/joular/powerjoular.git,github,"ada,energy,green,power,software,joular,powerjoular",Computation and Communication,"2023/10/20, 14:42:42",33,0,27,true,Ada,The Joular Project,joular,"Ada,Shell",https://www.noureddine.org/research/joular/powerjoular,"b'# PowerJoular :zap:\n\n[![License: GPL v3](https://img.shields.io/badge/License-GPLv3-blue)](https://www.gnu.org/licenses/gpl-3.0)\n[![Ada](https://img.shields.io/badge/Made%20with-Ada-blue)](https://www.adaic.org)\n\n![PowerJoular Logo](powerjoular.png)\n\nPowerJoular is a command line software to monitor, in real time, the power consumption of software and hardware components.\n\n## :rocket: Features\n\n- Monitor power consumption of CPU and GPU of PC/servers\n- Monitor power consumption of individual processes in GNU/Linux\n- Expose power consumption to the terminal and CSV files\n- Provides a systemd service (daemon) to continuously monitor power of devices\n- Low overhead (written in Ada and compiled to native code)\n\n## :satellite: Supported platforms\n\nPowerJoular monitors the following platforms:\n- :computer: PC/Servers using a RAPL supported Intel processor (since Sandy Bridge) or a RAPL supported AMD processor (Ryzen or EPYC), and optionally an Nvidia graphic card.\n- :radio: Raspberry Pi devices (multiple models) and 
Asus Tinker Board.\n\nOn all of these platforms, PowerJoular currently works only on GNU/Linux.\n\nOn PC/Servers, PowerJoular uses the Linux powercap interface to read Intel RAPL (Running Average Power Limit) energy consumption.\n\nPowerJoular supports the RAPL package domain (core, including integrated graphics, and DRAM) and, for more recent processors, the Psys domain (which covers the energy consumption of the entire SoC).\n\nOn Raspberry Pi and Asus Tinker Board, PowerJoular uses its own research-based empirical regression models to estimate the power consumption of the ARM processor.\n\nThe supported Raspberry Pi and Asus Tinker Board models are listed below.\nWe support all revisions of each model lineup; however, each model is generated and trained on a specific revision (listed in brackets), and accuracy is best on that particular revision.\n\nWe currently support the following Raspberry Pi and Asus Tinker Board models:\n- Model Zero W (rev 1.1), for 32-bit OS\n- Model 1 B (rev 2), for 32-bit OS\n- Model 1 B+ (rev 1.2), for 32-bit OS\n- Model 2 B (rev 1.1), for 32-bit OS\n- Model 3 B (rev 1.2), for 32-bit OS\n- Model 3 B+ (rev 1.3), for 32-bit OS\n- Model 4 B (rev 1.1, and rev 1.2), for both 32-bit and 64-bit OS\n- Model 400 (rev 1.0), for 64-bit OS\n- Asus Tinker Board (S)\n\n## :package: Installation\n\nPowerJoular is written in Ada and can be easily compiled, and its single binary added to your system PATH.\n\nEasy-to-use installation scripts are available in the ```installer``` folder.\nJust open the installer folder and run the appropriate file to build, install or uninstall the program and systemd service.\n\n- ```build-install.sh```: builds (using ```gprbuild```) the program, then installs the binary to ```/usr/bin``` and installs the systemd service. 
It requires GNAT and gprbuild to be installed (see [Compilation](#floppy_disk-compilation)).\n- ```uninstall.sh```: deletes the program binary and systemd service.\n\n## :bulb: Usage\n\nTo use PowerJoular, just run the command ```powerjoular```.\nOn PC/servers, PowerJoular uses Intel\'s RAPL through the Linux powercap sysfs, and therefore requires root/sudo access on the latest Linux kernels (5.10 and newer): ```sudo powerjoular```.\n\nBy default, the software shows the power consumption of the CPU and its utilization.\nThe difference (increase or decrease) in power consumption since the last measurement is also shown.\n\nThe following options are available:\n- ```-h```: show the help message\n- ```-v```: show the version number\n- ```-p pid```: specify a particular PID to monitor\n- ```-a appName```: specify a particular application name to monitor (will monitor all PIDs of the application)\n- ```-f filename```: save monitoring data to the given filename path\n- ```-o filename```: save only the last monitoring data to the given filename path (the file is overwritten with only the latest power measures)\n- ```-t```: print energy data to the terminal\n- ```-d```: print debug info to the terminal\n- ```-l```: use linear regression models (less accurate than the default polynomial models) for the Raspberry Pi energy models\n \nYou can mix options, e.g., ```powerjoular -tp 144``` will monitor PID 144 and print to the terminal.\n\n## :floppy_disk: Compilation\n\nPowerJoular is written in Ada and requires a modern Ada compiler, such as GNAT.\n\nPowerJoular depends on the following commands and libraries for some of its functions, but can run without them:\n- nvidia-smi: for monitoring power consumption of Nvidia graphic cards\n- Linux powercap with Intel RAPL support: for monitoring power consumption of Intel processors and SoC\n\nOn a modern GNU/Linux distribution, just install the GNAT compiler (and GPRBuild), usually available from the distribution\'s repositories:\n\n```\nFedora:\nsudo dnf install fedora-gnat-project-common gprbuild gcc-gnat\n\nDebian, Ubuntu or Raspberry Pi OS:\nsudo apt install gnat gprbuild\n```\n\nFor other distributions, use their package manager to download the compiler, or check [this article with easy instructions for various distributions](https://www.noureddine.org/articles/ada-on-windows-and-linux-an-installation-guide), including RHEL and its clones, which do not ship with Ada support in GCC.\n\n### Compilation with the GNAT compiler and GPRBuild\n\nTo compile the project, just type ```gprbuild``` if you are using a recent GPRBuild version.\n\nOn older versions, create the ```obj/``` folder first, then type ```gprbuild powerjoular.gpr```.\n\nThe PowerJoular binary will be created in the ```obj/``` folder.\n\nBy default, the project statically links the required libraries, so the PowerJoular binary can be copied to any compatible system and used as-is.\n\nTo build with dynamic linking, remove or comment out the static switch in the ```powerjoular.gpr``` file, in particular these lines:\n\n```\npackage Binder is\n for Switches (""Ada"") use (""-static"");\nend Binder;\n```\n\n### Compilation with the GNAT compiler only\n\nYou can also compile PowerJoular with the GNAT compiler only (without the need for GPRBuild).\n\nJust compile using gnatmake. 
For example, to compile from ```obj/``` folder (so .o and .ali files are generated there), type the following:\n\n```\nmkdir -p obj\ncd obj\ngnatmake ../src/powerjoular.adb\n```\n\n### Compilation with Alire\n\nIf you have [Alire](https://alire.ada.dev/) installed, you can use it to build PowerJoular with:\n\n```\nalr build\n```\n\n## :hourglass: Systemd service\n\nA systemd service is provided and can be installed (by copying ```powerjoular.service``` in ```systemd``` folder to ```/etc/systemd/system/```).\nThe service will run the program with the ```-o``` option (which only saves the latest power data) and saves data to ```/tmp/powerjoular-service.csv```.\nThe service can be enabled to run automatically on boot.\n\nThe systemd service is automatically installed when installing PowerJoular using the GNU/Linux provided packages.\n\n## :bookmark_tabs: Cite this work\n\nTo cite our work in a research paper, please cite our paper in the 18th International Conference on Intelligent Environments (IE2022).\n\n- **PowerJoular and JoularJX: Multi-Platform Software Power Monitoring Tools**. Adel Noureddine. In the 18th International Conference on Intelligent Environments (IE2022). Biarritz, France, 2022.\n\n```\n@inproceedings{noureddine-ie-2022,\n title = {PowerJoular and JoularJX: Multi-Platform Software Power Monitoring Tools},\n author = {Noureddine, Adel},\n booktitle = {18th International Conference on Intelligent Environments (IE2022)},\n address = {Biarritz, France},\n year = {2022},\n month = {Jun},\n keywords = {Power Monitoring; Measurement; Power Consumption; Energy Analysis}\n}\n```\n\n## :newspaper: License\n\nPowerJoular is licensed under the GNU GPL 3 license only (GPL-3.0-only).\n\nCopyright (c) 2020-2023, Adel Noureddine, Universit\xc3\xa9 de Pau et des Pays de l\'Adour.\nAll rights reserved. This program and the accompanying materials are made available under the terms of the GNU General Public License v3.0 only (GPL-3.0-only) which accompanies this distribution, and is available at: https://www.gnu.org/licenses/gpl-3.0.en.html\n\nAuthor : Adel Noureddine'",,"2021/03/22, 07:49:31",947,GPL-3.0,43,88,"2023/10/20, 18:02:34",5,14,29,29,5,1,0.1,0.038461538461538436,"2023/10/20, 18:03:08",0.7.0,0,2,false,,false,false,,,https://github.com/joular,https://www.noureddine.org/research/joular,,,,https://avatars.githubusercontent.com/u/78238145?v=4,,, Green Algorithms,Aims at promoting more environmentally sustainable computational science.,GreenAlgorithms,https://github.com/GreenAlgorithms/green-algorithms-tool.git,github,,Computation and Communication,"2023/10/16, 10:52:05",65,0,25,true,Python,,,"Python,CSS,JavaScript,Procfile",http://green-algorithms.org/,"b'# Green Algorithms \n**www.green-algorithms.org**\n\n[![Generic badge](https://img.shields.io/badge/Version-v2.2-blue.svg)](https://shields.io/)\n \n[![Maintenance](https://img.shields.io/badge/Maintained%3F-yes-green.svg)](https://GitHub.com/Naereen/StrapDown.js/graphs/commit-activity)\n[![Open Source? Yes!](https://badgen.net/badge/Open%20Source%20%3F/Yes%21/purple?icon=github)](https://github.com/Naereen/badges/)\n\n---\n\n\n\n\n## Methods and data\n\nThe methodology behind the Green Algorithms project is described in our publication:\nhttps://onlinelibrary.wiley.com/doi/10.1002/advs.202100707\n\nAll the data used for the calculator are in the `/data` directory above. \n\n## Questions, issues, suggestions? 
Want to contribute?\n\nStart by opening an issue here, and we will try to address it quickly:\nhttps://github.com/GreenAlgorithms/green-algorithms-tool/issues\n\nYou can also contact us at: green.algorithms@gmail.com\n\n## How to cite this work\n> Lannelongue, L., Grealey, J., Inouye, M., \n> Green Algorithms: Quantifying the Carbon Footprint of Computation. \n> Adv. Sci. 2021, 2100707. https://doi.org/10.1002/advs.202100707\n\n## FAQ\n[![Ask Me Anything !](https://img.shields.io/badge/Ask%20me-anything-1abc9c.svg)](https://GitHub.com/Naereen/ama)\n\n\n> Should I include the number of processors, number of cores, or number of threads used?\n\nFor CPUs, the number of cores (CPUs usually have 4-12 cores per processor). For GPUs, the number of GPUs. \nIf using multi-threading on CPUs (i.e. using more threads than cores), still input the number of cores, \nbut be aware that your emissions might be underestimated. \n\n> What if my processor is not in the list? \n\nYou can select ""Other"" and find the TDP (Thermal Design Power) value on the manufacturer\'s website. \nPlus, add a comment on [this issue](https://github.com/GreenAlgorithms/green-algorithms-tool/issues/1) so that we can add it to the list! \n\n> What if my country is not in the list? \n\nAdd a comment on [this issue](https://github.com/GreenAlgorithms/green-algorithms-tool/issues/2) so that we can add it to the list \n(some countries are more secretive than others about their energy mix). \nIn the meantime, you can use the world average, or a close proxy, for your estimations.\n\n> Can I compare algorithms\' impact independently of the location?\n\nYes, simply use the ""Energy needed"" value displayed next to the carbon emissions. \n\n> How do I find the usage factor of my processors?\n\nIt depends on your system. For example, if you\'re using SLURM, `seff ` will give you the ""CPU Efficiency"". \nSimilar commands exist for the different systems, and if you can\'t find it, you can just leave the default value of 1. \n\n> How do I estimate my PSF (Pragmatic Scaling Factor)?\n\nTry to estimate how many times you need to run your full analysis to get results you\'re happy with. \nIt can be trial and error, parameter optimisation, memory issues, etc. \n\n> What if I found a bug in the tool?\n\n[Open an issue](https://github.com/GreenAlgorithms/green-algorithms-tool/issues) on GitHub so that we can look at it. 
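For orientation, the estimate behind these FAQ items boils down to runtime × hardware power draw × data-centre overhead (PUE) × carbon intensity, scaled by the PSF. Below is a minimal sketch in Python, loosely following the published methodology; the memory constant of 0.3725 W/GB comes from the paper, while the PUE and carbon-intensity defaults are only indicative world averages, so treat the online calculator as the reference implementation:

```python
# Rough sketch of the Green Algorithms estimate (Lannelongue et al. 2021).
# All constants are indicative; use the online calculator for real numbers.

MEMORY_POWER_W_PER_GB = 0.3725  # paper's estimate of DRAM power draw
DEFAULT_PUE = 1.67              # indicative world-average data-centre PUE
DEFAULT_CI = 475.0              # indicative world-average gCO2e/kWh

def footprint(runtime_h, n_cores, power_per_core_w, usage_factor, memory_gb,
              pue=DEFAULT_PUE, carbon_intensity=DEFAULT_CI, psf=1.0):
    """Return (energy_kwh, carbon_gco2e) for one computing job."""
    power_w = (n_cores * power_per_core_w * usage_factor
               + memory_gb * MEMORY_POWER_W_PER_GB)
    energy_kwh = runtime_h * power_w * pue / 1000.0 * psf
    return energy_kwh, energy_kwh * carbon_intensity

# Example: 12 h on 8 fully-used cores (~10 W TDP per core) with 32 GB of memory
energy, carbon = footprint(12, 8, 10.0, 1.0, 32)
print(f"{energy:.2f} kWh -> {carbon:.0f} gCO2e")
```

Plugging in your own usage factor (see the SLURM `seff` tip above) and PSF shows how strongly those two inputs scale the final figure.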
\n\n## Credits \n\n- The app was designed using [Plotly Dash](https://plot.ly/dash/).\n- The background image is realised by [Ed Hawkins](https://showyourstripes.info) from the University of Reading.\n- The icons used are under [CC Attribution licence](https://creativecommons.org/licenses/by/4.0/) \nand have been designed by \n[Laura Reen](https://icon-icons.com/icon/weather-co2-pollution/90772),\n[Jeremiah](https://icon-icons.com/icon/preferences-system-power-energy/103835),\n[Sergei Kokota](https://icon-icons.com/icon/tree-greenery-nature/53329),\n[Baianat](https://icon-icons.com/icon/car/61086) and\n[RoundIcons](https://icon-icons.com/icon/plane-airplane/89770).\n- The app has also been improved by [ongcp97](https://www.fiverr.com/ongcp97).\n\n## Licence\n\nThis work is licensed under a\n[Creative Commons Attribution 4.0 International License][cc-by].\n\n[![CC BY 4.0][cc-by-shield]][cc-by]\n\n[![CC BY 4.0][cc-by-image]][cc-by]\n\n[cc-by]: http://creativecommons.org/licenses/by/4.0/\n[cc-by-image]: https://i.creativecommons.org/l/by/4.0/88x31.png\n[cc-by-shield]: https://img.shields.io/badge/License-CC%20BY%204.0-lightgrey.svg\n\n'",",https://doi.org/10.1002/advs.202100707\n\n##","2020/07/13, 19:16:36",1199,CC-BY-4.0,28,243,"2023/04/19, 09:08:14",15,13,20,4,189,7,0.0,0.09047619047619049,"2023/04/18, 21:51:20",v2.2.2,0,5,false,,false,false,,,,,,,,,,, Camunda Carbon Reductor,Allows you to time shift your processes' carbon emissions when energy is clean while still fulfilling the requested service level agreement.,envite-consulting,https://github.com/envite-consulting/camunda-carbon-reductor.git,github,"bpmn,camunda,connector,sustainability,camunda-platform-8,green-it",Computation and Communication,"2023/10/12, 06:04:24",18,0,14,true,Java,envite consulting GmbH,envite-consulting,"Java,Dockerfile",,"b'\n\n# \xf0\x9f\x8c\xb1Camunda Carbon Reductor\n\nThe Camunda Carbon Reductor allows you to time shift your processes\' carbon emissions when energy is clean while still fulfilling the requested SLAs.\n\nTechnically, it\'s implemented as a Camunda [Connector](https://docs.camunda.io/docs/components/connectors/introduction-to-connectors/) for Camunda Platform 8 and as an [External Task Worker](https://docs.camunda.org/manual/latest/user-guide/process-engine/external-tasks/) for Camunda Platform 7.\n\n---\n\nYou want to contribute \xf0\x9f\x8c\xb1? 
Please read the [Contribution Guidelines](CONTRIBUTING.md).\n\n# Table of Contents\n\n* \xe2\x9c\xa8 [Features](#features)\n * [Time Shifting Mode \xe2\x8f\xad\xef\xb8\x8f](#time-shifting-mode-)\n * [Measuring Mode \xf0\x9f\x93\x8f](#measuring-mode-)\n* \xf0\x9f\x9a\x80 [Getting Started](#getting-started)\n * [Camunda 8](#camunda-8)\n * [Camunda 7](#camunda-7)\n* \xf0\x9f\x93\x88 [Reporting Carbon Reduction via Camunda Optimize](#reporting-carbon-reduction-via-camunda-optimize)\n* \xf0\x9f\x93\x9a [Releases](#releases)\n* \xf0\x9f\x93\x86 [Publications](#publications)\n* \xf0\x9f\x93\xa8 [Contact](#contact)\n\n# \xe2\x9c\xa8Features\n\nThe Carbon Reductor Connector consists of an element template that can be used in the process model and connects to the connector function.\n\n## Time Shifting Mode \xe2\x8f\xad\xef\xb8\x8f\n\nUsing the time shifting mode (default), the Carbon Reductor Connector allows you to\noptimize for lower carbon emissions by moving your process execution into the timeframe with the least amount \nof carbon possible while still fulfilling SLAs.\n\nThe Carbon Reductor Connector defines the following inputs:\n\n- the location where the workers are running (e.g. Germany, UK, France, USA, etc.)\n- a milestone (a timestamp used to calculate the duration the process instance has been running)\n- a duration for the remainder of the process (the duration the remainder needs at most)\n- the maximum duration a process instance can take\n\n## Measuring Mode \xf0\x9f\x93\x8f\n\n> *Note*: This mode only works with the [Carbon Aware SDK](./api/api-carbon-aware/README.md) API at the moment.\n\nUsing the measuring mode, the Carbon Reductor Connector allows you to measure the emissions in gCO2e/kWh at \nexecution time, and also reports the emissions at the optimal time while still fulfilling your SLAs.\n\nThe same inputs as in the [Time Shifting Mode \xe2\x8f\xad\xef\xb8\x8f](#time-shifting-mode-) are required.\n\n# \xf0\x9f\x9a\x80Getting Started\n\nWe provide the Carbon Reductor for Camunda Platform 7 and 8:\n\n## Camunda 8\n\nTo start the Camunda 8 Connector, have a look at the specific [README](./camunda-carbon-reductor-c8/README.md).\n\n## Camunda 7\n\nTo start the Camunda 7 External Task Worker, have a look at the specific [README](./camunda-carbon-reductor-c7/README.md).\n\n# \xf0\x9f\x93\x88Reporting Carbon Reduction via Camunda Optimize\n\nSince Camunda Carbon Reductor stores the carbon savings as process variables, \nwe have the ability to visualize our successes in [Camunda Optimize](https://camunda.com/de/platform/optimize/).\n\nFor the [CarbonHack22](https://taikai.network/gsf/hackathons/carbonhack22/projects/cl9czuvwy65500401uzm9hfwbs9/idea) \nwe visualized the results for our example process; it looks like the following: \n\n![CarbonHack22 Dashboard](assets/CarbonHack22-Camunda-Optimize-Dashboard.png)\n\nThe exported Dashboard Definition can be found [here](assets/optimize-dashboard-definition.json).\n\n# \xf0\x9f\x93\x9aReleases\n\nThe list of [releases](https://github.com/envite-consulting/camunda-carbon-reductor/releases) contains a detailed changelog.\n\nWe use [Semantic Versioning](https://semver.org/).\n\nThe following compatibility matrix shows the officially supported Camunda versions for each release.\nOther combinations might also work but have not been tested.\n\n| Release | Camunda Platform 8 | Camunda Platform 7 |\n|---------|--------------------|--------------------|\n| 2.0.2 | 8.2.3 | 7.19.0 |\n| 2.0.3 | 8.3.0 | 7.19.0 |\n\n
\n\nClick to see older releases:\n\n| Release | Camunda Platform 8 | Camunda Platform 7 |\n|---------|--------------------|--------------------|\n| 1.0.0 | 8.1.0 | 7.18.0 |\n| 1.1.0 | 8.2.0 | 7.19.0 |\n| 2.0.0 | 8.2.3 | 7.19.0 |\n| 2.0.1 | 8.2.3 | 7.19.0 |\n| 2.0.2 | 8.2.3 | 7.19.0 |\n| 2.0.3 | 8.3.0 | 7.19.0 |\n\n
\n\nDownload of Releases:\n* [GitHub Artifacts](https://github.com/envite-consulting/camunda-carbon-reductor/releases)\n\n\n# \xf0\x9f\x93\x86Publications\n\n* 2023-09: [Camunda Marketplace](https://marketplace.camunda.com/en-US/apps/419555/carbon-reductor)\n* 2023-07: [The Camunda 8 Connector for Carbon-Aware Process Execution](https://bit.ly/3NZ5LMz)\n* 2023-02: Hehnle, Philipp; Behrendt, Maximilian; Weinbrecht, Luc (20.2023): Digitale Gesch\xc3\xa4ftsprozesse klimabewusst ausf\xc3\xbchren [Executing digital business processes in a climate-conscious way]. In: Uwe Friedrichsen (ed.): IT Spektrum. Green IT, pp. 16\xe2\x80\x9319.\n* 2022-11: [Carbon Reduced Business Process Execution](https://youtu.be/sGW5MJoOxPk) \n 2-minute pitch on YouTube as part of the [#CarbonHack22](https://greensoftware.foundation/articles/carbonhack22) hackathon\n* 2022-11: [Project Pitch](https://taikai.network/gsf/hackathons/carbonhack22/projects/cl9czuvwy65500401uzm9hfwbs9/idea) \n Project pitch on Taikai as part of the [#CarbonHack22](https://greensoftware.foundation/articles/carbonhack22) hackathon\n\nIf you are interested in our work and want to learn more, feel free to reach out to us.\n\n# \xf0\x9f\x93\xa8Contact\n\nIf you have any questions or ideas, feel free to create an [issue](https://github.com/envite-consulting/carbonaware-process-automation/discussions/issues) or contact us via GitHub Discussions or [mail](mailto:carbon-reductor@envite.de).\n\nThis open source project is being developed by [envite consulting GmbH](https://envite.de).\n\n![envite consulting GmbH](assets/envite-black.png#gh-light-mode-only)\n![envite consulting GmbH](assets/envite-white.png#gh-dark-mode-only)\n'",,"2022/10/19, 06:07:20",371,MIT,133,150,"2023/10/16, 16:14:12",8,78,83,77,9,3,0.8,0.5384615384615384,"2023/10/12, 06:18:32",v2.0.3,0,6,false,,false,true,,,https://github.com/envite-consulting,https://envite.de,Germany,,,https://avatars.githubusercontent.com/u/112858958?v=4,,, Quell,The Content Management Software that combats climate change stopping web carbon production in its tracks.,rollthecloudinc,https://github.com/rollthecloudinc/quell.git,github,,Computation and Communication,"2023/07/02, 00:09:06",15,5,5,true,TypeScript,ROLL THE CLOUD INC.,rollthecloudinc,"TypeScript,HTML,JavaScript,SCSS,Shell",https://demo.carbonfreed.app/pages/create-panel-page,"b'\n\n\n\nIntroducing Quell, a sustainable and carbon free CMS that operates efficiently on renewable energy resources. Not only does Quell run on clean energy, but it also takes additional measures to significantly diminish energy consumption. This is accomplished by neutralizing servers and databases, eliminating these energy-intensive elements from the web hosting equation. Unlike traditional websites, our carbon free sites run entirely in the browser and securely communicate directly with cloud resources when necessary. Quell is the preferred editor for [Carbon Free](https://github.com/rollthecloudinc/carbonfree), our cloud hosted platform that envisions a web without scope 1, 2, and 3 emissions. Everything needed for the swift development of energy sustainable websites is packaged in the [SPeaRhead](https://github.com/rollthecloudinc/spearhead) template repository. 
Experience the power of Quell as we reimagine the internet with a focus on sustainability and a reduced carbon footprint.\n\n# Climate Aware\n\nQuell is architected from the ground up to be climate-friendly and carbon-free.\n\n* Quell follows the [principles of green software engineering](https://principles.green/).\n* Quell is built on the [sustainability pillar](https://docs.aws.amazon.com/wellarchitected/latest/sustainability-pillar/sustainability-pillar.html) of the Amazon Web Services (AWS) [Well-Architected Framework](https://aws.amazon.com/architecture/well-architected).\n* Quell is [serverless](https://github.com/rollthecloudinc/verti-go), favoring [zero trust](https://aws.amazon.com/security/zero-trust/) [signed requests](https://docs.aws.amazon.com/general/latest/gr/signing_aws_api_requests.html) directly in the browser.\n\n# Features\n\n## Dev Inspector Styling\n\n[Demo Video](https://www.youtube.com/watch?v=0dP7lS8eUEE)\n\n# Rapid Dev\n\nQuickly realize simple and complex carbon free web experiences using the Quell editor.\n\n## Site Builders\n\nSite builders can create carbon free web experiences using Quell without knowing how to code.\n\n### Collect\n\nA prototype Quell form to collect and store submissions.\n\n* [Page](https://demo.carbonfreed.app/native_forms_rebuild_v1/89087abb-326d-4a93-888e-9c597ba81b8e)\n* [Editor](https://demo.carbonfreed.app/native_forms_rebuild_v1/89087abb-326d-4a93-888e-9c597ba81b8e/manage)\n\n### Consume\n\nA prototype Quell search browser using the Marvel API.\n\n* [Page](https://demo.carbonfreed.app/dev-test-virtual-list-flex-v1/character/1011334)\n* [Editor](https://demo.carbonfreed.app/dev-test-virtual-list-flex-v1/character/1011334/manage)\n\n## Developers\n\nDevelopers can use Quell as an app shell to orchestrate micro-frontends built with Angular, React, Vue, Svelte, etc. Any app compatible with [module federation](https://webpack.js.org/concepts/module-federation/) can be used with Quell. Module federation can also be used to extend the platform without hacking core, via extensions. 
Extensions are Angular micro-frontends developed for the sole purpose of providing new plugin implementations.\n\n### Orchestrate\n\nPrototype using quell as a shell app to host Angular architects workflow designer micro-frontends.\n\n* [Page](https://demo.carbonfreed.app/workflow-designer-v2)\n* [Editor](https://demo.carbonfreed.app/workflow-designer-v2/manage)\n\n### Extend\n\nPrototype using quell to load extension with new content type at runtime from external site.\n\n* [Page](https://demo.carbonfreed.app/tractorbeam-test-v3)\n* [Editor](https://demo.carbonfreed.app/tractorbeam-test-v3/manage)\n\n# Modules\n\nCatalog of each primary Quell module enabling devs, builders and editors to quickly realize usable, modern web experiences optimised for 0 server, 0 trust, and 0 cost low-code cloud hosting.\n\n## Security\n\nWatch out for safety and security of the pack.\n\n![](https://smeskey-github-prod.s3.amazonaws.com/projects/druid/github/wolves.png)\n\n### Auth\n\n* auth\n* odic\n\n## Extensibility\n\nFar reaching wing span enabling travel over long distance.\n\n![](https://smeskey-github-prod.s3.amazonaws.com/projects/druid/github/eagle2.png)\n\n### Plugin\n\n* plugin\n\n### Context\n\n* context\n\n### Meta\n\n* attributes\n\n### Parsing\n\n* dparam\n* durl\n* token\n\n## Routing\n\nMove users quickly to their destination.\n\n![](https://smeskey-github-prod.s3.amazonaws.com/projects/druid/github/elephants.png)\n\n### Alias\n\n* alias\n\nImplementations:\n\n* pagealias\n* alienalias\n\n## Persistence\n\nMove data quickly with consistency.\n\n![](https://smeskey-github-prod.s3.amazonaws.com/projects/druid/github/croc.png)\n\n### Crud\n\n* crud\n\nImplementations\n* aws3\n* awos\n* rest\n\n## Search\n\nSoar over, take hold and consume data.\n\n![](https://smeskey-github-prod.s3.amazonaws.com/projects/druid/github/owl.png)\n\n### Datasource\n\n* datasource\n\nImplementations:\n* transform\n* crud\n* loop\n* rest\n\nArticles:\n* [Datasources Explained](https://github.com/ng-druid/platform/wiki/Feature-Demo:-Data-Datasource)\n\n## Orchestration\n\nSwim alongside one another with ease and consistency as one.\n\n![](https://smeskey-github-prod.s3.amazonaws.com/projects/druid/github/dolphines.png)\n\n### Module Federation\n\n* alienalias\n* outsider\n* tractorbeam\n\n## Publishing\n\nRealize killer breathtaking experiences.\n\n![](https://smeskey-github-prod.s3.amazonaws.com/projects/druid/github/whale.png)\n\n### PanelPage\n\n* panels\n* render\n* pages\n* pagealias\n* layout\n* sheath\n\nArticles:\n* [Toggling Pane Visibility](https://github.com/ng-druid/platform/wiki/Feature-Demo:-Toggling-Pane-Visibility)\n\n# Future\n\nVisit the charities [github account](https://github.com/rollthecloudinc) for planned initiatives using quell.\n\n[Donations welcomed](https://www.paypal.com/fundraiser/charity/4587641)\n'",,"2020/10/20, 19:51:55",1100,GPL-3.0,62,1096,"2023/06/25, 04:46:30",309,18,59,10,122,0,0.0,0.1560693641618497,,,0,3,false,,true,true,"rollthecloudinc/bloodhound-build,rollthecloudinc/spearhead-build,rollthecloudinc/spearhead-docs-build,rollthecloudinc/carbonintense-build,rollthecloudinc/vertigoapp-build",,https://github.com/rollthecloudinc,https://rtc.eco,United States of America,,,https://avatars.githubusercontent.com/u/104270036?v=4,,, Vessim,"It lets users connect domain-specific simulators for energy system components like renewable power generation, energy storage, and power flow analysis with real software and 
hardware.",dos-group,https://github.com/dos-group/vessim.git,github,"carbon-aware,energy-system,simulation,testbed",Computation and Communication,"2023/10/23, 09:55:36",33,0,33,true,Python,DOS Group at TU Berlin,dos-group,Python,,"b'# Vessim\n\n[![PyPI version](https://img.shields.io/pypi/v/vessim.svg?color=52c72b)](https://pypi.org/project/vessim/)\n![Tests](https://github.com/dos-group/vessim/actions/workflows/linting-and-testing.yml/badge.svg)\n[![License](https://img.shields.io/pypi/l/vessim.svg)](https://pypi.org/project/vessim/)\n[![Supported versions](https://img.shields.io/pypi/pyversions/vessim.svg)](https://pypi.org/project/vessim/)\n\nVessim is a versatile **co-simulation testbed for carbon-aware applications and systems**.\nIt lets users connect domain-specific simulators for energy system components like renewable power generation, \nenergy storage, and power flow analysis with real software and hardware.\n\nVessim is in alpha stage and under active development.\nFunctionality and documentation will improve in the next weeks and months.\n\n\n## \xe2\x9a\x99\xef\xb8\x8f Installation\n\nIf you are using Vessim for the first time, we recommend to clone and install this repository, so you have all\ncode and examples at hand:\n\n```\n$ pip install -e .\n```\n\nAlternatively, you can also install our [latest release](https://pypi.org/project/vessim/) \nvia [pip](https://pip.pypa.io/en/stable/quickstart/):\n\n```\n$ pip install vessim\n```\n\n\n## \xf0\x9f\x9a\x80 Getting started\n\nTo execute our exemplary co-simulation scenario, run:\n\n```\n$ python examples/cosim_example.py\n```\n\n\n### Software-in-the-Loop Simulation\n\nSoftware-in-the-Loop (SiL) allows Vessim to interact with real computing systems.\nThere is not yet good documentation on how to set up a full SiL scenario, but you can play with the existing\nfunctionality by installing \n\n```\npip install vessim[sil]\n```\n\nand running:\n\n```\n$ python examples/sil_example.py\n```\n\n\n### Vessim Base Components\n\nWe are still working on examples for the base modules such as `CarbonApi` or `Generator` which can be used directly\nwithout the use of Mosaik to support simple experiments that do not require the entire co-simulation engine to run.\n\nDocumentation and API are in progress.\n\n\n## \xf0\x9f\x8f\x97\xef\xb8\x8f Development\n\nInstall Vessim with the `dev` option in a virtual environment:\n\n```\npython -m venv venv # create venv\n. venv/bin/activate # activate venv\npip install "".[sil,dev,analysis]"" # install dependencies\n```\n\n\n## \xf0\x9f\x93\x96 Publications\n\nIf you use Vessim in your research, please cite our vision paper:\n\n- Philipp Wiesner, Ilja Behnke and Odej Kao. ""[A Testbed for Carbon-Aware Applications and Systems](https://arxiv.org/pdf/2306.09774.pdf)"" arXiv:2302.08681 [cs.DC]. 
2023.\n\nBibtex:\n```\n@misc{vessim2023,\n title={A Testbed for Carbon-Aware Applications and Systems}, \n author={Wiesner, Philipp and Behnke, Ilja and Kao, Odej},\n year={2023},\n eprint={2306.09774},\n archivePrefix={arXiv},\n primaryClass={cs.DC}\n}\n```\n'",",https://arxiv.org/pdf/2306.09774.pdf","2023/02/18, 12:59:09",249,MIT,659,659,"2023/10/23, 09:55:43",7,99,137,137,2,1,2.1,0.6816037735849056,,,0,5,false,,false,false,,,https://github.com/dos-group,https://tu.berlin/en/dos,Technische Universität Berlin,,,https://avatars.githubusercontent.com/u/5664005?v=4,,, Ecoindex_cli,This tool provides an easy way to analyze websites with Ecoindex from your local computer using multi-threading.,cnumr,https://github.com/cnumr/ecoindex_cli.git,github,"ecoindex,greenit,python,typer",Computation and Communication,"2023/09/11, 16:03:44",41,0,28,true,Python,Collectif Conception Numérique Responsable,cnumr,"Python,HTML,Dockerfile",,"b'# Ecoindex-Cli\n\n[![Quality check](https://github.com/cnumr/ecoindex_cli/workflows/Quality%20checks/badge.svg)](https://github.com/cnumr/ecoindex_cli/actions/workflows/quality.yml)\n[![PyPI version](https://badge.fury.io/py/ecoindex-cli.svg)](https://badge.fury.io/py/ecoindex-cli)\n\nThis tool provides an easy way to analyze websites with [Ecoindex](https://www.ecoindex.fr) from your local computer using multi-threading. You have the ability to:\n\n- Make the analysis on multiple pages\n- Define multiple screen resolution\n- Make a recursive analysis from a given website\n\nThis CLI is built on top of [ecoindex-python](https://pypi.org/project/ecoindex/) with [Typer](https://typer.tiangolo.com/)\n\nThe output is a CSV or JSON file with the results of the analysis.\n\n## Requirements\n\n- [Docker](https://docs.docker.com/get-docker/)\n\n## Quickstart\n\nThe simplest way to start with ecoindex-cli is to install docker and then create an alias in your .bashrc or .zshrc file:\n\n```bash\nalias ecoindex-cli=""docker run -it --rm -v /tmp/ecoindex-cli:/tmp/ecoindex-cli vvatelot/ecoindex-cli:latest ecoindex-cli""\n```\n\nThen you can use the cli as if it was installed on your computer:\n\n```bash\necoindex-cli --help\n```\n\n## Use case\n\nThe docker image [vvatelot/ecoindex-cli](https://hub.docker.com/r/vvatelot/ecoindex-cli) is available for `linux/amd64` and `linux/arm64` platforms and provides you an easy way to use this CLI on your environment.\n\nThe one line command to use it is:\n\n```bash\necoindex-cli analyze --url https://www.ecoindex.fr --recursive --html-report \n```\n\n### Make a simple analysis\n\nYou give just one web url\n\n```bash\necoindex-cli analyze --url https://www.ecoindex.fr\n```\n\n
Result\n\n```bash\n\xf0\x9f\x93\x81\xef\xb8\x8f Urls recorded in file `/tmp/ecoindex-cli/input/www.ecoindex.fr.csv`\nThere are 1 url(s), do you want to process? [Y/n]: \n1 urls for 1 window size with 8 maximum workers\n100% \xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81 1/1 \xe2\x80\xa2 0:00:10 \xe2\x80\xa2 0:00:00\n\xe2\x94\x8f\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\xb3\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\xb3\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x93\n\xe2\x94\x83 Total analysis \xe2\x94\x83 Success \xe2\x94\x83 Failed \xe2\x94\x83\n\xe2\x94\xa1\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x95\x87\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x95\x87\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\xa9\n\xe2\x94\x82 1 \xe2\x94\x82 1 \xe2\x94\x82 0 \xe2\x94\x82\n\xe2\x94\x94\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\xb4\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\xb4\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x98\n\xf0\x9f\x99\x8c\xef\xb8\x8f File /tmp/ecoindex-cli/output/www.ecoindex.fr/2023-14-04_140013/results.csv written !\n```\n\n
\n\n> This makes an analysis with a screen resolution of 1920x1080px by default and with the latest known version of chromedriver. You can change those settings with the options `--window-size` and `--chrome-version`\n> You can add multiple URLs to analyze with the option `--url`. For example:\n\n```bash\necoindex-cli analyze --url https://www.ecoindex.fr --url https://www.ecoindex.fr/a-propos/\n```\n\n### Provide urls from a file\n\nYou can use a file with the URLs that you want to analyze, one URL per line. This is helpful if you want to replay the same scenario regularly.\n\n```bash\necoindex-cli analyze --urls-file input/ecoindex.csv\n```\n\n
Result\n\n```bash\n\xf0\x9f\x93\x81\xef\xb8\x8f Urls recorded in file `/tmp/ecoindex-cli/input/www.ecoindex.fr.csv`\nThere are 2 url(s), do you want to process? [Y/n]: \n2 urls for 1 window size with 8 maximum workers\n100% \xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81 2/2 \xe2\x80\xa2 0:00:14 \xe2\x80\xa2 0:00:00\n\xe2\x94\x8f\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\xb3\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\xb3\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x93\n\xe2\x94\x83 Total analysis \xe2\x94\x83 Success \xe2\x94\x83 Failed \xe2\x94\x83\n\xe2\x94\xa1\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x95\x87\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x95\x87\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\xa9\n\xe2\x94\x82 2 \xe2\x94\x82 2 \xe2\x94\x82 0 \xe2\x94\x82\n\xe2\x94\x94\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\xb4\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\xb4\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x98\n\xf0\x9f\x99\x8c\xef\xb8\x8f File /tmp/ecoindex-cli/output/www.ecoindex.fr.csv/2023-14-04_140853/results.csv written !\n```\n\n
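For reference, the urls file is just a plain-text list with one URL per line; a minimal, hypothetical `input/ecoindex.csv` could look like this:

```
https://www.ecoindex.fr
https://www.ecoindex.fr/a-propos/
```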
\n\n### Make a recursive analysis\n\nYou can make a recursive analysis of a given website. This means that the app will try to discover all the pages of your website and launch an analysis on all of those web pages. \xe2\x9a\xa0\xef\xb8\x8f This can run for a very long time! **Use it at your own risk!**\n\n```bash\necoindex-cli analyze --url https://www.ecoindex.fr --recursive\n```\n\n
Result\n\n```bash\nYou are about to perform a recursive website scraping. This can take a long time. Are you sure to want to proceed? [Y/n]: \n\xe2\x8f\xb2\xef\xb8\x8f Crawling root url https://www.ecoindex.fr -> Wait a minute!\n-2023-04-14 14:09:38 [scrapy.utils.log] INFO: Scrapy 2.8.0 started (bot: scrapybot)\n2023-04-14 14:09:38 [scrapy.utils.log] INFO: Versions: lxml 4.9.2.0, libxml2 2.9.14, cssselect 1.2.0, parsel 1.7.0, w3lib 2.1.1, Twisted 22.10.0, Python 3.11.3 (main, Apr 5 2023, 14:15:06) [GCC 9.4.0], pyOpenSSL 23.0.0 (OpenSSL 3.0.8 7 Feb 2023), cryptography 39.0.2, Platform Linux-5.15.0-67-generic-x86_64-with-glibc2.31\n2023-04-14 14:09:38 [scrapy.crawler] INFO: Overridden settings:\n{\'LOG_ENABLED\': False}\n\xf0\x9f\x93\x81\xef\xb8\x8f Urls recorded in file `/tmp/ecoindex-cli/input/www.ecoindex.fr.csv`\nThere are 7 url(s), do you want to process? [Y/n]: \n7 urls for 1 window size with 8 maximum workers\n100% \xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81 7/7 \xe2\x80\xa2 0:00:25 \xe2\x80\xa2 0:00:00\n\xe2\x94\x8f\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\xb3\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\xb3\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x93\n\xe2\x94\x83 Total analysis \xe2\x94\x83 Success \xe2\x94\x83 Failed \xe2\x94\x83\n\xe2\x94\xa1\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x95\x87\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x95\x87\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\xa9\n\xe2\x94\x82 7 \xe2\x94\x82 7 \xe2\x94\x82 0 \xe2\x94\x82\n\xe2\x94\x94\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\xb4\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\xb4\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x98\n\xf0\x9f\x99\x8c\xef\xb8\x8f File /tmp/ecoindex-cli/output/www.ecoindex.fr/2023-14-04_141011/results.csv written !\n```\n\n
\n\n### Generate an HTML report\n\nYou can easily generate an HTML report at the end of the analysis. You just have to add the option `--html-report`.\n\n```bash\necoindex-cli analyze --url https://www.ecoindex.fr --recursive --html-report\n```\n\n
Result\n\n```bash\nYou are about to perform a recursive website scraping. This can take a long time. Are you sure to want to proceed? [Y/n]: \n\xe2\x8f\xb2\xef\xb8\x8f Crawling root url https://www.ecoindex.fr -> Wait a minute!\n-2023-04-14 14:16:13 [scrapy.utils.log] INFO: Scrapy 2.8.0 started (bot: scrapybot)\n2023-04-14 14:16:13 [scrapy.utils.log] INFO: Versions: lxml 4.9.2.0, libxml2 2.9.14, cssselect 1.2.0, parsel 1.7.0, w3lib 2.1.1, Twisted 22.10.0, Python 3.11.3 (main, Apr 5 2023, 14:15:06) [GCC 9.4.0], pyOpenSSL 23.0.0 (OpenSSL 3.0.8 7 Feb 2023), cryptography 39.0.2, Platform Linux-5.15.0-67-generic-x86_64-with-glibc2.31\n2023-04-14 14:16:13 [scrapy.crawler] INFO: Overridden settings:\n{\'LOG_ENABLED\': False}\n\xf0\x9f\x93\x81\xef\xb8\x8f Urls recorded in file `/tmp/ecoindex-cli/input/www.ecoindex.fr.csv`\nThere are 7 url(s), do you want to process? [Y/n]: \n7 urls for 1 window size with 8 maximum workers\n100% \xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81 7/7 \xe2\x80\xa2 0:00:28 \xe2\x80\xa2 0:00:00\n\xe2\x94\x8f\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\xb3\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\xb3\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x93\n\xe2\x94\x83 Total analysis \xe2\x94\x83 Success \xe2\x94\x83 Failed \xe2\x94\x83\n\xe2\x94\xa1\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x95\x87\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x95\x87\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\xa9\n\xe2\x94\x82 7 \xe2\x94\x82 7 \xe2\x94\x82 0 \xe2\x94\x82\n\xe2\x94\x94\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\xb4\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\xb4\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x98\n\xf0\x9f\x99\x8c\xef\xb8\x8f File /tmp/ecoindex-cli/output/www.ecoindex.fr/2023-14-04_141645/results.csv written !\n\xf0\x9f\xa6\x84\xef\xb8\x8f Amazing! A report has been generated to /tmp/ecoindex-cli/output/www.ecoindex.fr/2023-14-04_141645/index.html\n```\n\n
\n\n> When generating an HTML report, the results are written in a CSV file and you cannot specify the result file location. So the options `--export-format` and `--output-file` are ignored.\n\nHere is a sample result:\n![Sample report](doc/report.png)\n\n### Other features\n\n#### Set the output file\n\nYou can define the CSV output file:\n\n```bash\necoindex-cli analyze --url https://www.ecoindex.fr --output-file ~/ecoindex-results/ecoindex.csv\n```\n\n#### Export to JSON file\n\nBy default, the results are exported to a CSV file, but you can choose to export the results to a JSON file.\n\n```bash\necoindex-cli analyze --url https://www.ecoindex.fr --export-format json\n```\n\n### Change wait before / after scroll\n\nBy default, the scenario waits 3 seconds before and after scrolling to the bottom of the page, so that the analysis results conform to the Ecoindex main API methodology.\n\nYou can change these values with the options `--wait-before-scroll` and `--wait-after-scroll` to fit your needs.\n\n```bash\necoindex-cli analyze --url https://www.ecoindex.fr --wait-before-scroll 1 --wait-after-scroll 1\n```\n\n### Using a specific Chrome version\n\nYou can use a specific Chrome version for the analysis. This is useful if you use an old Chrome version. You just have to provide the major Chrome version number.\n\n```bash\necoindex-cli analyze --url https://www.ecoindex.fr --chrome-version 107\n```\n\nOr, if you do not know the Chrome version number, you can use the one-line command:\n\n```bash\necoindex-cli analyze --url https://www.ecoindex.fr --chrome-version (google-chrome --version | grep --only -P \'(?<=\\\\s)\\\\d{3}\')\n```\n\n### Using multi-threading\n\nYou can use multi-threading to speed up the analysis when you have a lot of websites to analyze. In this case, you can define the maximum number of workers to use:\n\n```bash\necoindex-cli analyze --url https://www.ecoindex.fr --url https://www.greenit.fr/ --max-workers 10\n```\n\n> By default, the maximum number of workers is set to the CPU count.\n\n### Disable console interaction\n\nYou can disable confirmations and force the app to answer yes to all of them. This can be useful if you need to start the app from another script, or if you cannot wait for it to finish.\n\n```bash\necoindex-cli analyze --url https://www.ecoindex.fr --recursive --no-interaction\n```\n\n### Only generate a report from an existing result file\n\nIf you already performed an analysis and, for example, forgot to generate the HTML report, you do not need to re-run a full analysis; you can simply request a report from your result file:\n\n```bash\necoindex-cli report ""/tmp/ecoindex-cli/output/www.ecoindex.fr/2021-05-06_191355/results.csv"" ""www.synchrone.fr""\n```\n\n
Result\n\n```bash\n\xf0\x9f\xa6\x84\xef\xb8\x8f Amazing! A report has been generated to /tmp/ecoindex-cli/output/www.ecoindex.fr/2021-05-06_191355/index.html\n```\n\n
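Since every run writes a plain `results.csv` (the format is detailed in the next section), you can post-process the output with any data tool. A minimal sketch with pandas, assuming an output path like the ones shown in the runs above; the column names come from the fields description below:

```python
import pandas as pd

# Hypothetical output path from a previous run (see the examples above)
df = pd.read_csv("/tmp/ecoindex-cli/output/www.ecoindex.fr/2021-05-06_191355/results.csv")

# Rank the analyzed pages by ecoindex score and inspect size and grade
print(df[["url", "score", "size", "grade"]].sort_values("score"))
print("mean score:", df["score"].mean())
```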
\n\n## Results example\n\nThe result of the analysis is a CSV or JSON file which can easily be used for further analysis:\n\n### CSV example\n\n```csv\nwidth,height,url,size,nodes,requests,grade,score,ges,water,date,page_type\n1920,1080,https://www.ecoindex.fr,521.54,45,68,B,75.0,1.5,2.25,2022-05-03 22:28:49.280479,\n1920,1080,https://www.greenit.fr,1374.641,666,167,E,32.0,2.36,3.54,2022-05-03 22:28:51.176216,website\n```\n\n### JSON example\n\n```json\n[\n {\n ""width"": 1920,\n ""height"": 1080,\n ""url"": ""https://www.ecoindex.fr"",\n ""size"": 521.54,\n ""nodes"": 45,\n ""requests"": 68,\n ""grade"": ""B"",\n ""score"": 75.0,\n ""ges"": 1.5,\n ""water"": 2.25,\n ""date"": ""2022-05-03 22:25:01.016749"",\n ""page_type"": null\n },\n {\n ""width"": 1920,\n ""height"": 1080,\n ""url"": ""https://www.greenit.fr"",\n ""size"": 1163.386,\n ""nodes"": 666,\n ""requests"": 148,\n ""grade"": ""E"",\n ""score"": 34.0,\n ""ges"": 2.32,\n ""water"": 3.48,\n ""date"": ""2022-05-03 22:25:04.516676"",\n ""page_type"": ""website""\n }\n]\n```\n\n### Fields description\n\n- `width` is the screen width used for the page analysis (in pixels)\n- `height` is the screen height used for the page analysis (in pixels)\n- `url` is the analysed page URL\n- `size` is the size of the page and of the downloaded elements of the page in KB\n- `nodes` is the number of DOM elements in the page\n- `requests` is the number of external requests made by the page\n- `grade` is the corresponding ecoindex grade of the page (from A to G)\n- `score` is the corresponding ecoindex score of the page (0 to 100)\n- `ges` is the equivalent greenhouse gas emission (in `gCO2e`) of the page\n- `water` is the equivalent water consumption (in `cl`) of the page\n- `date` is the datetime of the page analysis\n- `page_type` is the type of the page, based on the [opengraph type tag](https://ogp.me/#types)\n\n## Development\n\n### Requirements\n\n- Python 3.10+\n- [Poetry](https://python-poetry.org/)\n- [Chrome](https://www.google.com/chrome/) (or [Chromium](https://www.chromium.org/))\n- [ChromeDriver](https://chromedriver.chromium.org/)\n\n### Installation\n\nFirst, you need to install the dependencies:\n\n```bash\ngit clone\ncd ecoindex-cli\npoetry install\n```\n\nYou also need to install Google Chrome or Chromium and the corresponding [ChromeDriver](https://chromedriver.chromium.org/) for your OS.\n\nYou have to download the ChromeDriver and put it in the project path. You can also use the `--chromedriver-path` option to specify the path to the ChromeDriver.\n\n### Usage\n\n```bash\npoetry run ecoindex-cli --help\npoetry run ecoindex-cli analyze --help\npoetry run ecoindex-cli report --help\n```\n\n### Testing\n\nWe use Pytest to run unit tests for this project. The test suite is in the `tests` folder. Just execute:\n\n```bash\npoetry run pytest --cov-report term-missing:skip-covered --cov=. 
--cov-config=.coveragerc tests\n```\n\n> This runs pytest and also generates a [coverage report](https://pytest-cov.readthedocs.io/en/latest/) (terminal and HTML)\n\n## Disclaimer\n\nThe LCA values used by [ecoindex_cli](https://github.com/cnumr/ecoindex_cli) to evaluate environmental impacts are not under a free license - \xc2\xa9Fr\xc3\xa9d\xc3\xa9ric Bordage.\nPlease also refer to the mentions provided in the code files for specifics on the IP regime.\n\n## [License](LICENSE)\n\n## [Contributing](CONTRIBUTING.md)\n\n## [Code of conduct](CODE_OF_CONDUCT.md)\n'",,"2021/01/14, 16:25:45",1014,CUSTOM,55,182,"2023/10/18, 00:49:59",14,283,313,134,7,11,0.0,0.3944444444444445,"2023/09/11, 16:04:26",v2.23.0,0,4,false,,true,true,,,https://github.com/cnumr,https://collectif.greenit.fr,France,,,https://avatars.githubusercontent.com/u/52161143?v=4,,, Zeus,A Framework for Deep Learning Energy Measurement and Optimization.,ml-energy,https://github.com/ml-energy/zeus.git,github,"deep-learning,energy,mlsys",Computation and Communication,"2023/10/23, 17:49:10",95,0,60,true,Python,ML.ENERGY,ml-energy,"Python,C++,CMake,Dockerfile,Shell",https://ml.energy/zeus,"b'
\n\n[Zeus logo]\n\nDeep Learning Energy Measurement and Optimization\n
\n\n[![NSDI23 paper](https://custom-icon-badges.herokuapp.com/badge/NSDI\'23-paper-b31b1b.svg)](https://www.usenix.org/conference/nsdi23/presentation/you)\n[![Docker Hub](https://badgen.net/docker/pulls/symbioticlab/zeus?icon=docker&label=Docker%20pulls)](https://hub.docker.com/r/mlenergy/zeus)\n[![Slack workspace](https://badgen.net/badge/icon/Join%20workspace/611f69?icon=slack&label=Slack)](https://join.slack.com/t/zeus-ml/shared_invite/zt-1najba5mb-WExy7zoNTyaZZfTlUWoLLg)\n[![Homepage build](https://github.com/ml-energy/zeus/actions/workflows/deploy_homepage.yaml/badge.svg)](https://github.com/ml-energy/zeus/actions/workflows/deploy_homepage.yaml)\n[![Apache-2.0 License](https://custom-icon-badges.herokuapp.com/github/license/ml-energy/zeus?logo=law)](/LICENSE)\n\n---\n**Project News** \xe2\x9a\xa1 \n\n- \\[2023/10\\] We released Perseus, an energy optimizer for large model training. Get started [here](https://ml.energy/zeus/perseus/)!\n- \\[2023/09\\] We moved to under [`ml-energy`](https://github.com/ml-energy)! Please stay tuned for new exciting projects!\n- \\[2023/07\\] [`ZeusMonitor`](https://ml.energy/zeus/reference/monitor/#zeus.monitor.ZeusMonitor) was used to profile GPU time and energy consumption for the [ML.ENERGY leaderboard & Colosseum](https://ml.energy/leaderboard).\n- \\[2023/03\\] [Chase](https://symbioticlab.org/publications/files/chase:ccai23/chase-ccai23.pdf), an automatic carbon optimization framework for DNN training, will appear at ICLR\'23 workshop.\n- \\[2022/11\\] [Carbon-Aware Zeus](https://taikai.network/gsf/hackathons/carbonhack22/projects/cl95qxjpa70555701uhg96r0ek6/idea) won the **second overall best solution award** at Carbon Hack 22.\n---\n\nZeus is a framework for (1) measuring GPU energy consumption and (2) optimizing energy and time for DNN training.\n\n### Measuring GPU energy\n\n```python\nfrom zeus.monitor import ZeusMonitor\n\nmonitor = ZeusMonitor(gpu_indices=[0,1,2,3])\n\nmonitor.begin_window(""heavy computation"")\n# Four GPUs consuming energy like crazy!\nmeasurement = monitor.end_window(""heavy computation"")\n\nprint(f""Energy: {measurement.total_energy} J"")\nprint(f""Time : {measurement.time} s"")\n```\n\n### Finding the optimal GPU power limit\n\nZeus silently profiles different power limits during training and converges to the optimal one.\n\n```python\nfrom zeus.monitor import ZeusMonitor\nfrom zeus.optimizer import GlobalPowerLimitOptimizer\n\nmonitor = ZeusMonitor(gpu_indices=[0,1,2,3])\nplo = GlobalPowerLimitOptimizer(monitor)\n\nplo.on_epoch_begin()\n\nfor x, y in train_dataloader:\n plo.on_step_begin()\n # Learn from x and y!\n plo.on_step_end()\n\nplo.on_epoch_end()\n```\n\n### CLI power and energy monitor\n\n```console\n$ python -m zeus.monitor power\n[2023-08-22 22:39:59,787] [PowerMonitor](power.py:134) Monitoring power usage of GPUs [0, 1, 2, 3]\n2023-08-22 22:40:00.800576\n{\'GPU0\': 66.176, \'GPU1\': 68.792, \'GPU2\': 66.898, \'GPU3\': 67.53}\n2023-08-22 22:40:01.842590\n{\'GPU0\': 66.078, \'GPU1\': 68.595, \'GPU2\': 66.996, \'GPU3\': 67.138}\n2023-08-22 22:40:02.845734\n{\'GPU0\': 66.078, \'GPU1\': 68.693, \'GPU2\': 66.898, \'GPU3\': 67.236}\n2023-08-22 22:40:03.848818\n{\'GPU0\': 66.177, \'GPU1\': 68.675, \'GPU2\': 67.094, \'GPU3\': 66.926}\n^C\nTotal time (s): 4.421529293060303\nTotal energy (J):\n{\'GPU0\': 198.52566362297537, \'GPU1\': 206.22215216255188, \'GPU2\': 201.08565518283845, \'GPU3\': 201.79834523367884}\n```\n\n```console\n$ python -m zeus.monitor energy\n[2023-08-22 22:44:45,106] 
[ZeusMonitor](energy.py:157) Monitoring GPU [0, 1, 2, 3].
[2023-08-22 22:44:46,210] [zeus.util.framework](framework.py:38) PyTorch with CUDA support is available.
[2023-08-22 22:44:46,760] [ZeusMonitor](energy.py:329) Measurement window 'zeus.monitor.energy' started.
^C[2023-08-22 22:44:50,205] [ZeusMonitor](energy.py:329) Measurement window 'zeus.monitor.energy' ended.
Total energy (J):
Measurement(time=3.4480526447296143, energy={0: 224.2969999909401, 1: 232.83799999952316, 2: 233.3100000023842, 3: 234.53700000047684})
```

Please refer to our NSDI'23 [paper](https://www.usenix.org/conference/nsdi23/presentation/you) and [slides](https://www.usenix.org/system/files/nsdi23_slides_chung.pdf) for details.
Check out [Overview](https://ml.energy/zeus/overview/) for a summary.

Zeus is part of [The ML.ENERGY Initiative](https://ml.energy).

## Repository Organization

```
.
├── zeus/              # ⚡ Zeus Python package
│   ├── optimizer/     # - GPU energy and time optimizers
│   ├── run/           # - Tools for running Zeus on real training jobs
│   ├── policy/        # - Optimization policies and extension interfaces
│   ├── util/          # - Utility functions and classes
│   ├── monitor.py     # - `ZeusMonitor`: Measure GPU time and energy of any code block
│   ├── controller.py  # - Tools for controlling the flow of training
│   ├── callback.py    # - Base class for Hugging Face-like training callbacks
│   ├── simulate.py    # - Tools for trace-driven simulation
│   ├── analyze.py     # - Analysis functions for power logs
│   └── job.py         # - Class for job specification
│
├── zeus_monitor/      # 🔌 GPU power monitor
│   ├── zemo/          # - A header-only library for querying NVML
│   └── main.cpp       # - Source code of the power monitor
│
├── examples/          # 🛠️ Examples of integrating Zeus
│
├── capriccio/         # 🌊 A drifting sentiment analysis dataset
│
└── trace/             # 🗃️ Train and power traces for various GPUs and DNNs
```

## Getting Started

Refer to [Getting started](https://ml.energy/zeus/getting_started) for complete instructions on environment setup, installation, and integration.

### Docker image

We provide a Docker image fully equipped with all dependencies and environments.
The only command you need is:

```sh
docker run -it \
    --gpus all                  `# Mount all GPUs` \
    --cap-add SYS_ADMIN         `# Needed to change the power limit of the GPU` \
    --ipc host                  `# PyTorch DataLoader workers need enough shm` \
    mlenergy/zeus:latest \
    bash
```

Refer to [Environment setup](https://ml.energy/zeus/getting_started/environment/) for details.

### Examples

We provide working examples for integrating and running Zeus in the `examples/` directory.

## Extending Zeus

You can easily implement custom policies for batch size and power limit optimization and plug them into Zeus.

Refer to [Extending Zeus](https://ml.energy/zeus/extend/) for details.
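
To make the idea concrete, here is a minimal sketch of what a hand-rolled power-limit search could look like, using only the `ZeusMonitor` API shown above plus `pynvml`. This is purely illustrative and not the actual Zeus extension interface: `run_training_step` is a hypothetical stand-in for your workload, the power-limit values are arbitrary examples, and `GlobalPowerLimitOptimizer` automates this kind of search for you.

```python
# Illustrative sketch only -- not the Zeus extension interface.
# Sweep a few GPU power limits and record time/energy for each.
import pynvml
from zeus.monitor import ZeusMonitor

def run_training_step():
    """Hypothetical stand-in for one training step of your workload."""
    ...

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)
monitor = ZeusMonitor(gpu_indices=[0])

results = {}
for limit_mw in [150_000, 200_000, 250_000]:  # power limits in milliwatts
    # Changing the power limit requires privileges (hence SYS_ADMIN above)
    # and the value must lie within the device's supported range.
    pynvml.nvmlDeviceSetPowerManagementLimit(handle, limit_mw)
    monitor.begin_window("sweep")
    for _ in range(32):
        run_training_step()
    measurement = monitor.end_window("sweep")
    results[limit_mw] = (measurement.time, measurement.total_energy)

# A custom policy would now pick a limit by trading off time against energy.
print(results)
```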

## Carbon-Aware Zeus

The use of GPUs for training DNNs results in high carbon emissions and energy consumption. Building on top of Zeus, we introduce *Chase* -- a carbon-aware solution. *Chase* dynamically controls the energy consumption of GPUs and adapts to shifts in carbon intensity during DNN training, reducing the carbon footprint with minimal compromise on training performance. To proactively adapt to shifting carbon intensity, a lightweight machine learning algorithm is used to forecast the carbon intensity of the upcoming time frame. For more details on Chase, please refer to our [paper](https://symbioticlab.org/publications/files/chase:ccai23/chase-ccai23.pdf) and the [chase branch](https://github.com/ml-energy/zeus/tree/chase).

## Citation

```bibtex
@inproceedings{zeus-nsdi23,
 title = {Zeus: Understanding and Optimizing {GPU} Energy Consumption of {DNN} Training},
 author = {Jie You and Jae-Won Chung and Mosharaf Chowdhury},
 booktitle = {USENIX NSDI},
 year = {2023}
}
```

## Contact
Jae-Won Chung (jwnchung@umich.edu)
'",,"2022/08/13, 21:20:30",438,Apache-2.0,88,181,"2023/10/13, 15:40:00",11,10,18,12,12,0,5.2,0.04371584699453557,"2023/10/13, 21:34:53",v0.8.0,5,4,false,,false,true,,,https://github.com/ml-energy,https://ml.energy,"Ann Arbor, MI",,,https://avatars.githubusercontent.com/u/109987045?v=4,,, perun,Calculates the energy consumption of Python scripts by sampling usage statistics from your hardware components.,Helmholtz-AI-Energy,https://github.com/Helmholtz-AI-Energy/perun.git,github,,Computation and Communication,"2023/10/25, 11:05:14",30,1,25,true,Python,Helmholtz AI Energy,Helmholtz-AI-Energy,Python,,"b'
[![fair-software.eu](https://img.shields.io/badge/fair--software.eu-%E2%97%8F%20%20%E2%97%8F%20%20%E2%97%8F%20%20%E2%97%8F%20%20%E2%97%8F-green)](https://fair-software.eu)
[![OpenSSF Best Practices](https://bestpractices.coreinfrastructure.org/projects/7253/badge)](https://bestpractices.coreinfrastructure.org/projects/7253)
[![DOI](https://zenodo.org/badge/523363424.svg)](https://zenodo.org/badge/latestdoi/523363424)
![PyPI](https://img.shields.io/pypi/v/perun)
![PyPI - Downloads](https://img.shields.io/pypi/dm/perun)
[![Code style: black](https://img.shields.io/badge/code%20style-black-000000.svg)](https://github.com/psf/black)
[![](https://img.shields.io/badge/Python-3.8+-blue.svg)](https://www.python.org/downloads/)
[![License](https://img.shields.io/badge/License-BSD_3--Clause-blue.svg)](https://opensource.org/licenses/BSD-3-Clause)
[![Documentation Status](https://readthedocs.org/projects/perun/badge/?version=latest)](https://perun.readthedocs.io/en/latest/?badge=latest)

perun is a Python package that calculates the energy consumption of Python scripts by sampling usage statistics from your Intel, Nvidia or AMD hardware components. It can handle MPI applications, gather data from hundreds of nodes, and accumulate it efficiently. perun can be used as a command-line tool or as a function decorator in Python scripts.

Check out the [docs](https://perun.readthedocs.io/en/latest/) or a working [example](https://github.com/Helmholtz-AI-Energy/perun/blob/main/examples/torch_mnist/README.md)!

## Key Features

 - Measures energy consumption of Python scripts using Intel RAPL, ROCM-SMI, Nvidia-NVML, and psutil
 - Capable of handling MPI applications, gathering data from hundreds of nodes efficiently
 - Monitors individual functions using decorators
 - Tracks energy usage of the application over multiple executions
 - Easy to benchmark applications and functions

## Installation

From PyPI:

```console
pip install perun
```

> Extra dependencies like nvidia-smi, rocm-smi and mpi can be installed through pip as well (quote the argument so your shell does not split or glob the extras):

```console
pip install "perun[nvidia,rocm,mpi]"
```

From GitHub:

```console
pip install git+https://github.com/Helmholtz-AI-Energy/perun
```

## Quick Start

### Command Line

To use perun as a command-line tool, run the monitor subcommand followed by the path to your Python script and its arguments:

```console
$ perun monitor path/to/your/script.py [args]
```

perun will output two files: an HDF5 file containing all the raw data that was gathered, and a text file with a summary of the results.

```text
PERUN REPORT

App name: finetune_qa_accelerate
First run: 2023-08-15T18:56:11.202060
Last run: 2023-08-17T13:29:29.969779


RUN ID: 2023-08-17T13:29:29.969779

| Round # | Host | RUNTIME | ENERGY | CPU_POWER | CPU_UTIL | GPU_POWER | GPU_MEM | DRAM_POWER | MEM_UTIL |
|----------:|:--------------------|:----------|:-----------|:------------|:-----------|:------------|:-----------|:-------------|:-----------|
| 0 | hkn0432.localdomain | 995.967 s | 960.506 kJ | 231.819 W | 3.240 % | 702.327 W | 55.258 GB | 29.315 W | 0.062 % |
| 0 | hkn0436.localdomain | 994.847 s | 960.469 kJ | 235.162 W | 3.239 % | 701.588 W | 56.934 GB | 27.830 W | 0.061 % |
| 0 | All | 995.967 s | 1.921 MJ | 466.981 W | 3.240 % | 1.404 kW | 112.192 GB | 57.145 W | 0.061 % |

The application has been run 7 times. Throughout its runtime, it has used 3.128 kWh, released a total of 1.307 kgCO2e into the atmosphere, and you paid 1.02 € in electricity for it.
```

Perun will keep track of the energy of your application over multiple runs.
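
The command-line and decorator workflows compose. As a minimal sketch (the file name `minimal_example.py` and the workload are made up for illustration), the script below can be run as `perun monitor minimal_example.py`, and the decorated function will additionally show up in the report, as described in the next section:

```python
# minimal_example.py -- illustrative only.
# Run with: perun monitor minimal_example.py
import time

from perun import monitor

@monitor()
def busy_loop(seconds: float) -> int:
    """Burn CPU for roughly `seconds` so there is something to measure."""
    end = time.time() + seconds
    count = 0
    while time.time() < end:
        count += 1
    return count

if __name__ == "__main__":
    busy_loop(10.0)
```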

### Function Monitoring

Using a function decorator, information can be collected about the runtime, power draw and component utilization while the function is executing.

```python
import time
from perun import monitor

@monitor()
def main(n: int):
    time.sleep(n)
```

After running the script with ```perun monitor```, the text report will add information about the monitored functions.

```text
Monitored Functions

| Round # | Function | Avg Calls / Rank | Avg Runtime | Avg Power | Avg CPU Util | Avg GPU Mem Util |
|----------:|:----------------------------|-------------------:|:----------------|:-----------------|:---------------|:-------------------|
| 0 | main | 1 | 993.323±0.587 s | 964.732±0.499 W | 3.244±0.003 % | 35.091±0.526 % |
| 0 | prepare_train_features | 88 | 0.383±0.048 s | 262.305±19.251 W | 4.541±0.320 % | 3.937±0.013 % |
| 0 | prepare_validation_features | 11 | 0.372±0.079 s | 272.161±19.404 W | 4.524±0.225 % | 4.490±0.907 % |
```

### MPI

Perun is compatible with MPI applications that make use of ```mpi4py```, and requires no changes in the code or in the perun configuration. Simply replace the ```python``` command with ```perun monitor```.

```console
mpirun -n 8 perun monitor path/to/your/script.py
```

## Docs

To get more information, check out our [docs page](https://perun.readthedocs.io/en/latest/).

## Citing perun

If you found perun useful, please consider citing the conference paper:

 * Gutiérrez Hermosillo Muriedas, J.P., Flügel, K., Debus, C., Obermaier, H., Streit, A., Götz, M.: perun: Benchmarking Energy Consumption of High-Performance Computing Applications. In: Cano, J., Dikaiakos, M.D., Papadopoulos, G.A., Pericàs, M., and Sakellariou, R. (eds.) Euro-Par 2023: Parallel Processing. pp. 17-31. Springer Nature Switzerland, Cham (2023). https://doi.org/10.1007/978-3-031-39698-4_2.

```bibtex
@InProceedings{10.1007/978-3-031-39698-4_2,
 author="Guti{\'e}rrez Hermosillo Muriedas, Juan Pedro
 and Fl{\"u}gel, Katharina
 and Debus, Charlotte
 and Obermaier, Holger
 and Streit, Achim
 and G{\"o}tz, Markus",
 editor="Cano, Jos{\'e}
 and Dikaiakos, Marios D.
 and Papadopoulos, George A.
 and Peric{\`a}s, Miquel
 and Sakellariou, Rizos",
 title="perun: Benchmarking Energy Consumption of High-Performance Computing Applications",
 booktitle="Euro-Par 2023: Parallel Processing",
 year="2023",
 publisher="Springer Nature Switzerland",
 address="Cham",
 pages="17--31",
 abstract="Looking closely at the Top500 list of high-performance computers (HPC) in the world, it becomes clear that computing power is not the only number that has been growing in the last three decades. The amount of power required to operate such massive computing machines has been steadily increasing, earning HPC users a higher than usual carbon footprint. While the problem is well known in academia, the exact energy requirements of hardware, software and how to optimize it are hard to quantify. To tackle this issue, we need tools to understand the software and its relationship with power consumption in today's high performance computers.
With that in mind, we present perun, a Python package and command line interface to measure energy consumption based on hardware performance counters and selected physical measurement sensors. This enables accurate energy measurements on various scales of computing, from a single laptop to an MPI-distributed HPC application. We include an analysis of the discrepancies between these sensor readings and hardware performance counters, with particular focus on the power draw of the usually overlooked non-compute components such as memory. One of our major insights is their significant share of the total energy consumption. We have equally analyzed the runtime and energy overhead perun\xc2\xa0generates when monitoring common HPC applications, and found it to be minimal. Finally, an analysis on the accuracy of different measuring methodologies when applied at large scales is presented."",\n isbn=""978-3-031-39698-4""\n}\n```\n'",",https://zenodo.org/badge/latestdoi/523363424,https://doi.org/10.1007/978-3-031-39698-4_2.\n\n\n```bibtex\n@InProceedings","2022/08/10, 13:53:19",441,BSD-3-Clause,98,143,"2023/10/25, 11:01:35",7,52,88,82,0,0,0.0,0.21259842519685035,"2023/10/25, 11:09:44",v0.5.0,0,4,false,,false,false,Helmholtz-AI-Energy/perun,,https://github.com/Helmholtz-AI-Energy,https://www.helmholtz.ai/,"Karlsruhe, Germany",,,https://avatars.githubusercontent.com/u/72967658?v=4,,, carbon footprint,Calculate your carbon footprint easily using a command line interface.,protea-earth,https://github.com/protea-earth/carbon_footprint.git,github,"carbon,carbon-emissions,carbon-footprint,carbon-cycle,carbon-dioxide,co2,co2-emissions,co2monitor,co2-emission,co2-measurements,co2-concentration,climate-change,climate-science,climate-models,climate-model,climate-data,climate,climate-analysis,climatology,machine-learning",Carbon Intensity and Accounting,"2023/04/24, 21:10:41",33,0,7,true,HTML,Protea.Earth,protea-earth,"HTML,Python",http://www.protea.earth,"b""# carbon_footprint\n![](https://github.com/protea-earth/carbon_footprint/blob/master/assets/logo.png)\n\nCalculate your carbon footprint easily using a command line interface. \n\nBuilt by [Protea](http://protea.earth), the world's leading social network community to reduce your effect on the climate.\n\n## Getting started\n\nFirst, clone the repository:\n\n```\ngit clone git@github.com:protea-earth/carbon_footprint.git\n```\n\nNow install all the dependencies:\n\n```\npip3 install -r requirements.txt \n```\n\nNow run the script \n\n```\npython3 carbon_footprint.py\n```\n\nYou can then answer all the questions provided:\n\n```\nJims-MBP:carbon_footprint jimschwoebel$ python3 carbon_footprint.py\npygame 1.9.4\nHello from the pygame community. https://www.pygame.org/contribute.html\nplease type answers to the following questions below\nwhat is your email? \njim@protea.earth\nHow many people are in your household? (e.g. 2) \n2\nWhat is your electric bill (in dollars) monthly? (e.g. 50) \n50\nHow many flights do you take per year? (e.g. 10) \n10\nDo you own a car? (e.g. n | y) \nn\nWhat is your average distance to commute to/from work in miles - for example 21? (e.g. 10) \n1\nDo you use public transportation? (e.g. y)\ny\nDo you use uber or another ride sharing platform like Lyft? (e.g. y) \ny\nHow many ride-sharing trips do you complete per month? (e.g. 10) \n10\nAre you a vegetarian? (e.g. n) \nn\nDo you eat meat more than 3 times each week? (e.g. y) \ny\nHow much money do you spend on Amazon per month in US dollars - for example, fifty dollars? (e.g. 
150)
150
{'email': 'jim@protea.earth', 'answers': ['2', '50', '10', 'no', '1', 'yes', 'yes', '10', 'no', 'yes', '150'], 'footprint': 7710.966198878506, 'footprintbytype': [1401.8691588785048, 2868.8, 427.25239999999997, 2993.70964, 19.334999999999997], 'footprint_delta': -6949.883801121495, 'footprintbytype_delta': [-5850.890841121495, 2266.3500000000004, -4088.0176000000006, 725.74964, -3.075000000000003], 'labels_footprint': ['electric (kg Co2/year)', 'flight (kg Co2/year)', 'transportation (kg Co2/year)', 'food (kg Co2/year)', 'retail (kg Co2/year)'], 'labels_footprintbytype': 'total kg Co2/year'}
2993.70964
['Electricity consumption (kwh * 1000)', '# of flights per year', '# commute miles per year (thousands)', '# of uber trips per year', 'food choice (tons of CO2 emissions/year)']
[4, 10, 0, 120, 2]
[11, 2, 15, 7, 2]
['electricity', 'flights', 'transportation', 'food', 'retail']
[1401, 2868, 427, 2993, 19]
[7252, 602, 4515, 2267, 22]
['1.pdf', '2.pdf', '3.pdf', '4.pdf', '5.pdf', '6.pdf', '7.pdf', '8.pdf']
```

After this, a [.PDF report](https://github.com/protea-earth/carbon_footprint/blob/master/footprint_report.pdf) will pop up with your results.

## [Reports](https://github.com/protea-earth/carbon_footprint/blob/master/footprint_report.pdf)

Reports look like [this](https://github.com/protea-earth/carbon_footprint/blob/master/footprint_report.pdf), and generate figures like the ones below that highlight your carbon consumption relative to the average American.

### Carbon consumption by category (by label)
![](https://github.com/protea-earth/carbon_footprint/blob/master/assets/bar.png)
### Carbon consumption by category (kg Co2/year)
![](https://github.com/protea-earth/carbon_footprint/blob/master/assets/bar_2.png)
### Carbon consumption % by category
![](https://github.com/protea-earth/carbon_footprint/blob/master/assets/pi.png)

## [Assumptions](https://github.com/protea-earth/carbon_footprint/blob/master/assets/7.pdf) in carbon footprint
### Electricity:
- Electric bill - 11,698 kwh/year = average energy consumption / household.
- The U.S. average is 13.27 cents per kilowatt hour (kwh), 0.62 kilogram CO2 / kwh.
- 11,698 kwh/year * 0.62 kgCO2/kwh = 7,252.76 kg CO2/year.
### Flights per year:
- Flights per year - average is 2.1 trips per American (if they fly).
- Assume 0.1304 kgCO2/km for medium-term flights (DEFRA model).
- Domestic flights ~2.5-3 hr, 2200 km * 0.1304 kgCO2/km * 2.1 trips = 602.448 kg CO2/yr.
### Transportation:
- Average American drives around 15,000 miles per year (assume this is all forms of ground transport).
- Fuel efficiency on car - assume 30 mpg on road car now.
- 0.960 pounds = 0.435 kg CO2 per mile (driving) * 15,000 miles/year = 6,525.0 kg CO2 / year.
- 0.657 pounds = 0.298 kg of CO2 per mile - 50% public transport, 50% single auto * 15,000 miles/year = 4,470.0 kg CO2 / year.
- 0.354 pounds = 0.161 kg of CO2 per mile (public transportation) * 15,000 miles / year = 2,415.0 kg CO2 / year.
### Uber trips:
- (14 million Uber trips / day * 0.50) / 325 million people (USA) * 365 days / year = 7.86 rides/person in the USA (assumes 50% of all Uber rides are in the USA).
- Average ride length = 6 miles/trip * 7.86 trips / person / year * 0.960 kgCo2/mile = 45.27 kg Co2/uber rider/year.
### Food choices / nutrition:
- 1.7 US tons = 1,542.21406 kilograms of CO2/year - vegetarian.
- 2.5 US tons = 2,267.96185 kilograms of CO2/year - average.
- 3.3 US tons = 2,993.70964 kilograms of CO2/year - meat lover.
### Amazon supply chain:
- 0.1289 kg CO2e per dollar (USD).
- 3,300,000 boxes/day * 365 days/year / 325,000,000 Americans = 3.70 boxes/American.
- 3.70 boxes/year * $47 / box = ~$173.90/year * 0.1289 kg / $1 USD = 22.41 kg CO2/year.
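
Taken together, each category above reduces to a few multiplications. The sketch below re-derives the electricity and flight figures from these assumptions; the function and constant names are illustrative and are not taken from `carbon_footprint.py`:

```python
# Illustrative re-derivation of the assumptions above; names are made up.
KG_CO2_PER_KWH = 0.62          # US grid average assumed above
USD_PER_KWH = 0.1327           # 13.27 cents per kWh
KG_CO2_PER_FLIGHT_KM = 0.1304  # DEFRA factor for medium-term flights
DOMESTIC_FLIGHT_KM = 2200      # ~2.5-3 hr domestic flight

def electricity_kg_co2(monthly_bill_usd: float) -> float:
    """Convert a monthly electric bill to kg CO2 per year."""
    annual_kwh = monthly_bill_usd * 12 / USD_PER_KWH
    return annual_kwh * KG_CO2_PER_KWH

def flights_kg_co2(flights_per_year: float) -> float:
    """Kg CO2 per year from domestic-length flights."""
    return flights_per_year * DOMESTIC_FLIGHT_KM * KG_CO2_PER_FLIGHT_KM

print(flights_kg_co2(2.1))        # 602.448, matching the figure above
print(electricity_kg_co2(129.3))  # ~11,698 kWh/year household => ~7,250 kg
```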

## References
- [Shrinkthatfootprint.com](http://shrinkthatfootprint.com/average-household-electricity-consumption)
- [Chooseenergy.com](https://www.chooseenergy.com/electricity-rates-by-state/)
- [Quora](https://www.quora.com/How-much-CO2-is-produced-per-KWH-of-electricity)
- [Airlines.org](http://airlines.org/wp-content/uploads/2016/04/2016Survey.pdf)
- [Environmental Change Institute](https://www.eci.ox.ac.uk/research/energy/downloads/jardine09-carboninflights.pdf)
- [Reference.com](https://www.reference.com/vehicles/average-mileage-put-car-year-5c8f88fa02be73c8)
- [Wikipedia](https://en.wikipedia.org/wiki/Corporate_average_fuel_economy)
- [Transitscreen.com](http://blog.transitscreen.com/how-public-transit-can-and-must-help-reduce-carbon-pollution)
- [Businessofapps.com](https://www.businessofapps.com/data/uber-statistics/)
- [Ride.guru](https://ride.guru/lounge/p/what-is-the-average-trip-distance-for-an-uber-or-lyft-ride)
- [Shrinkthatfootprint.com](http://shrinkthatfootprint.com/food-carbon-footprint-diet)
- [AboutAmazon.com](https://sustainability.aboutamazon.com/carbon-footprint)
- [Quora](https://www.quora.com/How-many-boxes-does-Amazon-ship-every-day)
""",,"2019/10/07, 16:48:56",1479,BSD-3-Clause,1,24,"2023/10/25, 11:01:35",1,0,0,0,0,1,0,0.0,,,0,1,false,,false,false,,,https://github.com/protea-earth,http://protea.earth,San Francisco ,,,https://avatars.githubusercontent.com/u/56147791?v=4,,, CarbonFootprintEGU,Travel carbon footprint of the European Geosciences Union General Assembly 2019.,milankl,https://github.com/ConferenceCarbonTracker/CarbonFootprintEGU.git,github,,Carbon Intensity and Accounting,"2022/03/01, 19:49:12",15,0,0,false,Python,,ConferenceCarbonTracker,Python,,"b""[![DOI](https://zenodo.org/badge/218331367.svg)](https://zenodo.org/badge/latestdoi/218331367)
# Travel carbon footprint of the EGU General Assembly 2019
*How much carbon dioxide does travelling to the annual EGU General Assembly emit, and how it can be reduced to less than 5% of current emissions in a short time.*

**Milan Klöwer**\
Atmospheric, Oceanic and Planetary Physics, University of Oxford\
*milan.kloewer@physics.ox.ac.uk*

For comments and changes, please raise an [issue](https://github.com/milankl/CarbonFootprintEGU/issues) or create a pull request.

## Summary

16,273 scientists from 113 countries participated in the [European Geoscience Union's (EGU) General Assembly 2019](https://egu2019.eu) in Vienna, Austria.
We estimate that these scientists travelled a total of 94 million km to Vienna and back, which emitted 22,300 tCO2e, an average of ca. 1.4 tCO2e per scientist. 86% of these carbon emissions result from long-haul flights (>1500km), 13% from short-haul (between 700 and 1500km) and <1% from rail journeys (<700km). Scientists from China and the United States are responsible for 40% of emissions. If all short-haul flights were replaced by rail journeys, the total travel carbon footprint would be reduced by 11.5% down to 19,750 tCO2e. If the equivalent of the 9% highest-emitting participants participated virtually, carbon emissions would be reduced by 34%. Virtual participation for 26% of the highest-emitting participants would reduce the carbon footprint by 80%. An EGU General Assembly with most European scientists arriving by train and 26% virtual participation reduces the current travel carbon emissions by 91%. Combining this scenario with a completely virtual format every other year, travel emissions to the EGU General Assembly would be reduced by 96% to less than 1000 tCO2e.

## 1. Introduction

International aviation is projected to contribute 22% to global greenhouse gas emissions in 2050 [[1](http://www.europarl.europa.eu/RegData/etudes/STUD/2015/569964/IPOL_STU(2015)569964_EN.pdf)], as a doubling of passenger numbers is expected by 2036 [[2](https://www.iata.org/pressroom/pr/Pages/2017-10-24-01.aspx)]. In Paris in 2015, world governments agreed to keep the global temperature increase well below 2°C [[3](https://treaties.un.org/doc/Treaties/2016/02/20160215%2006-03%20PM/Ch_XXVII-7-d.pdf)], which requires drastic reductions in greenhouse gas emissions. Universities and other research institutions are high emitters [[4](https://tyndall.ac.uk/sites/default/files/twp161.pdf)], with individual carbon emissions an order of magnitude larger [[5](https://www.sciencedirect.com/science/article/pii/S0959652619311862)] than the suggested personal carbon allowance [[6](https://wordpress.hotorcool.org/wp-content/uploads/2019/02/15_Degree_Lifestyles_MainReport.pdf), [7](https://iopscience.iop.org/article/10.1088/1748-9326/aa7541/pdf), [8](https://www.sciencedirect.com/science/article/pii/S0969699719303229)]. Research-associated carbon emissions are dominated by air travel to conferences, meetings and fieldwork [[4](https://tyndall.ac.uk/sites/default/files/twp161.pdf)]. Recent studies suggest that academic air travel has a limited direct link to professional success [[5](https://www.sciencedirect.com/science/article/pii/S0959652619311862), [9](https://www.tandfonline.com/doi/full/10.1080/17450101.2019.1589727)].

Most conferences do not provide live-streaming, nor allow for remote speaking, although alternatives exist. Very few conferences are fully virtual (e.g. the [virtual island summit](https://www.islandinnovation.co/summit/)) and therefore often almost carbon neutral [[10](https://hiltner.english.ucsb.edu/index.php/ncnc-guide/)]. Continuous virtual seminar series allow for frequent academic exchange (e.g. [Virtual Blue COP 25](https://virtualbluecop25.org/)), sometimes with a focus on field-specific subjects (e.g. [EBUS Webinars](https://ebuswebinars.wixsite.com/ebuswebinars)). Live-streaming is provided by more conferences (e.g.
[JuliaCon](https://juliacon.org/2019/), with an automatic archive on [YouTube](https://www.youtube.com/playlist?list=PLP8iPy9hna6StY9tIJIUN3F_co9A0zh0H)), mainly to make them more accessible for participants with constraints on time, money, or freedom of travel.

The carbon footprint of most conferences and meetings is dominated by a small number of participants with disproportionate travel emissions due to long-distance flights [[11](https://dx.doi.org/10.1038/news031208-13), [12](https://dx.doi.org/10.1126/science.318.5847.36)]. For the EGU General Assembly 2012 it was estimated that 20% of the highest-emitting participants are responsible for 70% of the travel emissions [[4](https://tyndall.ac.uk/sites/default/files/twp161.pdf)]. Here, we calculate the travel emissions for the same conference in 2019 and present reduction scenarios based on an increased number of rail journeys and virtual participation.

## 2. Results
![](https://github.com/milankl/CarbonFootprintEGU/blob/master/plots/world.png)

Figure 1: The journeys of all 16,273 scientists are illustrated on an equidistant map, which preserves the distances with respect to Vienna. Line thicknesses are weighted by the number of participants per country. Capital cities or largest cities are assumed as the departure location, with a few exceptions (see section [4.1](https://github.com/milankl/CarbonFootprintEGU#41-departure-location)). The total distance travelled is ca. 94 million km. A version of this figure covering only Europe can be found [here](https://github.com/milankl/CarbonFootprintEGU/blob/master/plots/europe.png).

![](https://github.com/milankl/CarbonFootprintEGU/blob/master/plots/CO2_permode.png)

Figure 2: a) Splitting the total carbon footprint of 22,300 tCO2e into the modes of transport shows that long-haul flights are the major contributor with 86%. The contribution of rail journeys is less than 1%. b) A scenario in which all short-haul flights are replaced with rail journeys decreases the carbon footprint by 11.5% to 19,750 tCO2e. This scenario makes long-haul flights the dominating contribution to the overall carbon footprint with 97%, and the emissions from rail journeys remain negligible (<3%).

![](https://github.com/milankl/CarbonFootprintEGU/blob/master/plots/CO2_percountry.png)

Figure 3: a) China (1194 scientists) and the USA (1068) are the biggest contributors, due to the large number of participants and large distances to Vienna. Although many scientists come from Germany (2587), the UK (1355), Italy (1191), France (1151), Austria (754), and Switzerland (723), their contribution is minor due to short distances to Vienna, despite short-haul flights. On the other hand, 51 scientists (0.3% of participants) from New Zealand contribute 2% of the overall carbon emissions. b) A scenario in which short-haul flights are replaced with rail journeys decreases the carbon emissions from the United Kingdom, Germany and France by a factor of 7, which results from the assumptions on carbon emissions per mode of transport (30gCO2e / km / person for rail versus 200gCO2e / km / person for short-haul; see section [4.5](https://github.com/milankl/CarbonFootprintEGU#45-carbon-emissions)). The same holds for other countries in regions like Benelux, Scandinavia and Eastern Europe (incl.
South-East and North-East, like Greece, Turkey, Romania, Estonia, Bulgaria, Latvia and Ukraine, to name a few), although travel times from these countries may be considerable given current rail and bus infrastructure.

![](https://github.com/milankl/CarbonFootprintEGU/blob/master/plots/carbon_sorted.png)

Figure 4: Carbon emissions sorted by highest per-capita emissions. Each grey rectangle represents one country; some of the largest in terms of emissions and participants are named. The 26% furthest-travelling EGU participants (green lines) are responsible for 80% of the conference's total travel carbon footprint, with the top 9% (blue lines) responsible for 34% of the total. The [Gini coefficient](https://en.wikipedia.org/wiki/Gini_coefficient) is 63% and therefore similar to the global inequality of income.

## 3. Data

Data is based on the number of participants per country, [published by EGU](https://egu2019.eu/#CountryStatistics). The processed data, including coordinates of the departure locations, distance to Vienna and sum of emissions per country, can be found in [`data/data_processed.csv`](https://github.com/milankl/CarbonFootprintEGU/blob/master/data/data_processed.csv).

## 4. Methods

All scripts can be found in [`src/`](https://github.com/milankl/CarbonFootprintEGU/tree/master/src) and the assumptions are discussed in section [4.6](https://github.com/milankl/CarbonFootprintEGU#46-sensitivity-to-assumptions).

### 4.1 Departure location

The departure location per country is chosen as the capital or largest city (see [data](https://github.com/milankl/CarbonFootprintEGU/blob/master/data/data_processed.csv)), with a few exceptions that are explained in the following:

 a) Germany: Participants from Germany are split into 4 groups (Berlin 20%, Hamburg 20%, Munich 20%, Cologne 40%) to better represent the participant distribution and their distance to Vienna across Germany.

 b) United Kingdom: For similar reasons, scientists from the UK are split into 2 groups (London 70%, Manchester 30%).

 c) United States: Washington DC 70%, Los Angeles 30%.

 d) Austria: Vienna 50%, Graz 50%. Graz has a relatively high number to account for journey distances of participants from Innsbruck, Salzburg, etc.

 e) Canada: Toronto 80%, Vancouver 20%.

The named location `city, country` is converted to geographical coordinates with [Nominatim](https://nominatim.org/release-docs/develop/api/Overview/) from the [OpenStreetMap database](https://www.openstreetmap.org/) (see [`src/get_locations.py`](https://github.com/milankl/CarbonFootprintEGU/blob/master/src/get_locations.py)).

### 4.2 Return journeys and other conferences

Every participant is assumed to travel back to their departure location with the same mode of transport. Due to the lack of data, we have to assume that every scientist only came to Vienna for the purpose of the EGU General Assembly. Some scientists likely connected their journey to Vienna with other conferences, meetings or holidays, which has to be taken into account in case the carbon footprint of individuals or a whole research field is calculated.

### 4.3 Mode of transport

Rail is assumed for all journeys with distances of less than 700km. Airplanes are assumed for longer distances. Short-haul is defined as distances of less than 1500km; longer distances are long-haul.

### 4.4 Indirect journeys

We assume all journeys to be direct; that is, we calculate the distance as the great-circle distance.
This is more accurate for long-haul than for short-haul, and may have some considerable errors for railways, but less than a factor of 2. More on this in section [4.6](https://github.com/milankl/CarbonFootprintEGU#46-sensitivity-to-assumptions).

### 4.5 Carbon emissions

Rail journeys are assumed to emit 30gCO2e / km / person
[[13](http://ecopassenger.hafas.de/hafas-res/download/Ecopassenger_Methodology_Data.pdf),
[14](http://www.cer.be/sites/default/files/publication/Facts%20and%20figures%202014.pdf),
[15](https://www.eea.europa.eu/data-and-maps/indicators/energy-efficiency-and-specific-co2-emissions/energy-efficiency-and-specific-co2-9),
[16](https://dataportal.orr.gov.uk/media/1114/rail-infrastructure-assets-environmental-2017-18.pdf)].

Short-haul flights are assumed to emit 200gCO2e / km / person, and long-haul flights are assumed to emit 250gCO2e / km / person.
These values take into account factors that typically decrease the per-km emissions for long-haul flights, such as [[17](https://www.atmosfair.de/wp-content/uploads/atmosfair-flight-emissions-calculator-englisch-1.pdf),
[18](https://www.icao.int/environmental-protection/CarbonOffset/Documents/Methodology%20ICAO%20Carbon%20Calculator_v10-2017.pdf),
[19](https://www.myclimate.org/fileadmin/user_upload/myclimate_-_home/01_Information/01_About_myclimate/09_Calculation_principles/Documents/myclimate-flight-calculator-documentation_EN.pdf)]:

- increased fuel consumption for take-off
- decreased detour factors for longer flights
- average aircraft types and their fuel consumption
- average passenger load factors for average airlines.

Additionally, we take into account factors that typically increase the per-km emissions for longer flights, which on average tend to outweigh the factors above [[17](https://www.atmosfair.de/wp-content/uploads/atmosfair-flight-emissions-calculator-englisch-1.pdf)]:

- increased fuel weight for longer flights
- increased flight altitudes depending on distance covered
- indirect CO2 effects on ozone and cloud formation depending on flight altitude [[20](https://doi.org/10.1007/s11367-018-1556-3)].

Some emission calculators do not include all of the factors above (e.g. [18](https://www.icao.int/environmental-protection/CarbonOffset/Documents/Methodology%20ICAO%20Carbon%20Calculator_v10-2017.pdf) and [19](https://www.myclimate.org/fileadmin/user_upload/myclimate_-_home/01_Information/01_About_myclimate/09_Calculation_principles/Documents/myclimate-flight-calculator-documentation_EN.pdf)). To our knowledge, the atmosfair calculator [[17](https://www.atmosfair.de/wp-content/uploads/atmosfair-flight-emissions-calculator-englisch-1.pdf)] is the most sophisticated. It includes the indirect CO2 effects not just as a constant factor of 2, the approximation recommended by [Jungbluth and Meili, 2019 ([20])](https://doi.org/10.1007/s11367-018-1556-3), but as a flight-altitude-dependent factor (the next-order refinement recommended therein). Additionally, atmosfair's calculator uses a database that analyses the aircraft types, their fuel consumption and the passenger loads typically flown on specific routes. We therefore obtained our assumed emission values by searching for typical flight routes to Vienna and simplifying the results.

We assume economy class for every participant.

Carbon emissions of live-streaming are assumed to be negligible.
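
Sections 4.3-4.5 together define a simple piecewise model. The sketch below restates it in Python; the Vienna coordinates and the function names are illustrative, and the actual scripts live in [`src/`](https://github.com/milankl/CarbonFootprintEGU/tree/master/src):

```python
# Illustrative restatement of sections 4.3-4.5, not the repository's code.
from math import asin, cos, radians, sin, sqrt

EARTH_RADIUS_KM = 6371.0
VIENNA = (48.2082, 16.3738)  # assumed coordinates, for illustration only

def great_circle_km(lat1: float, lon1: float, lat2: float, lon2: float) -> float:
    """Haversine formula: direct distance between two points (section 4.4)."""
    phi1, phi2 = radians(lat1), radians(lat2)
    dphi, dlam = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dphi / 2) ** 2 + cos(phi1) * cos(phi2) * sin(dlam / 2) ** 2
    return 2 * EARTH_RADIUS_KM * asin(sqrt(a))

def round_trip_kgco2e(lat: float, lon: float) -> float:
    """Return-journey emissions to Vienna with the mode rules of section 4.3."""
    one_way = great_circle_km(lat, lon, *VIENNA)
    if one_way < 700:      # rail journey
        factor = 0.030     # kgCO2e / km / person
    elif one_way < 1500:   # short-haul flight
        factor = 0.200
    else:                  # long-haul flight
        factor = 0.250
    return 2 * one_way * factor  # same mode assumed for the way back (4.2)

print(round_trip_kgco2e(51.5074, -0.1278))  # e.g. departing from London
```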

### 4.6 Sensitivity to assumptions

Sensitivity to the assumptions is fairly low. Main contributions to the uncertainty of the carbon footprint are:

a) The carbon dioxide equivalent emissions of long-haul flights: These are assumed to be 250gCO2e / km / person, which is a representative average with probably less than 10% error [[17](https://www.atmosfair.de/wp-content/uploads/atmosfair-flight-emissions-calculator-englisch-1.pdf)]. The emissions of individual flights have much higher uncertainty and depend on the number of passengers, airline / flight class, type of aircraft, potential detours, flight altitude, and weather conditions. The carbon dioxide equivalent emissions of super long-haul flights (>10,000km) are usually higher due to additional fuel weight and flight altitude, although increased fuel consumption from take-off and detours contributes less for such long distances.

b) The exact departure location of scientists from the USA: A flight from Los Angeles to Vienna emits 1.8 times more tCO2e than a flight from New York City to Vienna [[17](https://www.atmosfair.de/wp-content/uploads/atmosfair-flight-emissions-calculator-englisch-1.pdf)]. We assume that a split of 70% of scientists departing from Washington DC and 30% from Los Angeles is representative, to account for longer journeys (but therefore probably also fewer scientists) from the Midwestern and Southern USA or the West Coast. Assuming instead that 50% of scientists from the USA depart from Washington and 50% from Los Angeles would increase their emissions by 17%. As the USA contribution to the overall carbon dioxide emissions of EGU travel is 20%, this uncertainty accounts for less than 4% in total.

c) The exact departure location of scientists from China: We assume that all scientists from China fly in from Beijing. A flight from Shanghai emits less than 20% more tCO2e than a flight from Beijing. Assuming half of the scientists from China flew in from Shanghai, this would increase China's emissions by 10%. Taking into account that China contributes 20% to the overall emissions of EGU travel, this uncertainty is less than 2% in total.

d) Similar arguments hold for the exact departure locations of scientists from Canada, Brazil, Australia, and India. Smaller countries like New Zealand, Taiwan and South Korea contribute even less to the uncertainty.

e) The carbon dioxide equivalent emissions of rail journeys: These are assumed to be 30gCO2e / km / person [[13](http://ecopassenger.hafas.de/hafas-res/download/Ecopassenger_Methodology_Data.pdf),
[14](http://www.cer.be/sites/default/files/publication/Facts%20and%20figures%202014.pdf),
[15](https://www.eea.europa.eu/data-and-maps/indicators/energy-efficiency-and-specific-co2-emissions/energy-efficiency-and-specific-co2-9),
[16](https://dataportal.orr.gov.uk/media/1114/rail-infrastructure-assets-environmental-2017-18.pdf)], which can be considered a European average. Emissions from individual trains can, however, be lower by an order of magnitude depending on the type of train (electric, diesel, high-speed or regional), the local energy mix (for electric trains), the number of passengers, etc. The high-speed train in France is estimated to emit only 3gCO2e / km / person [[21](https://en.oui.sncf/en/help-en/calculation-of-co2-emissions-on-your-train-journey)], due to a very low-carbon electricity grid, but average trains in the UK emit 40gCO2e / km / person [[16](https://dataportal.orr.gov.uk/media/1114/rail-infrastructure-assets-environmental-2017-18.pdf)] as many services are not electrified and diesel trains are used instead.
As the contribution of rail journeys to the overall carbon footprint of EGU-related travel is negligible (<1%), the uncertainty here is negligible too.

f) Indirect rail journeys: We assume great-circle distances for rail journeys, such that we likely underestimate the actually travelled distance. However, this error is within a factor of 2. Since the contribution of rail travel to the overall carbon emissions is very small, the resultant uncertainty in the overall budget is negligible.

## References

[1] [Cames, M, J Graichen, A Siemons, V Cook, 2015. *Emission Reduction Targets for International Aviation and Shipping*, European Parliament, Policy Department A: Economic and Scientific Policy](http://www.europarl.europa.eu/RegData/etudes/STUD/2015/569964/IPOL_STU(2015)569964_EN.pdf)

[2] [International Air Transport Association, 2017. *20 Year Passenger Forecast*](https://www.iata.org/publications/store/Pages/20-year-passenger-forecast.aspx)

[3] [Paris Agreement of the United Nations Framework Convention on Climate Change, 2016.](https://treaties.un.org/doc/Treaties/2016/02/20160215%2006-03%20PM/Ch_XXVII-7-d.pdf)

[4] [Le Quere, C, et al., 2015. *Towards a culture of low-carbon research for the 21st century*, Tyndall Centre for Climate Change Research, Working paper 161.](https://tyndall.ac.uk/sites/default/files/twp161.pdf)

[5] [Wynes, S, SD Donner, S Tannason, N Nabors, 2019. *Academic air travel has a limited influence on professional success*. **Journal of Cleaner Production**, 226, p. 959-967.](https://www.sciencedirect.com/science/article/pii/S0959652619311862)

[6] [Institute for Global Environmental Strategies, Aalto University, and D-mat ltd., 2019. *1.5-Degree Lifestyles: Targets and Options for Reducing Lifestyle Carbon Footprints*. Technical Report. Institute for Global Environmental Strategies, Hayama, Japan.](https://wordpress.hotorcool.org/wp-content/uploads/2019/02/15_Degree_Lifestyles_MainReport.pdf)

[7] [Wynes, S and KA Nicholas, 2017. *The climate mitigation gap: education and government recommendations miss the most effective individual actions.* **Environ. Res. Lett.**, 12, 074024.](https://iopscience.iop.org/article/10.1088/1748-9326/aa7541/pdf)

[8] [Goessling, S, P Hanna, J Higham, S Cohen, D Hopkins, 2019. *Can we fly less? Evaluating the 'necessity' of air travel*, **Journal of Air Transport Management**, 81](https://www.sciencedirect.com/science/article/pii/S0969699719303229)

[9] [Higham, JES, Hopkins D, Orchiston C, 2019. *The work-sociology of academic aeromobility at remote institutions*, **Mobilities**, 14, p. 612-631](https://www.tandfonline.com/doi/full/10.1080/17450101.2019.1589727)

[10] [Ken Hiltner, 2016. *A Nearly Carbon Neutral Conference Model*](https://hiltner.english.ucsb.edu/index.php/ncnc-guide/)

[11] [Mason, B, 2003. *Scientists contribute to greenhouse-gas emissions*, **Nature News**, doi:10.1038/news031208-13](https://dx.doi.org/10.1038/news031208-13)

[12] [Lester, B, 2007. *Greening the Meeting*. **Science**, 318, 5847, pp. 36-38, doi:10.1126/science.318.5847.36](https://dx.doi.org/10.1126/science.318.5847.36)

[13] [Knoerr, W and R Huettermann, 2016. *EcoPassenger: Environmental Methodology and Data, Update 2016*](http://ecopassenger.hafas.de/hafas-res/download/Ecopassenger_Methodology_Data.pdf)

[14] [International Railway Association UIC and Community of European Railway and Infrastructure Companies CER, 2016.
*Rail Transport and Environment: Facts & Figures.*](http://www.cer.be/sites/default/files/publication/Facts%20and%20figures%202014.pdf)

[15] [European Environment Agency EEA, 2017. *Energy efficiency and specific CO2 emissions*](https://www.eea.europa.eu/data-and-maps/indicators/energy-efficiency-and-specific-co2-emissions/energy-efficiency-and-specific-co2-9)

[16] [UK Office of Rail and Road ORR, 2018: *Rail infrastructure, assets and environmental 2017-18 Annual Statistical Release*](https://dataportal.orr.gov.uk/media/1114/rail-infrastructure-assets-environmental-2017-18.pdf)

[17] [Atmosfair, 2016. *atmosfair Flight Emissions Calculator: Documentation of the Method and Data*.](https://www.atmosfair.de/wp-content/uploads/atmosfair-flight-emissions-calculator-englisch-1.pdf)

[18] [International Civil Aviation Organization ICAO, 2017. *ICAO Carbon Emissions Calculator Methodology, Version 10*](https://www.icao.int/environmental-protection/CarbonOffset/Documents/Methodology%20ICAO%20Carbon%20Calculator_v10-2017.pdf)

[19] [Foundation myclimate, 2019. *The myclimate Flight Emission Calculator*.](https://www.myclimate.org/fileadmin/user_upload/myclimate_-_home/01_Information/01_About_myclimate/09_Calculation_principles/Documents/myclimate-flight-calculator-documentation_EN.pdf)

[20] [Jungbluth, N and C Meili, 2019. *Recommendations for calculation of the global warming potential of aviation including the radiative forcing index*. **Int J Life Cycle Assess**, 24, 404](https://doi.org/10.1007/s11367-018-1556-3)

[21] [OUI SNCF, 2018. *Calculation of CO2 emissions on your train journey*.](https://en.oui.sncf/en/help-en/calculation-of-co2-emissions-on-your-train-journey)
""",",https://zenodo.org/badge/latestdoi/218331367,https://doi.org/10.1007/s11367-018-1556-3,https://doi.org/10.1007/s11367-018-1556-3,https://doi.org/10.1007/s11367-018-1556-3","2019/10/29, 16:20:12",1457,MIT,0,75,"2022/03/02, 14:16:41",2,14,16,0,602,0,0.0,0.05882352941176472,"2019/11/21, 15:49:26",v1.2,0,2,false,,false,false,,,https://github.com/ConferenceCarbonTracker,,,,,https://avatars.githubusercontent.com/u/83351285?v=4,,, Bloom,"A SaaS that allows companies to become climate leaders, from calculating their climate impact to communicating about their climate efforts. It connects to as many data sources as possible to assess your carbon footprint and find mitigation opportunities.",tmrowco,https://github.com/electricitymaps/bloom-contrib.git,github,"climate-change,carbon-footprint,carbon-model,life-cycle-assessment,climate-impact",Carbon Intensity and Accounting,"2021/03/25, 18:25:04",429,4,9,false,Jupyter Notebook,Electricity Maps,electricitymaps,"Jupyter Notebook,JavaScript,Python,Shell,CSS,HTML",https://www.bloomclimate.com,"b""# bloom-contrib [![Slack Status](http://slack.tmrow.com/badge.svg)](http://slack.tmrow.com)

Attention: This repository is deprecated and not maintained!

## Bloom

[Bloom](https://www.bloomclimate.com) was a SaaS that allowed companies to become climate leaders, from calculating their climate impact to communicating about their climate efforts. It connected to as many data sources as possible to assess your carbon footprint and find mitigation opportunities.

## Tomorrow is hiring!

Tomorrow, the organisation behind Bloom, builds tech to empower organisations and individuals to understand and reduce their carbon footprint.

We're often hiring great people to join our team in Copenhagen.
Head over to [our jobs page](https://www.tmrow.com/jobs) if you want to help out!\n\n## Structure of this repository\n\n- `./co2eq`: carbon models\n- `./integrations`: contains all integrations\n- `./integrations/img`: contains all integration logos\n- `./playground`: source code of the playground for integrations\n- `./definitions.js`: constant definitions\n\n\n\n""",,"2018/11/06, 15:14:30",1814,MIT,0,903,"2021/03/25, 18:23:46",34,261,466,0,944,3,3.1,0.7528735632183908,,,2,34,false,,false,false,"ShurshevK/CO2-Project,ShurshevK/co2-calculator---BRANCH,ShurshevK/co2-calculator,JarnoRFB/planted-co2-calculator",,https://github.com/electricitymaps,https://www.electricitymaps.com,Denmark,,,https://avatars.githubusercontent.com/u/24733017?v=4,,, NMF.earth app,iOS & Android app to understand and reduce your carbon footprint.,NMF-earth,https://github.com/NMF-earth/nmf-app.git,github,"react-native,ios,android,expo,climate-change,sustainability,global-warming,functional-programming,redux-toolkit,zero-waste,typescript,openfoodfacts,hacktoberfest",Carbon Intensity and Accounting,"2023/06/03, 07:16:00",438,0,101,true,TypeScript,NMF.earth,NMF-earth,"TypeScript,JavaScript,Batchfile",https://nmf.earth,"b'

# 🌱 NMF.earth app

Understand and reduce your carbon footprint

![screenshots](https://github.com/NotMyFaultEarth/nmf-app/blob/main/app-preview.png)

![](https://github.com/NMF-earth/nmf-app/workflows/Test%20CI/badge.svg)
[![Depfu](https://badges.depfu.com/badges/f3b06c819202baf2a14b3241cbf249c9/overview.svg)](https://depfu.com/repos/github/NotMyFaultEarth/nmf-app?project_id=10243)
[![Contributor Covenant](https://img.shields.io/badge/Contributor%20Covenant-v2.0%20adopted-ff69b4.svg)](code_of_conduct.md)
[![Build Status](https://img.shields.io/static/v1.svg?label=CSL&message=software%20against%20climate%20change&color=green?style=flat&logo=github)](https://github.com/climate-strike/license)
[![runs with expo](https://img.shields.io/badge/Runs%20with%20Expo-000.svg?style=flat-square&logo=EXPO&labelColor=f3f3f3&logoColor=000)](https://expo.io/)

Repository for the [NMF.earth](https://nmf.earth/) React Native application, built with Expo, Redux Toolkit and TypeScript.
Design can be found on [Figma](https://www.figma.com/community/file/967052407514062912).

### 📊 Data source

Carbon data comes from NMF's [carbon footprint repo](https://github.com/NMF-earth/carbon-footprint), while scanned product barcode data comes from the [Open Food Facts](https://world.openfoodfacts.org/) API.

### 📦 Getting started

Installing dependencies:

```bash
$ yarn
```

Running the app:

```bash
$ yarn start
```

For starting the app on a specific OS:

```bash
$ yarn ios | yarn android
```

Copy the two example files that contain secrets and replace the placeholder values with yours:

```bash
$ cp app.example.json app.json
$ cp secret.example.ts secret.ts
```

### 👩🏾‍💻 Development

- ESLint is used in the project to enforce code style and should be configured in your [editor](https://eslint.org/docs/user-guide/integrations).

- Prettier is also used and applied automatically by ESLint.

- TypeScript is used in the project for type-checking and should be configured in your [editor](https://github.com/Microsoft/TypeScript/wiki/TypeScript-Editor-Support).

You can check this manually by running:

```bash
$ yarn lint
```

or

```bash
$ yarn typescript
```

You can ask ESLint to fix issues by running:

```bash
$ yarn lint:fix
```

### 🛠 Testing

Use the following command to run unit tests with coverage:

```bash
$ yarn test
```

Use the following to update unit test snapshots:

```bash
$ yarn test -u
```

Use the following to run unit tests in watch mode while developing:

```bash
$ yarn test --watch
```

### 🎨 Storybook

Stories (\*.story.tsx) can be automatically added to `storyLoader.js` with:

```bash
$ yarn prestorybook
```

### 📗 Sustainable guide

Place new `.md` files inside the `guides` folder or modify an existing guide, then run `node scripts/generate-guides.js` to generate a new sustainable guide. Images can be used in the `.md` as follows: `![Earth](earth.png)`, and should be placed in `assets/images/guide`.

For the methodology screen, just run `node scripts/generate-methodology.js` to update `methodology.json` from `methodology.md`.

For the emission info screen, just run `node scripts/generate-emission-info.js` to update `emission-info.json` from `assets/emission-info/markdown/*.md`.

## 🗣 Translations

You can help us translate the app with our online tool [POEditor](https://poeditor.com/join/project/0MbginCsWp). Any help is appreciated and no coding skills are needed 🤗

PS: please do not send translations made by Google Translate or similar.

### Generate

Run `node scripts/generate-translation-files.js` in order to create the files needed for the new language you want to add to the app.

### Manage Files

Run `node scripts/poeditor/group-translation-files.js` to generate one JSON file per language, with all the translation variables in it. From there, you can easily make any edits you want. When you're done, run `node scripts/poeditor/spread-translation-files.js` to merge your edits and spread them into all the translation files across the repo.

### 🚀 Deployment

Any tag starting with `v` will run `expo publish`. During this step, `app.example.json` is used to generate an `app.json` file for Expo's deployment; this is done with the script `scripts/generate-app-json.js`.

### 🏗 Build

First you need to configure the `SENTRY_AUTH_TOKEN` and `SENTRY_DSN` secrets on [expo.dev](https://expo.dev/accounts/%5Baccount%5D/settings/secrets) and then define `projectId` in `app.config.js`. Then run `npm install dotenv` and place your secrets inside a `.env` file that you create with `SENTRY_AUTH_TOKEN` and `SENTRY_DSN`, as in `.env.example`.

Run `eas build -p ios` to build for [App Store Connect](https://appstoreconnect.apple.com) and `eas build -p android` for the [Google Play Console](https://play.google.com/console/developers).

### 👨‍💻 Contribute ❤️

More than 40 developers have contributed to the app; thanks a lot to [them](https://github.com/NMF-earth/nmf-app/graphs/contributors)!

Have a look at [contributing.md](https://github.com/NotMyFaultEarth/nmf-app/blob/main/contributing.md) if you want to contribute!

### 🏆 Backers

A big thank you to [Christopher Gwilliams](https://github.com/encima) and to the Phelps family for their amazing contribution to the [Kickstarter](https://www.kickstarter.com/projects/pierrebresson/not-my-fault)!

### ©️ Open source - licence

Repository and contributions are under the [GNU General Public License v3.0](https://github.com/NotMyFaultEarth/nmf-app/blob/main/LICENSE).
'",,"2019/11/25, 10:12:37",1430,GPL-3.0,21,812,"2023/06/03, 07:19:37",27,166,354,23,144,1,2.3,0.37349397590361444,"2023/03/20, 08:26:35",v0.10.0,0,41,true,"github,patreon,open_collective,ko_fi,tidelift,community_bridge,liberapay,issuehunt,otechie,custom",true,true,,,https://github.com/NMF-earth,http://nmf.earth,,,,https://avatars.githubusercontent.com/u/63661783?v=4,,, EnergyPATHWAYS,"The energyPATHWAYS Model is a professional, open source energy and carbon planning tool for use in evaluating long-term, economy-wide greenhouse gas mitigation scenarios.",energyPATHWAYS,https://github.com/energyPATHWAYS/EnergyPATHWAYS.git,github,,Carbon Intensity and Accounting,"2021/05/25, 16:00:44",41,0,3,false,Python,EnergyPATHWAYS,energyPATHWAYS,"Python,Makefile,Shell,PLpgSQL",,b'# EnergyPATHWAYS
This version of the EnergyPATHWAYS model was deprecated at the end of 2020.

## License
EnergyPathways is released under the MIT License (MIT). See the LICENSE file for further details.
',,"2015/12/30, 09:48:46",2856,MIT,0,917,"2016/05/23, 05:59:49",20,0,45,0,2711,0,0,0.5139372822299652,,,0,6,false,,false,false,,,https://github.com/energyPATHWAYS,,,,,https://avatars.githubusercontent.com/u/16344026?v=4,,, blockchain-carbon-accounting,Code of the Carbon Accounting and Certification Working Group.,opentaps,https://github.com/opentaps/blockchain-carbon-accounting.git,github,,Carbon Intensity and Accounting,"2023/03/28, 16:08:25",23,0,3,true,TypeScript,,opentaps,"TypeScript,JavaScript,Shell,Solidity,Go,HTML,Python,Dockerfile,CSS,Just,HCL,Makefile",,"b'# blockchain-carbon-accounting

[![CI](https://github.com/hyperledger-labs/blockchain-carbon-accounting/actions/workflows/ci.yml/badge.svg)](https://github.com/hyperledger-labs/blockchain-carbon-accounting/actions/workflows/ci.yml)
[![Test Report](https://github.com/hyperledger-labs/blockchain-carbon-accounting/actions/workflows/test-report.yml/badge.svg)](https://github.com/hyperledger-labs/blockchain-carbon-accounting/actions/workflows/test-report.yml)

This project uses web3/blockchain/distributed ledger to solve several key challenges for climate change:

- Storing data from energy use, renewable energy production, carbon removal or reduction projects in a permissioned data ledger.
- Tokenizing emissions audits, carbon credits, and energy attribute certificates.
- Validating climate projects by voting through a Distributed Autonomous Organization (DAO).

With it you could implement a variety of use cases, such as developing and monetizing carbon emissions reduction projects; emissions calculations for individuals, companies, and supply chains; and using carbon credits to implement emissions reduction plans.

The code is divided into the following components:

- [lib](lib/README.md): Common library of code
- [fabric](fabric/README.md): [Emissions Data Channel](https://wiki.hyperledger.org/display/CASIG/Emissions+Data+Channel)
- [hardhat](hardhat/README.md): [Net Emissions Token Network](https://wiki.hyperledger.org/display/CASIG/Emissions+Tokens+Network) and [Climate DAO](https://wiki.hyperledger.org/display/CASIG/DAO)
- [app](app/README.md): Applications built on these components, including a React user interface and supply chain emissions calculations.
- [open-offsets-directory](open-offsets-directory/README.md): [Voluntary Carbon Offsets Directory](https://wiki.hyperledger.org/display/CASIG/Completed+Research%3A+Voluntary+Carbon+Offsets+Directory+Research)
Directory](https://wiki.hyperledger.org/display/CASIG/Completed+Research%3A+Voluntary+Carbon+Offsets+Directory+Research)\n- [secure-identities](secure-identities/README.md): Support for signing transactions using Vault or web-socket\n- [supply-chain](app/supply-chain/README.md): [Supply Chain Decarbonization](https://wiki.hyperledger.org/display/CASIG/Supply+Chain+Decarbonization)\n- [data](data/README.md): Data for setting up the applications\n\nTo try it out, use the demo at [emissions-test.opentaps.org](https://emissions-test.opentaps.org/) or follow the steps in [Getting Started](Getting_Started.md) to set it up yourself.\n\nFor more details, see the\n\n- [User\'s Guide](User_Guide.md)\n- [Open Source Carbon Accounting video](https://www.youtube.com/watch?v=eNM7V8vQCg4)\n- [Hyperledger Carbon Accounting and Neutrality Working Group](https://wiki.hyperledger.org/display/CASIG/Carbon+Accounting+and+Certification+Working+Group)\n- [Setup Guide](Setup.md) and documentation in each component.\n\nGet involved! Please see [How to Contribute](https://wiki.hyperledger.org/display/CASIG/How+to+Contribute) to help us build this open source platform for climate action.\n\n## Git Notes\n\nPlease sign off all your commits. This can be done with\n\n    $ git commit -s -m ""your message""\n\n'",,"2021/01/16, 14:41:03",1012,Apache-2.0,96,2808,"2023/03/16, 20:14:56",7,53,56,19,223,6,0.0,0.5782178217821783,,,0,27,false,,false,false,,,https://github.com/opentaps,,,,,https://avatars.githubusercontent.com/u/79276703?v=4,,, footprint,An R package to calculate carbon footprints from air travel based on IATA airport codes or latitude and longitude.,acircleda,https://github.com/acircleda/footprint.git,github,,Carbon Intensity and Accounting,"2020/12/31, 14:25:02",15,0,2,true,R,,,"R,Rebol",,"b'\n\n\n\n\n[![R-CMD-check](https://github.com/acircleda/footprint/workflows/R-CMD-check/badge.svg)](https://github.com/acircleda/footprint/actions)\n[![Codecov test\ncoverage](https://codecov.io/gh/acircleda/footprint/branch/master/graph/badge.svg)](https://codecov.io/gh/acircleda/footprint?branch=master)\n[![CRAN\nstatus](https://www.r-pkg.org/badges/version/footprint)](https://CRAN.R-project.org/package=footprint)\n\n\n# footprint\n\n\n\nThe goal of footprint is to calculate carbon footprints from air travel\nbased on IATA airport codes or latitude and longitude.\n\n## Installation\n\nYou can install the development version from\n[GitHub](https://github.com/) with:\n\n``` r\n# install.packages(""remotes"")\nremotes::install_github(""acircleda/footprint"")\n```\n\n## Data and Methodology\n\nPackage `footprint` uses the Haversine great-circle distance formula\nto calculate distance between airports or distance between latitude and\nlongitude pairs. This distance is then used to derive a carbon footprint\nestimate, which is based on conversion factors from the Department for\nEnvironment, Food & Rural Affairs (UK) 2019 Greenhouse Gas Conversion\nFactors for Business Travel (air):\n.\n\n## Example Usage\n\nLoad `footprint` using\n\n``` r\nlibrary(footprint)\n```\n\n### Using Airport Codes\n\nYou can use pairs of three-letter IATA airport codes to calculate\ndistance.
This function uses the\n[`airportr`](https://github.com/dshkol/airportr) package, which contains\nthe data and does the work of getting the distance between airports.\n*Note*: the `airportr` package offers a number of useful functions for\nlooking up airports by city or name and getting the IATA airport codes.\n\n#### Calculating a Single Trip\n\nThe example below calculates a simple footprint estimation for an\neconomy flight from Los Angeles International (LAX) to Heathrow (LHR).\nThe estimate will be in CO2e (carbon dioxide equivalent,\nincluding radiative forcing). The output is always in kilograms.\n\n``` r\nairport_footprint(""LAX"", ""LHR"", ""Economy"", ""co2e"")\n#> [1] 1312.696\n```\n\nIf there is a layover in Chicago, you could calculate each leg of the\ntrip as follows:\n\n``` r\nairport_footprint(""LAX"", ""ORD"", ""Economy"", ""co2e"") + \n airport_footprint(""ORD"", ""LHR"", ""Economy"", ""co2e"")\n#> [1] 1387.167\n```\n\n#### Calculating More than One Trip\n\nWe can calculate the footprint for multiple itineraries at the same time\nand add them to an existing data frame using `mutate`. Here is some example\ndata:\n\n``` r\nlibrary(tibble)\n#> Warning: package \'tibble\' was built under R version 4.0.3\n\ntravel_data <- tibble(\n name = c(""Mike"", ""Will"", ""Elle""),\n from = c(""LAX"", ""LGA"", ""TYS""),\n to = c(""PUS"", ""LHR"", ""TPA"")\n)\n```\n\n| name | from | to |\n| :--- | :--- | :-- |\n| Mike | LAX | PUS |\n| Will | LGA | LHR |\n| Elle | TYS | TPA |\n\nHere is how you can take the `from` and `to` data and calculate\nemissions for each trip. The following function calculates an estimate\nfor CO2 (carbon dioxide with radiative forcing).
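\n\nOne way to do this is a `rowwise()` plus `mutate()` pipeline (a sketch: it mirrors the `latlong_footprint()` example shown later, and the argument order is taken from the single-trip examples above):\n\n``` r\nlibrary(dplyr)\n\ntravel_data %>%\n rowwise() %>%\n mutate(emissions = airport_footprint(from, to, ""Economy"", ""co2""))\n```\n\n| name | from | to | emissions |\n| :--- | :--- | :-- | --------: |\n| Mike | LAX | PUS | 1434.663 |\n| Will | LGA | LHR | 825.497 |\n| Elle | TYS | TPA | 136.721 |\n\n## From Latitude and Longitude\n\nIf you have a list of cities, it might be easier to calculate emissions\nbased on longitude and latitude rather than trying to locate the\nairports used. For example, one could take city and state data and join\nthat with data from `maps::us.cities` to quickly get latitude and\nlongitude. They can then use the `latlong_footprint()` function to\neasily calculate emissions based on either a single itinerary or\nmultiple itineraries:\n\n### Calculating a Single Trip\n\nThe following example calculates the footprint of a flight from Los\nAngeles (34.052235, -118.243683) to Busan, South Korea (35.179554,\n129.075638).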
It assumes an average passenger (no `flightClass` argument\nis included) and its output will be in kilograms of CO2e (the\ndefault)\n\n``` r\nlatlong_footprint(34.052235, -118.243683, 35.179554, 129.075638)\n#> [1] 1881.589\n```\n\n### Calculating Multiple Trips\n\nYou can use `mutate` to calculate emissions based on a dataframe of\nlatitude and longitude pairs.\n\nHere is some example data:\n\n``` r\ntravel_data2 <- tribble(~name, ~departure_lat, ~departure_long, ~arrival_lat, ~arrival_long,\n # Los Angeles -> Busan\n ""Mike"", 34.052235, -118.243683, 35.179554, 129.075638,\n # New York -> London\n ""Will"", 40.712776, -74.005974, 51.52, -0.10)\n```\n\n| name | departure\\_lat | departure\\_long | arrival\\_lat | arrival\\_long |\n| :--- | -------------: | --------------: | -----------: | ------------: |\n| Mike | 34.05224 | \\-118.24368 | 35.17955 | 129.0756 |\n| Will | 40.71278 | \\-74.00597 | 51.52000 | \\-0.1000 |\n\nAnd here is code to apply it to a dataframe:\n\n``` r\ntravel_data2 %>%\n rowwise() %>%\n mutate(emissions = latlong_footprint(departure_lat,\n departure_long,\n arrival_lat,\n arrival_long))\n```\n\n| name | departure\\_lat | departure\\_long | arrival\\_lat | arrival\\_long | emissions |\n| :--- | -------------: | --------------: | -----------: | ------------: | --------: |\n| Mike | 34.05224 | \\-118.24368 | 35.17955 | 129.0756 | 1881.589 |\n| Will | 40.71278 | \\-74.00597 | 51.52000 | \\-0.1000 | 1090.260 |\n'",,"2020/12/07, 13:39:28",1052,CC0-1.0,0,64,"2023/06/21, 19:52:36",0,8,10,1,126,0,1.0,0.0,,,0,2,false,,false,false,,,,,,,,,,, intensegRid,"Provides information on national and regional carbon intensity, the amount of carbon emitted per unit of energy consumed, for the UK.",KKulma,https://github.com/KKulma/intensegRid.git,github,"climate-change,carbon-intensity,carbon-api,carbon-emissions,r,hacktoberfest",Carbon Intensity and Accounting,"2022/11/07, 14:32:12",9,0,0,true,R,,,R,https://carbonintensity.org.uk/,"b'\n\n\n# About\n\n\n\n[![R-CMD-check](https://github.com/KKulma/intensegRid/workflows/R-CMD-check/badge.svg)](https://github.com/KKulma/intensegRid/actions)\n[![Codecov test\ncoverage](https://codecov.io/gh/KKulma/intensegRid/branch/master/graph/badge.svg)](https://codecov.io/gh/KKulma/intensegRid?branch=master)\n[![CRAN\nstatus](https://www.r-pkg.org/badges/version/intensegRid)](https://CRAN.R-project.org/package=intensegRid)\n[![Downloads](http://cranlogs.r-pkg.org/badges/grand-total/intensegRid)](https://cran.r-project.org/package=intensegRid)\n[![Downloads](http://cranlogs.r-pkg.org/badges/intensegRid)](https://cran.r-project.org/package=intensegRid)\n\n\n# intensegRid \n\nThis package is an API wrapper for [National Grid\xe2\x80\x99s Carbon Intensity\nAPI](https://carbonintensity.org.uk/). The API provides information on\nnational and regional carbon intensity - the amount of carbon emitted\nper unit of energy consumed - for the UK.\n\n## Installation\n\nInstall the latest CRAN package with:\n\n``` r\ninstall.packages(""intensegRid"")\n```\n\nOr you can install the development version from\n[GitHub](https://github.com/) with:\n\n``` r\n# install.packages(""remotes"")\nremotes::install_github(""KKulma/intensegRid"")\n```\n\n## Examples\n\nFor examples on how to use **intensegRid** package refer to the\n[vignette](https://kkulma.github.io/intensegRid/articles/intro-to-carbon-intensity.html).\n\n## Limitations\n\nIn its current form, the package only accepts dates as `start` or `end`\ninputs (as Dates or character string), but not timestamps. 
However, you\ncan easily filter the output of **intensegRid** functions using the\n[dplyr](https://dplyr.tidyverse.org/) and\n[lubridate](https://lubridate.tidyverse.org/) packages.
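\n\nFor instance, to keep just a window of timestamps from a full day of results (a sketch: `get_british_ci()` and the `from` timestamp column are assumptions about the package output, so adjust the names to what your call actually returns):\n\n``` r\nlibrary(dplyr)\nlibrary(lubridate)\n\nci <- get_british_ci(start = ""2019-04-01"", end = ""2019-04-02"")\nci %>%\n filter(from >= ymd_hm(""2019-04-01 09:00""),\n from < ymd_hm(""2019-04-01 17:00""))\n```\n\n## Contribution\n\nThis is an open-source project and it welcomes your contribution! Feel\nfree to use and test the package and if you find a bug, please, report\nit as [an issue](https://github.com/KKulma/intensegRid/issues). You may\nwant to go even a step further and fix an issue you just raised!\n\nIf you\xe2\x80\x99re rather new to open source and git, [this\nrepo](https://github.com/firstcontributions/first-contributions/blob/master/README.md)\noffers some easy to follow guidance on how to start. Thanks for your\ntime and efforts!\n'",,"2020/03/02, 13:21:20",1332,CC0-1.0,1,103,"2022/11/07, 14:32:12",5,24,31,1,352,1,0.0,0.16666666666666663,"2022/10/18, 14:02:01",pre-v0.1.2,2,2,false,,false,false,,,,,,,,,,, Carbonfact Models,The carbon footprint models used by carbonfact.co.,kansoapp,https://github.com/kansoapp/carbonfact-models.git,github,,Carbon Intensity and Accounting,"2021/12/07, 09:40:17",33,0,1,false,TypeScript,Carbonfact,kansoapp,"TypeScript,JavaScript",,"b""# Carbonfact Models\n\n![](./banner_share.jpg)\n\nThis repository contains the carbon footprint models used by [Carbonfact](https://www.carbonfact.com).\n\n## Disclaimer\n\nThis repository is in its early days. Please be indulgent, we're progressively improving it to make it easier to understand, re-use and contribute to.\n\nIf you have questions, please have a look at [the main website](https://www.carbonfact.com) and contact us at [hello@carbonfact.com](mailto:hello@carbonfact.com).\n\n## How to use\n\nThis repository is intended for:\n\n- Explaining how the carbon footprint calculation is done on [Carbonfact](https://www.carbonfact.com).\n- Enabling anyone to audit or reuse our methodology.\n\nFor now, we are sharing the source code used on our platform to calculate the carbon footprints. We share it as open-source using a [simple copyleft license](https://www.mozilla.org/en-US/MPL/2.0/FAQ/) so that it may be reused but any improvement to the model must be shared with the community (by us and others).\n\nTo be able to understand how the model works, you will need basic knowledge in software programming and the Typescript language. (We intend to make this easier to understand for non-programmers in the future.)\n\n**Tests**\n\nTest files have been added (ending with `.test.js`). You can start by looking at them to understand how to use the different parts of the source code.\n\n**Run locally**\n\n```sh\ngit clone https://github.com/kansoapp/carbonfact-models\ncd carbonfact-models\nyarn install\nyarn test\n```\n\n## How it works\n\n**[Compute](./src/operations/ComputeProductFootprint.ts)**\n\n- The carbon footprint is calculated in here, by the `computeFootprint(productData: ProductData)` method.\n- Different types (e.g. `ProductDataEntity`, `EmissionFactorEntity`...) represent the data necessary to perform the computation. They are defined in [src/entities](./src/entities).\n\n**[Expand](./src/operations/ExpandPartialProductData.ts)**\n\n_Introduced in `v0.2.0`_\n\n> When calculating the carbon emissions of a pair of sneakers for which we don't have the weight of the upper, we apply a template (which may be based on the shoes' brand and category) that provides this value.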
The template may be used to determine any value of the data necessary to compute the footprint.\n\n- Since we may not have exhaustive data for all the products we want to estimate the carbon footprint for, we use a templating system.\n- The templating system will _expand_ partial product data using a specific template.\n- The templates' data is defined in [productTemplates.ts](./src/data/productTemplates.ts).\n\n## Included data\n\nThe emission factors and model parameters used in our calculation engine are defined in [parameters.ts](./src/data/parameters.ts).\n\nThey are mostly:\n\n**Emission factors**\n\n- The most important parameters of the calculation! They determine how much carbon-equivalent emissions a given material or component represents. For example, how much CO2eq a kg of recycled cotton made in Spain emits.\n- Emission factors are defined in [parameters.ts](./src/data/parameters.ts), prefixed by `emissionFactor`.\n\n**Model parameters**\n\n- For some parts of the carbon footprint assessment, it is less important to have a very detailed analysis (e.g. the most important part - in terms of CO2eq emissions - of the life-cycle of a pair of shoes is the materials it's made of). For those, we may rely on model parameters which provide high-level approximations of the emissions (e.g. the distribution step).\n- Those parameters are defined in [parameters.ts](./src/data/parameters.ts), prefixed by `fixedValue`.\n\nData owner? Reach out if you have any remark or question on how we use your data.\n\n## Contribute\n\nIf you see errors or want to suggest improvements, feel free to submit Github issues.\n\n**Data owners**\n\n- Do you own data that may help us in our assessment (e.g. emission factors) and would like to share it with us?\n- Do we use some of your data, and do you have remarks?\n\n\xf0\x9f\x91\x89 contact us at [hello@carbonfact.com](mailto:hello@carbonfact.com)\n\n## TODOs\n\n- [x] Explain how to use it\n- [ ] Explain how it's used by Carbonfact\n- [ ] Detail sources of model parameters\n- [ ] Explain how to contribute\n- [ ] Add a code of conduct\n- [x] Add tests\n- [ ] More tests and automated builds in CI\n\n---\n\nCopyright \xc2\xa9 Kanso Inc.
2021\n""",,"2021/07/06, 01:09:22",841,MPL-2.0,0,16,"2021/12/07, 09:40:18",1,4,5,0,687,0,0.0,0.0,"2021/12/07, 09:49:55",v0.5.0,0,1,false,,false,false,,,https://github.com/kansoapp,https://www.carbonfact.com,,,,https://avatars.githubusercontent.com/u/79087885?v=4,,, elmada,Dynamic electricity carbon emission factors and prices for Europe.,DrafProject,https://github.com/DrafProject/elmada.git,github,"carbon-emissions,electricity-prices,electricity-market,demand-response,energy-system-modeling,python,marginal-emissions",Carbon Intensity and Accounting,"2022/12/12, 09:44:43",20,1,10,true,Python,The Draf Project,DrafProject,"Python,TeX,HTML",,"b'\n\n---\n\n# elmada: Dynamic electricity carbon emission factors and prices for Europe\n\n**Status:**\n[![PyPI](https://img.shields.io/pypi/v/elmada?color=success&label=pypi%20package)](https://pypi.python.org/pypi/elmada)\n[![CI](https://github.com/DrafProject/elmada/actions/workflows/CI.yml/badge.svg)](https://github.com/DrafProject/elmada/actions/workflows/CI.yml)\n[![CI with conda](https://github.com/DrafProject/elmada/actions/workflows/CI_conda.yml/badge.svg)](https://github.com/DrafProject/elmada/actions/workflows/CI_conda.yml)\n[![codecov](https://codecov.io/gh/DrafProject/elmada/branch/main/graph/badge.svg?token=EOKKJG48A9)](https://codecov.io/gh/DrafProject/elmada)\n\n**Usage:**\n[![python](https://img.shields.io/badge/python-_3.9|_3.10|_3.11-blue?logo=python&logoColor=white)](https://github.com/DrafProject/elmada)\n[![License: LGPL v3](https://img.shields.io/badge/License-LGPL%20v3-blue.svg)](https://www.gnu.org/licenses/lgpl-3.0)\n[![status](https://joss.theoj.org/papers/10.21105/joss.03625/status.svg)][JOSS paper]\n[![Downloads](https://pepy.tech/badge/elmada)](https://pepy.tech/project/elmada)\n\n**Contribution:**\n[![Imports: isort](https://img.shields.io/badge/%20imports-isort-%231674b1)](https://pycqa.github.io/isort/)\n[![Code style: black](https://img.shields.io/badge/code%20style-black-000000.svg)](https://github.com/psf/black)\n[![Gitter](https://badges.gitter.im/DrafProject/elmada.svg)](https://gitter.im/DrafProject/elmada)\n[![Contributor Covenant](https://img.shields.io/badge/Contributor%20Covenant-2.0-4baaaa.svg)](CODE_OF_CONDUCT.md)\n\nThe open-source Python package **elmada** provides electricity carbon emission factors and wholesale prices for European countries.\nThe target group includes modelers of distributed energy hubs who need **el**ectricity **ma**rket **da**ta (short: **elmada**), e.g., to evaluate the environmental effect of demand response.\n**elmada** is part of the [Draf Project] but can be used as a standalone package.\n\n\n\n## Features\n\n* __Dynamic electricity Carbon Emission Factors (CEFs)__ are calculated depending on country and year in up to quarter-hourly resolution.\nThere are two types of CEFs: __Grid Mix Emission Factors (XEFs)__ and __Marginal Emission Factors (MEFs)__.\nWhile XEFs reflect the carbon footprint of an electricity use (attributional approach), MEFs estimate the carbon impact (consequential approach) of a change in electricity demand (Learn more in the [white paper][CEFWhitepaper] from Tomorrow and WattTime).\nChoose between\n * __XEFs__ from fuel type-specific [ENTSO-E] electricity generation data only for Germany (`XEF_EP`),\n * and __XEFs__ & __MEFs__ from merit order based simulations for [30 European Countries][Europe30] (`XEF_PP`, `XEF_PWL`, `MEF_PP`, `MEF_PWL`).\n The corresponding Power Plant method (`PP`) and Piecewise Linear method (`PWL`) are described in the open-access [Applied
Energy paper].\n The data used depend on the method chosen, see [scheme below](#cef-scheme).\n\n* __Wholesale electricity prices__ are provided for European countries. You can choose between the real historical [ENTSO-E] data (`hist_EP`) or the simulation results of the `PP` / `PWL` method.\n\n* Other interesting market data such as merit order lists & plots, fuel-specific generation data, or power plant lists are provided as a by-product of the CEF calculations.\n\n## Methodology\n\nWith the `XEF_EP` method, XEFs are calculated by multiplying the share matrix *S* (fuel type specific share of electricity generation per time step from [ENTSO-E]) with the intensity vector *\xce\xb5* (fuel type specific life cycle carbon emission intensities from [Tranberg.2019]):\n\n\n\nThe methods `PP`, `PWL`, and `PWLv` are explained in the [Applied Energy paper]. Here is an overview:\n \n \n\n# Data\n\n## Geographic scope\n\nIn `elmada`, two-letter country codes ([ISO 3166-1 alpha-2](https://en.wikipedia.org/wiki/ISO_3166-1_alpha-2)) are used.\n\nThe countries supported by `elmada` can be seen in the map below which is the output of `elmada.plots.cef_country_map(year=2020, method=""XEF_EP"")`.\n\n\n\nIn the [Usage section](#usage) they are referred to as Europe30.\nThey include:\n\n* 20 countries analyzed in the [Applied Energy paper]: AT, BE, CZ, DE, DK, ES, FI, FR, GB, GR, HU, IE, IT, LT, NL, PL, PT, RO, RS, SI\n* 8 countries with only [one reported fossil fuel type][APENsupplPage8]: BA, CH, EE, LV, ME, MK, NO, SE\n* 2 countries where installed generation capacity data for 2019 were only available after the publication of the [Applied Energy paper]: BG, SK\n\n## Data modes\n\nYou can use **elmada** in two data modes which can be set with `elmada.set_mode(mode=)`:\n\n* `mode=""safe""` (default):\n * Pre-cached data for 4 years and 20 countries are used. The data are described in the [Applied Energy paper].\n * The years are 2017 to 2020 and the countries AT, BE, CZ, DE, DK, ES, FI, FR, GB, GR, HU, IE, IT, LT, NL, PL, PT, RO, RS, SI.\n * The data is available in the space-saving and quick-to-read [Parquet format] under [.../safe_cache].\n* `mode=""live""`:\n * Up-to-date data are retrieved on demand and are cached to an OS-specific directory, see `elmada.paths.CACHE_DIR`. 
A symbolic link to it can be conveniently created by executing `elmada.make_symlink_to_cache()`.\n * Available years are 2017 until the present.\n * Slow due to API requests.\n * Requires valid API keys of ENTSO-E, Morph, Quandl, see [table below](#data-sources).\n\n## Data sources\n\n| Description | Local data location | Source | Channel | Involved in |\n|-|-|-|-|-|\n| Generation time series & installed generation capacities | [.../safe_cache] or `CACHE_DIR` | [ENTSO-E] | \xf0\x9f\x94\x8c on-demand-retrieval via [EntsoePandasClient] (requires valid [ENTSO-E API key]) | CEFs via `EP`, `PP`, `PWL`, `PWLv` |\n| Carbon prices (EUA)| [.../safe_cache] or `CACHE_DIR` | [Sandbag] & [ICE] | \xf0\x9f\x94\x8c on-demand-retrieval via [Quandl] (requires valid [Quandl API key]) | CEFs via `PP`, `PWL`, `PWLv` |\n| Share of CCGT among gas power plants | [.../safe_cache] or `CACHE_DIR` | [GEO] | \xf0\x9f\x94\x8c on-demand-download via [Morph] (requires valid [Morph API key])| CEFs via `PWL`, `PWLv` |\n| (Average) fossil power plants sizes | [.../safe_cache] or `CACHE_DIR` | [GEO] | \xf0\x9f\x94\x8c on-demand-scraping via [BeautifulSoup4] | CEFs via `PWL`, `PWLv` |\n| German fossil power plant list with efficiencies | [.../safe_cache] or `CACHE_DIR` | [OPSD] | \xf0\x9f\x94\x8c on-demand-download from [here][opsd_download] | CEFs via `PP`, `PWL`, `PWLv` |\n| Transmission & distribution losses | [.../worldbank] | [Worldbank] | \xf0\x9f\x92\xbe manual download from [here][wb] | CEFs via `PP`, `PWL`, `PWLv` |\n| Fuel prices for 2015 (+ trends) | [.../from_other.py] (+ [.../destatis]) | [Konstantin.2017] (+ [DESTATIS]) | \xf0\x9f\x94\xa2 hard-coded values (+ \xf0\x9f\x92\xbe manual download from [here][destatis_download]) | CEFs via `PP`, `PWL`, `PWLv` |\n| Fuel type-specific carbon emission intensities | [.../from_other.py] & [.../tranberg] | [Quaschning] & [Tranberg.2019] | \xf0\x9f\x94\xa2 hard-coded values | CEFs via `EP`, `PP`, `PWL`, `PWLv` |\n\n## Time zones\n\nThe data is in local time since the [Draf Project] focuses on the modeling of individual local energy hubs.\nStandard time is used i.e. 
daylight saving time is ignored.\nAlso see [this table](https://github.com/DrafProject/marginal-emission-factors/blob/main/README.md#time-zones) of the time zones used.\n\n# Installation\n\n## Using `pip`\n\n```sh\npython -m pip install elmada\n```\n\nNOTE: Read [here](https://snarky.ca/why-you-should-use-python-m-pip/) why you should use `python -m pip` instead of `pip`.\n\n## From source using conda\n\nFor a conda environment including a fully editable **elmada** version, follow these steps.\n\nClone the source repository:\n\n```sh\ngit clone https://github.com/DrafProject/elmada.git\ncd elmada\n```\n\nCreate a conda environment based on `environment.yml` and install an editable local **elmada** version:\n\n```sh\nconda env create\n```\n\nActivate the environment:\n\n```sh\nconda activate elmada\n```\n\n## From source without using conda\n\n### For Unix\n\n```sh\ngit clone https://github.com/DrafProject/elmada.git\ncd elmada\npython3 -m venv env\nsource env/bin/activate\npython -m pip install -e .[dev]\n```\n\n### For Windows\n\n```sh\ngit clone https://github.com/DrafProject/elmada.git\ncd elmada\npy -m venv env\n.\\env\\Scripts\\activate\npy -m pip install -e .[dev]\n```\n\n# Tests\n\nThis should always work:\n\n```sh\npytest -m=""not apikey""\n```\n\nThis works only if API keys are set as described [below](#optional-set-your-api-keys-and-go-live-mode):\n\n```sh\npytest\n```\n\n# Usage\n\n```py\nimport elmada\n```\n\n## OPTIONAL: Set your API keys and go live mode\n\n```py\nelmada.set_api_keys(entsoe=""YOUR_ENTSOE_KEY"", morph=""YOUR_MORPH_KEY"", quandl=""YOUR_QUANDL_KEY"")\n# NOTE: API keys are stored in an OS-dependent config directory for later use.\n\nelmada.set_mode(""live"")\n```\n\n## Carbon Emission factors\n\n```py\nelmada.get_emissions(year=2019, country=""DE"", method=""MEF_PWL"", freq=""60min"", use_datetime=True)\n```\n\n... returns marginal emission factors calculated by the `PWL` method with hourly datetime index:\n\n```sh\n2019-01-01 00:00:00 990.103492\n2019-01-01 01:00:00 959.758367\n ...\n2019-12-31 22:00:00 1064.122146\n2019-12-31 23:00:00 1049.852079\nFreq: 60T, Name: MEFs, Length: 8760, dtype: float64\n```\n\nThe `method` argument of `get_emissions()` takes strings that consist of two parts separated by an underscore.\nThe first part is the type of emission factor: grid mix emission factors (`XEF`) or marginal emission factors (`MEF`).\nThe second part determines the calculation method: power plant method (`PP`), piecewise linear method (`PWL`), or piecewise linear method in validation mode (`PWLv`).\n\nThe first part can be omitted (`_PP`, `_PWL`, `_PWLv`) to return a DataFrame that includes additional information.\n\n```py\nelmada.get_emissions(year=2019, country=""DE"", method=""_PWL"")\n```\n\n... returns all output from the PWL method:\n\n```sh\n residual_load total_load marginal_fuel efficiency marginal_cost MEFs XEFs\n0 21115.00 51609.75 lignite 0.378432 40.889230 990.103492 204.730151\n1 18919.50 51154.50 lignite 0.390397 39.636039 959.758367 164.716687\n... ... ... ... ... ... ...
...\n8758 27116.00 41652.00 lignite 0.352109 43.946047 1064.122146 388.542911\n8759 25437.75 39262.75 lignite 0.356895 43.356723 1049.852079 376.009477\n[8760 rows x 7 columns]\n```\n\nAdditionally, XEFs can be calculated from historic fuel type-specific generation data (`XEF_EP`).\n\nHere is an overview of valid `method` argument values:\n\n| `method` | Return type | Return values | Restriction |\n| --: | -- | -- | -- |\n| `XEF_PP` | Series | XEFs using PP method | DE |\n| `XEF_PWL` | Series | XEFs using PWL method | [Europe30] |\n| `XEF_PWLv` | Series | XEFs using PWLv method | DE |\n| `MEF_PP` | Series | MEFs from PP method | DE |\n| `MEF_PWL` | Series | MEFs using PWL method | [Europe30] |\n| `MEF_PWLv` | Series | MEFs using PWLv method | DE |\n| `_PP` | Dataframe | extended data for PP method | DE |\n| `_PWL` | Dataframe | extended data for PWL method | [Europe30] |\n| `_PWLv` | Dataframe | extended data for PWLv method | DE |\n| `XEF_EP` | Series | XEFs using fuel type-specific generation data from [ENTSO-E] | [Europe30] |\n\nYou can plot the carbon emission factors with\n\n```py\nelmada.plots.cefs_scatter(year=2019, country=""DE"", method=""MEF_PP"")\n```\n\n\n\n## Wholesale prices\n\n```py\nelmada.get_prices(year=2019, country=""DE"", method=""hist_EP"")\n```\n\n```sh\n0 28.32\n1 10.07\n ... \n8758 38.88\n8759 37.39\nLength: 8760, dtype: float64\n```\n\nPossible values for the `method` argument of `get_prices()` are:\n\n| `method` | Description | Restriction |\n| --: | -- | -- |\n| `PP` | Using the power plant method | DE |\n| `PWL` | Using piecewise linear method | [Europe30] |\n| `PWLv` | Using piecewise linear method in validation mode | DE |\n| `hist_EP` | Using historic [ENTSO-E] data | [Europe30] without BA, ME, MK|\n| `hist_SM` | Using historic [Smard] data | used only as backup for DE, 2015 and 2018 |\n\n## Merit order\n\n```py\nelmada.plots.merit_order(year=2019, country=""DE"", method=""PP"")\n```\n\n... plots the merit order:\n\n\n\n```py\nelmada.get_merit_order(year=2019, country=""DE"", method=""PP"")\n```\n\n... returns the merit order as DataFrame with detailed information on individual power plant blocks.\n\n## Pre-processed data\n\nThe following table describes additional `elmada` functions that provide pre-processed data.\nKeyword arguments are for example `kw = dict(year=2019, freq=""60min"", country=""DE"")`.\n\n| `elmada.` function call | Return type (Dimensions) | Return value | Usage in `elmada` | Used within |\n| -- | -- | -- | -- | -- |\n| `get_el_national_generation(**kw)` | DataFrame (time, fuel type) | National electricity generation | Share matrix *S* | `XEF_EP` method |\n| `get_el_national_generation(**kw).sum(axis=1)` | Series (time) | Total national electricity generation | Proxy for the total load | XEFs calculations |\n| `get_residual_load(**kw)` | Series (time) | Conventional national generation | Proxy for the residual load (see [scheme above](#methodology)) | `PP`, `PWL` and `PWLv`|\n\n# Contributing\n\nContributions in any form are welcome! To contribute changes, please have a look at our [contributing guidelines](CONTRIBUTING.md).\n\nIn short:\n\n1. Fork the project and create a feature branch to work on in your fork (`git checkout -b new-feature`).\n1. Commit your changes to the feature branch and push the branch to GitHub (`git push origin my-new-feature`).\n1. 
On GitHub, create a new pull request from the feature branch.\n\n# Citing elmada\n\nIf you use **elmada** for academic work, please cite this paper published in the Journal of Open Source Software:\n\n[![status](https://joss.theoj.org/papers/10.21105/joss.03625/status.svg)][JOSS paper]\n\n```bibtex\n@article{Fleschutz2021,\n title = {elmada: Dynamic electricity carbon emission factors and prices for Europe},\n author = {Markus Fleschutz and Michael D. Murphy},\n journal = {Journal of Open Source Software},\n publisher = {The Open Journal},\n year = {2021},\n volume = {6},\n number = {66},\n pages = {3625},\n doi = {10.21105/joss.03625},\n}\n```\n\nIf you use the PP or PWL method, please also cite the open-access [Applied Energy paper]:\n\n[![APEN](https://img.shields.io/badge/AppliedEnergy-10.1016/j.apenergy.2021.117040-brightgreen)][Applied Energy paper]\n\n```bibtex\n@article{Fleschutz2021b,\n title = {The effect of price-based demand response on carbon emissions in European electricity markets: The importance of adequate carbon prices},\n author = {Markus Fleschutz and Markus Bohlayer and Marco Braun and Gregor Henze and Michael D. Murphy},\n journal = {Applied Energy},\n year = {2021},\n volume = {295},\n issn = {0306-2619},\n pages = {117040},\n doi = {10.1016/j.apenergy.2021.117040},\n}\n```\n\n# License\n\nCopyright (c) 2021 Markus Fleschutz\n\n[![License: LGPL v3](https://img.shields.io/badge/License-LGPL%20v3-blue.svg)](https://www.gnu.org/licenses/lgpl-3.0)\n\nTHE SOFTWARE IS PROVIDED ""AS IS"", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.\n\n\n[.../destatis]: elmada/data/raw/destatis\n[.../from_other.py]: elmada/from_other.py\n[.../safe_cache]: elmada/data/safe_cache\n[.../tranberg]: elmada/data/raw/tranberg\n[.../worldbank]: elmada/data/raw/worldbank\n[APENsupplPage8]: https://ars.els-cdn.com/content/image/1-s2.0-S0306261921004992-mmc1.pdf#page=8\n[Applied Energy paper]: https://doi.org/10.1016/j.apenergy.2021.117040\n[BeautifulSoup4]: https://pypi.org/project/beautifulsoup4\n[destatis_download]: https://www.destatis.de/DE/Themen/Wirtschaft/Preise/Publikationen/Energiepreise/energiepreisentwicklung-xlsx-5619001.xlsx?__blob=publicationFile\n[DESTATIS]: https://www.destatis.de\n[Draf Project]: https://github.com/DrafProject\n[ENTSO-E API key]: https://transparency.entsoe.eu/content/static_content/Static%20content/web%20api/Guide.html\n[ENTSO-E]: https://transparency.entsoe.eu/\n[EntsoePandasClient]: https://github.com/EnergieID/entsoe-py#EntsoePandasClient\n[Europe30]: #geographic-scope\n[GEO]: http://globalenergyobservatory.org\n[ICE]: https://www.theice.com\n[JOSS paper]: https://doi.org/10.21105/joss.03625\n[Konstantin.2017]: https://doi.org/10.1007/978-3-662-49823-1\n[Morph API key]: https://morph.io/documentation/api\n[Morph]: https://morph.io\n[opsd_download]: https://data.open-power-system-data.org/conventional_power_plants/latest\n[OPSD]: https://open-power-system-data.org\n[Parquet format]: https://parquet.apache.org\n[Quandl API key]: https://docs.quandl.com/docs#section-authentication\n[Quandl]: https://www.quandl.com\n[Quaschning]:
https://www.volker-quaschning.de/datserv/CO2-spez/index_e.ph\n[Sandbag]: https://sandbag.org.uk/carbon-price-viewer\n[Smard]: https://www.smard.de/en\n[Tranberg.2019]: https://doi.org/10.1016/j.esr.2019.100367\n[wb]: https://databank.worldbank.org/reports.aspx?source=2&series=EG.ELC.LOSS.ZS\n[CEFWhitepaper]: https://docplayer.net/217796110-A-vision-for-how-ambitious-organizations-can-accurately-measure-electricity-emissions-to-take-genuine-action.html\n[Worldbank]: https://databank.worldbank.org/reports.aspx?source=2&series=EG.ELC.LOSS.ZS\n'",",https://doi.org/10.1016/j.apenergy.2021.117040\n,https://doi.org/10.21105/joss.03625\n,https://doi.org/10.1007/978-3-662-49823-1\n,https://doi.org/10.1016/j.esr.2019.100367\n","2021/05/19, 08:31:56",889,LGPL-3.0,5,117,"2021/09/06, 15:13:13",0,0,1,0,779,0,0,0.0,"2021/10/13, 12:01:04",v0.1.0,0,1,false,,true,true,DrafProject/draf,,https://github.com/DrafProject,,Germany,,,https://avatars.githubusercontent.com/u/62054152?v=4,,, UNFCCC emissions data,UNFCCC Emissions data from the Detailed Data By Party interface.,openclimatedata,https://github.com/openclimatedata/unfccc-detailed-data-by-party.git,github,,Carbon Intensity and Accounting,"2022/02/08, 18:51:13",6,0,0,false,Python,Open Climate Data,openclimatedata,"Python,Shell,Makefile",,"b'CSV files of UNFCCC emissions data, downloaded from the UNFCCC [""Detailed Data By Party""](http://di.unfccc.int/detailed_data_by_party) interface.\n\nSee also the [GHG data from UNFCCC](https://unfccc.int/process-and-meetings/transparency-and-reporting/greenhouse-gas-data/ghg-data-unfccc/ghg-data-from-unfccc)\npages.\n\n[![DOI](https://www.zenodo.org/badge/DOI/10.5281/zenodo.1300334.svg)](https://doi.org/10.5281/zenodo.1300334)\n\nContributors:\n\n- Robert Gieseke ()\n- Johannes G\xc3\xbctschow ()\n\n\n## Data\n\nThe data is available in two files, one for Annex-I and one for Non-Annex-I parties.\nIt is in a ""wide"" format, with base year and individual years as columns.\n\nThe current release contains data downloaded on July 31, 2021.
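\n\nTo reshape these wide files into a long format for analysis, a pandas sketch along these lines works (the filename and the ""Base year"" column name are assumptions for illustration; check the actual CSV headers first):\n\n```python\nimport pandas as pd\n\ndf = pd.read_csv(""annex-one.csv"")  # hypothetical filename\n# Identifier columns are everything that is not a base-year or year column\nid_cols = [c for c in df.columns if not (c[0].isdigit() or c == ""Base year"")]\nlong = df.melt(id_vars=id_cols, var_name=""year"", value_name=""value"")\n```\n\nNote also the following footnotes from the data interface:\n\n> Note 1: The reporting and review requirements for GHG inventories are different for Annex I and non-Annex I Parties. The definition format of data for emissions/removals from the forestry sector is different for Annex I and non-Annex I Parties.\n\n> Note 2: Base year data in the data interface relate to the base year under the Climate Change Convention (UNFCCC). The base year under the Convention is defined slightly different than the base year under the Kyoto Protocol. An exception is made for European Union (KP) whereby the base year under the Kyoto Protocol is displayed.\n\n> Note 3: Some non-Annex I Parties submitted their GHG inventory data using the format of the 2006 IPCC Guidelines in reporting GHG emissions/removals. For this reason, these data could not be included in the data interface.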
However, the data are available in the national communications (Andorra, Antigua and Barbuda, Armenia, Bahrain, Bangladesh, Bhutan, Brazil, Brunei Darussalam, Cabo Verde, Cook Islands, Costa Rica, C\xc3\xb4te d\'Ivoire, Colombia, Cuba, Equatorial Guinea, Eswatini, Fiji, Gambia, Georgia, Grenada, Ghana, Honduras, Indonesia, Iran, Jamaica, Kuwait, Malaysia, Mauritania, Mauritius, Mexico, Mongolia, Montenegro, Namibia, Nicaragua, Nigeria, Panama, Oman, Republic of Moldova, Rwanda, Samoa, Serbia, Sierra Leone, Singapore, Somalia, South Africa, Suriname, Timor-Leste, United Arab Emirates, Vanuatu, Venezuela, Viet Nam, and Zambia) and biennial update reports (Afghanistan, Andorra, Antigua and Barbuda, Argentina, Armenia, Azerbaijan, Belize, Benin, Cambodia, Chile, Colombia, Costa Rica, C\xc3\xb4te d\'Ivoire, Dominican Republic, Egypt, El Salvador, Ghana, Georgia, Guinea-Bissau, Honduras, India, Indonesia, Jordan, Laos Peoples Republic, Malaysia, Mauritania, Mexico, Mongolia, Montenegro, Morocco, Namibia, Nigeria, North Macedonia, Oman, Panama, Paraguay, Papua New Guinea, Peru, Republic of Moldova, Serbia, Singapore, South Africa, Tajikistan, Thailand, Togo, Tunisia, Uruguay, Uganda, Viet Nam, and Zambia).\n\n> Note 5: Data displayed on the data interface are ""as received"" from Parties. The publication of Party submissions on this website does not imply the expression of any opinion whatsoever on the part of the UNFCCC or the Secretariat of the United Nations concerning the legal status of any country, territory, city or area or of its authorities, or concerning the delimitation of its frontiers or boundaries as may be referred to in any of the submissions.\n\n\n## Processing\n\nTo update the dataset once newer data becomes available, the following steps need to be run. It might be necessary to adjust the scripts if the data format or website changes. Python3 and Make are required. The scripts should run on Linux or macOS.\n\nRun\n\n```shell\nmake mappings\n```\n\nto download category and group mappings,\n\n```shell\nmake download\n```\n\nto download the data as JSON files and\n\n```shell\nmake process\n```\n\nto generate CSV files for Annex-I and Non-Annex-I.\n\nTo remove all downloaded and generated files run\n\n```shell\nmake clean\n```\n\nThis needs to be done to check for updated data. 
To continue an interrupted\ndownload or check for new data, simply re-run `make download`.\nFiles already downloaded are skipped.\n\n\n## License\n\nThe [UNFCCC website](http://unfccc.int/home/items/2783.php) states:\n\n> All official texts, data and documents are in the public domain and may be freely downloaded, copied and printed provided no change to the content is introduced, and the source is acknowledged.\n'",",https://doi.org/10.5281/zenodo.1300334","2018/04/03, 14:53:26",2031,LGPL-3.0,0,65,"2022/02/08, 18:51:13",6,3,8,0,624,0,0.0,0.22580645161290325,"2021/07/31, 14:34:29",2021-07-31,0,2,false,,false,false,,,https://github.com/openclimatedata,https://openclimatedata.net,"Potsdam, Germany",,,https://avatars.githubusercontent.com/u/20420557?v=4,,, Silicone,Automated filling of detail in reported emission scenarios.,GranthamImperial,https://github.com/GranthamImperial/silicone.git,github,"emissions,automation,filling,detail,climate",Carbon Intensity and Accounting,"2022/10/19, 08:18:25",4,1,1,false,Jupyter Notebook,,GranthamImperial,"Jupyter Notebook,Python,Makefile",https://silicone.readthedocs.io,"b""Silicone\n========\n\n+--------+-----------+\n| Basics | |License| |\n+--------+-----------+\n\n+-------------------+----------------+-----------+--------+\n| Repository health | |Build Status| | |Codecov| | |Docs| |\n+-------------------+----------------+-----------+--------+\n\n+-----------------+----------------+----------------+------------------+\n| Latest releases | |PyPI Install| | |PyPI Version| | |Latest Version| |\n+-----------------+----------------+----------------+------------------+\n\n+-----------------+----------------+---------------+------------------------------+\n| Latest activity | |Contributors| | |Last Commit| | |Commits Since Last Release| |\n+-----------------+----------------+---------------+------------------------------+\n\n.. sec-begin-long-description\n.. sec-begin-index\n\n\nSilicone is a Python package which can be used to infer emissions from other emissions data.\nIt is intended to 'infill' integrated assessment model (IAM) data so that their scenarios\nquantify more climate-relevant emissions than are natively reported by the IAMs themselves.\nIt does this by comparing the incomplete emissions set to complete data from other sources.\nIt uses the relationships within the complete data to make informed infilling estimates of\notherwise missing emissions timeseries.\nFor example, it can add emissions of aerosol precursors based on carbon dioxide emissions\nand infill nitrous oxide emissions based on methane, or split HFC emissions pathways into\nemissions of different specific HFC gases.\n\n\n.. sec-end-index\n\nLicense\n-------\n\n.. sec-begin-license\n\nSilicone is free software under a BSD 3-Clause License, see\n`LICENSE `_.\n\n.. sec-end-license\n\n.. sec-begin-funders\n\nFunders\n-------\nThis project has received funding from the European Union Horizon 2020 research and\ninnovation programme under grant agreement No 820829 (CONSTRAIN) and No 641816 (CRESCENDO).\n\n.. sec-end-funders\n.. sec-end-long-description\n\n.. sec-begin-installation\n\nInstallation\n------------\n\nSilicone can be installed with pip\n\n.. code:: bash\n\n pip install silicone\n\nIf you also want to run the example notebooks, install additional\ndependencies using\n\n.. code:: bash\n\n pip install silicone[notebooks]\n\n**Coming soon** Silicone can also be installed with conda\n\n.. code:: bash\n\n conda install -c conda-forge silicone\n\n..
sec-end-installation\n\nDocumentation\n-------------\n\nDocumentation can be found at our `documentation pages `_\n(we are thankful to `Read the Docs `_ for hosting us).\n\nContributing\n------------\n\nPlease see the `Development section of the docs `_.\n\n.. sec-begin-links\n\n.. |Docs| image:: https://readthedocs.org/projects/silicone/badge/?version=latest\n :target: https://silicone.readthedocs.io/en/latest/\n.. |License| image:: https://img.shields.io/github/license/GranthamImperial/silicone.svg\n :target: https://github.com/GranthamImperial/silicone/blob/master/LICENSE\n.. |Build Status| image:: https://github.com/GranthamImperial/silicone/workflows/Silicone%20CI-CD/badge.svg\n :target: https://github.com/GranthamImperial/silicone/actions?query=workflow%3A%22Silicone+CI-CD%22\n.. |Codecov| image:: https://img.shields.io/codecov/c/github/GranthamImperial/silicone.svg\n :target: https://codecov.io/gh/GranthamImperial/silicone/branch/master/graph/badge.svg\n.. |Latest Version| image:: https://img.shields.io/github/tag/GranthamImperial/silicone.svg\n :target: https://github.com/GranthamImperial/silicone/releases\n.. |PyPI Install| image:: https://github.com/GranthamImperial/silicone/workflows/Test%20PyPI%20install/badge.svg\n :target: https://github.com/GranthamImperial/silicone/actions?query=workflow%3A%22Test+PyPI+install%22\n.. |PyPI Version| image:: https://img.shields.io/pypi/v/silicone.svg\n :target: https://pypi.org/project/silicone/\n.. |Last Commit| image:: https://img.shields.io/github/last-commit/GranthamImperial/silicone.svg\n :target: https://github.com/GranthamImperial/silicone/commits/master\n.. |Commits Since Last Release| image:: https://img.shields.io/github/commits-since/GranthamImperial/silicone/latest.svg\n :target: https://github.com/GranthamImperial/silicone/commits/master\n.. |Contributors| image:: https://img.shields.io/github/contributors/GranthamImperial/silicone.svg\n :target: https://github.com/GranthamImperial/silicone/graphs/contributors\n\n.. sec-end-links\n""",,"2019/05/13, 05:27:42",1626,BSD-3-Clause,0,835,"2022/10/19, 08:18:26",8,123,143,0,371,1,2.0,0.36,"2022/10/19, 08:24:16",v1.3.0,0,3,false,,false,false,polyclimate/polyclimate1,,https://github.com/GranthamImperial,,,,,https://avatars.githubusercontent.com/u/61453394?v=4,,, OpenClimate,Independent Climate Accounting Network in support of Paris Agreement goals.,Open-Earth-Foundation,https://github.com/Open-Earth-Foundation/OpenClimate.git,github,"climate,climate-change,climate-change-api,climate-data,api,climatechange,nodejs-api,open-data,reactjs-app,climate-action,sustainability,climate-targets",Carbon Intensity and Accounting,"2023/09/25, 09:16:18",24,0,24,true,TypeScript,Open Earth Foundation,Open-Earth-Foundation,"TypeScript,Python,JavaScript,SCSS,CSS,Shell,R,M4,HTML,Dockerfile",https://openclimate.network/,"b'# OpenClimate\n\n[OpenClimate](https://openclimate.network/) is a data utility for tracking climate action.\n\nIt provides a database and API for current and historical CO2 emissions data, emissions reduction targets, climate plans and actions, and geographical, demographic and economic contextual data.\n\nIt covers climate actors of all kinds, including countries, regions and provinces, cities, companies, down to specific emissions sites like factories and mines. 
OpenClimate connects these actors by geographic, political or business relationships to provide nested climate accounting.\n\nThe data explorer Web interface lets you discover and compare actors in this network.\n\n![OpenClimate](./openclimate-screenshot.png ""OpenClimate screenshot"")\n\n## License\n\nCopyright 2021-2023 [Open Earth Foundation](https://openearth.org/) and contributors.\n\nLicensed under the Apache License, Version 2.0 (the ""License"");\nyou may not use this file except in compliance with the License.\nYou may obtain a copy of the License at\n\nhttp://www.apache.org/licenses/LICENSE-2.0\n\nUnless required by applicable law or agreed to in writing, software\ndistributed under the License is distributed on an ""AS IS"" BASIS,\nWITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\nSee the License for the specific language governing permissions and\nlimitations under the License.\n\n## Quick links\n\n* [Code of Conduct](./CODE_OF_CONDUCT.md)\n* [Contributing code](./CONTRIBUTING.md)\n* [Contributing data](./CONTRIBUTING_DATA.md)\n* [Running OpenClimate in Kubernetes](./k8s/README.md)\n* [API documentation](./api/API.md)\n* [Schema documentation](./api/schema/README.md)\n* [Issues](https://github.com/Open-Earth-Foundation/OpenClimate/issues) ([good first issues](https://github.com/Open-Earth-Foundation/OpenClimate/issues?q=is%3Aissue+is%3Aopen+label%3A%22good+first+issue%22))'",,"2021/09/23, 18:55:09",762,Apache-2.0,976,1043,"2023/10/23, 15:27:09",36,187,189,186,2,27,0.0,0.47010869565217395,,,2,11,false,,true,true,,,https://github.com/Open-Earth-Foundation,https://openearth.org/,,,,https://avatars.githubusercontent.com/u/71936582?v=4,,, Scope3,Build a framework where the media and advertising industry can collaborate on best practices for measuring emissions from the advertising value chain.,scope3data,https://github.com/scope3data/methodology.git,github,,Carbon Intensity and Accounting,"2023/10/23, 13:39:36",21,0,20,true,Python,Scope3,scope3data,"Python,Shell,Dockerfile",https://methodology.scope3.com,"b'# An open framework for measuring digital advertising emissions\n\nOur goal with this project is to build a framework where the media and advertising industry can collaborate on best practices for measuring emissions from the advertising value chain. This project was originally developed by Scope3 and is used to produce the Scope3 dataset.\n\nMeasuring emissions is extremely complicated in general. In the words of one industry leader, ""it took us 100 years to figure out how to do financial accounting... and now we\'re trying to figure out carbon accounting in 2 or 3."" As such, we feel like it\'s critical to learn in public and to be honest about what we know and what we don\'t know. Assuming that carbon accounting will require the same auditing and assurance process as the financial accounting world, we hope that this project will enable every step of the process to be traced and validated.\n\nThis measurement process, at a high level, works as follows:\n\n1. Gather public materials referencing sustainability and other related data from industry participants\n2. Pull out factual statements from these reports and normalize them into a common framework\n3. Apply the facts we have about each company to a model that outputs emissions by activity (for instance, per ad impression)
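\n\nAs a toy illustration of steps 2 and 3 (the field names and numbers here are invented for the example, not the actual Scope3 schema or defaults):\n\n```python\n# Step 2: a public statement normalized into a structured ""fact""\nfact = {\n    ""company"": ""ExampleAdTech"",\n    ""metric"": ""corporate_emissions_tco2e"",\n    ""value"": 5000.0,\n    ""source"": ""2022 sustainability report, p. 12"",\n}\n\n# Step 3: allocate the corporate total to an activity, e.g. per ad impression\nimpressions_per_year = 2_000_000_000\ng_co2e_per_impression = fact[""value""] * 1_000_000 / impressions_per_year\nprint(g_co2e_per_impression)  # 2.5 gCO2e per impression\n```\n\nAs of now (summer of 2022) most sustainability reports have few useful facts that help us model the emissions of a company.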
They often omit entire categories of emissions, omit methodology information, and blend data from disparate business units. Trying to pull out data at a product or activity level is essentially impossible. Therefore, we need to apply domain knowledge to understand how these businesses work. We also need to integrate third-party data sources to increase the granularity of our data - for instance, using a service like SimilarWeb to get sessions and traffic for a domain or app. Finally, we can use the facts that we have across the industry to fill in gaps for companies that don\'t fully report all of the information we need.\n\n## What\'s inside\n\nThis project is an attempt to ""show our work"" as we fill in the gaps in our knowledge. We encourage companies to use this project to improve their disclosures and even to consider providing machine-readable versions of their sustainability data.\n\nIn this project you will find:\n\n- Public sustainability materials and the structured ""fact"" data from them. These are in the `data/companies` directory\n- Scope3 has received confidential sustainability data from a number of companies. Some of this data is useful for producing default values, and is aggregated and included anonymously in `data/private/scope3`.\n- A script to scan through the source data and produce industry defaults for various types of company. The script is `./scope3_methodology/cli/compute_defaults.py` and the templates are in `templates`. Also see `./scope3_methodology/cli/fact_finder.py` to see how defaults are derived from the data sources we have analyzed.\n- A script to model the emissions for ad tech platforms (ssps, dsps, ad networks, dmps, creative ad servers, etc). See [ad tech platform docs](docs/ad_tech_model.mdx).\n- A script to model the emissions for publishers. See [publisher docs](docs/publisher_model.mdx).\n- Documentation of our calculations and assumptions in the `docs` directory. See [instructions on adding to docs](docs/README.md).\n\n## Installation\n\n[poetry](https://python-poetry.org/docs/) is used for python dependency management. 
See the poetry docs for official instructions.\n\nOn Mac you can also install poetry via [brew](https://brew.sh/)\n\n```sh\nbrew install poetry\n```\n\nInstall Dependencies\n\n```sh\npoetry install\n```\n\nActivate virtual environment\n\n```sh\npoetry shell\n```\n\nIf you want to commit code, install pre-commit hooks\n\n```sh\npre-commit install\n```\n\n## Usage\n\nTo write defaults from latest sources:\n\n```sh\n./scope3_methodology/cli/compute_defaults.py\n```\n\nTo run tests:\n\n```sh\npython -m unittest\n```\n\nTo compute a company\'s corporate emissions, pass in its YAML file and org type (which will make defaults more accurate):\n\n```sh\n./scope3_methodology/cli/model_corporate_emissions.py --verbose {generic,atp,publisher} [company_file.yaml]\n```\n\nTo compute the emissions for an ad tech company, pass in its YAML file:\n\n```sh\n./scope3_methodology/cli/model_ad_tech_platform.py -v [--corporateEmissionsG] [--corporateEmissionsGPerRequest] [company_file.yaml]\n```\n\nTo compute the emissions for a publisher, pass in its YAML file:\n\n```sh\n./scope3_methodology/cli/model_publisher_emissions.py -v [--corporateEmissionsG] [--corporateEmissionsGPerImp] [company_file.yaml]\n```\n'",,"2022/08/15, 12:59:37",436,Apache-2.0,128,266,"2023/10/16, 14:07:07",19,107,119,79,9,3,0.7,0.4455958549222798,"2022/10/08, 00:54:22",defaults-release-20221008,0,8,false,,false,false,,,https://github.com/scope3data,https://scope3.com,,,,https://avatars.githubusercontent.com/u/97109212?v=4,,, World Carbon Pricing Database,This dataset contains information on carbon pricing mechanisms (carbon taxes or cap-and-trade) introduced around the world since 1990.,g-dolphin,https://github.com/g-dolphin/WorldCarbonPricingDatabase.git,github,,Carbon Intensity and Accounting,"2023/10/04, 12:00:33",45,0,18,true,Python,,,"Python,CSS,HTML,Julia",,"b""# World Carbon Pricing Database\n\nThe present dataset constitutes an extension of a dataset initially developed while I was pursuing my PhD within the Energy Policy Research Group at the University of Cambridge. Its existence owes much to the Group's support as well as that of the Cambridge Judge Business School and the UK Economic and Social Research Council. Its most recent update was supported by Resources for the Future.\n\nThis dataset contains information on carbon pricing mechanisms (carbon taxes or cap-and-trade) introduced around the world since 1990. To date, it is the most comprehensive attempt at providing a consistent-across-jurisdiction description of carbon pricing mechanisms in terms of their sectoral (and fuel) coverage as well as the associated price signal.\n\nIn a separate project, it is used in combination with greenhouse gas emissions data to calculate an emissions-weighted average carbon price. This project is hosted here: [https://github.com/g-dolphin/ECP](https://github.com/g-dolphin/ECP).\n\nIf this dataset has been useful to you or you simply think it's cool, feel free to give it a \xe2\xad\x90!\n\n## Dataset \n### Description\n\nThe database records information on the sectoral scope and price associated with carbon pricing mechanisms, i.e. mechanisms creating an explicit price on CO2 emissions. This information is recorded **at the sector-fuel level**. The sectoral disaggregation of the economy follows the [IPCC 2006 guidelines for national greenhouse gas emission inventories](https://www.ipcc-nggip.iges.or.jp/public/2006gl/). \n\nA key feature of this dataset is that it provides information structured by territorial jurisdiction, not carbon pricing mechanism.
This is achieved by mapping information available for each carbon pricing mechanism onto jurisdictions. This mapping accounts for the possibility that multiple schemes apply to the same emissions sectors and, in such instances, presents information separately for each scheme. It also covers a long period of time (1990-2020) and, hence, allows one to (re)construct time series of prices applied to emissions in the jurisdictions where such prices were in place. In addition, its disaggregation by IPCC 2006 sectors allows for a straightforward integration with several other data sources following the same sectoral disaggregation. \n\nMore details about the methodology supporting the construction of the dataset and the variables included in it are provided in the associated Data Descriptor available at [https://doi.org/10.1038/s41597-022-01659-x](https://doi.org/10.1038/s41597-022-01659-x).\n\n### Scope\n\n- Jurisdictions: The dataset currently covers 198 national jurisdictions and 98 sub-national jurisdictions (50 US States, 13 Canadian Provinces and Territories, 3 Japanese Municipalities, 32 Chinese Provinces and Municipalities). It records their institutional development (sectoral and fuel coverage as well as price) from 1990 (year of introduction of the first carbon pricing mechanism in Finland) to this day (currently, 2018 is the last year for which data has been collected).\n\n- Sectors: The dataset covers all IPCC source categories. In addition, the file [IPCC2006-IEA-category-codes](https://github.com/g-dolphin/WorldCarbonPricingDatabase/blob/master/_raw/_aux_files) provides a mapping between IPCC sector names, their associated code and the corresponding International Energy Agency sector code. This latter file is particularly useful for updating the dataset, since its `.csv` files only include sector codes.\n\n- Greenhouse gases: the information currently in the dataset pertains exclusively to policy instruments targeting CO2 emissions. A future iteration will expand the dataset to other Kyoto gases that are subject to pricing mechanisms.\n\n## Repository files\n\nThe repository is organised around three main directories:\n1. `_dataset`, which contains the `.csv` files constituting the dataset. Within that directory, the actual data files can be found under the `data` directory and files with references to the data source under the `sources` directory. The full details of cited references are available in separate files in the [references](https://github.com/g-dolphin/WorldCarbonPricingDatabase/tree/master/_dataset/sources/references) directory.\n2. `_raw`, which contains the files recording or coding the pricing mechanisms' design features.\n3. `_code`, which contains scripts for the compilation of the dataset as well as short Python scripts for basic manipulation of the dataset files.\n\n**Note** The files located in the ``_raw`` directory contain empty cells. These cells indicate that no relevant information has been recorded.\n\n## Citation\n\nIf you use the dataset in a scientific publication, please reference the following paper:\n\n``Dolphin, G., Xiahou, Q. World carbon pricing database: sources and methods. Sci Data 9, 573 (2022)``.
The article is available in Open Access at [https://doi.org/10.1038/s41597-022-01659-x](https://doi.org/10.1038/s41597-022-01659-x).\n\n## License\n\nThis work is licensed under a [Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0)](https://creativecommons.org/licenses/by-nc-nd/4.0/) \n\n## Contribution\n\nThe dataset is under continuous development. While every precaution has been taken to accurately record coverage and price information, the size of the undertaking has been such that some inaccuracies might remain. Contributions to its development and improvement as well as to update of existing records are welcome (and encouraged).\n\n### Principles for the selection of sources of information\n\nContributions to the dataset are greatly appreciated. Please bear in mind the following principles:\n1. Updates to the dataset should be accurate and traceable. All proposed updates must provide a complete reference to the source of information.\n2. Information is recorded at the lowest level of (IPCC) sectoral(-fuel) disaggregation:\n - Records at higher levels of aggregation are the result of aggregation of lower-level entries\n3. No source of information is excluded from the set of admissible sources *a priori*. However:\n - pulicly available sources are preferred to sources subject to access restrictions;\n - 'higher quality' sources are preferred to 'lower quality' ones. For instance, official government legislation published in a jurisdiction's official journal will be prioritised over a third party report on the jurisdiction's policy.\n - to enhance the consistency of the dataset, sources offering standardized information on a larger set of jurisdictions are preferred to jurisdiction-specific sources.\n \n### Step-by-step guidance \n\n! All files under the `_data` directory are the final dataset files and are not the ones to be updated !\n\nThe files to be modified to update the dataset are found under the `_raw` and `_compilation` directories, respectively.\n\nIf you wish to contribute to the development of the dataset, please follow these steps:\n1. Clone the repository to your local machine\n2. Create a new (local) branch on which you will execute the files update(s)\n3. Save your files and commit your changes.\n4. Push your branch to the remote repository.\n \nTo update the scope of one of the carbon pricing mechanisms, update either `ets_coverage.py` or `taxes_coverage.py` in the directory `_raw/coverage`. 
To update the price associated with a mechanism, update the corresponding `csv` file in the directory `_raw/price`.\n""",",https://doi.org/10.1038/s41597-022-01659-x,https://doi.org/10.1038/s41597-022-01659-x,https://doi.org/10.1038/s41597-022-01659-x,https://doi.org/10.1038/s41597-022-01659-x","2020/07/09, 07:41:33",1203,CUSTOM,102,505,"2023/10/04, 12:00:33",13,108,132,30,21,0,0.0,0.3457943925233645,,,0,5,false,,false,false,,,,,,,,,,, NEMED,"A python package to retrieve and process historical emissions data of the National Electricity Market, reproduced by datasets published by the Australian Energy Market Operator.",UNSW-CEEM,https://github.com/UNSW-CEEM/NEMED.git,github,"aemo,australia,emissions,energy,national-electricity-market,nem,python,cdeii",Carbon Intensity and Accounting,"2023/06/10, 01:14:44",7,0,4,true,Python,Collaboration on Energy and Environmental Markets (CEEM),UNSW-CEEM,Python,http://nemed.readthedocs.io/,"b'# NEMED\n\n[![Code style: black](https://img.shields.io/badge/code%20style-black-000000.svg)](https://github.com/psf/black)\n[![Documentation Status](https://readthedocs.org/projects/nemed/badge/?version=latest)](https://nemed.readthedocs.io/en/latest/?badge=latest)\n\nNEMED[^1], or NEM Emissions Data, is a python package to retrieve and process historical emissions data of the National Electricity Market (NEM), reproduced by datasets published by the Australian Energy Market Operator (AEMO).\n\n[^1]: Not to be confused with *""Nemed"", ""Nimeth""* of the [Irish legend](https://en.wikipedia.org/wiki/Nemed), who was the leader of the third group of people to settle in Ireland.\n\n## Installation\n```bash\npip install nemed\n```\n\n## Introduction\n\nThis tool is designed to allow users to retrieve historical NEM regional emissions data, either total or marginal emissions, for any 5-minute dispatch interval or aggregations thereof. Total emissions data produced by NEMED is given as both absolute total emissions (tCO2-e) and as an emissions intensity index (tCO2-e/MWh). Marginal emissions data reflects the price setter of a particular region, yielding an emissions intensity index (tCO2-e/MWh) corresponding to a particular plant.\nAlthough data is published by AEMO via the [Carbon Dioxide Equivalent Intensity Index (CDEII) Procedure](https://www.aemo.com.au/energy-systems/electricity/national-electricity-market-nem/market-operations/settlements-and-payments/settlements/carbon-dioxide-equivalent-intensity-index) this only reflects a daily summary for each region by total and (average) emissions intensity.\n\n### How does NEMED calculate emissions?\nTotal Emissions are computed by considering 5-minute generation dispatch data for each generator in the NEM for each respective region, along with their CO2-equivalent emissions factors per unit (generator) level. A detailed method of the process to produce results for total emissions(tCO2-e) and the corresponding emisssions intensities can be found [here](https://nemed.readthedocs.io/en/latest/method.html). The tool is able to provide these metrics on a dispatch interval basis, or aggregated to hourly, daily, monthly or yearly measures. 
For more advanced users, the emissions associated with each generator and hence that generator\'s contribution to total regional emissions can be extracted.\n\nMarginal Emissions are computed by identifying the marginally dispatched generators from AEMO\'s Price Setter files, mapping emissions intensity metrics mentioned above and computing marginal emissions intensity (tCO2-e/MWh).\n\n### How accurate is NEMED?\nA series of [benchmark results](https://nemed.readthedocs.io/en/latest/examples/cdeii_benchmark.html) for total emissions shows a comparison between AEMO\'s daily CDEII reported emissions figures and NEMED\'s emissions figures which have been aggregated from a 5-minute dispatch-interval resolution to a daily basis. \n\nThe [example](https://nemed.readthedocs.io/en/latest/examples/cdeii_benchmark.html) includes a region by region comparison for each metric, while an overview of the historical NEM Emissions Intensity produced using NEMED is shown here.\n![NEM Emissions Intensity](./docs/source/examples/charts_benchmark/intensity_NEM.png)\n\n## Usage\n\n### Examples\nExamples can be found in [NEMED\'s documentation](https://nemed.readthedocs.io/en/latest/examples/total_emissions.html).\n\n### Possible Use Cases\nSome example use cases of data produced from this tool include:\n- Analysis of historical emissions between NEM regions, generation technologies contributions to them and assessing the difference between total and marginal emissions.\n- Using emissions intensities traces (total and marginal) from NEMED in counter-factual optimisation models; studying the influence of shadow-pricing carbon or imposing carbon constraints.\n- Considering the emissions assosciated with grid-energy consumption for residential/C&I consumers, or in counterfactual studies of hypothetical EV usage or H2 electrolyser operation. \n\n## Contributing\nInterested in contributing? Check out the [contributing guidelines](CONTRIBUTING.md), which also includes steps to install `NEMED` for development.\n\nPlease note that this project is released with a [Code of Conduct](CONDUCT.md). By contributing to this project, you agree to abide by its terms.\n\n## License\n`NEMED` was created by Declan Heim and Shayan Naderi. It is licensed under the terms of the `BSD 3-Clause license`.\n\n## Credits\nThis package was created using the [`UNSW CEEM template`](https://github.com/UNSW-CEEM/ceem-python-template). 
It also adopts functionality from sister tools including [`NEMOSIS`](https://github.com/UNSW-CEEM/NEMOSIS) and [`NEMPY`](https://github.com/UNSW-CEEM/nempy).'",,"2022/08/12, 23:05:23",439,BSD-3-Clause,21,70,"2023/06/10, 01:10:52",4,10,14,11,137,0,0.0,0.0,"2023/06/10, 01:49:46",v0.3.3,0,1,false,,false,true,,,https://github.com/UNSW-CEEM,http://ceem.unsw.edu.au/,Sydney Australia,,,https://avatars.githubusercontent.com/u/33536784?v=4,,, Steel Emissions Reporting Guidance,RMI's steel emissions accounting and data exchange guidance.,RMI,https://github.com/RMI/steel-guidance.git,github,,Carbon Intensity and Accounting,"2023/10/12, 21:28:16",5,0,4,true,,RMI,RMI,,,"b""# RMI Steel GHG Emissions Reporting Guidance\nThis repository holds the necessary tools and guidance for exchanging emissions data in line with RMI's Steel GHG Emissions Reporting Guidance, along with licensing and sample files.\n\nThe guidance was launched in September of 2022--you can read the blog post [here](https://rmi.org/knowing-the-emissions-of-your-steel-supply-chain/) and press release [here](https://rmi.org/press-release/rmi-releases-guidance-to-cut-steel-industrys-climate-threat/).\n\nThe guidance serves as a methodological foundation for RMI's Steel Data Model. Designed in concert with the guidance, the Steel Data Model encapsulated the outputs of the guidance, represented as a [Data Model Extension](https://wbcsd.github.io/data-model-extensions/spec/) to the [Pathfinder Data Model.](https://wbcsd.github.io/data-exchange-protocol/v2/#biblio-extensions-guidance)\n\n## Product Level Accounting Guidance\n[RMI's Steel GHG Emissions Reporting Guidance](https://rmi.org/wp-content/uploads/2022/09/steel_emissions_reporting_guidance.pdf) provides a methodology for reporting emissions in a way that enables the development of a differentiated market for low-embodied-emissions steel that promotes the necessary investment to decarbonize the sector.\n\nThe broad outcome of the implementation of this guidance is as follows:\n1. Accelerate the deployment of low-emissions steel production technologies by ensuring sufficient information is\navailable to link demand with supply.\n2. Increase transparency and provide further information on steel production emissions with a methodology that is\nconsistent across geographies and commodities.\n3. Enable steel consumers to purchase with a clear embodied emissions association and demonstrate evidence of that\nemissions performance to their customers.\n4. Credibly recognize steel producers leading their peers in terms of emissions performance, particularly in deployment\nof new technologies.\n\nThis guidance was authored by [RMI](https://rmi.org/) in collaboration with the [World Business Council for Sustainable Development (WBCSD)](https://www.wbcsd.org/) and its [Automotive Partnership for Carbon Transparency (A-PACT)](https://www.wbcsd.org/Pathways/Transport-Mobility/News/Leading-manufacturers-support-move-towards-better-emissions-measurement-for-the-automotive-industry) initiative.\n\n### Key Principles\nThere are four key requirements for reporting steel sector emissions using this Steel Emissions Reporting Guidance:\n\n#### 1. Use of primary data\nAs much as possible, emissions calculations should be based on first-hand information from\nactors in the supply chain.\n\n#### 2. Create a boundary for comparison\nCompanies should report emissions against a fixed boundary (i.e., a consistent\nset of processes) to enable comparability between disclosures.\n\n#### 3. 
Measurement made for markets\nEnsure calculation and reporting decisions provide the transparency necessary to\nenable the development of a market for low-embodied-emissions products.\n\n### Fixed System Boundary\nThe fixed system boundary defines all the process steps from which emissions need to be reported irrespective of the steel companies\xe2\x80\x99 ownership structure. This approach solves two key issues:\n\n#### Emissions disclosure at the corporate level will vary depending on the degree of vertical integration\n#### Scope 1, 2, and 3 will likely become more fluid overtime, further limiting comparability\n\nFor more information on the Fixed System Boundary, see section 2.2 of the [accounting guidance](https://rmi.org/wp-content/uploads/2022/09/steel_emissions_reporting_guidance.pdf)\n\n## RMI Steel Data Model\nRMI's Steel Data Model enables the use of machine-readable sharing of the product level information calculated with [RMI's Steel GHG Emissions Reporting Guidance.](https://rmi.org/wp-content/uploads/2022/09/steel_emissions_reporting_guidance.pdf)\n\nIt has been designed as an extension to the [Pathfinder Data Model.](https://wbcsd.github.io/data-exchange-protocol/v2/#biblio-extensions-guidance) Extensions to the model allow for sector specific information to be added to the data model.\n\n### Properties\nRMI's Steel Data Model is comprised of the following properties:\n\n#### Abatement Technology\n A qualitative label of the techology used to reduce emissions in the steel and aluminum supply chains. For more information, please refer to the Sec 3.4 [Abatement Technology] section of RMI's [Steel GHG Emissions Reporting Guidance](https://rmi.org/wp-content/uploads/2022/09/steel_emissions_reporting_guidance.pdf)\n\n#### Benchmarking\n Emissions associated with all relevant processes to either crude steel or hot-rolled product. For more information see Sec 2.3 of RMI's [Steel GHG Emissions Reportings Guidance.](https://rmi.org/wp-content/uploads/2022/09/steel_emissions_reporting_guidance.pdf)\n \n#### Full Boundary\n Emissions associated with all relevant processes from cradle to gate. 
For more information see Sec 2.2 Fixed System Boundary of RMI's [Steel GHG Emissions Reportings Guidance.](https://rmi.org/wp-content/uploads/2022/09/steel_emissions_reporting_guidance.pdf)\n\n### Techical Guidance\nFor the full techical specification of RMI's Steel Data Model, see [here.](https://github.com/RMI/steel-guidance/blob/main/specs/technical_specification.md)\n\n#### Sample and Schema Files\nA sample file of the data model can be found [here.](https://github.com/RMI/steel-guidance/blob/main/samples/steel_sample.json) Note: this sample file includes the full Pathfinder Data Model file as well as RMI's Steel Data Model.\n\nA json schema file for the data model can be found [here.](https://github.com/RMI/steel-guidance/blob/main/specs/steel_schema.json)\n\n## Contacts\n\n### RMI\nFor questions please contact ghgtransparency@rmi.org\n""",,"2022/09/27, 18:59:43",393,MIT,93,95,"2023/06/10, 01:10:52",0,0,0,0,137,0,0,0.04210526315789476,,,0,3,false,,false,false,,,https://github.com/RMI,https://rmi.org/,"Boulder, CO",,,https://avatars.githubusercontent.com/u/113151636?v=4,,, carbonr,A package in R to conveniently calculate carbon-equivalent emissions.,IDEMSInternational,https://github.com/IDEMSInternational/carbonr.git,github,,Carbon Intensity and Accounting,"2023/06/27, 12:07:17",8,0,5,true,R,IDEMS International,IDEMSInternational,R,,"b'\n\n\n# carbonr \n\n\n\n[![R-CMD-check](https://github.com/IDEMSInternational/carbonr/workflows/R-CMD-check/badge.svg)](https://github.com/IDEMSInternational/carbonr/actions)\n[![Codecov test\ncoverage](https://codecov.io/gh/IDEMSInternational/carbonr/branch/main/graph/badge.svg)](https://app.codecov.io/gh/IDEMSInternational/carbonr?branch=main)\n[![Lifecycle:\nexperimental](https://img.shields.io/badge/lifecycle-experimental-orange.svg)](https://lifecycle.r-lib.org/articles/stages.html#experimental)\n[![Project Status: WIP \xe2\x80\x93 Initial development is in progress, but there\nhas not yet been a stable, usable release suitable for the\npublic.](https://www.repostatus.org/badges/latest/wip.svg)](https://www.repostatus.org/#wip)\n[![CRAN\nstatus](https://www.r-pkg.org/badges/version/carbonr)](https://CRAN.R-project.org/package=carbonr)\n[![license](https://img.shields.io/badge/license-LGPL%20(%3E=%203)-lightgrey.svg)](https://www.gnu.org/licenses/lgpl-3.0.en.html)\n\n\n## Overview\n\ncarbonr is a package in R to conveniently calculate carbon-equivalent\nemissions. The emissions values in the calculations are from the [UK\nGovernment report\n(2022)](https://www.gov.uk/government/publications/greenhouse-gas-reporting-conversion-factors-2022)\nwhereever available. For more specific functions related to operating\ntheatre waste, alternative sources are used given in the References\nsection. Carbon credit prices are additionally available in the\n`carbon_credit_price` function where values are based on the [World Bank\ndata](https://carbonpricingdashboard.worldbank.org/). The jurisdiction\nand year available for that jurisdiction can be found in the\n`check_CPI()` function.\n\n## Installation\n\nYou can install the development version of carbonr from\n[GitHub](https://github.com/) with:\n\n``` r\n# install.packages(""devtools"")\ndevtools::install_github(""IDEMSInternational/carbonr"")\n```\n\n## Aims of carbonR\n\nIn 2021, work began on the carbonr package in R with the aim of\naddressing the following challenges and improving the estimation of\ncarbon-equivalent emissions. 
This came after a review of current\napproaches to estimate carbon-equivalent emissions.\n\n**Reproducibility:** The carbonr package seeks to provide a reliable and\nreproducible approach to calculating emissions levels, ensuring that the\nresults can be saved, edited, and redistributed easily.\n\n**Open Source:** R is an open-source language, which means that the\ncarbonr package benefits from the collaborative nature of the R\ncommunity. This allows for open discussions and contributions on\nplatforms like GitHub, capturing different perspectives and enhancing\nthe functionality of the package.\n\n**Transparency and Flexibility:** The carbonr package aims for\ntransparency to provide the ability to amend emissions values and tailor\nthem to specific environments and contexts. This allows for greater\nflexibility and customisation in estimating emissions.\n\n**Cost-effective:** The carbonr package being open source eliminates the\nneed for users to incur additional costs. This makes it a cost-effective\nsolution for estimating carbon-equivalent emissions.\n\n**Accessibility:** The carbonr package aims to make the estimation of\ncarbon-equivalent emissions more accessible by offering a user-friendly\nfront-end interface using Shiny. This ensures that the tools are easier\nto use, even for individuals with limited programming experience.\n\n**Informed Decision Making:** The carbonr package includes a feature\nwhere users can estimate their carbon credit price calculations. By\nincorporating carbon credit price calculations, the package equips users\nwith the information to estimate the financial implications associated\nwith their carbon emissions. This feature, we hope, empowers users to\nmake informed decisions and take appropriate actions in mitigating their\nenvironmental impact.\n\n**Expansion and Collaboration:** Although currently a small stand-alone\npackage used within IDEMS International, the vision for carbonr is to\nexpand and become more comprehensive. The creators invite contributions\nfrom the community to extend the package\xe2\x80\x99s functionality, build\nadditional features, and transform it into a more robust tool for\nestimating carbon-equivalent emissions.\n\nBy addressing these challenges and incorporating these improvements, the\ncarbonr package aims to provide a reliable, accessible, and customisable\nsolution for estimating and offsetting carbon-equivalent emissions.\n\n## Functions in CarbonR\n\ncarbonr is a package in R to conveniently calculate carbon-equivalent\nemissions. Currently, emission estimates relate to travel, materials,\nday-to-day, and clinically based.\n\n- `airplane_emissions()`\n- `ferry_emissions()`\n- `rail_emissions()`\n- `land_emissions()`\n- `vehicle_emissions()`\n- `hotel_emissions()`\n- `building_emissions()`\n- `office_emissions()`\n- `household_emissions()`\n- `construction_emissions()`\n- `electrical_emissions()`\n- `material_emissions()`\n- `metal_emissions()`\n- `paper_emissions()`\n- `plastic_emissions()`\n- `raw_fuels()`\n- `anaesthetic_emissions()`\n- `clinical_emissions()`\n- `clinical_theatre_data()`\n\nThese all return carbon-equivalent emissions in tonnes.\n\nA shiny app is also available by `shiny_emissions()` to calculate\ncarbon-equivalent emissions with a GUI.\n\n## Usage\n\nWe give some small examples in using the functions in `carbonr()`\n\n``` r\nlibrary(carbonr)\n```\n\nTo calculate emissions for a flight between Vancouver and Toronto, we\nfirst want to find the name of the airports. 
We do this using the\n`airport_finder()` function:\n\n``` r\nairport_finder(name = ""Vancouver"")\n```\n\n| Name | City | Country | IATA |\n|:--------------------------------------|:----------|:--------|:-----|\n| Vancouver International Airport | Vancouver | Canada | YVR |\n| Vancouver Harbour Water Aerodrome | Vancouver | Canada | CXH |\n| Vancouver International Seaplane Base | Vancouver | Canada | |\n\n``` r\nairport_finder(name = ""Toronto"")\n```\n\n| Name | City | Country | IATA |\n|:-----------------------------------------|:--------|:--------|:-----|\n| Billy Bishop Toronto City Centre Airport | Toronto | Canada | YTZ |\n| Toronto/Oshawa Executive Airport | Oshawa | Canada | YOO |\n\nNow we can find the overall emission value using the appropriate IATA\ncode. These distances are calculated using the Haversine formula:\n\n``` r\nairplane_emissions(""YVR"", ""YTZ"")\n#> [1] 0.7210839\n```\n\nA similar approach can be performed for ferry emissions. For example, to\ncalculate emissions for a round trip ferry from Melbourne to New York,\nwe first find the appropriate seaport code with the `seaport_finder()`\nfunction:\n\n``` r\nseaport_finder(country = ""Australia"", city = ""Melbourne"")\n```\n\n| country | city | country_code | port_code | latitude | longitude |\n|:----------|:---------------------------|:-------------|:----------|---------:|----------:|\n| Australia | Point Henry Pier/Melbourne | AU | PHP | -38.07 | 144.26 |\n| Australia | Port Melbourne | AU | POR | -37.50 | 144.56 |\n\n``` r\nseaport_finder(country = ""US"", city = ""New York"")\n```\n\n| country | city | country_code | port_code | latitude | longitude |\n|:--------------|:------------------|:-------------|:----------|---------:|----------:|\n| United States | Brooklyn/New York | US | BOY | 40.44 | -73.56 |\n\nNow we can find the overall emission value using the appropriate seaport\ncode:\n\n``` r\nferry_emissions(""POR"", ""BOY"", round_trip = TRUE)\n#> [1] 4.422754\n```\n\nFor the UK we can calculate emissions for a train journey. Like with\n`airplane_emissions()` and `ferry_emissions()`, the distances are\ncalculated using the Haversine formula - this is calculated as the crow\nflies. As before, we first find the stations. As always, for a more\naccurate estimation we can include via points:\n\nTo calculate emissions for a train journey from Bristol Temple Meads to\nEdinburgh Waverley, via Birmingham New Street. 
We can use a data frame\nand `purrr::map()` to read through the data easier:\n\n``` r\nmultiple_ind <- tibble::tribble(~ID, ~station,\n ""From"", ""Bristol"",\n ""To"", ""Edinburgh"",\n ""Via"", ""Birmingham"")\npurrr::map(.x = multiple_ind$station, .f = ~rail_finder(.x)) %>%\n dplyr::bind_rows()\n```\n\n| station_code | station | region | county | district | latitude | longitude |\n|:-------------|:-------------------------|:--------------|:----------------------|:----------------------|---------:|----------:|\n| BPW | Bristol Parkway | South West | South Gloucestershire | South Gloucestershire | 51.51380 | -2.542163 |\n| BRI | Bristol Temple Meads | South West | Bristol City Of | Bristol City Of | 51.44914 | -2.581315 |\n| EDB | Edinburgh | Scotland | Edinburgh City Of | Edinburgh City Of | 55.95239 | -3.188228 |\n| EDP | Edinburgh Park | Scotland | Edinburgh City Of | Edinburgh City Of | 55.92755 | -3.307664 |\n| BBS | Birmingham Bordesley | West Midlands | West Midlands | Birmingham | 52.47187 | -1.877769 |\n| BHI | Birmingham International | West Midlands | West Midlands | Solihull | 52.45081 | -1.725857 |\n| BHM | Birmingham New Street | West Midlands | West Midlands | Birmingham | 52.47782 | -1.900205 |\n| BMO | Birmingham Moor Street | West Midlands | West Midlands | Birmingham | 52.47908 | -1.892473 |\n| BSW | Birmingham Snow Hill | West Midlands | West Midlands | Birmingham | 52.48335 | -1.899088 |\n\nThen we can estimate the overall tCO2e emissions for the journey:\n\n``` r\nrail_emissions(from = ""Bristol Temple Meads"", to = ""Edinburgh"", via = ""Birmingham New Street"")\n#> [1] 0.02303694\n```\n\nWe can use a data frame to read through the data easier in general. For\nexample, if we had data for multiple individuals, or journeys:\n\n``` r\nmultiple_ind <- tibble::tribble(~ID, ~rail_from, ~rail_to, ~air_from, ~air_to, ~air_via,\n ""Clint"", ""Bristol Temple Meads"", ""Paddington"", ""LHR"", ""KIS"", ""NBO"",\n ""Zara"", ""Bristol Temple Meads"", ""Paddington"", ""LHR"", ""LAX"", ""ORL"")\nmultiple_ind %>%\n dplyr::rowwise() %>%\n dplyr::mutate(plane_emissions = airplane_emissions(air_from,\n air_to,\n air_via)) %>%\n dplyr::mutate(train_emissions = rail_emissions(rail_from,\n rail_to)) %>%\n dplyr::mutate(total_emissions = plane_emissions + train_emissions)\n```\n\n| ID | rail_from | rail_to | air_from | air_to | air_via | plane_emissions | train_emissions | total_emissions |\n|:------|:---------------------|:-----------|:---------|:-------|:--------|----------------:|----------------:|----------------:|\n| Clint | Bristol Temple Meads | Paddington | LHR | KIS | NBO | 1.526127 | 0.0074019 | 1.533529 |\n| Zara | Bristol Temple Meads | Paddington | LHR | LAX | ORL | 2.253014 | 0.0074019 | 2.260416 |\n\nAdditional emissions can be calculated as well. For example, office\nemissions\n\n``` r\noffice_emissions(specify = TRUE, electricity_kWh = 255.2, water_supply = 85, heat_kWh = 8764)\n#> [1] 0.002230256\n```\n\nAlternatively, more advance emissions can be given with other functions,\nsuch as the `material_emissions()`, `construction_emissions()`, and\n`raw_fuels()` functions.\n\n## Operating Theatre Emissions\n\nUpon request, we have introduced the estimation of CO2e emissions\nspecifically for operating theatres. We walk through a small example to\ndemonstrate this function.\n\nTo begin, we\xe2\x80\x99ll create a dummy data frame of clinical data. The data\nframe will serve as a representative sample of the information typically\nfound in operating theatres. 
It could include various parameters such as\nthe anaesthetic type (desflurane, isoflurane), the wet clinical waste in\nkg, the electricity in kWh, and general waste in kg.\n\n``` r\ndf <- data.frame(time = c(""10/04/2000"", ""10/04/2000"", ""11/04/2000"", ""11/04/2000"", ""12/04/2000"", ""12/04/2000""),\ntheatre = rep(c(""A"", ""B""), times = 3),\ndesflurane = c(30, 0, 25, 0, 28, 0),\nisoflurane = c(0, 37, 0, 30, 0, 35),\nclinical_waste = c(80, 90, 80, 100, 120, 110),\nelectricity_kwh = c(100, 110, 90, 100, 100, 110),\ngeneral_waste = c(65, 55, 70, 50, 60, 30))\n```\n\n| time | theatre | desflurane | isoflurane | clinical_waste | electricity_kwh | general_waste |\n|:-----------|:--------|-----------:|-----------:|---------------:|----------------:|--------------:|\n| 10/04/2000 | A | 30 | 0 | 80 | 100 | 65 |\n| 10/04/2000 | B | 28 | 0 | 90 | 110 | 55 |\n| 11/04/2000 | A | 25 | 0 | 80 | 90 | 70 |\n| 11/04/2000 | B | 0 | 30 | 100 | 100 | 50 |\n| 12/04/2000 | A | 0 | 37 | 120 | 100 | 60 |\n| 12/04/2000 | B | 0 | 35 | 110 | 110 | 30 |\n\nAfter creating the dummy data frame of clinical data, we can obtain the\nCO2e emissions and the carbon price index by the `clinical_theatre_data`\nfunction. This information can be conveniently presented in a table\nformat:\n\n``` r\n# get emissions and CPI (carbon price index)\nclinical_theatre_data(df, time = time, name = theatre,\n wet_clinical_waste = clinical_waste,\n wet_clinical_waste_unit = ""kg"",\n average = general_waste,\n plastic_units = ""kg"",\n electricity_kWh = electricity_kwh,\n include_cpi = TRUE,\n jurisdiction = ""Australia"",\n year = 2023)\n```\n\n| time | theatre | emissions | carbon_price_credit |\n|:-----------|:--------|----------:|--------------------:|\n| 10/04/2000 | A | 0.2990340 | 3.181279 |\n| 10/04/2000 | B | 0.2792765 | 2.971089 |\n| 11/04/2000 | A | 0.3119999 | 3.319217 |\n| 11/04/2000 | B | 0.2698696 | 2.871013 |\n| 12/04/2000 | A | 0.3186125 | 3.389565 |\n| 12/04/2000 | B | 0.2189492 | 2.329296 |\n\n## Shiny App\n\nAn interactive calculator using Shiny can be accessed by the\n`shiny_emissions()` function. This calculator uses some of the functions\nin the `carbonr` package:\n\n``` r\nshiny_emissions()\n```\n\n\n\n## For the future\n\nTo calculate office emissions, we want the ability for the function to\nread in data from the office, perhaps accounting data. While the R side\nof this is relatively straightforward, this is on hold while we look to\nhave an appropriate data set in place.\n\nWe intend to build in reports that give information on the users\nestimated emissions. This would include summary statistics, tables, and\ngraphs.\n\n## References\n\n### Other online calculators:\n\n- \n- \n\n### Sources:\n\n\\[1\\] UK government 2022 report. See\n\n\n\nNote emissions for flights in the code uses values from direct effects\nonly. Radiative forcing = TRUE will give indirect and direct effects.\n(multiplys by 1.891). 
See \xe2\x80\x9cbusiness travel - air\xe2\x80\x9d sheet of gov.uk excel\nsheet linked above.\n\n\\[2\\] Radiative forcing as 1.891 is from www.carbonfund.org\n\n\\[3\\] For Clinically-based emissions, we expanded beyond the 2022\nGovernment Report since there were not estimates available.\n\nanaesthetic emissions from:\n;\n;\n;\n\n\nclinical_wet_waste: p32 of\n\n'",,"2021/11/26, 20:26:16",698,CUSTOM,84,256,"2023/06/04, 22:09:08",8,11,13,3,143,0,0.0,0.0,,,0,1,false,,false,false,,,https://github.com/IDEMSInternational,https://www.idems.international/,,,,https://avatars.githubusercontent.com/u/44813009?v=4,,, Pledge4Future,Allows you to calculate your work related CO2e emissions from heating and electricity consumptions as well as business trips and commuting.,pledge4future,https://github.com/pledge4future/WePledge.git,github,"climate-change,co2-emissions,research,paris-agreement",Carbon Intensity and Accounting,"2023/07/01, 17:07:44",6,0,4,true,HTML,pledge4future,pledge4future,"HTML,TypeScript,Python,JavaScript,SCSS,Shell,CSS,Dockerfile",https://pledge4future.org,"b""# Pledge4Future App\n\nPledge4Future is a project to help you and your working group to measure and reduce your work-related CO2e emissions.\n\nThe [pledge4future app](https://pledge4future.org) allows you to calculate your work related CO2e emissions from heating and electricity consumptions as well as business trips and commuting. The methodology for the calculation of the emissions is implemented in the [co2calculator package](https://github.com/pledge4future/co2calculator).\n\nCheck out the [demo emission dashboard](https://pledge4future.org/dashboard)!\n\n### Installation\n\nThis is a dockerized app which uses React in the frontend and Python, Django and GraphQL in the backend.\n\n### 1. Clone repository \n\n```\ngit clone \ncd WePledge\n```\n\n### 2. Load the submodules\n\n```\ngit submodule update --init --recursive\n```\n\n### 3. Run docker\n\n```\ndocker compose up\n```\n\nThis will start the following services on your computer:\n\nFrontend: [http://localhost:3000](http://localhost:3000) \nBackend: [http://localhost:8000](http://localhost:8000) \nDjango Admin: [http://localhost:8000/admin](http://localhost:8000/admin) \nGraphQL API: [http://localhost:8000/graphql](http://localhost:8000/graphql) \n\nRefer to the [wiki](https://github.com/pledge4future/WePledge/wiki) for detailed instructions on how to run, adapt and debug the app.\n\n## Contribution guidelines \n\nWe're always happy about new people contributing to our project! \n\n- If you encounter problems with the app, feel free to create an [issue in this repository](https://github.com/pledge4future/WePledge/issues). \n- If you can fix it yourself, please create a new branch from 'dev', add your changes and once you're done create a pull request. \n- If you would like to become a regular contributor to the project, please contact us at [info@pledge4future.org](mailto:info@pledge4future.org).\n\n## License\n\nThis project is licensed under the [GPL-3.0 License](./LICENSE).\n\n## Acknowledgments\n\nWe are supported by\n\n- [Goethe Institute](https://www.goethe.de)\n- [HeiGIT gGmbH (Heidelberg Institute for Geoinformation Technology)](https://heigit.org/)\n- [openrouteservice](https://openrouteservice.org/)\n- [GIScience Research Group, Institute of Geography at Heidelberg \nUniversity](https://www.geog.uni-heidelberg.de/giscience.html)\n- [Scientists4Future Heidelberg](https://heidelberg.scientists4future.org/)\n\n\n\n\t\n\t\t\n\t\t\n\t\t\n\t\t\n\t\t\n\t\n
\n\n""",,"2020/12/07, 07:40:41",1052,GPL-3.0,215,732,"2023/08/06, 09:25:48",87,109,245,115,80,3,1.1,0.5300429184549356,,,5,11,false,,false,false,,,https://github.com/pledge4future,,"Heidelberg, Germany",,,https://avatars.githubusercontent.com/u/75507191?v=4,,, Travel Impact Model,"Describes the modeling assumptions and input specifications behind the Travel Impact Model (TIM), a state of the art emission estimation model that Google's Travel Sustainability team has compiled from several external data source.",google,https://github.com/google/travel-impact-model.git,github,,Carbon Intensity and Accounting,"2023/10/20, 08:09:49",76,0,48,true,,Google,google,,,"b'## Travel Impact Model 1.8.0\n\n#### (Implementation of the Travalyst Shared Framework by Google)\n\n## Table of contents\n\n* [Background](#background)\n* [Model overview](#model-overview)\n * [Flight level emission estimates](#flight-level-emission-estimates)\n * [Flight level CO2 estimates](#flight-level-co2-estimates)\n * [Data sources](#data-sources)\n * [Breakdown from flight level to individual level](#breakdown-from-flight-level-to-individual-level)\n * [Data sources](#data-sources-1)\n * [Outlier detection and basic correctness checking](#outlier-detection-and-basic-correctness-checking)\n * [Factors details](#factors-details)\n* [Example emission estimation](#example-emission-estimation)\n* [Legal base for model data sharing](#legal-base-for-model-data-sharing)\n* [API access](#api-access)\n* [Versioning](#versioning)\n* [Changelog](#changelog)\n* [Limitations](#limitations)\n* [Data quality](#data-quality)\n* [Contact](#contact)\n* [Glossary](#glossary)\n* [Appendix](#appendix)\n * [Appendix A: Aircraft type support](#appendix-a-aircraft-type-support)\n\n## Background\n\nIn this document we describe the modeling assumptions and input specifications\nbehind the Travel Impact Model (TIM), a state of the art emission estimation\nmodel that Google\xe2\x80\x99s Travel Sustainability team has compiled from several\nexternal data sources. The TIM aims at predicting carbon emissions for future\nflights to help travelers plan their travel.\n\n## Model overview\n\nFor each flight, the TIM considers several factors, such as the Great Circle\ndistance between the origin and destination airports and the aircraft type being\nused for the route. 
Actual carbon emissions at flight time may vary depending on\nfactors not known at modeling time, such as speed and altitude of the aircraft,\nthe actual flight route, and weather conditions at the time of flight.\n\n### Flight level emission estimates\n\n#### Flight level CO2 estimates\n\nThe Travel Impact Model estimates fuel burn based on the Tier 3 methodology for emission\nestimates from the\n[Annex 1.A.3.a Aviation 2019](https://www.eea.europa.eu/publications/emep-eea-guidebook-2019/part-b-sectoral-guidance-chapters/1-energy/1-a-combustion/1-a-3-a-aviation/view)\npublished by the European Environment Agency (EEA).\n\nThere are several resources about the EEA model available:\n\n* the main\n [documentation](https://www.eea.europa.eu/publications/emep-eea-guidebook-2019/part-b-sectoral-guidance-chapters/1-energy/1-a-combustion/1-a-3-a-aviation/view)\n* the\n [data set](https://www.eea.europa.eu/publications/emep-eea-guidebook-2019/part-b-sectoral-guidance-chapters/1-energy/1-a-combustion/1-a-3-a-aviation-1/view)\n* further\n [documentation](https://www.eurocontrol.int/sites/default/files/content/documents/201807-european-aviation-fuel-burn-emissions-system-eea-v2.pdf)\n on pre-work for the EEA model\n\nAdditionally, the Travel Impact Model updates the fuel burn to emissions conversion factor to align with the [ISO 14083](https://www.iso.org/standard/78864.html) Fuel Heat Combustion factor and [CORSIA Life Cycle Assessment](https://www.icao.int/environmental-protection/CORSIA/Documents/CORSIA_Eligible_Fuels/CORSIA_Supporting_Document_CORSIA%20Eligible%20Fuels_LCA_Methodology_V5.pdf), and breaks down emissions estimates into Well-to-Tank (WTT) and Tank-to-Wake (TTW) emissions.\n\nTank-to-Wake emissions account for emissions produced by burning jet fuel during flying, take-off and landing. Well-to-Tank emissions account for emissions generated during the production, processing, handling and delivery of jet fuel. When summed, Well-to-Wake (WTW) emissions account for the full life cycle of flying.\n\nThe EEA model takes the efficiency of the aircraft into account. As shown in\nFigure 1, a typical flight is modeled in two stages: *take off and landing*\n(LTO, yellow) and *cruise, climb, and descend* (CCD, blue).\n\n![alt_text](images/image3.png ""image_tooltip"")\n\n

(Fig 1)
For each stage, there are aircraft-specific and distance-specific fuel burn estimates. Table 1 shows an example fuel burn forecast for a Boeing 787-9 (B789) aircraft:

| Aircraft | Distance (nm) | LTO fuel forecast (kg) | CCD fuel forecast (kg) |
| --- | --- | --- | --- |
| B789 | 500 | 1,727 | 5,815 |
| B789 | 1000 | 1,727 | 10,770 |
| B789 | ... | ... | ... |
| B789 | 5000 | 1,727 | 52,375 |
| B789 | 5500 | 1,727 | 57,430 |

(Table 1)
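Written out (notation ours, not the EEA guidebook's), the flight-level fuel model summarised by Figure 1 and Table 1 is

$$F_{\text{fuel}}(d) = F_{\text{LTO}} + F_{\text{CCD}}(d),$$

where $F_{\text{LTO}}$ is a fixed per-flight quantity for the aircraft type and $F_{\text{CCD}}(d)$ is read from the table at mission distance $d$.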

\n\nBy using these numbers together with linear interpolation or extrapolation, it\nis possible to deduce the emission estimate for flights of any length on\nsupported aircraft:\n\n* Interpolation is used for flights that are in between two distance data points. As a theoretical example, a 5250 nautical miles flight on a Boeing 787-9 will burn approximately 54902.5 kg of fuel during the CCD phase (where 54902.5 equals 52375 + (57430 - 52375)/2, with figures for 5000nm and 5500nm taken from Table 1).\n* Extrapolation is used for flights that are either shorter than the smallest\n supported distance, or longer than the longest supported distance for that\n aircraft type.\n* The Lower Heating Value from ISO 14083 (43.1 MJ/kg averaged over US and EU numbers from [source](https://www.iso.org/standard/78864.html) Table I1 and Table I2) and CORSIA Carbon Intensity value (74 gCO2e/MJ from [source](https://www.icao.int/environmental-protection/CORSIA/Documents/CORSIA_Eligible_Fuels/CORSIA_Supporting_Document_CORSIA%20Eligible%20Fuels_LCA_Methodology_V5.pdf) Table 5) are used to calculate the jet fuel combustion to CO2 conversion factor of 3.1894. The CORSIA Life Cycle Assessment methodology is used to calculate a WTT CO2e emissions factor of 0.6465 (TTW 15g CO2e/MJ added to the TTW 74 gCO2e/MJ Carbon Intensity to total up to the WTW lifecycle Carbon Intensity of 89 gCO2e/MJ from [source](https://www.icao.int/environmental-protection/CORSIA/Documents/CORSIA_Eligible_Fuels/CORSIA_Supporting_Document_CORSIA%20Eligible%20Fuels_LCA_Methodology_V5.pdf) page 22 and Table 7). The factors used are as follows:\n\n\n \n \n \n \n \n \n \n \n \n \n \n \n
| kg CO2e/kg of A1 jet fuel burn | TTW [kg CO2e/kg] | WTT [kg CO2e/kg] | WTW [kg CO2e/kg] |
| --- | --- | --- | --- |
| CORSIA and ISO | 3.1894 | 0.6465 | 3.8359 |
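As an illustration of the interpolation and conversion steps above, here is a minimal sketch in Python. It is not the actual TIM implementation: the constants are the B789 figures from Table 1 and the CORSIA/ISO factors from the table above, and all function names are ours.

```python
# Minimal sketch of the flight-level estimate described above -- not the
# actual TIM implementation. B789 figures come from Table 1; the TTW/WTT
# factors come from the CORSIA/ISO table above. All names are ours.

B789_LTO_FUEL_KG = 1727.0
# (distance in nm, CCD fuel burn in kg) support points from Table 1
B789_CCD_POINTS = [(500.0, 5_815.0), (1_000.0, 10_770.0),
                   (5_000.0, 52_375.0), (5_500.0, 57_430.0)]

# 43.1 MJ/kg (ISO 14083 LHV) x 74 gCO2e/MJ (CORSIA carbon intensity) / 1000
TTW_FACTOR = 3.1894  # kg CO2e per kg of jet fuel, Tank-to-Wake
# 43.1 MJ/kg x 15 gCO2e/MJ (CORSIA Well-to-Tank share) / 1000
WTT_FACTOR = 0.6465  # kg CO2e per kg of jet fuel, Well-to-Tank


def ccd_fuel_kg(distance_nm: float) -> float:
    """CCD fuel burn by linear interpolation between the Table 1 support
    points; distances outside the supported range extrapolate from the
    first or last segment."""
    for (d0, f0), (d1, f1) in zip(B789_CCD_POINTS, B789_CCD_POINTS[1:]):
        if distance_nm <= d1:
            break  # found the bracketing segment (or clamped to the first)
    return f0 + (distance_nm - d0) * (f1 - f0) / (d1 - d0)


def flight_emissions_kg(distance_nm: float) -> dict[str, float]:
    """Flight-level fuel burn and CO2e, broken down into TTW/WTT/WTW."""
    fuel = B789_LTO_FUEL_KG + ccd_fuel_kg(distance_nm)
    ttw, wtt = fuel * TTW_FACTOR, fuel * WTT_FACTOR
    return {"fuel_kg": fuel, "TTW": ttw, "WTT": wtt, "WTW": ttw + wtt}


# ZRH-SFO worked example from the section below: 9369 km = 5058.9 nm
print(flight_emissions_kg(5058.9))
# -> fuel ~54,697 kg; TTW ~174,452; WTT ~35,362; WTW ~209,814 kg CO2e
```

Running this for the ZRH-SFO distance used in the worked example below reproduces its totals up to intermediate rounding (~209,814 vs. 209,812 kg CO2e well-to-wake).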
\n\nCO2e is short for CO2 equivalent and includes Kyoto Gases (GHG) as described [here](https://ec.europa.eu/eurostat/statistics-explained/index.php?title=Glossary:Kyoto_basket#:~:text=The%20Kyoto%20basket%20encompasses%20the,sulphur%20hexafluoride%20(SF6)). Warming effects produced by short-lived climate pollutants (such as contrail-induced cirrus clouds) are not yet included in CO2e as calculated by the Travel Impact Model.\n\nThere is information for most commonly-used aircraft types in the EEA data, but\nsome are missing. For missing aircraft types, one of the following alternatives\nis applied in ranked order:\n\n* *Supported using the Piano-X data set:* If an aircraft type is supported in\n the Piano-X data set and a comparable type is supported both in the Piano-X\n and the EEA data set, a correction factor is derived by comparing the\n Piano-X output for both types across a range of missions. The correction\n factor will be applied to the LTO and CCD numbers of the comparable type in\n the EEA database.\n* *Supported by fallback to non-optimized aircraft type:* If there are\n estimates in the EEA data set for an aircraft that is identical except for\n the lack of optimizations such as winglets or sharklets, the non-optimized\n counterpart is used for the estimate.\n* *Supported by fallback to previous generation aircraft type:* If there are\n estimates in the EEA data set for a previous generation aircraft type in the\n same family, from the same manufacturer, the previous generation aircraft is\n used for the estimate.\n* *Supported by fallback to least efficient aircraft in the family:* For\n umbrella codes that refer to a group of aircraft, the least efficient\n aircraft in the family will be assumed.\n* *Not supported:* For aircraft types for which none of the cases above apply,\n there are no emissions estimates available.\n\nSee [Appendix A](#bookmark=id.pbnw7e5sw0vi) for a table with detailed\ninformation about aircraft type support status.\n\n#### Data sources\n\nUsed for flight level emissions:\n\n* EEA Report No 13/2019 1.A.3.a Aviation 1 Master emissions calculator 2019\n ([link](https://www.eea.europa.eu/publications/emep-eea-guidebook-2019/part-b-sectoral-guidance-chapters/1-energy/1-a-combustion/1-a-3-a-aviation-1/view))\n* Piano-X aircraft database ([link](https://www.lissys.uk/PianoX.html))\n* CORSIA Eligible Fuels Life Cycle Assessment Methodology ([link](https://www.icao.int/environmental-protection/CORSIA/Documents/CORSIA_Eligible_Fuels/CORSIA_Supporting_Document_CORSIA%20Eligible%20Fuels_LCA_Methodology_V5.pdf))\n* ISO 14083 ([link](https://www.iso.org/standard/78864.html))\n\n### Breakdown from flight level to individual level\n\nIn addition to predicting a flight\xe2\x80\x99s emissions, it is possible to estimate the\nemissions for an individual seat on that flight. To perform this estimate, it\xe2\x80\x99s\nnecessary to perform an individual breakdown based on three relevant factors:\n\n1. Number of total seats on the plane in each seating class (first, business,\n premium economy, economy)\n2. Number of occupied seats on the plane\n3. Amount of cargo being carried\n\nThe emission estimates are higher for premium economy, business and first\nseating classes because the seats in these sections take up more space. 
As a\nresult, those seats account for a larger share of the flight\'s total emissions.\nDifferent space allocations on narrow and wide-body aircraft are considered\nusing separate weighing factors.\n\n#### Data sources\n\nUsed to determine which aircraft type was used for a given flight:\n\n* Aircraft type from published flight schedules\n\nUsed to determine seating configuration and calculate emissions per available\nseat:\n\n* Aircraft Configuration/Version (ACV) from published flight schedules\n* Fleet-level aircraft configuration information from the ""Seats (Equipment\n Configuration) File"" provided by [OAG](https://oag.com)\n\n#### Primary fallback for missing seat configuration\n\nIf there are no individual seat configuration numbers for a flight available\nfrom the published flight schedules, we query the fleet-level seating data for a\nunique match by carrier and aircraft. This is only possible in cases where a\ncarrier uses the same seating configuration for all their aircraft of a certain\naircraft model.\n\n#### Outlier detection and basic correctness checking\n\nIf there are no individual seat configuration numbers for a flight available\nfrom the published flight schedules, nor from the fleet-level data, or if they\nare incorrectly formatted or implausible, the TIM uses aircraft-specific medians\nderived from the overall dataset instead. Basic correctness checks based on\nreference seat configurations for the aircraft are performed, specifically:\n\n* The *calculated total seat area* for a flight is the total available seating\n area. This is calculated based on seating data and seating class factors.\n For example, the total seat area for a wide-body aircraft would be:\n * `1.0 * num_economy_class_seats +`\n
`1.5 * num_premium_economy_class_seats +`\n
`4.0 * num_business_class_seats +`\n
`5.0 * num_first_class_seats`\n* The *reference total seat area* for an aircraft is roughly the median total\n seat area.\n* During a *comparison* step: If the *calculated total seat area* for a given\n flight is within certain boundaries of the reference for that aircraft, the\n filed seating data from published flight schedules is used. Otherwise the\n *reference total seat area* is used.\n\n#### Factors details\n\n**Seating class factors**\n\nSeating parameters follow\n[IATA RP 1726](https://www.iata.org/en/programs/environment/passenger-emissions-methodology/).\nAn analysis of seat pitch and width in each seating class in typical plane\nconfigurations confirmed the accuracy of these factors.\n\n* Narrow-body aircraft\n * Economy and Premium Economy **1**\n * Business and First **1.5**\n* Wide-body aircraft\n * Economy **1**\n * Premium Economy **1.5**\n * Business **4**\n * First **5**\n\n**Load factors**\n\nPassenger load factors are predicted based on historical passenger statistics.\nTIM uses a tiered approach to determine passenger load factors. High resolution,\nspecific data (i.e. by route) is preferred where available, and in the absence\nof more granular data the model falls back to a generic value (i.e. global\ndefault) only when no suitable high resolution options are available.\n\nTier 1: Highly specific passenger load factors\n\n* For flights within, to, and from the United States, TIM uses historical data\n provided by the\n [U.S. Department of Transportation Bureau of Transportation Statistics](https://www.bts.gov/airline-data-downloads).\n\n * Where data is available for a given carrier, route, and month of travel,\n use the average passenger load factor over the last 6 years.\n * Where data is available for the given carrier and month of travel, but\n not the specific route, use the average passenger load factor across all\n routes over the last 6 years.\n * If fewer than three years of data are available for averaging, we do not\n calculate an average, and fallback to the approach described below\n instead.\n\nTier 2: Global default passenger load factor\n\n* For all other flights for which an equivalent public-domain dataset with\n similar granularity is not currently available, TIM falls back to use a load\n factor value of **84.5%**. This value is derived from\n [historical data for the U.S.](https://fred.stlouisfed.org/series/LOADFACTOR)\n from 2019.\n* An analysis of load factors sourced from publically available airline\n investor reports indicates that this value is a good approximation for the\n passenger load factor globally.\n\nCargo load factors are not included.\n\n**Load factor data source specifics**\n\nT-100 from\n[U.S. Department of Transportation Bureau of Transportation Statistics](https://www.bts.gov/airline-data-downloads)\n\n* Only data from the last six years is used.\n* Data is updated on a monthly basis (TIM version number will not increase).\n* Any month of data for which the overall load factor (aggregated over all\n airlines and routes) differs more than 20% from the average load factor over\n the last 10 years is removed as an outlier month. March 2020 - February 2021\n (inclusive) are removed from the data as a result.\n* To account for patterns of seasonality that do not correspond with the exact\n month of travel (e.g. public holidays), the previous and next month are\n taken into account for the average load factor of any given month of travel.\n E.g. 
For future flights in March, we aggregate over all flights in February,\n March, and April.\n\n## Example emission estimation\n\nLet\xe2\x80\x99s consider the following flight parameters:\n\n* Origin: Zurich ZRH\n* Destination: San Francisco SFO\n* Aircraft: Boeing 787-9\n * Economy seats: 188\n * Premium Economy seats: 21\n * Business seats: 48\n * First seats: 0\n\nTo get the total emissions for a flight, let\xe2\x80\x99s follow the process below:\n\n1. Calculate great circle distance between ZRH and SFO: 9369 km (= 5058.9 nautical miles)\n2. Look up the static LTO numbers and the distance-based CCD number from aircraft performance data (see Table 1), and interpolate fuel burn for a 9369 km long flight:\n * LTO 1727 kg of fuel burn\n * CCD 52970 kg of fuel burn calculated\n * 52375 + (5058.9 - 5000) * (57430 - 52375) / (5500 - 5000) = 52970\n3. Sum LTO and CCD number for total flight-level result:\n * 1727 kg + 52970 kg = 54697 kg of fuel burn\n4. Convert from fuel burn to CO2e emissions for total flight-level result:\n * Well-to-Tank (WTT) emissions in kg of CO2e: 54697 * 0.6465 = 35362\n * Tank-to-Wake (TTW) emissions in kg of CO2e: 54697 * 3.1894 = 174451\n * Well-to-Wake (WTW) emissions in kg of CO2e: (54697 * 0.6465) + (54697 * 3.1894) = 209812\n\n\nOnce the total flight emissions are computed, let\xe2\x80\x99s compute the per passenger\nbreak down:\n\n1. Determine which seating class factors to use for the given flight. In the\n ZRH-SFO example, we will use the wide-body factors (Boeing 787-9).\n2. Calculate the equivalent capacity of the aircraft according to the following\n \\\n C = first\\_class\\_seats \\* first\\_class\\_multiplier + business\\_class\\_seats\n \\* business\\_class\\_multiplier + \xe2\x80\xa6\n * In this specific example, the estimated area is: \\\n 0 \\* 5 + 48 \\* 4 + 1.5 \\* 21 + 188 \\* 1 = 411.5\n3. Divide the total CO2e emissions by the equivalent capacity calculated above to get the CO2e emissions per economy passenger.\n * Well-to-Tank (WTT) emissions in kg of CO2e: 35362 / 411.5 = 85.934\n * Tank-to-Wake (TTW) emissions in kg of CO2e: 174451 / 411.5 = 423.939\n * Well-to-Wake (WTW) emissions in kg of CO2e: 85.934 + 423.939 = 509.873\n4. Emissions per passenger for other cabins can be derived by multiplying by the corresponding cabin factor.\n * First:\n * Well-to-Tank (WTT) emissions in kg of CO2e: 85.934 * 5 = 429.67\n * Tank-to-Wake (TTW) emissions in kg of CO2e: 423.939 * 5 = 2119.695\n * Well-to-Wake (WTW) emissions in kg of CO2e: 509.873 * 5 = 2549.365\n * Business:\n * Well-to-Tank (WTT) emissions in kg of CO2e: 85.934 * 4 = 343.736\n * Tank-to-Wake (TTW) emissions in kg of CO2e: 423.939 * 4 = 1695.756\n * Well-to-Wake (WTW) emissions in kg of CO2e: 509.873 * 4 = 2039.492\n * Premium Economy:\n * Well-to-Tank (WTT) emissions in kg of CO2e: 85.934 * 1.5 = 128.901\n * Tank-to-Wake (TTW) emissions in kg of CO2e: 423.939 * 1.5 = 635.909\n * Well-to-Wake (WTW) emissions in kg of CO2e: 509.873 * 1.5 = 764.81\n * Economy:\n * Well-to-Tank (WTT) emissions in kg of CO2e: 85.934\n * Tank-to-Wake (TTW) emissions in kg of CO2e: 423.939\n * Well-to-Wake (WTW) emissions in kg of CO2e: 509.873\n5. 
Scale to estimated load factor 0.845 by apportioning emissions to occupied seats:\n * First:\n * Well-to-Tank (WTT) emissions in kg of CO2e: 429.67 / 0.845 = 508.485\n * Tank-to-Wake (TTW) emissions in kg of CO2e: 2119.695 / 0.845 = 2508.515\n * Well-to-Wake (WTW) emissions in kg of CO2e: 2549.365 / 0.845 = 3017\n * Business:\n * Well-to-Tank (WTT) emissions in kg of CO2e: 343.736 / 0.845 = 406.788\n * Tank-to-Wake (TTW) emissions in kg of CO2e: 1695.756 / 0.845 = 2006.812\n * Well-to-Wake (WTW) emissions in kg of CO2e: 2039.492 / 0.845 = 2413.6\n * Premium Economy:\n * Well-to-Tank (WTT) emissions in kg of CO2e: 128.901 / 0.845 = 152.546\n * Tank-to-Wake (TTW) emissions in kg of CO2e: 635.909 / 0.845 = 752.555\n * Well-to-Wake (WTW) emissions in kg of CO2e: 764.81 / 0.845 = 905.101\n * Economy:\n * Well-to-Tank (WTT) emissions in kg of CO2e: 85.934 / 0.845 = 101.697\n * Tank-to-Wake (TTW) emissions in kg of CO2e: 423.939 / 0.845 = 501.703\n * Well-to-Wake (WTW) emissions in kg of CO2e: 509.873 / 0.845 = 603.4\n\nNote that the model generates emission estimates for all cabin classes, including cabin classes where the seat count is zero, as cabin classifications are not always consistent across data providers. Therefore, providing estimates for all cabin classes simplifies integration of TIM data with other datasets.\n\n## Legal base for model data sharing\n\nThe carbon emission estimate data are available via API under the\n[Creative Commons Attribution-ShareAlike CC BY-SA 4.0](https://creativecommons.org/licenses/by-sa/4.0/)\nopen source license\n([legal code](https://creativecommons.org/licenses/by-sa/4.0/legalcode)).\n\n## API access\n\nDeveloper documentation is available on the Google Developers site for the [Travel Impact Model API](https://developers.google.com/travel/impact-model).\n\n## Versioning\n\nThe model will be developed further over time, e.g. with improved load factors\nmethodology or more fine grained seat area ratios calculation. New versions will\nbe published.\n\nA full model version will have four components: **MAJOR.MINOR.PATCH.DATE**, e.g.\n1.3.1.20230101. The four tiers of change tracking are handled differently:\n\n* **Major versions**: Changes to the model that would break existing client\n implementations if not addressed (e.g. changes in data types or schema) or\n major methodology changes (e.g. adding new data sources to the model that\n lead to major output changes). We expect these to be infrequent but they\n need to be managed with special care.\n* **Minor versions**: Changes to the model that, while being consistent across\n schema versions, change the model parameters or implementation.\n* **Patch versions**: Implementation changes meant to address bugs or\n inaccuracies in the model implementation.\n* **Dated versions**: Model datasets are recreated with refreshed input data\n but no change to the algorithms regularly.\n\n## Changelog\n\n### 1.8.0\n\nAdding Well-to-Tank (WTT) and Tank-to-Wake (TTW) emissions break-downs to all flight emissions. Updating the jet fuel combustion to CO2 conversion factor from the minimum value of 3.1672 to the value of 3.1894 (using Lower Heating Value from ISO 14083 and CORSIA Carbon Intensity value), and using the CORSIA Life Cycle Assessment methodology to implement a WTT CO2e emissions factor 0.6465. 
Reference: [ISO](https://www.iso.org/standard/78864.html), [CORSIA](https://www.icao.int/environmental-protection/CORSIA/Documents/CORSIA_Eligible_Fuels/CORSIA_Supporting_Document_CORSIA%20Eligible%20Fuels_LCA_Methodology_V5.pdf).\n\n### 1.7.0\n\nUpdating the jet fuel combustion to CO2 conversion factor from 3.15 based on the EEA methodology to 3.1672 to align with the [CORSIA methodology\xe2\x80\x99s](https://www.icao.int/environmental-protection/CORSIA/Documents/CORSIA_Eligible_Fuels/CORSIA_Supporting_Document_CORSIA%20Eligible%20Fuels_LCA_Methodology_V5.pdf) recommended factor.\n\n### 1.6.0\n\nAdding carrier and route specific passenger load factors for flights from, to,\nand within the U.S., taking seasonality patterns into account. We are using data\nfrom the\n[U.S. Department of Transportation Bureau of Transportation Statistics](https://www.bts.gov/).\nFor more details, see the [section on load factors](#factors-details).\n\n### 1.5.1\n\nAdding a fleet-level source for seating configuration data. For airlines that\ndon\'t file seating configuration information in flight schedules but use the\nsame seating configuration for all their aircraft of a certain model, a fall\nback to the ""Seats (Equipment Configuration) File"" provided by OAG is performed.\n\n### 1.5.0\n\nFollowing recent discussions with academic and industry partners, we are\nadjusting the TIM to focus on CO2 emissions. While we strongly believe in\nincluding non-CO2 effects in the model long-term, the details of how and when to\ninclude these factors requires more input from our stakeholders as part of a\ngovernance model that\xe2\x80\x99s in development. With this change, we are provisionally\nremoving contrails effects from our CO2e estimates but will keep the labeling as\n\xe2\x80\x9cCO2e\xe2\x80\x9d in the model to ensure future compatibility.\n\nWe believe CO2e factors are critical to include in the model, given the emphasis\non them in the IPCC\xe2\x80\x99s AR6 report. We want to make sure that when we do\nincorporate them into the model, we have a strong plan to account for time of\nday and regional variations in contrails\xe2\x80\x99 warming impact. We are committed to\nproviding consumers the most accurate information as they make informed choices\nabout their travel options.\n\nWe continue to invest into research and collaborate with leading scientists,\nNGOs, and partners to better incorporate contrails and other non-GHG impact into\nour model, and we look forward to sharing updates at a later date.\n\n### 1.4.0\n\nInitial public version of the Travel Impact Model.\n\n## Limitations\n\nThe model described in this document produces estimates of carbon emissions.\nEmission estimates aim to be representative of what the typical emissions for a\nflight matching the model inputs would be. Estimates might differ from actual\nemissions based on a number of factors.\n\n**Actual flight distances:** When modeling the distance between a given origin\nand destination, the Great Circle Distance between the origin and destination\nairport is used, as opposed to the actual distance flown.\n\nThis simplifying assumption enables the model to be used even when precise\nflight path information is not available, such as when computing emission\nestimates for future flights.\n\n**Aircraft types:** The emissions model accounts for the equipment type as\npublished in the flight schedules. The majority of aircraft types in use are\ncovered. 
Some aircraft types are supported by falling back to a related model thought to\nhave comparable emissions. See\n[Flight level emission estimates](#flight-level-emission-estimates) for more\ndetails.\n\nIf no reasonable approximation is available for a given aircraft, the model will\nnot produce estimates for it.\n\n**Cargo load factors:** Cargo load is not yet supported in the model.\n\n**Engine information:** Beyond the aircraft type, there are other aircraft\ncharacteristics that can have an effect on the flight emissions (e.g. engine\ntype, engine age, etc.) that are not currently included when computing emission\nestimates.\n\n**Fuel type:** The emissions model assumes that all flights operate on 100%\nconventional fuel. Alternative fuel types (e.g. Sustainable Aviation Fuel) are\nnot supported.\n\n**Seat configurations:** If no individual seat configuration numbers are\navailable for a flight from published flight schedules, or if they are\nincorrectly formatted or implausible, aircraft-specific medians derived from the\noverall dataset are employed.\n\n**Contrail-induced cirrus clouds:** Warming effects produced by short-lived climate pollutants such as contrail-induced cirrus clouds are not yet included in emissions as calculated by the Travel Impact Model.\n\n## Data quality\n\nThe CO2 estimates were validated by comparing against a limited\namount of real-world fuel burn data. The finding was that the TIM\nunderestimates by 7% on average.\n\nThe\n[EEA guidebook](https://www.eea.europa.eu/publications/emep-eea-guidebook-2019/part-b-sectoral-guidance-chapters/1-energy/1-a-combustion/1-a-3-a-aviation/view)\n(chapter 4) cites sources from ICAO that estimate the uncertainty of the LTO\nfactors between 5 and 10%. The CCD factor uncertainty is estimated between 15\nand 40%.\n\n## Contact\n\nWe welcome feedback and enquiries. Please get in touch using this\n[form](https://support.google.com/travel/contact/tim?pcff=category:travel_impact_model_(TIM)_specifications).\n\n## Glossary\n\n**CCD:** The flight phases *Climb*, *Cruise*, and *Descend* occur above a\nflight altitude of 3,000 feet.\n\n**CO2**: Carbon dioxide is the most significant long-lived greenhouse\ngas in Earth\'s atmosphere. Since the Industrial Revolution anthropogenic\nemissions \xe2\x80\x93 primarily from use of fossil fuels and deforestation \xe2\x80\x93 have rapidly\nincreased its concentration in the atmosphere, leading to global warming.\n\n**CO2e**: CO2e is short for CO2 equivalent, and is a metric measure used to compare the emissions from various greenhouse gases on the basis of their global-warming potential (GWP), by converting amounts of other gases to the equivalent amount of carbon dioxide with the same global warming potential ([source](https://ec.europa.eu/eurostat/statistics-explained/index.php?title=Glossary:Carbon_dioxide_equivalent)).\n\n**Contrail-induced cirrus clouds**: Cirrus clouds are atmospheric clouds that\nlook like thin strands. 
There are natural cirrus clouds, and also contrail\ninduced cirrus clouds that under certain conditions occur as the result of a\ncontrail formation from aircraft engine exhaust.\n\n**CORSIA**: Carbon Offsetting and Reduction Scheme for International Aviation, a carbon offset and reduction scheme to curb the aviation impact on climate change developed by the International Civil Aviation Organization.\n\n**Radiative Forcing (RF):** Radiative Forcing is the instantaneous difference in\nradiative energy flux stemming from a climate perturbation, measured at the top\nof the atmosphere.\n\n**Effective Radiative Forcing (ERF):** Radiative forcing effects can create\nrapid responses in the troposphere, which can either enhance or reduce the flux\nover time, and makes RF a difficult proxy for calculating long-term climate\neffects. ERF attempts to capture long-term climate forcing, and represents the\nchange in net radiative flux after allowing for short-term responses in\natmospheric temperatures, water vapor and clouds.\n\n**European Environment Agency (EEA):** An agency of the European Union whose\ntask is to provide sound, independent information on the environment.\n\n**Google\xe2\x80\x99s Travel Sustainability team**: A team at Google focusing on travel\nsustainability, based in Zurich (Switzerland) and Cambridge (U.S.), with the\ngoal to enable users to make more sustainable travel choices.\n\n**Great circle distance:** Defined as the shortest distance between two points\non the surface of a sphere when measured along the surface of the sphere.\n\n**ICAO:** The International Civil Aviation Organization, a specialized agency of\nthe United Nations.\n\n**ISO 14083**: The international standard that establishes a common methodology for the quantification and reporting of greenhouse gas (GHG) emissions arising from the operation of transport chains of passengers and freight ([source](https://www.iso.org/standard/78864.html)), published by the International Organization for Standardization (ISO).\n\n**LTO:** The flight phases *Take Off and Landing* occur below a flight altitude\nof 3000 feet at the beginning and the end of a flight. They include the\nfollowing phases: taxi-out, taxi-in (idle), take-off, climb-out, approach and\nlanding.\n\n**TIM:** The Travel Impact Model described in this document.\n\n**Short Lived Climate Pollutants (SLCPs):** Pollutants that stay in the\natmosphere for a short time (e.g. 
weeks) in comparison to Long Lived Climate\nPollutants such as CO2 that stay in the atmosphere for hundreds of\nyears.\n\n## Appendix\n\n### Appendix A: Aircraft type support\n\n| Aircraft full name | IATA aircraft code | Mapping (ICAO aircraft code) | Support status |\n| ----------------------------------------------- | ------------------ | ---------------------------- | ------------------------------------------------------- |\n| Airbus A220-100 | 221 | | Supported via correction factor derived from Piano data |\n| Airbus A220-300 | 223 | | Supported via correction factor derived from Piano data |\n| Airbus A300-600 Freighter | ABY | A306 | Direct match in EEA |\n| Airbus A300-600/600C | AB6 | A306 | Direct match in EEA |\n| Airbus A300B2/B4/C4 | AB4 | A30B | Direct match in EEA |\n| Airbus A310 | 310 | A310 | Direct match in EEA |\n| Airbus A310-300 | 313 | A310 | Direct match in EEA |\n| Airbus A318 | 318 | A318 | Direct match in EEA |\n| Airbus A318/A319/A320/A321 | 32S | A321 | Mapped to least efficient in family |\n| Airbus A319 | 319 | A319 | Direct match in EEA |\n| Airbus A320-100/200 | 320 | A320 | Direct match in EEA |\n| Airbus A320neo | 32N | | Supported via correction factor derived from Piano data |\n| Airbus A321 | 321 | A321 | Direct match in EEA |\n| Airbus A321neo | 32Q | | Supported via correction factor derived from Piano data |\n| Airbus A330 | 330 | A332 | Mapped to least efficient in family |\n| Airbus A330-200 | 332 | A332 | Direct match in EEA |\n| Airbus A330-300 | 333 | A333 | Direct match in EEA |\n| Airbus A330-900neo | 339 | A333 | Supported via correction factor derived from Piano data |\n| Airbus A340 | 340 | A345 | Mapped to least efficient in family |\n| Airbus A340-300 | 343 | A343 | Direct match in EEA |\n| Airbus A340-500 | 345 | A345 | Direct match in EEA |\n| Airbus A340-600 | 346 | A346 | Direct match in EEA |\n| Airbus A350 | 350 | A350 | Mapped to least efficient in family |\n| Airbus A350-900 | 359 | A350 | Direct match in EEA |\n| Airbus A380 | 380 | A380 | Mapped to least efficient in family |\n| Airbus A380-800 | 388 | A380 | Direct match in EEA |\n| Airbus A320 (Sharklets) | 32A | | Supported via correction factor derived from Piano data |\n| Airbus A321 (Sharklets) | 32B | | Supported via correction factor derived from Piano data |\n| Airbus A350-1000 | 351 | A350 | Supported via correction factor derived from Piano data |\n| Antonov AN-148-100 | A81 | AN148 | Direct match in EEA |\n| Antonov AN-24 | AN4 | AN24 | Direct match in EEA |\n| Antonov AN-26/30/32 | AN6 | AN32 | Mapped to least efficient in family |\n| Antonov AN-32 | A32 | AN32 | Direct match in EEA |\n| ATR 42-300/320 | AT4 | ATR42 | Mapped to similar model |\n| ATR 42-500 | AT5 | ATR42 | Direct match in EEA |\n| ATR 42/ATR 72 | ATR | ATR72 | Mapped to least efficient in family |\n| ATR 72 | AT7 | ATR72 | Direct match in EEA |\n| Avro Regional Jet Avroliner | ARJ | | Not supported |\n| Avro Regional Jet RJ100 Avroliner | AR1 | | Not supported |\n| Avro Regional Jet RJ85 Avroliner | AR8 | | Not supported |\n| Beechcraft 1900 | BE1 | | Not supported |\n| Beechcraft 1900/1900C | BES | | Not supported |\n| Beechcraft 1900D | BEH | | Not supported |\n| Beechcraft C99 Airliner | BE9 | | Not supported |\n| Beechcraft Light Aircraft twin engine | BET | | Not supported |\n| Boeing 717-200 | 717 | B717 | Direct match in EEA |\n| Boeing 737 | 737 | B734 | Mapped to least efficient in family |\n| Boeing 737 Freighter | 73F | B734 | Mapped to least efficient in family |\n| Boeing 737-200 | 
732 | B732 | Direct match in EEA |\n| Boeing 737-200 | 73M | B732 | Direct match in EEA |\n| Boeing 737-200/200 Advanced | 73S | B732 | Direct match in EEA |\n| Boeing 737-300 | 733 | B733 | Direct match in EEA |\n| Boeing 737-300 | 73N | B733 | Direct match in EEA |\n| Boeing 737-300 (winglets) | 73C | B733 | Mapped to non-optimized aircraft |\n| Boeing 737-400 | 734 | B734 | Direct match in EEA |\n| Boeing 737-400 | 73Q | B734 | Direct match in EEA |\n| Boeing 737-500 | 735 | B735 | Direct match in EEA |\n| Boeing 737-500 (winglets) | 73E | B735 | Mapped to non-optimized aircraft |\n| Boeing 737-600 | 736 | B736 | Direct match in EEA |\n| Boeing 737-700 | 73G | B737 | Direct match in EEA |\n| Boeing 737-700 (winglets) | 73W | | Supported via correction factor derived from Piano data |\n| Boeing 737-800 | 738 | B738 | Direct match in EEA |\n| Boeing 737-800 (Scimitar Winglets) | 7S8 | | Supported via correction factor derived from Piano data |\n| Boeing 737-800 (winglets) | 73H | | Supported via correction factor derived from Piano data |\n| Boeing 737-900 | 739 | B739 | Direct match in EEA |\n| Boeing 737-900 (winglets) | 73J | B739 | Mapped to non-optimized aircraft |\n| Boeing 737MAX 8 | 7M8 | | Supported via correction factor derived from Piano data |\n| Boeing 737MAX 9 | 7M9 | | Supported via correction factor derived from Piano data |\n| Boeing 747 | 747 | B744 | Mapped to least efficient in family |\n| Boeing 747 Freighter | 74F | B744 | Mapped to least efficient in family |\n| Boeing 747-400 | 744 | B744 | Direct match in EEA |\n| Boeing 747-400 Mixed | 74E | B744 | Direct match in EEA |\n| Boeing 747-400F Freighter | 74Y | B744 | Direct match in EEA |\n| Boeing 747-8F (Freighter) | 74N | B744 | Mapped onto older model |\n| Boeing 747-8I | 74H | B744 | Mapped onto older model |\n| Boeing 757 | 757 | B753 | Mapped to least efficient in family |\n| Boeing 757-200 | 752 | B752 | Direct match in EEA |\n| Boeing 757-200 (winglets) | 75W | | Supported via correction factor derived from Piano data |\n| Boeing 757-300 | 753 | B753 | Direct match in EEA |\n| Boeing 757-300 (winglets) | 75T | B753 | Mapped to non-optimized aircraft |\n| Boeing 767 | 767 | B764 | Mapped to least efficient in family |\n| Boeing 767-200 | 762 | B762 | Direct match in EEA |\n| Boeing 767-300 | 763 | B763 | Direct match in EEA |\n| Boeing 767-300 (winglets) | 76W | | Supported via correction factor derived from Piano data |\n| Boeing 767-400 | 764 | B764 | Direct match in EEA |\n| Boeing 777 | 777 | B773 | Mapped to least efficient in family |\n| Boeing 777 Freighter | 77F | B773 | Mapped to least efficient in family |\n| Boeing 777-200/200ER | 772 | B772 | Direct match in EEA |\n| Boeing 777-200F Freighter | 77X | B772 | Direct match in EEA |\n| Boeing 777-200LR | 77L | B772 | Mapped to similar model |\n| Boeing 777-300 | 773 | B773 | Direct match in EEA |\n| Boeing 777-300ER | 77W | B77W | Direct match in EEA |\n| Boeing 787 | 787 | B789 | Mapped to least efficient in family |\n| Boeing 787-10 | 781 | | Supported via correction factor derived from Piano data |\n| Boeing 787-8 | 788 | B788 | Direct match in EEA |\n| Boeing 787-9 | 789 | B789 | Direct match in EEA |\n| Bombardier CS100 | CS1 | | Not supported |\n| Bombardier CS300 | CS3 | | Not supported |\n| British Aerospace 146 | 146 | BAE146 | Direct match in EEA |\n| British Aerospace Jetstream 31/32/41 | JST | | Not supported |\n| British Aerospace Jetstream 32 | J32 | | Not supported |\n| British Aerospace Jetstream 41 | J41 | | Not supported |\n| 
Canadair Regional Jet | CRJ | CS900RJ | Mapped to least efficient in family |\n| Canadair Regional Jet 100 | CR1 | | Not supported |\n| Canadair Regional Jet 1000 | CRK | | Not supported |\n| Canadair Regional Jet 200 | CR2 | | Not supported |\n| Canadair Regional Jet 550 | CR5 | | Supported via correction factor derived from Piano data |\n| Canadair Regional Jet 700 | CR7 | CS700RJ | Direct match in EEA |\n| Canadair Regional Jet 900 | CR9 | CS900RJ | Direct match in EEA |\n| Cessna (Light Aircraft - single engine) | CNC | C208 | Direct match in EEA |\n| Cessna (Light Aircraft) | CNA | C208 | Direct match in EEA |\n| Cessna 208B Freighter | CNF | C208 | Direct match in EEA |\n| Cessna Citation | CNJ | C500 | Direct match in EEA |\n| Comac ARJ21-700 | C27 | | Not supported |\n| Convair 440/580/600/640 Freighter | CVF | | Not supported |\n| De Havilland-Bombardier DHC-4 Caribou | DHC | | Not supported |\n| De Havilland-Bombardier DHC-6 Twin Otter | DHT | DHC6 | Direct match in EEA |\n| De Havilland-Bombardier DHC-8 Dash 8 | DH8 | DHC8 | Direct match in EEA |\n| De Havilland-Bombardier DHC-8 Dash 8 Series 200 | DH2 | DHC8 | Mapped to non-optimized aircraft |\n| De Havilland-Bombardier DHC-8 Dash 8 Series 300 | DH3 | DHC8 | Mapped to non-optimized aircraft |\n| De Havilland-Bombardier DHC-8 Dash 8 Series 400 | DH4 | DHC8 | Mapped to non-optimized aircraft |\n| Embraer 170 Regional Jet | E70 | E170 | Direct match in EEA |\n| Embraer 175 (Enhanced Winglets) | E7W | | Supported via correction factor derived from Piano data |\n| Embraer 175 Regional Jet | E75 | E175 | Direct match in EEA |\n| Embraer 190 E2 | 290 | E190 | Mapped onto older model |\n| Embraer 190 Regional Jet | E90 | E190 | Direct match in EEA |\n| Embraer 195 E2 | 295 | E195 | Mapped onto older model |\n| Embraer 195 Regional Jet | E95 | E195 | Direct match in EEA |\n| Embraer EMB-110 Bandeirante | EMB | E110 | Direct match in EEA |\n| Embraer EMB-120 Brasilia | EM2 | E120 | Direct match in EEA |\n| Embraer ERJ-135 Regional Jet | ER3 | E135 | Direct match in EEA |\n| Embraer ERJ-135/140/145 Regional Jet | ERJ | | Mapped to least efficient in family |\n| Embraer ERJ-140 Regional Jet | ERD | E145 | Direct match in EEA |\n| Embraer ERJ-145 Regional Jet | ER4 | E145 | Direct match in EEA |\n| Embraer RJ-170/175/190/195 Regional Jet | EMJ | | Mapped to least efficient in family |\n| Fairchild (Swearingen) Metro/Merlin | SWM | | Not supported |\n| Fairchild Dornier 328JET | FRJ | | Not supported |\n| Fokker 100 | 100 | F100 | Direct match in EEA |\n| Fokker 50 | F50 | F50 | Direct match in EEA |\n| Fokker 70 | F70 | F70 | Direct match in EEA |\n| Ilyushin IL-76 | IL7 | IL76 | Direct match in EEA |\n| Ilyushin IL-96-300 | IL9 | IL96 | Direct match in EEA |\n| LET L410 Turbolet | L4T | L410 | Direct match in EEA |\n| McDonnell Douglas MD-11 Freighter | M1F | MD11 | Direct match in EEA |\n| McDonnell Douglas MD-80 | M80 | | Not supported |\n| McDonnell Douglas MD-83 | M83 | | Not supported |\n| McDonnell Douglas MD-87 | M87 | | Not supported |\n| McDonnell Douglas MD-88 | M88 | | Not supported |\n| McDonnell Douglas MD-90 | M90 | MD90 | Direct match in EEA |\n| Pilatus Brit-Norm BN-2A/B ISL/BN-2T | BNI | | Not supported |\n| SAAB 2000 | S20 | | Not supported |\n| Saab 340B | SFB | | Not supported |\n| SAAB SF 340 | SF3 | | Not supported |\n| Sukhoi Superjet 100-95 | SU9 | | Not supported |\n| Tupolev TU-154 | TU5 | | Not supported |\n| Xian Yunshuji MA-60 | MA6 | | Not supported |\n| Yakovlev YAK-40 | YK4 | | Not supported |\n| Yakovlev 
YAK-42 | YK2 | | Not supported |\n'",,"2022/04/13, 10:14:13",560,CC-BY-4.0,15,23,"2022/09/20, 13:03:59",1,2,2,0,400,1,0.0,0.6111111111111112,,,0,4,false,,false,true,,,https://github.com/google,https://opensource.google/,,,,https://avatars.githubusercontent.com/u/1342004?v=4,,, Carbon Dioxide Removal Database,Open science reports on carbon removal projects and technologies.,carbonplan,https://github.com/carbonplan/cdr-database.git,github,,Carbon Credits and Capture,"2023/06/01, 15:33:02",22,0,3,true,JavaScript,carbonplan,carbonplan,"JavaScript,Python",https://carbonplan.org/research/cdr-database,"b""\n\n# carbonplan / cdr-database\n\n**reports on carbon removal projects and technologies**\n\n[![GitHub][github-badge]][github]\n[![Build Status]][actions]\n![MIT License][]\n[![DOI](https://zenodo.org/badge/252217021.svg)](https://zenodo.org/badge/latestdoi/252217021)\n\n[github]: https://github.com/carbonplan/cdr-database\n[github-badge]: https://badgen.net/badge/-/github?icon=github&label\n[build status]: https://github.com/carbonplan/cdr-database/actions/workflows/main.yml/badge.svg\n[actions]: https://github.com/carbonplan/cdr-database/actions/workflows/main.yaml\n[mit license]: https://badgen.net/badge/license/MIT/blue\n\n## resources\n\n- Main website: https://carbonplan.org/\n- This site: https://carbonplan.org/research/cdr-database\n\n## to build the projects data\n\nTo run the script that generates the project data, first install the requirements:\n\n```shell\npip install -r requirements.txt\n```\n\nYou will also need to unlock the Google Sheets key using [`git-crypt`](https://github.com/AGWA/git-crypt). Unlocking is simplest using a symmetric secret key securely shared by a team member.\n\nFinally, you may run the command to generate the projects list for all review cycles:\n\n```shell\npython scripts/build_projects.py strp2020 strp2021q1 strp2021q4 msft2021\n```\n\n## to build the site locally\n\nAssuming you already have `Node.js` installed, you can install the build dependencies with:\n\n```shell\nnpm install .\n```\n\nTo start a development version of the site, simply run:\n\n```shell\nnpm run dev\n```\n\nand then visit `http://localhost:5000/research/cdr-database` in your browser.\n\n## license\n\nAll the code in this repository is [MIT](https://choosealicense.com/licenses/mit/) licensed, but we request that you please provide attribution if reusing any of our digital content (graphics, logo, reports, etc.). Some of the data featured here is sourced from content made available under a [CC-BY-4.0](https://choosealicense.com/licenses/cc-by-4.0/) license. We include attribution for this content, and we please request that you also maintain that attribution if using this data.\n\n## about us\n\nCarbonPlan is a non-profit organization that uses data and science for climate action. We aim to improve the transparency and scientific integrity of climate solutions with open data and tools. 
Find out more at [carbonplan.org](https://carbonplan.org/) or get in touch by [opening an issue](https://github.com/carbonplan/cdr-database/issues/new) or [sending us an email](mailto:hello@carbonplan.org).\n""",",https://zenodo.org/badge/latestdoi/252217021","2020/04/01, 15:38:55",1302,MIT,2,747,"2022/10/24, 22:28:53",12,240,251,3,366,10,0.7,0.2962226640159046,"2021/11/19, 21:08:55",1.2.0,0,10,false,,false,false,,,https://github.com/carbonplan,carbonplan.org,earth,,,https://avatars.githubusercontent.com/u/58278235?v=4,,, FOQUS,Framework for Optimization and Quantification of Uncertainty and Surrogates.,CCSI-Toolset,https://github.com/CCSI-Toolset/FOQUS.git,github,foqus,Carbon Credits and Capture,"2023/09/29, 17:13:08",40,0,6,true,Python,CCSI Toolset,CCSI-Toolset,"Python,JavaScript,R,GAMS,PowerShell,Shell,HTML,Makefile,CSS",https://foqus.readthedocs.io,"b""# FOQUS: Framework for Optimization, Quantification of Uncertainty, and Surrogates\nPackage includes: FOQUS GUI, Optimization Engine, Turbine Client. *Requires access to a Turbine Gateway installation either locally or on a separate cluster/server. GAMS is required for the heat integration option.*\n\n## Project Status\n\n[![Documentation Status](https://readthedocs.org/projects/foqus/badge/?version=latest)](https://foqus.readthedocs.io/en/latest/?badge=latest)\n[![Tests](https://github.com/CCSI-Toolset/FOQUS/actions/workflows/tests.yml/badge.svg)](https://github.com/CCSI-Toolset/FOQUS/actions/workflows/tests.yml)\n[![Nightlies](https://github.com/CCSI-Toolset/FOQUS/actions/workflows/nightlies.yml/badge.svg)](https://github.com/CCSI-Toolset/FOQUS/actions/workflows/nightlies.yml)\n[![GitHub contributors](https://img.shields.io/github/contributors/CCSI-Toolset/FOQUS.svg)](https://github.com/CCSI-Toolset/FOQUS/graphs/contributors)\n[![Merged PRs](https://img.shields.io/github/issues-pr-closed-raw/CCSI-Toolset/FOQUS.svg?label=merged+PRs)](https://github.com/CCSI-Toolset/FOQUS/pulls?q=is:pr+is:merged)\n[![Issue stats](http://isitmaintained.com/badge/resolution/CCSI-Toolset/FOQUS.svg)](http://isitmaintained.com/project/CCSI-Toolset/FOQUS)\n[![Downloads](https://pepy.tech/badge/ccsi-foqus)](https://pepy.tech/project/ccsi-foqus)\n\n\n## Getting Started\n\n### Install\nTo get started right away, start with the [installation](https://foqus.readthedocs.io/en/stable/chapt_install/index.html) instructions for the most recent stable release.\n\nWe have several video playlists on how to install FOQUS:\n* [Python 3 version of FOQUS](https://www.youtube.com/playlist?list=PLmBxveOxgaXl-H9Wp3X6SIpVWg3Ua1Y2X)\n* [Optional software for FOQUS](https://www.youtube.com/playlist?list=PLmBxveOxgaXn24WEhFMyrtA-0_4Rvlesw)\n* [Python 2 version of FOQUS](https://www.youtube.com/playlist?list=PLmBxveOxgaXkyrQP9CAgUu_ZPYsS4qCvd) \n\n### Documentation and User's Manual\nRead the full [documentation for FOQUS](https://foqus.readthedocs.io/en/stable/) (including the installation manual). Documentation for [past releases or the latest](https://readthedocs.org/projects/foqus/) (unreleased) development version is also available.\n\nA complete set of usage and installation instruction videos for FOQUS is available on our [YouTube channel](https://www.youtube.com/channel/UCBVjFnxrsWpNlcnDvh0_GzQ/).\n\n### FAQ\nSee our [FAQ](FAQs.md) for frequently asked questions and answers.\n\n## Authors\nSee also the list of [contributors](../../graphs/contributors) who participated in this project.\n\n## Development Practices\n* Code development will be performed in a forked copy of the repo. 
Commits will not be \n made directly to the repo. Developers will submit a pull request that is then merged\n by another team member, if another team member is available.\n* Each pull request should contain only related modifications to a feature or bug fix. \n* Sensitive information (secret keys, usernames, etc.) and configuration data \n (e.g. database host, port) should not be checked in to the repo.\n* A practice of rebasing with the main repo should be used rather than merge commits.\n\n## Versioning\nWe use [SemVer](http://semver.org/) for versioning. For the versions available, see the \n[releases](../../releases) or [tags](../../tags) on this repository.\n\n## License & Copyright\nSee the [LICENSE.md](LICENSE.md) file for details.\n\n## Reference\nIf you are using FOQUS for your work, please reference the following paper:\n\nMiller, D.C., Agarwal, D., Bhattacharyya, D., Boverhof, J., Chen, Y., Eslick, J., Leek, J., Ma, J., Mahapatra, P., Ng, B., Sahinidis, N.V., Tong, C., Zitney, S.E., 2017. Innovative computational tools and models for the design, optimization and control of carbon capture processes, in: Papadopoulos, A.I., Seferlis, P. (Eds.), Process Systems and Materials for CO2 Capture: Modelling, Design, Control and Integration. John Wiley & Sons Ltd, Chichester, UK, pp. 311\xe2\x80\x93342.\n\n## Technical Support\nIf you require assistance, or have questions regarding FOQUS, please send an e-mail to ccsi-support@acceleratecarboncapture.org or [open an issue in GitHub](https://github.com/CCSI-Toolset/FOQUS/issues)\n""",,"2017/06/12, 22:05:34",2326,CUSTOM,45,1762,"2023/09/29, 16:45:20",50,541,1118,93,26,8,1.2,0.7819905213270142,"2023/10/10, 19:00:38",3.19.0,0,27,false,,false,false,,,https://github.com/CCSI-Toolset,https://www.acceleratecarboncapture.org/,,,,https://avatars.githubusercontent.com/u/27831154?v=4,,, GEOSX,"A simulation framework for modeling coupled flow, transport, and geomechanics in the subsurface.",GEOSX,https://github.com/GEOS-DEV/GEOS.git,github,"hpc,reservoir-simulation,geomechanics,gpu,carbon-storage,llnl",Carbon Credits and Capture,"2023/10/25, 18:48:37",170,0,68,true,C++,GEOS,GEOS-DEV,"C++,Python,CMake,Shell,Perl,C,TeX,Dockerfile,CSS",,"b""[![DOI](https://zenodo.org/badge/131810578.svg)](https://zenodo.org/badge/latestdoi/131810578)\n\nWelcome to the GEOS project!\n-----------------------------\nGEOS is a simulation framework for modeling coupled flow, transport, and geomechanics\nin the subsurface. The code provides advanced solvers for a number of target applications,\nincluding\n - carbon sequestration,\n - geothermal energy,\n - and similar systems. \n\nA key focus of the project is achieving scalable performance on current and next-generation\nhigh performance computing systems. We do this through a portable programming model and research into scalable algorithms.\n\nYou may want to browse our\n[publications](https://geosx-geosx.readthedocs-hosted.com/en/latest/docs/sphinx/Publications.html)\npage for more details on the HPC, numerics,\nand applied engineering components of this effort.\n\nDocumentation\n---------------------\n\nOur documentation is hosted [here](https://geosx-geosx.readthedocs-hosted.com/en/latest/?).\n\n\nWho develops GEOS?\n-------------------\nGEOS is an open source project and is developed by a community of researchers at\nseveral institutions. 
The bulk of the code has been written by contributors from\nthree main organizations:\n - Lawrence Livermore National Laboratory,\n - Stanford University,\n - TotalEnergies.\n\nSee our\n[authors](https://geosx-geosx.readthedocs-hosted.com/en/latest/docs/sphinx/Contributors.html)\nand\n[acknowledgements](https://geosx-geosx.readthedocs-hosted.com/en/latest/docs/sphinx/Acknowledgements.html)\npage for more details. \n\nHow does GEOS relate to the earlier GEOS code?\n------------------------------\nGEOS is the offshoot of an earlier code developed at LLNL also called GEOS. The new\ncode differs from our previous efforts in two important ways:\n - This new code GEOS uses a fundamentally different programming model to achieve\n high performance on the complicated chip architectures common on today's\n HPC systems. This code is ready for exascale-class systems as they are delivered.\n - The new code has been released as an open-source effort to encourage collaboration\n within the research and industrial community. See the release notes below\n for details of the [LGPL 2.1 License](./LICENSE) that has been adopted.\n\n\nRelease\n-------\n\nFor release details and restrictions, please read the [LICENSE](./LICENSE) file.\n\nFor copyrights, please read the [COPYRIGHT](./COPYRIGHT ) file.\n\nFor contributors, please read the [CONTRIBUTORS](./CONTRIBUTORS ) file.\n\nFor acknowledgements, please read the [ACKNOWLEDGEMENTS](./ACKNOWLEDGEMENTS ) file.\n\nFor notice, please read the [NOTICE](./NOTICE ) file.\n\n`LLNL-CODE-812638` `OCEC-18-021`\n""",",https://zenodo.org/badge/latestdoi/131810578","2018/05/02, 06:54:50",2002,LGPL-2.1,288,4443,"2023/10/25, 18:48:39",307,1660,2408,516,0,110,1.4,0.613947696139477,"2023/10/24, 21:59:58",v1.0.1,0,60,false,,false,false,,,https://github.com/GEOS-DEV,,,,,https://avatars.githubusercontent.com/u/38363894?v=4,,, OpenIAM,"An open source integrated assessment model developed by National Risk Assessment Partnership Phase II to facilitate risk assessment, management and containment assurance for geologic carbon sequestration projects.",NRAP,https://gitlab.com/NRAP/OpenIAM,gitlab,,Carbon Credits and Capture,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,, CDRMEx,Carbon Dioxide Removal Modeling Experiments.,hsbay,https://github.com/hsbay/cdrmex.git,github,"carbon-dioxide-removal,magicc,cdr,carbon-removal,climate-modeling-experiments,climate-change",Carbon Credits and Capture,"2023/05/16, 21:37:54",10,0,2,true,Jupyter Notebook,Open NanoCarbon,hsbay,"Jupyter Notebook,Python",https://opennanocarbon.atlassian.net/wiki/spaces/REF/pages/575963137/Method+to+Determine+A+CDR+Target#MethodtoDetermineACDRTarget-ExperimentalValidationPaper,"b""# CDRMEx\nCarbon Dioxide Removal (CDR) Modeling Experiments\n\n##### CC-BY-4.0, 2020 Shannon A. Fiume\n\nThis project models highly speculative Carbon Dioxide Removal to understand\nits effects and speculate how much carbon may need to be removed to return to a\ncarbon dioxide concentration of 280 ppm. The experiments are performed in MAGICC6.8\nand have been run on pymagicc. The repo contains the scenario input files for MAGICC\nand a notebook that outlines the experiments and results.\n\nThe experiments are shown in [ONCtests.ipynb](ONCtests.ipynb) which is \na jupyter notebook that runs pymagicc, and requires windows or \nwine when run on a non-windows platform. 
To run these experiments, download \n[wine](https://sourceforge.net/projects/wine/files/latest/download),\n[python](https://www.python.org/downloads/), pip, \n[pymagicc](https://github.com/openscm/pymagicc), this repo, and open the \nnotebook in jupyter.\n\n#### Install and run the workbook\nDownload/install [wine](https://sourceforge.net/projects/wine/files/latest/download)\n\nNext open a terminal, and add wine to the path.\n\nThen run:\n```\npip install -r requirements.txt\njupyter-notebook ONCtests.ipynb\n```\n\n#### Install for development \nOpen a terminal and do something like the following:\n\n```\nwhich wine\ngit clone https://github.com/hsbay/cdrmex\ngit clone https://github.com/openscm/pymagicc\ncd pymagicc\nmake venv\n./venv/bin/pip install --editable .\n./venv/bin/pip install ipywidgets appmode\n./venv/bin/pip install -r requirements.txt\njupyter nbextension enable --py --sys-prefix widgetsnbextension\njupyter nbextension enable --py --sys-prefix appmode\njupyter serverextension enable --py --sys-prefix appmode\n./venv/bin/jupyter-notebook ../cdrmex/ONCtests.ipynb\n```\n\nAfter the notebook is up, run all the cells, if they haven't already been populated.\n\nThis workbook uses [pymagicc](https://pymagicc.readthedocs.io/en/latest/) by R. Gieseke, S. N. Willner and M. Mengel, (2018). \nPymagicc: A Python wrapper for the simple climate model MAGICC. \n Journal of Open Source Software, 3(22), 516, \n https://doi.org/10.21105/joss.00516\n\n[MAGICC](http://magicc.org/) is by:\n M. Meinshausen, S. C. B. Raper and T. M. L. Wigley (2011). \n \xe2\x80\x9cEmulating coupled atmosphere-ocean and carbon cycle models with a simpler model, MAGICC6: Part I \xe2\x80\x9cModel Description and Calibration.\xe2\x80\x9d \n Atmospheric Chemistry and Physics 11: 1417-1456. 
\n https://doi.org/10.5194/acp-11-1417-2011\n\nThis software is CC-BY-4.0, carries no warranty, and accepts no liability; use at your own risk.\nSee license.txt for more information.\n""",",https://doi.org/10.21105/joss.00516\n\n,https://doi.org/10.5194/acp-11-1417-2011\n\nThis","2019/04/26, 05:54:41",1643,CC-BY-4.0,2,196,"2023/05/16, 21:37:54",0,37,41,1,162,0,0.0,0.0062893081761006275,,,0,2,true,"github,custom",false,false,,,https://github.com/hsbay,http://www.autofracture.com/opencarbon,"Horseshoe Bay, California",,,https://avatars.githubusercontent.com/u/35169692?v=4,,, National Carbon Credit Registry,As an online database using national and international standards for quantifying and verifying greenhouse gas emissions reductions by programmes.,undp,https://github.com/undp/carbon-registry.git,github,"carbon-emissions,climate,sustainable-development-goals,digital-public-goods",Carbon Credits and Capture,"2023/10/24, 17:47:36",26,3,26,true,TypeScript,UNDP,undp,"TypeScript,HTML,SCSS,Dockerfile,JavaScript",https://demo.carbreg.org,"b'# National Carbon Credit Registry\n\n\n\n![GitHub last commit](https://img.shields.io/github/last-commit/undp/carbon-registry)\n![Uptime](https://img.shields.io/endpoint?url=https://raw.githubusercontent.com/undp/carbon-registry-status/master/api/carbon-registry/uptime.json)\n![GitHub Workflow Status (with branch)](https://img.shields.io/github/actions/workflow/status/undp/carbon-registry/server-deployments.yml?branch=main&label=server%20build)\n ![GitHub Workflow Status (with branch)](https://img.shields.io/github/actions/workflow/status/undp/carbon-registry/frontend-deployment-prod.yml?branch=main&label=frontend%20build)\n[![SparkBlue Chat](https://img.shields.io/badge/chat-SparkBlue-blue)](https://www.sparkblue.org/group/keeping-track-digital-public-goods-paris-agreement)\n[![Digital Public Goods certification](https://img.shields.io/badge/Digital%20Public%20Good-Certified-blueviolet)](https://app.digitalpublicgoods.net/a/10403)\n\nThe National Carbon Registry enables carbon credit trading in order to reduce greenhouse gas emissions.\n\nAs an online database, the National Carbon Registry uses national and international standards for quantifying and verifying greenhouse gas emissions reductions by programmes, tracking issued carbon credits and enabling credit transfers in an efficient and transparent manner. The Registry functions by receiving, processing, recording and storing data on mitigation projects and on the issuance, holding, transfer, acquisition, cancellation, and retirement of emission reduction credits. This information is publicly accessible to increase public confidence in the emissions reduction agenda.\n\nThe National Carbon Registry enables the tracking of carbon credit transactions from mitigation activities, as the digital implementation of the Paris Agreement. 
Any country can customize and deploy a local version of the registry, then connect it to other national & international registries, MRV systems, and more.\n\nThe system has 3 key features:\n\n* **Analytics Dashboard:** Enabling governments, companies, and certification bodies to operate transparently and function on an immutable blockchain.\n* **Carbon Credit Calculator:** A standardized system according to the UNFCCC CDM (Clean Development Mechanism) methodologies, across defined sectors.\n* **Serial Number Generator:** Standardizing the technical format to allow for easy cross-border collaboration between carbon trading systems.\n\n## Index\n\n* [About](#about)\n* [Standards](#standards)\n* [System Architecture](#architecture)\n* [Project Structure](#structure)\n* [Run Services as Containers](#container)\n* [Run Services Locally](#local)\n* [Deploy System on the AWS Cloud](#cloud)\n* [Modules](#modules)\n* [Web Frontend](#frontend)\n* [Localization](#localization)\n* [API (Application Programming Interface)](#api)\n* [Status Page](#status)\n* [User Manual](#manual)\n* [Demonstration Video](#demo)\n* [Governance and Support](#support)\n\n\n\n## Standards\n\nThis codebase aims to fulfil the [Digital Public Goods standard](https://digitalpublicgoods.net/standard/) and it is built according to the [Principles for Digital Development](https://digitalprinciples.org/).\n\n\n\n## System Architecture\n\nThe UNDP Carbon Registry is based on a service-oriented architecture (SOA). The following diagram visualizes the basic components in the system.\n\n![System Architecture](./documention/imgs/System%20Architecture.svg)\n\n\n\n### System Services\n\n### National Service\n\nAuthenticates, validates and accepts user (Government, Programme Developer/Certifier) API requests related to the following functionalities:\n\n* User and company CRUD operations.\n* User authentication.\n* Programme life cycle management.\n* Credit life cycle management.\n\nThe service is horizontally scalable, and state is maintained in the following locations:\n\n* File storage.\n* Operational Database.\n* Ledger Database.\n\nUses the Carbon Credit Calculator and Serial Number Generator node modules to estimate the programme carbon credit amount and issue a serial number.\nUses the Ledger interface to persist programme and credit life cycles.\n\n### Analytics Service\n\nServes all the system analytics. Generates all the statistics using the operational database.\nHorizontally scalable.\n\n### Replicator Service\n\nAsynchronously replicates ledger database events into the operational database. During the replication process it injects additional information into the data for query purposes (e.g. location information).\nCurrently implemented for QLDB and PostgreSQL ledgers. More ledger replicators can be supported by implementing the [replicator interface](./backend/services/src/ledger-replicator/replicator-interface.service.ts).\nThe replicator is selected based on the `LEDGER_TYPE` environment variable; a sketch of this selection pattern follows the list below. Supported types are `PGSQL` (default) and `QLDB`.\n\n### Deployment\n\nSystem services can be deployed in two ways:\n\n* **As a Container** - Each service boundary is containerized into a docker container and can be deployed on any container orchestration service. [Please refer to the Docker Compose file](./docker-compose.yml)\n* **As a Function** - Each service boundary is packaged as a function (serverless) and hosted on any Function as a Service (FaaS) stack. [Please refer to the Serverless configuration file](./backend/services/serverless.yml)
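\nAs referenced in the Replicator Service section, the concrete replicator is chosen from the `LEDGER_TYPE` environment variable. The real implementation is a NestJS (TypeScript) service; the Python sketch below only illustrates the selection pattern, and the class names are assumptions, not the project's actual API.\n\n```python\nimport os\n\n# Illustrative sketch of env-var-driven replicator selection (not the\n# registry's actual code, which is TypeScript/NestJS).\nclass PgsqlReplicator:\n    def replicate(self, event: dict) -> None:\n        print("writing ledger event to PostgreSQL:", event)\n\nclass QldbReplicator:\n    def replicate(self, event: dict) -> None:\n        print("replaying QLDB stream record into the operational DB:", event)\n\nREPLICATORS = {"PGSQL": PgsqlReplicator, "QLDB": QldbReplicator}\n\ndef make_replicator():\n    # PGSQL is the documented default when LEDGER_TYPE is unset.\n    ledger_type = os.environ.get("LEDGER_TYPE", "PGSQL")\n    return REPLICATORS[ledger_type]()\n\nmake_replicator().replicate({"type": "ProgrammeCreated", "seq": 1})\n```\n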
\n\n### **External Service Providers**\n\nAll external services are accessed through a generic interface. This decouples the system implementation from the external services and enables extensibility to multiple services.\n\n#### Geo Location Service\n\nCurrently implemented for two options:\n\n1. File-based approach. The user has to manually add the regions with their geo coordinates. [Sample File](./backend/services/regions.csv). To apply new file changes, the replicator service needs to be restarted.\n2. [Mapbox](https://mapbox.com). Dynamically queries geo coordinates from the Mapbox API.\n\nMore options can be added by implementing the [location interface](./backend/services/src/shared/location/location.interface.ts).\n\nSelect via the environment variable `LOCATION_SERVICE`. Supported types are `FILE` (default) and `MAPBOX`.\n\n#### File Service\n\nTwo options are implemented for static file hosting:\n\n1. NestJS static file hosting using the local storage and container volumes.\n2. AWS S3 file storage.\n\nMore options can be added by implementing the [file handler interface](./backend/services/src/shared/file-handler/filehandler.interface.ts).\n\nSelect via the environment variable `FILE_SERVICE`. Supported types are `LOCAL` (default) and `S3`.\n\n### **Database Architecture**\n\nA primary/secondary database architecture is used to store carbon programmes and account balances.\nThe ledger database is the primary database. Programmes are added/updated and account balances updated in a single transaction. Currently implemented only for AWS QLDB.\n\nThe operational database is the secondary database. Data is eventually replicated to it from the primary database via a data stream. Implemented based on PostgreSQL.\n\n#### Why a Two-Database Approach?\n\n1. Cost and query capabilities - Ledger database (blockchain) read capabilities can be limited and costly. To support rich statistics and minimize cost, data is replicated into a cheap query database.\n2. Disaster recovery.\n3. Scalability - A primary/secondary database architecture is scalable, since additional secondary databases can be added as needed to handle more read operations.\n\n#### Why a Ledger Database?\n\n1. Immutable and transparent - Tracks and maintains a sequenced history of every carbon programme and credit change.\n2. Data integrity (cryptographic verification by a third party).\n3. Reconciles carbon credits and company account balances.\n\n#### Ledger Database Interface\n\nThis enables the capability to add support for any blockchain or ledger database to the carbon registry without changes to the functionality modules. Currently implemented for PostgreSQL and AWS QLDB.\n\n#### PostgreSQL Ledger Implementation\n\nThis ledger implementation stores all the carbon programme and credit events in a separate event database with a sequence number. It supports all the ledger functionalities except immutability. \n\nA single-database approach is used for user and company management.\n\n### **Ledger Layout**\n\nThe Carbon Registry contains 3 ledger tables:\n\n1. Programme ledger - Contains all the programme and credit transactions.\n2. Company Account Ledger (Credit) - Contains company account credit transactions.\n3. Country Account Ledger (Credit) - Contains country credit transactions.\n\nThe diagram below demonstrates the ledger behavior of the programme create, authorise, issue and transfer processes. A blue document icon denotes a single data block in a ledger.\n\n![Ledger Layout](./documention/imgs/Ledger.svg)
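\nThe core idea of the PostgreSQL ledger implementation described above is an append-only event table keyed by a sequence number. The sketch below illustrates that idea in self-contained Python, using SQLite purely for portability; the actual service uses PostgreSQL/QLDB behind its ledger interface, and the table and event names here are assumptions.\n\n```python\nimport sqlite3\n\n# Minimal sketch of an append-only programme ledger with sequence numbers\n# (illustrative; the registry itself uses PostgreSQL or QLDB).\nconn = sqlite3.connect(":memory:")\nconn.execute(\n    "CREATE TABLE programme_ledger ("\n    "seq INTEGER PRIMARY KEY AUTOINCREMENT, programme_id TEXT, event TEXT)"\n)\n\ndef append_event(programme_id: str, event: str) -> None:\n    # Events are only ever inserted, never updated or deleted.\n    conn.execute(\n        "INSERT INTO programme_ledger (programme_id, event) VALUES (?, ?)",\n        (programme_id, event),\n    )\n\nfor event in ("create", "authorise", "issue", "transfer"):\n    append_event("PRG-001", event)\n\n# Replaying events in sequence order reconstructs a programme's full history.\nprint(conn.execute("SELECT seq, event FROM programme_ledger ORDER BY seq").fetchall())\n```\n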
\n\n### **Authentication**\n\n* JWT Authentication - All endpoints, based on role permissions.\n* API Key Authentication - MRV system connectivity.\n\n\n\n## Project Structure\n\n```text\n.\n\xe2\x94\x9c\xe2\x94\x80\xe2\x94\x80 .github # CI/CD [Github Actions files]\n\xe2\x94\x9c\xe2\x94\x80\xe2\x94\x80 deployment # Declarative configuration files for initial resource creation and setup [AWS Cloudformation]\n\xe2\x94\x9c\xe2\x94\x80\xe2\x94\x80 backend # System service implementation\n \xe2\x94\x9c\xe2\x94\x80\xe2\x94\x80 services # Services implementation [NestJS application]\n \xe2\x94\x9c\xe2\x94\x80\xe2\x94\x80 src\n \xe2\x94\x9c\xe2\x94\x80\xe2\x94\x80 national-api # National API [NestJS module] \n \xe2\x94\x9c\xe2\x94\x80\xe2\x94\x80 stats-api # Statistics API [NestJS module]\n \xe2\x94\x9c\xe2\x94\x80\xe2\x94\x80 ledger-replicator # Blockchain Database data replicator [QLDB to Postgres]\n \xe2\x94\x9c\xe2\x94\x80\xe2\x94\x80 shared # Shared resources [NestJS module] \n \xe2\x94\x9c\xe2\x94\x80\xe2\x94\x80 serverless.yml # Service deployment scripts [Serverless + AWS Lambda]\n\xe2\x94\x9c\xe2\x94\x80\xe2\x94\x80 libs\n \xe2\x94\x9c\xe2\x94\x80\xe2\x94\x80 carbon-credit-calculator # Implementation for the Carbon credit calculation library [Node module + Typescript]\n \xe2\x94\x9c\xe2\x94\x80\xe2\x94\x80 serial-number-gen # Implementation for the carbon programme serial number calculation [Node module + Typescript]\n\xe2\x94\x9c\xe2\x94\x80\xe2\x94\x80 web # System web frontend implementation [ReactJS]\n\xe2\x94\x9c\xe2\x94\x80\xe2\x94\x80 .gitignore\n\xe2\x94\x9c\xe2\x94\x80\xe2\x94\x80 docker-compose.yml # Docker container definitions\n\xe2\x94\x94\xe2\x94\x80\xe2\x94\x80 README.md\n```\n\n\n\n## Run Services As Containers\n\n* Update the [docker compose file](./docker-compose.yml) env variables as required.\n * Currently all emails are disabled using the env variable `IS_EMAIL_DISABLED`. When emails are disabled, the email payload will be printed on the console. User account passwords need to be extracted from this console log, including the root user account; search for a log line starting with `Password (temporary)` on the national container (`docker logs -f undp-carbon-registry-national-1`).\n * Add / update the following environment variables to enable email functionality:\n * `IS_EMAIL_DISABLED`=false\n * `SOURCE_EMAIL` (sender email address)\n * `SMTP_ENDPOINT`\n * `SMTP_USERNAME`\n * `SMTP_PASSWORD`\n * Use the `DB_PASSWORD` env variable to change the PostgreSQL database password.\n * Configure the system root account email by updating the environment variable `ROOT_EMAIL`. If the email service is enabled, on the first docker start this email address will receive a new email with the root user password.\n * By default the frontend does not show map images on the dashboard and programme view. To enable them, please update the `REACT_APP_MAP_TYPE` env variable to `Mapbox` and add a new env variable `REACT_APP_MAPBOXGL_ACCESS_TOKEN` with a [MapBox public access token](https://docs.mapbox.com/help/tutorials/get-started-tokens-api/) in the web container.\n* Add user data\n * Update the [organisations.csv](./organisations.csv) file to add organisations.\n * Update the [users.csv](./users.csv) file to add users.\n * When updating files, keep the header and replace the existing dummy data with your data.\n * These users and companies are added to the system on each docker restart.\n* Run `docker-compose up -d --build`. This will build and start containers for the following services:\n * PostgresDB container\n * National service\n * Analytics service\n * Replicator service\n * React web server with Nginx.
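\nBefore bringing the stack up, it can help to sanity-check the e-mail settings described in the first bullet above. The variable names come from this README; the pre-flight check itself is just a convenience sketch and is not part of the registry codebase.\n\n```python\nimport os\n\n# Pre-flight check for the documented e-mail env variables (illustrative).\nREQUIRED_WHEN_EMAIL_ENABLED = ["SOURCE_EMAIL", "SMTP_ENDPOINT", "SMTP_USERNAME", "SMTP_PASSWORD"]\n\ndef check_email_config() -> None:\n    if os.environ.get("IS_EMAIL_DISABLED", "true").lower() == "true":\n        # Per the README, payloads (incl. temporary passwords) then go to the console log.\n        print("Emails disabled: payloads will be printed to the container console.")\n        return\n    missing = [name for name in REQUIRED_WHEN_EMAIL_ENABLED if not os.environ.get(name)]\n    if missing:\n        raise SystemExit(f"Email enabled but missing env variables: {', '.join(missing)}")\n    print("Email configuration looks complete.")\n\ncheck_email_config()\n```\n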
\n* Web frontend on \n* API Endpoints,\n * \n * \n\n\n\n## Run Services Locally\n\n* Set up PostgreSQL locally and create a new database.\n* Update the following DB configurations in the `.env.local` file (if the file does not exist, please create a new `.env.local`):\n * `DB_HOST` (default `localhost`)\n * `DB_PORT` (default `5432`)\n * `DB_USER` (default `root`)\n * `DB_PASSWORD`\n * `DB_NAME` (default `carbondbdev`)\n* Move to the folder: `cd backend/service`\n* Run `yarn run sls:install`\n* Initial user data setup\n\n```sh\nserverless invoke local --stage=local --function setup --data \'{""rootEmail"": """",""systemCountryCode"": """", ""name"": """", ""logoBase64"": """"}\'\n```\n\n* Start all the services by executing\n \n```sh\nsls offline --stage=local\n```\n\n* Now all the system services are up and running. Swagger documentation will be available on \n\n\n\n## Deploy System on the AWS Cloud\n\n* Execute the following to create all the required resources on AWS.\n\n```sh\naws cloudformation deploy --template-file ./deployment/aws-formation.yml --stack-name carbon-registry-basic --parameter-overrides EnvironmentName= DBPassword= --capabilities CAPABILITY_NAMED_IAM\n```\n\n* Set up the following GitHub secrets to enable CI/CD:\n * `AWS_ACCESS_KEY_ID`\n * `AWS_SECRET_ACCESS_KEY`\n* Run the workflow manually to deploy all the lambda services immediately. It will create 2 lambda layers and the following lambda functions:\n * national-api: Handles all carbon registry user and programme creation. Triggered by external HTTP requests.\n * replicator: Replicates ledger database entries into the Postgres database for analytics. Triggered by a new record on the Kinesis stream.\n * setup: Function to add initial system user data.\n* Create initial user data in the system by invoking the setup lambda function:\n\n```sh\naws lambda invoke \\\n --function-name carbon-registry-services-dev-setup --cli-binary-format raw-in-base64-out\\\n --payload \'{""rootEmail"": """",""systemCountryCode"": """", ""name"": """", ""logoBase64"": """"}\' \\\n response.json\n```\n\n\n\n## Modules\n\n### Serial Number Generation\n\nSerial number generation is implemented in a separate node module. [Please refer to this](./libs/serial-number-gen/README.md) for more information.\n\n### Carbon Credit Calculator\n\nCarbon credit calculation is implemented in a separate node module. [Please refer to this](./libs/carbon-credit-calculator/README.md) for more information.\n\n### UNDP Platform for Voluntary Bilateral Cooperation\n\nThe UNDP Platform for Voluntary Bilateral Cooperation is implemented in a separate node module. [Please refer to this](./modules/Platform%20for%20Voluntary%20Bilateral%20Cooperation/README.md) for more information.\n\n\n\n### Web Frontend\n\nThe web frontend is implemented using the ReactJS framework. Please refer to [getting started with react app](./web/README.md) for more information.\n\n\n\n### Localization\n\n* Languages (Current): English\n* Languages (In progress): French, Spanish\n\nFor updating translations or adding new ones, reference \n\n\n\n### API (Application Programming Interface)\n\nFor integration, reference the RESTful Web API documentation via Swagger. To access:\n\n* National API: `api.APP_URL`/national\n* Status API: `api.APP_URL`/stats
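\nThe two base paths above are the only endpoint details given here; the sketch below merely assembles them from a deployment domain. `example.org` and the `https://api.` prefix interpretation are assumptions, not documented values, and the concrete resource paths live in the Swagger documentation.\n\n```python\n# Illustrative URL assembly only; consult the Swagger docs for real endpoints.\nAPP_URL = "example.org"  # placeholder for your deployment's domain\n\ndef national_api(path: str = "") -> str:\n    return f"https://api.{APP_URL}/national{path}"\n\ndef stats_api(path: str = "") -> str:\n    return f"https://api.{APP_URL}/stats{path}"\n\nprint(national_api())  # https://api.example.org/national\nprint(stats_api())     # https://api.example.org/stats\n```\n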
\n\nOur [Data Dictionary](./Data%20Dictionary.csv) is available for field analysis.\n\n\n\n### Resource Requirements\n\n| Resource | Minimum | Recommended |\n| :--- | ---: | ---: |\n| Memory | 4 GB | 8 GB |\n| CPU | 4 Cores | 4 Cores |\n| Storage | 20 GB | 50 GB |\n| OS | Linux; Windows Server 2016 and later versions | |\n\nNote: The above resource requirements are for a single instance of each microservice.\n\n\n\n### Status Page\n\nFor transparent uptime monitoring, go to the [status page](https://status.carbreg.org/).\n\nOpen source code available at \n\n\n\n### User Manual\n\nTo learn more about how the system is structured and how to manage it, visit the [user manual](https://github.com/undp/carbon-registry/blob/main/documention/manual.md)\n\n\n\n### Demonstration Video\n\nWatch our [demo walkthrough](https://www.youtube.com/watch?v=xSxXvcPveT0)\n[![Video Thumbnail](https://img.youtube.com/vi/xSxXvcPveT0/maxresdefault.jpg)](https://www.youtube.com/watch?v=xSxXvcPveT0)\n\n\n\n### Governance and Support\n\n[Digital For Climate (D4C)](https://www.theclimatewarehouse.org/work/digital-4-climate) is responsible for managing the application. D4C is a collaboration between the [European Bank for Reconstruction and Development (EBRD)](https://www.ebrd.com), [United Nations Development Program (UNDP)](https://www.undp.org), [United Nations Framework Convention on Climate Change (UNFCCC)](https://www.unfccc.int), [International Emissions Trading Association (IETA)](https://www.ieta.org), [European Space Agency (ESA)](https://www.esa.int), and [World Bank Group](https://www.worldbank.org) that aims to coordinate respective workflows and create a modular and interoperable end-to-end digital ecosystem for the carbon market. The overarching goal is to support a transparent, high integrity global carbon market that can channel capital for impactful climate action and low-carbon development.\n\nThis code is managed by [United Nations Development Programme](https://www.undp.org) as custodian, detailed in the [press release](https://www.undp.org/news/newly-accredited-digital-public-good-national-carbon-registry-will-help-countries-meet-their-climate-targets). For any questions, contact us at [digital4planet@undp.org](mailto:digital4planet@undp.org).\n'",,"2022/06/24, 20:57:37",488,AGPL-3.0,2672,2690,"2023/10/24, 17:20:39",61,113,136,136,1,1,0.6,0.4913990825688074,"2023/10/19, 20:40:02",v1.0,0,19,false,,true,true,"undp/carbon-library,undp/carbon-registry,xeptagondev/carbon-registry",,https://github.com/undp,https://www.undp.org,International,,,https://avatars.githubusercontent.com/u/2916543?v=4,,, forest-risks,Statistical models of forest carbon potential and risks.,carbonplan,https://github.com/carbonplan/forest-risks.git,github,,Carbon Credits and Capture,"2022/11/23, 02:01:51",28,2,6,true,Jupyter Notebook,carbonplan,carbonplan,"Jupyter Notebook,Python,R,Dockerfile,Shell",https://carbonplan.org/research/forest-risks-explainer,"b""\n\n# carbonplan / forest-risks\n\n**forest carbon potential and risks**\n\n[![CI](https://github.com/carbonplan/forest-offsets/actions/workflows/main.yaml/badge.svg)](https://github.com/carbonplan/forest-offsets/actions/workflows/main.yaml)\n[![MIT License](https://badgen.net/badge/license/MIT/blue)](./LICENSE)\n[![DOI](https://img.shields.io/badge/code-10.5281/zenodo.4741329-6aa3d5?link=https://doi.org/10.5281/zenodo.4741329)](https://doi.org/10.5281/zenodo.4741329)\n\nThis repository includes our libraries and scripts for mapping forest carbon potential and risks.\n\n## install\n\n```shell\npip install carbonplan[forest-risks]\n```\n\n## usage\n\nThis codebase is organized into modules that implement data loading and model fitting as well as utilities for plotting and other common tasks. 
Most analyses involve some combination of the `load` and `fit` modules.\n\nThe `scripts` folder contains tools to import data, run models, and parse results.\n\nThe `regrid.py` and `convert.py` scripts are for converting the results to zarr files for storage and geojson for visualization purposes.\n\nSeveral notebooks are additionally provided that show the use of these tools for fitting models and inspecting model outputs. Notebooks are organized by the model type, e.g. `biomass`, `fire`, etc.\n\n## data products\n\nAs part of this project we have created derived data products for four key variables relevant to evaluating forest carbon storage potential and risks.\n- `biomass` The potential carbon storage in forests assuming continued growth of existing forests.\n- `fire` The risks associated with forest fires.\n- `drought` The risk to forests from drought-related tree mortality.\n- `insects` The risk to forests from insect-related tree mortality.\n\nGridded rasters for each of these layers are available for the continental United States at a 4km spatial scale. For biomass and fire, projections are shown through the end of the 21st century in decadal increments. Drought and insect models are still in development, so we currently only show historical risks for these disturbance types. All data are accessible via this [catalog](https://github.com/carbonplan/forest-risks/blob/master/carbonplan_forest_risks/data/catalog.yaml). Additional formats and download options will be provided in the future.\n\n## license\n\nAll the code in this repository is [MIT](https://choosealicense.com/licenses/mit/) licensed. When possible, the data used by this project is licensed using the [CC-BY-4.0](https://choosealicense.com/licenses/cc-by-4.0/) license. We include attribution and additional license information for third party datasets, and we request that you also maintain that attribution if using this data.\n\n## about us\n\nCarbonPlan is a non-profit organization that uses data and science for climate action. We aim to improve the transparency and scientific integrity of climate solutions with open data and tools. 
Find out more at [carbonplan.org](https://carbonplan.org/) or get in touch by [opening an issue](https://github.com/carbonplan/forest-risks/issues/new) or [sending us an email](mailto:hello@carbonplan.org).\n\n## contributors\n\nThis project is being developed by CarbonPlan staff and the following outside contributors:\n\n- Bill Anderegg (@anderegg)\n- Grayson Badgley (@badgley)\n- Anna Trugman\n""",",https://doi.org/10.5281/zenodo.4741329","2020/08/07, 20:18:36",1174,MIT,6,376,"2023/02/06, 13:01:15",13,89,91,19,261,10,0.4,0.5316455696202531,,,0,10,false,,false,false,"narest-qa/repo54,carbonplan/carbonplan-python",,https://github.com/carbonplan,carbonplan.org,earth,,,https://avatars.githubusercontent.com/u/58278235?v=4,,, Guardian,"Provides auditable, traceable, reproducible records that document the emission process and lifecycle of carbon credits, which reduce fraud in the ESG market.",hashgraph,https://github.com/hashgraph/guardian.git,github,,Carbon Credits and Capture,"2023/10/24, 18:04:44",72,0,28,true,TypeScript,Hedera,hashgraph,"TypeScript,HTML,JavaScript,CSS,Solidity,SCSS,Dockerfile,Shell,Makefile",,"b'# Guardian\n\n[![Apache 2.0 License](https://img.shields.io/hexpm/l/apa)](LICENSE) ![Build results](https://github.com/hashgraph/guardian/actions/workflows/main.yml/badge.svg?branch=main) ![GitHub package.json version (branch)](https://img.shields.io/github/package-json/v/hashgraph/guardian/master/guardian-service?label=version) [![Discord chat](https://img.shields.io/discord/373889138199494658)](https://discord.com/channels/373889138199494658/898264469786988545)\n\n## Overview\n\nGuardian is a modular open-source solution that includes best-in-class identity management and decentralized ledger technology (DLT) libraries. At the heart of the Guardian solution is a sophisticated Policy Workflow Engine (PWE) that enables applications to offer a digital (or digitized) Measurement, Reporting, and Verification requirements-based tokenization implementation.\n\n[HIP-19](https://github.com/hashgraph/hedera-improvement-proposal/blob/master/HIP/hip-19.md) \xc2\xb7 [HIP-28](https://github.com/hashgraph/hedera-improvement-proposal/blob/master/HIP/hip-28.md) \xc2\xb7 [HIP-29](https://github.com/hashgraph/hedera-improvement-proposal/blob/master/HIP/hip-29.md) \xc2\xb7 [Report a Bug](CONTRIBUTING#bug-reports) \xc2\xb7 [Request a Policy or a Feature](CONTRIBUTING#new-policy-or-feature-requests)\n\n## Discovering Digital Environmental Assets on Hedera\n\nAs identified in Hedera Improvement Proposal 19 (HIP-19), each entity on the Hedera network may contain a specific identifier in the memo field for discoverability. Guardian demonstrates this when every Hedera Consensus Service transaction is logged to a Hedera Consensus Service (HCS) Topic. By observing the Hedera Consensus Service Topic, you can discover newly minted tokens. \n\nIn the memo field of each token mint transaction you will find a unique Hedera message timestamp. This message contains the url of the Verifiable Presentation (VP) associated with the token. The VP can serve as a starting point from which you can traverse the entire sequence of documents produced by the Guardian policy workflow, which led to the creation of the token. This includes a digital Methodology (Policy) HCS Topic, an associated Registry HCS Topic for that Policy, and a Project HCS Topic.\n\nPlease see p.17 in the FAQ for more information. 
\n\n([back to top](#readme))\n\n## Getting started\n\nTo get a local copy up and running quickly, follow the steps below. Please refer to the documentation for complete details.\n\n**Note**: If you have already installed another version of Guardian, remember to **perform a backup operation before upgrading**.\n\n## Prerequisites\n\n* [Hedera Testnet Account](https://portal.hedera.com)\n* [Web3.Storage Account](https://web3.storage/)\n\nWhen building the reference implementation, you can [manually build every component](#manual-installation) or run a single command with Docker.\n\n## Automatic installation\n\n### Prerequisites for automatic installation\n\n* [Docker](https://www.docker.com)\n\nIf you build with Docker, [MongoDB V6](https://www.mongodb.com), [NodeJS V16](https://nodejs.org), [Yarn](https://classic.yarnpkg.com/lang/en/docs/install/#mac-stable) and [Nats 1.12.2](https://nats.io/) will be installed and configured automatically.\n\n### Installation\n\nThe following steps need to be executed in order to start Guardian using Docker:\n\n1. Clone the repo\n2. Configure the project level .env file\n3. Update BC access variables\n4. Set up IPFS\n5. Build and launch with Docker\n6. Browse to [http://localhost:3000](http://localhost:3000)\n\nA description of each step follows:\n\n1. Clone the repo\n\n ```shell\n git clone https://github.com/hashgraph/guardian.git\n ```\n\n2. Configure the project level .env file.\n\nThe main configuration file that needs to be provided to the Guardian system is the `.env` file.\nCopy `.env.template` and rename it to `.env`; here you may choose the name of the Guardian platform. Leave the field empty or unspecified if you are updating a production environment, to keep the previous data (for more details read [here](https://docs.hedera.com/guardian/guardian/readme/environments/ecosystem-environments)).\n\nFor the purposes of this example, let\'s name the Guardian platform ""develop"":\n\n```shell\n GUARDIAN_ENV=""develop""\n```\n\n> **_NOTE:_** Every service ships with a `.env.template` file in its folder; this set of files is only needed for a manual installation. \n\n3. Update BC access variables.\n\nUpdate the following files with your Hedera Testnet account info (see prerequisites) as indicated. Please check the complete steps to generate the Operator_ID and Operator_Key at the link: [How to Create Operator_ID and Operator_Key](https://docs.hedera.com/guardian/getting-started/getting-started/how-to-create-operator-id-and-operator-key).\nThe Operator_ID, Operator_Key and HEDERA_NET are all that Guardian needs to access the Hedera blockchain and assume a role on it. These parameters need to be configured in a file under the `./configs` path, using the following naming convention:\n\n `./configs/.env.<GUARDIAN_ENV>.guardian.system`\n\nThere will be other steps in the Demo Usage Guide that will be required for the generation of Operator\_ID and Operator\_Key. It is important to mention that the Operator_ID and Operator_Key in this file will be used to generate demo accounts.\n\nThe parameter `HEDERA_NET` may assume the following values: `mainnet`, `testnet`, `previewnet`, `localnode`; choose the right value depending on the target Hedera network on which your `OPERATOR_ID` has been defined.
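\n\nThe sketch below (ours, not Guardian code; the helper names are illustrative) summarises the naming convention and the allowed `HEDERA_NET` values described above:\n\n```python\nALLOWED_NETS = {""mainnet"", ""testnet"", ""previewnet"", ""localnode""}\n\ndef system_config_path(guardian_env: str) -> str:\n    # An empty GUARDIAN_ENV yields ""./configs/.env..guardian.system"",\n    # the file used when upgrading an existing production environment.\n    return f""./configs/.env.{guardian_env}.guardian.system""\n\ndef check_hedera_net(value: str) -> str:\n    if value not in ALLOWED_NETS:\n        raise ValueError(f""HEDERA_NET must be one of {sorted(ALLOWED_NETS)}"")\n    return value\n\nassert system_config_path(""develop"") == ""./configs/.env.develop.guardian.system""\n```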
\n\n As an example, following the previous configuration, the file to configure should be named `./configs/.env.develop.guardian.system`. This file is already provided in the folder as an example; only update the variables OPERATOR_ID, OPERATOR_KEY and HEDERA_NET.\n\n ```plaintext\n OPERATOR_ID=""...""\n OPERATOR_KEY=""...""\n HEDERA_NET=""...""\n ```\n\nThe Multi-environment release (2.13.0) introduced a new parameter, `PREUSED_HEDERA_NET`.\nMulti-environment support is a breaking change, and this parameter is intended to smooth the upgrade. \nHow `PREUSED_HEDERA_NET` should be configured depends on the installation context:\n\n- If the installation is a completely new one, just remove the parameter and feel free to jump to the next paragraph.\n- If you are upgrading from a release after Multi-environment (>= 2.13.0), do not change the state of this parameter (so if you removed the parameter in some previous installation, do not reintroduce it).\n- If the installation is an upgrade from a release before Multi-environment (< 2.13.0), you need to configure `PREUSED_HEDERA_NET`. After that, the parameter stays in the configuration unchanged.\n\n#### 3.1. PREUSED_HEDERA_NET configuration\n\nThe `PREUSED_HEDERA_NET` parameter is intended to hold the target Hedera network that the system has already been notarizing data to: it is the reference to the HEDERA_NET that was in use before the upgrade.\nTo let the Multi-environment transition happen transparently, the `GUARDIAN_ENV` parameter in the `.env` file has to be configured as empty, while `PREUSED_HEDERA_NET` has to be set to the same value as the `HEDERA_NET` parameter in the previous configuration file. \n\n`PREUSED_HEDERA_NET` never needs to be changed after the first initialization; `HEDERA_NET`, by contrast, can be changed to target any of the different Hedera networks.\n\n - As a first example: suppose you are upgrading from a release earlier than 2.13.0 to a later one, and keep using the same HEDERA_NET=""mainnet"".\n\n Configure the name of the Guardian platform as empty in the `.env` file: \n\n ```shell\n GUARDIAN_ENV=""""\n ```\n\n In this case the configuration is stored in the file named `./configs/.env..guardian.system`, which is already provided in the folder as an example; update the variables OPERATOR_ID and OPERATOR_KEY.\n\n ```plaintext\n OPERATOR_ID=""...""\n OPERATOR_KEY=""...""\n ```\n Since PREUSED_HEDERA_NET is the reference to your previous HEDERA_NET configuration, set its value to match that previous HEDERA_NET:\n\n ```plaintext\n HEDERA_NET=""mainnet""\n PREUSED_HEDERA_NET=""mainnet""\n ```\n\n because you keep using HEDERA_NET pointed at ""mainnet"", just as in the previous installation.\n\n - As a second example: to test the new release, change HEDERA_NET to ""testnet"". This is the complete configuration:\n\n Set the name of the Guardian platform to any descriptive name in the `.env` file: \n\n ```shell\n GUARDIAN_ENV=""testupgrading""\n ```\n\n In this case the configuration is stored in the file named `./configs/.env.testupgrading.guardian.system`; again, update the variables OPERATOR_ID and OPERATOR_KEY using your testnet account.\n\n ```plaintext\n OPERATOR_ID=""...""\n OPERATOR_KEY=""...""\n ```\n\n Set HEDERA_NET=""testnet"" and set PREUSED_HEDERA_NET to refer to the mainnet, as you want the mainnet data to remain unchanged:\n\n ```plaintext\n HEDERA_NET=""testnet""\n PREUSED_HEDERA_NET=""mainnet""\n ```\n\n This configuration leaves all the data referring to mainnet untouched in the database while testing on testnet. Refer to the Guardian \n [documentation](https://docs.hedera.com/guardian/guardian/readme/environments/multi-session-consistency-according-to-environment) for more details.\n\n> **_NOTE:_** You can use the Schema Topic ID (`INITIALIZATION_TOPIC_ID`) already present in the configuration files, or you can specify your own.\n\n> **_NOTE:_** For any other GUARDIAN_ENV name of your choice, just copy and paste the file `./configs/.env.template.guardian.system` and rename it as `./configs/.env.<GUARDIAN_ENV>.guardian.system`.\n 
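\nTo summarise the three cases above, here is a minimal sketch (our reading of these rules, not Guardian code; all names are illustrative) of the value `PREUSED_HEDERA_NET` should take:\n\n```python\nfrom typing import Optional, Tuple\n\ndef preused_hedera_net(fresh_install: bool,\n                       previous_version: Tuple[int, int, int],\n                       current_preused: Optional[str],\n                       previous_hedera_net: Optional[str]) -> Optional[str]:\n    if fresh_install:\n        return None                 # new installation: omit the parameter\n    if previous_version >= (2, 13, 0):\n        return current_preused      # already Multi-environment: leave it untouched\n    return previous_hedera_net      # pre-2.13.0 upgrade: set it once, then never change it\n\n# e.g. upgrading a pre-2.13.0 system that notarized data to mainnet:\nassert preused_hedera_net(False, (2, 12, 0), None, ""mainnet"") == ""mainnet""\n```\n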
\n4. Now, there are two options to set up an IPFS node: 1. a local node; 2. an IPFS Web3.Storage node.\n\n#### 4.1 Setting up an IPFS local node\n\n - 4.1.1 Install and configure an IPFS node ([example](https://github.com/yeasy/docker-ipfs)).\n\n - 4.1.2 To set up a local IPFS node, set the following variables in the same file `./configs/.env.develop.guardian.system`:\n\n\n ```\n IPFS_NODE_ADDRESS=""..."" # Default IPFS_NODE_ADDRESS=""http://localhost:5001""\n IPFS_PUBLIC_GATEWAY=\'...\' # Default IPFS_PUBLIC_GATEWAY=\'https://localhost:8080/ipfs/${cid}\'\n IPFS_PROVIDER=""local""\n ```\n \n\n\n#### 4.2 Setting up an IPFS Web3.Storage node\n \nTo set up a Web3.Storage IPFS node, set the following variables in the file `./configs/.env.develop.guardian.system`:\n \n ```\n IPFS_STORAGE_API_KEY=""...""\n IPFS_PROVIDER=""web3storage""\n ```\n \n For complete information on generating a Web3.Storage API key, please check: [How to Create Web3.Storage API Key](https://docs.hedera.com/guardian/guardian/readme/getting-started/how-to-generate-web3.storage-api-key).\n \n5. Build and launch with Docker. Please note that this build is meant to be used in production and will not contain any debug information. From the project\'s root folder:\n\n ```shell\n docker compose up -d --build\n ```\n \n> **_NOTE:_** About docker-compose: from the end of June 2023, Compose V1 is no longer supported and is removed from all Docker Desktop versions. Make sure you use Docker Compose V2 (it comes with Docker Desktop > 3.6.0), as described at https://docs.docker.com/compose/install/\n\n
6. Browse to [http://localhost:3000](http://localhost:3000) and complete the setup.\n\nFor other examples, see:\n* [Deploying Guardian using a specific environment (DEVELOP)](https://docs.hedera.com/guardian/guardian/readme/getting-started/installation/building-from-source-and-run-using-docker/deploying-guardian-using-a-specific-environment-develop.md)\n* [Steps to deploy Guardian using a specific environment (QA)](https://docs.hedera.com/guardian/guardian/readme/getting-started/installation/building-from-source-and-run-using-docker/deploying-guardian-using-a-specific-environment-qa.md)\n* [Steps to deploy Guardian using the default environment](https://docs.hedera.com/guardian/guardian/readme/getting-started/installation/building-from-source-and-run-using-docker/deploying-guardian-using-default-environment.md)\n\n\n## Manual installation\n\nIf you want to manually build every component with debug information, build and run the services and packages in the following sequence: Interfaces, Logger Helper, Message Broker, Logger Service, Auth Service, IPFS, Guardian Service, UI Service, and lastly, the MRV Sender Service. See below for the commands.\n\n### Prerequisites for manual installation\n\n* [MongoDB V6](https://www.mongodb.com)\n* [NodeJS V16](https://nodejs.org)\n* [Yarn](https://classic.yarnpkg.com/lang/en/docs/install/#mac-stable)\n* [Nats 1.12.2](https://nats.io/)\n\n### Build and start each component\n\nInstall, configure and start all the prerequisites, then build and start each component.\n\n#### Services configuration \n\n- For each of the services, create the file `./<service>/.env`; to do this, copy, paste and rename the corresponding `./<service>/.env.template` file. \n\n For example, in `./guardian-service/.env`:\n ```plaintext\n GUARDIAN_ENV=""develop""\n ```\n\n If you need to configure OVERRIDE, uncomment the variable in the file `./guardian-service/.env`:\n ```plaintext\n OVERRIDE=""false"" \n ```\n\n- Configure the per-environment file `./<service>/configs/.env.<service>.<GUARDIAN_ENV>`; to do this, copy, \n paste and rename the corresponding template file. \n\n Following the previous example, in `./guardian-service/configs/.env.guardian.develop`:\n ```plaintext\n OPERATOR_ID=""...""\n OPERATOR_KEY=""...""\n ```\n\n> **_NOTE:_** Once you start each service, please wait for the initialization process to complete.\n\n1. Clone the repo\n\n ```shell\n git clone https://github.com/hashgraph/guardian.git\n ```\n2. Install dependencies\n\n Yarn:\n ```\n yarn\n ```\n\n Npm:\n ```\n npm install\n ``` \n3. From the **interfaces** folder\n\n Yarn:\n ```\n yarn workspace @guardian/interfaces run build\n ```\n\n Npm:\n ```\n npm --workspace=@guardian/interfaces run build\n ```\n\n4. From the **common** folder\n\n Yarn:\n ```\n yarn workspace @guardian/common run build\n ```\n\n Npm:\n ```\n npm --workspace=@guardian/common run build\n ```\n5. From the **logger-service** folder\n\n To build the service:\n\n Yarn:\n ```shell\n yarn workspace logger-service run build\n ```\n\n Npm:\n ```\n npm --workspace=logger-service run build\n ```\n\n Configure the service as previously described. No special configuration variables are needed.\n\n To start the service:\n\n Yarn:\n ```shell\n yarn workspace logger-service start\n ```\n\n Npm:\n ```\n npm --workspace=logger-service start\n ```\n6. From the **auth-service** folder\n\n To build the service:\n\n Yarn:\n ```shell\n yarn workspace auth-service run build\n ```\n\n Npm:\n ```\n npm --workspace=auth-service run build\n ```\n\n Configure the service as previously described. 
No special configuration variables are needed.\n\n To start the service:\n\n Yarn:\n ```shell\n yarn workspace auth-service start\n ```\n\n Npm:\n ```\n npm --workspace=auth-service start\n ```\n \n7. From the **policy-service** folder\n\n To build the service:\n\n Yarn:\n ```shell\n yarn workspace policy-service run build\n ```\n\n Npm:\n ```\n npm --workspace=policy-service run build\n ```\n Configure the service as previously described. No special configuration variables are needed.\n\n To start the service:\n\n Yarn:\n ```shell\n yarn workspace policy-service start\n ```\n\n Npm:\n ```\n npm --workspace=policy-service start\n ``` \n8. Build and start the **worker-service** service\n\n To build the service:\n\n Yarn:\n ```\n yarn workspace worker-service run build\n ```\n\n Npm:\n ```\n npm --workspace=worker-service run build\n ```\n Configure the service as previously described. Update the **IPFS_STORAGE_API_KEY** value in the `./worker-service/configs/.env.worker` file.\n\n To start the service:\n\n Yarn:\n ```\n yarn workspace worker-service start\n ```\n\n Npm:\n ```\n npm --workspace=worker-service start\n ```\n9. Build and start the **notification-service** service\n\n To build the service:\n\n Yarn:\n ```shell\n yarn workspace notification-service run build\n ```\n\n Npm:\n ```\n npm --workspace=notification-service run build\n ```\n Configure the service as previously described. Update the **OPERATOR_ID** and **OPERATOR_KEY** values in the `./guardian-service/configs/.env.worker` file as in the example above.\n\n To start the service:\n\n Yarn:\n ```shell\n yarn workspace notification-service start\n ```\n\n Npm:\n ```\n npm --workspace=notification-service start\n ```\n10. Build and start the **guardian-service** service\n\nTo build the service:\n\nYarn:\n```shell\nyarn workspace guardian-service run build\n```\n\nNpm:\n```\nnpm --workspace=guardian-service run build\n```\nConfigure the service as previously described. Update the **OPERATOR_ID** and **OPERATOR_KEY** values\nin the `./guardian-service/configs/.env.worker` file as in the example above.\n\nTo start the service:\n\nYarn:\n```shell\nyarn workspace guardian-service start\n```\n\nNpm:\n```\nnpm --workspace=guardian-service start\n```\n\n11. From the **api-gateway** folder\n\n To build the service:\n\nYarn:\n ```shell\n yarn workspace api-gateway run build\n ```\n\nNpm:\n```\nnpm --workspace=api-gateway run build\n```\n\nConfigure the service as previously described. No special configuration variables are needed.\n\nTo start the service:\n\nYarn:\n ```shell\n yarn workspace api-gateway start\n ```\n\nNpm:\n```\nnpm --workspace=api-gateway start\n```\n\n12. From the **mrv-sender** folder\n\n To build the service:\n\n ```shell\n npm install\n npm run build\n ```\n\n Configure the service as previously described. No special configuration variables are needed.\n\n To start the service:\n\n ```shell\n npm start\n ```\n\n13. From the **frontend** folder\n\n To build the service:\n\n ```shell\n npm install\n npm run build\n ```\n\n To start the service:\n\n ```shell\n npm start\n ```\n\n## Configuring a Hedera local network\n\n1. Install a Hedera Local Network following the [official documentation](https://github.com/hashgraph/hedera-local-node#docker)\n\n2. 
Configure Guardian\'s configuration files `/.env/.env.docker` accordingly:\n\n ```shell\n OPERATOR_ID=""""\n OPERATOR_KEY=""""\n LOCALNODE_ADDRESS=""11.11.11.11""\n LOCALNODE_PROTOCOL=""http""\n HEDERA_NET=""localnode""\n ```\n\n **Note:**\n * Set `LOCALNODE_ADDRESS` to the IP address of your local node instance. The value above is given as an example.\n * Set `HEDERA_NET` to `localnode`. If not specified, the default value is `testnet`.\n * Configure `OPERATOR_ID` and `OPERATOR_KEY` according to your local node configuration.\n * Remove `INITIALIZATION_TOPIC_ID`, as the topic will be created automatically.\n * Set `LOCALNODE_PROTOCOL` to `http` or `https` according to your local node configuration (it uses HTTP by default).\n\n## Configuring Hashicorp Vault\n1. Configure the .env/.env.docker files in the auth-service folder:\n\n ```\n VAULT_PROVIDER = ""hashicorp""\n ```\n \n Note: VAULT_PROVIDER can be set to ""database"" or ""hashicorp"" to select a database instance or a Hashicorp Vault instance respectively.\n \n If the VAULT_PROVIDER value is set to ""hashicorp"", the following 3 parameters should be configured in the auth-service folder: \n \n 1. HASHICORP_ADDRESS: http://localhost:8200 for a local vault. For a remote vault, use the value from the configuration settings of the Hashicorp Vault service.\n 2. HASHICORP_TOKEN: the token from the Hashicorp Vault.\n 3. HASHICORP_WORKSPACE: this is only needed when using a cloud vault for Hashicorp. The default value is ""admin"".\n\n2. Hashicorp should be configured with a Key-Value storage, named ""secret"" by default, with records for the following keys:\n 1. OPERATOR_ID\n 2. OPERATOR_KEY\n 3. IPFS_STORAGE_API_KEY\n \n Note: These records in the vault will be created automatically if there are environment variables with the matching names.\n \n **How to import existing user keys from the DB into the vault:**\n \n During Guardian services initialization, set the following configuration settings in the **auth-service** folder:\n \n ```\n IMPORT_KEYS_FROM_DB = 1\n VAULT_PROVIDER = ""hashicorp""\n ```\n \n## Local development using Docker\n\n1. Create a .env file at the root level and update all variables required for Docker:\n\n ```shell\n cp .env.example .env\n ```\n\n2. Start local development using docker compose:\n\n ```shell\n docker compose -f docker-compose-dev.yml up --build\n ```\n\n3. Access the local development environment in your browser.\n\n## Troubleshooting\n\n**To clean the Docker build cache**:\n\n ```shell\n docker builder prune --all\n ```\n\n**To rebuild without the Docker cache**:\n\n ```shell\n docker compose build --no-cache\n ```\n\n([back to top](#readme))\n\n## Unit tests\n\nTo run the **guardian-service** unit tests, the following commands need to be executed:\n\n```shell\ncd guardian-service \nnpm run test\n```\n\nIt is also possible to run only the Hedera network tests. 
To do that, the following command needs to be executed:\n\n```shell\nnpm run test:network\n```\n\nTo run stability tests (certain transactions will be executed 10 times each), the following command needs to be executed:\n\n```shell\nnpm run test:stability\n```\n\n([back to top](#readme))\n\nPlease refer to the documentation for complete information about the following topics:\n\n* Swagger API\n* Postman Collection\n* Demo Usage guide\n* Contribute a New Policy\n* Reference Implementation\n* Technologies Built on\n* Roadmap\n* Change Log\n* Contributing\n* License\n* Security\n\n## Contact\n\nFor any questions, please reach out to the Envision Blockchain Solutions team at:\n\n* Website: \n* Email: [info@envisionblockchain.com](mailto:info@envisionblockchain.com)\n\n([back to top](#readme))\n\n[license-url]: https://github.com/hashgraph/guardian/blob/main/LICENSE\n'",,"2021/10/11, 17:49:43",744,Apache-2.0,62,1677,"2023/10/24, 21:15:39",112,1890,2679,1365,1,4,0.1,0.6888198757763975,"2023/10/24, 18:07:42",v2.18.0-prerelease,0,15,false,,false,true,,,https://github.com/hashgraph,https://hedera.com,United States of America,,,https://avatars.githubusercontent.com/u/31002956?v=4,,, NCX Harvest Deferral Methodology,"Documents, Data, and Code for the NCX Methodology For Improved Forest Management Through Short-Term Harvest Deferral.",ncx-co,https://github.com/ncx-co/ifm_deferred_harvest.git,github,"carbon,climate-science,ecological-modelling,forestry",Carbon Credits and Capture,"2023/03/22, 21:00:19",11,0,11,true,R,NCX,ncx-co,"R,Python",,"b'# NCX Harvest Deferral Methodology\n\n**more release details [at ncx.com](https://info.ncx.com/next-generation-methodology)**\n\nIncluded in this repo:\n- Methodology text in markdown for change tracking and better public comment\n- All comments received in the previous public comment period, converted to GitHub issues with NCX responses\n- Source code of our carbon accounting R package, an implementation of the key equations of the methodology we use to measure climate impact\n- Auxiliary data (csvs) for download and inspection\n'",,"2022/11/01, 18:37:52",358,Apache-2.0,10,10,"2023/03/22, 21:00:19",10,2,278,278,217,1,0.0,0.125,"2022/11/03, 02:42:19",v2,0,2,false,,false,false,,,https://github.com/ncx-co,https://www.ncx.com,,,,https://avatars.githubusercontent.com/u/3497011?v=4,,, SimCCS Map Tool,"Online maptool that provides novel decision-support capabilities for evaluating carbon capture, utilization and storage technologies.",SciGaP,https://github.com/SciGaP/simccs-maptool.git,github,"carbon-emissions,carbon-capture,carbon-storage,ccus,mapping-tools",Carbon Credits and Capture,"2023/09/28, 21:21:19",5,0,2,true,JetBrains MPS,Science Gateway Platform as a service,SciGaP,"JetBrains MPS,HTML,Python,JavaScript,Vue,CSS,Java,Makefile,SCSS",,"b'# SimCCS Map Tool\n\n## Getting Started\n\n1. Follow the instructions for installing the\n [Airavata Django Portal](https://github.com/apache/airavata-django-portal)\n2. With the Django Portal virtual environment activated, clone this repo and\n install it into the portal\'s virtual environment. Note, the `pip install`\n command will also run the JS frontend build and will require Node.js and Yarn\n installed (see the Airavata Django Portal installation instructions for more\n details).\n\n ```\n git clone https://github.com/SciGaP/simccs-maptool.git\n cd simccs-maptool\n pip install -e .\n ```\n\n3. Start (or restart) the Django Portal server.\n4. 
Open the Django Portal in your browser.\n\n## Django portal configuration\n\nThe following settings are relevant for the SimCCS Map Tool. These can be\nspecified in the Django Portal\'s `settings_local.py` file.\n\n- `JAVA_HOME` - the Java home directory. Defaults to the JAVA_HOME env variable\n if not set.\n- `MAPTOOL_SETTINGS` - this is a dictionary of Map Tool specific settings:\n - `CPLEX_APPLICATION_ID` - The Airavata application module id of the Cplex\n application to launch.\n - `CPLEX_HOSTNAME` - The hostname of the compute resource on which to launch\n Cplex.\n - `DATASETS_DIR` - Directory of datasets and their basedata (cost network).\n - `JAVA_OPTIONS` - JVM command line options. Defaults to `-Xmx4g`. May be a\n list or tuple to pass multiple options.\n - `MAX_CONCURRENT_JAVA_CALLS` - maximum concurrent calls into Java code\n allowed across all HTTP requests. Defaults to 1.\n\nExample of custom settings in a `settings_local.py` file:\n\n```python\nJAVA_HOME = ""/usr/java/default""\nMAPTOOL_SETTINGS = {\n ""CPLEX_APPLICATION_ID"": ""Cplex_a7eaf483-ab92-4441-baeb-2f302ccb2919"",\n ""DATASETS_DIR"": ""/data/simccs-datasets""\n}\n```\n\n## Creating DB migrations\n\n```\ndjango-admin makemigrations --pythonpath . --settings simccs_maptool.tests.settings simccs_maptool\n```\n\n## Building the Vue.js frontend code\n\n```bash\ncd frontend\nyarn install\nyarn run build\n```\n\nYou can also instead run `yarn run serve` to start a Webpack dev server with hot\nreloading. See\nhttps://apache-airavata-django-portal.readthedocs.io/en/latest/dev/developing_frontend/\nfor more details.\n\n## Pyjnius - simccs.jar notes\n\n### Installing dependencies\n\nIn your virtual environment install the following:\n\n```\npip install cython\npip install pyjnius\n```\n\n### Building the SimCCS jar\n\n#### Building simccs GitHub repo code\n\n**Note: there is no longer a need to build. 
Just grab the SimCCS.jar from\nhttps://github.com/simccs/SimCCS/tree/master/store**\n\nClone https://github.com/simccs/SimCCS\n\nThen copy `store/SimCCS.jar` to `simccs_maptool/simccs/lib/SimCCS.jar`.\n\n### MacOS notes\n\nI ran into issues and followed the suggestions here:\nhttps://github.com/joeferner/node-java/issues/90#issuecomment-45613235\n\nEdited `/Library/Java/JavaVirtualMachines/jdk1.8.0_121.jdk/Contents/Info.plist`\nand added JNI to JVMCapabilities:\n\n```xml\n...\n<key>JVMCapabilities</key>\n<array>\n <string>CommandLine</string>\n <string>JNI</string>\n</array>\n...\n```\n\n### Testing Pyjnius\n\nYou should be able to run the following with your virtual environment activated:\n\n```python\nimport jnius_config\nimport os\n\njnius_config.set_classpath(\n os.path.join(os.getcwd(), ""simccs_maptool"", ""simccs"", ""lib"", ""simccs-app-1.0-jar-with-dependencies.jar""),\n)\nfrom jnius import autoclass\n\nbasepath = os.path.join(os.getcwd(), ""simccs_maptool"", ""simccs"", ""Datasets"")\ndataset = ""SoutheastUS""\nscenario = ""scenario1""\nDataStorer = autoclass(""simccs.dataStore.DataStorer"")\ndata = DataStorer(basepath, dataset, scenario)\nSolver = autoclass(""simccs.solver.Solver"")\nsolver = Solver(data)\ndata.setSolver(solver)\ncandidate_graph = data.generateCandidateGraph()\n```\n'",,"2019/03/15, 15:33:42",1685,Apache-2.0,6,785,"2023/09/28, 21:22:16",50,22,160,5,27,22,0.0,0.06462585034013602,,,0,3,false,,false,false,,,https://github.com/SciGaP,http://scigap.org/,,,,https://avatars.githubusercontent.com/u/5615761?v=4,,, OceanBioME.jl,A tool to study the effectiveness and impacts of ocean carbon dioxide removal strategies.,OceanBioME,https://github.com/OceanBioME/OceanBioME.jl.git,github,"biogeochemical-models,biogeochemistry,julia,ocean-modelling,ocean-sciences,oceanography,climate,ocean",Carbon Credits and Capture,"2023/10/25, 12:54:56",25,0,24,true,Julia,OceanBioME,OceanBioME,"Julia,TeX",https://oceanbiome.github.io/OceanBioME.jl/,"b'![](OceanBioME_headerbar.jpg?raw=true)\n[![Documentation](https://img.shields.io/badge/documentation-stable%20release-blue?style=flat-square)](https://oceanbiome.github.io/OceanBioME.jl/stable/)\n[![Documentation](https://img.shields.io/badge/documentation-dev%20release-orange?style=flat-square)](https://oceanbiome.github.io/OceanBioME.jl/dev/)\n[![DOI](https://joss.theoj.org/papers/10.21105/joss.05669/status.svg)](https://doi.org/10.21105/joss.05669)\n[![DOI](https://zenodo.org/badge/DOI/10.5281/zenodo.10038575.svg)](https://doi.org/10.5281/zenodo.10038575)\n[![MIT license](https://img.shields.io/badge/License-MIT-blue.svg?style=flat-square)](https://mit-license.org)\n[![ColPrac: Contributor\'s Guide on Collaborative Practices for Community Packages](https://img.shields.io/badge/ColPrac-Contributor\'s%20Guide-blueviolet)](https://github.com/SciML/ColPrac)\n\n[![Testing](https://github.com/OceanBioME/OceanBioME.jl/actions/workflows/tests.yml/badge.svg)](https://github.com/OceanBioME/OceanBioME.jl/actions/workflows/tests.yml)\n[![codecov](https://codecov.io/gh/OceanBioME/OceanBioME.jl/branch/main/graph/badge.svg?token=3DIW4R7N3R)](https://codecov.io/gh/OceanBioME/OceanBioME.jl)\n[![Documentation](https://github.com/OceanBioME/OceanBioME.jl/actions/workflows/documentation.yml/badge.svg)](https://github.com/OceanBioME/OceanBioME.jl/actions/workflows/documentation.yml)\n# *Ocean* *Bio*geochemical *M*odelling *E*nvironment\n\n## Description\nOceanBioME was developed with generous support from the Cambridge Centre for Climate Repair [CCRC](https://www.climaterepair.cam.ac.uk) and the Gordon and Betty Moore 
Foundation as a tool to study the effectiveness and impacts of ocean carbon dioxide removal (CDR) strategies.\n\nOceanBioME is a flexible modelling environment written in Julia for modelling the coupled interactions between ocean biogeochemistry, carbonate chemistry, and physics. OceanBioME can be run as a stand-alone box model, or coupled with [Oceananigans.jl](https://github.com/cliMA/oceananigans.jl/) to run as a 1D column model or with 2D and 3D physics. \n\n## Installation\n\nFirst, [download and install Julia](https://julialang.org/downloads/)\n\nFrom the Julia prompt (REPL), type:\n```julia\njulia> using Pkg\njulia> Pkg.add(""OceanBioME"")\n```\n\n## Running your first model\nAs a simple example let\'s run a Nutrient-Phytoplankton-Zooplankton-Detritus (NPZD) model in a two-dimensional simulation of a buoyancy front. This example requires Oceananigans, so we install that first:\n\n```julia\nusing Pkg; Pkg.add(""Oceananigans"")\n\nusing OceanBioME, Oceananigans\nusing Oceananigans.Units\n\ngrid = RectilinearGrid(CPU(), size = (160, 32), extent = (10000meters, 500meters), topology = (Bounded, Flat, Bounded))\n\nbiogeochemistry = NutrientPhytoplanktonZooplanktonDetritus(; grid) \n\nmodel = NonhydrostaticModel(; grid, biogeochemistry,\n advection = WENO(; grid),\n\t\t\t closure = AnisotropicMinimumDissipation(),\n\t\t\t buoyancy = SeawaterBuoyancy(constant_salinity = true))\n\n@inline front(x, z, \xce\xbc, \xce\xb4) = \xce\xbc + \xce\xb4 * tanh((x - 7000 + 4 * z) / 500)\n\nP\xe1\xb5\xa2(x, y, z) = ifelse(z > -50, 0.03, 0.01) \nN\xe1\xb5\xa2(x, y, z) = front(x, z, 2.5, -2)\nT\xe1\xb5\xa2(x, y, z) = front(x, z, 9, 0.05)\n\nset!(model, N = N\xe1\xb5\xa2, P = P\xe1\xb5\xa2, Z = P\xe1\xb5\xa2, T = T\xe1\xb5\xa2)\n\nsimulation = Simulation(model; \xce\x94t = 50, stop_time = 4days)\n\nsimulation.output_writers[:tracers] = JLD2OutputWriter(model, model.tracers,\n filename = ""buoyancy_front.jld2"",\n schedule = TimeInterval(24minute),\n overwrite_existing = true)\n\nrun!(simulation)\n```\n\n
\nWe can then visualise this:\n\n```julia\nT = FieldTimeSeries(""buoyancy_front.jld2"", ""T"")\nN = FieldTimeSeries(""buoyancy_front.jld2"", ""N"")\nP = FieldTimeSeries(""buoyancy_front.jld2"", ""P"")\n\nxc, yc, zc = nodes(T)\n\ntimes = T.times\n\nusing CairoMakie\n\nn = Observable(1)\n\nT_lims = (8.94, 9.06)\nN_lims = (0, 4.5)\nP_lims = (0.007, 0.02)\n\nT\xe2\x82\x99 = @lift interior(T[$n], :, 1, :)\nN\xe2\x82\x99 = @lift interior(N[$n], :, 1, :)\nP\xe2\x82\x99 = @lift interior(P[$n], :, 1, :)\n\nfig = Figure(resolution = (1000, 520), fontsize = 20)\n\ntitle = @lift ""t = $(prettytime(times[$n]))""\nLabel(fig[0, :], title)\n\naxis_kwargs = (xlabel = ""x (m)"", ylabel = ""z (m)"", width = 770, yticks = [-400, -200, 0])\nax1 = Axis(fig[1, 1]; title = ""Temperature (\xc2\xb0C)"", axis_kwargs...)\nax2 = Axis(fig[2, 1]; title = ""Nutrients concentration (mmol N / m\xc2\xb3)"",axis_kwargs...)\nax3 = Axis(fig[3, 1]; title = ""Phytoplankton concentration (mmol N / m\xc2\xb3)"", axis_kwargs...)\n\nhm1 = heatmap!(ax1, xc, zc, T\xe2\x82\x99, colorrange = T_lims, colormap = Reverse(:lajolla), interpolate = true)\nhm2 = heatmap!(ax2, xc, zc, N\xe2\x82\x99, colorrange = N_lims, colormap = Reverse(:bamako), interpolate = true)\nhm3 = heatmap!(ax3, xc, zc, P\xe2\x82\x99, colorrange = P_lims, colormap = Reverse(:bamako), interpolate = true)\n\nColorbar(fig[1, 2], hm1, ticks = [8.95, 9.0, 9.05])\nColorbar(fig[2, 2], hm2, ticks = [0, 2, 4])\nColorbar(fig[3, 2], hm3, ticks = [0.01, 0.02, 0.03])\n\nrowgap!(fig.layout, 0)\n\nrecord(fig, ""buoyancy_front.gif"", 1:length(times)) do i\n n[] = i\nend\n```\n
\n\nhttps://github.com/OceanBioME/OceanBioME.jl/assets/26657828/7e45ebc0-f1f4-4ea6-9be2-32d9472c97f3\n\nIn this example `OceanBioME` is providing the `biogeochemistry` and the remainder is taken care of by `Oceananigans`.\nFor comprehensive documentation of the physics modelling see\n[Oceananigans\' Documentation](https://clima.github.io/OceananigansDocumentation/stable/); for\nthe biogeochemistry and other features we provide, read below.\n\n## Using GPU\n\nTo run the same example on a GPU we just need to construct the `grid` on the GPU; the rest is taken care of!\n\nJust replace `CPU()` with `GPU()` in the grid construction with everything else left unchanged:\n\n```julia\ngrid = RectilinearGrid(GPU(), size = (256, 32), extent = (500meters, 100meters), topology = (Bounded, Flat, Bounded))\n```\n\n## Documentation\n\nSee the [documentation](https://oceanbiome.github.io/OceanBioME.jl) for a full description of the software\npackage and more examples.\n\n## Contributing\nIf you\'re interested in contributing to the development of OceanBioME we would appreciate your help!\n\nIf you\'d like to work on a new feature, or if you\'re new to open source and want to crowd-source projects that fit your interests, please start a discussion.\n\nFor more information check out our [contributor\'s guide](https://oceanbiome.github.io/OceanBioME.jl/stable/contributing/).\n\n## Citing\n\nIf you use OceanBioME as part of your research, teaching, or other activities, we would be grateful if you could cite our work below and mention the package by name.\n\n```bibtex\n@article{OceanBioMEJOSS,\n doi = {10.21105/joss.05669},\n url = {https://doi.org/10.21105/joss.05669},\n year = {2023},\n publisher = {The Open Journal},\n volume = {8},\n number = {90},\n pages = {5669},\n author = {Jago Strong-Wright and Si Chen and Navid C. Constantinou and Simone Silvestri and Gregory LeClaire Wagner and John R. Taylor},\n title = {{OceanBioME.jl: A flexible environment for modelling the coupled interactions between ocean biogeochemistry and physics}},\n journal = {Journal of Open Source Software}\n}\n```\n\nTo cite a specific version of the package please also cite its [Zenodo archive](https://doi.org/10.5281/zenodo.10038575).\n'",",https://doi.org/10.21105/joss.05669,https://doi.org/10.5281/zenodo.10038575,https://doi.org/10.21105/joss.05669,https://doi.org/10.5281/zenodo.10038575","2022/07/15, 13:33:46",467,MIT,1199,1391,"2023/10/25, 12:54:57",5,99,135,103,0,0,4.8,0.29554655870445345,"2023/10/24, 20:10:08",v0.8.0,1,6,false,,false,false,,,https://github.com/OceanBioME,https://oceanbiome.github.io/OceanBioME.jl,United Kingdom,,,https://avatars.githubusercontent.com/u/109746627?v=4,,, ClimateMARGO.jl,"A Julia implementation of MARGO, an idealized framework for optimization of climate change control strategies.",ClimateMARGO,https://github.com/ClimateMARGO/ClimateMARGO.jl.git,github,"adaptation,carbon-removal,climate-science,geoengineering,julia,jump,mitigation,optimization,pluto-notebooks",Carbon Credits and Capture,"2023/10/18, 10:12:39",61,0,7,true,Julia,ClimateMARGO,ClimateMARGO,Julia,https://margo.plutojl.org/,"b'\n
# ClimateMARGO.jl\n\nA Julia implementation of MARGO, an idealized framework for optimization of climate control strategies.
\n\nThe MARGO model is described in full in an [accompanying Research Article](https://iopscience.iop.org/article/10.1088/1748-9326/ac243e/pdf), published *Open-Access* in the journal *Environmental Research Letters*. The julia scripts and jupyter notebooks that contain all of the paper\'s analysis are available in the [MARGO-paper](https://github.com/ClimateMARGO/MARGO-paper) repository (these are useful as advanced applications of MARGO to complement the minimal examples included in the documentation).\n\nTry out the MARGO model by running our [Pluto](https://plutojl.org)-based [**web-app**](https://margo.plutojl.org/introduction.html) directly in your browser!\n\n![Gif of ClimateMARGO.jl being used interactively. The user\'s mouse cursor clicks on an emissions curve to drag the emissions down. A second panel shows how these emissions reductions result in less global warming, ultimately keeping global warming below a target of 2\xc2\xbaC.](https://raw.githubusercontent.com/hdrake/MARGO-gifs/main/MARGO_interactive_2degrees.gif)\n\nClimateMARGO.jl is currently in beta testing; basic model documentation is slowly being added. Substantial structural changes may still take place before the first stable release v1.0.0. Anyone interested in helping develop the model can post an Issue here or contact the lead developer Henri Drake directly (henrifdrake `at` gmail.com), until explicit guidelines for contributing to the model are posted at a later date.\n\n\n----\nREADME.md formatting inspired by [Oceananigans.jl](https://github.com/CliMA/Oceananigans.jl)\n'",,"2020/05/25, 16:43:47",1248,MIT,4,233,"2023/10/09, 15:16:33",18,55,68,3,16,4,0.3,0.10679611650485432,"2022/11/14, 13:45:35",v0.3.3,1,3,false,,false,false,,,https://github.com/ClimateMARGO,,,,,https://avatars.githubusercontent.com/u/73409911?v=4,,, Carbon Mapper,"Accelerate local climate action globally by locating, quantifying and tracking methane leaks and CO2 point-sources from space.",,,custom,,Emission Observation and Modeling,,,,,,,,,,https://carbonmapperdata.org/,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,, ghg emissions indicator,R scripts for a greenhouse gases emissions indicator published on Environmental Reporting British Columbia.,bcgov,https://github.com/bcgov/ghg-emissions-indicator.git,github,"env,rstats,r,data-science",Emission Observation and Modeling,"2023/09/13, 20:41:14",8,0,1,true,R,Province of British Columbia,bcgov,R,,"b'[![License](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](https://opensource.org/licenses/Apache-2.0)\n\n![img](https://img.shields.io/badge/Lifecycle-Stable-97ca00)\n\n# Trends in Greenhouse Gas Emissions in B.C. \n \nA set of R scripts to populate an indicator on trends in greenhouse gas emissions in British Columbia. These scripts reproduce the results and graphs published on [Environmental Reporting BC](http://www.env.gov.bc.ca/soe/indicators/sustainability/ghg-emissions.html).\n\n## Usage\n\n### Data\n[British Columbia Greenhouse Gas Emissions](https://catalogue.data.gov.bc.ca/dataset/24c899ee-ef73-44a2-8569-a0d6b094e60c) data are sourced from the [B.C. 
Data Catalogue](https://catalogue.data.gov.bc.ca/dataset?download_audience=Public), released under the\n[Open Government Licence - British Columbia](http://www2.gov.bc.ca/gov/content/governments/about-the-bc-government/databc/open-data/open-government-license-bc).\n\nBritish Columbia population estimates ([Table: 17-10-0005-01](https://www150.statcan.gc.ca/t1/tbl1/en/tv.action?pid=1710000501)) and gross domestic product ([Table: 36-10-0222-01](https://www150.statcan.gc.ca/t1/tbl1/en/tv.action?pid=3610022201)) data are sourced from [Statistics Canada](https://www.statcan.gc.ca/eng/start), released under the [Statistics Canada Open Licence Agreement](https://www.statcan.gc.ca/eng/reference/licence). \n\n\n### Code\nThere are three core scripts that are required for the indicator; they need to be run in order:\n\n- 01_load.R\n- 02_clean.R\n- 03_output.R\n\nThe `run_all.R` script can be `source`ed to run it all at once.\n\nMost packages used in the analysis can be installed from CRAN using `install.packages()`, but you will need to install [envreportutils](https://github.com/bcgov/envreportutils) using remotes:\n\n\n```r\ninstall.packages(""remotes"") # If you don\'t already have it installed\n\nremotes::install_github(""bcgov/envreportutils"")\n\n```\n\n## Getting Help or Reporting an Issue\n\nTo report bugs/issues/feature requests, please file an [issue](https://github.com/bcgov/ghg-emissions-indicator/issues).\n\n## How to Contribute\n\nIf you would like to contribute, please see our [CONTRIBUTING](CONTRIBUTING.md) guidelines.\n\nPlease note that this project is released with a [Contributor Code of Conduct](CODE_OF_CONDUCT.md). By participating in this project you agree to abide by its terms.\n\n## License\n\n Copyright 2023 Province of British Columbia\n\n Licensed under the Apache License, Version 2.0 (the ""License"");\n you may not use this file except in compliance with the License.\n You may obtain a copy of the License at \n\n http://www.apache.org/licenses/LICENSE-2.0\n\n Unless required by applicable law or agreed to in writing, software\n distributed under the License is distributed on an ""AS IS"" BASIS,\n WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n See the License for the specific language governing permissions and\n limitations under the License.\n \nThis repository is maintained by [Environmental Reporting BC](http://www2.gov.bc.ca/gov/content?id=FF80E0B985F245CEA62808414D78C41B). 
Click [here](https://github.com/bcgov/EnvReportBC) for a complete list of our repositories on GitHub.\n'",,"2016/08/19, 21:03:41",2623,Apache-2.0,11,158,"2023/03/16, 17:14:05",1,7,13,4,223,0,0.0,0.509933774834437,"2023/03/16, 17:14:47",update-2023,0,7,false,,true,true,,,https://github.com/bcgov,https://github.com/bcgov/BC-Policy-Framework-For-GitHub,Canada,,,https://avatars.githubusercontent.com/u/916280?v=4,,, DuMux,"Based on the DUNE framework and aims to provide a multitude of numerical models as well as flexible discretization methods for complex non-linear phenomena, such as CO2 sequestration, soil remediation, drug delivery in cancer therapy and more.",dumux-repositories,,custom,,Emission Observation and Modeling,,,,,,,,,,https://git.iws.uni-stuttgart.de/dumux-repositories/dumux,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,, oco2peak,The goal of our project is to localize CO2 emissions on Earth based on the carbon concentration data measured by the OCO-2 Satellite from NASA.,dataforgoodfr,https://github.com/dataforgoodfr/batch7_satellite_ges.git,github,,Emission Observation and Modeling,"2022/05/08, 05:10:38",18,0,2,true,Jupyter Notebook,Data For Good France,dataforgoodfr,"Jupyter Notebook,HTML,Python,Shell,Dockerfile,Makefile,CSS",,"b'# OCO-2 CO2 peak detector\n\n\n\n## General presentation\n> The goal of our project is to localize CO2 emissions on Earth based on the carbon concentration data measured by the OCO-2 Satellite from NASA. \n\nWe are working with:\n- Matthieu Porte, from IGN, who submitted the project\n- Marie Heckmann, from the French Ministry of Ecology\n- Frederic Chevallier, from IPSL, one of the authors of [Observing carbon dioxide emissions over China\xe2\x80\x99s cities with the Orbiting Carbon Observatory-2](https://www.atmos-chem-phys-discuss.net/acp-2020-123/acp-2020-123.pdf)\n\n## What we have as input\n\n**1/ OCO-2 Satellite data**\n\n\n\nThe OCO-2 Satellite (Orbiting Carbon Observatory) from NASA orbits around Earth and measures the CO2 concentration in the atmosphere. \n\nHere is a visualisation of the CO2 concentration measured by the OCO-2 satellite in December 2019. \n![CO2_ concentration_OCO2](notebooks/assets/CO2_emissions_Edgar_2018.png)\n\nThe satellite uses spectrometers to detect CO2 in the atmosphere, as shown in the image below.\n\n![OCO2 spectrometers](https://upload.wikimedia.org/wikipedia/commons/thumb/4/44/Artist_rendition_of_the_CO2_column_that_OCO-2_will_see.jpg/321px-Artist_rendition_of_the_CO2_column_that_OCO-2_will_see.jpg)\n\n[source](https://commons.wikimedia.org/wiki/File:Artist_rendition_of_the_CO2_column_that_OCO-2_will_see.jpg)\n\nMore info here : \n\nThere are some limitations to the satellite measurement of the CO2 concentration:\n- The satellite can not see through clouds or fog;\n- It does not work the same over ground or water;\n- The swath of the satellite is quite narrow (only 10km), as shown in the image below; \n- As the satellite orbits the Earth, the coverage is partial.\n\n![OCO2 spectrometers](https://scx1.b-cdn.net/csz/news/800/2020/3-nasasatellit.jpg)\n\nMore info on the mission on .\n\nNASA made a global CO2 image (see below); however, this is an extrapolation of the data, and not what the satellite really sees.\n\n![NASA Global CO2](https://www.jpl.nasa.gov/images/oco/20090219/sinks-browse.jpg)\n\n**2/ Data on known CO2 emissions**\n\n- The Emissions Database for Global Atmospheric Research (EDGAR) on CO2 emissions. 
For the energy related sectors the activity data is mainly based on the energy balance statistics of IEA (2017), whereas the activity data for the agricultural sectors originates mainly from FAO (2018). The spatial allocation of emissions on the grid is based on spatial proxy datasets with the location of energy and manufacturing facilities, road networks, shipping routes, human and animal population density and agricultural land use, which vary over time. \nSource : https://edgar.jrc.ec.europa.eu/overview.php?v=50_GHG\n\n![CO2_emissions_Edgar_2018](https://user-images.githubusercontent.com/61688979/79775474-9637d180-8334-11ea-9712-274a11356aea.PNG)\n\n- The World Resource Institute provides a list of power plants producing electricity based on different primary energies. We filtered this list to keep only the fossil primary energies (gas, oil and coal), which release CO2 during their combustion.\nSource: http://datasets.wri.org/dataset/globalpowerplantdatabase\n\n![power_plant_emissions_2017](notebooks/assets/power_plant_emissions_2017.png)\n\n- Other sources of CO2 emissions are under study. \n\n## What we do\n\n\nFirst approach: peak detection from OCO-2 data & inference from inventory data\n\n- Detect peaks in OCO-2 data, with a 2-step methodology:\n\t- Step 1: Identification of local \xe2\x80\x98peaks\xe2\x80\x99 through Gaussian fits (curve_fit), taking into account the intrinsic complexity of OCO-2 data, notably: high variance of the \xe2\x80\x98background\xe2\x80\x99 CO2 level across the globe, narrowness & incompleteness of plume observations (due to clouds / fog / \xe2\x80\xa6), ...\n\t- Step 2: Elimination of irrelevant peaks to keep only \xe2\x80\x98true\xe2\x80\x99 anomalies. So far this uses a quite drastic & manual methodology, with rules to keep only clear Gaussians; the objective is to improve this part with algorithm-based anomaly detection. \n\n- Aggregate known sources of CO2 from inventory data, using EDGAR & the World Resource Institute\n\n- Find the nearest inventory source from the peak position, using the wind vector.\n\n- Compare peaks to known source emissions and confirm them\n\nSecond approach: supervised model to learn to detect peaks from inventory data [not started]\n- Use areas where inventory data are complete to let a supervised model learn peaks in OCO2 data\n\nOn top: dynamic visualization of data\n- Display the results on a comprehensive map, crossing satellite & inventory data\n\n## What we have achieved\n\n - Gathered data from EDGAR and the World Resource Institute and plotted them on a map.\n - Got raw satellite data from NASA and merged them into monthly datasets with the data we need.\n - Computed a Gaussian curve fit over each orbit and saved the results.\n - Built an interactive dashboard to share our work on the web.\n \nHere is a sample of a peak with the Gaussian found:\n\n![Gaussian Peak](notebooks/assets/gaussian_peak.png)\n\nAnd the result on the website:\n\n![OCO2 Peak app](notebooks/assets/screen-shot.png)\n\n\n\n## We need help\n\n- Better peak detection: so far, we are fitting Gaussian curves to detect relevant peaks. 2 issues:\n - We use SciPy\'s curve_fit (see the sketch after this list). Do you know a better algorithm, or how to tune the parameters of curve_fit?\n - We are looking at other methodologies to detect anomalies (our \'peaks\') in the concentrations - any idea? \n- Wind modeling to estimate emissions from detected concentrations (inverting the Gaussian plume model) - any idea?\n
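\nFor reference, here is a minimal, self-contained sketch of the Step 1 idea mentioned above (illustrative only, not the project\'s pipeline code; the synthetic data and initial guesses are our assumptions): a Gaussian anomaly is fitted on top of a linear background with `scipy.optimize.curve_fit`.\n\n```python\nimport numpy as np\nfrom scipy.optimize import curve_fit\n\ndef gaussian_peak(x, amplitude, center, sigma, slope, intercept):\n    # linear background + Gaussian anomaly\n    return slope * x + intercept + amplitude * np.exp(-((x - center) ** 2) / (2 * sigma ** 2))\n\n# Synthetic sounding: distance along the orbit track (km) vs XCO2 (ppm)\nrng = np.random.default_rng(42)\nx = np.linspace(-50, 50, 200)\ny = 405 + 0.01 * x + 1.5 * np.exp(-(x ** 2) / (2 * 8 ** 2)) + rng.normal(0, 0.3, x.size)\n\n# Initial guess: amplitude, center, sigma, slope, intercept\np0 = [1.0, 0.0, 10.0, 0.0, 405.0]\nparams, _ = curve_fit(gaussian_peak, x, y, p0=p0)\namplitude, center, sigma = params[:3]\nprint(f""peak of {amplitude:.2f} ppm at {center:.1f} km (sigma = {sigma:.1f} km)"")\n```\n\nStep 2 would then discard fits whose amplitude, width or covariance fall outside plausible bounds.\n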
\n\n## Git directories structure\n* /dataset contains a sample of OCO-2 data and inventory data; _**Important**_: the full datasets are in a shared Open Stack Storage, not in the GitHub repository.\n* /notebooks contains the notebooks made by the team;\n* /pipeline contains the scripts used to process the NASA\'s data.\n* /oco2peak contains the modules\n\n**Warning**: the project uses NBDev, so the docs (including this README!) and the modules are generated from the notebooks. So you only have to edit the notebooks.\n\n## Open Stack directories structure\n\nWe do not store the original OCO-2 files from NASA.\n\n* /emissions/ contains all the potential sources of emissions: factories, power plants, cities...\n* /soudings/ contains CSVs of the raw features extracted from NASA NC4 files.\n* /peaks-detected/ contains all the peaks found in the satellite orbit data.\n* /peaks-detected-details/ contains one JSON file of the full data for each detected peak\n\n## Install\n\n### Python Package Only\nIf you are interested in using only our modules for your own project:\n`pip install oco2peak`\n\n### With Docker\n\n#### For use only\n`\ndocker-compose up\n`\n\nFront end on http://localhost:7901\n\n#### For dev\n\n`docker-compose -f docker-compose-dev.yml up`\n\n- Front end on http://localhost:7901\n- Jupyter Lab on http://localhost:7988\n\n### Dataset access\n\nYou need a config.json with a token for your OpenStack:\n```json\n{\n ""swift_storage"": {\n ""user"":""B..r"",\n ""key"":""ep..ca"",\n ""auth_url"":""https://auth.cloud.ovh.net/v3/"",\n ""tenant_name"":""8..8"",\n ""auth_version"":""3"",\n ""options"" : {\n ""region_name"": ""GRA""\n },\n ""base_url"" : ""https://storage.gra.cloud.ovh.net/v1/AUTH_2...d/oco2/""\n }\n}\n```\n\n```python\nconfig = \'../configs/config.json\'\ndatasets = Datasets(config)\ndatasets.get_files_urls(prefix=""/datasets/oco-2/peaks-and-invent/"", pattern=\'1908\')\n```\n\n\n\n\n [\'https://storage.gra.cloud.ovh.net/v1/AUTH_2aaacef8e88a4ca897bb93b984bd04dd/oco2//datasets/oco-2/peaks-and-invent/peaks_and_invent_1908.csv\']\n\n\n\n```python\ndatasets.get_files_urls(prefix=""/map/peaks_map/"", pattern=\'1908\')\n```\n\n\n\n\n [\'https://storage.gra.cloud.ovh.net/v1/AUTH_2aaacef8e88a4ca897bb93b984bd04dd/oco2//map/peaks_map/peaks_capture_map_1908.html\']\n\n\n\n### Upload a file\n\n```python\ndatasets.upload(mask=\'../*.md\', prefix=""/Trash/"",content_type=\'text/text\')\n```\n
\n\n## Build docs and modules\n\n`make all`\n\nOr if you are using Docker:\n\n`docker exec -it batch7_satellite_ges_oco2-dev_1 make all`\n\n\n# Process NASA Files\n\nIn `docker-compose-dev.yml`, change `source: /media/NAS-Divers/dev/datasets/` to the path to your NC4 files.\n\nThen run:\n`docker-compose -f docker-compose-dev.yml up`\n\nIn another terminal, run:\n\n```bash\ndocker exec -it batch7_satellite_ges_oco2-dev_1 /bin/bash\npython pipeline/01_extract_nc4_to_csv.py\npython pipeline/02_find_peak_in_all_files.py\npython pipeline/03_upload_json_to_the_cloud.py\n```\n'",,"2020/03/04, 10:28:31",1330,Apache-2.0,0,376,"2023/04/12, 06:09:38",2,40,40,1,196,2,0.0,0.6634304207119741,,,0,11,false,,false,false,,,https://github.com/dataforgoodfr,http://www.dataforgood.fr,France,,,https://avatars.githubusercontent.com/u/11797105?v=4,,, Global Carbon Budget,"An annual living data publication of carbon cycle sources and sinks, generated from multiple data sources and by multiple organisations and research groups.",openclimatedata,https://github.com/openclimatedata/global-carbon-budget.git,github,data-package,Emission Observation and Modeling,"2020/12/14, 08:56:47",56,0,5,false,Python,Open Climate Data,openclimatedata,"Python,Makefile",,"b'The Global Carbon Budget is an annual living data publication of carbon cycle\nsources and sinks, generated from multiple data sources and by multiple\norganisations and research groups.\n\nThis [Data Package](http://frictionlessdata.io/specs/data-package/) makes the data from the 2019 Global Carbon Budget and National Emissions [Excel files](https://www.icos-cp.eu/GCP/2019) v1.0 available as CSV files. For updates of the original data and further information refer to the\n[Global Carbon Budget](http://www.globalcarbonproject.org/carbonbudget/index.htm) website.\n
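\nAs a quick usage sketch (ours, not part of the Data Package tooling; the file path follows the listing below), the CSV files can be loaded from a local checkout with pandas:\n\n```python\nimport pandas as pd\n\n# Column layouts are described in the accompanying notes and methods documents.\nbudget = pd.read_csv(""data/global-carbon-budget.csv"")\nprint(budget.head())\n```\n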
\nMaintainer of this Data Package is Robert Gieseke ().\nSee below for [license information](#license)\n\n## Data\n\n### Global Carbon Budget\n\n[Notes and Methods](doc/global-carbon-budget.md)\n\n[global-carbon-budget.csv](data/global-carbon-budget.csv)\n\n\n### Fossil fuel and cement production emissions by fuel type\n\n[Notes and Methods](doc/fossil-fuel-cement.md)\n\n[fossil-fuel-cement.csv](data/fossil-fuel-cement.csv)\n\n[fossil-fuel-cement-per-capita.csv](data/fossil-fuel-cement-per-capita.csv)\n\n\n### Land-use change emissions\n\n[Notes and Model References](doc/land-use-change.md)\n\n[land-use-change.csv](data/land-use-change.csv)\n\n\n### Ocean CO2 sink (positive values represent a flux from the atmosphere to the ocean)\n\n[Notes and Model References](doc/ocean-sink.md)\n\n[ocean-sink.csv](data/ocean-sink.csv)\n\n\n### Terrestrial CO2 sink (positive values represent a flux from the atmosphere to the land)\n\n[Notes and Model References](doc/terrestrial-sink.md)\n\n[terrestrial-sink.csv](data/terrestrial-sink.csv)\n\n\n### Historical CO2 budget\n\n[Notes and References](doc/historical-budget.md)\n\n[historical-budget.csv](data/historical-budget.csv)\n\n\n### Territorial Emissions\n\n[Notes and Methods](doc/territorial-emissions.md)\n\n[territorial-emissions.csv](data/territorial-emissions.csv)\n\n\n### Consumption Emissions GCB\n\n[Notes and Methods](doc/consumption-emissions.md)\n\n[consumption-emissions.csv](data/consumption-emissions.csv)\n\n\n### Emissions Transfers GCB\n\n[Notes and Methods](doc/emissions-transfers.md)\n\n[emissions-transfers.csv](data/emissions-transfers.csv)\n\n\n\n### Country Definitions\n\nDetails of the geographical information corresponding to countries and regions used in this database for Consumption and Transfer emissions\n\n[country-definitions.csv](data/country-definitions.csv)\n\n## Preparation\n\nTo update or regenerate the CSV files the following steps need to be run:\n\n```\nmake clean\n```\n\n```\nmake\n```\n\nTo validate the Data Package:\n```\nmake validate\n```\n\n\n## Notes\n\nThe *Global Carbon Budget* data is written to CSV files using the\naccuracy used for display in the original Excel files, or one digit more,\nassuming this to be the implied precision.\n\nThe *National Emissions* are written to CSV files with three significant digits,\nas this is the accuracy used for the CDIAC data in the Excel file, thus\nrounding the numbers derived from splitting up countries or from using trend data, as\nwith the BP emissions data.\n\nIf other accuracy is needed, adjust the processing scripts accordingly.\n\n## License\n\nThe Global Carbon Budget [data page](http://www.globalcarbonproject.org/carbonbudget/19/data.htm) states:\n\n> The use of data is conditional on citing the original data sources. Full details on how to cite the data are given at the top of each page. For research projects, if the data are essential to the work, or if an important result or conclusion depends on the data, co-authorship may need to be considered. The Global Carbon Project facilitates access to data to encourage its use and promote a good understanding of the carbon cycle. 
Respecting original data sources is key to help secure the support of data providers to enhance, maintain and update valuable data.\n\nThe primary reference for the full Global Carbon Budget 2019 is:\n\nGlobal Carbon Budget 2019, by Pierre Friedlingstein, Matthew W. Jones, Michael O\xe2\x80\x99Sullivan, Robbie M. Andrew, Judith Hauck, Glen P. Peters, Wouter Peters, Julia Pongratz, Stephen Sitch, Corinne Le Qu\xc3\xa9r\xc3\xa9, Dorothee C. E. Bakker, Josep G. Canadell, Philippe Ciais, Rob Jackson, Peter Anthoni, Leticia Barbero, Ana Bastos, Vladislav Bastrikov, Meike Becker, Laurent Bopp, Erik Buitenhuis, Naveen Chandra, Fr\xc3\xa9d\xc3\xa9ric Chevallier, Louise P. Chini, Kim I. Currie, Richard A. Feely, Marion Gehlen, Dennis Gilfillan, Thanos Gkritzalis, Daniel S. Goll, Nicolas Gruber, S\xc3\xb6ren Gutekunst, Ian Harris, Vanessa Haverd, Richard A. Houghton, George Hurtt, Tatiana Ilyina, Atul K. Jain, Emilie Joetzjer, Jed O. Kaplan, Etsushi Kato, Kees Klein Goldewijk, Jan Ivar Korsbakken, Peter Landsch\xc3\xbctzer, Siv K. Lauvset, Nathalie Lef\xc3\xa8vre, Andrew Lenton, Sebastian Lienert, Danica Lombardozzi, Gregg Marland, Patrick C. McGuire, Joe R. Melton, Nicolas Metzl, David R. Munro, Julia E. M. S. Nabel, Shin-Ichiro Nakaoka, Craig Neill, Abdirahman M. Omar, Tsuneo Ono, Anna Peregon, Denis Pierrot, Benjamin Poulter, Gregor Rehder, Laure Resplandy, Eddy Robertson, Christian R\xc3\xb6denbeck, Roland S\xc3\xa9f\xc3\xa9rian, J\xc3\xb6rg Schwinger, Naomi Smith, Pieter P. Tans, Hanqin Tian, Bronte Tilbrook, Francesco N Tubiello, Guido R. van der Werf, Andrew J. Wiltshire, and S\xc3\xb6nke Zaehle (2019), Earth System Science Data, 11, 1783-1838, 2019, \n\nOtherwise please refer as:\n\nGlobal Carbon Project. (2019). Supplemental data of Global Carbon Budget 2019 (Version 1.0) [Data set]. Global Carbon Project. https://doi.org/10.18160/gcp-2019\n\nor\n\nGlobal Carbon Project (2019) Carbon budget and trends 2019. published on 4 December 2019, along with any other original peer-reviewed papers and data sources as appropriate.\n\nSee also the [Global Carbon Budget Publications](http://www.globalcarbonproject.org/carbonbudget/19/publications.htm) page.\n\nThe source code in `scripts` and the metadata in this Data Package itself are released under a\n[CC0 Public Dedication License](https://creativecommons.org/publicdomain/zero/1.0/).\n'",",https://doi.org/10.5194/essd-11-1783-2019,https://doi.org/10.18160/gcp-2019\n\nor\n\nGlobal","2016/07/21, 14:29:32",2652,Apache-2.0,0,93,"2019/12/09, 17:36:06",1,2,6,0,1416,0,0.5,0.010752688172043001,,,0,2,false,,false,false,,,https://github.com/openclimatedata,https://openclimatedata.net,"Potsdam, Germany",,,https://avatars.githubusercontent.com/u/20420557?v=4,,, emissions-api,A solution that provides simple access to emissions data of climate-relevant gases.,emissions-api,https://github.com/emissions-api/emissions-api.git,github,"emissions-api,python,esa,sentinel-5,copernicus,remote-sensing,air-quality,citizen-science,hacktoberfest",Emission Observation and Modeling,"2023/03/01, 19:36:29",68,0,7,true,Python,Emissions API,emissions-api,"Python,Mako",https://emissions-api.org,"b'Emissions API\n=============\n\n.. image:: https://img.shields.io/travis/com/emissions-api/emissions-api?label=Docs\n :target: https://docs.emissions-api.org\n :alt: Documentation Status\n.. 
image:: https://github.com/emissions-api/emissions-api/actions/workflows/main.yml/badge.svg\n :target: https://github.com/emissions-api/emissions-api/actions/workflows/main.yml\n :alt: Integration Test\n\nThis is the main repository for the `Emissions API `_.\n\nIf you just want to use Emissions API as a service, take a look at our `API documentation `_\nor visit our `website `_ for additional information and examples.\n\nBelow you will find a short introduction to setting up the services in this repository for development.\n\nIf you want to take a deeper dive into this, you can take a look at the `documentation `_,\nvisit the `issues `_\nor take a look into the `libraries and tools `_ we created around this project.\n\nInstallation\n------------\n\nTo install the requirements, execute\n\n.. code-block:: bash\n\n pip install -r requirements.txt\n\nYou might have to explicitly deal with C dependencies like ``psycopg2`` yourself.\nOne way to do this is to use the corresponding system packages.\n\nAfter that, you can run the different services using\n\n* **preprocess**\\ : ``python -m emissionsapi.preprocess``\n* **autoupdater**\\ : ``python -m emissionsapi.autoupdater``\n* **web**\\ : ``python -m emissionsapi.web``\n\nConfiguration\n-------------\n\nEmissions API will look for configuration files in the following order:\n\n* ``./emissionsapi.yml``\n* ``~/emissionsapi.yml``\n* ``/etc/emissionsapi.yml``\n\nA configuration file template can be found at ``etc/emissionsapi.yml``.\nTo get started, just copy this to the main project directory and adjust the\nvalues if the defaults do not work for you.\n\nDatabase Setup\n--------------\n\nThis project uses a `PostgreSQL `_ database with the `PostGIS `_ extension.\n\nThere is a simple ``docker-compose.yml`` file to make it easier to set up a\ndatabase for development.\n'",,"2019/09/27, 11:33:23",1489,MIT,11,390,"2023/10/01, 04:14:06",22,443,494,35,24,10,0.0,0.739946380697051,,,0,11,false,,false,true,,,https://github.com/emissions-api,https://emissions-api.org,"Osnabrück, Germany",,,https://avatars.githubusercontent.com/u/47664099?v=4,,, eixport,An R package that provides functions to read emissions from VEIN and from other models in different formats and export the emissions into the appropriate format suitable to other models.,atmoschem,https://github.com/atmoschem/eixport.git,github,"wrf,emissions,exporting-emissions,atmospheric-models,atmospheric-science",Emission Observation and Modeling,"2023/09/27, 05:02:24",27,0,2,true,R,ATMOSCHEM,atmoschem,"R,TeX",https://atmoschem.github.io/eixport/,"b'\n\n\n# eixport \n\n[![Travis-CI 
Build\nStatus](https://travis-ci.org/atmoschem/eixport.svg?branch=master)](https://travis-ci.org/atmoschem/eixport)[![Build\nstatus](https://ci.appveyor.com/api/projects/status/frk36kmayf8yff70?svg=true)](https://ci.appveyor.com/project/Schuch666/eixport)\n[![Coverage\nStatus](https://img.shields.io/codecov/c/github/atmoschem/eixport/master.svg)](https://codecov.io/github/atmoschem/eixport?branch=master)\n[![DOI](https://zenodo.org/badge/106145968.svg)](https://zenodo.org/badge/latestdoi/106145968)\n[![CRAN_Status_Badge](http://www.r-pkg.org/badges/version/eixport)](http://cran.r-project.org/web/packages/eixport)\n[![CRAN\nDownloads](http://cranlogs.r-pkg.org/badges/grand-total/eixport?color=orange)](http://cran.r-project.org/package=eixport)\n[![DOI](http://joss.theoj.org/papers/10.21105/joss.00607/status.svg)](https://doi.org/10.21105/joss.00607)\n[![cran\nchecks](https://cranchecks.info/badges/worst/eixport)](https://cran.r-project.org/web/checks/check_results_eixport.html)\n[![Github\nStars](https://img.shields.io/github/stars/atmoschem/eixport.svg?style=social&label=Github)](https://github.com/atmoschem/eixport)\n\n## Exporting emissions to atmospheric models, eixport: 0.6.0\n\nEmissions are mass releases that affect the atmosphere in complex ways: not only\nphysically, but also through impacts on human health, ecosystems, the economy,\netc.\n\nThere are several models whose inputs are emissions, such as\n[R-Line](https://www.cmascenter.org/r-line/) or\n[WRF-Chem](https://ruc.noaa.gov/wrf/wrf-chem/). This R package provides\nfunctions to read emissions from\n[VEIN](https://github.com/ibarraespinosa/vein) and from other models in\ndifferent formats and export the emissions into the appropriate format\nsuitable to other models.\n\n## Install\n\nTo install the [CRAN](https://CRAN.R-project.org/package=eixport)\nversion:\n\n``` r\ninstall.packages(""eixport"")\n```\n\nTo install the development version:\n\n``` r\ndevtools::install_github(""atmoschem/eixport"")\n```\n\n## Some functions:\n\n- [get_edgar](https://atmoschem.github.io/eixport/reference/get_edgar.html):\n Download EDGAR emissions data.\n- [to_rline](https://atmoschem.github.io/eixport/reference/to_rline.html):\n Export emissions to other formats\n- [to_wrf](https://atmoschem.github.io/eixport/reference/to_wrf.html):\n Combine total/spatial/temporal/split and write emission to file\n- [to_brams_spm](https://atmoschem.github.io/eixport/reference/to_brams_spm.html):\n Inputs for SPM BRAMS\n- [wrf_profile](https://atmoschem.github.io/eixport/reference/wrf_profile.html):\n Create spatial profile for WRF-Chem\n- [wrf_create](https://atmoschem.github.io/eixport/reference/wrf_create.html):\n Create emission files for WRF-Chem\n- [wrf_plot](https://atmoschem.github.io/eixport/reference/wrf_plot.html):\n Simple but useful plot\n- [wrf_get](https://atmoschem.github.io/eixport/reference/wrf_get.html):\n Read variables\n- [wrf_put](https://atmoschem.github.io/eixport/reference/wrf_put.html):\n Write variables\n- [to_as4wrf](https://atmoschem.github.io/eixport/reference/to_as4wrf.html):\n Create WRF-Chem inputs using the NCL script AS4WRF.ncl.\n- [to_munich](https://atmoschem.github.io/eixport/reference/to_munich.html):\n Generate inputs for the MUNICH model.\n\n### Summary\n\n``` r\nlibrary(eixport)\n#> The legacy packages maptools, rgdal, and rgeos, underpinning the sp package,\n#> which was just loaded, will retire in October 2023.\n#> Please refer to R-spatial evolution reports for details, especially\n#> https://r-spatial.org/r/2023/05/15/evolution4.html.\n#> 
It may be desirable to make the sf package available;\n#> package maintainers should consider adding sf to Suggests:.\n#> The sp package is now running under evolution status 2\n#> (status 2 uses the sf package in place of rgdal)\nfile = paste0(system.file(""extdata"", package = ""eixport""),""/wrfinput_d02"")\nwrf_summary(file = file)\n#> | | | 0% | |======================= | 33% | |=============================================== | 67% | |======================================================================| 100%\n#> Min. 1st Qu. Median Mean 3rd Qu.\n#> Times 1.312178e+09 1.312178e+09 1.312178e+09 1.312178e+09 1.312178e+09\n#> XLAT -2.438538e+01 -2.405025e+01 -2.370471e+01 -2.370379e+01 -2.335773e+01\n#> XLONG -4.742899e+01 -4.696930e+01 -4.650305e+01 -4.650304e+01 -4.603427e+01\n#> Max. sum\n#> Times 1.312178e+09 NA\n#> XLAT -2.301877e+01 -76160.28\n#> XLONG -4.558643e+01 -149414.28\n```\n\n### Attributes as data.frame\n\n``` r\nfile = paste0(system.file(""extdata"", package = ""eixport""),""/wrfinput_d02"")\nf <- wrf_meta(file)\nnames(f)\n#> [1] ""global"" ""vars""\nhead(f$global)\n#> att vars\n#> 1 TITLE OUTPUT FROM REAL_EM V3.9.1.1 PREPROCESSOR\n#> 2 START_DATE 2011-08-01_00:00:00\n#> 3 SIMULATION_START_DATE 2011-08-01_00:00:00\n#> 4 WEST-EAST_GRID_DIMENSION 64\n#> 5 SOUTH-NORTH_GRID_DIMENSION 52\n#> 6 BOTTOM-TOP_GRID_DIMENSION 35\nhead(f$vars)\n#> vars MemoryOrder description units stagger FieldType\n#> 1 XLAT XY LATITUDE, SOUTH IS NEGATIVE degree north 104\n#> 2 XLONG XY LONGITUDE, WEST IS NEGATIVE degree east 104\n```\n\n## Paper on Journal of Open Source Software (JOSS)\n\n\n\n @article{eixport,\n title = {eixport: An R package to export emissions to atmospheric models},\n journal = {The Journal of Open Source Software},\n author = {Sergio Ibarra-Espinosa and Daniel Schuch and Edmilson {Dias de Freitas}},\n year = {2018},\n doi = {10.21105/joss.00607},\n url = {http://joss.theoj.org/papers/10.21105/joss.00607},\n }\n\n\n\n\n## Contributing\n\nPlease read\n[this](https://github.com/atmoschem/eixport/blob/master/CONTRIBUTING.md)\nguide. Contributions of all sorts are welcome; issues and pull requests\nare the preferred ways of sharing them. When contributing pull requests,\nplease follow [Google's R Style\nGuide](https://google.github.io/styleguide/Rguide.xml). 
This project is\nreleased with a [Contributor Code of\nConduct](https://github.com/atmoschem/eixport/blob/master/CODE_OF_CONDUCT.md).\nBy participating in this project you agree to abide by its terms.\n'",",https://zenodo.org/badge/latestdoi/106145968,https://doi.org/10.21105/joss.00607,https://doi.org/10.21105/joss.00607","2017/10/08, 02:58:22",2208,CUSTOM,15,452,"2023/06/12, 19:14:37",3,6,73,6,135,0,0.0,0.43410852713178294,"2022/04/14, 14:45:29",v0.5.2,0,6,false,,true,true,,,https://github.com/atmoschem,,,,,https://avatars.githubusercontent.com/u/35502286?v=4,,, EmissV,This package provides some methods to create emissions (with a focus on vehicular emissions) for use in numeric air quality models such as WRF-Chem.,atmoschem,https://github.com/atmoschem/EmissV.git,github,"atmos,atmospheric-chemistry,atmospheric-modelling,atmospheric-models,atmospheric-science,emissions,wrf-chem",Emission Observation and Modeling,"2023/07/09, 20:05:27",32,0,5,true,R,ATMOSCHEM,atmoschem,"R,TeX",https://atmoschem.github.io/EmissV/,"b'# EmissV\n\n[![windows Build](https://ci.appveyor.com/api/projects/status/guuaaklaw6uyn4lj?svg=true)](https://ci.appveyor.com/project/Schuch666/emissv) \n[![Coverage Status](https://img.shields.io/codecov/c/github/atmoschem/EmissV/master.svg)](https://codecov.io/github/atmoschem/EmissV?branch=master) \n[![R build status](https://github.com/atmoschem/EmissV/workflows/R-CMD-check/badge.svg)](https://github.com/atmoschem/EmissV/actions)\n[![Licence:MIT](https://img.shields.io/github/license/hyperium/hyper.svg)](https://opensource.org/licenses/MIT) [![CRAN_Status_Badge](http://www.r-pkg.org/badges/version/EmissV)](http://cran.r-project.org/web/packages/EmissV)\n[![cran checks](https://badges.cranchecks.info/worst/EmissV.svg)](https://cran.r-project.org/web/checks/check_results_EmissV.html)\n[![metacran downloads](https://cranlogs.r-pkg.org/badges/grand-total/EmissV)](https://cran.r-project.org/package=EmissV)\n[![metacran downloads](https://cranlogs.r-pkg.org/badges/EmissV)](https://cran.r-project.org/package=EmissV)\n[![DOI](https://zenodo.org/badge/DOI/10.5281/zenodo.1451027.svg)](https://doi.org/10.5281/zenodo.1451027) [![status](http://joss.theoj.org/papers/071d027997ac93d8992099cb5010a044/status.svg)](http://joss.theoj.org/papers/071d027997ac93d8992099cb5010a044)\n \n\n![hex_logo](https://raw.githubusercontent.com/atmoschem/EmissV/master/hex_logo_true.png)\n\nThis package provides tools to create emissions (with a focus on vehicular emissions) for use in numeric air quality models such as [WRF-Chem](https://ruc.noaa.gov/wrf/wrf-chem/).\n\n## Installation\n\n### System dependencies \n\nEmissV imports functions from [ncdf4](http://cran.r-project.org/package=ncdf4) for reading model information, [raster](http://cran.r-project.org/package=raster) and [sf](https://cran.r-project.org/web/packages/sf/index.html) to process gridded/geographic information, and [units](https://github.com/edzer/units/). 
These packages need some additional libraries: \n\n### On Ubuntu\nThe following steps are required for installation on Ubuntu:\n```bash\n sudo add-apt-repository ppa:ubuntugis/ubuntugis-unstable --yes\n sudo apt-get --yes --force-yes update -qq\n # netcdf dependencies:\n sudo apt-get install --yes libnetcdf-dev netcdf-bin\n # units/udunits2 dependency:\n sudo apt-get install --yes libudunits2-dev\n # sf dependencies (without libudunits2-dev):\n sudo apt-get install --yes libgdal-dev libgeos-dev libproj-dev\n```\n\n### On Fedora\nThe following steps are required for installation on Fedora:\n```bash\n sudo dnf update\n # netcdf dependencies:\n sudo yum install netcdf-devel\n # units/udunits2 dependency:\n sudo yum install udunits2-devel\n # sf dependencies (without libudunits2-dev):\n sudo yum install gdal-devel proj-devel proj-epsg proj-nad geos-devel\n```\n\n### On Windows\nNo additional steps are needed for Windows installation.\n\nDetailed instructions can be found at the [netcdf](https://www.unidata.ucar.edu/software/netcdf/), [libudunits2-dev](https://r-quantities.github.io/units/) and [sf](https://r-spatial.github.io/sf/#installing) developer pages.\n\n### With [conda](https://docs.conda.io/projects/conda/en/latest/user-guide/install/index.html) (miniconda / anaconda)\n\nFirst create a new environment called rspatial *(or a better name)*:\n```bash\n conda create -n rspatial -y\n conda activate rspatial\n```\n\nand then install some prerequisites:\n```bash\n conda install -c conda-forge r-sf -y\n conda install -c conda-forge r-rgdal -y\n conda install -c conda-forge r-lwgeom -y\n conda install -c conda-forge r-raster -y\n```\n\n### Package installation\nTo install the *[CRAN](https://cran.r-project.org/package=EmissV) version (0.665.5.2)*:\n\n```r\ninstall.packages(""EmissV"")\n```\n\nTo install the *development version (0.665.5.3)* using [remotes](https://CRAN.R-project.org/package=remotes):\n```r\nrequire(""remotes"")\nremotes::install_github(""atmoschem/EmissV"")\n```\nor to install the *development version (0.665.5.3)* using [devtools](https://CRAN.R-project.org/package=devtools):\n```r\nrequire(""devtools"")\ndevtools::install_github(""atmoschem/EmissV"")\n```\n\n## Using `EmissV` with EDGAR 5.0 emissions\n\n`EmissV` can be used to process emissions of [atmospheric pollutants](https://en.wikipedia.org/wiki/Air_pollution#Sources) and [greenhouse gases](https://en.wikipedia.org/wiki/Greenhouse_gas) from inventories such as [EDGAR](https://data.europa.eu/doi/10.2904/JRC_DATASET_EDGAR), [RCP](https://tntcat.iiasa.ac.at/RcpDb/dsd?Action=htmlpage&page=welcome#), [GAINS](https://iiasa.ac.at/web/home/research/researchPrograms/air/GAINS.html) and other datasets in [NetCDF](https://www.unidata.ucar.edu/software/netcdf/) format; the [GEIA-ACCENT](http://accent.aero.jussieu.fr/database_table_inventories.php) and [ECCAD](https://eccad3.sedoo.fr/) emission data portals make some of these inventories available. You can check the supported formats with:\n\n```r\nEmissV::read()\n```\n\nGenerating a simple emission is a straightforward four-step process:\n\n```r\nlibrary(EmissV)\n### 1. 
download the EDGAR NetCDF using the function get_edgar from the eixport R-package \n### or from the http://jeodpp.jrc.ec.europa.eu/ftp/jrc-opendata/EDGAR/datasets/v50_AP/ \n### EDGAR 5.0 website and unzip inside a temporary directory\n# create the temporary directory to download the data\ndir.create(file.path(tempdir(), ""EDGAR""))\n# download the total emissions of NOx from EDGAR v50_AP for 2015\neixport::get_edgar(dataset = ""v50_AP"",\n pol = \'NOx\',\n sector = ""TOTALS"",\n year = 2015,\n type = \'nc\', ask = FALSE, copyright = FALSE,\n destpath = file.path(tempdir(), ""EDGAR""))\n# unzip the file\nunzip(zipfile = paste0(file.path(tempdir(), ""EDGAR""),\'/v50_NOx_2015.0.1x0.1.zip\'),\n exdir = paste0(file.path(tempdir(), ""EDGAR"")))\n\n### 2. read the emissions (using the spec argument to split NOx into NO and NO2)\nNOx <- read(paste0(file.path(tempdir(), ""EDGAR""),\'/v50_NOx_2015.0.1x0.1.nc\'),\n version = \'EDGAR\',\n spec = c(E_NO = 0.9 , # optional, 90% of NOx assigned to NO\n E_NO2 = 0.1 )) # optional, 10% of NOx assigned to NO2\n\n### 3. get the information from a WRF grid from an initial conditions file (wrfinput)\ng <- gridInfo(paste(system.file(""extdata"", package = ""EmissV""),""/wrfinput_d01"",sep=""""))\n\n### 4. calculate the emissions for grid g\nNO <- emission(grid = g, inventory = NOx$E_NO, pol = ""NO"", mm = 30.01, plot = T)\nNO2 <- emission(grid = g, inventory = NOx$E_NO2,pol = ""NO2"",mm = 46.0055, plot = T)\n```\nThe next step is to save the emissions in an emission file; the following example shows how to save emissions using the [eixport](https://github.com/atmoschem/eixport) R-package:\n\n```r\nlibrary(eixport)\n### create a temporary folder for emissions\ndir.create(file.path(tempdir(), ""EMISSION""))\n\n### create the emission file\nwrf_create(wrfinput_dir = system.file(""extdata"", package = ""EmissV""),\n wrfchemi_dir = file.path(tempdir(), ""EMISSION""),\n domains = 1)\n \n### get the file path of the emission file\nemis_file <- list.files(path = file.path(tempdir(), ""EMISSION""),\n pattern = ""wrfchemi_d01"",\n full.names = TRUE)\n\n### save the emission\nwrf_put(NO, file = emis_file, name = ""E_NO"", verbose = TRUE)\nwrf_put(NO2, file = emis_file, name = ""E_NO2"", verbose = TRUE)\n```\n\nSee [wrf_create](https://atmoschem.github.io/eixport/reference/wrf_create.html), [wrf_put](https://atmoschem.github.io/eixport/reference/wrf_put.html) and [to_wrf](https://atmoschem.github.io/eixport/reference/to_wrf.html) for more information and to customize them for your application.\n\n**NOTE**: The emission file must be compatible with the WRF-Chem options (many arguments are the same as in the namelist.input from WRF); check the [eixport](https://atmoschem.github.io/eixport/reference/wrf_create.html) R-package documentation and the [WRF-Chem manual](https://ruc.noaa.gov/wrf/wrf-chem/Users_guide.pdf) for more information.\n\nOther R packages to write NetCDF, such as [ncdf4](https://CRAN.R-project.org/package=ncdf4), [RNetCDF](https://CRAN.R-project.org/package=RNetCDF) and [tidync](https://CRAN.R-project.org/package=tidync), are available on [CRAN](https://cran.r-project.org/); a quick Python-side check of the file written above is sketched below. 
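As a cross-language sanity check, the `wrfchemi` file written by `wrf_put` can also be opened from Python. The following is a minimal, illustrative sketch (not part of eixport), assuming `xarray` and `netCDF4` are installed; the filename is hypothetical and should be replaced with the value of `emis_file` from the R code above:

```python
# Illustrative only: cross-check the emission file written by wrf_put() in R.
# The filename below is hypothetical; use the path stored in `emis_file`.
import xarray as xr

ds = xr.open_dataset("wrfchemi_d01_2011-08-01_00:00:00")
e_no = ds["E_NO"]         # variable name matches wrf_put(name = "E_NO")
print(e_no.dims)          # dimension names of the emission field
print(float(e_no.sum()))  # quick sanity total over all cells and times
ds.close()
```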
Tools in other languages, such as the [NCL language](https://www.ncl.ucar.edu/), the Python package [wrf-python](https://wrf-python.readthedocs.io/en/latest/), and the preprocessor [anthro_emiss](https://www2.acom.ucar.edu/wrf-chem/wrf-chem-tools-community), are alternatives for writing NetCDF files.\n\n## Using `EmissV` to estimate vehicular emissions\n\nIn EmissV the vehicular emissions are estimated by a top-down approach, i.e. the emissions are calculated using the statistical description of the fleet at the available level (national, state, city, etc.). The following steps show an example workflow for calculating vehicular emissions: the total emissions are estimated first and then distributed spatially and temporally.\n\n**I.** Total: the emission of pollutants is estimated from the fleet, use, and emission factors for the area of interest (cities, states, countries, etc.).\n\n``` r\nlibrary(EmissV)\n\nfleet <- vehicles(example = T)\n# using an example of vehicles (DETRAN 2016 data and SP vehicle distribution):\n# Category Type Fuel Use SP ...\n# Light Duty Vehicles Gasohol LDV_E25 LDV E25 41 km/d 11624342 ...\n# Light Duty Vehicles Ethanol LDV_E100 LDV E100 41 km/d 874627 ...\n# Light Duty Vehicles Flex LDV_F LDV FLEX 41 km/d 9845022 ...\n# Diesel Trucks TRUCKS_B5 TRUCKS B5 110 km/d 710634 ...\n# Diesel Urban Busses CBUS_B5 BUS B5 165 km/d 792630 ...\n# Diesel Intercity Busses MBUS_B5 BUS B5 165 km/d 21865 ...\n# Gasohol Motorcycles MOTO_E25 MOTO E25 140 km/d 3227921 ...\n# Flex Motorcycles MOTO_F MOTO FLEX 140 km/d 235056 ...\n\nfleet <- fleet[,c(-6,-8,-9)] # dropping RJ, PR and SC\n\nEF <- emissionFactor(example = T)\n# using an example emission factor (values calculated from CETESB 2015):\n# CO PM\n# Light duty Vehicles Gasohol 1.75 g/km 0.0013 g/km\n# Light Duty Vehicles Ethanol 10.04 g/km 0.0000 g/km\n# Light Duty Vehicles Flex 0.39 g/km 0.0010 g/km\n# Diesel trucks 0.45 g/km 0.0612 g/km\n# Diesel urban busses 0.77 g/km 0.1052 g/km\n# Diesel intercity busses 1.48 g/km 0.1693 g/km\n# Gasohol motorcycles 1.61 g/km 0.0000 g/km\n# Flex motorcycles 0.75 g/km 0.0000 g/km\n\nTOTAL <- totalEmission(fleet,EF,pol = c(""CO""),verbose = T)\n# Total of CO : 1128297.0993334 t year-1\n```\n\n**II.** Spatial distribution: the package has functions to read information from tables, georeferenced images (tiff), shapefiles (shp), OpenStreetMap data (osm), and global inventories in NetCDF format (nc) to calculate point, line and area sources.\n\n``` r\nraster <- raster::raster(paste(system.file(""extdata"", package = ""EmissV""),\n ""/dmsp.tiff"",sep=""""))\n\ngrid <- gridInfo(paste(system.file(""extdata"", package = ""EmissV""),\n ""/wrfinput_d02"",sep=""""))\n# Grid information from: .../EmissV/extdata/wrfinput_d02\n\nshape <- raster::shapefile(paste(system.file(""extdata"", package = ""EmissV""),\n ""/BR.shp"",sep=""""),verbose = F)[12,1]\nMinas_Gerais <- areaSource(shape,raster,grid,name = ""Minas Gerais"")\n# processing Minas Gerais area ...\n# fraction of Minas Gerais area inside the domain = 0.0145921494236101\n\nshape <- raster::shapefile(paste(system.file(""extdata"", package = ""EmissV""),\n ""/BR.shp"",sep=""""),verbose = F)[22,1]\nSao_Paulo <- areaSource(shape,raster,grid,name = ""Sao Paulo"")\n# processing Sao Paulo area ...\n# fraction of Sao Paulo area inside the domain = 0.474658563750987\n\nsp::spplot(raster::merge(drop_units(TOTAL$CO[[1]]) * Sao_Paulo, \n drop_units(TOTAL$CO[[2]]) * Minas_Gerais),\n scales = list(draw=TRUE),ylab=""Lat"",xlab=""Lon"",\n main=list(label=""Emissions of CO 
[g/d]""),\n col.regions = c(""#031638"",""#001E48"",""#002756"",""#003062"",\n ""#003A6E"",""#004579"",""#005084"",""#005C8E"",\n ""#006897"",""#0074A1"",""#0081AA"",""#008FB3"",\n ""#009EBD"",""#00AFC8"",""#00C2D6"",""#00E3F0""))\n```\n![*Figure 1* - Emissions of CO using nocturnal lights.](https://raw.githubusercontent.com/atmoschem/EmissV/master/CO_all.png)\n\n**III.** Emission calculation: calculates the final emission from all the different sources and converts it to model units and resolution.\n``` r\nCO_emissions <- emission(total = TOTAL,\n pol = ""CO"",\n area = list(SP = Sao_Paulo, MG = Minas_Gerais),\n grid = grid,\n mm = 28, \n plot = T)\n# calculating emissions for CO using molar mass = 28 ...\n```\n![*Figure 2* - CO emissions ready for use in an air quality model.](https://raw.githubusercontent.com/atmoschem/EmissV/master/CO_final.png)\n\n**IV.** Temporal distribution: the package has a set of hourly profiles that represent the mean activity for each day of the week, calculated from traffic counts at toll stations located in São Paulo city.\n``` r\ndata(perfil)\nnames(perfil)\n```\n\nThe package has additional functions for reading NetCDF data, creating line and point sources (with plume rise), and estimating the total emissions of volatile organic compounds from exhaust (through the exhaust pipe), liquid (crankcase and evaporative) and vapor (fuel transfer operations) sources.\n\nFunctions:\n\n- [read](https://atmoschem.github.io/EmissV/reference/read.html): read global inventories in NetCDF format\n- [vehicles](https://atmoschem.github.io/EmissV/reference/vehicles.html): tool to set up a vehicle data.table\n- [emissionFactor](https://atmoschem.github.io/EmissV/reference/emissionFactor.html): tool to set up an emission factors data.table\n- [gridInfo](https://atmoschem.github.io/EmissV/reference/gridInfo.html): read grid information from a NetCDF file\n- [pointSource](https://atmoschem.github.io/EmissV/reference/pointSource.html): emissions from point sources\n- [plumeRise](https://atmoschem.github.io/EmissV/reference/plumeRise.html): calculate plume rise\n- [rasterSource](https://atmoschem.github.io/EmissV/reference/rasterSource.html): distribution of emissions by a georeferenced image\n- [lineSource](https://atmoschem.github.io/EmissV/reference/lineSource.html): distribution of emissions by line vectors\n- [areaSource](https://atmoschem.github.io/EmissV/reference/areaSource.html): distribution of emissions by region\n- [totalEmission](https://atmoschem.github.io/EmissV/reference/totalEmission.html): total emissions\n- [emission](https://atmoschem.github.io/EmissV/reference/emission.html): emissions to atmospheric models\n- [speciation](https://atmoschem.github.io/EmissV/reference/speciation.html): speciation of emissions into different compounds\n\nSample datasets:\n\n- [Species](https://atmoschem.github.io/EmissV/reference/species.html): species mapping tables\n- [Perfil](https://atmoschem.github.io/EmissV/reference/perfil.html): vehicle counting profile for vehicular activity\n- Sample of an image of persistent lights of the Defense Meteorological Satellite Program (DMSP)\n- CETESB 2015 emission factors as ```emissionFactor(example=T)```\n- DETRAN 2016 data and SP vehicle distribution as ```vehicles(example=T)```\n- Shapefiles for Brazil states\n\n\n### Contributing\n\nBug reports, suggestions, and code contributions are all welcome. Please see [CONTRIBUTING.md](https://github.com/atmoschem/EmissV/blob/master/CONTRIBUTING.md) for details. 
Note that this project adopts the [Contributor Code of Conduct](https://github.com/atmoschem/EmissV/blob/master/CONDUCT.md) and by participating in this project you agree to abide by its terms.\n\n\n### License\n\nEmissV is published under the terms of the [MIT License](https://opensource.org/licenses/MIT). Copyright [(c)](https://raw.githubusercontent.com/atmoschem/emissv/master/LICENSE) 2018 Daniel Schuch.\n'",",https://doi.org/10.5281/zenodo.1451027","2017/10/12, 04:55:38",2204,CUSTOM,29,548,"2023/03/27, 15:44:18",2,2,28,1,212,0,0.0,0.0331858407079646,"2023/03/27, 15:34:58",Tinybones,0,3,false,,false,true,,,https://github.com/atmoschem,,,,,https://avatars.githubusercontent.com/u/35502286?v=4,,, vein,An R package to estimate Vehicular Emissions INventories.,ibarraespinosa,https://gitlab.com/ibarraespinosa/vein,gitlab,,Emission Observation and Modeling,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,, The Community Emissions Data System,Produces consistent estimates of global air emissions species over the industrial era (1750 - present).,JGCRI,https://github.com/JGCRI/CEDS.git,github,"emissions,data",Emission Observation and Modeling,"2022/10/19, 22:34:37",72,0,16,false,R,Joint Global Change Research Institute,JGCRI,"R,Makefile,Shell,Batchfile",http://www.globalchange.umd.edu/ceds/,"b'# CEDS\nThe Community Emissions Data System (CEDS) produces consistent estimates of global air emissions species (BC, CO, CO2, NH3, NMVOC, NOx, OC, SO2) over the industrial era (1750 - present) along with CH4 and N2O over recent decades. The system is written in R and uses open-source data (with the exception of the IEA energy statistics, which must be purchased from IEA). CEDS is publicly available through an [Open Source License](#license-section).\n\n**April 2021 Release:** April 21, 2021 (v\\_2021\\_04\\_21)\n\nThis release updates emissions for four isos: Australia, Canada, South Korea, and Taiwan, as shown [in Figure S9 here](./documentation/Version_comparison_figures_v_2021_04_21_vs_v_2021_02_05.pdf). Global trends are similar to the v\\_2021\\_02\\_05 release. Global gridded emission data have also been produced with updated spatial distributions for most sectors.\n\n* See the [release notes](https://github.com/JGCRI/CEDS/wiki/Release-Notes) for a summary of changes.\n* [Graphs of emission differences](./documentation/Version_comparison_figures_v_2021_04_21_vs_v_2016_07_16(CMIP6).pdf) between this version and the CEDS CMIP6 data release documented in Hoesly et al (2018a). \n* [Graphs of emission differences](./documentation/Version_comparison_figures_v_2021_04_21_vs_v_2019_12_23.pdf) between this version and the previous December 2019 CEDS data release. \n* Emissions by country and sector, archived [here](http://doi.org/10.5281/zenodo.4741285).\n* Gridded emissions in the same format as the CMIP6 data release are available at [PNNL DataHub](https://data.pnnl.gov/dataset/CEDS-4-21-21).\n\n\n**Feb 2021 Release:** February 05, 2021 (v\\_2021\\_02\\_05)\n\nThis data and code release extends the emissions time series to 2019 and updates driver and emissions data throughout. This version builds on the extension of the CEDS system to 2017 described in [McDuffie et al. 2020](https://essd.copernicus.org/preprints/essd-2020-103/). 
Major features:\n\n* Emissions estimates to 2019\n* Updated default data from GAINS and EDGAR\n* Updates to country inventories used for scaling\n* Improved BC/OC emission time series\n* Improved consistency over time\n* Updated code and driver data\n\nFor details on this release see:\n\n* The [release notes](https://github.com/JGCRI/CEDS/wiki/Release-Notes) for a summary of changes.\n* [Graphs of emission differences](./documentation/Version_comparison_figures_v_2021_02_05_vs_v_2016_07_16(CMIP6).pdf) between this version and the CEDS CMIP6 data release documented in Hoesly et al (2018a). \n* [Graphs of emission differences](./documentation/Version_comparison_figures_v_2021_02_05_vs_v_2019_12_23.pdf) between this version and the previous December 2019 CEDS data release. \n* Emissions by country and sector, archived [here](http://doi.org/10.5281/zenodo.4509372).\n\nWe encourage comments on this data. The best way to comment on the data is through the [CEDS Issues](https://github.com/JGCRI/CEDS/issues) page.\n\n_Gridded data corresponding to this release is in production and will be released shortly._\n\n_A journal paper describing this dataset is in preparation. A notice will be posted here and sent to the CEDS listserv when this is available._\n\n***\n\nDocumentation of CEDS assumptions and system operation, including a user guide, is available at the [CEDS project wiki](https://github.com/JGCRI/CEDS/wiki) and in the journal papers noted below. \n\nCurrent issues with the data or system are documented in the [CEDS Issues](https://github.com/JGCRI/CEDS/issues) system in this GitHub repository. Users can submit issues using this system. These can include anomalies found in either the aggregate or gridded emissions data. Please use an appropriate tag for any submitted issues. Note that by default only unresolved issues are shown. All issues, including resolved issues, can be viewed by removing the ""is:open"" filter. *Issues relevant for CMIP6 data releases are tagged with a “CMIP6” label (note that issues will be closed when resolved in subsequent CEDS data releases, but are still available for viewing).*\n\nFurther information can also be found at the [project web site](https://www.pnnl.gov/projects/ceds), including a link to a page that provides details for obtaining gridded emission datasets produced by this project for use in CMIP6. You can also sign up for data release announcements from the CEDSinfo listserv following the instructions on the [project web site](https://www.pnnl.gov/projects/ceds).\n\nIf you plan to use the CEDS data system for a research project, you are encouraged to contact [Steve Smith](mailto:ssmith@pnnl.gov) so that we can coordinate with any on-going work on the CEDS system and make sure we are not duplicating effort. CEDS is research software, and we will be happy to help and make sure that you are able to make the best possible use of this system.\n\nUsers should use the most recent version of this repository, which will include maintenance updates to address documentation or usability issues. Major changes that alter emission estimates or system structure and use will be noted in the [release notes](https://github.com/JGCRI/CEDS/wiki/Release-Notes).\n\nCEDS has only been possible through the participation of many collaborators. Our **collaboration policy** is that collaborators who contribute unpublished data used in CEDS updates will be included as co-authors on the journal paper that describes the next CEDS major release. 
We particularly encourage contributions of updated emission information from countries or sectors not well represented in the data currently used in CEDS.\n\n# Data Reference\n\nReference for [this data version](https://github.com/JGCRI/CEDS/wiki/Release-Notes):\nO\'Rourke, P. R., Smith, S. J., Mott, A., Ahsan, H., McDuffie, E. E., Crippa, M., Klimont, S., McDonald, B., Wang, Z., Nicholson, M. B., Feng, L., and Hoesly, R. M. (2021, February 05). CEDS v-2021-02-05 Emission Data 1975-2019 (Version Feb-05-2021). [Zenodo. http://doi.org/10.5281/zenodo.4509372](http://doi.org/10.5281/zenodo.4509372).\n\n# Journal Papers\n[Hoesly et al, Historical (1750–2014) anthropogenic emissions of reactive gases and aerosols from the Community Emissions Data System (CEDS).](https://www.geosci-model-dev.net/11/369/2018/gmd-11-369-2018.html) _Geosci. Model Dev._ 11, 369-408, 2018a.\n\n_Note that the paper zip file supplement contains annual emissions estimates by country and sector for the July 26, 2016 data version. The most recent data is available from the links above._\n\n[Hoesly et al, Informing energy consumption uncertainty: an analysis of energy data revisions.](https://iopscience.iop.org/article/10.1088/1748-9326/aaebc3/meta) _Environ. Res. Lett._ 13 124023, 2018b.\n\n[Feng et al, The generation of gridded emissions data for CMIP6.](https://gmd.copernicus.org/articles/13/461/2020/) _Geosci. Model Dev._ 13, 461–482, 2020.\n\n# License\nCopyright © 2017, Battelle Memorial Institute\nAll rights reserved.\n\n1.\tBattelle Memorial Institute (hereinafter Battelle) hereby grants permission to any person or entity lawfully obtaining a copy of this software and associated documentation files (hereinafter “the Software”) to redistribute and use the Software in source and binary forms, with or without modification. Such person or entity may use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and may permit others to do so, subject to the following conditions:\n\n * Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimers. \n * Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution. \n * Other than as used herein, neither the name Battelle Memorial Institute or Battelle may be used in any form whatsoever without the express written consent of Battelle.\n\n2.\tTHIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS ""AS IS"" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL BATTELLE OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.\n'",",http://doi.org/10.5281/zenodo.4741285,http://doi.org/10.5281/zenodo.4509372,http://doi.org/10.5281/zenodo.4509372,http://doi.org/10.5281/zenodo.4509372","2017/09/07, 21:53:39",2239,CUSTOM,0,1760,"2022/10/19, 22:34:44",11,13,35,0,371,0,0.0,0.6544401544401545,"2021/05/05, 22:21:43",2021_4_21_Release,0,13,false,,false,false,,,https://github.com/JGCRI,https://www.pnnl.gov/projects/jgcri,"College Park, MD, USA",,,https://avatars.githubusercontent.com/u/8431983?v=4,,, national-climate-plans,"Intended Nationally Determined Contributions as provided in the UNFCCC registries, containing only the main document and using the English version if multiple are available.",openclimatedata,https://github.com/openclimatedata/national-climate-plans.git,github,data-package,Emission Observation and Modeling,"2022/06/07, 12:35:02",10,0,1,false,HTML,Open Climate Data,openclimatedata,"HTML,Python,Makefile",,"b'# ARCHIVED\n\nThis repository is no longer updated. Please use the\n[NDC](https://github.com/openclimatedata/ndcs) repository for a CSV file of\nNDCs directly.\n\nData Package with National Climate Plans from Nationally Determined Contributions (NDCs) and Intended Nationally Determined Contributions (INDCs) as provided in the UNFCCC secretariat\'s registries.\nContains only the main document, using an English version if multiple are available.\n\n## Data\n\nNDCs are pre-processed in the\nNDC Data Package (https://github.com/openclimatedata/ndcs) and INDCs in the\nINDC Data Package (https://github.com/openclimatedata/indcs).\nThis Data Package only contains one document per party, the main NDC or INDC\ndocument. For convenience all documents are copied with unified filenames\nin the `pdfs` directory.\n\nThe EU is listed with code ""EUU""; for France, its NDC for overseas territories\nis included with code ""FRA"".\n\n## Preparation\n\nClone this repository with\n\n git clone https://github.com/openclimatedata/national-climate-plans.git --recursive\n\nRun\n\n make\n\nto generate the combined list from the Data Packages mentioned above.\n\n## Requirements\n\nPython 3 is used; all dependencies are installed automatically into a Virtualenv\nwhen using the `Makefile`.\n\n## License\n\nThe Python files in `scripts` are released under a\n[CC0 Public Dedication License](https://creativecommons.org/publicdomain/zero/1.0/).\n'",,"2017/09/05, 12:50:13",2241,CUSTOM,0,78,"2019/03/08, 14:25:26",0,0,2,0,1692,0,0,0.0,,,0,1,false,,false,false,,,https://github.com/openclimatedata,https://openclimatedata.net,"Potsdam, Germany",,,https://avatars.githubusercontent.com/u/20420557?v=4,,, PyChEmiss,A Python script to create the wrfchemi file from local emissions needed to run WRF-Chem model.,quishqa,https://github.com/quishqa/PyChEmiss.git,github,"emission,wrf-chem,wrf-domain",Emission Observation and Modeling,"2023/01/05, 17:51:57",25,0,5,true,Python,,,Python,,"b'# PyChEmiss\n\n`PyChEmiss` is a Python script to create the `wrfchemi` file from surface local emissions needed to run the WRF-Chem model. 
It\'s based on its older brother, [AAS4WRF.ncl](https://github.com/alvv1986/AAS4WRF).\n\n\n## Installation\n\nYou need to install the packages that `PyChEmiss` needs. We recommend using\n[miniconda](https://docs.conda.io/en/latest/miniconda.html).\n\nYou can download this repo or clone it by:\n\n```\ngit clone https://github.com/quishqa/PyChEmiss.git\n```\n\nThen add the `conda-forge` channel:\n\n```\nconda config --add channels conda-forge\n```\n\nTo avoid conflicts during the installation, we also recommend creating a new environment to run `PyChEmiss`:\n\n```\nconda create --name PyChEmiss\nconda activate PyChEmiss\n```\n\n\n### Option A\n\nNow you can install `esmpy`, `xesmf` and `pyyaml`. By doing this, `xarray`,\n`numpy`, and `pandas` will also be installed:\n\n```\nconda install esmpy\nconda install xesmf\nconda install pyyaml\n```\n\nIt\'s important to install `esmpy` first, to avoid [this issue](https://github.com/JiaweiZhuang/xESMF/issues/47#issuecomment-593322288).\n\n\n### Option B\nOr, you can install the packages listed in `requirements.txt` by typing:\n\n```\nconda install --yes --file requirements.txt\n```\n\nIf everything goes well, you are ready to go.\n\n## The input data\nTo run this script you need the `wrfinput_d0x` file and your temporally and spatially disaggregated emissions in **mol/km2/hr** for gases and in **ug/m2/s** for aerosol species. You can see the needed format by exploring the `emissions_3km.txt` file.\n\nTo untar the example files:\n```\ntar -zxvf emissions_3km.tar.gz\ntar -zxvf wrfinput_d02.tar.gz\n```\n\n## Configuration file: `pychemiss.yml`\nThis file controls the parameters used to run the script. Quotes (`""""`) are required only for `sep`.\n* `wrfinput_file`: the location of wrfinput_d0x.\n* `emission_file`: the location of the local emission file.\n* `nx` and `ny`: the number of longitude and latitude points in which the local emissions were spatially disaggregated.\n* `cell_area`: cell area in km2 of the input `emission_file`.\n* `start_date` and `end_date`: `emissions_3km.txt` temporal availability in `%Y-%m-%d %H:%M` format.\n* `header`: whether your local emission file has a header.\n* `col_names`: names of the emission file columns. **Remember that the first three\ncolumns have to be named ""i"", ""lon"", and ""lat""**.\n* `sep`: column delimiter in the emission file. Use quotes (`""""`).\n* `method`: we implement the `nearest_s2d` method for emissions regridding\n(a conservative method is on the way!).\n\n## Usage\n\nTo run the script, type:\n```\npython src/pychemiss.py pychemiss.yml\n```\n\nTo check that everything is working properly up to this point, we recommend visualizing the content of the output file, for example by using `ncview`:\n```\nncview wrfchemi_d02_2018-06-21_00:00:00\n```\n\n### WRF-Chem namelist configuration\n\nTo use the `wrfchemi` file in a standard WRF-Chem simulation, set some control parameters in the `namelist.input` file as follows:\n```\n&time_control\nio_form_auxinput5 = 2,\nauxinput5_inname = \'wrfchemi_d\',\nauxinput5_interval_m = 60,\nframes_per_auxinput5 = 240,\n/\n\n&chem\nio_style_emissions = 2,\n/\n```\n\n240 is the number of times (hours) in the `wrfchemi` file.\n\nFor 24 hours of emissions data, the preprocessor will automatically build two 12-hour emission files: `wrfchemi_00z_d02` (00 to 11 UTC) and `wrfchemi_12z_d02` (12 to 23 UTC). 
In this case, set `frames_per_auxinput5` to 12 and `io_style_emissions` to 1.\n\n### Output example\nHere is a comparison between the local emissions of CO (with ΔX = 3 km) and the\noutput after using `pychemiss.py` for a WRF domain of ΔX = 3 km.\n\n![Alt text](./pychemiss_example.svg)\n\n### Expected Runtime\n\nFor a WRF domain with 150 x 100 points and for ten days with hourly emissions (nx = 30 and ny = 27, as in the figure above), it took about 30 seconds to run on a ""normal"" laptop.\n'",,"2020/05/20, 20:55:39",1253,GPL-3.0,1,70,"2023/01/06, 00:37:29",1,6,7,1,292,0,0.0,0.05084745762711862,,,0,3,false,,false,false,,,,,,,,,,, co2-data,Data on CO2 and greenhouse gas emissions by Our World in Data.,owid,https://github.com/owid/co2-data.git,github,"co2-emissions,greenhouse-gas-emissions,environment,energy",Emission Observation and Modeling,"2023/10/17, 14:42:25",525,0,186,true,Python,Our World in Data,owid,Python,https://ourworldindata.org/co2-and-other-greenhouse-gas-emissions,"b'# Data on CO2 and Greenhouse Gas Emissions by *Our World in Data*\n\nOur complete CO2 and Greenhouse Gas Emissions dataset is a collection of key metrics maintained by [*Our World in Data*](https://ourworldindata.org/co2-and-other-greenhouse-gas-emissions). It is updated regularly and includes data on CO2 emissions (annual, per capita, cumulative and consumption-based), other greenhouse gases, energy mix, and other relevant metrics.\n\n## The complete *Our World in Data* CO2 and Greenhouse Gas Emissions dataset\n\n### 🗂️ Download our complete CO2 and Greenhouse Gas Emissions dataset: [CSV](https://nyc3.digitaloceanspaces.com/owid-public/data/co2/owid-co2-data.csv) | [XLSX](https://nyc3.digitaloceanspaces.com/owid-public/data/co2/owid-co2-data.xlsx) | [JSON](https://nyc3.digitaloceanspaces.com/owid-public/data/co2/owid-co2-data.json)\n\nThe CSV and XLSX files follow a format of 1 row per location and year. The JSON version is split by country, with an array of yearly records.\n\nThe variables represent all of our main data related to CO2 emissions, other greenhouse gas emissions, and energy mix, as well as other variables of potential interest.\n\nWe will continue to publish updated data on CO2 and Greenhouse Gas Emissions as it becomes available. Most metrics are published on an annual basis.\n\nA [full codebook](https://github.com/owid/co2-data/blob/master/owid-co2-codebook.csv) is made available, with a description and source for each variable in the dataset.\n\n## Our source data and code\n\nThe dataset is built upon a number of datasets and processing steps:\n\n- Statistical review of world energy (Energy Institute, EI):\n - [Source data](https://www.energyinst.org/statistical-review)\n - [Ingestion code](https://github.com/owid/etl/blob/master/snapshots/energy_institute/2023-06-26/statistical_review_of_world_energy.py)\n - [Basic processing code](https://github.com/owid/etl/blob/master/etl/steps/data/meadow/energy_institute/2023-06-26/statistical_review_of_world_energy.py)\n - [Further processing code](https://github.com/owid/etl/blob/master/etl/steps/data/garden/energy_institute/2023-06-26/statistical_review_of_world_energy.py)\n- International energy data (U.S. 
Energy Information Administration, EIA):\n - [Source data](https://www.eia.gov/opendata/bulkfiles.php)\n - [Ingestion code](https://github.com/owid/etl/blob/master/snapshots/eia/2023-07-10/international_energy_data.py)\n - [Basic processing code](https://github.com/owid/etl/blob/master/etl/steps/data/meadow/eia/2023-07-10/energy_consumption.py)\n - [Further processing code](https://github.com/owid/etl/blob/master/etl/steps/data/garden/eia/2023-07-10/energy_consumption.py)\n- Primary energy consumption (Our World in Data based on EI\'s Statistical review of world energy & EIA\'s International energy data):\n - [Processing code](https://github.com/owid/etl/blob/master/etl/steps/data/garden/energy/2023-07-10/primary_energy_consumption.py)\n- Global carbon budget - Fossil CO2 emissions (Global Carbon Project):\n - [Source data](https://zenodo.org/record/7215364#.Y3y3sezMIeY)\n - [Ingestion code](https://github.com/owid/etl/blob/master/snapshots/gcp/2023-09-28/global_carbon_budget.py)\n- Global carbon budget - Global carbon emissions (Global Carbon Project):\n - [Source data](https://globalcarbonbudget.org/wp-content/uploads/Global_Carbon_Budget_2022v1.0.xlsx)\n - [Ingestion code](https://github.com/owid/etl/blob/master/snapshots/gcp/2023-09-28/global_carbon_budget.py)\n- Global carbon budget - National fossil carbon emissions (Global Carbon Project):\n - [Source data](https://globalcarbonbudget.org/wp-content/uploads/National_Fossil_Carbon_Emissions_2022v1.0.xlsx)\n - [Ingestion code](https://github.com/owid/etl/blob/master/snapshots/gcp/2023-09-28/global_carbon_budget.py)\n- Global carbon budget - National land-use change carbon emissions (Global Carbon Project):\n - [Source data](https://globalcarbonbudget.org/wp-content/uploads/National_LandUseChange_Carbon_Emissions_2022v1.0.xlsx)\n - [Ingestion code](https://github.com/owid/etl/blob/master/snapshots/gcp/2023-09-28/global_carbon_budget.py)\n- Global carbon budget (Our World in Data based on the Global Carbon Project\'s Fossil CO2 emissions, Global carbon emissions, National fossil carbon emissions, and National land-use change emissions):\n - [Basic processing code](https://github.com/owid/etl/blob/master/etl/steps/data/meadow/2023-09-28/global_carbon_budget.py)\n - [Processing code](https://github.com/owid/etl/blob/master/etl/steps/data/garden/2023-09-28/global_carbon_budget.py)\n- National contributions to climate change (Jones et al. 
(2023)):\n - [Source data](https://zenodo.org/record/7636699#.ZFCy4exBweZ)\n - [Ingestion code](https://github.com/owid/etl/blob/master/snapshots/emissions/2023-05-02/national_contributions.py)\n - [Basic processing code](https://github.com/owid/etl/blob/master/etl/steps/data/meadow/2023-09-28/global_carbon_budget.py)\n - [Further processing code](https://github.com/owid/etl/blob/master/etl/steps/data/garden/2023-09-28/global_carbon_budget.py)\n- Greenhouse gas emissions (including methane and nitrous oxide) by sector (CAIT):\n - [Source data](https://www.climatewatchdata.org/data-explorer/historical-emissions)\n - [Ingestion code](https://github.com/owid/walden/blob/master/ingests/cait/2022-08-10/cait_ghg_emissions.py)\n - [Basic processing code](https://github.com/owid/etl/blob/master/etl/steps/data/meadow/cait/2022-08-10/ghg_emissions_by_sector.py)\n - [Further processing code](https://github.com/owid/etl/blob/master/etl/steps/data/garden/cait/2022-08-10/ghg_emissions_by_sector.py)\n- CO2 dataset (Our World in Data based on all sources above):\n - [Processing code](https://github.com/owid/etl/blob/master/etl/steps/data/garden/emissions/2023-09-28/owid_co2.py)\n - [Exporting code](https://github.com/owid/co2-data/blob/master/scripts/make_dataset.py)\n - [Uploading code](https://github.com/owid/co2-data/blob/master/scripts/upload_datasets_to_s3.py)\n\nAdditionally, to construct variables per capita and per GDP, we use the following datasets and processing steps:\n- Regions (Our World in Data).\n - [Processing code](https://github.com/owid/etl/blob/master/etl/steps/data/garden/regions/2023-01-01/regions.py)\n- Population (Our World in Data based on [a number of different sources](https://ourworldindata.org/population-sources)).\n - [Processing code](https://github.com/owid/etl/blob/master/etl/steps/data/garden/demography/2023-03-31/population/__init__.py)\n- Income groups (World Bank).\n - [Processing code](https://github.com/owid/etl/blob/master/etl/steps/data/garden/wb/2023-04-30/income_groups.py)\n- GDP (University of Groningen GGDC\'s Maddison Project Database, Bolt and van Zanden, 2020).\n - [Source data](https://www.rug.nl/ggdc/historicaldevelopment/maddison/releases/maddison-project-database-2020)\n - [Ingestion code](https://github.com/owid/walden/blob/master/ingests/ggdc_maddison.py)\n - [Processing code](https://github.com/owid/etl/blob/master/etl/steps/data/garden/ggdc/2020-10-01/ggdc_maddison.py)\n\n## Changelog\n\n- 2023-10-16:\n - Improved codebook.\n - Fixed issue related to consumption-based emissions in Africa, and Palau emissions.\n- 2023-07-10:\n - Updated primary energy consumption and other variables relying on energy data, to use the latest Statistical Review of World Energy by the Energy Institute.\n - Renamed countries \'East Timor\' and \'Faroe Islands\'.\n- 2023-05-04:\n - Added variables `share_of_temperature_change_from_ghg`, `temperature_change_from_ch4`, `temperature_change_from_co2`, `temperature_change_from_ghg`, and `temperature_change_from_n2o` using data from Jones et al. (2023).\n- 2022-11-11:\n - Updated CO2 emissions data with the newly released Global Carbon Budget (2022) by the Global Carbon Project.\n - Added various new variables related to national land-use change emissions.\n - Added the emissions of the 1991 Kuwaiti oil fires in Kuwait\'s emissions (while also keeping \'Kuwaiti Oil Fires (GCP)\' as a separate entity), to properly account for these emissions in the aggregate of Asia.\n - Applied minor changes to entity names (e.g. ""Asia (excl. 
China & India)"" -> ""Asia (excl. China and India)"").\n- 2022-09-06:\n - Updated data on primary energy consumption (from BP & EIA) and greenhouse gas emissions by sector (from CAIT).\n - Refactored code, since now this repository simply loads the data, generates the output files, and uploads them to the cloud; the code to generate the dataset is now in our [etl repository](https://github.com/owid/etl).\n - Minor changes in the codebook.\n- 2022-04-15:\n - Updated primary energy consumption data.\n - Updated CO2 data to include aggregations for the different country income levels.\n- 2022-02-24:\n - Updated greenhouse gas emissions data from CAIT Climate Data Explorer.\n - Included two new columns in dataset: total greenhouse gases excluding land-use change and forestry, and the same as per capita values.\n- 2021-11-05: Updated CO2 emissions data with the newly released Global Carbon Budget (v2021).\n- 2021-09-16:\n - Fixed data quality issues in CO2 emissions variables (emissions less than 0, missing data for Eswatini, ...).\n - Replaced all input CSVs with data retrieved directly from ourworldindata.org.\n- 2021-02-08: Updated this dataset with the latest annual release from the Global Carbon Project.\n- 2020-08-07: The first version of this dataset was made available.\n\n## Data alterations\n\n- **We standardize names of countries and regions.** Since the names of countries and regions are different in different data sources, we standardize all names in order to minimize data loss during data merges.\n- **We recalculate carbon emissions to CO2.** The primary data sources on CO2 emissions—the Global Carbon Project, for example—typically report emissions in tonnes of carbon. We have recalculated these figures as tonnes of CO2 using a conversion factor of 3.664.\n- **We calculate per capita figures.** All of our per capita figures are calculated from our metric `Population`, which is included in the complete dataset. These population figures are sourced from [Gapminder](http://gapminder.org) and the [UN World Population Prospects (UNWPP)](https://population.un.org/wpp/).\n\n## License\n\nAll visualizations, data, and code produced by _Our World in Data_ are completely open access under the [Creative Commons BY license](https://creativecommons.org/licenses/by/4.0/). You have permission to use, distribute, and reproduce these in any medium, provided the source and authors are credited.\n\nThe data produced by third parties and made available by _Our World in Data_ is subject to the license terms from the original third-party authors. We will always indicate the original source of the data in our database, and you should always check the license of any such third-party data before use.\n\n## Authors\n\nThis data has been collected, aggregated, and documented by Hannah Ritchie, Max Roser, Edouard Mathieu, Bobbie Macdonald and Pablo Rosado.\n\nThe mission of *Our World in Data* is to make data and research on the world's largest problems understandable and accessible. 
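As a quick-start illustration of the "1 row per location and year" format described above, here is a minimal pandas sketch (not part of this repository); the URL is the CSV download link from the Download section, and the column names used are described in the codebook:

```python
# Minimal, illustrative sketch: load the complete dataset and inspect one country.
import pandas as pd

url = "https://nyc3.digitaloceanspaces.com/owid-public/data/co2/owid-co2-data.csv"
df = pd.read_csv(url)  # one row per location and year
usa = df[df["country"] == "United States"]
print(usa[["year", "co2", "co2_per_capita"]].tail())  # columns per the codebook
```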
[Read more about our mission](https://ourworldindata.org/about).\n\n\n## How to cite this data?\n\nIf you are using this dataset, please cite both [Our World in Data](https://ourworldindata.org/co2-and-greenhouse-gas-emissions#citation) and the underlying data source(s).\n\nPlease follow [the guidelines in our FAQ](https://ourworldindata.org/faqs#citing-work-produced-by-third-parties-and-made-available-by-our-world-in-data) on how to cite our work.\n'",",https://zenodo.org/record/7215364#.Y3y3sezMIeY,https://zenodo.org/record/7636699#.ZFCy4exBweZ","2020/08/06, 13:02:32",1175,GPL-3.0,29,172,"2023/10/17, 14:42:30",0,18,38,9,8,0,0.5,0.5714285714285714,,,0,5,false,,false,false,,,https://github.com/owid,https://ourworldindata.org,,,,https://avatars.githubusercontent.com/u/14187135?v=4,,, X-STILT,An atmospheric transport model that deals with vertically integrated column CO2 and potentially other trace gases.,uataq,https://github.com/uataq/X-STILT.git,github,"atmospheric-modelling,remote-sensing,column-transport-error,urban-emission",Emission Observation and Modeling,"2023/07/14, 16:40:55",10,0,5,true,R,Utah Atmospheric Trace gas & Air Quality Lab,uataq,R,,"b'# X-STILT: Column-Stochastic Time-Inverted Lagrangian Transport model\n\nX-STILT is an atmospheric transport model that deals with vertically integrated column concentrations of various trace gases ([Wu et al., 2018, 2023](https://doi.org/10.5194/gmd-11-4843-2018)). The model code was built upon the Stochastic Time-Inverted Lagrangian Transport (STILT) model ([Lin et al., 2003](https://doi.org/10.1029/2002JD003161)) and its [latest version 2](https://github.com/uataq/stilt) ([Fasoli et al., 2018](https://doi.org/10.5194/gmd-11-2813-2018)). \n\nThe model framework can now work with OCO-2&3 XCO2 and TROPOMI column CO ([Wu et al., ACP](https://acp.copernicus.org/articles/22/14547/2022/acp-22-14547-2022.html)), CH4 (Li et al., and Tribby et al., in prep), and NO2 ([Wu et al., EGUsphere: STILT-NOx](https://egusphere.copernicus.org/preprints/2023/egusphere-2023-876/)), and is being tested for [TCCON and EM27](https://github.com/uataq/X-STILT/pull/6). :sunglasses: \n\nThis GitHub repo includes built-in scripts/functions for \n1. running backward trajectories from an atmospheric column and computing column footprints (start with `run_xstilt.r` for either the initial X-STILT or the updated STILT-NOx); \n2. running forward-time trajectories from a box around the site and computing background from satellite observations (start with `compute_bg.r`); \n3. estimating wind and PBL uncertainties and translating those into XCO2 uncertainties (start with `run_xstilt.r`); \n4. simulating column CO2, CO, and NO2 abundances at OCO-2/3 and TROPOMI soundings (STILT-NOx, start with `run_sim_nox.r`)\n\nModel development is ongoing and contributions are welcomed and appreciated. 
Please contact Dien (dienwu@caltech.edu) if you are interested in other column sensors/species or have any questions.\n\n# X-STILT Features\n## Table of Contents\n- [**Recent commits**](HISTORY.md)\n- [**Download and install model**](#download-and-install-model)\n- [**Prerequisites**](#prerequisites)\n- [**Obtain column footprint**](#obtain-column-footprint)\n- [**Determine background XCO2**](#determine-background-xco2)\n- [**Estimate horizontal and vertical transport errors**](#estimate-horizontal-and-vertical-transport-errors)\n- [**Atmospheric inversion on XCO2**](#atmospheric-inversion-on-xco2)\n- [**Example figures of column footprints and XCO2.ff**](#example-figures-of-column-footprints-and-xco2ff)\n- [**Reference**](#reference)\n\nDownload and install model\n============\n * As STILT-R version 2 now serves as a submodule of X-STILT, you will need to use the git command to download the entire model package, including the underlying STILTv2 and HYSPLITv5:\n ```\n git clone --recursive https://github.com/uataq/X-STILT.git\n ```\n\n * To automatically install all R packages (requiring R > 3.5.0) and the STILTv2 model with the Fortran program for HYSPLITv5, start with the [first few lines of `run_xstilt.r`](https://github.com/uataq/X-STILT/blob/master/run_xstilt.r#L45-L47). \n\n * Required datasets and methodologies are described in [prerequisites](prerequisites) and [*Wu et al*., 2018](https://doi.org/10.5194/gmd-11-4843-2018). Please refer to the [reference](#reference) section if you plan to publish any study using the (X-)STILT model. \n\n\n\nPrerequisites\n============\n1. For **SATELLITE-dependent column simulation** - \n * download [OCO-2 Lite](https://disc.gsfc.nasa.gov/datasets?keywords=OCO%20L2%20Lite%20FP&page=1) or [TROPOMI](https://disc.gsfc.nasa.gov/datasets?keywords=TROPOMI%20L2&page=1) Level 2 files and modify the corresponding parameters including `obs_sensor`, `obs_ver`, `obs_species`, `oco_path`, or `trp_path`. \n * X-STILT reads in averaging kernels from satellite files but calculates its own pressure weighting functions to perform vertical weighting on footprint values for air parcels that are released from different altitudes, and then provides the vertically compressed column footprints. \n\n2. For **IDEAL column simulation** WITHOUT satellite dependence - \n * prepare a csv file with receptor `lon`, `lat`, and `time`. An example is provided in `receptor_demo.csv`. Check out the instructions in a [recent commit](https://github.com/uataq/X-STILT/blob/master/HISTORY.md#commit-by-july-6-2021). \n\n\n3. Meteorological fields in ARL format, e.g., ones provided by [NOAA ARL](ftp://arlftp.arlhq.noaa.gov/archives/), and modify the section on [ARL format meteo params](https://github.com/uataq/X-STILT/blob/master/run_xstilt.r#L164-L179). One needs to perform an ARL conversion, e.g., when using WRF as the met fields. \n\n4. (OPTIONAL) For calculating column enhancements [in ppm] - \n * choose your favourite emission inventories. The default is to use ODIAC for fossil fuel emission representation; so download the [1 km ODIAC files](http://db.cger.nies.go.jp/dataset/ODIAC/DL_odiac2019.html) in tif format and modify `odiac_path`. \n\n5. 
(OPTIONAL) For performing transport error analyses - \n * [NOAA radiosonde data](https://ruc.noaa.gov/raobs/) for computing model-data wind errors; please choose the wind speed unit of tenths of m s-1 and the FSL format; users can download RAOB stations within a spatial area around `site`;\n * [Carbon-Tracker mole fraction data, e.g., CT-NRT](https://www.esrl.noaa.gov/gmd/ccgg/carbontracker/CT-NRT/) for getting the total XCO2 in addition to FF XCO2. \n\n6. (OPTIONAL) For performing emission error analyses - \n * Bottom-up emission inventories ensemble [FFDAS](http://ffdas.rc.nau.edu/index.html) and [EDGAR](https://edgar.jrc.ec.europa.eu/). \n\n\nObtain column footprint\n============\n*Column Footprints* [ppm / (umol m-2 s-1)] are the source-receptor sensitivities, or essentially the Jacobian matrices between concentration enhancements and fluxes (for a given source/sink). Users can start with `run_xstilt.r` for model and parameter initializations.\n\n1. Select a satellite overpass, either by manual insertion or by an automatic search. By default, [STEP 1](https://github.com/uataq/X-STILT/blob/master/run_xstilt.r#L102-L116) searches for all overpasses that have soundings falling into a spatial domain (e.g., [2 deg x 2 deg](https://github.com/uataq/X-STILT/blob/master/run_xstilt.r#L57-L58)) as well as a near-field domain (i.e., [0.6 deg x 0.6 deg](https://github.com/uataq/X-STILT/blob/master/run_xstilt.r#L59-L60)).\n\n2. Select the kind of simulation one would like to conduct in [STEP 2](https://github.com/uataq/X-STILT/blob/master/run_xstilt.r#L120-L132). One needs to modify logical flags, which include:\n\n >* `run_trajec = T or run_foot = T`: Backward trajectories + vertically weighted column footprints;\n\n >* `run_hor_err = T`: Horizontal transport error analysis (one needs to calculate wind error statistics first, see below); \n\n >* `run_wind_err = T`: Modeled wind error estimates against radiosonde data (one needs to download them from the NOAA webpage first); \n\n >* `run_ver_err = T`: Vertical transport error analysis (via scaling mixed layer height up and down with scaling factors stated in [STEP 4](https://github.com/uataq/X-STILT/blob/master/run_xstilt.r#L253-L256));\n\n >* `run_sim = T`: Simulate FFCO2 XCO2 enhancements (requires footprint); remember to turn off `run_trajec` and `run_foot`;\n\n >* `run_emiss_err = T`: Prior emission error analysis (requires footprint).\n\n3. Modify parameters for placing column receptors (e.g., max height and vertical spacing between two vertical levels in meters, # of particles per level, receptor locations). By default, X-STILT chooses more receptors within the NEAR-FIELD area, see [STEP 3](https://github.com/uataq/X-STILT/blob/master/run_xstilt.r#L135-L160).\n\n4. Modify meteorological fields, [STEP 4](https://github.com/uataq/X-STILT/blob/master/run_xstilt.r#L164-L179). See [STILT documentation](https://uataq.github.io/stilt/#/configuration?id=meteorological-data-input) for more explanations. \n\n5. Modify horizontal resolution and spatial extent of footprint, [STEP 5](https://github.com/uataq/X-STILT/blob/master/run_xstilt.r#L182-L214). \n * X-STILT can generate multiple sets of footprints based on the same set of trajectories in one simulation (hourly-integrated or hourly-explicit footprints). For generating a second or third set of footprints, modify these [params](https://github.com/uataq/X-STILT/blob/master/run_xstilt.r#L202-L214). \n\n6. 
Set up SLURM parallel computing, [STEP 6](https://github.com/uataq/X-STILT/blob/master/run_xstilt.r#L241-L260).\n\n\nDetermine background XCO2\n============\nStart with the main script, `compute_bg.r`. [M3 is the overpass-specific background [ppm]](https://github.com/uataq/X-STILT/blob/master/compute_bg.r), derived by releasing air parcels in a forward fashion from a city and determining the urban plume and background region using 2D kernel density. Please refer to the details described in [Sect. 2.3 in Wu et al. (2018)](https://www.geosci-model-dev.net/11/4843/2018/#section2).\n\n >* `run_trajec = T`: calculate forward-time trajectories from a small box around the site (see parameter `box.len` and other parameters in STEP 2), regardless of whether those trajectories have been calculated previously. In this mode, air parcels are released continuously every `dtime.sep` hours (see parameter `dtime.*`);\n\n >* `run_trajec = T & run_for_err = T`: add a wind error component (controlled by parameter `siguverr`) while calculating forward-time trajectories;\n\n >* `run_wind_err = T`: calculate modeled wind errors against radiosonde data (one needs to download them first);\n\n >* `run_bg = T`: calculate the background values based on OCO-2/3 or TROPOMI data (see parameters in STEP 4); the inner functions will generate plots of plumes and observed enhancements by assuming a background region to the north/south/east/west outside the plume. The final estimated background values will be stored in `file.path(store_path, fn)` if `writeTF = T`. \n\n\nEstimate horizontal and vertical transport errors\n============\nInstead of using model ensembles for estimating errors, X-STILT propagates real-world random u- and v-wind errors (with correlation length scale and timescale) into errors in XCO2. The fundamental approach is proposed and documented in [Lin and Gerbig, 2005](https://agupubs.onlinelibrary.wiley.com/doi/epdf/10.1029/2004GL021127), and is now extended to column CO2. We calculate and propagate XCO2 errors from release levels, to receptor locations, and finally to overpasses. \n\nBriefly speaking, users need to first generate another set of trajectories with wind error statistics (i.e., blue dots in Fig. S4) by turning on `run_hor_err`. Once trajectories with wind errors have been generated, one needs to turn on `run_sim` to let X-STILT calculate the errors in XCO2 based on the two sets of trajectories (with/without wind errors). It is hard to explain all the steps here; feel free to contact Dien if you are interested in performing such error estimates. Please also refer to the details described in [Sect. 2.6 in Wu et al. (2018)](https://www.geosci-model-dev.net/11/4843/2018/#section2).\n\n\n\n\nAtmospheric inversion on XCO2\n============\nAs discussed in [Sect. 4.2 in Wu et al. (2018)](https://www.geosci-model-dev.net/11/4843/2018/#section4), we can derive the posterior scaling factor for anthropogenic emissions based on 5 overpasses over Riyadh via a simple Bayesian inversion. We treated the entire city as a whole and solved for one scaling factor, given biases in the near-field wind direction; that is, we did not solve for posterior emissions for every grid cell within the city. Codes are not included in this repo; an illustrative sketch is given below. 
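As a rough, hypothetical sketch (not the authors' code, which is not included in this repo), solving for a single city-wide scaling factor with Gaussian priors reduces to a one-unknown Bayesian update; all variable names and numbers below are illustrative placeholders, assuming uncorrelated model-data mismatch errors.\n\n```python\n# Minimal scalar Bayesian inversion for one city-wide emission scaling factor.\n# Illustrative only; names and numbers are hypothetical, not from X-STILT.\nimport numpy as np\n\ndef posterior_scaling(h, y, lam0=1.0, sig0=0.5, sig_obs=1.0):\n    # h       : modeled XCO2.ff enhancements [ppm] for a unit scaling factor\n    # y       : observed, background-subtracted XCO2 enhancements [ppm]\n    # lam0    : prior scaling factor\n    # sig0    : prior 1-sigma uncertainty of the scaling factor\n    # sig_obs : 1-sigma model-data mismatch error [ppm], assumed uncorrelated\n    h, y = np.asarray(h, float), np.asarray(y, float)\n    r_inv = 1.0 / sig_obs**2                            # diagonal R^-1\n    post_var = 1.0 / (1.0 / sig0**2 + r_inv * (h @ h))  # posterior variance\n    lam_hat = post_var * (lam0 / sig0**2 + r_inv * (h @ y))\n    return lam_hat, float(np.sqrt(post_var))\n\n# Hypothetical soundings pooled from several overpasses\nh = [0.8, 1.2, 0.5, 1.9, 1.1]   # modeled enhancements [ppm]\ny = [1.0, 1.5, 0.4, 2.3, 1.4]   # observed enhancements [ppm]\nprint(posterior_scaling(h, y))  # a factor > 1 suggests the prior is too low\n```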
\n\nExample figures of column footprints and XCO2.ff\n============\n\n\nFigure - Latitude integrated map of weighted column footprints [umol m-2 s-1] on 12/29/2014 from 70+ selected soundings/receptors over Riyadh.\n\n\n\nFigure - Latitude integrated XCO2.ff contribution maps [ppm] on 12/29/2014 from 70+ selected soundings/receptors over Riyadh.\n\n\nReference\n============\n## Model development \n**Initial X-STILTv1 paper**: \nWu, D., Lin, J. C., Fasoli, B., Oda, T., Ye, X., Lauvaux, T., Yang, E. G., and Kort, E. A.: A Lagrangian approach towards extracting signals of urban CO2 emissions from satellite observations of atmospheric column CO2 (XCO2): X-Stochastic Time-Inverted Lagrangian Transport model (""X-STILT v1""), *Geosci. Model Dev.*, 11, 4843-4871, https://doi.org/10.5194/gmd-11-4843-2018, 2018. \n\n**STILTv2 paper**: \nFasoli, B., Lin, J. C., Bowling, D. R., Mitchell, L., and Mendoza, D.: Simulating atmospheric tracer concentrations for spatially distributed receptors: updates to the Stochastic Time-Inverted Lagrangian Transport model\'s R interface (STILT-R version 2), *Geosci. Model Dev.*, 11, 2813-2824, https://doi.org/10.5194/gmd-11-2813-2018, 2018.\n\n**Initial STILTv1 paper**: \nLin, J. C., Gerbig, C., Wofsy, S. C., Andrews, A. E., Daube, B. C., Davis, K. J., and Grainger, C. A.: A near-field tool for simulating the upstream influence of atmospheric observations: The Stochastic Time-Inverted Lagrangian Transport (STILT) model, *Journal of Geophysical Research: Atmospheres*, 108(D16), https://doi.org/10.1029/2002JD003161, 2003. \n\n**STILT error uncertainties**: \nLin, J. C., and Gerbig, C.: Accounting for the effect of transport errors on tracer inversions, *Geophys. Res. Lett.*, 32, L01802, https://doi.org/10.1029/2004GL021127, 2005.\n\n## Model applications \n**X-STILT for TROPOMI XCO**: Wu, D., Liu, J., Wennberg, P. O., Palmer, P. I., Nelson, R. R., Kiel, M., and Eldering, A.: Towards sector-based attribution using intra-city variations in satellite-based emission ratios between CO2 and CO, Atmos. Chem. Phys. Discuss. [preprint], https://doi.org/10.5194/acp-2021-1029, in review, 2022.\n\n'",",https://doi.org/10.5194/gmd-11-4843-2018,https://doi.org/10.1029/2002JD003161,https://doi.org/10.5194/gmd-11-2813-2018,https://doi.org/10.5194/gmd-11-4843-2018,https://doi.org/10.5194/gmd-11-4843-2018,https://doi.org/10.5194/gmd-11-2813-2018,https://doi.org/10.1029/2002JD003161,https://doi.org/10.1029/2004GL021127,https://doi.org/10.5194/acp-2021-1029","2018/04/06, 22:46:36",2028,GPL-3.0,3,155,"2023/06/19, 22:57:02",2,5,5,1,128,0,0.0,0.0,"2023/06/19, 23:08:24",v1.6,0,1,false,,false,false,,,https://github.com/uataq,https://air.utah.edu,"Salt Lake City, UT",,,https://avatars.githubusercontent.com/u/30093582?v=4,,, stilt,An open source Lagrangian particle dispersion model which is widely used to simulate the transport of pollution and greenhouse gases through the atmosphere.,uataq,https://github.com/uataq/stilt.git,github,"atmospheric-science,r,climate-science",Emission Observation and Modeling,"2023/02/03, 19:27:51",40,0,8,true,R,Utah Atmospheric Trace gas & Air Quality Lab,uataq,"R,Shell,Dockerfile,Fortran",https://uataq.github.io/stilt/,"b'

# STILT: Stochastic Time-Inverted Lagrangian Transport model\n\nAn open source Lagrangian particle dispersion model.
\n\n## Docs\n\n[**STILT documentation**](https://uataq.github.io/stilt/) \n[Methods details](https://www.geosci-model-dev.net/11/2813/2018/)\n\n## About\n\nSTILT would not be possible without the strong community of developers behind it. This distribution contains a completely redesigned STILT wrapper and proposes a centralized, collaborative platform for documentation and future development. Model development in the form of feature enhancements, documentation updates, bug fixes, or simple suggestions from the community is welcome. Contribution guidelines can be found [here](https://uataq.github.io/stilt/#/contribute).\n\n### Relevant manuscripts\n\nLoughner, C. P., Fasoli, B., Stein, A. F., Lin, J. C.: Incorporating features from the Stochastic Time-Inverted Lagrangian Transport (STILT) model into the Hybrid Single-Particle Lagrangian Integrated Trajectory (HYSPLIT) model: a unified dispersion model for time-forward and time-reversed applications, J. Appl. Meteorol. Climatol., [10.1175/JAMC-D-20-0158.1](https://doi.org/10.1175/JAMC-D-20-0158.1), 2021.\n\nFasoli, B., Lin, J. C., Bowling, D. R., Mitchell, L., and Mendoza, D.: Simulating atmospheric tracer concentrations for spatially distributed receptors: updates to the Stochastic Time-Inverted Lagrangian Transport model\'s R interface (STILT-R version 2), Geosci. Model Dev., [10.5194/gmd-11-2813-2018](https://doi.org/10.5194/gmd-11-2813-2018), 2018.\n\nStein, A. F., Draxler, R. R., Rolph, G. D., Stunder, B. J. B., and Cohen, M. D.: NOAA\'s HYSPLIT atmospheric transport and dispersion modeling system. Bull. Amer. Meteor. Soc., [10.1175/BAMS-D-14-00110.1](https://doi.org/10.1175/BAMS-D-14-00110.1), 2015.\n\nLin, J. C., Gerbig, C., Wofsy, S. C., Andrews, A. E., Daube, B. C., Davis, K. J. and Grainger, C. A.: A near-field tool for simulating the upstream influence of atmospheric observations: The Stochastic Time-Inverted Lagrangian Transport (STILT) model, J. Geophys. Res., [10.1029/2002JD003161](https://doi.org/10.1029/2002JD003161), 2003.\n'",",https://doi.org/10.1175/JAMC-D-20-0158.1,https://doi.org/10.5194/gmd-11-2813-2018,https://doi.org/10.1175/BAMS-D-14-00110.1,https://doi.org/10.1029/2002JD003161","2016/07/08, 22:37:37",2665,GPL-3.0,1,232,"2023/02/03, 19:27:53",2,65,94,11,264,0,0.1,0.004608294930875556,"2020/06/23, 17:27:59",v1.2,0,2,false,,false,false,,,https://github.com/uataq,https://air.utah.edu,"Salt Lake City, UT",,,https://avatars.githubusercontent.com/u/30093582?v=4,,, OPGEE,Oil Production Greenhouse Gas Emissions Estimator.,arbrandt,https://github.com/arbrandt/OPGEE.git,github,,Emission Observation and Modeling,"2023/05/11, 19:32:34",17,0,2,true,MATLAB,,,MATLAB,,"b'# OPGEE\nOil Production Greenhouse Gas Emissions Estimator\n\nThis is the repository for active scientific development of the OPGEE model. The OPGEE model is developed at Stanford University in the Environmental Assessment and Optimization group (https://eao.stanford.edu).\n\nThis repository contains beta software under active development. This beta software may not work in all conditions and can produce results that are different from those generated by stable OPGEE model versions. Do not cite, distribute, or further use results from these models. 
\n\nFor stable versions of the model as used in regulatory processes or cited in scientific papers, please consult the following resources:\n\nOPGEE webpage: https://eao.stanford.edu/opgee-oil-production-greenhouse-gas-emissions-estimator\n\nCARB webpage: https://www.arb.ca.gov/fuels/lcfs/crude-oil/crude-oil.htm\n\n'",,"2018/07/27, 01:24:52",1916,GPL-3.0,11,725,"2023/05/06, 22:53:37",52,1,464,13,172,1,0.0,0.36857142857142855,,,0,5,false,,false,false,,,,,,,,,,, OpenGHG,A cloud platform for greenhouse gas data analysis and collaboration.,openghg,https://github.com/openghg/openghg.git,github,"greenhouse-gas,data-science,cloud,analysis,collaboration",Emission Observation and Modeling,"2023/10/19, 10:09:34",19,1,10,true,Python,OpenGHG,openghg,"Python,C,Jupyter Notebook,Shell,Dockerfile",https://www.openghg.org,"b'![OpenGHG logo](https://github.com/openghg/logo/raw/main/OpenGHG_Logo_Landscape.png)\n\n## OpenGHG - a cloud platform for greenhouse gas data analysis and collaboration\n\n[![License](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](https://opensource.org/licenses/Apache-2.0) [![codecov](https://codecov.io/gh/openghg/openghg/branch/devel/graph/badge.svg)](https://codecov.io/gh/openghg/openghg) ![OpenGHG tests](https://github.com/openghg/openghg/workflows/OpenGHG%20tests/badge.svg?branch=master)\n\nOpenGHG is a project based on the prototype [HUGS platform](https://github.com/hugs-cloud/hugs) which aims to be a platform for collaboration and analysis\nof greenhouse gas (GHG) data.\n\nThe platform will be built on open-source technologies and will allow researchers to collaborate on large datasets by harnessing the\npower and scalability of the cloud.\n\nFor more information please see [our documentation](https://docs.openghg.org/).\n\n## Install locally\n\nTo run OpenGHG locally you\'ll need Python 3.8 or later on Linux or macOS; we don\'t currently support Windows.\n\nYou can install OpenGHG using `pip` or `conda`, though `conda` allows the complete functionality to be accessed at once.\n\n## Using `pip`\n\nTo use `pip`, first create a virtual environment\n\n```bash\npython -m venv openghg_env\n```\n\nThen activate the environment\n\n```bash\nsource openghg_env/bin/activate\n```\n\nIt\'s best to make sure you have the most up-to-date versions of the packages that `pip` will use behind the scenes when installing OpenGHG.\n\n```bash\npip install --upgrade pip wheel setuptools\n```\n\nThen we can install OpenGHG itself\n\n```bash\npip install openghg\n```\n\nEach time you use OpenGHG please make sure to activate the environment using the `source` step above.\n\n\n> **_NOTE:_** Some functionality is not completely accessible when OpenGHG is installed with `pip`. This only affects some map regridding functionality. 
See the Additional Functionality section below for more information.\n\n## Using `conda`\n\nTo get OpenGHG installed using `conda` we\'ll first create a new environment\n\n```bash\nconda create --name openghg_env\n```\n\nThen activate the environment\n\n```bash\nconda activate openghg_env\n```\n\nThen install OpenGHG and its dependencies from our [conda channel](https://anaconda.org/openghg/openghg)\nand conda-forge.\n\n```bash\nconda install --channel conda-forge --channel openghg openghg\n```\n\nNote: the `xesmf` library is already incorporated into the conda install from vx.x onwards and so does not need to be installed separately.\n\n## Create the configuration file\n\nOpenGHG stores object store and user data in a configuration file in the user\'s home directory at `~/.config/openghg/openghg.conf`. As this sets the path of the object store, the user must\ncreate this file in one of two ways\n\n### Command line\n\nUsing the `openghg` command line tool\n\n```\nopenghg --quickstart\n\nOpenGHG configuration\n---------------------\n\nEnter path for object store (default /home/gareth/openghg_store):\nINFO:openghg.util:Creating config at /home/gareth/.config/openghg/openghg.conf\n\nINFO:openghg.util:Configuration written to /home/gareth/.config/openghg/openghg.conf\n```\n\n### Python\n\nUsing the `create_config` function from the `openghg.util` submodule.\n\n```\nfrom openghg.util import create_config\n\ncreate_config()\n\nOpenGHG configuration\n---------------------\n\nEnter path for object store (default /home/gareth/openghg_store):\nINFO:openghg.util:Creating config at /home/gareth/.config/openghg/openghg.conf\n\nINFO:openghg.util:Configuration written to /home/gareth/.config/openghg/openghg.conf\n```\n\nYou will be prompted to enter the path to the object store; leaving the prompt empty tells OpenGHG to use the default path in the user\'s home directory at `~/openghg_store`.\n\n## Additional functionality\n\nSome optional functionality is available within OpenGHG to allow for multi-dimensional regridding of map data (`openghg.transform` sub-module). This makes use of the [`xesmf` package](https://xesmf.readthedocs.io/en/latest/). This Python library is built upon underlying FORTRAN and C libraries (ESMF), which cannot be installed directly within a Python virtual environment.\n\nTo use this functionality these libraries must be installed separately. One suggestion for how to do this is as follows.\n\nIf still within the created virtual environment, exit this using\n```bash\ndeactivate\n```\n\nWe will need to create a `conda` environment to contain just the additional C and FORTRAN libraries necessary for the `xesmf` module (and dependencies) to run. This can be done by installing the `esmf` package using `conda`\n```bash\nconda create --name openghg_add esmf -c conda-forge\n```\n\nThen activate the Python virtual environment in the same way as above:\n```bash\nsource openghg_env/bin/activate\n```\n\nRun the following lines to link the Python virtual environment to the installed dependencies, doing so by installing the `esmpy` Python wrapper (a dependency of `xesmf`):\n```bash\nESMFVERSION=\'v\'$(conda list -n openghg_add esmf | tail -n1 | awk \'{print $2}\')\nexport ESMFMKFILE=""$(conda env list | grep openghg_add | awk \'{print $2}\')/lib/esmf.mk""\npip install ""git+https://github.com/esmf-org/esmf.git@${ESMFVERSION}#subdirectory=src/addon/ESMPy/""\n```\n\n**Note**: The pip install command above for the `esmf` module may produce an AttributeError. 
At present (19/07/2022), an error of this type is expected and does not necessarily mean the `xesmf` module cannot be installed. This error will be fixed if [PR #49](https://github.com/esmf-org/esmf/pull/49) is merged.\n\nNow that the dependencies have all been installed, the `xesmf` library can be installed within the virtual environment\n\n```bash\npip install xesmf\n```\n\n## Developers\n\nIf you\'d like to contribute to OpenGHG, please see the contributing section of our documentation. If you\'d like to take a look at the source and run the tests, follow the steps below.\n\n### Clone\n\n```bash\ngit clone https://github.com/openghg/openghg.git\n```\n\n### Install dependencies\n\nWe recommend you create a virtual environment first\n\n```bash\npython -m venv openghg_env\n```\n\nThen activate the environment\n\n```bash\nsource openghg_env/bin/activate\n```\n\nThen install the dependencies\n\n```bash\ncd openghg\npip install --upgrade pip wheel setuptools\npip install -r requirements.txt -r requirements-dev.txt\n```\n\nNext you can install OpenGHG in editable mode using the `-e` flag. This installs the package from\nthe local path and means any changes you make to the code will be immediately available when\nusing the package.\n\n```bash\npip install -e .\n```\n\nOpenGHG should now be installed in your virtual environment.\n\nSee above for additional steps to install the `xesmf` library as required.\n\n### Run the tests\n\nTo run the tests\n\n```bash\npytest -v tests/\n```\n\n> **_NOTE:_** Some of the tests require the [udunits2](https://www.unidata.ucar.edu/software/udunits/) library to be installed.\n\nThe `udunits` package is not `pip` installable, so we\'ve added a separate flag to specifically run these tests. If you\'re on Debian / Ubuntu you can do\n\n```bash\nsudo apt-get install libudunits2-0\n```\n\nYou can then run the `cfchecks` marked tests using\n\n```bash\npytest -v --run-cfchecks tests/\n```\n\nIf all the tests pass then you\'re good to go. 
If they don\'t, please [open an issue](https://github.com/openghg/openghg/issues/new) and let us\nknow some details about your setup.\n\n## Documentation\n\nFor further documentation and tutorials please visit [our documentation](https://docs.openghg.org/).\n\n## Community\n\nIf you\'d like further help or would like to talk to one of the developers of this project, please join\nour Gitter at gitter.im/openghg/lobby.\n'",,"2020/09/30, 14:35:48",1120,Apache-2.0,788,3066,"2023/10/19, 10:09:42",152,289,659,390,6,22,8.0,0.2790606653620352,,,0,7,false,,true,false,openghg/openghg_inversions,,https://github.com/openghg,https://www.openghg.org,,,,https://avatars.githubusercontent.com/u/67903512?v=4,,, Open Carbon Watch,"We monitor greenhouse gases emission reports published by organizations, along with their legal obligations and their own commitments, and track them over time.",OpenCarbonWatch,https://github.com/OpenCarbonWatch/Website.git,github,"opendata,carbon-emissions,database,laravel,vuejs",Emission Observation and Modeling,"2022/12/18, 15:26:43",9,0,1,true,PHP,Open Carbon Watch,OpenCarbonWatch,"PHP,Blade,Vue,Shell",https://opencarbonwatch.org,"b'# Website\n\nResources used to build our main website\n\n## Installation of the database server\n\n### Database\n\nWe use PostgreSQL on Ubuntu 18.04.\n\n```bash\nsudo su postgres\npsql -c ""CREATE USER ocw_user PASSWORD \'secret_password\';""\npsql -c ""CREATE DATABASE ocw OWNER ocw_user ENCODING \'UTF-8\';""\n```\n\n### Import data\n\n```bash\nsudo su postgres\npsql -d ocw -c ""TRUNCATE activities, cities, legal_types, assessment_organization, assessments, organizations;""\npsql -d ocw -c ""DROP INDEX IF EXISTS idx_organizations_name;""\npsql -d ocw -c ""COPY activities FROM \'/home/data/activities.csv\' CSV HEADER;""\npsql -d ocw -c ""COPY cities FROM \'/home/data/cities.csv\' CSV HEADER;""\npsql -d ocw -c ""COPY legal_types FROM \'/home/data/legal_types.csv\' CSV HEADER;""\npsql -d ocw -c ""COPY assessments FROM \'/home/data/assessments.csv\' CSV HEADER;""\npsql -d ocw -c ""COPY organizations FROM \'/home/data/organizations.csv\' CSV HEADER;""\npsql -d ocw -c ""COPY assessment_organization FROM \'/home/data/assessment_organization.csv\' CSV HEADER;""\npsql -d ocw -c ""CREATE INDEX idx_organizations_name ON organizations USING gin (name gin_trgm_ops);""\n```\n\n## Installation of the application server\n\nWe install the application layer on an Ubuntu 22.04 server.\n\n### Packages\n\nStart by installing the underlying software packages, including PHP, Composer, Node.js and Yarn.\n\n```bash\nsudo apt update\nsudo apt -y upgrade\nsudo apt install -y php8.1 php8.1-{cli,curl,fpm,common,pgsql,intl,xml,mbstring,zip,soap,gd,gmp}\nsudo apt -y install composer\nwget -qO- https://deb.nodesource.com/setup_16.x | sudo -E bash\nsudo apt-get install -y nodejs\ncurl -sS https://dl.yarnpkg.com/debian/pubkey.gpg | sudo apt-key add -\necho ""deb https://dl.yarnpkg.com/debian/ stable main"" | sudo tee /etc/apt/sources.list.d/yarn.list\nsudo apt update\nsudo apt install -y yarn\n```\n\n### Application\n\n```bash\nsudo git clone https://github.com/OpenCarbonWatch/Website.git /srv/ocw\ncd /srv/ocw\nsudo cp .env.example .env\nsudo composer install\nsudo php artisan key:generate\n```\n\nConfigure the `.env` file (mostly with the connection information for the database).\n\nInstall the application\n\n```bash\nsudo php artisan migrate\nsudo yarn install\nsudo yarn run prod\nsudo chown -R www-data:www-data ocw\n```\n\n### Nginx\n\nUninstall Apache and install 
Nginx\n```bash\nsudo systemctl disable --now apache2\nsudo apt remove -y apache2\nsudo apt install -y nginx\nsudo systemctl start nginx\n```\n\nCreate a configuration file `/etc/nginx/sites-available/ocw` with the following content\n\n```\nserver {\n if ($host = opencarbonwatch.org) {\n return 301 https://$host$request_uri;\n }\n listen 80;\n server_name opencarbonwatch.org;\n return 301 https://$host$request_uri;\n}\n\nserver {\n listen 443 ssl http2;\n server_name opencarbonwatch.org;\n client_max_body_size 100M;\n gzip on;\n gzip_types text/plain text/css application/javascript application/xml;\n root /srv/ocw/public;\n index index.php index.html index.htm index.nginx-debian.html;\n location / {\n try_files $uri $uri/ /index.php?$query_string;\n }\n # Cache header\n location ~* \\.(?:css|js|svg|ico)$ {\n expires 1y;\n access_log off;\n add_header Cache-Control ""public"";\n add_header X-Robots-Tag ""noindex, nofollow, nosnippet, noarchive"";\n }\n location ~ \\.php$ {\n try_files $uri =404;\n fastcgi_split_path_info ^(.+\\.php)(/.+)$;\n fastcgi_index index.php;\n fastcgi_pass unix:/var/run/php/php8.1-fpm.sock;\n include fastcgi_params;\n fastcgi_param PATH_INFO $fastcgi_path_info;\n fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;\n fastcgi_param PHP_VALUE ""memory_limit = 2G"";\n }\n access_log /var/log/nginx/ocw_access.log;\n}\n```\n\nThen run\n```bash\nsudo ln -s /etc/nginx/sites-available/ocw /etc/nginx/sites-enabled/\nsudo rm /etc/nginx/sites-enabled/default\nsudo systemctl restart nginx\nsudo apt -y install certbot python3-certbot-nginx\nsudo certbot\n```\n'",,"2019/11/02, 09:22:52",1453,AGPL-3.0,12,139,"2022/11/13, 13:49:27",5,23,23,5,346,4,0.0,0.0,,,0,1,false,,false,false,,,https://github.com/OpenCarbonWatch,https://opencarbonwatch.org,France,,,https://avatars.githubusercontent.com/u/57277203?v=4,,, Methane-detection-from-hyperspectral-imagery,Deep Learning based Remote Sensing Methods for Methane Detection in Airborne Hyperspectral Imagery.,satish1901,https://github.com/satish1901/Methane-detection-from-hyperspectral-imagery.git,github,,Emission Observation and Modeling,"2021/07/14, 00:06:11",39,0,17,true,Python,,,"Python,HTML,Shell",,"b'## Methane-detection-from-hyperspectral-imagery\nH-MRCNN introduces fast algorithms to analyze large-area hyper-spectral information and methods to autonomously represent and detect CH4 plumes. This repo contains two methods for processing different types of data: the single detector works on 4-channel data, and the ensemble detectors work on 432-channel raw hyperspectral data recorded by the AVIRIS-NG instrument (a toy sketch of this channel grouping follows below). 
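As a rough, hypothetical illustration of that ensemble idea (this is not code from the repository; array names and shapes are made up), a 432-channel hyperspectral cube could be split into equal channel groups, one per detector:\n\n```python\n# Hypothetical sketch: split a (H, W, C) hyperspectral cube into channel\n# groups, one group per ensemble detector. Not part of this repository.\nimport numpy as np\n\ndef split_channels(cube, n_groups):\n    # cube: (height, width, channels) array; channels must divide evenly\n    h, w, c = cube.shape\n    assert c % n_groups == 0, 'channel count must divide evenly'\n    return np.split(cube, n_groups, axis=-1)\n\ncube = np.random.rand(64, 64, 432).astype(np.float32)  # fake AVIRIS-NG tile\ngroups = split_channels(cube, n_groups=27)             # 27 groups x 16 channels\nprint(len(groups), groups[0].shape)                    # 27 (64, 64, 16)\n```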
\n### [**Deep Remote Sensing Methods for Methane Detection in Overhead Hyperspectral Imagery**](https://openaccess.thecvf.com/content_WACV_2020/papers/Kumar_Deep_Remote_Sensing_Methods_for_Methane_Detection_in_Overhead_Hyperspectral_WACV_2020_paper.pdf)\n[Satish Kumar*](https://www.linkedin.com/in/satish-kumar-81912540/), [Carlos Torres*](https://torrescarlos.com), [Oytun Ulutan](https://sites.google.com/view/oytun-ulutan), [Alana Ayasse](https://www.linkedin.com/in/alana-ayasse-18370972/), [Dar Roberts](http://geog.ucsb.edu/archive/people/faculty_members/roberts_dar.htm), [B S Manjunath](https://vision.ece.ucsb.edu/people/bs-manjunath).\n\nOfficial repository of our [**WACV 2020**](https://openaccess.thecvf.com/content_WACV_2020/papers/Kumar_Deep_Remote_Sensing_Methods_for_Methane_Detection_in_Overhead_Hyperspectral_WACV_2020_paper.pdf) paper.\n\n\n\nThis repository includes:\n* Source code of the single detector and ensemble detectors (H-MRCNN) built on Mask-RCNN.\n* Training code for the single detector and ensemble detectors (H-MRCNN)\n* Pre-trained ms-coco weights of Mask-RCNN\n* Annotation generator to read and convert mask annotations into json.\n* Modified spectral library of python\n* Example of training on your own dataset\n\n![supported versions](https://img.shields.io/badge/python-(3.5--3.8)-brightgreen/?style=flat&logo=python&color=green)\n![Library](https://img.shields.io/badge/Library-TensorFlow-blue)\n![GitHub license](https://img.shields.io/cocoapods/l/AFNetworking)\n\nThe whole repo folder structure follows the same style as written in the paper, for easy reproducibility and extension. If you use it in your research, please consider citing our paper (bibtex below)\n\n## Citing\nIf this work is useful to you, please consider citing our paper:\n```\n@inproceedings{kumar2020deep,\n title={Deep Remote Sensing Methods for Methane Detection in Overhead Hyperspectral Imagery},\n author={Kumar, Satish and Torres, Carlos and Ulutan, Oytun and Ayasse, Alana and Roberts, Dar and Manjunath, BS},\n booktitle={2020 IEEE Winter Conference on Applications of Computer Vision (WACV)},\n pages={1765--1774},\n year={2020},\n organization={IEEE}\n}\n```\n\n### Requirements\n- Linux or macOS with Python ≥ 3.6\n- Tensorflow <= 1.8\n- CUDA 9.0\n- cudNN (compatible with CUDA)\n\n### Installation\n1. Clone this repository\n2. Install dependencies\n```\npip install -r requirements.txt\n```\n#### Single-detector\nRunning the single detector is quite simple. Follow the [README.md](https://github.com/satish1901/Methane-detection-from-hyperspectral-imagery/blob/master/single_detector/README.md) in the single_detector folder\n```\nsingle_detector/README.md\n```\n\n#### Ensemble-detector\nRunning the ensemble detector requires some pre-processing. 
Follow the [README.md](https://github.com/satish1901/Methane-detection-from-hyperspectral-imagery/blob/master/ensemble_detectors/README.md) in the ensemble_detectors folder\n```\nensemble_detectors/README.md\n```\n\n'",,"2020/06/01, 22:19:06",1241,MIT,0,107,"2022/07/06, 20:23:10",12,13,13,1,476,11,0.0,0.0,,,0,1,false,,false,false,,,,,,,,,,, Methane Source Finder,"Explore, analyze, and download methane plumes detected from airborne platforms on an interactive map alongside VISTA infrastructure, gridded methane estimates, and other additional data layers.",,,custom,,Emission Observation and Modeling,,,,,,,,,,https://methane.jpl.nasa.gov/,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,, deep-smoke-machine,Deep learning models and dataset for recognizing industrial smoke emissions.,CMU-CREATE-Lab,https://github.com/CMU-CREATE-Lab/deep-smoke-machine.git,github,"deep-learning,neural-network,machine-learning,computer-vision,smoke,air-quality,python,pytorch,smoke-recognition,citizen-science",Emission Observation and Modeling,"2023/04/21, 15:30:02",104,0,12,true,Python,CMU CREATE Lab,CMU-CREATE-Lab,"Python,Shell",http://smoke.createlab.org,"b'# deep-smoke-machine\nDeep learning models and dataset for recognizing industrial smoke emissions. The videos are from the [smoke labeling tool](https://github.com/CMU-CREATE-Lab/video-labeling-tool). The code in this repository assumes that Ubuntu 18.04 server is installed. The code is released under the BSD 3-clause license, and the dataset is released under the Creative Commons Zero (CC0) license. If you found this dataset and the code useful, we would greatly appreciate it if you could cite our paper below:\n\nYen-Chia Hsu, Ting-Hao (Kenneth) Huang, Ting-Yao Hu, Paul Dille, Sean Prendi, Ryan Hoffman, Anastasia Tsuhlares, Jessica Pachuta, Randy Sargent, and Illah Nourbakhsh. 2021. Project RISE: Recognizing Industrial Smoke Emissions. Proceedings of the AAAI Conference on Artificial Intelligence (AAAI 2021). https://ojs.aaai.org/index.php/AAAI/article/view/17739\n\n**IMPORTANT:** There was an error in implementing the non-local blocks when we wrote the paper. We are very sorry about this error. The problem has been fixed in the code in this repository. However, the result of model RGB-NL in Table 7 in the paper is incorrect. We are working on re-running the code and will submit a corrected version of the paper to arXiv.\n\n![This figure shows different types of videos (high-opacity smoke, low-opacity smoke, steam, and steam with smoke).](back-end/data/dataset/2020-02-24/smoke-type.gif)\n\nThe following figures show how the [I3D model](https://arxiv.org/abs/1705.07750) recognizes industrial smoke. The heatmaps (red and yellow areas on top of the images) indicate where the model thinks smoke emissions are. The examples are from the testing set with different camera views, which means that the model never sees these views at the training stage. These visualizations are generated by using the [Grad-CAM](https://arxiv.org/abs/1610.02391) technique. 
The x-axis indicates time.\n\n![Example of the smoke recognition result.](back-end/data/dataset/2020-02-24/0-1-2019-01-17-6007-928-6509-1430-180-180-3906-1547732890-1547733065-grad-cam.png)\n\n![Example of the smoke recognition result.](back-end/data/dataset/2020-02-24/0-7-2019-01-11-3544-899-4026-1381-180-180-7891-1547236155-1547236330-grad-cam.png)\n\n![Example of the smoke recognition result.](back-end/data/dataset/2020-02-24/1-0-2018-08-24-3018-478-3536-996-180-180-8732-1535140050-1535140315-grad-cam.png)\n\n### Table of Content\n- [Install Nvidia drivers, cuda, and cuDNN](#install-nvidia)\n- [Setup this tool](#setup-tool)\n- [Use this tool](#use-this-tool)\n- [Code infrastructure](#code-infrastructure)\n- [Dataset](#dataset)\n- [Pretrained models](#pretrained-models)\n- [Deploy models to recognize smoke](#deploy-models-to-recognize-smoke)\n- [Acknowledgements](#acknowledgements)\n\n# Install Nvidia drivers, cuda, and cuDNN\nDisable the nouveau driver.\n```sh\nsudo vim /etc/modprobe.d/blacklist.conf\n# Add the following to this file\n# Blacklist nouveau driver (for nvidia driver installation)\nblacklist nouveau\nblacklist lbm-nouveau\noptions nouveau modeset=0\nalias nouveau off\nalias lbm-nouveau off\n```\nRegenerate the kernel initramfs.\n```sh\nsudo update-initramfs -u\nsudo reboot now\n```\nRemove old nvidia drivers.\n```\n# For drivers that are installed using sudo apt-get\nsudo apt-get remove --purge \'^nvidia-.*\'\nsudo apt-get autoremove\n\n# For drivers that are installed from NVIDIA website file\nsudo nvidia-uninstall\n```\nIf using a desktop version of Ubuntu (not the server version), run the following:\n```\nsudo apt-get install ubuntu-desktop # only for desktop version, not server version\n```\nInstall cuda and the nvidia driver from [Nvidia\'s website](https://developer.nvidia.com/cuda-toolkit). Old versions of cuda can be found [here](https://developer.nvidia.com/cuda-toolkit-archive).\n```sh\nsudo apt install build-essential\nsudo apt-get install linux-headers-$(uname -r)\nwget https://developer.download.nvidia.com/compute/cuda/11.6.2/local_installers/cuda_11.6.2_510.47.03_linux.run\nsudo sh cuda_11.6.2_510.47.03_linux.run\n```\nCheck if Nvidia driver is installed. Should be no nouveau.\n```sh\nsudo nvidia-smi\ndpkg -l | grep -i nvidia\nlsmod | grep -i nvidia\nlspci | grep -i nvidia\nlsmod | grep -i nouveau\ndpkg -l | grep -i nouveau\n```\nAdd cuda runtime library.\n```sh\nsudo bash -c ""echo /usr/local/cuda/lib64/ > /etc/ld.so.conf.d/cuda.conf""\nsudo ldconfig\n```\nAdd cuda environment path.\n```sh\nsudo vim /etc/environment\n# add :/usr/local/cuda/bin (including the "":"") at the end of the PATH=""/[some_path]:/[some_path]"" string (inside the quotes)\nsudo reboot now\n```\nCheck cuda installation. This step is optional.\n```sh\ncd /usr/local/cuda/samples\nsudo make\ncd /usr/local/cuda/samples/bin/x86_64/linux/release\n./deviceQuery\n```\nInstall cuDNN. Documentation can be found on [Nvidia\'s website](https://docs.nvidia.com/deeplearning/sdk/cudnn-install/index.html#install-linux). Visit [Nvidia\'s page](https://developer.nvidia.com/cudnn) to download cuDNN to your local machine. Then, move the file to the Ubuntu server (using the rsync command below) and follow the instructions on the website to unzip the cuDNN package and copy the files into the CUDA toolkit directory.\n```sh\nrsync -av /[path_on_local]/cudnn-linux-x86_64-8.6.0.163_cuda11-archive.tar.xz [user_name]@[server_name]:[path_on_server]\n```\n \n# Setup this tool\nInstall conda. 
This assumes that Ubuntu is installed. Detailed documentation is [here](https://conda.io/projects/conda/en/latest/user-guide/install/index.html). First visit [here](https://conda.io/miniconda.html) to obtain the downloading path. The following script installs conda for all users:\n```sh\nwget https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh\nsudo sh Miniconda3-latest-Linux-x86_64.sh -b -p /opt/miniconda3\n\nsudo vim /etc/bash.bashrc\n# Add the following lines to this file\nexport PATH=""/opt/miniconda3/bin:$PATH""\n. /opt/miniconda3/etc/profile.d/conda.sh\n\nsource /etc/bash.bashrc\n```\nFor Mac OS, I recommend installing conda by using [Homebrew](https://brew.sh/).\n```sh\nbrew cask install miniconda\necho \'export PATH=""/usr/local/Caskroom/miniconda/base/bin:$PATH""\' >> ~/.bash_profile\necho \'. /usr/local/Caskroom/miniconda/base/etc/profile.d/conda.sh\' >> ~/.bash_profile\nsource ~/.bash_profile\n```\nClone this repository and set the permission.\n```sh\ngit clone --recursive https://github.com/CMU-CREATE-Lab/deep-smoke-machine.git\nsudo chown -R $USER deep-smoke-machine/\nsudo addgroup [group_name]\nsudo usermod -a -G [group_name] [user_name]\ngroups [user_name]\nsudo chmod -R 775 deep-smoke-machine/\nsudo chgrp -R [group_name] deep-smoke-machine/\n```\nTo make git ignore permission changes:\n```sh\n# For only this repository\ngit config core.fileMode false\n\n# For globally\ngit config --global core.fileMode false\n```\nCreate a conda environment and install packages. It is important to install pip first inside the newly created conda environment.\n```sh\nconda create -n deep-smoke-machine\nconda activate deep-smoke-machine\nconda install python=3.9\nconda install pip\nwhich pip # make sure this is the pip inside the deep-smoke-machine environment\n```\nInstall PyTorch by checking the command on the [PyTorch website](https://pytorch.org). An example for Ubuntu is below:\n```sh\nconda install pytorch torchvision torchaudio cudatoolkit=11.6 -c pytorch -c conda-forge\n```\nInstall packages.\n```sh\nsh deep-smoke-machine/back-end/install_packages.sh\n```\nIf the environment already exists and you want to remove it before installing packages, use the following:\n```sh\nconda env remove -n deep-smoke-machine\n```\nUpdate the optical_flow submodule.\n```sh\ncd deep-smoke-machine/back-end/www/optical_flow/\ngit submodule update --init --recursive\ngit checkout master\n```\nInstall system packages for OpenCV.\n```sh\nsudo apt update\nsudo apt install -y libsm6 libxext6 libxrender-dev\n```\n\n# Use this tool\nTo use our publicly released dataset (a snapshot of the [smoke labeling tool](http://smoke.createlab.org/) on 2/24/2020), we include the [metadata_02242020.json](back-end/data/dataset/2020-02-24/metadata_02242020.json) file under the deep-smoke-machine/back-end/data/dataset/ folder. You need to copy, move, and rename this file to deep-smoke-machine/back-end/data/metadata.json.\n```sh\ncd deep-smoke-machine/back-end/data/\ncp dataset/2020-02-24/metadata_02242020.json metadata.json\n```\nFor researchers in our team, if you wish to update the dataset, you need to obtain a user token from the [smoke labeling tool](https://smoke.createlab.org/gallery.html) and put the user_token.js file in the deep-smoke-machine/back-end/data/ directory. You need permissions from the system administrator to download the user token. After getting the token, get the video metadata (using the command below). 
This will create a metadata.json file under deep-smoke-machine/back-end/data/.\n```sh\npython get_metadata.py confirm\n```\nSplit the metadata into three sets: train, validation, and test. This will create a deep-smoke-machine/back-end/data/split/ folder that contains all splits, as indicated in our paper. The method for splitting the dataset will be explained in the next ""Dataset"" section.\n```sh\npython split_metadata.py confirm\n```\nDownload all videos in the metadata file to deep-smoke-machine/back-end/data/videos/. We provide a shell script (see [bg.sh](back-end/www/bg.sh)) to run the python script in the background using the [screen command](https://www.gnu.org/software/screen/manual/html_node/index.html).\n```sh\npython download_videos.py\n\n# Background script (run in the background using the ""screen"" command)\nsh bg.sh python download_videos.py\n```\nHere are some tips for the screen command:\n```sh\n# List currently running screen names\nsudo screen -ls\n\n# Go into a screen\nsudo screen -x [NAME_FROM_ABOVE_COMMAND] (e.g. sudo screen -x 33186.download_videos)\n# Inside the screen, use CTRL+C to terminate the screen\n# Or use CTRL+A+D to detach the screen and send it to the background\n\n# Terminate all screens\nsudo screen -X quit\n\n# Keep looking at the screen log\ntail -f screenlog.0\n```\nProcess and save all videos into RGB frames (under deep-smoke-machine/back-end/data/rgb/) and optical flow frames (under deep-smoke-machine/back-end/data/flow/). Because computing optical flow takes a very long time, by default, this script will only process RGB frames. If you need the optical flow frames, change the flow_type to 1 in the [process_videos.py](back-end/www/process_videos.py) script.\n```sh\npython process_videos.py\n\n# Background script (run in the background using the ""screen"" command)\nsh bg.sh python process_videos.py\n```\nExtract [I3D features](https://github.com/piergiaj/pytorch-i3d) under deep-smoke-machine/back-end/data/i3d_features_rgb/ and deep-smoke-machine/back-end/data/i3d_features_flow/. Notice that you need to process the optical flow frames in the previous step to run the i3d-flow model.\n```sh\npython extract_features.py [method] [optional_model_path]\n\n# Extract features from pretrained i3d\npython extract_features.py i3d-rgb\npython extract_features.py i3d-flow\n\n# Extract features from a saved i3d model\npython extract_features.py i3d-rgb ../data/saved_i3d/ecf7308-i3d-rgb/model/16875.pt\npython extract_features.py i3d-flow ../data/saved_i3d/af00751-i3d-flow/model/30060.pt\n\n# Background script (run in the background using the ""screen"" command)\nsh bg.sh python extract_features.py i3d-rgb\nsh bg.sh python extract_features.py i3d-flow\n```\nTrain the model with cross-validation on all dataset splits, using different hyper-parameters. The model will be trained on the training set and validated on the validation set. Pretrained weights are obtained from the [pytorch-i3d repository](https://github.com/piergiaj/pytorch-i3d). By default, the information of the trained I3D model will be placed in the deep-smoke-machine/back-end/data/saved_i3d/ folder. For the description of the models, please refer to our paper. 
Note that by default the PyTorch [DistributedDataParallel](https://pytorch.org/tutorials/intermediate/ddp_tutorial.html) GPU parallel computing is enabled (see [i3d_learner.py](back-end/www/i3d_learner.py)).\n```sh\npython train.py [method] [optional_model_path]\n\n# Use I3D features + SVM\npython train.py svm-rgb-cv-1\n\n# Use Two-Stream Inflated 3D ConvNet\npython train.py i3d-rgb-cv-1\n\n# Background script (run in the background using the ""screen"" command)\nsh bg.sh python train.py i3d-rgb-cv-1\n```\nTest the performance of a model on the test set. This step will also generate summary videos for each cell in the confusion matrix (true positive, true negative, false positive, and false negative).\n```sh\npython test.py [method] [model_path]\n\n# Use I3D features + SVM\npython test.py svm-rgb-cv-1 ../data/saved_svm/445cc62-svm-rgb/model/model.pkl\n\n# Use Two-Stream Inflated 3D ConvNet\npython test.py i3d-rgb-cv-1 ../data/saved_i3d/ecf7308-i3d-rgb/model/16875.pt\n\n# Background script (run in the background using the ""screen"" command)\nsh bg.sh python test.py i3d-rgb-cv-1 ../data/saved_i3d/ecf7308-i3d-rgb/model/16875.pt\n```\nRun [Grad-CAM](https://arxiv.org/abs/1610.02391) to visualize the areas in the videos that the model is looking at.\n```sh\npython grad_cam_viz.py i3d-rgb [model_path]\n\n# Background script (run in the background using the ""screen"" command)\nsh bg.sh python grad_cam_viz.py i3d-rgb [model_path]\n```\nAfter model training and testing, the folder structure will look like the following:\n```\n└── saved_i3d # this corresponds to deep-smoke-machine/back-end/data/saved_i3d/\n    └── 549f8df-i3d-rgb-s1 # the name of the model, s1 means split 1\n        ├── cam # the visualization using Grad-CAM\n        ├── log # the log when training models\n        ├── metadata # the metadata of the dataset split\n        ├── model # the saved models\n        ├── run # the saved information for TensorBoard\n        └── viz # the sampled videos for each cell in the confusion matrix\n```\nIf you want to see the training and testing results on [TensorBoard](https://pytorch.org/docs/stable/tensorboard.html), run the following and go to the stated URL in your browser.\n```\ncd deep-smoke-machine/back-end/data/\ntensorboard --logdir=saved_i3d\n```\nRecommended training strategy:\n1. Set an initial learning rate (e.g., 0.1)\n2. Keep this learning rate and train the model until the training error decreases too slowly (or fluctuates) or until the validation error increases (a sign of overfitting)\n3. Decrease the learning rate (e.g., by a factor of 10)\n4. Load the best model weight from the ones that were trained using the previous learning rate\n5. Repeat steps 2, 3, and 4 until convergence\n\n# Code infrastructure\nThis section explains the code infrastructure related to the I3D model training and testing in the [deep-smoke-machine/back-end/www/](back-end/www/) folder. Later in this section, I will describe how to build your own model and integrate it with the current pipeline. This code assumes that you are familiar with the [PyTorch deep learning framework](https://pytorch.org/). 
If you do not know PyTorch, I recommend checking [their tutorial page](https://pytorch.org/tutorials/) first.\n- [base_learner.py](back-end/www/base_learner.py)\n - The abstract class for creating model learners. You will need to implement the fit and test function. This script provides shared functions, such as model loading, model saving, data augmentation, and progress logging.\n- [i3d_learner.py](back-end/www/i3d_learner.py)\n - This script inherits the base_learner.py script for training the I3D models. This script contains code for back-propagation (e.g., loss function, learning rate scheduler, video batch loading) and GPU parallel computing (PyTorch DistributedDataParallel).\n- [check_models.py](back-end/www/check_models.py)\n - Check if a developed model runs in simple cases. This script is used for debugging when developing new models.\n- [smoke_video_dataset.py](back-end/www/smoke_video_dataset.py)\n - Definition of the dataset. This script inherits the PyTorch Dataset class for creating the DataLoader, which can be used to provide batches iteratively when training the models.\n- [opencv_functional.py](back-end/www/opencv_functional.py)\n - A special utility function that mimics [torchvision.transforms.functional](https://pytorch.org/docs/stable/_modules/torchvision/transforms/functional.html), designed for processing video frames and augmenting video data.\n- [video_transforms.py](back-end/www/video_transforms.py)\n - A special utility function that mimics [torchvision.transforms.transforms](https://pytorch.org/docs/stable/_modules/torchvision/transforms/transforms.html), designed for processing video frames and augmenting video data.\n- [deep-smoke-machine/back-end/www/model/](back-end/www/model/)\n - The place to put all models (e.g., I3D, Non-Local modules, Timeception modules, Temporal Shift modules, LSTM).\n\nIf you want to develop your own model, here are the steps that I recommend.\n1. Play with the check_models.py script to understand the input and output dimensions.\n2. Create your own model and place it in the deep-smoke-machine/back-end/www/model/ folder. You can take a look at other models to get an idea about how to write the code.\n3. Import your model to the check_models.py script, then run the script to debug your model.\n4. If you need a specific data augmentation pipeline, edit the get_transform function in the base_learner.py file. Depending on your needs, you may also need to edit the opencv_functional.py and video_transforms.py files.\n5. Copy the i3d_learner.py file, import your model, and modify the code to suit your needs. Make sure that you import your customized learner class in the train.py and test.py files.\n\n# Dataset\nWe include our publicly released dataset (a snapshot of the [smoke labeling tool](http://smoke.createlab.org/) on 2/24/2020) [metadata_02242020.json](back-end/data/dataset/2020-02-24/metadata_02242020.json) file under the deep-smoke-machine/back-end/data/dataset/ folder. The JSON file contains an array, with each element in the array representing the metadata for a video. 
Each element is a dictionary with keys and values, explained below:\n- camera_id\n - ID of the camera (0 means [clairton1](http://mon.createlab.org/#v=3703.5,970,0.61,pts&t=456.42&ps=25&d=2020-04-06&s=clairton1&bt=20200406&et=20200407), 1 means [braddock1](http://mon.createlab.org/#v=2868.5,740.5,0.61,pts&t=540.67&ps=25&d=2020-04-07&s=braddock1&bt=20200407&et=20200408), and 2 means [westmifflin1](http://mon.createlab.org/#v=1722.89321,1348.42994,0.806,pts&t=704.33&ps=25&d=2020-04-07&s=westmifflin1&bt=20200407&et=20200408))\n- view_id\n - ID of the cropped view from the camera\n - Each camera produces a panorama, and each view is cropped from this panorama (will be explained later in this section)\n- id\n - Unique ID of the video clip\n- label_state\n - State of the video label produced by the citizen science volunteers (will be explained later in this section)\n- label_state_admin\n - State of the video label produced by the researchers (will be explained later in this section)\n- start_time\n - Starting epoch time (in seconds) when capturing the video, corresponding to the real-world time\n- url_root\n - URL root of the video; needs to be combined with url_part to get the full URL (url_root + url_part)\n- url_part\n - URL part of the video; needs to be combined with url_root to get the full URL (url_root + url_part)\n- file_name\n - File name of the video, for example 0-1-2018-12-13-6007-928-6509-1430-180-180-6614-1544720610-1544720785\n - The format of the file_name is [camera_id]-[view_id]-[year]-[month]-[day]-[bound_left]-[bound_top]-[bound_right]-[bound_bottom]-[video_height]-[video_width]-[start_frame_number]-[start_epoch_time]-[end_epoch_time]\n - bound_left, bound_top, bound_right, and bound_bottom mean the bounding box of the video clip in the panorama\n\nNote that the url_root and url_part point to videos with 180 by 180 resolutions. We also provide a higher resolution (320 by 320) version of the videos. Replace the ""/180/"" with ""/320/"" in the url_root, and also replace the ""-180-180-"" with ""-320-320-"" in the url_part. For example, see the following:\n- URL for the 180 by 180 version: https://smoke.createlab.org/videos/180/2019-06-24/0-7/0-7-2019-06-24-3504-1067-4125-1688-180-180-9722-1561410615-1561410790.mp4\n- URL for the 320 by 320 version: https://smoke.createlab.org/videos/320/2019-06-24/0-7/0-7-2019-06-24-3504-1067-4125-1688-320-320-9722-1561410615-1561410790.mp4\n\nEach video is reviewed by at least two citizen science volunteers (or one researcher who received the [smoke reading training](https://www.eta-is-opacity.com/resources/method-9/)). Our paper describes the details of the labeling and quality control mechanism. 
The state of the label (label_state and label_state_admin) in the metadata_02242020.json is briefly explained below.\n- 23 : strong positive\n - Two volunteers both agree (or one researcher says) that the video has smoke.\n- 16 : strong negative\n - Two volunteers both agree (or one researcher says) that the video does not have smoke.\n- 19 : weak positive\n - Two volunteers have different answers, and the third volunteer says that the video has smoke.\n- 20 : weak negative\n - Two volunteers have different answers, and the third volunteer says that the video does not have smoke.\n- 5 : maybe positive\n - One volunteer says that the video has smoke.\n- 4 : maybe negative\n - One volunteer says that the video does not have smoke.\n- 3 : has discord\n - Two volunteers have different answers (one says yes, and another one says no).\n- -1 : no data, no discord\n - No data. If label_state_admin is -1, it means that the label is produced solely by citizen science volunteers. If label_state is -1, it means that the label is produced solely by researchers. Otherwise, the label is jointly produced by both citizen science volunteers and researchers. Please refer to our paper about these three cases.\n\nAfter running the [split_metadata.py](back-end/www/split_metadata.py) script, the ""label_state"" and ""label_state_admin"" keys in the dictionary will be aggregated into the final label, represented by the new ""label"" key (see the JSON files in the generated deep-smoke-machine/back-end/data/split/ folder; a toy sketch of one possible aggregation is shown below). Positive (value 1) and negative (value 0) labels indicate whether the video clip has smoke emissions or not, respectively. \n\nAlso, the dataset will be divided into several splits, based on camera views or dates. The file names (without "".json"" file extension) are listed below. Splits S0, S1, S2, S3, S4, and S5 correspond to the ones indicated in the paper.\n\n| Split | Train | Validate | Test |\n| --- | --- | --- | --- |\n| S0 | metadata_train_split_0_by_camera | metadata_validation_split_0_by_camera | metadata_test_split_0_by_camera |\n| S1 | metadata_train_split_1_by_camera | metadata_validation_split_1_by_camera | metadata_test_split_1_by_camera |\n| S2 | metadata_train_split_2_by_camera | metadata_validation_split_2_by_camera | metadata_test_split_2_by_camera |\n| S3 | metadata_train_split_by_date | metadata_validation_split_by_date | metadata_test_split_by_date |\n| S4 | metadata_train_split_3_by_camera | metadata_validation_split_3_by_camera | metadata_test_split_3_by_camera |\n| S5 | metadata_train_split_4_by_camera | metadata_validation_split_4_by_camera | metadata_test_split_4_by_camera |\n\nThe following table shows the content in each split, except S3. The splitting strategy is that each view will be present in the testing set at least once. Also, the camera views that monitor different industrial facilities (view 1-0, 2-0, 2-1, and 2-2) are always on the testing set. 
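Returning to the label aggregation mentioned above: as a rough, hypothetical sketch (the authoritative logic lives in [split_metadata.py](back-end/www/split_metadata.py); the rules below are a simplified reading of the state list, not the repository\'s exact code), the states could be collapsed into binary labels like this:\n\n```python\n# Hypothetical simplification of aggregating label states into binary labels.\n# The authoritative logic is in split_metadata.py; this is only a sketch.\nPOSITIVE = {23, 19}  # strong/weak positive -> has smoke\nNEGATIVE = {16, 20}  # strong/weak negative -> no smoke\n\ndef aggregate(label_state_admin, label_state):\n    # Assume researcher-produced states take priority over volunteer ones.\n    for state in (label_state_admin, label_state):\n        if state in POSITIVE:\n            return 1\n        if state in NEGATIVE:\n            return 0\n    return None  # maybe/discord/no-data: excluded from the final dataset\n\nvideos = [{'label_state_admin': 23, 'label_state': -1},\n          {'label_state_admin': -1, 'label_state': 20}]\nprint([aggregate(v['label_state_admin'], v['label_state']) for v in videos])  # [1, 0]\n```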
Examples of the camera views will be provided later in this section.\n\n| View | S0 | S1 | S2 | S4 | S5 |\n| --- | --- | --- | --- | --- | --- |\n| 0-0 | Train | Train | Test | Train | Train |\n| 0-1 | Test | Train | Train | Train | Train |\n| 0-2 | Train | Test | Train | Train | Train |\n| 0-3 | Train | Train | Validate | Train | Test |\n| 0-4 | Validate | Train | Train | Test | Validate |\n| 0-5 | Train | Validate | Train | Train | Test |\n| 0-6 | Train | Train | Test | Train | Validate |\n| 0-7 | Test | Train | Train | Validate | Train |\n| 0-8 | Train | Train | Validate | Test | Train |\n| 0-9 | Train | Test | Train | Validate | Train |\n| 0-10 | Validate | Train | Train | Test | Train |\n| 0-11 | Train | Validate | Train | Train | Test |\n| 0-12 | Train | Train | Test | Train | Train |\n| 0-13 | Test | Train | Train | Train | Train |\n| 0-14 | Train | Test | Train | Train | Train |\n| 1-0 | Test | Test | Test | Test | Test |\n| 2-0 | Test | Test | Test | Test | Test |\n| 2-1 | Test | Test | Test | Test | Test |\n| 2-2 | Test | Test | Test | Test | Test |\n\nThe following shows the split of S3 by time sequence, where the farthermost 18 days are used for training, the middle 2 days are used for validation, and the nearest 10 days are used for testing. You can find our camera data by date on [our air pollution monitoring network](http://mon.createlab.org/).\n- Training set of S3\n - 2018-05-11, 2018-06-11, 2018-06-12, 2018-06-14, 2018-07-07, 2018-08-06, 2018-08-24, 2018-09-03, 2018-09-19, 2018-10-07, 2018-11-10, 2018-11-12, 2018-12-11, 2018-12-13, 2018-12-28, 2019-01-11, 2019-01-17, 2019-01-18\n- Validation set of S3\n - 2019-01-22, 2019-02-02\n- Testing set of S3\n - 2019-02-03, 2019-02-04, 2019-03-14, 2019-04-01, 2019-04-07, 2019-04-09, 2019-05-15, 2019-06-24, 2019-07-26, 2019-08-11\n\nThe dataset contains 12,567 clips with 19 distinct views from cameras on three sites that monitored three different industrial facilities. The clips are from 30 days that span four seasons across two years, all in the daytime. The following provides examples and the distribution of labels for each camera view, with the format [camera_id]-[view_id]:\n\n![This figure shows a part of the dataset.](back-end/data/dataset/2020-02-24/dataset_1.png)\n\n![This figure shows a part of the dataset.](back-end/data/dataset/2020-02-24/dataset_2.png)\n\n![This figure shows a part of the dataset.](back-end/data/dataset/2020-02-24/dataset_3.png)\n\n![This figure shows a part of the dataset.](back-end/data/dataset/2020-02-24/dataset_4.png)\n\nWe made sure that we were not invading the privacy of surrounding residential neighbors. Areas in the videos that look inside house windows were cropped or blocked. Also, there is no law in our region prohibiting the monitoring of industrial activities.\n\n# Pretrained models\nWe release two of our best baseline models: [RGB-I3D](back-end/data/pretrained_models/RGB-I3D-S3.pt) and [RGB-TC](https://github.com/CMU-CREATE-Lab/deep-smoke-machine/blob/master/back-end/data/pretrained_models/RGB-TC-S3.pt), both trained and tested on split S3 using four NVIDIA GTX 1080 Ti GPUs. Please feel free to finetune your models based on our baseline. Our paper describes the details of these models. RGB-I3D uses the [I3D ConvNet architecture with Inception-v1 layers](https://arxiv.org/pdf/1705.07750.pdf) and RGB frame input. RGB-TC is finetuned from RGB-I3D, with additional [Timeception](https://arxiv.org/pdf/1812.01289.pdf) layers. 
An example usage is shown below:\n```python\n# Import I3D\nfrom i3d_learner import I3dLearner\n\n# Initialize the model\nmodel = I3dLearner(\n mode=""rgb"",\n augment=True,\n p_frame=""../data/rgb/"",\n use_tc=True,\n freeze_i3d=True,\n batch_size_train=8,\n milestones=[1000, 2000],\n num_steps_per_update=1)\n\n# Finetune the RGB-TC model from the RGB-I3D model\nmodel.fit(\n p_model=""../data/pretrained_models/RGB-I3D-S3.pt"",\n model_id_suffix=""-s3"",\n p_metadata_train=""../data/split/metadata_train_split_by_date.json"",\n p_metadata_validation=""../data/split/metadata_validation_split_by_date.json"",\n p_metadata_test=""../data/split/metadata_test_split_by_date.json"")\n```\n\n# Deploy models to recognize smoke\nWe provide an example script ([recognize_smoke.py](back-end/www/recognize_smoke.py)) to show how you can deploy the trained models to recognize industrial smoke emissions. This script only works for videos on our camera monitoring system ([http://mon.createlab.org/](http://mon.createlab.org/)) or others created using the [timemachine-creator](https://github.com/CMU-CREATE-Lab/timemachine-creator) and [timemachine-viewer](https://github.com/CMU-CREATE-Lab/timemachine-viewer). In sum, the script takes a list of video URLs (examples can be found [here](https://github.com/CMU-CREATE-Lab/deep-smoke-machine/blob/improve-documentation/back-end/data/production_url_list/2019-01-03.json)), gets their date and camera view boundary information, generates cropped clips, and runs the model on these clips to recognize smoke emissions. Here are the steps:\n\nFirst, for a date that you want to process, create a JSON file under the [back-end/data/production_url_list/](back-end/data/production_url_list/) folder to add video URLs. The format of the file name must be ""YYYY-MM-DD.json"" (such as ""2019-01-03.json""). If the file for that date already exists, just open it and add more video URLs. Each video URL is specified using a dictionary, and you need to put the video URLs in a list in each JSON file. For example:\n```json\n[{\n ""url"": ""https://thumbnails-v2.createlab.org/thumbnail?root=https://tiles.cmucreatelab.org/ecam/timemachines/clairton1/2019-01-03.timemachine/&width=180&height=180&startFrame=9716&format=mp4&fps=12&tileFormat=mp4&startDwell=0&endDwell=0&boundsLTRB=6304,884,6807,1387&nframes=36"",\n ""cam_id"": 0,\n ""view_id"": 0\n },{\n ""url"": ""https://thumbnails-v2.createlab.org/thumbnail?root=https://tiles.cmucreatelab.org/ecam/timemachines/clairton1/2019-01-03.timemachine/&width=180&height=180&startFrame=9716&format=mp4&fps=12&tileFormat=mp4&startDwell=0&endDwell=0&boundsLTRB=6007,928,6509,1430&nframes=36"",\n ""cam_id"": 0,\n ""view_id"": 1\n}]\n```\nEach URL specifies a cropped video clip and is obtained using the thumbnail tool on our camera monitoring system ([http://mon.createlab.org/](http://mon.createlab.org/)). To access the thumbnail tool, click the ""share"" button at the bottom-right near the timeline slider and then select the ""Share as image or video"" tab. A tutorial about how to use the thumbnail tool for sharing videos can be found [here](https://vimeo.com/140196813#t=415s). The cam_id and view_id correspond to the camera views presented in the ""Dataset"" section of this README. For example, if cam_id is 0 and view_id is 1, the camera view is 0-1, as shown in [this graph](back-end/data/dataset/2020-02-24/dataset_1.png). 
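If you want to double-check a URL before adding it, the crop geometry and frame window are all in the query string. A minimal sketch using only the parameters visible in the example above (this is not an official helper in recognize_smoke.py):

```python
from urllib.parse import urlparse, parse_qs

# Pull the crop bounds and frame window out of a thumbnail-tool URL.
url = ("https://thumbnails-v2.createlab.org/thumbnail"
       "?root=https://tiles.cmucreatelab.org/ecam/timemachines/clairton1/2019-01-03.timemachine/"
       "&width=180&height=180&startFrame=9716&format=mp4&fps=12&tileFormat=mp4"
       "&startDwell=0&endDwell=0&boundsLTRB=6304,884,6807,1387&nframes=36")

query = parse_qs(urlparse(url).query)
left, top, right, bottom = (int(v) for v in query["boundsLTRB"][0].split(","))
start_frame = int(query["startFrame"][0])
n_frames = int(query["nframes"][0])
print(left, top, right, bottom, start_frame, n_frames)  # 6304 884 6807 1387 9716 36
```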
After creating the JSON files or adding video URLs to existing JSON files, run the following to perform a sanity check, which will identify problems related to the camera data and attempt to fix them:\n```sh\npython recognize_smoke.py check_and_fix_urls\n```\nNext, run the following in the background (this step will take a long time) to process each clip and predict the probability of having smoke:\n```sh\nsh bg.sh python recognize_smoke.py process_all_urls\n```\nThis will create a ""production"" folder under [back-end/data/](back-end/data) to store the processed results. Then, run the following to identify events based on the probabilities of having smoke:\n```sh\npython recognize_smoke.py process_events\n```\nThis will create an ""event"" folder under [back-end/data/](back-end/data) to store the links to the video clips that are identified as having smoke emissions. To visualize the smoke events, copy the folder (with the same folder name, ""event"") to the front-end of the [video labeling tool](https://github.com/CMU-CREATE-Lab/video-labeling-tool/tree/master/front-end). Then, the [event page](https://smoke.createlab.org/event.html?date=2019-04-02&camera=0&view=all) will be able to access the ""event"" folder and show the results. You may also want to run the following to scan the video clips so that users do not need to wait for the [thumbnail server](https://thumbnails-v2.createlab.org/status) to render videos:\n```sh\nsh bg.sh python recognize_smoke.py scan_urls\n```\n\n# Acknowledgements\nWe thank [GASP](https://gasp-pgh.org/) (Group Against Smog and Pollution), [Clean Air Council](https://cleanair.org/), [ACCAN](https://accan.org/) (Allegheny County Clean Air Now), [Breathe Project](https://breatheproject.org/), [NVIDIA](https://developer.nvidia.com/academic_gpu_seeding), and the [Heinz Endowments](http://www.heinz.org/) for the support of this research. 
We also greatly appreciate the help of our volunteers, which includes labeling videos and providing feedback in system development.\n'",",https://arxiv.org/abs/1705.07750,https://arxiv.org/abs/1610.02391,https://arxiv.org/abs/1610.02391,https://arxiv.org/pdf/1705.07750.pdf,https://arxiv.org/pdf/1812.01289.pdf","2019/04/30, 19:10:41",1639,BSD-3-Clause,2,504,"2023/03/27, 23:09:22",0,3,13,3,212,0,0.0,0.1302605210420842,"2020/05/18, 21:16:24",v1.0,0,2,false,,false,false,,,https://github.com/CMU-CREATE-Lab,cmucreatelab.org,"Pittsburgh, PA, USA",,,https://avatars.githubusercontent.com/u/3278646?v=4,,, Ribbit Network Frog Sensor,"The sensor for the world's largest crowdsourced network of open-source, low-cost, CO2 Gas Detection Sensors.",Ribbit-Network,https://github.com/Ribbit-Network/ribbit-network-frog-hardware.git,github,"climate,co2-sensors,balena,influxdb,raspberry-pi",Emission Observation and Modeling,"2023/08/29, 19:46:07",87,0,22,true,,Ribbit Network,Ribbit-Network,,https://www.ribbitnetwork.org/,"b'# Ribbit Network Frog Sensor\n[![Chat](https://img.shields.io/discord/870113194289532969.svg?style=flat-square&colorB=758ED3)](https://discord.gg/vq8PkDb2TC)\n\n[![All Contributors](https://img.shields.io/badge/all_contributors-16-orange.svg?style=flat-square)](#contributors-)\n\n[This project will create the world\'s largest Greenhouse Gas Emissions dataset that will empower anyone to join in the work on climate and provide informed data for climate action.](https://ribbitnetwork.org/)\n\nRibbit Network is a large network of open-source, low-cost, Greenhouse Gas (CO2 and hopefully other gasses in the future) Detection Sensors. These sensor units will be sold by the Ribbit Network and will upload their data to the cloud, creating the world\'s most complete Greenhouse Gas dataset.\n\nThis repository contains the design files, software, documentation, and assembly instructions for the Frog Sensor.\n\n![OSHW](images/oshw-logo.svg)\n\n## Frog Sensors\nOur \xe2\x80\x9cFrogs\xe2\x80\x9d are tiny sensors that you can build and deploy at your own home! It\xe2\x80\x99s a small, \nopen-source device that measures the amount of CO2 in the air using a tiny laser.\nIt\'s easy to set up and will constantly record valuable scientific data on our climate.\n\nFrogs are one of the species that are most affected by climate change! \nJust like our sensors, they ribbit to one another to create a powerful network \nof data about the environment.\n\n![Frog Sensor](images/finished_frog.gif)\n\n## Build a Frog!\nWe encourage you to jump in and build your own Frog Sensor! It\'s recommended\nto build a V4 Frog and the instructions from this repo are rendered into a nice webpage at the link below:\n\n[Build a Frog Sensor!](https://ribbit-network.github.io/ribbit-network-frog-hardware/)\n\nThere have been four major versions of the Frog Sensor to date. 
We currently recommend building version 4 of the Frog and this repo contains the files and instructions relevant to V4.\n\n* Frog Sensor Version 4 - This version of the sensor is based on an esp32 microcontroller.\n* [Frog Sensor Version 3](https://github.com/Ribbit-Network/ribbit-network-frog-hardware/tree/hw_v3) - This version is based on a Raspberry Pi CM4 and includes a fully 3D printed enclosure.\n* [Frog Sensor Version 2](https://github.com/Ribbit-Network/ribbit-network-frog-hardware/tree/hw_v2) - This version is based on a Beaglebone black and includes a fully 3D printed enclosure.\n* [Frog Sensor Version 1](https://github.com/Ribbit-Network/ribbit-network-frog-hardware/tree/hw_v1) - This version was based on the Raspberry Pi and included an off-the-shelf enclosure\n\nSee each folder linked above for the relevant design files, CAD, and assembly instructions!\n\n## Need Help?\n[If you are not sure where to start or just want to chat join our developer discord here.](https://discord.gg/vq8PkDb2TC). You can also [start a discussion](https://github.com/Ribbit-Network/ribbit-network-frog-sensor/discussions) right here in Github.\n\n# View the Data!\nThe first prototype sensors are up and running! [Here is some real data from our sensor network!](https://dashboard.ribbitnetwork.org/) (Note this dashboard is still experimental and may be down occasionally).\n\n[See more about the cloud database here.](https://github.com/Ribbit-Network/ribbit-network-dashboard)\n\n## Questions?\n[Check out the Frequently Asked Questions section.](https://github.com/Ribbit-Network/ribbit-network-faq) If you don\'t see your question, let us know either in a Github Discussion or via Discord.\n\n## Ribbit Frog V4\nThis is the hardware repository that contains all the design files for the physical assembly of the Frog Sensor.\n\n## Bill of Materials (Parts List)\nThe hardware [Bill of Materials is located here.](ribbit_network_frog_sensor_bom.csv)\n\n\n## Mechanical CAD Files\nThe mechanical design files are hosted on Onshape. Onshape is available to use for free with public assemblies and you can create a copy of the assembly for any edits you would like to make.\n\n[Link to Onshape Assembly](https://cad.onshape.com/documents/b3e6eeabf50d585d20f25fc6/w/a4a82302d129f025f23b244a/e/c221edb50cd05a98d22970e2?renderMode=0&uiState=64a86ef7086f6a55cf1984e7)\n\n## Electronics Block Diagram\n![Block Diagram](frog_sensor.drawio.svg)\n\nThe diagram above can be edited with drawio or with the awesome [Draw.io Visual Studio Code Plugin](https://marketplace.visualstudio.com/items?itemName=hediet.vscode-drawio)\n\n## Get Involved\nAre you interested in getting more involved or leading an effort in the Ribbit Network project? We are recruiting for additional members to join the Core Team. [See the Open Roles and descriptions here.](https://ribbitnetwork.notion.site/Core-Team-Role-Postings-105df298e0634f179f8f063c01708069).\n\n## Contributing\nSee the [Issues](https://github.com/keenanjohnson/ghg-gas-cloud/issues) section of this project for the work that I\'ve currently scoped out to be done. Reach out to me if you are interested in helping out! 
The [projects section](https://github.com/Ribbit-Network/ribbit-network-frog-sensor/projects) helps detail the major efforts going on right now.\n\nWe have a [contributing guide](https://github.com/Ribbit-Network/ribbit-network-frog-sensor/blob/main/CONTRIBUTING.md) that details the process for making a contribution.\n\n[If you are not sure where to start or just want to chat, join our developer Discord here.](https://discord.gg/vq8PkDb2TC) You can also [start a discussion](https://github.com/Ribbit-Network/ribbit-network-frog-sensor/discussions) right here on GitHub.\n\n## Background Information\n[See the Wiki for background research.](https://ribbitnetwork.notion.site/Learnings-Low-cost-sensors-for-the-measurement-of-atmospheric-composition-e3d41736c49e41ad81dcdf7e16a6573b) This project is inspired by some awesome research by incredible scientists in academia.\n\n## Ribbit Network\nRibbit Network is a non-profit (501c3) creating the world\'s largest Greenhouse Gas Emissions dataset that will empower anyone to join in the work on climate and provide informed data for climate action. We\'re an all-volunteer team building everything we do in the open-source community.\n\nIf you would like to consider sponsoring Ribbit Network you can do so [via this link](https://givebutter.com/ribbitnetwork). The money is used to pay for software fees, purchase R&D hardware, and generally support the mission of Ribbit Network.\n\n## Ribbit Network Code of Conduct\nBy participating in this project, you agree to follow the Ribbit Network Code of Conduct and Anti-Harassment Policy.\nViolations can be reported anonymously by filling out this form.\n\n## Contributors ✨\n\nThanks goes to these wonderful people ([emoji key](https://allcontributors.org/docs/en/emoji-key)):\n
- Eric Audiffred: 🎨 🤔 ⚠️
- Desmond Good: 🤔 💻 📆
- Laurence Watson: ⚠️ 📖 🤔 📆
- Steven Pestana: 📖 🤔 🔣 ⚠️ 💵
- sanfk2: 💻
- Eren Rudy: ⚠️ 📖
- David Bengtson: 🤔 📆
- Lance Bantoto: 🤔 📆
- Kevin Miller: 🤔 🖋
- Marc Pous: ⚠️
- Zoltán Nagy: 💻
- eliasfallon: 📖
- Ryan: 📖
- akhilgupta1093: 💻
- outdoorclone: 📖
- Mudit Agrawal: 🎨
\n\n\n\n\n\n\nThis project follows the [all-contributors](https://github.com/all-contributors/all-contributors) specification. Contributions of any kind welcome!\n'",,"2021/06/05, 18:53:57",872,MIT,48,362,"2023/08/29, 19:46:45",4,90,184,60,57,0,0.4,0.3220858895705522,"2023/01/26, 07:31:40",hw_v4,0,17,true,custom,false,true,,,https://github.com/Ribbit-Network,https://ribbitnetwork.org/,,,,https://avatars.githubusercontent.com/u/88076953?v=4,,, FIRECAM,An online app for end-users to diagnose and explore regional differences in fire emissions from five global fire emissions inventories.,tianjialiu,https://github.com/tianjialiu/FIRECAM.git,github,"fires,emissions,google-earth-engine,modis,fire-emissions,fire-inventory,gfedv4s-emissions",Emission Observation and Modeling,"2023/09/22, 12:09:33",15,0,1,true,JavaScript,,,"JavaScript,R,Python",https://sites.google.com/view/firecam/home,"b'# FIRECAM\n[FIRECAM](https://globalfires.earthengine.app/view/firecam): Fire Inventories - Regional Evaluation, Comparison, and Metrics\n\nFIRECAM is an online app for end-users to diagnose and explore regional differences in fire emissions from five global fire emissions inventories:\n1. Global Fire Emissions Database ([GFEDv4s](https://www.globalfiredata.org/); van der Werf et al., 2017)\n2. Fire Inventory from NCAR ([FINNv1.5](http://bai.acom.ucar.edu/Data/fire); Wiedinmyer et al., 2011)\n3. Global Fire Assimilation System ([GFASv1.2](http://gmes-atmosphere.eu/about/project_structure/input_data/d_fire/); Kaiser et al., 2012)\n4. Quick Fire Emissions Dataset ([QFEDv2.5r1](https://gmao.gsfc.nasa.gov/research/science_snapshots/global_fire_emissions.php); Darmenov and da Silva, 2013)\n5. Fire Energetics and Emissions Research ([FEERv1.0-G1.2](https://feer.gsfc.nasa.gov/data/emissions/); Ichoku and Ellison, 2014)\n\nPlease see our [website](https://sites.google.com/view/firecam/home) for more information.\n\nFIRECAM can be accessed through (1) Earth Engine Apps and (2) the Google Earth Engine (GEE) Javascript playground. While EE Apps facilitates access to FIRECAM for any user (GEE account not required), accessing the FIRECAM repository in the GEE playground allows rapid exports of timeseries and additional data analysis. The latter is also a fallback option if EE Apps is running too slowly.\n\n### Ancillary Apps\n* [GFEDv4s Explorer](https://globalfires.earthengine.app/view/gfedv4s): Explore GFEDv4s emissions (1997-2016, 2017-2020 beta estimates) for burned area and all available chemical species, partitioned by land use/land cover\n - *Note*: Burned area from small fires is approximate based on the small fire fraction for emissions\n* [GFEDv4s Animated Burned Area](https://globalfires.earthengine.app/view/gfedv4s-monthly-ba-animated): Visualize the seasonality of GFEDv4s burned area. \n - *Note*: GFEDv4s BA is averaged into monthly means. Please wait until all the images load (check the layer list in the upper-right hand corner of the map) before clicking \'Play.\'\n* [GFEDv4s Explorer, with Andreae (2019) EFs](https://globalfires.earthengine.app/view/gfedv4s-andreae-2019-efs): How much does the [Andreae (2019, ACP)](https://www.atmos-chem-phys.net/19/8523/2019/acp-19-8523-2019.html) emissions factors updates impact GFEDv4s emissions?\n* [FIRMS Explorer](https://globalfires.earthengine.app/view/firms): Plot daily timeseries of near-real-time active fire counts from FIRMS/MODIS by region\n\n## FIRECAM App\n(*Earth Engine Apps, no Google Earth Engine account required*)\n

\n![banner image](https://github.com/tianjialiu/FIRECAM/blob/main/docs/imgs/FIRECAM.png)\n\n### Step 1: Time Range\n*Select a time range.* Use the start year and end year sliders to select a time range for the annual and monthly regional emissions time series charts.\n\n### Step 2: Select Bounds Type and Region/Pixel of Interest\n*Select a bounds type.* Choose 1) ""Global,"" 2) ""Basis Region,"" 3) ""Country/Sub-Region,"" 4) ""Pixel,"" 5) ""Custom,"" or 6) ""Draw.""\n1. **Global**: all grid cells within GFEDv4s bounds (*Note*: the monthly time series plot is only shown for individual years)\n2. **Basis Region**: 14 broad geographic regions from GFEDv4s (van der Werf et al., 2017).\n3. **Country/Sub-Region**: countries and sub-regions from simplified Large Scale International Boundary (LSIB) Polygons; those with negligible fire emissions were excluded\n4. **Pixel**: individual grid cells, 0.5\xc2\xb0 x 0.5\xc2\xb0 spatial resolution; the centroid of the selected grid cell is displayed on the map\n5. **Custom**: user-defined polygon using an array of longitude, latitude coordinates; the tool re-defines the polygon to match the 0.5\xc2\xb0 x 0.5\xc2\xb0 grid of the basis regions\n6. **Draw**: user-defined polygon, drawn interactively on the base map; the tool re-defines the polygon to match the 0.5\xc2\xb0 x 0.5\xc2\xb0 grid of the basis regions (see the grid-snapping sketch below)\n
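FIRECAM's actual regridding happens inside the GEE app, but the snapping idea behind the ""Custom"" and ""Draw"" options can be illustrated in a few lines. This is a sketch only; the function name and the bounding-box simplification are ours, not FIRECAM code:

```python
import math

def snap_bounds(coords, cell=0.5):
    # Round a polygon's bounding box outward to the nearest grid lines.
    lons = [lon for lon, _ in coords]
    lats = [lat for _, lat in coords]
    return (math.floor(min(lons) / cell) * cell,
            math.floor(min(lats) / cell) * cell,
            math.ceil(max(lons) / cell) * cell,
            math.ceil(max(lats) / cell) * cell)

# A rough triangle over Indonesia snaps to (94.5, -11.0, 141.5, 6.0)
print(snap_bounds([(94.7, -10.9), (141.2, 5.8), (120.3, -2.4)]))
```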

\n![banner image](https://github.com/tianjialiu/FIRECAM/blob/main/docs/imgs/basisRegions.png)\n\n### Step 3: Species\n*Select a species.* The six available species are CO2, CO, CH4, organic carbon (OC), black carbon (BC), and fine particulate matter (PM2.5).\n\n### Regional Emissions\nAfter clicking the submit button, please wait a few seconds for the default map layers and three charts to display. Note that for large regions, such as BOAS, and long time ranges, calculations for the monthly and annual time series can take up to a few minutes. The three charts (annual average from 2003-2016 and two time series charts, yearly and monthly emissions by inventory) can be viewed in a new tab and exported as tables or images. Map layers consist of emissions at 0.5\xc2\xb0 x 0.5\xc2\xb0 spatial resolution for a given species for each of the five global fire emissions inventories and the relative fire confidence metrics (described below) at 0.25\xc2\xb0 x 0.25\xc2\xb0 spatial resolution. The distribution of peatlands (0.25\xc2\xb0 x 0.25\xc2\xb0), based on GFEDv4s emissions from 2003-2016, and the MODIS land use/land cover map (500 m, MCD12Q1 C6), based on FINNv1.0 aggregated vegetation classes, are also available as map layers. (*Tip*: Zoom in or zoom out in the web browser to adjust the displayed text.)\n\n### Relative Fire Confidence Metrics\n| # | Metric | Range | Units | Description |\n| :---: | :--- | :--- | :--- | :--- |\n| 1 | BA-AFA Discrepancy | -1 to 1 | unitless | discrepancy between burned area (BA; MCD64A1) and active fire area (AFA; MxD14A1), calculated as a normalized index using the area of BA outside AFA and AFA outside BA |\n| 2 | Cloud-Haze Obscuration | 0 to 1 | unitless | degree to which clouds and/or haze obscure the land surface from satellite observations of fires during fire-prone months |\n| 3 | Burn Size/ Fragmentation | \xe2\x89\xa5 0 | km2 / fragment | average size of burned area per burn scar fragment (large, contiguous versus small, fragmented fire landscapes) |\n| 4 | Topography Variance | \xe2\x89\xa5 0 | m2 | roughness in terrain, expressed as the variance in elevation across neighboring pixels (flat versus mountainous) |\n| 5 | VIIRS FRP Outside MODIS Burn Extent | 0 to 1 | unitless | additional small fires from VIIRS (375 m), a sensor with higher spatial resolution than MODIS (500 m, 1 km) |\n\n------\n\n(*Google Earth Engine account required*)\n### Step 1: Sign up for a free Google Earth Engine account\nGoogle Earth Engine ([GEE](https://earthengine.google.com/)) is a powerful cloud-computing platform for geospatial analysis and is capable of computations with petabyte-scale datasets. To sign up, simply fill out a [form](https://signup.earthengine.google.com/) and wait for an email. GEE works best with the [Google Chrome web browser](https://www.google.com/chrome/).\n\n### Step 2: The FIRECAM online tool repository\nCopy and paste the following link in a tab in Google Chrome to enter the [GEE Javascript playground](https://code.earthengine.google.com/) and add the FIRECAM repository to your account under the read-only permissions folder in one step:\n```\nhttps://code.earthengine.google.com/?accept_repo=users/tl2581/FIRECAM\n```\nThe repository should then appear in the top-left panel under \'Reader\' as \'users/tl2581/FIRECAM\'. The GEE Javascript playground is a code editor with a map and console to display or print results.\n\n### Step 3: Diving into the GUI\nClick the \'Apps/UI_FIRECAM.js\' script in the \'users/tl2581/FIRECAM\' repository. 
The script should appear in the code editor. Click \'Run\' in the top-right corner of the code editor to activate the user interface. The repository also contains a script to export monthly and annual timeseries data (\'Exports/UI_FIRECAM_Exports.js\').\n\n## Updates\n* April 2023: updated FIRECAM, SMOKE-FIRECAM, and GFEDv4s apps with 2022 emissions; added python code to download GFAS from the new CDS API; updated readme in `fire_inv`.\n* February 2022: updated FIRECAM, SMOKE-FIRECAM, and GFEDv4s apps with 2021 emissions\n* July 2021: updated FIRECAM, SMOKE-FIRECAM, and GFEDv4s apps with 2020 emissions\n* February 2021: re-uploaded FINNv1.5 for 2018 and 2019 based on the most recent version of the annual text files; note that 2019 emissions were higher in the near-real-time files used before\n* November 2020: added url support for saving the app state, cumulative sum plot, and pan map option in FIRMS app\n* August 2020: added FIRMS ancillary app\n* July 2020: added daily timeseries for GFEDv4s app\n* April 2020: updated FIRECAM and SMOKE-FIRECAM apps with 2019 emissions\n* January 2020: updated ancillary GFEDv4s apps with preliminary 2019 emissions\n* October 2019: fixed FIRECAM emissions layers with 0.5\xc2\xb0 x 0.5\xc2\xb0 reprojection, updated FINNv1.5 emissions for 2016 with revised files from NCAR, added 2017-2018 emissions for all inventories; updated FIRECAM exports to allow user-defined polygons; added SMOKE-FIRECAM Tool\n* September 2019: added ""Draw"" option to FIRECAM and GFEDv4s apps; added GFEDv4s monthly averaged burned area animation and GFEDv4s with Andreae (2019) EFs ancillary apps\n* February 2019: added data download/processing code under the ""fire_inv"" subfolder; added ""Country/Sub-Region"" and ""Pixel"" options to FIRECAM app; created ancillary app for GFEDv4s (GFEDv4s Explorer)\n* March 2019: added ""Country/Sub-Region"" and ""Pixel"" options to FIRECAM exports\n* May 2019: added R/EE code for calculating the relative fire confidence metrics under the ""fire_metrics"" subfolder; added ""Global"" and ""Custom"" options to FIRECAM, GFEDv4s apps\n\n## Publications\n1. Liu, T., L.J. Mickley, R.S. DeFries, M.E. Marlier, M.F. Khan, M.T. Latif, and A. Karambelas (2020). Diagnosing spatial uncertainties and relative biases in global fire emissions inventories: Indonesia as regional case study. *Remote Sens. Environ.* 237, 111557. https://doi.org/10.1016/j.rse.2019.111557\n\n2. van der Werf, G.R., J.T. Randerson, L. Giglio, T.T. van Leeuwen, Y. Chen, B.M. Rogers, M. Mu, M.J.E. van Marle, D.C. Morton, G.J. Collatz, R.J. Yokelson, and P.S. Kasibhatla (2017). Global fire emissions estimates during 1997-2016. *Earth Syst. Sci. Data* 9, 697\xe2\x80\x93720. https://doi.org/10.5194/essd-9-697-2017\n\n3. Wiedinmyer, C., S.K. Akagi, R.J. Yokelson, L.K. Emmons, J.J. Orlando, and A.J. Soja (2011). The Fire INventory from NCAR (FINN): a high resolution global model to estimate the emissions from open burning. *Geosci. Model Dev.* 4, 625\xe2\x80\x93641. https://doi.org/10.5194/gmd-4-625-2011\n\n4. Kaiser, J.W., A. Heil, M.O. Andreae, A. Benedetti, N. Chubarova, L. Jones, J.J. Morcrette, M. Razinger, M.G. Schultz, M. Suttie, and G.R. van der Werf (2012). Biomass burning emissions estimated with a global fire assimilation system based on observed fire radiative power. *Biogeosciences* 9, 527\xe2\x80\x93554. https://doi.org/10.5194/bg-9-527-2012\n\n5. Darmenov, A.S. and A. da Silva (2013). 
The Quick Fire Emissions Dataset (QFED) - Documentation of versions 2.1, 2.2, and 2.4, NASA Technical Report Series on Global Modeling and Data Assimilation, Volume 32. http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.406.7724\n\n6. Ichoku, C. and L. Ellison (2014). Global top-down smoke-aerosol emissions estimation using satellite fire radiative power measurements. *Atmos. Chem. Phys.* 14, 6643\xe2\x80\x936667. https://doi.org/10.5194/acp-14-6643-2014\n'",",https://doi.org/10.1016/j.rse.2019.111557\n\n2,https://doi.org/10.5194/essd-9-697-2017\n\n3,https://doi.org/10.5194/gmd-4-625-2011\n\n4,https://doi.org/10.5194/bg-9-527-2012\n\n5,https://doi.org/10.5194/acp-14-6643-2014\n","2018/09/17, 20:15:59",1864,MIT,5,71,"2023/08/29, 19:46:45",0,0,0,0,57,0,0,0.0,,,0,1,false,,false,false,,,,,,,,,,, ESTA,"A command-line tool for processing raw emissions data into spatially and temporally-allocated emissions inventories, suitable for photochemical modeling or other analysis.",mmb-carb,https://github.com/mmb-carb/ESTA.git,github,"emissions,air-pollution,modeling,inventory,on-road,automobiles",Emission Observation and Modeling,"2023/02/02, 17:52:41",9,0,0,true,Python,Modeling and Meteorology Branch of the California Air Resources Board,mmb-carb,"Python,Shell",,"b""# ESTA\n> Emissions Spatial and Temporal Allocator\n\nESTA is a command-line tool for processing raw emissions data into spatially and temporally-allocated emissions inventories, suitable for photochemical modeling or other analysis. ESTA is an open-source, Python-based tool designed by the AQPSD branch of the [California EPA][CalEPA]'s [Air Resources Board][ARB]. Though it is a general-purpose model, it is currently only used for processing on-road inventories.\n\n\n## Recent Updates\n\nThe source code has been updated to read EMFAC2017 emissions, and has the option to output diesel PM emissions.\nThe directory NH3_data_EF17_MPO010 contains the NH3 emissions files for several years, and users can append the specific year of the NH3 emissions files to the EMFAC2017 emissions files. The scripts that append the NH3 emissions files to the EMFAC2017 emissions files are provided in the EF17_format_ld and EF17_format_hd directories.\nThe current version of the NH3 inventory is MPO010. The day-of-week fraction file 'calvad_gai_dow_factors_2012.csv' has been replaced with 'pems_dow_factors_2018.csv'.\n\nThere is another version of ESTA that uses EMFAC2021 emissions (branch EMFAC2021).\nThe source code can be downloaded using the following command:\n`git clone -b EMFAC2021 https://github.com/mmb-carb/ESTA.git`\n\n\n## ESTA Documentation\n\nThe ESTA documentation is provided as its own repository:\n\n* [The ESTA Documentation on GitHub](https://github.com/mmb-carb/ESTA_Documentation)\n\n\n## Open-Source Licence\n\nAs ESTA was developed by the California State government, the model and its documentation are part of the public domain. 
They are openly licensed under the GNU GPLv3 license, and free for all.\n\n* [GNU GPLv3 License](LICENSE)\n\n\n[ARB]: http://www.arb.ca.gov/homepage.htm\n[CalEPA]: http://www.calepa.ca.gov/\n\n""",,"2017/09/14, 15:51:02",2232,GPL-3.0,1,552,"2019/01/02, 17:40:01",4,0,16,0,1757,0,0,0.32533333333333336,,,0,3,false,,false,false,,,https://github.com/mmb-carb,https://www.arb.ca.gov/homepage.htm,"Sacramento, California",,,https://avatars.githubusercontent.com/u/31774095?v=4,,, FlyingClimate,Model the CO2 and non-CO2 effects like nitrogen oxide emissions and contrail formation to analyse aviation's total warming footprint.,milankl,https://github.com/milankl/FlyingClimate.git,github,,Emission Observation and Modeling,"2021/11/12, 11:32:02",14,0,2,false,Jupyter Notebook,,,Jupyter Notebook,,"b'# Quantifying aviation\xe2\x80\x99s contribution to global warming\n[![DOI](https://zenodo.org/badge/314593749.svg)](https://zenodo.org/badge/latestdoi/314593749)\n\nSupplementary data and scripts for\n\nM Kl\xc3\xb6wer, MR Allen, DS Lee, SR Proud, L Gallagher and A Skowron, 2021.\n*Quantifying aviation\xe2\x80\x99s contribution to global warming*,\n**Environmental Research Letters**, accepted. Preprint [10.1002/essoar.10507359.1](https://www.essoar.org/doi/10.1002/essoar.10507359.1)\n\n### Abstract\n\nGrowth in aviation contributes more to global warming than is generally appreciated because\nof the mix of climate pollutants it generates. Here, we model the CO2 and non-CO2 effects\nlike nitrogen oxide emissions and contrail formation to analyse aviation\xe2\x80\x99s total warming footprint.\nAviation contributed approximately 4% to observed human-induced global warming to date, despite\nbeing responsible for only 2.4% of global annual emissions of CO2. Aviation is projected to cause\na total of about 0.1\xcb\x9aC of warming by 2050, half of it to date and the other half over the next\nthree decades, should aviation\xe2\x80\x99s pre-COVID growth resume. The industry would then contribute a\n6-17% share to the remaining 0.3-0.8\xcb\x9aC to not exceed 1.5-2\xcb\x9aC of global warming. Under this scenario,\nthe reduction due to COVID-19 to date is small and is projected to only delay aviation\xe2\x80\x99s warming\ncontribution by about 5 years. But the leveraging impact of growth also represents an opportunity:\nAviation\xe2\x80\x99s contribution to further warming would be immediately halted by either a sustained annual\n2.5% decrease in air traffic under the existing fuel mix, or a transition to a 90% carbon-neutral\nfuel mix by 2050.\n\n---\n### Contents\n- Data can be found in [`/data`](https://github.com/milankl/FlyingClimate/tree/main/data) \n- The main analyis and plotting notebook is\n[`/scripts/aviation_warming.ipynb`](https://github.com/milankl/FlyingClimate/blob/main/scripts/aviation_warming.ipynb). 
\n- Figure 1 is created in [`/scripts/euro_flights.ipynb`](https://github.com/milankl/FlyingClimate/blob/main/scripts/euro_flights.ipynb).\n'",",https://zenodo.org/badge/latestdoi/314593749","2020/11/20, 15:29:47",1069,GPL-3.0,0,64,"2021/11/12, 11:32:02",0,22,22,0,712,0,0.0,0.13953488372093026,"2021/09/28, 10:56:44",v1.1,0,3,false,,false,false,,,,,,,,,,, ETS-Watch,Provides a Python client for retrieving the latest data on the EU Emissions Trading System market and its participants.,OSUKED,https://github.com/OSUKED/ETS-Watch.git,github,"git-scraping,eu-ets-market",Emission Observation and Modeling,"2023/10/25, 01:33:18",4,0,0,true,Python,Open Source UK Energy Data,OSUKED,"Python,Jupyter Notebook,HTML,CSS,JavaScript,TeX,Batchfile",https://OSUKED.github.io/ETS-Watch/,"b'# ETS Watch\n\n> `etswatch` provides a Python client for retrieving the latest data on the EU ETS market and its participants\n\nLast updated: 2023-10-25 01:33\n\n
\n\nTo install the library you can run:\n\n`pip install etswatch`\n\nTo then download all of the EUTL accounts data you can run:\n\n`python -m etswatch.cli download-all-accounts-data`\n\nN.b. this will create a sub-directory called `data` in the directory where you run the command.\n\n
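For example, a minimal end-to-end sketch (not part of `etswatch` itself; the layout of the files inside `data` is not documented here, so we simply enumerate whatever the CLI produced):

```python
import subprocess
from pathlib import Path

# Run the documented CLI download, then list what landed in ./data.
subprocess.run(
    ["python", "-m", "etswatch.cli", "download-all-accounts-data"],
    check=True,
)

for path in sorted(Path("data").rglob("*")):
    print(path)
```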
\n\n### Long-Term Average Price\n\n![Long-term average](https://github.com/OSUKED/ETS-Watch/raw/master/img/long_term_avg.png)\n\n
\n\n### Candle-Stick Chart for Last 8-Weeks\n\n![Open, High, Low, Close & Volume](https://github.com/OSUKED/ETS-Watch/raw/master/img/ohlc_vol.png)\n'",,"2020/12/08, 18:06:18",1051,GPL-3.0,277,706,"2021/11/12, 11:32:02",0,0,0,0,712,0,0,0.0,"2021/04/28, 23:14:36",v0.0.1,0,1,false,,false,false,,,https://github.com/OSUKED,https://osuked.com/,,,,https://avatars.githubusercontent.com/u/75696139?v=4,,, Industrial Smoke Plume Detection,Characterization of Industrial Smoke Plumes from Remote Sensing Data.,HSG-AIML,https://github.com/HSG-AIML/IndustrialSmokePlumeDetection.git,github,,Emission Observation and Modeling,"2021/04/20, 10:20:15",37,0,12,false,Python,Artificial Intelligence & Machine Learning (AI:ML Lab) @ HSG,HSG-AIML,Python,,"b'# Industrial Smoke Plume Detection\n\nThis repository contains the code base for our publication *Characterization of Industrial Smoke Plumes from\nRemote Sensing Data*, presented at the *Tackling Climate Change with Machine\n Learning* workshop at NeurIPS 2020.\n\n\n![segmentation example images](segmentation.png ""Segmentation Example Images"")\n\n \n## About this Project\n\nThe major driver of global warming has been identified as the anthropogenic release\nof greenhouse gas (GHG) emissions from industrial activities. The quantitative\nmonitoring of these emissions is mandatory to fully understand their effect on the\nEarth\xe2\x80\x99s climate and to enforce emission regulations on a large scale. In this work,\nwe investigate the possibility to detect and quantify industrial smoke plumes from\nglobally and freely available multiband image data from ESA\xe2\x80\x99s Sentinel-2 satellites.\nUsing a modified ResNet-50, we can detect smoke plumes of different sizes with\nan accuracy of 94.3%. The model correctly ignores natural clouds and focuses on\nthose imaging channels that are related to the spectral absorption from aerosols and\nwater vapor, enabling the localization of smoke. We exploit this localization ability\nand train a U-Net segmentation model on a labeled subsample of our data, resulting\nin an Intersection-over-Union (IoU) metric of 0.608 and an overall accuracy for\nthe detection of any smoke plume of 94.0%; on average, our model can reproduce\nthe area covered by smoke in an image to within 5.6%. The performance of our\nmodel is mostly limited by occasional confusion with surface objects, the inability\nto identify semi-transparent smoke, and human limitations to properly identify\nsmoke based on RGB-only images. Nevertheless, our results enable us to reliably\ndetect and qualitatively estimate the level of smoke activity in order to monitor\nactivity in industrial plants across the globe. Our data set and code base are publicly\navailable.\n\nThe full publication is available on arxiv.\n\nThe data set is available on [zenodo](http://doi.org/10.5281/zenodo.4250706).\n\n## Content\n\n`classification/`: Resnet-50 classifier code, training and evaluation\n routines\n`segmentation/`: U-Net segmentation model code, training and evaluation\n routines\n\n \n## How to Use\n\nDownload this repository as well as the \n[data](http://doi.org/10.5281/zenodo.4250706) and decompress the latter. For\nboth model training and evaluation, you will have to modify the directory\npaths appropriately so that they point to the image and segmentation label\ndata.\n \nIt is expected that the data are split into separate data sets for training, \nvalidation, and evaluation. 
For our publication, this has been done in such a\nway that all observations of a single location are contained in a \nsingle data set. Other strategies are possible and will be left to the user. \n\nEither model can be trained by invoking:\n\n python train.py\n \nwith the following optional parameters:\n \n* `-bs <batch size>` to define a batch size,\n* `-ep <epochs>` to define the number of training epochs,\n* `-lr <learning rate>` to define a starting learning rate, and\n* `-mo <momentum>` to define a momentum value.\n\nThe models can be evaluated on the test data set by calling the corresponding\n `eval.py` script.\n \n \n## Acknowledgements\n\nIf you use this code for your own project, please cite the following\nconference contribution:\n\n Mommert, M., Sigel, M., Neuhausler, M., Scheibenreif, L., Borth, D.,\n ""Characterization of Industrial Smoke Plumes from Remote Sensing Data"",\n Tackling Climate Change with Machine Learning Workshop,\n NeurIPS 2020.\n'",",http://doi.org/10.5281/zenodo.4250706,http://doi.org/10.5281/zenodo.4250706","2020/11/06, 15:09:47",1083,GPL-3.0,0,6,"2021/11/12, 11:32:02",1,0,0,0,712,0,0.33333333333333337,,,0,2,false,,false,false,,,https://github.com/HSG-AIML,www.hsg.ai,St.Gallen,,,https://avatars.githubusercontent.com/u/59830997?v=4,,, EDGAR,Emissions Database for Global Atmospheric Research.,,,custom,,Emission Observation and Modeling,,,,,,,,,,https://edgar.jrc.ec.europa.eu/,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,, OpenGHGMap,A city-level CO2 emissions inventory for Europe.,,,custom,,Emission Observation and Modeling,,,,,,,,,,https://openghgmap.net/,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,, GCP-GridFED,Gridded fossil CO2 emissions and related O2 combustion consistent with national inventories 1959-2018.,record,,custom,,Emission Observation and Modeling,,,,,,,,,,https://zenodo.org/record/3958283,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,, Integrated Carbon Observation System,"Produces standardised data on greenhouse gas concentrations in the atmosphere, as well as on carbon fluxes between the atmosphere, the earth and oceans.",ICOS-Carbon-Portal,https://github.com/ICOS-Carbon-Portal/pylib.git,github,"icos,dataaccess,greenhouse-gas,european",Emission Observation and Modeling,"2023/07/25, 13:33:45",10,10,2,true,Python,ICOS Carbon Portal,ICOS-Carbon-Portal,Python,,"b'# ICOS Carbon Portal Python Package\n
\n\n## About ICOS\n\nThe Integrated Carbon Observation System, ICOS, is a Europe-wide greenhouse gas research infrastructure. ICOS produces standardised data on greenhouse gas concentrations in the atmosphere, as well as on carbon fluxes between the atmosphere, earth and oceans. This information is used by scientists as well as by decision makers in predicting and mitigating climate change. The high-quality and open ICOS data is based on measurements from over 130 stations across Europe. For more information about the ICOS station network, data quality control and assurance, and much more, please read the [ICOS Handbook 2022](https://www.icos-cp.eu/sites/default/files/2022-03/ICOS_handbook_2022_WEB.pdf), or visit our website [https://www.icos-cp.eu/](https://www.icos-cp.eu/).\n\nThis package is under active development. Please be aware that changes to names of functions and classes are possible without further notice. Please do feed back any recommendations, issues, etc. if you try it out.\n\n\n## What is the package about?\nIn essence, this package gives you direct access to data objects from the ICOS Carbon Portal for which a ""Preview"" is available. It provides easy access to data objects hosted at the ICOS Carbon Portal (https://data.icos-cp.eu/); by using this library you can load data files directly into memory.\n\nPlease be aware that by either downloading data or accessing data directly through this library, you agree and accept that all ICOS data is provided under a CC BY 4.0 licence.\n\n## Installation\nThe latest release is available on [https://pypi.org/project/icoscp/](https://pypi.org/project/icoscp/). You can simply run\n\n`pip install icoscp`\n\nIf you need the cutting-edge version, you may install the library directly from GitHub with\n\n`pip install git+https://github.com/ICOS-Carbon-Portal/pylib.git`\n\nWe would encourage you to use a virtual environment for python to test this library.\nFor example, with [Miniconda](https://docs.conda.io/en/latest/miniconda.html) you can create a new environment with:\n\n- `conda create -n icos python`\n- `activate icos`\n- `pip install icoscp`\n\n## Documentation\nThe full documentation for the library and all its modules is available at [https://icos-carbon-portal.github.io/pylib/](https://icos-carbon-portal.github.io/pylib/)\n\n\n## Development\n\nFor instructions about how to go about extending and testing this software, please see \n'",,"2020/07/27, 13:20:53",1185,GPL-3.0,141,517,"2023/10/20, 14:21:00",21,88,137,43,5,2,1.2,0.44067796610169496,,,0,8,false,,false,false,"squaregoldfish/QuinCe,BjerknesClimateDataCentre/QuinCe,lumia-dev/lumia,ZogopZ/zupload,ICOS-Carbon-Portal/python-tools,openghg/openghg,pawel-wolff/atmo-access-time-series,claudiodonofrio/icosapi,emanueletramutola/icos_api,greglit/policybear",,https://github.com/ICOS-Carbon-Portal,https://www.icos-cp.eu,"Lund, Sweden",,,https://avatars.githubusercontent.com/u/10807176?v=4,,, "Global Database of Cement, Iron and Steel Production Assets","The Global Database of Cement, Iron and Steel Production Assets provides information on global cement production plants that are operational today.",spatial-finance-initiative/geoasset-project,,custom,,Emission Observation and Modeling,,,,,,,,,,https://www.cgfi.ac.uk/spatial-finance-initiative/geoasset-project/geoasset-databases/,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,, The Global Registry of Fossil Fuels,"Open Source database of oil, gas and coal production and reserves globally, expressed in 
CO2-equivalent.",fossilfuelregistry,https://github.com/fossilfuelregistry/portal-v2.git,github,,Emission Observation and Modeling,"2022/12/20, 17:13:32",3,0,0,true,TypeScript,,,"TypeScript,JavaScript,CSS,HTML",,b'[![Deploy to DO](https://www.deploytodo.com/do-btn-blue.svg)](\nhttps://cloud.digitalocean.com/apps/new?repo=https://github.com/fossilfuelregistry/portal-v2/tree/master\n)\n\n# fossilfuelregistry.org web client\n\n## Stack\n\nThe client is based on the NextJS framework.\n\nIt is based on:\n\n- Chakra UI for UI components\n- Apollo GraphQL for data fetching\n- The POEditor translation service for i18n texts\n- AirBnB VisX for chart graphics\n- MaplibreGL with a private vector tile server for maps\n\n## Building\n\nThe build process is 100% standard NextJS. It needs a `.env.local` file for various keys:\n\n```\nPOEDITOR_API_TOKEN=...\nPOEDITOR_PROJECT_ID=...\n\nNEXT_PUBLIC_BACKEND_URL=https://api.fossilfuelregistry.org\n\nNEXT_PUBLIC_GA=... (Google Analytics Property id)\nNEXT_PUBLIC_GOOGLE_TRANSLATE_API_KEY=...\nNEXT_PUBLIC_OPENCORPORATES_API_TOKEN=...\n\nNEXT_PUBLIC_CMS_URL=https://cms.fossilfuelregistry.org\nNEXT_PUBLIC_CMS_TOKEN=...\n\nNEXT_PUBLIC_SENTRY_DSN=...\nNEXT_PUBLIC_ENVIRONMENT= local | development | production\n```\n',,"2022/07/27, 08:49:42",455,MIT,13,243,"2023/10/20, 14:21:00",8,0,0,0,5,0,0,0.5844155844155844,,,0,4,false,,false,false,,,,,,,,,,, Easy-ERA5-Trck,A super lightweight Lagrangian model for calculating millions of trajectories using ERA5 data.,Novarizark,https://github.com/lzhenn/easy-era5-trck.git,github,"era5,python,lagrangian,trajectory,multiprocessing",Emission Observation and Modeling,"2022/06/24, 12:23:47",29,0,4,false,Python,,,"Python,Shell",,"b'\n# Easy-ERA5-Trck\n\n- [Easy-ERA5-Trck](#easy-era5-trck)\n + [Galleries](#galleries)\n + [Install](#install)\n + [Usage](#usage)\n + [Repository Structure](#repository-structure)\n + [Module Files](#module-files)\n + [Version iteration](#version-iteration)\n\nEasy-ERA5-Trck is a super lightweight Lagrangian model for calculating thousands (even millions) of trajectories simultaneously and efficiently using ERA5 data sets. \nIt can implement super simplified equations of 3-D motion to accelerate integration, and use python multiprocessing to parallelize the integration tasks.\nDue to its simplification and parallelization, Easy-ERA5-Trck performs great speed in tracing massive air parcels, which makes **areawide** tracing possible.\n\nAnother version using WRF output to drive the model can be found [here](https://github.com/Novarizark/easy-wrf-trck). \n\n**Caution: Trajectory calculation is based on the nearest-neighbor interpolation and first-guess velocity for super efficiency. Accurate calculation algorithm can be found on http://journals.ametsoc.org/doi/abs/10.1175/BAMS-D-14-00110.1, or use a professional and complicated model e.g. [NOAA HYSPLIT](https://www.ready.noaa.gov/HYSPLIT.php) instead.**\n\n**Any question, please contact Zhenning LI (zhenningli91@gmail.com)**\n\n### Galleries\n\n#### Tibetan Plateau Air Source Tracers\n\n\n#### Tibetan Plateau Air Source Tracers (3D)\n\n\n### Install\n\nIf you wish to run easy-era5-trck using `grib2` data, Please first install [ecCodes](https://confluence.ecmwf.int/display/ECC/ecCodes+Home).\n\nPlease install python3 using Anaconda3 distribution. 
[Anaconda3](https://www.anaconda.com/products/individual) with python3.8 has been fully tested; lower versions of python3 may also work (without testing).\n\nNow, we recommend creating a new environment in Anaconda and installing the `requirements.txt`:\n\n```bash\nconda create -n test_era5trck python=3.8\nconda activate test_era5trck\npip install -r requirements.txt\n```\n\nIf everything goes smoothly, first `cd` to the repo root path, and run `config.py`:\n\n```bash\npython3 config.py\n```\n\nThis will convey fundamental configuration parameters to `./conf/config_sys.ini`.\n\n### Usage\n\n#### test case\nOnce you have installed the package, you may first want to try the test case. `config.ini` has been set up for the test case, which is a very simple run:\n``` python\n[INPUT]\ninput_era5_case = ./testcase/\ninput_parcel_file=./input/input.csv\n\n[CORE]\n# timestep in min\ntime_step = 30\nprecession = 1-order\n# 1 for forward, -1 for backward\nforward_option = -1\n# for forward, this is the initial time; otherwise, terminating time\nstart_ymdh = 2015080212\n# integration length in hours\nintegration_length = 24\n# how many processors are willing to work for you\nntasks = 4\n# not used yet\nboundary_check = False\n\n[OUTPUT]\n# output format, nc/csv, nc recommended for large-scale tracing\nout_fmt = nc\nout_prefix = testcase\n# output frequency in min\nout_frq = 60\n# when out_fmt=csv, how many parcel tracks will be organized in a csv file.\nsep_num = 5000\n\n```\nWhen you type `python3 run.py`, Easy-ERA5-Trck will pick up the above configuration, by which the ERA5 UVW data in `./testcase` will be imported to drive the Lagrangian integration.\n\nNow you will see your workers dedicated to tracing the air parcels. After several seconds, if you see something like:\n``` bash\n2021-05-31 17:32:14,015 - INFO : All subprocesses done.\n2021-05-31 17:32:14,015 - INFO : Output...\n2021-05-31 17:32:14,307 - INFO : Easy ERA5 Track Completed Successfully!\n```\nCongratulations! The testcase works smoothly on your machine!\n\nNow you can check the output file in `./output`, named as `testcase.I20150802120000.E20150801120000.nc|csv`, which indicates the initial time and ending time. For backward tracing, I > E, and vice versa.\n\nYou can choose output files in plain ASCII csv format or netCDF format (recommended). netCDF output metadata looks like:\n``` bash\n{\ndimensions:\n time = 121 ;\n parcel_id = 413 ;\nvariables:\n double xlat(time, parcel_id) ;\n xlat:_FillValue = NaN ;\n double xlon(time, parcel_id) ;\n xlon:_FillValue = NaN ;\n double xh(time, parcel_id) ;\n xh:_FillValue = NaN ;\n int64 time(time) ;\n time:units = ""hours since 1998-06-10 00:00:00"" ;\n time:calendar = ""proleptic_gregorian"" ;\n int64 parcel_id(parcel_id) ;\n}\n```\n\n#### setup your case\nCongratulations! After successfully running the toy case, you are no doubt eager to set up your own case. \nFirst, build your own case directory, for example, in the repo root dir:\n```bash\nmkdir mycase\n```\nNow please make sure you have configured the **[ECMWF CDS API](https://cds.climate.copernicus.eu/api-how-to)** correctly, both in your shell environment and the python interface.\n\nNext, set the `[DOWNLOAD]` section in `config.ini` to fit your desired period, levels, and region for downloading.\n\n```python\n[DOWNLOAD]\nstore_path=./mycase/\nstart_ymd = 20151220\nend_ymd = 20160101\npres=[700, 750, 800, 850, 900, 925, 950, 975, 1000]\n\n# area: [North, West, South, East]\narea=[-10, 0, -90, 360]\n# data frame frequency: recommend 1, 2, 3, 6. 
\n# lower frequency will download faster but is less accurate in tracing\nfreq_hr=3\n```\nHere we hope to download 1000-700 hPa data, from 20151220 to 20160101, with 3-hr temporal frequency UVW data from ERA5 CDS.\n\n`./utils/getERA5-UVW.py` will help you to download the ERA5 reanalysis data for your case, in daily files with `freq_hr` temporal frequency.\n```bash\ncd utils\npython3 getERA5-UVW.py\n```\n\nWhile the machine is downloading your data, you may want to determine the destinations or initial points of your targeted air parcels.\n`./input/input.csv`: This file is the default file prescribing the air parcels for trajectory simulation. Alternatively, you can assign it by `input_parcel_file` in `config.ini`.\n\nThe format of this file:\n\n```\nairp_id, init_lat, init_lon, init_h0 (hPa)\n```\nFor a forward trajectory, the init_{lat|lon|h0} denote initial positions; for a backward trajectory, they indicate ending positions.\nYou can write it by yourself. Otherwise, there is also a utility `./utils/take_box_grid.py`, which will help you to take air parcels in a rectangular domain.\n\nPlease also set the other sections in `config.ini` accordingly; now these air parcels are waiting for your command `python3 run.py` to travel the world!\n\nBesides, `./utils/control_multi_run.py` will help you to run multiple series of the simulation. There are some postprocessing scripts for visualization in `post_process`; you may need to modify them to fit your visualization usage.\n\n\n### Repository Structure\n\n#### run.py\n`./run.py`: Main script to run Easy-ERA5-Trck. \n\n\n#### conf\n* `./conf/config.ini`: Configure file for the model. You may set the ERA5 input file, input frequency, integration time steps, and other settings in this file.\n* `./conf/config_sys.ini`: Configure file for the system, generated by running `config.py`. \n* `./conf/logging_config.ini`: Configure file for the logging module. \n\n#### core\n* `./core/lagrange.py`: Core module for calculating the air parcels\' Lagrangian trajectories.\n\n#### lib\n* `./lib/cfgparser.py`: Module file containing read/write methods for `config.ini`.\n* `./lib/air_parcel.py`: Module file containing the definition of the air parcel class and related methods such as march and output.\n* `./lib/preprocess_era5inp.py`: Module file that defines the field_hdl class, which contains useful fields data (U, V, W...) 
and related method, including ERA5 grib file IO operations.\n* `./lib/utils.py`: utility functions for the model.\n\n#### post_process\nSome visualization scripts.\n\n#### utils\nUtils for downloading, generating `input.csv`, etc.\n\n### Version iteration\n\n#### Oct 28, 2020\n* Fundimental pipeline design, multiprocessing, and I/O.\n* MVP v0.01\n\n#### May 31, 2021\n* Major Revision, logging module, and exception treatment\n* test case\n* Major documentation update\n* Utility for data downloading\n* Utility for taking grids in a box \n* Basic functions done, v0.10\n\n#### Jun 09, 2021\n* The automatic detection of longitude range is added, allowing users to adopt two different ranges of longitude: [-180\xc2\xb0, 180\xc2\xb0] or [0\xc2\xb0, 360\xc2\xb0].\n* Currently, if you want to use the [-180\xc2\xb0, 180\xc2\xb0] data version, you can only set ntasks = 1 in the config.ini file.\n\n#### Oct 19, 2021\n* Modify `requirements.txt` to fit updated version of libs.\n\n#### Jun 24, 2022\n* Add Administrative Grid and sparse matrix match utils: `./utils/assign_nodes_to_city.py` and `./utils/assign_sparse_nodes.py`.\n'",,"2020/10/28, 08:04:54",1092,MIT,0,44,"2021/06/15, 13:19:13",0,3,3,0,862,0,0.0,0.050000000000000044,"2021/06/02, 06:09:36",v0.10-beta,0,2,false,,false,false,,,,,,,,,,, GRACED,Near-real-time Global Gridded Daily CO2 Emissions Dataset from fossil fuel and cement production with a global spatial resolution of 0.1° by 0.1° and a temporal resolution of 1 day.,,,custom,,Emission Observation and Modeling,,,,,,,,,,https://carbonmonitor-graced.com/,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,, ECAM,Water and wastewater utility operators can assess their greenhouse gas emissions and energy consumption.,icra,https://github.com/icra/ecam.git,github,"ecam,giz,iwa",Emission Observation and Modeling,"2023/03/07, 13:27:10",11,0,6,true,JavaScript,ICRA,icra,"JavaScript,HTML,CSS,Shell,Makefile",http://wacclim.org/ecam,"b'# [ECAM](https://climatesmartwater.org/ecam)\nCurrent version: v3.\n\nECAM is a free and open source web application. Water and wastewater utility\noperators can assess their greenhouse gas emissions and energy consumption.\n\n- Perfect for climate reporting needs\n- Overview of system-wide greenhouse gas emissions\n- IPCC-2019 compliant\n\nECAM is developed by\n[ICRA](https://icra.cat),\n[IWA](https://www.iwa-network.org) and\n[GIZ](https://www.giz.de/) under the\n[WaCCliM project](https://climatesmartwater.org/), and Cobalt Water.\n\n## License\nECAM is licensed under a Creative Commons Attribution-ShareAlike 4.0\nInternational License. [LICENSE](LICENSE)\n\n## Code\nECAM is a serverless single-page-application (SPA) written in entirely in Javascript, HTML\nand CSS. It uses VueJS to render the user interface.\n\n## Tutorial videos\nhttps://www.youtube.com/playlist?list=PL6u1Pjpf8O0Ymz7bLlOCkLTJWHyPReOxP\n\n## Dependencies\nAll these libraries are automatically loaded when the tool is opened in the\nbrowser:\n- Vue.js v2.6.11 (https://vuejs.org/)\n- Chart.js v3.7.1 (https://www.chartjs.org/)\n- Code-prettify v2015-12-04 (https://cdn.jsdelivr.net/gh/google/code-prettify@master/)\n- D3.js v3 (https://d3js.org/)\n- ExcelJS v4.2.1 (https://github.com/exceljs/exceljs)\n- FileSaver v1.2.2 (https://github.com/eligrey/FileSaver.js)\n\n## Using ecam online\nTo use ecam, just go to [climatesmartwater.org/ecam](https://climatesmartwater.org/ecam)\n\n## Using ecam offline\nEcam can be also used offline. 
You need download this package and place it\ninside a folder from a web server software, for example: apache, nginx, xampp,\netc.\n\n## Guide for deployment in a server (or offline usage) using [Apache HTTP Server](http://httpd.apache.org/)\n1. Install Apache HTTP Server.\n2. Download this repository\n3. Move the repository to ""/var/www/html/ecam"".\n note: the equivalent ""/var/www/html"" folder for XAMPP is usually in ""C:\\XAMPP\\htdocs"" (in Windows)\n4. Open your browser and go to ""http://localhost/ecam""\n'",,"2015/11/03, 14:53:31",2913,CUSTOM,7,1424,"2022/06/10, 12:09:43",26,20,469,2,502,0,0.2,0.03328509406657021,,,0,7,false,,false,false,,,https://github.com/icra,https://www.icra.cat/,Girona,,,https://avatars.githubusercontent.com/u/38762567?v=4,,, Emissions Modeling Framework,"A client-server system designed to store information related to emissions modeling, with integrated quality control processes.",USEPA,https://github.com/USEPA/emf.git,github,,Emission Observation and Modeling,"2023/10/18, 04:33:07",6,0,2,true,Java,U.S. Environmental Protection Agency,USEPA,"Java,Gnuplot,PLpgSQL,HTML,Perl,Shell,JavaScript,Python,Lex,CSS,Batchfile,Haskell,TeX",,"b'# Emissions Modeling Framework\n\nThe Emissions Modeling Framework (EMF) is a client-server system designed to store information related to emissions modeling, with integrated quality control processes. The EMF can drive and track emissions modeling processes, providing organization for model run settings, input datasets, and generated outputs. The Control Strategy Tool (CoST) is a component of the EMF used for estimating the emission reductions and economic costs associated with different control scenarios.\n\nTo download the latest version of the EMF and CoST, visit https://www.cmascenter.org/cost/. The CMAS Center website also provides documentation on installing and using the EMF.\n'",,"2014/04/16, 04:39:06",3479,CUSTOM,24,1056,"2023/05/02, 15:02:22",57,3,83,2,176,0,0.0,0.3187721369539551,"2023/06/14, 16:28:41",redeploy_EPA_20230614,0,7,false,,false,false,,,https://github.com/USEPA,https://www.epa.gov,United States of America,,,https://avatars.githubusercontent.com/u/1304320?v=4,,, MOVES,"A state-of-the-science emission modeling system that estimates emissions for mobile sources at the national, county, and project level for criteria air pollutants, greenhouse gases, and air toxics, available under EPA's Open Source Software policy.",USEPA,https://github.com/USEPA/EPA_MOVES_Model.git,github,oar,Emission Observation and Modeling,"2023/10/05, 20:20:31",66,0,20,true,Java,U.S. Environmental Protection Agency,USEPA,"Java,Fortran,Go,XSLT,C++,Batchfile,Pascal,Shell,Makefile,Perl,Python",,"b'# MOVES4\n\nEPA\'s MOtor Vehicle Emission Simulator (MOVES) is a state-of-the-science emission modeling system that estimates emissions for mobile sources at the national, county, and project level for criteria air pollutants, greenhouse gases, and air toxics, available under EPA\'s Open Source Software policy. \n\nMOVES4 (available at https://github.com/USEPA/EPA_MOVES_Model and https://www.epa.gov/moves/latest-version-motor-vehicle-emission-simulator-moves) is the latest version of MOVES available for regulatory purposes. For more information, see [MOVES4 Policy Guidance: Use of MOVES for State Implementation Plan Development, Transportation Conformity, General Conformity and Other Purposes (EPA-420-B-23-009)](https://www.epa.gov/moves/latest-version-motor-vehicle-emission-simulator-moves#guidance). 
\n\nFor additional information on MOVES, visit EPA\'s [MOVES website](https://www.epa.gov/moves). A standard installer for MOVES is available [here](https://www.epa.gov/moves/latest-version-motor-vehicle-emission-simulator-moves#download). Or, to compile and run MOVES from source, follow the instructions below.\n\n### Requirements and Set Up\n\nThis repository contains all of the source code and data required to compile and run MOVES.\n\nMOVES uses MariaDB, Java, and Go. To run MOVES from the source code (i.e., without running the installer), you will need the following:\n\n* [MariaDB](https://mariadb.org/download/?t=mariadb&p=mariadb&r=10.11.5) (version 10.11 is recommended)\n* [Java JDK](https://learn.microsoft.com/en-us/java/openjdk/download#openjdk-17) (version 17 is recommended)\n* [Go](https://golang.org/dl) (version 1.13 or later)\n\nTo get set up:\n\n1. Clone or download this repository.\n\n2. Make sure MariaDB, Java, and Go are all installed and available on your system path (i.e., make sure the `\\bin` folders for each of these are in your PATH environment variable). You can test this by running each of the following lines in a command prompt:\n * `mysql.exe --version`\n * `java.exe -version`\n * `go.exe version`\n\n3. MOVES has some specific MariaDB configuration requirements. Locate the MariaDB configuration file (on Windows, this is my.ini located in your data directory; see #4 of [Quick Start Guide to Accessing MariaDB Data](docs/QuickStartGuideToAccessingMariaDBData.pdf) for help on finding your data directory), and ensure the following lines are saved in the file:\n\n ```ini\n [mysqld]\n default-storage-engine=MyISAM\n secure-file-priv=\'\'\n sql_mode=STRICT_TRANS_TABLES,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION\n lower_case_table_names=1\n character-set-server=utf8\n collation-server=utf8_unicode_ci\n init-connect=\'SET NAMES utf8\'\n ```\n Restart the MariaDB service after modifying this file.\n\n4. If MariaDB is running on a non-default port (i.e., any port other than 3306), create a file called `MySQL.txt` in the root MOVES directory, and save the port number to this text file. This file should contain no whitespace, just the port number.\n\n5. Uncomment line 2 of `setenv.bat` and comment out line 3 to set the JDK and JRE paths to match your environment (i.e., remove `REM` from the beginning of line 2 and add it to the beginning of line 3). Then, set `JAVA_HOME` to your JDK path on line 6.\n\n6. Unzip the default database dumpfile from the .zip file in the `\\database\\Setup` directory to the same directory.\n\n7. Edit `\\database\\Setup\\SetupDatabase.bat` to use the MariaDB root user\'s password, and then run it. This batch file creates the MOVES database user by running `\\database\\Setup\\CreateMOVESUser.sql`, and then installs the default database by running the dumpfile you extracted in the previous step. \n\n Note: if MariaDB is running on a different port, you will need to edit `SetupDatabase.bat` to add the command-line flag `--port=XXXX`, where `XXXX` is the port number.\n\n8. Open a command prompt, navigate to your MOVES source code directory, and run the following commands to compile MOVES and launch the GUI:\n\n ```bash\n setenv\n ant crungui\n ```\n\n For additional information about compiling MOVES, see [CommandLineMOVES.md](docs\\CommandLineMOVES.md#compiling-moves).\n\n9. 
Thereafter, to run MOVES, simply navigate to the MOVES directory and run:\n\n ```bash\n MOVESMain.bat\n ```\n\n### Need help?\n\nDocumentation on the software components of MOVES, database structure, running MOVES from the command line, tips for improving MOVES performance, and other information are available in the [\\docs](docs/Readme.md) directory of this project. To check the status of reported issues and planned improvements, see the [MOVES GitHub Issue Tracker](https://github.com/USEPA/EPA_MOVES_Model/issues).\n\nAdditional resources listed below are available at the MOVES website:\n\n* [MOVES Technical Guidance](https://www.epa.gov/moves/latest-version-motor-vehicle-emission-simulator-moves#guidance): Guidance on appropriate input assumptions and sources of data for the use of MOVES in SIP development and regional emissions analyses for transportation conformity determinations in states other than California. It also includes guidance on developing nonroad inventories with MOVES.\n* [Onroad Technical Reports](https://www.epa.gov/moves/moves-onroad-technical-reports) and [Nonroad Technical Reports](https://www.epa.gov/moves/nonroad-technical-reports): Access peer-reviewed documentation on the default inputs and algorithms used in MOVES\n* [MOVES Training](https://www.epa.gov/moves/moves-training-sessions#training): Contains on-your-own training modules for using MOVES\n* [MOVES FAQ](https://www.epa.gov/moves/frequent-questions-about-moves-and-related-models): Frequently asked questions about MOVES and related models\n\nIf you have questions or feedback about MOVES, [email the MOVES inbox](mailto:mobile@epa.gov).\n\n### Previous MOVES Versions\n\nPrevious versions of MOVES going back to the MOVES2014b December 2018 Technical Update can be accessed on GitHub at [MOVES Releases](https://github.com/USEPA/EPA_MOVES_Model/releases). Older versions of MOVES are available at the [MOVES website](https://www.epa.gov/moves/previous-moves-versions-and-documentation).\n\n### License\n\nMOVES is licensed for use pursuant to the [GNU General Public License (GPL)](http://www.gnu.org/licenses/old-licenses/gpl-2.0.html).\n\n### EPA Disclaimer\n\nThe United States Environmental Protection Agency (EPA) GitHub project code is provided on an ""as is"" basis and the user assumes responsibility for its use. EPA has relinquished control of the information and no longer has responsibility to protect the integrity, confidentiality, or availability of the information. Any reference to specific commercial products, processes, or services by service mark, trademark, manufacturer, or otherwise, does not constitute or imply their endorsement, recommendation or favoring by EPA. The EPA seal and logo shall not be used in any manner to imply endorsement of any commercial product or activity by EPA or the United States Government.\n'",,"2020/04/20, 15:14:19",1283,CUSTOM,7,86,"2023/10/05, 21:25:01",2,0,58,6,20,0,0,0.03749999999999998,"2023/08/30, 13:58:38",MOVES4.0.0,0,2,false,,false,false,,,https://github.com/USEPA,https://www.epa.gov,United States of America,,,https://avatars.githubusercontent.com/u/1304320?v=4,,, EPA_ALPHA_Model,Evaluate the Greenhouse Gas emissions of Light-Duty vehicles.,USEPA,https://github.com/USEPA/EPA_ALPHA_Model.git,github,oar,Emission Observation and Modeling,"2023/07/11, 14:11:13",8,0,4,true,Python,U.S. 
Environmental Protection Agency,USEPA,"Python,HTML,C++,MATLAB,Batchfile,Makefile",,"b'EPA_ALPHA_Model\n===============\n\nThe Advanced Light-Duty Powertrain and Hybrid Analysis (ALPHA) tool was created by EPA to evaluate the Greenhouse Gas (GHG) emissions of Light-Duty (LD) vehicles. ALPHA is a physics-based, forward-looking, full vehicle computer simulation capable of analyzing various vehicle types combined with different powertrain technologies. The software tool is a MATLAB/Simulink based application.\n\nEPA has developed the ALPHA model to enable the simulation of current and future vehicles, and as a tool for understanding vehicle behavior, greenhouse gas emissions and the effectiveness of various powertrain technologies. For GHG, ALPHA calculates CO2 emissions based on test fuel properties and vehicle fuel consumption. No other emissions are calculated at the present time but future work on other emissions is not precluded.\n\nEPA engineers utilize ALPHA as an in-house research tool to explore in detail current and future advanced vehicle technologies. ALPHA is continually refined and updated to more accurately model light-duty vehicle behavior and to include new technologies.\n\nALPHA (and EPA\'s Heavy-Duty compliance model, GEM) are built on a common platform known as ""REVS"" - Regulated Emissions Vehicle Simulation. REVS forms the foundation of ALPHA. This document refers to the third revision of REVS, known as REVS3. ALPHA can be considered a tool as well as a modeling process, the components of which are defined in REVS.\n\nFor more information, visit:\n\nhttps://www.epa.gov/regulations-emissions-vehicles-and-engines/advanced-light-duty-powertrain-and-hybrid-analysis-alpha\n\nThis repository is the public home of the ALPHA documentation source code.\n\nDocumentation\n^^^^^^^^^^^^^\n\nThe published documentation homepage is https://epa-alpha-model.readthedocs.io/en/latest/\n\nThe latest .pdf docs are available at https://epa-alpha-model.readthedocs.io/_/downloads/en/latest/pdf/\n'",,"2019/09/19, 15:40:36",1497,CUSTOM,89,363,"2019/10/10, 14:23:15",0,0,1,0,1476,0,0,0.25988700564971756,,,0,6,false,,false,false,,,https://github.com/USEPA,https://www.epa.gov,United States of America,,,https://avatars.githubusercontent.com/u/1304320?v=4,,, gtfs2emis,Estimating public transport emissions from GTFS data.,ipeaGIT,https://github.com/ipeaGIT/gtfs2emis.git,github,"emissions,environmental-modelling,gtfs,public-transport,r,rspatial,transport",Emission Observation and Modeling,"2023/09/04, 21:00:00",24,0,11,true,R,IpeaDIRUR,ipeaGIT,R,https://ipeagit.github.io/gtfs2emis/,"b'# gtfs2emis: Estimating public transport emissions from GTFS data \n\n[![CRAN/METACRAN\nVersion](https://www.r-pkg.org/badges/version/gtfs2emis)](https://CRAN.R-project.org/package=gtfs2emis)\n[![CRAN/METACRAN Total\ndownloads](http://cranlogs.r-pkg.org/badges/grand-total/gtfs2emis?color=blue)](https://CRAN.R-project.org/package=gtfs2emis)\n[![R-CMD-check](https://github.com/ipeaGIT/gtfs2emis/workflows/R-CMD-check/badge.svg)](https://github.com/ipeaGIT/gtfs2emis/actions)\n[![Lifecycle:\nexperimental](https://img.shields.io/badge/lifecycle-experimental-orange.svg)](https://lifecycle.r-lib.org/articles/stages.html)\n[![Codecov test\ncoverage](https://codecov.io/gh/ipeaGIT/gtfs2emis/branch/master/graph/badge.svg)](https://app.codecov.io/gh/ipeaGIT/gtfs2emis?branch=master)\n[![DOI](https://img.shields.io/badge/DOI-10.1016/j.trd.2023.103757-blue)](https://doi.org/10.1016/j.trd.2023.103757)\n\n**gtfs2emis** is an R package to 
estimate the emission levels of public\ntransport vehicles based on General Transit Feed Specification (GTFS)\ndata. The package requires two main inputs: i) public transport data in\nGTFS standard format; and ii) some basic information on fleet\ncharacteristics such as vehicle age, technology, fuel, and Euro stage.\nAs it stands, the package estimates several pollutants (see table below)\nat high spatial and temporal resolutions. Pollution levels can be\ncalculated for specific transport routes, trips, time of the day, or for\nthe transport system as a whole. The output with emission estimates can\nbe extracted in different formats, supporting analysis of how emission\nlevels vary across space, time, and by fleet characteristics. A full\ndescription of the methods used in the gtfs2emis model is presented in\n[Vieira, Pereira and Andrade\n(2022)](https://doi.org/10.31219/osf.io/8m2cy).\n\n## Installation\n\nYou can install `gtfs2emis`:\n\n``` r\n# From CRAN\ninstall.packages(""gtfs2emis"")\nlibrary(gtfs2emis)\n\n# or use the development version with latest features\nutils::remove.packages(\'gtfs2emis\')\ndevtools::install_github(""ipeaGIT/gtfs2emis"")\nlibrary(gtfs2emis)\n```\n\n## Usage and Data requirements\n\nThe `gtfs2emis` package has two core functions.\n\n1. `transport_model()` converts GTFS data into a GPS-like table with\n the space-time positions and speeds of public transport vehicles.\n The only input required is a `GTFS.zip` feed.\n\n2. `emission_model()` estimates hot-exhaust emissions based on four\n inputs:\n\n- 1) the result from the `transport_model()`;\n- 2) a `data.frame` with info on fleet characteristics;\n- 3) a `string` indicating which emission factor model should be considered;\n- 4) a `string` indicating which pollutants should be estimated.\n\nTo help users analyze the output from `emission_model()`, the\n`gtfs2emis` package has a few functions:\n\n3. `emis_to_dt()` to convert the output of `emission_model()` from\n `list` to `data.table`.\n4. `emis_summary()` to aggregate emission estimates by the time of the\n day, vehicle type, or road segment.\n5. `emis_grid()` to spatially aggregate emission estimates using any\n custom spatial grid or polygons.\n\n## Demonstration on sample data\n\nTo illustrate functionality, the package includes small sample data sets\nof the public transport and fleet of Curitiba (Brazil), Detroit (USA),\nand Dublin (Ireland). Estimating the emissions of a given public\ntransport system using `gtfs2emis` can be done in three simple steps, as\nfollows.\n\n### 1. Run transport model\n\nThe first step is to use the `transport_model()` function to convert\nGTFS data into a GPS-like table, so that we can get the space-time\nposition and speed of each vehicle of the public transport system at\nhigh spatial and temporal resolutions.\n\n``` r\n# read GTFS.zip\ngtfs_file <- system.file(""extdata/irl_dub_gtfs.zip"", package = ""gtfs2emis"")\ngtfs <- gtfstools::read_gtfs(gtfs_file)\n\n# generate transport model\ntp_model <- transport_model(gtfs_data = gtfs, spatial_resolution = 100, parallel = TRUE)\n```\n\n### 2. Prepare fleet data\n\nThe second step is to prepare a `data.frame` with some characteristics\nof the public transport fleet. Note that different emission factor\nmodels may require information on different fleet characteristics, such\nas vehicle age, type, Euro standard, technology, and fuel. This can be either:\n\n- A simple table with the overall composition of the fleet. 
In this case, `gtfs2emis` will assume that the fleet is homogeneously distributed across all routes; OR\n- A detailed table that (1) brings info on the characteristics of each vehicle and (2) tells the probability with which each vehicle type is allocated to each transport route.\n\nHere is what a simple fleet table to be used with the EMEP-EEA emission\nfactor model looks like:\n\n``` r\nfleet_file <- system.file(""extdata/irl_dub_fleet.txt"", package = ""gtfs2emis"")\n\nfleet_df <- read.csv(fleet_file)\nfleet_df\n#> veh_type euro fuel N fleet_composition tech\n#> 1 Ubus Std 15 - 18 t III D 10 0.00998004 -\n#> 2 Ubus Std 15 - 18 t IV D 296 0.29540918 SCR\n#> 3 Ubus Std 15 - 18 t V D 148 0.14770459 SCR\n#> 4 Ubus Std 15 - 18 t VI D 548 0.54690619 DPF+SCR\n```\n\n### 3. Run emission model\n\nIn the final step, we use the `emission_model()` function to estimate hot-exhaust\nemissions of our public transport system. Here, the user needs\nto pass the results from `transport_model()`, some fleet data as\ndescribed above, and select which emission factor model and pollutants\nshould be considered (see the options available below). The output from\n`emission_model()` is a `list` with several `vectors` and `data.frames`\nwith emission estimates and related information such as vehicle\nvariables (`fuel`, `age`, `tech`, `euro`, `fleet_composition`), travel\nvariables (`slope`, `load`, `gps`) or pollution (`EF`, `emi`).\n\n``` r\nemi_list <- emission_model(tp_model = tp_model\n, ef_model = ""ef_europe_emep""\n, fleet_data = fleet_df\n, pollutant = c(""NOx"",""PM10"")\n)\n\nnames(emi_list)\n#> [1] ""pollutant"" ""veh_type"" ""euro"" \n#> [4] ""fuel"" ""tech"" ""slope"" \n#> [7] ""load"" ""speed"" ""EF"" \n#> [10] ""emi"" ""fleet_composition"" ""tp_model""\n```\n\n## Emission factor models and pollutants available\n\nCurrently, the `gtfs2emis` package provides a computational method to\nestimate running exhaust emissions factors based on the following\nemission factor models:\n\n- Brazil\n - [CETESB](https://cetesb.sp.gov.br/veicular/relatorios-e-publicacoes/):\n 2019 model from the Environmental Company of Sao Paulo (CETESB)\n- Europe\n - [EMEP/EEA](https://www.eea.europa.eu/themes/air/air-pollution-sources-1/emep-eea-air-pollutant-emission-inventory-guidebook/emep):\n European Monitoring and Evaluation Programme, developed by the\n European Environment Agency (EEA).\n- United States\n - [EMFAC2017/CARB](https://arb.ca.gov/emfac/): California Emission\n Factor model, developed by the California Air Resources Board\n (CARB).\n - [MOVES3/EPA](https://www.epa.gov/moves): Vehicle Emission\n Simulator, developed by the Environmental Protection Agency\n (EPA).\n\n#### List of pollutants available by emission factor models\n\n| Source | Pollutants |\n|--------------|----------------------------------------------------------|\n| CETESB | CH4, CO, CO2, ETOH, FC (Fuel Consumption), FS (Fuel Sales), gCO2/KWH, gD/KWH, HC, KML, N2O, NH3, NMHC, NO, NO2, NOx, PM10 and RCHO |\n| EMFAC2017/CARB | CH4, CO, CO2, N2O, NOx, PM10, PM25, ROG (Reactive Organic Gases), SOX, and TOG (Total Organic Gases) |\n| EMEP/EEA | CH4, CO, CO2, EC, FC, N2O, NH3, NOx, PM10, SPN23 (#kWh), and VOC |\n| MOVES3/EPA | CH4, CO, CO2, EC, HONO, N2O, NH3, NH4, NO, NO2, NO3, NOx, PM10, PM25, SO2, THC, TOG, and VOC |\n\n#### Fleet characteristics required by each emission factor model\n\n| Source | Buses | Characteristics |\n|-----------------|----------------------|----------------------------------|\n| CETESB | Micro, Standard, Articulated | Age, 
Fuel, EURO standard |\n| EMEP/EEA | Micro, Standard, Articulated | Fuel, EURO standard, technology, load, slope |\n| EMFAC2017/CARB | Urban Buses | Age, Fuel |\n| MOVES3/EPA | Urban Buses | Age, Fuel |\n\n### Emissions from road vehicle tire, brake, and surface wear\n\n`gtfs2emis` also provides emissions estimates from tire, brake and\nsurface wear using the [EMEP/EEA\nmodel](https://www.eea.europa.eu/themes/air/air-pollution-sources-1/emep-eea-air-pollutant-emission-inventory-guidebook/emep).\nThe function estimates emissions of particulate matter (PM),\nencompassing black carbon (BC), which arises from distinct sources\n(tire, brake, and road surface wear). The focus is on primary particles,\nwhich refer to those that are directly emitted, rather than those\ngenerated from the re-suspension of previously deposited material.\n\n## Learn more\n\nCheck out the guides for learning everything there is to know about all\nthe different features:\n\n- [Getting\n started](https://ipeagit.github.io/gtfs2emis/articles/gtfs2emis_intro_vignette.html)\n- [Defining Fleet\n data](https://ipeagit.github.io/gtfs2emis/articles/gtfs2emis_fleet_data.html)\n- [Exploring Emission\n Factors](https://ipeagit.github.io/gtfs2emis/articles/gtfs2emis_emission_factor.html)\n- [Exploring Non Exhaust Emission\n Factors](https://ipeagit.github.io/gtfs2emis/articles/gtfs2emis_non_exhaust_ef.html)\n\n### **Related packages**\n\nThere are several other transport emissions models available for\ndifferent purposes (see below). As of today, `gtfs2emis` is the only\nmethod with the capability to estimate emissions of public transport\nsystems using GTFS data.\n\n- R: [vein](https://github.com/atmoschem/vein) Bottom-up and top-down\n inventory using GPS data.\n- R: [EmissV](https://github.com/atmoschem/emissv) Top-down inventory.\n- Python:\n [PythonEmissData](https://github.com/adelgadop/PythonEmissData)\n Jupyter notebook to estimate simple top-down emissions.\n- Python: [YETI](https://github.com/twollnik/YETI) Yet Another\n Emissions From Traffic Inventory\n- Python: [mobair](https://github.com/matteoboh/mobility_emissions)\n bottom-up model using GPS data.\n\n### **Future enhancements**\n\n- Include cold-start, resuspension, and evaporative emissions factors\n- Add railway emission factors\n\n------------------------------------------------------------------------\n\n## Citation\n\n``` r\ncitation(""gtfs2emis"")\n#> To cite gtfs2emis in publications use:\n#> \n#> Vieira, J. P. B., Pereira, R. H. M., & Andrade, P. R. (2023). Estimating \n#> Public Transport Emissions from General Transit Feed Specification Data. \n#> Transportation Research Part D: Transport and Environment. Volume 119, \n#> 103757. https://doi.org/10.1016/j.trd.2023.103757\n#> \n#> A BibTeX entry for LaTeX users is\n#> \n#> @article{vieira2023estimating,\n#> title = {Estimating Public Transport Emissions from {{General Transit Feed Specification}} Data},\n#> author = {Vieira, Jo{\\~a}o Pedro Bazzo and Pereira, Rafael H. M. 
and Andrade, Pedro R.},\n#> year = {2023},\n#> month = jun,\n#> journal = {Transportation Research Part D: Transport and Environment},\n#> volume = {119},\n#> pages = {103757},\n#> issn = {1361-9209},\n#> doi = {10.1016/j.trd.2023.103757},\n#> urldate = {2023-05-06},\n#> langid = {english},\n#> keywords = {Emission factors,Emission models,GTFS,Gtfs2emis,Public transport emissions,Urban bus}\n#> }\n```\n\n### Credits \n\nThe **gtfs2emis** package is developed by a team at the Institute for\nApplied Economic Research (IPEA) in collaboration with the National\nInstitute for Space Research (INPE), both from Brazil.\n'",",https://doi.org/10.1016/j.trd.2023.103757,https://doi.org/10.31219/osf.io/8m2cy,https://doi.org/10.1016/j.trd.2023.103757\n#","2019/10/29, 14:21:46",1457,CUSTOM,65,688,"2023/07/17, 14:30:57",9,12,91,13,100,0,0.3,0.2009724473257699,"2022/11/14, 19:03:32",v.0.1.0,0,4,false,,false,false,,,https://github.com/ipeaGIT,,,,,https://avatars.githubusercontent.com/u/18657547?v=4,,, HEMCO," Computing atmospheric emissions from different sources, regions, and species on a user-defined grid.",geoschem,https://github.com/geoschem/HEMCO.git,github,"hemco,geos-chem,emissions,atmospheric-modeling,atmospheric-composition,scientific-computing,cloud-computing,aws,bash-script,configuration-files,run-directory,standalone,data-broker,masks,regridding,scale-factors",Emission Observation and Modeling,"2023/10/10, 14:02:04",15,0,3,true,Fortran,GEOS-Chem,geoschem,"Fortran,C,CMake,Perl,Shell,Jupyter Notebook,Makefile,Dockerfile",https://hemco.readthedocs.io,"b'# HEMCO: The Harmonized Emissions Component\n\n
\n\n## Description\n\nThis repository (https://github.com/geoschem/HEMCO) contains the Harmonized Emissions Component (HEMCO) source code. HEMCO is a software component for computing (atmospheric) emissions from different sources, regions, and species on a user-defined grid. It can combine, overlay, and update a set of data inventories (\'base emissions\') and scale factors, as specified by the user through the HEMCO configuration file. Emissions that depend on environmental variables and non-linear parameterizations are calculated in separate HEMCO extensions. HEMCO can be run\nin standalone mode or coupled to an atmospheric model. A more detailed description of HEMCO is given in Keller et al. (2014) and Lin et al. (2021).\n\nHEMCO has been coupled to several atmospheric and Earth System Models, and can be coupled with or without using the Earth System Modeling Framework (ESMF). A detailed description of HEMCO coupled with other models is given in Lin et al. (2021).\n\n## Documentation\n\n### Reference\n\nC. A. Keller, M. S. Long, R. M. Yantosca, A. M. Da Silva, S. Pawson, D. J. Jacob, *HEMCO v1.0: a versatile, ESMF-compliant component for calculating emissions in atmospheric models*, Geosci. Model Dev., **7**, 1409-1417, 2014.\n\nLin, H., Jacob, D. J., Lundgren, E. W., Sulprizio, M. P., Keller, C. A., Fritz, T. M., Eastham, S. D., Emmons, L. K., Campbell, P. C., Baker, B., Saylor, R. D., and Montuoro, R.: *Harmonized Emissions Component (HEMCO) 3.0 as a versatile emissions component for atmospheric models: application in the GEOS-Chem, NASA GEOS, WRF-GC, CESM2, NOAA GEFS-Aerosol, and NOAA UFS models*, Geosci. Model Dev., **14**, 5487\xe2\x80\x935506, 2021.\n\n### Online user\'s manual\n\nInstallation and usage instructions are posted online at [hemco.readthedocs.io](http://hemco.readthedocs.io)\n\n## Support\nWe encourage GEOS-Chem users to use [the Github issue tracker attached to this repository](https://github.com/geoschem/HEMCO/issues/new/choose) to report bugs or technical issues with the HEMCO code.\n\n## License\n\nHEMCO is distributed under the MIT license. Please see the license documents LICENSE.txt and AUTHORS.txt in the root folder.\n'",",https://doi.org/10.5281/zenodo.4618253","2019/09/27, 16:47:43",1489,CUSTOM,208,659,"2023/09/15, 19:30:24",29,83,209,79,40,4,1.4,0.5834896810506567,"2023/10/10, 14:05:48",3.7.1,0,17,false,,false,true,,,https://github.com/geoschem,http://www.geos-chem.org,International,,,https://avatars.githubusercontent.com/u/8321017?v=4,,, MethaneMapper,A fast and accurate deep learning based solution for methane detection from airborne hyperspectral imagery.,UCSB-VRL,https://github.com/UCSB-VRL/MethaneMapper-Spectral-Absorption-aware-Hyperspectral-Transformer-for-Methane-Detection.git,github,,Emission Observation and Modeling,"2023/09/12, 23:47:32",36,0,36,true,Python,Vision Research Lab @ UCSB,UCSB-VRL,"Python,Shell",,"b'## MethaneMapper: Spectral Absorption aware Hyperspectral Transformer for Methane Detection\n\nMethaneMapper is a fast and accurate deep learning based solution for methane detection from airborne hyperspectral imagery. MethaneMapper introduces a spectral absorption wavelength aware transformer network and the largest public dataset, the Methane HotSpot dataset (MHS). 
This repository contains the code for MethaneMapper, scripts to download the dataset, and an online tool to visualize it.\n\n### [**MethaneMapper: Spectral Absorption aware Hyperspectral Transformer for Methane Detection**](https://openaccess.thecvf.com/content/CVPR2023/papers/Kumar_MethaneMapper_Spectral_Absorption_Aware_Hyperspectral_Transformer_for_Methane_Detection_CVPR_2023_paper.pdf)\n[Satish Kumar*](https://www.linkedin.com/in/satish-kumar-81912540/), [Ivan Arevalo](https://www.linkedin.com/in/ivanfarevalo/), [A S M Iftekhar](), [B S Manjunath](https://vision.ece.ucsb.edu/people/bs-manjunath).\n\nOfficial repository of our [**CVPR 2023 (Highlights)**](https://openaccess.thecvf.com/content/CVPR2023/papers/Kumar_MethaneMapper_Spectral_Absorption_Aware_Hyperspectral_Transformer_for_Methane_Detection_CVPR_2023_paper.pdf) paper.\n\n\n\nThis repository includes:\n* Source code of MethaneMapper.\n* Pre-trained weights for the methane plume bounding box detector and segmentation mask\n* Scripts to download the MHS dataset\n* Online tool to visualize the MHS dataset ([**BisQue**](https://bisque2.ece.ucsb.edu/client_service/view?resource=https://bisque2.ece.ucsb.edu/data_service/00-kKkPJUHK6KJDEVBRfDpmmA))\n* Code for custom data preparation for training/testing\n* Code for mapping ground truth masks from CarbonMapper to the AVIRIS-NG flightline\n* Annotation generator to read and convert mask annotations into JSON\n\n\n![supported versions](https://img.shields.io/badge/python-(3.8--3.10)-brightgreen/?style=flat&logo=python&color=green)\n![Library](https://img.shields.io/badge/Library-Pytorch-blue)\n![GitHub license](https://img.shields.io/cocoapods/l/AFNetworking)\n\n\nThe repository follows the structure of the paper, making it easy to follow and to use/extend the work. If this research is helpful to you, please consider citing our paper (BibTeX below).\n\n## Citing\nIf this research is helpful to you, please consider citing our paper:\n```\n@inproceedings{kumar2023methanemapper,\n title={Methanemapper: Spectral absorption aware hyperspectral transformer for methane detection},\n author={Kumar, Satish and Arevalo, Ivan and Iftekhar, ASM and Manjunath, BS},\n booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},\n pages={17609--17618},\n year={2023}\n}\n```\n\n## Usage\n\n### Requirements\n- Linux or macOS with Python >= 3.7\n- Pytorch >= 1.7.0\n- CUDA >= 10.0\n- cuDNN (compatible with CUDA)\n\n### Installation\n1. Clone the repository\n2. Install dependencies\n```\npip install -r requirements.txt\n```\n\n### Data Visualization\nPlease check out the [BisQue guide](https://github.com/UCSB-VRL/MethaneMapper-Spectral-Absorption-aware-Hyperspectral-Transformer-for-Methane-Detection/blob/main/data/visualize_data/README.md)\n\n\n### Download Methane HotSpot (MHS) Dataset\nPlease follow the [MHS_dataset](https://github.com/UCSB-VRL/MethaneMapper-Spectral-Absorption-aware-Hyperspectral-Transformer-for-Methane-Detection/tree/main/mhs_dataset) tutorial to download the dataset\n\n### Training\nFollow the training tutorial [here](https://github.com/UCSB-VRL/MethaneMapper-Spectral-Absorption-aware-Hyperspectral-Transformer-for-Methane-Detection/blob/main/methanemapper/README.md)\n\n## For Developers\nPlease refer to [CONTRIBUTING.md](https://github.com/UCSB-VRL/MethaneMapper-Spectral-Absorption-aware-Hyperspectral-Transformer-for-Methane-Detection/blob/main/docs/CONTRIBUTING.md) for contribution to the repository. Thank you!\n\n## License\nMethaneMapper is released under the UCSB license. 
Please see the [LICENSE](./LICENSE) file for more information.\n'",,"2023/03/24, 19:36:51",215,CUSTOM,94,94,"2023/09/11, 15:26:10",1,2,8,8,44,1,0.0,0.0888888888888889,,,0,3,false,,false,false,,,https://github.com/UCSB-VRL,http://vision.ece.ucsb.edu,"Santa Barbara, CALIFORNIA",,,https://avatars.githubusercontent.com/u/26205443?v=4,,, Pyra,Automated EM27/SUN Greenhouse Gas Measurement Software.,tum-esm,https://github.com/tum-esm/pyra.git,github,"bruker,opus,python,tauri,camtracker,em27,cli,click,fabric,docusaurus,poetry,tailwindcss,typescript,climate,spectrometry,autonomous,sensor,emissions,monitoring,mypy",Emission Observation and Modeling,"2023/10/04, 09:35:26",8,0,7,true,Python,TUM - Environmental Sensing and Modeling,tum-esm,"Python,TypeScript,MDX,JavaScript,CSS,SCSS,HTML,Rust,Shell",https://pyra.esm.ei.tum.de/docs,"b'# Pyra: Automated EM27/SUN Greenhouse Gas Measurement Software\n\n**Source Code:** https://github.com/tum-esm/pyra (this website)
\n**Documentation:** https://pyra.esm.ei.tum.de/docs
\n**Contributor Guide:** https://pyra.esm.ei.tum.de/docs/contributor-guide/becoming-a-contributor\n\n[![status](https://joss.theoj.org/papers/d47b5197eb098bccfbd27b6a6c441cb4/status.svg)](https://joss.theoj.org/papers/d47b5197eb098bccfbd27b6a6c441cb4)\n![](https://badgen.net/github/license/tum-esm/pyra?color=e11d48)\n![](https://badgen.net/github/release/tum-esm/pyra/stable?label=latest%20release&color=e11d48)\n[![](https://badgen.net/github/checks/tum-esm/pyra/main?icon=github&label=ci%20tests)](https://github.com/tum-esm/pyra/actions)\n\n
\n\n## What is Pyra?\n\nPyra (name based on [Python](https://www.python.org/) and [Ra](https://en.wikipedia.org/wiki/Ra)) is software that automates the operation of [EM27/SUN](https://www.bruker.com/en/products-and-solutions/infrared-and-raman/remote-sensing/em27-sun-solar-absorption-spectrometer.html) measurement setups. Operating EM27/SUN devices requires a lot of human interaction. Pyra makes it possible to operate these devices 24/7 autonomously.\n\nPyra has enabled us, the **[Professorship of Environmental Sensing and Modeling](https://www.ee.cit.tum.de/en)** at the **[Technical University of Munich](https://www.tum.de/en)**, to collect continuous data from 5 stations around the city of Munich since 2019 using [MUCCnet](https://atmosphere.ei.tum.de/). Versions 1 to 3 of Pyra have been experimental tools improved internally since 2016. The goal of version 4 is to make Pyra even more stable, easy to understand and extend, and usable by the whole EM27/SUN community.\n\n![](packages/docs/static/img/docs/muccnet-image-roof.jpg)\n\nThe software is licensed under GPLv3 and is open-sourced here, on GitHub: https://github.com/tum-esm/pyra. Pyra has been published in the Journal of Open Source Software (JOSS): https://doi.org/10.21105/joss.05131.\n\n
\n\n## Citing Pyra\n\nWhenever using data generated by Pyra, please make sure to cite the following two papers. With the first one (MUCCnet), we developed the versions 1 to 3 of Pyra internally. With the second one (Pyra), we turned Pyra into a reusable open-source software.\n\n**APA Style:**\n\nDietrich, F., Chen, J., Voggenreiter, B., Aigner, P., Nachtigall, N., and Reger, B.: MUCCnet: Munich Urban Carbon Column network, Atmos. Meas. Tech., 14, 1111\xe2\x80\x931126, https://doi.org/10.5194/amt-14-1111-2021, 2021.\n\nAigner, P., Makowski, M., Luther, A., Dietrich, F., & Chen, J. (2023). Pyra: Automated EM27/SUN Greenhouse Gas Measurement Software. Journal of Open Source Software, 8(84), 5131. https://doi.org/10.21105/joss.05131\n\n**BibTex:**\n\n```bibtex\n@article{Dietrich2021,\n author = {Dietrich, F. and Chen, J. and Voggenreiter, B. and Aigner, P. and Nachtigall, N. and Reger, B.},\n title = {MUCCnet: Munich Urban Carbon Column network},\n journal = {Atmospheric Measurement Techniques},\n volume = {14},\n year = {2021},\n number = {2},\n pages = {1111--1126},\n url = {https://amt.copernicus.org/articles/14/1111/2021/},\n doi = {10.5194/amt-14-1111-2021}\n}\n@article{Aigner2023,\n doi = {10.21105/joss.05131},\n url = {https://doi.org/10.21105/joss.05131},\n year = {2023},\n publisher = {The Open Journal},\n volume = {8},\n number = {84},\n pages = {5131},\n author = {Patrick Aigner and Moritz Makowski and Andreas Luther and Florian Dietrich and Jia Chen},\n title = {Pyra: Automated EM27/SUN Greenhouse Gas Measurement Software},\n journal = {Journal of Open Source Software}\n}\n```\n'",",https://doi.org/10.21105/joss.05131.\n\n,https://doi.org/10.5194/amt-14-1111-2021,https://doi.org/10.21105/joss.05131\n\n**BibTex:**\n\n```bibtex\n@article,https://doi.org/10.21105/joss.05131","2022/02/25, 18:46:30",607,GPL-3.0,157,1213,"2023/10/25, 21:34:30",40,31,150,46,0,0,1.8,0.2581755593803786,"2023/06/14, 10:15:09",v4.0.8,0,4,false,,false,false,,,https://github.com/tum-esm,https://www.ee.cit.tum.de/en/esm,Germany,,,https://avatars.githubusercontent.com/u/89810129?v=4,,, EMIT-Data-Resources,"Built to help scientists understand how dust affects climate, the EMIT can also pinpoint emissions of the potent greenhouse gas.",nasa,https://github.com/nasa/EMIT-Data-Resources.git,github,"emit,lpdaac",Emission Observation and Modeling,"2023/10/24, 15:12:57",58,0,55,true,HTML,NASA,nasa,"HTML,Jupyter Notebook,Python",,"b""# EMIT-Data-Resources \n\nWelcome to the EMIT-Data-Resources repository. This repository provides guides, short how-tos, and tutorials to help users access and work with data from the [Earth Surface Mineral Dust Source Investigation (EMIT) mission](https://lpdaac.usgs.gov/data/get-started-data/collection-overview/missions/emit-overview/). In the interest of open science this repository has been made public but is still under active development. All notebooks and scripts should be functional, however, changes or additions may be made. Make sure to consult the [CHANGE_LOG.md](CHANGE_LOG.md) for the most recent changes to the repository. Contributions from all parties are welcome. \n\n---\n\n## EMIT Background \n\nThe [EMIT](https://earth.jpl.nasa.gov/emit/) Project delivers space-based measurements of surface mineralogy of the Earth\xe2\x80\x99s arid dust source regions. These measurements are used to initialize the compositional makeup of dust sources in Earth System Models (ESMs). 
The dust cycle, which describes the generation, lofting, transport, and deposition of mineral dust, plays an important role in ESMs. Dust composition is presently the largest uncertainty factor in quantifying the magnitude of aerosol direct radiative forcing. By understanding the composition of mineral dust sources, EMIT aims to constrain the sign and magnitude of dust-related radiative forcing at regional and global scales. During its one-year mission on the International Space Station (ISS), EMIT will make measurements over the sunlit Earth\xe2\x80\x99s dust source regions that fall within \xc2\xb152\xc2\xb0 latitude. EMIT will schedule up to five visits (three on average) of each arid target region, and only acquisitions not dominated by cloud cover will be downlinked. EMIT-based maps of the relative abundance of source minerals will advance the understanding of the current and future impacts of mineral dust in the Earth system. \n\nEMIT Data Products are distributed by the [LP DAAC](https://lpdaac.usgs.gov/). Learn more about EMIT data products from [EMIT Product Pages](https://lpdaac.usgs.gov/product_search/?query=emit&status=Operational&view=cards&sort=title) and search for and download EMIT data products using [NASA EarthData Search](https://search.earthdata.nasa.gov/search?q=%22EMIT%22) \n\n---\n\n## Prerequisites/Setup Instructions \n\nThis repository requires that users set up a compatible Python environment and download the EMIT granules used. See the `setup_instuctions.md` file in the `./setup/` folder. \n\n## Repository Contents \n\nBelow are the resources available for EMIT Data. \n\n|Name|Type|Summary|\n|:---|:---|:---|\n|[Getting EMIT Data using EarthData Search](guides/Getting_EMIT_Data_using_EarthData_Search.md)|Markdown Guide|A thorough walkthrough for using [EarthData Search](https://search.earthdata.nasa.gov/search) to find and download EMIT data|\n|[Exploring EMIT L2A Reflectance](python/tutorials/Exploring_EMIT_L2A_Reflectance.ipynb)|Jupyter Notebook|Explore EMIT L2A Reflectance data using interactive plots|\n|[How to find and access EMIT data](python/how-tos/How_to_find_and_access_EMIT_data.ipynb)|Jupyter Notebook|Use the `earthaccess` Python library to find and download or stream EMIT data (see the short sketch after this table)|\n|[How to Convert to ENVI Format](python/how-tos/How_to_Convert_to_ENVI.ipynb)|Jupyter Notebook|Convert from downloaded netCDF4 (.nc) format to .envi format|\n|[How to Orthorectify](python/how-tos/How_to_Orthorectify.ipynb)|Jupyter Notebook|Use the geometry lookup table (GLT) included with the EMIT netCDF4 file to project on a geospatial grid (EPSG:4326)|\n|[How to Extract Point Data](python/how-tos/How_to_Extract_Points.ipynb)|Jupyter Notebook|Extract spectra using lat/lon coordinates from a .csv and build a dataframe/.csv output|\n|[How to Extract Area Data](python/how-tos/How_to_Extract_Area.ipynb)|Jupyter Notebook|Extract an area defined by a .geojson or shapefile|\n|[How to use EMIT Quality Data](python/how-tos/How_to_use_EMIT_Quality_data.ipynb)|Jupyter Notebook|Build a mask using an EMIT L2A Mask file and apply it to an L2A Reflectance file|\n|[How to use Direct S3 Access with EMIT](python/how-tos/How_to_Direct_S3_Access.ipynb)|Jupyter Notebook|Use S3 from inside AWS us-west2 to access EMIT Data|\n|[How to find EMIT Data using NASA's CMR API](python/how-tos/How_to_find_EMIT_data_using_CMR_API.ipynb)|Jupyter Notebook|Use NASA's CMR API to programmatically find EMIT Data|
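As a minimal sketch of the `earthaccess` workflow summarized in the table above (the short name refers to the EMIT L2A reflectance collection; the date range, bounding box, and download path are illustrative placeholders, and the linked notebook remains the authoritative version):

```python
# Sketch: find and download EMIT granules with the earthaccess library.
# Assumes NASA Earthdata Login credentials are available (e.g., via ~/.netrc).
import earthaccess

earthaccess.login()
results = earthaccess.search_data(
    short_name="EMITL2ARFL",                    # EMIT L2A reflectance product
    temporal=("2023-01-01", "2023-01-31"),      # illustrative date range
    bounding_box=(-105.0, 31.0, -103.0, 33.0),  # lon/lat box: W, S, E, N
    count=10,
)
files = earthaccess.download(results, local_path="./data")
```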
\n\n---\n\n## Helpful Links \n\n+ [JPL EMIT Website](https://earth.jpl.nasa.gov/emit/) \n+ [Video of 2023 Tutorial Series](https://www.youtube.com/playlist?list=PLO2yB4LGNlWrC5NdxeHMxyAxdwQhSypXe)\n+ [LP DAAC EMIT Product Pages](https://lpdaac.usgs.gov/product_search/?query=emit&status=Operational&view=cards&sort=title) - Learn more about available EMIT products \n+ [VISIONS Open Data Portal](https://earth.jpl.nasa.gov/emit/data/data-portal/coverage-and-forecasts/) - Learn about current and forecasted EMIT coverage \n\n+ [EMIT on Earth Data Search](https://search.earthdata.nasa.gov/search?q=%22EMIT%22) - Download EMIT Data from NASA\n\n+ [EMIT Github Repository](https://github.com/emit-sds) - Main EMIT Repository \n\n+ [EMIT Utilities Github Repository](https://github.com/emit-sds/emit-utils) - General convenience utilities for working with EMIT data\n\n+ [L2A Reflectance User Guide](https://lpdaac.usgs.gov/documents/1569/EMITL2ARFL_User_Guide_v1.pdf) \n\n+ [L2A Algorithm Theoretical Basis Document](https://lpdaac.usgs.gov/documents/1571/EMITL2A_ATBD_v1.pdf) \n\n+ [EMIT on Slack]( https://forms.gle/XefLVG6e6A7ezwpY9) - Join the EMIT slack community!\n\n---\n\n## Contact Info \n\nEmail: \nVoice: +1-866-573-3222 \nOrganization: Land Processes Distributed Active Archive Center (LP DAAC)\xc2\xb9 \nWebsite: \nDate last modified: 07-07-2023 \n\n\xc2\xb9Work performed under USGS contract G15PD00467 for NASA contract NNG14HH33I. \n""",,"2022/12/30, 15:30:07",299,Apache-2.0,189,189,"2023/10/24, 15:12:57",4,30,40,40,1,0,0.3,0.19580419580419584,,,0,7,false,,true,false,,,https://github.com/nasa,https://github.com/nasa/nasa.github.io/blob/master/docs/INSTRUCTIONS.md,United States of America,,,https://avatars.githubusercontent.com/u/848102?v=4,,, Integrated Methane Inversion,Contains the source code for setting up and running the Integrated Methane Inversion with GEOS-Chem.,geoschem,https://github.com/geoschem/integrated_methane_inversion.git,github,"atmospheric-chemistry,atmospheric-composition,atmospheric-modeling,aws,climate-change,climate-modeling,cloud-computing,geos-chem,greenhouse-gases,inverse-modeling,inversions,methane,scientific-computing,analytical-inversions,python",Emission Observation and Modeling,"2023/07/18, 20:08:45",18,0,15,true,Python,GEOS-Chem,geoschem,"Python,Jupyter Notebook,Shell,Perl",https://imi.readthedocs.org,"b'# Integrated Methane Inversion (IMI) Workflow\n## Overview:\n\nThis directory contains the source code for setting up and running the\n[Integrated Methane Inversion](https://imi.seas.harvard.edu/) with GEOS-Chem.\n\n\n## Documentation:\n\nPlease see the [IMI readthedocs site](https://imi.readthedocs.io)\n\n\n## Reference:\n\nVaron, D. J., Jacob, D. J., Sulprizio, M., Estrada, L. A., Downs, W. B., Shen, L., Hancock, S. E., Nesser, H., Qu, Z., Penn, E., Chen, Z., Lu, X., Lorente, A., Tewari, A., and Randles, C. A.: Integrated Methane Inversion (IMI 1.0): a user-friendly, cloud-based facility for inferring high-resolution methane emissions from TROPOMI satellite observations, Geosci. 
Model Dev., 15, 5787\xe2\x80\x935805, [https://doi.org/10.5194/gmd-15-5787-2022](https://doi.org/10.5194/gmd-15-5787-2022), 2022.'",",https://doi.org/10.5194/gmd-15-5787-2022,https://doi.org/10.5194/gmd-15-5787-2022","2020/04/27, 00:36:27",1276,MIT,67,492,"2023/10/12, 18:42:57",52,75,121,97,13,3,2.1,0.4507658643326039,"2023/07/19, 00:19:32",imi-1.2.1,0,8,false,,false,false,,,https://github.com/geoschem,http://www.geos-chem.org,International,,,https://avatars.githubusercontent.com/u/8321017?v=4,,, eCalc,A software tool for calculation of energy demand and greenhouse gas emissions from oil and gas production and processing.,equinor,https://github.com/equinor/ecalc.git,github,,Emission Observation and Modeling,"2023/10/20, 09:00:50",23,1,23,true,Python,Equinor,equinor,"Python,Dockerfile,JavaScript",https://equinor.github.io/ecalc,"b'![eCalc Logo](https://raw.githubusercontent.com/equinor/ecalc/main/docs/static/img/logo.svg)\n\n[![CI Build](https://github.com/equinor/ecalc/actions/workflows/on-push-main-branch.yml/badge.svg)](https://github.com/equinor/ecalc/actions/workflows/on-push-main-branch.yml)\n![License](https://img.shields.io/github/license/equinor/ecalc)\n[![Code style: black](https://img.shields.io/badge/code%20style-black-000000.svg)](https://github.com/psf/black)\n![PyPI - Python Version](https://img.shields.io/pypi/pyversions/libecalc)\n![PyPI - Wheel](https://img.shields.io/pypi/wheel/libecalc)\n![PyPI - Implementation](https://img.shields.io/pypi/implementation/libecalc)\n![Pre-commit - Enabled](https://img.shields.io/badge/pre--commit-enabled-brightgreen?logo=pre-commit&logoColor=white) \n\n# eCalc\xe2\x84\xa2\neCalc\xe2\x84\xa2 is a software tool for calculation of energy demand and greenhouse gas (GHG) emissions from oil and gas production and processing.\n\n> **Note**\n>\n> eCalc\xe2\x84\xa2 is a work in progress and is by no means considered a finished and final product. We currently recommend using the YAML API, and only\n> falling back to the Python API when strictly needed, as a breaking v2 version of the Python API is in development and expected to be finished in 2023.\n\n> **Warning**\n>\n> The quality of the results produced by eCalc\xe2\x84\xa2 is highly dependent on the quality of the input data. Further, we do not make any guarantees and are not liable for the quality of results when using eCalc\xe2\x84\xa2.\n\n---\n\n![eCalc Illustration](https://raw.githubusercontent.com/equinor/ecalc/main/docs/docs/about/ecalc_illustration.svg)\n\n\n---\n## Reference Links\n\n* [**Documentation**](/about/)\n* [**Contribution**](CONTRIBUTING.md)\n* [**Security**](SECURITY.md)\n* [**Code of Conduct**](CODE_OF_CONDUCT.md)\n* [**Source Code**](https://github.com/equinor/ecalc)\n\n---\n\n## Introduction\n\neCalc\xe2\x84\xa2 is a software tool for calculation of energy demand and GHG emissions from oil and gas production and processing. It enables the cross-disciplinary collaboration required to achieve high-quality and transparent energy and GHG emission prognosis and decision support.\n\neCalc\xe2\x84\xa2 performs energy and emission calculations by integrating data, knowledge and future plans from different disciplines. This could be production and injection profiles from the reservoir engineer, characteristics of energy consuming equipment units such as gas turbines, compressors and pumps from the facility engineer, and emission factors for different fuels from the sustainability engineer. 
The main idea is using physical or data-driven models to relate production rates and pressures to the required processing energy and resulting emissions. Integrated bookkeeping for all emission sources is offered.\n\neCalc\xe2\x84\xa2 uses a bottom-up approach to give high-quality installation and portfolio level forecasts at the same time as detailed insights about the energy drivers and processing capacities for the individual installation.\n\n## Getting started\n\neCalc\xe2\x84\xa2 is both a Python library and a command line interface (CLI) for use with eCalc YAML Models. We currently recommend using eCalc\xe2\x84\xa2 from the command line with eCalc YAML Models, since the Python API will change soon, while the YAML will\nremain more or less stable and backwards compatible.\n\nTo get started, please refer to the [eCalc\xe2\x84\xa2 Docs - Getting Started](/about/getting_started/),\nor follow the quick guide below:\n\n### Prerequisites\n\n* [Python](https://www.python.org/), version 3.8 or higher\n* Java, version 8 or higher\n* [Docker](https://www.docker.com/) (Optional), Linux or MacOS\n\neCalc\xe2\x84\xa2 only supports Python 3, and will follow [Komodo](https://github.com/equinor/komodo) with respect to the minimum Python requirement, which is currently 3.8.\n\n### Installation\n\n```bash\npip install libecalc\necalc --version\necalc selftest\n```\n\n**Alternative using Docker**:\n\n```bash\ndocker build --target build -t ecalc .\ndocker run -it ecalc /bin/bash\n```\n\nInside the docker container, run:\n\n```bash\necalc --version\necalc selftest\n```\n\nPlease refer to [Docker Docs](https://docs.docker.com/) for details on how to use Docker.\n\n### Create and run your first model\n\nPlease refer to https://equinor.github.io/ecalc/docs/about/modelling/setup/ for how to set up your own model\nwith the YAML API and https://equinor.github.io/ecalc/docs/about/getting_started/cli/ for how to run it.\n\nSee [Examples](#examples) below to use one of our predefined examples.\n\n## Development and Contribution\n\nWe welcome all kinds of contributions, including code, bug reports, issues, feature requests, and documentation.\nThe preferred way of submitting a contribution is to either make an issue on GitHub or by forking the project on GitHub\nand making a pull request.\n\nSee [Contribution Document](CONTRIBUTING.md) on how to contribute.\n\nSee the [Developer Guide](/contribute/get-started) for details.\n\n## Examples\nJupyter Notebook examples can be found in /examples. In order to run these examples, you need to install the optional\ndependencies.\n\n```bash\npip install libecalc[notebooks]\n```\n\nThe examples include models specified both in YAML and in Python. 
See /examples\n\n## Documentation\n\nThe documentation can be found at https://equinor.github.io/ecalc\n'",,"2023/05/04, 13:02:22",174,GPL-3.0,211,211,"2023/10/24, 10:29:54",11,238,240,240,1,10,2.8,0.7535545023696683,"2023/10/09, 08:46:14",v8.4.4,0,11,false,,true,true,equinor/ecalc,,https://github.com/equinor,http://www.equinor.com,,,,https://avatars.githubusercontent.com/u/525862?v=4,,, forest-offsets,Includes our libraries and scripts for analyzing California's compliance forest carbon offsets program.,carbonplan,https://github.com/carbonplan/forest-offsets.git,github,,Emission Observation and Modeling,"2023/06/01, 15:31:15",15,0,2,true,Python,carbonplan,carbonplan,"Python,Jupyter Notebook,R,Dockerfile,Shell",http://carbonplan.org/research/forest-offsets-explainer,"b""\n\n# carbonplan / forest-offsets\n\n**analysis of forest offset projects**\n\n[![CI](https://github.com/carbonplan/forest-offsets/actions/workflows/main.yaml/badge.svg)](https://github.com/carbonplan/forest-offsets/actions/workflows/main.yaml)\n[![MIT License](https://badgen.net/badge/license/MIT/blue)](./LICENSE)\n[![DOI](https://img.shields.io/badge/code-10.5281/zenodo.4628604-6aa3d5?link=https://doi.org/10.5281/zenodo.4628604)](https://doi.org/10.5281/zenodo.4628604)\n[![DOI:10.1101/2021.04.28.441870](http://img.shields.io/badge/preprint-10.1101/2021.04.28.441870-9f3a44.svg)](https://doi.org/10.1101/2021.04.28.441870)\n\nThis repository includes our libraries and scripts for analyzing California's compliance forest carbon offsets program. This work is described in more detail in a [web article](https://carbonplan.org/research/forest-offsets-explainer) and a [preprint](https://doi.org/10.1101/2021.04.28.441870). See the [carbonplan/forest-offsets-paper](https://github.com/carbonplan/forest-offsets-paper) repository for Jupyter notebooks that specifically recreate all the figures in the preprint. You can also browse some of these data and results in an [interactive web map](https://carbonplan.org/research/forest-offsets).\n\n## install\n\nFrom PyPI:\n\n```shell\npip install carbonplan_forest_offsets\n```\n\nor from source:\n\n```shell\npip install git+https://github.com/carbonplan/forest-offsets.git\n```\n\n## data sources\n\nAll data sources used in this project are described in [this](./carbonplan_forest_offsets/data/catalog.yaml) Intake Catalog. A schematic representing the primary input datasets and outputs is shown below for reference.\n\n![offsets-dag](./offsets-dag.png)
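As a brief, illustrative sketch of how an Intake catalog like this one is typically opened with the `intake` package (the entry name in the last line is hypothetical; check the catalog itself for the real source names):

```python
# Sketch: browse the project's Intake catalog of input data sources.
import intake

cat = intake.open_catalog("carbonplan_forest_offsets/data/catalog.yaml")
print(list(cat))  # names of the data sources described in the catalog
# df = cat["some_source"].read()  # hypothetical entry name; loads one source
```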
\n\n## data products\n\nSee the following Zenodo archives for descriptions of the data products produced by this project:\n\n- G Badgley, J Freeman, J J Hamman, B Haya, D Cullenward (2021) California improved forest management offset project database (Version 1.0) https://doi.org/10.5281/zenodo.4630684.\n- G Badgley, et al. (2021) Systematic over-crediting in California\xe2\x80\x99s forest carbon offsets program https://doi.org/10.5281/zenodo.4630711.\n\n## environments\n\nThis project uses the Python and R programming languages. Environment specifications are included in the `envs/` directory and pre-built Docker images are available on Dockerhub:\n\n- [retro-python-notebook](https://hub.docker.com/repository/docker/carbonplan/retro-python-notebook)\n- [retro-r-notebook](https://hub.docker.com/repository/docker/carbonplan/retro-r-notebook)\n\n## license\n\nAll the code in this repository is [MIT](https://choosealicense.com/licenses/mit/) licensed. When possible, the data used by this project is licensed using the [CC-BY-4.0](https://choosealicense.com/licenses/cc-by-4.0/) license. We include attribution and additional license information for third party datasets, and we request that you also maintain that attribution if using this data.\n\n## about us\n\nCarbonPlan is a non-profit organization that uses data and science for climate action. We aim to improve the transparency and scientific integrity of climate solutions with open data and tools. Find out more at [carbonplan.org](https://carbonplan.org/) or get in touch by [opening an issue](https://github.com/carbonplan/forest-offsets/issues/new) or [sending us an email](mailto:hello@carbonplan.org).\n\n## contributors\n\nThis project is being developed by CarbonPlan staff and the following outside contributors:\n\n- Grayson Badgley (@badgley)\n""",",https://doi.org/10.5281/zenodo.4628604,https://doi.org/10.1101/2021.04.28.441870,https://doi.org/10.1101/2021.04.28.441870,https://doi.org/10.5281/zenodo.4630684.\n-,https://doi.org/10.5281/zenodo.4630711.\n\n##","2020/10/01, 03:21:25",1119,MIT,4,365,"2023/05/30, 19:44:16",7,79,80,8,148,7,0.0,0.18571428571428572,"2022/04/27, 00:25:18",1.2.0,0,8,false,,false,false,,,https://github.com/carbonplan,carbonplan.org,earth,,,https://avatars.githubusercontent.com/u/58278235?v=4,,, OceanSODA,Methods for evaluating and using empirical approaches for studying the surface marine carbonate system.,JamieLab,https://github.com/JamieLab/OceanSODA.git,github,,Emission Observation and Modeling,"2023/10/04, 15:53:15",4,0,2,true,Python,,JamieLab,Python,,"b'\n# Optimal satellite remote sensing of the carbonate system using empirical methods\n\nMethods for evaluating and using empirical approaches for studying the surface marine carbonate system. All work builds on the methods and approaches developed within Land et al. (2019).\n\n# Overview of pipeline and scripts\n1) A set of algorithms for predicting DIC or AT are compared, using the \'matchup database\' as a test/validation data set (i.e. using the algorithm to predict DIC/AT and comparing to in situ derived measures in the matchup database).\n\n2) Optimal algorithms are selected, and these are used to calculate gridded time series for AT, DIC and other carbonate system parameters. In this step, the \'gridded prediction data\' are used as input to the algorithms. The resulting output is gridded netCDF (.nc) files containing carbonate chemistry fields for each region. A copy of the input data sets and uncertainty data are also included in these files.\n\n3) The gridded time series are used to calculate DIC outflow for the Amazon River.\n\n4) The gridded time series are used to assess coral reef vulnerability in each region.\n\nTo run the complete analysis run `osoda_driver.py`. 
This file essentially calls the other scripts in the root directory, each of which runs one of the steps enumerated above:\n* `osoda_algorithm_comparison.py` - Uses the matchup database to compute algorithm performance metrics for each algorithm and input data combination.\n* `osoda_calculate_gridded_predictions.py` - Uses an \'optimal\' algorithm table produced by the algorithm comparison, with Earth observation data sets to calculate gridded time series for DIC, AT and other carbonate system parameters.\n* `osoda_dic_outflow.py` - Uses gauging station discharge data for the Amazon River and gridded DIC time series prediction data, to estimate DIC outflow from the Amazon.\n* `osoda_reef_vulnerability.py` - Uses ReefBase data and gridded DIC time series predictions to estimate reef vulnerability.\n* `osoda_global_settings.py` - This contains the global settings used to configure each aspect of the analysis (e.g. data file paths and activating/deactivating different aspects of the analysis).\n\nAll output from each step is written to a separate subdirectory in the `output` directory (although this is configurable in the global settings file). Parameters, variable names and filepaths (e.g. for the matchup database, prediction data, region masks etc.) and global options are all defined in osoda_global_settings.py. The driver script makes use of these values, and some parts of the analysis will import the global settings using osoda_global_settings.get_default_settings(). The resulting object is a Python dictionary, so values can be easily overwritten in a custom driver script if required.\n\n# Calculating the algorithm comparison metrics\nosoda_algorithm_comparison.py performs the pairwise algorithm comparison using the matchup database and generates a set of weighted and unweighted metrics for each region/input data combination. It uses the methodology of Land et al. (2019).\n\nWhich algorithms are compared for each region, region boundaries, the output directory and other settings are all controlled by the global settings file (osoda_global_settings.py::get_default_settings). \n\nThere is no command-line interface to run this tool and it must be run using Python. To run, simply import osoda_algorithm_comparison.py and run the main function, passing the global settings object (or your own global settings object) as an argument:\n\n```\nimport osoda_global_settings\nimport osoda_algorithm_comparison\nsettings = osoda_global_settings.get_default_settings()\nosoda_algorithm_comparison.main(settings)\n```\n\nOr just run the file from the command line to run it with the default global settings:\n`python osoda_algorithm_comparison.py`\n\nA detailed description of the output files and directory structure is provided in the comments at the top of osoda_algorithm_comparison.py.\n\n# Running the gridded time series tool separately\nosoda_calculate_gridded_predictions.py can be run as a command line tool. Use -h to see the help information:\n`python osoda_calculate_gridded_predictions.py -h`\n\nRequired inputs are the path to the selected algorithm table (e.g. generated from osoda_algorithm_comparison) and an output path template. The path to the gridded input prediction data can be supplied (as in the example below) but will default to that defined by the global settings file. Start and end years can also be supplied, but will default to those defined in the global settings file if left blank. A list of regions and the path to the region mask are optional; if not supplied, it will use all the regions and the region mask defined in the global settings file. Example usage:\n`python osoda_calculate_gridded_predictions.py ""output/algo_metrics/overall_best_algos_min_years=8.csv"" ""output/gridded_predictions_min_year_range/gridded_${REGION}_${LATRES}x${LONRES}_${OUTPUTVAR}.nc"" --input_data_root ""output/gridded_output_new""`\n\nNote that the output path takes the format of a string.Template definition from the Python standard library. This means that, for example, `${REGION}` is a placeholder which is replaced by the region name. Similarly `${LATRES}`, `${LONRES}` and `${OUTPUTVAR}` are replaced by the resolution and the output variable (DIC or AT).\n
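To make the placeholder substitution concrete, here is a minimal sketch of how a path template of this form expands with the standard-library `string.Template` (the template string and values are illustrative):

```python
# Illustration of the string.Template placeholder substitution described above.
from string import Template

template = Template("gridded_${REGION}_${LATRES}x${LONRES}_${OUTPUTVAR}.nc")
outputPath = template.safe_substitute(
    REGION="oceansoda_amazon_plume",  # substituted with each region name
    LATRES="1.0", LONRES="1.0",       # grid resolution
    OUTPUTVAR="DIC",                  # output variable (DIC or AT)
)
print(outputPath)  # gridded_oceansoda_amazon_plume_1.0x1.0_DIC.nc
```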
A list of regions and a path to the region mask are optional; if not supplied, all the regions and the region mask defined in the global settings file will be used. Example usage:\n`python osoda_calculate_gridded_predictions.py ""output/algo_metrics/overall_best_algos_min_years=8.csv"" ""output/gridded_predictions_min_year_range/gridded_${REGION}_${LATRES}x${LONRES}_${OUTPUTVAR}.nc"" --input_data_root ""output/gridded_output_new""`\n\nNote that the output path takes the format of a string.Template definition from the Python standard library. This means that, for example, `${REGION}` is a placeholder which is replaced by the region name. Similarly, `${LATRES}`, `${LONRES}` and `${OUTPUTVAR}` are replaced by the resolution and the output variable (DIC or AT), as the sketch below illustrates.
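A quick illustration of that placeholder expansion (a sketch using only the Python standard library; the region name matches one used elsewhere in this README, while the resolution values are invented):

```
from string import Template

# Same placeholder syntax as the output path template above.
template = Template("gridded_${REGION}_${LATRES}x${LONRES}_${OUTPUTVAR}.nc")

path = template.safe_substitute(REGION="oceansoda_amazon_plume",  # real region name
                                LATRES="1.0", LONRES="1.0",       # illustrative resolution
                                OUTPUTVAR="DIC")
print(path)  # gridded_oceansoda_amazon_plume_1.0x1.0_DIC.nc
```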
When running the gridded time series calculation for the first time, the tool will try to automatically download and resample the gridded input data, unless it detects it at the path provided (--input_data_root). It takes a long time to download everything (potentially over a day with a fast internet connection), so plan accordingly. When running again it should automatically detect that you have the input data. You may want to delete the raw `downloaded_data` folder after running the tool for the first time to save space, as you only need the resampled files. The specifications for the file format, variable names and directory hierarchy are all in the global settings file under the `datasetInfoMap` key.\n\n`datasetInfoMap` in osoda_global_settings.py defines the mapping between an ocean parameter (e.g. sea surface temperature, SST) and the matchup database and prediction data set. To do this it defines a `commonName` (the key used to refer to the ocean parameter) and a `datasetName` (a unique name for the specific data set, since multiple data sets for the same ocean parameter can be used). File paths to the netCDF file/s, the netCDF variable name, and the netCDF variable name for the uncertainty field are also defined here for the matchup database and the gridded prediction data used by osoda_calculate_gridded_predictions.py to create the gridded time series.\n\n# Running the Amazon DIC outflow calculation\nSimply import osoda_dic_outflow.py in Python and run main. The main function is commented with descriptions of each argument, all of which can be supplied using values from the global settings file. For example:\n\n```\nfrom os import path\n\nimport osoda_global_settings\nimport osoda_dic_outflow\nsettings = osoda_global_settings.get_default_settings()\nosodaMasksPath = settings[""regionMasksPath""]\nprecomputedGridAreaPath = settings[""gridAreasPath""]\nregions = [""oceansoda_amazon_plume""]\ncarbonateParametersTemplate = settings[""longGriddedTimeSeriesPathTemplate""]\noutputDir = path.join(settings[""outputPathRoot""], ""dic_outflow_amazon_best"")\nosoda_dic_outflow.main(carbonateParametersTemplate, outputDir, regions, osodaMasksPath, precomputedGridAreaPath)\n```\n\n# Running the reef vulnerability analysis\nEither import osoda_reef_vulnerability and run main with the default (or custom) global settings dictionary, or run the Python file from the command line with no arguments to use the default settings. E.g.\n```\nimport osoda_global_settings\nimport osoda_reef_vulnerability\nsettings = osoda_global_settings.get_default_settings()\nosoda_reef_vulnerability.main(settings)\n```\nis equivalent to running `python osoda_reef_vulnerability.py` in your terminal / command prompt.\n\n\n# Calculating metrics for custom algorithms\nIt is possible to add new algorithms which do not conform to the standard \'implemented\' algorithm format used in the `os_algorithms` module. The easiest way to do this is to pre-compute the model output, RMSD, propagated input data uncertainty and combined (input and model) uncertainty, and add these to the matchup database. Algorithms added to the analysis in this way are referred to as \'custom\' algorithms, while algorithms inside the `os_algorithms` module (i.e. with simple Python implementations) are described as \'implemented\' algorithms within the code.\n\nThe `osoda_algorithm_comparison.py` file contains a function (`custom_algorithm_metrics`) for calculating the metrics using a combination of custom and implemented algorithms. The `example_metrics_from_algo_output.py` script (in the root directory) provides a full example of how to use this function and is fully commented. This script first generates four synthesised test data sets and adds them to the matchup database. For each custom algorithm you must provide the model output (calculated from matchup database inputs), the model RMSD, the propagated input uncertainty and the combined (model and input) uncertainty. The script then defines which matchup database variables correspond to each of these fields for each custom algorithm, and runs the `custom_algorithm_metrics` function to perform the full algorithm comparison.\n\n`custom_algorithm_metrics` requires a list of algorithm info about each custom algorithm (the locations of the fields described above in the matchup database), a list of any \'implemented\' algorithms you also want to include in the analysis, a region name (which must correspond to one of the regions in the mask file), SST and SSS input data set names (which must correspond to data sets defined in the global settings \'datasetInfoMap\' dictionary), and an output path. Optionally, the script can generate simple diagnostic plots of the in situ derived matchup database output (DIC/AT) against the model output as a quick visual aid. Note that, since a given set of custom algorithm output data must have been generated using a single input data combination, the function can only be run using one input combination at a time (hence, the SST and SSS data set names are required inputs to the function).\n\n\n### References\n\nLand PE, Findlay H, Shutler J, Ashton I, Holding T, Grouazel A, Girard-Ardhuin F, Reul N, Piolle J-F, Chapron B, et al (2019). Optimum satellite remote sensing of the marine carbonate system using empirical algorithms in the Global Ocean, the Greater Caribbean, the Amazon Plume and the Bay of Bengal. 
Remote Sensing of Environment, doi: 10.1016/j.rse.2019.111469\n'",,"2020/03/02, 13:39:15",1332,MIT,13,82,"2023/10/04, 15:53:15",0,4,4,3,21,0,0.0,0.21875,"2023/04/26, 16:34:14",v1.1.0,0,3,false,,false,false,,,https://github.com/JamieLab,,,,,https://avatars.githubusercontent.com/u/61696163?v=4,,, exiobase,A global and detailed Multi-Regional Environmentally Extended Supply-Use Table (MR-SUT) and Input-Output Table (MR-IOT).,,,custom,,Life Cycle Assessment,,,,,,,,,,https://www.exiobase.eu/,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,, bonsai,"The aim of BONSAI is to make reliable, unbiased sustainability information on products (product footprints) readily and freely available whenever and wherever it is needed to support product comparisons and decisions.",BONSAMURAIS,https://github.com/BONSAMURAIS/bonsai.git,github,"overview,guidelines,management",Life Cycle Assessment,"2020/09/08, 13:57:00",47,0,3,false,Jupyter Notebook,BONSAI,BONSAMURAIS,Jupyter Notebook,https://bonsai.uno/,"b""# BONSAI\n\nWelcome to the **management and documentation repository** for the Bonsai organisation (named _Bonsamurais_ on GitHub).\n\nThis repository hosts **[the Wiki](https://github.com/BONSAMURAIS/bonsai/wiki)** which describes all components of the BONSAI modular software architecture (work-in-progress!). This prominent README document describes how to get started with Bonsai.\n\nIf you're a potential Bonsai **developer/contributor**: you're in the right place. There are many ways to contribute. This page will try to effectively direct your energies. Not sure if you want to contribute? [See here!](https://github.com/BONSAMURAIS/bonsai#why-contribute).\n\nIf you're a potential Bonsai **user**: open data on BONSAI can be accessed from [the Open Virtuoso triplestore](http://odas.aau.dk/). Accessing data from the triplestore requires [SPARQL](https://www.w3.org/TR/rdf-sparql-query/) know-how. Development of easy-to-use tools is currently ongoing. Meanwhile, some example queries have been developed to help users extract data.\n\n## Overviews and priorities\nIf you wish to contribute, a starting point could be to look at [the BONSAI repositories overview](https://github.com/BONSAMURAIS/bonsai/blob/master/repositories_overview.md) page.\nThe figure below explains how (some of) the project repositories described in the BONSAI repositories overview are linked. \n \n\n![](https://github.com/BONSAMURAIS/bonsai/blob/master/Bonsai_git_scheme.png) \n\n## What is the current development status?\nThe Bonsai project has been under active development since the March 2019 [hackathon](https://github.com/BONSAMURAIS/hackathon-2019). The topics discussed during the hackathon are available on the dedicated [bonsai.groups.io](https://bonsai.groups.io/g/hackathon2019/topics?p=RecentPostDate%2FSticky,,,20,1,0,0) page. BONSAI is currently in the alpha stage, with preliminary software to convert open datasets for sustainability assessment using the BONSAI ontology. \nThe development status is also visible in the [project board](https://github.com/orgs/BONSAMURAIS/projects/2), which is intended to support the prioritisation of tasks. \n\n## How to communicate with other developers?\nBONSAI has a [mailing list](https://bonsai.groups.io/g/main) used for communication to the broader community.\nThe [Bonsai Slack Workspace](https://bonsai-open.slack.com) is in active usage during planned hackathons and workshops. 
\n\nThe BONSAI enhancement proposal [BEP0004](https://github.com/BONSAMURAIS/enhancements/blob/bep4-communications/beps/0004-bonsai-communication-strategy.md) describes the knowledge management and internal communication strategy of the organisation.\n\n## What do we mean when we say 'Bonsai'?\nThe term _Bonsai_ (**b**ig **o**pen **n**etwork **o**f **s**ustainability **a**ssessment **i**nformation) is used within the community for various related concepts. The BONSAI association is a non-profit headquartered in Denmark. The _big open network_ extends well beyond this organisation, to include the volunteers, digital artefacts, concepts and processes which constitute the full Bonsai project. Core software modules may also be referred to collectively as _Bonsai_, although the boundaries of correct usage here are not yet final or formalised.\n\n## Where is BONSAI described?\nHigh-level descriptions of the goals and objectives of BONSAI are available at the [Bonsai website](https://bonsamurais.github.io/bonsai.uno/).\nThis Wiki provides an up-to-date consensus description of the Bonsai project. An overview of the different BONSAI repositories and their content can be found [here](https://github.com/BONSAMURAIS/bonsai/blob/master/repositories_overview.md).\n#### Note\n###### The tasks and planning on the Wiki supersede the static [work plan](https://bonsai.uno/strategy-work-plan/) hosted on the website.\n\n## Code-of-conduct and decision-making process\nParticipation is subject to the [BONSAI code of conduct](https://github.com/BONSAMURAIS/.github/blob/master/CODE_OF_CONDUCT.md).\n\nA proposal for using Python-style Bonsai enhancement proposals (BEPs) [has been formulated](https://github.com/BONSAMURAIS/enhancements/blob/master/beps/0002-bonsai-project-community-governance-structure.md) and is [under discussion](https://bonsai.groups.io/g/main/topic/bep0002_proposal_open_for/30399914?p=,,,20,0,0,0::recentpostdate%2Fsticky,,,20,1,0,30399914). The organisation Chair Chris Mutel has blogged about his thoughts on this topic [here](https://chris.mutel.org/bonsai-governance.html).\n\n## Expected/useful knowledge for contributors\nThere are many ways to contribute, which require different knowledge and skills, mostly relating to software development. Some general knowledge areas that are useful for all contribution forms are suggested here.\n\n### GitHub\nAs a distributed collaborative open-source project, we have chosen GitHub for our version control management and project management (decision pending). To contribute source code, you will need to understand the [standard GitHub workflow](https://guides.github.com/introduction/flow/). For task management and organisation, familiarity with the usage of [Issues](https://guides.github.com/features/issues/) will also help.\n\n### Sustainability Assessments \nThe _sai_ in _Bonsai_ stands for _sustainability assessment information_. Most contributors have a background connected to [Lifecycle Assessment](https://en.wikipedia.org/wiki/Life-cycle_assessment) (LCA) and/or [environmentally extended input-output analysis](https://en.wikipedia.org/wiki/Environmentally_extended_input-output_analysis) (EIOA). High-level decision making requires knowledge of this context. 
However, vital low-level contributions do not require this background knowledge.\n\n### Resource Description Framework\nThe W3C standard [RDF](https://en.wikipedia.org/wiki/Resource_Description_Framework) is used for modelling knowledge and brings numerous benefits (such as allowing different data sources to be linked consistently). This is related to Linked Data and the Semantic Web. Understanding RDF will help you to understand what makes Bonsai unique and potentially very powerful. The final two sets of presentation slides of [this Web Fundamentals course](https://rubenverborgh.github.io/WebFundamentals/semantic-web/) are a good starting point and contain links to many other resources. \n\nAdditional background information on open issues at the interface between LCA and Open Data is available [here](https://chris.mutel.org/next-steps.html#id2) and [here](https://lca-net.com/blog/next-step-open-lca-data/).\n\n### Why contribute?\nWe assume that you recognise the importance of sustainability assessments: to create a more sustainable world, society must have a scientifically-grounded understanding of the environmental and social impacts of products and processes. But why should you put your efforts into the Bonsai project specifically? \nSustainability assessment, and life cycle assessment (LCA) in particular, is to a large extent built on cathedrals - large background databases, erected by the efforts of many people over a long period of time, but which are now both expensive and exclusive, and whose gatekeepers limit access to both the data and decisions on its management. [BONSAI](https://bonsai.uno/) believes in another model, a [bazaar](https://en.wikipedia.org/wiki/The_Cathedral_and_the_Bazaar) where the entire community can contribute to data generation, validation, and management decisions. We strongly feel that an open database is more transparent and more reproducible, and is therefore the only option for the science of life cycle assessment. Such databases are also a prerequisite for LCA studies being used to support democratic decision making.\n""",,"2017/01/11, 01:29:37",2478,BSD-3-Clause,0,95,"2019/04/12, 07:45:12",7,2,12,0,1657,0,0.0,0.6129032258064516,,,0,10,false,,false,false,,,https://github.com/BONSAMURAIS,https://bonsai.uno/,"Aalborg, Denmark",,,https://avatars.githubusercontent.com/u/25032243?v=4,,, brightway2,An open source framework for life cycle assessment.,brightway-lca,https://github.com/brightway-lca/brightway2.git,github,"life-cycle-assessment,bw2,python,documentation",Life Cycle Assessment,"2023/09/28, 11:18:42",79,53,36,true,Jupyter Notebook,Brightway LCA Software Framework,brightway-lca,"Jupyter Notebook,HTML,TeX,Python,Shell",https://brightway.dev/,"b""# Brightway2 life cycle assessment framework\n\nBrightway2 is a framework for advanced life cycle assessment calculations. It consists of several components, each of which accomplishes a specific task. This package is a container for all the separate components, for ease of documentation and installation.\n\nBrightway2 is inspired by the Brightway software, which was developed during Chris Mutel's PhD work at ETH Zurich, but is a complete rewrite focusing on simplicity, power, and ease of use. 
Brightway2 can run on all major operating systems.\n\n## Official site\n\n* https://brightway.dev\n\n## Online documentation\n\n* https://docs.brightway.dev/\n\n## Development blog\n\n* http://chris.mutel.org\n\n## Packages\n\n* brightway2-data: https://github.com/brightway-lca/brightway2-data\n* brightway2-calc: https://github.com/brightway-lca/brightway2-calc\n* brightway2-io: https://github.com/brightway-lca/brightway2-io\n""",,"2020/03/02, 18:57:27",1332,BSD-3-Clause,17,499,"2023/10/02, 18:17:15",20,3,48,12,23,3,0.3333333333333333,0.018433179723502335,,,0,7,false,,true,false,"romainsacchi/autumn_school_2023,flechtenberg/pulpo_methanol_case,flechtenberg/pulpo,Stew-McD/WasteAndMaterialFootprint-MacroStudy,Stew-McD/WasteAndMaterialFootprint,mid2SUPAERO/LCA4MDAO,brightway-lca/temporalis,Haitham-ghaida/bw4built,narest-qa/repo50,robyistrate/internet-environmental-footprint,Anita2891/decision_tool,matthieu-str/ES_MOO_validation,premise-community-scenarios/scenario-example-bread,brightway-lca/bw2-docker,dbantje/internalization,tyrael147/premise_testing,MTES-MCT/ecobalyse-data,CIRAIG/IWP_Reborn,sysarcher/hack_zurich,dgfug/swolfpy,SwolfPy-Project/swolfpy,ENVIRO-Module/enbios,cyrillefrancois/openlca2bw,QSD-Group/BW2QSD,aleksandra-kim/gsa_framework,cmutel/tension,AntelopeLCA/lca_disclosures_bw2,brightway-lca/multifunctional,polca/premise,MaximeAgez/pylcaio,mfastudillo/ulcarchetype,scyjth/biosteam_lca,oie-mines-paristech/lca_algebraic,PascalLesage/bw2-database-preaggregator,CIRAIG/bw2waterbalancer,aleksandra-kim/brightway2-calc-copy,brightway-lca/brightway2-regional,CIRAIG/bw2landbalancer,Loisel/lca2rmnd,pjamesjoyce/futura,CIRAIG/brightway2-aggregated,MAGIC-nexus/nis-backend,pjamesjoyce/lca_disclosures,NicolasDumoulin/brightway2-sensitivity,cmutel/bw2_aware,pjamesjoyce/lcopt_cv,pjamesjoyce/lcopt,pjamesjoyce/rmflow,peteWT/fcat_biomass,WoodResourcesGroup/RoundwoodHarvestGHG,LCA-ActivityBrowser/activity-browser,cardosan/tempo_test,cmutel/bw2-lcimpact",,https://github.com/brightway-lca,https://docs.brightway.dev/,,,,https://avatars.githubusercontent.com/u/26960762?v=4,,, Activity Browser,An open source and free software for Life Cycle Assessment extending the brightway2 framework.,LCA-ActivityBrowser,https://github.com/LCA-ActivityBrowser/activity-browser.git,github,"brightway2,pyqt5,lca,python,d3",Life Cycle Assessment,"2023/10/20, 06:41:49",111,0,39,true,JavaScript,,LCA-ActivityBrowser,"JavaScript,Python,HTML,CSS",,"b'[![conda-forge version](https://img.shields.io/conda/vn/conda-forge/activity-browser.svg)](https://anaconda.org/conda-forge/activity-browser)\n[![Downloads](https://anaconda.org/conda-forge/activity-browser/badges/downloads.svg)](https://anaconda.org/conda-forge/activity-browser)\n![linux](https://raw.githubusercontent.com/vorillaz/devicons/master/!PNG/linux.png)\n![apple](https://raw.githubusercontent.com/vorillaz/devicons/master/!PNG/apple.png)\n![windows](https://raw.githubusercontent.com/vorillaz/devicons/master/!PNG/windows.png)\n[![Pull request tests](https://github.com/LCA-ActivityBrowser/activity-browser/actions/workflows/main.yaml/badge.svg)](https://github.com/LCA-ActivityBrowser/activity-browser/actions/workflows/main.yaml)\n[![Coverage Status](https://coveralls.io/repos/github/LCA-ActivityBrowser/activity-browser/badge.svg?branch=master)](https://coveralls.io/github/LCA-ActivityBrowser/activity-browser?branch=master)\n\n\n# Activity Browser\n\n\n\nThe **Activity Browser (AB) is an open source software for Life Cycle Assessment (LCA)** that builds on 
[Brightway2](https://brightway.dev).\n\n[Video tutorials](https://www.youtube.com/channel/UCsyySKrzEMsRFsWW1Oz-6aA/) are available on YouTube.\n\nPlease also read and cite our [scientific paper](https://doi.org/10.1016/j.simpa.2019.100012).\n\n\n### Some highlights\n\n- **Fast LCA calculations**: for multiple reference flows, impact categories, and scenarios\n- **A productivity tool for brightway**: model in brightway (Python) and see the results in the AB or vice-versa \n- **Advanced modeling:** Use parameters, scenarios (including prospective LCI databases from [premise](https://premise.readthedocs.io/en/latest/)), uncertainties and our Graph Explorer\n- **Advanced analyses:** Contribution analyses, Sankey Diagrams, Monte Carlo, and Global Sensitivity Analysis\n\n# Contents\n- [Installation](#installation)\n- [Updating the AB](#updating-the-ab)\n- [Getting started](#getting-started)\n - [Running the AB](#running-the-ab)\n - [Importing LCI databases](#importing-lci-databases)\n - [Additional Resources](#additional-resources)\n- [Plugins](#plugins)\n - [Available plugins](#available-plugins)\n - [Installation](#installation-1)\n - [Usage](#usage)\n - [Development](#development)\n- [Contributing](#contributing)\n- [Developers](#developers)\n- [Copyright](#copyright)\n- [License](#license)\n\n# Installation\n\n## The quick way\n\nYou can install and start the activity-browser like this:\n\n```bash\nconda create -n ab -c conda-forge activity-browser\nconda activate ab\nactivity-browser\n```\n\n### Mamba\n\nYou can also install the AB using [Mamba](https://mamba.readthedocs.io/en/latest/mamba-installation.html#mamba-install):\n\n```bash\nmamba create -n ab activity-browser\nmamba activate ab\nactivity-browser\n```\n\n## The thorough way\n### Conda\n\nWe recommend that you use **conda** to manage your Python installation. You can install [Anaconda](https://www.anaconda.com/products/individual) or the more compact [miniconda](https://conda.io/miniconda.html) (Python 3 version) for your operating system. Installation instructions for miniconda can be found [here](https://docs.conda.io/projects/conda/en/latest/user-guide/install/index.html). See also the [conda user guide](https://docs.conda.io/projects/conda/en/latest/user-guide/index.html) or the [Conda cheat sheet](https://docs.conda.io/projects/conda/en/latest/_downloads/843d9e0198f2a193a3484886fa28163c/conda-cheatsheet.pdf).\n\nSkip this step if you already have a working installation of anaconda or miniconda, but make sure to keep your conda installation up-to-date: `conda update conda`.\n\n### Add the Conda-Forge channel\nThe activity-browser has many dependencies that are managed by the [conda-forge](https://conda.io/docs/user-guide/tasks/manage-channels.html) channel. Open a cmd-window or terminal (in Windows you may have to use the Anaconda prompt) and type the following:\n\n```bash\nconda config --prepend channels conda-forge\n```\n\n### Installing Activity Browser\n\n```bash\nconda create -n ab -c conda-forge activity-browser\nconda activate ab\nactivity-browser\n```\n\n#### Activity Browser is installed\n\nAt this point the activity-browser and all of its dependencies will be installed in a new conda environment called `ab`. You can change the environment name `ab` to whatever suits you.\n\n## Updating the AB\n\nWe recommend updating the AB regularly to receive new features & bugfixes. 
These commands will update the activity-browser and all of its dependencies in the conda environment called `ab`.\n\n```bash\nconda activate ab\nconda update activity-browser\n```\n\n# Getting started\n\n## Running the AB\n\nFirst activate the environment where the activity browser is installed:\n\n```bash\nconda activate ab\n```\n\nThen simply run `activity-browser` and the application will open.\n\n## Importing LCI databases\n\n- In the `Project`-tab there is initially a button called _""Add default data (biosphere flows and impact categories)""_. Click this button to add the default data. This is equivalent to `brightway2.bw2setup()` in Python (see the sketch after this list).\n- After adding the default data, you can import a database with the _""Import Database""_-Button. Follow the instructions of the database import wizard. Imports can be done in several ways:\n - Directly from the ecoinvent homepage (ecoinvent login credentials required)\n - From a 7zip archive\n - From a directory with ecospold2 files (same as in brightway2)\n - From Excel files using the brightway Excel format
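For reference, a minimal sketch of that equivalent setup in plain Python (the project name is an arbitrary example; `bw2setup()` is the brightway2 call named above):

```python
import brightway2 as bw

# Work inside a dedicated brightway project ("ab_demo" is a made-up name).
bw.projects.set_current("ab_demo")

# Install the default biosphere flows and impact categories, i.e. the same
# data added by the AB button described above.
bw.bw2setup()
```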
## Additional Resources\n\n- [Youtube tutorials](https://www.youtube.com/channel/UCsyySKrzEMsRFsWW1Oz-6aA/)\n- [AB Wiki](https://github.com/LCA-ActivityBrowser/activity-browser/wiki)\n- [AB scientific article](https://doi.org/10.1016/j.simpa.2019.100012)\n- The AB has two mailing lists, for [updates](https://brightway.groups.io/g/AB-updates) and [user exchange](https://brightway.groups.io/g/AB-discussion)\n- [Brightway2](https://brightway.dev/)\n- [Global Sensitivity Analysis paper](https://onlinelibrary.wiley.com/doi/10.1111/jiec.13194) describing GSA as implemented in the AB; see also our [wiki](https://github.com/LCA-ActivityBrowser/activity-browser/wiki/Global-Sensitivity-Analysis)\n- [Modular LCA paper](https://link.springer.com/article/10.1007/s11367-015-1015-3); [documentation modular LCA](http://activity-browser.readthedocs.io/en/latest/index.html); re-implementation of modular LCA into the AB is [ongoing](https://github.com/marc-vdm/activity-browser/tree/mLCA)\n\n# Plugins\n| :warning: DISCLAIMER |\n|:---------------------|\n| Plugins are not necessarily developed by Activity Browser maintainers. Listed below are plugins from people we trust, but we do not check plugin code. **Use them at your own risk.** |\n| The plugin system is still in development, so keep in mind that things may change at any point. |\n\nSince the `2.8` release, a plugin system has been added to the AB. Plugins are a flexible way to add new functionalities to the AB without modifying the software itself.\n\nThe plugin code has been designed and written by Rémy le Calloch (supported by [G-SCOP laboratories](https://g-scop.grenoble-inp.fr/en/laboratory/g-scop-laboratory)) with revisions from the AB-team.\n\n\n## Available plugins\n\nThese are the plugins that we know about. To add your plugin to this list, either open an issue or a pull request. All submitted plugins will be reviewed, although all risks associated with their use shall be borne by the user.\n\n| Name | Description | Links | Author(s) |\n|:---------|-------------|-------|-----------|\n| [ScenarioLink](https://github.com/polca/ScenarioLink) | Enables you to seamlessly fetch and reproduce scenario-based LCA databases, such as those generated by [premise](https://github.com/polca/premise) | [anaconda](https://anaconda.org/romainsacchi/ab-plugin-scenariolink), [pypi](https://pypi.org/project/ab-plugin-scenariolink/), [github](https://github.com/polca/ScenarioLink) | Romain Sacchi & Marc van der Meide |\n| [ReSICLED](https://github.com/Pan6ora/ab-plugin-ReSICLED) | Evaluating the recyclability of electr(on)ic products to improve product design | [anaconda](https://anaconda.org/pan6ora/ab-plugin-resicled), [github](https://github.com/Pan6ora/ab-plugin-ReSICLED) | G-SCOP Laboratory |\n| [Notebook](https://github.com/Pan6ora/ab-plugin-Notebook) | Use Jupyter notebooks from the AB | [anaconda](https://anaconda.org/pan6ora/ab-plugin-notebook), [github](https://github.com/Pan6ora/ab-plugin-Notebook) | Rémy Le Calloch |\n| [template](https://github.com/Pan6ora/activity-browser-plugin-template) | An empty plugin to start from | [anaconda](https://anaconda.org/pan6ora/ab-plugin-template), [github](https://github.com/Pan6ora/activity-browser-plugin-template) | Rémy Le Calloch |\n\n## Installation\n\n### detailed instructions\n\nEvery plugin\'s GitHub page (links are provided in the table above) should have a **Get this plugin** section with installation instructions.\n\n### general instructions\n\nPlugins are conda packages (like the Activity Browser). To add a plugin, simply install it in your conda environment from the Anaconda repos.\n\n_NB: add `-c conda-forge` to the install command, as below, to avoid problems with dependencies._\n\nExample: \n\n```\nconda activate ab\nconda install -c pan6ora -c conda-forge ab-plugin-notebook\n```\n\n## Usage\n\nOnce a new plugin is installed, restart the Activity Browser.\n\n### enabling a plugin\n\nPlugins are enabled **per-project**. Simply open the plugin manager in the `Tools > Plugins` menu. \n\nClose the plugin manager. New tabs should have appeared in the AB (each plugin can spawn one tab on each left/right panel).\n\n### disabling a plugin\n\nDisable a plugin the same way you enabled it.\n\n**:warning: Keep in mind that all data created by the plugin in a project could be erased when you disable it.**\n\n## Development\n\nThe best place to start when creating new plugins is the [plugin template](https://github.com/Pan6ora/activity-browser-plugin-template). Its code and README will help you understand how to create a plugin.\n\n# Contributing\n\n**The Activity Browser is a community project. 
Your contribution counts!**\n\nIf you have ideas for improvements to the code or documentation or want to propose new features, please take a look at our [contributing guidelines](CONTRIBUTING.md) and open issues and/or pull-requests.\n\nIf you experience problems or are suffering from a specific bug, please [raise an issue](https://github.com/LCA-ActivityBrowser/activity-browser/issues) here on GitHub.\n\n# Developers\n\n### Current main developers\n\n- Bernhard Steubing (b.steubing@cml.leidenuniv.nl) (creator)\n- Marc van der Meide ([github](https://github.com/marc-vdm)) (maintainer)\n\n### Important contributors\n\n- [Adrian Haas](https://github.com/haasad)\n- [Chris Mutel](https://github.com/cmutel)\n- [Daniel de Koning](https://github.com/dgdekoning)\n- [Jonathan Kidner](https://github.com/Zoophobus)\n- [Rémy le Calloch](https://remy.lecalloch.net)\n\n# Copyright\n- 2016-2023: Bernhard Steubing (Leiden University)\n\n# License\nThis program is free software: you can redistribute it and/or modify\nit under the terms of the GNU Lesser General Public License as published\nby the Free Software Foundation, either version 3 of the License, or\n(at your option) any later version.\n\nThis program is distributed in the hope that it will be useful,\nbut WITHOUT ANY WARRANTY; without even the implied warranty of\nMERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\nGNU Lesser General Public License for more details.\n\nYou should have received a copy of the GNU Lesser General Public License\nalong with this program. If not, see <https://www.gnu.org/licenses/>.\n'",",https://doi.org/10.1016/j.simpa.2019.100012,https://doi.org/10.1016/j.simpa.2019.100012","2017/04/19, 17:10:57",2380,LGPL-3.0,305,2059,"2023/10/21, 09:30:24",129,463,947,254,4,6,0.0,0.6741790083708951,"2023/10/20, 11:54:44",2.9.2,9,19,false,,false,true,,,https://github.com/LCA-ActivityBrowser,,,,,https://avatars.githubusercontent.com/u/34097504?v=4,,, EOS-AYCE,Eaternity's software platform serving as an open-source environmental operating system (EOS) for all you can eat (AYCE) for climate.,eaternity,https://gitlab.com/eaternity/eos,gitlab,,Life Cycle Assessment,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,, carculator,Prospective environmental and economic life cycle assessment of vehicles made blazing fast.,romainsacchi,https://github.com/romainsacchi/carculator.git,github,,Life Cycle Assessment,"2023/09/09, 09:37:42",42,4,10,true,Python,,,Python,http://carculator.psi.ch,"b'# ``carculator``\n\n
Prospective environmental and economic life cycle assessment of vehicles made blazing fast.\n\nA fully parameterized Python model developed by the [Technology Assessment group](https://www.psi.ch/en/ta) of the\n[Paul Scherrer Institut](https://www.psi.ch/en) to perform life cycle assessments (LCA) of passenger cars and light-duty vehicles.\n\nSee [the documentation](https://carculator.readthedocs.io/en/latest/index.html) for more detail, validation, etc.\n\nSee our [examples notebook](https://github.com/romainsacchi/carculator/blob/master/examples/Examples.ipynb) as well.\n\n## Table of Contents\n\n- [Background](#background)\n - [What is Life Cycle Assessment](#what-is-life-cycle-assessment)\n - [Why carculator](#why-carculator)\n- [Install](#install)\n- [Usage](#usage)\n - [As a Python library](#as-a-python-library)\n - [As a web app](#as-a-web-app)\n- [Support](#support)\n- [Maintainers](#maintainers)\n- [Contributing](#contributing)\n- [License](#license)\n\n## Background\n\n### What is Life Cycle Assessment?\n\nLife Cycle Assessment (LCA) is a systematic way of accounting for environmental impacts along the relevant phases of the life of a product or service.\nTypically, the LCA of a passenger vehicle includes the raw material extraction, the manufacture of the vehicle, its distribution, use and maintenance, as well as its disposal.\nThe compiled inventories of material and energy required along the life cycle of the vehicle are characterized against impact categories (e.g., climate change).\n\nIn the research field of mobility, LCA is widely used to investigate whether one technology outperforms another.\n\n### Why ``carculator``?\n\n``carculator`` allows you to:\n* produce [life cycle assessment (LCA)](https://en.wikipedia.org/wiki/Life-cycle_assessment) results that include conventional midpoint impact assessment indicators as well as cost indicators\n* use time- and energy-scenario-differentiated background inventories for the future, based on outputs of the Integrated Assessment Model [REMIND](https://www.pik-potsdam.de/research/transformation-pathways/models/remind/remind).
\n* calculate hot pollutant and noise emissions based on a specified driving cycle\n* produce error propagation analyses (i.e., Monte Carlo) while preserving relations between inputs and outputs\n* control all the parameters sensitive to the foreground model (i.e., the vehicles) but also to the background model\n(i.e., supply of fuel, battery chemistry, etc.)\n* and easily export the vehicle models as inventories to be further imported in the [Brightway2](https://brightwaylca.org/) LCA framework\n or the [SimaPro](https://www.simapro.com/) LCA software.\n\n``carculator`` integrates well with the [Brightway](https://brightwaylca.org/) LCA framework.\n\n``carculator`` was built based on work described in [Uncertain environmental footprint of current and future battery electric vehicles by Cox et al. (2018)](https://pubs.acs.org/doi/abs/10.1021/acs.est.8b00261).\n\n## Install\n\n``carculator`` is at an early stage of development and is subject to continuous change and improvement.\nTwo ways of installing ``carculator`` are suggested.\n\nWe recommend installing on **Python 3.7 or above**.\n\n### Installation of the latest version, using conda\n\n conda install -c romainsacchi carculator\n\n### Installation of a stable release from PyPI\n\n pip install carculator\n\n## Usage\n\n### As a Python library\n\nCalculate the fuel efficiency (or ``Tank to wheel`` energy requirement) in km/L of petrol-equivalent of current SUVs for the driving cycle WLTC 3.4\nover 800 Monte Carlo iterations:\n\n```python\n\n from carculator import *\n import matplotlib.pyplot as plt\n \n cip = CarInputParameters()\n cip.stochastic(800)\n dcts, array = fill_xarray_from_input_parameters(cip)\n cm = CarModel(array, cycle=\'WLTC 3.4\')\n cm.set_all()\n TtW_energy = 1 / (cm.array.sel(size=\'SUV\', year=2020, parameter=\'TtW energy\') / 42000) # assuming 42 MJ/L petrol\n \n l_powertrains = TtW_energy.powertrain\n [plt.hist(e, bins=50, alpha=.8, label=e.powertrain.values) for e in TtW_energy]\n plt.xlabel(\'km/L petrol-equivalent\')\n plt.ylabel(\'number of iterations\')\n plt.legend()\n```\n\n![MC results](https://github.com/romainsacchi/carculator/blob/master/docs/_static/img/stochastic_example_ttw.png)\n\nCompare the carbon footprint of battery electric vehicles with that of plug-in hybrid vehicles for different size categories today and in the future\nover 500 Monte Carlo iterations:\n\n```python\n\n from carculator import *\n import matplotlib.pyplot as plt\n \n cip = CarInputParameters()\n cip.stochastic(500)\n dcts, array = fill_xarray_from_input_parameters(cip)\n cm = CarModel(array, cycle=\'WLTC\')\n cm.set_all()\n scope = {\n \'powertrain\': [\'BEV\', \'PHEV\'],\n }\n # restrict the calculation to the powertrains listed in `scope`\n ic = InventoryCalculation(cm, scope=scope)\n \n results = ic.calculate_impacts()\n data_MC = results.sel(impact_category=\'climate change\').sum(axis=3).to_dataframe(\'climate change\')\n plt.style.use(\'seaborn\')\n data_MC.unstack(level=[0, 1, 2]).boxplot(showfliers=False, figsize=(20, 5))\n plt.xticks(rotation=70)\n plt.ylabel(\'kg CO2-eq./vkm\')\n```\n\n![MC results](https://github.com/romainsacchi/carculator/blob/master/docs/_static/img/example_stochastic_BEV_PHEV.png)\n\nFor more examples, see [examples](https://github.com/romainsacchi/carculator/blob/master/examples/Examples.ipynb).\n\n### As a web app\n\n``carculator`` has a [graphical user interface](https://carculator.psi.ch) for fast comparisons of vehicles.\n\n## Support\n\nDo not hesitate to contact the development team at [carculator@psi.ch](mailto:carculator@psi.ch).\n\n## Maintainers\n\n* [Romain 
Sacchi](https://github.com/romainsacchi)\n* [Chris Mutel](https://github.com/cmutel/)\n\n## Contributing\n\nSee [contributing](https://github.com/romainsacchi/carculator/blob/master/CONTRIBUTING.md).\n\n## License\n\n[BSD-3-Clause](https://github.com/romainsacchi/carculator/blob/master/LICENSE). Copyright 2023 Paul Scherrer Institut.'",",https://doi.org/10.5281/zenodo.3778259","2019/06/07, 11:42:08",1601,BSD-3-Clause,97,1036,"2023/06/19, 19:41:20",4,12,26,3,128,0,0.0,0.04712041884816753,"2023/09/09, 08:36:20",v.1.8.4,0,3,false,,true,true,"narest-qa/repo54,romainsacchi/commute,polca/premise,romainsacchi/carculator_online",,,,,,,,,, Electricity Life Cycle Inventory,"A Python package that uses standardized facility release and generation data to create regionalized life cycle inventory (LCI) models for the generation, mix of generation, mix of consumption, and distribution of electricity to end users for the US, with embedded system processes of upstream fuel production and infrastructure.",USEPA,https://github.com/USEPA/ElectricityLCI.git,github,ord,Life Cycle Assessment,"2020/08/26, 09:34:02",23,0,4,true,Python,U.S. Environmental Protection Agency,USEPA,Python,,"b'# Electricity Life Cycle Inventory\n\nA Python package that uses standardized facility release and generation data to create regionalized life cycle inventory (LCI) models for the generation,\n mix of generation, mix of consumption, and distribution of electricity to end users for the US, with embedded system processes of upstream fuel production and infrastructure. Pre-configured model specifications are included, or users can specify their own models. The created LCI models can be exported\n for use in standard life cycle assessment software.\n\nSee the [wiki](http://github.com/USEPA/ElectricityLCI/wiki) for installation and use instructions, descriptions of files, and a list of contributors.\n\nThis code was created as part of a collaboration between the US EPA Office of Research and Development (USEPA) and the National Energy Technology Laboratory (NETL) with contributions from the National Renewable Energy Laboratory (NREL) and support from Eastern Research Group (ERG). More information on this effort can be found in the [Framework for an Open-Source Life Cycle Baseline for Electricity Consumption in the United States](https://netl.doe.gov/energy-analysis/details?id=4004).\n\n## Disclaimer\n\nThis United States Environmental Protection Agency (EPA) and National Energy Technology Laboratory (NETL) GitHub project code is provided on an ""as is"" basis\nand the user assumes responsibility for its use. EPA and NETL have relinquished control of the information and no longer\nhave responsibility to protect the integrity, confidentiality, or availability of the information.\nAny reference to specific commercial products, processes, or services by service mark, trademark, manufacturer,\nor otherwise does not constitute or imply their endorsement, recommendation or favoring by EPA or NETL.\n\n'",,"2017/06/23, 18:37:55",2315,CC0-1.0,0,974,"2023/10/03, 09:53:55",39,84,167,1,22,0,0.0,0.5901639344262295,"2020/08/26, 09:36:44",v1.0.1,0,10,false,,false,false,,,https://github.com/USEPA,https://www.epa.gov,United States of America,,,https://avatars.githubusercontent.com/u/1304320?v=4,,, uslci-content,Supplementary content for the U.S. 
Life Cycle Inventory Database.,uslci-admin,https://github.com/uslci-admin/uslci-content.git,github,,Life Cycle Assessment,"2023/09/25, 17:52:23",22,0,4,true,,,,,,"b""README\n==========\nWelcome to the Content Repository for the [U.S. Life Cycle Inventory Database (USLCI)][uslci-home]. This repository contains supplemental resources for the [**USLCI online database**](https://www.lcacommons.gov/lca-collaboration/National_Renewable_Energy_Laboratory/USLCI_Public/datasets). The most important resources are shown in Table 1. A complete list of USLCI resources is given in the Table of Resources section below.\n\n###### Table 1. Key USLCI Supplementary Resources\n| [USLCI Data Submission Handbook](https://github.com/uslci-admin/uslci-content/blob/dev/docs/submission_handbook/00-sub-handbook-landing.md) | [USLCI Database Downloads](https://github.com/uslci-admin/uslci-content/blob/dev/docs/release_info/release-downloads.md) |\n|:------:|:------:| \n| Guidelines and resources for submitting data to and publishing in the USLCI | Complete USLCI database downloads in the most common data formats | \n\n| [Metadata Guidance](https://github.com/uslci-admin/uslci-content/blob/dev/docs/submission_handbook/02-how-to-publish-in-the-uslci.md#metadata-guidance-tables) | [USLCI Update Press Release](https://github.com/uslci-admin/uslci-content/blob/dev/docs/release_info/press-release.md) |\n|:---:|:---:| \n| Guidelines for completing metadata fields in openLCA unit &/or system processes | Release details & latest news on USLCI as a part of the Federal LCA Commons | \n
\n\nSee this repository [Wiki](https://github.com/uslci-admin/uslci-content/wiki) for additional resources.\n\n## Table of Resources\n### [ReadMe](./README.md)\ni.e. the file you are currently reading\n### [Submission Handbook](./submission_handbook/00-sub-handbook-landing.md)\n### USLCI Release Information\n * [Change log](./release_info/change-log.md)\n * [Release downloads](./release_info/release-downloads.md)\n * [Release statistics](./release_info/release-stats.md)\n\n\n[nrel]: https://www.nrel.gov/\n[uslci-home]: https://www.nrel.gov/lci/\n[uslci-online]: https://uslci.lcacommons.gov \n\n## Contact Information\n* Please contact us with USLCI-related questions at USLCI@erg.com\n* If you'd like to join the release mailing list, please email us with your name and desired email address at USLCI@erg.com\n""",,"2017/12/21, 17:25:44",2134,MIT,98,755,"2023/05/10, 18:00:19",1,0,9,7,168,0,0,0.11819389110225764,,,0,4,false,,false,false,,,,,,,,,,, OpenLCA,An open source and free software for Sustainability and Life Cycle Assessment.,GreenDelta,https://github.com/GreenDelta/olca-app.git,github,"openlca,eclipse-rcp,java",Life Cycle Assessment,"2023/10/24, 13:38:56",153,0,40,true,Java,,GreenDelta,"Java,Python,TypeScript,JavaScript,CSS,Shell,HTML,NSIS,Batchfile",openlca.org,"b'# openLCA\nThis repository contains the source code of [openLCA](http://openlca.org).\nopenLCA is a Java application that runs on the Eclipse Rich Client Platform\n([Eclipse RCP](http://wiki.eclipse.org/index.php/Rich_Client_Platform)). This\nproject depends on the [olca-modules](https://github.com/GreenDelta/olca-modules)\nproject, which is a plain [Maven](http://maven.apache.org/) project that contains\nthe core functionalities of openLCA (e.g. the model, database access,\ncalculations, data exchange, and database updates). \n\nThis repository has the following sub-projects:\n\n* [olca-app](./olca-app): contains the source code of the openLCA RCP \n application.\n* [olca-app-build](./olca-app-build): contains the build scripts for compiling\n openLCA and creating the installers for Windows, Linux, and macOS.\n* [olca-app-html](./olca-app-html): contains the source code for the HTML views\n in openLCA (like the start page or the report views).\n* [olca-refdata](./olca-refdata): contains the current reference data (units,\n quantities, and flows) that are packaged with openLCA.\n\nSee also the README files that are contained in these sub-projects.\n\n## Building from source\nopenLCA is an Eclipse RCP application with parts of the user interface written\nin HTML5 and JavaScript. To compile it from source you need to have the\nfollowing tools installed:\n\n* [Git](https://git-scm.com/) (optional)\n* a [Java Development Kit >= v17](https://adoptium.net)\n* [Maven](http://maven.apache.org/)\n* the [Eclipse package for RCP developers](https://www.eclipse.org/downloads/packages/)\n* [Node.js](https://nodejs.org/) \n\nWhen you have these tools installed you can build the application from source\nvia the following steps:\n\n#### Install the openLCA core modules\nThe core modules contain the application logic that is independent of the user\ninterface and can also be used in other applications. These modules are plain\nMaven projects and can be installed via `mvn install`. 
See the\n[olca-modules](https://github.com/GreenDelta/olca-modules) repository for more\ninformation.\n\n#### Get the source code of the application\nWe recommend using Git to manage the source code, but you can also download\nthe source code as a [zip file](https://github.com/GreenDelta/olca-app/archive/master.zip).\nCreate a development directory (the path should not contain whitespaces):\n\n```bash\nmkdir olca\ncd olca\n```\n\nand get the source code:\n\n```bash\ngit clone https://github.com/GreenDelta/olca-app.git\n```\n\nYour development directory should now look like this:\n\n```\nolca-app\n .git\n olca-app\n olca-app-build\n olca-app-html\n olca-refdata\n ...\n```\n\n#### Building the HTML pages\nTo build the HTML pages of the user interface navigate to the\n[olca-app-html](./olca-app-html) folder:\n\n```bash\ncd olca-app/olca-app-html\n```\n\nThen install the Node.js modules via [npm](https://www.npmjs.com/) (npm is a\npackage manager that comes with your Node.js installation):\n\n```\nnpm install\n```\n\nThis also installs a local version of `webpack` which is used to create the\ndistribution package. The build of this package can be invoked via:\n\n```bash\nnpm run build\n```\n\nThe output is generated in the `dist` folder of this directory and packaged\ninto a zip file that is copied to the `../olca-app/html` folder.\n\n#### Prepare the Eclipse workspace\nDownload the current Eclipse package for RCP and RAP developers (to have\neverything together you can extract it into your development directory). Create\na workspace directory in your development directory (e.g. under the eclipse\nfolder to have a clean structure):\n\n```\neclipse\n ...\n workspace\nolca-app\n .git\n olca-app\n olca-app-build\n olca-app-html\n olca-refdata\n ...\n```\n\nAfter this, open Eclipse and select the created workspace directory. Import the\nprojects into Eclipse via `Import > General > Existing Projects into Workspace`\n(select the `olca/olca-app` directory). You should now see the `olca-app` and\n`olca-app-build` projects in your Eclipse workspace.\n\n#### Loading the target platform \nThe file `platform.target` in the `olca-app` project contains the definition of\nthe [target platform](https://help.eclipse.org/oxygen/index.jsp?topic=%2Forg.eclipse.pde.doc.user%2Fconcepts%2Ftarget.htm)\nof the openLCA RCP application. Just open the file with the `Target Editor`\nand click on `Set as target platform` on the top right of the editor.\n\nThis will download the resources of the target platform into your local\nworkspace and, thus, may take a while. Unfortunately, setting up and\nconfiguring Eclipse can be quite challenging. If you get errors like\n`Unable locate installable unit in target definition`,\n[this discussion](https://stackoverflow.com/questions/10547007/unable-locate-installable-unit-in-target-definition)\nmay help. \n\n#### Copy the Maven modules\nGo back to the command line and navigate to the \n`olca-app/olca-app` folder:\n\n```bash\ncd olca-app/olca-app\n```\n\nand run \n\n```bash\nmvn package\n```\n\nThis will copy the installed openLCA core modules and dependencies (see above)\nto the folder `olca-app/olca-app/libs`.\n\n#### Test the application\nRefresh your Eclipse workspace (select all and press `F5`). Open the file\n[olca-app/openLCA.product](./olca-app/openLCA.product) within Eclipse and click\non the run icon inside the `openLCA.product` tab. 
openLCA should now start.\n\nIf you want to build an installable product, see the description in the \n[olca-app-build](./olca-app-build) sub-project or simply use the Eclipse export\nwizard (Export/Eclipse product). \n\n#### Build the database templates\nThe openLCA application contains database templates that are used when the user\ncreates a new database (empty, with units, or with all reference data). There\nis a Maven project `olca-refdata` that creates these database templates and\ncopies them to the `olca-app/olca-app/db_templates` folder from which openLCA\nloads these templates. To build the templates, navigate to the refdata project\nand run the build:\n\n```bash\ncd olca-app/olca-refdata\nmvn package\n```\n\n## License\nUnless stated otherwise, all source code of the openLCA project is licensed\nunder the [Mozilla Public License, v. 2.0](http://mozilla.org/MPL/2.0/). Please\nsee the LICENSE.txt file in the root directory of the source code.\n'",,"2013/08/29, 14:14:54",3709,MPL-2.0,492,4361,"2023/10/20, 07:34:37",19,181,341,125,5,0,0.5,0.34300595238095233,,,0,11,false,,false,false,,,https://github.com/GreenDelta,,,,,https://avatars.githubusercontent.com/u/3889246?v=4,,, openlca-python-tutorial,Explains the usage of the openLCA API from Python.,GreenDelta,https://github.com/GreenDelta/openlca-python-tutorial.git,github,"python,jython,tutorial,openlca",Life Cycle Assessment,"2023/06/28, 10:43:38",41,0,15,true,Jupyter Notebook,,GreenDelta,"Jupyter Notebook,Python,Batchfile",,"b'# openLCA Python Tutorial\n[openLCA](https://github.com/GreenDelta/olca-app) is a Java application\nand, thus, runs on the Java Virtual Machine (JVM). [Jython](http://www.jython.org/)\nis a Python 2.7 implementation that runs on the JVM. It compiles Python code to\nJava bytecode which is then executed on the JVM. The final release of Jython 2.7\nis bundled with openLCA. Under `Window > Developer tools > Python` you can\nfind a small Python editor where you can write and execute Python scripts:\n\n![Open the Python editor](./images/olca_open_python_editor.png)\n\nIn order to execute a script, you click on the `Run` button in the toolbar of\nthe Python editor:\n\n![Run a script in openLCA](./images/olca_run_script.png)\n\nThe script is executed in the same Java process as openLCA. Thus, you have\naccess to all the things that you can do with openLCA via this scripting API\n(and also to everything that you can do with the Java and Jython runtime). Here\nis a small example script that will show the information dialog below when you\nexecute it in openLCA:\n\n```python\nfrom org.openlca.app.util import UI, Dialog\nfrom org.openlca.app import App\n\ndef say_hello():\n Dialog.showInfo(UI.shell(), \'Hello from Python (Jython)!\')\n\nif __name__ == \'__main__\':\n App.runInUI(\'say hello\', say_hello)\n```\n\n![Hello from Jython](./images/olca_hello.png)\n\n\n## Relation to standard Python\nAs said above, Jython runs on the JVM. It implements a great part of the\n[Python 2.7 standard library for the JVM](http://www.jython.org/docs/library/indexprogress.html).\nFor example the following script will work when you set the file\npath to a valid path on your system:\n\n```python\nimport csv\n\nwith open(\'path/to/file.csv\', \'w\') as stream:\n writer = csv.writer(stream)\n writer.writerow([""data you"", ""may want"", ""to export"",])\n```\n\nThe Jython standard library is extracted to the `python` folder of the openLCA\nworkspace which is by default located in your user directory\n`~/openLCA-data-1.4/python`. 
This is also the location in which you can put your\nown Jython 2.7 compatible modules. For example, when you create a file\n`tutorial.py` with the following function in this folder:\n\n```python\n# ~/openLCA-data-1.4/python/tutorial.py\ndef the_answer():\n # sum the even numbers in 0..13: 0 + 2 + ... + 12 = 42\n f = lambda s, x: s + x if x % 2 == 0 else s\n return reduce(f, range(0, 14))\n```\n\nYou can then load it in the openLCA script editor:\n\n```python\nimport tutorial\nimport org.openlca.app.util.MsgBox as MsgBox\n\nMsgBox.info(\'The answer is %s!\' % tutorial.the_answer())\n```\n\nAn **important thing** to note is that Python modules that use C-extensions\n(like NumPy and friends) or parts of the standard library that are not\nimplemented in Jython are **not** compatible **with Jython**. If you want to\ninteract from standard CPython with openLCA (using Pandas, NumPy, etc.)\n**you can use** the [openLCA-IPC Python API](https://github.com/GreenDelta/olca-ipc.py).\n\n\n## The openLCA API\nAs said above, with Jython you directly access the openLCA Java API. In Jython,\nyou interact with a Java class in the same way as with a Python class. The\nopenLCA API starts with a set of classes that describe the basic data model,\nlike `Flow`, `Process`, `ProductSystem`. You can find these classes in the\n[olca-modules repository](https://github.com/GreenDelta/olca-modules/tree/master/olca-core/src/main/java/org/openlca/core/model).\n\n```\n...\n```\n
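The class list above is elided in this copy, but a small sketch gives the flavour of working with these model classes from the openLCA Python editor (a Jython example; the `name` and `flowType` attributes are assumptions based on the public fields of the Java model classes):

```python
# Runs inside the openLCA Python (Jython) editor, where the Java API is available.
from org.openlca.core.model import Flow, FlowType

# In Jython, public Java fields read and write like plain Python attributes.
flow = Flow()
flow.name = 'Carbon dioxide (illustrative)'
flow.flowType = FlowType.ELEMENTARY_FLOW
print(flow.name)
```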
## Content\n* ...\n* [Using visualization APIs](./data_viz.md)\n* [The basic data model](./data_model.md)\n* [Setting up a development environment](./ide_setup.md)\n* [Examples](./examples.md)\n\n\n## License\nThis project is in the worldwide public domain, released under the\n[CC0 1.0 Universal Public Domain Dedication](https://creativecommons.org/publicdomain/zero/1.0/).\n\n![Public Domain Dedication](https://licensebuttons.net/p/zero/1.0/88x31.png)\n'",,"2017/02/07, 11:24:19",2451,CUSTOM,3,58,"2023/06/28, 10:43:53",30,3,6,2,119,0,1.3333333333333333,0.2321428571428571,,,0,6,false,,false,false,,,https://github.com/GreenDelta,,,,,https://avatars.githubusercontent.com/u/3889246?v=4,,, Global LCA Data Access Network,"Gathers life cycle dataset providers and other stakeholders who share the goal of improving sustainability-related decisions through enhanced, interoperable and global access to LCA datasets.",,,custom,,Life Cycle Assessment,,,,,,,,,,https://www.globallcadataaccess.org/,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,, BioSTEAM_LCA,An agile life cycle assessment platform that enables a fast and flexible evaluation of the life cycle environmental impacts of biorefineries under uncertainty.,scyjth,https://github.com/scyjth/biosteam_lca.git,github,"life-cycle-assessment,environmental-impacts,integrated-lca-and-tea,uncertainty",Life Cycle Assessment,"2021/08/30, 20:11:40",13,0,2,false,Python,,,"Python,Jupyter Notebook",,"b'=========================================================================\nBioSTEAM_LCA: The Biorefinery Simulation Module with Techno-Economic Analysis and Life Cycle Assessment\n=========================================================================\n\n.. image:: http://img.shields.io/pypi/v/biosteam-lca.svg?style=flat\n :target: https://pypi.org/project/biosteam-lca/\n :alt: Version_status\n.. image:: http://img.shields.io/badge/license-UIUC-blue.svg?style=flat\n :target: https://github.com/scyjth/biosteam_lca/blob/master/LICENSE.txt\n :alt: license\n.. image:: https://img.shields.io/pypi/pyversions/biosteam.svg\n :target: https://pypi.python.org/pypi/biosteam\n :alt: Supported_versions\n.. image:: https://img.shields.io/badge/code%20style-black-000000.svg\n :target: https://github.com/python/black\n :alt: Formatted with Black\n\n\n\n\n\nBiosteam_LCA is an agile life cycle assessment (LCA) platform that enables the fast and flexible evaluation of the life cycle environmental impacts of biorefineries under uncertainty. It interfaces with BioSTEAM to simultaneously characterize environmental and economic metrics while enabling complete flexibility for user-defined biofuels, bioproducts, biomass compositions, and processes. This open-source, installable package allows users to perform streamlined LCAs of biorefineries. The focus of BioSTEAM-LCA is to streamline and automate early-stage environmental impact analyses of processes and technologies, and to enable rigorous sensitivity and uncertainty analyses linking process design, performance, economics, and environmental impacts.\n\nThis package is continuing to develop, with new features and extensions being added.\n\nGetting started\n~~~~~~~~~~~~~~~~\n\nInstallation\n------------\n\nGet the latest version from `PyPI `__. This package can be installed through pip::\n\n $ pip install biosteam_lca\n\nTo get the git version, run:\n\n $ git clone https://github.com/scyjth/biosteam_lca.git\n\n\nPrerequisites\n-------------\n\n- Valid `ecoinvent <https://www.ecoinvent.org>`__ login credentials\n- Alternatively, several open source life cycle inventory databases are built in, such as `FORWAST `__. \n\nSetup\n-------------\n\nLinking to other LCA repositories\n***************************************************\n\n\nIf you have not done so, add the required conda channels to your conda config file. You also need to install brightway and eidl. \nThe recommended way (Sep 2020) to install with conda is:\n\n $ conda install -c conda-forge -c cmutel -c haasad brightway2\n\n $ conda install -c haasad eidl\n\nMore and more LCA researchers are participating in open-source communities. BioSTEAM_LCA interfaces with these excellent packages to enable rapid translation of biorefinery designs/processes and laboratory-scale results to systems-scale sustainability assessments. This `Dashboard `__ contains a list of all related repositories for LCA researchers. 
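A minimal sketch of that linkage (package names follow the conda commands above; the project name is an arbitrary example, and ``eidl.get_ecoinvent()`` prompts interactively for ecoinvent credentials):

.. code:: python

    import brightway2 as bw
    import eidl

    # Set up a brightway project with the default biosphere flows and LCIA methods.
    bw.projects.set_current('biosteam_lca_demo')  # made-up project name
    bw.bw2setup()

    # Choose, download and import an ecoinvent release (ecoinvent license required).
    eidl.get_ecoinvent()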
EPA TRACI2.0 `__, which provides characterization factors for LCIA, industrial ecology, and sustainability metrics.\n\n\nLicense information\n~~~~~~~~~~~~~~~~\n\nThis project is licensed under the University of Illinois at Urbana-Champaign License. See the ``LICENSE.txt`` file for information on the terms & conditions, and a DISCLAIMER OF ALL WARRANTIES.\n\n\nAbout the authors\n~~~~~~~~~~~~~~~~\n\nBioSTEAM_LCA was created and developed by Dr. Rui Shi as part of the `Guest Group `__ and the `Center for Advanced Bioenergy and Bioproducts Innovation (CABBI) `__ at the `University of Illinois at Urbana-Champaign (UIUC) `__. \n\nReferences\n~~~~~~~~~~~~~~~~\n[1] Shi, Rui and Jeremy S. Guest, ""BioSTEAM-LCA: An Integrated Modeling Framework for Agile Life Cycle Assessment of Biorefineries Under Uncertainty. "" ACS Sustainable Chemistry & Engineering. Under review. \n\n'",,"2020/04/10, 17:52:37",1293,CUSTOM,0,109,"2020/09/16, 05:52:52",1,1,1,0,1134,0,0.0,0.01869158878504673,,,0,2,false,,false,false,,,,,,,,,,, Federal LCA Commons,A central point of access to a collection of data repositories for use in Life Cycle Assessment.,,,custom,,Life Cycle Assessment,,,,,,,,,,https://www.lcacommons.gov/,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,, Trase,Brings unprecedented transparency to commodity supply chains revealing new pathways towards achieving a deforestation-free economy.,Vizzuality,https://github.com/Vizzuality/trase.git,github,,Life Cycle Assessment,"2023/10/24, 12:15:03",32,0,1,true,Ruby,Vizzuality,Vizzuality,"Ruby,JavaScript,PLpgSQL,SCSS,HTML,MDX,CSS,Shell,EJS,CoffeeScript",https://trase.earth,b'# Trase\n\n[Trase](http://supplychains.trase.earth) brings unprecedented transparency to commodity supply chains revealing new pathways towards achieving a deforestation-free economy.\n\n![TRASE](trase-screenshot.png)\n\n[Documentation](http://trase.surge.sh)\n\n## LICENSE\n\n[MIT](LICENSE)\n',,"2017/02/16, 15:55:05",2442,MIT,330,7574,"2023/10/24, 12:15:06",44,2104,2290,196,1,12,0.0,0.6796202252152793,"2023/08/17, 08:25:27",6.9.9,0,17,false,,false,false,,,https://github.com/Vizzuality,http://www.vizzuality.com,"Madrid, Cambridge, Barcelona, Washington DC",,,https://avatars.githubusercontent.com/u/305994?v=4,,, QSDsan,A package for the quantitative sustainable design of sanitation and resource recovery systems.,QSD-Group,https://github.com/QSD-Group/QSDsan.git,github,"sanitation,resource-recovery,techno-economic-analysis,life-cycle-assessment,quantitative-sustainable-design,sustainability-analysis,dynamic-simulations,process-modeling",Life Cycle Assessment,"2023/10/24, 21:45:52",28,9,5,true,Python,Quantitative Sustainable Design (QSD) Group,QSD-Group,Python,https://qsdsan.com,"b""====================================================================================\nQSDsan: Quantitative Sustainable Design for Sanitation and Resource Recovery Systems\n====================================================================================\n\n.. License\n.. image:: https://img.shields.io/pypi/l/qsdsan?color=blue&logo=UIUC&style=flat\n :target: https://github.com/QSD-Group/QSDsan/blob/main/LICENSE.txt\n\n.. Tested Python version\n.. image:: https://img.shields.io/pypi/pyversions/qsdsan?style=flat\n :target: https://pypi.python.org/pypi/qsdsan\n\n.. PyPI version\n.. image:: https://img.shields.io/pypi/v/qsdsan?style=flat&color=blue\n :target: https://pypi.org/project/qsdsan\n\n.. DOI\n.. image:: https://img.shields.io/badge/DOI-10.1039%2Fd2ew00455k-blue?style=flat\n :target: https://doi.org/10.1039/d2ew00455k\n\n.. 
Documentation build\n.. image:: https://readthedocs.org/projects/qsdsan/badge/?version=latest\n :target: https://qsdsan.readthedocs.io/en/latest\n\n.. GitHub test and coverage of the main branch\n.. image:: https://github.com/QSD-Group/QSDsan/actions/workflows/build-coverage.yml/badge.svg?branch=main\n :target: https://github.com/QSD-Group/QSDsan/actions/workflows/build-coverage.yml\n\n.. Codecov\n.. image:: https://codecov.io/gh/QSD-Group/QSDsan/branch/main/graph/badge.svg?token=Z1CASBXEOE\n :target: https://codecov.io/gh/QSD-Group/QSDsan\n\n.. Binder launch of tutorials\n.. image:: ./docs/source/images/custom_binder_logo.svg\n :target: https://mybinder.org/v2/gh/QSD-Group/QSDsan-env/main?urlpath=git-pull%3Frepo%3Dhttps%253A%252F%252Fgithub.com%252FQSD-group%252FQSDsan%26urlpath%3Dtree%252FQSDsan%252Fdocs%252Fsource%252Ftutorials%26branch%3Dmain\n\n.. Email subscription form\n.. image:: https://img.shields.io/badge/news-subscribe-F3A93C?style=flat&logo=rss\n :target: https://groups.webservices.illinois.edu/subscribe/154591\n\n.. Event calendar\n.. image:: https://img.shields.io/badge/events-calendar-F3A93C?style=flat&logo=google%20calendar\n :target: https://qsdsan.readthedocs.io/en/latest/Events.html\n\n.. YouTube video\n.. image:: https://img.shields.io/endpoint?color=%23ff0000&label=YouTube%20 @qsd-group&url=https%3A%2F%2Fyoutube-channel-badge-blond.vercel.app%2Fapi%2Fvideos\n :target: https://www.youtube.com/@qsd-group\n\n.. Code of Conduct\n.. image:: https://img.shields.io/badge/Contributor%20Covenant-2.1-4baaaa.svg\n :target: https://qsdsan.readthedocs.io/en/latest/CODE_OF_CONDUCT.html\n\n.. AppVeyor test of the stable branch, not in active use\n..\n .. image:: https://img.shields.io/appveyor/build/yalinli2/QSDsan/main?label=build-stable&logo=appveyor\n :target: https://github.com/QSD-Group/QSDsan/tree/stable\n\n|\n\n.. contents::\n\n|\n\nWhat is ``QSDsan``?\n-------------------\n``QSDsan`` is an open-source, community-led platform for the quantitative sustainable design (QSD) of sanitation and resource recovery systems [1]_. It is one of a series of platforms being developed for the execution of QSD - a methodology for the research, design, and deployment of technologies to inform decision-making [2]_. It leverages the structure and modules developed in the `BioSTEAM `_ platform [3]_ with additional functions tailored to sanitation processes.\n\nAs an open-source and impact-driven platform, QSDsan aims to identify configuration combinations, systematically probe interdependencies across technologies, and identify key sensitivities to contextual assumptions through the use of quantitative sustainable design methods (techno-economic analysis and life cycle assessment under uncertainty). \n\nAll systems developed with ``QSDsan`` are included in the package `EXPOsan `_ - exposition of sanitation and resource recovery systems.\n\nAdditionally, another package, `DMsan `_ (decision-making for sanitation and resource recovery systems), is being developed for decision-making among multiple dimensions of sustainability with consideration of location-specific contextual parameters.\n\n\nInstallation\n------------\nThe easiest way is through ``pip``, in a command-line interface (e.g., Anaconda prompt, terminal):\n\n.. code::\n\n pip install qsdsan\n\nIf you need to upgrade:\n\n.. code::\n\n pip install -U qsdsan\n\nor for a specific version (replace X.X.X with the version number):\n\n.. 
code::\n\n pip install qsdsan==X.X.X\n\nIf you want to install the latest GitHub version at the `main branch `_ (note that you can still use the ``-U`` flag for upgrading):\n\n.. code::\n\n pip install git+https://github.com/QSD-Group/QSDsan.git\n\n\n.. note::\n\n If this doesn't give you the newest ``qsdsan``, try ``pip uninstall qsdsan`` first.\n\n Also, you may need to update the versions of some of ``qsdsan``'s dependency packages (e.g., ``biosteam`` and ``thermosteam``) in order for the new ``qsdsan`` to run.\n\n\nor another fork and/or branch (replace ``<fork>`` and ``<branch>`` with the desired fork and branch names):\n\n.. code::\n\n pip install git+https://github.com/<fork>/QSDsan.git@<branch>\n\n\nYou can also download the package from `PyPI `_.\n\nNote that this package is still at an early stage of development with limited backward compatibility; please feel free to `submit an issue `_ for any questions regarding package upgrading.\n\nIf you want to contribute to ``QSDsan``, please follow the steps in the `Contributing Guidelines `_ section of the documentation to clone the repository. If you find yourself struggling with the installation of QSDsan or with setting up the environment, this extended version of `installation instructions `_ might be helpful to you.\n\n\nDocumentation\n-------------\nYou can find tutorials and documents at:\n\n https://qsdsan.readthedocs.io\n\nAll tutorials are written using Jupyter Notebook; you can run your own Jupyter environment, or you can click the ``launch binder`` badge on the top to launch the environment in your browser.\n\nFor each of these tutorials, we are also recording videos where one of the QSD group members goes through the tutorial step-by-step. We are gradually releasing these videos on our `YouTube channel `_ so subscribe to receive updates!\n\n\nAbout the Authors\n-----------------\nPlease refer to the `Contributors `_ section for a list of contributors.\n\n\nContributing\n------------\nPlease refer to the `Contributing Guidelines `_ section of the documentation for instructions and guidelines.\n\n\nStay Connected\n--------------\nIf you would like to receive news related to the QSDsan platform, you can subscribe to email updates using `this form `_ (don't worry, you will be able to unsubscribe :)). Thank you in advance for your interest!\n\n\nQSDsan Events\n-------------\nWe will keep this `calendar `_ up-to-date as we organize more events (office hours, workshops, etc.); click on the events in the calendar to see the details (including meeting links).\n\n\nLicense Information\n-------------------\nPlease refer to the ``LICENSE.txt`` file for information on the terms & conditions for usage of this software, and a DISCLAIMER OF ALL WARRANTIES.\n\n\nReferences\n----------\n.. [1] Li, Y.; Zhang, X.; Morgan, V.L.; Lohman, H.A.C.; Rowles, L.S.; Mittal, S.; Kogler, A.; Cusick, R.D.; Tarpeh, W.A.; Guest, J.S. QSDsan: An integrated platform for quantitative sustainable design of sanitation and resource recovery systems. Environ. Sci.: Water Res. Technol. 2022, 8 (10), 2289-2303. https://doi.org/10.1039/d2ew00455k.\n\n.. [2] Li, Y.; Trimmer, J.T.; Hand, S.; Zhang, X.; Chambers, K.G.; Lohman, H.A.C.; Shi, R.; Byrne, D.M.; Cook, S.M.; Guest, J.S. Quantitative Sustainable Design (QSD): A Methodology for the Prioritization of Research, Development, and Deployment of Technologies. (Tutorial Review) Environ. Sci.: Water Res. Technol. 2022, 8 (11), 2439\xe2\x80\x932465. https://doi.org/10.1039/D2EW00431C.\n\n.. [3] Cort\xc3\xa9s-Pe\xc3\xb1a, Y.; Kumar, D.; Singh, V.; Guest, J.S. 
BioSTEAM: A Fast and Flexible Platform for the Design, Simulation, and Techno-Economic Analysis of Biorefineries under Uncertainty. ACS Sustainable Chem. Eng. 2020, 8 (8), 3302\xe2\x80\x933310. https://doi.org/10.1021/acssuschemeng.9b07040.\n\n\n.. Custom launch badges: https://mybinder.readthedocs.io/en/latest/howto/badges.html\n.. binder_badge: https://img.shields.io/badge/launch-binder%20%7C%20tutorial-579ACA.svg?logo=data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAAFkAAABZCAMAAABi1XidAAAB8lBMVEX///9XmsrmZYH1olJXmsr1olJXmsrmZYH1olJXmsr1olJXmsrmZYH1olL1olJXmsr1olJXmsrmZYH1olL1olJXmsrmZYH1olJXmsr1olL1olJXmsrmZYH1olL1olJXmsrmZYH1olL1olL0nFf1olJXmsrmZYH1olJXmsq8dZb1olJXmsrmZYH1olJXmspXmspXmsr1olL1olJXmsrmZYH1olJXmsr1olL1olJXmsrmZYH1olL1olLeaIVXmsrmZYH1olL1olL1olJXmsrmZYH1olLna31Xmsr1olJXmsr1olJXmsrmZYH1olLqoVr1olJXmsr1olJXmsrmZYH1olL1olKkfaPobXvviGabgadXmsqThKuofKHmZ4Dobnr1olJXmsr1olJXmspXmsr1olJXmsrfZ4TuhWn1olL1olJXmsqBi7X1olJXmspZmslbmMhbmsdemsVfl8ZgmsNim8Jpk8F0m7R4m7F5nLB6jbh7jbiDirOEibOGnKaMhq+PnaCVg6qWg6qegKaff6WhnpKofKGtnomxeZy3noG6dZi+n3vCcpPDcpPGn3bLb4/Mb47UbIrVa4rYoGjdaIbeaIXhoWHmZYHobXvpcHjqdHXreHLroVrsfG/uhGnuh2bwj2Hxk17yl1vzmljzm1j0nlX1olL3AJXWAAAAbXRSTlMAEBAQHx8gICAuLjAwMDw9PUBAQEpQUFBXV1hgYGBkcHBwcXl8gICAgoiIkJCQlJicnJ2goKCmqK+wsLC4usDAwMjP0NDQ1NbW3Nzg4ODi5+3v8PDw8/T09PX29vb39/f5+fr7+/z8/Pz9/v7+zczCxgAABC5JREFUeAHN1ul3k0UUBvCb1CTVpmpaitAGSLSpSuKCLWpbTKNJFGlcSMAFF63iUmRccNG6gLbuxkXU66JAUef/9LSpmXnyLr3T5AO/rzl5zj137p136BISy44fKJXuGN/d19PUfYeO67Znqtf2KH33Id1psXoFdW30sPZ1sMvs2D060AHqws4FHeJojLZqnw53cmfvg+XR8mC0OEjuxrXEkX5ydeVJLVIlV0e10PXk5k7dYeHu7Cj1j+49uKg7uLU61tGLw1lq27ugQYlclHC4bgv7VQ+TAyj5Zc/UjsPvs1sd5cWryWObtvWT2EPa4rtnWW3JkpjggEpbOsPr7F7EyNewtpBIslA7p43HCsnwooXTEc3UmPmCNn5lrqTJxy6nRmcavGZVt/3Da2pD5NHvsOHJCrdc1G2r3DITpU7yic7w/7Rxnjc0kt5GC4djiv2Sz3Fb2iEZg41/ddsFDoyuYrIkmFehz0HR2thPgQqMyQYb2OtB0WxsZ3BeG3+wpRb1vzl2UYBog8FfGhttFKjtAclnZYrRo9ryG9uG/FZQU4AEg8ZE9LjGMzTmqKXPLnlWVnIlQQTvxJf8ip7VgjZjyVPrjw1te5otM7RmP7xm+sK2Gv9I8Gi++BRbEkR9EBw8zRUcKxwp73xkaLiqQb+kGduJTNHG72zcW9LoJgqQxpP3/Tj//c3yB0tqzaml05/+orHLksVO+95kX7/7qgJvnjlrfr2Ggsyx0eoy9uPzN5SPd86aXggOsEKW2Prz7du3VID3/tzs/sSRs2w7ovVHKtjrX2pd7ZMlTxAYfBAL9jiDwfLkq55Tm7ifhMlTGPyCAs7RFRhn47JnlcB9RM5T97ASuZXIcVNuUDIndpDbdsfrqsOppeXl5Y+XVKdjFCTh+zGaVuj0d9zy05PPK3QzBamxdwtTCrzyg/2Rvf2EstUjordGwa/kx9mSJLr8mLLtCW8HHGJc2R5hS219IiF6PnTusOqcMl57gm0Z8kanKMAQg0qSyuZfn7zItsbGyO9QlnxY0eCuD1XL2ys/MsrQhltE7Ug0uFOzufJFE2PxBo/YAx8XPPdDwWN0MrDRYIZF0mSMKCNHgaIVFoBbNoLJ7tEQDKxGF0kcLQimojCZopv0OkNOyWCCg9XMVAi7ARJzQdM2QUh0gmBozjc3Skg6dSBRqDGYSUOu66Zg+I2fNZs/M3/f/Grl/XnyF1Gw3VKCez0PN5IUfFLqvgUN4C0qNqYs5YhPL+aVZYDE4IpUk57oSFnJm4FyCqqOE0jhY2SMyLFoo56zyo6becOS5UVDdj7Vih0zp+tcMhwRpBeLyqtIjlJKAIZSbI8SGSF3k0pA3mR5tHuwPFoa7N7reoq2bqCsAk1HqCu5uvI1n6JuRXI+S1Mco54YmYTwcn6Aeic+kssXi8XpXC4V3t7/ADuTNKaQJdScAAAAAElFTkSuQmCC\n""",",https://doi.org/10.1039/d2ew00455k\n\n,https://doi.org/10.1039/d2ew00455k.\n\n,https://doi.org/10.1039/D2EW00431C.\n\n,https://doi.org/10.1021/acssuschemeng.9b07040.\n\n\n","2020/09/29, 15:53:46",1121,CUSTOM,256,1633,"2023/10/18, 15:31:01",3,60,109,22,7,1,0.1,0.3111111111111111,,,0,11,false,,true,true,"yalinli2/binder-test,QSD-Group/QSDsan-env,QSD-Group/QSDsan-workshop,QSD-Group/QSDsan-webapp,QSD-Group/DMsan,QSD-Group/QSDsan,QSD-Group/QSDedu,BioSTEAMDevelopmentGroup/biosteam,QSD-Group/EXPOsan",,https://github.com/QSD-Group,,,,,https://avatars.githubusercontent.com/u/68925850?v=4,,, premise,Coupling Integrated Assessment Models output with Life Cycle 
Assessment.,romainsacchi,https://github.com/polca/premise.git,github,"lifecycle,energy,ecoinvent,transport,inventory",Life Cycle Assessment,"2023/10/23, 11:42:47",72,12,21,true,Jupyter Notebook,POLCA,polca,"Jupyter Notebook,Python",,"b'# ``premise``\n\n
\n\n# **PR**ospective **E**nviron**M**ental **I**mpact As**SE**ssment\n## Coupling the ecoinvent database with projections from Integrated Assessment Models (IAM)\n\n
\n \nPreviously named *rmnd-lca*. *rmnd-lca* was designed to work with the IAM model REMIND only.\nAs it now evolves towards a more IAM-neutral approach, a change of name was considered.\n\nThe scientific publication is available here: [Sacchi et al, 2022](https://doi.org/10.1016/j.rser.2022.112311).\n\nWhat\'s new in 1.8.0?\n====================\n\n- Added support for brightway 2.5\n- Added support for Python 3.11\n- Uses bw2io 0.8.10\n- Adds electricity storage in electricity markets -- see [docs](https://premise.readthedocs.io/en/latest/transform.html#storage)\n- Adds [scenario explorer dashboard](https://premisedash-6f5a0259c487.herokuapp.com/)\n\nWhat\'s new in 1.5.0?\n====================\n\n- Added support for ecoinvent 3.9 and 3.9.1\n- Added support for ecoinvent 3.8 and 3.9/3.9.1 consequential -- see [docs](https://premise.readthedocs.io/en/latest/consequential.html)\n- Added REMIND SSP1 and SSP5 scenarios -- see [docs](https://premise.readthedocs.io/en/latest/introduction.html#default-iam-scenarios)\n- Updated GAINS emission factors, using GAINS-EU and GAINS-IAM -- see [docs](https://premise.readthedocs.io/en/latest/transform.html#gains-emission-factors)\n- Added new inventories for DAC and DACCS -- see [docs](https://premise.readthedocs.io/en/latest/transform.html#direct-air-capture)\n- Added new inventories for EPR and SMR nuclear reactors -- see [EPR inventories](https://github.com/polca/premise/blob/master/premise/data/additional_inventories/lci-nuclear_EPR.xlsx) and [SMR inventories](https://github.com/polca/premise/blob/master/premise/data/additional_inventories/lci-nuclear_SMR.xlsx)\n- Made mapping to new IAM models easier -- see [docs](https://premise.readthedocs.io/en/latest/mapping.html)\n- Better logging of changes made to the ecoinvent database -- see [docs](https://premise.readthedocs.io/en/latest/transform.html#logs)\n\nWhat\'s new in 1.3.0?\n====================\n\n- Added support for user-generated scenarios (see [docs](https://premise.readthedocs.io/en/latest/user_scenarios.html) and [notebook](https://github.com/polca/premise/blob/master/examples/examples%20user-defined%20scenarios.ipynb))\n- Updated REMIND scenarios to REMIND v.3.0\n\n\n\nDocumentation\n-------------\n[https://premise.readthedocs.io/en/latest/](https://premise.readthedocs.io/en/latest/)\n\nObjective\n---------\n\nThe objective is to produce life cycle inventories under future energy policies, by modifying the inventory database\necoinvent 3 to reflect projected energy policy trajectories.\n\nRequirements\n------------\n* **Python 3.9, 3.10 or 3.11**\n* License for [ecoinvent 3][1]\n* Some IAM output files come with the library and are located by default in the subdirectory ""/data/iam_output_files"". **If you wish to use\n those files, you need to request (by [email](mailto:romain.sacchi@psi.ch)) an encryption key from the developers**.\n A file path can be specified to fetch IAM output files elsewhere on your computer.\n * [brightway2][2] (optional)\n\nHow to install this package?\n----------------------------\n\nTwo options:\n\nA development version with the latest advancements (but with the risk of unseen bugs) is available from Anaconda Cloud:\n\n\n conda install -c romainsacchi premise\n\n\nFor a more stable and proven version, from Pypi:\n\n pip install premise\n\nwhich will install the package and the required dependencies.\n\n\nHow to use it?\n--------------\n\nThe best way is to follow [the examples from the Jupyter Notebook](https://github.com/polca/premise/blob/master/examples/examples.ipynb). 
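\n\nFor orientation, a minimal sketch of the typical workflow follows (the scenario, the name of the ecoinvent database in your brightway2 project, and the decryption key below are all placeholders; the exact arguments are shown in the notebook):\n\n    # illustrative sketch only -- scenario, database name and key are placeholders
    from premise import NewDatabase

    ndb = NewDatabase(
        scenarios=[{\'model\': \'remind\', \'pathway\': \'SSP2-Base\', \'year\': 2030}],
        source_db=\'ecoinvent 3.8 cutoff\',  # name of the ecoinvent database in your brightway2 project
        source_version=\'3.8\',
        key=\'xxxxxxxxx\',  # encryption key requested from the developers
    )
    ndb.update_electricity()      # apply the IAM electricity projections
    ndb.write_db_to_brightway()   # write the modified database back to brightway2
\nThe same pattern applies to the other transformation steps described in the [docs](https://premise.readthedocs.io/en/latest/transform.html).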
\n\n# Support\n\nDo not hesitate to contact [romain.sacchi@psi.ch](mailto:romain.sacchi@psi.ch).\n\n## Contributors\n\n* [Romain Sacchi](https://github.com/romainsacchi)\n* [Alois Dirnaichner](https://github.com/Loisel)\n* [Tom Mike Terlouw](https://github.com/tomterlouw)\n* [Laurent Vandepaer](https://github.com/lvandepaer)\n* [Chris Mutel](https://github.com/cmutel/)\n\n\n## Maintainers\n\n* [Romain Sacchi](https://github.com/romainsacchi)\n* [Chris Mutel](https://github.com/cmutel/)\n\n## Contributing\n\nSee [contributing](https://github.com/polca/premise/blob/master/CONTRIBUTING.md).\n\n## References\n\n[1]:https://www.ecoinvent.org/\n[2]:https://brightway.dev/\n\n## License\n\n[BSD-3-Clause](https://github.com/polca/premise/blob/master/LICENSE).\nCopyright 2020 Potsdam Institute for Climate Impact Research, Paul Scherrer Institut.\n'",",https://doi.org/10.1016/j.rser.2022.112311","2018/11/13, 10:31:56",1807,BSD-3-Clause,448,1899,"2023/10/23, 11:41:43",21,42,108,39,2,1,0.8,0.09440769693325313,"2023/10/24, 20:06:56",v.1.8.1,0,12,false,,true,true,"romainsacchi/autumn_school_2023,flechtenberg/pulpo,polca/pathways,robyistrate/internet-environmental-footprint,tyrael147/premise_testing,alideoro/Team_PV,jorissimaitis/Team-Awesome-Autumn-School,premise-community-scenarios/ammonia-prospective-scenarios,premise-community-scenarios/cobalt-perspective-2050,premise-community-scenarios/energy-perspective-2050-switzerland,premise-community-scenarios/scenario-example-bread,whatofit/LevelWordWithFreq",,https://github.com/polca,,,,,https://avatars.githubusercontent.com/u/58623740?v=4,,, useeior,Estimating potential environmental impacts of goods and services in the US economy.,USEPA,https://github.com/USEPA/useeior.git,github,ord,Life Cycle Assessment,"2023/08/15, 12:49:36",26,0,11,true,R,U.S. Environmental Protection Agency,USEPA,R,,"b'# useeior \n\n[![R CI/CD test](https://github.com/USEPA/useeior/actions/workflows/R-CMD-check.yaml/badge.svg)](https://github.com/USEPA/useeior/actions/workflows/R-CMD-check.yaml)\n[![Lifecycle: stable](https://img.shields.io/badge/lifecycle-stable-brightgreen.svg)](https://lifecycle.r-lib.org/articles/stages.html)\n[![useeior v1.0.0](http://img.shields.io/badge/useeior%20v1.0.0-10.5281/zenodo.6370101-blue.svg)](https://doi.org/10.5281/zenodo.6370101)\n[![useeior paper](http://img.shields.io/badge/useeior%20paper-10.3390/app12094469-blue.svg)](https://doi.org/10.3390/app12094469)\n\n\n`useeior` is an R package for building and using [USEEIO models](https://www.epa.gov/land-research/us-environmentally-extended-input-output-useeio-models).\n\nThe [model object](format_specs/Model.md) is the primary output that is built according to a given [model specification](format_specs/ModelSpecification.md) and optional hybridization specification, e.g. [disaggregation](format_specs/DisaggregationAndAggregationSpecification.md).\n[Model specifications](inst/extdata/modelspecs) and associated hybridization specifications, e.g. 
[disaggregation](inst/extdata/disaggspecs), for EPA-validated models are included in the package.\n\n`useeior` offers various functions for validating, calculating, visualizing, and writing out models and/or their components.\n`useeior` is a core component of the [USEEIO Modeling Framework](https://github.com/USEPA/useeio) and is in a stable development state.\nUsers intending to use the package for production purposes and applications should use [Releases](https://github.com/USEPA/useeior/releases).\n`useeior` v1.0.0 was peer-reviewed internally at USEPA and published at Zenodo.\n\nA peer-reviewed article describing `useeior` was published in the journal Applied Sciences in April 2022.\nIf you use `useeior` in a scientific publication, we would appreciate it if you cite it using:\n```\n@article{li_useeior_2022,\n title = {useeior: {An} {Open-Source} {R} {Package} for {Building} and {Using} {US} {Environmentally-Extended} {Input-Output} {Models}},\n journal = {Applied Sciences},\n author = {{Li, Mo} and {Ingwersen, Wesley} and {Young, Ben} and {Vendries, Jorge} and {Birney, Catherine}},\n year = {2022},\n pages = {4469},\n number = {9},\n volume = {12},\n doi = {10.3390/app12094469}\n}\n```\nor\n```\nLi, M., Ingwersen, W.W., Young, B., Vendries, J. and Birney, C., 2022. useeior: An Open-Source R Package for Building and Using US Environmentally-Extended Input\xe2\x80\x93Output Models. Applied Sciences, 12(9), p.4469.\n```\n\nSee the following sections for installation and basic usage of `useeior`.\n\nSee the [Wiki](https://github.com/USEPA/useeior/wiki) for advanced uses, details about built-in data and metadata, and how to contribute to `useeior`.\n\n## Installation\n\n```r\n# Install development version from GitHub\ninstall.packages(""devtools"")\ndevtools::install_github(""USEPA/useeior"")\n```\n\n```r\n# Install a previously released version (e.g. v1.0.0) from GitHub\ndevtools::install_github(""USEPA/useeior@v1.0.0"")\n```\n\nSee [Releases](https://github.com/USEPA/useeior/releases) for all previously released versions.\n\n## Usage\n\n### Build Model\n\nView all models with existing config files that can be built using useeior:\n\n```r\nlibrary(""useeior"")\nseeAvailableModels()\n```\n\nBuild a model that is available in useeior (e.g. the [USEEIOv2.0.1-411](inst/extdata/modelspecs/USEEIOv2.0.1-411.yml) model):\n\n```r\nmodel <- buildModel(\'USEEIOv2.0.1-411\')\n```\n\nTo build a customized model, refer to [Advanced Uses](https://github.com/USEPA/useeior/wiki/Using-useeior#advanced-uses) in the Wiki.\n\nThis generates a complete USEEIO model with components described in the [Model](format_specs/Model.md#model) table.\n\n### Adjust Price Year and Type of Model Results\n\nAdjust model results (e.g. `N` matrix) to a user-specified price year (e.g. 
`2018`) and type (producer\'s or purchaser\'s).\n\n```r\nN_adj <- adjustResultMatrixPrice(""N"", \n currency_year = 2018,\n purchaser_price = TRUE,\n model)\n```\n\n### Calculate Model LCI and LCIA\n\nCalculate model life cycle inventory (LCI) and life cycle impact assessment (LCIA) results with a user-specified [calculation perspective](format_specs/Calculation.md#calculation-perspectives), [demand vector](format_specs/Model.md#demandvectors) (from `DemandVectors` in the model object, which includes four [default vectors](format_specs/ModelSpecification.md#demand-vector-specifications), or a user-provided vector) and a model [direct requirements matrix](format_specs/Model.md#a).\n\n```r\nresult <- calculateEEIOModel(model,\n perspective = ""DIRECT"",\n demand = ""CompleteProduction"",\n use_domestic_requirements = FALSE)\n```\n\nThis returns a [Calculation Result](format_specs/Calculation.md#calculation-result). \n\n### Write Model to File\n\nWrite selected model matrices, demand vectors, and metadata as one `.xlsx` file to a given output folder.\n```r\nwriteModeltoXLSX(model, outputfolder)\n```\n\nWrite model matrices as `.csv` files to a given output folder.\n```r\nwriteModelMatrices(model, to_format = ""csv"", outputfolder)\n```\n\n### Validate Model\n\nComplete model validation checks can be found in [ValidateModel.Rmd](inst/doc/ValidateModel.Rmd).\nKnit [ValidateModel_render.Rmd](inst/doc/ValidateModel_render.Rmd) to perform all validation checks on selected models (specified under the [YAML header](inst/doc/ValidateModel_render.Rmd#L5)).\nThis will generate an `.html` and a `.md` file containing validation results for each model. See example output in [inst/doc/output/](inst/doc/output). \n\n#### Examples\n\nValidate that flow totals by commodity `E_c` can be recalculated (within 1%) using the model satellite matrix `B`, market shares matrix `V_n`, total requirements matrix `L`, and demand vector `y` for US production.\n\n```r\n> modelval <- compareEandLCIResult(model, tolerance = 0.01)\n> print(paste(""Number of flow totals by commodity passing:"", modelval$N_Pass))\n[1] ""Number of flow totals by commodity passing: 1118742""\n> print(paste(""Number of flow totals by commodity failing:"", modelval$N_Fail))\n[1] ""Number of flow totals by commodity failing: 0""\n```\n\nValidate that commodity output can be recalculated (within 1%) with the model total requirements matrix `L` and demand vector `y` for US production.\n\n```r\n> econval <- compareOutputandLeontiefXDemand(model, tolerance = 0.01)\n> print(paste(""Number of sectors passing:"",econval$N_Pass))\n[1] ""Number of sectors passing: 409""\n> print(paste(""Number of sectors failing:"",econval$N_Fail))\n[1] ""Number of sectors failing: 2""\n> print(paste(""Sectors failing:"", paste(econval$Failure$rownames, collapse = "", "")))\n[1] ""Sectors failing: S00402/US, S00300/US""\n```\nNote: `S00402/US - Used and secondhand goods` and `S00300/US - Noncomparable imports` are two commodities that are not produced by any industry in the US, therefore their commodity output naturally cannot be recalculated with the model total requirements matrix `L` and demand vector `y` for US production. 
Results for these sectors are not recommended for use.\n\n### Visualize Model Results\n\n#### Examples\n\nRank sectors based on a composite score of selected total impacts (LCIA_d or LCIA_f) associated with total US demand (US production or consumption vector).\nComparing rankings may also be used as another form of model validation that incorporates the demand vectors and the indicators as well as the model result matrices.\n\n```r\n# Calculate model LCIA_d and LCIA_f\nresult <- c(calculateEEIOModel(model, perspective = \'DIRECT\', demand = ""Production""),\n calculateEEIOModel(model, perspective = \'FINAL\', demand = ""Consumption""))\ncolnames(result$LCIA_d) <- model$Indicators$meta[match(colnames(result$LCIA_d),\n model$Indicators$meta$Name),\n ""Code""]\ncolnames(result$LCIA_f) <- colnames(result$LCIA_d)\n# Define indicators\nindicators <- c(""ACID"", ""CCDD"", ""CMSW"", ""CRHW"", ""ENRG"", ""ETOX"", ""EUTR"", ""GHG"",\n ""HRSP"", ""HTOX"", ""LAND"", ""MNRL"", ""OZON"", ""SMOG"", ""WATR"")\n# Create figure on the left\nheatmapSectorRanking(model,\n matrix = result$LCIA_d,\n indicators,\n sector_to_remove = """",\n N_sector = 20,\n x_title = ""LCIA_d (DIRECT perspective) & US production demand"")\n# Create figure on the right\nheatmapSectorRanking(model,\n matrix = result$LCIA_f,\n indicators,\n sector_to_remove = """",\n N_sector = 20,\n x_title = ""LCIA_f (FINAL perspective) & US consumption demand"")\n```\n\n![](inst/img/ranking_direct_prod_final_cons_v2.0.1.png)\n\nMore visualization examples are available in [Example.Rmd](inst/doc/Example.Rmd).\n\n### Analyze Flow and Sector Contribution to Impact\n\n#### Examples\n\nAnalyze `flow` contribution to total (direct+indirect) `Acidification Potential` in the `Electricity` sector (`221100/US`), showing top 5 contributors below.\n\n```r\n> ACID_elec <- calculateFlowContributiontoImpact(model, ""221100/US"", ""Acidification Potential"")\n> ACID_elec$contribution <- scales::percent(ACID_elec$contribution, accuracy = 0.1)\n> head(subset(ACID_elec, TRUE, select = ""contribution""), 5)\n contribution\n""Sulfur dioxide/emission/air/kg"" 57.4%\n""Nitrogen dioxide/emission/air/kg"" 39.2%\n""Ammonia/emission/air/kg"" 2.3%\n""Sulfuric acid/emission/air/kg"" 0.7%\n""Hydrofluoric acid/emission/air/kg"" 0.2%\n```\n\nAnalyze `sector` contribution to total (direct+indirect) `Human Health - Respiratory Effects` in the `Flours and malts` sector (`311210/US`), showing top 5 contributors below.\n\n```r\n> HHRP_flour <- calculateSectorContributiontoImpact(model, ""311210/US"", ""Human Health - Respiratory Effects"")\n> HHRP_flour$contribution <- scales::percent(HHRP_flour$contribution, accuracy = 0.1)\n> head(subset(HHRP_flour, TRUE, select = ""contribution""), 5)\n contribution\n""1111B0/US - Fresh wheat, corn, rice, and other grains"" 90.7%\n""311210/US - Flours and malts"" 1.5%\n""115000/US - Agriculture and forestry support"" 0.9%\n""2123A0/US - Sand, gravel, clay, phosphate, other nonmetallic minerals"" 0.8%\n""1111A0/US - Fresh soybeans, canola, flaxseeds, and other oilseeds"" 0.8%\n```\n\nMore analysis examples are available in [Example.Rmd](inst/doc/Example.Rmd).\n\n### Compare Model Results\n\nComparison between two models can be found in [CompareModels.Rmd](inst/doc/CompareModels.Rmd).\nKnit [CompareModels_render.Rmd](inst/doc/CompareModels_render.Rmd) to perform comparison on selected models (specified under the [YAML header](inst/doc/CompareModels_render.Rmd#L5)).\nThis will return an `.html` and a `.md` file containing comparison results for each 
model specified in the header. An example can be found in [inst/doc/output/](inst/doc/output).\n\nCurrently, it only compares flow totals between two models. More comparisons will be added in the future.\n\n### Additional Information\n\nA complete list of available functions for calculating, validating, exporting and visualizing models can be found [here](https://github.com/USEPA/useeior/wiki/Using-useeior#calculate-validate-export-visualize-model) in the Wiki.\n\n## Disclaimer\n\nThe United States Environmental Protection Agency (EPA) GitHub project code is provided on an ""as is"" basis and the user assumes responsibility for its use. EPA has relinquished control of the information and no longer has responsibility to protect the integrity, confidentiality, or availability of the information. Any reference to specific commercial products, processes, or services by service mark, trademark, manufacturer, or otherwise, does not constitute or imply their endorsement, recommendation or favoring by EPA. The EPA seal and logo shall not be used in any manner to imply endorsement of any commercial product or activity by EPA or the United States Government.\n'",",https://doi.org/10.5281/zenodo.6370101,https://doi.org/10.3390/app12094469","2019/11/13, 14:07:05",1442,MIT,52,2110,"2023/09/29, 15:51:27",8,150,255,22,26,2,0.3,0.5296398891966758,"2023/08/15, 12:51:21",v1.3.0,0,12,false,,false,false,,,https://github.com/USEPA,https://www.epa.gov,United States of America,,,https://avatars.githubusercontent.com/u/1304320?v=4,,, fedelemflowlist,A Python package that generates and provides a standardized elementary flow list for use in life cycle assessment (LCA) data as well as mappings to convert data from other sources.,USEPA,https://github.com/USEPA/Federal-LCA-Commons-Elementary-Flow-List.git,github,ord,Life Cycle Assessment,"2023/09/12, 19:51:08",31,0,8,true,Python,U.S. Environmental Protection Agency,USEPA,"Python,Jupyter Notebook",,"b'\n[![Applied Sciences](http://img.shields.io/badge/Applied%20Sciences-10.3390/app12199687-blue.svg)](https://doi.org/10.3390/app12199687)\n\n\n# fedelemflowlist\n\n`fedelemflowlist` is a Python package that generates and provides a standardized elementary flow list for use in life cycle assessment (LCA) data\n as well as mappings to convert data from other sources. This list supports the [Federal LCA Commons](http://www.lcacommons.gov),\n where preferred flows from the active version of the flow list produced by this package can be found in formats for use in LCA software.\n\n Standard formats for a [Flow List](./format%20specs/FlowList.md)\n and a [Flow Mapping](./format%20specs/FlowMapping.md) are defined and provided by `fedelemflowlist`.\n They are implemented as [pandas](https://pandas.pydata.org/) dataframes.\n Standard formats are also described for the input files used in building the flow list, and implemented as .csv files\n in the [input](https://github.com/USEPA/Federal-LCA-Commons-Elementary-Flow-List/tree/master/fedelemflowlist/input) directory. \n\n The version of the package (see [Releases](https://github.com/USEPA/Federal-LCA-Commons-Elementary-Flow-List/releases/))\n corresponds to the version of the flow list that it provides. 
The complete or \'master\' list contains all valid flows,\n where the \'preferred\' flows are the recommended flows for use in LCA data.\n \n`fedelemflowlist` can export complete or subsets of the flow list and mapping files as a .zip archive of [JSON-LD](https://json-ld.org/)\n files conforming to the [openLCA schema](http://greendelta.github.io/olca-schema/).\n\nThe background and methodology behind the creation of the flow list, as well as a summary of the flow list itself, can be found in the USEPA Report\n [\'The Federal LCA Commons Elementary Flow List: Background, Approach, Description and Recommendations for Use\'](https://cfpub.epa.gov/si/si_public_record_report.cfm?Lab=NRMRL&dirEntryId=347251).\nDefinitions for terms used in the flow list can be found on EPA\'s Terminology Services in the [Federal LCA Commons Elementary Flow List for Life Cycle Assessment vocabulary](https://sor.epa.gov/sor_internet/registry/termreg/searchandretrieve/glossariesandkeywordlists/search.do?details=&vocabName=FEDEFL). \n\nSee the [Wiki](https://github.com/USEPA/Federal-LCA-Commons-Elementary-Flow-List/wiki/) for installation, more info on repository\ncontents, use examples, and for instructions on how to contribute to the flow list through additions or edits to flows or flow mappings.\n\n## Disclaimer\n\nThe United States Environmental Protection Agency (EPA) GitHub project code is provided on an ""as is"" basis\n and the user assumes responsibility for its use. EPA has relinquished control of the information and no longer\n has responsibility to protect the integrity, confidentiality, or availability of the information. Any\n reference to specific commercial products, processes, or services by service mark, trademark, manufacturer,\n or otherwise, does not constitute or imply their endorsement, recommendation or favoring by EPA. The EPA seal\n and logo shall not be used in any manner to imply endorsement of any commercial product or activity by EPA or\n the United States Government.\n'",",https://doi.org/10.3390/app12199687","2018/02/05, 15:35:37",2088,MIT,91,1367,"2023/09/12, 19:57:37",14,127,161,30,43,3,0.0,0.686217008797654,"2023/09/12, 19:58:12",v1.2.0,0,13,false,,false,false,,,https://github.com/USEPA,https://www.epa.gov,United States of America,,,https://avatars.githubusercontent.com/u/1304320?v=4,,, pymrio,Multi-Regional Input-Output Analysis in Python.,konstantinstadler,https://github.com/IndEcol/pymrio.git,github,"python,calculations,mrio,input-output-analysis",Life Cycle Assessment,"2023/10/21, 21:17:43",126,17,35,true,Python,Industrial Ecology,IndEcol,"Python,Shell",http://pymrio.readthedocs.io/en/latest/,"b""############\nPymrio\n############\n\nPymrio: Multi-Regional Input-Output Analysis in Python.\n\n.. image:: https://img.shields.io/pypi/v/pymrio.svg\n :target: https://pypi.python.org/pypi/pymrio/\n.. image:: https://anaconda.org/conda-forge/pymrio/badges/version.svg \n :target: https://anaconda.org/conda-forge/pymrio\n.. image:: https://github.com/IndEcol/pymrio/workflows/build/badge.svg\n :target: https://github.com/IndEcol/pymrio/actions\n.. image:: https://coveralls.io/repos/github/IndEcol/pymrio/badge.svg?branch=master\n :target: https://coveralls.io/github/IndEcol/pymrio\n.. image:: https://readthedocs.org/projects/pymrio/badge/?version=latest\n :target: http://pymrio.readthedocs.io/en/latest/?badge=latest\n :alt: Documentation Status\n.. image:: https://img.shields.io/badge/License-GPL%20v3-blue.svg\n :target: https://www.gnu.org/licenses/gpl-3.0\n.. 
image:: https://zenodo.org/badge/21688312.svg\n :target: https://zenodo.org/badge/latestdoi/21688312\n.. image:: https://img.shields.io/badge/code%20style-black-000000.svg\n :target: https://github.com/psf/black\n\nWhat is it\n==========\n\nPymrio is an open source tool for analysing global environmentally extended multi-regional input-output tables (EE MRIOs). \nPymrio aims to provide a high-level abstraction layer for global EE MRIO databases in order to simplify common EE MRIO data tasks. \nPymrio includes automatic download functions and parsers for available EE MRIO databases like EXIOBASE_, WIOD_ and EORA26_. \nIt automatically checks parsed EE MRIOs for missing data necessary for calculating standard EE MRIO accounts (such as footprint, territorial, impacts embodied in trade) and calculates all missing tables. \nVarious data report and visualization methods help to explore the dataset by comparing the different accounts across countries. \n\nFurther functions include:\n\n- analysis methods to identify where certain impacts occur\n- modifying region/sector classification\n- restructuring extensions\n- export to various formats\n- visualization routines and \n- automated report generation\n \n\nWhere to get it\n===============\n\nThe full source code is available on Github at: https://github.com/IndEcol/pymrio\n\nPymrio is registered at PyPI and on the Anaconda Cloud. Install it by:\n\n.. code:: bash\n\n pip install pymrio --upgrade\n \nor, when using conda, install it by\n\n.. code:: bash\n\n conda install -c conda-forge pymrio\n\nor update to the latest version by\n\n.. code:: bash\n\n conda update -c conda-forge pymrio\n\nThe source code of Pymrio is available at the GitHub repo: https://github.com/IndEcol/pymrio \n\nThe master branch in that repo is supposed to be ready for use and might be \nahead of the official releases. To install directly from the master branch use:\n\n.. code:: bash\n\n pip install git+https://github.com/IndEcol/pymrio@master\n\n\n\nQuickstart \n==========\n\nA small test mrio is included in the package. \n\nTo use it, call\n\n.. code:: python\n\n import pymrio\n test_mrio = pymrio.load_test()\n\nThe test mrio consists of six regions and eight sectors: \n\n.. code:: python\n\n\n print(test_mrio.get_sectors())\n print(test_mrio.get_regions())\n\nThe test mrio includes flow tables and some satellite accounts. \nTo show these:\n\n.. code:: python\n\n test_mrio.Z\n test_mrio.emissions.F\n \nHowever, some tables necessary for calculating footprints (like test_mrio.A or test_mrio.emissions.S) are missing. pymrio automatically identifies which tables are missing and calculates them: \n\n.. code:: python\n\n test_mrio.calc_all()\n\nNow, all accounts are calculated, including footprints and emissions embodied in trade:\n\n.. code:: python\n\n test_mrio.A\n test_mrio.emissions.D_cba\n test_mrio.emissions.D_exp\n\nTo visualize the accounts:\n\n\n.. code:: python\n\n import matplotlib.pyplot as plt\n test_mrio.emissions.plot_account('emission_type1')\n plt.show()\n\nEverything can be saved with\n\n.. code:: python\n \n test_mrio.save_all('some/folder')\n\nSee the documentation_, tutorials_ and `Stadler 2021`_ for further examples.\n\nTutorials\n=========\n\nThe documentation_ includes information about how to use pymrio for automatic downloading_ and parsing_ of the EE MRIOs EXIOBASE_, WIOD_, OECD_ and EORA26_ as well as tutorials_ for the handling, aggregating and analysis of these databases. 
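\n\nAs a minimal sketch of that download-and-parse workflow (the storage path and archive name below are placeholders, and EXIOBASE 3 downloads are several gigabytes in size):\n\n.. code:: python

    import pymrio

    # download the EXIOBASE 3 product-by-product tables for one year
    pymrio.download_exiobase3(storage_folder='/tmp/mrio', system='pxp', years=[2011])

    # parse the downloaded archive into an IOSystem and calculate all missing accounts
    exio3 = pymrio.parse_exiobase3(path='/tmp/mrio/IOT_2011_pxp.zip')
    exio3.calc_all()
\nFrom there, the handling, aggregation and analysis functions covered in the tutorials can be applied to the parsed system.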
\n\nCitation\n========\n\nIf you use Pymrio in your research, citing the article describing the package \n(`Stadler 2021`_) is very much appreciated. \n\n.. _`Stadler 2021`: https://openresearchsoftware.metajnl.com/articles/10.5334/jors.251/\n\nFor the full bibtex key see CITATION_ file.\n\n.. _CITATION: CITATION\n\nContributing\n=============\n\nWant to contribute? Great!\nPlease check `CONTRIBUTING.rst`_ if you want to help to improve Pymrio.\n \n.. _CONTRIBUTING.rst: https://github.com/IndEcol/pymrio/blob/master/CONTRIBUTING.rst\n \nCommunication, issues, bugs and enhancements\n============================================\n\nPlease use the issue tracker for documenting bugs, proposing enhancements and all other communication related to pymrio.\n\nYou can follow me on twitter_ to get the latest news about all my open-source and research projects (and occasionally some random retweets).\n\nResearch notice\n~~~~~~~~~~~~~~~\n\nPlease note that this repository is participating in a study into\nsustainability of open source projects. Data will be gathered about this\nrepository for approximately the next 12 months, starting from June\n2021.\n\nData collected will include number of contributors, number of PRs, time\ntaken to close/merge these PRs, and issues closed.\n\nFor more information, please visit `the informational\npage `__ or\ndownload the `participant information\nsheet `__.\n\n\n.. _twitter: https://twitter.com/kst_stadler\n\n.. _downloading: http://pymrio.readthedocs.io/en/latest/notebooks/autodownload.html\n.. _parsing: http://pymrio.readthedocs.io/en/latest/handling.html\n.. _documentation: http://pymrio.readthedocs.io/en/latest/\n.. _tutorials: http://pymrio.readthedocs.io/en/latest/handling.html\n\n.. _EXIOBASE: http://www.exiobase.eu/\n.. _WIOD: http://www.wiod.org/home\n.. _OECD: https://www.oecd.org/sti/ind/inter-country-input-output-tables.htm\n.. _EORA26: http://www.worldmrio.com/simplified/\n\n""",",https://zenodo.org/badge/latestdoi/21688312\n","2014/07/10, 09:20:44",3394,CUSTOM,41,407,"2023/10/18, 09:14:35",35,31,91,24,7,2,1.0,0.032786885245901676,"2023/10/21, 21:12:38",v0.5.3,0,7,false,,false,true,"laurentpauwels/sanctionpaper,laurentpauwels/sanctiondashboard,open-risk/matrix2json,griff-rees/estios,spjuhel/BoARIO,baptiste-an/Mapping-global-ghg-emissions,spjuhel/BoARIO-Tools,mbesserve/lie-inter,CIRAIG/OpenIO-Canada,AntoineTeixeira/MatMat-Trade,it-is-me-mario/MARIO,Open-Risk-Academy/Academy-Course-SFI32064,skinnydelgado/Dashboard_App,alyabolowich/emissions-in-trade-api,jakobsarthur/Price_Uncertainty_HLCA,CIRAIG/Quebec_consumption_footprint,konstantinstadler/pymrio_article",,https://github.com/IndEcol,http://www.is4ie.org,Global society of scholars and practitioners of industrial ecology,,,https://avatars.githubusercontent.com/u/13447554?v=4,,, flowsa,"Library that attributes resource use, waste, emissions, and loss to economic sectors.",USEPA,https://github.com/USEPA/flowsa.git,github,ord,Life Cycle Assessment,"2023/08/03, 20:19:26",18,0,9,true,Python,U.S. Environmental Protection Agency,USEPA,Python,,"b'\n[![FLOWSA Paper](http://img.shields.io/badge/FLOWSA%20Paper-10.3390/app12115742-blue.svg)](https://doi.org/10.3390/app12115742)\n[![DOI](https://zenodo.org/badge/225456627.svg)](https://zenodo.org/badge/latestdoi/225456627)\n\n\n# flowsa\n`flowsa` is a data processing library attributing resources (environmental, \nmonetary, and human), wastes, emissions, and losses to sectors, typically \n[NAICS codes](https://www.census.gov/naics/). 
`flowsa` aggregates, combines,\nand allocates data from a variety of sources. The sources can be found in the \n[GitHub wiki](https://github.com/USEPA/flowsa/wiki/Available-Data#flow-by-activity-datasets) \nunder ""Flow-By-Activity Datasets"".\n\n`flowsa` helps support \n[USEEIO](https://www.epa.gov/land-research/us-environmentally-extended-input-output-useeio-technical-content) \nas part of the [USEEIO modeling](https://www.epa.gov/land-research/us-environmentally-extended-input-output-useeio-models) \nframework. The USEEIO models estimate potential impacts of goods and \nservices in the US economy. The \n[Flow-By-Sector datasets](https://github.com/USEPA/flowsa/wiki/Available-Data#flow-by-sector-datasets) \ncreated in FLOWSA are the environmental inputs to \n[`useeior`](https://github.com/USEPA/useeior).\n\n## Usage\n### Flow-By-Activity (FBA) Datasets\nFlow-By-Activity datasets are formatted tables from a variety of sources. \nThey are largely unchanged from the original data source, with the \nexception of formatting. A list of available FBA datasets can be found in \nthe [Wiki](https://github.com/USEPA/flowsa/wiki/Available-Data#flow-by-activity-datasets).\n\n`import flowsa` \\\n`flowsa.seeAvailableFlowByModels(\'FBA\')` \\\n`flowsa.getFlowByActivity(datasource=""USDA_CoA_Cropland"", year=2017)`\n\n### Flow-By-Sector (FBS) Datasets\nFlow-By-Sector datasets are tables of environmental and other data \nattributed to [sectors](https://www.census.gov/naics/). A list of available \nFBS datasets can be found in the [Wiki](https://github.com/USEPA/flowsa/wiki/Available-Data#flow-by-sector-datasets).\n\n`import flowsa` \\\n`flowsa.seeAvailableFlowByModels(\'FBS\')` \\\n`flowsa.getFlowBySector(\'Water_national_2015_m1\')`\n\n## Installation\n`pip install git+https://github.com/USEPA/flowsa.git@vX.X.X#egg=flowsa`\n\nwhere vX.X.X can be replaced with the version you wish to install under \n[Releases](https://github.com/USEPA/flowsa/releases).\n\n### Additional Information on Installation, Examples, Detailed Documentation\nFor more information on `flowsa` see the [wiki](https://github.com/USEPA/flowsa/wiki).\n\n## Disclaimer\n\nThe United States Environmental Protection Agency (EPA) GitHub project code \nis provided on an ""as is"" basis and the user assumes responsibility for its \nuse. EPA has relinquished control of the information and no longer has \nresponsibility to protect the integrity, confidentiality, or availability \nof the information. Any reference to specific commercial products, \nprocesses, or services by service mark, trademark, manufacturer, or \notherwise, does not constitute or imply their endorsement, recommendation \nor favoring by EPA. The EPA seal and logo shall not be used in any manner \nto imply endorsement of any commercial product or activity by EPA or\nthe United States Government.\n'",",https://doi.org/10.3390/app12115742,https://zenodo.org/badge/latestdoi/225456627","2019/12/02, 19:53:24",1423,MIT,227,3921,"2023/10/18, 18:31:59",22,269,347,90,7,4,0.6,0.3299248120300752,"2023/06/09, 18:07:03",v1.3.2,0,16,false,,false,false,,,https://github.com/USEPA,https://www.epa.gov,United States of America,,,https://avatars.githubusercontent.com/u/1304320?v=4,,, LCIA formatter,A Python tool for standardizing the format and flows of life cycle impact assessment data.,USEPA,https://github.com/USEPA/LCIAformatter.git,github,ord,Life Cycle Assessment,"2023/09/12, 20:01:24",19,0,8,true,Python,U.S. 
Environmental Protection Agency,USEPA,"Python,TeX",,"b'# LCIA formatter\n[![DOI](https://joss.theoj.org/papers/10.21105/joss.03392/status.svg)](https://doi.org/10.21105/joss.03392)\n[![build](https://github.com/USEPA/LCIAformatter/actions/workflows/python-package.yml/badge.svg)](https://github.com/USEPA/LCIAformatter/actions/workflows/python-package.yml)\n\nThe LCIA formatter, or `lciafmt`, is a Python tool for standardizing the format and flows of life cycle impact assessment (LCIA) data. The tool acquires LCIA data transparently from its original \nsource, cleans the data, shapes them into a standard format using the [LCIAmethod format](./format%20specs/LCIAmethod.md), and optionally applies flow mappings as defined in the [Federal LCA Commons Elementary Flow List](https://github.com/USEPA/Federal-LCA-Commons-Elementary-Flow-List). The result can be exported to all formats supported by the\n`pandas` package (e.g. Excel, CSV) or the [openLCA JSON-LD format](https://github.com/GreenDelta/olca-schema). \n\nThe LCIA Formatter v1 was peer-reviewed internally at USEPA and externally through the Journal of Open Source Software. An [article describing the LCIA Formatter was published by JOSS](https://doi.org/10.21105/joss.03392).\n\n## Data Provided\n|LCIA Data|Provider|Link|\n|---|---|---|\n|TRACI 2.1|US Environmental Protection Agency|[Tool for Reduction and Assessment of Chemicals and Other Environmental Impacts](https://www.epa.gov/chemical-research/tool-reduction-and-assessment-chemicals-and-other-environmental-impacts-traci)|\n|ReCiPe 2016 Midpoint|National Institute for Public Health and the Environment (The Netherlands)|[LCIA: the ReCiPe Model](https://www.rivm.nl/en/life-cycle-assessment-lca/recipe)|\n|ReCiPe 2016 Endpoint|National Institute for Public Health and the Environment (The Netherlands)|[LCIA: the ReCiPe Model](https://www.rivm.nl/en/life-cycle-assessment-lca/recipe)|\n|ImpactWorld+ Midpoint*|International Reference Center for Life Cycle of Products, Services and Systems (CIRAIG)|[ImpactWorld+](http://www.impactworldplus.org/en/team.php)|\n|ImpactWorld+ Endpoint*|International Reference Center for Life Cycle of Products, Services and Systems (CIRAIG)|[ImpactWorld+](http://www.impactworldplus.org/en/team.php)|\n|IPCC GWP|Intergovernmental Panel on Climate Change (IPCC)| |\n|FEDEFL Inventory Methods|US Environmental Protection Agency|[FEDEFL Inventory Methods](https://github.com/USEPA/LCIAformatter/wiki/Inventory-Methods)|\n\n\\* only works on Windows installations\n\n## Installation Instructions\n`lciafmt` requires Python 3.9 or greater.\n\nInstall a release directly from GitHub using pip. From a command line interface, run:\n> pip install git+https://github.com/USEPA/LCIAformatter.git@v1.1.0#egg=lciafmt\n\nwhere you can replace \'v1.1.0\' with the version you wish to use under [Releases](https://github.com/USEPA/LCIAformatter/releases).\n\nAlternatively, to install from the most current point on the repository:\n```\ngit clone https://github.com/USEPA/LCIAformatter.git\ncd LCIAformatter\npip install . # or pip install -e . for devs\n```\nThe current version contains an optional dependency on the `pyodbc` library to generate the Impact World+ LCIA method.\nDue to limitations in reading Access databases from non-Windows platforms, this will only be installed on Windows machines.\n \nThis needs to be specified in the pip install command. It can be done in one of two ways:\n\n```\npip install .[""ImpactWorld""]\n```\n\nor\n\n```\npip install . 
-r requirements.txt -r impactworld_requirements.txt \n```\n\nSee the [Wiki](https://github.com/USEPA/LCIAformatter/wiki/) for further installation details and [use instructions](https://github.com/USEPA/LCIAformatter/wiki/Using-lciafmt), or for information on how to seek [support](https://github.com/USEPA/LCIAformatter/wiki/Support).\n\n## Disclaimer\nThe United States Environmental Protection Agency (EPA) GitHub project code is provided on an ""as is"" basis\n and the user assumes responsibility for its use. EPA has relinquished control of the information and no longer\n has responsibility to protect the integrity, confidentiality, or availability of the information. Any\n reference to specific commercial products, processes, or services by service mark, trademark, manufacturer,\n or otherwise, does not constitute or imply their endorsement, recommendation or favoring by EPA. The EPA seal\n and logo shall not be used in any manner to imply endorsement of any commercial product or activity by EPA or\n the United States Government.\n'",",https://doi.org/10.21105/joss.03392,https://doi.org/10.21105/joss.03392","2019/05/22, 13:55:50",1617,MIT,63,539,"2023/09/12, 20:01:32",2,59,93,12,43,0,0.0,0.3776223776223776,"2023/09/12, 20:07:57",v1.1.0,0,12,false,,false,false,,,https://github.com/USEPA,https://www.epa.gov,United States of America,,,https://avatars.githubusercontent.com/u/1304320?v=4,,, LCAx,"The goal for LCAx is to make an open, machine- and human-readable data format for exchanging LCA results, EPDs and assemblies.",ocni-dtu,https://github.com/ocni-dtu/lcax.git,github,,Life Cycle Assessment,"2023/10/18, 10:05:29",24,1,24,true,Python,,,"Python,Rust",http://lcax.kongsgaard.eu/,"b""# LCAx\n\nThe goal for LCAx is to make an open, machine- and human-readable data format for exchanging LCA results,\nEPDs and assemblies. \n\nWe propose a simple three-level data format with information on the project, assembly and EPD level,\nwritten in an open data format and paired with a validator for a more robust and standardized format.\nWe intend to create connections to existing tools and APIs.\n\n# Install \n\nInstall the Python packages:\n\n```bash\npipenv install --dev\n```\n\n# Local Development\n\n```bash\nmkdocs serve\n```""",,"2023/05/12, 14:35:43",166,Apache-2.0,69,69,"2023/07/13, 09:58:42",9,1,14,14,104,0,0.0,0.3793103448275862,"2023/08/25, 08:30:25",v1.3.1,0,2,false,,false,false,ocni-dtu/lcax,,,,,,,,,, ecobalyse,Ecobalyse makes it possible to understand and calculate the ecological impacts of the products distributed in France.,MTES-MCT,https://github.com/MTES-MCT/ecobalyse.git,github,"environment,carbon-emissions,carbon-footprint,simulation",Life Cycle Assessment,"2023/10/24, 09:58:13",20,0,7,true,Elm,Ministère de la Transition écologique et de la Cohésion des territoires et Ministère de la Transition énergétique,MTES-MCT,"Elm,Python,JavaScript,SCSS,Makefile,CSS,Dockerfile,Shell,HTML,Procfile",https://ecobalyse.beta.gouv.fr,"b""# Ecobalyse ![Build status](https://github.com/MTES-MCT/ecobalyse/actions/workflows/node.js.yml/badge.svg)\n\n> Accelerating the roll-out of environmental labelling\n\nThe application is available [at this address](https://ecobalyse.beta.gouv.fr/).\n\n> Note: the Ecobalyse project was initially called **Wikicarbone**.\n\n## Technical stack and prerequisites\n\nThis application is written in [Elm](https://elm-lang.org/). 
You need a [NodeJS](https://nodejs.org/fr/) 14+ environment with `npm` on your machine.\n\n## Installation\n\n $ npm install\n\n## Development\n\n### Local development environment\n\nThe local development server is started with the following two commands:\n\n $ npm run db:build\n $ npm start\n\nTwo development instances are then available:\n\n- [localhost:3000](http://localhost:3000/) serves the frontend and the backend (API);\n- [localhost:1234](http://localhost:1234/) serves only the frontend in _hot-reload_ mode, updating the web interface in real time on every change to the frontend code.\n\n### Debug mode\n\nTo start the development server in debug mode:\n\n $ npm run db:build\n $ npm run start:dev\n\nA frontend debug server is then available on [localhost:1234](http://localhost:1234/).\n\n### Git hooks with Husky and code formatting with Prettier\n\nThis project uses Husky to manage Git hooks, and Prettier for automatic code formatting.\n\n#### Prerequisites\n\n- Husky\n- Prettier\n\nIf you are cloning the repository for the first time, the dependencies should be installed automatically after running npm install. If that is not the case, you can install them manually:\n\n $ npm install --save-dev husky prettier\n\n#### Automatic check before each commit\n\nA pre-commit hook is configured to check that the code is properly formatted before allowing the commit. If the code is not correctly formatted, the commit is blocked.\n\nTo resolve this, you can run the following command:\n\n $ npm run format:json\n\n## Build\n\nTo build the client part of the application:\n\n $ npm run build\n\nThe files are then generated in the `build` directory at the project root, which can be served statically.\n\n## Deployment\n\nThe application is automatically deployed to the [Scalingo](https://scalingo.com/) platform on every update of the `master` branch on [the repository](https://github.com/MTES-MCT/ecobalyse/tree/master).\n\nEach _Pull Request_ made against the repository is also automatically deployed to a dedicated review instance, e.g. `https://ecobalyse-pr44.osc-fr1.scalingo.io/` for pull request #44. 
**These review instances stay active for 72 hours, then are automatically decommissioned after that delay or once the corresponding pull request is merged.**

# Production server

## Environment variables

Some environment variables must be configured through the [Scalingo configuration](https://dashboard.scalingo.com/apps/osc-fr1/ecobalyse/environment) interface:

- `SENTRY_DSN`: the [Sentry](https://sentry.io) DSN to use for error reports.
- `MATOMO_TOKEN`: the [Matomo](https://stats.data.gouv.fr/) token enabling audience tracking for the API.

## Starting the server

To start the full application server (frontend + backend), for example in a production environment:

```
$ npm run build
$ npm run server:start
```

The application is then served on the port defined by the `PORT` environment variable (default: `3000`).

# Ecobalyse data

This repository also contains the (mostly Python) scripts used to import and export the data of the [Ecobalyse](https://github.com/MTES-MCT/ecobalyse) project.

These scripts live in `data/`, and a dedicated [README](data/README.md) details their installation and use.
""",,"2021/08/18, 15:18:29",798,MIT,401,1108,"2023/10/24, 07:30:10",2,366,369,204,1,2,2.1,0.23702422145328716,,,0,8,false,,false,false,,,https://github.com/MTES-MCT,https://ecologie.gouv.fr/,France,,,https://avatars.githubusercontent.com/u/20193330?v=4,,,
lca_algebraic,"This library is a small layer above brightway2, designed for the definition of parametric inventories with fast computation of LCA impacts, suitable for monte-carlo analysis.",oie-mines-paristech,https://github.com/oie-mines-paristech/lca_algebraic.git,github,"lca,brightway2,numpy,monte-carlo,lca-algebraic,foreground-activities,symbolic-expressions",Life Cycle Assessment,"2022/09/14, 09:42:56",27,0,15,false,Jupyter Notebook,OIE - Mines ParisTech,oie-mines-paristech,"Jupyter Notebook,Python,Makefile",,"b'
# Introduction

This library is a small layer above [**brightway2**](https://brightway.dev/), designed for the definition of **parametric inventories** with fast computation of LCA impacts, suitable for **Monte Carlo** analysis.

**lca-algebraic** provides a set of **helper functions** for:
* **compact** & **human readable** definition of activities:
  * search background (tech and biosphere) activities
  * create new foreground activities with parametrized amounts
  * parametrize / update existing background activities (extending the class **Activity**)
* Definition of parameters
* Fast computation of LCAs
* Computation of the Monte Carlo method and global sensitivity analysis (Sobol indices)

# Installation

If you already have Anaconda & Jupyter installed, you can install the library with either **pip** or **conda**:

## Conda

> conda install -c oie-minesparistech lca_algebraic

## PIP

> pip install lca_algebraic

## Pre-packaged installer for Windows

Alternatively, you can download and execute [this installer](https://github.com/oie-mines-paristech/lca_algebraic/releases/download/1.0.0/incer-acv-model-installer.exe).
It will set up a full anaconda environment with **Jupyter**, **Brightway2** and **LCA Algebraic**.

# Usage & documentation

Please refer to the [sample notebook (Markdown)](./example-notebook.md) [(or here as ipynb)](./example-notebook.ipynb).

The full API is [documented here](https://oie-mines-paristech.github.io/lca_algebraic/doc/).

# Licence & Copyright

This library has been developed by [OIE - MinesParistech](http://www.oie.mines-paristech.fr), for the project *INCER-ACV*, led by [ADEME](https://www.ademe.fr/).

It is distributed under the **BSD licence**.

# Principles

The main idea of this library is to move from a **procedural definition** of models (slow and prone to errors) to a **declarative / purely functional** definition of parametric models (models as **pure functions**).

This enables **fast computation of LCA impacts**. We leverage the **power of symbolic calculus** provided by the great library [SymPy](https://www.sympy.org/en/index.html).

We define our model in a **separate DB**, as a nested combination of:
* other foreground activities
* background activities:
  * Technical, referring to the **ecoinvent DB**
  * Biosphere, referring to **brightway2** biosphere activities

The **amounts** in exchanges are expressed either as **static amounts** or as **symbolic expressions** of pre-defined **parameters**.

Each activity of our **root model** is defined as a **parametrized combination** of the **foreground activities**, which can themselves be expressed by the **background activities**.

When computing LCA for foreground models, the library develops the model as a combination of **only background activities**. It computes **once and for all** the impact of **background activities** and compiles a **fast numpy** (vectorial) function for each impact, replacing each background activity by the **static value of the corresponding impact**.

By providing **large vectors** of **parameter values** to those numpy functions, we can compute LCA for **thousands of values** at a time.

![](https://oie-mines-paristech.github.io/lca_algebraic/doc/lca-algebraic.png)
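The following minimal sketch illustrates that idea with plain SymPy and NumPy. It is not lca-algebraic's actual API, and the impact factors are made-up numbers; it only shows the symbolic-to-vectorized compilation the library relies on:

```python
import numpy as np
import sympy as sp

# Two illustrative model parameters.
lifetime, efficiency = sp.symbols("lifetime efficiency")

# Pretend these are the precomputed static impacts of two background
# activities (kg CO2-eq per unit); real values would come from an LCA.
IMPACT_STEEL = 1.9
IMPACT_ELECTRICITY = 0.4

# A parametric foreground model expressed symbolically.
impact_expr = (10 / lifetime) * IMPACT_STEEL + (100 / efficiency) * IMPACT_ELECTRICITY

# Compile once into a vectorized numpy function...
impact_fn = sp.lambdify((lifetime, efficiency), impact_expr, "numpy")

# ...then evaluate thousands of Monte Carlo samples in a single call.
rng = np.random.default_rng(42)
samples = impact_fn(rng.uniform(10, 30, 100_000), rng.uniform(0.7, 0.95, 100_000))
print(samples.mean(), samples.std())
```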
# Compatibility with brightway2

Under the hood, the activities we define with **lca-algebraic** are standard **brightway2** activities. The amounts of exchanges are stored as **float values** or **serialized as strings** in the property **formula**.

Parameters are also stored in the **brightway2** projects, making it fully compatible with **brightway**.

Thus, a model defined with **lca-algebraic** is stored as a regular **bw2** project. We can use **bw2** native support for [parametrized datasets](https://2.docs.brightway.dev/intro.html#parameterized-datasets) for computing LCAs, though this is much slower than the method explained here.
'",,"2020/03/30, 14:53:35",1304,BSD-2-Clause,0,132,"2022/09/13, 16:15:29",18,11,16,0,407,3,0.0,0.016949152542372836,"2021/06/23, 11:51:05",1.0.0,0,3,false,,false,false,,,https://github.com/oie-mines-paristech,http://www.oie.mines-paristech.fr/,Sophia Antipolis - France,,,https://avatars.githubusercontent.com/u/62893802?v=4,,,
pyLCAIO,An object class to hybridize lifecycle assessment and environmentally extended input-output (EEIO) databases.,maximeagez,https://github.com/MaximeAgez/pylcaio.git,github,"exiobase,life-cycle-assessment,carbon-footprint,scientific-research,input-output,industrial-ecology,ecoinvent,brightway2,database,energy-consumption,environmental-modelling,water-footprint",Life Cycle Assessment,"2022/12/15, 20:41:40",35,0,13,true,Python,,,"Python,Jupyter Notebook",,"b'# pyLCAIO
An object class to hybridize lifecycle assessment (LCA) and environmentally extended input-output (EEIO) databases.

* Create your own LCA-IO hybrid database (e.g., combining ecoinvent and exiobase data)
* Automates hybridization and correction for double-counting with two available methods (STAM and binary)
* Default parameters only allow the hybridization of ecoinvent 3.5, 3.6, 3.7, 3.7.1, 3.8 and 3.9 with EXIOBASE3.7+ (v3.7 and higher)
* The resulting hybrid-ecoinvent database can be exported to brightway2 and the GUI activity-browser
* Includes matching of ecoinvent and EXIOBASE environmental flows to Impact World+

Specific additional features, _**only available**_ while hybridizing ecoinvent 3.5 with exiobase:
* Can accept the capitals-endogenized version of EXIOBASE
* Includes extrapolated additional environmental extensions for EXIOBASE (from USEEIO)
* Includes _**regionalized**_ characterization matrices for use with Impact World+

This library will be regularly updated to provide support for newer versions of ecoinvent.

# System requirements
With less than 12GB of RAM you will most likely run into a MemoryError, making it impossible to generate a database.

# Dependencies
* Python 3
* Pandas
* Numpy
* Scipy
* pymrio
* ecospold2matrix
* pickle
* brightway2
* bw2agg

# Related publications
* Majeau-Bettez, G., Agez, M., Wood, R., Södersten, C., Margni, M., Strømman, A. H., & Samson, R. (2017). Streamlined Hybridization software: merging Ecoinvent and Exiobase. In Biennial Conference of the International Society for Industrial Ecology.
* Agez, M., Majeau-Bettez, G., Margni, M., Strømman, A. H., & Samson, R. (2019). Lifting the veil on the correction of double counting incidents in hybrid Life Cycle Assessment. Journal of Industrial Ecology, 24(3), 517–533. https://doi.org/10.1111/jiec.12945
* Agez, M., Wood, R., Margni, M., Strømman, A. H., Samson, R., & Majeau-Bettez, G. (2020). Hybridization of complete LCA and MRIO databases for a comprehensive product system coverage. Journal of Industrial Ecology, 24(4), 774–790. https://doi.org/10.1111/jiec.12979
* Agez, M., Muller, E., Patouillard, L., Södersten, C. J. H., Arvesen, A., Margni, M., Samson, R., & Majeau-Bettez, G. (2021). Correcting remaining truncations in hybrid LCA database compilation. Journal of Industrial Ecology. https://doi.org/10.1111/jiec.13132'"
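As a rough structural illustration of what "hybridizing" means (this is not pyLCAIO's code, and all numbers are toys), the hybrid technology matrix can be pictured as a block matrix where an upstream cut-off block links LCA processes to IO sectors:

```python
import numpy as np

# Toy sizes: 3 LCA processes, 2 IO sectors.
A_lca = np.array([[0.0, 0.1, 0.0],
                  [0.2, 0.0, 0.0],
                  [0.0, 0.3, 0.0]])   # process-by-process (LCA)
A_io = np.array([[0.05, 0.10],
                 [0.20, 0.02]])       # sector-by-sector (EEIO)

# Upstream cut-off matrix: inputs that LCA processes draw from IO sectors.
# A double-counting correction (e.g. STAM) would zero out entries already
# covered by the LCA inventory.
Cu = np.array([[0.01, 0.00, 0.02],
               [0.00, 0.03, 0.00]])

# Hybrid matrix: LCA block top-left, IO block bottom-right, cut-offs
# bottom-left; the top-right block stays zero (no IO inputs from LCA).
A_hyb = np.block([
    [A_lca, np.zeros((3, 2))],
    [Cu,    A_io],
])

# Total requirements via the Leontief inverse.
L = np.linalg.inv(np.eye(5) - A_hyb)
print(L.round(3))
```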
,",https://doi.org/https://doi.org/10.1111/jiec.12945\n*,https://doi.org/10.1111/jiec.12979\n*,https://doi.org/10.1111/jiec.13132","2019/03/19, 14:32:47",1681,GPL-2.0,1,194,"2023/02/24, 02:34:53",1,2,6,1,243,1,0.0,0.021164021164021163,"2022/12/15, 20:43:29",v2.4,0,2,false,,false,false,,,,,,,,,,,
ONEARMY,A series of tools for the Precious Plastic community to collaborate around the world and tackle plastic waste.,ONEARMY,https://github.com/ONEARMY/precious-plastic.git,github,,Circular Economy and Waste,"2023/10/21, 14:49:17",52,0,1,true,HTML,ONE ARMY,ONEARMY,"HTML,CSS",http://preciousplastic.com,"b""# Precious Plastic

The current website is made from exported Webflow HTML, CSS and JS. It currently makes no sense to make pull requests. However, if you spot a bug or a problem it would be great if you'd let us know by opening an issue so we can fix it as soon as possible. Peace :)
""",,"2016/02/27, 10:27:21",2797,GPL-2.0,51,728,"2023/06/27, 17:45:59",0,22,93,4,120,0,0.0,0.4078947368421053,"2020/01/06, 19:07:50",3.0,0,10,false,,false,false,,,https://github.com/ONEARMY,,,,,https://avatars.githubusercontent.com/u/43448126?v=4,,,
Trash-ICRA19,A Bounding Box Labeled Dataset of Underwater Trash.,handle/11299,,custom,,Circular Economy and Waste,,,,,,,,,,https://conservancy.umn.edu/handle/11299/214366,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,
TACO,Trash Annotations in Context Dataset Toolkit.,pedropro,https://github.com/pedropro/TACO.git,github,"trash,dataset,mask-rcnn,object-detection,litter,deep-learning,garbage",Circular Economy and Waste,"2023/02/13, 20:00:07",517,0,94,true,Jupyter Notebook,,,"Jupyter Notebook,Python",http://tacodataset.org,"b'

TACO is a growing image dataset of waste in the wild. It contains images of litter taken under diverse environments: woods, roads and beaches. These images are manually labeled and segmented according to a hierarchical taxonomy to train and evaluate object detection algorithms. Currently, images are hosted on Flickr and we have a server that is collecting more images and annotations @ [tacodataset.org](http://tacodataset.org)
For convenience, annotations are provided in COCO format. Check the metadata here:
http://cocodataset.org/#format-data

TACO is still relatively small, but it is growing. Stay tuned!

# Publications

For more details check our paper: https://arxiv.org/abs/2003.06975

If you use this dataset and API in a publication, please cite us using:
```
@article{taco2020,
    title={TACO: Trash Annotations in Context for Litter Detection},
    author={Pedro F Proença and Pedro Simões},
    journal={arXiv preprint arXiv:2003.06975},
    year={2020}
}
```

# News
**December 20, 2019** - Added 785 more images and 2642 litter segmentations.
**November 20, 2019** - TACO is officially open for new annotations: http://tacodataset.org/annotate

# Getting started

### Requirements

To install the required python packages simply type
```
pip3 install -r requirements.txt
```
Additionally, to use ``demo.pynb``, you will also need the [coco python api](https://github.com/cocodataset/cocoapi). You can get this using
```
pip3 install git+https://github.com/philferriere/cocoapi.git#subdirectory=PythonAPI
```

### Download

To download the dataset images simply issue
```
python3 download.py
```
Alternatively, download from [![DOI](https://zenodo.org/badge/DOI/10.5281/zenodo.3587843.svg)](https://doi.org/10.5281/zenodo.3587843)

Our API contains a jupyter notebook ``demo.pynb`` to inspect the dataset and visualize annotations.

**Unlabeled data**

A list of URLs for both unlabeled and labeled images is now also provided in `data/all_image_urls.csv`. Each entry contains one URL for the original image (second column) and one URL for a VGA-resized version (first column) for images hosted by Flickr. If you decide to annotate these images using other tools, please make them public and contact us so we can keep track.

**Unofficial data**

Annotations submitted via our website are added weekly to `data/annotations_unofficial.json`. These have not yet been reviewed by us -- some may be inaccurate or have poor segmentations. You can use the same command to download the respective images:
```
python3 download.py --dataset_path ./data/annotations_unofficial.json
```

### Trash Detection

The implementation of [Mask R-CNN by Matterport](https://github.com/matterport/Mask_RCNN) is included in ``/detector`` with a few modifications. Requirements are the same. Before using this, the dataset needs to be split. You can either download our [weights and splits](https://github.com/pedropro/TACO/releases/tag/1.0) or generate these from scratch using the `split_dataset.py` script to generate N random train, val, test subsets. For example, run this inside the directory `detector`:
```
python3 split_dataset.py --dataset_dir ../data
```

For further usage instructions, check ``detector/detector.py``.

As you can see [here](http://tacodataset.org/stats), most of the original classes of TACO have very few annotations, therefore these must be either left out or merged together. Depending on the problem, ``detector/taco_config`` contains several class maps to target classes, which maintain the most dominant classes, e.g., Can, Bottles and Plastic bags. Feel free to make your own classes.
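Since the annotations follow the COCO format, they can be inspected with the standard COCO Python API. A small sketch, assuming the official annotation file was downloaded to `data/annotations.json` (the exact path may differ in your checkout):

```python
from pycocotools.coco import COCO

# Assumed path: where the official TACO annotations were downloaded to.
coco = COCO("data/annotations.json")

# List a few categories from the hierarchical taxonomy.
cats = coco.loadCats(coco.getCatIds())
print(sorted(c["name"] for c in cats)[:5])

# Grab all annotations for the first image and print their boxes.
img_id = coco.getImgIds()[0]
anns = coco.loadAnns(coco.getAnnIds(imgIds=img_id))
for ann in anns:
    print(ann["category_id"], ann["bbox"])  # bbox is [x, y, width, height]
```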

'",",https://arxiv.org/abs/2003.06975\n\nIf,https://doi.org/10.5281/zenodo.3587843","2019/06/08, 22:19:04",1600,MIT,16,190,"2022/09/05, 17:39:19",39,3,17,0,415,5,0.0,0.016304347826086918,"2020/03/05, 17:42:44",1.0,0,2,false,,false,false,,,,,,,,,,,
circularity.ID Open Data Standard,The standard represents the results and findings of an extensive six-year research into the needs of stakeholders in the fashion industry to achieve a circular economy.,circularfashion,https://github.com/circularfashion/cf-circularity-id-standard.git,github,,Circular Economy and Waste,"2021/10/07, 13:22:16",17,0,4,true,Python,circular.fashion,circularfashion,"Python,Shell",https://circularity.id,"b'![circularity id logo](logo.jpg)

# circularity.ID® Open Data Standard

[View the website](https://circularity.id)
[Download the Whitepaper](https://circularity.id/static/circular.fashion_circularityID_white_paper_2021.pdf)

__current version of schema__: [3.0](https://circularity.id/open-data-standard.html)

__[older versions](https://github.com/circularfashion/cf-circularity-id-standard/tree/master/schema)__

## Description
The circularity.ID Open Data Standard describes essential digital product data for clothing and textiles to enable a circular economy and recyclability in the fashion industry. Products identifiable with the circularity.ID Open Data Standard make the entire story of a product and its material components accessible, and enable a circular use and end-of-life phase on the data level.

## Objective & Scope
The circularity.ID Open Data Standard is intended for use in the fashion industry for the labelling, identification and storing of digital product data, optimising products for a circular economy. It specifically targets the circularity and recyclability of textiles, and is ultimately designed for the potential of fibre-to-fibre recycling.

In short, the circularity.ID can ensure that:
1) Information on essential material and chemical components is assessed, stored and accessible
2) Product data is automatically recognised by software at sorting facilities
3) Products are matched to appropriate recyclers, to be recyclable to the best knowledge and method according to the current state of technology at end-of-life
4) Essential product information is available to consumers to enable longevity and multiple use cycles for a product, e.g. through redesign and resale services.

## Format

The circularity.ID ODS is split into immutable product data, including material and chemical components, and a mutable set of data that contains product information such as product images, description, sustainability consumer information and service offers, in JSON format.

## Contact

The circularity.ID is initiated and administered by circular.fashion UG (haftungsbeschränkt).

Contact: [https://circular.fashion/en/about/contact.html](https://circular.fashion/en/about/contact.html)

## Guidelines for Use

### Use of circularity.ID Open Data Standard
The circularity.ID Open Data Standard can be used according to the GNU General Public License v3.0 and always needs to be referenced when using or modifying the standard. Using the circularity.ID Open Data Standard does not imply that a product was tested for recyclability by circular.fashion, but the use of the standard data format enables circular.fashion to conduct Circular Product Checks, and supports connected services and applications. 
The circularity.ID Open Data Standard does not imply that data created according to the standard must be made public.

### Use of circularity.ID Trademark
circularity.ID is a registered trademark. Using the circularity.ID Open Data Standard does not include permission to use the circularity.ID trademark. circular.fashion is the only issuing body of the circularity.ID and will provide a unique identifying number as proof. Only when circular.fashion has issued an ID as a circularity.ID may the trademark be used, following separate guidelines stated in a different document.

## Developing

For information on how to integrate the open data standard into your tech tools, or for contributing, see [developing circularity.ID](develop.md)
'
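To illustrate the immutable/mutable split described under Format, here is a tiny sketch; all field names are invented for illustration and are not the published schema:

```python
# Invented field names -- NOT the official circularity.ID schema.
immutable_product_data = {
    "product_id": "CF-0001",
    "materials": [{"fibre": "cotton", "share": 0.8},
                  {"fibre": "elastane", "share": 0.2}],
    "chemicals": ["reactive-dye"],
}

mutable_product_data = {
    "product_id": "CF-0001",
    "images": ["https://example.com/front.jpg"],
    "description": "Organic cotton t-shirt",
    "services": ["resale", "redesign"],
}
```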
,,"2019/10/09, 08:19:09",1477,GPL-3.0,0,160,"2022/07/06, 20:24:14",3,8,12,2,476,2,0.125,0.4516129032258065,,,0,7,false,,false,false,,,https://github.com/circularfashion,https://circular.fashion,,,,https://avatars.githubusercontent.com/u/91959849?v=4,,,
RecycleNet,Effective trash classification model using only a small number of annotated images.,sangminwoo,https://github.com/sangminwoo/RecycleNet.git,github,"trash-classification,attention,recyclenet,trashnet",Circular Economy and Waste,"2021/01/04, 16:14:55",31,0,9,false,Python,,,Python,,"b""RecycleNet
================================
In the era of mass production and mass consumption, trash disposal has become an important national issue. With this trend, the social and economic importance of ***trash collection and reuse*** is increasing. An alternative is to let the machine classify trash automatically once the user discards it, regardless of the material.

We use two methods to create an ***effective trash classification model*** using only a small number of annotated trash images (2527):

***1) Transfer learning: using an ImageNet pre-trained model***
***2) Effective feature learning with an attention module***

To demonstrate that the proposed methodologies are effective, a large number of ablation studies were conducted, showing them to be more effective than state-of-the-art attention modules.

- Backbone Network: ResNet
- Attention Module: RecycleNet

Requirements
-----------
Install all the python dependencies using pip:
```
$ git clone https://github.com/sangminwoo/RecycleNet.git
$ cd RecycleNet
$ pip install -r requirements.txt
```
* PyTorch is not included. Please go to the [official website](https://pytorch.org/get-started/locally/).

Data Preparation (TrashNet[1]: https://github.com/garythung/trashnet)
--------------------------------------------------------------------
* Total: 2527 (contains 6 classes)
  - Glass 501
  - Paper 594
  - Cardboard 403
  - Plastic 482
  - Metal 410
  - Non-recyclable Trash 137

* Train/Val/Test set: 70/13/17
* Data Augmentation

* :warning: You may use *additional_dataset.zip* as another version of the dataset. But if you use both of them in the training phase, the increased intra-class variance will decrease accuracy. You might instead use it to test true generalizability on a totally different dataset. (In real-world settings, trash has high intra-class variance, so this is very important!)

Data Augmentation (Albumentations[4])
------------------------------------
```
$ python augmentation.py --root_dir $ROOT --save_dir $SAVE --probability $PROB
```
**$ROOT**: 'dataset-resized/' (default)
**$SAVE**: 'augmented/' (default)
**$PROB**: low (default), mid, high (probability of applying the transform)

Training
---------
Without pre-training (training from scratch):
```
$ python main.py --gpu $GPUNUM --arch $ARCHITECTURE --no_pretrain
```

Without the attention module:
```
$ python main.py --gpu $GPUNUM --arch $ARCHITECTURE
```

With the attention module:
```
$ python main.py --gpu $GPUNUM --arch $ARCHITECTURE --use_att --att_mode $ATT
```
**$GPUNUM**: 0; 0,1; 0,3; 0,1,2; whatever
**$ARCHITECTURE**: resnet18_base (default), resnet34_base, resnet52_base, resnet101_base, resnet152_base
**$ATT**: ours (default), cbam, se

You can find more configurations in *main.py*.

Evaluation
----------
```
$ python main.py --gpu $GPUNUM --resume save/model_best.pth.tar --use_att -e
```
**$resume**: save/model_best.pth.tar (default) (If you have changed the save path, you should change the resume path as well.)
**$e** (or evaluate): set evaluation mode

Webcam Inference
----------------
```
$ python webcam.py --resume save/model_best_pth.tar
```

Configuration
-------------
* Loss Function: Cross Entropy Loss
* Optimizer: SGD
* Initial Learning Rate: 2e-4
* Epochs: 100
* Every 40 epochs, learning rate = learning rate * 1/10

Attention Module
----------------
![Alt text](/images/Attention.jpg)

* Attention Module
  - The **attention mechanism** learns parameters with a high weight for important features and a low weight for unnecessary ones:
    x″ = f(x, θ) ∗ A(x′, ∅), where 0 ≤ A(x′, ∅) ≤ 1.
    (x: input feature, x′: CNN or later features, x″: output feature; θ, ∅: learnable parameters; A: attention operation)
  - Looking at the network from a **forward perspective**, the features are refined through attention modules:
    d(f(x, θ) · A(x′, ∅))/dθ = (df(x, θ)/dθ) ∗ A(x′, ∅), where 0 ≤ A(x′, ∅) ≤ 1.
  - From a **backward perspective**, the greater the attention value, the greater the gradient value, so effective learning is achieved.
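A minimal sketch of this kind of gated attention in PyTorch (an SE-style channel gate, shown for illustration only; this is not RecycleNet's exact module):

```python
import torch
import torch.nn as nn

class AttentionGate(nn.Module):
    # Minimal channel-attention gate: f(x) * A(x), with 0 <= A(x) <= 1.
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),  # keeps the attention weights in [0, 1]
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        weights = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * weights  # features rescaled by the learned attention

x = torch.randn(2, 64, 32, 32)
print(AttentionGate(64)(x).shape)  # torch.Size([2, 64, 32, 32])
```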
![Alt text](/images/Attention%20Visualization.jpg)

* Attention Visualization
  - **Visualization comparison** of the feature maps extracted after the last convolution block.
  - **ResNet18 + Ours** vs. ResNet18 (baseline)
  - While **ResNet18 + Ours** classified successfully, ResNet18 failed.
  - The feature maps show that when the attention module is inserted, the network attends more precisely to the **object extent**.

Ablation Study
--------------
* Non-pre-trained model vs. pre-trained model (transfer learning)

| Method | Accuracy@1 | Parameters(M) |
|----------------------|-------------|---------------|
| ResNet18 | 70.302 | 11.18 |
| ResNet34 | 64.965 | 21.29 |
| ResNet50 | 58.701 | 23.52 |
| Pre-trained ResNet18 | **90.023** | 11.18 |
| Pre-trained ResNet34 | **93.271** | 21.29 |
| Pre-trained ResNet50 | **93.735** | 23.52 |

* Attention Module (SENet vs. CBAM vs. Ours)

| Method | Accuracy@1 | Parameters(M) |
|----------------------|-------------|---------------|
| ResNet18 + SE[2] | 87.703 | 11.27 |
| ResNet34 + SE[2] | 88.863 | 21.45 |
| ResNet50 + SE[2] | 91.879 | 26.05 |
| ResNet18 + CBAM[3] | 79.814 | 11.27 |
| ResNet34 + CBAM[3] | 81.439 | 21.45 |
| ResNet50 + CBAM[3] | 82.135 | 26.05 |
| ResNet18 + Ours | **93.039** | 11.24 |
| ResNet34 + Ours | **93.968** | 21.35 |
| ResNet50 + Ours | **94.2** | 24.15 |

* Channel Attention & Spatial Attention

| Network ablation | Accuracy@1 | Parameters(M) |
|--------------------|-------------|---------------|
| ResNet18 | 90.023 | 11.18 |
| ResNet18 + s | 92.807 | 11.20 |
| ResNet18 + s + c | **93.039** | 11.24 |

| Combination ablation | Accuracy@1 | Parameters(M) |
|----------------------|-------------|---------------|
| Mul | 91.647 | 11.24 |
| Max | 92.575 | 11.24 |
| Sum | **93.039** | 11.24 |

Conclusion
----------
While proposing a deep-learning model specialized in trash classification, two difficult problems were faced experimentally:

*1) insufficiency of the dataset*
*2) the absence of effective feature-learning methods*

Both were addressed with **transfer learning and an attention mechanism.**

The proposed methodology proved significant in both quantitative and qualitative assessments. Because it exhibits significant performance improvements without significantly increasing the number of parameters, it is expected to be of high value for other applications as well.

Reference
----------
| # | Reference | Link |
|---|----------------|----------------------------------------------|
| 1 | TrashNet | https://github.com/garythung/trashnet |
| 2 | SENet | https://github.com/hujie-frank/SENet |
| 3 | CBAM | https://github.com/Jongchan/attention-module |
| 4 | Albumentations | https://github.com/albu/albumentations |

Acknowledgement
---------------
We greatly appreciate the dataset [TrashNet](https://github.com/garythung/trashnet) and the well-organized code [CBAM](https://github.com/Jongchan/attention-module). Our codebase is mostly built on them.
""",,"2018/10/26, 03:51:23",1825,MIT,0,55,"2022/07/06, 20:24:14",1,0,0,0,476,0,0,0.0,,,0,1,false,,false,false,,,,,,,,,,,
trashnet,Dataset of images of trash. Torch-based CNN for garbage image classification.,garythung,https://github.com/garythung/trashnet.git,github,"dataset,torch,convolutional-neural-networks,deep-learning,trash,garbage,image-classification",Circular Economy and Waste,"2023/06/02, 03:39:01",517,0,61,true,Lua,,,"Lua,Python",,"b""# trashnet
Code (only for the convolutional neural network) and dataset for my and [Mindy Yang](http://github.com/yangmindy4)'s final project for [Stanford's CS 229: Machine Learning class](http://cs229.stanford.edu). 
Our paper can be found [here](https://cs229.stanford.edu/proj2016/report/ThungYang-ClassificationOfTrashForRecyclabilityStatus-report.pdf). The convolutional neural network results on the poster are dated, since we continued working after the end of the quarter and were able to achieve around 75% test accuracy (with a 70/13/17 train/val/test split) after changing the weight initialization to the Kaiming method.

## Dataset
This repository contains the dataset that we collected. The dataset spans six classes: glass, paper, cardboard, plastic, metal, and trash. Currently, the dataset consists of 2527 images:
- 501 glass
- 594 paper
- 403 cardboard
- 482 plastic
- 410 metal
- 137 trash

The pictures were taken by placing the object on a white posterboard and using sunlight and/or room lighting. The pictures have been resized down to 512 x 384, which can be changed in `data/constants.py` (resizing them involves going through step 1 in usage). The devices used were an Apple iPhone 7 Plus, an Apple iPhone 5S, and an Apple iPhone SE.

The size of the original dataset, ~3.5GB, exceeds the git-lfs maximum size, so it has been uploaded to Google Drive. If you are planning on using the Python code to preprocess the original dataset, then download `dataset-original.zip` from the link below and place the unzipped folder inside of the `data` folder.

**If you are using the dataset, please give a citation of this repository. The dataset can be downloaded [here](https://huggingface.co/datasets/garythung/trashnet).**

## Installation
### Lua setup
We wrote code in [Lua](http://lua.org) using [Torch](http://torch.ch); you can find installation instructions [here](http://torch.ch/docs/getting-started.html). You'll need the following Lua packages:

- [torch/torch7](http://github.com/torch/torch7)
- [torch/nn](http://github.com/torch/nn)
- [torch/optim](http://github.com/torch/optim)
- [torch/image](http://github.com/torch/image)
- [torch/gnuplot](http://github.com/torch/gnuplot)

After installing Torch, you can install these packages by running the following:

```bash
# Install using Luarocks
luarocks install torch
luarocks install nn
luarocks install optim
luarocks install image
luarocks install gnuplot
```

We also need [@e-lab](http://github.com/e-lab)'s [weight-init module](http://github.com/e-lab/torch-toolbox/blob/master/Weight-init/weight-init.lua), which is already included in this repository.

### CUDA support
Because training takes a while, you will want to use a GPU to get results in a reasonable amount of time. We used a GTX 650 Ti with CUDA. To enable GPU acceleration with CUDA, you'll first need to install CUDA 6.5 or higher. Find CUDA installations [here](http://developer.nvidia.com/cuda-downloads).

Then you need to install the following Lua packages for CUDA:
- [torch/cutorch](http://github.com/torch/cutorch)
- [torch/cunn](http://github.com/torch/cunn)

You can install these packages by running the following:

```bash
luarocks install cutorch
luarocks install cunn
```

### Python setup
Python is currently used for some image preprocessing tasks. 
The Python dependencies are:
- [NumPy](http://numpy.org)
- [SciPy](http://scipy.org)

You can install these packages by running the following:

```bash
# Install using pip
pip install numpy scipy
```

## Usage

### Step 1: Prepare the data
Unzip `data/dataset-resized.zip`.

If adding more data, then the new files must be enumerated properly and put into the appropriate folder in `data/dataset-original` and then preprocessed. Preprocessing the data involves deleting the `data/dataset-resized` folder and then calling `python resize.py` from `trashnet/data`. This will take around half an hour.

### Step 2: Train the model
TODO

### Step 3: Test the model
TODO

### Step 4: View the results
TODO

## Contributing
1. Fork it!
2. Create your feature branch: `git checkout -b my-new-feature`
3. Commit your changes: `git commit -m 'Add some feature'`
4. Push to the branch: `git push origin my-new-feature`
5. Submit a pull request

## Acknowledgments
- Thanks to the Stanford CS 229 autumn 2016-2017 teaching staff for a great class!
- [@e-lab](http://github.com/e-lab) for their [weight-init Torch module](http://github.com/e-lab/torch-toolbox/blob/master/Weight-init/weight-init.lua)

## TODOs
- finish the Usage portion of the README
- add specific results (and parameters used) that were achieved after the CS 229 project deadline
- add saving of confusion matrix data and creation of graphic to `plot.lua`
- rewrite the data preprocessing to only reprocess new images if the dimensions have not changed
""",,"2017/04/08, 22:16:08",2391,MIT,1,8,"2021/08/05, 18:39:07",7,0,1,0,811,3,0,0.0,,,0,1,false,,false,false,,,,,,,,,,,
OpenLitterMap,"An open, interactive and accessible database of the world's litter and plastic pollution.",OpenLitterMap,https://github.com/OpenLitterMap/openlittermap-web.git,github,,Circular Economy and Waste,"2023/10/19, 20:09:48",100,0,17,true,PHP,OpenLitterMap,OpenLitterMap,"PHP,JavaScript,Vue,Blade",https://openlittermap.com,"b'

## About OpenLitterMap

OpenLitterMap is an open, interactive, and accessible database of the world's litter and plastic pollution.

We are building a fun data-collection experience to harness the unprecedented potential of citizen scientists around the world.

We believe that science on pollution should be an open, transparent and democratic process, not limited or controlled by anyone or any group.

If you would like to help shape the future of OpenLitterMap, we would love to have you in our Slack Channel.

Every Friday, 6pm Irish time, we run a community zoom call for ~1 hour where anyone interested in OpenLitterMap can listen in to learn more, and share ideas to help the future direction of the platform.

OpenLitterMap is underdeveloped, but we are a community of over 3,600 contributors who have crowdsourced more than 100,000 uploads from 80 countries.

All of our data is available to explore on the Global Map, and more sophisticated "city grid maps" are also available to explore. Anyone can download all of our data for free (bulk photo downloading currently unavailable).

We have a GoFundMe which includes our first promotional video and a demo video showing how to use our app.

The source code for the mobile app (React Native) has launched, and will be followed by the OpenLitterAI and our smart contracts.

OpenLitterMap is the first project to reward users with cryptocurrency for the production of geographic information. By using the app and doing "proof of work", users are "mining" Littercoin, which we are experimenting with to reward and incentivize the sharing of geospatial data on plastic pollution.

STAY TUNED FOR LOTS OF EXCITING UPDATES

OpenLitterMap-web is built with Laravel, Vue.js and Bulma.

To install this project locally on your machine, download and install Homestead.

First, download VirtualBox, which will give you a Virtual Machine. This is used to give us all the same development environment. Alternatively, if you use a Mac, you can use Laravel Valet.

Second, you will need to download Vagrant, which you will use to provision, turn on and shut down your VM.

In your root directory, add the vagrant box with

`vagrant box add laravel/homestead`

then clone the box with `git clone https://github.com/laravel/homestead.git ~/Homestead`

You should now have a "Homestead" folder on your machine at `~/Users/You/Homestead`.

Before turning on the VM, we are going to set up the Homestead.yaml file. Every time you save a file, Homestead.yaml will mirror your local code and copy it to the VM which your web-server (VM) will interact with.

Open the Homestead.yaml file, add a new site and create a database:

```
ip: "192.168.10.10"
memory: 2048
cpus: 1
provider: virtualbox

authorize: ~/.ssh/id_rsa.pub

keys:
    - ~/.ssh/id_rsa

folders:
    - map: ~/Code
      to: /home/vagrant/Code

sites:
    - map: olm.test
      to: /home/vagrant/Code/openlittermap-web/public

databases:
    - olm
    - olm_test

features:
    - mysql: true
    - minio: true

buckets:
    - name: olm-public
      policy: public
    - name: olm-public-bbox
      policy: public
```

Next, update the hosts file on your host machine (`sudo nano /etc/hosts`; on Windows it's `C:\Windows\System32\Drivers\etc\hosts`) and include `192.168.10.10 olm.test`.

When you want to boot up the VM, cd into the Homestead folder on your host machine and run `vagrant up`.

Download the repo and save it locally into your "Code" folder:

`~/Users/You/Code/openlittermap-web`

If this is your first time installing, you need to run `vagrant provision`.

You also need to install composer and npm dependencies.

Locally, run `npm install`.

SSH into the VM with `vagrant ssh`, cd into Code/openlittermap-web, and then run `composer install`.
You can migrate and seed the tables with `php artisan migrate --seed`.

Once you're done, run `npm run watch`, which will build the project into the `public` folder.

You should now be able to open the browser and visit olm.test.

If you would like to contribute something, make a new branch locally: `git checkout -b feature/my-new-feature`. We would love to see your pull requests!

You might notice there are some websocket errors in the browser. Some operations, like adding photos, broadcast live events to the client. It's easy to get websockets set up to resolve this:

```
In your .env file, add "WEBSOCKET_BROADCAST_HOST=192.168.10.10"
In broadcasting.php, change 'host' => env('WEBSOCKET_BROADCAST_HOST')
In one window, run `php artisan websockets:serve --host=192.168.10.10`
Then, in another window, run `php artisan horizon`
To test it's working, open another window, open tinker and run event(new \App\Events\UserSignedUp(1));
```

If you want to generate some dummy photos for development purposes, use the `php artisan olm:photos:generate-dummy-photos` command to generate 1500 dummy photos. It also takes an argument, e.g. `php artisan olm:photos:generate-dummy-photos 2000` will generate 2000 photos. After running the above command, run `php artisan clusters:generate-all` and the photos should be visible in the `Global Map` tab and at http://olm.test/world/Ireland/County%20Cork/Cork/map

The project uses AWS S3 to store photos on production. On development, however, it uses [Minio](https://laravel.com/docs/8.x/homestead#configuring-minio), an open source object storage server with an Amazon S3 compatible API. If you copied the .env.example file into .env, you should be able to access the Minio control panel at http://192.168.10.10:9600 (homestead:secretkey). Remember to update the Access Policy to public for your buckets on the admin panel.

You are now ready to get started!

Have fun, and thanks for taking an interest in OpenLitterMap.
'",,"2020/08/19, 23:19:26",1162,GPL-3.0,566,2681,"2023/08/19, 14:35:03",82,451,543,88,67,23,0.0,0.32607666824420256,"2022/11/03, 18:57:14",2.19.11,5,19,false,,false,false,,,https://github.com/OpenLitterMap,https://openlittermap.com,"Cork, Ireland",,,https://avatars.githubusercontent.com/u/62770201?v=4,,,
Recyclebot,An open source waste plastic extruder that creates 3D printer filament from waste plastic and natural polymers.,,,custom,,Circular Economy and Waste,,,,,,,,,,https://www.appropedia.org/Recyclebot,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,
pycirk,Model Circular Economy policy and technological interventions in Environmentally Extended Input-Output Analysis.,CMLPlatform,https://github.com/CMLPlatform/pycirk.git,github,"circular-economy,sustainability,eeioa,environmental-modelling,environmental-science,scenario-creator,economics-models",Circular Economy and Waste,"2021/12/07, 09:54:42",16,0,4,false,Python,CML-IE-EB,CMLPlatform,"Python,Makefile",https://pycirk.readthedocs.io/en/latest/readme.html,"b'# pycirk

_A python package to model Circular Economy policy and technological interventions in Environmentally Extended Input-Output Analysis starting from SUTs (EXIOBASE V3.3)_

[![DOI](https://zenodo.org/badge/157891556.svg)](https://zenodo.org/badge/latestdoi/157891556)
[![License: GPL v3](https://img.shields.io/badge/License-GPL%20v3-blue.svg)](https://www.gnu.org/licenses/gpl-3.0)
[![Contributions welcome](https://img.shields.io/badge/contributions-welcome-brightgreen.svg)](resources/docs/CONTRIBUTING.md)

Documentation: https://pycirk.readthedocs.io/en/latest/readme.html

To cite the use of the software in your research, please use the following publication:

"Modeling the circular economy in environmentally extended input-output tables: Methods, software and case study"

https://doi.org/10.1016/j.resconrec.2019.104508

## Installation

### Stable release

Run in your terminal:

	$ pip install pycirk

### From source

Clone the repository:

	$ git clone https://fdonati@bitbucket.org/CML-IE/pycirk.git
or

	$ git clone https://github.com/CMLPlatform/pycirk.git

Once you have a copy of the source, you can install it with:

    $ python setup.py install

### Data

You can download the biregional or multiregional database by following this link:

http://doi.org/10.5281/zenodo.4695823

You need to place the data inside the package, e.g. /home/UserName/.local/lib/python3.6/site-packages/pycirk/data

## Usage

### Import package

	import pycirk

### Initialize

    my_work = pycirk.Launch(method, directory, aggregation, make_secondary)

### Set your scenarios and analysis

1. Open scenarios.xls in the directory that was specified
2. From there you can specify interventions and parameters for the analysis
3. Save and continue to the following steps

### Run scenarios

Run one specific scenario:

    my_work.scenario_results(scen_no, output_dataset)
    (0 = baseline)

Run all scenarios:

    my_work.all_results()

### Save scenarios

Save your results:

    my_work.save_results()
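Putting the documented calls together, a typical session might look like the sketch below. The argument values are illustrative (method 0 = PXP ITA_TC, bi-regional aggregation, as listed in the CLI options); only the call names come from the steps above:

```python
import pycirk

# Illustrative arguments, passed positionally per the documented
# signature: Launch(method, directory, aggregation, make_secondary).
my_work = pycirk.Launch(0, "", 1, False)

# Baseline first (scenario 0), then a policy scenario from scenarios.xls.
my_work.scenario_results(0, False)
my_work.scenario_results(1, False)

# Or run everything and save the results.
my_work.all_results()
my_work.save_results()
```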
### Use from command line

    pycirk --help

Usage: pycirk [OPTIONS]

Console script for pycirk: a software to model policy and technological interventions in Environmentally Extended Input-Output Analysis (EXIOBASE V3.3, 2011).

Options:

| Command | Variables |
|----------------------------|--------------------------------------|
| -tm, --transf_method TEXT | 0 = PXP ITA_TC; 1 = PXP ITA_MSC |
| -dr, --directory TEXT | if left blank it will be default |
| -ag, --aggregation | 1 = bi-regional (EU-ROW) |
| | 0 = None (49 regions) |
| -sc, --scenario TEXT | all, 1, 2,... accepted - 0=baseline |
| -s, --save TEXT | False=no, True=yes |
| -od, --output_dataset | False=no, True=yes |
| --help | Show this message and exit. |

Command example:

    pycirk -tm 0 -dr "" -sc "1" -s True -od False

## Features

Examples of policies that can be modelled through the software:

- sharing
- recycling
- life extension
- rebound effects
- substitution
- market and value added changes
- efficiency

The tables in which it is possible to apply changes:

- total requirement matrix (A)
- intermediate transactions (Z)
- final demand (Y)
- primary inputs (W)

- emission intermediate extensions (E)
- material intermediate extensions (M)
- resource intermediate extensions (R)
- emission final demand extension (EY)
- material final demand extension (MY)
- resource final demand extensions (RY)

- primary inputs coefficients (w)
- emission intermediate extensions coefficients (e)
- material intermediate extensions coefficients (m)
- resource intermediate extensions coefficients (r)
- emission final demand extension coefficients (eY)
- material final demand extension coefficients (mY)
- resource final demand extensions coefficients (rY)

It is possible to specify:

- the region of the intervention
- whether the intervention affects domestic transactions, import transactions, or both

This package was created with Cookiecutter and the `audreyr/cookiecutter-pypackage` project template.

Cookiecutter: https://github.com/audreyr/cookiecutter
audreyr/cookiecutter-pypackage: https://github.com/audreyr/cookiecutter-pypackage
'",",https://zenodo.org/badge/latestdoi/157891556","2018/11/16, 16:12:52",1804,CUSTOM,0,78,"2021/04/19, 12:15:01",5,1,2,0,919,4,0.0,0.0,"2021/04/19, 08:28:11",2.0,0,1,false,,false,true,,,https://github.com/CMLPlatform,,,,,https://avatars.githubusercontent.com/u/29304416?v=4,,,
Global Plastic Navigator,Visualises the most recent and high-resolution data of current scientific publications on marine plastic pollution.,WWF-Deutschland,https://github.com/WWF-Deutschland/marine-plastic-explorer.git,github,,Circular Economy and Waste,"2023/08/16, 13:29:09",4,0,0,true,JavaScript,WWF Deutschland,WWF-Deutschland,"JavaScript,Handlebars,HTML",https://plasticnavigator.wwf.de,b'# Marine Plastic Explorer\n',,"2020/06/09, 12:32:40",1233,CUSTOM,150,1388,"2023/08/16, 13:29:16",3,48,55,26,70,1,0.0,0.1936,,,0,4,false,,true,false,,,https://github.com/WWF-Deutschland,https://www.wwf.de,Berlin/Hamburg/Frankfurt,,,https://avatars.githubusercontent.com/u/58521212?v=4,,,
marine_debris_ML,Marine debris detection with commercial satellite imagery and deep learning.,NASA-IMPACT,https://github.com/NASA-IMPACT/marine_debris_ML.git,github,,Circular Economy and Waste,"2021/11/05, 15:51:29",64,0,11,false,Python,Inter Agency Implementation and Advanced Concepts,NASA-IMPACT,"Python,Jupyter Notebook,Dockerfile,Shell,JavaScript",,"b'# Marine debris detection with commercial satellite imagery and deep learning.

Floating marine debris is a global pollution problem which threatens marine
and human life and leads to the loss of biodiversity. Large swaths of marine debris are also navigational hazards to vessels. Artificial intelligence, specifically deep learning, can be used to detect floating marine debris in satellite imagery. In this project, we seek to demonstrate the strong potential of using commercial small satellite imagery for detecting marine debris pollution and strengthening current and future efforts to clean the oceans. We present an application of a deep learning model designed for object detection in the TensorFlow framework for observing marine debris floating on the surface of the ocean. The model was trained on our custom-labeled dataset of 1370 polygons containing marine debris as observed in [Planetscope optical imagery](https://www.planet.com/products/planet-imagery/). An overall precision score of 0.78 and recall score of 0.70 were obtained on the test dataset.

*Model performance on test images:*

*Paper and dataset forthcoming.*

## Overview

### 1. Data

Planet small satellite imagery is utilized in this study, specifically the 3-meter imagery product called Planetscope. This imagery has four bands, namely red, green, blue, and near-infrared. The combination of fairly high spatial resolution, high temporal resolution, availability of a near-infrared channel and global coverage of coastlines made this imagery quite advantageous for the purposes of this research. With these imagery specifications as well as plastic size and ghost fishing net size categories, we anticipated our model would be capable of detecting aggregated debris flotsam as well as some mega plastics, including medium to large size ghost fishing nets.

Using the Planet Explorer, specific image scenes consisting of visible marine debris patches were selected for our training dataset. This step involved manually exploring Planetscope scenes and verifying the presence of marine debris. For this initial study, we decided to focus our efforts on detecting marine debris from optical (red, green, blue) channel imagery. Initial investigation into the utility of the Planetscope near-infrared channel was conducted, and future work will integrate the near-infrared channel.

We used [Image Labeler](https://impact.earthdata.nasa.gov/labeler/) to manually digitize bounding box annotations for observable debris on Planetscope optical imagery. A total of 1370 bounding boxes were labeled on the image scenes. This constituted the initial training, testing and validation dataset for object detection modeling.

The next task was to prepare the dataset in model-ready format, which entailed tiling the image scenes into smaller frames and encoding the bounding boxes into coordinate arrays with numerical class ids. The need for tiling the imagery stems from computational efficiency at model runtime. To accomplish these tasks, we used [Label Maker (LM)](https://github.com/developmentseed/label-maker). We used zoom level 16, as it most closely approximates the native spatial resolution of Planetscope imagery. An example configuration file for use with LM is located at *data_utils/config.json*. Finally, the dataset in compressed array format (.npz) was used to create binary TensorFlow Records datasets.

Tiled image with labels.npz entry. On the right are the bounding box annotation coordinates `[xmin, ymin, xmax, ymax]` and `class ID 1`, with the image array on the bottom:

Tiled images with plotted annotations:
### 2. Model
Our architecture of choice for this project is [SSD Resnet 101 Feature Pyramid Network (FPN)](https://arxiv.org/abs/1708.02002), which we've implemented with the [Tensorflow Object Detection API](https://github.com/tensorflow/models/tree/master/research/object_detection). We employed a weighted sigmoid focal loss and transfer learning for our baseline model from a [pre-trained resnet 101 checkpoint](http://download.tensorflow.org/models/object_detection/ssd_resnet101_v1_fpn_shared_box_predictor_oid_512x512_sync_2019_01_20.tar.gz) hosted on the Tensorflow model zoo. Our best model currently performs with a test F1 score of 0.74.

After training is complete, we export the best model to [TensorFlow serving format](https://www.tensorflow.org/tfx/guide/serving), package the trained model weights and inference code into a [Docker](https://www.docker.com/) image and deploy at scale through our inference pipeline (shown below).

For inference, we use the [Planet tile endpoint](https://developers.planet.com/docs/basemaps/tile-services/) to request a list of [XYZ tiles](https://developers.planet.com/planetschool/xyz-tiles-and-slippy-maps/) for a given area of interest and time range. We send that list of tiles via [SQS](https://aws.amazon.com/sqs/) to our inference endpoint, and once deployed, we can inference at a rate of 3000 tiles of size 256x256 pixels per minute. The results written to the database include, for each XYZ tile, the original Planet image scene ID and XYZ tile name (containing the x coordinate, y coordinate and zoom level) and one or more bounding box coordinates, class values and confidence scores. We use the Python utility [Mercantile](https://github.com/mapbox/mercantile) to translate the XYZ coordinates to latitude and longitude coordinates and, finally, export the final predictions with a minimum confidence threshold to GeoJSON format. The GeoJSON files are used for display in an online dashboard.

Scaled model inference pipeline:
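A small sketch of that XYZ-to-geographic translation step using Mercantile. The tile indices and pixel boxes are hypothetical, and the within-tile interpolation is a linear approximation (an exact conversion would interpolate in Web Mercator coordinates):

```python
import mercantile

# Hypothetical XYZ tile name at zoom 16 (x, y, zoom), as written to the DB.
x, y, z = 54657, 27479, 16

# Geographic bounds of the tile, as used before exporting to GeoJSON.
b = mercantile.bounds(x, y, z)
print(b.west, b.south, b.east, b.north)

# A detected box in tile pixel coordinates (256x256 tile), mapped to
# lon/lat by linear interpolation within the tile bounds (approximate).
xmin_px, ymin_px, xmax_px, ymax_px = 60, 80, 120, 140
lon_min = b.west + (b.east - b.west) * xmin_px / 256
lon_max = b.west + (b.east - b.west) * xmax_px / 256
lat_max = b.north - (b.north - b.south) * ymin_px / 256
lat_min = b.north - (b.north - b.south) * ymax_px / 256
print(lon_min, lat_min, lon_max, lat_max)
```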
## Implementation

### 1. Model training and inference

We recommend creating a python 3.6+ virtual environment for this project. You can use [pyenv-virtualenv](https://github.com/pyenv/pyenv-virtualenv) to do so.

Install these Tensorflow versions in the activated environment:

```
tensorboard==1.14.0
tensorboard-plugin-wit==1.6.0.post3
tensorflow-estimator==1.14.0
tensorflow-gpu==1.14.0
```

### 2. Setup TensorFlow Object Detection API

#### 2a. Install TensorFlow object detection:
- Download the necessary scripts with `git clone https://github.com/tensorflow/models.git`
- Install the TensorFlow Object Detection API by strictly following [these instructions](https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/tf1.md). Once you've successfully run `python object_detection/builders/model_builder_test.py` you are ready for the next step.
- To access the necessary utility scripts, you'll need to run all the following commands from the `models/research/object_detection` directory of the cloned repo. **From here on we will refer to the TensorFlow Object Detection directory `models/research/object_detection/` as the TOD directory.**

You could also work from this [codebase](https://github.com/NASA-IMPACT/marine_litter_ML/tree/main/object_detection_api) as a stable implementation with the above listed TF library versions. Just ensure that the repo folder is set as `models/research/object_detection/`.

### 3. Create TFRecords for model training
The Tensorflow API supports a variety of file formats. The TFRecord file format is a simple record-oriented binary format that many TensorFlow applications use. We have example code in this repo which converts the `labels.npz` file to a TFRecords file:

- Copy [`utils_convert_tfrecords.py` from this repo](https://github.com/NASA-IMPACT/marine_litter_ML/blob/main/data_utils/utils_convert_tfrecords.py) to the TOD directory.
- Your $folder will be the `data` path containing your `labels.npz` file and `tiles`.
- From the TOD directory run:

```shell
python3 utils_convert_tfrecords.py \
    --label_input=$folder/labels.npz \
    --data_dir=tf_records \
    --tiles_dir=$folder/tiles \
    --pbtxt=classes.pbtxt
```
This will create `train.record`, `val.record` and `test.record` files in a folder called `tf_records` in the TOD directory. Each record file contains a different, non-overlapping partition of the data (86%, 7% and 7%, respectively).

### 4. Object detection model setup
Now we're ready to set up the model architecture. For this walkthrough, we'll download a pre-trained model from the [TensorFlow model zoo](https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/tf1_detection_zoo.md). We'll demonstrate using [`ssd_resnet_101_fpn_oidv4`](http://download.tensorflow.org/models/object_detection/ssd_resnet101_v1_fpn_shared_box_predictor_oid_512x512_sync_2019_01_20.tar.gz) (download link):
 - Download the model, unzip, and move the folder to the TOD directory
 - Create a new folder `training` in the TOD directory.
 - Copy a [model configuration file](https://github.com/NASA-IMPACT/marine_litter_ML/blob/main/configs/ssd_resnet101_v1_fpn_marine_debris.config) to the `training` directory.
 - Copy a [class definitions file](https://github.com/NASA-IMPACT/marine_litter_ML/blob/main/configs/marine_debris.pbtxt) to the `data` directory.

Now your current directory should be `models/research/object_detection/` and, in addition to the files included in that repo originally, your folder structure should look like this:

```
models/research/object_detection/
├── ssd_resnet101_v1_fpn_multilabel/
├── training/
│   └── ssd_resnet101_v1_fpn_marine_debris.config
├── data/
│   ├── train.record
│   ├── val.record
│   ├── test.record
│   ├── marine_debris.pbtxt
└───
```

### 5. Train the TensorFlow object detection model
You are now ready to train the model. From the `models/research/` directory, run:

```shell
#!/usr/bin/env bash
pyenv activate tf114_od
export PYTHONPATH=$PYTHONPATH:`pwd`:`pwd`/slim
cd object_detection
export CUDA_VISIBLE_DEVICES=0
python model_main.py --alsologtostderr --model_dir=training/ --pipeline_config_path=training/ssd_resnet101_v1_fpn_multilabel.config
```

The model checkpoints and outputs for this task will save in the `training` folder.
Visualize the Model\nUsing this [script](https://github.com/NASA-IMPACT/marine_litter_ML/tree/main/object_detection_api/export_inference_graph.py), create the marine debris detection model inference graph with:\n\n```shell\npython export_inference_graph.py --input_type image_tensor \\\n --pipeline_config_path training/ssd_resnet101_v1_fpn_multilabel.config \\\n --trained_checkpoint_prefix training/model.ckpt-500000 \\\n --output_directory model_50k\n```\nWe can visualize this graph using [`tensorboard`](https://github.com/tensorflow/tensorboard):\n\n```shell\ntensorboard --logdir=\'training\'\n```\n\nGo to `http://127.0.0.1:6006/` in your web browser and you will see:\n\n
*(TensorBoard screenshots.)*
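You can also sanity-check the export outside of TensorBoard. A minimal sketch, assuming the export step above wrote `model_50k/frozen_inference_graph.pb` (the standard output name of `export_inference_graph.py`):

```python
import tensorflow as tf

# Parse the exported frozen graph (TF 1.x format) and import it,
# just to confirm the export is intact.
graph_def = tf.compat.v1.GraphDef()
with tf.io.gfile.GFile("model_50k/frozen_inference_graph.pb", "rb") as f:
    graph_def.ParseFromString(f.read())

with tf.Graph().as_default() as graph:
    tf.import_graph_def(graph_def, name="")
print(len(graph.get_operations()), "ops loaded")
```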
\n\n### 7. Prediction\nNow let\'s run the model over our test tiles to predict where marine debris patches are. Copy [this script](https://github.com/NASA-IMPACT/marine_litter_ML/blob/v0_2/inference_utils/tf_od_predict_image_aug_to_geo_corrected.py) to the TOD directory, then run:\n\n```shell\npython tf_od_predict_image_aug_to_geo_corrected.py --model_name=model_50k \\\n --path_to_label=data/marine_debris.pbtxt \\\n --test_image_path=path/to/test/image/tiles\n```\nThis code will read through all your test images in the `path/to/test/image/tiles` folder and output the final predictions into the same folder. You will find new images in `test_image_path` with `_test` suffixed to the end of the file basenames. These are images with the predicted bounding boxes and confidence scores plotted on top. You will also find a multipolygon geojson of predicted bounding boxes in the `test_image_path`.\n\nAlternatively, using environment variables for the flags:\n\n```\nexport base_dir=models/research/object_detection\nexport EXPORT_DIR=models/research/object_detection/model_50k\npython3 ${base_dir}/tf_od_predict_image_aug_to_geo_corrected.py --model_name=${EXPORT_DIR} --path_to_label=${base_dir}/marine_debris.pbtxt --test_image_path=${base_dir}/test/\n```\n\nDetections geo-registered and vectorized to GeoJSON format:\n\n\n### 8. Evaluation\nYou can use the [code in the `evaluation_utils` folder](https://github.com/NASA-IMPACT/marine_litter_ML/tree/main/evaluation_utils) to compute standard evaluation metrics with your model. Runtime and background instructions live [here](https://github.com/NASA-IMPACT/marine_litter_ML/tree/main/evaluation_utils/evaluation.md).'",",https://arxiv.org/abs/1708.02002","2021/03/05, 17:32:41",964,CUSTOM,0,79,"2021/09/13, 16:18:13",2,10,22,0,772,0,0.0,0.030303030303030276,,,0,2,false,,false,false,,,https://github.com/NASA-IMPACT,,,,,https://avatars.githubusercontent.com/u/22798984?v=4,,, ADVECTOR,A whole-ocean marine debris transport model which is built to handle millions of particles and terabytes of data.,TheOceanCleanupAlgorithms,https://github.com/TheOceanCleanupAlgorithms/ADVECT.git,github,,Circular Economy and Waste,"2022/04/12, 04:14:46",14,0,2,false,Python,The Ocean Cleanup,TheOceanCleanupAlgorithms,"Python,C,Jupyter Notebook",,"b""# ADVECT V1.0\nADVECT is a whole-ocean marine debris transport model which is built to handle millions of particles and terabytes of data. It models the transport of debris based on its size, shape, and density, and simulates basic physical processes including 3D ocean current-driven advection, wind-driven drift, wind-driven near-surface vertical mixing, buoyancy-driven vertical transport, and eddy-driven diffusion. It automatically processes forcing datasets arbitrarily larger than memory capacity, and supports fully parallelized computation on CPUs and GPUs via OpenCL.\n\n## Model Description\nADVECT contains solvers (kernels) for two domains: ocean surface, and whole-ocean.\n* 2D kernel: model domain is constrained to the surface of the ocean (assumption: floating debris), and debris particles are idealized, with no consideration of their size/shape/density.\n* 3D kernel: model domain is the whole ocean, from surface to bathymetry, and physical processes depend on the size/shape/density of debris.\n\n### 2D Kernel\nEach particle is released at some location in space and time. 
Upon release, each particle is transported according to the following physical processes:\n#### Surface ocean current-driven advection\nParticles are transported in a time-evolving 2D velocity field of surface ocean currents, which the user must provide. The particles are advected according to one of two schemes: forward-Euler, or a second-order Taylor-expansion scheme which corrects for the outward-drift error the Euler method experiences in a curved field.\n#### Wind-driven drift\nOptionally, the user may provide a time-evolving 2D velocity field of 10-meter wind, which will move particles according to a user-provided windage coefficient.\n#### Eddy-driven diffusion\nFinally, the user may specify a constant eddy diffusivity, which will add random noise to the particle's movements. This simulates the effect of eddies smaller than the spatial resolution of the ocean currents.\n#### Boundary processes\nThe model domain only includes the surface waters of the ocean (as defined by the non-null region in the ocean current vectorfield); particles cannot leave this domain, and thus the model does not include beaching. Instead, when particles are pushed against a coastline, their onshore displacement component is cropped to keep them in the model domain, generally resulting in a lateral displacement, as if the boundary was frictionless.\n\n### 3D Kernel\nEach particle is initialized with a size, shape, density, release date, and release location. Upon release, particles are transported according to the following physical processes:\n#### 3D ocean current-driven advection\nParticles are transported according to a time-evolving 3D velocity field of ocean currents, which the user provides. The particles are advected according to one of two schemes: forward-Euler, or a 3D adaptation of the second-order Taylor-expansion scheme from the 2D kernel.\n#### Buoyancy-driven transport\nThe user must provide a time-evolving 3D dataset containing the density of seawater in the ocean domain. Particles are transported vertically according to their terminal sinking (or rising) velocity, which is calculated using their size, shape, and density, as well as the density of the surrounding seawater.\n#### Wind-driven drift\nOptionally, the user may provide a time-evolving 2D velocity field of 10-meter wind, which will move particles floating at the surface based on a parameterization which depends on their emerged surface area. The user may optionally provide a multiplier which scales this drift, for the sake of experimentation.\n#### Wind-driven vertical mixing\nIf wind is provided, the user may optionally enable the simulation of wind-driven vertical mixing. Mixing transport is based on an equilibrium between a particle's rising velocity and the size of ocean waves (estimated from wind).\n#### Eddy-driven diffusion\nFinally, the user may specify a vertical profile of vertical and horizontal eddy diffusivities, which will add noise to the particle's movements according to its depth. This simulates eddies smaller than the spatial resolution of the ocean currents, and allows the user the flexibility to account for the depth-dependent nature of eddy diffusivity in the world's oceans.\n#### Boundary processes\nThe model domain only includes the waters of the ocean above bathymetry (as defined by the non-null region in the ocean current vectorfield); particles cannot leave this domain, and thus the model does not include beaching or sedimentation. 
Instead, when particles are pushed against coastline/bathymetry, their out-of-domain displacement components are cropped to keep them in the model domain. This is the 3D analog of the frictionless coastlines used in the 2D kernel, and similarly allows particles to travel parallel to domain boundaries.\n\n## Installation Instructions\n1. Install [miniconda](https://docs.conda.io/en/latest/miniconda.html) (if you don't already have it) to manage dependencies. If you are not already familiar with conda, what it is, and what it's for, you should read up [here](https://docs.conda.io/projects/conda/en/latest/user-guide/getting-started.html).\n2. Install ADVECT as a package by opening a terminal and running\n ```\n conda create -n advect # create a fresh conda environment\n conda activate advect # enter the environment\n conda install pip # install pip (python package manager) into this environment\n pip install ADVECTOR # install ADVECTOR from a repository in ~the cloud~\n ```\n3. Acquire forcing data\n\n Run ```ADVECTOR_download_sample_data``` and follow the prompts.\n4. Run example advection\n\n Run ```ADVECTOR_examples_2D``` or ```ADVECTOR_examples_3D``` and follow the prompts to see it in action!\n \n## Using ADVECT in your own programs\n\nThe key entry-point scripts to ADVECT are `ADVECTOR/run_advector_2D.py` and `ADVECTOR/run_advector_3D.py`. Those files include documentation on all their respective arguments. There are also supplementary documentation files in the `documentation` folder; you'll want to read all of these carefully to understand what you can/can't feed into ADVECT, and what it'll give you back.\n\nIn short, your script will look something like:\n```\nfrom ADVECTOR.run_advector_2D import run_advector_2D # similarly for run_advector_3D\noutputfile_paths = run_advector_2D()\n```\nThat's it!\n\nIf you need information on the arguments and don't want to refer directly to the source code, just open an interactive python prompt, import the runner as above, then run `help(run_advector_2D)`.\n\nAs a general strategy, you can pretty much copy the structure of `ADVECTOR/examples/ECCO_advect_2D.py` or `ADVECTOR/examples/ECCO_advect_3D.py`, providing your own data and generating your own source/config files. The examples exist for your reference!\n\n## Extra: the INTEGRATOR\n\n3D ocean model output generally only includes the zonal/meridional current velocity; ADVECT comes bundled with a tool called the INTEGRATOR which can generate vertical velocity fields from zonal/meridional velocity fields, using the continuity equation. Check out `INTEGRATOR/README.md` for more information. Currently it doesn't install via pip, so you'll need to clone this repository and run the files directly.\n\n
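For orientation, the relationship the INTEGRATOR exploits is the incompressible continuity equation: integrating the horizontal divergence of the currents over depth yields the vertical velocity. A sketch of the math only (see `INTEGRATOR/README.md` for the actual discretization and boundary conditions):

```latex
\frac{\partial u}{\partial x} + \frac{\partial v}{\partial y} + \frac{\partial w}{\partial z} = 0
\quad\Longrightarrow\quad
w(z) = w(z_0) - \int_{z_0}^{z}\left(\frac{\partial u}{\partial x} + \frac{\partial v}{\partial y}\right)\,dz'
```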
### Hardware compatibility\nAt this time, ADVECT only has known support for CPUs/GPUs with OpenCL driver versions 1.1, 1.2, and 2.1. If you are getting OpenCL/GPU related errors, you can run this in a python prompt to directly check your driver version, as that could be the problem:\n ```\n import pyopencl\n print(pyopencl.create_some_context(interactive=True).devices[0].driver_version)\n ```\n Follow the instructions to select a compute device, and its driver version will be displayed.\n""",,"2020/09/10, 15:55:27",1140,MIT,0,258,"2022/04/12, 04:14:49",4,111,128,0,561,2,0.3,0.1187214611872146,"2022/02/27, 19:38:51",V1.0.2,0,2,false,,false,false,,,https://github.com/TheOceanCleanupAlgorithms,https://theoceancleanup.com,"Rotterdam, The Netherlands",,,https://avatars.githubusercontent.com/u/29291389?v=4,,, Surfrider Plastic Origins,A citizen science project that uses AI to map plastic pollution in European rivers and share its data publicly.,surfriderfoundationeurope,https://github.com/surfriderfoundationeurope/The-Plastic-Origins-Project.git,github,"ngo,ngo-data,trash,detection-model,citizen-science",Circular Economy and Waste,"2023/05/03, 14:59:23",21,0,4,true,,Surfrider Foundation Europe,surfriderfoundationeurope,,https://plasticorigins.eu/,"b'
# Surfrider Plastic Origins
Proudly powered by SURFRIDER Foundation Europe, the PLASTIC ORIGINS project is a citizen science project that uses AI to map plastic pollution in European rivers and share its data publicly. Check below to learn more about the project\'s initiatives and how you can get involved. Please consider starring :star: the project\'s repositories to show your interest and support. We rely on YOU to make this project a success, and we thank you in advance for your contributions.

\n\n_________________\n\n[Plastic Origins](https://www.plasticorigins.eu/) is a project initiated by the not-for-profit [Surfrider Foundation Europe](https://surfrider.eu/) in search of solutions to address the source of ocean pollution. Supported by [citizens, associations, scientists, and socially responsible companies](https://www.plasticorigins.eu/team), the Plastic Origins project aims to map plastic pollution in European rivers by applying participatory science and Artificial Intelligence technology. This mapping data helps to identify areas of high impact, gain a deeper understanding of the problem in the field, and measure the evolution of pollution over time, as well as to raise awareness among political decision-makers at the local, national, and European levels and urge them to act. It\'s a **hot topic**, and here is why:\n\n1. Plastic pollution is the second biggest threat to our Ocean after climate change.\n2. Ocean plastic pollution mainly comes from terrestrial sources. Rivers are pathways for litter entering the ocean. For instance, a bottle lost in Paris might end up in the Seine River and reach the sea.\n3. Investigating river plastic pollution helps to localize plastic inputs and monitor trends, improvements, and the efficiency of counter-measures.\n4. There is currently no obligation for European countries to monitor river plastic pollution. The European water framework directive does not include plastic pollution as an indicator of good environmental status.\n\n\n## All you need to know to get involved\n\nDevelopments and improvements of the Surfrider Plastic Origins tech tools are led by a bunch of amazing volunteers. Surfrider Europe does not have these technical competencies in-house, so we rely on YOU to make this project a success, and we welcome any help, since there are many ways to contribute, even if you\'re not a technical person.\n\nOnly with a common effort can we refine the technology, improve it, and duplicate it, which is why the code is open-source on the [Surfrider Foundation Europe GitHub page](https://github.com/surfriderfoundationeurope), and the collected data is available online and publicly accessible on the [Plastic Origins website](https://www.plasticorigins.eu/). 
\n\n\n### How does it work?\n\n* Volunteers film riverbanks via the Plastic Origins mobile application, available on [Android](https://play.google.com/store/apps/details?id=com.plasticorigins&hl=fr&gl=US) & [iOS](https://apps.apple.com/fr/app/plastic-origins/id1532710998) (collection of videos and manually labeled images), or by using a GoPro (collection of videos uploaded to the [Plastic Origins website](https://www.plasticorigins.eu/)).\n* Our [Data Labeling Platform](https://www.trashroulette.com/) allows building a labeled image dataset via crowdsourcing (people tag an existing image dataset and/or contribute to this dataset by uploading images), which helps to improve our AI waste detection and tracking predictions.\n* AI recognizes and classifies waste in the collected videos to produce an open-source map; the resulting data are published as #OpenData.\n\n\n### How we work\n\nWe use the following tools for project management and dev:\n\n* [Microsoft Teams](https://teams.microsoft.com/l/team/19%3aa3a655a6dce949ed9fe1a6db4e2d6a95%40thread.skype/conversations?groupId=c312e78d-ae29-4ec0-9cd6-1f9999167ebe&tenantId=1fb581f3-43ef-4dfd-b8f7-4ca7bc32ec24) -> for discussions.\n* [Azure portal](https://portal.azure.com/#home) -> for building, testing and deploying.\n* [GitHub](https://github.com/surfriderfoundationeurope) -> for storing code, its documentation, and Architectural Decision Records (ADR).\n\nOur development language is English. All comments and documentation should be written in English so we can share our learnings with developers around the world.\n\n
\n\n\n### Repo organisation\n\n| ID | Repository | Description | RG-Azure | Maintainers |\n| -- | ---------- | ----------- | -------- |------------ |\n| GEN | [The-Plastic-Origins-Project](https://github.com/surfriderfoundationeurope/The-Plastic-Origins-Project) | You are currently in this repo, which holds the Plastic Origins project description. | |[@SabineAllouSurfrider](https://github.com/sabineallousurfrider) |\n| APP | [App-Plastic-Origins](https://github.com/surfriderfoundationeurope/App-Plastic-Origins) `Private` | `WIP` - Plastic Origins mobile app (available on [Android](https://play.google.com/store/apps/details?id=com.plasticorigins&hl=fr&gl=US) & [iOS](https://apps.apple.com/fr/app/plastic-origins/id1532710998)). | |[@AlexisReverte](https://github.com/AlexisReverte) / [@LoicLouvet](https://github.com/loiclouvet)|\n| Data LP | [labelcv-web](https://github.com/surfriderfoundationeurope/labelcv-web) | Frontend of our [Data Labeling Platform](https://www.trashroulette.com/). | [Dev](https://portal.azure.com/#@surfrider.eu/resource/subscriptions/ad31bdcf-3e05-41d4-857b-ae5b767fc8cd/resourceGroups/rg-plastic-labellingplatform-dev) [Prod](https://portal.azure.com/#@surfrider.eu/resource/subscriptions/ad31bdcf-3e05-41d4-857b-ae5b767fc8cd/resourceGroups/rg-labellingplatform-prod) |[@cmaneu](https://github.com/cmaneu) / [@bertrandlalo](https://github.com/bertrandlalo)|\n| API LP | [LabelCV](https://github.com/surfriderfoundationeurope/LabelCV) | `WIP` - Backend of our [Data Labeling Platform](https://www.trashroulette.com/). | |[@cmaneu](https://github.com/cmaneu)|\n| API | [po-mobile-backend](https://github.com/surfriderfoundationeurope/po-mobile-backend) | Plastic Origins \'all in one\' backend - API for data upload from the app or website. |[Dev](https://portal.azure.com/#@surfrider.eu/resource/subscriptions/ad31bdcf-3e05-41d4-857b-ae5b767fc8cd/resourceGroups/rg-plastico-mobilebackend-dev) [Prod](https://portal.azure.com/#@surfrider.eu/resource/subscriptions/ad31bdcf-3e05-41d4-857b-ae5b767fc8cd/resourceGroups/rg-plastico-mobilebackend-prod) |[@cmaneu](https://github.com/cmaneu) / [@benzinamohamedelyes](https://github.com/benzinamohamedelyes) / [@Vincent-Guiberteau](https://github.com/Vincent-Guiberteau)|\n| Data MGT | [etl](https://github.com/surfriderfoundationeurope/etl) | ETL script used to send videos to the AI, read results, and write to the DB. | [Dev](https://portal.azure.com/#@surfrider.eu/resource/subscriptions/ad31bdcf-3e05-41d4-857b-ae5b767fc8cd/resourceGroups/rg-plastico-etl-dev) [Prod](https://portal.azure.com/#@surfrider.eu/resource/subscriptions/ad31bdcf-3e05-41d4-857b-ae5b767fc8cd/resourceGroups/rg-plastico-etl-prod) |[@cl3m3nt](https://github.com/cl3m3nt)|\n| AI | [MOT](https://github.com/surfriderfoundationeurope/mot) | AI model currently used to detect trash in videos. | [Dev](https://portal.azure.com/#@surfrider.eu/resource/subscriptions/ad31bdcf-3e05-41d4-857b-ae5b767fc8cd/resourceGroups/rg-plastic-ai-dev) [Prod](https://portal.azure.com/#@surfrider.eu/resource/subscriptions/ad31bdcf-3e05-41d4-857b-ae5b767fc8cd/resourceGroups/rg-plastic-ai-prod)|[@charlesollion](https://github.com/charlesollion) / [@mchagneux](https://github.com/mchagneux)|\n| DB MGT | [plasticorigins-ops-db](https://github.com/surfriderfoundationeurope/plasticorigins-ops-db) | All scripts related to our PostgreSQL database. | [Dev](https://portal.azure.com/#@surfrider.eu/resource/subscriptions/ad31bdcf-3e05-41d4-857b-ae5b767fc8cd/resourceGroups/rg-plastico-database-dev) [Prod](https://portal.azure.com/#@surfrider.eu/resource/subscriptions/ad31bdcf-3e05-41d4-857b-ae5b767fc8cd/resourceGroups/rg-plastico-database-prod) |[@ChristopheHvd](https://github.com/ChristopheHvd) / [@cmaneu](https://github.com/cmaneu)|\n| BI DB | [fillbidatabase](https://github.com/surfriderfoundationeurope/fillbidatabase) | Code for the recurring job that fills the BI database. |[Dev](https://portal.azure.com/#@surfrider.eu/resource/subscriptions/ad31bdcf-3e05-41d4-857b-ae5b767fc8cd/resourceGroups/rg-plastico-database-dev) [Prod](https://portal.azure.com/#@surfrider.eu/resource/subscriptions/ad31bdcf-3e05-41d4-857b-ae5b767fc8cd/resourceGroups/rg-plastico-database-prod) |[@ChristopheHvd](https://github.com/ChristopheHvd) / [@clembac](https://github.com/clembac) / [@MaxLemarchand](https://github.com/MaxLemarchand) |\n| API DB | [api-plastic-origins](https://github.com/surfriderfoundationeurope/api-plastic-origins) `Private` | `WIP` - API that provides access to our data for cartographic visualization. |[Dev](https://portal.azure.com/#@surfrider.eu/resource/subscriptions/ad31bdcf-3e05-41d4-857b-ae5b767fc8cd/resourceGroups/rg-plastico-APIdb-dev) [Prod](https://portal.azure.com/#@surfrider.eu/resource/subscriptions/ad31bdcf-3e05-41d4-857b-ae5b767fc8cd/resourceGroups/rg-plastico-APIdb-prod)|[@AntoineGirard](https://github.com/AntoineGirard)|\n| Data VIZ | [Plastic-origin](https://github.com/surfriderfoundationeurope/plastic-origin) `Private`| `WIP` - Frontend of our [Plastic Origins website](https://www.plasticorigins.eu/). | [Dev](https://portal.azure.com/#@surfrider.eu/resource/subscriptions/ad31bdcf-3e05-41d4-857b-ae5b767fc8cd/resourceGroups/rg-plastico-publicwebsite-dev) [Prod](https://portal.azure.com/#@surfrider.eu/resource/subscriptions/ad31bdcf-3e05-41d4-857b-ae5b767fc8cd/resourceGroups/rg-plastico-publicwebsite-prod)|[@YoanDo](https://github.com/YoanDo) / [@BmnQuentin](https://github.com/BmnQuentin)|\n| CMS | [plastic-origin-web-cms](https://github.com/surfriderfoundationeurope/plastic-origin-web-cms) `Private`| `WIP` - Headless CMS for our [Plastic Origins website](https://www.plasticorigins.eu/). | [Dev](https://portal.azure.com/#@surfrider.eu/resource/subscriptions/ad31bdcf-3e05-41d4-857b-ae5b767fc8cd/resourceGroups/rg-plastico-publicwebsite-dev) [Prod](https://portal.azure.com/#@surfrider.eu/resource/subscriptions/ad31bdcf-3e05-41d4-857b-ae5b767fc8cd/resourceGroups/rg-plastico-publicwebsite-prod)|[@YoanDo](https://github.com/YoanDo)|\n\n\n### Ready? Get involved!\n\nPlease get in touch with [@SabineAllouSurfrider](https://github.com/sabineallousurfrider).\n\n\n\n## Contributors\n\nThanks everyone!\n\n\n[![All Contributors](https://img.shields.io/badge/all_contributors-35-orange.svg?style=flat-square)](#contributors-)\n\n
Clemsurfrider 🎨 · ChristopheHvd 💻 · clément baccar 💻 · Raphaëlle Bertrand-Lalo 💻 · Christopher MANEU 💻 · Raphaël Courivaud 💻 · charlesollion 💻 · cl3m3nt 💻 · Félix Voituret 💻 · lucasrymenants 💻 · Tekateyy 💻 · Paul Pidou 💻 · Antonin Marchard 💻 · Maxime Lm 🎨 · Adrien Quillet 💻 · jules-larue 💻 · Emilien Garreau 💻 · dprslt 💻 · Guillaume Erhard 💻 · morganeheng 💻 · deleva 💻 · Fred1402 💻 · Jonathan 💻 · Yoan 💻 · Antoine 💻 · Thomas ROBERT 💻 · lise-deguilhem 💻 · Antoine Bruge 📆 · iuliia.kozitska 📆 · Alexis Reverte 💻 · Antoine Girard 💻 · benzinamohamedelyes 💻 · francis-valla 💻 · mchagneux 💻 · JR 💻

(💻 = code, 🎨 = design, 📆 = event organization)
\n\nThanks also go to people who are not on GitHub but are still actively contributing to the project\'s success.\n\n\n## License\n\nWe\'re using the `MIT` License. For more details, check the [`LICENSE`](https://github.com/surfriderfoundationeurope/The-Plastic-Origins-Project/blob/master/LICENSE) file.\n'",,"2020/03/04, 13:47:44",1330,MIT,4,176,"2021/05/07, 09:57:07",1,51,54,0,901,0,0.2,0.3706896551724138,,,0,4,false,,false,false,,,https://github.com/surfriderfoundationeurope,https://surfrider.eu/,"Biarritz, France",,,https://avatars.githubusercontent.com/u/51789219?v=4,,, MARIDA,A marine debris-oriented dataset on Sentinel-2 satellite images.,marine-debris,https://github.com/marine-debris/marine-debris.github.io.git,github,"deep-learning,semantic-segmentation,classification,marine-litter",Circular Economy and Waste,"2022/07/20, 11:32:31",31,0,13,true,Python,MARIDA,marine-debris,"Python,QML",,"b'![Marine Debris Archive Logo](./docs/marida_trans.png)\r\n\r\nMarine Debris Archive (MARIDA) is a marine debris-oriented dataset on Sentinel-2 satellite images. \r\nIt also includes various sea features that co-exist.\r\nMARIDA is primarily focused on the weakly supervised pixel-level semantic segmentation task.\r\nThis repository hosts the basic tools for the extraction of spectral signatures\r\n as well as the code for the reproduction of the baseline models.\r\n \r\nIf you find this repository useful, please consider giving a star :star: and citation:\r\n > Kikaki K, Kakogeorgiou I, Mikeli P, Raitsos DE, Karantzalos K (2022) MARIDA: A benchmark for Marine Debris detection from Sentinel-2 remote sensing data. PLoS ONE 17(1): e0262247. https://doi.org/10.1371/journal.pone.0262247\r\n\r\nIn order to download MARIDA go to https://doi.org/10.5281/zenodo.5151941.\r\n\r\nAlternatively, MARIDA can be downloaded from the [Radiant MLHub](https://mlhub.earth/data/marida_v1). 
The `tar.gz` archive file downloaded from this source includes the STAC catalog associated with this dataset.\r\n\r\n\r\n## Contents\r\n\r\n- [Installation](#installation)\r\n\t- [Installation Requirements](#installation-requirements)\r\n\t- [Installation Guide](#installation-guide)\r\n- [Getting Started](#getting-started)\r\n\t- [Dataset Structure](#dataset-structure)\r\n\t- [Spectral Signatures Extraction](#spectral-signatures-extraction)\r\n\t- [Weakly Supervised Pixel-Level Semantic Segmentation](#weakly-supervised-pixel-Level-semantic-segmentation)\r\n\t\t- [Unet](#unet)\r\n\t\t- [Random Forest](#random-forest)\r\n\t- [Multi-label Classification](#multi-label-classification)\r\n\t\t- [ResNet](#resnet)\r\n- [MARIDA - Exploratory Analysis](https://marine-debris.github.io/)\r\n- [Talks and Papers](#talks-and-papers)\r\n\r\n\r\n## Installation\r\n\r\n### Installation Requirements\r\n- python == 3.7.10\r\n- pytorch == 1.7 \r\n- cudatoolkit == 11.0 (For GPU usage, compute capability >= 3.5)\r\n- gdal == 2.3.3\r\n- rasterio == 1.0.21\r\n- scikit-learn == 0.24.2\r\n- numpy == 1.20.2\r\n- tensorboard == 1.15\r\n- torchvision == 0.8.0\r\n- scikit-image == 0.18.1\r\n- pandas == 1.2.4\r\n- pytables == 3.6.1\r\n- tqdm == 4.59.0\r\n\r\n\r\n### Installation Guide\r\n\r\nThe requirements are easily installed via\r\n[Anaconda](https://www.anaconda.com/distribution/#download-section) (recommended):\r\n```bash\r\nconda env create -f environment.yml\r\n```\r\n> If the following error occurs: InvalidVersionSpecError: Invalid version spec: =2.7\r\n>\r\n> Run: conda update conda\r\n\r\nAfter the installation is completed, activate the environment:\r\n```bash\r\nconda activate marida\r\n```\r\n\r\n## Getting Started\r\n\r\n### Dataset Structure\r\n\r\nIn order to train or test the models, download [MARIDA](https://doi.org/10.5281/zenodo.5151941)\r\nand extract it in the `data/` folder. The final structure should be:\r\n\r\n    .\r\n    ├── ...\r\n    ├── data                         # Main Dataset folder\r\n    │   ├── patches                  # Folder with patches structured by unique dates and S2 tiles\r\n    │   │   ├── S2_DATE_TILE         # Unique Date\r\n    │   │   │   ├── S2_DATE_TILE_CROP.tif       # Unique 256 x 256 Patch\r\n    │   │   │   ├── S2_DATE_TILE_CROP_cl.tif    # 256 x 256 Classification Mask for Semantic Segmentation Task\r\n    │   │   │   └── S2_DATE_TILE_CROP_conf.tif  # 256 x 256 Annotator Confidence Level Mask\r\n    │   │   └── ...\r\n    │   ├── splits                   # Train/Val/Test split Folder (train_X.txt, val_X.txt, test_X.txt)\r\n    │   └── labels_mapping.txt       # Mapping between Unique 256 x 256 Patch and labels for Multi-label Classification Task\r\n\r\nThe mapping in S2_DATE_TILE_CROP_cl between Digital Numbers and Classes is:\r\n\r\n```yaml\r\n1: \'Marine Debris\',\r\n2: \'Dense Sargassum\',\r\n3: \'Sparse Sargassum\',\r\n4: \'Natural Organic Material\',\r\n5: \'Ship\',\r\n6: \'Clouds\',\r\n7: \'Marine Water\',\r\n8: \'Sediment-Laden Water\',\r\n9: \'Foam\',\r\n10: \'Turbid Water\',\r\n11: \'Shallow Water\',\r\n12: \'Waves\',\r\n13: \'Cloud Shadows\',\r\n14: \'Wakes\',\r\n15: \'Mixed Water\'\r\n```\r\n\r\nFor the confidence level mask and other useful mappings, see utils/assets.py.\r\n\r\nAlso, in order to easily visualize the RGB composite of the S2_DATE_TILE_CROP patches via [QGIS](https://qgis.org/en/site/index.html),\r\nyou can use the `utils/qgis_color_patch_rgb.qml` file.\r\n
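As a quick orientation exercise, the classification masks can be inspected directly with `rasterio` (already in the requirements above). A minimal sketch; the patch path is a placeholder, and any digital number outside the documented mapping (e.g. 0) is treated here as unannotated:

```python
import numpy as np
import rasterio

# DN -> class mapping, copied from the README above
CLASSES = {1: 'Marine Debris', 2: 'Dense Sargassum', 3: 'Sparse Sargassum',
           4: 'Natural Organic Material', 5: 'Ship', 6: 'Clouds',
           7: 'Marine Water', 8: 'Sediment-Laden Water', 9: 'Foam',
           10: 'Turbid Water', 11: 'Shallow Water', 12: 'Waves',
           13: 'Cloud Shadows', 14: 'Wakes', 15: 'Mixed Water'}

with rasterio.open('data/patches/S2_DATE_TILE/S2_DATE_TILE_CROP_cl.tif') as src:
    mask = src.read(1)

# Per-class pixel counts for this patch
for dn, n in zip(*np.unique(mask, return_counts=True)):
    print(CLASSES.get(int(dn), 'Unannotated'), n)
```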
\r\n### Spectral Signatures Extraction\r\n\r\nFor the extraction of the spectral signature of each annotated pixel and\r\nits storage in an HDF5 Table file (DataFrame-like processing), run the following command. \r\nThe output `data/dataset.h5` can be used for the spectral analysis of the dataset.\r\nThis stage is also required for the Random Forest training (see [here](#random-forest)),\r\nbut not for the Unet training. This procedure takes approximately 10 minutes.\r\n\r\n```bash\r\npython utils/spectral_extraction.py\r\n```\r\n\r\nAlternatively, you can download the `dataset.h5` file from [here](https://pithos.okeanos.grnet.gr/public/sbT8ASX0HINAdx4tmKCg27) and put it in the `data` folder.\r\nFinally, in order to load the `dataset.h5` with Pandas, run the following in a python cell:\r\n\r\n```python\r\nimport pandas as pd\r\n\r\nhdf = pd.HDFStore(\'./data/dataset.h5\', mode = \'r\')\r\n\r\ndf_train = hdf.select(\'train\')\r\ndf_val = hdf.select(\'val\')\r\ndf_test = hdf.select(\'test\')\r\n\r\nhdf.close()\r\n```\r\n\r\n### Weakly Supervised Pixel-Level Semantic Segmentation\r\n\r\n#### Unet\r\n\r\n**Unet training**\r\n\r\nSpectral Signatures Extraction is not required for this procedure.\r\nFor training on the ""train"" set and evaluation on the ""val"" set with the proposed parameters, run:\r\n\r\n```bash\r\ncd semantic_segmentation/unet\r\npython train.py\r\n```\r\n\r\nWhile training, in order to see the loss status and various metrics via tensorboard, run the following command in a different terminal \r\nand then go to `localhost:6006` with your browser:\r\n\r\n```bash\r\ntensorboard --logdir logs/tsboard_segm\r\n```\r\n
\r\nThe `train.py` script also supports the following argument flags:\r\n\r\n```bash\r\n # Basic parameters\r\n --agg_to_water ""Aggregate Mixed Water, Wakes, Cloud Shadows, Waves with Marine Water (True or False)""\r\n --mode ""Select between \'train\' or \'test\'""\r\n --epochs ""Number of epochs to run""\r\n --batch ""Batch size""\r\n --resume_from_epoch ""Load model from previous epoch (to continue the training)""\r\n \r\n # Unet\r\n --input_channels ""The number of input bands""\r\n --output_channels ""The number of output classes""\r\n --hidden_channels ""The number of hidden features""\r\n\r\n # Optimization\r\n --weight_param ""Weighting parameter for Loss Function""\r\n --lr ""Learning rate for adam""\r\n --decay ""Learning rate decay for adam""\r\n --reduce_lr_on_plateau ""Reduce learning rate when val loss does not decrease (0 or 1)""\r\n --lr_steps ""Specify the steps at which the lr will be reduced""\r\n\r\n # Evaluation/Checkpointing\r\n --checkpoint_path ""The folder to save checkpoints into""\r\n --eval_every ""How frequently to run evaluation (epochs)""\r\n\r\n # misc\r\n --num_workers ""How many cpus for loading data (0 is the main process)""\r\n --pin_memory ""Use pinned memory or not""\r\n --prefetch_factor ""Number of samples loaded in advance by each worker""\r\n --persistent_workers ""Keep the workers\' Dataset instances alive between epochs""\r\n --tensorboard ""Name for tensorboard run""\r\n```\r\n\r\n**Unet evaluation**\r\n\r\nRun the following commands in order to produce the Confusion Matrix in stdout and `logs/evaluation_unet.log`,\r\n as well as to produce the predicted masks from the test set in the `data/predicted_unet/` folder:\r\n\r\n```bash\r\ncd semantic_segmentation/unet\r\npython evaluation.py\r\n```\r\n\r\nIn order to easily visualize the predicted masks via [QGIS](https://qgis.org/en/site/index.html),\r\nyou can use the `utils/qgis_color_mask_mapping.qml` file.\r\n\r\nTo download the Unet model pretrained on MARIDA, press [here](https://pithos.okeanos.grnet.gr/public/lxh8hL4zvuSKds2BdVnMd2). \r\nThen, put these items in the `semantic_segmentation/unet/trained_models/` folder.\r\n\r\n#### Random Forest\r\n\r\nIn our baseline setup we trained a random forest classifier on Spectral Signatures,\r\nproduced Spectral Indices (SI), and extracted Gray-Level Co-occurrence Matrix (GLCM) texture features.\r\nThus, this process requires the Spectral Signatures Extraction, i.e., the `data/dataset.h5` [file](#spectral-signatures-extraction). It also requires the `dataset_si.h5` and `dataset_glcm.h5` files for the SI and GLCM features,\r\nrespectively.\r\n\r\n1) For the extraction of stacked SI patches (in `data/indices/`) run:\r\n\r\n```bash\r\ncd semantic_segmentation/random_forest\r\npython engineering_patches.py\r\n```\r\n\r\nThen, in order to produce the `dataset_si.h5` run:\r\n\r\n```bash\r\npython utils/spectral_extraction.py --type indices\r\n```\r\n\r\n2) For the stacked GLCM patches (in `data/texture/`) run (approximately 110 mins):\r\n\r\n```bash\r\npython engineering_patches.py --type texture\r\n```\r\n\r\nSimilarly, in order to produce the `dataset_glcm.h5` run:\r\n\r\n```bash\r\npython utils/spectral_extraction.py --type texture\r\n```\r\n\r\nAlternatively, you can download the `indices/` and `texture/` folders as well as the `dataset_si.h5` and `dataset_glcm.h5` files from [here](https://pithos.okeanos.grnet.gr/public/7Xm6x2uSBHTknNv7vaqgS6). \r\nThen, put these items in the `data` folder.\r\n\r\n**Random Forest training and evaluation**\r\n\r\nFor training on the ""train"" set and final evaluation on the ""test"" set, run the following commands.\r\nNote that the results will appear in stdout and `logs/evaluation_rf.log`, and the predicted \r\nmasks in the `data/predicted_rf/` folder.\r\n\r\n```bash\r\ncd semantic_segmentation/random_forest\r\npython train_eval.py\r\n```\r\n\r\nThe `train_eval.py` script supports the `--agg_to_water` argument for \r\nthe aggregation of various classes to form the Water Super Class (the default setup):\r\n\r\n```bash\r\npython train_eval.py --agg_to_water [\'""Mixed Water""\',\'""Wakes""\',\'""Cloud Shadows""\',\'""Waves""\']\r\n```\r\n
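For intuition, the default `--agg_to_water` aggregation amounts to a remap of the mask digital numbers. A sketch that mirrors the documented class mapping (not the internal implementation of `train_eval.py`):

```python
import numpy as np

# Waves (12), Cloud Shadows (13), Wakes (14) and Mixed Water (15)
# are merged into Marine Water (7) to form the Water Super Class.
AGG_TO_WATER = {12: 7, 13: 7, 14: 7, 15: 7}

def aggregate_to_water(mask: np.ndarray) -> np.ndarray:
    out = mask.copy()
    for src_dn, dst_dn in AGG_TO_WATER.items():
        out[mask == src_dn] = dst_dn
    return out
```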
\r\n### Multi-label Classification\r\n\r\nThe weakly-supervised multi-label classification task is an incomplete multi-label\r\nassignment problem. Specifically, the assigned labels are definitely positive (assigned as 1),\r\n while the absent labels (assigned as 0) are not necessarily negative. The assigned labels\r\n per patch can be found in `data/labels_mapping.txt`.\r\n\r\n#### ResNet\r\n\r\n**ResNet training**\r\n\r\nFor training on the ""train"" set and evaluation on the ""val"" set, run:\r\n\r\n```bash\r\ncd multi-label/resnet\r\npython train.py\r\n```\r\n\r\nSimilarly to the U-Net training, you can use tensorboard through `localhost:6006` \r\nto visualize the training process:\r\n\r\n```\r\ntensorboard --logdir logs/tsboard_multilabel\r\n```\r\n\r\n**ResNet evaluation**\r\n\r\nRun the following command in order to produce the accuracy scores and the Confusion Matrix in stdout \r\nand `logs/evaluation_resnet.log`, as well as to produce the predictions for each patch from the test \r\nset in `data/predicted_labels_mapping.txt`:\r\n\r\n```bash\r\npython evaluation.py\r\n```\r\n\r\nTo download the ResNet model pretrained on MARIDA, press [here](https://pithos.okeanos.grnet.gr/public/lxh8hL4zvuSKds2BdVnMd2). \r\nThen, put these items in the `multi-label/resnet/trained_models/` folder.\r\n\r\n## Talks and Papers\r\n[Kikaki A, Kakogeorgiou I, Mikeli P, Raitsos DE, Karantzalos K. Detecting and Classifying Marine Plastic Debris from high-resolution multispectral satellite data.](https://doi.org/10.5194/egusphere-egu21-15243)\r\n\r\n## License\r\nThis project is licensed under the MIT License.\r\n'",",https://doi.org/10.1371/journal.pone.0262247\r\n\r\nIn,https://doi.org/10.5281/zenodo.5151941.\r\n\r\nAlternatively,https://doi.org/10.5281/zenodo.5151941,https://doi.org/10.5194/egusphere-egu21-15243","2021/08/01, 20:38:30",815,MIT,0,23,"2023/02/18, 11:30:28",1,1,6,2,249,0,0.0,0.045454545454545414,"2021/08/01, 20:54:58",v1.0.0,0,2,false,,false,false,,,https://github.com/marine-debris,,,,,https://avatars.githubusercontent.com/u/85183222?v=4,,, Detect waste,Detecting plastic waste in the environment to combat environmental pollution and promote circular economy.,wimlds-trojmiasto,https://github.com/wimlds-trojmiasto/detect-waste.git,github,"pytorch,deep-learning,object-detection,python,cnn,neural-networks,efficientdet,detr,maskrcnn,fastrcnn,litter,waste-detection,trash",Circular Economy and Waste,"2022/10/23, 19:28:22",146,0,46,true,Python,,wimlds-trojmiasto,"Python,Jupyter Notebook,Shell",https://detectwaste.ml,"b""\n\n[![DOI](https://zenodo.org/badge/314221459.svg)](https://zenodo.org/badge/latestdoi/314221459)\n\n\n# Detect waste\nAI4Good project for detecting waste in the environment.\n[www.detectwaste.ml](www.detectwaste.ml).\n\nOur latest results were published in the Waste Management journal in an article titled [Deep learning-based waste detection in natural and urban environments](https://www.sciencedirect.com/science/article/pii/S0956053X21006474?dgcid=coauthor#fn1).\n\nYou can find more technical details in our technical report [Waste detection in Pomerania: non-profit project for detecting waste in environment](https://arxiv.org/abs/2105.06808).\n\nDid you know that we produce 300 million tons of plastic every year? And only part of it is properly recycled.\n\nThe idea of the Detect Waste project is to use Artificial Intelligence to detect plastic waste in the environment. Our solution is applicable for video and photography. 
Our goal is to use AI for Good.\n\n![](notebooks/demo.png)\n\n# Datasets\n\nIn the Detect Waste in Pomerania project we used 9 publicly available datasets, and additional data collected using [Google Images Download](https://github.com/hardikvasa/google-images-download).\n\nFor more details about the data we used, check our [jupyter notebooks](https://github.com/wimlds-trojmiasto/detect-waste/tree/main/notebooks) with data exploratory analysis.\n\n## Data download (WIP)\nData annotations: https://github.com/wimlds-trojmiasto/detect-waste/tree/main/annotations\n\n* TACO bboxes - in progress. The TACO dataset can be downloaded [here](http://tacodataset.org/). TACO bboxes will be available for download soon.\n\n Clone the TACO repository\n `git clone https://github.com/pedropro/TACO.git`\n\n Install requirements\n `pip3 install -r requirements.txt`\n\n Download annotated data\n `python3 download.py`\n\n* [UAVVaste](https://github.com/UAVVaste/UAVVaste)\n\n Clone the UAVVaste repository\n `git clone https://github.com/UAVVaste/UAVVaste.git`\n\n Install requirements\n `pip3 install -r requirements.txt`\n\n Download annotated data\n `python3 main.py`\n\n* [TrashCan 1.0](https://conservancy.umn.edu/handle/11299/214865)\n\n Download directly from the web (quote the URL, since it contains `&`)\n `wget ""https://conservancy.umn.edu/bitstream/handle/11299/214865/dataset.zip?sequence=12&isAllowed=y""`\n\n* [TrashICRA](https://conservancy.umn.edu/handle/11299/214366)\n\n Download directly from the web\n `wget ""https://conservancy.umn.edu/bitstream/handle/11299/214366/trash_ICRA19.zip?sequence=12&isAllowed=y""`\n\n* [MJU-Waste](https://github.com/realwecan/mju-waste/)\n\n Download directly from [google drive](https://drive.google.com/file/d/1o101UBJGeeMPpI-DSY6oh-tLk9AHXMny/view)\n\n* [Drinking Waste Classification](https://www.kaggle.com/arkadiyhacks/drinking-waste-classification)\n\n In order to download you must first authenticate using a kaggle API token. Read about it [here](https://www.kaggle.com/docs/api#getting-started-installation-&-authentication)\n\n `kaggle datasets download -d arkadiyhacks/drinking-waste-classification`\n\n* [Wade-ai](https://github.com/letsdoitworld/wade-ai/tree/master/Trash_Detection)\n\n Clone the wade-ai repository\n `git clone https://github.com/letsdoitworld/wade-ai.git`\n\n For coco annotation check: [majsylw/wade-ai/tree/coco-annotation](https://github.com/majsylw/wade-ai/tree/coco-annotation/Trash_Detection/trash/dataset)\n\n* [TrashNet](https://github.com/garythung/trashnet) - The dataset spans six classes: glass, paper, cardboard, plastic, metal, and trash.\n\n Clone the trashnet repository\n `git clone https://github.com/garythung/trashnet`\n\n* [waste_pictures](https://www.kaggle.com/wangziang/waste-pictures) - The dataset contains ~24k images grouped into 34 classes of waste for classification purposes.\n\n In order to download you must first authenticate using a kaggle API token. Read about it [here](https://www.kaggle.com/docs/api#getting-started-installation-&-authentication)\n\n `kaggle datasets download -d wangziang/waste-pictures`\n\nFor more datasets check: [waste-datasets-review](https://github.com/AgaMiko/waste-datasets-review)\n\n## Data preprocessing\n\n### Multiclass training\nTo train only on the TACO dataset with detect-waste classes:\n* run *annotations_preprocessing.py*\n\n `python3 annotations_preprocessing.py`\n\n new annotations will be saved in *annotations/annotations_train.json* and *annotations/annotations_test.json*\n\n For binary detection (litter and background), check also the generated annotations saved in *annotations/annotations_binary_train.json* and *annotations/annotations_binary_test.json*.\n\n### Single class training\n\nTo train on one or multiple datasets on a single class:\n\n* run *annotations_preprocessing_multi.py*\n\n `python3 annotations_preprocessing_multi.py`\n\n new annotations will be split and saved in *annotations/binary_mixed_train.json* and *annotations/binary_mixed_test.json*\n\n An example bash file is in **annotations_preprocessing_multi.sh** and can be run by\n\n `bash annotations_preprocessing_multi.sh`\n\nThe script will automatically split all datasets into train and test sets with `MultilabelStratifiedShuffleSplit`. Then it will convert the datasets to one class - litter. Finally all datasets will be concatenated to form a single train and a single test file, `annotations/binary_mixed_train.json` and `annotations/binary_mixed_test.json`.\n\nFor more details check the [annotations directory](https://github.com/wimlds-trojmiasto/detect-waste/tree/main/annotations). A quick way to sanity-check the generated files is sketched below.\n
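A minimal sketch for inspecting one of the generated annotation files, assuming the standard COCO keys (`images`, `annotations`, `categories`) and that preprocessing was run from the repo root:

```python
import json

# Peek at the generated single-class (litter) training annotations.
with open("annotations/binary_mixed_train.json") as f:
    coco = json.load(f)

print(len(coco["images"]), "images")
print(len(coco["annotations"]), "annotations")
print("categories:", [c["name"] for c in coco["categories"]])
```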
\n# Models\n\nTo read more about past waste detection works check [litter-detection-review](https://github.com/majsylw/litter-detection-review).\n\n* ### EfficientDet\n\n To train EfficientDet check `efficientdet/README.md`\n\n To train EfficientDet implemented in Pytorch Lightning check the `effdet_lightning` branch\n\n We based our implementation on [efficientdet-pytorch](https://github.com/rwightman/efficientdet-pytorch) by Ross Wightman.\n\n* ### DETR\n\n To train DETR check `detr/README.md` (WIP)\n\n PyTorch training code and pretrained models for **DETR** (**DE**tection **TR**ansformer).\n The authors replaced the full, complex, hand-crafted object detection pipeline with a Transformer, and matched Faster R-CNN with a ResNet-50, obtaining **42 AP** on COCO using half the computation power (FLOPs) and the same number of parameters. Inference in 50 lines of PyTorch.\n\n For implementation details see [End-to-End Object Detection with Transformers](https://github.com/facebookresearch/detr) by Facebook.\n\n* ### Mask R-CNN\n To train Mask R-CNN check `MaskRCNN/README.md`\n\n Our implementation is based on this [tutorial](https://pytorch.org/tutorials/intermediate/torchvision_tutorial.html).\n\n* ### Faster R-CNN\n To train Faster R-CNN on the TACO dataset check `FastRCNN/README.md`\n\n* ### Classification with ResNet50 and EfficientNet\n To train a chosen model check `classifier/README.md`\n\n\n## Example usage - models training\n\n1. Waste detection using EfficientDet\n\nIn our github repository you will find [EfficientDet code](https://github.com/wimlds-trojmiasto/detect-waste/tree/main/efficientdet) already adjusted for our mixed dataset. 
To run training for a single class, just clone the repository, move to the efficientdet directory, install the necessary dependencies, and launch the ```train.py``` script with adjusted parameters, like: path to images, path to the directory with annotations (you can use ours, provided in the [annotations directory](https://github.com/wimlds-trojmiasto/detect-waste/tree/main/annotations)), model parameters and its specific name. It can be done as in the example below.\n\n```bash\npython3 train.py path_to_all_images \\\n--ann_name ../annotations/binary_mixed --model tf_efficientdet_d2 \\\n--batch-size 4 --decay-rate 0.95 --lr .001 --workers 4 --warmup-epochs 5 \\\n--model-ema --dataset multi --pretrained --num-classes 1 --color-jitter 0.1 \\\n--reprob 0.2 --epochs 20 --device cuda:0\n```\n\n2. Waste classification using EfficientNet\n\nIn this step switch to the [classifier directory](https://github.com/wimlds-trojmiasto/detect-waste/tree/main/classifier). At first, just crop the waste objects from the images of waste (the same as in the previous step).\n\n```bash\npython3 cut_bbox_litter.py --src_img path_to_whole_images \\\n --dst_img path_to_destination_directory_for_images \\\n --square --zoom 1\n```\n\nIn case of using the unlabelled [OpenLitterMap dataset](https://openlittermap.com/), make pseudo-predictions using the previously trained EfficientDet and map them with the original openlittermap annotations.\n\n```bash\npython3 sort_openlittermap.py \\\n --src_ann path_to_original_openlittermap_annotations \\\n --coco path_to_our_openlittermap_annotations \\\n --src_img path_to_whole_images \\\n --dst_img path_to_destination_directory_for_images\n```\n\nTo run the classifier training in the command line just type:\n\n```bash\npython train_effnet.py --data_img path/to/images/train/ \\\n --save path/to/checkpoint.ckpt \\\n --model efficientnet-b2 \\\n --gpu 0 \\\n --pseudolabel_mode per-batch\n```\n\n## Evaluation\n\nWe provide a `make_predictions.py` script to draw bounding boxes on a chosen image. For example, the script can be run on GPU (id=0) with arguments:\n\n```bash\n python make_predictions.py --save directory/to/save/image.png \\\n --detector path/to/detector/checkpoint.pth \\\n --classifier path/to/classifier/checkpoint.pth \\\n --img path/or/url/to/image --device cuda:0\n```\nor on video with the `--video` argument:\n\n```bash\n python make_predictions.py --save directory/to/save/frames \\\n --detector path/to/detector/checkpoint.pth \\\n --classifier path/to/classifier/checkpoint.pth \\\n --img path/to/video.mp4 --device cuda:0 --video \\\n --classes label0 label1 label2\n```\n\nIf you managed to process all the frames, just run the following command from the directory where you saved the results:\n\n```bash\n ffmpeg -i img%08d.jpg movie.mp4\n```\n\n## Tracking experiments\nFor experiment tracking we mostly used [neptune.ai](https://neptune.ai/). 
To use `Neptune` follow the official Neptune tutorial on their website:\n* Log in to your account\n* Find and set your Neptune API token on your system as an environment variable (your NEPTUNE_API_TOKEN should be added to `~/.bashrc`)\n* Add your project_qualified_name in `train_.py`\n ```python\n neptune.init(project_qualified_name = 'YOUR_PROJECT_NAME/detect-waste')\n ```\n Currently it is set to a private detect-waste neptune space.\n\n* Install the neptune-client library\n\n ```bash\n pip install neptune-client\n ```\n\nFor more check [LINK](https://neptune.ai/how-it-works).\n\n## Our results\n\n### Detection/Segmentation task\n| model | backbone | Dataset | # classes | bbox AP@0.5 | bbox AP@0.5:0.95 | mask AP@0.5 | mask AP@0.5:0.95 |\n| :-----:| :-------: | :-----------: | :-------: | :---------: | :--------------: | :---------: | :--------------: |\n| DETR | ResNet 50 |TACO bboxes| 1 | 46.50 | 24.35 | x | x |\n| DETR | ResNet 50 |TACO bboxes| 7 | 12.03 | 6.69 | x | x |\n| DETR | ResNet 50 |`*`Multi | 1 | 50.68 | 27.69 | `**`54.80 | `**`32.17 |\n| DETR |ResNet 101 |`*`Multi | 1 | 51.63 | 29.65 | 37.02 | 19.33 |\n| Mask R-CNN | ResNet 50 | `*`Multi | 1 | 27.95 | 16.49 | 23.05 | 12.94 |\n| Mask R-CNN | ResNeXt 101 | `*`Multi | 1 | 19.70 | 6.20 | 24.70 | 13.20 |\n| EfficientDet-D2 | EfficientNet-B2 | Taco bboxes | 1 | 61.05 | x | x | x |\n| EfficientDet-D2 | EfficientNet-B2 | Taco bboxes | 7 | 18.78 | x | x | x |\n| EfficientDet-D2 | EfficientNet-B2 | Drink-waste | 4 | 99.60 | x | x | x |\n| EfficientDet-D2 | EfficientNet-B2 | MJU-Waste | 1 | 97.74 | x | x | x |\n| EfficientDet-D2 | EfficientNet-B2 | TrashCan v1 | 8 | 91.28 | x | x | x |\n| EfficientDet-D2 | EfficientNet-B2 | Wade-AI | 1 | 33.03 | x | x | x |\n| EfficientDet-D2 | EfficientNet-B2 | UAVVaste | 1 | 79.90 | x | x | x |\n| EfficientDet-D2 | EfficientNet-B2 | Trash ICRA19 | 7 | 9.47 | x | x | x |\n| EfficientDet-D2 | EfficientNet-B2 | `*`Multi | 1 | 74.81 | x | x | x |\n| EfficientDet-D3 | EfficientNet-B3 | `*`Multi | 1 | 74.53 | x | x | x |\n\n* `*` Multi - name for the mixed open dataset (built from the datasets listed above) for the detection/segmentation task\n* `**` results achieved with frozen weights from the detection task (after addition of the mask head)\n\n**Pretrained weights of the best efficientdet model are available to download here: https://drive.google.com/drive/u/0/folders/1wNWoH8rdkG05sBw-OCXp3J73uJPxhcxH**\n\n### Classification task\n\n| model | # classes | ACC (%) | sampler | pseudolabeling |\n| :--------------:| :-------: | :--:| :-----: | :------------: |\n| EfficientNet-B2 | 8 |73.02| Weighted| per batch |\n| EfficientNet-B2 | 8 |74.61| Random | per epoch |\n| EfficientNet-B2 | 8 |72.84| Weighted| per epoch |\n| EfficientNet-B4 | 7 |71.02| Random | per epoch |\n| EfficientNet-B4 | 7 |67.62| Weighted| per epoch |\n| EfficientNet-B2 | 7 |72.66| Random | per epoch |\n| EfficientNet-B2 | 7 |68.31| Weighted| per epoch |\n| EfficientNet-B2 | 7 |74.43| Random | None |\n| ResNet-50 | 8 |60.60| Weighted| None |\n\n* 8 classes - the 8th class is an additional background category\n* we provide 2 methods to update pseudo-labels: per batch and per epoch\n\n## Citation\n\n```\n@article{MAJCHROWSKA2022274,\n title = {Deep learning-based waste detection in natural and urban environments},\n journal = {Waste Management},\n volume = {138},\n pages = {274-284},\n year = {2022},\n issn = {0956-053X},\n doi = {https://doi.org/10.1016/j.wasman.2021.12.001},\n url = {https://www.sciencedirect.com/science/article/pii/S0956053X21006474},\n author = {Sylwia 
Majchrowska and Agnieszka Mikołajczyk and Maria Ferlin and Zuzanna Klawikowska\n and Marta A. Plantykow and Arkadiusz Kwasigroch and Karol Majek},\n keywords = {Object detection, Semi-supervised learning, Waste classification benchmarks,\n Waste detection benchmarks, Waste localization, Waste recognition},\n}\n\n@misc{majchrowska2021waste,\n title={Waste detection in Pomerania: non-profit project for detecting waste in environment}, \n author={Sylwia Majchrowska and Agnieszka Mikołajczyk and Maria Ferlin and Zuzanna Klawikowska\n and Marta A. Plantykow and Arkadiusz Kwasigroch and Karol Majek},\n year={2021},\n eprint={2105.06808},\n archivePrefix={arXiv},\n primaryClass={cs.CV}\n}\n```\n\n## Project Organization (WIP)\n------------\n\n    ├── LICENSE\n    ├── README.md          <- The top-level README for developers using this project.\n    ├── annotations        <- annotations in json\n    │\n    ├── classifier         <- implementation of CNN for litter classification\n    │\n    ├── detr               <- implementation of DETR for litter detection\n    │\n    ├── efficientdet       <- implementation of EfficientDet for litter detection\n    │\n    ├── fastrcnn           <- implementation of FastRCNN for litter segmentation\n    │\n    ├── maskrcnn           <- implementation of MaskRCNN for litter segmentation\n    │\n    ├── notebooks          <- jupyter notebooks\n    │\n    ├── utils              <- source code with useful functions\n    │\n    ├── requirements.txt   <- The requirements file for reproducing the analysis environment, e.g.\n    │                         generated with `pip freeze > requirements.txt`\n    ├── setup.py           <- makes project pip installable (pip install -e .) so src can be imported\n    ├── src                <- Source code for use in this project.\n\n\n--------\n""",",https://zenodo.org/badge/latestdoi/314221459,https://arxiv.org/abs/2105.06808,https://doi.org/10.1016/j.wasman.2021.12.001","2020/11/19, 11:04:09",1070,MIT,0,133,"2023/09/28, 12:20:04",8,54,71,10,27,5,0.6,0.52,"2021/09/02, 06:29:33",v1.1-alpha,0,8,false,,false,false,,,https://github.com/wimlds-trojmiasto,,,,,https://avatars.githubusercontent.com/u/52125053?v=4,,, Santiago.jl,A Julia package to generate appropriate sanitation system options.,santiago-sanitation-systems,https://github.com/santiago-sanitation-systems/Santiago.jl.git,github,,Circular Economy and Waste,"2022/12/21, 13:39:41",5,0,2,true,Julia,,santiago-sanitation-systems,Julia,,"b'# Santiago.jl\n\n[![version](https://juliahub.com/docs/Santiago/version.svg)](https://juliahub.com/ui/Packages/Santiago/JPJQH)\n[![Build\nStatus](https://github.com/santiago-sanitation-systems/Santiago.jl/workflows/CI/badge.svg)](https://github.com/santiago-sanitation-systems/Santiago.jl/actions)\n[![codecov](https://codecov.io/gh/santiago-sanitation-systems/Santiago.jl/branch/master/graph/badge.svg?token=GWBV5M4Z13)](https://codecov.io/gh/santiago-sanitation-systems/Santiago.jl)\n\n`Santiago` (SANitation sysTem Alternative GeneratOr) is a Julia package to generate appropriate sanitation system options. 
It is able to\n- find all possible systems given a set of sanitation technologies;\n- assess the appropriateness of a technology in a given case (context);\n- assess the overall appropriateness of a sanitation system in a given context;\n- calculate (optionally with uncertainty quantification) the massflows for each system for\n total `phosphor`, total `nitrogen`, `totalsolids`, and `water`;\n- select a meaningful subset of systems for the given case.\n\nFor non-research applications we recommend using `Santiago` via the web app [sanichoice.net](https://www.sanichoice.net/).\n\n# Installation\n\n1. Install [Julia](https://julialang.org/) version >= 1.4.\n\n2. Install the `Santiago` package from the Julia prompt:\n```Julia\n] add Santiago\n```\n\n3. To edit Julia files you may also want to install [Visual Studio\nCode](https://code.visualstudio.com/) and its [Julia\nExtension](https://www.julia-vscode.org/docs/stable/). Alternatively, see the [Julia\nhome page](https://julialang.org/) for support for other editors.\n\n# Usage\n\nThe example below demonstrates the typical steps needed to identify\nsanitation systems appropriate for a given case. See the references below for a\nclarification of the terminology and the recommended embedding in the\nstrategic planning process.\n\nMost functions have a documentation string attached that can be accessed with\n`?functionname` on the Julia prompt.\n\nFor reproducibility it is a good idea to create a separate _Julia project_\n(similar to `virtualenv` in Python) for\nevery analysis, see [here](https://julialang.github.io/Pkg.jl/v1/environments/).\n\n## Minimal Example\n\n```Julia\nusing Santiago\n\n# -----------\n# 1) Import technologies\n\n# we use the test data that come with the package\ninput_tech_file = joinpath(pkgdir(Santiago), ""test/example_techs.json"")\ninput_case_file = joinpath(pkgdir(Santiago), ""test/example_case.json"")\n\nsources, additional_sources, techs = import_technologies(input_tech_file)\n\n# -----------\n# 2) Build all systems\n\nallSys = build_systems(sources, techs);\n\n# number of found systems\nlength(allSys)\n\n\n# The computations can be accelerated by setting max_candidates to a low number.\n# However, this will result only in a *stochastic* subset of all possible systems!\nallSys = build_systems(sources, techs, max_candidates=100);\n\n\n# -----------\n# 3) Calculate system properties\n\ntas, tas_components = appropriateness(input_tech_file, input_case_file)\n\nsysappscore!.(allSys)\nntechs!.(allSys)\nnconnections!.(allSys)\nconnectivity!.(allSys)\ntemplate!.(allSys)\n\n# see all properties of the first system\nallSys[1].properties\n\n# -----------\n# 4) Mass flows\n\n# Inputs for different sources in kg/year/person equivalent.\n# See references below.\ninput_masses = Dict(""Dry.toilet"" => Dict(""phosphor"" => 0.548,\n ""nitrogen"" => 4.550,\n ""totalsolids"" => 32.12,\n ""water"" => 547.1),\n ""Pour.flush"" => Dict(""phosphor"" => 0.548,\n ""nitrogen"" => 4.55,\n ""totalsolids"" => 32.12,\n ""water"" => 1277.1),\n ""Cistern.flush"" => Dict(""phosphor"" => 0.548,\n ""nitrogen"" => 4.55,\n ""totalsolids"" => 32.12,\n ""water"" => 22447.1),\n # Urine diversion dry toilet\n ""Uddt"" => Dict(""phosphor"" => 0.548,\n ""nitrogen"" => 4.55,\n ""totalsolids"" => 32.12,\n ""water"" => 547.1)\n )\n\n\n# Calculate massflows with 20 Monte Carlo iterations (probably not enough)\n# for all systems and save to system properties\nmassflow_summary_parallel!(allSys, input_masses, n=20);\n\n# Alternatively, the non-parallelized 
version can be used:\n# massflow_summary!.(allSys, Ref(input_masses), n=20);\n\n# If the flows of every technology are of interest, set \'techflows=true\'.\n# The default is \'false\' as this produces a very large amount of additional data!\nmassflow_summary_parallel!(allSys, input_masses, n=20, techflows=true);\n\n# Examples of how to extract results\nallSys[2].properties[""massflow_stats""][""entered""]\nallSys[2].properties[""massflow_stats""][""recovery_ratio""]\nallSys[2].properties[""massflow_stats""][""recovered""]\n\nallSys[2].properties[""massflow_stats""][""lost""][:,""air loss"",:]\nallSys[2].properties[""massflow_stats""][""lost""][:,:,""mean""]\nallSys[2].properties[""massflow_stats""][""lost""][:,:,""q_0.5""]\n\n# -----------\n# 5) select a subset of systems\n\n# For example, select eight systems for further investigation\nselectedSys = select_systems(allSys, 8)\n\n# We can also include or exclude technologies\nselect_systems(allSys, 8, techs_exclude=[""Pour.flush"", ""wsp_3_trans""])\nselect_systems(allSys, 8, techs_include=[""Pour.flush""])\n\n# Similar for templates\nselect_systems(allSys, 8, templates_exclude=[""ST.3"", ""ST.15""])\nselect_systems(allSys, 8, templates_include=[""ST.17""])\n\n# By default the systems are selected by the `""sysappscore""` but other\n# properties can be used too. For example, here we prefer short systems:\nselect_systems(allSys, 8, target=""ntechs"", maximize=false)\n\n# Or systems with a high phosphor recovery (run massflow calculation first):\nselect_systems(allSys, 8, target=""phosphor"" => ""recovery_ratio"")\n\n# By default the returned systems are diverse while having a good\n# target score. You can ignore the diversity requirement to get the\n# systems with the best target scores by setting\n# the `selection_type` to ""ranking"".\nselect_systems(allSys, 10, selection_type=""ranking"")\n\n# This helper function returns the systems with matching IDs:\npick_systems(allSys, [""003s-QbnU-FvGB"", ""0JLD-YQbJ-SGAu""])\n\n# Investigate how techs and templates are used\ntemplates_per_tech(allSys)\ntechs_per_template(allSys)\n\n# -----------\n# 6) write some properties in a DataFrame for further analysis\n\ndf = properties_dataframe(selectedSys,\n massflow_selection = [""recovered | water | mean"",\n ""recovered | water | sd"",\n ""lost | water | air loss| q_0.5"",\n ""entered | water""])\n\nsize(df)\nnames(df)\n\n# or you could simply export all properties (> 400!)\ndf = properties_dataframe(allSys, massflow_selection = ""all"")\n\n# export as csv\nimport CSV # the package \'CSV\' needs to be installed separately\nCSV.write(""mysystems.csv"", df)\n\n\n# -----------\n# 7) create a visualization of a system as pdf\n\n# First write a dot file\ndot_file(selectedSys[1], ""system.dot"")\n\n# Then, convert it to pdf (The program `graphviz` must be installed on the system)\nrun(`dot -Tpdf system.dot -o system.pdf`)\n\n\n# -----------\n# 8) export to JSON\n\n# Note, the JSON export is designed to interface with other applications,\n# but not for serialization.\n\nopen(""system_export.json"", ""w"") do f\n JSON3.write(f, selectedSys)\nend\n```\n\n\n## Input format\n\nTypically the information on the case specification and the available\ntechnologies is provided via files. `Santiago` can only import JSON\nfiles. The structure must match these examples:\n\n- Technologies: [`example_techs.json`](test/example_techs.json)\n- Case: [`example_case.json`](test/example_case.json)\n\n
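A quick way to check that a hand-edited file is still valid JSON is a couple of lines of Python, as one example of many possible tools (a minimal sketch; the file name is taken from the examples above, adjust the path to where your file lives):\n\n```Python\nimport json\n\n# json.load raises a descriptive error, including line and column,\n# if the file is not valid JSON.\nwith open(""example_techs.json"") as f:\n    data = json.load(f)\n\nprint(type(data), len(data))\n```\n\nMany tools are available to browse and edit JSON files. 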
For example,\nFirefox renders JSON files nicely, or Visual Studio Code allows for editing.\n\n\n## Logging\n\nBy default, `Santiago` prints only a limited amount of information. This can be\nadjusted via the logging level. With the package `LoggingExtras.jl` (which needs to\nbe installed separately)\ndifferent logging levels can be used for the console output and the log file:\n\n```Julia\nusing Logging\nusing LoggingExtras\n\n# - on the console show only infos and errors; write everything to the logfile \'mylogfile.log\'\nmylogger = TeeLogger(\n MinLevelLogger(FileLogger(""mylogfile.log""), Logging.Debug), # logs to file\n MinLevelLogger(ConsoleLogger(), Logging.Info) # logs to console\n)\nglobal_logger(mylogger)\n\n... use Santiago functions ...\n```\n\n## Update systems for a new case profile\n\nThe generation of all systems is computationally intensive. The code\nbelow demonstrates how to first generate all systems without case\ninformation and later update the system scores with case data.\n\n```Julia\nusing Serialization\n\n## 1) build systems without case information and cache result\n\nsources, additional_sources, techs = import_technologies(tech_file)\n\nif isfile(""mycachfile.jls"")\n allSys, sources, additional_sources, techs = deserialize(""mycachfile.jls"")\nelse\n allSys = build_systems(sources, techs)\n ...\n massflow_summary!.(allSys, Ref(input_masses), n=100);\n ...\n serialize(""mycachfile.jls"", (allSys, sources, additional_sources, techs)) \n # note: we need to save the techs in order to ensure the link from systems to tech properties (tas)\nend\n\nsysappscore!.(allSys) # all are \'-1.0\' because no case profile was defined yet\n\n## 2) read case file and update sysappscore\n\ntas, tas_components = appropriateness(tech_file, case_file);\nupdate_appropriateness!(sources, tas)\nupdate_appropriateness!(additional_sources, tas)\nupdate_appropriateness!(techs, tas)\n\nsysappscore!.(allSys) # now we have the updated SAS\n\n## 3) select systems\n\nfewSys = select_systems(allSys, 6)\n\n## 4) scale massflows for 100 people\n\nscale_massflows!.(fewSys, 100)\n\n```\nThe slowest parts are `build_systems` and\n`massflow_summary!`. Therefore we cache their output as shown in this\nexample. Steps 2 and 4 are fast and can be quickly adapted to new cases.\n\n\n## Multi-threading\n\nThe functions `build_systems` and especially\n`massflow_summary_parallel!` benefit from multi-threading. As this may\ninvolve some overhead, benchmarking is recommended. See the official\n[documentation](https://docs.julialang.org/en/v1/manual/parallel-computing/#man-multithreading-1)\nfor how to control the number of threads (e.g. start Julia with `julia --threads=4`, or set the `JULIA_NUM_THREADS` environment variable).\n\n\n\n# References\n\nSpuhler, D., Scheidegger, A., Maurer, M., 2018. Generation of\nsanitation system options for urban planning considering novel\ntechnologies. Water Research 145,\n259\xe2\x80\x93278. https://doi.org/10.1016/j.watres.2018.08.021\n\nSpuhler, D., Scheidegger, A., Maurer, M., 2020. Comparative analysis\nof sanitation systems for resource recovery: influence of\nconfigurations and single technology components. Water\nResearch 116281. https://doi.org/10.1016/j.watres.2020.116281\n\nSpuhler, D., Scheidegger, A., Maurer, M., 2021. Ex-ante quantification\nof nutrient, total solids, and water flows in sanitation\nsystems. 
Journal of Environmental Management 280, 111785.\nhttps://doi.org/10.1016/j.jenvman.2020.111785\n\n\n# License\n\nThe `Santiago.jl` package is free software: you can redistribute it and/or modify\nit under the terms of the GNU Affero General Public License as published by\nthe Free Software Foundation, either version 3 of the License, or\n(at your option) any later version.\n\nThis program is distributed in the hope that it will be useful,\nbut WITHOUT ANY WARRANTY; without even the implied warranty of\nMERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\nGNU Affero General Public License for more details.\n\nYou should have received a copy of the GNU Affero General Public License\nalong with this program. If not, see <https://www.gnu.org/licenses/>.\n\nCopyright 2020, Dorothee Spuhler at Eawag. Contact: Dorothee Spuhler, \n'",",https://doi.org/10.1016/j.watres.2018.08.021\n\nSpuhler,https://doi.org/10.1016/j.watres.2020.116281\n\nSpuhler","2020/06/25, 07:06:30",1217,CUSTOM,5,412,"2022/08/02, 09:26:53",17,6,78,1,449,1,0.0,0.1266490765171504,"2022/12/21, 13:57:53",v0.10.1,0,6,false,,false,false,,,https://github.com/santiago-sanitation-systems,https://github.com/santiago-sanitation-systems/Santiago.jl,,,,https://avatars.githubusercontent.com/u/67377940?v=4,,, RaMa-Scene,RaMa-Scene a web-platform to analyse Environmentally Extended Input-Output data and generate scenarios.,CMLPlatform,https://github.com/CMLPlatform/ramascene.git,github,,Circular Economy and Waste,"2023/02/24, 10:51:01",6,0,0,true,SCSS,CML-IE-EB,CMLPlatform,"SCSS,Python,JavaScript,HTML,CSS,Dockerfile,Ruby,Shell",https://www.ramascene.eu/,"b'# RaMa-Scene\n---\nRaMa-Scene is a Django 2.0-based web application that allows for analyzing Environmentally Extended Input-Output (EEIO) tables. EXIOBASE v3.3 is used in this project. \nDemo version: http://cml.liacs.nl:8080/ramascene/\n\n[![License: GPL v3](https://img.shields.io/badge/License-GPL%20v3-blue.svg)](https://www.gnu.org/licenses/gpl-3.0)\n[![Contributions welcome](https://img.shields.io/badge/contributions-welcome-brightgreen.svg)](resources/docs/CONTRIBUTING.md)\n\n# Developers Guide\n---\nhttp://rama-scene.readthedocs.io/en/latest/\n\n# Getting started\n---\n### Retrieve the raw datasets\n\n \nEXIOBASE-Rama-Scene (modified version including secondary materials): \n\nhttp://doi.org/10.5281/zenodo.3533196\n\n### Clone the project \n``` \ngit clone https://bitbucket.org/CML-IE/rama-scene.git\n```\n\n### Create a virtual environment (python3.5 or higher) and install the app requirements (make sure you have python-dev installed via apt-get)\n``` \n$ pip3 install -r requirements.txt \n```\n\n### Install node.js (node version: 3.10.10 or higher)\n``` \n$ sudo apt-get update\n$ sudo apt-get install nodejs\n```\n> Note: On debian apt install nodejs-legacy\n\n### Install redis (for Django Channels)\n```\n$ sudo apt install redis-server\n```\n\n### Install rabbitMQ (for Celery)\n\n``$ sudo apt-get install -y erlang``\n\n``$ sudo apt-get install rabbitmq-server``\n\nThen enable and start the RabbitMQ service:\n\n``$ systemctl enable rabbitmq-server``\n\n``$ systemctl start rabbitmq-server``\n\nCheck the status to make sure everything is running:\n``$ systemctl status rabbitmq-server``\n\n\n> Note: Perform all next steps in the virtualenv and in the root folder of the project\n\n### Set the following environment variables (see sample-dev-env.sh):\n```\nexport DJANGO_SETTINGS_MODULE=ramasceneMasterProject.config.dev\nexport DATASETS_VERSION=[version downloaded e.g. 
v3]\nexport DATASETS_DIR=my/path/to/datasets\nexport OPENBLAS_NUM_THREADS=[adjust according to how many cores you want to use]\n```\nIf you are on Linux and using the OpenBLAS library for NumPy, it is advised to set the number of threads NumPy uses. To find which library is used in Python:\n```\n>>>import numpy as np\n>>>np.__config__.show()\n```\n\n\n### Prepare the database\n```\n$ python3 manage.py makemigrations\n$ python3 manage.py migrate\n```\n\n### Populate the database \n```\n$ python3 manage.py populateHierarchies\n```\n\n### Prepare static resources\n```\n$ npm install\n```\n\n### Build React bundle\n```\n$ ./node_modules/.bin/webpack --config dev-webpack.config.js \n```\n\n### Start Celery\nStart the default module to enable handling of analytical calculations:\n```\n$ celery -A ramasceneMasterProject worker -l info --concurrency 1 --queue calc_default -n worker1.%h\n```\nStart the modelling module to enable handling of modelling calculations:\n```\n$ celery -A ramasceneMasterProject worker -l info --concurrency 1 --queue modelling -n worker2.%h\n```\n\n### Start the development server\n```\n$ python3 manage.py runserver\n```\n\nAccess the app via the web browser: http://127.0.0.1:8000/ramascene/\n\n### [Optional] enable debug logging\n\nTo enable debug logging, open the ramasceneMasterProject/config/dev.py file.\nUncomment the ""logging for Django"" section.\n\n### [Optional] run tests\nIn case you want to run tests you can perform unit tests in the root folder:\n```\n$ python3 manage.py test -v2\n```\n\nFor integration tests you need to start the celery workers first (explained above). \nYou can perform the integration test with the following command:\n```\n$ pytest -vs\n```\n\nIf the test has succeeded, you\'ll need to repopulate the database with the following command:\n```\n$ python3 manage.py populateHierarchies\n```\n\n### Core dependencies\n---\nThe app uses [Celery 4.1.0](http://docs.celeryproject.org/en/latest/django/first-steps-with-django.html) and [Django Channels 2.1.5](https://channels.readthedocs.io/en/latest/)\n'",,"2019/07/19, 08:33:50",1559,CUSTOM,2,332,"2023/02/24, 10:55:24",0,67,67,47,243,0,0.0,0.4033333333333333,"2020/01/25, 14:57:20",V1.5,0,4,false,,false,false,,,https://github.com/CMLPlatform,,,,,https://avatars.githubusercontent.com/u/29304416?v=4,,, CircuMat,"RaMa-Scene fork, CircuMat focuses on NUTS2 level classification as opposed to Rama-Scene country level analysis tool.",CMLPlatform,https://github.com/CMLPlatform/CircuMAT.git,github,,Circular Economy and Waste,"2022/02/10, 15:05:19",3,0,0,false,JavaScript,CML-IE-EB,CMLPlatform,"JavaScript,CSS,Python,HTML,SCSS,Shell,Dockerfile",,"b""# CircuMat\n---\nCircuMat is a modified (forked) version of the Rama-Scene EIT Raw Materials project related to analyzing Environmentally Extended Input-Output (EEIO) tables. 
CircuMat focuses on NUTS2-level classification, as opposed to Rama-Scene's country-level analysis.\n\n[![License: GPL v3](https://img.shields.io/badge/License-GPL%20v3-blue.svg)](https://www.gnu.org/licenses/gpl-3.0)\n[![Contributions welcome](https://img.shields.io/badge/contributions-welcome-brightgreen.svg)](resources/docs/CONTRIBUTING.md)\n\n# Developers Guide\n---\nFor more information on the tool architecture, please refer to Rama-Scene's documentation: http://rama-scene.readthedocs.io/en/latest/\n\n# Getting started\n---\n### Retrieve the raw datasets\n\n\n* EXIOBASE-Rama-Scene (v4 - modified version including secondary materials + CircuMat Eurostat data): \n\nhttps://surfdrive.surf.nl/files/index.php/s/bEVnoyJUeYMUiyr\n\npass: circumat\n\nDownload the circumat_v4_clean.zip folder.\n\n### Clone the project \n``` \n$ git clone https://SidneyNiccolson@bitbucket.org/CML-IE/circumat.git\n```\n\n### Create a virtual environment (python3.5 or higher) and install the app requirements (make sure you have python-dev installed via apt-get)\n``` \n$ pip3 install -r requirements.txt \n```\n\n### Install node.js (node version: 3.10.10 or higher)\n``` \n$ sudo apt-get update\n$ sudo apt-get install nodejs\n```\n> Note: On debian apt install nodejs-legacy\n\n### Install redis (for Django Channels)\n```\n$ sudo apt install redis-server\n```\n\n### Install rabbitMQ (for Celery)\n\n``$ sudo apt-get install -y erlang``\n\n``$ sudo apt-get install rabbitmq-server``\n\nThen enable and start the RabbitMQ service:\n\n``$ sudo systemctl enable rabbitmq-server``\n\n``$ sudo systemctl start rabbitmq-server``\n\nCheck the status to make sure everything is running:\n``$ sudo systemctl status rabbitmq-server``\n\n\n> Note: Perform all next steps in the virtualenv and in the root folder of the project\n\n### Set the following environment variables (see sample-dev-env.sh):\n```\nexport DJANGO_SETTINGS_MODULE=circumatMasterProject.config.dev\nexport DATASETS_VERSION=[version downloaded e.g. v3]\nexport DATASETS_DIR=my/path/to/datasets (make sure that inside this folder is a folder containing the year 2011)\nexport OPENBLAS_NUM_THREADS=\n```\nIf you are on Linux and using the OpenBLAS library for NumPy, it is advised to set the number of threads NumPy uses. 
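The thread count can also be capped from inside Python, as long as this happens before NumPy is imported; a minimal sketch (the value 4 is an arbitrary example):\n```\nimport os\n\n# Must be set before the first 'import numpy', because OpenBLAS\n# starts its thread pool when the library is loaded.\nos.environ['OPENBLAS_NUM_THREADS'] = '4'\n\nimport numpy as np\n```\n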
To find which library is used in python:\n```\n>>>import numpy as np\n>>>np.__config__.show()\n```\n\n\n### Prepare the database\n```\n$ python3 manage.py makemigrations\n$ python3 manage.py migrate\n```\n\n### Populate the database \n```\n$ python3 manage.py populateHierarchies\n```\n\n### Prepare static resources (npm version 4.6.1 or higher)\n```\n$ npm install\n```\n\n### Build React bundle\n```\n$ ./node_modules/.bin/webpack --config dev-webpack.config.js \n```\n\n### Start Celery\nStart the celery module to enable handling of calculations:\n```\n$ celery -A circumatMasterProject worker -l info --concurrency 1 \n```\n\n### Start the development server\n```\n$ python3 manage.py runserver\n```\n\nAccess the app via the web browser: http://127.0.0.1:8000/circumat/\n\n\n### Core dependencies\n---\n#### TO BE UPDATED\nThe app uses [Celery 4.1.0](http://docs.celeryproject.org/en/latest/django/first-steps-with-django.html) and [Django Channels 2.1.5](https://channels.readthedocs.io/en/latest/)\n\n\n# Contributors\n(Ramascene & Circumat) Back-end developer: Sidney Niccolson & Franco Donati (CML)\n(Ramascene) Front-end developer: Franco Donati (CML) & Bart Daniels (VITO)\n(Ramascene) IO calculations: Arjan de Koning, Hale Cetinay & Franco Donati (CML)\n(Ramascene) IO modelling: Franco Donati (CML)\nEurostat data: Elmer Rietveld (TNO)\nProject manager: Hale Cetinay Iyicil & Franco Donati (CML)""",,"2019/09/16, 08:06:31",1500,CUSTOM,0,7,"2021/12/08, 09:57:25",0,34,34,0,686,0,0.0,0.0,,,0,1,false,,false,false,,,https://github.com/CMLPlatform,,,,,https://avatars.githubusercontent.com/u/29304416?v=4,,, SwolfPy,"A free, multi-platform, Python-based, open-source, SWM LCA optimization framework with built-in parametric and Monte Carlo sensitivity and uncertainty analysis capabilities.",SwolfPy-Project,https://github.com/SwolfPy-Project/swolfpy.git,github,"solid-waste-management,life-cycle-assessment,optimization,monte-carlo-simulation,uncertainty-assessment,brightway2,municipal-solid-waste,pyside2,swolfpy,python",Circular Economy and Waste,"2023/08/18, 05:11:14",18,0,8,true,Python,Solid Waste Optimization Life-cycle Framework in Python (SwolfPy),SwolfPy-Project,"Python,Makefile",https://swolfpy-project.github.io/,"b'.. General\n\n================================================================\nSolid Waste Optimization Life-cycle Framework in Python(SwolfPy)\n================================================================\n\n.. image:: https://img.shields.io/pypi/v/swolfpy.svg\n :target: https://pypi.python.org/pypi/swolfpy\n\n.. image:: https://img.shields.io/pypi/pyversions/swolfpy.svg\n :target: https://pypi.org/project/swolfpy/\n :alt: Supported Python Versions\n\n.. image:: https://img.shields.io/pypi/l/swolfpy.svg\n :target: https://pypi.org/project/swolfpy/\n :alt: License\n\n.. image:: https://img.shields.io/pypi/dm/swolfpy.svg?label=Pypi%20downloads\n :target: https://pypi.org/project/swolfpy/\n :alt: Downloads\n\n.. image:: https://img.shields.io/pypi/format/swolfpy.svg\n :target: https://pypi.org/project/swolfpy/\n :alt: Format\n\n.. image:: https://readthedocs.org/projects/swolfpy/badge/?version=latest\n :target: https://swolfpy.readthedocs.io/en/latest/?badge=latest\n :alt: Documentation Status\n\n.. image:: https://github.com/SwolfPy-Project/swolfpy/actions/workflows/python-app.yml/badge.svg?branch=master\n :target: https://github.com/SwolfPy-Project/swolfpy/actions/workflows/python-app.yml\n :alt: Test\n\n.. 
image:: https://zenodo.org/badge/395802952.svg\n :target: https://zenodo.org/badge/latestdoi/395802952\n :alt: DOi\n\n.. image:: https://img.shields.io/badge/JIE%20DOI-10.1111%2Fjiec.13236-blue\n :target: https://doi.org/10.1111/jiec.13236\n :alt: JIE DOI\n\n* Free software: GNU GENERAL PUBLIC LICENSE V2\n* Website: https://swolfpy-project.github.io\n* Documentation: https://swolfpy.readthedocs.io\n* Repository: https://github.com/SwolfPy-Project/swolfpy\n\n\nFeatures\n--------\n\n* **Life-cycle assessment of Municipal Solid Waste (MSW) systems**\n\n * Comparative LCA\n * Contribution analysis\n * LCI report\n\n* **Monte Carlo simulation**\n\n * Uncertainty analysis\n * Data visualization (distributions & correlations)\n\n* **Optimization**\n\n * Minimize environmental burdens or cost subject to a number of technical or policy-related constraints\n\n\n.. list-table:: **Life-cycle process models**\n :widths: auto\n :header-rows: 1\n\n * -\n - Process model\n - Description\n * - 1\n - Landfill (**LF**)\n - Calculates emissions, material use, and energy use associated with construction, operations,\n closure and post-closure activities, landfill gas and leachate management, and carbon storage.\n * - 2\n - Waste-to-Energy (**WTE**)\n - Calculates emissions, mass flows, and resource use and recovery for the mass burn WTE process.\n * - 3\n - Gasification & Syngas Combustion (**GC**)\n - Calculates emissions, mass flows, and resource use and recovery for the GC process (Produced syngas from\n gasification is combusted to produce electricity by steam turbine).\n * - 4\n - Composting (**Comp**)\n - Calculates emissions, mass flows, and resource use and recovery for aerobic composting process and final use of compost.\n * - 5\n - Home Composting (**HC**)\n - Calculates emissions, mass flows, and resource use and recovery for home composting process and final use of compost.\n * - 6\n - Anaerobic Digestion (**AD**)\n - Calculates emissions, mass flows, and resource use and recovery for anaerobic digestion process and final use of compost.\n * - 7\n - Single-Stream Material Recovery facility (**SS_MRF**)\n - Calculates cost, emissions, and energy use associated with material recovery facilities.\n * - 8\n - Refuse-Derived Fuel (**RDF**)\n - Calculates cost, emissions, and energy use associated with RDF production facilities.\n * - 9\n - Reprocessing (**Reproc**)\n - Calculates emissions, mass flows, and resource use and recovery associated with recycling materials.\n * - 10\n - Transfer Station (**TS**)\n - Calculates cost, emissions, and energy use associated with Transfer Stations.\n * - 11\n - Single Family Collection (**SF_Col**)\n - Calculates cost, emissions, and fossil fuel use associated with MSW collection from single family sector.\n * - 12\n - Multi Family Collection (**MF_Col**)\n - Calculates cost, emissions, and fossil fuel use associated with MSW collection from multi-family sector.\n * - 13\n - Commercial Collection (**COM_Col**)\n - Calculates cost, emissions, and fossil fuel use associated with MSW collection from commercial sector.\n * - 14\n - Animal Feed (**AnF**)\n - Calculates cost, emissions, and energy use associated with conversion of food waste to animal feed and final use of produced feed.\n\n\n.. 
Installation\n\nInstallation\n------------\n1- Download and install miniconda from: https://docs.conda.io/en/latest/miniconda.html\n\n2- Update conda in a terminal window or anaconda prompt::\n\n conda update conda\n\n3- Create a new environment for swolfpy::\n\n conda create --name swolfpy python=3.9 graphviz\n\n4- Add Graphviz executables to your system PATH (This step is optional; it enables plotting the SWM network). You can find Graphviz executables in ``\\\\miniconda3\\\\envs\\\\swolfpy\\\\Library\\\\bin\\\\graphviz`` folder or search for ``dot.exe`` file in your system. Add the directory to the ``Path`` variable in your environment variables.\n\n5- Activate the environment::\n\n conda activate swolfpy\n\n6- Install swolfpy in the environment::\n\n pip install swolfpy\n\n7- Open python to run swolfpy::\n\n python\n\n8- Run swolfpy in python::\n\n import swolfpy as sp\n sp.swolfpy()\n\n.. endInstallation\n'",",https://zenodo.org/badge/latestdoi/395802952\n,https://doi.org/10.1111/jiec.13236\n","2021/08/13, 21:34:14",803,GPL-2.0,11,619,"2023/08/23, 22:09:18",0,5,11,7,63,0,0.0,0.16763005780346818,"2023/06/04, 21:41:16",v1.0.1,0,2,false,,false,true,,,https://github.com/SwolfPy-Project,swolfpy-project.github.io/,United States of America,,,https://avatars.githubusercontent.com/u/89031688?v=4,,, TrashMob,"A website dedicated to organizing groups of people to clean up the world we live in. Users create cleanup events, publicize them, and recruit people to join up, as well as ask for assistance from communities and partners.",TrashMob-eco,https://github.com/TrashMob-eco/TrashMob.git,github,"environment,sustainability,pollution,litter,trash",Circular Economy and Waste,"2023/10/22, 15:07:34",25,0,8,true,C#,TrashMob.eco,TrashMob-eco,"C#,TypeScript,HTML,Bicep,CSS,SCSS,PowerShell,JavaScript",,"b'# TrashMob.eco\n\n**Meet up. Clean up. Feel good.**\n\n# What is TrashMob?\nTrashMob is a website dedicated to organizing groups of people to clean up the world we live in. Users create cleanup events, publicize them, and recruit people to join up, as well as ask for assistance from communities and partners to help haul away the garbage once it is gathered. The idea is to turn what can be an intimidating process for event organizers into a few mouse clicks and simple forms. And once the process is simple, events will spring up all over the world, and the cleanup of the world can begin.\n\n# Where did this idea come from?\nYears ago, Scott Hanselman (and others at Microsoft) built out the NerdDinner.com site as a demo of the capabilities of ASP.NET MVC. I actually went to a bunch of the nerd dinners. They were fantastic and had a huge role in my career, including eventually leading me to join Microsoft. This site is based on both that code and the idea that getting people together to do small good things results in larger good things in the long term.\n\nMy passion is fixing problems we have on the planet with pollution and climate change. I\'ve been thinking about what technology can do to help in these areas, without creating more problems. And I keep coming back to the thought that a lot of this is a human problem. People want to help and they want to fix things, but they don\'t know where to start. Other people have ideas on where to start, but not enough help to get started.\n \nI read about a guy in California named [Edgar McGregor](https://twitter.com/edgarrmcgregor), who has spent over 1100 days cleaning up a park in his community, two pails of litter at a time, and I thought, that was a great idea. 
His actions inspired me to get out and clean up a local park one Saturday. It was fun and rewarding and other people saw what I was doing on my own and I know I have already inspired others to do the same. And then I passed by an area of town that is completely covered in trash and I thought ""this is too much for me alone. It would be great to have a group of people descend on this area like a mob and clean it up in an hour or two"". And my idea for TrashMob.eco was born.\n \nBasically, TrashMob is the NerdDinner.com site re-purposed to allow people to start mobs of their own to tackle cleanup or whatever needs doing. And I keep coming up with more and more ideas for it. I\'m hoping this site grows organically because of the good that we can do when we get together.\n\n## What is the website address?\n\nTo see what is currently deployed to the prod environment, go to:\nhttps://www.trashmob.eco\n\nTo see what is currently deployed to the dev environment, go to:\nhttps://as-tm-dev-westus2.azurewebsites.net/\n\n# FAQ \n## What is the current state of this project?\n\nAs of 5/15/2022, we are now in full production launch. The site is up and running and people are using it to help organize litter cleanups! TrashMob.eco is now a 501(c)(3) non-profit in the United States. We are working on new features all the time!\n\n## Are you looking for contributors?\n\nABSOLUTELY! Ping [info@trashmob.eco](mailto:info@trashmob.eco) if you want to get involved. All kinds of skills needed, from reactjs to website design, to aspnet core, to .NET MAUI, to PowerBI, to deployment / github skills. If you have a couple of hours a week, and want to contribute, let us know!\n \n## I have an idea for a TrashMob feature!\n\nFantastic! We want to build this out to be the best site on the internet! But before you send us your idea, please take a look at the lists of [projects](https://github.com/orgs/TrashMob-eco/projects) and [issues](https://github.com/TrashMob-eco/TrashMob/issues) we already have going. We may already be working on your idea. If your idea is not there, feel free to reach out to us at [info@trashmob.eco](mailto:info@trashmob.eco)\n\n# Development Notes\n\n## Getting Started - Development\n\n1. You must install the .NET 6 SDK\n1. Install Visual Studio Code\n1. Connect to github and clone the repo\n1. Send your github id to info@trashmob.eco to be added as a contributor to the repository\n1. Install the [Azure CLI](https://docs.microsoft.com/en-us/cli/azure/install-azure-cli-windows?tabs=azure-cli)\n\n### To use the Shared Dev Environment\nIf you are not doing any database changes (i.e. design work, error handling, etc) you can save yourself time and money by doing the following and using the shared Dev environment:\n1. Send the email address you use on GitHub to [info@trashmob.eco](mailto:info@trashmob.eco)\n1. TrashMob will add you as a contributor to the Sandbox subscription\n1. TrashMob will add you to the Dev KeyVault with Secret Get and List permissions\n1. Log in to the Sandbox subscription, and go to the [Dev Azure SQL Database](https://portal.azure.com/#@jobeedevids.onmicrosoft.com/resource/subscriptions/39a254b7-c01a-45ab-bebd-4038ea4adea9/resourceGroups/rg-trashmob-dev-westus2/providers/Microsoft.Sql/servers/sql-tm-dev-westus2/overview)\n1. Click on Firewalls and Virtual Networks\n1. Add a new Rule with your email address as the name, with the start and end ip address set as your Client IP Address (see the line above the form for what Azure thinks your IP address is)\n1. **Save** changes\n1. 
Run the following script on your machine from the TrashMob folder in the project to set up your dev machine to run the project locally. You must be logged into Azure in your PowerShell window in the correct subscription\n```\n.\\setupdev.ps1 -environment dev -region westus2 -subscription 39a254b7-c01a-45ab-bebd-4038ea4adea9\n```\n\n### To set up your own environment to test in:\nYou must use this if you are making database changes to ensure you do not break the backend for everyone else:\n\n1. Follow the Infrastructure Deployment Steps [here](.\\Deploy\\readme.md).\n1. Run the following script on your machine from the TrashMob folder in the project to set up your dev machine to run the project locally. You must be logged into Azure in your PowerShell window in the correct subscription\n```\n.\\setupdev.ps1 -environment <environment> -region <region> -subscription <subscription id>\n\ni.e.\n.\\setupdev.ps1 -environment jb -region westus2 -subscription <subscription id>\n\n```\n\n## Setting up your launchsettings.json for website development\n\nBecause of RedirectUrls, life is a lot easier if you stick with the same ports as everyone else. \n\ncd to the TrashMob/Properties folder:\nAdd the following launchsettings.json file (may need to create it if you don\'t have it already): \n\n```\n{\n ""iisSettings"": {\n ""windowsAuthentication"": false,\n ""anonymousAuthentication"": true,\n ""iisExpress"": {\n ""applicationUrl"": ""http://localhost:44332/"",\n ""sslPort"": 44332\n }\n },\n ""profiles"": {\n ""IIS Express"": {\n ""commandName"": ""IISExpress"",\n ""launchBrowser"": true,\n ""environmentVariables"": {\n ""ASPNETCORE_ENVIRONMENT"": ""Development""\n }\n },\n ""TrashMob"": {\n ""commandName"": ""Project"",\n ""launchBrowser"": true,\n ""environmentVariables"": {\n ""ASPNETCORE_ENVIRONMENT"": ""Development""\n },\n ""applicationUrl"": ""https://localhost:44332;http://localhost:5000""\n }\n }\n}\n\n```\n\n## Setting up your environment for Docusign Integration Testing\n\nDocusign is integrated with TrashMob.eco to ensure that a user has signed the appropriate liability waivers before attending a\nTrashMob.eco event. If the user has not previously signed a waiver, or the waiver they signed is out of date, they will be asked to \nsign a new waiver either when they try to create a new event, or when they sign up for an existing event.\n\nFor developers, there are a number of secrets that need to be set up in your environment before you can test with an identity which has not signed the\nwaiver. \n\nIf you are a new developer and have no need to test the Docusign flow, simply sign into the Dev site for TrashMob.eco, and create or register for an event. \nWhen you try to do this, the system will ask you to sign the waivers. Follow the instructions, and then you can begin testing locally without needing the \nintegration to work. This is, by far, the easiest way to get started on non-Docusign features, and is highly recommended.\n\nIf you are, however, attempting to test or alter the Docusign workflow, here are the steps you need to take:\n\n1. Create a Developer Account at https://appdemo.docusign.com/home\n2. Click on **Settings** in the top action bar\n3. Click on Integrations / Apps and Keys in the Left Action Bar\n4. This should bring you to a page with ""My Account Information"" at the top.\n5. Click Add App and Integration Key\n6. Set the App Name\n7. Set the RedirectUrl to https://localhost:44332/waivers\n8. Open Visual Studio Code and a terminal window within\n9. cd trashmob\n10. 
Set the following user secrets using `dotnet user-secrets set` with the following names / values\n\n| Secret Name | Where to get the value |\n| --- | --- |\n| DocusignAccountId | API Account Id under My Account Information |\n| DocusignImpersonatedUserId | User Id under My Account Information |\n| DocusignClientId | Integration Key from the Apps and Integration Keys section |\n| DocusignAuthServer | account-d.docusign.com |\n| DocusignPrivateKeyEncoded | Click the Actions pull down under apps and integration keys, and click edit. Then under service integration, click generate RSA. Copy the value for PrivateKey. Then you need to base64 encode this value before setting that as a secret |\n| DocusignBasePath | https://demo.docusign.net/restapi |\n| DocusignRedirectHome | https://localhost:44332/waivers | \n\n11. Set a breakpoint in the DocusignManager.SendEnvelope method\n12. Start the code in the debugger\n13. Log in to TrashMob via the debugger browser\n14. Click ""Create an Event""\n15. When the breakpoint is reached, step until you get into the catch handler after the AuthenticateWithJWT call\n16. Step into the if statement and get the value for the url that is created.\n17. Open a browser, and paste that Url into the browser.\n18. Accept the consent as instructed in the window.\n19. Close the browser\n20. Restart your debugger session\n21. You should now be able to go through the regular Docusign Waiver form flow.\n\n## Getting Started - Mobile Development\n\nThe mobile app is written using .NET MAUI. It requires a few prerequisites in order to get it compiling and running.\n\n1. Ensure you have installed the latest version of Visual Studio and the .NET MAUI Framework option\n2. Install Android Studio https://developer.android.com/studio \n3. Create an Android Emulator device in Android Studio\n4. Load the TrashMobMobileApp.sln Project in Visual Studio.\n5. Set your startup project to TrashMobMobileApp\n6. In order to have the maps feature work, you will need to do the following:\n 1. Create a Google Maps account: https://developers.google.com/maps/gmp-get-started\n 2. Get your Google API Key from your Google Project\n 3. Create a gradle.properties file in your GRADLE_USER_HOME (i.e. c:\\users\\<username>\\.gradle)\n 4. Add the following line to your gradle properties file: \n ```\n GOOGLE_API_KEY = """"\n ```\n 5. Restart your emulator. Maps should work now\n\n Never check in any file that contains your real api key.\n\n## To Build the Web App:\n\nIn the Trashmob project folder, run the following command:\n```\ndotnet build\n```\n\n## To Run the Web App:\n\nIn the Trashmob project folder, run the following command:\n```\ndotnet run\n```\n\nor if using Visual Studio, set the TrashMob project as the startup project and hit F5.\n\nIf a browser does not open, open one for yourself and go to https://localhost:44332\n\nIf the app loads, but data does not, it is likely that the firewall rule is not set up correctly. Sometimes the IP address the Web Portal displays is different from the IP address of your machine. If you run into this issue, look in the debug window of VSCode. It will report a failure, and show that your actual IP Address is not enabled to access the database.\n1. Copy the IP Address from the error in VS Code\n1. 
Log in to the Sandbox subscription, and go to the [Dev Azure SQL Database](https://portal.azure.com/#@jobeedevids.onmicrosoft.com/resource/subscriptions/39a254b7-c01a-45ab-bebd-4038ea4adea9/resourceGroups/rg-trashmob-dev-westus2/providers/Microsoft.Sql/servers/sql-tm-dev-westus2/overview)\n1. Click on Firewalls and Virtual Networks\n1. Add a new Rule with your email address as the name, with the start and end ip address set as your Client IP Address (see the line above the form for what Azure thinks your IP address is)\n1. **Save** changes\n\n## Testing the Web App\n\nAs the site\'s feature set has grown, so have the scenarios that need to be tested after large changes have been made. Please see the [Test Scenarios](./TestScenarios.md) document for a list of checks that should be run. At some point we will need to automate these tests.\n\n## To Update the Database Model\nThe project uses Entity Framework Core V6 Model-First database updates.\n\n1. Update the models / MobDbContext as needed in the repo.\n2. To create the migration, do either of the following steps:\n\nIn VS Code\n```\ndotnet ef migrations add <MigrationName>\n\n```\n\nor in Visual Studio Package Manager Console\n\nFirst, set the Default Project to **TrashMob.Shared**, then run the following command:\n\n```\n EntityFrameworkCore\\Add-Migration <MigrationName>\n```\n\n3. In VS Code in the TrashMob Folder, run the following command\n\n```\ndotnet ef database update\n```\n\n## Allowing the App To Send Email\n\nThis is a pay-per-use feature, so, for the most part, we\'re going to try to limit the number of people developing with this. To avoid sending email, make sure to set the following user secret on your dev box \n```\n dotnet user-secrets set ""sendGridApiKey"" ""x""\n```\n\nTo test sending email, copy the ""sendGridApiKey"" from the dev keyvault to your machine and repeat the above, substituting in the real key. \n\n## A note on Azure Maps usage\nThe call to find the distance between two points in Azure Maps is only available in S1 (Gen 1) or Gen2 Maps. This is significantly more expensive than the S0 maps, so for now, we default to S0 for all dev deployments, and have manually set Prod to Gen2. It is not recommended to rerun the infrastructure deployments to Prod, as this will overwrite this setting.\n\nIn the future, we may want to optimize the use of this function to reduce costs.\n\n## How do I deploy the Azure Web App from GitHub?\nThe Dev site is automatically deployed with each push to the Main branch via a GitHub action. This is the preferred method of updating the Development Server. If you need to push an update from your development machine instead, please let the team know that there are changes in the environment pending checkin.\n\nThe Production site is manually deployed via a GitHub action from the release branch. This is the ONLY way production should be updated.\n\n## How do I deploy the Azure Web App from my PC?\nUse Visual Studio Publish to publish the site to the dev server.\n\nIf setting up a new environment, you will need to add the IP Address of the App Service to the list of IP Addresses accessible in the SQL Server. This needs to be automated in the future to make sure that a change to an IP address doesn\'t break the site.\n\n## The site is asking me to login\nIf you try to access a secure page, you will need a user id on the site. When you hit a secured page, the site will redirect you to a sign in page. Click the Sign up now link at the bottom of the login box. 
Multiple identity providers are now available, including Facebook, Twitter, Google, and Microsoft, along with the TrashMob tenant itself if you prefer not to use an integrated signup.\n\n## How to Change Mobile App from Test to Prod\nIt is currently hard-coded in the Mobile app that if you run a Debug build, you will point to the test environment, and if you run the release build, you will point to the production environment. \n\n## My Android Pull Request is Failing with Unable to Open jar File\nIn Debug mode, by default, the Android package format is set to apk, and does not build the bundle needed for signing. In order to change that:\n1. Open the TrashMobMobileApp.sln in Visual Studio\n2. Right click on the TrashMobMobileApp project\n3. Select ""Properties""\n4. Go to the Android Settings / Options\n5. Change the Android Package Format for Debug & net7.0-android to ""bundle"" instead of ""apk"".\n6. Save your changes and push to your branch.\n\nNote: this may make deployments to your local emulator slower (more data must be copied into the emulator session). You can change this back to ""apk"" for local development, but failure to switch it back to ""bundle"" before checkin will cause the PR build to fail. There may be a way to pass this setting in on the command line for the publish step. That has not yet been investigated.\n\n## How do I get a test distribution of the Mobile App?\nWe currently use Microsoft App Center for building and distributing the Mobile apps. In order to get notified of a new distribution (or to download it):\n1. Send a note to info@trashmob.eco requesting access to App Center. Please include the email address you wish to use to access App Center in the request and why you want to use a dev build.\n2. A TrashMob.eco admin will review your request and if approved, will add you to the testers group.\n3. You will receive an email notification when your request has been approved.\n4. Click on the link to complete your onboarding.\n5. You can either download the latest package at that point, or wait for the next build to be completed. 
You will be notified when a new version is available.\n\n## How do I get a production distribution of the Mobile App?\nThe production mobile app can be downloaded here:\n\n[Android](https://play.google.com/store/apps/details?id=eco.trashmob.trashmobmobileapp)\n\n[iOS](https://apps.apple.com/us/app/trashmob/id1599996743) \n'",,"2021/03/13, 00:27:38",956,Apache-2.0,1074,2738,"2023/10/22, 15:34:09",82,943,1131,471,3,0,0.1,0.38288288288288286,,,3,19,false,,true,true,,,https://github.com/TrashMob-eco,www.trashmob.eco,United States of America,,,https://avatars.githubusercontent.com/u/98231723?v=4,,, rgbif,Interface to the Global Biodiversity Information Facility API.,ropensci,https://github.com/ropensci/rgbif.git,github,"gbif,api,data,biodiversity,species,rstats,r,spocc,r-package,lifewatch,oscibio",Biodiversity and Species Distribution,"2023/09/11, 07:31:41",141,0,18,true,R,rOpenSci,ropensci,"R,HTML,Makefile",https://docs.ropensci.org/rgbif,"b'# rgbif \n\n[![Project Status: Active \xe2\x80\x93 The project has reached a stable, usable state and is being actively developed.](https://www.repostatus.org/badges/latest/active.svg)](https://www.repostatus.org/#active)\n[![cran checks](https://badges.cranchecks.info/worst/rgbif.svg)](https://cran.r-project.org/web/checks/check_results_rgbif.html)\n[![R-CMD-check](https://github.com/ropensci/rgbif/workflows/R-CMD-check/badge.svg)](https://github.com/ropensci/rgbif/actions?query=workflow%3AR-CMD-check)\n[![real-requests](https://github.com/ropensci/rgbif/workflows/R-check-real-requests/badge.svg)](https://github.com/ropensci/rgbif/actions?query=workflow%3AR-check-real-requests)\n[![codecov.io](https://codecov.io/github/ropensci/rgbif/coverage.svg?branch=master)](https://codecov.io/github/ropensci/rgbif?branch=master)\n[![rstudio mirror downloads](https://cranlogs.r-pkg.org/badges/rgbif)](https://github.com/r-hub/cranlogs.app)\n[![cran version](https://www.r-pkg.org/badges/version/rgbif)](https://cran.r-project.org/package=rgbif)\n[![DOI](https://zenodo.org/badge/2273724.svg)](https://zenodo.org/badge/latestdoi/2273724)\n\n**rgbif** is an R package which gives you access to [GBIF](https://www.gbif.org/) mediated data via its [REST API](https://www.gbif.org/developer/summary). \n\n**GBIF** (the Global Biodiversity Information Facility) is an international network and data infrastructure funded by the world\'s governments and aimed at providing anyone, anywhere, open access to data about all types of life on Earth.\n\n## Installation\n\n```r\ninstall.packages(""rgbif"") # CRAN version\n```\n\n```r\npak::pkg_install(""ropensci/rgbif"") # dev version\n```\n\n```r \ninstall.packages(""rgbif"", repos=""https://dev.ropensci.org"") # dev version\n```\n\n## Getting Started \n\nThere are several long-form articles that can help get you started:\n\n* [Getting Started](https://docs.ropensci.org/rgbif/articles/rgbif.html)\n* [Getting Occurrence Data From GBIF](https://docs.ropensci.org/rgbif/articles/getting_occurrence_data.html)\n* [Working With Taxonomic Names](https://docs.ropensci.org/rgbif/articles/taxonomic_names.html)\n\nMost GBIF users are interested in getting lat-lon occurrence records. \n\n```r \nocc_search(scientificName = ""Pan troglodytes"")\nocc_data(scientificName = ""Pan troglodytes"")\n```\n\nIt is usually better to get occurrence records using a **taxonKey**. See the article [Working With Taxonomic Names](https://docs.ropensci.org/rgbif/articles/taxonomic_names.html). 
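For Python users, the same lookup is available through the [pygbif](https://github.com/sckott/pygbif) client listed under similar GBIF clients at the end of this README; a minimal, unofficial sketch of the equivalent calls:\n\n```python\nfrom pygbif import occurrences, species\n\n# Resolve the name against the GBIF backbone, then search with the key.\nkey = species.name_backbone(name=""Pan troglodytes"")[""usageKey""]\nresults = occurrences.search(taxonKey=key)\nprint(results[""count""])\n```\n\nThe equivalent rgbif calls:\n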
\n\n```r \ntaxonKey <- name_backbone(""Pan troglodytes"")$usageKey\nocc_search(taxonKey = taxonKey)\n```\n\nGBIF **strongly recommends** the use of `occ_download()` rather than `occ_search()` for serious research projects. See article [Getting Occurrence Data From GBIF](https://docs.ropensci.org/rgbif/articles/getting_occurrence_data.html). \n\nIt is required to set up your [GBIF credentials](https://docs.ropensci.org/rgbif/articles/gbif_credentials.html) to make downloads from GBIF. \n\n```r\nocc_download(pred(""taxonKey"", 5219534)) # 5219534 is the taxonKey for Pan troglodytes\n```\n\n## Citation \n\nUnder the terms of the GBIF data user agreement, users who download data agree to cite a DOI. Please see GBIF\xe2\x80\x99s [citation guidelines](https://www.gbif.org/citation-guidelines) and [Citing GBIF Mediated Data](https://docs.ropensci.org/rgbif/articles/gbif_citations.html).\n\nPlease also cite **rgbif** by running `citation(package = ""rgbif"")`.\n\n## Contributors\n\nThis list honors all contributors in alphabetical order. Code contributors are in bold.\n\n[adamdsmith](https://github.com/adamdsmith) - [AgustinCamacho](https://github.com/AgustinCamacho) - [AldoCompagnoni](https://github.com/AldoCompagnoni) - [AlexPeap](https://github.com/AlexPeap) - [andzandz11](https://github.com/andzandz11) - [AshleyWoods](https://github.com/AshleyWoods) - [AugustT](https://github.com/AugustT) - [barthoekstra](https://github.com/barthoekstra) - **[benmarwick](https://github.com/benmarwick)** - [cathynewman](https://github.com/cathynewman) - [cboettig](https://github.com/cboettig) - [coyotree](https://github.com/coyotree) - **[damianooldoni](https://github.com/damianooldoni)** - [dandaman](https://github.com/dandaman) - [djokester](https://github.com/djokester) - [dlebauer](https://github.com/dlebauer) - **[dmcglinn](https://github.com/dmcglinn)** - **[dmi3kno](https://github.com/dmi3kno)** - [dnoesgaard](https://github.com/dnoesgaard) - [DupontCai](https://github.com/DupontCai) - [ecology-data-science](https://github.com/ecology-data-science) - [EDiLD](https://github.com/EDiLD) - [elgabbas](https://github.com/elgabbas) - [emhart](https://github.com/emhart) - [fxi](https://github.com/fxi) - [ghost](https://github.com/ghost) - [gkburada](https://github.com/gkburada) - [hadley](https://github.com/hadley) - [Huasheng12306](https://github.com/Huasheng12306) - [ibartomeus](https://github.com/ibartomeus) - **[JanLauGe](https://github.com/JanLauGe)** - **[jarioksa](https://github.com/jarioksa)** - **[jeroen](https://github.com/jeroen)** - **[jhnwllr](https://github.com/jhnwllr)** - [jhpoelen](https://github.com/jhpoelen) - [jivelasquezt](https://github.com/jivelasquezt) - [jkmccarthy](https://github.com/jkmccarthy) - **[johnbaums](https://github.com/johnbaums)** - [jtgiermakowski](https://github.com/jtgiermakowski) - [jwhalennds](https://github.com/jwhalennds) - **[karthik](https://github.com/karthik)** - [kgturner](https://github.com/kgturner) - [Kim1801](https://github.com/Kim1801) - [ljuliusson](https://github.com/ljuliusson) - [ljvillanueva](https://github.com/ljvillanueva) - [luisDVA](https://github.com/luisDVA) - [martinpfannkuchen](https://github.com/martinpfannkuchen) - **[MattBlissett](https://github.com/MattBlissett)** - [MattOates](https://github.com/MattOates) - [maxhenschell](https://github.com/maxhenschell) - **[mdsumner](https://github.com/mdsumner)** - [no-la-ngo](https://github.com/no-la-ngo) - [Octoberweather](https://github.com/Octoberweather) - [omahs](https://github.com/omahs) - 
[Pakillo](https://github.com/Pakillo) - **[peterdesmet](https://github.com/peterdesmet)** - [PhillRob](https://github.com/PhillRob) - **[PietrH](https://github.com/PietrH)** - [poldham](https://github.com/poldham) - [qgroom](https://github.com/qgroom) - [raymondben](https://github.com/raymondben) - [rossmounce](https://github.com/rossmounce) - [sacrevert](https://github.com/sacrevert) - [sagitaninta](https://github.com/sagitaninta) - **[sckott](https://github.com/sckott)** - [scottsfarley93](https://github.com/scottsfarley93) - [simon-tarr](https://github.com/simon-tarr) - **[SriramRamesh](https://github.com/SriramRamesh)** - [stevenpbachman](https://github.com/stevenpbachman) - [stevensotelo](https://github.com/stevensotelo) - **[stevenysw](https://github.com/stevenysw)** - [TomaszSuchan](https://github.com/TomaszSuchan) - [tphilippi](https://github.com/tphilippi) - [vandit15](https://github.com/vandit15) - [vervis](https://github.com/vervis) - **[vijaybarve](https://github.com/vijaybarve)** - [willgearty](https://github.com/willgearty) - [Xuletajr](https://github.com/Xuletajr) - [yvanlebras](https://github.com/yvanlebras) - [zixuan75](https://github.com/zixuan75)\n\n## Meta\n\n* Please [report any issues or bugs](https://github.com/ropensci/rgbif/issues).\n* License: MIT\n* Get citation information for `rgbif` in R doing `citation(package = \'rgbif\')`\n* Please note that this package is released with a [Contributor Code of Conduct](https://ropensci.org/code-of-conduct/). By contributing to this project, you agree to abide by its terms.\n\nThere are similar GBIF clients in other languages :\n\n* [Python](https://github.com/sckott/pygbif)\n* [Ruby](https://github.com/sckott/gbifrb)\n* [PHP](https://gitlab.res-telae.cat/restelae/php-gbif)\n\nThis package is part of [spocc](https://github.com/ropensci/spocc), along with several other packages, that provide access to occurrence records from multiple data sources.\n\n'",",https://zenodo.org/badge/latestdoi/2273724","2011/08/26, 11:28:18",4443,CUSTOM,304,2034,"2023/09/25, 09:03:23",22,118,662,128,30,0,0.4,0.18429807052561542,"2023/09/11, 11:03:58",v3.7.8,0,19,false,,false,true,,,https://github.com/ropensci,https://ropensci.org/,"Berkeley, CA",,,https://avatars.githubusercontent.com/u/1200269?v=4,,, taxize,Allows users to search over many taxonomic data sources for species names (scientific and common) and download up and downstream taxonomic hierarchical information.,ropensci,https://github.com/ropensci/taxize.git,github,"taxonomy,data,api,biodiversity,biology,rstats,nomenclature,darwincore,taxize,api-wrapper,r,r-package",Biodiversity and Species Distribution,"2023/05/02, 20:02:50",254,0,23,true,R,rOpenSci,ropensci,"R,Makefile,TeX",https://docs.ropensci.org/taxize,"b'\n# taxize\n\n[![Project Status: Active \xe2\x80\x93 The project has reached a stable, usable\nstate and is being actively\ndeveloped.](https://www.repostatus.org/badges/latest/active.svg)](https://www.repostatus.org/#active)\n[![cran\nchecks](https://badges.cranchecks.info/worst/taxize.svg)](https://cran.r-project.org/web/checks/check_results_taxize.html)\n[![R-CMD-check](https://github.com/ropensci/taxize/workflows/R-CMD-check/badge.svg)](https://github.com/ropensci/taxize/actions/)\n[![codecov](https://codecov.io/gh/ropensci/taxize/branch/master/graph/badge.svg)](https://app.codecov.io/gh/ropensci/taxize)\n[![rstudio 
mirror\ndownloads](https://cranlogs.r-pkg.org/badges/taxize)](https://github.com/r-hub/cranlogs.app)\n[![cran\nversion](https://www.r-pkg.org/badges/version/taxize)](https://cran.r-project.org/package=taxize)\n\n`taxize` allows users to search over many taxonomic data sources for\nspecies names (scientific and common) and download up and downstream\ntaxonomic hierarchical information - among other things.\n\nPackage documentation: \n\n## Installation\n\n### Stable version from CRAN\n\n``` r\ninstall.packages(""taxize"")\n```\n\n### Development version from GitHub\n\nWindows users install Rtools first.\n\n``` r\ninstall.packages(""remotes"")\nremotes::install_github(""ropensci/taxize"")\n```\n\n``` r\nlibrary(\'taxize\')\n```\n\n## Screencast\n\n\n\n## Contributing\n\nSee our\n[CONTRIBUTING](https://github.com/ropensci/taxize/blob/master/.github/CONTRIBUTING.md)\ndocument.\n\n## Contributors\n\nCollected via GitHub Issues: honors all contributors in alphabetical\norder. Code contributors are in bold.\n\n[afkoeppel](https://github.com/afkoeppel) -\n[ahhurlbert](https://github.com/ahhurlbert) -\n[albnd](https://github.com/albnd) -\n[Alectoria](https://github.com/Alectoria) -\n[andzandz11](https://github.com/andzandz11) -\n**[anirvan](https://github.com/anirvan)** -\n[antagomir](https://github.com/antagomir) -\n**[arendsee](https://github.com/arendsee)** -\n[ArielGreiner](https://github.com/ArielGreiner) -\n[arw36](https://github.com/arw36) -\n[ashenkin](https://github.com/ashenkin) -\n**[ashiklom](https://github.com/ashiklom)** -\n[benjaminschwetz](https://github.com/benjaminschwetz) -\n**[benmarwick](https://github.com/benmarwick)** -\n[bienflorencia](https://github.com/bienflorencia) -\n[binkySallly](https://github.com/binkySallly) -\n[bomeara](https://github.com/bomeara) -\n[BridgettCollis](https://github.com/BridgettCollis) -\n[bw4sz](https://github.com/bw4sz) -\n**[cboettig](https://github.com/cboettig)** -\n[cdeterman](https://github.com/cdeterman) -\n[ChrKoenig](https://github.com/ChrKoenig) -\n[chuckrp](https://github.com/chuckrp) -\n[clarson2191](https://github.com/clarson2191) -\n[claudenozeres](https://github.com/claudenozeres) -\n[cmzambranat](https://github.com/cmzambranat) -\n[cparsania](https://github.com/cparsania) -\n[daattali](https://github.com/daattali) -\n[DanielGMead](https://github.com/DanielGMead) -\n[DarrenObbard](https://github.com/DarrenObbard) -\n[davharris](https://github.com/davharris) -\n[davidvilanova](https://github.com/davidvilanova) -\n[diogoprov](https://github.com/diogoprov) -\n**[dlebauer](https://github.com/dlebauer)** -\n[dlenz1](https://github.com/dlenz1) -\n[dougwyu](https://github.com/dougwyu) -\n[dschlaep](https://github.com/dschlaep) -\n**[EDiLD](https://github.com/EDiLD)** -\n[edwbaker](https://github.com/edwbaker) -\n[emhart](https://github.com/emhart) -\n[eregenyi](https://github.com/eregenyi) -\n[fdschneider](https://github.com/fdschneider) -\n[fgabriel1891](https://github.com/fgabriel1891) -\n[fischhoff](https://github.com/fischhoff) -\n**[fmichonneau](https://github.com/fmichonneau)** -\n**[fozy81](https://github.com/fozy81)** -\n**[gedankenstuecke](https://github.com/gedankenstuecke)** -\n[gimoya](https://github.com/gimoya) -\n[git-og](https://github.com/git-og) -\n[glaroc](https://github.com/glaroc) -\n**[gpli](https://github.com/gpli)** -\n[gustavobio](https://github.com/gustavobio) -\n[hlapp](https://github.com/hlapp) -\n**[ibartomeus](https://github.com/ibartomeus)** -\n**[Ironholds](https://github.com/Ironholds)** 
-\n[jabard89](https://github.com/jabard89) -\n[jangorecki](https://github.com/jangorecki) -\n**[jarioksa](https://github.com/jarioksa)** -\n[jebyrnes](https://github.com/jebyrnes) -\n**[jeroen](https://github.com/jeroen)** -\n**[jimmyodonnell](https://github.com/jimmyodonnell)** -\n[joelnitta](https://github.com/joelnitta) -\n[johnbaums](https://github.com/johnbaums) -\n[jonmcalder](https://github.com/jonmcalder) -\n[jordancasey](https://github.com/jordancasey) -\n**[josephwb](https://github.com/josephwb)** -\n[jsgosnell](https://github.com/jsgosnell) -\n[JulietteLgls](https://github.com/JulietteLgls) -\n**[jwilk](https://github.com/jwilk)** -\n[kamapu](https://github.com/kamapu) -\n**[karthik](https://github.com/karthik)** -\n**[katrinleinweber](https://github.com/katrinleinweber)** -\n[KevCaz](https://github.com/KevCaz) -\n[kgturner](https://github.com/kgturner) -\n[kmeverson](https://github.com/kmeverson) -\n[Koalha](https://github.com/Koalha) -\n**[ljvillanueva](https://github.com/ljvillanueva)** -\n**[maelle](https://github.com/maelle)** -\n[Markus2015](https://github.com/Markus2015) -\n[matutosi](https://github.com/matutosi) -\n[mcsiple](https://github.com/mcsiple) -\n[MikkoVihtakari](https://github.com/MikkoVihtakari) -\n[millerjef](https://github.com/millerjef) -\n[miriamgrace](https://github.com/miriamgrace) -\n[mpnelsen](https://github.com/mpnelsen) -\n[MUSEZOOLVERT](https://github.com/MUSEZOOLVERT) -\n[nate-d-olson](https://github.com/nate-d-olson) -\n[nmatzke](https://github.com/nmatzke) -\n[npch](https://github.com/npch) -\n[ocstringham](https://github.com/ocstringham) -\n[p-neves](https://github.com/p-neves) -\n[p-schaefer](https://github.com/p-schaefer) -\n[padpadpadpad](https://github.com/padpadpadpad) -\n[paternogbc](https://github.com/paternogbc) -\n**[patperu](https://github.com/patperu)** -\n[pederengelstad](https://github.com/pederengelstad) -\n[philippi](https://github.com/philippi) -\n[Phylloxera](https://github.com/Phylloxera) -\n**[pmarchand1](https://github.com/pmarchand1)** -\n[pozsgaig](https://github.com/pozsgaig) -\n[pssguy](https://github.com/pssguy) -\n**[raredd](https://github.com/raredd)** -\n[rec3141](https://github.com/rec3141) -\n**[Rekyt](https://github.com/Rekyt)** -\n[RodgerG](https://github.com/RodgerG) -\n[rossmounce](https://github.com/rossmounce) -\n[sariya](https://github.com/sariya) -\n[sastoudt](https://github.com/sastoudt) -\n[scelmendorf](https://github.com/scelmendorf) -\n**[sckott](https://github.com/sckott)** -\n[SimonGoring](https://github.com/SimonGoring) -\n[snsheth](https://github.com/snsheth) -\n[snubian](https://github.com/snubian) -\n[Squiercg](https://github.com/Squiercg) -\n[sunray1](https://github.com/sunray1) -\n**[taddallas](https://github.com/taddallas)** -\n[tdjames1](https://github.com/tdjames1) -\n[tmkurobe](https://github.com/tmkurobe) -\n[toczydlowski](https://github.com/toczydlowski) -\n[tpaulson1](https://github.com/tpaulson1) -\n[tpoisot](https://github.com/tpoisot) -\n**[TrashBirdEcology](https://github.com/TrashBirdEcology)** -\n**[trvinh](https://github.com/trvinh)** -\n**[vijaybarve](https://github.com/vijaybarve)** -\n[wcornwell](https://github.com/wcornwell) -\n[willpearse](https://github.com/willpearse) -\n[wpetry](https://github.com/wpetry) -\n[yhg926](https://github.com/yhg926) -\n**[zachary-foster](https://github.com/zachary-foster)**\n\n## Road map\n\nCheck out our\n[milestones](https://github.com/ropensci/taxize/milestones) to see what\nwe plan to get done for each version.\n\n## Meta\n\n- Please [report 
any issues or\n bugs](https://github.com/ropensci/taxize/issues).\n- License: MIT\n- Get citation information for `taxize` in R doing\n `citation(package = \'taxize\')`\n- Please note that this package is released with a [Contributor Code\n of Conduct](https://ropensci.org/code-of-conduct/). By contributing\n to this project, you agree to abide by its terms.\n\n[![rofooter](https://ropensci.org/public_images/github_footer.png)](https://ropensci.org)\n'",,"2011/05/19, 15:05:33",4542,CUSTOM,8,2321,"2023/06/05, 15:09:07",49,196,869,9,142,0,2.0,0.1098958333333333,"2020/10/30, 00:50:22",v0.9.99,0,33,false,,false,true,,,https://github.com/ropensci,https://ropensci.org/,"Berkeley, CA",,,https://avatars.githubusercontent.com/u/1200269?v=4,,, Global Biotic Interactions,Global Biotic Interactions provides access to existing species interaction datasets.,globalbioticinteractions,https://github.com/globalbioticinteractions/globalbioticinteractions.git,github,"eol,ecology,biology,ecoinformatics,globi,species-interactions,bioinformatics,etl-framework,food-webs,pollinators,diseases,parasites,biodiversity,diet",Biodiversity and Species Distribution,"2023/10/24, 15:07:21",114,0,23,true,Java,,globalbioticinteractions,"Java,HTML,JavaScript,Shell",https://globalbioticinteractions.org,"b'Welcome to Global Biotic Interactions (GloBI)\n======================================\n\nThe mission of this project is to find efficient ways to normalize and integrate species interaction data to enable researchers and enthusiasts to answer questions like: Which species does an Angel Shark (_Squatina squatina_) eat in the Gulf of Mexico? \n\nPlease see https://globalbioticinteractions.org or https://github.com/globalbioticinteractions/globalbioticinteractions/wiki for more information.\n\n [![Java CI](https://github.com/globalbioticinteractions/globalbioticinteractions/workflows/Java%20CI/badge.svg)](https://github.com/globalbioticinteractions/globalbioticinteractions/actions?query=workflow%3A%22Java+CI%22) [![DOI](https://zenodo.org/badge/2478263.svg)](https://zenodo.org/badge/latestdoi/2478263) \n\n## Citing GloBI\n\nPoelen, J. H., Simons, J. D., & Mungall, C. J. (2014). **Global Biotic Interactions: An open infrastructure to share and analyze species-interaction datasets**. *Ecological Informatics*, 24, 148\xe2\x80\x93159. [doi:10.1016/j.ecoinf.2014.08.005](https://doi.org/10.1016/j.ecoinf.2014.08.005)\n\n## Licenses\n[![gplv3](https://www.gnu.org/graphics/gplv3-88x31.png)](https://www.gnu.org/licenses/gpl.html)[![cc-by-nc](https://i.creativecommons.org/l/by/4.0/88x31.png)](https://creativecommons.org/licenses/by/4.0/)\n\nUnless otherwise noted, source code is released under [GPLv3](https://www.gnu.org/licenses/gpl.html) and data is available under the [Creative Commons Attribution 4.0 International License](https://creativecommons.org/licenses/by/4.0/). We are trying to do the best we can to ensure that the references to the original data sources are preserved and attributed. 
If you feel that there are better ways to do this, please [let us know](https://github.com/globalbioticinteractions/globalbioticinteractions/issues/new).\n'",",https://zenodo.org/badge/latestdoi/2478263,https://doi.org/10.1016/j.ecoinf.2014.08.005","2011/09/28, 21:51:47",4410,GPL-3.0,184,4394,"2023/10/23, 17:46:41",326,18,602,44,2,0,0.0,0.575,"2023/10/02, 20:45:06",0.25.9,0,14,false,,false,false,,,https://github.com/globalbioticinteractions,http://globalbioticinteractions.org,,,,https://avatars.githubusercontent.com/u/9599468?v=4,,, rredlist,An R client for the IUCN Red List of threatened and endangered species.,ropensci,https://github.com/ropensci/rredlist.git,github,"conservation,api-wrapper,biodiversity,rstats,iucn-red-list,iucn,r,r-package,taxize",Biodiversity and Species Distribution,"2022/11/10, 14:55:57",41,0,7,true,R,rOpenSci,ropensci,"R,Makefile",https://docs.ropensci.org/rredlist,"b'rredlist\n========\n\n\n\n[![cran version](https://www.r-pkg.org/badges/version/rredlist)](https://cran.r-project.org/package=rredlist)\n[![cran status](https://badges.cranchecks.info/worst/rredlist.svg)](https://cran.r-project.org/web/checks/check_results_rredlist.html)\n[![R-check](https://github.com/ropensci/rredlist/actions/workflows/R-check.yml/badge.svg)](https://github.com/ropensci/rredlist/actions/workflows/R-check.yml)\n[![codecov.io](https://codecov.io/github/ropensci/rredlist/coverage.svg?branch=master)](https://codecov.io/github/ropensci/rredlist?branch=master)\n[![rstudio mirror downloads](https://cranlogs.r-pkg.org/badges/rredlist)](https://github.com/r-hub/cranlogs.app)\n\n`rredlist` is an R client for the IUCN Red List (https://apiv3.iucnredlist.org/api/v3/docs). The IUCN Red List is a global list of threatened and endangered species. IUCN Red List docs: https://apiv3.iucnredlist.org/api/v3/docs\n\n## Installation\n\nCRAN\n\n\n```r\ninstall.packages(""rredlist"")\n```\n\nDevelopment version\n\n\n```r\nremotes::install_github(""ropensci/rredlist"")\n# OR\ninstall.packages(""rredlist"", repos=""https://dev.ropensci.org"")\n```\n\n## Meta\n\n* Please [report any issues or bugs](https://github.com/ropensci/rredlist/issues).\n* License: MIT\n* Get citation information for `rredlist` in R doing `citation(package = \'rredlist\')`\n* Please note that this package is released with a [Contributor Code of Conduct](https://ropensci.org/code-of-conduct/). 
By contributing to this project, you agree to abide by its terms.\n\n[![rofooter](https://ropensci.org/public_images/github_footer.png)](https://ropensci.org)\n\n[token]: https://apiv3.iucnredlist.org/api/v3/token\n[redlistr]: https://github.com/red-list-ecosystem/redlistr\n'",,"2016/01/22, 23:11:17",2833,CUSTOM,3,143,"2022/11/11, 19:17:22",5,6,48,2,348,0,1.5,0.13138686131386856,"2023/03/14, 14:02:14",v0.7.1,0,3,false,,false,true,,,https://github.com/ropensci,https://ropensci.org/,"Berkeley, CA",,,https://avatars.githubusercontent.com/u/1200269?v=4,,, BIRDS,This set of tools has been developed for systematizing biodiversity data review in order to evaluate whether a set of species observations is fit-for-use and to help take decisions on its use in further analysis.,Greensway,https://github.com/GreenswayAB/BIRDS.git,github,"rstats,gbif,sampling-effort,reported-species,species-observed,biodiversity-data,data-gaps,biodiversity-informatics",Biodiversity and Species Distribution,"2023/10/17, 18:18:49",5,0,0,true,HTML,Greensway AB,GreenswayAB,"HTML,R",https://greenswayab.github.io/BIRDS/,"b'\n# BIRDS \n\n[![License: GPL\nv3](https://img.shields.io/badge/License-GPLv3-blue.svg)](https://www.gnu.org/licenses/gpl-3.0)\n[![Lifecycle:\nstable](https://img.shields.io/badge/lifecycle-stable-green.svg)](https://lifecycle.r-lib.org/articles/stages.html)\n[![CRAN\nStatus](https://www.r-pkg.org/badges/version/BIRDS)](https://cran.r-project.org/package=BIRDS)\n[![Downloads](https://cranlogs.r-pkg.org/badges/BIRDS?color=blue)](https://cranlogs.r-pkg.org/)\n[![R-CMD-check](https://github.com/GreenswayAB/BIRDS/workflows/R-CMD-check/badge.svg)](https://github.com/Greensway/BIRDS/actions)\n\n### A set of tools for Biodiversity Informatics in R\n\nThis is the Biodiversity Information Review and Decision Support package\nfor R! \n**NB**: BIRDS is an acronym. This package is not limited to birds\' data\n(i.e. Aves) :smiley:\n\nThis set of tools has been developed for systematizing biodiversity data\nreview in order to evaluate whether a set of species observations is\nfit-for-use and to help take decisions on its use in further analysis.\n\nThis R-package was awarded the Third Prize at the [2019 GBIF Ebbe\nNielsen\nChallenge](https://www.gbif.org/en/news/2mixX9oDrJI2W3AqPFOxI3/wherenext-wins-2019-gbif-ebbe-nielsen-challenge#birds)\nfor which it was developed.\n\nThe set of tools provided is aimed at reviewing and understanding\nbiodiversity data quality in terms of completeness, and the data\ngeneration process (i.e. the observers\' sampling behaviour). The `BIRDS`\npackage provides a systematic approach to evaluate biodiversity data \xe2\x80\x93\nto enhance reproducibility and facilitate the review of data. The\n`BIRDS` package intends to provide the data user with knowledge about\nsampling effort (amount of effort expended during an event) and data\ncompleteness (data gaps) to help judge whether the data is\nrepresentative, valid and fit for the purpose of its intended use \xe2\x80\x93 and\nhence support for making decisions upon the use and further analysis of\nbiodiversity data.\n\nThe `BIRDS` package is most useful for heterogeneous data sets with\nvariation in the sampling process, i.e. where data have been collected\nand reported in variable ways, not conforming to the same sampling\nprotocol and therefore varying in sampling effort, leading to variation\nin data completeness (i.e. how well the reported observations describe\nthe ""true"" state). 
Primary biodiversity data (PBD) combining data from\ndifferent data sets, like e.g. GBIF mediated data, commonly vary in the\nways data has been generated - containing opportunistically collected\npresence-only data (no sampling protocol, no or inconsistent information\nabout absences, high sampling variability between observers), and data\nsets that have been collected using different sampling protocols. The\nset of tools provided by the `BIRDS` package is aimed at illuminating\nand understanding the process that generated the data (i.e. observing,\nrecording and reporting species into databases). It does this through a\nsystematic approach, providing summaries that inform about sampling\neffort and data completeness (or data gaps).\n\nThe `BIRDS` package is **not** concerned with data accuracy, which can\nbe evaluated and improved using other existing packages (as outlined in\nthe [technical\ndetails](https://greenswayab.github.io/BIRDS/articles/technical_details.html)\nvignette), before processing the data using `BIRDS`.\n\nThe concepts, methods, and examples are described after a short\ndescription of how to install this package in R.\n\n### How to install `BIRDS`\n\nThis package is now published on CRAN. Therefore the easiest option to\ninstall it is `install.packages(\'BIRDS\')`. Otherwise, you can install the\ndevelopment version directly from GitHub using the package `remotes`.\nInstall `remotes` if you have not already installed it\n(`install.packages(\'remotes\')`):\n\n``` r\nremotes::install_github(\'GreenswayAB/BIRDS\')\nlibrary(BIRDS)\n```\n\n### Concepts and methods\n\n#### Systematic approach \xe2\x80\x93 a workflow for primary biodiversity data\n\nIn order to systematize and enhance reproducibility of the review\nprocess for PBD the `BIRDS` package takes a systematic approach. With\nthis package the data are systematically organised and reviewed. This\nsystematic approach actually starts before using `BIRDS` as we suggest\nsteps and tools for optionally cleaning the data before processing by\n`BIRDS`. Hence, before using biodiversity data for the intended analysis,\nstart by optionally cleaning the data, then use `BIRDS` to organize,\nsummarize and review the data:\n\n\n\nThen, use your review to evaluate sampling effort and data gaps, and to\ninform decisions about whether the data are fit-for-purpose and how to\nfurther analyse the data.\n\n##### Field visit\n\nA central concept used by the `BIRDS` package is the ""visit"" \xe2\x80\x93 defining\nthe sampling unit as a sampling event by a unique observer (or group of\nobservers), at a unique unit of space and time (commonly a day). Visits\ncan help us to summarize the amount of effort expended in the field.\nDuring a visit, the observer commonly samples (i.e. observes and\nrecords) species by similar methods. The sampling effort can vary among\nvisits, with the amount of effort expended being greater when spending\nmore time, and reporting more of the observed species. The same number\nof observations (records of species) at a unique unit of time and space\ncould be made by either few observers reporting many species (greater\neffort by each observer) or many observers reporting few species (small\neffort by each observer). Using visits as sampling units allows\nseparation of sampling effort into the effort that can be expressed\nthrough the number of visits by different observers and the effort per\nvisit (e.g. species list length, or when available the time spent during\na visit). 
Hence, the quality (completeness) of the data can be judged by\nusing information for each visit and information from a collection of\nvisits.\n\nYou can examine this in the [technical\ndetails](https://greenswayab.github.io/BIRDS/articles/technical_details.html)\nvignette.\n\n##### Spatial grid and spillover\n\nDefined by a unique observer (or group of observers), at a unique unit\nof space and time visits can be identified by a unique combination of\nvariables: observer id, location, time. Often location is a named unit\nof space that has been visited during the same sampling event. For\nexample a botanist visiting and reporting species for a meadow, or a\nbird watcher visiting and reporting species for a lake.\n\nSometimes locations can be more accurate positions for individuals of\nspecies that have been observed and reported during the same field\nvisit. The botanist may have visited the meadow but reported species\nfrom a number of different sampling points in that meadow. Or the bird\nwatcher reported species for different parts of the lake. In that case\nthere is no common spatial identifier for the visit.\n\nIf there is no common spatial identifier to define the visit extent, and\nthe observer id is not enough to constrain observations spatially (e.g.\ngroup of observers from organisation where observer id = organisation\nname), then visits can be created *when* overlaying the observation data\nwith the spatial grid. A visit is then defined as all the observations\nfalling into the same grid cell. It is important to keep in mind to\nchoose a grid with a cell size that corresponds to (or at least is not\nsmaller than) the average spatial extent known (or assumed) to be\ntypical for field visits for the reference species group (see below).\nThis process can be repeated with a set of grids with different offset\nto explore the sensitivity of the results to the size of the grid cells.\n\nYou can examine this in the [technical\ndetails](https://greenswayab.github.io/BIRDS/articles/technical_details.html)\nvignette.\n\n##### Reference species group\n\nBecause visits result from the sampling process they can only be defined\nfor a reference species group, i.e. a group of species observed and\nrecorded by similar methods. The rationale for a reference species group\nis based on the assumption that species groups share similar bias: we\nassume that, despite varying field skills and accuracy, observers\nreporting observations for species of a reference species group share\nsimilar observer behavior and methods and, hence, generate data with\nsimilar sampling bias (Phillips et al. 2009). From this we can assume\nthat the larger the number of visits (or observations) reporting species\nfrom the reference group at a specific unit of space and time, the more\nlikely it is that the lack of visits for (or observations of) a\nparticular species reflects the absence of (or failure to detect) a\nfocal species rather than a lack of visits and reports made.\n\nIt is important to keep in mind that, to keep the sampling bias\nconsistent, the reference species group should only include species that\nare assumed to be sampled with the same methodology (Ponder et al.\n2001). For example, a reference group should not include all species in\nthe Order Lepidoptera because butterflies *sensu stricto* (superfamily\nPapilionoidea) are sampled in very different ways than most other\nspecies of Lepidoptera (mainly moths).\n\n##### Species list length (SLL)\n\nThe SLL per visit (i.e. 
the number of species observed and recorded per\nvisit) is a well-known proxy for the time spent in the field and\nwillingness to report all species seen of a reference taxonomic group\n(Szabo et al. 2010). The `BIRDS` package therefore uses SLL as a proxy\nfor sampling effort.\n\n### What does the package do?\n\nWith the `BIRDS` package\'s set of tools, PBD can be reviewed based on the\ninformation contained in the visits. Use `BIRDS` to organize the data,\nsummarize and review the data as shown above. The `BIRDS` package\norganizes the data into a spatially gridded visit-based format, from\nwhich summaries are retrieved for a number of variables describing the\nvisits across both spatial and temporal dimensions. Those variables are\nthe number of visits, number of species, number of observations, average\nspecies list length per visit, number of units of space and time with\nvisits. The variables can be used to collectively describe the sampling\neffort and data completeness (data gaps), and can be examined spatially\n(e.g. viewed on maps) and temporally (e.g. plotted as time series).\n\n#### What does the package help us with?\n\nUsing the detailed information on sampling effort and data completeness\nprovided by the `BIRDS` package\'s summaries allows better inference on\nwhat the reported species observations mean. As much of the PBD is\npresence-only data, the provided information helps us judge to what\ndegree a lack of observations may be (1) due to the species not being\nobserved (absent, or failed to detect) or (2) due to a lack of reports\n(lack of visits, or lack of reports for observed species) (little\nsampling effort). We can be more confident about the first when there is\ngood sampling effort and data completeness, while the evidence is shaky\n(i.e. there is a high probability of having missed species) when there is little\nsampling effort and data completeness. In this way the user can judge\nwhether the data is fit-for-purpose for the intended use. Using this\ninformation about how the data has been collected, the user can also\ndecide how to analyse the data.\n\nIt helps you get :world_map: :bar_chart: :chart_with_upwards_trend:\n:chart_with_downwards_trend: :page_facing_up: :bulb: about\n\n:dog2: :cat2: :pig2: :mouse2: :sheep: :cow2: :rat: :rabbit2: :goat:\n:baby_chick: :rooster: :wolf: :frog: :koala: :bear: :boar: :monkey:\n:camel: :elephant: :panda_face: :snake: :bird: :penguin: :turtle: :bug:\n:honeybee: :ant: :lady_beetle: :snail: :octopus: :tropical_fish:\n:blowfish: :fish: :shell: :whale: :dolphin: :water_buffalo: :tiger2:\n:leopard: :ox: :crocodile: :dromedary_camel:\n\nand\n\n:bouquet: :cherry_blossom: :tulip: :four_leaf_clover: :rose: :sunflower:\n:hibiscus: :maple_leaf: :leaves: :fallen_leaf: :herb: :mushroom:\n:cactus: :palm_tree: :evergreen_tree: :deciduous_tree: :chestnut:\n:seedling: :blossom: :ear_of_rice:\n\nbut, maybe not :dragon_face: :dragon: :christmas_tree:\n\n##### References:\n\nPhillips et al. 2009 Sample selection bias and presence\xe2\x80\x90only\ndistribution models: implications for background and pseudo\xe2\x80\x90absence\ndata, Ecol Appl 19:181-197. \nPonder et al. 2001 Evaluation of Museum Collection Data for Use in\nBiodiversity Assessment, Cons Biol 15:648-657. \nSzabo et al. 
2010 Regional avian species declines estimated from\nvolunteer\xe2\x80\x90collected long\xe2\x80\x90term data using List Length Analysis, Ecol Appl\n20:2157-2169.\n\n### Overview of main components\n\nYou can find an overview of the `BIRDS` main components and functions,\norganised as an [overview workflow\nhere](https://github.com/GreenswayAB/BIRDS/raw/master/man/figures/BIRDs.png)\nand a [workflow highlighting the decisions to be taking when using BIRDS\nhere](https://github.com/GreenswayAB/BIRDS/raw/master/man/figures/BIRDsDecision.png).\n\n### Example\n\nThe [Intro to\nBIRDS](https://greenswayab.github.io/BIRDS/articles/BIRDS.html) vignette\nprovides a useful walk through the package tools using an example data\nset.\n\nA short introductory [video can be found\nhere](https://www.dropbox.com/s/fxg1t9vl4ainipy/BirdsLR.mp4?dl=0).\n\n### What is new - latest changes and additions\n\nWe continuously update and improve the BIRDS package. Check the\n[changelog](https://greenswayab.github.io/BIRDS/news/index.html)\n\n### In the TODO LIST\n\nCheck [here](https://github.com/GreenswayAB/BIRDS/projects/4) for a list\nof future features to be added, and don\'t hesitate sending your\nsuggestions by [e-mail](mailto:alejandro@greensway.se)\n\n### Acknowledgements\n\nThe development of the BIRDS package is part of a project \'Using\nopportunistic citizen science data for evaluations of environmental\nchange\' financed by the Swedish Research Council Formas.\n'",,"2019/07/26, 16:20:40",1552,GPL-3.0,3,385,"2022/08/17, 07:34:22",6,9,30,0,434,0,0.5555555555555556,0.28658536585365857,"2021/08/23, 11:24:24",v0.2.2,0,4,false,,true,false,,,https://github.com/GreenswayAB,,Sweden,,,https://avatars.githubusercontent.com/u/67310826?v=4,,, spocc,An R package to query and collect species occurrence data from many sources.,ropensci,https://github.com/ropensci/spocc.git,github,"gbif,vertnet,ecoengine,antweb,ebird,data,api,rstats,r,species,inaturalist,idigbio,obis,bison,species-occurrence,occurrence,spocc,r-package",Biodiversity and Species Distribution,"2023/03/23, 08:15:45",109,0,14,true,R,rOpenSci,ropensci,"R,Makefile",https://docs.ropensci.org/spocc,"b'\n\n# spocc (SPecies OCCurrence) \n\n[![R-CMD-check](https://github.com/ropensci/spocc/actions/workflows/R-CMD-check.yaml/badge.svg)](https://github.com/ropensci/spocc/actions/workflows/R-CMD-check.yaml)\n[![test-sp-sf](http://github.com/ropensci/spocc/workflows/test-sp-sf/badge.svg)](http://github.com/ropensci/spocc/actions?query=workflow%3Atest-sp-sf)\n[![codecov.io](http://codecov.io/github/ropensci/spocc/coverage.svg?branch=master)](http://codecov.io/github/ropensci/spocc?branch=master)\n[![cran checks](http://badges.cranchecks.info/worst/spocc.svg)](http://cran.r-project.org/web/checks/check_results_spocc.html)\n[![rstudio mirror downloads](https://cranlogs.r-pkg.org/badges/spocc)](https://github.com/r-hub/cranlogs.app)\n[![cran version](http://r-pkg.org/badges/version/spocc)](http://cran.r-project.org/package=spocc)\n\nDocs: \n\nAt rOpenSci, we have been writing R packages to interact with many sources of species occurrence data, including [GBIF][gbif], [Vertnet][vertnet], [iNaturalist][inat], and [eBird][ebird]. Other databases are out there as well, which we can pull in. `spocc` is an R package to query and collect species occurrence data from many sources. 
The goal is to create a seamless search experience across data sources, as well as creating unified outputs across data sources.\n\n`spocc` currently interfaces with seven major biodiversity repositories:\n\n1. [Global Biodiversity Information Facility (GBIF)][gbif] (via `rgbif`)\nGBIF is a government-funded open data repository with several partner organizations with the express goal of providing access to data on Earth\'s biodiversity. The data are made available by a network of member nodes, coordinating information from various participant organizations and government agencies.\n\n2. [iNaturalist][inat]\niNaturalist provides access to crowd-sourced citizen science data on species observations.\n\n3. [VertNet][vertnet] (via `rvertnet`)\nSimilar to `rgbif` (see above), VertNet provides access to more than 80 million vertebrate records spanning a large number of institutions and museums, primarily covering four major disciplines (mammalogy, herpetology, ornithology, and ichthyology).\n\n4. [eBird][ebird] (via `rebird`)\neBird is a database developed and maintained by the Cornell Lab of Ornithology and the National Audubon Society. It provides real-time access to checklist data, data on bird abundance and distribution, and community reports from birders.\n\n5. [iDigBio][idigbio] (via `ridigbio`)\niDigBio facilitates the digitization of biological and paleobiological specimens and their associated data, houses the specimen data, and provides it via RESTful web services.\n\n6. [OBIS][obis]\nOBIS (Ocean Biogeographic Information System) allows users to search marine species datasets from all of the world\'s oceans.\n\n7. [Atlas of Living Australia][ala]\nALA (Atlas of Living Australia) contains information on all the known species in Australia aggregated from a wide range of data providers: museums, herbaria, community groups, government departments, individuals and universities; it contains more than 50 million occurrence records.\n\nThe inspiration for this comes from users requesting a more seamless experience across data sources, and from our work on a similar package for taxonomy data ([taxize][taxize]).\n\n__BEWARE:__ In cases where you request data from multiple providers, especially when including GBIF, there could be duplicate records since many providers\' data eventually ends up with GBIF. See `?spocc_duplicates`, after installation, for more.\n\n## Learn more\n\nspocc documentation: \n\n## Contributing\n\nSee [CONTRIBUTING.md](http://github.com/ropensci/spocc/blob/master/.github/CONTRIBUTING.md)\n\n## Installation\n\nStable version from CRAN\n\n\n```r\ninstall.packages(""spocc"", dependencies = TRUE)\n```\n\nOr the development version from GitHub\n\n\n```r\ninstall.packages(""remotes"")\nremotes::install_github(""ropensci/spocc"")\n```\n\n\n```r\nlibrary(""spocc"")\n```\n\n## Make maps\n\nAll mapping functionality is now in a separate package [mapr](http://github.com/ropensci/mapr) (formerly known as `spoccutils`), to make `spocc` easier to maintain. `mapr` is [on CRAN](http://cran.r-project.org/package=mapr).\n\n## Meta\n\n* Please [report any issues or bugs](http://github.com/ropensci/spocc/issues).\n* License: MIT\n* Get citation information for `spocc` in R doing `citation(package = \'spocc\')`\n* Please note that this package is released with a [Contributor Code of Conduct](http://ropensci.org/code-of-conduct/). 
By contributing to this project, you agree to abide by its terms.\n* Sticker: Images come from Phylopic \n\n\n[gbif]: http://www.gbif.org/\n[vertnet]: http://github.com/ropensci/rvertnet\n[inat]: http://www.inaturalist.org/\n[taxize]: http://github.com/ropensci/taxize\n[idigbio]: http://www.idigbio.org/\n[obis]: http://obis.org/\n[ebird]: http://ebird.org/home\n[ala]: http://www.ala.org.au/\n'",,"2013/09/05, 16:46:13",3702,CUSTOM,31,872,"2023/03/23, 16:42:36",16,23,246,9,216,0,0.5,0.12291169451073991,"2023/03/09, 08:33:38",v1.2.1,0,8,false,,false,true,,,https://github.com/ropensci,https://ropensci.org/,"Berkeley, CA",,,https://avatars.githubusercontent.com/u/1200269?v=4,,, robis,"Build and maintain a global alliance that collaborates with scientific communities to facilitate free and open access to, and application of, biodiversity and biogeographic data and information on marine life.",iobis,https://github.com/iobis/robis.git,github,,Biodiversity and Species Distribution,"2022/09/24, 17:13:48",29,0,5,false,R,OBIS,iobis,R,https://iobis.github.io/robis,"b'# robis \n\n[![CRAN robis](http://www.r-pkg.org/badges/version-last-release/robis)](https://cran.r-project.org/package=robis)\n[![Travis-CI Build Status](https://api.travis-ci.org/iobis/robis.svg?branch=master&kill_cache=1)](https://travis-ci.org/iobis/robis)\n[![Coverage Status](https://coveralls.io/repos/iobis/robis/badge.svg?branch=master&service=github&kill_cache=1)](https://coveralls.io/github/iobis/robis?branch=master)\n[![DOI](https://zenodo.org/badge/47509713.svg)](https://zenodo.org/badge/latestdoi/47509713)\n\nR client for the OBIS API\n\n## Installation\n\n```R\n# CRAN\ninstall.packages(""robis"")\n\n# latest development version\nremotes::install_github(""iobis/robis"")\n```\n\n## Getting started\n\nSee the [Getting started vignette](https://iobis.github.io/robis/articles/getting-started.html).\n'",",https://zenodo.org/badge/latestdoi/47509713","2015/12/06, 19:10:22",2880,CUSTOM,0,215,"2022/05/06, 20:03:50",32,4,54,0,537,2,0.0,0.22499999999999998,"2022/08/06, 10:46:08",v2.11.0,0,3,false,,false,true,,,https://github.com/iobis,https://obis.org,Belgium,,,https://avatars.githubusercontent.com/u/13709415?v=4,,, redlistr,An R package that contains a set of tools suitable for calculating the metrics required for making assessments of species and ecosystems against the IUCN Red List of Threatened Species and the IUCN Red List of Ecosystems categories and criteria.,red-list-ecosystem,https://github.com/red-list-ecosystem/redlistr.git,github,,Biodiversity and Species Distribution,"2023/10/03, 03:05:09",26,0,6,true,R,IUCN Red List of Ecosystems Science Team,red-list-ecosystem,R,,"b'\n\n\n# redlistr\n\n`redlistr` is an R package that contains a set of tools suitable for\ncalculating the metrics required for making assessments of species and\necosystems against the IUCN Red List of Threatened Species and the IUCN\nRed List of Ecosystems categories and criteria. The paper describing\n`redlistr` has been published in Ecography and is available\n[here](https://onlinelibrary.wiley.com/doi/full/10.1111/ecog.04143).\n\n> important note: [rredlist](https://github.com/ropensci/rredlist) is a\n> different package that works with the IUCN Red List of Threatened\n> Species\xe2\x80\x99 API.\n\n## Overview\n\nThe `redlistr` package was developed to assist users in conducting assessments\nfor the IUCN Red List of Ecosystems in `R`. It is also useful for users\ninterested in conducting assessments for the Red List of Threatened\nSpecies. 
Assessments of ecosystems under the IUCN Red List of Ecosystems\ncriteria require calculation of standardised metrics that were developed\nto objectively assess risk to ecosystems ([Keith et\nal.\xc2\xa02013](https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0062111)).\nThis package was designed to assist in the calculation of these metrics,\nincluding two methods of calculating the rate of distributional decline:\nAbsolute Rate of Decline (ARD) and Proportional Rate of Decline (PRC).\nAn additional metric, the Annual Rate of Change (ARC), which uses a\ncompound interest law to determine the instantaneous rate of change\n([Puyravaud\n2003](https://www.sciencedirect.com/science/article/pii/S0378112702003353)),\nis also included.\n\nAlso included are the two standard measures of the size of an\necosystem\xe2\x80\x99s geographic distribution specified in the red list of\necosystems guidelines ([Bland et\nal.\xc2\xa02016](https://portals.iucn.org/library/sites/library/files/documents/2016-010.pdf)).\nThese are the Extent of Occurrence (EOO) and Area of Occupancy (AOO). As\nmany of these measures are also useful for assessing species under the\nIUCN Red List of Threatened Species criteria, we expect this package\nwill also be useful for assessors conducting species assessments.\n\nIn conducting an assessment with this package, we assume that you are\nfamiliar with IUCN red listing protocols. In particular, you should\nconsult the IUCN guidelines for both of the red lists, which are the\ndefinitive sources of all information required to ensure consistent\napplication of IUCN criteria ([Bland et\nal.\xc2\xa02016](https://portals.iucn.org/library/sites/library/files/documents/2016-010.pdf)).\nIn addition, the papers by Keith et al.\n([2013](https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0062111))\nand Rodriguez et al. ([2015](https://doi.org/10.1098/rstb.2014.0003))\nare particularly useful for navigating the IUCN Red List of Ecosystems\ncriteria. A range of important resources, including software tools and\nguiding material, is available on the IUCN Red List of Ecosystems\n[website](https://www.iucnrle.org/). There is also plenty of handy\nmaterial for assessing species available on the IUCN Red List of\nThreatened Species [website](https://www.iucnredlist.org).\n\nWe also assume that you are reasonably familiar with the `R` programming\nlanguage, and have some experience in conducting analyses of vector and\nraster data within the `R` environment. Of particular use will be\nfamiliarity with the `raster`, `sp` and `rgeos` packages. This is\ncertainly not a prerequisite, but this package is built upon many of the\nfunctions available in these three packages.\n\nFor a more detailed tutorial explaining how to use this package, please\nrefer to the \xe2\x80\x98Calculating spatial metrics for IUCN red list assessments\xe2\x80\x99\nvignette available with the package.\n\nLastly, this is a work in progress and we aim to continually add new\nfunctions to newer versions of the package. Suggestions are welcomed, as are\noffers for collaborative development.\n\n## Installation\n\n`redlistr` is now on CRAN!
You can also install the development version\nfrom GitHub.\n\n``` r\n# Easiest way to get redlistr:\ninstall.packages(""redlistr"")\n\n# For the development version from GitHub:\n# install.packages(""devtools"")\ndevtools::install_github(""red-list-ecosystem/redlistr"")\n```\n'",",https://doi.org/10.1098/rstb.2014.0003","2016/04/04, 09:17:59",2760,CUSTOM,29,257,"2023/06/23, 08:32:43",7,5,12,2,124,1,0.0,0.3146067415730337,,,0,3,false,,false,false,,,https://github.com/red-list-ecosystem,http://iucnrle.org/,UNSW | Deakin | Yale | JCU,,,https://avatars.githubusercontent.com/u/29559340?v=4,,, ALA4R,"The Atlas of Living Australia provides tools to enable users of biodiversity information to find, access, combine and visualise data on Australian plants and animals.",AtlasOfLivingAustralia,https://github.com/AtlasOfLivingAustralia/ALA4R.git,github,"r,ala,ala-product-ala4r",Biodiversity and Species Distribution,"2021/09/13, 05:35:10",41,0,1,false,R,Atlas of Living Australia,AtlasOfLivingAustralia,R,https://atlasoflivingaustralia.github.io/ALA4R/,"b'\n\n\n# ALA4R\n\n[![Travis-CI Build\nStatus](https://travis-ci.org/AtlasOfLivingAustralia/ALA4R.svg?branch=master)](https://travis-ci.org/AtlasOfLivingAustralia/ALA4R)\n[![AppVeyor Build\nstatus](https://ci.appveyor.com/api/projects/status/g9pudc4l7053w4vn/branch/master?svg=true)](https://ci.appveyor.com/project/PeggyNewman/ala4r/branch/master)\n[![codecov](https://codecov.io/gh/AtlasOfLivingAustralia/ALA4R/branch/master/graph/badge.svg)](https://codecov.io/gh/AtlasOfLivingAustralia/ALA4R)\n[![CRAN\nDownloads](https://cranlogs.r-pkg.org/badges/grand-total/ALA4R)](https://cran.r-project.org/package=ALA4R)\n\n*`ALA4R` is deprecated; we suggest you use `galah` instead [available on\nCRAN](https://CRAN.R-project.org/package=galah). `galah` provides an\nimproved interface to ALA data, while providing the same core\nfunctionality as ALA4R. For an introduction to `galah`, visit the\n[GitHub page](https://github.com/AtlasOfLivingAustralia/galah).*\n\nThe Atlas of Living Australia (ALA) provides tools to enable users of\nbiodiversity information to find, access, combine and visualise data on\nAustralian plants and animals; these have been made available from\n. 
Here we provide a subset of the tools to be\ndirectly used within R.\n\nALA4R enables the R community to directly access data and resources\nhosted by the ALA.\n\nThe use-examples presented at the [2014 ALA Science\nSymposium](https://www.ala.org.au/blogs-news/2014-atlas-of-living-australia-science-symposium/)\nare available in the package vignette, via (in R): `vignette(""ALA4R"")`,\nor [browse it\nonline](https://atlasoflivingaustralia.github.io/ALA4R/articles/ALA4R.html).\n\n## Installing\n`ALA4R` is available from GitHub:\n\n``` r\ninstall.packages(""remotes"")\nremotes::install_github(""AtlasOfLivingAustralia/ALA4R"")\n```\n\nOn Linux you will first need to ensure that `libcurl` and `v8` (version\n<= 3.15) are installed on your system \xe2\x80\x94 e.g.\xc2\xa0on Ubuntu/Debian, open a\nterminal and do:\n\n``` sh\nsudo apt-get install libcurl4-openssl-dev libv8-3.14-dev\n```\n\nor install via the Software Centre.\n\n## Usage\n\nSee the online documentation at\n including the [package\nvignette](https://atlasoflivingaustralia.github.io/ALA4R/articles/ALA4R.html).\n\n## Citing ALA4R\n\nTo generate a citation for ALA4R run:\n\n``` r\ncitation(package = ""ALA4R"")\n```\n'",,"2014/09/22, 05:20:49",3320,CUSTOM,0,1288,"2021/03/30, 01:58:18",22,11,46,0,939,0,0.0,0.314859437751004,"2021/07/07, 05:00:41",v1.9.1,0,12,false,,false,false,,,https://github.com/AtlasOfLivingAustralia,https://www.ala.org.au,Australia,,,https://avatars.githubusercontent.com/u/7296572?v=4,,, biodivMapR,An R package for α- and β-diversity mapping using remotely-sensed images.,jbferet,https://github.com/jbferet/biodivMapR.git,github,"diversity-mapping,biodiversity,remote-sensing,sentinel-2,hyperspectral-imaging,tropical-forest",Biodiversity and Species Distribution,"2023/09/29, 08:53:30",30,0,9,true,R,,,R,https://jbferet.github.io/biodivMapR/index.html,"b'# __biodivMapR__ \n\n# An R package for \xce\xb1- and \xce\xb2-diversity mapping using remotely-sensed images\n\n[![build](https://img.shields.io/github/actions/workflow/status/jbferet/biodivMapR/tic.yml?branch=master)](https://github.com/jbferet/biodivMapR/actions)\n[![licence](https://img.shields.io/badge/Licence-GPL--3-blue.svg)](https://www.r-project.org/Licenses/GPL-3)\n[![version](https://img.shields.io/github/v/release/jbferet/biodivMapR?label=version)](https://github.com/jbferet/biodivMapR)\n[![version](https://anaconda.org/conda-forge/r-biodivmapr/badges/version.svg)](https://anaconda.org/conda-forge/r-biodivmapr/)\n\n# 1 Install\n\nThe package `remotes` first needs to be installed from CRAN:\n\n```\ninstall.packages(""remotes"")\n```\n\nThen some packages that were removed from CRAN may need to be installed directly from the authors\' repositories. This is the case for `dissUtils`: \n\n```\nremotes::install_github(\'cran/dissUtils\')\n```\n\nAfter installing `remotes` and `dissUtils`, `biodivMapR` should be ready for installation with the following command line in your R session:\n\n```\nremotes::install_github(\'jbferet/biodivMapR\')\n```\n\n\n# 2 Tutorial\n\nA tutorial vignette is available [here](https://jbferet.github.io/biodivMapR/articles/biodivMapR.html).\n\nThe corresponding script is available in file `examples/tutorial.R`.\n\n# 3 Citation\n\nIf you use **biodivMapR**, please cite the following references:\n\nF\xc3\xa9ret, J.-B., de Boissieu, F., 2019. biodivMapR: an R package for \xce\xb1\xe2\x80\x90 and \xce\xb2\xe2\x80\x90diversity mapping using remotely\xe2\x80\x90sensed images. Methods Ecol. Evol. 00:1-7. 
https://doi.org/10.1111/2041-210X.13310\n\nF\xc3\xa9ret, J.-B., Asner, G.P., 2014. Mapping tropical forest canopy diversity using high-fidelity imaging spectroscopy. Ecol. Appl. 24, 1289\xe2\x80\x931296. https://doi.org/10.1890/13-1824.1\n\n\n# 4 Acknowledgments / Fundings\n\nThis research was supported by the Agence Nationale de la Recherche ([ANR](https://anr.fr/en/open-calls-and-preannouncements/), France) through the young researchers project **BioCop** (ANR-17-CE32-0001)\n'",",https://doi.org/10.1111/2041-210X.13310\n\nF\xc3\xa9ret,https://doi.org/10.1890/13-1824.1\n\n\n#","2019/09/04, 18:53:25",1512,GPL-3.0,70,372,"2023/10/04, 10:12:23",0,4,24,10,21,0,0.5,0.388646288209607,"2023/09/18, 07:43:28",v1.12,0,3,false,,false,false,,,,,,,,,,, DiversiTree,"Help urban foresters, planners, greeners, and ecologists in quantifying tree ecosystem diversity in cities.",DiversiTree,https://github.com/DiversiTree/TreeDiversity.git,github,"ecology,forestry,urban-planning",Biodiversity and Species Distribution,"2021/03/25, 02:09:56",13,0,1,false,Jupyter Notebook,,,"Jupyter Notebook,Python",,"b'\n\n# Diversi*Tree* Notebooks\n### About\nThese Python notebooks are intended to help urban foresters, planners, greeners, and ecologists in quantifying tree ecosystem diversity in cities. Our recent study (citation) explored street tree diversity contrasted between the city center and outer areas, and we made these notebooks to compute tree diversity in these areas across two major dimensions:\n\n1. **Diversity Indices:** Using two entropy indices, the Shannon Index and Simpson Index, we can measure how evenly diversity exists in your tree inventory.\n\n2. **10/20/30 rule:** Urban foresters have long used the 10/20/30 benchmark to measure diversity. The rule suggests no more than 10% of an urban tree ecosystem should be made up of the same species, 20% of the same genus, and 30% of the same family.\n\nThese notebooks calculate these measures based on both tree stem count (the raw number of trees) and the basal area of the tree, estimated from the diameter at breast height value. \n\n### How To Use These Notebooks\nThe two notebooks in this repository conduct a simple sensitivity analysis to check if the number of trees is sufficient to calculate these diversity indices, and then calculate the city center and outer area results. Simple plots along the way will be provided! \n\nTo get started, gather the data as described below and open the first notebook. You\'ll need to add the file path to your data and input the column names needed in the notebook, and the rest of the notebook will run as you progress through the cells. \n\n### What You\'ll Need\nHere\'s the data you\'ll need to use these notebooks:\n1. **City Center geography:** To specify which trees are in the city center, you\'ll need a geospatial data file (geopackage, geojson, shapefile) with just the geometry of your city center. If multiple districts or geometries make up your city center area, we recommend dissolving these into a single feature.\n\n2. **Tree data:** You\'ll need tree inventory data with the following data attached:\n * Geographic location\n * Diameter at breast height (DBH)\n * Scientific or species name\n * Genus name\n * Family name (Some ways to add this data to your tree inventory include [Open Tree of Life](https://opentreeoflife.github.io/) or [Encyclopedia of Life](https://eol.org/))\n\n3. **Jupyter Notebook:** You\'ll need a Python 3-equipped [Jupyter Notebook](https://jupyter.org/install) to run these analyses! 
The dependency packages needed are listed in `requirements.txt` and listed below:\n* pandas\n* geopandas\n* matplotlib\n* descartes\n* tqdm\n* numpy\n\n### Sample Data\n\nCleaned sample data for Paris, France is provided in this repository. \n\n### Publication\n\nGalle, N., Halpern, D., Nitoslawski, S., Duarte, F., Ratti, C., & Pilla, F. (2021). [Mapping the diversity of street tree inventories across eight cities internationally using open data](https://senseable.mit.edu/papers/pdf/20210325_Galle-etal_MappingDiversity_UFUG.pdf). Urban Forestry and Urban Greening.\n\n### Website\n\nWant to learn more? Check out [DiversiTree\'s Website!](https://diversitree.netlify.app/)\n\n_DiversiTree is a project of:_\n\n___\n\nThis research received partial funding from the Connecting Nature project (Grant Agreement No. 730222) under the European Community\xe2\x80\x99s Framework Program Horizon 2020.\n'",,"2021/02/01, 15:07:52",996,CC-BY-4.0,0,16,"2021/03/25, 02:07:40",0,3,3,0,944,0,0.0,0.5,,,0,2,false,,false,false,,,,,,,,,,, mobr,Tools for analyzing changes in biodiversity across scales.,MoBiodiv,https://github.com/MoBiodiv/mobr.git,github,"biodiversity,ecology,conservation,statistics,rarefaction,species",Biodiversity and Species Distribution,"2021/07/22, 13:05:18",21,0,4,true,R,Measurement of Biodiversity (MoB),MoBiodiv,R,,"b'# mobr\n[![Project Status: Active \xe2\x80\x93 The project has reached a stable, usable state and is being actively developed.](https://www.repostatus.org/badges/latest/active.svg)](https://www.repostatus.org/#active)\n[![cran checks](https://cranchecks.info/badges/worst/mobr)](https://cranchecks.info/pkgs/mobr)\n[![Travis build status](https://travis-ci.org/MoBiodiv/mobr.svg?branch=master)](https://travis-ci.org/MoBiodiv/mobr)\n[![rstudio mirror downloads](https://cranlogs.r-pkg.org/badges/mobr)](https://github.com/r-hub/cranlogs.app)\n[![cran version](https://www.r-pkg.org/badges/version/mobr)](https://cran.r-project.org/package=mobr)\n[![DOI](https://zenodo.org/badge/DOI/10.5281/zenodo.4014111.svg)](https://doi.org/10.5281/zenodo.4014111)\n============\n\n# Measurement of Biodiversity in R \n\nThis repository hosts an R package that is actively being developed for \nestimating biodiversity and the components of its change. The key innovations of\nthis R package over other R packages that also carry out rarefaction (e.g.,\n`vegan`, `iNext`) are that `mobr` is focused on 1) making empirical comparisons between \ntreatments or gradients, and 2) emphasizing how changes in \nbiodiversity are linked to changes in community structure: the SAD, total\nabundance, and spatial aggregation. \n\n**Please use the [dev](https://github.com/MoBiodiv/mobr/tree/dev) branch for the beta version of the repository that has the most up-to-date methods. See examples of how to compute diversity metrics using the `dev` branch here: [R script](https://github.com/MoBiodiv/mobr/blob/dev/vignettes/beta_div_demo.R) and [pdf](https://github.com/MoBiodiv/mobr/blob/dev/vignettes/beta_div_demo.pdf). Instructions are provided below on how to use `devtools` to install the `dev` branch using R**\n\nThe concepts and methods behind this R package are described in three publications.\n\nMcGlinn, D.J., X. Xiao, F. May, N.J Gotelli, T. Engel, S.A Blowes, T.M. Knight, O. Purschke, J.M Chase, and B.J. McGill. 2019. MoB (Measurement of Biodiversity): a method to separate the scale-dependent effects of species abundance distribution, density, and aggregation on diversity change. 
Methods in Ecology and Evolution. 10:258\xe2\x80\x93269. https://doi.org/10.1111/2041-210X.13102\n\nMcGlinn, D.J., T. Engel, S.A. Blowes, N.J. Gotelli, T.M. Knight, B.J. McGill, N. Sanders, and J.M. Chase. 2020. A multiscale framework for disentangling the roles of evenness, density, and aggregation on diversity gradients. Ecology. https://doi.org/10.1002/ecy.3233\n\nChase, J.M., B. McGill, D.J. McGlinn, F. May, S.A. Blowes, X. Xiao, T. Knight. 2018. Embracing scale-dependence to achieve a deeper understanding of biodiversity and its change across communities. Ecology Letters. 21: 1737\xe2\x80\x931751. https://doi.org/10.1111/ele.13151 \n\nPlease cite `mobr`. Run the following to get the appropriate citation for the version you\'re using:\n\n```r\ncitation(package = ""mobr"")\n```\n\n## Installation\n\n```r\ninstall.packages(\'mobr\')\n```\n\nOr, install the development version\n\n```r\ninstall.packages(\'devtools\')\nlibrary(devtools)\n```\n\nNow that `devtools` is installed you can install `mobr` using the following R code:\n\n```r\ninstall_github(\'MoBiodiv/mobr\')\n# or, to install the dev branch:\ninstall_github(\'MoBiodiv/mobr\', ref = \'dev\')\n```\n\n## Examples\n\nThe package [vignette](https://github.com/MoBiodiv/mobr/blob/master/vignettes/mobr_intro.pdf)\nprovides a useful walk-through of the package tools, but below is some example code\nthat uses the two key analyses and related graphics. \n\n```r\nlibrary(mobr)\ndata(inv_comm)\ndata(inv_plot_attr)\ninv_mob_in = make_mob_in(inv_comm, inv_plot_attr, coord_names = c(\'x\', \'y\'))\ninv_stats = get_mob_stats(inv_mob_in, \'group\', ref_level = \'uninvaded\')\nplot(inv_stats)\ninv_deltaS = get_delta_stats(inv_mob_in, \'group\', ref_level=\'uninvaded\',\n type=\'discrete\', log_scale=TRUE, n_perm = 5)\nplot(inv_deltaS, \'b1\')\n```\n\n## Meta\n\n* Please [report any issues or bugs](https://github.com/mobiodiv/mobr).\n* License: MIT\n* Get citation information for `mobr` in R doing `citation(package = \'mobr\')`\n* Please note that this project is released with a Contributor Code of Conduct. By participating in this project you agree to abide by its terms.\n\n## Thanks\n\n* Gregor Seyer for providing a constructive review of our CRAN submission\n* Kurt Hornik for helping us keep up with CRAN changes. \n'",",https://doi.org/10.5281/zenodo.4014111,https://doi.org/10.1111/2041-210X.13102\n\nMcGlinn,https://doi.org/10.1002/ecy.3233\n\nChase,https://doi.org/10.1111/ele.13151","2015/06/02, 20:28:37",3067,CUSTOM,0,762,"2023/02/26, 11:37:24",20,149,251,3,241,1,0.0,0.3783783783783784,"2020/09/03, 19:10:40",2.0.0,0,4,false,,false,false,,,https://github.com/MoBiodiv,,,,,https://avatars.githubusercontent.com/u/12683972?v=4,,, Wildbook,"Blends structured wildlife research with artificial intelligence, citizen science, and computer vision to speed population analysis and develop new insights to help fight extinction.",WildMeOrg,https://github.com/WildMeOrg/Wildbook.git,github,"cloud,conservation,java,nonprofit,tomcat",Biodiversity and Species Distribution,"2023/10/24, 21:36:15",88,0,3,true,Java,Wild Me,WildMeOrg,"Java,JavaScript,HTML,Less,CSS,Perl,PLpgSQL,Shell,Python,SCSS,Dockerfile",https://www.wildme.org/codex-and-wildbook.html,"b'\n

# Wildbook

\n\nWildbook® is an open source software framework to support mark-recapture, molecular ecology, and social ecology studies. The biological and statistical communities already support a number of excellent tools, such as Program MARK, GenAlEx, and SOCPROG for use in analyzing wildlife data. Wildbook is a complementary software application that:\n\n- provides a scalable and collaborative platform for intelligent wildlife data storage and management, including advanced, consolidated searching\n- provides an easy-to-use software suite of functionality that can be extended to meet the needs of wildlife projects, especially where individual identification is used\n- provides an API to support the easy export of data to cross-disciplinary analysis applications (e.g., GenePop) and other software (e.g., Google Earth)\n- provides a platform that supports the exposure of data in biodiversity databases (e.g., GBIF and OBIS)\n- provides a platform for animal biometrics that supports easy data access and facilitates matching application deployment for multiple species\n\n
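As a rough illustration of the API-driven export mentioned in the list above, here is a minimal sketch of pulling JSON from a Wildbook server with Java 11's built-in HttpClient. The host name and the /api/encounters route below are hypothetical placeholders, not documented Wildbook endpoints; see Wildbook.org for the actual API routes exposed by your server.

```
// Illustrative only: the server URL and route are hypothetical placeholders.
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class WildbookApiDemo {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create(""https://my-wildbook.example.org/api/encounters"")) // hypothetical route
                .header(""Accept"", ""application/json"")
                .build();
        // Fetch the response body as a string; real code would parse the JSON
        HttpResponse<String> response =
                client.send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.body()); // JSON suitable for export to other tools
    }
}
```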

## Wildbook IA (formerly IBEIS)

\n\n Wildbook is the data management layer for the Wildbook IA (WBIA). The WBIA project is the successor to the Image-Based Ecological Information System (IBEIS) computer vision research platform, which pulls data from Wildbook servers to detect features in images and identify individual animals. WBIA brings massive-scale computer vision to wildlife research for the first time. \n
\n

## Support

\n\nPlease see Wildbook.org for documentation. \n\nNeed direct help?\n\nWild Me (wildme.org) engineering staff provide support for Wildbook. You can contact us at: support@wildme.org\n\nWe provide support during regular office hours on Mondays and Tuesdays.\n\nSupport resources include:\n\n\n

## Want to contribute code?

\n

### Variable naming conventions

- Camel case
- Please don't use single-letter variable names (no matter how temporary you think the code is)
- Avoid comments; code should be clear enough to speak for itself in almost all cases
- Code for clarity rather than for efficiency (one-liners are cool, but not at the expense of future obfuscation)
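To make the first two conventions concrete, here is a minimal, hypothetical sketch; the names are illustrative and do not come from the Wildbook codebase.

```
// Hypothetical sketch, not Wildbook code: camel case and descriptive names.
public class NamingDemo {
    public static void main(String[] args) {
        // Avoid: int n = 0;  (single letter, unclear)
        int encounterCount = 0;       // camel case, says what it counts
        String[] locationIds = {};    // descriptive plural for a collection
        for (String locationId : locationIds) {
            encounterCount++;         // clear even without a comment
        }
        System.out.println(encounterCount);
    }
}
```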

### Overall outline of code framework

\nSpell out how .jsp files relate to servlet files, and how servlet files relate to java files, etc. Someone new to the codebase should be able to orient themselves based on your notes.\n\n
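Until that outline is written, the generic Tomcat flow below may help with orientation: a form in a .jsp page submits to a servlet, which calls plain Java classes and forwards back to a .jsp for display. This is an illustrative sketch assuming the standard javax.servlet API, not actual Wildbook code; the class and page names are hypothetical.

```
// Generic JSP -> servlet -> model flow (illustrative only, not Wildbook code).
import java.io.IOException;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class EncounterSearchServlet extends HttpServlet {
    @Override
    protected void doGet(HttpServletRequest request, HttpServletResponse response)
            throws ServletException, IOException {
        String query = request.getParameter(""query"");  // value submitted by the JSP form
        request.setAttribute(""results"", query);         // plain Java business logic would go here
        // Forward to a JSP that renders the results
        request.getRequestDispatcher(""/results.jsp"").forward(request, response);
    }
}
```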

### Java/jsp style

\nInitialize variables and type signatures at the abstract/interface level when possible.\n\nInstead of:\n\n```\nArrayList<Encounter> encounters = new ArrayList<Encounter>();\n...\npublic int getMax(ArrayList<Integer> numbers) {\n```\n\nTry:\n\n```\nList<Encounter> encounters = new ArrayList<Encounter>();\n...\npublic int getMax(Collection<Integer> numbers) {\n```\n\nFirst of all, it\xe2\x80\x99s easier to read and more intuitive for a function to take a Map or List than a HashMap or ArrayList.\n\nThe List interface defines how we want that variable to behave, and whether it\xe2\x80\x99s an ArrayList or LinkedList is incidental. Keeping the variable and method signatures abstract means we can change the implementation later (e.g. swapping ArrayList->LinkedList) without changing the rest of our code.\nhttps://stackoverflow.com/questions/2279030/type-list-vs-type-arraylist-in-java\n\nRelated: when writing utility methods, making the input type as abstract as possible makes the method versatile. See Util.asSortedList in Wildbook: since the input is an abstract Collection, it can accept a List, Set, PriorityQueue, or Vector as input, and return a sorted List.\n\nRuntime (not style): Use Sets (not Lists or arrays) if you\xe2\x80\x99re only keeping track of collection membership / item uniqueness. \n\nInstead of:\n\n```\nList<MarkedIndividual> uniqueIndividuals = new ArrayList<MarkedIndividual>();\nfor (Encounter currentEncounter : encounters) {\n    MarkedIndividual currentInd = currentEncounter.getIndividual();\n    if (!uniqueIndividuals.contains(currentInd)) {\n        uniqueIndividuals.add(currentInd);\n        doStuff();\n    }\n}\n```\n\nTry:\n\n```\nSet<MarkedIndividual> uniqueIndividuals = new HashSet<MarkedIndividual>();\nfor (Encounter currentEncounter : encounters) {\n    MarkedIndividual currentInd = currentEncounter.getIndividual();\n    if (!uniqueIndividuals.contains(currentInd)) {\n        uniqueIndividuals.add(currentInd);\n        doStuff();\n    }\n}\n```\n\nThe reason is a little deep in the data types. Sets are defined as unordered collections of unique elements; and Lists/arrays are ordered collections with no bearing on element-uniqueness. If the order of a collection doesn\xe2\x80\x99t matter and you\xe2\x80\x99re just checking membership, you\xe2\x80\x99ll have faster runtime using a Set.\n\nSets implement contains, add, and remove methods much faster than lists [contains is O(log(n)) vs O(n) runtime]. A list has to iterate through the entire list every time it runs contains (it checks each item once at a time) while a set (especially a HashSet) keeps track of an item index for quick lookup.\n\n\nUse for-each loops aka \xe2\x80\x9cenhanced for loops\xe2\x80\x9d to make loops more concise and readable.\n\nInstead of:\n\n```\nfor (int i=0; i<encounters.size(); i++) {\n    Encounter enc = encounters.get(i);\n    doStuff(enc);\n}\n```\n\nTry:\n\n```\nfor (Encounter enc : encounters) {\n    doStuff(enc);\n}\n```\n\nNote that you no longer need to check `encounters.size()>0` because the for-loops take care of that.\n\nAlso note that if you want access to the `i` variable for logging or otherwise, the classic for-loop is best.\n\n\n`Util.stringExists` is shorthand for a common string check:\n\nInstead of:\n\n```\nif (str != null && !str.equals("""")) {\n    doStuff();\n}\n```\n \nTry:\n\n```\nif (Util.stringExists(str)) {\n    doStuff();\n}\n```\n\nThis method also checks for the strings \xe2\x80\x9cnone\xe2\x80\x9d and \xe2\x80\x9cunknown\xe2\x80\x9d which have given us trouble in displays in the past.
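Putting the pieces together, here is a small self-contained sketch combining the abstract signatures, the Set-based uniqueness check, and the enhanced for loop described above. The nested stub classes are illustrative stand-ins, not the real Wildbook Encounter and MarkedIndividual classes.

```
// Illustrative sketch of the style guidelines above; the stubs are not Wildbook code.
import java.util.ArrayList;
import java.util.Collection;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class StyleDemo {

    // Stand-in stubs for illustration only.
    static class MarkedIndividual { }
    static class Encounter {
        private final MarkedIndividual individual;
        Encounter(MarkedIndividual individual) { this.individual = individual; }
        MarkedIndividual getIndividual() { return individual; }
    }

    // Abstract parameter type (Collection) keeps the method versatile.
    static Set<MarkedIndividual> uniqueIndividuals(Collection<Encounter> encounters) {
        Set<MarkedIndividual> unique = new HashSet<MarkedIndividual>();
        for (Encounter currentEncounter : encounters) {   // enhanced for loop
            unique.add(currentEncounter.getIndividual()); // a Set ignores duplicates
        }
        return unique;
    }

    public static void main(String[] args) {
        MarkedIndividual sharky = new MarkedIndividual();
        List<Encounter> encounters = new ArrayList<Encounter>(); // abstract variable type
        encounters.add(new Encounter(sharky));
        encounters.add(new Encounter(sharky)); // same individual seen twice
        System.out.println(uniqueIndividuals(encounters).size()); // prints 1
    }
}
```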

## History

\nWildbook started as a collaborative software platform for globally-coordinated whale shark (_Rhincodon typus_) research as deployed in the Wildbook for Whale Sharks (now part of http://www.sharkbook.ai). After many requests to use our software outside of whale shark research, it is now an open source, community-maintained standard for mark-recapture studies.\n\n

Wildbook is a registered trademark of Wild Me, a 501(c)(3) non-profit organization.

https://www.wildme.org\n'",,"2016/07/12, 22:09:20",2661,GPL-2.0,171,11864,"2023/10/24, 21:36:15",64,257,261,45,1,2,0.0,0.7652888838833202,"2023/04/19, 01:10:42",2023-04-01,8,30,false,,false,false,,,https://github.com/WildMeOrg,https://wildme.org,"Portland, OR",,,https://avatars.githubusercontent.com/u/20192494?v=4,,, PEcAn,The Predictive Ecosystem Analyzer is an integrated ecological bioinformatics toolbox.,PecanProject,https://github.com/PecanProject/pecan.git,github,"ecosystem-model,pecan,r,national-science-foundation,ecosystem-science,bayesian,plants,meta-analysis,data-science,data-assimilation,forecasting,cyberinfrastructure",Biodiversity and Species Distribution,"2023/10/06, 18:14:55",186,0,24,true,R,PEcAn Project,PecanProject,"R,Fortran,Shell,PHP,Python,C++,HTML,CSS,Dockerfile,Makefile,TeX,C,JavaScript,MATLAB",www.pecanproject.org,"b'[![GitHub Actions CI](https://github.com/PecanProject/pecan/workflows/CI/badge.svg)](https://github.com/PecanProject/pecan/actions)\n[![Slack](https://img.shields.io/badge/slack-login-green.svg)](https://pecanproject.slack.com/) \n[![Slack](https://img.shields.io/badge/slack-join_chat-green.svg)](https://join.slack.com/t/pecanproject/shared_invite/enQtMzkyODUyMjQyNTgzLWEzOTM1ZjhmYWUxNzYwYzkxMWVlODAyZWQwYjliYzA0MDA0MjE4YmMyOTFhMjYyMjYzN2FjODE4N2Y4YWFhZmQ)\n[![DOI](https://zenodo.org/badge/4469/PecanProject/pecan.svg)](https://zenodo.org/badge/latestdoi/4469/PecanProject/pecan)\n\n\n\n## Our Vision\n#### Ecosystem science, policy, and management informed by the best available data and models\n\n## Our Mission\n#### Develop and promote accessible tools for reproducible ecosystem modeling and forecasting\n\n\n## What is PEcAn?\n\nThe Predictive Ecosystem Analyzer (PEcAn) (see [pecanproject.org](http://pecanproject.org)) is an integrated ecological bioinformatics toolbox (Dietze et al. 2013, LeBauer et al. 2013) that consists of: 1) a scientific workflow system to manage the immense amounts of publicly-available environmental data and 2) a Bayesian data assimilation system to synthesize this information within state-of-the-art ecosystem models. This project is motivated by the fact that many of the most pressing questions about global change are not necessarily limited by the need to collect new data as much as by our ability to synthesize existing data. This project seeks to improve this ability by developing an accessible framework for integrating multiple data sources in a sensible manner.\n\nThe PEcAn workflow system allows ecosystem modeling to be more reproducible, automated, and transparent in terms of operations applied to data, and thus ultimately more comprehensible to both peers and the public. It reduces the redundancy of effort among modeling groups, facilitates collaboration, and makes models more accessible to the rest of the research community.\n\nPEcAn is not itself an ecosystem model, and it can be used with a variety of different ecosystem models; integrating a model involves writing a wrapper to convert inputs and outputs to and from the standards used by PEcAn. Currently, PEcAn supports over a dozen ecosystem models, with more being added all the time (see the _models_ folder for the most up-to-date list).\n\n## Documentation\n\nConsult the documentation of the PEcAn Project: either the [latest stable development](https://pecanproject.github.io/pecan-documentation/develop/) branch or the latest [release](https://pecanproject.github.io/pecan-documentation/master/). 
Documentation from [earlier releases is here](https://pecanproject.github.io/documentation.html).\n\n## Getting Started\n\nSee our [""Tutorials Page""](https://pecanproject.github.io/tutorials.html), which provides self-guided tutorials, links to vignettes, and an overview presentation.\n\n### Installation\n\nComplete instructions on how to install PEcAn can be found in the [documentation here](https://pecanproject.github.io/pecan-documentation/develop/pecan-manual-setup.html). To get PEcAn up and running, you can use one of three methods:\n1. Run a [Virtual Machine](https://pecanproject.github.io/pecan-documentation/develop/install-vm.html#install-vm). This is recommended for students and new users, and provides a consistent, tested environment for each release.\n2. Use [Docker](https://pecanproject.github.io/pecan-documentation/develop/install-docker.html#install-docker). This is recommended, especially for development and production deployment.\n3. Install all of the PEcAn R packages on your own Linux or MacOS computer or server. This can be done by [installing from r-universe](https://pecanproject.github.io/pecan-documentation/develop/r-universe.html): \n``` r\n# Enable repository from pecanproject\noptions(repos = c(\n pecanproject = \'https://pecanproject.r-universe.dev\',\n CRAN = \'https://cloud.r-project.org\'))\n# Download and install PEcAn.all in R\ninstall.packages(\'PEcAn.all\')\n\n``` \nThis, however, may have limited functionality without also installing other components of PEcAn, in particular [BETYdb](https://pecanproject.github.io/pecan-documentation/develop/osinstall.html#install-bety).\n\n### Website\n\nVisit our [webpage](https://pecanproject.github.io) to keep up with the latest news, versions, and information about the PEcAn Project.\n\n#### Web Interface demo\nThe fastest way to begin modeling ecosystems is through the PEcAn web interface. \nWe have a [demo website](http://pecan.ncsa.illinois.edu/pecan/01-introduction.php) that runs the current version of PEcAn. Using this instance you can perform a run using either ED or SIPNET at any of the predefined sites.\n\nThe demo instance only allows for runs at pecan.ncsa.illinois.edu. Once you have set up the run, it will execute on our server; depending on the number of people executing a model and the model selected, this can take between a few seconds and a few minutes to finish. Once it\'s finished, you can see the results of the execution and plot the outputs of the model. Complete examples of a few executions can be found in our online [tutorials](http://pecanproject.github.io/tutorials.html).\n\n## Publications\n\n* LeBauer, D.S., D. Wang, K. Richter, C. Davidson, and M.C. Dietze (2013). Facilitating feedbacks between field measurements and ecosystem models. Ecological Monographs. [doi:10.1890/12-0137.1](https://doi.org/10.1890/12-0137.1)\n* Wang, D, D.S. LeBauer, and M.C. Dietze (2013). Predicting yields of short-rotation hybrid poplar (Populus spp.) for the contiguous US through model-data synthesis. Ecological Applications [doi:10.1890/12-0854.1](https://doi.org/10.1890/12-0854.1)\n* Dietze, M.C., D.S LeBauer, and R. Kooper (2013). On improving the communication between models and data. Plant, Cell, & Environment [doi:10.1111/pce.12043](https://doi.org/10.1111/pce.12043)\n* Dietze, Michael C., Shawn P. Serbin, Carl Davidson, Ankur R. Desai, Xiaohui Feng, Ryan Kelly, Rob Kooper et al. 
""A quantitative assessment of a terrestrial biosphere model\'s data needs across North American biomes."" Journal of Geophysical Research: Biogeosciences 119, no. 3 (2014): 286-300.\n* Viskari, Toni, Brady Hardiman, Ankur R. Desai, and Michael C. Dietze. ""Model-data assimilation of multiple phenological observations to constrain and predict leaf area index."" (2015) [doi:10.1890/14-0497.1](https://doi.org/10.1890/14-0497.1)\n* Shiklomanov, A., MC Dietze, T Viskari, PA Townsend, SP Serbin. 2016 ""Quantifying the influences of spectral resolution on uncertainty in leaf trait estimates through a Bayesian approach to RTM inversion"" Remote Sensing of Environment 183: 226-238\n* LeBauer, David, Rob Kooper, Patrick Mulrooney, Scott Rohde, Dan Wang, Stephen P. Long, and Michael C. Dietze. ""BETYdb: a yield, trait, and ecosystem service database applied to second\xe2\x80\x90generation bioenergy feedstock production."" GCB Bioenergy (2017).\n\nAn extensive list of publications that apply PEcAn or are informed by our work is available on [Google Scholar](https://scholar.google.com/citations?hl=en&user=HWhxBY4AAAAJ).\n\n## Acknowledgements\n\nThe PEcAn project is supported by the National Science Foundation (ABI #1062547, ABI #1458021, DIBBS #1261582, ARC #1023477, EF #1318164, EF #1241894, EF #1241891), NASA Terrestrial Ecosystems, the Energy Biosciences Institute, Department of Energy (ARPA-E awards #DE-AR0000594 and DE-AR0000598), and an Amazon AWS in Education Grant.\n\nAny opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation, NASA, or other federal agencies. PEcAn is a collaboration among research groups at the Department of Earth And Environment at Boston University, the Carl Woese Institute for Genomic Biology at the University of Illinois, the Image Spatial Data Analysis group at the National Center for Supercomputing Applications, the Department of Atmospheric & Oceanic Sciences at the University of Wisconsin-Madison, and the Terrestrial Ecosystem Science & Technology group at Brookhaven National Lab.\n\nBETYdb is a product of the Energy Biosciences Institute at the University of Illinois at Urbana-Champaign. We gratefully acknowledge the great effort of other researchers who generously made their own data available for further study.\n\n## License\n\nUniversity of Illinois/NCSA Open Source License\n\nCopyright (c) 2012, University of Illinois, NCSA. 
All rights reserved.\n\nPEcAn project\nwww.pecanproject.org\n\nPermission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the ""Software""), to deal with the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:\n\n- Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimers.\n- Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimers in the documentation and/or other materials provided with the distribution.\n- Neither the names of University of Illinois, NCSA, nor the names of its contributors may be used to endorse or promote products derived from this Software without specific prior written permission.\n\nTHE SOFTWARE IS PROVIDED ""AS IS"", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NON INFRINGEMENT. IN NO EVENT SHALL THE CONTRIBUTORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS WITH THE SOFTWARE.\n'",",https://zenodo.org/badge/latestdoi/4469/PecanProject/pecan,https://doi.org/10.1890/12-0137.1,https://doi.org/10.1890/12-0854.1,https://doi.org/10.1111/pce.12043,https://doi.org/10.1890/14-0497.1","2012/11/25, 23:48:26",3986,CUSTOM,1082,21361,"2023/10/08, 14:36:06",460,2023,2766,151,17,38,1.3,0.8649342801027345,"2021/10/08, 20:48:15",v1.7.2,0,80,false,,false,true,,,https://github.com/PecanProject,http://pecanproject.org,,,,https://avatars.githubusercontent.com/u/2879854?v=4,,, mapme.biodiversity,Efficient analysis of spatial biodiversity datasets for global portfolios.,mapme-initiative,https://github.com/mapme-initiative/mapme.biodiversity.git,github,,Biodiversity and Species Distribution,"2023/08/28, 14:09:01",16,0,5,true,R,mapme-initiative,mapme-initiative,R,https://mapme-initiative.github.io/mapme.biodiversity/,"b'\n\n\n[![R-CMD-check](https://github.com/mapme-initiative/mapme.biodiversity/workflows/R-CMD-check/badge.svg)](https://github.com/mapme-initiative/mapme.biodiversity/actions)\n[![Coverage\nStatus](https://img.shields.io/codecov/c/github/mapme-initiative/mapme.biodiversity/master.svg)](https://app.codecov.io/github/mapme-initiative/mapme.biodiversity?branch=main)\n[![CRAN\nstatus](https://badges.cranchecks.info/summary/mapme.biodiversity.svg)](https://cran.r-project.org/web/checks/check_results_mapme.biodiversity.html)\n[![CRAN\nversion](https://www.r-pkg.org/badges/version/mapme.biodiversity)](https://CRAN.R-project.org/package=mapme.biodiversity)\n[![License](https://img.shields.io/badge/License-GPL%20(%3E=3)-brightgreen.svg?style=flat)](https://choosealicense.com/licenses/gpl-3.0/)\n\n\n\n# mapme.biodiversity \n\n## About\n\nBiodiversity areas, especially primary forests, provide multiple\necosystem services for the local population and the planet as a whole.\nThe rapid expansion of human land use into natural ecosystems and the\nimpacts of the global climate crisis put natural ecosystems and the\nglobal biodiversity under threat.\n\nThe mapme.biodiversity package helps to analyse a 
number of biodiversity-related indicators and biodiversity threats based on freely available geodata sources such as Global Forest Watch. It supports computationally efficient routines and heavy parallel computing in cloud infrastructures such as AWS or Microsoft Azure, using the statistical programming language R. The package allows for the analysis of global biodiversity portfolios with thousands or millions of AOIs, which is normally only possible on dedicated platforms such as Google Earth Engine. It provides the possibility to e.g.\xc2\xa0analyse the World Database of Protected Areas (WDPA) for a number of relevant indicators. The primary use case of this package is to support scientific analysis and data science for individuals and organizations who seek to preserve the planet\'s biodiversity. Its development is funded by the German Development Bank KfW.\n\n## Installation\n\nThe package and its dependencies can be installed from CRAN via:\n\n``` r\ninstall.packages(""mapme.biodiversity"")\n```\n\nTo install the development version, use the following command:\n\n``` r\nremotes::install_github(""https://github.com/mapme-initiative/mapme.biodiversity"")\n```\n\n## Usage example\n\n`{mapme.biodiversity}` works by constructing a portfolio from an sf object. Specific raster and vector resources matching the spatio-temporal extent of the portfolio are made available locally. Once all required resources are available, indicators can be calculated individually for each asset in the portfolio.\n\nTo list all available resources and indicators run:\n\n``` r\nlibrary(mapme.biodiversity)\nlibrary(sf)\n```\n\n ## Linking to GEOS 3.11.1, GDAL 3.6.4, PROJ 9.1.1; sf_use_s2() is TRUE\n\n``` r\nresources <- names(available_resources())\nindicators <- names(available_indicators())\ncat(sprintf(\n ""Supported resources:\\n- %s\\n\\nSupported indicators:\\n- %s"",\n paste(resources, collapse = ""\\n- ""),\n paste(indicators, collapse = ""\\n- "")\n))\n```\n\n ## Supported resources:\n ## - chirps\n ## - esalandcover\n ## - fritz_et_al\n ## - gfw_emissions\n ## - gfw_lossyear\n ## - gfw_treecover\n ## - gmw\n ## - nasa_firms\n ## - nasa_grace\n ## - nasa_srtm\n ## - nelson_et_al\n ## - soilgrids\n ## - teow\n ## - ucdp_ged\n ## - worldclim_max_temperature\n ## - worldclim_min_temperature\n ## - worldclim_precipitation\n ## - worldpop\n ## \n ## Supported indicators:\n ## - active_fire_counts\n ## - active_fire_properties\n ## - biome\n ## - deforestation_drivers\n ## - drought_indicator\n ## - ecoregion\n ## - elevation\n ## - fatalities\n ## - landcover\n ## - mangroves_area\n ## - population_count\n ## - precipitation_chirps\n ## - precipitation_wc\n ## - soilproperties\n ## - temperature_max_wc\n ## - temperature_min_wc\n ## - traveltime\n ## - treecover_area\n ## - treecover_area_and_emissions\n ## - treecoverloss_emissions\n ## - tri\n\nOnce you have decided on an indicator you are interested in, you can initialize a biodiversity portfolio by using an sf-object that only contains geometries of type `POLYGON` via the `init_portfolio()` function call. This will set some important information that is needed further down the processing chain. We can then request the download of a resource that is required to calculate specific indicators. 
Once the indicator has been calculated individually for all assets in a portfolio, the data is returned as a nested list column to the original object.\n\n``` r\n(system.file(""extdata"", ""sierra_de_neiba_478140_2.gpkg"", package = ""mapme.biodiversity"") %>%\n sf::read_sf() %>%\n init_portfolio(\n years = 2016:2017,\n outdir = system.file(""res"", package = ""mapme.biodiversity""),\n tmpdir = system.file(""tmp"", package = ""mapme.biodiversity""),\n add_resources = FALSE,\n verbose = FALSE\n ) %>%\n get_resources(\n resources = c(""gfw_treecover"", ""gfw_lossyear"", ""gfw_emissions""),\n vers_treecover = ""GFC-2020-v1.8"", vers_lossyear = ""GFC-2020-v1.8""\n ) %>%\n calc_indicators(""treecover_area_and_emissions"", min_size = 1, min_cover = 30) %>%\n tidyr::unnest(treecover_area_and_emissions))\n```\n\n ## Simple feature collection with 2 features and 8 fields\n ## Geometry type: POLYGON\n ## Dimension: XY\n ## Bounding box: xmin: -71.80933 ymin: 18.57668 xmax: -71.33201 ymax: 18.69931\n ## Geodetic CRS: WGS 84\n ## # A tibble: 2 \xc3\x97 9\n ## WDPAID NAME DESIG_ENG ISO3 assetid years emissions treecover\n ## \n ## 1 478140 Sierra de Neiba National Park DOM 1 2016 2832 2357.\n ## 2 478140 Sierra de Neiba National Park DOM 1 2017 3468 2345.\n ## # \xe2\x84\xb9 1 more variable: geom \n\n## A note on parallel computing\n\n{mapme.biodiversity} follows the parallel computing paradigm of the {[future](https://cran.r-project.org/package=future)} package. That means that you, as a user, are in control of whether and how to set up parallel processing. Currently, {mapme.biodiversity} supports parallel processing at the asset level of the `calc_indicators()` function only. We also currently assume that parallel processing is done on the cores of a single machine. In future developments, we would like to support distributed processing. If you are working on a distributed use-case, please contact the developers, e.g.\xc2\xa0via the [discussion board](https://github.com/mapme-initiative/mapme.biodiversity/discussions) or mail.\n\nTo process 6 assets in parallel and report a progress bar, set up the following in your code:\n\n``` r\nlibrary(future)\nlibrary(progressr)\n\nplan(multisession, workers = 6) # set up parallel plan\n\nwith_progress({\n portfolio <- calc_indicators(\n portfolio,\n ""treecover_area_and_emissions"",\n min_size = 1,\n min_cover = 30\n )\n})\n\nplan(sequential) # close child processes\n```\n\nNote that the above code uses `future::multisession()` as the parallel backend. This backend will resolve the calculation in multiple background R sessions. You should use that backend if you are operating on Windows, are using RStudio, or otherwise are not sure about which backend to use. 
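On systems that support process forking, the same call can use forked workers instead of background sessions. A minimal sketch, assuming the same `portfolio` object and parameters as above (forked workers are not available on Windows):\n\n``` r\nlibrary(future)\n\nplan(multicore, workers = 6) # forked workers instead of background R sessions\n\nportfolio <- calc_indicators(\n portfolio,\n ""treecover_area_and_emissions"",\n min_size = 1,\n min_cover = 30\n)\n\nplan(sequential) # shut down the workers\n```\n\n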
As the sketch above shows, in case you are operating on a system that allows process forking and are *not* using RStudio, consider using `future::multicore()` for more efficient parallel processing.\n\nHead over to the [online documentation](https://mapme-initiative.github.io/mapme.biodiversity/index.html) to find more detailed information about the package.\n'",,"2022/02/09, 09:14:58",623,GPL-3.0,63,394,"2023/09/16, 19:44:45",18,83,172,82,39,0,0.0,0.29729729729729726,"2023/08/28, 14:08:34",v0.4.0,1,7,false,,false,false,,,https://github.com/mapme-initiative,https://mapme-initiative.org/,,,,https://avatars.githubusercontent.com/u/76036920?v=4,,, spatialEco,R package for spatial analysis and modelling of ecological systems.,jeffreyevans,https://github.com/jeffreyevans/spatialEco.git,github,"r,r-spatial,spatial,raster,vector,ecology,biodiversity,conservation,cran,r-package",Biodiversity and Species Distribution,"2023/07/18, 00:26:59",90,0,25,true,R,,,R,,"b'# spatialEco (dev 2.0-2) \n\n[![R-CMD-check](https://github.com/jeffreyevans/spatialEco/actions/workflows/R-CMD-check.yaml/badge.svg)](https://github.com/jeffreyevans/spatialEco/actions/workflows/R-CMD-check.yaml)\n[![CRAN status](http://www.r-pkg.org/badges/version/spatialEco)](https://cran.r-project.org/package=spatialEco)\n[![CRAN RStudio mirror downloads](http://cranlogs.r-pkg.org/badges/grand-total/spatialEco)](https://cran.r-project.org/package=spatialEco)\n\n\nspatialEco is an R package with utilities to support spatial data manipulation, query, sampling, and modeling. Functions include models for species population density, quadrat-based analysis and sampling, spatial smoothing, multivariate separability, a point process model for creating pseudo-absences and sub-sampling, polygon and point-distance structural metrics, auto-logistic models, sampling models, cluster optimization, and statistical exploratory tools.\n\nI jumped to a major release and pushed the version to 2.0-0. All spatial functions are now using the `sf` and `terra` packages due to `sp`, `rgeos`, `rgdal`, `maptools` and `raster` being retired. Sorry, but for the most part I removed backwards compatibility with these deprecated object classes, so you will need to make sure that you are using modern spatial object classes. In terra, there is now only one class type for multi- or single-band raster objects, ""SpatRaster"", which can be read or coerced using `terra::rast`. For coercing sp class vector objects to sf you can use `sf::st_as_sf` or `as(x, ""sf"")` and, going from sf to sp, you use `as(x, ""Spatial"")` (see the short coercion sketch after the function table below). \n \n \n## Available functions in development version of spatialEco 2.0-2\n\n| `spatialEco` Function | Description |\n|:-----------------------------|:----------------------------------------------------------------------------------------|\n| `all_pairwise` | Creates a list of all pairwise combinations of a vector |\n| `annulus.matrix` | Creates a 0,1 matrix based on defined annulus parameters, can be used as a window matrix in a raster focal function |\n| `aspline.downscale` | Downscale raster to a higher resolution using multivariate adaptive regression splines (MARS) |\n| `background` | Creates a point sample that can be used as a NULL for SDM\'s and other modeling approaches (see pseudo.absence for alternate approach). 
|\n| `bbox_extent` | Creates a bounding box polygon representing the extent of a feature or raster | \n| `bearing.distance` | Calculates a new point based on bearing/distance \n| `breeding.density` | Calculates n-th percent breeding density areas based on a kernel density estimate of population counts. |\n| `built.index` | Remote sensing built-up index | \n| `cgls_urls` | Based on query, provides URLs for Copernicus Global Land Service datasets \n| `chae` | The Canine-Human Age Equivalent (for fun) \n| `class.comparison` | Deprecated; with the migration to terra, collapsed into raster.change \n| `classBreaks` | For finding class breaks in a distribution \n| `collinear` | Test for linear or nonlinear collinearity/correlation in data \n| `combine` | Combines multiple rasters into an ""all possible combinations"" raster, emulating the ESRI combine function\n| `concordance` | Performs a concordance/disconcordance (C-statistic) test on binomial models. \n| `conf.interval` | Calculates confidence interval for the mean or median of a distribution with unknown population variance\n| `convexHull` | Derives a convex hull of points using the alpha hull approach with adjustable tension. Please note that due to licensing reasons, this function is only available in the GitHub development version and not on CRAN. You must call the function from the package namespace using spatialEco:::convexHull\n| `correlogram` | Calculates and plots a correlogram (spatially lagged correlations, ""pearson"", ""kendall"" or ""spearman"") \n| `cross.tab` | Cross tabulate two rasters, labels outputs \n| `crossCorrelation` | Calculates the partial spatial cross-correlation function \n| `csi` | Calculates cosine similarity and angular similarity on two vectors or a matrix \n| `curvature` | Zevenbergen & Thorne, McNab\'s or Bolstad\'s surface (raster) curvature \n| `dahi` | Calculates the DAHI (Diurnal Anisotropic Heat Index) \n| `date_seq` | Creates date sequence, given defined start and stop dates, with options for day, week, month, quarter, year or minute.\n| `daymet.point` | Downloads DAYMET climate variables for specified point and time period \n| `daymet.tiles` | Returns a vector of DAYMET tile id\'s within a specified extent \n| `dispersion` | Calculates the dispersion (""rarity"") of targets associated with planning units \n| `dissection` | Evans (1972) Martonne\'s modified dissection \n| `divergence` | Kullback-Leibler Divergence (Cross-entropy) \n| `download.daymet` | Deprecated \n| `download.hansen` | Deprecated \n| `download.prism` | Deprecated \n| `effect.size` | Cohen\'s-d effect size with pooled sd for a control and experimental group \n| `erase.points` | Erases points inside or outside a polygon feature class \n| `explode` | Deprecated due to redundancy with sf::st_cast \n| `extract.vertices` | Extracts (x,y) vertex coordinates from polygons and lines \n| `fuzzySum` | Calculates the fuzzy sum of a vector \n| `gaussian.kernel` | Creates a Gaussian Kernel of specified size and sigma \n| `geo.buffer` | Buffers data in geographic coordinate space using a temporary projection \n| `group.pdf` | Creates a probability density plot of y for each group of x \n| `hexagons` | Create hexagon polygon \xe2\x80\x9cfishnet\xe2\x80\x9d of defined size and extent. 
\n| `hli.pt` | Heat Load Index for tabular ""point"" data with slope and aspect \n| `hli` | Heat Load Index, now with support for southern hemisphere data \n| `hsp` | Hierarchical Slope Position \n| `hybrid.kmeans` | Clustering using hierarchical clustering to define cluster-centers in k-means \n| `idw.smoothing` | Distance weighted smoothing (IDW) of a variable in a spatial point object. The function is a smoothing interpolator at the point observation(s) level using a distance-weighted mean.\n| `impute.loess` | Imputes NA\'s or smooths data (or both) for a vector, intended mostly for time-series or serial data.\n| `insert` | Inserts a row or column into a data.frame \n| `insert.values` | Inserts new values into a vector at specified positions \n| `is.empty` | Method, evaluates if vector is empty \n| `is.whole` | Deprecated after R release of base::is.whole in 4.1.0 \n| `kendall` | Kendall tau trend with continuity correction for time-series \n| `kl.divergence` | Calculates the Kullback-Leibler divergence (relative entropy) between unweighted theoretical component distributions. Divergence is calculated as: int[ f(x) (log f(x) - log g(x)) dx ] for distributions with densities f() and g().\n| `knn` | Returns ids, rownames and distance of nearest neighbors in two (or single) spatial objects. Optional radius distance constraint. Added optional covariates (weights)\n| `lai` | Calculates two versions of Leaf Area Index \n| `local.min.max` | Calculates the local minimums and maximums in a numeric vector, indicating inflection points in the distribution.\n| `loess.boot` | Bootstrap of a Local Polynomial Regression (loess) \n| `loess.ci` | Calculates a local polynomial regression fit with associated confidence intervals \n| `logistic.regression` | Performs a logistic (binomial) and autologistic (spatially lagged binomial) regression using maximum likelihood estimation or penalized maximum likelihood estimation.\n| `max_extent` | Returns the maximum extent of multiple spatial inputs \n| `mean_angle` | Returns the mean of a vector of angles. Intended for focal and zonal functions on slope or aspect \n| `moments` | Calculate statistical moments of a distribution including percentiles, arithmetic-geometric-harmonic means, coefficient of variation, median absolute deviation, skewness, kurtosis, mode and number of modes. \n| `morans.plot` | Autocorrelation plot \n| `nni` | Calculates the nearest neighbor index (NNI) measure of clustering or dispersal \n| `nth.vlaue` | Returns the Nth (smallest/largest) values in a numeric vector \n| `oli.aws` | Download Landsat 8 OLI from AWS. \n| `o.ring` | Calculates inhomogeneous O-ring point pattern statistic (Wiegand & Maloney 2004) \n| `optimal.k` | Find optimal k of k-Medoid partitions using silhouette widths \n| `optimized.sample.variance` | Draws an optimal sample that minimizes or maximizes the sample variance \n| `outliers` | Identify outliers using modified Z-score \n| `overlap` | For comparing the similarity of two niche estimates using Warren\'s-I \n| `parea.sample` | Creates a systematic or random point sample of polygons where n is based on percent area of each polygon\n| `parse.bits` | Based on integer value, pulls value(s) of specified bit(s) \n| `parial.cor` | Partial and Semi-partial correlation \n| `plot.effect.size` | Plot generic for effect size \n| `plot.loess.boot` | Plot generic for loess boot \n| `point.in.poly` | Deprecated because function is redundant with sf::st_intersection\n| `polygon_extract` | Deprecated because of migration to terra. 
Required package only supports the raster class \n| `polyPerimeter` | Calculates the perimeter length(s) for a polygon object \n| `poly.regression` | Smoothing data in time-series and imputing missing (NA) values using polynomial regression\n| `poly_trend` | Derives Nth order polynomial trend with confidence intervals \n| `pp.subsample` | Generates random subsample based on point process intensity function of the observed data. This is a spatially informed data thinning model that can be used to reduce pseudo-replication or autocorrelation.\n| `proximity.index` | Proximity index for a set of polygons \n| `pseudo.absence` | Generates pseudo-absence samples based on the spatial intensity function of known species locations. This is akin to distance constrained but is informed by the spatial process of the observed data and is drawn from a probabilistic sample following the intensity function.\n| `quadrats` | Quadrat sampling or analysis, variable size and angle options \n| `random.raster` | Creates random raster/stack of defined dimensions and statistical distributions \n| `raster.change` | Compares two categorical rasters with a variety of statistical options \n| `raster.deviation` | Local deviation from the raster based on specified global statistic or a polynomial trend.\n| `rasterDistance` | This replicates the raster distanceFromPoints function but uses the Arya & Mount Approximate Near Neighbor (ANN) C++ library for calculating distances, which results in a notable increase in performance. It is not memory safe and does not use the GeographicLib (Karney, 2013) spheroid distance method for geographic data\n| `raster.downscale` | Downscale raster to a higher resolution raster using robust regression \n| `raster.entropy` | Calculates entropy on integer raster (i.e., 8 bit 0-255) \n| `raster.gaussian.smooth` | Applies a Gaussian smoothing kernel to smooth a raster \n| `raster.invert` | Inverts value of a raster \n| `raster.kendall` | Calculates Kendall\'s tau trend with continuity correction for raster time-series \n| `raster.mds` | Multidimensional scaling of raster values within an N x N focal window \n| `raster.modified.ttest` | Bivariate moving window correlation using Dutilleul\'s modified t-test \n| `raster.moments` | Calculates focal statistical moments of a raster \n| `raster.transformation` | Applies specified statistical transformation to a raster \n| `raster.vol` | Calculates a percent volume on a raster or based on the entire raster or a systematic sample \n| `raster.Zscore` | Calculates the modified z-score for all cells in a raster \n| `rasterCorrelation` | Performs a simple moving window correlation between two rasters\t\t \n| `remove_duplicates` | Removes duplicate feature geometries \n| `remove.holes` | Removes all holes (null geometry) in polygon sf class objects \n| `rm.ext` | Removes file extensions from text string \n| `rotate.polygon` | Rotates a polygon by specified angle \n| `sa.trans` | Trigonometric transformation of a slope and aspect interaction \n| `sample.annulus` | Creates sample points based on annulus with defined inner and outer radius \n| `sample.line` | Deprecated because sf::st_sample can aggregate samples by feature\n| `sample.poly` | Deprecated because sf::st_sample can aggregate samples by feature \n| `sampleTransect` | Creates random transects from points, generates sample points along each transect \n| `separability` | Calculates variety of univariate separability metrics for nominal class samples \n| `sf_dissolve` | Dissolves polygon 
geometry using attribute, globally or overlap \n| `sg.smooth` | Smoothing time-series data using a Savitzky-Golay filter \n| `shannons` | Calculates Shannon\'s Diversity Index and Shannon\'s Evenness Index \n| `shift` | Shifts a vector by n lags without changing its length, can specify fill values \n| `sieve` | Creates a minimum mapping unit by removing pixel clusters < specified area\n| `similarity` | Uses row imputation to identify ""k"" ecologically similar observations \n| `smooth.time.series` | Smoothing and imputing missing (NA) pixel-level data in raster time-series using (local polynomial) LOESS regression\n| `sobal` | Applies an isotropic image gradient operator (Sobel-Feldman) using a 3x3 window \n| `spatial.select` | Performs a spatial select (feature subset) similar to ArcGIS \n| `spectral.separability` | Calculates class-wise multivariate spectral separability \n| `sf.kde` | A weighted or un-weighted kernel density estimate (previously sp.kde, now an alias) \n| `sp.na.omit ` | Deprecated as it is only relevant to sp class objects; for sf use base na.omit \n| `squareBuffer` | Creates a square buffer of a feature class\n| `srr` | Surface Relief Ratio \n| `stratified.random` | Creates a stratified random sample of an sp class object using a factor. \n| `subsample.distance` | Minimum, and optional maximum, distance constrained sub-sampling \n| `swvi` | Senescence weighted MSAVI or MTVI \n| `time_to_event` | Returns the time (sum to position) to a specified value \n| `topo.distance` | Calculates topographic corrected distance for a SpatialLinesDataFrame object \n| `tpi` | Calculates topographic position using mean deviations within specified window \n| `trasp` | Solar-radiation Aspect Index \n| `trend.line` | Calculates specified (linear, exponential, logarithmic, polynomial) trend line of x,y and plots results.\n| `tri` | Implementation of the Riley et al (1999) Terrain Ruggedness Index \n| `vrm` | Implementation of the Sappington et al., (2007) vector ruggedness measure \n| `winsorize` | Removes extreme outliers using a winsorization transformation \n| `wt.centroid` | Creates centroid of [x,y] coordinates of a random field, based on a weights field in a point sample.\n| `zonal.stats` | Deprecated in lieu of the exactextractr library\n\n**Bugs**: Users are encouraged to report bugs here. Go to [issues](https://github.com/jeffreyevans/spatialEco/issues) in the menu above, and press new issue to start a new bug report, documentation correction or feature request. 
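As a short illustration of the sp/sf/terra coercions described above the function table (a minimal sketch; object names are illustrative and the `sp` package must be installed):\n\n```\nlibrary(sf)\n\n# Build a small sf point object to demonstrate the coercions\npts_sf <- st_as_sf(data.frame(x = c(0, 1), y = c(0, 1)),\n coords = c(""x"", ""y""), crs = 4326)\n\npts_sp <- as(pts_sf, ""Spatial"") # sf -> sp class\npts_sf2 <- st_as_sf(pts_sp) # sp -> sf class\n\n# Single- and multi-band rasters are both ""SpatRaster"" objects in terra\nr <- terra::rast(nrows = 10, ncols = 10)\nterra::values(r) <- runif(terra::ncell(r))\n```\n\n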
You can direct questions to .\n\n**To install `spatialEco` in R use install.packages() to download current stable release from CRAN** \n\n**for the development version, run the following (requires the remotes package):**\n`remotes::install_github(""jeffreyevans/spatialEco"")`\n\n**You can also install from ROpenSci (R-Universe):**\n\n```\n# Enable repository from jeffreyevans\noptions(repos = c(\n jeffreyevans = \'https://jeffreyevans.r-universe.dev\',\n CRAN = \'https://cloud.r-project.org\'))\n \n# Download and install spatialEco in R\ninstall.packages(\'spatialEco\')\n```\n'",,"2017/10/27, 18:24:50",2189,GPL-3.0,45,387,"2023/05/30, 19:00:29",5,6,55,13,148,0,0.0,0.008196721311475419,,,0,4,false,,false,false,,,,,,,,,,, Biodiverse,"A tool for the spatial analysis of diversity using indices based on taxonomic, phylogenetic, trait and matrix-based relationships, as well as related environmental and temporal variations.",shawnlaffan,https://github.com/shawnlaffan/biodiverse.git,github,"spatial-analysis,phylogeography,phylogenetic-trees,phylogenetic-diversity,phylodiversity,endemism,beta-diversity,species-turnover,randomisations,biodiverse,hacktoberfest",Biodiversity and Species Distribution,"2023/10/24, 07:58:23",64,0,7,true,Perl,,,"Perl,R,Prolog,Raku,Shell,Batchfile",,"b'[![Build Status](https://travis-ci.org/shawnlaffan/biodiverse.svg?branch=master)](https://travis-ci.org/shawnlaffan/biodiverse)\n[![Build status](https://ci.appveyor.com/api/projects/status/9dnh2co30sfbl3i2/branch/master?svg=true)](https://ci.appveyor.com/project/shawnlaffan/biodiverse/branch/master)\n![Windows](https://github.com/shawnlaffan/biodiverse/workflows/Windows/badge.svg)\n![macos](https://github.com/shawnlaffan/biodiverse/workflows/macos/badge.svg)\n[![Build Status](https://api.cirrus-ci.com/github/shawnlaffan/biodiverse.svg)](https://cirrus-ci.com/github/shawnlaffan/biodiverse)\n\n\n# Biodiverse\n\nBiodiverse is a tool for the spatial analysis of diversity using indices based on taxonomic, phylogenetic, trait and matrix-based (e.g. genetic distance) relationships, as well as related environmental and temporal variations. \n\n**DOWNLOAD**: Biodiverse can be downloaded from https://github.com/shawnlaffan/biodiverse/wiki/Downloads\n\nBiodiverse supports the following processes: \n\n 1. Linked visualisation of data distributions in geographic, taxonomic, phylogenetic and matrix spaces;\n 1. Spatial moving window analyses including richness, endemism, phylogenetic diversity and beta diversity;\n 1. Spatially constrained agglomerative cluster analyses; \n 1. Spatially constrained region grower analyses; \n 1. Interactive visualisation of turnover patterns (for example beta-diversity); and \n 1. Randomisations for hypothesis testing. \n\nBiodiverse is open-source and supports user developed extensions. It can be used both through a graphical user interface (GUI) and through user written scripts.\n\nMore than 300 indices are supported. 
See the [Indices](https://github.com/shawnlaffan/biodiverse/wiki/Indices) page.\n\n*Screen shots* can be found on the ScreenShots page.\n\n**Example applications** can be seen at the [publications page](https://github.com/shawnlaffan/biodiverse/wiki/PublicationsList).\n\n*Help* can be located via the [help pages](https://github.com/shawnlaffan/biodiverse/wiki/Home) (these are also accessible via the wiki link on the right of this page).\n\nA **discussion group** is at http://groups.google.com.au/group/biodiverse-users and a **blog** at http://biodiverse-analysis-software.blogspot.com.au/\n\n\nTo cite Biodiverse or acknowledge its use, use the following details, substituting the version of the application that you used for ""Version 1.0"".\n\n* Laffan, S.W., Lubarsky, E. & Rosauer, D.F. (2010) Biodiverse, a tool for the spatial analysis of biological and related diversity. [Ecography. Vol 33, 643-647 (Version 1.0)](https://doi.org/10.1111/j.1600-0587.2010.06237.x).\n\nAn overview of the system is also provided in Dan Rosauer\'s talk at TDWG2008:\n\n* Rosauer, D.F. & Laffan, S.W. (2008) Linking phylogenetic trees, taxonomy & geography to map phylogeography using Biodiverse. Taxonomic Data Working Group 2008, Perth, Australia. [PPT](http://www.tdwg.org/fileadmin/2008conference/slides/Rosauer_09_05_phyloTrees.ppt) [SWF with audio](http://www.tdwg.org/fileadmin/2008conference/slides/Rosauer_09_05_phyloTrees.swf). \n\nFor a list of **publications using Biodiverse**, see the [PublicationsList](https://github.com/shawnlaffan/biodiverse/wiki/PublicationsList) page. \n\n# Installation\nInstallation instructions can be accessed through the [Installation](https://github.com/shawnlaffan/biodiverse/wiki/Installation) page.\n\n# News \n\nSee http://shawnlaffan.github.io/biodiverse/#news\n\n\n# Acknowledgements \n\nThis research has been supported by Australian Research Council Linkage Grant LP0562070 (Laffan and West) and UNSW FRGP funding to Laffan.\n\nMuch of the original GUI coding was by Eugene Lubarsky. Substantial contributions to the project have also been made by Michael Zhou and Anthony Knittel. Unfortunately the code history does not show their contributions as authorship details were lost as we transitioned from Google Code to GitHub. 
\n\n# Persistent URL \n\nhttp://www.purl.org/biodiverse\n\n## Keywords \n\nBiodiversity analysis tool, spatial analysis, phylogeography, endemism, phylogenetic diversity, beta diversity, species turnover\n'",",https://doi.org/10.1111/j.1600-0587.2010.06237.x","2015/03/27, 22:49:20",3134,GPL-3.0,226,4239,"2023/10/24, 07:58:28",83,45,796,61,1,0,0.0,0.08039702233250623,"2023/04/27, 04:42:38",r4.3,0,9,false,,false,false,,,,,,,,,,, Naturtag,A tool for nature photographers that adds useful metadata to describe the organisms in your photos.,pyinat,https://github.com/pyinat/naturtag.git,github,"inaturalist,photography,taxonomy,darwin-core,exif,iptc,xmp,dwc,hierarchical-keywords,biodiversity,biodiversity-data,cli",Biodiversity and Species Distribution,"2023/09/19, 20:45:56",30,0,12,true,Python,,pyinat,"Python,Shell,PowerShell",https://naturtag.readthedocs.io,"b""# Naturtag\n\n[![Build status](https://github.com/JWCook/naturtag/workflows/Build/badge.svg)](https://github.com/JWCook/naturtag/actions)\n[![Documentation Status](https://readthedocs.org/projects/naturtag/badge/?version=stable)](https://naturtag.readthedocs.io/en/stable/)\n[![GitHub issues](https://img.shields.io/github/issues/JWCook/naturtag)](https://github.com/JWCook/naturtag/issues)\n[![PyPI](https://img.shields.io/pypi/v/naturtag?color=blue)](https://pypi.org/project/naturtag)\n[![PyPI - Python Version](https://img.shields.io/pypi/pyversions/naturtag)](https://pypi.org/project/naturtag)\n\n\n\n
\n\n[![](assets/icons/naturtag-gh-preview.png)](https://naturtag.readthedocs.io)\n\n## Contents\n- [Summary](#summary)\n- [Use Cases](#use-cases)\n- [Installation](#installation)\n- [Usage](#usage)\n - [GUI](#gui)\n - [CLI](#cli)\n - [Library](#library)\n- [Development Status](#development-status)\n\n\n## Summary\nNaturtag is a tool for nature photographers that adds useful metadata to describe the organisms in\nyour photos. It includes a **desktop application**, a **command-line interface**, and can also be\nused as a **python library**.\n\nIt is mainly intended for use with [iNaturalist](https://www.inaturalist.org); it can tag your\nphotos with either complete observation metadata, or just taxonomy metadata.\n\n\n## Use Cases\nNaturtag embeds this information in your local photo collection using\n[XMP](https://en.wikipedia.org/wiki/Extensible_Metadata_Platform) and\n[EXIF](https://en.wikipedia.org/wiki/Exif) metadata. This has a variety of uses, including:\n\n### Local photo organization\nNaturtag can tag your photos with **hierarchical keywords** (aka structured keywords), which\nare supported by some photo viewers/editors like\n[**Lightroom**](https://millennialdiyer.com/articles/photography/lightroom-keyword-hierarchy/),\n[**FastPictureViewer**](https://www.fastpictureviewer.com),\n[**Photo Mechanic**](https://www.photometadata.org/META-Tutorials-Photo-Mechanic-Applying-Keywords),\n[**digiKam**](https://www.digikam.org), and\n[**XnViewMP**](https://www.xnview.com/en/xnviewmp).\n\nThis basically gives you a taxonomic tree for browsing and filtering your photos.\n\n
\nExample in XnView\n\n![screenshot](assets/screenshots/xnview.png)\n
\n\n### Photo hosting\nNaturtag can also simplify tagging photos for photo hosting sites like Flickr. For that use case, this\ntool generates semi-structured keywords in the same format as\n[iNaturalist's Flickr Tagger](https://www.inaturalist.org/taxa/flickr_tagger).\n\nExample search using these tags: https://www.flickr.com/photos/tags/taxonomy:class=arachnida\n\n
\nExample of taxonomy tags on Flickr\n\n![screenshot](assets/screenshots/flickr.png)\n
\n\n### Other biodiversity tools\nFinally, naturtag can improve interoperability with other tools and systems that interact with biodiversity\ndata. For example, in addition to iNaturalist you might submit some observations to another\nplatform with a more specific focus, such as **eBird**, **BugGuide**, or **Mushroom Observer**.\nFor that use case, this tool supports [Simple Darwin Core](https://dwc.tdwg.org/simple).\n\n## Installation\nSee [GitHub Releases](https://github.com/pyinat/naturtag/releases) for downloads and\n[Installation](https://naturtag.readthedocs.io/en/stable/installation.html)\nfor platform-specific instructions.\n\nTo just install naturtag as a python package, run:\n```bash\npip install naturtag\n```\n\n## Usage\n\n### GUI\nThe main interface for this project is still a work in progress.\n\nIt includes an interface for selecting and tagging images:\n\n![Screenshot](assets/screenshots/image-selector.png)\n\nAnd tools to search and browse species to tag your images with:\n\n![Screenshot](assets/screenshots/taxon-search.png)\n\nSee [Application Guide](https://naturtag.readthedocs.io/en/stable/app.html) for more details.\n\n### CLI\nNaturtag also includes a command-line interface. It takes an observation or species, plus some image\nfiles, and generates EXIF and XMP metadata to write to those images. You can see it in action here:\n[![asciicast](https://asciinema.org/a/0a6gzpt7AI9QpGoq0OGMDOxqi.svg)](https://asciinema.org/a/0a6gzpt7AI9QpGoq0OGMDOxqi)\n\nSee [CLI documentation](https://naturtag.readthedocs.io/en/stable/cli.html) for more details.\n\n### Library\nYou can also import `naturtag` as a python library, and use its main features in your own scripts or\napplications. Basic example:\n```python\nfrom naturtag import tag_images, refresh_tags\n\n# Tag images with full observation metadata\ntag_images(['img1.jpg', 'img2.jpg'], observation_id=1234)\n\n# Refresh previously tagged images with latest observation and taxonomy metadata\nrefresh_tags(['~/observations/'], recursive=True)\n```\n\nSee [API Reference](https://naturtag.readthedocs.io/en/stable/reference.html) for more details.\n\n\n## Development Status\n* See [Issues](https://github.com/JWCook/naturtag/issues?q=) for planned features and current progress.\n* If you have any suggestions, questions, or requests, please\n [create an issue](https://github.com/JWCook/naturtag/issues/new/choose), or ping me (**@jcook**)\n on the [iNaturalist Community Forum](https://forum.inaturalist.org/c/general/14).\n* When I'm not working on this, I'm usually working on other libraries that naturtag benefits from, including\n [requests-cache](https://requests-cache.readthedocs.io),\n [pyinaturalist](https://pyinaturalist.readthedocs.io), and\n [pyinaturalist-convert](https://github.com/JWCook/pyinaturalist-convert).\n""",,"2020/05/10, 18:52:49",1263,MIT,73,589,"2023/10/17, 20:52:47",55,175,281,108,8,3,0.0,0.005703422053231932,"2023/06/07, 19:43:28",v0.8.0b0,0,2,false,,false,false,,,https://github.com/pyinat,https://pyinaturalist.readthedocs.io,,,,https://avatars.githubusercontent.com/u/105503620?v=4,,, IUCNN,"Environmental data and existing IUCN Red List assessments to predict the conservation status of ""Not Evaluated"" species, for any taxon or geographic region of interest.",IUCNN,https://github.com/IUCNN/IUCNN.git,github,"tensorflow,deep-learning,machine-learning,conservation-prioritization,conservation",Biodiversity and Species Distribution,"2023/02/17, 08:24:10",22,0,8,true,R,,IUCNN,"R,Python,HTML",,"b'\n[![Project Status: 
Active \xe2\x80\x93 The project has reached a stable, usable state and is being actively developed.](https://www.repostatus.org/badges/latest/active.svg)](https://www.repostatus.org/#active)\n[![DOI](https://zenodo.org/badge/293626039.svg)](https://zenodo.org/badge/latestdoi/293626039)\n[![R-CMD-check](https://github.com/azizka/IUCNN/workflows/R-CMD-check/badge.svg)](https://github.com/azizka/IUCNN/actions)\n\n\n\n# IUCNN\nBatch estimation of species\' IUCN Red List threat status using neural networks.\n\n# Installation\n1. Install IUCNN directly from Github using devtools. \n```r\ninstall.packages(""devtools"")\nlibrary(devtools)\n\ninstall_github(""IUCNN/IUCNN"")\n```\n\n2. Since some of IUCNN\'s functions are run in Python, IUCNN needs to set up a Python environment. This is easily done from within R, using the `install_miniconda()` function of the package `reticulate` (this will need c. 3 GB disk space).\nIf problems occur at this step, check the excellent [documentation of reticulate](https://rstudio.github.io/reticulate/index.html).\n```r\ninstall.packages(""reticulate"")\nlibrary(reticulate)\ninstall_miniconda()\n```\n\n\n3. Install the tensorflow python library. If you are using **MacOS** or **Linux** it is recommended to install tensorflow using conda:\n```r\nreticulate::conda_install(""r-reticulate"",""tensorflow=2.4"")\n```\n\nIf you are using **Windows**, you can install tensorflow using pip:\n\n```r\nreticulate::py_install(""tensorflow~=2.4.0rc4"", pip = TRUE)\n```\n\n4. Finally install the npBNN python library from Github:\n\n```r\nreticulate::py_install(""https://github.com/dsilvestro/npBNN/archive/refs/tags/v0.1.11.tar.gz"", pip = TRUE)\n```\n\n# Usage\nThere are multiple models and features available in IUCNN. A vignette with a detailed tutorial on how to use them is available as part of the package: `vignette(""Approximate_IUCN_Red_List_assessments_with_IUCNN"")`. Running IUCNN will write files to your working directory.\n\nA simple example run for terrestrial orchids (this will take about 5 minutes and download ~500MB of data for feature preparation into the working directory):\n\n```r\nlibrary(tidyverse)\nlibrary(IUCNN)\n\n#load example data \ndata(""training_occ"") #geographic occurrences of species with IUCN assessment\ndata(""training_labels"")# the corresponding IUCN assessments\ndata(""prediction_occ"") #occurrences from Not Evaluated species to predict\n\n# 1. Feature and label preparation\nfeatures <- iucnn_prepare_features(training_occ) # Training features\nlabels_train <- iucnn_prepare_labels(x = training_labels,\n y = features) # Training labels\nfeatures_predict <- iucnn_prepare_features(prediction_occ) # Prediction features\n\n# 2. Model training\nm1 <- iucnn_train_model(x = features, lab = labels_train)\n\nsummary(m1)\nplot(m1)\n\n# 3. 
Prediction\niucnn_predict_status(x = features_predict,\n model = m1)\n```\nAdditional features quantifying phylogenetic relationships and geographic sampling bias are available via `iucnn_phylogenetic_features` and `iucnn_bias_features`.\n\n\nWith model testing\n\n```r\nlibrary(tidyverse)\nlibrary(IUCNN)\n\n#load example data \ndata(""training_occ"") #geographic occurrences of species with IUCN assessment\ndata(""training_labels"")# the corresponding IUCN assessments\ndata(""prediction_occ"") #occurrences from Not Evaluated species to predict\n\n# Feature and label preparation\nfeatures <- iucnn_prepare_features(training_occ) # Training features\nlabels_train <- iucnn_prepare_labels(x = training_labels,\n y = features) # Training labels\nfeatures_predict <- iucnn_prepare_features(prediction_occ) # Prediction features\n\n\n# Model testing\n# For illustration models differing in dropout rate and number of layers\n\nmod_test <- iucnn_modeltest(x = features,\n lab = labels_train,\n logfile = ""model_testing_results-2.txt"",\n model_outpath = ""iucnn_modeltest-2"",\n mode = ""nn-class"",\n dropout_rate = c(0.0, 0.1, 0.3),\n n_layers = c(""30"", ""40_20"", ""50_30_10""),\n cv_fold = 5,\n init_logfile = TRUE)\n\n# Select best model\nm_best <- iucnn_best_model(x = mod_test,\n criterion = ""val_acc"",\n require_dropout = TRUE)\n\n# Inspect model structure and performance\nsummary(m_best)\nplot(m_best)\n\n# Train the best model on all training data for prediction\nm_prod <- iucnn_train_model(x = features,\n lab = labels_train,\n production_model = m_best)\n\n# Predict RL categories for target species\npred <- iucnn_predict_status(x = features_predict,\n model = m_prod)\nplot(pred)\n\n```\n\nUsing a convolutional neural network\n\n```r\nfeatures <- iucnn_cnn_features(training_occ) # Training features\nlabels_train <- iucnn_prepare_labels(x = training_labels,\n y = features) # Training labels\nfeatures_predict <- iucnn_cnn_features(prediction_occ) # Prediction features\n\n```\n\n# Citation\n```r\nlibrary(IUCNN)\ncitation(""IUCNN"")\n```\n\nZizka A, Andermann T, Silvestro D (2022). ""IUCNN - Deep learning approaches to approximate species\xe2\x80\x99 extinction risk."" [Diversity and Distributions, 28(2):227-241 doi: 10.1111/ddi.13450](https://doi.org/10.1111/ddi.13450). \n\nZizka A, Silvestro D, Vitt P, Knight T (2021). 
\xe2\x80\x9cAutomated conservation assessment of the orchid family with deep\nlearning.\xe2\x80\x9d [Conservation Biology, 35(3):897-908, doi: doi.org/10.1111/cobi.13616](https://doi.org/10.1111/cobi.13616)\n'",",https://zenodo.org/badge/latestdoi/293626039,https://doi.org/10.1111/ddi.13450,https://doi.org/10.1111/cobi.13616","2020/09/07, 20:28:46",1143,MIT,2,348,"2023/05/30, 10:15:24",13,1,41,5,148,0,0.0,0.1647727272727273,"2021/09/15, 19:07:55",v2.0.0,0,4,false,,false,false,,,https://github.com/IUCNN,,,,,https://avatars.githubusercontent.com/u/90766685?v=4,,, IPT,Global Biodiversity Information Facility and used to publish and share biodiversity datasets through the GBIF network.,gbif,https://github.com/gbif/ipt.git,github,biodiversity-informatics,Biodiversity and Species Distribution,"2023/10/24, 08:53:36",116,0,16,true,Java,Global Biodiversity Information Facility,gbif,"Java,FreeMarker,JavaScript,CSS,Rich Text Format,HTML,Shell,Roff,Dockerfile,Fluent",https://www.gbif.org/ipt,"b'https://builds.gbif.org/job/ipt/lastBuild/console[image:https://builds.gbif.org/job/ipt/badge/icon[Build status]]\nhttps://crowdin.com/project/gbif-ipt[image:https://badges.crowdin.net/gbif-ipt/localized.svg[Crowdin]]\n\n= GBIF Integrated Publishing Toolkit (IPT)\n\nThe Integrated Publishing Toolkit (IPT) is a free, open source software tool provided by the Global Biodiversity Information Facility (GBIF) and used to publish and share biodiversity datasets through the https://www.gbif.org/[GBIF network]. The IPT can also be configured with a DataCite account in order to assign DOIs to datasets, thus transforming it into a data repository.\n\n[TIP]\n====\n*Most users should refer to the https://ipt.gbif.org/manual/[user manual] for further information, including https://ipt.gbif.org/manual/en/ipt/latest/releases[downloads], https://ipt.gbif.org/manual/en/ipt/latest/getting-started[installation instructions] and details of the https://ipt.gbif.org/manual/en/ipt/latest/releases[latest release].*\n\nIf you have a GitHub account, you may select ""Watch"" \xe2\x86\x92 ""Custom"" \xe2\x86\x92 ""Releases"" above to receive\na notification from GitHub when there is a new IPT release. These notifications are also sent to the https://lists.gbif.org/mailman/listinfo/ipt/[mailing list].\n====\n\nIn this repository you can find the source code for the IPT software and its https://ipt.gbif.org/manual/[user manual], as well as the https://github.com/gbif/ipt/issues[issue tracker].\n\n== Resources\n\n* https://ipt.gbif.org/manual/[IPT User Manual]\n* Information on the https://ipt.gbif.org/manual/en/ipt/latest/releases[latest release]\n* The https://ipt.gbif.org[Demo IPT] is available online, https://ipt.gbif.org/manual/en/ipt/latest/getting-started[further information about this]\n* How to contribute https://ipt.gbif.org/manual/en/ipt/latest/translations[translations] with https://crowdin.com/project/gbif-ipt[Crowdin]\n* https://lists.gbif.org/mailman/listinfo/ipt/[IPT Mailing List] \xe2\x80\x94 please subscribe for notifications of future releases.\n\n== Acknowledgements\n\nA large number of dedicated volunteers contribute to the success of this software. With your help, the IPT has become a successful tool in use all around the world.\n\nhttps://crowdin.com/[Crowdin] is kindly supporting this https://crowdin.com/project/gbif-ipt[open source project] by giving GBIF a free access to its localization management platform. 
Crowdin makes it possible to manage a large number of concurrent translations.\n'",,"2015/05/07, 19:44:39",3093,Apache-2.0,488,7775,"2023/10/19, 18:01:06",135,155,1988,243,6,8,0.4,0.6583063646170442,"2023/09/21, 08:20:38",ipt-2.7.6,0,52,false,,false,false,,,https://github.com/gbif,https://www.gbif.org,"Copenhagen, Denmark",,,https://avatars.githubusercontent.com/u/1963797?v=4,,, enmSdmX,A set of tools in R for implementing species distribution models and ecological niche models.,adamlilith,https://github.com/adamlilith/enmSdmX.git,github,"bias-correction,biogeography,ecological-niche-modelling,niche-modeling,niche-modelling,species-distribution-modeling,ecological-niche-modeling",Biodiversity and Species Distribution,"2023/09/15, 16:50:31",19,0,18,true,R,,,R,,"b'# enmSdmX\n\n\n[![Project Status: Active \xe2\x80\x93 The project has reached a stable, usable state and is being actively developed.](https://www.repostatus.org/badges/latest/active.svg)](https://www.repostatus.org/#active)\n[![cran version](https://www.r-pkg.org/badges/version/enmSdmX)](https://cran.r-project.org/package=enmSdmX)\n\n\n\nTools for modeling niches and distributions of species \n\n\n\n`enmSdmX` is a set of tools in R for implementing species distribution models (SDMs) and ecological niche models (ENMs), including: bias correction, spatial cross-validation, model evaluation, raster interpolation, biotic velocity (speed and direction of movement of a ""mass"" represented by a raster), and tools for using spatially imprecise records. The heart of the package is a set of ""training"" functions which automatically optimize model complexity based on the number of available occurrences. These algorithms include MaxEnt, MaxNet, boosted regression trees/gradient boosting machines (BRT), generalized additive models (GAM), generalized linear models (GLM),\tnatural splines (NS), and random forests (RF). To enhance interoperability with other packages, the package does not create any new classes. 
The package works with PROJ6 geodetic objects and coordinate reference systems.\n\n## Installation ##\nYou can install this package from CRAN using:\n\n`install.packages(\'enmSdmX\', dependencies = TRUE)`\n\nAlternatively, you can install the development version of this package using:\n\n`remotes::install_github(\'adamlilith/enmSdmX\', dependencies = TRUE)` \n\nYou may need to install the `remotes` package first.\n\n# Functions #\n\n### Using spatially imprecise records\n* `coordImprecision`: Coordinate imprecision\n* `nearestGeogPoints`: Minimum convex polygon from a set of spatial polygons and/or points (""nearest geographic point"" method)\n* `nearestEnvPoints`: Extract ""most conservative"" environments from points and/or polygons (""nearest environmental point"" method)\n\n### Data preparation ###\n* `elimCellDuplicates`: Eliminate duplicate points in each cell of a raster\n* `geoFold`: Assign geographically-distinct k-folds\n* `geoFoldContrast`: Assign geographically-distinct k-folds to background or contrast sites\n\n### Bias correction\n* `geoThin`: Thin geographic points deterministically or randomly\n* `weightByDist`: Proximity-based weighting for occurrences for correcting spatial bias\n\n### Model training ###\n* `trainByCrossValid` and `summaryByCrossValid`: Calibrate a distribution/niche model using cross-validation\n* `trainBRT`: Boosted regression trees (BRTs)\n* `trainGAM`: Generalized additive models (GAMs)\n* `trainGLM`: Generalized linear models (GLMs)\n* `trainMaxEnt`: MaxEnt models\n* `trainMaxNet`: MaxNet models\n* `trainNS`: Natural splines (NSs)\n* `trainRF`: Random forests (RFs) \n\n### Model prediction ###\n* `predictEnmSdm`: Predict most model types using default settings; parallelized\n* `predictMaxEnt`: Predict MaxEnt model\n* `predictMaxNet`: Predict MaxNet model\n\n### Model evaluation ###\n* `evalAUC`: AUC (with/out site weights)\n* `evalMultiAUC`: Multivariate version of AUC (with/out site weights)\n* `evalContBoyce`: Continuous Boyce Index (with/out site weights)\n* `evalThreshold`: Thresholds to convert continuous predictions to binary predictions (with/out site weights)\n* `evalThresholdStats`: Model accuracy based on thresholded predictions (with/out site weights)\n* `evalTjursR2`: Tjur\'s R2 (with/out site weights)\n* `evalTSS`: True Skill Statistic (TSS) (with/out site weights)\n* `modelSize`: Number of response values in a model object\n\n### Niche overlap and comparison ###\n* `compareResponse`: Compare different niche model responses along an environmental variable\n* `nicheOverlapMetrics`: Niche overlap metrics\n\n### Functions for rasters ###\n* `bioticVelocity`: Velocity of a ""mass"" across a time series of rasters\n* `getValueByCell` and `setValueByCell`: Retrieve or set raster value(s) by cell number\n* `globalx`: ""Friendly"" wrapper for terra::global() for calculating raster statistics\n* `interpolateRasts`: Interpolate a stack of rasters\n* `longLatRasts`: Generate rasters with values of longitude/latitude for cell values\n* `sampleRast`: Sample raster with/out replacement\n* `squareCellRast`: Create a raster with square cells from an object with an extent\n\n### Coordinate reference systems ###\n* `crss`: Coordinate reference systems and their nicknames\n* `customAlbers`: Create a custom Albers conic equal-area projection\n* `customLambert`: Create a custom Lambert azimuthal equal-area projection\n* `customVNS`: Create a custom vertical near-side projection\n* `getCRS`: Return a WKT2 (well-known text) string using a nickname\n\n### 
Geographic utility functions ###\n* `countPoints`: Number of points in a ""spatial points"" object\n* `decimalToDms`: Convert decimal coordinate to degrees-minutes-seconds\n* `dmsToDecimal`: Convert degrees-minutes-seconds coordinate to decimal\n* `extentToVect`: Convert extent to a spatial polygon\n* `plotExtent`: Create a spatial polygon the same size as a plot region\n* `spatVectorToSpatial`: Convert SpatVector object to a Spatial* object\n\n### Data\n* `lemurs`: Lemur occurrences\n* `mad0`: Madagascar spatial object\n* `mad1`: Madagascar spatial object\n* `madClim`: Madagascar climate rasters for the present\n* `madClim2030`: Madagascar climate rasters for the 2030s\n* `madClim2050`: Madagascar climate rasters for the 2050s\n* `madClim2070`: Madagascar climate rasters for the 2070s\n* `madClim2090`: Madagascar climate rasters for the 2090s\n\n# Citation #\n\nSmith, A.B., Murphy, S.J., Henderson, D., and Erickson, K.D. 2023. Including imprecisely georeferenced specimens improves accuracy of species distribution models and estimates of niche breadth. Global Ecology and Biogeography In press. [open access pre-print | published article]\n\nAbstract\n\nAim Museum and herbarium specimen records are frequently used to assess the conservation status of species and their responses to climate change. Typically, occurrences with imprecise geolocality information are discarded because they cannot be matched confidently to environmental conditions and are thus expected to increase uncertainty in downstream analyses. However, using only precisely georeferenced records risks undersampling of the environmental and geographical distributions of species. We present two related methods to allow the use of imprecisely georeferenced occurrences in biogeographical analysis.\n\nInnovation Our two procedures assign imprecise records to the (1) locations or (2) climates that are closest to the geographical or environmental centroid of the precise records of a species. For virtual species, including imprecise records alongside precise records improved the accuracy of ecological niche models projected to the present and the future, especially for species with c.\xc2\xa020 or fewer precise occurrences. Using only precise records underestimated loss of suitable habitat and overestimated the amount of suitable habitat in both the present and the future. Including imprecise records also improves estimates of niche breadth and extent of occurrence. An analysis of 44 species of North American Asclepias (Apocynaceae) yielded similar results.\n\nMain conclusions Existing studies examining the effects of spatial imprecision typically compare outcomes based on precise records against the same records with spatial error added to them. However, in real-world cases, analysts possess a mix of precise and imprecise records and must decide whether to retain or discard the latter. Discarding imprecise records can undersample the geographical and environmental distributions of species and lead to mis-estimation of responses to past and future climate change. 
Our method, for which we provide a software implementation in the `enmSdmX` package for R, is simple to use and can help leverage the large number of specimen records that are typically deemed ""unusable"" because of spatial imprecision in their geolocation.\n'",",https://doi.org/10.1111/geb.13628\","2022/10/27, 03:05:23",363,CUSTOM,173,173,"2023/09/15, 16:50:32",2,31,33,33,40,0,0.0,0.0,"2023/01/26, 02:08:57",v1.0.1,0,1,false,,false,false,,,,,,,,,,, sdmTMB,An R package that fits spatial and spatiotemporal predictive-processes for species distribution models.,pbs-assess,https://github.com/pbs-assess/sdmTMB.git,github,"r,glmm,spatial-analysis,ecology,tmb,species-distribution-modelling",Biodiversity and Species Distribution,"2023/10/24, 17:36:00",132,0,42,true,R,,pbs-assess,"R,C++,TeX,Makefile",https://pbs-assess.github.io/sdmTMB/,"b'\n\n\n# sdmTMB \n\n> Spatial and spatiotemporal GLMMs with TMB\n\n\n[![](https://www.r-pkg.org/badges/version/sdmTMB)](https://cran.r-project.org/package=sdmTMB)\n[![Documentation](https://img.shields.io/badge/documentation-sdmTMB-orange.svg?colorB=E91E63)](https://pbs-assess.github.io/sdmTMB/)\n[![R-CMD-check](https://github.com/pbs-assess/sdmTMB/workflows/R-CMD-check/badge.svg)](https://github.com/pbs-assess/sdmTMB/actions)\n[![downloads](http://cranlogs.r-pkg.org/badges/sdmTMB)](https://cranlogs.r-pkg.org/)\n\n\nsdmTMB is an R package that fits spatial and spatiotemporal GLMMs (Generalized Linear Mixed Effects Models) using Template Model Builder ([TMB](https://github.com/kaskr/adcomp)), [R-INLA](https://www.r-inla.org/), and Gaussian Markov random fields. One common application is for species distribution models (SDMs). See the [documentation site](https://pbs-assess.github.io/sdmTMB/) and a preprint:\n\nAnderson, S.C., E.J. Ward, P.A. English, L.A.K. Barnett. 2022. sdmTMB: an R package for fast, flexible, and user-friendly generalized linear mixed effects models with spatial and spatiotemporal random fields. 
bioRxiv 2022.03.24.485545; doi: https://doi.org/10.1101/2022.03.24.485545\n\n## Table of contents\n\n- [Installation](#installation)\n- [Overview](#overview)\n- [Getting help](#getting-help)\n- [Citation](#citation)\n- [Related software](#related-software)\n- [Basic use](#basic-use)\n- [Advanced functionality](#advanced-functionality)\n - [Time-varying coefficients](#time-varying-coefficients)\n - [Spatially varying coefficients\n (SVC)](#spatially-varying-coefficients-svc)\n - [Random intercepts](#random-intercepts)\n - [Breakpoint and threshold\n effects](#breakpoint-and-threshold-effects)\n - [Simulating data](#simulating-data)\n - [Sampling from the joint precision\n matrix](#sampling-from-the-joint-precision-matrix)\n - [Calculating uncertainty on spatial\n predictions](#calculating-uncertainty-on-spatial-predictions)\n - [Cross validation](#cross-validation)\n - [Priors](#priors)\n - [Bayesian MCMC sampling with\n Stan](#bayesian-mcmc-sampling-with-stan)\n - [Turning off random fields](#turning-off-random-fields)\n - [Using a custom fmesher mesh](#using-a-custom-fmesher-mesh)\n - [Barrier meshes](#barrier-meshes)\n\n## Installation\n\nsdmTMB can be installed from CRAN:\n\n``` r\ninstall.packages(""sdmTMB"", dependencies = TRUE)\n```\n\nAssuming you have a [C++\ncompiler](https://support.posit.co/hc/en-us/articles/200486498-Package-Development-Prerequisites)\ninstalled, the development version can be installed:\n\n``` r\n# install.packages(""remotes"")\nremotes::install_github(""pbs-assess/sdmTMB"", dependencies = TRUE)\n```\n\nThere are some extra utilities in the\n[sdmTMBextra](https://github.com/pbs-assess/sdmTMBextra) package.\n\n## Overview\n\nAnalyzing geostatistical data (coordinate-referenced observations from\nsome underlying spatial process) is becoming increasingly common in\necology. sdmTMB implements geostatistical spatial and spatiotemporal\nGLMMs using TMB for model fitting and R-INLA to set up SPDE (stochastic\npartial differential equation) matrices. One common application is for\nspecies distribution models (SDMs), hence the package name. The goal of\nsdmTMB is to provide a fast, flexible, and user-friendly\ninterface\xe2\x80\x94similar to the popular R package glmmTMB\xe2\x80\x94but with a focus on\nspatial and spatiotemporal models with an SPDE approach. We extend the\ngeneralized linear mixed models (GLMMs) familiar to ecologists to\ninclude the following optional features:\n\n- spatial random fields\n- spatiotemporal random fields that may be independent by year or\n modelled with random walks or autoregressive processes\n- smooth terms for covariates, using the familiar `s()` notation from\n mgcv\n- breakpoint (hockey-stick) or logistic covariates\n- time-varying covariates (coefficients modelled as random walks)\n- spatially varying coefficient models (SVCs)\n- interpolation or forecasting over missing or future time slices\n- a wide range of families: all standard R families plus `tweedie()`,\n `nbinom1()`, `nbinom2()`, `lognormal()`, and `student()`, plus some\n truncated and censored families\n- delta/hurdle models including `delta_gamma()`, `delta_lognormal()`,\n and `delta_truncated_nbinom2()`\n\nEstimation is performed in sdmTMB via maximum marginal likelihood with\nthe objective function calculated in TMB and minimized in R via\n`stats::nlminb()` with the random effects integrated over via the\nLaplace approximation. 
The sdmTMB package also allows for models to be\npassed to Stan via tmbstan, allowing for Bayesian model estimation.\n\nSee\n[`?sdmTMB`](https://pbs-assess.github.io/sdmTMB/reference/sdmTMB.html)\nand\n[`?predict.sdmTMB`](https://pbs-assess.github.io/sdmTMB/reference/predict.sdmTMB.html)\nfor the most complete examples. Also see the vignettes (\xe2\x80\x98Articles\xe2\x80\x99) on\nthe [documentation site](https://pbs-assess.github.io/sdmTMB/index.html)\nand the preprint and appendices linked to below.\n\n## Getting help\n\nFor questions about how to use sdmTMB or interpret the models, please\npost on the [discussion\nboard](https://github.com/pbs-assess/sdmTMB/discussions). If you\n[email](https://github.com/pbs-assess/sdmTMB/blob/main/DESCRIPTION) a\nquestion, we are likely to respond on the [discussion\nboard](https://github.com/pbs-assess/sdmTMB/discussions) with an\nanonymized version of your question (and without data) if we think it\ncould be helpful to others. Please let us know if you don\xe2\x80\x99t want us to\ndo that.\n\nFor bugs or feature requests, please post in the [issue\ntracker](https://github.com/pbs-assess/sdmTMB/issues).\n\n[Slides](https://pbs-assess.github.io/sdmTMB-teaching/noaa-psaw-2022/)\nand\n[recordings](https://www.youtube.com/channel/UCYoFG51RjJVx7m9mZGaj-Ng/videos)\nfrom a workshop on sdmTMB are also available.\n\n## Citation\n\nTo cite sdmTMB in publications use:\n\n``` r\ncitation(""sdmTMB"")\n```\n\nAnderson, S.C., E.J. Ward, P.A. English, L.A.K. Barnett. 2022. sdmTMB:\nan R package for fast, flexible, and user-friendly generalized linear\nmixed effects models with spatial and spatiotemporal random fields.\nbioRxiv 2022.03.24.485545; doi:\nhttps://doi.org/10.1101/2022.03.24.485545\n\nA list of (known) publications that use sdmTMB can be found\n[here](https://github.com/pbs-assess/sdmTMB/wiki/Publications-using-sdmTMB).\nPlease use the above citation so we can track publications.\n\n## Related software\n\nsdmTMB is heavily inspired by the\n[VAST](https://github.com/James-Thorson-NOAA/VAST) R package:\n\nThorson, J.T. 2019. Guidance for decisions using the Vector\nAutoregressive Spatio-Temporal (VAST) package in stock, ecosystem,\nhabitat and climate assessments. Fisheries Research 210: 143\xe2\x80\x93161.\nhttps://doi.org/10.1016/j.fishres.2018.10.013\n\nand the [glmmTMB](https://github.com/glmmTMB/glmmTMB) R package:\n\nBrooks, M.E., Kristensen, K., van Benthem, K.J., Magnusson, A., Berg,\nC.W., Nielsen, A., Skaug, H.J., Maechler, M., and Bolker, B.M. 2017.\nglmmTMB balances speed and flexibility among packages for zero-inflated\ngeneralized linear mixed modeling. The R Journal 9(2): 378\xe2\x80\x93400.\nhttps://doi.org/10.32614/rj-2017-066\n\n[INLA](https://www.r-inla.org/) and\n[inlabru](https://sites.google.com/inlabru.org/inlabru) can fit many of\nthe same models as sdmTMB (and many more) in an approximate Bayesian\ninference framework.\n\n[mgcv](https://cran.r-project.org/package=mgcv) can fit similar\nSPDE-based Gaussian random field models with code included in [Miller et\nal.\xc2\xa0(2019)](https://doi.org/10.1007/s13253-019-00377-z).\n\nA table in the [sdmTMB\npreprint](https://doi.org/10.1101/2022.03.24.485545) describes\nfunctionality and timing comparisons between sdmTMB, VAST, INLA/inlabru,\nand mgcv and the discussion makes suggestions about when you might\nchoose one package over another.\n\n## Basic use\n\nAn sdmTMB model requires a data frame that contains a response column,\ncolumns for any predictors, and columns for spatial coordinates. 
It\nusually makes sense to convert the spatial coordinates to an equidistant\nprojection such as UTMs such that distance remains constant throughout\nthe study region \\[e.g., using `sf::st_transform()`\\]. Here, we\nillustrate a spatial model fit to Pacific cod (*Gadus macrocephalus*)\ntrawl survey data from Queen Charlotte Sound, BC, Canada. Our model\ncontains a main effect of depth as a penalized smoother, a spatial\nrandom field, and Tweedie observation error. Our data frame `pcod`\n(built into the package) has a column `year` for the year of the survey,\n`density` for density of Pacific cod in a given survey tow, `present`\nfor whether `density > 0`, `depth` for depth in meters of that tow, and\nspatial coordinates `X` and `Y`, which are UTM coordinates in\nkilometres.\n\n``` r\nlibrary(dplyr)\nlibrary(ggplot2)\nlibrary(sdmTMB)\nhead(pcod)\n```\n\n #> # A tibble: 3 \xc3\x97 6\n #> year density present depth X Y\n #> \n #> 1 2003 113. 1 201 446. 5793.\n #> 2 2003 41.7 1 212 446. 5800.\n #> 3 2003 0 0 220 449. 5802.\n\nWe start by creating a mesh object that contains matrices to apply the\nSPDE approach.\n\n``` r\nmesh <- make_mesh(pcod, xy_cols = c(""X"", ""Y""), cutoff = 10)\n```\n\nHere, `cutoff` defines the minimum allowed distance between points in\nthe units of `X` and `Y` (km). Alternatively, we could have created any\nmesh via the fmesher or INLA packages and supplied it to `make_mesh()`.\nWe can inspect our mesh object with the associated plotting method\n`plot(mesh)`.\n\nFit a spatial model with a smoother for depth:\n\n``` r\nfit <- sdmTMB(\n density ~ s(depth),\n data = pcod,\n mesh = mesh,\n family = tweedie(link = ""log""),\n spatial = ""on""\n)\n```\n\nPrint the model fit:\n\n``` r\nfit\n#> Spatial model fit by ML [\'sdmTMB\']\n#> Formula: density ~ s(depth)\n#> Mesh: mesh (isotropic covariance)\n#> Data: pcod\n#> Family: tweedie(link = \'log\')\n#> \n#> coef.est coef.se\n#> (Intercept) 2.37 0.21\n#> sdepth 0.62 2.53\n#> \n#> Smooth terms:\n#> Std. Dev.\n#> sds(depth) 13.93\n#> \n#> Dispersion parameter: 12.69\n#> Tweedie p: 1.58\n#> Mat\xc3\xa9rn range: 16.39\n#> Spatial SD: 1.86\n#> ML criterion at convergence: 6402.136\n#> \n#> See ?tidy.sdmTMB to extract these values as a data frame.\n```\n\nThe output indicates our model was fit by maximum (marginal) likelihood\n(`ML`). We also see the formula, mesh, fitted data, and family. 
Next we\nsee any estimated main effects including the linear component of the\nsmoother (`sdepth`), the standard deviation on the smoother weights\n(`sds(depth)`), the Tweedie dispersion and power parameters, the Mat\xc3\xa9rn\nrange distance (distance at which points are effectively independent),\nthe marginal spatial field standard deviation, and the negative log\nlikelihood at convergence.\n\nWe can extract parameters as a data frame:\n\n``` r\ntidy(fit, conf.int = TRUE)\n#> # A tibble: 1 \xc3\x97 5\n#> term estimate std.error conf.low conf.high\n#> \n#> 1 (Intercept) 2.37 0.215 1.95 2.79\ntidy(fit, effects = ""ran_pars"", conf.int = TRUE)\n#> # A tibble: 4 \xc3\x97 5\n#> term estimate std.error conf.low conf.high\n#> \n#> 1 range 16.4 4.47 9.60 28.0 \n#> 2 phi 12.7 0.406 11.9 13.5 \n#> 3 sigma_O 1.86 0.218 1.48 2.34\n#> 4 tweedie_p 1.58 0.00998 1.56 1.60\n```\n\nRun some basic sanity checks on our model:\n\n``` r\nsanity(fit)\n#> \xe2\x9c\x94 Non-linear minimizer suggests successful convergence\n#> \xe2\x9c\x94 Hessian matrix is positive definite\n#> \xe2\x9c\x94 No extreme or very small eigenvalues detected\n#> \xe2\x9c\x94 No gradients with respect to fixed effects are >= 0.001\n#> \xe2\x9c\x94 No fixed-effect standard errors are NA\n#> \xe2\x9c\x94 No standard errors look unreasonably large\n#> \xe2\x9c\x94 No sigma parameters are < 0.01\n#> \xe2\x9c\x94 No sigma parameters are > 100\n#> \xe2\x9c\x94 Range parameter doesn\'t look unreasonably large\n```\n\nUse the visreg package to plot the smoother effect in link space with\nrandomized quantile partial residuals:\n\n``` r\nvisreg::visreg(fit, xvar = ""depth"", xlim = c(50, 500))\n```\n\n\n\nOr on the response scale:\n\n``` r\nvisreg::visreg(fit, xvar = ""depth"", scale = ""response"", xlim = c(50, 300), nn = 200)\n```\n\n\n\nPredict on new data:\n\n``` r\np <- predict(fit, newdata = qcs_grid)\n```\n\n``` r\nhead(p)\n```\n\n #> # A tibble: 3 \xc3\x97 7\n #> X Y depth est est_non_rf est_rf omega_s\n #> \n #> 1 456 5636 347. -3.06 -3.08 0.0172 0.0172\n #> 2 458 5636 223. 2.03 1.99 0.0460 0.0460\n #> 3 460 5636 204. 
2.89 2.82 0.0747 0.0747\n\n``` r\nggplot(p, aes(X, Y, fill = exp(est))) + geom_raster() +\n scale_fill_viridis_c(trans = ""sqrt"")\n```\n\n\n\nWe could switch to a presence-absence model by changing the response\ncolumn and family:\n\n``` r\nfit <- sdmTMB(\n present ~ s(depth),\n data = pcod, \n mesh = mesh,\n family = binomial(link = ""logit"")\n)\n```\n\nOr a hurdle/delta model by changing the family:\n\n``` r\nfit <- sdmTMB(\n density ~ s(depth),\n data = pcod,\n mesh = mesh,\n family = delta_gamma(link1 = ""logit"", link2 = ""log""),\n)\n```\n\nWe could instead fit a spatiotemporal model by specifying the `time`\ncolumn and a spatiotemporal structure:\n\n``` r\nfit_spatiotemporal <- sdmTMB(\n density ~ s(depth, k = 5), \n data = pcod, \n mesh = mesh,\n time = ""year"",\n family = tweedie(link = ""log""), \n spatial = ""off"", \n spatiotemporal = ""ar1""\n)\n```\n\nIf we wanted to create an area-weighted standardized population index,\nwe could predict on a grid covering the entire survey (`qcs_grid`) with\ngrid cell area 4 (2 x 2 km) and pass the predictions to `get_index()`:\n\n``` r\ngrid_yrs <- replicate_df(qcs_grid, ""year"", unique(pcod$year))\np_st <- predict(fit_spatiotemporal, newdata = grid_yrs, \n return_tmb_object = TRUE)\nindex <- get_index(p_st, area = rep(4, nrow(grid_yrs)))\nggplot(index, aes(year, est)) +\n geom_ribbon(aes(ymin = lwr, ymax = upr), fill = ""grey90"") +\n geom_line(lwd = 1, colour = ""grey30"") +\n labs(x = ""Year"", y = ""Biomass (kg)"")\n```\n\n\n\nOr the center of gravity:\n\n``` r\ncog <- get_cog(p_st, format = ""wide"")\nggplot(cog, aes(est_x, est_y, colour = year)) +\n geom_pointrange(aes(xmin = lwr_x, xmax = upr_x)) +\n geom_pointrange(aes(ymin = lwr_y, ymax = upr_y)) +\n scale_colour_viridis_c()\n```\n\n\n\nFor more on these basic features, see the vignettes [Intro to modelling\nwith\nsdmTMB](https://pbs-assess.github.io/sdmTMB/articles/basic-intro.html)\nand [Index standardization with\nsdmTMB](https://pbs-assess.github.io/sdmTMB/articles/index-standardization.html).\n\n## Advanced functionality\n\n### Time-varying coefficients\n\nTime-varying intercept:\n\n``` r\nfit <- sdmTMB(\n density ~ 0 + s(depth, k = 5), \n time_varying = ~ 1, \n data = pcod, mesh = mesh,\n time = ""year"", \n family = tweedie(link = ""log""),\n silent = FALSE # see progress\n)\n```\n\nTime-varying (random walk) effect of depth:\n\n``` r\nfit <- sdmTMB(\n density ~ 1, \n time_varying = ~ 0 + depth_scaled + depth_scaled2,\n data = pcod, mesh = mesh,\n time = ""year"",\n family = tweedie(link = ""log""),\n spatial = ""off"",\n spatiotemporal = ""ar1"",\n silent = FALSE\n)\n```\n\nSee the vignette [Intro to modelling with\nsdmTMB](https://pbs-assess.github.io/sdmTMB/articles/basic-intro.html)\nfor more details.\n\n### Spatially varying coefficients (SVC)\n\nSpatially varying effect of time:\n\n``` r\npcod$year_scaled <- as.numeric(scale(pcod$year))\nfit <- sdmTMB(\n density ~ s(depth, k = 5) + year_scaled,\n spatial_varying = ~ year_scaled, \n data = pcod, mesh = mesh, \n time = ""year"",\n family = tweedie(link = ""log""),\n spatiotemporal = ""off""\n)\n```\n\nSee `zeta_s` in the output, which represents the coefficient varying in\nspace. 
You\xe2\x80\x99ll want to ensure you set up your model such that it ballpark\nhas a mean of 0 (e.g., by including it in `formula` too).\n\n``` r\ngrid_yrs <- replicate_df(qcs_grid, ""year"", unique(pcod$year))\ngrid_yrs$year_scaled <- (grid_yrs$year - mean(pcod$year)) / sd(pcod$year)\np <- predict(fit, newdata = grid_yrs) %>% \n subset(year == 2011) # any year\nggplot(p, aes(X, Y, fill = zeta_s_year_scaled)) + geom_raster() +\n scale_fill_gradient2()\n```\n\n\n\nSee the vignette on [Fitting spatial trend models with\nsdmTMB](https://pbs-assess.github.io/sdmTMB/articles/spatial-trend-models.html)\nfor more details.\n\n### Random intercepts\n\nWe can use the same syntax (`1 | group`) as lme4 or glmmTMB to fit\nrandom intercepts:\n\n``` r\npcod$year_factor <- as.factor(pcod$year)\nfit <- sdmTMB(\n density ~ s(depth, k = 5) + (1 | year_factor),\n data = pcod, mesh = mesh,\n time = ""year"",\n family = tweedie(link = ""log"")\n)\n```\n\n### Breakpoint and threshold effects\n\n``` r\nfit <- sdmTMB(\n present ~ 1 + breakpt(depth_scaled), \n data = pcod, mesh = mesh,\n family = binomial(link = ""logit"")\n)\n```\n\n``` r\nfit <- sdmTMB(\n present ~ 1 + logistic(depth_scaled), \n data = pcod, mesh = mesh,\n family = binomial(link = ""logit"")\n)\n```\n\nSee the vignette on [Threshold modeling with\nsdmTMB](https://pbs-assess.github.io/sdmTMB/articles/threshold-models.html)\nfor more details.\n\n### Simulating data\n\n#### Simulating data from scratch\n\n``` r\npredictor_dat <- expand.grid(\n X = seq(0, 1, length.out = 100), Y = seq(0, 1, length.out = 100)\n)\nmesh <- make_mesh(predictor_dat, xy_cols = c(""X"", ""Y""), cutoff = 0.05)\nsim_dat <- sdmTMB_simulate(\n formula = ~ 1,\n data = predictor_dat,\n mesh = mesh,\n family = poisson(link = ""log""),\n range = 0.3,\n sigma_O = 0.4,\n seed = 1,\n B = 1 # B0 = intercept\n)\nhead(sim_dat)\n#> # A tibble: 6 \xc3\x97 7\n#> X Y omega_s mu eta observed `(Intercept)`\n#> \n#> 1 0 0 -0.154 2.33 0.846 1 1\n#> 2 0.0101 0 -0.197 2.23 0.803 0 1\n#> 3 0.0202 0 -0.240 2.14 0.760 2 1\n#> 4 0.0303 0 -0.282 2.05 0.718 2 1\n#> 5 0.0404 0 -0.325 1.96 0.675 3 1\n#> 6 0.0505 0 -0.367 1.88 0.633 2 1\n\n# sample 200 points for fitting:\nset.seed(1)\nsim_dat_obs <- sim_dat[sample(seq_len(nrow(sim_dat)), 200), ]\n```\n\n``` r\nggplot(sim_dat, aes(X, Y)) +\n geom_raster(aes(fill = exp(eta))) + # mean without observation error\n geom_point(aes(size = observed), data = sim_dat_obs, pch = 21) +\n scale_fill_viridis_c() +\n scale_size_area() +\n coord_cartesian(expand = FALSE)\n```\n\n\n\nFit to the simulated data:\n\n``` r\nmesh <- make_mesh(sim_dat_obs, xy_cols = c(""X"", ""Y""), cutoff = 0.05)\nfit <- sdmTMB(\n observed ~ 1,\n data = sim_dat_obs,\n mesh = mesh,\n family = poisson()\n)\n```\n\nSee\n[`?sdmTMB_simulate`](https://pbs-assess.github.io/sdmTMB/reference/sdmTMB_simulate.html)\nfor more details.\n\n#### Simulating from an existing fit\n\n``` r\ns <- simulate(fit, nsim = 500)\ndim(s)\n#> [1] 969 500\ns[1:3,1:4]\n#> [,1] [,2] [,3] [,4]\n#> [1,] 0 59.40310 83.20888 0.00000\n#> [2,] 0 34.56408 0.00000 19.99839\n#> [3,] 0 0.00000 0.00000 0.00000\n```\n\nSee the vignette on [Residual checking with\nsdmTMB](https://pbs-assess.github.io/sdmTMB/articles/residual-checking.html),\n[`?simulate.sdmTMB`](https://pbs-assess.github.io/sdmTMB/reference/simulate.sdmTMB.html),\nand\n[`?dharma_residuals`](https://pbs-assess.github.io/sdmTMB/reference/dharma_residuals.html)\nfor more details.\n\n### Sampling from the joint precision matrix\n\nWe can take samples from the implied 
parameter distribution assuming an\nMVN covariance matrix on the internal parameterization:\n\n``` r\nsamps <- gather_sims(fit, nsim = 1000)\nggplot(samps, aes(.value)) + geom_histogram() +\n facet_wrap(~.variable, scales = ""free_x"")\n#> `stat_bin()` using `bins = 30`. Pick better value with `binwidth`.\n```\n\n\n\nSee\n[`?gather_sims`](https://pbs-assess.github.io/sdmTMB/reference/gather_sims.html)\nand\n[`?get_index_sims`](https://pbs-assess.github.io/sdmTMB/reference/get_index_sims.html)\nfor more details.\n\n### Calculating uncertainty on spatial predictions\n\nThe fastest way to get point-wise prediction uncertainty is to use the\nMVN samples:\n\n``` r\np <- predict(fit, newdata = predictor_dat, nsim = 500)\npredictor_dat$se <- apply(p, 1, sd)\nggplot(predictor_dat, aes(X, Y, fill = se)) +\n geom_raster() +\n scale_fill_viridis_c(option = ""A"") +\n coord_cartesian(expand = FALSE)\n```\n\n\n\n### Cross validation\n\nsdmTMB has built-in functionality for cross-validation. If we were to\nset a `future::plan()`, the folds would be fit in parallel:\n\n``` r\nmesh <- make_mesh(pcod, c(""X"", ""Y""), cutoff = 10)\n## Set parallel processing if desired:\n# library(future)\n# plan(multisession)\nm_cv <- sdmTMB_cv(\n density ~ s(depth, k = 5),\n data = pcod, mesh = mesh,\n family = tweedie(link = ""log""), k_folds = 2\n)\n#> Running fits with `future.apply()`.\n#> Set a parallel `future::plan()` to use parallel processing.\n# Sum of log likelihoods of left-out data:\nm_cv$sum_loglik\n#> [1] -6756.28\n```\n\nSee\n[`?sdmTMB_cv`](https://pbs-assess.github.io/sdmTMB/reference/sdmTMB_cv.html)\nfor more details.\n\n### Priors\n\nPriors/penalties can be placed on most parameters. For example, here we\nplace a PC (penalized complexity) prior on the Mat\xc3\xa9rn random field\nparameters, a standard normal prior on the effect of depth, a Normal(0,\n10^2) prior on the intercept, and a half-normal prior on the Tweedie\ndispersion parameter (`phi`):\n\n``` r\nmesh <- make_mesh(pcod, c(""X"", ""Y""), cutoff = 10)\nfit <- sdmTMB(\n density ~ depth_scaled,\n data = pcod, mesh = mesh,\n family = tweedie(),\n priors = sdmTMBpriors(\n matern_s = pc_matern(range_gt = 10, sigma_lt = 5),\n b = normal(c(0, 0), c(1, 10)),\n phi = halfnormal(0, 15)\n )\n)\n```\n\nWe can visualize the PC Mat\xc3\xa9rn prior:\n\n``` r\nplot_pc_matern(range_gt = 10, sigma_lt = 5)\n```\n\n\n\nSee\n[`?sdmTMBpriors`](https://pbs-assess.github.io/sdmTMB/reference/priors.html)\nfor more details.\n\n### Bayesian MCMC sampling with Stan\n\nThe fitted model can be passed to the tmbstan package to sample from the\nposterior with Stan. 
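\n\nA minimal sketch, assuming the underlying TMB object is stored in\n`fit$tmb_obj` (the vignette documents the recommended workflow,\npriors, and bounds):\n\n``` r\n# pass the TMB object inside the sdmTMB fit to Stan via tmbstan;\n# \'iter\' and \'chains\' are forwarded to rstan::sampling()\nfit_stan <- tmbstan::tmbstan(fit$tmb_obj, iter = 2000, chains = 4)\n```\n\n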
See the [Bayesian\nvignette](https://pbs-assess.github.io/sdmTMB/articles/web_only/bayesian.html).\n\n### Turning off random fields\n\nWe can turn off the random fields for model comparison:\n\n``` r\nfit_sdmTMB <- sdmTMB(\n present ~ poly(depth_scaled, 2),\n data = pcod, mesh = mesh,\n spatial = ""off"",\n family = binomial()\n)\nfit_glm <- glm(\n present ~ poly(depth_scaled, 2),\n data = pcod,\n family = binomial()\n)\n\ntidy(fit_sdmTMB)\n#> # A tibble: 3 \xc3\x97 3\n#> term estimate std.error\n#> \n#> 1 (Intercept) -0.426 0.0573\n#> 2 poly(depth_scaled, 2)1 -31.7 3.03 \n#> 3 poly(depth_scaled, 2)2 -66.9 4.09\nbroom::tidy(fit_glm)\n#> # A tibble: 3 \xc3\x97 5\n#> term estimate std.error statistic p.value\n#> \n#> 1 (Intercept) -0.426 0.0573 -7.44 1.03e-13\n#> 2 poly(depth_scaled, 2)1 -31.7 3.03 -10.5 1.20e-25\n#> 3 poly(depth_scaled, 2)2 -66.9 4.09 -16.4 3.50e-60\n```\n\n### Using a custom fmesher mesh\n\nDefining a mesh directly with INLA:\n\n``` r\nbnd <- INLA::inla.nonconvex.hull(cbind(pcod$X, pcod$Y), convex = -0.1)\nmesh_inla <- INLA::inla.mesh.2d(\n boundary = bnd,\n max.edge = c(25, 50)\n)\nmesh <- make_mesh(pcod, c(""X"", ""Y""), mesh = mesh_inla)\nplot(mesh)\n```\n\n\n\n``` r\nfit <- sdmTMB(\n density ~ s(depth, k = 5),\n data = pcod, mesh = mesh,\n family = tweedie(link = ""log"")\n)\n```\n\n### Barrier meshes\n\nA barrier mesh limits correlation across barriers (e.g., land or water).\nSee `add_barrier_mesh()` in\n[sdmTMBextra](https://github.com/pbs-assess/sdmTMBextra).\n'",",https://doi.org/10.1101/2022.03.24.485545\n\n##,https://doi.org/10.1101/2022.03.24.485545,https://doi.org/10.1016/j.fishres.2018.10.013,https://doi.org/10.32614/rj-2017-066,https://doi.org/10.1007/s13253-019-00377-z,https://doi.org/10.1101/2022.03.24.485545","2018/09/19, 05:59:53",1862,CUSTOM,404,2078,"2023/10/18, 22:32:01",34,19,154,71,7,2,0.4,0.1558572146807441,"2023/10/20, 18:16:57",v0.4.0,0,9,false,,false,false,,,https://github.com/pbs-assess,,,,,https://avatars.githubusercontent.com/u/37125935?v=4,,, ENMwizard,Advanced Techniques for Ecological Niche Modeling Made Easy.,HemingNM,https://github.com/HemingNM/ENMwizard.git,github,"enmeval,species,rasters,maxent-models,tunning,niche-modeling",Biodiversity and Species Distribution,"2023/06/12, 12:36:24",13,0,2,true,R,,,R,,"b'ENMwizard\n======================\n### Advanced Techniques for Ecological Niche Modeling Made Easy\n\nThis package provides tools to facilitate the use of advanced techniques related to ecological niche modeling (ENM) and the automation of the workflow for modeling multiple species. ENMwizard makes several steps easier: 1. preparation of occurrence and environmental data (selection of environmental variables, selection of calibration and projection areas); 2. model tuning (thanks to the package ENMeval); 3. model selection and projection. Computationally intensive tasks can be performed using a single core or multiple cores to speed up processing. ENMwizard also implements AICc Model Averaging for MaxEnt models (Gutierrez & Heming, 2018, https://arxiv.org/abs/1807.04346).\n\n-----\n\n# Installation\nENMwizard is available from https://github.com/HemingNM/ENMwizard. You can install it from GitHub using devtools.\n\n## Install from GitHub using devtools\nRun the following code from your R console:\n\n```r\ninstall.packages(""devtools"")\ndevtools::install_github(""HemingNM/ENMwizard"")\n\nlibrary(ENMwizard)\n```\n\n## Notice that ENMwizard is not compatible with ENMeval 2.0. \nSorry for that. 
I am working to make it compatible with the newest version soon.\n\n## Citation\nPlease cite ENMwizard (and other R packages it depends on) by using:\n\n```r\ncitation(""ENMwizard"")\ncitation(""spThin"")\ncitation(""ENMeval"")\ncitation(""raster"")\n```\n\n\n-----\n\n# Steps for niche modeling using ENMwizard\n\n## Prepare environmental data\n\n### Load occurrence data\n\nFirst, let\'s use occurrence data available in the dismo package.\n```r\nBvarieg.occ <- read.table(paste(system.file(package=""dismo""),\n""/ex/bradypus.csv"", sep=""""), header=TRUE, sep="","")\n\nhead(Bvarieg.occ) # Check first rows\n\n```\n\nNow we make it a named list, where names correspond to species names.\n```r\nspp.occ.list <- list(Bvarieg = Bvarieg.occ)\n```\n\n### Create occ polygon to crop rasters prior to modelling\n\nThe occurrence points in the named list are used to create polygons. \nNotice that you can cluster the occ points using several clustering methods. \nSee differences and choose one that fits your needs:\n```r\nocc.polys <- set_calibarea_b(spp.occ.list)\nocc.polys <- set_calibarea_b(spp.occ.list, k=0, c.m=""AP"", q=.01) # fewer polygons\nocc.polys <- set_calibarea_b(spp.occ.list, k=0, c.m=""AP"", q=.3)\nocc.polys <- set_calibarea_b(spp.occ.list, k=0, c.m=""AP"", q=.8) # more polygons\nocc.polys <- set_calibarea_b(spp.occ.list, k=0, c.m=""NB"", method = ""centroid"", index = ""duda"")\nocc.polys <- set_calibarea_b(spp.occ.list, k=0, c.m=""NB"", method = ""centroid"", index = ""sdindex"") \n\n```\n\n### Create buffer\n\n... and the occurrence polygons are buffered by 1.5 degrees.\n```r\nocc.b <- buffer_b(occ.polys, width = 1.5)\n```\n\n### Get and cut environmental layers\nGet climate data for historical (near current) conditions.\nIn this example, a directory called \'rasters\' is created. Then, rasters for historical (near current) conditions are downloaded.\n```r\n# Create directory to store raster files\ndir.create(""./rasters"")\n\n# Download data for present\nlibrary(raster)\npredictors <- getData(\'worldclim\', var=\'bio\', res=10, path=""rasters"")\n```\n\nCut environmental variables for each species (and plot them for visual inspection).\n```r\npred.cut <- cut_calibarea_b(occ.b, predictors)\n\nfor(i in 1:length(pred.cut)){\n plot(pred.cut[[i]][[1]])\n plot(occ.polys[[i]], border = ""red"", add = T)\n plot(occ.b[[i]], add = T)\n}\n```\n\n### Select the least correlated variables\n```r\nvars <- select_vars_b(pred.cut, cutoff=.75, names.only = T)\n# See selected variables for each species\nlapply(vars, function(x)x[[1]])\n# remove correlated variables from our variable set\npred.cut <- select_vars_b(pred.cut, cutoff=.75, names.only = F)\n```\n\n\n## Prepare occurrence data\n### Filter original dataset\nNow we want to remove localities that are too close together. We will do it for all species listed in ""spp.occ.list"".\n```r\nthinned.dataset.batch <- thin_b(loc.data.lst = spp.occ.list)\n```\n\n### Load occurrence data (filtered localities)\nAfter thinning, we choose one dataset for each species for modelling.\n```r\nocc.locs <- load_thin_occ(thinned.dataset.batch)\n```\n\n## Great! Now we are ready for tuning species\' ENMs\n\n-----\n## Tuning Maxent\'s feature classes and regularization multiplier via ENMeval\n### Model tuning using ENMeval\nHere we will run ENMevaluate_b to call ENMevaluate (from the ENMeval package). Here we will test which combination of Feature Classes and Regularization Multipliers gives the best results. 
For this, we will partition our occurrence data using the ""block"" method.\n\nBy providing [at least] two lists, occurrence and environmental data, we will be able to evaluate ENMs for as many species as listed in our occ.locs object. For details see ?ENMeval::ENMevaluate. Notice that you can use multiple cores for this task. This is especially useful when there is a large number of models and species.\n```r\nENMeval.res.lst <- ENMevaluate_b(occ.locs, pred.cut, \n RMvalues = c(1, 1.5), fc = c(""L"", ""LQ"", ""LP""),\n method=""block"", algorithm=""maxent.jar"")\n```\n\n-----\n## Model fitting (calibration)\nAfter tuning MaxEnt models, we will calibrate them using all occurrence data (i.e. without partitioning them).\n\n```r\n# Run model\nmxnt.mdls.preds.lst <- calib_mdl_b(ENMeval.o.l = ENMeval.res.lst, \n a.calib.l = pred.cut,\n mSel = c(""LowAIC"", ""AUC""))\n```\n\n## Projection\n### Prepare projection area\n#### Download environmental data\nFor projection it is necessary to download raster files with the environmental variables of interest. Rasters with historical (near current) climatic conditions were already downloaded. We will now download climatic data for two future periods (2050 and 2070) from two GCMs and create one list with all climate scenarios.\n\n```r\nlibrary(raster)\n# Get climate data for future conditions (2050) from two GCMs at RCP 8.5\nfutAC5085 <- getData(\'CMIP5\', var=\'bio\', res=10, rcp=85, model=\'AC\', year=50, path=""rasters"")\nnames(futAC5085) <- names(predictors)\n\nfutCC5085 <- getData(\'CMIP5\', var=\'bio\', res=10, rcp=85, model=\'CC\', year=50, path=""rasters"")\nnames(futCC5085) <- names(predictors)\n\n# Get climate data for future conditions (2070) from two GCMs at RCP 8.5\nfutAC7085 <- getData(\'CMIP5\', var=\'bio\', res=10, rcp=85, model=\'AC\', year=70, path=""rasters"")\nnames(futAC7085) <- names(predictors)\n\nfutCC7085 <- getData(\'CMIP5\', var=\'bio\', res=10, rcp=85, model=\'CC\', year=70, path=""rasters"")\nnames(futCC7085) <- names(predictors)\n\npredictors.l <- list(ncurrent = predictors,\n futAC5085 = futAC5085,\n futCC5085 = futCC5085,\n futAC7085 = futAC7085,\n futCC7085 = futCC7085)\n\n```\n\n#### Select area for projection based on the extent of occ points\nNow it is time to define the projection area for each species. The projection area can be the same for all species (in this example) or be defined individually. Here, the projection area will be defined as a square area slightly larger than the original occurrence area of the species. Then, two lists of projection scenarios will be created for each species. In the first list, the projection will be performed using current climatic conditions. In the second list, the future climate scenarios (defined above) are used.\n\n```r\npoly.projection <- set_projarea_b(occ.polys, mult = .1, buffer=FALSE)#\nplot(poly.projection[[1]], col=""gray"")\nplot(occ.polys[[1]], col=""yellow"", add=T)\n\npred.cut.l <- cut_projarea_mscn_b(poly.projection, predictors.l)\nplot(poly.projection[[1]], col=""gray"")\nplot(pred.cut.l[[1]][[1]][[1]], add=T)\nplot(occ.polys[[1]], add=T)\n```\n\n#### ... 
if the extent to project is the same for all species\nWhen all species are to be projected using the same current and future climates and in the same region, then the following lines can be used to repeat the same lists of scenarios for all species (they could be defined differently for each species if wanted).\n\n```r\nproj.extent <- extent(c(-109.5, -26.2, -59.5, 18.1))\n# coerce to a SpatialPolygons object\nproj.extent <- as(proj.extent, \'SpatialPolygons\') \npred.cut.l <- cut_projarea_rst_mscn_b(proj.extent, predictors.l, occ.polys)\n```\n\n### Model projections\n\nFinally, the model(s) can be projected onto all climatic scenarios. This is performed by the `proj_mdl_b` function. The function has two arguments: 1) MaxEnt fitted models (see step 4.3 above) and 2) a list of rasters representing all scenarios onto which models will be projected.\nThis function can be run using a single core (default) or multiple cores available in a computer. There are two ways of performing parallel processing: by species or by model. If the distributions of only a few species are being modelled, and models are computationally intensive, then processing by model will provide the best results. If there are many species, parallel processing by species (splitting species across the multiple cores of a computer) will probably be faster.\n\n```r\n# For single or multiple species\n\n# using a single core (default)\nmxnt.mdls.preds.cf <- proj_mdl_b(mxnt.mdls.preds.lst, a.proj.l = pred.cut.l)\n\n# or using multiple cores\nmxnt.mdls.preds.cf <- proj_mdl_b(mxnt.mdls.preds.lst, a.proj.l = pred.cut.l, numCores=2)\n\n# plot projections\npar(mfrow=c(1,2), mar=c(1,2,1,2))\nplot(mxnt.mdls.preds.cf$Bvarieg$mxnt.preds$ncurrent)\nplot(mxnt.mdls.preds.cf$Bvarieg$mxnt.preds$futAC5085)\n```\n### Create consensual projections across GCMs by (e.g.) year and/or RCP\nThe climate scenario projections can be grouped and averaged to create consensual projections.\nHere we downloaded two GCMs for 2050 and two for 2070, both at RCP 8.5. So, the GCMs will\nbe averaged by year.\n```r\n# create two vectors containing grouping codes\nyr <- c(50, 70)\nrcp <- c(""45"", ""85"")\ngroups <- list(yr, rcp)\n\n# get names we gave to the predictors\nclim.scn.nms <- names(predictors.l)\nconsensus_gr(groups, clim.scn.nms)\n\n## here we compute the consensual projections\nmxnt.mdls.preds.cf <- consensus_scn_b(mcmp.l=mxnt.mdls.preds.cf, groups = list(yr, rcp), ref=""ncurrent"")\n\n\n####\n## just in case you have multiple GCMs by year and RCP, this is an example \n## that returns more groups\n\n# grouping codes\nyr <- c(2050, 2070)\nrcp <- c(""RCP45"", ""RCP85"")\ngroups <- list(yr, rcp)\n\n# names of climate scenarios\nclim.scn.nms <- c(""CCSM4.2050.RCP45"", ""MIROC.ESM.2050.RCP45"", ""MPI.ESM.LR.2050.RCP45"",\n ""CCSM4.2070.RCP45"", ""MIROC.ESM.2070.RCP45"", ""MPI.ESM.LR.2070.RCP45"",\n ""CCSM4.2050.RCP85"", ""MIROC.ESM.2050.RCP85"", ""MPI.ESM.LR.2050.RCP85"",\n ""CCSM4.2070.RCP85"", ""MIROC.ESM.2070.RCP85"", ""MPI.ESM.LR.2070.RCP85"")\nconsensus_gr(groups, clim.scn.nms)\n\n\n```\n\n### Apply thresholds on suitability projections\nWe have the projections for each climatic scenario; now we must select one (or more) threshold criteria and apply them to the projections.\n```r\n# 1. Fixed.cumulative.value.1 (fcv1);\n# 2. Fixed.cumulative.value.5 (fcv5);\n# 3. Fixed.cumulative.value.10 (fcv10);\n# 4. Minimum.training.presence (mtp);\n# 5. 10.percentile.training.presence (x10ptp);\n# 6. Equal.training.sensitivity.and.specificity (etss);\n# 7. 
Maximum.training.sensitivity.plus.specificity (mtss);\n# 8. Balance.training.omission.predicted.area.and.threshold.value (bto);\n# 9. Equate.entropy.of.thresholded.and.original.distributions (eetd).\n\nmods.thrshld.lst <- thrshld_b(mxnt.mdls.preds.cf, thrshld.i = c(5,7))\n```\n\n### Identify range shifts\nRange shifts are differences in suitable areas between climate scenarios. \nHere we will map where the range has shifted and compute unchanged, lost, and gained areas.\n```r\nspp_rdiff <- range_shift_b(mods.thrshld.lst, ref.scn = ""ncurrent"")\n\n# plot maps of range change\nbreaks <- round(seq(from=-1, to=1, .666), 2)\ncolors <- colorRampPalette(c(""red"", ""gray"", ""blue""))(length(breaks)-1)\nplot(spp_rdiff[[1]][[1]][[1]], col=colors, breaks=breaks)\n\n# area of range changes\nspp_rdiff_a <- get_rsa_b(spp_rdiff)\nspp_rdiff_a\n```\n\n## Visualize\n### Plot one projection for current climate and another for a future climatic scenario\n```r\nplot(mods.thrshld.lst$Bvarieg$ncurrent$binary$x10ptp)\nplot(mods.thrshld.lst$Bvarieg$X50.85$binary$x10ptp)\nplot_mdl_diff(mxnt.mdls.preds.lst[[1]], mods.thrshld.lst[[1]], sp.nm = ""Bvarieg"")\nplot_mdl_diff_b(mxnt.mdls.preds.cf, mods.thrshld.lst, save=T)\n```\n\n### Plot differences between current climate and future climatic scenarios for all thresholds\n```r\nplot_scn_diff_b(mxnt.mdls.preds.cf, mods.thrshld.lst, \n ref.scn = ""ncurrent"", mSel = ""LowAIC"", save=F)\n```\n\n\n## Compute metrics\n### Compute variable contribution and permutation importance\n```r\nget_cont_permimport_b(mxnt.mdls.preds.cf)\n```\n### Compute ""Fractional Predicted Area"" (\'n of occupied pixels\'/n)\n```r\nget_fpa_b(mods.thrshld.lst)\n```\n### Compute species\' total suitable area\n```r\nget_tsa_b(mods.thrshld.lst)\n```\n'",",https://arxiv.org/abs/1807.04346","2017/09/26, 14:36:50",2220,CUSTOM,11,516,"2023/06/13, 11:38:03",0,0,10,3,134,0,0,0.02620967741935487,,,0,2,false,,false,false,,,,,,,,,,, flexsdm,Useful tools for constructing species distribution models.,sjevelazco,https://github.com/sjevelazco/flexsdm.git,github,"ecological-niche-modelling,model-tuning,spatial-ecology,ensemble-modelling,model-fit-for-purpose,spatially-structured-validation,species-distribution-modelling",Biodiversity and Species Distribution,"2023/10/20, 02:21:14",43,0,12,true,R,,,R,https://sjevelazco.github.io/flexsdm/,"b'[![License](https://img.shields.io/badge/license-GPL%20%28%3E=%203%29-lightgrey.svg?style=flat)](http://www.gnu.org/licenses/gpl-3.0.html)\n[![R-CMD-check](https://github.com/sjevelazco/flexsdm/actions/workflows/R-CMD-check.yaml/badge.svg)](https://github.com/sjevelazco/flexsdm/actions/workflows/R-CMD-check.yaml)\n[![Codecov test coverage](https://codecov.io/gh/sjevelazco/flexsdm/branch/main/graph/badge.svg?token=UT1UB0TWSV)](https://codecov.io/gh/sjevelazco/flexsdm)\n[![DOI](https://zenodo.org/badge/354032642.svg)](https://zenodo.org/badge/latestdoi/354032642)\n[![DOI](https://img.shields.io/badge/DOI-10.1111%2F2041--210X.13874-orange)](https://doi.org/10.1111/2041-210X.13874)\n\n\n# flexsdm\n\n### Overview \nSpecies distribution modeling has become a standard tool in several research areas such as ecology, conservation biology, biogeography, paleobiogeography, and epidemiology. Species distribution modeling is an area of active research in both theoretical and methodological aspects. One of the most exciting features of **flexsdm** is its high manipulation and parametrization capacity based on different functions and arguments. 
These attributes enable users to define a complete or partial modeling workflow specific to a given modeling situation (e.g., number of variables, number of records, different algorithms, algorithm tuning, ensemble methods).\n\n\n### Structure of flexsdm\nThe functions of the **flexsdm** package are organized into three major modeling steps.\n\n\n\n\n\n### 1. Pre-modeling functions \nA set of tools that prepare the modeling input data (e.g., thinning of species occurrences, sampling of pseudo-absences or background points, delimitation of the calibration area). \n\n* `calib_area()` Delimit calibration area for constructing species distribution models\n* `correct_colinvar()` Collinearity reduction on predictors\n* `env_outliers()` Integration of outlier detection methods in the environmental space\n* `part_random()` Data partitioning for training and testing models\n* `part_sblock()` Spatial block cross-validation\n* `part_sband()` Spatial band cross-validation\n* `part_senv()` Environmental cross-validation\n* `plot_res()` Plot different resolutions to be used in part_sblock\n* `get_block()` Transform a spatial partition layer to the same spatial properties of the environmental variables\n* `sample_background()` Sample background points\n* `sample_pseudoabs()` Sample pseudo-absences \n* `sdm_directory()` Create directories for saving the outputs of flexsdm\n* `sdm_extract()` Extract environmental data based on x and y coordinates\n* `occfilt_env()` Perform environmental filtering on species occurrences\n* `occfilt_geo()` Perform geographical filtering on species occurrences\n\n\n### 2. Modeling functions \nThis step includes functions related to model construction and validation. Several of them can be grouped into the `fit_*`, `tune_*`, and `esm_*` function families. `fit_*` functions construct and validate models with default hyper-parameter values. `tune_*` functions construct and validate models while searching for the best combination of hyper-parameter values. `esm_*` functions construct and validate Ensembles of Small Models (a minimal `fit_*` sketch follows below). 
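\n\nA minimal sketch of the partition-then-fit workflow, using the `abies` example data shipped with the package; the argument names follow the package documentation as best recalled here, so check `?part_random` and `?fit_glm` for the exact signatures.\n\n``` r\nlibrary(flexsdm)\n\ndata(""abies"") # example presence/absence data shipped with the package\n\n# add a k-fold partition column (\'.part\') used later for validation\nabies2 <- part_random(\n  data = abies,\n  pr_ab = ""pr_ab"",\n  method = c(method = ""kfold"", folds = 5)\n)\n\n# fit and validate a GLM with default hyper-parameters\nglm_m <- fit_glm(\n  data = abies2,\n  response = ""pr_ab"",\n  predictors = c(""aet"", ""ppt_jja"", ""pH"", ""awc"", ""depth""),\n  partition = "".part"",\n  thr = ""max_sens_spec""\n)\n```\n\n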
#### Model evaluation\n* `sdm_eval()` Calculate different model performance metrics\n\n#### `fit_*` functions family\n* `fit_gam()` Fit and validate Generalized Additive Models\n* `fit_gau()` Fit and validate Gaussian Process models\n* `fit_gbm()` Fit and validate Generalized Boosted Regression models\n* `fit_glm()` Fit and validate Generalized Linear Models\n* `fit_max()` Fit and validate Maximum Entropy models\n* `fit_net()` Fit and validate Neural Network models\n* `fit_raf()` Fit and validate Random Forest models\n* `fit_svm()` Fit and validate Support Vector Machine models\n\n#### `tune_*` functions family\n* `tune_gbm()` Fit and validate Generalized Boosted Regression models with exploration of \nhyper-parameters\n* `tune_max()` Fit and validate Maximum Entropy models with exploration of hyper-parameters\n* `tune_net()` Fit and validate Neural Network models with exploration of hyper-parameters\n* `tune_raf()` Fit and validate Random Forest models with exploration of hyper-parameters\n* `tune_svm()` Fit and validate Support Vector Machine models with exploration of hyper-parameters\n\n#### Model ensemble\n* `fit_ensemble()` Fit and validate ensemble models with different ensemble methods\n\n#### `esm_*` functions family\n* `esm_gam()` Fit and validate Generalized Additive Models with the Ensemble of Small Models approach\n* `esm_gau()` Fit and validate Gaussian Process models with the Ensemble of Small Models approach\n* `esm_gbm()` Fit and validate Generalized Boosted Regression models with the Ensemble of Small \nModels approach\n* `esm_glm()` Fit and validate Generalized Linear Models with the Ensemble of Small Models approach\n* `esm_max()` Fit and validate Maximum Entropy models with the Ensemble of Small Models approach\n* `esm_net()` Fit and validate Neural Network models with the Ensemble of Small Models approach\n* `esm_svm()` Fit and validate Support Vector Machine models with the Ensemble of Small Models \napproach\n\n### 3. Post-modeling functions\nTools related to models\xe2\x80\x99 geographical predictions, evaluation, and correction. \n\n* `sdm_predict()` Spatial predictions from individual and ensemble models (see the sketch after this list)\n* `sdm_summarize()` Merge model performance tables\n* `interp()` Raster interpolation between two time periods\n* `extra_eval()` Measure model extrapolation\n* `extra_truncate()` Constrain suitability values under a given extrapolation value\n* `msdm_priori()` Create spatial predictor variables to reduce overprediction of species distribution models\n* `msdm_posteriori()` Methods to correct overprediction of species distribution models based on occurrences and suitability patterns.\n
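\nAs a post-modeling example, a hedged sketch of `sdm_predict()`: `env_layers` is a hypothetical stand-in for a terra SpatRaster (not an object shipped under that name), and the `models`/`pred` argument names should be verified against `?sdm_predict`.\n\n``` r\n# continuing from the fit_glm() sketch above;\n# \'env_layers\' is a hypothetical SpatRaster whose layer names match\n# the predictors used to fit glm_m\npreds <- sdm_predict(models = glm_m, pred = env_layers)\n```\n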
\n\n* `p_pdp()` Create partial dependence plot(s) to explore the marginal effect of predictors on suitability\n* `p_psp()` Create partial dependence surface plot(s) to explore the bivariate marginal effect of predictors on suitability\n* `p_extra()` Graphical exploration of extrapolation or suitability pattern in the environmental and geographical space\n* `data_pdp()` Calculate data to construct partial dependence plots\n* `data_psp()` Calculate data to construct partial dependence surface plots\n\n### Installation\nYou can install the development version of **flexsdm** from\n[github](https://github.com/sjevelazco/flexsdm)\n\n:warning: \nNOTE: The version 1.4-22 of **terra** package is causing errors when trying to instal **flexsdm**. \nPlease, first install a version \xe2\x89\xa5 1.5-12 of **terra** package available on CRAN or development version of [terra](https://github.com/rspatial/terra) and then **flexsdm**.\n\n``` r\n# install.packages(""remotes"")\n\n# For Windows and Mac OS operating systems\nremotes::install_github(""sjevelazco/flexsdm"")\n\n# For Linux operating system\nremotes::install_github(""sjevelazco/flexsdm@HEAD"")\n```\n\n\n### Package website\n\nSee the package website () for functions explanation and vignettes.\n\n### Package citation\n\nVelazco, S.J.E., Rose, M.B., Andrade, A.F.A., Minoli, I., Franklin, J. (2022). flexsdm: An R package for supporting a comprehensive and flexible species distribution modelling workflow. Methods in Ecology and Evolution, 13(8) 1661\xe2\x80\x931669. https://doi.org/10.1111/2041-210X.13874\n \n> Test the package and give us your feedback [here](https://github.com/sjevelazco/flexsdm/issues) or send an e-mail to sjevelazco@gmail.com.\n\n'",",https://zenodo.org/badge/latestdoi/354032642,https://doi.org/10.1111/2041-210X.13874,https://doi.org/10.1111/2041-210X.13874\n","2021/04/02, 13:48:48",936,CUSTOM,54,1244,"2023/10/20, 02:21:15",8,324,347,43,5,0,0.0,0.17906336088154273,"2023/05/16, 23:06:59",v1.3.3,0,5,false,,false,false,,,,,,,,,,, The Catalogue of Life,"The most complete authoritative list of the world's species, maintained by hundreds of global taxonomists.",CatalogueOfLife,https://github.com/CatalogueOfLife/general.git,github,"taxonomy,life,science,biodiversity,nomenclature",Biodiversity and Species Distribution,"2023/10/02, 15:50:43",47,0,5,true,,COL,CatalogueOfLife,,,"b'# Catalogue of Life (COL)\n\n\n[Catalogue of Life (COL)](http://www.catalogueoflife.org/) is a collaboration bringing together the effort and contributions of taxonomists and informaticians from around the world. COL aims to address the needs of researchers, policy-makers, environmental managers and the wider public for a consistent and up-to-date listing of all the world\xe2\x80\x99s known species. COL also supports those who need to manage their own taxonomic information and species lists.\n\nYou can read more about COL here:\n\n - Overview: https://www.catalogueoflife.org/about/catalogueoflife\n - The COL Data Pipeline: https://www.catalogueoflife.org/about/colpipeline\n - COL ChecklistBank: https://www.catalogueoflife.org/about/colcommunity#col-checklistbank\n\n\n## COL ChecklistBank\n\nCOL, in partnership with the [Global Biodiversity Information Facility (GBIF)](https://www.gbif.org), maintains [COL ChecklistBank](https://data.catalogueoflife.org), a public checklist repository that brings together the latest versions of the checklists for each taxonomic sector, along with thousands of other species checklists shared by researchers and other contributors. 
These include summaries from new taxonomic publications, national or local checklists, lists of threatened or invasive species, etc. Checklists published in [supported formats](docs/DATA-FORMATS.md) to this repository are readily discoverable and citable and can be downloaded both in their original form and in a standard interpreted format. All published checklists can be searched, browsed, downloaded or accessed via a standard Application Programming Interface (API). In this way, COL ChecklistBank serves as a rich resource for discovering how any scientific name has been used in different contexts. Some of the names found in these checklists do not yet appear in any community-managed sector checklist, so COL ChecklistBank is also a tool for addressing gaps in COL.\n\n\n## Github repositories\n\nCOL manages several github repositories within the [Catalogue of Life organisation](https://github.com/catalogueOfLife) which are responsible for specific tasks.\nPlease check the individual repositories and their issue management for more details:\n\n - [general](https://github.com/CatalogueOfLife/general): the overarching project repository that contains:\n - issues reaching out to the [CoL Global Team](https://github.com/CatalogueOfLife/general/issues?q=is%3Aissue+is%3Aopen+label%3A%22Global+Team%22)\n - [coldp](https://github.com/CatalogueOfLife/coldp): Catalogue of Life Data Package specification of a richer & recommended exchange format for Catalogue of Life and ChecklistBank, replacing DwC-A and the CoL submission format (ACEF).\n - [backend](https://github.com/CatalogueOfLife/backend): the Java backend with various Maven modules that primarily provide standalone JSON webservices as shaded jars using the Dropwizard framework\n - [API documentation](https://sp2000.github.io/colplus/api/api.html) using RAML\n - [checklistbank](https://github.com/CatalogueOfLife/checklistbank): repository containing all frontend code written in [React](https://reactjs.org/) on top of the API services.\n - [portal](https://github.com/CatalogueOfLife/portal): the public facing website for Catalogue of Life built using [Jekyll](https://jekyllrb.com/)\n - [deploy](https://github.com/CatalogueOfLife/deploy): private repository with credentials and deploy scripts for GBIF\n - [data](https://github.com/CatalogueOfLife/data): a data repository that is used for tracking issues and working with data in the COL Checklist.\n'",,"2017/04/24, 09:56:25",2375,CUSTOM,5,251,"2023/08/18, 12:21:33",15,4,73,5,68,0,4.0,0.36929460580912865,,,0,7,false,,false,false,,,https://github.com/CatalogueOfLife,http://www.catalogueoflife.org/,"Leiden, NL",,,https://avatars.githubusercontent.com/u/56591279?v=4,,, Darwin Core,Standard for sharing of information about biological diversity.,tdwg,https://github.com/tdwg/dwc.git,github,"tdwg,standard,biodiversity-standards,darwin-core,biodiversity-informatics",Biodiversity and Species Distribution,"2023/09/25, 13:19:18",179,0,34,true,Python,Biodiversity Information Standards (TDWG),tdwg,"Python,Jupyter Notebook",https://dwc.tdwg.org,"b'# Darwin Core\n\nDarwin Core is a standard maintained by the [Darwin Core Maintenance Interest Group](https://www.tdwg.org/standards/dwc/#maintenance%20group). It includes a glossary of terms (in other contexts these might be called properties, elements, fields, columns, attributes, or concepts) intended to **facilitate the sharing of information about biological diversity** by providing identifiers, labels, and definitions. 
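\n\nFor orientation, here is an illustrative occurrence record that uses a handful of real Darwin Core terms as column names (the values themselves are invented):\n\n```r\n# a one-row data frame whose column names are Darwin Core terms\nocc <- data.frame(\n  occurrenceID = \'urn:catalog:EXAMPLE:MAMMALS:1\',\n  basisOfRecord = \'PreservedSpecimen\',\n  scientificName = \'Puma concolor (Linnaeus, 1771)\',\n  eventDate = \'1999-03-04\',\n  decimalLatitude = -23.55,\n  decimalLongitude = -46.64,\n  countryCode = \'BR\'\n)\nwrite.csv(occ, \'simple_dwc_occurrence.csv\', row.names = FALSE)\n```\n\n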
Darwin Core is primarily based on taxa, their occurrence in nature as documented by observations, specimens, samples, and related information.\n\n## Getting started\n\n[Darwin Core Quick Reference Guide](https://dwc.tdwg.org/terms/)\n\nDocuments:\n\n- [List of terms document](https://dwc.tdwg.org/list/): Comprehensive metadata for current and obsolete terms in human readable form \n- [Complete term history table](vocabulary/term_versions.csv): A CSV file with the full version history of Darwin Core terms\n- [Distribution documents](dist/): Simple CSV files to start using Darwin Core\n- [Website documents](docs/): Markdown files that form the source for the [Darwin Core website](https://dwc.tdwg.org/)\n\nCommunity:\n\n- [How to contribute](.github/CONTRIBUTING.md): a guide on how to contribute to Darwin Core\n- [Darwin Core Q&A](https://github.com/tdwg/dwc-qa): an open forum on the use of Darwin Core\n\n## Repo structure\n\nThe repository structure is described below. Files/directories indicated with `GENERATED` should not be edited manually.\n\n```\n\xe2\x94\x9c\xe2\x94\x80\xe2\x94\x80 .github\n\xe2\x94\x82 \xe2\x94\x9c\xe2\x94\x80\xe2\x94\x80 ISSUE_TEMPLATE : Directory of issue templates generated by GitHub\n\xe2\x94\x82 \xe2\x94\x9c\xe2\x94\x80\xe2\x94\x80 CONTRIBUTING.md : Guide on how to contribute, create issues, etc.\n\xe2\x94\x82\n\xe2\x94\x9c\xe2\x94\x80\xe2\x94\x80 build\n\xe2\x94\x82 \xe2\x94\x9c\xe2\x94\x80\xe2\x94\x80 doe-cv-build : Directory of build scripts for the degreeOfEstablishment controlled vocabulary\n\xe2\x94\x82 \xe2\x94\x9c\xe2\x94\x80\xe2\x94\x80 dwc_terms_guides_rdf : Directory containing editable template for generating RDF guide\n\xe2\x94\x82 \xe2\x94\x9c\xe2\x94\x80\xe2\x94\x80 dwc_terms_guides_text : Directory containing editable template for generating text guide\n\xe2\x94\x82 \xe2\x94\x9c\xe2\x94\x80\xe2\x94\x80 dwc_terms_guides_xml : Directory containing editable template for generating XML guide\n\xe2\x94\x82 \xe2\x94\x9c\xe2\x94\x80\xe2\x94\x80 dwc_terms_namespace : Directory containing editable template for generating namespace policy\n\xe2\x94\x82 \xe2\x94\x9c\xe2\x94\x80\xe2\x94\x80 dwc_terms_simple : Directory containing editable template for generating Simple DwC guide\n\xe2\x94\x82 \xe2\x94\x9c\xe2\x94\x80\xe2\x94\x80 em-cv-build : Directory of build scripts for the establishmentMeans controlled vocabulary\n\xe2\x94\x82 \xe2\x94\x9c\xe2\x94\x80\xe2\x94\x80 pw-cv-build : Directory of build scripts for the pathway controlled vocabulary\n\xe2\x94\x82 \xe2\x94\x9c\xe2\x94\x80\xe2\x94\x80 xml : Directory for build script and configs for XML extension definitions\n\xe2\x94\x82 \xe2\x94\x9c\xe2\x94\x80\xe2\x94\x80 ext : Directory of GENERATED XML extension definitions\n\xe2\x94\x82 \xe2\x94\x9c\xe2\x94\x80\xe2\x94\x80 README.md : Workflow for generating a new version of the vocabulary\n\xe2\x94\x82 \xe2\x94\x9c\xe2\x94\x80\xe2\x94\x80 build_other_doc_header.py : Script to build non-list of terms documents from their editable templates\n\xe2\x94\x82 \xe2\x94\x9c\xe2\x94\x80\xe2\x94\x80 build-termlist.ipynb : Obsolete Juyter notebook to construct the term list (morphed to .py version)\n\xe2\x94\x82 \xe2\x94\x9c\xe2\x94\x80\xe2\x94\x80 build-temlist.py : Script to build Markdown pages that provide term metadata for complex vocabularies\n\xe2\x94\x82 \xe2\x94\x9c\xe2\x94\x80\xe2\x94\x80 build.py : Build script to generate distribution files from the normative document\n\xe2\x94\x82 \xe2\x94\x9c\xe2\x94\x80\xe2\x94\x80 generate_term_versions.py : Script to 
build the term_versions.csv file\n\xe2\x94\x82 \xe2\x94\x9c\xe2\x94\x80\xe2\x94\x80 qrg-list.csv : List of the term IRIs in the order that they are to appear in the Quick Reference Guide\n\xe2\x94\x82 \xe2\x94\x9c\xe2\x94\x80\xe2\x94\x80 requirements.txt : List of libraries required by the build scripts\n\xe2\x94\x82 \xe2\x94\x9c\xe2\x94\x80\xe2\x94\x80 termlist-footer.md : Footer to append to the generated term list document\n\xe2\x94\x82 \xe2\x94\x9c\xe2\x94\x80\xe2\x94\x80 termlist-header.md : Header to prepend to the generated term list document\n\xe2\x94\x82 \xe2\x94\x9c\xe2\x94\x80\xe2\x94\x80 terms.tmpl : A Jinja2 template to format the Quick Reference Guide\n\xe2\x94\x82 \xe2\x94\x9c\xe2\x94\x80\xe2\x94\x80 update_previous_doc.py : Script to move current doc to a version and update version links in it\n\xe2\x94\x82 \xe2\x94\x94\xe2\x94\x80\xe2\x94\x80 workflow_diagram.png : Figure used in README.md to show how to create a new version of the standard\n\xe2\x94\x9c\xe2\x94\x80\xe2\x94\x80 dist : GENERATED Distribution files generated by build.py\n\xe2\x94\x82 \xc2\xa0 \xe2\x94\x9c\xe2\x94\x80\xe2\x94\x80 all_dwc_vertical.csv : GENERATED CSV file with all Darwin Core terms as a column\n\xe2\x94\x82 \xc2\xa0 \xe2\x94\x9c\xe2\x94\x80\xe2\x94\x80 simple_dwc_horizontal.csv : GENERATED CSV file with Simple Darwin Core terms as a row\n\xe2\x94\x82 \xc2\xa0 \xe2\x94\x94\xe2\x94\x80\xe2\x94\x80 simple_dwc_vertical.csv : GENERATED CSV file with Simple Darwin Core terms as a column\n\xe2\x94\x82\n\xe2\x94\x9c\xe2\x94\x80\xe2\x94\x80 docs (GENERATED except for index.md)\n\xe2\x94\x82 \xe2\x94\x9c\xe2\x94\x80\xe2\x94\x80 doe : Degree of Establishment Controlled Vocabulary List of Terms\n\xe2\x94\x82 \xe2\x94\x9c\xe2\x94\x80\xe2\x94\x80 em : Establishment Means Controlled Vocabulary List of Terms\n\xe2\x94\x82 \xe2\x94\x9c\xe2\x94\x80\xe2\x94\x80 list : Darwin Core List of Terms documents\n\xe2\x94\x82 \xe2\x94\x9c\xe2\x94\x80\xe2\x94\x80 namespace : Darwin Core namespace policy\n\xe2\x94\x82 \xe2\x94\x9c\xe2\x94\x80\xe2\x94\x80 pw : Pathway Controlled Vocabulary List of Terms\n\xe2\x94\x82 \xe2\x94\x9c\xe2\x94\x80\xe2\x94\x80 rdf : Darwin Core RDF Guide\n\xe2\x94\x82 \xe2\x94\x9c\xe2\x94\x80\xe2\x94\x80 simple : Simple Darwin Core Guide\n\xe2\x94\x82 \xe2\x94\x9c\xe2\x94\x80\xe2\x94\x80 terms : GENERATED Quick Reference Guide\n\xe2\x94\x82 \xe2\x94\x9c\xe2\x94\x80\xe2\x94\x80 text : Darwin Core Text Guide (Darwin Core Archive specification)\n\xe2\x94\x82 \xe2\x94\x9c\xe2\x94\x80\xe2\x94\x80 xml : Darwin Core XML Guide\n\xe2\x94\x82 \xe2\x94\x9c\xe2\x94\x80\xe2\x94\x80 _data : Website navigation and footer\n\xe2\x94\x82 \xe2\x94\x9c\xe2\x94\x80\xe2\x94\x80 _config.yml : Jekyll site configuration\n\xe2\x94\x82 \xe2\x94\x9c\xe2\x94\x80\xe2\x94\x80 CNAME : Canonical Name record for dwc.tdwg.org\n\xe2\x94\x82 \xe2\x94\x94\xe2\x94\x80\xe2\x94\x80 index.md : Website home page (manually maintained)\n\xe2\x94\x82\n\xe2\x94\x9c\xe2\x94\x80\xe2\x94\x80 vocabulary\n\xe2\x94\x82 \xe2\x94\x94\xe2\x94\x80\xe2\x94\x80 term_versions.csv : Darwin Core term versions, contains the complete history of the terms\n\xe2\x94\x82\n\xe2\x94\x9c\xe2\x94\x80\xe2\x94\x80 .gitignore : Files and directories to be ignored by git\n\xe2\x94\x9c\xe2\x94\x80\xe2\x94\x80 LICENSE : Repository license\n\xe2\x94\x94\xe2\x94\x80\xe2\x94\x80 README.md : Description of this repository\n```\n\n## Contributors\n\n[List of contributors](https://github.com/tdwg/dwc/contributors)\n\n## License\n\n[Creative Commons Attribution 4.0 International 
License](http://creativecommons.org/licenses/by/4.0/)\n\n## Recommended citation\n\nFor Darwin Core in general, consider the peer-reviewed article on Darwin Core:\n\n> Wieczorek J, Bloom D, Guralnick R, Blum S, D\xc3\xb6ring M, et al. (2012) Darwin Core: An Evolving Community-Developed Biodiversity Data Standard. PLoS ONE 7(1): e29715. https://doi.org/10.1371/journal.pone.0029715\n\nFor this repository:\n\n> Darwin Core Maintenance Interest Group, Biodiversity Information Standards (TDWG) (2014). Darwin Core. Zenodo. https://doi.org/10.5281/zenodo.592792\n\nThe citation above represents all versions of the repository. Specific [versions/releases](https://github.com/tdwg/dwc/releases) from 2011 onwards are also deposited on Zenodo.\n'",",https://doi.org/10.1371/journal.pone.0029715\n\nFor,https://doi.org/10.5281/zenodo.592792\n\nThe","2014/10/27, 16:49:02",3285,CC-BY-4.0,103,932,"2023/09/25, 13:43:25",54,90,445,119,30,2,1.2,0.5623471882640587,"2023/07/10, 16:39:59",2023-07-10,0,17,false,,false,true,,,https://github.com/tdwg,https://www.tdwg.org,,,,https://avatars.githubusercontent.com/u/5882606?v=4,,, iNaturalist,Helps you identify the plants and animals around you.,inaturalist,https://github.com/inaturalist/inaturalist.git,github,,Biodiversity and Species Distribution,"2023/10/25, 14:03:25",590,0,78,true,JavaScript,iNaturalist,inaturalist,"JavaScript,Ruby,Haml,HTML,SCSS,PLpgSQL,CSS,PHP,XSLT,Gherkin,Shell,Dockerfile,Makefile",http://www.inaturalist.org,"b'h2. iNaturalist ""!https://github.com/inaturalist/inaturalist/workflows/inaturalist%20CI/badge.svg!"":https://github.com/inaturalist/inaturalist/actions\n\nOpen source Rails app behind ""iNaturalist.org"":https://www.inaturalist.org/\n\nWant to help out? Fork the project and check out the ""Development Setup Guide"":https://github.com/inaturalist/inaturalist/wiki/Development-Setup-Guide (might be a bit out of date, contact ""kueda"":http://github.com/kueda if you hit problems getting set up).\n\nThinking about running your own version of iNaturalist? Consider joining the ""iNaturalist Network"":https://www.inaturalist.org/sites/network instead of forking the community.\n\nh3. 
Attribution\n\nUse of the Time Zone Geometries feature with the recommended source data will include information from ""Timezone Boundary Builder"":https://github.com/evansiroky/timezone-boundary-builder, which is made available under the ""Open Database License (ODbL)"":https://opendatacommons.org/licenses/odbl/.\n'",,"2009/09/04, 02:26:52",5164,MIT,600,14541,"2023/10/25, 14:03:28",520,1549,3389,349,0,5,0.0,0.3028758065801004,,,0,54,false,,false,true,,,https://github.com/inaturalist,http://www.inaturalist.org,"Bay Area, California",,,https://avatars.githubusercontent.com/u/62292?v=4,,, pyinaturalist,"Python client for iNaturalist, a community science platform that helps people get involved in the natural world by observing and identifying the living things around them.",pyinat,https://github.com/pyinat/pyinaturalist.git,github,"python,inaturalist,api-client,api,biodiversity,biodiversity-data,biodiversity-informatics,citizen-science",Biodiversity and Species Distribution,"2023/10/05, 14:25:45",109,39,38,true,Python,,pyinat,"Python,JavaScript,Dockerfile",https://pyinaturalist.readthedocs.io,"b""# pyinaturalist\n\n[![Build](https://github.com/pyinat/pyinaturalist/workflows/Build/badge.svg)](https://github.com/pyinat/pyinaturalist/actions)\n[![Codecov](https://codecov.io/gh/pyinat/pyinaturalist/branch/main/graph/badge.svg)](https://codecov.io/gh/pyinat/pyinaturalist)\n[![Documentation](https://img.shields.io/readthedocs/pyinaturalist/stable)](https://pyinaturalist.readthedocs.io)\n\n[![PyPI](https://img.shields.io/pypi/v/pyinaturalist?color=blue)](https://pypi.org/project/pyinaturalist)\n[![Conda](https://img.shields.io/conda/vn/conda-forge/pyinaturalist?color=blue)](https://anaconda.org/conda-forge/pyinaturalist)\n[![PyPI - Python Versions](https://img.shields.io/pypi/pyversions/pyinaturalist)](https://pypi.org/project/pyinaturalist)\n\n[![Run with Binder](https://mybinder.org/badge_logo.svg)](https://mybinder.org/v2/gh/pyinat/pyinaturalist/main?urlpath=lab/tree/examples)\n[![Open in VSCode](docs/images/open-in-vscode.svg)](https://open.vscode.dev/pyinat/pyinaturalist)\n\n
\n\n[![](docs/images/pyinaturalist_logo_med.png)](https://pyinaturalist.readthedocs.io)\n\n# Introduction\n[**iNaturalist**](https://www.inaturalist.org) is a community science platform that helps people\nget involved in the natural world by observing and identifying the living things around them.\nCollectively, the community produces a rich source of global biodiversity data that can be valuable\nto anyone from hobbyists to scientists.\n\n**pyinaturalist** is a client for the [iNaturalist API](https://api.inaturalist.org/v1) that makes\nthese data easily accessible in the python programming language.\n\n- [Features](#features)\n- [Quickstart](#quickstart)\n- [Next Steps](#next-steps)\n- [Feedback](#feedback)\n- [Related Projects](#related-projects)\n\n## Features\n* \xe2\x9e\xa1\xef\xb8\x8f **Easier requests:** Simplified request formats, easy pagination, and complete request\n parameter type annotations for better IDE integration\n* \xe2\xac\x85\xef\xb8\x8f **Convenient responses:** Type conversions to the things you would expect in python, and an\n optional object-oriented interface for response data\n* \xf0\x9f\x94\x92 **Security:** Keyring integration for secure credential storage\n* \xf0\x9f\x93\x97 **Docs:** Example requests, responses, scripts, and Jupyter notebooks to help get you started\n* \xf0\x9f\x92\x9a **Responsible use:** Follows the\n [API Recommended Practices](https://www.inaturalist.org/pages/api+recommended+practices)\n by default, so you can be nice to the iNaturalist servers and not worry about rate-limiting errors\n* \xf0\x9f\xa7\xaa **Testing:** A dry-run testing mode to preview your requests before potentially modifying data\n\n### Supported Endpoints\nMany of the most relevant API endpoints are supported, including:\n* \xf0\x9f\x93\x9d Annotations and observation fields\n* \xf0\x9f\x86\x94 Identifications\n* \xf0\x9f\x92\xac Messages\n* \xf0\x9f\x91\x80 Observations (multiple formats)\n* \xf0\x9f\x93\xb7 Observation photos + sounds\n* \xf0\x9f\x93\x8a Observation histograms, observers, identifiers, life lists, and species counts\n* \xf0\x9f\x93\x8d Places\n* \xf0\x9f\x91\xa5 Projects\n* \xf0\x9f\x90\xa6 Species\n* \xf0\x9f\x91\xa4 Users\n\n## Quickstart\nHere are usage examples for some of the most commonly used features.\n\nFirst, install with pip:\n```bash\npip install pyinaturalist\n```\n\nThen, import the main API functions:\n```python\nfrom pyinaturalist import *\n```\n\n### Search observations\nLet's start by searching for all your own observations. 
There are\n[numerous fields you can search on](https://pyinaturalist.readthedocs.io/en/stable/modules/pyinaturalist.v1.observations.html#pyinaturalist.v1.observations.get_observations), but we'll just use `user_id` for now:\n```python\n>>> observations = get_observations(user_id='my_username')\n```\n\nThe full response will be in JSON format, but we can use `pyinaturalist.pprint()` to print out a summary:\n```python\n>>> for obs in observations['results']:\n>>> pprint(obs)\nID Taxon Observed on User Location\n\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\n117585709 Genus: Hyoscyamus (henbanes) May 18, 2022 niconoe Calvi, France\n117464920 Genus: Omophlus May 17, 2022 niconoe Gal\xc3\xa9ria, France\n117464393 Genus: Briza (Rattlesnake Grasses) May 17, 2022 niconoe Gal\xc3\xa9ria, France\n...\n```\n\nYou can also get\n[observation counts by species](https://pyinaturalist.readthedocs.io/en/stable/modules/pyinaturalist.v1.observations.html#pyinaturalist.v1.observations.get_observation_species_counts).\nOn iNaturalist.org, this information can be found on the 'Species' tab of search results.\nFor example, to get species counts of all your own research-grade observations:\n```python\n>>> counts = get_observation_species_counts(user_id='my_username', quality_grade='research')\n>>> pprint(counts)\n ID Rank Scientific name Common name Count\n\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\xe2\x94\x81\n47934 species \xf0\x9f\x90\x9b Libellula luctuosa Widow Skimmer 7\n48627 species \xf0\x9f\x8c\xbb Echinacea purpurea Purple Coneflower 6\n504060 species \xf0\x9f\x8d\x84 Pleurotus citrinopileatus Golden Oyster Mushroom 6\n...\n```\n\nAnother useful format is the\n[observation 
histogram](https://pyinaturalist.readthedocs.io/en/stable/modules/pyinaturalist.v1.observations.html#pyinaturalist.v1.observations.get_observation_histogram),\nwhich shows the number of observations over a given interval. The default is `month_of_year`:\n```python\n>>> histogram = get_observation_histogram(user_id='my_username')\n>>> print(histogram)\n{\n 1: 8, # January\n 2: 1, # February\n 3: 19, # March\n ..., # etc.\n}\n```\n\n### Create and update observations\nTo create or modify observations, you will first need to log in.\nThis requires creating an [iNaturalist app](https://www.inaturalist.org/oauth/applications/new),\nwhich will be used to get an access token.\n```python\ntoken = get_access_token(\n username='my_username',\n password='my_password',\n app_id='my_app_id',\n app_secret='my_app_secret',\n)\n```\nSee [Authentication](https://pyinaturalist.readthedocs.io/en/latest/user_guide.html#authentication)\nfor more options including environment variables, keyrings, and password managers.\n\nNow we can [create a new observation](https://pyinaturalist.readthedocs.io/en/stable/modules/pyinaturalist.v1.observations.html#pyinaturalist.v1.observations.create_observation):\n```python\nfrom datetime import datetime\n\nresponse = create_observation(\n taxon_id=54327, # Vespa Crabro\n observed_on_string=datetime.now(),\n time_zone='Brussels',\n description='This is a free text comment for the observation',\n tag_list='wasp, Belgium',\n latitude=50.647143,\n longitude=4.360216,\n positional_accuracy=50, # GPS accuracy in meters\n access_token=token,\n photos=['~/observations/wasp1.jpg', '~/observations/wasp2.jpg'],\n)\n\n# Save the new observation ID\nnew_observation_id = response[0]['id']\n```\n\nWe can then [update the observation](https://pyinaturalist.readthedocs.io/en/stable/modules/pyinaturalist.v1.observations.html#pyinaturalist.v1.observations.update_observation) information, photos, or sounds:\n```python\nupdate_observation(\n new_observation_id,\n access_token=token,\n description='updated description !',\n photos='~/observations/wasp_nest.jpg',\n sounds='~/observations/wasp_nest.mp3',\n)\n```\n\n### Search species\nLet's say you partially remember either a genus or family name that started with **'vespi'**-something.\nThe [taxa endpoint](https://pyinaturalist.readthedocs.io/en/stable/modules/pyinaturalist.v1.taxa.html#pyinaturalist.v1.taxa.get_taxa)\ncan be used to search by name, rank, and several other criteria\n```python\n>>> response = get_taxa(q='vespi', rank=['genus', 'family'])\n```\n\nAs with observations, there is a lot of information in the response, but we'll print just a few basic details:\n```python\n>>> pprint(response)\n[52747] Family: Vespidae (Hornets, Paper Wasps, Potter Wasps, and Allies)\n[92786] Genus: Vespicula\n[84737] Genus: Vespina\n...\n```\n\n## Next Steps\nFor more information, see:\n\n* [User Guide](https://pyinaturalist.readthedocs.io/en/latest/user_guide.html):\n introduction and general features that apply to most endpoints\n* [Endpoint Summary](https://pyinaturalist.readthedocs.io/en/latest/endpoints.html):\n a complete list of endpoints wrapped by pyinaturalist\n* [Examples](https://pyinaturalist.readthedocs.io/en/stable/examples.html):\n data visualizations and other examples of things to do with iNaturalist data\n* [Reference](https://pyinaturalist.readthedocs.io/en/latest/reference.html): Detailed API documentation\n* [Contributing Guide](https://pyinaturalist.readthedocs.io/en/stable/contributing.html):\n development details for anyone 
interested in contributing to pyinaturalist\n* [History](https://github.com/pyinat/pyinaturalist/blob/dev/HISTORY.md):\n details on past and current releases\n* [Issues](https://github.com/pyinat/pyinaturalist/issues): planned & proposed features\n\n## Feedback\nIf you have any problems, suggestions, or questions about pyinaturalist, please let us know!\nJust [create an issue](https://github.com/pyinat/pyinaturalist/issues/new/choose).\nAlso, **PRs are welcome!**\n\n**Note:** pyinaturalist is developed by members of the iNaturalist community, and is not endorsed by\niNaturalist.org or the California Academy of Sciences. If you have non-python-specific questions\nabout the iNaturalist API or iNaturalist in general, the\n[iNaturalist Community Forum](https://forum.inaturalist.org/) is the best place to start.\n\n## Related Projects\nOther python projects related to iNaturalist:\n\n* [naturtag](https://github.com/pyinat/naturtag): A desktop application for tagging image files with iNaturalist taxonomy & observation metadata\n* [pyinaturalist-convert](https://github.com/pyinat/pyinaturalist-convert): Tools to convert observation data to and from a variety of useful formats\n* [pyinaturalist-notebook](https://github.com/pyinat/pyinaturalist-notebook): Jupyter notebook Docker image for pyinaturalist\n* [dronefly](https://github.com/dronefly-garden/dronefly): A Discord bot with iNaturalist integration, used by the iNaturalist Discord server.\n""",,"2018/10/10, 08:34:57",1841,MIT,173,1058,"2023/09/19, 20:19:28",16,334,506,128,36,0,0.0,0.09344660194174759,"2023/02/27, 18:36:04",v0.18.0,2,8,false,,false,true,"thompsonmj/Andromeda,sthysel/inat-fetch,bemarchant/xepelin_basiclogin,fede2cr/birdnetpi_inat,Imageomics/Andromeda,MatheusFBBueno/inaturalist-bot,dnamyco/inat_util,dronefly-garden/dronefly-discord,openssl-sg-insights/inaturalistbot,openssl-sg-insights/naturtag,JWCook/dronefly,FelipeSBarros/pyinaturalist-convert,dfloer/dronefly,AEHamrick/iNaturalist-Uploads,dronefly-garden/dronefly-cli,EcoNet-NZ/inaturalist-to-cams,probsJustin/database_playground,probsJustin/Inaturalist_research_scraper,JWCook/pyinaturalist-open-data,SergioMOrozco/flora_dex,infovis-vt/Quest_Project_2022,neldomarcelino/parque,puddin-l/Quest_Project_2022,dronefly-garden/dronefly-core,ap5879/CS6910_Assignment2,jodfie/ai-critter,IMiBio/SysIMiBio,joaobagas/Auto-Curation-System,pyinat/pyinaturalist-notebook,pyinat/pyinaturalist-convert,pyinat/naturtag,dronefly-garden/dronefly,omarm12/Natures_RPG,JWCook/inat-backlog-slogger,evjrob/birbcam,leynier/inaturalistbot,JWCook/taxon-keyword-gen,biophyser/oh_deer_app,inbo/vespa-watch",,https://github.com/pyinat,https://pyinaturalist.readthedocs.io,,,,https://avatars.githubusercontent.com/u/105503620?v=4,,, TaxonWorks,An integrated web-based workbench for taxonomists and biodiversity scientists.,SpeciesFileGroup,https://github.com/SpeciesFileGroup/taxonworks.git,github,"species,biodiversity-informatics,ruby,taxonomy,biodiversity,describe,life,evolution,nomenclature,collections",Biodiversity and Species Distribution,"2023/10/25, 21:30:20",77,0,6,true,Ruby,Species File Group,SpeciesFileGroup,"Ruby,Vue,JavaScript,HTML,SCSS,TeX,Shell,Stylus,Dockerfile,CSS",http://taxonworks.org,"b'![TaxonWorks](https://sfg.taxonworks.org/s/o3exin ""https://taxonworks.org"")\n\n[![build](https://github.com/SpeciesFileGroup/taxonworks/workflows/build/badge.svg?branch=development)](https://github.com/SpeciesFileGroup/taxonworks/actions?query=workflow%3Abuild)\n[![Coverage Status][3]][4]\n[![Chat on 
Matrix](https://img.shields.io/matrix/TaxonWorks:gitter.im?label=chat&server_fqdn=matrix.org)](https://app.gitter.im/#/room/#SpeciesFileGroup_taxonworks:gitter.im)\n[![Link to documentation](https://img.shields.io/badge/documentation-yes-green)](https://docs.taxonworks.org)\n\n## Overview\n\nTaxonWorks is a web-based workbench designed for taxonomists and biodiversity scientists. It provides tools to help you capture, organize, and enhance your data, collaborate with others, and prepare your work for analysis and publication. With TaxonWorks, you can easily manage your research data, share it with colleagues, and streamline the process of analyzing and publishing your findings.\n\nTo get more information on the project, its vision, and scope, see [taxonworks.org](https://taxonworks.org) and [docs.taxonworks.org](https://docs.taxonworks.org).\n\n## License\n\nTaxonWorks is open source and is presently available under the [University of Illinois/NCSA Open Source License](https://taxonworks.org), read [more here](https://en.wikipedia.org/wiki/University_of_Illinois/NCSA_Open_Source_License).\n\n## Funding\n\nThe foundation of TaxonWorks is funded by an endowment of the [Species File Group](https://speciesfilegroup.org). This project was funded in part by NSF-ABI-1356381. Any opinions, findings and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation. \n\n[3]: https://codecov.io/gh/SpeciesFileGroup/taxonworks/branch/development/graph/badge.svg?token=X2Raeg8KJI\n[4]: https://codecov.io/gh/SpeciesFileGroup/taxonworks\n\n\n\n'",,"2013/09/03, 18:13:42",3704,CUSTOM,2130,25302,"2023/10/23, 18:58:02",720,338,2877,425,2,3,0.1,0.6746287350152085,"2023/10/23, 20:11:52",v0.35.1,3,22,false,,true,true,,,https://github.com/SpeciesFileGroup,http://speciesfilegroup.org,"University of Illinois, INHS",,,https://avatars.githubusercontent.com/u/1653377?v=4,,, ENMTML,An R package for an integrated construction of Ecological Niche Models.,andrefaa,https://github.com/andrefaa/ENMTML.git,github,,Biodiversity and Species Distribution,"2023/09/05, 15:06:38",44,0,3,true,R,,,R,,"b'**UNDER UPDATE!**\r\n# __ENMTML__ \r\n[![DOI](https://zenodo.org/badge/DOI/10.1016/j.envsoft.2019.104615.svg)](https://doi.org/10.1016/j.envsoft.2019.104615)\r\n[![License](https://img.shields.io/badge/license-GPL%20%28%3E=%203%29-lightgrey.svg?style=flat)](http://www.gnu.org/licenses/gpl-3.0.html)\r\n[![Lifecycle:maturing](https://img.shields.io/badge/lifecycle-maturing-blue.svg)](https://www.tidyverse.org/lifecycle/#maturing)\r\n\r\n## An R package for an integrated construction of Ecological Niche Models\r\n\r\n## Installation\r\n```r\r\nif (!""remotes""%in%installed.packages()){install.packages(""remotes"")} \r\nremotes::install_github(""andrefaa/ENMTML"") \r\n```\r\n## Why **ENMTML**?\r\n[ENMTML](https://andrefaa.github.io/ENMTML/) stands for **E**cological **N**iche **M**odelling at **T**he **M**eta**L**and EcologyLab. \r\nIt is a product from the [Prof. 
Paulo De Marco\'s lab at Universidade Federal de Goi\xc3\xa1s, Brasil](https://themetaland.wixsite.com/themetaland).\r\n\r\nIt puts together a lot of our work with ENMs from the past years in a single package, with the objective of making it easy to use while still covering the complex methodological development that exists in the field.\r\n\r\n## Package website\r\nSee the [ENMTML](https://andrefaa.github.io/ENMTML/) package website (https://andrefaa.github.io/ENMTML) for further details of its [functions](https://andrefaa.github.io/ENMTML/articles/ENMTML.html) and examples. \r\n\r\n## What is the main objective of **ENMTML**?\r\nWe believe there is a division within ENM/SDM.\r\n\r\nDevelopers are constantly coming up with better methods, which leaves those improvements scattered throughout the literature, where they do not always reach users. \r\nThis effect is amplified when novelties are built within different R packages, which demands that users also have some comprehension of programming. \r\n \r\nThe main objective of **ENMTML** is to minimize those issues. \r\n \r\nWe gather here most of the methodological development on ENM and present it to users in a single function with arguments related to those methodological decisions (see the sketch at the end of this README). \r\n \r\nWe bring together several alternatives for: \r\n\r\n* Variable collinearity control \r\n* Bias control \r\n* Accessible area delimitation \r\n* Pseudo-absence allocation \r\n* Data partition \r\n* A wide variety of algorithms \r\n* Thresholds \r\n* Evaluation metrics \r\n* [Overprediction control (MSDM)](https://github.com/sjevelazco/MSDM) \r\n* Ensemble models \r\n* Projection to a different time period/spatial extent (MOP calculation included) \r\n\r\n## What if I couldn\'t find what I was looking for in **ENMTML**? \r\n\r\n#### Please let us know!\r\n\r\nWe are regularly working on the package and are very interested in incorporating new functionality. \r\n\r\n## Last but not least \r\n \r\n**There are no defaults!** \r\n\r\nWe believe **every ENM should be carefully planned and every decision matters!** \r\n\r\nWe have attempted to present a solid background for all methodological alternatives in the package; our article details where to find a full description of each of the included methods.\r\n\r\n\r\n### CITATION:\r\n**Andrade, A.F.A., Velazco, S.J.E., De Marco Jr, P., 2020. ENMTML: An R package for a straightforward construction of complex ecological niche models. Environmental Modelling & Software 125, 104615. https://doi.org/10.1016/j.envsoft.2019.104615**\r\n \r\n \r\n> Please report bugs [here](https://github.com/andrefaa/ENMTML/issues) or send an e-mail to andrefaandrade@gmail.com or sjevelazco@gmail.com! 
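Since all of those methodological decisions are passed as arguments to the single `ENMTML()` function, a call looks roughly like the sketch below. This is illustrative only: the file paths are hypothetical, and the argument names and option codes shown (`pred_dir`, `occ_file`, `sp`, `x`, `y`, `pseudoabs_method`, `part`, `algorithm`, `thr`) should be verified against the current function documentation (`?ENMTML`) before use.

```r
library(ENMTML)

# Illustrative sketch only: hypothetical paths; arguments and option
# codes follow the package documentation and should be double-checked.
ENMTML(
  pred_dir = "./env_layers",                 # folder with predictor rasters
  occ_file = "./occurrences.txt",            # tab-delimited occurrence table
  sp  = "species",                           # column with species names
  x   = "longitude",                         # column with longitude
  y   = "latitude",                          # column with latitude
  pseudoabs_method = c(method = "RND"),      # random pseudo-absence allocation
  part = c(method = "KFOLD", folds = "5"),   # 5-fold data partition
  algorithm = c("MXD", "RDF"),               # e.g. Maxent (default features), random forest
  thr = c(type = "MAX_TSS")                  # threshold maximizing sensitivity + specificity
)
```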
\r\n\r\n'",",https://doi.org/10.1016/j.envsoft.2019.104615,https://doi.org/10.1016/j.envsoft.2019.104615**\r\n","2017/11/10, 12:11:57",2175,CUSTOM,10,1243,"2023/06/08, 11:18:16",8,323,346,4,139,1,0.0,0.48631239935587767,"2019/10/29, 19:11:42",v1.0-beta,0,3,false,,false,false,,,,,,,,,,, bdc,"A toolkit for standardizing, integrating, and cleaning biodiversity data.",brunobrr,https://github.com/brunobrr/bdc.git,github,"bdc,workflow,biodiversity-data",Biodiversity and Species Distribution,"2023/09/09, 17:02:54",21,0,5,true,R,,,"R,Makefile",https://brunobrr.github.io/bdc,"b'\n\n\n# ***bdc*** \n\n## **A toolkit for standardizing, integrating, and cleaning biodiversity data**\n\n\n\n[![CRAN\nstatus](https://www.r-pkg.org/badges/version/bdc)](https://CRAN.R-project.org/package=bdc)\n[![downloads](https://cranlogs.r-pkg.org/badges/grand-total/bdc)](https://cranlogs.r-pkg.org:443/badges/grand-total/bdc)\n[![rstudio mirror\ndownloads](https://cranlogs.r-pkg.org/badges/bdc)](https://cranlogs.r-pkg.org:443/badges/bdc)\n[![R-CMD-check](https://github.com/brunobrr/bdc/actions/workflows/R-CMD-check.yaml/badge.svg)](https://github.com/brunobrr/bdc/actions/workflows/R-CMD-check.yaml)\n[![Codecov test\ncoverage](https://codecov.io/gh/brunobrr/bdc/branch/master/graph/badge.svg?token=9AUF86G9LJ)](https://app.codecov.io/gh/brunobrr/bdc)\n[![DOI](https://zenodo.org/badge/DOI/10.5281/zenodo.6450390.svg)](https://doi.org/10.5281/zenodo.6450390)\n[![License](https://img.shields.io/badge/license-GPL%20(%3E=%203)-lightgrey.svg?style=flat)](http://www.gnu.org/licenses/gpl-3.0.html)\n\n\n\n#### **Overview**\n\nHandle biodiversity data from several different sources is not an easy\ntask. Here, we present the **B**iodiversity **D**ata **C**leaning\n(*bdc*), an R package to address quality issues and improve the\nfitness-for-use of biodiversity datasets. *bdc* contains functions to\nharmonize and integrate data from different sources following common\nstandards and protocols, and implements various tests and tools to flag,\ndocument, clean, and correct taxonomic, spatial, and temporal data.\n\nCompared to other available R packages, the main strengths of the *bdc*\npackage are that it brings together available tools \xe2\x80\x93 and a series of\nnew ones \xe2\x80\x93 to assess the quality of different dimensions of biodiversity\ndata into a single and flexible toolkit. The functions can be applied to\na multitude of taxonomic groups, datasets (including regional or local\nrepositories), countries, or worldwide.\n\n#### **Structure of *bdc***\n\nThe *bdc* toolkit is organized in thematic modules related to different\nbiodiversity dimensions.\n\n------------------------------------------------------------------------\n\n> :warning: The modules illustrated, and **functions** within, **were\n> linked to form** a proposed reproducible **workflow** (see\n> [**vignettes**](https://brunobrr.github.io/bdc/)). However, all\n> functions **can also be executed independently**.\n\n------------------------------------------------------------------------\n\n#### ![](https://raw.githubusercontent.com/brunobrr/bdc/master/inst/extdata/icon_vignettes/Figure1.png)\n\n
\n\n#### 1. [**Merge databases**](https://brunobrr.github.io/bdc/articles/integrate_datasets.html)\n\nStandardization and integration of different datasets into a standard\ndatabase.\n\n- `bdc_standardize_datasets()` Standardization and integration of\n different datasets into a new dataset with column names following\n Darwin Core terminology\n\n#### 2. [**Pre-filter**](https://brunobrr.github.io/bdc/articles/prefilter.html)\n\nFlagging and removal of invalid or non-interpretable information,\nfollowed by data amendments (e.g., correcting transposed coordinates and\nstandardizing country names).\n\n- `bdc_scientificName_empty()` Identification of records lacking names\n or with non-interpretable names\n- `bdc_coordinates_empty()` Identification of records lacking\n information on latitude or longitude\n- `bdc_coordinates_outOfRange()` Identification of records with\n out-of-range coordinates (latitude \> 90 or -90; longitude \>180 or\n -180)\n- `bdc_basisOfRecords_notStandard()` Identification of records from\n doubtful sources (e.g., fossil or machine observation) that are impossible to\n interpret or not compatible with the Darwin Core recommended vocabulary\n- `bdc_country_from_coordinates()` Derive country name from valid\n geographic coordinates\n- `bdc_country_standardized()` Standardization of country names and\n retrieval of country codes\n- `bdc_coordinates_transposed()` Identification of records with\n potentially transposed latitude and longitude\n- `bdc_coordinates_country_inconsistent()` Identification of\n coordinates in other countries or farther than a specified distance from\n the coast of a reference country (i.e., in the ocean)\n- `bdc_coordinates_from_locality()` Identification of records lacking\n coordinates but with a detailed description of the locality\n associated with the record, from which coordinates can be derived\n\n#### 3. [**Taxonomy**](https://brunobrr.github.io/bdc/articles/taxonomy.html)\n\nCleaning, parsing, and harmonization of scientific names against\nmultiple taxonomic references.\n\n- `bdc_clean_names()` Name-checking routines to clean and split a\n taxonomic name into its binomial and authority components\n- `bdc_query_names_taxadb()` Harmonization of scientific names by\n correcting spelling errors and converting nomenclatural synonyms to\n currently accepted names.\n- `bdc_filter_out_names()` Function used to filter out records\n according to their taxonomic status present in the column \xe2\x80\x9cnotes\xe2\x80\x9d.\n For example, to keep only valid names categorized as\n \xe2\x80\x9caccepted\xe2\x80\x9d\n\n#### 4. [**Space**](https://brunobrr.github.io/bdc/articles/space.html)\n\nFlagging of erroneous, suspicious, and low-precision geographic\ncoordinates.\n\n- `bdc_coordinates_precision()` Identification of records with a\n coordinate precision below a specified number of decimal places\n- `clean_coordinates()` (From the *CoordinateCleaner* package and part of\n the data-cleaning workflow). Identification of potentially\n problematic geographic coordinates based on geographic gazetteers\n and metadata. Includes tests for flagging records around country\n capitals or country or province centroids, duplicated records, records with equal\n coordinates, records around biodiversity institutions, within urban areas, with\n plain zeros in the coordinates, and suspect geographic outliers\n\n#### 5. 
[**Time**](https://brunobrr.github.io/bdc/articles/time.html)\n\nFlagging and, whenever possible, correction of inconsistent collection\ndates.\n\n- `bdc_eventDate_empty()` Identification of records lacking\n information on event date (i.e., when a record was collected or\n observed)\n- `bdc_year_outOfRange()` Identification of records with illegitimate\n or potentially imprecise collecting year. The year provided can be\n out-of-range (e.g., in the future) or earlier than a specified\n year supplied by the user (e.g., 1900)\n- `bdc_year_from_eventDate()` This function extracts the four-digit year\n from unambiguously interpretable collecting dates\n\n#### [**Other functions**](https://brunobrr.github.io/bdc/reference/index.html)\n\nTo facilitate the **documentation, visualization, and\ninterpretation** of the results of data-quality tests, the package contains\nfunctions for documenting the results of the data-cleaning tests,\nincluding functions for saving i) records needing further inspection,\nii) figures, and iii) data-quality reports.\n\n- `bdc_create_report()` Creation of data-quality reports documenting\n the results of data-quality tests and the taxonomic harmonization\n process\n- `bdc_create_figures()` Creation of figures (i.e., bar plots and\n maps) reporting the results of data-quality tests\n- `bdc_filter_out_flags()` Removal of columns containing the results\n of data quality tests (i.e., column starting with \xe2\x80\x9c.\xe2\x80\x9d) or other\n columns specified\n- `bdc_quickmap()` Creation of a map of points using ggplot2. Helpful\n in inspecting the results of data-cleaning tests\n- `bdc_summary_col()` This function creates or updates the column\n summarizing the results of data quality tests (i.e., the column\n \xe2\x80\x9c.summary\xe2\x80\x9d)\n\n#### **Installation**\n\nYou can install *bdc* from CRAN:\n\n``` r\ninstall.packages(""bdc"")\n```\n\nor the development version from\n[GitHub](https://github.com/brunobrr/bdc) using:\n\n``` r\ninstall.packages(""remotes"")\nremotes::install_github(""brunobrr/bdc"")\n```\n\nLoad the package with:\n\n``` r\nlibrary(bdc)\n```\n\n#### **Package website**\n\nSee the *bdc* package website (https://brunobrr.github.io/bdc/) for a\ndetailed explanation of each module.\n\n#### **Getting help**\n\n> If you encounter a clear bug, please file an issue\n> [**here**](https://github.com/brunobrr/bdc/issues). For questions or\n> suggestions, please send us an email (ribeiro.brr@gmail.com).\n\n#### **Citation**\n\nRibeiro, BR; Velazco, SJE; Guidoni-Martins, K; Tessarolo, G; Jardim,\nLucas; Bachman, SP; Loyola, R (2022). bdc: A toolkit for standardizing,\nintegrating, and cleaning biodiversity data. 
Methods in Ecology and\nEvolution.\n[doi.org/10.1111/2041-210X.13868](https://doi.org/10.1111/2041-210X.13868)\n'",",https://doi.org/10.5281/zenodo.6450390,https://doi.org/10.1111/2041-210X.13868","2020/09/29, 18:48:10",1121,GPL-3.0,19,1102,"2023/09/08, 20:37:51",6,208,249,30,47,1,0.0,0.460772104607721,"2023/03/13, 14:04:46",v.1.1.4,0,7,false,,false,false,,,,,,,,,,, Wallace,"A modular platform for reproducible modeling of species niches and distributions, written in R.",wallaceEcoMod,https://github.com/wallaceEcoMod/wallace.git,github,,Biodiversity and Species Distribution,"2023/09/27, 13:47:29",125,0,16,true,R,Wallace Ecological Modeling App,wallaceEcoMod,"R,CSS,JavaScript",https://wallaceecomod.github.io/,"b'[![R-CMD-check](https://github.com/wallaceEcoMod/wallace/workflows/R-CMD-check/badge.svg)](https://github.com/wallaceEcoMod/wallace/actions) [![License: GPL v3](https://img.shields.io/badge/License-GPL%20v3-blue.svg)](https://www.gnu.org/licenses/gpl-3.0) [![CRAN version](http://www.r-pkg.org/badges/version/wallace)](https://CRAN.R-project.org/package=wallace) [![downloads](https://cranlogs.r-pkg.org:443/badges/grand-total/wallace?color=orange)](https://cranlogs.r-pkg.org:443/badges/grand-total/wallace?color=orange)\n\n# Wallace (v2.1.0)\n\n*Wallace* is a modular platform for reproducible modeling of species niches and distributions, written in R. The application guides users through a complete analysis, from the acquisition of data to visualizing model predictions on an interactive map, thus bundling complex workflows into a single, streamlined interface.\n\nInstall *Wallace* via CRAN and run the application with the following R code.\n\n```R\ninstall.packages(""wallace"")\nlibrary(wallace)\nrun_wallace()\n```\n\nDevelopment versions can be downloaded from Github with the following R code.\n\n```R\ninstall.packages(""devtools"")\ndevtools::install_github(""wallaceEcoMod/wallace"")\nlibrary(wallace)\nrun_wallace()\n```\n\n### Before using *Wallace*\n\n#### Update R and RStudio versions\nPlease make sure you have installed the latest versions of both R (Mac OS, Windows) and RStudio (Mac OS / Windows: choose the free version).\n\n#### How to run Maxent with maxent.jar\n*Wallace* v2.1.0 includes two options to run Maxent models: maxnet and maxent.jar. The former, which is an R implementation and fits the model with the package `glmnet`, is now the default and does not require the package `rJava` (see Phillips et al. 2017). The latter, which is the Java implementation, runs the `maxent()` function in the package `dismo`. This function requires the user to place the `maxent.jar` file in the `/java` directory of the `dismo` package root folder. You can download Maxent here, and locate `maxent.jar`, which is the Maxent program itself, in the downloaded folder. You can find the directory path to `dismo/java` by running `system.file(\'java\', package=""dismo"")` at the R console. Simply copy `maxent.jar` and paste it into this folder. If you try to run Maxent in *Wallace* without the file in place, you will get a warning message in the log window and Maxent will not run.\n\n### Potential Issues\n\n#### rJava and Java versions (just for maxent.jar option)\n*Wallace* uses the `rJava` package only to run the program `maxent.jar`. The package `rJava` will not load properly if the version of Java on your computer (32-bit or 64-bit) does not match that of the R installation you are using. 
For example, if you are running 64-bit R, please make sure your Java is also 64-bit, or else `rJava` will be unable to load. Install the latest version of Java here, and 64-bit Windows users should make sure to select ""Windows Offline (64-bit)"". There is currently only a 64-bit download for Mac OS. For Mac users running OSX Yosemite and above with problems, see this StackOverflow post for some tips on how to get `rJava` working again. If you need to install Java for the first time, you can follow these instructions for Mac and Windows.\n\n#### Problems viewing tables\nIf for some reason you are unable to view the tables in *Wallace*, please install (force if necessary) the development version of `htmlwidgets` by running this code: `devtools::install_github(""ramnathv/htmlwidgets"")`. You should be able to view tables now.\n\n#### Windows Users: PDF download of session code\nIf PDF downloading of session code is not working for you, please follow these instructions, taken from here:\n - Step 1: Download and Install MiKTeX from http://miktex.org/2.9/setup\n - Step 2: Run `Sys.getenv(""PATH"")` in RStudio. This command returns the path where RStudio is trying to find pdflatex.exe. In Windows (64-bit), it should return ""C:\\Program Files\\MiKTeX 2.9\\miktex\\bin\\x64\\pdflatex.exe"". If pdflatex.exe is not in this location, RStudio gives error code 41.\n - Step 3: To set this path variable run: `Sys.setenv(PATH=paste(Sys.getenv(""PATH""),""C:/Program Files/MiKTeX 2.9/miktex/bin/x64/"",sep="";""))`.\n\n#### Windows Users: Only for Github installation\nIf you are using Windows, please download and install RTools before installing the `devtools` package. After you install RTools, please make sure you add ""C:\\Rtools\\bin"" to your PATH variable (instructions here). Additionally, when using `devtools` on Windows machines, there is a known bug that sometimes results in the inability to download all package dependencies. If this happens to you, please install the packages and their dependencies directly from CRAN.\n\n#### Any other problems with install_github()\nAlthough the recommended way to install is through CRAN, if you are trying to install the Github version and are having problems, follow these steps.\n 1. Download the zip file from the repository page.\n 2. Unzip and open the wallace.Rproj file in RStudio.\n 3. In the right-hand pane, click Build, then Install & Restart.\n 4. 
Type `run_wallace()` in the console and press Enter.\n'",,"2015/02/04, 18:21:07",3185,GPL-3.0,78,3528,"2023/09/25, 21:47:04",44,146,365,21,30,3,0.0,0.4175550817341862,"2022/12/08, 17:47:40",v2.0.0,0,13,false,,false,false,,,https://github.com/wallaceEcoMod,wallaceecomod.github.io,NYC,,,https://avatars.githubusercontent.com/u/22451486?v=4,,, ENMeval,R package for automated runs and evaluations of ecological niche models.,jamiemkass,https://github.com/jamiemkass/ENMeval.git,github,,Biodiversity and Species Distribution,"2023/01/09, 10:47:05",42,0,3,true,R,,,R,https://jamiemkass.github.io/ENMeval/,"b'[![CRAN version](https://www.r-pkg.org/badges/version/ENMeval)](https://CRAN.R-project.org/package=ENMeval) [![downloads](https://cranlogs.r-pkg.org:443/badges/grand-total/ENMeval?color=orange)](https://cranlogs.r-pkg.org:443/badges/grand-total/ENMeval?color=orange)\n[![R-CMD-check](https://github.com/jamiemkass/ENMeval/workflows/R-CMD-check/badge.svg)](https://github.com/jamiemkass/ENMeval/actions)\n\n\n# ENMeval version 2.0.4\n\n## R package for automated tuning and evaluations of ecological niche models\n\nNOTE: ENMeval is a work in progress, changing slowly to fix bugs when users identify them. If you find a bug, please raise an Issue in this Github repo and I will resolve it as soon as I can. The CRAN version may lag behind the Github one, so please try the development version here first if you are having any issues.\nInstall with: `devtools::install_github(""jamiemkass/ENMeval"")`\n\n[`ENMeval`](https://jamiemkass.github.io/ENMeval/index.html) is an R package that performs automated tuning and evaluations of ecological niche models and species distribution models. Version >=2.0.0 represents an extensive restructure and expansion of version 0.3.1, and has many new features, including customizable specification of algorithms besides Maxent using the new **ENMdetails** object, comprehensive metadata output, null model evaluations, new visualization tools, a completely updated and extensive [vignette](https://jamiemkass.github.io/ENMeval/articles/ENMeval-2.0-vignette.html) with a complete analysis walkthrough, and more flexibility for different analyses and data types. Many of these new features were created in response to user requests -- thank you for your input!\n\n`ENMeval` >=2.0.0 includes the functionality to specify any algorithm of choice, but comes out of the box with two implementations of Maxent: maxnet [(Phillips *et al.* 2017)](https://onlinelibrary.wiley.com/doi/full/10.1111/ecog.03049) from the [maxnet R package](https://cran.r-project.org/package=maxnet) and the Java software maxent.jar [(Phillips *et al.* 2006)](https://doi.org/10.1016/j.ecolmodel.2005.03.026), available [here](http://biodiversityinformatics.amnh.org/open_source/maxent/), as well as BIOCLIM implemented with the [dismo R package](https://cran.r-project.org/package=dismo). \n\nModel tuning refers to the process of building models with varying complexity settings, then choosing optimal settings based on some criteria. As it is difficult to predict in advance what level of complexity best fits your data and results in the most ecologically realistic response for your species, model tuning and evaluations are essential for ENM studies. This process helps researchers maximize predictive ability and avoid overfitting with models that are too complex. \n\nFor a more detailed description of version >=2.0.0, please reference the new publication in Methods in Ecology and Evolution:\n\n[Kass, J. 
M., Muscarella, R., Galante, P. J., Bohl, C., Pinilla-Buitrago, G. E., Boria, R. A., Soley-Guardia, M., & Anderson, R. P. (2021). ENMeval 2.0: redesigned for customizable and reproducible modeling of species\xe2\x80\x99 niches and distributions. Methods in Ecology and Evolution, 12: 1602-1608.](https://doi.org/10.1111/2041-210X.13628)\n\nFor the original package version, please reference the 2014 publication in Methods in Ecology and Evolution:\n\n[Muscarella, R., Galante, P. J., Soley-Guardia, M., Boria, R. A., Kass, J. M., Uriarte, M. and Anderson, R. P. (2014), ENMeval: An R package for conducting spatially independent evaluations and estimating optimal model complexity for Maxent ecological niche models. Methods in Ecology and Evolution, 5: 1198\xe2\x80\x931205.](https://doi.org/10.1111/2041-210X.12261)\n\nNOTES:\n\n1. The vignette is not included in the CRAN version of the package due to file size constraints, but is [available](https://jamiemkass.github.io/ENMeval/articles/ENMeval-2.0.0-vignette.html) on the package\'s Github Pages website. \n\n2. Please make sure to use the most recent version of [maxent.jar](https://biodiversityinformatics.amnh.org/open_source/maxent/) (currently 3.4.4), as recent bug fixes were made.\n\n3. Note that as of version 0.3.0, the default implementation uses the [\'maxnet\' R package](https://cran.r-project.org/package=maxnet). The output from this differs from that of the original Java program and so some features are not compatible (e.g., variable importance, html output).\n'",",https://doi.org/10.1016/j.ecolmodel.2005.03.026,https://doi.org/10.1111/2041-210X.13628,https://doi.org/10.1111/2041-210X.12261","2015/01/26, 14:18:11",3194,GPL-3.0,3,723,"2023/04/14, 00:11:51",17,22,131,6,195,2,0.0,0.3719879518072289,"2021/05/17, 21:35:38",v2.0.0,0,5,false,,false,false,,,,,,,,,,, BioDiversityHub BC,The source of British Columbia's species inventory data.,bcgov,https://github.com/bcgov/biohubbc.git,github,"biodiversity,species,env,flnr,flnro,react,material-ui,biohub",Biodiversity and Species Distribution,"2023/10/24, 23:57:28",13,0,1,true,TypeScript,Province of British Columbia,bcgov,"TypeScript,Shell,PLpgSQL,JavaScript,Makefile,Dockerfile,SCSS,Python,HTML",https://apps.nrs.gov.bc.ca/int/confluence/pages/viewpage.action?pageId=75599180,"b'[![img](https://img.shields.io/badge/Lifecycle-Experimental-339999)](https://github.com/bcgov/repomountie/blob/master/doc/lifecycle-badges.md) [![Quality Gate Status](https://sonarcloud.io/api/project_badges/measure?project=bcgov_biohubbc&metric=alert_status)](https://sonarcloud.io/dashboard?id=bcgov_biohubbc) [![codecov](https://codecov.io/gh/bcgov/biohubbc/branch/dev/graph/badge.svg?token=CF2ZR3T3U2)](https://codecov.io/gh/bcgov/biohubbc) [![BioHubBC](https://img.shields.io/endpoint?url=https://dashboard.cypress.io/badge/simple/w8oxci/dev&style=flat&logo=cypress)](https://dashboard.cypress.io/projects/w8oxci/runs)\n\n# BioDiversityHub BC - updated\n\nSub-project under the SEISM Capital project, the source of BC\xe2\x80\x99s species inventory data.\n\nThe objectives for the SIMS project are:\n\n- To provide a single source for aquatic and terrestrial species and habitat data.\n- To reduce the barriers for collecting and sharing aquatic and terrestrial species and habitat data throughout the province of British Columbia.\n- To reduce the effort involved with managing aquatic and terrestrial species and habitat data.\n- To improve access for all stakeholders to the aquatic and terrestrial species and habitat data needed to make 
informed decisions and policies for the province.\n\n# Pre-reqs\n\n## Install Node/NPM\n\n- Requires Node version 12+\n- https://nodejs.org/en/download/\n\n## Install Git\n\n- https://git-scm.com/downloads\n\n### Clone the repo\n\n- `git clone https://github.com/bcgov/biohubbc.git`\n\n## Install Docker\n\n- https://www.docker.com/products/docker-desktop\n\n### Windows\n\n_Note: there are 2 mutually exclusive modes that Docker Desktop supports on Windows: Hyper-V or WSL2. You should be able to run the application in either mode, but this documentation was only written with instructions for Hyper-V. See https://code.visualstudio.com/blogs/2020/03/02/docker-in-wsl2 for possible instructions on using Docker Desktop in WSL2._\n\nIf prompted, install Docker using Hyper-V (not WSL 2)\n\n### Grant Docker access to your local folders\n\nThis setup uses volumes to support live reload. \nEnsure Docker Desktop has access to your file system so that it can detect file changes and trigger live reload.\n\n#### MacOS\n\n- In the Docker-Desktop app:\n - Go to settings (gear icon)\n - Now go to Resources\n - Go to File Sharing\n - Add the folder/drive your repo is cloned under\n - This will grant Docker access to the files under it, which is necessary for it to detect file changes.\n\n#### Windows\n\n- In the Docker-Desktop app:\n - Go to settings (gear icon)\n - On the general tab ensure that the `Use the WSL 2 based engine` is unchecked.\n - If it is checked, uncheck it, and click `Apply & Restart`\n - Docker may crash, but that is fine, you can kill docker for now.\n - You will then need to go to the following URL and follow the instructions in the first section `Enable Hyper-V using Powershell`: https://docs.microsoft.com/en-us/virtualization/hyper-v-on-windows/quick-start/enable-hyper-v\n - This should just consist of running the 1 command in Powershell (as Admin): `Enable-WindowsOptionalFeature -Online -FeatureName Microsoft-Hyper-V -All`\n - Once the powershell command has been run, it will ask you to restart your machine.\n - Once restarted, re-launch Docker, and check that docker starts successfully and that the `Use the WSL 2 based engine` setting is still unchecked\n - Go to settings (gear icon)\n - Now go to Resources\n - Go to File Sharing\n - Add the folder/drive your repo is cloned under\n - This will grant Docker access to the files under it, which is necessary for it to detect file changes.\n\n## Ensure you can run the `make` command\n\n### MacOS\n\n- Install make: `brew install make`\n - https://formulae.brew.sh/formula/make\n\n### Windows\n\n- Install chocolatey: https://chocolatey.org/install#install-step2\n- Install make: `choco install make`\n - https://community.chocolatey.org/packages/make\n\n_Note: you will need to run choco commands in a terminal as administrator_\n\n# Configuring Local IDE\n\nYou can use any code IDE you prefer, though VSCode is recommended.\n\n- https://code.visualstudio.com/\n\nFor convenience, you can install all node_modules by running the following command from the repo\'s root folder.\n\n```\nmake install\n```\n\nYou can also manually run `npm install` in each of the `/api/`, `/app/`, and `/database/` folders.\n\n# Building/Running the App\n\n_Note: Run all make commands from the root folder of the repo._\n\n_Note: Run all commands in a terminal that supports make. On Mac you can use the default `Terminal`, on Windows you can use `git-bash`._\n\n## Initialize the `./env` file.\n\nThis will copy `./env_config/env.docker` to `./.env`. 
\nThis should only need to be run once. \nThis file may need additional editing to provide secrets for external services (like S3).\n\n```\nmake env\n```\n\nResult of running `make env` for the first time: \n![make env screenshot](README/images/make/running_make_env.png ""Running `make env`"")\n\n## Start all Applications\n\nStarts all applications (database, api, app).\n\n```\nmake web\n```\n\nResult of running `make web` (condensed to only show the important parts): \n![make web screenshot](README/images/make/running_make_start.png ""Running `make web`"")\n\n## Access the Running Applications\n\napi:\n\n- `localhost:6100/api/`\n\napp:\n\n- `localhost:7100`\n\n# Helpful Makefile Commands\n\nSee `./Makefile` for all available commands.\n\n_Note: Run all make commands from the root folder of the repo._\n\n## Print Makefile Help Doc\n\n```\nmake help\n```\n\n## Install All Dependencies\n\nWill run `npm install` in each of the project folders (api, app, database).\n\n```\nmake install\n```\n\n## Delete All Containers\n\nWill stop and delete the application docker containers. \nThis is useful when you want to clear out all database content, returning it to its initial default state. \nAfter you\'ve run `make clean`, running `make web` will launch new containers, with a fresh instance of the database.\n\n```\nmake clean\n```\n\n## View the logs for a container\n\n### API\n\n```\nmake log-api\n```\n\n### APP\n\n```\nmake log-app\n```\n\n### Database\n\n```\nmake log-db\n```\n\n### Database Setup (migrations + seeding)\n\n```\nmake log-db-setup\n```\n\n## Run Linter and Fix Issues\n\nWill run the project\'s code linter and attempt to fix all issues found.\n\n_Note: Not all formatting issues can be auto-fixed._\n\n```\nmake lint-fix\n```\n\n## Run Formatter and Fix Issues\n\nWill run the project\'s code formatter and attempt to fix all issues found.\n\n_Note: Not all formatting issues can be auto-fixed._\n\n```\nmake format-fix\n```\n\n## Shell Into a Docker Container (database, api, app, etc)\n\nSee `./Makefile` for all available commands.\n\n### Database\n\nThis is useful if you want to access the PSQL database through the CLI. \nSee [DBeaver](#dbeaver) for a GUI-centric way of accessing the PSQL database.\n\n```\nmake db-container\n```\n\n# Helpful Docker Commands\n\n## Show all running containers\n\n```\ndocker ps\n```\n\n## Show all containers (running and closed)\n\n```\ndocker ps -a\n```\n\nWhat a successfully built/run set of docker containers looks like:\n![make web screenshot](README/images/make/running_docker_ps_-a.png ""Running `docker ps -a`"")\n\n_Note: The exited container is correct, as that container executes database migrations and then closes_\n\n## View the logs for a container\n\n`docker logs <container-id>` \nInclude `-f` to ""follow"" the container logs, showing logs in real time\n\n## Prune Docker Artifacts\n\nOver a long period of time, Docker can run out of storage space. When this happens, Docker will log a message indicating it is out of space.\n\nThe command below will delete Docker artifacts to recover hard-drive space.\n\nSee [documentation](https://docs.docker.com/engine/reference/commandline/system_prune/) for OPTIONS.\n\n```\ndocker system prune [OPTIONS]\n```\n\n# Troubleshooting\n\n## Make Issues\n\nIf you get an error saying the `make` command is not found, you may need to install it first. 
\nSee [Ensure you can run the make command](#ensure-you-can-run-the-make-command)\n\n## Docker Service Issues\n\n### ENV\n\nA Docker service can fail if required environment variables can\'t be found. \nDouble check that your `.env` has the latest variables from `env.docker`, which may have been updated.\n\n## Docker Timezone Issue\n\nWhile trying to run a make command such as `make web`, if you encounter an issue along the lines of:\n\n```\nE: Release file for http://deb.debian.org/debian/dists/buster-updates/InRelease is not valid yet (invalid for another 1d 1h 5min 13s). Updates for this repository will not be applied.\n```\n\nit may be possible that your system clock is out of date or not synced (the Dockerfile timezone has to match your machine\'s timezone).\nIn this case, make sure your timezone is correct and matches that of Docker, then restart your machine/terminal window and try again.\n\n## Database Container Won\'t Start\n\nIf you already had PSQL installed, it is likely that the default port `5432` is already in use and the instance running in Docker fails because it can\'t acquire that port.\n\n- You can either stop the existing PSQL service, freeing up the port for Docker\'s use.\n- Or alter the `DB_PORT` environment variable in `.env` to something not in use (ex: `5433`).\n - You will likely need to run `make clean` and `make web` to ensure the containers are re-built with the new variables.\n\n# Helpful Tools\n\n## DBeaver\n\nGUI-centric application for viewing/interacting with databases.\n\n- https://dbeaver.io/\n\n### Pre-req\n\n- Install PostgreSQL 12+\n- https://www.postgresql.org/download/\n\n### Add a new connection\n\n- Click New Database Connection (+ icon)\n - Host: localhost\n - Port: 5432\n - Database: biohubbc\n - username: postgres\n - password: postgres\n - user role: (leave empty)\n - local client: PostgreSQL 12\n\n_Note: all of the above connection values can be found in the `.env` file_\n\n# Acknowledgements\n\n[![SonarCloud](https://sonarcloud.io/images/project_badges/sonarcloud-black.svg)](https://sonarcloud.io/dashboard?id=bcgov_biohubbc)\n\n# License\n\n Copyright 2019 Province of British Columbia\n\n Licensed under the Apache License, Version 2.0 (the ""License"");\n you may not use this file except in compliance with the License.\n You may obtain a copy of the License at\n\n http://www.apache.org/licenses/LICENSE-2.0\n\n Unless required by applicable law or agreed to in writing, software\n distributed under the License is distributed on an ""AS IS"" BASIS,\n WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n See the License for the specific language governing permissions and\n limitations under the License.\n'",,"2020/09/15, 20:24:29",1135,Apache-2.0,210,970,"2023/10/24, 23:38:10",4,1125,1137,315,1,4,3.0,0.7716606498194946,"2023/10/17, 22:01:08",v1.0.0-beta-11,0,17,false,,false,true,,,https://github.com/bcgov,https://github.com/bcgov/BC-Policy-Framework-For-GitHub,Canada,,,https://avatars.githubusercontent.com/u/916280?v=4,,, Global Names Verifier,Verifies scientific names against more than 100 biodiversity databases.,gnames,https://github.com/gnames/gnverifier.git,github,"biodiversity,scientific-names,verification,reconciliation,resolution,go,golang,bioinformatics",Biodiversity and Species Distribution,"2023/10/10, 16:30:30",14,2,7,true,Go,gnames,gnames,"Go,HTML,CSS,Makefile,JavaScript,Dockerfile",https://verifier.globalnames.org,"b'# Global Names 
[![DOI](https://zenodo.org/badge/297323648.svg)](https://zenodo.org/badge/latestdoi/297323648)

Try `GNverifier` [online][web-service].

[GNverifier with OpenRefine]

[GNverifier API]

[Feedback]

Takes a scientific name or a list of scientific names and verifies them against
a variety of biodiversity [Data Sources][data_source_ids]. Includes an advanced
search feature.

* [Citing](#citing)
* [Features](#features)
* [Installation](#installation)
  * [Using Homebrew on Mac OS X, Linux, and Linux on Windows ([WSL2])](#using-homebrew-on-mac-os-x-linux-and-linux-on-windows-wsl2)
  * [MS Windows](#ms-windows)
  * [Linux and Mac (without Homebrew)](#linux-and-mac-without-homebrew)
  * [Compile from source](#compile-from-source)
* [Usage](#usage)
  * [As a web service](#as-a-web-service)
  * [As a RESTful API](#as-a-restful-api)
  * [One name-string](#one-name-string)
  * [Many name-strings in a file](#many-name-strings-in-a-file)
  * [Advanced search](#advanced-search)
  * [Options and flags](#options-and-flags)
    * [help](#help)
    * [version](#version)
    * [port](#port)
    * [all_matches](#all_matches)
    * [capitalize](#capitalize)
    * [species group](#species-group)
    * [fuzzy-match of uninomial names](#fuzzy-match-of-uninomial-names)
    * [format](#format)
    * [jobs](#jobs)
    * [quiet](#quiet)
    * [sources](#sources)
    * [web-logs](#web-logs)
    * [nsqd-tcp](#nsqd-tcp)
  * [Configuration file](#configuration-file)
  * [Advanced Search Query Language](#advanced-search-query-language)
    * [Examples of searches](#examples-of-searches)
* [Copyright](#copyright)

## Citing

If you want to cite GNverifier, use the [DOI generated by Zenodo][zenodo doi].

## Features

- Small and fast app to verify scientific names against many biodiversity
  databases. The app is a client to a [verifier API].
- It provides 6 different match levels:
  - **Exact**: complete match with a canonical form or a full name-string
    from a data source.
  - **Fuzzy**: if an exact match did not happen, it tries to match name-strings
    assuming spelling errors.
  - **Partial**: strips middle or last epithets from bi- or multi-nomial names
    and tries to match what is left.
  - **PartialFuzzy**: the same as Partial, but assuming spelling mistakes.
  - **Virus**: verification of virus names.
  - **FacetedSearch**: marks [advanced-search](#advanced-search) queries.
- Taxonomic resolution. If a database contains taxonomic information, it
  returns the currently accepted name for the provided name-string.
- Best match is returned according to the match score. Data sources with some
  manual curation have priority over auto-curated and uncurated datasets. For
  example, [Catalogue of Life] and [WoRMS] are considered curated,
  [GBIF] auto-curated, [uBio] not curated.
- Fine-tuning of the match score by matching authors, years, ranks etc.
- It is possible to map any name-strings checklist to any of the registered
  Data Sources.
- If a Data Source provides a classification for a name, it will be returned in
  the output.
- The app works for checking just one name-string, or multiple ones written in
  a file.
- [Advanced search](#advanced-search) uses a simple but powerful
  [query language](#advanced-search-query-language)
  to find abbreviated names, search by author, year etc.
- Supports feeding data via pipes of the operating system. This feature allows
  you to chain the program together with other tools.
- [GNverifier] includes a web-based graphical user interface identical to its
  "official" [web-service].

## Installation

### Using Homebrew on Mac OS X, Linux, and Linux on Windows ([WSL2])

Homebrew is a popular package manager for Open Source software originally
developed for Mac OS X. It is now also available on Linux, and can easily
be used on Windows 10 or 11 if the Windows Subsystem for Linux (WSL) is
[installed][wsl install].

To use [GNverifier] with Homebrew:

1. Install [Homebrew]

2. Open a terminal and run the following commands:

```bash
brew tap gnames/gn
brew install gnverifier
```

### MS Windows

Download the [latest release] from GitHub and unzip it.

One possible way would be to create a default folder for executables and place
`GNverifier` there.

Use the `Windows+R` key combination and type "`cmd`". In the terminal window that appears, type:

```cmd
mkdir C:\Users\your_username\bin
copy path_to\gnverifier.exe C:\Users\your_username\bin
```

[Add the `C:\Users\your_username\bin` directory to your `PATH`][winpath] `user`
and/or `system` environment variable.

Another, simpler way would be to use the `cd C:\Users\your_username\bin` command
in the `cmd` terminal window. The [GNverifier] program will then be automatically
found by the Windows operating system when you run its commands from that
directory.

You can also read a more detailed guide for Windows users in
[a PDF document][win-pdf].

### Linux and Mac (without Homebrew)

If [Homebrew] is not installed, download the [latest release] from GitHub,
untar it, and install the binary somewhere in your path.

```bash
tar xvf gnverifier-linux-0.1.0.tar.xz
# or tar xvf gnverifier-mac-0.1.0.tar.gz
sudo mv gnverifier /usr/local/bin
```

### Compile from source

Install Go according to the [installation instructions][go-install], then run:

```bash
go get github.com/gnames/gnverifier/gnverifier
```

## Usage

[GNverifier] takes one name-string, or a text file with one name-string per
line, as an argument. It sends a query with these data to a [remote GNames
server][gnames] to match the name-strings against many biodiversity
databases, and returns results to STDOUT in JSON, CSV or TSV format.

The app can also take a query string like
`g:M. sp:galloprovincialis au:Olivier` to perform an advanced search,
if the full scientific name is undetermined.

### As a web service

```bash
gnverifier -p 8080
```

After running this command, you should be able to access the web-based user
interface via a browser at `http://localhost:8080`

### As a RESTful API

Refer to the [RESTful API docs][gnames] to learn how to use the same
functionality via scripts.

### One name-string

```bash
gnverifier "Monohamus galloprovincialis"
```

### Many name-strings in a file

```bash
gnverifier /path/to/names.txt
```

The app assumes that a file contains a simple list of names, one per line.

It is also possible to feed data via STDIN:

```bash
cat /path/to/names.txt | gnverifier
```
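Because STDIN is supported, [GNverifier] also chains naturally with other shell tools. For example (a sketch, assuming a hypothetical `names.csv` whose first column holds the name-strings and whose first row is a header):

```bash
# extract the first column, drop the header row, verify the names as TSV
cut -d',' -f1 names.csv | tail -n +2 | gnverifier -f tsv
```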
### Advanced search

Advanced search allows you to use a simple but powerful query language to find
names by abbreviated genus, a year or a range of years. See the detailed description
in the [Advanced Search Query Language](#advanced-search-query-language) section.

```bash
gnverifier "g:B. sp:bubo au:Linn. y:1700-"
```

### Options and flags

According to the POSIX standard, flags and options can be given either before or
after the name-string or file name.

#### help

```bash
gnverifier -h
# or
gnverifier --help
# or
gnverifier
```

#### version

```bash
gnverifier -V
# or
gnverifier --version
```

#### port

Starts GNverifier as a web service on the given port.

```bash
gnverifier -p 8080
```

This command will run a user interface accessible by a browser
at `http://localhost:8080`

#### all_matches

To see all matches instead of only the best one, use the `--all_matches` flag.

WARNING: for some names the result will be excessively large.

```bash
gnverifier -s '1,12' -M file.txt
```

This flag is ignored by advanced search.

#### capitalize

If your names do not have uninomials or genera capitalized according to the
rules of nomenclature, you can still verify them using this option. If the
`capitalize` flag is set, the first character of every name-string will be
capitalized (when appropriate). This flag is ignored by advanced search.

```bash
gnverifier -c "bubo bubo"
# or
gnverifier --capitalize "bubo bubo"
```

#### species group

If the `species_group` flag is on, a search for `Aus bus` will also search for
`Aus bus bus` and vice versa. This flag expands the search to the species group of
a name if applicable, meaning the search also includes botanical autonyms and
coordinated names in zoology.

```bash
gnverifier -g "Bubo bubo"
gnverifier --species_group "Bubo bubo"
```

#### fuzzy-match of uninomial names

When the `fuzzy_uninomial` flag is on, uninomials are allowed to go through
fuzzy matching, if needed. Normally this flag is off, because fuzzy-matched
uninomials create a significant number of false positives.

```bash
gnverifier -z "Pomatmus"
gnverifier --fuzzy_uninomial "Pomatmus"
```

#### format

Lets you pick a format for the output. Supported formats are:

- compact: one-liner JSON.
- pretty: prettified JSON with new lines and tabs for easier reading.
- tsv: returns tab-separated values representation.
- csv: (DEFAULT) returns comma-separated values representation.

```bash
# short form for compact JSON format
gnverifier -f compact file.txt
# or long form for "pretty" JSON format
gnverifier --format="pretty" file.csv
# tsv format
gnverifier -f tsv file.csv
```

Note that a separate JSON "document" is returned for each record,
instead of one big JSON document for all records. For large lists this
significantly speeds up parsing of the JSON on the user side.

#### jobs

If the list of names is very large, it is possible to tell [GNverifier] to
run requests in parallel. In this example GNverifier will run 8 processes
simultaneously. The order of returned names will be somewhat randomized.

```bash
gnverifier -j 8 file.txt
# or
gnverifier --jobs=8 file.tsv
```

Sometimes it is important to return names in exactly the same order. For such
cases set the `jobs` flag to 1.

```bash
gnverifier -j 1 file.txt
```

This option is ignored by advanced search.

#### quiet

Removes log messages from the output. Note that results of verification go
to STDOUT, while log messages go to STDERR. So instead of using the `-q` flag,
STDERR can be redirected to `/dev/null`:

```bash
gnverifier "Puma concolor" -q >verif-results.csv

# or

gnverifier "Puma concolor" 2>/dev/null >verif-results.csv
```
#### sources

By default [GNverifier] returns only the one "best" result of a match. If a user
has a particular interest in a data set, they can set it with this option, and
all matches that exist for this source will be returned as well. You need to
provide a data source id for a dataset. Ids can be found at the following
[URL][data_source_ids]. Some of them are provided in the GNverifier help
output as well.

Data from such sources will be returned in the preferred_results section of the JSON
output, or in CSV/TSV rows that start with the "PreferredMatch" string.

```bash
gnverifier file.csv -s "1,11,172"
# or
gnverifier file.tsv --sources="12"
# or
cat file.txt | gnverifier -s '1,12'
```

If all matched sources need to be returned, set the flag to "0".

WARNING: the result might be excessively large.

```bash
gnverifier "Bubo bubo" -s 0
# potentially even more results get returned by adding --all_matches flag
gnverifier "Bubo bubo" -s 0 -M
```

The `sources` option overwrites the `ds:` setting in case of advanced search.

#### web-logs

Requires `--port`. Enables output of logs for web-services.

```bash
gnverifier -p 8777 --web-logs
```

#### nsqd-tcp

Requires `--port`. Allows redirecting web-service log output to an [NSQ]
messaging server's TCP-based endpoint. It is handy for aggregating logs
from [GNverifier] web-services running inside Docker containers or in
Kubernetes pods.

```bash
gnverifier -p 8777 --nsqd-tcp=localhost:4150
# with logs printed out
gnverifier -p 8777 --nsqd-tcp=localhost:4150 --with-logs
```

### Configuration file

If you find yourself using the same flags over and over again, it makes sense
to edit the configuration file instead. It is located at
`$HOME/.config/gnverifier.yaml`. After that you do not need to use command line
options and flags. The configuration file is self-documented; the [default
gnverifier.yaml] is located on GitHub.

```bash
gnverifier file.txt
```

In case [GNverifier] runs as a web-based user interface, it is also
possible to use environment variables for configuration.

| Env. Var.               | Configuration      |
| :---------------------- | :----------------- |
| GNV_FORMAT              | Format             |
| GNV_DATA_SOURCES        | DataSources        |
| GNV_WITH_ALL_MATCHES    | WithAllMatches     |
| GNV_WITH_CAPITALIZATION | WithCapitalization |
| GNV_VERIFIER_URL        | VerifierURL        |
| GNV_JOBS                | Jobs               |
| GNV_WEB_LOGS_NSQD_TCP   | WebLogsNsqdTCP     |
| GNV_WITH_WEB_LOGS       | WithWebLogs        |
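As an illustration, a minimal hand-edited `gnverifier.yaml` might look like the sketch below. The key names are inferred from the Configuration column of the table above, and the values are arbitrary examples; treat the self-documented [default gnverifier.yaml] as the authoritative reference.

```yaml
# illustrative sketch only - see the default gnverifier.yaml for real documentation
Format: tsv
DataSources: [1, 12]
WithAllMatches: false
Jobs: 4
```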
### Advanced Search Query Language

Example: `g:M. sp:gallop. au:Oliv. y:1750-1799` or `n:M. gallop. Oliv. 1750-1799`

The query language allows searching for scientific names using name components
like the genus name, specific epithet, infraspecific epithet, author, and year.
It includes the following operators:

`g:`
: Genus name, can be abbreviated (for example `g:Bubo`, `g:B.`).

`sp:`
: Specific epithet, can be abbreviated (for example `sp:galloprovincialis`,
`sp:gallop.`).

`isp:`
: Infraspecific epithet, can be abbreviated (for example `isp:auspicalis`,
`isp:ausp.`).

`asp:`
: Either specific or infraspecific epithet (for example `asp:bubo`).

`au:`
: One of the authors of a name, can be abbreviated (for example `au:Linn.`,
`au:Linnaeus`).

`y:`
: Year. Can be one year, or a year range (for example `y:1888`, `y:1800-1802`,
`y:1756-`, `y:-1880`).

`ds:`
: Limit results to one or more data-sources. Note that the command line `sources`
option, if given, will overwrite this setting (`ds:1,2,172`).

`tx:`
: Parent taxon. Limits results to names that contain a particular higher taxon
in their classification. If `ds:` is given, it uses the classification of the
first data-source in the setting. If `ds:` is not given, it uses the managerial
classification of the Catalogue of Life (`tx:Hemiptera`, `tx:Animalia`,
`tx:Magnoliopsida`).

`all:`
: If true, [GNverifier] will show all results, not only the best ones.
The setting can be `true` or `false` (`all:t`, `all:f`). This setting
also becomes true if the `sources` command line option is set to `0`.

`n:`
: A "name" setting. It allows several query components to be combined
for convenience. Note that it is not a 'real' scientific name, but a shortcut
to enter several settings at once, loosely following the rules of nomenclature
(`n:B. bubo Linn. 1758`). For example, in contrast with GNparser results, it
is possible to have abbreviated specific epithets or a range of
years: `n:Mono. gall. Oliv. 1750-1800`.

Often there are errors in the gender of species epithets. Because of that, the
search will try to detect names in any gender that correspond to the epithet.

The search requires either an `sp:`, `isp:` or `asp:` setting,
or their analogs provided in the `n:` setting.

#### Examples of searches

```text
gnverifier "n:Pom. saltator tx:Animalia y:1750-"

gnverifier "g:Plantago asp:major au:Linn."

gnverifier "g:Cara. isp:daurica ds:1,12"
```

## Copyright

Authors: [Dmitry Mozzherin][dimus]

Copyright © 2020-2023 Dmitry Mozzherin. See [LICENSE] for further details.
See [LICENSE] for further\ndetails.\n\n[catalogue of life]: https://catalogueoflife.org/\n[gbif]: https://www.gbif.org/\n[gnverifier]: https://github.com/gnames/gnverifier\n[homebrew]: https://brew.sh/\n[license]: https://github.com/gnames/gnverifier/blob/master/LICENSE\n[nsq]: https://nsq.io/overview/quick_start.html\n[wsl install]: https://docs.microsoft.com/en-us/windows/wsl/install-win10\n[wsl2]: https://docs.microsoft.com/en-us/windows/wsl/install\n[worms]: https://marinespecies.org/\n[zenodo doi]: https://zenodo.org/badge/latestdoi/297323648\n[data_source_ids]: https://verifier.globalnames.org/data_sources\n[default gnverifier.yaml]: https://github.com/gnames/gnverifier/blob/master/gnverifier/cmd/gnverifier.yaml\n[dimus]: https://github.com/dimus\n[latest release]: https://github.com/gnames/gnverifier/releases/latest\n[gnames]: https://apidoc.globalnames.org/gnames\n[go-install]: https://golang.org/doc/install\n[test directory]: https://github.com/gnames/gnverifier/tree/master/testdata\n[ubio]: https://ubio.org/\n[verifier api]: https://apidoc.globalnames.org/gnames\n[web-service]: https://verifier.globalnames.org\n[win-pdf]: https://github.com/gnames/gnverifier/blob/master/use-gnverifier-windows.pdf\n[winpath]: https://www.computerhope.com/issues/ch000549.htm\n[GNverifier with OpenRefine]: https://github.com/gnames/gnverifier/wiki/OpenRefine-readme\n[GNverifier API]: https://apidoc.globalnames.org/gnames\n[Feedback]: https://github.com/gnames/gnverifier/issues\n'",",https://zenodo.org/badge/latestdoi/297323648,https://zenodo.org/badge/latestdoi/297323648\n","2020/09/21, 11:49:47",1129,MIT,12,132,"2023/05/30, 16:59:16",6,0,95,9,148,0,0,0.0,"2023/10/10, 16:29:48",v1.1.3,0,1,false,,false,false,"gnames/bhlindex,gnames/gnfinder",,https://github.com/gnames,,,,,https://avatars.githubusercontent.com/u/11817407?v=4,,, DISPLACE_GUI,A Scientific Research Software for Spatial Fisheries and Natural Resource Management.,frabas,https://github.com/frabas/DISPLACE_GUI.git,github,"fisheries,sustainability,ocean,impact-assessment,bio-economics,marine-resources,marine-traffics,aquaculture",Biodiversity and Species Distribution,"2023/07/03, 14:21:00",8,0,2,true,C,,,"C,C++,CMake,Python,QMake,R,Shell,Inno Setup,AppleScript,Batchfile",https://displace-project.org/,"b'\nThis branch is a copy of master commit b37cac9, created on 08/04/2021. I will implement the following changes, all realted to SSM assumptions:\n\n- beta formula in main.cpp around l1082 (change the forcing DONE AND CORRECTED WEIRD INDEXING< TO BE PROOFREAD)\n- background mortality in Node.cpp around l1461 (change the forcing DONE)\n- predkernel formula main.cpp around l1085 (update DONE)\n\n\n\nFind your way with DISPLACE\n======\n\n## Summary\n- [What is for?](#what-is-for)\n- [How to contribute](#how-to-contribute)\n- [How to install](#how-to-install)\n- [Quick start for running a DISPLACE simulation](#quick-start-for-running-a-displace-simulation)\n- [Simulation output formats](#simulation-output-formats)\n- [How to build from scratch](#how-to-build-from-scratch)\n- [DISPLACE doxygen documentation](#displace-doxygen-documentation)\n- [GDAL notes](#gdal-notes)\n- [Unit testing](#unit-testing)\n- [Build a new case study](#build-a-new-case-study)\n- [How to cite](#how-to-cite)\n\n\n\n## What is for?\n\n\nDISPLACE is a dynamic, individual-based model for spatial fishing planning and effort displacement \nintegrating underlying fish population models. 
The DISPLACE project develops and provides a
platform, primarily for research purposes, to transform fishermen's detailed knowledge into models,
evaluation tools, and methods that can provide the fisheries with research and advice.
By conducting an ecological and socio-economic impact assessment, the model intends to serve as an aid to decision-making for (fishery) managers.
An impact assessment will help answer what the impacts of the different policy options are and who will be affected.
By quantifying the effects, the assessment will measure how the different options compare, for example
how differently the options perform in achieving the objective(s) of the policy.

## How to contribute

[Please read our CONTRIBUTING statement here](CONTRIBUTING.md)


## How to install DISPLACE


Look at the [Release section](https://github.com/frabas/DISPLACE_GUI/releases) on this GitHub repository
to download an installer for Windows. Alternatively, look at the
[google drive for DISPLACE](https://drive.google.com/drive/folders/0ByuO_4j-1PxtfnZBblpQNmh2a2Z4SmpkRC16T1kwR0t1RWUyOVUxdHlEZzZwZWVpaVJac00)
for Unix or MacOSX packages; it also hosts additional files, e.g. possible dependencies and the DISPLACE software development kit.


### Install on Windows

Launch the installer application and follow the guide. There are no prerequisites on Windows, and the application
should work out of the box.

### Install on MacOS

Open the DMG file, then drop the program in the Application folder.
There are no prerequisites on MacOS, and the application should work out of the box.

### Install on Ubuntu Linux

Ubuntu 18.04 LTS has a few prerequisites that must be installed before installing the displace package itself.

Run the following command to install the prerequisites:

```bash
$ sudo apt install libgdal20 libgdal-dev libcgal13 libcgal-dev libboost1.65-all-dev libgeographic17 libqt5gui5 libqt5widgets5 libqt5xml5
```

Then install the `msqlitecpp` packages provided in the download section:

```bash
$ sudo dpkg -i msqlitecpp0_0.9.4_amd64.deb msqlitecpp-dev_0.9.4_amd64.deb
```

Finally, install the displace package:

```bash
$ sudo dpkg -i displace_0.9.22_amd64.deb
```

If you have any difficulty, try fixing the package dependencies by running:

```bash
$ sudo apt --fix-broken install
```

Any missing package should be automatically installed.

## How to compile from the code source


[compiling with CMake (preferred)](docs/Building-cmake.md)

[compiling on HPC (simulator only)](docs/Building-on-hpc.md)

[making the displace sdk (optional)](docs/building.md)

[compiling on Windows (deprecated)](docs/Building.win)

[compiling on Unix (deprecated)](docs/Building.unix)

[compiling on MacOSX (deprecated)](docs/Building.MacOSX)


## DISPLACE doxygen documentation

Can be found [here](https://frabas.github.io/DISPLACE_GUI/doxygen/doc/html/index.html)

The procedure for updating the doxygen documentation without keeping track of git history is [here](https://github.com/robotology/how-to-document-modules/blob/master/README.md)

## Quick start for running a basic DISPLACE simulation

[DISPLACE Example datasets](https://displace-project.org/blog/download/) are available for download.
You need to unzip the downloaded file to a folder that names the dataset with the pattern DISPLACE_input_xx,
for example DISPLACE_input_minitest, which is the minimal dataset typically used for demonstration purposes.


Run DISPLACE with e.g. `displacegui`.
By default, Model Objects is set to 4. If you want to run a scenario, first make sure your Model Objects is set to [0].


![alt text](https://github.com/frabas/DISPLACE_GUI/blob/master/docs/quickstart_select_model_0.png)


If so, then in the main menu do a "File" > "Load a Scenario Model",


![alt text](https://github.com/frabas/DISPLACE_GUI/blob/master/docs/quickstart_load_scenario_file.png)


and choose a scenario file (a .dat file) you'll find in the \simusspe subfolder of your DISPLACE dataset.
Select the file, click Ok and wait to see the DISPLACE graph plotted on the map.


![alt text](https://github.com/frabas/DISPLACE_GUI/blob/master/docs/quickstart_scenario_file_is.png)


You can now click Start in the DISPLACE command panel for a DISPLACE simulation to start:


![alt text](https://github.com/frabas/DISPLACE_GUI/blob/master/docs/quickstart_click_start.png)

After some object creation and initialization, the time step window will shortly update and the simulation will run to the end time step.
By default, 8762 hourly time steps will be simulated, which is approximately the number of hours in one year.
In the Setup menu, the total number of time steps can be changed to up to 52586 for a maximal 6-year horizon.
Because of the computation time, running simulations longer than one year, or many replicates, is better done on an HPC cluster.
Automated shell scripts to run many DISPLACE simulations in parallel on an HPC cluster can be provided on request.

## Simulation output formats


Look at the [description](https://github.com/frabas/DISPLACE_GUI/blob/master/docs/output_fileformats.md) of the list of files produced by a DISPLACE simulation.
A [displaceplot](https://github.com/frabas/displaceplot/releases) R package has been developed to handle these output text files and produce some plots out of them.
Simulation outcomes are also exported as a SQLite database which can be re-loaded within DISPLACE in a Replay mode. The internal structure and simulated data
can be further retrieved from the database using an external SQLite DB browser.

To load a result database into DISPLACE:



![alt text](https://github.com/frabas/DISPLACE_GUI/blob/master/docs/quickstart_load_db_menu.png)

Select a DISPLACE db file:

![alt text](https://github.com/frabas/DISPLACE_GUI/blob/master/docs/quickstart_load_db_file.png)

Once loaded, the simulation can be replayed with the Replay command:

![alt text](https://github.com/frabas/DISPLACE_GUI/blob/master/docs/quickstart_replay_command.png)




## GDAL Notes


The current version of QMapControl supports shapefile loading ONLY for the WGS84 coordinate system. This is because QMapControl itself uses WGS84 coordinates.
So you need to convert your shapefiles if they do not use this CRS.
You can use ogr2ogr or the provided script in the scripts/ directory.

```bash
$ ogr2ogr -t_srs WGS84 DEST.shp SRC.shp
```

Note that the DEST file is specified before the source!
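To convert many shapefiles at once, ogr2ogr can be wrapped in a small shell loop (a sketch; the `wgs84_` output prefix is only an illustrative naming choice):

```bash
# reproject every shapefile in the current directory to WGS84
for f in *.shp; do
  ogr2ogr -t_srs WGS84 "wgs84_${f}" "$f"
done
```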
## Unit testing


Unit testing is performed using the Boost::Test framework. It can be compiled and linked in two ways:

- Dynamically linked, using the system-installed boost::test library. This option is enabled by default on Unix.
- Compiled in, using boost/test/included/unit_test.hpp (included in main.cpp). This is suitable for Windows, where boost::test is not available with our version of mingw64. A standard installation of boost::test is required.

The two methods can be selected by defining boost_test_included in the CONFIG line of the Qt Project. It is enabled by default on Windows (see localconfig.pri).

If boost::test is not available in any form, it can be disabled by removing the units-test option from the CONFIG variable in the pro file.

## Build a new case study

A set of [R routines](https://github.com/frabas/DISPLACE_R_inputs) is available to create a new DISPLACE case study from scratch. These routines are quite data-hungry
and need to be adapted to each case. Ideally, an R package will soon be built to wrap them into a consistent tool.
Another way is to scrutinize the DISPLACE_input_minitest github repository to see all the input files required
to run a minimal case study. Because it can currently be tricky to understand the logic behind all the input text files,
we may one day build a single input database for DISPLACE instead of using so many individual text files.

## How to cite

* Bastardie F, Nielsen JR, Miethe T. 2014. DISPLACE: a dynamic, individual-based model for
spatial fishing planning and effort displacement - integrating underlying
fish population models. Canadian Journal of Fisheries and Aquatic Sciences. 71(3):366-386. [link here](https://www.nrcresearchpress.com/doi/full/10.1139/cjfas-2013-0126#.XJs-ubh7nmE)

* Bastardie, F., Nielsen, J. R., Eigaard, O. R., Fock, H. O., Jonsson, P., and Bartolino, V.
Competition for marine space: modelling the Baltic Sea fisheries and effort displacement
under spatial restrictions. ICES Journal of Marine Science, doi: 10.1093/icesjms/fsu215. [link to a free copy](https://academic.oup.com/icesjms/article/72/3/824/701817)

* Bastardie, F., Nielsen, J. R., Eero, M., Fuga, F., Rindorf, A. 2017. Effects of changes
in stock productivity and mixing on sustainable fishing and economic viability,
ICES Journal of Marine Science, Volume 74, Issue 2, Pages 535–551
[link to a free copy](https://academic.oup.com/icesjms/article/74/2/535/2669542)

* Bastardie, F., Angelini, S., Bolognini, L., Fuga, F., Manfredi, C., Martinelli, M.,
Nielsen, J. R., Santojanni, A., Scarcella, G., and Grati, F. 2017.
Spatial planning for fisheries in the Northern Adriatic: working toward viable and sustainable fishing.
Ecosphere 8(2):e01696. 
[link to a free copy](https://esajournals.onlinelibrary.wiley.com/doi/full/10.1002/ecs2.1696) \n\n'",,"2014/09/09, 12:17:51",3333,GPL-2.0,38,3431,"2021/12/13, 07:38:26",23,70,117,0,681,0,0.0,0.3551980198019802,"2023/04/05, 08:32:56",v1.1.6,0,3,false,,false,true,,,,,,,,,,, GBIF Alert,A GBIF-based early alert system for invasive species.,riparias,https://github.com/riparias/gbif-alert.git,github,"biodiversity,biodiversity-data,biodiversity-informatics,django,gbif,invasive-species,webapp",Biodiversity and Species Distribution,"2023/10/02, 08:14:51",4,0,3,true,Python,LIFE RIPARIAS,riparias,"Python,Vue,HTML,TypeScript,JavaScript,Scheme,Dockerfile,CSS,Shell",https://gbif-alert-demo.thebinaryforest.net/,"b'# GBIF Alert\n\n\n[![Django CI](https://github.com/riparias/gbif-alert/actions/workflows/django_tests.yml/badge.svg)](https://github.com/riparias/gbif-alert/actions/workflows/django_tests.yml)\n[![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT)\n[![Automatic deployment - demo server](https://github.com/riparias/gbif-alert/actions/workflows/deploy_demo.yml/badge.svg?branch=devel)](https://github.com/riparias/gbif-alert/actions/workflows/deploy_demo.yml)\n\n\nGBIF Alert is a [GBIF](https://www.gbif.org)-based early alert system for invasive species.\n\nIt is a reusable website engine powered by [Django](https://www.djangoproject.com/) available under the [MIT license](LICENSE).\nContributions are welcome! See [CONTRIBUTING.md](CONTRIBUTING.md) for more information.\n\n## Getting started\n\nGBIF Alert allows you to monitor a list of species, and be notified of new occurrences on GBIF via email.\n\nMultiple websites using GBIF alert (called *instances*) exists, in order to target different communities:\n\n- **You are an end-user that just want to be informed of new occurrence in the GBIF network?** Join [an existing instance](#user-content-gbif-alert-instances-in-the-wild) that covers your area and species of interest, register and start configuring your alerts! Here is a demonstration video: https://www.youtube.com/watch?v=bixaTGRIZ4A\n\n- **You have more technical knowledge and want to install your own instance of GBIF Alert?** No problem: GBIF Alert is fully configurable, and we provide facilities to make it easy to install and deploy. \nSee [INSTALL.md](INSTALL.md) for more information.\n\n## GBIF Alert instances in the wild\n\n- LIFE RIPARIAS Early Alert: [production](https://alert.riparias.be) / [development](https://dev-alert.riparias.be) (Targets riparian invasive species in Belgium)\n- [GBIF Alert demo instance](https://gbif-alert-demo.thebinaryforest.net/) (Always in sync with the `devel` branch of this repository)'",,"2021/08/25, 09:58:51",791,MIT,327,764,"2023/10/19, 10:22:25",54,12,212,113,6,0,0.0,0.03821656050955413,,,0,2,false,,false,true,,,https://github.com/riparias,https://riparias.be,Belgium,,,https://avatars.githubusercontent.com/u/85938012?v=4,,, python-dwca-reader,"A Python package to read and parse Darwin Core Archive (DwC-A) files, as produced by the GBIF website, the IPT and many other biodiversity informatics tools.",BelgianBiodiversityPlatform,https://github.com/BelgianBiodiversityPlatform/python-dwca-reader.git,github,"biodiversity-informatics,python,biodiversity,biodiversity-standards,gbif,dwc",Biodiversity and Species Distribution,"2023/10/02, 11:42:29",41,27,3,true,Python,Belgian Biodiversity Platform,BelgianBiodiversityPlatform,"Python,Shell",,"b'.. 
image:: https://badge.fury.io/py/python-dwca-reader.svg\n :target: https://badge.fury.io/py/python-dwca-reader\n\n.. image:: https://github.com/BelgianBiodiversityPlatform/python-dwca-reader/workflows/run-unit-tests/badge.svg\n :alt: run-unit-tests\n\n.. image:: https://readthedocs.org/projects/python-dwca-reader/badge/?version=latest\n\t:target: http://python-dwca-reader.readthedocs.org/en/latest/?badge=latest\n\t:alt: Documentation Status\n\nWhat is it ?\n============\n\nA simple Python class to read `Darwin Core Archive`_ (DwC-A) files, including data downloads from `GBIF`_.\n\nDocumentation\n-------------\n\nThe documentation has moved to: http://python-dwca-reader.readthedocs.org/.\n\n.. _Darwin Core Archive: http://en.wikipedia.org/wiki/Darwin_Core_Archive\n.. _GBIF: https://www.gbif.org\n'",,"2013/03/22, 12:40:30",3869,BSD-3-Clause,2,397,"2022/08/01, 12:46:24",24,14,74,1,450,2,0.0,0.07272727272727275,,,0,6,false,,true,false,"biodiversity-aq/biodiversity-aq-portal,tahiri-lab/py_madaclim,DjordjeR/flora-plant-service,RolnickLab/gbif-species-trainer,pieterprovoost/early-warning-webapp,FelipeSBarros/pyinaturalist-convert,LoricAndre/BigData2022,AKSW/life-knowledge-base,biodiversity-aq/data.biodiversity.aq,IRDG2OI/geoenrich,sveppalicious/Birds,USF-IMARS/obis_notebooks,riparias/gbif-alert,pyinat/pyinaturalist-notebook,pyinat/pyinaturalist-convert,Jesssullivan/mo-image-identifier,scan-bugs-org/dwca-splitter,kartoza/django-bims,plantnet/gbif-dl,inbo/mica-dashboard,BelgianBiodiversityPlatform/ifbl,Datafable/gbif-dataset-metrics,tbhi/bioacoustica_commons,BelgianBiodiversityPlatform/meuh,BelgianBiodiversityPlatform/www.formicidae-atlas.be,kurator-org/kurator-validation,inbo/pywhip",,https://github.com/BelgianBiodiversityPlatform,http://www.biodiversity.be,"Brussels, Belgium",,,https://avatars.githubusercontent.com/u/3726126?v=4,,, BirdNET-Analyzer,A deep learning solution for avian diversity monitoring.,kahst,https://github.com/kahst/BirdNET-Analyzer.git,github,"bioacoustics,birds,birdsong,acoustic-monitoring,deep-learning",Biodiversity and Species Distribution,"2023/10/20, 17:31:49",489,0,245,true,Python,,,"Python,Dockerfile",,"b'//***********************************************\n//***************** SETTINGS ********************\n//***********************************************\n\n:doctype: book\n:use-link-attrs:\n:linkattrs:\n\n// Github Icons\nifdef::env-github[]\n:tip-caption: :bulb:\n:note-caption: :information_source:\n:important-caption: :heavy_exclamation_mark:\n:caution-caption: :fire:\n:warning-caption: :warning:\nendif::[]\n\n// Table of Contents\n:toc:\n:toclevels: 2\n:toc-title: \n:toc-placement!:\n:sectanchors:\n\n// Numbered sections\n:sectnums:\n:sectnumlevels: 2\n\n// Links\n:cc-by-nc-sa: http://creativecommons.org/licenses/by-nc-sa/4.0/\n\n//************* END OF SETTINGS ******************\n//************************************************\n\n\n// Header\n++++\n
BirdNET-Analyzer

Automated scientific audio data processing and bird ID.
\n++++\n\n// Badges\n:license-badge: https://badgen.net/badge/License/CC-BY-NC-SA%204.0/green\n:os-badge: https://badgen.net/badge/OS/Linux%2C%20Windows%2C%20macOS/blue\n:species-badge: https://badgen.net/badge/Species/6512/blue\n:twitter-badge: https://img.shields.io/twitter/follow/BirdNET_App\n:reddit-badge: https://img.shields.io/reddit/subreddit-subscribers/BirdNET_Analyzer?style=social\n// Mail icon from FontAwesome\n:mail-badge: https://img.shields.io/badge/Mail us!-ccb--birdnet%40cornell.edu-yellow.svg?style=social&logo=data:image/svg%2bxml;base64,PHN2ZyB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciIHZpZXdCb3g9IjAgMCA1MTIgNTEyIj48IS0tISBGb250IEF3ZXNvbWUgUHJvIDYuNC4wIGJ5IEBmb250YXdlc29tZSAtIGh0dHBzOi8vZm9udGF3ZXNvbWUuY29tIExpY2Vuc2UgLSBodHRwczovL2ZvbnRhd2Vzb21lLmNvbS9saWNlbnNlIChDb21tZXJjaWFsIExpY2Vuc2UpIENvcHlyaWdodCAyMDIzIEZvbnRpY29ucywgSW5jLiAtLT48cGF0aCBkPSJNNjQgMTEyYy04LjggMC0xNiA3LjItMTYgMTZ2MjIuMUwyMjAuNSAyOTEuN2MyMC43IDE3IDUwLjQgMTcgNzEuMSAwTDQ2NCAxNTAuMVYxMjhjMC04LjgtNy4yLTE2LTE2LTE2SDY0ek00OCAyMTIuMlYzODRjMCA4LjggNy4yIDE2IDE2IDE2SDQ0OGM4LjggMCAxNi03LjIgMTYtMTZWMjEyLjJMMzIyIDMyOC44Yy0zOC40IDMxLjUtOTMuNyAzMS41LTEzMiAwTDQ4IDIxMi4yek0wIDEyOEMwIDkyLjcgMjguNyA2NCA2NCA2NEg0NDhjMzUuMyAwIDY0IDI4LjcgNjQgNjRWMzg0YzAgMzUuMy0yOC43IDY0LTY0IDY0SDY0Yy0zNS4zIDAtNjQtMjguNy02NC02NFYxMjh6Ii8+PC9zdmc+\n\nimage:{license-badge}[CC BY-NC-SA 4.0, link={cc-by-nc-sa}]\nimage:{os-badge}[Supported OS, link=""""]\nimage:{species-badge}[Number of species, link=""""]\n\n[.text-center]\nimage:{mail-badge}[Email, link=mailto:ccb-birdnet@cornell.edu, height=25]\nimage:https://img.shields.io/twitter/follow/BirdNET_App[Twitter Follow, link=https://twitter.com/BirdNET_App, height=25]\nimage:{reddit-badge}[Subreddit subscribers, link=""https://reddit.com/r/BirdNET_Analyzer"", height=25]\n\n++++\n
++++

[discrete]
== Introduction

This repo contains BirdNET models and scripts for processing large amounts of audio data or single audio files.
This is the most advanced version of BirdNET for acoustic analyses, and we will keep this repository up-to-date with new models and improved interfaces to enable scientists with no CS background to run the analysis.

Feel free to use BirdNET for your acoustic analyses and research.
If you do, please cite as:

----
@article{kahl2021birdnet,
  title={BirdNET: A deep learning solution for avian diversity monitoring},
  author={Kahl, Stefan and Wood, Connor M and Eibl, Maximilian and Klinck, Holger},
  journal={Ecological Informatics},
  volume={61},
  pages={101236},
  year={2021},
  publisher={Elsevier}
}
----

This work is licensed under a {cc-by-nc-sa}[Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License].

[discrete]
== About

Developed by the https://www.birds.cornell.edu/ccb/[K. Lisa Yang Center for Conservation Bioacoustics] at the https://www.birds.cornell.edu/home[Cornell Lab of Ornithology] in collaboration with https://www.tu-chemnitz.de/index.html.en[Chemnitz University of Technology].

Go to https://birdnet.cornell.edu to learn more about the project.

Want to use BirdNET to analyze a large dataset? Don't hesitate to contact us: ccb-birdnet@cornell.edu

Follow us on Twitter https://twitter.com/BirdNET_App[@BirdNET_App]

We also have a discussion forum on https://reddit.com/r/BirdNET_Analyzer[Reddit] if you have a general question or just want to chat.

*Have a question, remark, or feature request? Please start a new issue thread to let us know. Feel free to submit a pull request.*


[discrete]
== Contents
toc::[]

== Model version update

[discrete]
==== V2.4, June 2023

* more than 6,000 species worldwide
* covers frequencies from 0 Hz to 15 kHz with a two-channel spectrogram (one for low and one for high frequencies)
* 0.826 GFLOPs, 50.5 MB as FP32
* enhanced and optimized metadata model
* global selection of species (birds and non-birds) with 6,522 classes (incl. 10 non-event classes)

You can find a list of previous versions here: https://github.com/kahst/BirdNET-Analyzer/tree/main/checkpoints[BirdNET-Analyzer Model Version History]

== Technical Details

Model V2.4 uses the following settings:

* 48 kHz sampling rate (we up- and downsample automatically and can deal with artifacts from lower sampling rates)
* we compute 2 mel spectrograms as input for the convolutional neural network:
** the first one has fmin = 0 Hz and fmax = 3000 Hz; nfft = 2048; hop size = 278; 96 mel bins
** the second one has fmin = 500 Hz and fmax = 15 kHz; nfft = 1024; hop size = 280; 96 mel bins
* both spectrograms have a final resolution of 96x511 pixels
* raw audio will be normalized between -1 and 1 before spectrogram conversion
* we use non-linear magnitude scaling as mentioned in http://ceur-ws.org/Vol-2125/paper_181.pdf[Schlüter 2018]
* V2.4 uses an EfficientNetB0-like backbone with a final embedding size of 1024
* See https://github.com/kahst/BirdNET-Analyzer/issues/177#issuecomment-1772538736[this comment] for more details
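As a rough illustration of these settings, the two mel spectrograms can be approximated with librosa as follows. This is a sketch for orientation only: the file name is a placeholder, and BirdNET's actual preprocessing pipeline (including its non-linear magnitude scaling) is not reproduced here.

[source,python]
----
import librosa
import numpy as np

# Load audio and resample to the model's 48 kHz rate (placeholder file name)
audio, sr = librosa.load("soundscape.wav", sr=48000)

# Normalize raw audio between -1 and 1, as described above
audio = audio / np.max(np.abs(audio))

# Low-frequency channel: fmin = 0 Hz, fmax = 3000 Hz, nfft = 2048, hop = 278, 96 mel bins
mel_low = librosa.feature.melspectrogram(
    y=audio, sr=sr, n_fft=2048, hop_length=278, n_mels=96, fmin=0, fmax=3000)

# High-frequency channel: fmin = 500 Hz, fmax = 15 kHz, nfft = 1024, hop = 280, 96 mel bins
mel_high = librosa.feature.melspectrogram(
    y=audio, sr=sr, n_fft=1024, hop_length=280, n_mels=96, fmin=500, fmax=15000)
----

== Usage guide

This document provides instructions for downloading and installing the GUI, and conducting some of the most common types of analyses. 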
Within the document, a link is provided to download example sound files that can be used for practice.\n\nDownload the PDF here: https://zenodo.org/records/8357176[BirdNET-Analyzer Usage Guide]\n\nWatch our presentation on how to use BirdNET-Analyzer to train your own models: https://youtu.be/HuEZGIPeyq0[BirdNET - BioacousTalks at YouTube]\n\n== Showroom\n\nBirdNET powers a number of fantastic community projects dedicated to bird song identification, all of which use models from this repository.\nThese are some highlights, make sure to check them out!\n\n.Community projects\n[cols=""~,~"", options=""header""]\n|===\n| Project | Description\n\n| image:https://tuc.cloud/index.php/s/cDqtQxo8yMRkNYP/download/logo_box_loggerhead.png[HaikuBox,300,link=https://haikubox.com]\n| \n*HaikuBox* +\nOnce connected to your WiFi, Haikubox will listen for birds 24/7.\nWhen BirdNET finds a match between its thousands of labeled sounds and the birdsong in your yard, it identifies the bird species and shares a three-second audio clip to the Haikubox website and smartphone app.\n\nLearn more at: https://haikubox.com[HaikuBox.com]\n\n| image:https://tuc.cloud/index.php/s/WKCZoE9WSjimDoe/download/logo_box_birdnet-pi.png[BirdNET-PI,300,link=https://birdnetpi.com]\n| *BirdNET-Pi* +\nBuilt on the TFLite version of BirdNET, this project uses pre-built TFLite binaries for Raspberry Pi to run on-device sound analyses.\nIt is able to recognize bird sounds from a USB sound card in realtime and share its data with the rest of the world.\n\nLearn more at: https://birdnetpi.com[BirdNETPi.com]\n\n| image:https://tuc.cloud/index.php/s/jDtyG9W36WwKpbR/download/logo_box_birdweather.png[BirdWeather,300,link=https://app.birdweather.com]\n| *BirdWeather* +\nThis site was built to be a living library of bird vocalizations.\nUsing the BirdNET artificial neural network, BirdWeather is continuously listening to over 100 active stations around the world in real-time.\n\nLearn more at: https://app.birdweather.com[BirdWeather.com]\n\n| image:https://tuc.cloud/index.php/s/zpNkXJq7je3BKNE/download/logo_box_ecopi_bird.png[ecoPI:Bird,300,link=https://oekofor.netlify.app/en/portfolio/ecopi-bird_en/]\n| *ecoPi:Bird* +\nThe ecoPi:Bird is a device for automated acoustic recordings of bird songs and calls, with a self-sufficient power supply.\nIt facilitates economical long-term monitoring, implemented with minimal personal requirements.\n\nLearn more at: https://oekofor.netlify.app/en/portfolio/ecopi-bird_en/[oekofor.netlify.app]\n|===\n\n*Other cool projects:*\n\n* BirdCAGE is an application for monitoring the bird songs in audio streams: https://github.com/mmcc-xx/BirdCAGE[BirdCAGE at GitHub]\n* BattyBirdNET-Analyzer is a tool to assist in the automated classification of bat calls: https://github.com/rdz-oss/BattyBirdNET-Analyzer[BattyBirdNET-Analyzer at GitHub]\n\nWorking on a cool project that uses BirdNET? 
Let us know and we can feature your project here.\n\n== Setup\n=== Setup (Raven Pro)\n\nIf you want to analyze audio files without any additional coding or package install, you can now use https://ravensoundsoftware.com/software/raven-pro/[Raven Pro software] to run BirdNET models.\nAfter download, BirdNET is available through the new ""Learning detector"" feature in Raven Pro.\nFor more information on how to use this feature, please visit the https://ravensoundsoftware.com/article-categories/learning-detector/[Raven Pro Knowledge Base].\n\n=== Setup (birdnetlib)\n\nThe easiest way to setup BirdNET on your machine is to install https://pypi.org/project/birdnetlib/[birdnetlib] through pip with:\n\n[source,sh]\n----\npip3 install birdnetlib\n----\n\nMake sure to install Tensorflow Lite, librosa and ffmpeg like mentioned below.\nYou can run BirdNET with:\n\n[source,python]\n----\nfrom birdnetlib import Recording\nfrom birdnetlib.analyzer import Analyzer\nfrom datetime import datetime\n\n# Load and initialize the BirdNET-Analyzer models.\nanalyzer = Analyzer()\n\nrecording = Recording(\n analyzer,\n ""sample.mp3"",\n lat=35.4244,\n lon=-120.7463,\n date=datetime(year=2022, month=5, day=10), # use date or week_48\n min_conf=0.25,\n)\nrecording.analyze()\nprint(recording.detections)\n----\n\nFor more examples and documentation, make sure to visit https://pypi.org/project/birdnetlib/[pypi.org/project/birdnetlib/].\nFor any feature request or questions regarding *birdnetlib*, please contact link:mailto:joe.weiss@gmail.com[Joe Weiss] or add an issue or PR at https://github.com/joeweiss/birdnetlib[github.com/joeweiss/birdnetlib].\n\n=== Setup (Ubuntu)\n\nInstall Python 3:\n\n[source,sh]\n----\nsudo apt-get update\nsudo apt-get install python3-dev python3-pip\npip3 install --upgrade pip\n----\n\nInstall TFLite runtime (recommended) or Tensorflow (has to be 2.5 or later):\n\n[source,sh]\n----\npip3 install tflite-runtime\n\nOR\n\npip3 install tensorflow\n----\n\nInstall Librosa to handle audio files:\n\n[source,sh]\n----\npip3 install librosa resampy\nsudo apt-get install ffmpeg\n----\n\nClone the repository\n\n[source,sh]\n----\ngit clone https://github.com/kahst/BirdNET-Analyzer.git\ncd BirdNET-Analyzer\n----\n\n=== Setup (Windows)\n\nBefore you attempt to setup BirdNET-Analyzer on your Windows machine, please consider downloading our fully-packaged version that does not require you to install any additional packages and can be run ""as-is"".\n\nYou can download this version here: https://tuc.cloud/index.php/s/myHcJNsDsMrDqMM/download/BirdNET-Analyzer.zip[BirdNET-Analyzer Windows]\n\n. Download the zip file\n. Before unpacking, make sure to right-click the zip-file, select ""Properties"" and check the box ""Unblock"" under ""Security"" at the bottom of the ""General"" tab.\n ** If Windows does not display this option, the file can be unblocked with the PowerShell 7 command `Unblock-File -Path .\\BirdNET-Analyzer.zip`\n. Unpack the zip-file\n. Navigate to the extracted folder named ""BirdNET-Analyzer""\n. 
You can start the analysis through the command prompt with `+BirdNET-Analyzer.exe --i ""path\\to\\folder"" ...+` (see <> for more details), or you can launch `BirdNET-Analyzer-GUI.exe` to start the analysis through a basic GUI.\n\nFor more advanced use cases (e.g., hosting your own API server), follow these steps to set up BirdNET-Analyzer on your Windows machine:\n\nInstall Python 3.9 or higher (has to be 64bit version)\n\n* Download and run installer: https://www.python.org/downloads/release/python-390/[Download Python installer]\n\nWARNING: :exclamation:**Make sure to check: ☑ ""Add path to environment variables"" during install**:exclamation:\n\nInstall Tensorflow (has to be 2.5 or later), Librosa and NumPy\n\n* Open command prompt with *`Win + S`* type ""command"" and click on ""Command Prompt""\n* Type `pip install --upgrade pip`\n* Type `pip install librosa resampy`\n* Install Tensorflow by typing `pip install tensorflow`\n\nNOTE: You might need to run the command prompt as ""administrator"".\nType *`Win + S`*, search for command prompt and then right-click, select ""Run as administrator"".\n\nInstall Visual Studio Code (optional)\n\n* Download and install VS Code: https://code.visualstudio.com/sha/download?build=stable&os=win32-x64-user[Download VS Code installer]\n* Select all available options during install\n\nInstall BirdNET using Git (for simple download see below)\n\n* Download and install Git Bash: https://github.com/git-for-windows/git/releases/download/v2.34.1.windows.1/Git-2.34.1-64-bit.exe[Download Git Bash installer]\n* Select Visual Studio Code as default editor (optional)\n* Keep all other settings as recommended\n* Create folder in personal directory called ""Code"" (or similar)\n* Change to folder and right click, launch ""Git bash here""\n* Type `+git clone https://github.com/kahst/BirdNET-Analyzer.git+`\n* Keep BirdNET updated by running `git pull` for BirdNET-Analyzer folder occasionally\n\nInstall BirdNET from zip\n\n* Download BirdNET: https://github.com/kahst/BirdNET-Analyzer/archive/refs/heads/main.zip[Download BirdNET Zip-file]\n* Unpack zip file (e.g., in personal directory)\n* Keep BirdNET updated by re-downloading the zip file occasionally and overwrite existing files\n\nRun BirdNET from command line\n\n* Open command prompt with *`Win + S`* type ""command"" and click on ""Command Prompt""\n* Navigate to the folder where you installed BirdNET (cd path\\to\\BirdNET-Analyzer)\n* See <> for command line arguments\n\nNOTE: With Visual Studio Code installed, you can right-click the BirdNET-Analyzer folder and select ""Open with Code"".\nWith proper extensions installed (View -> Extensions -> Python) you will be able to run all scripts from within VS Code.\n\n=== Setup (macOS)\n\nNOTE: Installation was only tested on a M1 chip.\nFeedback on older Intel CPUs or newer M2 chips is welcome!\n\n==== Requirements\n\nXcode command-line tools:\n\n[source,sh]\n----\nxcode-select --install\n----\n\nConda (Apple silicon):\n\n[source,sh]\n----\ncurl https://repo.anaconda.com/miniconda/Miniconda3-latest-MacOSX-arm64.sh -o ~/Downloads/Miniconda3-latest-MacOSX-arm64.sh\nbash ~/Downloads/Miniconda3-latest-MacOSX-arm64.sh -b -p $HOME/miniconda\n----\n\nConda (x86_64):\n\n[source,sh]\n----\ncurl https://repo.anaconda.com/miniconda/Miniconda3-latest-MacOSX-x86_64.sh -o ~/Downloads/Miniconda3-latest-MacOSX-x86_64.sh\nbash ~/Downloads/Miniconda3-latest-MacOSX-x86_64.sh -b -p $HOME/miniconda\n----\n\nThe installer prompts ""`Do you wish the installer to initialize Miniconda3 by 
running conda init?`"" We recommend ""`yes`"".\n\nAdd the `conda-forge` channel:\n\n[source,sh]\n----\nconda config --add channels conda-forge\n----\n\n==== Create Conda Environment\n\n[source,sh]\n----\nconda create -n birdnet-analyzer python=3.10 -c conda-forge -y\nconda activate birdnet-analyzer\n----\n\n==== Install dependencies\n\nApple silicon only:\n\n[source,sh]\n----\nconda install -c apple tensorflow-deps\n----\n\nTensorFlow for macOS and Metal plug-in:\n\n[source,sh]\n----\npython -m pip install tensorflow-macos tensorflow-metal\n----\n\nLibrosa and ffmpeg:\n\n[source,sh]\n----\nconda install -c conda-forge librosa resampy -y\n----\n\n==== Verify\n\nClone the git repository if you have not done that yet:\n\n[source,sh]\n----\ngit clone https://github.com/kahst/BirdNET-Analyzer.git\ncd BirdNET-Analyzer\n----\n\nRun the example.\nIt will take a while the first time you run it.\nSubsequent runs will be faster.\n\n[source,sh]\n----\npython analyze.py\n----\n\nNOTE: Now, you can install and use <>.\n\n== Usage\n=== Usage (CLI)\n\n. Inspect config file for options and settings, especially inference settings.\nSpecify a custom species list if needed and adjust the number of threads TFLite can use to run the inference.\n. Run `analyzer.py` to analyze an audio file.\nYou need to set paths for the audio file and selection table output.\nHere is an example:\n+\n[source,sh]\n----\npython3 analyze.py --i /path/to/audio/folder --o /path/to/output/folder\n----\n+\nNOTE: Your custom species list has to be named \'species_list.txt\' and the folder containing the list needs to be specified with `--slist /path/to/folder`.\nYou can also specify the number of CPU threads that should be used for the analysis with `--threads ` (e.g., `--threads 16`).\nIf you provide GPS coordinates with `--lat` and `--lon`, the custom species list argument will be ignored.\n+\n\nHere\'s a complete list of all command line arguments:\n+\n----\n--i, Path to input file or folder. If this is a file, --o needs to be a file too.\n--o, Path to output file or folder. If this is a file, --i needs to be a file too.\n--lat, Recording location latitude. Set -1 to ignore.\n--lon, Recording location longitude. Set -1 to ignore.\n--week, Week of the year when the recording was made. Values in [1, 48] (4 weeks per month). Set -1 for year-round species list.\n--slist, Path to species list file or folder. If folder is provided, species list needs to be named ""species_list.txt"". If lat and lon are provided, this list will be ignored.\n--sensitivity, Detection sensitivity; Higher values result in higher sensitivity. Values in [0.5, 1.5]. Defaults to 1.0.\n--min_conf, Minimum confidence threshold. Values in [0.01, 0.99]. Defaults to 0.1.\n--overlap, Overlap of prediction segments. Values in [0.0, 2.9]. Defaults to 0.0.\n--rtype, Specifies output format. Values in [\'table\', \'audacity\', \'r\', \'kaleidoscope\', \'csv\']. Defaults to \'table\' (Raven selection table).\n--threads, Number of CPU threads.\n--batchsize, Number of samples to process at the same time. Defaults to 1.\n--locale, Locale for translated species common names. Values in [\'af\', \'de\', \'it\', ...] Defaults to \'en\'.\n--sf_thresh, Minimum species occurrence frequency threshold for location filter. Values in [0.01, 0.99]. Defaults to 0.03.\n--classifier, Path to custom trained classifier. Defaults to None. 
If set, --lat, --lon and --locale are ignored.\n----\n+\nHere are two example commands to run this BirdNET version:\n+\n[source,sh]\n----\npython3 analyze.py --i example/ --o example/ --slist example/ --min_conf 0.5 --threads 4\n\npython3 analyze.py --i example/ --o example/ --lat 42.5 --lon -76.45 --week 4 --sensitivity 1.0\n----\n+\n. Run `embeddings.py` to extract feature embeddings instead of class predictions.\nResult file will contain timestamps and lists of float values representing the embedding for a particular 3-second segment.\nEmbeddings can be used for clustering or similarity analysis.\nHere is an example:\n+\n[,sh]\n----\npython3 embeddings.py --i example/ --o example/ --threads 4 --batchsize 16\n----\n+\nHere\'s a complete list of all command line arguments:\n+\n----\n--i, Path to input file or folder. If this is a file, --o needs to be a file too.\n--o, Path to output file or folder. If this is a file, --i needs to be a file too.\n--overlap, Overlap of prediction segments. Values in [0.0, 2.9]. Defaults to 0.0.\n--threads, Number of CPU threads.\n--batchsize, Number of samples to process at the same time. Defaults to 1.\n----\n+\n. After the analysis, run `segments.py` to extract short audio segments for species detections to verify results.\nThis way, it might be easier to review results instead of loading hundreds of result files manually.\n+\nHere\'s a complete list of all command line arguments:\n+\n----\n--audio, Path to folder containing audio files.\n--results, Path to folder containing result files.\n--o, Output folder path for extracted segments.\n--min_conf, Minimum confidence threshold. Values in [0.01, 0.99]. Defaults to 0.1.\n--max_segments, Number of randomly extracted segments per species.\n--seg_length, Length of extracted segments in seconds. Defaults to 3.0.\n--threads, Number of CPU threads.\n----\n+\n. When editing your own `species_list.txt` file, make sure to copy species names from the labels file of each model.\n+\nYou can find label files in the checkpoints folder, e.g., `checkpoints/V2.3/BirdNET_GLOBAL_3K_V2.3_Labels.txt`.\n+\nSpecies names need to consist of `scientific name_common name` to be valid.\n+\n. You can generate a species list for a given location using `species.py` in case you need it for reference.\nHere is an example:\n+\n[,sh]\n----\npython3 species.py --o example/species_list.txt --lat 42.5 --lon -76.45 --week 4\n----\n+\nHere\'s a complete list of all command line arguments:\n+\n----\n--o, Path to output file or folder. If this is a folder, file will be named \'species_list.txt\'.\n--lat, Recording location latitude.\n--lon, Recording location longitude.\n--week, Week of the year when the recording was made. Values in [1, 48] (4 weeks per month). Set -1 for year-round species list.\n--threshold, Occurrence frequency threshold. Defaults to 0.05.\n--sortby, Sort species by occurrence frequency or alphabetically. Values in [\'freq\', \'alpha\']. Defaults to \'freq\'.\n----\n+\n. This is a very basic version of the analysis workflow, you might need to adjust it to your own needs.\n. Please open an issue to ask for new features or to document unexpected behavior.\n. 
I will keep models up to date and upload new checkpoints whenever there is an improvement in performance.
I will also provide quantized and pruned model files for distribution.

=== Usage (Docker)

Install Docker for Ubuntu:

[source,sh]
----
sudo apt install docker.io
----

Build the Docker container:

[source,sh]
----
sudo docker build -t birdnet .
----

NOTE: You need to run docker build again whenever you make changes to the script.

In order to pass a directory that contains your audio files to the Docker container, you need to mount it inside the container with `-v /my/path:/mount/path` before you can run it.

You can run the container for the provided example soundscapes with:

[source,sh]
----
sudo docker run -v $PWD/example:/audio birdnet analyze.py --i audio --o audio --slist audio
----

You can adjust the directory that contains your recordings by providing an absolute path:

[source,sh]
----
sudo docker run -v /path/to/your/audio/files:/audio birdnet analyze.py --i audio --o audio --slist audio
----

You can also mount more than one drive, e.g., if the input and output folders should be different:

[source,sh]
----
sudo docker run -v /path/to/your/audio/files:/input -v /path/to/your/output/folder:/output birdnet analyze.py --i input --o output --slist input
----

See the command line arguments listed under Usage (CLI) above; all of them will work with the Docker version.

NOTE: If you'd like to specify a species list (which will be used as a post-filter and needs to be named 'species_list.txt'), you need to put it into a folder that also has to be mounted.

=== Usage (Server)

You can host your own analysis service and API by launching the `server.py` script.
This will allow you to send files to this server, store submitted files, analyze them and send detection results back to a client.
This could be a local service, running on a desktop PC, or a remote server.
The API can be accessed locally or remotely through a browser or Python client (or any other client implementation).

. Install one additional package with `pip3 install bottle`.
. Start the server with `python3 server.py`.
You can also specify a host name or IP and port number, e.g., `python3 server.py --host localhost --port 8080`.
+
Here's a complete list of all command line arguments:
+
----
--host, Host name or IP address of API endpoint server. Defaults to '0.0.0.0'.
--port, Port of API endpoint server. Defaults to 8080.
--spath, Path to folder where uploaded files should be stored. Defaults to '/uploads'.
--threads, Number of CPU threads for analysis. Defaults to 4.
--locale, Locale for translated species common names. Values in ['af', 'de', 'it', ...] Defaults to 'en'.
----
+
NOTE: The server is single-threaded, so you'll need to start multiple instances for higher throughput.
This service is intended for short audio files (e.g., 1-10 seconds).
+
. 
. Query the API with a client.\nYou can use the provided Python client or any other client implementation.\nThe request payload needs to be `multipart/form-data` with the following fields: `audio` for the raw audio data as bytes, and `meta` for additional information on the audio file.\nTake a look at our example client implementation in the `client.py` script (a minimal request sketch also follows at the end of this section).\n+\nThis script will read an audio file, generate metadata from command line arguments and send it to the server.\nThe server will then analyze the audio file and send back the detection results, which will be stored as a JSON file.\n+\nHere\'s a complete list of all command line arguments:\n+\n----\n--host, Host name or IP address of API endpoint server.\n--port, Port of API endpoint server.\n--i, Path to file that should be analyzed.\n--o, Path to result file. Leave blank to store with audio file.\n--lat, Recording location latitude. Set -1 to ignore.\n--lon, Recording location longitude. Set -1 to ignore.\n--week, Week of the year when the recording was made. Values in [1, 48] (4 weeks per month). Set -1 for year-round species list.\n--overlap, Overlap of prediction segments. Values in [0.0, 2.9]. Defaults to 0.0.\n--sensitivity, Detection sensitivity; Higher values result in higher sensitivity. Values in [0.5, 1.5]. Defaults to 1.0.\n--pmode, Score pooling mode. Values in [\'avg\', \'max\']. Defaults to \'avg\'.\n--num_results, Number of results per request.\n--sf_thresh, Minimum species occurrence frequency threshold for location filter. Values in [0.01, 0.99]. Defaults to 0.03.\n--save, Define if files should be stored on server. Values in [True, False]. Defaults to False.\n----\n+\n. Parse results from the server.\nThe server will send back a JSON response with the detection results.\nThe response also contains a `msg` field, indicating `success` or `error`.\nResults consist of a sorted list of (species, score) tuples.\n+\nThis is an example response:\n+\n[source,json]\n----\n{""msg"": ""success"", ""results"": [[""Poecile atricapillus_Black-capped Chickadee"", 0.7889], [""Spinus tristis_American Goldfinch"", 0.5028], [""Junco hyemalis_Dark-eyed Junco"", 0.4943], [""Baeolophus bicolor_Tufted Titmouse"", 0.4345], [""Haemorhous mexicanus_House Finch"", 0.2301]]}\n----\n+\nNOTE: Let us know if you have any questions, suggestions, or feature requests.\nAlso let us know when hosting an analysis service - we would love to give it a try.
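\nIf you do not want to use `client.py`, a request can also be assembled by hand.\nThe following is a minimal sketch using the `requests` package; the endpoint path, the audio file name, and the exact metadata keys are assumptions made for illustration - check `client.py` and `server.py` for the authoritative names:\n\n[source,python]\n----\nimport json\n\nimport requests  # pip3 install requests\n\n# Assumed endpoint; verify the actual route in server.py.\nURL = ""http://localhost:8080/analyze""\n\n# Metadata mirroring the client.py command line arguments (key names assumed).\nmeta = {\n    ""lat"": 42.5,\n    ""lon"": -76.45,\n    ""week"": 4,\n    ""overlap"": 0.0,\n    ""sensitivity"": 1.0,\n    ""pmode"": ""avg"",\n    ""num_results"": 5,\n    ""sf_thresh"": 0.03,\n    ""save"": False,\n}\n\n# Send the audio bytes and metadata as multipart/form-data, then parse the JSON reply.\nwith open(""example/soundscape.wav"", ""rb"") as f:\n    response = requests.post(URL, files={""audio"": f}, data={""meta"": json.dumps(meta)})\n\npayload = response.json()\nif payload[""msg""] == ""success"":\n    for species, score in payload[""results""]:\n        print(f""{score:.3f}  {species}"")\n----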
\n=== Usage (GUI)\n\nWe provide a very basic GUI which lets you launch the analysis through a web interface.\n\n.Web based GUI\nimage::https://tuc.cloud/index.php/s/QyBczrWXCrMoaRC/download/analyzer_gui.png[GUI screenshot]\n\n. Install the two additional packages needed for the GUI with `pip3 install pywebview gradio`.\n. Launch the GUI with `python3 gui.py`.\n. Set all folders and parameters; after that, click \'Analyze\'.\n\n== Training\n\nYou can train your own custom classifier on top of BirdNET.\nThis is useful if you want to detect species that are not included in the default species list.\nYou can also use this to train a classifier for a specific location or season.\nAll you need is a dataset of labeled audio files, organized in folders by species (we use folder names as labels).\n*This also works for non-bird species, as long as you have a dataset of labeled audio files*.\nAudio files will be resampled to 48 kHz and converted into 3-second segments (we use the center 3-second segment if the file is longer and pad with random noise if it is shorter).\nWe recommend using at least 100 audio files per species (although training also works with less data).\nYou can download a sample training data set https://drive.google.com/file/d/16hgka5aJ4U69ane9RQn_quVmgjVY2AY5[here].\n\n. Collect training data and organize in folders based on species names.\n. Species labels should be in the format `scientific name_common name` (e.g., `Poecile atricapillus_Black-capped Chickadee`), but other formats work as well.\n. It can be helpful to include a non-event class.\nIf you name a folder \'Noise\', \'Background\', \'Other\' or \'Silence\', it will be treated as a non-event class.\n. Run the training script with `python3 train.py --i <path to training data folder> --o <path to trained classifier output>`.\n+\nHere is a list of all command line arguments:\n+\n----\n--i, Path to training data folder. Subfolder names are used as labels.\n--o, Path to trained classifier model output.\n--crop_mode, Crop mode for training data. Values in [\'center\', \'first\', \'segments\']. Defaults to \'center\'.\n--crop_overlap, Overlap of training data segments in seconds if crop_mode is \'segments\'. Defaults to 0.\n--epochs, Number of training epochs. Defaults to 100.\n--batch_size, Batch size. Defaults to 32.\n--val_split, Validation split ratio. Defaults to 0.2.\n--learning_rate, Learning rate. Defaults to 0.01.\n--hidden_units, Number of hidden units. Defaults to 0. If set to >0, a two-layer classifier is used.\n--dropout, Dropout rate. Defaults to 0.\n--mixup, Whether to use mixup for training.\n--upsampling_ratio, Balance train data and upsample minority classes. Values between 0 and 1. Defaults to 0.\n--upsampling_mode, Upsampling mode. Can be \'repeat\', \'mean\' or \'smote\'. Defaults to \'repeat\'.\n--model_format, Model output format. Can be \'tflite\', \'raven\' or \'both\'. Defaults to \'tflite\'.\n--cache_mode, Cache mode. Can be \'none\', \'load\' or \'save\'. Defaults to \'none\'.\n--cache_file, Path to cache file. Defaults to \'train_cache.npz\'.\n----\n+\n. After training, you can use the custom trained classifier with the `--classifier` argument of the `analyze.py` script. If you want to use the custom classifier in Raven, make sure to set `--model_format raven`.\n+\nNOTE: Adjusting hyperparameters (e.g., number of hidden units, learning rate, etc.) can have a big impact on the performance of the classifier.\nWe recommend trying different hyperparameter settings (an illustrative example follows).
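+\nFor instance, a run that trains a two-layer classifier with tuned optimizer settings might look like this (a sketch only; the flag values are illustrative, not recommendations):\n+\n[source,sh]\n----\npython3 train.py --i train_data/ --o checkpoints/custom/Custom_Classifier.tflite --epochs 50 --batch_size 64 --learning_rate 0.001 --hidden_units 512 --dropout 0.25\n----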
+\nExample usage (when downloading and unzipping the sample training data set):\n+\n[source,sh]\n----\npython3 train.py --i train_data/ --o checkpoints/custom/Custom_Classifier.tflite\npython3 analyze.py --classifier checkpoints/custom/Custom_Classifier.tflite\n----\n+\nNOTE: Setting a custom classifier will also set the new labels file.\nDue to these custom labels, the location filter and locale will be disabled.\n\n== Funding\n\nThis project is supported by Jake Holshuh (Cornell class of \'69) and The Arthur Vining Davis Foundations.\nOur work in the K. Lisa Yang Center for Conservation Bioacoustics is made possible by the generosity of K. Lisa Yang to advance innovative conservation technologies to inspire and inform the conservation of wildlife and habitats.\n\nThe German Federal Ministry of Education and Research is funding the development of BirdNET through the project ""BirdNET+"" (FKZ 01|S22072).\nAdditionally, the German Federal Ministry of Environment, Nature Conservation and Nuclear Safety is funding the development of BirdNET through the project ""DeepBirdDetect"" (FKZ 67KI31040E).\n\n== Partners\n\nBirdNET is a joint effort of partners from academia and industry.\nWithout these partnerships, this project would not have been possible.\nThank you!\n\n.Our partners\nimage::https://tuc.cloud/index.php/s/KSdWfX5CnSRpRgQ/download/box_logos.png[Logos of all partners]\n'",",https://zenodo.org/records/8357176","2021/09/22, 13:29:56",763,CUSTOM,217,368,"2023/10/20, 11:30:50",35,55,151,119,5,6,0.0,0.28270042194092826,,,0,16,false,,false,false,,,,,,,,,,, specify7,A biological collections data management platform.,specify,https://github.com/specify/specify7.git,github,,Biodiversity and Species Distribution,"2023/10/25, 16:09:52",57,0,11,true,TypeScript,Specify Collections Consortium,specify,"TypeScript,Python,JavaScript,CSS,Dockerfile,HTML,Makefile,Shell,Nix",https://www.specifysoftware.org/products/specify-7/,"b""\n# [Specify 7](https://www.specifysoftware.org/products/specify-7/)\n\nThe [Specify Collections Consortium](https://www.specifysoftware.org) is pleased\nto offer Specify 7, a web implementation of our biological collections data\nmanagement platform.\n\nWe encourage members to use\nour [Dockerized compositions](https://github.com/specify/docker-compositions) of\nSpecify 7. You can choose a version, make the necessary adjustments and then run\na single command to get everything working. It is very simple and can be easily\nupdated when new versions are released. Members can contact us\nat [support@specifysoftware.org](mailto:support@specifysoftware.org) to gain\naccess to this repository.\n\nThe new generation of Specify combines the interface design components and data\nmanagement foundation of Specify 6 with the efficiency and ease-of-use of\nweb-based data access and cloud computing. The Specify 7 web application uses\nthe same interface layout language as Specify 6, so any user interface\ncustomization made in one product is mirrored in the other. Also, Specify 6 and\nSpecify 7 use the same data model and can work from the same Specify MySQL\ndatabase, which means they can be run simultaneously with any Specify\ncollection. By providing an easy migration path to the web, Specify 7 helps\ntransition Specify 6 collections to cloud computing. 
It is also a great starting\nplatform for collections which prefer zero workstation software installation and\nubiquitous web browser access.\n\nSpecify 7\xe2\x80\x99s server/browser architecture opens the door for computing support of\ncollaborative digitization projects and for remote hosting of institutional or\nproject specimen databases. Without the need for a local area or campus network\nto connect to the MySQL data server, Specify 7 gives you and your collaborators\naccess to a shared specimen database through any web browser. Lacking adequate\nIT support to maintain a secure database server? With the Specify 7 server\nsoftware supported on generic Linux servers, museums can utilize a server\nhosting service to provide support for the technical complexities of systems\nadministration, security management, and backups. Want to create a joint\ndatabase with remote collaborators for a collaborative digitizing effort? No\nproblem! Host it yourself, hire a hosting service or use\nour [Specify Cloud](https://www.specifysoftware.org/products/cloud/) service for\nyour Specify database, set up accounts and go. We provide the same efficient\nuser interface and printed reports and labels customization, and help desk\nsupport for Specify 7 as we do for Specify 6.\n\n**Secure.**\nSupport for Single Sign-On (SSO) integrates Specify 7 with campus or\ninstitutional identity providers. It supports all identity providers (IdPs) that\nhave OpenID endpoints.\n\nThe Security and Accounts tool allows administrators to give access based on\nroles and policies. Create, edit, and copy roles among collections and\ndatabases. Administrators can give users as many or as few permissions as desired,\nfrom guest accounts to collection managers.\n\n**Accessible.**\nIt is important that web applications work for people with disabilities. Specify\n7 is developed with this top of mind, not only meeting international\naccessibility standards but also providing a better experience for everyone.\n\nSpecify 7 is largely compliant with the main WWW accessibility standard \xe2\x80\x93 **WCAG\n2.1 (AA)**. It supports screen readers and allows each user to customize their\ncolor scheme and appearance as well as reduce motion and resize all elements.\n\nThis accessible design respects system and web browser preferences for date\nformats, language, theme, and animations.\n\n---\n\nThe Specify Collections Consortium is funded by its member\ninstitutions. The Consortium web site is:\nhttps://specifysoftware.org\n\nSpecify 7 Copyright \xc2\xa9 2023 Specify Collections Consortium. Specify\ncomes with ABSOLUTELY NO WARRANTY. 
This is free software licensed\nunder GNU General Public License 2 (GPL2).\n\n Specify Collections Consortium\n Biodiversity Institute\n University of Kansas\n 1345 Jayhawk Blvd.\n Lawrence, KS 66045 USA\n\n## Table of Contents\n\n- [Specify 7](#specify-7)\n - [Table of Contents](#table-of-contents)\n - [Changelog](#changelog)\n - [Installation](#installation)\n - [Docker installation](#docker-installation-recommended)\n (**Recommended**)\n - [Local installation](#local-installation)\n - [Installing system dependencies](#installing-system-dependencies)\n - [Installing Specify 6](#installing-specify-6)\n - [Cloning Specify 7 source repository](#cloning-specify-7-source-repository)\n - [Setting up Python Virtual Environment](#setting-up-python-virtual-environment)\n - [Building](#building)\n - [Adjusting settings files](#adjusting-settings-files)\n - [Turning on debugging](#turning-on-debugging)\n - [The development server](#the-development-server)\n - [The Specify 7 worker](#the-specify-7-worker)\n - [Installing production requirements](#installing-production-requirements)\n - [Setting up Apache](#setting-up-apache)\n - [Restarting Apache](#restarting-apache)\n - [Updating Specify 7](#updating-specify-7)\n - [Updating the database (Specify 6) version](#updating-the-database-specify-6-version)\n - [Localizing Specify 7](#localizing-specify-7)\n\n## Changelog\n\nThe changelog is available in [CHANGELOG.md](./CHANGELOG.md)\n\n# Installation\n\nWe encourage all users to read our documentation on the Community Forum\nregarding installing and deploying Specify \xe2\x80\x93\n[**Specify 7 Installation Instructions**](https://discourse.specifysoftware.org/t/specify-7-installation-instructions/755).\n\nIf you are an existing Specify 6 user who is looking to evaluate Specify 7, you\ncan contact [support@specifysoftware.org](mailto:support@specifysoftware.org)\nwith a copy of your database, and we can configure a temporary deployment\nfor evaluation purposes.\n\n## Docker Installation (Recommended)\n\n### Specify Collections Consortium (SCC) Members:\n\nWe encourage members to use\nour [Dockerized compositions](https://github.com/specify/docker-compositions)\nof Specify 7. You can choose your desired version, make the necessary\nadjustments and then run a single command to get everything\nworking. It is very simple and can be easily updated when new versions are\nreleased. 
Documentation for deploying Specify\nusing Docker is available within the repository.\n\n[**\xf0\x9f\x93\xa8 Click here to request access**](mailto:support@specifysoftware.org?subject=Requesting%20Docker%20Repository%20Access&body=My%20GitHub%20username%20is%3A%20%0D%0AMy%20Specify%20Member%20Institution%20is%3A%20%0D%0AAdditional%20Questions%20or%20Notes%3A%20)\nor email [support@specifysoftware.org](mailto:support@specifysoftware.org)\nwith your GitHub username, member institution or collection, and any additional questions you have for us.\n\n### Non-Members:\n\nIf your institution is not a member of the Specify Collections Consortium, you\ncan follow\nthe [local installation instructions](#local-installation) below, or\ncontact [membership@specifysoftware.org](mailto:membership@specifysoftware.org)\nto learn more about joining the SCC and receiving configuration assistance,\nsupport, and hosting services.\n\n## Local Installation\n\nAfter completing these instructions, you will be able to run the test\nserver and interact with the Django-based Specify webapp in your\nbrowser on your local machine.\n\nInstructions for deployment follow.\n\n**Note:** If updating from a previous version, some of the Python\ndependencies have changed. It is recommended to place the new version\nin a separate directory next to the previous version and install all\nthe new dependencies in a Python virtualenv as described below. That\nwill avoid version conflicts and allow the previous version to\ncontinue working while the new version is being set up. When the new\nversion is working satisfactorily using the test server, the Apache\nconf can be changed to point to it (or changed back to the old\nversion, if problems arise).\n\n### Installing system dependencies\n\nSpecify 7 requires Python 3.8. Ubuntu 20.04 LTS is recommended. For\nother distributions, these instructions will have to be adapted.\n\nUbuntu 20.04 LTS:\n\n```shell\nsudo apt install -y curl\ncurl -fsSL https://deb.nodesource.com/setup_18.x | sudo -E bash -\nsudo apt-get -y install --no-install-recommends \\\n build-essential \\\n git \\\n libldap2-dev \\\n libmariadbclient-dev \\\n libsasl2-dev \\\n nodejs \\\n python3-venv \\\n python3.8 \\\n python3.8-dev \\\n redis \\\n unzip\n```\n\nCentOS 7 / Red Hat 7:\n\n```shell\nyum install -y epel-release sudo wget\nyum install -y \\\n gcc make \\\n git \\\n openldap-devel \\\n mariadb-devel \\\n nodejs \\\n npm \\\n java-11-openjdk-headless \\\n python36-virtualenv \\\n python36 \\\n python36u-devel \\\n redis \\\n unzip\n```\n\nAfterward, please make sure you have Node.js 18 installed:\n\n```\nnode -v\n```\n\n### Installing Specify 6\n\nA copy of the most recent Specify 6 release is required on the server\nas Specify 7 makes use of resource files. A Java runtime is required\nto execute the Specify 6 installer, but is not needed to run\nSpecify 7. 
It is possible to copy the Specify 6 install from another\nLinux system to avoid the need to install Java on the server.\n\n```shell\nwget https://update.specifysoftware.org/Specify_unix_64.sh\nsh Specify_unix_64.sh -q -dir ./Specify6.8.03\nsudo ln -s $(pwd)/Specify6.8.03 /opt/Specify\n```\n\n### Cloning Specify 7 source repository\n\nClone this repository.\n\n```shell\ngit clone https://github.com/specify/specify7.git\n```\n\nYou will now have a specify7 directory containing the source\ntree.\n\nNote, by default, `git clone` checks out the `production` branch of Specify 7.\nThat branch contains the latest tested features and bug fixes. If you prefer a\nmore stable release, you can switch to one of our tagged releases.\n\n```shell\ncd specify7\ngit checkout tags/v7.8.6\n```\n\nTagged releases come out every other week and undergo more testing.\n\nSee [the list of tags](https://github.com/specify/specify7/tags) to find\nthe latest stable release.\n\n### Adjusting settings files\n\nIn the directory `specify7/specifyweb/settings` you will find the\n`specify_settings.py` file. Make a copy of this file as\n`local_specify_settings.py` and edit it. The file contains comments\nexplaining the various settings.\n\n### Setting up Python Virtual Environment\n\nUsing a Python\n[virtual environment](https://docs.python-guide.org/en/latest/dev/virtualenvs/)\nwill avoid version conflicts with other Python libraries on your\nsystem. Also, it avoids having to use a superuser account to install\nthe Python dependencies.\n\n```shell\npython3.8 -m venv specify7/ve\nspecify7/ve/bin/pip install wheel\nspecify7/ve/bin/pip install --upgrade -r specify7/requirements.txt\n```\n\n### Building\n\nTo build Specify 7 use the default make target.\n\n```shell\ncd specify7\nsource ve/bin/activate\nmake\n```\n\n> Note, if the `source` command is not available on your system, try running\n> `. ve/bin/activate` instead\n\nOther make targets:\n\n#### `make build`\n\nRuns all necessary build steps.\n\n#### `make frontend`\n\nInstalls or updates Javascript dependencies and builds the Javascript\nmodules only.\n\n#### `make clean`\n\nRemoves all generated files.\n\nThe following targets require the virtualenv to be activated:\n\n#### `make pip_requirements`\n\nInstalls or updates Python dependencies.\n\n#### `make django_migrations`\n\nApplies Specify schema changes to the database named in the\nsettings. This step may fail if the master user configured in the\nsettings does not have DDL privileges. Changing the `MASTER_NAME` and\n`MASTER_PASSWORD` settings to the MySQL root user will allow the\nchanges to be applied. Afterward, the master user settings can be\nrestored.\n\n#### `make runserver`\n\nA shortcut for running the Django development server.\n\n#### `make webpack_watch`\n\nRun webpack in watch mode so that changes to the frontend source code\nwill be automatically compiled. Useful during the development process.\n\n### Turning on debugging\n\nFor development purposes, Django debugging should be turned on. It\nwill enable stack traces in responses that encounter exceptions, and\nallow operation with the unoptimized Javascript files.\n\nDebugging can be enabled by creating the file\n`specify7/specifyweb/settings/debug.py` with the contents `DEBUG = True`.\n\n### The development server\n\n> NOTE: the development server should only be run in debug mode. 
See previous\n> section for instructions on how to turn on debugging.\n\nSpecify7 can be run using the Django development server.\n\n```shell\ncd specify7\nsource ve/bin/activate\nmake runserver\n```\n\nThis will start a development server for testing purposes on\n`localhost:8000`.\n\nWhen the server starts up, it will issue a warning that some\nmigrations have not been applied:\n\n```\nYou have 11 unapplied migration(s). Your project may not work\nproperly until you apply the migrations for app(s): auth,\ncontenttypes, sessions. Run 'python manage.py migrate' to apply them.\n```\n\nSpecify 7 makes use of functions from the listed Django apps (auth,\ncontenttypes, and sessions) but does not need the corresponding tables\nto be added to the database. Running `make django_migrations` will\napply only those migrations needed for Specify 7 to operate.\n\n### The Specify 7 Worker\n\nStarting from version `v7.6.0`, the Specify WorkBench utilizes a\ndedicated worker process to handle the upload and validation operations.\n\nStarting from version `v7.9.0`, record merging also runs on the worker process.\n\nThis worker process utilizes [Celery](https://docs.celeryproject.org/en/master/index.html), a job queue \nmanagement system, with [Redis](https://docs.celeryproject.org/en/master/getting-started/backends-and-brokers/redis.html) \nserving as the broker.\n\nThe worker process can be started from the command line\nby executing:\n\n```shell\ncd specify7\ncelery -A specifyweb worker -l INFO --concurrency=1\n```\n\nFor deployment purposes it is recommended to configure a systemd unit\nto automatically start the Specify 7 worker process on system startup\nby executing the above command within the installation directory. It\nis possible to run Redis and the worker process on a separate server and\nto provision multiple worker processes for high volume\nscenarios. Contact the Specify team about these use cases.\n\n### Installing production requirements\n\nFor production environments, Specify7 can be hosted by Apache. The\nfollowing packages are needed:\n\n- Apache\n- mod-wsgi to connect Python to Apache\n\nUbuntu:\n\n```shell\nsudo apt-get install apache2 libapache2-mod-wsgi-py3\n```\n\nCentOS / Red Hat:\n\n```shell\nyum install httpd python3-mod_wsgi\n```\n\nWarning: This will replace the Python 2.7 version of mod-wsgi that was\nused by Specify 7.4.0 and prior. If executed on a production server\nrunning one of those versions, Specify 7 will stop working until the\nnew deployment is configured.\n\n### Setting up Apache\n\nIn the `specify7` directory you will find the `specifyweb_apache.conf`\nfile. Make a copy of the file as `local_specifyweb_apache.conf` and\nedit the contents to reflect the location of Specify6 and Specify7 on\nyour system. 
There are comments showing what to change.\n\nThen remove the default Apache welcome page and make a link to your\n`local_specifyweb_apache.conf` file.\n\nUbuntu:\n\n```shell\nsudo rm /etc/apache2/sites-enabled/000-default.conf\nsudo ln -s $(pwd)/specify7/local_specifyweb_apache.conf /etc/apache2/sites-enabled/\n```\n\nCentOS / Red Hat:\n\n```shell\nsudo ln -s $(pwd)/specify7/local_specifyweb_apache.conf /etc/httpd/conf.d/\n```\n\n### Restarting Apache\n\nAfter changing Apache's config files, restart the service.\n\nUbuntu:\n\n```shell\nsudo systemctl restart apache2.service\n```\n\nCentOS / Red Hat:\n\n```shell\nsudo systemctl restart httpd.service\n```\n\n### Nginx configuration\n\nSpecify 7 is web-server agnostic.\nExample [nginx.conf](https://github.com/specify/specify7/blob/production/nginx.conf)\n(note, you would have to adjust the host names and enable HTTPS).\n\n## Updating Specify 7\n\nSpecify 7.4.0 and prior versions were based on Python 2.7. If updating\nfrom one of these versions, it will be necessary to install Python 3.8\nby running the `apt-get` commands in the\n[Install system dependencies](#install-system-dependencies) and the\n[Production requirements](#production-requirements) steps. Then\nproceed as follows:\n\n0. Back up your Specify database using MySQL dump or the Specify backup\n and restore tool.\n\n1. Clone or download a new copy of this repository in a directory next\n to your existing installation.\n\n `git clone https://github.com/specify/specify7.git specify7-new-version`\n\n2. Copy the settings from the existing to the new installation.\n\n `cp specify7/specifyweb/settings/local* specify7-new-version/specifyweb/settings/`\n\n3. Make sure to update the `THICK_CLIENT_LOCATION` setting in\n `local_specify_settings.py`, if you are updating the Specify 6\n version.\n\n4. Update the system level dependencies by executing the _apt-get_\n command in the [Installing system\n dependencies](#installing-system-dependencies) section.\n\n5. Create a new virtualenv for the new installation by following the\n [Python Virtual Environment](#python-virtual-environment) section\n for the new directory.\n\n6. [Build](#building) the new version of Specify 7.\n\n7. Test it out with the [development server](#the-development-server).\n\n8. Deploy the new version by updating your Apache config to replace\n the old installation paths with the new ones and restarting Apache.\n\n9. Configure the Specify 7 worker process to execute at system start\n up as described in [The Specify 7 worker](#the-specify-7-worker) section.\n\n## Updating the database (Specify 6) version\n\nThe Specify database is updated from one version to the next by the\nSpecify 6 application. To update the database version, connect to the\ndatabase with a new version of Specify 6 and follow the Specify 6\nupdate procedures.\n\nOnce the database version is updated, a corresponding copy of Specify\n6 must be provided to the Specify 7 server by repeating\nthe [Installing Specify 6](#installing-specify-6) section of this guide for the\nnew version of Specify 6.\n\n[![analytics](https://www.google-analytics.com/collect?v=1&t=pageview&dl=https%3A%2F%2Fgithub.com%2Fspecify%2Fspecify7&uid=readme&tid=UA-169822764-3)]()\n\n## Localizing Specify 7\n\nThe Specify 7 interface is localized into a few languages out of the box. We welcome\ncontributions of new translations. 
We are using the\n[Weblate](https://hosted.weblate.org/projects/specify-7/) continuous\nlocalization platform.\n[Instructions on how you can contribute](https://discourse.specifysoftware.org/t/get-started-with-specify-7-localization/956)\n""",,"2012/02/07, 16:54:30",4278,GPL-2.0,3373,9620,"2023/10/23, 14:06:51",1003,573,3122,1372,2,82,2.5,0.5512375665047421,"2023/09/25, 17:56:54",v7.9.0,19,29,false,,false,true,,,https://github.com/specify,https://specifysoftware.org/,"University of Kansas, Lawrence",,,https://avatars.githubusercontent.com/u/2906014?v=4,,, gbifdb,Provide a relational database interface to a parquet-based serialization of gbif's AWS snapshots of its public data.,ropensci,https://github.com/ropensci/gbifdb.git,github,,Biodiversity and Species Distribution,"2023/10/19, 19:46:25",27,0,5,true,R,rOpenSci,ropensci,R,https://docs.ropensci.org/gbifdb/,"b'\n\n\n# gbifdb\n\n\n\n[![R-CMD-check](https://github.com/ropensci/gbifdb/workflows/R-CMD-check/badge.svg)](https://github.com/ropensci/gbifdb/actions)\n[![CRAN\nstatus](https://www.r-pkg.org/badges/version/gbifdb)](https://CRAN.R-project.org/package=gbifdb)\n\n\nThe goal of `gbifdb` is to provide a relational database interface to a\n`parquet`-based serialization of `gbif`\xe2\x80\x99s AWS snapshots of its public\ndata [^1]. Instead of requiring custom functions for filtering and\nselecting data from the central GBIF server (as in `rgbif`), `gbifdb`\nusers can take advantage of the full array of `dplyr` and `tidyr`\nfunctions which can be automatically translated to SQL by `dbplyr`.\nUsers already familiar with SQL can construct SQL queries directly with\n`DBI` instead. `gbifdb` sends these queries to\n[`duckdb`](https://duckdb.org), a high-performance, columnar-oriented\ndatabase engine which runs entirely inside the client (unlike\nserver-client databases such as MySQL or Postgres, no additional setup\nis needed beyond installing `gbifdb`). `duckdb` is able to execute\nthese SQL queries directly on-disk against the Parquet data files,\nside-stepping limitations of available RAM or the need to import the\ndata. Its highly optimized implementation can be faster even than\nin-memory operations in `dplyr`. `duckdb` supports the full set of SQL\ninstructions, including windowed operations like `group_by`+`summarise`\nas well as table joins.\n\n`gbifdb` has two mechanisms for providing database connections: one in\nwhich the Parquet snapshot of GBIF must first be downloaded locally, and\na second where the GBIF parquet snapshot can be accessed directly from\nan Amazon Public Data Registry S3 bucket without downloading a copy. 
The\nlatter approach will be faster for one-off operations and is also\nsuitable when using a cloud-based computing provider in the same region.\n\n## Installation\n\nYou can install the development version from [GitHub](https://github.com/) with:\n\n``` r\n# install.packages(""devtools"")\ndevtools::install_github(""ropensci/gbifdb"")\n```\n\n`gbifdb` has few dependencies: `arrow`, `duckdb` and `DBI` are required.\n\n## Getting Started\n\n``` r\nlibrary(gbifdb)\nlibrary(dplyr) # optional, for dplyr-based operations\n```\n\n### Remote data access\n\nTo begin working with GBIF data directly without downloading the data\nfirst, simply establish a remote connection using `gbif_remote()`.\n\n``` r\ngbif <- gbif_remote()\n```\n\nWe can now perform most `dplyr` operations:\n\n``` r\ngbif %>%\n filter(phylum == ""Chordata"", year > 1990) %>%\n count(class, year) %>%\n collect()\n#> # A tibble: 461 \xc3\x97 3\n#> class year n\n#> \n#> 1 Actinopterygii 2003 696289\n#> 2 Actinopterygii 2009 1152201\n#> 3 Elasmobranchii 2009 67477\n#> 4 Actinopterygii 2010 1348109\n#> 5 Elasmobranchii 2003 22638\n#> 6 Ascidiacea 2013 9151\n#> 7 Actinopterygii 2002 777535\n#> 8 Actinopterygii 2008 1311066\n#> 9 Elasmobranchii 2008 64769\n#> 10 Elasmobranchii 2002 21948\n#> # \xe2\x80\xa6 with 451 more rows\n```\n\nBy default, this relies on an `arrow` connection, which currently lacks\nsupport for some more complex windowed operations in `dplyr`. A user can\nspecify the option `to_duckdb = TRUE` in `gbif_remote()` (or simply pass\nthe connection to `arrow::to_duckdb()`) to create a `duckdb` connection.\nThis is slightly slower at this time. Keep in mind that as with any\ndatabase connection, to use non-`dplyr` functions the user will\ngenerally need to call `dplyr::collect()`, which pulls the data into\nworking memory. \nBe sure to subset the data appropriately first (e.g.\xc2\xa0with `filter`,\n`summarise`, etc), as attempting to `collect()` a large table will\nprobably exceed available RAM and crash your R session!\n\nWhen using a `gbif_remote()` connection, all I/O operations will be\nconducted over the network storage instead of your local disk, without\ndownloading the full dataset first. Consequently, this mechanism will\nperform best on platforms with faster network connections. These\noperations will be considerably slower than they would be if you\ndownload the entire dataset first (see below, unless you are on an AWS\ncloud instance in the same region as the remote host), but this does\navoid the download step altogether, which may be necessary if you do\nnot have 100+ GB free storage space or the time to download the whole\ndataset first (e.g.\xc2\xa0for one-off queries).\n\n### Local data\n\nFor extended analysis of GBIF, users may prefer to download the entire\nGBIF parquet data first. This requires over 100 GB free disk space, and\nwill be a time-consuming process the first time. However, once\ndownloaded, future queries will run much, much faster, particularly if\nyou are network-limited. Users can download the current release of GBIF\nto local storage like so:\n\n``` r\ngbif_download()\n```\n\nBy default, this will download to the directory given by `gbif_dir()`. 
\nAn alternative directory can be provided by setting the environment\nvariable `GBIF_HOME`, or by providing the path to the directory containing\nthe parquet files directly.\n\nOnce you have downloaded the parquet-formatted GBIF data, `gbif_local()`\nwill establish a connection to these local parquet files.\n\n``` r\ngbif <- gbif_local()\ngbif\n#> # Source: lazy query [?? x 48]\n#> # Database: duckdb_connection\n#> gbifid datasetkey occurrenceid kingdom phylum class order family genus\n#> \n#> 1 1572326202 0e2c20a3-3c3\xe2\x80\xa6 7B3E9B63FF9\xe2\x80\xa6 Animal\xe2\x80\xa6 Arthr\xe2\x80\xa6 Aran\xe2\x80\xa6 Capon\xe2\x80\xa6 Medi\xe2\x80\xa6\n#> 2 1572326211 0e2c20a3-3c3\xe2\x80\xa6 7B3E9B63FF8\xe2\x80\xa6 Animal\xe2\x80\xa6 Arthr\xe2\x80\xa6 Aran\xe2\x80\xa6 Capon\xe2\x80\xa6 Medi\xe2\x80\xa6\n#> 3 1572326213 0e2c20a3-3c3\xe2\x80\xa6 7B3E9B63FF9\xe2\x80\xa6 Animal\xe2\x80\xa6 Arthr\xe2\x80\xa6 Aran\xe2\x80\xa6 Capon\xe2\x80\xa6 Medi\xe2\x80\xa6\n#> 4 1572326222 0e2c20a3-3c3\xe2\x80\xa6 7B3E9B63FF8\xe2\x80\xa6 Animal\xe2\x80\xa6 Arthr\xe2\x80\xa6 Aran\xe2\x80\xa6 Capon\xe2\x80\xa6 Medi\xe2\x80\xa6\n#> 5 1572326224 0e2c20a3-3c3\xe2\x80\xa6 7B3E9B63FF8\xe2\x80\xa6 Animal\xe2\x80\xa6 Arthr\xe2\x80\xa6 Aran\xe2\x80\xa6 Capon\xe2\x80\xa6 Medi\xe2\x80\xa6\n#> 6 1572326210 0e2c20a3-3c3\xe2\x80\xa6 7B3E9B63FF9\xe2\x80\xa6 Animal\xe2\x80\xa6 Arthr\xe2\x80\xa6 Aran\xe2\x80\xa6 Capon\xe2\x80\xa6 Medi\xe2\x80\xa6\n#> 7 1572326209 0e2c20a3-3c3\xe2\x80\xa6 7B3E9B63FF8\xe2\x80\xa6 Animal\xe2\x80\xa6 Arthr\xe2\x80\xa6 Aran\xe2\x80\xa6 Capon\xe2\x80\xa6 Medi\xe2\x80\xa6\n#> 8 1572326215 0e2c20a3-3c3\xe2\x80\xa6 7B3E9B63FF8\xe2\x80\xa6 Animal\xe2\x80\xa6 Arthr\xe2\x80\xa6 Aran\xe2\x80\xa6 Capon\xe2\x80\xa6 Medi\xe2\x80\xa6\n#> 9 1572326228 0e2c20a3-3c3\xe2\x80\xa6 7B3E9B63FF8\xe2\x80\xa6 Animal\xe2\x80\xa6 Arthr\xe2\x80\xa6 Aran\xe2\x80\xa6 Capon\xe2\x80\xa6 Medi\xe2\x80\xa6\n#> 10 1572326205 0e2c20a3-3c3\xe2\x80\xa6 7B3E9B63FF8\xe2\x80\xa6 Animal\xe2\x80\xa6 Arthr\xe2\x80\xa6 Aran\xe2\x80\xa6 Capon\xe2\x80\xa6 Medi\xe2\x80\xa6\n#> # \xe2\x80\xa6 with more rows, and 39 more variables: species ,\n#> # infraspecificepithet , taxonrank , scientificname ,\n#> # verbatimscientificname , verbatimscientificnameauthorship ,\n#> # countrycode , locality , stateprovince ,\n#> # occurrencestatus , individualcount , publishingorgkey ,\n#> # decimallatitude , decimallongitude ,\n#> # coordinateuncertaintyinmeters , coordinateprecision , \xe2\x80\xa6\n```\n\n``` r\ncolnames(gbif)\n#> [1] ""gbifid"" ""datasetkey"" \n#> [3] ""occurrenceid"" ""kingdom"" \n#> [5] ""phylum"" ""class"" \n#> [7] ""order"" ""family"" \n#> [9] ""genus"" ""species"" \n#> [11] ""infraspecificepithet"" ""taxonrank"" \n#> [13] ""scientificname"" ""verbatimscientificname"" \n#> [15] ""verbatimscientificnameauthorship"" ""countrycode"" \n#> [17] ""locality"" ""stateprovince"" \n#> [19] ""occurrencestatus"" ""individualcount"" \n#> [21] ""publishingorgkey"" ""decimallatitude"" \n#> [23] ""decimallongitude"" ""coordinateuncertaintyinmeters"" \n#> [25] ""coordinateprecision"" ""elevation"" \n#> [27] ""elevationaccuracy"" ""depth"" \n#> [29] ""depthaccuracy"" ""eventdate"" \n#> [31] ""day"" ""month"" \n#> [33] ""year"" ""taxonkey"" \n#> [35] ""specieskey"" ""basisofrecord"" \n#> [37] ""institutioncode"" ""collectioncode"" \n#> [39] ""catalognumber"" ""recordnumber"" \n#> [41] ""identifiedby"" ""dateidentified"" \n#> [43] ""license"" ""rightsholder"" \n#> [45] ""recordedby"" ""typestatus"" \n#> [47] ""establishmentmeans"" ""lastinterpreted""\n```\n\nNow, 
we can use `dplyr` to perform standard queries:\n\n``` r\ngrowth <- gbif %>%\n filter(phylum == ""Chordata"", year > 1990) %>%\n count(class, year) %>% arrange(year)\ngrowth\n#> # Source: lazy query [?? x 3]\n#> # Database: duckdb_connection\n#> # Groups: class\n#> # Ordered by: year\n#> class year n\n#> \n#> 1 Cephalaspidomorphi 1991 1152\n#> 2 Elasmobranchii 1991 17521\n#> 3 Ascidiacea 1991 1602\n#> 4 Thaliacea 1991 669\n#> 5 Amphibia 1991 18443\n#> 6 Sarcopterygii 1991 13\n#> 7 Leptocardii 1991 36\n#> 8 1991 912\n#> 9 Actinopterygii 1991 363791\n#> 10 Holocephali 1991 1048\n#> # \xe2\x80\xa6 with more rows\n```\n\nRecall that with database connections in `dplyr`, the data remains in\nthe database (i.e.\xc2\xa0on disk, not in working RAM). \nThis is fine for any further operations using `dplyr`/`tidyr` functions\nwhich can be translated into SQL. \nUsing such functions we can usually reduce our resulting table to\nsomething much smaller, which can then be pulled into memory in R for\nfurther analysis using `collect()`:\n\n``` r\nlibrary(ggplot2)\nlibrary(forcats)\n# GBIF: the global bird information facility?\ngrowth %>%\n collect() %>%\n mutate(class = fct_lump_n(class, 6)) %>%\n ggplot(aes(year, n, fill=class)) + geom_col() +\n ggtitle(""GBIF observations of vertebrates by class"")\n```\n\n\n\n## Visualizing all of GBIF\n\nDatabase operations such as rounding provide an easy way to \xe2\x80\x9crasterize\xe2\x80\x9d\nthe data for spatial visualizations. Here we quickly generate a map where\ncolor intensity reflects the logarithmic occurrence count in each pixel:\n\n``` r\nlibrary(terra)\nlibrary(viridisLite)\n\ndb <- gbif_local()\ndf <- db |> mutate(latitude = round(decimallatitude,1),\n longitude = round(decimallongitude,1)) |> \n count(longitude, latitude) |> \n collect() |> \n mutate(n = log(n))\n\nr <- rast(df, crs=""epsg:4326"")\nplot(r, col= viridis(1e3), legend=FALSE, maxcell=6e6, colNA=""black"", axes=FALSE)\n```\n\n\n\n## Performance notes\n\nBecause `parquet` is a columnar-oriented format, performance can be\nimproved by including a `select()` call at the end of a dplyr function\nchain to only return the columns you actually need. This can be\nparticularly helpful on remote connections using `gbif_remote()`.\n\n[^1]: all CC0 and CC-BY licensed data in GBIF that have coordinates\n which passed automated quality checks (see the GBIF docs)\n'",,"2021/11/06, 19:25:41",718,CUSTOM,4,88,"2023/10/12, 21:41:56",0,7,9,6,13,0,0.0,0.012987012987012991,,,0,2,false,,false,true,,,https://github.com/ropensci,https://ropensci.org/,"Berkeley, CA",,,https://avatars.githubusercontent.com/u/1200269?v=4,,, diversitree,"Includes a number of comparative phylogenetic methods, mostly focussing on analysing diversification and character evolution.",richfitz,https://github.com/richfitz/diversitree.git,github,,Biodiversity and Species Distribution,"2023/10/02, 08:11:39",27,0,3,true,R,,,"R,TeX,C,Fortran,HTML,C++,CSS,Makefile,M4,Shell,Python",http://www.zoology.ubc.ca/prog/diversitree,"b'# diversitree: comparative phylogenetic analyses of diversification\n\n\n[![R-CMD-check](https://github.com/richfitz/diversitree/actions/workflows/R-CMD-check.yaml/badge.svg)](https://github.com/richfitz/diversitree/actions/workflows/R-CMD-check.yaml)\n\n\nThis repository contains ""diversitree"". These are my experimental sources,\nthough, and they may not compile or work for you. 
You may prefer the\nreleased version from CRAN:\n\n > install.packages(""diversitree"")\n\n\nThe interesting directories are:\n\n* inst/tests: testing functions that exercise most of the\npackage\'s main features. Running `./run_tests.R` will run the tests. These take too long to run on CRAN (over a minute), so are not set up in the usual way.\n* doc: Vignettes, and their required data files.\n\n## Installing diversitree\n\nClone the repository with\n\n git clone git://github.com/richfitz/diversitree.git\n\nThe package should then be installable in the usual way. You\'ll need a C, C++ and Fortran compiler.\n\nTo install and specify the location of the fftw library in a\nnon-standard place, a line like this is required:\n R CMD INSTALL diversitree --configure-args=\'--with-fftw=/path/to/fftw\'\nwhere the path is such that the files\n /path/to/fftw/include/fftw3.h\n /path/to/fftw/lib/libfftw3.so\nexist.\n\nOn Windows, set the environment variable LIB_FFTW to point to the\ndirectory that contains include/fftw3.h, and install the package.\n\nIf fftw is not found, installation will continue, but the (relatively)\nfast C-based QuaSSE integration will not be available. The R-based\nfft integrator and the method-of-lines integrator will be available.\n\n## Branches\n\nThe ""master"" branch contains the bleeding edge version of diversitree.\nIt may or may not work for you. The ""release"" branch contains stable\nreleases.\n'",,"2012/03/20, 20:29:11",4236,CUSTOM,16,538,"2023/10/02, 08:11:39",14,16,28,3,23,0,0.2,0.018656716417910446,,,0,4,false,,false,false,,,,,,,,,,, rmangal,Retrieve and explore data from the ecological interactions database MANGAL.,ropensci,https://github.com/ropensci/rmangal.git,github,,Biodiversity and Species Distribution,"2023/01/31, 13:18:26",12,0,2,true,R,rOpenSci,ropensci,"R,TeX",https://docs.ropensci.org/rmangal,"b'# rmangal :package: - an R Client for the Mangal database \n\n[![](https://badges.ropensci.org/332_status.svg)](https://github.com/ropensci/software-review/issues/332)\n[![R CMD Check](https://github.com/ropensci/rmangal/actions/workflows/R-CMD-check.yaml/badge.svg)](https://github.com/ropensci/rmangal/actions/workflows/R-CMD-check.yaml)\n[![lint](https://github.com/ropensci/rmangal/actions/workflows/lint.yaml/badge.svg)](https://github.com/ropensci/rmangal/actions/workflows/lint.yaml)\n[![codecov](https://app.codecov.io/gh/ropensci/rmangal/branch/master/graph/badge.svg?token=lGqUVLM2o3)](https://app.codecov.io/gh/ropensci/ropensci/rmangal)\n[![Project Status: Active \xe2\x80\x93 The project has reached a stable, usable state and is being actively developed.](https://www.repostatus.org/badges/latest/active.svg)](https://www.repostatus.org/#active)\n[![CRAN status](https://www.r-pkg.org:443/badges/version/rmangal)](https://CRAN.R-project.org/package=rmangal)\n\n\n## Context\n\n[Mangal](https://mangal.io/#/) -- a global ecological interactions database --\nserializes ecological interaction matrices into nodes (e.g., taxa, individuals\nor populations) and interactions (i.e. edges). For each network, Mangal offers\nthe opportunity to store study context such as the location, sampling\nenvironment, inventory date and information pertaining to the original\npublication. 
For every node involved in the ecological networks, Mangal\nreferences unique taxonomic identifiers such as Encyclopedia of Life (EOL),\nCatalogue of Life (COL), Global Biodiversity Information Facility (GBIF), etc.,\nand can extend node information to individual traits.\n\n**rmangal** is an R client to the Mangal database and provides various search functions\nto explore its content. It offers methods to retrieve\nnetworks structured as `mgNetwork` or `mgNetworksCollection` S3 objects and\nmethods to convert `mgNetwork` to other class objects in order to analyze and\nvisualize network properties: [`igraph`](https://igraph.org/r/),\n[`tidygraph`](https://github.com/thomasp85/tidygraph), and\n[`ggraph`](https://github.com/thomasp85/ggraph).\n\n\n## Installation\n\nSo far, only the development version is available and can be installed via the [remotes](https://CRAN.R-project.org/package=remotes) :package:\n\n```r\nR> remotes::install_github(""ropensci/rmangal"")\nR> library(""rmangal"")\n```\n\n\n## How to use `rmangal`\n\nThere are [seven `search_*()` functions](https://docs.ropensci.org/rmangal/reference/index.html#section-explore-database) to explore the content of Mangal, for\ninstance `search_datasets()`:\n\n```r\nR> mgs <- search_datasets(""lagoon"")\nFound 2 datasets\n```\n\nOnce this first step is achieved, the networks found can be retrieved with the `get_collection()` function.\n\n```r\nR> mgn <- get_collection(mgs)\n```\n\n`get_collection()` returns an object `mgNetwork` if there is one network\nreturned, otherwise an object `mgNetworksCollection`, which is a list of\n`mgNetwork` objects.\n\n\n```r\nR> class(mgn)\n[1] ""mgNetworksCollection""\nR> mgn\nA collection of 3 networks\n\n* Network # from data set #\n* Description: Dietary matrix of the Huizache\xe2\x80\x93Caimanero lagoon\n* Includes 189 edges and 26 nodes\n* Current taxonomic IDs coverage for nodes of this network:\n --> ITIS: 81%, BOLD: 81%, EOL: 85%, COL: 81%, GBIF: 0%, NCBI: 85%\n* Published in ref # DOI:10.1016/s0272-7714(02)00410-9\n\n* Network # from data set #\n* Description: Food web of the Brackish lagoon\n* Includes 27 edges and 11 nodes\n* Current taxonomic IDs coverage for nodes of this network:\n --> ITIS: 45%, BOLD: 45%, EOL: 45%, COL: 45%, GBIF: 18%, NCBI: 45%\n* Published in ref # DOI:NA\n\n* Network # from data set #\n* Description: Food web of the Costal lagoon\n* Includes 34 edges and 13 nodes\n* Current taxonomic IDs coverage for nodes of this network:\n --> ITIS: 54%, BOLD: 54%, EOL: 54%, COL: 54%, GBIF: 15%, NCBI: 54%\n* Published in ref # DOI:NA\n```\n\n[`igraph`](https://igraph.org/r/) and\n[`tidygraph`](https://github.com/thomasp85/tidygraph) offer powerful features to\nanalyze networks, and **rmangal** provides functions to convert `mgNetwork` to\n`igraph` and `tbl_graph` so that the user can easily benefit from those\npackages.\n\n```r\nR> ig <- as.igraph(mgn[[1]])\nR> class(ig)\n[1] ""igraph""\nR> library(tidygraph)\nR> tg <- as_tbl_graph(mgn[[1]])\nR> class(tg)\n[1] ""tbl_graph"" ""igraph""\n```\n\n:book: Note that the vignette [""Get started with\nrmangal""](https://docs.ropensci.org/rmangal/articles/rmangal.html) will guide\nthe reader through several examples and provide further details about **rmangal** features.\n\n## How to publish ecological networks\n\nWe are working on that part. 
The network publication process will be\nfacilitated with structured objects and a test suite to maintain data integrity\nand quality. Comments and suggestions are welcome; feel free to open issues.\n\n## `rmangal` vs `rglobi`\n\nThose interested only in pairwise interactions among taxa may consider using\n`rglobi`, an R package that provides an interface to the [GloBi\ninfrastructure](https://www.globalbioticinteractions.org/about.html). GloBi\nprovides open access to aggregated interactions from heterogeneous sources. In\ncontrast, Mangal gives access to the original networks and opens the gate to\nstudying ecological network properties (e.g., connectance, degree) along large\nenvironmental gradients, which wasn\'t possible using the GloBi infrastructure.\n\n\n## Older versions \n\n* See https://github.com/mangal-interactions/rmangal-v1 for the first version of the client.\n* Note that due to changes in the RESTful API, there is no backward compatibility.\n\n\n## Acknowledgment\n\nWe are grateful to [Noam Ross](https://github.com/noamross) for acting as an editor during the review process. We also thank [Anna Willoughby](https://github.com/arw36) and [Thomas Lin Pedersen](https://github.com/thomasp85) for reviewing the package. Their comments strongly contributed to improving the quality of rmangal.\n\n\n## Code of conduct\n\nPlease note that the `rmangal` project is released with a [Contributor Code of Conduct](https://mangal.io/doc/r/CODE_OF_CONDUCT.html). By contributing to this project, you agree to abide by its terms.\n\n## Meta\n\n* Get citation information for `rmangal` in R by running `citation(package = \'rmangal\')`\n* Please [report any issues or bugs](https://github.com/ropensci/rmangal/issues).\n\n[![rofooter](https://ropensci.org/public_images/github_footer.png)](https://ropensci.org)\n'",,"2018/02/01, 14:38:05",2092,CUSTOM,3,604,"2023/01/31, 13:18:27",6,40,107,1,267,1,0.5,0.37283950617283945,"2021/11/24, 19:40:35",v2.1.0,0,7,false,,true,true,,,https://github.com/ropensci,https://ropensci.org/,"Berkeley, CA",,,https://avatars.githubusercontent.com/u/1200269?v=4,,, EcoReleve,A free and open source biodiversity data entry software.,natural-solutions/reneco,https://gitlab.com/natural-solutions/reneco/ecoreleve-data,gitlab,,Biodiversity and Species Distribution,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,, SpeciesDistributionToolkit,A collection of Julia packages forming a toolkit meant to deal with species distribution data.,PoisotLab,https://github.com/PoisotLab/SpeciesDistributionToolkit.jl.git,github,"bioclim,biodiversity,biogeography,chelsa,ecology,species-distribution-models,earthenv",Biodiversity and Species Distribution,"2023/10/16, 18:18:20",11,0,10,true,Julia,the Poisot lab,PoisotLab,Julia,https://poisotlab.github.io/SpeciesDistributionToolkit.jl/,"b'# SpeciesDistributionToolkit\n\n`SpeciesDistributionToolkit.jl` is a collection of Julia packages forming a\ntoolkit meant to deal with (surprise!) species distribution data. Specifically,\nthe goal of these packages put together is to provide a consistent way to handle\noccurrence data, put them on a map, and make them interact with environmental\ninformation.\n\n> To get a sense of the next steps and help with the development, see the \n[issues/bugs tracker](https://github.com/orgs/PoisotLab/projects/3)\n\nThis package is *not* intended to perform any actual modeling, but can serve as\na robust basis for such models. 
We offer an interface from this package to `MLJ`\nto facilitate prediction.\n\nFrom a technical point of view, this *repository* is a [Monorepo][mnrp]\nconsisting of several related packages to work with species distribution data.\nThese packages were formerly independent and tied together with moxie and\n`Require`, which was less than ideal. All the packages forming the toolkit share\na version number (which was set based on the version number of the eldest\npackage, `SimpleSDMLayers`), and the toolkit itself has its own version number.\n\n[mnrp]: https://monorepo.tools/\n\nNote that the packages *do* work independently as well, but they are now *designed*\nto work together. In particular, when installing `SpeciesDistributionToolkit`,\nyou get access to all the functions and types exported by the component\npackages. This is the *recommended* way to interact with the packages.\n\n## Current component packages\n\n**Getting occurrence data**: `GBIF.jl`, a wrapper around the GBIF API, to\nretrieve taxa and occurrence datasets, and perform filtering on these occurrence\ndata based on flags\n\n**Getting environmental data**: `SimpleSDMDatasets.jl`, an efficient way to\ndownload and store environmental raster data for consumption by other packages.\n\n**Using environmental data**: `SimpleSDMLayers.jl`, a series of types and common\noperations on raster data\n\n**Simulating occurrence data**: `Fauxcurrences.jl`, a package to simulate\nrealistic species occurrence data from a known series of occurrences, with\nadditional statistical constraints\n\n**Getting organism silhouettes**: `Phylopic.jl`, a wrapper around the Phylopic\nAPI\n\n'",,"2022/11/08, 01:46:54",351,CUSTOM,276,1477,"2023/10/16, 18:18:22",19,139,197,197,9,3,0.0,0.1674382716049383,"2023/10/15, 02:40:38",v0.0.10,1,9,false,,true,true,,,https://github.com/PoisotLab,http://poisotlab.io,"Montréal, Canada",,,https://avatars.githubusercontent.com/u/7968754?v=4,,, tidysdm, Species Distribution Models in R.,EvolEcolGroup,https://github.com/EvolEcolGroup/tidysdm.git,github,,Biodiversity and Species Distribution,"2023/10/04, 13:41:39",5,0,5,true,R,Evolutionary Ecology Group at University of Cambridge,EvolEcolGroup,R,https://evolecolgroup.github.io/tidysdm/,"b'# tidysdm \n\n\n[![R-CMD-check](https://github.com/EvolEcolGroup/tidysdm/actions/workflows/R-CMD-check.yaml/badge.svg)](https://github.com/EvolEcolGroup/tidysdm/actions/workflows/R-CMD-check.yaml)\n[![codecov](https://codecov.io/gh/EvolEcolGroup/tidysdm/branch/main/graph/badge.svg?token=KLOzxJoLBO)](https://codecov.io/gh/EvolEcolGroup/tidysdm)\n\n\nThe goal of `tidysdm` is to implement Species Distribution Models using the\n`tidymodels` framework. The advantage of `tidymodels` is that the model syntax and the results\nreturned to the user are standardised, thus providing a coherent interface to\nmodelling. Given the variety of models required for SDM, `tidymodels` is an\nideal framework. `tidysdm` provides a number of wrappers and specialised\nfunctions to facilitate the fitting of SDM with `tidymodels`.\n\nBesides modelling contemporary species, `tidysdm` has a number of functions\nspecifically designed to work with palaeontological data. 
\n\nWhilst users are free\nto use their own environmental data, the articles showcase the potential integration\nwith [`pastclim`](https://evolecolgroup.github.io/pastclim/dev/index.html), \nwhich helps downloading and manipulating present day data,\nfuture predictions, and palaeoclimate reconstructions.\n\nAn overviewof the capabilities of `tidysdm` is given in [Leonardi et al.\n(2023)](https://doi.org/10.1101/2023.07.24.550358).\n\n## Installation\n\n`tidysdm` is still at the **beta** stage of development, and there is no official release on\nCRAN yet.\n\nYou can install the latest version of tidysdm from [GitHub](https://github.com/) with:\n\n``` r\n# install.packages(""devtools"")\ndevtools::install_github(""EvolEcolGroup/tidysdm"")\n```\n\nFor the development version of `tidysdm`, which includes experimental features\nthat might not be mature yet, use:\n``` r\ndevtools::install_github(""EvolEcolGroup/tidysdm"", ref = ""dev"")\n```\n\n\nTo take advantage of the integration with `pastclim` highlighted in the articles, you\nwill need the `dev` version (NOT the one on CRAN). You can obtain it with:\n``` r \ninstall.packages(\'terra\', repos=\'https://rspatial.r-universe.dev\')\n\ndevtools::install_github(""EvolEcolGroup/pastclim"", ref=""dev"")\n```\n\n\n## Overview of functionality\n\nOn its dedicated [website](https://evolecolgroup.github.io/tidysdm/),\nyou can find Articles giving you a step-by-step [overview of the\nfitting SDMs to contemporary species](https://evolecolgroup.github.io/tidysdm/articles/a0_tidysdm_overview.html),\nas well as an equivalent [tutorial for using palaeontological data](https://evolecolgroup.github.io/tidysdm/articles/a1_palaeodata_application.html).\nFurthermore, there is an [Article with examples of how to leverage various\nfeatures of tidymodels](https://evolecolgroup.github.io/tidysdm/dev/articles/a2_tidymodels_additions.html) that are not commonly adopted in SDM pipelines\n\nThere is also a [dev\nversion](https://evolecolgroup.github.io/tidysdm/dev/) of the site\nupdated for the `dev` branch of `tidysdm` (on the top left of the dev\nwebsite, the version number is in red and in the format x.x.x.9xxx,\nindicating it is a development version).\n\n## When something does not work\n\nWhat should you do if you get an error when trying to fit a model? `tidysdm`\nis a relatively new package, so it might well be that, when you get an\nerror, you might have encountered a bug. However, it is also possible that you\nhave misspecified your model (and so the error comes from `tidymodels`, because\nyour model is not valid). We have prepared an [Article on how to diagnose failing\nmodels](https://evolecolgroup.github.io/tidysdm/dev/articles/a3_troubleshooting.html).\nIt is not a fully comprehensive list of everything that could go wrong, but it will\nhopefully give you ideas on how to dig deeper in what is wrong. You should also\ncheck the [issues on\nGitHub](https://github.com/EvolEcolGroup/tidysdm/issues) to see whether\nthe problem has already been reported. \n\nIf you are convinced\nthat the problem is a bug in `tidysdm`, feel free to create an\nnew issue. 
Please make sure you have updated to the latest version of\n`tidysdm` and all other packages on your\nsystem, and provide [a reproducible\nexample](https://stackoverflow.com/questions/5963269/how-to-make-a-great-r-reproducible-example)\nfor the developers to investigate the problem.\n'",",https://doi.org/10.1101/2023.07.24.550358","2023/07/14, 09:50:27",103,AGPL-3.0,111,111,"2023/10/06, 21:23:49",2,18,19,19,19,0,0.4,0.15384615384615385,,,0,5,false,,false,false,,,https://github.com/EvolEcolGroup,http://www.eeg.zoo.cam.ac.uk/,,,,https://avatars.githubusercontent.com/u/44270007?v=4,,, GeoNature-citizen,A free and Open Source web solution for citizen science projects for biodiversity data collection.,PnX-SI,https://github.com/PnX-SI/GeoNature-citizen.git,github,,Biodiversity and Species Distribution,"2023/09/18, 13:48:32",19,0,3,true,TypeScript,SI des parcs nationaux français,PnX-SI,"TypeScript,Python,HTML,CSS,PLpgSQL,Shell,JavaScript,Dockerfile,SCSS",https://geonature-citizen.readthedocs.io,"b""# GeoNature-citizen\n\nA participatory biodiversity inventory portal for the general public ([**Demo**](http://democitizen.geonature.fr)).\n\n![logo](https://github.com/PnX-SI/GeoNature-citizen/raw/master/frontend/src/assets/logo.png)\n\n:bangbang: **Project under development, currently in beta**\n\nGeoNature-citizen is a free and Open Source web solution for citizen science projects for biodiversity data collection. It is fully customizable. Your platform may consist of a single program or of several data collection programs, and may be based on an existing or an ad hoc list of species.\n\nData collection is gamified with badges and scores to improve user engagement. The platform can also be configured to allow or disallow the creation of new user accounts.\n\nIt is based on a fully Open Source stack, from PostgreSQL to Angular. 
Documentation: https://geonature-citizen.readthedocs.io\n\n## Use cases\n\n- https://obs.mercantour-parcnational.fr\n- https://www.a-vos-mares.org/participez/\n- http://abc-meylan.lpo-aura.org/obs/home\n- http://biodiv-valenceromansagglo.lpo-aura.org\n- https://biomap.champs-libres.be/fr/home\n- https://gncitizen.lpo-aura.org/fr/home\n- https://citizen.nature-occitanie.org/fr/home\n- https://phenoclim.org/accueil/individus-phenoclim/\n- https://atlasdelabiodiversite.cote-emeraude.fr/fr/home\n- https://enquetes.lashf.org/fr/home\n\n## Software stack\n\n### Backend (API)\n\n- Python 3\n - Flask (API engine)\n - flask-jwt-extended (for authentication)\n - SQLAlchemy\n- PostgreSQL / Postgis\n\n### Frontend\n\n- NodeJS\n- Angular 8\n- LeafletJS\n- Bootstrap 4.1\n\n### Dependencies\n\nGeoNature-citizen relies on [TaxHub](https://github.com/PnX-SI/TaxHub) for the creation of the species lists used in the programs.\n\n### Installation\n\n- Run the install_app.sh script to install the whole application and its dependencies (postgres, taxhub ...)\n- On the first run, the script will create a settings.ini file in config\n- Replace all the variables with your server's values\n- Run the install_app.sh script again\n - The frontend and backend configuration files will then be created and configured\n - The Flask server will be started via supervisor: api_geonature\n - If you chose the server-side mode for the frontend, it will be started via supervisor (geonature) on port 4000\n\n### Update\n\n- Run the update_app.sh script\n - The script will pull the changes from git\n - It will transpile the frontend and restart the supervisor services if needed\n - [Warning] if SQL changes have been made, they will have to be applied manually\n\n## Origin of the project\n\nThis project was initially developed to meet the needs of participatory data collection within communal/territorial biodiversity atlas initiatives (ABC/ABT).\nThe first version of this project is the result of a pooled effort between several projects:\n\n- The territorial biodiversity atlas project of [Valence Romans Agglo](http://www.valenceromansagglo.fr/fr/index.html), in partnership with the [LPO Auvergne-Rh\xc3\xb4ne-Alpes](https://auvergne-rhone-alpes.lpo.fr/).\n- The participatory inventory projects of the [Parc national du Mercantour](http://www.mercantour-parcnational.fr/fr) and of the [Syndicat Mixte pour la gestion et la protection de la Camargue Gardoise](https://www.camarguegardoise.com/), implemented by [Natural Solutions](https://www.natural-solutions.eu/).\n\nIt is one of the building blocks of the GeoNature project, led by the [Parcs nationaux de France](http://www.parcsnationaux.fr/fr), and benefits from the technical support of the [Parc national des Ecrins](http://www.ecrins-parcnational.fr/).\n\n## Contributors\n\n[![Contributors](https://contrib.rocks/image?repo=PnX-SI/GeoNature-citizen)](https://github.com/PnX-SI/GeoNature-citizen/graphs/contributors)\n""",,"2018/06/23, 14:17:01",1950,AGPL-3.0,2,1815,"2023/09/18, 13:48:32",90,162,272,10,37,6,0.1,0.528052805280528,"2021/10/05, 
21:41:48",v0.99.4-dev,0,15,false,,false,false,,,https://github.com/PnX-SI,,,,,https://avatars.githubusercontent.com/u/10531541?v=4,,, galah,"An R interface to biodiversity data hosted by the living atlases; a set of organisations that share a common codebase, and act as nodes of the Global Biodiversity Information Facility.",AtlasOfLivingAustralia,https://github.com/AtlasOfLivingAustralia/galah-R.git,github,r,Biodiversity and Species Distribution,"2023/10/14, 05:19:25",28,0,4,true,R,Atlas of Living Australia,AtlasOfLivingAustralia,"R,CSS",https://galah.ala.org.au,"b'\n\n\n

galah
\n\n\n[![CRAN\nstatus](https://www.r-pkg.org/badges/version/galah)](https://cran.r-project.org/package=galah)\n\n\n\n------------------------------------------------------------------------\n\n`galah` is an R interface to biodiversity data hosted by the \xe2\x80\x98living\natlases\xe2\x80\x99: a set of organisations that share a common codebase, and act\nas nodes of the Global Biodiversity Information Facility\n([GBIF](https://www.gbif.org)). These organisations collate and store\nobservations of individual life forms, using the [\xe2\x80\x98Darwin\nCore\xe2\x80\x99](https://dwc.tdwg.org) data standard. `galah` was built and is\nmaintained by the [Science & Decision Support\nTeam](https://labs.ala.org.au) at the [Atlas of Living\nAustralia](https://www.ala.org.au) (ALA).\n\n`galah` enables users to locate and download species occurrence records\n(observations, specimens, eDNA records, etc.), taxonomic information, or\nassociated media such as images or sounds, and to restrict their queries\nto particular taxa or locations. Users can specify which columns are\nreturned by a query, or restrict their results to occurrences that meet\nparticular data-quality criteria. All functions return a `tibble` as\ntheir standard format, except `atlas_taxonomy`, which returns a tree\nconsisting of `Node` objects using the `data.tree` package.\n\nThe package is named for the bird of the same name (*Eolophus\nroseicapilla*), a widely-distributed endemic Australian species. The\nlogo was designed by [Ian Brennan](https://www.iangbrennan.org/).\n\nIf you have any comments, questions or suggestions, please [contact\nus](mailto:support@ala.org.au).\n\n
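As a quick sketch of what a typical query looks like (the email, taxon\nand year below are placeholders, and the pipe-style verbs assume a\nrecent release of the package):\n\n``` r\nlibrary(galah)\n\n# A registered email is required before downloading occurrence records\ngalah_config(email = ""you@example.org"")\n\n# Count the records matching a taxon and a year filter...\ngalah_call() |>\n  galah_identify(""Eolophus roseicapilla"") |>\n  galah_filter(year >= 2020) |>\n  atlas_counts()\n\n# ...then download the matching occurrences as a tibble\nocc <- galah_call() |>\n  galah_identify(""Eolophus roseicapilla"") |>\n  galah_filter(year >= 2020) |>\n  atlas_occurrences()\n```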
\n\n------------------------------------------------------------------------\n\n## Getting started\n\n- The [quick start\n guide](https://galah.ala.org.au/R/articles/quick_start_guide.html)\n provides an introduction to the package functions.\n- For an outline of the package structure, and a list of all the\n available functions, run `?galah` or view the [reference\n page](https://galah.ala.org.au/R/index.html).\n\n------------------------------------------------------------------------\n\n## Installation\n\nInstall from CRAN:\n\n``` r\ninstall.packages(""galah"")\n```\n\nInstall the development version from GitHub:\n\n``` r\ninstall.packages(""remotes"")\nremotes::install_github(""AtlasOfLivingAustralia/galah"")\n```\n\nOn Linux you will first need to ensure that `libcurl` and `v8` (version\n\\<= 3.15) are installed on your system \xe2\x80\x94 e.g.\xc2\xa0on Ubuntu/Debian, open a\nterminal and do:\n\n``` sh\nsudo apt-get install libcurl4-openssl-dev libv8-3.14-dev\n```\n\n`galah` depends on `sf` for location-based searches. To install `galah`\nyou will need to make sure your system meets the `sf` system\nrequirements, as specified [here](https://cran.r-project.org/package=sf)\n\n------------------------------------------------------------------------\n\n## Cheat sheet\n\n\n\n------------------------------------------------------------------------\n\n## Citations\n\nTo generate a citation for the package version you are using, you can\nrun\n\n``` r\ncitation(package = ""galah"")\n```\n\nIf you\xe2\x80\x99re using occurrence data downloaded through `galah` in a\npublication, please generate a DOI and cite it. To request a DOI for a\ndownload of occurrence record, set `mint_doi = TRUE` in a call to\n`atlas_occurrences()`. To generate a citation for the downloaded\noccurrence records, pass the `data.frame` generated to\n`atlas_citation()`.\n\n``` r\n# Download occurrence records with a DOI \nocc <- atlas_occurrences(..., mint_doi = TRUE)\n\n# See DOI\nattr(occ, ""doi"")\n\n# Generate citation\natlas_citation(occ)\n```\n'",,"2020/12/06, 22:49:59",1053,AGPL-3.0,92,2400,"2023/10/09, 04:43:39",26,28,184,38,16,0,0.0,0.6297743055555556,"2023/10/14, 21:57:30",v1.5.4,0,15,false,,false,false,,,https://github.com/AtlasOfLivingAustralia,https://www.ala.org.au,Australia,,,https://avatars.githubusercontent.com/u/7296572?v=4,,, elapid,"Species distribution modeling tools, including a python implementation of Maxent.",earth-chris,https://github.com/earth-chris/elapid.git,github,"biodiversity-informatics,biogeography,geospatial,maxent,niche-modelling,species-distribution-modelling",Biodiversity and Species Distribution,"2023/04/19, 18:33:59",38,0,26,true,Python,,,"Python,Makefile",https://elapid.org,"b'

Contemporary species distribution modeling tools for python.
\n\n![GitHub](https://img.shields.io/github/license/earth-chris/elapid)\n![PyPI version](https://img.shields.io/pypi/v/elapid)\n![Anaconda version](https://anaconda.org/conda-forge/elapid/badges/version.svg)\n![PyPI downloads](https://img.shields.io/pypi/dm/elapid)\n![GitHub last commit](https://img.shields.io/github/last-commit/earth-chris/elapid)\n[![DOI](https://joss.theoj.org/papers/10.21105/joss.04930/status.svg)](https://doi.org/10.21105/joss.04930)\n\n---\n\n**Documentation**: [earth-chris.github.io/elapid](https://earth-chris.github.io/elapid)\n\n**Source code**: [earth-chris/elapid](https://github.com/earth-chris/elapid)\n\n---\n\n## :snake: Introduction\n\n`elapid` is a series of species distribution modeling tools for python. This includes a custom implementation of [Maxent][home-maxent] and a suite of methods to simplify working with biogeography data.\n\nThe name is an homage to *A Biogeographic Analysis of Australian Elapid Snakes* (H.A. Nix, 1986), the paper widely credited with defining the essential bioclimatic variables to use in species distribution modeling. It\'s also a snake pun (a python wrapper for mapping snake biogeography).\n\n---\n\n## :seedling: Installation\n\n`pip install elapid` or `conda install -c conda-forge elapid`\n\nInstalling `glmnet` is optional, but recommended. This can be done with `pip install elapid[glmnet]` or `conda install -c conda-forge elapid glmnet`. For more support, and for information on why this package is recommended, see [this page](https://elapid.org/install#installing-glmnet).\n\nThe `conda` install is recommended for Windows users. While there is a `pip` distribution, you may experience some challenges. The easiest way to overcome them is to use [Windows Subsystem for Linux (WSL)](https://docs.microsoft.com/en-us/windows/wsl/about). Otherwise, see [this page](https://elapid.org/install) for support.\n\n---\n\n## :deciduous_tree: Why use elapid?\n\nThe amount and quality of biogeographic data have increased dramatically over the past decade, as have cloud-based tools for working with it. `elapid` was designed to provide a set of modern, python-based tools for working with species occurrence records and environmental covariates to map different dimensions of a species\' niche.\n\n`elapid` supports working with modern geospatial data formats and uses contemporary approaches to training statistical models. It uses `sklearn` conventions to fit and apply models, `rasterio` to handle raster operations, `geopandas` for vector operations, and processes data under the hood with `numpy`.\n\nThis makes it easier to do things like fit/apply models to multi-temporal and multi-scale data, fit geographically-weighted models, create ensembles, precisely define background point distributions, and summarize model predictions.\n\nIt does the following things reasonably well:\n\n:globe_with_meridians: **Point sampling**\n\nSelect random geographic point samples (aka background or pseudoabsence points) within polygons or rasters, handling `nodata` locations, as well as sampling from bias maps (using `elapid.sample_raster()`, `elapid.sample_vector()`, or `elapid.sample_bias_file()`).\n\n:chart_with_upwards_trend: **Vector annotation**\n\nExtract and annotate point data from rasters, creating `GeoDataFrames` with sample locations and their matching covariate values (using `elapid.annotate()`). 
On-the-fly reprojection, dropping nodata, multi-band inputs and multi-file inputs are all supported.\n\n:bar_chart: **Zonal statistics**\n\nCalculate zonal statistics from multi-band, multi-raster data into a single `GeoDataFrame` from one command (using `elapid.zonal_stats()`).\n\n:bug: **Feature transformations**\n\nTransform covariate data into derivative `features` to expand data dimensionality and improve prediction accuracy (like `elapid.ProductTransformer()`, `elapid.HingeTransformer()`, or the all-in-one `elapid.MaxentFeatureTransformer()`).\n\n:bird: **Species distribution modeling**\n\nTrain and apply species distribution models based on annotated point data, configured with sensible defaults (like `elapid.MaxentModel()` and `elapid.NicheEnvelopeModel()`).\n\n:satellite: **Training spatially-aware models**\n\nCompute spatially-explicit sample weights, checkerboard train/test splits, or geographically-clustered cross-validation splits to reduce spatial autocorrelation effects (with `elapid.distance_weights()`, `elapid.checkerboard_split()` and `elapid.GeographicKFold()`).\n\n:earth_asia: **Applying models to rasters**\n\nApply any pixel-based model with a `.predict()` method to raster data to easily create prediction probability maps (like training a `RandomForestClassifier()` and applying with `elapid.apply_model_to_rasters()`).\n\n:cloud: **Cloud-native geo support**\n\nWork with cloud- or web-hosted raster/vector data (on `https://`, `gs://`, `s3://`, etc.) to keep your disk free of temporary files.\n\nCheck out some example code snippets and workflows on the [Working with Geospatial Data](https://elapid.org/examples/WorkingWithGeospatialData/) page.\n\n---\n\n:snake: `elapid` requires some effort on the user\'s part to draw samples and extract covariate data. This is by design.\n\nSelecting background samples, computing sample weights, splitting train/test data, and specifying training parameters are all critical modeling choices that have profound effects on inference and interpretation.\n\nThe extra flexibility provided by `elapid` enables more control over the seemingly black-box approach of Maxent, helping users to better tune and evaluate their models.\n\n---\n\n## How to cite\n\nBibTeX:\n\n```\n@article{Anderson2023,\n title = {elapid: Species distribution modeling tools for Python},\n journal = {Journal of Open Source Software},\n author = {Christopher B. 
Anderson},\n doi = {10.21105/joss.04930},\n url = {https://doi.org/10.21105/joss.04930},\n year = {2023},\n publisher = {The Open Journal},\n volume = {8},\n number = {84},\n pages = {4930},\n}\n```\n\nOr click ""Cite this repository"" on the [GitHub page](https://github.com/earth-chris/elapid).\n\n---\n\n## Developed by\n\n[Christopher Anderson](https://cbanderson.info)[^1] [^2]\n\n![Twitter Follow](https://img.shields.io/twitter/follow/earth_chris)\n![GitHub Stars](https://img.shields.io/github/stars/earth-chris?affiliations=OWNER%2CCOLLABORATOR&style=social)\n\n[home-maxent]: https://biodiversityinformatics.amnh.org/open_source/maxent/\n[r-maxnet]: https://github.com/mrmaxent/maxnet\n[^1]: [Earth Observation Lab, Planet Labs PBC](https://www.planet.com)\n[^2]: [Center for Conservation Biology, Stanford University](https://ccb.stanford.edu)\n'",",https://doi.org/10.21105/joss.04930,https://doi.org/10.21105/joss.04930","2021/03/20, 04:00:24",949,MIT,40,452,"2023/10/13, 06:37:33",2,61,96,28,12,1,0.0,0.0,"2023/04/10, 08:11:28",v1.0.1,0,1,false,,false,false,,,,,,,,,,, ReMobidyc,A multi-agent simulator for individual-based modeling in population dynamics and ecotoxicology.,ReMobidyc,https://github.com/ReMobidyc/ReMobidyc.git,github,"multi-agent-simulation,pharo,population-dynamics,simulator,ecotoxicology",Biodiversity and Species Distribution,"2023/10/04, 06:32:58",9,0,4,true,Smalltalk,re:Mobidyc,ReMobidyc,"Smalltalk,Shell",,"b'[![Pharo version](https://img.shields.io/badge/Pharo-11-%23aac9ff.svg)](https://pharo.org/download)\n![CI](https://github.com/tomooda/ViennaTalk/actions/workflows/test.yml/badge.svg)\n[![License](https://img.shields.io/badge/license-MIT-blue.svg)](https://raw.githubusercontent.com/cormas/cormas/master/LICENSE)\n\n\n\n# re\xcb\x90mobidyc\nre\xcb\x90mobidyc (also denoted as re:mobidyc or ReMobidyc without the exotic punctuation letter) is a multi-agent simulator for individual-based modeling in population dynamics and ecotoxicology.\n\n## Presentations\n\n* for biologists: \n\n [![ReMobidyc: develop and run Individual-Based Models, even with little coding skills](https://img.youtube.com/vi/7Uvh_qtzcA4/1.jpg)](https://www.youtube.com/watch?v=7Uvh_qtzcA4)\n\n\n* for computer scientists:\n\n [![https://www.slideshare.net/esug/remobidyc-the-overview](/images/esug-re%20mobidyc.png)](https://www.slideshare.net/esug/remobidyc-the-overview)\n\n\n\n## Installation\n\nre\xcb\x90mobidyc is implemented on the [Pharo](https://pharo.org/) system.\nThere are three ways to install re\xcb\x90mobidyc.\n\n### 1. from binary package\n\nThe following pre-built packages are available.\n\nThe latest release: [
Avignon
](https://github.com/ReMobidyc/ReMobidyc/releases/latest/)\n* macOS / AppleSilicon [![download](https://img.shields.io/badge/DOWNLOAD-f0f0f0?labelColor=a0a0a0&style=flat&logoColor=white&logo=DocuSign)](https://github.com/ReMobidyc/ReMobidyc/releases/latest/download/remobidyc-mac-arm64.dmg)\n* macOS / Intel processor [![download](https://img.shields.io/badge/DOWNLOAD-f0f0f0?labelColor=a0a0a0&style=flat&logoColor=white&logo=DocuSign)](https://github.com/ReMobidyc/ReMobidyc/releases/latest/download/remobidyc-mac-x64.dmg)\n* Linux / Intel processor [![download](https://img.shields.io/badge/DOWNLOAD-f0f0f0?labelColor=a0a0a0&style=flat&logoColor=white&logo=DocuSign)](https://github.com/ReMobidyc/ReMobidyc/releases/latest/download/remobidyc-linux-x64.tar.bz2)\n* windows64 / Intel processor [![download](https://img.shields.io/badge/DOWNLOAD-f0f0f0?labelColor=a0a0a0&style=flat&logoColor=white&logo=DocuSign)](https://github.com/ReMobidyc/ReMobidyc/releases/latest/download/remobidyc-win-x64.zip)\n\n### 2. from Pharo\n\nIf you have the [Pharo](https://pharo.org/) installation, you can install re\xcb\x90mobidyc by evaluating the following expression.\n(Open a Playground by the Browse>>Playground menu in the menubar, then copy&paste the text below into the Playground. Select all the pasted text and select the ""Do-it"" in the right-click menu. After the system loads the latest re\xcb\x90mobidyc code, save the image by the Pharo>>Save menu in the menubar.)\n\n```\nEpMonitor disableDuring: [\n\tMetacello new\n\t\tonConflictUseLoaded;\n\t\tonWarningLog;\n\t\trepository: \'github://ReMobidyc/ReMobidyc:main/\';\n\t\tbaseline: \'ReMobidyc\';\n\t\tload ] \n```\n\n### 3. from command shell\n\nThe following one-liner will download Pharo and install re\xcb\x90mobidyc.\n\n```\ncurl https://raw.githubusercontent.com/ReMobidyc/ReMobidyc/main/scripts/install-remobidyc.sh | bash\n```\n\n## Modeling Language\n\nDocumentation is still under construction. Please see the following ""cheatsheets"".\n\n* [Expressions](docs/cheatsheets/expressions.md)\n* [Conditions](docs/cheatsheets/conditions.md)\n* [Units](docs/cheatsheets/units.md)\n\n## Examples\n\n### 1. Simplified SugarScape\n![animation of sugascape example](images/SugarScape.png)\n\nEach cell grows grass (indicated by green rect) and each goat (yellow dot) eats grass.\nThis example exhibits the interaction between an animat and the cell where the animat is located.\n\n### 2. Goat and Wolf\n![animation of goat and wolf example](images/GoatAndWolf.png)\n\nEach cell grows grass (indicated by green rect), each goat (yellow dot) eats grass and each wolf (red dot) preys on its nearest goat.\nThis example exhibits the interaction between two animats.\n\n### 3. 
Grasshopper\n![animation of grasshoppers example](images/Grasshoppers.png)\n\nEach cell grows grass (indicated by a green rect), and each grasshopper (yellow dot) eats grass.\nWhen a grasshopper matures by age, it lays 5 eggs (cyan dots), and each egg hatches in 20 days.\nThis example exhibits the life stages of individuals.\n\n[A tutorial](Tutorials/Grasshoppers/Grasshoppers.md) to build this Grasshopper model is available.\n\n## Background\nre\xcb\x90Mobidyc is a variation of Mobidyc that inherits the design rationale\nof Mobidyc.\nThe objective of re\xcb\x90Mobidyc is to renovate the original Mobidyc while keeping\nits design principles.\nThe base system has been changed from VisualWorks to Pharo.\nThey are both Smalltalk systems, and Pharo is today\'s most actively\ndeveloped/used open-source Smalltalk system.\nWe will renovate the implementation of Mobidyc from its very basis to\napply outcomes from computer science.\nThe development of re\xcb\x90Mobidyc is just beginning, and we will need time\nto reproduce the functionality of the original Mobidyc.\n\n[1] [The original Mobidyc site](https://mobidyc.cnrs.fr/index.php?title=English_summary)\n\n## Design Rationale\nAs a tool for scientific research, re\xcb\x90Mobidyc will provide the following features:\n\n* Easy to model\n - The behavior of an agent will be defined in a declarative manner instead of a series of commands, if-statements and loops.\n\n* Easy to modify\n - Every component in a behavioral definition will be type-checked so that the user can find minor errors before running it.\n\n* Easy to verify the model\n - A definition can have assertions that double-check its behavior so that the user can be sure that the model is defined as intended.\n\n* Easy to reproduce\n - re\xcb\x90Mobidyc will make all simulations reproducible, including randomized actions.\n\n* Easy to verify output\n - The states of agents at every step of a simulation will be stored in persistent memory so that the user and reviewers can check their validity.\n\n* Easy to trace\n - The user can trace which agent interacted with a particular agent to find the cause of an observed phenomenon. 
\n\n* Easy to publish\n - The user can publish her/his model along with input/intermediate/output data and visualized images as evidence in scientific research.\n\n* Easy to implement\n - re\xcb\x90Mobidyc will define its own modeling language in a formal specification language so that its execution engine can be implemented by third parties with complete compatibility.\n\n## Architectural design\nTo achieve the objectives above, re\xcb\x90Mobidyc will have the following major components.\n\n* Persistent storage\n - re\xcb\x90Mobidyc will use persistent storages, such as file systems and RDBs, to store models, states of agents at all simulation steps, input data and output data.\n\n* Reproducible random numbers\n - re\xcb\x90Mobidyc will have its own random number generator and make its code open so that all actions taken in a simulation can be accurately reproduced.\n\n* Web servers and APIs\n - re\xcb\x90Mobidyc will have a web-based UI so that models can be shared by research communities.\n - re\xcb\x90Mobidyc will provide a web API to retrieve models and simulation data so that anyone can create specialized native applications.\n\n## Organizational Contributors\nThe re\xcb\x90mobidyc project is supported by [DGtal Aqua Lab, Shizuoka University](https://wwp.shizuoka.ac.jp/dgtalaqualab/) and [Software Research Associates, Inc.](https://www.sra.co.jp/en/)\n\n[![DGtal Aqua Lab](images/DGTALAQUALAB-logo.png)](https://wwp.shizuoka.ac.jp/dgtalaqualab/)\xe3\x80\x80\xe3\x80\x80[![SRA logo](images/SRA-logo-large.png)](https://www.sra.co.jp/en/)\n'",,"2021/06/30, 06:38:48",847,MIT,502,1012,"2023/10/04, 06:32:59",11,221,286,204,21,0,0.0,0.004304160688665681,"2023/08/14, 06:27:56",Avignon,0,3,false,,false,false,,,https://github.com/ReMobidyc,,,,,https://avatars.githubusercontent.com/u/86707456?v=4,,, CART,"The Conservation Assessment Ranking Tool is designed for use in the conservation planning process to assess resource concerns, planned practices and site vulnerability.",jneme910,https://github.com/jneme910/CART.git,github,"resource-assessment,intrinsic-soil-properties,soil-quality-degradation,soils-data,conservation-assessment-ranking,cart,land-unit,risk,ranking,conservation-practices,ssurgo,nrcs,usda,usda-nrcs,soil-data-access,conservation,assessment,ranking-tool",Conservation and Restoration,"2023/03/29, 13:43:02",10,0,1,true,HTML,,,"HTML,TSQL,Python,JavaScript,CSS",https://jneme910.github.io/CART/,"b""# Conservation Assessment Ranking Tool (CART)\nJason Nemecek and Steve Peaslee \n\nAugust 2, 2021 \n\nThe Conservation Assessment Ranking Tool (CART) is designed for use in the conservation planning process to assess resource concerns, planned practices, and site vulnerability. It ranks applications for USDA conservation funding. CART starts with a site-specific risk threshold for each resource concern. The thresholds are based on intrinsic site characteristics, such as soils and climate. The tool evaluates the benefits of site-specific management for treating resource concerns. A \xe2\x80\x9cmanagement credit\xe2\x80\x9d score is assigned to each site based on such factors as the methods used for crop production and the conservation practices that are applied. The scores are summed, and the total is compared to the threshold for existing conditions and to planning alternatives. 
The scores can also be used to prioritize program ranking, which may be further modified by identified priorities, special considerations, or both.\n\nThis documentation describes the SQL queries that access soils data for CART. The queries described in this documentation run through the [Soil Data Access](https://sdmdataaccess.nrcs.usda.gov/Query.aspx) tabular data query portal. The portal accesses current databases maintained by the U.S. National Cooperative Soil Survey.\n\n### Structured Query Language (SQL) script for the Conservation Assessment Ranking Tool: Soil.\n1. SQL Server version: [Click here](https://github.com/jneme910/CART/blob/master/SQL-Library/CART_SoilsQuery_20210922SSMS.sql)\n2. Soil Data Access SQL version: [Click here](https://github.com/jneme910/CART/blob/master/SQL-Library/CART_SoilsQuery_kitchensink_20210927SDA.txt)\n3. Area of Interest (AOI) Geometry examples to copy into the SQL script: [Click here](https://raw.githubusercontent.com/jneme910/CART/master/SQL-Library/AOI_Geometry_Examples.txt)\n4. Prototype: [Click here](https://jneme910.github.io/CART/documents/rev00_Organic_Matter_Depletion.html)\n5. Prototype2: [Click here](https://jneme910.github.io/CART/documents/rev00.html)\n6. Prototype3: [Click here](https://jneme910.github.io/CART/documents/rev00_with_description.html) \n\n### The soils data used in CART can be found in four main sections.\n1. Resource Assessment (Resource Concerns); \n * Soil Quality Degradation \n * Other\n2.\tEP\xe2\x80\x94Easement Program\n3.\tEE\xe2\x80\x94Environmental Evaluation\n4.\tOutcome Results (Under Development)\n\n\n ||Datasets|Purpose* |Documentation| Section|\n|-----|----------|--------|--------|----------------------------------------|\n|1| Ponding or Flooding |RA, EP|[Click here](https://jneme910.github.io/CART/chapters/Ponding_or_Flooding)|Excess Water-Ponding and Flooding, Easements|\n|2|Depth to Water Table |RA, EP |[Click here](https://jneme910.github.io/CART/chapters/Depth_to_Water_Table) |Excess Water-Seasonal High Water Table, Easements|\n|3|Hydric Rating by Map Unit |RA, EP |[Click here](https://jneme910.github.io/CART/chapters/Hydric_Rating_by_Map_Unit)|Excess Water: Seeps; Air Quality: Emissions of Greenhouse Gases; Easements|\n|4 |Nitrogen Leaching |RA | [Click here](https://jneme910.github.io/CART/chapters/Nitrogen_Leaching_Potential)|Future Development (Water Quality-Diffuse Nutrient, Pesticide and Pathogens Transport to Water\n |5|Farmland Classification |EE, EP |[Click here](https://jneme910.github.io/CART/chapters/Farmland_Classification) |Easements; Environmental Evaluation|\n|6|Available Water Storage |EP |[Click here](https://jneme910.github.io/CART/chapters/Available_Water_Storage) |Easements|\n|7|Soil Organic Carbon Stock|RA, EP |[Click here](https://jneme910.github.io/CART/chapters/Soil-Organic-Carbon_Stock)|Easements; Air Quality: Emissions of Greenhouse Gases |\n |8|Drainage Class |EP |[Click here](https://jneme910.github.io/CART/chapters/Drainage_Class) |Easements|\n|9|Organic Soils |RA |See 'Hydric Rating by Mapunit'|---|\n|10|Agricultural Organic Soil Subsidence |RA |[Click here](https://jneme910.github.io/CART/chapters//EditedRMD/Agricultural_Organic_Soil_Subsidence) |Soil Quality Degradation: Subsidence|\n|11|Soil Susceptibility to Compaction |RA |[Click here](https://jneme910.github.io/CART/chapters/Soil_Susceptibility_to_Compaction)|Soil Quality Degradation:Compaction| \n|12|Soil Susceptibility Organic Matter Depletion |RA |[Click 
here](https://jneme910.github.io/CART/chapters/Organic_Matter_Depletion)|Soil Quality Degradation:Organic Matter Depletion|\n|13|Surface Salt Concentration |RA |[Click here](https://jneme910.github.io/CART/chapters/Surface_Salt_Concentration)|Soil Quality Degradation:Concentration of Salts and Other Chemicals\n|14|Limitation for Aerobic Soil Organisms |RA |[Click here](https://jneme910.github.io/CART/chapters/Suitability_for_Aerobic_Soil_Organisms)|Soil Quality Degradation:Soil Organism Habitat Loss and Degradation|\n|15|Aggregate stability |RA |[Click here](https://jneme910.github.io/CART/chapters/Aggregate-stability) |Soil Quality Degradation:Aggregate Instability|\n|16| Domain Tables|---| [Click here](https://jneme910.github.io/CART/chapters/CART_Soil_Data_Access_Domains) |---|\n|17|Soil Property List by Interpretation |---| [Click here](https://jneme910.github.io/CART/chapters/Soil_Property_List_by_Soil_Interpretation) |---|\n|18|Soil Property List and Column Descriptions |---|[Click here](https://jneme910.github.io/CART/chapters/Soil_Propert_List_and_Definition)|---|\n|19|Data Checks |--- |[Click here](https://jneme910.github.io/CART/chapters/Soil_Data_Checks)|---|\n|20|Outcomes |--- |[Click here](https://jneme910.github.io/CART/chapters/Outcomes) |---|\n|21|Future Development|--- |[Click here](https://jneme910.github.io/CART/chapters/future) |---|\n|22|CART User\xe2\x80\x99s Guide|--- |[Click here](https://github.com/jneme910/CART/blob/master/documents/CART_Resource_Concern_Assessment_Draft.docx) |---|\n|23|CART Overview |--- |[Click here](https://github.com/jneme910/CART/blob/master/documents/CART_Overview.pdf) |---|\n|24|Soil Data Access Metrics|---|[Click here](https://jneme910.github.io/CART/chapters/Metric) |---| \n\n *RA\xe2\x80\x94Resource Assessment; EP\xe2\x80\x94Easement Program; EE\xe2\x80\x94Environmental Evaluation; RT\xe2\x80\x94Ranking Tool\n\nSoil properties can be divided into two broad categories: intrinsic and non-intrinsic. Intrinsic soil properties are those empirical soil properties that are not based on any other soil properties (e.g., content of very fine sand). Non-intrinsic soil properties tend to be derived from multiple intrinsic soil properties (e.g., K factor). Non-intrinsic soil properties also tend to be interpretive in nature. Examples of non-intrinsic soil properties include Farmland Classification, T Factor, and Wind Erodibility Group.\n\n# Resource Concerns\n## Soil Quality Degradation \nCART evaluates six resource concerns related to soil quality degradation. Each involves analysis of soil interpretation data from the Soil Data Access Query service. Soil maps and reports for these interpretations are also available from Web Soil Survey. Both the Soil Data Access Query service and Web Soil Survey connect to the same soils database. Five of the resources concerns use traditional soil interpretations; the sixth, Aggregate Stability, is written entirely in SQL. \n\n||Resource Concerns|Related Soil Interpretation\n|-----|----------|--------|\n|1|Subsidence|Agricultural Organic Soil Subsidence|\n|2|\tCompaction|\tSoil Susceptibility to Compaction|\n|3|\tOrganic Matter Depletion|Organic Matter Depletion|\n|4\t|Concentration of Salts and Other Chemicals|\tSurface Salt Concentration|\n|5| Soil organism habitat loss or degradation|Suitability for Aerobic Soil Organisms|\n|6|Aggregate instability| Aggregate stability|\n\n### Soil Data Access Requests by CART\n1.\tThe request for soils data begins once land units have been selected (fig. 1).\n2. 
The request is in the form of an SQL query and contains:\n * Land unit identifier\n * Bounding coordinates\n3.\tCART automatically sends the request to the Soil Data Access Query Service.\n4.\tMap layers are processed in the background and are not displayed.\n\n![Example: Park County, Wyoming](https://jneme910.github.io/CART/TableImages/Park_County_WY.png)\n\nFigure 1.\xe2\x80\x94A land unit in Park County, Wyoming.\n### Map Data Processing\n\n![Example: Map data is processed in the background](https://jneme910.github.io/CART/TableImages/Map%20Data%20is%20processed%20in%20the%20background.PNG)\n\nFigure 2.\xe2\x80\x94A map of the soils in the selected area and a map showing the soil interpretation for surface salt concentration.\n\nMap data is processed in the background. In figure 2, the map on the left shows 8 different soils within a land unit. The map on the right illustrates the risk of surface salinization. The red polygon indicates an area of high risk for surface salinization. The yellow areas have a moderate risk, and the green areas are low risk.\n\n### Service Data\n\nIn the following table, the query service returned soils information for \xe2\x80\x9cRisk of Surface Salt Concentration\xe2\x80\x9d within the land unit. The soil interpretation rating was used to calculate the CART rating. The table shows the magnitude of the CART rating as both a land unit percentage and as land unit acres.\n\n![Example: Service Data](https://jneme910.github.io/CART/TableImages/Service%20Data.PNG)\n\nAreas with the highest risk are assigned a rating of 1. Areas with a lower risk are assigned a larger rating number. Rating values are calculated using soils data at the component level.\n\n### Land Unit Detailed Ratings\n\nThe service request calculates the rolling sum values for rating acres and rating percent for each resource concern and finds the single most limiting rating (per land unit) that comprises **at least 10% by area or 10 acres.**\n\n![Example: Land Unit Detail Ratings](https://jneme910.github.io/CART/TableImages/Land%20Unit%20Detail%20Ratings.PNG)\n\nIn this example, the most limiting rating that meets these criteria is in the second row. This rating is provided to the CART application as the land unit rating for Concentration of Salts and Other Chemicals. It is important to understand that the Web Soil Survey does not have the functionality for calculating the land unit ratings. Web Soil Survey is only designed to provide soil maps and reports.\n\n### Final Land Unit Ratings\n\nFor each of the resource concerns, the final land unit ratings (which are derived from Soil Data Access) are returned to CART for the awarding of points. The publication date of the soils data is also returned to CART as metadata. This metadata ensures that the Soil Quality Degradation ratings can be associated with a particular version of SSURGO soils data.\n\n![Example: Final Land Unit Ratings](https://jneme910.github.io/CART/TableImages/Final%20Land%20Unit%20Ratings.PNG)\n\nThe following domain table contains an ordered list of all possible rating values.\n\n![Example: Domain](https://jneme910.github.io/CART/TableImages/Domain.PNG)\n\n# Easements\nClick a heading below for specific information on a listed query.\n\n1. [Soil Organic Carbon Stock](https://jneme910.github.io/CART/chapters/Soil_Organic_Carbon_Stock)\n2. [Farmland Classification](https://jneme910.github.io/CART/chapters/Farmland_Classification)\n3. 
[Hydric Soil Rating by Map Unit](https://jneme910.github.io/CART/chapters/Hydric_Rating_by_Map_Unit)\n4. [Ponding or Flooding Frequency](https://jneme910.github.io/CART/chapters/Ponding_or_Flooding)\n5. [Depth to Water Table](https://jneme910.github.io/CART/chapters/Depth_to_Water_Table)\n6. [Drainage Class](https://jneme910.github.io/CART/chapters/Drainage_Class)\n7. [Available Water Storage](https://jneme910.github.io/CART/chapters/Available_Water_Storage)\n\n# Environmental Evaluation (CPA-52)\nClick a heading below for specific information on a listed query.\n\n1. [Farmland Classification](https://jneme910.github.io/CART/chapters/Farmland_Classification)\n2. [Hydric Soils Rating by Mapunit](https://jneme910.github.io/CART/chapters/Hydric_Rating_by_Map_Unit)\n\n# Outcomes\nThe programming proposed for outcomes is intended to provide NRCS leadership with the ability to model data and report the natural resource impacts and outcomes of conservation practices, systems, programs, and initiatives. It will also facilitate the identification of conservation treatment needs and the reporting of outcomes for NRCS and USDA.\n\n1. Outcomes Design Concept: [Click here](https://jneme910.github.io/CART/chapters/Outcomes)\n2. Data connections (CART-NPAD): [Click here](https://github.com/jneme910/CART/blob/master/documents/npad_70_051419.pdf)\n\n# Acknowledgements\n1.\tSteve Campbell: Soil Scientist, NRCS\n2.\tSkye Wills: Soil Scientist, NRCS\n3.\tChad Volkman: Cartographer, NRCS\n4.\tPhil Anzel: Senior Software Developer, Vistronix\n5.\tSusan McGlasson: Database Administrator, Vistronix\n6.\tBob Dobos: Soil Scientist, NRCS\n7.\tCathy Seybold: Soil Scientist, NRCS\n8.\tJeff Thomas: Soil Scientist, NRCS\n9.\tMike Robotham: National Leader for Technical Soil Services, NRCS\n10.\tLaura Morton: Management Analyst, NRCS\n11.\tAaron Lauster: National Sustainable Agriculture Leader, NRCS\n12.\tCasey Sheley: Natural Resource Specialist, NRCS\n13.\tEric Hesketh: Soil Scientist, NRCS\n14.\tGreg Zwicke: Environmental Engineer, NRCS\n15.\tMatt Flint: Natural Resource Specialist, NRCS\n16.\tDanielle Balduff: Natural Resource Specialist, NRCS\n17.\tBreanna Barlow: Management Analyst, NRCS\n18.\tBarry Fisher: Central Region Soil Health Team Leader, NRCS\n19.\tRobin Plummer: Developer, NRCS\n20.\tAaron Bustamante: \n21.\tPam Thomas: Associate Director of Soil Survey Programs, NRCS\n\n*With support from the Resource Concern Team and Workgroups.* \n\n\n""",,"2019/04/05, 15:52:13",1664,MIT,3,440,"2023/10/04, 06:32:59",0,0,0,0,21,0,0,0.0076142131979695105,,,0,3,false,,false,false,,,,,,,,,,, forest-prediction,Deep learning for deforestation classification and forecasting in satellite imagery.,DS3Lab,https://github.com/DS3Lab/forest-prediction.git,github,,Conservation and Restoration,"2021/04/20, 18:07:13",21,0,6,true,Python,DS3 Lab,DS3Lab,"Python,Shell,MATLAB,TeX,Dockerfile",,"b'# forest-prediction\n\xf0\x9f\x9b\xb0Deep learning for deforestation classification and forecasting in satellite imagery\n\n[![](https://tinyurl.com/greenai-pledge)](https://github.com/daviddao/green-ai)\n\n## Overview\nIn this repository we provide implementations for:\n1. Data scraping (Tile services and Google Earth Engine)\n2. Forest prediction (Semantic Segmentation)\n3. Video prediction (Lee et al, 2018)\n4. 
Image to image translation (Isola et al, 2017)\n\n## Installation\n```console\n$ git clone https://github.com/DS3Lab/forest-prediction.git\n$ cd forest-prediction/semantic_segmentation/unet\n$ conda create --name forest-env python=3.7\n$ ./install.sh\n$ source activate forest-env\n```\n## Running\nYou can train the models for semantic segmentation by simply running:\n```console\n(forest-env) $ cd semantic_segmentation/unet\n(forest-env) $ python train.py -c {config_path} -d {gpu_id}\n```\nFor multi-GPU training, set gpu_id to a comma-separated list of devices, e.g.\n`-d 0,1,2,3,4`.\nTraining will produce a folder named after the time at which the script was executed.\nIt will be saved under the ""save_dir"" value from the JSON config file, under ""trainer"". Under save_dir, it will create\na log file, where you can check TensorBoard logs, and a model file, where the model is stored.\n\n## Testing\nYou can test the models for semantic segmentation by running:\n```console\n(forest-env) $ python simple_test.py -r {model_saved_path/model.pth} -d {gpu_id}\n```\nIt will run the predictions and save the corresponding outputs in model_saved_path. To preserve the order of the images, set both `batch_size` and `num_workers` to 1.\n\n## Configuration\nYou can change the type of model used, and its configuration, by altering (or creating) a config.json file.\n\n### Structure of `config.json`\nThe fields of the config file are self-explanatory. We explain the most important ones.\n* `name`: indicates the name of the experiment. It is the folder in which both the training logs and models are going to be stored.\n* `n_gpu`: for multi-GPU training, it is necessary to specify how many GPUs the run is going to use. For instance, if the user specifies `-d 0,1`, `n_gpu` needs to be set to 2 in order to use both GPUs. If it is set to 1, only GPU 0 will be used; if it is set to a number higher than 2 (in this example), it will yield an error.\n* `arch`: specifies the model that will be used for training/testing purposes.\n* `data_loader_train` and `data_loader_val`: data loaders for training and validation purposes. For testing, only `data_loader_val` is used.\n'",,"2019/07/25, 13:12:08",1553,MIT,0,377,"2023/10/03, 22:35:23",1,7,7,2,22,1,0.0,0.10869565217391308,,,0,3,false,,false,false,,,https://github.com/DS3Lab,,Zurich,,,https://avatars.githubusercontent.com/u/20972509?v=4,,, forestatrisk,"A Python package to model and forecast the risk of deforestation.",ghislainv,https://github.com/ghislainv/forestatrisk.git,github,"python,land-use-change,spatial-modelling,spatial-analysis,forecasting,spatial-autocorrelation,tropical-forests,roads,protected-areas,biodiversity-scenario,ipbes,co2-emissions,ipcc,forest-cover-change,deforestation,deforestation-risk,redd",Conservation and Restoration,"2023/05/13, 12:37:42",107,0,16,true,Python,,,"Python,C,TeX,Shell,Makefile,CSS",https://ecology.ghislainv.fr/forestatrisk,"b'..\n # ==============================================================================\n # author :Ghislain Vieilledent\n # email :ghislain.vieilledent@cirad.fr, ghislainv@gmail.com\n # web :https://ecology.ghislainv.fr\n # license :GPLv3\n # ==============================================================================\n\n.. 
image:: https://ecology.ghislainv.fr/forestatrisk/_static/logo-far.svg\n :align: right\n :target: https://ecology.ghislainv.fr/forestatrisk\n :alt: Logo forestatrisk\n :width: 140px\n\n``forestatrisk`` Python package\n*******************************\n\n\n|Python version| |PyPI version| |GitHub Actions| |License| |Zenodo| |JOSS|\n\n\nOverview\n========\n\nThe ``forestatrisk`` Python package can be used to **model** tropical\ndeforestation spatially, **predict** the spatial risk of\ndeforestation, and **forecast** the future forest cover in the\ntropics. It provides functions to estimate the spatial probability of\ndeforestation as a function of various spatial explanatory variables.\n\nSpatial explanatory variables can be derived from topography\n(altitude, slope, and aspect), accessibility (distance to roads,\ntowns, and forest edge), deforestation history (distance to previous\ndeforestation), or land conservation status (e.g. protected areas) for\nexample.\n\n.. image:: https://ecology.ghislainv.fr/forestatrisk/_static/forestatrisk.png\n :align: center\n :target: https://ecology.ghislainv.fr/forestatrisk\n :alt: prob_AFR\n :width: 800px\n\nScientific publication\n======================\n\n**Vieilledent G.** 2021. ``forestatrisk``: a Python package for\nmodelling and forecasting deforestation in the tropics.\n*Journal of Open Source Software*. 6(59): 2975.\n[doi: `10.21105/joss.02975 <https://doi.org/10.21105/joss.02975>`__]. |pdf|\n\nStatement of Need\n=================\n\nSpatial modelling of deforestation allows identifying the main\nfactors determining the spatial risk of deforestation and quantifying\ntheir relative effects. Forecasting forest cover change is paramount\nas it allows anticipating the consequences of deforestation (in terms\nof carbon emissions or biodiversity loss) under various technological,\npolitical and socio-economic scenarios, and informs decision makers\naccordingly. Because both biodiversity and carbon vary greatly in\nspace, it is necessary to provide spatial forecasts of forest cover\nchange to properly quantify biodiversity loss and carbon emissions\nassociated with future deforestation.\n\nThe ``forestatrisk`` Python package can be used to model tropical\ndeforestation spatially, predict the spatial risk of deforestation,\nand forecast the future forest cover in the tropics. The spatial data\nused to model deforestation come from georeferenced raster files,\nwhich can be very large (several gigabytes). The functions available\nin the ``forestatrisk`` package process large rasters by blocks of\ndata, making calculations fast and efficient. This allows\ndeforestation to be modeled over large geographic areas (e.g. at the\nscale of a country) and at high spatial resolution\n(e.g. \xe2\x89\xa4\xc2\xa030\xc2\xa0m). The ``forestatrisk`` package offers the possibility\nof using logistic regression with auto-correlated spatial random\neffects to model the deforestation process. The spatial random effects\nmake it possible to structure the residual spatial variability of the\ndeforestation process, which is not explained by the variables of the model and is\noften very large. In addition to these new features, the\n``forestatrisk`` Python package is open source (GPLv3 license),\ncross-platform, scriptable (via Python), user-friendly (functions\nprovided with full documentation and examples), and easily extendable\n(with additional statistical models for example). 
The ``forestatrisk``\nPython package has been used to model deforestation and predict future\nforest cover by 2100 across the humid tropics\n(``__).\n\nInstallation\n============\n\nYou will need several dependencies to run the ``forestatrisk`` Python\npackage. The best way to install the package is to create a Python\nvirtual environment, either through ``conda`` (recommended) or ``virtualenv``.\n\nUsing ``conda`` (recommended)\n+++++++++++++++++++++++++++++\n\nYou first need to have ``miniconda3`` installed (see `here\n`__).\n\nThen, create a conda environment (details `here\n`__)\nand install the ``forestatrisk`` package with the following commands:\n\n.. code-block:: shell\n\n conda create --name conda-far -c conda-forge python=3.9 gdal numpy matplotlib pandas patsy pip statsmodels earthengine-api --yes\n conda activate conda-far\n pip install pywdpa scikit-learn # Packages not available with conda\n pip install forestatrisk # For PyPI version\n # pip install https://github.com/ghislainv/forestatrisk/archive/master.zip # For GitHub dev version\n # conda install -c conda-forge python-dotenv rclone --yes # Potentially interesting libraries\n\nTo deactivate and delete the conda environment:\n\n.. code-block:: shell\n\n conda deactivate\n conda env remove --name conda-far\n\nUsing ``virtualenv``\n++++++++++++++++++++\n\nYou first need to have the ``virtualenv`` package installed (see `here `__).\n\nThen, create a virtual environment and install the ``forestatrisk``\npackage with the following commands:\n\n.. code-block:: shell\n\n cd ~\n mkdir venvs # Directory for virtual environments\n cd venvs\n virtualenv --python=/usr/bin/python3 venv-far\n source ~/venvs/venv-far/bin/activate\n # Install numpy first\n pip install numpy\n # Install gdal (the correct version)\n pip install --global-option=build_ext --global-option=""-I/usr/include/gdal"" gdal==$(gdal-config --version)\n pip install forestatrisk # For PyPI version, this will install all other dependencies\n # pip install https://github.com/ghislainv/forestatrisk/archive/master.zip # For GitHub dev version\n pip install statsmodels # Optional additional packages\n\nTo deactivate and delete the virtual environment:\n\n.. code-block:: shell\n\n deactivate\n rm -R ~/venvs/venv-far # Just remove the repository\n\nInstallation testing\n++++++++++++++++++++\n\nYou can test that the package has been correctly installed using the\ncommand ``forestatrisk`` in a terminal:\n\n.. code-block:: shell\n\n forestatrisk\n\nThis should return a short description of the ``forestatrisk`` package\nand the version number:\n\n.. code-block:: shell\n\n # forestatrisk: modelling and forecasting deforestation in the tropics.\n # https://ecology.ghislainv.fr/forestatrisk/\n # forestatrisk version x.x.\n\nYou can also test the package by executing the commands in the `Get\nstarted\n`__\ntutorial.\n\nMain functionalities\n====================\n\nSample\n++++++\n\nFunction ``.sample()`` samples observation points from a forest cover\nchange map. The sample is balanced and stratified between deforested\nand non-deforested pixels. The function also retrieves information\nfrom explanatory variables for each sampled point. Sampling is done by\nblock to allow computation on large study areas (e.g. country or\ncontinental scale) with a high spatial resolution (e.g. 30m).\n\nModel\n+++++\n\nFunction ``.model_binomial_iCAR()`` can be used to fit the\ndeforestation model. A linear Binomial logistic regression model is\nused in this case. 
The model includes an intrinsic Conditional\nAutoregressive (iCAR) process to account for the spatial\nautocorrelation of the observations. Parameter inference is done in a\nhierarchical Bayesian framework. The function calls a Gibbs sampler\nwith a Metropolis algorithm written in pure C code to reduce\ncomputation time.\n\nOther models (such as a simple GLM or a Random Forest model) can also\nbe used.\n\nPredict and project\n+++++++++++++++++++\n\nFunction ``.predict()`` allows predicting the deforestation\nprobability on the whole study area using the deforestation model\nfitted with ``.model_*()`` functions. The prediction is done by block\nto allow the computation on large study areas (e.g. country or\ncontinental scale) with a high spatial resolution (e.g. 30m).\n\nFunction ``.deforest()`` predicts the future forest cover map based on a\nraster of probability of deforestation (rescaled from 1 to 65535),\nwhich is obtained from function ``.predict()``, and an area (in\nhectares) to be deforested.\n\nValidate\n++++++++\n\nA set of functions (eg. ``.cross_validation()`` or\n``.map_accuracy()``\\ ) is also provided to perform model and map\nvalidation.\n\nContributing\n============\n\nThe ``forestatrisk`` Python package is Open Source and released under\nthe `GNU GPL version 3 license\n`__. Anybody\nwho is interested can contribute to the package development following\nour `Community guidelines\n`__. Every\ncontributor must agree to follow the project\'s `Code of conduct\n`__.\n\n\n.. |Python version| image:: https://img.shields.io/pypi/pyversions/forestatrisk?logo=python&logoColor=ffd43b&color=306998\n :target: https://pypi.org/project/forestatrisk\n :alt: Python version\n\n.. |PyPI version| image:: https://img.shields.io/pypi/v/forestatrisk\n :target: https://pypi.org/project/forestatrisk\n :alt: PyPI version\n\n.. |GitHub Actions| image:: https://github.com/ghislainv/forestatrisk/workflows/PyPkg/badge.svg\n :target: https://github.com/ghislainv/forestatrisk/actions\n :alt: GitHub Actions\n\t \n.. |License| image:: https://img.shields.io/badge/licence-GPLv3-8f10cb.svg\n :target: https://www.gnu.org/licenses/gpl-3.0.html\n :alt: License GPLv3\t \n\n.. |Zenodo| image:: https://zenodo.org/badge/DOI/10.5281/zenodo.996337.svg\n :target: https://doi.org/10.5281/zenodo.996337\n :alt: Zenodo\n\n.. |JOSS| image:: https://joss.theoj.org/papers/10.21105/joss.02975/status.svg\n :target: https://doi.org/10.21105/joss.02975\n :alt: JOSS\n\n.. 
|pdf| image:: https://ecology.ghislainv.fr/forestatrisk/_static/logo-pdf.png\n :target: https://www.theoj.org/joss-papers/joss.02975/10.21105.joss.02975.pdf\n :alt: pdf\n'",",https://doi.org/10.21105/joss.02975,https://doi.org/10.5281/zenodo.996337\n,https://doi.org/10.21105/joss.02975\n","2016/12/01, 10:45:12",2519,GPL-3.0,27,596,"2023/04/28, 10:14:52",7,50,76,4,180,0,0.0,0.024299065420560706,"2022/02/26, 10:31:34",v1.1,0,6,false,,true,true,,,,,,,,,,, worldpa,R interface to the World Database on Protected Areas.,FRBCesab,https://github.com/FRBCesab/worldpa.git,github,"r,protected-areas,protected-planet,world,spatial,shapefile",Conservation and Restoration,"2021/02/24, 12:52:08",14,0,0,false,R,FRB CESAB,FRBCesab,R,https://frbcesab.github.io/worldpa/,"b'# worldpa \n\n\n\n[![R-CMD-check](https://github.com/FRBCesab/worldpa/workflows/R-CMD-check/badge.svg)](https://github.com/FRBCesab/worldpa/actions)\n[![CRAN\nstatus](https://www.r-pkg.org/badges/version/worldpa)](https://CRAN.R-project.org/package=worldpa)\n[![Project Status:\nActive](https://www.repostatus.org/badges/latest/active.svg)](https://www.repostatus.org/#active)\n[![License\nGPL-3](https://img.shields.io/badge/licence-GPLv3-8f10cb.svg)](http://www.gnu.org/licenses/gpl.html)\n[![DOI](https://zenodo.org/badge/221718108.svg)](https://zenodo.org/badge/latestdoi/221718108)\n\n\n## Overview\n\nThis R package is an interface to the World Database on Protected Areas\n(WDPA) hosted on the Protected planet website\n(). This package is freely released by\nthe\n[FRB-CESAB](https://www.fondationbiodiversite.fr/en/about-the-foundation/le-cesab/)\nand allows user to download spatial shapefiles (`simple features`) of\nprotected areas (PA) for world countries using the WDPA API\n().\n\n## Terms and conditions\n\nYou must ensure that the following citation is always clearly reproduced\nin any publication or analysis involving the Protected Planet Materials\nin any derived form or format:\n\n> UNEP-WCMC and IUCN (`YEAR`) Protected Planet: The World Database on\n> Protected Areas (WDPA). Cambridge, UK: UNEP-WCMC and IUCN. Available\n> at: www.protectedplanet.net (dataset downloaded the `YEAR/MONTH`).\n\nFor further details on terms and conditions of the WDPA usage, please\nvisit the page:\n.\n\n## Prerequisites\n\nThis package uses the WDPA API to access data on world protected areas.\nYou must first have obtained a Personal API Token by filling in the form\navailable at: . 
Then follow\nthese instructions to store this token: [Managing WDPA API\nToken](https://frbcesab.github.io/worldpa/articles/worldpa.html#managing-wdpa-api-token).\n\n## Installation\n\nYou can install the development version from\n[GitHub](https://github.com/) with:\n\n``` r\n# install.packages(""remotes"")\nremotes::install_github(""FRBCesab/worldpa"")\n```\n\n:warning: **Note:** Build the vignette only if you already have stored\nthe token.\n\n## Getting started\n\nBrowse the\n[**vignette**](https://frbcesab.github.io/worldpa/articles/worldpa.html)\nto get started.\n\nFunctions documentation can be found at:\n[https://frbcesab.github.io/worldpa/reference](https://frbcesab.github.io/worldpa/reference/).\n'",",https://zenodo.org/badge/latestdoi/221718108","2019/11/14, 14:36:40",1441,GPL-3.0,0,102,"2020/01/16, 12:29:47",2,1,2,0,1378,0,0.0,0.0,"2019/11/18, 09:21:11",v1.0.0,0,1,false,,false,false,,,https://github.com/FRBCesab,https://www.fondationbiodiversite.fr/en/about-the-foundation/le-cesab/,"Montpellier, FRANCE",,,https://avatars.githubusercontent.com/u/56294308?v=4,,, pywdpa,Python interface to the World Database on Protected Areas.,ghislainv,https://github.com/ghislainv/pywdpa.git,github,"wdpa,protected-planet,protected-areas,api,shapefiles,python-package,world,country-iso-code",Conservation and Restoration,"2021/03/12, 16:43:17",6,4,1,false,Python,,,"Python,Makefile,Shell,CSS",https://ecology.ghislainv.fr/pywdpa,"b'..\n # ==============================================================================\n # author :Ghislain Vieilledent\n # email :ghislain.vieilledent@cirad.fr, ghislainv@gmail.com\n # web :https://ecology.ghislainv.fr\n # license :GPLv3\n # ==============================================================================\n\n.. image:: https://ecology.ghislainv.fr/pywdpa/_static/logo-pywdpa.svg\n :align: right\n :target: https://ecology.ghislainv.fr/pywdpa\n :alt: Logo pywdpa\n :width: 140px\n\t \n``pywdpa`` Python package\n*************************\n\n\n|Python version| |PyPI version| |GitHub Actions| |License| |Zenodo|\n\n\nOverview\n========\n\nThe ``pywdpa`` Python package is an interface to the World Database on\nProtected Areas (WDPA) hosted on the Protected Planet website at\n``_. The ``pywdpa`` package provides\nfunctions to download shapefiles of protected areas (PA) for any\ncountries with an iso3 code using the Protected Planet API at\n``_. The ``pywdpa`` package\ntranslates some functions of the R package ``worldpa``\n(``_) in the Python language.\n\n.. image:: https://ecology.ghislainv.fr/pywdpa/_static/protected-planet.jpg\n :align: center\n :target: https://ecology.ghislainv.fr/pywdpa\n :alt: protected-planet\n\nTerms and conditions\n====================\n\nYou must ensure that the following citation is always clearly\nreproduced in any publication or analysis involving the Protected\nPlanet Materials in any derived form or format:\n\n..\n \n UNEP-WCMC and IUCN (\\ ``YEAR``\\ ) Protected Planet: The World\n Database on Protected Areas (WDPA). Cambridge, UK: UNEP-WCMC and\n IUCN. Available at: www.protectedplanet.net (dataset downloaded the\n ``YEAR/MONTH``\\ ).\n\n\nFor further details on terms and conditions of the WDPA usage, please\nvisit the page:\n``_.\n\nPrerequisites\n=============\n\nThis package uses the Protected Planet API to access data on world\nprotected areas. You must first have obtained a Personal API Token by\nfilling in the form available at\n``_. 
Then you need to set an\nenvironment variable (we recommend using the name ``WDPA_KEY``\\ )\nusing either the command ``os.environ[""WDPA_KEY""]=""your_token""`` or\n`python-dotenv `_.\n\nInstallation\n============\n\nThe easiest way to install the ``pywdpa`` Python package is via `pip `_:\n\n.. code-block:: bash\n\n $ # For version on PyPI\n $ python -m pip install pywdpa\n\nor \n\n.. code-block:: bash\n\n $ # For development version on GitHub\n $ python -m pip install https://github.com/ghislainv/pywdpa/archive/master.zip\n\nbut you can also install ``pywdpa`` executing the ``setup.py`` file:\n\n.. code-block:: bash\n\n $ git clone https://github.com/ghislainv/pywdpa\n $ cd pywdpa\n $ python setup.py install\n\nContributing\n============\n\nThe ``pywdpa`` Python package is Open Source and released under\nthe `GNU GPL version 3 license\n`__. Anybody\nwho is interested can contribute to the package development following\nour `Community guidelines\n`__. Every\ncontributor must agree to follow the project\'s `Code of conduct\n`__.\n \n.. |Python version| image:: https://img.shields.io/pypi/pyversions/pywdpa?logo=python&logoColor=ffd43b&color=306998\n :target: https://pypi.org/project/pywdpa\n :alt: Python version\n\n.. |PyPI version| image:: https://img.shields.io/pypi/v/pywdpa\n :target: https://pypi.org/project/pywdpa\n :alt: PyPI version\n\n.. |GitHub Actions| image:: https://github.com/ghislainv/pywdpa/workflows/PyPkg/badge.svg\n :target: https://github.com/ghislainv/pywdpa/actions\n :alt: GitHub Actions\n\t \n.. |License| image:: https://img.shields.io/badge/licence-GPLv3-8f10cb.svg\n :target: https://www.gnu.org/licenses/gpl-3.0.html\n :alt: License GPLv3\n\n.. |Zenodo| image:: https://zenodo.org/badge/DOI/10.5281/zenodo.4275513.svg\n :target: https://doi.org/10.5281/zenodo.4275513\n :alt: Zenodo\n\n'",",https://doi.org/10.5281/zenodo.4275513\n","2020/01/30, 13:13:37",1364,GPL-3.0,0,93,"2021/03/12, 16:43:17",0,11,11,0,957,0,0.0,0.0,"2021/03/12, 16:58:52",v0.1.5,0,1,false,,true,true,"tdearden/pa-mine-intersect,brendanwallace/gfw_research,ghislainv/docker-forestatrisk-tropics,ghislainv/forestatrisk",,,,,,,,,, wdpar,R Interface to the World Database on Protected Areas.,prioritizr,https://github.com/prioritizr/wdpar.git,github,"r,conservation,data,spatial,database,biodiversity,protected-areas,cran,r-package,rstats",Conservation and Restoration,"2023/09/21, 01:42:46",35,0,2,true,R,,prioritizr,"R,TeX,Makefile",https://prioritizr.github.io/wdpar,"b'\n\n\n## wdpar: Interface to the World Database on Protected 
Areas\n\n[![lifecycle](https://img.shields.io/badge/Lifecycle-stable-brightgreen.svg)](https://lifecycle.r-lib.org/articles/stages.html)\n[![R-CMD-check-Ubuntu](https://img.shields.io/github/actions/workflow/status/prioritizr/wdpar/R-CMD-check-ubuntu.yaml?branch=master&label=Ubuntu)](https://github.com/prioritizr/wdpar/actions)\n[![R-CMD-check-Windows](https://img.shields.io/github/actions/workflow/status/prioritizr/wdpar/R-CMD-check-windows.yaml?branch=master&label=Windows)](https://github.com/prioritizr/wdpar/actions)\n[![R-CMD-check-macOS](https://img.shields.io/github/actions/workflow/status/prioritizr/wdpar/R-CMD-check-macos.yaml?branch=master&label=macOS)](https://github.com/prioritizr/wdpar/actions)\n[![Documentation](https://img.shields.io/github/actions/workflow/status/prioritizr/wdpar/documentation.yaml?branch=master&label=Documentation)](https://github.com/prioritizr/wdpar/actions)\n[![Coverage\nStatus](https://img.shields.io/codecov/c/github/prioritizr/wdpar?label=Coverage)](https://app.codecov.io/gh/prioritizr/wdpar/branch/master)\n[![CRAN\\_Status\\_Badge](http://www.r-pkg.org/badges/version/wdpar)](https://CRAN.R-project.org/package=wdpar)\n\n### Overview\n\n[Protected Planet](https://www.protectedplanet.net/en) provides the most\ncomprehensive data for conservation areas worldwide. Specifically, it\nprovides the World Database on Protected Areas (WDPA) and the World\nDatabase on Other Effective Area-Based Conservation Measures (WDOECM).\nThese databases are used to monitor the performance of existing\nprotected areas, and identify priority areas for future conservation\nefforts. Additionally, these databases receive monthly updates from\ngovernment agencies and non-governmental organizations. However, they\nare associated with [several issues that need to be addressed prior to\nanalysis](https://www.protectedplanet.net/en/resources/calculating-protected-area-coverage)\nand the dynamic nature of these databases means that the entire data\ncleaning process needs to be repeated after obtaining a new version.\n\nThe *wdpar R* package provides an interface to data provided by\n[Protected Planet](https://www.protectedplanet.net/en). Specifically,\nthe package can automatically obtain data from the [World Database on\nProtected Areas\n(WDPA)](https://www.protectedplanet.net/en/thematic-areas/wdpa?tab=WDPA)\nand the [World Database on Other Effective Area-Based Conservation\nMeasures\n(WDOECM)](https://www.protectedplanet.net/en/thematic-areas/oecms). It\nalso provides methods for cleaning data from these databases following\nbest practices (outlined in [Butchart *et al.*\n2015](https://doi.org/10.1111/conl.12158); [Protected Planet\n2021](https://www.protectedplanet.net/en/resources/calculating-protected-area-coverage);\n[Runge *et al.* 2015](https://doi.org/10.1126/science.aac9180)). The\nmain functions are `wdpa_fetch()` for downloading data and\n`wdpa_clean()` for cleaning data. For more information, please see the\npackage vignette.\n\n### Installation\n\n#### Package installation\n\nThe [latest official version of the *wdpar R*\npackage](https://CRAN.R-project.org/package=wdpar) can be installed\nusing the following R code. Please note that this package requires the\n[*curl*](https://github.com/jeroen/curl) and [*sf\nR*](https://github.com/r-spatial/sf) packages which may require\nadditional software to be installed. 
If you encounter problems\ninstalling the *wdpar R* package, please consult the installation\ninstructions for these packages.\n\n``` r\ninstall.packages(""wdpar"", repos = ""https://cran.rstudio.com/"")\n```\n\nAlternatively, the latest developmental version can be installed using\nthe following *R* code. Please note that while developmental versions\nmay contain additional features not present in the official version,\nthey may also contain coding errors.\n\n``` r\nif (!require(remotes))\n install.packages(""remotes"")\nremotes::install_github(""prioritizr/wdpar"")\n```\n\n#### Additional dependencies\n\nThe *wdpar R* package can leverage the *prepr R* package to augment data\ncleaning procedures. Since the *prepr R* package is not available on the\nComprehensive R Archive Network, it is listed as an optional dependency.\nIn some cases, the *prepr R* package is required to complete the data\ncleaning procedures (e.g.\xc2\xa0to fix especially extreme geometry issues) and\nthe *wdpar R* package will throw an error if the package is not\navailable. To install the *prepr R* package, please use the following R\ncode.\n\n``` r\nif (!require(remotes))\n install.packages(""remotes"")\nremotes::install_github(""dickoa/prepr"")\n```\n\nNote that the *prepr R* package has system dependencies that need to be\ninstalled before the package itself can be installed (see below for\nplatform-specific instructions).\n\n##### *Windows*\n\nThe [Rtools](https://cran.r-project.org/bin/windows/Rtools/) software\nneeds to be installed to install the *prepr R* package package from\nsource. This software provides system requirements from\n[rwinlib](https://github.com/rwinlib/).\n\n##### *Ubuntu*\n\nThe `gmp`, `mpfr`, and several spatial libraries need to be installed.\nFor recent versions of Ubuntu (18.04 and later), these libraries are\navailable through official repositories. They can be installed using the\nfollowing system commands:\n\n apt-get -y update\n apt-get install -y libgmp3-dev libmpfr-dev libudunits2-dev libgdal-dev libgeos-dev libproj-dev\n\n##### *Linux*\n\nFor Unix-alikes, `gmp` (>= 4.2.3), `mpfr` (>= 3.0.0), and `gdal`\n(>= 3.2.2) are required.\n\n##### *MacOS*\n\nThe `gmp`, `mpfr`, and `gdal` libraries are required. The easiest way to\ninstall these libraries is using [HomeBrew](https://brew.sh/). After\ninstalling HomeBrew, these libraries can be installed using the\nfollowing commands in the system terminal:\n\n brew install pkg-config\n brew install gmp\n brew install mpfr\n brew install gdal\n\n### Usage\n\nHere we will provide a short introduction to the *wdpar R* package.\nFirst, we will load the *wdpar R* package. We will also load the *dplyr*\nand *ggmap R* packages to help explore the data.\n\n``` r\n# load packages\nlibrary(wdpar)\nlibrary(dplyr)\nlibrary(ggmap)\n```\n\nNow we will download protected area data for Malta from [Protected\nPlanet](https://www.protectedplanet.net/en). 
We can achieve this by\nspecifying Malta\xe2\x80\x99s country name (i.e.\xc2\xa0`""Malta""`) or Malta\xe2\x80\x99s [ISO3\ncode](https://en.wikipedia.org/wiki/ISO_3166-1_alpha-3) (i.e.\xc2\xa0`""MLT""`).\nSince data are downloaded to a temporary directory by default, we will\nspecify that the data should be downloaded to a persistent directory.\nThis means that R won\xe2\x80\x99t have to re-download the same dataset every time\nwe restart our R session, and R can simply re-load previously downloaded\ndatasets as needed.\n\n``` r\n# download protected area data for Malta\nmlt_raw_pa_data <- wdpa_fetch(""Malta"", wait = TRUE,\n download_dir = rappdirs::user_data_dir(""wdpar""))\n```\n\nNext, we will clean the data set. Briefly, the cleaning steps include:\nexcluding protected areas that are not yet implemented, excluding\nprotected areas with limited conservation value, replacing missing data\ncodes (e.g.\xc2\xa0`""0""`) with missing data values (i.e.\xc2\xa0`NA`), replacing\nprotected areas represented as points with circular protected areas that\ncorrespond to their reported extent, repairing any topological issues\nwith the geometries, and erasing overlapping areas. For more\ninformation, see `?wdpa_clean`.\n\n``` r\n# clean Malta data\nmlt_pa_data <- wdpa_clean(mlt_raw_pa_data)\n```\n\nPrint preview of the data associated with each protected area.\n\n``` r\n# print preview\nhead(mlt_pa_data)\n```\n\n ## Simple feature collection with 6 features and 32 fields\n ## Geometry type: MULTIPOLYGON\n ## Dimension: XY\n ## Bounding box: xmin: 1382584 ymin: 4280853 xmax: 1399759 ymax: 4299615\n ## Projected CRS: +proj=cea +lon_0=0 +lat_ts=30 +x_0=0 +y_0=0 +datum=WGS84 +ellps=WGS84 +units=m +no_defs\n ## Precision: 1500 \n ## # A tibble: 6 \xc3\x97 33\n ## WDPAID WDPA_PID PA_DEF NAME ORIG_NAME DESIG DESIG_ENG DESIG_TYPE IUCN_CAT\n ## \n ## 1 194425 194425 PA \'Il-\xe2\x80\xa6 \'Il-G\xc5\xbcej\xe2\x80\xa6 Rise\xe2\x80\xa6 Nature R\xe2\x80\xa6 National Ia \n ## 2 194420 194420 PA Filf\xe2\x80\xa6 Filfla Rise\xe2\x80\xa6 Nature R\xe2\x80\xa6 National Ia \n ## 3 555588631 555588631 PA Il-M\xe2\x80\xa6 Il-Majji\xe2\x80\xa6 Park\xe2\x80\xa6 National\xe2\x80\xa6 National II \n ## 4 174757 174757 PA Il-\xc4\xa0\xe2\x80\xa6 Il-\xc4\xa0onna\xe2\x80\xa6 List\xe2\x80\xa6 List of \xe2\x80\xa6 National III \n ## 5 174758 174758 PA Bidn\xe2\x80\xa6 Bidnija,\xe2\x80\xa6 List\xe2\x80\xa6 List of \xe2\x80\xa6 National III \n ## 6 194415 194415 PA \'Il-\xe2\x80\xa6 \'Il-\xc4\xa0onn\xe2\x80\xa6 List\xe2\x80\xa6 List of \xe2\x80\xa6 National III \n ## # \xe2\x84\xb9 24 more variables: INT_CRIT , MARINE , REP_M_AREA ,\n ## # GIS_M_AREA , REP_AREA , GIS_AREA , NO_TAKE ,\n ## # NO_TK_AREA , STATUS , STATUS_YR , GOV_TYPE ,\n ## # OWN_TYPE , MANG_AUTH , MANG_PLAN , VERIF ,\n ## # METADATAID , SUB_LOC , PARENT_ISO , ISO3 ,\n ## # SUPP_INFO , CONS_OBJ , GEOMETRY_TYPE , AREA_KM2 ,\n ## # geometry \n\nFinally, after cleaning the data, let\xe2\x80\x99s plot a map showing Malta\xe2\x80\x99s\nprotected areas and color each area according to its management category\n([as defined by the The International Union for Conservation of\nNature](https://www.iucn.org/)).\n\n``` r\n# reproject data to longitude/latitude for plotting\nmlt_pa_data <- st_transform(mlt_pa_data, 4326)\n\n# download basemap imagery\nbg <- get_stamenmap(unname(st_bbox(mlt_pa_data)), zoom = 8,\n maptype = ""watercolor"", force = TRUE)\n\n# make map\nggmap(bg) +\ngeom_sf(aes(fill = IUCN_CAT), data = mlt_pa_data, inherit.aes = FALSE) +\ntheme(axis.title = 
element_blank(), legend.position = ""bottom"")\n```\n\n\n\nIf you need to calculate protected area coverage statistics for a\ncountry, please note that you will need to manually clip the cleaned\nprotected area data to the countries\xe2\x80\x99 coastline and its Exclusive\nEconomic Zone (EEZ) to obtain accurate results (see [official data\ncleaning\nguidelines](https://www.protectedplanet.net/en/resources/calculating-protected-area-coverage)).\nThis step is not performed by the *wdpar R* package because there is no\nsingle \xe2\x80\x9cbest\xe2\x80\x9d coastline and Exclusive Economic Zone (EEZ) dataset, since\nthe \xe2\x80\x9cbest\xe2\x80\x9d dataset for any given project depends on the level of\nrequired precision and available computational resources. For more\nexamples\xe2\x80\x94including an example of clipping the cleaned data to a\ncoastline\xe2\x80\x94please refer to the [package\nvignette](https://prioritizr.github.io/wdpar/articles/wdpar.html).\n\n### Citation\n\nPlease cite the *wdpar R* package and the relevant databases used in\npublications.\n\nTo cite the package, please use:\n\n> Hanson JO (2022) wdpar: Interface to the World Database on Protected\n> Areas. Journal of Open Source Software, 7: 4594. Available at\n> .\n\nTo cite the World Database on Protected Areas (WDPA), please use:\n\n> UNEP-WCMC and IUCN (\\[insert year of the version downloaded\\])\n> Protected Planet: The World Database on Protected Areas (WDPA),\n> \\[insert month/year of the version downloaded\\], Cambridge, UK:\n> UNEP-WCMC and IUCN. Available at: www.protectedplanet.net.\n\nTo cite the World Database on Other Effective Area-Based Conservation\nMeasures (WDOECM), please use:\n\n> UNEP-WCMC and IUCN (\\[insert year of the version downloaded\\])\n> Protected Planet: The world database on other effective area-based\n> conservation measures, \\[insert month/year of the version\n> downloaded\\], Cambridge, UK: UNEP-WCMC and IUCN. Available at:\n> www.protectedplanet.net.\n'",",https://doi.org/10.1111/conl.12158,https://doi.org/10.1126/science.aac9180,https://doi.org/10.21105/joss.04594","2017/12/14, 04:33:51",2141,GPL-3.0,10,292,"2023/09/20, 21:30:51",0,21,75,16,35,0,0.1,0.0,"2023/09/21, 20:26:52",v1.3.7,0,1,false,,false,true,,,https://github.com/prioritizr,,,,,https://avatars.githubusercontent.com/u/25472841?v=4,,, Plant-for-the-Planet,Allows you to plant trees with over 100 reforestation projects around the world.,Plant-for-the-Planet-org,https://github.com/Plant-for-the-Planet-org/treecounter-app.git,github,"climate-change,react-native,plant,reforestation,plant-trees,react",Conservation and Restoration,"2022/08/02, 04:40:30",39,0,2,true,JavaScript,Plant-for-the-Planet,Plant-for-the-Planet-org,"JavaScript,SCSS,CSS,Java,TypeScript,HTML,Shell,Objective-C++,Ruby,Objective-C,C,Swift,Procfile",https://www.trilliontreecampaign.org,"b'# Plant-for-the-Planet App\n![iOS build on MacOS](https://github.com/Plant-for-the-Planet-org/treecounter-app/workflows/iOS%20build%20on%20MacOS/badge.svg) ![Android build on Ubuntu](https://github.com/Plant-for-the-Planet-org/treecounter-app/workflows/Android%20build%20on%20Ubuntu/badge.svg)\n\nWelcome to this repository which contains the code of the web clients and the native iOS and Android apps of the Trillion Tree Campaign at https://www.trilliontreecampaign.org/ written with React-Native. 
For contributions please read our [contribution guide](https://github.com/Plant-for-the-Planet-org/treecounter-app/blob/develop/CONTRIBUTING.md) as well as our [code of conduct](https://github.com/Plant-for-the-Planet-org/treecounter-app/blob/develop/CODE_OF_CONDUCT.md) and the following information:\n\n## Directory Structure\n\n`ios` houses the iOS project files, `web` houses the web configuration, assets and index.html, and `android` contains Android project files.The `app` contains the react code base for all platform i.e components, reducers, containers etc.\n\n`index.web.js` is the entry point of web platform build, `index.js` is the entry point of both iOS and android platform build process.\n\n## Configuration\n\nCopy `.env.develop.sample` to `.env.develop` and add the necessary API keys for your development environment.\nInstall nvm following instructions from https://github.com/nvm-sh/nvm#install--update-script\nRun `nvm install && nvm use` to install and use required version of node.\n\n## Web Setup\n\n!!! Web setup is deprecated. Please visit https://github.com/plant-for-the-Planet-org/planet-webapp\n\nRun following commands\n\n```\nbash\nnpm install\nnpm start\n```\n\nTo run the app as prod, useful for testing features like (hashed js/css):\n\n```\nnpm run start-prod-server\n```\n\n## iOS Setup\n\n* Install latest Xcode.\n* Run following commands\n```\nbash\nbrew install node\nbrew install watchman\nnpm install -g react-native-cli\nnpm install\ncd ios && pod install\n```\n\n### Running into iOS simulator\n\nBuild and run the app in development mode deployed from Metro Bundler in an iOS simulator (starts Metro Bundler automatically if not already running, also starts iOS simulator):\n\n```\nbash\nnpm run ios\n```\n\nIf you have problems with a cached version of the bundle, you can stop the Metro Bundler and manually start it with the reset cache option:\n\n```\nreact-native start --reset-cache\n```\n\n## Android Setup\n\nSteps for setting up Dev Env for android on MAC is as follows:\n\n* Install Latest Android Studio.\n* From Android studio\xe2\x80\x99s SDK Manager add SDK 23, 27 and Build tool Version 23.0.1\n* Install JDK 8 if not already there and set JAVA_HOME specific to your JDK Version.\n* Create .bash_profile if not already there and add following variables in it:\n\n```\nbash\nexport ANDROID_HOME=$HOME/Library/Android/sdk\nexport PATH=$PATH:$ANDROID_HOME/emulator\nexport PATH=$PATH:$ANDROID_HOME/tools\nexport PATH=$PATH:$ANDROID_HOME/tools/bin\nexport PATH=$PATH:$ANDROID_HOME/platform-tools\nexport JAVA_HOME=/Library/Java/JavaVirtualMachines/jdk1.8.0_191.jdk/Contents/Home\n```\n\n* Run following commands\n\n```\nbash\nbrew install node\nbrew install watchman\nnpm install -g react-native-cli\nnpm install\n```\n\n### Running into Android emulator\n\nBuild and run the app in development mode deployed from Metro Bundler (starts Metro Bundler automatically if not already running) on an emulator or device. You need to start an Android emulator or attach a device manually before:\n\n```\nbash\nnpm run android\n```\n\nIf you have problems with a cached version of the bundle, you can stop the Metro Bundler and manually start it with the reset cache option:\n\n```\nreact-native start --reset-cache\n```\n\n## Development process\n\nThis project uses GitFlow (https://www.atlassian.com/git/tutorials/comparing-workflows/gitflow-workflow) with Master-Branch `master` and Development-Branch `develop`. The Master-Branch will be automatically released by CircleCI to the production system. 
There are currently some more protected branches that are also built automatically by CircleCI and mapped to test backends using the branch name as subdomain.\n\n## Versioning\n\n*App Versioning Guide*\n\ne.g. Version M.F.B\nV 1.1.10\n\nM = Major Changes\nF = Feature Addition\nB = Critical Bug Fixes and Additions\n\nA release candidate can have the target version number\nV 1.1.`11` RC `1`\n\nBeta and Alpha builds can also have the target version number\nV 1.1.`11` B `12`\nV 1.1.`11` A `12` [increment per release]\n\n\n## Supporters\nThe deployment and production of this app is also possible due to support from open-source software contributors.\n\n[Supporter logos, including Fixer.io]\n\n## License\n\nPlant-for-the-Planet App is free software, and is released under the terms of the GPL version 3 or (at your option) any later version. See license.txt.

\n'",,"2018/05/01, 11:12:52",2003,CUSTOM,0,6740,"2023/10/21, 07:32:53",52,1385,3215,81,4,14,0.0,0.7986606178440268,"2018/07/11, 07:16:38",v1.0.0,0,33,false,,true,true,,,https://github.com/Plant-for-the-Planet-org,https://www.plant-for-the-planet.org,,,,https://avatars.githubusercontent.com/u/6512301?v=4,,, Tree Mapper App,Tree Mapper extends the Plant-for-the-Planet App and allows on site coordinate submission during plantation.,Plant-for-the-Planet-org,https://github.com/Plant-for-the-Planet-org/treemapper.git,github,"climate-change,react-native,plant,reforestation",Conservation and Restoration,"2023/09/19, 06:01:04",18,0,3,true,TypeScript,Plant-for-the-Planet,Plant-for-the-Planet-org,"TypeScript,JavaScript,Java,Shell,Objective-C,Ruby,C,Swift",https://treemapper.app,"b'# TreeMapper App\n![iOS build on MacOS](https://github.com/Plant-for-the-Planet-org/treemapper/workflows/iOS%20build%20on%20MacOS/badge.svg) ![Android build on Ubuntu](https://github.com/Plant-for-the-Planet-org/treemapper/workflows/Android%20build%20on%20Ubuntu/badge.svg)\n\nTreeMapper is open source application based on react-native licensed under terms of GPL v3. It is managed by [Plant-for-the-Planet Foundation](https://www.plant-for-the-planet.org/) and open source contributors.\n\nFor contributions please read our [contribution guide](https://github.com/Plant-for-the-Planet-org/treemapper/blob/develop/CONTRIBUTING.md) as well as our [code of conduct](https://github.com/Plant-for-the-Planet-org/treemapper/blob/develop/CODE_OF_CONDUCT.md) and the following information:\n\n## TreeMapper on Web\nData uploaded by TreeMapper can be viewed on the [Plant-for-the-Planet Platform](https://pp.eco)\n\n\n## Directory Structure\n\n`ios` houses the iOS project files and `android` contains Android project files. 
The `app` contains the react code base for all platform.\n\n`index.js` is the entry point of both iOS and android platform build process.\n\n## Configuration\n\nCopy `.env.sample` to `.env` and add the necessary API keys for your development environment.\nInstall nvm following instructions from https://github.com/nvm-sh/nvm#install--update-script\nRun `nvm install && nvm use` to install and use required version of node.\n\n## iOS Setup\n\n* Install latest Xcode.\n* Run following commands\n```\nbash\nbrew install node\nbrew install watchman\nnpm install -g react-native-cli\nnpm install\ncd ios && pod install\n```\n\n### Running into iOS simulator\n\nBuild and run the app in development mode deployed from Metro Bundler in an iOS simulator (starts Metro Bundler automatically if not already running, also starts iOS simulator):\n\n```\nbash\nnpm run ios\n```\n\nIf you have problems with a cached version of the bundle, you can stop the Metro Bundler and manually start it with the reset cache option:\n\n```\nreact-native start --reset-cache\n```\n\n## Android Setup\n\nSteps for setting up Dev Env for android on MAC is as follows:\n\n* Install Latest Android Studio.\n* From Android studio\xe2\x80\x99s SDK Manager add SDK 28 and Build tool Version 28.0.3\n* Install JDK 8 if not already there and set JAVA_HOME specific to your JDK Version (below version number is just an example).\n* Create .bash_profile if not already there and add following variables in it:\n\n```\nbash\nexport ANDROID_HOME=$HOME/Library/Android/sdk\nexport PATH=$PATH:$ANDROID_HOME/emulator\nexport PATH=$PATH:$ANDROID_HOME/tools\nexport PATH=$PATH:$ANDROID_HOME/tools/bin\nexport PATH=$PATH:$ANDROID_HOME/platform-tools\nexport JAVA_HOME=/Library/Java/JavaVirtualMachines/jdk1.8.0_191.jdk/Contents/Home\n```\n\n* Run following commands\n\n```\nbash\nbrew install node\nbrew install watchman\nnpm install -g react-native-cli\nnpm install\n```\n\n### Running into Android emulator\n\nBuild and run the app in development mode deployed from Metro Bundler (starts Metro Bundler automatically if not already running) on an emulator or device. 
You need to start an Android emulator or attach a device manually before:\n\n```\nbash\nnpm run android\n```\n\nIf you have problems with a cached version of the bundle, you can stop the Metro Bundler and manually start it with the reset cache option:\n\n```\nreact-native start --reset-cache\n```\n\n## Development process\n\nThis project uses GitFlow (https://www.atlassian.com/git/tutorials/comparing-workflows/gitflow-workflow) with Master-Branch `master` and Development-Branch `develop`.\n\n## Versioning\n\n*App Versioning Guide*\n\neg: Version M.F.B\nV 1.1.10\n\nM = Major Changes\nF = Feature Addition\nB = Critical Bug Fixes and Additions\n\nRelease candidate can have the target version number\nV 1.1.`11` RC `1`\n\nBeta and Alpha builds can also have target version number\nV 1.1.`11` B `12`\nV 1.1.`11` A `12` [increment per release]\n\n\n## Supporters\nThe deployment and production of this app is also possible due to support from open-source software contributors.\n\n      \n\n      \n\n\n'",,"2020/04/20, 10:01:30",1283,CUSTOM,61,2036,"2023/08/06, 08:38:43",149,355,558,24,80,6,0.6,0.7418639053254438,,,0,15,false,,true,true,,,https://github.com/Plant-for-the-Planet-org,https://www.plant-for-the-planet.org,,,,https://avatars.githubusercontent.com/u/6512301?v=4,,, Continuous Reforestation,A GitHub Action for planting trees within your development workflow using the Reforestation as a Service (RaaS) API developed by DigitalHumani.,protontypes,https://github.com/protontypes/continuous-reforestation.git,github,"reforestation,continuous-integration,carbon-capture-sequestration,carbon-capture,sustainable-software,sustainability",Conservation and Restoration,"2021/03/27, 13:35:42",176,15,11,false,Python,protontypes,protontypes,"Python,Dockerfile",,"b'# Continuous Reforestation\n**Make tree planting a part of your daily workflow. :deciduous_tree:** \n\n[](https://github.com/protontypes/continuous-reforestation)\nA GitHub Action for planting trees within your development workflow using the Reforestation as a Service (RaaS) API developed by [DigitalHumani](https://digitalhumani.com/). \n\nPlanting trees is an easy way to make a difference in the fight against climate change. Every tree helps to bind CO2 as long as it grows and creates living space for wildlife. Automating the process gives you total control of where, when and how much you want to contribute while saving you the fuss of doing the whole process manually. By using the RaaS API, you or your project can plant trees in a transparent way by exposing the API calls and related statistics. The RaaS API is completely free of charge. You only pay for the trees (1 $ each) directly to the reforestation organization. Find more information on this project read our [blog post](https://protontypes.eu/blog/2021/03/25/continuous-reforestation/).
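\nFor orientation, the kind of request the action sends can be sketched in a few lines of Python. This is a hypothetical illustration only: the endpoint path, header name, and JSON field names below are assumptions inferred from the action\'s input parameters, not taken from the official RaaS documentation.\n\n```python\nimport os\n\nimport requests\n\n# Hypothetical sketch of a RaaS tree-planting request; the endpoint and\n# header below are assumptions, so check DigitalHumani\'s docs before use.\nresponse = requests.post(\n    ""https://api.digitalhumani.com/tree"",          # assumed endpoint\n    headers={""X-Api-Key"": os.environ[""RAASKEY""]},  # assumed auth header\n    json={\n        ""enterpriseId"": ""<your enterprise id>"",\n        ""user"": ""<github actor>"",\n        ""treeCount"": 1,\n        ""projectId"": ""14442771"",  # project id from the example workflow below\n    },\n)\nresponse.raise_for_status()\nprint(response.json())\n```\n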

\n[![Actions Status](https://github.com/protontypes/continuous-reforestation/workflows/Lint/badge.svg)](https://github.com/jacobtomlinson/protontypes/continuous-reforestation/actions)\n[![Actions Status](https://github.com/protontypes/continuous-reforestation/workflows/Integration%20Test/badge.svg)](https://github.com/protontypes/continuous-reforestation/actions)\n[![](https://badgen.net/badge/icon/Community%20Chat/green?icon=gitter&label)](https://gitter.im/protontypes/community)\n\n## Use cases\nPlant trees on ...\n* pull requests (and/or push, ...).\n* failed or successful tests.\n* the very first contribution to an open source project.\n* a new release, a milestone, or a closed issue.\n* a scheduled event (i.e. once per week).\n* the carbon footprint of your digital products after deployment.\n\nSee more possible trigger events [here](https://docs.github.com/en/actions/reference/events-that-trigger-workflows).\n\n## Usage\n\n1. \xf0\x9f\x8f\x81 To get started, you need an account with DigitalHumani RaaS. Since they are currently in the early stages, you have to contact them to get an account. Send them an email [here](https://digitalhumani.com/#contact). You also receive the API key value corresponding for your enterprise ID. This is your secret authentication key. **Do not add your API key to your workfile yaml file**.\n\n2. \xe2\x9c\x82\xef\xb8\x8f Copy the example worflow to `/.github/workflow/integration.yaml` and change the variables in the workflow to your data. Set the `production` variable to `false` to test your implementation within the sandboxed development API. Push your script to GitHub and check the GitHub Action tab of your project. If you use GitHub Action for the first time, activate it when prompted.\n\n3. \xf0\x9f\x93\x88 An open dashboard is provided to ensure a high level of transparency. This is currently under development and will show additional details. For this purpose visit:\n``\nhttps://digitalhumani.com/dashboard/\n``\n\n4. \xf0\x9f\x97\x9d\xef\xb8\x8f Add your authentication key as a secret in your repository `Settings` -> `Secrets` -> `New Repository Secret`: Name: `RAASKEY`, Value: ``. You can also add it as an organization wide secret in the setting of your organization.\n\n5. \xf0\x9f\x8c\xb1 Verify the number of trees planted in the dashboard development statistics. Set the `production` variable to `true` and push this commit. You now have left the development environment and started planting trees. From now on every configured trigger will continuously request to plant trees. At the end of each month you will be asked to confirm your requested amount of trees.\n\nTo see a list of all supported reforestation projects and more details on the RaaS API read the [documentation of DigitalHumani](https://digitalhumani.com/docs/#appendixlist-of-projects).\n\n**Disclaimer:** Even though this workflow automates the request to plant trees, the planting process itself remains manual labour by the reforestation organisations. They are also the people who write your invoice. Due to the amount of work it requires to write these invoices, DigitalHumani accumulates your plant requests until you reach a certain number, depending on your chosen reforestation project, before issuing the order. Below are the least required amounts to receive a monthly invoice and actually plant trees. 
If you plant more, don\'t mind this disclaimer.\n\n| Reforestation project | Necessary number of requested trees |\n| --------------------- | ----------------------------------- |\n| Chase Africa | 20 |\n| Conserve Natural Forests | 20 |\n| OneTreePlanted | 1 |\n| Sustainable Harvest International | 50 |\n| TIST | 20 | \n\n### Example workflows\n\n```yaml\nname: Plant a tree on a successful merged pull request to your main branch\non: \n pull_request_target:\n branches:\n - main\n types:\n - closed\njobs:\n planttrees:\n runs-on: ubuntu-latest\n steps:\n - name: Plant a Tree\n if: github.event.pull_request.merged == true\n id: planttrees\n uses: protontypes/continuous-reforestation@main\n with:\n # Enter your API variables below\n apikey: ${{ secrets.raaskey }}\n enterpriseid: """"\n user: ${{ github.actor }}\n treecount: 1\n projectid: ""14442771"" # This projectid can be used to have your trees planted where they are needed the most.\n production: ""true""\n\n - name: Response of digitalhumani.com RaaS API\n run: |\n echo ""${{ steps.planttrees.outputs.response }}""\n```\n\n```yaml\nname: Plant a tree on every push to main\non:\n push:\n branches:\n - main\njobs:\n planttrees:\n runs-on: ubuntu-latest\n steps:\n - name: Plant a Tree\n id: planttrees\n uses: protontypes/continuous-reforestation@main\n with:\n # Enter your API variables below\n apikey: ${{ secrets.raaskey }}\n enterpriseid: ""\n**Vieilledent G.,**\n\n**C. Vancutsem,**\n\n**C. Bourgoin,**\n\n**P. Ploton,**\n\n**P. Verley,** **and** **F. Achard.** 2023. Spatial scenario of tropical\ndeforestation and carbon emissions for the 21st century.\n*bioRxiv*. doi:\n[10.1101/2022.03.22.485306](https://doi.org/10.1101/2022.03.22.485306).\n[![manuscript in\npdf](Website/images/logo-pdf.png ""manuscript in pdf"")](https://www.biorxiv.org/content/10.1101/2022.03.22.485306v3.full.pdf)\nSupplementary Information\n[![SI](Website/images/logo-zip.png ""supplementary information"")](https://www.biorxiv.org/content/biorxiv/early/2023/05/12/2022.03.22.485306/DC1/embed/media-1.pdf)\n\n\n\nFigure: **Pantropical map of the risk of deforestation.**\n\n## Minimal reproducible example using the `forestatrisk` Python package\n\nThis\n[notebook](https://ecology.ghislainv.fr/forestatrisk/notebooks/far_tropics.html)\nprovides a minimal and reproducible example presenting the general\napproach we followed to model and forecast deforestation in each of the\n119 study areas (representing 92 countries) considered in the above\narticle. We use the Guadeloupe archipelago as a case study. The notebook\nis available at the [website](https://ecology.ghislainv.fr/forestatrisk)\nassociated with the `forestatrisk` Python package. This package has been\nspecifically developed for this study and provides functions to model\nand forecast deforestation in the tropics.\n\n## Steps followed to produce the results of the study\n\nWe present below the R and Python scripts which have been used to\nproduce the results of the study, from the datasets preparation to the\nwriting of the manuscript.\n\n### 1. Preparing datasets\n\n``` bash\n## Derive past forest cover change maps from the annual product \n## of Vancutsem et al. 2021 using Google Earth Engine.\npython Tropics/forest_gee_jrc.py\n\n## Download raw data from on-line databases (GADM, SRTM, WDPA, OSM), and Google Drive.\npython Tropics/download_raw_data.py\n\n## Compute explanatory variables (elevation, slope, distances, etc.).\npython Tropics/compute_variables.py\n```\n\n### 2. 
Estimating deforestation intensity\n\n``` bash\n## Compute deforestation rates and uncertainty\nRscript Intensity/intensity.R\n\n## Estimate contagious deforestation between states of Brazil\npython Intensity/brazil_fcc_jrc.py\n```\n\n### 3. Spatial modeling and forecasting\n\n``` bash\n## Model and forecast\npython Tropics/model_and_forecast.py\n```\n\n### 4. Post-processing and writing\n\n``` bash\n## Combine rasters to obtain continental maps\npython Maps/combine.py\n\n## Synthesize results\nRscript Analysis/synthesis.R\n\n## Plot main maps\nRscript Maps/main_maps.R\nRscript Maps/main_maps_prob.R\n\n## Plot supplementary maps\nRscript Maps/supp_maps.R\n\n## Compile documents\nRscript Manuscript/zzz_knitr_compile/compile_book.R\n```\n\n## Website accompanying the article\n\nA website at is accompanying the article\ncited above. The website includes the following resources:\n\n### Interactive map\n\nWe release interactive pantropical maps of the past forest cover change\n(2000\xe2\x80\x932010\xe2\x80\x932020), of the risk of deforestation (2020), and of the\nprojected forest cover in 2050 and 2100:\n\n- [Map of the tropics](https://forestatrisk.cirad.fr/maps.html)\n\n### Download\n\nRasters of results from this study can be downloaded as Cloud Optimized\nGeoTIFFs ([COG](https://www.cogeo.org/)):\n\n- [Rasters](https://forestatrisk.cirad.fr/rasters.html)\n- [COG tutorial](https://forestatrisk.cirad.fr/notebooks/cog.html)\n\n### Supplementary data\n\n- [Data S1](https://forestatrisk.cirad.fr/data-s.html): Uncertainty\n around projected forest cover.\n- [Data S2](https://forestatrisk.cirad.fr/data-s.html): Uncertainty\n around projected carbon emissions.\n\n### `forestatrisk` Python package\n\nResults from this study have been obtained with the `forestatrisk`\nPython package:\n\n- [Package website](https://ecology.ghislainv.fr/forestatrisk/) (with\n full documentation)\n- [Tutorials](https://ecology.ghislainv.fr/forestatrisk/articles.html)\n\n\n
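\nBecause the result rasters are distributed as Cloud Optimized GeoTIFFs, they can be read remotely without downloading whole files: an HTTP range request fetches only the window you ask for. A minimal sketch using the `rasterio` library (the URL and window bounds below are placeholders, not a real file from the download page):\n\n``` python\nimport rasterio\nfrom rasterio.windows import from_bounds\n\n# Placeholder URL; substitute a real COG listed on the rasters page.\nurl = ""https://example.org/deforestation_risk.tif""\n\nwith rasterio.open(url) as src:\n    # Read only a small window (bounds expressed in the raster\'s CRS).\n    window = from_bounds(-62.0, -4.0, -61.0, -3.0, src.transform)\n    data = src.read(1, window=window)\n    print(data.shape, src.crs)\n```\n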

\nCopyright © 2021 Cirad, EC JRC. All rights reserved.\n

\n\n\n\n\n\n\n\n'",",https://doi.org/10.18167/DVN1/7N2BTU,https://doi.org/10.1101/2022.03.22.485306,https://doi.org/10.1101/2022.03.22.485306","2020/01/25, 10:51:18",1369,GPL-3.0,12,461,"2021/03/16, 20:17:41",0,0,0,0,953,0,0,0.0,"2023/05/25, 16:18:19",v5.0,0,1,false,,false,false,,,,,,,,,,, Tree Tracker,Used by people who plant trees so they don't have to manually type coordinates with pictures they took.,protect-earth,https://github.com/protect-earth/tree-tracker-ios.git,github,,Conservation and Restoration,"2023/10/15, 20:27:31",12,0,1,true,Swift,Protect Earth,protect-earth,Swift,,"b'# Tree Tracker\nApp for cataloguing trees planted and allowing the recorded trees to be uploaded via a custom API to a centralised database. Mainly used by people who plant trees so they don\'t have to manually type coordinates with pictures they took and then try to guess the site/species afterwards.\n\n## Running the app from Xcode with Mock server\n1. Make sure you have downloaded Xcode 13.4+\n2. Open the project in Xcode (you\'ll notice the dependencies will start to fetch in the background).\n(In the meantime, Xcode will need to fetch dependencies for the project... \xf0\x9f\x98\xb4)\n3. The signing settings for the project are configured for our CICD build pipeline, and will not allow you to build and run the app on your own device. To fix this, simply enable automatic signing in XCode and update the bundle identifier to something unique to you. This will update the .xcodeproj file accordingly. **NOTE** _Changes to signing settings must not be checked in, as these will break the automated builds._\n4. Running the `Tree Tracker` scheme will use the API settings you [configure in your secrets file](#config).\n5. When running on a device, you\'ll also need to trust the certificate in _Settings -> General -> Profiles_, otherwise you\'ll see an error after installing the build and before running it.\n\n## Using your own Cloudinary server\n\n### Cloudinary setup\nCloudinary is used as an image storage and manipulation service, to temporarily hold captured images of the trees and allow these to be quickly resized for our needs.\n\n1. Create a free account on [Cloudinary](https://cloudinary.com/users/register/free) (this will give you the needed Cloud name).\n2. Now create an [upload preset](https://cloudinary.com/console/settings/upload) (this will give you the Upload Preset name).\n3. Keep the keys as you\'d need to add them to Secrets.swift later on.\n\n## Rollbar\nWe use [Rollbar](https://www.rollbar.com) for centralised logging of errors, to help us troubleshoot issues with the app during real world usage. \nIf you wish, you can sign up for a free Rollbar account, generate your own API token and provide it through `ROLLBAR_AUTH_TOKEN` to see telemetry in Rollbar during development. This can be useful if you are specifically adding telemetry features, but otherwise is probably more complex than just looking at the logs in XCode console. \n\nIf you choose not to setup Rollbar, simply add a dummy value for `ROLLBAR_AUTH_TOKEN` and any Rollbar calls will silently fail.\n\n## Additional project config {#config}\nNow, to run the project, we\'ll need to generate the Secrets file. This means you need to run first install [`pouch`](https://github.com/sunshinejr/pouch) (the easiest is using `brew install sunshinejr/formulae/pouch`). Now, you need to have these environment variables available. It would be wise to prepare this file once and keep it somewhere obvious but take care not to check it in to Git. 
You can simply `source` the file whenever you need to regenerate Secrets.\n\n```\nexport CLOUDINARY_CLOUD_NAME=qqq2ek4mq\nexport CLOUDINARY_UPLOAD_PRESET_NAME=iadfadff\nexport PROTECT_EARTH_API_TOKEN=""n|xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx""\nexport PROTECT_EARTH_API_BASE_URL=""api.endpoint.com""\nexport PROTECT_EARTH_ENV_NAME=Development\nexport ROLLBAR_AUTH_TOKEN=yourRollbarToken\n```\n\nIn the root folder, run `pouch`, which should generate a file at `./TreeTracker/Secrets.swift`.\n\nWith all that, you can switch the scheme to `Tree Tracker` and it _should_ run just fine.\n\n## Contributing\nPlease feel free to create issues and PRs for anything, really. However, bear in mind that this app is created for specific audience so PRs with functionality that is out of scope might not be merged (if you feel like the PR you\'re working on is questionable, please feel free to reach out via Issues).\n\n## License\n[MIT](License.md)\n'",,"2020/12/25, 12:59:53",1034,MIT,40,288,"2023/10/15, 20:27:36",10,59,100,29,10,0,0.2,0.502145922746781,"2023/05/15, 21:20:30",0.10.11,0,4,false,,true,false,,,https://github.com/protect-earth,https://protect.earth,"Corsham, UK",,,https://avatars.githubusercontent.com/u/62665039?v=4,,, FSDL Deforestation Detection,"A deep learning approach to detecting deforestation risk, using satellite images and a deep learning model.",karthikraja95,https://github.com/karthikraja95/fsdl_deforestation_detection.git,github,,Conservation and Restoration,"2021/05/16, 02:42:23",32,0,3,true,Jupyter Notebook,,,"Jupyter Notebook,Python,Makefile,Dockerfile,Shell,CSS",,"b'# FSDL Deforestation Detection\n\n
\n\n[![Open in Streamlit](https://static.streamlit.io/badges/streamlit_badge_black_white.svg)](https://share.streamlit.io/andrecnf/fsdl_deforestation_detection/fsdl_deforestation_detection/dashboard/streamlit_app.py)\n\n[![Dependencies Status](https://img.shields.io/badge/dependencies-up%20to%20date-brightgreen.svg)](https://github.com/karthikraja95/fsdl_deforestation_detection/pulls?utf8=%E2%9C%93&q=is%3Apr%20author%3Aapp%2Fdependabot)\n[![Code style: black](https://img.shields.io/badge/code%20style-black-000000.svg)](https://github.com/psf/black)\n[![Security: bandit](https://img.shields.io/badge/security-bandit-green.svg)](https://github.com/PyCQA/bandit)\n[![Pre-commit](https://img.shields.io/badge/pre--commit-enabled-brightgreen?logo=pre-commit&logoColor=white)](https://github.com/karthikraja95/fsdl_deforestation_detection/blob/master/.pre-commit-config.yaml)\n[![Semantic Versions](https://img.shields.io/badge/%F0%9F%9A%80-semantic%20versions-informational.svg)](https://github.com/karthikraja95/fsdl_deforestation_detection/releases)\n[![License](https://img.shields.io/github/license/karthikraja95/fsdl_deforestation_detection)](https://github.com/karthikraja95/fsdl_deforestation_detection/blob/master/LICENSE)\n\nDetecting deforestation from satellite images: a full stack deep learning project\n\n
\n\n## Description\n\nA deep learning approach to detecting deforestation risk, using satellite images and a deep learning model. We relied on [Planet](https://www.planet.com/) imagery from two [Kaggle](https://www.kaggle.com/) datasets (one from the [Amazon rainforest](https://www.kaggle.com/c/planet-understanding-the-amazon-from-space) and another on [oil palm plantations in Borneo](https://www.kaggle.com/c/widsdatathon2019)) and trained a [ResNet](https://paperswithcode.com/method/resnet) model using [FastAI](https://docs.fast.ai/). For more details, check the following links:\n\n* [Streamlit dashboard for testing the model and exploring the data](https://share.streamlit.io/andrecnf/fsdl_deforestation_detection/fsdl_deforestation_detection/dashboard/streamlit_app.py)\n\n* [Model training notebook in Colab](https://colab.research.google.com/github/karthikraja95/fsdl_deforestation_detection/blob/master/fsdl_deforestation_detection/experimental/FSDL_Final_Model.ipynb)\n\n* [Project management workspace in Notion](https://www.notion.so/Homepage-2ff744c443814f459d80a6e5819226a5)\n\n* [Loom video about the project](https://www.loom.com/share/365d412db3a0474ba46d4fdd7f4c5494)\n\n* [Medium article about this project](https://towardsdatascience.com/detecting-deforestation-from-satellite-images-7aa6dfbd9f61)\n\nThis is the result of a group project, made by [Andr\xc3\xa9 Ferreira](https://andrecnf.com/) and [Karthik Bhaskar](https://www.kbhaskar.com/), for the [Full Stack Deep Learning - Spring 2021 online course](http://fullstackdeeplearning.com/spring2021/).\n\n## Very first steps\n\n### Initial\n\n1. Initialize `git` inside your repo:\n\n```bash\ngit init\n```\n\n2. If you don\'t have `Poetry` installed run:\n\n```bash\nmake download-poetry\n```\n\n3. Initialize poetry and install `pre-commit` hooks:\n\n```bash\nmake install\n```\n\n4. Upload initial code to GitHub (ensure you\'ve run `make install` to use `pre-commit`):\n\n```bash\ngit add .\ngit commit -m "":tada: Initial commit""\ngit branch -M main\ngit remote add origin https://github.com/karthikraja95/fsdl_deforestation_detection.git\ngit push -u origin main\n```\n\n### Initial setting up\n\n- Set up [Dependabot](https://docs.github.com/en/github/administering-a-repository/enabling-and-disabling-version-updates#enabling-github-dependabot-version-updates) to ensure you have the latest dependencies.\n- Set up [Stale bot](https://github.com/apps/stale) for automatic issue closing.\n\n### Poetry\n\nAll manipulations with dependencies are executed through Poetry. If you\'re new to it, look through [the documentation](https://python-poetry.org/docs/).\n\n
\n**Notes about Poetry**\n\nPoetry\'s [commands](https://python-poetry.org/docs/cli/#commands) are very intuitive and easy to learn, like:\n\n- `poetry add numpy`\n- `poetry run pytest`\n- `poetry build`\n- etc.\n\n
\n\n### Makefile usage\n\n[`Makefile`](https://github.com/karthikraja95/fsdl_deforestation_detection/blob/master/Makefile) contains many targets for fast assembly and convenient development work.\n\n
\n**1. Download Poetry**\n\n```bash\nmake download-poetry\n```\n\n
\n**2. Install all dependencies and pre-commit hooks**\n\n```bash\nmake install\n```\n\nIf you do not want to install pre-commit hooks, run the command with the NO_PRE_COMMIT flag:\n\n```bash\nmake install NO_PRE_COMMIT=1\n```\n\n
\n**3. Check the security of your code**\n\n```bash\nmake check-safety\n```\n\nThis command launches a `Poetry` and `Pip` integrity check as well as identifies security issues with `Safety` and `Bandit`. By default, the build will not crash if any of the items fail. But you can set `STRICT=1` for the entire build, or you can configure strictness for each item separately.\n\n```bash\nmake check-safety STRICT=1\n```\n\nor only for `safety`:\n\n```bash\nmake check-safety SAFETY_STRICT=1\n```\n\nor for multiple items at once:\n\n```bash\nmake check-safety PIP_STRICT=1 SAFETY_STRICT=1\n```\n\n> List of flags for `check-safety` (can be set to `1` or `0`): `STRICT`, `POETRY_STRICT`, `PIP_STRICT`, `SAFETY_STRICT`, `BANDIT_STRICT`.\n\n
\n**4. Check the codestyle**\n\nThis command is similar to `check-safety`, but checks the code style instead. It uses `Black`, `Darglint`, `Isort`, and `Mypy` inside.\n\n```bash\nmake check-style\n```\n\nIt may also contain the `STRICT` flag.\n\n```bash\nmake check-style STRICT=1\n```\n\n> List of flags for `check-style` (can be set to `1` or `0`): `STRICT`, `BLACK_STRICT`, `DARGLINT_STRICT`, `ISORT_STRICT`, `MYPY_STRICT`.\n\n
\n**5. Run all the codestyle formatters**\n\nCodestyle uses `pre-commit` hooks, so ensure you\'ve run `make install` before.\n\n```bash\nmake codestyle\n```\n\n
\n**6. Run tests**\n\n```bash\nmake test\n```\n\n
\n**7. Run all the linters**\n\n```bash\nmake lint\n```\n\nwhich is the same as running:\n\n```bash\nmake test && make check-safety && make check-style\n```\n\n> List of flags for `lint` (can be set to `1` or `0`): `STRICT`, `POETRY_STRICT`, `PIP_STRICT`, `SAFETY_STRICT`, `BANDIT_STRICT`, `BLACK_STRICT`, `DARGLINT_STRICT`, `ISORT_STRICT`, `MYPY_STRICT`.\n\n
\n**8. Build docker**\n\n```bash\nmake docker\n```\n\nwhich is equivalent to:\n\n```bash\nmake docker VERSION=latest\n```\n\nMore information [here](https://github.com/karthikraja95/fsdl_deforestation_detection/tree/master/docker).\n\n
\n**9. Cleanup docker**\n\n```bash\nmake clean_docker\n```\n\nor, to remove all build artifacts:\n\n```bash\nmake clean\n```\n\nMore information [here](https://github.com/karthikraja95/fsdl_deforestation_detection/tree/master/docker).\n\n
\n\n## \xf0\x9f\x9b\xa1 License\n\n[![License](https://img.shields.io/github/license/karthikraja95/fsdl_deforestation_detection)](https://github.com/karthikraja95/fsdl_deforestation_detection/blob/master/LICENSE)\n\nThis project is licensed under the terms of the `MIT` license. See [LICENSE](https://github.com/karthikraja95/fsdl_deforestation_detection/blob/master/LICENSE) for more details.\n\n## \xf0\x9f\x93\x83 Citation\n\n```\n@misc{fsdl_deforestation_detection,\n author = {Karthik Bhaskar, Andre Ferreira},\n title = {Predicting deforestation from Satellite Images},\n year = {2021},\n publisher = {GitHub},\n journal = {GitHub repository},\n howpublished = {\\url{https://github.com/karthikraja95/fsdl_deforestation_detection}}\n}\n```\n\n\n'",,"2021/03/29, 19:46:39",940,MIT,0,65,"2023/01/09, 22:02:22",10,120,120,10,289,10,0.0,0.3207547169811321,,,0,3,false,,true,true,,,,,,,,,,, Global Reforestation Opportunity Assessment,Quantify carbon sequestration in naturally regenerating forests around the world.,forc-db,https://github.com/forc-db/GROA.git,github,,Conservation and Restoration,"2023/05/16, 12:14:27",29,0,3,true,R,ForC,forc-db,"R,Python",,"b'# Global Reforestation Opportunity Assessment (GROA)\n\n[![DOI](https://zenodo.org/badge/DOI/10.5281/zenodo.3983644.svg)](https://doi.org/10.5281/zenodo.3983644)\n\nThese data were assembled by the Nature Conservancy and a team of scientists from 19 institutions to quantify carbon sequestration in naturally regenerating forests around the world. This analysis was published in *Nature* in 2020 (Cook-Patton et al. 2020). \n\nWe encourage the research community to collaborate in updating, correcting, expanding, and utilizing this database through the GitHub platform. \n\n### Relationship to ForC\nGROA identified relevant studies within the Global Forest Carbon database, [ForC](https://forc-db.github.io/), and tracked down the original studies to confirm or update the data. It is now hosted by the [ForC-db organization on GitHub](https://github.com/forc-db). This repository also contains [code and data files for integration of GROA into ForC](https://github.com/forc-db/GROA/tree/master/GROA-ForC_integration).\n\n## Data Use Policy and Guidelines\n\n### License\n\nGROA is licensed under [CC-BY-4](https://creativecommons.org/licenses/by/4.0/), as described in [`license.txt`](https://github.com/forc-db/GROA/blob/master/license.txt).\n\n### Communication/ collaboration with GROA team\n\nWhile not required, we encourage researchers planning to use GROA to contact the principal investigator (Dr. Susan Cook-Patton, The Nature Conservancy) to inform her of intended use of the data and to discuss potential collaboration. \n\n### Database citation\nAny publications using these data should cite Cook-Patton et al. 2020. \n\nIn addition, this database should be referenced as well (DOI representing all versions: [10.5281/zenodo.3983644](https://doi.org/10.5281/zenodo.3983644)). If the data has changed since original publication, arising publications should cite the specific version used, ideally with a DOI associated with that version. Authors may contact Susan Cook-Patton (The Nature Conservancy) or Kristina Anderson-Teixeira (Smithsonian) to generate a release and associated DOI that matches the database version used.\n\n## Citation\nCook-Patton, S.C., S.M. Leavitt, D. Gibbs, N.L. Harris, K. Lister, K.J. Anderson-Teixeira, R.D. Briggs, R.L. Chazdon, T.W. Crowther, P.W. Ellis, H.P. Griscom, V. Herrmann, K.D. Holl, R.A. Houghton, C. Larrosa, G. Lomax, R. Lucas, P. 
Madsen, Y. Malhi, A. Paquette, J.D. Parker, K. Paul, D. Routh, S. Roxburgh, S. Saatchi, J. van den Hoogen, W.S. Walker, C. E. Wheeler, S. A. Wood, L. Xu, & B. W. Griscom (2020) Mapping Potential Carbon Capture from Global Natural Forest Regrowth. *Nature*, in press.\n\n\n\n## Contacts\nSusan Cook-Patton, The Nature Conservancy\n\nKristina Anderson-Teixeira, Smithsonian\n'",",https://doi.org/10.5281/zenodo.3983644,https://doi.org/10.5281/zenodo.3983644","2018/09/06, 18:26:52",1875,CC-BY-4.0,1,131,"2020/08/07, 20:44:47",3,1,23,0,1174,1,0.0,0.46399999999999997,"2020/08/13, 19:46:56",v1.0,0,5,false,,false,false,,,https://github.com/forc-db,,Smithsonian,,,https://avatars.githubusercontent.com/u/16504833?v=4,,, EU forest tree point data,A compilation of analysis-ready point data for the purpose of vegetation and Potential Natural Vegetation mapping for the EU.,openlandmap,https://gitlab.com/openlandmap/eu-forest-tree-point-data,gitlab,,Conservation and Restoration,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,, Treetracker,"Coordinates the digital capture of tree growth data in the field, allowing to establish employment for people living in extreme poverty based on tree planting.",Greenstand,https://github.com/Greenstand/treetracker-android.git,github,android,Conservation and Restoration,"2023/05/19, 03:11:07",88,0,21,true,Kotlin,Greenstand,Greenstand,"Kotlin,Ruby,Shell",,"b'![Build Status](https://github.com/Greenstand/treetracker-android/workflows/Treetracker%20Android%20App%20CI/badge.svg?branch=master)\n\n# Treetracker Android\n\n## Current Milestones and Issue Topics\n\n### Next feature release\n\nFeature we are currently prioritizing\nhttps://github.com/Greenstand/treetracker-android/milestone/17\n\n \n \n\n## Project Details\n\nThis is the Android app for Greenstands Treetracker open source project (www.treetracker.org). \nThis project coordinates the digital capture of tree growth data in the field, allowing to establish employment for people living in extreme poverty based on tree planting.\nThe Android segment is the data collection tool that transports the information into the next service through a data pipeline towards the veryification service.\n\nFor more on design intent and the app\'s user story see the [wiki in this repository](https://github.com/Greenstand/treetracker-android/wiki/User-Story)\n\n \n \n\n## Project Setup\nDevelopers will need to ask the #android_chat channel in Slack for the treetracker.keys.properties file to build gradle for the application.\n\nFor development, select the build variant _dev_. This build variant is configured to allow trees to be added without a specific accuracy. \n\n \n\n## QC Deployment\n\nNote: QC deployment pipeline are about to be changed and moved to Github actions from Travis. The \nfollowing details will be updated once the change is made.\n\nThis repo has been configured to be easily deployed to QC through Travis. The process, however, is still a manual.\n\nIn order to get a new build to QC, take the following steps:\n\n1. Go to the [Project page in Travis](https://travis-ci.com/Greenstand/treetracker-android)\n2. Select _More Options > Trigger build_\n3. On the dialog, select the appropriate branch, and use either of these configurations:\n * For an Android Beta build:\n```\nscript:\n - ""fastlane android beta""\n```\n * For an Android JustDigIt build:\n```\nscript:\n - ""fastlane android justdiggit_beta""\n```\t\n4. Run the build and then wait for it to complete. 
_Voil\xc3\xa1._\n\nRunning without a `script` custom parameter will result in a standard build to be run without any artifacts deployed.\n\n### Fastlane\n\nFastlane must be installed using\nbundle install --path vendor/bundle\nfastlane install_plugins\nfirebase login\n\notherwise the firebase plugin will not work\n\n\n\n \n \n\n## Deployment\n\nThere is one prerequisite to using the appropriate gradle tasks:\n\n1) Placing the relevant keys.json from the PlayStore in the ./app folder [example here](https://docs.fastlane.tools/getting-started/android/setup/#collect-your-google-credentials)\n\nOnce this is done, you can proceed by running one of the following tasks to run the release:\n\n* `bootstrapReleasePlayResources` | Downloads the play store listing for the Release build. No download of image resources. See #18.\n* `generateReleasePlayResources` | Collects play store resources for the Release build\n* `publishListingRelease` | Updates the play store listing for the Release build\n\n \n \n\n## Contributing\n\n See [Contributing in the Development-Overview README](https://github.com/Greenstand/Development-Overview/blob/master/README.md)\n\nReview the project board for current priorities [Android Project](https://github.com/orgs/Greenstand/projects/5)\n\nPlease review the [issue tracker](https://github.com/Greenstand/treetracker-android/issues) here on this github repository \n\nCheck out the cool [roadmap](https://github.com/Greenstand/Development-Overview/blob/master/Roadmap.md)\n\nAll contributions should be submitted as pull requests against the master branch in this github repository. https://github.com/Greenstand/treetracker-android/\n'",,"2017/09/13, 20:05:32",2233,AGPL-3.0,47,1901,"2023/08/22, 09:00:17",72,528,1030,75,64,8,0.0,0.6222222222222222,,,1,38,true,"github,patreon,open_collective,ko_fi,tidelift,community_bridge,liberapay,issuehunt,otechie,lfx_crowdfunding,custom",true,false,,,https://github.com/Greenstand,https://app.gitbook.com/@greenstand/s/engineering/,,,,https://avatars.githubusercontent.com/u/25363578?v=4,,, gfcanalysis,Tools for working with Global Forest Change dataset.,azvoleff,https://github.com/azvoleff/gfcanalysis.git,github,,Conservation and Restoration,"2023/10/09, 21:55:23",17,0,2,true,R,,,R,,"b""# gfcanalysis\n\n## Overview\nThe `gfcanalysis` package facilitates analyses of forest change using the \n[Global Forest \nChange](http://earthenginepartners.appspot.com/science-2013-global-forest) \ndataset released by Hansen et al. 2013, and subsequent versions of that data. \nThe package was originally designed to support the work of the [Tropical \nEcology Assessment & Monitoring (TEAM) Network](http://www.teamnetwork.org).\n\n## Package Installation\n\n## Release Version\n\nTo install the release version of `gfcanalysis` in R, just run:\n\n```R\ninstall.packages('gfcanalysis')\n```\n\n## Development Version\n\nThe easiest way to install the development version of the `gfcanalysis` package \nis using the \n[`devtools`](https://CRAN.R-project.org/package=devtools)\npackage. After installing `devtools`, type:\n\n```R\ninstall_github('azvoleff/gfcanalysis')\n```\n\n## Usage\n\nThere are two example scripts demonstrating how to use the `gfcanalysis` \npackage:\n\n- For those new to R, see the \n[install_gfcanalysis.R](https://raw.githubusercontent.com/azvoleff/gfcanalysis/master/inst/examples/install_gfcanalysis.R)\nscript. 
This script provides guidance on how to download and install the \ndevelopment version of the package in R.\n\n- For an example of how to run the package to calculate forest \nchange statistics for a given area of interest (AOI), see the examples in\n[analyze_gfc.R](https://raw.githubusercontent.com/azvoleff/gfcanalysis/master/inst/examples/analyze_GFC.R).\n\n## Author Contact Information\n\n[Matt Cooper](mailto:mw.coop.r@gmail.com) (current author and maintainer) \nPhD Student \nDepartment of Geographical Sciences \nUniversity of Maryland \n\n[Alex Zvoleff](mailto:azvoleff@conservation.org) (original author) \nSenior Director, Resilience Science \nMoore Center for Science \nConservation International \n2011 Crystal Dr. Suite 600 \nArlington, VA 22202 \nUSA\n\n## References\nHansen, M. C., P. V. Potapov, R. Moore, M. Hancher, S. A. Turubanova, A. \nTyukavina, D. Thau, S. V. Stehman, S. J. Goetz, T. R. Loveland, A. Kommareddy, \nA. Egorov, L. Chini, C. O. Justice, and J. R. G. Townshend. 2013. \nHigh-Resolution Global Maps of 21st-Century Forest Cover Change. Science 342, \n(15 November): 850--853. Data available on-line from: \nhttp://earthenginepartners.appspot.com/science-2013-global-forest.\n""",,"2014/02/24, 16:15:48",3530,AGPL-3.0,7,222,"2023/10/11, 14:12:07",0,3,14,1,14,0,0.3333333333333333,0.10599078341013823,,,0,4,false,,false,false,,,,,,,,,,, California Forest Observatory,Python wrappers for accessing Forest Observatory data via the Salo API.,forestobservatory,https://github.com/forestobservatory/cfo-api.git,github,"ecology,wildfire,california,api,python-wrappers",Conservation and Restoration,"2021/02/09, 01:20:42",11,4,1,false,Python,California Forest Observatory,forestobservatory,"Python,Makefile,Dockerfile",https://www.forestobservatory.com,"b'\n\n# Introduction\n\nThe [California Forest Observatory][cfo-web] (CFO) is a data-driven forest monitoring system that maps the drivers of wildfire behavior across the state\xe2\x80\x94including vegetation fuels, weather, topography & infrastructure\xe2\x80\x94from space.\n\nThe `cfo` library was designed to provide easy access to CFO datasets. Each dataset has a unique `asset_id`, and the `search` and `fetch` workflows were designed to query and download these assets. \n\n- You can search for asset IDs by geography, data type, and time of year\n - `forest.search(geography=""SantaCruzCounty"", metric=""CanopyHeight"", year=2020)`\n- You can download the data to your local machine\n - `forest.download(asset_id, output_file)`\n- If you don\'t want the file, you can just fetch the download URL\n - `forest.fetch(asset_id, dl=True)` \n- Or a WMS URL for web mapping \n - `forest.fetch(asset_id, wms=True)`\n\nYou can find support for the CFO API at the [community forum][cfo-forum].\n\n## License\n\nCFO data are available for free for non-commercial use per the [API terms][api-terms]. You must have a CFO account, which you can create by visiting [the web map][cfo-web], clicking the menu in the top right corner and selecting ""Create an account."" Please keep track of the e-mail address and password you used to create your Forest Observatory account, as you\'ll need them to authenticate API access.\n\nThe software provided here, the `cfo` python API wrapper, is provided with an MIT license. 
Please do not confuse the license terms for the wrapper with the [terms of use][api-terms] for the API.\n\n## Table of contents\n\n- [Installation](#installation)\n- [Authentication](#authentication)\n- [Searching for data](#searching)\n- [Downloading data](#downloads)\n- [Serving map tiles](#map-tiles)\n- [Contact](#contact)\n\n# Installation\n\nThis library can be installed with `pip`.\n\n```bash\npip install cfo\n```\n\nIf you don\'t have `pip`, you can also clone the repository locally and install using python\'s `setuptools`\n\n```bash\ngit clone https://github.com/forestobservatory/cfo-api.git\ncd cfo-api\npython setup.py install\n```\n\nOnce installed, you should be able to load the `cfo` module in python. Instantiate the `api` class to begin working with the Forest Observatory API.\n\n```python\nimport cfo\nforest = cfo.api()\n```\n\n\n\n# Authentication\n\nA Forest Observatory account is required to use the API (sign up free at [forestobservatory.com][cfo-web]).\n\nThere are two authentication methods: entering your CFO account\'s email/password at runtime or setting environment variables.\n\n### Passing your credentials at runtime\n\nUsing any API call (`forest.search()`, `forest.fetch()`, `forest.download()`) will prompt you to enter the following authentication information:\n\n```python\n>>> CFO E-mail: slug@forest.net\n>>> CFO Password: **********\n```\n\nYou can also authenticate directly with `forest.authenticate()`.\n\nThis retrieves an authentication token from the API, which is stored as a temp file for future access (this does not store your e-mail/password). The API reads this stored token, which means you won\'t have to pass your email/password during each session.\n\n### Setting environment variables \n\nYou can forego runtime credential entry by setting environment variables. This is the lowest friction, least secure approach. You\'ll set the following variables in your `.bashrc` profile or elsewhere.\n\n```bash\nexport CFO_EMAIL=slug@forest.net\nexport CFO_PASS=ari0limax\n```\n\n### Restoring a botched authentication\n\nThe temp file that stores your authentication credentials can sometimes get donked up. To re-authenticate, use the following command to pass your credentials and overwrite the temporary token data.\n\n```python\nforest.authenticate(ignore_temp=True)\n```\n\n\n\n# Searching\n\nCFO data are organized by `asset_id`. These IDs contain information on the spatial extent of the data, the category and name of the data, the time of collection, and the spatial resolution. 
Asset IDs follow this naming format:\n\n```python\nasset_id = {geography}-{category}-{metric}-{year}-{timeOfYear}-{resolution}\n```\n\nSome examples:\n\n- A statewide vegetation fuels dataset that\'s rendered in the Layers tab: `California-Vegetation-CanopyHeight-2020-Summer-00010m`.\n- A statewide weather dataset queried in the Trends tab: `California-Weather-WindSpeed-2020-0601-03000m`.\n- A county-level dataset accessed in the Download tab: `Marin-Vegetation-SurfaceFuels-2020-Spring-00010m`.\n\nThe `forest.search()` function queries the API and returns the assets that match the search terms.\n\n```python\n>>> import cfo\n>>> forest = cfo.api()\n>>> forest.search(geography=""MendocinoCounty"", metric=""CanopyCover"")\n2020-09-07 13:53:47,028 INFO cfo.utils [authenticate] Loaded cfo token\n[\'MendocinoCounty-Vegetation-CanopyCover-2020-Fall-00010m\']\n```\n\nThe default behavior of this function is to return the asset IDs as a list.\n\nYou could instead return the API JSON data, including asset ID, the spatial extent (`bbox`) of the data, the catalog it\'s stored in, etc., by setting `just_assets=False`.\n\n```python\n>>> forest.search(geography=""MendocinoCounty"", metric=""CanopyCover"", just_assets=False)\n[{\'asset_id\': \'MendocinoCounty-Vegetation-CanopyCover-2020-Fall-00010m\',\n\'attribute_dict\': {},\n\'bbox\': [-124.022978699284, -122.814767867036, 38.7548320538975, 40.0060478879686],\n\'catalog\': \'cfo\',\n\'description\': \'CanopyCover\',\n\'expiration_utc_datetime\': \'\',\n\'utc_datetime\': \'2020-07-09 09:52:42.292286+00:00\'}]\n```\n\nAnd to examine the full response from the `requests` library, use `forest.search(raw=True)`.\n\nBut with over 17,000 published assets it\'s not easy to know just what to search by. So we wrote some functions to simplify your searches.\n\n### Convenience functions\n\nBased on the asset ID naming convention above, we\'ve provided some `list` functions as a guide to what\'s available.\n\n- Geography - CFO datasets have been clipped to different spatial extents: statewide, by county, by municipality, by watershed.\n - `forest.list_geographies()` - returns the different geographic extents. Use `forest.list_geographies(by=""County"")` to return just the unique counties.\n - `forest.list_geography_types()` - returns the categories of geographical clipping available.\n- Category - we currently provide three categories of data.\n - `forest.list_categories()` - returns [`Vegetation`, `Weather`, `Wildfire`]\n- Metric - each category of data contains a list of different available data types\n - `forest.list_metrics()` - returns the unique metrics for each category. \n - Run `forest.list_metrics(category=""Weather"")` to return only weather-specific metrics.\n\nUse these as keywords when searching for data (e.g. `id_list = forest.search(geography=""FresnoCounty"", category=""Vegetation"")`).\n\nYou can also use wildcards:\n\n```python\n>>> forest.search(geography=\'Plumas*\', metric=\'CanopyHeight\')\n[\'PlumasCounty-Vegetation-CanopyHeight-2020-Fall-00010m\',\n\'PlumasEurekaMunicipality-Vegetation-CanopyHeight-2020-Fall-00010m\',\n\'PlumasLakeMunicipality-Vegetation-CanopyHeight-2020-Fall-00010m\']\n```\n\n### A note on available datasets\n\nEven though we have a range of geographic extents, resolutions, and metrics, it is **not** the case that we provide all permutations of extent/resolution/metric. For example, we clip all `Vegetation` data to the county level, but we do not clip any `Weather` data that fine. 
All weather data are only available at the state level. \n\nThis means you don\'t really need to specify the geographic extent if you search for weather data. You\'ll get pretty far with `wind_ids = forest.search(metric=""WindSpeed"")`.\n\n\n\n# Downloads\n\nOnce you\'ve generated a list of asset IDs, you can then download the files to your local machine. The `forest.download()` function requires an asset_id string so you\'ll have to iterate over search results, which are often returned as lists.\n\nHere\'s how to search for and download all data from Mendocino County.\n\n```python\nimport cfo\nforest = cfo.api()\nasset_ids = forest.search(geography=""MendocinoCounty"")\nfor asset in asset_ids:\n forest.download(asset)\n```\n\nWhich generates the following output as it downloads each file.\n\n```\n2020-09-07 16:19:24,542 INFO cfo.utils [download] Beginning download for: MendocinoCounty-Vegetation-CanopyHeight-2020-Fall-00010m\n2020-09-07 16:19:28,853 INFO cfo.utils [download] Successfully downloaded MendocinoCounty-Vegetation-CanopyHeight-2020-Fall-00010m to file: /home/slug/MendocinoCounty-Vegetation-CanopyHeight-2020-Fall-00010m.tif\n2020-09-07 16:19:29,359 INFO cfo.utils [download] Beginning download for: MendocinoCounty-Vegetation-CanopyBaseHeight-2020-Fall-00010m\n2020-09-07 16:19:32,321 INFO cfo.utils [download] Successfully downloaded MendocinoCounty-Vegetation-CanopyBaseHeight-2020-Fall-00010m to file: /home/slug/MendocinoCounty-Vegetation-CanopyBaseHeight-2020-Fall-00010m.tif\n...\n```\n\nThis function uses the `fetch()` command under the hood to retrieve a URL for where the file is hosted on google cloud storage. It then performs a `GET` call to download the file locally. \n\nThe function will download the file to your current working directory if you don\'t specify an output file path. You can set a custom output path with `forest.download(asset_id, path)`. 
This may be tricky if you\'re downloading multiple datasets, but you could parse the asset_id to generate useful names for output files.\n\n```python\nasset_ids = forest.search(geography=""MendocinoCounty"")\nfor asset in asset_ids:\n geo, category, metric, year, timeOfYear, res = asset.split(""-"")\n output_path = f""/external/downloads/CFO-{metric}-{year}.tif""\n forest.download(asset, output_path)\n```\n\nWhich generates the following output:\n\n```\n2020-09-07 23:10:02,312 INFO cfo.utils [download] Beginning download for: MendocinoCounty-Vegetation-CanopyHeight-2020-Fall-00010m\n2020-09-07 23:10:25,163 INFO cfo.utils [download] Successfully downloaded MendocinoCounty-Vegetation-CanopyHeight-2020-Fall-00010m to file: /external/downloads/CFO-CanopyHeight-2020.tif\n2020-09-07 23:10:25,596 INFO cfo.utils [download] Beginning download for: MendocinoCounty-Vegetation-CanopyBaseHeight-2020-Fall-00010m\n2020-09-07 23:10:47,965 INFO cfo.utils [download] Successfully downloaded MendocinoCounty-Vegetation-CanopyBaseHeight-2020-Fall-00010m to file: /external/downloads/CFO-CanopyBaseHeight-2020.tif\n...\n```\n\n\n# Map tiles\n\nThe `fetch` function also returns URLs for displaying CFO data in web mapping applications as WMS tile layers.\n\n```python\nforest.fetch(""MendocinoCounty-Vegetation-CanopyHeight-2020-Fall-00010m"", wms=True)\n\'https://maps.salo.ai/geoserver/cfo/wms?layers=cfo:MendocinoCounty-Vegetation-CanopyHeight-2020-Fall-00010m&format=""image/png""&styles=vegetation&p0=0.0&p2=1.44&p25=18.0&p30=21.599999999999998&p50=36.0&p60=43.199999999999996&p75=54.0&p90=64.8&p98=70.56&p100=72.0\'\n\n```\n\nWMS URLs don\'t always easily plug and play with different rendering services, but they should work with a little nudging. Here\'s how to use the above URL to visualize these data in a jupyter notebook with `ipyleaflet`.\n\n```python\nfrom ipyleaflet import Map, WMSLayer, LayersControl, basemaps\nwms = WMSLayer(\n url=\'https://maps.salo.ai/geoserver/cfo/wms?p0=0.0&p2=1.44&p25=18.0&p30=21.599999999999998&p50=36.0&p60=43.199999999999996&p75=54.0&p90=64.8&p98=70.56&p100=72.0\',\n layers=""cfo:MendocinoCounty-Vegetation-CanopyHeight-2020-Fall-00010m"",\n name=""Mendocino Canopy Height"",\n styles=""vegetation"",\n format=""image/png8"",\n transparent=True,\n attribution=""Forest Observatory \xc2\xa9 Salo Sciences"",\n)\nm = Map(basemap=basemaps.Stamen.Terrain, center=(39.39,-123.33), zoom=10)\nm.add_layer(wms)\ncontrol = LayersControl(position=\'topright\')\nm.add_control(control)\nm\n```\n\nThis code, executed in `jupyter-lab`, should look something like this.\n\n\n\nThe URL has a lot of useful information. Here\'s a quick breakdown of what\'s encoded in the string returned from `fetch`. \n\n- The base URL (`https://maps.salo.ai/geoserver/cfo/wms`) is our map server address.\n- Each component following the `?` is a parameter passed to the map server.\n- `layers` specifies which asset to show (and is defined based on `{catalog}:{asset_id}` naming).\n- `format` defines the image format the data are rendered in (use `image/png8` for best performance).\n- `styles` defines the color palette (which you can retrieve with `forest.list_styles()`).\n- the long list of `p0, p2, p25, ..., p100` are parameters we use to render custom raster styles on the fly. These numbers are based on the min/max raster values of a dataset and can be altered on the fly to dynamically scale the data.\n\n# Contact\n\nIssue tracking isn\'t set up for this repository yet. 
Please visit the [Forest Observatory Community Forum][cfo-forum] for technical support. To get in touch directly or to inquire about commercial API access, contact [tech@forestobservatory.com](mailto:tech@forestobservatory.com).\n\nThe California Forest Observatory API is developed and maintained by [Salo Sciences][salo-web].\n\n\n[api-terms]: https://forestobservatory.com/api.html\n[cfo-web]: https://forestobservatory.com\n[cfo-forum]: https://groups.google.com/a/forestobservatory.com/g/community\n[salo-web]: https://salo.ai\n'",,"2020/06/24, 22:49:53",1218,MIT,0,96,"2023/10/11, 14:12:07",4,0,0,0,14,1,0,0.0,,,0,1,false,,false,false,"narest-qa/repo63,catzzz/slacgismo-gridlabd-mirror,catzzz/gridlabd-slacgismo-clone,catzzz/gridlabd-hipas-clone",,https://github.com/forestobservatory,forestobservatory.com,California,,,https://avatars.githubusercontent.com/u/64549705?v=4,,, prioritizr,Uses mixed integer linear programming techniques to provide a flexible interface for building and solving conservation planning problems.,prioritizr,https://github.com/prioritizr/prioritizr.git,github,"r,spatial,optimization,conservation,biodiversity,prioritization,solver,conservation-planner,rstats",Conservation and Restoration,"2023/10/18, 00:22:19",114,0,26,true,R,,prioritizr,"R,C++,TeX,C,Makefile,CSS",https://prioritizr.net,"b'\n\n\n# prioritizr \n\n## Systematic Conservation Prioritization in R\n\n\n\n[![lifecycle](https://img.shields.io/badge/Lifecycle-stable-brightgreen.svg)](https://lifecycle.r-lib.org/articles/stages.html)\n[![R-CMD-check-Ubuntu](https://img.shields.io/github/actions/workflow/status/prioritizr/prioritizr/R-CMD-check-ubuntu.yaml?branch=main&label=Ubuntu)](https://github.com/prioritizr/prioritizr/actions)\n[![R-CMD-check-Windows](https://img.shields.io/github/actions/workflow/status/prioritizr/prioritizr/R-CMD-check-windows.yaml?branch=main&label=Windows)](https://github.com/prioritizr/prioritizr/actions)\n[![R-CMD-check-macOS](https://img.shields.io/github/actions/workflow/status/prioritizr/prioritizr/R-CMD-check-macos.yaml?branch=main&label=macOS)](https://github.com/prioritizr/prioritizr/actions)\n[![Documentation](https://img.shields.io/github/actions/workflow/status/prioritizr/prioritizr/documentation.yaml?branch=main&label=Documentation)](https://github.com/prioritizr/prioritizr/actions)\n[![Coverage\nStatus](https://img.shields.io/codecov/c/github/prioritizr/prioritizr?label=Coverage)](https://app.codecov.io/gh/prioritizr/prioritizr/branch/main)\n[![CRAN-Status-Badge](http://www.r-pkg.org/badges/version/prioritizr)](https://CRAN.R-project.org/package=prioritizr)\n\n\nThe *prioritizr R* package uses mixed integer linear programming (MILP)\ntechniques to provide a flexible interface for building and solving\nconservation planning problems. It supports a broad range of objectives,\nconstraints, and penalties that can be used to custom-tailor\nconservation planning problems to the specific needs of a conservation\nplanning exercise. Once built, conservation planning problems can be\nsolved using a variety of commercial and open-source exact algorithm\nsolvers. In contrast to the algorithms conventionally used to solve\nconservation problems, such as heuristics or simulated annealing, the\nexact algorithms used here are guaranteed to find optimal solutions.\nFurthermore, conservation problems can be constructed to optimize the\nspatial allocation of different management actions or zones, meaning\nthat conservation practitioners can identify solutions that benefit\nmultiple stakeholders. 
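To make the contrast with heuristic algorithms concrete, below is a toy version of the kind of problem an exact MILP solver handles: select the cheapest subset of three planning units such that two species are each represented at least once. This is a minimal sketch using the generic *lpSolve R* package rather than *prioritizr* itself, with made-up costs and species data.

``` r
# toy exact optimization (lpSolve, not prioritizr; data are invented)
library(lpSolve)

costs <- c(2, 3, 4)      # cost of selecting each of three planning units
species_a <- c(1, 0, 1)  # planning units where species A occurs
species_b <- c(0, 1, 1)  # planning units where species B occurs

# minimize total cost subject to representing each species at least once,
# using binary (selected / not selected) decision variables
res <- lp(
  direction = ""min"",
  objective.in = costs,
  const.mat = rbind(species_a, species_b),
  const.dir = c("">="", "">=""),
  const.rhs = c(1, 1),
  all.bin = TRUE
)

res$solution # 0 0 1: selecting only the third unit is provably optimal
```

Unlike a heuristic, the solver returns this answer together with a proof of optimality rather than a solution that is merely good enough.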
Finally, this package has the functionality to\nread input data formatted for the *Marxan* conservation planning\nprogram, and find much cheaper solutions in a much shorter period of\ntime than *Marxan*.\n\n## Installation\n\n#### Official version\n\nThe latest official version of the *prioritizr R* package can be\ninstalled from the [Comprehensive R Archive Network\n(CRAN)](https://cran.r-project.org/) using the following *R* code.\n\n``` r\ninstall.packages(""prioritizr"", repos = ""https://cran.rstudio.com/"")\n```\n\n#### Developmental version\n\nThe latest development version can be installed to gain access to new\nfunctionality that is not yet present in the latest official version.\nPlease note that the developmental version is more likely to contain\ncoding errors than the official version. To install the developmental\nversion, you can install it directly from the [GitHub online code\nrepository](https://github.com/prioritizr/prioritizr) or from the [R\nUniverse](https://prioritizr.r-universe.dev/prioritizr). In general, we\nrecommend installing the developmental version from the [R\nUniverse](https://prioritizr.r-universe.dev/prioritizr). This is because\ninstallation via [R\nUniverse](https://prioritizr.r-universe.dev/prioritizr) does not require\nany additional software (e.g.,\n[RTools](https://cran.r-project.org/bin/windows/Rtools/) for Windows\nsystems, or [Xcode and gfortran](https://mac.r-project.org/tools/) for\nmacOS systems).\n\n- To install the latest development version from [R\n Universe](https://prioritizr.r-universe.dev/prioritizr), use the\n following *R* code.\n\n ``` r\n install.packages(\n ""prioritizr"",\n repos = c(\n ""https://prioritizr.r-universe.dev"",\n ""https://cloud.r-project.org""\n )\n )\n ```\n\n- To install the latest development version from\n [GitHub](https://github.com/prioritizr/prioritizr), use the\n following *R* code.\n\n ``` r\n if (!require(remotes)) install.packages(""remotes"")\n remotes::install_github(""prioritizr/prioritizr"")\n ```\n\n## Citation\n\nPlease cite the *prioritizr R* package when using it in publications. To\ncite the latest official version, please use:\n\n> Hanson JO, Schuster R, Morrell N, Strimas-Mackey M, Edwards BPM, Watts\n> ME, Arcese P, Bennett J, Possingham HP (2023). prioritizr: Systematic\n> Conservation Prioritization in R. R package version 8.0.3. Available\n> at .\n\nAlternatively, to cite the latest development version, please use:\n\n> Hanson JO, Schuster R, Morrell N, Strimas-Mackey M, Edwards BPM, Watts\n> ME, Arcese P, Bennett J, Possingham HP (2023). prioritizr: Systematic\n> Conservation Prioritization in R. R package version 8.0.3.2. Available\n> at .\n\nAdditionally, we keep a [record of\npublications](https://prioritizr.net/articles/publication_record.html)\nthat use the *prioritizr R* package. If you use this package in any\nreports or publications, please [file an issue on\nGitHub](https://github.com/prioritizr/prioritizr/issues/new) so we can\nadd it to the record.\n\n## Usage\n\nHere we provide a short example showing how the *prioritizr R* package\ncan be used to build and solve conservation problems. Specifically, we\nwill use an example dataset available through the *prioritizrdata R*\npackage. Additionally, we will use the *terra R* package to perform\nraster calculations. To begin with, we will load the packages.\n\n``` r\n# load packages\nlibrary(prioritizr)\nlibrary(prioritizrdata)\nlibrary(terra)\n```\n\nWe will use the Washington dataset in this example. 
To import the\nplanning unit data, we will use the `get_wa_pu()` function. Although the\n*prioritizr R* package can support many different types of planning unit\ndata, here our planning units are represented as a single-layer raster\n(i.e., `terra::rast()` object). Each cell represents a different\nplanning unit, and cell values denote land acquisition costs.\nSpecifically, there are 10757 planning units in total (i.e., cells with\nnon-missing values).\n\n``` r\n# import planning unit data\nwa_pu <- get_wa_pu()\n\n# preview data\nprint(wa_pu)\n```\n\n ## class : SpatRaster \n ## dimensions : 109, 147, 1 (nrow, ncol, nlyr)\n ## resolution : 4000, 4000 (x, y)\n ## extent : -1816382, -1228382, 247483.5, 683483.5 (xmin, xmax, ymin, ymax)\n ## coord. ref. : +proj=laea +lat_0=45 +lon_0=-100 +x_0=0 +y_0=0 +ellps=sphere +units=m +no_defs \n ## source : wa_pu.tif \n ## name : cost \n ## min value : 0.2986647 \n ## max value : 1804.1838379\n\n``` r\n# plot data\nplot(wa_pu, main = ""Costs"", axes = FALSE)\n```\n\n\n\nNext, we will use the `get_wa_features()` function to import the\nconservation feature data. Although the *prioritizr R* package can\nsupport many different types of feature data, here our feature data are\nrepresented as a multi-layer raster (i.e., `terra::rast()` object). Each\nlayer describes the spatial distribution of a feature. Here, our feature\ndata correspond to different bird species. To account for migratory\npatterns, the breeding and non-breeding distributions of species are\nrepresented as different features. Specifically, the cell values denote\nthe relative abundance of individuals, with higher values indicating\ngreater abundance.\n\n``` r\n# import feature data\nwa_features <- get_wa_features()\n\n# preview data\nprint(wa_features)\n```\n\n ## class : SpatRaster \n ## dimensions : 109, 147, 396 (nrow, ncol, nlyr)\n ## resolution : 4000, 4000 (x, y)\n ## extent : -1816382, -1228382, 247483.5, 683483.5 (xmin, xmax, ymin, ymax)\n ## coord. ref. : +proj=laea +lat_0=45 +lon_0=-100 +x_0=0 +y_0=0 +ellps=sphere +units=m +no_defs \n ## source : wa_features.tif \n ## names : Recur~ding), Botau~ding), Botau~ding), Corvu~ding), Corvu~ding), Cincl~full), ... \n ## min values : 0.000, 0.000, 0.000, 0.000, 0.000, 0.00, ... \n ## max values : 0.514, 0.812, 3.129, 0.115, 0.296, 0.06, ...\n\n``` r\n# plot the first nine features\nplot(wa_features[[1:9]], nr = 3, axes = FALSE)\n```\n\n\n\nLet\xe2\x80\x99s make sure that you have a solver installed on your computer. This\nis important so that you can use optimization algorithms to generate\nspatial prioritizations. If this is your first time using the\n*prioritizr R* package, please install the HiGHS solver using the\nfollowing *R* code. Although the HiGHS solver is relatively fast and\neasy to install, please note that you\xe2\x80\x99ll need to install the [Gurobi\nsoftware suite and the *gurobi* *R* package](https://www.gurobi.com/)\nfor best performance (see the [Gurobi Installation\nGuide](https://prioritizr.net/articles/gurobi_installation_guide.html)\nfor details).\n\n``` r\n# if needed, install HiGHS solver\ninstall.packages(""highs"", repos = ""https://cran.rstudio.com/"")\n```\n\nNow, let\xe2\x80\x99s generate a spatial prioritization. To ensure feasibility, we\nwill set a budget. Specifically, the total cost of the prioritization\nwill represent 5% of the total land value in the study area. 
Given\nthis budget, we want the prioritization to increase feature\nrepresentation, as much as possible, so that each feature would,\nideally, have 20% of its distribution covered by the prioritization. In\nthis scenario, we can either purchase all of the land inside a given\nplanning unit, or none of the land inside a given planning unit. Thus we\nwill create a new `problem()` that will use a minimum shortfall\nobjective (via `add_min_shortfall_objective()`), with relative targets\nof 20% (via `add_relative_targets()`), binary decisions (via\n`add_binary_decisions()`), and specify that we want near-optimal\nsolutions (i.e., 10% from optimality) using the best solver installed on\nour computer (via `add_default_solver()`).\n\n``` r\n# calculate budget\nbudget <- terra::global(wa_pu, ""sum"", na.rm = TRUE)[[1]] * 0.05\n\n# create problem\np1 <-\n problem(wa_pu, features = wa_features) %>%\n add_min_shortfall_objective(budget) %>%\n add_relative_targets(0.2) %>%\n add_binary_decisions() %>%\n add_default_solver(gap = 0.1, verbose = FALSE)\n\n# print problem\nprint(p1)\n```\n\n ## A conservation problem ()\n ## \xe2\x94\x9c\xe2\x80\xa2data\n ## \xe2\x94\x82\xe2\x94\x9c\xe2\x80\xa2features: ""Recurvirostra americana (breeding)"" , \xe2\x80\xa6 (396 total)\n ## \xe2\x94\x82\xe2\x94\x94\xe2\x80\xa2planning units:\n ## \xe2\x94\x82 \xe2\x94\x9c\xe2\x80\xa2data: (10757 total)\n ## \xe2\x94\x82 \xe2\x94\x9c\xe2\x80\xa2costs: continuous values (between 0.2987 and 1804.1838)\n ## \xe2\x94\x82 \xe2\x94\x9c\xe2\x80\xa2extent: -1816381.6182, 247483.5211, -1228381.6182, 683483.5211 (xmin, ymin, xmax, ymax)\n ## \xe2\x94\x82 \xe2\x94\x94\xe2\x80\xa2CRS: +proj=laea +lat_0=45 +lon_0=-100 +x_0=0 +y_0=0 +ellps=sphere +units=m +no_defs (projected)\n ## \xe2\x94\x9c\xe2\x80\xa2formulation\n ## \xe2\x94\x82\xe2\x94\x9c\xe2\x80\xa2objective: minimum shortfall objective (`budget` = 8748.4908)\n ## \xe2\x94\x82\xe2\x94\x9c\xe2\x80\xa2penalties: none specified\n ## \xe2\x94\x82\xe2\x94\x9c\xe2\x80\xa2targets: relative targets (between 0.2 and 0.2)\n ## \xe2\x94\x82\xe2\x94\x9c\xe2\x80\xa2constraints: none specified\n ## \xe2\x94\x82\xe2\x94\x94\xe2\x80\xa2decisions: binary decision\n ## \xe2\x94\x94\xe2\x80\xa2optimization\n ## \xe2\x94\x9c\xe2\x80\xa2portfolio: shuffle portfolio (`number_solutions` = 1, \xe2\x80\xa6)\n ## \xe2\x94\x94\xe2\x80\xa2solver: gurobi solver (`gap` = 0.1, `time_limit` = 2147483647, `first_feasible` = FALSE, \xe2\x80\xa6)\n ## # \xe2\x84\xb9 Use `summary(...)` to see complete formulation.\n\nAfter we have built a `problem()`, we can solve it to obtain a solution.\n\n``` r\n# solve the problem\ns1 <- solve(p1)\n\n# extract the objective\nprint(attr(s1, ""objective""))\n```\n\n ## solution_1 \n ## 4.40521\n\n``` r\n# extract time spent solving the problem\nprint(attr(s1, ""runtime""))\n```\n\n ## solution_1 \n ## 8.323\n\n``` r\n# extract state message from the solver\nprint(attr(s1, ""status""))\n```\n\n ## solution_1 \n ## ""OPTIMAL""\n\n``` r\n# plot the solution\nplot(s1, main = ""Solution"", axes = FALSE)\n```\n\n\n\nAfter generating a solution, it is important to evaluate it. Here, we\nwill calculate the number of planning units selected by the solution,\nand the total cost of the solution. 
We can also check how many\nrepresentation targets are met by the solution.\n\n``` r\n# calculate number of selected planning units by solution\neval_n_summary(p1, s1)\n```\n\n ## # A tibble: 1 \xc3\x97 2\n ## summary n\n ## \n ## 1 overall 2319\n\n``` r\n# calculate total cost of solution\neval_cost_summary(p1, s1)\n```\n\n ## # A tibble: 1 \xc3\x97 2\n ## summary cost\n ## \n ## 1 overall 8748.\n\n``` r\n# calculate target coverage for the solution\np1_target_coverage <- eval_target_coverage_summary(p1, s1)\nprint(p1_target_coverage)\n```\n\n ## # A tibble: 396 \xc3\x97 9\n ## feature met total_amount absolute_target absolute_held absolute_shortfall\n ## \n ## 1 Recurvir\xe2\x80\xa6 TRUE 100. 20.0 23.4 0 \n ## 2 Botaurus\xe2\x80\xa6 TRUE 99.9 20.0 29.2 0 \n ## 3 Botaurus\xe2\x80\xa6 TRUE 100. 20.0 34.0 0 \n ## 4 Corvus b\xe2\x80\xa6 TRUE 99.9 20.0 20.2 0 \n ## 5 Corvus b\xe2\x80\xa6 FALSE 99.9 20.0 18.7 1.29\n ## 6 Cinclus \xe2\x80\xa6 TRUE 100. 20.0 20.4 0 \n ## 7 Spinus t\xe2\x80\xa6 TRUE 99.9 20.0 22.4 0 \n ## 8 Spinus t\xe2\x80\xa6 TRUE 99.9 20.0 23.0 0 \n ## 9 Falco sp\xe2\x80\xa6 TRUE 99.9 20.0 24.5 0 \n ## 10 Falco sp\xe2\x80\xa6 TRUE 100. 20.0 24.4 0 \n ## # \xe2\x84\xb9 386 more rows\n ## # \xe2\x84\xb9 3 more variables: relative_target , relative_held ,\n ## # relative_shortfall \n\n``` r\n# check percentage of the features that have their target met given the solution\nprint(mean(p1_target_coverage$met) * 100)\n```\n\n ## [1] 96.46465\n\nAlthough this solution helps meet the representation targets, it does\nnot account for existing protected areas inside the study area. As such,\nit does not account for the possibility that some features could be\npartially \xe2\x80\x93 or even fully \xe2\x80\x93 represented by existing protected areas and,\nin turn, might fail to identify meaningful priorities for new protected\nareas. To address this issue, we will use the `get_wa_locked_in()`\nfunction to import spatial data for protected areas in the study area.\nWe will then add constraints to the `problem()` to ensure they are\nselected by the solution (via `add_locked_in_constraints()`).\n\n``` r\n# import locked in data\nwa_locked_in <- get_wa_locked_in()\n\n# print data\nprint(wa_locked_in)\n```\n\n ## class : SpatRaster \n ## dimensions : 109, 147, 1 (nrow, ncol, nlyr)\n ## resolution : 4000, 4000 (x, y)\n ## extent : -1816382, -1228382, 247483.5, 683483.5 (xmin, xmax, ymin, ymax)\n ## coord. ref. : +proj=laea +lat_0=45 +lon_0=-100 +x_0=0 +y_0=0 +ellps=sphere +units=m +no_defs \n ## source : wa_locked_in.tif \n ## name : protected areas \n ## min value : 0 \n ## max value : 1\n\n``` r\n# plot data\nplot(wa_locked_in, main = ""Existing protected areas"", axes = FALSE)\n```\n\n\n\n``` r\n# create new problem with locked in constraints added to it\np2 <-\n p1 %>%\n add_locked_in_constraints(wa_locked_in)\n\n# solve the problem\ns2 <- solve(p2)\n\n# plot the solution\nplot(s2, main = ""Solution"", axes = FALSE)\n```\n\n\n\nThis solution is an improvement over the previous solution. However,\nthere are some places in the study area that are not available for\nprotected area establishment (e.g., due to land tenure). As a\nconsequence, the solution might not be practical for implementation,\nbecause it might select some places that are not available for\nprotection. To address this issue, we will use the `get_wa_locked_out()`\nfunction to import spatial data describing which planning units are not\navailable for protection. 
We will then add constraints to the\n`problem()` to ensure they are not selected by the solution (via\n`add_locked_out_constraints()`).\n\n``` r\n# import locked out data\nwa_locked_out <- get_wa_locked_out()\n\n# print data\nprint(wa_locked_out)\n```\n\n ## class : SpatRaster \n ## dimensions : 109, 147, 1 (nrow, ncol, nlyr)\n ## resolution : 4000, 4000 (x, y)\n ## extent : -1816382, -1228382, 247483.5, 683483.5 (xmin, xmax, ymin, ymax)\n ## coord. ref. : +proj=laea +lat_0=45 +lon_0=-100 +x_0=0 +y_0=0 +ellps=sphere +units=m +no_defs \n ## source : wa_locked_out.tif \n ## name : urban areas \n ## min value : 0 \n ## max value : 1\n\n``` r\n# plot data\nplot(wa_locked_out, main = ""Areas not available for protection"", axes = FALSE)\n```\n\n\n\n``` r\n# create new problem with locked out constraints added to it\np3 <-\n p2 %>%\n add_locked_out_constraints(wa_locked_out)\n\n# solve the problem\ns3 <- solve(p3)\n\n# plot the solution\nplot(s3, main = ""Solution"", axes = FALSE)\n```\n\n\n\nThis solution is even better than the previous solution. However, we are\nnot finished yet. The planning units selected by the solution are fairly\nfragmented. This can cause issues because fragmentation increases\nmanagement costs and reduces conservation benefits through edge effects.\nTo address this issue, we can further modify the problem by adding\npenalties that punish overly fragmented solutions (via\n`add_boundary_penalties()`). Here we will use a penalty factor (i.e.,\nboundary length modifier) of 0.003, and an edge factor of 50% so that\nplanning units that occur on the outer edge of the study area are not\noverly penalized.\n\n``` r\n# create new problem with boundary penalties added to it\np4 <-\n p3 %>%\n add_boundary_penalties(penalty = 0.003, edge_factor = 0.5)\n\n# solve the problem\ns4 <- solve(p4)\n\n# plot the solution\nplot(s4, main = ""Solution"", axes = FALSE)\n```\n\n\n\nNow, let\xe2\x80\x99s explore which planning units selected by the solution are\nmost important for cost-effectively meeting the targets. To achieve\nthis, we will calculate importance (irreplaceability) scores using the\nFerrier method. Although this method produces scores for each feature\nseparately, we will examine the total scores that summarize overall\nimportance across all features.\n\n``` r\n# calculate importance scores\nrc <-\n p4 %>%\n eval_ferrier_importance(s4)\n\n# print scores\nprint(rc)\n```\n\n ## class : SpatRaster \n ## dimensions : 109, 147, 397 (nrow, ncol, nlyr)\n ## resolution : 4000, 4000 (x, y)\n ## extent : -1816382, -1228382, 247483.5, 683483.5 (xmin, xmax, ymin, ymax)\n ## coord. ref. : +proj=laea +lat_0=45 +lon_0=-100 +x_0=0 +y_0=0 +ellps=sphere +units=m +no_defs \n ## source(s) : memory\n ## varnames : wa_pu \n ## wa_pu \n ## wa_pu \n ## ...\n ## names : Recur~ding), Botau~ding), Botau~ding), Corvu~ding), Corvu~ding), Cincl~full), ... \n ## min values : 0.0000000000, 0.0000000000, 0.0000000000, 0.000000e+00, 0.000000e+00, 0.000000e+00, ... 
\n ## max values : 0.0003227724, 0.0002213034, 0.0006622152, 7.771815e-05, 8.974447e-05, 8.483296e-05, ...\n\n``` r\n# plot the total importance scores\n## note that gray cells are not selected by the prioritization\nplot(\n rc[[""total""]], main = ""Importance scores"", axes = FALSE,\n breaks = c(0, 1e-10, 0.005, 0.01, 0.025),\n col = c(""#e5e5e5"", ""#fff7ec"", ""#fc8d59"", ""#7f0000"")\n)\n```\n\n\n\nThis short example demonstrates how the *prioritizr R* package can be\nused to build and customize conservation problems, and then solve them\nto generate solutions. Although we explored just a few different\nfunctions for modifying a conservation problem, the package provides\nmany functions for specifying objectives, constraints, penalties, and\ndecision variables, so that you can build and custom-tailor conservation\nplanning problems to suit your planning scenario.\n\n## Learning resources\n\nThe [package website](https://prioritizr.net/index.html) contains\ninformation on the *prioritizr R* package. Here you can find\n[documentation for every function and built-in\ndataset](https://prioritizr.net/reference/index.html), and [news\ndescribing the updates in each package\nversion](https://prioritizr.net/news/index.html). It also contains the\nfollowing articles and tutorials.\n\n- [**Getting\n started**](https://prioritizr.net/articles/prioritizr.html): Short\n tutorial on using the package.\n- [**Package\n overview**](https://prioritizr.net/articles/package_overview.html):\n Introduction to systematic conservation planning and a comprehensive\n overview of the package.\n- [**Connectivity\n tutorial**](https://prioritizr.net/articles/connectivity_tutorial.html):\n Tutorial on incorporating connectivity into prioritizations.\n- [**Calibrating trade-offs\n tutorial**](https://prioritizr.net/articles/calibrating_trade-offs_tutorial.html):\n Tutorial on running calibration analyses to satisfy multiple\n criteria.\n- [**Management zones\n tutorial**](https://prioritizr.net/articles/management_zones_tutorial.html):\n Tutorial on incorporating multiple management zones and actions into\n prioritizations.\n- [**Gurobi installation\n guide**](https://prioritizr.net/articles/gurobi_installation_guide.html):\n Instructions for installing the *Gurobi* optimization suite for\n generating prioritizations.\n- [**Solver\n benchmarks**](https://prioritizr.net/articles/solver_benchmarks.html):\n Performance comparison of optimization solvers for generating\n prioritizations.\n- [**Publication\n record**](https://prioritizr.net/articles/publication_record.html):\n List of publications that have cited the package.\n\nAdditional resources can also be found in [online repositories under the\n*prioritizr* organization](https://github.com/prioritizr). These\nresources include [slides for talks and seminars about the\npackage](https://github.com/prioritizr/teaching). 
Additionally, workshop\nmaterials are available too (e.g., the [Carleton 2023\nworkshop](https://prioritizr.github.io/workshop/)).\n\n## Getting help\n\nIf you have any questions about the *prioritizr R* package or\nsuggestions for improving it, please [post an issue on the code\nrepository](https://github.com/prioritizr/prioritizr/issues/new).\n'",,"2017/02/04, 22:45:17",2454,MIT,23,810,"2023/10/18, 00:22:41",13,65,293,50,8,0,0.0,0.128,"2023/08/17, 02:14:51",v8.0.3,0,7,false,,false,true,,,https://github.com/prioritizr,,,,,https://avatars.githubusercontent.com/u/25472841?v=4,,, EcoSISTEM.jl,"A Julia package that provides functionality for simulating species undergoing dynamic biological processes such as birth, death, competition and dispersal, as well as environmental changes in climate and habitat.",EcoJulia,https://github.com/EcoJulia/EcoSISTEM.jl.git,github,"julia,biodiversity,epidemiology,ecosystem-simulation,ecology,simulation",Conservation and Restoration,"2023/10/24, 08:22:55",34,0,13,true,Julia,,EcoJulia,Julia,,"b""# EcoSISTEM\n\n| **Documentation** | **Build Status** | **DOI** |\n|:-----------------:|:----------------:|:-------:|\n| [![stable docs][docs-stable-img]][docs-stable-url] | [![build tests][actions-img]][actions-url] [![JuliaNightly][nightly-img]][nightly-url] | [![DOI][zenodo-img]][zenodo-url] |\n| [![dev docs][docs-dev-img]][docs-dev-url] | [![codecov][codecov-img]][codecov-url] | |\n\n## Package for running dynamic ecosystem simulations\n\n### Summary\n\n**EcoSISTEM** (Ecosystem Simulation through Integrated Species-Trait Environment Modelling) is a [Julia](http://www.julialang.org) package that provides functionality for simulating species undergoing dynamic biological processes such as birth, death, competition and dispersal, as well as environmental changes in climate and habitat.\n\nThe package was primarily developed for global scale simulations of plant biodiversity. The underlying model for this is described in the arXiv paper [arXiv:1911.12257 (q-bio.QM)][paper-url],\n*Dynamic virtual ecosystems as a tool for detecting large-scale\nresponses of biodiversity to environmental and land-use change*.\n\nThere are substantial changes to the package introduced through the [`dev`][dev-url] branch ([docs][docs-dev-url]), including epidemiological simulations and refactoring of the code base for further flexibility.\n\nThis package is in beta now, so please raise an issue if you find any problems. For more information on how to contribute, please read [our contributing guidelines](CONTRIBUTING.md). We are supported by NERC's Landscape Decisions [small][NERC-small] and [large][NERC-big] maths grants and an [EPSRC][EPSRC-stu] studentship.\n\n## Introduction to EcoSISTEM\nYou can now run through a full introduction to EcoSISTEM with Pluto.jl! To get started:\n\n``` julia\nimport Pluto\nPluto.run()\n```\nThis should open a Pluto window in your browser - from there you can type `notebooks\\Introduction.jl` in the `Open from file` box. 
Note that it may be slow on first launch!\n\n[paper-url]: https://arxiv.org/abs/1911.12257\n\n[docs-stable-img]: https://img.shields.io/badge/docs-stable-blue.svg\n[docs-stable-url]: https://docs.ecojulia.org/EcoSISTEM.jl/stable/\n\n[docs-dev-img]: https://img.shields.io/badge/docs-dev-blue.svg\n[docs-dev-url]: https://docs.ecojulia.org/EcoSISTEM.jl/dev/\n\n[actions-img]: https://github.com/EcoJulia/EcoSISTEM.jl/actions/workflows/testing.yaml/badge.svg?branch=main\n[actions-url]: https://github.com/EcoJulia/EcoSISTEM.jl/actions/workflows/testing.yaml?branch=main\n\n[nightly-img]: https://github.com/EcoJulia/EcoSISTEM.jl/actions/workflows/nightly.yaml/badge.svg?branch=main\n[nightly-url]: https://github.com/EcoJulia/EcoSISTEM.jl/actions/workflows/nightly.yaml?branch=main\n\n[codecov-img]: https://codecov.io/gh/EcoJulia/EcoSISTEM.jl/branch/main/graph/badge.svg\n[codecov-url]: https://codecov.io/gh/EcoJulia/EcoSISTEM.jl?branch=main\n\n[zenodo-img]: https://zenodo.org/badge/251665824.svg\n[zenodo-url]: https://zenodo.org/badge/latestdoi/251665824\n\n[dev-url]: https://github.com/EcoJulia/EcoSISTEM.jl/tree/dev\n[NERC-small]: https://gtr.ukri.org/projects?ref=NE%2FT004193%2F1\n[NERC-big]: https://gtr.ukri.org/projects?ref=NE%2FT010355%2F1\n[EPSRC-stu]: https://gtr.ukri.org/projects?ref=EP%2FM506539%2F1\n""",",https://arxiv.org/abs/1911.12257\n\n,https://zenodo.org/badge/latestdoi/251665824\n\n","2020/03/31, 16:36:17",1303,LGPL-3.0,71,1295,"2023/10/24, 12:05:52",6,109,117,34,1,0,0.0,0.08231458842705786,"2023/03/13, 20:30:34",v0.1.4,0,3,false,,false,true,,,https://github.com/EcoJulia,https://ecojulia.org/,,,,https://avatars.githubusercontent.com/u/22506369?v=4,,, oneimpact,Provides tools for the assessment of cumulative impacts of multiple infrastructure and land use modifications in ecological studies.,NINAnor,https://github.com/NINAnor/oneimpact.git,github,"biodiversity,grass-gis,r,r-package,cumulative-impacts",Conservation and Restoration,"2023/05/04, 00:53:48",3,0,1,true,R,Norwegian Institute for Nature Research,NINAnor,"R,Dockerfile",https://ninanor.github.io/oneimpact/,"b'# oneimpact \r\n\r\n\r\n [![DOI](https://zenodo.org/badge/453101311.svg)](https://zenodo.org/badge/latestdoi/453101311)\r\n\r\n\r\n\r\n`oneimpact` provides tools for the assessment of cumulative impacts of multiple infrastructure and land use modifications in ecological studies.\r\nThe tools use an R interface, but the main calculations can be run in either R or GRASS GIS. The tools available so far are:\r\n\r\n### Zone of influence (ZOI) decay functions\r\n\r\n- `zoi_functions`: a set of decay zone of influence functions to characterize different shapes of the ZOI around infrastructure, \r\nparameterized based on the zone of influence radius. 
The functions implemented so far are: threshold (`threshold_decay` or `step_decay`),\r\nlinear decay (`linear_decay` or `bartlett_decay` or `tent_decay`), exponential decay (`exp_decay`), or Gaussian decay \r\n(`gaussian_decay` or `half_norm_decay`). A plain-R sketch of these decay shapes is shown near the end of this README.\r\n- `plot_zoi1d`: plot the ZOI in one-dimensional space for multiple point infrastructure features, using both the ZOI of the nearest\r\nfeature and the cumulative ZOI metric.\r\n\r\n\r\n### Compute zones of influence (ZOI):\r\n\r\n- `calc_zoi_nearest`: Calculate the zone of influence from the nearest infrastructure, according to multiple possible \r\ndecay functions and zones of influence radii.\r\n- `calc_zoi_cumulative`: Calculate the cumulative zone of influence of multiple features, according to multiple possible \r\ndecay functions and zones of influence radii.\r\n- `calc_zoi`: Calculate both the ZOI of the nearest infrastructure and the cumulative ZOI, at multiple\r\nscales or zones of influence radii.\r\n\r\n### Spatial filters:\r\n\r\n- `create_filter`: Create filters or weight matrices for neighborhood analysis, according to different decay functions\r\nand parameterized using the zone of influence radius.\r\n- `save_filter`: Saves filters/weight matrices outside R for use within GRASS GIS modules.\r\n\r\n### Ancillary functions:\r\n\r\n- `grass_binarize`: Binarize continuous or multi-class categorical rasters within GRASS GIS. Binary maps may be used \r\nas input for cumulative zone of influence and kernel density calculation.\r\n- `grass_v2rast_count`: Rasterize a vector file, counting the number of features within each pixel of the output\r\nraster. Count rasters may be used as input for cumulative zone of influence and kernel density calculation.\r\n\r\n### Support for landscape simulation:\r\n\r\n- `set_points`: simulate points in a landscape according to different rules and spatial patterns.\r\n\r\n## Installation\r\n\r\nTo install the development version of the `oneimpact` R package, please use:\r\n\r\n```\r\nlibrary(devtools)\r\ndevtools::install_github(""NINAnor/oneimpact"", ref = ""HEAD"")\r\n```\r\n\r\n## Run with Docker\r\n\r\n```bash\r\ndocker run --rm -p 8787:8787 -e PASSWORD=rstudio -v $PWD/myproject:/home/rstudio/myproject ghcr.io/ninanor/oneimpact:main\r\n```\r\n\r\nIf you use Compose:\r\n\r\n```bash\r\ndocker compose run rstudio\r\n```\r\n\r\nYou can customize `docker-compose.yml` based on your needs.\r\n\r\n## See also\r\n\r\nThe `oneimpact` functions are largely based on neighborhood analyses made through the\r\n[`terra` package](https://rspatial.org/terra/pkg/index.html) in R and on three GRASS GIS modules:\r\n[`r.mfilter`](https://grass.osgeo.org/grass78/manuals/r.mfilter.html), \r\n[`r.resamp.filter`](https://grass.osgeo.org/grass78/manuals/r.resamp.filter.html), and \r\n[`r.neighbors`](https://grass.osgeo.org/grass78/manuals/r.neighbors.html). 
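As a concrete, plain-R illustration of the decay shapes implemented by the ZOI functions listed above (independent of the package API and of the GRASS GIS backend; the 1000 m radius and the exponential rate below are arbitrary values chosen for the example):

```r
# evaluate three hypothetical ZOI decay shapes at increasing distances
# from an infrastructure feature, assuming a ZOI radius of 1000 m
dists <- seq(0, 2000, by = 250)
radius <- 1000

zoi_threshold <- as.numeric(dists <= radius) # step decay: full influence within the radius
zoi_linear <- pmax(0, 1 - dists / radius)    # linear (tent) decay: reaches zero at the radius
zoi_exp <- exp(-3 * dists / radius)          # exponential decay: about 5% influence at the radius

round(cbind(dists, zoi_threshold, zoi_linear, zoi_exp), 3)
```

For the cumulative ZOI metrics, per-feature influences computed in this way are summed over all features rather than keeping only the influence of the nearest one.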
The connection\r\nbetween R and GRASS GIS is made through the [`rgrass7`](https://github.com/rsbivand/rgrass) R package.\r\n\r\n## Meta\r\n\r\n - Please [report any issues or bugs](https://github.com/NINAnor/oneimpact/issues/new/).\r\n - License: GPL3\r\n - Get citation information for `oneimpact` in R by running `citation(package = \'oneimpact\')`, or check the reference [here](https://ninanor.github.io/oneimpact/authors.html#citation).\r\n - Contributions are most welcome!\r\n'",",https://zenodo.org/badge/latestdoi/453101311","2022/01/28, 14:38:29",635,LGPL-3.0,22,222,"2023/01/16, 21:50:44",7,5,11,3,282,0,0.0,0.037735849056603765,"2023/05/03, 00:54:15",0.1.1,0,3,false,,false,false,,,https://github.com/NINAnor,http://www.nina.no,Norway,,,https://avatars.githubusercontent.com/u/11290934?v=4,,, grainscape,"Efficient Modelling of Landscape Connectivity, Habitat, and Protected Area Networks.",achubaty,https://github.com/achubaty/grainscape.git,github,"habitat-connectivity,spatial-graphs,landscape-connectivity,r-package,r",Conservation and Restoration,"2023/07/06, 17:13:43",17,0,4,true,TeX,,,"TeX,R,C++,C",https://alexchubaty.com/grainscape,"b'# grainscape\n\n\n[![R-CMD-check](https://github.com/achubaty/grainscape/actions/workflows/R-CMD-check.yaml/badge.svg)](https://github.com/achubaty/grainscape/actions/workflows/R-CMD-check.yaml)\n[![CRAN_Status_Badge](https://www.r-pkg.org/badges/version/grainscape)](https://cran.r-project.org/package=grainscape)\n[![DOI](https://zenodo.org/badge/62731055.svg)](https://zenodo.org/badge/latestdoi/62731055)\n[![Codecov test coverage](https://codecov.io/gh/achubaty/grainscape/branch/main/graph/badge.svg)](https://app.codecov.io/gh/achubaty/grainscape?branch=main)\n\n\n\n\n## Efficient Modelling of Landscape Connectivity, Habitat, and Protected Area Networks\n\nGiven a landscape resistance surface, creates grains of connectivity and minimum planar graph models that can be used to calculate effective distances for landscape connectivity at multiple scales.\nThis is a cross-platform reimplementation and update of the `grainscape` package (http://grainscape.r-forge.r-project.org).\n\nTo cite `grainscape` in publications, see `citation(""grainscape"")`.\n\n### Installation\n\n#### From CRAN\n\n```r\ninstall.packages(""grainscape"")\n```\n\n#### From GitHub\n\n1. **Install development libraries:** building packages from source requires the appropriate development libraries for your operating system.\n See [here](https://support.posit.co/hc/en-us/articles/200486498-Package-Development-Prerequisites) for more details.\n \n - *Windows:* install [Rtools](https://cran.r-project.org/bin/windows/Rtools/).\n - *macOS:* install Xcode command line tools from the terminal: `xcode-select --install`. \n - *Debian/Ubuntu Linux:* ensure `r-base-dev` is installed.\n\n2. 
**Install from GitHub:**\n \n ```r\n #install.packages(""remotes"")\n library(""remotes"")\n install_github(""achubaty/grainscape"")\n ```\n\n### Reporting bugs\n\nContact us via the package GitHub site: [https://github.com/achubaty/grainscape/issues](https://github.com/achubaty/grainscape/issues).\n\n### Contributions\n\nThis Git repository uses the [Git Flow](https://nvie.com/posts/a-successful-git-branching-model/) branching model (the [`git flow`](https://github.com/petervanderdoes/gitflow-avh) extension is useful for this).\nThe [`development`](https://github.com/achubaty/grainscape/tree/development) branch contains the latest contributions and other code that will appear in the next release, and the [`main`](https://github.com/achubaty/grainscape) branch contains the code of the latest release, which is exactly what is currently on [CRAN](https://cran.r-project.org/package=grainscape).\n\nTo make a contribution to the package, just send a [pull request](https://help.github.com/articles/using-pull-requests/). \nWhen you send your PR, make sure `development` is the destination branch on the [grainscape repository](https://github.com/achubaty/grainscape).\nYour PR should pass `R CMD check --as-cran`, which will also be checked by [GitHub Actions](https://github.com/achubaty/grainscape/actions) when the PR is submitted.\n'",",https://zenodo.org/badge/latestdoi/62731055","2016/07/06, 15:15:03",2667,LGPL-3.0,31,522,"2023/04/20, 02:44:41",4,1,65,6,188,0,0.0,0.16556291390728473,"2023/04/20, 13:29:28",v0.4.4,0,6,false,,false,false,,,,,,,,,,, restoptr,Aims to identify priority areas for restoration efforts using optimization algorithms.,dimitri-justeau,https://github.com/dimitri-justeau/restoptr.git,github,,Conservation and Restoration,"2023/08/16, 11:54:49",10,0,5,true,R,,,"R,TeX,Makefile,CSS",https://dimitri-justeau.github.io/restoptr/,"b'\n\n\n# restoptr \n\n## Ecological Restoration Planning\n\n[![lifecycle](https://img.shields.io/badge/Lifecycle-stable-brightgreen.svg)](https://lifecycle.r-lib.org/articles/stages.html)\n[![R-CMD-check-Ubuntu](https://github.com/dimitri-justeau/restoptr/actions/workflows/R-CMD-check-ubuntu.yaml/badge.svg)](https://github.com/dimitri-justeau/restoptr/actions)\n[![R-CMD-check-Windows](https://github.com/dimitri-justeau/restoptr/actions/workflows/R-CMD-check-windows.yaml/badge.svg)](https://github.com/dimitri-justeau/restoptr/actions)\n[![R-CMD-check-MacOS](https://github.com/dimitri-justeau/restoptr/actions/workflows/R-CMD-check-macos.yaml/badge.svg)](https://github.com/dimitri-justeau/restoptr/actions)\n[![Coverage\nStatus](https://codecov.io/github/dimitri-justeau/restoptr/coverage.svg?branch=master)](https://app.codecov.io/gh/dimitri-justeau/restoptr)\n[![CRAN_Status_Badge](http://www.r-pkg.org/badges/version/restoptr)](https://github.com/dimitri-justeau/restoptr)\n[![Downloads](https://cranlogs.r-pkg.org/badges/restoptr)](https://CRAN.R-project.org/package=restoptr)\n\n

\nLogo by Camille Salmon\n

\n\n## Overview\n\nThe `restoptr` R package provides a flexible framework for ecological\nrestoration planning. It aims to identify priority areas for restoration\nefforts using optimization algorithms (based on Justeau-Allaire *et al.*\n2021). Priority areas can be identified by maximizing landscape indices,\nsuch as the effective mesh size (Jaeger 2000), or the integral index of\nconnectivity (Pascual-Hortal & Saura 2006). Additionally, constraints\ncan be used to ensure that priority areas exhibit particular\ncharacteristics (e.g., ensure that particular places are not selected\nfor restoration, ensure that priority areas form a single contiguous\nnetwork). Furthermore, multiple near-optimal solutions can be generated\nto explore multiple options in restoration planning. The package\nleverages the [Choco-solver](https://choco-solver.org/) software to\nperform optimization using constraint programming (CP) techniques\n(Prud\xe2\x80\x99homme *et al.* 2016).\n\n## Installation\n\n### Package installation\n\nThe latest official version of the *restoptr R* package can be installed\nfrom the [Comprehensive R Archive Network\n(CRAN)](https://cran.r-project.org/) using the following *R* code.\n\n``` r\ninstall.packages(""restoptr"", repos = ""https://cran.rstudio.com/"")\n```\n\nAlternatively, the latest developmental version can be installed using\nthe following *R* code. Please note that while developmental versions\nmay contain additional features not present in the official version,\nthey may also contain coding errors.\n\n``` r\nif (!require(remotes)) install.packages(""remotes"")\nremotes::install_github(""dimitri-justeau/restoptr"")\n```\n\n### System dependencies\n\nThe package requires a Java Runtime Environment (JRE), version 8 or\nhigher. Below we provide platform-specific instructions to install it.\n\n#### *Windows*\n\nPlease install the latest Java Runtime Environment for Windows (see\nOracle JDK, [OpenJDK](https://openjdk.org/install/), or\n[GraalVM](https://www.graalvm.org/downloads/)). You also need to install\n[Maven](https://maven.apache.org/). After downloading the file, please\nrun the installer to install Java on your system. You will also need to\nensure that the `PATH` environment variable is configured so that *R*\ncan access Java. *restoptr* relies on *rJava* for the communication\nbetween *R* and *Java*. If you have any trouble during the installation\nof *restoptr* due to *rJava*, please refer to *rJava*\xe2\x80\x99s documentation.\n\n#### *Ubuntu*\n\nFor recent versions of Ubuntu (18.04 and later), the Java libraries are\navailable through official repositories. They can be installed using the\nfollowing system command.\n\n``` bash\nsudo apt-get install default-jdk\n```\n\nIf you want to install a specific JRE version, please follow\ninstructions from Oracle, [OpenJDK](https://openjdk.org/install/), or\n[GraalVM](https://www.graalvm.org/downloads/).\n\n#### *Linux*\n\nPlease follow instructions from Oracle,\n[OpenJDK](https://openjdk.org/install/), or\n[GraalVM](https://www.graalvm.org/downloads/).\n\n#### *MacOS*\n\nThe easiest way to install the Java libraries is using\n[HomeBrew](https://brew.sh/). 
After installing HomeBrew, the Java\nlibraries can be installed using the following system command.\n\n``` bash\nbrew install openjdk\n```\n\nPlease note that you might also need to ensure that the `PATH`\nenvironment variable is configured so that *R* can access Java.\n\n### Building the Java core library from source (optional)\n\nThe package relies on a core Java library called\n[`restopt`](https://github.com/dimitri-justeau/restopt). This Java\nlibrary handles the constrained optimization process via the\n[Choco-solver](https://choco-solver.org/) software. Although this\nlibrary is automatically included with the package, it can be manually\ncompiled from source if needed. **Please note that this step is entirely\noptional, and is not needed to install the package.** To compile the\nJava library, the [Maven](https://maven.apache.org/) software needs to\nbe installed, as well as a Java Development Kit (JDK, version 8+)\n(e.g., see Oracle JDK, [OpenJDK](https://openjdk.org/install/),\nor [GraalVM](https://www.graalvm.org/downloads/)). After installing\nthese dependencies, the following procedure can be used to compile the\nJava library and install it along with the package.\n\nFirst clone the repository and update the source code.\n\n``` bash\ngit clone https://github.com/dimitri-justeau/restoptr.git\ncd restoptr\ngit submodule update --init --recursive\ngit pull --recurse-submodules\n```\n\nNext, compile the core Java library with Maven.\n\n``` bash\ncd restopt\nmvn clean package -DskipTests\n```\n\nNext, copy the resulting Java library (.jar) file into the `java` directory.\n\n``` bash\ncp target/restopt-*.jar ../java/\n```\n\nFinally, the package can be installed with the newly compiled Java\nlibrary using the following *R* command.\n\n``` r\nif (!require(remotes)) install.packages(""remotes"")\nremotes::install_local(""."")\n```\n\n## Usage\n\nHere we will provide a short tutorial on using the *restoptr R* package\nto identify priority areas for restoration. As part of this tutorial, we\nwill use an example dataset that is distributed with the package\n(obtained from Justeau-Allaire *et al.* 2021). This example dataset\ncontains data for prioritizing forest restoration efforts within a\nprotected area in New Caledonia. We will begin the tutorial by loading\nthe package. If you haven\xe2\x80\x99t already, please install the package (see\nabove for installation instructions).\n\n``` r\n# load package\nlibrary(restoptr)\n```\n\nTo identify priorities for restoration, we require information on the\nlocation of places that do and do not currently contain suitable\nhabitat. We will now import data to describe which places within the\nprotected area contain forest habitat (imported as the `habitat_data`\nobject). Specifically, this object is a spatial grid (i.e., raster\nlayer). Each grid cell corresponds to a candidate place for restoration\n(termed a planning unit), and the cell values indicate the absence or\npresence of forest within each planning unit (using values of zero and\none, respectively).\n\n``` r\n# import data\nhabitat_data <- rast(\n system.file(""extdata"", ""habitat_hi_res.tif"", package = ""restoptr"")\n)\n\n# preview data\nprint(habitat_data)\n```\n\n ## class : SpatRaster \n ## dimensions : 1867, 2713, 1 (nrow, ncol, nlyr)\n ## resolution : 27.9487, 29.74339 (x, y)\n ## extent : 419768.2, 495593.1, 227538.9, 283069.8 (xmin, xmax, ymin, ymax)\n ## coord. ref. 
Restoration efforts are often limited in terms of the places where they can be implemented. For example, restoration efforts may not be feasible in dense cities. In our example, some places are not feasible for restoration because they cannot be accessed by existing tracks within the protected area. We will now import data describing which places are not feasible for restoration (imported as the `locked_out_data` object). This object, similar to the habitat data, is a spatial grid. The grid cell values indicate whether each planning unit is available for restoration (a value of zero) or not (a value of one).

``` r
# import data
locked_out_data <- rast(
  system.file("extdata", "locked_out.tif", package = "restoptr")
)

# preview data
print(locked_out_data)
```

    ## class       : SpatRaster 
    ## dimensions  : 1867, 2713, 1  (nrow, ncol, nlyr)
    ## resolution  : 27.9487, 29.74339  (x, y)
    ## extent      : 419768.2, 495593.1, 227538.9, 283069.8  (xmin, xmax, ymin, ymax)
    ## coord. ref. : RGNC91-93 / Lambert New Caledonia (EPSG:3163) 
    ## source      : locked_out.tif 
    ## name        : layer 
    ## min value   : 1 
    ## max value   : 1

``` r
# visualize data
plot(locked_out_data, plg = list(x = "topright"))
```

We will now build a restoration optimization problem (stored in the `problem` object). This object will specify all the data, settings, and optimization criteria for identifying priority areas. Specifically, we will initialize the problem with the `habitat_data` object to specify which planning units already contain suitable habitat (with the `restopt_problem()` function). To reduce run time, we will also initialize it with parameters to aggregate the spatial data (i.e., `aggregation_factor` and `habitat_threshold`). Next, we will specify that the objective function for the optimization process is to maximize connectivity based on the effective mesh size metric (with the `set_max_mesh_objective()` function). We will then specify constraints to ensure that the priority areas exhibit particular characteristics. These constraints will ensure that (i) certain planning units are not selected for restoration (with the `add_locked_out_constraint()` function), (ii) the total amount of restored area ranges between 90 and 220 ha (with the `add_restorable_constraint()` function), and (iii) the spatial extent of the priority areas stays within a diameter of 2.4 km (with the `add_compactness_constraint()` function).

``` r
# build restoration optimization problem
problem <-
  ## initialize problem with habitat data
  restopt_problem(
    existing_habitat = habitat_data,
    aggregation_factor = 16,
    habitat_threshold = 0.7
  ) %>%
  ## set objective function to maximize effective mesh size
  set_max_mesh_objective() %>%
  ## add constraint to ensure that certain places are not selected
  add_locked_out_constraint(locked_out_data) %>%
  ## add constraint to limit total amount of restored area
  add_restorable_constraint(90, 220, unit = "ha") %>%
  ## add constraint to limit spatial extent of priority areas
  add_compactness_constraint(2.4, unit = "km")

# preview problem
print(problem)
```

    ## ----------------------------------------------------------------- 
    ## Restopt 
    ## ----------------------------------------------------------------- 
    ## original habitat: habitat_hi_res.tif 
    ## aggregation factor: 16 
    ## habitat threshold: 0.7 
    ## existing habitat: in memory 
    ## restorable habitat: in memory 
    ## ----------------------------------------------------------------- 
    ## objective: Maximize effective mesh size 
    ## ----------------------------------------------------------------- 
    ## constraints: 
    ## - locked out (data = in memory) 
    ## - restorable (min_restore = 90, max_restore = 220, min_proportion = 1, unit = ha) 
    ## - compactness (max_diameter = 2.4, unit = km) 
    ## ----------------------------------------------------------------- 
    ## settings: 
    ## - precision = 4
    ## - time_limit = 0
    ## - nb_solutions = 1
    ## - optimality_gap = 0
    ## - solution_name_prefix = Solution 
    ## -----------------------------------------------------------------
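Note that the objective function is interchangeable. For example, the integral index of connectivity mentioned in the Overview can be maximized instead of the effective mesh size. The sketch below assumes the `set_max_iic_objective()` function, which the package provides alongside `set_max_mesh_objective()` (an assumption worth checking against the package reference); all other parts of the problem formulation stay the same.

``` r
# a sketch: the same problem formulation, but maximizing the
# integral index of connectivity (IIC) instead of effective mesh size
problem_iic <-
  restopt_problem(
    existing_habitat = habitat_data,
    aggregation_factor = 16,
    habitat_threshold = 0.7
  ) %>%
  set_max_iic_objective() %>%
  add_locked_out_constraint(locked_out_data) %>%
  add_restorable_constraint(90, 220, unit = "ha") %>%
  add_compactness_constraint(2.4, unit = "km")
```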
After building the problem, we can solve it to identify priority areas for restoration (with the `solve()` function). The solution is a raster layer containing values that indicate whether planning units (`0`) were locked out, (`1`) do not contain existing habitat, (`2`) contain existing habitat, or (`3`) were selected as priority areas for restoration.

``` r
# solve problem to identify priority areas
solution <- solve(problem)
```

    ## Good news: the solver found 1 solution statisfying the constraints that was proven optimal ! (solving time = 0.97 s)

``` r
# preview solution
print(solution)
```

    ## class       : RestoptSolution 
    ## dimensions  : 117, 170, 1  (nrow, ncol, nlyr)
    ## resolution  : 447.1792, 475.8943  (x, y)
    ## extent      : 419768.2, 495788.7, 227390.1, 283069.8  (xmin, xmax, ymin, ymax)
    ## coord. ref. : RGNC91-93 / Lambert New Caledonia (EPSG:3163) 
    ## source(s)   : memory
    ## categories  : label 
    ## name        : Solution 1 
    ## min value   : Locked out 
    ## max value   : Restoration

``` r
# visualize solution
plot(
  solution,
  main = "Solution",
  col = c("#E5E5E5", "#fff1d6", "#b2df8a", "#1f78b4"),
  plg = list(x = "topright")
)
```

Finally, we can access additional information on the solution (with the `get_metadata()` function).

``` r
# access information on the solution
## N.B. spatial units are expressed as hectares
get_metadata(solution, area_unit = "ha")
```

    ##     min_restore total_restorable nb_planning_units nb_components     diameter
    ## 1 219.3772 [ha]    219.3772 [ha]                15             3 2280.175 [m]
    ##   optimality_proven search_state solving_time  mesh_initial          mesh
    ## 1              TRUE   TERMINATED        0.944 13667.84 [ha] 14232.66 [ha]
    ##       mesh_best
    ## 1 14232.66 [ha]
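As noted in the Overview, the package can also return multiple near-optimal solutions, which is useful for presenting decision makers with several contrasting options. The sketch below assumes the `add_settings()` function, which controls the `nb_solutions` and `optimality_gap` settings listed in the problem summary above; whether `solve()` then returns a single object or a list of solutions should be checked against the package documentation.

``` r
# a sketch, assuming add_settings() exposes the settings shown in the
# problem summary: request up to 5 solutions within 10% of optimality
# (assuming the gap is expressed as a proportion)
problem_multi <-
  problem %>%
  add_settings(nb_solutions = 5, optimality_gap = 0.1)

# solve the updated problem; with nb_solutions > 1 the result is
# expected to contain one solution layer per near-optimal solution
solutions <- solve(problem_multi)
```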
This has just been a short taster of the package. For an extended tutorial on using the package, please refer to the vignette.

## Citation

Please cite the *restoptr R* package when using it in publications.

> Justeau-Allaire, D., Hanson, J. O., Lannuzel, G., Vismara, P., Lorca,
> X., & Birnbaum, P. (2023). restoptr: an R package for ecological
> restoration planning. Restoration Ecology, e13910.
> <https://doi.org/10.1111/rec.13910>

## Getting help

If you have any questions about using the package, suggestions for improvements, or if you detect a bug, please [open an issue in the online code repository](https://github.com/dimitri-justeau/restoptr/issues/new/choose). We designed the package to make it relatively easy to add new functionality, and would be delighted to hear from you.

## References

Jaeger, J. A. G. (2000). Landscape division, splitting index, and effective mesh size: New measures of landscape fragmentation. *Landscape Ecology*, 15(2), 115–130.

Justeau-Allaire, D., Vieilledent, G., Rinck, N., Vismara, P., Lorca, X., & Birnbaum, P. (2021). Constrained optimization of landscape indices in conservation planning to support ecological restoration in New Caledonia. *Journal of Applied Ecology*, 58(4), 744–754.

Pascual-Hortal, L., & Saura, S. (2006). Comparison and development of new graph-based landscape connectivity indices: Towards the priorization of habitat patches and corridors for conservation. *Landscape Ecology*, 21(7), 959–967.

Prud'homme, C., Fages, J.-G., & Lorca, X. (2016). Choco Solver Documentation.
TASC, INRIA Rennes, LINA CNRS UMR 6241, COSLING S.A.S.\nAvailable at .\n'",",https://doi.org/10.1111/rec.13910","2021/11/26, 06:09:40",698,GPL-3.0,15,213,"2023/03/15, 10:32:47",0,13,55,2,224,0,1.0,0.29556650246305416,"2023/06/16, 15:51:09",v1.0.5,0,2,false,,false,false,,,,,,,,,,, ADRIA.jl, A multi-criteria decision support platform for informing reef restoration and adaptation interventions.,open-AIMS,https://github.com/open-AIMS/ADRIA.jl.git,github,,Conservation and Restoration,"2023/10/22, 03:00:07",7,0,7,true,Julia,Australian Institute of Marine Science,open-AIMS,Julia,,"b'# ADRIA.jl\n\nADRIA: Adaptive Dynamic Reef Intervention Algorithms.\n\n[![Release](https://img.shields.io/github/v/release/open-AIMS/ADRIA.jl)](https://github.com/open-AIMS/ADRIA.jl/releases) [![DOI](https://zenodo.org/badge/483052659.svg)](https://zenodo.org/badge/latestdoi/483052659)\n\n[![Documentation](https://img.shields.io/badge/docs-stable-blue)](https://open-aims.github.io/ADRIA.jl/stable/) [![Documentation](https://img.shields.io/badge/docs-dev-blue)](https://open-aims.github.io/ADRIA.jl/dev/)\n\n
This project is currently moving to use the Blue Style Guide. Follow the style guide when submitting a PR.
\n\nADRIA is a decision-support tool designed to help reef managers, modellers and decision-makers\naddress the challenges of adapting to climate change in coral reefs. It provides line of sight\nto conservation solutions in complex settings where multiple objectives need to be considered,\nand helps investors identify which options represent the highest likelihood of providing\nreturns on investment. ADRIA uses a set of dynamic Multi-Criteria Decision Analyses (dMCDA)\nwhich simulates a reef decision maker to identify candidate locations for intervention\ndeployment which consider ecological, economic and social benefits.\n\nADRIA also includes a simplified coral ecosystem model to allow exploration of outcomes as a\nresult of intervention decisions made across a wide range of possible future conditions.\n\nADRIA requires Julia v1.9 and above.\n\nSetup and usage is demonstrated in the\n[Documentation](https://open-aims.github.io/ADRIA.jl/stable/usage/getting_started/).\n\nFor developers, refer to the\n[Developers setup guide](https://open-aims.github.io/ADRIA.jl/stable/development/development_setup/).\n'",",https://zenodo.org/badge/latestdoi/483052659","2022/04/19, 01:25:45",554,MIT,2278,3358,"2023/10/18, 23:23:43",65,357,485,339,7,7,1.3,0.3724559023066486,"2023/09/21, 04:22:50",v0.9.0,0,4,false,,false,false,,,https://github.com/open-AIMS,,,,,https://avatars.githubusercontent.com/u/68976138?v=4,,, ECOSTRESS,The images acquired by ECOSTRESS are the most detailed temperature images of the surface ever acquired from space and can be used to measure the temperature of an individual farmers field and plants.,,,custom,,Forest Observation and Management,,,,,,,,,,https://ecostress.jpl.nasa.gov/,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,, treeseg,Developed to near-automatically extract tree-level point clouds from high-density larger-area lidar point clouds acquired in forests.,apburt,https://github.com/apburt/treeseg.git,github,,Forest Observation and Management,"2021/05/31, 20:01:51",167,0,49,true,C++,,,"C++,CMake",,"b'# treeseg\n\nExtract individual trees from lidar point clouds\n\n\n\n## Table of contents\n\n- [Overview](#overview)\n- [Installation](#installation)\n- [Usage](#usage)\n- [Acknowledgements](#acknowledgements)\n- [Authors](#authors)\n- [Citing](#citing)\n- [License](#license)\n\n## Overview\n\ntreeseg has been developed to near-automatically segment individual tree point clouds from high-density larger-area lidar point clouds acquired in forests. A formal, albeit somewhat outdated description of the methods can be found in our [paper](https://doi.org/10.1111/2041-210X.13121).\n\n## Installation\n\ntreeseg has been developed and tested on Ubuntu 20.04 LTS only, and is dependent on the following packages:\n\n* Point Cloud Library (v1.10)\n* Armadillo (v9.8)\n\nThese dependencies are installed via apt:\n\n```\napt install libpcl-dev libarmadillo-dev\n```\n\ntreeseg can then be installed using:\n\n```\ngit clone https://github.com/apburt/treeseg.git;\nmkdir ./treeseg/build; cd ./treeseg/build; cmake ..; make;\n```\n\nOptionally, for users with RIEGL V-Line scan data, treeseg includes the executable `rxp2pcd`, to convert and preprocess data in RXP data stream format, to binary PCD format. 
This executable will automatically be built if the directories `./treeseg/include/reigl/` and `./treeseg/lib/riegl/` are populated with the RIEGL RiVLIB headers and libraries (as appropriate for the user\'s particular CPU architecture and gcc version), which can be downloaded from the Members Area of the RIEGL website (e.g., rivlib-2_5_10-x86_64-linux-gcc9.zip). \n\nFinally, the environment variable `PATH` can then be updated to include the directory containing the built treeseg executables, either temporarily, by calling the following, or permanently, by inserting it at the top of `~/.bashrc`:\n\n```\nexport PATH=""$PATH:/path/to/treeseg/build""\n```\n\n## Usage\n\nA tutorial demonstrating the usage of treeseg is available [here](/doc/tutorial_overview.md).\n\n## Acknowledgements\n\ntreeseg makes extensive use of the Point Cloud Library ([PCL](http://pointclouds.org)).\n\n## Authors\n\n* Andrew Burt\n* Mathias Disney\n* Kim Calders\n* Matheus Boni Vicari\n* Tony Peter\n\n## Citing\n\ntreeseg can be cited as:\n\nBurt, A., Disney, M., Calders, K. (2019). Extracting individual trees from lidar point clouds using *treeseg*. *Methods Ecol Evol* 10(3), 438\xe2\x80\x93445. doi: 10.1111/2041-210X.13121\n\nA doi for the latest version is available in [releases](https://github.com/apburt/treeseg/releases).\n\n## License\n\nThis project is licensed under the terms of the MIT license - see the [LICENSE](LICENSE) file for details.\n'",",https://doi.org/10.1111/2041-210X.13121","2017/12/10, 16:04:31",2145,MIT,0,86,"2023/03/14, 02:44:02",10,8,41,1,225,1,0.0,0.09999999999999998,"2021/05/31, 20:06:56",v0.2.2,0,3,false,,false,false,,,,,,,,,,, fgeo,Analyze forest diversity and dynamics.,forestgeo,https://github.com/forestgeo/fgeo.git,github,"ecology,forests,dynamics,forestgeo,metapackage,fgeo,abundance,tree,demography,habitat,dynamic",Forest Observation and Management,"2019/12/11, 18:02:06",28,0,5,false,R,ForestGEO,forestgeo,R,https://forestgeo.github.io/fgeo/.,"b'\n\n\n# Analyze forest diversity and dynamics\n\n\n\n[![lifecycle](https://img.shields.io/badge/lifecycle-maturing-blue.svg)](https://www.tidyverse.org/lifecycle/#maturing)\n[![Travis build\nstatus](https://travis-ci.org/forestgeo/fgeo.svg?branch=master)](https://travis-ci.org/forestgeo/fgeo)\n[![Coverage\nstatus](https://codecov.io/gh/forestgeo/fgeo/branch/master/graph/badge.svg)](https://codecov.io/github/forestgeo/fgeo?branch=master)\n[![CRAN\nstatus](https://www.r-pkg.org/badges/version/fgeo)](https://cran.r-project.org/package=fgeo)\n\n\n**fgeo** helps you to install, load, and access the documentation of\nmultiple packages to analyze forest diversity and dynamics\n(fgeo.analyze,\nfgeo.plot,\nfgeo.tool,\nfgeo.x). 
This\npackage-collection allows you to manipulate and plot\n[ForestGEO](http://www.forestgeo.si.edu/) data, and to do common\nanalyses including abundance, demography, and species-habitats\nassociations.\n\n - [Search functions and\n datasets](https://forestgeo.github.io/fgeo/articles/siteonly/reference.html)\n - [Ask questions, report bugs, or propose\n features](https://github.com/forestgeo/fgeo/issues/new)\n\n## Installation\n\nMake sure your R environment is as follows:\n\n - R version is recent\n - All packages are updated (run `update.packages()`; maybe use `ask =\n FALSE`)\n - No other R session is running\n - Current R session is clean (click *Session \\> Restart R*)\n\nInstall the latest stable version of **fgeo** from\n[CRAN](https://cran.r-project.org/) with:\n\n``` r\ninstall.packages(""fgeo"")\n```\n\nOr install the development version of **fgeo** from\n[GitHub](https://github.com/) with:\n\n``` r\n# install.packages(""devtools"")\ndevtools::install_github(""forestgeo/fgeo.x"")\n```\n\n - [How to setup .Rprofile for easiest installation of\n **fgeo**?](https://forestgeo.github.io/fgeo/articles/siteonly/questions-and-answers.html#how-to-setup--rprofile-for-easiest-installation-of-fgeo)\n - [How to update\n **fgeo**?](https://forestgeo.github.io/fgeo/articles/siteonly/questions-and-answers.html#how-to-update-fgeo)\n - [How to remove\n **fgeo**?](https://forestgeo.github.io/fgeo/articles/siteonly/questions-and-answers.html#how-to-remove-fgeo)\n - [How to avoid or fix common installation\n problems?](https://forestgeo.github.io/fgeo/articles/siteonly/questions-and-answers.html#how-to-avoid-or-fix-common-installation-problems)\n\n## Example\n\n``` r\nlibrary(fgeo)\n#> \xe2\x94\x80\xe2\x94\x80 Attaching packages \xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80 fgeo 1.1.3.9000 \xe2\x94\x80\xe2\x94\x80\n#> \xe2\x9c\x94 fgeo.analyze 1.1.10.9000 \xe2\x9c\x94 fgeo.tool 1.2.5 \n#> \xe2\x9c\x94 fgeo.plot 1.1.8 \xe2\x9c\x94 fgeo.x 1.1.4\n#> \xe2\x94\x80\xe2\x94\x80 Conflicts \xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80 fgeo_conflicts() \xe2\x94\x80\xe2\x94\x80\n#> \xe2\x9c\x96 fgeo.tool::filter() masks stats::filter()\n```\n\n### Explore **fgeo**\n\nOn an interactive session, `fgeo_help()` and `fgeo_browse_reference()`\nhelp you to search documentation.\n\n if (interactive()) {\n # To search on the viewer; accepts keywords\n fgeo_help()\n # To search on a web browser\n fgeo_browse_reference() \n }\n\n### Access and manipulate data\n\n`example_path()` allows you to access datasets stored in your 
R\nlibraries.\n\n``` r\nexample_path()\n#> [1] ""csv"" ""mixed_files"" ""rdata"" ""rdata_one"" \n#> [5] ""rds"" ""taxa.csv"" ""tsv"" ""vft_4quad.csv""\n#> [9] ""view"" ""weird"" ""xl""\n\n(vft_file <- example_path(""view/vft_4quad.csv""))\n#> [1] ""/home/mauro/R/x86_64-pc-linux-gnu-library/3.6/fgeo.x/extdata/view/vft_4quad.csv""\n```\n\n#### `read_()`\n\n`read_vft()` and `read_taxa()` import a ViewFullTable and ViewTaxonomy\nfrom .tsv or .csv files.\n\n``` r\nread_vft(vft_file)\n#> # A tibble: 500 x 32\n#> DBHID PlotName PlotID Family Genus SpeciesName Mnemonic Subspecies\n#> \n#> 1 385164 luquillo 1 Rubia\xe2\x80\xa6 Psyc\xe2\x80\xa6 brachiata PSYBRA \n#> 2 385261 luquillo 1 Urtic\xe2\x80\xa6 Cecr\xe2\x80\xa6 schreberia\xe2\x80\xa6 CECSCH \n#> 3 384600 luquillo 1 Rubia\xe2\x80\xa6 Psyc\xe2\x80\xa6 brachiata PSYBRA \n#> 4 608789 luquillo 1 Rubia\xe2\x80\xa6 Psyc\xe2\x80\xa6 berteroana PSYBER \n#> 5 388579 luquillo 1 Areca\xe2\x80\xa6 Pres\xe2\x80\xa6 acuminata PREMON \n#> 6 384626 luquillo 1 Arali\xe2\x80\xa6 Sche\xe2\x80\xa6 morototoni SCHMOR \n#> 7 410958 luquillo 1 Rubia\xe2\x80\xa6 Psyc\xe2\x80\xa6 brachiata PSYBRA \n#> 8 385102 luquillo 1 Piper\xe2\x80\xa6 Piper glabrescens PIPGLA \n#> 9 353163 luquillo 1 Areca\xe2\x80\xa6 Pres\xe2\x80\xa6 acuminata PREMON \n#> 10 481018 luquillo 1 Salic\xe2\x80\xa6 Case\xe2\x80\xa6 arborea CASARB \n#> # \xe2\x80\xa6 with 490 more rows, and 24 more variables: SpeciesID ,\n#> # SubspeciesID , QuadratName , QuadratID , PX ,\n#> # PY , QX , QY , TreeID , Tag , StemID ,\n#> # StemNumber , StemTag , PrimaryStem , CensusID ,\n#> # PlotCensusNumber , DBH , HOM , ExactDate ,\n#> # Date , ListOfTSM , HighHOM , LargeStem ,\n#> # Status \n```\n\n#### `pick_()` and `drop_()`\n\n**fgeo** is pipe-friendly. You may not use pipes but often they make\ncode easier to read.\n\n> Use %\\>% to emphasize a sequence of actions, rather than the object\n> that the actions are being performed on.\n\n\xe2\x80\x93 \n\n`pick_dbh_under()`, `drop_status()` and friends pick and drop rows from\na ForestGEO ViewFullTable or census table.\n\n``` r\n(census <- fgeo.x::tree5)\n#> # A tibble: 30 x 19\n#> treeID stemID tag StemTag sp quadrat gx gy MeasureID CensusID\n#> \n#> 1 7624 160987 1089\xe2\x80\xa6 175325 TRIP\xe2\x80\xa6 722 139. 425. 486675 5\n#> 2 8055 10036 1094\xe2\x80\xa6 109482 CECS\xe2\x80\xa6 522 94.8 424. 468874 5\n#> 3 19930 117849 1234\xe2\x80\xa6 165576 CASA\xe2\x80\xa6 425 61.3 496. 471979 5\n#> 4 23746 29677 14473 14473 PREM\xe2\x80\xa6 617 100. 328. 442571 5\n#> 5 31702 39793 22889 22889 SLOB\xe2\x80\xa6 304 53.8 73.8 447307 5\n#> 6 35355 44026 27538 27538 SLOB\xe2\x80\xa6 1106 203. 110. 449169 5\n#> 7 35891 44634 282 282 DACE\xe2\x80\xa6 901 172. 14.7 434266 5\n#> 8 39705 48888 33371 33370 CASS\xe2\x80\xa6 1010 184. 194. 451067 5\n#> 9 50184 60798 5830 5830 MATD\xe2\x80\xa6 1007 191. 132. 437645 5\n#> 10 57380 155867 66962 171649 SLOB\xe2\x80\xa6 1414 274. 279. 459427 5\n#> # \xe2\x80\xa6 with 20 more rows, and 9 more variables: dbh , pom ,\n#> # hom , ExactDate , DFstatus , codes ,\n#> # nostems , status , date \n\ncensus %>% \n pick_dbh_under(100)\n#> # A tibble: 18 x 19\n#> treeID stemID tag StemTag sp quadrat gx gy MeasureID CensusID\n#> \n#> 1 7624 160987 1089\xe2\x80\xa6 175325 TRIP\xe2\x80\xa6 722 139. 425. 486675 5\n#> 2 19930 117849 1234\xe2\x80\xa6 165576 CASA\xe2\x80\xa6 425 61.3 496. 471979 5\n#> 3 31702 39793 22889 22889 SLOB\xe2\x80\xa6 304 53.8 73.8 447307 5\n#> 4 35355 44026 27538 27538 SLOB\xe2\x80\xa6 1106 203. 110. 
449169 5\n#> 5 39705 48888 33371 33370 CASS\xe2\x80\xa6 1010 184. 194. 451067 5\n#> 6 57380 155867 66962 171649 SLOB\xe2\x80\xa6 1414 274. 279. 459427 5\n#> 7 95656 129113 1315\xe2\x80\xa6 131519 OCOL\xe2\x80\xa6 402 79.7 22.8 474157 5\n#> 8 96051 129565 1323\xe2\x80\xa6 132348 HIRR\xe2\x80\xa6 1403 278 40.6 474523 5\n#> 9 96963 130553 1347\xe2\x80\xa6 134707 TETB\xe2\x80\xa6 610 114. 182. 475236 5\n#> 10 115310 150789 1652\xe2\x80\xa6 165286 MANB\xe2\x80\xa6 225 24.0 497. 483175 5\n#> 11 121424 158579 1707\xe2\x80\xa6 170701 CASS\xe2\x80\xa6 811 146. 218. 484785 5\n#> 12 121689 158871 1712\xe2\x80\xa6 171277 INGL\xe2\x80\xa6 515 84.2 285. 485077 5\n#> 13 121953 159139 1718\xe2\x80\xa6 171809 PSYB\xe2\x80\xa6 1318 247. 354. 485345 5\n#> 14 124522 162698 1742\xe2\x80\xa6 174224 CASS\xe2\x80\xa6 1411 279. 210. 488386 5\n#> 15 125038 163236 1753\xe2\x80\xa6 175335 CASS\xe2\x80\xa6 822 153. 426. 488924 5\n#> 16 126087 NA 1773\xe2\x80\xa6 CASA\xe2\x80\xa6 521 89.8 408. NA NA\n#> 17 126803 NA 1785\xe2\x80\xa6 PSYB\xe2\x80\xa6 622 113. 426 NA NA\n#> 18 126934 NA 1787\xe2\x80\xa6 MICR\xe2\x80\xa6 324 47 480. NA NA\n#> # \xe2\x80\xa6 with 9 more variables: dbh , pom , hom ,\n#> # ExactDate , DFstatus , codes , nostems ,\n#> # status , date \n```\n\n`pick_main_stem()` and `pick_main_stemid()` pick the main stem or main\nstemid(s) of each tree in each census.\n\n``` r\nstem <- download_data(""luquillo_stem6_random"")\n\ndim(stem)\n#> [1] 1320 19\ndim(pick_main_stem(stem))\n#> [1] 1000 19\n```\n\n#### `add_()`\n\n`add_status_tree()`adds the column `status_tree` based on the status of\nall stems of each tree.\n\n``` r\nstem %>% \n select(CensusID, treeID, stemID, status) %>% \n add_status_tree()\n#> # A tibble: 1,320 x 5\n#> CensusID treeID stemID status status_tree\n#> \n#> 1 6 104 143 A A \n#> 2 6 119 158 A A \n#> 3 NA 180 222 G A \n#> 4 NA 180 223 G A \n#> 5 6 180 224 G A \n#> 6 6 180 225 A A \n#> 7 6 602 736 A A \n#> 8 6 631 775 A A \n#> 9 6 647 793 A A \n#> 10 6 1086 1339 A A \n#> # \xe2\x80\xa6 with 1,310 more rows\n```\n\n`add_index()` and friends add columns to a ForestGEO-like dataframe.\n\n``` r\nstem %>% \n select(gx, gy) %>% \n add_index()\n#> Guessing: plotdim = c(320, 500)\n#> * If guess is wrong, provide the correct argument `plotdim`\n#> # A tibble: 1,320 x 3\n#> gx gy index\n#> \n#> 1 10.3 245. 13\n#> 2 183. 410. 246\n#> 3 165. 410. 221\n#> 4 165. 410. 221\n#> 5 165. 410. 221\n#> 6 165. 410. 221\n#> 7 149. 414. 196\n#> 8 38.3 245. 38\n#> 9 143. 411. 196\n#> 10 68.9 253. 88\n#> # \xe2\x80\xa6 with 1,310 more rows\n```\n\n### Plot data\n\nFor simplicity, we will focus on only a few species.\n\n``` r\nstem_2sp <- stem %>% \n filter(sp %in% c(""PREMON"", ""CASARB""))\n```\n\n`autoplot()` and friends produce different output depending on the class\nof input. 
You can create different input classes, for example, with\n`sp()` and `sp_elev()`:\n\n - Use `sp(census)` to plot the column `sp` of a `census` dataset \xe2\x80\x93\n i.e.\xc2\xa0to plot species distribution.\n\n\n\n``` r\nclass(sp(stem_2sp))\n#> [1] ""sp"" ""tbl_df"" ""tbl"" ""data.frame""\n\nautoplot(sp(stem_2sp))\n```\n\n![](man/figures/README-autoplot-sp-1.png)\n\n - Use `sp_elev(census, elevation)` to plot the columns `sp` and `elev`\n of a `census` and `elevation` dataset, respectively \xe2\x80\x93 i.e.\xc2\xa0to plot\n species distribution and topography.\n\n\n\n``` r\nelevation <- fgeo.x::elevation\nclass(sp_elev(stem_2sp, elevation))\n#> [1] ""sp_elev"" ""list""\n\nautoplot(sp_elev(stem_2sp, elevation))\n```\n\n![](man/figures/README-autoplot-sp-elev-1.png)\n\n### Analyze\n\n#### Abundance\n\n`abundance()` and `basal_area()` calculate abundance and basal area,\noptionally by groups.\n\n``` r\nabundance(\n pick_main_stem(census)\n)\n#> # A tibble: 1 x 1\n#> n\n#> \n#> 1 30\n\nby_species <- group_by(census, sp)\n\nbasal_area(by_species)\n#> # A tibble: 18 x 2\n#> # Groups: sp [18]\n#> sp basal_area\n#> \n#> 1 CASARB 437. \n#> 2 CASSYL 4146. \n#> 3 CECSCH 144150. \n#> 4 DACEXC 56832. \n#> 5 GUAGUI 9161. \n#> 6 HIRRUG 131. \n#> 7 INGLAU 141. \n#> 8 MANBID 167. \n#> 9 MATDOM 45239. \n#> 10 MICRAC 0 \n#> 11 OCOLEU 437. \n#> 12 PREMON 78864. \n#> 13 PSYBER 0 \n#> 14 PSYBRA 154. \n#> 15 SCHMOR 41187. \n#> 16 SLOBER 23377. \n#> 17 TETBAL 272. \n#> 18 TRIPAL 93.3\n```\n\n#### Demography\n\n`recruitment_ctfs()`, `mortality_ctfs()`, and `growth_ctfs()` calculate\nrecruitment, mortality, and growth. They all output a list.\n`as_tibble()` converts the output from a list to a more convenient\ndataframe.\n\n``` r\ntree5 <- fgeo.x::tree5\n\nas_tibble(\n mortality_ctfs(tree5, tree6)\n)\n#> Detected dbh ranges:\n#> * `census1` = 10.9-323.\n#> * `census2` = 10.5-347.\n#> Using dbh `mindbh = 0` and above.\n#> # A tibble: 1 x 9\n#> N D rate lower upper time date1 date2 dbhmean\n#> \n#> 1 27 1 0.00834 0.00195 0.0448 4.52 18938. 20590. 101.\n```\n\n#### Species-habitats association\n\n`tt_test()` runs a torus translation test to determine habitat\nassociations of tree species. `as_tibble()` converts the output from a\nlist to a more convenient dataframe. `summary()` helps you to interpret\nthe result.\n\n``` r\n# This analysis makes sense only for tree tables\ntree <- download_data(""luquillo_tree5_random"")\n\nhabitat <- fgeo.x::habitat\nresult <- tt_test(tree, habitat)\n#> Using `plotdim = c(320, 500)`. To change this value see `?tt_test()`.\n#> Using `gridsize = 20`. 
To change this value see `?tt_test()`.\n\nas_tibble(result)\n#> # A tibble: 292 x 8\n#> habitat sp N.Hab Gr.Hab Ls.Hab Eq.Hab Rep.Agg.Neut Obs.Quantile\n#> * \n#> 1 1 ALCFLO 2 1443 153 4 0 0.902\n#> 2 2 ALCFLO 1 807 778 15 0 0.504\n#> 3 3 ALCFLO 0 0 715 885 -1 0 \n#> 4 4 ALCFLO 0 0 402 1198 -1 0 \n#> 5 1 ALCLAT 0 0 544 1056 -1 0 \n#> 6 2 ALCLAT 1 1432 156 12 0 0.895\n#> 7 3 ALCLAT 0 0 324 1276 -1 0 \n#> 8 4 ALCLAT 0 0 144 1456 -1 0 \n#> 9 1 ANDINE 1 1117 466 17 0 0.698\n#> 10 2 ANDINE 1 1081 510 9 0 0.676\n#> # \xe2\x80\xa6 with 282 more rows\n\nsummary(result)\n#> # A tibble: 292 x 3\n#> sp habitat association\n#> \n#> 1 ALCFLO 1 neutral \n#> 2 ALCFLO 2 neutral \n#> 3 ALCFLO 3 repelled \n#> 4 ALCFLO 4 repelled \n#> 5 ALCLAT 1 repelled \n#> 6 ALCLAT 2 neutral \n#> 7 ALCLAT 3 repelled \n#> 8 ALCLAT 4 repelled \n#> 9 ANDINE 1 neutral \n#> 10 ANDINE 2 neutral \n#> # \xe2\x80\xa6 with 282 more rows\n```\n\n## Downloads of fgeo packages\n\n![](man/figures/README-fgeo-downloads-1.png)\n\n## Related projects\n\nAdditional packages maintained by ForestGEO but not included in\n**fgeo**:\n\n - [**fgeo.data**](https://forestgeo.github.io/fgeo.data/): Open\n datasets of ForestGEO.\n - [**fgeo.krig**](https://forestgeo.github.io/fgeo.krig/): Analyze\n soils.\n\nOther packages not maintained by ForestGEO:\n\n - [CTFS-R Package](http://ctfs.si.edu/Public/CTFSRPackage/): The\n original package of CTFS functions. No longer supported by\n ForestGEO.\n - [**BIOMASS**](https://CRAN.R-project.org/package=BIOMASS): An R\n package to estimate above-ground biomass in tropical forests.\n\n## R code from recent publications by ForestGEO partners\n\nData have been made available as required by the journal to enable\nreproduction of the results presented in the paper. Please do not share\nthese data without permission of the ForestGEO plot Principal\nInvestigators (PIs). If you wish to publish papers based on these data,\nyou are also required to get permission from the PIs of the\ncorresponding ForestGEO plots.\n\n - [Soil drivers of local-scale tree growth in a lowland tropical\n forest (Zemunik et\n al., 2018).](https://github.com/SoilLabAtSTRI/Soil-drivers-of-tree-growth)\n - [Plant diversity increases with the strength of negative density\n dependence at the global scale (LaManna et\n al., 2018)](https://github.com/forestgeo/LaManna_et_al_Science)\n - Response \\#1: LaManna et al.\xc2\xa02018. Response to Comment on \xe2\x80\x9cPlant\n diversity increases with the strength of negative density\n dependence at the global scale\xe2\x80\x9d Science Vol. 360, Issue 6391,\n eaar3824. DOI: 10.1126/science.aar3824\n - Response \\#2: LaManna et al.\xc2\xa02018. Response to Comment on \xe2\x80\x9cPlant\n diversity increases with the strength of negative density\n dependence at the global scale\xe2\x80\x9d. Science Vol. 360, Issue 6391,\n eaar5245. DOI: 10.1126/science.aar5245\n\n## Information\n\n - [Getting help](https://forestgeo.github.io/fgeo/SUPPORT.html).\n - [Contributing](https://forestgeo.github.io/fgeo/CONTRIBUTING.html).\n - [Contributor Code of\n Conduct](https://forestgeo.github.io/fgeo/CODE_OF_CONDUCT.html).\n\n## Acknowledgments\n\nThanks to all partners of ForestGEO for sharing their ideas and code.\nFor feedback on **fgeo**, special thanks to Gabriel Arellano, Stuart\nDavies, Lauren Krizel, Sean McMahon, and Haley Overstreet. 
For all other\nhelp, I thank contributors in the the documentation of the features they\nhelped with.\n'",,"2017/12/07, 14:28:11",2148,CUSTOM,0,784,"2019/06/23, 21:33:24",8,112,206,0,1585,0,0.1,0.1905444126074498,"2019/06/19, 23:22:45",fgeo-1.1.4,0,17,false,,true,true,,,https://github.com/forestgeo,http://www.forestgeo.si.edu/,Smithsonian Tropical Research Institute,,,https://avatars.githubusercontent.com/u/25665726?v=4,,, SEPAL,Empowering people around the world to gain a better understanding of land cover dynamics in forest management by facilitating the efficient access and use of Earth observation data.,openforis,https://github.com/openforis/sepal.git,github,,Forest Observation and Management,"2023/10/04, 10:33:12",188,0,27,true,JavaScript,Open Foris,openforis,"JavaScript,Groovy,CSS,Shell,Java,Dockerfile,HTML,Python,Handlebars,R,EQ",https://sepal.io/,"b'![banner](https://raw.githubusercontent.com/openforis/sepal-doc/master/docs/source/_images/sepal_header.png)\n\nSEPAL\n=====\n[![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://github.com/openforis/sepal/blob/master/license.txt)\n[![Documentation Status](https://readthedocs.org/projects/sepal-doc/badge/?version=latest)](https://sepal-doc.readthedocs.io/en/latest/?badge=latest) \n[![Crowdin](https://badges.crowdin.net/sepal/localized.svg)](https://crowdin.com/project/sepal)\n\nSEPAL is a cloud computing platform for geographical data processing. It enables users to quickly process large amount\nof data without high network bandwidth requirements or need to invest in high-performance computing infrastructure.\n\n--------------------------------------------------------------------------------\n\nCurrently available in the following languages:\n\n| English | Fran\xc3\xa7ais | Espa\xc3\xb1ol |\n|---------|----------|---------|\n\nYou can contribute to the translation effort on our [crowdin project](https://crowdin.com/project/sepal).\n\n--------------------------------------------------------------------------------\n\nBackground\n----------\nReducing Emissions from Deforestation and Forest Degradation (REDD) is an effort to create a financial value for the\ncarbon stored in forests, offering incentives for developing countries to reduce emissions from forested lands and\ninvest in low-carbon paths to sustainable development. ""REDD+"" goes beyond deforestation and forest degradation,\nand includes the role of conservation, sustainable management of forests and enhancement of forest carbon stocks. The\nUN-REDD Programme is the United Nations collaborative initiative on Reducing Emissions from Deforestation and forest\nDegradation (REDD) in developing countries.\n\n[FAO](http://www.fao.org/home/en/), as a member of the [UN-REDD Programme](http://www.un-redd.org/), is responsible for\nassisting countries in developing robust national forest\nmonitoring systems (NFMS) and operational satellite land monitoring systems (SLMS) to help them to meet the measurement,\nreporting and verification (MRV) requirements of the REDD+. Furthermore, countries need help in establishing and\nmaintaining a SLMS capable of producing the information required to make consequential decisions about forest\nmanagement; decisions that promote sustainable forest management and can potentially mitigate the effects of global\nclimate change on society. 
Specifically, a solution is needed to address the existing challenges countries face when\ndeveloping forest monitoring systems, due to difficulties accessing and processing remotely sensed data; a key source\nof information for monitoring forest area and forest area change over large, often remote areas.\n\nTo tackle the problems mentioned above, FAO and Norway are collaborating on the System for Earth Observation\nData Access, Processing and Analysis for Land Monitoring (SEPAL).\n\nIt consists of the following components:\n\n1. A powerful, cloud-based computing platform for big data processing and storage.\n2. Facilitated access to remote sensing data sources through a direct-access-interface to earth observation data\n repositories.\n3. A set of open-source software tools, capable of efficient data processing\n4. Related capacity development and technology transfer activities\n\nThe computing platform enables FAO national partners to process data quickly without locally maintained high\nperformance computing infrastructures. The direct link to data repositories allows fast access to satellite\nimagery and other earth observation data for processing. The software tools, such as FAO\xe2\x80\x99s\n[Open Foris Geospatial Toolkit](http://www.openforis.org/tools/geospatial-toolkit.html)\nperform powerful image processing, are completely customizable and function similarly \xe2\x80\x98on the cloud\xe2\x80\x99 or on the desktop.\n\nScreenshots\n-----------\n![01 landing](https://user-images.githubusercontent.com/149204/132474862-daf724e5-e7f8-4086-9132-c9afde0e6173.png)\n![02 login](https://user-images.githubusercontent.com/149204/132474870-be73899f-f6bb-4d8b-96c5-05bb21a5d53c.png)\n![03 recipe list](https://user-images.githubusercontent.com/149204/132474880-12333a36-dee0-4bdc-a0b4-0e9aab24b601.png)\n![03 create recipe](https://user-images.githubusercontent.com/149204/132481048-6149f776-a7ed-47cb-8f75-3519aa1b8f1e.png)\n![04 optical](https://user-images.githubusercontent.com/149204/132482428-16ef1555-26bc-441a-8717-d65db3b62ef4.png)\n![05 nicfi planet composite](https://user-images.githubusercontent.com/149204/132474895-da433549-5d52-48cf-93ae-23c0ee9d47c0.png)\n![06 sentinel1 time scan](https://user-images.githubusercontent.com/149204/132483174-154e792e-b6ce-4b22-ad08-1b8e4fdda829.png)\n![07 sentinel1 harmonics](https://user-images.githubusercontent.com/149204/132474903-0d1db533-7427-49f6-9981-07aa5a0f6b71.png)\n![08 classification](https://user-images.githubusercontent.com/149204/132474907-d4a018a1-282f-4dbd-b870-90bae470d1a0.png)\n![09 ccdc chart](https://user-images.githubusercontent.com/149204/132474909-3a3c9f9d-4fb9-42b8-be01-2b354c7283a3.png)\n![10 visparams](https://user-images.githubusercontent.com/149204/132474911-13fdd36a-e4fd-4ad2-93e2-e0a53510b1dc.png)\n![11 layers layout](https://user-images.githubusercontent.com/149204/132478296-627a62cd-9d7b-40cf-a1aa-034c50664cf6.png)\n![12 iPhone](https://user-images.githubusercontent.com/149204/132478926-2bf51235-de16-4a11-9bfb-4960b1e5471a.png)\n![13 terminal](https://user-images.githubusercontent.com/149204/132491822-db82fe79-154f-4f60-b0bc-b5a57006c5a4.png)\n![14 apps](https://user-images.githubusercontent.com/149204/132491851-5ac0303f-1064-4e12-9627-f34e3f78d880.png)\n\nArchitectural overview\n----------------------\nThe core of the system is the _SEPAL server_ and the _user sandboxes_. 
SEPAL server provides a web-based user interface, where geospatial data from multiple providers can be searched, processing chains composed and executed, and geospatial data products visualized.

The user sandboxes are spaces where users get access to a number of geospatial data processing tools, such as those included in the Open Foris Geospatial Toolkit and Orfeo Toolbox, and to their own dedicated storage. SEPAL provides users SSH access to their respective sandboxes. This can either be done directly with an SSH client, or through a provided web-based terminal. Web-based sandbox tools can be accessed over HTTP.

Sandboxes are implemented as Docker containers, which, in addition to providing isolation between users, allow for very flexible deployment. Sandboxes are started when needed, and stopped when not used. This enables them to be deployed in a cluster of worker server instances, which can be dynamically scaled up and down based on demand.

### Default AWS deployment
There are three types of server instances:

1. SEPAL servers, constantly running, one in each region where SEPAL is deployed. In addition to the features described above, they are also the entry points for user sandboxes. These instances can be fairly small and cheap, and don't require much storage.

2. Worker instances, running user sandboxes and retrieving data (Landsat, Sentinel, etc.). These instances are automatically launched when users access their sandboxes, and terminated when users disconnect. Users get to decide which instance type each sandbox session will be running on.

3. Operation server, a single instance. It tests and deploys the software, monitors the health of the system, and provides a user interface where usage can be monitored, and disk/instance quotas can be configured. This instance can be fairly small.

Users can at times require a lot of processing power and memory for their processing jobs. The large instances needed for these types of jobs are quite expensive. For instance, an r3.8xlarge (32 CPUs, 244 GiB memory) costs over 3 USD an hour, which adds up to more than 2,300 USD a month. When using such expensive instances, care has to be taken to use them efficiently, and not have them sitting idle at any time. To maximize the utility of the worker instances, SEPAL will automatically launch them when they are requested, and terminate them when they're not used anymore. For instance, if a user runs a 10-hour processing job on an r3.8xlarge, the total cost would be 30 USD, with no money spent on an idling instance.

To limit the cost of operating SEPAL, each user has a configurable monthly budget to spend on sandbox sessions. For instance, given a monthly budget of 100 USD, a user might have used 32 hours of r3.8xlarge, or 250 hours of r3.large and 450 hours of t2.large.

Another costly component is storage, where 1 TB of EFS storage costs 300 USD a month. To limit storage costs, each user has a configurable disk quota.

### Components and services part of a SEPAL deployment

**HAProxy** -
Off-the-shelf load balancer, allowing SEPAL to be clustered for availability. Runs both SSH and HTTPS on port 443, to prevent firewalls from blocking SSH.

**nginx** -
Off-the-shelf HTTP and reverse proxy server, proxying all SEPAL HTTP endpoints.

**Xterm.js** -
Off-the-shelf web-based SSH client.
Gives users SSH access to their Sandbox in a web browser.\n\n**Sepal server** -\nProvides the system user interface.\n\n**Data provider** -\nService retrieving geospatial data from various external data providers.\n\n**Sandbox lifecycle manager** -\nService managing the user sandboxes. It deploys them on demand when users requests access, and un-deploys them as soon\nas a user disconnects from them.\n\n**Sandbox SSH gateway** -\nService responsible for dynamically tunnelling SSH connections to users sandbox, while notifying the sandbox lifecycle\nmanager on connects and disconnects.\n\n**Sandbox web proxy** -\nService proxying HTTP connections to user sandboxes. It maintains HTTP sessions, and notifies the sandbox lifecycle\nmanager on session creation and expiry.\n\n**Sandbox** -\nThe user sandboxes are spaces where users get access to a number of geospatial data processing tools. See table below\nfor provided tools.\n\n![SEPAL components](https://raw.githubusercontent.com/openforis/sepal/master/docs/Components.png)\n\n### Software deployed on each users sandbox:\n\n**Open Foris Geospatial Toolkit** -\nA collection of command-line utilities for processing of geospatial data.\n\n**GDAL** -\nA translator library for raster and vector geospatial data formats.\n\n**R** -\nLanguage for statistical computing and graphics.\n\n**RStudio Server** -\nAn IDE for R in a web browser.\n\n**Orfeo ToolBox** -\nLibrary for remote sensing image processing.\n\n**OpenSARKit** -\nTools for Automatic Preprocessing of SAR Imagery.\n\nBuild and Release\n-----------------\nThe project is under active development, and the build and release process is still in flux, so these\ninstructions will change, and improve, over time.\n\n### Prerequisites\nIn order to build and run the SEPAL system, a Linux or macOS installation is needed.\nThe end-users on the other hand, are of course free to use whatever Operating system they prefer, including Windows.\n\nIn addition to this, the following software must be installed:\n\n[Java](http://www.oracle.com/technetwork/java/javase/downloads/index.html),\n[Maven](https://maven.apache.org/download.cgi), and\n[Ansible](http://docs.ansible.com/ansible/intro_installation.html).\nIf you want to run the system locally, you need [Vagrant](https://www.vagrantup.com/downloads.html), and\nto deploy on Amazon Web Services EC2 instances, you need an [AWS account](https://aws.amazon.com/account/).\n\n### Configuration\nTBD\n\n### Build\nTBD\n\n### Deploy\nTBD\n'",,"2015/07/06, 09:47:47",3033,MIT,639,10858,"2023/10/05, 08:31:40",86,56,221,31,20,1,0.1,0.48480050664977836,"2015/10/22, 13:10:41",v1.0,0,16,false,,true,true,,,https://github.com/openforis,http://www.openforis.org,"Rome, Italy",,,https://avatars.githubusercontent.com/u/1212750?v=4,,, Forest Carbon database,Global Forest Carbon Database.,forc-db,https://github.com/forc-db/ForC.git,github,,Forest Observation and Management,"2023/06/09, 14:38:47",53,0,4,true,HTML,ForC,forc-db,"HTML,R,CSS",https://forc-db.github.io/,"b'# Forest Carbon database (ForC-db)\n[![DOI](https://zenodo.org/badge/49171546.svg)](https://zenodo.org/badge/latestdoi/49171546)\n\n## Database overview\n\nFor an overview of the database, please see the [ForC database website](https://forc-db.github.io/).\n\n## Database structure\n\n*Database structure* - The ForC database consists of a series of cross-referenced data tables and associated supplementary materials, as described below.\n\n*Data tables* are in the [data 
folder](https://github.com/forc-db/ForC/tree/master/data). \n\n*Metadata* for all tables is contained in the [metadata folder](https://github.com/forc-db/ForC/tree/master/metadata). \n\n*Scripts* for manipulating the database are given in the [scripts folder](https://github.com/forc-db/ForC/tree/master/scripts).\n\n*Figures* summarizing database content are given in the [figures folder](https://github.com/forc-db/ForC/tree/master/figures).\n\n*Original publications* associated with all data are given in the measurements table, and full citations are in the [bibliography file](). The references contained in the database are archived in the [ForC-db group on Mendeley](https://www.mendeley.com/community/forc-db/), which is a public repository. PDFs of ForC-db references are be stored in our [References repository](https://github.com/forc-db/References), which is private because of copyright restrictions, but access will be granted upon request.\n\n*Database publications* are listed in [ForC_publications.csv](https://github.com/forc-db/ForC/blob/master/ForC_publications.csv). We ask that any authors using the database add resulting publications to this table.\n\n\n## QA/QC\nThis is an automatically generated report of database errors and inconsistencies.\n\n[![QA_QC_checks](https://github.com/forc-db/ForC/workflows/QA_QC_checks/badge.svg)](https://github.com/forc-db/ForC/actions) **ATTENTION!!!!** If this is not passing, the dashboard below is not up-to-date because the continuous integration system needs to be fixed!\n\n\n\n[![There_are_no_errors to fix_:-)](https://github.com/forc-db/ForC/blob/master/QA_QC/error_reports/errors.png)](https://github.com/forc-db/ForC/tree/master/QA_QC/error_reports)\n\n[![There_are_no_warnings_:-)](https://github.com/forc-db/ForC/blob/master/QA_QC/warning_reports/warnings.png)](https://github.com/forc-db/ForC/tree/master/QA_QC/warning_reports)\n\n\n## Is ForC_simplified up to date?\n[![Create_ForC_simplified](https://github.com/forc-db/ForC/workflows/Create_ForC_simplified/badge.svg)](https://github.com/forc-db/ForC/actions) **ATTENTION!!!!** If this is not passing, ForC_simplified is not up to date.\n\n'",",https://zenodo.org/badge/latestdoi/49171546","2016/01/07, 00:59:31",2848,CC-BY-4.0,11,2859,"2022/06/02, 15:53:39",43,68,210,0,510,0,0.5,0.6100676183320811,"2023/06/09, 13:20:20",v4.0-alpha,0,13,false,,false,true,,,https://github.com/forc-db,,Smithsonian,,,https://avatars.githubusercontent.com/u/16504833?v=4,,, TreeLS,High performance R functions for forest data processing based on Terrestrial Laser Scanning (but not only) point clouds.,tiagodc,https://github.com/tiagodc/TreeLS.git,github,,Forest Observation and Management,"2023/06/10, 05:25:59",67,0,11,true,C++,,,"C++,R",,"b'[![GPLv3 License](https://img.shields.io/badge/License-GPL%20v3-yellow.svg)](https://opensource.org/licenses/)\n[![](https://www.r-pkg.org/badges/version/TreeLS)](https://cran.r-project.org/package=TreeLS)\n![GitHub tag (latest by date)](https://img.shields.io/github/v/tag/tiagodc/TreeLS)\n![](https://cranlogs.r-pkg.org/badges/grand-total/TreeLS)\n\n# TreeLS\n\nHigh performance R functions for forest data processing based on **T**errestrial **L**aser **S**canning (but not only) point clouds.\n\n## Description\n\nThis package is a refactor of the methods described in [this paper](https://doi.org/10.1016/j.compag.2017.10.019), among many other features for 3D point cloud processing of forest environments.\n\nMost algorithms are written in C++ and wrapped in R functions through `Rcpp`. 
*TreeLS* is built on top of [lidR](https://github.com/Jean-Romain/lidR/), using its `LAS` infrastructure internally for most methods.\n\nFor any questions, comments or bug reports please submit an [issue](https://github.com/tiagodc/TreeLS/issues) here on GitHub. Suggestions, ideas and references of new algorithms are always welcome - as long as they fit into TreeLS\' scope.\n\n`TreeLS` is currently on v2.0.2. To install it from an official mirror, use: `install.packages(""TreeLS"")`. To install the most recent version, check out the *Installation from source* section below.\n\n## News\n\n- August/2020: Version 2.0 is finally available! It\'s a major release, introducing several new functionalities, bug fixes, more robust estimators for noisy clouds and more flexible plotting. All functionalities from older versions are now available and optimized, so there should be no need to use legacy code anymore. The scope of application of TreeLS has become much wider in this version, specially due to the introduction of functions like `fastPointMetrics` and `shapeFit`, making it much easier for researchers to assess point cloud data in many contexts and develop their own methods on top of those functions. For a comprehensive list of the updates check out the [CHANGELOG](https://github.com/tiagodc/TreeLS/blob/master/CHANGELOG.md).\n\n- March/2019: `TreeLS` is finally available on CRAN and is now an official R package.\n\n\n\n## Main functionalities\n\n- Tree detection at plot level\n- Tree region assignment\n- Stem detection and denoising\n- Stem segmentation\n- Forest inventory\n- Fast calculation of point features\n- Research basis and other applications\n- 3D plotting and manipulation\n\n## Installation from source\n\n### Requirements\n- Rcpp compiler:\n - on Windows: install [Rtools](https://cran.r-project.org/bin/windows/Rtools/) for your R version - make sure to add it to your system\'s *path*\n - on Mac: install Xcode\n - on Linux: be sure to have `r-base-dev` installed\n\n### Install TreeLS latest version\n\nOn the R console, run:\n```\nremotes::install_github(\'tiagodc/TreeLS\')\n```\n\n## Usage\n\nExample of full processing workflow from reading a point cloud file until stem segmentation of a forest plot:\n```\nlibrary(TreeLS)\n\n# open sample plot file\nfile = system.file(""extdata"", ""pine_plot.laz"", package=""TreeLS"")\ntls = readTLS(file)\n\n# normalize the point cloud\ntls = tlsNormalize(tls, keep_ground = F)\nx = plot(tls)\n\n# extract the tree map from a thinned point cloud\nthin = tlsSample(tls, smp.voxelize(0.02))\nmap = treeMap(thin, map.hough(min_density = 0.1), 0)\nadd_treeMap(x, map, color=\'yellow\', size=2)\n\n# classify tree regions\ntls = treePoints(tls, map, trp.crop())\nadd_treePoints(x, tls, size=4)\nadd_treeIDs(x, tls, cex = 2, col=\'yellow\')\n\n# classify stem points\ntls = stemPoints(tls, stm.hough())\nadd_stemPoints(x, tls, color=\'red\', size=8)\n\n# make the plot\'s inventory\ninv = tlsInventory(tls, d_method=shapeFit(shape=\'circle\', algorithm = \'irls\'))\nadd_tlsInventory(x, inv)\n\n# extract stem measures\nseg = stemSegmentation(tls, sgt.ransac.circle(n = 20))\nadd_stemSegments(x, seg, color=\'white\', fast=T)\n\n# plot everything once\ntlsPlot(tls, map, inv, seg, fast=T)\n\n# check out only one tree\ntlsPlot(tls, inv, seg, tree_id = 11)\n\n#------------------------------------------#\n### overview of some new methods on v2.0 ###\n#------------------------------------------#\n\nfile = system.file(""extdata"", ""pine.laz"", package=""TreeLS"")\ntls = 
readTLS(file) %>% tlsNormalize()\n\n# calculate some point metrics\ntls = fastPointMetrics(tls, ptm.knn())\nx = plot(tls, color=\'Verticality\')\n\n# get its stem points\ntls = stemPoints(tls, stm.eigen.knn(voxel_spacing = .02))\nadd_stemPoints(x, tls, size=3, color=\'red\')\n\n# get dbh and height\ndbh_algo = shapeFit(shape=\'cylinder\', algorithm = \'bf\', n=15, inliers=.95, z_dev=10)\ninv = tlsInventory(tls, hp = .95, d_method = dbh_algo)\nadd_tlsInventory(x, inv)\n\n# segment the stem usind 3D cylinders and getting their directions\nseg = stemSegmentation(tls, sgt.irls.cylinder(n=300))\nadd_stemSegments(x, seg, color=\'blue\')\n\n# check out a specific tree segment\ntlsPlot(seg, tls, segment = 3)\n\n```\n'",",https://doi.org/10.1016/j.compag.2017.10.019","2016/05/16, 16:36:48",2718,GPL-3.0,1,325,"2023/06/10, 05:36:31",22,3,32,1,137,0,0.0,0.375,,,0,4,false,,false,false,,,,,,,,,,, TreeQSM,Quantitative Structure Models of Single Trees from Laser Scanner Data.,InverseTampere,https://github.com/InverseTampere/TreeQSM.git,github,,Forest Observation and Management,"2022/05/11, 15:30:40",99,0,30,true,MATLAB,Inverse Tampere,InverseTampere,"MATLAB,M",,"b'# TreeQSM\n\n**Version 2.4.1**\n**Reconstruction of quantitative structure models for trees from point cloud data**\n\n[![DOI](https://zenodo.org/badge/100592530.svg)](https://zenodo.org/badge/latestdoi/100592530)\n\n![QSM image](https://github.com/InverseTampere/TreeQSM/blob/master/Manual/fig_point_cloud_qsm.png)\n\n\n### Description\n\nTreeQSM is a modelling method that reconstructs quantitative structure models (QSMs) for trees from point clouds. A QSM consists of a hierarchical collection of cylinders estimating topological, geometrical and volumetric details of the woody structure of the tree. The input point cloud, which is usually produced by a terrestrial laser scanner, must contain only one tree, which is intended to be modelled, but the point cloud may contain also some points from the ground and understory. Moreover, the point cloud should not contain significant amount of noise or points from leaves as these are interpreted as points from woody parts of the tree and can therefore lead to erroneous results. Much more details of the method and QSMs can be found from the manual that is part of the code distribution.\n\nThe TreeQSM is written in Matlab.\nThe main function is _treeqsm.m_, which takes in a point cloud and a structure array specifying the needed parameters. Refer to the manual or the help documentation of a particular function for further details.\n\n### References\n\nWeb: https://research.tuni.fi/inverse/\nSome published papers about the method and applications: \nRaumonen et al. 2013, Remote Sensing https://www.mdpi.com/2072-4292/5/2/491 \nCalders et al. 2015, Methods in Ecology and Evolution https://besjournals.onlinelibrary.wiley.com/doi/full/10.1111/2041-210X.12301 \nRaumonen et al. 2015, ISPRS Annals https://www.isprs-ann-photogramm-remote-sens-spatial-inf-sci.net/II-3-W4/189/2015/ \n\xc3\x85kerblom et al. 2015, Remote Sensing https://www.mdpi.com/2072-4292/7/4/4581 \n\xc3\x85kerblom et al. 2017, Remote Sensing of Environment https://www.sciencedirect.com/science/article/abs/pii/S0034425716304746 \nde Tanago Menaca et al. 2017, Methods in Ecology and Evolution https://besjournals.onlinelibrary.wiley.com/doi/10.1111/2041-210X.12904 \n\xc3\x85kerblom et al. 2018, Interface Focus http://dx.doi.org/10.1098/rsfs.2017.0045 \nDisney et al. 
2018, Interface Focus http://dx.doi.org/10.1098/rsfs.2017.0048 \n\n\n### Quick guide\n\nHere is a quick guide for testing the code and starting its use. However, it is highly recommended that after the testing the user reads the manual for more information how to best use the code. \n\n1) Start MATLAB and set the main path to the root folder, where _treeqsm.m_ is located.\\\n2) Use _Set Path_ --> _Add with Subfolders_ --> _Open_ --> _Save_ --> _Close_ to add the subfolders, where all the codes of the software are, to the paths of MATLAB.\\\n3) Import a point cloud of a tree into the workspace. Let us name it P.\\\n4) Define suitable inputs:\\\n     >> inputs = define_input(P,1,1,1);\\\n5) Reconstruct QSMs:\\\n     >> QSM = treeqsm(P,inputs); \n'",",https://zenodo.org/badge/latestdoi/100592530","2017/08/17, 10:39:39",2260,CUSTOM,0,132,"2022/12/14, 13:41:37",16,0,4,3,315,2,0,0.03125,"2022/05/11, 15:49:18",2.4.1,0,2,false,,false,false,,,https://github.com/InverseTampere,https://webpages.tuni.fi/inverse/,"Tampere, Finland",,,https://avatars.githubusercontent.com/u/27767416?v=4,,, DeepLidar,Geographic Generalization in Airborne RGB Deep Learning Tree Detection.,weecology,https://github.com/weecology/DeepLidar.git,github,,Forest Observation and Management,"2019/10/14, 18:47:49",45,0,5,false,Python,Weecology,weecology,"Python,Shell",,"b'# Geographic Generalization in Airborne RGB Deep Learning Tree Detection\n\nBen. G. Weinstein, Sergio Marconi, Stephanie Bohlman, Alina Zare, Ethan White\n\n# Summary\nDeepLidar is a keras retinanet implementation for predicting individual tree crowns in RGB imagery. \n\n## How can I train new data?\n\nDeepLidar uses a semi-supervised framework for model training. For generating lidar-derived training data see (). I recommend using a conda environments to manage python dependencies. \n\n1. Create conda environment and install dependencies\n\n```\nconda env create --name DeepForest -f=generic_environment.yml\n```\n\nClone the fork of the retinanet repo and install in local environment\n\n```\nconda activate DeepForest\ngit clone https://github.com/bw4sz/keras-retinanet\ncd keras-retinanet\npip install .\n```\n\n2. Update config paths\n\nAll paths are hard coded into [_config.yml](https://github.com/weecology/DeepLidar/blob/master/_config.yml)\n\n3. Train new model with new hand annotations\n\n```\npython train.py --retrain\n```\n\n# How can I use pre-built models to predict new images.\n\nCheck out a demo ipython notebook: https://github.com/weecology/DeepLidar/tree/master/demo\n\n# Where are the data?\n\nThe Neon Trees Benchmark dataset is soon to be published. All are welcome to use it. Currently under curation (in progress): https://github.com/weecology/NeonTreeEvaluation/\n\nFor a static version of the dataset that reflects annotations at the time of submission, see dropbox link [here](https://www.dropbox.com/s/yjrhs8b7ocbw6ji/static.zip?dl=0)\n\n## Published articles\n\nOur first article was published in *Remote Sensing* and can be found [here](https://www.mdpi.com/2072-4292/11/11/1309). \n\nThis codebase is constantly evolving and improving. 
To access the code at the time of publication, see Releases.\nThe results of the full model can be found on our [comet page](https://www.comet.ml/bw4sz/deeplidar/2645e41bf83b47e68a313f3c933aff8a).\n'",,"2018/11/13, 21:47:48",1807,CUSTOM,0,1159,"2020/01/22, 21:47:40",3,0,9,0,1372,0,0,0.0,"2019/08/25, 21:57:56",v4.1,0,1,false,,false,false,,,https://github.com/weecology,http://weecology.org,,,,https://avatars.githubusercontent.com/u/1156696?v=4,,, Global Forest Watch,"An online, global and near real-time forest monitoring tool.",Vizzuality,https://github.com/wri/gfw.git,github,"forest-monitoring,satellite-imagery,mapbox,deforestation,nextjs,react,redux",Forest Observation and Management,"2023/10/25, 19:53:48",253,0,32,true,JavaScript,World Resources Institute,wri,"JavaScript,SCSS,Gherkin,HTML",https://www.globalforestwatch.org,"b'# What is Global Forest Watch?\n\n[Global Forest Watch](http://www.globalforestwatch.org/) (GFW) is a\ndynamic online forest monitoring and alert system that empowers people\neverywhere to better manage forests. This repository contains the GFW web app.\n\n![Global forest watch map](/public/preview.jpg?raw=true ""Global Forest Watch"")\n\n# Getting started\n\nThe GFW web app is built with [Nextjs](https://nextjs.org/), [React](https://reactjs.org/) and [Redux](https://redux.js.org/).\n\n## Installing the app\n\nClone the repo:\n\n```bash\n$ git clone https://github.com/Vizzuality/gfw.git\n```\n\nInstalling dependencies:\n\n```bash\n$ yarn\n```\n\nCopy the `.env.sample` to `.env.local`, and start the server:\n\n```bash\n$ yarn dev\n```\n\nThe app should now be accessible on [http://0.0.0.0:3000](http://0.0.0.0:3000).\n\n## Developing\n\nWe follow a [Gitflow Worklow](https://www.atlassian.com/git/tutorials/comparing-workflows/gitflow-workflow) for development and deployment. \nWe merge pull requests into `develop`, which is deployed automatically to both the staging and pre-production servers. In order to release features into production, we merge `develop` into `master`, triggering an automatic deployment to production.\n\n![gitflow workflow](https://www.atlassian.com/dam/jcr:b5259cce-6245-49f2-b89b-9871f9ee3fa4/03%20(2).svg)\n\n### Staging, pre-production and review apps\n\nWe use [Heroku](https://www.heroku.com/) to deploy our apps. Production is deployed automatically from `master` to [globalforestwatch.org](https://www.globalforestwatch.org). \n\nWe have two staging environments: staging and pre-production. Both are deployed automatically from `develop`. \nThe main difference is that staging points to the staging environments of the APIs we access, pre-production points to the production ones. This is set by the `NEXT_PUBLIC_FEATURE_ENV` env variable. \n\nWe also make use of Heroku\'s [Review Apps](https://devcenter.heroku.com/articles/github-integration-review-apps) feature. \nWhen a pull request is created, a review app is deployed automatically by Heroku with a `NEXT_PUBLIC_FEATURE_ENV` of `preproduction`, and a link to the environment is added automatically to the respective pull request. \n\n\n## Releases\n\nWe are using github releases to record changes to the app. To help us manage this we are using [Zeit Releases](https://github.com/zeit/release), an npm package for handling github releases, tagging commits (major, minor, patch), and automating semantic release logs. 
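As a rough illustration of what those `(major/minor/patch)` tags mean for the released version number (a minimal Python sketch of semver conventions, not part of the GFW release tooling):

```python
# Minimal sketch of semantic-version bumping (illustrative only, not part of
# the GFW tooling): a breaking change bumps major, a feature bumps minor,
# a bug fix bumps patch.
def bump(version: str, change: str) -> str:
    major, minor, patch = (int(part) for part in version.split("."))
    if change == "major":
        return f"{major + 1}.0.0"
    if change == "minor":
        return f"{major}.{minor + 1}.0"
    if change == "patch":
        return f"{major}.{minor}.{patch + 1}"
    raise ValueError(f"unknown change type: {change!r}")

assert bump("6.4.3", "patch") == "6.4.4"
assert bump("6.4.4", "minor") == "6.5.0"
assert bump("6.5.0", "major") == "7.0.0"
```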
For a more detailed explanation of semantic changelogs see [this post](https://semver.org/).\n\n\n#### Managing commits for a release\n\nWhen developing, you can tag your commits as follows: `fix some excellent bug (patch)` where `patch` can be `(major/minor/patch/ignore)`. This commit title will automatically be grouped into the correct section for the release. Otherwise you will be prompted during the release to assign (or ignore) each of your commits. You will have to do this for every commit so don\'t forget to squash!\n\nSo how do you make a release on GFW?\n\n1. Checkout master and merge in develop (not compulsory but advised for consistency).\n2. Run `npx release [type]` where type can be `major`, `minor`, `patch`, or `pre` (see [zeit docs](https://github.com/zeit/release) for more details).\n3. Follow the prompts to manage commits.\n4. You will be taken to github draft release editor with all your commits grouped and ready to go.\n5. Enter your title and include any extra info you want.\n6. Publish!\n\n# RW API Documentation for GFW\n\nMap layers and relevant datasets are stored in the [RW-API](http://api.resourcewatch.org/) and the `globalforestwatch.org/map` utilises the [layer-manager](https://github.com/Vizzuality/layer-manager) to render them.\n\nThe schema used to style these layers, their legends, and define their interactions are specific to the *Global Forest Watch* platform.\n\nWhen creating or modifying layers/datasets for GFW, follow the schema and syntax outlined in the [API Documentation](./docs/API_Documentation.md) markdown file.\n\nTo view GFW-specific layers and datasets use the following endpoint:\n\nhttps://api.resourcewatch.org/v1/dataset?app=gfw&includes=layer,vocabulary,metadata&page[size]=200\n\n### BrowserStack\n\nWe use [BrowserStack](https://www.browserstack.com) to find and fix cross-browser issues.\n\n\n'",,"2012/05/17, 11:16:15",4178,MIT,1012,26981,"2023/10/25, 20:10:26",17,4419,4699,285,0,11,1.0,0.6696428571428572,"2020/12/17, 15:48:53",6.4.4,0,63,false,,false,false,,,https://github.com/wri,https://wri.org,"Washington, DC",,,https://avatars.githubusercontent.com/u/4615146?v=4,,, gfw-mapbuilder,A library to build custom Forest Atlas web applications.,wri,https://github.com/wri/gfw-mapbuilder.git,github,"gfw,esri-js,website",Forest Observation and Management,"2023/10/19, 16:17:51",31,0,2,true,TypeScript,World Resources Institute,wri,"TypeScript,JavaScript,SCSS,HTML",https://my.gfw-mapbuilder.org/v1.latest/,"b'[![Build Status](https://github.com/wri/gfw-mapbuilder/workflows/build-and-deploy/badge.svg)](https://github.com/wri/gfw-mapbuilder/actions)\n\n# GFW Map Builder ArcGIS Online Template\n\n> Template for the GFW Map Builder that will be available through ArcGIS Online.\n\n### Getting Started\n\nBefore you can begin, make sure you have [node.js](https://nodejs.org/en/).\n\n### Env variables\n\nCreate .env file at the root of the project and add `REACT_APP_PLANET_API_KEY` checkout `.env.examples` file.\nReach out to point of contact for Mapbuilder and ask for api key\n\nInstall all the javascript dependencies.\n\n```shell\nnpm install\n```\n\nStart the server and then the app will be served at [http://localhost:3000](http://localhost:3000).\n\n```shell\nnpm start\n```\n\n### Generating a build\n\n> You will need node.js installed for these steps\n\nRun the following command to generate a build to the `webpackBuild` directory.\n\n```shell\nnpm run build\n```\n\n### Configuring\n\nThis application has a general 
([`resources.js`](https://github.com/wri/gfw-mapbuilder/blob/develop/configs/resources.js)). file that contains things controlled by the developers. Also, the Resources file contains configurations that are controlled via ArcGIS Online or whomever may be deploying the application. You can control things like the layers in the accordion, their source urls, their order on the map and in the UI, service urls (print, geometry, map, etc.), which layers to include in the analysis, and even the configurations for slope analysis and other aspects of the analysis. Anything that needs to be controlled from ArcGIS Online or the person deploying it, should be placed in `resources.js`.\n\nTo ensure that your `resources.js` has a valid configuration run the following command\n\n```shell\nnpm run test\n```\n\nThese [Jest](https://jestjs.io/) unit tests will ensure that you have correctly configured any properties that are required in the `layerPanel` and `analysisModules` sections.\n\n#### Configuring Layers and Accordions\n\nThe layers and the accordion are now more easily configurable via the `resources.js` file. Layers that you want to appear on the map but not in the accordion should be placed under `extraLayers`. The configuration structure is as follows:\n\n```javascript\nGROUP_LCD: {\n order: 1,\n label: {\n en: \'Land Cover Dynamics\',\n fr: \'Evolution de l\\\'occupation des sols\',\n es: \'Din\xc3\xa1mica de la Cobertura del Suelo\',\n pt: \'Land Cover Dynamics\',\n id: \'Land Cover Dynamics\',\n zh: \'\xe5\x9c\x9f\xe5\x9c\xb0\xe8\xa6\x86\xe7\x9b\x96\xe5\x8a\xa8\xe6\x80\x81\xe6\x95\xb0\xe6\x8d\xae\'\n },\n layers: [{\n order: 1,\n id: \'TREE_COVER_LOSS\',\n type: \'image\',\n url: \'https://gis-treecover.wri.org/arcgis/rest/services/ForestCover_lossyear_density/ImageServer\',\n technicalName: \'tree_cover_loss\',\n legendLayer: 0,\n colormap: [[1, 219, 101, 152]],\n inputRange: [1, 15],\n outputRange: [1],\n label: {\n en: \'Tree cover loss\',\n fr: \'Perte en couvert arbor\xc3\xa9\',\n es: \'P\xc3\xa9rdida de la cobertura arb\xc3\xb3rea\',\n pt: \'Tree cover loss\',\n id: \'Tree cover loss\',\n zh: \'\xe6\xa3\xae\xe6\x9e\x97\xe8\xa6\x86\xe7\x9b\x96\xe6\x8d\x9f\xe5\xa4\xb1\'\n },\n sublabel: {\n en: \'(annual, 30m, global, Hansen/UMD/Google/USGS/NASA)\',\n fr: \'(annuel, 30m, global, Hansen/UMD/Google/USGS/NASA)\',\n es: \'(anual, 30m, global, Hansen/UMD/Google/USGS/NASA)\',\n pt: \'(annual, 30m, global, Hansen/UMD/Google/USGS/NASA)\',\n id: \'(annual, 30m, global, Hansen/UMD/Google/USGS/NASA)\',\n zh: \'(\xe6\xaf\x8f\xe5\xb9\xb4\xe6\x9b\xb4\xe6\x96\xb0, 30\xe7\xb1\xb3, \xe5\x85\xa8\xe7\x90\x83\xe8\xa6\x86\xe7\x9b\x96, \xe6\xb1\x89\xe6\xa3\xae/\xe9\xa9\xac\xe9\x87\x8c\xe5\x85\xb0\xe5\xa4\xa7\xe5\xad\xa6/\xe8\xb0\xb7\xe6\xad\x8c/\xe7\xbe\x8e\xe5\x9b\xbd\xe5\x9c\xb0\xe8\xb4\xa8\xe6\xb5\x8b\xe9\x87\x8f\xe5\xb1\x80(USGS)/\xe7\xbe\x8e\xe5\x9b\xbd\xe5\xae\x87\xe8\x88\xaa\xe5\xb1\x80(NASA))\'\n }\n }]\n}\n```\n\nProperties for the groups and layers are described in detail in the resources file, but here is a brief description of what you see above as well:\n\n- `GROUP_LCD` - Unique key to contain all the properties for the group, this is an accordion section in the layer panel\n - `order` - Order that the group will appear in the UI and the order in which it\'s layers will appear on the map. An `order` of 1 will be above an `order` of 2 in the UI and the map. 
**MINIMUM** is 1, value of 0 may result in layers being placed under the basemap.\n - `label` - Object containing keys for various languages, this is the label in the UI for the accordion section.\n - `layers` - an array of layers that will appear in this accordion. Some layers have custom configurations and some support different options for different types of layers.\n - `order` - order of the layer in the accordion and on the the map. This order is relative to this section. Layers more or less will be stacked similar to how they appear in the UI with the exception of feature/graphics layers as they always go on top. In the below example, layer A will be on top even though it has a higher order because the group it belongs to has a lower order, meaning the group and the layer will appear first:\n - Group 1 - order 1\n - Layer A - order 5\n - Group 2 - order 2\n - Layer B - order 1\n - `id` - Unique ID for the layer, this must be unique across the whole app, not just the group\n - `type` - Type of layer. Currently `tiled`, `webtiled`, `image`, `dynamic`, `feature`, `graphic`, `glad`, and `terra` are supported types.\n - `visible` - default layer visibility. Default value if not supplied is false.\n - `url` - required for all layers except graphics layers.\n - `technicalName` - key for this layer to retrieve metadata about it from the GFW metadata API\n - `legendLayer` - If this layer has no legend or a bad legend, and has an alternative one available here, `http://gis-gfw.wri.org/arcgis/rest/services/legends/MapServer`, you can provide the layer id of it\'s legend here so the app can pull that legend in.\n - `layerIds` - An array of layer ids for dynamic layers, should look like this: `[0, 1, 2, 3]` or `[1]`.\n - `label` - An object of keys representing various languages, this is the label that shows in the UI\n - `sublabel` - An object of keys representing various languages, this is the sublabel that shows in the UI\n - `popup` - See below for more explanation and an example of how to use this\n\n#### Adding Additional Groups\n\nWe are now supporting the ability to add additional group accordions to the layer panel. To add a new group, simply add another entry into the layerPanel object (described above in the [\'Configuring\' section](#configuring)). Below is an example group that you can copy and paste into the layerPanel object and edit to the configuration that you need. Follow any instructions/suggestions in the commented lines (preceded by `//`), then be sure to delete any commented lines before you save. Any properties that are commented out are optional, you may safely delete those if they are not needed for your group (exceptions will be noted below).\n\n```javascript\n// Change the group name to something descriptive and unique. It should be all caps with words separated by underscores.\nGROUP_NAME: {\n // Properties must not be duplicated. One groupType is required. Choose one and uncomment it, then delete the others.\n // groupType: \'checkbox\',\n // groupType: \'radio\',\n // groupType: \'nested\',\n\n // Edit the order of this group and the other groups. 
This determines the order they appear in the layer panel.\n order: 1,\n label: {\n // Edit the group label, this can be anything you want it to be\n en: \'Group Label\',\n // Optionally add labels for additional languages (see the section on Strings and Translations below).\n // fr: \'Label for French Language\'\n },\n layers: [\n // Uncomment the layer item under the corresponding groupType that you selected earlier, then duplicate for any additional layers in this group.\n\n // CHECKBOX\n // {\n // Required - the layer id generated from your AGOL webmap\n // id: \'layer_id_1234\',\n\n // Required - the order that you would like this layer to appear within the group accordion section (1 will appear ABOVE 2)\n // order: 1,\n\n // Optional - sublabel for the layer\n // sublabel: {\n // en: \'Layer sublabel\',\n // fr: \'Sublabel for French Language\'\n // }\n // }\n\n // RADIO\n // {\n // Required - the layer id generated from your AGOL webmap\n // id: \'layer_id_1234\',\n\n // Required - the order that you would like this layer to appear within the group accordion section\n // order: 1,\n\n // If this is a MapServiceLayer you must include the following property. This lets the application know which sublayers you would like included in this group.\n // includedSublayers: [0, 1, 2, 3],\n\n // Optional - the sublabel for the layer.\n // sublabel: {\n // en: \'Layer Sublabel\',\n // fr: \'Sublabel for French Language\'\n // }\n // Note: If this is a MapServiceLayer, the sublayer that the sublabel belongs to must be specified.\n // sublabel: {\n // 0: {\n // en: \'Sublayer 0 Sublabel\',\n // fr: \'Sublayer 0 Sublabel for French Language\'\n // },\n // 1: {\n // en: \'Sublayer 1 Sublabel\',\n // fr: \'Sublayer 1 Sublabel for French Language\'\n // }\n // }\n // }\n\n // NESTED\n // {\n // Required - the order that you would like this layer grouping to appear within the group accordion section\n // order: 1,\n\n // Required - the label of the nested layer grouping\n // label: {\n // en: \'Nested grouping label\',\n // fr: \'Nested grouping label for French Language\'\n // },\n\n // Required - the layers that will appear in this grouping\n // nestedLayers: [\n // {\n // Required - the layer id generated from your AGOL webmap\n // id: \'layer_id_1234\',\n\n // Required - the order that you would like this layer to appear within the nested grouping\n // order: 1,\n\n // Optional - sublabel for the layer\n // sublabel: {\n // en: \'Layer sublabel\',\n // fr: \'Sublabel for French Language\'\n // }\n // }\n // ]\n // }\n ]\n},\n```\n\n#### Configuring Popups for layers not in Webmaps\n\nThis is currently only supported for dynamic layers and feature layers. A popup configuration has some elements it must contain to keep the styling looking appropriate and they are outlined below. 
Here is an example layer configuration that contains a popup configuration (NOTE the addition of `popup` at the bottom):\n\n```javascript\norder: 6,\nid: \'ACTIVE_FIRES\',\ntype: \'dynamic\',\nurl: \'http://gis-potico.wri.org/arcgis/rest/services/Fires/Global_Fires/MapServer\',\ntechnicalName: \'noaa18_fires\',\nlayerIds: [0, 1, 2, 3],\nlabel: {\n ...\n},\nsublabel: {\n ...\n},\npopup: {\n title: {\n en: \'Active Fires\'\n },\n content: {\n en: [\n {\'label\': \'Brightness\', \'fieldExpression\': \'BRIGHTNESS\'},\n {\'label\': \'Confidence\', \'fieldExpression\': \'CONFIDENCE\'},\n {\'label\': \'Latitude\', \'fieldExpression\': \'LATITUDE\'},\n {\'label\': \'Longitude\', \'fieldExpression\': \'LONGITUDE\'},\n {\'label\': \'Acquisition Date\', \'fieldExpression\': \'ACQ_DATE:DateString(hideTime:true)\'},\n {\'label\': \'Acquisition Time\', \'fieldExpression\': \'ACQ_TIME\'}\n ]\n }\n}\n```\n\nThis way you can add more languages and also use modifiers on fields. `fieldExpression` get\'s used in the same manner the JSAPI uses fields for popup content, in a string like so: \'\\${BRIGHTNESS}\'. This is why we can use modifiers like `ACQ_DATE:DateString(hideTime:true)`. You can see a list of available modifiers here: [Format info window content](https://developers.arcgis.com/javascript/3/jshelp/intro_formatinfowindow.html)\n\n### Strings\n\nThis portion refers to how a developer could add some new strings, if you are looking at adding translations, see [Translations](#translations) below. The convention to add new strings to the application is to add them in each language, in `src/js/languages.js`. The name should be all uppercase separated by an underscore. For example, a link in the navigation bar for the word about would be added four times, once for each supported language in their appropriate section, like so:\n\n```javascript\nstrings.en.NAV_ABOUT = \'About\';\n...\nstrings.fr.NAV_ABOUT = \'About\';\n...\nstrings.es.NAV_ABOUT = \'About\';\n...\nstrings.pt.NAV_ABOUT = \'About\';\n```\n\nThen in your components, or any other part of the code, simply import the languages module, get the current language from React\'s context(or pass it out from a component if needs be).\n\n```javascript\nimport text from \'js/languages\';\n\nexport default class MyComponent extends Component {\n static contextTypes = {\n language: PropTypes.string.isRequired,\n };\n\n render() {\n const { language } = this.context;\n\n return
<div>{text[language].NAV_ABOUT}</div>
;\n }\n}\n```\n\n### Translations\n\nIf you are adding or fixing translations. The strings used in the application can be found in two locations. The majority of them will be in the `src/js/languages.js` file. They are prefixed by the two digit country code. Add the appropriate translation in the correct language section. You may see something like this:\n\n```javascript\nstrings.en.DATA = \'Data\'; //English\n...\nstrings.fr.DATA = \'Data\'; // French\n...\nstrings.es.DATA = \'Data\'; // Spanish\n...\nstrings.pt.DATA = \'Data\'; // Portuguese\n```\n\nThe other location is the `src/js/resources.js` file. There are `layers` and `basemaps` each with subsections for each of the four languages. In each subsection is an array or objects containing the layer configuration. Be careful what you change in here, the only three things related to labels are `label`, `sublabel`, and `group`. The `group` refers to the name on the accordion, it needs to be the same as the other layers in the same group (they are linked by a `groupKey`).\n\n### Deployment\n\nBackup 1.5.0 folder\n\n`aws s3 sync s3://wri-sites/gfw-mapbuilder.org/library.gfw-mapbuilder.org/1.5.0/ /Users/dstarr/Desktop/MapbuilderBackups/04212022/ --profile wri`\n\nCopy dist folder into 1.5.0 aws folder\n\n`aws s3 sync --content-type ""text/html"" /Users/dstarr/Documents/dev/gfw-mapbuilder/dist/ s3://wri-sites/gfw-mapbuilder.org/library.gfw-mapbuilder.org/1.5.0/ --profile wri`\n\nCopy dist > 1.5.0.js file in dist folder into 1.5.0.js file in aws folder\n\n`aws s3 cp --content-type ""text/html"" /Users/dstarr/Documents/dev/gfw-mapbuilder/dist/loader/1.5.0.js s3://wri-sites/gfw-mapbuilder.org/library.gfw-mapbuilder.org/1.5.0/1.5.0.js --profile wri`\n\nClear cache\n\n`aws cloudfront create-invalidation --distribution-id E58RE0T7L0R9N --path ""/"" --profile wri`\n\n`aws cloudfront create-invalidation --distribution-id E2B81LN86UDRTJ --path ""/"" --profile wri`\n'",,"2016/03/04, 20:08:54",2791,MIT,205,4960,"2023/10/19, 16:17:53",42,869,1325,49,6,2,1.2,0.7307445442875482,"2021/03/11, 19:15:31",v1.5.1,0,19,false,,false,true,,,https://github.com/wri,https://wri.org,"Washington, DC",,,https://avatars.githubusercontent.com/u/4615146?v=4,,, lidR,An R package for airborne LiDAR data manipulation and visualization for forestry application.,Jean-Romain,https://github.com/r-lidar/lidR.git,github,"point-cloud,lidar,las,laz,r,als,forestry,remote-sensing",Forest Observation and Management,"2023/09/07, 08:02:58",501,0,90,true,R,R lidar,r-lidar,"R,C++,C",https://CRAN.R-project.org/package=lidR,"b'\nlidR \n======================================================================================================\n![license](https://img.shields.io/badge/Licence-GPL--3-blue.svg) \n[![R build status](https://github.com/r-lidar/lidR/workflows/R-CMD-check/badge.svg)](https://github.com/r-lidar/lidR/actions)\n[![Codecov test coverage](https://codecov.io/gh/r-lidar/lidR/branch/master/graph/badge.svg)](https://app.codecov.io/gh/r-lidar/lidR?branch=master)\n\nR package for Airborne LiDAR Data Manipulation and Visualization for Forestry Applications\n\nThe lidR package provides functions to read and write `.las` and `.laz` files, plot point clouds, compute metrics using an area-based approach, compute digital canopy models, thin LiDAR data, manage a collection of LAS/LAZ files, automatically extract ground inventories, process a collection of tiles using multicore processing, segment individual trees, classify points from geographic data, and provides other tools to manipulate 
LiDAR data in a research and development context.\n\n:book: Read [the book](https://r-lidar.github.io/lidRbook/index.html) to get started with the lidR package. See changelogs on [NEW.md](https://github.com/r-lidar/lidR/blob/master/NEWS.md)\n\nTo cite the package use `citation()` from within R:\n\n```r\ncitation(""lidR"")\n#> Roussel, J.R., Auty, D., Coops, N. C., Tompalski, P., Goodbody, T. R. H., S\xc3\xa1nchez Meador, A., Bourdon, J.F., De Boissieu, F., Achim, A. (2021). lidR : An R package for analysis of Airborne Laser Scanning (ALS) data. Remote Sensing of Environment, 251 (August), 112061. .\n#> Jean-Romain Roussel and David Auty (2023). Airborne LiDAR Data Manipulation and Visualization for Forestry Applications. R package version 3.1.0. https://cran.r-project.org/package=lidR\n``` \n\n# Key features\n\n\n\n### Read and display a las file\n\nIn R-fashion style the function `plot`, based on `rgl`, enables the user to display, rotate and zoom a point cloud. Because `rgl` has limited capabilities with respect to large datasets, we also made a package [lidRviewer](https://github.com/Jean-Romain/lidRviewer) with better display capabilities.\n\n```r\nlas <- readLAS("""")\nplot(las)\n```\n\n### Compute a canopy height model\n\n\n\n`lidR` has several algorithms from the literature to compute canopy height models either point-to-raster based or triangulation based. This allows testing and comparison of some methods that rely on a CHM, such as individual tree segmentation or the computation of a canopy roughness index.\n\n```r\nlas <- readLAS("""")\n\n# Khosravipour et al. pitfree algorithm\nthr <- c(0,2,5,10,15)\nedg <- c(0, 1.5)\nchm <- rasterize_canopy(las, 1, pitfree(thr, edg))\n\nplot(chm)\n```\n\n### Read and display a catalog of las files\n\n\n\n`lidR` enables the user to manage, use and process a collection of `las` files. The function `readLAScatalog` builds a `LAScatalog` object from a folder. The function `plot` displays this collection on an interactive map using the `mapview` package (if installed).\n\n```r\nctg <- readLAScatalog("""")\nplot(ctg, map = TRUE)\n```\n\nFrom a `LAScatalog` object the user can (for example) extract some regions of interest (ROI) with `clip_roi()`. Using a catalog for the extraction of the ROI guarantees fast and memory-efficient clipping. `LAScatalog` objects allow many other manipulations that can be done with multicore processing.\n\n### Individual tree segmentation\n\n\n\nThe `segment_trees()` function has several algorithms from the literature for individual tree segmentation, based either on the digital canopy model or on the point-cloud. Each algorithm has been coded from the source article to be as close as possible to what was written in the peer-reviewed papers. Our goal is to make published algorithms usable, testable and comparable.\n\n```r\nlas <- readLAS("""")\n\nlas <- segment_trees(las, li2012())\ncol <- random.colors(200)\nplot(las, color = ""treeID"", colorPalette = col)\n```\n\n### Wall-to-wall dataset processing\n\n\n\nMost of the lidR functions can seamlessly process a set of tiles and return a continuous output. Users can create their own methods using the `LAScatalog` processing engine via the `catalog_apply()` function. 
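To make the tile-engine idea concrete, here is a minimal NumPy/SciPy sketch (not lidR's actual implementation) of the buffer-and-crop pattern such an engine applies, so that per-tile neighbourhood operations match a single wall-to-wall run:

```python
# Buffer-and-crop tiling sketch (illustrative, not lidR): each tile is
# processed together with `buffer` extra cells of context, and only the
# un-buffered core is written back, so no seams appear at tile borders.
import numpy as np
from scipy.ndimage import uniform_filter

def process_in_tiles(raster, tile=256, buffer=16):
    out = np.empty_like(raster)
    rows, cols = raster.shape
    for i in range(0, rows, tile):
        for j in range(0, cols, tile):
            i0, j0 = max(i - buffer, 0), max(j - buffer, 0)
            i1 = min(i + tile + buffer, rows)
            j1 = min(j + tile + buffer, cols)
            # filter radius equals the buffer width, so the core is exact
            chunk = uniform_filter(raster[i0:i1, j0:j1], size=2 * buffer + 1)
            out[i:i + tile, j:j + tile] = chunk[i - i0:i - i0 + tile,
                                                j - j0:j - j0 + tile]
    return out

chm = np.random.rand(1000, 1000)  # stand-in for a wall-to-wall raster
assert np.allclose(process_in_tiles(chm),
                   uniform_filter(chm, size=33))  # 2 * 16 + 1
```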
Among other features the engine takes advantage of point indexation with lax files, takes care of processing tiles with a buffer and allows for processing big files that do not fit in memory.\n\n```r\n# Load a LAScatalog instead of a LAS file\nctg <- readLAScatalog("""")\n\n# Process it like a LAS file\nchm <- rasterize_canopy(ctg, 2, p2r())\ncol <- random.colors(50)\nplot(chm, col = col)\n```\n\n### Full waveform\n\n\n\nlidR can read full waveform data from LAS files and provides interpreter functions to convert the raw data into something easier to manage and display in R. The support of FWF is still in the early stages of development.\n\n```r\nfwf <- readLAS("""")\n\n# Interpret the waveform into something easier to manage\nlas <- interpret_waveform(fwf)\n\n# Display discrete points and waveforms\nx <- plot(fwf, colorPalette = ""red"", bg = ""white"")\nplot(las, color = ""Amplitude"", add = x)\n```\n\n# About\n\n**lidR** is developed openly at [Laval University](https://www.ulaval.ca/en).\n\n* Development of the `lidR` package between 2015 and 2018 was made possible thanks to the financial support of the [AWARE project (NSERC CRDPJ 462973-14)](https://awareproject.ca/); grantee [Prof Nicholas Coops](https://forestry.ubc.ca/faculty-profile/nicholas-coops/).\n* Development of the `lidR` package between 2018 and 2021 was made possible thanks to the financial support of the [Minist\xc3\xa8re des For\xc3\xaats, de la Faune et des Parcs of Qu\xc3\xa9bec](https://www.quebec.ca/gouvernement/ministere/forets-faune-parcs).\n\n\n\n# Install `lidR` dependencies on GNU/Linux\n\n```\n# Ubuntu\nsudo add-apt-repository ppa:ubuntugis/ubuntugis-unstable\nsudo apt-get update\nsudo apt-get install libgdal-dev libgeos++-dev libudunits2-dev libproj-dev libx11-dev libgl1-mesa-dev libglu1-mesa-dev libfreetype6-dev libxt-dev libfftw3-dev\n\n# Fedora\nsudo dnf install gdal-devel geos-devel udunits2-devel proj-devel mesa-libGL-devel mesa-libGLU-devel freetype-devel libjpeg-turbo-devel\n```\n\n\n \n'",,"2016/02/17, 11:47:38",2807,GPL-3.0,41,2878,"2023/10/24, 12:49:03",5,55,582,57,1,1,0.1,0.041549011698265415,"2023/03/16, 13:06:27",v4.0.3,0,14,false,,false,false,,,https://github.com/r-lidar,https://github.com/r-lidar,,,,https://avatars.githubusercontent.com/u/93974705?v=4,,, Digital Forestry Toolbox,A collection of digital forestry tools for Matlab/Octave.,mparkan,https://github.com/mparkan/Digital-Forestry-Toolbox.git,github,"digital-forestry-toolbox,forest,matlab,lidar,remote-sensing,laser,forestry,asprs,point-cloud,octave,laser-scanning,vegetation",Forest Observation and Management,"2021/09/30, 17:50:32",43,0,1,false,MATLAB,,,MATLAB,http://mparkan.github.io/Digital-Forestry-Toolbox/,"b'# Digital-Forestry-Toolbox\nThe Digital Forestry Toolbox (DFT) is collection of tools and tutorials for Matlab/Octave designed to help process and analyze remote sensing data related to forests.\n\n# Documentation\n\nPlease check the [Digital-Forestry-Toolbox website](http://mparkan.github.io/Digital-Forestry-Toolbox/). 
\n\n## License\n\n* Unless otherwise stated in the file, the code is licensed under [GNU GPL V3](http://www.gnu.org/licenses/licenses.en.html).\n* The airborne laser scanning datasets are courtesy of the States of [Solothurn](https://geo.so.ch/map/?bl=hintergrundkarte_sw&st=lidar), [Z\xc3\xbcrich](https://maps.zh.ch/?topic=LidarZH) and [Geneva](https://ge.ch/sitg/donnees) (Switzerland).\n\n## Reference\n\nThe Digital Forestry Toolbox was developed by Matthew Parkan ([GIS Research Laboratory, EPFL](https://lasig.epfl.ch/)) with support from the Swiss Forest and Wood Research Fund (project 2013.18).\n\nIf you use this code in your work, please consider including the following citation:\n
\nMatthew Parkan. (2018). Digital Forestry Toolbox for Matlab/Octave. DOI: 10.5281/zenodo.1213013. Available: http://mparkan.github.io/Digital-Forestry-Toolbox/.\n
\n\n[![DOI](https://zenodo.org/badge/58712667.svg)](https://zenodo.org/badge/latestdoi/58712667)\n'",",https://zenodo.org/badge/latestdoi/58712667","2016/05/13, 07:30:11",2721,GPL-3.0,0,291,"2021/06/21, 15:25:49",1,1,13,0,856,0,0.0,0.0,"2018/05/25, 12:04:16",v.1.0.2,0,1,false,,false,false,,,,,,,,,,, pyfor,Tools for analyzing aerial point clouds of forest data.,brycefrank,https://github.com/brycefrank/pyfor.git,github,"lidar,forestry,las,forest-inventory",Forest Observation and Management,"2019/12/01, 18:33:46",86,0,10,false,Python,,,Python,,"b'

# pyfor\n\nDocumentation | Changelog | Request a Feature | Road Map\n

\n\n**pyfor** is a Python package that assists in the processing of point cloud data in the context of forest inventory. \nThis includes manipulation of point data, support for analysis, and a\nmemory optimized API for managing large collections of tiles.\n\n## Release Status\n\nCurrent Release: 0.3.6\n\nRelease Date: December 1st, 2019.\n\nRelease Status: 0.3.6 is an adolescent LiDAR data processing package adequate for single tile processing and large acqusitions.\n\n## What Does pyfor Do?\n\n- [Normalization](http://brycefrank.com/pyfor/html/topics/normalization.html)\n- [Canopy Height Models](http://brycefrank.com/pyfor/html/topics/canopyheightmodel.html)\n- [Ground Filtering](http://brycefrank.com/pyfor/html/api/pyfor.ground_filter.html)\n- [Clipping](http://brycefrank.com/pyfor/html/topics/clipping.html)\n- [Large Acquisition Processing](http://brycefrank.com/pyfor/html/advanced/handlinglargeacquisitions.html)\n\nand many other tasks. See the [documentation](http://brycefrank.com/pyfor) for examples and applications.\n\nWhat about tree segmentation? Please see pyfor\'s sister package [`treeseg`](https://github.com/brycefrank/treeseg) which\nis a standalone package for tree segmentation and detection.\n\n## Installation\n\n[miniconda](https://conda.io/miniconda.html) or Anaconda is required for your system before beginning. pyfor depends on many packages that are otherwise tricky and difficult to install (especially gdal and its bindings), and conda provides a quick and easy way to manage many different Python environments on your system simultaneously.\n\nAs of October 14th, 2019, we are proud to announce that `pyfor` is available on `conda-forge`, greatly simplifying the installation process:\n\n```\nconda install -c conda-forge pyfor \n```\n\n## Collaboration & Requests\n\nIf you would like to contribute, especially those experienced with `numba`, `numpy`, `gdal`, `ogr` and `pandas`, please contact me at bfrank70@gmail.com \n\nI am also willing to implement features on request. Feel free to [open an issue](https://github.com/brycefrank/pyfor/issues) with your request or email me at the address above.\n\npyfor will always remain a free service. Its development takes time, energy and a bit of money to maintain source code and host documentation. 
If you are so inclined, donations are accepted at the donation button at the top of the readme.\n\n'",,"2016/12/07, 20:31:36",2513,MIT,0,588,"2019/12/30, 17:38:43",10,22,69,0,1395,0,0.0,0.011857707509881465,"2019/12/01, 19:27:54",0.3.6,0,3,false,,false,false,,,,,,,,,,, DeepForest,Python Package for Tree Crown Detection in Airborne RGB imagery.,weecology,https://github.com/weecology/DeepForest.git,github,,Forest Observation and Management,"2023/10/23, 21:28:25",387,46,82,true,Python,Weecology,weecology,Python,https://deepforest.readthedocs.io/,"b'# DeepForest\n\n[![Github Actions](https://github.com/weecology/DeepForest/actions/workflows/Conda-app.yml/badge.svg)](https://github.com/weecology/DeepForest/actions/workflows/Conda-app.yml)\n[![Documentation Status](https://readthedocs.org/projects/deepforest/badge/?version=latest)](http://deepforest.readthedocs.io/en/latest/?badge=latest)\n[![Version](https://img.shields.io/pypi/v/DeepForest.svg)](https://pypi.python.org/pypi/DeepForest)\n[![DOI](https://zenodo.org/badge/DOI/10.5281/zenodo.2538143.svg)](https://doi.org/10.5281/zenodo.2538143)\n\n\n### Conda-forge build status\n\n| Name | Downloads | Version | Platforms |\n| --- | --- | --- | --- |\n| [![Conda Recipe](https://img.shields.io/badge/recipe-deepforest-green.svg)](https://anaconda.org/conda-forge/deepforest) | [![Conda Downloads](https://img.shields.io/conda/dn/conda-forge/deepforest.svg)](https://anaconda.org/conda-forge/deepforest) | [![Conda Version](https://img.shields.io/conda/vn/conda-forge/deepforest.svg)](https://anaconda.org/conda-forge/deepforest) | [![Conda Platforms](https://img.shields.io/conda/pn/conda-forge/deepforest.svg)](https://anaconda.org/conda-forge/deepforest) |\n\n![](www/MEE_Figure4.png)\n![](www/example_predictions_small.png)\n\n# What is DeepForest?\n\nDeepForest is a python package for training and predicting ecological objects in airborne imagery. DeepForest currently comes with a tree crown object detection model and a bird detection model. Both are single class modules that can be extended to species classification based on new data. Users can extend these models by annotating and training custom models.\n\n![](../www/image.png)\n\n# Documentation\n\n[DeepForest is documented on readthedocs](https://deepforest.readthedocs.io/)\n\n## How does deepforest work?\nDeepForest uses deep learning object detection networks to predict bounding boxes corresponding to individual trees in RGB imagery. \nDeepForest is built on the object detection module from the [torchvision package](http://pytorch.org/vision/stable/index.html) and designed to make training models for detection simpler.\n\nFor more about the motivation behind DeepForest, see some recent talks we have given on computer vision for ecology and practical applications to machine learning in environmental monitoring.\n\n## Where can I get help, learn from others, and report bugs?\nGiven the enormous array of forest types and image acquisition environments, it is unlikely that your image will be perfectly predicted by a prebuilt model. Below are some tips and some general guidelines to improve predictions.\n\nGet suggestions on how to improve a model by using the [discussion board](https://github.com/weecology/DeepForest/discussions). Please be aware that only feature requests or bug reports should be posted on the [issues page](https://github.com/weecology/DeepForest/issues).\n\n# Developer Guidelines \n\nWe welcome pull requests for any issue or extension of the models. 
Please follow the [developers guide](https://deepforest.readthedocs.io/en/latest/developer.html).\n\n## License\n\nFree software: [MIT license](https://github.com/weecology/DeepForest/blob/master/LICENSE)\n\n## Why DeepForest?\n\nRemote sensing can transform the speed, scale, and cost of biodiversity and forestry surveys. Data acquisition currently outpaces the ability to identify individual organisms in high-resolution imagery. Individual crown delineation has been a long-standing challenge in remote sensing, and available algorithms produce mixed results. DeepForest is the first open-source implementation of a deep learning model for crown detection. Deep learning has made enormous strides in a range of computer vision tasks but requires significant amounts of training data. By including a trained model, we hope to simplify the process of retraining deep learning models for a range of forests, sensors, and spatial resolutions.\n\n## Citation\n\n[Weinstein, B.G.; Marconi, S.; Bohlman, S.; Zare, A.; White, E. Individual Tree-Crown Detection in RGB Imagery Using Semi-Supervised Deep Learning Neural Networks.\nRemote Sens. 2019, 11, 1309](https://www.mdpi.com/2072-4292/11/11/1309)\n'",",https://doi.org/10.5281/zenodo.2538143","2018/03/07, 20:22:58",2058,MIT,83,629,"2023/10/23, 21:28:27",49,174,470,167,2,3,1.2,0.574468085106383,"2021/06/06, 03:46:06",1.0.0,6,12,false,,false,true,"ahbarhusain/Vegetation-measurement-along-line-corridors-using-satellite-imagery,weecology/neonwranglerpy,Candy-CY/Hyperspectral-Image-Classification-Models,0xemc/carbonmap-services,clubmicrolab/TreePassport,LUP-LuftbildUmweltPlanung/RetinaNet,AdiNarendra98/AI-for-Environment,MarconiS/GEOtreehealth,gulyutin/see-tree,sstopkin/deep-forest-demo,KevinSantiago123/tree-identification,PhillRob/DeepForest-Experiments,Open-Data-Hackathon-Konstanz/stadtbaume,hakanonal/treecounter,gianlucaasti/Computer_Vision,rowanoaktree/drones4ducks-prototyping,charliedream1/ai_quant_trade,RivkinMikhail/Tech_Challenge,aguirrejuan/CountTrees,johanneshagspiel/planet-painter,derwells/crowNNs,YannFrancois/oneforest,DeadmanIQ445/rosrest_synced,DrRoad/DeepForest_JointPrediction,weecology/NeonSpeciesBenchmark,ai-in-actiune/tree-counting-and-classification-in-images,brandonTheProgram/MNIST-Classification,ecoulson/CS153project,FelipeSBarros/DeepForestSanJuan,rociopey/landslidedetectionsquad,CharlesAuthier/tree-detection,weecology/DeepTreeAttention,Bright-Sheep/Hack4Nature,ZXX8888/test,csw522110/cswcode,Kaslanarian/Kaggle-TPS-June,jonaselmesten/forest-classifier,blidiaahmed/Hack4Nature,thegeekywanderer/DeepForest-Counting-Trees,chhatrapalsinhzala/technostack,climate-ai/oneforest,gelarpambudi/deepforest-api,ew318/Tree_Species_Recognition,zushicat/tree-crown-detection,SinghKislay/data-viz,weecology/DeepForest_demos",,https://github.com/weecology,http://weecology.org,,,,https://avatars.githubusercontent.com/u/1156696?v=4,,, NeonTreeEvaluation,"Benchmark dataset for tree detection for airborne RGB, Hyperspectral and LIDAR imagery.",weecology,https://github.com/weecology/NeonTreeEvaluation.git,github,,Forest Observation and Management,"2022/01/28, 21:35:58",104,0,28,true,Python,Weecology,weecology,"Python,R,Shell",,"b'# A multi-sensor benchmark dataset for detecting individual trees in airborne RGB, Hyperspectral and LIDAR point clouds\n\nIndividual tree detection is a central task in forestry and ecology. Few papers analyze proposed methods across a wide geographic area. 
The NeonTreeEvaluation dataset is a set of bounding boxes drawn on RGB imagery for 22 sites in the National Ecological Observation Network (NEON). Each site covers a different forest type(e.g. [TEAK](https://www.neonscience.org/field-sites/field-sites-map/TEAK)). This dataset is the first to have consistant annotations across a variety of ecosystems for co-registered RGB, LiDAR and hyperspectral imagery. In total this repo holds 30975 Tree annotations.\n\nA static version of this benchmark can be downloaded from zenodo.\nhttps://zenodo.org/record/5914554#.YfRhcPXMKHE\n\nEvaluation images are included in this repo under /evaluation folder.\nAnnotation files (.xml) are included in this repo under /annotations/\n\n*Note: Not all plots in the evaluation folder contain annotations, many are unannotated to support future growth of the benchmark.*\n\nMantainer: Ben Weinstein - University of Florida.\n\n# How do I evaluate against the benchmark?\n\nWe have built an R package for easy evaluation and interacting with the benchmark evaluation data.\n\nhttps://github.com/weecology/NeonTreeEvaluation_package\n\n# How were images annotated?\n\nEach visible tree was annotated to create a bounding box that encompassed all portions of the vertical object. Fallen trees were not annotated. Visible standing stags were annotated. \n\n\n\nFor the point cloud annotations, the two dimensional bounding boxes were [draped](https://github.com/weecology/DeepLidar/blob/b3449f6bd4d0e00c24624ff82da5cfc0a018afc5/DeepForest/postprocessing.py#L13) over the point cloud, and all non-ground points (height < 2m) were excluded. Minor cosmetic cleanup was performed to include missing points. In general, the point cloud annotations should be seen as less thoroughly cleaned, given the tens of thousands of potential points in each image.\n\n# RGB\n\n```R\nlibrary(raster)\nlibrary(NeonTreeEvaluation)\n\n#Read RGB image as projected raster\nrgb_path<-get_data(plot_name = ""SJER_059_2018"",type=""rgb"")\nrgb<-stack(rgb_path)\n\n#Path to annotations dataset\nannotation_path <- get_data(""SJER_059_2018"",type=""annotations"")\nannotations <- xml_parse(annotation_path)\n\n#View one plot\'s annotations as polygons, project into UTM\n#copy project utm zone (epsg), xml has no native projection metadata\nboxes<-boxes_to_spatial_polygons(annotations, rgb)\n\nplotRGB(rgb)\nplot(boxes,add=T, col=NA, border=""red"")\n```\n\n\n\n# Lidar\n\nTo access the draped lidar hand annotations, use the ""label"" column. Each tree has a unique integer.\n\n```R\nlibrary(lidR)\npath<-get_data(""TEAK_052_2018"",type=""lidar"")\nr<-readLAS(path)\ntrees<-lasfilter(r,!label==0)\nplot(trees,color=""label"")\n```\n\n\n\nThe same is true for the training tiles (see below)\n\n\n\nWe elected to keep all points, regardless of whether they correspond to tree annotation. Non-tree points have value 0. We recommend removing these points before evaluating the point cloud. 
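For Python users, the same filtering step can be sketched with `laspy` (a hedged equivalent of the R snippet above; the file path is hypothetical and `label` is the extra per-tree dimension described in this section):

```python
# Drop non-tree points (label == 0) before evaluation, mirroring the R
# example above. The path is hypothetical; `label` is the per-tree integer
# dimension draped onto the point cloud.
import laspy

las = laspy.read("TEAK_052_2018.las")
las.points = las.points[las["label"] != 0]  # keep annotated tree points only
las.write("TEAK_052_2018_trees_only.las")
```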
Since the annotations were made in the RGB and then draped on to the point cloud, there will be some erroneous points at the borders of trees.\n\n# Hyperspectral \n\nHyperspectral surface reflectance (NEON ID: DP1.30006.001) is a 426 band raster covering visible and near infared spectrum.\n\n```R\npath<-get_data(""MLBS_071_2018"",type=""hyperspectral"")\ng<-stack(path)\nnlayers(g)\n[1] 426\n#Grab a three band combination to view as false color\ng<-g[[c(17,55,113)]]\nnlayers(g)\n[1] 3\nplotRGB(g,stretch=""lin"")\n```\n\n\nAnd in the training data:\n\n\n\n## Training Tiles\n\nWe have uploaded the large training tiles to Zenodo for download.\n\nhttps://zenodo.org/record/5914554#.YfRhcPXMKHE\n\nThe annotations are alongside the evaluation annotations in this repo. Not every training tile has all three data types. There are several known omissions.\n\n* 2019_DSNY_5_452000_3113000_image_crop.tif does not have a LiDAR point cloud.\n* 2019_YELL_2_541000_4977000_image_crop.tif is unprojected and does not have CHM, LIDAR or HSI data\n* 2019_YELL_2_528000_4978000_image_crop2.tif is unprojected and does not have CHM, LIDAR or HSI data\n* 2019_ONAQ_2_367000_4449000_image_crop.tif is projected, but NEON did not have any LiDAR data at the site\n\nThese tiles represent a small portion of the annotations and can be removed if HSI and LiDAR data are used.\n\n# Performance\nSee the R package for current data and scores. This repo is just to hold the annotations in version control.\n\nhttps://github.com/weecology/NeonTreeEvaluation_package\n\n## Cited\n1 Weinstein, Ben G., et al. ""Individual tree-crown detection in RGB imagery using semi-supervised deep learning neural networks."" Remote Sensing 11.11 (2019): 1309. https://www.mdpi.com/2072-4292/11/11/1309\nThanks to the lidR R package for making algorithms accessible for comparison.\n'",,"2018/11/13, 21:59:30",1807,CC0-1.0,0,247,"2023/02/17, 14:58:09",1,5,34,1,250,0,0.2,0.005319148936170248,"2021/05/18, 15:32:15",1.8.0,0,2,false,,false,false,,,https://github.com/weecology,http://weecology.org,,,,https://avatars.githubusercontent.com/u/1156696?v=4,,, PyCrown,Fast raster-based individual tree segmentation for LiDAR data.,manaakiwhenua,https://github.com/manaakiwhenua/pycrown.git,github,"tree,crowns,segmentation,lidar,python,numba,trees",Forest Observation and Management,"2020/07/21, 23:45:39",140,0,34,false,Python,Manaaki Whenua – Landcare Research,manaakiwhenua,"Python,Jupyter Notebook",https://datastore.landcareresearch.co.nz/dataset/pycrown,"b'[![manaakiwhenua-standards](https://github.com/manaakiwhenua/pycrown/workflows/manaakiwhenua-standards/badge.svg)](https://github.com/manaakiwhenua/manaakiwhenua-standards)\r\n\r\n\r\n# PyCrown - Fast raster-based individual tree segmentation for LiDAR data\r\nAuthor: Dr Jan Schindler (formerly Z\xc3\xb6rner) ()\r\n\r\nPublished under GNU GPLv3\r\n\r\n\r\n# Summary\r\nPyCrown is a Python package for identifying tree top positions in a canopy height model (CHM) and delineating individual tree crowns.\r\n\r\nThe tree top mapping and crown delineation method (optimized with Cython and Numba), uses local maxima in the canopy height model (CHM) as initial tree locations and identifies the correct tree top positions even in steep terrain by combining a raster-based tree crown delineation approach with information from the digital surface model (DSM) and terrain model (DTM).\r\n\r\n*Citation:*\r\n\r\nZ\xc3\xb6rner, J.; Dymond, J.; Shepherd J.; Jolly, B. 
PyCrown - Fast raster-based individual tree segmentation for LiDAR data. Landcare Research NZ Ltd. 2018, https://doi.org/10.7931/M0SR-DN55\r\n\r\n*Research Article:*\r\n\r\nZ\xc3\xb6rner, J., Dymond, J.R., Shepherd, J.D., Wiser, S.K., Bunting, P., Jolly, B. (2018) Lidar-based regional inventory of tall trees - Wellington, New Zealand. Forests 9, 702-71. https://doi.org/10.3390/f9110702\r\n\r\n\r\n# Purpose and methods\r\nA number of open-source tools to identify tree top locations and delineate tree crowns already exist. The purpose of this package is to provide a fast and flexible Python-based implementation which builds on top of already well-established algorithms.\r\n\r\nTree tops are identified in the first iteration through local maxima in the smoothed CHM.\r\n\r\nWe re-implement the crown delineation algorithms from **Dalponte and Coomes (2016)** in Python. The original code was published as R-package *itcSegment* () and was further optimized for speed in the *lidR* R-package ().\r\n\r\nOur Cython and Numba implementations of the original algorithm provide a significant speed-up compared to *itcSegment* and a moderate improvement over the version available in the *lidR* package.\r\n\r\nWe also adapted the crown algorithm slightly to grow in circular fashion around the tree top which gives crown a smoother, more natural looking shape.\r\n\r\nWe add an additional step to correct for erroneous tree top locations on steep slopes by taking either the high point from the surface model or the centre of mass of the tree crown as new tree top.\r\n\r\nReference:\r\n\r\n**Dalponte, M. and Coomes, D.A. (2016)** *Tree-centric mapping of forest carbon density from airborne laser scanning and hyperspectral data*. Methods in Ecology and Evolution, 7, 1236-1245.\r\n\r\n\r\n# Main outputs\r\n* **Tree top locations** (stored as 3D ESRI .shp-file)\r\n* **Tree crowns** (stored as 2D ESRI .shp-file)\r\n* **Individual tree classification of the 3D point cloud** (stored as .las-file)\r\n\r\n\r\n# Contributors\r\n* Dr Jan Z\xc3\xb6rner (Manaaki Whenua - Landcare Research, Lincoln, New Zealand)\r\n* Dr John Dymond (Manaaki Whenua - Landcare Research, Palmerston North, New Zealand)\r\n* Dr James Shepherd (Manaaki Whenua - Landcare Research, Palmerston North, New Zealand)\r\n* Dr Ben Jolly (Manaaki Whenua - Landcare Research, Palmerston North, New Zealand)\r\n\r\n\r\n# Requirements\r\nIt is assumed that you generated a canopy height model (CHM), digital surface model (DSM) and digital terrain model (DTM) from the LiDAR dataset before running *PyCrown*.\r\nIf you want to classify individual trees in the point cloud, it is recommended to normalize heights to *height above ground elevation* (also done externally).\r\n\r\nFor processing laser scanning data we recommend the open-source software *SPDLib* (http://www.spdlib.org).\r\n\r\n\r\n# Installation and environment set-up\r\n**Python 3.6 is required.**\r\n\r\nTested on: Windows 10, Debian 9 (Stretch), Fedora 28, Ubuntu 18.04 & 16.04\r\n\r\n## Environment set-up\r\n### With Conda package manager (recommended)\r\n#### Create the environment and install all required packages\r\n\r\n`conda env create`\r\n\r\n#### Activate the environment\r\n\r\nWindows: `activate pycrown-env`\r\n\r\nLinux: `source activate pycrown-env`\r\n\r\n### With Python\'s venv and pip\r\n#### Create the environment\r\n\r\n`python -m venv pycrown-env`\r\n\r\nLinux: `source pycrown-env/bin/activate`\r\n\r\nWindows: `pycrown-env\\Scripts\\activate.bat`\r\n\r\n#### Install all required 
packages\r\n\r\n`python -m pip install --upgrade pip`\r\n\r\n`pip install -r requirements.txt`\r\n\r\n## Run Tests\r\nThere are only some rudimentary tests provided at the moment, but it is advised to check that everything works:\r\n\r\n`python setup.py test`\r\n\r\n## Install PyCrown\r\nBuild and install the PyCrown module with:\r\n\r\n`python setup.py install`\r\n\r\n\r\n# Common problems\r\n## laspy.util.LaspyException: Laszip was not found on the system\r\nOn some platforms (e.g. Ubuntu 16.04) the installation of laspy does not include laszip/laszip-cli.\r\nSee the [issue report](https://github.com/laspy/laspy/issues/79) on github for more infos.\r\n\r\nIn this case, please follow these steps:\r\n\r\n* `wget http://lastools.org/download/LAStools.zip`\r\n* `unzip LAStools.zip && cd LAStools && make`\r\n* `cp bin/laszip /home/USERNAME/miniconda3/envs/pycrown-env/bin/`\r\n\r\nIf you encounter this error under Windows, please download LAStools.zip, extract the archive and copy the file ""laszip.exe"" from the ""bin""-directory to the conda environment, e.g. C:\\Users\\\\AppData\\Local\\Continuum\\miniconda3\\envs\\pycrown-env\\Scripts\\ or C:\\Users\\\\Miniconda3\\envs\\pycrown-env\\Scripts\r\n\r\n## Error while building \'pycrown._crown_dalponte_cython\' extension\r\nBuilding the Cython module requires C++ build tools which may need to be installed on your system.\r\n\r\nThe Windows error message on Windows provides instructions:\r\n`error: Microsoft Visual C++ 14.0 is required. Get it with ""Build Tools for Visual Studio"": https://visualstudio.microsoft.com/downloads/`\r\nDuring the setup process, please select \'C++ Build Tools\'.\r\n\r\n## TypeError: a bytes-like object is required, not \'FakeMmap\' when trying to load .laz files\r\nThere seems to be an incompatibility between laspy and numpy in recent versions. The combination `numpy==1.16.4` and `laspy==1.5.1` works for me.\r\nI suggest either not using .laz files for the time being or downgrading to the appropiate package versions.\r\nPlease also refer to this github issue: https://github.com/laspy/laspy/issues/112\r\n\r\n\r\n# Getting Started\r\nYou can find an IPython Notebook demonstrating each step of the tree segmentation approach in the *example* folder.\r\n\r\nYou can also run the example python script directly. 
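Before running the example, here is a compact sketch (plain NumPy/SciPy, not the PyCrown API) of the first two processing steps listed below, i.e. median smoothing of the CHM followed by local-maximum tree top detection:

```python
# Illustrative sketch of steps 1-2 below (not the PyCrown API): smooth the
# CHM with a median filter, then keep cells that are the maximum of their
# neighbourhood and exceed a minimum tree height.
import numpy as np
from scipy.ndimage import median_filter, maximum_filter

def tree_tops(chm, smooth_size=5, window=7, min_height=2.0):
    smoothed = median_filter(chm, size=smooth_size)
    is_local_max = maximum_filter(smoothed, size=window) == smoothed
    candidates = is_local_max & (smoothed > min_height)
    return np.argwhere(candidates)  # (row, col) cells of candidate tree tops

chm = np.random.gamma(shape=2.0, scale=4.0, size=(200, 200))  # fake CHM
print(len(tree_tops(chm)), "candidate tree tops")
```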
Results are stored in the *example/result* folder.\r\n\r\n`cd example`\r\n\r\n`python example.py`\r\n\r\n## Main processing steps\r\n### Step 1: Smoothing of CHM using a median filter\r\n![Step 1](example/step_1.jpg)\r\n\r\n### Step 2: Tree top detection using local maxima filter\r\n![Step 2](example/step_2.jpg)\r\n\r\n### Step 3: Tree Crown Delineation using an adapted Dalponte and Coomes (2016) scheme\r\n![Step 3](example/step_3.jpg)\r\n\r\n### Step 4: Tree top correction of trees on steep slopes\r\n![Step 4](example/step_4.jpg)\r\n\r\n### Step 5: Smoothing of crown polygons using first returns of normalized LiDAR point clouds\r\n![Step 5](example/step_5.jpg)\r\n\r\n### Step 6: Classification of individual trees in the 3D point cloud (visualized with CloudCompare)\r\n![Classified Point Cloud](example/step_6a.jpg)\r\n![Classified Point Cloud](example/step_6b.jpg)\r\n'",",https://doi.org/10.7931/M0SR-DN55\r\n\r\n*Research,https://doi.org/10.3390/f9110702\r\n\r\n\r\n#","2019/05/29, 03:16:33",1610,GPL-3.0,0,17,"2020/12/29, 13:37:36",7,3,13,0,1030,0,0.0,0.0,"2019/06/03, 21:23:28",v0.2.1,0,1,false,,false,false,,,https://github.com/manaakiwhenua,https://www.landcareresearch.co.nz/,,,,https://avatars.githubusercontent.com/u/47998937?v=4,,, canopyLazR,An R package that estimates leaf area density and leaf area index from airborne LiDAR point clouds.,akamoske,https://github.com/akamoske/canopyLazR.git,github,,Forest Observation and Management,"2023/08/25, 14:12:06",24,0,5,true,R,,,R,,"b'\n# canopyLazR\n\nR package to estimate leaf area density (LAD), leaf area index (LAI), and forest structural attributes from airborne LiDAR point clouds.\n\n## Information\n\nFor theory behind the package please see the citation below. Please cite with use. \n\n*Kamoske A.G., Dahlin K.M., Stark S.C., and Serbin S.P. 2019. Leaf area density from airborne LiDAR: Comparing sensors and resolutions in a forest ecosystem. Forest Ecology and Management 433, 364-375.*\n\n### Corresponding Author\n\nDr. Aaron G. Kamoske\n \n + [USDA Forest Service, National Adaptive Management Analyst] \n + aaron.kamoske@usda.gov\n\n### Contributing Authors\n\nDr. Scott C. Stark\n \n + [Michigan State University, Department of Forestry](https://www.canr.msu.edu/for/) \n + [Tropical Forestry Ecology Lab](https://sites.google.com/site/scottcstarktropicalforest/) \n + scott.c.stark@gmail.com \n \nDr. Shawn P. Serbin\n\n + [Brookhaven National Laboratory, Environmental and Climate Sciences Department](https://www.bnl.gov/envsci/)\n + [Terrestrial Ecosystem Science and Technology (TEST) group](https://www.bnl.gov/testgroup)\n + sserbin@bnl.gov\n \nDr. Kyla M. 
Dahlin\n + [Michigan State University, Department of Geography, Environment, and Spatial Sciences](http://geo.msu.edu/)\n + [Michigan State University, Ecology, Evolutionary Biology, and Behavior Program](https://eebb.msu.edu/)\n + [ERSAM Lab](https://www.ersamlab.com/)\n + kdahlin@msu.edu\n \n## Installation\n\nThe easiest way to install `canopyLazR` is via `install_github` from the `devtools` package:\n\n```\n# If you haven\'t already installed this package and its dependencies\ninstall.packages(""devtools"")\n\n# If you alread have devtools installed or just installed it\nlibrary(devtools)\n\n# Install canopyLazR from GitHub\ninstall_github(""akamoske/canopyLazR"")\n\n# Load the library\nlibrary(canopyLazR)\n```\n\nNow all functions should be available.\n\n## Downloading example data\n\n[NEON](https://www.neonscience.org/) provides a teaching LiDAR dataset that is easy to download via R. We can use this file as a test dataset here. Code to download this .las file follows:\n\n```\n# Install missing R package if needed\nlist.of.packages <- c(""uuid"",""rlas"",""devtools"")\nnew.packages <- list.of.packages[!(list.of.packages %in% installed.packages()[, ""Package""])]\nif (length(new.packages)) {\n print(""installing : "")\n print(new.packages)\n install.packages(new.packages, repos = ""http://cran.rstudio.com/"", dependencies = TRUE)\n}\n\n# Create a scratch folder to contain example LiDAR dataset\nscratch_folder <- file.path(""~/scratch/neon_data/"")\nif (! file.exists(scratch_folder)) dir.create(scratch_folder,recursive=TRUE)\nsetwd(file.path(scratch_folder))\ngetwd()\n\n# Download NEON example .las file\ndownload.file(url = ""https://ndownloader.figshare.com/files/7024955"",\n destfile = file.path(scratch_folder,""neon_lidar_example.las""),\n method = ""auto"",\n mode = ""wb"")\n\n```\n\n## Example of usage (after installation)\n\nOnce the package is loaded into your R session, this is the an example of how to use the functions in this package\nto estimate LAD and LAI:\n\n```\n# Convert .laz or .las file into a voxelized lidar array\nlaz.data <- laz.to.array(laz.file.path = file.path(scratch_folder,""neon_lidar_example.las""), \n voxel.resolution = 10, \n z.resolution = 1,\n use.classified.returns = TRUE)\n\n# Level the voxelized array to mimic a canopy height model\nlevel.canopy <- canopy.height.levelr(lidar.array = laz.data)\n\n# Estimate LAD for each voxel in leveled array\nlad.estimates <- machorn.lad(leveld.lidar.array = level.canopy, \n voxel.height = 1, \n beer.lambert.constant = NULL)\n\n# Convert the LAD array into a single raster stack\nlad.raster <- lad.array.to.raster.stack(lad.array = lad.estimates, \n laz.array = laz.data, \n epsg.code = 32611)\n\n# Create a single LAI raster from the LAD raster stack\nlai.raster <- raster::calc(lad.raster, fun = sum, na.rm = TRUE)\n\n# Convert the list of LAZ arrays into a ground and canopy height raster\ngrd.can.rasters <- array.to.ground.and.canopy.rasters(laz.data, 32611)\n\n# Calculate max LAD and height of max LAD\nmax.lad <- lad.ht.max(lad.array = lad.estimates, \n laz.array = laz.data, \n ht.cut = 5, \n epsg.code = 32618)\n\n# Calculate the ratio of filled and empty voxels in a given column of the canopy\nempty.filled.ratio <- canopy.porosity.filled.ratio(lad.array = lad.estimates,\n laz.array = laz.data,\n ht.cut = 5,\n epsg.code = 32618)\n\n# Calculate the volume of filled and empty voxles in a given column of the canopy\nempty.filled.volume <- canopy.porosity.filled.volume(lad.array = lad.estimates,\n laz.array = laz.data,\n 
ht.cut = 5,\n xy.res = 10,\n z.res = 1,\n epsg.code = 32618)\n\n# Calculate the within canopy rugosity\nwithin.can.rugosity <- rugosity.within.canopy(lad.array = lad.estimates,\n laz.array = laz.data,\n ht.cut = 5,\n epsg.code = 32618)\n\n# Calculate the heights of various LAD quantiles\nht.quantiles <- lad.quantiles(lad.array = lad.estimates,\n laz.array = laz.data,\n ht.cut = 5,\n epsg.code = 32618)\n\n# Calculate various canopy volume metrics from Lefsky\ncan.volume <- canopy.volume(lad.array = lad.estimates,\n laz.array = laz.data,\n ht.cut = 5,\n xy.res = 10,\n z.res = 1,\n epsg.code = 32618)\n\n# We can calculate the depth of the euphotic zone by dividing by the volume of the voxel\neuphotic.depth <- can.volume$euphotic.volume.column.raster / ( 10 * 10 * 1)\n\n# Calculate the top of canopy rugosity volume\ntoc.rugos <- toc.rugosity(chm.raster = grd.can.rasters$chm.raster,\n xy.res = 10,\n z.res = 1)\n\n# Plot the lai raster\nplot(lai.raster)\n\n# Plot the ground raster\nplot(grd.can.rasters$ground.raster)\n\n# Plot the canopy height raster\nplot(grd.can.rasters$canopy.raster)\n\n# Plot the canopy height model raster\nplot(grd.can.rasters$chm.raster)\n\n# Plot the max LAD raster\nplot(max.lad$max.lad.raster)\n\n# Plot the height of max LAD raster\nplot(max.lad$max.lad.ht.raster)\n\n# Plot filled voxel ratio raster\nplot(empty.filled.ratio$filled.raster)\n\n# Plot porosity voxel ratio raster\nplot(empty.filled.ratio$porosity.raster)\n\n# Plot filled voxel volume raster\nplot(empty.filled.volume$filled.raster)\n\n# Plot porosity voxel volume raster\nplot(empty.filled.volume$porosity.raster)\n\n# Plot the standard deviation of LAD within a vertical column raster\nplot(within.can.rugosity$vertical.sd.lad.raster)\n\n# Plot within canopy rugosity\nplot(within.can.rugosity$rugosity.raster)\n\n# Plot the height of the 10th quantile\nplot(ht.quantiles$quantile.10.raster)\n\n# Plot the height of the 25th quantile\nplot(ht.quantiles$quantile.25.raster)\n\n# Plot the height of the 50th quantile\nplot(ht.quantiles$quantile.50.raster)\n\n# Plot the height of the 75th quantile\nplot(ht.quantiles$quantile.75.raster)\n\n# Plot the height of the 90th quantile\nplot(ht.quantiles$quantile.90.raster)\n\n# Plot the height of the mean LAD\nplot(ht.quantiles$mean.raster)\n\n# Plot the volume of the euphotic zone for each column\nplot(can.volume$euphotic.volume.column.raster)\n\n# Plot the total leaf area in the euphotic zone for each column\nplot(can.volume$euphotic.tla.column.raster)\n\n# Plot the depth of the euphotic zone\nplot(euphotic.depth)\n\n# Plot the volume of the oligophotic zone for each column\nplot(can.volume$oligophotic.volume.column.raster)\n\n# Plot the total leaf area in the oligophotic zone for each column\nplot(can.volume$oligophotic.tla.column.raster)\n\n# Plot the volume of the empty space within a given column\nplot(can.volume$empty.volume.column.raster)\n\n# Plot the volume of the empty space within a 3x3 moving window\nplot(can.volume$empty.canopy.volume.raster)\n\n# Plot the volume of the euphotic zone within a 3x3 moving window\nplot(can.volume$filled.canopy.euphotic.raster)\n\n# Plot the volume of the oligophotic zone within a 3x3 moving window\nplot(can.volume$filled.canopy.oligophotic.raster)\n\n# Plot the total leaf area of the euphotic zone within a 3x3 moving window\nplot(can.volume$filled.canopy.euphotic.tla.raster)\n\n# Plot the total leaf area of the oligophotic zone within a 3x3 moving window\nplot(can.volume$filled.canopy.oligophotic.tla.raster)\n\n# Plot the top of 
canopy rugosity volume\nplot(toc.rugos)\n\n```\n\n## License\n\nThis project is licensed under the GNU GPLv2 License - see the [LICENSE.md](LICENSE.md) file for details\n\n'",,"2018/08/09, 16:08:02",1903,GPL-2.0,1,153,"2018/08/23, 16:00:48",0,2,3,0,1889,0,0.0,0.046666666666666634,"2020/08/16, 17:40:14",v1.0,0,2,false,,false,false,,,,,,,,,,, forestlas,Code for generating metrics of forest vertical structure from airborne LiDAR data.,philwilkes,https://github.com/philwilkes/forestlas.git,github,,Forest Observation and Management,"2019/12/16, 10:38:37",17,0,1,false,Jupyter Notebook,,,"Jupyter Notebook,Python",,"b'# forestlas\n[![License: GPL v3](https://img.shields.io/badge/License-GPLv3-blue.svg)](https://www.gnu.org/licenses/gpl-3.0)\n\n![LiDAR derived vertical profiles](http://www2.geog.ucl.ac.uk/~ucfaptv/3_PLOTS_geo_trees_bar_spectra_v5.png)\nPython code for generating metrics of forest vertical structure from airborne LiDAR data. This code was developed as \npart of my PhD (completed in 2016, can be viewed \nhere) \nand was developed using data from the forests of Victoria, Australia.\nThe aim was to develop a suite of metrics that are robust to forest type, i.e. they can be applied without prior information of \nforest structure.\n\nThere are a number of methods available; check this \nJupyter notebook for an introduction.\nFunctions include reading `.las` files into numpy arrays and writing to `.las`, as well as a number of methods to dice, slice and tile \nLiDAR data.\nThe main set of functions is found in `forestlas.canopyComplexity`.\nThese allow you to derive metrics of vertical canopy structure such as Pgap and also estimate the number of canopy layers.\nMore information can be found in this paper: Wilkes, P. et al. (2016). Using discrete-return airborne laser scanning to \nquantify number of canopy strata across diverse forest types. Methods in Ecology and Evolution, 7(6), 700\xe2\x80\x93712. 
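\n\nAs a rough, conceptual sketch of the kind of metric involved (this is not the actual `forestlas` API; the function and argument names below are illustrative only), a vertical gap-fraction (Pgap) profile can be derived from first-return heights with numpy, assuming a height bin `dz` and that first returns approximate pulse interceptions:\n\n```python\nimport numpy as np\n\ndef pgap_profile(first_return_heights, dz=0.5):\n    # Pgap(z): fraction of pulses not intercepted above height z\n    z = np.arange(0.0, first_return_heights.max() + dz, dz)\n    hits_above = np.array([(first_return_heights > zi).sum() for zi in z])\n    return z, 1.0 - hits_above / float(len(first_return_heights))\n```\n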
\n\n\n#### Funding\nThis research was funded by the Australian Postgraduate Award, Cooperative Research Centre for Spatial Information \nunder Project 2.07, TERN/AusCover and Commonwealth Scientific and IndustrialResearch Organisation (CSIRO) Postgraduate \nScholarship.\n'",",https://doi.org/10.1111/2041-210X.12510","2019/08/12, 21:48:36",1535,GPL-3.0,0,19,"2019/12/05, 09:22:03",0,0,1,0,1420,0,0,0.0,,,0,1,false,,false,false,,,,,,,,,,, OpenTreeMap,"A collaborative platform for crowdsourced tree inventory, ecosystem services calculations, urban forestry analysis and community engagement.",OpenTreeMap,https://github.com/OpenTreeMap/otm-core.git,github,,Forest Observation and Management,"2023/08/03, 07:09:29",179,0,8,true,Python,OpenTreeMap,OpenTreeMap,"Python,JavaScript,HTML,SCSS,CSS,Jinja,Shell",www.opentreemap.org,"b'![OTM2 open source logo](https://opentreemap.github.io/images/logo@2x.png)\n\n[![Code Health](https://landscape.io/github/OpenTreeMap/otm-core/master/landscape.png)](https://landscape.io/github/OpenTreeMap/otm-core/master)\n[![Build Status](https://travis-ci.org/OpenTreeMap/otm-core.svg?branch=master)](https://travis-ci.org/OpenTreeMap/otm-core)\n[![Coverage Status](https://coveralls.io/repos/OpenTreeMap/otm-core/badge.png)](https://coveralls.io/r/OpenTreeMap/otm-core)\n\n# OpenTreeMap 2\n\n## Questions?\n\nJoin the user mailing list and let us know: \nhttp://groups.google.com/group/opentreemap-user\n\nOr, try our Gitter channel: [![Join the chat at https://gitter.im/OpenTreeMap/otm-core](https://badges.gitter.im/Join%20Chat.svg)](https://gitter.im/OpenTreeMap/otm-core?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge&utm_content=badge)\n\n## Installation\n\nFor full installation instructions, see the [Github \nwiki](https://github.com/OpenTreeMap/otm-core/wiki/Installation-Guide).\n\nAlternatively, you can also use the [otm-vagrant \nproject](https://github.com/OpenTreeMap/otm-vagrant) to get started. \nWhile not recommended for production, otm-vagrant greatly simplifies \ngetting a development environment for testing and contributing to OTM locally.\n\n### Environment variables\nThis project requires several environment variables, to provide API keys for several services.\n\n```\nROLLBAR_SERVER_SIDE_ACCESS_TOKEN=....\nGOOGLE_MAPS_KEY=...\n```\n`ROLLBAR_SERVER_SIDE_ACCESS_TOKEN` is a token for [Rollbar](rollbar.com).\n`GOOGLE_MAPS_KEY` is a browser key for the Google Maps Javascript API, [which can be obtained here](https://developers.google.com/maps/documentation/javascript/get-api-key).\n\n## Other Repositories\n\nThis repository (ie, otm-core) is but one of a few separate repositories \nthat together compose the OpenTreeMap project. Others include:\n\n* [otm-tiler](https://github.com/OpenTreeMap/otm-tiler) - map tile \nserver based on [Windshaft](https://github.com/CartoDB/Windshaft)\n* [otm-ecoservice](https://github.com/OpenTreeMap/otm-ecoservice) - ecosystem \nbenefits calculation service\n* [otm-ios](https://github.com/OpenTreeMap/otm-ios) - An \nOpenTreeMap client for iOS devices.\n* [otm-android](https://github.com/OpenTreeMap/otm-android) - An OpenTreeMap client for Android devices.\n\n## Additional Documentation\n\nThe REST API that provides data for the native mobile apps is documented in [api.md](doc/api.md)\n\n### Deprecated Repositories\n\nOpenTreeMap has a long history. 
These repositories still exist, but are \ndeprecated and no development is happening here moving forward.\n\n* [OpenTreeMap](https://github.com/OpenTreeMap/OpenTreeMap) - Otherwise \nknown as ""OTM1"", this is the previous-generation codebase of OpenTreeMap. It \nhas been entirely superseded by this repository and the others \nlisted above. However, there are some live tree map sites still running \non the old OTM1 code, and so we have left it up for archival purposes.\n\n## Developer documentation\n - [Javascript module conventions](doc/js.md)\n - [Python mixins](doc/mixins.md)\n\n\n## Acknowledgements\n\nThis application includes code based on [django-url-tools](https://bitbucket.org/monwara/django-url-tools), Copyright (c) 2013 Monwara LLC.\n\nUSDA Grant\n---------------\nPortions of OpenTreeMap are based upon work supported by the National Institute of Food and Agriculture, U.S. Department of Agriculture, under Agreement No. 2010-33610-20937, 2011-33610-30511, 2011-33610-30862 and 2012-33610-19997 of the Small Business Innovation Research Grants Program. Any opinions, findings, and conclusions, or recommendations expressed on the OpenTreeMap website are those of Azavea and do not necessarily reflect the view of the U.S. Department of Agriculture.\n'",,"2013/02/19, 15:11:04",3900,CUSTOM,6,5191,"2022/07/07, 04:25:54",415,1734,2911,1,475,25,0.4,0.6475507765830346,,,0,26,false,,false,true,,,https://github.com/OpenTreeMap,www.opentreemap.org,,,,https://avatars.githubusercontent.com/u/1125635?v=4,,, DeepTreeAttention,Hyperspectral Image Classification with Attention Aided CNNs.,weecology,https://github.com/weecology/DeepTreeAttention.git,github,,Forest Observation and Management,"2023/03/13, 18:49:18",91,0,21,true,Python,Weecology,weecology,"Python,Shell",,"b'DeepTreeAttention\n==============================\n\n[![Github Actions](https://github.com/Weecology/DeepTreeAttention/actions/workflows/pytest.yml/badge.svg)](https://github.com/Weecology/DeepTreeAttention/actions/)\n\nTree Species Prediction for the National Ecological Observatory Network (NEON)\n\nImplementation of Hang et al. 2020 [Hyperspectral Image Classification with Attention Aided CNNs](https://arxiv.org/abs/2005.11977) for tree species prediction.\n\n# Model Architecture\n\n![](www/model.png)\n\nProject Organization\n------------\n\n \xe2\x94\x9c\xe2\x94\x80\xe2\x94\x80 LICENSE\n \xe2\x94\x9c\xe2\x94\x80\xe2\x94\x80 README.md <- The top-level README for developers using this project.\n \xe2\x94\x9c\xe2\x94\x80\xe2\x94\x80 data\n \xe2\x94\x82\xc2\xa0\xc2\xa0 \xe2\x94\x9c\xe2\x94\x80\xe2\x94\x80 processed <- The final, canonical data sets for modeling.\n \xe2\x94\x82\xc2\xa0\xc2\xa0 \xe2\x94\x94\xe2\x94\x80\xe2\x94\x80 raw <- The original, immutable data dump.\n \xe2\x94\x82\n \xe2\x94\x9c\xe2\x94\x80\xe2\x94\x80 environment.yml <- Conda requirements\n \xe2\x94\x82\n \xe2\x94\x9c\xe2\x94\x80\xe2\x94\x80 setup.py <- makes project pip installable (pip install -e .) so src can be imported\n \xe2\x94\x9c\xe2\x94\x80\xe2\x94\x80 src <- Source code for use in this project.\n \xe2\x94\x82\xc2\xa0\xc2\xa0 \xe2\x94\x9c\xe2\x94\x80\xe2\x94\x80 Models <- Model Architectures\n\n--------\n\n# Workflow\nThere are three main parts to this project: 1) a data module, 2) a model module, and 3) a trainer module. Usually the data_module is created to hold the train and test split and keep track of data generation reproducibility. Then a model architecture is created and passed to the model module along with the data module. 
Finally the model module is passed to the trainer.\n\n```\n#1) \ndata_module = data.TreeData(csv_file=""data/raw/neon_vst_data_2021.csv"", regenerate=False, client=client)\n\n#2)\nmodel = Hang2020.vanilla_CNN\nm = main.TreeModel(model=model, bands=data_module.config[""bands""], classes=data_module.num_classes,label_dict=data_module.species_label_dict)\n\n#3)\ntrainer = Trainer()\ntrainer.fit(m, datamodule=data_module)\n```\n\n## Pytorch Lightning Data Module (data.TreeData)\n\nThis repo contains a pytorch lightning data module for reproducibility. The goal of the project is to make it easy to share with others within our research group, but we welcome contributions from outside the community. While all data is public, it is VERY large (>20TB) and cannot be easily shared. If you want to reproduce this work, you will need to download the majority of NEON\'s camera, HSI and CHM data and change the paths in the config file. For the \'raw\' NEON tree stem data see data/raw/neon_vst_2021.csv. The data module starts from this state, which is a set of x,y locations for each tree. It then performs the following actions as an end-to-end workflow.\n\n1. Filters the data to represent trees over 3m with a sufficient number of training samples\n2. Extracts the LiDAR-derived canopy height and compares it to the field-measured height. Trees that are below the canopy are excluded based on the min_CHM_diff parameter in the config.\n3. Splits the training and test x,y data such that field plots are either in training or test.\n4. For each x,y stem location the crown is predicted by the tree detection algorithm (DeepForest - https://deepforest.readthedocs.io/).\n5. Crops of each tree crown are created and divided into pixel windows for pixel-level prediction.\n\nThis workflow does not need to be run on every experiment. If you are satisfied with the current train/test split and data generation process, set regenerate=False.\n\n```\ndata_module = data.TreeData(csv_file=""data/raw/neon_vst_data_2021.csv"", regenerate=False)\ndata_module.setup()\n```\n\n## Pytorch Lightning Training Module (main.TreeModel)\n\nTraining is handled by the TreeModel class, which loads a model from the models folder, reads the config file and runs the training. The evaluation metrics and images are computed and put on the comet dashboard.\n\n```\nm = main.TreeModel(model=Hang2020.vanilla_CNN, bands=data_module.config[""bands""], classes=data_module.num_classes,label_dict=data_module.species_label_dict)\n\ntrainer = Trainer(\n gpus=data_module.config[""gpus""],\n fast_dev_run=data_module.config[""fast_dev_run""],\n max_epochs=data_module.config[""epochs""],\n accelerator=data_module.config[""accelerator""],\n logger=comet_logger)\n \ntrainer.fit(m, datamodule=data_module)\n```\n\n## Alive/Dead Filtering\n\nAs part of the prediction pipeline, RGB crops are scored as either \'Alive\', meaning they have leaves during the presumed leaf-on season, or \'Dead\', meaning they do not have leaves.\nTo finetune the resnet50 model, see src/models/dead.py. The classified data for the Alive/Dead crops can be found in data/raw/dead_train and data/raw/dead_test.\n\n### Dev Guide\n\nIn general, major changes or improvements should be made on a new git branch. Only core improvements should be made on the main branch. If a change leads to higher scores, please create a pull request. 
Any pull requests are expected to have pytest unit tests (see tests/) that cover major use cases.\n\n## Model Architectures\n\nThe TreeModel class takes in a model-creation function\n\n```\nm = main.TreeModel(model=Hang2020.vanilla_CNN)\n```\n\nAny model can be specified, provided it follows these input and output arguments\n\n```\nimport torch.nn.functional as F\nfrom torch.nn import Module\n\nclass myModel(Module):\n """"""\n Model description\n """"""\n def __init__(self, bands, classes):\n super(myModel, self).__init__()\n \n\n def forward(self, x):\n \n class_scores = F.softmax(x)\n \n return class_scores\n```\n\n### Extending the model\n\nTo create a model that takes in new inputs, I strongly recommend sub-classing the existing TreeData and TreeModel classes. For an example, see the MetadataModel in models/metadata.py\n\n```\nimport torch.nn.functional as F\n\n#Subclass of the training model\nclass MetadataModel(main.TreeModel):\n """"""Subclass the core model and update the training loop to take two inputs""""""\n def __init__(self, model, sites,classes, label_dict, config):\n super(MetadataModel,self).__init__(model=model,classes=classes,label_dict=label_dict, config=config) \n \n def training_step(self, batch, batch_idx):\n """"""Train on a loaded dataset\n """"""\n #allow for empty data if data augmentation is generated\n inputs, y = batch\n images = inputs[""HSI""]\n metadata = inputs[""site""]\n y_hat = self.model.forward(images, metadata)\n loss = F.cross_entropy(y_hat, y) \n \n return loss\n\n```\n\n## Getting Started (UF - collaboration)\n\nThis section is meant solely for members of the idtrees group who have access to the data.\n\n1) Fork this repo and install the conda environment.\n\n```\nconda env create -f environment.yml\nconda activate DeepTreeAttention\n```\n\n2) Update the config.yml\n\nCurrently, only members of the ewhite group have permissions to the raw NEON data.\n\nFor example:\n\n```\nrgb_sensor_pool: /orange/ewhite/NeonData/*/DP3.30010.001/**/Camera/**/*.tif\n```\n\nThis is not a problem; just set \n\n```\nregenerate: False\n```\n\nand it will bypass these steps and use the existing train/test split (e.g. data/processed/train.csv). \n\nYou will need to set the correct crop directory\n\n```\ncrop_dir: /blue/ewhite/b.weinstein/DeepTreeAttention/crops/\n```\nto wherever the crops are saved. This is currently \n\n```\n/orange/idtrees-collab/DeepTreeAttention/crops/\n```\n\nI highly recommend making a comet login. 
Change\n\n```\n#Comet dashboard\ncomet_workspace: bw4sz\n```\nto your username and add a [.comet.config file](https://www.comet.ml/docs/python-sdk/advanced/#non-interactive-setup) to authenticate.\n\n3) Submit a job\n\nSubmit a SLURM job:\n\n```\nsbatch SLURM/experiment.sh\n```\n\n4) Look at the comet dashboard for results\n\nThe metrics tab has the Micro and Macro Accuracy.\n\n\n'",",https://arxiv.org/abs/2005.11977","2020/06/01, 14:14:29",1241,MIT,57,4007,"2023/10/25, 17:15:39",11,23,165,23,0,0,0.0,0.0027118644067796183,"2022/11/09, 19:21:46",v1.0,0,3,false,,false,false,,,https://github.com/weecology,http://weecology.org,,,,https://avatars.githubusercontent.com/u/1156696?v=4,,, forestmangr,An R package for forest mensuration and management.,sollano,https://github.com/sollano/forestmangr.git,github,,Forest Observation and Management,"2023/02/15, 21:57:42",14,0,2,true,R,,,R,,"b'\n\n\n[![Travis-CI Build\nStatus](https://travis-ci.org/sollano/forestmangr.svg?branch=master)](https://travis-ci.org/sollano/forestmangr)\n[![CRAN\\_Status\\_Badge](https://www.r-pkg.org/badges/version/forestmangr)](https://cran.r-project.org/package=forestmangr)\n[![](https://cranlogs.r-pkg.org/badges/grand-total/forestmangr)](https://cran.r-project.org/package=forestmangr)\n[![](https://cranlogs.r-pkg.org/badges/forestmangr)](https://cran.r-project.org/package=forestmangr)\n[![Project Status: Active \xe2\x80\x93 The project has reached a stable, usable\nstate and is being actively\ndeveloped.](https://www.repostatus.org/badges/latest/active.svg)](https://www.repostatus.org/#active)\n\n\n# forestmangr\n\nProcessing forest inventory data with methods such as simple random\nsampling, stratified random sampling and systematic sampling. There are\nalso functions for yield and growth predictions and model fitting,\nlinear and non-linear grouped data fitting, and statistical tests.\n\nIf you need any help, I\xe2\x80\x99m available for consulting. If you find\nforestmangr useful, please consider supporting my efforts in developing\nthis open-source R package for the forestry community\\!\n\n
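As a language-agnostic illustration of the sample-size logic used in the inventory examples below (this Python sketch is not part of the package; it merely reproduces the finite-population formula with the pilot values that `sprs` reports, and note that `sprs` also recalculates the t value itself):\n\n``` python\nimport math\n\n# Pilot values reported by sprs() in the example below:\n# recalculated t-student, variance coefficient (%), admitted error (%), max plots N\nt, cv, error, n_max = 2.0452, 53.267, 20.0, 156\n\n# n = t^2 * CV^2 / (E^2 + t^2 * CV^2 / N), with finite population correction\nn_req = (t ** 2 * cv ** 2) / (error ** 2 + t ** 2 * cv ** 2 / n_max)\nprint(math.ceil(n_req))  # 25, matching the sprs output\n```\n\n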
\n\n## Installation\n\nTo install the stable CRAN version, use:\n\n``` r\ninstall.packages(""forestmangr"")\n```\n\nOr you can install forestmangr from GitHub, for the latest dev version,\nwith:\n\n``` r\n# install.packages(""devtools"")\ndevtools::install_github(""sollano/forestmangr"")\n```\n\n## Example\n\n``` r\nlibrary(forestmangr)\nlibrary(dplyr)\ndata(""exfm16"")\nhead(exfm16)\n#> # A tibble: 6 x 7\n#> strata plot age DH N V B\n#> \n#> 1 1 1 26.4 12.4 1020 19.7 5.7\n#> 2 1 1 38.4 17.2 1020 60.8 9.8\n#> 3 1 1 51.6 19.1 1020 103. 13.9\n#> 4 1 1 63.6 21.8 1020 136. 15.3\n#> 5 1 2 26.4 15 900 27.3 6 \n#> 6 1 2 38.4 20.3 900 80 10.5\n```\n\nNow, we can fit a model for Site estimation. With `nls_table`, we can\nfit a non-linear model, extract its coefficients, and merge it with the\noriginal data in one line. Here we\xe2\x80\x99ll use the Chapman & Richards model:\n\n``` r\nage_i <- 64\nexfm16_fit <- exfm16 %>%\n nls_table(DH ~ b0 * (1-exp(-b1* age))^b2, mod_start = c( b0=23, b1=0.03, b2 = 1.3), output=""merge"") %>% \n mutate(site = DH *( ( (1- exp( -b1/age ))^b2 ) / (( 1 - exp(-b1/age_i))^b2 ))) %>% \n select(-b0,-b1,-b2)\nhead(exfm16_fit)\n#> strata plot age DH N V B site\n#> 1 1 1 26.4 12.4 1020 19.7 5.7 22.48027\n#> 2 1 1 38.4 17.2 1020 60.8 9.8 24.24290\n#> 3 1 1 51.6 19.1 1020 103.4 13.9 22.07375\n#> 4 1 1 63.6 21.8 1020 136.5 15.3 21.89203\n#> 5 1 2 26.4 15.0 900 27.3 6.0 27.19388\n#> 6 1 2 38.4 20.3 900 80.0 10.5 28.61226\n```\n\nNow, to fit Clutter\xe2\x80\x99s model, we can use the `fit_clutter` function,\nindicating the DH, B, V, site and Plot variable names:\n\n``` r\ncoefs_clutter <- fit_clutter(exfm16_fit, ""age"", ""DH"", ""B"", ""V"", ""site"", ""plot"")\ncoefs_clutter\n#> b0 b1 b2 b3 a0 a1\n#> 1 1.398861 -28.84038 0.0251075 1.241779 1.883471 0.05012873\n```\n\nNow, say we wanted to do a Simple Random Sampling Forest Inventory, with\n20% as an accepted error. First, let\xe2\x80\x99s load the package and some data:\n\n``` r\nlibrary(forestmangr)\ndata(""exfm2"")\ndata(""exfm3"")\ndata(""exfm4"")\nhead(exfm3,10)\n#> # A tibble: 10 x 3\n#> TOTAL_AREA PLOT_AREA VWB\n#> \n#> 1 46.8 3000 41\n#> 2 46.8 3000 33\n#> 3 46.8 3000 24\n#> 4 46.8 3000 31\n#> 5 46.8 3000 10\n#> 6 46.8 3000 32\n#> 7 46.8 3000 62\n#> 8 46.8 3000 16\n#> 9 46.8 3000 66\n#> 10 46.8 3000 25\n```\n\nFirst we should try a pilot inventory, to see if the number of plots\nsampled is enough to reach the desired error:\n\n``` r\nsprs(exfm3, ""VWB"", ""PLOT_AREA"", ""TOTAL_AREA"", error = 20, pop = ""fin"")\n#> Variables Values\n#> 1 Total number of sampled plots (n) 10.0000\n#> 2 Number of maximum plots (N) 156.0000\n#> 3 Variance Quoeficient (VC) 53.2670\n#> 4 t-student 2.2622\n#> 5 recalculated t-student 2.0452\n#> 6 Number of samples regarding the admited error 25.0000\n#> 7 Variance (S2) 328.0000\n#> 8 Standard deviation (s) 18.1108\n#> 9 Mean (Y) 34.0000\n#> 10 Standard error of the mean (Sy) 5.5405\n#> 11 Absolute Error 12.5335\n#> 12 Relative Error (%) 36.8634\n#> 13 Estimated Total Value (Yhat) 5304.0000\n#> 14 Total Error 1955.2326\n#> 15 Inferior Confidence Interval (m3) 21.4665\n#> 16 Superior Confidence Interval (m3) 46.5335\n#> 17 Inferior Confidence Interval (m3/ha) 71.5549\n#> 18 Superior Confidence Interval (m3/ha) 155.1118\n#> 19 inferior Total Confidence Interval (m3) 3348.7674\n#> 20 Superior Total Confidence Interval (m3) 7259.2326\n```\n\nWe can see that we have 10 plots, but 15 more are needed if we want a\nminimum of 20% error. 
The exfm4 data has new samples, that we now can\nuse to run a definitive inventory:\n\n``` r\nsprs(exfm4, ""VWB"", ""PLOT_AREA"", ""TOTAL_AREA"", error = 20, pop = ""fin"")\n#> Variables Values\n#> 1 Total number of sampled plots (n) 25.0000\n#> 2 Number of maximum plots (N) 156.0000\n#> 3 Variance Quoeficient (VC) 45.4600\n#> 4 t-student 2.0639\n#> 5 recalculated t-student 2.0930\n#> 6 Number of samples regarding the admited error 20.0000\n#> 7 Variance (S2) 226.6933\n#> 8 Standard deviation (s) 15.0563\n#> 9 Mean (Y) 33.1200\n#> 10 Standard error of the mean (Sy) 2.7595\n#> 11 Absolute Error 5.6952\n#> 12 Relative Error (%) 17.1957\n#> 13 Estimated Total Value (Yhat) 5166.7200\n#> 14 Total Error 888.4555\n#> 15 Inferior Confidence Interval (m3) 27.4248\n#> 16 Superior Confidence Interval (m3) 38.8152\n#> 17 Inferior Confidence Interval (m3/ha) 91.4159\n#> 18 Superior Confidence Interval (m3/ha) 129.3841\n#> 19 inferior Total Confidence Interval (m3) 4278.2645\n#> 20 Superior Total Confidence Interval (m3) 6055.1755\n```\n\nThe desired error was met.\n\nThe exfm2 data has a strata variable. Say we wanted to run a SRS\ninventory for every stand. We can do this with the .groups argument:\n\n``` r\nhead(exfm2,10)\n#> # A tibble: 10 x 4\n#> STRATA STRATA_AREA PLOT_AREA VWB\n#> \n#> 1 1 14.4 1000 7.9 \n#> 2 1 14.4 1000 3.8 \n#> 3 1 14.4 1000 4.4 \n#> 4 1 14.4 1000 6.25\n#> 5 1 14.4 1000 5.55\n#> 6 1 14.4 1000 8.1 \n#> 7 1 14.4 1000 6.1 \n#> 8 1 14.4 1000 6.6 \n#> 9 1 14.4 1000 7.4 \n#> 10 1 14.4 1000 5.35\nsprs(exfm2, ""VWB"", ""PLOT_AREA"", ""STRATA_AREA"",.groups=""STRATA"", error = 20, pop = ""fin"")\n#> Variables STRATA1 STRATA2 STRATA3\n#> 1 Total number of sampled plots (n) 14.0000 20.0000 23.0000\n#> 2 Number of maximum plots (N) 144.0000 164.0000 142.0000\n#> 3 Variance Quoeficient (VC) 24.4785 15.8269 16.7813\n#> 4 t-student 2.1604 2.0930 2.0739\n#> 5 recalculated t-student 2.4469 4.3027 4.3027\n#> 6 Number of samples regarding the admited error 9.0000 11.0000 12.0000\n#> 7 Variance (S2) 2.1829 3.6161 5.3192\n#> 8 Standard deviation (s) 1.4774 1.9016 2.3063\n#> 9 Mean (Y) 6.0357 12.0150 13.7435\n#> 10 Standard error of the mean (Sy) 0.3752 0.3984 0.4402\n#> 11 Absolute Error 0.8105 0.8339 0.9130\n#> 12 Relative Error (%) 13.4288 6.9409 6.6431\n#> 13 Estimated Total Value (Yhat) 869.1429 1970.4600 1951.5739\n#> 14 Total Error 116.7157 136.7670 129.6455\n#> 15 Inferior Confidence Interval (m3) 5.2252 11.1811 12.8305\n#> 16 Superior Confidence Interval (m3) 6.8462 12.8489 14.6565\n#> 17 Inferior Confidence Interval (m3/ha) 52.2519 111.8105 128.3048\n#> 18 Superior Confidence Interval (m3/ha) 68.4624 128.4895 146.5647\n#> 19 inferior Total Confidence Interval (m3) 752.4271 1833.6930 1821.9284\n#> 20 Superior Total Confidence Interval (m3) 985.8586 2107.2270 2081.2194\n```\n\nWe can also run a stratified random sampling inventory with this data:\n\n``` r\nstrs(exfm2, ""VWB"", ""PLOT_AREA"", ""STRATA_AREA"", ""STRATA"", error = 20, pop = ""fin"")\n#> $Table1\n#> Variables STRATA 1 STRATA 2\n#> 1 Plot Area 1000.0000 1000.0000\n#> 2 Number of sampled plots per stratum (nj) 14.0000 20.0000\n#> 3 Total number of sampled plots (n) 57.0000 57.0000\n#> 4 Number of maximum plots per stratum (Nj) 144.0000 164.0000\n#> 5 Number of maximum plots (N) 450.0000 450.0000\n#> 6 Nj/N Ratio (Pj) 0.3200 0.3644\n#> 7 Stratum sum (Eyj) 84.5000 240.3000\n#> 8 Stratum quadratic sum (Eyj2) 538.3950 2955.9100\n#> 9 Mean of Yi per stratum (Yj) 6.0357 12.0150\n#> 10 PjSj2 0.6985 1.3179\n#> 11 PjSj 0.4728 0.6930\n#> 
12 PjYj 1.9314 4.3788\n#> 13 t-student 2.0032 2.0032\n#> 14 recalculated t-student 3.1824 3.1824\n#> 15 Number of samples regarding the admited error 8.0000 8.0000\n#> 16 Optimal number of samples per stratum (nj optimal) 2.0000 3.0000\n#> 17 Optimal number of samples (n optimal) 9.0000 9.0000\n#> 18 Total value of Y per stratum (Yhatj) 869.1429 1970.4600\n#> STRATA 3\n#> 1 1000.0000\n#> 2 23.0000\n#> 3 57.0000\n#> 4 142.0000\n#> 5 450.0000\n#> 6 0.3156\n#> 7 316.1000\n#> 8 4461.3350\n#> 9 13.7435\n#> 10 1.6785\n#> 11 0.7278\n#> 12 4.3368\n#> 13 2.0032\n#> 14 3.1824\n#> 15 8.0000\n#> 16 4.0000\n#> 17 9.0000\n#> 18 1951.5739\n#> \n#> $Table2\n#> Variables value\n#> 1 t-student 2.0032\n#> 2 Standard error of the mean (Sy) 0.2339\n#> 3 Stratified Variance 3.6949\n#> 4 Stratified Standard Deviation 1.8936\n#> 5 Variance Quoeficient (VC) 17.7851\n#> 6 Stratified Mean (Y) 10.6471\n#> 7 Absolute Error 0.4685\n#> 8 Relative Error (%) 4.4003\n#> 9 Estimated Total Value (Yhat) 4791.1768\n#> 10 Total Error 210.8250\n#> 11 Inferior Confidence Interval (m3) 10.1786\n#> 12 Superior Confidence Interval (m3) 11.1156\n#> 13 Inferior Confidence Interval (m3/ha) 101.7856\n#> 14 Superior Confidence Interval (m3/ha) 111.1556\n#> 15 inferior Total Confidence Interval (m3) 4580.3518\n#> 16 Superior Total Confidence Interval (m3) 5002.0018\n```\n\n## Citation\n\nTo cite this package in publications, use:\n\nABNT:\n\nBRAGA S. R.; OLIVEIRA, M. L. R.; GORGENS, E. B. forestmangr: Forest\nMensuration and Management. R package version 0.9.2, 2020. Dispon\xc3\xadvel\nem: \n\nAPA:\n\nSollano Rabelo Braga, Marcio Leles Romarco de Oliveira and Eric Bastos\nGorgens (2020). forestmangr: Forest Mensuration and Management. R\npackage version 0.9.2. \n\n## License\n\nThis project is licensed under the MIT License - see the\n[LICENSE](LICENSE) file for details\n\n## Acknowledgments\n\n - This project was developed at the Forest Management Lab, DEF, UFVJM,\n Diamantina/Minas Gerais - Brazil.\n\n - This project came to be as a means to make the life of a forestry\n engineer a little easier and more practical. We\xe2\x80\x99d like to thank everyone\n at UFVJM who has in any way helped this project grow.\n\n - We\xe2\x80\x99d like to thank UFVJM, FAPEMIG, CNPq and CAPES for the support.\n'",,"2018/10/08, 19:59:13",1843,CUSTOM,13,200,"2022/12/11, 22:43:03",2,3,3,2,318,0,0.0,0.0,"2021/08/16, 13:54:56",v0.9.4,0,1,false,,false,false,,,,,,,,,,, FATES,"A cohort model of vegetation competition and co-existence, allowing a representation of the biosphere which accounts for the division of the land surface into successional stages.",NGEET,https://github.com/NGEET/fates.git,github,,Forest Observation and Management,"2023/10/20, 23:30:18",85,0,10,true,Fortran,NGEE-Tropics,NGEET,"Fortran,Python,Shell,CMake",,"b""\n![FATES_logo](.github/images/logo_fates_small.png)\n------------------------------\n[![DOI](https://zenodo.org/badge/DOI/10.5281/zenodo.3825473.svg)](https://doi.org/10.5281/zenodo.3825473)\n\nThis repository holds the Functionally Assembled Terrestrial Ecosystem Simulator (FATES). FATES is a numerical terrestrial ecosystem model. Its development and support are primarily supported by the Department of Energy's Office of Science, through the Next Generation Ecosystem Experiment - Tropics ([NGEE-T](https://ngee-tropics.lbl.gov/)) project.\n\nFor more information on the FATES model, see our [User's Guide](https://fates-users-guide.readthedocs.io/en/latest/) and [technical documentation](https://fates-docs.readthedocs.io/en/latest/index.html). 
\n\nPlease submit any questions you may have to the [FATES Github Discussions board](https://github.com/NGEET/fates/discussions).\n\nTo receive email updates about forthcoming release tags, regular meeting notifications, and other important announcements, please join the [FATES Google group](https://groups.google.com/g/fates_model).\n\n## Important Guides:\n------------------------------\n\n[User's Guide](https://fates-users-guide.readthedocs.io/en/latest/)\n\n[How to Contribute](https://github.com/NGEET/fates/blob/master/CONTRIBUTING.md)\n\n[Table of FATES and Host Land Model API compatibility](https://fates-users-guide.readthedocs.io/en/latest/user/Table-of-FATES-API-and-HLM-STATUS.html)\n\n[List of Unsupported or Broken Features](https://fates-users-guide.readthedocs.io/en/latest/user/Current-Unsupported-or-Broken-Features.html)\n\n[Code of Conduct](https://github.com/NGEET/fates/blob/master/CODE_OF_CONDUCT.md)\n\n## Important Note:\n------------------------------\n\n**Most users should not need to directly clone this repository. FATES needs to be run through a host model, and all supported host-models are in charge of cloning and loading the fates software.**\n\nFATES can be run via the Energy Exascale Earth System Model (E3SM), the Community Earth System Model (CESM), or its land component, the Community Terrestrial Systems Model (CTSM).\n\nhttps://github.com/E3SM-Project/E3SM\n\nhttps://github.com/ESCOMP/cesm\n\nThe FATES, E3SM and CTSM teams maintain compatibility of the NGEET/FATES master branch with the E3SM master and CTSM master branches respectively. There may be some modest lag time before the latest commit on the FATES master branch is available to these host land models (HLMs) by default. This is typically correlated with FATES development updates forcing necessary changes to the FATES API. See the table of [FATES API/HLM compatibility](https://fates-users-guide.readthedocs.io/en/latest/user/Table-of-FATES-API-and-HLM-STATUS.html) for information on which fates tag corresponds to which HLM tag or commit. 
\n""",",https://doi.org/10.5281/zenodo.3825473","2015/12/09, 23:18:30",2877,CUSTOM,588,4363,"2023/10/20, 13:45:54",197,376,840,128,5,15,1.6,0.5479846449136276,"2020/06/08, 16:12:54",sci.1.36.0_api.11.2.0,0,29,false,,true,true,,,https://github.com/NGEET,http://ngee-tropics.lbl.gov/,,,,https://avatars.githubusercontent.com/u/14792952?v=4,,, DetecTree,A Pythonic library to classify tree/non-tree pixels from aerial imagery.,martibosch,https://github.com/martibosch/detectree.git,github,"tree-detection,tree-pixels,tree-canopy,remote-sensing,python,image-segmentation",Forest Observation and Management,"2022/10/24, 11:18:13",167,3,47,false,Python,,,"Python,TeX",https://doi.org/10.21105/joss.02172,"b""[![PyPI version fury.io](https://badge.fury.io/py/detectree.svg)](https://pypi.python.org/pypi/detectree/)\n[![Conda Version](https://img.shields.io/conda/vn/conda-forge/detectree.svg)](https://anaconda.org/conda-forge/detectree)\n[![Documentation Status](https://readthedocs.org/projects/detectree/badge/?version=latest)](https://detectree.readthedocs.io/en/latest/?badge=latest)\n[![Build Status](https://github.com/martibosch/detectree/workflows/tests/badge.svg?branch=main)](https://github.com/martibosch/detectree/actions?query=workflow%3A%22tests%22)\n[![codecov](https://codecov.io/gh/martibosch/detectree/branch/main/graph/badge.svg?token=ZTZK2LFR6T)](https://codecov.io/gh/martibosch/detectree)\n[![GitHub license](https://img.shields.io/github/license/martibosch/detectree.svg)](https://github.com/martibosch/detectree/blob/master/LICENSE)\n[![DOI](https://joss.theoj.org/papers/10.21105/joss.02172/status.svg)](https://doi.org/10.21105/joss.02172)\n[![DOI](https://zenodo.org/badge/DOI/10.5281/zenodo.3908338.svg)](https://doi.org/10.5281/zenodo.3908338)\n\n# DetecTree\n\n## Overview\n\nDetecTree is a Pythonic library to classify tree/non-tree pixels from aerial imagery, following the methods of Yang et al. [1]. The target audience is researchers and practitioners in GIS that are interested in two-dimensional aspects of trees, such as their proportional abundance and spatial distribution throughout a region of study. These measurements can be used to assess important aspects of urban planning such as the provision of urban ecosystem services. 
The approach is of special relevance when LIDAR data is not available or it is too costly in monetary or computational terms.\n\n```python\nimport detectree as dtr\nimport matplotlib.pyplot as plt\nimport rasterio as rio\nfrom rasterio import plot\n\n# select the training tiles from the tiled aerial imagery dataset\nts = dtr.TrainingSelector(img_dir='data/tiles')\nsplit_df = ts.train_test_split(method='cluster-I')\n\n# train a tree/non-tree pixel classifier\nclf = dtr.ClassifierTrainer().train_classifier(\n split_df=split_df, response_img_dir='data/response_tiles')\n\n# use the trained classifier to predict the tree/non-tree pixels\ntest_filepath = split_df[~split_df['train']].sample(1).iloc[0]['img_filepath']\ny_pred = dtr.Classifier().classify_img(test_filepath, clf)\n\n# side-by-side plot of the tile and the predicted tree/non-tree pixels\nfigwidth, figheight = plt.rcParams['figure.figsize']\nfig, axes = plt.subplots(1, 2, figsize=(2 * figwidth, figheight))\n\nwith rio.open(test_filepath) as src:\n plot.show(src.read(), ax=axes[0])\naxes[1].imshow(y_pred)\n```\n\n![Example](figures/example.png)\n\nA full example application of DetecTree to predict a tree canopy map for the Aussersihl district in Zurich [is available as a Jupyter notebook](https://github.com/martibosch/detectree-example/blob/master/notebooks/aussersihl-canopy.ipynb). See also [the API reference documentation](https://detectree.readthedocs.io/en/latest/?badge=latest) and the [example repository](https://github.com/martibosch/detectree-example) for more information on the background and some example notebooks.\n\n## Citation\n\nBosch M. 2020. \xe2\x80\x9cDetecTree: Tree detection from aerial imagery in Python\xe2\x80\x9d. *Journal of Open Source Software, 5(50), 2172.* [doi.org/10.21105/joss.02172](https://doi.org/10.21105/joss.02172)\n\nNote that DetecTree is based on the methods of Yang et al. [1], therefore it seems fair to reference their work too. An example citation in an academic paper might read as follows:\n\n> The classification of tree pixels has been performed with the Python library DetecTree (Bosch, 2020), which is based on the approach of Yang et al. (2009).\n\n## Installation\n\n### With conda\n\nThe easiest way to install `detectree` is with conda as in:\n\n``` bash\nconda install -c conda-forge detectree\n```\n\n### With pip\n\nYou can install `detectree` with pip as in:\n\n``` bash\npip install detectree\n```\n\nIf you want to be able to read compressed LAZ files, you will need [the Python bindings for `laszip`](https://github.com/tmontaigu/laszip-python). Note that the latter require [`laszip`], which can be installed using conda (which is automatically handled when installing `detectree` with conda as shown above) or downloaded from [laszip.org](https://laszip.org/). 
Then, detectree and the Python bindings for `laszip` can be installed with pip as in:\n\n``` bash\npip install detectree[laszip]\n```\n\n### Development install\n\nTo install a development version of detectree, you can first use conda to create an environment with all the dependencies - with the [`environment-dev.yml` file](https://github.com/martibosch/detectree/blob/main/environment-dev.yml) - and activate it as in:\n\n``` bash\nconda env create -f environment-dev.yml\nconda activate detectree-dev\n```\n\nand then clone the repository and use pip to install it in development mode\n\n```bash\ngit clone git@github.com:martibosch/detectree.git\ncd detectree/\npip install -e .\n```\n\nThis will also install the dependencies required for running tests, linting the code and building the documentation. Additionally, you can activate [pre-commit](https://pre-commit.com/) so that the latter are run as pre-commit hooks as in:\n\n```bash\npre-commit install\n```\n\n## See also\n\n* [lausanne-tree-canopy](https://github.com/martibosch/lausanne-tree-canopy): example computational workflow to get the tree canopy of Lausanne with DetecTree\n* [A video of a talk about DetecTree](https://www.youtube.com/watch?v=USwF2KyxVjY) in the [Applied Machine Learning Days of EPFL (2020)](https://appliedmldays.org/) and [its respective slides](https://martibosch.github.io/detectree-amld-2020)\n\n## Acknowledgments\n\n* With the support of the \xc3\x89cole Polytechnique F\xc3\xa9d\xc3\xa9rale de Lausanne (EPFL)\n\n## References\n\n1. Yang, L., Wu, X., Praun, E., & Ma, X. (2009). Tree detection from aerial imagery. In Proceedings of the 17th ACM SIGSPATIAL International Conference on Advances in Geographic Information Systems (pp. 131-137). ACM.\n""",",https://doi.org/10.21105/joss.02172,https://doi.org/10.5281/zenodo.3908338,https://doi.org/10.21105/joss.02172","2019/07/31, 07:52:10",1547,GPL-3.0,0,62,"2022/10/24, 10:49:55",0,2,13,0,366,0,0.0,0.01666666666666672,"2022/10/24, 11:07:52",v0.4.2,0,2,false,,false,true,"HujiGabarel/soccer_fields,CharanSriram/MildFire-m,Akula112233/mildfire",,,,,,,,,, Sentinel-Tree-Cover,This project maps tree extent at the ten-meter scale using open source artificial intelligence and satellite imagery.,wri,https://github.com/wri/sentinel-tree-cover.git,github,,Forest Observation and Management,"2023/06/07, 18:23:25",124,0,27,true,Jupyter Notebook,World Resources Institute,wri,"Jupyter Notebook,Python,TeX,HCL,Makefile,QML,Shell,Dockerfile",,"b'Mapping tree cover and extent with Sentinel-1 and 2\n==============================\n\n# Description\n\nThis is the GitHub repository for the Sentinel-1 and Sentinel-2 dataset Tropical Tree Cover, which is viewable on Google Earth Engine [here](https://jombrandt.users.earthengine.app/view/sentinel-tree-cover). The asset is public as of May 2023 on Google Earth Engine [here](https://code.earthengine.google.com/?asset=projects/wri-datalab/TropicalTreeCover). The dataset is published in [Remote Sensing of Environment](https://www.sciencedirect.com/science/article/pii/S0034425723001256).\n\nThis project maps tree extent at the ten-meter scale using open source artificial intelligence and satellite imagery. The data enables accurate reporting of tree cover in urban areas, tree cover on agricultural lands, and tree cover in open canopy and dry forest ecosystems.\n\nThis repository contains the source code for the project. 
A full description of the methodology can be found in the [publication](https://www.sciencedirect.com/science/article/pii/S0034425723001256). The data product specifications can be accessed on the wiki page.\n* [Background](https://github.com/wri/restoration-mapper/wiki/Product-Specifications#background)\n* [Data Extent](https://github.com/wri/restoration-mapper/wiki/Product-Specifications#data-extent)\n* [Methodology](https://github.com/wri/restoration-mapper/wiki/Product-Specifications#methodology)\n* [Validation and Analysis](https://github.com/wri/restoration-mapper/wiki/Product-Specifications#validation-and-analysis) | [Jupyter Notebook](https://github.com/wri/restoration-mapper/blob/master/notebooks/analysis/validation-analysis.ipynb)\n* [Definitions](https://github.com/wri/restoration-mapper/wiki/Product-Specifications#definitions)\n* [Limitations](https://github.com/wri/restoration-mapper/wiki/Product-Specifications#limitations)\n\n\n\n# Citation\n\nBrandt, J., Ertel, J., Spore, J., & Stolle, F. (2023). Wall-to-wall mapping of tree extent in the tropics with Sentinel-1 and Sentinel-2. Remote Sensing of Environment, 292, 113574. doi:10.1016/j.rse.2023.113574\n\nBrandt, J. & Stolle, F. (2021) A global method to identify trees outside of closed-canopy forests with medium-resolution satellite imagery. International Journal of Remote Sensing, 42:5, 1713-1737, DOI: 10.1080/01431161.2020.1841324\n\n![img](references/screenshots/demo.gif?raw=true)\n\n# Getting started\n\nAn overview Jupyter notebook walking through the creation of the data can be found [here](https://github.com/wri/sentinel-tree-cover/blob/master/notebooks/development/Tropical%20Tree%20Cover%20technical%20introduction.ipynb)\n\nA Google Earth Engine script to export Geotiffs of the extent data by country can be found [here](https://code.earthengine.google.com/e65c9fc97fa0827012dd28b74f59d32e)\n\n# Installation\n\nUtilizing this repository to generate your own data requires:\n* Sentinel-Hub API key, see [Sentinel-hub](http://sentinel-hub.com/)\n* Amazon Web Services API key (optional) with s3 read/write privileges\n\nThe API keys should be stored as `config.yaml` in the base directory with the structure:\n\n```\nkey: ""YOUR-SENTINEL-HUB-API-KEY""\nawskey: ""YOUR-AWS-API-KEY""\nawssecret: ""YOUR-AWS-API-SECRET""\n```\n\nThe code can be utilized without AWS by setting `--ul_flag False` in `download_and_predict_job.py`. By default, the pipeline will output satellite imagery and predictions in 6 x 6 km tiles to the `--s3_bucket` bucket. 
NOTE: The specific layer configurations for Sentinel-Hub have not yet been released but are available on request.\n\n## With Docker\n\n```\ngit clone https://github.com/wri/sentinel-tree-cover\ncd sentinel-tree-cover/\ntouch config.yaml\nvim config.yaml # insert your API keys here\ndocker build -t sentinel_tree_cover .\ndocker run -it --entrypoint /bin/bash sentinel_tree_cover:latest\ncd src\npython3 download_and_predict_job.py --country ""country"" --year year\n```\n\n## Without docker\n* Clone repository\n* Install dependencies `pip3 install -r requirements.txt`\n* Install GDAL (different process for different operating systems, see https://gdal.org)\n* Download model `python3 src/models/download_model.py`\n* Start Jupyter notebook and navigate to `notebooks/` folder\n\n# Usage\nThe `notebooks/` folder contains ordered notebooks for downloading training and testing data and training the model, as follows:\n* 1a-download-sentinel-2: downloads monthly mosaic 10 and 20 meter bands for training / testing plots\n* 1b-download-sentinel-1: downloads monthly VV-VH db sigma Sentinel-1 imagery for training / testing plots\n* 2-data-preprocessing: Combines satellite imagery for training / testing plots with labelled data from [Collect Earth Online](collect.earth)\n* 3-feature-selection: Feature selection for remote sensing indices utilizing random forests\n* 4-model: Trains and deploys tree cover model\n\n\nThe `src/` folder contains the source code for the project, as well as the primary entrypoint for the Docker container, `download_and_predict_job_fast.py`\n\n`download_and_predict_job_fast.py` can be used as follows, with additional optional arguments listed in the file: `python3 download_and_predict_job_fast.py --country $COUNTRY --year $YEAR`\n\n# Methodology\n\n## Model\nThis model uses a U-Net architecture with the following modifications:\n* [Convolutional GRU](https://papers.nips.cc/paper/5955-convolutional-lstm-network-a-machine-learning-approach-for-precipitation-nowcasting.pdf) encoder with group normalization to develop temporal features of monthly cloud-free mosaics\n* Concurrent spatial and channel squeeze excitation in both the encoder and decoder (https://arxiv.org/abs/1803.02579)\n* DropBlock and Zoneout for generalization in both the encoder and decoder\n* Group normalization and Swish activation in both the encoder and decoder\n* [AdaBound](https://arxiv.org/abs/1902.09843) optimizer with Stochastic Weight Averaging and Sharpness Aware Minimization\n* Binary cross entropy and boundary loss\n* Smoothed image predictions across moving windows with Gaussian filters\n* A much larger input (28x28) than output (14x14) at training time, with 182x182 and 168x168 input and output size in production, respectively\n\n![img4](references/readme/model_diagram.png?raw=true)\n\n## Data\nThis project uses Sentinel 1 and Sentinel 2 imagery. Monthly composites of Sentinel 1 VV-VH imagery are fused with the nearest Sentinel 2 10- and 20-meter bands. 
These images are preprocessed by:\n* Super-resolving 20m bands to 10m with DSen2\n* Calculating cloud cover and cloud shadow masks\n* Removing steps with >30% cloud cover, and linearly interpolating to remove clouds and shadows from <30% cloud cover images\n![img](references/readme/cloud_removal.gif?raw=true)\n* Applying Whittaker smoothing (lambda = 100) to each time series for each pixel for each band to reduce noise\n![img](references/screenshots/datasmooth.png?raw=true)\n* Calculating vegetation indices, including EVI, BI, and MSAVI2\n\nThe cloud / shadow removal and temporal mosaicing algorithm is summarized below:\n* Select all images with <30% cloud cover\n* Select up to two images per month with <30% cloud cover, closest to beginning and middle of month\n* Select least cloudy image if max CC > 15%, otherwise select the image closest to the middle of the month\n* Linearly interpolate clouds and cloud shadows with a rolling median\n* Smooth time series data with a rolling median\n* Linearly interpolate image stack to a 15 day timestep\n* Smooth time stack with Whittaker smoother\n\n# License\n\nThe code is released under the GNU General Public License v3.0.\n\n# Project Organization\n------------\n\n \xe2\x94\x9c\xe2\x94\x80\xe2\x94\x80 LICENSE\n \xe2\x94\x9c\xe2\x94\x80\xe2\x94\x80 Makefile <- Makefile with commands like `make data` or `make train`\n \xe2\x94\x9c\xe2\x94\x80\xe2\x94\x80 README.md <- The top-level README for developers using this project.\n \xe2\x94\x9c\xe2\x94\x80\xe2\x94\x80 docs <- A default Sphinx project; see sphinx-doc.org for details\n \xe2\x94\x82\n \xe2\x94\x9c\xe2\x94\x80\xe2\x94\x80 models <- Trained and serialized models, model predictions, or model summaries\n \xe2\x94\x82\n \xe2\x94\x9c\xe2\x94\x80\xe2\x94\x80 notebooks <- Jupyter notebooks\n \xe2\x94\x82\xc2\xa0\xc2\xa0 \xe2\x94\x94\xe2\x94\x80\xe2\x94\x80 baseline\n \xe2\x94\x82\xc2\xa0\xc2\xa0 \xe2\x94\x94\xe2\x94\x80\xe2\x94\x80 replicate-paper\n \xe2\x94\x82\xc2\xa0\xc2\xa0 \xe2\x94\x94\xe2\x94\x80\xe2\x94\x80 visualization\n \xe2\x94\x82\n \xe2\x94\x9c\xe2\x94\x80\xe2\x94\x80 references <- Data dictionaries, manuals, and all other explanatory materials.\n \xe2\x94\x82\n \xe2\x94\x9c\xe2\x94\x80\xe2\x94\x80 requirements.txt <- The requirements file for reproducing the analysis environment, e.g.\n \xe2\x94\x82 generated with `pip freeze > requirements.txt`\n \xe2\x94\x82\n \xe2\x94\x9c\xe2\x94\x80\xe2\x94\x80 setup.py <- makes project pip installable (pip install -e .) 
so src can be imported\n \xe2\x94\x9c\xe2\x94\x80\xe2\x94\x80 src <- Source code for use in this project.\n \xe2\x94\x82\xc2\xa0\xc2\xa0 \xe2\x94\x9c\xe2\x94\x80\xe2\x94\x80 __init__.py <- Makes src a Python module\n \xe2\x94\x82 \xe2\x94\x82\n \xe2\x94\x82\xc2\xa0\xc2\xa0 \xe2\x94\x9c\xe2\x94\x80\xe2\x94\x80 data <- Scripts to download or generate data\n \xe2\x94\x82\xc2\xa0\xc2\xa0 \xe2\x94\x82\xc2\xa0\xc2\xa0 \xe2\x94\x94\xe2\x94\x80\xe2\x94\x80 make_dataset.py\n \xe2\x94\x82 \xe2\x94\x82\n \xe2\x94\x82\xc2\xa0\xc2\xa0 \xe2\x94\x9c\xe2\x94\x80\xe2\x94\x80 features <- Scripts to turn raw data into features for modeling\n \xe2\x94\x82\xc2\xa0\xc2\xa0 \xe2\x94\x82\xc2\xa0\xc2\xa0 \xe2\x94\x94\xe2\x94\x80\xe2\x94\x80 build_features.py\n \xe2\x94\x82 \xe2\x94\x82\n \xe2\x94\x82\xc2\xa0\xc2\xa0 \xe2\x94\x9c\xe2\x94\x80\xe2\x94\x80 models <- Scripts to train models and then use trained models to make\n \xe2\x94\x82 \xe2\x94\x82 \xe2\x94\x82 predictions\n \xe2\x94\x82\xc2\xa0\xc2\xa0 \xe2\x94\x82\xc2\xa0\xc2\xa0 \xe2\x94\x9c\xe2\x94\x80\xe2\x94\x80 predict_model.py\n \xe2\x94\x82\xc2\xa0\xc2\xa0 \xe2\x94\x82\xc2\xa0\xc2\xa0 \xe2\x94\x94\xe2\x94\x80\xe2\x94\x80 train_model.py\n \xe2\x94\x82 \xe2\x94\x82\n \xe2\x94\x82\xc2\xa0\xc2\xa0 \xe2\x94\x94\xe2\x94\x80\xe2\x94\x80 visualization <- Scripts to create exploratory and results oriented visualizations\n \xe2\x94\x82\xc2\xa0\xc2\xa0 \xe2\x94\x94\xe2\x94\x80\xe2\x94\x80 visualize.py\n \xe2\x94\x82\n \xe2\x94\x94\xe2\x94\x80\xe2\x94\x80 tox.ini <- tox file with settings for running tox; see tox.testrun.org\n\n\n--------\n'",",https://arxiv.org/abs/1803.02579,https://arxiv.org/abs/1902.09843","2020/02/04, 16:49:33",1359,GPL-3.0,16,230,"2023/05/19, 16:51:59",6,26,41,1,159,2,0.3,0.0625,"2020/07/21, 15:36:41",v1.1.0,0,2,false,,false,false,,,https://github.com/wri,https://wri.org,"Washington, DC",,,https://avatars.githubusercontent.com/u/4615146?v=4,,, Bioverse Labs,Python scripts using usual frameworks in Deep Learning for pattern recognition on forest environments.,Bioverse-Labs,https://github.com/Bioverse-Labs/deep-learning.git,github,,Forest Observation and Management,"2023/04/19, 16:31:12",5,0,0,true,Python,Bioverse Labs,Bioverse-Labs,Python,,"b'\n\n
\n\n# Bioverse Labs (deep-learning module)\nThe deep-learning module incorporates essential procedures to prepare remote sensing images, mostly for automatic classification methods, in this case specifically deep-learning approaches. \n\nBeyond the routines, this code was prepared for personal use and is ready to be adapted to run server-side. The aim is to incorporate some of the main Deep Learning models for remote sensing image analysis and mapping. In this version, the following DL architecture was tested:\n- [UNet](https://lmb.informatik.uni-freiburg.de/Publications/2015/RFB15a/)\n\n**All modules available here are under construction. Therefore, errors and malfunctions may occur for your specific needs.**\n\n## Summary\n* [Setting up your environment](#setting-up-your-python-environment)\n* [Python version and OS](#python-version-and-os)\n* [Prepare your virtual environment](#prepare-your-virtual-environment)\n* [Preparing your `.env` file](#preparing-your-env-file)\n* [Installing `requirements.txt`](#installing-requirementstxt) \n* [The `settings.py` file](#the-settingspy-file)\n* [The hierarchy of folders](#the-hierarchy-of-folders)\n* [NVIDIA\'s driver and CUDA for Ubuntu 20.4](#nvidias-driver-and-cuda-for-ubuntu-204)\n* [Examples ](#examples)\n* [Training the model](#training-the-model)\n* [Predicting with an existent weight](#predicting-with-an-existent-weight)\n* [TODO-list](#todo-list)\n\n\n# Setting up your Python environment\nThis command-line solution was mainly used for the training and prediction of remote sensing images, which make a more dynamic use of image bands and have a wider radiometric range (normally 16 bits), and it was created to be permissive and easily extended for very similar (remote sensing) purposes. Most of the methods and libraries presented were used to solve these particular problems, and there are certainly many alternatives out there. Please, feel free to share and contribute. \n\n## Python version and OS\nThe `deep-learning` module was developed using Python 3.7+ on the **Ubuntu 20.04 LTS (focal fossa)** operating system. \n\n# Prepare your virtual environment\nFirst of all, check if you have the needed libraries installed:\n```\nsudo apt-get install python3-venv\n```\nthen create the virtual environment:\n```\npython -m venv .venv\n```\nand activate it:\n```\nsource .venv/bin/activate\n```\nAs soon as that is done, you are ready to install the requirements.\n\n## Preparing your `.env` file\nThis library uses decoupling, which requires you to set up variables that are only present locally, for instance the path where you want to save something, or the resources of your project; in summary, your environment variables. So, copy the file `.env-example`, rename it to `.env`, and fill out the content of each variable within the file:\n```\nDL_DATASET=PATH_TO_TRAINING_FOLDER\n```\n\n## Installing `requirements.txt`\nIf you do not intend to use a GPU, there is no need to install support for it. So, in the requirements file, make sure to change `tensorflow-gpu` to just `tensorflow`. If everything is correct and your **virtualenv** is activated, execute: \n```\npip install -r requirements.txt\n```\n\nThe one extra requirement not listed in `requirements.txt` is the GDAL library, which has to be installed according to the version installed globally on your computer, so pip has to target the same version during installation. The procedure for its dependencies can be found in `INSTALL_GDAL` in the root folder. 
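\n\nAs a quick sanity check (a sketch, not part of this repository), you can confirm from Python which version pip must target; run it after installing the system packages below, since it assumes `gdal-config` is available on your PATH:\n```\nimport subprocess\n\n# ask the system GDAL for its version, e.g. 3.0.4\nversion = subprocess.run([\'gdal-config\', \'--version\'],\n                         capture_output=True, text=True).stdout.strip()\n\n# pip must install a matching major.minor release of the GDAL bindings\nprint(\'pip install GDAL==\' + \'.\'.join(version.split(\'.\')[:2]))\n```\n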
\n\nIn order to install GDAL, execute:\n```\nsudo apt install gcc g++ libxml2-dev libxslt1-dev zlib1g-dev\n```\nthen,\n```\nsudo apt-get install gdal-bin libgdal-dev\n```\n\nWith your virtual environment activated, run:\n```\npip install GDAL==$(gdal-config --version | awk -F\'[.]\' \'{print $1"".""$2}\')\n```\n\nThis automatically reads your system GDAL version and pip installs the matching Python binding. \n\n## The `settings.py` file\nThis file centralizes all constant variables used in the code, in particular the constants that configure the DL models. Thus, the Python dictionary `DL_PARAM` splits all the values and parameters by model type. In this case, only the UNet architecture was implemented:\n```\nDL_PARAM = {\n \'unet\': {\n \'image_training_folder\': os.path.join(DL_DATASET, \'samples\', LABEL_TYPE),\n \'annotation_training_folder\': os.path.join(DL_DATASET, \'samples\', LABEL_TYPE),\n \'output_checkpoints\': os.path.join(DL_DATASET, \'predictions\', \'256\', \'weight\'),\n \'output_history\': os.path.join(DL_DATASET, \'predictions\', \'256\', \'history\'),\n \'save_model_dir\': os.path.join(DL_DATASET, \'predictions\', \'256\', \'model\'),\n \'tensorboard_log_dir\': os.path.join(DL_DATASET, \'predictions\', \'256\', \'log\'),\n \'pretrained_weights\': \'model-input256-256-batch8-drop05-best.hdf5\',\n \'image_prediction_folder\': os.path.join(DL_DATASET, \'test\', \'big\'),\n \'output_prediction\': os.path.join(DL_DATASET, \'predictions\', \'256\', \'inference\'),\n \'output_prediction_shp\': os.path.join(DL_DATASET, \'predictions\', \'256\', \'shp\'),\n \'tmp_slices\': os.path.join(DL_DATASET, \'tmp\', \'tmp_slice\'),\n \'tmp_slices_predictions\': os.path.join(DL_DATASET, \'tmp\', \'tmp_slice_predictions\'),\n \'input_size_w\': 256,\n \'input_size_h\': 256,\n \'input_size_c\': 3,\n \'batch_size\': 8,\n \'learning_rate\': 0.0001,\n \'filters\': 64,\n \'kernel_size\': 3,\n \'deconv_kernel_size\': 3,\n \'pooling_size\': 2,\n \'pooling_stride\': 2,\n \'dropout_rate\': 0.5,\n \'epochs\': 500,\n \'classes\': {\n ""other"": [0, 0, 0],\n ""nut"": [102, 153, 0],\n ""palm"": [153, 255, 153]\n },\n \'color_classes\': {0: [0, 0, 0], 1: [102, 153, 0], 2: [153, 255, 153]},\n \'width_slice\': 256,\n \'height_slice\': 256\n }\n}\n```\nIn this way, if a new model is introduced to the code, a new key is added to this dictionary with its respective name; the code will then automatically load all the parameters according to the type of model the user chooses in the `-model` command-line option. \n\n\n## The hierarchy of folders\nIt is highly recommended to prepare the hierarchy of folders as described in this section. When the training samples are built, as described in [bioverse image-processing](https://github.com/Bioverse-Labs/image-processing), four main folders are created: one for raster, one for the annotation (i.e. ground-truth, label, reference images), one to save the predictions (i.e. inferences), and finally one to store the validation samples. 
\n\n## The hierarchy of folders\nIt is highly recommended to prepare the hierarchy of folders as described in this section. When the training samples are built, as described in [bioverse image-processing](https://github.com/Bioverse-Labs/image-processing), four main folders are created: one for the raster images, one for the annotations (i.e. ground-truth, label, reference images), one to save the predictions (i.e. inferences), and finally one to store the validation samples. Besides, in order to conduct multiple tests, such as different dimensions and classes of training samples, subfolders are also created under each folder, such as:\n\n```\nsamples\n\xe2\x94\x94\xe2\x94\x80\xe2\x94\x80 classid\n    \xe2\x94\x9c\xe2\x94\x80\xe2\x94\x80 training\n    \xe2\x94\x82   \xe2\x94\x9c\xe2\x94\x80\xe2\x94\x80 image    :: images in TIF extension\n    \xe2\x94\x82   \xe2\x94\x94\xe2\x94\x80\xe2\x94\x80 label    :: annotations in PNG extension\n    \xe2\x94\x94\xe2\x94\x80\xe2\x94\x80 validation\n        \xe2\x94\x9c\xe2\x94\x80\xe2\x94\x80 image    :: images in PNG extension\n        \xe2\x94\x94\xe2\x94\x80\xe2\x94\x80 label    :: annotations in PNG extension\npredictions\n\xe2\x94\x94\xe2\x94\x80\xe2\x94\x80 256\n    \xe2\x94\x9c\xe2\x94\x80\xe2\x94\x80 history\n    \xe2\x94\x9c\xe2\x94\x80\xe2\x94\x80 inference\n    \xe2\x94\x9c\xe2\x94\x80\xe2\x94\x80 weight\n    \xe2\x94\x9c\xe2\x94\x80\xe2\x94\x80 model\n    \xe2\x94\x94\xe2\x94\x80\xe2\x94\x80 log\n```\n\nThis folder hierarchy is only a suggestion, not mandatory; just make sure the paths are correctly set in the `settings.py` file (a helper that creates this layout is sketched below).
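\n\nAs a convenience, a short sketch that creates the suggested layout under `DL_DATASET` (this helper is illustrative and not part of the module):\n```\n# sketch: create the suggested folder hierarchy (assumes python-decouple)\nimport os\n\nfrom decouple import config\n\nDL_DATASET = config(\'DL_DATASET\')\n\nfolders = [os.path.join(\'samples\', \'classid\', split, kind)\n           for split in (\'training\', \'validation\')\n           for kind in (\'image\', \'label\')]\nfolders += [os.path.join(\'predictions\', \'256\', name)\n            for name in (\'history\', \'inference\', \'weight\', \'model\', \'log\')]\n\nfor folder in folders:\n    # exist_ok avoids failing when part of the tree is already in place\n    os.makedirs(os.path.join(DL_DATASET, folder), exist_ok=True)\n```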
\n\n\n## NVIDIA\'s driver and CUDA for Ubuntu 20.04\nMost of the processing and research involving Deep Learning (DL) methodologies demands a certain computational power. Recently, the use of GPUs has expanded the horizon of heavy machine learning processing, such as DL demands. \n\nTensorFlow GPU support requires an assortment of drivers and libraries. To simplify installation and avoid library conflicts, the TensorFlow team recommends using a [TensorFlow Docker Image](https://www.tensorflow.org/install/docker), which incorporates all the setup needed for this kind of procedure. For more details, please access the [TensorFlow official page](https://www.tensorflow.org/install/gpu).\n\nIf you are not using a Docker image, there are many tutorials on how to install the NVIDIA driver and CUDA toolkit, such as those from [Google Cloud](https://cloud.google.com/compute/docs/gpus/install-drivers-gpu#ubuntu-driver-steps) and [NVIDIA Dev](https://developer.nvidia.com/cuda-downloads). The steps presented here may vary according to your needs: if you want to prepare your environment on a completely different OS or architecture, just follow the corresponding steps on the provider\'s website, and make sure to have all the CUDA libraries listed in `LD_LIBRARY_PATH` (see the [NVIDIA CUDA Toolkit documentation](https://docs.nvidia.com/cuda/cuda-installation-guide-linux/index.html) for more details).\n\nFor Linux, x86_64, TensorFlow 2.1+, and Ubuntu 20.04 LTS, it is first necessary to install all software requirements, which include: \n- NVIDIA\xc2\xae GPU drivers (CUDA\xc2\xae 10.1 requires 418.x or higher)\n- CUDA\xc2\xae Toolkit (TensorFlow supports CUDA\xc2\xae 10.1 for TensorFlow >= 2.1.0)\n- CUPTI, which ships with the CUDA\xc2\xae Toolkit\n- cuDNN SDK 7.6\n- (Optional) TensorRT 6.0 to improve latency and throughput for inference on some models\n\nTo install the CUDA\xc2\xae Toolkit on Ubuntu 20.04 (these instructions may work for other Debian-based distros):\n```\nwget https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2004/x86_64/cuda-ubuntu2004.pin\nsudo mv cuda-ubuntu2004.pin /etc/apt/preferences.d/cuda-repository-pin-600\nwget https://developer.download.nvidia.com/compute/cuda/11.0.3/local_installers/cuda-repo-ubuntu2004-11-0-local_11.0.3-450.51.06-1_amd64.deb\nsudo dpkg -i cuda-repo-ubuntu2004-11-0-local_11.0.3-450.51.06-1_amd64.deb\nsudo apt-key add /var/cuda-repo-ubuntu2004-11-0-local/7fa2af80.pub\nsudo apt-get update\nsudo apt-get -y install cuda\n```\nthen, reboot the system.\n\nAs mentioned before, TensorFlow will look for some of the CUDA libraries during training. As reported by many users, it is possible that some of them are installed in a different location in your filesystem. To guarantee that your `LD_LIBRARY_PATH` points to the right folders, add the following lines to your `~/.bashrc` file using `nano` or any other editor (replace `XXX` with your CUDA version):\n```\nexport PATH=/usr/local/cuda-XXX/bin${PATH:+:${PATH}}\nexport LD_LIBRARY_PATH=/usr/local/cuda-XXX/lib64${LD_LIBRARY_PATH:+:${LD_LIBRARY_PATH}}\n```\nright after, reload it:\n```\nsource ~/.bashrc\n```\n\nIf you followed all steps and have everything installed properly, you are ready to train your model!\n\n> For more details, follow the issues reported [here](https://stackoverflow.com/questions/60208936/cannot-dlopen-some-gpu-libraries-skipping-registering-gpu-devices) and [here](https://askubuntu.com/questions/1145946/tensorflow-wont-import-with-sudo-python3).\n\n# Examples \n## Training the model\nAfter validating all paths and parameters in `settings.py`, the training can be performed with the following command line:\n```\npython main.py -model unet -train True -predict False -verbose True\n```\nThe runtime logging will print something like the following (note that all CUDA libraries must be loaded correctly, otherwise the GPU will not be registered and the CPU is used instead):\n```\n(.venv) user@user-machine:~/deep-learning$ python main.py -model unet -train True -predict False -verbose True\n2020-09-23 20:43:31.621367: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudart.so.10.1\n[2020-09-23 20:43:42] {main.py :63 } INFO : Starting process... \n[2020-09-23 20:43:42] {main.py :39 } INFO : >> UNET model selected... \n[2020-09-23 20:43:42] {unet.py :19 } INFO : >>>> Settings up UNET model... 
\n2020-09-23 20:43:43.281992: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcuda.so.1\n2020-09-23 20:43:43.654774: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:982] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero\n2020-09-23 20:43:43.655488: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1716] Found device 0 with properties: \npciBusID: 0000:00:04.0 name: Tesla P4 computeCapability: 6.1\ncoreClock: 1.1135GHz coreCount: 20 deviceMemorySize: 7.43GiB deviceMemoryBandwidth: 178.99GiB/s\n2020-09-23 20:43:43.655531: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudart.so.10.1\n2020-09-23 20:43:43.655713: W tensorflow/stream_executor/platform/default/dso_loader.cc:59] Could not load dynamic library \'libcublas.so.10\'; dlerror: libcublas.so.10: cannot open shared object file: No such file or directory\n2020-09-23 20:43:44.191179: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcufft.so.10\n2020-09-23 20:43:44.398087: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcurand.so.10\n2020-09-23 20:43:45.234487: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcusolver.so.10\n2020-09-23 20:43:45.458019: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcusparse.so.10\n2020-09-23 20:43:46.809363: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudnn.so.7\n2020-09-23 20:43:46.809421: W tensorflow/core/common_runtime/gpu/gpu_device.cc:1753] Cannot dlopen some GPU libraries. Please make sure the missing libraries mentioned above are installed properly if you would like to use GPU. Follow the guide at https://www.tensorflow.org/install/gpu for how to download and setup the required libraries for your platform.\nSkipping registering GPU devices...\n2020-09-23 20:43:46.809732: I tensorflow/core/platform/cpu_feature_guard.cc:142] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN)to use the following CPU instructions in performance-critical operations: AVX2 FMA\nTo enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.\n2020-09-23 20:43:47.219185: I tensorflow/core/platform/profile_utils/cpu_utils.cc:104] CPU Frequency: 2300000000 Hz\n2020-09-23 20:43:47.220090: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x27b8e30 initialized for platform Host (this does not guarantee that XLA will be used). Devices:\n2020-09-23 20:43:47.220121: I tensorflow/compiler/xla/service/service.cc:176] StreamExecutor device (0): Host, Default Version\n2020-09-23 20:43:47.237334: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1257] Device interconnect StreamExecutor with strength 1 edge matrix:\n2020-09-23 20:43:47.237365: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1263] \n[2020-09-23 20:43:48] {unet.py :74 } INFO : >>>> Done! \nFound 2636 images belonging to 1 classes.\nFound 2636 images belonging to 1 classes.\nEpoch 1/500\n 4/50 [=>............................] 
- ETA: 2:40 - loss: 0.6931 - accuracy: 0.8943\n...\n```\n\nThe model, as well as the training history (with the evolution of accuracy, loss, and other metrics), will be saved in the paths indicated in `settings.py`.\n\n## Predicting with an existent weight\n\nFirst of all, make sure the `.hdf5` weight file is correctly set in the network\'s `pretrained_weights` parameter. After validating all the other paths in `settings.py`, the inferences/predictions can be performed with the following command line:\n```\npython main.py -model unet -train False -predict True -verbose True\n```\n\nThe prediction procedure covers two cases: (i) images whose dimensions are equal to the dimensions of the samples used during training, and (ii) images whose dimensions are larger. Besides, the inferences handle two classes of images: images without any geographic information, and images with geographic information. The difference is that for images with no geographic metadata, the polygonization (the process of converting the PNG prediction into an ESRI Shapefile, i.e. geographic vectors) **will not** be performed. \n\n> In this section, we focus specifically on how case (ii) was implemented. \n\nTaking a large geographic image as an example, the figure below shows how the inference is made. First (a), the large image is tiled so that each tile has the same dimensions the model was trained with. \n\n\n\nIn order to prevent discontinuous predictions between neighbouring tiles, a buffer is applied (see (a) and the sketch below). The buffer can also be configured in `settings.py`, with the `BUFFER_TO_INFERENCE` variable, where the integer value represents the number of pixels used for buffering. Thus, zero will perform the inferences without buffering. The maximum buffering value is half of each tile\'s dimension.\n\nAfter prediction, each tile has a corresponding segmentation (see (b)). Once every single tile composing the image has been predicted, the predictions are merged (c). Due to the buffering, the discontinuities are minimized during merging, yielding a more consistent map in the end (d).\n\n> The predictions in PNG will be placed in `output_prediction`. If it is a large image, the tiles\' predictions are placed first in `tmp_slices_predictions`; then, the merging procedure will collect all tiles and place the merged predictions in `output_prediction`. When done, the polygonization is performed (only for geographic files). The final vector file is placed in `output_prediction_shp`. 
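\n\nFor illustration, a minimal sketch of the buffered tiling idea described above (the function and parameter names are hypothetical; the module\'s actual implementation may differ):\n```\n# sketch: buffered read windows over a large image (illustrative only)\ndef tile_windows(width, height, tile_size=256, buffer_px=32):\n    # Yield (x0, y0, x1, y1) windows: each core tile is tile_size wide,\n    # expanded by buffer_px on every side (clipped at the image border),\n    # so neighbouring predictions overlap and can be merged smoothly.\n    for y in range(0, height, tile_size):\n        for x in range(0, width, tile_size):\n            x0 = max(x - buffer_px, 0)\n            y0 = max(y - buffer_px, 0)\n            x1 = min(x + tile_size + buffer_px, width)\n            y1 = min(y + tile_size + buffer_px, height)\n            yield x0, y0, x1, y1\n\n# e.g. windows for a 1000 x 800 pixel image, 256 px tiles, 32 px buffer\nwindows = list(tile_windows(1000, 800))\n```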
\n\n# TODO-list\nThis source code is being released for specific applications, and for those who probably have similar needs. For this reason, we still have a lot to do in terms of unit tests, Python conventions, optimization, refactoring, and so on! So, feel free to use it; any recommendations or PRs are totally welcome!\n\n- ~~refactor docstrings~~\n- include alternatives to the set of DL models:\n  - ~~unet~~\n  - pspnet\n  - deeplabv3+\n  - segnet\n- ~~finish inference procedure~~:\n  - ~~single image~~\n  - ~~multiple images~~\n  - ~~dynamic resolution~~\n- unittest over basic methods: filesystem/IO, organization, params\n\n\n'",,"2020/07/14, 13:19:46",1198,MIT,4,210,"2022/03/01, 01:05:49",3,6,6,0,603,1,0.0,0.030456852791878153,,,0,6,false,,false,false,,,https://github.com/Bioverse-Labs,https://www.bioverselabs.com/,,,,https://avatars.githubusercontent.com/u/68245846?v=4,,,
rGEDI,An R Package for NASA's Global Ecosystem Dynamics Investigation (GEDI) Data Visualization and Processing.,carlos-alberto-silva,https://github.com/carlos-alberto-silva/rGEDI.git,github,,Forest Observation and Management,"2023/10/25, 22:27:46",132,0,15,true,R,,,R,,"b'![](https://github.com/carlos-alberto-silva/rGEDI/blob/master/readme/fig1.png)
\n\n[![R-CMD-check](https://github.com/carlos-alberto-silva/rGEDI/actions/workflows/r.yml/badge.svg?branch=master)](https://github.com/carlos-alberto-silva/rGEDI/actions/workflows/r.yml)\n[![CRAN](https://www.r-pkg.org/badges/version/rGEDI)](https://cran.r-project.org/package=rGEDI)\n![Github](https://img.shields.io/badge/Github-0.1.12-green.svg)\n![licence](https://img.shields.io/badge/Licence-GPL--3-blue.svg) \n![Downloads](https://cranlogs.r-pkg.org/badges/grand-total/rGEDI)\n[![Build Status](https://travis-ci.com/carlos-alberto-silva/rGEDI.svg?token=Jqizwyc6gBxNafNccTdU&branch=master)](https://travis-ci.com/carlos-alberto-silva/rGEDI)\n\n**rGEDI: An R Package for NASA\'s Global Ecosystem Dynamics Investigation (GEDI) Data Visualization and Processing.**\n\nAuthors: Carlos Alberto Silva, Caio Hamamura, Ruben Valbuena, Steven Hancock, Adrian Cardil, Eben N. Broadbent, Danilo R. A. de Almeida, Celso H. L. Silva Junior and Carine Klauberg \n\nThe rGEDI package provides functions for i) downloading, ii) visualizing, iii) clipping, iv) gridding, v) simulating and vi) exporting GEDI data.\n\n# Getting Started\n\n## Installation\n```r\n# The CRAN version:\ninstall.packages(""rGEDI"")\n\n# The development version:\nlibrary(devtools)\ndevtools::install_git(""https://github.com/carlos-alberto-silva/rGEDI"", dependencies = TRUE)\n\n# loading rGEDI package\nlibrary(rGEDI)\n\n``` \n\n## Find GEDI data within your study area (GEDI finder tool)\n```r\n# Study area bounding box coordinates\nul_lat<- -44.0654\nlr_lat<- -44.17246\nul_lon<- -13.76913\nlr_lon<- -13.67646\n\n# Specifying the date range\ndaterange=c(""2019-07-01"",""2020-05-22"")\n\n# Get path to GEDI data\ngLevel1B<-gedifinder(product=""GEDI01_B"",ul_lat, ul_lon, lr_lat, lr_lon,version=""002"",daterange=daterange)\ngLevel2A<-gedifinder(product=""GEDI02_A"",ul_lat, ul_lon, lr_lat, lr_lon,version=""002"",daterange=daterange)\ngLevel2B<-gedifinder(product=""GEDI02_B"",ul_lat, ul_lon, lr_lat, lr_lon,version=""002"",daterange=daterange)\n```\n## Downloading GEDI data\n```r\n# Set output dir for downloading the files\noutdir=getwd()\n\n# Downloading GEDI data\ngediDownload(filepath=gLevel1B,outdir=outdir)\ngediDownload(filepath=gLevel2A,outdir=outdir)\ngediDownload(filepath=gLevel2B,outdir=outdir)\n\n#######\n# Herein, we are using only a GEDI sample dataset for this tutorial.\n#######\n# downloading zip file\ndownload.file(""https://github.com/carlos-alberto-silva/rGEDI/releases/download/datasets/examples.zip"",destfile=file.path(outdir, ""examples.zip""))\n\n# unzip file \nunzip(file.path(outdir,""examples.zip""))\n\n```\n\n## Reading GEDI data\n```r\n# Reading GEDI data\ngedilevel1b<-readLevel1B(level1Bpath = file.path(outdir,""GEDI01_B_2019108080338_O01964_T05337_02_003_01_sub.h5""))\ngedilevel2a<-readLevel2A(level2Apath = file.path(outdir,""GEDI02_A_2019108080338_O01964_T05337_02_001_01_sub.h5""))\ngedilevel2b<-readLevel2B(level2Bpath = file.path(outdir,""GEDI02_B_2019108080338_O01964_T05337_02_001_01_sub.h5""))\n```\n\n## Get GEDI Pulse Geolocation (GEDI Level1B)\n```r\nlevel1bGeo<-getLevel1BGeo(level1b=gedilevel1b,select=c(""elevation_bin0""))\nhead(level1bGeo)\n\n## shot_number latitude_bin0 latitude_lastbin longitude_bin0 longitude_lastbin elevation_bin0\n## 1: 19640002800109382 -13.75903 -13.75901 -44.17219 -44.17219 784.8348\n## 2: 19640003000109383 -13.75862 -13.75859 -44.17188 -44.17188 799.0491\n## 3: 19640003200109384 -13.75821 -13.75818 -44.17156 -44.17156 814.4647\n## 4: 19640003400109385 -13.75780 -13.75777 -44.17124 -44.17124 
820.1437\n## 5: 19640003600109386 -13.75738 -13.75736 -44.17093 -44.17093 821.7012\n## 6: 19640003800109387 -13.75697 -13.75695 -44.17061 -44.17061 823.2526\n\n# Converting shot_number as ""integer64"" to ""character""\nlevel1bGeo$shot_number<-as.character(level1bGeo$shot_number)\n\n# Converting level1bGeo as data.table to SpatialPointsDataFrame\nlibrary(sp)\nlevel1bGeo_spdf<-SpatialPointsDataFrame(cbind(level1bGeo$longitude_bin0, level1bGeo$latitude_bin0),\n data=level1bGeo)\n\n# Exporting level1bGeo as ESRI Shapefile\nraster::shapefile(level1bGeo_spdf,file.path(outdir,""GEDI01_B_2019108080338_O01964_T05337_02_003_01_sub""))\n```\n\nThe pulse geolocations can be viewed interactively with leaflet:\n\n```r\nlibrary(leaflet)\nlibrary(leafsync)\n\nleaflet() %>%\n addCircleMarkers(level1bGeo$longitude_bin0,\n level1bGeo$latitude_bin0,\n radius = 1,\n opacity = 1,\n color = ""red"") %>%\n addScaleBar(options = list(imperial = FALSE)) %>%\n addProviderTiles(providers$Esri.WorldImagery) %>%\n addLegend(colors = ""red"", labels= ""Samples"",title =""GEDI Level1B"")\n```\n\n## Get GEDI Full-waveform (GEDI Level1B)\n```r\n# Extracting the GEDI full-waveform for a given shot_number\nwf <- getLevel1BWF(gedilevel1b, shot_number=""19640521100108408"")\n\npar(mfrow = c(1,2), mar=c(4,4,1,1), cex.axis = 1.5)\n\nplot(wf, relative=FALSE, polygon=TRUE, type=""l"", lwd=2, col=""forestgreen"",\n xlab=""Waveform Amplitude"", ylab=""Elevation (m)"")\ngrid()\nplot(wf, relative=TRUE, polygon=FALSE, type=""l"", lwd=2, col=""forestgreen"",\n xlab=""Waveform Amplitude (%)"", ylab=""Elevation (m)"")\ngrid()\n```\n![](https://github.com/carlos-alberto-silva/rGEDI/blob/master/readme/fig3.png)\n\n## Get GEDI Elevation and Height Metrics (GEDI Level2A)\n```r\n# Get GEDI Elevation and Height Metrics\nlevel2AM<-getLevel2AM(gedilevel2a)\nhead(level2AM[,c(""beam"",""shot_number"",""elev_highestreturn"",""elev_lowestmode"",""rh100"")])\n\n## beam shot_number elev_highestreturn elev_lowestmode rh100\n## 1: BEAM0000 19640002800109382 740.7499 736.3301 4.41\n## 2: BEAM0000 19640003000109383 756.0878 746.7614 9.32\n## 3: BEAM0000 19640003200109384 770.3423 763.1509 7.19\n## 4: BEAM0000 19640003400109385 775.9838 770.6652 5.31\n## 5: BEAM0000 19640003600109386 777.8409 773.0841 4.75\n## 6: BEAM0000 19640003800109387 778.7181 773.6990 5.01\n\n# Converting shot_number as ""integer64"" to ""character""\nlevel2AM$shot_number<-as.character(level2AM$shot_number)\n\n# Converting Elevation and Height Metrics as data.table to SpatialPointsDataFrame\nlevel2AM_spdf<-SpatialPointsDataFrame(cbind(level2AM$lon_lowestmode,level2AM$lat_lowestmode),\n data=level2AM)\n\n# Exporting Elevation and Height Metrics as ESRI Shapefile\nraster::shapefile(level2AM_spdf,file.path(outdir,""GEDI02_A_2019108080338_O01964_T05337_02_001_01_sub""))\n```\n\n## Plot waveform with RH metrics\n```r\nshot_number = ""19640521100108408""\n\npng(""fig8.png"", width = 8, height = 6, units = \'in\', res = 300)\nplotWFMetrics(gedilevel1b, gedilevel2a, shot_number, rh=c(25, 50, 75, 90))\ndev.off()\n```\n![](https://github.com/carlos-alberto-silva/rGEDI/blob/master/readme/fig8.png)\n\n## Get GEDI Vegetation Biophysical Variables (GEDI Level2B)\n```r\nlevel2BVPM<-getLevel2BVPM(gedilevel2b)\nhead(level2BVPM[,c(""beam"",""shot_number"",""pai"",""fhd_normal"",""omega"",""pgap_theta"",""cover"")])\n\n## beam shot_number pai fhd_normal omega pgap_theta cover\n## 1: BEAM0000 19640002800109382 0.007661204 0.6365142 1 0.9961758 0.003823273\n## 2: BEAM0000 19640003000109383 0.086218357 2.2644432 1 0.9577964 0.042192958\n## 3: BEAM0000 19640003200109384 
0.299524575 1.8881851 1 0.8608801 0.139084846\n## 4: BEAM0000 19640003400109385 0.079557180 1.6625489 1 0.9609926 0.038997617\n## 5: BEAM0000 19640003600109386 0.018724868 1.5836401 1 0.9906789 0.009318732\n## 6: BEAM0000 19640003800109387 0.017654873 1.2458609 1 0.9912092 0.008788579\n\n# Converting shot_number as ""integer64"" to ""character""\nlevel2BVPM$shot_number<-as.character(level2BVPM$shot_number)\n\n# Converting GEDI Vegetation Profile Biophysical Variables as data.table to SpatialPointsDataFrame\nlevel2BVPM_spdf<-SpatialPointsDataFrame(cbind(level2BVPM$longitude_lastbin,level2BVPM$latitude_lastbin),data=level2BVPM)\n\n# Exporting GEDI Vegetation Profile Biophysical Variables as ESRI Shapefile\nraster::shapefile(level2BVPM_spdf,file.path(outdir,""GEDI02_B_2019108080338_O01964_T05337_02_001_01_sub_VPM""))\n\n```\n\n## Get Plant Area Index (PAI) and Plant Area Volume Density (PAVD) Profiles (GEDI Level2B)\n```r\nlevel2BPAIProfile<-getLevel2BPAIProfile(gedilevel2b)\nhead(level2BPAIProfile[,c(""beam"",""shot_number"",""pai_z0_5m"",""pai_z5_10m"")])\n\n## beam shot_number pai_z0_5m pai_z5_10m\n## 1: BEAM0000 19640002800109382 0.007661204 0.0000000000\n## 2: BEAM0000 19640003000109383 0.086218357 0.0581122264\n## 3: BEAM0000 19640003200109384 0.299524575 0.0497199222\n## 4: BEAM0000 19640003400109385 0.079557180 0.0004457365\n## 5: BEAM0000 19640003600109386 0.018724868 0.0000000000\n## 6: BEAM0000 19640003800109387 0.017654873 0.0000000000\n\nlevel2BPAVDProfile<-getLevel2BPAVDProfile(gedilevel2b)\nhead(level2BPAVDProfile[,c(""beam"",""shot_number"",""pavd_z0_5m"",""pavd_z5_10m"")])\n\n## beam shot_number pavd_z0_5m pavd_z5_10m\n## 1: BEAM0000 19640002800109382 0.001532241 0.0007661204\n## 2: BEAM0000 19640003000109383 0.005621226 0.0086218351\n## 3: BEAM0000 19640003200109384 0.049960934 0.0299524590\n## 4: BEAM0000 19640003400109385 0.015822290 0.0079557188\n## 5: BEAM0000 19640003600109386 0.003744974 0.0018724868\n## 6: BEAM0000 19640003800109387 0.003530974 0.0017654872\n\n# Converting shot_number as ""integer64"" to ""character""\nlevel2BPAIProfile$shot_number<-as.character(level2BPAIProfile$shot_number)\nlevel2BPAVDProfile$shot_number<-as.character(level2BPAVDProfile$shot_number)\n\n# Converting PAI and PAVD Profiles as data.table to SpatialPointsDataFrame\nlevel2BPAIProfile_spdf<-SpatialPointsDataFrame(cbind(level2BPAIProfile$lon_lowestmode,level2BPAIProfile$lat_lowestmode),\n data=level2BPAIProfile)\nlevel2BPAVDProfile_spdf<-SpatialPointsDataFrame(cbind(level2BPAVDProfile$lon_lowestmode,level2BPAVDProfile$lat_lowestmode),\n data=level2BPAVDProfile)\n\n# Exporting PAI and PAVD Profiles as ESRI Shapefile\nraster::shapefile(level2BPAIProfile_spdf,file.path(outdir,""GEDI02_B_2019108080338_O01964_T05337_02_001_01_sub_PAIProfile""))\nraster::shapefile(level2BPAVDProfile_spdf,file.path(outdir,""GEDI02_B_2019108080338_O01964_T05337_02_001_01_sub_PAVDProfile""))\n\n```\n\n## Plot Plant Area Index (PAI) and Plant Area Volume Density (PAVD) Profiles \n```r\n#specify GEDI beam\nbeam=""BEAM0101""\n\n# Plot Level2B PAI Profile\ngPAIprofile<-plotPAIProfile(level2BPAIProfile, beam=beam, elev=TRUE)\n\n# Plot Level2B PAVD Profile\ngPAVDprofile<-plotPAVDProfile(level2BPAVDProfile, beam=beam, elev=TRUE)\n\n```\n![](https://github.com/carlos-alberto-silva/rGEDI/blob/master/readme/fig9.png)\n\n\n## Clip GEDI data (h5 files; gedi.level1b, gedi.level2a and gedi.level2b objects)\n```r\n## Clip GEDI data by coordinates\n# Study area boundary box\nxmin = -44.15036\nxmax = -44.10066\nymin = 
-13.75831\nymax = -13.71244\n\n## clipping GEDI data within boundary box\nlevel1b_clip_bb <- clipLevel1B(gedilevel1b, xmin, xmax, ymin, ymax,output=file.path(outdir,""level1b_clip_bb.h5""))\nlevel2a_clip_bb <- clipLevel2A(gedilevel2a, xmin, xmax, ymin, ymax, output=file.path(outdir,""level2a_clip_bb.h5""))\nlevel2b_clip_bb <- clipLevel2B(gedilevel2b, xmin, xmax, ymin, ymax,output=file.path(outdir,""level2b_clip_bb.h5""))\n\n## Clipping GEDI data by geometry\n# specify the path to shapefile for the study area\npolygon_filepath <- system.file(""extdata"", ""stands_cerrado.shp"", package=""rGEDI"")\n\n# Reading shapefile as SpatialPolygonsDataFrame object\npolygon_spdf<-raster::shapefile(polygon_filepath)\nhead(polygon_spdf@data) # column id name ""id""\nsplit_by=""id""\n\n# Clipping h5 files\nlevel1b_clip_gb <- clipLevel1BGeometry(gedilevel1b,polygon_spdf,output=file.path(outdir,""level1b_clip_gb.h5""), split_by=split_by)\nlevel2a_clip_gb <- clipLevel2AGeometry(gedilevel2a,polygon_spdf,output=file.path(outdir,""level2a_clip_gb.h5""), split_by=split_by)\nlevel2b_clip_gb <- clipLevel2BGeometry(gedilevel2b,polygon_spdf,output=file.path(outdir,""level2b_clip_gb.h5""), split_by=split_by)\n```\n## Clip GEDI data (data.table objects)\n```r\n## Clipping GEDI data within boundary box\nlevel1bGeo_clip_bb <-clipLevel1BGeo(level1bGeo, xmin, xmax, ymin, ymax)\nlevel2AM_clip_bb <- clipLevel2AM(level2AM, xmin, xmax, ymin, ymax)\nlevel2BVPM_clip_bb <- clipLevel2BVPM(level2BVPM, xmin, xmax, ymin, ymax)\nlevel1BPAIProfile_clip_bb <- clipLevel2BPAIProfile(level2BPAIProfile, xmin, xmax, ymin, ymax)\nlevel2BPAVDProfile_clip_bb <- clipLevel2BPAVDProfile(level2BPAVDProfile, xmin, xmax, ymin, ymax)\n\n## Clipping GEDI data by geometry\nlevel1bGeo_clip_gb <- clipLevel1BGeoGeometry(level1bGeo,polygon_spdf, split_by=split_by)\nlevel2AM_clip_gb <- clipLevel2AMGeometry(level2AM,polygon_spdf, split_by=split_by)\nlevel2BVPM_clip_gb <- clipLevel2BVPMGeometry(level2BVPM,polygon_spdf, split_by=split_by)\nlevel1BPAIProfile_clip_gb <- clipLevel2BPAIProfileGeometry(level2BPAIProfile,polygon_spdf, split_by=split_by)\nlevel2BPAVDProfile_clip_gb <- clipLevel2BPAVDProfileGeometry(level2BPAVDProfile,polygon_spdf, split_by=split_by)\n\n\n## View GEDI clipped data by bbox\nm1<-leaflet() %>%\n addCircleMarkers(level2AM$lon_lowestmode,\n level2AM$lat_lowestmode,\n radius = 1,\n opacity = 1,\n color = ""red"") %>%\n addCircleMarkers(level2AM_clip_bb$lon_lowestmode,\n level2AM_clip_bb$lat_lowestmode,\n radius = 1,\n opacity = 1,\n color = ""green"") %>%\n addScaleBar(options = list(imperial = FALSE)) %>%\n addProviderTiles(providers$Esri.WorldImagery) %>%\n addLegend(colors = c(""red"",""green""), labels= c(""All samples"",""Clip bbox""),title =""GEDI Level2A"") \n\n## View GEDI clipped data by geometry\n# color palette\npal <- colorFactor(\n palette = c(\'blue\', \'green\', \'purple\', \'orange\',""white"",""black"",""gray"",""yellow""),\n domain = level2AM_clip_gb$poly_id\n)\n\nm2<-leaflet() %>%\n addCircleMarkers(level2AM$lon_lowestmode,\n level2AM$lat_lowestmode,\n radius = 1,\n opacity = 1,\n color = ""red"") %>%\n addCircleMarkers(level2AM_clip_gb$lon_lowestmode,\n level2AM_clip_gb$lat_lowestmode,\n radius = 1,\n opacity = 1,\n color = pal(level2AM_clip_gb$poly_id)) %>%\n addScaleBar(options = list(imperial = FALSE)) %>%\n addPolygons(data=polygon_spdf,weight=1,col = \'white\',\n opacity = 1, fillOpacity = 0) %>%\n addProviderTiles(providers$Esri.WorldImagery) %>%\n addLegend(pal = pal, values = level2AM_clip_gb$poly_id,title 
=""Poly IDs"" ) \n\nsync(m1, m2)\n```\n![](https://github.com/carlos-alberto-silva/rGEDI/blob/master/readme/fig4.png)\n\n## Compute descriptive statistics of GEDI Level2A and Level2B data\n```r\n# Define your own function\nmySetOfMetrics = function(x)\n{\nmetrics = list(\n min =min(x), # Min of x\n max = max(x), # Max of x\n mean = mean(x), # Mean of x\n sd = sd(x)# Sd of x\n )\n return(metrics)\n}\n\n# Computing the maximum of RH100 stratified by polygon\nrh100max_st<-polyStatsLevel2AM(level2AM_clip_gb,func=max(rh100), id=""poly_id"")\nhead(rh100max_st)\n\n## poly_id max\n## 1: 2 12.81\n## 2: 1 12.62\n## 3: 5 9.96\n## 4: 6 8.98\n## 5: 4 10.33\n## 6: 8 8.72\n\n# Computing a serie statistics for GEDI metrics stratified by polygon\nrh100metrics_st<-polyStatsLevel2AM(level2AM_clip_gb,func=mySetOfMetrics(rh100),\nid=""poly_id"")\nhead(rh100metrics_st)\n\n## poly_id min max mean sd\n## 1: 2 4.08 12.81 5.508639 1.452143\n## 2: 1 3.78 12.62 5.514930 1.745507\n## 3: 5 4.12 9.96 5.100122 1.195272\n## 4: 6 4.64 8.98 5.595294 1.024171\n## 5: 4 4.38 10.33 7.909500 1.757200\n## 6: 8 4.45 8.72 6.136471 1.097468\n\n# Computing the max of the Total Plant Area Index\npai_max<-polyStatsLevel2BVPM(level2BVPM_clip_gb,func=max(pai), id=NULL)\npai_max\n\n## max\n# 1: 1.224658\n\n# Computing a serie of statistics of Canopy Cover stratified by polygon\ncover_metrics_st<-polyStatsLevel2BVPM(level2BVPM_clip_gb,func=mySetOfMetrics(cover),\nid=""poly_id"")\nhead(cover_metrics_st)\n\n## poly_id min max mean sd\n## 1: 2 0.0010017310 0.3479594 0.05156159 0.05817241\n## 2: 1 0.0003717059 0.3812594 0.04829096 0.06346548\n## 3: 5 0.0020242794 0.4262614 0.03577852 0.06407325\n## 4: 6 0.0028748326 0.2392146 0.03094646 0.05577988\n## 5: 4 0.0022404396 0.3501986 0.11343149 0.09354305\n## 6: 8 0.0050588539 0.1457105 0.04784596 0.04427151\n```\n\n## Compute Grids with descriptive statistics of GEDI-derived Elevation and Height Metrics (Level2A)\n\n\n\n```r\n# Computing a serie of statistics of GEDI RH100 metric\nrh100metrics<-gridStatsLevel2AM(level2AM = level2AM, func=mySetOfMetrics(rh100), res=0.005)\n\n# View maps\nlibrary(rasterVis)\nlibrary(viridis)\n\nrh100maps<-levelplot(rh100metrics,\n layout=c(1, 4),\n margin=FALSE,\n xlab = ""Longitude (degree)"", ylab = ""Latitude (degree)"",\n colorkey=list(\n space=\'right\',\n labels=list(at=seq(0, 18, 2), font=4),\n axis.line=list(col=\'black\'),\n width=1),\n par.settings=list(\n strip.border=list(col=\'gray\'),\n strip.background=list(col=\'gray\'),\n axis.line=list(col=\'gray\')\n ),\n scales=list(draw=TRUE),\n col.regions=viridis,\n at=seq(0, 18, len=101),\n names.attr=c(""rh100 min"",""rh100 max"",""rh100 mean"", ""rh100 sd""))\n\n# Exporting maps \npng(""fig6.png"", width = 6, height = 8, units = \'in\', res = 300)\nrh100maps\ndev.off()\n\n\n\n```\n\n## Compute Grids with descriptive statistics of GEDI-derived Canopy Cover and Vertical Profile Metrics (Level2B)\n\n\n\n```r\n# Computing a serie of statistics of Total Plant Area Index\nlevel2BVPM$pai[level2BVPM$pai==-9999]<-NA # assing NA to -9999\npai_metrics<-gridStatsLevel2BVPM(level2BVPM = level2BVPM, func=mySetOfMetrics(pai), res=0.005)\n\n# View maps\npai_maps<-levelplot(pai_metrics,\n layout=c(1, 4),\n margin=FALSE,\n xlab = ""Longitude (degree)"", ylab = ""Latitude (degree)"",\n colorkey=list(\n space=\'right\',\n labels=list(at=seq(0, 1.5, 0.2), font=4),\n axis.line=list(col=\'black\'),\n width=1),\n par.settings=list(\n strip.border=list(col=\'gray\'),\n strip.background=list(col=\'gray\'),\n 
axis.line=list(col=\'gray\')\n ),\n scales=list(draw=TRUE),\n col.regions=viridis,\n at=seq(0, 1.5, len=101),\n names.attr=c(""PAI min"",""PAI max"",""PAI mean"", ""PAI sd""))\n\n# Exporting maps \npng(""fig7.png"", width = 6, height = 8, units = \'in\', res = 300)\npai_maps\ndev.off()\n\n\n\n\n```\n\n## Simulating GEDI full-waveform data from Airborne Laser Scanning (ALS) 3-D point cloud and extracting canopy derived metrics\n```r\n# Specifying the path to ALS data\nlasfile_amazon <- file.path(outdir, ""Amazon.las"")\nlasfile_savanna <- file.path(outdir, ""Savanna.las"")\n\n# Reading and plotting the ALS files\nlibrary(lidR)\nlibrary(plot3D)\nlas_amazon<-readLAS(lasfile_amazon)\nlas_savanna<-readLAS(lasfile_savanna)\n\n# Extracting plot center geolocations\nxcenter_amazon = mean(bbox(las_amazon)[1,])\nycenter_amazon = mean(bbox(las_amazon)[2,])\nxcenter_savanna = mean(bbox(las_savanna)[1,])\nycenter_savanna = mean(bbox(las_savanna)[2,])\n\n# Simulating GEDI full-waveform\nwf_amazon<-gediWFSimulator(input=lasfile_amazon,output=file.path(getwd(),""gediWF_amazon_simulation.h5""),coords = c(xcenter_amazon, ycenter_amazon))\nwf_savanna<-gediWFSimulator(input=lasfile_savanna,output=file.path(getwd(),""gediWF_savanna_simulation.h5""),coords = c(xcenter_savanna, ycenter_savanna))\n\n# Plotting ALS and GEDI simulated full-waveform\npng(""gediWf.png"", width = 8, height = 6, units = \'in\', res = 300)\n\npar(mfrow=c(2,2), mar=c(4,4,0,0), oma=c(0,0,1,1),cex.axis = 1.2)\nscatter3D(las_amazon@data$X,las_amazon@data$Y,las_amazon@data$Z,pch = 16,colkey = FALSE, main="""",\n cex = 0.5,bty = ""u"",col.panel =""gray90"",phi = 30,alpha=1,theta=45,\n col.grid = ""gray50"", xlab=""UTM Easting (m)"", ylab=""UTM Northing (m)"", zlab=""Elevation (m)"")\n\n# Simulated waveform shot_numbers are incremental beginning from 0\nshot_number = 0\nsimulated_waveform_amazon = getLevel1BWF(wf_amazon, shot_number)\nplot(simulated_waveform_amazon, relative=TRUE, polygon=TRUE, type=""l"", lwd=2, col=""forestgreen"",\n xlab="""", ylab=""Elevation (m)"", ylim=c(90,140))\ngrid()\nscatter3D(las_savanna@data$X,las_savanna@data$Y,las_savanna@data$Z,pch = 16,colkey = FALSE, main="""",\n cex = 0.5,bty = ""u"",col.panel =""gray90"",phi = 30,alpha=1,theta=45,\n col.grid = ""gray50"", xlab=""UTM Easting (m)"", ylab=""UTM Northing (m)"", zlab=""Elevation (m)"")\n\nshot_number = 0\nsimulated_waveform_savanna = getLevel1BWF(wf_savanna, shot_number)\nplot(simulated_waveform_savanna, relative=TRUE, polygon=TRUE, type=""l"", lwd=2, col=""green"",\nxlab=""Waveform Amplitude (%)"", ylab=""Elevation (m)"", ylim=c(815,835))\ngrid()\ndev.off()\n```\n![](https://github.com/carlos-alberto-silva/rGEDI/blob/master/readme/fig7.png)\n\n## Extracting GEDI full-waveform derived metrics without adding noise to the full-waveform\n```r\nwf_amazon_metrics<-gediWFMetrics(input=wf_amazon,\n outRoot=file.path(getwd(), ""amazon""))\nwf_savanna_metrics<-gediWFMetrics(input=wf_savanna,\n outRoot=file.path(getwd(), ""savanna""))\n\nmetrics<-rbind(wf_amazon_metrics,wf_savanna_metrics)\nrownames(metrics)<-c(""Amazon"",""Savanna"")\nhead(metrics[,1:8])\n\n# wave ID true ground true top ground slope ALS cover gHeight maxGround inflGround\n#Amazon gedi.BEAM0000.0 -1e+06 133.25 -1e+06 -1 94.93 99.95 95.16\n#Savanna gedi.BEAM0000.0 -1e+06 831.47 -1e+06 -1 822.18 822.17 822.25\n```\n## Extracting GEDI full-waveform derived metrics after adding noise to the full-waveform\n```r\nwf_amazon_metrics_noise<-gediWFMetrics(input=wf_amazon,\n outRoot=file.path(getwd(), ""amazon""),\n linkNoise= 
c(3.0103,0.95),\n maxDN= 4096,\n sWidth= 0.5,\n varScale= 3)\n\nwf_savanna_metrics_noise<-gediWFMetrics(\n input=wf_savanna,\n outRoot=file.path(getwd(), ""savanna""),\n linkNoise= c(3.0103,0.95),\n maxDN= 4096,\n sWidth= 0.5,\n varScale= 3)\n\nmetrics_noise<-rbind(wf_amazon_metrics_noise,wf_savanna_metrics_noise)\nrownames(metrics_noise)<-c(""Amazon"",""Savanna"")\nhead(metrics_noise[,1:8])\n\n# wave ID true ground true top ground slope ALS cover gHeight maxGround inflGround\n# Amazon 0 -1e+06 133.29 -1e+06 -1 99.17 99.99 95.39\n# Savanna 0 -1e+06 831.36 -1e+06 -1 822.15 822.21 822.18\n\n```\n\n## Always close gedi objects, so HDF5 files won\'t be blocked!\n```{r cleanup, echo=TRUE, results=""hide"", error=TRUE}\nclose(wf_amazon)\nclose(wf_savanna)\nclose(gedilevel1b)\nclose(gedilevel2a)\nclose(gedilevel2b)\n```\n\n\n# References\nDubayah, R., Blair, J.B., Goetz, S., Fatoyinbo, L., Hansen, M., Healey, S., Hofton, M., Hurtt, G., Kellner, J., Luthcke, S., & Armston, J. (2020) The Global Ecosystem Dynamics Investigation: High-resolution laser ranging of the Earth\xe2\x80\x99s forests and topography. Science of Remote Sensing, p.100002. https://doi.org/10.1016/j.srs.2020.100002\n\nHancock, S., Armston, J., Hofton, M., Sun, X., Tang, H., Duncanson, L.I., Kellner, J.R. and Dubayah, R., 2019. The GEDI simulator: A large-footprint waveform lidar simulator for calibration and validation of spaceborne missions. Earth and Space Science. https://doi.org/10.1029/2018EA000506\n\nSilva, C.A.; Saatchi, S.; Alonso, M.G.; Labriere, N.; Klauberg, C.; Ferraz, A.; Meyer, V.; Jeffery, K.J.; Abernethy, K.; White, L.; Zhao, K.; Lewis, S.L.; Hudak, A.T. (2018) Comparison of Small- and Large-Footprint Lidar Characterization of Tropical Forest Aboveground Structure and Biomass: A Case Study from Central Gabon. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, p. 1-15. https://ieeexplore.ieee.org/document/8331845\n\nGEDI webpage. Accessed on February 15 2020 https://gedi.umd.edu/ \nGEDI01_Bv001. Accessed on February 15 2020 https://lpdaac.usgs.gov/products/gedi01_bv001/ \nGEDI02_Av001. Accessed on February 15 2020 https://lpdaac.usgs.gov/products/gedi02_av001/ \nGEDI02_Bv001. Accessed on February 15 2020 https://lpdaac.usgs.gov/products/gedi02_bv001/ \nGEDI Finder. Accessed on February 15 2020 https://lpdaacsvc.cr.usgs.gov/services/gedifinder\n\n# Acknowledgements\nWe thank the University of Maryland and NASA\'s Goddard Space Flight Center for developing the GEDI mission.\n\nWe gratefully acknowledge funding from NASA\xe2\x80\x99s Carbon Monitoring Systems, grant NNH15ZDA001N-CMS. Project entitled ""Future Mission Fusion for High Biomass Forest Carbon Accounting"" led by Dr. Laura Duncanson (lduncans@umd.edu, University of Maryland) and Dr. Lola Fatoyinbo (lola.fatoyinbo@nasa.gov, NASA\'s Goddard Space Flight Center).\n\nWe also thank the Brazilian National Council for Scientific and Technological Development (CNPq) for funding the project entitled ""Mapping fuel load and simulation of fire behaviour and spread in the Cerrado biome using modeling and remote sensing technologies"", led by Prof. Dr. Carine Klauberg (carine_klauberg@hotmail.com) and Dr. Carlos Alberto Silva (carlos_engflorestal@outlook.com).\n\n# Reporting Issues \nPlease report any issues regarding the rGEDI package here: https://groups.yahoo.com/neo/groups/rGEDI\n\n# Citing rGEDI\nSilva, C.A.; Hamamura, C.; Valbuena, R.; Hancock, S.; Cardil, A.; Broadbent, E.N.; Almeida, D.R.A.; Silva Junior, C.H.L.; Klauberg, C. 
rGEDI: NASA\'s Global Ecosystem Dynamics Investigation (GEDI) Data Visualization and Processing.\nversion 0.1.9, accessed on October 22, 2020, available at: \n\n# Disclaimer\n**The rGEDI package has not been developed by the GEDI team. It comes with no guarantee, expressed or implied, and the authors hold no responsibility for its use or the reliability of its outputs.**\n\n'",",https://doi.org/10.1016/j.srs.2020.100002\n\nHancock,https://doi.org/10.1029/2018EA000506\n\nSilva","2020/01/25, 14:15:11",1369,MIT,30,651,"2023/07/05, 01:27:30",2,3,55,15,112,0,0.3333333333333333,0.3838709677419355,"2020/11/10, 23:18:12",v0.1.10,0,3,false,,false,false,,,,,,,,,,,
detectreeRGB,"Tree crown delineation from RGB imagery, coupled with methods to delineate tree crowns from LiDAR data.",shmh40,https://github.com/shmh40/detectreeRGB.git,github,,Forest Observation and Management,"2021/11/29, 14:53:24",31,0,7,false,Jupyter Notebook,,,"Jupyter Notebook,R",,"b'# detectreeRGB\n\nThis is the repository for Sebastian Hickman\'s AI4ER MRes project, titled \'Detecting changes in tall tree height with machine learning, LiDAR, and RGB imagery\'.\n\nIts key components are: scripts to read in and tile geospatial data, an implementation of Mask R-CNN from Detectron2 (Wu et al., 2019) to perform tree crown delineation from RGB imagery, scripts to delineate tree crowns from LiDAR data using UAVforestR (T. Swinfield, https://github.com/swinersha/UAVforestR), and scripts to analyse the growth and mortality of identified trees from repeat observations. The code includes notebooks written in Python and R scripts.\n\nThe data used to evaluate and test Mask R-CNN and ITCfast are freely available at https://zenodo.org/record/5090039.\n\n## Colab scripts\n\nScripts to run this project easily in Google Colab can be found in the colab directory in the *dev branch*.\n\n## Workflow\n\nThe workflow of the project is described by the following image.\n\n \n\n## Repository structure\n```\n\xe2\x94\x9c\xe2\x94\x80\xe2\x94\x80 LICENSE\n|\n\xe2\x94\x9c\xe2\x94\x80\xe2\x94\x80 README.md <- The top-level README for developers using this project.\n|\n\xe2\x94\x9c\xe2\x94\x80\xe2\x94\x80 requirements <- Directory containing the requirement files.\n\xe2\x94\x82\n\xe2\x94\x9c\xe2\x94\x80\xe2\x94\x80 tiling <- R scripts to read and tile geospatial data for subsequent analysis.\n\xe2\x94\x82 \xe2\x94\x82\n\xe2\x94\x82 \xe2\x94\x94\xe2\x94\x80\xe2\x94\x80 preprocessing <- Scripts to convert raw RGB tiffs into tiled pngs, and to convert shapefiles to geojsons\n\xe2\x94\x82\n\xe2\x94\x9c\xe2\x94\x80\xe2\x94\x80 bayesian_opt\n| | \n\xe2\x94\x82 \xe2\x94\x94\xe2\x94\x80\xe2\x94\x80 bo <- Notebook to carry out Bayesian optimisation on the parameters of the ITCfast algorithm\n| and script to run and evaluate ITCfast with the suggested parameters. \n|\n\xe2\x94\x9c\xe2\x94\x80\xe2\x94\x80 models <- Notebooks to train and test models\n| | \n\xe2\x94\x82 \xe2\x94\x9c\xe2\x94\x80\xe2\x94\x80 mask_rcnn <- Notebook to train, test and evaluate a Mask R-CNN model.\n| |\n| \xe2\x94\x94\xe2\x94\x80\xe2\x94\x80 itcfast <- Notebook to test ITCfast on an area of interest, and evaluate its performance.\n\xe2\x94\x82\n|\n\xe2\x94\x94\xe2\x94\x80\xe2\x94\x80 change_analysis <- Notebooks to analyse changes in individual trees between repeat observations. \n```\n\n## Deploying Mask R-CNN\n\nWe provide pre-trained model weights, which can be used to directly predict on your area of forest. A notebook is provided to easily deploy our pre-trained model in models/mask_rcnn. 
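\n\nFor orientation, a minimal sketch of what deploying the pre-trained weights with Detectron2 can look like (the weight path, class count and threshold below are hypothetical; the notebook remains the reference):\n```\n# sketch: Mask R-CNN inference with Detectron2 (illustrative values)\nimport cv2\nfrom detectron2 import model_zoo\nfrom detectron2.config import get_cfg\nfrom detectron2.engine import DefaultPredictor\n\ncfg = get_cfg()\n# start from the standard Mask R-CNN config, then swap in the tree-crown weights\ncfg.merge_from_file(model_zoo.get_config_file(\'COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x.yaml\'))\ncfg.MODEL.WEIGHTS = \'path/to/pretrained_weights.pth\'  # hypothetical path\ncfg.MODEL.ROI_HEADS.NUM_CLASSES = 1  # assumes a single tree-crown class\ncfg.MODEL.ROI_HEADS.SCORE_THRESH_TEST = 0.5  # confidence cut-off (assumption)\n\npredictor = DefaultPredictor(cfg)\nimage = cv2.imread(\'tile.png\')  # one RGB tile (hypothetical file)\noutputs = predictor(image)  # instances with masks, boxes and scores\nprint(len(outputs[\'instances\']))  # number of predicted crowns\n```\n\n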
Our model was trained on 999 trees in Paracou, French Guiana, which is an area of lowland tropical forest. If you would like to train your own model, using your own manually delineated crowns, this is possible with the training notebook provided in models/mask_rcnn. This may improve performance if you are predicting on a type of forest significantly different to lowland tropical forest.\n\nHere is an example image of the predictions made by Mask R-CNN.\n\n \n\n## Evaluating Mask R-CNN\n\nCode to evaluate both models is also provided in models/mask_rcnn. This requires some manually delineated tree crowns in your area of interest. The model is evaluated using standard COCO metrics, including Average Precision and Average Recall.\n\n## Customising Mask R-CNN training\n\nIf you wish to train your own model, you may wish to alter the hyperparameters used by Mask R-CNN while training. All hyperparameters are easily altered with our notebook. The key hyperparameters you may wish to vary include the depth of the ResNet backbone, the learning rate, and the batch size.\n\n## ITCfast\n\nITCfast can be deployed and evaluated using the notebooks provided in models/itcfast. The parameters of the model can be easily altered, or optimised to your own data using the scripts in bayesian_opt/bo.\n\n## Change Analysis \n\nChange analysis between 2014 and 2020 is carried out with the script in the change_analysis directory, which also produces plots given in the project report, similar to this example.\n\n \n\n## Tiling\n\nThe tiling directory provides a script to convert geospatial data, such as GeoTiffs and GeoJSONs into formats supported by Detectron2, png images and JSONs. The outputs of this script can then be passed into Mask R-CNN to train and test the model. \n'",",https://zenodo.org/record/5090039.\n\n##","2021/07/01, 12:39:59",846,MIT,0,69,"2023/07/05, 01:27:30",0,0,0,0,112,0,0,0.0,,,0,1,false,,false,false,,,,,,,,,,, ForesToolboxRS,Remote Sensing Tools for Forest Monitoring.,ytarazona,https://github.com/ytarazona/ForesToolboxRS.git,github,"remote-sensing,change-detection,deforestation,machine-learning",Forest Observation and Management,"2021/07/28, 12:30:29",46,0,6,false,R,,,R,https://ytarazona.github.io/ForesToolboxRS/,"b'\n# ForesToolboxRS \n\n[![License:\nMIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT)\n[![GitHub action build\nstatus](https://github.com/ytarazona/ForesToolboxRS/workflows/pkgdown/badge.svg)](https://github.com/ytarazona/ForesToolboxRS/actions)\n[![Codecov test\ncoverage](https://codecov.io/gh/ytarazona/ForesToolboxRS/branch/main/graph/badge.svg)](https://codecov.io/gh/ytarazona/ForesToolboxRS?branch=main)\n[![lifecycle](https://img.shields.io/badge/lifecycle-experimental-brightgreen.svg)](https://lifecycle.r-lib.org/articles/stages.html#experimental)\n\n\n\n\n# Citation\n\nTo cite the `ForesToolboxRS` package in publications, please use [this\npaper](https://doi.org/10.1080/07038992.2021.1941823):\n\nYonatan Tarazona, Alaitz Zabala, Xavier Pons, Antoni Broquetas, Jakub\nNowosad & Hamdi A. 
Zurqani (2021) Fusing Landsat and SAR Data for Mapping Tropical Deforestation through Machine Learning Classification and the PVts-\xce\xb2 Non-Seasonal Detection Approach, Canadian Journal of Remote Sensing, DOI: 10.1080/07038992.2021.1941823\n\nThe LaTeX/BibTeX version can be obtained with:\n\n    library(ForesToolboxRS)\n    citation(""ForesToolboxRS"")\n\n# Introduction\n\n**ForesToolboxRS** is an R package providing a variety of tools and algorithms for the processing and analysis of satellite images across the various applications of Remote Sensing for Earth Observation. All implemented algorithms are based on scientific publications:\n\n - Tarazona, Y., Zabala, A., Pons, X., Broquetas, A., Nowosad, J., Zurqani, H.A. (2021). Fusing Landsat and SAR Data for Mapping Tropical Deforestation through Machine Learning Classification and the PVts-\xce\xb2 Non-Seasonal Detection Approach. Canadian Journal of Remote Sensing.\n - Tarazona, Y., Miyasiro-Lopez, M. (2020). Monitoring tropical forest degradation using remote sensing. Challenges and opportunities in the Madre de Dios region, Peru. Remote Sensing Applications: Society and Environment, 19, 100337.\n - Tarazona, Y., Mantas, V.M., Pereira, A.J.S.C. (2018). Improving tropical deforestation detection through using photosynthetic vegetation time series (PVts-\xce\xb2). Ecological Indicators, 94, 367-379.\n - Hamunyela, E., Verbesselt, J., Roerink, G., & Herold, M. (2013). Trends in spring phenology of western European deciduous forests. Remote Sensing, 5(12), 6159-6179.\n - Souza Jr., C.M., Roberts, D.A., Cochrane, M.A. (2005). Combining spectral and spatial information to map canopy damage from selective logging and forest fires. Remote Sens. Environ. 98 (2-3), 329-343.\n - Adams, J.B., Smith, M.O., & Gillespie, A.R. (1993). Imaging spectroscopy: Interpretation based on spectral mixture analysis. In C.M. Pieters & P. Englert (Eds.), Remote geochemical analysis: Elements and mineralogical composition. NY: Cambridge Univ. Press, 145-166 pp.\n - Shimabukuro, Y.E. and Smith, J. (1991). The least squares mixing models to generate fraction images derived from remote sensing multispectral data. IEEE Transactions on Geoscience and Remote Sensing, 29, pp. 16-21.\n\n**The PVts-Beta approach**, a non-seasonal detection approach, is implemented in this package and can read time series, vector, matrix, and raster data. Some functions of this package are intended to show, on the one hand, some progress in methods for mapping deforestation and forest degradation, and on the other hand, to provide some tools (not available elsewhere) for routine analysis of remotely sensed data. Tools for the calibration of unsupervised and supervised algorithms through various calibration approaches are also among the functions embedded in this package. Therefore, we sincerely hope that **ForesToolboxRS** can facilitate different analyses and simple, robust processing of satellite images.\n\nAvailable functions:\n\n| Name of function | Description |\n| ------------------- | ----------- |\n| **`pvts()`** | This algorithm detects disturbances in forests using all the available Landsat data. 
In fact, it can also be run with other sensors such as MODIS. |\n| **`pvtsRaster()`** | This algorithm detects disturbances in forests using all the available Landsat data. In fact, it can also be run with other sensors such as MODIS. |\n| **`smootH()`** | In order to eliminate outliers in the time series, a temporal smoothing is applied. |\n| **`mla()`** | This function executes supervised classification of satellite images using various machine learning algorithms. |\n| **`calmla()`** | This function calibrates supervised classification of satellite images through various algorithms, using approaches such as Set-Approach, Leave-One-Out Cross-Validation (LOOCV), Cross-Validation (k-fold) and Monte Carlo Cross-Validation (MCCV). |\n| **`rkmeans()`** | This function classifies satellite images using k-means. |\n| **`calkmeans()`** | This function calibrates the k-means algorithm. It is possible to obtain the best k value and the best algorithm embedded in k-means. |\n| **`coverChange()`** | This algorithm obtains the gain and loss between land cover classifications. |\n| **`linearTrend()`** | Linear trend is useful for mapping forest degradation, land degradation, among others. This algorithm is capable of obtaining the slope of an ordinary least-squares linear regression and its reliability (p-value). |\n| **`fusionRS()`** | This algorithm fuses images coming from different spectral sensors (e.g., optical-optical, optical-SAR or SAR-SAR). Among the many qualities of this function, it is possible to obtain the contribution (%) of each variable in the fused image. |\n| **`sma()`** | The SMA assumes that the energy received within the field of vision of the remote sensor can be considered as the sum of the energies received from each dominant endmember. This function addresses a Linear Mixing Model. |\n| **`ndfiSMA()`** | The NDFI is sensitive to the state of the canopy cover, and has been successfully applied to monitor forest degradation and deforestation in Peru and Brazil. This index comes from the endmembers Green Vegetation (GV), non-photosynthetic vegetation (NPV) and Soil (S); the remainder is the shade component. |\n| **`tct()`** | The Tasseled-Cap Transformation is a linear transformation method for various remote sensing data. Not only can it perform volume data compression, but it can also provide parameters associated with physical characteristics, such as brightness, greenness and wetness indices. |\n| **`gevi()`** | The Greenness Vegetation Index is obtained from the Tasseled Cap Transformation. |\n| **`indices()`** | This function computes several remote sensing spectral indices in the optical domain. |\n\n# Installation\n\nTo install the latest development version directly from the GitHub repository, it is necessary to install the **remotes** package first. Then, before running **ForesToolboxRS**:\n\n``` r\nlibrary(remotes)\ninstall_github(""ytarazona/ForesToolboxRS"")\nsuppressMessages(library(ForesToolboxRS))\n```\n\n# Examples\n\n## 1\\. Breakpoint in an NDFI series (**`pvts`** function)\n\nHere, a Normalized Difference Fraction Index (NDFI) series between 2000 and 2019 (28 data points) was used; one NDFI value was obtained for each year. The idea is to detect a change in 2008 (position 19). 
The NDFI values range from -1 to 1.\n\n``` r\nlibrary(ForesToolboxRS)\n#> Registered S3 method overwritten by \'quantmod\':\n#> method from\n#> as.zoo.data.frame zoo\n# NDFI series\nndfi <- c(0.86, 0.93, 0.97, 0.91, 0.95, 0.96, 0.91,\n 0.88, 0.92, 0.89, 0.90, 0.89, 0.91, 0.92,\n 0.89, 0.90, 0.92, 0.84, 0.46, 0.13, 0.12,\n 0.18, 0.14, 0.25, 0.17, 0.15, 0.18, 0.20)\n \n# Plot\nplot(ndfi, pch = 20, xlab = ""Index"", ylab = ""NDFI value"")\nlines(ndfi, col = ""gray45"")\n```\n\n\n\n### 1.1 Applying a smoothing (the **`smootH()`** function)\n\nBefore detecting a breakpoint, it is necessary to apply a smoothing to remove any existing outliers, so we use the **`smootH()`** function from the **ForesToolboxRS** package. The mathematical approach of this outlier-removal method leaves the first and last values of the historical series unmodified.\n\nIf the idea is to detect changes in 2008 (position 19), then we will smooth the data only up to that position (i.e., `ndfi[1:19]`). Let\'s do that.\n\n``` r\nndfi_smooth <- ndfi\nndfi_smooth[1:19] <- smootH(ndfi[1:19])\n\n# Let\'s plot the real series\nplot(ndfi, pch = 20, xlab = ""Index"", ylab = ""NDFI value"")\nlines(ndfi, col = ""gray45"", lty = 3)\n# Let\'s plot the smoothed series\nlines(ndfi_smooth, col = ""blue"", ylab = ""NDFI value"", xlab = ""Time"")\npoints(ndfi_smooth, pch = 20, col = ""blue"")\n```\n\n\n\n> **Note**: You can change the detection threshold if you need to.\n\n### 1.2 Breakpoint using a specific index (vector)\n\nTo detect changes, we can use either a vector (with a specific index/position) or a time series as input. Let\'s first detect changes with a vector, and then with a time series.\n\nWe use the output of the **`smootH()`** function (**`ndfi_smooth`**).\n\nParameters:\n\n - **x**: smoothed series, preferably, to optimize detection\n - **startm**: monitoring year, index 19 (i.e., year 2008)\n - **endm**: year of final monitoring, index 19 (i.e., also year 2008)\n - **threshold**: detection threshold (for NDFI series we will use 5).\n If you are using PV, NDVI or EVI series you can use 5, 3 and\n 3 respectively. Please see [Tarazona et\n al.\xc2\xa0(2018)](https://www.sciencedirect.com/science/article/abs/pii/S1470160X18305326)\n for more details.\n\n\n\n``` r\n# Detect changes in 2008 (position 19)\ncd <- pvts(x = ndfi_smooth, startm = 19, endm = 19, threshold = 5)\nplot(cd)\n```\n\n\n\n### 1.3 Breakpoint using Time Series\n\nParameters:\n\n - **x**: smoothed series, preferably, to optimize detection\n - **startm**: monitoring year, in this case year 2008.\n - **endm**: year of final monitoring, also year 2008.\n - **threshold**: detection threshold (for NDFI series we will use 5).\n If you are using PV, NDVI or EVI series you can use 5, 3 and\n 3 respectively. 
Please see [Tarazona et al.\xc2\xa0(2018)](https://www.sciencedirect.com/science/article/abs/pii/S1470160X18305326) for more details.\n\n\n\n``` r\n# Let\'s create a time series of the variable ""ndfi""\nndfi_ts <- ts(ndfi, start = 1990, end = 2017, frequency = 1)\n\n# Applying a smoothing\nndfi_smooth <- ndfi_ts\nndfi_smooth[1:19] <- smootH(ndfi_ts[1:19])\n\n# Detect changes in 2008\ncd <- pvts(x = ndfi_smooth, startm = 2008, endm = 2008, threshold = 5)\nplot(cd)\n```\n\n\n\n### 1.4 Breakpoint Not Detected\n\nParameters:\n\n - **x**: smoothed series, preferably, to optimize detection\n - **startm**: monitoring year, index 16 (i.e., year 2005)\n - **endm**: year of final monitoring, index 16 (i.e., also year 2005)\n - **threshold**: detection threshold (for NDFI series we will use 5).\n If you are using PV, NDVI or EVI series you can use 5, 3 and\n 3 respectively. Please see [Tarazona et\n al.\xc2\xa0(2018)](https://www.sciencedirect.com/science/article/abs/pii/S1470160X18305326)\n for more details.\n\n\n\n``` r\n# Detect changes in 2005\ncd <- pvts(x = ndfi_smooth, startm = 2005, endm = 2005, threshold = 5)\nplot(cd)\n```\n\n\n\n## 2\\. Supervised classification in Remote Sensing (the **`mla()`** function)\n\nFor this tutorial, a Landsat-8 OLI image and signatures were used. To download the data, please use the following code:\n\n``` r\n# Data Preparation\ndir.create(""testdata"")\n# downloading the image\ndownload.file(""https://github.com/ytarazona/ft_data/raw/main/data/LC08_232066_20190727_SR.zip"",\n destfile = ""testdata/LC08_232066_20190727_SR.zip"")\n# unzipping the image\nunzip(""testdata/LC08_232066_20190727_SR.zip"", exdir = ""testdata"")\n# downloading the signatures\ndownload.file(""https://github.com/ytarazona/ft_data/raw/main/data/signatures.zip"",\n destfile = ""testdata/signatures.zip"")\n# unzipping the signatures\nunzip(""testdata/signatures.zip"", exdir = ""testdata"")\n```\n\n### 2.1 Applying Random Forest (supervised classification)\n\nParameters:\n\n - **img**: RasterStack (Landsat 8 OLI)\n - **endm**: Signatures, **sf** object (shapefile)\n - **model**: machine learning algorithm, here \xe2\x80\x98randomForest\xe2\x80\x99\n - **training\\_split**: 80 percent to train and 20 percent to validate the model\n\n\n\n``` r\nlibrary(ForesToolboxRS)\nlibrary(raster)\n#> Loading required package: sp\nlibrary(sf)\n#> Linking to GEOS 3.9.0, GDAL 3.2.2, PROJ 7.2.1\n\n# Read raster\nimage <- stack(""testdata/LC08_232066_20190727_SR.tif"")\n\n# Read signatures\nsig <- read_sf(""testdata/signatures.shp"")\n\n# Classification with Random Forest\nclassRF <- mla(img = image, model = ""randomForest"", endm = sig, training_split = 80)\n#> 4 cores detected, using 3\n\n# Results\nprint(classRF)\n#> ******************** ForesToolboxRS CLASSIFICATION ********************\n#> \n#> ****Overall Accuracy****\n#> Accuracy Kappa AccuracyLower AccuracyUpper AccuracyNull \n#> 1.000000e+02 1.000000e+02 9.439909e+01 1.000000e+02 2.656250e+01 \n#> AccuracyPValue \n#> 1.423025e-35 \n#> \n#> ****Confusion Matrix****\n#> 1 2 3 4 Total Users_Accuracy Commission\n#> 1 15 0 0 0 15 100 0\n#> 2 0 15 0 0 15 100 0\n#> 3 0 0 17 0 17 100 0\n#> 4 0 0 0 17 17 100 0\n#> Total 15 15 17 17 NA NA NA\n#> Producer_Accuracy 100 100 100 100 NA NA NA\n#> Omission 0 0 0 0 NA NA NA\n#> \n#> ****Classification Map****\n#> class : RasterLayer \n#> dimensions : 301, 337, 101437 (nrow, ncol, ncell)\n#> resolution : 0.0002694946, 0.0002694946 (x, y)\n#> extent : -63.98125, -63.89043, -8.758574, -8.677456 (xmin, xmax, ymin, ymax)\n#> crs : 
## 2\. Supervised classification in Remote Sensing (the **`mla()`** function)\n\nFor this tutorial, a Landsat-8 OLI image and signatures were used. To\ndownload the data, please run the following code:\n\n``` r\n# Data Preparation\ndir.create(""testdata"")\n# downloading the image\ndownload.file(""https://github.com/ytarazona/ft_data/raw/main/data/LC08_232066_20190727_SR.zip"",\n destfile = ""testdata/LC08_232066_20190727_SR.zip"")\n# unzipping the image\nunzip(""testdata/LC08_232066_20190727_SR.zip"", exdir = ""testdata"")\n# downloading the signatures\ndownload.file(""https://github.com/ytarazona/ft_data/raw/main/data/signatures.zip"",\n destfile = ""testdata/signatures.zip"")\n# unzipping the signatures\nunzip(""testdata/signatures.zip"", exdir = ""testdata"")\n```\n\n### 2.1 Applying Random Forest (supervised classification)\n\nParameters:\n\n - **img**: RasterStack (Landsat 8 OLI)\n - **endm**: Signatures, **sf** object (shapefile)\n - **model**: Random Forest, i.e.\xc2\xa0\xe2\x80\x98randomForest\xe2\x80\x99\n - **training\_split**: 80 percent to train and 20 percent to validate\n the model\n\n\n\n``` r\nlibrary(ForesToolboxRS)\nlibrary(raster)\n#> Loading required package: sp\nlibrary(sf)\n#> Linking to GEOS 3.9.0, GDAL 3.2.2, PROJ 7.2.1\n\n# Read raster\nimage <- stack(""testdata/LC08_232066_20190727_SR.tif"")\n\n# Read signatures\nsig <- read_sf(""testdata/signatures.shp"")\n\n# Classification with Random Forest\nclassRF <- mla(img = image, model = ""randomForest"", endm = sig, training_split = 80)\n#> 4 cores detected, using 3\n\n# Results\nprint(classRF)\n#> ******************** ForesToolboxRS CLASSIFICATION ********************\n#> \n#> ****Overall Accuracy****\n#> Accuracy Kappa AccuracyLower AccuracyUpper AccuracyNull \n#> 1.000000e+02 1.000000e+02 9.439909e+01 1.000000e+02 2.656250e+01 \n#> AccuracyPValue \n#> 1.423025e-35 \n#> \n#> ****Confusion Matrix****\n#> 1 2 3 4 Total Users_Accuracy Commission\n#> 1 15 0 0 0 15 100 0\n#> 2 0 15 0 0 15 100 0\n#> 3 0 0 17 0 17 100 0\n#> 4 0 0 0 17 17 100 0\n#> Total 15 15 17 17 NA NA NA\n#> Producer_Accuracy 100 100 100 100 NA NA NA\n#> Omission 0 0 0 0 NA NA NA\n#> \n#> ****Classification Map****\n#> class : RasterLayer \n#> dimensions : 301, 337, 101437 (nrow, ncol, ncell)\n#> resolution : 0.0002694946, 0.0002694946 (x, y)\n#> extent : -63.98125, -63.89043, -8.758574, -8.677456 (xmin, xmax, ymin, ymax)\n#> crs : +proj=longlat +datum=WGS84 +no_defs \n#> source : memory\n#> names : layer \n#> values : 1, 4 (min, max)\n```\n\n``` r\n# Classification\ncolmap <- c(""#0000FF"",""#228B22"",""#FF1493"", ""#00FF00"")\nplot(classRF$Classification, main = ""RandomForest Classification"", col = colmap, axes = TRUE)\n```\n\n\n\n### 2.2 Calibrating with Monte Carlo Cross-Validation (the **`calmla()`** function)\n\n**`ForesToolboxRS`** has several approaches to calibrate machine\nlearning algorithms, such as **Set-Approach**, **Leave One Out\nCross-Validation (LOOCV)**, **Cross-Validation (k-fold)** and **Monte\nCarlo Cross-Validation (MCCV)**.\n\nParameters:\n\n - **img**: RasterStack (Landsat-8 OLI)\n - **endm**: Signatures\n - **model**: c(\xe2\x80\x9csvm\xe2\x80\x9d, \xe2\x80\x9crandomForest\xe2\x80\x9d, \xe2\x80\x9cnaiveBayes\xe2\x80\x9d, \xe2\x80\x9cknn\xe2\x80\x9d). Machine\n learning algorithms: Support Vector Machine, Random Forest, Naive\n Bayes, K-nearest Neighbors\n - **training\_split**: 70\n - **approach**: \xe2\x80\x9cMCCV\xe2\x80\x9d\n - **iter**: 10\n\n> **Warning\!**: This function may take some time to process, depending\n> on the volume of the data.\n\n``` r\ncal_ml <- calmla(img = image, endm = sig,\n model = c(""svm"", ""randomForest"", ""naiveBayes"", ""knn""),\n training_split = 70, approach = ""MCCV"", iter = 10)\n```\n\n``` r\n# Calibration result\nplot(\n cal_ml$svm_mccv,\n main = ""Monte Carlo Cross-Validation calibration"",\n col = ""darkmagenta"",\n type = ""b"",\n ylim = c(0, 0.4),\n ylab = ""Error between 0 and 1"",\n xlab = ""Number of iterations""\n)\nlines(cal_ml$randomForest_mccv, col = ""red"", type = ""b"")\nlines(cal_ml$naiveBayes_mccv, col = ""green"", type = ""b"")\nlines(cal_ml$knn_mccv, col = ""blue"", type = ""b"")\nlegend(\n ""topleft"",\n c(\n ""Support Vector Machine"",\n ""Random Forest"",\n ""Naive Bayes"",\n ""K-nearest Neighbors""\n ),\n col = c(""darkmagenta"", ""red"", ""green"", ""blue""),\n lty = 1,\n cex = 0.7\n)\n```\n\n\n\n## 3\. Unsupervised classification in Remote Sensing (the **`rkmeans()`** function)\n\nFor this tutorial, the same image was used.\n\n### 3.1 Applying K-means\n\nParameters:\n\n - **img**: RasterStack (Landsat 8 OLI)\n - **k**: the number of clusters\n - **algo**: \xe2\x80\x9cMacQueen\xe2\x80\x9d\n\n\n\n``` r\nlibrary(ForesToolboxRS)\nlibrary(raster)\n\n# Read raster\nimage <- stack(""testdata/LC08_232066_20190727_SR.tif"")\n\n# Classification with K-means\nclassKmeans <- rkmeans(img = image, k = 4, algo = ""MacQueen"")\n```\n\n``` r\n# Plotting classification\ncolmap <- c(""#0000FF"",""#00FF00"",""#228B22"", ""#FF1493"")\nplot(classKmeans, main = ""K-means Classification"", col = colmap, axes = FALSE)\n```\n\n\n\n### 3.2 Calibrating k-means (the **`calkmeans()`** function)\n\nThis function allows you to calibrate the *kmeans* algorithm: it finds\nboth the best value of k and the best algorithm embedded in kmeans. If\nwe want to find the optimal value of k (the number of clusters or\nclasses), we must pass `k = NULL` as an argument of the function. Here,\nwe are finding the k for which the intra-class inertia stabilizes; a\nquick illustration of that inertia curve is sketched below, before the\ncalibration itself.\n\nParameters:\n\n - **img**: RasterStack (Landsat 8 OLI)\n - **k**: the number of clusters (`NULL` to search for the best value)\n - **iter.max**: the maximum number of iterations allowed (strictly\n related to k-means)\n - **algo**: it can be \xe2\x80\x9cHartigan-Wong\xe2\x80\x9d, \xe2\x80\x9cLloyd\xe2\x80\x9d, \xe2\x80\x9cForgy\xe2\x80\x9d or\n \xe2\x80\x9cMacQueen\xe2\x80\x9d (the algorithms embedded in k-means)\n - **iter**: the number of iterations used to find the best k value\n\n
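A hedged illustration of the inertia curve, using plain `stats::kmeans()` on the raster values (`calkmeans()` below automates the same search across the embedded algorithms):\n\n``` r\n# Total within-cluster inertia for k = 1..8 (illustrative sketch only)\nvals <- na.omit(raster::values(image))\nwss <- sapply(1:8, function(k) kmeans(vals, centers = k, iter.max = 10)$tot.withinss)\nplot(1:8, wss, type = ""b"", xlab = ""k"", ylab = ""Intra-class inertia"")\n# The ""elbow"", where the curve flattens, suggests the best k\n```\n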
\n\n``` r\n# Elbow method\nbest_k <- calkmeans(img = image, k = NULL, iter.max = 10,\n algo = c(""Hartigan-Wong"", ""Lloyd"", ""Forgy"", ""MacQueen""),\n iter = 20)\n```\n\n``` r\nplot(best_k)\n```\n\n\n'",",https://doi.org/10.1080/07038992.2021.1941823","2020/10/09, 19:31:31",1111,CUSTOM,0,271,"2021/04/24, 15:30:21",1,3,3,0,914,0,0.0,0.2556390977443609,"2021/05/23, 19:54:35",v0.1.5,0,2,false,,false,false,,,,,,,,,,, Gieß den Kiez,Enable coordinated citizen participation in the irrigation of urban trees.,technologiestiftung,https://github.com/technologiestiftung/giessdenkiez-de.git,github,"citylab-berlin,trees,watering,rain,community,map,berlin,open-data",Forest Observation and Management,"2023/10/17, 13:19:46",71,0,14,true,TypeScript,Technologiestiftung Berlin,technologiestiftung,"TypeScript,JavaScript,HTML",https://www.giessdenkiez.de,"b""![Node.js CI](https://github.com/technologiestiftung/giessdenkiez-de/workflows/Node.js%20CI/badge.svg?branch=master) ![love badge](https://img.shields.io/badge/Built%20with-%E2%99%A5-red) ![citylab badge](https://img.shields.io/badge/@-CityLAB%20Berlin-blue)\n\n# [![Logo of _Gie\xc3\x9f den Kiez_](./docs/images/logo.svg)](https://www.giessdenkiez.de)\n\n---\n\n![Screenshot of _Gie\xc3\x9f den Kiez_](./docs/images/screenshot.png)\n\n## About [_Gie\xc3\x9f den Kiez_](https://www.giessdenkiez.de)\n\nThe consequences of climate change, especially the dry and hot summers, are putting a strain on Berlin's ecosystem. Our urban trees are drying out and suffering long-term damage: In recent years, more and more trees have had to be cut down and their lifespan is declining. In the meantime, the population is regularly called upon to help, but largely uncoordinated. [_Gie\xc3\x9f den Kiez_](https://www.giessdenkiez.de) was made to change that and enable coordinated citizen\* participation in the irrigation of urban trees. 
This project was made by the [Technologiestiftung Berlin](https://www.technologiestiftung-berlin.de/de/startseite/) and the [CityLAB Berlin](https://www.citylab-berlin.org/).\n\n---\n\n## Repositories\n\nThis project is composed of multiple repositories:\n\n- [React frontend (this repository)](https://github.com/technologiestiftung/giessdenkiez-de)\n- [Database API](https://github.com/technologiestiftung/giessdenkiez-de-postgres-api)\n- [DWD Harvester](https://github.com/technologiestiftung/giessdenkiez-de-dwd-harvester)\n- [OSM Harvester](https://github.com/technologiestiftung/giessdenkiez-de-osm-pumpen-harvester)\n\n---\n\n## Documentation\n\nYou can find the project's documentation in this repo's [wiki](https://github.com/technologiestiftung/giessdenkiez-de/wiki).\n""",,"2019/08/06, 09:24:57",1541,MIT,440,2050,"2023/10/17, 13:19:47",64,410,558,138,8,20,0.0,0.6485671191553545,"2023/05/23, 21:07:55",v2.1.0,0,17,false,,false,false,,,https://github.com/technologiestiftung,https://www.technologiestiftung-berlin.de,"Berlin, Germany",,,https://avatars.githubusercontent.com/u/16606790?v=4,,, Forest Structural Complexity Tool,Allowing plot scale measurements to be extracted automatically from most high-resolution forest point clouds from a variety of sensor sources.,SKrisanski,https://github.com/SKrisanski/FSCT.git,github,,Forest Observation and Management,"2022/09/17, 23:43:41",103,0,29,true,Python,,,Python,,"b'# Forest Structural Complexity Tool\n\n### Created by Sean Krisanski\n![banner.png](readme_images/banner.png)\n## Purpose of this tool\n\nThis tool was written for the purpose of allowing plot scale measurements to be extracted automatically from most\nhigh-resolution forest point clouds from a variety of sensor sources. Sensor types it works with include\nTerrestrial Laser Scanning (TLS), Mobile Laser Scanning (MLS), Terrestrial Photogrammetry, Above and below-canopy\nUAS Photogrammetry or similar. Very high resolution Aerial Laser Scanning (ALS) is typically on the borderline of what\nthe segmentation tool is capable of handling at this time. If a dataset is too low resolution, the segmentation model\nwill likely label the stems as vegetation points instead.\n\nThere are also some instances where the segmentation model has not seen appropriate training data for the point cloud.\nThis may be improved in future versions, as it should be easily fixed with additional training data.\n\n**A video showing the outputs of the tool is provided here: https://youtu.be/rej5Bu57AqM**\n\n\n## Installation\n\nYou will need to install all packages in the requirements.txt file. \n\nIf using Anaconda, create a clean environment and activate it. \nIn Anaconda Prompt, type the following (replacing the path to FSCT and your desired environment name as needed):\n\n```shell\ncd PATH_TO_FSCT-MAIN_DIRECTORY\nconda create --name YOUR_ENVIRONMENT_NAME_HERE python==3.9\nconda activate YOUR_ENVIRONMENT_NAME_HERE\nconda install pip\npip install -r requirements.txt\n```\n\nThis should hopefully install all required packages for you.\nThese are the instructions for Windows 10 and Linux.\nI have not tested this on Mac. If someone with a Mac tests this and \nit works (or doesn\'t), please let me know!\n\nIf you have any difficulties or find any bugs, please get in touch and I will try to help you get it going. 
\nSuggestions for improvements are greatly appreciated.\n\nIf you do not have an Nvidia GPU, please set the ```use_CPU_only``` setting in ```run.py``` to True.\n\n## How to use\n\nOpen the ""run.py"" file and set num_cpu_cores and batch_size appropriately for your computer hardware.\nAdjust the parameters if needed or leave them as they are.\n\nRun the ""run.py"" file. This will ask you to select one or more "".las"" files.\nIf all goes well, a new directory will be created in the same location as the "".las"" file/s you selected; once\nprocessing completes, it will contain the following outputs.\n\nStart with small plots containing at least some trees. The tree measurement code will currently cause an error if it\nfinds no trees in the point cloud.\n\n## FSCT Outputs\n\n```Plot_Report.html``` and ```Plot_Report.md```\nA summary of the information extracted. Nicer to look at than the processing report, but still a bit ugly in Version 1.\nFuture versions may make this a bit nicer/add data tables/etc.\n\n```tree_data.csv```\nBasic measurements of the trees.\n* Headings are as follows (all units are in metres, or cubic metres for volume)\n[x_tree_base, y_tree_base, z_tree_base, DBH, CCI_at_BH, Height, Volume_1, Volume_2, Crown_mean_x, Crown_mean_y, Crown_top_x, Crown_top_y, Crown_top_z, mean_understory_height_in_5m_radius]\n* CCI_at_BH stands for Circumferential Completeness Index at Breast Height. CCI is simply the fraction of a circle with\npoint coverage in a stem slice, as illustrated below. This provides an indication of how complete your stem coverage is.\nIn a single scan TLS point cloud, you cannot get a CCI greater than 0.5 (assuming the cylinder fitting was not erroneous), as only one side of the tree is mapped.\nIf you have completely scanned the tree (at the measurement location), you should get a CCI of 1.0 (the highest possible CCI).\n![CCI.jpg](readme_images/CCI.jpg)\nThe figure is from this paper: https://doi.org/10.3390/rs12101652 if you would like a more detailed explanation of the idea.\n\n\n* Volume_1 is the sum of the volume of the fitted cylinders. \n* Volume_2 is the volume of a cone (with a base diameter equal to the DBH and height from 1.3 m up to the tree height) +\nthe volume of a cylinder (with a diameter of DBH and 1.3 m tall). This avoids the possibility of a short and shallow\nangled cone resulting from a short tree with a large DBH. (A worked form of CCI and Volume_2 follows below.)\n
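To make those two definitions concrete, in notation of my own (d = DBH in metres, h = tree height in metres, and a breast-height slice divided into n equal angular sectors, of which n_occ contain stem points):\n\n```latex\n\mathrm{CCI} \approx \frac{n_{\mathrm{occ}}}{n}, \qquad\nV_2 = \pi\left(\frac{d}{2}\right)^{2} \cdot 1.3 \;+\; \frac{1}{3}\,\pi\left(\frac{d}{2}\right)^{2}\,(h - 1.3)\n```\n\nThe first term of V_2 is the 1.3 m tall cylinder and the second is the cone from 1.3 m to the tree top. The sector form of CCI is just one way to operationalise \xe2\x80\x9cfraction of a circle with point coverage\xe2\x80\x9d; see the paper linked above for the exact definition.\n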
\n```taper_data.csv```\nThis is simply the largest diameter at a range of given heights above the DTM for each stem.\nAll measurements are in metres.\nHeadings are PlotId, TreeId, x_base, y_base, z_base, followed by the measurement heights. \n\n```processing_report.csv```\nSummary information about the plot and the processing times. Be aware: if you open this file while FSCT is processing\nand it attempts to write to the open file, it will throw a permission error.\n\n\n![simple_outputs.png](readme_images/simple_outputs.png)\n\n\n### Point Cloud Outputs\n\n```DTM.las``` Digital Terrain Model in point form.\n![dtm1.png](readme_images/dtm1.png)\n\n```cropped_DTM.las``` Digital Terrain Model cropped to the plot_radius.\n\n```working_point_cloud.las``` The subsampled and cropped point cloud that is fed to the segmentation tool.\n![input_point_cloud.png](readme_images/input_point_cloud.png)\n\n```segmented.las``` The classified point cloud created by the segmentation tool.\n![segmented2.png](readme_images/segmented2.png)\n\n```segmented_cleaned.las``` The cleaned segmented point cloud created during the post-processing step.\n\n```terrain_points.las``` Semantically segmented terrain points.\n\n```vegetation_points.las``` Semantically segmented vegetation points.\n\n```ground_veg.las``` Ground vegetation points.\n\n```cwd_points.las``` Semantically segmented coarse woody debris points.\n\n```stem_points.las``` Semantically segmented stem points.\n\n```cleaned_cyls.las``` Point-based cylinder representation with a variety of properties. Saved as CSV as well for convenience/ease of use.\n\n```cleaned_cyl_vis.las``` A point cloud visualisation of the circles/cylinders defined in cleaned_cyls.las.\nEssentially makes circles out of points for every measurement in cleaned_cyls.\n\n![cleaned_cyl_vis.png](readme_images/cleaned_cyl_vis.png)\n\n```stem_points_sorted.las``` Stem points assigned by tree_id. **This is a simple output at the moment and will not give\nhighly reliable results.** This current iteration may be useful for generating instance segmentation training datasets,\nhowever, this will likely require you to manually correct it to be of high enough quality for training data.\n\n```veg_points_sorted.las``` Vegetation assigned by tree_id. Ground points are given a tree_id of 0. **This is a simple \noutput at the moment and will not give highly reliable results.** This current iteration may be useful for generating \ninstance segmentation training datasets, however, this will likely require you to manually correct it to be of high \nenough quality for training data.\n\n\n```text_point_cloud.las``` A point cloud text visualisation of TreeId, DBH, height, CCI at breast height, Volume_1 and \nVolume_2. It\'s a bit dodgy, but it works in any point cloud viewer without fuss.\n\n```tree_aware_cropped_point_cloud.las``` If you specify a plot_radius and a plot_radius_buffer, this will trim the point\ncloud to the plot_radius. See the **Tree Aware Plot Cropping** section in User Parameters for more information on this mode.\n\n![individual_tree_segmentation.png](readme_images/individual_tree_segmentation.png)\n\n\n### Recommended PC Specifications\n**Warning: FSCT is computationally expensive in its current form.** Fortunately, it is still considerably faster than a human \nat what it does.\n\nIt is **strongly recommended** to have a CUDA compatible GPU (Nvidia) for running this tool. \nThis can be run on CPU only, but expect inference to take a long time. **CPU also appears to give worse semantic segmentation results than GPU. I did not expect this and I do not know why this is the case. 
If you have any ideas about why they are so different, please let me know!**\n\nIt should be able to be run on most modern gaming desktop PCs (or decently powerful laptops).\n\nI use the following setup and the computational times are tolerable:\n- CPU: Intel i9-10900K (overclocked to 4.99GHz all cores).\n- GPU: Nvidia Titan RTX (24 GB vRAM)\n- RAM: 128 GB DDR4 at 3200 MHz (If you run out of RAM, try increasing your page file size (Windows) or swap size (Linux))\n\nHopefully in time, I\'ll be able to make this more efficient and less resource hungry.\n\n## User Parameters\n\n### Circular Plot options\n```plot_centre```\n[X, Y] Coordinates of the plot centre (metres). If ""None"", plot_centre is the centre of the bounding box of the point cloud. Leave at None if not using.\n\n```plot_radius```\nIf 0 m, the plot is not cropped. Otherwise, the plot is cylindrically cropped from the plot centre with plot_radius + plot_radius_buffer. Leave at 0 if not using.\n\n```plot_radius_buffer```\nThis is used for ""Tree Aware Plot Cropping Mode"". Leave at 0 if not using.\n\n### Tree Aware Plot Cropping\nThe purpose of this mode is to simulate the behaviour of a typical field plot, by not chopping trees in half if they are\nat the boundary of the plot radius.\n\nWe first trim the point cloud to a radius where the initial trim radius = plot_radius + plot_radius_buffer.\nFor example, we might want a 4 m plot_radius. If we use a 2 m plot_radius_buffer, the point cloud will be cropped to\n6 m radius initially. FSCT will then use the measurement information extracted from the trees in that 6 m radius point\ncloud, to check which tree centres are within the 4 m radius. This allows a tree which was just inside the boundary, to\nextend 2 m beyond the plot boundary without losing points. If we used a simple radius trim at 4 m, trees which were\njust inside the boundary may be cut in half.\n\n![img.png](readme_images/tree_aware_plot_cropping.png)\n\nThis mode is used if plot_radius is non-zero and plot_radius_buffer is non-zero.\n### Other Parameters\n\n```PlotId```\nThe ""PlotId"" is taken from the filename of the input point cloud, so name files accordingly.\n\n### Set these appropriately for your hardware.\n```batch_size```\nThe number of samples in a batch used for the deep learning inference. This number depends on the amount of GPU memory you\nhave. If you set this too high, you will run out of GPU memory. As a rough guide, I can fit 18-20 on an Nvidia Titan RTX GPU with 24 GB GPU\nRAM. \n**Please Note: Until I add some nicer handling of this section, you must set batch_size>=2.**\n\n```num_cpu_cores```\nThe number of CPU cores you have/wish to use. Set to 0 by default, which means using ALL cores.\n\n### Optional settings - Generally leave as they are.\n\n```ground_veg_cutoff_height```\nAny vegetation points below this height are considered to be understory and are not assigned to individual trees.\n\n```veg_sorting_range```\nVegetation points can be, at most, this far away from a cylinder horizontally to be matched to a particular tree.\n\n```sort_stems```\nIf you don\'t need the sorted stem points, turning this off speeds things up. 
Veg sorting is required for tree height measurement, but stem sorting isn\'t necessary for general use.\n\n```stem_sorting_range```\nStem points can be, at most, this far away from a cylinder in 3D to be matched to a particular tree.\n\n```taper_measurement_height_min```\nThe starting height for the output taper measurements.\n\n```taper_measurement_height_max```\nTaper measurements are extracted up to this height above the DTM.\n\n```taper_measurement_height_increment```\nThe increment of the taper measurements.\n\n```taper_slice_thickness```\nThe cleaned cylinders (in the point based representation) within +/- 0.5 * taper_slice_thickness are found. The largest radius within this slice is used as the diameter for that particular height. \n\n```delete_working_directory```\nGenerally leave this on. Deletes the files used for segmentation after segmentation is finished.\nYou may wish to turn it off if you want to re-run/modify the segmentation code so you don\'t need to run pre-processing every time.\n\n## Scripts\n\n### Scripts you would normally interact with:\n```run.py``` This is how you should interface with the code base under normal use.\n\n```combine_multiple_output_CSVs.py``` This will get\nall ""plot_summary.csv"" files and combine them into one CSV. This will be saved in the highest common directory\nof the selected point clouds.\n\n### Scripts you would only use directly if you are modifying the software:\n```run_tools.py``` A few helper functions to clean up run.py.\n\n```tools.py``` Other helper functions used throughout the code base.\n\n```preprocessing.py``` Performs subsampling of the input point cloud and handles the slicing and dicing of the point\ncloud into samples the segmentation model can work with.\n\n```model.py``` The segmentation model modified from the Pytorch Geometric implementation of Pointnet++.\n\n```inference.py``` Performs the semantic segmentation on the samples and then reassembles them back into a full point\ncloud.\n\n```post_segmentation_script.py``` Creates the Digital Terrain Model (DTM) and uses this and some basic rules to clean the\nsegmented point cloud up. Creates the class specific point clouds (terrain, vegetation, CWD and stem points).\n\n```measure.py``` Extracts measurements and metrics from the outputs of the post_segmentation_script.\n\n```report_writer.py``` Summarises the measurements in a simple report format.\n\n\n#\n\n\n## Known Limitations\n* Young trees with a lot of branching do not currently get segmented correctly.\n* Some extremely large trees do not currently get measured properly as the rules don\'t always hold.\n* FSCT is unlikely to output useful results on low resolution point clouds. \n* *Very high* resolution Aerial LiDAR is about the lowest it can currently cope with. If your dataset is on the borderline,\ntry setting low_resolution_point_cloud_hack_mode (in other_parameters.py) to 4 or 5 and rerunning. It\'s an ugly hack, but it can help sometimes.\n* Segmentation does often miss some branches, but usually gets the bulk of them.\n* Small branches are often not detected.\n* Completely horizontal branches/sections may not be measured correctly from the method used.\n\n## Citation\n#### If you wish to cite this work, please use the below citation. If citing for something other than a scientific journal, feel free to link to the GitHub instead.\nKrisanski, S.; Taskhiri, M.S.; Gonzalez Aracil, S.; Herries, D.; Muneri, A.; Gurung, M.B.; Montgomery, J.; Turner, P. 
Forest Structural Complexity Tool\xe2\x80\x94An Open Source, Fully-Automated Tool for Measuring Forest Point Clouds. Remote Sens. 2021, 13, 4677. https://doi.org/10.3390/rs13224677\n\n## Use of this code\nPlease feel free to use/modify/share this code. If you can improve/evaluate the code somehow and wish to make a paper of it, please do!\nI might not have a chance to make many improvements going forward after my PhD, but I will try to keep it maintained.\n\nIf you can share your improvements, that would be great, but you are not obligated. Commercial use of FSCT is also permitted.\n\n\n## Instructions for training a new semantic segmentation model\n\nFSCT relies heavily on the segmentation model working properly. \nTraining your own model may help expand the utility of FSCT to additional datasets outside of the original training set I used.\n\n### Step 1 - Creating training data\nUnless you modify the code, training data must be provided as a .las file.\nThis file must have a ""label"" column, with integer based labels as follows: 1: Terrain, 2: Vegetation, 3: Coarse woody debris, 4: Stems/branches.\n\nLook at a ""segmented.las"" or ""segmented_cleaned.las"" file (an output of FSCT in normal use) as an example of what the training data must look like.\nIt is strongly recommended to use FSCT to label your data, THEN correct it manually. \n\n**Note: manually segmenting/correcting point clouds is extremely tedious. The original dataset took me ~3-4 weeks to label from scratch...\nI use CloudCompare\'s segmentation tool for manually correcting the training data. You should start by loading the terrain_points.las, vegetation_points.las, cwd_points.las, and stem_points.las. \nI may eventually add an explanation video of how I do this, but for now, you will need to work out a way to do this.\nImportantly, take great care to label consistently. Sloppy labelling may result in your model not learning what you want it to learn. Small details can matter.**\n\n### Step 2 - Preparing training data for processing\nTake your chosen point cloud, and chop it into train, validation and test slices. You may choose to slice them \nas 50%, 25% and 25% respectively, but use your discretion.\n- Save each slice as a .las file.\n- Place the ""train"" slice into the directory ```FSCT/data/train_dataset/```\n- Place the ""validation"" slice into the directory ```FSCT/data/validation_dataset/```\n- Place the ""test"" slice into the directory ```FSCT/data/test_dataset/```\n\nYou can have multiple point clouds in the above directories, and during preprocessing, they will all be placed in the respective sample directories ```FSCT/data/*_dataset/sample_dir/```\n\n### Step 3 - Preprocessing the training data\nSet the parameters: ```preprocess_train_datasets```, ```preprocess_validation_datasets``` and ```preprocess_test_datasets``` to True (or 1).\nRun the ```train.py``` file and it will generate the samples for you. After running this the first time, set the above to False (or 0) to avoid preprocessing them again and duplicating them in the ```sample_dir``` directories.\n\nFor each labelled point cloud you wish to use for training, you must slice it into a chunk for training (most of the point cloud), and a chunk for validation.\nPlace the training chunk into the ""data/train_dataset/"" directory.\n\n\n**Note:** Preprocessing will add files to the respective ```sample_dir``` directory, but *does not yet delete them*. 
This is important if you re-run the preprocessing step.\n\n#### Here is a simple scenario which should hopefully make this clearer:\nI have already preprocessed some point clouds located in the ```train_dataset``` directory. I have created another training dataset and wish to preprocess it so I can use it for training.\n\nI have 2 options:\n Option A: move the already processed point clouds out of the ```train_dataset``` directory. Leave the ```sample_dir``` directory as it was. Add the new training point cloud into the train_dataset directory. Set the ```preprocess_train_datasets``` parameter to 1 and run the script. As you moved the previously processed point clouds out of the train_dataset directory, they will not be processed, and just the new point cloud will be pre-processed and added to the ```sample_dir``` directory. Set the ```preprocess_train_datasets``` parameter back to 0 and proceed as you wish.\n \n Option B: Leave your previously processed training point clouds in the ```train_dataset``` directory, add your new training point cloud to this directory also. Manually delete the contents of the ```sample_dir``` directory and re-run preprocessing for all of the training point clouds.\n \nOptions A and B achieve the same thing, but option A is more efficient, as you are not pre-processing everything from scratch again. Option B is likely necessary if you wish to remove a sample point cloud from the dataset.\n\nWhile most users of FSCT aren\'t likely to be training their own models, I plan to improve this process. Please see here for future work enhancements planned: https://github.com/SKrisanski/FSCT/issues/4\n \n\n### Step 4 - Train the model\nYou can either let the script continue on after the preprocessing step, or stop it, turn off the preprocessing modes and rerun.\nBe sure to set the parameters according to your computer\'s specs. If you have CUDA errors, reduce the batch size or switch to CPU mode. If you don\'t have an Nvidia GPU, you must use CPU mode, but training will be very slow...\n\nThe ```training_monitor.py``` script will plot the loss and accuracy of the model. You must run this simultaneously in a separate terminal/python console to the training script.\n\n**Note: the training process will take several days on a powerful desktop computer.**\n\n### Step 5 - Use the trained model in FSCT\nSimply change the ```model_filename``` in ```other_parameters.py``` to the model you named in ```train.py```.\n\n### An idea potentially worth exploring\nFSCT is already capable of producing reasonably well segmented point clouds (within the stated limitations). \nBy leveraging FSCT to automatically segment point clouds, it seems likely that the model could almost train itself into\na more consistent and robust state through the use of carefully designed data augmentations.\n\n### Created a model that you wish to contribute to the repository?\nGet in touch and if it works well, I\'ll happily add it to the model collection of this repo.\n\n## Contributing/Collaborating\nThis code is likely far from optimal, so if you find errors or have ideas/suggestions on improvements/better practices,\nthey would be most welcome!\n\n## Acknowledgements\nThis research was funded by the Australian Research Council - Training Centre for Forest Value (IC150100004),\nUniversity of Tasmania, Australia.\n\nThanks to my supervisory team Assoc. Prof Paul Turner and Dr. Mohammad Sadegh Taskhiri from the eLogistics Research\nGroup and Dr. 
James Montgomery from the University of Tasmania.\n\nThanks to Susana Gonzalez Aracil, David Herries from Interpine Group Ltd (New Zealand) https://interpine.nz/, Allie \nMuneri and Mohan Gurung from PF Olsen (Australia) Ltd. https://au.pfolsen.com/, who provided a number of the raw point\nclouds and plot measurements used during the development and validation of this tool.\n\n\n\n## References\nThe deep learning component uses Pytorch https://pytorch.org/ and Pytorch-Geometric \nhttps://pytorch-geometric.readthedocs.io/en/latest/#\n\nThe first step is semantic segmentation of the forest point cloud. This is performed using a modified version of\nPointnet++ https://github.com/charlesq34/pointnet2 using the implementation in Pytorch-Geometric as a starting point\nprovided here: https://github.com/rusty1s/pytorch_geometric/blob/master/examples/pointnet2_segmentation.py\n'",",https://doi.org/10.3390/rs12101652,https://doi.org/10.3390/rs13224677\n\n##","2021/06/22, 05:39:18",855,GPL-3.0,0,157,"2022/11/02, 09:08:44",13,3,22,3,357,1,0.3333333333333333,0.09677419354838712,,,0,3,false,,false,false,,,,,,,,,,, Forest Scenario Planner,An online tool for forest management scenario planning.,Ecotrust,https://github.com/Ecotrust/forestplanner.git,github,,Forest Observation and Management,"2021/12/23, 19:48:50",29,0,1,true,JavaScript,Ecotrust,Ecotrust,"JavaScript,Python,Jupyter Notebook,HTML,CSS,Ruby,Perl,Puppet,Shell,PHP,PLpgSQL,Pascal,R",,"b'# Forest Scenario Planner\n## An Online Tool for Forest Management Scenario Planning\n\nEcotrust has created the Forest Planner to give forest management scenario planning capacity to all Oregon and Washington land managers. Users will be able to visualize alternative management scenarios on their lands and receive immediate feedback on how their decisions might pay off in terms of timber harvests and financial returns, as well as public benefits like carbon storage and ecosystem services. 
\n\n### Using the Scenario Planning Tool\n* Find and map your property with helpful map layers such as tax lots \n* Use preloaded Forest Inventory Analysis data (FIA) or upload your own cruise data\n* Designate forest management areas or stands with riparian buffers and steep slopes\n* Define management prescriptions\n* Specify timber and carbon market prices\n\n### Scenario Planning Tool Outputs:\n* Graphs of timber volume and financial returns generated over time\n* Optimized harvest schedule\n* Choose from a spectrum of harvest practices and apply them to the management units you want\n* Maps of standing timber volume, species and age class over time\n* Visualize potential to realize carbon credits and ecosystem-based incentives over time\n\n### Additional Features\n* Securely and confidentially store and manage data and scenario runs\n* Share scenarios between collaborators to explore collaborative decision making among forest landowners and managers\n\n### Software \n* OpenLayers\n* GeoDjango\n* PostGIS\n* Madrona\n* Tilemill\n* Mapnik\n* [Forest Vegetation Simulator](http://www.fs.fed.us/fmsc/fvs/)\n\n### Authors\n\n* Matthew Perry ([perrygeo](https://github.com/perrygeo))\n* Edwin Knuth ([eknuth](https://github.com/eknuth))\n* Mike Mertens ([mmertens](https://github.com/mmertens))\n* Ryan Hodges ([rhodges](https://github.com/rhodges))\n* Ken Vollmer (kvollmer)\n* Will Moore ([willthemoor](https://github.com/willthemoor))\n* David Diaz ([d-diaz](https://github.com/d-diaz))\n'",,"2011/12/14, 20:34:37",4333,BSD-3-Clause,0,1989,"2023/03/13, 22:17:12",12,17,475,7,226,1,0.3,0.555431131019037,,,0,5,false,,false,false,,,https://github.com/Ecotrust,https://ecotrust.org,"Portland, OR",,,https://avatars.githubusercontent.com/u/1215872?v=4,,, spanner,"Utilities to support landscape-, forest-, and tree-related data collection, manipulation, analysis, modelling, and visualization.",bi0m3trics,https://github.com/bi0m3trics/spanner.git,github,,Forest Observation and Management,"2023/08/02, 01:55:26",18,0,6,true,C++,,,"C++,R,C",,"b'# spanner \n![license](https://img.shields.io/badge/Licence-GPL--3-blue.svg) \n[![](https://www.r-pkg.org/badges/version/spanner)](https://cran.r-project.org/package=spanner)\n[![DOI](https://zenodo.org/badge/DOI/10.5281/zenodo.4624277.svg)](https://doi.org/10.5281/zenodo.4624277)\n\nDefinition of spanner\n
1 (chiefly British): WRENCH\n
2: a wrench that has a hole, projection, or hook at one or both ends of the head for engaging with a corresponding device on the object that is to be turned\n
3: utilities to support landscape-, forest-, and tree-related data collection, manipulation, analysis, modelling, and visualization. \n\n# Install `spanner`\n\nGet the latest released version of spanner from github.\n\n```r\nremotes::install_github(\'bi0m3trics/spanner\')\n```\n\n# Example usage\n\n\n\nThe following is the full processing pipeline described in Donager et al. (2021). It runs from downloading an example dataset, to preprocessing it using lidR\'s functionality, to estimating tree locations and DBH by rasterizing individual point cloud values of relative neighborhood density (at 0.3 and 1 m radius) and verticality within a slice of the normalized point cloud around breast height \n(1.37 m), to individual tree segmentation that follows ecological principles for \xe2\x80\x9cgrowing\xe2\x80\x9d trees from the input locations in a graph-theory approach. It relies heavily on the work of Roussel et al. (2020), Tao et al. (2015), and de Conto et al. (2017).
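A point worth noting before the full pipeline: tree locations and DBH are estimated inside a slice of normalized heights around breast height. As a minimal sketch of extracting such a slice with lidR\'s `filter_poi()` (using the same `grid_slice_min`/`grid_slice_max` bounds as the example below, and assuming `las` has already been height-normalized):\n\n```r\n# Keep only points between 0.6666 m and 2.0 m above ground\nbh_slice <- lidR::filter_poi(las, Z >= 0.6666 & Z <= 2.0)\n```\n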

\n\nCitation: Donager, Jonathon J., Andrew J. S\xc3\xa1nchez Meador, and Ryan C. Blackburn 2021. Adjudicating Perspectives on Forest Structure: How Do Airborne, Terrestrial, and Mobile Lidar-Derived Estimates Compare? Remote Sensing 13, no. 12: 2297. https://doi.org/10.3390/rs13122297\n\n```r\nlibrary(spanner)\n\n# set the number of threads to use in lidR\nset_lidr_threads(8)\n\n# download and read an example laz\ngetExampleData(""DensePatchA"")\nLASfile = system.file(""extdata"", ""DensePatchA.laz"", package=""spanner"")\nlas = readTLSLAS(LASfile, select = ""xyzcr"", ""-filter_with_voxel 0.01"")\n# Don\'t forget to make sure the las object has a projection\n# projection(las) = sp::CRS(""+init=epsg:26912"")\n\n# pre-process the example lidar dataset by classifying the ground points\n# using lidR::csf(), normalizing it, and removing outlier points \n# using lidR::ivf()\nlas = classify_ground(las, csf(sloop_smooth = FALSE, \n class_threshold = 0.5,\n cloth_resolution = 0.5, rigidness = 1L, \n iterations = 500L, time_step = 0.65))\nlas = normalize_height(las, tin())\nlas = classify_noise(las, ivf(0.25, 3))\nlas = filter_poi(las, Classification != LASNOISE)\n\n# plot the non-ground points, colored by height\nplot(filter_poi(las, Classification!=2), color=""Z"", trim=30)\n\n# perform a deep inspection of the las object. If you see any \n# red text, you may have issues!\nlas_check(las)\n\n# find individual tree locations and attribute data\nmyTreeLocs = get_raster_eigen_treelocs(las = las, res = 0.05, \n pt_spacing = 0.0254, \n dens_threshold = 0.2, \n neigh_sizes=c(0.333, 0.166, 0.5), \n eigen_threshold = 0.5, \n grid_slice_min = 0.6666, \n grid_slice_max = 2.0,\n minimum_polygon_area = 0.025, \n cylinder_fit_type = ""ransac"", \n output_location = getwd(), \n max_dia=0.5, \n SDvert = 0.25)\n\n# plot the tree information over a CHM\nplot(lidR::grid_canopy(las, res = 0.2, p2r()))\npoints(myTreeLocs$X, myTreeLocs$Y, col = ""black"", pch=16, \n cex = myTreeLocs$Radius^2*10, asp=1)\n\n# segment the point cloud \nmyTreeGraph = segment_graph(las = las, tree.locations = myTreeLocs, k = 50, \n distance.threshold = 0.5,\n use.metabolic.scale = FALSE, \n ptcloud_slice_min = 0.6666,\n ptcloud_slice_max = 2.0,\n subsample.graph = 0.1, \n return.dense = FALSE,\n output_location = getwd())\n\n# plot it in 3d colored by treeID\nplot(myTreeGraph, color = ""treeID"")\n```\n'",",https://doi.org/10.5281/zenodo.4624277,https://doi.org/10.3390/rs13122297,https://doi.org/10.3390/rs13122297\n\n```r\nlibrary(spanner)\n\n#","2020/03/11, 14:53:39",1323,GPL-3.0,3,78,"2023/09/28, 14:22:21",0,4,5,2,27,0,0.0,0.15068493150684936,"2022/02/13, 21:59:16",1.0.1,0,3,false,,false,false,,,,,,,,,,, ForestTools,Detect and segment individual tree from remotely sensed data.,andrew-plowright,https://github.com/andrew-plowright/ForestTools.git,github,,Forest Observation and Management,"2023/10/02, 15:41:34",41,0,13,true,R,,,"R,C++,Dockerfile",,"b'ForestTools \n======================================================================================================\n![license](https://img.shields.io/badge/Licence-GPL--3-blue.svg) \n[![](https://www.r-pkg.org/badges/version/ForestTools)](https://www.r-pkg.org/pkg/ForestTools)\n[![R-CMD-check](https://github.com/andrew-plowright/ForestTools/actions/workflows/R-CMD-check.yaml/badge.svg)](https://github.com/andrew-plowright/ForestTools/actions/workflows/R-CMD-check.yaml)\n[![](https://cranlogs.r-pkg.org/badges/ForestTools)](https://CRAN.R-project.org/package=ForestTools)\n\nThe ForestTools R 
package offers functions to analyze remote sensing forest data. Please consult the [NEWS.md](NEWS.md) file for updates.\n\nTo get started, consult the [canopy analysis tutorial](https://CRAN.R-project.org/package=ForestTools/vignettes/treetop_analysis.html). \n\nTo cite the package use `citation(""ForestTools"")` from within R.\n\n```\nAndrew Plowright (2023). ForestTools: Tools for Analyzing Remote Sensing Forest Data. R package version 1.0.0,\nhttps://github.com/andrew-plowright/ForestTools.\n```\n\n\n# Features\n\n### Detect and segment trees\n\nIndividual trees can be detected and delineated using a combination of the\n**variable window filter** (`vwf`) and **marker-controlled watershed segmentation**\n(`mcws`) algorithms, both of which are applied to a rasterized **canopy height model (CHM)**.\nCHMs are typically derived from aerial LiDAR or photogrammetric point clouds.\n\n![image info](./man/figures/treetops_segments.png)\n\n\n### Compute textural metrics\n\n**Grey-level co-occurrence matrices** (GLCMs) and their associated statistics can be computed for individual trees using a single-band\nimage and a segment raster (which can be produced using `mcws`). These metrics can be used to characterize and classify trees.\n\n\n# References\n\nThis library implements techniques developed in the following studies:\n\n* **Variable window filter**: [Seeing the trees in the forest](https://www.ingentaconnect.com/content/asprs/pers/2004/00000070/00000005/art00003) by Popescu, S. C., & Wynne, R. H. (2004)\n* **Marker-controlled watershed segmentation**: [Morphological segmentation](https://www.sciencedirect.com/science/article/pii/104732039090014M) by Meyer, F., & Beucher, S. (1990)\n* **Grey-level co-occurrence matrices**: [Robust radiomics feature quantification using semiautomatic volumetric segmentation](https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0102107) by Parmar, C., Velazquez, E.R., Leijenaar, R., Jermoumi, M., Carvalho, S., Mak, R.H., Mitra, S., Shankar, B.U., Kikinis, R., Haibe-Kains, B. and Lambin, P. (2014)\n\n\n# Research\n\nThe following is a non-exhaustive list of research papers that use the ForestTools library. Several of these studies discuss topics such as algorithm parameterization, and may be informative for users of this library.\n\n### 2023\n\n* [A novel post-fire method to estimate individual tree crown scorch height and volume using simple RPAS-derived data](https://fireecology.springeropen.com/articles/10.1186/s42408-023-00174-7) by Arkin, J., Coops, N. C., Daniels, L. D., & Plowright, A. (2023)\n\n* [Modelling internal tree attributes for breeding applications in Douglas-fir progeny trials using RPAS-ALS](https://www.sciencedirect.com/science/article/pii/S2666017222000347) by du Toit, F., Coops, N. C., Ratcliffe, B., El-Kassaby, Y. A., & Lucieer, A. (2023)\n\n* [Mountain Tree Species Mapping Using Sentinel-2, PlanetScope, and Airborne HySpex Hyperspectral Imagery](https://www.mdpi.com/2072-4292/15/3/844) by Kluczek, M., Zagajewski, B., & Zwijacz-Kozica, T. (2023)\n\n* [Use of Drone RGB Imagery to Quantify Indicator Variables of Tropical-Forest-Ecosystem Degradation and Restoration](https://www.mdpi.com/1999-4907/14/3/586) by Lee, K., Elliott, S., & Tiansawat, P. (2023)\n\n### 2022\n\n* [Individual Tree Identification in ULS Point Clouds Using a Crown Width Mixed-Effects Model Based on NFI Data](https://www.mdpi.com/2072-4292/14/4/926) by Kubi\xc5\xa1ta, J., & Surov\xc3\xbd, P. 
(2022)\n\n* [Utilizing Single Photon Laser Scanning Data for Estimating Individual Tree Attributes](https://helda.helsinki.fi/bitstream/handle/10138/344212/isprs_annals_V_2_2022_431_2022.pdf?sequence=1) by Simula, J., Holopainen, M., & Imangholiloo, M. (2022)\n\n* [UAV-LiDAR and RGB Imagery Reveal Large Intraspecific Variation in Tree-Level Morphometric Traits across Different Pine Species Evaluated in Common Gardens](https://www.mdpi.com/2072-4292/14/22/5904) by Lombardi, E., Rodr\xc3\xadguez-Puerta, F., Santini, F., Chambel, M. R., Climent, J., Resco de Dios, V., & Voltas, J. (2022)\n\n* [Cross-Comparison of Individual Tree Detection Methods Using Low and High Pulse Density Airborne Laser Scanning Data](https://www.mdpi.com/2072-4292/14/14/3480) by Sparks, A. M., Corrao, M. V., & Smith, A. M. (2022)\n\n* [Slow development of woodland vegetation and bird communities during 33 years of passive rewilding in open farmland](https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0277545) by Broughton, R. K., Bullock, J. M., George, C., Gerard, F., Maziarz, M., Payne, W. E., Scholefield, P. A., Wade, D., & Pywell, R. F. (2022)\n\n* [Application of unmanned aerial system structure from motion point cloud detected tree heights and stem diameters to model missing stem diameters](https://www.sciencedirect.com/science/article/pii/S2215016122001108) by Swayze, N. C., & Tinkham, W. T. (2022)\n\n* [Limited increases in savanna carbon stocks over decades of fire suppression](https://www.nature.com/articles/s41586-022-04438-1) by Zhou, Y., Singh, J., Butnor, J. R., Coetsee, C., Boucher, P. B., Case, M. F., Hockridge, E. G., Davies, A. B., & Staver, A. C. (2022)\n\n* [Automated Inventory of Broadleaf Tree Plantations with UAS Imagery](https://www.mdpi.com/2072-4292/14/8/1931) by Chandrasekaran, A., Shao, G., Fei, S., Miller, Z., & Hupy, J. (2022)\n\n* [Use of Unoccupied Aerial Systems to Characterize Woody Vegetation across Silvopastoral Systems in Ecuador](https://www.mdpi.com/2072-4292/14/14/3386) by I\xc3\xb1amagua-Uyaguari, J. P., Green, D. R., Fitton, N., Sangoluisa, P., Torres, J., & Smith, P. (2022)\n\n* [Democratizing macroecology: Integrating unoccupied aerial systems with the National Ecological Observatory Network](https://esajournals.onlinelibrary.wiley.com/doi/full/10.1002/ecs2.4206) by Koontz, M. J., Scholl, V. M., Spiers, A. I., Cattau, M. E., Adler, J., McGlinchy, J., Goulden, T., Melbourne, B. A., & Balch, J. K. (2022).\n\n* [An Integrated Method for Estimating Forest-Canopy Closure Based on UAV LiDAR Data](https://www.mdpi.com/2072-4292/14/17/4317) by Gao, T., Gao, Z., Sun, B., Qin, P., Li, Y., & Yan, Z. (2022)\n\n* [Detection of standing retention trees in boreal forests with airborne laser scanning point clouds and multispectral imagery](https://besjournals.onlinelibrary.wiley.com/doi/full/10.1111/2041-210X.13995) by Hardenbol, A. A., Korhonen, L., Kukkonen, M., & Maltamo, M. (2022)\n\n* [Optimizing aerial imagery collection and processing parameters for drone-based individual tree mapping in structurally complex conifer forests](https://besjournals.onlinelibrary.wiley.com/doi/full/10.1111/2041-210X.13860) by Young, D. J., Koontz, M. J., & Weeks, J. (2022)\n\n* [Assessing Structural Complexity of Individual Scots Pine Trees by Comparing Terrestrial Laser Scanning and Photogrammetric Point Clouds](https://www.mdpi.com/1999-4907/13/8/1305) by Tienaho, N., Yrttimaa, T., Kankare, V., Vastaranta, M., Luoma, V., Honkavaara, E., ... & Saarinen, N. 
(2022)\n\n* [SiDroForest: a comprehensive forest inventory of Siberian boreal forest investigations including drone-based point clouds, individually labeled trees, synthetically generated tree crowns, and Sentinel-2 labeled image patches](https://essd.copernicus.org/articles/14/4967/2022/) by van Geffen, F., Heim, B., Brieger, F., Geng, R., Shevtsova, I. A., Schulte, L., ... & Kruse, S. (2022)\n\n* [Individual urban trees detection based on point clouds derived from UAV-RGB imagery and local maxima algorithm, a case study of Fateh Garden, Iran](https://link.springer.com/article/10.1007/s10668-022-02820-7) by Azizi, Z., & Miraki, M. (2022)\n\n* [Effect of varied unmanned aerial vehicle laser scanning pulse density on accurately quantifying forest structure](https://www.tandfonline.com/doi/abs/10.1080/01431161.2021.2023229) by Sumnall, M. J., Albaugh, T. J., Carter, D. R., Cook, R. L., Hession, W. C., Campoe, O. C., ... & Thomas, V. A. (2022)\n\n* [Correcting the Results of CHM-Based Individual Tree Detection Algorithms to Improve Their Accuracy and Reliability](https://www.mdpi.com/2072-4292/14/8/1822) by Lisiewicz, M., Kami\xc5\x84ska, A., Kraszewski, B., & Stere\xc5\x84czak, K. (2022)\n\n* [Combining aerial photos and LiDAR data to detect canopy cover change in urban forests](https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0273487) by Coupland, K., Hamilton, D., & Griess, V. C. (2022)\n\n\n* [Effects of Flight and Smoothing Parameters on the Detection of Taxus and Olive Trees with UAV-Borne Imagery](https://www.mdpi.com/2504-446X/6/8/197) by Ottoy, S., Tziolas, N., Van Meerbeek, K., Aravidis, I., Tilkin, S., Sismanis, M., Stavrakoudis, D., Gitas, I. Z., Zalidis, G. & De Vocht, A. (2022)\n\n\n* [Modeling the Missing DBHs: Influence of Model Form on UAV DBH Characterization](https://www.mdpi.com/1999-4907/13/12/2077) by Tinkham, W. T., Swayze, N. C., Hoffman, C. M., Lad, L. E., & Battaglia, M. A. (2022)\n\n* [Mapping Tree Canopy in Urban Environments Using Point Clouds from Airborne Laser Scanning and Street Level Imagery](https://www.mdpi.com/1424-8220/22/9/3269) by Rodr\xc3\xadguez-Puerta, F., Barrera, C., Garc\xc3\xada, B., P\xc3\xa9rez-Rodr\xc3\xadguez, F., & Garc\xc3\xada-Pedrero, A. M. (2022)\n\n* [Extraction of individual trees based on Canopy Height Model to monitor the state of the forest](https://www.sciencedirect.com/science/article/pii/S2666719322000644) by Douss, R., & Farah, I. R. (2022)\n\n* [Aprisco Field Station: the spatial structure of a new experimental site focused on agroecology](https://academic.oup.com/jpe/article/15/6/1118/6576147) by O\xe2\x80\x99Brien, M. J., Carbonell, E. P., & Sch\xc3\xb6b, C. (2022)\n\n* [UAV-Based Characterization of Tree-Attributes and Multispectral Indices in an Uneven-Aged Mixed Conifer-Broadleaf Forest](https://www.mdpi.com/2072-4292/14/12/2775) by Vivar-Vivar, E. D., Pompa-Garc\xc3\xada, M., Mart\xc3\xadnez-Rivas, J. A., & Mora-Tembre, L. A. (2022)\n\n### 2021\n\n* [Detectability of the Critically Endangered Araucaria angustifolia Tree Using Worldview-2 Images, Google Earth Engine and UAV-LiDAR](https://www.mdpi.com/2073-445X/10/12/1316) by Saad, F., Biswas, S., Huang, Q., Corte, A. P. D., Coraiola, M., Macey, S., Marcos Bergmann, M., & Leimgruber, P. (2021)\n\n* [Fine scale mapping of fractional tree canopy cover to support river basin management](https://onlinelibrary.wiley.com/doi/abs/10.1002/hyp.14156) by Gao, S., Castellazzi, P., Vervoort, R. W., & Doody, T. M. 
(2021)\n\n* [Above Ground Biomass Estimation of Syzygium aromaticum using structure from motion (SfM) derived from Unmanned Aerial Vehicle in Paninggahan Agroforest Area, West Sumatra](http://jbioua.fmipa.unand.ac.id/index.php/jbioua/article/view/338) by Harapan, T. S., Husna, A., Febriamansyah, T. A., Mutashim, M., Saputra, A., Taufiq, A., & Mukhtar, E. (2021)\n\n* [Influence of flight parameters on UAS-based monitoring of tree height, diameter, and density](https://www.sciencedirect.com/science/article/abs/pii/S0034425721002601) by Swayze, N. C., Tinkham, W. T., Vogeler, J. C., & Hudak, A. T. (2021)\n\n\n* [Detection of aspen in conifer-dominated boreal forests with seasonal multispectral drone image point clouds](https://www.silvafennica.fi/article/10515/author/20257) by Hardenbol, A. A., Kuzmin, A., Korhonen, L., Korpelainen, P., Kumpula, T., Maltamo, M., & Kouki, J. (2021)\n\n* [Correcting tree count bias for objects segmented from lidar point clouds](https://www.proquest.com/openview/4c03d80d21aa8d71509deaae79259b9f/1?pq-origsite=gscholar&cbl=2030384) by Strub, M. R., & Osborne, N. (2021)\n\n* [Comparison of Accuracy between Analysis Tree Detection in UAV Aerial Image Analysis and Quadrat Method for Estimating the Number of Trees to be Removed in the Environmental Impact Assessment](https://koreascience.kr/article/JAKO202118752917743.page) by Park, M. (2021)\n\n* [Arboricoltura di precisione: un nuovo approccio alla gestione del rischio caduta alberi basato sulla Geomatica](https://mediageo.it/ojs/index.php/GEOmedia/article/view/1810) by De Petris, S., Sarvia, F., & Borgogno-Mondino, E. (2021)\n\n* [Canopy Extraction and Height Estimation of Trees in a Shelter Forest Based on Fusion of an Airborne Multispectral Image and Photogrammetric Point Cloud](https://www.hindawi.com/journals/js/2021/5519629/) by Wang, X., Zhao, Q., Han, F., Zhang, J., & Jiang, P. (2021)\n\n* [Uav-based lidar scanning for individual tree detection and height measurement in young forest permanent trials](https://www.mdpi.com/2072-4292/14/1/170) by Rodr\xc3\xadguez-Puerta, F., G\xc3\xb3mez-Garc\xc3\xada, E., Mart\xc3\xadn-Garc\xc3\xada, S., P\xc3\xa9rez-Rodr\xc3\xadguez, F., & Prada, E. (2021)\n\n* [UAV-derived forest degradation assessments for planning and monitoring forest ecosystem restoration: towards a forest degradation index](https://www.cifor.org/knowledge/publication/8199/) by Lee, K. (2021)\n\n* [Potential for Individual Tree Monitoring in Ponderosa Pine-Dominated Forests Using Unmanned Aerial System Structure from Motion Point Clouds](https://cdnsciencepub.com/doi/abs/10.1139/cjfr-2020-0433) by Creasy, M. B., Tinkham, W. T., Hoffman, C. M., & Vogeler, J. C. (2021)\n\n* [Assessment of Above-Ground Carbon Storage by Urban Trees Using LiDAR Data: The Case of a University Campus](https://www.mdpi.com/1999-4907/12/1/62) by G\xc3\xbcl\xc3\xa7in, D., & van den Bosch, C. C. K. (2021)\n\n* [Influence of Agisoft Metashape Parameters on UAS Structure from Motion Individual Tree Detection from Canopy Height Models](https://www.mdpi.com/1999-4907/12/2/250) by Tinkham, W. T., & Swayze, N. C. (2021)\n\n* [Ground-Penetrating Radar as phenotyping tool for characterizing intraspecific variability in root traits of a widespread conifer](https://link.springer.com/article/10.1007/s11104-021-05135-0) by Lombardi, E., Ferrio, J. P., Rodr\xc3\xadguez-Robles, U., de Dios, V. R., & Voltas, J. 
(2021)\n\n* [Bridging the genotype\xe2\x80\x93phenotype gap for a Mediterranean pine by semi\xe2\x80\x90automatic crown identification and multispectral imagery](https://nph.onlinelibrary.wiley.com/doi/abs/10.1111/nph.16862) by Santini, F., Kefauver, S. C., Araus, J. L., Resco de Dios, V., Mart\xc3\xadn Garc\xc3\xada, S., Grivet, D., & Voltas, J. (2021)\n\n* [Tracking the rates and mechanisms of canopy damage and recovery following Hurricane Maria using multitemporal lidar data](https://www.biorxiv.org/content/10.1101/2021.03.26.436869v1.abstract) by Leitold, V., Morton, D. C., Martinuzzi, S., Paynter, I., Uriarte, M., Keller, M., Keller, M., Ferraz, A., Cook, B. D., Corp, L. A., & Gonz\xc3\xa1lez, G. (2021)\n\n* [Cross-scale interaction of host tree size and climatic water deficit governs bark beetle-induced tree mortality](https://www.nature.com/articles/s41467-020-20455-y) by Koontz, M. J., Latimer, A. M., Mortenson, L. A., Fettig, C. J., & North, M. P. (2021)\n\n### 2020\n\n* [The wildlife\xe2\x80\x90livestock interface on extensive free\xe2\x80\x90ranging pig farms in central Spain during the \xe2\x80\x9cmontanera\xe2\x80\x9d period](https://onlinelibrary.wiley.com/doi/abs/10.1111/tbed.13854) by Triguero\xe2\x80\x90Oca\xc3\xb1a, R., Laguna, E., Jim\xc3\xa9nez\xe2\x80\x90Ruiz, S., Fern\xc3\xa1ndez\xe2\x80\x90L\xc3\xb3pez, J., Garc\xc3\xada\xe2\x80\x90Bocanegra, I., Barasona, J. \xc3\x81., ... & Acevedo, P. (2020)\n\n* [Supporting Assessment of Forest Burned Areas by Aerial Photogrammetry: The Susa Valley (NW Italy) Fires of Autumn 2017](https://link.springer.com/chapter/10.1007/978-3-030-58811-3_59) by De Petris, S., Momo, E. J., & Borgogno-Mondino, E. (2020)\n\n* [Applying unmanned aerial vehicles (UAVs) to map shrubland structural attributes in northern Patagonia, Argentina](https://cdnsciencepub.com/doi/abs/10.1139/cjfr-2019-0440@cjfrjuvs-uav.issue1) by Gonzalez Musso, R. F., Oddi, F. J., Goldenberg, M. G., & Garibaldi, L. A. (2020)\n\n* [Automated Canopy Delineation and Size Metrics Extraction for Strawberry Dry Weight Modeling Using Raster Analysis of High-Resolution Imagery](https://www.mdpi.com/2072-4292/12/21/3632) by Abd-Elrahman, A., Guan, Z., Dalid, C., Whitaker, V., Britt, K., Wilkinson, B., & Gonzalez, A. (2020)\n\n* [Northern Bobwhite Non\xe2\x80\x90Breeding Habitat Selection in a Longleaf Pine Woodland](https://wildlife.onlinelibrary.wiley.com/doi/abs/10.1002/jwmg.21925) by Kroeger, A. J., DePerno, C. S., Harper, C. A., Rosche, S. B., & Moorman, C. E. (2020)\n\n* [Evaluation of Features Derived from High-Resolution Multispectral Imagery and LiDAR Data for Object-Based Support Vector Machine Classification of Tree Species](https://www.tandfonline.com/doi/abs/10.1080/07038992.2020.1809363) by Roffey, M., & Wang, J. (2020)\n\n* [Mapping Species at an Individual-Tree Scale in a Temperate Forest, Using Sentinel-2 Images, Airborne Laser Scanning Data, and Random Forest Classification](https://www.mdpi.com/2072-4292/12/22/3710) by Plakman, V., Janssen, T., Brouwer, N., & Veraverbeke, S. (2020)\n\n### 2019\n\n* [High-resolution multisensor remote sensing to support date palm farm management](https://www.mdpi.com/2077-0472/9/2/26) by Mulley, M., Kooistra, L., & Bierens, L. (2019)\n\n* [Quantifying canopy tree loss and gap recovery in tropical forests under low-intensity logging using VHR satellite imagery and airborne LiDAR](https://www.mdpi.com/2072-4292/11/7/817) by Dalagnol, R., Phillips, O. L., Gloor, E., Galv\xc3\xa3o, L. S., Wagner, F. H., Locks, C. J., & Arag\xc3\xa3o, L. 
E. (2019)\n\n* [Forest inventory sensitivity to UAS-based image processing algorithms](https://afrjournal.org/index.php/afr/article/download/1282/818) by Maturbongs, B., Wing, M. G., Strimbu, B., & Burnett, J. (2019)\n\n* [Remote sensing pipeline for tree segmentation and classification in a mixed softwood and hardwood system](https://peerj.com/articles/5837/) by McMahon, C. A. (2019)\n\n* [Tree height in tropical forest as measured by different ground, proximal, and remote sensing instruments, and impacts on above ground biomass estimates](https://www.sciencedirect.com/science/article/abs/pii/S0303243419300844) by Laurin, G. V., Ding, J., Disney, M., Bartholomeus, H., Herold, M., Papale, D., & Valentini, R. (2019)\n\n* [Advances in the Derivation of Northeast Siberian Forest Metrics Using High-Resolution UAV-Based Photogrammetric Point Clouds](https://www.mdpi.com/2072-4292/11/12/1447) by Brieger, F., Herzschuh, U., Pestryakova, L. A., Bookhagen, B., Zakharov, E. S., & Kruse, S. (2019)\n\n* [Multi-scale Assessment of Northern Bobwhite and White-tailed Deer Habitat Selection in Longleaf Pine Woodlands](https://repository.lib.ncsu.edu/bitstream/handle/1840.20/37046/etd.pdf?sequence=1) by Kroeger, A. J. (2019)\n\n### 2018\n\n* [Bayesian and classical machine learning methods: a comparison for tree species classification with LiDAR waveform signatures](https://www.mdpi.com/2072-4292/10/1/39) by Zhou, T., Popescu, S. C., Lawing, A. M., Eriksson, M., Strimbu, B. M., & B\xc3\xbcrkner, P. C. (2018)\n\n### 2017\n\n* [Underproductive agriculture aids connectivity in tropical forests](https://www.sciencedirect.com/science/article/abs/pii/S0378112717308101) by Evans, L. J., Goossens, B., & Asner, G. P. (2017)\n'",,"2016/12/28, 16:11:32",2492,GPL-3.0,28,118,"2023/09/19, 17:31:29",2,4,26,9,36,0,0.0,0.02631578947368418,"2023/08/09, 23:52:12",v1.0.0,0,3,false,,false,false,,,,,,,,,,, rFIA,"Increase the accessibility and use of the USFS Forest Inventory and Analysis Database by providing a user-friendly, open source platform to easily query and analyze.",hunter-stanke,https://github.com/hunter-stanke/rFIA.git,github,"fia,fia-database,forest-inventory,forest-variables,inventories,fia-datamart,compute-estimates,space-time,spatial,r",Forest Observation and Management,"2023/04/09, 03:25:37",45,0,9,true,R,,,R,https://rfia.netlify.com/,"b'\n\n\n# rFIA: Unlocking the FIA Database in R \n\n[![](https://www.r-pkg.org/badges/version/rFIA?color=green)](https://cran.r-project.org/package=rFIA)\n[![](https://img.shields.io/badge/Cite%20rFIA-in%20EMS-yellow.svg)](https://www.sciencedirect.com/science/article/abs/pii/S1364815219311089)\n[![](http://cranlogs.r-pkg.org/badges/grand-total/rFIA?color=blue)](https://cran.r-project.org/package=rFIA)\n[![](https://travis-ci.org/hunter-stanke/rFIA.svg?branch=master)](https://travis-ci.org/hunter-stanke/rFIA)\n\n![US Biomass](man/figures/usBiomass.jpg)\n\nThe goal of `rFIA` is to increase the accessibility and use of the USFS\nForest Inventory and Analysis (FIA) Database by providing a\nuser-friendly, open source platform to easily query and analyze FIA\nData. 
Designed to accommodate a wide range of potential user objectives,\n`rFIA` simplifies the estimation of forest variables from the FIA\nDatabase and allows all R users (experts and newcomers alike) to unlock\nthe flexibility and potential inherent to the Enhanced FIA design.\n\nSpecifically, `rFIA` improves accessibility to the spatio-temporal\nestimation capacity of the FIA Database by producing space-time indexed\nsummaries of forest variables within user-defined population boundaries.\nDirect integration with other popular R packages (e.g., dplyr, sp, and\nsf) facilitates efficient space-time query and data summary, and\nsupports common data representations and API design. The package\nimplements design-based estimation procedures outlined by Bechtold &\nPatterson (2005), and has been validated against estimates and sampling\nerrors produced by EVALIDator. Current development is focused on the\nimplementation of spatially-enabled model-assisted estimators to improve\npopulation, change, and ratio estimates.\n\nFor more information and example usage of `rFIA`, check out our\n[website](https://rfia.netlify.app/). To report a bug or suggest\nadditions to `rFIA`, please use our [active\nissues](https://github.com/hunter-stanke/rFIA/issues) page here on\nGitHub, or contact [Hunter Stanke](https://hunter-stanke.com/) (lead\ndeveloper and maintainer).\n\n***To cite*** `rFIA`, please refer to our recent publication in\n[Environmental Modeling and\nSoftware](https://doi.org/10.1016/j.envsoft.2020.104664) (doi:\n).\n\n
\n\n## Installation\n\nYou can install the released version of `rFIA` from\n[CRAN](https://CRAN.R-project.org) with:\n\n``` r\ninstall.packages(""rFIA"")\n```\n\nAlternatively, you can install the development version from GitHub:\n\n``` r\ndevtools::install_github(\'hunter-stanke/rFIA\')\n```\n\n
\n\n## Functionality\n\n| `rFIA` Function | Description |\n| --------------- | ------------------------------------------------------------------ |\n| `area` | Estimate land area in various classes |\n| `biomass` | Estimate volume, biomass, & carbon stocks of standing trees |\n| `clipFIA` | Spatial & temporal queries for FIA data |\n| `diversity` | Estimate diversity indices (e.g.\xc2\xa0species diversity) |\n| `dwm` | Estimate volume, biomass, and carbon stocks of down woody material |\n| `getFIA` | Download FIA data, load into R, and optionally save to disk |\n| `growMort` | Estimate recruitment, mortality, and harvest rates |\n| `invasive` | Estimate areal coverage of invasive species |\n| `plotFIA` | Produce static & animated plots of FIA summaries |\n| `readFIA` | Load FIA database into R environment from disk |\n| `seedling` | Estimate seedling abundance (TPA) |\n| `standStruct` | Estimate forest structural stage distributions |\n| `tpa` | Estimate abundance of standing trees (TPA & BAA) |\n| `vitalRates` | Estimate live tree growth rates |\n| `writeFIA` | Write in-memory FIA Database to disk |\n\n
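The estimator functions in the table above (e.g., `area`, `biomass`, `tpa`) share a common interface: each takes an `FIA.Database` object as its first argument and returns a data frame of population estimates. A minimal sketch, assuming the `fiaRI` example database that ships with the package (see `?tpa` and the other function help pages for the full set of arguments):\n\n``` r\nlibrary(rFIA)\ndata(""fiaRI"")\n\n## Subset the most recent inventory once, then hand the same object\n## to any of the estimator functions listed above\nfiaRI_MR <- clipFIA(fiaRI, mostRecent = TRUE)\narea(fiaRI_MR) ## land area estimates\nbiomass(fiaRI_MR) ## biomass & carbon estimates\n```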
\n\n## Example Usage\n\n### *Download FIA Data and Load into R*\n\nThe first step to using `rFIA` is to download subsets of the FIA\nDatabase. The easiest way to accomplish this is using `getFIA`. Using\none line of code, you can download state subsets of the FIA Database,\nload data into your R environment, and optionally save those data to a\nlocal directory for future use\\!\n\n``` r\n## Download the state subset of Connecticut (requires an internet connection)\n# All data acquired from FIA Datamart: https://apps.fs.usda.gov/fia/datamart/datamart.html\nct <- getFIA(states = \'CT\', dir = \'/path/to/save/data\')\n```\n\nBy default, `getFIA` only loads the portions of the database required to\nproduce summaries with other `rFIA` functions (`common = TRUE`). This\nconserves memory on your machine and speeds download time. If you would\nlike to download all available tables for a state, simply specify\n`common = FALSE` in the call to `getFIA`.\n\n**But what if I want to load multiple states worth of FIA data into R?**\nNo problem\\! Simply specify multiple state abbreviations in the `states`\nargument of `getFIA` (e.g.\xc2\xa0`states = c(\'MI\', \'IN\', \'WI\', \'IL\')`), and\nall state subsets will be downloaded and merged into a single\n`FIA.Database` object. This will allow you to use other `rFIA` functions\nto produce estimates within polygons which straddle state boundaries\\!\n\nNote: given the massive size of the full FIA Database, users are\ncautioned to only download the subsets containing their region of\ninterest.\n\n**If you have previously downloaded FIA data and would simply like to load\nit into R from a local directory, use `readFIA`:**\n\n``` r\n## Load FIA Data from a local directory\ndb <- readFIA(\'/path/to/your/directory/\')\n```\n\n-----\n\n### *Compute Estimates of Forest Variables*\n\nNow that you have loaded your FIA data into R, it\xe2\x80\x99s time to put it to\nwork. Let\xe2\x80\x99s explore the basic functionality of `rFIA` with `tpa`, a\nfunction to compute tree abundance estimates (TPA, BAA, & relative\nabundance) from FIA data, and `fiaRI`, a subset of the FIA Database for\nRhode Island including inventories from 2013-2018.\n\n**Estimate the abundance of live trees in Rhode Island:**\n\n``` r\nlibrary(rFIA)\n## Load the Rhode Island subset of the FIADB (included w/ rFIA)\n## NOTE: This object can be produced using getFIA and/or readFIA\ndata(""fiaRI"")\n\n## Only estimates for the most recent inventory year\nfiaRI_MR <- clipFIA(fiaRI, mostRecent = TRUE) ## subset the most recent data\ntpaRI_MR <- tpa(fiaRI_MR)\nhead(tpaRI_MR)\n#> # A tibble: 1 x 8\n#> YEAR TPA BAA TPA_SE BAA_SE nPlots_TREE nPlots_AREA N\n#> \n#> 1 2018 427. 122. 6.63 3.06 126 127 199\n\n## All Inventory Years Available (i.e., returns a time series)\ntpaRI <- tpa(fiaRI)\nhead(tpaRI)\n#> # A tibble: 5 x 8\n#> YEAR TPA BAA TPA_SE BAA_SE nPlots_TREE nPlots_AREA N\n#> \n#> 1 2014 466. 120. 6.73 3.09 121 123 196\n#> 2 2015 444. 121. 6.40 3.06 122 124 194\n#> 3 2016 450. 123. 6.46 2.94 124 125 197\n#> 4 2017 441. 123. 6.66 3.01 124 125 196\n#> 5 2018 427. 122. 6.63 3.06 126 127 199\n```\n\n**What if I want to group estimates by species? How about by size\nclass?**\n\n``` r\n## Group estimates by species\ntpaRI_species <- tpa(fiaRI_MR, bySpecies = TRUE)\nhead(tpaRI_species, n = 3)\n#> # A tibble: 3 x 11\n#> YEAR SPCD COMMON_NAME SCIENTIFIC_NAME TPA BAA TPA_SE BAA_SE\n#> \n#> 1 2018 12 balsam fir Abies balsamea 0.0873 0.0295 114. 114. 
\n#> 2 2018 43 Atlantic white-ce\xe2\x80\xa6 Chamaecyparis thyo\xe2\x80\xa6 0.247 0.180 59.1 56.0\n#> 3 2018 68 eastern redcedar Juniperus virginia\xe2\x80\xa6 1.14 0.138 64.8 67.5\n#> # \xe2\x80\xa6 with 3 more variables: nPlots_TREE , nPlots_AREA , N \n\n## Group estimates by size class\n## NOTE: Default 2-inch size classes, but you can make your own using makeClasses()\ntpaRI_sizeClass <- tpa(fiaRI_MR, bySizeClass = TRUE)\nhead(tpaRI_sizeClass, n = 3)\n#> # A tibble: 3 x 9\n#> YEAR sizeClass TPA BAA TPA_SE BAA_SE nPlots_TREE nPlots_AREA N\n#> \n#> 1 2018 1 188. 3.57 13.0 12.8 76 127 199\n#> 2 2018 3 68.6 5.76 15.1 15.8 46 127 199\n#> 3 2018 5 46.5 9.06 6.51 6.57 115 127 199\n\n## Group by species and size class, and plot the distribution \n## for the most recent inventory year\ntpaRI_spsc <- tpa(fiaRI_MR, bySpecies = TRUE, bySizeClass = TRUE)\nplotFIA(tpaRI_spsc, BAA, grp = COMMON_NAME, x = sizeClass,\n plot.title = \'Size-class distributions of BAA by species\', \n x.lab = \'Size Class (inches)\', text.size = .75,\n n.max = 5) # Only want the top 5 species, try n.max = -5 for bottom 5\n```\n\n\n\n**What if I want estimates for a specific type of tree (ex. greater than\n12-inches DBH and in a canopy dominant or subdominant position) in a\nspecific area (ex. growing on mesic sites), and I want to group\nestimates by some variable other than species or size class (ex.\nownership group)?** Easy\\! Each of these specifications is described\nin the FIA Database, and all `rFIA` functions can leverage these data to\neasily implement complex queries\\!\n\n``` r\n## grpBy specifies what to group estimates by (just like species and size class above)\n## treeDomain describes the trees of interest, in terms of FIA variables \n## areaDomain, just like above, describes the land area of interest\ntpaRI_own <- tpa(fiaRI_MR, \n grpBy = OWNGRPCD, \n treeDomain = DIA > 12 & CCLCD %in% c(1,2),\n areaDomain = PHYSCLCD %in% c(20:29))\nhead(tpaRI_own)\n#> # A tibble: 2 x 9\n#> YEAR OWNGRPCD TPA BAA TPA_SE BAA_SE nPlots_TREE nPlots_AREA N\n#> \n#> 1 2018 30 0.848 3.57 59.0 59.1 3 38 199\n#> 2 2018 40 1.49 3.99 25.7 27.7 12 82 199\n```\n\n**What if I want to produce estimates within my own population\nboundaries (within user-defined spatial zones/polygons)?** This is where\nthings get really exciting.\n\n``` r\n## Load the county boundaries for Rhode Island\ndata(\'countiesRI\') ## Load your own spatial data from shapefiles using readOGR() (rgdal)\n\n## polys specifies the polygons (zones) where you are interested in producing estimates\n## returnSpatial = TRUE indicates that the resulting estimates will be joined with the \n## polygons we specified, thus allowing us to visualize the estimates across space\ntpaRI_counties <- tpa(fiaRI_MR, polys = countiesRI, returnSpatial = TRUE)\n\n## NOTE: Any grey polygons below simply mean no FIA data was available for that region\nplotFIA(tpaRI_counties, BAA) # Plotting method for spatial FIA summaries, also try \'TPA\' or \'TPA_PERC\'\n```\n\n\n\n**We produced a really cool time series earlier; how would I marry the\nspatial and temporal capacity of `rFIA` to produce estimates across\nuser-defined polygons and through time?** Easy\\! Just hand `tpa` the\nfull FIA.Database object you produced with `readFIA` (not the most\nrecent subset produced with `clipFIA`). For stunning space-time\nvisualizations, hand the output of `tpa` to `plotFIA`. 
To save the\nanimation as a .gif file, simply specify `fileName` (name of output file)\nand `savePath` (directory to save file, combined with `fileName`).\n\n``` r\n## Using the full FIA dataset, all available inventories\ntpaRI_st <- tpa(fiaRI, polys = countiesRI, returnSpatial = TRUE)\n\n## Animate the output\nlibrary(gganimate)\nplotFIA(tpaRI_st, TPA, animate = TRUE, legend.title = \'Abundance (TPA)\', legend.height = .8)\n#> NULL\n```\n'",",https://doi.org/10.1016/j.envsoft.2020.104664,https://doi.org/10.1016/j.envsoft.2020.104664","2019/08/05, 16:33:04",1542,GPL-3.0,12,442,"2023/04/09, 03:23:42",16,6,31,7,199,1,0.0,0.012690355329949221,,,0,5,false,,false,false,,,,,,,,,,, Forest Vegetation Simulator,"A family of individual-tree, distance-independent, forest growth simulation models.",USDAForestService,https://github.com/USDAForestService/ForestVegetationSimulator.git,github,,Forest Observation and Management,"2023/08/03, 14:51:06",24,0,24,true,Fortran,USDA Forest Service,USDAForestService,"Fortran,C,POV-Ray SDL,Assembly,C++,Makefile,CMake,Batchfile,R",,"b'\n

What is FVS?

\n

The Forest Vegetation Simulator (FVS) is a family of individual-tree, distance-independent, forest growth simulation models. It is a system of highly integrated analytical tools that is based upon a body of scientific research and experience. Since the development of the first model for northern Idaho in 1973, FVS has evolved into a collection of \'variants\' which represent different geographic areas across the country. FVS can simulate a wide range of silvicultural treatments for most major forest tree species, forest types, and stand conditions.

\n

The FVS Staff of the Forest Management Service Center (FMSC) in Fort Collins, Colorado, maintains, supports, develops, and provides training for FVS.

\n

Detailed information, such as our workflow policies and directions on building FVS in various environments, can be found on our Wiki.

\n

More information, such as training materials, videos, and documents, can be found on our website.

\n

Throughout its history, FVS has had many contributors, authors, and developers. The following list is not intended to be all-encompassing, but rather simply identifies the currently active contributors.\n

\n
    \n
  • Michael VanDyck - FVS Staff Lead and Product Owner\n
  • Daniel Wagner - FVS Programmer and Repository Maintainer\n
  • Lance David - FVS Senior Programmer\n
  • Erin Smith-Mateja - FVS Forest Biometrician\n
  • Mark Castle - FVS Forest Biometrician\n
  • Matt Diskin - FVS Forest Biometrician\n
  • Nick Crookston - Forest Research Consultant\n
  • Don Robinson - Ecologist and Canadian Variant (BC, ON) Development\n
\n\n\n\nhttps://user-images.githubusercontent.com/100228553/179036261-bfcc668b-976a-48a3-9b6e-4e193c17e948.mp4'",,"2022/02/22, 17:08:39",610,CUSTOM,74,1498,"2023/10/04, 13:20:00",0,27,34,27,21,0,0.1,0.5687943262411348,"2023/08/03, 15:04:21",FS2023.3,0,12,false,,false,true,,,https://github.com/USDAForestService,https://www.fs.usda.gov/,USDA Forest Service - Chief Information Office,,,https://avatars.githubusercontent.com/u/21344804?v=4,,, PYFOREST,Informing Forest Conservation Regulations in Paraguay.,cp-PYFOREST,https://github.com/cp-PYFOREST/Land-Use-Assessment.git,github,"deforestation,forest,land-use,r,assessment,compliance-check,forest-loss,data-analysis,geospatial-data,deforestation-risk,conservation,forest-management,open-source",Forest Observation and Management,"2023/06/13, 21:35:28",1,0,1,true,HTML,PYFOREST,cp-PYFOREST,HTML,https://bren.ucsb.edu/projects/informing-forest-conservation-regulations-paraguay,"b'

\n\nPYFOREST\n\n

\n\n

\n\nInforming Forest Conservation Regulations in Paraguay\n\n\n

\n\n\n\n

\n\n# Land Use Assessment \n\n\n

\n\n[Land-Use-Assessment](https://github.com/cp-PYFOREST/Land-Use-Assessment) | [Land-Use-Plan-Simulation](https://github.com/cp-PYFOREST/Land-Use-Plan-Simulation) | [PYFOREST-ML](https://github.com/cp-PYFOREST/PYFOREST-ML) | [PYFOREST-Shiny](https://github.com/cp-PYFOREST/PYFOREST-Shiny)\n\n

\n\n # Documentation\n For more detailed information about our project, including our methodologies, data sources, and technical specifications, please refer to our [technical documentation](https://bren.ucsb.edu/projects/informing-forest-conservation-regulations-paraguay).\n \n ## Table of Contents\n- [Description](#description)\n- [Land Use Plan Assessment](#land-use-plan-assessment)\n- [Deforestation Rates](#deforestation-rates)\n- [Forest Cover](#forest-cover)\n- [Results](#results)\n- [Data Information](#data-information)\n- [License](#license)\n- [Contributors](#contributors)\n \n## Description\nTo understand and mitigate the impacts of deforestation, it is crucial to evaluate property owners\' compliance with their approved LUPs and accurately quantify deforestation rates and forest cover. The study region in Paraguay, rich in biodiversity, has been experiencing significant deforestation, making it a crucial study area.\n \nThe first task is to assess whether property owners are executing their approved LUPs as intended. This involves a detailed analysis of various datasets, including property boundaries, LUPs, and forest loss data, all provided by INFONA, the National Forestry Institute of Paraguay.\n \nBy leveraging geospatial overlays, we compare forest loss data against permitted land use. This approach allows us to identify areas where deforestation has occurred without authorization, which is a key indicator of non-compliance with approved LUPs and provides valuable insights into the effectiveness of current forest regulations and land management practices.\n\n## Methods\n\n### Land Use Plan Assessment\n\nThe assessment process involves determining the active private properties for each year of analysis, using the Property Boundary dataset. The active-inactive.qmd and active-lup.qmd files preprocess the data to identify the unique identifiers of active properties and active LUPs for each year from 2011 to 2020. These identifiers are then used to subset the Land Use Plan dataset.\n- Each row of a LUP subset is a vector polygon of the approved land use type. The analysis done in lup_{year}-compliance.qmd uses the yearly subsets of LUPs and overlays them with the corresponding \xe2\x80\x98Forest Loss\xe2\x80\x99 dataset to determine the cell count per land use type. Each cell of the \xe2\x80\x98Forest Loss\xe2\x80\x99 dataset is a deforested area. \n- Yearly subsets of the Land use dataset contain a categorical column of \'GRUPOS,\' identifying the approved land use type. The analysis done in each lup_{year}-compliance.qmd uses the \'GRUPOS\' column to filter by the land use types of \'authorized area\' and \'forest reserve\' (\'AREA_AUTORIZADA,\' \'BOSQUES\').\n- Pixel counts were converted to an area for each property and land use type. Pixel counts greater than zero in the area designated as a forest reserve are considered illegal deforestation, placing the property out of compliance with its approved land use plan. A minimal sketch of this overlay step is shown below.\n 
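\nThe following sketch illustrates the overlay logic described above under stated assumptions: the file names are hypothetical, the forest-loss raster is assumed to code deforested cells as 1 (NA elsewhere), and the `terra` and `sf` packages stand in for the actual lup_{year}-compliance.qmd analysis.\n\n``` r\nlibrary(terra)\nlibrary(sf)\n\n## Hypothetical inputs: LUP polygons with a \'GRUPOS\' column and a\n## 30 m forest-loss raster (deforested cells = 1, NA elsewhere)\nlup <- st_read(""lup_2019.gpkg"")\nloss <- rast(""forest_loss_2019.tif"")\n\n## Keep only the land use types checked for compliance\nlup <- lup[lup$GRUPOS %in% c(""AREA_AUTORIZADA"", ""BOSQUES""), ]\n\n## Count deforested cells inside each polygon, then convert counts to hectares\ncounts <- extract(loss, vect(lup), fun = sum, na.rm = TRUE)\nlup$loss_ha <- counts[, 2] * prod(res(loss)) / 10000\n\n## A forest reserve polygon with any loss is out of compliance\nlup$non_compliant <- lup$GRUPOS == ""BOSQUES"" & lup$loss_ha > 0\n```\n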

\n\n\n\n

\n\n### Deforestation Rates \n- A time series analysis was performed on the output of the land use plan assessment to determine deforestation rates and quantify total areas at the national, department, district, and property levels.\n\n### Forest Cover \n- The same approach was applied to quantify forest cover across time using the yearly \xe2\x80\x98forest cover\xe2\x80\x99 datasets in conjunction with the Study Boundary dataset. This dataset was then subset into districts and departments.\n\n## Results\n- This analysis has determined that between 2019 and 2020, 44% of the deforestation within LUPs occurred in protected areas and was considered unauthorized, totaling 21,321 ha of illegal deforestation.\n \n

\n\n\n\n

\n \n \n - Out of a total of 1800 properties, 311 (or approximately 17%) did not comply with their LUPs. A relatively small proportion of properties primarily drives the high percentage of deforestation in protected areas. This result will allow INFONA to explore the underlying factors or patterns driving this behavior and better determine how to mitigate unauthorized deforestation within LUPs.\n \n

\n\n\n\n

\n \n - The spatial distribution of properties that committed illegal deforestation in 2019 leans heavily towards the furthest western boundary of the study region in the Boqueron Department. This distribution aligns with this study\'s analysis of the deforestation of the entire Paraguayan Chaco. \n \n - Boqueron experienced an approximately 14% decrease in forest cover from 2011 to 2020, declining from ~67% to ~53%. The yearly percentages of deforestation in Boqueron reflect the decreases observed in unauthorized deforestation within LUPs. In 2011, 2017, and 2020, Boqueron had percentages of area lost by year of 1.88, 2.12, and 0.53, respectively. Though the rate of deforestation decreased over the study period, Boqueron leads in comparison to the departments of Alto Paraguay and Presidente Hayes.\n\n- For the same three years of 2011, 2017, and 2020, Alto Paraguay had comparable decreases in percentages of 1.43, 0.75, 0.2. Forest cover decreased by 10% in the ten-year period, 74% down to 64%. \n\n- The values for the percentage of the area lost by year for Presidente Hayes are significantly lower than the other two departments within our study boundary, at 0.76, 0.75, and 0.28 for 2011, 2017, and 2020, respectively. An important point concerning the reported low values is that Presidente Hayes had the least forest cover to begin the ten-year period, with an approximate 50% coverage reduced to ~45.87% in 2020.\n\nTable 2: Data Information - Objective 1\n| Dataset | Year(s) | Source | Data Type | Spatial Reference/Resolution | Metadata |\n|---------|----------|----------|---------|----------|----------|\n| Forest Cover |\t2000-2020 |\tINFONA |\tRasters |\tCRS: WGS 84 / UTM zone 21S; Resolution: 30m x 30m |\tMetadata |\n| Forest Loss | 2000-2020 |\tINFONA |\tRasters |\tCRS: WGS 84 / UTM zone 21S; Resolution: 30m x 30m |\tMetadata |\n| Permitted land use | \t1994-2022 |\tINFONA |\tPolygons |\tCRS: WGS 84 / UTM zone 21S |\tMetadata |\n \n## License\n\nThis project is licensed under the Apache-2.0 License - see the LICENSE.md file for details\n \n## Contributors\n[Atahualpa Ayala](Atahualpa-Ayala), [Dalila Lara](https://github.com/dalilalara), [Alexandria Reed](https://github.com/reedalexandria), [Guillermo Romero](https://github.com/romero61)\n\n\n\n'",,"2023/01/26, 22:00:49",272,CUSTOM,58,58,"2023/04/19, 17:38:54",0,9,46,46,189,0,0.0,0.5102040816326531,"2023/06/25, 18:51:34",v1.0.0,0,5,false,,false,false,,,https://github.com/cp-PYFOREST,https://bren.ucsb.edu/projects/informing-forest-conservation-regulations-paraguay,,,,https://avatars.githubusercontent.com/u/123590267?v=4,,, Detectree2,Automatic tree crown delineation based on the Detectron2 implementation of Mask R-CNN.,PatBall1,https://github.com/PatBall1/detectree2.git,github,"detectron2,python,pytorch,deep-learning",Forest Observation and Management,"2023/10/25, 19:58:10",101,0,95,true,Jupyter Notebook,,,"Jupyter Notebook,Python,R,Makefile,Dockerfile,Shell",https://patball1.github.io/detectree2/,"b'

\n\n\n

\n\n\n [![License: MIT](https://img.shields.io/badge/License-MIT-blue.svg)](https://opensource.org/licenses/MIT) [![Detectree CI](https://github.com/patball1/detectree2/actions/workflows/python-ci.yml/badge.svg)](https://github.com/patball1/detectree2/actions/workflows/python-ci.yml) [![PEP8](https://img.shields.io/badge/code%20style-pep8-orange.svg)](https://www.python.org/dev/peps/pep-0008/) [![DOI](https://zenodo.org/badge/470698486.svg)](https://zenodo.org/badge/latestdoi/470698486)\n\n\n\n\nPython package for automatic tree crown delineation based on Mask R-CNN. Pre-trained models can be picked in the [`model_garden`](https://github.com/PatBall1/detectree2/tree/master/model_garden).\nA tutorial on how to prepare data, train models and make predictions is available [here](https://patball1.github.io/detectree2/tutorial.html). For questions, collaboration proposals and requests for data email [James Ball](mailto:ball.jgc@gmail.com). Some example data is available for download [here](https://doi.org/10.5281/zenodo.8136161).\n\nDetectree2\xe6\x98\xaf\xe4\xb8\x80\xe4\xb8\xaa\xe5\x9f\xba\xe4\xba\x8eMask R-CNN\xe7\x9a\x84\xe8\x87\xaa\xe5\x8a\xa8\xe6\xa0\x91\xe5\x86\xa0\xe6\xa3\x80\xe6\xb5\x8b\xe4\xb8\x8e\xe5\x88\x86\xe5\x89\xb2\xe7\x9a\x84Python\xe5\x8c\x85\xe3\x80\x82\xe6\x82\xa8\xe5\x8f\xaf\xe4\xbb\xa5\xe5\x9c\xa8[`model_garden`](https://github.com/PatBall1/detectree2/tree/master/model_garden)\xe4\xb8\xad\xe9\x80\x89\xe6\x8b\xa9\xe9\xa2\x84\xe8\xae\xad\xe7\xbb\x83\xe6\xa8\xa1\xe5\x9e\x8b\xe3\x80\x82[\xe8\xbf\x99\xe9\x87\x8c](https://patball1.github.io/detectree2/tutorial.html)\xe6\x8f\x90\xe4\xbe\x9b\xe4\xba\x86\xe5\xa6\x82\xe4\xbd\x95\xe5\x87\x86\xe5\xa4\x87\xe6\x95\xb0\xe6\x8d\xae\xe3\x80\x81\xe8\xae\xad\xe7\xbb\x83\xe6\xa8\xa1\xe5\x9e\x8b\xe5\x92\x8c\xe8\xbf\x9b\xe8\xa1\x8c\xe9\xa2\x84\xe6\xb5\x8b\xe7\x9a\x84\xe6\x95\x99\xe7\xa8\x8b\xe3\x80\x82\xe5\xa6\x82\xe6\x9e\x9c\xe6\x9c\x89\xe4\xbb\xbb\xe4\xbd\x95\xe9\x97\xae\xe9\xa2\x98\xef\xbc\x8c\xe5\x90\x88\xe4\xbd\x9c\xe6\x8f\x90\xe6\xa1\x88\xe6\x88\x96\xe8\x80\x85\xe9\x9c\x80\xe8\xa6\x81\xe6\xa0\xb7\xe4\xbe\x8b\xe6\x95\xb0\xe6\x8d\xae\xef\xbc\x8c\xe5\x8f\xaf\xe4\xbb\xa5\xe9\x82\xae\xe4\xbb\xb6\xe8\x81\x94\xe7\xb3\xbb[James Ball](mailto:ball.jgc@gmail.com)\xe3\x80\x82\xe4\xb8\x80\xe4\xba\x9b\xe7\xa4\xba\xe4\xbe\x8b\xe6\x95\xb0\xe6\x8d\xae\xe5\x8f\xaf\xe4\xbb\xa5\xe5\x9c\xa8[\xe8\xbf\x99\xe9\x87\x8c](https://doi.org/10.5281/zenodo.8136161)\xe4\xb8\x8b\xe8\xbd\xbd\xe3\x80\x82\n\n> **Warning**\\\n> Due to an influx of new users we have been hitting bandwidth limits. This is primarily from the file size of the pre-trained models. If you are using these models please aim to save them locally and point to them when you need them rather than downloading them each time they are required. We will move to a more bandwidth friendly set up soon. In the meantime, if installing the package is failing please raise it as an issue or notify me directly on ball.jgc@gmail.com.\n\n---\n\n| | Code developed by James Ball, Seb Hickman, Thomas Koay, Oscar Jiang, Luran Wang, Panagiotis Ioannou, James Hinton and Matthew Archer in the [Forest Ecology and Conservation Group](https://coomeslab.org/) at the University of Cambridge. The Forest Ecology and Conservation Group is led by Professor David Coomes and is part of the University of Cambridge [Conservation Research Institute](https://www.conservation.cam.ac.uk/). 
|\n| :---: | :--- |\n\n\n\n## Citation\n\nPlease cite this article if you use _detectree2_ in your work:\n\nBall, J.G.C., Hickman, S.H.M., Jackson, T.D., Koay, X.J., Hirst, J., Jay, W., Archer, M., Aubry-Kientz, M., Vincent, G. and Coomes, D.A. (2023),\nAccurate delineation of individual tree crowns in tropical forests from aerial RGB imagery using Mask R-CNN.\n*Remote Sens Ecol Conserv*. 9(5):641-655. [https://doi.org/10.1002/rse2.332](https://doi.org/10.1002/rse2.332)\n\n## Independent validation\n\nIndependent validation has been performed on a temperate deciduous forest in Japan.\n\n> *Detectree2 (F1 score: 0.57) outperformed DeepForest (F1 score: 0.52)*\n>\n> *Detectree2 could estimate tree crown areas accurately, highlighting its potential and robustness for tree detection and delineation*\n\nGan, Y., Wang, Q., and Iio, A. (2023).\nTree Crown Detection and Delineation in a Temperate Deciduous Forest from UAV RGB Imagery Using Deep Learning Approaches: Effects of Spatial Resolution and Species Characteristics. \n*Remote Sensing*. 15(3):778. [https://doi.org/10.3390/rs15030778](https://doi.org/10.3390/rs15030778)\n\n## Requirements\n\n- Python 3.8+\n- [gdal](https://gdal.org/download.html) geospatial libraries\n- [PyTorch \xe2\x89\xa5 1.8 and torchvision](https://pytorch.org/get-started/previous-versions/) versions that match\n- For training models GPU access (with CUDA) is recommended\n\ne.g.\n```pip3 install torch torchvision torchaudio```\n\n## Installation\n\n### pip\n\n```pip install git+https://github.com/PatBall1/detectree2.git```\n\nCurrently works on Google Colab (Pro version recommended). May struggle on clusters if geospatial libraries are not configured.\nSee [Installation Instructions](https://patball1.github.io/detectree2/installation.html) if you are having trouble.\n\n### conda\n\n*Under development*\n\n## Getting started\n\nDetectree2, based on the [Detectron2](https://github.com/facebookresearch/detectron2) Mask R-CNN architecture, locates\ntrees in aerial images. It has been designed to delineate trees in challenging dense tropical forests for a range of\necological applications.\n\nThis [tutorial](https://patball1.github.io/detectree2/tutorial.html) takes you through the key steps.\n[Example Colab notebooks](https://github.com/PatBall1/detectree2/tree/master/notebooks/colab) are also available but are\nnot updated frequently so functions and parameters may need to be adjusted to get things working properly.\n\nThe standard workflow includes:\n\n1) Tile the orthomosaics and crown data (for training, validation and testing)\n2) Train (and tune) a model on the training tiles\n3) Evaluate the model performance by predicting on the test tiles and comparing to manual crowns for the tiles\n4) Using the trained model to predict the crowns over the entire region of interest\n\nTraining crowns are used to teach the network to delineate tree crowns.\n

\n\n\n

\n\nHere is an example image of the predictions made by Detectree2.\n

\n\n

\n\n## Applications\n\n### Tracking tropical tree growth and mortality\n\n

\n\n

\n\n### Counting urban trees (Buffalo, NY)\n\n

\n\n

\n\n### Multi-temporal tree crown segmentation\n\n

\n\n

\n\n### Liana detection and infestation mapping\n\n*In development*\n\n

\n\n

\n\n### Tree species identification and mapping\n\n*In development*\n\n## To do\n\n- Functions for multiple labels vs single ""tree"" label\n\n## Project Organization\n\n```\n\xe2\x94\x9c\xe2\x94\x80\xe2\x94\x80 LICENSE\n\xe2\x94\x9c\xe2\x94\x80\xe2\x94\x80 Makefile\n\xe2\x94\x9c\xe2\x94\x80\xe2\x94\x80 README.md\n\xe2\x94\x9c\xe2\x94\x80\xe2\x94\x80 detectree2\n\xe2\x94\x82\xc2\xa0\xc2\xa0 \xe2\x94\x9c\xe2\x94\x80\xe2\x94\x80 data_loading\n\xe2\x94\x82\xc2\xa0\xc2\xa0 \xe2\x94\x9c\xe2\x94\x80\xe2\x94\x80 models\n\xe2\x94\x82\xc2\xa0\xc2\xa0 \xe2\x94\x9c\xe2\x94\x80\xe2\x94\x80 preprocessing\n\xe2\x94\x82\xc2\xa0\xc2\xa0 \xe2\x94\x9c\xe2\x94\x80\xe2\x94\x80 R\n\xe2\x94\x82\xc2\xa0\xc2\xa0 \xe2\x94\x94\xe2\x94\x80\xe2\x94\x80 tests\n\xe2\x94\x9c\xe2\x94\x80\xe2\x94\x80 docs\n\xe2\x94\x82\xc2\xa0\xc2\xa0 \xe2\x94\x94\xe2\x94\x80\xe2\x94\x80 source\n\xe2\x94\x9c\xe2\x94\x80\xe2\x94\x80 model_garden\n\xe2\x94\x9c\xe2\x94\x80\xe2\x94\x80 notebooks\n\xe2\x94\x82\xc2\xa0\xc2\xa0 \xe2\x94\x9c\xe2\x94\x80\xe2\x94\x80 colab\n\xe2\x94\x82\xc2\xa0\xc2\xa0 \xe2\x94\x9c\xe2\x94\x80\xe2\x94\x80 colabJB\n\xe2\x94\x82\xc2\xa0\xc2\xa0 \xe2\x94\x9c\xe2\x94\x80\xe2\x94\x80 colabJH\n\xe2\x94\x82\xc2\xa0\xc2\xa0 \xe2\x94\x9c\xe2\x94\x80\xe2\x94\x80 colabKoay\n\xe2\x94\x82\xc2\xa0\xc2\xa0 \xe2\x94\x9c\xe2\x94\x80\xe2\x94\x80 colabPan\n\xe2\x94\x82\xc2\xa0\xc2\xa0 \xe2\x94\x9c\xe2\x94\x80\xe2\x94\x80 colabSeb\n\xe2\x94\x82\xc2\xa0\xc2\xa0 \xe2\x94\x9c\xe2\x94\x80\xe2\x94\x80 exploratory\n\xe2\x94\x82\xc2\xa0\xc2\xa0 \xe2\x94\x9c\xe2\x94\x80\xe2\x94\x80 mask_rcnn\n\xe2\x94\x82\xc2\xa0\xc2\xa0 \xe2\x94\x82\xc2\xa0\xc2\xa0 \xe2\x94\x9c\xe2\x94\x80\xe2\x94\x80 testing\n\xe2\x94\x82\xc2\xa0\xc2\xa0 \xe2\x94\x82\xc2\xa0\xc2\xa0 \xe2\x94\x94\xe2\x94\x80\xe2\x94\x80 training\n\xe2\x94\x82\xc2\xa0\xc2\xa0 \xe2\x94\x9c\xe2\x94\x80\xe2\x94\x80 reports\n\xe2\x94\x82\xc2\xa0\xc2\xa0 \xe2\x94\x94\xe2\x94\x80\xe2\x94\x80 turing\n\xe2\x94\x9c\xe2\x94\x80\xe2\x94\x80 report\n\xe2\x94\x82\xc2\xa0\xc2\xa0 \xe2\x94\x9c\xe2\x94\x80\xe2\x94\x80 figures\n\xe2\x94\x82\xc2\xa0\xc2\xa0 \xe2\x94\x94\xe2\x94\x80\xe2\x94\x80 sections\n\xe2\x94\x94\xe2\x94\x80\xe2\x94\x80 requirements\n```\n\n## Code formatting\n\nTo automatically format your code, make sure you have `black` installed (`pip install black`) and call\n```black .```\nfrom within the project directory.\n\n---\n\nCopyright (c) 2022, James G. C. 
Ball\n'",",https://zenodo.org/badge/latestdoi/470698486,https://doi.org/10.5281/zenodo.8136161,https://doi.org/10.5281/zenodo.8136161,https://doi.org/10.1002/rse2.332,https://doi.org/10.1002/rse2.332,https://doi.org/10.3390/rs15030778,https://doi.org/10.3390/rs15030778","2022/03/16, 18:05:41",588,MIT,116,322,"2023/10/06, 18:53:33",36,59,89,48,19,8,0.0,0.22672064777327938,"2023/10/01, 17:52:28",v1.0.7,0,7,false,,true,false,,,,,,,,,,, allometric,"Thousands of allometric models exist in the scientific and technical forestry literature, and allometric is a platform for archiving and using this vast array of models in a robust and structured format.",allometric,https://github.com/allometric/allometric.git,github,"forest-inventory,forestry,carbon",Forest Observation and Management,"2023/10/17, 02:06:07",45,0,44,true,R,allometric,allometric,"R,PowerShell,CSS",,"b'\n# allometric \n\n\n\n[![R-CMD-check](https://github.com/allometric/allometric/actions/workflows/check-standard.yaml/badge.svg)](https://github.com/allometric/allometric/actions/workflows/check-standard.yaml)\n[![](https://img.shields.io/badge/devel%20version-1.4.1-blue.svg)](https://github.com/allometric/allometric)\n[![codecov](https://codecov.io/gh/allometric/allometric/branch/master/graph/badge.svg?token=3V5KUFMO2X)](https://app.codecov.io/gh/allometric/allometric)\n\n\n`allometric` is an R package for predicting tree attributes with\nallometric models. Thousands of allometric models exist in the\nscientific and technical forestry literature, and `allometric` is a\nplatform for archiving and using this vast array of models in a robust\nand structured format.\n\n`allometric` not only enables the use of allometric models for analysis,\nit also provides a structured language for adding models to the package.\nIf you are interested in helping the developer in this process please\nrefer to the [Contributing a\nModel](https://allometric.github.io/allometric/articles/installing_a_model.html)\nvignette.\n\nIn total **`allometric` contains 2099 models across 60 publications**,\nthe following table displays the number of models by continent and\ncategory:\n\n| category | AS | EU | NA | AF | OC | SA |\n|:------------------------|----:|----:|----:|----:|----:|----:|\n| biomass component | 26 | 136 | 435 | 0 | 0 | 0 |\n| crown diameter | 0 | 12 | 36 | 0 | 0 | 0 |\n| crown height | 0 | 12 | 0 | 0 | 0 | 0 |\n| shrub biomass | 0 | 19 | 0 | 0 | 0 | 0 |\n| shrub biomass increment | 0 | 28 | 0 | 0 | 0 | 0 |\n| shrub diameter | 0 | 39 | 0 | 0 | 0 | 0 |\n| shrub height | 0 | 28 | 0 | 0 | 0 | 0 |\n| site index | 0 | 0 | 52 | 0 | 0 | 0 |\n| stem height | 7 | 0 | 345 | 12 | 2 | 18 |\n| stem volume | 4 | 0 | 575 | 0 | 0 | 20 |\n| stump volume | 0 | 0 | 64 | 0 | 0 | 0 |\n| taper | 2 | 0 | 18 | 0 | 0 | 0 |\n| tree biomass | 2 | 36 | 90 | 0 | 21 | 16 |\n| other | 0 | 0 | 168 | 0 | 0 | 0 |\n\nRefer to the\n[Reference](https://allometric.github.io/allometric/reference/index.html)\nfor a full list of publications disaggregated by allometric model type.\n\n## How Can I Help?\n\n`allometric` is a monumental undertaking, and already several people\nhave come forward and added hundreds of models. There are several ways\nto help out. The following list is ranked from the least to most\ndifficult tasks.\n\n1. 
[Add missing publications as an\n Issue](https://github.com/allometric/models/issues/new?assignees=brycefrank&labels=add+publication&projects=&template=add-models-from-a-publication.md&title=%5BInsert+Author-Date+Citation%5D).\n We always need help *finding publications* to add. If you know of a\n publication that is missing, feel free to add it as an Issue and we\n will eventually install the models contained inside.\n2. [Find source material for a\n publication](https://github.com/allometric/models/labels/missing%20source).\n Some publications are missing their original source material.\n Usually these are very old legacy publications. If you know where a\n publication might be found, or who to contact, leave a note on any\n of these issues.\n3. [Help us digitize\n publications](https://github.com/allometric/models/labels/digitization%20needed).\n We always need help *digitizing legacy reports*, at this link you\n will find a list of reports that need manual digitization. These can\n be handled by anyone with Excel and a cup of coffee.\n4. [Learn how to install and write\n models](https://allometric.github.io/allometric/articles/installing_a_model.html).\n Motivated users can learn how to install models directly using the\n package functions and git pull requests. Users comfortable with R\n and git can handle this task.\n\nOther ideas? Contact us to help out.\n\n## Installation\n\nCurrently `allometric` is only available on GitHub, and can be installed\nusing `devtools`.\n\n``` r\ndevtools::install_github(""allometric/allometric"")\n```\n\n## Getting Started\n\nMost users will only be interested in finding and using allometric\nequations in their analysis. `allometric` allows rapid searching of\ncurrently installed models.\n\nBefore beginning, make sure to install the models locally by running\n\n``` r\nlibrary(allometric)\ninstall_models()\n```\n\nThis compiles the allometric models and enables their use.\n`install_models()` only needs to be run at installation or following any\npackage updates. The user can then call `load_models()` to load all\navailable allometric models into memory.\n\n``` r\nallometric_models <- load_models()\nhead(allometric_models)\n```\n\n #> # A tibble: 6 \xc3\x97 12\n #> id model_type country region family genus species model pub_id\n #> \n #> 1 539629a5 stem height Accipi\xe2\x80\xa6 Circ\xe2\x80\xa6 canade\xe2\x80\xa6 hahn_\xe2\x80\xa6\n #> 2 7bc0e06a stem volume Accipi\xe2\x80\xa6 Circ\xe2\x80\xa6 canade\xe2\x80\xa6 hahn_\xe2\x80\xa6\n #> 3 1fa4219a stem volume Accipi\xe2\x80\xa6 Circ\xe2\x80\xa6 canade\xe2\x80\xa6 hahn_\xe2\x80\xa6\n #> 4 b359d3ce stump volume Accipi\xe2\x80\xa6 Circ\xe2\x80\xa6 canade\xe2\x80\xa6 hahn_\xe2\x80\xa6\n #> 5 fb5c4575 stem ratio Accipi\xe2\x80\xa6 Circ\xe2\x80\xa6 canade\xe2\x80\xa6 hahn_\xe2\x80\xa6\n #> 6 733186a1 stem height Acerac\xe2\x80\xa6 Acer macrop\xe2\x80\xa6 fvs_2\xe2\x80\xa6\n #> # \xe2\x84\xb9 3 more variables: family_name , covt_name , pub_year \n\n**Finding and Selecting a Model**\n\n`allometric_models` is a `tibble::tbl_df` dataframe. Each row represents\none allometric model with various attributes. Some columns are `list`\ncolumns, which are columns that contain lists with multiple values as\ntheir elements. 
One example of this is the `family_name` column, which\ncontains the names of all authors for the publication that contains the\nmodel.\n\n`list` columns enable rigorous searching of models covered in the\n`?allometric_models` help page, but to get started we will use a helper\nfunction called `unnest_models()` that will give us a clearer picture of\nthe available data. Using the `cols` argument we can specify which\ncolumns we want to unnest. In this case we will unnest the `family_name`\ncolumn.\n\n``` r\nunnested_models <- unnest_models(allometric_models, cols = ""family_name"")\nunnested_models\n```\n\n #> # A tibble: 5,076 \xc3\x97 12\n #> id model_type country region family genus species model pub_id\n #> \n #> 1 539629a5 stem height Accip\xe2\x80\xa6 Circ\xe2\x80\xa6 canade\xe2\x80\xa6 hahn_\xe2\x80\xa6\n #> 2 7bc0e06a stem volume Accip\xe2\x80\xa6 Circ\xe2\x80\xa6 canade\xe2\x80\xa6 hahn_\xe2\x80\xa6\n #> 3 1fa4219a stem volume Accip\xe2\x80\xa6 Circ\xe2\x80\xa6 canade\xe2\x80\xa6 hahn_\xe2\x80\xa6\n #> 4 b359d3ce stump volume Accip\xe2\x80\xa6 Circ\xe2\x80\xa6 canade\xe2\x80\xa6 hahn_\xe2\x80\xa6\n #> 5 fb5c4575 stem ratio Accip\xe2\x80\xa6 Circ\xe2\x80\xa6 canade\xe2\x80\xa6 hahn_\xe2\x80\xa6\n #> 6 733186a1 stem height Acera\xe2\x80\xa6 Acer macrop\xe2\x80\xa6 fvs_2\xe2\x80\xa6\n #> 7 a31af9a5 stem height Acera\xe2\x80\xa6 Acer macrop\xe2\x80\xa6 fvs_2\xe2\x80\xa6\n #> 8 44f59d7d stem height Acera\xe2\x80\xa6 Acer macrop\xe2\x80\xa6 fvs_2\xe2\x80\xa6\n #> 9 1d58b6d4 stem height Acera\xe2\x80\xa6 Acer macrop\xe2\x80\xa6 fvs_2\xe2\x80\xa6\n #> 10 539ef85b stem height Acera\xe2\x80\xa6 Acer macrop\xe2\x80\xa6 fvs_2\xe2\x80\xa6\n #> # \xe2\x84\xb9 5,066 more rows\n #> # \xe2\x84\xb9 3 more variables: family_name , covt_name , pub_year \n\nNow, each row represents unique data combinations for each model, which\ncan be quickly filtered by most users using `dplyr::filter`. For\nexample, to find a volume model for the genus Alnus that had\n`""Brackett""` as an author or co-author we can use\n\n``` r\nbrackett_alnus_vol <- unnested_models %>%\n dplyr::filter(\n family_name == ""Brackett"", model_type == ""stem volume"",\n genus == ""Alnus""\n )\n\nbrackett_alnus_vol\n```\n\n #> # A tibble: 1 \xc3\x97 12\n #> id model_type country region family genus species model pub_id\n #> \n #> 1 f21028ef stem volume Betulac\xe2\x80\xa6 Alnus rubra brack\xe2\x80\xa6\n #> # \xe2\x84\xb9 3 more variables: family_name , covt_name , pub_year \n\nwe can see that model `f21028ef` is a volume model written by Brackett\nfor *Alnus rubra*. 
The model can be selected using the `id` field:\n\n``` r\nbrackett_alnus_mod <- brackett_alnus_vol %>% select_model(""f21028ef"")\n```\n\nor by using the row index\n\n``` r\nbrackett_alnus_mod <- brackett_alnus_vol %>% select_model(1)\n```\n\n**Determine Needed Information**\n\n`brackett_alnus_mod` now represents an allometric model that can be used\nfor prediction.\n\nUsing the standard output of `brackett_alnus_mod` we obtain a summary of\nthe model form, the response variable, the needed covariates and their\nunits, a summary of the model descriptors (i.e., what makes the model\nunique within the publication), and estimates of the parameters.\n\n``` r\nbrackett_alnus_mod\n```\n\n #> Model Call: \n #> vsia = f(dsob, hst) \n #> \n #> vsia [ft3]: volume of the entire stem inside bark, including top and stump\n #> dsob [in]: diameter of the stem, outside bark at breast height\n #> hst [ft]: total height of the stem \n #> \n #> Parameter Estimates: \n #> # A tibble: 1 \xc3\x97 3\n #> a b c\n #> \n #> 1 -2.67 1.92 1.07\n #> \n #> Model Descriptors: \n #> # A tibble: 1 \xc3\x97 7\n #> country region family genus species geographic_region age_class\n #> \n #> 1 US US-WA Betulaceae Alnus rubra \n\nWe can see from the `Model Call` section that `brackett_alnus_mod` will\nrequire two covariates called `dsob`, which refers to diameter outside\nbark at breast height, and `hst`, the height of the main stem.\n`allometric` uses a variable naming system to determine the names of\nresponse variables and covariates (refer to the [Variable Naming System\nvignette](https://allometric.github.io/allometric/articles/variable_naming_system.html)).\n\n**Predict Using the Selected Model**\n\nUsing the `predict()` method we can easily use the function as defined\nby providing values of these two covariates.\n\n``` r\npredict(brackett_alnus_mod, 12, 65)\n```\n\n #> 22.2347 [ft^3]\n\nor we can use the prediction function with a data frame of values\n\n``` r\nmy_trees <- data.frame(dias = c(12, 15, 20), heights = c(65, 75, 100))\npredict(brackett_alnus_mod, my_trees$dias, my_trees$heights)\n```\n\n #> Units: [ft^3]\n #> [1] 22.23470 39.80216 94.20053\n\nor even using the convenience of `dplyr`\n\n``` r\nmy_trees %>%\n mutate(vols = predict(brackett_alnus_mod, dias, heights))\n```\n\n #> dias heights vols\n #> 1 12 65 22.23470 [ft^3]\n #> 2 15 75 39.80216 [ft^3]\n #> 3 20 100 94.20053 [ft^3]\n\n## Next Steps\n\nThe following vignettes available on the [package\nwebsite](https://allometric.github.io/allometric/index.html) provide\ninformation to two primary audiences.\n\nUsers interested in finding models for analysis will find the following\ndocumentation most useful:\n\n- [Common Inventory Use\n Cases](https://allometric.github.io/allometric/articles/inventory_example.html)\n\nUsers interested in **contributing models** to the package will find\nthese vignettes the most useful:\n\n- [Contributing a\n Model](https://allometric.github.io/allometric/articles/installing_a_model.html)\n- [Describing a Model with\n Descriptors](https://allometric.github.io/allometric/articles/descriptors.html)\n- [Variable Naming\n System](https://allometric.github.io/allometric/articles/variable_naming_system.html)\n'",,"2022/12/17, 16:37:04",312,CUSTOM,433,433,"2023/10/17, 18:10:49",17,7,143,143,8,0,0.14285714285714285,0.01449275362318836,"2023/09/20, 17:15:43",v1.4.1,0,4,false,,false,false,,,https://github.com/allometric,https://allometric.github.io/allometric/,,,,https://avatars.githubusercontent.com/u/142178884?v=4,,, 3D 
Forest,"Visualization, processing and analysis of Lidar point clouds, mainly focused on forest environment.",VUKOZ-OEL,https://github.com/VUKOZ-OEL/3d-forest.git,github,"cpp,desktop-application,editor,forest,lidar,point-cloud,qt,3d,interactive-visualization,las,opengl,gui,laser-scanning,scientific-computing,tree,classification,cross-platform,segmentation,plugins,data-analysis",Forest Observation and Management,"2023/08/06, 17:39:31",44,0,19,true,C++,The Silva Tarouca Research Institute,VUKOZ-OEL,"C++,CMake",https://3dforest.eu,"b'
\n\n
\n\n# 3D Forest\n3D Forest is software for analysis of Lidar data from forest environment.\n\nCopyright 2020-Present VUKOZ\nBlue Cat team and other authors\n\n## License\n3D Forest is released under the GPLv3 license.\nSee [LICENSE](LICENSE) for more information.\n\n## Documentation\nHTML [Documentation](https://vukoz-oel.github.io/3d-forest-documentation/3d-forest-user-manual.html)\nwith User Manual and Developer Guide.\n\n## Tools and Algorithms\n- [Ground Classification](https://vukoz-oel.github.io/3d-forest-documentation/3d-forest-user-manual.html#tools-classification)\n- [Point Descriptor](https://vukoz-oel.github.io/3d-forest-documentation/3d-forest-user-manual.html#tools-descriptor)\n- [Point Elevation, Height Above Ground](https://vukoz-oel.github.io/3d-forest-documentation/3d-forest-user-manual.html#tools-elevation)\n- [Tree Segmentation](https://vukoz-oel.github.io/3d-forest-documentation/3d-forest-user-manual.html#tools-segmentation)\n\n## Build\nThe code uses C++17, CMake, Qt5 or Qt6 and OpenGL.\n```\n > mkdir build\n > cd build\n > cmake -G ""MinGW Makefiles"" .. -DCMAKE_INSTALL_PREFIX=..\n > mingw32-make\n > mingw32-make install\n```\n\n## Build Instructions\nThe code uses C++17 and CMake. Qt5 or Qt6 and OpenGL are required to build desktop application.\nThe build process generates desktop application with graphical user interface and command line tools.\nSee [INSTALL](INSTALL) for more information.\n\n- [Windows MinGW](https://vukoz-oel.github.io/3d-forest-documentation/3d-forest-developer-guide.html#build-windows-make) build from source code\n- [Windows Visual Studio](https://vukoz-oel.github.io/3d-forest-documentation/3d-forest-developer-guide.html#build-windows-visual-studio) build from source code\n- [Linux](https://vukoz-oel.github.io/3d-forest-documentation/3d-forest-developer-guide.html#build-linux-make) build from source code\n- [macOS / Mac OS X](https://vukoz-oel.github.io/3d-forest-documentation/3d-forest-developer-guide.html#build-macos-make) build from source code\n\n## Third-Party Libraries\n3D Forest source code includes several third-party libraries which are stored\nunder 3rdparty directory. This approach allows to use compatible versions of\nthird-party libraries **without downloading and installation** of each library.\n\n- ctk widgets: ctkRangeSlider (Jul 27, 2018), A slider that has 2 input values.\n- delaunator-cpp (Oct 6, 2018), Delaunay triangulation of 2D points.\n- eigen (3.4.0), Template library for linear algebra.\n- libigl (2.4.0), A simple geometry processing library.\n- stb_image_write (v1.16), stb single-file public domain libraries for C++.\n- octree (0.1-icra), Index-based Octree implementation.\n\n## Known Issues\nThis software is currently in development.\n\n# Support\n## Links\n\nSome useful links:\n\n- [3D Forest web site](https://www.3dforest.eu/)\n\n## Source Code sitemap\n```\nCMakeLists.txt - CMake top-level file.\nINSTALL - Installation and building instructions.\nLICENSE - The GPL license.\nREADME.md - Project summary.\n\n3rdparty/ - 3rd party libraries.\nbin/ - 3D Forest binaries. 
CMake install destination directory.\nbuild/ - CMake build directory.\ncmake/ - CMake settings.\ndata/ - Example data files.\ndoc/ - Documentation.\nsrc/ - Source code.\n```'",,"2020/06/05, 06:20:15",1237,GPL-3.0,416,1200,"2023/07/27, 06:14:55",2,0,2,1,90,0,0,0.00924024640657084,,,0,2,false,,false,false,,,https://github.com/VUKOZ-OEL,www.naturalforests.cz,Czech Republic,,,https://avatars.githubusercontent.com/u/69624981?v=4,,, datazoom.amazonia,"Facilitates access to official Brazilian Amazon data, including agriculture, deforestation, production.",datazoompuc,https://github.com/datazoompuc/datazoom.amazonia.git,github,"datazoom,amazonia,municipality",Forest Observation and Management,"2023/10/25, 01:35:01",37,0,10,true,R,Data Zoom,datazoompuc,"R,C",,"b'\n\n\n\n\n# datazoom.amazonia\n\n\n\n[![CRAN\nversion](https://www.r-pkg.org/badges/version/datazoom.amazonia?color=orange)](https://cran.r-project.org/package=datazoom.amazonia?style=flat)\n[![R build\nstatus](https://github.com/datazoompuc/datazoom.amazonia/workflows/R-CMD-check/badge.svg)](https://github.com/datazoompuc/datazoom.amazonia/actions?style=flat)\n[![CRAN\ndownloads](https://cranlogs.r-pkg.org/badges/grand-total/datazoom.amazonia?color=blue)](https://cran.r-project.org/package=datazoom.amazonia?style=flat)\n[![CRAN\ndownloads](https://cranlogs.r-pkg.org/badges/datazoom.amazonia?color=lightgrey)](https://cran.r-project.org/package=datazoom.amazonia?style=flat)\n![Languages](https://img.shields.io/github/languages/count/datazoompuc/datazoom.amazonia?style=flat)\n![Commits](https://img.shields.io/github/commit-activity/y/datazoompuc/datazoom.amazonia?style=flat)\n![Open\nIssues](https://img.shields.io/github/issues-raw/datazoompuc/datazoom.amazonia?style=flat)\n![Closed\nIssues](https://img.shields.io/github/issues-closed-raw/datazoompuc/datazoom.amazonia?style=flat)\n![Files](https://img.shields.io/github/directory-file-count/datazoompuc/datazoom.amazonia?style=flat)\n![Followers](https://img.shields.io/github/followers/datazoompuc?style=flat)\n\n\nThe datazoom.amazonia package facilitates access to official Brazilian\nAmazon data, including agriculture, deforestation, production. The\npackage provides functions that download and pre-process selected\ndatasets.\n\n## Installation\n\nYou can install the released version of `datazoom.amazonia` from\n[CRAN](https://CRAN.R-project.org/package=datazoom.amazonia) with:\n\n``` r\ninstall.packages(""datazoom.amazonia"")\n```\n\nAnd the development version from GitHub with:\n\n``` r\n# install.packages(""devtools"")\ndevtools::install_github(""datazoompuc/datazoom.amazonia"")\n```\n\n**[1 - Environmental data](#environmental-data)**\n\n
\n\n\n\n\n
\n\n| | |\n|-----------------------|----------------------------------------|\n| **[PRODES](#prodes)** | *Yearly deforestation* |\n| **[DETER](#deter)** | *Alerts on forest cover changes* |\n| **[DEGRAD](#degrad)** | *Forest degradation* |\n| **[Imazon](#imazon)** | *Deforestation pressure in the Amazon* |\n\n\n\n| | |\n|-----------------------------------|-------------------------------------|\n| **[IBAMA](#ibama)** | *Environmental fines* |\n| **[MAPBIOMAS](#mapbiomas)** | *Land cover and land use* |\n| **[TerraClimate](#terraclimate)** | *Climate data* |\n| **[SEEG](#seeg)** | *Greenhouse gas emission estimates* |\n| **[CENSOAGRO](#censoagro)** | *Agriculture activities* |\n\n
\n\n**[2 - Social data](#social-data)**\n\n\n\n\n\n
\n\n| | |\n|-------------------------------|---------------------------------------------------------|\n| **[IPS](#ips)** | *Amazon Social Progress Index* |\n| **[DATASUS](#datasus)** | *Causes of mortality and availability of hospital beds* |\n| **[IEMA](#iema)** | *Access to electricity in the Amazon region* |\n| **[Population](#population)** | *Population* |\n\n
\n\n**[3 - Economic data](#economic-data)**\n\n\n\n\n\n\n
\n\n| | |\n|-----------------------------|---------------------------------|\n| **[COMEX](#comex)** | *Brazilian international trade* |\n| **[BACI](#baci)** | *Global international trade* |\n| **[PIB-Munic](#pib-munic)** | *Municipal GDP* |\n| **[CEMPRE](#cempre)** | *Central register of companies* |\n| **[PAM](#pam)** | *Agricultural production* |\n\n\n\n| | |\n|-------------------------|---------------------------|\n| **[PEVS](#pevs)** | *Forestry and extraction* |\n| **[PPM](#ppm)** | *Livestock farming* |\n| **[SIGMINE](#sigmine)** | *Mining* |\n| **[ANEEL](#aneel)** | *Energy development* |\n| **[EPE](#epe)** | *Energy consumption* |\n\n
\n\n**[4 - Other tools](#other-tools)**\n\n\n\n
\n\n| | |\n|-----------------------------------------------------------------|-----------------------------------------------------------------------------|\n| **[Legal Amazon Municipalities](#legal-amazon-municipalities)** | *Dataset with brazilian cities and whether they belong to the Legal Amazon* |\n| **[The \xe2\x80\x98googledrive\xe2\x80\x99 package](#googledrive)** | *Troubleshooting and information for downloads from Google Drive* |\n\n
\n\n# Environmental Data\n\n## PRODES\n\nThe PRODES project uses satellites to monitor deforestation in Brazil\xe2\x80\x99s\nLegal Amazon. The raw data reports total and incremental (year-by-year)\nlow-cut deforested area at the municipality level.\n\nThe data made available in this package goes back to the year 2000, with\nongoing updates. In line with INPE\xe2\x80\x99s API, requesting data for an\nunavailable year does not yield an error, but rather a best effort\nresponse (columns regarding observation data are filled with default\nvalues).\n\nData is collected based on the PRODES-year, which starts at August 1st\nand ends on July 31st. Accordingly, 2018 deforestation data covers the\nperiod from 01/08/2017 to 31/07/2018.\n\n------------------------------------------------------------------------\n\n**Options:**\n\n1. **dataset**: `""prodes""`\n\n2. **raw_data**: there are two options:\n\n - `TRUE`: if you want the data as it is originally.\n - `FALSE`: if you want the treated version of the data.\n\n3. **time_period**: picks the years for which the data will be\n downloaded\n\n4. **language**: you can choose between Portuguese `(""pt"")` and English\n `(""eng"")`\n\n------------------------------------------------------------------------\n\n**Examples:**\n\n``` r\n# Download treated data (raw_data = FALSE) from 2010 (time_period = 2010) \n# in portuguese (language = \'pt\').\ndata <- load_prodes(raw_data = FALSE,\n time_period = 2010,\n language = \'pt\') \n```\n\n## DETER\n\n[DETER](http://www.obt.inpe.br/OBT/assuntos/programas/amazonia/deter/deter)\nuses satellite surveillance to detect and report changes in forest cover\nacross the Legal Amazon and the Cerrado biome. Each data point consists\nof a warning, describing which type of change has affected a certain\narea of forest at a given date. Broadly speaking, it makes a distinction\nbetween events of deforestation, degradation and logging. The data\nextracted here spans from 2016 onward in the Amazon, and from 2018\nonward in the Cerrado.\n\nThe raw DETER data shows one warning per row, with each row also\ncontaining a municipality. However, many warnings actually overlap with\n2 or up to 4 municipalities, which are not shown in the original data.\nTherefore, when the option `raw_data = TRUE` is selected, the original\nspatial information is intersected with a municipalities map of Brazil,\nand each warning can be split into more than one row, with each row\ncorresponding to a municipality.\n\n------------------------------------------------------------------------\n\n**Options:**\n\n1. **dataset**: there are two options:\n - `""deter_amz""` for data from the Amazon\n - `""deter_cerrado""` for data from the Cerrado\n2. **raw_data**: there are two options:\n - `TRUE`: if you want the data as it is originally.\n - `FALSE`: if you want the treated version of the data.\n3. **language**: you can choose between Portuguese `(""pt"")` and English\n `(""eng"")`\n\n------------------------------------------------------------------------\n\n**Examples:**\n\n``` r\n# Download treated data (raw_data = FALSE) from Amazonia (dataset = ""deter_amz"")\ndeter_amz <- load_deter(dataset = \'deter_amz\',\n raw_data = FALSE)\n```\n\n## DEGRAD\n\nThe [DEGRAD\nproject](http://www.obt.inpe.br/OBT/assuntos/programas/amazonia/degrad)\nuses satellites to monitor degradation of forest areas. Raw data is\navailable as simple features (sf) objects, read from shapefiles. The\nproject was substituted in 2016 by DETER-B. 
Accordingly, data is\navailable from 2007 up to 2016.\n\nOriginal documentation for this data is very scarce; users beware. Some\nthings to keep in mind are:\n\nEvent data is organized through yearly editions (DEGRAD 2007-2016).\nInside a given edition, however, there may be data from different years\n(events that happened in 2015 inside DEGRAD 2016 for example).\n\nThis package provides degradation data with municipality identification.\nIt does this by intersecting DEGRAD geometries with IBGE\xe2\x80\x99s municipality\ngeometries from the year 2019. CRS metadata however is missing from the\noriginal data source. A best effort approach is used and a CRS is\nassumed\n`(proj4string: ""+proj=longlat +ellps=aust_SA +towgs84=-66.8700,4.3700,-38.5200,0.0,0.0,0.0,0.0 +no_defs"")`.\n\n------------------------------------------------------------------------\n\n**Options:**\n\n1. **dataset**: `""degrad""`\n\n2. **raw_data**: there are two options:\n\n - `TRUE`: if you want the data as it is originally.\n - `FALSE`: if you want the treated version of the data.\n\n3. **time_period**: picks the years for which the data will be\n downloaded\n\n4. **language**: you can choose between Portuguese `(""pt"")` and English\n `(""eng"")`\n\n------------------------------------------------------------------------\n\n**Examples:**\n\n``` r\n# download treated data (raw_data = FALSE) related to forest degradation\n# from 2010 to 2012 (time_period = 2010:2012). \ndata <- load_degrad(dataset = \'degrad\', \n raw_data = FALSE,\n time_period = 2010:2012)\n```\n\n## Imazon\n\nLoads data categorizing each municipality by the level of deforestation\npressure it faces. The categories used by Imazon have four levels,\nranging from 0 to 3.\n\n------------------------------------------------------------------------\n\n**Options:**\n\n1. **dataset**: `""imazon_shp""`\n\n2. **raw_data**: there are two options:\n\n - `TRUE`: if you want the data as it is originally.\n - `FALSE`: if you want the treated version of the data.\n\n3. **language**: you can choose between Portuguese `(""pt"")` and English\n `(""eng"")`\n\n------------------------------------------------------------------------\n\n**Examples:**\n\n``` r\n# Download treated data\ndata <- load_imazon(raw_data = FALSE)\n```\n\n\xf0\x9f\x94\xb4 This function uses the `googledrive` package to download data. In\ncase of authentication errors, see [googledrive](#googledrive).\n\n## IBAMA\n\nThe dataset is originally from the Brazilian Institute of Environment\nand Renewable Natural Resources (Ibama), documenting environmental\nembargoes and fines at the individual level from 2005 to the present\nday. In addition, it is possible to download distributed and collected\nfines from 1994 until the present day.\n\nThe function returns either the raw data or a data frame with aggregates\nconsidering, for each time-location period, counts for the total number\nof infractions, infractions that already went to trial, and number of\nunique perpetrators of infractions. There are also two data frames\nregarding distributed and collected fines across municipalities.\n\n------------------------------------------------------------------------\n\n**Options:**\n\n1. **dataset**: there are three possible choices.\n\n - `""embargoed_areas""`: embargoed areas\n - `""distributed_fines""`: fines that have not been paid by\n individuals or corporations\n - `""collected_fines""`: fines that have been paid by individuals or\n corporations\n\n2. 
**raw_data**: there are two options:\n\n - `TRUE`: if you want the data as it is originally.\n - `FALSE`: if you want the treated version of the data.\n\n3. **states**: specifies for which states to download the data. It is\n \xe2\x80\x9call\xe2\x80\x9d by default, but can be a single state such as `""AC""` or any\n vector such as `c(""AC"", ""AM"")`. Does not apply to the\n `""embargoed_areas""` dataset.\n\n4. **language**: you can choose between Portuguese `(""pt"")` and English\n `(""eng"")`\n\n------------------------------------------------------------------------\n\n**Examples:**\n\n``` r\nlibrary(datazoom.amazonia)\n\n# Download treated embargoes data (raw_data = FALSE) in english (language = ""eng"")\ndata <- load_ibama(dataset = ""embargoed_areas"", raw_data = FALSE, \n language = ""eng"")\n\n# Download treated collected fines data from ""BA""\ndata <- load_ibama(dataset = ""collected_fines"", raw_data = FALSE,\n states = ""BA"", language = ""pt"")\n```\n\n## MAPBIOMAS\n\nThe MAPBIOMAS project gathers data reporting the type of land covering\neach year by area, that is, for example, the area used for a temporary\ncrop of soybeans. It also reports the transition between coverings\nduring given years.\n\nThe data available has an yearly frequency and is available starting\nfrom the year 1989.\n\n------------------------------------------------------------------------\n\n**Options:**\n\n1. **dataset**:\n\n - `""mapbiomas_cover""`: types of land cover\n - `""mapbiomas_transition""`: changes in land cover\n - `""mapbiomas_deforestation_regeneration""`: deforestation and forest\n regeneration\n - `""mapbiomas_irrigation""`: irrigated areas\n - `""mapbiomas_grazing_quality""`: grazing quality\n - `""mapbiomas_mining""`: areas used for mining\n - `""mapbiomas_water""`: areas of water surface\n - `""mapbiomas_fire""`: areas of wildfire burn scars\n\n2. **raw_data**: there are two options:\n\n - `TRUE`: if you want the data as it is originally.\n - `FALSE`: if you want the treated version of the data.\n\n3. **geo_level**:\n\n - For datasets `""mapbiomas_cover""`, `""mapbiomas_transition""`,\n `""mapbiomas_deforestation_regeneration""` and `""mapbiomas_fire""`,\n can be `""municipality""` or `""state""` (faster download).\n - For dataset `""mapbiomas_mining""`, can be `""indigenous_land""`,\n `""municipality""`, `""state""`, `""biome""` or `""country""`.\n - For dataset `""mapbiomas_irrigation""`, can be `""state""` or\n `""biome""`.\n - For dataset `""mapbiomas_water""`, can be `""municipality""`,\n `""state""` or `""biome""`.\n - Does not apply to other datasets.\n\n4. **language**: you can choose between Portuguese `(""pt"")` and English\n `(""eng"")`\n\n5. 
**cover_level**: Aggregates the data to some level of land coverage.\n Only applies to datasets `""mapbiomas_cover""` and\n `""mapbiomas_grazing_quality""`:\n\n - `cover_level = ""none""`: no aggregation\n - `cover_level = 0`: least aggregated, with categories of Anthropic\n and Natural\n - `cover_level = 1`: categories such as Forest, Non Forest Natural\n Formation, Farming, Non Vegetated Area, Water, Non Observed\n - `cover_level = 2`: categories such as Agriculture, Aquaculture,\n Beach and Dune, Forest Plantation, Pasture, River, Lake and\n Ocean \n - `cover_level = 3`: categories such as Aquaculture, Beach and Dune,\n Forest Formation, Forest Plantation\n - `cover_level = 4`: categories such as Aquaculture, Beach and Dune,\n Forest Formation, Forest Plantation\n\n------------------------------------------------------------------------\n\n**Examples:**\n\n``` r\n# download treated Mapbiomas Cover data in english at the highest aggregation level\ndata <- load_mapbiomas(dataset = ""mapbiomas_cover"",\n raw_data = FALSE,\n geo_level = ""municipality"",\n language = ""eng"",\n cover_level = 0)\n\n# download treated Mapbiomas Transition data in portuguese\ndata <- load_mapbiomas(dataset = ""mapbiomas_transition"", raw_data = FALSE,\n geo_level = ""state"", language = ""pt"")\n\n# download treated data on mining on indigenous lands\ndata <- load_mapbiomas(""mapbiomas_mining"",\n raw_data = FALSE,\n geo_level = ""indigenous_land"")\n```\n\n## TerraClimate\n\nSpatial data on several climate variables, extracted from Climatology\nLab\xe2\x80\x99s [TerraClimate](https://www.climatologylab.org/terraclimate.html).\nThe table below shows all possible variables to be extracted, which are\nchosen through the \xe2\x80\x9cdataset\xe2\x80\x9d parameter. Data ranges from 1958 to 2020.\n\nNetcdf files are downloaded from the\n[THREDDS](http://thredds.northwestknowledge.net:8080/thredds/terraclimate_catalog.html)\nweb server, as recommended for rectangular subsets of the global data.\n\n
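The treated cover data lends itself to group-by summaries, for instance the share of each level-0 class within a municipality. A minimal sketch, assuming (hypothetically) columns named `municipality_code`, `year`, `cover` and `value` in the treated output; the actual names may differ:

``` r
library(datazoom.amazonia)
library(dplyr)

data <- load_mapbiomas(dataset = "mapbiomas_cover",
                       raw_data = FALSE,
                       geo_level = "municipality",
                       language = "eng",
                       cover_level = 0)

# Share of each level-0 class (Anthropic/Natural) within every
# municipality-year. Column names here are assumptions.
cover_shares <- data %>%
  group_by(municipality_code, year, cover) %>%
  summarise(area = sum(value, na.rm = TRUE), .groups = "drop_last") %>%
  mutate(share = area / sum(area)) %>%
  ungroup()
```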
## TerraClimate

Spatial data on several climate variables, extracted from the Climatology Lab's [TerraClimate](https://www.climatologylab.org/terraclimate.html). The table below shows all possible variables to be extracted, which are chosen through the "dataset" parameter. Data ranges from 1958 to 2020.

NetCDF files are downloaded from the [THREDDS](http://thredds.northwestknowledge.net:8080/thredds/terraclimate_catalog.html) web server, as recommended for rectangular subsets of the global data.

| Dataset | Code | Description | Units |
|:---|:---:|:---|:---:|
| max_temperature | tmax | Maximum 2-m Temperature | degC |
| min_temperature | tmin | Minimum 2-m Temperature | degC |
| wind_speed | ws | Wind Speed at 10-m | m/s |
| vapor_pressure_deficit | vpd | Vapor Pressure Deficit | kPa |
| vapor_pressure | vap | 2-m Vapor Pressure | kPa |
| snow_water_equivalent | swe | Snow Water Equivalent at End of Month | mm |
| shortwave_radiation_flux | srad | Downward Shortwave Radiation Flux at the Surface | W/m^2 |
| soil_moisture | soil | Soil Moisture at End of Month | mm |
| runoff | q | Runoff | mm |
| precipitation | ppt | Accumulated Precipitation | mm |
| potential_evaporation | pet | Reference Evapotranspiration | mm |
| climatic_water_deficit | def | Climatic Water Deficit | mm |
| water_evaporation | aet | Actual Evapotranspiration | mm |
| palmer_drought_severity_index | PDSI | Palmer Drought Severity Index | unitless |

------------------------------------------------------------------------

**Options:**

1. **dataset**: picks the variable to be read. Possible options are shown in the table above.

2. **raw_data**: there are two options:

   - `TRUE`: if you want the data as it is originally.
   - `FALSE`: if you want the treated version of the data.

3. **time_period**: picks the years for which the data will be downloaded

4. **language**: you can choose between Portuguese (`"pt"`) and English (`"eng"`)

5. **legal_amazon_only**: if set to `TRUE`, only downloads data from the Legal Amazon region

------------------------------------------------------------------------

**Examples:**

``` r
# Downloading maximum temperature data from 2000 to 2001
max_temp <- load_climate(dataset = "max_temperature", time_period = 2000:2001)

# Downloading precipitation data only for the Legal Amazon in 2010
amz_precipitation <- load_climate(dataset = "precipitation",
                                  time_period = 2010,
                                  legal_amazon_only = TRUE)
```
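Since the series is monthly, a common next step is aggregating to the yearly level. A minimal sketch, assuming (hypothetically) that the treated output is a long data frame with columns `municipality_code`, `year` and `value`; check the actual output for the real names:

``` r
library(datazoom.amazonia)
library(dplyr)

max_temp <- load_climate(dataset = "max_temperature",
                         raw_data = FALSE,
                         time_period = 2000:2001)

# Yearly mean of the monthly maxima per municipality.
# Column names below are assumptions, not the documented schema.
yearly_tmax <- max_temp %>%
  group_by(municipality_code, year) %>%
  summarise(mean_tmax = mean(value, na.rm = TRUE), .groups = "drop")
```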
## SEEG

Loads estimates of greenhouse gas emissions for Brazilian municipalities and states from SEEG, the System of Estimates of Emissions and Removals of Greenhouse Gases, an initiative of the Observatório do Clima, a network of institutions focused on climate change research in Brazil.

The data provided in SEEG's Collection 9 is a series covering the period from 1970 to 2020, except for the Land Use Change Sector, whose series covers 1990 to 2020.

Using data collected from government entities, institutes, research centers, NGOs and other institutions, the estimates are created following the methodology of the Brazilian Inventory of Anthropic Emissions and Removals of Greenhouse Gases, assembled by the Ministry of Science, Technology and Innovation (MCTI), and the directives of the Intergovernmental Panel on Climate Change (IPCC).

Emissions are divided into five main sources: Agriculture and Cattle Raising, Energy, Changes in Use of Land, Industrial Processes and Residues. All greenhouse gases contained in the national inventory are considered, encompassing CO2, CH4, N2O and the HFCs, with the conversion to carbon equivalence (CO2e) also included, in both the GWP (Global Warming Potential) and GTP (Global Temperature Potential) metrics.

The data is downloaded from the SEEG website as one single file, so the option to select a certain range of years is not available. Also, due to the size of the file, a stable internet connection is necessary, and the function may take time to run.

------------------------------------------------------------------------

**Options:**

1. **dataset**: there are six choices:

   - `"seeg"`: provides all sectors in a same data frame. Only works with `raw_data = TRUE`
   - `"seeg_farming"`
   - `"seeg_industry"`
   - `"seeg_energy"`
   - `"seeg_land"`
   - `"seeg_residuals"`

2. **raw_data**: there are two options:

   - `TRUE`: if you want the data as it is originally.
   - `FALSE`: if you want the treated version of the data.

3. **geo_level**: `"country"`, `"state"`, or `"municipality"`

4. **language**: you can choose between Portuguese (`"pt"`) and English (`"eng"`)

------------------------------------------------------------------------

**Examples:**

``` r
# Download raw data (raw_data = TRUE) of greenhouse gases (dataset = "seeg")
# by state (geo_level = "state")
data <- load_seeg(dataset = "seeg",
                  raw_data = TRUE,
                  geo_level = "state")

# Download treated data (raw_data = FALSE) of industry greenhouse gases (dataset = "seeg_industry")
data <- load_seeg(dataset = "seeg_industry",
                  raw_data = FALSE,
                  geo_level = "state")
```

🔴 This function uses the `googledrive` package to download data at the municipality level. In case of authentication errors, see [googledrive](#googledrive).

## CENSOAGRO

The census of agriculture collects information about agricultural establishments and the agricultural activities carried out there, covering characteristics of the producer and establishment, economy and employment in rural areas, livestock, farming and agroindustry.

Data is collected by IBGE and is available at the country, state and municipality level.

------------------------------------------------------------------------

**Options:**

1. **dataset**: there are ten possible choices:

   - `"agricultural_land_area"`: area and number of agricultural properties
   - `"agricultural_area_use"`: area of agricultural properties by use
   - `"agricultural_employees_tractors"`: number of employees and tractors in agricultural properties
   - `"agricultural_producer_condition"`: condition of the agricultural producer, i.e. whether they own the land
   - `"animal_production"`: number of animals farmed, by species
   - `"animal_products"`: amount of animal products, by product type
   - `"vegetable_production_area"`: area and amount produced, by vegetable product
   - `"vegetable_production_temporary"`: amount produced, by temporary crop
   - `"vegetable_production_permanent"`: amount produced, by permanent crop
   - `"livestock_production"`: amount of bovine cattle, and number of agricultural properties

2. **raw_data**: there are two options:

   - `TRUE`: if you want the data as it is originally.
   - `FALSE`: if you want the treated version of the data.

3. **geo_level**: `"country"` or `"state"`. For dataset `"livestock_production"`, it can also be `"municipality"`

4. **time_period**: picks the years for which the data will be downloaded:

   - For datasets `"agricultural_land_area"`, `"agricultural_producer_condition"`, `"animal_products"`, and `"vegetable_production_area"`, it can be one of 1920, 1940, 1950, 1960, 1970, 1975, 1980, 1985, 1995, or 2006.
   - For datasets `"vegetable_production_temporary"` and `"vegetable_production_permanent"`, it can only be from 1940 onwards
   - For datasets `"agricultural_area_use"`, `"agricultural_employees_tractors"`, and `"animal_production"`, it can only be from 1970 onwards
   - For dataset `"livestock_production"`, it can only be 2017

5. **language**: you can choose between Portuguese (`"pt"`) and English (`"eng"`)

------------------------------------------------------------------------

**Examples:**

``` r
# Download total land area data at the country level in year 2006
data <- load_censoagro(
  dataset = "agricultural_land_area",
  raw_data = TRUE,
  geo_level = "country",
  time_period = 2006
)

# Download temporary production crops data by state (geo_level = "state")
# in year 1996 (time_period = 1996) in portuguese (language = "pt")
data <- load_censoagro(
  dataset = "vegetable_production_temporary",
  raw_data = FALSE,
  geo_level = "state",
  time_period = 1996,
  language = "pt"
)
```

# Social Data

## IPS

Loads information on the social and environmental performance of the Legal Amazon.

Data from the Amazon Social Progress Index, an initiative from Imazon with support from the Social Progress Imperative, which measures the social and environmental progress of its locations, namely the 772 municipalities in the Amazon region. The survey is done at the municipal level, and data is available for 2014, 2018 and 2021.

------------------------------------------------------------------------

**Options:**

1. **dataset**:

   - `"all"`, `"life_quality"`, `"sanit_habit"`, `"violence"`, `"educ"`, `"communic"`, `"mortality"`, or `"deforest"`

2. **raw_data**: there are two options:

   - `TRUE`: if you want the data as it is originally.
   - `FALSE`: if you want the treated version of the data.

3. **time_period**: can be 2014, 2018, 2021, or a vector with some combination thereof

4. **language**: you can choose between Portuguese (`"pt"`) and English (`"eng"`)

------------------------------------------------------------------------

**Examples:**

``` r
# Download raw data from 2014
data <- load_ips(dataset = "all", raw_data = TRUE, time_period = 2014)

# Download treated deforest data from 2018 in portuguese
data <- load_ips(dataset = "deforest", raw_data = FALSE,
                 time_period = 2018, language = "pt")
```

## DATASUS

DATASUS is the IT department of SUS, the Brazilian Unified Health System. It provides data on health establishments, mortality, access to health services and several health indicators nationwide. This function allows for an easy download of several DATASUS raw datasets, and also cleans the data in a couple of them. The sections below explain each available dataset.

------------------------------------------------------------------------

**Options:**

1. **dataset**:

   - `"datasus_sim_do"` has SIM-DO mortality data
   - Possible subsets of SIM-DO are `"datasus_sim_dofet"` (Fetal), `"datasus_sim_doext"` (External causes), `"datasus_sim_doinf"` (Children), and `"datasus_sim_domat"` (Maternal)
   - `"datasus_sih"` has SIH hospitalization data.
   - `"datasus_cnes_lt"` has data on the number of hospital beds.
   - Further subsets of CNES are listed later, but those only allow for the download of raw data.

2. **raw_data**: there are two options:

   - `TRUE`: if you want the data as it is originally.
   - `FALSE`: if you want the treated version of the data. Only effective for SIM-DO and its subsets, SIH, and CNES-LT.

3. **keep_all**: only applies when `raw_data` is `FALSE`. There are two options:

   - `TRUE`: keeps all original variables, adding variable labels and possibly constructing extra variables.
   - `FALSE`: aggregates data at the municipality level, thereby losing individual-level data, and only keeping aggregate measures.

4. **time_period**: picks the years for which the data will be downloaded

5. **states**: a vector of states by which to filter the data. Only works for datasets whose data is provided in separate files by state.

6. **language**: you can choose between Portuguese (`"pt"`) and English (`"eng"`)

------------------------------------------------------------------------

##### DATASUS - SIM (System of Mortality Information)

Each original SIM data file contains rows corresponding to a declaration of death (DO), and columns with several characteristics of the person, the place of death, and the cause of death. The data comes from the main SIM-DO (Declarations of Death) dataset, which goes by the option `"datasus_sim_do"`. There are also 4 subsets of SIM-DO, namely SIM-DOFET (Fetal), SIM-DOMAT (Maternal), SIM-DOINF (Children), and SIM-DOEXT (External Causes), with corresponding dataset options `"datasus_sim_dofet"`, `"datasus_sim_domat"`, `"datasus_sim_doinf"`, `"datasus_sim_doext"`. Note that only SIM-DO provides separate files for each state, so all other dataset options always contain data from the whole country.

Below is an example of downloading the raw data, and also using the `raw_data = FALSE` option to obtain treated data. When this option is selected, we create several variables for deaths from each cause, which are encoded by their CID-10 codes. The function then returns, by default, the mortality data aggregated at the municipality level. In this process, all the individual information such as age, sex, race, and schooling is lost, so we also offer the option `keep_all = TRUE`, which creates all the indicator variables for cause of death, adds variable labels, and does not aggregate, thereby keeping all individual-level variables.

**Examples:**

``` r
library(datazoom.amazonia)

# download raw data for the year 2010 in the state of AM.
data <- load_datasus(dataset = "datasus_sim_do",
                     time_period = 2010,
                     states = "AM",
                     raw_data = TRUE)

# download treated data with the number of deaths by cause in AM and PA.
data <- load_datasus(dataset = "datasus_sim_do",
                     time_period = 2010,
                     states = c("AM", "PA"),
                     raw_data = FALSE)

# download treated data with the number of deaths by cause in AM and PA
# keeping all individual variables.
data <- load_datasus(dataset = "datasus_sim_do",
                     time_period = 2010,
                     states = c("AM", "PA"),
                     raw_data = FALSE,
                     keep_all = TRUE)
```

##### DATASUS - CNES (National Register of Health Establishments)

Provides information on health establishments, available hospital beds, and active physicians. The data is split into 13 datasets: LT (Beds), ST (Establishments), DC (Complementary data), EQ (Equipment), SR (Specialized services), HB (License), PF (Practitioner), EP (Teams), RC (Contractual Rules), IN (Incentives), EE (Teaching establishments), EF (Philanthropic establishments), and GM (Management and goals).

Raw data is available for all of them using the dataset options `"datasus_cnes_lt"`, `"datasus_cnes_st"`, and so on, while treated data is only available for CNES-LT. When `raw_data = FALSE` is chosen, we return data on the number of total hospital beds and those available through SUS, which can be aggregated by municipality (with option `keep_all = FALSE`) or keep all original variables (`keep_all = TRUE`).

**Examples:**

``` r
library(datazoom.amazonia)

# download treated data with the number of available beds in AM and PA.
data <- load_datasus(dataset = "datasus_cnes_lt",
                     time_period = 2010,
                     states = c("AM", "PA"),
                     raw_data = FALSE)
```

##### DATASUS - SIH (System of Hospital Information)

Contains data on hospitalizations. Treated data only gains variable labels, with no extra manipulation. Beware that this is a much heavier dataset.

**Examples:**

``` r
library(datazoom.amazonia)

# download raw data
data <- load_datasus(dataset = "datasus_sih",
                     time_period = 2010,
                     states = "AM",
                     raw_data = TRUE)

# download data in a single tibble, with variable labels
data <- load_datasus(dataset = "datasus_sih",
                     time_period = 2010,
                     states = "AM",
                     raw_data = FALSE)
```

## IEMA

Data from the Institute of Environment and Water Resources (IEMA), documenting the number of people without access to electric energy throughout the Amazon region in the year 2018.

------------------------------------------------------------------------

**Options:**

1. **dataset**: `"iema"`

2. **raw_data**: there are two options:

   - `TRUE`: if you want the data as it is originally.
   - `FALSE`: if you want the treated version of the data.

3. **language**: you can choose between Portuguese (`"pt"`) and English (`"eng"`)

------------------------------------------------------------------------

**Examples:**

``` r
# Download treated data
data <- load_iema(raw_data = FALSE)
```

🔴 This function uses the `googledrive` package to download data. In case of authentication errors, see [googledrive](#googledrive).

## Population

Loads IBGE information on estimated population (2001-2006, 2008-2009, 2011-2021) or counted population (2007 and 2010). Data is available at the country, state and municipality level, from 2001 to 2021.

------------------------------------------------------------------------

**Options:**

1. **dataset**: `"population"`

2. **raw_data**: there are two options:

   - `TRUE`: if you want the data as it is originally.
   - `FALSE`: if you want the treated version of the data.

3. **geo_level**: `"country"`, `"state"`, or `"municipality"`

4. **time_period**: picks the years for which the data will be downloaded

5. **language**: you can choose between Portuguese (`"pt"`) and English (`"eng"`)

------------------------------------------------------------------------

**Examples:**

``` r
# download treated population data at the state level for 2010 to 2012
data <- load_population(raw_data = FALSE,
                        geo_level = "state",
                        time_period = 2010:2012)
```
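Population data pairs naturally with other municipal series in the package, for instance to build per-capita measures. A minimal sketch, assuming (hypothetically) that both treated outputs share `municipality_code` and `year` keys and carry `population` and `gdp` columns; the real names may differ. `load_pibmunic` is documented under PIB-Munic below:

``` r
library(datazoom.amazonia)
library(dplyr)

pop <- load_population(raw_data = FALSE,
                       geo_level = "municipality",
                       time_period = 2010)

gdp <- load_pibmunic(raw_data = FALSE,
                     geo_level = "municipality",
                     time_period = 2010)

# GDP per capita by municipality.
# Join keys and value column names are assumptions.
gdp_pc <- gdp %>%
  inner_join(pop, by = c("municipality_code", "year")) %>%
  mutate(gdp_per_capita = gdp / population)
```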
# Economic Data

## COMEX

The Comex dataset gathers data extracted from [Siscomex (Integrated System of Foreign Trade)](https://www.gov.br/produtividade-e-comercio-exterior/pt-br/assuntos/comercio-exterior/estatisticas/base-de-dados-bruta/), a database containing information on all products that are imported to or exported from Brazil. Using data reported by the companies responsible for transporting the products, the system adheres to internationally standardized nomenclatures, such as the Harmonized System and the Mercosul Common Nomenclature (which pertains to members of the Mercosul organization).

The data has a monthly frequency and is available starting from the year 1989. From 1989 to 1996, a different system of nomenclatures was adopted, but all conversions are available in a dictionary on the Comex website. Systems of nomenclature vary in the degree of detail regarding the product involved, as well as other characteristics, such as unit and granularity of location.

------------------------------------------------------------------------

**Options:**

1. **dataset**: there are four choices:

   - `"comex_export_mun"`: selects exports data by municipality
   - `"comex_import_mun"`: selects imports data by municipality
   - `"comex_export_prod"`: selects exports data by product
   - `"comex_import_prod"`: selects imports data by product

2. **raw_data**: there are two options:

   - `TRUE`: if you want the data as it is originally.
   - `FALSE`: if you want the treated version of the data.

3. **time_period**: picks the years for which the data will be downloaded

4. **language**: you can choose between Portuguese (`"pt"`) and English (`"eng"`)

------------------------------------------------------------------------

**Examples:**

``` r
# download treated (raw_data = FALSE) exports data by municipality (dataset = "comex_export_mun")
# from 2020 to 2021 (time_period = 2020:2021)
data <- load_br_trade(dataset = "comex_export_mun",
                      raw_data = FALSE,
                      time_period = 2020:2021)

# download treated (raw_data = FALSE) imports data by municipality (dataset = "comex_import_mun")
# from 2020 to 2021 (time_period = 2020:2021)
data <- load_br_trade(dataset = "comex_import_mun",
                      raw_data = FALSE,
                      time_period = 2020:2021)
```

## BACI

Loads disaggregated data on bilateral trade flows for more than 5000 products and 200 countries. The data is from [CEPII](http://www.cepii.fr/CEPII/en/bdd_modele/bdd_modele_item.asp?id=37) and is built from data directly reported by each country to the United Nations Statistical Division (Comtrade).

As all of the data is packed into one single .zip file on the website, data for all years must be downloaded, even if not all of it is used. Therefore, downloading the data can take a long time.

------------------------------------------------------------------------

**Options:**

1. **dataset**: there is one choice:

   - `"HS92"`, which follows the Harmonized System (1992 revision)

2. **raw_data**: there are two options:

   - `TRUE`: if you want the data as it is originally.
   - `FALSE`: if you want the treated version of the data.

3. **time_period**: picks the years for which the data will be downloaded

4. **language**: you can choose between Portuguese (`"pt"`) and English (`"eng"`)

------------------------------------------------------------------------

**Examples:**

``` r
# download treated data for 2016 (takes a long time to download)
clean_baci <- load_baci(
  raw_data = FALSE,
  time_period = 2016
)
```
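With the treated table in hand, standard dplyr verbs give quick summaries, for example ranking bilateral flows by total value. A minimal sketch, assuming (hypothetically) columns named `exporter`, `importer` and `value`; the treated output may use different names:

``` r
library(datazoom.amazonia)
library(dplyr)

clean_baci <- load_baci(raw_data = FALSE, time_period = 2016)

# Top 10 country pairs by total trade value in 2016.
# Column names are assumptions, not the documented schema.
top_flows <- clean_baci %>%
  group_by(exporter, importer) %>%
  summarise(total_value = sum(value, na.rm = TRUE), .groups = "drop") %>%
  slice_max(total_value, n = 10)
```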
## PIB-Munic

Loads IBGE information on gross domestic product at current prices; taxes, net of subsidies, on products at current prices; and gross value added at current prices, in total and by economic activity, as well as the respective shares. Data is available at the country, state and municipality level, from 2002 to 2018.

------------------------------------------------------------------------

**Options:**

1. **dataset**: `"pibmunic"`

2. **raw_data**: there are two options:

   - `TRUE`: if you want the data as it is originally.
   - `FALSE`: if you want the treated version of the data.

3. **geo_level**: `"country"`, `"state"`, or `"municipality"`

4. **time_period**: picks the years for which the data will be downloaded

5. **language**: you can choose between Portuguese (`"pt"`) and English (`"eng"`)

------------------------------------------------------------------------

**Examples:**

``` r
# download treated municipal GDP data at the state level for 2010 to 2012
data <- load_pibmunic(raw_data = FALSE,
                      geo_level = "state",
                      time_period = 2010:2012)
```

## CEMPRE

Employment, salary and firm data from IBGE's [Cadastro Central de Empresas (CEMPRE)](https://sidra.ibge.gov.br/pesquisa/cempre/tabelas). Loads information on companies and other organizations and their respective formally constituted local units, registered with the CNPJ (National Register of Legal Entities). Data is available between 2006 and 2019.

------------------------------------------------------------------------

**Options:**

1. **dataset**: `"cempre"`

2. **raw_data**: there are two options:

   - `TRUE`: if you want the data as it is originally.
   - `FALSE`: if you want the treated version of the data.

3. **geo_level**: `"country"`, `"state"` or `"municipality"`

4. **time_period**: picks the years for which the data will be downloaded

5. **language**: you can choose between Portuguese (`"pt"`) and English (`"eng"`)

6. **sectors**: defines whether the data will be returned separated by sector (`sectors = TRUE`) or not (`sectors = FALSE`)

------------------------------------------------------------------------

**Examples:**

``` r
# Download raw data (raw_data = TRUE) at the country level
# from 2008 to 2010 (time_period = 2008:2010).
data <- load_cempre(
  raw_data = TRUE,
  geo_level = "country",
  time_period = 2008:2010
)

# Download treated data (raw_data = FALSE) by state (geo_level = "state")
# from 2008 to 2010 (time_period = 2008:2010) in portuguese (language = "pt").
# In this example, data is split by sector (sectors = TRUE)
data <- load_cempre(raw_data = FALSE,
                    geo_level = "state",
                    time_period = 2008:2010,
                    language = "pt",
                    sectors = TRUE)
```

## PAM

[Municipal Agricultural Production](https://www.ibge.gov.br/en/statistics/economic/agriculture-forestry-and-fishing/16773-municipal-agricultural-production-temporary-and-permanent-crops.html?=&t=o-que-e) (PAM, in Portuguese) is a nationwide annual survey conducted by IBGE (Brazilian Institute of Geography and Statistics) which provides information on agricultural products, such as quantity produced, area planted and harvested, average output and monetary value of that output. The products are divided into permanent and temporary crops, with dedicated surveys for the four products that yield multiple harvests a year (beans, potato, peanut and corn), summing to 64 surveyed agricultural products in total (31 temporary and 33 permanent crops). Output, however, is only included in the dataset if the planted area occupies over 1 acre or if output exceeds one tonne.

Permanent farming is characterized by a cycle of long duration, whose harvests may be repeated across the years without the need of planting seeds again. Temporary farming, on the other hand, consists of cycles of short and medium duration, which after harvesting require planting seeds again.

The data also has multiple aggregation levels, such as nationwide, by region, mesoregion and microregion, as well as state and municipality.

The data has a yearly frequency and is available from 1974 to the present, with the exception of the four multiple-harvest products, which are only available from 2003. More information can be found on [this link](https://www.ibge.gov.br/estatisticas/economicas/agricultura-e-pecuaria/9117-producao-agricola-municipal-culturas-temporarias-e-permanentes.html#:~:text=A%20pesquisa%20Produ%C3%A7%C3%A3o%20Agr%C3%ADcola%20Municipal,s%C3%A3o%20da%20cesta%20b%C3%A1sica%20do) (only in Portuguese).

------------------------------------------------------------------------

**Options:**

1. **dataset**: see the tables below

2. **raw_data**: there are two options:

   - `TRUE`: if you want the data as it is originally.
   - `FALSE`: if you want the treated version of the data.

3. **geo_level**: `"country"`, `"region"`, `"state"`, or `"municipality"`

4. **time_period**: picks the years for which the data will be downloaded

5. **language**: you can choose between Portuguese (`"pt"`) and English (`"eng"`)

------------------------------------------------------------------------

The supported datasets are shown in the tables below, made up of both the original databases and their narrower subsets. Note that downloading only specific crops is considerably faster.

Full datasets provided by IBGE:

| dataset |
|:---|
| all_crops |
| temporary_crops |
| permanent_crops |
| corn |
| potato |
| peanut |
| beans |

Datasets generated from Temporary Crops:

| dataset | Name (pt) | Name (eng) |
|:---|:---|:---|
| pineapple | Abacaxi | Pineapple |
| alfafa | Alfafa Fenada | Alfalfa (Hay) |
| cotton_herbaceous | Algodao Herbaceo (em Caroco) | Herbaceous Cotton (in Caroco) |
| garlic | Alho | Garlic |
| peanut_temporary | Amendoim (em Casca) | Peanuts (in Shell) |
| rice | Arroz (em Casca) | Rice (in Husk) |
| oats | Aveia (em Grao) | Oats (in Grain) |
| sweet_potato | Batata Doce | Sweet Potato |
| potato_temporary | Batata Inglesa | English Potato |
| sugar_cane | Cana de Acucar | Sugar Cane |
| forage_cane | Cana para Forragem | Forage Cane |
| onion | Cebola | Onion |
| rye | Centeio (em Grao) | Rye (in Grain) |
| barley | Cevada (em Grao) | Barley (in Grain) |
| pea | Ervilha (em Grao) | Pea (in Grain) |
| broad_bean | Fava (em Grao) | Broad Bean (in Grain) |
| beans_temporary | Feijao (em Grao) | Beans (in Grain) |
| tobacco | Fumo (em Folha) | Tobacco (in Leaf) |
| sunflower_seeds | Girassol (em Grao) | Sunflower (in Grain) |
| jute_fiber | Juta (Fibra) | Jute (Fiber) |
| linen_seeds | Linho (Semente) | Linen (Seed) |
| malva_fiber | Malva (Fibra) | Malva (Fiber) |
| castor_bean | Mamona (Baga) | Castor Bean (Berry) |
| cassava | Mandioca | Cassava |
| watermelon | Melancia | Watermelon |
| melon | Melao | Melon |
| corn_temporary | Milho (em Grao) | Corn (in Grain) |
| ramie_fiber | Rami (Fibra) | Ramie (Fiber) |
| soybean | Soja (em Grao) | Soybean (in Grain) |
| sorghum | Sorgo (em Grao) | Sorghum (in Grain) |
| tomato | Tomate | Tomato |
| wheat | Trigo (em Grao) | Wheat (in Grain) |
| triticale | Triticale (em Grao) | Triticale (in Grain) |
| temporary_total | Total | Total |

Datasets generated from Permanent Crops:

| dataset | Name (pt) | Name (eng) |
|:---|:---|:---|
| avocado | Abacate | Avocado |
| cotton_arboreo | Algodao Arboreo (em Caroco) | Arboreo Cotton (in Caroco) |
| acai | Acai | Acai |
| olive | Azeitona | Olive |
| banana | Banana (Cacho) | Banana (Bunch) |
| rubber_coagulated_latex | Borracha (Latex Coagulado) | Rubber (Coagulated Latex) |
| rubber_liquid_latex | Borracha (Latex Liquido) | Rubber (Liquid Latex) |
| cocoa_beans | Cacau (em Amendoa) | Cocoa (in Almond) |
| coffee_total | Cafe (em Grao) Total | Coffee (in Grain) Total |
| coffee_arabica | Cafe (em Grao) Arabica | Coffee (in Grain) Arabica |
| coffee_canephora | Cafe (em Grao) Canephora | Coffee (in Grain) Canephora |
| cashew | Caju | Cashew |
| khaki | Caqui | Persimmon |
| cashew_nut | Castanha de Caju | Cashew Nuts |
| india_tea | Cha da India (Folha Verde) | India Tea (Green Leaf) |
| coconut | Coco da Baia | Coconut |
| coconut_bunch | Dende (Cacho de Coco) | Coconut Bunch |
| yerba_mate | Erva Mate (Folha Verde) | Yerba Mate (Green Leaf) |
| fig | Figo | Fig |
| guava | Goiaba | Guava |
| guarana_seeds | Guarana (Semente) | Guarana (Seed) |
| orange | Laranja | Orange |
| lemon | Limao | Lemon |
| apple | Maca | Apple |
| papaya | Mamao | Papaya |
| mango | Manga | Mango |
| passion_fruit | Maracuja | Passion Fruit |
| quince | Marmelo | Quince |
| walnut | Noz (Fruto Seco) | Walnut (Dry Fruit) |
| heart_of_palm | Palmito | Heart of Palm |
| pear | Pera | Pear |
| peach | Pessego | Peach |
| black_pepper | Pimenta do Reino | Black Pepper |
| sisal_or_agave | Sisal ou Agave (Fibra) | Sisal or Agave (Fiber) |
| tangerine | Tangerina | Tangerine |
| tung | Tungue (Fruto Seco) | Tung (Dry Fruit) |
| annatto_seeds | Urucum (Semente) | Annatto (Seed) |
| grape | Uva | Grape |
| permanent_total | Total | Total |

**Examples:**

``` r
# download treated data at the state level from 2010 to 2011 for all crops
data <- load_pam(dataset = "all_crops",
                 raw_data = FALSE,
                 geo_level = "state",
                 time_period = 2010:2011,
                 language = "eng")
```

## PEVS

Loads information on the quantity and value of production from the exploitation of native plant resources and planted forest stands, as well as existing total and harvested areas of forest crops.

Data is from the Silviculture and Forestry Extraction Production survey (PEVS, in Portuguese), a nationwide annual survey conducted by IBGE (Brazilian Institute of Geography and Statistics). The data also has multiple aggregation levels, such as nationwide, by region, mesoregion and microregion, as well as state and municipality.

The data has a yearly frequency and is available from 1986 to the present, with the exception of the data on total area for production, which is only available from 2013 onwards. More information can be found on [this link](https://www.ibge.gov.br/en/statistics/economic/agriculture-forestry-and-fishing/18374-forestry-activities.html?=&t=o-que-e).

------------------------------------------------------------------------

**Options:**

1. **dataset**: there are three choices:

   - `"pevs_forest_crops"`: provides data related to both the quantity and value of forestry activities. The data goes from 1986 to 2019 and is divided by type of product.
   - `"pevs_silviculture"`: provides data related to both the quantity and value of silviculture. The data goes from 1986 to 2019 and is divided by type of product.
   - `"pevs_silviculture_area"`: total existing area used for silviculture on December 31st. The data goes from 2013 to 2019 and is divided by forestry species.

2. **raw_data**: there are two options:

   - `TRUE`: if you want the data as it is originally.
   - `FALSE`: if you want the treated version of the data.

3. **geo_level**: `"country"`, `"region"`, `"state"`, or `"municipality"`

4. **time_period**: picks the years for which the data will be downloaded

5. **language**: you can choose between Portuguese (`"pt"`) and English (`"eng"`)

------------------------------------------------------------------------

**Examples:**

``` r
# Download treated (raw_data = FALSE) silviculture data (dataset = 'pevs_silviculture')
# by state (geo_level = 'state') from 2012 (time_period = 2012)
# in portuguese (language = "pt")
data <- load_pevs(dataset = 'pevs_silviculture',
                  raw_data = FALSE,
                  geo_level = 'state',
                  time_period = 2012,
                  language = "pt")

# Download raw (raw_data = TRUE) forest crops data by region
# from 2012 to 2013 in english
data <- load_pevs(dataset = 'pevs_forest_crops',
                  raw_data = TRUE,
                  geo_level = "region",
                  time_period = 2012:2013)
```
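For a quick look at which products dominate forestry output, the treated table can be summarised directly. A minimal sketch, assuming (hypothetically) columns named `product` and `value` in the treated output; the actual column names may differ:

``` r
library(datazoom.amazonia)
library(dplyr)

pevs <- load_pevs(dataset = "pevs_forest_crops",
                  raw_data = FALSE,
                  geo_level = "state",
                  time_period = 2012)

# Total production value by product type, summed across states.
# Column names are assumptions, not the documented schema.
value_by_product <- pevs %>%
  group_by(product) %>%
  summarise(total_value = sum(value, na.rm = TRUE), .groups = "drop") %>%
  arrange(desc(total_value))
```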
## PPM

Data on livestock inventories (e.g. cattle, pigs and hogs) in Brazilian municipalities, as well as the amount and value of animal products (e.g. output of milk, hen eggs, quail eggs, honey).

The periodicity of the survey is annual. The geographic coverage is national, with results released for Brazil, Major Regions, Federation Units, Mesoregions, Microregions and Municipalities. Data is available yearly from 1974 to the present. More information can be found on [this link](https://www.ibge.gov.br/en/statistics/economic/agriculture-forestry-and-fishing/17353-municipal-livestock-production.html?=&t=o-que-e).

------------------------------------------------------------------------

**Options:**

1. **dataset**: there are five possible choices:

   - `"ppm_livestock_inventory"`
   - `"ppm_sheep_farming"`
   - `"ppm_animal_orig_production"`
   - `"ppm_cow_farming"`
   - `"ppm_aquaculture"`

2. **raw_data**: there are two options:

   - `TRUE`: if you want the data as it is originally.
   - `FALSE`: if you want the treated version of the data.

3. **geo_level**: `"country"`, `"region"`, `"state"`, or `"municipality"`

4. **time_period**: picks the years for which the data will be downloaded

5. **language**: you can choose between Portuguese (`"pt"`) and English (`"eng"`)

------------------------------------------------------------------------

**Examples:**

``` r
# Download treated data (raw_data = FALSE) about aquaculture (dataset = "ppm_aquaculture")
# from 2013 to 2015 (time_period = 2013:2015) in english
# with the level of aggregation being the country (geo_level = "country").
data <- load_ppm(dataset = "ppm_aquaculture",
                 raw_data = FALSE,
                 geo_level = "country",
                 time_period = 2013:2015)

# Download raw data about sheep farming by state from 1980 to 1995 in portuguese (language = "pt")
data <- load_ppm(dataset = "ppm_sheep_farming",
                 raw_data = TRUE,
                 geo_level = "state",
                 time_period = 1980:1995,
                 language = "pt")
```

## SIGMINE

Loads information on the mines being legally explored in Brazil, including their location, status, product being mined and area in square meters. The survey is done at the municipal and state level by the National Mining Agency (ANM), which is responsible for it.

------------------------------------------------------------------------

**Options:**

1. **dataset**: `"sigmine_active"`

2. **raw_data**: there are two options:

   - `TRUE`: if you want the data as it is originally.
   - `FALSE`: if you want the treated version of the data.

3. **language**: you can choose between Portuguese (`"pt"`) and English (`"eng"`)

------------------------------------------------------------------------

**Examples:**

``` r
# Download treated data (raw_data = FALSE) in portuguese (language = "pt").
data <- load_sigmine(dataset = 'sigmine_active',
                     raw_data = FALSE,
                     language = "pt")
```

## ANEEL

Loads data from the National Electrical Energy Agency (ANEEL), a Brazilian independent federal agency linked to the Ministry of Mines and Energy (MME). ANEEL works to provide favorable conditions for the electrical energy market to develop with balance and for the benefit of society.

As of now, there are three different datasets available for download: the Energy Development Budget, the Energy Generation, and the Energy Enterprises.

#### Energy Development Budget

The Energy Development Budget dataset showcases the Energy Development Account's (CDE) annual budget expenses. The CDE is designed to promote Brazilian energy development and is managed by the Electrical Energy Commercialization Chamber (CCEE).

The dataset makes available the year of the observation (from 2013 to 2022), the type of expense, its value in R\$ (Reais) and its share of the total CDE budget expenses in the year\*.

\*Note that 'share_of_total' values sum to 1 for each year available.

#### Energy Generation

The Energy Generation dataset showcases information from ANEEL's Generation Information System (SIGA). SIGA provides information about the Brazilian electrical energy generation installed capacity.

The dataset provides information at the individual venture/entity level. It contains information about the power, source, stage, type of permission, origin and final fuel with which each venture/entity operates, as well as other legal, technical and geographical information.\* Operation start dates contained in the dataset go as far back as 1924, up to 2022.

\* For more details on each variable, access [this link](https://app.powerbi.com/view?r=eyJrIjoiNjc4OGYyYjQtYWM2ZC00YjllLWJlYmEtYzdkNTQ1MTc1NjM2IiwidCI6IjQwZDZmOWI4LWVjYTctNDZhMi05MmQ0LWVhNGU5YzAxNzBlMSIsImMiOjR9) and select "Manual do Usuario".

#### Energy Enterprises

The Energy Enterprises dataset showcases information about distributed micro and mini generators, covered by Regulatory Resolution nº 482/2012. The list of projects is classified by identification variables, namely: connected distributor, project code, numerical nucleus of the project code, owner name, production class, subgroup, number of consumer units that receive credits, connection date, type of generating unit, source, installed power, municipality, and federative unit where it is located.

The data is expressed in quantities and installed power in kW (kilowatts). The quantity corresponds to the number of distributed micro or mini generators installed in the specified period. The installed power is defined as the sum of the nominal active electric power of the generating units.

\* For more details on each variable, access [this link](https://dadosabertos.aneel.gov.br/dataset/relacao-de-empreendimentos-de-geracao-distribuida) and select "Dicionário de dados".

------------------------------------------------------------------------

**Options:**

1. **dataset**: there are three choices:
   - `"energy_development_budget"`: government spending towards energy sources
   - `"energy_generation"`: energy generation by entity/corporation
   - `"energy_enterprises_distributed"`: distributed micro and mini generators
2. **raw_data**: there are two options:
   - `TRUE`: if you want the data as it is originally.
   - `FALSE`: if you want the treated version of the data.
3. **language**: you can choose between Portuguese (`"pt"`) and English (`"eng"`)

------------------------------------------------------------------------

**Examples:**

``` r
# download treated data about energy generation
clean_aneel <- load_aneel(
  dataset = "energy_generation",
  raw_data = FALSE
)
```

## EPE

Loads data from the Energy Research Company (EPE), a Brazilian public company that works closely with the Brazilian Ministry of Mines and Energy (MME) and other agencies to ensure the sustainable development of Brazil's energy infrastructure. EPE's duty in that mission is to support MME with quality research and studies to aid Brazil's energy infrastructure planning.

As of now, there are two different datasets available for download: the Energy Consumption Per Class and the National Energy Balance. Both were obtained from the [EPE website](https://www.epe.gov.br/sites-pt/publicacoes-dados-abertos/publicacoes/).

#### Energy Consumption Per Class

The Energy Consumption Per Class dataset provides monthly data about energy consumption and consumers from 2004 to 2022, for each class of energy consumption.

The different classes are Total consumption (and consumers), Industrial consumption (and consumers), Residential consumption (and consumers), Commercial consumption (and consumers), Captive consumption\* and Other consumption (and consumers).\*\*

\*Note that there is no consumer data for the 'Captive' class at all.

\*\*There is also no consumer data for the 'Industrial', 'Commercial' and 'Other' classes when the geographical level is 'Subsystem' or 'Region'.

There are three different aggregation levels: the Region level encompasses the five Brazilian geographical regions (North, Northeast, Midwest, Southeast and South); the Subsystem level encompasses the five Brazilian Electric Subsystems (North, Northeast, Southeast/Midwest, South, Isolated Systems); and the State level encompasses the 26 Brazilian states and the Federal District.

#### National Energy Balance

The National Energy Balance is a thorough and extensive research effort developed and published by EPE that contains useful data about energy consumption, generation, exportation and many more subjects.

As of now, the National Energy Balance dataset provides yearly data about energy generation per source of production. The sources can be divided into two groups: the renewable sources (hydro, wind, solar, nuclear, thermal, sugar_cane_bagasse, firewood, black_liquor) and the non-renewable sources (steam_coal, natural_gas, coke_oven_gas, fuel_oil, diesel).

The dataset has information at the Brazilian state level, including the Federal District, from 2011 to 2021, and also indicates whether the state is in the Legal Amazon.

------------------------------------------------------------------------

**Options:**

1. **dataset**: there are two choices:
   - `"energy_consumption_per_class"`: monthly energy consumption and consumers by State, Region or Electric Subsystem
   - `"national_energy_balance"`: yearly energy generation per source, by State
2. **raw_data**: there are two options:
   - `TRUE`: if you want the data as it is originally.
   - `FALSE`: if you want the treated version of the data.
3. **geo_level**: only applies to the `"energy_consumption_per_class"` dataset.
   - `"state"`
   - `"subsystem"`
4. **language**: you can choose between Portuguese (`"pt"`) and English (`"eng"`)

------------------------------------------------------------------------

**Examples:**

``` r
# download treated data about energy consumption at the state level
clean_epe <- load_epe(
  dataset = "energy_consumption_per_class",
  geo_level = "state",
  raw_data = FALSE
)
```

# Other tools

## Legal Amazon Municipalities

Many of our functions use a dataset with Brazilian municipalities, their municipality codes, whether they belong to the Legal Amazon, their state, and some more variables. It was constructed from the [IBGE spreadsheet](https://www.ibge.gov.br/geociencias/cartas-e-mapas/mapas-regionais/15819-amazonia-legal.html?=&t=acesso-ao-produto) with Legal Amazon municipalities, along with a data frame from the 'geobr' package. For more information on the columns, run `??datazoom.amazonia::municipalities`.

``` r
# load Brazilian municipalities dataset
data <- datazoom.amazonia::municipalities
```

## The 'googledrive' package

For some of our functions, the original data is stored in Google Drive and exceeds the file size limit for which direct downloads are possible. As a result, the `googledrive` package is required to download the data through the Google Drive API and run the function.

The first time the package is called, it requires you to link your Google account and grant permissions to be able to download data through the Google Drive API.

You **must** tick all boxes when the permissions page opens, or else the following error will occur:

``` r
#Error in `gargle_abort_request_failed()`:
#! Client error: (403) Forbidden
#Insufficient Permission: Request had insufficient authentication scopes.
#• domain: global
#• reason: insufficientPermissions
#• message: Insufficient Permission: Request had insufficient authentication
#  scopes.
#Run `rlang::last_error()` to see where the error occurred.
```

For further information, click [here](https://googledrive.tidyverse.org/) to access the official package page.
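A convenient pattern is to run the authentication flow once, before calling any of the download functions that rely on Google Drive. A minimal sketch using `googledrive`'s own API:

``` r
library(googledrive)

# Trigger the browser-based OAuth flow up front; remember to tick all
# permission boxes. Functions that use googledrive later in the same
# session should then reuse this token.
drive_auth()
```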
\n> \n\nA BibTeX entry for LaTeX users is:\n\n @Unpublished{DataZoom2023,\n author = {Data Zoom},\n title = {Data Zoom: Simplifying Access To Brazilian Microdata},\n url = {https://www.econ.puc-rio.br/datazoom/english/index.html},\n year = {2023},\n }\n'",,"2020/10/10, 16:09:33",1110,CUSTOM,113,1051,"2023/10/25, 01:35:05",11,151,217,74,0,3,0.7,0.8040540540540541,,,0,20,false,,false,false,,,https://github.com/datazoompuc,http://www.econ.puc-rio.br/datazoom/english/index.html,PUC-Rio,,,https://avatars.githubusercontent.com/u/57681933?v=4,,, sgsR,A structurally guided sampling toolbox for LiDAR-based forest inventories.,tgoodbody,https://github.com/tgoodbody/sgsR.git,github,,Forest Observation and Management,"2023/06/12, 21:44:21",39,0,17,true,R,,,R,https://tgoodbody.github.io/sgsR/,"b'\n\n\n# sgsR - structurally guided sampling \n\n\n\n![license](https://img.shields.io/badge/Licence-GPL--3-blue.svg)\n[![R-CMD-check](https://github.com/tgoodbody/sgsR/workflows/R-CMD-check/badge.svg)](https://github.com/tgoodbody/sgsR/actions)\n[![Codecov test\ncoverage](https://codecov.io/gh/tgoodbody/sgsR/branch/main/graph/badge.svg)](https://app.codecov.io/gh/tgoodbody/sgsR?branch=main)\n[![](https://cranlogs.r-pkg.org/badges/sgsR)](https://CRAN.R-project.org/package=sgsR)\n\n\n## Installation :computer:\n\nInstall the stable version of [`sgsR`from\nCRAN](https://cran.r-project.org/package=sgsR) with:\n\n``` r\ninstall.packages(""sgsR"")\nlibrary(sgsR)\n```\n\nInstall the most recent development version of [`sgsR` from\nGithub](https://github.com/tgoodbody/sgsR) with:\n\n``` r\ninstall.packages(""devtools"")\ndevtools::install_github(""https://github.com/tgoodbody/sgsR"")\nlibrary(sgsR)\n```\n\n## Citing `sgsR` in literature\n\nOpen access publication: [sgsR: a structurally guided sampling toolbox\nfor LiDAR-based forest\ninventories](https://doi.org/10.1093/forestry/cpac055)\n\nTo cite `sgsR` use `citation()` from within R with:\n\n``` r\nprint(citation(""sgsR""), bibtex = TRUE)\n#> \n#> To cite package \'sgsR\' in publications use:\n#> \n#> Goodbody, TRH., Coops, NC., Queinnec, M., White, JC., Tompalski, P.,\n#> Hudak, AT., Auty, D., Valbuena, R., LeBoeuf, A., Sinclair, I.,\n#> McCartney, G., Prieur, J-F., Woods, ME. (2023). sgsR: a structurally\n#> guided sampling toolbox for LiDAR-based forest inventories. Forestry:\n#> An International Journal of Forest Research.\n#> 10.1093/forestry/cpac055.\n#> \n#> A BibTeX entry for LaTeX users is\n#> \n#> @Manual{,\n#> title = {sgsR: a structurally guided sampling toolbox for LiDAR-based forest inventories.},\n#> author = {Tristan R.H. Goodbody and Nicholas C. Coops and Martin Queinnec and Joanne C. White and Piotr Tompalski and Andrew T. Hudak and David Auty and Ruben Valbuena and Antoine LeBoeuf and Ian Sinclair and Grant McCartney and Jean-Francois Prieur and Murray E. Woods},\n#> journal = {Forestry: An International Journal of Forest Research},\n#> year = {2023},\n#> doi = {10.1093/forestry/cpac055},\n#> }\n#> \n#> Tristan RH Goodbody, Nicholas C Coops and Martin Queinnec (2023).\n#> Structurally Guided Sampling. 
R package version 1.4.4.\n#> https://cran.r-project.org/package=sgsR.\n#> \n#> A BibTeX entry for LaTeX users is\n#> \n#> @Manual{,\n#> title = {Structurally Guided Sampling},\n#> author = {Tristan RH Goodbody and Nicholas C Coops and Martin Queinnec},\n#> year = {2023},\n#> note = {R package version 1.4.4},\n#> url = {https://cran.r-project.org/package=sgsR},\n#> }\n```\n\n## Overview\n\n`sgsR` provides a collection of stratification and sampling algorithms\nthat use auxiliary information for allocating sample units over an areal\nsampling frame. ALS metrics, like those derived from the [`lidR`\npackage](https://cran.r-project.org/package=lidR) are the intended\ninputs.\n\nOther remotely sensed or auxiliary data can also be used (e.g.\xc2\xa0optical\nsatellite imagery, climate data, drone-based products).\n\n`sgsR` is being actively developed, so you may encounter bugs. If that\nhappens, [please report your issue\nhere](https://github.com/tgoodbody/sgsR/issues) by providing a\nreproducible example.\n\n## Example usage :bar_chart:\n\n``` r\n#--- Load mraster files ---#\nr <- system.file(""extdata"", ""mraster.tif"", package = ""sgsR"")\n\n#--- load the mraster using the terra package ---#\nmraster <- terra::rast(r)\n\n#--- apply quantiles algorithm to mraster ---#\nsraster <- strat_quantiles(mraster = mraster$zq90, # use mraster as input for stratification\n nStrata = 4) # produce 4 strata\n \n#--- apply stratified sampling ---#\nexisting <- sample_strat(sraster = sraster, # use sraster as input for sampling\n nSamp = 200, # request 200 samples\n mindist = 100, # samples must be 100 m apart\n plot = TRUE) # plot output\n```\n\n## Resources & Vignettes :books:\n\nCheck out [the package\ndocumentation](https://tgoodbody.github.io/sgsR/index.html) to see how\nyou can use `sgsR` functions for your work.\n\n`sgsR` was presented at the ForestSAT 2022 Conference in Berlin. [Slides\nfor the presentation can be found\nhere.](https://tgoodbody.github.io/sgsR-ForestSAT2022/)\n\n## Collaborators :woman: :man:\n\nWe are thankful for continued collaboration with academic, private\nindustry, and government institutions to help improve `sgsR`. Special\nthanks to to:\n\n| Collaborator | Affiliation |\n|:--------------------------------------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|\n| [Martin Queinnec](https://www.researchgate.net/profile/Martin-Queinnec) | University of British Columbia |\n| [Joanne C. White](https://scholar.google.ca/citations?user=bqjk4skAAAAJ&hl=en/) | Canadian Forest Service |\n| [Piotr Tompalski](https://scholar.google.ca/citations?user=RtYdz0cAAAAJ&hl=en/) | Canadian Forest Service |\n| [Andrew T. Hudak](https://www.fs.usda.gov/research/about/people/ahudak/) | United States Forest Service |\n| [Ruben Valbuena](https://scholar.google.com/citations?user=Nx336TQAAAAJ&hl=en/) | Swedish University of Agricultural Sciences |\n| [Antoine LeBoeuf](https://scholar.google.com/citations?user=wGsKOK8AAAAJ&hl=en/) | Minist\xc3\xa8re des For\xc3\xaats, de la Faune et des Parcs |\n| [Ian Sinclair](https://ca.linkedin.com/in/ian-sinclair-984929a4/) | Ministry of Northern Development, Mines, Natural Resources and Forestry |\n| [Grant McCartney](https://www.signalhire.com/profiles/grant-mccartney%27s-email/99719223/) | Forsite Consultants Ltd. 
|\n| [Jean-Francois Prieur](https://www.researchgate.net/scientific-contributions/Jean-Francois-Prieur-2142960944) | Universit\xc3\xa9 de Sherbrooke |\n| [Murray Woods](https://www.researchgate.net/profile/Murray-Woods) | (Retired) Ministry of Northern Development, Mines, Natural Resources and Forestry |\n\n## Funding :raised_hands:\n\nDevelopment of `sgsR` was made possible thanks to the financial support\nof the Canadian Wood Fibre Centre\xe2\x80\x99s Forest Innovation Program.\n'",",https://doi.org/10.1093/forestry/cpac055","2021/04/08, 21:20:44",930,GPL-3.0,102,568,"2023/06/12, 20:11:29",0,13,33,16,135,0,0.0,0.027372262773722622,"2023/06/13, 19:25:42",v1.4.4,0,5,false,,false,false,,,,,,,,,,, r3PG,An R package for forest growth simulation using the 3-PG process-based model.,trotsiuk,https://github.com/trotsiuk/r3PG.git,github,,Forest Observation and Management,"2023/09/19, 10:34:54",25,0,6,true,HTML,,,"HTML,Fortran,R,C",https://trotsiuk.github.io/r3PG/,"b'[![Project Status: Active \xe2\x80\x93 The project has reached a stable, usable state and is being actively developed.](http://www.repostatus.org/badges/latest/active.svg)](http://www.repostatus.org/#active)\n[![R-CMD-check](https://github.com/trotsiuk/r3PG/actions/workflows/R-CMD-check.yaml/badge.svg)](https://github.com/trotsiuk/r3PG/actions/workflows/R-CMD-check.yaml)\n[![CRAN_Status_Badge](http://www.r-pkg.org/badges/version/r3PG)](https://cran.r-project.org/package=r3PG)\n[![](https://cranlogs.r-pkg.org/badges/grand-total/r3PG)](https://cran.r-project.org/package=r3PG)\n[![License: GPL v3](https://img.shields.io/badge/License-GPL%20v3-blue.svg)](https://www.gnu.org/licenses/gpl-3.0)\n\n\n\n## Purpose\n\n`r3PG` provides an implementation of the Physiological Processes Predicting Growth ([3-PG](https://3pg.forestry.ubc.ca)) model (Landsberg & Waring, 1997), which simulates forest growth and productivity. `r3PG` serves as a flexible and easy-to-use interface for the `3-PGpjs` (Sands, 2010) and `3-PGmix` (Forrester & Tang, 2016) models written in `Fortran`. The package enables fast and easy interaction with the model, and the `Fortran` re-implementation facilitates computationally intensive sensitivity analysis and calibration. The user can flexibly switch between various options and submodules, to use either the original `3-PGpjs` model version for monospecific, even-aged and evergreen forests or the `3-PGmix` model, which can also simulate multi-cohort stands (e.g. 
mixtures, uneven-aged) that contain deciduous species.\n\n## Usage\n\nBelow is a basic example; for more extended examples, please visit the package [vignette](https://htmlpreview.github.io/?https://github.com/trotsiuk/r3PG/blob/master/vignettes/r3PG-ReferenceManual.html).\n\nThe main function is `run_3PG()`, which returns all 108 simulated variables for each species at a monthly time-step, either as a 4-dimensional array or a long-format data frame.\n\n```r\nlibrary(r3PG)\nout_3PG <- run_3PG(\n site = d_site, \n species = d_species, \n climate = d_climate, \n thinning = d_thinning,\n parameters = d_parameters, \n size_dist = d_sizeDist,\n settings = list(light_model = 2, transp_model = 2, phys_model = 2, \n height_model = 1, correct_bias = 0, calculate_d13c = 0),\n check_input = TRUE, df_out = TRUE)\n\nhead( out_3PG )\n```\n\nTo visualize the output:\n``` r\nlibrary(dplyr)\nlibrary(ggplot2)\n\nsel_var <- c(\'biom_stem\', \'biom_foliage\', \'biom_root\')\n\nout_3PG %>%\n filter( variable %in% sel_var ) %>%\n ggplot( aes(date, value, color = species) ) +\n geom_line() +\n facet_wrap(~variable, scales = \'free\') +\n theme_classic()\n```\n\nIf you prefer to use data stored in `Excel`, you can use the following example. Data to reproduce this example are stored in [data-raw/internal_data/data.input.xlsx](https://github.com/trotsiuk/r3PG/blob/master/data-raw/).\n\n``` r\nlibrary(readxl)\n\nf_loc <- \'data.input.xlsx\'\n\nrun_3PG(\n site = read_xlsx(f_loc, \'site\'),\n species = read_xlsx(f_loc, \'species\'),\n climate = read_xlsx(f_loc, \'climate\'),\n thinning = read_xlsx(f_loc, \'thinning\'),\n parameters = read_xlsx(f_loc, \'parameters\'), \n size_dist = read_xlsx(f_loc, \'sizeDist\'),\n settings = list(light_model = 2, transp_model = 2, phys_model = 2, \n height_model = 1, correct_bias = 0, calculate_d13c = 0),\n check_input = TRUE, df_out = TRUE)\n```\n\n## Installation \n\n### Stable release\n\n`r3PG` is available for installation from [CRAN](https://cran.r-project.org/web/packages/r3PG/index.html) \n\n```r\ninstall.packages(""r3PG"")\n```\n\n### Development release\n\nTo install the current (development) version from the repository, run the following command:\n\n```r\nif(!require(devtools)){install.packages(""devtools"")}\ndevtools::install_github(repo = ""trotsiuk/r3PG"", build_vignettes = T)\n```\n\nThe unit test status of the master (development) branch is [![R-CMD-check](https://github.com/trotsiuk/r3PG/actions/workflows/R-CMD-check.yaml/badge.svg)](https://github.com/trotsiuk/r3PG/actions/workflows/R-CMD-check.yaml)\n\n## Other 3-PG implementations in R\n\nWe would like to acknowledge that r3PG is not the only 3-PG implementation in R. We are aware of the following other packages:\n\n| Maintainer | Source |\n| ------------------- | ------ |\n| Daniel M. Griffith | https://github.com/griffithdan/r3PG |\n| Georgios Xenakis | https://github.com/drGeorgeXenakis/fr3PGD |\n| Francesco Minunno | https://github.com/checcomi/threePGN-package |\n| Quinn Thomas | https://github.com/EcoDynForecast/DAPPER |\n\nWe explain in a recent publication ([Trotsiuk et al, 2020](https://doi.org/10.1111/2041-210X.13474)) how r3PG differs from and/or improves upon these packages. \n\n## Issues, suggestions, contributions\n\nPlease submit issues, bugs and suggestions in the dedicated [page](https://github.com/trotsiuk/r3PG/issues). 
Contribution and improvements are always welcome!\n\n## Author and contact\n\n[Volodymyr Trotsiuk](mailto:volodymyr.trotsiuk@wsl.ch),\n[Florian Hartig](mailto:florian.hartig@biologie.uni-regensburg.de),\n[David I. Forrester](mailto:david.forrester@wsl.ch)\n\n\n## Citation\nTrotsiuk, V., Hartig, F., Forrester, D.I. (2020). r3PG \xe2\x80\x93 an R package for simulating forest growth using the 3-PG process-based model. Methods Ecol. Evol., 11, 1470\xe2\x80\x931475. https://doi.org/10.1111/2041-210X.13474\n\n\n## References\n\nForrester, D. I., & Tang, X. (2016). Analysing the spatial and temporal dynamics of species interactions in mixed-species forests and the effects of stand density using the 3-PG model. Ecological Modelling, 319, 233\xe2\x80\x93254. https://doi.org/10.1016/j.ecolmodel.2015.07.010\n\nLandsberg, J. J., & Waring, R. H. (1997). A generalised model of forest productivity using simplified concepts of radiation-use efficiency, carbon balance and partitioning. Forest Ecology and Management, 95(3), 209\xe2\x80\x93228. https://doi.org/10.1016/S0378-1127(97)00026-1\n\nSands, P. J. (2010). 3PGpjs user manual. Retrieved from https://3pg.sites.olt.ubc.ca/files/2014/04/3PGpjs_UserManual.pdf\n\nTrotsiuk, V., Hartig, F., Cailleret, M., Babst, F., Forrester, D. I., Baltensweiler, A., \xe2\x80\xa6 Schaub, M. (2020). Assessing the response of forest productivity to climate extremes in Switzerland using model\xe2\x80\x93data fusion. Global Change Biology, 26(4), 2463\xe2\x80\x932476. https://doi.org/10.1111/gcb.15011\n\nForrester, D. I., Hobi M. L., Mathys A. S., Stadelmann G., Trotsiuk V. (2021). Calibration of the process-based model 3-PG for major central European tree species. European Journal of Forest Research, 140, 847-868. https://doi.org/10.1007/s10342-021-01370-3\n'",",https://doi.org/10.1111/2041-210X.13474,https://doi.org/10.1111/2041-210X.13474\n\n\n##,https://doi.org/10.1016/j.ecolmodel.2015.07.010\n\nLandsberg,https://doi.org/10.1016/S0378-1127(97)00026-1\n\nSands,https://doi.org/10.1111/gcb.15011\n\nForrester,https://doi.org/10.1007/s10342-021-01370-3\n","2020/02/12, 09:31:56",1351,GPL-3.0,30,238,"2023/09/18, 14:07:38",1,21,97,27,37,0,0.0,0.18840579710144922,"2023/09/19, 10:36:28",v0.1.6,0,3,false,,false,false,,,,,,,,,,, Our Forests Tomorrow,Visualizing European forests future.,developmentseed,https://github.com/developmentseed/our-forests-tomorrow.git,github,,Forest Observation and Management,"2023/10/12, 10:05:00",10,0,10,true,TypeScript,Development Seed,developmentseed,"TypeScript,JavaScript,HTML,Shell",https://devseed.com/our-forests-tomorrow/,"b'# Our Forests Tomorrow: Visualizing European forests future\n\n![](https://raw.githubusercontent.com/developmentseed/our-forests-tomorrow/main/public/img/forest.webp)\n\nIn February 2022 European researchers published the [Eu-Trees4F study](https://publications.jrc.ec.europa.eu/repository/handle/JRC127314), comprising a dataset that predicts the extent of 67 tree species in Europe, in 2035, 2065 and 2095, on a 10km grid, according to a whole range of bioclimatic parameters and on two different RCPs (intermediate 4.5 and BAU 8.5).\n\nThe number of considered species, the precision of the simulation and the relatively high spatial resolution make this dataset an absolute goldmine for forestry, research and biodiversity, but considering how salient the issue of forests has become in public opinion, it could also be interesting to a broad audience. 
Where I live in the West of France, people would be shocked to learn that our beloved beech trees, birch trees and aspens will likely be gone within 45 years. But with a little help (in the form of assisted migration), the forests here could see new species thrive, like Mediterranean oaks, willow trees, etc.\n\nThe goal of this project is to bring those findings to a large audience, building interactive product(s) such as a minisite and notebooks, and through media partners.\n\n## What are we up to?\n\n- The mini site prototype: [Our Forests Tomorrow](https://devseed.com/our-forests-tomorrow/)\n\n![205045050-fa1d002e-dc6e-452c-9da2-d2ee16570722](https://user-images.githubusercontent.com/1583415/230595970-5febc948-5833-4910-ba57-67832e8ab39e.png)\n\n- Based on that initial work, we ran several [technical/design experiments](https://github.com/developmentseed/our-forests-tomorrow/issues/1)\n- Work is under way to develop a visual identity as well as high-fidelity designs for a final version of the minisite\n- We developed two stories based on the initial study, currently drafted as Observable notebooks:\n - [Jam\xc3\xb3n ib\xc3\xa9rico\xc2\xa0under threat from climate crisis](https://observablehq.com/@nerik/eu-trees4f-jamon-iberico-under-threat-from-climate-crisis)\n - [Positive effects of climate change and warmer conditions for European trees](https://observablehq.com/@nerik/eu-trees4f-positive-effects-of-climate-change-and-warmer-co)\n - see [working document](https://paper.dropbox.com/doc/EUTrees-4F-Storytelling--B1xvfzlZQDPKG9WqxC_iDGmmAg-0dzBRWbdXLaLB3DIZZ6lE) (private)\n\nInternal use only/private documents:\n- [Labs phase 1 ticket](https://github.com/developmentseed/labs/issues/296)\n- [Labs phase 2 ticket](https://github.com/developmentseed/labs/issues/283)\n- [Team week slides](https://docs.google.com/presentation/d/1sRQSuknT50N6ysPNUXxmHZbfnZDjxMRk8rsXJRIXA4U/edit#slide=id.gb700de37bd_0_524)\n- [Pitch slides deck](https://docs.google.com/presentation/d/18SjpRg7HhnR_Acjt3FmFDx5ecDaAn__TVhbaKrx6MpA/edit#slide=id.gb700de37bd_0_524)\n- [Figma wireframes + high fidelity designs](https://www.figma.com/file/Yoa1s61W6Q2NvK5z7jHygx/Our-forests-tomorrow?node-id=182-3326&t=oBWVrEvmbG2vf5WN-0)\n\n## About the dataset\n\nThe original dataset consists of a series of GeoTIFF files covering mostly the European Union + some Eastern European countries. There is data for:\n- 67 tree species\n- 2 emissions scenarios (RCP 4.5 and 8.5)\n- 2 modelled outputs (climatic ensemble mean and SDM ensemble mean)\n- The 2005 baseline + 3 time steps (2035, 2065, 2095)\n- Data is gridded on a 10km grid\n- Data is scaled on a continuous scale running from 0 (species absent) to 1 (species extremely likely to be suitable) \n\nCompared with the 2005 baseline extent, the main visualisation goal is to assess what will happen in European regions:\n- Is the species __stable__?\n- Is the species likely to disappear/__decolonize__? (present in 2005, not present in the future timestep)\n- Is the species newly __suitable__? (not present in 2005, but conditions allow thriving in the future timestep)\n...for the three future timesteps, under the two emissions scenarios, for each of the 67 species.\n\nDatasets currently used in the prototype minisite (only RCP 8.5):\n- A stats JSON file that represents the number of cells falling into each of the stable/decolonize/suitable classes, for each timestep, for each region and country. 
This is used to generate the small multiples charts for each region\n- A static vector tileset (MVT), points sampled from the GeoTIFFs along a hex grid, containing the class for each timestep. Generated for 10km, 20km and 40km grids\n\nOther datasets generated:\n- GeoJSON files (point geometries) for each species with continuous values for each timestep under the two RCPs\n- GeoJSON files (hexagon geometries) - needed to render in Mapbox GL, which is not able to render hexes on the fly from points\n- TopoJSON extents\n\n\n## Run the minisite prototype locally\n\nClone this repo and:\n```\nyarn\nyarn start\n```\n\n'",,"2023/04/04, 12:21:09",204,Apache-2.0,65,240,"2023/05/05, 15:32:52",2,3,5,5,173,0,0.0,0.0,,,0,1,false,,false,false,,,https://github.com/developmentseed,http://developmentseed.org,"Washington, DC",,,https://avatars.githubusercontent.com/u/92384?v=4,,, eu_cbm_hat,"Enables the assessment of forest CO2 emissions and removals under scenarios of forest management, natural disturbances and forest-related land use changes.",bioeconomy/eu_cbm,https://gitlab.com/bioeconomy/eu_cbm/eu_cbm_hat,gitlab,,Forest Observation and Management,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,, OpenPlantPathology,"Open Plant Pathology is an initiative that supports and promotes the spread of all open, transparent and reproducible practices in the field of plant pathology.",openplantpathology,https://github.com/openplantpathology/OpenPlantPathology.git,github,"openplantpathology,reproducible-research",Plants and Vegetation,"2023/06/17, 15:40:43",19,0,1,true,HTML,Open Plant Pathology,openplantpathology,"HTML,JavaScript,TeX,R,CSS,SCSS",https://www.openplantpathology.org/,"b'# OpenPlantPathology\n\n[![Netlify Status](https://api.netlify.com/api/v1/badges/49264c27-be62-46f0-aeaf-5882bcdfdb3e/deploy-status)](https://app.netlify.com/sites/openplantpathology/deploys)\n\nThis repository contains the code for the Open Plant Pathology website.\n\n## Code of Conduct\n\nPlease note that the OpenPlantPathology project is released with a [Contributor Code of Conduct](https://contributor-covenant.org/version/2/0/CODE_OF_CONDUCT.html).\nBy contributing to this project, you agree to abide by its terms.\n'",,"2018/01/08, 11:10:15",2116,MIT,34,1217,"2022/04/11, 01:41:26",0,72,99,0,562,0,0.0,0.2630614115490376,,,0,6,false,,false,false,,,https://github.com/openplantpathology,https://www.openplantpathology.org/,Global,,,https://avatars.githubusercontent.com/u/27719812?v=4,,, CRootBox,"The focus of CRootBox is the simulation of different types of root architecture, and to provide a generic interface for coupling with arbitrary soil/environmental models, e.g., in order to determine the impact of specific root architectures on function.",Plant-Root-Soil-Interactions-Modelling,https://github.com/Plant-Root-Soil-Interactions-Modelling/CRootBox.git,github,,Plants and Vegetation,"2019/11/07, 07:57:35",18,0,1,false,C++,,Plant-Root-Soil-Interactions-Modelling,"C++,Python,TeX,Makefile,CMake,MATLAB",https://plant-root-soil-interactions-modelling.github.io/CRootBox/,"b'# CRootBox became CPlantBox\n\nCRootBox is no longer developed; it grew into [CPlantBox](https://github.com/Plant-Root-Soil-Interactions-Modelling/CPlantBox). 
\n\nCPlantBox is fully backward compatible with CRootBox, offering a RootSystem-only class, but can additionally simulate the plant shoot and leaves, enabling even more complex functional plant models.\n\nFor a description of the root system modelling please refer to tutorial/latex/RootBox.\n\n\n\n# CRootBox\n\nThe fastest way to try CRootBox is to read the tutorial and look at the examples in C++ or Python. \n\nFor C++ just uncomment the desired example in the `examples/main.cpp` file, compile and run it.\n```bash\ncmake .\nmake\ncd examples && ./test_crootbox\n```\n\n`cmake . ` runs CMake which configures the CRootBox libraries. `make` builds the libraries and the C++ example. `cd examples && ./test_crootbox` runs the example.\n\n# Folder structure\n\n`/docs` \t\t\tCRootbox website\\\n`/examples` \t\tSome C++ examples showing how to use CRootBox\\\n`/external` External libraries\\\n`/modelparameter`\t\tSome root parameter, and plant parameter files\\\n`/python` Python examples\\\n`/results` \t\tNice result images\\\n`/scripts` \t\tPython scripts for visualization with Paraview, and Matlab scripts for parameter export\\\n`/src`\t\t\tCRootBox C++ codes\\\n`/test` \t\t\tTest\\\n`/tutorial` \t\tCRootbox tutorial\n\n\n# Documentation\n\nCreate the documentation by running doxygen in the folder:\n$ doxygen doxy_config\n\nThe documentation should now be located in the folder /doc\n\n# Python bindings\n\nA Python library is automatically built by CMake if Python 3 and boost-python are installed on your system.\n\n# References\n\nPlease cite one or all of the following papers if you make use of CRootBox in your publication.\n\nAndrea Schnepf, Daniel Leitner, Magdalena Landl, Guillaume Lobet, Trung Hieu Mai, Shehan Morandage, Cheng Sheng, Mirjam Z\xc3\xb6rner, Jan Vanderborght, Harry Vereecken; CRootBox: a structural\xe2\x80\x93functional modelling framework for root systems, Annals of Botany, Volume 121, Issue 5, 18 April 2018, Pages 1033\xe2\x80\x931053, https://doi.org/10.1093/aob/mcx221\n\nSTATISTICAL CHARACTERIZATION OF THE ROOT SYSTEM ARCHITECTURE MODEL CROOTBOX\nAndrea Schnepf; Katrin Huber; Magdalena Landl; F\xc3\xa9licien Meunier; Lukas Petrich; Volker Schmidt\ndoi: 10.2136/vzj2017.12.0212; Date posted: May 04, 2018\n'",",https://doi.org/10.1093/aob/mcx221\n\nSTATISTICAL","2017/02/03, 14:20:39",2455,CUSTOM,0,389,"2019/09/26, 07:31:50",5,11,23,0,1490,2,0.0,0.430379746835443,,,0,6,false,,false,false,,,https://github.com/Plant-Root-Soil-Interactions-Modelling,,,,,https://avatars.githubusercontent.com/u/25530913?v=4,,, PlantCV,Plant phenotyping using computer vision.,danforthcenter,https://github.com/danforthcenter/plantcv.git,github,"science,bioinformatics,image-analysis,plant-phenotyping,plantcv",Plants and Vegetation,"2023/10/19, 17:52:50",568,91,93,true,Python,Donald Danforth Plant Science Center,danforthcenter,"Python,Shell,R,Dockerfile",,"b'![builds](https://github.com/danforthcenter/plantcv/workflows/builds/badge.svg)\n[![codecov](https://codecov.io/gh/danforthcenter/plantcv/branch/main/graph/badge.svg)](https://codecov.io/gh/danforthcenter/plantcv)\n[![Documentation Status](http://readthedocs.org/projects/plantcv/badge/?version=latest)](http://plantcv.readthedocs.io/en/latest/?badge=latest)\n[![Binder](https://mybinder.org/badge_logo.svg)](https://mybinder.org/v2/gh/danforthcenter/plantcv-binder.git/master?filepath=index.ipynb)\n[![Docker Pulls](https://img.shields.io/docker/pulls/danforthcenter/plantcv.svg)](https://hub.docker.com/r/danforthcenter/plantcv/)\n[![GitHub 
release](https://img.shields.io/github/release/danforthcenter/plantcv.svg)](https://github.com/danforthcenter/plantcv/releases)\n[![PyPI version](https://badge.fury.io/py/plantcv.svg)](https://badge.fury.io/py/plantcv)\n![Conda](https://img.shields.io/conda/v/conda-forge/plantcv)\n[![license](https://img.shields.io/github/license/danforthcenter/plantcv.svg)](https://github.com/danforthcenter/plantcv/blob/main/LICENSE)\n[![Contributor Covenant](https://img.shields.io/badge/Contributor%20Covenant-2.1-4baaaa.svg)](docs/CODE_OF_CONDUCT.md)\n\n[![All Contributors](https://img.shields.io/badge/all_contributors-48-orange.svg?style=flat-square)](#contributors-)\n\n\n# PlantCV: Plant phenotyping using computer vision\n\nPlease use, cite, and [contribute to](http://plantcv.readthedocs.io/en/latest/CONTRIBUTING/) PlantCV!\nIf you have questions, please submit them via the\n[GitHub issues page](https://github.com/danforthcenter/plantcv/issues).\nFollow us on Twitter [@plantcv](https://twitter.com/plantcv).\n\n***\n\n## Introduction to PlantCV\n\nPlantCV is an open-source image analysis software package targeted for plant phenotyping. PlantCV provides a common\nprogramming and documentation interface to a collection of image analysis techniques that are integrated from a variety\nof source packages and algorithms. PlantCV utilizes a modular architecture that enables flexibility in the design of\nanalysis workflows and rapid assimilation and integration of new methods. For more information about the project,\nlinks to recorded presentations, and publications using PlantCV, please visit our homepage: \nhttps://plantcv.danforthcenter.org.\n\n### Quick Links\n\n* [Documentation](http://plantcv.readthedocs.io/)\n* [Interactive Documentation](https://mybinder.org/v2/gh/danforthcenter/plantcv-binder.git/master?filepath=index.ipynb)\n* [Installation Instructions](https://plantcv.readthedocs.io/en/stable/installation/)\n* [Updating/Changelog](https://plantcv.readthedocs.io/en/stable/updating/)\n* [Public Image Datasets](http://plantcv.danforthcenter.org/pages/data.html)\n* [Contribution Guide](https://plantcv.readthedocs.io/en/stable/CONTRIBUTING/)\n* [Code of Conduct](https://plantcv.readthedocs.io/en/stable/CODE_OF_CONDUCT/)\n* Downloads\n * [GitHub](https://github.com/danforthcenter/plantcv)\n * [PyPI](https://pypi.org/project/plantcv/)\n * [Conda-forge](https://anaconda.org/conda-forge/plantcv)\n * [Docker](https://hub.docker.com/r/danforthcenter/plantcv)\n * [Zenodo](https://doi.org/10.5281/zenodo.595522)\n\n### Citing PlantCV\n\nIf you use PlantCV, please cite the [PlantCV publications](https://plantcv.danforthcenter.org/#plantcv-publications)\nrelevant to your work. To see how others have used PlantCV in their research, check out our list of \n[publications using PlantCV](https://plantcv.danforthcenter.org/#publications-using-plantcv).\n\n***\n\n## Issues with PlantCV\n\nPlease file any PlantCV suggestions/issues/bugs via our \n[GitHub issues page](https://github.com/danforthcenter/plantcv/issues). Please check to see if any related \nissues have already been filed.\n\n***\n\n## Contributors\n

Adam Dimech

\xf0\x9f\x92\xbb \xf0\x9f\x93\x96 \xe2\x9a\xa0\xef\xb8\x8f \xf0\x9f\xa4\x94

Alexander Kutschera

\xf0\x9f\x92\xbb \xf0\x9f\x93\x96 \xe2\x9a\xa0\xef\xb8\x8f

Alexandria Pokorny

\xf0\x9f\x92\xbb \xf0\x9f\x93\x96 \xe2\x9a\xa0\xef\xb8\x8f

Andy Lin

\xf0\x9f\x92\xbb

Cesar Lizarraga

\xf0\x9f\x92\xbb \xf0\x9f\x93\x96 \xe2\x9a\xa0\xef\xb8\x8f

Collin Luebbert

\xf0\x9f\x92\xbb

David Peery

\xf0\x9f\x92\xbb \xf0\x9f\x93\x96 \xe2\x9a\xa0\xef\xb8\x8f \xf0\x9f\xa4\x94

Dhiraj Srivastava

\xf0\x9f\x92\xbb \xf0\x9f\x93\x96 \xe2\x9a\xa0\xef\xb8\x8f

Dominik Schneider

\xf0\x9f\x92\xbb \xf0\x9f\x93\x96 \xe2\x9a\xa0\xef\xb8\x8f \xf0\x9f\xa4\x94

Eric Platon

\xf0\x9f\x92\xbb \xf0\x9f\x93\x96 \xe2\x9a\xa0\xef\xb8\x8f

Fabian Dubois

\xf0\x9f\x92\xbb

Fabio Barbero

\xf0\x9f\x92\xbb

Garrot Yoan

\xf0\x9f\x93\x96

GrantKonkel

\xf0\x9f\x92\xbb \xf0\x9f\x93\x96

Haley Schuhl

\xf0\x9f\x92\xbb \xf0\x9f\x93\x96 \xf0\x9f\x9a\xa7 \xe2\x9c\x85 \xe2\x9a\xa0\xef\xb8\x8f \xf0\x9f\x93\xa2 \xf0\x9f\xa4\x94 \xf0\x9f\x92\xac

Hudanyun Sheng

\xf0\x9f\x92\xbb \xf0\x9f\x93\x96 \xe2\x9a\xa0\xef\xb8\x8f \xf0\x9f\xa4\x94

Jake

\xf0\x9f\x92\xbb \xf0\x9f\x93\x96

Jeffrey Berry

\xf0\x9f\x92\xbb \xf0\x9f\x93\x96 \xe2\x9a\xa0\xef\xb8\x8f \xf0\x9f\xa4\x94

JoeDuenwald

\xf0\x9f\x93\x96

Jorge Gutierrez

\xf0\x9f\x92\xbb \xf0\x9f\x93\x96 \xe2\x9a\xa0\xef\xb8\x8f \xf0\x9f\x93\xa2 \xf0\x9f\xa4\x94 \xf0\x9f\x92\xac

Josh Sumner

\xf0\x9f\x92\xbb \xf0\x9f\x93\x96 \xe2\x9a\xa0\xef\xb8\x8f \xe2\x9c\x85 \xf0\x9f\xa4\x94

Katie Murphy

\xf0\x9f\x93\x96 \xe2\x9c\x85 \xf0\x9f\x93\xa2 \xf0\x9f\xa4\x94 \xf0\x9f\x93\xa3

Malia Gehan

\xf0\x9f\x93\x86 \xf0\x9f\x92\xbb \xf0\x9f\x93\x96 \xf0\x9f\x94\x8d \xf0\x9f\xa7\x91\xe2\x80\x8d\xf0\x9f\x8f\xab \xf0\x9f\x93\xa2 \xe2\x9c\x85 \xf0\x9f\xa4\x94 \xf0\x9f\x92\xac \xe2\x9a\xa0\xef\xb8\x8f

Malinda

\xf0\x9f\x92\xbb

Mark Wilson

\xf0\x9f\x92\xbb \xf0\x9f\x93\x96 \xe2\x9a\xa0\xef\xb8\x8f

Max

\xf0\x9f\x92\xbb \xf0\x9f\x93\x96 \xf0\x9f\xa4\x94

Noah Fahlgren

\xf0\x9f\x93\x86 \xf0\x9f\x92\xbb \xf0\x9f\x93\x96 \xe2\x9a\xa0\xef\xb8\x8f \xf0\x9f\x94\x8d \xf0\x9f\xa7\x91\xe2\x80\x8d\xf0\x9f\x8f\xab \xf0\x9f\x93\xa2 \xe2\x9c\x85 \xf0\x9f\xa4\x94 \xf0\x9f\x92\xac

Sanazjd

\xf0\x9f\x92\xbb \xf0\x9f\x93\x96 \xe2\x9a\xa0\xef\xb8\x8f \xf0\x9f\xa4\x94

SethPolydore

\xf0\x9f\x92\xbb \xf0\x9f\x93\x96 \xe2\x9a\xa0\xef\xb8\x8f

Steen Hoyer

\xf0\x9f\x92\xbb \xf0\x9f\x93\x96 \xf0\x9f\xa4\x94

Stephan Summerer

\xf0\x9f\x92\xbb

Steven Wu

\xf0\x9f\x92\xbb

Stylopidae1793

\xf0\x9f\x93\x96

adrianethompson

\xf0\x9f\x92\xbb \xf0\x9f\x93\x96 \xe2\x9c\x85

annacasto

\xf0\x9f\x92\xbb \xf0\x9f\x93\x96 \xe2\x9a\xa0\xef\xb8\x8f \xe2\x9c\x85 \xf0\x9f\x93\xa2 \xf0\x9f\xa4\x94

bganglia

\xf0\x9f\x92\xbb \xf0\x9f\x93\x96 \xe2\x9a\xa0\xef\xb8\x8f

jgerardhodge

\xf0\x9f\x92\xbb \xf0\x9f\x93\x96 \xe2\x9a\xa0\xef\xb8\x8f

jmgordon1223

\xf0\x9f\x93\x96

jwheeler5

\xf0\x9f\x92\xbb \xf0\x9f\x93\x96 \xe2\x9a\xa0\xef\xb8\x8f

kbgilbert

\xf0\x9f\x8e\xa8

lacostag

\xf0\x9f\x93\x96

lchavez037

\xf0\x9f\x93\x96

leowlima

\xf0\x9f\x93\x96

scallen81

\xf0\x9f\x92\xbb

sdkenney42

\xf0\x9f\x93\x96

typelogic

\xf0\x9f\x92\xbb

wurDevTim

\xf0\x9f\x92\xbb \xf0\x9f\x93\x96 \xe2\x9a\xa0\xef\xb8\x8f

zeeuqsze

\xf0\x9f\x93\x96 \xe2\x9c\x85 \xf0\x9f\x93\xa2 \xf0\x9f\xa7\x91\xe2\x80\x8d\xf0\x9f\x8f\xab
\n\n\n\n\n\n\n\n\n\n\n\n\n'",",https://doi.org/10.5281/zenodo.595522","2014/03/14, 18:24:23",3512,MPL-2.0,1541,7100,"2023/10/19, 17:52:55",68,804,1244,396,6,15,0.8,0.6276674025018396,"2023/08/25, 22:14:10",v4.0.1,0,51,false,,false,false,"MauroAbidalCarrer/leaffliction,JBarmentlo/42-Leafliction,danforthcenter/plantcv-stomata-tutorial-pcv4,danforthcenter/plantcv-stomata-tutorial,Simar197/j,blaneart/Leaffliction,chriswang0228/cloud_seg,mengmeng-jiang/coleaf,machineagency/duckbot,CSC-Odyssey/plantaea-refresh,ihdia/seamformer,WalterLuong/leaffliction,SLUVisLab/petal-count,arlaine4/Leaffliction,thervieu/leaffliction,rinocs/retinopathyTortuosityDetection,CSC-Odyssey/plantaea-image-processing,zmad55/plantaea-refresh,Addepto/Plant-Growth-Detection-CV,cinarolog/Electronics-and-Telecommunication-Engineering-Graduation-Project-WEB-Part-Streamlit-YOLO,jhgille2/nutrient_drop_out_cv,phytooracle/rgb_drone_canopy_cover,marilin99/fibar_tool,the-rhizodynamics-robot/groot-file-sorting,HydroTekFarm2023/hydro-ai-usecases,aureliedj/cv_scripts_agroscope,fPrager/artistic-plant,owalid/leaf-valley,SaucyBoi21/BiomassPredictonAI,SaucyBoi21/PlantCV-Testing,hisylee/rheology_chat,syncrotron/Thesis,jonaskaz/plant-analysis,IFQI91/Proyecto_final_COA501_IFQI,IFQI91/jupyter,jonaskaz/plant-vision,isi-vista/adam,vishwachintu/drmanagement,agostof/ILCI-JupyterHub,massivejords/Agar-plate-leaf-area,suryachereddy/CSE598PerceptionDrone,Leon-Yu0320/BTI-Plant-phenotyping,bliptrip/ZalapaTools,SFP-team/berrycv-workflow,Ilyabasharov/masonry_detection,RyanJennings1/Dissertation,danforthcenter/nsf-reu-2021,gelsightinc/gsrobotics,shajibghosh/post-processing-for-counterfeit-ic-detection,HugeCoderGuy/LightsCameraPlants,JoshKing56/UNet-Segmentation-for-Waterlogged-Barley,danforthcenter/plantcv-tutorial-interactive-pollent-count,ammarsyed/CIS4914SeniorProject,JustinBear99/django_asparagus,danforthcenter/plantcv-tutorial-photosynthesis,danforthcenter/nsf-iuse-hssu-intro,ethan-vbcf/image_analysis_handson,danforthcenter/plantcv-tasselyzer-tutorial,danforthcenter/plantcv-tutorial-segment-image-series,esgomezm/microscopy-dl-suite-tf,van-der-knaap-lab/tomato-analyzer-lite,wpbonelli/herbarium-sheets,mlab-upenn/cea-os,salmasamiei/IPPN_Workshop_2021_Plantcv,amirhszd/jostar,eprendes26/plantcv_pm_image_quantification,danforthcenter/plantcv-tutorial-template,IndominusByte/hydro-tech-backend,Julia2505/PlantCV-workshop,seanfhear/IoTea,nfahlgren/image-analysis-intro,danforthcenter/plantcv-nappn2021-workshop,XiaoyiWang98/PlantCVServer,phenome-force/PlantCV-workshop,AmaurySemery/Projet-top-sites-AoG,danforthcenter/plantcv-tutorial_leaf_instance_segmentation,AlteredOracle/mushroom-classification,turahul/Mushroom-Classification,kjappelbaum/colorcalibrator,mishanius/handwriten-multi-skew-line-extraction,Rohan-SoulpageIT/cortex_image_classification_API,COE516/DeepLearningPJ01,5nano/bulmapsaur-MIU,bjorn-grape/raspi-mask-ledRGB,5nano/bulmapsaur,bigFin/PlantCV_CannTx_1,vck/sertifikasi-pemrogram,danforthcenter/Eveland_NSF_Outreach,mokjunneng/PlantImageProcessing,Liveshort/astroplant-camera-module,danforthcenter/plantcv-binder",,https://github.com/danforthcenter,http://danforthcenter.org,"St. Louis, MO",,,https://avatars.githubusercontent.com/u/9678079?v=4,,, Deep Plant Phenomics,A platform for plant phenotyping using deep learning.,p2irc,https://github.com/p2irc/deepplantphenomics.git,github,,Plants and Vegetation,"2021/03/05, 15:43:15",125,0,8,false,Python,p2irc,p2irc,"Python,Makefile",,"b""# DEPRECATED\n\nDeep Plant Phenomics is no longer actively maintained. 
It is available here for historical purposes - however, it is provided as-is with no updates or bug fixes planned.\n\nSee [this thread](https://twitter.com/jordanubbens/status/1347273714631585792) for discussion.\n\n# Deep Plant Phenomics\n\nDeep Plant Phenomics (DPP) is a platform for plant phenotyping using deep learning. Think of it as [Keras](https://keras.io/) for plant scientists.\n\nDPP integrates [Tensorflow](https://www.tensorflow.org/) for learning. This means that it is able to run on both CPUs and GPUs, and scale easily across devices.\n\nRead the [documentation](http://deep-plant-phenomics.readthedocs.io/en/latest/) for tutorials, or see the included examples. You can also read the [paper](http://journal.frontiersin.org/article/10.3389/fpls.2017.01190/full).\n\nDPP is maintained at the [Plant Phenotyping and Imaging Research Center (P2IRC)](http://p2irc.usask.ca/) at the [University of Saskatchewan](https://www.usask.ca/). \xf0\x9f\x8c\xbe\xf0\x9f\x87\xa8\xf0\x9f\x87\xa6\n\n## What's Deep Learning?\n\nPrincipally, DPP provides deep learning functionality for plant phenotyping and related applications. Deep learning is a category of techniques which encompasses many different types of neural networks. Deep learning techniques lead the state of the art in many image-based tasks, including image classification, object detection and localization, image segmentation, and others.\n\n## What Can I Do With This?\n\nThis package provides two things:\n\n### 1. Useful tools made possible using pre-trained neural networks\n\nFor example, calling `tools.predict_rosette_leaf_count(my_files)` will use a pre-trained convolutional neural network to estimate the number of leaves on each rosette plant.\n\n### 2. An easy way to train your own models\n\nFor example, using a few lines of code you can easily use your data to train a convolutional neural network to rate plants for biotic stress. 
See the [tutorial](http://deep-plant-phenomics.readthedocs.io/en/latest/Tutorial-Training-The-Leaf-Counter/) for how the leaf counting model was built.\n\n## Features\n\n- Several [trained networks](http://deep-plant-phenomics.readthedocs.io/en/latest/Tools/) for common plant phenotyping tasks.\n- Easy ways to load data.\n - Loaders for some popular plant phenotyping datasets.\n - Plenty of [different loaders](http://deep-plant-phenomics.readthedocs.io/en/latest/Loaders/) for your own data, however it is stored.\n- Support for [semantic segmentation](http://deep-plant-phenomics.readthedocs.io/en/latest/Semantic-Segmentation/).\n- Support for [object detection](http://deep-plant-phenomics.readthedocs.io/en/latest/Tutorial-Training-An-Object-Detector).\n- Support for object counting via [density estimation](http://deep-plant-phenomics.readthedocs.io/en/latest/Tutorial-Object-Counting-with-Heatmaps), including [Countception networks](http://deep-plant-phenomics.readthedocs.io/en/latest/Tutorial-Object-Counting-with-Countception/).\n- Support for classification and [regression](http://deep-plant-phenomics.readthedocs.io/en/latest/Tutorial-Training-The-Leaf-Counter) tasks.\n- Tensorboard integration for visualization.\n- Easy-to-use API for building new models.\n - [Pre-defined neural network architectures](http://deep-plant-phenomics.readthedocs.io/en/latest/Predefined-Model-Architectures) so you don't have to make your own.\n - Several data augmentation options.\n - Many ready-to-use [neural network layers](http://deep-plant-phenomics.readthedocs.io/en/latest/Neural-Network-Layers/).\n- Easy to [deploy](http://deep-plant-phenomics.readthedocs.io/en/latest/Tutorial-Deployment/) your own models as a Python function!\n\n## Example Usage\n\nTrain a simple regression model:\n\n```python\nimport deepplantphenomics as dpp\n\nmodel = dpp.RegressionModel(debug=True)\n\n# 3 channels for colour, 1 channel for greyscale\nchannels = 3\n\n# Setup and hyperparameters\nmodel.set_batch_size(64)\nmodel.set_image_dimensions(256, 256, channels)\nmodel.set_maximum_training_epochs(25)\nmodel.set_test_split(0.2)\nmodel.set_validation_split(0.0)\n\n# Load dataset of images and ground-truth labels\nmodel.load_multiple_labels_from_csv('./data/my_labels.csv')\nmodel.load_images_with_ids_from_directory('./data')\n\n# Use a predefined model\nmodel.use_predefined_model('vgg-16')\n\n# Train!\nmodel.begin_training()\n```\n\n## Installation\n\n1. `git clone https://github.com/p2irc/deepplantphenomics.git`\n2. `pip install ./deepplantphenomics`\n\n**Note**: The package now requires Python 3.6 or greater. 
Python 2.7 is no longer supported.\n\n""",,"2017/04/18, 18:18:08",2381,GPL-2.0,0,617,"2020/09/21, 16:14:40",0,41,53,0,1129,0,0.0,0.563169164882227,"2019/12/09, 21:42:16",2.1.0,0,9,false,,false,false,,,https://github.com/p2irc,p2irc.usask.ca,University of Saskatchewan,,,https://avatars.githubusercontent.com/u/27709041?v=4,,, plant,A package for modeling forest trait ecology and evolution.,traitecoevo,https://github.com/traitecoevo/plant.git,github,"ecology,evolution,demography,plant-physiology,c-plus-plus,r,trait,dynamic,simulation,science-research,forests",Plants and Vegetation,"2023/10/24, 01:35:19",51,0,2,true,C++,Trait Ecology and Evolution,traitecoevo,"C++,R,TeX,Makefile,Dockerfile,CSS,Shell",https://traitecoevo.github.io/plant,"b'# plant: A package for modelling forest trait ecology and evolution\n\n\n[![R-CMD-check](https://github.com/traitecoevo/plant/workflows/R-CMD-check/badge.svg)](https://github.com/traitecoevo/plant/master)\n[![Codecov test coverage](https://codecov.io/gh/traitecoevo/plant/branch/master/graph/badge.svg)](https://codecov.io/gh/traitecoevo/plant?branch=master)\n\n\nThe plant package for R is an extensible framework for modelling size- and trait-structured demography, ecology and evolution in simulated forests. At its core, plant is an individual-based model where plant physiology and demography are mediated by traits. Individual plants from multiple species can be grown in isolation, in patches of competing plants or in metapopulations under a disturbance regime. These dynamics can be integrated into metapopulation-level estimates of invasion fitness and vegetation structure. Accessed from R, the core routines in plant are written in C++. The package provides for alternative physiology models and for capturing trade-offs among parameters. A detailed test suite is provided to ensure correct behaviour of the code.\n\n## Citation\n\nFalster DS, FitzJohn RG, Br\xc3\xa4nnstr\xc3\xb6m \xc3\x85, Dieckmann U, Westoby M (2016) plant: A package for modelling forest trait ecology & evolution. *Methods in Ecology and Evolution* 7: 136-146. doi: [10.1111/2041-210X.12525](http://doi.org/10.1111/2041-210X.12525)\n\n## Documentation\n\nAn overview of the plant package is given by the above publication. Further background on the default `FF16` growth model is available in Falster *et al* 2011 ([10.1111/j.1365-2745.2010.01735.x](http://doi.org/10.1111/j.1365-2745.2010.01735.x)) and Falster *et al* 2017 ([10.1101/083451](http://doi.org/10.1101/083451)).\n\n`plant` comes with a lot of documentation, available at [https://traitecoevo.github.io/plant/](https://traitecoevo.github.io/plant/). Initial versions of some of the material there were also included as supplementary material with the publication about plant, which can be accessed [here](http://onlinelibrary.wiley.com/doi/10.1111/2041-210X.12525/abstract#footer-support-info). \n\n## Package structure\n\nPlant is a complex package, using [C++11](https://en.wikipedia.org/wiki/C%2B%2B11) behind the scenes for speed with [R6 classes](https://cran.r-project.org/web/packages/R6/vignettes/Introduction.html) (via the [Rcpp](https://cran.r-project.org/web/packages/Rcpp/index.html) and [RcppR6](https://github.com/richfitz/RcppR6) packages). In this blog post, Rich FitzJohn and I describe the [key technologies used to build the plant package](https://methodsblog.wordpress.com/2016/02/23/plant/). 
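As a quick orientation, a minimal usage sketch follows. It is based on the examples published with Falster et al. (2016); the function names (`scm_base_parameters()`, `trait_matrix()`, `expand_parameters()`, `run_scm()`) come from that paper and its vignettes and should be treated as assumptions, since they may differ in later releases of the package.

```r
library(plant)

# Base parameters for the default FF16 physiology
# (names follow Falster et al. 2016 and may have changed since)
p0 <- scm_base_parameters("FF16")

# Add one species, setting its leaf mass per area (lma) trait
p1 <- expand_parameters(trait_matrix(0.0825, "lma"), p0)

# Solve the size-structured community dynamics for this assemblage
result <- run_scm(p1)
```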
\n\nIf you are interested in developing plant you should read the [Developer Notes](https://traitecoevo.github.io/plant/articles/developer_notes.html).\n\n## Installation\n\n**Requirements**\n\n- You must be using R 4.1.0 or newer. At this stage the package is not on CRAN. Your options for installing are described below.\n\n- Installation requires a C++11-compatible compiler (OSX >= 10.10/Yosemite satisfies this, as do standard linux Ubuntu 12.04 and 14.04). On Windows machines you will need to install [Rtools](http://cran.r-project.org/bin/windows/Rtools/). When I tried this in [Rstudio](https://www.rstudio.com/), the program [automagically](https://en.oxforddictionaries.com/definition/automagically) sensed the absence of a compiler and asked if I wanted to install Rtools. Click `Yes`!\n\n**Option 1, using `remotes::install_github`**\n\nThe `plant` package can be installed directly from GitHub using the [`remotes`](https://cran.r-project.org/web/packages/remotes/index.html) package:\n\n```r\nremotes::install_github(""traitecoevo/plant"", dependencies=TRUE)\n```\n\nTo install a specific (older) release, decide on the version number that you want to install from https://github.com/traitecoevo/plant/releases e.g.\n\n```r\nremotes::install_github(""traitecoevo/plant@v1.0.0"", dependencies=TRUE)\n```\n\nwith `""v1.0.0""` replaced by the appropriate version number. Note that the latest version of `plant` resides on the `develop` branch, which is sporadically released. `plant` follows [semantic versioning](https://semver.org/) meaning that major versions indicate a potential break in backward compatibility.\n\n**Option 2, building from source**\n\nIf you are familiar with [git](https://git-scm.com/) you might find it easiest to build `plant` directly from the source code. This is most useful if developing new models or strategies, or contributing new features.\n\nFirst, clone the `plant` repository\n\n```\ngit clone https://github.com/traitecoevo/plant\n```\n\nOpen an R session in the folder, then to install dependencies run\n\n```\ndevtools::install_deps()\n```\n\nThen to compile the project\n\n```\ndevtools::install()\n```\nor \n\n```\ndevtools::load_all()\n```\n\n## Usage\n\nHere are some example publications using plant:\n\n- Falster DS, FitzJohn RG, Br\xc3\xa4nnstr\xc3\xb6m \xc3\x85, Dieckmann U, Westoby M (2016) plant: A package for modelling forest trait ecology & evolution. *Methods in Ecology and Evolution* 7: 136-146. DOI: [10.1111/2041-210X.12525](http://doi.org/10.1111/2041-210X.12525)  code: [github](https://github.com/traitecoevo/plant_paper)\n- Falster DS, Duursma RA, FitzJohn RG (2018) How functional traits influence plant growth and shade tolerance across the life cycle. *Proceedings of the National Academy of Sciences* 115: E6789\xe2\x80\x93E6798. DOI: [10.1073/pnas.1714044115](http://doi.org/10.1073/pnas.1714044115)  code: [github](https://github.com/traitecoevo/growth_trajectories)\n- Falster DS, Kunstler GK, FitzJohn RG, Westoby M (2021) Emergent shapes of trait-based competition functions from resource-based models: a Gaussian is not normal in plant communities. *The American Naturalist* 198: 256\xe2\x80\x93267. 
DOI: [10.1086/714868](http://doi.org/10.1086/714868)  code: [github](https://github.com/traitecoevo/competition_kernels)\n\n\n'",",http://doi.org/10.1111/2041-210X.12525,http://doi.org/10.1111/j.1365-2745.2010.01735.x,http://doi.org/10.1101/083451,http://doi.org/10.1111/2041-210X.12525,http://doi.org/10.1073/pnas.1714044115,http://doi.org/10.1086/714868","2013/04/16, 00:30:53",3845,GPL-2.0,10,1520,"2023/10/24, 01:58:31",59,81,320,30,1,2,2.1,0.2732712765957447,"2021/02/25, 01:32:55",v2.0.0,0,8,false,,false,false,,,https://github.com/traitecoevo,,Australia,,,https://avatars.githubusercontent.com/u/11747619?v=4,,, monitoring-ecosystem-resilience,The focus is understanding vegetation patterns in semi-arid environments.,alan-turing-institute,https://github.com/alan-turing-institute/monitoring-ecosystem-resilience.git,github,"hut23,hut23-240",Plants and Vegetation,"2021/09/28, 09:45:22",20,0,2,false,Python,The Alan Turing Institute,alan-turing-institute,"Python,Jupyter Notebook,R,JavaScript,TeX,Shell,Makefile",,b'![Build status](https://api.travis-ci.com/alan-turing-institute/monitoring-ecosystem-resilience.svg?branch=develop)\n\n[![Binder](https://mybinder.org/badge_logo.svg)](https://mybinder.org/v2/gh/alan-turing-institute/monitoring-ecosystem-resilience/master?filepath=notebooks)\n\n[![Documentation Status](https://readthedocs.org/projects/pyveg/badge/?version=latest)](https://pyveg.readthedocs.io/en/latest/?badge=latest)\n\n# monitoring-ecosystem-resilience\nRepository for mini-projects in the Data science for Sustainable development project.\n\nCurrently the focus of code in this repository is understanding vegetation patterns in semi-arid environments.\n\nThe code in this repository is intended to perform three inter-related tasks:\n* Download and process satellite imagery from Google Earth Engine.\n* Generate simulated vegetation patterns.\n* Calculate graph metrics to quantify the interconnectedness of vegetation in real and simulated images.\n\n### Python\n\nThe tasks above are all implemented in Python in the *pyveg* package. See the [README.md](pyveg/README.md) in the `pyveg` subdirectory for details on installation and usage.\n\n### R\n\nThe pattern-generation and graph-modelling are implemented in R in the *rveg* package. 
See the [README.md](rveg/README.md) in the `rveg` directory for further details.\n',,"2019/07/18, 09:23:22",1560,MIT,0,1566,"2022/10/05, 13:30:54",7,219,487,0,385,0,0.1,0.6034618410700237,"2020/11/19, 15:04:21",v1.1.0,0,7,false,,false,true,,,https://github.com/alan-turing-institute,https://turing.ac.uk,,,,https://avatars.githubusercontent.com/u/18304793?v=4,,, Quantitative Plant,A website presenting image analysis software tools and models for plants.,,,custom,,Plants and Vegetation,,,,,,,,,,https://www.quantitative-plant.org/,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,, phenofit,A state-of-the-art remote sensing vegetation phenology extraction package.,eco-hydro,https://github.com/eco-hydro/phenofit.git,github,"phenology,remote-sensing",Plants and Vegetation,"2023/02/15, 09:21:08",64,0,12,true,R,Eco-Hydrological Researches in China University of Geosciences (Wuhan),eco-hydro,"R,C++,C",http://phenofit.top,"b'\n# phenofit\n\n[![R-CMD-check](https://github.com/eco-hydro/phenofit/workflows/R-CMD-check/badge.svg)](https://github.com/eco-hydro/phenofit/actions)\n[![codecov](https://codecov.io/gh/eco-hydro/phenofit/branch/master/graph/badge.svg)](https://app.codecov.io/gh/eco-hydro/phenofit)\n[![License](http://img.shields.io/badge/license-GPL%20%28%3E=%202%29-brightgreen.svg?style=flat)](http://www.gnu.org/licenses/gpl-2.0.html)\n[![CRAN](http://www.r-pkg.org/badges/version/phenofit)](https://cran.r-project.org/package=phenofit)\n[![total](https://cranlogs.r-pkg.org/badges/grand-total/phenofit)](https://cran.r-project.org/package=phenofit)\n[![monthly](https://cranlogs.r-pkg.org/badges/phenofit)](https://cran.r-project.org/package=phenofit)\n[![DOI](https://zenodo.org/badge/DOI/10.5281/zenodo.6320537.svg)](https://doi.org/10.5281/zenodo.6320537)\n\nA state-of-the-art **remote sensing vegetation phenology** extraction\npackage: `phenofit`\n\n- `phenofit` combines the merits of TIMESAT and phenopix\n- A simple and stable growing season dividing method was proposed\n- Provide a practical snow elimination method based on Whittaker\n- 7 curve fitting methods and 4 phenology extraction methods\n- We add parameter boundaries for every curve fitting method according\n to their ecological meaning.\n- `optimx` is used to select the best optimization method for different\n curve fitting methods.\n\n***Task lists***\n\n- [x] Test the performance of `phenofit` in multiple growing seasons\n regions (e.g.,\xc2\xa0the North China Plain);\n- [ ] Uncertainty analysis of curve fitting and phenological metrics;\n- [x] The shiny app has been moved to\n [phenofit.shiny](https://github.com/eco-hydro/phenofit.shiny);\n- [x] Complete script automatic generating module in shinyapp;\n- [x] `Rcpp` improves double logistic optimization efficiency by 60%;\n- [x] Support spatial analysis;\n- [x] Support annual season in curve fitting;\n- [x] flexible fine fitting input (original time-series or smoothed\n time-series by rough fitting).\n- [x] Asymmetric Threshold method\n\n\n\n# Installation\n\nYou can install phenofit from GitHub with:\n\n``` r\n# install.packages(""remotes"")\nremotes::install_github(""eco-hydro/phenofit"")\n```\n\n# Note\n\nUsers can use the following options to improve the performance of phenofit in multiple growing \nseason regions:\n\n- Users can decrease the three parameters `nextend`, `minExtendMonth` and\n `maxExtendMonth` to relatively low values, by setting the option \n `set_options(fitting = list(nextend = 1, minExtendMonth = 0, maxExtendMonth = 0.5))`.\n\n- Use `wHANTS` as the rough fitting function. 
Due to the nature of Fourier\n functions, `wHANTS` is more stable for multiple growing seasons, but it is\n less flexible than `wWHIT`. `wHANTS` is suitable for regions with the static\n growing season pattern across multiple years; `wWHIT` is more suitable for\n regions with the dynamic growing season pattern. A dynamic growing season\n pattern is the most challenging task, which also means that a large\n uncertainty might exist.\n\n When using `wHANTS` as the rough fitting function, `r_min` is suggested to be\n set as zero.\n\n- Use only one iteration in the fine fitting procedure.\n\n\n# **References**\n\n> [1] Kong, D., McVicar, T. R., Xiao, M., Zhang, Y., Pe\xc3\xb1a-Arancibia, J. L., Filippa, G., Xie, Y., Gu, X. (2022). phenofit: An R package for extracting vegetation phenology from time series remote sensing. __*Methods in Ecology and Evolution*__, 13, 1508-1527. \n> \n> [2] Kong, D., Zhang, Y.\\*, Wang, D., Chen, J., & Gu, X\\*. (2020).\n> Photoperiod Explains the Asynchronization Between Vegetation Carbon\n> Phenology and Vegetation Greenness Phenology. \n> *Journal of Geophysical Research: Biogeosciences*, 125(8), e2020JG005636.\n> \n>\n> [3] Kong, D., Zhang, Y.\\*, Gu, X., & Wang, D. (2019). A robust method\n> for reconstructing global MODIS EVI time series on the Google Earth\n> Engine. __*ISPRS Journal of Photogrammetry and Remote Sensing*__, 155,\n> 13\xe2\x80\x9324.\n>\n> [4] Kong, D., (2020). R package: A state-of-the-art Vegetation\n> Phenology extraction package, `phenofit` version 0.3.5,\n> \n>\n> [5] Zhang, Q.\\*, Kong, D.\\*, Shi, P., Singh, V.P., Sun, P., 2018.\n> Vegetation phenology on the Qinghai-Tibetan Plateau and its response\n> to climate change (1982\xe2\x80\x932013). __*Agricultural and Forest Meteorology*__. 248, 408\xe2\x80\x93417.\n> \n\n\n# Acknowledgements\n\nKeep in mind that this repository is released under a GPL2 license,\nwhich permits commercial use but requires that the source code (of\nderivatives) is always open even if hosted as a web service.\n'",",https://doi.org/10.5281/zenodo.6320537,https://doi.org/10.1111/2041-210X.13870,https://doi.org/10.1029/2020JG005636,https://doi.org/10.5281/zenodo.6320537,https://doi.org/10.1016/j.agrformet.2017.10.026","2018/04/23, 08:40:14",2011,GPL-2.0,10,216,"2023/09/22, 01:46:30",0,10,15,3,33,0,0.0,0.0,"2022/03/01, 14:35:02",v0.3.5,0,1,false,,false,false,,,https://github.com/eco-hydro,http://www.phenofit.top/,China,,,https://avatars.githubusercontent.com/u/86526225?v=4,,, rnpn,R client for interacting with the USA National Phenology Network data web services.,usa-npn,https://github.com/usa-npn/rnpn.git,github,"web-api,species,data,rstats,r,national-phenology-network,phenology,r-package",Plants and Vegetation,"2023/08/30, 18:16:36",16,0,3,true,R,USA National Phenology Network,usa-npn,R,https://rdrr.io/cran/rnpn/,"b'\n# rnpn\n\n\n\n[![Project Status: Active \xe2\x80\x93 The project has reached a stable, usable\nstate and is being actively\ndeveloped.](http://www.repostatus.org/badges/latest/active.svg)](https://www.repostatus.org/)\n[![CRAN\nstatus](https://www.r-pkg.org/badges/version/rnpn)](https://CRAN.R-project.org/package=rnpn)\n[![R build\nstatus](https://github.com/usa-npn/rnpn//workflows/R-CMD-check/badge.svg)](https://github.com/usa-npn/rnpn/actions)\n\n\n`rnpn` is an R client for interacting with the USA National Phenology\nNetwork data web services. 
These services include access to a rich set\nof observer-contributed, point-based phenology records as well as\ngeospatial data products including gridded phenological model and\nclimatological data.\n\nDocumentation is available in the National Phenology Network [API\ndocumentation](https://docs.google.com/document/d/1yNjupricKOAXn6tY1sI7-EwkcfwdGUZ7lxYv7fcPjO8/edit?hl=en_US),\nwhich describes the full set of REST services this package wraps.\n\nThere is no need for an API key to grab data from the National Phenology\nNetwork but users are required to self-identify, on an honor system,\nfor requests that may draw upon larger datasets. For functions that\nrequire it, simply populate the request_source parameter with your name\nor the name of your institution.\n\n## Installation\n\nCRAN version\n\n``` r\ninstall.packages(""rnpn"")\n```\n\nDevelopment version:\n\n``` r\ninstall.packages(""devtools"")\nlibrary(\'devtools\')\ndevtools::install_github(""usa-npn/rnpn"")\n```\n\n``` r\nlibrary(\'rnpn\')\n```\n\nThis package has dependencies on both curl and gdal. Some Linux-based\nsystems may require additional system dependencies for those required\npackages, and accordingly this package, to install correctly. For\nexample, on Ubuntu:\n\n``` bash\nsudo apt install libcurl4-openssl-dev\nsudo apt install libproj-dev libgdal-dev\n```\n\n## The Basics\n\nMany of the functions to search for data require knowing the internal\nunique identifiers of some of the database entities to filter the data\ndown efficiently. For example, if you want to search by species, then\nyou must know the internal identifier of the species. To get a list of\nall available species use the following:\n\n``` r\nspecies_list <- npn_species()\n```\n\nSimilarly, for phenophases:\n\n``` r\nphenophases <- npn_phenophases()\n```\n\n### Getting Observational Data\n\nThere are four main functions for accessing observational data, at\nvarious levels of aggregation. At the most basic level you can download\nthe raw status and intensity data.\n\n``` r\nsome_data <- npn_download_status_data(request_source=\'Your Name or Org Here\',years=c(2015),species_id=c(35),states=c(\'AZ\',\'IL\'))\n```\n\nNote that through this API, data can only be filtered chronologically by\nfull calendar years. You can specify any number of years in each API\ncall. Also note that request_source is a required parameter and should\nbe populated with your name or the name of the organization you\nrepresent. All other parameters are optional but it is highly\nrecommended that you filter your data search further.\n\n### Getting Geospatial Data\n\nThis package wraps around standard WCS endpoints to facilitate the\ntransfer of raster data. Generally, this package does not focus on\ninteracting with WMS services, although they are available. 
To get a\nlist of all available data layers, use the following:\n\n``` r\nlayers <- npn_get_layer_details()\n```\n\nYou can then use the names of the layers to select and download\ngeospatial data as a raster.\n\n``` r\nnpn_download_geospatial(coverage_id = \'si-x:lilac_leaf_ncep_historic\',date=\'2016-12-31\',format=\'geotiff\',output_path=\'./six-test-raster.tiff\')\n```\n\n## Example of combined observational and geospatial data\n\nFor more details see Vignette VII\n\n\n\n## What\xe2\x80\x99s Next\n\nPlease read and review the vignettes for this package to get further\ninformation about the full scope of functionality available.\n\n## Acknowledgments\n\nThis code was developed, in part, as a component of the integrated\n[Pheno-Synthesis Software Suite\n(PS3)](https://git.earthdata.nasa.gov/projects/APIS/repos/pheno-synthesis-software-suite/browse).\nThe authors acknowledge funding for this work through NASA\xe2\x80\x99s AIST\nprogram (80NSSC17K0582, 80NSSC17K0435, 80NSSC17K0538, and\n80GSFC18T0003). The University of Arizona and the USA National Phenology\nNetwork\xe2\x80\x99s efforts with this package are supported in part by US\nGeological Survey (G14AC00405, G18AC00135) and the US Fish and Wildlife\nService (F16AC01075 and F19AC00168).\n\n## Meta\n\n- Please [report any issues or\n bugs](https://github.com/usa-npn/rnpn/issues).\n- License: MIT\n- Get citation information for `rnpn` in R by running\n `citation(package = \'rnpn\')`\n- Please note that this package is released with a [Contributor Code of\n Conduct](https://ropensci.org/code-of-conduct/). By contributing to\n this project, you agree to abide by its terms.\n\n[![image](http://ropensci.org/public_images/github_footer.png)](https://ropensci.org/)\n'",,"2011/08/08, 18:57:41",4461,CUSTOM,16,232,"2023/08/07, 15:26:48",1,9,31,4,79,1,0.6666666666666666,0.38596491228070173,,,0,8,false,,false,false,,,https://github.com/usa-npn,https://www.usanpn.org,"1311 E. 4th Street, Tucson, AZ 85721",,,https://avatars.githubusercontent.com/u/17733470?v=4,,, photosynthesis,"An R package with modeling tools for C3 photosynthesis, as well as analytical tools for curve-fitting plant ecophysiology responses.",cdmuir,https://github.com/cdmuir/photosynthesis.git,github,,Plants and Vegetation,"2023/08/15, 05:36:54",24,0,8,true,R,,,R,,"b'\n\n\n# photosynthesis \n\n\n\n[![CRAN\nstatus](https://www.r-pkg.org/badges/version/photosynthesis)](https://cran.r-project.org/package=photosynthesis)\n[![](https://cranlogs.r-pkg.org/badges/photosynthesis)](https://cran.r-project.org/package=photosynthesis)\n[![R-CMD-check](https://github.com/cdmuir/photosynthesis/actions/workflows/R-CMD-check.yaml/badge.svg)](https://github.com/cdmuir/photosynthesis/actions/workflows/R-CMD-check.yaml)\n\n\n## Model C3 Photosynthesis\n\n## Description\n\n**photosynthesis** is an R package with modeling tools for C3\nphotosynthesis, as well as analytical tools for curve-fitting plant\necophysiology responses. 
It uses the R package\n[**units**](https://CRAN.R-project.org/package=units) to ensure that\nparameters are properly specified and transformed before calculations.\n\n## Get **photosynthesis**\n\nFrom CRAN\n\n``` r\ninstall.packages(""photosynthesis"")\n```\n\nor from GitHub\n\n``` r\ninstall.packages(""remotes"")\nremotes::install_github(""cdmuir/photosynthesis"")\n```\n\nAnd load `photosynthesis`\n\n``` r\nlibrary(""photosynthesis"")\n```\n\n## Vignettes\n\nSee the following vignettes for examples of what **photosynthesis** can\ndo:\n\n- [Introduction to the photosynthesis\n package](https://cdmuir.github.io/photosynthesis/articles/photosynthesis-introduction.html)\n- [Modeling C3 Photosynthesis: recommendations for common\n scenarios](https://cdmuir.github.io/photosynthesis/articles/modeling-recommendations.html)\n- [Fitting light response\n curves](https://cdmuir.github.io/photosynthesis/articles/light-response.html)\n- [Fitting CO2 response\n curves](https://cdmuir.github.io/photosynthesis/articles/co2-response.html)\n- [Fitting temperature response\n curves](https://cdmuir.github.io/photosynthesis/articles/temperature-response.html)\n- [Fitting stomatal conductance\n models](https://cdmuir.github.io/photosynthesis/articles/stomatal-conductance.html)\n- [Fitting light\n respiration](https://cdmuir.github.io/photosynthesis/articles/light-respiration.html)\n- [Fitting mesophyll\n conductance](https://cdmuir.github.io/photosynthesis/articles/mesophyll-conductance.html)\n- [Fitting pressure-volume\n curves](https://cdmuir.github.io/photosynthesis/articles/pressure-volume.html)\n- [Fitting hydraulic vulnerability\n curves](https://cdmuir.github.io/photosynthesis/articles/hydraulic-vulnerability.html)\n- [Sensitivity\n Analysis](https://cdmuir.github.io/photosynthesis/articles/sensitivity-analysis.html)\n\n## Contributors\n\n- [Joseph Stinziano](https://github.com/jstinzi)\n- [Chris Muir](https://github.com/cdmuir)\n- Cassaundra Roback\n- Demi Sargent\n- Bridget Murphy\n- Patrick Hudson\n\n## Comments and contributions\n\nWe welcome comments, criticisms, and especially contributions! GitHub\nissues are the preferred way to report bugs, ask questions, or request\nnew features. 
You can submit issues here:\n<https://github.com/cdmuir/photosynthesis/issues>\n\n## Meta\n\n- Please [report any issues or\n bugs](https://github.com/cdmuir/photosynthesis/issues).\n- License: MIT\n- Get citation information for **photosynthesis** in R by running\n `citation(package = \'photosynthesis\')`\n- Please note that this project is released with a [Contributor Code of\n Conduct](https://github.com/cdmuir/photosynthesis/blob/master/CONDUCT.md).\n By participating in this project you agree to abide by its terms.\n'",,"2018/11/11, 23:46:12",1809,CUSTOM,52,181,"2023/09/05, 15:38:13",1,1,13,6,50,0,0.0,0.08695652173913049,"2023/08/15, 05:39:11",v2.1.4,0,2,false,,false,false,,,,,,,,,,, phenor,The framework leverages measurements of vegetation phenology from four common phenology observation datasets combined with global retrospective and projected climate data.,bluegreen-labs,https://github.com/bluegreen-labs/phenor.git,github,"phenocam,phenology-models,model-calibration,vegetation-phenology",Plants and Vegetation,"2023/08/28, 11:57:05",38,0,3,true,R,BlueGreen Labs,bluegreen-labs,R,https://bluegreen-labs.github.io/phenor/,"b'# phenor \n\n[![Build Status](https://github.com/bluegreen-labs/phenor/actions/workflows/R-CMD-check.yaml/badge.svg)](https://github.com/bluegreen-labs/phenor/actions/workflows/R-CMD-check.yaml)\n[![codecov](https://codecov.io/gh/bluegreen-labs/phenor/branch/master/graph/badge.svg)](https://codecov.io/gh/bluegreen-labs/phenor)\n\nThe phenor R package is a phenology modelling framework in R. The framework leverages measurements of vegetation phenology from four common phenology observation datasets combined with (global) retrospective and projected climate data (see below).\n\nThe package currently focuses on North America and Europe and relies heavily on [Daymet](https://daymet.ornl.gov/) and [E-OBS climate data](http://www.ecad.eu/download/ensembles/download.php) for underlying climate driver data in model optimization. The package supports global gridded CMIP6 forecast scenarios using the ECMWF Copernicus CDS service.\n\nPhenological model calibration / validation data are derived from:\n- the transition dates derived from [PhenoCam](https://phenocam.sr.unh.edu) time series through the [phenocamr](https://github.com/bluegreen-labs/phenocamr) R package\n- the MODIS MCD12Q2 phenology product using the [MODISTools R package](http://onlinelibrary.wiley.com/doi/10.1002/ece3.1273/full)\n- the [Pan European Phenology Project (PEP725)](http://www.pep725.eu/) \n- the [USA National Phenology Network (USA-NPN)](https://www.usanpn.org/)\n- custom CSV based datasets\n\nWe refer to [Hufkens et al. (2018)](\nhttp://onlinelibrary.wiley.com/doi/10.1111/2041-210X.12970/full) for an in-depth description and worked example of the phenor R package. All code used to generate the referenced publication is provided in a [separate github repository](https://github.com/bluegreen-labs/phenor_manuscript). Please refer to this paper when using the package for modelling efforts. \n\nKeep in mind that some of the scripts will take a significant amount of time to finish. As such, some data generated for the manuscript is included in the [manuscript repository](https://github.com/bluegreen-labs/phenor_manuscript). Where available, some scripts generate figures and summary statistics from precompiled datasets rather than from clean runs. Furthermore, due to licensing issues no PEP725 data is included and some scripts will require proper login credentials for dependent code to function properly. 
Similarly, a download routine is not provided for the E-OBS data, to adhere to their data sharing policy and their request to register before downloading data.\n\n## Installation\n\n```diff\n- for the original package as described in the paper use release v1.0. Note that CMIP5 via NASA NEX has been deprecated in this release\n```\n\nTo install the latest stable release of the toolbox in R, run the following commands in an R terminal:\n\n```R\nif(!require(remotes)){install.packages(""remotes"")}\nremotes::install_github(""bluegreen-labs/phenor@v1.3.1"")\nlibrary(phenor)\n```\n\nThe development release can be installed by running\n\n```R\nif(!require(remotes)){install.packages(""remotes"")}\nremotes::install_github(""bluegreen-labs/phenor"")\nlibrary(phenor)\n```\n\nDownload a limited subset of the data described in Richardson et al. (2017) from github or clone the repository:\n\n```\ngit clone https://github.com/bluegreen-labs/phenocam_dataset.git\n```\n\nor download the full dataset from the [ORNL DAAC](https://daac.ornl.gov/cgi-bin/dsviewer.pl?ds_id=1511).\n\n## Use\n\nThe example code below shows how a modelling exercise can be set up in a few lines. You can either download your own data using the phenocamr package and format them correctly using **pr_fm_phenocam()**:\n\n```R\n# The command below downloads all time series for deciduous broadleaf\n# data at the bartlett PhenoCam site and estimates the\n# phenophases.\npr_dl_phenocam(vegetation = ""DB"",\n site = ""bartlett"",\n phenophase = TRUE)\n\n# process phenocam transition files into a consistent format\nphenocam_data <- pr_fm_phenocam(""/foo/bar/transition_dates/"")\n```\n\nAlternatively, you can use the included data, which consist of 370 site years of deciduous broadleaf forest sites from the PhenoCam 1.0 dataset (Richardson et al. 2017), precompiled with Daymet climate variables. Load the file using a standard **data()** call, or reference it directly (e.g. *phenocam_DB*).\n\n```R\n# load the included data using\ndata(""phenocam_DB"")\n```\n\n### Model development and parameter estimation\n\nThe gathered data can now be used in model calibration / validation. Currently, 17 models described by Basler (2016) are provided in the package: null, LIN, TT, TTs, PTT, PTTs, M1, M1s, PA, Pab, SM1, SM1b, SQ, SQb, UN, UM1, PM1 and PM1b. In addition, three spring grassland (pollen) phenology models (GR, SGSI and AGSI) are included, as described in Garcia-Mozo et al. (2009) and Xin et al. (2015). Finally, one autumn chilling degree day model (CDD, Jeong et al. 2012) is provided. Parameter values associated with the models are provided in a separate file included in the package but can be specified separately for model development.\n\n```R\n# comma separated parameter file included in the package\n# for all the included models, this file is used by default\n# in all optimization routines\npath <- sprintf(""%s/extdata/parameter_ranges.csv"",path.package(""phenor""))\npar_ranges <- read.table(path,\n header = TRUE,\n sep = "","")\n```\n\nYour own model development can be done by creating similar functions which take in the described data format and parameters. Below is the function call to optimize the model parameters for the *TT* (thermal time) model using Generalized Simulated Annealing (GenSA). 
Upper and lower constraints on the parameter space have to be provided; in the case of GenSA, initial parameters are estimated when `par = NULL`.\n\n```R\n# optimize model parameters\nset.seed(1234)\noptim.par <- pr_fit_parameters(par = NULL,\n data = phenocam_data,\n cost = rmse,\n model = ""TT"",\n method = ""GenSA"",\n lower = c(1,-5,0),\n upper = c(365,10,2000))\n```\n\nAfter a few minutes, optimal parameters are returned. We can now use these to calculate our modelled values and some accuracy metrics.\n\n```R\n# now run the model for all data in the nested list using the estimated parameters\nmodelled <- pr_predict(data = phenocam_data, par = optim.par$par)\n```\n\nEasy model calibration / validation can be achieved using the **model_calibration()** and **model_comparison()** functions, which allow either quick screening during model development or the comparison of a suite of models (using different starting parameters); a minimal sketch follows below.
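\nThe exact arguments of **model_comparison()** are not documented above, so the call in this sketch is an assumption for illustration only; consult the package help for the real signature:\n\n```R\n# sketch: compare a small suite of models on the included PhenoCam data\n# NOTE: argument names below are assumed, not taken from the package docs\ncomparison <- model_comparison(data = phenocam_DB,\n models = c(""TT"", ""PTT"", ""M1""),\n method = ""GenSA"")\n```\n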
I hope this gets you started.\n\n### Model projections and spatial data\n\nThe package allows you to download gridded CMIP5 forecast data from the NASA Earth Exchange global daily downscaled climate projections project. Hindcast data are provided for global NASA Earth Exchange, Berkeley Earth, E-OBS and gridded Daymet data (at 1/4th degree, 1 degree, 1/4th degree, 1km resolutions respectively).\n\n```R\n# running data on spatial data (determined by the class assigned by\n# various format_*() routines) will return a spatial object (raster map)\nmap <- pr_predict(data = spatial_data, par = optim.par$par)\n```\n\nAn example of NASA Earth Exchange CMIP5 output and gridded Daymet data is provided below.\n\n![](https://raw.githubusercontent.com/khufkens/phenor_manuscript/master/output/Figure_5_spatial_runs.png)\n*Overview map comparing various spatial outputs of the Thermal Time (TT) and Accumulated Growing Season Index (AGSI) model optimized to deciduous broadleaf and grassland PhenoCam data respectively. a) phenor model output of the difference in estimates of spring phenology between the year 2100 and 2011 for 1/4th degree NASA Earth Exchange (NEX) global gridded Coupled Model Intercomparison Project 5 (CMIP5) Mid-Resolution Institut Pierre Simon Laplace Climate Model 5 (IPSL-CM5A-MR) model runs using the TT model parameterized on deciduous forest PhenoCam sites. Only pixels with more than 50% deciduous broadleaf or mixed forest cover per 1/4th degree pixel, using MODIS MCD12Q1 land cover data, are shown; b) phenor model output of the difference in estimates of spring phenology between the year 2100 and 2011 for NEX CMIP5 IPSL-CM5A-MR model runs using the AGSI model parameterized on grassland PhenoCam sites. Only pixels with more than 50% grassland coverage per 1/4th degree pixel, using MODIS MCD12Q1 land cover data, are shown; c) phenor model output for 11 Daymet gridded datasets (tiles) for the year 2011.*\n\n## References\n\nHufkens K., Basler J.D., Milliman T., Melaas E., Richardson A.D. 2018. [An integrated phenology modelling framework in R: Phenology modelling with phenor. Methods in Ecology & Evolution](http://onlinelibrary.wiley.com/doi/10.1111/2041-210X.12970/full), 9: 1-10.\n\nRichardson, A.D., Hufkens, K., Milliman, T., Aubrecht, D.M., Chen, M., Gray, J.M., Johnston, M.R., Keenan, T.F., Klosterman, S.T., Kosmala, M., Melaas, E.K., Friedl, M.A., Frolking, S. 2018. [Tracking vegetation phenology across diverse North American biomes using PhenoCam imagery](https://www.nature.com/articles/sdata201828). Scientific Data, 5, 180028.\n\n## Acknowledgements\n\nThis project is supported by the National Science Foundation\xe2\x80\x99s Macro-system Biology Program (awards EF-1065029 and EF-1702697) and the Marie Sk\xc5\x82odowska-Curie Action (H2020 grant 797668). Logo design elements are taken from the FontAwesome library according to [these terms](https://fontawesome.com/license).\n'",,"2017/03/24, 21:40:25",2406,CUSTOM,28,589,"2023/02/17, 18:27:40",6,14,43,12,250,0,0.0,0.05783132530120483,"2022/05/11, 10:43:48",v1.3.1,0,6,false,,false,false,,,https://github.com/bluegreen-labs,http://bluegreenlabs.org,"Melsele, Belgium",,,https://avatars.githubusercontent.com/u/65854203?v=4,,, RBIEN,Tools for accessing the Botanical Information and Ecology Network database.,bmaitner,https://github.com/bmaitner/RBIEN.git,github,"r,biodiversity,ecology,botanical,plant,open-science,phylogeny,traits,range-maps,bien",Plants and Vegetation,"2023/10/16, 22:03:46",38,0,1,true,HTML,,,"HTML,R,CSS",http://bien.nceas.ucsb.edu/bien/,"b'# RBIEN\nTools for accessing the Botanical Information and Ecology Network (BIEN) database\n\n# News:\nBIEN is back up on CRAN.\n\n## Installing\nTo install the development version of BIEN from GitHub:\n\n```{r}\ndevtools::install_github(""bmaitner/RBIEN"")\n```\n'",,"2017/01/09, 18:32:11",2480,CUSTOM,28,245,"2023/10/03, 17:16:45",29,4,11,2,22,1,0.0,0.0304347826086957,"2023/01/06, 20:56:27",1.2.6,0,3,false,,false,false,,,,,,,,,,, rWCVP,A package for accessing and using plant name and distribution data from the World Checklist of Vascular Plants.,matildabrown,https://github.com/matildabrown/rWCVP.git,github,,Plants and Vegetation,"2023/08/04, 11:20:31",11,0,11,true,R,,,R,https://matildabrown.github.io/rWCVP/,"b'\n\n\n # rWCVP\n\n\n\n\n[![R-CMD-check](https://github.com/matildabrown/rWCVP/workflows/R-CMD-check/badge.svg)](https://github.com/matildabrown/rWCVP/actions)\n\n\n\n\nrWCVP is a package for accessing and using plant name and distribution\ndata from the [World Checklist of Vascular\nPlants](https://powo.science.kew.org/about-wcvp).\n\n## Installation\n\nYou can install the development version of rWCVP from\n[GitHub](https://github.com/) with:\n\n``` r\n# install.packages(""remotes"")\nremotes::install_github(""matildabrown/rWCVP"")\n```\n\n## Example\n\nrWCVP makes it easy to get and plot the known distribution of plant\nspecies.\n\n``` r\nlibrary(rWCVP)\n\ndistribution <- wcvp_distribution(""Myrcia guianensis"", taxon_rank=""species"")\n\n# global map\nwcvp_distribution_map(distribution)\n\n# zoomed-in map\nwcvp_distribution_map(distribution, crop_map=TRUE)\n```\n'",,"2022/03/29, 15:01:15",575,GPL-3.0,93,215,"2023/06/30, 17:07:44",10,18,50,24,117,1,2.5,0.4222222222222223,"2022/12/22, 09:12:45",v1.0.2,0,2,false,,false,false,,,,,,,,,,, kewr,"Meant to make accessing data from the Royal Botanic Gardens, Kew easier and to provide a consistent interface to their public APIs.",barnabywalker,https://github.com/barnabywalker/kewr.git,github,"r,package",Plants and Vegetation,"2022/06/30, 11:32:40",14,0,6,true,R,,,R,https://barnabywalker.github.io/kewr/,"b'\r\n\r\n\r\n# kewr\r\n\r\n\r\n\r\n[![R build\r\nstatus](https://github.com/barnabywalker/kewr/workflows/R-CMD-check/badge.svg)](https://github.com/barnabywalker/kewr/actions)\r\n\r\n\r\nAn R package to access data from RBG Kew\xe2\x80\x99s APIs.\r\n\r\n## Overview\r\n\r\nkewr is meant to make accessing data from RBG Kew easier and to\r\nprovide a consistent interface to their public APIs.\r\n\r\nThis package should cover:\r\n\r\n- 
[x] [World Checklist of Vascular\r\n Plants](https://wcvp.science.kew.org/)\r\n- [x] [Plants of the World Online](http://powo.science.kew.org/)\r\n- [x] [International Plant Names Index](https://www.ipni.org/)\r\n- [x] [Kew Names Matching Service](http://namematch.science.kew.org/)\r\n- [x] [Kew\xe2\x80\x99s Tree of Life](https://treeoflife.kew.org)\r\n- [x] [Kew Reconciliation\r\n Service](http://data1.kew.org/reconciliation/about/IpniName)\r\n\r\nNew sources will be added as they come up.\r\n\r\n## Installation\r\n\r\nkewr is not on CRAN yet but you can install the latest development\r\nversion from GitHub:\r\n\r\n``` r\r\n# install.packages(""devtools"")\r\ndevtools::install_github(""barnabywalker/kewr"")\r\n```\r\n\r\n## Usage\r\n\r\nFunctions in this package all start with a prefix specifying what action\r\nyou want to perform and end with a suffix referring to the resource.\r\n\r\nFour of the resources (POWO, WCVP, IPNI, and ToL) are databases storing\r\nflora, taxonomic, nomenclatural, or genetic information. These four\r\nresources each have a `search_*` and a `lookup_*` function.\r\n\r\n### Retrieving records\r\n\r\nThe `lookup_` functions can be used to retrieve a particular record by\r\nits unique IPNI ID:\r\n\r\n``` r\r\nlookup_powo(""320035-2"")\r\nlookup_wcvp(""320035-2"")\r\nlookup_ipni(""320035-2"")\r\n```\r\n\r\nIPNI contains records for authors and publications, which can also be\r\nretrieved using the `lookup_ipni` function:\r\n\r\n``` r\r\nlookup_ipni(""20885-1"", type=""author"")\r\nlookup_ipni(""987-2"", type=""publication"")\r\n```\r\n\r\nThe ToL uses its own ID system. These IDs can be found by first\r\nsearching the database.\r\n\r\n``` r\r\nlookup_tol(""2717"")\r\n```\r\n\r\n### Searching databases\r\n\r\nAll four of these databases can be searched as well:\r\n\r\n``` r\r\nsearch_powo(""Poa annua"")\r\nsearch_wcvp(""Poa annua"")\r\nsearch_ipni(""Poa annua"")\r\nsearch_tol(""Poa annua"")\r\n```\r\n\r\nAnd all, except the ToL, use filters and keywords for more advanced\r\nsearches:\r\n\r\n``` r\r\nsearch_powo(list(genus=""Poa"", distribution=""Madagascar""), \r\n filters=c(""accepted"", ""species""))\r\nsearch_wcvp(list(genus=""Poa""), filters=c(""accepted"", ""species""))\r\nsearch_ipni(list(genus=""Poa"", published=1920),\r\n filters=c(""species""))\r\n```\r\n\r\nThe number of search results returned is determined by the `limit`\r\nkeyword:\r\n\r\n``` r\r\nsearch_powo(list(genus=""Poa""), limit=20)\r\nsearch_wcvp(list(genus=""Poa""), limit=20)\r\nsearch_ipni(list(genus=""Poa""), limit=20)\r\nsearch_tol(""Poa"", limit=20)\r\n```\r\n\r\nThe next page for a set of search results can be requested using the\r\n`request_next` function:\r\n\r\n``` r\r\nresults <- search_powo(list(genus=""Poa""))\r\nrequest_next(results)\r\n```
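\r\nTo gather several pages into a single table you can combine\r\n`request_next` with `tidy` and `dplyr::bind_rows`. This is only a sketch,\r\nunder the assumptions that `request_next` returns the same kind of\r\nresults object as the original search and that `tidy` applies to WCVP\r\nresults as it does to POWO results below:\r\n\r\n``` r\r\n# sketch: page through WCVP results and stack them into one tibble\r\nlibrary(dplyr)\r\n\r\npage <- search_wcvp(list(genus=""Poa""), limit=50)\r\npages <- list(tidy(page))\r\n\r\nfor (i in 1:4) {\r\n page <- request_next(page)\r\n pages[[i + 1]] <- tidy(page)\r\n}\r\n\r\npoa <- bind_rows(pages)\r\n```\r\n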
\r\n### Loading data from ToL\r\n\r\nTree and gene data can be loaded directly from ToL into R.\r\n\r\nFor instance, you can load the whole Tree of Life.\r\n\r\n``` r\r\nload_tol()\r\n```\r\n\r\nOr a gene tree for a particular gene.\r\n\r\n``` r\r\ngene_info <- lookup_tol(""51"", type=""gene"")\r\nload_tol(gene_info$tree_file_url)\r\n```\r\n\r\nOr a FASTA file for a specimen.\r\n\r\n``` r\r\nspecimen_info <- lookup_tol(""1296"")\r\nload_tol(specimen_info$fasta_file_url)\r\n```\r\n\r\n### Downloading from the ToL\r\n\r\nThe corresponding files can also be downloaded for use later or in other\r\nprogrammes.\r\n\r\n``` r\r\nspecimen_info <- lookup_tol(""1296"")\r\ndownload_tol(specimen_info$fasta_file_url)\r\n```\r\n\r\n### Downloading the WCVP\r\n\r\nThe whole of WCVP can be downloaded to a directory using:\r\n\r\n``` r\r\ndownload_wcvp()\r\n```\r\n\r\n### Matching names\r\n\r\nThe KNMS resource is only used for matching names to records in\r\nPOWO/WCVP:\r\n\r\n``` r\r\nmatch_knms(c(""Poa annua"", ""Magnolia grandifolia"", ""Bulbophyllum sp.""))\r\n```\r\n\r\nSingle names can also be matched to IPNI using the KRS resource.\r\n\r\n``` r\r\nmatch_krs(""Poa annua"")\r\n```\r\n\r\nKRS is slower for matching many names, as a request needs to be made for\r\neach one. But it has the advantage of allowing more complex matching:\r\n\r\n``` r\r\nmatch_krs(list(genus=""Solanum"", species=""sanchez-vegae"", author=""S.Knapp""))\r\n```\r\n\r\n### Tidying results\r\n\r\nEach function in this package returns an object that stores the original\r\nresponse as well as the content of the response parsed into a list. This\r\nis to give the user as much flexibility as possible and to make\r\ndebugging things a bit easier.\r\n\r\nBut this can be hard to use, so all the results objects can be tidied as\r\na `tibble`:\r\n\r\n``` r\r\nresults <- search_powo(""Poa annua"")\r\ntidy(results)\r\n```\r\n\r\n## Citing\r\n\r\nYou can get information about how to cite `kewr` by using:\r\n\r\n``` r\r\ncitation(""kewr"")\r\n```\r\n\r\nYou can also get the citation to use for each data service using the\r\ndifferent results objects:\r\n\r\n r <- search_wcvp(""Poa"")\r\n kew_citation(r)\r\n'",,"2020/11/11, 14:44:21",1078,CUSTOM,0,141,"2023/05/15, 19:44:01",4,39,56,2,163,0,0.0,0.0,,,0,1,false,,false,false,,,,,,,,,,, Plants of the World Online,An international collaborative programme whose primary aim is to make available digitized data of the world's flora gathered from the past 250 years of botanical exploration and research.,RBGKew,https://github.com/RBGKew/powop.git,github,,Plants and Vegetation,"2022/05/13, 13:06:19",14,0,2,false,Java,"Royal Botanic Gardens, Kew",RBGKew,"Java,JavaScript,Handlebars,SCSS,Scheme,Vue,Python,Dockerfile,FreeMarker,Shell,Ruby,HTML",,"b'Plants of the World Online\n===\n\nPlants of the World Online Portal is a global, online, biodiversity information resource. This repository contains the code for the data model, harvester and web portal.\n\nThe POWO code powers:\n* [**Plants of the World Online**](http://powo.science.kew.org)\n* [**World Flora Online** (in development)](http://worldfloraonline.org/)\n\nDeveloping\n---\n\nThe easiest way to run the POWO app is using `docker-compose`. In order to do this you need to run the following to build images for the different modules:\n\n```\nmvn install\n```\n\nOnce this is done you can run the following to start the application:\n\n```\ndocker-compose up\n```\n\n### Initial setup\n\nOnce you have the application up and running you will need to load some data to interact with. To do this:\n\n1. Go to `http://localhost:10080/admin/#/organisations`\n2. Click the cog in the top right and log in with username `admin` and password `password`\n3. Click the cog in the top right and click import - select the `powo-harvest/local-development-data-configuration.json` as the file\n4. Go to `http://localhost:10080/admin/#/lists` and click the play icon next to the `Load everything` job [Note: if you want images in search results, first follow the Images instructions below]\n\nThis loads a subset of the full data onto your development machine. It can take several hours to complete.\n\n### Making changes\n\nIf any module dependencies have been updated (e.g. 
after running `git pull`) rebuild your local modules using:\n\n```\nmvn install\n```\n\nAfter making changes to a specific module you can rebuild just that module using the following commands (using the relevant Maven module e.g. `powo-portal`):\n\n```\nmvn prepare-package -pl powo-portal\nmvn package -pl powo-static\n```\n\nThen you can restart that service using the following command (using the relevant service name defined in `docker-compose.yaml` e.g. `portal`):\n\n```\ndocker-compose up portal\n```\n\nYou can run services that change infrequently in the background to simplify this process and make startup easier:\n\n```\ndocker-compose up -d db harvester solr geoserver geodb\n```\n\nThen restart only the services you need:\n\n```\ndocker-compose up portal ingress\n```\n\n\n#### Making changes to the frontend\n\nFirst, start all the docker services with `docker-compose up -d`.\n\nIf you make changes to the frontend handlebars templates you will need to rebuild `powo-portal`:\n\n```\nmvn package -pl powo-portal -Ddockerfile.skip\n```\n\nIf you are working mainly on the frontend JS or CSS, you can use the following command to start automatic asset recompilation and browser reload:\n\n```\n# If yarn not installed globally\nnpm i -g yarn\n\n# Then start automatic asset compilation\ncd powo-portal/src/main/frontend\nyarn dev\n\n# Connect to a different backend, for developing an alternative POWO site\nyarn dev --backend-port=20080\n```\n\n### Issues with services hanging\n\nIf the `portal` and `harvester` services are hanging and failing to start up, it can be an issue with an unreleased lock on a Liquibase managed table. This can happen when services are stopped and don\'t properly release the locks. To fix this you can run the following (assuming the database service is named `db`, as above):\n\n```\ndocker exec db mysql --user=powo --password=powo powo -e ""UPDATE DATABASECHANGELOGLOCK SET LOCKED=0, LOCKGRANTED=null, LOCKEDBY=null where ID=1;""\n```\n\nThis will reset the lock on the db; after that you can stop and restart your containers and they should be OK.\n\n\n### Images\n\nIn order for the app to load images correctly, it needs a CDN key. The steps to do this locally are:\n\n1. Get the CDN key from the POWO team\n2. Create a file called `.env` in the root of the project\n3. Add the following line, replacing `your_cdn_key` with the actual key:\n\n```\nCDN_KEY=your_cdn_key\n```\n\nDeployment\n===\n\nDeployment for POWO is managed via the [powo-infrastructure](https://gitlab.ad.kew.org/development/powop-infrastructure) repository. 
Please refer to that for all deployment documentation.\n'",,"2015/09/28, 10:26:17",2949,AGPL-3.0,0,3848,"2018/04/10, 11:11:12",0,2,2,0,2024,0,0.0,0.39233576642335766,,,0,7,false,,false,true,,,https://github.com/RBGKew,https://www.kew.org/science/who-we-are-and-what-we-do/departments/biodiversity-informatics-and-spatial-analysis,London,,,https://avatars.githubusercontent.com/u/6953174?v=4,,, dvm-dos-tem,"A process-based Dynamic Vegetation, Dynamic Organic Soil, Terrestrial Ecosystem Model.",uaf-arctic-eco-modeling,https://github.com/uaf-arctic-eco-modeling/dvm-dos-tem.git,github,"vegetation-dynamics,ecosystem-modeling,soil-thermal-dynamics,fire-dynamics",Plants and Vegetation,"2023/10/23, 22:10:07",19,0,5,true,Jupyter Notebook,,uaf-arctic-eco-modeling,"Jupyter Notebook,C++,Python,Shell,C,Dockerfile,JavaScript,Makefile,Batchfile,CSS",,"b'README for dvm-dos-tem\n===========================================\n\n[![DOI](https://zenodo.org/badge/4579979.svg)](https://zenodo.org/badge/latestdoi/4579979)\n[![Slack](https://img.shields.io/badge/slack-login-green.svg)](https://arctic-eco-modeling.slack.com) \n\nThe dvm-dos-tem (`dvmdostem`) model is a process-based bio-geo-chemical\necosystem model that focuses on C and N dynamics as well as soil thermal\ndynamics in high latitude ecosystems.\n\n[Complete Documentation](https://uaf-arctic-eco-modeling.github.io/dvm-dos-tem/index.html)\n\n> **What\'s with the name?**\n> \n> `dvm-dos-tem` is short for ""Dynamic Vegetation \\[Model\\] Dynamic Organic Soil\n> Terrestrial Ecosystem Model"". Originally the model\n> was simply ""TEM"", and as more logic and capabilities have been added, the name\n> has grown. We still frequently use simply ""TEM"" because it is less cumbersome\n> for writing and typing.\n\n\n> **Sept 2022**\n>\n> We are in the process of updating the entire documentation\n> system. There is still info scattered across this README, the wiki, a Google\n> Doc and the Sphinx system but we are working on consolidating the info into \n> primarily the Sphinx system.\n'",",https://zenodo.org/badge/latestdoi/4579979","2012/06/07, 01:26:42",4157,MIT,305,3291,"2023/10/23, 22:10:07",98,410,529,93,2,4,0.0,0.26233269598470366,"2023/06/14, 15:48:57",v0.7.0,0,18,false,,false,false,,,https://github.com/uaf-arctic-eco-modeling,,,,,https://avatars.githubusercontent.com/u/115209183?v=4,,, fgeo.biomass,Calculate biomass with allometric equations from the allodb package and ForestGEO data.,forestgeo,https://github.com/forestgeo/fgeo.biomass.git,github,,Biomass,"2019/06/05, 20:48:38",7,0,1,false,R,ForestGEO,forestgeo,R,https://forestgeo.github.io/fgeo.biomass,"b'\n\n\n# Calculate biomass\n\n[![lifecycle](https://img.shields.io/badge/lifecycle-experimental-orange.svg)](https://www.tidyverse.org/lifecycle/#experimental)\n[![Travis build\nstatus](https://travis-ci.org/forestgeo/fgeo.biomass.svg?branch=master)](https://travis-ci.org/forestgeo/fgeo.biomass)\n[![Coverage\nstatus](https://coveralls.io/repos/github/forestgeo/fgeo.biomass/badge.svg)](https://coveralls.io/r/forestgeo/fgeo.biomass?branch=master)\n[![CRAN\nstatus](https://www.r-pkg.org/badges/version/fgeo.biomass)](https://cran.r-project.org/package=fgeo.biomass)\n\nThe goal of fgeo.biomass is to calculate biomass using\n[ForestGEO](https://forestgeo.si.edu/) data and equations from either\nthe [BIOMASS package](https://CRAN.R-project.org/package=BIOMASS) or the\n[allodb package](https://forestgeo.github.io/allodb/).\n\n - The BIOMASS package is applicable to tropical forests. 
It was first\n [published on CRAN in 2016](https://cran.r-project.org/) and in\n [Methods in Ecology and Evolution\n in 2017](https://besjournals.onlinelibrary.wiley.com/doi/abs/10.1111/2041-210X.12753).\n fgeo.biomass provides the main features of BIOMASS with a simpler\n interface, consistent with all [fgeo\n packages](https://forestgeo.github.io/fgeo/).\n\n - The allodb package is a work in progress, and aims to provide\n expert-selected allometric equations, both for tropical and\n temperate forests. fgeo.biomass provides a simple interface to\n automate the process of finding the right equation(s) for each stem\n and computing biomass.\n\n## Installation\n\nInstall the development version of **fgeo.biomass** with:\n\n # install.packages(""devtools"")\n devtools::install_github(""forestgeo/fgeo.biomass"")\n\n## Setup\n\nIn addition to the fgeo.biomass package we will use dplyr and ggplot2\nfor data wrangling and plotting.\n\n``` r\nlibrary(ggplot2)\nlibrary(dplyr)\nlibrary(fgeo.biomass)\n```\n\n## fgeo.biomass wrapping BIOMASS\n\nWe\xe2\x80\x99ll use data from the [Barro Colorado Island,\nPanama](https://forestgeo.si.edu/sites/neotropics/barro-colorado-island)\n(BCI). We first pick alive trees and drop missing `dbh` values as we\ncan\xe2\x80\x99t calculate biomass for them.\n\n``` r\nbci_tree <- as_tibble(bciex::bci12t7mini) %>% \n filter(status == ""A"", !is.na(dbh))\nbci_tree\n#> # A tibble: 538 x 20\n#> treeID stemID tag StemTag sp quadrat gx gy MeasureID CensusID\n#> \n#> 1 858 1 0008~ """" apei~ 4402 899. 42 766 171\n#> 2 1129 1 0011~ """" quar~ 4308 867. 163. 995 171\n#> 3 2143 1 0021~ """" beil~ 3715 744 305. 1829 171\n#> 4 2388 10 0023~ 1 lueh~ 3622 724. 447. 2007 171\n#> 5 4448 1 0044~ """" sima~ 2321 477. 428. 3741 171\n#> 6 5877 1 0059~ """" quar~ 1303 280. 70.4 4800 171\n#> 7 6487 1 0065~ """" alse~ 1108 221. 178. 5226 171\n#> 8 8651 1 0105~ """" hyba~ 4811 974. 228. 6832 171\n#> 9 9480 1 0114~ """" fara~ 4814 977. 290 7373 171\n#> 10 10179 11 0121~ hyba~ 4819 979. 395. 7898 171\n#> # ... with 528 more rows, and 10 more variables: dbh , pom ,\n#> # hom , ExactDate , DFstatus , codes ,\n#> # nostems , date , status , agb \n```\n\nWe also need species data.\n\n``` r\nbci_species <- as_tibble(bciex::bci_species)\nbci_species\n#> # A tibble: 1,414 x 13\n#> sp Latin Genus Species Family SpeciesID SubspeciesID Authority\n#> \n#> 1 call~ Call~ Call~ laxa Fabac~ 131 1 (Benth.)~\n#> 2 pout~ Pout~ Pout~ glomer~ Sapot~ 811 2 (Miq.) R~\n#> 3 pout~ Pout~ Pout~ glomer~ Sapot~ 811 3 (Miq.) R~\n#> 4 prot~ Prot~ Prot~ tenuif~ Burse~ 828 4 (I.M. Jo~\n#> 5 soro~ Soro~ Soro~ pubive~ Morac~ 959 5 Hensl. \n#> 6 soro~ Soro~ Soro~ pubive~ Morac~ 959 6 Hensl. \n#> 7 swar~ Swar~ Swar~ simplex Fabac~ 980 7 (Raddi) ~\n#> 8 hibi~ Tali~ Tali~ tiliac~ Malva~ 997 9 (Arruda)~\n#> 9 quar~ Quar~ Quar~ astero~ Malva~ 871 10 (Pittier~\n#> 10 inga~ Inga~ Inga ciliata Fabac~ 1278 11 T.D.Penn.\n#> # ... 
with 1,404 more rows, and 5 more variables: IDLevel ,\n#> # syn , subsp , wsg , wsglevel \n```\n\n`add_tropical_biomass()` adds biomass to your census data.\n\n``` r\nbiomass <- add_tropical_biomass(bci_tree, bci_species)\n#> Guessing dbh in [mm].\n#> i You may provide the dbh unit manually via the argument`dbh_unit`.\n#> i Wood density given in [g/cm^3].\n#> Using \'Pantropical\' `region`.\n#> i Biomass is given in [kg].\n#> Adding new columns:\n#> family, genus, species, wd_level, wd_mean, wd_sd, biomass\nbiomass\n#> # A tibble: 538 x 27\n#> treeID stemID tag StemTag sp quadrat gx gy MeasureID CensusID\n#> \n#> 1 858 1 0008~ """" apei~ 4402 899. 42 766 171\n#> 2 1129 1 0011~ """" quar~ 4308 867. 163. 995 171\n#> 3 2143 1 0021~ """" beil~ 3715 744 305. 1829 171\n#> 4 2388 10 0023~ 1 lueh~ 3622 724. 447. 2007 171\n#> 5 4448 1 0044~ """" sima~ 2321 477. 428. 3741 171\n#> 6 5877 1 0059~ """" quar~ 1303 280. 70.4 4800 171\n#> 7 6487 1 0065~ """" alse~ 1108 221. 178. 5226 171\n#> 8 8651 1 0105~ """" hyba~ 4811 974. 228. 6832 171\n#> 9 9480 1 0114~ """" fara~ 4814 977. 290 7373 171\n#> 10 10179 11 0121~ hyba~ 4819 979. 395. 7898 171\n#> # ... with 528 more rows, and 17 more variables: dbh , pom ,\n#> # hom , ExactDate , DFstatus , codes ,\n#> # nostems , date , status , agb , family ,\n#> # genus , species , wd_level , wd_mean ,\n#> # wd_sd , biomass \n```\n\nYou may also provide a specific `region` or `latitude` and `longitude`.\n\n``` r\nbiomass <- add_tropical_biomass(\n bci_tree, \n bci_species,\n latitude = 9.154965, \n longitude = -79.845884\n)\n#> Guessing dbh in [mm].\n#> i You may provide the dbh unit manually via the argument`dbh_unit`.\n#> i Wood density given in [g/cm^3].\n#> Using `latitude` and `longitude` (ignoring `region`).\n#> i Biomass is given in [kg].\n#> Adding new columns:\n#> family, genus, species, wd_level, wd_mean, wd_sd, latitude, longitude, biomass\n\nbiomass %>% \n select(biomass, everything())\n#> # A tibble: 538 x 29\n#> biomass treeID stemID tag StemTag sp quadrat gx gy MeasureID\n#> \n#> 1 2397. 858 1 0008~ """" apei~ 4402 899. 42 766\n#> 2 1884. 1129 1 0011~ """" quar~ 4308 867. 163. 995\n#> 3 264. 2143 1 0021~ """" beil~ 3715 744 305. 1829\n#> 4 911. 2388 10 0023~ 1 lueh~ 3622 724. 447. 2007\n#> 5 961. 4448 1 0044~ """" sima~ 2321 477. 428. 3741\n#> 6 2473. 5877 1 0059~ """" quar~ 1303 280. 70.4 4800\n#> 7 570. 6487 1 0065~ """" alse~ 1108 221. 178. 5226\n#> 8 2.12 8651 1 0105~ """" hyba~ 4811 974. 228. 6832\n#> 9 16.0 9480 1 0114~ """" fara~ 4814 977. 290 7373\n#> 10 2.49 10179 11 0121~ hyba~ 4819 979. 395. 7898\n#> # ... with 528 more rows, and 19 more variables: CensusID ,\n#> # dbh , pom , hom , ExactDate , DFstatus ,\n#> # codes , nostems , date , status , agb ,\n#> # family , genus , species , wd_level ,\n#> # wd_mean , wd_sd , latitude , longitude \n```\n\n`propagate_errors()` allows you to propagate errors.\n\n``` r\nstr(\n propagate_errors(biomass)\n)\n#> List of 5\n#> $ meanAGB : num 20.9\n#> $ medAGB : num 20.6\n#> $ sdAGB : num 2.32\n#> $ credibilityAGB: Named num [1:2] 16.8 26.2\n#> ..- attr(*, ""names"")= chr [1:2] ""2.5%"" ""97.5%""\n#> $ AGB_simu : num [1:538, 1:1000] 1.49 1.907 0.219 1.487 1.125 ...\n#> ..- attr(*, ""dimnames"")=List of 2\n#> .. ..$ : NULL\n#> .. ..$ : chr [1:1000] ""203"" ""817"" ""977"" ""933"" ...\n```\n\n`model_height()` allows you to create a height model, which you can use\nto propagate height errors. 
This is what the entire pipeline looks like:\n\n``` r\nmodel <- model_height(bci_tree)\n#> i Using `method` log1 (other methods: log2, weibull, michaelis).\n\nerrors <- bci_tree %>% \n add_tropical_biomass(bci_species) %>% \n propagate_errors(height_model = model)\n#> Guessing dbh in [mm].\n#> i You may provide the dbh unit manually via the argument`dbh_unit`.\n#> i Wood density given in [g/cm^3].\n#> Using \'Pantropical\' `region`.\n#> i Biomass is given in [kg].\n#> Adding new columns:\n#> family, genus, species, wd_level, wd_mean, wd_sd, biomass\n#> Propagating errors on measurements of wood density.\n#> Propagating errors on measurements of height.\n\nstr(errors)\n#> List of 5\n#> $ meanAGB : num 21.6\n#> $ medAGB : num 21.4\n#> $ sdAGB : num 2.09\n#> $ credibilityAGB: Named num [1:2] 18.1 26.2\n#> ..- attr(*, ""names"")= chr [1:2] ""2.5%"" ""97.5%""\n#> $ AGB_simu : num [1:538, 1:1000] 2.506 0.881 0.376 1.277 1.019 ...\n```\n\nIf you pass `latitude` and `longitude` to `add_tropical_biomass()`, and\nthen you pass a `height_model` to `propagate_errors()`, then you will\nneed to ignore the coordinates. In an interactive session, you should\nsee something like this:\n\n![](https://i.imgur.com/dhHCYJN.png)\n\n``` r\nif (interactive()) {\n errors <- bci_tree %>% \n add_tropical_biomass(\n bci_species, \n latitude = 9.154965, \n longitude = -79.845884\n ) %>% \n propagate_errors(height_model = model)\n \n str(errors)\n}\n```\n\n`add_wood_density()` adds wood density to your census data. It is not\nlimited to tropical forests, and has support for all of these regions:\nAfricaExtraTrop, AfricaTrop, Australia, AustraliaTrop,\nCentralAmericaTrop, China, Europe, India, Madagascar, Mexico,\nNorthAmerica, Oceania, SouthEastAsia, SouthEastAsiaTrop,\nSouthAmericaExtraTrop, SouthAmericaTrop, and World.\n\n``` r\nwood_density <- add_wood_density(bci_tree, bci_species)\n#> i Wood density given in [g/cm^3].\n\nwood_density %>% \n select(starts_with(""wd_""), everything())\n#> # A tibble: 538 x 26\n#> wd_level wd_mean wd_sd treeID stemID tag StemTag sp quadrat gx\n#> \n#> 1 genus 0.255 0.0941 858 1 0008~ """" apei~ 4402 899.\n#> 2 species 0.454 0.0708 1129 1 0011~ """" quar~ 4308 867.\n#> 3 genus 0.563 0.0941 2143 1 0021~ """" beil~ 3715 744 \n#> 4 species 0.417 0.0708 2388 10 0023~ 1 lueh~ 3622 724.\n#> 5 species 0.383 0.0708 4448 1 0044~ """" sima~ 2321 477.\n#> 6 species 0.454 0.0708 5877 1 0059~ """" quar~ 1303 280.\n#> 7 species 0.536 0.0708 6487 1 0065~ """" alse~ 1108 221.\n#> 8 species 0.67 0.0708 8651 1 0105~ """" hyba~ 4811 974.\n#> 9 species 0.584 0.0708 9480 1 0114~ """" fara~ 4814 977.\n#> 10 species 0.67 0.0708 10179 11 0121~ hyba~ 4819 979.\n#> # ... with 528 more rows, and 16 more variables: gy ,\n#> # MeasureID , CensusID , dbh , pom , hom ,\n#> # ExactDate , DFstatus , codes , nostems ,\n#> # date , status , agb , family , genus ,\n#> # species \n```\n\nThe BIOMASS package provides a tool to correct taxonomic names.\nfgeo.biomass does not include that feature. You may use BIOMASS directly\nor the more focused [taxize\npackage](https://cran.r-project.org/web/packages/taxize/taxize.pdf).\n\n## fgeo.biomass wrapping allodb\n\n## Warning\n\nThese features are not ready for research. We are now building a\n[Minimum Viable\nProduct](https://en.wikipedia.org/wiki/Minimum_viable_product), with\njust enough features to collect feedback from alpha users and redirect\nour effort. The resulting biomass is still meaningless.\n\nWe\xe2\x80\x99ll use `add_biomass()` with these inputs:\n\n1. 
A ForestGEO-like *stem* or *tree* table.\n2. A *species* table (internally used to look up the Latin species\n names from the species codes in the `sp` column of the census\n table).\n\nWe\xe2\x80\x99ll use data from the [Smithsonian Conservation Biology Institute,\nUSA](https://forestgeo.si.edu/sites/north-america/smithsonian-conservation-biology-institute)\n(SCBI). We first pick alive trees and drop missing `dbh` values as we\ncan\xe2\x80\x99t calculate biomass for them.\n\n``` r\ncensus <- fgeo.biomass::scbi_tree1 %>% \n filter(status == ""A"", !is.na(dbh))\n\ncensus\n#> # A tibble: 30,050 x 20\n#> treeID stemID tag StemTag sp quadrat gx gy DBHID CensusID\n#> \n#> 1 1 1 10079 1 libe 0104 3.70 73 1 1\n#> 2 2 2 10168 1 libe 0103 17.3 58.9 3 1\n#> 3 3 3 10567 1 libe 0110 9 197. 5 1\n#> 4 4 4 12165 1 nysy 0122 14.2 428. 7 1\n#> 5 5 5 12190 1 havi 0122 9.40 436. 9 1\n#> 6 6 6 12192 1 havi 0122 1.30 434 13 1\n#> 7 8 8 12261 1 libe 0125 18 484. 17 1\n#> 8 9 9 12456 1 vipr 0130 18 598. 19 1\n#> 9 10 10 12551 1 astr 0132 5.60 628. 22 1\n#> 10 11 11 12608 1 astr 0132 13.3 623. 24 1\n#> # ... with 30,040 more rows, and 10 more variables: dbh , pom ,\n#> # hom , ExactDate , DFstatus , codes ,\n#> # nostems , date , status , agb \n```\n\nWe now use `add_biomass()` to add biomass to our census dataset.\n\n``` r\nspecies <- fgeo.biomass::scbi_species\n\nwith_biomass <- census %>% \n add_biomass(species, site = ""SCBI"")\n#> Guessing dbh in [mm].\n#> i You may provide the dbh unit manually via the argument`dbh_unit`.\n#> i biomass values are given in [kg].\n#> Guessing dbh in [mm].\n#> i You may provide the dbh unit manually via the argument`dbh_unit`.\n#> Matching equations by site and species.\n#> Refining equations according to dbh.\n#> Using generic equations where expert equations can\'t be found.\n#> Warning: Can\'t find equations matching these species:\n#> acer sp, carya sp, crataegus sp, fraxinus sp, quercus sp, ulmus sp, unidentified unk\n#> Warning: Can\'t find equations for 15028 rows (inserting `NA`).\n#> Warning: Detected a single stem per tree. Do you need a multi-stem table?\n#> Warning: * For trees, `biomass` is that of the main stem.\n#> Warning: * For shrubs, `biomass` is that of the entire shrub.\n#> Adding new columns:\n#> rowid, species, site, biomass\n```\n\nWe are warned that we are using a tree-table (as opposed to a\nstem-table), and informed about how to interpret the resulting `biomass`\nvalues for trees and shrubs.\n\nSome equations couldn\xe2\x80\x99t be found. There may be two reasons:\n\n - Some stems in the data belong to species with no matching species in\n allodb.\n - Some stems in the data belong to species that do match species in\n allodb but the available equations were designed for a dbh range\n that doesn\xe2\x80\x99t include actual dbh values in the data.\n\nHere are the most interesting columns of the result:\n\n``` r\nwith_biomass %>% \n select(treeID, species, biomass)\n#> # A tibble: 30,050 x 3\n#> treeID species biomass\n#> \n#> 1 1 lindera benzoin NA \n#> 2 2 lindera benzoin NA \n#> 3 3 lindera benzoin NA \n#> 4 4 nyssa sylvatica 58.5 \n#> 5 5 hamamelis virginiana 17.6 \n#> 6 6 hamamelis virginiana 0.400\n#> 7 8 lindera benzoin 5.69 \n#> 8 9 viburnum prunifolium NA \n#> 9 10 asimina triloba NA \n#> 10 11 asimina triloba NA \n#> # ... 
with 30,040 more rows\n```\n\nLet\xe2\x80\x99s now visualize the relationship between `dbh` and `biomass` by\n`species` (black points), in comparison with `agb` (above ground\nbiomass) values calculated with allometric equations for tropical trees\n(grey points).\n\n``` r\nwith_biomass %>% \n # Convert agb from [Mg] to [kg]\n mutate(agb_kg = agb * 1e3) %>% \n ggplot(aes(x = dbh)) +\n geom_point(aes(y = agb_kg), size = 1.5, color = ""grey"") +\n geom_point(aes(y = biomass), size = 1, color = ""black"") +\n facet_wrap(""species"", ncol = 4) +\n ylab(""Reference `agb` (grey) and calculated `biomass` (black) in [kg]"") +\n xlab(""dbh [mm]"") +\n theme_bw()\n#> Warning: Removed 15028 rows containing missing values (geom_point).\n```\n\n![](man/figures/README-unnamed-chunk-14-1.png)\n\nAbove, the species for which `biomass` couldn\xe2\x80\x99t be calculated show no\nblack points, although they do show grey reference points.\n\nTo better understand the distribution of `biomass` values for each\nspecies we can use a box-plot.\n\n``` r\nwith_biomass %>% \n ggplot(aes(species, biomass)) +\n geom_boxplot() +\n ylab(""biomass [kg]"") +\n coord_flip()\n#> Warning: Removed 15028 rows containing non-finite values (stat_boxplot).\n```\n\n![](man/figures/README-unnamed-chunk-15-1.png)\n\nFor some species the maximum `dbh` for which `biomass` was calculated is\nmuch lower than the maximum `dbh` value for which the reference `agb`\nwas calculated. This is because most equations in **allodb** are defined\nfor a specific range of `dbh` values. Eventually **allodb** might\nprovide equations beyond the `dbh` limits currently available.\n\nTo explore this issue, here we use `add_component_biomass()` which\nallows us to see intermediary results that `add_biomass()` doesn\xe2\x80\x99t show.\n\n``` r\ndetailed_biomass <- suppressWarnings(suppressMessages(\n add_component_biomass(census, species, site = ""SCBI"")\n))\n#> Guessing dbh in [mm].\n#> i You may provide the dbh unit manually via the argument`dbh_unit`.\n#> i biomass values are given in [kg].\n#> Guessing dbh in [mm].\n#> i You may provide the dbh unit manually via the argument`dbh_unit`.\n\n# Maximum `dbh` values by species\nmax_by_species <- detailed_biomass %>% \n select(species, dbh_max_mm) %>% \n group_by(species) %>% \n arrange(desc(dbh_max_mm)) %>% \n filter(row_number() == 1L) %>% \n ungroup()\n\n# `dbh` is above the maximum limit, so `biomass` is missing (agb has a value)\ndetailed_biomass %>% \n filter(dbh > 1000) %>% \n select(-dbh_max_mm) %>% \n left_join(max_by_species) %>% \n mutate(agb_kg = agb * 1e3) %>%\n select(species, biomass, agb, dbh, dbh_max_mm) %>% \n arrange(species) %>%\n print(n = Inf)\n#> Joining, by = ""species""\n#> # A tibble: 23 x 5\n#> species biomass agb dbh dbh_max_mm\n#> \n#> 1 fagus grandifolia NA 13.7 1030. 890\n#> 2 fraxinus americana NA 14.2 1053. 550\n#> 3 liriodendron tulipifera NA 8.24 1012. 650\n#> 4 liriodendron tulipifera NA 11.2 1159. 650\n#> 5 liriodendron tulipifera NA 10.3 1118. 650\n#> 6 liriodendron tulipifera NA 10.6 1135. 650\n#> 7 liriodendron tulipifera NA 8.48 1025. 650\n#> 8 liriodendron tulipifera NA 15.9 1365. 650\n#> 9 liriodendron tulipifera NA 8.12 1006. 650\n#> 10 liriodendron tulipifera NA 11.5 1173. 650\n#> 11 liriodendron tulipifera NA 11.5 1174. 650\n#> 12 liriodendron tulipifera NA 9.02 1054 650\n#> 13 liriodendron tulipifera NA 13.9 1280. 650\n#> 14 quercus alba NA 15.0 1018. 890\n#> 15 quercus rubra NA 27.7 1418. 890\n#> 16 quercus rubra NA 28.2 1432. 
890\n#> 17 quercus rubra NA 25.5 1366. 890\n#> 18 quercus rubra NA 17.3 1143. 890\n#> 19 quercus rubra NA 21.9 1272. 890\n#> 20 quercus velutina NA 16.1 1107 890\n#> 21 quercus velutina NA 26.6 1393. 890\n#> 22 quercus velutina NA 15.6 1092. 890\n#> 23 quercus velutina NA 31.6 1511. 890\n```\n\n## Biomass via BIOMASS versus allodb\n\n``` r\ntemperate_biomass <- add_biomass(census, species, site = ""scbi"")\n#> Guessing dbh in [mm].\n#> i You may provide the dbh unit manually via the argument`dbh_unit`.\n#> i biomass values are given in [kg].\n#> Guessing dbh in [mm].\n#> i You may provide the dbh unit manually via the argument`dbh_unit`.\n#> Matching equations by site and species.\n#> Refining equations according to dbh.\n#> Using generic equations where expert equations can\'t be found.\n#> Warning: Can\'t find equations matching these species:\n#> acer sp, carya sp, crataegus sp, fraxinus sp, quercus sp, ulmus sp, unidentified unk\n#> Warning: Can\'t find equations for 15028 rows (inserting `NA`).\n#> Warning: Detected a single stem per tree. Do you need a multi-stem table?\n#> Warning: * For trees, `biomass` is that of the main stem.\n#> Warning: * For shrubs, `biomass` is that of the entire shrub.\n#> Adding new columns:\n#> rowid, species, site, biomass\n\n# Warning: Applying tropical equations to a temperate forest for comparison\ntropical_biomass <- add_tropical_biomass(census, species)\n#> Guessing dbh in [mm].\n#> i You may provide the dbh unit manually via the argument`dbh_unit`.\n#> i Wood density given in [g/cm^3].\n#> Using \'Pantropical\' `region`.\n#> i Biomass is given in [kg].\n#> Adding new columns:\n#> family, genus, species, wd_level, wd_mean, wd_sd, biomass\n\ndbh_biomass <- tibble(\n dbh = temperate_biomass$dbh,\n species = temperate_biomass$species,\n temperate_biomass = temperate_biomass$biomass, \n tropical_biomass = tropical_biomass$biomass\n)\n```\n\n``` r\ndbh_biomass %>% \n ggplot(aes(x = dbh)) +\n geom_point(aes(y = tropical_biomass), size = 1.5, color = ""grey"") +\n geom_point(aes(y = temperate_biomass), size = 1) +\n facet_wrap(""species"", ncol = 4) +\n ylab(""Biomass [kg] (via the BIOMASS (grey) and allodb (black) packages)"") +\n xlab(""dbh [mm]"") +\n theme_bw()\n#> Warning: Removed 15028 rows containing missing values (geom_point).\n```\n\n![](man/figures/README-unnamed-chunk-18-1.png)\n\n## General information\n\n - [Getting help](SUPPORT.md).\n - [Contributing](CONTRIBUTING.md).\n - [Contributor Code of Conduct](CODE_OF_CONDUCT.md).\n'",,"2018/11/10, 13:40:01",1810,GPL-3.0,0,226,"2019/05/09, 17:15:18",10,7,27,0,1630,1,0.0,0.013513513513513487,"2019/02/21, 15:48:51",0.0.0.9001,0,2,false,,true,true,,,https://github.com/forestgeo,http://www.forestgeo.si.edu/,Smithsonian Tropical Research Institute,,,https://avatars.githubusercontent.com/u/25665726?v=4,,, BIOMASS,An R package for estimating aboveground biomass and its uncertainty in tropical forests.,umr-amap,https://github.com/umr-amap/BIOMASS.git,github,,Biomass,"2023/10/02, 13:34:10",19,0,4,true,HTML,UMR AMAP,umr-amap,"HTML,R",,"b'BIOMASS\n================\n\n - [The package](#the-package)\n - [Citation](#citation)\n - [Install BIOMASS](#install-biomass)\n\n## The package\n\nR package for estimating aboveground biomass and its uncertainty in\ntropical forests.\n\nContains functions to estimate aboveground biomass/carbon and its\nuncertainty in tropical forests. These functions allow you to:\n\n1. retrieve and correct the taxonomy;\n2. estimate the wood density and its uncertainty;\n3. construct height-diameter models;\n4. manage tree and plot coordinates;\n5. estimate the aboveground biomass/carbon at the stand level with\n associated uncertainty.\n
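\nAs a quick orientation, these steps chain together into a single workflow. The sketch below uses the example datasets shipped with BIOMASS (NouraguesHD, KarnatakaForest); the calls are indicative only, so check `?modelHD`, `?getWoodDensity`, `?retrieveH` and `?computeAGB` for the authoritative signatures:\n\n``` r\n# sketch: from raw census columns to per-tree aboveground biomass\nlibrary(BIOMASS)\n\ndata(NouraguesHD) # height-diameter calibration data\ndata(KarnatakaForest) # census data with genus, species and D (dbh, cm)\n\n# step 3: fit a height-diameter model\nHDmodel <- modelHD(D = NouraguesHD$D, H = NouraguesHD$H, method = ""log2"")\n\n# step 2: wood density (and its uncertainty) from genus/species\nWD <- getWoodDensity(genus = KarnatakaForest$genus,\n species = KarnatakaForest$species)\n\n# step 5: aboveground biomass per tree (Mg), ready for stand-level sums\nH <- retrieveH(D = KarnatakaForest$D, model = HDmodel)$H\nAGB <- computeAGB(D = KarnatakaForest$D, WD = WD$meanWD, H = H)\n```\n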
\n## Citation\n\nTo cite \'BIOMASS\', please run `citation(""BIOMASS"")`.\n\n## Install BIOMASS\n\nThe latest released version from CRAN:\n\n``` r\ninstall.packages(""BIOMASS"")\n```\n\nThe latest version from GitHub (in development):\n\n``` r\ninstall.packages(""remotes"")\nremotes::install_github(\'umr-amap/BIOMASS\')\n```\n\nTo use it:\n\n``` r\nlibrary(""BIOMASS"")\n```\n'",,"2018/09/03, 07:58:57",1878,GPL-3.0,32,555,"2023/08/24, 12:32:14",10,8,27,2,62,0,0.125,0.23228346456692917,"2020/10/22, 10:35:26",v2.1.4,0,6,false,,true,true,,,https://github.com/umr-amap,http://amap.cirad.fr,"Montpellier, France",,,https://avatars.githubusercontent.com/u/66062028?v=4,,, carbon budget,"This model maps gross greenhouse gas emissions from forests between 2001 and 2015, gross carbon removals by forests over the same period, and the difference between them (net flux).",wri,https://github.com/wri/carbon-budget.git,github,,Biomass,"2023/07/03, 19:38:12",65,0,19,true,Python,World Resources Institute,wri,"Python,C++,Shell,Batchfile,Dockerfile",,"b'## Global forest carbon flux framework\n\n### Purpose and scope\nThis framework maps gross greenhouse gas emissions from forests, \ngross carbon removals (sequestration) by forests, and the difference between them (net flux), all between 2001 and 2022. \nGross emissions include CO2, CH4, and N2O and all carbon pools (aboveground biomass, belowground biomass, \ndead wood, litter, and soil), and gross removals include removals into aboveground and belowground biomass carbon. \nAlthough the framework is run for all tree canopy densities in 2000 (per Hansen et al. 2013), it is most relevant to\npixels with canopy density >30% in 2000 or pixels which subsequently had tree cover gain (per Potapov et al. 2022).\nIn addition to natural terrestrial forests, it also covers planted forests in most of the world, mangroves, and non-mangrove natural forests.\nThe framework essentially spatially applies IPCC national greenhouse gas inventory rules (2016 guidelines) for forests.\nIt covers only forests converted to non-forests, non-forests converted to forests and forests remaining forests (no other land \nuse transitions). The framework is described and published in [Harris et al. (2021) Nature Climate Change\n""Global maps of twenty-first century forest carbon fluxes""](https://www.nature.com/articles/s41558-020-00976-6).\nAlthough the original manuscript covered 2001-2019, the same methods were used to update the framework to include 2022, \nwith a few changes to some input layers and constants. You can read about the changes since publication \n[here](https://www.globalforestwatch.org/blog/data-and-research/whats-new-carbon-flux-monitoring).\n\n### Inputs\nWell over twenty inputs are needed for this framework. Most are spatial, but some are tabular.\nAll spatial data are converted to 10x10 degree raster tiles at 0.00025x0.00025 degree resolution \n(approximately 30x30 m at the equator) before ingestion. \nSpatial data include annual tree cover loss, biomass densities in 2000, drivers of tree cover loss, \necozones, tree cover extent in 2000, elevation, etc. 
\nMany inputs can be processed the same way (e.g., many rasters can be processed using the same `gdal` function) but some need special treatment.\nThe input processing scripts are mostly in the `data_prep` folder but a few are unfortunately in other folders. \nThe tabular data are generally annual biomass removal (i.e. \nsequestration) factors (e.g., mangroves, planted forests, natural forests), which are then applied to spatial data. \nDifferent inputs are needed for different steps in the framework. \n\nInputs can either be downloaded from AWS s3 storage or used if found locally in the folder `/usr/local/tiles/` in the Docker container\nin which the framework runs (see below for more on the Docker container).\nThe framework looks for files locally before downloading them in order to reduce run time. \nThe framework can still be run without AWS credentials; inputs will be downloaded from s3 but outputs will not be uploaded to s3.\nIn that case, outputs will only be stored locally.\n\nA complete list of inputs, including changes made to the framework, can be found \n[here](http://gfw2-data.s3.amazonaws.com/climate/carbon_model/Table_S3_data_sources__updated_20230406.pdf).\n\n### Outputs\nThere are three key outputs produced: gross GHG emissions, gross removals, and net flux, all summed per pixel for 2001-2022. \nThese are produced at two resolutions: 0.00025x0.00025 degrees \n(approximately 30x30 m at the equator) in 10x10 degree rasters (to make outputs a \nmanageable size), and 0.04x0.04 degrees (approximately 4x4km at the equator) as global rasters for static maps.\n\nFramework runs also automatically generate a .txt log. This log includes nearly everything that is output in the console.\nThis log is useful for documenting framework runs and checking for mistakes/errors in retrospect, \nalthough it does not capture errors that terminate runs.\nFor example, users can examine it to see if the correct input tiles were downloaded or if the intended tiles were used when running the framework. \n\nOutput rasters and logs are uploaded to s3 unless the `--no-upload` flag (`-nu`) is activated as a command line argument\nor no AWS s3 credentials are supplied to the Docker container.\nThis is good for local test runs or versions of the framework that are independent of s3 \n(that is, inputs are stored locally and not on s3, and the user does not have a connection to s3 storage or s3 credentials).\n\n#### 30-m output rasters\n\nThe 30-m outputs are used for zonal statistics (i.e. emissions, removals, or net flux in polygons of interest)\nand mapping on the Global Forest Watch web platform or at small scales (where 30-m pixels can be distinguished). \nIndividual emissions pixels can be assigned specific years based on Hansen loss during further analyses \nbut removals and net flux are cumulative over the entire framework run and cannot be assigned specific years. \nThis 30-m output is in megagrams (Mg) CO2e/ha 2001-2022 (i.e. 
densities) and includes all tree cover densities (""full extent""):\n`((TCD2000>0 AND WHRC AGB2000>0) OR Hansen gain=1 OR mangrove AGB2000>0)`.\nHowever, the framework is designed to be used specifically for forests, so the framework creates three derivative 30-m\noutputs for each key output (gross emissions, gross removals, net flux) as well (only for the standard version, not for sensitivity analyses).\nTo that end, the ""forest extent"" rasters also have pre-2000 oil palm plantations in Indonesia and Malaysia removed\nfrom them because carbon emissions and removals in those pixels would represent agricultural/tree crop emissions,\nnot forest/forest loss. \n\n1) Mg CO2e per pixel values for the full extent (all tree cover densities): \n `((TCD2000>0 AND WHRC AGB2000>0) OR Hansen gain=1 OR mangrove AGB2000>0)`\n2) Mg CO2e per hectare values for forest pixels only (colloquially, TCD>30 or Hansen gain pixels): \n `(((TCD2000>30 AND WHRC AGB2000>0) OR Hansen gain=1 OR mangrove AGB2000>0) NOT IN pre-2000 plantations)`\n3) Mg CO2e per pixel values for forest pixels only (colloquially, TCD>30 or Hansen gain pixels): \n `(((TCD2000>30 AND WHRC AGB2000>0) OR Hansen gain=1 OR mangrove AGB2000>0) NOT IN pre-2000 plantations)`\n\nThe per hectare outputs are used for making pixel-level maps (essentially showing emission and removal factors), \nwhile the per pixel outputs are used for getting total values within areas because the values\nof those pixels can be summed within areas of interest. The per pixel maps are calculated by `per hectare * pixel area/10000`.\n(The pixels of the per hectare outputs should not be summed but they can be averaged in areas of interest.)\nStatistics from this framework should always be based on the ""forest extent"" rasters, not the ""full extent"" rasters.\nThe full extent outputs should generally not be used but are created by the framework in case they are needed.\n\nIn addition to these three key outputs, there are many intermediate output rasters from the framework,\nsome of which may be useful for QC, analyses by area of interest, or other purposes. \nAll of these are at 0.00025x0.00025 degree resolution and reported as per hectare values (as opposed to per pixel values), if applicable. \nIntermediate outputs include the annual aboveground and belowground biomass removal rates\nfor all kinds of forests, the type of removal factor applied to each pixel, the carbon pool densities in 2000, \ncarbon pool densities in the year of tree cover loss, and the number of years in which removals occurred. \n\nAlmost all framework output have metadata associated with them, \nviewable using the `gdalinfo` command line utility (https://gdal.org/programs/gdalinfo.html). \nMetadata includes units, date created, framework version, geographic extent, and more. Unfortunately, the metadata are not viewable \nwhen looking at file properties in ArcMap\nor in the versions of these files downloadable from the Global Forest Watch Open Data Portal (https://data.globalforestwatch.org/).\n\n#### 4-km output rasters\n\nThe 4-km outputs are used for static large-scale maps, like in publications and presentations. \nThe units are Mt CO2e/pixel/year (in order to show absolute values). They are created using the ""forest extent"" \nper pixel 30-m rasters, not the ""full extent"" 30-m rasters. They should not be used for analysis. 
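\n\nAs a worked illustration of the per-hectare to per-pixel conversion described above for the 30-m rasters: a 0.00025x0.00025 degree pixel at the equator covers roughly 30 m x 30 m = 900 m^2, so a pixel with a gross removals density of 100 Mg CO2e/ha converts to 100 * 900 / 10000 = 9 Mg CO2e for that pixel. Pixel area shrinks away from the equator, which is why the conversion multiplies each pixel by its own area rather than by a constant.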
\n\n#### A note on signs\n\nAlthough gross emissions are traditionally given positive (+) values and\ngross removals are traditionally given negative (-) values, \nthe 30-m gross removals rasters are positive, while the 4-km gross removals rasters are negative. \nNet flux at both scales can be positive or negative depending on the balance of emissions and removals in the area of interest\n(negative for net sink, positive for net source).\n\n\n### Running the framework\nThe framework runs from the command line inside a Linux Docker container. \nOnce you have Docker configured on your system (download from the Docker website), \nhave cloned this repository (on the command line in the folder you want to clone to, `git clone https://github.com/wri/carbon-budget`), \nand have configured access to AWS (if desired), you will be able to run the framework. \nYou can run the framework anywhere that the Docker container can be launched. That includes local computers (good for \nrunning test areas) and AWS ec2 instances (good for larger areas/global runs). \n\nThere are two ways to run the framework: as a series of individual scripts, or from a master script, which runs the individual scripts sequentially.\nWhich one to use depends on what you are trying to do. \nGenerally, the individual scripts (which correspond to specific framework stages) are\nmore appropriate for development and testing, while the master script is better for running\nthe main part of the framework from start to finish in one go. \nRun globally, both options iterate through a list of ~275 10 x 10 degree tiles. (Different framework stages have different numbers of tiles.)\nRun all tiles in the framework extent fully through one framework stage before starting on the next stage. \n(The master script does this automatically.) If a user wants to run the framework on just one or a few tiles, \nthat can be done through a command line argument (`--tile-id-list` or `-l`). \nIf individual tiles are listed, only those will be run. This is a natural system for testing or for\nrunning the framework for smaller areas. You can see the tile boundaries in `pixel_area_tile_footprints.zip` in this repo.\nFor example, to run the framework for Madagascar, only tiles 10S_040E, 10S_050E, and 20S_040E need to be run and the\ncommand line argument would be `-l 10S_040E,10S_050E,20S_040E`. \n\n#### Building the Docker container\n\nRun the following on the command line in the folder into which you cloned the repository.\nThis will open a command line inside the Docker container.\n\nFor runs on a local computer, use `docker-compose` so that the Docker container is mapped to your computer\'s drives.\nIn my setup, `C:/GIS/Carbon_model/test_tiles/docker_output/` on my computer is mapped to `/usr/local/tiles` in\nthe Docker container in `docker-compose.yaml`. If running on another computer, you will need to change the local \nfolder being mapped in `docker-compose.yaml` to match your computer\'s directory structure. \nI do this for development and testing. \nIf you want the framework to be able to download from and upload to s3, you will also need to provide \nyour own AWS secret key and access key as environment variables (`-e`) in the `docker-compose run` command:\n\n`docker-compose build`\n\n`docker-compose run --rm -e AWS_SECRET_ACCESS_KEY=... -e AWS_ACCESS_KEY_ID=... carbon-budget`\n\nIf you don\'t have AWS credentials, you can still run the framework in the Docker container but uploads will \nnot occur. 
In this situation, you need all the basic input files for all tiles in the Docker folder `/usr/local/tiles/`\non your computer:\n\n`docker-compose build`\n\n`docker-compose run --rm carbon-budget`\n\nFor runs on an AWS r5d ec2 instance (for full framework runs), use `docker build`. \nYou need to supply AWS credentials for the framework to work because otherwise you won\'t be able to get \noutput tiles off of the spot machine and you will lose your outputs when you terminate the spot machine.\n\n`docker build . -t gfw/carbon-budget`\n\n`docker run --rm -it -e AWS_SECRET_ACCESS_KEY=... -e AWS_ACCESS_KEY_ID=... gfw/carbon-budget`\n\nBefore doing a framework run, confirm that the dates of the relevant input and output s3 folders are correct in `constants_and_names.py`. \nDepending on what exactly is being run, the user may have to change many s3 folder dates or none at all.\nUnfortunately, I can\'t really give better guidance than that; it really depends on what part of the framework is being run and how.\n(I want to make the situations under which users change folder dates more consistent eventually.)\n\nThe framework can be run either using multiple processors or one processor. The former is for large scale framework runs,\nwhile the latter is for framework development or running on small-ish countries that use only a few tiles. \nThe user can limit use to just one processor with the `-sp` command line flag. \nIf a user tries to use too many processors, the system will run out of memory and\ncan crash (particularly on AWS ec2 instances), so it is important not to use too many processors at once.\nGenerally, the limitation in running the framework is the amount of memory available on the system rather than the number of processors.\nEach script has been somewhat calibrated to use a safe number of processors for an r5d.24xlarge EC2 instance,\nand often the number of processors being used is 1/2 or 1/3 of the actual number available.\nIf the tiles were smaller (e.g., 1x1 degree), more processors could be used but then there\'d also be more tiles to process, so I\'m not sure that would be any faster.\nUsers can track memory usage in real time using the `htop` command line utility in the Docker container. \n\n\n#### Individual scripts\nThe flux framework is composed of many separate scripts (or stages), each of which can be run separately and\nhas its own inputs and output(s). There are several data preparation\nscripts, several for the removals (sequestration/gain) framework, a few to generate carbon pools, one for calculating\ngross emissions, one for calculating net flux, one for creating derivative outputs \n(aggregating key results into coarser resolution rasters for mapping and creating per-pixel and forest-extent outputs). \nEach script has two parts: its `mp_` (multiprocessing) part and the part that actually does the calculations\non each 10x10 degree tile.\nThe `mp_` scripts (e.g., `mp_create_model_extent.py`) are the ones that are run. They download input files,\ndo any needed preprocessing, change output folder names as needed, list the tiles that are going to be run, etc.,\nthen initiate the actual work done on each tile in the script without the `mp_` prefix (see the schematic sketch below).\nThe order in which the individual stages must be run is very specific; many scripts depend on\nthe outputs of other scripts. 
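\n\nTo make the `mp_` script/worker split concrete, here is a schematic sketch of the pattern (module, function, and tile names are illustrative only, not the framework\'s actual code):\n\n```python\n# Schematic sketch of the mp_ driver / per-tile worker pattern described above.\nfrom multiprocessing import Pool\n\ndef process_tile(tile_id):\n    # The non-mp_ part: do the actual calculations for one 10x10 degree tile\n    # (open the inputs for tile_id, compute, write the output raster).\n    return tile_id\n\ndef mp_process_tiles(tile_id_list, single_processor=False):\n    # The mp_ part: download inputs, do preprocessing, then dispatch the tiles.\n    if single_processor:  # analogous to the -sp flag: no parallel processing\n        for tile_id in tile_id_list:\n            process_tile(tile_id)\n    else:\n        # Deliberately use fewer processes than available cores, since memory,\n        # not CPU count, is usually the limiting factor.\n        with Pool(processes=3) as pool:\n            pool.map(process_tile, tile_id_list)\n\nif __name__ == ""__main__"":\n    mp_process_tiles([""00N_000E"", ""10S_040E""])\n```\n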
Looking at the files that must be downloaded for the \nscript to run will show what files must already be created and therefore what scripts must have already been\nrun. Alternatively, you can look at the top of `run_full_model.py` to see the order in which framework stages are run. \nThe date component of the output directory on s3 generally must be changed in `constants_and_names.py`\nfor each output file. \n\nStages are run from the project folder as Python modules: `/usr/local/app# python -m [folder.script] [arguments]`\n\nFor example: \n\nExtent stage: `/usr/local/app# python -m data_prep.mp_model_extent -l 00N_000E -t std -nu`\n\nCarbon pool creation stage: `/usr/local/app# python -m carbon_pools.mp_create_carbon_pools -l 00N_000E,10S_050W -t std -ce loss -d 20239999`\n\n##### Running the emissions stage\nThe gross emissions script is the only part of the framework that uses C++. Thus, the appropriate version of the C++ \nemissions file must be compiled for emissions to run. \nThere are a few different versions of the emissions C++ script: one for the standard version and a few others for\nsensitivity analyses. \n`mp_calculate_gross_emissions.py` will compile the correct C++ file each time it is run, so the C++ file does not\nneed to be compiled manually. \nHowever, for completeness, the command for compiling the C++ script is (substituting the actual file name): \n\n`c++ /usr/local/app/emissions/cpp_util/calc_gross_emissions_[VERSION].cpp -o /usr/local/app/emissions/cpp_util/calc_gross_emissions_[VERSION].exe -lgdal`\n\nFor the standard framework and the sensitivity analyses that don\'t specifically affect emissions, it is:\n\n`c++ /usr/local/app/emissions/cpp_util/calc_gross_emissions_generic.cpp -o /usr/local/app/emissions/cpp_util/calc_gross_emissions_generic.exe -lgdal`\n\n`mp_calculate_gross_emissions.py` can also be used to calculate emissions from soil only. \nThis is set by the `-p` argument: `biomass_soil` or `soil_only`. \n\nEmissions stage: `/usr/local/app# python -m emissions.mp_calculate_gross_emissions -l 30N_090W,10S_010E -t std -p biomass_soil -d 20239999`\n\n#### Master script \nThe master script runs through all of the non-preparatory scripts in the framework: some removal factor creation, gross removals, carbon\npool generation, gross emissions for biomass+soil, gross emissions for soil only, \nnet flux, aggregation, and derivative output creation. \nIt includes all the arguments needed to run every script. \nThus, the table below also explains the potential arguments for the individual framework stages. \nThe user can control what framework components are run to some extent and set the date part of \nthe output directories. The order in which the arguments are used does not matter (does not need to match the table below).\nPreparatory scripts like creating soil carbon tiles or mangrove tiles are not included in the master script because\nthey are run very infrequently. \n\n| Argument | Short argument | Required/Optional | Relevant stage | Description | \n| -------- | ----- | ----------- | ------- | ------ |\n| `model-type` | `-t` | Required | All | Standard version (`std`) or a sensitivity analysis. Refer to `constants_and_names.py` for the valid list of sensitivity analyses. |\n| `stages` | `-s` | Required | All | The framework stage at which the run should start. 
`all` will run the following stages in this order: model_extent, forest_age_category_IPCC, annual_removals_IPCC, annual_removals_all_forest_types, gain_year_count, gross_removals_all_forest_types, carbon_pools, gross_emissions_biomass_soil, gross_emissions_soil_only, net_flux, create_derivative_outputs |\n| `tile-id-list` | `-l` | Required | All | List of tile ids to use in the framework. Should be of the form `00N_110E` or `00N_110E,00N_120E` or `all` |\n| `run-through` | `-r` | Optional | All | If activated, run the stage provided in the `stages` argument and all following stages. Otherwise, run only the stage in the `stages` argument. Activated with flag. |\n| `run-date` | `-d` | Optional | All | Date of run. Must be format YYYYMMDD. This sets the output folder in s3. |\n| `no-upload` | `-nu` | Optional | All | No files are uploaded to s3 during or after framework run (including logs and framework outputs). Use for testing to save time. When AWS credentials are not available, upload is automatically disabled and this flag does not have to be manually activated. |\n| `single-processor` | `-sp` | Optional | All | Tile processing will be done without the `multiprocessing` module whenever possible, i.e. no parallel processing. Use for testing. |\n| `log-note` | `-ln` | Optional | All | Adds text to the beginning of the log |\n| `carbon-pool-extent` | `-ce` | Optional | Carbon pool creation | Extent over which carbon pools should be calculated: loss or 2000 or loss,2000 or 2000,loss |\n| `std-net-flux-aggreg` | `-std` | Optional | Aggregation | The s3 standard framework net flux aggregated tif, for comparison with the sensitivity analysis map. |\n| `save-intermdiates` | `-si` | Optional | `run_full_model.py` | Intermediate outputs are not deleted within `run_full_model.py`. Use for local framework runs. If uploading to s3 is not enabled, intermediate files are automatically saved. |\n| `mangroves` | `-ma` | Optional | `run_full_model.py` | Create mangrove removal factor tiles as the first stage. Activate with flag. |\n| `us-rates` | `-us` | Optional | `run_full_model.py` | Create US-specific removal factor tiles as the first stage (or second stage, if mangroves are enabled). Activate with flag. |\n\nThese are some sample commands for running the flux framework in various configurations. You wouldn\'t necessarily want to use all of these;\nthey simply illustrate different configurations for the command line arguments. 
\nLike the individual framework stages, the full framework run script is also run from the project folder with the `-m` flag.\n\nRun: standard version; save intermediate outputs; run framework from annual_removals_IPCC;\nupload to folder with date 20239999; run 00N_000E; get carbon pools at time of loss; add a log note;\nuse multiprocessing (implicit because no `-sp` flag); only run listed stage (implicit because no `-r` flag)\n\n`python -m run_full_model -t std -si -s annual_removals_IPCC -d 20239999 -l 00N_000E -ce loss -ln ""00N_000E test""`\n\nRun: standard version; save intermediate outputs; run framework from annual_removals_IPCC; run all subsequent framework stages;\ndo not upload outputs to s3; run 00N_000E; get carbon pools at time of loss; add a log note; \nuse multiprocessing (implicit because no `-sp` flag)\n\n`python -m run_full_model -t std -si -s annual_removals_IPCC -r -nu -l 00N_000E -ce loss -ln ""00N_000E test""`\n\nRun: standard version; save intermediate outputs; run framework from the beginning; run all framework stages;\nupload to folder with date 20239999; run 00N_000E; get carbon pools at time of loss; add a log note;\nuse multiprocessing (implicit because no `-sp` flag)\n\n`python -m run_full_model -t std -si -s all -r -d 20239999 -l 00N_000E -ce loss -ln ""00N_000E test""`\n\nRun: standard version; save intermediate outputs; run framework from the beginning; run all framework stages;\nupload to folder with date 20239999; run 00N_000E, 10N_110E, and 50N_080W; get carbon pools at time of loss; \nadd a log note; use multiprocessing (implicit because no `-sp` flag)\n\n`python -m run_full_model -t std -si -s all -r -d 20239999 -l 00N_000E,10N_110E,50N_080W -ce loss -ln ""00N_000E test""`\n\nRun: standard version; run framework from the beginning; run all framework stages;\ndo not upload outputs to s3; set run date 20239999; run 00N_000E and 00N_010E; get carbon pools at time of loss; \nuse a single processor; add a log note; do not save intermediate outputs (implicit because no `-si` flag)\n\n`python -m run_full_model -t std -s all -r -nu -d 20239999 -l 00N_000E,00N_010E -ce loss -sp -ln ""Two tile test""`\n\nFULL STANDARD FRAMEWORK RUN: standard framework; save intermediate outputs; run framework from the beginning; run all framework stages;\nrun all tiles; get carbon pools at time of loss; add a log note;\nupload outputs to s3 with dates specified in `constants_and_names.py` (implicit because no `-nu` flag); \nuse multiprocessing (implicit because no `-sp` flag)\n\n`python -m run_full_model -t std -si -s all -r -l all -ce loss -ln ""Running all tiles""`\n\n### Sensitivity analysis\nNOT SUPPORTED AT THIS TIME.\n\nSeveral variations of the framework are included; these are the sensitivity analysis variants, which use different inputs or parameters. \nThey can be run by changing the `--model-type` (`-t`) argument from `std` to an option found in `constants_and_names.py`. 
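\n\nFor example, a hypothetical single-tile sensitivity analysis run (illustrative only, since sensitivity analyses are not supported at this time) would follow the same pattern as the standard commands above:\n\n`python -m run_full_model -t convert_to_grassland -s all -r -nu -l 00N_000E -ce loss -ln ""convert_to_grassland test""`\n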
\nEach sensitivity analysis variant starts at a different stage in the framework and runs to the final stage,\nexcept that sensitivity analyses do not include the creation of the supplementary outputs (per pixel tiles, forest extent tiles).\nSome use all tiles and some use a smaller extent.\n\n| Sensitivity analysis | Description | Extent | Starting stage | \n| -------- | ----------- | ------ | ------ |\n| `std` | Standard framework | Global | `mp_model_extent.py` |\n| `maxgain` | Maximum number of years of gain (removals) for gain-only and loss-and-gain pixels | Global | `gain_year_count_all_forest_types.py` |\n| `no_shifting_ag` | Shifting agriculture driver is replaced with commodity-driven deforestation driver | Global | `mp_calculate_gross_emissions.py` |\n| `convert_to_grassland` | Forest is assumed to be converted to grassland instead of cropland in the emissions framework | Global | `mp_calculate_gross_emissions.py` |\n| `biomass_swap` | Uses Saatchi 1-km AGB map instead of Baccini 30-m map for starting carbon densities | Extent of Saatchi map, which is generally the tropics | `mp_model_extent.py` |\n| `US_removals` | Uses IPCC default removal factors for the US instead of US-specific removal factors from USFS FIA | Continental US | `mp_annual_gain_rate_AGC_BGC_all_forest_types.py` |\n| `no_primary_gain` | Primary forests and IFLs are assumed to not have any removals | Global | `mp_forest_age_category_IPCC.py` |\n| `legal_Amazon_loss` | Uses Brazil\'s PRODES annual deforestation system instead of Hansen loss | Legal Amazon | `mp_model_extent.py` |\n| `Mekong_loss` | Uses Hansen loss v2.0 (multiple loss in same pixel). NOTE: Not used for flux framework v1.2.0, so this is not currently supported. | Mekong region | N/A |\n\n\n### Updating the framework with new tree cover loss\nFor the current general configuration of the framework, these are the changes that need to be made to update the\nframework with a new year of tree cover loss data. In the order in which the changes would be needed for rerunning the framework:\n\n1) Update the framework version variable `version` in `constants_and_names.py`.\n\n2) Change the tree cover loss tile source to the new tree cover loss tiles in `constants_and_names.py`.\nChange the tree cover loss tile pattern in `constants_and_names.py`.\n\n3) Change the number of loss years variable `loss_years` in `constants_and_names.py`.\n\n4) In `constants.h` (emissions/cpp_util/), change the number of framework years (`int model_years`) \n and the loss tile pattern (`char lossyear[]`).\n\n5) In `equations.cpp` (emissions/cpp_util/), change the number of framework years (`int model_years`). \n\n6) Obtain and pre-process the updated drivers of tree cover loss and tree cover loss from fires data \n using `mp_prep_other_inputs_annual.py`. Note that the drivers map probably needs to be reprojected to WGS84 \n and resampled (0.005x0.005 deg) in ArcMap or similar \n before processing into 0.00025x0.00025 deg 10x10 tiles using this script. 
\n `mp_prep_other_inputs_annual.py` has some additional notes about that.\n\n7) Make sure that changes in forest age category produced by `mp_forest_age_category_IPCC.py` \n and the number of gain years produced by `mp_gain_year_count_all_forest_types.py` still make sense.\n\nStrictly speaking, if only the drivers, tree cover loss from fires, and tree cover loss are being updated, \nthe framework only needs to be run from forest_age_category_IPCC onwards (loss affects IPCC age category).\nHowever, for completeness, I suggest running all stages of the framework from model_extent onwards for an update so that\nframework outputs from all stages have the same version in their metadata and the same dates of output as the framework stages\nthat are actually being changed. A full framework run (all tiles, all stages) takes about 18 hours on an r5d.24xlarge \nEC2 instance with 3.7 TB of storage and 96 processors.\n\n\n### Other modifications to the framework\nIt is recommended that any changes to the framework be tested in a local Docker instance before running on an ec2 instance.\nI like to output files to test folders on s3 with the date 20239999 because that is clearly not a real run date. \nA standard development route is: \n\n1) Make changes to a single framework script and run using the single processor option on a single tile (easiest for debugging) in local Docker.\n\n2) Run the single script on a few representative tiles using a single processor in local Docker.\n\n3) Run the single script on a few representative tiles using the multiple processor option in local Docker.\n\n4) Run the master script on a few representative tiles using the multiple processor option in local Docker to \n confirm that the changes work when using the master script.\n\n5) Run the single script on a few representative tiles using multiple processors on an ec2 instance (need to commit and push changes to GitHub first).\n\n6) Run the master script on all tiles using multiple processors on an EC2 instance. \n If the changes likely affected memory usage, watch memory with `htop` to make sure that too much memory isn\'t required. \n If too much memory is needed, reduce the number of processors being called in the script. \n\nDepending on the complexity of the changes being made, some of these steps can be omitted. If only a few tiles are \nbeing modeled (e.g., for a small country), only steps 1-4 need to be done. \n\n### Running framework tests\nThere is an incipient testing component using `pytest`. It is currently only available for the deadwood and litter\ncarbon pool creation step of the framework but can be expanded to other aspects of the framework. \nTests can be run from the project folder with the command `pytest`. \nYou can get more verbose output with `pytest -s`.\nTo run only tests with a certain marker (e.g., `rasterio`), you can do `pytest -m rasterio -s`.\n\n\n### Dependencies\nTheoretically, this framework should run anywhere that the correct Docker container can be started \nand there is access to the AWS s3 bucket or all inputs are in the correct folder in the Docker container. 
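\n\nTo illustrate the ""inputs found locally or downloaded from s3"" behavior described in the Inputs section, here is a minimal sketch (the bucket, prefix, and file names are hypothetical placeholders, not the framework\'s actual configuration):\n\n```python\n# Sketch: use a local copy of an input tile if it is already in\n# /usr/local/tiles, otherwise download it from s3 with boto3.\nimport os\nimport boto3\n\nTILES_DIR = ""/usr/local/tiles""\n\ndef fetch_input(tile_file, bucket=""gfw2-data"", prefix=""climate/carbon_model""):\n    local_path = os.path.join(TILES_DIR, tile_file)\n    if os.path.exists(local_path):\n        return local_path  # local copy found; no download needed\n    s3 = boto3.client(""s3"")  # requires AWS credentials for private buckets\n    s3.download_file(bucket, prefix + ""/"" + tile_file, local_path)\n    return local_path\n```\n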
\nThe Docker container should be self-sufficient in that it is configured to include the right Python packages, C++ compiler, GDAL, etc.\nIt is described in `Dockerfile`, with Python requirements (installed during Docker creation) in `requirements.txt`.\nOn an AWS ec2 instance, I have only run it on r5d instance types but it might be able to run on others.\nAt the least, it needs a certain type of memory configuration on the ec2 instance (at least one large SSD volume, I believe). \nOtherwise, I do not know the limitations and constraints on running this framework in an ec2 instance. \n\n### Contact information\nDavid Gibbs: david.gibbs@wri.org\n\nNancy Harris: nancy.harris@wri.org\n\nGlobal Forest Watch, World Resources Institute, Washington, D.C.\n'",,"2017/04/11, 18:05:56",2388,GPL-3.0,1,4197,"2023/07/03, 19:38:13",5,42,44,16,114,5,0.0,0.01142560373928847,"2023/07/03, 19:49:20",v1.2.3,0,4,false,,true,true,,,https://github.com/wri,https://wri.org,"Washington, DC",,,https://avatars.githubusercontent.com/u/4615146?v=4,,, PNVmaps,Global Maps of Potential Natural Vegetation based on Machine Learning.,Envirometrix,https://github.com/Envirometrix/PNVmaps.git,github,,Biomass,"2023/06/23, 13:14:46",30,0,3,true,TeX,Envirometrix,Envirometrix,"TeX,R,QML,Scheme",,"b'# Future projections of Potential Natural Vegetation across different climatic scenarios based on Machine Learning\n\nPNV predictions of the general IUCN classes and BIOME 6000 classes at 1 km spatial resolution are available for **[download](https://doi.org/10.5281/zenodo.7520813)**.\n\n![Biomes map at 1km for RCP 2.6 epoch 2040-2060](img/001_iucn_biomes.png ""Potential distribution of terrestrial biomes (Potential Natural Vegetation) at 1 km spatial resolution."")\n\nImprovements in the future projections of the PNV biomes maps at 1km:\n\n- Only 72 covariates were used (temperature, precipitation and topographical covariates), but model accuracy was doubled,\n- Predictions are based on the Ensemble Machine Learning (`""classif.ranger"", ""classif.glmnet"", ""classif.xgboost""`) as implemented in the [mlr package](https://mlr.mlr-org.com/),\n- Predictions are provided in the original BIOME 6000 classification system (20 classes) and the **[IUCN Global Ecosystem Typology](https://global-ecosystems.org/page/typology)** classification system\n- Model errors are provided per class (derived as the weighted standard deviation between multiple models) for single class maps,\n- Model errors are provided using the _margin of victory_ [(Calder\xc3\xb3n-Loor et al., 2021)](https://doi.org/10.1016/j.rse.2020.112148) for hard classes maps,\n- Model fine-tuning and accuracy assessment are based on repeated 5-fold spatial cross-validation,\n- Predictions are provided for current and future epochs (2040-2060 and 2060-2080) under three climatic scenarios (RCP 2.6, RCP 4.5 and RCP 8.5).\n\n\n# Global Maps of Potential Natural Vegetation based on Machine Learning\n\nPNV predictions of the general land cover classes at 250 m spatial resolution are available for **[download](https://doi.org/10.5281/zenodo.3631253)**.\n\n![GLC map at 250m](img/001_pnv_predictions_glc100.png ""Potential distribution of land cover classes (Potential Natural Vegetation) at 250 m spatial resolution."")\n\n\nUpdate of the predictions at 250 m spatial resolution is available for **[download](https://doi.org/10.5281/zenodo.3526619)**.\n\n![Biomes map at 250m](img/001_pnv_biome.type_biome00k_c_250m_s0..0cm_2000..2017_v0.2.png ""Potential distribution of biomes (Potential Natural 
Vegetation) at 250 m spatial resolution."")\n\nImprovements in v0.2 of the PNV biomes map at 250 m:\n\n- 40% of new covariates have been added (see [the variable importance list](R_code/Biome_randomForest_v02.txt))\n- Predictions are based on the Ensemble Machine Learning (`""classif.ranger"", ""classif.glmnet"", ""classif.xgboost"", ""classif.nnTrain""`) as implemented in the [mlr package](https://mlr.mlr-org.com/),\n- Model errors are provided per class (derived as the weighted standard deviation between multiple models),\n- Model fine-tuning, feature selection and accuracy assessment are based on repeated cross-validation,\n\n*Summary*: This repository contains R code and some outputs of spatial predictions related to the production of [Global Maps of Potential Natural Vegetation](https://www.arcgis.com/apps/MapJournal/index.html?appid=1856322400844a7cab348bccfa4bee76). Three case studies were considered: (1) global distribution of biomes based on the BIOME 6000 data set (8057 modern pollen-based site reconstructions), (2) distribution of forest tree species in Europe based on detailed occurrence records (1,546,435 ground observations), and (3) global monthly Fraction of Absorbed Photosynthetically Active Radiation (FAPAR) values (30,301 randomly-sampled points).\n\n![alt text](https://github.com/envirometrix/PNVmaps/blob/master/img/Fig_global_biomes_map.png ""Output predictions for global biomes."")\n\n# Step-by-step tutorial\n\n[This tutorial](https://github.com/Envirometrix/PNVmaps/tree/master/tutorial) explains how to fit models and produce predictions for a smaller area in Europe. To run this tutorial you might need to install and customize some [R / OS GIS software](https://envirometrix.github.io/PredictiveSoilMapping/software.html).\n\n*Please cite as:*\n\nFuture projections:\n* Bonannella C, Hengl T, Parente L, de Bruin S. 2023. **[Biomes of the world under climate change scenarios: increasing aridity and higher temperatures lead to significant shifts in natural vegetation](https://doi.org/10.7717/peerj.15593)**. PeerJ 11:e15593 https://doi.org/10.7717/peerj.15593\n\nOriginal PNV maps:\n* Hengl T, Walsh MG, Sanderman J, Wheeler I, Harrison SP, Prentice IC. 2018. **[Global mapping of potential natural vegetation: an assessment of machine learning algorithms for estimating land potential](https://doi.org/10.7717/peerj.5457)**. PeerJ 6:e5457 https://doi.org/10.7717/peerj.5457\n\n# Download Maps\n\nThe 250m resolution predictions of biomes are available for download from https://doi.org/10.5281/zenodo.3526619\n\nThe 1km resolution maps are available for download under the [Open Database License (ODbl) v1.0](https://opendatacommons.org/licenses/odbl/) and can be downloaded from http://dx.doi.org/10.7910/DVN/QQHCIK without restrictions.\n\n# Disclaimer\n\nThese are preliminary maps of the Global Potential Natural Vegetation. Errors and artifacts are still possible. Training data sets BIOME 6000 and EU Forest are constantly being updated and could still contain erroneously geolocated points. Predictions of FAPAR are based on randomly simulated points and not on ground observations of FAPAR and classification of sites. Predictions of EU forest tree species are presented for experimental purposes only. 
To report an issue or artifact in maps, please use https://github.com/envirometrix/PNVmaps/issues.\n\n'",",https://doi.org/10.5281/zenodo.7520813,https://doi.org/10.1016/j.rse.2020.112148,https://doi.org/10.5281/zenodo.3631253,https://doi.org/10.5281/zenodo.3526619,https://doi.org/10.7717/peerj.15593,https://doi.org/10.7717/peerj.15593\n\nOriginal,https://doi.org/10.7717/peerj.5457,https://doi.org/10.7717/peerj.5457\n\n#,https://doi.org/10.5281/zenodo.3526619\n\nThe","2018/03/27, 13:55:55",2038,GPL-3.0,2,15,"2023/07/03, 19:38:13",3,0,0,0,114,0,0,0.33333333333333337,,,0,2,false,,true,true,,,https://github.com/Envirometrix,http://envirometrix.net,"Wageningen, the Netherlands",,,https://avatars.githubusercontent.com/u/32822370?v=4,,, MAAP,"Discover and use biomass relevant data, integrating the data for comparison, analysis, evaluation, and generation.",MAAP-Project,https://github.com/MAAP-Project/maap-documentation.git,github,,Biomass,"2023/10/24, 01:43:55",9,0,4,true,,,MAAP-Project,,,"b""# maap-documentation\n[![Documentation Status](https://readthedocs.org/projects/maap-project/badge/?version=latest)](https://maap-project.readthedocs.io/en/latest/?badge=latest)\n\nThis repository serves as the technical documentation for interfacing with the MAAP services.\n\n### Contributing to MAAP Documentation\n\nMAAP documentation is hosted on [maap-project.readthedocs.io](https://maap-project.readthedocs.io), built using [Sphinx](http://www.sphinx-doc.org/en/master/index.html) and written in [reStructuredText](https://www.sphinx-doc.org/en/master/usage/restructuredtext/index.html). If you want to contribute to the documentation, you can do so by forking the repository, creating a branch for your changes and editing the documentation files in the docs directory of the repo.\n\nThe documentation should be built using Python >=3.11.\n\nAn OS-level installation of [Pandoc](https://pandoc.org/) is also required.\n\nYou need to install Sphinx and supporting packages locally so you can ensure that your edits display correctly before making a pull request to the repository. These steps must be performed locally since MAAP's ADE does not support running a server and likely will not in the future.\n\nTo install supporting packages, run the following command:\n\n```\npip install -r requirements.txt\n```\n\nAfter installing the necessary packages, you can build the docs with the following commands:\n\n```\ncd docs\nmake html\n```\n\nOnce the docs have been built successfully, there should be a `build/` directory with the HTML pages.\nTo verify the pages look as expected, run a local Python server.\n\n```\ncd build/html\npython3 -m http.server\n# If you are not prompted, open a web browser and go to http://localhost:8000/ (default)\n```\n\n## Running Notebooks Locally\n\nTo run the documentation notebook code, you must complete several configuration steps.\n\nInstall JupyterHub. \n\nInstall the `maap-py` library:\n\n1. Switch to the virtual environment that you wish to install in.\n2. `pip install matplotlib==3.3.1` \n3. Clone maap-py with `git clone git@github.com:MAAP-Project/maap-py.git`\n4. 
`cd maap-py` then `python setup.py install`\n""",,"2020/01/22, 16:39:38",1372,GPL-3.0,565,850,"2023/10/24, 01:44:01",35,249,316,207,1,7,3.9,0.7947882736156352,"2023/07/20, 20:19:27",3.1.0,0,22,false,,true,true,,,https://github.com/MAAP-Project,,,,,https://avatars.githubusercontent.com/u/42812645?v=4,,, BioPAL,The BIOMASS Product Algorithm Laboratory hosts official tools for processing and analysing ESA's BIOMASS mission data.,BioPAL,https://github.com/BioPAL/BioPAL.git,github,,Biomass,"2023/09/21, 07:49:44",74,0,24,true,Python,BioPAL,BioPAL,Python,https://biopal.org/,"b'# BioPAL\n\n[![Documentation Status](https://readthedocs.org/projects/biopal/badge/?version=latest)](http://biopal.readthedocs.io/?badge=latest)\n[![PyPI](https://img.shields.io/pypi/v/biopal)](https://pypi.org/project/biopal)\n[![PyPI - License](https://img.shields.io/pypi/l/biopal)](https://pypi.org/project/biopal)\n[![PyPI - Python Version](https://img.shields.io/pypi/pyversions/biopal)](https://pypi.org/project/biopal)\n\n\nThe BIOMASS Product Algorithm Laboratory hosts official tools for processing and analysing ESA\\\'s BIOMASS mission data.\n\n- Website: www.biopal.org\n- Documentation:\n- Mailing: \n- Contributing:\n- Bug reports:\n\n# Objective\n\nBIOMASS is ESA\'s (European Space Agency) seventh Earth Explorer mission, currently scheduled for launch in 2023. The satellite will be the first P-band SAR (Synthetic Aperture Radar) sensor in space and will be operated in fully polarimetric interferometric and tomographic modes. The mission\'s main aim is to map forest properties globally, but the sensor will also allow exploring subsurface scenarios (ice, desert).\n\nThe BIOMASS Product Algorithm Laboratory (BioPAL) is an evolution of the software developed for the [BIOMASS prototype processor](https://www.mdpi.com/2072-4292/12/6/985) into an open source library to be used and contributed to by the scientific community.\n\nThis repository collects the software routines for processing Level 1 SAR products to generate Level 2 forest products of Above Ground Biomass (AGB), Forest Height (FH) and Forest Disturbance (FD). More details about these products and BIOMASS can be found [here](https://www.mdpi.com/2072-4292/12/6/985).\n\n# Structure of the Project\n\nThis repository is organized as follows:\n\n- **arepytools**: Aresys I/O library for reading and managing the input dataset. Will be turned into an independent library in the future.\n\n- **biopal**: contains the BioPAL source code; in particular:\n\n  - the `biopal/_package_data` folder (do not edit) contains the default Input and Configuration xml files (use biopal-quickstart to get editable ones, see **Getting Started** section below)\n\n- **doc**: contains the documentation.\n\nBioPAL is already used by some ESA-sponsored projects; however, it is still experimental code.\nThis means that there might still be bugs. 
If you happen to find one, make us happy by filing an issue with all the details needed to reproduce it as best you can.\n\nYou can follow the development roadmap to version 1.0.0 [here](https://github.com/BioPAL/BioPAL/projects/2).\n\n\n# Getting Started\n\nFor advanced installation and usage options, refer to the **Documentation** section below.\n\n## BioPAL installation (default option)\nThis installation procedure makes use of the open-source package management system [conda](https://docs.conda.io/projects/conda/en/latest/), which must be pre-installed.\n\nOpen a command window with *conda* available and follow this procedure.\n\nCreate an empty biopal environment:\n\n    conda create --name biopal python=3.9\n\nInstall GDAL library:\n\n    conda activate biopal\n    conda install -c conda-forge GDAL=3.5\n\nInstall the package:\n\n    pip install biopal\n\nConfigure biopal:\n\n    biopal-quickstart FOLDER\n\n""FOLDER"" is the path where usable and editable versions of `Input_File.xml` and `Configuration_File.xml` files will be generated.\n\n## Run BioPAL\n\nPrepare your `Input_File.xml` and `Configuration_File.xml`, then open a command window with *conda* available and run BioPAL:\n\n    conda activate biopal\n    biopal --conf conf_folder inputfilexml\n\n* *inputfilexml*: path of the `Input_File.xml` \n* *conf_folder*: path of the folder containing `Configuration_File.xml`\n\n\n# BioPAL datasets\n\nBioPAL gives easy access to several datasets that are used for examples in the documentation and testing. \nThese datasets are hosted on our FTP server and must be downloaded for use. \n\nContact to receive access to the dataset and for more information.\n\n\n# Call for Contributions\n\nBioPAL is an open source project supported by a community that appreciates help from a wide range of different backgrounds. Large or small, any contribution makes a big difference; and if you\\\'ve never contributed to an open source project before, we hope you will start with BioPAL!\n\nIf you are interested in contributing, check out our contributor\\\'s guide. Beyond enhancing the processing algorithms, there are many ways to contribute:\n\n- Submit a bug report or feature request on GitHub Issues.\n- Contribute a Jupyter notebook to our examples gallery.\n- Assist us with user testing.\n- Add to the documentation or help with our website.\n- Write unit or integration tests for our project.\n- Answer questions on our issues, slack channel, MAAP Forums, and elsewhere.\n- Write a blog post, tweet, or share our project with others.\n- Teach someone how to use BioPAL.\n\nAs you can see, there are lots of ways to get involved and we would be very happy for you to join us! The only thing we ask is that you abide by the principles of openness, respect, and consideration of others as described in our Code of Conduct.\n\n## Contributing Guidelines in Brief\n\nAlso read the contributor\\\'s guides carefully before getting started.\n\n1. Fork the repository.\n\n2. Clone the private fork locally (execute the following command in your terminal):\n\n       git clone https://github.com/your_name_here/BioPAL.git\n\n3. Follow the instructions specified in the documentation, make a demo run and compare with the reference output. Make sure all tests pass.\n\n4. Add the main repository to the list of your remotes (in order to pull the latest changes before making local changes):\n\n       git remote add upstream https://github.com/BioPAL/BioPAL\n\n5. Create a branch for local development.\n\n6. Commit local changes and push the local branch to the GitHub private fork.\n\n7. 
Submit a pull request through the GitHub website to the [main branch](https://github.com/BioPAL/BioPAL/tree/main) of the main repository.\n\n## Pull Request Requirements\n\n1. Include new tests for all the new routines developed.\n2. Documentation should be updated accordingly.\n3. Updated code must pass all the tests.\n\n# Documentation\n\nDocumentation is a work in progress and can be found in the [project doc/ folder](https://github.com/BioPAL/BioPAL/tree/main/doc) and in the [website doc section](https://www.biopal.org/docs/).\n\nThe user manual of the previous prototype software can be found in [legacy](https://github.com/BioPAL/BioPAL/tree/main/doc/legacy/ARE-017082_BIOMASS_L2_User_Manual_[prototype_legacy].pdf).\n\n# History\n\nBioPAL was originally written and is currently maintained by Aresys and the BioPAL team on behalf of ESA.\n\nThe BioPAL team includes representatives of several European research institutions; see the [website about section](https://www.biopal.org/about/).\n\n\n# Citing\n\nIf you use BioPAL, please add these citations:\n\n- *BioPAL: BIOMASS Product Algorithm Laboratory, https://github.com/BioPAL/BioPAL*\n\n- *Banda F, Giudici D, Le Toan T, Mariotti d\xe2\x80\x99Alessandro M, Papathanassiou K, Quegan S, Riembauer G, Scipal K, Soja M, Tebaldini S, Ulander L, Villard L. The BIOMASS Level 2 Prototype Processor: Design and Experimental Results of Above-Ground Biomass Estimation. Remote Sensing. 2020; 12(6):985. https://doi.org/10.3390/rs12060985*'",",https://doi.org/10.3390/rs12060985*","2020/07/21, 11:45:39",1191,MIT,8,206,"2022/03/25, 09:14:32",9,34,45,1,579,0,0.0,0.2993630573248408,"2023/09/26, 09:38:30",v0.4.0rc0,0,8,false,,true,true,,,https://github.com/BioPAL,,,,,https://avatars.githubusercontent.com/u/68594778?v=4,,, allodb,An R package for biomass estimation at extratropical forest plots.,ropensci,https://github.com/ropensci/allodb.git,github,,Biomass,"2021/10/21, 02:47:44",30,0,4,false,R,rOpenSci,ropensci,R,https://docs.ropensci.org/allodb/,"b'\n\n\n# allodb: An R package for biomass estimation at extratropical forest plots\n\n\n\n[![peer-review](https://badges.ropensci.org/436_status.svg)](https://github.com/ropensci/software-review/issues/436)\n[![Codecov test\ncoverage](https://codecov.io/gh/ropensci/allodb/branch/master/graph/badge.svg)](https://codecov.io/gh/ropensci/allodb?branch=master)\n[![R-CMD-check](https://github.com/ropensci/allodb/workflows/R-CMD-check/badge.svg)](https://github.com/ropensci/allodb/actions)\n\n\n## Introduction\n\nAllometric equations for calculation of tree aboveground biomass (AGB)\nform the basis for estimates of forest carbon storage and exchange with\nthe atmosphere. While standard models exist to calculate forest biomass\nacross the tropics, we lack a standardized tool for computing AGB across\nthe global extratropics.\n\n*allodb* was conceived as a framework to standardize and simplify the\nbiomass estimation process across globally distributed extratropical\nforests (mainly temperate and boreal forests). With *allodb* we aimed\nto: a) compile relevant published and unpublished allometries, focusing\non AGB but structured to handle other variables (e.g., height); b)\nobjectively select and integrate appropriate available equations across\nthe full range of tree sizes; and c) serve as a platform for future\nupdates and expansion to other research sites.\n\nThe *allodb* package contains a dataset of systematically selected\npublished allometric equations. 
This dataset was built based on 701\nwoody species identified at 24 large [ForestGEO forest dynamic\nplots](https://forestgeo.si.edu/) representing all major extratropical\nforest types. A total of 570 parsed allometric equations to estimate\nindividual tree biomass were retrieved, checked, and combined using a\nweighting function designed to ensure optimal equation selection over\nthe full tree size range with smooth transitions across equations. The\nequation dataset used can be customized with built-in functions that\nsubset the original dataset and add new equations.\n\nThe package provides functions to estimate tree biomass based on\nuser-provided census data (tree diameter, taxonomic identification, and\nplot coordinates). New allometric equations are calibrated for each\nspecies and location by resampling the original equations; equations\nwith a larger sample size and/or higher taxonomic and climatic\nsimilarity with the species and location in question are given a higher\nweight in this process.\n\n## Installation\n\nInstall the development version of *allodb* from GitHub:\n\n``` r\n# install.packages(""remotes"")\nremotes::install_github(""ropensci/allodb"")\n```\n\n## Examples\n\nPrior to calculating tree biomass using *allodb*, users need to provide\na table (i.e.\xc2\xa0dataframe) with DBH (cm), parsed species Latin names, and\nsite(s) coordinates. In the following examples we use data from the\nSmithsonian Conservation Biology Institute, USA (SCBI) ForestGEO\ndynamics plot (trees from 1 hectare surveyed in 2008). Full tree census\ndata can be requested through the [ForestGEO\nportal](https://forestgeo.si.edu/explore-data).\n\n``` r\nlibrary(allodb)\ndata(scbi_stem1)\n```\n\nThe biomass of all trees in one (or several) censuses can be estimated\nusing the `get_biomass` function.\n\n``` r\nscbi_stem1$agb <-\n get_biomass(\n dbh = scbi_stem1$dbh,\n genus = scbi_stem1$genus,\n species = scbi_stem1$species,\n coords = c(-78.2, 38.9)\n )\n```\n\nBiomass for a single tree can be estimated given dbh and species\nidentification (results in kilograms).\n\n``` r\nget_biomass(\n dbh = 50,\n genus = ""liriodendron"",\n species = ""tulipifera"",\n coords = c(-78.2, 38.9)\n)\n#> [1] 1578.644\n```\n\nUsers can modify the set of equations that will be used to estimate the\nbiomass using the `new_equations` function. The default option is the\nentire *allodb* equation table. Users can also work on a subset of those\nequations, or add new equations to the table (see\n`?allodb::new_equations`). This new equation table should be provided as\nan argument in the `get_biomass` function.\n\n``` r\nshow_cols <- c(""equation_id"", ""equation_taxa"", ""equation_allometry"")\neq_tab_acer <- new_equations(subset_taxa = ""Acer"")\nhead(eq_tab_acer[, show_cols])\n#> # A tibble: 6 \xc3\x97 3\n#> equation_id equation_taxa equation_allometry \n#> \n#> 1 a4e4d1 Acer saccharum exp(-2.192-0.011*dbh+2.67*(log(dbh))) \n#> 2 dfc2c7 Acer rubrum 2.02338*(dbh^2)^1.27612 \n#> 3 eac63e Acer rubrum 5.2879*(dbh^2)^1.07581 \n#> 4 f49bcb Acer pseudoplatanus exp(-5.644074+(2.5189*(log(pi*dbh)))) \n#> 5 14bf3d Acer mandshuricum 0.0335*(dbh)^1.606+0.0026*(dbh)^3.323+0.1222*\xe2\x80\xa6\n#> 6 0c7cd6 Acer mono 0.0202*(dbh)^1.810+0.0111*(dbh)^2.740+0.1156*\xe2\x80\xa6\n```\n\nWithin the `get_biomass` function, this equation table is used to\ncalibrate a new allometric equation for all species/site combinations in\nthe user-provided dataframe. 
This is done by attributing a weight to\neach equation based on its sampling size, and taxonomic and climatic\nsimilarity with the species/site combination considered.\n\n``` r\nallom_weights <-\n weight_allom(\n genus = ""Acer"",\n species = ""rubrum"",\n coords = c(-78, 38)\n )\n\n## visualize weights\nequ_tab_acer <- new_equations()\nequ_tab_acer$weights <- allom_weights\nkeep_cols <-\n c(\n ""equation_id"",\n ""equation_taxa"",\n ""sample_size"",\n ""weights""\n )\norder_weights <- order(equ_tab_acer$weights, decreasing = TRUE)\nequ_tab_acer <- equ_tab_acer[order_weights, keep_cols]\nhead(equ_tab_acer)\n#> # A tibble: 6 \xc3\x97 4\n#> equation_id equation_taxa sample_size weights\n#> \n#> 1 138258 Acer rubrum 150 0.415\n#> 2 d6be5c Sapindaceae 243 0.383\n#> 3 a2fbbb Sapindaceae 200 0.349\n#> 4 2630d5 Trees (Angiosperms) 886 0.299\n#> 5 d4c590 Trees (Angiosperms) 549 0.289\n#> 6 ed748f Broad-leaved species 2223 0.270\n```\n\nEquations are then resampled within their original DBH range: the number\nof resampled values for each equation is proportional to its weight (as\nattributed by the `weight_allom` function).\n\n``` r\ndf_resample <-\n resample_agb(\n genus = ""Acer"",\n species = ""rubrum"",\n coords = c(-78, 38)\n )\n\nplot(\n df_resample$dbh,\n df_resample$agb,\n xlab = ""DBH (cm)"",\n ylab = ""Resampled AGB values (kg)""\n)\n```\n\n![](man/figures/README-resample-acer-1.png)\n\nThe resampled values are then used to fit the following nonlinear model:\n`AGB = a * dbh^b + e`,\nwith i.i.d. errors\n`e ~ N(0, sigma^2)`.\nThe parameters (*a*, *b*, and *sigma*) are returned by the\n`est_params()` function.\n\nThe resampled values (dots) and new fitted equation (red dotted line)\ncan be visualized with the `illustrate_allodb()` function.\n\n``` r\npars_acer <- est_params(\n genus = ""Acer"",\n species = ""rubrum"",\n coords = c(-78, 38)\n)\nillustrate_allodb(\n genus = ""Acer"",\n species = ""rubrum"",\n coords = c(-78, 38)\n)\n```\n\n![](man/figures/README-est-params-acer-1.png)\n\nThe `est_params` function can be used for all species/site combinations\nin the dataset at once.\n\n``` r\nparams <- est_params(\n genus = scbi_stem1$genus,\n species = scbi_stem1$species,\n coords = c(-78.2, 38.9)\n)\nhead(params)\n#> # A tibble: 6 \xc3\x97 7\n#> genus species long lat a b sigma\n#> \n#> 1 Acer negundo -78.2 38.9 0.0762 2.55 433.\n#> 2 Acer rubrum -78.2 38.9 0.0768 2.55 412.\n#> 3 Ailanthus altissima -78.2 38.9 0.0995 2.48 377.\n#> 4 Amelanchier arborea -78.2 38.9 0.0690 2.56 359.\n#> 5 Asimina triloba -78.2 38.9 0.0995 2.48 377.\n#> 6 Carpinus caroliniana -78.2 38.9 0.0984 2.48 317.\n```\n\nAGB is then recalculated as `agb = a * dbh^b` within the `get_biomass`\nfunction.\n\nPlease note that this package is released with a [Contributor Code of\nConduct](https://ropensci.org/code-of-conduct/). 
By contributing to this\nproject, you agree to abide by its terms.\n'",,"2017/10/13, 19:08:44",2203,GPL-3.0,0,1739,"2021/10/21, 02:35:54",14,71,216,0,734,0,0.0,0.3828320802005013,"2020/09/22, 13:31:32",v1.0-alpha,0,5,false,,false,true,,,https://github.com/ropensci,https://ropensci.org/,"Berkeley, CA",,,https://avatars.githubusercontent.com/u/1200269?v=4,,, wildfire forecasting,"The project intends to reproduce the Fire Forecasting capabilities of GEFF using Deep Learning and develop further improvements in accuracy, geography and time scale through inclusion of additional variables or optimization of model architecture and hyperparameters.",esowc,https://github.com/ECMWFCode4Earth/wildfire-forecasting.git,github,"deep-learning,wildfire-forecasting,gis,earth-observation,remote-sensing",Wildfire,"2021/02/22, 21:18:54",46,0,7,true,Jupyter Notebook,ECMWF Code for Earth,ECMWFCode4Earth,"Jupyter Notebook,Python,Dockerfile,Shell",,"b'# Forecasting Wildfire Danger Using Deep Learning\n\n[![Documentation Status](https://readthedocs.org/projects/wildfire-forecasting/badge/?version=latest)](https://wildfire-forecasting.readthedocs.io/en/latest/?badge=latest) [![Binder](https://mybinder.org/badge_logo.svg)](https://mybinder.org/v2/gh/esowc/wildfire-forecasting/master) \n\n- [Introduction](#introduction)\n- [TL; DR](#tl--dr)\n- [Getting Started](#getting-started)\n * [Using Binder](#using-binder)\n * [Clone this repo](#clone-this-repo)\n * [Using conda](#using-conda)\n * [Using Docker](#using-docker)\n- [Running Inference](#running-inference)\n- [Implementation overview](#implementation-overview)\n- [Documentation](#documentation)\n- [Acknowledgements](#acknowledgements)\n\n## Introduction\n\nThe Global ECMWF Fire Forecasting (GEFF) system, implemented in Fortran 90, is based on empirical models conceptualised several decades back. Recent GIS & Machine Learning advances could, theoretically, be used to boost these models\' performance or completely replace the current forecasting system. However thorough benchmarking is needed to compare GEFF to Deep Learning based prediction techniques.\n\nThe project intends to reproduce the Fire Forecasting capabilities of GEFF using Deep Learning and develop further improvements in accuracy, geography and time scale through inclusion of additional variables or optimisation of model architecture & hyperparameters. Finally, a preliminary fire spread prediction tool is proposed to allow monitoring activities.\n\n## TL; DR\n\nThis codebase (and this README) is a work-in-progress. The `master` is a stable release and we aim to address issues and introduce enhancements on a rolling basis. If you encounter a bug, please [file an issue](https://github.com/esowc/wildfire-forecasting/issues/new). Here are a quick few pointers that *just work* to get you going with the project:\n\n* Clone & navigate into the repo and create a conda environment using `environment.yml` on Ubuntu 18.04 and 20.04 only.\n* All EDA and Inference notebooks must be run within this environment. Use `conda activate wildfire-dl`\n* Check out the EDA notebooks titled [`EDA_X_mini_sample.ipynb`](data/EDA). 
We recommend `jupyterlab`.\n* Check out the Inference notebooks for [`1 day, 10 day, 14 day and 21 day predictions`](examples/).\n* The notebooks also include code to download a small sample dataset.\n\n**Next:**\n\n* See [Getting Started](#getting-started) for how to set up your local environment for training or inference\n* For a detailed description of the project codebase, check out the [Code_Structure_Overview](Code_Structure_Overview.md)\n* Read the [Running Inference](#running-inference) section for testing pre-trained models on sample data.\n* See [Implementation Overview](#implementation-overview) for details on tools & frameworks and how to retrain the model.\n\nThe work-in-progress documentation can be viewed online on [wildfire-forecasting.readthedocs.io](https://wildfire-forecasting.readthedocs.io/en/latest/).\n\n## Getting Started\n\n### Using Binder\n\nWhile we have included support for launching the repository in [![Binder](https://mybinder.org/badge_logo.svg)](https://mybinder.org/v2/gh/esowc/wildfire-forecasting/master), the limited memory offered by Binder means that you might end up with crashed/dead kernels while trying to test the `Inference` or the `Forecast` notebooks. At this point, we don\'t have a workaround for this issue.\n\n### Clone this repo\n\n```bash\ngit clone https://github.com/esowc/wildfire-forecasting.git\ncd wildfire-forecasting\n```\n\nOnce you have cloned and navigated into the repository, you can set up a development environment using either `conda` or `docker`. Refer to the relevant instructions below and then skip to the next section on [Running Inference](#running-inference)\n\n### Using conda\n\nTo create the environment, run:\n\n```bash\nconda env create -f environment.yml\nconda clean -a\nconda activate wildfire-dl\n```\n\n>The setup is tested on Ubuntu 18.04, 20.04 and Windows 10 only. On systems with CUDA supported GPU and CUDA drivers set up, the conda environment and the code ensure that GPUs are used by default for training and inference. If there isn\'t sufficient GPU memory, this will typically lead to Out of Memory Runtime Errors. As a rule of thumb, around 4 GiB GPU memory is needed for inference and around 12 GiB for training.\n\n### Using Docker\n\nWe include a `Dockerfile` & `docker-compose.yml` and provide detailed instructions for setting up your development environment using Docker for training on both CPUs and GPUs. Please head over to the [Docker README](docker/README.md) for more details.\n\n## Running Inference\n\n* **Examples**:
\n The [Inference_2_1.ipynb](examples/Inference_2_1.ipynb), [Inference_4_10.ipynb](examples/Inference_4_10.ipynb), [Inference_4_14.ipynb](examples/Inference_4_14.ipynb) and [Inference_7_21.ipynb](examples/Inference_7_21.ipynb) notebooks demonstrate the end-to-end procedure of loading data, creating a model from a saved checkpoint, and getting predictions for the 2-day-input/1-day-output, 4-day-input/10-day-output, 4-day-input/14-day-output and 7-day-input/21-day-output experiments, respectively.\n\n* **Testing data**:
\n Ensure access to the fwi-forcings and fwi-reanalysis data. Limited sample data are available at `gs://deepgeff-data-v0` (released for educational purposes only).\n\n* **Pre-trained model**:
\n All previously trained models are listed in [pre-trained_models.md](src/model/checkpoints/pre-trained_models.md) with associated metadata. Select and download the desired pre-trained model checkpoint file via gsutil from `gs://deepgeff-models-v0`, then set the `$CHECKPOINT_FILE`, `$FORCINGS_DIR` and `$REANALYSIS_DIR` directory paths through the flags while running testing or inference.\n\n * Example usage:\n `python src/test.py -in-days=2 -out-days=1 -forcings-dir=${FORCINGS_DIR} -reanalysis-dir=${REANALYSIS_DIR} -checkpoint-file=\'path/to/checkpoint\'`\n\n## Implementation overview\n\n![deep-learning-network-architecture](./docs/source/_static/unet_tapered.svg)\nWe implement a modified U-Net style Deep Learning architecture using [PyTorch 1.6](https://pytorch.org/docs/stable/index.html). We use [PyTorch Lightning](https://github.com/PyTorchLightning/pytorch-lightning) for code organisation and reducing boilerplate. The mammoth size of the total original dataset (~1TB) means we use extensive GPU acceleration in the code using [NVIDIA CUDA Toolkit](https://developer.nvidia.com/cuda-toolkit). For a GeForce RTX 2080 with 12GB memory and 40 vCPUs with 110 GB RAM, this translates to a 25x speedup over using only 8 vCPUs with 52GB RAM.\n\nFor reading geospatial datasets, we use [`xarray`](http://xarray.pydata.org/en/stable/quick-overview.html) and [`netcdf4`](https://unidata.github.io/netcdf4-python/netCDF4/index.html). The [`imbalanced-learn`](https://imbalanced-learn.readthedocs.io/en/stable/under_sampling.html) library is useful for undersampling to tackle the high data skew. Code linting and formatting are done using [`black`](https://black.readthedocs.io/en/stable/) and [`flake8`](https://flake8.pycqa.org/en/latest/).\n\n - The entry point for training is [src/train.py](src/train.py). The input variables used for training the model, as configured by default in the [master](https://github.com/esowc/wildfire-forecasting/tree/master) branch, are `Temperature`, `Precipitation`, `Windspeed` and `Relative Humidity`. Support for the additional variables `Leaf Area Index`, `Volumetric Soil Water Level 1` and `Land Skin Temperature` is implemented in the respective [branches](https://github.com/esowc/wildfire-forecasting/branches):\n\n - For training with input variables `t2`, `tp`, `wspeed` and `rh` + additionally `lai`, switch to the [lai](https://github.com/esowc/wildfire-forecasting/tree/lai) branch. **Note:** You will additionally require the data for precisely these 5 variables in the /data dir to perform the training/inference for this combination of inputs.\n\n - For training with input variables `t2`, `tp`, `wspeed` and `rh` + additionally `swvl1`, switch to the [swvl1](https://github.com/esowc/wildfire-forecasting/tree/swvl1) branch. **Note:** You will additionally require the data for precisely these 5 variables in the /data dir to perform the training/inference for this combination of inputs.\n\n - For training with input variables `t2`, `tp`, `wspeed` and `rh` + additionally `skt`, switch to the [skt](https://github.com/esowc/wildfire-forecasting/tree/skt) branch. **Note:** You will additionally require the data for precisely these 5 variables in the /data dir to perform the training/inference for this combination of inputs.\n\n - For training with input variables `t2`, `tp`, `wspeed` and `rh` + additionally `skt` as well as `swvl1`, switch to the [skt+swvl1](https://github.com/esowc/wildfire-forecasting/tree/skt+swvl1) branch. 
**Note:** You will additionally require the data for precisely these 6 variables in the `/data` dir to perform the training/inference for this combination of inputs.\n\n * **Example Usage**: `python src/train.py [-in-days 4] [-out-days 1] [-forcings-dir ${FORCINGS_DIR}] [-reanalysis-dir ${REANALYSIS_DIR}]`\n\n * **Dataset**: We train our model on 1 year of global data. The `gs://deepgeff-data-v0` dataset demonstrated in the various EDA and Inference notebooks is not intended for use with `src/train.py`. The scripts will fail if used with those small datasets. If you intend to re-run the training, reach out to us for access to the larger dataset required by the scripts.\n\n * **Logging**: We use [Weights & Biases](https://www.wandb.com/) for logging our training. When running the training script, you can either provide a `wandb API key` or choose to skip logging altogether. W&B logging is free and lets you monitor your training remotely. You can sign up for an account and then use `wandb login` from inside the environment to supply the key.\n\n * **Visualizing Results**: Upon completion of training, the results summary JSON from `wandb` can be visualized in terms of Accuracy %, MSE % and MAE % using the plotting module.\n * **Example Usage**:\n `python src/plot.py -f -i -o `\n\n* The entry point for inference is [src/test.py](src/test.py). **Note:** When performing inference for a model trained with an additional variable in any of the branches, ensure access to the respective variables in the `/data` dir.\n * **Example Usage**: `python src/test.py [-in-days 4] [-out-days 1] [-forcings-dir ${FORCINGS_DIR}] [-reanalysis-dir ${REANALYSIS_DIR}] [-checkpoint-file ${CHECKPOINT_FILE}]`\n\n* **Configuration Details**:\n\n
Optional arguments (default values indicated below):\n\n ` -h, --help show this help message and exit`\n
    -init-features 16                       Architecture complexity [int]\n    -in-days 4                              Number of input days [int]\n    -out-days 1                             Number of output days [int]\n    -epochs 100                             Number of training epochs [int]\n    -learning-rate 0.001                    Maximum learning rate [float]\n    -batch-size 1                           Batch size of the input [int]\n    -split 0.2                              Test split fraction [float]\n    -use-16bit True                         Use 16-bit precision for training (train only) [Bool]\n    -gpus 1                                 Number of GPUs to use [int]\n    -optim one_cycle                        Learning rate optimizer: one_cycle or cosine (train only) [str]\n    -dry-run False                          Use small amount of data for sanity check [Bool]\n    -case-study False                       The case-study region to use for inference: australia, california, portugal, siberia, chile, uk [Bool/str]\n    -clip-output False                      Limit the inference to the output values within supplied range (e.g. 0.5,60) [Bool/list]\n    -boxcox 0.1182                          Apply boxcox transformation with specified lambda while training and the inverse boxcox transformation during inference [Bool/float]\n    -binned ""0,5.2,11.2,21.3,38.0,50""       Show the extended metrics for the supplied comma-separated binned FWI value range [Bool/list]\n    -undersample False                      Undersample datapoints with FWI smaller than the specified value (e.g. -undersample=10) [Bool/float]\n    -round-to-zero False                    Round off the target values below the specified threshold to zero [Bool/float]\n    -date-range 2019-04-01,2019-05-01       Limit prediction to a smaller subset of dates than available in the data directories [Bool/float]\n    -cb_loss False                          Use Class-Balanced loss with the supplied beta parameter [Bool/float]\n    -chronological_split False              Do a chronological train-test split in the specified ratio [Bool/float]\n    -model unet_tapered                     Model to use: unet, unet_downsampled, unet_snipped, unet_tapered, unet_interpolated [str]\n    -out fwi_reanalysis                     Output data for training: gfas_frp or fwi_reanalysis [str]\n    -smos_input False                       Use soil-moisture input data [Bool]\n    -forecast-dir ${FORECAST_DIR}           Directory containing forecast data. Alternatively set $FORECAST_DIR [str]\n    -forcings-dir ${FORCINGS_DIR}           Directory containing forcings data. Alternatively set $FORCINGS_DIR [str]\n    -reanalysis-dir ${REANALYSIS_DIR}       Directory containing reanalysis data. Alternatively set $REANALYSIS_DIR [str]\n    -smos-dir ${SMOS_DIR}                   Directory containing soil moisture data. Alternatively set $SMOS_DIR [str]\n    -mask src/dataloader/mask.npy           File containing the mask stored as a numpy array [str]\n    -benchmark False                        Benchmark the FWI-Forecast data against FWI-Reanalysis [Bool]\n    -comment Comment of choice!             Used for logging [str]\n    -checkpoint-file                        Path to the test model checkpoint [Bool/str]
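The flags above can also be assembled programmatically. Below is a minimal, illustrative sketch (not part of the repository) that launches the training entry point with a handful of the documented flags via Python's standard `subprocess` module; the fallback directory paths are placeholders and should point at your local forcings and reanalysis data:

```python
# Illustrative sketch only: drive src/train.py with a subset of the
# documented flags. The fallback paths below are placeholders.
import os
import subprocess

forcings_dir = os.environ.get("FORCINGS_DIR", "/path/to/fwi-forcings")
reanalysis_dir = os.environ.get("REANALYSIS_DIR", "/path/to/fwi-reanalysis")

cmd = [
    "python", "src/train.py",
    "-in-days", "4",          # number of input days
    "-out-days", "1",         # number of output days
    "-epochs", "100",         # number of training epochs
    "-batch-size", "1",       # batch size of the input
    "-optim", "one_cycle",    # learning rate optimizer (train only)
    "-forcings-dir", forcings_dir,
    "-reanalysis-dir", reanalysis_dir,
]
subprocess.run(cmd, check=True)
```

The same pattern applies to `src/test.py`, with `-checkpoint-file` pointing at the downloaded checkpoint.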
\n\n* The [src/](src) directory contains the architecture implementation.\n * The [src/dataloader/](src/dataloader) directory contains the implementation specific to the training data.\n * The [src/model/](src/model) directory contains the model implementation.\n * The [src/model/base_model.py](src/model/base_model.py) script has the common implementation used by every model.\n * The [src/config/](src/config) directory stores the config files generated via training.\n\n* The [data/EDA/](data/EDA/) directory contains the Exploratory Data Analysis and Preprocessing required for the forcings data, demonstrated via Jupyter notebooks.\n * Forcings: [data/EDA/EDA_forcings_mini_sample.ipynb](data/EDA/EDA_forcings_mini_sample.ipynb) (*Resolution: 0.07 deg x 0.07 deg, 10 days*)\n * FWI-Reanalysis: [data/EDA/EDA_reanalysis_mini_sample.ipynb](data/EDA/EDA_reanalysis_mini_sample.ipynb) (*Resolution: 0.1 deg x 0.1 deg, 1 day*)\n * FWI-Forecast: [data/EDA/EDA_forecast_mini_sample.ipynb](data/EDA/EDA_forecast_mini_sample.ipynb) (*Resolution: 0.1 deg x 0.1 deg, 10 days*)\n\n\n* A walk-through of the codebase is in the [Code_Structure_Overview.md](Code_Structure_Overview.md).\n\n## Documentation\n\nWe use Sphinx to build our docs and host them on Read the Docs. The latest build of the docs can be accessed online [here](https://wildfire-forecasting.readthedocs.io/en/latest/). In order to build the docs from source, you will need `sphinx` and `sphinx-autoapi`. Follow the instructions below:\n\n```bash\ncd docs\nmake html\n```\n\nOnce the docs are built, you can access them inside [`docs/build/html/`](docs/build/html/index.html).\n\n## Acknowledgements\n\nThis project tackles [Challenge #26](https://github.com/esowc/challenges_2020/issues/10) from Stream 2: Machine Learning and Artificial Intelligence, as part of the [ECMWF Summer of Weather Code 2020](https://esowc.ecmwf.int/) Program.\n\nTeam: Roshni Biswas, Anurag Saha Roy, Tejasvi S Tomar.\n'",,"2020/05/14, 14:42:07",1259,GPL-3.0,0,278,"2022/11/10, 22:50:07",1,44,51,1,349,0,0.2,0.3480176211453745,,,0,2,false,,false,false,,,https://github.com/ECMWFCode4Earth,https://codeforearth.ecmwf.int,Online,,,https://avatars.githubusercontent.com/u/44897980?v=4,,, caliver,CALIbration and VERification of gridded fire danger models.,ecmwf,https://github.com/ecmwf/caliver.git,github,"natural-hazard,geospatial-data,verification,calibration,netcdf,r,wildfire",Wildfire,"2022/02/21, 17:51:40",18,0,1,false,R,European Centre for Medium-Range Weather Forecasts,ecmwf,"R,Dockerfile",https://ecmwf.github.io/caliver,"b'# caliver\n\n> An R package for the **cali**bration and **ver**ification of gridded models\n\n\n[![DOI](https://zenodo.org/badge/DOI/10.5281/zenodo.596343.svg)](https://doi.org/10.5281/zenodo.596343)\n[![R-CMD-check](https://github.com/ecmwf/caliver/workflows/R-CMD-check/badge.svg)](https://github.com/ecmwf/caliver/actions)\n[![Codecov test coverage](https://codecov.io/gh/ecmwf/caliver/branch/master/graph/badge.svg)](https://codecov.io/gh/ecmwf/caliver?branch=master)\n\n[![CRAN Status Badge](http://www.r-pkg.org/badges/version/caliver)](https://cran.r-project.org/package=caliver)\n[![CRAN Total Downloads](http://cranlogs.r-pkg.org/badges/grand-total/caliver)](https://cran.r-project.org/package=caliver)\n[![CRAN Monthly Downloads](http://cranlogs.r-pkg.org/badges/caliver)](https://cran.r-project.org/package=caliver)\n\n\n**caliver** is a package developed for the R programming language. The name stands for **cal**Ibration and **ver**ification of gridded models. 
Although caliver was initially designed for wildfire danger models such as GEFF (developed by ECMWF) and RISICO (developed by CIMA Research Foundation), the algorithms can be applied to any gridded model output. Caliver is available under the Apache-2.0 license.\n\nFor more details, please see the following papers:\n\n- Vitolo C, Di Giuseppe F, D\'Andrea M (2018) **Caliver: An R package for CALIbration and VERification of forest fire gridded model outputs**. PLOS ONE 13(1): e0189419. https://doi.org/10.1371/journal.pone.0189419\n*Please note: in the latest version of the caliver package many functionalities described in this paper have become obsolete and deprecated; please refer to the vignette ""An introduction to the caliver package"" for more details.*\n\n- Vitolo C., Di Giuseppe F., Barnard C., Coughlan R., Krzeminski B., San-Miguel-Ayanz J. **ERA5-based global meteorological wildfire danger maps**. Sci Data 7, 216 (2020). https://doi.org/10.1038/s41597-020-0554-z\n\n- Vitolo C., Di Giuseppe F., Krzeminski B., San-Miguel-Ayanz J. **A 1980–2018 global fire danger re-analysis dataset for the Canadian Fire Weather Indices**, Sci Data 6, 190032 (2019). https://doi.org/10.1038/sdata.2019.32\n\n## Installation\nThe installation of the caliver package depends on the following libraries:\n\n* Geospatial Data Abstraction Library ([GDAL](https://gdal.org/))\n* NetCDF4 ([netcdf4](https://www.unidata.ucar.edu/software/netcdf/))\n\nMake sure you have the above libraries installed before attempting to install caliver.\nOnce all the dependencies are installed, get caliver\'s development version from GitHub using [remotes](https://github.com/r-lib/remotes):\n\n``` r\ninstall.packages(""remotes"")\nremotes::install_github(""ecmwf/caliver"")\n```\n\nAlternatively, the stable version of this package is available on CRAN and can be installed as shown below.\n\n``` r\ninstall.packages(""caliver"")\n```\n\nLoad the package:\n\n``` r\nlibrary(""caliver"")\n```\n\n## Docker\nIn this repository you will find a Dockerfile that contains all the necessary dependencies and the caliver package already installed.\n\n```\ndocker build -t ecmwf/caliver:latest -f Dockerfile .\n```\n\nAlternatively, you can use the image we host on docker hub:\n```\ndocker run -it --rm ecmwf/caliver:latest bash\n```\n\nMeta\n----\n\n- This package and functions herein are part of an experimental open-source project. They are provided as is, without any guarantee.\n- [Contributions are welcome](https://github.com/ecmwf/caliver/blob/master/CONTRIBUTING.md)! Please note that this project is released with a [Contributor Code of Conduct](https://github.com/ecmwf/caliver/blob/master/CONDUCT.md). By participating in this project you agree to abide by its terms.\n- Please [report any issues or bugs](https://github.com/ecmwf/caliver/issues).\n- License: Apache License 2.0\n- Get citation information for `caliver` in R by running `citation(package = ""caliver"")`\n'",",https://doi.org/10.5281/zenodo.596343,https://doi.org/10.1371/journal.pone.0189419,https://doi.org/10.1038/s41597-020-0554-z,https://doi.org/10.1038/sdata.2019.32","2016/11/08, 16:11:16",2542,GPL-3.0,0,630,"2021/03/18, 13:35:05",2,17,26,0,951,0,0.1,0.00520833333333337,"2021/02/19, 16:16:59",2.0,0,3,false,,false,true,,,https://github.com/ecmwf,www.ecmwf.int,"Shinfield Park, Reading, United Kingdom",,,https://avatars.githubusercontent.com/u/6368067?v=4,,, burnr,Basic tools to analyze forest fire history data (e.g. 
FHX) in R.,ltrr-arizona-edu,https://github.com/ltrr-arizona-edu/burnr.git,github,"scientific,ecology,dendrochronology,statistics,plot,r,forestfire,cran,citation",Wildfire,"2022/05/20, 23:08:51",13,0,2,false,R,Laboratory of Tree-Ring Research,ltrr-arizona-edu,R,https://ltrr-arizona-edu.github.io/burnr/,"b'\n\n\n# burnr\n\n\n\n[![CRAN\\_Status\\_Badge](https://www.r-pkg.org/badges/version/burnr)](https://cran.r-project.org/package=burnr)\n[![Coverage\nStatus](https://coveralls.io/repos/github/ltrr-arizona-edu/burnr/badge.svg?branch=master)](https://coveralls.io/github/ltrr-arizona-edu/burnr?branch=master)\n[![R-CMD-check](https://github.com/ltrr-arizona-edu/burnr/workflows/R-CMD-check/badge.svg)](https://github.com/ltrr-arizona-edu/burnr/actions)\n[![downloads](https://cranlogs.r-pkg.org/badges/burnr)](https://cran.r-project.org/package=burnr)\n[![DOI](https://zenodo.org/badge/DOI/10.5281/zenodo.594459.svg)](https://doi.org/10.5281/zenodo.594459)\n\n\n\nBasic tools to analyze forest fire history data (e.g. FHX) in R. This is\ndesigned for power users and projects with special needs.\n\n## Installation\n\nYou can install the released version of burnr from\n[CRAN](https://CRAN.R-project.org) with:\n\n``` r\ninstall.packages(""burnr"")\n```\n\nAnd the development version from [GitHub](https://github.com/) with:\n\n``` r\n# install.packages(""devtools"")\ndevtools::install_github(""ltrr-arizona-edu/burnr"")\n```\n\n## Example\n\nThis is a basic example which shows you how to solve a common problem:\n\n``` r\nlibrary(burnr)\n\n# This gives you a basic plot. There are more advanced options. For example, we can color our plot by sample species.\n\ndata(lgr2)\ndata(lgr2_meta)\n\nplot(lgr2,\n color_group = lgr2_meta$SpeciesID,\n color_id = lgr2_meta$TreeID,\n plot_legend = TRUE\n)\n```\n\n\n\n## Support\n\nDocumentation is included in the code. If you\'re new to `burnr`, our\n[2018 paper in\nDendrochronologia](https://doi.org/10.1016/j.dendro.2018.02.005) is a\nnice survey of the package with many examples. We also have\ninstructional vignettes on the project website. And you can work through\nexamples, with included data, in an R project hosted by @chguiterman on\nGitHub. We\'re working to\nenhance our instruction and add to these demos on the burnr website, so\nplease send us requests for new tips and tricks, or create your own and\nshare with us!\n\n## Citation\n\nPlease cite the original `burnr` paper if you use it in your research:\n\n> Malevich, Steven B., Christopher H. Guiterman, and Ellis Q. Margolis\n> (2018) [Burnr: Fire History Analysis and Graphics in\n> R](https://www.sciencedirect.com/science/article/abs/pii/S1125786517301418?via%3Dihub).\n> *Dendrochronologia* 49: 9–15. DOI: 10.1016/j.dendro.2018.02.005.\n\nCitations help us to identify user needs and justify additional time\ndeveloping and maintaining `burnr`.\n\n## Development\n\nPlease file bugs in the [bug\ntracker](https://github.com/ltrr-arizona-edu/burnr/issues).\n\nWant to contribute? Great! We\'re following [Hadley\'s packaging\nworkflow](https://r-pkgs.org/) and [style\nguide](https://style.tidyverse.org/). Fork away.\n\nIf you\'re not a developer, don\'t worry! 
We also welcome help with\ndocumentation and tutorials.\n'",",https://doi.org/10.5281/zenodo.594459,https://doi.org/10.1016/j.dendro.2018.02.005","2012/08/03, 22:12:47",4100,GPL-3.0,0,513,"2022/05/20, 23:20:50",16,46,189,0,523,0,0.7,0.19683257918552033,"2022/03/02, 03:06:32",v0.6.1,4,2,false,,false,false,,,https://github.com/ltrr-arizona-edu,http://ltrr.arizona.edu/,"The University of Arizona, Tucson AZ 85721, USA",,,https://avatars.githubusercontent.com/u/5680763?v=4,,, Pyrovision,Computer vision library for wildfire detection.,pyronear,https://github.com/pyronear/pyro-vision.git,github,"wildfire,python,deep-learning,pytorch,image-classification,object-detection,keypoint-detection,computer-vision",Wildfire,"2023/10/25, 18:43:12",45,9,7,true,Python,PyroNear,pyronear,"Python,Makefile,Dockerfile",https://pyronear.org/pyro-vision/,"b'![PyroNear Logo](docs/source/_static/images/logo.png)\n\n


\n\n\n# Pyrovision: wildfire early detection\n\nThe increasing adoption of mobile phones has significantly shortened the time required for firefighting agents to be alerted of a starting wildfire. In less dense areas, limiting and minimizing this duration remains critical to preserve forest areas.\n\nPyrovision aims at providing the means to create a wildfire early detection system with state-of-the-art performance at minimal deployment cost.\n\n\n\n## Quick Tour\n\n### Automatic wildfire detection in PyTorch\n\nYou can use the library like any other python package to detect wildfires as follows:\n\n```python\nfrom pyrovision.models import rexnet1_0x\nfrom torchvision import transforms\nimport torch\nfrom PIL import Image\n\n\n# Init\nnormalize = transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])\n\ntf = transforms.Compose([transforms.Resize(size=448), transforms.CenterCrop(size=448),\n transforms.ToTensor(), normalize])\n\nmodel = rexnet1_0x(pretrained=True).eval()\n\n# Predict\nim = tf(Image.open(""path/to/your/image.jpg"").convert(\'RGB\'))\n\nwith torch.no_grad():\n pred = model(im.unsqueeze(0))\n is_wildfire = torch.sigmoid(pred).item() >= 0.5\n```\n\n\n## Setup\n\nPython 3.6 (or higher) and [pip](https://pip.pypa.io/en/stable/)/[conda](https://docs.conda.io/en/latest/miniconda.html) are required to install PyroVision.\n\n### Stable release\n\nYou can install the latest stable release of the package using [pypi](https://pypi.org/project/pyrovision/) as follows:\n\n```shell\npip install pyrovision\n```\n\nor using [conda](https://anaconda.org/pyronear/pyrovision):\n\n```shell\nconda install -c pyronear pyrovision\n```\n\n### Developer installation\n\nAlternatively, if you wish to use the latest features of the project that haven\'t made their way to a release yet, you can install the package from source:\n\n```shell\ngit clone https://github.com/pyronear/pyro-vision.git\npip install -e pyro-vision/.\n```\n\n\n## What else\n\n### Documentation\n\nThe full package documentation is available [here](https://pyronear.org/pyro-vision/) for detailed specifications.\n\n### Demo app\n\nThe project includes a minimal demo app using [Gradio](https://gradio.app/)\n\n![demo_app](https://user-images.githubusercontent.com/26927750/179017766-326fbbff-771d-4680-a230-b2785ee89c4d.png)\n\nYou can check the live demo, hosted on :hugs: [HuggingFace Spaces](https://huggingface.co/spaces) :hugs: over here :point_down:\n[![Hugging Face Spaces](https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Spaces-blue)](https://huggingface.co/spaces/pyronear/vision)\n\n\n### Docker container\n\nIf you wish to deploy containerized environments, a Dockerfile is provided for you to build a docker image:\n\n```shell\ndocker build . -t \n```\n\n### Minimal API template\n\nLooking for a boilerplate to deploy a model from PyroVision with a REST API? Thanks to the wonderful [FastAPI](https://github.com/tiangolo/fastapi) framework, you can do this easily. 
Follow the instructions in [`./api`](api) to get your own API running!\n\n\n### Reference scripts\n\nIf you wish to train models on your own, we provide training scripts for multiple tasks!\nPlease refer to the [`./references`](references) folder if that\'s the case.\n\n\n## Citation\n\nIf you wish to cite this project, feel free to use this [BibTeX](http://www.bibtex.org/) reference:\n\n```bibtex\n@misc{pyrovision2019,\n title={Pyrovision: wildfire early detection},\n author={Pyronear contributors},\n year={2019},\n month={October},\n publisher = {GitHub},\n howpublished = {\\url{https://github.com/pyronear/pyro-vision}}\n}\n```\n\n\n## Contributing\n\nPlease refer to [`CONTRIBUTING`](CONTRIBUTING.md) to help grow this project!\n\n\n\n## License\n\nDistributed under the Apache 2 License. See [`LICENSE`](LICENSE) for more information.\n'",,"2019/09/08, 13:15:41",1508,Apache-2.0,11,143,"2023/10/18, 17:48:55",7,133,173,12,7,1,0.3,0.46099290780141844,"2022/07/20, 00:42:31",v0.2.0,0,10,true,"github,patreon,open_collective,ko_fi,tidelift,community_bridge,liberapay,issuehunt,otechie,custom",true,true,"pm4-graders/3ES,MateoLostanlen/pyro-vision-test,MateoLostanlen/pyro-engine-test,MateoLostanlen/pyro-mlops,mateoIdemia/train-iqa,mateoIdemia/train-opa,mateoIdemia/train-autoencoder,MateoLostanlen/train-wildfire,pyronear/pyro-engine",,https://github.com/pyronear,https://pyronear.org/,Paris,,,https://avatars.githubusercontent.com/u/61667887?v=4,,, Pyronear Risks,The pyro-risks project aims at providing the pyronear-platform with a machine learning based wildfire forecasting capability.,pyronear,https://github.com/pyronear/pyro-risks.git,github,"python3,wildfire-forecasting,scikit-learn",Wildfire,"2023/10/20, 15:57:42",22,0,5,true,Jupyter Notebook,PyroNear,pyronear,"Jupyter Notebook,Python,Dockerfile,Procfile",https://pyronear.github.io/pyro-risks,"b'

# Pyronear Risks

\n\nThe pyro-risks project aims at providing the pyronear-platform with a machine-learning-based wildfire forecasting capability. \n\n## Table of Contents\n\n- [Table of Contents](#table-of-contents)\n- [Getting started](#getting-started)\n - [Prerequisites](#prerequisites)\n - [Installation](#installation)\n- [Usage](#usage)\n - [Web server](#web-server)\n- [Examples](#examples)\n - [Datasets](#datasets)\n - [Scripts](#scripts)\n- [Documentation](#documentation)\n- [Contributing](#contributing)\n- [Credits](#credits)\n- [License](#license)\n\n## Getting started\n\n### Prerequisites\n\n- Python 3.6 (or more recent), but < 3.12.0\n- [pip](https://pip.pypa.io/en/stable/)\n### Installation\n\nYou can install the package from GitHub as follows:\n\n```shell\npip install git+https://github.com/pyronear/pyro-risks\n```\n\n## Usage\n\nBeforehand, you will need to set a few environment variables, either manually or by writing a `.env` file in the root directory of this project, like in the example below:\n\n```\nCDS_UID=my_secret_uid\nCDS_API_KEY=my_very_secret_key\n```\nThose values allow your web server to connect to the CDS [API](https://github.com/ecmwf/cdsapi), which is required for full access to the datasets.\n\n### Web server\n\nTo expose model inference, you can run a web server in docker containers with this command:\n\n```bash\nPORT=8003 docker-compose up -d --build\n```\n\nOnce it completes, you will have a docker container running on the port you selected, which can process requests just like any web server.\n\n## Examples\n### Datasets\n\nAccess the main pyro-risks datasets locally. \n\n```python\nfrom pyro_risks.datasets import NASAFIRMS, NASAFIRMS_VIIRS, GwisFwi, ERA5T, ERA5Land\n\nmodis = NASAFIRMS()\nviirs = NASAFIRMS_VIIRS()\n\nfdi = GwisFwi()\n\nera = ERA5T()\nera_land = ERA5Land()\n```\n### Scripts\n\nYou are free to merge the datasets however you want and to implement any zonal statistic you want, but some are already provided for reference. To use them, check the example script options as follows:\n\n```shell\npython scripts/example_ERA5_FIRMS.py --help\n```\n\nYou can then run the script with your own arguments:\n\n```shell\npython scripts/example_ERA5_FIRMS.py --type_of_merged departements\n```\n\n## Documentation\n\nThe full package documentation is available [here](https://pyronear.org/pyro-risks/) for detailed specifications. The documentation was built with [Sphinx](https://www.sphinx-doc.org) using a [theme](https://github.com/readthedocs/sphinx_rtd_theme) provided by [Read the Docs](https://readthedocs.org).\n\n## Contributing\n\nPlease refer to the [`CONTRIBUTING`](./CONTRIBUTING.md) guide if you wish to contribute to this project.\n\n## Credits\n\nThis project is developed and maintained by the repo owner and volunteers from [Data for Good](https://dataforgood.fr/).\n\n## License\n\nDistributed under the Apache v2 License. 
See [`LICENSE`](./LICENSE) for more information.\n'",,"2020/09/30, 17:19:23",1120,Apache-2.0,17,51,"2023/10/20, 15:57:43",17,49,59,10,5,4,0.7,0.6956521739130435,"2020/12/24, 16:05:14",v0.1.0,8,8,true,"github,patreon,open_collective,ko_fi,tidelift,community_bridge,liberapay,issuehunt,otechie,custom",true,true,,,https://github.com/pyronear,https://pyronear.org/,Paris,,,https://avatars.githubusercontent.com/u/61667887?v=4,,, Wildfire Predictive Services,"Wildfire Predictive Services to support decision making in prevention, preparedness, response and recovery.",bcgov,https://github.com/bcgov/wps.git,github,"flnr,wildfire,bcws,python,fastapi,react,javascript,typescript,weather,postgis,postgres,postgresql,cffdrs,hacktoberfest",Wildfire,"2023/10/25, 15:58:49",35,0,12,true,Python,Province of British Columbia,bcgov,"Python,TypeScript,Gherkin,Shell,HTML,Dockerfile,Makefile,JavaScript,CSS,Mako",,"b""[![Issues](https://img.shields.io/github/issues/bcgov/wps.svg?style=for-the-badge)](/../../issues)\n[![MIT License](https://img.shields.io/github/license/bcgov/wps.svg?style=for-the-badge)](/LICENSE)\n[![Lifecycle](https://img.shields.io/badge/Lifecycle-Stable-97ca00?style=for-the-badge)](https://github.com/bcgov/repomountie/blob/master/doc/lifecycle-badges.md)\n[![codecov](https://codecov.io/gh/bcgov/wps/branch/main/graph/badge.svg?token=QZh80UTLpT)](https://codecov.io/gh/bcgov/wps)\n\n# Wildfire Predictive Services\n\n## Description\n\nWildfire Predictive Services to support decision making in prevention, preparedness, response and recovery.\n\n## Getting Started\n\n### Dependencies\n\n### Installing\n\n#### Running the application locally in docker:\n\n1. Create `.env` file in `web` using `web/.env.example` as a sample.\n2. Create `.env.docker` file in `api/app` using `api/app/.env.example` as a sample.\n3. Run `docker compose build` and then `docker compose up`\n4. Open [http://localhost:8080](http://localhost:8080) to view the front end served up from a static folder by the python api.\n5. 
Open [http://localhost:3000](http://localhost:3000) to view the front end served up in developer mode by node.\n\n#### Developing the application in a dev container, using vscode:\n\n- Open up the project: `Remote-Containers: Open Folder in Container`, select docker-compose.vscode.yml\n- Sometimes VSCode doesn't pick up you've changed the docker container: `Remote-Containers: Rebuild Container`\n- Install extensions into the container, as needed.\n- You can point the API database to: `host.docker.internal`\n- You can start up other services outside of vscode, e.g.: `docker compose up db` and `docker compose up redis`\n\n#### Running the api alone\n\nRefer to [api/README.md](api/README.md).\n\n#### Running the front end alone\n\nRefer to [web/README.md](web/README.md)\n\n### Documentation\n\n- [Database](docs/DB.md)\n- [Devops](docs/DEVOPS.md)\n- [Conventions](docs/CONVENTIONS.md)\n- [Wildfire Glossary](https://github.com/bcgov/wps/wiki/Glossary)\n\n## License\n\n[Apache 2.0](https://www.apache.org/licenses/LICENSE-2.0) License - see the [LICENSE.md](https://github.com/bcgov/wps/blob/main/LICENSE)\n\n## Contributing\n\n### PRs\n\nYour Github PR is required to pass all our CI checks, including our test coverage threshold via CodeCov: https://docs.codecov.com/docs/about-code-coverage\n\n### Resources\n\n- [Issues](https://github.com/bcgov/wps/issues)\n- [ZenHub Board](https://app.zenhub.com/workspaces/wildfire-predictive-services-5e321393e038fba5bbe203b8/board?repos=235861506,237125626,237125691)\n- [PEP8](https://github.com/python/peps/blob/master/pep-0008.txt) and [PEP20](https://github.com/python/peps/blob/master/pep-0020.txt) coding conventions, but with 110 character line breaks\n- [Code of Conduct](https://github.com/bcgov/wps/blob/master/CONDUCT.md)\n\n## Acknowledgments\n\n[![SonarCloud](https://sonarcloud.io/images/project_badges/sonarcloud-white.svg)](https://sonarcloud.io/dashboard?id=bcgov_wps)\n""",,"2020/01/23, 18:42:10",1371,Apache-2.0,359,1221,"2023/10/25, 20:01:12",450,1523,2735,796,0,3,0.5,0.711573790569504,,,4,14,false,,false,true,,,https://github.com/bcgov,https://github.com/bcgov/BC-Policy-Framework-For-GitHub,Canada,,,https://avatars.githubusercontent.com/u/916280?v=4,,, Global ECMWF Fire Forecasting,The model is a Fortran program to calculate fire danger indices from atmospheric inputs.,projects/CEMSF/repos/geff,,custom,,Wildfire,,,,,,,,,,https://git.ecmwf.int/projects/CEMSF/repos/geff/browse,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,, FIREDpy,Classifying fire events from the Collection 6 MODIS Burned Area Product.,earthlab,https://github.com/earthlab/firedpy.git,github,,Wildfire,"2023/09/13, 02:30:15",27,0,6,true,Python,Earth Lab,earthlab,"Python,HTML,R,Dockerfile,Shell",,"b'\n[![DOI](https://zenodo.org/badge/214283770.svg)](https://zenodo.org/badge/latestdoi/214283770) [![Docker Automated build](https://img.shields.io/docker/automated/earthlab/firedpy?style=plastic)](https://hub.docker.com/repository/docker/earthlab/firedpy/builds) ![GitHub contributors](https://img.shields.io/github/contributors/earthlab/firedpy) [![GitHub issues](https://img.shields.io/github/issues/earthlab/firedpy)](https://github.com/earthlab/firedpy/issues) ![GitHub commit activity](https://img.shields.io/github/commit-activity/w/earthlab/firedpy) \n\n# FIREDpy - FIRe Event Delineation for python\n\nA Python Command Line Interface for classifying fire events from the Collection 6 MODIS Burned Area Product.\n\nThis package uses a space-time window to classify individual burn detections from late 2001 to 
near-present into discrete events and return both a data table and shapefiles of these events. The user is able to specify the spatial and temporal parameters of the window, as well as the spatial and temporal extent, using either a shapefile or a list of MODIS Sinusoidal Projection tile IDs. Shapefiles include full event polygons by default, and the user has the option of having firedpy produce daily-level perimeters, providing a representation of both final and expanding event perimeters. A toy sketch of this space-time grouping appears below.\n\nAny area of the world may be selected. However, in the current version, memory constraints may limit the extent available for a single model run. Equatorial regions have much more fire activity, and may require much more RAM to process than a normal laptop will have.\n\nMore methodological information is at:\n\nBalch, J. K., St. Denis, L. A., Mahood, A. L., Mietkiewicz, N. P., Williams, T. P., McGlinchy, J.,\nand Cook, M. C. 2020. FIRED (Fire Events Delineation): An open, flexible algorithm & database\nof U.S. fire events derived from the MODIS burned area product (2001-19). Remote\nSensing, 12(21), 3498; https://doi.org/10.3390/rs12213498\n\nDescription of the country-level data sets is at: \n\nMahood, A.L., Lindrooth, E.J., Cook, M.C., and Balch, J.K. 2022. Country-level fire perimeter datasets (2001-2021). Nature Scientific Data, 9(458). https://doi.org/10.1038/s41597-022-01572-3\n\n\n\n### BUG ALERT: \n\nMany of the data products created in Fall 2021 may be shifted by a half pixel, and may lack a coordinate reference system. \n\nThe problem is now fixed, so this will not affect new iterations of firedpy. We created a script, R/posthoc_fixes.R, that contains a function to fix either or both of these problems.\n\nSometimes the server (fuoco.geog.umd.edu) that houses the MCD64A1 product used by firedpy is down. If this happens, you just need to wait until it comes back up.\n\nSee the issues tab for more bugs, or to report a new bug!\n\n## Have you used firedpy?\n\nThe algorithm and derived data products are under active development. Please take this [survey](https://docs.google.com/forms/d/e/1FAIpQLSe7ycmS0HGIze2T6TIUox8hsu8nlGsxiUMww8SCeWHDZPhB-Q/viewform?usp=sf_link) to help us improve firedpy.\n\n## Current status of created products\n\nAlready-created products are linked below. They are housed in the CU Scholar data repository in the [Earth Lab Data collection](https://scholar.colorado.edu/collections/pz50gx05h), or [here](https://scholar.colorado.edu/catalog?f%5Bacademic_affiliation_sim%5D%5B%5D=Earth+Lab&locale=en). \n\nAll of the created products have an event-level shapefile in .gpkg and .shp formats. Many countries also have the daily-level shapefile, but these were not created for most countries in Africa and Asia due to computational restrictions. 
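To make the space-time window concrete, the following toy sketch (illustrative only, not firedpy\'s actual implementation) groups detections into events with union-find: two detections belong to the same event whenever they are within the spatial window (in pixels) and the temporal window (in days) of each other, directly or through a chain of intermediate detections:

```python
# Toy illustration of space-time event grouping (not firedpy's code).
# A detection is a (row, col, day) triple on the MODIS grid.
from itertools import combinations

def group_events(detections, spatial=5, temporal=11):
    # Union-find over pairwise window links.
    parent = list(range(len(detections)))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path compression
            i = parent[i]
        return i

    for i, j in combinations(range(len(detections)), 2):
        (r1, c1, d1), (r2, c2, d2) = detections[i], detections[j]
        if max(abs(r1 - r2), abs(c1 - c2)) <= spatial and abs(d1 - d2) <= temporal:
            parent[find(i)] = find(j)  # merge the two events

    events = {}
    for i, det in enumerate(detections):
        events.setdefault(find(i), []).append(det)
    return list(events.values())

# Two detections 2 pixels and 2 days apart join one event;
# a detection 100 days later starts a separate event.
print(group_events([(10, 10, 120), (12, 12, 122), (11, 11, 222)]))
```

A real run operates on vastly more detections, which is why memory use grows with the extent and fire activity of the selected area.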
\n\n![completed countries are below](map_fig.png)\n\n## Click on a link below to download a fire perimeter dataset for your favorite country\n\n### North America\n - [Coterminous USA + Alaska](https://scholar.colorado.edu/concern/datasets/d504rm74m)\n - [US plus Canada](https://scholar.colorado.edu/concern/datasets/8336h304x)\n - [Canada](https://scholar.colorado.edu/concern/datasets/gf06g388c)\n - [Hawaii](https://scholar.colorado.edu/concern/datasets/7h149r06p)\n - [Caribbean (Barbados, Bahamas, Cayman Islands, Cuba, Dominican Republic, Haiti, Jamaica, Montserrat, Puerto Rico, Saint Kitts And Nevis, Trinidad And Tobago, British Virgin Islands, Guadeloupe, Saint Barthelemy)](https://scholar.colorado.edu/concern/datasets/x633f230f)\n - [Mexico and Central America (Belize, Guatemala, Honduras, El Salvador, Nicaragua, Costa Rica, Panama)](https://scholar.colorado.edu/concern/datasets/vd66w1102)\n\n### South America\n - [Bolivia](https://scholar.colorado.edu/concern/datasets/b2773w83t)\n - [Argentina](https://scholar.colorado.edu/concern/datasets/5t34sk58k)\n - [Northern South America (Suriname, French Guiana, Guyana)](https://scholar.colorado.edu/concern/datasets/qv33rx839)\n - [Chile](https://scholar.colorado.edu/concern/datasets/qr46r2052)\n - [Uruguay](https://scholar.colorado.edu/concern/datasets/q524jq130)\n - [Brazil](https://scholar.colorado.edu/concern/datasets/05741s90q)\n - [Peru](https://scholar.colorado.edu/concern/datasets/x346d5616)\n - [Colombia](https://scholar.colorado.edu/concern/datasets/mp48sd91d)\n - [Ecuador](https://scholar.colorado.edu/concern/datasets/pc289k34n)\n - [Venezuela](https://scholar.colorado.edu/concern/datasets/7m01bm95m)\n - [Paraguay](https://scholar.colorado.edu/concern/datasets/rb68xd05p)\n \n[Entire Western hemisphere from Jan 2017 to March 2020, intended for use in conjunction with GOES16 active fire detections.](https://scholar.colorado.edu/concern/datasets/d217qq78g)\n\n### Europe\n - [Northern Europe (Iceland, Sweden, Norway, and Denmark)](https://scholar.colorado.edu/concern/datasets/sb397945f)\n - [Finland](https://scholar.colorado.edu/concern/datasets/6395w836j)\n - [Russia](https://scholar.colorado.edu/concern/datasets/q811kk87t)\n - [Italy](https://scholar.colorado.edu/concern/datasets/v979v416g)\n - [Spain & Portugal](https://scholar.colorado.edu/concern/datasets/gb19f7006)\n - [Western Europe (France, Germany, Poland, Switzerland, Belgium, Netherlands, Luxembourg and Austria)](https://scholar.colorado.edu/concern/datasets/v692t736f)\n - [Central to Southern Europe (Estonia, Latvia, Lithuania, Belarus, Ukraine, Czech Republic, Slovakia, Hungary, Romania, Bulgaria, Montenegro, Bosnia, Turkey, Republic Of Moldova, Serbia, Albania, Slovenia, and North Macedonia)](https://scholar.colorado.edu/concern/datasets/7h149r07z)\n - [Greece](https://scholar.colorado.edu/concern/datasets/bc386k355)\n - [UK and Ireland](https://scholar.colorado.edu/concern/datasets/pc289k33c)\n\n### Africa\n\n - [Angola](https://scholar.colorado.edu/concern/datasets/t435gf21z)\n - [Benin](https://scholar.colorado.edu/concern/datasets/z603qz58m)\n - [Botswana](https://scholar.colorado.edu/concern/datasets/b8515p69g)\n - [Burundi](https://scholar.colorado.edu/concern/datasets/3f462659h)\n - [Burkina Faso](https://scholar.colorado.edu/concern/datasets/9g54xj875)\n - [Cameroon](https://scholar.colorado.edu/concern/datasets/x920fz208)\n - [Central North Africa (Libya, Algeria, Tunisia)](https://scholar.colorado.edu/concern/datasets/8910jv77j)\n - 
[Chad](https://scholar.colorado.edu/concern/datasets/707958762)\n - [Central African Republic](https://scholar.colorado.edu/concern/datasets/pv63g1576)\n - [Democratic Republic of the Congo](https://scholar.colorado.edu/concern/datasets/5425kb88g)\n - [Djibouti](https://scholar.colorado.edu/concern/datasets/1831cm01x)\n - [Equatorial Guinea](https://scholar.colorado.edu/concern/datasets/vx021g32b)\n - [Eritrea](https://scholar.colorado.edu/concern/datasets/5m60qt182)\n - [eSwatini](https://scholar.colorado.edu/concern/datasets/9w0324116)\n - [Ethiopia](https://scholar.colorado.edu/concern/datasets/z316q2977)\n - [Gabon](https://scholar.colorado.edu/concern/datasets/2z10wr67h)\n - [The Gambia](https://scholar.colorado.edu/concern/datasets/pn89d7911)\n - [Ghana](https://scholar.colorado.edu/concern/datasets/2r36tz735)\n - [Guinea](https://scholar.colorado.edu/concern/datasets/05741s910)\n - [Guinea-Bissau](https://scholar.colorado.edu/concern/datasets/nc580n858)\n - [Ivory Coast](https://scholar.colorado.edu/concern/datasets/vq27zp62f)\n - [Kenya](https://scholar.colorado.edu/concern/datasets/1j92g871c)\n - [Lesotho](https://scholar.colorado.edu/concern/datasets/cr56n229w)\n - [Liberia](https://scholar.colorado.edu/concern/datasets/6h440t58k)\n - [Madagascar](https://scholar.colorado.edu/concern/datasets/fb494955x)\n - [Malawi](https://scholar.colorado.edu/concern/datasets/5999n464m)\n - [Mali](https://scholar.colorado.edu/concern/datasets/pr76f4544)\n - [Mauritania](https://scholar.colorado.edu/concern/datasets/x059c864s)\n - [Morocco](https://scholar.colorado.edu/concern/datasets/td96k3751)\n - [Mozambique](https://scholar.colorado.edu/concern/datasets/1n79h5504)\n - [Namibia](https://scholar.colorado.edu/concern/datasets/db78td244)\n - [Niger](https://scholar.colorado.edu/concern/datasets/m039k605q)\n - [Nigeria](https://scholar.colorado.edu/concern/datasets/cv43nx78p)\n - [Republic of the Congo](https://scholar.colorado.edu/concern/datasets/nk322f305)\n - [Rwanda](https://scholar.colorado.edu/concern/datasets/st74cr782)\n - [Senegal](https://scholar.colorado.edu/concern/datasets/tt44pp176)\n - [Sierra Leone](https://scholar.colorado.edu/concern/datasets/5712m779r)\n - [Somalia](https://scholar.colorado.edu/concern/datasets/xd07gt798)\n - [Somaliland](https://scholar.colorado.edu/concern/datasets/8c97kr53f)\n - [South Africa](https://scholar.colorado.edu/concern/datasets/rf55z8833)\n - [South Sudan](https://scholar.colorado.edu/concern/datasets/b2773w89g)\n - [Sudan](https://scholar.colorado.edu/concern/datasets/g158bj37v)\n - [Tanzania](https://scholar.colorado.edu/concern/datasets/7w62f947x)\n - [Togo](https://scholar.colorado.edu/concern/datasets/fj236325p)\n - [Uganda](https://scholar.colorado.edu/concern/datasets/hh63sx004)\n - [Zambia](https://scholar.colorado.edu/concern/datasets/6108vc441)\n - [Zimbabwe](https://scholar.colorado.edu/concern/datasets/f7623d95c)\n\n### Asia\n\n - [China](https://scholar.colorado.edu/concern/datasets/qz20st810)\n - [India](https://scholar.colorado.edu/concern/datasets/ht24wk47t)\n - [Central Asia (Turkmenistan, Kazakhstan, Uzbekistan, Kyrgystan, Tajikistan, Afghanistan, and Pakistan)](https://scholar.colorado.edu/concern/datasets/47429b07v)\n - [Middle East (Saudi Arabia, Qatar, Oman, Yemen, United Arab Emirates, Iraq, Jordan, Syria, Israel, Palestine, Lebanon, Egypt)](https://scholar.colorado.edu/concern/datasets/5d86p139h)\n - [Mongolia](https://scholar.colorado.edu/concern/datasets/4x51hk21h)\n - [Caucasus (Armenia, Azerbaijan, 
Georgia)](https://scholar.colorado.edu/concern/datasets/gf06g385j)\n - [Japan](https://scholar.colorado.edu/concern/datasets/dz010r34v)\n - [South Korea](https://scholar.colorado.edu/concern/datasets/pg15bg177)\n - [North Korea](https://scholar.colorado.edu/concern/datasets/3j333327g)\n - [Taiwan](https://scholar.colorado.edu/concern/datasets/df65v9276)\n - [Sri Lanka](https://scholar.colorado.edu/concern/datasets/9z9030982)\n - [Nepal](https://scholar.colorado.edu/concern/datasets/mk61rj10w)\n - [Bhutan](https://scholar.colorado.edu/concern/datasets/n009w342h)\n - [Bangladesh](https://scholar.colorado.edu/concern/datasets/d791sh33k)\n - [Vietnam](https://scholar.colorado.edu/concern/datasets/h702q7566)\n - [Thailand](https://scholar.colorado.edu/concern/datasets/xs55md39h)\n - [Laos](https://scholar.colorado.edu/concern/datasets/bz60cx389)\n - [Myanmar](https://scholar.colorado.edu/concern/datasets/pk02cb86p)\n\n### Australia (state by state)\n\n - [Tasmania](https://scholar.colorado.edu/concern/datasets/c534fq19w)\n - [Victoria](https://scholar.colorado.edu/concern/datasets/2r36tz74f)\n - [New South Wales + Capital Territory](https://scholar.colorado.edu/concern/datasets/37720d85c)\n - [Queensland](https://scholar.colorado.edu/concern/datasets/cr56n230n)\n - [South Australia](https://scholar.colorado.edu/concern/datasets/fn107015p)\n - [Western Australia](https://scholar.colorado.edu/concern/datasets/k35695559)\n - [Northern Territory](https://scholar.colorado.edu/concern/datasets/bn9997900)\n\n### Oceania\n\n - [Philippines](https://scholar.colorado.edu/concern/datasets/7d278v06f)\n - [Papua New Guinea](https://scholar.colorado.edu/concern/datasets/3r074w183)\n - [East Timor](https://scholar.colorado.edu/concern/datasets/j098zc184)\n - [New Zealand](https://scholar.colorado.edu/concern/datasets/9g54xj88f)\n - [Malaysia](https://scholar.colorado.edu/concern/datasets/fq977w13f)\n - [Brunei](https://scholar.colorado.edu/concern/datasets/mp48sd92p)\n - [Indonesia](https://scholar.colorado.edu/concern/datasets/p2676w918)\n\n\n## Installation\n\nThere are two ways to install firedpy. Method 1 is to run it from a docker container; Method 2 is to install it locally.\n\n### Method 1. Run from a Docker Container:\n\n#### 1.1 Get the docker container running:\n\nNote: the docker container has changed from `earthlab/firedpy` to `earthlabcu/firedpy`\n\n - `docker run -t -d earthlabcu/firedpy`\n \n - Call `docker ps` to get the name of the docker container you just created.\n\n - Then get into the docker container by running docker exec:\n\n `docker exec -it /bin/bash`\n\n - Then you will be inside of the docker container in the firedpy directory. Now, enter:\n\n `conda activate firedpy`\n\n And the environment is ready to use.\n\n#### 1.2 Copy firedpy outputs to your local machine\n\nAfter creating a new fire product, it might be useful to get it out of the docker container in order to use it.\n\n - First, exit the docker container by typing\n\n `exit`\n\n - Second, copy the file out. Here we will use the example of a container with the name ""unruffled_clarke"". The `docker cp` command uses the syntax `docker cp `. 
Files inside of a docker container will have a prefix of the docker container name (or container ID) followed by a colon, then a normal path.\n\n Here is an example command using the container name:\n\n `docker cp unruffled_clarke:/home/firedpy/proj/outputs/shapefiles/fired_events_s5_t11_2020153.gpkg /home/Documents/fired_events_s5_t11_2020153.gpkg`\n\n Another example command using the container ID:\n\n `docker cp fa73c6d3e007:/home/firedpy/proj/outputs/shapefiles/fired_events_s5_t11_2020153.gpkg /home/Documents/fired_events_s5_t11_2020153.gpkg`\n\n\n### Method 2. Local Installation Instructions:\n\n - Clone this repository to a local folder and change directories into it:\n\n `git clone https://github.com/earthlab/firedpy.git`\n\n `cd firedpy`\n \n - Ensure your anaconda setup has the **conda-forge** channel added, **channel_priority** set to **strict**, and an **up-to-date conda**:\n\n `conda update conda --yes`\n `conda config --add channels conda-forge`\n `conda config --set channel_priority strict`\n \n - You must have all packages listed in the environment.yaml installed using \'conda install -c conda-forge \'\n\n - Create and activate a conda environment:\n\n `conda env create -f environment.yaml`\n\n `conda activate firedpy` \n\n - Install locally:\n\n `python setup.py install`\n\n\n## Use:\n - Run firedpy with no options to be prompted with input questions for each option/attribute\n \n `firedpy` \n \n - Or use the following commands in your command line to specify the options/attributes you would like: \n\n - In your terminal use this command to print out the available options and their descriptions:\n\n `firedpy --help`\n\n - Run firedpy with the default option to download required data and write a data table of classified fire events to a temporary directory. 
This uses CONUS as the default area of interest with a spatial parameter of 5 pixels (~2.3 km) and 11 days:\n\n `firedpy --default`\n\n - Change the spatial and temporal parameters of the model run:\n\n `firedpy -spatial 6 -temporal 10`\n\n - Specify specific tiles and a local project_directory for required data and model outputs:\n\n `firedpy -spatial 6 -temporal 10 -aoi h11v09 h12v09 -proj_dir /home//fired_project`\n\n - Write shapefiles as outputs in addition to the data table:\n\n `firedpy -spatial 6 -temporal 10 -aoi h11v09 h12v09 -proj_dir /home//fired_project --shapefile`\n\n - Add the most common level 3 Ecoregion as an attribute to each event:\n\n `firedpy -spatial 6 -temporal 10 -aoi h11v09 h12v09 -proj_dir /home//fired_project --shapefile -ecoregion_level 3`\n\n - Add landcover information and produce the daily burn file:\n\n `firedpy -spatial 6 -temporal 10 -aoi h11v09 h12v09 -proj_dir /home//fired_project --shapefile -ecoregion_level 3 -landcover_type 1 -daily yes`\n\n For more information about each parameter, use:\n\n `firedpy --help`\n \n \n### Parameter table (under construction)\n \n| parameter | value(s) | example | description |\n|:--------------|:----------|:-----|:---------|\n| -spatial | integer | -spatial 5 | pixel radius for moving window, defaults to 5 |\n| -temporal | integer | -temporal 11 | day radius for moving window, defaults to 11 |\n| -aoi | character (MODIS tile) | -aoi h11v09 | which modis tiles should be used |\n| -aoi | character (shapefile) | -aoi /home/firedpy/individual_countries/canada.gpkg | figures out which modis tiles to download based on the polygon -- **polygon must be in the same projection as MODIS MCD64** -- all the polygons in the *ref* folder are correctly projected and can be used as crs templates to prepare other polygons. |\n| -proj_dir | character | -proj_dir /home/firedpy/proj | which directory should firedpy operate within? Defaults to a folder called ""proj"" within the current working directory. |\n| -ecoregion_type | character | -ecoregion_type na | type of ecoregion, either world or na |\n| -ecoregion_level | integer | -ecoregion_level 3 | if ecoregion type = na, the level (1-3) of North American ecoregions |\n| -landcover_type | integer and character | -landcover_type 2:username:password | number (1-3) corresponding with a MODIS/Terra+Aqua Land Cover (MCD12Q1) category. You will need to also make an account at https://urs.earthdata.nasa.gov/home and include your login information within the argument. |\n| -shp_type | character | -shp_type gpkg | option to build a shapefile for the fired event in gpkg, ESRI shapefile (shp), both, or none |\n| -file | character | -file fired_colorado | specifies the base of the file name for the tables and shapefile outputs, defaults to ""fired"", in the format: ""(-file argument)_toYYYYDDD_(either events or daily).gpkg"", with YYYY being the year, and DDD being the julian day of the last month in the time series. The example would output fired_colorado_to2021031_events.gpkg. |\n| -daily | character (yes or no) | -daily yes | creates daily polygons; if no, just the event-level perimeters will be created. Defaults to no. |\n| -start_yr | integer | -start_yr 2001 | gets the hdf files from the MODIS tiles starting in this year. The first year available is 2001 |\n| -end_yr | integer | -end_yr 2021 | gets the hdf files from the MODIS tiles ending in this year. 
The last year available is 2021 |\n \n \n \n### Boundary files are available for use as areas of interest\n \n - Country boundaries are in **ref/individual_countries**\n - Continent boundaries are in **ref/continents**\n - State boundaries for the United States of America are in **ref/us_states**\n - Australian state boundaries are in **ref/australian_states**\n - For example `firedpy -aoi /home/firedpy/ref/us_states/colorado.gpkg`, and so on. Every space is replaced with \'_\'. \n - If using the user input option, when prompted for the name of the continent, country, or state use ""_"" for spaces. \n - **Ensure that the input shapefiles are in the MODIS sinusoidal projection**\n\n\n## How to update the docker container\n\n- step 0.1. install docker (go to the docker website for OS-specific instructions.)\n- step 0.2. get a dockerhub account\n- step 1. login to docker hub via the command line\n - `docker login` or `sudo docker login`\n- step 2. get the existing docker image set up\n - `docker run -t -d earthlab/firedpy`\n- step 3. update from github\n - `git pull`\n- step 4. build the docker container\n - `docker build -t earthlab/firedpy:latest .`\n- step 5. **ENSURE THE SOFTWARE STILL WORKS BEFORE PUSHING**\n - `firedpy -aoi /home/firedpy/ref/individual_countries/samoa.gpkg`\n- step 6. push it up to dockerhub\n - `docker push earthlab/firedpy:latest`\n'",",https://zenodo.org/badge/latestdoi/214283770,https://doi.org/10.3390/rs12213498,https://doi.org/10.1038/s41597-022-01572-3","2019/10/10, 20:54:12",1476,MIT,5,501,"2023/09/13, 13:06:52",21,11,69,4,42,1,0.0,0.29850746268656714,"2021/09/14, 18:27:38",v1.0,0,6,false,,false,false,,,https://github.com/earthlab,https://www.earthdatascience.org/,"Boulder, Colorado, USA",,,https://avatars.githubusercontent.com/u/19476722?v=4,,, qgis2fds,"Export terrain elevation, landuse, and georeferencing for computational fluid dynamics wildfire or atmospheric pollutants dispersion simulations.",firetools,https://github.com/firetools/qgis2fds.git,github,,Wildfire,"2023/07/24, 12:53:18",13,0,3,true,Python,Fire safety design tools,firetools,"Python,QML,Shell",,b'# *qgis2fds* plugin repository\n\nThe open-source plugin to export terrains and landuse from the [QGIS](http://www.qgis.org)\ngeographic information system to the [NIST Fire Dynamics Simulator (FDS)](https://pages.nist.gov/fds-smv/)\nfor wildfire simulation and atmospheric dispersion of fire pollutants.\n\n * **Learn** how to use this tool on the [wiki pages](https://github.com/firetools/qgis2fds/wiki).\n * **Discuss** usage on its [discussion group](https://groups.google.com/g/qgis2fds).\n * File **issues** on the [issue tracker](https://github.com/firetools/qgis2fds/issues). 
\n \n---\n\nThe development of *qgis2fds* has been funded by a grant from\nthe Italian Ministry of Foreign Affairs and International Cooperation.\n\nThrough the research project WUIFI-21 (High-fidelity computational fluid dynamics modeling of forest fires\nfor Wildland-Urban Interface communities resilience and protection),\nthe participating organizations intend to extend the capabilities of FDS\nfor predicting the propagation of wildland-urban interface fires.\n\n![MAECI](https://github.com/firetools/qgis2fds/wiki/p/web/logo-maeci.jpeg)\n',,"2020/05/04, 07:38:45",1269,GPL-3.0,18,236,"2023/09/09, 14:42:31",9,25,78,41,46,0,0.0,0.06849315068493156,"2023/09/09, 14:42:56",verify-dev,0,6,false,,false,false,,,https://github.com/firetools,,,,,https://avatars.githubusercontent.com/u/16683990?v=4,,, Mesogeos,A multi-purpose dataset for data-driven wildfire modeling in the Mediterranean.,Orion-AI-Lab,https://github.com/Orion-AI-Lab/mesogeos.git,github,"datacube,deep-learning,forests,mediterranean,wildfire-forecasting,zarr",Wildfire,"2023/09/29, 10:13:16",21,0,20,true,Jupyter Notebook,Orion Lab,Orion-AI-Lab,"Jupyter Notebook,Python,Shell",https://orion-ai-lab.github.io/mesogeos/,"b""# Mesogeos: A multi-purpose dataset for data-driven wildfire modeling in the Mediterranean\n\n🆕 2023-09: Accepted at [NeurIPS 2023 Datasets and Benchmarks Track](https://openreview.net/group?id=NeurIPS.cc/2023/Track/Datasets_and_Benchmarks)\n\nThis is the official code repository of the mesogeos dataset. \n\n[Pre-print](https://arxiv.org/abs/2306.05144) of the paper.\n\nThis repo contains code for the following:\n* Creation of the Mesogeos datacube.\n* Extraction of machine learning datasets for different tracks.\n* Training and evaluating machine learning models for these tracks.\n\n**Authors**: *Spyros Kondylatos (1, 2), Ioannis Prapas (1, 2), Gustau Camps-Valls (2), Ioannis Papoutsis (1)*\n\n*(1) Orion Lab, IAASARS, National Observatory of Athens*\n\n*(2) Image & Signal Processing Group, Universitat de València*\n\n## Table of Contents\n\n- [Data repository](#data-repository)\n- [Datacube Generation](#datacube-generation)\n- [Machine Learning Tracks](#machine-learning-tracks)\n - [Track A: Wildfire Danger Forecasting](#track-a-wildfire-danger-forecasting)\n - [Track B: Final Burned Area Prediction](#track-b-final-burned-area-prediction)\n- [Contributing](#contributing)\n- [Datacube Details](#datacube-details)\n- [Citation](#citation)\n- [License](#license)\n- [Acknowledgements](#acknowledgements)\n\n## Data repository\n\nYou can access the data using this [Drive link](https://drive.google.com/drive/folders/1aRXQXVvw6hz0eYgtJDoixjPQO-_bRKz9). 
This link contains the mesogeos datacube (`mesogeos_cube.zarr/`), the extracted datasets for the machine learning tracks (`ml_tracks/`), as well as notebooks showing how to access the mesogeos cube (`notebooks/`).\n\n### Accessing the mesogeos cube\n\nThe mesogeos cube is publicly accessible in the following places:\n\n- OVH S3 storage bucket: [https://my-uc3-bucket.s3.gra.io.cloud.ovh.net/mesogeos.zarr](https://my-uc3-bucket.s3.gra.io.cloud.ovh.net/mesogeos.zarr)\n- Google Drive folder: [https://drive.google.com/drive/folders/1aRXQXVvw6hz0eYgtJDoixjPQO-_bRKz9](https://drive.google.com/drive/folders/1aRXQXVvw6hz0eYgtJDoixjPQO-_bRKz9)\n\n#### Option 1: Access from S3 (Best option to download)\n\n```\nimport zarr\nimport xarray as xr\nimport fsspec\n\nurl = 'https://my-uc3-bucket.s3.gra.io.cloud.ovh.net/mesogeos.zarr'\nds = xr.open_zarr(fsspec.get_mapper(url))\nds\n```\n\nTo run this, make sure the `xarray`, `zarr` and `fsspec` libraries are installed. \n\n**Downloading locally:** You can write the zarr using the [xarray `.to_zarr` method](https://docs.xarray.dev/en/latest/generated/xarray.Dataset.to_zarr.html).\n\n#### Option 2: Access from Google Colab\n[notebooks/1_Exploring_Mesogeos.ipynb](notebooks/1_Exploring_Mesogeos.ipynb) shows how to open Mesogeos directly in Google Colab \n[![colab_link](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/Orion-AI-Lab/mesogeos/blob/main/notebooks/1_Exploring_Mesogeos.ipynb)\n\n## Datacube Generation\n\nFind the code to generate a datacube like mesogeos in [datacube_creation](datacube_creation/).\n\n## Machine Learning Tracks\n### Track A: Wildfire Danger Forecasting\n\nThis track defines wildfire danger forecasting as a binary classification problem.\n\nMore details in [Track A](./ml_tracks/a.fire_danger/)\n\n### Track B: Final Burned Area Prediction\n\nThis track is about predicting the final burned area of a wildfire given the ignition point and the conditions of the fire drivers on the first day of the fire in a neighborhood around the ignition point.\n\nMore details in [Track B](./ml_tracks/b.final_burned_area/README.md)\n\n## Datacube Details\n\nMesogeos is meant to be used to develop models for wildfire modeling in the Mediterranean. \nIt contains variables related to the ignition and spread of wildfire for the years 2006 to 2022 at a daily 1km x 1km grid.\n\n
**Datacube Variables**\n\nThe datacube contains the following variables:\n\n- satellite data from MODIS (Land Surface Temperature (https://lpdaac.usgs.gov/products/mod11a1v061/), Normalized Difference Vegetation Index (https://lpdaac.usgs.gov/products/mod13a2v061/), Leaf Area Index (https://lpdaac.usgs.gov/products/mod15a2hv061/))\n- weather variables from ERA5-Land (max daily temperature, max daily dewpoint temperature, min daily relative humidity, \nmax daily wind speed, max daily surface pressure, mean daily surface solar radiation downwards) (https://cds.climate.copernicus.eu/cdsapp#!/dataset/10.24381/cds.e2161bac?tab=overview)\n- soil moisture index from JRC European Drought Observatory (https://edo.jrc.ec.europa.eu/edov2/home.static.html)\n- population count (https://hub.worldpop.org/geodata/listing?id=64) & distance to roads (https://hub.worldpop.org/geodata/listing?id=33) from worldpop.org \n- land cover from Copernicus Climate Change Service (https://cds.climate.copernicus.eu/cdsapp#!/dataset/satellite-land-cover?tab=overview)\n- elevation, aspect, slope and curvature from Copernicus EU-DEM (https://land.copernicus.eu/imagery-in-situ/eu-dem/eu-dem-v1.1?tab=download)\n- burned areas and ignition points from EFFIS (https://effis.jrc.ec.europa.eu/applications/data-and-services)\n\nVariables in the cube:\n| Variable | Units | Description |\n| --- | --- | --- |\n| aspect | ° | aspect |\n| burned areas | unitless | rasterized burned polygons. 0 when no burned area occurs in that cell, 1 if it does for the day of interest |\n| curvature | rad | curvature |\n| d2m | K | day's maximum 2 metres dewpoint temperature |\n| dem | m | elevation |\n| ignition_points | hectares | rasterized fire ignitions. It contains the final hectares of the burned area resulting from the fire |\n| lai | unitless | leaf area index |\n| lc_agriculture | % | fraction of agriculture in the pixel. 1st Jan of each year has the values of the year |\n| lc_forest | % | fraction of forest in the pixel. 1st Jan of each year has the values of the year |\n| lc_grassland | % | fraction of grassland in the pixel. 1st Jan of each year has the values of the year |\n| lc_settlement | % | fraction of settlement in the pixel. 1st Jan of each year has the values of the year |\n| lc_shrubland | % | fraction of shrubland in the pixel. 1st Jan of each year has the values of the year |\n| lc_sparse_veagetation | % | fraction of sparse vegetation in the pixel. 1st Jan of each year has the values of the year |\n| lc_water_bodies | % | fraction of water bodies in the pixel. 1st Jan of each year has the values of the year |\n| lc_wetland | % | fraction of wetland in the pixel. 1st Jan of each year has the values of the year |\n| lst_day | K | day's land surface temperature |\n| lst_night | K | night's land surface temperature |\n| ndvi | unitless | normalized difference vegetation index |\n| population | people/km^2 | population count per year. 1st Jan of each year has the values of the year |\n| rh | %/100 | day's minimum relative humidity |\n| roads_distance | km | distance from the nearest road |\n| slope | rad | slope |\n| smi | unitless | soil moisture index |\n| sp | Pa | day's maximum surface pressure |\n| ssrd | J/m^2 | day's average surface solar radiation downwards |\n| t2m | K | day's maximum 2 metres temperature |\n| tp | m | day's total precipitation |\n| wind_speed | m/s | day's maximum wind speed |\n\n
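As an illustration of working with these variables, the sketch below opens the cube from the public S3 bucket (see the access options above) and selects one day of one variable with plain xarray indexing. The coordinate name used for selection is an assumption here; inspect `ds.coords` on the opened dataset to confirm it:

```
import fsspec
import xarray as xr

# Open the cube lazily from the public S3 bucket (see access options above).
url = 'https://my-uc3-bucket.s3.gra.io.cloud.ovh.net/mesogeos.zarr'
ds = xr.open_zarr(fsspec.get_mapper(url))

# Select one day of the day's maximum 2 m temperature ('t2m' in the table
# above). The 'time' coordinate name is an assumption - check ds.coords.
t2m_day = ds['t2m'].sel(time='2020-07-15')
print(t2m_day)
```

Because zarr stores the cube in chunks, only the chunks touched by the selection are downloaded when the data is actually computed or plotted.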
\n\nAn example of some variables for a day in the cube:\n![image](https://user-images.githubusercontent.com/76213770/225653285-754a7d4a-8f32-4200-820b-d3614e14b864.png)\n\n\n**Datacube Metadata**\n\n- Temporal Extent: `(2006-04-01, 2022-09-29)`\n- Spatial Extent: `(-10.72, 30.07, 36.74, 47.7)`, i.e. the wider Mediterranean region.\n- Coordinate Reference System: `EPSG:4326`\n\n\n[![DOI](https://zenodo.org/badge/DOI/10.5281/zenodo.7741518.svg)](https://doi.org/10.5281/zenodo.7741518)\n\n**Datacube Citation** \n\n```\nSpyros Kondylatos, Ioannis Prapas, Gustau Camps-Valls, & Ioannis Papoutsis. (2023). \nMesogeos: A multi-purpose dataset for data-driven wildfire modeling in the Mediterranean. \nZenodo. https://doi.org/10.5281/zenodo.7473331\n```\n\n## Contributing\n\nWe welcome contributions of new models and new machine learning tracks!\n\n**New Model**: To contribute a new model for an existing track, your code has to (i) be open, (ii) be reproducible (we should be able to easily run your code and get the reported results), and (iii) use the same dataset split defined for the track. \nAfter we verify your results, you get to **add your model and name to the leaderboard**. \nCheck the current [leaderboards](https://orion-ai-lab.github.io/mesogeos/).\n\n[Submit a new issue](https://github.com/Orion-AI-Lab/mesogeos/issues/new/choose) containing a link to your code.\n\n**New ML Track**: To contribute a new track, [submit a new issue](https://github.com/Orion-AI-Lab/mesogeos/issues/new/choose).\n\nWe recommend at minimum:\n\n1. a dataset extraction process that samples from mesogeos,\n2. a description of the task,\n3. a baseline model,\n4. appropriate metrics.\n\n### License\n\nCreative Commons Attribution v4\n\n### Citation\n\n```\n@misc{kondylatos2023mesogeos,\n title={Mesogeos: A multi-purpose dataset for data-driven wildfire modeling in the Mediterranean}, \n author={Spyros Kondylatos and Ioannis Prapas and Gustau Camps-Valls and Ioannis Papoutsis},\n year={2023},\n eprint={2306.05144},\n archivePrefix={arXiv},\n primaryClass={cs.CV}\n}\n```\n\n### Acknowledgements \n\nThis work has received funding from the European Union\xe2\x80\x99s Horizon 2020 Research and Innovation Projects DeepCube and TREEADS, under Grant Agreement Numbers 101004188 and 101036926, respectively.\n""",",https://arxiv.org/abs/2306.05144,https://doi.org/10.5281/zenodo.7741518,https://doi.org/10.5281/zenodo.7473331","2022/10/12, 12:48:08",378,GPL-3.0,49,51,"2023/08/17, 07:41:11",0,1,2,2,69,0,0.0,0.34883720930232553,"2023/06/14, 08:50:43",v1.2,0,2,false,,false,false,,,https://github.com/Orion-AI-Lab,http://orionlab.space.noa.gr/,Greece,,,https://avatars.githubusercontent.com/u/94176283?v=4,,, VIAME,Video and Image Analytics for Marine Environments.,VIAME,https://github.com/VIAME/VIAME.git,github,"computer-vision,artificial-intelligence,video-search,video-analytics,marine-biology,open-source,machine-learning,video-annotation,image-annotation,annotation-framework,oceanography,image-processing,conservation,ecology,deep-learning,video-analysis,object-detection",Marine Life and Fishery,"2023/10/10, 15:53:09",237,0,44,true,Python,VIAME,VIAME,"Python,C++,CMake,Cuda,MATLAB,Cython,Shell,Ada,Batchfile,C",http://www.viametoolkit.org/,"b'\n\n\nVIAME is a computer vision application designed for do-it-yourself artificial intelligence including\nobject detection, object tracking, image/video annotation, image/video search, image mosaicing,\nimage enhancement, size measurement, multi-camera data processing, rapid model 
generation,\nand tools for the evaluation of different algorithms. Originally targeting marine species\nanalytics, VIAME now contains many common algorithms and libraries, and is also useful as a\ngeneric computer vision toolkit. It contains a number of standalone tools for accomplishing\nthe above, a pipeline framework which can connect C/C++, Python, and MATLAB nodes together\nin a multi-threaded fashion, and multiple algorithms resting on top of the pipeline infrastructure.\nLastly, both desktop and web user interfaces exist for deployments in different types of\nenvironments, with an open annotation archive and example of the web platform available\nat [viame.kitware.com](https://viame.kitware.com).\n\n\nDocumentation\n-------------\n\nThe [User\'s Quick-Start Guide](https://data.kitware.com/api/v1/item/5fdaf1dd2fa25629b99843f8/download),\n[Tutorial Videos](https://www.youtube.com/channel/UCpfxPoR5cNyQFLmqlrxyKJw), \nand [Developer\'s Manual](http://viame.readthedocs.io/en/latest/) are more comprehensive,\nbut select entries are also listed below broken down by individual functionality:\n\n\n[Documentation Overview](https://viame.readthedocs.io/en/latest/section_links/documentation_overview.html) <>\n[Install or Build Instructions](examples/building_and_installing_viame) <>\n[All Examples](https://github.com/Kitware/VIAME/tree/master/examples) <>\n[DIVE Interface](https://kitware.github.io/dive) <>\n[VIEW Interface](examples/annotation_and_visualization) <>\n[Search and Rapid Model Generation](examples/search_and_rapid_model_generation) <>\n[Object Detector CLI](examples/object_detection) <>\n[Object Tracker CLI](examples/object_tracking) <>\n[Detector Training CLI](examples/object_detector_training) <>\n[Evaluation of Detectors](examples/scoring_and_roc_generation) <>\n[Detection File Formats](https://viame.readthedocs.io/en/latest/section_links/detection_file_conversions.html) <>\n[Calibration and Image Enhancement](examples/image_enhancement) <>\n[Registration and Mosaicing](examples/image_registration) <>\n[Stereo Measurement and Depth Maps](examples/measurement_using_stereo) <>\n[Pipelining Overview](https://github.com/Kitware/kwiver) <>\n[Core Class and Pipeline Info](http://kwiver.readthedocs.io/en/latest/architecture.html) <>\n[Plugin Integration](examples/hello_world_pipeline) <>\n[Example Plugin Templates](plugins/templates) <>\n[Embedding Algorithms in C++](examples/using_detectors_in_cxx_code)\n\nInstallations\n-------------\n\nFor a full installation guide and description of the various flavors of VIAME, see the\nquick-start guide, above. The full desktop version is provided as either a .msi, .zip or\n.tar file. Alternatively, standalone annotators (without any processing algorithms)\nare available via smaller installers. Lastly, docker files are available for both VIAME\nDesktop and Web (below). For full desktop installs, extract the binaries and place them\nin a directory of your choosing, for example /opt/noaa/viame on Linux\nor C:\\Program Files\\VIAME on Windows. If using packages built with GPU support, make sure\nyou have sufficiently recent video drivers installed (version 465.19 or higher). The best way to\ninstall drivers depends on your operating system. This isn\'t required if just using\nmanual annotators (or frame classifiers only). The binaries are quite large\nin terms of disk space, due to the inclusion of multiple default model files and\nprograms, but if you build just the features you need from source (e.g. 
for embedded\napps) they are much smaller.\n\n**Installation Requirements:**
\n* Up to 8 GB of Disk Space for the Full Installation
\n* Windows 7\\*, 8, 10, or 11 (64-Bit) or Linux (64-Bit, e.g. RHEL, CentOS, Ubuntu)
\n * Windows 7 requires some updates and service packs installed, e.g. [KB2533623](https://www.microsoft.com/en-us/download/details.aspx?id=26764).
\n * macOS is currently supported only for running the standalone annotation tools; see below.\n\n**Installation Recommendations:**
\n* NVIDIA Drivers (Version 465.19 or above, \nWindows \n[\\[1\\]](https://www.nvidia.com/Download/index.aspx?lang=en-us)\n[\\[2\\]](https://developer.nvidia.com/cuda-downloads)\nUbuntu \n[\\[1\\]](https://linuxhint.com/ubuntu_nvidia_ppa/)\n[\\[2\\]](https://developer.nvidia.com/cuda-downloads)\nCentOS \n[\\[1\\]](https://developer.nvidia.com/cuda-downloads)\n[\\[2\\]](https://www.nvidia.com/Download/index.aspx?lang=en-us))
\n* A [CUDA-enabled GPU](https://developer.nvidia.com/cuda-gpus) with 8 GB or more VRAM
\n\n**Windows Full Desktop Binaries:**
\n* VIAME v0.20.2 Windows, GPU Enabled, Wizard (.msi) (Coming Soon...)
\n* [VIAME v0.20.2 Windows, GPU Enabled, Mirror1 (.zip)](https://drive.google.com/file/d/1wLUt09dzcOFBifGJWhH_FlKvnadWrtII/view?usp=sharing)
\n* [VIAME v0.20.2 Windows, GPU Enabled, Mirror2 (.zip)](https://data.kitware.com/api/v1/item/64e38daedf16832d03257e39/download)
\n* [VIAME v0.20.1 Windows, CPU Only, Mirror1 (.zip)](https://drive.google.com/file/d/1lQdX9uXgzQbGnfXQ10mebWu2QLMLmDKF/view?usp=sharing)
\n* [VIAME v0.20.1 Windows, CPU Only, Mirror2 (.zip)](https://data.kitware.com/api/v1/item/64ab74436cb8a983de7a3772/download)\n\n**Linux Full Desktop Binaries:**
\n* [VIAME v0.20.2 Linux, GPU Enabled, Mirror1 (.tar.gz)](https://drive.google.com/file/d/1UzBwZXAsM_7nEiMSdvK-mGGdo-M7SFMK/view?usp=drive_link)
\n* [VIAME v0.20.2 Linux, GPU Enabled, Mirror2 (.tar.gz)](https://data.kitware.com/api/v1/item/650d79acc8ab4576ca99f22d/download)
\n* [VIAME v0.20.1 Linux, CPU Only, Mirror1 (.tar.gz)](https://drive.google.com/file/d/1-L37JZwju6AsPjeB0zvTwOrYALOQR1zR/view?usp=sharing)
\n* [VIAME v0.20.1 Linux, CPU Only, Mirror2 (.tar.gz)](https://data.kitware.com/api/v1/item/64ab74676cb8a983de7a3775/download)\n\n**Web Applications**:
\n* [VIAME Online Web Annotator and Public Annotation Archive](https://viame.kitware.com/)
\n* [VIAME Web Local Installation Instructions](https://kitware.github.io/dive/Deployment-Overview/)
\n* [VIAME Web Source Repository](https://github.com/Kitware/dive)\n\n**DIVE Standalone Desktop Annotator:**
\n* [DIVE Installers (Linux, Mac, Windows)](https://github.com/Kitware/dive/releases)\n\n**SEAL Standalone Desktop Annotator:**
\n* [SEAL Windows 7/8/10, GPU Enabled (.zip)](https://data.kitware.com/api/v1/item/602296172fa25629b95482f6/download)
\n* [SEAL Windows 7/8/10, CPU Only (.zip)](https://data.kitware.com/api/v1/item/602295642fa25629b9548196/download)
\n* [SEAL CentOS 7, GPU Enabled (.tar.gz)](https://data.kitware.com/api/v1/item/6023362a2fa25629b957c365/download)
\n* [SEAL Generic Linux, GPU Enabled (.tar.gz)](https://data.kitware.com/api/v1/item/6023359c2fa25629b957c2f3/download)\n\n**Optional Add-Ons and Model Files:**
\n* [Arctic Seals Models, Windows](https://data.kitware.com/api/v1/item/5e30b8ffaf2e2eed3545bff6/download)
\n* [Arctic Seals Models, Linux](https://data.kitware.com/api/v1/item/5e30b283af2e2eed3545a888/download)
\n* [EM Tuna Detectors, All OS](https://viame.kitware.com/api/v1/item/627b326cc4da86e2cd3abb5b/download)
\n* [HabCam Models (Scallop, Skate, Flatfish), Linux](https://viame.kitware.com/api/v1/item/645a7f6d4c19222431de7953/download)
\n* [Motion Detector Model, All OS](https://viame.kitware.com/api/v1/item/627b326fea630db5587b577b/download)
\n* [MOUSS Deep 7 Bottomfish Models, All OS](https://viame.kitware.com/api/v1/item/627b3282c4da86e2cd3abb5d/download)
\n* [Penguin Head FF Models, All OS](https://viame.kitware.com/api/v1/item/627b3289ea630db5587b577d/download)
\n* [Sea Lion Models, All OS](https://viame.kitware.com/api/v1/item/64e391660ee78064c384dbb9/download)
\n* [SEFSC 100-200 Class Fish Models, All OS](https://viame.kitware.com/api/v1/item/627b32b1994809b024f207a7/download)
\n* [ConvNext Low-Shot Models, All OS](https://viame.kitware.com/girder/api/v1/item/64e2c485ef791ec92a7221b2/download)\n\nNote: To install Add-Ons and Patches, copy them into an existing VIAME installation folder.\nFolder structures should match; for example, each Add-On package contains a \'configs\' folder, and the\nmain installation also contains a \'configs\' folder, so the two should just be merged.\n\n\nDocker Images\n-------------\n\nDocker images are available on: https://hub.docker.com. For a default container with just core\nalgorithms, runnable via command-line, see:\n\nkitware/viame:gpu-algorithms-latest\n\nThis image is headless (i.e., it contains no GUI) and contains a VIAME desktop (not web)\ninstallation in the folder /opt/noaa/viame. For links to the VIAME-Web docker containers see the\nabove section in the installation documentation. Most add-on models are not included in the\ninstance but can be downloaded by running the script download_viame_addons.sh in the bin folder.\n\nQuick Build Instructions\n------------------------\n\nThese instructions are intended for developers or those interested in building the latest master\nbranch. More in-depth build instructions can be found [here](examples/building_and_installing_viame),\nbut the software can be built either as a super-build, which builds most of its dependencies\nalongside itself, or standalone. Building VIAME requires, at a minimum, [Git](https://git-scm.com/),\n[CMake](https://cmake.org/), and a [C++ compiler](http://www.cplusplus.com/doc/tutorial/introduction/).\nInstalling Python and CUDA is also recommended. If using CUDA, versions 11.7 or 11.6 are\npreferred, with CUDNN 8. Other CUDA or CUDNN versions may or may not work. For Python distributions,\nPython 3.6 or above is necessary, alongside having pip installed.\n\nTo build on the command line in Linux, use the following commands, only replacing [source-directory]\nand [build-directory] with locations of your choice. While these directories can be the same,\nit\'s good practice to have a \'src\' checkout and then a separate \'build\' directory alongside it:\n\n\tgit clone https://github.com/VIAME/VIAME.git [source-directory]\n\n\tcd [source-directory] && git submodule update --init --recursive\n\nNext, create a build directory and run the following `cmake` command (or alternatively\nuse the cmake GUI if you are not using the command line interface):\n\n\tmkdir [build-directory] && cd [build-directory]\n\n\tcmake -DCMAKE_BUILD_TYPE:STRING=Release [source-directory]\n\nOnce your `cmake` command has completed, you can configure any build flags you want\nusing \'ccmake\' or the cmake GUI, and then build with the following command on Linux:\n\n\tmake -j8\n\nAlternatively, build it in Visual Studio or your compiler of choice on\nWindows. On Linux, \'-j8\' tells the build to run multi-threaded using 8 threads. This\nis useful for a faster build, though if you get an error it can be difficult to see\nit, in which case running just \'make\' might be more helpful. For Windows,\ncurrently VS2019 is the most tested compiler.\n\nThere are several optional arguments to VIAME which control which plugins get built,\nsuch as those listed below. If a plugin is enabled that depends on another dependency\n(such as OpenCV), then the dependency flag will be forced on. If uncertain what to turn\non, it\'s best to just leave the default enable and disable flags, which will build most\n(though not all) functionality. These are core components we recommend leaving turned on:\n\n\n
\n\n| Flag | Description |\n|------------------------------|--------------------------------------------------------------------------------|\n| VIAME_ENABLE_OPENCV | Builds OpenCV and basic OpenCV processes (video readers, simple GUIs) |\n| VIAME_ENABLE_VXL | Builds VXL and basic VXL processes (video readers, image filters) |\n| VIAME_ENABLE_PYTHON | Turns on support for using python processes (multiple algorithms) |\n| VIAME_ENABLE_PYTORCH | Installs all pytorch processes (detectors, trackers, classifiers) |\n\n
\n\n\nAnd a number of flags which control which system utilities and optimizations are built, e.g.:\n\n\n
\n\n| Flag | Description |\n|------------------------------|--------------------------------------------------------------------------------|\n| VIAME_ENABLE_CUDA | Enables CUDA (GPU) optimizations across all packages |\n| VIAME_ENABLE_CUDNN | Enables CUDNN (GPU) optimizations across all processes |\n| VIAME_ENABLE_DIVE | Enables DIVE GUI (annotation and training on multiple sequences) |\n| VIAME_ENABLE_VIVIA | Builds VIVIA GUIs (VIEW and SEARCH for annotation and video search) |\n| VIAME_ENABLE_KWANT | Builds KWANT detection and track evaluation (scoring) tools |\n| VIAME_ENABLE_DOCS | Builds Doxygen class-level documentation (puts in install tree) |\n| VIAME_BUILD_DEPENDENCIES | Build VIAME as a super-build, building all dependencies (default) |\n| VIAME_INSTALL_EXAMPLES | Installs examples for the above modules into install/examples tree |\n| VIAME_DOWNLOAD_MODELS | Downloads pre-trained models for use with the examples and interfaces |\n\n
\n\n\nAnd lastly, a number of flags which build algorithms or interfaces with more specialized functionality:\n\n\n
\n\n| Flag | Description |\n|------------------------------|--------------------------------------------------------------------------------|\n| VIAME_ENABLE_TENSORFLOW | Builds TensorFlow object detector plugin |\n| VIAME_ENABLE_DARKNET | Builds Darknet (YOLO) object detector plugin |\n| VIAME_ENABLE_TENSORRT | Builds TensorRT object detector plugin |\n| VIAME_ENABLE_BURNOUT | Builds Burn-Out based pixel classifier plugin |\n| VIAME_ENABLE_SMQTK | Builds SMQTK plugins to support image/video indexing and search |\n| VIAME_ENABLE_SCALLOP_TK | Builds Scallop-TK based object detector plugin |\n| VIAME_ENABLE_SEAL | Builds Seal multi-modality GUI |\n| VIAME_ENABLE_ITK | Builds ITK cross-modality image registration |\n| VIAME_ENABLE_UW_CLASSIFIER | Builds UW fish classifier plugin |\n| VIAME_ENABLE_MATLAB | Turns on support for and installs all matlab processes |\n| VIAME_ENABLE_LANL | Builds an additional (Matlab) scallop detector |\n\n
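\nAs an illustrative example (a hypothetical selection, not a recommended configuration), any of the flags in the tables above can be passed directly on the `cmake` command line in the configure step shown earlier:\n\n\tcmake -DCMAKE_BUILD_TYPE:STRING=Release -DVIAME_ENABLE_PYTHON:BOOL=ON -DVIAME_ENABLE_CUDA:BOOL=ON -DVIAME_DOWNLOAD_MODELS:BOOL=ON [source-directory]\n\nRunning \'ccmake\' or the cmake GUI afterwards shows the resulting values, including any dependency flags that were forced on by the selection.\n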
\n\n\nSource Code Layout\n------------------\n
\n VIAME\n   \xe2\x94\x9c\xe2\x94\x80\xe2\x94\x80 cmake               # CMake configuration files for subpackages\n   \xe2\x94\x9c\xe2\x94\x80\xe2\x94\x80 docs                # Documentation files and manual (pre-compilation)\n   \xe2\x94\x9c\xe2\x94\x80\xe2\x94\x80 configs             # All system-runnable config files and models\n   \xe2\x94\x82   \xe2\x94\x9c\xe2\x94\x80\xe2\x94\x80 pipelines       # All processing pipeline configs\n   \xe2\x94\x82   \xe2\x94\x82   \xe2\x94\x94\xe2\x94\x80\xe2\x94\x80 models      # All models, which only get downloaded based on flags\n   \xe2\x94\x82   \xe2\x94\x9c\xe2\x94\x80\xe2\x94\x80 prj-linux       # Default linux project files\n   \xe2\x94\x82   \xe2\x94\x94\xe2\x94\x80\xe2\x94\x80 prj-windows     # Default windows project files \n   \xe2\x94\x9c\xe2\x94\x80\xe2\x94\x80 examples            # All runnable examples and example tutorials\n   \xe2\x94\x9c\xe2\x94\x80\xe2\x94\x80 packages            # External projects used by the system\n   \xe2\x94\x82   \xe2\x94\x9c\xe2\x94\x80\xe2\x94\x80 kwiver          # Processing backend infrastructure\n   \xe2\x94\x82   \xe2\x94\x9c\xe2\x94\x80\xe2\x94\x80 fletch          # Dependency builder for things which don\'t change often\n   \xe2\x94\x82   \xe2\x94\x9c\xe2\x94\x80\xe2\x94\x80 kwant           # Scoring and detector evaluation tools\n   \xe2\x94\x82   \xe2\x94\x9c\xe2\x94\x80\xe2\x94\x80 vivia           # Baseline desktop GUIs (v1.0)\n   \xe2\x94\x82   \xe2\x94\x94\xe2\x94\x80\xe2\x94\x80 ...             # Assorted other packages (typically for algorithms)\n   \xe2\x94\x9c\xe2\x94\x80\xe2\x94\x80 plugins             # Integrated algorithms or wrappers around external projects\n   \xe2\x94\x82   \xe2\x94\x94\xe2\x94\x80\xe2\x94\x80 ...             # Assorted plugins (detectors, depth maps, filters, etc.)\n   \xe2\x94\x9c\xe2\x94\x80\xe2\x94\x80 tools               # Standalone tools or scripts, often building on the above\n   \xe2\x94\x9c\xe2\x94\x80\xe2\x94\x80 README.md           # Project introduction page that you are reading\n   \xe2\x94\x94\xe2\x94\x80\xe2\x94\x80 RELEASE_NOTES.md    # A list of the latest updates in the system per version\n
\n\n\nUpdate Instructions\n-------------------\n\nIf you already have a checkout of VIAME and want to switch branches or\nupdate your code, it is important to re-run:\n\n\tgit submodule update --init --recursive\n\nafter switching branches, to ensure that you have the correct hashes\nof sub-packages within the build. Very rarely you may also need to run:\n\n\tgit submodule sync\n\nin case the address of a submodule has changed. You only need to\nrun this command if you get a ""cannot fetch hash #hashid"" error.\n\n\nLicense, Citations, and Acknowledgements\n----------------------------------------\n\nVIAME is released under a BSD-3 license.\n\nA non-exhaustive list of relevant papers used within the project alongside contributors\ncan be found [here](docs/citations.md).\n\nVIAME was developed with funding from multiple sources, with special thanks\nto those listed [here](docs/acknowledgements.md).\n'",,"2016/08/09, 16:31:49",2633,CUSTOM,572,7264,"2023/10/10, 15:53:10",38,104,130,11,15,12,2.3,0.06075509908867349,,,0,32,false,,false,false,,,https://github.com/VIAME,http://www.viametoolkit.org/,,,,https://avatars.githubusercontent.com/u/48599248?v=4,,, ecodata,A data package for reporting on Northeast Continental Shelf ecosystem status and trends.,NOAA-EDAB,https://github.com/NOAA-EDAB/ecodata.git,github,"soe,data-package",Marine Life and Fishery,"2023/08/01, 16:06:14",25,0,6,true,HTML,Ecosystem Dynamics and Assessment Branch,NOAA-EDAB,"HTML,R,JavaScript,CSS",https://noaa-edab.github.io/ecodata/landing_page,"b'\n\n\n# ecodata \n\n\n[![gitleaks](https://github.com/NOAA-EDAB/ecodata/actions/workflows/secretScan.yml/badge.svg)](https://github.com/NOAA-EDAB/ecodata/actions/workflows/secretScan.yml)\n\n\n## Overview\n\n`ecodata` is an R data package developed by the Ecosystems Dynamics and\nAssessment Branch of the Northeast Fisheries Science Center for use in\nState of the Ecosystem (SOE) reporting. SOE reports are high-level\noverviews of ecosystem indicator status and trends occurring on the\nNortheast Continental Shelf. Unless otherwise stated, data are\nrepresentative of specific Ecological Production Units (EPUs), referring\nto the Mid-Atlantic Bight (MAB), Georges Bank (GB), Gulf of Maine (GOM),\nand Scotian Shelf (SS). SOE reports are developed for US Fishery\nManagement Councils (FMCs), and therefore indicator data for Scotian\nShelf are included when available, but this is not always the case.\n\n### Please consult the [technical documentation](https://noaa-edab.github.io/tech-doc/) of SOE indicators before using data sets.\n\n## Using this package\n\n1. Use the command\n `remotes::install_github(""noaa-edab/ecodata"",build_vignettes=TRUE)`\n to install the package.\n2. Load the package into your environment with `library(ecodata)`\n3. Further information about the `ecodata` package can be found\n [here](https://noaa-edab.github.io/ecodata/).\n\n## Loading data sets\n\n1. All derived data sets are available once the package has been loaded\n into the environment. View available data sets using the syntax\n `ecodata::...`\n\n
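A minimal usage sketch (assuming the package is installed as above; `chl_pp` is one of the bundled datasets, used here purely as an example):\n\n    library(ecodata)\n\n    # Derived datasets are available directly from the package namespace\n    head(ecodata::chl_pp)\n\n    # List every dataset the package ships\n    data(package = ""ecodata"")\n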
\n\n## Using geom\\_gls()\n\nAlso included in this package is a \xe2\x80\x9cgeom\xe2\x80\x9d extension of `ggplot2` for\nassessing trends in time series. This function fits four trend models to\neach series, uses AICc to select the best model fit, and then implements\na likelihood-ratio test to determine if a trend is present. If a\nsignificant trend is present (*P* < 0.05), then the trend line is\nplotted with the series. By default, a purple line color is assigned to\nnegative trends and orange to positive trends. More detailed information\nabout this method is available\n[here](https://noaa-edab.github.io/tech-doc/trend-analysis.html).\n\n`geom_gls()` follows the same rules as other `ggplot` stats/geoms. For\nexample,\n\n library(ecodata)\n library(ggplot2)\n\n m <- 0.1\n x <- 1:30\n y <- m*x + rnorm(30, sd = 0.35)\n\n data <- data.frame(x = x,\n y = y)\n\n #Plot series with trend \n ggplot2::ggplot(data = data, aes(x = x, y = y)) +\n geom_line() +\n geom_gls()\n\nproduces\n\n\n\nThis repository is a scientific product and is not official\ncommunication of the National Oceanic and Atmospheric Administration, or\nthe United States Department of Commerce. All NOAA GitHub project code\nis provided on an \xe2\x80\x98as is\xe2\x80\x99 basis and the user assumes responsibility for\nits use. Any claims against the Department of Commerce or Department of\nCommerce bureaus stemming from the use of this GitHub project will be\ngoverned by all applicable Federal law. Any reference to specific\ncommercial products, processes, or services by service mark, trademark,\nmanufacturer, or otherwise, does not constitute or imply their\nendorsement, recommendation or favoring by the Department of Commerce.\nThe Department of Commerce seal and logo, or the seal and logo of a DOC\nbureau, shall not be used in any manner to imply endorsement of any\ncommercial product or activity by DOC or the United States Government.\n\n## Build documentation\n\nOrganize\n[Reference](https://noaa-edab.github.io/ecodata/reference/index.html) by\ndatasets and functions in `inst/_pkgdown.yml` and update using:\n\n``` r\nupdate_datasets_reference <- function(){\n library(here)\n library(dplyr)\n library(yaml)\n\n yml <- here(""inst/_pkgdown.yml"")\n\n lst <- read_yaml(yml)\n # listviewer::jsonedit(lst)\n \n # inject reference to datasets \n datasets <- data(package=""ecodata"") %>% .$results %>% .[, ""Item""]\n lst$reference <- list(\n title = ""Datasets"",\n contents = datasets)\n\n write_yaml(lst, yml)\n}\n\n# build just Reference index\npkgdown::build_reference_index()\n\n# build whole documentation website into docs/*\npkgdown::build_site()\n```\n\nNote that when building, you\xe2\x80\x99ll get alerted if datasets are missing or\nnot included, e.g.:\n\n > pkgdown::build_reference_index()\n Writing \'reference/index.html\'\n Warning messages:\n 1: In \'_pkgdown.yml\', topic must be a known topic name or alias\n x Not \'`heatwave_year`\' \n 2: In \'_pkgdown.yml\', topic must be a known topic name or alias\n x Not \'`seasonal_sst_anom_gridded`\' \n 3: In \'_pkgdown.yml\', topic must be a known topic name or alias\n x Not \'`wind`\' \n 4: Topics missing from index: \n * ecodata\n * seasonal_sst_anomaly_gridded\n * soe \n\nAside: get all plot chunks mentioning a given dataset for the Uploader:\n\n``` r\ndataset <- ""chl_pp""\nres <- system(glue::glue(""grep ecodata::{dataset} chunk-scripts/*""), intern = TRUE)\n# todo: get just filenames\nres\n```\n'",,"2018/08/07, 15:54:53",1905,CUSTOM,176,972,"2023/10/24, 21:53:56",20,17,51,7,1,3,0.4,0.28196721311475414,"2023/08/01, 
18:27:23",4.0,0,8,false,,false,false,,,https://github.com/NOAA-EDAB,https://www.nefsc.noaa.gov/ecosys/,,,,https://avatars.githubusercontent.com/u/38220006?v=4,,, rfishbase,An R interface to the fishbase.org database.,ropensci,https://github.com/ropensci/rfishbase.git,github,"r,rstats,r-package,fishbase,taxonomy,fish",Marine Life and Fishery,"2023/06/01, 18:57:17",102,0,14,true,R,rOpenSci,ropensci,"R,Makefile",https://docs.ropensci.org/rfishbase,"b'\n# rfishbase \n\n\n\n[![R-CMD-check](https://github.com/ropensci/rfishbase/workflows/R-CMD-check/badge.svg)](https://github.com/ropensci/rfishbase/actions)\n[![Coverage\nstatus](https://codecov.io/gh/ropensci/rfishbase/branch/master/graph/badge.svg)](https://codecov.io/github/ropensci/rfishbase?branch=master)\n[![Onboarding](https://badges.ropensci.org/137_status.svg)](https://github.com/ropensci/software-review/issues/137)\n[![CRAN\nstatus](https://www.r-pkg.org/badges/version/rfishbase)](https://cran.r-project.org/package=rfishbase)\n[![Downloads](https://cranlogs.r-pkg.org/badges/grand-total/rfishbase)](https://github.com/r-hub/cranlogs.app)\n\n\nWelcome to `rfishbase 4`. This is the fourth rewrite of the original\n`rfishbase` package described in [Boettiger et\nal.\xc2\xa0(2012)](https://doi.org/10.1111/j.1095-8649.2012.03464.x).\n\n- `rfishbase 1.0` relied on parsing of XML pages served directly from\n Fishbase.org. \n- `rfishbase 2.0` relied on calls to a ruby-based API, `fishbaseapi`,\n that provided access to SQL snapshots of about 20 of the more popular\n tables in FishBase or SeaLifeBase.\n- `rfishbase 3.0` side-stepped the API by making queries which directly\n downloaded compressed csv tables from a static web host. This\n substantially improved performance and reliability, particularly for\n large queries. The release largely remained backwards compatible with\n 2.0, and added more tables.\n- `rfishbase 4.0` extends the static model and interface. Static tables\n are distributed in parquet and accessed through a provenance-based\n identifier. 
While old functions are retained, a new interface is\n introduced to provide easy access to all fishbase tables.\n\nWe welcome any feedback, issues or questions that users may encounter\nthrough our issues tracker on GitHub:\n\nhttps://github.com/ropensci/rfishbase/issues\n\n## Installation\n\n``` r\nremotes::install_github(""ropensci/rfishbase"")\n```\n\n``` r\nlibrary(""rfishbase"")\nlibrary(""dplyr"") # convenient but not required\n```\n\n## Getting started\n\n## Generic table interface\n\nAll fishbase tables can be accessed by name using the `fb_tbl()`\nfunction:\n\n``` r\nfb_tbl(""ecosystem"")\n```\n\n # A tibble: 157,870 \xc3\x97 18\n autoctr E_CODE Ecosy\xe2\x80\xa6\xc2\xb9 Specc\xe2\x80\xa6\xc2\xb2 Stock\xe2\x80\xa6\xc2\xb3 Status Curre\xe2\x80\xa6\xe2\x81\xb4 Abund\xe2\x80\xa6\xe2\x81\xb5 LifeS\xe2\x80\xa6\xe2\x81\xb6 Remarks\n \n 1 1 1 50628 549 565 native Present adults \n 2 2 1 189 552 568 native Present adults \n 3 3 1 189 554 570 native Present adults \n 4 4 1 79732 873 889 native Present adults \n 5 5 1 5217 948 964 native Present adults \n 6 7 1 39852 956 972 native Present adults \n 7 8 1 39852 957 973 native Present adults \n 8 9 1 39852 958 974 native Present adults \n 9 10 1 188 1526 1719 native Present adults \n 10 11 1 188 1626 1819 native Present adults \n # \xe2\x80\xa6 with 157,860 more rows, 8 more variables: Entered ,\n # Dateentered , Modified , Datemodified , Expert ,\n # Datechecked , WebURL , TS , and abbreviated variable names\n # \xc2\xb9\xe2\x80\x8bEcosystemRefno, \xc2\xb2\xe2\x80\x8bSpeccode, \xc2\xb3\xe2\x80\x8bStockcode, \xe2\x81\xb4\xe2\x80\x8bCurrentPresence, \xe2\x81\xb5\xe2\x80\x8bAbundance,\n # \xe2\x81\xb6\xe2\x80\x8bLifeStage\n\nYou can use `fb_tables()` to see a list of all the\ntable names (specify `sealifebase` if desired). Careful, there are a lot\nof them! The fishbase databases have grown a lot over the decades, and\nwere not intended to be used directly by most end-users, so you may have\nconsiderable work to determine what\xe2\x80\x99s what. Keep in mind that many\nvariables can be estimated in different ways (e.g.\xc2\xa0trophic level), and\nthus may report different values in different tables. Also note that\nthe species name (or SpecCode) is not always the primary key for a table\n\xe2\x80\x93 many tables are specific to stocks or even individual samples, and\nsome tables are reference lists that are not species focused at all, but\nmeant to be joined to other tables (`faoareas`, etc). Compare tables\nagainst what you see on fishbase.org, or ask on our issues forum for\nadvice!\n\n``` r\nfish <- c(""Oreochromis niloticus"", ""Salmo trutta"")\n\nfb_tbl(""species"") %>% \n mutate(sci_name = paste(Genus, Species)) %>%\n filter(sci_name %in% fish) %>% \n select(sci_name, FBname, Length)\n```\n\n # A tibble: 2 \xc3\x97 3\n sci_name FBname Length\n \n 1 Oreochromis niloticus Nile tilapia 60\n 2 Salmo trutta Sea trout 140\n\n## SeaLifeBase\n\nSeaLifeBase.org is maintained by the same organization and largely\nparallels the database structure of Fishbase. 
As such, almost all\n`rfishbase` functions can instead be instructed to address the\nSeaLifeBase database, e.g.:\n\n``` r\nfb_tbl(""species"", ""sealifebase"")\n```\n\n # A tibble: 103,169 \xc3\x97 109\n SpecCode Genus Species Author Speci\xe2\x80\xa6\xc2\xb9 FBname FamCode Subfa\xe2\x80\xa6\xc2\xb2 GenCode TaxIs\xe2\x80\xa6\xc2\xb3\n \n 1 10217 Abyss\xe2\x80\xa6 cidaris Poore\xe2\x80\xa6 3113 512 9280 0\n 2 10218 Abyss\xe2\x80\xa6 panope Poore\xe2\x80\xa6 3113 512 9280 0\n 3 90399 Abyss\xe2\x80\xa6 averin\xe2\x80\xa6 Kussa\xe2\x80\xa6 3113 502 17490 0\n 4 52610 Abyss\xe2\x80\xa6 millari Monni\xe2\x80\xa6 2585 978 9281 0\n 5 52611 Abyss\xe2\x80\xa6 wyvill\xe2\x80\xa6 Herdm\xe2\x80\xa6 2892 978 9281 0\n 6 138684 Abyss\xe2\x80\xa6 planus (Slad\xe2\x80\xa6 81020 1615 24229 0\n 7 90400 Abyss\xe2\x80\xa6 acutil\xe2\x80\xa6 Doti \xe2\x80\xa6 3113 587 9282 0\n 8 10219 Abyss\xe2\x80\xa6 argent\xe2\x80\xa6 Menzi\xe2\x80\xa6 3113 587 9282 0\n 9 10220 Abyss\xe2\x80\xa6 bathya\xe2\x80\xa6 Just,\xe2\x80\xa6 3113 587 9282 0\n 10 10221 Abyss\xe2\x80\xa6 dentif\xe2\x80\xa6 Menzi\xe2\x80\xa6 3113 587 9282 0\n # \xe2\x80\xa6 with 103,159 more rows, 99 more variables: Remark ,\n # PicPreferredName , PicPreferredNameM , PicPreferredNameF ,\n # PicPreferredNameJ , Source , AuthorRef , SubGenCode ,\n # Fresh , Brack , Saltwater , Land , BodyShapeI ,\n # DemersPelag , AnaCat , MigratRef , DepthRangeShallow ,\n # DepthRangeDeep , DepthRangeRef , DepthRangeComShallow ,\n # DepthRangeComDeep , DepthComRef , LongevityWild , \xe2\x80\xa6\n\n## Versions and importing all tables\n\nBy default, tables are downloaded the first time they are used.\n`rfishbase` defaults to downloading the latest available snapshot; be aware\nthat the most recent snapshot may be months behind the latest data on\nfishbase.org. Check available releases:\n\n``` r\navailable_releases()\n```\n\n [1] ""23.01"" ""21.06"" ""19.04""\n\n## Low-memory environments\n\nIf you have very limited RAM (e.g.\xc2\xa0\\<= 1 GB available) it may be helpful\nto use `fishbase` tables in remote form by setting `collect = FALSE`.\nThis allows the tables to remain on disk, while the user is still able\nto use almost all `dplyr` functions (see the `dbplyr` vignette). 
Once\nthe table is appropriately subset, the user will need to call\n`dplyr::collect()` to use generic non-dplyr functions, such as plotting\ncommands.\n\n``` r\nfb_tbl(""occurrence"")\n```\n\n # A tibble: 1,097,303 \xc3\x97 106\n catnum2 OccurrenceR\xe2\x80\xa6\xc2\xb9 SpecC\xe2\x80\xa6\xc2\xb2 Syncode Stock\xe2\x80\xa6\xc2\xb3 Genus\xe2\x80\xa6\xe2\x81\xb4 Speci\xe2\x80\xa6\xe2\x81\xb5 ColName PicName\n \n 1 34424 36653 227 22902 241 ""Megal\xe2\x80\xa6 ""cypri\xe2\x80\xa6 ""Megal\xe2\x80\xa6 """" \n 2 95154 45880 NA NA NA """" """" """" """" \n 3 97606 45880 NA NA NA """" """" """" """" \n 4 100025 45880 5520 25676 5809 ""Johni\xe2\x80\xa6 ""belan\xe2\x80\xa6 """" """" \n 5 98993 45880 5676 16650 5969 ""Chrom\xe2\x80\xa6 ""retro\xe2\x80\xa6 """" """" \n 6 99316 45880 454 23112 468 ""Drepa\xe2\x80\xa6 ""punct\xe2\x80\xa6 """" """" \n 7 99676 45880 5388 145485 5647 ""Gymno\xe2\x80\xa6 ""bosch\xe2\x80\xa6 """" """" \n 8 99843 45880 16813 119925 15264 ""Hemir\xe2\x80\xa6 ""balin\xe2\x80\xa6 """" """" \n 9 100607 45880 8288 59635 8601 ""Ostra\xe2\x80\xa6 ""rhino\xe2\x80\xa6 """" """" \n 10 101529 45880 NA NA NA ""Scomb\xe2\x80\xa6 ""toloo\xe2\x80\xa6 """" """" \n # \xe2\x80\xa6 with 1,097,293 more rows, 97 more variables: CatNum , URL ,\n # Station , Cruise , Gazetteer , LocalityType ,\n # WaterDepthMin , WaterDepthMax , AltitudeMin ,\n # AltitudeMax , LatitudeDeg , LatitudeMin , NorthSouth ,\n # LatitudeDec , LongitudeDeg , LongitudeMIn , EastWest ,\n # LongitudeDec , Accuracy , Salinity , LatitudeTo ,\n # LongitudeTo , LatitudeDegTo , LatitudeMinTo , \xe2\x80\xa6\n\n## Local copy\n\nSet the option \xe2\x80\x9crfishbase_local_db\xe2\x80\x9d = TRUE to create a local copy;\notherwise a remote copy will be used. A local copy gives better performance\nafter the initial import, but may experience conflicts when `duckdb` is\nupgraded or when multiple sessions attempt to access the directory.\nRemove the default storage directory (given by `db_dir()`) after\nupgrading duckdb if using a local copy.\n\n``` r\noptions(""rfishbase_local_db"" = TRUE)\ndb_disconnect() # close previous remote connection\n\nconn <- fb_conn()\nconn\n```\n\n >\n\nUsers can trigger a one-time download of all fishbase tables (or a list\nof desired tables) using `fb_import()`. This will ensure later use of\nany function can operate smoothly even when no internet connection is\navailable. Any table already downloaded will not be re-downloaded.\n(Note: `fb_import()` also returns a remote duckdb database connection to\nthe tables, for users who prefer to work with the remote data objects.)\n\n``` r\nfb_import()\n```\n\n >\n\n## Interactive RStudio pane\n\nRStudio users can also browse all fishbase tables interactively in the\nRStudio connection browser by using the function `fisbase_pane()`. Note\nthat this function will first download a complete set of the fishbase\ntables.\n\n## Backwards compatibility\n\n`rfishbase` 4.0 tries to maintain as much backwards compatibility as\npossible with rfishbase 3.0. Because parquet preserves native data\ntypes, some encoded types may differ from earlier versions. As before,\nthese are not always the native type \xe2\x80\x93 e.g.\xc2\xa0fishbase encodes some\nboolean (logical TRUE/FALSE) values as integer (-1, 0) or character\ntypes. 
Use `as.logical()` to coerce into the appropriate type in that\ncase.\n\nToggling between fishbase and sealifebase servers using an environmental\nvariable, `FISHBASE_API`, is now deprecated.\n\nNote that fishbase will store downloaded files by hash in the app\ndirectory, given by `db_dir()`. The default location can be set by\nconfiguring the desired path in the environmental variable,\n`FISHBASE_HOME`.\n\n------------------------------------------------------------------------\n\nPlease note that this package is released with a [Contributor Code of\nConduct](https://ropensci.org/code-of-conduct/). By contributing to this\nproject, you agree to abide by its terms.\n\n[![ropensci_footer](https://ropensci.org/public_images/github_footer.png)](https://ropensci.org)\n'",",https://doi.org/10.1111/j.1095-8649.2012.03464.x","2011/11/18, 01:03:37",4360,CUSTOM,20,627,"2023/09/14, 20:36:01",14,32,258,51,41,0,0.0,0.12307692307692308,"2021/09/08, 00:33:50",slb-21.08,0,7,false,,false,true,,,https://github.com/ropensci,https://ropensci.org/,"Berkeley, CA",,,https://avatars.githubusercontent.com/u/1200269?v=4,,, PlanktonIndividuals.jl,This package simulates the behaviors of an ensemble of phytoplankton individuals.,JuliaOcean,https://github.com/JuliaOcean/PlanktonIndividuals.jl.git,github,,Marine Life and Fishery,"2023/08/24, 21:38:43",24,0,6,true,Julia,,JuliaOcean,Julia,,"b'# PlanktonIndividuals.jl\n\n[![Linux](https://github.com/JuliaOcean/PlanktonIndividuals.jl/actions/workflows/linux.yml/badge.svg)](https://github.com/JuliaOcean/PlanktonIndividuals.jl/actions/workflows/linux.yml)\n[![doc](https://img.shields.io/badge/docs-stable-blue.svg)](https://JuliaOcean.github.io/PlanktonIndividuals.jl/stable)\n[![doc](https://img.shields.io/badge/docs-dev-blue.svg)](https://JuliaOcean.github.io/PlanktonIndividuals.jl/dev)\n[![codecov](https://codecov.io/gh/JuliaOcean/PlanktonIndividuals.jl/branch/master/graph/badge.svg?token=jJL053vHAM)](https://codecov.io/gh/JuliaOcean/PlanktonIndividuals.jl)\n[![DOI](https://zenodo.org/badge/178023615.svg)](https://zenodo.org/badge/latestdoi/178023615)\n[![DOI](https://joss.theoj.org/papers/10.21105/joss.04207/status.svg)](https://doi.org/10.21105/joss.04207)\n\n![animation](https://github.com/JuliaOcean/PlanktonIndividuals.jl/raw/master/examples/figures/anim_3D_global.gif)\n\n`PlanktonIndividuals.jl` is a fast individual-based model written in Julia that can be run on both CPU and GPU. It simulates the life cycle of phytoplankton cells as Lagrangian particles in the ocean while nutrients are represented as Eulerian, density-based tracers using a [3rd order advection scheme](https://mitgcm.readthedocs.io/en/latest/algorithm/adv-schemes.html#third-order-direct-space-time-with-flux-limiting). The model is used to simulate and interpret the temporal and spatial variations of phytoplankton cell densities and stoichiometry as well as growth and division behaviors induced by diel cycle and physical motions ranging from sub-mesoscale to large scale processes.\n\n## Installation\n\nTo add `PlanktonIndividuals.jl` to your Julia environment:\n\n```julia\nusing Pkg; Pkg.add(""PlanktonIndividuals"")\n```\n\n## Use Examples\n\n### 1. Simple Flow Fields In Two Dimensions\n\n```julia\nusing PlanktonIndividuals\np = dirname(pathof(PlanktonIndividuals))\n#include(joinpath(p,""../examples/vertical_2D_example.jl""))\ninclude(joinpath(p,""../examples/horizontal_2D_example.jl""))\n```\n\n### 2. 
Closer Look Into One Grid Box\n\n```julia\nusing PlanktonIndividuals\np = dirname(pathof(PlanktonIndividuals))\ninclude(joinpath(p,""../examples/0D_experiment.jl""))\n```\n\n### 3. Turbulent Flow Fields In Three Dimensions\n\nHere [Oceananigans.jl](https://github.com/climate-machine/Oceananigans.jl) is used to generate velocity fields, which are then used to drive the individual-based model.\n\n```julia\nusing PlanktonIndividuals\np = dirname(pathof(PlanktonIndividuals))\ninclude(joinpath(p,""../examples/surface_mixing_3D_example.jl""))\n```\n'",",https://zenodo.org/badge/latestdoi/178023615,https://doi.org/10.21105/joss.04207","2019/03/27, 15:31:29",1673,MIT,44,955,"2023/08/13, 03:02:13",1,56,64,6,74,0,0.0,0.06727480045610035,"2023/05/04, 03:31:42",v0.6.6,0,2,false,,false,true,,,https://github.com/JuliaOcean,,,,,https://avatars.githubusercontent.com/u/41747359?v=4,,, UVic-updates-opem,"Introduces optimality-based phytoplankton and zooplankton into the UVic-ESCM (version 2.9) with variable C:N:P(:Chl) stoichiometry for phytoplankton, diazotrophs and detritus.",markus-pahlow,,custom,,Marine Life and Fishery,,,,,,,,,,https://git.geomar.de/markus-pahlow/UVic-updates-opem,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,, OceanAdapt,"Provide information about the impacts of changing climate and other factors on the distribution of marine life to the National Climate Assessment, fisheries communities, policymakers, and to others.",pinskylab,https://github.com/pinskylab/OceanAdapt.git,github,,Marine Life and Fishery,"2023/07/01, 01:50:58",12,0,3,true,HTML,Global Change Research Group,pinskylab,"HTML,R,Python,Rich Text Format",http://oceanadapt.rutgers.edu,"b'OceanAdapt\n================\n\n\n\n# Our Method\n\nThe distributions of fish and invertebrate populations are routinely monitored by DFO, NMFS, and other agencies during bottom trawl surveys on the continental shelves of North America (see the metadata link below for details on data sources). These surveys provide core information for use in fisheries management and extend back two to five decades. For the indicators displayed on this website, a mean location (the centroid) is calculated for each species in each year of each survey, after the surveys have been standardized to a consistent spatial footprint through time. The centroid is the mean latitude and mean depth of catch in the survey, weighted by biomass.\n\nFor the regional and national indices, the first year is standardized to a value of zero and changes are then averaged across species in a region. Only regions with consistent survey methods and without coastlines that would prevent poleward shifts in distribution are included in the national average (currently Eastern Bering Sea and Northeast U.S.). Only species caught every year are analyzed to prevent changes in species composition from affecting the indicator. The indicator begins in the first year that data are available from the focal regions.\n\nThe historical analyses and data are described in Pinsky, M. L., B. Worm, M. J. Fogarty, J. L. Sarmiento, and S. A. Levin. 2013. Marine taxa track local climate velocities. Science 341: 1239-1242 doi: [10.1126/science.1239352](http://doi.org/10.1126/science.1239352) (free reprint available from [pinsky.marine.rutgers.edu/publications](http://pinsky.marine.rutgers.edu/publications/)).\n\nThe projections of future species distributions were developed from statistical relationships between ocean temperature, bottom habitat features, and species abundance. 
Ocean temperature projections for the future were from global climate models developed for the [Intergovernmental Panel on Climate Change](https://www.ipcc.ch/). Full methods are described in Morley, J. W., R. L. Selden, R. J. Latour, T. L. Fr\xc3\xb6licher, R. J. Seagraves, and M. L. Pinsky. 2018. Projecting shifts in thermal habitat for 686 species on the North American continental shelf. PLOS ONE 13(5): e0196127 doi: [10.1371/journal.pone.0196127](http://doi.org/10.1371/journal.pone.0196127) (open access).\n\n# [Metadata](https://github.com/pinskylab/OceanAdapt/tree/master/metadata)\n\n# News\n\n\n## 2022/04/20 - OceanAdapt contributes to the newly released NOAA Fisheries Distribution Mapping and Analysis Portal (DisMAP)\n\n- [NOAA Fisheries announces the release](https://www.noaa.gov/news-release/noaa-showcases-new-mapping-tool-for-marine-species) of a new, state-of-the-art mapping tool for marine species: the Distribution Mapping and Analysis Portal or DisMAP. The new tool grew out of and builds off of the OceanAdapt effort and was developed in collaboration with the OceanAdapt team! Visit the portal at [https://apps-st.fisheries.noaa.gov/dismap](https://apps-st.fisheries.noaa.gov/dismap) \n\n## 2022/02/14 - OceanAdapt update 2021.1\n\n- This is a patch for update 2021 (described below) in which a problem with duplicated records in the Maritimes region was fixed (issue #152) and the \'data_clean\' folder was updated and tidied up. \n\nDownload the latest release of this repository using the links below:\n\n[Download the full data and code](https://github.com/pinskylab/OceanAdapt/releases/tag/v2021.1.0)\n\n[![DOI](https://zenodo.org/badge/29789533.svg)](https://zenodo.org/badge/latestdoi/29789533)\n\n## 2021/12/21 - OceanAdapt update 2021\n\n - Expanded coverage to three new regions in Canada: Northern Gulf of St. Lawrence (GSLnor), Southern Gulf of St. Lawrence (GSLsouth), and the Canadian Pacific (CPAC). \n - 2019 data added for the Gulf of Alaska, the Maritimes regions (formerly Scotian Shelf), and Southeast US. \n - 2020 data were not available for most regions due to survey difficulties resulting from the COVID19 pandemic. Where data were available, they covered only a small portion of the survey region. \n - A website update is pending, but expected to occur soon. \n \n## 2021/10/25 - OceanAdapt provides indicators for National Marine Ecosystem Status\n\n- The regional latitude and depth centroids from OceanAdapt are now part of the National Marine Ecosystem Status system [here](https://ecowatch.noaa.gov/). \n\n## 2021/06/09 - data have been added to OceanAdapt\\! - website update pending\n\n - New data are available via this GitHub repository! Website update is pending, but expected to occur in July 2021. We have expanded to three new regions in Canada: Northern Gulf of St. Lawrence (GSLnor), Southern Gulf of St. Lawrence (GSLsouth), and the Canadian Pacific (CPAC). 2019 data were added for the Gulf of Alaska, the Maritimes regions (formerly Scotian Shelf), and Southeast US. 2020 data were not available for most regions due to survey difficulties during the COVID19 pandemic. Where data were available, they covered only a small portion of the survey region.\n\n## 2020/10/12 - OceanAdapt listed as indicator tool on the USGCRP\n\n - Click the link to see the U.S. 
Global Change Research Program\'s [Indicator Tools](https://www.globalchange.gov/browse/indicator-details/4141) for Marine Species Distributions\\!\n\n## 2020/05/18 - data have been added to OceanAdapt\\!\n\n - Check out the latest update to the OceanAdapt website. 2019 data added for nearly every region\\! 2019 data were not available for the Gulf of Alaska, Southeast US and Scotian Shelf at the time of update, but will be included in the next website update. A new release of this repository is available, download link and DOI below:\n\n [Download the latest release (full data and\n code)](https://github.com/pinskylab/OceanAdapt/releases/tag/update2020)\n \n [![DOI](https://zenodo.org/badge/29789533.svg)](https://zenodo.org/badge/latestdoi/29789533)\n\n## 2020/01/15 - Our national average graphic has been included in the 4th National Climate Assessment\n\n - Find us [here](https://nca2018.globalchange.gov/chapter/1/) in figure 1.2(h).\n\n## 2019/03/01 - data have been added to OceanAdapt\\!\n\n - Check out the latest update to the OceanAdapt website. New data in\n every region\\! \n \n [Download the latest release (full data and code)](https://github.com/mpinsky/OceanAdapt/releases/tag/update2019)\n \n [![DOI](https://zenodo.org/badge/DOI/10.5281/zenodo.3890214.svg)](https://doi.org/10.5281/zenodo.3890214)\n\n## 2019/03/01 - Scotian Shelf Region has been added to OceanAdapt\\!\n\n - Summer, Fall, and Spring seasonal surveys have been added to a new\n region on the map\\! (Please note Fall and Spring are under\n construction)\n\n## 2019/03/01 - OceanAdapt has a new look\\!\n\n - Thank you to EcoTrust for all of your help in redesigning our\n website. Just click the red map marker to get started on exploring a\n region.\n \n \n---\n\n# Our Data Policy\n\nAll of the data underlying these analyses are available for [download](https://github.com/pinskylab/OceanAdapt/tree/master/data_raw), including spatially georeferenced catches from more than fifty thousand bottom trawl tows in eleven regions in the U.S. and Canada. Please notify us through the online form when you download the data, as this helps us justify maintaining the database as a community resource.\n\n**As part of our Fair Use Policy, please:** \n\n\n- Notify us if you are preparing a manuscript using information from the OceanAdapt database (also helps justify funding). \n\n- Coordinate your research efforts with others using the database by joining existing papers where efforts overlap (contact us if you want to check on potential overlap). \n\n- If the database is particularly crucial to your research, please consider offering database developers and their colleagues an opportunity to become involved as co-authors. \n\nIn primary publications using data from the database, please cite Pinsky et al. 2013. Marine taxa track local climate velocities. 
Science 341: 1239-1242 doi: 10.1126/science.1239352, as well as the original data sources.\n'",",http://doi.org/10.1126/science.1239352,http://doi.org/10.1371/journal.pone.0196127,https://zenodo.org/badge/latestdoi/29789533,https://zenodo.org/badge/latestdoi/29789533,https://doi.org/10.5281/zenodo.3890214","2015/01/24, 19:57:51",3196,MIT,102,964,"2021/12/23, 14:24:58",39,14,120,0,671,0,0.0,0.5714285714285714,"2023/07/01, 01:51:28",v2021.4,0,6,false,,false,false,,,https://github.com/pinskylab,globalchange.sites.ucsc.edu,"New Brunswick, NJ",,,https://avatars.githubusercontent.com/u/32388236?v=4,,, icesDatras,R interface to Database of Trawl Surveys web services.,ices-tools-prod,https://github.com/ices-tools-prod/icesDatras.git,github,,Marine Life and Fishery,"2023/10/18, 12:01:48",12,0,2,true,R,ICES tools (production),ices-tools-prod,R,https://datras.ices.dk/WebServices/Webservices.aspx,"b'[![Build Status](https://travis-ci.org/ices-tools-prod/icesDatras.svg?branch=master)](https://travis-ci.org/ices-tools-prod/icesDatras)\r\n[![codecov](https://codecov.io/gh/ices-tools-prod/icesDatras/branch/master/graph/badge.svg)](https://codecov.io/gh/ices-tools-prod/icesDatras)\r\n[![local release](https://img.shields.io/github/release/ices-tools-prod/icesDatras.svg?maxAge=2592001)](https://github.com/ices-tools-prod/icesDatras/tree/1.2-0)\r\n[![CRAN Status](http://r-pkg.org/badges/version/icesDatras)](https://cran.r-project.org/package=icesDatras)\r\n[![CRAN Monthly](http://cranlogs.r-pkg.org/badges/icesDatras)](https://cran.r-project.org/package=icesDatras)\r\n[![CRAN Total](http://cranlogs.r-pkg.org/badges/grand-total/icesDatras)](https://cran.r-project.org/package=icesDatras)\r\n[![License](https://img.shields.io/badge/license-GPL%20(%3E%3D%202)-blue.svg)](https://www.gnu.org/licenses/gpl-3.0.en.html)\r\n\r\n[](http://ices.dk)\r\n\r\nicesDatras\r\n==========\r\n\r\nicesDatras provides R functions that access the\r\n[web services](https://datras.ices.dk/WebServices/Webservices.aspx) of the\r\n[ICES](http://ices.dk) [DATRAS](http://datras.ices.dk) trawl survey database.\r\n\r\nicesDatras is implemented as an [R](https://www.r-project.org) package and\r\navailable on [CRAN](https://cran.r-project.org/package=icesDatras).\r\n\r\nDATRAS database support\r\n-----------------------\r\n\r\nIf you have questions relating to the ICES DATRAS database or web services please email: DatrasAdministration@ices.dk\r\n\r\nInstallation\r\n------------\r\n\r\nicesDatras can be installed from CRAN using the `install.packages` command:\r\n\r\n```R\r\ninstall.packages(""icesDatras"")\r\n```\r\n\r\nUsage\r\n-----\r\n\r\nFor a summary of the package:\r\n\r\n```R\r\nlibrary(icesDatras)\r\n?icesDatras\r\n```\r\nInformation on available surveys in DATRAS: \r\n\r\n```R\r\ngetSurveyList()\r\n\r\n```\r\n\r\nWorking Examples \r\n-----\r\n\r\nExtracting survey haul (HH), length (HL) and age-based (CA) data from a given survey, quarter and year, \r\ne.g. North Sea IBTS, Quarter 1, 2019:\r\n\r\n```R\r\nsurvey <- ""NS-IBTS""\r\nyear <- 2019\r\nquarter <- 1\r\n\r\nHH <- getHHdata(survey, year, quarter) \r\nHL <- getHLdata(survey, year, quarter) \r\nCA <- getCAdata(survey, year, quarter) \r\n```\r\n\r\nExtracting catch weight of cod from the Baltic Sea survey, year 2019, quarter 1. \r\nNote: The icesVocab package provides `findAphia`, a function to look up Aphia species codes. 
\r\n\r\n```R\r\nlibrary(icesVocab)\r\naphia <- icesVocab::findAphia(""cod"") \r\n\r\nsurvey <- ""BITS""\r\nyears <- 2019\r\nquarters <- 1\r\ncodwgt <- getCatchWgt(survey, years, quarters, aphia)\r\n\r\n```\r\n\r\nGet catch weight for Baltic cod from all quarters in a small time series (e.g. 1991 to 2011) and plot the weight in a simple graph per quarter.\r\n\r\n```R\r\nlibrary(icesVocab)\r\nlibrary(dplyr) # provides the %>% pipe\r\nlibrary(ggplot2)\r\n\r\naphia <- icesVocab::findAphia(""cod"") \r\n\r\nsurvey <- ""BITS""\r\nyears <- 1991:2011\r\nquarters <- 1:4\r\ncodwgt <- getCatchWgt(survey, years, quarters, aphia)\r\ncodwgt %>% ggplot(aes(x = Year, y = CatchWgt, colour = Quarter)) + geom_point()\r\n```\r\n\r\nReferences\r\n----------\r\n\r\nICES DATRAS database:\r\nhttp://datras.ices.dk\r\n\r\nICES DATRAS web services:\r\nhttps://datras.ices.dk/WebServices/Webservices.aspx\r\n\r\nAphiaID of marine organisms: \r\nhttp://www.marinespecies.org/index.php\r\n\r\n\r\nDevelopment\r\n-----------\r\n\r\nicesDatras is developed openly on\r\n[GitHub](https://github.com/ices-tools-prod/icesDatras).\r\n\r\nFeel free to open an\r\n[issue](https://github.com/ices-tools-prod/icesDatras/issues) there if you\r\nencounter problems or have suggestions for future versions.\r\n\r\nThe current development version can be installed using:\r\n\r\n```R\r\nlibrary(devtools)\r\ninstall_github(""ices-tools-prod/icesDatras"")\r\n```\r\n'",,"2016/07/27, 14:09:06",2646,GPL-2.0,13,265,"2022/05/26, 13:31:37",4,18,40,0,517,0,0.0,0.10852713178294571,"2019/10/29, 09:07:38",1.3-0,0,7,false,,false,false,,,https://github.com/ices-tools-prod,https://www.ices.dk/data/tools/Pages/Software.aspx,"Copenhagen, Denmark",,,https://avatars.githubusercontent.com/u/20533792?v=4,,, icesTAF,Functions to support the International Council for the Exploration of the Sea Transparent Assessment Framework.,ices-tools-prod,https://github.com/ices-tools-prod/icesTAF.git,github,,Marine Life and Fishery,"2023/03/21, 10:54:34",5,0,2,true,R,ICES tools (production),ices-tools-prod,R,,"b'[![Build Status](https://travis-ci.org/ices-tools-prod/icesTAF.svg?branch=master)](https://travis-ci.org/ices-tools-prod/icesTAF)\n[![CRAN Status](https://r-pkg.org/badges/version/icesTAF)](https://cran.r-project.org/package=icesTAF)\n[![CRAN Monthly](https://cranlogs.r-pkg.org/badges/icesTAF)](https://cran.r-project.org/package=icesTAF)\n[![CRAN Total](https://cranlogs.r-pkg.org/badges/grand-total/icesTAF)](https://cran.r-project.org/package=icesTAF)\n\n[](https://ices.dk)\n\nicesTAF\n=======\n\nicesTAF provides functions to support the [ICES](https://ices.dk)\n[Transparent Assessment Framework](https://taf.ices.dk) to organize data,\nmethods, and results used in ICES assessments.\n\nicesTAF is implemented as an [R](https://www.r-project.org) package and\navailable on [CRAN](https://cran.r-project.org/package=icesTAF).\n\nInstallation\n------------\n\nicesTAF can be installed from CRAN using the `install.packages` command:\n\n```R\ninstall.packages(""icesTAF"")\n```\n\nUsage\n-----\n\nFor a summary of the package:\n\n```R\nlibrary(icesTAF)\n?icesTAF\n```\n\nReferences\n----------\n\nICES Transparent Assessment Framework:\nhttps://taf.ices.dk\n\nDevelopment\n-----------\n\nicesTAF is developed openly on\n[GitHub](https://github.com/ices-tools-prod/icesTAF).\n\nFeel free to open an\n[issue](https://github.com/ices-tools-prod/icesTAF/issues) there if you\nencounter problems or have suggestions for future versions.\n\nThe current development version can be installed 
using:\n\n```R\nlibrary(remotes)\ninstall_github(""ices-tools-prod/icesTAF"")\n```\n'",,"2017/02/14, 19:09:30",2444,GPL-3.0,26,703,"2023/03/21, 09:46:06",7,12,34,7,218,1,0.0,0.07777777777777772,"2020/05/16, 14:49:59",3.5-0,0,4,false,,false,false,,,https://github.com/ices-tools-prod,https://www.ices.dk/data/tools/Pages/Software.aspx,"Copenhagen, Denmark",,,https://avatars.githubusercontent.com/u/20533792?v=4,,, KSO,"The Koster Seafloor Observatory is an open-source, citizen science and machine learning approach to analyse subsea movies.",ocean-data-factory-sweden,https://github.com/ocean-data-factory-sweden/kso.git,github,"object-detection,deep-learning,marine-protected-areas,citizen-science",Marine Life and Fishery,"2023/10/24, 10:47:29",3,0,3,true,Python,Ocean Data Factory Sweden,ocean-data-factory-sweden,"Python,Dockerfile",,"b'# KSO System\n\nThe Koster Seafloor Observatory is an open-source, citizen science and machine learning approach to analyse subsea movies.\n\n\n\n[![Contributors][contributors-shield]][contributors-url]\n[![Forks][forks-shield]][forks-url]\n[![Stargazers][stars-shield]][stars-url]\n[![Issues][issues-shield]][issues-url]\n[![GPL License][license-shield]][license-url]\n\n### KSO Information architecture\nThe system processes underwater footage and its associated metadata into biologically-meaningful information. The format of the underwater media is standardised (typically .mp4 or .jpg) and the associated metadata should be captured in three csv files (\xe2\x80\x9cmovies\xe2\x80\x9d, \xe2\x80\x9csites\xe2\x80\x9d and \xe2\x80\x9cspecies\xe2\x80\x9d) following the [Darwin Core standards (DwC)](https://dwc.tdwg.org/simple/). \n![koster_info_diag][high-level-overview2]\n\n## Repository Overview\nThis repository contains scripts and resources to:\n* move and process underwater footage and its associated data (e.g. location, date, sampling device).\n* make this data available to citizen scientists to help you annotate the data.\n* train and evaluate machine learning models. (customise [Yolov5][YoloV5] or [Yolov8][YoloV8] models using Ultralytics.)\n\n![high-level][high-level-overview]\n\nThe system is built around a series of easy-to-use Jupyter Notebook tutorials. Each tutorial allows users to perform a specific task of the system (e.g. upload footage to the citizen science platform or analyse the classified data).\n\nUsers can run these tutorials via Google Colab (by clicking on the Colab links in the table below), locally or on a High Performance Computer environment.\n\n### Tutorials\n| Name | Description | Try it! | \n| ------------------------------------------------- | ------------------------------------------------------------------------------------------- | --------|\n| 1. Check footage and metadata | Check format and contents of footage and sites, media and species csv files | [![Open In Colab][colablogo]][colab_tut_1] [![binder][binderlogo]][binder_tut] | \n| 2. Upload new media to the system* | Upload new underwater media to the cloud/server and update the csv files | WIP | \n| 3. Upload clips to Zooniverse | Prepare original footage and upload short clips to Zooniverse | [![Open In Colab][colablogo]][colab_tut_3] [![binder][binderlogo]][binder_tut] |\n| 4. Upload frames to Zooniverse | Extract frames of interest from original footage and upload them to Zooniverse | [![Open In Colab][colablogo]][colab_tut_4] [![binder][binderlogo]][binder_tut] |\n| 5. 
Train ML models | Prepare the training and test data, set model parameters and train models | [![Open In Colab][colablogo]][colab_tut_5] [![binder][binderlogo]][binder_tut] | \n| 6. Evaluate ML models | Use ecologically-relevant metrics to test the models | [![Open In Colab][colablogo]][colab_tut_6] [![binder][binderlogo]][binder_tut] |\n| 7. Publish ML models | Publish the model to a public repository | [![Open In Colab][colablogo]][colab_tut_7] [![binder][binderlogo]][binder_tut] | \n| 8. Analyse Zooniverse classifications | Pull up-to-date classifications from Zooniverse and report summary stats/graphs | [![Open In Colab][colablogo]][colab_tut_8] [![binder][binderlogo]][binder_tut] |\n| 9. Run ML models on footage | Automatically classify new footage | [![Open In Colab][colablogo]][colab_tut_9] [![binder][binderlogo]][binder_tut] |\n\n \n\\* Project-specific tutorial\n\n## Local Installation\nIf you want to fully use our system (Binder has computing limitations), you will need to download this repository to your local computer or server, or use SNIC or Cloudina (see instructions below).\nNote that depending on your choice of infrastructure, you will be limited to either [Yolov5][YoloV5] or [Yolov8][YoloV8]:\n* Locally: it is possible to use either Yolov5 or Yolov8.\n* SNIC: only possible to use Yolov5.\n* Cloudina: only possible to use Yolov8.\n\nThe latest developments are only available in combination with Yolov8. However, there is a stable tagged [yolov5](https://github.com/ocean-data-factory-sweden/kso/yolov5) version if you prefer Yolov5.\n\n### Local installation with Yolov5\nRequirements\n* [Python 3.8](https://www.python.org/)\n* [Anaconda](https://docs.anaconda.com/anaconda/install/index.html)\n* [GIT](https://git-scm.com/downloads)\n\n\n#### Download this repository\nClone this repository using\n```bash\ngit clone --recurse-submodules --depth 1 --branch yolov5 https://github.com/ocean-data-factory-sweden/kso.git\n```\n\n#### Prepare your system\nDepending on which system you are using (Windows/Linux/MacOS), you might need to install some extra tools. If this is the case, you will get a message about what you need to install in the next steps.\nFor example, on a Windows system, you will be asked to install the Microsoft Build Tools C++ with a version higher than 14.0. You can install it from https://visualstudio.microsoft.com/visual-cpp-build-tools/. You only need to select the ""Windows SDK"" in the install menu.\n\n#### Set up the environment with Conda\n1. Open the Anaconda Prompt\n2. Navigate to the folder where you have cloned the repository or unzipped the manually downloaded repository. Then go into the kso folder. (```cd kso```)\n3. Create an Anaconda environment with Python 3.8, where <environment_name> is the name you choose for the environment:\n\n```conda create -n <environment_name> python=3.8```\n\n4. Enter the environment:\n\n```conda activate <environment_name>```\n\n5. Install numpy to prevent an error that will otherwise occur in the next step.\n\n```pip install numpy==1.22```\n\n6. Install all the requirements. If you do not have a GPU, run the following:\n\n```pip install -r yolov5_tracker/requirements.txt -r yolov5_tracker/yolov5/requirements.txt -r requirements.txt```\n\nHave a GPU? Find out which PyTorch installation you need here (https://pytorch.org/), depending on your device and CUDA version. Add the recommended command to the gpu_requirements_user.txt file in the same way as the current example. 
Then run:\n\n```pip install -r yolov5_tracker/requirements.txt -r yolov5_tracker/yolov5/requirements.txt -r requirements.txt -r gpu_requirements_user.txt```\n\n\n#### Set up the environment with another virtual environment package\nIf using another virtual environment package, install the same requirements inside your fresh environment (Python 3.8).\n\n\n#### Link your environment to Jupyter notebooks\nAfter installing all the requirements, run the following command in your environment, using the same <environment_name> as above:\n\n```ipython kernel install --user --name=<environment_name>```\n\nNow open Jupyter and select/change the kernel to run the notebooks from your environment.\n\n### Local installation with Yolov8\nThese instructions will be provided once a stable version with Yolov8 is achieved.\n\n## SNIC Users (VPN required)\n\n**Before using the VPN to connect to SNIC, users should have login credentials and set up the Chalmers VPN on their local computers**\n\nInstructions to [set up the Chalmers VPN](https://www.c3se.chalmers.se/documentation/connecting/#vpn)\n\nTo use the Jupyter Notebooks within the Alvis HPC cluster, please visit [Alvis Portal](https://portal.c3se.chalmers.se) and log in using your SNIC credentials.\n\nOnce you have been authorized, click on ""Interactive Apps"" and then ""Jupyter"". This will open the server creation options.\n\nCreating a Jupyter session requires a custom environment file, which is available on our shared drive */mimer/NOBACKUP/groups/snic2022-22-1210/jupter_envs*. Please copy this file (jupyter-kso.sh) to your **Home Directory** in order to use the custom environment we have created.\n\nHere you can keep the settings as default, apart from the ""Number of hours"", which you can set to the desired limit. Then choose kso-jupyter.sh from the Runtime dropdown options.\n\n![screenshot_load][screenshot_loading]\n\nThis will directly queue a server session using the correct container image. It first shows a blue window, and then a green window once the session has started successfully and the button **""Connect to Jupyter""** appears on the screen. Click this to launch into the Jupyter Notebook environment.\n\n\n![screenshot_start][screenshot_started]\n\nImportant note: The remaining time for the server is shown in the green window as well. If you have finished using the notebook server before the allocated time runs out, please select **""Delete""** so that the resources can be released for use by others within the project.\n\n## Cloudina\nInstructions will come...\n\n\n## Starting a new project\nIf you are starting a new project, you will need to:\n1. Create initial information for the database: Input the information about the underwater footage files, sites and species of interest. You can use a [template of the csv files](https://drive.google.com/file/d/1PZGRoSY_UpyLfMhRphMUMwDXw4yx1_Fn/view?usp=sharing) and move the directory to the ""db_starter"" folder.\n2. Link your footage to the database: You will need files of underwater footage to run this system. You can [download some samples](https://drive.google.com/drive/folders/1t2ce8euh3SEU2I8uhiZN1Tu-76ZDqB6w?usp=sharing) and move them to `db_starter`. You can also store your own files and specify their directory in the tutorials.\n\n\n## Developer instructions\nIf you would like to expand and improve the KSO capabilities, please follow the instructions above to set the project up on your own local computer.\n\nWhen you start adding changes, please create your own branch on top of the current \'dev\' branch. 
Before submitting a Merge Request, please:\n* Run Black on the code you have edited\n```shell\nblack filename\n```\n* Clean up the commit history on your branch, so that every commit represents a logical change (squash and edit commits so the history is understandable for others).\n* For the commit messages, please follow the [conventional commits guidelines](https://www.conventionalcommits.org/en/v1.0.0/) to facilitate code sharing. Also, please give a description of the logic behind the commit in the body of the message.\n* Rebase on top of dev (never merge; only use rebase).\n* Submit a Pull Request and assign at least 2 reviewers\n\n\n## Citation\n\nIf you use this code or its models in your research, please cite:\n\nAnton V, Germishuys J, Bergstr\xc3\xb6m P, Lindegarth M, Obst M (2021) An open-source, citizen science and machine learning approach to analyse subsea movies. Biodiversity Data Journal 9: e60548. https://doi.org/10.3897/BDJ.9.e60548\n\n## Collaborations/Questions\nYou can find out more about the project at https://www.zooniverse.org/projects/victorav/the-koster-seafloor-observatory.\n\nWe are always excited to collaborate and help other marine scientists. Please feel free to contact us (matthias.obst(at)marine.gu.se) with your questions.\n\n## Troubleshooting\n\nIf you experience issues with the Panoptes package and/or uploading movies to Zooniverse, it might be related to the libmagic package. On Windows, the following commands might fix the issue:\n```shell\npip install python-libmagic\npip install python-magic-bin\n```\n\n\n\n[contributors-shield]: https://img.shields.io/github/contributors/ocean-data-factory-sweden/kso.svg?style=for-the-badge\n[contributors-url]: https://github.com/ocean-data-factory-sweden/kso/graphs/contributors\n[forks-shield]: https://img.shields.io/github/forks/ocean-data-factory-sweden/kso.svg?style=for-the-badge\n[forks-url]: https://github.com/ocean-data-factory-sweden/kso/network/members\n[stars-shield]: https://img.shields.io/github/stars/ocean-data-factory-sweden/kso.svg?style=for-the-badge\n[stars-url]: https://github.com/ocean-data-factory-sweden/kso/stargazers\n[issues-shield]: https://img.shields.io/github/issues/ocean-data-factory-sweden/kso.svg?style=for-the-badge\n[issues-url]: https://github.com/ocean-data-factory-sweden/kso/issues\n[license-shield]: https://img.shields.io/github/license/ocean-data-factory-sweden/kso.svg?style=for-the-badge\n[license-url]: https://github.com/ocean-data-factory-sweden/kso/blob/master/LICENSE.txt\n[high-level-overview2]: https://github.com/ocean-data-factory-sweden/kso/blob/master/assets/high-level-overview-2.png?raw=true ""Overview of the three main modules and the components of the Koster Seafloor Observatory""\n[high-level-overview]: https://github.com/ocean-data-factory-sweden/kso/blob/master/assets/high-level-overview.png?raw=true ""Overview of the three main modules and the components of the Koster Seafloor Observatory""\n[Data_management_module]: https://github.com/ocean-data-factory-sweden/kso/blob/master/assets/Koster_data_management_module.png?raw=true\n[object_detection_module]: https://github.com/ocean-data-factory-sweden/kso/blob/master/assets/Koster_object_detection_module.png?raw=true\n[koster_utils_repo]: https://github.com/ocean-data-factory-sweden/kso_utils\n[colablogo]: https://colab.research.google.com/assets/colab-badge.svg\n[binderlogo]: https://mybinder.org/badge_logo.svg\n[colab_tut_1]: 
https://colab.research.google.com/github/ocean-data-factory-sweden/kso/blob/master/tutorials/01_Check_and_update_csv_files.ipynb\n[binder_tut]: https://mybinder.org/v2/gh/ocean-data-factory-sweden/kso/master\n[colab_tut_2]: https://colab.research.google.com/github/ocean-data-factory-sweden/kso/blob/master/tutorials/02_Upload_new_footage.ipynb\n[colab_tut_3]: https://colab.research.google.com/github/ocean-data-factory-sweden/kso/blob/master/tutorials/03_Upload_clips_to_Zooniverse.ipynb\n[colab_tut_4]: https://colab.research.google.com/github/ocean-data-factory-sweden/kso/blob/master/tutorials/04_Upload_frames_to_Zooniverse.ipynb\n[colab_tut_5]: https://colab.research.google.com/github/ocean-data-factory-sweden/kso/blob/master/tutorials/05_Train_ML_models.ipynb\n[colab_tut_6]: https://colab.research.google.com/github/ocean-data-factory-sweden/kso/blob/master/tutorials/06_Evaluate_ML_Models.ipynb\n[colab_tut_7]: https://colab.research.google.com/github/ocean-data-factory-sweden/kso/blob/master/tutorials/07_Transfer_ML_Models.ipynb\n[colab_tut_8]: https://colab.research.google.com/github/ocean-data-factory-sweden/kso/blob/master/tutorials/08_Analyse_Aggregate_Zooniverse_Annotations.ipynb\n[colab_tut_9]: https://colab.research.google.com/github/ocean-data-factory-sweden/kso/blob/master/tutorials/10_Run_ML_Models_on_footage.ipynb\n[objdecmodule]: https://github.com/ocean-data-factory-sweden/kso\n[YoloV5]: https://github.com/ultralytics/yolov5\n[YoloV8]: https://github.com/ultralytics/ultralytics\n[OBIS-site]: https://www.gbif.org/network/2b7c7b4f-4d4f-40d3-94de-c28b6fa054a6\n[Koster_info_diagram]: https://github.com/ocean-data-factory-sweden/kso/blob/master/assets/Koster_information_flow.png?raw=true ""Information architecture of the Koster Seafloor Observatory""\n[screenshot_loading]: https://github.com/ocean-data-factory-sweden/kso/blob/master/assets/screenshot_loading.png?raw=true\n[screenshot_started]: https://github.com/ocean-data-factory-sweden/kso/blob/master/assets/screenshot_started.png?raw=true\n'",",https://doi.org/10.3897/BDJ.9.e60548\n\n##","2021/07/01, 14:47:48",846,GPL-3.0,238,362,"2023/10/24, 09:02:45",36,217,261,181,1,1,1.2,0.19484240687679089,"2023/10/17, 07:19:37",yolov5,9,5,true,"github,patreon,open_collective",false,false,,,https://github.com/ocean-data-factory-sweden,,Sweden,,,https://avatars.githubusercontent.com/u/54248548?v=4,,, FishGlob_data,"This repository contains the FishGlob database, including the methods to load, clean, and process the public bottom trawl surveys in it.",AquaAuma,https://github.com/AquaAuma/FishGlob_data.git,github,,Marine Life and Fishery,"2023/10/20, 21:11:11",18,0,18,true,R,,,R,,"b'# FishGlob_data\n\n[![DOI](https://zenodo.org/badge/580133169.svg)](https://zenodo.org/badge/latestdoi/580133169)\n\nThis repository contains the FishGlob database, including the methods to load, clean, and process the public bottom trawl surveys in it. The database is described in the manuscript, ""An integrated database of fish biodiversity sampled with scientific bottom trawl surveys"" by Aurore A. Maureaud, Juliano Palacios-Abrantes, Zo\xc3\xab Kitchel, Laura Mannocci, Malin L. Pinsky, Alexa Fredston, Esther Beukhof, Daniel L. Forrest, Romain Frelat, Maria L.D. Palomares, Laurene Pecuchet, James T. Thorson, P. 
Dani\xc3\xabl van Denderen, and Bastien M\xc3\xa9rigot.\n\nThis database is a product of the CESAB working group, [FishGlob: Fish biodiversity under global change \xe2\x80\x93 a worldwide assessment from scientific trawl surveys](https://www.fondationbiodiversite.fr/en/the-frb-in-action/programs-and-projects/le-cesab/fishglob/).\n\n\n\nMain contacts: Aurore A. Maureaud [aurore.aqua@gmail.com](mailto:aurore.aqua@gmail.com), Juliano Palacios-Abrantes [j.palacios@oceans.ubc.ca ](mailto:j.palacios@oceans.ubc.ca), and Malin L. Pinsky [malin.pinsky@rutgers.edu](mailto:malin.pinsky@rutgers.edu)\n\n### Structure of the repository\n\n* **cleaning_codes** includes all scripts to process and perform quality control on the trawl surveys.\n* **data_descriptor_figures** contains the R script to construct figures 2-4 for the data descriptor manuscript. \n* **functions** contains useful functions used in other scripts\n* **length_weight** contains the length-weight relationships for surveys where weights have to be calculated from abundance at length data (including NOR-BTS and DATRAS)\n* **metadata_docs** has a README with notes about each survey. This is a place to document changes in survey methods, quirks, etc. It is a growing list. If you have information to add, please open an Issue.\n* **outputs** contains all survey data processed .RData files and flagging outputs\n* **standard_formats** includes definitions of file formats in the FishGlob database, including survey ID codes.\n* **standardization_steps** contains the R codes to run a full survey standardization and a cross-survey summary of flagging methods\n* **summary** contains the quality check plots for each survey\n\n### Survey data processing steps\n\nData processing and cleaning is done on a per survey basis unless formats are similar across a group of surveys. The current repository can process 26 scientific bottom-trawl surveys, according to the following steps.\n\n**Steps** \n1. Merge the data files for one survey\n2. Clean & homogenize column names following the format described in *standard_formats/fishglob_data_columns.xlsx*\n3. Create missing columns and standardize units using the standard format *standard_formats/fishglob_data_columns.xlsx*\n4. Integrate the cleaned taxonomy by applying the function *clean_taxa()* and apply expert knowledge on taxonomic treatments\n5. Perform quality checks, including the output in the *summary* folder\n\n### Survey data standardization and flags\n\nData standardization and flags are done on a per survey basis and per survey_unit basis (integrating seasons and quarters). Flags are performed both on the temporal occurrence of taxa and the spatio-temporal sampling footprint according to the following steps.\n\n**Steps**\n1. Taxonomic quality control: run flag_spp() for each survey region\n2. Apply methods to identify a standard spatial footprint through time for each survey-season/quarter (the survey_unit column). Use the functions apply_trimming_per_survey_unit_method1() and apply_trimming_per_survey_unit_method2() \n3. 
Display and integrate results in the summary files\n\n### Final data products\n\n**Options**\nUsers can either use the single survey data products in **outputs/Cleaned_data/** and work with survey .RData files including flags or not (inclusion of flags is specified by XX_std_clean.RData), or generate their own compiled version of the data by running the **cleaning_codes/merge.R** which will write local versions of the database in **outputs/Compiled_data/**\n\n### Author contributions\n*Contributors to code*\n- **Cleaning taxonomy**: Juliano Palacios-Abrantes \n- **Cleaning surveys**: Juliano Palacios-Abrantes, Aurore Maureaud, Zo\xc3\xab Kitchel, Dan Forrest, Daniel van Denderen, Laurene Pecuchet, Esther Beukhof\n- **Summary of surveys**: Juliano Palacios-Abrantes, Aurore Maureaud, Zo\xc3\xab Kitchel, Laura Mannocci\n- **Merge surveys**: Aurore Maureaud\n- **Standardize surveys**: Laura Mannocci, Malin Pinsky, Aurore Maureaud, Zo\xc3\xab Kitchel, Alexa Fredston\n\n### Credit and citation\n\nWe highly encourage users to cite the corresponding data descriptor paper along with primary SBTS sources included in the FISHGLOB data files. Appropriate credit includes citing primary datasets and/or the [data descriptor](https://osf.io/2bcjw/) for the integration methods developed to gather regional surveys together depending on usage.\n\n### :warning: Important updates :warning:\n\n> **05/09/2023**: Norwegian survey is erroneous and will be replaced with a Barents Sea centered survey over 2004-onwards which will change the spatio-temporal coverage of the region (coordinated by Laurene Pecuchet with IMR), see [issue #29](https://github.com/AquaAuma/FishGlob_data/issues/29)\n'",",https://zenodo.org/badge/latestdoi/580133169","2022/12/19, 19:59:00",310,GPL-3.0,178,178,"2023/09/05, 17:44:19",8,2,22,22,50,0,0.0,0.24705882352941178,"2023/01/11, 22:40:11",v1.9,0,5,false,,false,false,,,,,,,,,,, FSAdata,Contains data for use in common fisheries stock analyses. See installation instructions further below.,fishR-Core-Team,https://github.com/fishR-Core-Team/FSAdata.git,github,"fisheries,stock-assessment,fisheries-stock-assessment,fishr-website,fish",Marine Life and Fishery,"2023/08/24, 15:05:33",12,0,2,true,R,fishR Core Team,fishR-Core-Team,R,https://fishr-core-team.github.io/FSAdata/,"b' \n\n**FSAdata** is a companion package to [**FSA**](https://fishr-core-team.github.io/FSA/) maintained by the [**fishR Core Team**](https://github.com/fishR-Core-Team) that contains data sets for use in common fisheries analyses. 
The data and documentation for individual data sets may be viewed by following the links on the [Reference page](reference/index.html).\n\nYou can contribute to the package by reporting problems or corrections via [a GitHub issue](https://github.com/fishR-Core-Team/FSAdata/issues/new/) or submitting a dataset for inclusion via [a GitHub issue](https://github.com/fishR-Core-Team/FSAdata/issues/new/) or [a GitHub pull request](https://github.com/fishR-Core-Team/FSAdata/pulls).\n\n \n\n## Installation\nThe [CRAN version](https://cran.r-project.org/web/packages/FSAdata/index.html) of **FSAdata** may be installed with\n\n```r\ninstall.packages(""FSAdata"")\n```\n\nThe development version may be installed from GitHub with\n\n```r\nif (!require(\'remotes\')) install.packages(\'remotes\'); require(\'remotes\')\nremotes::install_github(\'fishR-Core-Team/FSAdata\')\n```\n\n \n\n[![Project Status: Active - The project has reached a stable, usable state and is being actively developed.](http://www.repostatus.org/badges/latest/active.svg)](http://www.repostatus.org/#active) [![DOI](https://zenodo.org/badge/18454411.svg)](https://zenodo.org/badge/latestdoi/18454411) [![CRAN Version](http://www.r-pkg.org/badges/version/FSAdata)](http://www.r-pkg.org/pkg/FSAdata) [![R-CMD-check](https://github.com/fishR-Core-Team/FSAdata/actions/workflows/R-CMD-check.yaml/badge.svg)](https://github.com/fishR-Core-Team/FSAdata/actions/workflows/R-CMD-check.yaml)\n\n[![CRAN RStudio mirror downloads rate](http://cranlogs.r-pkg.org/badges/FSAdata) ![CRAN RSTudio mirror downloads total](http://cranlogs.r-pkg.org/badges/grand-total/FSAdata)](http://www.r-pkg.org/pkg/FSAdata)\n'",",https://zenodo.org/badge/latestdoi/18454411","2014/04/04, 22:23:17",3491,GPL-2.0,22,146,"2023/08/24, 15:07:48",2,9,18,13,62,0,0.3333333333333333,0.045454545454545414,"2023/08/24, 14:36:13",v0.4.1,0,2,false,,true,false,,,https://github.com/fishR-Core-Team,,United States of America,,,https://avatars.githubusercontent.com/u/99483821?v=4,,, dataaimsr,Australian Institute of Marine Science (AIMS) Data Platform API Client which provides easy access to AIMS Data Platform scientific data and information.,ropensci,https://github.com/ropensci/dataaimsr.git,github,"aims,data,monitoring,weather,sst,australia,marine",Marine Life and Fishery,"2023/05/31, 05:31:27",4,0,2,true,R,rOpenSci,ropensci,"R,TeX",https://docs.ropensci.org/dataaimsr/,"b'\n\n# dataaimsr \n\n\n\n[![](https://badges.ropensci.org/428_status.svg)](https://github.com/ropensci/software-review/issues/428)\n[![DOI](https://joss.theoj.org/papers/10.21105/joss.03282/status.svg)](https://doi.org/10.21105/joss.03282)\n[![Lifecycle:\nmaturing](https://img.shields.io/badge/lifecycle-maturing-blue.svg)](https://lifecycle.r-lib.org/articles/stages.html)\n[![R build\nstatus](https://github.com/ropensci/dataaimsr/workflows/R-CMD-check/badge.svg)](https://github.com/ropensci/dataaimsr/actions)\n[![Codecov test\ncoverage](https://codecov.io/gh/ropensci/dataaimsr/branch/master/graph/badge.svg)](https://app.codecov.io/gh/ropensci/dataaimsr?branch=master)\n![pkgdown](https://github.com/ropensci/dataaimsr/workflows/pkgdown/badge.svg)\n[![license](https://img.shields.io/badge/license-MIT%20+%20file%20LICENSE-lightgrey.svg)](https://choosealicense.com/)\n[![packageversion](https://img.shields.io/badge/Package%20version-1.1.0-orange.svg)](commits/master)\n[![Ask Us Anything\n!](https://img.shields.io/badge/Ask%20us-anything-1abc9c.svg)](https://github.com/ropensci/dataaimsr/issues/new)\n![Open 
Source\nLove](https://badges.frapsoft.com/os/v2/open-source.svg?v=103)\n\n\n**Barneche DR, Coleman G, Fermor D, Klein E, Robinson T, Smith J,\nSheehan JL, Dowley S, Ditton D, Gunn K, Ericson G, Logan M, Rehbein M**\n(2021). dataaimsr: An R Client for the Australian Institute of Marine\nScience Data Platform API which provides easy access to AIMS Data\nPlatform. *Journal of Open Source Software*, **6:** 3282. doi:\n[10.21105/joss.03282](https://doi.org/10.21105/joss.03282).\n\n## Overview\n\nThe Australian Institute of Marine Science (AIMS) has a long tradition\nin measuring and monitoring a series of environmental parameters along\nthe tropical coast of Australia. These parameters include long-term\nrecord of sea surface temperature, wind characteristics, atmospheric\ntemperature, pressure, chlorophyll-a data, among many others. The AIMS\nData Centre team has recently developed the [AIMS Data Platform\nAPI](https://open-aims.github.io/data-platform/) which is a *REST API*\nproviding JSON-formatted data to users. `dataaimsr` is an **R package**\nwritten to allow users to communicate with the AIMS Data Platform API\nusing an API key and a few convenience functions to interrogate and\nunderstand the datasets that are available to download. In doing so, it\nallows the user to fully explore these datasets in R in whichever\ncapacity they want (e.g. data visualisation, statistical analyses, etc).\nThe package itself contains a `plot` method which allows the user to\nplot summaries of the different types of dataset made available by the\nAPI. Below we provide a brief context about the existing\n[Datasets](#datasets) that can be explored through `dataaimsr`.\n\n## Installation\n\n### Requesting an AIMS Data Platform API Key\n\n**AIMS Data Platform** requires an API Key for data requests, [get a key\nhere](https://open-AIMS.github.io/data-platform/key-request).\n\nThe API Key can be passed to the package functions as an additional\n`api_key = ""XXXX""` argument. **However**, we strongly encourage users to\nmaintain their API key as a private locally hidden environment variable\n(`AIMS_DATAPLATFORM_API_KEY`) in the `.Renviron` file for automatic\nloading at the start of an R session. Please read this\n[article](https://CRAN.R-project.org/package=httr/vignettes/secrets.html)\nwhich details why keeping your API private is extremely important.\n\nUsers can modify their `.Renviron` file by adding the following line:\n\n AIMS_DATAPLATFORM_API_KEY=XXXXXXXXXXXXX\n\nThe `.Renviron` file is usually stored in each users home directory:\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
| System | .Renviron file locations |\n| --- | --- |\n| MS Windows | C:\\Users\\\xe2\x80\xb9username\xe2\x80\xba\\.Renviron or C:\\Users\\\xe2\x80\xb9username\xe2\x80\xba\\Documents\\.Renviron |\n| Linux / MacOS | /home/\xe2\x80\xb9username\xe2\x80\xba/.Renviron |
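\n\nOnce the key is saved, you can sanity-check it from R. This is a minimal sketch: the `usethis` helper is an assumption here (it is not a `dataaimsr` dependency), and you can equally edit `.Renviron` by hand.\n\n    # opens ~/.Renviron; add the AIMS_DATAPLATFORM_API_KEY line there\n    usethis::edit_r_environ()\n\n    # after restarting R, confirm the key is visible (returns TRUE if set)\n    nchar(Sys.getenv(""AIMS_DATAPLATFORM_API_KEY"")) > 0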
\n\n### Package\n\n| Type | Source | Command |\n| --- | --- | --- |\n| Release | CRAN | Not yet available |\n| Development | GitHub | `remotes::install_github(""ropensci/dataaimsr"")` |\n| Development | rOpenSci | `install.packages(""dataaimsr"", repos = ""https://dev.ropensci.org"")` |
\n\n## Usage\n\n # assumes that user already has API key saved to\n # .Renviron\n library(dataaimsr)\n\n # summarised by series\n # for all sites that contain data\n # within a defined date range\n sdf_b <- aims_data(""temp_loggers"", api_key = NULL,\n summary = ""summary-by-series"",\n filters = list(""from_date"" = ""2018-01-01"",\n ""thru_date"" = ""2018-12-31""))\n\n # downloads weather data from site Yongala\n # within a defined date range\n wdf_a <- aims_data(""weather"", api_key = NULL,\n filters = list(site = ""Yongala"",\n from_date = ""2018-01-01"",\n thru_date = ""2018-01-02""))\n\nMore comprehensive examples about how to navigate `dataaimsr` and\ninterrogate the datasets can be found on our [online\nvignettes](https://ropensci.github.io/dataaimsr/articles/).\n\n## Datasets\n\nCurrently, there are two AIMS long-term monitoring datasets available to\nbe downloaded through `dataaimsr`:\n\n### Northern Australia Automated Marine Weather And Oceanographic Stations\n\nAutomatic weather stations have been deployed by AIMS since 1980. Most\nof the stations are along the Great Barrier Reef (GBR) including the\nTorres Strait in North-Eastern Australia but there is also a station in\nDarwin and one at Ningaloo Reef in Western Australia. Many of the\nstations are located on the reef itself either on poles located in the\nreef lagoon or on tourist pontoons or other structures. A list of the\nweather stations which have been deployed by AIMS and the period of time\nfor which data may be available can be found on the\n[metadata](https://apps.aims.gov.au/metadata/view/0887cb5b-b443-4e08-a169-038208109466)\nwebpage. **NB:** Records may not be continuous for the time spans given.\n\n### AIMS Sea Water Temperature Observing System (AIMS Temperature Logger Program)\n\nThe data provided here are from a number of sea water temperature\nmonitoring programs conducted in tropical and subtropical coral reefs\nenvironments around Australia. Data are available from approximately 80\nGBR sites, 16 Coral Sea sites, 7 sites in North West Western Australia\n(WA), 8 Queensland regional ports, 13 sites in the Solitary Islands, 4\nsites in Papua New Guinea and 10 sites in the Cocos (Keeling) Islands.\nData are obtained from in-situ data loggers deployed on the reef.\nTemperature instruments sample water temperatures every 5-10 minutes\n(typically) and are exchanged and downloaded approximately every 12\nmonths. Temperature loggers on the reef-flat are generally placed just\nbelow Lowest Astronomical Tide level. Reef-slope (or where specified as\nUpper reef-slope) generally refers to depths 5\xe2\x80\x939 m while Deep reef-slope\nrefers to depths of ~20 m. For more information on the dataset and its\nusage, please visit the\n[metadata](https://apps.aims.gov.au/metadata/view/4a12a8c0-c573-11dc-b99b-00008a07204e)\nwebpage.\n\n## License\n\n`dataaimsr` is provided by the [Australian Institute of Marine\nScience](https://www.aims.gov.au) under the MIT License\n([MIT](https://opensource.org/license/mit/)).\n\n## Code of Conduct\n\nPlease note that this package is released with a [Contributor Code of\nConduct](https://ropensci.org/code-of-conduct/). 
By contributing to this\nproject, you agree to abide by its terms.\n\n## AIMS R package logos\n\nOur R package logos use a watercolour map of Australia, obtained with\nthe [ggmap](https://CRAN.R-project.org/package=ggmap) R package, which\ndownloads original map tiles provided by [Stamen\nDesign](https://stamen.com/), under [CC BY\n3.0](https://creativecommons.org/licenses/by/3.0), with data from\n[OpenStreetMap](https://www.openstreetmap.org/), under [CC BY\nSA](https://creativecommons.org/licenses/by-sa/3.0).\n'",",https://doi.org/10.21105/joss.03282,https://doi.org/10.21105/joss.03282","2019/08/05, 01:10:45",1543,CUSTOM,13,307,"2023/05/19, 00:39:35",1,2,18,1,160,0,0.0,0.49468085106382975,"2021/05/24, 03:29:45",v1.0.3,0,5,false,,false,true,,,https://github.com/ropensci,https://ropensci.org/,"Berkeley, CA",,,https://avatars.githubusercontent.com/u/1200269?v=4,,, mermaid-dash,Transform your underwater insights into data-driven actions that save coral reefs.,data-mermaid,https://github.com/data-mermaid/mermaid-dash.git,github,,Marine Life and Fishery,"2023/10/13, 20:02:08",5,0,2,true,JavaScript,MERMAID,data-mermaid,"JavaScript,HTML,CSS",,"b""# mermaid-dash\n\nMERMAID Global Dashboard. A read-only platform that summarizes all the information collected through the datamermaid application. You can read more about [datamermaid here](https://datamermaid.org/).\n\nMERMAID Global Dashboard uses Create-React-App for its build tooling.\n\n## Setup\n\nIf you've been onboarded, or have worked with a Node-based project that uses NPM or Yarn, you will be in a comfortable, familiar space. If not, we suggest that you take some time to brush up on how Node works, and how you can use it to build new projects through tutorials such as (but not limited to) [this](https://www.tutorialspoint.com/nodejs/nodejs_npm.htm). We also suggest that you be familiar with how to use [git](https://try.github.io/).\n\n### Base Requirements\nA .env file is required in the root, with these vars defined:\nSKIP_PREFLIGHT_CHECK=true\nREACT_APP_MERMAID_API_URL=\n\n#### Node\n\nTo set up a local environment for this platform, you need to install Node JS, with the Dubnium LTS. You can install it through [here](https://nodejs.org/en/download/). If you want to be able to work with more than one version of Node on your local computer for other projects, consider using [NVM](https://github.com/nvm-sh/nvm).\n\n#### Git\n\nIn order to contribute to this project, you will also need to have a command-line terminal and Git.\n\n### Install Node Dependencies\n\nOnce you've installed Node, you should fork this repo, then clone locally. Once cloned, you can install all of the dependencies using the `npm install` command.\n\n### Run/Test\n\nYou can run the application using `npm start`. 
To run the existing tests, run `npm run test`.\n\n""",,"2019/05/23, 14:32:26",1616,MIT,149,968,"2023/09/18, 19:36:44",3,232,310,42,37,1,1.2,0.12698412698412698,,,0,6,false,,false,true,,,https://github.com/data-mermaid,https://datamermaid.org,,,,https://avatars.githubusercontent.com/u/50884305?v=4,,, mermaidr,"An open-source data platform developed to help you collect, analyze, and share coral reef monitoring data.",data-mermaid,https://github.com/data-mermaid/mermaidr.git,github,,Marine Life and Fishery,"2023/10/16, 15:30:45",9,0,2,true,R,MERMAID,data-mermaid,R,https://data-mermaid.github.io/mermaidr/,"b'\n\n\n# mermaidr\n\n\n\n[![R build\nstatus](https://github.com/data-mermaid/mermaidr/workflows/R-CMD-check/badge.svg)](https://github.com/data-mermaid/mermaidr/actions)\n\n\n`mermaidr` is an R package that enables you to access data from\n[MERMAID](https://datamermaid.org/), an open-source data platform\ndeveloped to help you collect, analyze, and share coral reef monitoring\ndata. Through `mermaidr` you can access data from\n[MERMAID](https://collect.datamermaid.org/) directly in R.\n\nFor more information and detailed instructions on usage, please visit\nthe [package website](https://data-mermaid.github.io/mermaidr/).\n\nIf you are new to the R programming language, our [new R users\nguide](https://data-mermaid.github.io/mermaidr/articles/new_to_r.html)\nis a great place to start! If you find yourself stuck, please don\xe2\x80\x99t\nhesitate to [ask for\nhelp](https://data-mermaid.github.io/mermaidr/articles/getting_help.html).\n\n## Installation\n\nYou can install mermaidr from GitHub with:\n\n``` r\n# install.packages(""remotes"")\nremotes::install_github(""data-mermaid/mermaidr"")\n```\n\n## Usage\n\nThrough `mermaidr`, you can access aggregated data from your coral reef\nsurveys. To do this, first load the package and access your MERMAID\nprojects:\n\n``` r\nlibrary(mermaidr)\n\nprojects <- mermaid_get_my_projects()\n```\n\nAt this point, you will have to authenticate to the Collect app. R will\nhelp you do this automatically by opening a browser window for you to\nlog in to Collect, either via Google sign-in or username and password -\nhowever you normally do! Once you\xe2\x80\x99ve logged in, come back to R. Your\nlogin credentials will be stored for a day, until they expire, and you\nwill need to log in again. 
The package handles the expiration for you,\nso just log in again when prompted.\n\nThis function gives us information on your projects, including project\ncountries, the number of sites, tags, data policies, and more:\n\n``` r\nprojects\n#> # A tibble: 19 \xc3\x97 15\n#> id name countries num_sites tags notes status data_policy_beltfish\n#> \n#> 1 02e6915c-1\xe2\x80\xa6 TWP \xe2\x80\xa6 Indonesia 14 ""WCS\xe2\x80\xa6 """" Open Private \n#> 2 170e7182-7\xe2\x80\xa6 2018\xe2\x80\xa6 Fiji 10 ""WCS\xe2\x80\xa6 ""Thi\xe2\x80\xa6 Open Private \n#> 3 173c2353-3\xe2\x80\xa6 Copy\xe2\x80\xa6 Fiji 8 ""WCS\xe2\x80\xa6 ""Nam\xe2\x80\xa6 Open Public Summary \n#> 4 1fbdb9ea-9\xe2\x80\xa6 a2 Canada; \xe2\x80\xa6 9 ""WWF\xe2\x80\xa6 ""Nam\xe2\x80\xa6 Open Private \n#> 5 2c0c9857-b\xe2\x80\xa6 Shar\xe2\x80\xa6 Canada; \xe2\x80\xa6 27 """" ""dhf\xe2\x80\xa6 Open Public Summary \n#> 6 2d6cee25-c\xe2\x80\xa6 WCS \xe2\x80\xa6 Mozambiq\xe2\x80\xa6 74 ""WCS\xe2\x80\xa6 ""Dat\xe2\x80\xa6 Open Private \n#> 7 3a9ecb7c-f\xe2\x80\xa6 Aceh\xe2\x80\xa6 Indonesia 18 ""WCS\xe2\x80\xa6 """" Open Private \n#> 8 4080679f-1\xe2\x80\xa6 Mada\xe2\x80\xa6 Madagasc\xe2\x80\xa6 74 ""WCS\xe2\x80\xa6 ""MAC\xe2\x80\xa6 Open Private \n#> 9 4d23d2a1-7\xe2\x80\xa6 Mada\xe2\x80\xa6 Madagasc\xe2\x80\xa6 16 ""WCS\xe2\x80\xa6 ""Mon\xe2\x80\xa6 Open Public Summary \n#> 10 507d1af9-e\xe2\x80\xa6 Kari\xe2\x80\xa6 Indonesia 43 ""WCS\xe2\x80\xa6 """" Open Private \n#> 11 5679ef3d-b\xe2\x80\xa6 Mada\xe2\x80\xa6 Madagasc\xe2\x80\xa6 33 ""WCS\xe2\x80\xa6 """" Open Public Summary \n#> 12 5f13e6dc-4\xe2\x80\xa6 Copy\xe2\x80\xa6 Indonesia 43 ""WCS\xe2\x80\xa6 """" Open Public Summary \n#> 13 75ef7a5a-c\xe2\x80\xa6 Kubu\xe2\x80\xa6 Fiji 78 ""WCS\xe2\x80\xa6 """" Open Private \n#> 14 7a6bfd69-6\xe2\x80\xa6 Copy\xe2\x80\xa6 Belize 31 ""WCS\xe2\x80\xa6 """" Open Public Summary \n#> 15 9de82789-c\xe2\x80\xa6 XPDC\xe2\x80\xa6 Indonesia 37 """" ""XPD\xe2\x80\xa6 Open Private \n#> 16 a1b7ff1f-8\xe2\x80\xa6 Grea\xe2\x80\xa6 Fiji 76 ""Uni\xe2\x80\xa6 """" Open Private \n#> 17 bacd3529-e\xe2\x80\xa6 Beli\xe2\x80\xa6 Belize; \xe2\x80\xa6 32 ""WCS\xe2\x80\xa6 """" Open Public Summary \n#> 18 d065cba4-e\xe2\x80\xa6 2019\xe2\x80\xa6 Fiji 31 ""WCS\xe2\x80\xa6 ""Ble\xe2\x80\xa6 Open Private \n#> 19 e1efb1e0-0\xe2\x80\xa6 2016\xe2\x80\xa6 Fiji 8 ""WCS\xe2\x80\xa6 ""Nam\xe2\x80\xa6 Open Private \n#> # \xe2\x84\xb9 7 more variables: data_policy_benthiclit ,\n#> # data_policy_benthicpit , data_policy_benthicpqt ,\n#> # data_policy_habitatcomplexity , data_policy_bleachingqc ,\n#> # created_on , updated_on \n```\n\nTo focus on just one or a few projects, you can filter by fields like\nthe project name, country, or tags using the `dplyr` package. 
For\nexample, I\xe2\x80\x99ll narrow in on the WCS Mozambique Coral Reef Monitoring\nproject.\n\n``` r\nlibrary(dplyr)\n\nwcs_mozambique <- projects %>%\n filter(name == ""WCS Mozambique Coral Reef Monitoring"")\n```\n\nYou can access data collected on fishbelt, benthic LIT, benthic PIT,\nbleaching, or habitat complexity - the main function to pull data\nrelated to your project is `mermaid_get_project_data()`:\n\n``` r\nwcs_mozambique_fishbelt_samples <- wcs_mozambique %>%\n mermaid_get_project_data(method = ""fishbelt"", data = ""sampleevents"")\n```\n\nThe `data = ""sampleevents""` argument specifies that I\xe2\x80\x99d like to pull\ndata summarised to the level of a sample **event**, which is a site and\ndate - we can see that this pulls information about the site and date of\nsamples, along with aggregations like the total biomass of that\nsite/date, and broken down by trophic group and fish family.\n\n``` r\nwcs_mozambique_fishbelt_samples\n#> # A tibble: 79 \xc3\x97 58\n#> project tags country site latitude longitude reef_type reef_zone\n#> \n#> 1 WCS Mozambique Co\xe2\x80\xa6 WCS \xe2\x80\xa6 Mozamb\xe2\x80\xa6 Aqua\xe2\x80\xa6 -21.8 35.5 barrier back reef\n#> 2 WCS Mozambique Co\xe2\x80\xa6 WCS \xe2\x80\xa6 Mozamb\xe2\x80\xa6 Baby\xe2\x80\xa6 -11.0 40.7 fringing fore reef\n#> 3 WCS Mozambique Co\xe2\x80\xa6 WCS \xe2\x80\xa6 Mozamb\xe2\x80\xa6 Balu\xe2\x80\xa6 -22.0 35.5 patch fore reef\n#> 4 WCS Mozambique Co\xe2\x80\xa6 WCS \xe2\x80\xa6 Mozamb\xe2\x80\xa6 Barr\xe2\x80\xa6 -26.0 32.9 barrier back reef\n#> 5 WCS Mozambique Co\xe2\x80\xa6 WCS \xe2\x80\xa6 Mozamb\xe2\x80\xa6 Barr\xe2\x80\xa6 -26.1 32.9 barrier back reef\n#> 6 WCS Mozambique Co\xe2\x80\xa6 WCS \xe2\x80\xa6 Mozamb\xe2\x80\xa6 Bunt\xe2\x80\xa6 -12.6 40.6 fringing fore reef\n#> 7 WCS Mozambique Co\xe2\x80\xa6 WCS \xe2\x80\xa6 Mozamb\xe2\x80\xa6 Bunt\xe2\x80\xa6 -12.6 40.6 fringing fore reef\n#> 8 WCS Mozambique Co\xe2\x80\xa6 WCS \xe2\x80\xa6 Mozamb\xe2\x80\xa6 Chec\xe2\x80\xa6 -26.8 32.9 patch fore reef\n#> 9 WCS Mozambique Co\xe2\x80\xa6 WCS \xe2\x80\xa6 Mozamb\xe2\x80\xa6 Coli\xe2\x80\xa6 -12.6 40.6 fringing fore reef\n#> 10 WCS Mozambique Co\xe2\x80\xa6 WCS \xe2\x80\xa6 Mozamb\xe2\x80\xa6 Dogt\xe2\x80\xa6 -12.5 40.6 fringing crest \n#> # \xe2\x84\xb9 69 more rows\n#> # \xe2\x84\xb9 50 more variables: reef_exposure , tide , current ,\n#> # visibility , management , management_secondary ,\n#> # management_est_year , management_size , management_parties ,\n#> # management_compliance , management_rules , sample_date ,\n#> # depth_avg , biomass_kgha_avg ,\n#> # biomass_kgha_trophic_group_avg_piscivore , \xe2\x80\xa6\n```\n\nIf you\xe2\x80\x99d like data related to the **units** of survey (for example, to\ntransects or quadrats), it\xe2\x80\x99s just a matter of changing `data` to\n\xe2\x80\x9csampleunits\xe2\x80\x9d:\n\n``` r\nwcs_mozambique %>%\n mermaid_get_project_data(method = ""fishbelt"", data = ""sampleunits"")\n#> # A tibble: 108 \xc3\x97 70\n#> project tags country site latitude longitude reef_type reef_zone\n#> \n#> 1 WCS Mozambique Co\xe2\x80\xa6 WCS \xe2\x80\xa6 Mozamb\xe2\x80\xa6 Aqua\xe2\x80\xa6 -21.8 35.5 barrier back reef\n#> 2 WCS Mozambique Co\xe2\x80\xa6 WCS \xe2\x80\xa6 Mozamb\xe2\x80\xa6 Aqua\xe2\x80\xa6 -21.8 35.5 barrier back reef\n#> 3 WCS Mozambique Co\xe2\x80\xa6 WCS \xe2\x80\xa6 Mozamb\xe2\x80\xa6 Baby\xe2\x80\xa6 -11.0 40.7 fringing fore reef\n#> 4 WCS Mozambique Co\xe2\x80\xa6 WCS \xe2\x80\xa6 Mozamb\xe2\x80\xa6 Balu\xe2\x80\xa6 -22.0 35.5 patch fore reef\n#> 5 WCS Mozambique Co\xe2\x80\xa6 WCS 
\xe2\x80\xa6 Mozamb\xe2\x80\xa6 Barr\xe2\x80\xa6 -26.0 32.9 barrier back reef\n#> 6 WCS Mozambique Co\xe2\x80\xa6 WCS \xe2\x80\xa6 Mozamb\xe2\x80\xa6 Barr\xe2\x80\xa6 -26.0 32.9 barrier back reef\n#> 7 WCS Mozambique Co\xe2\x80\xa6 WCS \xe2\x80\xa6 Mozamb\xe2\x80\xa6 Barr\xe2\x80\xa6 -26.1 32.9 barrier back reef\n#> 8 WCS Mozambique Co\xe2\x80\xa6 WCS \xe2\x80\xa6 Mozamb\xe2\x80\xa6 Barr\xe2\x80\xa6 -26.1 32.9 barrier back reef\n#> 9 WCS Mozambique Co\xe2\x80\xa6 WCS \xe2\x80\xa6 Mozamb\xe2\x80\xa6 Bunt\xe2\x80\xa6 -12.6 40.6 fringing fore reef\n#> 10 WCS Mozambique Co\xe2\x80\xa6 WCS \xe2\x80\xa6 Mozamb\xe2\x80\xa6 Bunt\xe2\x80\xa6 -12.6 40.6 fringing fore reef\n#> # \xe2\x84\xb9 98 more rows\n#> # \xe2\x84\xb9 62 more variables: reef_exposure , reef_slope , tide ,\n#> # current , visibility , relative_depth , management ,\n#> # management_secondary , management_est_year ,\n#> # management_size , management_parties ,\n#> # management_compliance , management_rules , sample_date ,\n#> # sample_time , depth , transect_number , label , \xe2\x80\xa6\n```\n\nAnd raw observations are available by changing it to \xe2\x80\x9cobservations\xe2\x80\x9d:\n\n``` r\nwcs_mozambique %>%\n mermaid_get_project_data(method = ""fishbelt"", data = ""observations"")\n#> # A tibble: 2,637 \xc3\x97 50\n#> project tags country site latitude longitude reef_type reef_zone\n#> \n#> 1 WCS Mozambique Co\xe2\x80\xa6 WCS \xe2\x80\xa6 Mozamb\xe2\x80\xa6 Aqua\xe2\x80\xa6 -21.8 35.5 barrier back reef\n#> 2 WCS Mozambique Co\xe2\x80\xa6 WCS \xe2\x80\xa6 Mozamb\xe2\x80\xa6 Aqua\xe2\x80\xa6 -21.8 35.5 barrier back reef\n#> 3 WCS Mozambique Co\xe2\x80\xa6 WCS \xe2\x80\xa6 Mozamb\xe2\x80\xa6 Aqua\xe2\x80\xa6 -21.8 35.5 barrier back reef\n#> 4 WCS Mozambique Co\xe2\x80\xa6 WCS \xe2\x80\xa6 Mozamb\xe2\x80\xa6 Aqua\xe2\x80\xa6 -21.8 35.5 barrier back reef\n#> 5 WCS Mozambique Co\xe2\x80\xa6 WCS \xe2\x80\xa6 Mozamb\xe2\x80\xa6 Aqua\xe2\x80\xa6 -21.8 35.5 barrier back reef\n#> 6 WCS Mozambique Co\xe2\x80\xa6 WCS \xe2\x80\xa6 Mozamb\xe2\x80\xa6 Aqua\xe2\x80\xa6 -21.8 35.5 barrier back reef\n#> 7 WCS Mozambique Co\xe2\x80\xa6 WCS \xe2\x80\xa6 Mozamb\xe2\x80\xa6 Aqua\xe2\x80\xa6 -21.8 35.5 barrier back reef\n#> 8 WCS Mozambique Co\xe2\x80\xa6 WCS \xe2\x80\xa6 Mozamb\xe2\x80\xa6 Aqua\xe2\x80\xa6 -21.8 35.5 barrier back reef\n#> 9 WCS Mozambique Co\xe2\x80\xa6 WCS \xe2\x80\xa6 Mozamb\xe2\x80\xa6 Aqua\xe2\x80\xa6 -21.8 35.5 barrier back reef\n#> 10 WCS Mozambique Co\xe2\x80\xa6 WCS \xe2\x80\xa6 Mozamb\xe2\x80\xa6 Aqua\xe2\x80\xa6 -21.8 35.5 barrier back reef\n#> # \xe2\x84\xb9 2,627 more rows\n#> # \xe2\x84\xb9 42 more variables: reef_exposure , reef_slope , tide ,\n#> # current , visibility , relative_depth , management ,\n#> # management_secondary , management_est_year ,\n#> # management_size , management_parties ,\n#> # management_compliance , management_rules , sample_date ,\n#> # sample_time , transect_length , transect_width , \xe2\x80\xa6\n```\n\nFor more details on accessing project data, please see the [Accessing\nProject\nData](https://data-mermaid.github.io/mermaidr/articles/articles/detailed_usage.html)\narticle.\n\nYou may also want to access data that is not related to projects. 
To\naccess this data, you do not need to authenticate R with MERMAID.\n\nFor example, you can pull reference data (the names and information of\nthe fish and benthic attributes you can choose in MERMAID), using\n`mermaid_get_reference()`:\n\n``` r\nmermaid_get_reference(reference = ""fishfamilies"")\n#> # A tibble: 161 \xc3\x97 9\n#> id name status biomass_constant_a biomass_constant_b biomass_constant_c\n#> \n#> 1 0091bb\xe2\x80\xa6 Kyph\xe2\x80\xa6 Open 0.0193 3.03 0.986\n#> 2 00b644\xe2\x80\xa6 Mugi\xe2\x80\xa6 Open 0.0166 2.94 0.974\n#> 3 00f427\xe2\x80\xa6 Zena\xe2\x80\xa6 Open 0.00427 3.02 1 \n#> 4 02268a\xe2\x80\xa6 Sphy\xe2\x80\xa6 Open 0.00448 3.11 1 \n#> 5 0880aa\xe2\x80\xa6 Labr\xe2\x80\xa6 Open 0.0120 3.04 0.997\n#> 6 0aff09\xe2\x80\xa6 Scom\xe2\x80\xa6 Open 0.0111 3.03 0.988\n#> 7 0b69f2\xe2\x80\xa6 Ophi\xe2\x80\xa6 Open 0.00139 2.93 1 \n#> 8 0d9904\xe2\x80\xa6 Albu\xe2\x80\xa6 Open 0.0105 2.99 1 \n#> 9 0e5b1d\xe2\x80\xa6 Hemi\xe2\x80\xa6 Open 0.0373 3.16 0.99 \n#> 10 151384\xe2\x80\xa6 Serr\xe2\x80\xa6 Open 0.0136 3.03 0.997\n#> # \xe2\x84\xb9 151 more rows\n#> # \xe2\x84\xb9 3 more variables: regions , created_on , updated_on \n```\n\nUsing this function, you can access the fish family, fish genera, fish\nspecies, and benthic attributes references by changing the `reference`\nargument.\n\nYou can also get a list of *all* projects (not just your own):\n\n``` r\nmermaid_get_projects()\n#> # A tibble: 167 \xc3\x97 15\n#> id name countries num_sites tags notes status data_policy_beltfish\n#> \n#> 1 00673bdf-b\xe2\x80\xa6 TPK \xe2\x80\xa6 ""Indones\xe2\x80\xa6 15 ""WCS\xe2\x80\xa6 """" Open Private \n#> 2 01bbe407-f\xe2\x80\xa6 Mada\xe2\x80\xa6 ""Madagas\xe2\x80\xa6 12 ""WCS\xe2\x80\xa6 ""Sur\xe2\x80\xa6 Open Private \n#> 3 02e6915c-1\xe2\x80\xa6 TWP \xe2\x80\xa6 ""Indones\xe2\x80\xa6 14 ""WCS\xe2\x80\xa6 """" Open Private \n#> 4 07df6a50-6\xe2\x80\xa6 Cend\xe2\x80\xa6 ""Indones\xe2\x80\xa6 36 ""TNC\xe2\x80\xa6 """" Open Private \n#> 5 0b39fe6c-0\xe2\x80\xa6 Open\xe2\x80\xa6 ""Indones\xe2\x80\xa6 2 ""WCS\xe2\x80\xa6 ""Thi\xe2\x80\xa6 Open Private \n#> 6 0c000a00-f\xe2\x80\xa6 2019\xe2\x80\xa6 ""Fiji"" 18 ""WCS\xe2\x80\xa6 """" Open Private \n#> 7 0c16681c-6\xe2\x80\xa6 REEF\xe2\x80\xa6 """" 0 """" """" Open Public Summary \n#> 8 0d87490d-3\xe2\x80\xa6 Test\xe2\x80\xa6 ""Indones\xe2\x80\xa6 2 """" """" Open Public Summary \n#> 9 0de6f1fc-1\xe2\x80\xa6 Copy\xe2\x80\xa6 ""Fiji"" 9 ""WWF\xe2\x80\xa6 ""Dat\xe2\x80\xa6 Open Public Summary \n#> 10 0f17035f-0\xe2\x80\xa6 what """" 0 """" """" Open Public Summary \n#> # \xe2\x84\xb9 157 more rows\n#> # \xe2\x84\xb9 7 more variables: data_policy_benthiclit ,\n#> # data_policy_benthicpit , data_policy_benthicpqt ,\n#> # data_policy_habitatcomplexity , data_policy_bleachingqc ,\n#> # created_on , updated_on \n```\n\nAs well as all sites:\n\n``` r\nmermaid_get_sites()\n#> # A tibble: 2,696 \xc3\x97 13\n#> id name notes project latitude longitude country reef_type reef_zone\n#> \n#> 1 0415d9e5-\xe2\x80\xa6 mysi\xe2\x80\xa6 """" 2c56b9\xe2\x80\xa6 -1 -1 Bangla\xe2\x80\xa6 atoll back reef\n#> 2 6cd334f9-\xe2\x80\xa6 meli\xe2\x80\xa6 """" ea4751\xe2\x80\xa6 49 -110 Canada atoll back reef\n#> 3 afe4dac0-\xe2\x80\xa6 meli\xe2\x80\xa6 """" ea4751\xe2\x80\xa6 49 -110 Canada atoll back reef\n#> 4 02355d6c-\xe2\x80\xa6 BA09 """" a1b7ff\xe2\x80\xa6 -17.4 178. Fiji atoll back reef\n#> 5 03e5576e-\xe2\x80\xa6 BA03 """" 89f2d4\xe2\x80\xa6 -17.4 178. Fiji atoll back reef\n#> 6 0879390b-\xe2\x80\xa6 BA16 """" a1b7ff\xe2\x80\xa6 -17.2 178. 
Fiji atoll back reef\n#> 7 18f09a09-\xe2\x80\xa6 BA06 """" 0de6f1\xe2\x80\xa6 -17.4 178. Fiji atoll back reef\n#> 8 19258ea5-\xe2\x80\xa6 BA15 """" a1b7ff\xe2\x80\xa6 -17.2 178. Fiji atoll back reef\n#> 9 19e60884-\xe2\x80\xa6 YA02 """" a1b7ff\xe2\x80\xa6 -17.0 177. Fiji atoll back reef\n#> 10 20aeb13f-\xe2\x80\xa6 BA11 """" a1b7ff\xe2\x80\xa6 -17.3 178. Fiji atoll back reef\n#> # \xe2\x84\xb9 2,686 more rows\n#> # \xe2\x84\xb9 4 more variables: exposure , predecessor , created_on ,\n#> # updated_on \n```\n\nAnd all managements:\n\n``` r\nmermaid_get_managements()\n#> # A tibble: 1,016 \xc3\x97 17\n#> id name name_secondary est_year size parties compliance open_access\n#> \n#> 1 0031d438-\xe2\x80\xa6 Mata\xe2\x80\xa6 ""Fish Habitat\xe2\x80\xa6 2018 25 commun\xe2\x80\xa6 full FALSE \n#> 2 00c920d6-\xe2\x80\xa6 Lape\xe2\x80\xa6 ""Special Mana\xe2\x80\xa6 2017 198 commun\xe2\x80\xa6 full FALSE \n#> 3 02479d18-\xe2\x80\xa6 Prot\xe2\x80\xa6 ""Zona Perlind\xe2\x80\xa6 2015 NA commun\xe2\x80\xa6 full FALSE \n#> 4 029852d5-\xe2\x80\xa6 Haaf\xe2\x80\xa6 ""Fish Habitat\xe2\x80\xa6 2007 139 commun\xe2\x80\xa6 full FALSE \n#> 5 02cd9d54-\xe2\x80\xa6 Kaib\xe2\x80\xa6 """" 2017 NA commun\xe2\x80\xa6 full FALSE \n#> 6 02e546ac-\xe2\x80\xa6 VIR3 """" 2012 NA commun\xe2\x80\xa6 full FALSE \n#> 7 03bab6aa-\xe2\x80\xa6 VIR9 """" 2016 NA commun\xe2\x80\xa6 full FALSE \n#> 8 04286cba-\xe2\x80\xa6 Test\xe2\x80\xa6 """" 2018 NA govern\xe2\x80\xa6 full FALSE \n#> 9 044c6e26-\xe2\x80\xa6 Fono\xe2\x80\xa6 ""Fish Habitat\xe2\x80\xa6 2017 191 commun\xe2\x80\xa6 full FALSE \n#> 10 05227cee-\xe2\x80\xa6 Test\xe2\x80\xa6 """" 2018 5 full FALSE \n#> # \xe2\x84\xb9 1,006 more rows\n#> # \xe2\x84\xb9 9 more variables: no_take , access_restriction ,\n#> # periodic_closure , size_limits , gear_restriction ,\n#> # species_restriction , notes , created_on , updated_on \n```\n\nThere is additional data available from the MERMAID API, both related to\nspecific projects and not. If you think you\xe2\x80\x99ll need to use these, please\nsee `mermaid_get_endpoint()` and `mermaid_get_project_endpoint()`.\n\nThis is a small sample of the wealth of data that\xe2\x80\x99s available on your\nMERMAID projects, and on the ecosystem as a whole! Please explore the\n[package website](https://data-mermaid.github.io/mermaidr/) for more.\n'",,"2019/06/27, 13:32:11",1581,MIT,123,551,"2023/10/25, 19:32:01",1,55,86,42,0,1,0.0,0.005964214711729587,"2023/09/08, 17:16:41",v0.7.0,0,3,false,,false,false,,,https://github.com/data-mermaid,https://datamermaid.org,,,,https://avatars.githubusercontent.com/u/50884305?v=4,,, RSP,Refining the Shortest Paths of animals tracked with acoustic transmitters in estuarine regions.,YuriNiella,https://github.com/YuriNiella/RSP.git,github,,Marine Life and Fishery,"2023/08/15, 04:39:22",13,0,5,true,R,,,R,,"b'\n# RSP\n\n[![R-CMD-check](https://github.com/YuriNiella/RSP/workflows/R-CMD-check/badge.svg)](https://github.com/YuriNiella/RSP/actions)\n[![codecov](https://codecov.io/github/YuriNiella/RSP/branch/master/graphs/badge.svg)](https://codecov.io/github/YuriNiella/RSP)\n[![DOI](https://zenodo.org/badge/200786116.svg)](https://zenodo.org/badge/latestdoi/200786116)\n\nRefining the Shortest Paths (RSP) of animals tracked with acoustic\ntransmitters in estuarine regions\n\n## Overview\n\nThe RSP toolkit is a method for analyzing the fine scale movements of\naquatic animals tracked with passive acoustic telemetry in estuarine\nenvironments, that accounts for the surrounding land masses. 
The animal\nmovements between detections are recreated to have occurred exclusively\nin water, and the utilization distribution areas are limited by the land\ncontours, providing realistic estimations of space use. The method can\nbe divided into the following three main steps:\n\n1) Refining the shortest paths in water between consecutive acoustic detections\n2) Calculating utilization distribution areas using dynamic Brownian\n   Bridge Movement Models (dBBMM)\n3) Calculating the overlaps between different biological groups monitored\n\nDepending on the research questions being addressed, the utilization\ndistribution areas can be calculated for the entire monitoring period,\nor at fine scale according to fixed temporal intervals in hours\n(timeframes). Tracked animals are assigned to specific biological groups\n(different species, different sexes of the same species, etc.) prior to\nanalysis, and RSP can be used to calculate the amount of inter-group overlap in\nspace and time between all groups monitored. This approach allows\nspatial ecologists to use the outputs from such fine-scale space use\nmodels (areas of use, or the between-group overlaps) as input for\nfurther statistical analysis.\n\nHere is an example of the same animal movements animated both using\n**only the receiver locations** and the **receiver and RSP positions\ncombined**:\n\n![](vignettes/animationRSP.gif)\n\n## Main RSP functions\n\n### Running the analysis\n\n**runRSP()**\n\nUsed to estimate the shortest in-water paths. Each\nanimal monitored is analysed individually and all detections are\nassigned to separate **tracks**: a sequence of detections with\nintervals shorter than 24 hours (by default, max.time = 24). When the animal is not detected for\na period of time longer than the max.time argument, a new track is created.\n\n**dynBBMM()**\n\nAfter the shortest in-water paths are estimated, the utilization distribution areas can be calculated with\n**dynamic Brownian Bridge Movement Models** (dBBMM). Models can be calculated either for the entire monitoring period or for a particular interval of interest.\n\n**getDistances()**\n\nCalculates the distances travelled (in meters) during each RSP track, both using only the receiver locations and using the exclusively in-water tracks.\n\n**getAreas()**\n\nObtains the in-water areas (in square meters) for the tracked animals, either at the group level or the track level. The contour levels of interest from the dBBMMs can be set; by default, areas are calculated for the **50%** and **95%** contours.\n\n**getOverlaps()**\n\nCalculates the amount of overlap among the different biological groups monitored, at the same contour levels as defined in getAreas(). Overlaps are returned as **only in space** for group dBBMM and, if a timeframe is set (in hours), as **simultaneously in space and time** for timeslot dBBMM. 
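\n\nThe sketch below shows how these functions chain together. It is an\nillustrative outline only: the input objects and argument names are\nassumptions rather than the exact package interface, which is documented\nin the package vignettes.\n\n```\n# Illustrative workflow sketch; inputs and argument names are assumed\nlibrary(RSP)\n\nrsp <- runRSP(my_detections, t.layer = my_transition_layer) # in-water paths\ndbbmm <- dynBBMM(rsp)            # utilization distribution areas\nareas <- getAreas(dbbmm)         # 50% and 95% contours by default\noverlaps <- getOverlaps(areas)   # overlap between biological groups\ndistances <- getDistances(rsp)   # distance travelled per track\n```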
\n\n\n### Plotting the results\n\n**plotTracks()**\n\nThis function can be used to visualize the tracks created using **runRSP()**:\n\n\n\n**plotContours()**\n\nPlots the dBBMM utilization distribution areas calculated for each animal using **dynBBMM()**:\n\n\n \n**plotAreas()**\n\nThis plot shows the space use areas from a particular group of animals:\n\n\n\n**plotOverlap()**\n\nThis function shows where in the study area the overlaps between **different\nbiological groups** occurred:\n\n\n\n\n**animateTracks()**\n\nThis function can be used to create an **animation** of the **RSP tracks**:\n\n![](vignettes/Sydney_anim.gif)\n\n## Installation\n\nCurrent version: **1.0.4**\n\nYou will need the **remotes** package **to install RSP**:\n```\ninstall.packages(""remotes"")\nlibrary(""remotes"")\n```\n\nNow you can install RSP using:\n```\ninstall_github(""YuriNiella/RSP"", build_opts = c(""--no-resave-data"", ""--no-manual""), build_vignettes = TRUE)\n```\n\nAll the information you need on how to perform the RSP analysis can be\nfound in the package vignettes:\n```\nbrowseVignettes(""RSP"")\n```\n'",",https://zenodo.org/badge/latestdoi/200786116","2019/08/06, 05:59:36",1541,GPL-3.0,24,684,"2023/09/08, 17:16:20",3,3,23,1,47,0,0.6666666666666666,0.25113464447806355,"2020/08/12, 22:32:09",v0.0.1,0,2,false,,false,false,,,,,,,,,,, aspe,An R package to analyse and visualise river fish data in France.,PascalIrz,https://github.com/PascalIrz/aspe.git,github,"aspe,river,fish,freshwater,france,dataset,electrofishing,sampling,plot,map,wfd,index",Marine Life and Fishery,"2023/08/09, 17:22:29",10,0,5,true,R,,,R,https://pascalirz.github.io/aspe/,"b'\r\n[![R-CMD-check](https://github.com/PascalIrz/aspe/workflows/R-CMD-check/badge.svg)](https://github.com/PascalIrz/aspe/actions)\r\n[![License: GPL v3](https://img.shields.io/badge/License-GPLv3-blue.svg)](https://www.gnu.org/licenses/gpl-3.0)\r\n[![Lifecycle:Maturing](https://img.shields.io/badge/Lifecycle-Maturing-007EC6)](https://discourse.jupyter.org/t/should-we-adopt-the-tidyverse-package-lifecycle-badges/1310)\r\n[![DOI](https://zenodo.org/badge/323145117.svg)](https://zenodo.org/record/7458005)\r\n\r\n\r\n\r\n{aspe}: an R package to analyse and visualise river fish data in France \r\n====\r\n\r\n{aspe} fournit des outils autorisant simplement la plupart des\r\ntraitements de base \xc3\xa0 partir d\xe2\x80\x99une sauvegarde (dump SQL) de la base de\r\ndonn\xc3\xa9es ASPE de l\xe2\x80\x99OFB. Ces traitements comprennent les calculs\r\nd\xe2\x80\x99abondances, densit\xc3\xa9s, distribution en tailles et la d\xc3\xa9tection de\r\ntendances \xc3\xa0 l\xe2\x80\x99\xc3\xa9chelle de la station. Ces calculs sont accompagn\xc3\xa9s de\r\nfonctionnalit\xc3\xa9s graphiques.\r\n\r\nThe goal of {aspe} is to provide a suite of tools for most of the\r\ncommon processing of the ASPE database, including importation from a SQL\r\nformat dump, calculation of abundances, densities, size distributions,\r\ntemporal trends for populations at the station level, along with\r\ngraphical output.\r\n\r\nLa base de donn\xc3\xa9es ASPE de l\xe2\x80\x99OFB contient toutes les donn\xc3\xa9es\r\nd\xe2\x80\x99inventaire par p\xc3\xaache \xc3\xa0 l\xe2\x80\x99\xc3\xa9lectricit\xc3\xa9 par le CSP, l\xe2\x80\x99ONEMA, l\xe2\x80\x99AFB, l\xe2\x80\x99OFB\r\nainsi que par certains partenaires sur les rivi\xc3\xa8res de France. Elle est d\xc3\xa9crite\r\ndans Irz *et al.* (2022), article produit en visant le respect des principes de la recherche reproductible (Wilkinson *et al.* 2016). 
Ses fichiers sources - donn\xc3\xa9es comprises - sont diffus\xc3\xa9s sur le d\xc3\xa9p\xc3\xb4t [![DOI](https://zenodo.org/badge/DOI/10.5281/zenodo.7099129.svg)](https://doi.org/10.5281/zenodo.7099129)\r\n\r\n\r\nThe ASPE database is administrated by the French Office for Biodiversity. It gathers, among others, all the river electrofishing samplings carried out by the former Fisheries Council, National Office for Water and Aquatic Systems and French Agency for Biodiversity. The database is described in Irz *et al.* (2022). This paper was produced to respect the FAIR principles (Wilkinson *et al.* 2016), hence its source files - including the database - are available at [![DOI](https://zenodo.org/badge/DOI/10.5281/zenodo.7099129.svg)](https://doi.org/10.5281/zenodo.7099129)\r\n\r\n- Irz P, Vigneron T, Poulet N, Cosson E, Point T, Baglini\xc3\xa8re E, Porcher JP. 2022. A long-term monitoring database on fish and crayfish species in French rivers. *Knowl. Manag. Aquat. Ecosyst.* 423, 25, [DOI:10.1051/kmae/2022021](https://doi.org/10.1051/kmae/2022021).\r\n- Wilkinson MD, et al. 2016. The FAIR Guiding Principles for scientific data management and stewardship. *Sci. Data* 3: 160018, [DOI:10.1038/sdata.2016.18](https://doi.org/10.1038/sdata.2016.18).\r\n\r\n\r\nInstallation\r\n------------\r\n\r\nSi besoin, commencer par installer le package R {devtools} ainsi que la suite Rtools, puis installer {aspe} au moyen de la commande :\r\n\r\nIf necessary, first install the {devtools} R package and the Rtools suite, then you can\r\ninstall the released version of {aspe} from\r\n[Github](https://github.com/PascalIrz/aspe) with:\r\n\r\n devtools::install_github(""PascalIrz/aspe"")\r\n\r\n\r\nR\xc3\xa9pertoires Github associ\xc3\xa9s / Associated Github repositories\r\n------------\r\n\r\nPlusieurs d\xc3\xa9p\xc3\xb4ts Github sont associ\xc3\xa9s au package :\r\n- [aspe_data](https://github.com/PascalIrz/aspe_data) : construction des jeux de donn\xc3\xa9es inclus dans le package {aspe}\r\n- [aspe_test](https://github.com/PascalIrz/aspe_test) : Fichiers R Markdown pour tester le package {aspe} et produire les tutos\r\n- [aspe_demo](https://github.com/PascalIrz/aspe_demo) : Fichiers R Markdown pour pr\xc3\xa9traiter les donn\xc3\xa9es pour [un tableau de bord interactif](https://gitlab.ofb.fr/cedric.mondy1/aspedashboard)\r\n- [aspeQual](https://github.com/PascalIrz/aspeQual) : package R destin\xc3\xa9 \xc3\xa0 la mise en qualit\xc3\xa9 de la base Aspe\r\n\r\nThe package comes with a number of associated repos :\r\n- [aspe_data](https://github.com/PascalIrz/aspe_data): building of the datasets included in the {aspe} package\r\n- [aspe_test](https://github.com/PascalIrz/aspe_test): R Markdown files to test the {aspe} package and to build the tutorials\r\n- [aspe_demo](https://github.com/PascalIrz/aspe_demo): R Markdown files to pre-process the fish data for [a dashboard app](https://gitlab.ofb.fr/cedric.mondy1/aspedashboard)\r\n- [aspeQual](https://github.com/PascalIrz/aspeQual) : R package dedicated to the quality control on the Aspe database\r\n\r\n\r\nTutoriels / Vignettes\r\n------------\r\n\r\nLa documentation g\xc3\xa9n\xc3\xa9rale du package est diffus\xc3\xa9e sur Github pages : https://pascalirz.github.io/aspe/\r\n\r\nUne s\xc3\xa9rie de tutoriels est en ligne :\r\n- [Traiter des donn\xc3\xa9es Indice Poisson Rivi\xc3\xa8re (IPR)](https://rpubs.com/kamoke/713491)\r\n- [Faire des traitements de base \xc3\xa0 partir des lots](https://rpubs.com/kamoke/715102)\r\n- [Traiter des mesures 
individuelles](https://rpubs.com/kamoke/715858)\r\n- [Faire des traitements g\xc3\xa9ographiques](https://rpubs.com/kamoke/716322)\r\n- [Construire des relations taille-poids](https://rpubs.com/kamoke/729779)\r\n\r\nSeveral vignettes (in French) are available online:\r\n- [Processing the fish-based river health index](https://rpubs.com/kamoke/713491)\r\n- [Base processing on the fish batches data](https://rpubs.com/kamoke/715102)\r\n- [Processing the individual measurements](https://rpubs.com/kamoke/715858)\r\n- [Geographical analysis](https://rpubs.com/kamoke/716322)\r\n- [Fitting length-weight relationships](https://rpubs.com/kamoke/729779)\r\n\r\n\r\nNommage des fonctions / Functions\' naming rules\r\n---------------\r\n\r\nLes familles de fonctions se distinguent par des pr\xc3\xa9fixes :\r\n\r\n- `mef_` : mise en forme des dataframes\r\n- `expl_` : exploration de la base\r\n- `export`: export des dataframes (.csv ou .RData)\r\n- `geo_` : op\xc3\xa9rations spatiales\r\n- `gg_` : production de graphiques avec `ggplot2`\r\n- `imp_` : importation depuis un dump SQL\r\n- `ipr_` : [indice poisson rivi\xc3\xa8re](https://www.kmae-journal.org/articles/kmae/abs/2002/02/kmae2002365p405/kmae2002365p405.html)\r\n- `misc_` : divers\r\n\r\nFunctions\' names start by group-specific prefixes:\r\n\r\n- `mef_`: tidying dataframes\r\n- `expl_`: exploring the database\r\n- `export`: exporting dataframes (.csv or .RData)\r\n- `geo_`: spatial processing\r\n- `gg_`: plots with `ggplot2`\r\n- `imp_`: importing data from a SQL dump\r\n- `ipr_`: [fish-based river health index](https://www.researchgate.net/publication/227818978_Development_and_validation_of_a_fish-based_index_FBI_for_the_assessment_of_river_health_in_France)\r\n- `misc_`: miscellaneous\r\n\r\nNommage des variables / Variables\' naming rules\r\n---------------\r\n\r\nDans la quasi-totalit\xc3\xa9 des cas, les variables contenues dans une table sont pr\xc3\xa9fix\xc3\xa9es en fonction de cette table. 
Par exemple :\r\n\r\n- `sta_` : station\r\n- `pop_` : point_prelevement\r\n- `mei_` : mesure_individuelle\r\n\r\nPour une liste des noms et signification des variables, taper dans la console :\r\n\r\n data(""data_dictionnaire"")\r\n View(data_dictionnaire)\r\n\r\n\r\nAlmost systematically, variables\' names start by table-specific prefixes:\r\n\r\n- `sta_` : station\r\n- `pop_` : point_prelevement (sampling point)\r\n- `mei_` : mesure_individuelle (individual measurment)\r\n\r\nTo display a comprehensive list of variables with their meaning (all in French so far, sorry), call:\r\n\r\n data(""data_dictionnaire"")\r\n View(data_dictionnaire)\r\n\r\n\r\n'",",https://zenodo.org/record/7458005,https://doi.org/10.5281/zenodo.7099129,https://doi.org/10.5281/zenodo.7099129,https://doi.org/10.1051/kmae/2022021,https://doi.org/10.1038/sdata.2016.18","2020/12/20, 19:05:40",1039,GPL-3.0,64,311,"2023/07/31, 15:02:22",3,10,18,7,86,0,0.9,0.05119453924914674,"2023/04/19, 12:22:06",v0.3.0,0,4,false,,false,false,,,,,,,,,,, Cifonauta,Marine biology image database by CEBIMar/USP.,bruvellu,https://github.com/bruvellu/cifonauta.git,github,"database,python,django,postgresql,marine-biology,image-database,cnpq,scientific,zoology",Marine Life and Fishery,"2023/10/20, 06:56:36",18,0,7,true,Python,,,"Python,JavaScript,CSS,HTML",http://cifonauta.cebimar.usp.br,"b'# :ocean: Cifonauta, an image database for marine biology\n\nThe **[Cifonauta database](http://cifonauta.cebimar.usp.br/)** aims to showcase the outstanding biodiversity in our seas and oceans.\nWe host thousands of [photos](http://cifonauta.cebimar.usp.br/search/?datatype=photo) and [videos](http://cifonauta.cebimar.usp.br/search/?datatype=video) of marine organisms\xe2\x80\x94from the [common creatures](http://cifonauta.cebimar.usp.br/media/9442/) you spot on the beach to the [rarest forms](http://cifonauta.cebimar.usp.br/media/9147/) only visible under the microscope.\nThe images were made by scientists during their research, and are annotated with [accurate scientific information](http://cifonauta.cebimar.usp.br/tags/), such as taxonomic classification, geolocation, habitat, life stage, size, and links to [primary literature](http://cifonauta.cebimar.usp.br/literature/).\nYou can [search](http://cifonauta.cebimar.usp.br/search/) for your favorite organism or enjoy a curated journey through the diversity of [marine larvae](http://cifonauta.cebimar.usp.br/tour/larvas-marinhas/) or [gelatinous creatures](http://cifonauta.cebimar.usp.br/tour/aguas-vivas-e-outras-criaturas-gelatinosas/) in our [thematic tours](http://cifonauta.cebimar.usp.br/tours/).\nAll the content is available under a [Creative Commons license](http://creativecommons.org/licenses/by-nc-sa/3.0/).\n\n## Explore the marine biodiversity at http://cifonauta.cebimar.usp.br/\n\n\n\n\n\n\n\n\n\n\n## Credits\n\nThe Cifonauta database was created and launched in 2011 by [Bruno C. Vellutini](https://brunovellutini.com/) and [Alvaro E. 
Migotto](http://cebimar.usp.br/pt/quem-somos/equipe/academica) from the [Center for Marine Biology](http://cebimar.usp.br/) of the [University of S\xc3\xa3o Paulo](http://www.usp.br/) (CEBIMar/USP) with funding from the Brazilian [National Council for Scientific and Technological Development](https://www.gov.br/cnpq/) (CNPq), Call MCT/CNPq N\xc2\xb042/2007, Process 551951/2008-7.\n\n'",,"2010/11/23, 04:36:40",4720,GPL-3.0,481,1751,"2023/10/20, 06:56:41",22,57,208,53,5,0,0.9,0.19889502762430944,,,0,9,false,,true,false,,,,,,,,,,, CoralNet,A repository and resource for benthic image analysis.,beijbom,https://github.com/coralnet/coralnet.git,github,,Marine Life and Fishery,"2023/06/09, 21:22:26",41,0,10,true,Python,CoralNet,coralnet,"Python,JavaScript,Jupyter Notebook,HTML,CSS",http://coralnet.ucsd.edu/,"b""# CoralNet\n\n[![CI Status](https://github.com/beijbom/coralnet/actions/workflows/django.yml/badge.svg)](https://github.com/beijbom/coralnet/actions/workflows/django.yml)\n\nCoralNet is a website which serves as a repository and resource for benthic image analysis.\n\nWebsite home: https://coralnet.ucsd.edu\n\nRead more about us: https://coralnet.ucsd.edu/about/\n\nCoralNet's machine-learning processes are powered by PySpacer: https://github.com/beijbom/pyspacer\n\n\n## Building and viewing the documentation\n\nYou can browse the docs right in the GitHub repo under `/docs`, but GitHub doesn't support some of the ReStructured Text links. The build will get everything working: \n\n- Download or `git clone` this repository's code.\n- Install Python and Sphinx. You can do this by either following the installation steps (`docs/installation.rst` in this repo) until Sphinx is installed, or you can use some other Python environment which already has Sphinx installed.\n- Open a terminal/command line, cd to the `docs` directory, and run `make html`. (This command is cross platform, since there's a ``Makefile`` as well as a ``make.bat``.)\n- Open `docs/_build/html/index.html` in a web browser to start browsing the documentation.\n- It's also possible to output in formats other than HTML, if you use ``make `` with a different format. See [Sphinx's docs](http://www.sphinx-doc.org/en/master/usage/quickstart.html#running-the-build).\n""",,"2016/03/09, 21:20:31",2786,BSD-2-Clause,118,2566,"2023/08/12, 18:42:49",165,140,301,35,74,1,0.0,0.26984855438274435,,,0,6,false,,false,false,,,https://github.com/coralnet,https://coralnet.ucsd.edu/,,,,https://avatars.githubusercontent.com/u/140933521?v=4,,, Aqualink,A philanthropically funded system to help people manage their local marine ecosystems in the face of increasing Ocean temperatures.,aqualinkorg,https://github.com/aqualinkorg/aqualink-app.git,github,"environment,climate-change,typescript,react,nodejs",Marine Life and Fishery,"2023/10/16, 15:48:28",31,0,9,true,TypeScript,Aqualink.org,aqualinkorg,"TypeScript,CSS,HTML,JavaScript,Shell,Dockerfile",,"b'# Aqualink App\n\nThis is the main repository for the Aqualink Coral Monitoring Application.\n\n## Description\n\n[Aqualink](https://aqualink.org) is a philanthropically funded system to help people manage their local marine ecosystems in the face of increasing Ocean temperatures. The system consists of satellite-connected underwater temperature sensors and photographic surveys to allow for remote collaboration with scientists across the world. 
If you are concerned about the effect of climate change on your local reef and want to do something about it, then please apply to get a smart buoy for free.\n\n![Aqualink Screenshot - Map Page](packages/website/src/assets/img/readme-screenshot-map.png)\n\n## Development\nThe app is a monorepo managed by [Lerna](https://github.com/lerna/lerna) using\n[Yarn Workspaces](https://classic.yarnpkg.com/en/docs/workspaces/). Individual packages are found in the `./packages` directory. The app has two primary components:\n\n- [Nest.js API](./packages/api)\n- [React Webapp](./packages/website)\n\nCommands can either be run for a specific package by changing to the package subdirectory and running them directly,\nor for the entire project by running them here in this directory.\n\n### Getting Started\n\nTo get started, run `yarn install` from this directory. This will install all dependencies for all the packages. You\nwill also need to do some package-specific configuration, such as setting up a `.env` file for environment variables;\nsee the individual package READMEs for details on this process.\n\n### Running Commands\n\n```bash\n# Build all of the packages\nyarn build\n\n# Run tests for all of the packages\nyarn test\n\n# Start up all of the packages locally\nyarn start\n\n# Lint a specific file\nyarn lint ./packages/api/src/path/to/file.ts\n\n# Lint all the files in the app\nyarn lint:all\n```\n\n### Deploying\nWe are using Google App Engine to deploy applications. The default project we are using is `ocean-systems`.\n\n- Install the gcloud CLI using the instructions available [here](https://cloud.google.com/sdk/docs/quickstart-macos)\n- Make sure you are logged in by running `gcloud auth login`\n- Then run the following commands:\n\n```bash\n# Set the project to ocean-systems\ngcloud config set project ocean-systems\n\n# Deploy to Google Cloud\nyarn deploy\n\n# Wait and verify that the app is running\ngcloud app browse\n```\n\n## Contributing\n\nAqualink is an MIT-licensed open source project. Contributions from developers and scientists are welcome!\n\n## License\n\n Aqualink is [MIT licensed](LICENSE).\n'",,"2020/04/28, 21:43:56",1275,MIT,94,556,"2023/10/16, 15:48:29",28,576,913,156,9,4,5.1,0.5470798569725864,,,4,19,false,,false,false,,,https://github.com/aqualinkorg,aqualink.org,"San Francisco, CA",,,https://avatars.githubusercontent.com/u/63257299?v=4,,, Near Real-Time Survey Progress and Temperature Maps,"Create daily survey station daily temperature and anomaly plots as the ships work their way through the Bering Sea.",afsc-gap-products,https://github.com/afsc-gap-products/survey-live-temperature-map.git,github,"temperature,map,alaska,groundfish,climate,noaa-fisheries,data-visualization,visualization",Marine Life and Fishery,"2023/06/01, 18:46:31",6,0,5,true,R,AFSC GAP Survey Data Products,afsc-gap-products,R,https://www.fisheries.noaa.gov/alaska/science-data/near-real-time-temperatures-bering-sea-bottom-trawl-survey,"b'\n\n# [Near Real-Time Survey Progress and Temperature Maps](https://github.com/afsc-gap-products/survey-live-temperature-map) \n\n*This code is always in development.
Find the code used for final products\nin the\n[releases](https://github.com/afsc-gap-products/survey-live-temperature-map/releases).*\n\n## This code is primarily maintained by:\n\n**Emily Markowitz** (Emily.Markowitz AT noaa.gov;\n[@EmilyMarkowitz-NOAA](https://github.com/EmilyMarkowitz-NOAA))\n\n**Liz Dawson** (Liz.Dawson AT noaa.gov;\n[@liz-dawson-NOAA](https://github.com/liz-dawson-NOAA))\n\n**Chris Anderson** (Christopher.Anderson AT noaa.gov;\n[@ChrisAnderson-NOAA](https://github.com/ChrisAnderson-NOAA))\n\nAnd previously, **Caitlin Allen Akselrud** (caitlin.allen_akselrud AT\nnoaa.gov;\n[@CaitlinAkselrud-NOAA](https://github.com/CaitlinAkselrud-NOAA))\n\nAlaska Fisheries Science Center,\n\nNational Marine Fisheries Service,\n\nNational Oceanic and Atmospheric Administration,\n\nSeattle, WA 98115\n\n# Table of contents\n\n> - [*Purpose*](#purpose)\n> - [*Notes*](#notes)\n> - [*Plot Examples*](#plot-examples)\n> - [*Final stacked gifs*](#final-stacked-gifs)\n> - [*Blank, Grid-only Plot*](#blank,-grid-only-plot)\n> - [*Mean Plot*](#mean-plot)\n> - [*Anomaly Plot*](#anomaly-plot)\n> - [*Relevant publications*](#relevant-publications)\n> - [*Suggestions and Comments*](#suggestions-and-comments)\n> - [*R Version Metadata*](#r-version-metadata)\n> - [*NOAA README*](#noaa-readme)\n> - [*NOAA License*](#noaa-license)\n\n# Purpose\n\nThese scripts create daily survey station temperature and anomaly\nplots as the ships work their way through the Bering Sea. These ships\nare conducting NOAA Fisheries\xe2\x80\x99 Alaska Fisheries Science Center\xe2\x80\x99s\nfisheries-independent surveys in the eastern Bering Sea. The scripts pull\ntemperatures from Google Drive, entered by FPCs at sea, create daily\nmaps and composite gifs, and then push the maps to Google Drive for the\ncommunications team. These plots are displayed on the AFSC website:\n\n- [2022 Eastern and Northern Bering Sea Bottom Trawl\n Survey](https://www.fisheries.noaa.gov/alaska/climate/near-real-time-temperatures-bering-sea-bottom-trawl-surveys-2022)\n- [2022 Aleutian Islands Bottom Trawl\n Survey](https://www.fisheries.noaa.gov/alaska/climate/near-real-time-temperatures-aleutian-islands-bottom-trawl-survey-2022)\n- [2021 Eastern and Northern Bering Sea Bottom Trawl\n Survey](https://www.fisheries.noaa.gov/alaska/science-data/near-real-time-temperatures-bering-sea-bottom-trawl-survey)\n- [2019 Eastern and Northern Bering Sea Bottom Trawl\n Survey](https://www.fisheries.noaa.gov/feature-story/2019-southeastern-bering-sea-shelf-bottom-trawl-survey-gets-underway)\n- [2018 Eastern Bering Sea Bottom Trawl\n Survey](https://www.fisheries.noaa.gov/resource/document/2018-eastern-bering-sea-continental-shelf-and-northern-bering-sea-trawl-surveys)\n- [2017 Eastern and Northern Bering Sea Bottom Trawl\n Survey](https://www.fisheries.noaa.gov/resource/document/2017-eastern-bering-sea-continental-shelf-and-northern-bering-sea-bottom-trawl)\n\n# Notes\n\n- [How to set up the task\n scheduler](https://docs.google.com/document/d/1pwBmR6AqgnvUx_AiWYQxtYxIRjWMfdd5EPWwFvpI3Ug/edit)\n- Files are saved to our internal dev FTP server and Google Drive.\n- Troubleshooting: if the task scheduler fails to run the code, but you\n can run the script in R or RStudio, you may need to update Pandoc.
If\n you are on a NOAA machine, ask IT to install the .msi file for you.\n Close and reopen everything and try again.\n\n# Plot Examples\n\nFind more plot examples\n[here](https://github.com/afsc-gap-products/survey-live-temperature-map/tree/main/test).\n\n## Final stacked gifs\n\n![NOAA Fisheries AFSC Groundfish Assessment Program conducted the\neastern Bering Sea and northern Bering Sea bottom trawl surveys. The\nnear real-time ocean bottom temperatures depicted were collected May\n29-August 20. This is the last day of the survey. On August 20, stations\nR-18 (60.67\xc2\xb0N, -168.69\xc2\xb0W; \\>8\xc2\xb0C), R-01 (60.67\xc2\xb0N, -168.01\xc2\xb0W; \\>8\xc2\xb0C), and\nR-02 (60.67\xc2\xb0N, -167.32\xc2\xb0W; \\>8\xc2\xb0C) were surveyed by the F/V Alaska Knight.\nNo stations were surveyed by the F/V Vesteraalen. Credit: NOAA\nFisheries](./examples/current_daily_bs.gif) , ![NOAA Fisheries AFSC\nGroundfish Assessment Program conducted the Aleutian Islands bottom\ntrawl survey. The near real-time ocean bottom temperatures depicted were\ncollected June 10-August 13. On August 13, stations 162-16 (51.65\xc2\xb0N,\n-177.89\xc2\xb0W; \\>5.5\xe2\x80\x936\xc2\xb0C) and 184-16 (51.66\xc2\xb0N, -176.25\xc2\xb0W; \\>5\xe2\x80\x935.5\xc2\xb0C) were\nsurveyed by the F/V Alaska Provider. No stations were surveyed by the\nF/V Ocean Explorer. Allocated stations that have not yet been sampled\nare shown as gray dots. Credit: NOAA\nFisheries](./examples/current_daily_ai.gif) , ![NOAA Fisheries AFSC\nGroundfish Assessment Program conducted the Gulf of Alaska bottom trawl\nsurvey. The near real-time ocean bottom temperatures depicted were\ncollected May 23-August 14. On August 14, station 447-60 (54.79\xc2\xb0N,\n-133.07\xc2\xb0W; \\>6\xe2\x80\x937\xc2\xb0C) was surveyed by the F/V Alaska Provider. No stations\nwere surveyed by the F/V Ocean Explorer. Allocated stations that have\nnot yet been sampled are shown as gray dots. Credit: NOAA\nFisheries](./examples/current_daily_goa.gif)\n\n## Blank, Grid-only Plot\n\n![The grid of designated stations in the eastern Bering Sea and northern\nBering Sea bottom trawl survey areas as well as the 50m, 100m, and 200m\nbathymetric boundaries. Credit: NOAA\nFisheries](./examples/current_grid_bs.png) , ![The Aleutian Islands bottom\ntrawl survey. This survey covers the Central Aleutians, Eastern\nAleutians, Southern Bering Sea, and Western Aleutians regions. Credit:\nNOAA Fisheries](./examples/current_grid_ai.png) , ![The Gulf of Alaska\nbottom trawl survey. This survey covers the Shumagin, Chirikof, Kodiak,\nYakutat, and Southeastern regions. Credit: NOAA\nFisheries](./examples/current_grid_goa.png)\n\n## Mean Plot\n
\n*Figure: The timeseries mean bottom temperatures\nfrom the NOAA Fisheries eastern Bering Sea (1982-2021; 39 years) and\nnorthern Bering Sea (2010-2021; 4 years) bottom trawl surveys. These\ndata are publicly accessible on the Fisheries One Stop Shop data platform\n(https://www.fisheries.noaa.gov/foss). Credit: NOAA Fisheries*\n\n## Anomaly Plot\n\n*Figure: The 2022 near real-time ocean bottom\ntemperature anomaly in the NOAA Fisheries AFSC Groundfish Assessment\nProgram\xe2\x80\x99s eastern Bering Sea and northern Bering Sea bottom trawl\nsurveys. The timeseries mean bottom temperatures from the eastern Bering\nSea (1982-2021; 39 years) and northern Bering Sea (2010-2021; 4 years)\nbottom trawl surveys are compared to their respective 2022 surveys (May\n30-August 20). These data are publicly accessible on the Fisheries One Stop\nShop data platform (https://www.fisheries.noaa.gov/foss). Credit: NOAA Fisheries*
\n\n# Relevant publications\n\n**Learn more about these surveys and ocean temperatures around Alaska**\n(Markowitz, Dawson, Charriere, Prohaska, Rohan, Stevenson, et al.,\n2022b, 2022a; Markowitz, Dawson, Charriere, Prohaska, Rohan, Haehn, et\nal., 2022; Markowitz et al., 2023; Rohan et al., 2022; Von Szalay and\nRaring, 2018, 2020)\n\nMarkowitz, E. H., Dawson, E. J., Anderson, A. B., Rohan, S. K., Charriere, N. E., Prohaska, B. K., and Stevenson, D. E. (2023). *Results of the 2022 eastern and northern Bering Sea continental shelf bottom trawl survey of groundfish and invertebrate fauna* (NOAA Tech. Memo. NMFS-AFSC-469; p. 213). U.S. Dep. Commer.\n\nMarkowitz, E. H., Dawson, E. J., Charriere, N. E., Prohaska, B. K., Rohan, S. K., Haehn, R. A., Stevenson, D. E., and Britt, L. L. (2022). *Results of the 2018 eastern Bering Sea continental shelf bottom trawl survey of groundfish and invertebrate fauna* (NOAA Tech. Memo. NMFS-AFSC-450; p. 183). U.S. Dep. Commer.\n\nMarkowitz, E. H., Dawson, E. J., Charriere, N. E., Prohaska, B. K., Rohan, S. K., Stevenson, D. E., and Britt, L. L. (2022a). *Results of the 2019 eastern and northern Bering Sea continental shelf bottom trawl survey of groundfish and invertebrate fauna* (NOAA Tech. Memo. NMFS-AFSC-451; p. 225). U.S. Dep. Commer.\n\nMarkowitz, E. H., Dawson, E. J., Charriere, N. E., Prohaska, B. K., Rohan, S. K., Stevenson, D. E., and Britt, L. L. (2022b). *Results of the 2021 eastern and northern Bering Sea continental shelf bottom trawl survey of groundfish and invertebrate fauna* (NOAA Tech. Memo. NMFS-AFSC-452; p. 227). U.S. Dep. Commer.\n\nRohan, S., Barnett, L., and Charriere, N. (2022). *Evaluating approaches to estimating mean temperatures and cold pool area from AFSC bottom trawl surveys of the eastern Bering Sea* (NOAA Tech. Memo. NMFS-AFSC-456; p. 42). U.S. Dep. Commer.\n\nVon Szalay, P. G., and Raring, N. W. (2018). *Data report: 2017 Gulf of Alaska bottom trawl survey* (NOAA Tech. Memo. NMFS-AFSC-374). U.S. Dep. Commer. https://doi.org/10.7289/V5/TM-AFSC-374\n\nVon Szalay, P. G., and Raring, N. W. (2020). *Data report: 2018 Aleutian Islands bottom trawl survey* (NOAA Tech. Memo. NMFS-AFSC-409). U.S. Dep. Commer.
\n\n# Suggestions and Comments\n\nIf you see that the data, product, or metadata can be improved, you are\ninvited to create a [pull\nrequest](https://github.com/afsc-gap-products/survey-live-temperature-map/pulls),\n[submit an issue to the GitHub\norganization](https://github.com/afsc-gap-products/data-requests/issues),\nor [submit an issue to the code\xe2\x80\x99s\nrepository](https://github.com/afsc-gap-products/survey-live-temperature-map/issues).\n\n## R Version Metadata\n\n FALSE R version 4.3.0 (2023-04-21 ucrt)\n FALSE Platform: x86_64-w64-mingw32/x64 (64-bit)\n FALSE Running under: Windows 10 x64 (build 19045)\n FALSE \n FALSE Matrix products: default\n FALSE \n FALSE \n FALSE locale:\n FALSE [1] LC_COLLATE=English_United States.utf8 \n FALSE [2] LC_CTYPE=English_United States.utf8 \n FALSE [3] LC_MONETARY=English_United States.utf8\n FALSE [4] LC_NUMERIC=C \n FALSE [5] LC_TIME=English_United States.utf8 \n FALSE \n FALSE time zone: America/Los_Angeles\n FALSE tzcode source: internal\n FALSE \n FALSE attached base packages:\n FALSE [1] stats graphics grDevices utils datasets methods base \n FALSE \n FALSE other attached packages:\n FALSE [1] glue_1.6.2 dplyr_1.1.2 magrittr_2.0.3\n FALSE \n FALSE loaded via a namespace (and not attached):\n FALSE [1] vctrs_0.6.2 httr_1.4.5 cli_3.6.1 knitr_1.42 \n FALSE [5] rlang_1.1.1 xfun_0.39 stringi_1.7.12 readtext_0.82 \n FALSE [9] generics_0.1.3 data.table_1.14.8 htmltools_0.5.5 fansi_1.0.4 \n FALSE [13] rmarkdown_2.21 evaluate_0.20 tibble_3.2.1 fastmap_1.1.1 \n FALSE [17] yaml_2.3.7 lifecycle_1.0.3 compiler_4.3.0 pkgconfig_2.0.3 \n FALSE [21] rstudioapi_0.14 digest_0.6.31 R6_2.5.1 tidyselect_1.2.0 \n FALSE [25] utf8_1.2.3 pillar_1.9.0 tools_4.3.0\n\n## NOAA README\n\nThis repository is a scientific product and is not official\ncommunication of the National Oceanic and Atmospheric Administration, or\nthe United States Department of Commerce. All NOAA GitHub project code\nis provided on an \xe2\x80\x98as is\xe2\x80\x99 basis and the user assumes responsibility for\nits use. Any claims against the Department of Commerce or Department of\nCommerce bureaus stemming from the use of this GitHub project will be\ngoverned by all applicable Federal law. Any reference to specific\ncommercial products, processes, or services by service mark, trademark,\nmanufacturer, or otherwise, does not constitute or imply their\nendorsement, recommendation or favoring by the Department of Commerce.\nThe Department of Commerce seal and logo, or the seal and logo of a DOC\nbureau, shall not be used in any manner to imply endorsement of any\ncommercial product or activity by DOC or the United States Government.\n\n## NOAA License\n\nSoftware code created by U.S. Government employees is not subject to\ncopyright in the United States (17 U.S.C. \xc2\xa7105). The United\nStates/Department of Commerce reserve all rights to seek and obtain\ncopyright protection in countries other than the United States for\nSoftware authored in its entirety by the Department of Commerce. To this\nend, the Department of Commerce hereby grants to Recipient a\nroyalty-free, nonexclusive license to use, copy, and create derivative\nworks of the Software outside of the United States.\n\n\n\n[U.S. 
Department of Commerce](https://www.commerce.gov/) \\| [National\nOceanographic and Atmospheric Administration](https://www.noaa.gov) \\|\n[NOAA Fisheries](https://www.fisheries.noaa.gov/)\n'",",https://doi.org/10.25923/m4pw-t510,https://doi.org/10.25923/d641-xb21,https://doi.org/10.25923/g1ny-y360,https://doi.org/10.25923/1wwh-q418,http://doi.org/10.7289/V5/TM-AFSC-374\n,http://doi.org/10.7289/V5/TM-AFSC-374,https://doi.org/10.25923/qe5v-fz70","2021/05/12, 03:44:23",897,MIT,75,211,"2023/06/01, 18:46:38",5,9,12,7,146,0,0.0,0.13207547169811318,"2023/10/03, 18:39:06",2023,0,4,false,,false,false,,,https://github.com/afsc-gap-products,https://www.fisheries.noaa.gov/alaska/science-data/groundfish-assessment-program-bottom-trawl-surveys,United States of America,,,https://avatars.githubusercontent.com/u/91760178?v=4,,, cold pool index,"Calculate the cold pool index, mean sea surface temperature, and mean bottom temperature using temperature data collected during bottom trawl surveys of the eastern Bering Sea.",afsc-gap-products,https://github.com/afsc-gap-products/coldpool.git,github,"ecosystem-status-report,environmental-data,climate",Marine Life and Fishery,"2023/09/25, 21:23:46",8,0,5,true,R,AFSC GAP Survey Data Products,afsc-gap-products,"R,TeX",,"b'# Introduction\n\nThis repository contains an R package that is used to calculate the *cold pool index*, mean sea surface temperature, and mean bottom temperature using temperature data collected during bottom trawl surveys of the eastern Bering Sea conducted by NOAA/AFSC/RACE\'s Groundfish Assessment Program [(Rohan, Barnett, and Charriere, 2022)](https://doi.org/10.25923/1wwh-q418). The cold pool index is defined as the area of the NOAA/AFSC/RACE eastern Bering Sea bottom trawl survey footprint with bottom temperatures less than or equal to 2\xc2\xb0 Celsius, in square kilometers. This package includes temperature products (mean temperatures, cold pool area, interpolated temperature raster) that are updated on an annual basis following the eastern Bering Sea shelf bottom trawl survey.\n\n- [Installation](https://github.com/afsc-gap-products/coldpool#installation)\n- [Accessing datasets using the package](https://github.com/afsc-gap-products/coldpool#accessing-datasets-using-the-package)\n- [Accessing datasets without installing the package](https://github.com/afsc-gap-products/coldpool#accessing-datasets-without-installing-the-package)\n- [Datasets in the package](https://github.com/afsc-gap-products/coldpool#datasets-in-the-package)\n- [Data collection and processing methods](https://github.com/afsc-gap-products/coldpool#data-collection)\n- [Cold pool area and temperature trends](https://github.com/afsc-gap-products/coldpool#cold-pool-area-and-temperature-trends)\n- [Citation](https://github.com/afsc-gap-products/coldpool#citation)\n\n\n# Installation\n\n1. Install the [akgfmaps package](https://github.com/sean-rohan-NOAA/akgfmaps) from GitHub prior to installing coldpool, as follows:\n```{r}\ndevtools::install_github(""sean-rohan-NOAA/akgfmaps"", build_vignettes = TRUE)\n```\n\nIf you encounter problems installing the akgfmaps package, please refer to the akgfmaps GitHub repository.\n\n2. Install the coldpool package using the following code:\n```{r}\ndevtools::install_github(""afsc-gap-products/coldpool"")\n```\n\n# Usage\n\n## Accessing datasets using the package\nUsers can access temperature products directly from datasets that are built into the package. 
For example, after installing the package, users can access a data frame containing cold pool area (area with temperature less than or equal to 2\xc2\xb0C), area of other isotherms (less than or equal to 1\xc2\xb0C, 0\xc2\xb0C, -1\xc2\xb0C), mean bottom temperature, and mean surface temperature for the EBS, using:\n```{r}\ncoldpool:::cold_pool_index\n```\n\nDocumentation for the dataset can be accessed using:\n```{r}\n?coldpool:::cold_pool_index\n```\n\n## Accessing datasets without installing the package\nUsers can access temperature products in an R data format (.rda) without installing the package. To do so, download the [sysdata.rda](./R/sysdata.rda) file in the R directory of the coldpool repository. The data set can then be loaded in R after installing and loading the [raster](https://cran.r-project.org/web/packages/raster/index.html) package (version >= 3.2-1), as follows:\n\n```{r}\n# Load raster package and data\nlibrary(raster)\nload(""[local_path]\\\\sysdata.rda"")\n\n# View loaded cold pool index data frame\ncold_pool_index\n```\n\n## Datasets in the package\n
\n| Dataset | Description |\n|---------|-------------|\n| `cold_pool_index` | Data frame containing the total area of EBS waters with bottom temperatures less than or equal to 2, 1, 0, and -1 \xc2\xb0C, mean bottom temperatures, and mean surface temperature during the EBS survey for 1982-2023 (excluding 2020 due to cancelled survey). |\n| `nbs_mean_bottom_temperature` | Data frame of mean bottom temperature in the NBS during years with a full EBS+NBS standardized survey (2010, 2017, 2019, 2021, 2022, 2023). |\n| `ebs_bottom_temperature` | Interpolated rasters of bottom temperature for the EBS survey area from 1982-2023 (excluding 2020 due to cancelled survey). |\n| `ebs_surface_temperature` | Interpolated rasters of sea surface temperature for the EBS survey area from 1982-2023 (excluding 2020 due to cancelled survey). |\n| `nbs_ebs_bottom_temperature` | Interpolated rasters of bottom temperature for the full EBS and NBS survey area for years with a full EBS+NBS standardized survey (2010, 2017, 2019, 2021, 2022, 2023). |\n| `nbs_ebs_surface_temperature` | Interpolated rasters of sea surface temperature for the full EBS and NBS survey area for years with a full EBS+NBS standardized survey (2010, 2017, 2019, 2021, 2022, 2023). |\n| `cpa_pre2021` | Data frame of cold pool area and mean temperatures calculated using the interpolation method used prior to 2021. |\n
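\nAs a quick orientation, here is a minimal sketch of working with one of these built-in datasets; the column names used below are assumptions for illustration, so check ?coldpool:::cold_pool_index for the actual fields:\n\n```{r}\n# Minimal sketch: inspect the built-in cold pool index data frame.\n# The column names below (YEAR, AREA_LTE2_KM2) are assumptions for\n# illustration, not necessarily the actual field names.\ncpi <- coldpool:::cold_pool_index\nstr(cpi)\n\n# e.g., plot cold pool area (bottom temperature <= 2 C) over time:\nplot(cpi$YEAR, cpi$AREA_LTE2_KM2, type = ""b"")\n```\n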
\n\n## Caveat emptor\n\nThe temperature data products in this package are an annual snapshot of temperatures during summer bottom trawl surveys. Combined with biological data collected during bottom trawl surveys, these temperature data can provide a simultaneous characterization of thermal habitat and demersal fauna distribution and abundance in the eastern Bering Sea. However, these temperature data products are not adjusted to account for seasonal heating, so they do not provide a snapshot of temperature at a specific point in time. Users who are interested in spatially integrated or spatially resolved estimates of temperature at specific points in time that do account for seasonal heating may want to consider using temperature predictions from the [Bering10K BEST-NPZ model](https://github.com/beringnpz/roms-bering-sea) [(Kearney et al., 2020)](http://www.doi.org/10.5194/gmd-13-597-2020), [satellite-derived sea surface temperature](https://github.com/jordanwatson/aksst) products, or alternative oceanographic sampling data, such as those collected by NOAA\'s [EcoFOCI](https://www.ecofoci.noaa.gov/) program.\n\n# Methods\n\n## Data collection\n\nTemperature data have been collected annually during AFSC\'s standardized summer bottom trawl surveys of the eastern Bering Sea continental shelf (EBS shelf) and northern Bering Sea (NBS). The EBS shelf survey has been conducted annually since 1982 (except for 2020) and the NBS survey was conducted in 2010, 2017, 2019, 2021, and 2022. In the eastern Bering Sea, surveys are conducted from late May or early June through late July to early August, and the northern Bering Sea survey is conducted immediately after the EBS shelf survey (July-August). The EBS shelf survey samples 376 index stations and the NBS survey samples 143 index stations per year, although the survey footprint and number of stations have changed over time (e.g. EBS NW strata added in 1987). The EBS shelf survey progresses from the nearshore waters inside Bristol Bay to the outer continental shelf in the NW portion of the southeastern Bering Sea (Figure 1). The NBS survey starts offshore where the EBS shelf survey ends, then progresses northward towards the Bering Strait and Norton Sound, before heading south towards Nunivak Island.\n\n![Map of eastern Bering Sea and northern Bering Sea survey areas showing the EBS standard, EBS NW, and NBS survey strata.](./plots/ebs_nbs_survey_area.png)\n Figure 1. Map of eastern Bering Sea (EBS) shelf and northern Bering Sea (NBS) shelf survey areas and station grid, including EBS standard (sampled since 1982) and northwest (sampled since 1987) subareas. Thick black lines denote survey boundaries and fill color denotes the average day of year when stations are sampled by the survey.\n\nTemperature data are collected at every survey station using temperature sensors attached to the bottom trawl survey gear. The equipment used to collect temperature data has changed over time, as described in [Buckley et al. (2009)](https://repository.library.noaa.gov/view/noaa/3655).\n\n## Interpolation and analysis\n\nRaster surfaces and temperature products are produced by interpolating temperature data collected during surveys using ordinary kriging with Stein\xe2\x80\x99s parameterization of the Mat\xc3\xa9rn semivariogram. Only data from hauls with \'good\' performance are included in temperature calculations.
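\n\nFor intuition, here is a minimal sketch of this kind of ordinary kriging using the [gstat](https://cran.r-project.org/web/packages/gstat/index.html) package; the input objects (temps, grid) and the temperature column name are illustrative assumptions, not the internals of the coldpool package:\n\n```{r}\n# Illustrative sketch only, not coldpool internals: assumes a data frame\n# temps with projected x/y coordinates and a gear_temperature column,\n# plus a SpatialPixels prediction grid named grid.\nlibrary(sp)\nlibrary(gstat)\n\ncoordinates(temps) <- ~ x + y\nemp_vario <- variogram(gear_temperature ~ 1, temps)\nfit_vario <- fit.variogram(emp_vario, vgm(""Ste"")) # Stein\'s Matern parameterization\n\n# Ordinary kriging of bottom temperature onto the prediction grid\nbt_surface <- krige(gear_temperature ~ 1, temps, newdata = grid, model = fit_vario)\n```\n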
Statistics that summarize temperature patterns are produced from raster surfaces: areas of isotherms in the EBS shelf survey area (2\xc2\xb0C, 1\xc2\xb0C, 0\xc2\xb0C, -1\xc2\xb0C), mean bottom temperature over the EBS shelf and NBS (i.e., gear temperature), and sea surface temperature over the EBS shelf and NBS. For the EBS shelf survey area, temperature data used to calculate summary statistics do not include NBS data because of a warm bias in NBS survey years that is caused by the southern portion of the nearshore domain of the NBS being sampled last. The interpolation region for the EBS shelf includes both the standard area (sampled from 1982-present) and the NW strata (sampled from 1987-present) for all years (Figure 1). The NBS interpolation only includes data from years with a full NBS survey and uses data from the EBS Standard, EBS NW, and NBS survey areas.\n\n## Data product and package updates\n\nTemperature data products in the coldpool package are updated on an annual basis via new package releases after the conclusion of summer bottom trawl surveys.\n\n\n# Cold pool area and temperature trends\n*Updated: September 5, 2023 (provisional data, subject to change)*\n\nCold pool area and temperature trends are reported in the annual [Ecosystem Status Reports](https://www.fisheries.noaa.gov/alaska/ecosystems/ecosystem-status-reports-gulf-alaska-bering-sea-and-aleutian-islands) for the eastern Bering Sea and ecosystem and socioeconomic profiles for EBS stocks. Temperature products are also used as covariates in some [stock assessment](https://www.fisheries.noaa.gov/alaska/population-assessments/north-pacific-groundfish-stock-assessments-and-fishery-evaluation) models or their inputs (e.g. abundance indices).\n\n![Cold pool area from 2004-2023, based on interpolated survey bottom temperatures](./plots/2023_coldpool_with_area.png)\n Figure 2. Cold pool extent in the eastern Bering Sea from 2004\xe2\x80\x932023, showing areas with bottom temperatures \xe2\x89\xa4 2\xc2\xb0C, \xe2\x89\xa4 1\xc2\xb0C, \xe2\x89\xa4 0\xc2\xb0C, and \xe2\x89\xa4 -1\xc2\xb0C (upper panels), and proportion of the southeastern Bering Sea survey area with bottom temperatures \xe2\x89\xa4 2\xc2\xb0C, \xe2\x89\xa4 1\xc2\xb0C, \xe2\x89\xa4 0\xc2\xb0C, and \xe2\x89\xa4 -1\xc2\xb0C (lower panel). Solid black lines in the interior of the surface represent the 50m and 100m isobaths.\n\n![Mean bottom temperature in the eastern Bering Sea during 2023, based on interpolated survey bottom temperatures](./plots/2023_nbs_ebs_temperature_map_grid.png)\n Figure 3. Contour map of bottom temperatures from the past four eastern and northern Bering Sea shelf bottom trawl surveys (2018-2023). Solid black contour lines denote stratum boundaries.\n\n![Mean bottom and sea surface temperatures in the eastern Bering Sea from 1982-2023, based on interpolated survey temperatures](./plots/2023_average_temperature.png)\n Figure 4. Average summer surface (green triangles) and bottom (blue circles) temperatures (\xc2\xb0C) of the eastern Bering Sea (EBS) shelf and northern Bering Sea (NBS) shelf based on data collected during standardized summer bottom trawl surveys from 1982\xe2\x80\x932023. Dashed lines represent the time series mean for the EBS (1982\xe2\x80\x932023, except 2020) and NBS (2010, 2017, 2019, 2021, 2022, 2023).\n\n# Citation\n\nRohan, S.K., Barnett L.A.K., and Charriere, N. 2022. Evaluating approaches to estimating mean temperatures and cold pool area from AFSC bottom trawl surveys of the eastern Bering Sea. U.S. Dep. 
Commer., NOAA Tech. Mem. NMFS-AFSC-456, 42 p. [https://doi.org/10.25923/1wwh-q418](https://doi.org/10.25923/1wwh-q418)\n\n# NOAA README\nThis repository is a scientific product and is not official communication of the National Oceanic and Atmospheric Administration, or the United States Department of Commerce. All NOAA GitHub project code is provided on an \xe2\x80\x98as is\xe2\x80\x99 basis and the user assumes responsibility for its use. Any claims against the Department of Commerce or Department of Commerce bureaus stemming from the use of this GitHub project will be governed by all applicable Federal law. Any reference to specific commercial products, processes, or services by service mark, trademark, manufacturer, or otherwise, does not constitute or imply their endorsement, recommendation or favoring by the Department of Commerce. The Department of Commerce seal and logo, or the seal and logo of a DOC bureau, shall not be used in any manner to imply endorsement of any commercial product or activity by DOC or the United States Government.\n\n# NOAA License\nSoftware code created by U.S. Government employees is not subject to copyright in the United States (17 U.S.C. \xc2\xa7105). The United States/Department of Commerce reserve all rights to seek and obtain copyright protection in countries other than the United States for Software authored in its entirety by the Department of Commerce. To this end, the Department of Commerce hereby grants to Recipient a royalty-free, nonexclusive license to use, copy, and create derivative works of the Software outside of the United States.\n
\n\n
\n[U.S. Department of Commerce](https://www.commerce.gov/) \\| [National Oceanographic and Atmospheric Administration](https://www.noaa.gov) \\| [NOAA Fisheries](https://www.fisheries.noaa.gov/)\n'",",https://doi.org/10.25923/1wwh-q418,https://doi.org/10.25923/1wwh-q418,https://doi.org/10.25923/1wwh-q418","2021/04/08, 21:12:35",930,CUSTOM,33,271,"2023/09/25, 21:23:46",1,32,49,15,30,0,0.0,0.14537444933920707,"2023/09/23, 00:53:46",v3.2-1,0,3,false,,false,false,,,https://github.com/afsc-gap-products,https://www.fisheries.noaa.gov/alaska/science-data/groundfish-assessment-program-bottom-trawl-surveys,United States of America,,,https://avatars.githubusercontent.com/u/91760178?v=4,,, PlanktoScope,"A modular, open-source hardware and software platform that allows for high-throughput quantitative imaging of plankton samples in aquatic biology and ecology.",PlanktoScope,https://github.com/PlanktoScope/PlanktoScope.git,github,"oceanography,ocean,science,citizen,water,plankton,citizen-science,raspberry-pi-camera,imaging,microscopy,microscope,microscope-platform,raspberry-pi-4,planktoscope",Marine Life and Fishery,"2023/10/26, 01:41:25",47,0,30,true,Shell,PlanktoScope,PlanktoScope,"Shell,C,JavaScript,Vim Snippet,Makefile",https://www.planktoscope.org,"b'# PlanktoScope: Open and Affordable Quantitative Imaging Platform\n\n![planktoscope_hero](documentation/docs/images/project_description/planktoscope_hero.png)\n\n## What are Plankton?\n\n_""Drifting endlessly, midway between the sea of air above and the depths of the abyss below, these strange creatures and the marine inflorescence that sustains them are called \'plankton\' \xe2\x80\x94 the wanderers""_ - **[Rachel Carson](https://de.wikipedia.org/wiki/Rachel_Carson)**\n\nPlankton are tiny organisms that drift in the oceans and play a crucial role in the global ecosystem. They are responsible for fixing 30-50% of the world\'s carbon dioxide and form the foundation of the global food chain. Despite their importance, studying plankton can be challenging due to the vast area of the oceans and the limited resources of research fleets and specialized equipment. The PlanktoScope is an open-source hardware and software platform that aims to make it easier to study plankton by providing high-throughput quantitative imaging capabilities at a low cost.\n\n## What is a PlanktoScope?\n\n![The PlanktoScope at tfom23 Expo](documentation/docs/images/planktoscope-buildworkshops-tfom23-expo.jpg)\n\nThe PlanktoScope is a modular, open-source platform for high-throughput quantitative imaging of plankton samples. Its small size, ease of use, and low cost make it suitable for a variety of applications, including the monitoring of laboratory cultures or natural micro-plankton communities. It can be controlled from any WiFi-enabled device and can be easily reconfigured to meet the changing needs of the user.\n\n[Learn more about how it works](https://www.planktoscope.org/how-it-works)\n\n## Read the Paper\n\nThe PlanktoScope has been described in the article [""PlanktoScope: Affordable Modular Quantitative Imaging Platform for Citizen Oceanography""](https://www.frontiersin.org/articles/10.3389/fmars.2022.949428/full), published in Frontiers in Marine Science in July 2022.\n\nDOI: [https://doi.org/10.3389/fmars.2022.949428](https://doi.org/10.3389/fmars.2022.949428)\n\n## Key Features\n\n![planktoscope_hero](documentation/docs/images/project_description/planktoscope_architecture.png)\n\nHere are some key features of the PlanktoScope:\n\n1. 
**Low cost**: The PlanktoScope is designed to be affordable, with parts costing under $1000.\n2. **Modular**: The PlanktoScope is modular, meaning it can be easily reconfigured to meet the changing needs of users.\n3. **Open-source**: The PlanktoScope is based on open-source hardware and software, making it accessible to a wide community of engineers, researchers, and citizens.\n4. **Versatility**: The PlanktoScope is versatile, and can be used to study a variety of plankton types, including laboratory cultures and natural micro-plankton communities.\n5. **High-throughput**: The PlanktoScope is capable of high-throughput quantitative imaging, allowing users to analyze large numbers of samples quickly and efficiently.\n6. **WiFi-enabled**: The PlanktoScope can be controlled from any WiFi-enabled device, making it easy to use and deploy in a variety of settings.\n7. **Portable**: The PlanktoScope is small and portable, making it easy to transport and use in the field.\n8. **Ease of use**: The PlanktoScope is designed to be easy to use, with instructions for assembly and use available on the PlanktoScope website.\n\n## How do I get one?\n\nYou can access the complete documentation here: https://planktoscope.github.io/PlanktoScope/\n|Get the kit|Assemble your kit|Start your machine|\n|--|--|--|\n|![Get the kit](documentation/docs/images/readme/get_kit.png)|![Assemble your kit](documentation/docs/images/readme/assemble_kit.png)|![Start your machine](documentation/docs/images/readme/start_pscope.png)|\n\n## Getting Involved\n\nThere are several ways you can join the development effort and contribute to this project.\n\n### Communication Platform\n\nWe use Slack as a communication platform for interested parties. You can request to join by filling out [this form](https://docs.google.com/forms/d/e/1FAIpQLSfcod-avpzWVmWj42_hW1v2mMSHm0DAGXHxVECFig2dnKHxGQ/viewform).\n\n### Reporting Issues\n\nIf you have identified a bug in the software or hardware, please open an issue in this repository to report it.\n\n### Contributing to Development\n\nYou can also contribute to the development effort by working on open issues. Check out the [issues labeled as good first issues](https://github.com/PlanktoScope/PlanktoScope/labels/good%20first%20issue) and let us know in the comments if you are interested in working on one. We may be able to provide guidance as you get started with the code.\n\n## License Information\n\nThis repository contains various types of materials that are covered by different licenses. Please read the following information carefully to determine which license applies to the materials you wish to use.\n\n### Hardware Files\n\nAll hardware files and documentation located in the `hardware` directory are released under the [CERN-OHL-S-2.0](https://ohwr.org/cern_ohl_s_v2.txt) license.\n\n### Software Source Code\n\nThe source code located in the `flows` and `scripts` directories is released under the [GPL-3.0](https://www.gnu.org/licenses/gpl-3.0.en.html) license.\n\n### Other Materials\n\nAll other materials, including documentation and pictures, are released under the [Creative Commons CC-BY-SA 4.0](https://creativecommons.org/licenses/by-sa/4.0/) license.\n\nIf you wish to use any of the materials in this repository for a project that cannot be open-sourced, please contact us using Slack so we can discuss potential solutions.\n\nWe encourage you to fork this repository and publish any improvements you make. 
Doing so helps others and allows us to potentially integrate your changes into this repository.\n\n## Citing PlanktoScope\nIf you use PlanktoScope in your research, please use the following BibTeX entry.\n\n```\n@article{Pollina2022PlanktoScope,\n author={Pollina, Thibaut and Larson, Adam G. and Lombard, Fabien and Li, Hongquan and Le Guen, David and Colin, S\xc3\xa9bastien and de Vargas, Colomban and Prakash, Manu},\n title={PlanktoScope: Affordable Modular Quantitative Imaging Platform for Citizen Oceanography},\n journal={Frontiers in Marine Science},\n year={2022},\n doi={10.3389/fmars.2022.949428}\n}\n```\n'",",https://doi.org/10.3389/fmars.2022.949428,https://doi.org/10.3389/fmars.2022.949428","2020/03/19, 04:29:00",1316,CC-BY-SA-4.0,53,685,"2023/10/26, 01:41:26",90,99,163,115,0,1,0.5,0.49255952380952384,"2023/09/14, 22:26:10",software/v2023.9.0-beta.1,22,7,false,,false,false,,,https://github.com/PlanktoScope,https://www.planktoscope.org,,,,https://avatars.githubusercontent.com/u/62368168?v=4,,, pyafscgap,Community contributed Python-based tools for working with public bottom trawl surveys data from the NOAA Alaska Fisheries Science Center Groundfish Assessment Program.,SchmidtDSE,https://github.com/SchmidtDSE/afscgap.git,github,"biodiversity,biology,fish,fisheries,noaa",Marine Life and Fishery,"2023/06/28, 16:20:45",4,3,4,true,Python,DSE,SchmidtDSE,"Python,JavaScript,HTML,CSS,TeX,Shell,Dockerfile",https://pyafscgap.org,"b'# Python Tools for AFSC GAP\n| Group | Badges |\n|-------|--------|\n| Status | ![build workflow status](https://github.com/SchmidtDSE/afscgap/actions/workflows/build.yml/badge.svg?branch=main) ![docs workflow status](https://github.com/SchmidtDSE/afscgap/actions/workflows/docs.yml/badge.svg?branch=main) [![Project Status: Active \xe2\x80\x93 The project has reached a stable, usable state and is being actively developed.](https://www.repostatus.org/badges/latest/active.svg)](https://www.repostatus.org/#active) |\n| Usage | [![Python 3.7+](https://img.shields.io/badge/python-3.7+-blue.svg)](https://www.python.org/downloads/release/python-370/) [![Pypi Badge](https://img.shields.io/pypi/v/afscgap)](https://pypi.org/project/afscgap/) [![License](https://img.shields.io/badge/License-BSD_3--Clause-blue.svg)](https://opensource.org/licenses/BSD-3-Clause) [![Binder](https://mybinder.org/badge_logo.svg)](https://mybinder.org/v2/gh/SchmidtDSE/afscgap/main?urlpath=/tree/index.ipynb) |\n| Publication | [![pyOpenSci](https://tinyurl.com/y22nb8up)](https://github.com/pyOpenSci/software-submission/issues/93) [![DOI](https://joss.theoj.org/papers/10.21105/joss.05593/status.svg)](https://doi.org/10.21105/joss.05593) |\n| Archive | [![Open in Code Ocean](https://codeocean.com/codeocean-assets/badge/open-in-code-ocean.svg)](https://codeocean.com/capsule/4905407/tree/v2) [![DOI](https://zenodo.org/badge/603308264.svg)](https://zenodo.org/badge/latestdoi/603308264) |\n\n
\n\nPython-based tool chain (""Pyafscgap.org"") for working with the public bottom trawl data from the [NOAA AFSC GAP](https://www.fisheries.noaa.gov/contact/groundfish-assessment-program). This provides information from multiple survey programs about where certain species were seen and when under what conditions, information useful for research in ocean health.\n\nSee [webpage](https://pyafscgap.org), [project Github](https://github.com/SchmidtDSE/afscgap), and [example notebook](https://mybinder.org/v2/gh/SchmidtDSE/afscgap/main?urlpath=/tree/index.ipynb).\n\n
\n
\n\n## Quickstart\nTaking your first step is easy!\n\n**Explore the data in a UI:** To learn about the datasets, try out an in-browser visual analytics app at [https://app.pyafscgap.org](https://app.pyafscgap.org) without writing any code.\n\n**Try out a tutorial in your browser:** Learn from and modify an in-depth [tutorial notebook](https://mybinder.org/v2/gh/SchmidtDSE/afscgap/main?urlpath=/tree/index.ipynb) in a free hosted academic environment (all without installing any local software).\n\n**Jump into code:** Ready to build your own scripts? Here\'s an example querying for Pacific cod in the Gulf of Alaska for 2021:\n\n```python\nimport afscgap # install with pip install afscgap\nquery = afscgap.Query()\nquery.filter_year(eq=2021)\nquery.filter_srvy(eq=\'GOA\')\nquery.filter_scientific_name(eq=\'Gadus macrocephalus\')\nresults = query.execute()\n```\n\nContinue your exploration in the [developer docs](https://pyafscgap.org/docs/usage/).\n\n
\n
\n\n## Installation\nReady to take it to your own machine? Install the open source tools for accessing the AFSC GAP via Pypi / Pip:\n\n```bash\n$ pip install afscgap\n```\n\nThe library\'s only dependency is [requests](https://docs.python-requests.org/en/latest/index.html) and [Pandas / numpy are not expected but supported](https://pyafscgap.org/docs/usage/#pandas). The above will install the release version of the library. However, you can also install the development version via:\n\n```bash\n$ pip install git+https://github.com/SchmidtDSE/afscgap.git@main\n```\n\nInstalling directly from the repo provides the ""edge"" version of the library which should be treated as pre-release.\n\n
\n
\n\n## Purpose\nUnofficial Python-based tool set for interacting with [bottom trawl surveys](https://www.fisheries.noaa.gov/alaska/commercial-fishing/alaska-groundfish-bottom-trawl-survey-data) from the [Ground Fish Assessment Program (GAP)](https://www.fisheries.noaa.gov/contact/groundfish-assessment-program). It offers:\n\n - Pythonic access to the official [NOAA AFSC GAP API service](https://www.fisheries.noaa.gov/foss/f?p=215%3A28).\n - Tools for inference of the ""negative"" observations not provided by the API service.\n - Visualization tools for quickly exploring and creating comparisons within the datasets, including for audiences with limited programming experience.\n\nNote that GAP are an excellent collection of datasets produced by the [Resource Assessment and Conservation Engineering (RACE) Division](https://www.fisheries.noaa.gov/about/resource-assessment-and-conservation-engineering-division) of the [Alaska Fisheries Science Center (AFSC)](https://www.fisheries.noaa.gov/about/alaska-fisheries-science-center) as part of the National Oceanic and Atmospheric Administration\'s Fisheries organization ([NOAA Fisheries](https://www.fisheries.noaa.gov/)).\n\nPlease see our [objectives documentation](https://pyafscgap.org/docs/objectives/) for additional information about the purpose, developer needs addressed, and goals of the project.\n\n
\n
\n\n## Usage\nThis library provides access to the AFSC GAP data with optional zero catch (""absence"") record inference.\n\n
\n\n### Examples / tutorial\nOne of the best ways to learn is through our examples / tutorials series. For more details see our [usage guide](https://pyafscgap.org/docs/usage/).\n\n
\n\n### API Docs\n[Full formalized API documentation is available](https://pyafscgap.org/devdocs/afscgap.html) as generated by pdoc in CI / CD.\n\n
\n\n### Data structure\nDetailed information about our data structures and their relationship to the data structures found in NOAA\'s upstream database is available in our [data model documentation](https://pyafscgap.org/docs/model/).\n\n
\n\n### Absence vs presence data\nBy default, the NOAA API service will only return information on hauls matching a query. So, for example, requesting data on Pacific cod will only return information on hauls in which Pacific cod is found. This can complicate the calculation of important metrics like catch per unit effort (CPUE). That in mind, one of the most important features in `afscgap` is the ability to infer ""zero catch"" records as enabled by `set_presence_only(False)`. See more information in [our inference docs](https://pyafscgap.org/docs/inference/).\n\n
\n\n### Data quality and completeness\nThere are a few caveats for working with these data that are important for researchers to understand. These are detailed in our [limitations docs](https://pyafscgap.org/docs/limitations/).\n\n
\n
\n\n## License\nWe are happy to make this library available under the BSD 3-Clause license. See LICENSE for more details. (c) 2023 Regents of the University of California. See the [Eric and Wendy Schmidt Center for Data Science and the Environment at UC Berkeley](https://dse.berkeley.edu).\n\n
\n
\n\n## Developing\nInterested in contributing to the project or want to build manually? Please see our [build docs](https://pyafscgap.org/docs/building/) for details.\n\n
\n
\n\n## People\n[Sam Pottinger](https://github.com/sampottinger) is the primary contact with additional development from [Giulia Zarpellon](https://github.com/gizarp). Additionally, some acknowledgements:\n\n - Thank you to [Carl Boettiger](https://github.com/cboettig) and [Fernando Perez](https://github.com/fperez) for advice on the Python library.\n - Thanks also to [Maya Weltman-Fahs](https://dse.berkeley.edu/people/maya-weltman-fahs), [Brookie Guzder-Williams](https://github.com/brookisme), Angela Hayes, David Joy, and [Magali de Bruyn](https://github.com/magalidebruyn) for advice on the visual analytics tool.\n - Thanks to Lewis Barnett, Emily Markowitz, and Ciera Martinez for general guidance.\n\nThis is a project of [The Eric and Wendy Schmidt Center for Data Science and the Environment at UC Berkeley](https://dse.berkeley.edu), where [Kevin Koy](https://github.com/kevkoy) is Executive Director. Please contact us via dse@berkeley.edu.\n\n
\n
\n\n## Open Source\nWe are happy to be part of the open source community.\n\nAt this time, the only open source dependency used by this microlibrary is [Requests](https://docs.python-requests.org/en/latest/index.html), which is available under the [Apache v2 License](https://github.com/psf/requests/blob/main/LICENSE) from [Kenneth Reitz and other contributors](https://github.com/psf/requests/graphs/contributors).\n\nIn addition to GitHub-provided [GitHub Actions](https://docs.github.com/en/actions), our build and documentation systems also use the following but are not distributed with or linked to the project itself:\n\n - [mkdocs](https://www.mkdocs.org) under the [BSD License](https://github.com/mkdocs/mkdocs/blob/master/LICENSE).\n - [mkdocs-windmill](https://github.com/gristlabs/mkdocs-windmill) under the [MIT License](https://github.com/gristlabs/mkdocs-windmill/blob/master/LICENSE).\n - [mypy](https://github.com/python/mypy) under the [MIT License](https://github.com/python/mypy/blob/master/LICENSE) from Jukka Lehtosalo, Dropbox, and other contributors.\n - [nose2](https://docs.nose2.io/en/latest/index.html) under a [BSD license](https://github.com/nose-devs/nose2/blob/main/license.txt) from Jason Pellerin and other contributors.\n - [pdoc](https://github.com/mitmproxy/pdoc) under the [Unlicense license](https://github.com/mitmproxy/pdoc/blob/main/LICENSE) from [Andrew Gallant](https://github.com/BurntSushi) and [Maximilian Hils](https://github.com/mhils).\n - [pycodestyle](https://pycodestyle.pycqa.org/en/latest/) under the [Expat License](https://github.com/PyCQA/pycodestyle/blob/main/LICENSE) from Johann C. Rocholl, Florent Xicluna, and Ian Lee.\n - [pyflakes](https://github.com/PyCQA/pyflakes) under the [MIT License](https://github.com/PyCQA/pyflakes/blob/main/LICENSE) from Divmod, Florent Xicluna, and other contributors.\n - [sftp-action](https://github.com/Creepios/sftp-action) under the [MIT License](https://github.com/Creepios/sftp-action/blob/master/LICENSE) from Niklas Creepios.\n - [ssh-action](https://github.com/appleboy/ssh-action) under the [MIT License](https://github.com/appleboy/ssh-action/blob/master/LICENSE) from Bo-Yi Wu.\n\nNext, the visualization tool has additional dependencies as documented in the [visualization readme](https://github.com/SchmidtDSE/afscgap/blob/main/afscgapviz/README.md).\n\nFinally, note that the website uses assets from [The Noun Project](https://thenounproject.com/) under the NounPro plan. If used outside of https://pyafscgap.org, they may be subject to a [different license](https://thenounproject.com/pricing/#icons).\n\nThank you to all of these projects for their contributions.\n\n
\n
\n\n## Version history\nAnnotated version history:\n\n - `1.0.4`: Minor documentation typo fix.\n - `1.0.3`: Documentation edits for journal article.\n - `1.0.2`: Minor documentation touch ups for pyopensci.\n - `1.0.1`: Minor documentation fix.\n - `1.0.0`: Release with pyopensci.\n - `0.0.9`: Fix for an issue with certain import modalities and the `http` module.\n - `0.0.8`: New query syntax (builder / chaining) and units conversions.\n - `0.0.7`: Visual analytics tools.\n - `0.0.6`: Performance and size improvements.\n - `0.0.5`: Changes to documentation.\n - `0.0.4`: Negative / zero catch inference.\n - `0.0.3`: Minor updates in documentation.\n - `0.0.2`: License under BSD.\n - `0.0.1`: Initial release.\n'",",https://doi.org/10.21105/joss.05593,https://zenodo.org/badge/latestdoi/603308264","2023/02/18, 05:24:26",249,BSD-3-Clause,616,616,"2023/06/28, 16:20:47",3,85,103,103,119,0,0.0,0.005813953488372103,"2023/06/28, 15:25:55",1.0.4,0,2,false,,false,true,"narest-qa/repo4,SchmidtDSE/noaa-afsc-gap-examples,SchmidtDSE/afscgap",,https://github.com/SchmidtDSE,https://dse.berkeley.edu/,United States of America,,,https://avatars.githubusercontent.com/u/124641794?v=4,,, PhytoFit,"Used to display satellite chlorophyll concentration, and calculate statistics and model phytoplankton blooms for regions within custom polygons.",BIO-RSG,https://github.com/BIO-RSG/PhytoFit.git,github,,Marine Life and Fishery,"2023/08/09, 13:43:44",7,0,3,true,HTML,SOPhyE (Satellite Ocean Colour and Phytoplankton Ecology Group),BIO-RSG,"HTML,R,Dockerfile",,"b'# PhytoFit\n\n[![DOI](https://zenodo.org/badge/277295931.svg)](https://zenodo.org/badge/latestdoi/277295931)\n\nThis app can be used to display satellite chlorophyll concentration, and calculate statistics and model phytoplankton blooms for regions within custom polygons. See the screen capture below for an example.\n\n\n\n\n\n### How to cite\n\nIn publications, please include acknowledgements to [NASA OBPG](https://oceancolor.gsfc.nasa.gov) for the satellite data and the [BIO remote sensing group](https://github.com/BIO-RSG) for the application, and use this citation in the references: \n\n*Stephanie Clay, Chantelle Layton, & Emmanuel Devred. (2021). BIO-RSG/PhytoFit: First release (v1.0.0). Zenodo. https://doi.org/10.5281/zenodo.4770754* \n\nBibTeX format: \n\n @misc{clay21,\n author = {Clay, Stephanie and Layton, Chantelle and Devred, Emmanuel},\n title = ""PhytoFit"",\n howpublished = ""\\url{https://github.com/BIO-RSG/PhytoFit}"",\n year = 2021\n }\n\n\n## Prerequisites\n\n1. Install the latest versions of R and RStudio.\n\n2. Install the necessary packages:\n```r\ninstall.packages(c(""fst"", ""shiny"", ""shinyWidgets"", ""shinyjs"", ""shinybusy"", ""leaflet"", ""stars"", ""leafem"", ""leafpm"", ""quantreg"", ""minpack.lm"", ""sp"", ""ggplot2"", ""ggpp"", ""dplyr"", ""tidyr"", ""raster"", ""curl"", ""sf"", ""fs""))\nremotes::install_github(""BIO-RSG/oceancolouR"")\n# if the line above doesn\'t work, try devtools::install_github(""BIO-RSG/oceancolouR"")\n# if that doesn\'t work, try either install.packages(""remotes"") or install.packages(""devtools"") and then run the oceancolouR installation line again\n```\n\n3. Restart R after the packages have been installed.\n\n\n## Getting started\n\n1. 
Download this repository in one of two ways: \n\n- Option 1: Code --> Download ZIP \n\n- Option 2: Using git (this will make it easier to download updates in the future, by simply using the `git pull` command): Open a git bash terminal, navigate to the folder where you want to download the repository, and type: `git clone https://github.com/BIO-RSG/PhytoFit.git` \n\n2. Open the PhytoFit repository in RStudio: \n\n- File --> Open Project --> Navigate to the PhytoFit folder and open ""PhytoFit.Rproj"" \n\n3. Download the datasets of your choice: \n\n- Open `00_download_new_datasets.R` from the PhytoFit folder. Set *ask_user=FALSE* to download all available datasets, or *ask_user=TRUE* to ask before downloading each one. Alternatively, you can run the script from the command line like: `Rscript [script directory]/00_download_new_datasets.R \'false\'`, filling in the [script directory] with the location where you stored the script. *\'false\'* is the ask_user argument; set it to *\'true\'* for prompts. \n\n4. To update existing datasets: \n\n- Similar to the download script in step 3, open `00_update_datasets.R` and set the *ask_user* argument, or run from the command line (e.g. `Rscript [script directory]/00_update_datasets.R \'false\'`). This will update the datasets you have already downloaded with the most recent copies (and download any years of data missing from your local directory). A combined shell sketch is given after the warnings below. \n\n\n**WARNINGS:** \n- Data files will be downloaded to `data/[region]/` subfolders of the PhytoFit repository - Do NOT move them from there or the app will not be able to read them. \n- If possible, please keep the data files if you intend to use them in the future, rather than re-downloading them later, to avoid excessive traffic on the ftp server. \n- Any data that is < 3 months old is ""Near Real Time"" (NRT) quality. NRT data is replaced with ""Science quality"" data after it becomes available, following the 3-month lag. More info here. 
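\n\nAs referenced in step 4, here is a combined shell sketch of the two maintenance scripts; it assumes the commands are run from the folder where the scripts are stored (pass \'true\' instead of \'false\' to be prompted per dataset):\n\n```\n# download all available datasets without prompting\nRscript 00_download_new_datasets.R \'false\'\n\n# refresh datasets that were already downloaded\nRscript 00_update_datasets.R \'false\'\n```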
\n\n\n## Running\n\nOpen app.R within RStudio, and click ""Run app""\n\n\n## Authors\n\n* **Chantelle Layton** - *Initial concept, preliminary design, coding, and algorithm development/improvements* \n* **Stephanie Clay** - *Final app design and modifications, feature addition, new datasets, maintenance, and algorithm improvements* \n* **Emmanuel Devred** - *Scientific support, algorithm development/improvements, review and feature recommendations* \n\n## Acknowledgments\n\n* **Andrea Hilborn** for many valuable suggestions\n\n\n## Links\n\n[User guide](https://github.com/BIO-RSG/PhytoFit/blob/master/USERGUIDE.md) (In progress) \n[Chl-a model performance evaluation](https://bio-rsg.github.io/chla_model_performance_summary.html) \n[References and data sources](https://github.com/BIO-RSG/PhytoFit/blob/master/USERGUIDE.md#references-and-data-sources) \n[Using the raw (binned) data](https://github.com/BIO-RSG/PhytoFit/blob/master/fst_tutorial.md) (This is a quick tutorial explaining how the raw satellite chlorophyll data used in PhytoFit can be read into R and manipulated for other purposes) \n[Code updates affecting the algorithms](https://github.com/BIO-RSG/PhytoFit/blob/master/updates.md) (Summary of updates that affected the way the bloom metrics are calculated) \n\n'",",https://zenodo.org/badge/latestdoi/277295931,https://doi.org/10.5281/zenodo.4770754*","2020/07/05, 12:16:52",1207,MIT,37,334,"2022/05/03, 15:04:37",0,4,5,0,540,0,0.0,0.1009174311926605,"2021/05/18, 17:45:04",v1.0.0,0,3,false,,false,false,,,https://github.com/BIO-RSG,,,,,https://avatars.githubusercontent.com/u/68566074?v=4,,, Echopype,A package built to enable interoperability and scalability in ocean sonar data processing.,OSOceanAcoustics,https://github.com/OSOceanAcoustics/echopype.git,github,"ocean,acoustics,echosounder,sonar,netcdf,ek60,azfp,xarray,ek80,zarr",Marine Life and Fishery,"2023/10/12, 18:39:21",77,0,15,true,Python,Open-Source Ocean Acoustics,OSOceanAcoustics,"Python,CSS,Jinja,HTML,Dockerfile",https://echopype.readthedocs.io/,"b'
\n\n# Echopype\n\n
\n\nEchopype is a package built to enable interoperability and scalability in ocean sonar data processing. These data are widely used for obtaining information about the distribution and abundance of marine animals, such as fish and krill. Our ability to collect large volumes of sonar data from a variety of ocean platforms has grown significantly in the last decade. However, most of the new data remain under-utilized. echopype aims to address the root cause of this problem - the lack of interoperable data formats and scalable analysis workflows that adapt well to increasing data volume - by providing open-source tools as entry points for scientists to make discoveries using these new data.\n\nWatch the [echopype talk](https://www.youtube.com/watch?v=qboH7MyHrpU)\nat SciPy 2019 for background, discussions and a quick demo!\n\n## Documentation\n\nLearn more about echopype in the official documentation at https://echopype.readthedocs.io. Check out executable examples in the companion repository https://github.com/OSOceanAcoustics/echopype-examples.\n\n\n## Contributing\n\nYou can find information about how to contribute to echopype at our [Contributing Page](https://echopype.readthedocs.io/en/latest/contributing.html).\n\n\n## Echopype doesn\'t run on your data?\n\nPlease report any bugs by [creating issues on GitHub](https://medium.com/nyc-planning-digital/writing-a-proper-github-issue-97427d62a20f).\n\n[Pull requests](https://jarednielsen.com/learn-git-fork-pull-request/) are always welcome!\n\n\nContributors\n------------\n\nWu-Jung Lee ([@leewujung](https://github.com/leewujung)) founded the echopype project in 2018. It is currently led by Wu-Jung Lee and Emilio Mayorga ([@emiliom](https://github.com/emiliom)), who are primary developers together with Brandon Reyes ([@b-reyes](https://github.com/b-reyes)), Landung ""Don"" Setiawan ([@lsetiawan](https://github.com/lsetiawan)), and previously Kavin Nguyen ([@ngkavin](https://github.com/ngkavin)) and Imran Majeed ([@imranmaj](https://github.com/imranmaj)). Valentina Staneva ([@valentina-s](https://github.com/valentina-s)) is also part of the development team.\n\nOther contributors are listed in [echopype documentation](https://echopype.readthedocs.io).\n\nWe thank Dave Billenness of ASL Environmental Sciences for\nproviding the AZFP Matlab Toolbox as a reference for our\ndevelopment of AZFP support in echopype.\nWe also thank Rick Towler ([@rhtowler](https://github.com/rhtowler))\nof the NOAA Alaska Fisheries Science Center\nfor providing low-level file parsing routines for\nSimrad EK60 and EK80 echosounders.\n\n\nLicense\n-------\n\nEchopype is licensed under the open source [Apache 2.0 license](https://opensource.org/licenses/Apache-2.0).\n\n\n---------------\n\nCopyright (c) 2018-2022, echopype Developers.\n'",",https://doi.org/10.5281/zenodo.3906999","2018/08/26, 00:59:09",1887,Apache-2.0,189,2317,"2023/10/25, 17:49:39",98,734,1088,333,0,15,4.1,0.6165191740412979,"2023/09/02, 23:08:03",v0.8.1,1,24,false,,false,false,,,https://github.com/OSOceanAcoustics,,,,,https://avatars.githubusercontent.com/u/42682187?v=4,,, OSMOSE,A multispecies and individual-based model which focuses on fish species.,osmose-model,https://github.com/osmose-model/osmose.git,github,,Marine Life and Fishery,"2023/02/14, 23:27:20",20,0,5,true,Java,OSMOSE: Modelling Marine Exploited Ecosystems,osmose-model,"Java,R,Python,TeX,nesC",http://www.osmose-model.org/,"b'
\n\nOSMOSE: Modelling Marine Exploited Ecosystems\n=============================================\n\n\n[![DOI](https://zenodo.org/badge/48296200.svg)](https://zenodo.org/badge/latestdoi/48296200)\n[![Latest Release](https://img.shields.io/github/release/osmose-model/osmose.svg)](https://github.com/osmose-model/osmose/releases)\n[![R Build Status](https://github.com/osmose-model/osmose-private/workflows/r-build/badge.svg)](https://github.com/osmose-model/osmose-private/actions)\n[![Java Build Status](https://github.com/osmose-model/osmose-private/workflows/java-build/badge.svg)](https://github.com/osmose-model/osmose-private/actions)\n[![GitHub issues](https://img.shields.io/github/issues/osmose-model/osmose.svg)](https://github.com/osmose-model/osmose/issues)\n\n## Overview\n\nOSMOSE is a multispecies and individual-based model (IBM) which focuses on fish species. This model assumes opportunistic predation based on spatial co-occurrence and size adequacy between a predator and its prey (size-based opportunistic predation). It represents fish individuals grouped into schools, which are characterized by their size, weight, age, taxonomy and geographical location (2D model), and which undergo the major processes of the fish life cycle (growth, explicit predation, natural and starvation mortalities, reproduction and migration) and fishing exploitation. The model needs basic biological parameters that are often available for a wide range of species (for instance in FishBase), together with fish spatial distribution data. This package provides tools to build a model and run simulations using the OSMOSE model. See [http://www.osmose-model.org/](http://www.osmose-model.org/) for more details.\n\n## Installation\n\n``` r\n# The easiest way to get osmose is from CRAN:\ninstall.packages(""osmose"")\n\n# Or the development version from GitHub:\n# install.packages(""devtools"")\ndevtools::install_github(""osmose-model/osmose"")\n```\n\n## Documentation and usage\n\n`osmose` includes several ways to get help and test its functions: demo scripts, vignettes and help files.\n\n### Help files\n\nTo get information about any function, use the `?` command:\n\n``` r\n# Help file of read_osmose function\n?read_osmose\n\n# Help file of available plot methods\n?plot.osmose\n```\n\n### Demo scripts\n\nUsers can test the main functions using the demo scripts (embedded in the package root). To access them, run the `demo()` command:\n``` r\n# Check all the available topics\ndemo(package = ""osmose"")\n\n# Select and run one of the topics (e.g. osmose.config_class)\ndemo(package = ""osmose"", topic = ""osmose.config_class"")\n```\n\n### Vignettes\n\nVignettes are a good, simple way to review all the main functions, because they are rendered as HTML. The commands are similar to those for the demo scripts:\n\n``` r\n# Check all the available topics\nvignette(package = ""osmose"")\n\n# Select and run one of the topics (e.g. 
osmose.config_class)\nvignette(package = ""osmose"", topic = ""create_run_read"")\n```\n\n## References\n\n- [Official website](http://www.osmose-model.org/) of the model, including info about the development of the project as well as references.\n- [Documentation website](https://documentation.osmose-model.org/index.html) with information on the parameters of the Java model (the core).\n- [Github site](https://github.com/osmose-model/osmose) where the development code is hosted.\n\n## Using documentation plugins\n\nSome documentation artifacts (Javadoc, PlantUML diagrams) can be generated using Maven plugins, which are defined in the `pom.xml` file.\n\n### Building Javadoc\n\n```\nmvn javadoc:javadoc\n```\n\nThe Javadoc will be stored in the `doc/_static/javadoc/apidocs/` folder.\n\n### Generate PlantUML diagrams\n\nTo generate PlantUML diagrams for the full Osmose project:\n\n```\nmvn plantuml-generator:generate@osmose-full\n```\n\nThe PlantUML diagram will be stored in `doc/_static/puml`.\n\nTo convert the resulting diagram into an image format (SVG for instance), the [PlantUML](https://plantuml.com/fr/) tool is required. When the diagram has been generated, type:\n\n```\nplantuml -tsvg doc/_static/puml/osmose-full.puml\n```\n\n## Acknowledgements\n\n
\n'",",https://zenodo.org/badge/latestdoi/48296200","2015/12/19, 20:28:34",2867,GPL-3.0,126,3017,"2023/08/21, 08:42:09",5,1,5,1,65,0,0.0,0.18024263431542464,"2023/02/14, 23:33:26",4.3.3,0,6,false,,false,false,,,https://github.com/osmose-model,http://www.osmose-model.org,,,,https://avatars.githubusercontent.com/u/16767770?v=4,,, WHOI HABhub Data Portal,Is being developed as a data access and visualization portal for the New England Harmful Algal Bloom Observing Network.,WHOIGit,https://github.com/WHOIGit/whoi-hab-hub.git,github,,Marine Life and Fishery,"2023/10/20, 17:36:12",6,0,3,true,Python,WHOI GitHub Central,WHOIGit,"Python,JavaScript,HTML,Shell,CSS,Makefile,Batchfile,Dockerfile,SCSS",,"b'# WHOI HABhub Data Portal\n\n[![License: GPL v3](https://img.shields.io/badge/License-GPLv3-blue.svg)](https://www.gnu.org/licenses/gpl-3.0)\n\nHarmful Algal Bloom data API and map project. Our current version for Public Beta is now live! https://habhub.whoi.edu/\n\nThe ""HABhub"" data portal is being developed as a data access and visualization portal for the New England Harmful Algal Bloom Observing Network ([neHABON](https://northeasthab.whoi.edu/bloom-monitoring/habon-ne/)).\n\nThere are two separate applications that comprise the HABhub:\n\n- a backend data server that provides a REST API. Built using the Django Python framework. You can find this app in the `habhub-dataserver` directory.\n- a frontend client to visualize the data with an interactive map and charts. Built with the React JS framework. Located in the `habhub-react-frontend` directory.\n\nThese applications can run together on a shared machine as a single service or be set up independently of each other. The frontend client only needs to be configured with the base URL of the backend data server to access its API.\n\n## Local Deployment with Docker\n\nThis project is configured to be deployed both locally and in production using Docker and Docker Compose. The default local deployment uses Docker Compose to run both applications in a single network from one `local.yml` compose file.\n\n### Steps for initial Local Deployment:\n\n1. Docker and Docker Compose installed: [instructions](https://docs.docker.com/compose/install)\n2. Clone repo to your local computer: `git clone https://github.com/WHOIGit/whoi-hab-hub.git`\n3. Create local `.env` files for both Django data server and React frontend client. Continued below...\n\n**Step 3 details**\n\nHABhub uses environmental variables for configs such as API links, API tokens, and secrets like database usernames/passwords.\nThese variable should NOT be kept in version control.\n\n**For the Django Data Server**\n\nThe `habhub-dataserver` directory contains the Django backend application. This directory also includes a `.envs.example` directory that you can use as\na template to create your own `.envs` directory and files. HABhub requires this "".envs"" directory to be in the Django application root level directory. (ex. environmental variables file path: `habhub-dataserver/.envs/.local/.django`)\n\nFinal directory structure:\n\n```\nhabhub-dataserver\n-- .envs/\n -- .local/\n -- .django\n -- .postgres\n -- .production/\n -- .django\n -- .postgres\n\n```\n\n**For the React Frontend Client**\n\nThe `habhub-react-frontend` directory contains the React frontend application. Create a new default `.env` file in this directory using the provided `.env.example` file as a template. 
You can also use the example `.env.development` and `.env.production` files to set different values for environmental variables depending on the environment. Any variable named in one of these environment specific files will be used instead of the default value in the regular `.env` file.\n\nThe required environmental variables are:\n\n- REACT_APP_API_URL\n- REACT_APP_MAPBOX_TOKEN\n\nThe REACT_APP_API_URL is the base URL of the HABhub data server you want to use. In the default local setup, this is http://localhost:8000/\n\nThe REACT_APP_MAPBOX_TOKEN is the API token for Mapbox access. To get a Mapbox GL JS token, create an account [here](https://account.mapbox.com/auth/signup/).\n\nYou can also change the default initial map configuration settings for both the latitude/longitude and zoom:\n\n- REACT_APP_MAP_LATITUDE\n- REACT_APP_MAP_LONGITUDE\n- REACT_APP_MAP_ZOOM\n\n**Step 4**\n\nOpen your terminal and `cd` to the root level of the `whoi-hab-hub` directory. Run the following Docker Compose commands:\n\n```\ndocker-compose -f local.yml build\ndocker-compose -f local.yml up\n```\n\nFor the Django app, you need to run the initial DB migrations. Open up a second terminal and run:\n\n```\ndocker-compose -f local.yml run --rm django python manage.py migrate\n```\n\nThen, to create the initial superuser, run:\n\n```\ndocker-compose -f local.yml run --rm django python manage.py createsuperuser\n```\n\nThe frontend map application will now be available at: http://localhost:3000/\n\nThe backend data server will be available at: http://localhost:8000/\n\nTo access the Django admin system, log in with your new superuser credentials at: http://localhost:8000/admin\n\n### Initial HABHub Data Server Configuration\n\nLog in to the Django admin panel to access the HABHub Data Server settings.\n\nThere are two `Core` data models that need to be configured to work with the HABHub React Frontend Client and the IFCB Dashboard.\n\n**Data Layers**\n\nThese are the different data layers that are available to display on the frontend HABHub map. By default, all of them are active. To remove a data layer from the frontend map, edit it and simply uncheck the ""Is Active"" checkbox.\n\nhttp://localhost:8000/admin/core/datalayer/\n\n**Target Species**\n\nThis is the list of HAB species that are available both for data ingestion from an IFCB Dashboard and for interaction in the frontend map.\n\nhttp://localhost:8000/admin/core/targetspecies/\n\nThe default list is pre-configured with the six species of interest from the main https://habhub.whoi.edu/ site. To ingest IFCB data for a species from an IFCB Dashboard, you just need to make sure that the ""species_id"" field matches the text string that is used in the IFCB dashboard Autoclass files to identify the species. Ex. file: https://habon-ifcb.whoi.edu/harpswell/D20200221T223958_IFCB000_class_scores.csv\n\nYou can also choose the primary color for each species for display on the map. A color gradient using that color is automatically created when you change a species color.\n\n**IFCB Datasets Layer**\n\nTo configure this data layer, you need to first create some Dataset objects in the admin panel:\n\nhttp://localhost:8000/admin/ifcb_datasets/dataset/\n\nThese Datasets should match an existing Dataset in your IFCB Dashboard. The ""Dashboard id name"" field needs to be set to the unique ID from the IFCB Dashboard. 
This can be found in the IFCB Dashboard URL for the dataset, ex: https://habon-ifcb.whoi.edu/timeline?dataset=harpswell, or at the bottom of the ""Basic Info"" box in the Dashboard.\n\nOnce a Dataset is created in HABhub, data from the Target Species will begin to automatically be ingested on an hourly basis. No further configuration is necessary.\n\n**Shellfish Toxicity Layer**\n\nFirst, create some geographic Station locations in the admin panel for the stations that provide Shellfish Toxicity data. You can then import data from a CSV for each Station using the Datapoint importer:\n\nhttp://localhost:8000/admin/stations/datapoint/import/\n\n**Closures Layer**\n\nThis layer is very dependent on specific state government data and protocols. Example data is available for some New England states, but this layer requires custom setup for new states and is still under development.\n\n### Steps to update Local Deployment:\n\nTo update your local version with the latest code changes from the `main` repo branch, take the following steps:\n\n1. `cd` to the project root directory\n2. `git pull` to get the latest version\n3. `docker-compose -f local.yml down`\n4. `docker-compose -f local.yml build`\n5. `docker-compose -f local.yml up --renew-anon-volumes` (the `--renew-anon-volumes` option makes sure that your local `node_modules` volume is updated with any package changes.)\n'",,"2018/12/14, 18:12:29",1776,GPL-3.0,77,824,"2023/09/30, 16:41:05",5,15,33,14,25,0,0.0,0.01653944020356235,,,0,4,false,,false,false,,,https://github.com/WHOIGit,www.whoi.edu,"Woods Hole, MA",,,https://avatars.githubusercontent.com/u/38010066?v=4,,, nwfscSurvey,Tool to pull and process NWFSC West Coast groundfish survey data for use in PFMC groundfish stock assessments.,pfmc-assessments,https://github.com/pfmc-assessments/nwfscSurvey.git,github,,Marine Life and Fishery,"2023/07/20, 18:17:33",10,0,5,true,R,PFMC assessments,pfmc-assessments,R,http://pfmc-assessments.github.io/nwfscSurvey/,"b'\n\n\n\n[![R-CMD-check](https://github.com/pfmc-assessments/nwfscSurvey/workflows/R-CMD-check/badge.svg)](https://github.com/pfmc-assessments/nwfscSurvey/actions)\n[![DOI](https://zenodo.org/badge/26344817.svg)](https://zenodo.org/badge/latestdoi/26344817)\n\n\n## Installation\n\n`nwfscSurvey` provides code for analysis of the Northwest Fisheries Science Center (NWFSC) West Coast Groundfish Bottom Trawl, NWFSC\nslope, Alaska Fisheries Science Center (AFSC) slope, and AFSC Triennial surveys. Code within this package allows\nfor pulling data from the [NWFSC data warehouse](https://www.nwfsc.noaa.gov/data), calculating the design-based\nindices of abundance, visualizing data, and processing length- and age-composition data for use in West Coast groundfish stock assessment.\n\nThis code was developed for use by scientists at the NWFSC and is intended to work on the specific data products that we have access to using methods specific to the needs of this group.\n\n``` r\ninstall.packages(""remotes"")\nremotes::install_github(""pfmc-assessments/nwfscSurvey"")\n```\n\n## Package information\n\nA [website](http://pfmc-assessments.github.io/nwfscSurvey/index.html) is now available for the package. The [""Get started""](http://pfmc-assessments.github.io/nwfscSurvey/articles/nwfscSurvey.html) tab provides general information on how to pull, visualize, and process survey data. Additionally, the [""Reference""](http://pfmc-assessments.github.io/nwfscSurvey/reference/index.html) tab provides a detailed list of all available functions. 
\n'",",https://zenodo.org/badge/latestdoi/26344817","2014/11/08, 00:38:17",3274,GPL-3.0,116,542,"2023/07/20, 18:17:37",32,46,90,37,97,2,1.3,0.34090909090909094,"2021/02/25, 15:32:06",v2.0,0,11,false,,false,false,,,https://github.com/pfmc-assessments,,,,,https://avatars.githubusercontent.com/u/9087871?v=4,,, auk,eBird Data Extraction and Processing in R.,CornellLabofOrnithology,https://github.com/CornellLabofOrnithology/auk.git,github,"r,ebird,dataset",Terrestrial Animals,"2023/08/21, 16:04:15",118,0,13,true,R,Cornell Lab of Ornithology,CornellLabofOrnithology,R,https://CornellLabofOrnithology.github.io/auk/,"b'\n\n# auk: eBird Data Extraction and Processing in R \n\n\n\n[![License: GPL\nv3](https://img.shields.io/badge/License-GPL%20v3-blue.svg)](http://www.gnu.org/licenses/gpl-3.0)\n[![CRAN\\_Status\\_Badge](http://www.r-pkg.org/badges/version/auk)](https://cran.r-project.org/package=auk)\n[![Downloads](http://cranlogs.r-pkg.org/badges/grand-total/auk?color=brightgreen)](http://www.r-pkg.org/pkg/auk)\n[![R-CMD-check](https://github.com/CornellLabofOrnithology/auk/workflows/R-CMD-check/badge.svg)](https://github.com/CornellLabofOrnithology/auk/actions)\n[![rOpenSci](https://badges.ropensci.org/136_status.svg)](https://github.com/ropensci/onboarding/issues/136)\n\n\n## Overview\n\n[eBird](http://www.ebird.org) is an online tool for recording bird\nobservations. Since its inception, over 600 million records of bird\nsightings (i.e.\xc2\xa0combinations of location, date, time, and bird species)\nhave been collected, making eBird one of the largest citizen science\nprojects in history and an extremely valuable resource for bird research\nand conservation. The full eBird database is packaged as a text file and\navailable for download as the [eBird Basic Dataset\n(EBD)](http://ebird.org/ebird/data/download). Due to the large size of\nthis dataset, it must be filtered to a smaller subset of desired\nobservations before reading into R. This filtering is most efficiently\ndone using AWK, a Unix utility and programming language for processing\ncolumn formatted text data. This package acts as a front end for AWK,\nallowing users to filter eBird data before import into R.\n\nFor a comprehensive resource on using eBird data for modeling species\ndistributions, consult the free online book [Best Practices for Using\neBird\nData](https://cornelllabofornithology.github.io/ebird-best-practices/)\nand the association paper *Analytical guidelines to increase the value\nof community science data: An example using eBird data to estimate\nspecies distributions* ([Johnston et\nal.\xc2\xa02021](https://onlinelibrary.wiley.com/doi/10.1111/ddi.13271)).\n\n## Installation\n\n # cran release\n install.packages(""auk"")\n\n # or install the development version from github\n # install.packages(""remotes"")\n remotes::install_github(""CornellLabofOrnithology/auk"")\n\n`auk` requires the Unix utility AWK, which is available on most Linux\nand Mac OS X machines. Windows users will first need to install\n[Cygwin](https://www.cygwin.com) before using this package. 
Note that\n**Cygwin must be installed in the default location**\n(`C:/cygwin/bin/gawk.exe` or `C:/cygwin64/bin/gawk.exe`) in order for\n`auk` to work.\n\n## Vignette\n\nFull details on using `auk` to produce both presence-only and\npresence-absence data are outlined in the\n[vignette](https://cornelllabofornithology.github.io/auk/articles/auk.html).\n\n## Cheatsheet\n\nAn `auk` cheatsheet was developed by [Mickayla\nJohnston](https://www.linkedin.com/in/mickayla-johnston/):\n\n\n\n## `auk` and `rebird`\n\nThose interested in eBird data may also want to consider\n[`rebird`](https://github.com/ropensci/rebird), an R package that\nprovides an interface to the [eBird\nAPIs](https://confluence.cornell.edu/display/CLOISAPI/eBirdAPIs). The\nfunctions in `rebird` are mostly limited to accessing recent\n(i.e.\xc2\xa0within the last 30 days) observations, although `ebirdfreq()` does\nprovide historical frequency of observation data. In contrast, `auk`\ngives access to the full set of ~ 500 million eBird observations. For\nmost ecological applications, users will require `auk`; however, for\nsome use cases, e.g.\xc2\xa0building tools for birders, `rebird` provides a\nquick and easy way to access data.\n\n## A note on versions\n\nThis package contains a current (as of the time of package release)\nversion of the [bird taxonomy used by\neBird](http://help.ebird.org/customer/portal/articles/1006825-the-ebird-taxonomy).\nThis taxonomy determines the species that can be reported in eBird and\ntherefore the species that users of `auk` can extract. eBird releases an\nupdated taxonomy once a year, typically in August, at which time `auk`\nwill be updated to include the current taxonomy. When using `auk`, users\nshould be careful to ensure that the version they\xe2\x80\x99re using is in sync\nwith the eBird Basic Dataset they\xe2\x80\x99re working with. This is most easily\naccomplished by always using the most recent version of `auk` and the\nmost recent release of the dataset.\n\n## Quick start\n\nThis package uses the command-line program AWK to extract subsets of the\neBird Basic Dataset for use in R. This is a multi-step process:\n\n1. Define a reference to the eBird data file.\n2. Define a set of spatial, temporal, or taxonomic filters. Each type\n of filter corresponds to a different function, e.g.\xc2\xa0`auk_species` to\n filter by species. At this stage the filters are only set up, no\n actual filtering is done until the next step.\n3. Filter the eBird data text file, producing a new text file with only\n the selected rows.\n4. Import this text file into R as a data frame.\n\nBecause the eBird dataset is so large, step 3 typically takes several\nhours to run. Here\xe2\x80\x99s a simple example that extracts all Canada Jay\nrecords from within Canada.\n\n library(auk)\n # get the path to the example data included in the package\n # in practice, provide path to ebd, e.g. f_in <- ""data/ebd_relFeb-2018.txt""\n f_in <- system.file(""extdata/ebd-sample.txt"", package = ""auk"")\n # output text file\n f_out <- ""ebd_filtered_grja.txt""\n ebird_data <- f_in %>% \n # 1. reference file\n auk_ebd() %>% \n # 2. define filters\n auk_species(species = ""Canada Jay"") %>% \n auk_country(country = ""Canada"") %>% \n # 3. run filtering\n auk_filter(file = f_out) %>% \n # 4. 
read text file into r data frame\n read_ebd()\n\nFor those not familiar with the pipe operator (`%>%`), the above code\ncould be rewritten:\n\n f_in <- system.file(""extdata/ebd-sample.txt"", package = ""auk"")\n f_out <- ""ebd_filtered_grja.txt""\n ebd <- auk_ebd(f_in)\n ebd_filters <- auk_species(ebd, species = ""Canada Jay"")\n ebd_filters <- auk_country(ebd_filters, country = ""Canada"")\n ebd_filtered <- auk_filter(ebd_filters, file = f_out)\n ebd_df <- read_ebd(ebd_filtered)\n\n## Usage\n\n### Filtering\n\n`auk` uses a [pipeline-based workflow](http://r4ds.had.co.nz/pipes.html)\nfor defining filters, which can then be compiled into an AWK script.\nUsers should start by defining a reference to the dataset file with\n`auk_ebd()`. Then any of the following filters can be applied:\n\n- `auk_species()`: filter by species using common or scientific names.\n- `auk_country()`: filter by country using the standard English names\n or [ISO 2-letter country\n codes](https://en.wikipedia.org/wiki/ISO_3166-1_alpha-2).\n- `auk_state()`: filter by state using eBird state codes, see\n `?ebird_states`.\n- `auk_bcr()`: filter by [Bird Conservation Region\n (BCR)](http://nabci-us.org/resources/bird-conservation-regions/)\n using BCR codes, see `?bcr_codes`.\n- `auk_bbox()`: filter by spatial bounding box, i.e.\xc2\xa0a range of\n latitudes and longitudes in decimal degrees.\n- `auk_date()`: filter to checklists from a range of dates. To extract\n observations from a range of dates, regardless of year, use the\n wildcard \xe2\x80\x9c`*`\xe2\x80\x9d in place of the year,\n e.g.\xc2\xa0`date = c(""*-05-01"", ""*-06-30"")` for observations from May and\n June of any year.\n- `auk_last_edited()`: filter to checklists from a range of last\n edited dates, useful for extracting just new or recently edited\n data.\n- `auk_protocol()`: filter to checklists that follow a specific\n search protocol, either stationary, traveling, or casual.\n- `auk_project()`: filter to checklists collected as part of a\n specific project (e.g.\xc2\xa0a breeding bird survey).\n- `auk_time()`: filter to checklists started during a range of\n times-of-day.\n- `auk_duration()`: filter to checklists with observation durations\n within a given range.\n- `auk_distance()`: filter to checklists with distances travelled\n within a given range.\n- `auk_breeding()`: only retain observations that have an associated\n breeding bird atlas code.\n- `auk_complete()`: only retain checklists in which the observer has\n specified that they recorded all species seen or heard. It is\n necessary to retain only complete records for the creation of\n presence-absence data, because the \xe2\x80\x9cabsence\xe2\x80\x9d information is\n inferred by the lack of reporting of a species on checklists.\n\nNote that all of the functions listed above only modify the `auk_ebd`\nobject, in order to define the filters. 
Once the filters have been\ndefined, the filtering is actually conducted using `auk_filter()`.\n\n # sample data\n f <- system.file(""extdata/ebd-sample.txt"", package = ""auk"")\n # define an EBD reference and a set of filters\n ebd <- auk_ebd(f) %>% \n # species: common and scientific names can be mixed\n auk_species(species = c(""Canada Jay"", ""Cyanocitta cristata"")) %>%\n # country: codes and names can be mixed; case insensitive\n auk_country(country = c(""US"", ""Canada"", ""mexico"")) %>%\n # bbox: long and lat in decimal degrees\n # formatted as `c(lng_min, lat_min, lng_max, lat_max)`\n auk_bbox(bbox = c(-100, 37, -80, 52)) %>%\n # date: use standard ISO date format `""YYYY-MM-DD""`\n auk_date(date = c(""2012-01-01"", ""2012-12-31"")) %>%\n # time: 24h format\n auk_time(start_time = c(""06:00"", ""09:00"")) %>%\n # duration: length in minutes of checklists\n auk_duration(duration = c(0, 60)) %>%\n # complete: all species seen or heard are recorded\n auk_complete()\n ebd\n #> Input \n #> EBD: /Users/mes335/projects/auk/inst/extdata/ebd-sample.txt \n #> \n #> Output \n #> Filters not executed\n #> \n #> Filters \n #> Species: Cyanocitta cristata, Perisoreus canadensis\n #> Countries: CA, MX, US\n #> States: all\n #> Counties: all\n #> BCRs: all\n #> Bounding box: Lon -100 - -80; Lat 37 - 52\n #> Years: all\n #> Date: 2012-01-01 - 2012-12-31\n #> Start time: 06:00-09:00\n #> Last edited date: all\n #> Protocol: all\n #> Project code: all\n #> Duration: 0-60 minutes\n #> Distance travelled: all\n #> Records with breeding codes only: no\n #> Complete checklists only: yes\n\nIn all cases, extensive checks are performed to ensure filters are\nvalid. For example, species are checked against the official [eBird\ntaxonomy](http://help.ebird.org/customer/portal/articles/1006825-the-ebird-taxonomy)\nand countries are checked using the\n[`countrycode`](https://github.com/vincentarelbundock/countrycode)\npackage.\n\nEach of the functions described in the *Defining filters* section only\ndefines a filter. Once all of the required filters have been set,\n`auk_filter()` should be used to compile them into an AWK script and\nexecute it to produce an output file. 
So, as an example of bringing all\nof these steps together, the following commands will extract all Canada\nJay and Blue Jay records from Canada and save the results to a\ntab-separated text file for subsequent use:\n\n output_file <- ""ebd_filtered_blja-grja.txt""\n ebd_filtered <- system.file(""extdata/ebd-sample.txt"", package = ""auk"") %>% \n auk_ebd() %>% \n auk_species(species = c(""Canada Jay"", ""Cyanocitta cristata"")) %>% \n auk_country(country = ""Canada"") %>% \n auk_filter(file = output_file)\n\n**Filtering the full dataset typically takes at least a couple hours**,\nso set it running then go grab lunch!\n\n### Reading\n\neBird Basic Dataset files can be read with `read_ebd()`:\n\n system.file(""extdata/ebd-sample.txt"", package = ""auk"") %>% \n read_ebd() %>% \n str()\n #> tibble [494 \xc3\x97 45] (S3: tbl_df/tbl/data.frame)\n #> $ checklist_id : chr [1:494] ""S6852862"" ""S14432467"" ""S39033556"" ""S38303088"" ...\n #> $ global_unique_identifier : chr [1:494] ""URN:CornellLabOfOrnithology:EBIRD:OBS97935965"" ""URN:CornellLabOfOrnithology:EBIRD:OBS201605886"" ""URN:CornellLabOfOrnithology:EBIRD:OBS530638734"" ""URN:CornellLabOfOrnithology:EBIRD:OBS520887169"" ...\n #> $ last_edited_date : chr [1:494] ""2016-02-22 14:59:49"" ""2013-06-16 17:34:19"" ""2017-09-06 13:13:34"" ""2017-07-24 15:17:16"" ...\n #> $ taxonomic_order : num [1:494] 20145 20145 20145 20145 20145 ...\n #> $ category : chr [1:494] ""species"" ""species"" ""species"" ""species"" ...\n #> $ common_name : chr [1:494] ""Green Jay"" ""Green Jay"" ""Green Jay"" ""Green Jay"" ...\n #> $ scientific_name : chr [1:494] ""Cyanocorax yncas"" ""Cyanocorax yncas"" ""Cyanocorax yncas"" ""Cyanocorax yncas"" ...\n #> $ observation_count : chr [1:494] ""4"" ""2"" ""1"" ""1"" ...\n #> $ breeding_code : chr [1:494] NA NA NA NA ...\n #> $ breeding_category : chr [1:494] NA NA NA NA ...\n #> $ age_sex : chr [1:494] NA NA NA NA ...\n #> $ country : chr [1:494] ""Mexico"" ""Mexico"" ""Mexico"" ""Mexico"" ...\n #> $ country_code : chr [1:494] ""MX"" ""MX"" ""MX"" ""MX"" ...\n #> $ state : chr [1:494] ""Yucatan"" ""Chiapas"" ""Chiapas"" ""Chiapas"" ...\n #> $ state_code : chr [1:494] ""MX-YUC"" ""MX-CHP"" ""MX-CHP"" ""MX-CHP"" ...\n #> $ county : chr [1:494] NA NA NA NA ...\n #> $ county_code : chr [1:494] NA NA NA NA ...\n #> $ iba_code : chr [1:494] NA NA NA NA ...\n #> $ bcr_code : int [1:494] 56 60 60 60 60 55 55 60 56 55 ...\n #> $ usfws_code : chr [1:494] NA NA NA NA ...\n #> $ atlas_block : chr [1:494] NA NA NA NA ...\n #> $ locality : chr [1:494] ""Yuc. 
Hacienda Chichen"" ""Berlin2_Punto_06"" ""07_020_LaConcordia_SanFrancsco_Magallanes_P01"" ""07_020_CerroBola_BuenaVista_tr3_trad_P03"" ...\n #> $ locality_id : chr [1:494] ""L989845"" ""L2224225"" ""L6247542"" ""L6120049"" ...\n #> $ locality_type : chr [1:494] ""P"" ""P"" ""P"" ""P"" ...\n #> $ latitude : num [1:494] 20.7 15.8 15.8 15.8 15.7 ...\n #> $ longitude : num [1:494] -88.6 -93 -93 -92.9 -92.9 ...\n #> $ observation_date : Date[1:494], format: ""2010-09-05"" ""2011-08-18"" ""2012-02-02"" ...\n #> $ time_observations_started: chr [1:494] ""06:30:00"" ""08:00:00"" ""09:13:00"" ""06:40:00"" ...\n #> $ observer_id : chr [1:494] ""obsr55719"" ""obsr313215"" ""obsr313215"" ""obsr313215"" ...\n #> $ sampling_event_identifier: chr [1:494] ""S6852862"" ""S14432467"" ""S39033556"" ""S38303088"" ...\n #> $ protocol_type : chr [1:494] ""Traveling"" ""Traveling"" ""Stationary"" ""Stationary"" ...\n #> $ protocol_code : chr [1:494] ""P22"" ""P22"" ""P21"" ""P21"" ...\n #> $ project_code : chr [1:494] ""EBIRD"" ""EBIRD_MEX"" ""EBIRD_MEX"" ""EBIRD"" ...\n #> $ duration_minutes : int [1:494] 90 10 10 10 10 120 30 10 80 30 ...\n #> $ effort_distance_km : num [1:494] 1 0.257 NA NA 0.257 ...\n #> $ effort_area_ha : num [1:494] NA NA NA NA NA NA NA NA NA NA ...\n #> $ number_observers : int [1:494] 3 1 1 1 1 2 2 1 13 2 ...\n #> $ all_species_reported : logi [1:494] TRUE TRUE TRUE TRUE TRUE TRUE ...\n #> $ group_identifier : chr [1:494] NA NA NA NA ...\n #> $ has_media : logi [1:494] FALSE FALSE FALSE FALSE FALSE FALSE ...\n #> $ approved : logi [1:494] TRUE TRUE TRUE TRUE TRUE TRUE ...\n #> $ reviewed : logi [1:494] FALSE FALSE FALSE FALSE FALSE FALSE ...\n #> $ reason : chr [1:494] NA NA NA NA ...\n #> $ trip_comments : chr [1:494] NA ""Alonso Gomez Hdz Monitoreo Comunitario, Transectos en Bosque de Pino Encino, La Concordia,1098 msnm"" ""Miguel Mndez Lpez"" ""Rogelio Lpez Encino"" ...\n #> $ species_comments : chr [1:494] NA NA NA NA ...\n #> - attr(*, ""rollup"")= logi TRUE\n\n## Presence-absence data\n\nFor many applications, presence-only data are sufficient; however, for\nmodeling and analysis, presence-absence data are required. `auk`\nincludes functionality to produce presence-absence data from eBird\nchecklists. For full details, consult the vignette: `vignette(""auk"")`.\n\n## Code of Conduct\n\nPlease note that this project is released with a [Contributor Code of\nConduct](CONDUCT.md). By participating in this project you agree to\nabide by its terms.\n\n## Acknowledgements\n\nThis package is based on AWK scripts provided as part of the eBird Data\nWorkshop given by Wesley Hochachka, Daniel Fink, Tom Auer, and Frank La\nSorte at the 2016 NAOC on August 15, 2016.\n\n`auk` benefited significantly from the [rOpenSci](https://ropensci.org/)\nreview process, including helpful suggestions from [Auriel\nFournier](https://github.com/aurielfournier) and [Edmund\nHart](https://github.com/emhart).\n\n## References\n\n eBird Basic Dataset. Version: ebd_relFeb-2018. Cornell Lab of Ornithology, Ithaca, New York. 
May 2013.\n\n[![](http://www.ropensci.org/public_images/github_footer.png)](http://ropensci.org)\n'",,"2017/07/01, 20:28:25",2307,GPL-3.0,5,193,"2022/09/05, 14:04:51",8,3,66,0,415,0,0.0,0.010582010582010581,"2020/10/29, 03:27:42",v0.4.2,0,3,false,,false,true,,,https://github.com/CornellLabofOrnithology,http://www.birds.cornell.edu,"Ithaca, NY",,,https://avatars.githubusercontent.com/u/1395442?v=4,,, palmerpenguins,"The palmerpenguins data contains size measurements for three penguin species observed on three islands in the Palmer Archipelago, Antarctica.",allisonhorst,https://github.com/allisonhorst/palmerpenguins.git,github,,Terrestrial Animals,"2022/08/17, 04:59:45",801,0,89,true,R,,,"R,CSS",https://allisonhorst.github.io/palmerpenguins/,"b'\n\n\n# palmerpenguins \n\n\n\n[![DOI](https://zenodo.org/badge/DOI/10.5281/zenodo.3960218.svg)](https://doi.org/10.5281/zenodo.3960218)\n[![CRAN](https://www.r-pkg.org/badges/version/palmerpenguins)](https://cran.r-project.org/package=palmerpenguins)\n\n\n\nThe goal of palmerpenguins is to provide a great dataset for data\nexploration & visualization, as an alternative to `iris`.\n\n\n\n## Installation\n\nYou can install the released version of palmerpenguins from\n[CRAN](https://CRAN.R-project.org) with:\n\n``` r\ninstall.packages(""palmerpenguins"")\n```\n\nTo install the development version from [GitHub](https://github.com/)\nuse:\n\n``` r\n# install.packages(""remotes"")\nremotes::install_github(""allisonhorst/palmerpenguins"")\n```\n\n## About the data\n\nData were collected and made available by [Dr.\xc2\xa0Kristen\nGorman](https://www.uaf.edu/cfos/people/faculty/detail/kristen-gorman.php)\nand the [Palmer Station, Antarctica\nLTER](https://pallter.marine.rutgers.edu/), a member of the [Long Term\nEcological Research Network](https://lternet.edu/).\n\nThe palmerpenguins package contains two datasets.\n\n``` r\nlibrary(palmerpenguins)\ndata(package = \'palmerpenguins\')\n```\n\nOne is called `penguins`, and is a simplified version of the raw data;\nsee `?penguins` for more info:\n\n``` r\nhead(penguins)\n#> # A tibble: 6 \xc3\x97 8\n#> species island bill_length_mm bill_depth_mm flipper_length_\xe2\x80\xa6 body_mass_g sex \n#> \n#> 1 Adelie Torge\xe2\x80\xa6 39.1 18.7 181 3750 male \n#> 2 Adelie Torge\xe2\x80\xa6 39.5 17.4 186 3800 fema\xe2\x80\xa6\n#> 3 Adelie Torge\xe2\x80\xa6 40.3 18 195 3250 fema\xe2\x80\xa6\n#> 4 Adelie Torge\xe2\x80\xa6 NA NA NA NA \n#> 5 Adelie Torge\xe2\x80\xa6 36.7 19.3 193 3450 fema\xe2\x80\xa6\n#> 6 Adelie Torge\xe2\x80\xa6 39.3 20.6 190 3650 male \n#> # \xe2\x80\xa6 with 1 more variable: year \n```\n\nThe second dataset is `penguins_raw`, and contains all the variables and\noriginal names as downloaded; see `?penguins_raw` for more info.\n\n``` r\nhead(penguins_raw)\n#> # A tibble: 6 \xc3\x97 17\n#> studyName `Sample Number` Species Region Island Stage `Individual ID`\n#> \n#> 1 PAL0708 1 Adelie Penguin \xe2\x80\xa6 Anvers Torge\xe2\x80\xa6 Adul\xe2\x80\xa6 N1A1 \n#> 2 PAL0708 2 Adelie Penguin \xe2\x80\xa6 Anvers Torge\xe2\x80\xa6 Adul\xe2\x80\xa6 N1A2 \n#> 3 PAL0708 3 Adelie Penguin \xe2\x80\xa6 Anvers Torge\xe2\x80\xa6 Adul\xe2\x80\xa6 N2A1 \n#> 4 PAL0708 4 Adelie Penguin \xe2\x80\xa6 Anvers Torge\xe2\x80\xa6 Adul\xe2\x80\xa6 N2A2 \n#> 5 PAL0708 5 Adelie Penguin \xe2\x80\xa6 Anvers Torge\xe2\x80\xa6 Adul\xe2\x80\xa6 N3A1 \n#> 6 PAL0708 6 Adelie Penguin \xe2\x80\xa6 Anvers Torge\xe2\x80\xa6 Adul\xe2\x80\xa6 N3A2 \n#> # \xe2\x80\xa6 with 10 more variables: `Clutch Completion` , `Date Egg` ,\n#> # `Culmen Length (mm)` 
, `Culmen Depth (mm)` ,\n#> # `Flipper Length (mm)` , `Body Mass (g)` , Sex ,\n#> # `Delta 15 N (o/oo)` , `Delta 13 C (o/oo)` , Comments \n```\n\nBoth datasets contain data for 344 penguins. There are 3 different\nspecies of penguins in this dataset, collected from 3 islands in the\nPalmer Archipelago, Antarctica.\n\n``` r\nstr(penguins)\n#> tibble [344 \xc3\x97 8] (S3: tbl_df/tbl/data.frame)\n#> $ species : Factor w/ 3 levels ""Adelie"",""Chinstrap"",..: 1 1 1 1 1 1 1 1 1 1 ...\n#> $ island : Factor w/ 3 levels ""Biscoe"",""Dream"",..: 3 3 3 3 3 3 3 3 3 3 ...\n#> $ bill_length_mm : num [1:344] 39.1 39.5 40.3 NA 36.7 39.3 38.9 39.2 34.1 42 ...\n#> $ bill_depth_mm : num [1:344] 18.7 17.4 18 NA 19.3 20.6 17.8 19.6 18.1 20.2 ...\n#> $ flipper_length_mm: int [1:344] 181 186 195 NA 193 190 181 195 193 190 ...\n#> $ body_mass_g : int [1:344] 3750 3800 3250 NA 3450 3650 3625 4675 3475 4250 ...\n#> $ sex : Factor w/ 2 levels ""female"",""male"": 2 1 1 NA 1 2 1 2 NA NA ...\n#> $ year : int [1:344] 2007 2007 2007 2007 2007 2007 2007 2007 2007 2007 ...\n```\n\nWe gratefully acknowledge Palmer Station LTER and the US LTER Network.\nSpecial thanks to Marty Downs (Director, LTER Network Office) for help\nregarding the data license & use.\n\n## Examples\n\nYou can find these and more code examples for exploring palmerpenguins\nin `vignette(""examples"")`.\n\nPenguins are fun to summarize! For example:\n\n``` r\nlibrary(tidyverse)\npenguins %>% \n count(species)\n#> # A tibble: 3 \xc3\x97 2\n#> species n\n#> \n#> 1 Adelie 152\n#> 2 Chinstrap 68\n#> 3 Gentoo 124\npenguins %>% \n group_by(species) %>% \n summarize(across(where(is.numeric), mean, na.rm = TRUE))\n#> # A tibble: 3 \xc3\x97 6\n#> species bill_length_mm bill_depth_mm flipper_length_mm body_mass_g year\n#> \n#> 1 Adelie 38.8 18.3 190. 3701. 2008.\n#> 2 Chinstrap 48.8 18.4 196. 3733. 2008.\n#> 3 Gentoo 47.5 15.0 217. 5076. 2008.\n```\n\nPenguins are fun to visualize! For example:\n\n\n\n\n\n## Artwork\n\nYou can download palmerpenguins art (useful for teaching with the data)\nin `vignette(""art"")`. If you use this artwork, please cite with:\n\xe2\x80\x9cArtwork by @allison_horst\xe2\x80\x9d.\n\n### Meet the Palmer penguins\n\n\n\n### Bill dimensions\n\nThe culmen is the upper ridge of a bird\xe2\x80\x99s bill. In the simplified\n`penguins` data, culmen length and depth are renamed as variables\n`bill_length_mm` and `bill_depth_mm` to be more intuitive.\n\nFor this penguin data, the culmen (bill) length and depth are measured\nas shown below (thanks Kristen Gorman for clarifying!):\n\n\n\n## License\n\nData are available by\n[CC-0](https://creativecommons.org/share-your-work/public-domain/cc0/)\nlicense in accordance with the [Palmer Station LTER Data\nPolicy](https://pallter.marine.rutgers.edu/data/) and the [LTER Data\nAccess Policy for Type I data](https://lternet.edu/data-access-policy/).\n\n## Citation\n\nTo cite the palmerpenguins package, please use:\n\n``` r\ncitation(""palmerpenguins"")\n#> \n#> To cite palmerpenguins in publications use:\n#> \n#> Horst AM, Hill AP, Gorman KB (2020). palmerpenguins: Palmer\n#> Archipelago (Antarctica) penguin data. R package version 0.1.0.\n#> https://allisonhorst.github.io/palmerpenguins/. 
doi:\n#> 10.5281/zenodo.3960218.\n#> \n#> A BibTeX entry for LaTeX users is\n#> \n#> @Manual{,\n#> title = {palmerpenguins: Palmer Archipelago (Antarctica) penguin data},\n#> author = {Allison Marie Horst and Alison Presmanes Hill and Kristen B Gorman},\n#> year = {2020},\n#> note = {R package version 0.1.0},\n#> doi = {10.5281/zenodo.3960218},\n#> url = {https://allisonhorst.github.io/palmerpenguins/},\n#> }\n```\n\n## Additional data use information\n\nAnyone interested in publishing the data should contact [Dr.\xc2\xa0Kristen\nGorman](https://www.uaf.edu/cfos/people/faculty/detail/kristen-gorman.php)\nabout analysis and working together on any final products. From Gorman\net al.\xc2\xa0(2014): \xe2\x80\x9cIndividuals interested in using these data are expected\nto follow the US LTER Network\xe2\x80\x99s Data Access Policy, Requirements and Use\nAgreement: .\xe2\x80\x9d\n\n## References\n\n**Data originally published in:**\n\n- Gorman KB, Williams TD, Fraser WR (2014). Ecological sexual\n dimorphism and environmental variability within a community of\n Antarctic penguins (genus *Pygoscelis*). PLoS ONE 9(3):e90081.\n \n\n**Data citations:**\n\nAd\xc3\xa9lie penguins:\n\n- Palmer Station Antarctica LTER and K. Gorman, 2020. Structural size\n measurements and isotopic signatures of foraging among adult male\n and female Ad\xc3\xa9lie penguins (*Pygoscelis adeliae*) nesting along the\n Palmer Archipelago near Palmer Station, 2007-2009 ver 5.\n Environmental Data Initiative.\n \n (Accessed 2020-06-08).\n\nGentoo penguins:\n\n- Palmer Station Antarctica LTER and K. Gorman, 2020. Structural size\n measurements and isotopic signatures of foraging among adult male\n and female Gentoo penguin (*Pygoscelis papua*) nesting along the\n Palmer Archipelago near Palmer Station, 2007-2009 ver 5.\n Environmental Data Initiative.\n \n (Accessed 2020-06-08).\n\nChinstrap penguins:\n\n- Palmer Station Antarctica LTER and K. Gorman, 2020. 
Structural size\n measurements and isotopic signatures of foraging among adult male\n and female Chinstrap penguin (*Pygoscelis antarcticus*) nesting\n along the Palmer Archipelago near Palmer Station, 2007-2009 ver 6.\n Environmental Data Initiative.\n \n (Accessed 2020-06-08).\n'",",https://doi.org/10.5281/zenodo.3960218,https://doi.org/10.1371/journal.pone.0090081,https://doi.org/10.6073/pasta/98b16d7d563f265cb52372c8ca99e60f,https://doi.org/10.6073/pasta/7fca67fb28d56ee2ffa3d9370ebda689,https://doi.org/10.6073/pasta/c14dfcfada8ea13a17536e73eb6fbe9e","2020/06/05, 14:57:15",1237,CC0-1.0,0,209,"2022/08/17, 14:54:54",13,68,83,1,434,2,0.2,0.1842105263157895,"2020/07/25, 14:31:36",v0.1.0,0,5,false,,false,false,,,,,,,,,,, phenocamr,Facilitates the retrieval and post-processing of PhenoCam time series.,bluegreen-labs,https://github.com/bluegreen-labs/phenocamr.git,github,"phenocam,phenology-modelling,phenocam-data,remote-sensing",Terrestrial Animals,"2023/02/12, 15:24:08",20,0,2,true,R,BlueGreen Labs,bluegreen-labs,"R,CSS",http://bluegreen-labs.github.io/phenocamr/,"b'# phenocamr \n\n[![R-CMD-check](https://github.com/bluegreen-labs/phenocamr/actions/workflows/R-CMD-check.yaml/badge.svg)](https://github.com/bluegreen-labs/phenocamr/actions/workflows/R-CMD-check.yaml)\n[![codecov](https://codecov.io/gh/bluegreen-labs/phenocamr/branch/master/graph/badge.svg)](https://codecov.io/gh/bluegreen-labs/phenocamr)\n[![CRAN\\_Status\\_Badge](https://www.r-pkg.org/badges/version/phenocamr)](https://cran.r-project.org/package=phenocamr)\n![CRAN\\_Downloads](https://cranlogs.r-pkg.org/badges/grand-total/phenocamr)\n\nFacilitates the retrieval and post-processing of PhenoCam time series. The post-processing of PhenoCam data includes outlier removal and the generation of data products such as phenological transition dates. If requested, complementary [Daymet climate data](https://daymet.ornl.gov/) will be downloaded and merged with the PhenoCam data for modelling purposes. For a detailed overview of the assumptions made during post-processing I refer to the publications by Hufkens et al. (2018) and Richardson et al. (2018). Please cite the Hufkens et al. (2018) paper when using the package. A worked example is included below and in the package vignette.\n\n## Installation\n\n### stable release\n\nTo install the current stable release use a CRAN repository:\n\n```R\ninstall.packages(""phenocamr"")\nlibrary(phenocamr)\n```\n\n### development release\n\nTo install the development releases of the package run the following commands:\n\n```R\nif(!require(devtools)){install.packages(""devtools"")}\ndevtools::install_github(""bluegreen-labs/phenocamr"")\nlibrary(phenocamr)\n```\n\nVignettes are not rendered by default; if you want to include additional documentation, please use:\n\n```R\nif(!require(devtools)){install.packages(""devtools"")}\ndevtools::install_github(""bluegreen-labs/phenocamr"", build_vignettes = TRUE)\nlibrary(phenocamr)\n```\n\n## Use\n\nTo download data for a single deciduous broadleaf forest site (harvard) use the following syntax:\n\n```R\ndownload_phenocam(site = ""harvard"",\n veg_type = ""DB"",\n frequency = 3,\n phenophase = TRUE,\n out_dir = ""~"")\n```\n\nThis will download all deciduous broadleaf (DB) PhenoCam time series for the ""harvard"" site at a 3-day time step into your home directory. In addition, the data is processed to estimate phenological transition dates (phenophases) and written to file. 
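\n\nA minimal sketch of reading a downloaded file back for further analysis. `read_phenocam()` ships with phenocamr, but the exact file name used here is an assumption: it depends on the ROI id of the downloaded record (1000 in this sketch):\n\n```R\nlibrary(phenocamr)\n\n# read the 3-day Harvard time series written by the call above;\n# adjust the file name to match the ROI id of your download\ndf <- read_phenocam(file.path(""~"", ""harvard_DB_1000_3day.csv""))\n\n# the returned object bundles site metadata with the Gcc time series\nstr(df$data)\n```\n\n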
For a detailed overview of all functions and worked examples, we refer to the R help documentation and the manuscripts below.\n\n## References\n\nHufkens K., Basler J. D., Milliman T., Melaas E., Richardson A.D. 2018. [An integrated phenology modelling framework in R: Phenology modelling with phenor. Methods in Ecology & Evolution](https://onlinelibrary.wiley.com/doi/10.1111/2041-210X.12970/full), 9: 1-10.\n\nRichardson, A.D., Hufkens, K., Milliman, T., Aubrecht, D.M., Chen, M., Gray, J.M., Johnston, M.R., Keenan, T.F., Klosterman, S.T., Kosmala, M., Melaas, E.K., Friedl, M.A., Frolking, S. 2018. [Tracking vegetation phenology across diverse North American biomes using PhenoCam imagery](https://www.nature.com/articles/sdata201828). Scientific Data, 5, 180028.\n\n## Acknowledgements\n\nThis project was supported by the National Science Foundation’s Macrosystems Biology Program (awards EF-1065029 and EF-1702697). Logo design elements are taken from the FontAwesome library according to [these terms](https://fontawesome.com/license).\n\n'",,"2016/01/03, 11:53:00",2852,AGPL-3.0,4,291,"2023/04/28, 14:31:15",4,13,28,1,180,0,0.0,0.03474903474903479,"2020/07/18, 10:58:07",v1.1.4,0,3,false,,false,false,,,https://github.com/bluegreen-labs,http://bluegreenlabs.org,"Melsele, Belgium",,,https://avatars.githubusercontent.com/u/65854203?v=4,,, Annotation Interface for Data-driven Ecology,Tools for detecting wildlife in aerial images using active learning.,microsoft,https://github.com/microsoft/aerial_wildlife_detection.git,github,"aiforearth,wildlife,conservation,aerial-imagery,active-learning",Terrestrial Animals,"2022/07/26, 13:00:35",200,0,30,true,Python,Microsoft,microsoft,"Python,HTML,JavaScript,Shell,CSS,Dockerfile,PLpgSQL",,"b'# AIDE: Annotation Interface for Data-driven Ecology\n\nAIDE is two things in one: a tool for manually annotating images and a tool for training and running machine (deep) learning models. Those two things are coupled in an active learning loop: the human annotates a few images, the system trains a model, that model is used to make predictions and to select more images for the human to annotate, etc.\n \nMore generally, AIDE is a modular Web framework for labeling image datasets with AI assistance. AIDE is configurable for a variety of tasks, but it is particularly intended for ecological applications, such as accelerating wildlife surveys that use aerial images. \n\nAIDE is primarily developed by [Benjamin Kellenberger](https://bkellenb.github.io), supported by the [Microsoft AI for Earth](https://www.microsoft.com/en-us/ai/ai-for-earth) program.\n\n\n\n## Contents\n* [Highlights](#highlights)\n* [News](#news)\n* [Demo](#demo)\n* [Installation and launching AIDE](#installation-and-launching-aide)\n* [AI models in AIDE](#ai-models-in-aide)\n * [Built-in AI models](#built-in-ai-models)\n * [Writing your own AI model](#writing-your-own-ai-model)\n* [Publications and References](#publications-and-references)\n* [Contributing](#contributing)\n\n\n\n## Highlights\n\n* **Powerful:** AIDE explicitly integrates humans and AI models in an annotation loop.\n* **Fast:** AIDE has been designed with speed in mind, both in terms of computations and workflow.\n* **Flexible:** The framework allows full customizability, from hyperparameters to models to annotation types to libraries. 
It provides:\n * Support for image classification, point annotations, and bounding boxes (object detection)\n * Many deep learning-based AI models and Active Learning criteria built-in\n * Interfaces for custom AI models and criteria, using any framework or library you want (see how to [write your own model](doc/custom_model.md)).\n* **Fully featured:** Beyond image labeling and model training, AIDE has management and graphical user/machine performance evaluation tools built-in, right in the web browser, allowing for advanced, manual label quality checks.\n* **Modular:** AIDE is separated into individual _modules_, each of which can be run on separate machines for scalability. It even supports on-the-fly addition of computational workers for computationally intensive model training!\n\n![AIDE highlights](doc/figures/AIDE_workflow.png)\n\n\n\n## News\n\n### AIDE v2.1 is out\n\nAIDE v2.1 is out! This includes a new interactive installer for Debian/Ubuntu systems as well as a plethora of bug fixes.\n\n\n[Older news](doc/news.md)\n\n\n\n\n## Demo\n\nA demo of AIDE v2 can be accessed **[here](http://aidedemo.westeurope.cloudapp.azure.com:8080/)**.\n\nThis demo allows exploring the annotation front-end with a number of example datasets, including:\n* **[Image labels](http://aidedemo.westeurope.cloudapp.azure.com:8080/snapshot_serengeti/interface)** on the [Snapshot Serengeti camera traps dataset](http://lila.science/datasets/snapshot-serengeti)\n* **[Points](http://aidedemo.westeurope.cloudapp.azure.com:8080/vgg_penguins/interface)** on the [VGG Penguins dataset](http://www.robots.ox.ac.uk/~vgg/data/penguins/)\n* **[Bounding boxes](http://aidedemo.westeurope.cloudapp.azure.com:8080/arcticseals/interface)** on the [NOAA Arctic Seals aerial imagery](http://lila.science/datasets/arcticseals)\n* **[Semantic segmentation](http://aidedemo.westeurope.cloudapp.azure.com:8080/landcover/interface)** on the [Chesapeake Land Cover satellite imagery](http://lila.science/datasets/chesapeakelandcover)\n\n\n\n\n## Installation and launching AIDE\n\nSee [here](doc/install_overview.md).\n\n\n\n\n## AI models in AIDE\n\n### Built-in AI models\n\n\nAIDE ships with a set of built-in models that can be configured and customized:\n\n| Label type | AI model | Model variants / backbones | More info |\n|-|-|-|-|\n| Image labels | AlexNet | AlexNet | [paper](https://arxiv.org/abs/1404.5997) |\n| | DenseNet | DenseNet-161 | [paper](https://arxiv.org/abs/1608.06993) |\n| | MNASNet | MNASNet | [paper](https://arxiv.org/abs/1807.11626) |\n| | MobileNet | MobileNet V2 | [paper](https://arxiv.org/abs/1801.04381) |\n| | ResNet | ResNet-18; ResNet-34; ResNet-50; ResNet-101; ResNet-152 | [paper](https://arxiv.org/abs/1512.03385) |\n| | ResNeXt | ResNeXt-50; ResNeXt-101 | [paper](https://arxiv.org/abs/1611.05431) |\n| | ShuffleNet | ShuffleNet V2 | [paper](https://arxiv.org/abs/1807.11164) |\n| | SqueezeNet | SqueezeNet | [paper](https://arxiv.org/abs/1602.07360) |\n| | VGG | VGG-16 | [paper](https://arxiv.org/abs/1409.1556) |\n| | Wide ResNet | Wide ResNet-50; Wide ResNet-101 | [info](https://pytorch.org/vision/stable/models.html#wide-resnet) |\n| Bounding boxes | Faster R-CNN | with ResNet-50 (PASCAL VOC); with ResNet-50 (MS-COCO); with ResNeXt-101 FPN (MS-COCO) | [paper](https://arxiv.org/pdf/1506.01497.pdf), [implementation details](https://github.com/facebookresearch/detectron2/blob/master/MODEL_ZOO.md#faster-r-cnn) |\n| | RetinaNet | with ResNet-50 FPN (MS-COCO); with ResNet-101 FPN (MS-COCO) | 
[paper](https://openaccess.thecvf.com/content_ICCV_2017/papers/Lin_Focal_Loss_for_ICCV_2017_paper.pdf), [implementation details](https://github.com/facebookresearch/detectron2/blob/master/MODEL_ZOO.md#retinanet) |\n| | TridentNet | with ResNet-50; ResNet-101 (MS-COCO) | [paper](https://arxiv.org/abs/1901.01892), [implementation details](https://github.com/facebookresearch/detectron2/tree/master/projects/TridentNet) |\n| Segmentation masks | DeepLabV3+ | with modified ResNet-101 (Cityscapes) | [paper](http://openaccess.thecvf.com/content_ECCV_2018/papers/Liang-Chieh_Chen_Encoder-Decoder_with_Atrous_ECCV_2018_paper.pdf), [implementation details](https://github.com/facebookresearch/detectron2/tree/master/projects/DeepLab) |\n\n\nAll models can be configured in various ways through the AI model settings page in the Web browser. They are all pre-trained on [ImageNet](https://ieeexplore.ieee.org/document/5206848) unless specified otherwise.\nTo use one of the built-in models, simply import the requested one to your project through the Model Marketplace in the Web browser and start training/predicting!\n\n\n\n\n### Writing your own AI model\nAIDE is fully modular and supports custom AI models, as long as they provide a Python interface and can handle at least one of the different annotation and prediction types appropriately.\nWe greatly welcome contributions and are happy to help in the implementation of your custom models!\n\nSee [here](doc/custom_model.md) for instructions on implementing custom models into AIDE.\n\n\n\n## Publications and References\n\nPlease cite the following paper if you use AIDE in your work:\n\nKellenberger, Benjamin, Devis Tuia, and Dan Morris. ""AIDE: Accelerating image-based ecological surveys with interactive machine learning."" Methods in Ecology and Evolution 11(12), 1716-1727 (2020).\nDOI: [10.1111/2041-210X.13489](https://doi.org/10.1111/2041-210X.13489).\n\n```BibTeX\n@article{kellenberger2020aide,\n title={AIDE: Accelerating image-based ecological surveys with interactive machine learning},\n author={Kellenberger, Benjamin and Tuia, Devis and Morris, Dan},\n journal={Methods in Ecology and Evolution},\n volume={11},\n number={12},\n pages={1716--1727},\n year={2020},\n publisher={Wiley Online Library}\n}\n```\n\n\n\nIf you use AIDE, we would be happy to hear from you! Please send us an [E-mail](mailto:benjamin.kellenberger@epfl.ch) with a little bit of info about your use case; besides getting to know the fellow usership of our software, this also enables us to provide somewhat more tailored support for you if needed. \nThank you very much.\n\n\n## Contributing\n\nThis project welcomes contributions and suggestions. Most contributions require you to agree to a\nContributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us\nthe rights to use your contribution. For details, visit https://cla.opensource.microsoft.com.\n\nWhen you submit a pull request, a CLA bot will automatically determine whether you need to provide\na CLA and decorate the PR appropriately (e.g., status check, comment). Simply follow the instructions\nprovided by the bot. 
You will only need to do this once across all repos using our CLA.\n\nThis project has adopted the [Microsoft Open Source Code of Conduct](https://opensource.microsoft.com/codeofconduct/).\nFor more information see the [Code of Conduct FAQ](https://opensource.microsoft.com/codeofconduct/faq/) or\ncontact [opencode@microsoft.com](mailto:opencode@microsoft.com) with any additional questions or comments.\n'",",https://arxiv.org/abs/1404.5997,https://arxiv.org/abs/1608.06993,https://arxiv.org/abs/1807.11626,https://arxiv.org/abs/1801.04381,https://arxiv.org/abs/1512.03385,https://arxiv.org/abs/1611.05431,https://arxiv.org/abs/1807.11164,https://arxiv.org/abs/1602.07360,https://arxiv.org/abs/1409.1556,https://arxiv.org/pdf/1506.01497.pdf,https://arxiv.org/abs/1901.01892,https://doi.org/10.1111/2041-210X.13489","2019/07/18, 23:58:41",1560,MIT,0,773,"2022/09/21, 16:15:20",25,15,38,1,399,3,0.0,0.1885714285714286,"2021/11/19, 10:49:33",v2.2,0,8,false,,true,false,,,https://github.com/microsoft,https://opensource.microsoft.com,"Redmond, WA",,,https://avatars.githubusercontent.com/u/6154722?v=4,,, bioRad,R package for analysis and visualisation of biological signals in weather radar data.,adokter,https://github.com/adokter/bioRad.git,github,"r,package,radar,aeroecology,movement-ecology,enram,nexrad,eumetnet-opera,lifewatch,wsr-88d,weather-radar,oscibio",Terrestrial Animals,"2023/10/20, 15:47:09",26,0,4,true,R,,,"R,Rez",http://adokter.github.io/bioRad,"b'\n\n\n# bioRad \n\n\n\n[![CRAN\nstatus](https://www.r-pkg.org/badges/version/bioRad)](https://cran.r-project.org/package=bioRad)\n[![R-CMD-check](https://github.com/adokter/bioRad/actions/workflows/R-CMD-check.yaml/badge.svg)](https://github.com/adokter/bioRad/actions/workflows/R-CMD-check.yaml)\n[![repo\nstatus](https://www.repostatus.org/badges/latest/active.svg)](https://www.repostatus.org/#active)\n[![codecov](https://codecov.io/gh/adokter/bioRad/branch/master/graph/badge.svg?token=pDmyO4JVJu)](https://app.codecov.io/gh/adokter/bioRad)\n[![DOI](https://zenodo.org/badge/DOI/10.5281/zenodo.3370004.svg)](https://doi.org/10.5281/zenodo.3370004)\n\n\nbioRad provides standardized methods for extracting and reporting\nbiological signals from weather radars. 
It includes functionality to\ninspect low-level radar data, process these data into meaningful\nbiological information on animal speeds and directions at different\naltitudes in the atmosphere, visualize these biological extractions, and\ncalculate further summary statistics.\n\nTo get started, see:\n\n- [Dokter et al. (2019)](https://doi.org/10.1111/ecog.04028): a paper\n describing the package.\n- [bioRad\n vignette](https://adriaandokter.com/bioRad/articles/bioRad.html): an\n introduction to bioRad’s main functionalities.\n- [Function\n reference](https://adriaandokter.com/bioRad/reference/index.html):\n an overview of all bioRad functions.\n- [Introductory\n exercises](https://adriaandokter.com/bioRad/articles/rad_aero_19.html):\n a tutorial with code examples and exercises.\n\nMore vignettes:\n\n- [Range\n correction](https://adriaandokter.com/bioRad/articles/range_correction.html):\n estimate spatial images of vertically integrated density corrected\n for range effects.\n\nDocumentation for the latest development version can be found\n[here](https://adriaandokter.com/bioRad/dev/).\n\n## Installation\n\n### Install system libraries\n\nOn OS X and Linux, the GNU Scientific Library (GSL), PROJ and HDF5\nlibraries need to be installed as system libraries prior to\ninstallation; they are required by the dependency package\n**[vol2birdR](https://adriaandokter.com/vol2birdR/)**:\n\n| System | Command |\n|:--------------------------------------------|:------------------------------------------------------------------|\n| **OS X (using Homebrew)** | `brew install hdf5 proj gsl` |\n| **Debian-based systems (including Ubuntu)** | `sudo apt-get install libhdf5-dev libproj-dev gsl-bin libgsl-dev` |\n| **Systems supporting yum and RPMs** | `sudo yum install hdf5-devel proj-devel gsl gsl-devel` |\n\n
\n\nAdditional required system libraries on Linux (Ubuntu)\n\n\nThe following system libraries are required before installing bioRad on\nLinux systems. In a terminal, install these with:\n\n sudo apt install libcurl4-openssl-dev\n sudo apt install libssl-dev\n sudo apt install libgdal-dev\n\n
\n\n
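A quick way to surface a missing system library before installing bioRad itself (a sketch, not an official step from this README) is to install and load the vol2birdR dependency on its own, since that is the package that links against GSL, PROJ and HDF5:

``` r
# Sketch: install the dependency that links against the system libraries
# first, so a missing GSL/PROJ/HDF5 shows up before the full bioRad install.
install.packages("vol2birdR")
library(vol2birdR)  # loading fails here if a system library is missing
```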
\n\n### Install bioRad\n\nYou can install the released version of bioRad from\n[CRAN](https://CRAN.R-project.org) with:\n\n``` r\ninstall.packages(""bioRad"")\n```\n\nAlternatively, you can install the latest development version from\n[GitHub](https://github.com/adokter/bioRad) with:\n\n``` r\n# install.packages(""devtools"")\ndevtools::install_github(""adokter/bioRad"")\n```\n\nThen load the package with:\n\n``` r\nlibrary(bioRad)\n#> Welcome to bioRad version 0.7.3\n#> using vol2birdR version 1.0.1 (MistNet installed)\n```\n\n### (optional) Enable MistNet\n\nTo enable MistNet, the following vol2birdR commands should be executed:\n\n``` r\nvol2birdR::install_mistnet()\nvol2birdR::install_mistnet_model()\n```\n\nRead the [vol2birdR\ndocumentation](https://adriaandokter.com/vol2birdR/articles/vol2birdR.html)\nfor more details.\n\n## Usage\n\n### Radar data example\n\nbioRad can read weather radar data (= polar volumes) in the\n[`ODIM`](http://eumetnet.eu/wp-content/uploads/2017/01/OPERA_hdf_description_2014.pdf)\nformat and formats supported by the [RSL\nlibrary](https://trmm-fc.gsfc.nasa.gov/trmm_gv/software/rsl/), such as\nNEXRAD data. NEXRAD data (US) are [available as open\ndata](https://www.ncdc.noaa.gov/nexradinv/) and on\n[AWS](https://registry.opendata.aws/noaa-nexrad/).\n\nHere we read an example polar volume data file with `read_pvolfile()`,\nextract the scan/sweep at elevation angle 3 with `get_scan()`, project\nthe data to a plan position indicator with `project_as_ppi()` and plot\nthe *radial velocity* of detected targets with `plot()`:\n\n``` r\nlibrary(tidyverse) # To pipe %>% the steps below\nsystem.file(""extdata"", ""volume.h5"", package = ""bioRad"") %>%\n read_pvolfile() %>%\n get_scan(3) %>%\n project_as_ppi() %>%\n plot(param = ""VRADH"") # VRADH = radial velocity in m/s\n```\n\n\n\n*Radial velocities towards the radar are negative, while radial\nvelocities away from the radar are positive, so in this plot there is\nmovement from the top right to the bottom left.*\n\n### Vertical profile data example\n\nWeather radar data can be processed into vertical profiles of biological\ntargets using `calculate_vp()`. This type of data is [available as open\ndata](https://aloftdata.eu) for over 100 European weather radars.\n\nOnce vertical profile data are loaded into bioRad, these can be bound\ninto time series using `bind_into_vpts()`. Here we read an example time\nseries, project it on a regular time grid with `regularize_vpts()` and\nplot it with `plot()`:\n\n``` r\nexample_vpts %>%\n regularize_vpts() %>%\n plot()\n```\n\n\n\n*The gray bars in the plot indicate gaps in the data.*\n\nThe altitudes in the profile can be integrated with\n`integrate_profile()` resulting in a dataframe with rows for datetimes\nand columns for quantities. 
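As a sketch of the kind of call this enables, the integration can be restricted to a single altitude band; the `alt_min`/`alt_max` arguments are assumptions taken from the function's documentation rather than from this README:

``` r
# Sketch: integrate only the 200-2000 m altitude band of the example
# time series; the result has one row per datetime.
vpi_band <- integrate_profile(example_vpts, alt_min = 200, alt_max = 2000)
head(vpi_band)
```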
Here we plot the quantity *migration traffic\nrate* (column `mtr`) with `plot()`:\n\n``` r\nmy_vpi <- integrate_profile(example_vpts)\n\nplot(my_vpi, quantity = ""mtr"") # mtr = migration traffic rate\n```\n\n\n\nTo know the total number of birds passing over the radar during the full\ntime series, we use the last value of the *cumulative migration traffic*\n(column `mt`):\n\n``` r\nmy_vpi %>%\n pull(mt) %>% # Extract column mt as a vector\n last()\n#> [1] 129491.5\n```\n\nFor more exercises, see [this\ntutorial](https://adriaandokter.com/bioRad/articles/rad_aero_19.html).\n\n## Meta\n\n- We welcome\n [contributions](https://adriaandokter.com/bioRad/CONTRIBUTING.html)\n including bug reports.\n- License: MIT\n- Get citation information for `bioRad` in R doing\n `citation(""bioRad"")`.\n- Please note that this project is released with a [Contributor Code\n of Conduct](https://adriaandokter.com/bioRad/CODE_OF_CONDUCT.html).\n By participating in this project you agree to abide by its terms.\n'",",https://doi.org/10.5281/zenodo.3370004,https://doi.org/10.1111/ecog.04028","2016/05/24, 15:49:06",2710,CUSTOM,767,2966,"2023/10/20, 15:42:08",60,267,579,110,5,0,1.1,0.5905349794238683,"2023/10/20, 15:41:36",0.7.3,0,13,false,,true,true,,,,,,,,,,, MegaDetector,Deep learning tools that accelerate the review of motion-triggered wildlife camera images.,microsoft,https://github.com/microsoft/CameraTraps.git,github,"camera-traps,aiforearth,wildlife,conservation,machine-learning,computer-vision",Terrestrial Animals,"2023/09/26, 19:44:41",599,0,115,true,Jupyter Notebook,Microsoft,microsoft,"Jupyter Notebook,Python,C#,Shell,HTML,Starlark,Dockerfile,Batchfile",https://repos.opensource.microsoft.com/microsoft/wizard?existingreponame=CameraTraps&existingrepoid=152634113,"b'# Announcement\n\nMicrosoft is working on expanding the CameraTraps repo. Our commitment remains firm in providing support and ensuring the continued maintenance of MegaDetector, serving the community and extending its benefits even further. \n\n# Overview\n\nThis repo contains the tools for training, running, and evaluating detectors and classifiers for images collected from motion-triggered wildlife cameras. The core functionality provided is:\n\n- Training and running models, particularly [MegaDetector](megadetector.md), an object detection model that does a pretty good job finding animals, people, and vehicles (and therefore is pretty good at finding empty images) in a variety of terrestrial ecosystems\n- Data parsing from frequently-used camera trap metadata formats into a common format\n- A [batch processing API](https://github.com/ecologize/CameraTraps/tree/main/api/batch_processing) that runs MegaDetector on large image collections, to accelerate population surveys\n- A [real-time API](https://github.com/ecologize/CameraTraps/tree/main/api/synchronous) that runs MegaDetector (and some species classifiers) synchronously, primarily to support anti-poaching scenarios (e.g. see this [blog post](https://customers.microsoft.com/en-us/story/1384184517929343083-wildlife-protection-solutions-nonprofit-ai-for-earth) describing how this API supports [Wildlife Protection Solutions](https://wildlifeprotectionsolutions.org/))\n\nThis repo is maintained by folks at [Ecologize](http://ecologize.org/) who like looking at pictures of animals. 
We want to support conservation, of course, but we also really like looking at pictures of animals.\n\n\n# What\'s MegaDetector all about?\n\nThe main model that we train and run using tools in this repo is [MegaDetector](megadetector.md), an object detection model that identifies animals, people, and vehicles in camera trap images. This model is trained on several hundred thousand bounding boxes from a variety of ecosystems. Lots more information – including download links and instructions for running the model – is available on the [MegaDetector page](megadetector.md).\n\nHere\'s a ""teaser"" image of what detector output looks like:\n\n![Red bounding box on fox](images/detector_example.jpg)\n\nImage credit University of Washington.\n\n\n# How do I get started?\n\nIf you\'re just considering the use of AI in your workflow, and you aren\'t even sure yet whether MegaDetector would be useful to you, we recommend reading the ""[getting started with MegaDetector](collaborations.md)"" page.\n\nIf you\'re already familiar with MegaDetector and you\'re ready to run it on your data (and you have some familiarity with running Python code), see the [MegaDetector README](megadetector.md) for instructions on downloading and running MegaDetector.\n\n\n# Who is using MegaDetector?\n\nWe work with ecologists all over the world to help them spend less time annotating images and more time thinking about conservation. You can read a little more about how this works on our [getting started with MegaDetector](collaborations.md) page.\n\nHere are a few of the organizations that have used MegaDetector... we\'re only listing organizations that (a) we know about and (b) have given us permission to refer to them here (or have posted publicly about their use of MegaDetector), so if you\'re using MegaDetector or other tools from this repo and would like to be added to this list, email us!\n\n* [Arizona Department of Environmental Quality](http://azdeq.gov/)\n* [Blackbird Environmental](https://blackbirdenv.com/)\n* [Camelot](https://camelotproject.org/)\n* [Canadian Parks and Wilderness Society (CPAWS) Northern Alberta Chapter](https://cpawsnab.org/)\n* [Conservation X Labs](https://conservationxlabs.com/)\n* [Czech University of Life Sciences Prague](https://www.czu.cz/en)\n* [EcoLogic Consultants Ltd.](https://www.consult-ecologic.com/)\n* [Estación Biológica de Doñana](http://www.ebd.csic.es/inicio)\n* [Idaho Department of Fish and Game](https://idfg.idaho.gov/)\n* [Island Conservation](https://www.islandconservation.org/)\n* [Myall Lakes Dingo Project](https://carnivorecoexistence.info/myall-lakes-dingo-project/)\n* [Point No Point Treaty Council](https://pnptc.org/)\n* [Ramat Hanadiv Nature Park](https://www.ramat-hanadiv.org.il/en/)\n* [SPEA (Portuguese Society for the Study of Birds)](https://spea.pt/en/)\n* [Synthetaic](https://www.synthetaic.com/)\n* [Taronga Conservation Society](https://taronga.org.au/)\n* [The Nature Conservancy in Wyoming](https://www.nature.org/en-us/about-us/where-we-work/united-states/wyoming/)\n* [TrapTagger](https://wildeyeconservation.org/trap-tagger-about/)\n* [Upper Yellowstone Watershed Group](https://www.upperyellowstone.org/)\n\n* [Applied Conservation Macro Ecology Lab](http://www.acmelab.ca/), University of Victoria\n* [Banff National Park Resource Conservation](https://www.pc.gc.ca/en/pn-np/ab/banff/nature/conservation), Parks Canada\n* [Blumstein Lab](https://blumsteinlab.eeb.ucla.edu/), UCLA\n* [Borderlands Research 
Institute](https://bri.sulross.edu/), Sul Ross State University\n* [Capitol Reef National Park](https://www.nps.gov/care/index.htm) / Utah Valley University\n* [Center for Biodiversity and Conservation](https://www.amnh.org/research/center-for-biodiversity-conservation), American Museum of Natural History\n* [Centre for Ecosystem Science](https://www.unsw.edu.au/research/), UNSW Sydney\n* [Cross-Cultural Ecology Lab](https://crossculturalecology.net/), Macquarie University\n* [DC Cat Count](https://hub.dccatcount.org/), led by the Humane Rescue Alliance\n* [Department of Fish and Wildlife Sciences](https://www.uidaho.edu/cnr/departments/fish-and-wildlife-sciences), University of Idaho\n* [Department of Wildlife Ecology and Conservation](https://wec.ifas.ufl.edu/), University of Florida\n* [Ecology and Conservation of Amazonian Vertebrates Research Group](https://www.researchgate.net/lab/Fernanda-Michalski-Lab-4), Federal University of Amapá\n* [Gola Forest Programme](https://www.rspb.org.uk/our-work/conservation/projects/scientific-support-for-the-gola-forest-programme/), Royal Society for the Protection of Birds (RSPB)\n* [Graeme Shannon\'s Research Group](https://wildliferesearch.co.uk/group-1), Bangor University \n* [Hamaarag](https://hamaarag.org.il/), The Steinhardt Museum of Natural History, Tel Aviv University\n* [Institut des Sciences de la Forêt Tempérée](https://isfort.uqo.ca/) (ISFORT), Université du Québec en Outaouais\n* [Lab of Dr. Bilal Habib](https://bhlab.in/about), the Wildlife Institute of India\n* [Mammal Spatial Ecology and Conservation Lab](https://labs.wsu.edu/dthornton/), Washington State University\n* [McLoughlin Lab in Population Ecology](http://mcloughlinlab.ca/lab/), University of Saskatchewan\n* [National Wildlife Refuge System, Southwest Region](https://www.fws.gov/about/region/southwest), U.S. Fish & Wildlife Service\n* [Northern Great Plains Program](https://nationalzoo.si.edu/news/restoring-americas-prairie), Smithsonian\n* [Quantitative Ecology Lab](https://depts.washington.edu/sefsqel/), University of Washington\n* [Santa Monica Mountains Recreation Area](https://www.nps.gov/samo/index.htm), National Park Service\n* [Seattle Urban Carnivore Project](https://www.zoo.org/seattlecarnivores), Woodland Park Zoo\n* [Serra dos Órgãos National Park](https://www.icmbio.gov.br/parnaserradosorgaos/), ICMBio\n* [Snapshot USA](https://emammal.si.edu/snapshot-usa), Smithsonian\n* [Wildlife Coexistence Lab](https://wildlife.forestry.ubc.ca/), University of British Columbia\n* [Wildlife Research](https://www.dfw.state.or.us/wildlife/research/index.asp), Oregon Department of Fish and Wildlife\n* [Wildlife Division](https://www.michigan.gov/dnr/about/contact/wildlife), Michigan Department of Natural Resources\n\n* Department of Ecology, TU Berlin\n* Ghost Cat Analytics\n* Protected Areas Unit, Canadian Wildlife Service\n\n* [School of Natural Sciences](https://www.utas.edu.au/natural-sciences), University of Tasmania ([story](https://www.utas.edu.au/about/news-and-stories/articles/2022/1204-innovative-camera-network-keeps-close-eye-on-tassie-wildlife))\n* [Kenai National Wildlife Refuge](https://www.fws.gov/refuge/kenai), U.S. 
Fish & Wildlife Service ([story](https://www.peninsulaclarion.com/sports/refuge-notebook-new-technology-increases-efficiency-of-refuge-cameras/))\n\n* [Australian Wildlife Conservancy](https://www.australianwildlife.org/) ([blog](https://www.australianwildlife.org/cutting-edge-technology-delivering-efficiency-gains-in-conservation/), [blog](https://www.australianwildlife.org/efficiency-gains-at-the-cutting-edge-of-technology/))\n* [Felidae Conservation Fund](https://felidaefund.org/) ([WildePod platform](https://wildepod.org/)) ([blog post](https://abhaykashyap.com/blog/ai-powered-camera-trap-image-annotation-system/))\n* [Alberta Biodiversity Monitoring Institute (ABMI)](https://www.abmi.ca/home.html) ([WildTrax platform](https://www.wildtrax.ca/)) ([blog post](https://wildcams.ca/blog/the-abmi-visits-the-zoo/))\n* [Shan Shui Conservation Center](http://en.shanshui.org/) ([blog post](https://mp.weixin.qq.com/s/iOIQF3ckj0-rEG4yJgerYw?fbclid=IwAR0alwiWbe3udIcFvqqwm7y5qgr9hZpjr871FZIa-ErGUukZ7yJ3ZhgCevs)) ([translated blog post](https://mp-weixin-qq-com.translate.goog/s/iOIQF3ckj0-rEG4yJgerYw?fbclid=IwAR0alwiWbe3udIcFvqqwm7y5qgr9hZpjr871FZIa-ErGUukZ7yJ3ZhgCevs&_x_tr_sl=auto&_x_tr_tl=en&_x_tr_hl=en&_x_tr_pto=wapp))\n* [Irvine Ranch Conservancy](http://www.irconservancy.org/) ([story](https://www.ocregister.com/2022/03/30/ai-software-is-helping-researchers-focus-on-learning-about-ocs-wild-animals/))\n* [Wildlife Protection Solutions](https://wildlifeprotectionsolutions.org/) ([story](https://customers.microsoft.com/en-us/story/1384184517929343083-wildlife-protection-solutions-nonprofit-ai-for-earth), [story](https://www.enterpriseai.news/2023/02/20/ai-helps-wildlife-protection-solutions-safeguard-endangered-species/))\n\n* [Road Ecology Center](https://roadecology.ucdavis.edu/), University of California, Davis ([Wildlife Observer Network platform](https://wildlifeobserver.net/))\n* [The Nature Conservancy in California](https://www.nature.org/en-us/about-us/where-we-work/united-states/california/) ([Animl platform](https://github.com/tnc-ca-geo/animl-frontend))\n* [San Diego Zoo Wildlife Alliance](https://science.sandiegozoo.org/) ([Animl R package](https://github.com/conservationtechlab/animl))\n\n\n\n\n# Data\n\nThis repo does not directly host camera trap data, but we work with our collaborators to make data and annotations available whenever possible on [lila.science](http://lila.science).\n\n\n# Contact\n\nFor questions about this repo, contact [cameratraps@lila.science](mailto:cameratraps@lila.science).\n\n\n# Contents\n\nThis repo is organized into the following folders...\n\n\n## api\n\nCode for hosting our models as an API, either for synchronous operation (i.e., for real-time inference) or as a batch process (for large biodiversity surveys). Common operations one might do after running MegaDetector – e.g. 
[generating preview pages to summarize your results](https://github.com/ecologize/CameraTraps/blob/main/api/batch_processing/postprocessing/postprocess_batch_results.py), [separating images into different folders based on AI results](https://github.com/ecologize/CameraTraps/blob/main/api/batch_processing/postprocessing/separate_detections_into_folders.py), or [converting results to a different format](https://github.com/ecologize/CameraTraps/blob/main/api/batch_processing/postprocessing/convert_output_format.py) – also live in this folder, within the [api/batch_processing/postprocessing](https://github.com/ecologize/CameraTraps/tree/main/api/batch_processing/postprocessing) folder.\n\n\n## classification\n\nExperimental code for training species classifiers on new data sets, generally trained on MegaDetector crops. Currently the main pipeline described in this folder relies on a large database of labeled images that is not publicly available; therefore, this folder is not yet set up to facilitate training of your own classifiers. However, it is useful for users of the classifiers that we train, and contains some useful starting points if you are going to take a ""DIY"" approach to training classifiers on cropped images. \n\nAll that said, here\'s another ""teaser image"" of what you get at the end of training and running a classifier:\n\n\n\n\n## data_management\n\nCode for:\n\n* Converting frequently-used metadata formats to [COCO Camera Traps](https://github.com/ecologize/CameraTraps/blob/main/data_management/README.md#coco-cameratraps-format) format\n* Converting the output of AI models (especially [YOLOv5](https://github.com/ecologize/CameraTraps/blob/main/api/batch_processing/postprocessing/convert_output_format.py)) to the format used for AI results throughout this repo\n* Creating, visualizing, and editing COCO Camera Traps .json databases\n\n\n## detection\n\nCode for training, running, and evaluating MegaDetector.\n\n\n## research\n\nOngoing research projects that use this repository in one way or another; as of the time I\'m editing this README, there are projects in this folder around active learning and the use of simulated environments for training data augmentation.\n\n\n## sandbox\n\nRandom things that don\'t fit in any other directory. For example:\n\n* A not-super-useful but super-duper-satisfying and mostly-successful attempt to use OCR to pull metadata out of image pixels in a fairly generic way, to handle those pesky cases when image metadata is lost.\n* Experimental postprocessing scripts that were built for a single use case\n\n\n## taxonomy-mapping\n\nCode to facilitate mapping data-set-specific categories (e.g. ""lion"", which means very different things in Idaho vs. 
South Africa) to a standard taxonomy.\n\n\n## test-images\n\nA handful of images from [LILA](https://lila.science) that facilitate testing and debugging.\n\n\n## visualization\n\nShared tools for visualizing images with ground truth and/or predicted annotations.\n\n\n# Gratuitous pretty camera trap picture\n\n![Bird flying above water](images/nacti.jpg)\n\nImage credit USDA, from the [NACTI](http://lila.science/datasets/nacti) data set.\n\n\n## License\n\nThis repository is licensed with the [MIT license](https://github.com/Microsoft/dotnet/blob/main/LICENSE).\n'",,"2018/10/11, 18:02:42",1840,MIT,167,3432,"2023/09/26, 19:44:41",18,269,344,29,29,8,1.1,0.45634629493763756,"2022/06/20, 19:41:18",v5.0,2,32,false,,true,true,,,https://github.com/microsoft,https://opensource.microsoft.com,"Redmond, WA",,,https://avatars.githubusercontent.com/u/6154722?v=4,,, ebirdst,Access and Analyze eBird Status and Trends Data.,ebird,https://github.com/ebird/ebirdst.git,github,,Terrestrial Animals,"2023/10/16, 22:29:15",13,0,13,true,R,eBird Status & Trends,ebird,"R,TeX",https://ebird.github.io/ebirdst/,"b'\n\n\n# ebirdst: Access and Analyze eBird Status and Trends Data\n\n\n\n[![License: GPL\nv3](https://img.shields.io/badge/License-GPL%20v3-blue.svg)](http://www.gnu.org/licenses/gpl-3.0)\n[![R-CMD-check](https://github.com/ebird/ebirdst/workflows/R-CMD-check/badge.svg)](https://github.com/ebird/ebirdst/actions)\n[![CRAN\nstatus](https://www.r-pkg.org/badges/version/ebirdst)](https://cran.r-project.org/package=ebirdst)\n\n\n## Overview\n\nThe [eBird Status and Trends\nproject](https://science.ebird.org/en/status-and-trends) at the\n[Cornell Lab of Ornithology](https://www.birds.cornell.edu/home) uses\nmachine-learning models to produce estimates of range boundaries,\noccurrence rate, and relative abundance at high spatial and temporal\nresolution across the full annual cycle of 2,282 bird species globally.\nThese models learn the relationships between bird observations collected\nthrough [eBird](https://ebird.org/home) and a suite of remotely sensed\nhabitat variables, while accounting for the noise and bias inherent in\ncommunity science datasets, including variation in observer behavior and\neffort. Interactive maps and visualizations of these model estimates can\nbe explored [online](https://science.ebird.org/en/status-and-trends),\nand the [Status and Trends Data\nProducts](https://science.ebird.org/en/status-and-trends/download-data)\nprovide access to the data behind these maps and visualizations. The\n`ebirdst` R package provides a set of tools for downloading these data\nproducts, loading them into R, and using them for visualization and\nanalysis.\n\n## Installation\n\nInstall `ebirdst` from GitHub with:\n\n``` r\nif (!requireNamespace(""remotes"", quietly = TRUE)) {\n install.packages(""remotes"")\n}\nremotes::install_github(""ebird/ebirdst"")\n```\n\nThis version of `ebirdst` is designed to work with the eBird Status Data\nProducts estimated for the year 2021, with visualizations being released\non the web in November 2021, and data access being made available in\nJune 2022. 
**Users are strongly discouraged from comparing Status and\nTrends results between years due to methodological differences between\nversions.** If you have accessed and used previous versions and/or may\nneed access to previous versions for reasons related to reproducibility,\nplease contact and your request will be considered.\n\n## Transition from `raster` to `terra`\n\nThe majority of the eBird Status and Trends data products are raster\nspatial data stored in GeoTIFF format. Traditionally, R users have used\nthe [raster](https://rspatial.org/raster/) package to work with data in\nthis format and `ebirdst` used `raster` extensively. However, `raster`\nhas been replaced by the [terra](https://rspatial.org/index.html)\npackage, which offers a much improved and more efficient set of tools\nfor working with raster data. With this in mind, `ebirdst` has\ntransitioned to using `terra` rather than `raster` as of version\n`2.2021.0`. This change may break your existing code, and you have three\noptions for dealing with this:\n\n1. **Transition all code to `terra` and use the most recent version of\n `ebirdst`**. This approach is **strongly encouraged** because\n `terra` is much more efficient than `raster` and `raster` is no\n longer being actively developed. Consult the documentation for\n `terra` to learn how to use it.\n2. Install an older version of `ebirdst` that still uses `raster`. For\n example, you can use\n `remotes::install_version(""ebirdst"", version = ""1.2021.3"")`.\n3. Convert `terra` objects to `raster` objects. In the current version\n of `ebirdst`, `load_raster()` returns objects in `SpatRaster`\n format; you can convert those for use with `raster` using the\n function `raster::raster()`.\n\n## Data access\n\nData access is granted through an Access Request Form at:\n. Access with this form generates a key to\nbe used with this R package and is provided immediately (as long as\ncommercial use is not requested). Our terms of use have been designed to\nbe quite permissive in many cases, particularly academic and research\nuse. When requesting data access, please be sure to carefully read the\nterms of use and ensure that your intended use is not restricted.\n\nAfter completing the Access Request Form, you will be provided a Status\nand Trends access key, which you will need when downloading data. To\nstore the key so the package can access it when downloading data, use\nthe function `set_ebirdst_access_key(""XXXXX"")`, where `""XXXXX""` is the\naccess key provided to you. **Restart R after setting the access key.**\n\n**For those interested in accessing these data outside of R**, the most\nwidely used data products are available for direct download through the\n[Status and Trends\nwebsite](https://science.ebird.org/en/status-and-trends). Spatial data\nare accessible in widely adopted GeoTIFF and GeoPackage formats, which\ncan be opened in QGIS, ArcGIS, or other GIS software.\n\n## Versions\n\nThe Status and Trends Data Products provide estimates of relative\nabundance, and other variables, for a particular year. This estimation\nyear is used to identify the version of the data products. Each version\nof this R package is associated with a particular version of the data.\nFor example, the current version of the R package is 2.2021.4 and, as\nindicated by the year in the version number, it is designed to work with\nthe 2021 data products. 
Every year, typically in November, the Status\nand Trends Data Products are updated, and users are encouraged to update\nthis R package and transition to using the new version of the data\nproducts. After the data products are updated, there will be a brief\nperiod where access to the previous version is also provided, allowing\nusers to finish any analyses with this previous version. If you intend\nto continue using the older data products during this period, you must\nnot update the R package.\n\n## Citation\n\nIf you use the eBird Status & Trends data please cite it with:\n\n
\nFink, D., T. Auer, A. Johnston, M. Strimas-Mackey, S. Ligocki, O.\nRobinson, W. Hochachka, L. Jaromczyk, A. Rodewald, C. Wood, I. Davies,\nA. Spencer. 2022. eBird Status and Trends, Data Version: 2021; Released:\n2022. Cornell Lab of Ornithology, Ithaca, New York.\nhttps://doi.org/10.2173/ebirdst.2021\n
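Note that none of the download examples in the sections below will work until the access key from the Data access section above has been stored. A minimal one-time setup sketch, where `"XXXXX"` is a placeholder for the key you were issued:

``` r
# one-time setup (sketch): store the access key so downloads can
# authenticate; "XXXXX" stands in for the key from the Access Request Form
library(ebirdst)
set_ebirdst_access_key("XXXXX")
# restart R after setting the key so it is picked up
```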
\n\n[Download\nBibTeX](https://raw.githubusercontent.com/ebird/ebirdst/main/ebirdst-citation.bib).\n\n## Vignettes\n\nFor full package documentation, including a series of vignettes covering\nthe full spectrum from introductory to advanced usage, please see the\npackage [website](https://ebird.github.io/ebirdst/). The available\nvignettes are:\n\n- [Introduction to eBird Status & Trends\n Data](https://ebird.github.io/ebirdst/articles/ebirdst.html): covers\n data access, available data products, and structure and format of data\n files.\n- [Working with Raster\n Data](https://ebird.github.io/ebirdst/articles/rasters.html): loading\n and analyzing the raster data products.\n\n## Quick Start\n\nThis quick start guide shows how to download data and plot abundance\nvalues similar to how they are plotted for the [eBird Status and Trends\nweekly abundance\nanimations](https://ebird.org/science/status-and-trends/yebsap/abundance-map-weekly).\nIn this guide, and throughout all package documentation, a simplified\nexample dataset is used consisting of Yellow-bellied Sapsucker in\nMichigan. For a full list of the species available for download, look at\nthe data frame `ebirdst_runs`, which is included in this package.\n\n**Important note: after downloading the results, do not change the file\nstructure.** All functionality in this package relies on the structure\ninherent in the delivered results. Changing the folder and file\nstructure will cause errors with this package.\n\n``` r\nlibrary(ebirdst)\nlibrary(terra)\nlibrary(sf)\nlibrary(fields)\nlibrary(rnaturalearth)\n\n# download example data, yellow-bellied sapsucker in michigan\npath <- ebirdst_download(species = ""example_data"")\n\n# load relative abundance raster stack with 52 layers, one for each week\nabd <- load_raster(path = path, resolution = ""lr"")\n\n# load species specific mapping parameters\npars <- load_fac_map_parameters(path)\n# custom coordinate reference system\ncrs <- st_crs(pars$custom_projection)\n# legend breaks\nbreaks <- pars$weekly_bins\n# legend labels for top, middle, and bottom\nlabels <- pars$weekly_labels\n\n# the date that each raster layer corresponds to is stored within the labels\nweeks <- parse_raster_dates(abd)\nprint(weeks)\n#> [1] ""2021-01-04"" ""2021-01-11"" ""2021-01-18"" ""2021-01-25"" ""2021-02-01"" ""2021-02-08"" ""2021-02-15""\n#> [8] ""2021-02-22"" ""2021-03-01"" ""2021-03-08"" ""2021-03-15"" ""2021-03-22"" ""2021-03-29"" ""2021-04-05""\n#> [15] ""2021-04-12"" ""2021-04-19"" ""2021-04-26"" ""2021-05-03"" ""2021-05-10"" ""2021-05-17"" ""2021-05-24""\n#> [22] ""2021-05-31"" ""2021-06-07"" ""2021-06-14"" ""2021-06-21"" ""2021-06-28"" ""2021-07-06"" ""2021-07-13""\n#> [29] ""2021-07-20"" ""2021-07-27"" ""2021-08-03"" ""2021-08-10"" ""2021-08-17"" ""2021-08-24"" ""2021-08-31""\n#> [36] ""2021-09-07"" ""2021-09-14"" ""2021-09-21"" ""2021-09-28"" ""2021-10-05"" ""2021-10-12"" ""2021-10-19""\n#> [43] ""2021-10-26"" ""2021-11-02"" ""2021-11-09"" ""2021-11-16"" ""2021-11-23"" ""2021-11-30"" ""2021-12-07""\n#> [50] ""2021-12-14"" ""2021-12-21"" ""2021-12-28""\n\n# select a week in the middle of the year\nabd <- abd[[26]]\n\n# project to species specific coordinates\n# the nearest neighbor method preserves cell values across projections\nabd_prj <- project(trim(abd), crs$wkt, method = ""near"")\n\n# get reference data from the rnaturalearth package\n# the example data currently shows only the US state of Michigan\nwh_states <- ne_states(country = c(""United States of America"", ""Canada""),\n returnclass = 
""sf"") %>% \n st_transform(crs = crs) %>% \n st_geometry()\n\n# start plotting\npar(mfrow = c(1, 1), mar = c(0, 0, 0, 0))\n\n# use raster bounding box to set the spatial extent for the plot\nbb <- st_as_sfc(st_bbox(trim(abd_prj)))\nplot(bb, col = ""white"", border = ""white"")\n# add background reference data\nplot(wh_states, col = ""#cfcfcf"", border = NA, add = TRUE)\n\n# plot zeroes as light gray\nplot(abd_prj, col = ""#e6e6e6"", maxpixels = ncell(abd_prj),\n axes = FALSE, legend = FALSE, add = TRUE)\n\n# define color palette\npal <- abundance_palette(length(breaks) - 1, ""weekly"")\n# plot abundance\nplot(abd_prj, col = pal, breaks = breaks, maxpixels = ncell(abd_prj),\n axes = FALSE, legend = FALSE, add = TRUE)\n\n# state boundaries\nplot(wh_states, add = TRUE, col = NA, border = ""white"", lwd = 1.5)\n\n# legend\nlabel_breaks <- seq(0, 1, length.out = length(breaks))\nimage.plot(zlim = c(0, 1), breaks = label_breaks, col = pal,\n smallplot = c(0.90, 0.93, 0.15, 0.85),\n legend.only = TRUE,\n axis.args = list(at = c(0, 0.5, 1), \n labels = round(labels, 2),\n cex.axis = 0.9, lwd.ticks = 0))\n```\n\n\n'",",https://doi.org/10.2173/ebirdst.2021,https://doi.org/10.2173/ebirdst.2021","2023/01/17, 20:30:28",281,GPL-3.0,29,29,"2023/10/17, 13:09:58",0,3,3,3,8,0,0.0,0.125,,,0,3,false,,true,true,,,https://github.com/ebird,https://science.ebird.org/en/status-and-trends,United States of America,,,https://avatars.githubusercontent.com/u/3780849?v=4,,, GeoPressureR,R package which help researchers construct the trajectory of a bird equiped with an atmospheric pressure sensor.,Rafnuss,https://github.com/Rafnuss/GeoPressureR.git,github,"r,era5,datalogger,bird,migration,windspeed,pressure-sensor,geolocator,tracker",Terrestrial Animals,"2023/10/25, 18:59:50",5,0,5,true,R,,,R,https://raphaelnussbaumer.com/GeoPressureR,"b'\n\n\n# GeoPressureR \n\n\n\n[![DOI](https://zenodo.org/badge/DOI/10.5281/zenodo.7754457.svg)](https://doi.org/10.5281/zenodo.7754457)\n[![R-CMD-check](https://github.com/Rafnuss/GeoPressureR/workflows/R-CMD-check/badge.svg)](https://github.com/Rafnuss/GeoPressureR/actions)\n[![pkgdown](https://github.com/Rafnuss/GeoPressureR/actions/workflows/pkgdown.yaml/badge.svg)](https://github.com/Rafnuss/GeoPressureR/actions/workflows/pkgdown.yaml)\n[![Codecov test\ncoverage](https://codecov.io/gh/Rafnuss/GeoPressureR/branch/master/graph/badge.svg)](https://app.codecov.io/gh/Rafnuss/GeoPressureR?branch=master)\n[![lint](https://github.com/Rafnuss/GeoPressureR/actions/workflows/lint.yaml/badge.svg)](https://github.com/Rafnuss/GeoPressureR/actions/workflows/lint.yaml)\n[![Project Status: Active \xe2\x80\x93 The project has reached a stable, usable\nstate and is being actively\ndeveloped.](https://www.repostatus.org/badges/latest/active.svg)](https://www.repostatus.org/#active)\n\n\nGeoPressureR is a R package which help researchers construct the\ntrajectory of a bird equiped with an atmospheric pressure sensor.\n\nThe package is a direct implementation of methods presented in\nNussbaumer et al.\xc2\xa0([2023a](https://doi.org/10.1111/2041-210X.14043)) and\nNussbaumer et al.\xc2\xa0([2023b](https://doi.org/10.1111/2041-210X.14082)).\n\n## Learn how to use GeoPressureR\n\nThe\n[GeoPressureManual](https://raphaelnussbaumer.com/GeoPressureManual/) is\na great place to start learning about how to use GeoPressureR.\n\nUsing the examples, this user guide takes you through each step of the\nanalysis in detail.\n\n

\n\n

\n\n## Start your own analysis\n\nOnce you are familiar with the workflow of the method and want to start\nyour own study, we suggest you use\n[GeoPressureTemplate](https://github.com/Rafnuss/GeoPressureTemplate), a\n[github template\nrepository](https://docs.github.com/en/repositories/creating-and-managing-repositories/creating-a-template-repository)\nwhich provides a standard code structure for your analysis.\n\nUsing this standardized code structure will help you in many respects:\ncode sharing and troubleshooting, data archiving, and reproducibility of\nyour work.\n\n## Cheatsheet\n\n\n\n \n\n## Contributions\n\nContributions to the code should follow the [Contributor Code of\nConduct](https://raphaelnussbaumer.com/GeoPressureR/CONTRIBUTING.html).\n'",",https://doi.org/10.5281/zenodo.7754457,https://doi.org/10.1111/2041-210X.14043,https://doi.org/10.1111/2041-210X.14082","2021/12/12, 16:20:00",682,GPL-3.0,276,677,"2023/10/25, 18:59:51",4,26,94,51,0,0,0.0,0.02003081664098616,"2023/09/15, 12:54:57",v3.1.0,0,3,false,,false,true,,,,,,,,,,, EcoAssist,An open-source application designed to streamline the work of ecologists dealing with camera trap images.,PetervanLunteren,https://github.com/PetervanLunteren/EcoAssist.git,github,"linux,macos,megadetector,python,windows,cameratraps,object-detection,yolov5,annotation-tool,deploy,machine-learning,train,conservation,ecology",Terrestrial Animals,"2023/10/25, 10:58:15",71,0,56,true,Python,,,"Python,Shell,Batchfile",,"b'[![status](https://joss.theoj.org/papers/dabe3753aae2692d9908166a7ce80e6e/status.svg)](https://joss.theoj.org/papers/dabe3753aae2692d9908166a7ce80e6e)\n[![Project Status: Active The project has reached a stable, usable state and is being actively developed.](https://www.repostatus.org/badges/latest/active.svg)](https://www.repostatus.org/#active)\n![GitHub](https://img.shields.io/github/license/PetervanLunteren/EcoAssist)\n[![DOI](https://zenodo.org/badge/DOI/10.5281/zenodo.7223363.svg)](https://doi.org/10.5281/zenodo.7223363)\n![GitHub last\ncommit](https://img.shields.io/github/last-commit/PetervanLunteren/EcoAssist)\n\n

\n \n

\n\n

Simplifying camera trap image analysis for ecologists

\n\n\nEcoAssist is an open-source application designed to streamline the work of ecologists dealing with camera trap images. It\'s an AI platform that enables annotation, training, and deployment of custom models for automatic species detection, offering ecologists a way to save time reviewing images and focus on conservation efforts.\n\nThe [MegaDetector](https://github.com/ecologize/CameraTraps/blob/main/megadetector.md) model is preloaded. This model can find out which images contain an animal and filter out the empties. Unfortunately, MegaDetector does not identify the animals; it just finds them. If you want a model that can identify species for your specific ecosystem or project, you\'ll have to train it yourself. Or outsource it to [Addax Data Science](https://addaxdatascience.com/).\n\nRecently, I joined forces with [Smart Parks](https://www.smartparks.org/). We’re working on expanding the software to become a standalone and robust platform for camera trap image analysis to be used by ecologists worldwide. We\'ll test the setup with a pilot study for the [Desert Lion Conservation Project](https://www.desertlion.info/) in Namibia. If you feel like contributing to the development of EcoAssist, see the [sponsor section](#sponsor) below. \n\nYou can also help me by letting me know about any improvements, bugs, or new features so that I can keep EcoAssist up-to-date. You can [raise an issue](https://github.com/PetervanLunteren/EcoAssist/issues/new) or [email me](mailto:petervanlunteren@hotmail.com). An e-mail just to say hi and tell me about your project is also very much appreciated!\n\n[ ](https://addaxdatascience.com/)\n[ ](https://www.smartparks.org/)\n
\n
\n
\n
\n
\n
\n
\n\n## Quick links\n1. [Demo](#demo)\n2. [Overview](#overview)\n3. [Main features](#main-features)\n4. [Teasers](#teasers)\n5. [Users](#users)\n6. [Current focus](#current-focus)\n7. [Sponsor](#sponsor)\n8. [Tutorial](#tutorial)\n9. [Requirements](#requirements)\n10. [Download](#download)\n11. [Test your installation](#test-your-installation)\n12. [Update](#update)\n13. [GPU support](#gpu-support)\n14. [Bugs](#bugs)\n15. [Cite](#cite)\n16. [Uninstall](#uninstall)\n17. [Contributors](#contributors)\n18. [Similar software](#similar-software)\n\n## Demo\n

\n \n

\n\n## Overview\n

\n \n

\n\n## Main features\n* Runs on Windows, Mac, and Linux\n* No admin rights required\n* Completely offline after installation\n* Use [MegaDetector](https://github.com/ecologize/CameraTraps/blob/main/megadetector.md) to filter out empty images or videos\n* Integration with [Timelapse](https://saul.cpsc.ucalgary.ca/timelapse/)\n* English :gb: & Español :es:\n* Train models using the [YOLOv5](https://github.com/ultralytics/yolov5) architecture\n* Deploy models on images or videos\n* Built-in function to annotate images based on [labelImg](https://github.com/heartexlabs/labelImg)\n* GPU acceleration for NVIDIA and Apple Silicon\n* Post-process your data to\n * separate\n * visualise\n * crop\n * label\n * export to .csv\n\n## Teasers\n

\n \n \n \n \n

\n\nCamera trap images taken from the [Missouri camera trap database](https://lila.science/datasets/missouricameratraps) and the [WCS Camera Traps dataset](https://lila.science/datasets/wcscameratraps).\n\n## Users\n

\n \n

\n

\n \n

\n\nAre you also a user and not on this map? [Let me know](mailto:petervanlunteren@hotmail.com)!\n\n## Current focus\nTogether with [Smart Parks](https://www.smartparks.org/), I\'m working on expanding the software. Our current focus is:\n* Implementing a human-in-the-loop feature for result verification.\n* Improving the annotation process to make it more robust.\n* Testing the setup with a real-world use case for [the Desert Lion Conservation](https://www.desertlion.info/) project.\n* Setting up personalized assistance to support ecologists in effectively using EcoAssist for their projects.\n* Exploring the possibility of providing optimized hardware support.\n\nDo you think we are missing something? [Let me know](mailto:petervanlunteren@hotmail.com)!\n\n## Sponsor\nYou can sponsor the development of this initiative via the sponsor button below. By contributing, you directly support the development of the platform. Your support will enable me to invest more time and expand outreach to reach more conservationists in need. Thank you!\n\n[![](https://img.shields.io/static/v1?label=Sponsor&message=%E2%9D%A4&logo=GitHub&color=%23fe8e86)](https://github.com/sponsors/PetervanLunteren)\n\n## Tutorial\nI\'ve written a detailed tutorial on Medium that provides a step-by-step guide on annotating, training, evaluating, deploying, and postprocessing data with EcoAssist. You can find it [here](https://medium.com/towards-artificial-intelligence/train-and-deploy-custom-object-detection-models-without-a-single-line-of-code-a65e58b57b03). With EcoAssist I tried to make training a model as easy as possible. However, for an accurate model, some machine learning expertise will still be beneficial. If you want to outsource it, you can hire me via my company [Addax Data Science](https://addaxdatascience.com/) to train a custom model for you. \n\n## Requirements\nApart from a minimum of 8 GB RAM, there are no hard system requirements for EcoAssist since it is largely hardware-agnostic. However, please note that machine learning can ask quite a lot from your computer in terms of processing power. Although it will run on an old laptop only designed for text editing, it’s probably not going to train any accurate models, while deploying models can take ages. Generally speaking, the faster the machine, the more reliable the results. GPU acceleration is a big plus.\n\n## Download\nEcoAssist will install quite a lot of dependencies, so don\'t panic if the installation takes 10-20 minutes and generates lots of textual feedback as it does so. Please note that some antivirus, VPN, proxy servers or other protection software might interfere with the installation. If you\'re having trouble, please disable this protection software for the duration of the installation.\n\nOpening EcoAssist for the first time will take a bit longer than usual due to script compiling. Have patience; subsequent launches will be faster.\n\n
\nWindows\n
\n \n1. EcoAssist requires Git and a conda distribution to be installed on your device. See below for instructions on how to install them. During installation, you can leave all parameters at their default values.\n * You can install Git from [gitforwindows.org](https://gitforwindows.org/). \n * EcoAssist will work with Miniforge, Mambaforge, Anaconda, or Miniconda. Miniforge is recommended, however, Mambaforge, Anaconda or Miniconda will suffice if you already have that installed. To install Miniforge, simply download and execute the [Miniforge installer](https://github.com/conda-forge/miniforge/releases/latest/download/Miniforge3-Windows-x86_64.exe). If you see a ""Windows protected your PC"" warning, you may need to click ""More info"" and ""run anyway"".\n2. Download the [EcoAssist installation file](https://PetervanLunteren.github.io/EcoAssist/install.bat) and double-click it. If that doesn\'t work, you can drag and drop it in a command prompt window and press enter.\n3. If you\'ve executed it with admin rights, it will be installed for all users. If you don\'t have admin rights, you will be prompted if you\'d still like to enter an admin password, or proceed with the non-admin install - which will make EcoAssist available for your user only.\n4. When the installation is finished, there will be a shortcut file in your `Downloads` folder. You are free to move this file to a more convenient location. EcoAssist will open when double-clicked.\n\nIf you\'re having trouble with permissions issues, you can choose to run it inside a Windows Subsystem for Linux (WSL) environment. See the steps [here](https://github.com/PetervanLunteren/EcoAssist/issues/23). \n
\n\n
\n### macOS\n
\n \n1. EcoAssist requires you to have a recent version of the Xcode Developer Tools. You can download and install it from the [Mac App Store](https://apps.apple.com/us/app/xcode/id497799835?mt=12). \n2. Download and open [this file](https://PetervanLunteren.github.io/EcoAssist/install.command). Some computers can be quite reluctant to open command files downloaded from the internet. You can circumvent these trust issues by opening the file via right-click > Open > Open. If that still doesn\'t work, you can change the file permissions by opening a new terminal window and copy-pasting the following commands.\n```bash\nchmod 755 $HOME/Downloads/install.command\nbash $HOME/Downloads/install.command\n```\n3. If you\'re an Apple Silicon user (M1/M2), go for a nice walk, because this may take about 30 minutes to complete. Some of the software packages are not yet adapted to the Apple Silicon processor. There is a workaround, but it takes some time. Some packages need `Homebrew` to install. `Homebrew` will be installed automatically (if not already installed), but you\'ll need to provide a sudo password. If you don\'t know the sudo password, you can skip this by pressing Ctrl+D. EcoAssist will still work fine without it, but the annotation and human-in-the-loop features will not work.\n4. When the installation is done, you\'ll find an `EcoAssist.command` file in your `Applications` folder. The app will open when double-clicked. You are free to move this file to a more convenient location. If you want EcoAssist in your dock, manually change `EcoAssist.command` to `EcoAssist.app`, then drag and drop it in your dock and change it back to `EcoAssist.command`. Not the prettiest solution, but it works...\n\nIf you\'re having trouble opening EcoAssist, you might have to reinstall `Xcode`. This sometimes happens after upgrading your macOS version. More information in [this issue](https://github.com/PetervanLunteren/EcoAssist/issues/21). \n
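If you're unsure whether the Xcode Developer Tools from step 1 are present, one way to check is the standard `xcode-select -p` command, wrapped here in a small Python sketch (macOS only; it prints the active developer directory when the tools are configured):

```python
# Check whether the Xcode developer tools are configured (macOS only).
# `xcode-select -p` prints the active developer directory and exits
# non-zero if no tools are installed.
import subprocess

result = subprocess.run(["xcode-select", "-p"], capture_output=True, text=True)
if result.returncode == 0:
    print("Xcode developer tools found at:", result.stdout.strip())
else:
    print("Xcode developer tools not found; install them from the Mac App Store.")
```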
\n\n
\n### Linux\n
\n \n1. Download [this file](https://PetervanLunteren.github.io/EcoAssist/install.command).\n2. Change the file\'s permissions and execute it by running the following commands in a new terminal window. You might be prompted for a `sudo` password to install `libxcb-xinerama0`, which is required by the labelImg software on some Linux versions. If you don\'t know the `sudo` password, you can skip this by pressing Ctrl+D when you are prompted for the password. EcoAssist will still work fine without it, but you might have problems with the labelImg software. The rest of the installation can be done without root privileges.\n```bash\nchmod 755 $HOME/Downloads/install.command\nbash $HOME/Downloads/install.command\n```\n3. During the installation, a file called `EcoAssist` will be created on your desktop. The app will open when double-clicked. You are free to move this file to a more convenient location.\n
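If you skipped the `sudo` step and later want to know whether `libxcb-xinerama0` made it onto your system, you can ask the package manager. The sketch below assumes a Debian/Ubuntu system with `dpkg`; other distributions use different package tools:

```python
# Check whether libxcb-xinerama0 is installed (Debian/Ubuntu only).
# `dpkg -s <package>` exits non-zero if the package is not installed.
import subprocess

result = subprocess.run(
    ["dpkg", "-s", "libxcb-xinerama0"],
    capture_output=True,
    text=True,
)
if result.returncode == 0:
    print("libxcb-xinerama0 is installed; labelImg should work.")
else:
    print("libxcb-xinerama0 not found; labelImg may have problems.")
```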
\n\n## Test your installation\n


\n\nYou can quickly verify that EcoAssist works by following the steps below.\n1. Select a local copy of [this](https://drive.google.com/uc?export=download&id=1ZNAhMbWVoLuIlkejI0ydS1XVChYSCQ50) (unzipped) folder at step 1\n2. Check \'Process all images in the folder specified\' \n3. Click the \'Deploy model\' button and wait for the process to complete\n4. Select the `test-images` folder again as \'Destination folder\'\n5. Check \'Export results to csv files\'\n6. Click the \'Post-process files\' button\n\nIf all went well, there should be a file called `results_files.csv` with the following content. \n\n| absolute_path | relative_path | data_type | n_detections | max_confidence |\n| :--- | :--- | :--- | :--- | :--- |\n| /.../test-images | empty.jpg | img | 0 | 0.0 |\n| /.../test-images | person.jpg | img | 2 | 0.875 |\n| /.../test-images | mutiple_categories.jpg | img | 2 | 0.899 |\n| /.../test-images | animal.jpg | img | 1 | 0.844 |\n| /.../test-images | vehicle.jpg | img | 1 | 0.936 |\n\n## Update\nTo update to the latest version, you\'ll have to repeat the [download](#download) procedure. It will replace all the old EcoAssist files with the new ones. It\'s all automatic; you don\'t have to do anything. Don\'t worry, it won\'t touch your conda distribution or your Git installation - just the `ecoassistcondaenv` environment. \n\n## GPU support\nEcoAssist will automatically run on an NVIDIA or Apple Silicon GPU if one is available. The appropriate `CUDAtoolkit` and `cuDNN` software is already included in the EcoAssist installation for Windows and Linux. If you have an NVIDIA GPU but EcoAssist doesn\'t recognise it, make sure you have a [recent driver](https://www.nvidia.com/en-us/geforce/drivers/) installed, then reboot. An MPS-compatible version of `PyTorch` is included in the installation for Apple Silicon users. Please note that machine learning on Apple Silicon GPUs is still in beta, which means you might run into errors when trying to run on the GPU. My experience is that deployment runs smoothly on the GPU, but training throws errors. Training on the CPU will of course still work. The progress window and console output will display whether EcoAssist is running on the CPU or GPU. \n\n
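To see for yourself which accelerator the bundled `PyTorch` picks up, a minimal sketch like the one below can be run inside the EcoAssist conda environment (this assumes `torch` is importable there; `torch.backends.mps` only exists in recent PyTorch builds, hence the `getattr` guard):

```python
# Report which accelerator PyTorch can see (run inside the EcoAssist env).
import torch

if torch.cuda.is_available():
    print("NVIDIA GPU available:", torch.cuda.get_device_name(0))
elif getattr(torch.backends, "mps", None) is not None and torch.backends.mps.is_available():
    print("Apple Silicon (MPS) GPU available.")
else:
    print("No GPU detected; running on CPU.")
```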


\n\n## Bugs\nIf you encounter any bugs, please [raise an issue](https://github.com/PetervanLunteren/EcoAssist/issues) in this repository or [send me an email](mailto:petervanlunteren@hotmail.com).\n\n## Cite\nPlease use the following citations if you used EcoAssist in your research.\n\n
\n### EcoAssist\n
\n\n[Link to paper](https://joss.theoj.org/papers/10.21105/joss.05581)\n```BibTeX\n@article{van_Lunteren_EcoAssist_2023,\n author = {van Lunteren, Peter},\n doi = {10.21105/joss.05581},\n journal = {Journal of Open Source Software},\n month = aug,\n number = {88},\n pages = {5581},\n title = {{EcoAssist: A no-code platform to train and deploy custom YOLOv5 object detection models}},\n url = {https://joss.theoj.org/papers/10.21105/joss.05581},\n volume = {8},\n year = {2023}\n}\n```\n
\n\n
\n### MegaDetector\n
\n\n[Link to paper](https://arxiv.org/abs/1907.06772)\n```BibTeX\n@article{Beery_Efficient_2019,\n title = {Efficient Pipeline for Camera Trap Image Review},\n author = {Beery, Sara and Morris, Dan and Yang, Siyu},\n journal = {arXiv preprint arXiv:1907.06772},\n year = {2019}\n}\n```\n
\n\n
\n### Ultralytics\n
\n\nIf you used the training function:\n```BibTeX\n@software{Jocher_YOLOv5_2020,\n title = {{YOLOv5 by Ultralytics}},\n author = {Jocher, Glenn},\n year = {2020},\n doi = {10.5281/zenodo.3908559},\n url = {https://github.com/ultralytics/yolov5},\n license = {AGPL-3.0}\n}\n```\n
\n\n## Uninstall\nAll files are located in one folder, called `EcoAssist_files`. You can uninstall EcoAssist by simply deleting this folder. Please be aware that it\'s hidden, so you\'ll probably have to adjust your settings before you can see it (find out how: [macOS](https://www.sonarworks.com/support/sonarworks/360003040160-Troubleshooting/360003204140-FAQ/5005750481554-How-to-show-hidden-files-Mac-and-Windows-), [Windows](https://support.microsoft.com/en-us/windows/view-hidden-files-and-folders-in-windows-97fbc472-c603-9d90-91d0-1166d1d9f4b5#WindowsVersion=Windows_11), [Linux](https://askubuntu.com/questions/232649/how-to-show-or-hide-a-hidden-file)). If you\'re planning on updating EcoAssist, there is no need to uninstall it first; the update procedure will handle that automatically. More about updating [here](#update). \n\n
\n### Location on Windows\n
\n \n```r\n# All users\n─── 📁Program Files\n └── 📁EcoAssist_files\n\n# Single user\n─── 📁Users\n └── 📁\n     └── 📁EcoAssist_files\n```\n
\n\n
\n### Location on macOS\n
\n \n```r\n─── 📁Applications\n └── 📁.EcoAssist_files\n```\n
\n\n
\n### Location on Linux\n
\n \n```r\n─── 📁home\n └── 📁\n     └── 📁.EcoAssist_files\n```\n
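As a convenience, the default locations listed above can be resolved programmatically. A minimal sketch, assuming the default (non-custom) install paths shown in the trees above; the `C:/Program Files` prefix for the all-users Windows install is an assumption based on that tree:

```python
# Resolve the default EcoAssist_files location per OS (paths as listed above).
import platform
from pathlib import Path

system = platform.system()
if system == "Windows":
    candidates = [Path("C:/Program Files/EcoAssist_files"),  # all users
                  Path.home() / "EcoAssist_files"]           # single user
elif system == "Darwin":
    candidates = [Path("/Applications/.EcoAssist_files")]
else:  # Linux
    candidates = [Path.home() / ".EcoAssist_files"]

for folder in candidates:
    print(folder, "exists" if folder.exists() else "not found")
```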
\n\n## Contributors\nThis is an open-source project, so please feel free to fork this repo and submit fixes, improvements, or new features. For more information, see the [contribution guidelines](https://github.com/PetervanLunteren/EcoAssist/blob/main/CONTRIBUTING.md). \n\nThank you for your contributions!\n\n## Similar software\nAs far as I know, there are three other software packages capable of deploying the `MegaDetector` model. These packages are all set up slightly differently and have different features.\n* [CamTrap Detector](https://github.com/bencevans/camtrap-detector)\n* [MegaDetector GUI](https://github.com/petargyurov/megadetector-gui)\n* [Megadetector-Interface](https://github.com/NaomiMcWilliam/Megadetector-Interface)\n'",",https://doi.org/10.5281/zenodo.7223363,https://arxiv.org/abs/1907.06772","2022/01/29, 13:43:22",634,MIT,390,643,"2023/10/21, 06:31:25",2,7,26,25,4,0,0.5714285714285714,0.015151515151515138,"2023/10/19, 06:33:31",v4.2,0,3,true,github,true,true,,,,,,,,,,, COSIMA,Ocean and sea-ice model analysis tools and examples.,COSIMA,https://github.com/COSIMA/cosima-cookbook.git,github,"ocean,analysis",Sea Ice,"2023/10/12, 22:43:29",54,0,6,true,Python,COSIMA,COSIMA,"Python,Mako",https://cosima-recipes.readthedocs.io/en/latest/,"b'

\n# cosima-cookbook\n\nThe COSIMA Cookbook is a framework for analysing output from ocean-sea ice models. The focus is on the ACCESS-OM2 suite of models being developed and run by members of [COSIMA: Consortium for Ocean-Sea Ice Modelling in Australia](http://cosima.org.au). This framework is also suited to analysing any MOM5/MOM6 output, as well as output from other models.\n\nThe cookbook is structured as follows:\n * This repository includes boilerplate code and scripts that underpin the cookbook.\n * The [`cosima-recipes`](https://github.com/COSIMA/cosima-recipes) repository includes example notebooks on which you can base your analyses.\n * The [`cosima-recipes` template](https://github.com/COSIMA/cosima-recipes/blob/master/Tutorials/Template_For_Notebooks.ipynb) provides you with a template if you want to contribute your own scripts to the analysis.\n\n\n## Getting Started\n\nThe easiest way to use the COSIMA Cookbook is through NCI\'s HPC systems (either VDI or Gadi). The cookbook is preinstalled in the latest `conda/analysis3` environment.\n\nOnce you have an account on the VDI, you should:\n 1. Clone the [`cosima-recipes`](https://github.com/COSIMA/cosima-recipes) repository to your local file space.\n 2. Start a Jupyter notebook session using the following commands:\n```\n>> module use /g/data/hh5/public/modules/\n>> module load conda/analysis3-unstable\n>> jupyter notebook\n```\n 3. Navigate to one of the COSIMA recipes and run the analysis.\n\nAlternatively, you might prefer to download the `vdi_jupyter` or `gadi_jupyter` scripts hosted in the CLEx CMS Github Repository [coecms/nci_scripts](https://github.com/coecms/nci_scripts). These scripts will allow you to open a Jupyter notebook in your local browser window.\n\n\n## Using the Cookbook\n\nThe COSIMA Cookbook relies on several components:\n 1. There needs to be a database of simulations; on the NCI system, this covers the model output stored in the COSIMA space in the `/g/data/ik11/` directory.\n 2. Once you have access to data, the best place to start is the [`cosima-recipes`](https://github.com/COSIMA/cosima-recipes) repository, which includes a series of Jupyter notebooks containing examples that guide you through using the cookbook to load model output and then proceed to simple (or elaborate) computations. The best starting point for exploring the [`cosima-recipes`](https://github.com/COSIMA/cosima-recipes) is the [Documented Examples](https://cosima-recipes.readthedocs.io/en/latest/documented_examples.html). A collection of useful examples leveraging the `cosima-cookbook` is also found [here](https://github.com/COSIMA/ACCESS-OM2-1-025-010deg-report/tree/master/figures).\n\n\n## Contributing to the Cookbook\n\nIf you like the cookbook, you may like to interact more closely with us:\n * Contributions of new notebooks or analysis scripts are always welcome. Please check out the [`cosima-recipes`](https://github.com/COSIMA/cosima-recipes) repository.\n * If you find a problem, or have a suggestion for improvement, please log an issue.\n * All code submitted as part of the `cosima-cookbook` itself must be formatted with [black](https://github.com/psf/black).\n\n## Conditions of use for ACCESS-OM2 data\n\nWe request that users of ACCESS-OM2 model [code](https://github.com/COSIMA/access-om2) or output data:\n1. consider citing Kiss et al. (2020) ([http://doi.org/10.5194/gmd-13-401-2020](http://doi.org/10.5194/gmd-13-401-2020))\n2. 
include an acknowledgement such as the following:\n\n *The authors thank the Consortium for Ocean-Sea Ice Modelling in Australia (COSIMA; [http://www.cosima.org.au](http://www.cosima.org.au)) for making the ACCESS-OM2 suite of models available at [https://github.com/COSIMA/access-om2](https://github.com/COSIMA/access-om2).*\n3. let us know of any publications which use these models or data so we can add them to [our list](https://scholar.google.com/citations?hl=en&user=inVqu_4AAAAJ).\n\n[![Documentation Status](https://readthedocs.org/projects/cosima-cookbook/badge/?version=latest)](https://cosima-cookbook.readthedocs.org/en/latest)\n'",",http://doi.org/10.5194/gmd-13-401-2020,http://doi.org/10.5194/gmd-13-401-2020","2017/03/15, 01:01:22",2416,Apache-2.0,10,458,"2023/10/12, 22:43:29",47,146,276,15,13,1,0.1,0.7104477611940299,"2020/06/26, 06:33:28",0.4.0,0,12,false,,false,false,,,https://github.com/COSIMA,http://cosima.org.au,Australia,,,https://avatars.githubusercontent.com/u/12704607?v=4,,, ACCESS-OM2,Global ocean-sea ice coupled model configurations.,COSIMA,https://github.com/COSIMA/access-om2.git,github,,Sea Ice,"2023/07/20, 05:01:46",16,0,3,true,Python,COSIMA,COSIMA,"Python,Shell,Fortran",,"b'\n

\n\n\n| Build | Fast Run | Full Run | Repro | Tools | Release | \n|:-------:|:--------:|:--------:|:--------:|:--------:|:--------:|\n| [![Build Status](https://accessdev.nci.org.au/jenkins/buildStatus/icon?job=ACCESS-OM2/build)](https://accessdev.nci.org.au/jenkins/job/ACCESS-OM2/job/build/) | [![Fast Run Status](https://accessdev.nci.org.au/jenkins/buildStatus/icon?job=ACCESS-OM2/fast_run)](https://accessdev.nci.org.au/jenkins/job/ACCESS-OM2/job/fast_run/) | [![Full Run Status](https://accessdev.nci.org.au/jenkins/buildStatus/icon?job=ACCESS-OM2/full_run)](https://accessdev.nci.org.au/jenkins/job/ACCESS-OM2/job/full_run/) | [![Repro Status](https://accessdev.nci.org.au/jenkins/buildStatus/icon?job=ACCESS-OM2/reproducibility)](https://accessdev.nci.org.au/jenkins/job/ACCESS-OM2/job/reproducibility/) | [![Tools Status](https://accessdev.nci.org.au/jenkins/buildStatus/icon?job=ACCESS-OM2/tools)](https://accessdev.nci.org.au/jenkins/job/ACCESS-OM2/job/tools/) | [![Release Status](https://accessdev.nci.org.au/jenkins/buildStatus/icon?job=ACCESS-OM2/release)](https://accessdev.nci.org.au/jenkins/job/ACCESS-OM2/job/release/) | \n\n# ACCESS-OM2\n\nACCESS-OM2 is a global coupled ocean-sea ice model being developed by [COSIMA](http://www.cosima.org.au).\n\nACCESS-OM2 consists of the [MOM 5.1](https://mom-ocean.github.io) ocean model, [CICE 5.1.2](https://github.com/CICE-Consortium/CICE-svn-trunk/tree/cice-5.1.2) sea ice model, and a file-based atmosphere called [YATM](https://github.com/COSIMA/libaccessom2) coupled together using [OASIS3-MCT v2.0](https://portal.enes.org/oasis). ACCESS-OM2 builds on the ACCESS-OM ([Bi et al., 2013](http://www.bom.gov.au/jshess/docs/2013/bi2_hres.pdf)) and AusCOM ([Roberts et al., 2007](https://50years.acs.org.au/content/dam/acs/50-years/journals/jrpit/JRPIT39.2.137.pdf); [Bi and Marsland, 2010](https://www.cawcr.gov.au/technical-reports/CTR_027.pdf)) models originally developed at [CSIRO](http://www.csiro.au).\n\nACCESS-OM2 comes with a number of standard configurations in the [control](https://github.com/COSIMA/access-om2/tree/master/control) directory. These include sea ice and ocean at a nominal 1.0, 0.25 and 0.1 degree horizontal grid spacing, forced by [JRA55-do](https://doi.org/10.1016/j.ocemod.2018.07.002) atmospheric reanalyses.\n\nACCESS-OM2 is being used for a growing number of research projects. A partial list of publications using the model is given [here](https://scholar.google.com/citations?hl=en&view_op=list_works&gmla=AJsN-F5gp3-wpXzF8odo9cFy-9ajlgIeqwrOq_7DvPS1rkETzqmPk1Sfx-gAmIs9kFfRflOR3HqNV_85pJ2j4LljHks1wQtONqiuOVgii-UICb9q2fmTp_w&user=inVqu_4AAAAJ).\n\n# Downloading\n\nThis repository contains many submodules, so you will need to clone it with the `--recursive` flag:\n```\ngit clone --recursive https://github.com/COSIMA/access-om2.git\n```\n\nTo update a previous clone of this repository to the latest version, you will need to do \n```\ngit pull\n```\nfollowed by\n```\ngit submodule update --init --recursive\n```\nto update all the submodules.\n\n# Where to find information\n\nThe v1.0 model code, configurations and performance were described in [Kiss et al. (2020)](https://doi.org/10.5194/gmd-13-401-2020), with further details in the draft [ACCESS-OM2 technical report](https://github.com/COSIMA/ACCESS-OM2-1-025-010deg-report). The current code and configurations differ from v1.0 in a number of ways (biogeochemistry, updated forcing, improvements and bug fixes), as described by [Solodoch et al. 
(2022)](https://doi.org/10.1029/2021GL097211), [Hayashida et al. (2023)](https://dx.doi.org/10.1029/2023JC019697), [Menviel et al. (2023)](https://doi.org/10.5194/egusphere-2023-390) and [Wang et al. (2023)](https://doi.org/10.5194/gmd-2023-123).\n \nModel output can be accessed by [NCI](http://nci.org.au) users via the [COSIMA Cookbook](https://github.com/COSIMA/cosima-cookbook).\n\nFor information on downloading, building and running the model, see the [ACCESS-OM2 wiki](https://github.com/COSIMA/access-om2/wiki). \n\n**NOTE:** All ACCESS-OM2 model components and configurations are undergoing continual improvement. We strongly recommend that you ""watch"" this repo (see button at top of screen; ask to be notified of all conversations) and also watch all the [component models](https://github.com/COSIMA/access-om2/tree/master/src), whichever [configuration(s)](https://github.com/COSIMA/access-om2/tree/master/control) you are using, and [`payu`](https://github.com/payu-org/payu) to be kept informed of updates, problems and bug fixes as they arise.\n\nRequests for help and other issues associated with the model, tools or configurations can be registered as [ACCESS-OM2 issues](https://github.com/COSIMA/access-om2/issues).\n\n## Conditions of use\n\nWe request that users of this or other ACCESS-OM2 model code:\n1. consider citing Kiss et al. (2020) ([http://doi.org/10.5194/gmd-13-401-2020](http://doi.org/10.5194/gmd-13-401-2020)), and also the other [papers above](https://github.com/COSIMA/access-om2#where-to-find-information) detailing more recent improvements to the model\n2. include an acknowledgement such as the following:\n\n *The authors thank the Consortium for Ocean-Sea Ice Modelling in Australia (COSIMA; [http://www.cosima.org.au](http://www.cosima.org.au)) for making the ACCESS-OM2 suite of models available at [https://github.com/COSIMA/access-om2](https://github.com/COSIMA/access-om2).*\n3. 
let us know of any publications which use these models or data so we can add them to [our list](https://scholar.google.com/citations?hl=en&user=inVqu_4AAAAJ).\n'",",https://doi.org/10.1016/j.ocemod.2018.07.002,https://doi.org/10.5194/gmd-13-401-2020,https://doi.org/10.1029/2021GL097211,https://doi.org/10.5194/egusphere-2023-390,https://doi.org/10.5194/gmd-2023-123,http://doi.org/10.5194/gmd-13-401-2020,http://doi.org/10.5194/gmd-13-401-2020","2014/07/25, 05:39:23",3379,Apache-2.0,4,452,"2023/07/26, 00:07:02",88,34,190,14,92,1,0.0,0.45549738219895286,"2019/04/12, 07:45:53",GMD2019,0,9,false,,false,false,,,https://github.com/COSIMA,http://cosima.org.au,Australia,,,https://avatars.githubusercontent.com/u/12704607?v=4,,, Sea ice drift,Sea ice drift from Sentinel-1 SAR imagery using open source feature tracking.,nansencenter,https://github.com/nansencenter/sea_ice_drift.git,github,,Sea Ice,"2023/05/24, 07:50:34",36,0,7,true,Python,Nansen Environmental and Remote Sensing Center,nansencenter,"Python,Dockerfile",,"b""[![Build Status](https://travis-ci.org/nansencenter/sea_ice_drift.svg?branch=master)](https://travis-ci.org/nansencenter/sea_ice_drift)\n[![Coverage Status](https://coveralls.io/repos/nansencenter/sea_ice_drift/badge.svg?branch=master)](https://coveralls.io/r/nansencenter/sea_ice_drift)\n[![DOI](https://zenodo.org/badge/46479183.svg)](https://zenodo.org/badge/latestdoi/46479183)\n\n## Sea ice drift from Sentinel-1 SAR data\n\nA computationally efficient, open source feature tracking algorithm,\ncalled ORB, is adopted and tuned for retrieval of the first guess\nsea ice drift from Sentinel-1 SAR images. Pattern matching algorithm\nbased on MCC calculation is used further to retrieve sea ice drift on a\nregular grid.\n\n## References:\n * Korosov A.A. and Rampal P., A Combination of Feature Tracking and Pattern Matching with Optimal Parametrization for Sea Ice Drift Retrieval from SAR Data, Remote Sens. 
2017, 9(3), 258; [doi:10.3390/rs9030258](http://www.mdpi.com/2072-4292/9/3/258)\n * Muckenhuber S., Korosov A.A., and Sandven S., Open-source feature-tracking algorithm for sea ice drift retrieval from Sentinel-1 SAR imagery, The Cryosphere, 10, 913-925, [doi:10.5194/tc-10-913-2016](http://www.the-cryosphere.net/10/913/2016/), 2016\n\n## Running with Docker\n```\n# run ipython with SeaIceDrift\ndocker run --rm -it -v /path/to/data:/home/jovyan/work nansencenter/seaicedrift ipython\n\n# run jupyter notebook with SeaIceDrift\ndocker run --rm -p 8888:8888 -v /path/to/data/and/notebooks:/home/jovyan/work nansencenter/seaicedrift\n```\n\n## Installation on Ubuntu\n```\n# install some requirements with apt-get\napt-get install -y --no-install-recommends libgl1-mesa-glx gcc build-essential\n\n# install some requirements with conda\nconda install -c conda-forge gdal cartopy opencv\n\n# install other requirements with pip\npip install netcdf4 nansat\n\n# clone code\ngit clone https://github.com/nansencenter/sea_ice_drift.git\ncd sea_ice_drift\n\n# install SeaIceDrift\npython setup.py install\n```\n\n## Usage example\n```\n# download example datasets\nwget https://github.com/nansencenter/sea_ice_drift_test_files/raw/master/S1B_EW_GRDM_1SDH_20200123T120618.tif\nwget https://github.com/nansencenter/sea_ice_drift_test_files/raw/master/S1B_EW_GRDM_1SDH_20200125T114955.tif\n\n# start Python and import relevant libraries\nimport numpy as np\nimport matplotlib.pyplot as plt\nfrom nansat import Nansat\nfrom sea_ice_drift import SeaIceDrift\n\n# open pair of satellite images using Nansat and SeaIceDrift\nfilename1='S1B_EW_GRDM_1SDH_20200123T120618.tif'\nfilename2='S1B_EW_GRDM_1SDH_20200125T114955.tif'\nsid = SeaIceDrift(filename1, filename2)\n\n# run ice drift retrieval using Feature Tracking\nuft, vft, lon1ft, lat1ft, lon2ft, lat2ft = sid.get_drift_FT()\n\n# plot\nplt.quiver(lon1ft, lat1ft, uft, vft);plt.show()\n\n# define a grid (e.g. 
regular)\nlon1pm, lat1pm = np.meshgrid(np.linspace(-33.5, -30.5, 50),\n np.linspace(83.6, 83.9, 50))\n\n# run ice drift retrieval for regular points using Pattern Matching\n# use results from the Feature Tracking as the first guess\nupm, vpm, apm, rpm, hpm, lon2pm, lat2pm = sid.get_drift_PM(\n lon1pm, lat1pm,\n lon1ft, lat1ft,\n lon2ft, lat2ft)\n# select high quality data only\ngpi = rpm*hpm > 4\n\n# plot high quality data on a regular grid\nplt.quiver(lon1pm[gpi], lat1pm[gpi], upm[gpi], vpm[gpi], rpm[gpi])\n\n```\nFull example [here](https://github.com/nansencenter/sea_ice_drift/blob/master/examples/simple.py)\n\n![Feature Tracking and the first SAR image](https://raw.githubusercontent.com/nansencenter/sea_ice_drift/master/examples/sea_ice_drift_FT_img1.png)\n\n![Pattern Matching and the second SAR image](https://raw.githubusercontent.com/nansencenter/sea_ice_drift/master/examples/sea_ice_drift_PM_img2.png)\n""",",https://zenodo.org/badge/latestdoi/46479183","2015/11/19, 08:40:38",2897,GPL-3.0,1,225,"2020/12/07, 10:24:28",3,3,24,0,1052,0,0.3333333333333333,0.02614379084967322,"2018/10/05, 11:07:20",v0.7,0,4,false,,false,false,,,https://github.com/nansencenter,www.nersc.no,"Bergen, Norway",,,https://avatars.githubusercontent.com/u/5212513?v=4,,, CICE,"A computationally efficient model for simulating the growth, melting, and movement of polar sea ice.",CICE-Consortium,https://github.com/CICE-Consortium/CICE.git,github,,Sea Ice,"2023/10/25, 21:34:35",47,0,13,true,Fortran,CICE Consortium,CICE-Consortium,"Fortran,Shell,Python,TypeScript,C,Makefile,OCaml",,"b'\n[![GHActions](https://github.com/CICE-Consortium/CICE/workflows/GHActions/badge.svg)](https://github.com/CICE-Consortium/CICE/actions)\n[![Documentation Status](https://readthedocs.org/projects/cice-consortium-cice/badge/?version=main)](http://cice-consortium-cice.readthedocs.io/en/main/?badge=main)\n[![lcov](https://img.shields.io/endpoint?url=https://apcraig.github.io/coverage.json)](https://apcraig.github.io)\n\n\n\n## The CICE Consortium sea-ice model\nCICE is a computationally efficient model for simulating the growth, melting, and movement of polar sea ice. Designed as one component of coupled atmosphere-ocean-land-ice global climate models, today\'s CICE model is the outcome of more than two decades of community collaboration in building a sea ice model suitable for multiple uses including process studies, operational forecasting, and climate simulation.\n\n\nThis repository contains the files and code needed to run the CICE sea ice numerical model starting with version 6. CICE is maintained by the CICE Consortium. \nVersions prior to v6 are found in the [CICE-svn-trunk repository](https://github.com/CICE-Consortium/CICE-svn-trunk).\n\nCICE consists of a top level driver and dynamical core plus the [Icepack][icepack] column physics code, which is included in CICE as a Git submodule. Because Icepack is a submodule of CICE, Icepack and CICE development are handled independently with respect to the GitHub repositories even though development and testing may be done together. \n\n[icepack]: https://github.com/CICE-Consortium/Icepack\n\nThe first point of contact with the CICE Consortium is the Consortium Community [Forum][forum]. 
\nThis forum is monitored by Consortium members and is also open to the whole community.\nPlease do not use our issue tracker for general support questions.\n\n[forum]: https://xenforo.cgd.ucar.edu/cesm/forums/cice-consortium.146/\n\nIf you expect to make any changes to the code, we recommend that you first fork both the CICE and Icepack repositories. \nIn order to incorporate your developments into the Consortium code it is imperative you follow the guidance for Pull Requests and requisite testing.\nHead over to our [Contributing][contributing] guide to learn more about how you can help improve CICE.\n\n[contributing]: https://github.com/CICE-Consortium/About-Us/wiki/Contributing\n\n## Useful links\n* **CICE wiki**: https://github.com/CICE-Consortium/CICE/wiki\n\n Information about the CICE model\n\n* **CICE Release Table**: https://github.com/CICE-Consortium/CICE/wiki/CICE-Release-Table\n\n Numbered CICE releases since version 6 with associated documentation and DOIs. \n \n* **Consortium Community Forum**: https://xenforo.cgd.ucar.edu/cesm/forums/cice-consortium.146/\n\n First point of contact for discussing model development including bugs, diagnostics, and future directions. \n\n* **Resource Index**: https://github.com/CICE-Consortium/About-Us/wiki/Resource-Index\n\n List of resources for information about the Consortium and its repositories as well as model documentation, testing, and development.\n\n## License\nSee our [License](LICENSE.pdf) and [Distribution Policy](DistributionPolicy.pdf).\n'",,"2017/05/24, 18:02:53",2345,GPL-3.0,58,920,"2023/10/18, 17:47:02",71,513,823,117,7,3,1.8,0.6091954022988506,"2023/09/11, 19:19:54",CICE6.4.2,0,18,false,,false,true,,,https://github.com/CICE-Consortium,,,,,https://avatars.githubusercontent.com/u/28584507?v=4,,, OSSP,Open Source Algorithm for Detecting Sea Ice Surface Features in High Resolution Optical Imagery.,wrightni,https://github.com/wrightni/OSSP.git,github,,Sea Ice,"2020/10/23, 14:24:26",20,0,4,false,Python,,,Python,,"b""# OSSP\n## Open Source Sea-ice Processing\n### Open Source Algorithm for Detecting Sea Ice Surface Features in High Resolution Optical Imagery\n\n### Nicholas Wright and Chris Polashenski\n\n## Introduction\n\nWelcome to OSSP: a set of tools for detecting surface features in high resolution optical imagery of sea ice. The primary focus is on the detection of and differentiation between open water, melt ponds, and snow/ice. \n\nThe Anaconda distribution of Python is recommended, but any distribution with the appropriate packages will work. You can download Anaconda, version 3.6, here: https://www.continuum.io/downloads\n\n\n## Dependencies\n\n* gdal (v2.0 or above)\n* numpy\n* scipy\n* h5py\n* scikit-image\n* sklearn\n* matplotlib\n* tkinter\n\n#### Optional\n* tqdm (for progress bar)\n* PGC imagery_utils (for WV pansharpening) (https://github.com/PolarGeospatialCenter/imagery_utils)\n\n## Usage\n\nFor detailed usage and installation instructions, see the pdf document 'Algorithm_Instructions.pdf'\n\n### setup.py\n\nThe first step is to run the setup.py script to compile C libraries. Run __python setup.py build\\_ext --build-lib .__ from the OSSP directory. Be sure to include the period after --build-lib. \n\n### ossp_process.py\n\nThis combines all steps of the image classification scheme into one script and should be the only script to call directly. If given a folder of images, this script finds all appropriately formatted files (.tif(f) and .jpg) in the directory and queues them for processing. 
If given an image file, this script processes that single image alone. This script processes images as follows: Image preprocessing (histogram stretch or pansharpening if chosen) -> segmentation (segment.py) -> classification (classify.py) -> calculate statistics. Output results are saved as a geotiff with the same georeference as the input image. \n\n#### Required Arguments\n* __input directory__: directory containing all of the images you wish to process. Note that all .jpg and .tif images in the input directory as well as all sub-directories of it will be processed. Can also provide the path and filename to a single image to process only that image.\n* __image type__: {'srgb', 'wv02_ms', 'pan'}: the type of imagery you are processing. \n 1. 'srgb': RGB imagery taken by a typical camera\n 2. 'wv02_ms': DigitalGlobe WorldView 2 multispectral imagery\n 3. 'pan': High resolution panchromatic imagery\n* __training dataset file__: filepath of the training dataset you wish to use to analyze the input imagery\n\n#### Optional Arguments\n\n* __-o | --output_dir__: Directory to write output files. \n* __-v | --verbose__: Display text output as algorithm progresses. \n* __-c | --stretch__: {'hist', 'pansh', 'none'}: Apply an image correction prior to classification. Pansharpening / orthorectification option requires PGC scripts. *Default = hist*.\n* __-t | --threads__: Number of subprocesses to spawn for classification. Threads > 2 is only utilized for images larger than ~10,000x10,000 pixels. \n* __--pgc_script__: Path for the PGC imagery_utils folder if 'pansh' was chosen for the image correction.\n* __--training\\_label__: The label of a custom training dataset. See advanced section for details. *Default = image\\_type*.\n\n#### Notes:\n\nExample: ossp\\_process.py input\\_dir im\\_type training\\_dataset\\_file -v\n\nThis example will process all .tif and .jpg files in the input\\_dir.\n\n\n### training_gui.py\n\nGraphical user interface for creating a custom training dataset. Provide a directory of images that you wish to use as the basis of your training set. The GUI will present a random segment each time a classification is assigned. The display images can also be clicked to classify a specific area. The segments themselves are automatically generated. The highlighted region corresponds to the segment that will be labeled.\n\nOutput is a .h5 file that can be provided to ossp\\_process.py.\n\nNote: Images are segmented prior to display on the GUI, and as such may take up to a minute to load (depending on image size and computer specs)\n\n#### Positional Arguments:\n* __input__: A directory containing the images you wish to use for training.\n* __image type__: {'srgb', 'wv02_ms', 'pan'}: the type of imagery you are processing. \n 1. 'srgb': RGB imagery taken by a typical camera\n 2. 'wv02_ms': DigitalGlobe WorldView 2 multispectral imagery,\n 3. 'pan': High resolution panchromatic imagery\n\n#### Optional arguments:\n* __--tds_file__: Existing training dataset file. Will create a new one with this name if none exists. If a path is not provided, file is created in the image directory. *Default = \\_training\\_data.h5*.\n* __--username__: A specific label to attach to the training set. The --training\\_label argument of ossp\\_process references this value. 
*Default = *\n\n### Contact\nNicholas Wright\n\n""",,"2017/08/02, 19:16:17",2275,MIT,0,160,"2021/09/19, 00:05:51",1,3,3,0,767,0,0.3333333333333333,0.0,"2019/11/22, 16:44:04",v2.3,0,1,false,,false,false,,,,,,,,,,, sea-ice,Displays the monthly mean sea ice extent for the Arctic and Antarctic along with the historical median extent.,vannizhang,https://github.com/vannizhang/sea-ice.git,github,"arcgis-js-api,living-atlas,arcgis-online,esri,d3,sea-ice,visualization,arctic,antarctic",Sea Ice,"2021/12/17, 17:54:09",18,0,2,false,TypeScript,,,"TypeScript,SCSS,JavaScript,HTML",https://livingatlas.arcgis.com/sea-ice/,"b'# Sea Ice\nThis app displays the monthly mean sea ice extent for the [Arctic](https://www.arcgis.com/home/item.html?id=d1fb8225058e4a0d96ead7b9a574a652) and [Antarctic](https://www.arcgis.com/home/item.html?id=e7f11116c0bc42fb8c7c4d1b1d70eceb) along with the historical median extent.\n\n[View it live](https://livingatlas.arcgis.com/sea-ice/)\n\n![App](./screenshot.png)\n\n## Features\n\nThis app displays the monthly mean sea ice extent for the Arctic and Antarctic along with the historical median extent. Additionally, graphs are used to visualize the minimum and maximum extent for each year (top), and the monthly time series for each year (bottom). Use the top graph to select specific years to display in the map.\n\n## Instructions\n\n- Before we begin, make sure you have a fresh version of [Node.js](https://nodejs.org/en/) and NPM installed. The current Long Term Support (LTS) release is an ideal starting point. \n\n- To begin, clone this repository to your computer:\n\n ```sh\n git clone https://github.com/vannizhang/sea-ice.git\n ```\n\n- From the project\'s root directory, install the required packages (dependencies):\n\n ```sh\n npm install\n ```\n\n - Now you can start the webpack dev server to test the app on your local machine:\n\n ```sh\n # it will start a server instance and begin listening for connections from localhost on port 8080\n npm run start\n ```\n\n - To build/deploy the app, you can simply run:\n\n ```sh\n # it will place all files needed for deployment into the /dist directory \n npm run build\n ```\n\n## Resources\n- [ArcGIS API for JavaScript (version 4.21)](https://developers.arcgis.com/javascript/index.html)\n- [National Snow and Ice Data Center](https://nsidc.org/)\n- [ArcGIS Living Atlas of the World](https://livingatlas.arcgis.com/en/browse/#d=2&q=sea%20ice)\n- [D3.js](https://d3js.org/)\n\n## Issues\n\nFind a bug or want to request a new feature? Please let us know by submitting an issue.\n\n## Disclaimer\n\nThis demo application is for illustrative purposes only and it is not maintained. There is no support available for deployment or development of the application.\n\n## Contributing\n\nEsri welcomes contributions from anyone and everyone. 
Please see our [guidelines for contributing](https://github.com/esri/contributing).\n\n## Licensing\nCopyright 2019 Esri\n\nLicensed under the Apache License, Version 2.0 (the ""License"");\nyou may not use this file except in compliance with the License.\nYou may obtain a copy of the License at\n\n http://www.apache.org/licenses/LICENSE-2.0\n\nUnless required by applicable law or agreed to in writing, software\ndistributed under the License is distributed on an ""AS IS"" BASIS,\nWITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\nSee the License for the specific language governing permissions and\nlimitations under the License.\n\nA copy of the license is available in the repository\'s [LICENSE](license) file.\n'",,"2019/07/29, 17:42:42",1549,Apache-2.0,0,63,"2022/02/26, 17:36:39",22,11,16,0,606,22,0.0,0.03389830508474578,,,0,2,false,,false,false,,,,,,,,,,, FESOM2,Multi-resolution ocean general circulation model that solves the equations of motion describing the ocean and sea ice using finite-element and finite-volume methods on unstructured computational grids.,FESOM,https://github.com/FESOM/fesom2.git,github,,Sea Ice,"2022/06/07, 13:14:55",39,0,8,true,Jupyter Notebook,,FESOM,"Jupyter Notebook,Fortran,C,Python,Shell,CMake,Makefile,NASL,C++,Io,Slice,Batchfile",http://fesom.de/,"b'The Finite Element Sea Ice-Ocean Model (FESOM2) \n======\n[![Build Status](https://github.com/FESOM/fesom2/workflows/FESOM2%20main%20test/badge.svg)](https://github.com/FESOM/fesom2/actions)\n\nMulti-resolution ocean general circulation model that solves the equations of motion describing the ocean and sea ice using finite-element and finite-volume methods on unstructured computational grids. The model is developed and supported by researchers at the Alfred Wegener Institute, Helmholtz Centre for Polar and Marine Research (AWI), in Bremerhaven, Germany.\n\n**Website:** [fesom.de](https://fesom.de/)\n\n**Documentation:** [fesom2.readthedocs.io](https://fesom2.readthedocs.io/en/latest/index.html)\n\n**Basic tutorial:** [Getting started](https://fesom2.readthedocs.io/en/latest/getting_started/getting_started.html)\n\n\nReferences\n----------\n\n[Complete list of references on fesom.de](https://fesom.de/publications/)\n\n* **[Ocean model formulation]** Danilov, S., Sidorenko, D., Wang, Q., and Jung, T.: The Finite-volumE Sea ice\xe2\x80\x93Ocean Model (FESOM2), Geosci. Model Dev., 10, 765\xe2\x80\x93789, https://doi.org/10.5194/gmd-10-765-2017, 2017. \n\n* **[Sea ice model formulation]** Danilov, S., Q. Wang, R. Timmermann, N. Iakovlev, D. Sidorenko, M. Kimmritz, T. Jung, and Schr\xc3\xb6ter, J. (2015), Finite-Element Sea Ice Model (FESIM), version 2, Geosci. Model Dev., 8, 1747\xe2\x80\x931761, http://www.geosci-model-dev.net/8/1747/2015/\n\n* **[Evaluation of standard sumulations]** Scholz, P., Sidorenko, D., Gurses, O., Danilov, S., Koldunov, N., Wang, Q., Sein, D., Smolentseva, M., Rakowsky, N., and Jung, T.: Assessment of the Finite-volumE Sea ice-Ocean Model (FESOM2.0) \xe2\x80\x93 Part 1: Description of selected key model elements and comparison to its predecessor version, Geosci. Model Dev., 12, 4875\xe2\x80\x934899, https://doi.org/10.5194/gmd-12-4875-2019, 2019.\n\n* **[Evaluation of computational performance]** Koldunov, N. V., Aizinger, V., Rakowsky, N., Scholz, P., Sidorenko, D., Danilov, S., and Jung, T.: Scalability and some optimization of the Finite-volumE Sea ice\xe2\x80\x93Ocean Model, Version 2.0 (FESOM2), Geosci. 
Model Dev., 12, 3991\xe2\x80\x934012, https://doi.org/10.5194/gmd-12-3991-2019, 2019. \n\n* **[Version coupled with ECHAM6 atmosphere]** Sidorenko, D., Goessling, H. F., Koldunov, N. V., Scholz, P., Danilov, S., Barbi, D., et al ( 2019). Evaluation of FESOM2.0 coupled to ECHAM6.3: Pre\xe2\x80\x90industrial and HighResMIP simulations. Journal of Advances in Modeling Earth Systems, 11. https://doi.org/10.1029/2019MS001696\n\n* **[Version with ICEPACK sea ice thermodynamics]** Zampieri, Lorenzo, Frank Kauker, J\xc3\xb6rg Fr\xc3\xb6hle, Hiroshi Sumata, Elizabeth C. Hunke, and Helge Goessling. Impact of Sea-Ice Model Complexity on the Performance of an Unstructured-Mesh Sea-ice/ocean Model Under Different Atmospheric Forcings. Washington: American Geophysical Union, 2020. https://dx.doi.org/10.1002/essoar.10505308.1.\n'",",https://doi.org/10.5194/gmd-10-765-2017,https://doi.org/10.5194/gmd-12-4875-2019,https://doi.org/10.5194/gmd-12-3991-2019,https://doi.org/10.1029/2019MS001696\n\n*","2018/07/02, 10:50:34",1941,GPL-3.0,0,1771,"2023/10/19, 12:38:44",115,331,393,112,6,44,0.9,0.6706067769897557,"2023/09/20, 11:34:45",AWI-CM3_v3.2,0,16,false,,false,false,,,https://github.com/FESOM,,,,,https://avatars.githubusercontent.com/u/24571721?v=4,,, IceNet,Code for Seasonal Arctic sea ice forecasting with probabilistic deep learning.,tom-andersson,https://github.com/tom-andersson/icenet-paper.git,github,,Sea Ice,"2023/10/10, 10:01:08",74,0,33,true,Python,,,"Python,Shell",,"b""# IceNet: Seasonal Arctic sea ice forecasting with probabilistic deep learning\n\n[![DOI](https://zenodo.org/badge/DOI/10.5281/zenodo.5176573.svg)](https://doi.org/10.5281/zenodo.5176573)\n\nThis codebase accompanies the Nature Communications paper [_Seasonal Arctic sea\nice forecasting with probabilistic deep\nlearning_](https://www.nature.com/articles/s41467-021-25257-4). It includes code to fully reproduce all the results of the study\nfrom scratch. It also includes code to download\nthe data generated by the study,\n[published on the Polar Data\nCentre](https://doi.org/10.5285/71820e7d-c628-4e32-969f-464b7efb187c), and\nreproduce all the paper's figures.\n\nThe flexibility of the code simplifies possible extensions of the study.\nThe data processing pipeline and custom `IceNetDataLoader` class lets you\ndictate which variables are input to the networks, which climate simulations are\nused for pre-training, and how far ahead to forecast.\nThe architecture of the IceNet model can be adapted in `icenet/models.py`.\nThe output variable to forecast could even be changed by refactoring the `IceNetDataLoader`\nclass.\n\nA demonstrator of this codebase (downloading pre-trained IceNet networks,\nthen generating and analysing forecasts) produced by [@acocac](https://github.com/acocac) can be found in [The Environmental\nData Science Book](https://edsbook.org/notebooks/gallery/ac327c3a-5264-40a2-8c6e-1e8d7c4b37ef/notebook).\n\n![](figures/architecture.png)\n\nThe guidelines below assume you're working in\nthe command line of a Unix-like machine with a GPU. 
If aiming to reproduce all the\nresults of the study, 1 TB of space should safely cover the storage requirements\nfrom the data downloaded and generated.\n\nIf you run into issues or have suggestions for improvement,\nplease raise an issue or email me (tomand@bas.ac.uk).\n\n## Steps to plot paper figures using the paper's results & forecasts\n\nTo reproduce the paper figures directly from the paper's \nresults and forecasts, run the following after\nsetting up the conda environment (see Step 1 below):\n- `./download_paper_generated_data.sh`. Downloads raw data from the paper. From here, you could start to explore the results of the paper in\nmore detail.\n- `python3 icenet/download_sic_data.py`. This is needed to plot the ground truth ice edge. Note this download can take anywhere from 1 to 12 hours to complete.\n- `python3 icenet/gen_masks.py`\n- `python3 icenet/plot_paper_figures.py`. Figures are saved in `figures/paper_figures/`.\n\n## Steps to reproduce the paper's results from scratch\n\n### 0) Preliminary setup\n\n* I use conda for package management. If you don't yet\nhave conda, you can download it\n[here](https://docs.conda.io/projects/conda/en/latest/user-guide/install/linux.html).\n\n* To be able to download ERA5 data, you must first set up a CDS\naccount and populate your `.cdsapirc` file. Follow the 'Install the CDS API key'\ninstructions\n[here](https://cds.climate.copernicus.eu/api-how-to#install-the-cds-api-key).\n\n* To download the ECMWF SEAS5 forecast data for comparing with IceNet,\nyou must first register with ECMWF [here](https://apps.ecmwf.int/registration/).\nIf you are from an ECMWF Member State, you can then gain access to the ECMWF MARS Catalogue by\n[contacting your Computing\nRepresentative](https://www.ecmwf.int/en/about/contact-us/computing-representatives).\nOnce registered, obtain your\nAPI key [here](https://api.ecmwf.int/v1/key/) and fill the ECMWF API entries in\n`icenet/config.py`.\n\n* To track training runs and perform Bayesian hyperparameter tuning with Weights\nand Biases, sign up at https://wandb.ai/site. Obtain your API key from\n[here](https://wandb.ai/authorize) and fill the Weights and Biases entries in `icenet/config.py`.\nEnsure you are logged in by running `wandb login` after setting up the conda\nenvironment.\n\n### 1) Set up conda environment\n\nAfter cloning the repo, run the commands below in the root of the repository to\nset up the conda environment:\n\n- If you don't have [mamba](https://github.com/mamba-org/mamba) already, install\nit to your base env for faster conda operations: `conda install -n base mamba -c\nconda-forge`.\n- For upgradeability use the versioned direct dependency\nenvironment file: `mamba env create --file environment.yml`\n- For reproducibility use the locked environment file: `mamba env create --file\nenvironment.locked.yml`\n- Activate the environment before running code: `conda activate icenet`\n\n### 2) Download data\n\nThe [CMIP6 variable naming convention](https://docs.google.com/spreadsheets/d/1UUtoz6Ofyjlpx5LdqhKcwHFz2SGoTQV2_yekHyMfL9Y/edit#gid=1221485271)\nis used throughout this project - e.g. `tas` for surface air temperature, `siconca` for\nsea ice concentration, etc.\n\nWarning: some downloads are slow and the net download time can take 1-2 days.\nIt may be advisable to write a bash script to automatically execute all these\ncommands in sequence and run it over a weekend.\n\n- `python3 icenet/gen_masks.py`. 
This obtains masks for land, the polar holes,\nmonthly maximum ice extent (the 'active grid cell region'), and the Arctic regions\n& coastline.\n\n- `python3 icenet/download_sic_data.py`. Downloads OSI-SAF SIC data. This computes\nmonthly-averaged SIC server-side, downloads the results, and bilinearly interpolates missing grid cells (e.g. polar hole). Note this download can take anywhere from 1 to 12 hours to complete.\n\n- `./download_era5_data_in_parallel.sh`. Downloads ERA5 reanalysis data.\nThis runs multiple parallel `python3 icenet/download_era5_data.py`\ncommands in the background to acquire each ERA5 variable. The raw ERA5 data is downloaded in\nglobal latitude-longitude format and regridded to the EASE grid that\nOSI-SAF SIC data lies on. Logs are output to `logs/era5_download_logs/`.\n\n- `./download_cmip6_data_in_parallel.sh`. Downloads CMIP6 climate simulation data.\nThis runs multiple parallel `python3 icenet/download_cmip6_data.py`\ncommands in the background to download each climate simulation. The raw\n CMIP6 data is regridded from global latitude-longitude format to the EASE grid that\n OSI-SAF SIC data lies on. Logs are output to `logs/cmip6_download_logs/`.\n\n- `./rotate_wind_data_in_parallel.sh`. This runs multiple parallel `python3 icenet/rotate_wind_data.py`\ncommands in the background to rotate the ERA5 and CMIP6 wind vector data onto the EASE grid.\nLogs are output to `logs/wind_rotation_logs/`.\n\n- `./download_seas5_forecasts_in_parallel.sh`. Downloads ECMWF SEAS5 SIC forecasts.\nThis runs multiple parallel `python3 icenet/download_seas5_forecasts.py`\ncommands to acquire 2002-2020 SEAS5 forecasts for each lead time\nvia the ECMWF MARS API and regrid the forecasts to EASE. The forecasts are saved to\n`data/forecasts/seas5/` in the folders `latlon/` and `EASE/`.\nLogs are output to `logs/seas5_download_logs/`.\n\n- `python3 icenet/biascorrect_seas5_forecasts.py`. Bias corrects the SEAS5 2012+ forecasts\nusing 2002-2011 forecasts. Also computes SEAS5 sea ice probability (SIP) fields.\nThe bias-corrected forecasts are saved as NetCDFs in `data/forecasts/seas5/` with dimensions\n`(target date, y, x, lead time)`.\n\n### 3) Process data\n\n#### 3.1) Set up IceNet's custom data loader\n\n- `python3 icenet/gen_data_loader_config.py`. Sets up the data loader configuration.\nThis is saved as a JSON file dictating IceNet's input and output data,\ntrain/val/test splits, etc. The config file is used to instantiate the\ncustom `IceNetDataLoader` class. Two example config files are provided in this repository\nin `dataloader_configs/`. Each config file is identified by a\ndataloader ID, determined by a timestamp and a user-provided name (e.g.\n`2021_06_15_1854_icenet_nature_communications`). The data loader ID,\ntogether with an architecture ID set in the training script, provides an 'IceNet ID'\nwhich uniquely identifies an IceNet ensemble model by its data configuration and\narchitecture.\n\n#### 3.2) Preprocess the raw data\n\n- `python3 icenet/preproc_icenet_data.py`. Normalises the raw NetCDF data and saves it as\nmonthly NumPy files. The normalisation parameters (mean/std dev or min/max)\nare saved as a JSON file so that new data can be preprocessed without\nhaving to recompute the normalisation. A custom `IceNetDataPreProcessor` class handles this preprocessing.\n\n- The observational training & validation dataset for IceNet is just 23 GB,\nwhich can fit in RAM on some systems and significantly speed up the fine-tuning\ntraining phase compared with using the data loader. 
To benefit from this, run\n`python3 icenet/gen_numpy_obs_train_val_datasets.py` to generate NumPy tensors\nfor the train/val input/output data. To further benefit from the training speed\nimprovements of `tf.data`, generate a TFRecords dataset from the NumPy tensors\nusing `python3 icenet/gen_tfrecords_obs_train_val_datasets.py`. Whether to use\nthe data loader, NumPy arrays, or TFRecords datasets for training is controlled by bools in\n`icenet/train_icenet.py`.\n\n### 4) Train IceNet\n\n#### 4.1) OPTIONAL: Run the hyperparameter search (skip if using default values from paper)\n\n- Set `icenet/train_icenet.py` up for hyperparameter tuning: Set pre-training\nand temperature scaling bools to `False` in the user input section.\n- `wandb sweep icenet/sweep.yaml`\n- Then run the `wandb agent` command that is printed.\n- Cancel the sweep after a sufficient picture of optimal hyperparameters is\nbuilt up on the [wandb.ai](https://wandb.ai/home) page.\n\n#### 4.2) Run training\n\n- Train IceNet networks with `python3 icenet/train_icenet.py`. This takes\nhyperparameter settings and the random seed for network weight initialisation as\ncommand line inputs. Run this multiple times with different settings of `--seed`\nto train an ensemble. Trained networks are saved in\n`trained_networks///networks/`. If working on a\nshared machine and familiar with SLURM, you may want to wrap this command in a\nSLURM script.\n\n### 5) Produce forecasts\n\n- `python3 icenet/predict_heldout_data.py`. Uses `xarray` to save predictions\nover the validation and test years (2012-2020) as NetCDFs for IceNet and the\nlinear trend benchmark. IceNet's forecasts are saved in\n`data/forecasts/icenet///`.\nFor IceNet, the full forecast dataset has dimensions\n`(target date, y, x, lead time, ice class, seed)`, where `seed` specifies\na single ensemble member or the ensemble-mean forecast. An ensemble-mean\nSIP forecast is also computed and saved as a separate, smaller file\n(which only has the first four dimensions).\n\n- Compute IceNet's ensemble-mean temperature scaling parameter for each lead time:\n`python3 icenet/compute_ensemble_mean_temp_scaling.py`. The new, ensemble-mean\ntemperature-scaled SIP forecasts are saved to\n`data/forecasts/icenet///icenet_sip_forecasts_tempscaled.nc`.\nThese forecasts represent the final ensemble-mean IceNet model used for the paper.\n\n### 6) Analyse forecasts\n\n- `python3 icenet/analyse_heldout_predictions.py`. Loads the NetCDF forecast data and computes\nforecast metrics, storing results in a global `pandas` DataFrame with\n`MultiIndex` `(model, ensemble member, lead time, target date)` and columns\nfor each metric (binary accuracy and sea ice extent error). Uses\n`dask` to avoid loading the entire forecast datasets into memory, processing\nchunks in parallel to significantly speed up the analysis. Results are saved\nas CSV files in `results/forecast_results/` with a timestamp to avoid overwriting.\nOptionally pre-load the latest CSV file to append new models or metrics to the\nresults without needing to re-analyse existing models. Use this feature to append\nforecast results from other IceNet models (identified by their dataloader ID\nand architecture ID) to track the effect of design changes on forecast performance.\n\n- `python3 icenet/analyse_uncertainty.py`. Assesses the calibration of IceNet and\nSEAS5's SIP forecasts. Also determines IceNet's ice edge region and assesses\nits ice edge bounding ability. 
Results are saved in `results/uncertainty_results/`.\n\n### 7) Run the permute-and-predict method to explore IceNet's most important input variables\n\n- `python3 icenet/permute_and_predict.py`. Results are stored in\n`results/permute_and_predict_results/`.\n\n### 8) Generate the paper figures and tables\n\n- `python3 icenet/plot_paper_figures.py`. Figures are saved in `figures/paper_figures/`. Note, you will need the Sea Ice Outlook\nerror CSV file to plot Supp. Fig. 5:\n```\nwget -O data/sea_ice_outlook_errors.csv 'https://ramadda.data.bas.ac.uk/repository/entry/get/sea_ice_outlook_errors.csv?entryid=synth%3A71820e7d-c628-4e32-969f-464b7efb187c%3AL3Jlc3VsdHMvb3V0bG9va19lcnJvcnMvc2VhX2ljZV9vdXRsb29rX2Vycm9ycy5jc3Y%3D'\n```\n\n### Misc\n\n- `icenet/utils.py` defines IceNet utility functions like the data preprocessor,\ndata loader, ERA5 and CMIP6 processing, learning rate decay, and video functionality.\n- `icenet/models.py` defines network architectures.\n- `icenet/config.py` defines globals.\n- `icenet/losses.py` defines loss functions.\n- `icenet/callbacks.py` defines training callbacks.\n- `icenet/metrics.py` defines training metrics.\n\n### Project structure: simplified output from `tree`\n\n```\n.\n├── data\n│   ├── obs\n│   ├── cmip6\n│   │   ├── EC-Earth3\n│   │   │   ├── r10i1p1f1\n│   │   │   ├── r12i1p1f1\n│   │   │   ├── r14i1p1f1\n│   │   │   ├── r2i1p1f1\n│   │   │   └── r7i1p1f1\n│   │   └── MRI-ESM2-0\n│   │       ├── r1i1p1f1\n│   │       ├── r2i1p1f1\n│   │       ├── r3i1p1f1\n│   │       ├── r4i1p1f1\n│   │       └── r5i1p1f1\n│   ├── forecasts\n│   │   ├── icenet\n│   │   │   ├── 2021_06_15_1854_icenet_nature_communications\n│   │   │   │   └── unet_tempscale\n│   │   │   └── 2021_06_30_0954_icenet_pretrain_ablation\n│   │   │       └── unet_tempscale\n│   │   ├── linear_trend\n│   │   └── seas5\n│   │       ├── EASE\n│   │       └── latlon\n│   ├── masks\n│   └── network_datasets\n│       └── dataset1\n│           ├── meta\n│           ├── obs\n│           ├── transfer\n│           └── norm_params.json\n├── dataloader_configs\n│   ├── 2021_06_15_1854_icenet_nature_communications.json\n│   └── 2021_06_30_0954_icenet_pretrain_ablation.json\n├── figures\n├── icenet\n├── logs\n│   ├── cmip6_download_logs\n│   ├── era5_download_logs\n│   ├── seas5_download_logs\n│   └── wind_rotation_logs\n├── results\n│   ├── forecast_results\n│   │   └── 2021_07_01_183913_forecast_results.csv\n│   ├── permute_and_predict_results\n│   │   └── permute_and_predict_results.csv\n│   └── uncertainty_results\n│       ├── ice_edge_region_results.csv\n│       ├── sip_bounding_results.csv\n│       └── uncertainty_results.csv\n└── trained_networks\n    └── 2021_06_15_1854_icenet_nature_communications\n        ├── obs_train_val_data\n        │   ├── numpy\n        │   └── tfrecords\n        │       ├── train\n        │       └── val\n        └── unet_tempscale\n    
\xc2\xa0\xc2\xa0 \xe2\x94\x94\xe2\x94\x80\xe2\x94\x80 networks\n \xe2\x94\x9c\xe2\x94\x80\xe2\x94\x80 network_tempscaled_36.h5\n \xe2\x94\x9c\xe2\x94\x80\xe2\x94\x80 network_tempscaled_37.h5\n :\n```\n\n### Acknowledgements\n\nThanks to James Byrne (BAS) and Tony Phillips (BAS) for direct contributions to this codebase.\n""",",https://doi.org/10.5281/zenodo.5176573,https://doi.org/10.5285/71820e7d-c628-4e32-969f-464b7efb187c","2021/07/23, 12:18:07",824,GPL-3.0,9,70,"2023/10/10, 10:01:08",0,4,11,8,15,0,0.25,0.032258064516129004,"2021/08/10, 16:41:03",v1.0.0,0,3,false,,false,false,,,,,,,,,,, PyTrx,"Its primary purpose is to obtain velocities, surface areas, and distances from oblique, optical imagery of glacial environments.",PennyHow,https://github.com/PennyHow/PyTrx.git,github,"python3,glaciology,photogrammetry,time-lapse,opencv,optical-flow,template-matching,oblique-imagery,image-classification",Glacier and Ice Sheets,"2023/04/28, 13:30:48",40,0,9,true,Python,,,Python,,"b'# PyTrx\n
\n\nPyTrx (short for \'Python Tracking\') is an object-oriented Python toolbox created for calculating real-world measurements from oblique images and time-lapse image series. Its primary purpose is to obtain velocities, surface areas, and distances from imagery of glacial environments.
\n\n## PyTrx citations
\n\nWe are happy for others to use and adapt PyTrx for their own processing needs. If used, please cite the following key publication and Digital Object Identifier:
\n\n

How et al. (2020) PyTrx: a Python-based monoscopic terrestrial photogrammetry toolset for glaciology. Frontiers in Earth Science 8:21, doi:10.3389/feart.2020.00021

\n\nPyTrx has been used in the following publications. In addition to the publication above, please cite any that are applicable where possible:
\n\n*PyTrx used for georectification of glacier calving event locations*
\nHow et al. (2019) Calving controlled by melt-undercutting: detailed mechanisms revealed through time-lapse observations. Annals of Glaciology 60 (78), 20-31, doi:10.1017/aog.2018.28
\n\n*PhD thesis by Penelope How, for which PyTrx was developed primarily*
\nHow (2018) Dynamical change at tidewater glaciers examined using time-lapse photogrammetry. PhD thesis, University of Edinburgh, UK, https://hdl.handle.net/1842/31103
\n\n*PyTrx used for detection of supraglacial lakes and meltwater plumes*
\nHow et al. (2017) Rapidly changing subglacial hydrological pathways at a tidewater glacier revealed through simultaneous observations of water pressure, supraglacial lakes, meltwater plumes and surface velocities. The Cryosphere 11, 2691-2710, doi:10.5194/tc-11-2691-2017
\n\n*MSc thesis by Lynne Buie, where PyTrx was created in its earliest form*
\nAddison (2015) PyTrx: feature tracking software for automated production of glacier velocity. MSc thesis, University of Edinburgh, UK, https://hdl.handle.net/1842/11794
\n\n## Installation
\n\nThe PyTrx installation has been tested on Linux and Windows operating systems (it should work on Apple operating systems too, it just hasn\'t been tested). PyTrx is primarily available through pip:\n\n```bash\npip install pytrx\n```\nBe warned that there are difficulties with the GDAL package on pip, meaning that gdal could not be declared explicitly as a PyTrx dependency when compiling the pip package. Please ensure that gdal is installed separately if installing PyTrx through pip. You should be able to create a new environment, install GDAL and the other dependencies with conda, and then install PyTrx with pip.\n\n```bash\nconda create --name pytrx3 python=3.7\nconda activate pytrx3\nconda install gdal opencv pillow scipy matplotlib spyder\npip install pytrx\n```\n\nBe aware that the PyTrx example scripts in this repository are not included with the pip distribution of PyTrx, given the size of the example dataset files. Either download these separately, or create a new conda environment (using the .yml environment file provided) and clone the PyTrx GitHub repository:\n\n```bash\nconda env create --file environment.yml\nconda activate pytrx3\ngit clone https://github.com/PennyHow/PyTrx.git\n```\n\nSee our readthedocs page on setting up PyTrx for more details.\n\n
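Before installing PyTrx itself, it can be worth a quick sanity check that the conda-installed GDAL bindings import cleanly (a sketch assuming the standard `osgeo` Python bindings):

```bash
python -c \'from osgeo import gdal; print(gdal.__version__)\'
```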
\n\n## Permissions and acknowledgements
\n\nPyTrx was initially developed and released as part of the CRIOS (Calving Rates and Impact on Sea Level) project. PyTrx\'s continued development and maintenance are funded by an ESA Living Planet Fellowship.
\n\nParts of the georectification functions in the PyTrx toolbox were inspired by and translated from ImGRAFT, a photogrammetry toolbox for Matlab (Messerli and Grinsted, 2015). Where possible, ImGRAFT has been credited in the corresponding PyTrx scripts (primarily some passages in the CamEnv.py script) and cited in relevant PyTrx publications.
\n\nSee PyTrx\'s readthedocs for all permissions and acknowledgements.\n\n
\n\n## Links
\n\nOther useful software is also available for terrestrial photogrammetry in glaciology:
\n\nPointcatcher - Matlab-based GUI toolbox for feature-tracking and georectification
\nImGRAFT - Matlab toolbox for feature-tracking and georectification
\nEMT (Environmental Motion Tracking) - GUI toolbox for feature-tracking and georectification
\nCIAS - IDL GUI for feature-tracking
\nPRACTISE - Matlab toolbox for georectification\n\n
\n\n## Copyright
\n\nPyTrx is licensed under an MIT License.\n'",",https://zenodo.org/badge/latestdoi/91549468,https://doi.org/10.5194/tc-11-2691-2017","2017/05/17, 08:01:03",2352,MIT,2,560,"2023/07/24, 22:15:18",3,10,44,2,93,0,0.0,0.004444444444444473,"2022/06/08, 13:37:48",pytrx1.2.4,0,2,false,,false,false,,,,,,,,,,, OGGM,A modular open source model for glacier dynamics.,OGGM,https://github.com/OGGM/oggm.git,github,"glacier,model,climate,ice-dynamics,mass-balance,global,sea-level,calving,python",Glacier and Ice Sheets,"2023/10/05, 11:41:40",188,0,29,true,Python,Open Global Glacier Model,OGGM,"Python,Shell,Dockerfile",http://oggm.org,"b'.. image:: docs/_static/logo.png\n\n|\n\n\n**OGGM is a modular open source model for glacier dynamics**\n\nOGGM is able to simulate past and\nfuture mass balance, volume and geometry of (almost) any glacier in the world,\nin a fully automated and extensible workflow.\n\nThe model accounts for glacier geometry (including contributory branches) and\nincludes an explicit ice dynamics module. We rely exclusively on publicly\navailable data for calibration and validation. **OGGM is modular and\nsupports novel modelling workflows**: it LOVES to be remixed and reused!\n\n.. image:: docs/_static/ex_tasman.jpg\n\n\nInstallation, documentation\n---------------------------\n\nThe documentation is hosted on ReadTheDocs: http://docs.oggm.org\n\n\nGet in touch\n------------\n\n- View the source code `on GitHub`_.\n- Report bugs or share your ideas on the `issue tracker`_.\n- Improve the model by submitting a `pull request`_.\n- Follow us on `Twitter`_.\n- Or you can always send us an `e-mail`_ the good old way.\n\n.. _e-mail: info@oggm.org\n.. _on GitHub: https://github.com/OGGM/oggm\n.. _issue tracker: https://github.com/OGGM/oggm/issues\n.. _pull request: https://github.com/OGGM/oggm/pulls\n.. _Twitter: https://twitter.com/OGGM1\n\n\nAbout\n-----\n\n:Version:\n .. image:: https://img.shields.io/pypi/v/oggm.svg\n :target: https://pypi.python.org/pypi/oggm\n :alt: Pypi version\n \n .. image:: https://img.shields.io/pypi/pyversions/oggm.svg\n :target: https://pypi.python.org/pypi/oggm\n :alt: Supported python versions\n\n:Citation:\n .. image:: https://img.shields.io/badge/Citation-GMD%20paper-orange.svg\n :target: https://www.geosci-model-dev.net/12/909/2019/\n :alt: GMD Paper\n\n .. image:: https://zenodo.org/badge/43965645.svg\n :target: https://zenodo.org/badge/latestdoi/43965645\n :alt: Zenodo\n\n:Tests: \n .. image:: https://coveralls.io/repos/github/OGGM/oggm/badge.svg?branch=master\n :target: https://coveralls.io/github/OGGM/oggm?branch=master\n :alt: Code coverage\n\n .. image:: https://github.com/OGGM/oggm/actions/workflows/run-tests.yml/badge.svg?branch=master\n :target: https://github.com/OGGM/oggm/actions/workflows/run-tests.yml\n :alt: Linux build status\n\n .. image:: https://img.shields.io/badge/Cross-validation-blue.svg\n :target: https://cluster.klima.uni-bremen.de/~oggm/ref_mb_params/oggm_v1.4/crossval.html\n :alt: Mass balance cross validation\n\n .. image:: https://readthedocs.org/projects/oggm/badge/?version=latest\n :target: http://docs.oggm.org/en/latest/\n :alt: Documentation status\n\n .. image:: https://img.shields.io/badge/benchmarked%20by-asv-green.svg?style=flat\n :target: https://cluster.klima.uni-bremen.de/~github/asv/\n :alt: Benchmark status\n\n:License:\n .. 
image:: https://img.shields.io/pypi/l/oggm.svg\n :target: https://github.com/OGGM/oggm/blob/master/LICENSE.txt\n :alt: BSD-3-Clause License\n\n:Authors:\n\n See the `version history`_ for a list of all contributors.\n\n .. _version history: http://docs.oggm.org/en/stable/whats-new.html\n'",",https://zenodo.org/badge/latestdoi/43965645\n","2015/10/09, 15:55:31",2938,BSD-3-Clause,88,1353,"2023/10/10, 08:54:01",223,994,1430,133,15,8,0.0,0.3799843627834245,"2023/08/27, 18:03:42",v1.6.1,0,31,false,,false,false,,,https://github.com/OGGM,www.oggm.org,,,,https://avatars.githubusercontent.com/u/14758186?v=4,,, GlaThiDa,Glacier Thickness Database.,wgms,https://gitlab.com/wgms/glathida,gitlab,,Glacier and Ice Sheets,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,, ALPGM,Regional glacier evolution model based on deep learning and parametrizations.,JordiBolibar,https://github.com/JordiBolibar/ALPGM.git,github,"glacier,surface-mass-balance,glacier-modelling,deep-learning",Glacier and Ice Sheets,"2023/07/07, 08:57:08",35,0,3,true,Python,,,Python,,"b'# ALPGM (ALpine Parameterized Glacier Model) v1.2\n\n![ALPGM](https://www.dropbox.com/s/8zycrf67lloppr5/algpm_logo2.png?raw=1)\n\n[![DOI](https://zenodo.org/badge/195388796.svg)](https://zenodo.org/badge/latestdoi/195388796)\n\n#### Author \n

Jordi Bol\xc3\xadbar\n
jordi.bolibar@univ-grenoble-alpes.fr\n
Institute of Environmental Geosciences (Universit\xc3\xa9 Grenoble Alpes)

\n\n## Overview\n

\n ALPGM is a fully parameterized glacier evolution model based on data science. Glacier-wide surface mass balances (SMB) are simulated using a deep artificial neural network (i.e. deep learning) or Lasso (i.e. regularized multilinear regression). \n Glacier dynamics are parameterized using glacier-specific delta-h functions (Huss et al. 2008). The model has so far been implemented with a dataset of French alpine glaciers, using climate forcings\n for past (SAFRAN, Durand et al. 1993) and future (ADAMONT, Verfaillie et al. 2018) periods.\n
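For orientation, the delta-h method redistributes each year\'s glacier-wide mass change over the glacier\'s elevation range. In the widely used empirical formulation of the approach (given in Huss et al. 2010, a follow-up of the cited Huss et al. 2008), the normalized surface elevation change is

    dh = (h_r + a)^gamma + b * (h_r + a) + c

where h_r is the normalized elevation range (0 at the glacier\'s highest point, 1 at the terminus) and a, b, c and gamma are coefficients fitted per glacier or per glacier size class. This equation is quoted here only as context for the delta-h functions mentioned above; ALPGM\'s glacier-specific functions are derived from DEM differencing (see the Workflow section below).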

\n\n

\n The machine learning SMB modelling approach is built upon widely used Python libraries (Keras, Scikit-learn and Statsmodels). \n

\n\n

\n For more details regarding ALPGM and the deep learning SMB modelling approach, I encourage you to read the Bolibar et al. (2020) paper in The Cryosphere: https://www.the-cryosphere.net/14/565/2020/\n\n## Workflow\n

\n ALPGM\'s workflow can be controlled via the alpgm_interface.py file. In this file, different settings can be configured, and each step can be run or skipped with a boolean flag (a hypothetical sketch of such flags is given after step (5) below). \n The default workflow runs as follows:\n

\n\n

\n (1) First of all, the meteorological forcings are pre-processed (safran_forcings.py / adamont_forcings.py) in order to extract the necessary data closest to each glacier\xe2\x80\x99s centroid. The meteorological features are stored in intermediate files in order \n to reduce computation times for future runs, automatically skipping this preprocessing step when the files are already generated. \n

\n (2) The SMB machine learning module retrieves the pre-processed meteorological features and assembles the spatio-temporal training dataset, comprising both climatic and topographical data. An algorithm is \n chosen for the SMB model, which can be loaded from a previous training run or retrained with the training dataset (smb_model_training.py). These model(s) are stored in intermediate files, allowing this step to be skipped in future runs.\n

\n (3) The performance of these SMB models can be evaluated by performing a leave-one-glacier-out (LOGO) cross-validation (smb_validation.py). This step can be skipped when using already established models. Basic statistical performance \n metrics are given for each glacier and model, as well as plots comparing the simulated cumulative glacier-wide SMBs to their reference values, with uncertainties, for each of the glaciers in the training dataset.\n

\n (4) The Glacier Geometry Update module starts with the generation of the glacier-specific parameterized functions, using the difference of the two pre-selected digital elevation model (DEM) rasters covering the \n whole study area for two separate dates, as well as the glacier contours (delta_h_alps.py). These parameterized functions are then stored in individual files to be used in the final simulations.\n

\n (5) Once all the previous steps have been run and the glacier-wide SMB models as well as the parameterized functions for all the glaciers are obtained, the final simulations are launched (glacier_evolution.py). \n For each glacier, the initial ice thickness raster and the parameterized function are retrieved. The meteorological data at the glaciers\xe2\x80\x99 centroid is re-computed with an annual time step based on each glacier\xe2\x80\x99s evolving topographical \n characteristics. These forcings are used to simulate the annual glacier-wide SMB using the machine learning model. Once an annual glacier-wide SMB value is obtained, the changes in geometry are computed using the \n parameterized function, thus updating the glacier\xe2\x80\x99s DEM and ice thickness rasters. If all the ice thickness raster pixels of a glacier become zero, the glacier is considered to have disappeared and is removed from the \n simulation pipeline. For each year, multiple results are stored in data files, as well as the raster DEM and ice thickness values for each glacier.\n
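As referenced above, a hypothetical sketch of what the boolean step flags could look like in the user input section of alpgm_interface.py (the variable names are illustrative assumptions, not ALPGM\'s actual names):

```python
# Hypothetical workflow flags: each step above is run or skipped with a boolean
preprocess_forcings   = False  # step (1): forcings already pre-processed
train_smb_model       = False  # step (2): reuse a stored SMB model
validate_smb_model    = False  # step (3): skip the LOGO cross-validation
compute_delta_h       = False  # step (4): delta-h functions already stored
run_glacier_evolution = True   # step (5): launch the final simulations
```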

\n\n## SMB machine learning model(s)\n\n

\n ALPGM simulates glacier-wide SMBs using topographical and climate data at the glacier. This repository comes with some pre-trained SMB models, but they can be retrained at will with new data. \n Retraining is important when working with a different region (outside the European Alps in this case), or when expanding the training dataset in order to improve the model\'s performance.\n

\n Two main models can be chosen for the SMB simulations:\n

\n Deep Artificial Neural Network: A deep ANN, also known as deep learning, is a complex nonlinear statistical model optimized by gradient descent. The SMB ANN models are trained with the glacier_neural_network_keras.py script in the scripts folder. ALPGM comes with already trained glacier-wide SMB models which can be used for multiple spatiotemporal simulations. Sample weights can be used in order to balance SMB datasets to better represent extreme values. As explained in Bolibar et al. (2020), this comes at the cost of an RMSE/variance tradeoff. In order to use sample weights for simulations, choose the ""ann_weights"" or ""ann_no_weights"" models in alpgm_interface.py.\n
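A minimal Keras sketch of a deep SMB regressor of this kind, including the sample weights mentioned above (the layer sizes, feature count and weighting rule are assumptions for illustration, not ALPGM\'s actual architecture):

```python
# Illustrative sketch only: a small dense SMB regressor with sample weights
import numpy as np
from tensorflow import keras

n_features = 20  # assumed number of topo-climatic predictors
model = keras.Sequential([
    keras.layers.Dense(40, activation=\'relu\', input_shape=(n_features,)),
    keras.layers.Dense(20, activation=\'relu\'),
    keras.layers.Dense(1),  # glacier-wide SMB
])
model.compile(optimizer=\'rmsprop\', loss=\'mse\')

X = np.random.rand(500, n_features)      # dummy predictors
y = np.random.randn(500)                 # dummy glacier-wide SMB values
w = np.where(np.abs(y) > 1.5, 2.0, 1.0)  # up-weight extreme SMB samples
model.fit(X, y, sample_weight=w, epochs=2, verbose=0)
```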

\n Lasso: The Lasso (Least absolute shrinkage and selection operator; Tibshirani, 1996) is a shrinkage method that attempts to overcome the shortcomings of the simpler step-wise and all-possible regressions. \n\tIn these two classical approaches, predictors are discarded in a discrete way, giving subsets of variables which have the lowest prediction error. However, due to this discrete selection, these different subsets can exhibit high variance, \n\twhich does not reduce the prediction error of the full model. The Lasso performs a more continuous regularization by shrinking some coefficients and setting others to zero, thus producing more interpretable models (Hastie et al., 2009). \n\tBecause of its properties, it strikes a balance between subset selection (like all-possible regressions) and Ridge regression (Hoerl and Kennard, 1970).\n
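Since the SMB modelling approach is built on Scikit-learn (see above), a minimal Lasso sketch looks like the following (toy data and an arbitrary regularization strength, purely for illustration):

```python
# Illustrative sketch only: Lasso regression with scikit-learn
import numpy as np
from sklearn.linear_model import Lasso

X = np.random.rand(500, 20)  # dummy topo-climatic predictors
y = np.random.randn(500)     # dummy glacier-wide SMB values
model = Lasso(alpha=0.1)     # alpha sets the shrinkage strength
model.fit(X, y)
print(model.coef_)           # some coefficients are shrunk exactly to zero
```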

\n\n## Included data\n

\n All the data needed to run the French alpine glaciers case study simulations is available in this repository: the topographical and SMB data for the glaciers,\n\tthe glacier-specific delta-h parameterized functions, and the initial glacier ice thickness for all the glaciers in the region (Farinotti et al. 2019), \n\twith the exception of the preprocessed SAFRAN (Durand et al. 2009) climate data files, which can be [downloaded here](https://www.dropbox.com/s/2kisbxk2ajaunh2/SAFRAN_meteo_data.rar?raw=1) separately due to their size.\n\t\n
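The separate SAFRAN download can be scripted from the link above (a sketch assuming a Unix shell with wget available):

```
wget -O SAFRAN_meteo_data.rar \'https://www.dropbox.com/s/2kisbxk2ajaunh2/SAFRAN_meteo_data.rar?raw=1\'
```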

\n\n## Dependencies\n

\n\tDependencies are specified in the dependency graph of this repository: https://github.com/JordiBolibar/ALPGM/network/dependencies\n'",",https://zenodo.org/badge/latestdoi/195388796","2019/07/05, 10:16:14",1573,GPL-3.0,2,88,"2023/07/07, 08:56:57",1,4,4,2,110,0,0.0,0.03703703703703709,"2020/05/09, 10:16:25",v1.2,0,3,false,,false,false,,,,,,,,,,, Glacier Mapping From Satellite Imagery,Use computer vision to automatically segment debris and ice glaciers from satellite images.,krisrs1128,https://github.com/krisrs1128/glacier_mapping.git,github,,Glacier and Ice Sheets,"2023/05/27, 14:33:23",27,0,6,true,Python,,,"Python,Jupyter Notebook,Shell,JavaScript,Batchfile,Makefile,Dockerfile",,"b'# Glacier Mapping From Satellite Imagery\n\n## Overview\nThe goal of this project is to use computer vision to automatically segment\ndebris and ice glaciers from satellite images.\n\nRequirements are specified in `requirements.txt`. The full package source is\navailable in the `glacier_mapping` directory. Raw training data are Landsat 7\ntiff images from the Hindu-Kush-Himalayan region. We consider the region of\nBhutan and Nepal. Shapefile labels of the glaciers are provided by\n[ICIMOD](www.icimod.org).\n\n## Pipeline\n\n### Overview\n\nThe full preprocessing and training can be viewed at\n[https://colab.research.google.com/drive/1ZkDtLB_2oQpSFDejKZ4YQ5MXW4c531R6?usp=sharing](https://colab.research.google.com/drive/1ZkDtLB_2oQpSFDejKZ4YQ5MXW4c531R6?usp=sharing).\nBesides the raw tiffs and shapefiles, the required inputs are:\n\n* `conf/masking_paths.yaml`: Says how to burn shapefiles into image masks.\n* `conf/postprocess.yaml`: Says how to filter and transform sliced images.\n* `conf/train.yaml`: Specifies training options.\n\nAt each step, the following intermediate files are created:\n* `generate_masks()` --> writes mask_{id}.npy\'s and mask_metadata.csv\n* `write_pair_slices()` --> writes slice_{tiff_id}_img_{slice_id}, slice_{tiff_id}_label_{slice_id}, and slice_0-100.geojson (depending on which lines from mask_metadata are sliced)\n* `postproces()` --> copies slices*npy from previous step into train/, dev/, test/ folders, and writes means and standard deviations to the path specified in postprocess.yaml\n* `glacier_mapping.train.*` --> creates the data/runs/run_name folder, containing logs/ with tensorboard logs and models/ with all checkpoints\n\n![pipeline](imgs/pipeline.jpeg)\n\n### Data Preprocessing\n\n1. **Slicing**: We slice the input tiffs into 512x512 tiles. The resulting tiles\n along with corresponding shapefile labels are stored. Metadata of the slices\n are stored in a geojson file, ```slicemetadata.geojson```. To slice, run:\n ```python3 src/slice.py```\n2. **Transformation**: For easy processing, we convert the input images and\n labels into multi-dimensional numpy ``.npy`` files.\n3. **Masking**: The input shapefiles are transformed into masks. The masks are\n needed for use as labels. This involves transforming the labels into\n multi-channel images, with each channel representing a label class, i.e. 0 =\n glacier, 1 = debris, etc. To run transformation and masking: ```python3\n src/mask.py```\n\n### Data PostProcessing\n1. **Filtering**: Returns the paths for pairs passing the filter criteria for a\n specific channel. Here we filter by the percentage of 1\'s in the filter\n channel.\n2. **Random Split**: The final dataset is saved in three folders: ``train/ test/\n dev/``.\n3. **Reshuffle**: Shuffle the images and masks in the output directory.\n4. **Imputation**: Given an input, we check for missing values (NaNs) and\n replace them with 0.\n5. **Generate stats**: Generate statistics of the input image channels: returns\n a dictionary with keys for means and standard deviations across the channels\n in input images.\n6. **Normalization**: We normalize the final dataset based on the means and\n standard deviations calculated (see the sketch below).\n
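A minimal numpy sketch of the per-channel statistics and normalization steps described in items 5 and 6 (the array shape and dictionary keys are assumptions for illustration):

```python
# Illustrative sketch only: per-channel mean/std stats and normalization
import numpy as np

slices = np.random.rand(16, 512, 512, 10)  # assumed (n_slices, h, w, channels)
stats = {
    \'means\': slices.mean(axis=(0, 1, 2)),
    \'stds\': slices.std(axis=(0, 1, 2)),
}
normalized = (slices - stats[\'means\']) / stats[\'stds\']
```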
\n![Image-Mask Pair](imgs/image_mask.png)\n\n### Model Training\nModel: Unet with dropout (default dropout rate is 0.2).\n\n## Vector data sources\nLabels: [ICIMOD](http://www.icimod.org/)\n\n* [(2000, Nepal)](http://rds.icimod.org/Home/DataDetail?metadataId=9351&searchlist=True): Polygons older/newer than 2 years from 2000 are filtered out. The original collection contains a few polygons from the 1990s.\n* [(2000, Bhutan)](http://rds.icimod.org/Home/DataDetail?metadataId=9357&searchlist=True): Used as is.\n* [(2010, Nepal)](http://rds.icimod.org/Home/DataDetail?metadataId=9348&searchlist=True): Polygons older/newer than 2 years from 2010 are filtered out. The original collection is for 1980-2010.\n* [(2010, Bhutan)](http://rds.icimod.org/Home/DataDetail?metadataId=9358&searchlist=True): Used as is.\n\n## License\n\nThe code is open source for anyone to use, as it\'s under the [MIT License](https://opensource.org/licenses/MIT).\n'",,"2019/10/09, 19:22:19",1477,GPL-3.0,5,1076,"2023/05/27, 14:33:26",6,84,90,2,151,3,0.0,0.38409090909090904,,,0,7,false,,false,false,,,,,,,,,,, captoolkit,NASA's Cryosphere Altimetry Processing Toolkit.,fspaolo,https://github.com/nasa-jpl/captoolkit.git,github,,Glacier and Ice Sheets,"2023/04/07, 21:24:41",51,0,11,true,Jupyter Notebook,NASA Jet Propulsion Laboratory,nasa-jpl,"Jupyter Notebook,Python,HTML",,"b'![splash](splash.png)\n\n# `captoolkit` - Cryosphere Altimetry Processing Toolkit\n\n[![Language](https://img.shields.io/badge/python-3.6%2B-blue.svg)](https://www.python.org/)\n[![License](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](https://github.com/fspaolo/captoolkit/blob/master/LICENSE)\n[![Documentation Status](https://readthedocs.org/projects/captoolkit/badge/?version=latest)](https://captoolkit.readthedocs.io/en/latest/?badge=latest)\n[![DOI](https://zenodo.org/badge/104787010.svg)](https://zenodo.org/badge/latestdoi/104787010)\n[![Binder](https://mybinder.org/badge_logo.svg)](https://mybinder.org/v2/gh/fspaolo/captoolkit/master) \n\n\n#### Set of tools for processing and integrating satellite and airborne (radar and laser) altimetry data.\n\n## Project leads\n\n* [Fernando Paolo](https://fpaolo.com) (fspaolo@gmail.com)\n* [Johan Nilsson](https://science.jpl.nasa.gov/people/Nilsson/) (johan.nilsson@jpl.nasa.gov)\n* [Alex Gardner](https://science.jpl.nasa.gov/people/AGardner/) (alex.s.gardner@jpl.nasa.gov)\n\nJet Propulsion Laboratory, California Institute of Technology\n\nDevelopment of the codebase was funded by the NASA Cryospheric Sciences program and the NASA MEaSUReS ITS_LIVE project through an award to Alex Gardner.\n\n## Contributors\n\n- Tyler Sutterley (tsutterl@uw.edu)\n\n## Contribution\n\nIf you would like to contribute (your own code or modifications to existing ones), just create a [pull request](https://help.github.com/en/github/collaborating-with-issues-and-pull-requests/creating-a-pull-request) or send us an email, and we will gladly add you as a contributor to the project.\n\n## Install\n\n git clone https://github.com/fspaolo/captoolkit.git\n cd captoolkit\n python setup.py install\n\n## Example\n\nRead the ICESat-2 Land Ice Height product (ATL06) in parallel and extract some 
variables using 4 cores (from the command line):\n\n readatl06.py -n 4 *.h5\n\nTo see the input arguments of each program run:\n\n program.py -h\n\nFor more information check the header of each program.\n\n## Notebooks\n\n[Introduction to HDF5 data files](https://nbviewer.jupyter.org/github/fspaolo/captoolkit/blob/master/notebooks/intro-to-hdf5.ipynb) \nHigh-level overview of the HDF5 file structure and associated tools\n\n[Reduction of ICESat-2 data files](https://nbviewer.jupyter.org/github/fspaolo/captoolkit/blob/master/notebooks/redu-is2-files.ipynb) \nSelect (ATL06) files and variables of interest and write to a simpler structure\n\n[Filtering and gridding elevation change data](https://nbviewer.jupyter.org/github/fspaolo/captoolkit/blob/master/notebooks/Gridding-rendered.ipynb) \nInterpolate and filter data to derive gridded products of elevation change\n\n## Notes\n\nThis package is under heavy development, and new tools are being added as we finish testing them (many more utilities are coming).\n\nCurrently, the individual programs work as standalone command-line utilities or editable scripts. There is no need to install the package. You can simply run the python scripts as:\n\n python program.py -a arg1 -b arg2 /path/to/files/*.h5\n\n## Tools\n\n### Reading\n\n* [`readgeo.py`](https://github.com/fspaolo/captoolkit/blob/master/doc/source/user_guide/readgeo.md) - Read Geosat and apply/remove corrections\n* [`readers.py`](https://github.com/fspaolo/captoolkit/blob/master/doc/source/user_guide/readers.md) - Read ERS 1/2 (REAPER) and apply/remove corrections\n* [`readra2.py`](https://github.com/fspaolo/captoolkit/blob/master/doc/source/user_guide/readra2.md) - Read Envisat and apply/remove corrections\n* [`readgla12.py`](https://github.com/fspaolo/captoolkit/blob/master/doc/source/user_guide/readgla12.md) - Read ICESat GLA12 Release 634 HDF5 and apply/remove corrections\n* [`readgla06.py`](https://github.com/nasa-jpl/captoolkit/blob/master/doc/source/user_guide/readgla06.md) - Read ICESat GLA12 Release 634 HDF5 and apply/remove corrections\n* [`readatl06.py`](https://github.com/fspaolo/captoolkit/blob/master/doc/source/user_guide/readatl06.md) - Read ICESat-2 ATL06 HDF5 and select specific variables\n\n### Correcting\n\n* [`corrapply.py`](https://github.com/fspaolo/captoolkit/blob/master/doc/source/user_guide/corrapply.md) - Apply a set of specified corrections to a set of variables\n* [`corrslope.py`](https://github.com/fspaolo/captoolkit/blob/master/doc/source/user_guide/corrslope.md) - Correct slope-induced errors using \'direct\' or \'relocation\' method\n* [`corrscatt.py`](https://github.com/fspaolo/captoolkit/blob/master/doc/source/user_guide/corrscatt.md) - Correct radar altimetry height to correlation with waveform parameters\n* [`corrlaser.py`](https://github.com/fspaolo/captoolkit/blob/master/doc/source/user_guide/corrlaser.md) - Compute and apply corrections for ICESat Laser 2 and 3\n\n### Filtering\n\n* [`filtst.py`](https://github.com/fspaolo/captoolkit/blob/master/doc/source/user_guide/filtst.md) - Filter point-cloud data in space and time\n* [`filtmask.py`](https://github.com/fspaolo/captoolkit/blob/master/doc/source/user_guide/filtmask.md) - Select scattered data using raster-mask, polygon or bounding box\n* [`filtnan.py`](https://github.com/fspaolo/captoolkit/blob/master/doc/source/user_guide/filtnan.md) - Check for NaNs in a given 1D variable and remove the respective ""rows""\n* 
[`filttrack.py`](https://github.com/fspaolo/captoolkit/blob/master/doc/source/user_guide/filttrack.md) - Filter satellite tracks (segments) with along-track running window\n* [`filttrackwf.py`](https://github.com/fspaolo/captoolkit/blob/master/doc/source/user_guide/filttrackwf.md) - Filter waveform tracks (segments) with along-track running window\n\n### Differencing\n\n* [`xing.py`](https://github.com/fspaolo/captoolkit/blob/master/doc/source/user_guide/xing.md) - Compute differences between two adjacent points (cal/val)\n* [`xover.py`](https://github.com/fspaolo/captoolkit/blob/master/doc/source/user_guide/xover.md) - Compute crossover values at satellite orbit intersections\n\n### Fitting\n\n* [`fittopo.py`](https://github.com/fspaolo/captoolkit/blob/master/doc/source/user_guide/fittopo.md) - Detrend data with respect to modeled topography\n* [`fitsec.py`](https://github.com/fspaolo/captoolkit/blob/master/doc/source/user_guide/fitsec.md) - Compute robust height changes using a surface-fit approach\n\n### Interpolating\n\n* [`interpgaus.py`](https://github.com/fspaolo/captoolkit/blob/master/doc/source/user_guide/interpgaus.md) - Interpolate irregular data using Gaussian Kernel\n* [`interpmed.py`](https://github.com/fspaolo/captoolkit/blob/master/doc/source/user_guide/interpmed.md) - Interpolate irregular data using a Median Kernel\n* [`interpkrig.py`](https://github.com/fspaolo/captoolkit/blob/master/doc/source/user_guide/interpkrig.md) - Interpolate irregular data using Kriging/Collocation\n\n### Gaussian Processes\n\n* [`ointerp/ointerp.py`](https://github.com/fspaolo/captoolkit/blob/master/doc/source/user_guide/ointerp/ointerp.md) - Optimal Interpolation/Gaussian Processes\n* [`ointerp/covx.py`](https://github.com/fspaolo/captoolkit/blob/master/doc/source/user_guide/ointerp/covx.md) - Calculate empirical spatial covariances from data\n* [`ointerp/covt.py`](https://github.com/fspaolo/captoolkit/blob/master/doc/source/user_guide/ointerp/covt.md) - Calculate empirical temporal covariances from data\n* [`ointerp/covfit.py`](https://github.com/fspaolo/captoolkit/blob/master/doc/source/user_guide/ointerp/covfit.md) - Fit analytical model to empirical covariances\n\n### IBE\n\n* [`ibe/corribe.py`](https://github.com/fspaolo/captoolkit/blob/master/doc/source/user_guide/ibe/corribe.md) - Compute and apply inverse barometer correction (IBE)\n* [`ibe/slp2ibe.py`](https://github.com/fspaolo/captoolkit/blob/master/doc/source/user_guide/ibe/slp2ibe.md) - Convert ERA-Interim Sea-level pressure to IBE\n* [`ibe/geteraint.py`](https://github.com/fspaolo/captoolkit/blob/master/doc/source/user_guide/ibe/geteraint.md) - Example python params to download ERA-Interim\n\n### Tides\n\n* [`tide/corrtide.py`](https://github.com/fspaolo/captoolkit/blob/master/doc/source/user_guide/tide/corrtide.md) - Compute and apply ocean and load tides corrections\n* [`tide/calc_astrol_longitudes.py`](https://github.com/fspaolo/captoolkit/blob/master/doc/source/user_guide/tide/calc_astrol_longitudes.md) - Computes the basic astronomical mean longitudes\n* [`tide/calc_delta_time.py`](https://github.com/fspaolo/captoolkit/blob/master/doc/source/user_guide/tide/calc_delta_time.md) - Calculates difference between universal and dynamic time\n* [`tide/convert_xy_ll.py`](https://github.com/fspaolo/captoolkit/blob/master/doc/source/user_guide/tide/convert_xy_ll.md) - Convert lat/lon points to and from projected coordinates\n* 
[`tide/infer_minor_corrections.py`](https://github.com/fspaolo/captoolkit/blob/master/doc/source/user_guide/tide/infer_minor_corrections.md) - Return corrections for 16 minor constituents\n* [`tide/load_constituent.py`](https://github.com/fspaolo/captoolkit/blob/master/doc/source/user_guide/tide/load_constituent.md) - Loads parameters for a given tidal constituent\n* [`tide/load_nodal_corrections.py`](https://github.com/fspaolo/captoolkit/blob/master/doc/source/user_guide/tide/load_nodal_corrections.md) - Load the nodal corrections for tidal constituents\n* [`tide/predict_tide_drift.py`](https://github.com/fspaolo/captoolkit/blob/master/doc/source/user_guide/tide/predict_tide_drift.md) - Predict tidal elevations using harmonic constants\n* [`tide/read_tide_model.py`](https://github.com/fspaolo/captoolkit/blob/master/doc/source/user_guide/tide/read_tide_model.md) - Extract tidal harmonic constants from OTIS tide models\n* [`tide/read_netcdf_model.py`](https://github.com/fspaolo/captoolkit/blob/master/doc/source/user_guide/tide/read_netcdf_model.md) - Extract tidal harmonic constants from netcdf models\n* [`tide/read_GOT_model.py`](https://github.com/fspaolo/captoolkit/blob/master/doc/source/user_guide/tide/read_GOT_model.md) - Extract tidal harmonic constants from GSFC GOT models\n\n### 2D Fields\n\n* [`gettopo.py`](https://github.com/fspaolo/captoolkit/blob/master/doc/source/user_guide/gettopo.md) - Estimate slope, aspect and curvature from given DEM\n* [`getdem.py`](https://github.com/fspaolo/captoolkit/blob/master/doc/source/user_guide/getdem.md) - Regrid mean height field (DEM) from grid-1 onto grid-2\n* [`getmsl.py`](https://github.com/fspaolo/captoolkit/blob/master/doc/source/user_guide/getmsl.md) - Calculate and extend MSL field for the ice shelves\n* [`getveloc.py`](https://github.com/fspaolo/captoolkit/blob/master/doc/source/user_guide/getveloc.md) - Combine best 2D mean field from different velocities\n* [`vregrid.py`](https://github.com/fspaolo/captoolkit/blob/master/doc/source/user_guide/vregrid.md) - Regrid velocity field onto height field\n* [`vmerge.py`](https://github.com/fspaolo/captoolkit/blob/master/doc/source/user_guide/vmerge.md) - Merge multiple velocity fields, e.g. Gardner et al. + Rignot et al.\n* [`mkmask.py`](https://github.com/fspaolo/captoolkit/blob/master/doc/source/user_guide/mkmask.md) - Compute ice shelf, basin and buffer raster masks\n\n### 3D Fields\n\n* [`cubefilt.py`](https://github.com/fspaolo/captoolkit/blob/master/doc/source/user_guide/cubefilt.md) - Filter slices (spatial) and individual time series (temporal)\n* [`cubefilt2.py`](https://github.com/fspaolo/captoolkit/blob/master/doc/source/user_guide/cubefilt2.md) - Filter time series residuals w.r.t. 
a piece-wise poly fit\n* [`cubexcal.py`](https://github.com/fspaolo/captoolkit/blob/master/doc/source/user_guide/cubexcal.md) - Cross-calibrate several data cubes with same dimensions\n* [`cubeimau.py`](https://github.com/fspaolo/captoolkit/blob/master/doc/source/user_guide/cubeimau.md) - Filter and regrid IMAU Firn cube product\n* [`cubegsfc.py`](https://github.com/fspaolo/captoolkit/blob/master/doc/source/user_guide/cubegsfc.md) - Filter and regrid GSFC Firn cube product\n* [`cubegemb.py`](https://github.com/fspaolo/captoolkit/blob/master/doc/source/user_guide/cubegemb.md) - Filter and regrid JPL Firn and SMB cube products\n* [`cubesmb.py`](https://github.com/fspaolo/captoolkit/blob/master/doc/source/user_guide/cubesmb.md) - Filter and regrid RACMO and ERA5 SMB cube products\n* [`cubethick.py`](https://github.com/fspaolo/captoolkit/blob/master/doc/source/user_guide/cubethick.md) - Compute time-variable Freeboard, Draft, and Thickness\n* [`cubediv.py`](https://github.com/fspaolo/captoolkit/blob/master/doc/source/user_guide/cubediv.md) - Compute time-variable Flux Divergence, and associated products\n* [`cubemelt.py`](https://github.com/fspaolo/captoolkit/blob/master/doc/source/user_guide/cubemelt.md) - Compute time-variable basal melt rates and mass change\n* [`cuberegrid.py`](https://github.com/fspaolo/captoolkit/blob/master/doc/source/user_guide/cuberegrid.md) - Remove spatial artifacts and regrid 3D fields\n* [`cubeerror.py`](https://github.com/fspaolo/captoolkit/blob/master/doc/source/user_guide/cubeerror.md) - Estimate and propagate cube uncertainties\n\n### Utilities\n\n* [`split.py`](https://github.com/fspaolo/captoolkit/blob/master/doc/source/user_guide/split.md) - Split large 1D HDF5 file(s) into smaller ones\n* [`merge.py`](https://github.com/fspaolo/captoolkit/blob/master/doc/source/user_guide/merge.md) - Merge several HDF5 files into a single or multiple file(s)\n* [`mergetile.py`](https://github.com/fspaolo/captoolkit/blob/master/doc/source/user_guide/mergetile.md) - Merge tiles from different missions keeping the original grid\n* [`tile.py`](https://github.com/fspaolo/captoolkit/blob/master/doc/source/user_guide/tile.md) - Tile geographical (point) data to allow parallelization\n* [`join.py`](https://github.com/fspaolo/captoolkit/blob/master/doc/source/user_guide/join.md) - Join a set of geographical tiles (from individual files)\n* [`joingrd.py`](https://github.com/fspaolo/captoolkit/blob/master/doc/source/user_guide/joingrd.md) - Join a set of geographical tiles (subgrids from individual files)\n* [`stackgrd.py`](https://github.com/fspaolo/captoolkit/blob/master/doc/source/user_guide/stackgrd.md) - Stack a set of 2D grids into a 3D cube using time information\n* [`sort.py`](https://github.com/fspaolo/captoolkit/blob/master/doc/source/user_guide/sort.md) - Sort (in place) all 1D variables in HDF5 file(s)\n* [`dummy.py`](https://github.com/fspaolo/captoolkit/blob/master/doc/source/user_guide/dummy.md) - Add dummy variables as 1D arrays to HDF5 files(s)\n* [`hdf2txt.py`](https://github.com/fspaolo/captoolkit/blob/master/doc/source/user_guide/hdf2txt.md) - Convert HDF5 (1D arrays) to ASCII tables (columns)\n* [`txt2hdf.py`](https://github.com/fspaolo/captoolkit/blob/master/doc/source/user_guide/txt2hdf.md) - Convert (very large) ASCII tables to HDF5 (1D arrays)\n* [`query.py`](https://github.com/fspaolo/captoolkit/blob/master/doc/source/user_guide/query.md) - Query entire data base (tens of thousands of HDF5 files)\n\n### Scripts\n\n* `scripts/` - This folder contains 
supporting code (generic and specific) that we have used in our analyses. We provide these scripts **as is** in case you find them useful.\n\n### Data\n\n* `data/` - The data folder contains example data files for some of the tools. See respective headers.\n'",",https://zenodo.org/badge/latestdoi/104787010","2017/09/25, 18:31:15",2221,Apache-2.0,40,419,"2022/08/05, 19:28:02",0,5,5,0,446,0,0.4,0.09668508287292821,"2020/02/12, 21:53:13",0.1.0,0,5,false,,true,false,,,https://github.com/nasa-jpl,https://www.jpl.nasa.gov,"Pasadena, California, US",,,https://avatars.githubusercontent.com/u/10360932?v=4,,, PISM,"The Parallel Ice Sheet Model is an open source, parallel, high-resolution ice sheet model.",pism,https://github.com/pism/pism.git,github,"scientific-computing,parallel,geophysics,c-plus-plus,ice-sheet,glacier,sea-level,numerical,petsc,python,sphinx,climate,mpi",Glacier and Ice Sheets,"2023/01/24, 00:14:38",88,0,12,true,C++,Parallel Ice Sheet Model,pism,"C++,Python,CMake,TeX,C,SWIG,Shell,Fortran,MATLAB,Makefile,Dockerfile,Emacs Lisp",https://pism.io/,"b'PISM, a Parallel Ice Sheet Model\n================================\n|doi|_ |gpl|_ |cipism|_\n\nThe Parallel Ice Sheet Model is an open source, parallel, high-resolution ice sheet model:\n\n- hierarchy of available stress balances\n- marine ice sheet physics, dynamic calving fronts\n- polythermal, enthalpy-based conservation of energy scheme\n- extensible coupling to atmospheric and ocean models\n- verification and validation tools\n- `documentation `_ for users and developers\n- uses MPI_ and PETSc_ for parallel simulations\n- reads and writes `CF-compliant `_ NetCDF_ files\n\nPISM is jointly developed at the `University of Alaska, Fairbanks (UAF) `_ and the\n`Potsdam Institute for Climate Impact Research (PIK) `_. UAF developers are based in\nthe `Glaciers Group `_ at the `Geophysical Institute `_.\n\nPlease see ``ACKNOWLEDGE.rst`` and ``doc/funding.csv`` for a list of grants supporting\nPISM development.\n\nHomepage\n--------\n\n http://www.pism.io/\n\nDownload and Install\n--------------------\n\nSee the `Installing PISM `_ on ``pism.io``.\n\nSupport\n-------\n\nPlease e-mail `uaf-pism@alaska.edu `_ with questions about PISM.\n\nYou can also join the PISM workspace on `Slack `_.\n\nContributing\n------------\n\nWant to contribute? Great! See `Contributing to PISM `_.\n\n.. URLs\n\n.. |doi| image:: https://zenodo.org/badge/DOI/10.5281/zenodo.1199019.svg\n.. _doi: https://doi.org/10.5281/zenodo.1199019\n.. |gpl| image:: https://img.shields.io/badge/License-GPL-green.svg\n.. _gpl: https://opensource.org/licenses/GPL-3.0\n.. |cipism| image:: https://circleci.com/gh/pism/pism/tree/main.svg?style=svg\n.. _cipism: https://circleci.com/gh/pism/pism/tree/main\n.. _uaf: http://www.uaf.edu/\n.. _pik: http://www.pik-potsdam.de/\n.. _pism-manual: http://www.pism.io/docs\n.. _pism-contributing: http://www.pism.io/docs/contributing\n.. _pism-installation: http://www.pism.io/docs/installation\n.. _mpi: http://www.mcs.anl.gov/research/projects/mpi/\n.. _petsc: http://www.mcs.anl.gov/petsc/\n.. _cf: http://cf-pcmdi.llnl.gov/\n.. _netcdf: http://www.unidata.ucar.edu/software/netcdf/\n.. _glaciers: http://www.gi.alaska.edu/snowice/glaciers/\n.. _gi: http://www.gi.alaska.edu\n.. _NASA-MAP: http://map.nasa.gov/\n.. _NASA-Cryosphere: http://ice.nasa.gov/\n.. _NSF-Polar: https://nsf.gov/geo/plr/about.jsp\n.. 
_Slack-PISM: https://join.slack.com/t/uaf-pism/shared_invite/enQtODc3Njc1ODg0ODM5LThmOTEyNjEwN2I3ZTU4YTc5OGFhNGMzOWQ1ZmUzMWUwZDAyMzRlMzBhZDg1NDY5MmQ1YWFjNDU4MDZiNTk3YmE\n.. _uaf-pism: mailto:uaf-pism@alaska.edu\n\n..\n Local Variables:\n fill-column: 90\n End:\n'",",https://doi.org/10.5281/zenodo.1199019\n","2011/11/04, 19:03:37",4373,GPL-3.0,22,9827,"2023/02/07, 00:20:10",30,45,483,2,261,5,0.2,0.2887181129256057,"2023/01/24, 00:21:33",v2.0.6,0,17,false,,true,true,,,https://github.com/pism,https://www.pism.io/,University of Alaska Fairbanks,,,https://avatars.githubusercontent.com/u/1057072?v=4,,, icepack,Finite element modeling of glaciers and ice sheets.,CICE-Consortium,https://github.com/CICE-Consortium/Icepack.git,github,,Glacier and Ice Sheets,"2023/10/18, 19:33:49",21,0,3,true,Fortran,CICE Consortium,CICE-Consortium,"Fortran,Shell,C,Makefile,TypeScript",,"b'\n[![GHActions](https://github.com/CICE-Consortium/Icepack/workflows/GHActions/badge.svg)](https://github.com/CICE-Consortium/Icepack/actions)\n[![Documentation Status](https://readthedocs.org/projects/cice-consortium-icepack/badge/?version=main)](http://cice-consortium-icepack.readthedocs.io/en/main/?badge=main)\n[![lcov](https://img.shields.io/endpoint?url=https://apcraig.github.io/coverage_icepack.json)](https://apcraig.github.io)\n\n\n\n## The Icepack sea-ice column model\nThis repository contains files for Icepack, the column physics of the sea ice model [CICE][cice]. Icepack is maintained by the CICE Consortium. For testing purposes and guidance for including Icepack in other sea ice host models, this repository also includes a driver and basic test suite. \n\nIcepack is included in CICE as a Git submodule. Because Icepack is a submodule of CICE, Icepack and CICE development are handled independently with respect to the GitHub repositories even though development and testing can be done together.\n\n[cice]: https://github.com/CICE-Consortium/CICE\n\n## Getting help\nThe first point of contact with the CICE Consortium is the Consortium Community [Forum][forum]. This forum is monitored by Consortium members and is also open to the whole community. **Please do not use our issue tracker for general support questions.**\n\n[forum]: https://xenforo.cgd.ucar.edu/cesm/forums/cice-consortium.146/\n\n## Contributing\nIf you expect to make any changes to the code, we recommend that you first fork the Icepack repository.\nIn order to incorporate your developments into the Consortium code it is imperative you follow the guidance for Pull Requests and requisite testing.\nHead over to our [Contributing][contributing] guide to learn more about how you can help improve Icepack.\n\n[contributing]: https://github.com/CICE-Consortium/About-Us/wiki/Contributing\n\n## Useful links\n* **Icepack wiki**: https://github.com/CICE-Consortium/Icepack/wiki\n\n Information about the Icepack model\n\n* **Icepack Release Table**: https://github.com/CICE-Consortium/Icepack/wiki/Icepack-Release-Table\n\n Numbered Icepack releases with associated documentation and DOIs. \n \n* **Consortium Community Forum**: https://xenforo.cgd.ucar.edu/cesm/forums/cice-consortium.146/\n \n First point of contact for discussing model development including bugs, diagnostics, and future directions. 
\n\n* **Resource Index**: https://github.com/CICE-Consortium/About-Us/wiki/Resource-Index\n\n List of resources for information about the Consortium and its repositories as well as model documentation, testing, and development.\n\n## License\nSee our [License](LICENSE.pdf) and [Distribution Policy](DistributionPolicy.pdf).\n'",,"2017/05/24, 18:09:45",2345,GPL-3.0,45,670,"2023/10/18, 17:47:29",25,337,447,65,7,1,1.8,0.6189516129032258,"2023/09/11, 18:56:31",Icepack1.3.4,0,18,false,,false,true,,,https://github.com/CICE-Consortium,,,,,https://avatars.githubusercontent.com/u/28584507?v=4,,, DeepBedMap,Using a deep neural network to better resolve the bed topography of Antarctica.,weiji14,https://github.com/weiji14/deepbedmap.git,github,"super-resolution,digital-elevation-model,deep-neural-network,data-science,antarctica,flat-file-db,glaciology,generative-adversarial-network,jupyter-notebook,chainer,remote-sensing,optuna,bedmap,pangeo,binder",Glacier and Ice Sheets,"2022/06/17, 12:49:33",39,0,6,false,Jupyter Notebook,,,"Jupyter Notebook,TeX,Python,Gherkin,Dockerfile,Shell",,"b""# DeepBedMap [[paper]](https://doi.org/10.5194/tc-14-3687-2020) [[poster]](https://github.com/weiji14/deepbedmap/issues/133) [[presentation]](https://hackmd.io/@weiji14/2022ML4Polar)\n\nGoing beyond BEDMAP2 using a super resolution deep neural network.\nAlso a convenient [flat file](https://en.wikipedia.org/wiki/Flat-file_database) data repository for high resolution bed elevation datasets around Antarctica.\n\n[![DOI](https://zenodo.org/badge/147595013.svg)](https://zenodo.org/badge/latestdoi/147595013)\n![GitHub top language](https://img.shields.io/github/languages/top/weiji14/deepbedmap.svg)\n[![Code style: black](https://img.shields.io/badge/code%20style-black-000000.svg)](https://github.com/ambv/black)\n[![Comet.ML: experiments](https://img.shields.io/badge/Comet.ml-experiments-orange.svg?logo=data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAABAAAAAQCAYAAAAf8/9hAAAABHNCSVQICAgIfAhkiAAAAhpJREFUOI2Nk01IVGEUhp/zzfW3sUgllUIEzVApS2JyIwStylYmlC3EWvWzcNGyhdAqiGpZy4ja2UYMrBgoatnCTCNRJBxBpNE0bcbx3rlviyycScF3d+B93nMO33eMPEkiRLEs9BRgdYDzUSKCDTl4bWY5/pwqlJoNBkllmjZevWVjYgrzA37V13LgwjnYUzwH9JlZPL8xodQlSWsPHynJES3VnFTiYLvelcQ0QptGiEl+Vpvqz5kglBoMplZvXsP/8J5I+X5WffiWFMpECANHai5Bx2ScaGPdX/asmY14kgAG088eE0y9IdJYSSif+ek03uJP/KQjRJR3tG+FAZ5IqrZAaowQTK5c2gv7GjEzFhPfsdPXqbnchzmPTHKJsqNN/60NdHtBSK99jkNlGoumMGUoPXaFqhu3/7kKa6q2gwHOuyJHbbY0pOjiLbyWNojOUtHVuxOQr1oHECy8xJ+5T6S1haLOu7gSt9sAXChmS07dofDEAwoarpJd/YorrdwtP+s2Ap6yNIqLHiKYeYEyK6yP3gPC3QQM2+Yzjq5/HGj1Ko4TqahHmR+4gkKIHgZvx2kWgGpnZqQDuovrO7HiMswV4pbjkJ2H9Bikxzf92fyAvpy7yAR/vrKmB6TUuLQ8LCWfS6kxKTUh+Qvaov78NAAkNUv6IklaiUtrn6RsaiuYkHRmK2PbhADE/ICeAo86wAEJYIhtzvk3y+cYpafNe/QAAAAASUVORK5CYII=)](https://www.comet.ml/weiji14/deepbedmap/)\n[![Github Actions Build Status](https://img.shields.io/endpoint.svg?url=https%3A%2F%2Factions-badge.atrox.dev%2Fweiji14%2Fdeepbedmap%2Fbadge&style=flat)](https://github.com/weiji14/deepbedmap/actions)\n[![Dependabot Status](https://api.dependabot.com/badges/status?host=github&repo=weiji14/deepbedmap)](https://dependabot.com)\n\n![DeepBedMap DEM over entire Antarctic continent, EPSG:3031 projection](https://user-images.githubusercontent.com/23487320/94385232-16dee180-01a1-11eb-83ce-8793709079ff.png)\n\n![DeepBedMap Pipeline](https://yuml.me/diagram/scruffy;dir:LR/class/[Data|Highres/Lowres/Misc]->[Preprocessing|data_prep.ipynb],[Preprocessing]->[Model-Training|srgan_train.ipynb],[Model-Training]->[Inference|deepbedmap.ipynb])\n\n

\nDirectory structure\n\n```\n deepbedmap/\n \xe2\x94\x9c\xe2\x94\x80\xe2\x94\x80 features/ (files describing the high level behaviour of various features)\n \xe2\x94\x82 \xe2\x94\x9c\xe2\x94\x80\xe2\x94\x80 *.feature... (easily understandable specifications written using the Given-When-Then gherkin language)\n \xe2\x94\x82 \xe2\x94\x94\xe2\x94\x80\xe2\x94\x80 README.md (markdown information on the feature files)\n \xe2\x94\x9c\xe2\x94\x80\xe2\x94\x80 highres/ (contains high resolution localized DEMs)\n \xe2\x94\x82 \xe2\x94\x9c\xe2\x94\x80\xe2\x94\x80 *.txt/csv/grd/xyz... (input vector file containing the point-based bed elevation data)\n \xe2\x94\x82 \xe2\x94\x9c\xe2\x94\x80\xe2\x94\x80 *.json (the pipeline file used to process the xyz point data)\n \xe2\x94\x82 \xe2\x94\x9c\xe2\x94\x80\xe2\x94\x80 *.nc (output raster netcdf files)\n \xe2\x94\x82 \xe2\x94\x94\xe2\x94\x80\xe2\x94\x80 README.md (markdown information on highres data sources)\n \xe2\x94\x9c\xe2\x94\x80\xe2\x94\x80 lowres/ (contains low resolution whole-continent DEMs)\n \xe2\x94\x82 \xe2\x94\x9c\xe2\x94\x80\xe2\x94\x80 bedmap2_bed.tif (the low resolution DEM!)\n \xe2\x94\x82 \xe2\x94\x94\xe2\x94\x80\xe2\x94\x80 README.md (markdown information on lowres data sources)\n \xe2\x94\x9c\xe2\x94\x80\xe2\x94\x80 misc/ (miscellaneous raster datasets)\n \xe2\x94\x82 \xe2\x94\x9c\xe2\x94\x80\xe2\x94\x80 *.tif (Surface DEMs, Ice Flow Velocity, etc. See list in Issue #9)\n \xe2\x94\x82 \xe2\x94\x94\xe2\x94\x80\xe2\x94\x80 README.md (markdown information on miscellaneous data sources)\n \xe2\x94\x9c\xe2\x94\x80\xe2\x94\x80 model/ (*hidden in git, neural network model related files)\n \xe2\x94\x82 \xe2\x94\x9c\xe2\x94\x80\xe2\x94\x80 train/ (a place to store the raster tile bounds and model training data)\n \xe2\x94\x82 \xe2\x94\x94\xe2\x94\x80\xe2\x94\x80 weights/ (contains the neural network model's architecture and weights)\n \xe2\x94\x9c\xe2\x94\x80\xe2\x94\x80 .env (environment variable config file used by pipenv)\n \xe2\x94\x9c\xe2\x94\x80\xe2\x94\x80 .ignore (files ignored by a particular piece of software)\n \xe2\x94\x9c\xe2\x94\x80\xe2\x94\x80 . (stuff to make the code in this repo look and run nicely e.g. 
linters, CI/CD config files, etc)\n \xe2\x94\x9c\xe2\x94\x80\xe2\x94\x80 Dockerfile (set of commands to fully reproduce the software stack here into a docker image, used by binder)\n \xe2\x94\x9c\xe2\x94\x80\xe2\x94\x80 LICENSE.md (the license covering this repository)\n \xe2\x94\x9c\xe2\x94\x80\xe2\x94\x80 Pipfile (what you want, the summary list of core python dependencies)\n \xe2\x94\x9c\xe2\x94\x80\xe2\x94\x80 Pipfile.lock (what you need, all the pinned python dependencies for full reproducibility)\n \xe2\x94\x9c\xe2\x94\x80\xe2\x94\x80 README.md (the markdown file you're reading now)\n \xe2\x94\x9c\xe2\x94\x80\xe2\x94\x80 data_list.yml (human and machine readable list of the datasets and their metadata)\n \xe2\x94\x9c\xe2\x94\x80\xe2\x94\x80 data_prep.ipynb/py (paired jupyter notebook/python script that prepares the data)\n \xe2\x94\x9c\xe2\x94\x80\xe2\x94\x80 deepbedmap.ipynb/py (paired jupyter notebook/python script that predicts an Antarctic bed digital elevation model)\n \xe2\x94\x9c\xe2\x94\x80\xe2\x94\x80 environment.yml (conda binary packages to install)\n \xe2\x94\x9c\xe2\x94\x80\xe2\x94\x80 paper_figures.ipynb/py (paired jupyter notebook/python script to produce figures for DeepBedMap paper\n \xe2\x94\x9c\xe2\x94\x80\xe2\x94\x80 srgan_train.ipynb/py (paired jupyter notebook/python script that trains the ESRGAN neural network model)\n \xe2\x94\x94\xe2\x94\x80\xe2\x94\x80 test_ipynb.ipynb/py (paired jupyter notebook/python script that runs doctests in the other jupyter notebooks!)\n```\n
\n\n# Getting started\n\n## Quickstart\n\nLaunch in [Binder](https://mybinder.readthedocs.io) (Interactive jupyter notebook/lab environment in the cloud).\n\n[![Binder](https://mybinder.org/badge_logo.svg)](https://mybinder.org/v2/gh/weiji14/deepbedmap/master)\n[![Open All Collab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/weiji14/deepbedmap)\n\n## Installation\n\n![Installation steps](https://yuml.me/diagram/scruffy/class/[Git|clone-repo]->[Conda|install-binaries-and-pipenv],[Conda]->[Pipenv|install-python-libs])\n\nStart by cloning this [repo-url](/../../)\n\n git clone \n\nThen I recommend [using conda](https://conda.io/projects/conda/en/latest/user-guide/install/index.html) to install the non-python binaries (e.g. GMT, CUDA, etc).\nThe conda virtual environment will also be created with Python and [pipenv](https://pipenv.readthedocs.io) installed.\n\n cd deepbedmap\n conda env create -f environment.yml\n\nActivate the conda environment first.\n\n conda activate deepbedmap\n\nThen set some environment variables **before** using pipenv to install the necessary python libraries,\notherwise you may encounter some problems (see Common problems below).\nYou may want to ensure that `which pipenv` returns something similar to ~/.conda/envs/deepbedmap/bin/pipenv.\n\n export HDF5_DIR=$CONDA_PREFIX/\n export LD_LIBRARY_PATH=$CONDA_PREFIX/lib/\n pipenv install --python $CONDA_PREFIX/bin/python --dev\n #or just\n HDF5_DIR=$CONDA_PREFIX/ LD_LIBRARY_PATH=$CONDA_PREFIX/lib/ pipenv install --python $CONDA_PREFIX/bin/python --dev\n\nFinally, double-check that the libraries have been installed.\n\n pipenv graph\n\n### Syncing/Updating to new dependencies\n\n conda env update -f environment.yml\n pipenv sync --dev\n\n### Common problems\n\nNote that the [.env](https://pipenv.readthedocs.io/en/latest/advanced/#configuration-with-environment-variables) file stores some environment variables.\nSo if you run `conda activate deepbedmap` followed by some other command and get an `...error while loading shared libraries: libpython3.7m.so.1.0...`,\nyou may need to run `pipenv shell` or do `pipenv run ` to have those environment variables registered properly.\nOr just run this first:\n\n export LD_LIBRARY_PATH=$CONDA_PREFIX/lib/\n\nAlso, if you get a problem when using `pipenv` to install [netcdf4](https://github.com/Unidata/netcdf4-python), make sure you have done:\n\n export HDF5_DIR=$CONDA_PREFIX/\n\nand then you can try using `pipenv install` or `pipenv sync` again.\nSee also this [issue](https://github.com/pydata/xarray/issues/3185#issuecomment-520693149) for more information.\n\n## Running jupyter lab\n\n conda activate deepbedmap\n pipenv shell\n\n python -m ipykernel install --user --name deepbedmap #to install conda env properly\n jupyter kernelspec list --json #see if kernel is installed\n jupyter lab &\n\n## Citing\n\nThe paper is published at [The Cryosphere](https://www.the-cryosphere.net) and can be referred to using the following BibTeX code:\n\n @Article{tc-14-3687-2020,\n AUTHOR = {Leong, W. J. and Horgan, H. 
J.},\n TITLE = {DeepBedMap: a deep neural network for resolving the bed topography of Antarctica},\n JOURNAL = {The Cryosphere},\n VOLUME = {14},\n YEAR = {2020},\n NUMBER = {11},\n PAGES = {3687--3705},\n URL = {https://tc.copernicus.org/articles/14/3687/2020/},\n DOI = {10.5194/tc-14-3687-2020}\n }\n\nThe DeepBedMap_DEM v1.1.0 dataset is available from Zenodo at https://doi.org/10.5281/zenodo.4054246.\nNeural network model training experiment runs are also recorded at https://www.comet.ml/weiji14/deepbedmap.\nPython code for the DeepBedMap model here on Github is also mirrored on Zenodo at https://doi.org/10.5281/zenodo.3752613.\n""",",https://doi.org/10.5194/tc-14-3687-2020,https://zenodo.org/badge/latestdoi/147595013,https://doi.org/10.5281/zenodo.4054246.\nNeural,https://doi.org/10.5281/zenodo.3752613.\n","2018/09/06, 00:28:32",1876,LGPL-3.0,0,509,"2022/06/17, 12:45:54",6,156,188,0,495,2,0.3,0.18803418803418803,"2020/11/05, 22:36:23",v1.1.0,0,5,false,,false,false,,,,,,,,,,, SIS2,"Calculates the concentration, thickness, temperature, brine content and snow cover of an arbitrary number of ice thickness categories (including open water) as well as the motion of the complete pack.",NOAA-GFDL,https://github.com/NOAA-GFDL/SIS2.git,github,,Glacier and Ice Sheets,"2023/10/23, 20:17:03",13,0,1,true,Fortran,NOAA - Geophysical Fluid Dynamics Laboratory,NOAA-GFDL,"Fortran,C",,"b'# SIS2\n\nNOAA-GFDL\'s Sea Ice Simulator version 2\n\n# Disclaimer\n\nThe United States Department of Commerce (DOC) GitHub project code is provided\non an ""as is"" basis and the user assumes responsibility for its use. DOC has\nrelinquished control of the information and no longer has responsibility to\nprotect the integrity, confidentiality, or availability of the information. Any\nclaims against the Department of Commerce stemming from the use of its GitHub\nproject will be governed by all applicable Federal law. Any reference to\nspecific commercial products, processes, or services by service mark,\ntrademark, manufacturer, or otherwise, does not constitute or imply their\nendorsement, recommendation or favoring by the Department of Commerce. 
The\nDepartment of Commerce seal and logo, or the seal and logo of a DOC bureau,\nshall not be used in any manner to imply endorsement of any commercial product\nor activity by DOC or the United States Government.\n\nThis project code is made available through GitHub but is managed by NOAA-GFDL\nat https://gitlab.gfdl.noaa.gov.\n'",,"2014/02/10, 15:06:39",3544,CUSTOM,12,1410,"2023/10/23, 20:17:03",14,141,189,14,2,3,1.4,0.13259195893926434,"2014/12/15, 23:18:40",ulm,0,16,false,,false,false,,,https://github.com/NOAA-GFDL,www.gfdl.noaa.gov,"Princeton, New Jersey",,,https://avatars.githubusercontent.com/u/11219395?v=4,,, freshwater,Greenland liquid water runoff from 1958 through 2019.,GEUS-PROMICE,https://github.com/GEUS-Glaciology-and-Climate/freshwater.git,github,"scientific-workflows,greenland,org-mode,grass-gis,water",Glacier and Ice Sheets,"2023/10/06, 00:51:30",9,0,2,true,TeX,GEUS Glaciology and Climate,GEUS-Glaciology-and-Climate,"TeX,Python",https://doi.org/10.5194/essd-12-2811-2020,"b'\n* Table of contents :toc_5:noexport:\n- [[#greenland-liquid-water-discharge-from-1950-through-2021][Greenland liquid water discharge from 1950 through 2021]]\n - [[#updates-since-last-publication][Updates since last publication]]\n - [[#v-2023][v 2023]]\n - [[#v-2022-10][v 2022-10]]\n - [[#v-2022-08][v 2022-08]]\n- [[#warning][WARNING]]\n- [[#related-work][Related Work]]\n- [[#citation][Citation]]\n- [[#funding][Funding]]\n- [[#accessing-this-data][Accessing this data]]\n - [[#introduction][Introduction]]\n - [[#database-format][Database Format]]\n - [[#warnings][Warnings]]\n - [[#requirements][Requirements]]\n - [[#examples][Examples]]\n - [[#command-line-interface][Command line interface]]\n - [[#usage-instructions][Usage Instructions]]\n - [[#outlets-and-basins][Outlets and basins]]\n - [[#one-point][One point]]\n - [[#polygon-covering-multiple-land-and-ice-outlets][Polygon covering multiple land and ice outlets]]\n - [[#discharge][Discharge]]\n - [[#one-point-1][One point]]\n - [[#polygon-covering-multiple-land-and-ice-outlets-1][Polygon covering multiple land and ice outlets]]\n - [[#python-api][Python API]]\n - [[#outlets-and-basins-1][Outlets and basins]]\n - [[#one-point-2][One point]]\n - [[#polygon-covering-multiple-land-and-ice-outlets-2][Polygon covering multiple land and ice outlets]]\n - [[#discharge-1][Discharge]]\n - [[#one-point-3][One point]]\n - [[#polygon-covering-multiple-land-and-ice-outlets-3][Polygon covering multiple land and ice outlets]]\n\n* Greenland liquid water discharge from 1950 through 2021\n\nThis is the source for ""Greenland liquid water discharge from 1958 through 2019"" and subsequent versions. 
\n\n+ The paper is located at https://doi.org/10.5194/essd-12-2811-2020.\n+ The data sets are located at [[https://doi.org/10.22008/promice/freshwater][doi:10.22008/promice/freshwater]]\n+ Companion paper: ""Greenland Ice Sheet solid ice discharge from 1986 through 2019""\n + Publication: [[https://doi.org/10.5194/essd-12-1367-2020][doi:10.5194/essd-12-1367-2020]]\n + Source: https://github.com/GEUS-PROMICE/ice_discharge/\n + Data: [[https://doi.org/10.22008/promice/data/ice_discharge][doi:10.22008/promice/data/ice_discharge]]\n + Contains basins for [[https://doi.org/10.22008/FK2/KIDYD1][k=0.8]] (ice only), [[https://doi.org/10.22008/FK2/TARK8O][0.9]] (ice only), and [[https://doi.org/10.22008/FK2/XKQVL7][1.0]] (ice and land) scenarios\n + Discharge data is included in the [[https://doi.org/10.22008/FK2/XKQVL7][Streams, Outlets, and Basins (k=1.0)]]\n\n\nThe source for this work is hosted on GitHub at https://github.com/GEUS-PROMICE/freshwater. GitHub [[https://github.com/mankoff/freshwater/issues?utf8=%E2%9C%93&q=is%3Aissue][issues]] are used to collect suggested improvements to the paper or problems that made it through review. The work may be under active development, including updating data (and therefore tables) within the source document.\n+ This [[https://github.com/mankoff/freshwater/compare/10.5194/essd-12-2811-2020...main][diff]] shows changes between the published version of the paper and the current (active) development version.\n + Issues tagged [[https://github.com/GEUS-Glaciology-and-Climate/freshwater/issues?q=label%3Amajor_change][major change]] are worth noting and may be significant enough to warrant an update to the paper.\n+ The source for the active development version can be viewed at https://github.com/GEUS-PROMICE/freshwater/tree/main\n+ The source for the published paper can be viewed at https://github.com/GEUS-PROMICE/freshwater/tree/10.5194/essd-12-2811-2020\n\n** Updates since last publication\n\n*** v 2023\n\n[[https://github.com/GEUS-Glaciology-and-Climate/freshwater/tree/release_2023][release_2023]] has the following changes from [[https://github.com/GEUS-Glaciology-and-Climate/freshwater/tree/release_2022][v2022-08]]. 
See [[https://github.com/GEUS-Glaciology-and-Climate/freshwater/milestone/1][2023 Milestone]], https://github.com/GEUS-Glaciology-and-Climate/freshwater/compare/release_2022...release_2023, and the git log for more details.\n\n+ Added 2022 RACMO data\n+ Added 2022 MAR data\n+ Updated MAR data from 3.12 to 3.13\n+ Updated BedMachine from v4 to v5\n+ Updated ArcticDEM from v3.0 to v4.1\n\n*** v 2022-10\n\nv 2022-10 has the following changes:\n+ No change to the data values\n+ Data product has been reformatted to 4 NetCDF files: One per RCM (2) and domain (2), each containing all years of data\n + E.g., MAR_ice, MAR_land, RACMO_ice, RACMO_land\n+ Data files are now part of the dataset that contains the Geopackages of streams, outlets, and basins, rather than their own dataset on the dataverse\n + DOI: 10.22008/FK2/AA6MTB has been Deaccessioned\n + DOI: [[https://doi.org/10.22008/FK2/XKQVL7][10.22008/FK2/XKQVL7]] now contains the discharge data\n+ The README has been updated to show some additional examples using the metadata added in v3\n\n*** v 2022-08\n\nv 2022-08 has the following changes (see the GitHub diff above for more details):\n+ Update from BedMachine v3 to v4\n+ Data now spans 1950 through 2021, instead of 1958 through 2019\n+ Internal NetCDF variable renamed from \'runoff\' to \'discharge\'\n+ Significant improvement in metadata supporting better query by basin, region, or sector\n+ Recognition that land runoff with depth << 0 is valid\n
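\nAs a quick check of a downloaded product, one can open one of the NetCDF files with =xarray=. This is a minimal sketch only: the =./freshwater/ice/MAR.nc= path assumes the folder layout described under Database Format below, and =discharge= is the variable name as of v 2022-08.\n\n#+BEGIN_SRC jupyter-python :exports code\nimport xarray as xr\n\n# minimal sketch: open the MAR ice-domain file and confirm the renamed variable\nds = xr.open_dataset(\'./freshwater/ice/MAR.nc\')\nprint(ds[\'discharge\'])  # named \'runoff\' before v 2022-08\n#+END_SRC\n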
\n* WARNING\n\n+ Bugs may exist in this data or the [[./discharge.py]] access script. All known bugs will be documented at [[https://github.com/GEUS-PROMICE/freshwater/issues]]. Before using this software or finalizing results, you should check if any [[https://github.com/mankoff/freshwater/issues][open issues]] impact your results, or if any issues have been [[https://github.com/mankoff/freshwater/issues?q=is%3Aissue+is%3Aclosed][closed]] since you downloaded the data or script.\n\n* Related Work\n\n+ Companion paper: ""Greenland ice sheet mass balance from 1840 through next week""\n + Publication: [[https://doi.org/10.5194/essd-13-5001-2021][doi:10.5194/essd-13-5001-2021]]\n + Source: https://github.com/GEUS-Glaciology-and-Climate/mass_balance\n + Data: https://doi.org/10.22008/FK2/OHI23Z\n\n+ Companion paper: ""Greenland Ice Sheet solid ice discharge from 1986 through March 2020""\n + Publication: [[https://doi.org/10.5194/essd-12-1367-2020][doi:10.5194/essd-12-1367-2020]]\n + Source: https://github.com/GEUS-PROMICE/ice_discharge/\n + Data: [[https://doi.org/10.22008/promice/data/ice_discharge][doi:10.22008/promice/data/ice_discharge]]\n\n* Citation\n\n#+BEGIN_EXAMPLE\n@article{mankoff_2020_liquid,\n author = {Mankoff, Kenneth D. and Noël, Brice and Fettweis, Xavier and Ahlstrøm, Andreas P. and\n Colgan, William and Kondo, Ken and Langley, Kirsty and Sugiyama, Shin and van As,\n Dirk and Fausto, Robert S.},\n title = {{G}reenland liquid water discharge from 1958 through 2019},\n journal = {Earth System Science Data},\n year \t = 2020,\n volume = 12,\n number = 4,\n pages = {2811–2841},\n month = 11,\n DOI \t = {10.5194/essd-12-2811-2020},\n publisher = {Copernicus GmbH}\n}\n#+END_EXAMPLE\n\n* Funding\n\n| Dates | Organization | Program | Effort |\n|--------------+--------------+-------------------------------------------+----------------------------------------|\n| 2023 -- | NASA GISS | Modeling Analysis and Prediction program | Maintenance |\n| 2022 -- | GEUS | PROMICE | Distribution (data hosting) |\n| 2018 -- 2022 | GEUS | PROMICE | Development; publication; distribution |\n
\n\n* Accessing this data\n** Introduction\n\nNOTE: Data can be accessed directly from the NetCDF files. Querying the NetCDF files directly allows more advanced queries on the metadata, for example, `all outlets with Jakobshavn Isbræ as the nearest discharge gate, excluding outlets more than 5 km away`. The `5 km` filter removes stream discharge from Disko Island, which has Jakobshavn Isbræ as the nearest discharge gate but should not be counted as discharge from that basin.\n\nAs an example, it is easiest to begin by working with the outlets, saving subsetted data, and visually checking it in QGIS; then, when your algorithm appears to work, apply the same query to the discharge NetCDF files. Example:\n\n#+BEGIN_SRC jupyter-python :exports code\nimport pandas as pd\nimport geopandas as gp\n\ndf = pd.read_csv(\'./freshwater/ice/outlets.csv\', index_col=0)\ngdf = gp.GeoDataFrame(df, geometry=gp.points_from_xy(df[\'lon\'],df[\'lat\']))\n\n# select subglacial discharge within 2.5 km of basins\ngdf = gdf[(gdf[\'elev\'] < -10) &\n (gdf[\'M2019_ID_dist\'] < 2500)]\n\ngdf.to_file(""foo.gpkg"", driver=""GPKG"")\n#+END_SRC\n\nSimilar queries might include (see the sketch below):\n+ Pandas =groupby= to combine outlets per gate, basin, sector, or region\n+ Examining the ice outlet location and the downstream coastal outlet location. If the two are the same, then the outlet is marine terminating. This may give better results than querying based on the BedMachine-provided =elev= metadata.\n
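\nA minimal sketch of the first kind of query, assuming the =M2019_region= and =elev= columns shown in the outlet tables below (any of the gate, basin, sector, or region columns works the same way):\n\n#+BEGIN_SRC jupyter-python :exports code\nimport pandas as pd\n\ndf = pd.read_csv(\'./freshwater/ice/outlets.csv\', index_col=0)\n\n# combine outlets per region: count them and average their elevation\nprint(df.groupby(\'M2019_region\').agg(n_outlets=(\'lon\', \'count\'), mean_elev=(\'elev\', \'mean\')))\n#+END_SRC\n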
\nIf you prefer not to access the NetCDF files directly, the =discharge.py= script (run after the data have been downloaded) allows access to outlets, basins, and their discharge within a region of interest (ROI). The ROI can be a point, a list describing a polygon, or a file. Optionally, upstream outlets, basins, and discharge from any land outlet(s) can be included. The script can be called from the command line (CLI) or within Python.\n\nThe ROI coordinate units can be either EPSG:4326 (lon,lat) or EPSG:3413. The units for the coordinates are guessed using the range of values. If the ROI is a point, basins that contain that point are selected. Either one (if the point is on land) or two (ice and the downstream land, if the point is on the ice) basins are selected, and optionally, all ice basins upstream from the one land basin. If the ROI is a polygon, all outlets within the polygon are selected. The polygon does not have to be closed - a convex hull is wrapped around it. If the argument is a file (e.g. a KML file) then the first polygon is selected and used.\n\nWhen the script is run from the command line, CSV data is written to =stdout= and can be redirected to a file. When the API is accessed from within Python, if the script is used to access outlets, a =GeoPandas= =GeoDataFrame= is returned and can be used for further analysis within Python, or written to any file format supported by =GeoPandas= or =Pandas=, for example =CSV=, or =GeoPackage= for =QGIS=. If the script is used to access discharge, an =xarray= =Dataset= is returned, and can be used for further analysis within Python, or written to any file format supported by =xarray=, for example =CSV= or =NetCDF=.\n\n*** Database Format\n\n+ The =cat= column in the CSV files links to the =station= vector in the NetCDF.\n\nThis script queries two databases:\n\n+ land :: The land coast outlets and land basins.\n+ ice :: ice margin outlets and ice basins.\n\nThe folder structure required is a =root= folder (named =freshwater= in the examples below, but can be anything) and then a =land= and =ice= sub-folder. The geospatial files for =land= and =ice= must be in these folders (i.e. the k=1.0 Streams, Outlets, and Basins dataset from https://dataverse.geus.dk/dataverse/freshwater), along with a =MAR.nc= and =RACMO.nc= in each of the =land= and =ice= folders.\n\nExample:\n\n#+BEGIN_SRC bash :results verbatim :exports results\nfind ./freshwater/land/ ./freshwater/ice/ -maxdepth 1 | sort\n#+END_SRC\n\n#+RESULTS:\n#+begin_example\n./freshwater/ice/\n./freshwater/ice/basins.csv\n./freshwater/ice/basins_filled.gpkg\n./freshwater/ice/basins.gpkg\n./freshwater/ice/MAR.nc\n./freshwater/ice/outlets.csv\n./freshwater/ice/outlets.gpkg\n./freshwater/ice/RACMO.nc\n./freshwater/ice/streams.csv\n./freshwater/ice/streams.gpkg\n./freshwater/land/\n./freshwater/land/basins.csv\n./freshwater/land/basins_filled.gpkg\n./freshwater/land/basins.gpkg\n./freshwater/land/MAR.nc\n./freshwater/land/outlets.csv\n./freshwater/land/outlets.gpkg\n./freshwater/land/RACMO.nc\n./freshwater/land/streams.csv\n./freshwater/land/streams.gpkg\n#+end_example\n\n*** Warnings\n\n+ The script takes a few seconds to query the outlets and basins. The script takes tens of seconds to query each of the discharge time series datasets. Because there may be up to 6 discharge queries (2 RCMs for each of 1 land domain + ice domain + upstream ice), it can take several minutes even on a fast laptop to extract the data. To track progress, do not set the =quiet= flag to =True=.\n\n+ If a polygon includes ice outlets, and the ~upstream~ flag is set, some ice outlets, basins, and discharge may be included twice, once as a ""direct"" selection within the polygon and once as an upstream outlet and basin from the land polygon. Further processing by the user can remove duplicates (see the sketch after this list, and the examples below).\n\n+ The =id= column may not be unique for multiple reasons:\n + As above, the same outlet may be included twice.\n + =id=\'s are unique within a dataset (i.e. =land= and =ice=), but not between datasets.\n\n+ Due to bash command-line parsing behavior, the syntax =--roi -60,60= does not work. Use ~--roi=-60,60~.\n\n+ Longitude is expected in degrees East, and should therefore probably be negative.\n\n+ The =cat= column in the CSV files links to the =station= vector in the NetCDF.\n\n+ If possible, avoid using index-based lookups, and query based on location or =station=.\n
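\nA minimal sketch of removing such duplicates with pandas (the CSV file name here is hypothetical; =id= and =domain= are columns shown in the outlet tables below):\n\n#+BEGIN_SRC jupyter-python :exports code\nimport pandas as pd\n\n# hypothetical file, e.g. written from an outlets query run with the upstream flag set\ndf = pd.read_csv(\'selected_outlets.csv\', index_col=0)\n\n# an outlet selected both directly and as an upstream outlet appears twice\ndf = df.drop_duplicates(subset=[\'id\', \'domain\'], keep=\'first\')\n#+END_SRC\n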
\n*** Requirements\n:PROPERTIES:\n:header-args:jupyter-python: :kernel freshwater_user :session using :eval no-export\n:END:\n\nSee the =environment.yml= file in the Git repository, or\n\n#+BEGIN_SRC bash\nmamba create -n freshwater_user python=3.7 xarray=0.20.2 fiona=1.8.21 shapely=1.8.2 geopandas=0.7.0 netcdf4=1.6.0 dask=2.15.0\nmamba activate freshwater_user\n#+END_SRC\n\n** Examples\n:PROPERTIES:\n:header-args:jupyter-python: :kernel freshwater :session using :eval no-export :exports both\n:header-args:bash: :eval no-export :session ""*freshwater-shell*"" :results verbatim :exports both :prologue conda activate freshwater_user\n:END:\n\n*** Command line interface\n**** Usage Instructions\n\n# (setq org-babel-min-lines-for-block-output 100)\n\n#+BEGIN_SRC bash :exports both\npython ./discharge.py -h\n#+END_SRC\n\n#+RESULTS:\n#+begin_example\nusage: discharge.py [-h] --base BASE --roi ROI [-u] (-o | -d) [-q]\n\nDischarge data access\n\noptional arguments:\n -h, --help show this help message and exit\n --base BASE Folder containing freshwater data\n --roi ROI x,y OR lon,lat OR x0,y0 x1,y1 ... xn,yn OR lon0,lat0 lon1,lat1 ... lon_n,lat_n. [lon: degrees E]\n -u, --upstream Include upstream ice outlets draining into land basins\n -o, --outlets Return outlet IDs (same as basin IDs)\n -d, --discharge Return RCM discharge for each domain (outlets merged)\n -q, --quiet Be quiet\n#+end_example\n\n**** Outlets and basins\n***** One point\n\nThe simplest example is a point, in this case near the Watson River outlet. 
Because we select one point over land and do not request upstream outlets and basins, only one row should be returned.\n\n#+BEGIN_SRC bash :exports both :results table\npython ./discharge.py --base ./freshwater --roi=-50.5,67.2 -o -q\n#+END_SRC\n\n#+RESULTS:\n| index | id | lon | lat | x | y | elev | Z2012_sector | Z2012_sector_dist | M2019_ID | M2019_ID_dist | M2019_basin | M2019_region | M2020_gate | M2020_gate_dist | B2015_name | B2015_dist | domain | upstream | coast_id | coast_lon | coast_lat | coast_x | coast_y |\n| 0 | 121108 | -51.219 | 67.153 | -271550 | -2492150 | 4 | 62 | 38320 | 71 | 38035 | ISUNNGUATA-RUSSELL | SW | 195 | 193828 | Isunnguata Sermia | 45930 | land | False | -1 | | | -1 | -1 |\n\nIf we move 10° east to somewhere over the ice, there should be two rows: one for the land outlet and basin, and one more for the ice outlet and basin:\n\n#+BEGIN_SRC bash :exports both :results table\npython ./discharge.py --base ./freshwater --roi=-40.5,67.2 -o -q\n#+END_SRC\n\n#+RESULTS:\n| index | id | lon | lat | x | y | elev | Z2012_sector | Z2012_sector_dist | M2019_ID | M2019_ID_dist | M2019_basin | M2019_region | M2020_gate | M2020_gate_dist | B2015_name | B2015_dist | domain | upstream | coast_id | coast_lon | coast_lat | coast_x | coast_y |\n| 0 | 126875 | -38.071 | 66.330 | 313650 | -2580750 | -187 | 41 | 5796 | 63 | 0 | HELHEIMGLETSCHER | SE | 231 | 9650 | Helheim Gletsjer | 11776 | land | False | -1 | | | -1 | -1 |\n| 1 | 67985 | -38.110 | 66.333 | 311850 | -2580650 | -244 | 41 | 4177 | 63 | 0 | HELHEIMGLETSCHER | SE | 231 | 7850 | Helheim Gletsjer | 10042 | ice | False | 126875 | -38.071 | 66.330 | 313650 | -2580750 |\n\n***** Polygon covering multiple land and ice outlets\n\nHere a polygon covers several land outlets near the end of a fjord, and several ice outlets of the nearby ice margin. In addition, we request all ice outlets upstream of all selected land basins.\n\nWe use the following simple KML file for our ROI (this can be copied-and-pasted into the Google Earth side-bar to see it). 
Rather than use this file with ~--roi=/path/to/file.kml~, we use the coordinates directly, and demonstrate dropping the last coordinate because the code will wrap any polygon in a convex hull.\n\n#+BEGIN_SRC xml\n<?xml version=""1.0"" encoding=""UTF-8""?>\n<kml xmlns=""http://www.opengis.net/kml/2.2"">\n <Placemark>\n <name>Ice and Land Sample</name>\n <description>ice and land</description>\n <Polygon>\n <tessellate>1</tessellate>\n <outerBoundaryIs>\n <LinearRing>\n <coordinates>\n -51.50,66.93 -51.21,66.74 -49.44,66.91 -49.84,67.18 -51.50,66.93\n </coordinates>\n </LinearRing>\n </outerBoundaryIs>\n </Polygon>\n </Placemark>\n</kml>\n#+END_SRC\n\nIn this example, we query for upstream outlets, and for brevity show just the first few and last few lines.\n\n#+BEGIN_SRC bash :results table :exports both\npython ./discharge.py --base ./freshwater --roi=""-51.50,66.93 -51.21,66.74 -49.44,66.91 -49.84,67.18"" -q -u -o | (head -n3 ;tail -n4)\n#+END_SRC\n\n#+RESULTS:\n| index | id | lon | lat | x | y | elev | Z2012_sector | Z2012_sector_dist | M2019_ID | M2019_ID_dist | M2019_basin | M2019_region | M2020_gate | M2020_gate_dist | B2015_name | B2015_dist | domain | upstream | coast_id | coast_lon | coast_lat | coast_x | coast_y |\n| 0 | 122055 | -50.713 | 67.002 | -251250 | -2511450 | 20 | 62 | 22184 | 71 | 22906 | ISUNNGUATA-RUSSELL | SW | 195 | 207779 | Isunnguata Sermia | 31644 | land | False | -1 | | | -1 | -1 |\n| 1 | 122222 | -50.735 | 66.988 | -252350 | -2512850 | 7 | 62 | 23683 | 71 | 24427 | ISUNNGUATA-RUSSELL | SW | 195 | 209355 | Isunnguata Sermia | 33360 | land | False | -1 | | | -1 | -1 |\n| 203 | 67946 | -49.521 | 66.438 | -203950 | -2579550 | 767 | 62 | 0 | 40 | 0 | SAQQAP-MAJORQAQ-SOUTHTERRUSSEL_SOUTHQUARUSSEL | SW | 262 | 199999 | Quantum Gletsjer | 80065 | ice | True | 123466 | -50.652 | 66.868 | -250050 | -2526750 |\n| 204 | 68014 | -49.544 | 66.419 | -205150 | -2581550 | 825 | 62 | 0 | 40 | 184 | SAQQAP-MAJORQAQ-SOUTHTERRUSSEL_SOUTHQUARUSSEL | SW | 262 | 197830 | Quantum Gletsjer | 78386 | ice | True | 123466 | -50.652 | 66.868 | -250050 | -2526750 |\n| 205 | 68056 | -49.535 | 66.407 | -204850 | -2582950 | 859 | 62 | 0 | 40 | 0 | SAQQAP-MAJORQAQ-SOUTHTERRUSSEL_SOUTHQUARUSSEL | SW | 262 | 196497 | Quantum Gletsjer | 78340 | ice | True | 123466 | -50.652 | 66.868 | -250050 | -2526750 |\n\n**** Discharge\n\nThe discharge examples here use the same code as the ""outlets and basins"" examples above, except we use =--discharge= rather than =--outlets=.\n\n***** One point\n\nThe simplest example is a point, in this case near the Watson River outlet. Because we select one point over land and do not request upstream outlets and basins, two time series should be returned: =MAR_land= and =RACMO_land=. Rather than showing results for every day from 1958 through 2019, we limit to the header and the first 10 days of June, 2012.\n\n#+BEGIN_SRC bash :exports both :results table\npython ./discharge.py --base ./freshwater --roi=-50.5,67.2 -q -d | (head -n1; grep -A9 ""^2012-06-01"")\n#+END_SRC\n\n#+RESULTS:\n| time | MAR_land | RACMO_land |\n| 2012-06-01 | 11.893755 | 0.029936 |\n| 2012-06-02 | 10.126999 | 0.001237 |\n| 2012-06-03 | 8.114753 | 0.001323 |\n| 2012-06-04 | 3.970580 | 0.000000 |\n| 2012-06-05 | 0.313908 | -0.001191 |\n| 2012-06-06 | 0.478592 | 0.303289 |\n| 2012-06-07 | 0.330184 | 0.007452 |\n| 2012-06-08 | 2.857732 | 0.193424 |\n| 2012-06-09 | 0.308489 | 0.087070 |\n| 2012-06-10 | 0.308755 | 0.024483 |\n\n+ If we move 10° east to somewhere over the ice we add two columns: One for each of the two RCMs over the ice domain.\n+ If the =--upstream= flag is set, we add two columns: One for each of the RCMs over the *upstream* ice domains. 
Results are summed across outlets per domain.\n+ Results are therefore one of the following:\n + Two columns: 2 RCM * 1 land domain\n + Four columns: 2 RCM * (1 land + 1 ice domain)\n + Four columns: 2 RCM * (1 land + 1 upstream ice domain)\n + Six columns: 2 RCM * (1 land + 1 ice + 1 upstream ice domain)\n\n***** Polygon covering multiple land and ice outlets\n\nWhen querying using an ROI that covers multiple outlets, discharge is summed by domain. Therefore, even if 100s of outlets are within the ROI, either two, four, four, or six columns are returned depending on the options.\n\n*** Python API\n\nThe Python API is similar to the command line interface, but rather than printing results to =stdout=, it returns a =GeoPandas= =GeoDataFrame= of outlets, or an =xarray= =Dataset= of discharge. The discharge is not summed by domain, but instead contains discharge for each outlet.\n\n**** Outlets and basins\n\n***** One point\n\nThe simplest example is a point, in this case near the Watson River outlet. Because we select one point over land and do not request upstream outlets and basins, only one row should be returned.\n\n#+BEGIN_SRC jupyter-python :session using\nfrom discharge import discharge\ndf = discharge(base=""./freshwater"", roi=""-50.5,67.2"", quiet=True).outlets()\n#+END_SRC\n\n#+RESULTS:\n\nThe =df= variable is a =GeoPandas= =GeoDataFrame=.\n\nIt includes two geometry columns:\n+ =outlet= :: A point for the location of the outlet (also available as the =x= and =y= columns)\n+ =basin= :: A polygon describing this basin\n\nBecause the geometry columns do not display well in tabular form, we drop them.\n\n#+BEGIN_SRC jupyter-python :session using\ndf.drop(columns=[""outlet"",""basin""])\n#+END_SRC\n\n#+RESULTS:\n| index | id | lon | lat | x | y | elev | Z2012_sector | Z2012_sector_dist | M2019_ID | M2019_ID_dist | M2019_basin | M2019_region | M2020_gate | M2020_gate_dist | B2015_name | B2015_dist | domain | upstream | coast_id | coast_lon | coast_lat | coast_x | coast_y |\n|-------+--------+----------+---------+---------+----------+------+--------------+-------------------+----------+---------------+--------------------+--------------+------------+-----------------+-------------------+------------+--------+----------+----------+-----------+-----------+---------+---------|\n| 0 | 121108 | -51.2185 | 67.1535 | -271550 | -2492150 | 4 | 62 | 38320 | 71 | 38035 | ISUNNGUATA-RUSSELL | SW | 195 | 193828 | Isunnguata Sermia | 45930 | land | False | -1 | nan | nan | -1 | -1 |\n\n***** Polygon covering multiple land and ice outlets\n\nHere a polygon covers several land outlets near the end of a fjord, and several ice outlets of the nearby ice margin. In addition, we request all ice outlets upstream of all selected land basins. 
Results are shown in tabular form and written to geospatial file formats.\n\n#+BEGIN_SRC jupyter-python :session using\nfrom discharge import discharge\ndf = discharge(base=""./freshwater"", roi=""-51.50,66.93 -51.21,66.74 -49.44,66.91 -49.84,67.18"", quiet=True, upstream=True).outlets()\n#+END_SRC\n\n#+RESULTS:\n\nView the first few rows, excluding the geometry columns\n\n#+BEGIN_SRC jupyter-python :session using\ndf.drop(columns=[""outlet"",""basin""]).head()\n#+END_SRC\n\n#+RESULTS:\n| index | id | lon | lat | x | y | elev | Z2012_sector | Z2012_sector_dist | M2019_ID | M2019_ID_dist | M2019_basin | M2019_region | M2020_gate | M2020_gate_dist | B2015_name | B2015_dist | domain | upstream | coast_id | coast_lon | coast_lat | coast_x | coast_y |\n|-------+--------+----------+---------+---------+----------+------+--------------+-------------------+----------+---------------+--------------------+--------------+------------+-----------------+-------------------+------------+--------+----------+----------+-----------+-----------+---------+---------|\n| 0 | 122055 | -50.713 | 67.0017 | -251250 | -2511450 | 20 | 62 | 22184 | 71 | 22906 | ISUNNGUATA-RUSSELL | SW | 195 | 207779 | Isunnguata Sermia | 31644 | land | False | -1 | nan | nan | -1 | -1 |\n| 1 | 122222 | -50.7346 | 66.9884 | -252350 | -2512850 | 7 | 62 | 23683 | 71 | 24427 | ISUNNGUATA-RUSSELL | SW | 195 | 209355 | Isunnguata Sermia | 33360 | land | False | -1 | nan | nan | -1 | -1 |\n| 2 | 122251 | -50.7748 | 66.985 | -254150 | -2513050 | -1 | 62 | 25444 | 71 | 26179 | ISUNNGUATA-RUSSELL | SW | 195 | 209887 | Isunnguata Sermia | 34934 | land | False | -1 | nan | nan | -1 | -1 |\n| 3 | 122275 | -50.8707 | 66.9767 | -258450 | -2513550 | 4 | 62 | 29682 | 71 | 30397 | ISUNNGUATA-RUSSELL | SW | 195 | 211236 | Isunnguata Sermia | 38789 | land | False | -1 | nan | nan | -1 | -1 |\n| 4 | 122285 | -50.8569 | 66.9764 | -257850 | -2513650 | 15 | 62 | 29141 | 71 | 29862 | ISUNNGUATA-RUSSELL | SW | 195 | 211209 | Isunnguata Sermia | 38336 | land | False | -1 | nan | nan | -1 | -1 |\n\nView the last few rows:\n\nNote that the =domain= and =upstream= columns can be used to subset the table.\n\n#+BEGIN_SRC jupyter-python :session using\ndf.drop(columns=[""outlet"",""basin""]).tail()\n#+END_SRC\n\n#+RESULTS:\n| index | id | lon | lat | x | y | elev | Z2012_sector | Z2012_sector_dist | M2019_ID | M2019_ID_dist | M2019_basin | M2019_region | M2020_gate | M2020_gate_dist | B2015_name | B2015_dist | domain | upstream | coast_id | coast_lon | coast_lat | coast_x | coast_y |\n|-------+-------+----------+---------+---------+----------+------+--------------+-------------------+----------+---------------+-----------------------------------------------+--------------+------------+-----------------+------------------+------------+--------+----------+----------+-----------+-----------+---------+----------|\n| 201 | 67919 | -49.4996 | 66.4435 | -202950 | -2578950 | 791 | 62 | 0 | 40 | 6 | SAQQAP-MAJORQAQ-SOUTHTERRUSSEL_SOUTHQUARUSSEL | SW | 262 | 200758 | Quantum Gletsjer | 81191 | ice | True | 123466 | -50.6517 | 66.8677 | -250050 | -2526750 |\n| 202 | 67935 | -49.5385 | 66.4378 | -204750 | -2579450 | 764 | 62 | 0 | 40 | 0 | SAQQAP-MAJORQAQ-SOUTHTERRUSSEL_SOUTHQUARUSSEL | SW | 262 | 199967 | Quantum Gletsjer | 79323 | ice | True | 123466 | -50.6517 | 66.8677 | -250050 | -2526750 |\n| 203 | 67946 | -49.5206 | 66.4375 | -203950 | -2579550 | 767 | 62 | 0 | 40 | 0 | SAQQAP-MAJORQAQ-SOUTHTERRUSSEL_SOUTHQUARUSSEL | SW | 262 | 199999 | Quantum Gletsjer | 80065 | ice | 
True | 123466 | -50.6517 | 66.8677 | -250050 | -2526750 |\n| 204 | 68014 | -49.5436 | 66.419 | -205150 | -2581550 | 825 | 62 | 0 | 40 | 184 | SAQQAP-MAJORQAQ-SOUTHTERRUSSEL_SOUTHQUARUSSEL | SW | 262 | 197830 | Quantum Gletsjer | 78386 | ice | True | 123466 | -50.6517 | 66.8677 | -250050 | -2526750 |\n| 205 | 68056 | -49.5346 | 66.4068 | -204850 | -2582950 | 859 | 62 | 0 | 40 | 0 | SAQQAP-MAJORQAQ-SOUTHTERRUSSEL_SOUTHQUARUSSEL | SW | 262 | 196497 | Quantum Gletsjer | 78340 | ice | True | 123466 | -50.6517 | 66.8677 | -250050 | -2526750 |\n\nFinally, write data to various file formats. GeoPandas DataFrames can only have one geometry, so we must select one and drop the other before writing the file.\n\n#+BEGIN_SRC jupyter-python :session using\ndf.drop(columns=[""outlet"",""basin""]).to_csv(""outlets.csv"")\ndf.set_geometry(""outlet"").drop(columns=""basin"").to_file(""outlets.gpkg"", driver=""GPKG"")\ndf.set_geometry(""basin"").drop(columns=""outlet"").to_file(""basins.gpkg"", driver=""GPKG"")\n#+END_SRC\n\n**** Discharge\n\nThe code here is the same as above from the ""Outlets and basins"" section, but we call =discharge()= rather than =outlets()=.\n\n***** One point\n\nThe simplest example is a point, in this case near the Watson River outlet. Because we select one point over land and do not request upstream outlets and basins, only one row should be returned.\n\n#+BEGIN_SRC jupyter-python :session using\nfrom discharge import discharge\nds = discharge(base=""./freshwater"", roi=""-50.5,67.2"").discharge()\n#+END_SRC\n\nPrint the =xarray= =Dataset=:\n\n#+BEGIN_SRC jupyter-python :session using :exports both\nprint(ds)\n#+END_SRC\n\n#+RESULTS:\n: \n: Dimensions: (land: 1, time: 26663)\n: Coordinates:\n: * time (time) datetime64[ns] 1950-01-01 1950-01-02 ... 2022-12-31\n: * land (land) uint64 121108\n: Data variables:\n: MAR_land (time, land) float64 0.0007218 0.0007235 ... 0.6995 0.7007\n: RACMO_land (time, land) float64 nan nan nan nan ... 0.1555 0.1591 0.1549\n\nDisplay the time series. Unlike the command line interface, here the outlets are not merged.\n\n#+BEGIN_SRC jupyter-python :session using\nds.sel(time=slice(\'2012-06-01\',\'2012-06-10\')).to_dataframe()\n#+END_SRC\n\n#+RESULTS:\n| | MAR_land | RACMO_land |\n|------------------------------------------------------+----------+------------|\n| (121108, Timestamp(\'2012-06-01 00:00:00\', freq=\'D\')) | 11.8938 | 0.029936 |\n| (121108, Timestamp(\'2012-06-02 00:00:00\', freq=\'D\')) | 10.127 | 0.00123702 |\n| (121108, Timestamp(\'2012-06-03 00:00:00\', freq=\'D\')) | 8.11475 | 0.00132286 |\n| (121108, Timestamp(\'2012-06-04 00:00:00\', freq=\'D\')) | 3.97058 | 0 |\n| (121108, Timestamp(\'2012-06-05 00:00:00\', freq=\'D\')) | 0.313908 | -0.0011907 |\n| (121108, Timestamp(\'2012-06-06 00:00:00\', freq=\'D\')) | 0.478592 | 0.303289 |\n| (121108, Timestamp(\'2012-06-07 00:00:00\', freq=\'D\')) | 0.330184 | 0.00745243 |\n| (121108, Timestamp(\'2012-06-08 00:00:00\', freq=\'D\')) | 2.85773 | 0.193424 |\n| (121108, Timestamp(\'2012-06-09 00:00:00\', freq=\'D\')) | 0.308489 | 0.0870701 |\n| (121108, Timestamp(\'2012-06-10 00:00:00\', freq=\'D\')) | 0.308755 | 0.0244829 |\n\n\nIn order to merge the outlets, select all coordinates that are *not time* and merge them. 
Also, apply a rolling mean:\n\n#+BEGIN_SRC jupyter-python :session using\ndims = [_ for _ in ds.dims.keys() if _ != \'time\'] # get all dimensions except the time dimension\nds.sum(dim=dims)\\\n .rolling(time=7)\\\n .mean()\\\n .sel(time=slice(\'2012-06-01\',\'2012-06-10\'))\\\n .to_dataframe()\n#+END_SRC\n\n#+RESULTS:\n| time | MAR_land | RACMO_land |\n|---------------------+----------+------------|\n| 2012-06-01 00:00:00 | 30.644 | 1.39377 |\n| 2012-06-02 00:00:00 | 31.1031 | 1.2407 |\n| 2012-06-03 00:00:00 | 27.5909 | 0.458691 |\n| 2012-06-04 00:00:00 | 21.0425 | 0.157925 |\n| 2012-06-05 00:00:00 | 14.3486 | 0.0893565 |\n| 2012-06-06 00:00:00 | 8.40202 | 0.0880673 |\n| 2012-06-07 00:00:00 | 5.03268 | 0.0488637 |\n| 2012-06-08 00:00:00 | 3.74182 | 0.0722192 |\n| 2012-06-09 00:00:00 | 2.33918 | 0.084481 |\n| 2012-06-10 00:00:00 | 1.22403 | 0.0877896 |\n\n***** Polygon covering multiple land and ice outlets\n\nHere a polygon covers several land outlets near the end of a fjord, and several ice outlets of the nearby ice margin. In addition, we request all ice outlets upstream of all selected land basins.\n\n#+BEGIN_SRC jupyter-python :session using\nfrom discharge import discharge\nds = discharge(base=""./freshwater"", roi=""-51.50,66.93 -51.21,66.74 -49.44,66.91 -49.84,67.18"", quiet=True, upstream=True).discharge()\n#+END_SRC\n\nWhat are the dimensions (i.e. how many outlets in each domain?)\n\n#+BEGIN_SRC jupyter-python :session using :exports both\nprint(ds)\n#+END_SRC\n\n#+RESULTS:\n#+begin_example\n\nDimensions: (ice: 33, ice_upstream: 85, land: 88, time: 26663)\nCoordinates:\n ,* ice_upstream (ice_upstream) uint64 66407 66414 66416 ... 68014 68056\n ,* time (time) datetime64[ns] 1950-01-01 ... 2022-12-31\n ,* land (land) uint64 122055 122222 122251 ... 123897 123926\n ,* ice (ice) uint64 66425 66427 66444 ... 66595 66596 66639\nData variables:\n MAR_land (time, land) float64 0.0002109 1.244e-06 ... 0.005236\n MAR_ice (time, ice) float64 2.94e-16 2.026e-17 ... 2.785e-18\n RACMO_land (time, land) float64 nan nan nan ... 0.001346 0.1365\n RACMO_ice (time, ice) float64 nan nan nan ... 0.0001123 0.004071\n MAR_ice_upstream (time, ice_upstream) float64 1.261e-17 ... 1.855e-17\n RACMO_ice_upstream (time, ice_upstream) float64 nan nan ... 
5.79e-05\n#+end_example\n\nWith these results:\n+ Sum all outlets within each domain\n+ Drop the land discharge and the upstream domains (keep only ice discharge explicitly within our ROI)\n+ Apply a 5-day rolling mean\n+ Plot the 2012 discharge season\n\n#+BEGIN_SRC jupyter-python :session using\nd = [_ for _ in ds.dims.keys() if _ != \'time\'] # dims for summing (don\'t sum the time dimension)\nv = [_ for _ in ds.data_vars if (\'land\' in _) | (\'_u\' in _)] # vars containing \'land\' or \'_u\', to drop\n\nr = ds.sum(dim=d)\\\n .drop_vars(v)\\\n .rolling(time=5).mean()\n\nimport matplotlib.pyplot as plt\nplt.style.use(\'seaborn\')\n\nfor d in r.data_vars: r[d].sel(time=slice(\'2012-04-01\',\'2012-11-15\')).plot(drawstyle=\'steps\', label=d)\n_ = plt.legend()\nplt.savefig(""./fig/api_example.png"", bbox_inches=\'tight\')\n#+END_SRC\n\n#+RESULTS:\n\n[[./fig/api_example.png]]\n'",",https://doi.org/10.5194/essd-12-2811-2020.\n+,https://doi.org/10.22008/promice/freshwater,https://doi.org/10.5194/essd-12-1367-2020,https://doi.org/10.22008/promice/data/ice_discharge,https://doi.org/10.22008/FK2/KIDYD1,https://doi.org/10.22008/FK2/TARK8O,https://doi.org/10.22008/FK2/XKQVL7,https://doi.org/10.22008/FK2/XKQVL7,https://doi.org/10.22008/FK2/XKQVL7,https://doi.org/10.5194/essd-13-5001-2021,https://doi.org/10.22008/FK2/OHI23Z\n\n+,https://doi.org/10.5194/essd-12-1367-2020,https://doi.org/10.22008/promice/data/ice_discharge","2019/06/20, 02:21:00",1589,CUSTOM,27,95,"2023/10/06, 00:52:20",8,1,30,9,20,0,0.0,0.022471910112359605,,,0,2,false,,false,false,,,https://github.com/GEUS-Glaciology-and-Climate,https://eng.geus.dk/about/organisation/departments/glaciology-and-climate,"Copenhagen, Denmark",,,https://avatars.githubusercontent.com/u/71171316?v=4,,, ECCO-v4-Python-Tutorial,Contains several tutorials for using the ECCO Central Production Version 4 ocean and sea-ice state estimate.,ECCO-GROUP,https://github.com/ECCO-GROUP/ECCO-v4-Python-Tutorial.git,github,,Glacier and Ice Sheets,"2023/05/15, 19:22:10",44,0,5,true,Jupyter Notebook,Estimating the Circulation and Climate of the Ocean (ECCO),ECCO-GROUP,"Jupyter Notebook,Python,Shell",https://ecco-v4-python-tutorial.readthedocs.io/index.html,"b'# ECCO Version 4 Python Tutorial\n\n**Content:**\n\nThis repository contains a Python tutorial for using the [ECCO Central Production version 4](https://ecco.jpl.nasa.gov/) ocean and sea-ice state estimate. Directories within the repository include the [tutorial documentation](http://ecco-v4-python-tutorial.readthedocs.io/) and individual lessons from the tutorial as Jupyter notebooks ([Tutorials_as_Jupyter_Notebooks/](Tutorials_as_Jupyter_Notebooks/)) and Python files ([Tutorials_as_Python_Files/](Tutorials_as_Python_Files/)). \n\nAs of May 2023, the tutorials have been updated for ECCO V4r4. Tutorials that refer to files from V4r3 are also available.\n\nIf user support is needed, please contact .\n\n[Estimating the Circulation and Climate of the Ocean]: http://ecco.jpl.nasa.gov, http://ecco-group.org/\n\n**References:**\n\nForget, G., J.-M. Campin, P. Heimbach, C. N. Hill, R. M. Ponte, and C. Wunsch, 2015: ECCO version 4: an integrated framework for non-linear inverse modeling and global ocean state estimation. Geoscientific Model Development, 8, 3071-3104, , \n\nForget, G., J.-M. Campin, P. Heimbach, C. N. Hill, R. M. Ponte, and C. 
Wunsch, 2016: ECCO Version 4: Second Release, , [direct download][]\n'",,"2018/01/24, 19:39:43",2100,CUSTOM,94,291,"2023/05/15, 19:22:10",4,58,62,29,163,3,0.0,0.11250000000000004,"2023/05/10, 00:50:00",4.4.1,0,8,false,,false,false,,,https://github.com/ECCO-GROUP,,,,,https://avatars.githubusercontent.com/u/34173086?v=4,,, icepyx,Python tools for obtaining and working with ICESat-2 data.,icesat2py,https://github.com/icesat2py/icepyx.git,github,"icesat-2,community-driven,python3,hacktoberfest,closember",Glacier and Ice Sheets,"2023/10/19, 22:13:18",165,13,34,true,Python,,icesat2py,Python,https://icepyx.readthedocs.io/en/latest/,"b'icepyx\n======\n\n**Python tools for obtaining and working with ICESat-2 data**\n\n|GitHub license| |Conda install| |Pypi install| |Contributors| |JOSS|\n\nLatest release (main branch): |Docs Status main| |Travis main Build Status| |Code Coverage main|\n\nCurrent development version (development branch): |Docs Status dev| |Travis dev Build Status| |Code Coverage dev|\n\n.. |GitHub license| image:: https://img.shields.io/badge/License-BSD%203--Clause-blue.svg\n :target: https://opensource.org/licenses/BSD-3-Clause\n\n.. |Conda install| image:: https://anaconda.org/conda-forge/icepyx/badges/version.svg \n :target: https://anaconda.org/conda-forge/icepyx\n\n.. |Pypi install| image:: https://badge.fury.io/py/icepyx.svg\n :target: https://pypi.org/project/icepyx\n\n.. |Contributors| image:: https://img.shields.io/badge/all_contributors-34-orange.svg?style=flat-square\n :alt: All Contributors\n :target: https://github.com/icesat2py/icepyx/blob/main/CONTRIBUTORS.rst\n\n.. |JOSS| image:: https://joss.theoj.org/papers/10.21105/joss.04912/status.svg\n :alt: JOSS publication link and DOI\n :target: https://doi.org/10.21105/joss.04912\n\n.. |Docs Status main| image:: https://readthedocs.org/projects/icepyx/badge/?version=latest\n :target: http://icepyx.readthedocs.io/?badge=latest\n\n.. |Docs Status dev| image:: https://readthedocs.org/projects/icepyx/badge/?version=development\n :target: https://icepyx.readthedocs.io/en/development\n\n.. |Travis main Build Status| image:: https://app.travis-ci.com/icesat2py/icepyx.svg?branch=main\n :target: https://app.travis-ci.com/icesat2py/icepyx\n\n.. |Travis dev Build Status| image:: https://app.travis-ci.com/icesat2py/icepyx.svg?branch=development\n :target: https://app.travis-ci.com/icesat2py/icepyx\n\n.. |Code Coverage main| image:: https://codecov.io/gh/icesat2py/icepyx/branch/main/graph/badge.svg\n :target: https://codecov.io/gh/icesat2py/icepyx\n\n.. |Code Coverage dev| image:: https://codecov.io/gh/icesat2py/icepyx/branch/development/graph/badge.svg\n :target: https://codecov.io/gh/icesat2py/icepyx\n \n\nOrigin and Purpose\n------------------\nicepyx is both a software library and a community composed of ICESat-2 data users, developers, and the scientific community. We are working together to develop a shared library of resources - including existing resources, new code, tutorials, and use-cases/examples - that simplify the process of querying, obtaining, analyzing, and manipulating ICESat-2 datasets to enable scientific discovery.\n\nicepyx aims to provide a clearinghouse for code, functionality to improve interoperability, documentation, examples, and educational resources that tackle disciplinary research questions while minimizing the amount of repeated effort across groups utilizing similar datasets. 
icepyx also hopes to foster collaboration, open-science, and reproducible workflows by integrating and sharing resources.\n\nMany of the underlying tools from which icepyx was developed began as Jupyter Notebooks developed for and during the cryosphere-themed ICESat-2 Hackweek at the University of Washington in June 2019 or as scripts written and used by the ICESat-2 Science Team members. \nOriginally called icesat2py, the project combined and generalized these scripts into a unified framework, adding examples, documentation, and testing where necessary and making them accessible for everyone. \nicepyx is now a domain-agnostic, standalone software package and community (under the broader `icesat2py GitHub organization `_) that continues to build functionality for obtaining and working with ICESat-2 data products locally and in the cloud. \nIt also improves interoperability for ICESat-2 datasets with other open-source tools.\n\n.. _`zipped file`: https://github.com/icesat2py/icepyx/archive/main.zip\n.. _`Fiona`: https://pypi.org/project/Fiona/\n\nInstallation\n------------\n\nThe simplest way to install icepyx is by using the\n`conda `__\npackage manager. |Conda install|\n \n conda install icepyx\n\nAlternatively, you can also install icepyx using `pip `__. |Pypi install|\n\n pip install icepyx\n\nMore detailed instructions for installing `icepyx` can be found at\nhttps://icepyx.readthedocs.io/en/latest/getting_started/install.html\n\n\nExamples (Jupyter Notebooks)\n----------------------------\n\nListed below are example Jupyter notebooks for working with ICESat-2 (IS2).\n\n`IS2_data_access `_\n\n`IS2_data_access2_subsetting `_\n\n`IS2_data_variables `_\n\n`IS2_data_visualization `_\n\n`IS2_data_read-in `_\n\n`IS2_cloud_data_access (BETA ONLY) `_\n\n\nCiting icepyx\n-------------\n.. _`CITATION.rst`: ./CITATION.rst\n\nThis community and software is developed with the goal of supporting science applications. Thus, our contributors (including those who have developed the packages used within icepyx) and maintainers justify their efforts and demonstrate the impact of their work through citations. Please see `CITATION.rst`_ for additional citation information.\n\nContact\n-------\nWorking with ICESat-2 data and have ideas you want to share?\nHave a great suggestion or recommendation of something you\'d like to see\nimplemented and want to find out if others would like that tool too?\nCome join the conversation at: https://discourse.pangeo.io/.\nSearch for ""icesat-2"" under the ""science"" topic to find us.\n\n.. _`icepyx`: https://github.com/icesat2py/icepyx\n.. _`contribution guidelines`: ./doc/source/contributing/contribution_guidelines.rst\n\nContribute\n----------\nWe welcome and invite contributions to icepyx_ from anyone at any career stage and with any amount of coding experience!\nCheck out our `contribution guidelines`_ to see how you can contribute.\n\nPlease note that this project is released with a Contributor Code of Conduct. By participating in this project you agree to abide by its terms. |Contributor Covenant|\n\n.. |Contributor Covenant| image:: https://img.shields.io/badge/Contributor%20Covenant-v2.0%20adopted-ff69b4.svg\n :target: code_of_conduct.md\n \nResearch notice\n~~~~~~~~~~~~~~~\n\nPlease note that this repository is participating in a study into\nsustainability of open source projects. 
Data will be gathered about this\nrepository for approximately the next 12 months, starting from June\n2021.\n\nData collected will include number of contributors, number of PRs, time\ntaken to close/merge these PRs, and issues closed.\n\nFor more information, please visit `the informational\npage `__ or\ndownload the `participant information\nsheet `__.\n'",",https://doi.org/10.21105/joss.04912\n\n","2019/06/20, 23:49:26",1588,BSD-3-Clause,47,571,"2023/10/18, 17:05:11",59,298,395,73,7,12,3.0,0.3507853403141361,"2023/09/14, 16:01:01",v0.8.0,9,25,false,,true,false,"DAndrewA/ICESat2,erikmannerfelt/ADSvalbard,RedbirdTaiwan/deepicedrain,quangchiem139/deepicedrain,scumechanics/deepicedrain,ciresdem/ETOPO,SpoookyTanuki/webdev_hw7_flask,weiji14/deepicedrain,georgettica/IS2_velocity,Surfix/icechart,ICESAT-2HackWeek/IS2_velocity,MinesGlaciology/Learn-ICESat-2,nicholas-kotlinski/icesat2_hackweek",,https://github.com/icesat2py,,,,,https://avatars.githubusercontent.com/u/57814535?v=4,,, BedMachine,"Matlab tools for loading, interpolating, and displaying BedMachine ice sheet topography.",chadagreene,https://github.com/chadagreene/BedMachine.git,github,"antarctica,matlab,greenland,bedmachine,geology,ice-shelves,glaciology,glaciers",Glacier and Ice Sheets,"2022/11/09, 22:18:17",21,0,4,true,MATLAB,,,MATLAB,,"b'[![View BedMachine on File Exchange](https://www.mathworks.com/matlabcentral/images/matlab-file-exchange.svg)](https://www.mathworks.com/matlabcentral/fileexchange/69159-bedmachine)\n# BedMachine Tools for MATLAB\n\n
\n\nMatlab tools for loading, interpolating, and displaying BedMachine ice sheet topography.\n\n## Requirements \n* [Antarctic Mapping Tools for Matlab](https://github.com/chadagreene/Antarctic-Mapping-Tools) (Greene et al., 2017) or [Arctic Mapping Tools](https://www.mathworks.com/matlabcentral/fileexchange/63324) if you\'re analyzing Greenland.\n* BedMachine data for [Greenland](https://nsidc.org/data/IDBMG4) or [Antarctica](https://nsidc.org/data/nsidc-0756) (Morlighem et al., 2017 & 2019).\n\n## Contents \n* **`bedmachine_data`** loads the gridded data. \n
\n\n* **`bedmachine_interp`** interpolates surface elevation, thickness, bed, etc. at any location. \n
\n\n* **`bedmachine`** plots BedMachine data as imagesc or contour. \n
\n\n* **`bedmachine_profile`** creates a profile slice along a straight line such as a ship track or flowline. \n
\n\n* **`bedmachine_3d`** creates a 3D map of BedMachine data. \n
\n\n## Citing this dataset\n\nIf you use BedMachine data, please cite the Morlighem paper listed below. And if these tools are useful for you, please do me a kindness and cite my Antarctic Mapping Tools paper.\n\nMorlighem M. et al., (2017), BedMachine v3: Complete bed topography and ocean bathymetry mapping of Greenland from multi-beam echo sounding combined with mass conservation, Geophys. Res. Lett., 44, doi:10.1002/2017GL074954.\n\nMorlighem, M., E. Rignot, T. Binder, D. D. Blankenship, R. Drews, G. Eagles, O. Eisen, F. Ferraccioli, R. Forsberg, P. Fretwell, V. Goel, J. S. Greenbaum, H. Gudmundsson, J. Guo, V. Helm, C. Hofstede, I. Howat, A. Humbert, W. Jokat, N. B. Karlsson, W. Lee, K. Matsuoka, R. Millan, J. Mouginot, J. Paden, F. Pattyn, J. L. Roberts, S. Rosier, A. Ruppel, H. Seroussi, E. C. Smith, D. Steinhage, B. Sun, M. R. van den Broeke, T. van Ommen, M. van Wessem, and D. A. Young. 2019. Deep glacial troughs and stabilizing ridges unveiled beneath the margins of the Antarctic ice sheet, Nature Geoscience. doi:10.1038/s41561-019-0510-6.\n\nGreene, C. A., Gwyther, D. E., & Blankenship, D. D. Antarctic Mapping Tools for Matlab. Computers & Geosciences. 104 (2017) pp.151-157. doi:10.1016/j.cageo.2016.08.003.\n'",,"2021/04/10, 21:37:21",928,MIT,2,16,"2023/10/18, 17:05:11",1,0,0,0,7,1,0,0.0,,,0,1,false,,false,false,,,,,,,,,,, Iceberg Locations,Antarctic large iceberg positions derived from ASCAT and OSCAT-2.,Joel-hanson,https://github.com/Joel-hanson/Iceberg-locations.git,github,"climate-change,iceberg,python,beautifulsoup4,scraping,git-scraping",Glacier and Ice Sheets,"2023/09/02, 01:40:44",6,0,0,true,Python,,,Python,https://www.scp.byu.edu/current_icebergs.html,"b'# Iceberg Locations\n\n[![The iceberg data collection](https://github.com/Joel-hanson/Iceberg-locations/actions/workflows/iceberge-tracker.yml/badge.svg?branch=main)](https://github.com/Joel-hanson/Iceberg-locations/actions/workflows/iceberge-tracker.yml)\n\n
\n\n> Antarctic large iceberg positions derived from ASCAT and OSCAT-2. All data collected here are from the NASA SCP website.\n\n## Overview\n\nThis is a project which automatically scrapes data from https://www.scp.byu.edu/current_icebergs.html to get the current location of all the large icebergs in the Antarctic. The position is derived from ASCAT and OSCAT-2. The JSON file `iceberg_location.json` contains all the information collected from the page. This JSON is typically updated once or twice a week (as per the updates on the website), typically on Mondays and possibly Fridays. Positions reported here are extracted from near real-time ASCAT and OSCAT-2 data in tandem. Positions reported in the full iceberg database are generated from science data and have been more accurately tracked. The full database is updated only a few times per year and can be accessed from https://www.scp.byu.edu/data/iceberg/database1.html.\n\n> The scheduled task of scraping the website runs every day.\n\n## Requirements\n\n1. Python (3.6, 3.7, 3.8, 3.9)\n2. beautifulsoup4 (4.9.3)\n3. lxml (4.6.2)\n\n## Get started\n\nStep 1: Install requirements\n\n`pip install -r requirements.txt`\n\nStep 2: Run the scraper\n\n`python iceberg.py`\n\nStep 3: Go to the link `iceberg_location.json` to see the latest position details of the icebergs.\n\n## JSON Schema\n\nThe file `iceberg_location.json` is structured in the following format:\n\n```json\n{\n ""$schema"": ""http://json-schema.org/draft-06/schema#"",\n ""type"": ""object"",\n ""additionalProperties"": {\n ""type"": ""array"",\n ""items"": {\n ""$ref"": ""#/definitions/ScriptElement""\n }\n },\n ""definitions"": {\n ""ScriptElement"": {\n ""type"": ""object"",\n ""additionalProperties"": false,\n ""properties"": {\n ""iceberg"": {\n ""type"": ""string""\n },\n ""recent_observation"": {\n ""type"": ""string""\n },\n ""longitude"": {\n ""type"": ""integer""\n },\n ""dms_longitude"": {\n ""type"": ""string""\n },\n ""dms_lattitude"": {\n ""type"": ""string""\n },\n ""lattitude"": {\n ""type"": ""integer""\n }\n },\n ""required"": [\n ""dms_lattitude"",\n ""dms_longitude"",\n ""iceberg"",\n ""lattitude"",\n ""longitude"",\n ""recent_observation""\n ],\n ""title"": ""ScriptElement""\n }\n }\n}\n```\n\n## Example\n\n```json\n{\n ""02/12/21"": [\n {\n ""iceberg"": ""a23a"",\n ""recent_observation"": ""02/09/21"",\n ""longitude"": -400.0,\n ""dms_longitude"": ""40 0\'W"",\n ""dms_lattitude"": ""75 45\'S"",\n ""lattitude"": -7545.0\n },\n {\n ""iceberg"": ""a63"",\n ""recent_observation"": ""02/09/21"",\n ""longitude"": -5447.0,\n ""dms_longitude"": ""54 47\'W"",\n ""dms_lattitude"": ""71 41\'S"",\n ""lattitude"": -7141.0\n },\n {\n ""iceberg"": ""a64"",\n ""recent_observation"": ""02/09/21"",\n ""longitude"": -6038.0,\n ""dms_longitude"": ""60 38\'W"",\n ""dms_lattitude"": ""69 23\'S"",\n ""lattitude"": -6923.0\n },\n ................\n```\n\n_OSCAT-2 - Operational users please note: This list cannot possibly contain all potentially hazardous icebergs in Antarctic waters -- Scatterometers such as ASCAT and OSCAT-2 were designed for measuring ocean winds, not icebergs. Scatterometer data is useful for tracking icebergs but is limited. During the Austral summer, contrast between ocean and melting icebergs is reduced, which can result in gaps in visibility. Further, as the SCP team is not an operational agency, errors are expected and we cannot be held responsible for omissions or errors in this database. 
Also, note that the large icebergs tracked here tend to shed smaller iceberg fragments which are serious navigation hazards in nearby areas. Fragments (large and small) can drift substantial distances from their origins._'",,"2021/01/11, 13:30:20",1017,MIT,3,90,"2021/01/21, 16:02:51",0,1,1,0,1007,0,0.0,0.0,,,0,1,false,,false,false,,,,,,,,,,, SICOPOLIS,A 3-d dynamic/thermodynamic model that simulates the evolution of large ice sheets and ice caps.,sicopolis,,custom,,Glacier and Ice Sheets,,,,,,,,,,https://gitlab.awi.de/sicopolis/sicopolis,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,, cosipy,Solves the energy balance at the surface and is coupled to an adaptive vertical multi-layer subsurface module.,cryotools,https://github.com/cryotools/cosipy.git,github,,Glacier and Ice Sheets,"2023/06/15, 11:08:47",47,0,9,true,Python,CryoTools,cryotools,"Python,Shell",,"b'.. image:: https://cryo-tools.org/wp-content/uploads/2019/11/COSIPY-logo-2500px.png\n\nThe coupled snowpack and ice surface energy and mass balance model in Python COSIPY solves the energy balance at the surface and is coupled to an adaptive vertical multi-layer subsurface module.\n\nDocumentation\n-------------\nThe documentation for COSIPY is available at the following link:\nhttps://cosipy.readthedocs.io/en/latest/\n\nCommunication and Support\n-------------------------\nWe are using the groupware slack for communication (inform about new releases, bugs, features, ..) and support:\nhttps://cosipy.slack.com\n\nAbout\n-----\n\n:Tests:\n .. image:: https://readthedocs.org/projects/cosipy/badge/?version=latest\n :target: https://cosipy.readthedocs.io/en/latest/\n\n .. image:: http://www.repostatus.org/badges/latest/active.svg\n :target: http://www.repostatus.org/#active\n\n .. image:: https://travis-ci.org/cryotools/cosipy.svg?branch=master\n :target: https://travis-ci.org/cryotools/cosipy\n\n .. image:: https://codecov.io/gh/cryotools/cosipy/branch/master/graph/badge.svg\n :target: https://codecov.io/gh/cryotools/cosipy\n\n:Citation:\n .. image:: https://img.shields.io/badge/Citation-GMD%20paper-orange.svg\n :target: https://gmd.copernicus.org/articles/13/5645/2020/\n\n .. image:: https://zenodo.org/badge/DOI/10.5281/zenodo.3902191.svg\n :target: https://doi.org/10.5281/zenodo.2579668\n\n:License:\n .. image:: https://img.shields.io/badge/License-GPLv3-blue.svg\n :target: http://www.gnu.org/licenses/gpl-3.0.en.html\n'",",https://doi.org/10.5281/zenodo.2579668\n\n:License:\n","2017/10/23, 14:10:03",2193,GPL-3.0,1,514,"2022/11/10, 11:02:14",22,13,32,3,349,4,0.1,0.12852664576802508,"2021/01/14, 16:10:00",v1.4,0,4,false,,false,false,,,https://github.com/cryotools,https://cryo-tools.org,,,,https://avatars.githubusercontent.com/u/33029607?v=4,,, QGreenland,"A free mapping tool to support interdisciplinary Greenland-focused research, teaching, decision making, and collaboration.",nsidc,https://github.com/nsidc/qgreenland.git,github,"gis,greenland,qgis",Glacier and Ice Sheets,"2023/08/31, 23:11:43",29,0,11,true,Python,National Snow and Ice Data Center,nsidc,"Python,Shell,Dockerfile,Jinja",https://qgreenland.readthedocs.io,"b'
\n\n# QGreenland\n[![NSF-1928393](https://img.shields.io/badge/NSF-1928393-red.svg)](https://nsf.gov/awardsearch/showAward?AWD_ID=1928393)\n[![NSF-2324765](https://img.shields.io/badge/NSF-2324765-red.svg)](https://nsf.gov/awardsearch/showAward?AWD_ID=2324765)\n[![DOI](https://zenodo.org/badge/DOI/10.5281/zenodo.8173510.svg)](https://doi.org/10.5281/zenodo.8173510)\n[![DOI](https://zenodo.org/badge/DOI/10.5281/zenodo.8247896.svg)](https://doi.org/10.5281/zenodo.8247896)\n\n\n\nThis repository contains the code and configuration for creating the\nQGreenland Core zip package. To download the package and learn more about\nQGreenland, visit [our website](https://www.qgreenland.org).\n\nFor more detailed information about using the QGreenland Core zip package and on\nhow to contribute to QGreenland, see our\n[Documentation](https://qgreenland.readthedocs.io).\n\n> :tada: QGreenland v3 has been released! Please visit our\n> [website](https://qgreenland.org/download) to download it now! Note\n> that an official announcement and more exciting news are planned for early\n> September. Subscribe to the [QGreenland newsletter](http://eepurl.com/gQ7VCr)\n> to learn more!\n\n\n## A Free GIS package for Greenland\n\n![QGreenland example images](/doc/_images/qgreenland-examples.jpg)\n\nQGreenland is a free mapping tool to support interdisciplinary Greenland-focused\nresearch, teaching, decision making, and collaboration. It combines key datasets\ninto a unified, all-in-one GIS analysis and visualization environment for\noffline and online use.\n\nAn international Editorial Board and Project Collaborators connect the\nQGreenland Team to data and user communities.\n\nLearn more about [What is\nQGreenland?](https://qgreenland.readthedocs.io/en/latest/what_is_qgr.html)\n\n\n# Usage\n\n## For contributors\n\nThose wishing to utilize the `qgreenland` code to create their own QGreenland\ndata package should see the contributor [How to build QGreenland\nCore](https://qgreenland.readthedocs.io/en/latest/contributor/how-to/run-qgreenland.html)\nguide.\n\n## For users of the QGreenland Core data package\n\nSee our [Get started with QGreenland\nCore](https://qgreenland.readthedocs.io/en/latest/user/tutorials/get-started.html)\ntutorial!\n\n### What is inside the QGreenland Core data package zip file?\n\nAt the root of the zip file, you will find useful files such as a\n`UserGuide.pdf`, the `qgreenland.qgs` QGIS project file, and scientific\ndiscipline-specific directories containing data (GeoTIFFs and GeoPackages).\n\n
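A minimal sketch of peeking inside the downloaded package with Python\'s standard library (the zip file name here is hypothetical; use whatever you downloaded from qgreenland.org):\n\n```python\nimport zipfile\n\n# hypothetical local file name for the QGreenland Core download\nwith zipfile.ZipFile(""QGreenland_Core.zip"") as zf:\n    # print the first 20 entries: UserGuide.pdf, qgreenland.qgs, data directories, ...\n    for name in zf.namelist()[:20]:\n        print(name)\n```\n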
\n\nFor more detailed information, see our documentation on the [QGreenland Core\nDownload\nPackage](https://qgreenland.readthedocs.io/en/latest/what_is_qgr.html#qgreenland-core-download-package).\n\n\n### Educational resources\n\nWe keep the QGreenland official website up-to-date with links to helpful\neducational resources, including our own QGreenland User Guide.\n\n* [QGreenland official website](https://qgreenland.org)\n* [QGreenland YouTube channel](https://www.youtube.com/channel/UCjWae_Jrbognx2ju_SHBZ2A/videos)\n* [QGreenland official documentation](https://qgreenland.readthedocs.io)\n\n### Troubleshooting\n\nSee our user troubleshooting guide\n[here](https://qgreenland.readthedocs.io/en/latest/user/how-to/troubleshooting.html).\n\n# Contributing\n\nSee our [discussion page on\ncontributing](https://qgreenland.readthedocs.io/en/latest/contributor/discussion/contributing.html)\nto get started!\n\nContributor documentation contains technical instructions about running the\nQGreenland pipeline code, but we also strive to describe everything clearly.\nOur goal is to make it as easy as possible for any user of QGreenland to\ncontribute to the project, so please do not be deterred from sharing your ideas.\n\nIf you have an idea for a new feature or have a bug to report, please submit an\n[Issue](https://github.com/nsidc/qgreenland/issues).\n\n**If all else fails, please [email us](mailto:qgreenland.info@gmail.com)!**\n\n# Acknowledgements\n\nPlease see our\n[acknowledgements](https://qgreenland.readthedocs.io/en/latest/acknowledgements.html)\nfor our best effort to acknowledge all of the giants upon whose shoulders we stand.\n'",",https://doi.org/10.5281/zenodo.8173510,https://doi.org/10.5281/zenodo.8247896","2020/02/18, 19:49:50",1345,CUSTOM,660,4519,"2023/08/31, 23:11:46",86,614,711,221,55,3,1.1,0.5112893642305407,"2023/09/07, 16:25:29",v3.0.0,0,7,false,,true,false,,,https://github.com/nsidc,http://www.nsidc.org/,"Boulder, Colorado",,,https://avatars.githubusercontent.com/u/1874284?v=4,,, DeepIceDrain,Mapping and monitoring deep subglacial water activity in Antarctica using remote sensing and machine learning.,weiji14,https://github.com/weiji14/deepicedrain.git,github,"intake,binder,python3,jupyter-lab,big-data,antarctica,ice-sheet,hdf5,zarr,pygmt,datashader,analysis-ready-data,icesat-2,open-science",Glacier and Ice Sheets,"2022/10/03, 21:13:01",26,0,6,false,Jupyter Notebook,,,"Jupyter Notebook,Shell,Python,Gherkin,Dockerfile,Makefile",,"b'# DeepIceDrain [[poster]](https://github.com/weiji14/nzasc2021)\n\nMapping and monitoring deep subglacial water activity\nin Antarctica using remote sensing and machine learning.\n\n[![Zenodo Digital Object Identifier](https://zenodo.org/badge/DOI/10.5281/zenodo.4071235.svg)](https://doi.org/10.5281/zenodo.4071235)\n![GitHub top language](https://img.shields.io/github/languages/top/weiji14/deepicedrain.svg)\n[![Code style: black](https://img.shields.io/badge/code%20style-black-000000.svg)](https://github.com/ambv/black)\n[![Test DeepIceDrain package](https://github.com/weiji14/deepicedrain/actions/workflows/python-app.yml/badge.svg)](https://github.com/weiji14/deepicedrain/actions/workflows/python-app.yml)\n[![Dependabot Status](https://api.dependabot.com/badges/status?host=github&repo=weiji14/deepicedrain)](https://dependabot.com)\n![License](https://img.shields.io/github/license/weiji14/deepicedrain)\n\n| Ice Surface Elevation trends over Antarctica | Active Subglacial Lake fill-drain event |\n|---|---|\n| ![ICESat-2 ATL11 rate of height change over 
time in Antarctica 2019-03-29 to 2020-12-24](https://user-images.githubusercontent.com/23487320/123902132-65cfd680-d9c0-11eb-88d6-4e0e8c5abc47.png) | ![dsm_whillans_ix_cycles_3-9.gif](https://user-images.githubusercontent.com/23487320/124219379-5ed7ce00-db50-11eb-95d0-f1f660d4d688.gif) |\n\n![DeepIceDrain Pipeline Part 1 Exploratory Data Analysis](https://yuml.me/diagram/scruffy;dir:LR/class/[Land-Ice-Elevation|atl06_play.ipynb]->[Convert|atl06_to_atl11.ipynb],[Convert]->[Land-Ice-Height-time-series|atl11_play.ipynb])\n![DeepIceDrain Pipeline Part 2 Subglacial Lake Analysis](https://yuml.me/diagram/scruffy;dir:LR/class/[Height-Change-over-Time-(dhdt)|atlxi_dhdt.ipynb],[Height-Change-over-Time-(dhdt)]->[Subglacial-Lake-Finder|atlxi_lake.ipynb],[Subglacial-Lake-Finder]->[Crossover-Analysis|atlxi_xover.ipynb])\n\n| Along track view of an ATL11 Ground Track | Elevation time-series at Crossover Points |\n|---|---|\n| ![alongtrack_whillans_ix_1080_pt3](https://user-images.githubusercontent.com/23487320/124219416-744cf800-db50-11eb-83a1-45e1e1159ba6.png) | ![crossover_anomaly_whillans_ix_2019-03-29_2020-12-24](https://user-images.githubusercontent.com/23487320/124219432-7a42d900-db50-11eb-92b4-c83728b8dc1c.png) |\n\n\n\n# Getting started\n\n## Quickstart\n\nLaunch in [Binder](https://mybinder.readthedocs.io) (Interactive jupyter lab environment in the cloud).\n\n[![Binder](https://mybinder.org/badge_logo.svg)](https://mybinder.org/v2/gh/weiji14/deepicedrain/main)\n\nAlternative [Pangeo BinderHub](https://pangeo-binder.readthedocs.io) link.\nRequires a GitHub account and you\'ll have to install your own computing environment,\nbut it runs on AWS uswest2 which allows for\n[cloud access to ICESat-2](https://nsidc.org/data/user-resources/data-announcements/data-set-updates-new-earthdata-cloud-access-option-icesat-2-and-icesat-data-sets)!\n\n[![Pangeo BinderHub](https://aws-uswest2-binder.pangeo.io/badge_logo.svg)](https://hub.aws-uswest2-binder.pangeo.io/hub/user-redirect/git-pull?repo=https%3A%2F%2Fgithub.com%2Fweiji14%2Fdeepicedrain&urlpath=lab%2Ftree%2Fdeepicedrain%2F&branch=main)\n\n\n## Usage\n\nOnce you\'ve properly installed the [`deepicedrain` package](deepicedrain)\n(see installation instructions further below), you\'ll have access to a\n[wide range of tools](https://github.com/weiji14/deepicedrain/tree/main/deepicedrain)\nfor downloading and performing quick calculations on ICESat-2 datasets.\nThe example below shows how to calculate ice surface elevation change\non a sample ATL11 dataset between ICESat\'s Cycle 3 and Cycle 4.\n\n import deepicedrain\n import xarray as xr\n\n # Loads a sample ATL11 file from the intake catalog into xarray\n atl11_dataset: xr.Dataset = deepicedrain.catalog.test_data.atl11_test_case.read()\n\n # Calculate elevation change in metres from ICESat-2 Cycle 3 to Cycle 4\n delta_height: xr.DataArray = deepicedrain.calculate_delta(\n dataset=atl11_dataset, oldcyclenum=3, newcyclenum=4, variable=""h_corr""\n )\n\n # Quick plot of delta_height along the ICESat-2 track\n delta_height.plot()\n\n![ATL11 delta_height along ref_pt track](https://user-images.githubusercontent.com/23487320/83319030-bf7e4280-a28e-11ea-9bed-331e35dbc266.png)\n\n\n\n## Installation\n\n### Basic\n\nTo just try out the scripts, download the `environment.yml` file from the repository and run the commands below:\n\n cd deepicedrain\n mamba env create --name deepicedrain --file environment.yml\n pip install git+https://github.com/weiji14/deepicedrain.git\n\n### Intermediate\n\nTo help out with 
development, start by cloning this [repo-url](/../../)\n\n git clone \n\nThen I recommend [using mamba](https://mamba.readthedocs.io/en/latest/installation.html)\nto install the non-python binaries.\nA virtual environment will also be created with Python and\n[poetry](https://github.com/python-poetry/poetry) installed.\n\n cd deepicedrain\n mamba env create --file environment.yml\n\nActivate the virtual environment first.\n\n mamba activate deepicedrain\n\nThen install the python libraries listed in the `pyproject.toml`/`poetry.lock` file.\n\n poetry install\n\nFinally, double-check that the libraries have been installed.\n\n poetry show\n\n### Advanced\n\nThis is for those who want full reproducibility of the virtual environment,\nand more computing power by using Graphical Processing Units (GPU).\n\nMaking an explicit [conda-lock](https://github.com/conda-incubator/conda-lock) file\n(only needed if creating a new virtual environment/refreshing an existing one).\n\n mamba env create -f environment.yml\n mamba list --explicit > environment-linux-64.lock\n\nCreating/Installing a virtual environment from a conda lock file.\nSee also https://docs.conda.io/projects/conda/en/latest/user-guide/tasks/manage-environments.html#building-identical-conda-environments.\n\n mamba create --name deepicedrain --file environment-linux-64.lock\n mamba install --name deepicedrain --file environment-linux-64.lock\n\nIf you have a [CUDA](https://en.wikipedia.org/wiki/CUDA)-capable GPU,\nyou can also install the optional ""cuda"" packages to accelerate some calculations.\n\n poetry install --extras cuda\n\n\n## Running jupyter lab\n\n mamba activate deepicedrain\n python -m ipykernel install --user --name deepicedrain # to install virtual env properly\n jupyter kernelspec list --json # see if kernel is installed\n jupyter lab &\n\n\n## Related Projects\n\nThis work would not be possible without inspiration\nfrom the following cool open source projects!\nGo check them out if you have time.\n\n- [ATL11](https://github.com/suzanne64/ATL11)\n- [ICESAT-2 HackWeek](https://github.com/ICESAT-2HackWeek)\n- [icepyx](https://github.com/icesat2py/icepyx)\n\n\n## Citing\n\nThe work in this repository has not been peer-reviewed, but if you do want to\ncite it for some reason, use the following BibLaTeX code from this conference\nproceedings ([poster presentation](https://github.com/weiji14/nzasc2021)):\n\n @inproceedings{LeongSpatiotemporalvariabilityactive2021,\n title = {{Spatiotemporal Variability of Active Subglacial Lakes in Antarctica from 2018-2020 Using ICESat-2 Laser Altimetry}},\n author = {Leong, W. J. and Horgan, H. 
J.},\n date = {2021-02-10},\n publisher = {{Unpublished}},\n location = {{Christchurch, New Zealand}},\n doi = {10.13140/RG.2.2.27952.07680},\n eventtitle = {{New Zealand Antarctic Science Conference}}},\n langid = {english}\n }\n\nPython code for the DeepIceDrain package here on Github is also mirrored on Zenodo at https://doi.org/10.5281/zenodo.4071235.\n'",",https://doi.org/10.5281/zenodo.4071235,https://doi.org/10.5281/zenodo.4071235.\n","2019/10/09, 02:02:05",1478,LGPL-3.0,0,536,"2022/10/03, 21:13:02",12,324,332,0,387,7,0.0,0.40292275574112735,"2021/07/05, 02:53:09",v0.4.2,0,3,false,,false,false,,,,,,,,,,, LIVVkit,The land ice verification and validation toolkit.,LIVVkit,https://github.com/LIVVkit/LIVVkit.git,github,"verification,ice-sheet-models,validation,testing",Glacier and Ice Sheets,"2023/05/09, 20:17:47",7,11,1,true,Python,Land Ice Verification and Validation toolkit,LIVVkit,"Python,HTML,CSS,TeX,JavaScript",,"b'![](https://raw.githubusercontent.com/wiki/LIVVkit/LIVVkit/imgs/livvkit.png)\n\n The land ice verification and validation toolkit\n===============================================================================\n\nLIVVkit is a python-based toolkit for verification and validation of ice sheet\nmodels. It aims to provide the following capabilities:\n\n**Model V&V**\n* Numerical verification -- ""Are we solving the equations correctly?""\n* Physical validation -- ""Are we using the right physics?""\n\n**Software V&V**\n* Code verification -- ""did we build what *we* intended?""\n* Performance validation -- ""did we build what the *users* wanted?""\n\nWithin LIVVkit, these capabilities are broken into four components:\n\nModel V&V\n* Numerics\n* Validation\n\nSoftware V&V\n* Verification\n* Performance\n\nCurrently, LIVVkit is being used and developed in conjunction with E3SM\n([Energy Exascale Earth System Model](https://e3sm.org/)) and CISM\n([Community Ice Sheet Model](https://cism.github.io/)), but is designed\nto be extensible to other models. For further documentation view the\n[full documentation](https://livvkit.github.io/Docs).\n\n**Users and contributors are welcome!** We\xe2\x80\x99ll help you out \xe2\x80\x93\n[open an issue on github](https://github.com/LIVVkit/LIVVkit/issues)\nto contact us for any reason.\n\n Installation \n================\nThe latest LIVVkit release can be installed via [pip](https://pip.pypa.io/en/stable/):\n\n```sh\npip install livvkit\n```\n\nAdditionally, LIVVkit is released on github, and you can clone the source code:\n\n```sh\ngit clone https://github.com/LIVVkit/LIVVkit.git\n```\n\nIf you are having any troubles with installation or dependencies, open an issue on the \n[issue tracker](https://github.com/LIVVkit/LIVVkit/issues) or contact us!\n\n\n Usage\n==========\nLIVVkit is primarily controlled via options specified at the command line.\nTo see the full list of options, run:\n\n```sh\nlivv -h\n```\n\n Verification\n--------------\n\nIn verification mode, LIVVkit analyzes and compares a regression testing\ndataset to a reference dataset. 
For example, LIVVkit may analyze the dataset\nproduced from a proposed CISM 2.0.6 release (~400MB; download\n[here](http://jhkennedy.org/LIVVkit/cism-2.0.6-tests.20160728.tgz)) and\ncompare it to the dataset produced from the CISM 2.0.0 release (~400MB;\ndownload [here](http://jhkennedy.org/LIVVkit/cism-2.0.0-tests.20160728.tgz)).\nTo run this example, first download the two aforementioned datasets to a\ndirectory, open a terminal, and navigate to your download directory.\nThen, un-tar the datasets:\n\n```sh\ntar -zxvf cism-2.0.0-tests.20160728.tgz\ntar -zxvf cism-2.0.6-tests.20160728.tgz\n```\n\nFor ease, export the path to the two dataset directories:\n\n```sh\nexport REF=$PWD/cism-2.0.0-tests/titan-gnu/CISM_glissade\nexport TEST=$PWD/cism-2.0.6-tests/titan-gnu/CISM_glissade\n```\n\nTo run the suite, use:\n\n```sh\nlivv -v $TEST $REF -o cism206v200 -s\n```\n\nLIVVkit will run the verification suite, report a summary of the results\non the command line, produce an output website in the created `cism206v200`\ndirectory specified by the `-o/--out-dir` option, and launch an http server\n(the `-s/--serve option`) to easily view the output in your favorite web\nbrowser. LIVVkit will tell you the address to view the website at on the\ncommand line, which will typically look like\nhttp://0.0.0.0:8000/ver_test/index.html.\n\n\n Validation, Extensions\n-----------------------\n\nLIVVkit is extensible to more in-depth or larger validation analyses.\nHowever, because these validation analyses are particularly data intensive,\nmany of the observational and example model output files are much too\nlarge to distribute in the LIVVkit package. Therefore, we\'ve developed a\nLIVVkit Extensions repository (LEX) which uses\n[git-lfs](https://git-lfs.github.com) (Git Large File Support) in order to\ndistribute the required data. `git-lfs` can be installed either before or\nafter cloning this repository, but it will be needed *before* downloading\nthe required data. You can determine if you have `git-lfs` installed on\nyour system by running this command:\n\n```sh\ncommand -v git-lfs\n```\n\nIf `git-lfs` is not installed, you can install it by following the instructions here:\n\nhttps://git-lfs.github.com\n\nOnce `git-lfs` is installed, clone and enter this repository:\n\n```sh\ngit lfs clone https://code.ornl.gov/LIVVkit/lex.git\ncd lex\n```\n\nEach extension will have an associated JSON configuration file which will describe\nthe extension\'s analysis code, data locations, and options. To see a list of\navailable extensions, you can run this command:\n\n```sh\nfind . -iname ""*.json""\n```\n\nTo execute any of these extensions, point `livv`\nto any of these extensions config file via the `-e/--extension` option (or the\n`-V/--validate` option). For example, to run the minimal example extension,\nplace the output website in the `val_test` directory, and serve the output website\nyou\'d run this command:\n\n```sh\nlivv -e example/example.json -o vv_test -s\n```\n\n*Note:* All the extension configurations files assume you are working from the\ntop level `lex` directory. 
You *can* run any of these extensions from any\ndirectory, but you will need to edit the paths in the JSON configuration files so\nthat `livv` can find the required files.\n\nLikewise, you can also apply these analyses to any new model run by adjusting\nthe paths to point to your model run.\n\n \n More\n------\n\nFor more information about using LIVVkit see the [documentation](https://livvkit.github.io/Docs).\n\n Contact\n===========\n\nIf you would like to suggest features, request tests, discuss contributions,\nreport bugs, ask questions, or contact us for any reason, use the\n[Issue Tracker](https://github.com/LIVVkit/LIVVkit/issues).\n\nWant to send us a private message?\n\n**Joseph H. Kennedy** \n* github: @jhkennedy\n* email: kennedyjh [at] ornl.gov\n\n**Katherine J. Evans** \n* github: @kevans32\n* email: evanskj [at] ornl.gov\n\nIf you\'re emailing us, we recommend CC-ing all of us. \n\n'",,"2015/06/10, 14:14:15",3059,BSD-3-Clause,16,909,"2023/05/09, 20:17:47",14,21,43,1,169,1,0.7,0.5488721804511278,"2023/05/09, 20:20:28",v3.1.0,0,9,false,,true,false,"NCAR/iCIME_iHESP,ofuhrer/scream,Lizzy0Sun/heat-tepp,ESMCI/cmeps-cime,BjerknesCPU/NorESM1.3-HIRES,Huang-Group-UMICH/E3SM_v2_alpha,gdicker1/CIME_DAV,yihuiwang/E3SM,wanggangsheng/ELM-MEND,everpassenger/E3SM,LIVVkit/evv4esm",,https://github.com/LIVVkit,,,,,https://avatars.githubusercontent.com/u/12449792?v=4,,, pypromice,Deliver data about the mass balance of the Greenland ice sheet in near real-time.,GEUS-Glaciology-and-Climate,https://github.com/GEUS-Glaciology-and-Climate/pypromice.git,github,"greenland,weather,weather-station",Glacier and Ice Sheets,"2023/10/17, 11:01:13",10,0,3,true,Python,GEUS Glaciology and Climate,GEUS-Glaciology-and-Climate,"Python,TeX",https://pypromice.readthedocs.io,"b""# pypromice\n[![PyPI version](https://badge.fury.io/py/pypromice.svg)](https://badge.fury.io/py/pypromice)\n[![]()](https://www.doi.org/10.22008/FK2/3TSBF0) [![DOI](https://joss.theoj.org/papers/10.21105/joss.05298/status.svg)](https://doi.org/10.21105/joss.05298) [![Documentation Status](https://readthedocs.org/projects/pypromice/badge/?version=latest)](https://pypromice.readthedocs.io/en/latest/?badge=latest)\n \npypromice is designed for processing and handling [PROMICE](https://promice.dk) automated weather station (AWS) data.\n\nIt is envisioned for pypromice to be the go-to toolbox for handling and processing [PROMICE](https://promice.dk) and [GC-Net](http://cires1.colorado.edu/steffen/gcnet/) datasets. New releases of pypromice are uploaded alongside PROMICE AWS data releases to our [Dataverse](https://dataverse.geus.dk/dataverse/PROMICE) for transparency purposes and to encourage collaboration on improving our data. Please visit the pypromice [readthedocs](https://pypromice.readthedocs.io/en/latest/?badge=latest) for more information. \n\nIf you intend to use PROMICE AWS data and/or pypromice in your work, please cite these publications below, along with any other applicable PROMICE publications where possible:\n\n**Fausto, R.S., van As, D., Mankoff, K.D., Vandecrux, B., Citterio, M., Ahlstr\xc3\xb8m, A.P., Andersen, S.B., Colgan, W., Karlsson, N.B., Kjeldsen, K.K., Korsgaard, N.J., Larsen, S.H., Nielsen, S., Pedersen, A.\xc3\x98., Shields, C.L., Solgaard, A.M., and Box, J.E. (2021) Programme for Monitoring of the Greenland Ice Sheet (PROMICE) automatic weather station data, Earth Syst. Sci. 
Data, 13, 3819\xe2\x80\x933845, [https://doi.org/10.5194/essd-13-3819-2021](https://doi.org/10.5194/essd-13-3819-2021)**\n\n**How, P., Wright, P.J., Mankoff, K., Vandecrux, B., Fausto, R.S. and Ahlstr\xc3\xb8m, A.P. (2023) pypromice: A Python package for processing automated weather station data, Journal of Open Source Software, 8(86), 5298, [https://doi.org/10.21105/joss.05298](https://doi.org/10.21105/joss.05298)** \n\n**How, P., Lund, M.C., Nielsen, R.B., Ahlstr\xc3\xb8m, A.P., Fausto, R.S., Larsen, S.H., Mankoff, K.D., Vandecrux, B., Wright, P.J. (2023) pypromice, GEUS Dataverse, [https://doi.org/10.22008/FK2/3TSBF0](https://doi.org/10.22008/FK2/3TSBF0)** \n\n## Installation\n\n### Quick install\n\nThe latest release of pypromice can be installed using pip:\n\n```\n$ pip install pypromice\n```\n\nFor the most up-to-date version, pypromice can be installed directly from the repo: \n\n```\n$ pip install --upgrade git+http://github.com/GEUS-Glaciology-and-Climate/pypromice.git\n```\n\n### Developer install\n\t\npypromice can be run in an environment with the pypromice repo:\n\n```\n$ conda create --name pypromice python=3.8\n$ conda activate pypromice\n$ git clone git@github.com:GEUS-Glaciology-and-Climate/pypromice.git\n$ cd pypromice/\n$ pip install .\n```\n\n### Additional dependencies\n\nAdditional packages are required if you wish to use pypromice's post-processing functionality. \n\n[eccodes](https://confluence.ecmwf.int/display/ECC/ecCodes+installation) is the official package for BUFR encoding and decoding. First, try to install it with conda-forge like so:\n\n```\n$ conda install -c conda-forge eccodes\n```\n\nIf the environment cannot resolve the eccodes installation, then follow the steps documented [here](https://gist.github.com/MHBalsmeier/a01ad4e07ecf467c90fad2ac7719844a) to download eccodes and then install eccodes' python bindings using pip.\n\n```\n$ pip3 install eccodes-python\n```\n\n""",",https://doi.org/10.21105/joss.05298,https://doi.org/10.5194/essd-13-3819-2021,https://doi.org/10.5194/essd-13-3819-2021,https://doi.org/10.21105/joss.05298,https://doi.org/10.21105/joss.05298,https://doi.org/10.22008/FK2/3TSBF0,https://doi.org/10.22008/FK2/3TSBF0","2020/09/15, 11:37:27",1135,GPL-2.0,255,453,"2023/10/17, 11:01:20",46,72,143,126,8,3,1.4,0.7036082474226804,"2023/10/17, 11:02:08",v1.3.1,0,8,false,,false,false,,,https://github.com/GEUS-Glaciology-and-Climate,https://eng.geus.dk/about/organisation/departments/glaciology-and-climate,"Copenhagen, Denmark",,,https://avatars.githubusercontent.com/u/71171316?v=4,,, GLAFT,Python module for assessing glacier velocity maps using statistics- and physics-based metrics.,whyjz,https://github.com/whyjz/GLAFT.git,github,"cryosphere,glaciers,python,remote-sensing,science",Glacier and Ice Sheets,"2023/10/06, 11:24:18",14,0,12,true,Jupyter Notebook,,,"Jupyter Notebook,Python",https://whyjz.github.io/GLAFT/,"b'# GLAcier Feature Tracking testkit: `glaft`\n\nA Python package for assessing and benchmarking feature-tracked glacier velocity maps derived from satellite imagery. Includes demo Notebook examples using data from Kaskawulsh glacier, Canada. 
See our [user manual](https://whyjz.github.io/GLAFT/doc/introduction.html) for more information.\n\n[![Binder](https://mybinder.org/badge_logo.svg)](https://mybinder.org/v2/gh/whyjz/GLAFT/master?labpath=doc%2Fquickstart.ipynb)\n\n## Installation\n\n**Try GLAFT without installing**: We recommend running our [Quick Start notebook on MyBinder.org](https://mybinder.org/v2/gh/whyjz/glacier-ft-test/master?urlpath=tree/jupyter-book/doc/quickstart.ipynb).\n\n**For cloud access**: We recommend using the [Ghub portal to launch GLAFT](https://theghub.org/tools/glaft/status) (registration required).\n\n**For local installation**: GLAFT is available on PyPI and can be installed via `pip`. \n\n```\npip install glaft\n```\n\n## License\n\nGLAFT uses the [MIT License](https://github.com/whyjz/GLAFT/blob/master/LICENSE).\n\nContributions are always welcome!'",,"2020/07/07, 17:57:49",1205,MIT,38,142,"2021/12/09, 01:04:41",0,1,1,0,686,0,0.0,0.14598540145985406,"2023/10/06, 10:04:19",v1.0.0,0,3,false,,false,false,,,,,,,,,,, ITS_LIVE,"Provide automated, low latency, global glacier flow and elevation change datasets.",nasa-jpl,https://github.com/nasa-jpl/its_live.git,github,"itslive,its-live,glacier,glacier-dynamics,glaciers,glacier-flow",Glacier and Ice Sheets,"2023/07/11, 18:33:08",36,0,10,true,Python,NASA Jet Propulsion Laboratory,nasa-jpl,"Python,Jupyter Notebook",https://its-live.jpl.nasa.gov/,"b'
\n\n### A NASA MEaSUREs project to provide automated, low latency, global glacier flow and elevation change datasets\n\n[![Voil\xc3\xa0](https://img.shields.io/badge/Launch-Voil\xc3\xa0-lightblue?atyle=plastic&logo=jupyter)](https://itslive-dashboard.labs.nsidc.org)\n\n\n[![Binder](https://mybinder.org/badge_logo.svg)](https://mybinder.org/v2/gh/nasa-jpl/its_live/main?urlpath=lab/tree/notebooks)\n\n\nThis repository provides tools for accessing and working with [ITS_LIVE](https://its-live.jpl.nasa.gov/) data.\n\n## **Web App**\n\n[![ITS_LIVE Web App](https://its-live-data.s3.amazonaws.com/documentation/ITS_LIVE_widget.gif)](https://itslive-dashboard.labs.nsidc.org/)\n\n**Video tutorial**\n\n[![ITS_LIVE YouTube tutorial](https://its-live-data.s3.amazonaws.com/documentation/ITS_LIVE_widget_youtube.jpg)](https://youtu.be/VYKsVvpVbmU ""ITS_LIVE glacier speed - under 1 minute to first plot"")\n\n## **Notebook**\n\n![ITS_LIVE NoteBook](https://its-live-data.s3.amazonaws.com/documentation/ITS_LIVE_notebook.gif)\n\n**Video tutorial**\n\n[![ITS_LIVE YouTube tutorial](https://its-live-data.s3.amazonaws.com/documentation/ITS_LIVE_notebook_velocity_timeseries_youtube.jpg)](https://www.youtube.com/embed/G7E7rE5npvg ""ITS_LIVE glacier speeds - 4 min to first plot"")\n\n\n'",,"2021/12/03, 00:41:41",692,MIT,5,273,"2023/07/11, 18:33:56",0,12,12,1,106,0,0.1,0.36444444444444446,,,0,5,false,,false,false,,,https://github.com/nasa-jpl,https://www.jpl.nasa.gov,"Pasadena, California, US",,,https://avatars.githubusercontent.com/u/10360932?v=4,,, ODINN.jl,Global glacier model using Universal Differential Equations for climate-glacier interactions.,ODINN-SciML,https://github.com/ODINN-SciML/ODINN.jl.git,github,"glaciers,climate,cryosphere,sciml,julia,differential-equations,scientific-machine-learning",Glacier and Ice Sheets,"2023/09/19, 13:50:02",41,0,25,true,Julia,ODINN,ODINN-SciML,"Julia,Jupyter Notebook",,"b'# ODINN\n\n[![Build Status](https://github.com/ODINN-SciML/ODINN.jl/actions/workflows/CI.yml/badge.svg?branch=main)](https://github.com/ODINN-SciML/ODINN.jl/actions/workflows/CI.yml?query=branch%3Amain)\n[![Coverage](https://codecov.io/gh/ODINN-SciML/ODINN.jl/branch/main/graph/badge.svg)](https://app.codecov.io/gh/ODINN-SciML/ODINN.jl)\n[![CompatHelper](https://github.com/ODINN-SciML/ODINN.jl/actions/workflows/CompatHelper.yml/badge.svg)](https://github.com/ODINN-SciML/ODINN.jl/actions/workflows/CompatHelper.yml)\n[![DOI](https://zenodo.org/badge/DOI/10.5281/zenodo.8033313.svg)](https://doi.org/10.5281/zenodo.8033313)\n\n\n\n### \xe2\x9a\xa0\xef\xb8\x8f New preprint available! \xe2\x9a\xa0\xef\xb8\x8f\n\nFor a detailed description of the model and the application of Universal Differential Equations to glacier ice flow modelling, take a look at [our preprint at Geoscientific Model Development](https://gmd.copernicus.org/preprints/gmd-2023-120/). \n\n### OGGM (Open Global Glacier Model) + DIfferential equation Neural Networks\n\nGlobal glacier model using Universal Differential Equations to model and discover processes of climate-glacier interactions. \n\n`ODINN.jl` uses neural networks and differential equations in order to combine mechanistic models describing glacier physical processes (e.g. ice creep, basal sliding, surface mass balance) with machine learning. Neural networks are used to learn parts of the equations, which then can be interpreted in a mathematical form (e.g. using SINDy) in order to update the original equation from the process. 
ODINN uses the Open Global Glacier Model ([OGGM](oggm.org/), Maussion et al., 2019) as a basic framework to retrieve all the topographical and climate data for the initial state of the simulations. This is done by calling Python from Julia using PyCall. Then, all the simulations and processing are performed in Julia, benefitting from its high performance and the SciML ecosystem. \n\n
\n\n> **Overview of `ODINN.jl`\xe2\x80\x99s workflow to perform functional inversions of glacier physical processes using Universal Differential Equations**. The parameters ($\xce\xb8$) of a function determining a given physical process ($D_\xce\xb8$), expressed by a neural network $NN_\xce\xb8$, are optimized in order to minimize a loss function. In this example, the physical law to be inferred was constrained only by climate data, but any other proxies of interest can be used to design it. The climate data, and therefore the glacier mass balance, are downscaled (i.e. it depends on $S$), with $S$ being updated by the solver, thus dynamically updating the state of the simulation for a given timestep.\n\n## Installing ODINN \n\nIn order to install `ODINN` in a given environment, just do in the REPL:\n```julia\njulia> ] # enter Pkg mode\n(@v1.9) pkg> activate MyEnvironment # or activate whatever path for the Julia environment\n(MyEnvironment) pkg> add ODINN\n```\n\n## ODINN initialization: integration with OGGM and multiprocessing \n\nODINN depends on some Python packages, mainly [OGGM](https://github.com/OGGM/oggm) and [xarray](https://github.com/pydata/xarray). In order to install the necessary Python dependencies in an easy manner, we are providing a Python environment (`oggm_env`) in `environment.yml`. To install and activate the environment, we recommend using [micromamba](https://mamba.readthedocs.io/en/latest/user_guide/micromamba.html):\n\n```\nmicromamba create -f environment.yml\nmicromamba activate oggm_env\n```\n\nIn order to call OGGM in Python from Julia, we use [PyCall.jl](https://github.com/JuliaPy/PyCall.jl). PyCall hooks into the Python installation and uses Python in a totally seamless way from Julia. \n\nThe path to this conda environment needs to be specified in the `ENV[""PYTHON""]` variable in Julia, for PyCall to find it. This configuration is very easy to implement: it just requires providing the Python path to PyCall and building it:\n\n```julia\njulia # start Julia session\n\njulia> ENV[""PYTHON""] = read(`which python`, String)[1:end-1] # trim trailing newline\njulia> import Pkg; Pkg.build(""PyCall"")\njulia> exit()\n\n# Now you can run your code using ODINN in a new Julia session; e.g.:\nusing ODINN\n```\n\nSo now you can start working with ODINN with PyCall correctly configured. This configuration step only needs to be done the first time, so from now on ODINN should be able to correctly find your Python libraries. If you ever want to change your conda environment, you would just need to repeat the steps above. The next step is to start a new Julia session and import `ODINN` (or just run your script which uses ODINN, e.g. `toy_model.jl`). If you want to run ODINN using multiprocessing, you can enable it using the following command in Julia:\n\n```julia\nprocesses = 16\nODINN.enable_multiprocessing(processes)\n```\n\nFrom this point, it is possible to use ODINN with multiprocessing and to run Python from Julia by running the different commands available in the PyCall documentation. In order to get a better idea of how this works, we recommend checking the toy model example [toy_model.jl](https://github.com/ODINN-SciML/ODINN.jl/blob/main/scripts/toy_model.jl). \n\n### Using OGGM for the initial conditions of the training/simulations \n\nODINN works as a back-end of OGGM, utilizing all its tools to retrieve RGI data, topographical data, climate data and other datasets from the OGGM shop. 
We use these data to specify the initial state of the simulations, and to retrieve the climate data to force the model. Everything related to the mass balance and ice flow dynamics models is written 100% in Julia. This allows us to run tests with this toy model for any glacier on Earth. In order to choose a glacier, you just need to specify the RGI ID, which you can find [here](https://www.glims.org/maps/glims). \n\n## Upcoming changes \xf0\x9f\x86\x95\n\nA stable API is still being designed, which will be available in the next release. If you plan to start using the model, please contact us, although we recommend waiting until the next release for a smoother experience. \n\n## How to cite \xf0\x9f\x93\x96\n\nIf you want to cite this work, please use this BibTeX citation from [our latest preprint](https://gmd.copernicus.org/preprints/gmd-2023-120/):\n```\n@article{bolibar_universal_2023,\n\ttitle = {Universal {Differential} {Equations} for glacier ice flow modelling},\n\tvolume = {2023},\n\turl = {https://gmd.copernicus.org/preprints/gmd-2023-120/},\n\tdoi = {10.5194/gmd-2023-120},\n\tjournal = {Geoscientific Model Development Discussions},\n\tauthor = {Bolibar, J. and Sapienza, F. and Maussion, F. and Lguensat, R. and Wouters, B. and P\xc3\xa9rez, F.},\n\tyear = {2023},\n\tpages = {1--26},\n}\n```\n'",",https://doi.org/10.5281/zenodo.8033313","2021/03/17, 16:12:29",952,MIT,20,387,"2023/09/12, 13:41:41",22,65,109,46,43,2,0.2,0.2129629629629629,"2023/06/13, 13:02:16",v0.2.0,0,3,false,,false,false,,,https://github.com/ODINN-SciML,,Earth,,,https://avatars.githubusercontent.com/u/80704461?v=4,,, Yelmo,A 3D ice-sheet-shelf model solving for the coupled dynamics and thermodynamics of the ice sheet system.,palma-ice,https://github.com/palma-ice/yelmo.git,github,,Glacier and Ice Sheets,"2023/06/19, 13:45:18",12,0,1,true,Fortran,PalMA Ice sheet modeling group,palma-ice,"Fortran,Python,Makefile,Shell,JavaScript,Dockerfile",,"b'# Yelmo\n\nYelmo is a 3D ice-sheet-shelf model solving\nfor the coupled dynamics and thermodynamics of the ice sheet system. Yelmo\ncan be used for idealized simulations, stand-alone ice sheet simulations\nand fully coupled ice-sheet and climate simulations.\n\nThe physics and design of the model are described in the following article:\n\n> Robinson, A., Alvarez-Solas, J., Montoya, M., Goelzer, H., Greve, R., and Ritz, C.: Description and validation of the ice-sheet model Yelmo (version 1.0), Geosci. Model Dev., 13, 2805\xe2\x80\x932823, [https://doi.org/10.5194/gmd-13-2805-2020](https://doi.org/10.5194/gmd-13-2805-2020), 2020.\n\nThe (growing) model documentation is provided to help with proper use of the model,\nand can be found at:\n\n [https://palma-ice.github.io/yelmo-docs](https://palma-ice.github.io/yelmo-docs)\n \nWhile the model has been designed to be easy to use, there\nare many parameters that require knowledge of ice-sheet\nphysics and numerous parameterizations. It is not recommended to use the ice\nsheet model as a black box without understanding of the key parameters that\naffect its performance.\n\nNote that the test cases shown by Robinson et al. (2020) can be reproduced following the\ninstructions below in the section ""Test cases"".\n\nTo get started with compiling and running the model, see the quick-start\ninstructions below. 
Or go to the documentation directly: [https://palma-ice.github.io/yelmo-docs/getting-started/](https://palma-ice.github.io/yelmo-docs/getting-started/).\n\n# Getting started\n\nHere you can find the basic information and steps needed to get **Yelmo** running.\n\n## Super-quick start\n\nA summary of commands to get started is given below. For more detailed information see subsequent sections.\n\n```\n# Clone repository\ngit clone git@github.com:palma-ice/yelmo.git\n\n# Enter directory and run configuration script\ncd yelmo\npython config.py config/pik_ifort \n\n# Compile the benchmarks program\nmake clean \nmake benchmarks \n\n# Run a test simulation of the EISMINT1-moving experiment\n./runylmo -r -e benchmarks -o output/eismint1-moving -n par-gmd/yelmo_EISMINT-moving.nml\n\n# Compile the initmip program and run a simulation of Antarctica\nmake initmip \n./runylmo -r -e initmip -o output/ant-pd -n par/yelmo_initmip.nml -p ctrl.clim_nm=""clim_pd""\n```\n\n## Dependencies\n\nSee: [Dependencies](https://palma-ice.github.io/yelmo-docs/dependencies/) for installation tips.\n\n- NetCDF library (preferably version 4.0 or higher)\n- LIS: [Library of Iterative Solvers for Linear Systems](http://www.ssisc.org/lis/)\n- [Optional] Python 3.x, which is only needed for automatic configuration of the Makefile and the use of the script `runylmo` for job preparation and submission.\n- [Optional] \'runner\' Python library: [https://github.com/alex-robinson/runner](https://github.com/alex-robinson/runner). Used for changing parameters at the command line using `runylmo`, and for running ensembles. \n\n## Directory structure\n\n```fortran\n config/\n Configuration files for compilation on different systems.\n input/\n Location of any input data needed by the model.\n libs/\n Auxiliary libraries nesecessary for running the model.\n libyelmo/\n Folder containing all compiled files in a standard way with\n lib/, include/ and bin/ folders.\n output/\n Default location for model output.\n par/\n Default parameter files that manage the model configuration.\n src/\n Source code for Yelmo.\n tests/\n Source code and analysis scripts for specific model benchmarks and tests.\n```\n\n## Usage\n\nFollow the steps below to (1) obtain the code, (2) configure the Makefile for your system,\n(3) compile the Yelmo static library and an executable program and (4) run a test simulation.\n\n### 1. Get the code.\n\nClone the repository from [https://github.com/palma-ice/yelmo](https://github.com/palma-ice/yelmo):\n\n```\ngit clone git@github.com:palma-ice/yelmo.git $YELMOROOT\ncd $YELMOROOT\n```\n\nwhere `$YELMOROOT` is the installation directory.\n\nIf you plan to make changes to the code, it is wise to check out a new branch:\n\n```\ngit checkout -b user-dev\n```\n\nYou should now be working on the branch `user-dev`.\n\n### 2. Create the system-specific Makefile.\n\nTo compile Yelmo, you need to generate a Makefile that is appropriate for your system. In the folder `config`, you need to specify a configuration file that defines the compiler and flags, including definition of the paths to the `NetCDF` and `LIS` libraries. You can use another file in the config folder as a template, e.g.,\n\n```\ncd config\ncp pik_ifort myhost_mycompiler\n```\n\nthen modify the file `myhost_mycompiler` to match your paths. 
Back in `$YELMOROOT`, you can then generate your Makefile with the provided python configuration script:\n\n```\ncd $YELMOROOT\npython config.py config/myhost_mycompiler\n```\n\nThe result should be a Makefile in `$YELMOROOT` that is ready for use.\n\n#### Alternative configuration - quickstart with Docker and VS Code\n\nInstead of a manual install, one way to get up and running quickly with Yelmo is with VS Code and Docker. It works on any platform and uses a Linux-based container. You don\'t need to know Docker or VS Code to get started. Just install the following:\n\n1. [Docker](https://docs.docker.com/engine/install/)\n2. [VS Code](https://code.visualstudio.com) \n3. [Install the remote development extension](https://marketplace.visualstudio.com/items?itemName=ms-vscode-remote.vscode-remote-extensionpack)\n\nThen make sure that Docker is running and start VS Code. \nOpen the folder with the Yelmo code. Say Yes when VS Code asks you if you want to open it in the container.\n\nNow you can go directly to step 3 below, just make sure that you use the terminal in VS Code.\n\n### 3. Compile the code.\n\nNow you are ready to compile Yelmo as a static library:\n\n```\nmake clean # This step is very important to avoid errors!!\nmake yelmo-static [debug=1]\n```\nThis will compile all of the Yelmo modules and libraries (as defined in `config/Makefile_yelmo.mk`),\nand link them in a static library. All compiled files can be found in the folder `libyelmo/`.\n\nOnce the static library has been compiled, it can be used inside of external Fortran programs and modules\nvia the statement `use yelmo`.\nTo include/link yelmo-static during compilation of another program, its location must be defined:\n\n```\nINC_YELMO = -I${YELMOROOT}/include\nLIB_YELMO = -L${YELMOROOT}/include -lyelmo\n```\n\nAlternatively, several test programs exist in the folder `tests/` to run Yelmo\nas a stand-alone ice sheet.\nFor example, it\'s possible to run different EISMINT benchmarks, MISMIP benchmarks and the\nISIMIP6 INITMIP simulation for Greenland, respectively:\n\n```\nmake benchmarks # compiles the program `libyelmo/bin/yelmo_benchmarks.x`\nmake mismip # compiles the program `libyelmo/bin/yelmo_mismip.x`\nmake initmip # compiles the program `libyelmo/bin/yelmo_initmip.x`\n```\n\nThe Makefile additionally allows you to specify debugging compiler flags with the option `debug=1`, in case you need to debug the code (e.g., `make benchmarks debug=1`). Using this option, the code will run much slower, so this option is not recommended unless necessary.\n\n### 4. Run the model.\n\nOnce an executable has been created, you can run the model. This can be\nachieved via the included Python job submission script `runylmo`. The following steps\nare carried out via the script:\n\n1. The output directory is created.\n2. The executable is copied to the output directory.\n3. The relevant parameter files are copied to the output directory.\n4. Links to the input data paths (`input` and `ice_data`) are created in the output directory. Note that many simulations, such as benchmark experiments, do not depend on these external data sources, but the links are made anyway.\n5. 
The executable is run from the output directory, either as a background process or it is submitted to the queue via `sbatch` (the SLURM workload manager).\n\nTo run a benchmark simulation, for example, use the following command:\n\n```\n./runylmo -r -e benchmarks -o output/test -n par/yelmo_EISMINT.nml\n```\n\nwhere the option `-r` implies that the model should be run as a background process. If this is omitted, then the output directory will be populated, but no executable will be run, while `-s` will submit the simulation to the cluster queue system instead of running it in the background. The option `-e` lets you specify the executable. For some standard cases, shortcuts have been created:\n\n```\nbenchmarks = libyelmo/bin/yelmo_benchmarks.x\nmismip = libyelmo/bin/yelmo_mismip.x\ninitmip = libyelmo/bin/yelmo_initmip.x\n```\nThe last two mandatory arguments `-o OUTDIR` and `-n PAR_PATH` are the output/run directory and the parameter file to be used for this simulation, respectively. In the case of the above simulation, the output directory is defined as `output/test`, where all model parameters (loaded from the file `par/yelmo_EISMINT.nml`) and model output can be found.\n\nIt is also possible to modify parameters inline via the option `-p KEY=VAL [KEY=VAL ...]`. The parameter should be specified with its namelist group and its name. E.g., to change the resolution of the EISMINT benchmark experiment to 10km, use:\n\n```\n./runylmo -r -e benchmarks -o output/test -n par/yelmo_EISMINT.nml -p ctrl.dx=10\n```\n\nSee `runylmo -h` for more details on the run script. \n\n## Test cases\n\nThe published model description includes several test simulations for validation\nof the model\'s performance. The following section describes how to perform these\ntests using the same model version documented in the article. From this point,\nit is assumed that the user has already configured the model for their system\n(see https://palma-ice.github.io/yelmo-docs) and is ready to compile the model.\n\n### 1. EISMINT1 moving margin experiment\nTo perform the moving margin experiment, compile the benchmarks\nexecutable and call it with the EISMINT parameter file:\n\n```\nmake benchmarks\n./runylmo -r -e benchmarks -o output/eismint-moving -n par-gmd/yelmo_EISMINT_moving.nml\n```\n\n### 2. EISMINT2 EXPA\nTo perform Experiment A from the EISMINT2 benchmarks, compile the benchmarks\nexecutable and call it with the EXPA parameter file:\n\n```\nmake benchmarks\n./runylmo -r -e benchmarks -o output/eismint-expa -n par-gmd/yelmo_EISMINT_expa.nml\n```\n\n### 3. EISMINT2 EXPF\nTo perform Experiment F from the EISMINT2 benchmarks, compile the benchmarks\nexecutable and call it with the EXPF parameter file:\n\n```\nmake benchmarks\n./runylmo -r -e benchmarks -o output/eismint-expf -n par-gmd/yelmo_EISMINT_expf.nml\n```\n\n### 4. MISMIP RF\nTo perform the MISMIP rate factor experiment, compile the mismip executable\nand call it with the MISMIP parameter file for the three parameter permutations of interest (default, subgrid and subgrid+gl-scaling):\n\n```\nmake mismip\n./runylmo -r -e mismip -o output/mismip-rf-0 -n par-gmd/yelmo_MISMIP3D.nml -p ydyn.beta_gl_stag=0 ydyn.beta_gl_scale=0\n./runylmo -r -e mismip -o output/mismip-rf-1 -n par-gmd/yelmo_MISMIP3D.nml -p ydyn.beta_gl_stag=3 ydyn.beta_gl_scale=0\n./runylmo -r -e mismip -o output/mismip-rf-2 -n par-gmd/yelmo_MISMIP3D.nml -p ydyn.beta_gl_stag=3 ydyn.beta_gl_scale=2\n```\nTo additionally change the resolution of the simulations, change the parameter `mismip.dx`, e.g. 
for the default simulation with 10km resolution , call:\n\n```\n./runylmo -r -e mismip -o output/mismip-rf-0-10km -n par-gmd/yelmo_MISMIP3D.nml -p ydyn.beta_gl_stag=0 ydyn.beta_gl_scale=0 mismip.dx=10\n```\n\n### 5. Age profile experiments\nTo perform the age profile experiments, compile the Fortran program `tests/test_icetemp.f90`\nand run it:\n\n```\nmake icetemp\n./libyelmo/bin/test_icetemp.x\n```\n\nTo perform the different permutations, it is necessary to recompile for\nsingle or double precision after changing the precision parameter `prec` in the file\n`src/yelmo_defs.f90`. The number of vertical grid points can be specified in the main\nprogram file, as well as the output filename.\n\n### 6. Antarctica present-day and glacial simulations\nTo perform the Antarctica simulations as presented in the paper, it is necessary\nto compile the `initmip` executable and run with the present-day (pd) and\nglacial (lgm) parameter values:\n\n\n```\nmake initmip\n./runylmo -r -e initmip -o output/ant-pd -n par-gmd/yelmo_Antarctica.nml -p ctrl.clim_nm=""clim_pd""\n./runylmo -r -e initmip -o output/ant-lgm -n par-gmd/yelmo_Antarctica.nml -p ctrl.clim_nm=""clim_lgm""\n```\n'",",https://doi.org/10.5194/gmd-13-2805-2020,https://doi.org/10.5194/gmd-13-2805-2020","2019/05/06, 13:57:17",1633,GPL-3.0,159,2095,"2022/06/17, 08:36:52",0,2,3,0,495,0,0.0,0.009533898305084776,"2021/12/19, 13:15:52",solver-stability-v1.0,0,5,false,,false,false,,,https://github.com/palma-ice,www.palma-ucm.es/palma-ice,"Madrid, Spain",,,https://avatars.githubusercontent.com/u/23148337?v=4,,, WAVI.jl,"A fast and friendly ice sheet model, written in Julia.",RJArthern,https://github.com/RJArthern/WAVI.jl.git,github,,Glacier and Ice Sheets,"2023/09/29, 15:18:31",22,0,12,true,Julia,,,"Julia,TeX,Dockerfile",,"b'\n
# ☃️🏔️❄️ WAVI.jl ❄️🏔️☃️
\n\nWAVI (Wavelet-based Adaptive-grid Vertically-integrated Ice-model) is a fast and friendly ice sheet model, written in Julia. \n\n## Contents\n\n* [Installation instructions](#installation-instructions)\n* [Running your first model](#running-your-first-model)\n* [Getting help](#getting-help)\n* [Contributing](#contributing)\n* [Credits](#credits)\n\n## Installation instructions\nYou can install the latest version of WAVI using Julia\'s in-built package manager:\n```julia\njulia> using Pkg\njulia> Pkg.add(PackageSpec(url=""https://github.com/RJArthern/WAVI.jl.git""))\n```\nNote that WAVI requires Julia v1.5 or newer.\n\nUpdating WAVI is also achieved using the package manager:\n```julia\njulia> using Pkg\njulia> Pkg.update(""WAVI"")\n```\nNote that updating should be done with care as WAVI is still developing rapidly; while we aim to keep breaking changes to a minimum, this cannot be guaranteed at present.\n\n## Running your first model\nLet\'s run the MISMIP+ experiment (http://www.climate-cryosphere.org/activities/targeted/153-misomip/1412-mismip-plus), the latest ice sheet model intercomparison experiment. We\'ll use a grid with 80x10 grid points and 8km resolution in both dimensions, and 4 vertical levels. We\'ll run the model to steady state for 10000 years with a timestep of 0.5 years. Since we\'re only interested in the steady state, we\'ll speed up the code by only doing one iteration of the velocity solve per timestep (note that the `solver_params` are passed to the `Model` so that this setting takes effect):\n```julia\nusing WAVI \ngrid = Grid(nx = 80, ny = 10, n\xcf\x83 = 4, dx = 8000., dy = 8000., u_iszero = [""north""], v_iszero = [""east"", ""west""])\nbed = WAVI.mismip_plus_bed \nparams = Params(accumulation_rate = 0.3)\nsolver_params = SolverParams(maxiter_picard = 1)\nmodel = Model(grid = grid, bed_elevation = bed, params = params, solver_params = solver_params)\ntimestepping_params = TimesteppingParams(dt = 0.5, end_time = 10000.)\nsimulation = Simulation(model = model, timestepping_params = timestepping_params)\nrun_simulation!(simulation)\n```\nIt\'s as easy as that: entry into the state-of-the-art ice sheet model intercomparison in nine lines of code \xf0\x9f\x98\x8e\n\n## Getting help\n\n## Contributing\n\n## Credits\nThis package was initiated by Rob Arthern (https://github.com/RJArthern) and is currently maintained by Rob and Alex Bradley (https://github.com/alextbradley)\n'",,"2020/08/06, 17:31:05",1175,MIT,62,402,"2023/09/29, 15:18:31",19,38,54,10,26,1,0.3,0.26956521739130435,"2023/05/04, 21:22:09",v0.0.1,0,5,false,,false,false,,,,,,,,,,, Planet Snowcover,"A project that pairs airborne lidar and Planet Labs satellite imagery with cutting-edge computer vision techniques to identify snow-covered area at unprecedented spatial and temporal resolutions.",acannistra,https://github.com/acannistra/planet-snowcover.git,github,"planet-labs,computer-vision,cloud-native",Snow and Permafrost,"2021/03/11, 17:32:03",20,0,0,false,Jupyter Notebook,,,"Jupyter Notebook,HTML,Python,HCL,Dockerfile,Shell",https://planet-snowcover.readthedocs.io/en/latest/,"b'
# Planet Snowcover
Planet Snowcover is a project that pairs airborne lidar and Planet Labs satellite imagery with cutting-edge computer vision techniques to identify snow-covered area at unprecedented spatial and temporal resolutions.

💡 This work was presented by Tony (@acannistra) at AGU 2019 in San Francisco. See the slides here.
**Researchers**: *[Tony Cannistra](https://www.anthonycannistra.com)¹, Dr. David Shean², and Dr. Nicoleta Cristea²*

¹ Department of Biology, University of Washington, Seattle, WA.
² Department of Civil and Environmental Engineering, University of Washington, Seattle, WA.
\n\n## This Repository\n\nThis repository serves as the canonical source for the software and infrastructure necessary to sucessfully build and deploy a machine-learning based snow classifier using Planet Labs imagery and airborne lidar data.\n\n* [Primary Components](#primary-components)\n* [Requirements](#requirements)\n * [Basic Requirements](#basic-requirements)\n * [Development Requirements](#development-requirements)\n * [Accounts + Data](#accounts-and-data)\n* [Infrastructure Deployment](#infrastructure-deployment)\n* [Tutorials](#tutorials)\n* [Implementation Details](#implementation-details)\n * AWS Cloud Resources\n * Open Source Machine Learning\n* Funding Sources\n* [Original Research Proposal](#original-proposal)\n\n## Primary Components\n\nThe contents of this repository are divided into several main components, which we detail here. This is the place to look if you\'re looking for something in particular.\n\n| Folder | Description | Details |\n|----------------------------------------------------|---------------------------------------------------------------------------------------------------------------------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|\n| [`./pipeline`](./pipeline) | Jupyter notebooks detailing the entire data processing, machine learning, and evaluation pipeline. | These notebooks detail every step in this workflow, from start to finish. |\n| [`./preprocess`](./preprocess) | A set of Python CLI tools for preprocessing data assets. | These tools help to reproject and threshold the ASO raster files, create vector footprints of raster data, tile the imagery for training, and other related tasks. |\n| [`./model`](./model) | The implementation of the machine learning/computer vision techniques used by this project. | This work relies heavily on the [robosat.pink](https://github.com/datapink/robosat.pink) repository, which we\'ve [forked](https://github.com/acannistra/robosat.pink) and modified extensively. |\n| [`./sagemaker`](./sagemaker) | The infrastructure required to use [Amazon Sagemaker](https://aws.amazon.com/sagemaker/) to manage our ML training jobs. | Sagemaker requires considerable configuration, including a Docker container. We build this container from this directory, which has a copy of the `./model` directory. |\n| [`./experiments`](./experiments) | Configuration files that describe experiments used to assess the performance of this ML-based snow cover method. | Our ML infrastructure uses ""config files"" to describe the inputs and other parameters to train the model effectively. We use these files to describe experiments that we perform, using different sets of ASO and imagery. |\n| [`./implementation-notes`](./implementation-notes) | Technical descriptions of the implementation considerations that went into this project. | These are working documents, in raw Markdown format. |\n| [`./raster_utils`](./raster_utils) | Small utility functions for managing raster computations. | Not much to see here. |\n| [`./environment`](./environment) | Raw Python environment configuration files. | \xe2\x9a\xa0\xef\xb8\x8f These emerge from `conda` and change often. Use sparingly. 
We preserve our environment via Docker, which should be used in this case (see the `./sagemaker` directory) |\n| [`./analysis`](./analysis) | Jupyter notebooks that describe analyses about our snow mask product. | \xe2\x9a\xa0\xef\xb8\x8f These are a work in progress and change frequently. |\n\n## Requirements\n### Basic Requirements\nThe goal of this work is to provide a toolkit that is relatively easy to deploy for someone with **working knowledge** of the following tools:\n\n* Python 3\n* Jupyter notebooks\n* Basic command-line tools\n\nMore specific requirements can be found in the [Infrastructure Deployment](#infrastructure-deployment) section below.\n\n### Development Requirements\n\nThis free, open-source software depends on a good number of other free, open-source software packages that permit this work. To understand the inner workings of this project, you\'ll need familiarity with the following:\n\n* [PyTorch](https://pytorch.org)\n* [Tensorflow](https://www.tensorflow.org)\n* [scikit-image](https://scikit-image.org)\n* [boto3](https://boto3.amazonaws.com/v1/documentation/api/latest/index.html) / [s3fs](https://s3fs.readthedocs.io/en/latest/)\n* [Geopandas](https://github.com/geopandas/geopandas) / [shapely](https://github.com/Toblerity/Shapely)\n* [Rasterio](https://rasterio.readthedocs.io/en/stable/) / [rio-tiler](https://github.com/cogeotiff/rio-tiler)\n* [mercantile](https://github.com/mapbox/mercantile) / [supermercado](https://github.com/mapbox/supermercado)\n* [Amazon Sagemaker](https://docs.aws.amazon.com/sagemaker/latest/dg/whatis.html)\n\n\nTo build and manage our infrastructure, we use [Docker](https://www.docker.com) and [Terraform](https://www.terraform.io).\n\n\n### Accounts and Data\n\n
#### Amazon Web Services
This project relies on cloud infrastructure from Amazon Web Services, which is a cloud services provider run by Amazon. AWS isn\'t the only provider in this space, but is the one we chose due to a combination of funding resources and familiarity. To run these tutorials and perform development tasks with this software, you\'ll need an AWS account. You can get one [here](https://aws.amazon.com/premiumsupport/knowledge-center/create-and-activate-aws-account/).
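Before deploying anything, it can be worth confirming that your AWS credentials are configured. A minimal sketch using boto3 (listed in the development requirements above), assuming credentials have been set up via `aws configure` or environment variables:

```python
import boto3

# Ask AWS who we are; this raises an error if no valid credentials
# are found in the environment or in ~/.aws/credentials.
identity = boto3.client("sts").get_caller_identity()
print(identity["Account"], identity["Arn"])
```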
#### Planet Labs
In order to access the imagery data from Planet Labs used to train our computer vision models and assess their performance, we rely on a relationship with collaborator [Dr. David Shean](https://dshean.github.io) in UW Civil and Environmental Engineering, who has access to Planet Labs data through a NASA Terrestrial Hydrology Program award.\n\nIf you\'re interested in getting access to Planet Labs imagery for research, check out the [Planet Education and Research Program](https://www.planet.com/markets/education-and-research/).
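Once you have an account and an API key, programmatic access can be sanity-checked against the Planet Data API. This is a minimal sketch, not part of the Planet Snowcover pipeline; it assumes your key is exported in the `PL_API_KEY` environment variable:

```python
import os

import requests
from requests.auth import HTTPBasicAuth

# Planet's Data API authenticates with the API key as the username
# and an empty password.
resp = requests.get(
    "https://api.planet.com/data/v1",
    auth=HTTPBasicAuth(os.environ["PL_API_KEY"], ""),
)
resp.raise_for_status()
print(resp.json())
```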
#### NASA Earthdata
\n\nFinally, to gain access to the NASA/JPL Airborne Snow Observatory lidar-derived snow depth information, you need an account with NASA Earthdata. [Sign up here](https://urs.earthdata.nasa.gov/users/new).\n\n## Infrastructure Deployment\n\nTo explore this work and the tutorials herein, you\'ll need to deploy some cloud infrastructure. This project uses [Docker](https://www.docker.com) and [Terraform](https://www.terraform.io) to manage and deploy consistent, reliable cloud infrastructure.\n\nFor detailed instructions on this process, view the [documentation](./deployment/).\n\nTo jump right to the guts of the deployment, here\'s our [Dockerfile](./sagemaker/Dockerfile) and Terraform [Resource Definition](./deployment/resources.tf).\n\n## Tutorials\n\nThrough support from Earth Science Information Partners, we\'re happy to be able to provide thorough interactive tutorials for these tools and methods in the form of Jupyter notebooks. You can see these tutorials in the data pipeline folder [`./pipeline`](pipeline).\n\n## Acknowledgements and Funding Sources\n\nThis work wouldn\'t be possible without the advice and support of Dr. Nicoleta Cristea, Dr. David Shean, Shashank Bhushan, and others.\n\nWe gratefully acknowledge financial support from the [Earth Science Information Partners (ESIP) Lab](https://www.esipfed.org), the [NASA Terrestrial Hydrology Program](https://neptune.gsfc.nasa.gov/index.php?section=19), the Planet Labs [Education and Research](https://www.planet.com/markets/education-and-research/) Program, and the [National Science Foundation](http://nsf.gov/).\n\n## Original Proposal\nTo see the original research proposal for this project, now out of date, view it [here](./_historical/original-proposal.md).\n\n-------------\n

\n\n

\n'",,"2018/01/25, 00:11:48",2100,MIT,0,471,"2021/03/11, 17:32:03",7,14,24,0,958,5,0.0,0.0,,,0,1,false,,false,false,,,,,,,,,,, smrt,Snow Microwave Radiative Transfer model to compute thermal emission and backscatter from snowpack.,smrt-model,https://github.com/smrt-model/smrt.git,github,"modeling,snow,microwave",Snow and Permafrost,"2023/10/25, 20:59:15",41,0,17,true,Python,,smrt-model,"Python,MATLAB",,"b""\nSnow Microwave Radiative Transfer model\n=============================================\n\n[SMRT](https://www.smrt-model.science/) is a radiative transfer model to compute emission and backscatter from snowpack.\n\nGetting started is easy, follow the [instructions](https://www.smrt-model.science/getstarted.html) and explore the other repositories\nwith examples in the ['smrt-model' github organization](https://github.com/smrt-model) or read the detailed ['documentation'](https://smrt.readthedocs.io/en/latest/).\n\nIf you want to try without installing anything on your computer, use free mybinder.org notenooks: [![Binder](https://mybinder.org/badge.svg)](https://mybinder.org/v2/gh/smrt-model/smrt/master?filepath=examples/iba_onelayer.ipynb)\n\n\nLicense information\n--------------------\n\nSee the file ``LICENSE.txt`` for terms & conditions for usage, and a DISCLAIMER OF ALL\nWARRANTIES.\n\nDISCLAIMER: This version of SMRT is under peer review. Please use this software with caution, ask for assistance if needed, and let us know any feedback you may have.\n\nCopyright (c) 2016-2022 Ghislain Picard, Melody Sandells, Henning L\xc3\xb6we.\n\n\nOther contributions\n--------------------\n\n - Nina Maass\n - Ludovic Brucker\n - Marion Leduc-Leballeur\n - Mai Winstrup\n - Carlo Marin\n\n""",,"2016/10/31, 16:42:14",2550,CUSTOM,31,584,"2022/09/09, 11:48:22",5,10,17,1,411,0,0.3,0.15991471215351816,"2022/09/16, 13:42:18",v1.1.0,0,7,false,,false,false,,,https://github.com/smrt-model,,,,,https://avatars.githubusercontent.com/u/23165787?v=4,,, FSM2,"The Flexible Snow Model is a multi-physics energy balance model of snow accumulation and melt, extending the Factorial Snow Model (Essery, 2015) with additional physics, driving and output options.",RichardEssery,https://github.com/RichardEssery/FSM2.git,github,,Snow and Permafrost,"2023/08/21, 20:43:13",22,0,4,true,Fortran,,,"Fortran,Shell,Batchfile,C",,"b'# FSM2 quickstart guide\n\nThe Flexible Snow Model (FSM2) is a multi-physics energy balance model of snow accumulation and melt, extending the Factorial Snow Model [(Essery, 2015)](#Essery2015) with additional physics, driving and output options. FSM2 adds forest canopy model options and the possibility of running simulations for more than one point at the same time. For greater efficiency than FSM, which selects physics options when it is run, FSM2 options are selected when the model is compiled. Otherwise, FSM2 is built and run in the same way as FSM; for details, see the user guide in docs.\n\n## Building the model\n\nFSM2 is coded in Fortran and consists of subroutines and modules contained in the src directory. A linux executable FSM2 is produced by running script compil.sh, which uses the [gfortran](https://gcc.gnu.org/wiki/GFortran) compiler. Physics and driving data configurations are selected in the compilation script by defining options that are copied to a preprocessor file before compilation.\n\n## Running the model\n\nFSM2 requires meteorological driving data and namelists to set options and parameters. 
An example can be run with the command\n\n ./FSM2 < nlst_Sod_1314\n\nwhich runs simulations for the winter of 2013-2014 at Sodankyl\xc3\xa4, Finland [(Essery et al., 2016)](#Essery2016). Two points are simulated: one with forest cover and one without.\n\n## References\n\n Essery (2015). A Factorial Snowpack Model (FSM 1.0). *Geoscientific Model Development*, **8**, 3867-3876, [doi:10.5194/gmd-8-3867-2015](http://www.geosci-model-dev.net/8/3867/2015/)\n\n Essery et al. (2016). A 7-year dataset for driving and evaluating snow models at an Arctic site (Sodankyl\xc3\xa4, Finland). *Geosci. Instrum. Method. Data Syst.*, **5**, 219-227, [doi:10.5194/gi-5-219-2016](https://www.geosci-instrum-method-data-syst.net/5/219/2016/)\n\n\n'",,"2017/08/14, 19:35:47",2263,MIT,2,36,"2020/08/26, 11:16:33",4,3,11,0,1155,0,0.0,0.0,"2019/03/14, 11:31:24",v2.0.1,0,1,false,,false,false,,,,,,,,,,, Teaspoon,"A python library designed to make working with permafrost ground temperature time series data more straightforward, efficient, and reproducible.",permafrostnet,https://gitlab.com/permafrostnet/teaspoon,gitlab,,Snow and Permafrost,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,, BioSNICAR,"Predicts the spectral albedo of snow and glacier ice between 200 nm and 5000 nm given information about the illumination conditions, ice structure and the type and concentration of particulates.",jmcook1186,https://github.com/jmcook1186/biosnicar-py.git,github,,Snow and Permafrost,"2023/07/20, 12:33:44",14,0,9,true,Python,,,"Python,MATLAB,Dockerfile,Shell",,"b'# BioSNICAR\n\n\n\n\n## Introduction\n\nBioSNICAR predicts the spectral albedo of snow and glacier ice between 200 nm and 5000 nm given information about the illumination conditions, ice structure and the type and concentration of light-absorbing particulates (LAPs) externally mixed with the snow/ice. The jumping-off point for this model was legacy FORTRAN and Matlab code from the SNICAR model - Flanner et al. (2007) - which solves the radiative transfer equations after Toon et al. 1989. Two solvers are available in BioSNICAR: the original SNICAR matrix solver typically represents ice and snow with grains (Toon et al. 1989) and the Adding-Doubling (AD) solver represents the ice as a solid medium with air bubbles and allows the incorporation of Fresnel reflecting layers (Briegleb and Light, 2007, Dang et al. 2019, Whicker et al. 2022). BioSNICAR couples SNICAR to a bio-optical model that allows for the calculation of optical properties of snow and glacier algae to load into the model as LAPs (Cook et al. 2017, 2020). This functionality, along with the vectorized AD solver formulation, accessible user interface and applicability to a very wide range of surface conditions are the unique selling points of this implementation. This code is also very actively maintained and we welcome contributions from the community to help make BioSNICAR a useful tool for a diverse range of cryosphere scientists.\n\n## Documentation\n\nDetailed documentation is available at https://biosnicar.vercel.app. This README gives a brief overview of the key information required to run the model.\n\n\n## How to use\n\nThere are two ways to run the BioSNICAR model: 1) use the app; 2) run the code. The app is designed to be extremely user-friendly and requires no coding skills. The app simply runs in the browser and is operated with a simple graphical user interface. You can use the deployed version by visiting\n\n[bit.ly/biosnicar](https://bit.ly/biosnicar)\n\nAlternatively, you can run the app locally. 
Power users will prefer to run the code directly to give access to all of BioSNICAR\'s functions. Both running the code and the app (if running locally) require a Python development environment with specific packages installed. The following section describes how to set up that environment.\n\n### Installing Environment/Dependencies\n\nIf you do not have Python installed, download Python >3.8. It is recommended to use a fresh environment using conda or venv. Once activated, install the project dependencies with:\n\n```\npip install -r requirements.txt\n```\n\nNow install biosnicar:\n\n```\npip install -e .\n```\n\nFinally, if you do not wish to install anything on your computer, but you use VSCode and Docker, then you can use the devcontainer config provided to run this code in a remote container. This requires the ""remote containers"" extension to be added to VSCode. Further instructions are available here: https://code.visualstudio.com/docs/remote/containers\n\n### Using the App\n\nInstructions for using the app are provided below.\n\n#### Run the app\n\nTo run the deployed version of the app simply direct your browser to [bit.ly/biosnicar](https://bit.ly/biosnicar). Alternatively, run the app locally by following these instructions:\n\nThe code for the Streamlit app is in `~/app/streamlit/app.py`.\n\nIn a terminal, navigate to the top-level BioSNICAR directory and run:\n\n`./start_app.sh`\n\nThis starts the Streamlit server running on `http://localhost:8501`.\n\n\n\n\n### Get albedo data\n\nSimply update the values, and the spectral albedo plot and broadband albedo value will update on the screen. You can download this data to a csv file by clicking `download data`.\n\n### Running the code\n\nThe model driver and all the core source code can be found in `/src/biosnicar`. From the top-level directory (`~/BioSNICAR_GO_PY`), run:\n\n`python ./src/biosnicar/main.py`\n\nThis will run the model with all the default settings. The user will see a list of output values printed to the console and a spectral albedo plot appear in a separate window. The code can also be run in an interactive session (Jupyter/iPython) in which case the relevant data and figure will appear in the interactive console.\n\nMost users will want to experiment with changing input parameters. This is achieved by adjusting the values in the config file `inputs.yaml`. The nature of each parameter is described in in-line annotations to guide the user. Invalid combinations of values will be rejected by our error-checking code. Most users should have no reason to modify any file in this repository other than `inputs.yaml`.\n\nMore complex applications of the model code, for example, model inversions, field/model comparisons, etc., are included under `/experiments`, with details provided in that module\'s own README.\n\nWe have also maintained a separate version of the BioSNICAR codebase that uses a ""functional"" programming style rather than the object-oriented approach taken here. We refer to this as BioSNICAR Classic and it is available in the `classic` branch of this repository. It might be useful for people already familiar with FORTRAN or Matlab implementations from previous literature. The two branches are entirely equivalent in their simulations but very different in their programming style. The object-oriented approach is preferred because it is more Pythonic, more flexible and easier to debug.\n\n
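If you want to script parameter sweeps rather than edit the config by hand, something like the sketch below works. This is only an illustration: PyYAML is an assumed dependency here, and the key name `some_parameter` is invented, so check the annotated `inputs.yaml` for the real parameter names.\n\n```\n# Hypothetical sketch: programmatically edit inputs.yaml, then run the driver.\nimport subprocess\nimport yaml  # PyYAML, assumed available\n\nwith open(\'inputs.yaml\') as f:\n    cfg = yaml.safe_load(f)\n\ncfg[\'some_parameter\'] = 0.5  # placeholder key; use a real one from the file\n\nwith open(\'inputs.yaml\', \'w\') as f:\n    yaml.safe_dump(cfg, f)\n\nsubprocess.run([\'python\', \'./src/biosnicar/main.py\'], check=True)\n```\n\n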
#### Choosing Inputs\n\nIt is straightforward to adjust the model configuration by updating the values in `inputs.yaml`. However, there is a lot of nuance to setting up the model to provide realistic simulations, and the meaning of the various parameters is not always obvious. For this reason, we have put together a guide. Please refer to the documentation at [biosnicar.vercel.app](https://biosnicar.vercel.app).\n\n# Contributions\n\nNew issues and pull requests are welcomed. Pull requests trigger our Github Actions workflow to test for any breaking changes. PRs that pass these automated tests will be reviewed.\n\n# Permissions\n\nThis code is provided under an MIT license with the caveat that it is in active development. Collaboration ideas and pull requests are generally welcomed. Please use the citations below to credit the builders of this repository and its predecessors.\n\n# Citation\n\nIf you use this code in a publication, please cite:\n\nCook, J. et al. (2020): Glacier algae accelerate melt rates on the western Greenland Ice Sheet, The Cryosphere, doi:10.5194/tc-14-309-2020\n\nFlanner, M. et al. (2007): Present-day climate forcing and response from black carbon in snow, J. Geophys. Res., 112, D11202, https://doi.org/10.1029/2006JD008003\n\nAnd if using the adding-doubling method please also cite Dang et al (2019) and Whicker et al (2022) as their code was translated to form the adding_doubling_solver.py script here. The aspherical grain correction equations come from He et al. (2016).\n\n\n# References\n\nBalkanski, Y., Schulz, M., Claquin, T., & Guibert, S. (2007). Reevaluation of Mineral aerosol radiative forcings suggests a better agreement with satellite and AERONET data. Atmospheric Chemistry and Physics, 7(1), 81-95.\n\nBidigare, R. R., Ondrusek, M. E., Morrow, J. H., & Kiefer, D. A. (1990, September). In-vivo absorption properties of algal pigments. In Ocean Optics X (Vol. 1302, pp. 290-302). International Society for Optics and Photonics.\n\nBohren, C. F., & Huffman, D. R. (1983). Absorption and scattering of light by small particles. John Wiley & Sons.\n\nBriegleb, B. P., and B. Light. ""A Delta-Eddington multiple scattering parameterization for solar radiation in the sea ice component of the Community Climate System Model."" NCAR technical note (2007).\n\nClementson, L. A., & Wojtasiewicz, B. (2019). Dataset on the absorption characteristics of extracted phytoplankton pigments. Data in brief, 24, 103875.\n\nCook JM, et al (2017) Quantifying bioalbedo: A new physically-based model and critique of empirical methods for characterizing biological influence on ice and snow albedo. The Cryosphere: 1\xe2\x80\x9329. DOI: 10.5194/tc-2017-73, 2017b\n\nCook, J. M. et al. (2020): Glacier algae accelerate melt rates on the western Greenland Ice Sheet, The Cryosphere Discuss., https://doi.org/10.5194/tc-2019-58, in review, 2019.\n\nDang, C., Zender, C., Flanner M. 2019. Intercomparison and improvement of two-stream shortwave radiative transfer schemes in Earth system models for a unified treatment of cryospheric surfaces. The Cryosphere, 13, 2325\xe2\x80\x932343, https://doi.org/10.5194/tc-13-2325-2019\n\nFlanner, M. et al. (2007): Present-day climate forcing and response from black carbon in snow, J. Geophys. Res., 112, D11202, https://doi.org/10.1029/2006JD008003\n\nFlanner, M et al. (2009) Springtime warming and reduced snow cover from\ncarbonaceous particles. 
Atmospheric Chemistry and Physics, 9: 2481-2497, 2009.\n\nFlanner, M. G., Gardner, A. S., Eckhardt, S., Stohl, A., & Perket, J. (2014). Aerosol radiative forcing from the 2010 Eyjafjallaj\xc3\xb6kull volcanic eruptions. Journal of Geophysical Research: Atmospheres, 119(15), 9481-9491.\n\nFlanner, M. G. et al., SNICAR-ADv3: a community tool for modeling spectral snow albedo, Geosci. Model Dev., 14, 7673\xe2\x80\x937704, https://doi.org/10.5194/gmd-14-7673-2021, 2021.\n\nHe, C., Takano, Y., Liou, K. N., Yang, P., Li, Q., & Chen, F. (2017). Impact of snow grain shape and black carbon\xe2\x80\x93snow internal mixing on snow optical properties: Parameterizations for climate models. Journal of Climate, 30(24), 10019-10036.\n\nHe, C., Liou, K. N., Takano, Y., Yang, P., Qi, L., & Chen, F. (2018). Impact of grain shape and multiple black carbon internal mixing on snow albedo: Parameterization and radiative effect analysis. Journal of Geophysical Research: Atmospheres, 123(2), 1253-1268.\n\nKirchstetter, T. W., Novakov, T., & Hobbs, P. V. (2004). Evidence that the spectral dependence of light absorption by aerosols is affected by organic carbon. Journal of Geophysical Research: Atmospheres, 109(D21).\n\nLee, E., & Pilon, L. (2013). Absorption and scattering by long and randomly oriented linear chains of spheres. JOSA A, 30(9), 1892-1900.\n\nPicard, G., Libois, Q., & Arnaud, L. (2016). Refinement of the ice absorption spectrum in the visible using radiance profile measurements in Antarctic snow. The Cryosphere, 10(6), 2655-2672.\n\nPolashenski et al. (2015): Neither dust nor black carbon causing apparent albedo decline in Greenland\'s dry snow zone: Implications for MODIS C5 surface reflectance, Geophys. Res. Lett., 42, 9319\xe2\x80\x93 9327, doi:10.1002/2015GL065912, 2015.\n\nSkiles, S. M., Painter, T., & Okin, G. S. (2017). A method to retrieve the spectral complex refractive index and single scattering optical properties of dust deposited in mountain snow. Journal of Glaciology, 63(237), 133-147.\n\nToon, O. B., McKay, C. P., Ackerman, T. P., and Santhanam, K. (1989), Rapid calculation of radiative heating rates and photodissociation rates in inhomogeneous multiple scattering atmospheres, J. Geophys. Res., 94( D13), 16287\xe2\x80\x93 16301, doi:10.1029/JD094iD13p16287.\n\nvan Diedenhoven et al. (2014): A flexible parameterization for shortwave optical properties of ice crystals. Journal of the Atmospheric Sciences, 71: 1763 \xe2\x80\x93 1782, doi:10.1175/JAS-D-13-0205.1\n\nWarren, S. G. (1984). Optical constants of ice from the ultraviolet to the microwave. Applied optics, 23(8), 1206-1225.\n\nWarren, S. G., & Brandt, R. E. (2008). Optical constants of ice from the ultraviolet to the microwave: A revised compilation. Journal of Geophysical Research: Atmospheres, 113(D14).\n\nWhicker et al., Halbach et al., Chevrollier et al. 
coming soon!\n'",",https://doi.org/10.1029/2006JD008003\n\nAnd,https://doi.org/10.5194/tc-2019-58,https://doi.org/10.5194/tc-13-2325-2019\n\nFlanner,https://doi.org/10.1029/2006JD008003\n\nFlanner,https://doi.org/10.5194/gmd-14-7673-2021","2019/03/15, 16:25:46",1685,MIT,23,583,"2023/07/20, 09:25:49",4,61,81,4,97,1,0.3,0.26175869120654394,"2022/03/28, 12:33:21",2.0,0,5,false,,false,false,,,,,,,,,,, Permamodel,A collection of numerical permafrost models with a range of capability and complexity.,permamodel,https://github.com/permamodel/permamodel.git,github,"permafrost,python,bmi,csdms,modeling",Snow and Permafrost,"2023/05/12, 20:59:38",14,4,6,true,Python,Permamodel,permamodel,"Python,Jupyter Notebook,Shell",https://permamodel.github.io,"b'[![PyPI](https://img.shields.io/pypi/v/permamodel)](https://pypi.org/project/permamodel)\n[![Conda Version](https://img.shields.io/conda/vn/conda-forge/permamodel.svg)](https://anaconda.org/conda-forge/permamodel)\n[![Basic Model Interface](https://img.shields.io/badge/CSDMS-Basic%20Model%20Interface-green.svg)](https://bmi.readthedocs.io/)\n[![Test](https://github.com/permamodel/permamodel/actions/workflows/test.yml/badge.svg)](https://github.com/permamodel/permamodel/actions/workflows/test.yml)\n[![Coverage Status](https://coveralls.io/repos/github/permamodel/permamodel/badge.svg?branch=main)](https://coveralls.io/github/permamodel/permamodel?branch=main)\n\npermamodel\n==========\n\nPermamodel is a collection of numerical permafrost models.\nThis repository contains source code and examples for two models:\n\n* Frost number\n* Ku\n\nFor more information,\nsee the project home page: https://permamodel.github.io.\n\n\nInstallation\n------------\n\nPermamodel can be installed with `pip`:\n```\n$ pip install permamodel\n```\nor with `conda`:\n```\n$ conda install -c conda-forge permamodel\n```\nWe recommend installing permamodel into a Python virtual environment.\n
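\nAs a minimal usage sketch, the components follow the CSDMS Basic Model Interface (BMI) calling pattern advertised by the badge above. Note that the import path, config-file name, and variable name below are placeholders, not verified against the package; check the project documentation for the real ones.\n```\n# Generic BMI calling pattern (https://bmi.readthedocs.io/); names are placeholders.\nfrom permamodel.components.bmi_frost_number import BmiFrostnumberMethod  # hypothetical path\n\nmodel = BmiFrostnumberMethod()\nmodel.initialize(\'frost_number.cfg\')  # BMI: read config, set initial state\nwhile model.get_current_time() < model.get_end_time():\n    model.update()  # BMI: advance one time step\nprint(model.get_value(\'frostnumber__air\'))  # hypothetical variable name\nmodel.finalize()\n```\n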
'",,"2016/04/27, 21:05:18",2737,MIT,62,564,"2023/05/12, 15:00:50",7,61,79,15,166,5,0.2,0.47075208913649025,"2017/06/02, 22:04:41",v0.1,0,7,false,,false,false,"ethan-pierce/intro-to-landlab-2023,pymt-lab/pymt_permamodel,mcflugen/pymt_frost_number,mcflugen/pymt_ku",,https://github.com/permamodel,,University of Colorado Boulder,,,https://avatars.githubusercontent.com/u/16598343?v=4,,, SNOWPACK,"A multi-purpose snow and land-surface model, which focuses on a detailed description of the mass and energy exchange between the snow, the atmosphere and optionally with the vegetation cover and the soil.",snow-models,,custom,,Snow and Permafrost,,,,,,,,,,https://code.wsl.ch/snow-models/snowpack,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,, MuSA,"A flexible data assimilation toolbox for experimental and operational snowpack reanalysis development.",ealonsogzl,https://github.com/ealonsogzl/MuSA.git,github,,Snow and Permafrost,"2023/10/10, 11:16:29",12,0,7,true,Python,,,"Python,Fortran,Shell,C",,"b'# MuSA: The Multiple Snow data Assimilation System \n \nThe Multiple Snow data Assimilation System (MuSA) is a flexible data assimilation toolbox for experimental and operational snowpack reanalysis development. MuSA was designed to fuse gridded observations with an ensemble of simulations generated by the Flexible Snow Model [(FSM2)](https://github.com/RichardEssery/FSM2) by using different Bayesian-based data assimilation algorithms. \n\nIn its current version, it also offers support for the Snow17 model and a simple temperature index model. Potentially other numerical models could be implemented (not necessarily limited to snow models).\n\n![alt text](https://github.com/ealonsogzl/MuSA/blob/master/img/PBS_animation.gif)\n Figure 1: Comparison between open loop and updated simulation after assimilating UAV snow depth retrievals at 5m resolution \n### Inputs \n \nThe inputs of MuSA are composed of meteorological forcing and observations to be assimilated. Both the forcing and observations must share the **same geometry**, with the same resolution and number of cells in the latitudinal and longitudinal axes, and should be provided in the [netCDF](https://www.unidata.ucar.edu/software/netcdf/) format. In this version, the meteorological forcing must be provided at an hourly timestep. Optionally, it is possible to provide a mask with the same geometry as the mandatory input files to avoid running MuSA over certain cells of your domain. The meteorological forcing needed for running MuSA is composed of: \n- Incident shortwave radiation (W m-2)\n- Incident longwave radiation (W m-2)\n- Precipitation (kg m-2 s-1) \n- Temperature (K) \n- Relative Humidity (%) \n- Wind speed (m s-1) \n- Atmospheric pressure (Pa) \n \nIn its current version, MuSA provides support for assimilating different variables. Note that it is possible to provide more than one of the following variables at the same time, i.e. MuSA has support for joint assimilation experiments. In its current version, MuSA is able to assimilate: \n- SWE (mm) \n- Snow depth (m) \n- Land/Snow surface temperature (K) \n- Fractional snow cover area (-) \n- Albedo (-)\n- Sensible heat flux to the atmosphere (W m-2)\n- Latent heat flux to the atmosphere (W m-2)\n\nThe support of other variables like liquid water content, density, ice content etc. could be relatively easily implemented on demand. \n \n### Data assimilation algorithms\nThere are different data assimilation and resampling algorithms implemented in MuSA. Some testing should be done when developing data assimilation experiments, as the performance may be different depending on the problem to solve, and the literature does not point to a clear winner (a schematic of the particle-weighting step shared by several of these methods is sketched after the lists below). Also, the computational cost will differ, and may be a strong constraint in some situations.\n\nFilters:\n- Particle Filter (PF)\n- Ensemble Kalman filter (EnKF)\n- Iterative ensemble Kalman filter (IEnKF)\n\nSmoothers:\n- Particle batch smoother (PBS)\n- Ensemble smoother (ES)\n- Iterative ensemble smoother (IES)\n- Particle-adjusted iterative ensemble smoother (PIES)\n- Robust Adaptive Metropolis initialised by IES (IES-MCMC)\n \nResampling (for particle filters only):\n- Bootstrapping\n- Residual resampling\n- Stratified resampling\n- Systematic resampling\n- Redraw from a normal approximation of the posterior\n\n
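For orientation, the weighting step shared by the particle filter and the particle batch smoother looks schematically like this (a rough sketch with a diagonal Gaussian likelihood, not MuSA\'s actual code):\n\n```\n# Schematic particle-weight update (diagonal Gaussian likelihood); not MuSA code.\nimport numpy as np\n\ndef particle_weights(obs, pred, r_var):\n    # obs: (n_obs,) observations; pred: (n_particles, n_obs) predicted observations;\n    # r_var: (n_obs,) observation-error variances (diagonal R)\n    resid = obs[None, :] - pred\n    log_w = -0.5 * np.sum(resid**2 / r_var[None, :], axis=1)\n    log_w -= log_w.max()  # stabilize before exponentiating\n    w = np.exp(log_w)\n    return w / w.sum()  # normalized weights, e.g. for weighted ensemble means\n\n# toy example: 100 particles, 5 observations\nrng = np.random.default_rng(0)\nw = particle_weights(rng.normal(size=5), rng.normal(size=(100, 5)), np.full(5, 0.5))\n```\n\n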
### Outputs\nThe outputs of MuSA are composed of simple .csv files containing the following information:\n- **OL_lonid_latid**: This file contains the open loop simulation (snow simulation without any assimilation).\n- **updated_lonid_latid**: This file contains the updated simulation after the assimilation, i.e. the weighted mean of the ensemble of simulations.\n- **sd_lonid_latid**: This file contains the weighted standard deviation of the ensemble after the assimilation.\n- **DA_lonid_latid**: This file contains information about the observed variables and posterior parameters (in the normal space).\n\n**lonid** and **latid** are the longitude and latitude ids of each cell of the simulation.\n\nAdditionally, it is possible to store the ensembles generated for each cell. This is an optional feature as it may be somewhat memory-intensive. However, it may be useful in some circumstances, especially for advanced users. The ensembles will be stored as pickle objects, and will be composed of python lists containing [Ensemble objects](https://github.com/ealonsogzl/MuSA/blob/master/modules/internal_class.py).\n\n### Usage\n\nMuSA works on GNU/Linux (and therefore Mac) based platforms. MuSA has also been tested on Windows using the Windows Subsystem for Linux (WSL). MuSA relies on python3 with the usual scientific libraries (numpy, pandas, scipy...) and netCDF4 installed. You will also need to have gfortran in the path. The easiest way to do this is to generate a dedicated conda environment. You can use the [MuSAenv.yml](https://github.com/ealonsogzl/MuSA/blob/master/MuSAenv.yml) file of the repository to create the conda environment:\n\n```\nconda env create --name MuSAenv --file=MuSAenv.yml\n```\n\n\nThen, to run MuSA, simply:\n\n```\nconda activate MuSAenv\npython main.py\n```\n\nThis command should run the reproducible example included in the repository. This example contains all the information needed by MuSA. It is composed of a few cells containing meteorological forcing and drone SfM-derived snow depth information. To change the configuration of MuSA, you should modify the [config.py](https://github.com/ealonsogzl/MuSA/blob/master/config.py) file. It is also possible to modify the way MuSA generates the ensemble by modifying the [constants.py](https://github.com/ealonsogzl/MuSA/blob/master/constants.py) file.\nAn [example script](https://github.com/ealonsogzl/MuSA/blob/master/run_PBS.sh) is also provided to run MuSA in distributed supercomputing facilities using PBS (Portable Batch System, not Particle Batch Smoother :wink:) or [Slurm](https://github.com/ealonsogzl/MuSA/blob/master/run_slurm.sh) arrays.\n\n
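After a run, the per-cell .csv outputs can be explored with ordinary pandas calls; the file names below follow the naming pattern described in the Outputs section but are illustrative only, so check your output directory for the exact names:\n\n```\n# Illustrative post-processing: load one cell\'s open-loop and updated series.\nimport pandas as pd\n\nol = pd.read_csv(\'OL_0_0.csv\')  # open-loop simulation for cell lonid=0, latid=0\nup = pd.read_csv(\'updated_0_0.csv\')  # weighted ensemble mean after assimilation\nprint(ol.head())\nprint(up.head())\n```\n\n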
### How to cite\n#### MuSA\n- Alonso-Gonz\xc3\xa1lez, E., Aalstad, K., Baba, M. W., Revuelto, J., L\xc3\xb3pez-Moreno, J. I., Fiddes, J., Essery, R., and Gascoin, S.: The Multiple Snow Data Assimilation System (MuSA v1.0), Geosci. Model Dev., 15, 9127\xe2\x80\x939155, https://doi.org/10.5194/gmd-15-9127-2022, 2022. \n#### FSM2\n- Mazzotti, G., Essery, R., Moeser, C. D., and Jonas, T.: Resolving small-scale forest snow patterns using an energy balance snow model with a one-layer canopy. Water Resour. Res., 56, https://doi.org/10.1029/2019WR026129, 2020.\n- Essery, R.: A factorial snowpack model (FSM 1.0), Geosci. Model Dev., 8, 3867\xe2\x80\x933876, https://doi.org/10.5194/gmd-8-3867-2015, 2015. \n#### Related references\n- Alonso-Gonz\xc3\xa1lez, E., Aalstad, K., Pirk, N., Mazzolini, M., Treichler, D., Leclercq, P., Westermann, S., L\xc3\xb3pez-Moreno, J. I., and Gascoin, S.: Spatio-temporal information propagation using sparse observations in hyper-resolution ensemble-based snow data assimilation, EGUsphere (preprint), https://doi.org/10.5194/egusphere-2023-954, 2023. \n- Alonso-Gonz\xc3\xa1lez, E., Gascoin, S., Arioli, S., and Picard, G.: Exploring the potential of thermal infrared remote sensing to improve a snowpack model through an observing system simulation experiment, The Cryosphere, 17, 3329\xe2\x80\x933342, https://doi.org/10.5194/tc-17-3329-2023, 2023.\n- Alonso-Gonz\xc3\xa1lez, E., Gutmann, E., Aalstad, K., Fayad, A., Bouchet, M., and Gascoin, S.: Snowpack dynamics in the Lebanese mountains from quasi-dynamically downscaled ERA5 reanalysis updated by assimilating remotely sensed fractional snow-covered area, Hydrol. Earth Syst. Sci., 25, 4455\xe2\x80\x934471, https://doi.org/10.5194/hess-25-4455-2021, 2021.\n- Fiddes, J., Aalstad, K., and Westermann, S.: Hyper-resolution ensemble-based snow reanalysis in mountain regions using clustering, Hydrol. Earth Syst. Sci., 23, 4717\xe2\x80\x934736, https://doi.org/10.5194/hess-23-4717-2019, 2019. \n- Aalstad, K., Westermann, S., Schuler, T. V., Boike, J., and Bertino, L.: Ensemble-based assimilation of fractional snow-covered area satellite retrievals to estimate the snow distribution at Arctic sites, The Cryosphere, 12, 247\xe2\x80\x93270, https://doi.org/10.5194/tc-12-247-2018, 2018.\n\n'",",https://doi.org/10.5194/gmd-15-9127-2022,https://doi.org/10.1029/2019WR026129,https://doi.org/10.5194/gmd-8-3867-2015,https://doi.org/10.5194/egusphere-2023-954,https://doi.org/10.5194/tc-17-3329-2023,https://doi.org/10.5194/hess-25-4455-2021,https://doi.org/10.5194/hess-23-4717-2019,https://doi.org/10.5194/tc-12-247-2018","2021/12/14, 09:57:15",680,GPL-3.0,113,171,"2023/08/27, 09:23:25",0,17,22,17,59,0,0.1,0.14583333333333337,"2023/05/08, 08:54:19",v2.0,0,3,false,,false,false,,,,,,,,,,, snotelr, R toolbox to facilitate easy SNOTEL data exploration and downloads through a convenient shiny based GUI.,bluegreen-labs,https://github.com/bluegreen-labs/snotelr.git,github,"snotel,climate-data,data-retrieval",Snow and Permafrost,"2023/09/16, 09:38:37",12,0,4,true,R,BlueGreen Labs,bluegreen-labs,"R,CSS",https://bluegreen-labs.github.io/snotelr/,"b'# snotelr \n\n[![R-CMD-check](https://github.com/bluegreen-labs/snotelr/actions/workflows/R-CMD-check.yaml/badge.svg)](https://github.com/bluegreen-labs/snotelr/actions/workflows/R-CMD-check.yaml)\n[![codecov](https://codecov.io/gh/bluegreen-labs/snotelr/branch/master/graph/badge.svg)](https://app.codecov.io/gh/bluegreen-labs/snotelr)\n[![CRAN\\_Status\\_Badge](https://www.r-pkg.org/badges/version/snotelr)](https://cran.r-project.org/package=snotelr)\n[![](https://cranlogs.r-pkg.org/badges/grand-total/snotelr)](https://cran.r-project.org/package=snotelr)\n[![DOI](https://zenodo.org/badge/DOI/10.5281/zenodo.7012728.svg)](https://doi.org/10.5281/zenodo.7012728)\n\n`snotelr` is an R toolbox to facilitate easy SNOTEL data exploration and downloads through a convenient R [shiny](https://shiny.posit.co/) based GUI. In addition it provides a routine to extract basic snow phenology metrics.\n\n## How to cite this package in your article\n\nYou can cite this package like this ""we obtained data from SNOTEL using the `snotelr` R package (Hufkens 2022)"". Here is the full bibliographic reference to include in your reference list:\n\n> Hufkens, K. (2022). snotelr: a toolbox to facilitate easy SNOTEL data exploration and downloads in R. Zenodo. 
https://doi.org/10.5281/zenodo.7012728.\n\n## Installation\n\n### stable release\n\nTo install the current stable release, use a CRAN repository:\n\n```r\ninstall.packages(""snotelr"")\nlibrary(""snotelr"")\n```\n\nThe use of the GUI requires the installation of additional packages, which are side loaded.\n\n```r\ninstall.packages(c(""DT"",""shinydashboard"", ""plotly"", ""leaflet""))\n```\n\n### development release\n\nTo install the development releases of the package run the following\ncommands:\n\n```r\nif(!require(remotes)){install.packages(""remotes"")}\nremotes::install_github(""bluegreen-labs/snotelr"")\nlibrary(""snotelr"")\n```\n\nVignettes are not rendered by default; if you want to include additional\ndocumentation please use:\n\n```r\nif(!require(remotes)){install.packages(""remotes"")}\nremotes::install_github(""bluegreen-labs/snotelr"", build_vignettes = TRUE)\nlibrary(""snotelr"")\n```\n\n## Use\n\nMost people will prefer the GUI to explore data on the fly. To invoke the GUI, use the following command:\n\n```r\nlibrary(snotelr)\nsnotel_explorer()\n```\n\nThis will start a shiny application with an R backend in your default browser. The first window will display all site locations, and allows for subsetting of the data based upon state or a bounding box. The bounding box can be selected by clicking top-left and bottom-right.\n\n![map](https://github.com/bluegreen-labs/snotelr/assets/1354258/f191081c-d5e9-4827-9cee-3e25376fc97c)\n\nThe *plot data* tab allows for interactive viewing of the snow water equivalent (SWE) data together with a covariate (temperature, precipitation). The SWE time series will also mark snow phenology statistics, mainly the day of:\n\n- first snow melt\n- a continuous snow free season (last snow melt)\n- first snow accumulation (first snow deposited)\n- continuous snow accumulation (permanent snow cover)\n- seasonal maximum SWE (and its amount)\n\nAll values are provided both relative to January first of the year mentioned (spring) and as absolute dates.\n\n![time_series](https://github.com/bluegreen-labs/snotelr/assets/1354258/c430abbc-b714-45e1-8e31-0fdecb7d3796)\n\nTo access the full list of SNOTEL sites and associated meta-data use the **snotel_info()** function.\n\n```r\n# returns the site info as snotel_metadata.txt in the current working directory\nsnotel_info(path = ""."") \n\n# export to data frame\nmeta_data <- snotel_info(path = NULL) \n\n# show some lines of the data frame\nhead(meta_data)\n```\n\nTo query data for, e.g., site 924 as shown in the image above, use:\n\n```r\nsnotel_download(site_id = 924)\n```\n\nFor in-depth analysis, the statistics in the GUI can be retrieved using the **snotel_phenology()** function:\n\n```r\n# with df a SNOTEL file or data frame in your R workspace\nsnotel_phenology(df)\n```\n\n# References\n\nHufkens, K. (2022). snotelr: a toolbox to facilitate easy SNOTEL data exploration and downloads in R. Zenodo. https://doi.org/10.5281/zenodo.7012728.\n\n# Acknowledgements\n\nThis project was in part supported by the National Science Foundation\xe2\x80\x99s Macro-system Biology Program (award EF-1065029) and the Marie Sk\xc5\x82odowska-Curie Action (H2020 grant 797668). 
Logo design elements are taken from the FontAwesome library according to [these terms](https://fontawesome.com/license), where the globe element was inverted and intersected.\n\n'",",https://doi.org/10.5281/zenodo.7012728,https://doi.org/10.5281/zenodo.7012728.\n\n##,https://doi.org/10.5281/zenodo.7012728.\n\n#","2016/12/20, 16:41:28",2500,AGPL-3.0,33,188,"2023/10/15, 09:35:14",1,5,22,5,10,0,0.0,0.012121212121212088,"2022/08/20, 13:47:41",v1.1,0,3,false,,false,false,,,https://github.com/bluegreen-labs,http://bluegreenlabs.org,"Melsele, Belgium",,,https://avatars.githubusercontent.com/u/65854203?v=4,,, Raven,Made to help scientists run hydrological modeling experiments with climate change projections.,Ouranosinc,https://github.com/Ouranosinc/raven.git,github,"hydrology,wps,pavics,gis,birdhouse",Freshwater and Hydrology,"2023/10/03, 14:56:17",36,0,8,true,Python,Ouranos inc.,Ouranosinc,"Python,Makefile,Dockerfile",https://pavics-raven.readthedocs.io,"b""Raven : Hydrological modeling and analytics\n===========================================\n\n.. image:: https://readthedocs.org/projects/pavics-raven/badge/?version=latest\n :target: https://pavics-raven.readthedocs.io/en/latest/?badge=latest\n :alt: Documentation Status\n\n.. image:: https://github.com/Ouranosinc/raven/actions/workflows/main.yml/badge.svg\n :target: https://github.com/Ouranosinc/raven/actions/workflows/main.yml\n :alt: Build status\n\n.. image:: https://img.shields.io/github/license/Ouranosinc/raven.svg\n :target: https://github.com/Ouranosinc/raven/blob/master/LICENSE.txt\n :alt: GitHub license\n\n.. image:: https://badges.gitter.im/bird-house/birdhouse.svg\n :target: https://gitter.im/bird-house/birdhouse?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge&utm_content=badge\n :alt: Join the chat at https://gitter.im/bird-house/birdhouse\n\n.. image:: https://app.fossa.com/api/projects/git%2Bgithub.com%2FOuranosinc%2Fraven.svg?type=shield\n :target: https://app.fossa.com/projects/git%2Bgithub.com%2FOuranosinc%2Fraven?ref=badge_shield\n :alt: FOSSA\n\n.. image:: https://zenodo.org/badge/135511617.svg\n :target: https://zenodo.org/badge/latestdoi/135511617\n :alt: DOI\n\nRaven (the bird)\n *Raven offers processes related to hydrological modeling, and in particular, the Raven hydrological modelling framework.*\n\nRaven is an open source server project offering data collection and preparation, as well as geoprocessing and catchment delineation through the Web Processing Service (WPS) standard. Raven processes can be embedded in a graphical user interface or accessed directly from a programming environment. From Python, birdy_ WPSClient provides a user-friendly python interface to Raven's WPS processes for geospatial processing.\n\nThe properties of custom watersheds can be extracted from a Digital Elevation Model and a land-use database.\n\nRaven can be compiled and installed, or simply deployed using Docker. A hosted version is available at https://pavics.ouranos.ca/twitcher/ows/proxy/raven.\n\n
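For instance, a minimal birdy session against the hosted instance could look like the sketch below; the exact ``/wps`` endpoint path is an assumption here, so check the service documentation.\n\n.. code-block:: python\n\n    # Connect to the hosted Raven WPS endpoint; each WPS process is then\n    # exposed as a method on the client object.\n    from birdy import WPSClient\n\n    wps = WPSClient('https://pavics.ouranos.ca/twitcher/ows/proxy/raven/wps')\n    help(wps)  # lists the available processes and their arguments\n\n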
Documentation\n-------------\n\nLearn more about Raven in its official documentation at\nhttps://pavics-raven.readthedocs.io.\n\nSubmit bug reports, questions and feature requests at\nhttps://github.com/Ouranosinc/raven/issues\n\nContributing\n------------\n\nYou can find information about contributing in our `Developer Guide`_.\n\nPlease use bumpversion_ to release a new version.\n\nLicense\n-------\n\nFree software: MIT license\n\nCredits\n-------\n\nThis project was funded by the CANARIE_ research software program.\n\nHydrological models are based on the `Raven`_ modeling framework.\n\nThis package was created with Cookiecutter_ and the `bird-house/cookiecutter-birdhouse`_ project template.\n\n.. _`birdy`: https://birdy.readthedocs.io\n.. _`xarray`: http://xarray.pydata.org\n.. _`xclim`: https://xclim.readthedocs.io\n.. _`Raven`: http://raven.uwaterloo.ca\n.. _`CANARIE`: https://www.canarie.ca\n.. _Cookiecutter: https://github.com/audreyr/cookiecutter\n.. _`bird-house/cookiecutter-birdhouse`: https://github.com/bird-house/cookiecutter-birdhouse\n.. _`Developer Guide`: https://pavics-raven.readthedocs.io/en/latest/dev_guide.html\n.. _bumpversion: https://pavics-raven.readthedocs.io/en/latest/dev_guide.html#bump-a-new-version\n""",",https://zenodo.org/badge/latestdoi/135511617\n","2018/05/31, 00:37:14",1974,MIT,240,1827,"2023/10/03, 14:56:19",18,241,469,53,22,2,1.3,0.65625,"2023/07/06, 17:28:49",v0.18.2,0,10,false,,false,true,,,https://github.com/Ouranosinc,www.ouranos.ca,Canada,,,https://avatars.githubusercontent.com/u/1696763?v=4,,, hydroscoper,An R interface to the Greek National Data Bank for Hydrometeorological Information.,ropensci,https://github.com/ropensci/hydroscoper.git,github,"climate,hydroscope,hydrometeorology,hydrology,tidy-data,time-series,greece,water-resources,meteorological-stations,meteorological-data,r,rstats,r-package,peer-reviewed",Freshwater and Hydrology,"2022/09/29, 14:07:01",12,0,1,false,R,rOpenSci,ropensci,"R,TeX",https://docs.ropensci.org/hydroscoper,"b'hydroscoper\n================\n\n\n\n[![Travis-CI Build\nStatus](https://travis-ci.org/ropensci/hydroscoper.svg?branch=master)](https://travis-ci.org/ropensci/hydroscoper)\n[![AppVeyor Build\nStatus](https://ci.appveyor.com/api/projects/status/github/ropensci/hydroscoper?branch=master&svg=true)](https://ci.appveyor.com/project/ropensci/hydroscoper)\n[![codecov](https://codecov.io/github/ropensci/hydroscoper/branch/master/graphs/badge.svg)](https://codecov.io/gh/ropensci/hydroscoper)\n[![minimal R\nversion](https://img.shields.io/badge/R%3E%3D-3.4-6666ff.svg)](https://cran.r-project.org/)\n[![CRAN_Status_Badge](http://www.r-pkg.org/badges/version/hydroscoper)](https://cran.r-project.org/package=hydroscoper)\n[![packageversion](https://img.shields.io/badge/Package%20version-1.5.0-orange.svg?style=flat-square)](https://github.com/ropensci/hydroscoper)\n[![](https://cranlogs.r-pkg.org/badges/grand-total/hydroscoper)](https://cran.r-project.org/package=hydroscoper)\n[![ropensci](https://badges.ropensci.org/185_status.svg)](https://github.com/ropensci/software-review/issues/185)\n[![DOI](https://zenodo.org/badge/DOI/10.5281/zenodo.1196540.svg)](https://doi.org/10.5281/zenodo.1196540)\n[![DOI](http://joss.theoj.org/papers/10.21105/joss.00625/status.svg)](https://doi.org/10.21105/joss.00625)\n\n\n\n`hydroscoper` is an R interface to the Greek National Data Bank for\nHydrological and Meteorological Information, *Hydroscope*. 
For more\ndetails check out the package\xe2\x80\x99s\n[website](https://docs.ropensci.org/hydroscoper/) and the vignettes:\n\n- [An introduction to\n `hydroscoper`](https://docs.ropensci.org/hydroscoper/articles/intro_hydroscoper.html)\n with details about the Hydroscope project and the package.\n- [Using `hydroscoper`\xe2\x80\x99s data\n sets](https://docs.ropensci.org/hydroscoper/articles/stations_with_data.html)\n with a simple example of how to use the package\xe2\x80\x99s internal data\n sets.\n\n## Installation\n\nInstall the stable release from CRAN with:\n\n``` r\ninstall.packages(""hydroscoper"")\n```\n\nYou can install the development version from GitHub with:\n\n``` r\n# install.packages(""devtools"")\ndevtools::install_github(""ropensci/hydroscoper"")\n```\n\n## Using hydroscoper\n\nThe functions that are provided by `hydroscoper` are:\n\n- `get_stations, get_timeseries, ..., etc.` family functions, to\n retrieve tibbles with Hydroscope\xe2\x80\x99s data for a given data source.\n- `get_data`, to retrieve a tibble with time series\xe2\x80\x99 values. \n- `hydro_coords`, to convert Hydroscope\xe2\x80\x99s points\xe2\x80\x99 raw format to a\n tibble.\n- `hydro_translate` to translate various terms and names from Greek to\n English.\n\nThe data sets that are provided by `hydroscoper` are:\n\n- `stations` a tibble with stations\xe2\x80\x99 data from Hydroscope.\n- `timeseries` a tibble with time series\xe2\x80\x99 data from Hydroscope.\n- `greece_borders` a tibble with the borders of Greece.\n\n## Example\n\nThis is a minimal example which shows how to get precipitation time\nseries *56* of station *200200* from the *kyy* sub-domain.\n\nLoad libraries and get data:\n\n``` r\nlibrary(hydroscoper)\nlibrary(tibble)\nlibrary(ggplot2)\n\nts_raw <- get_data(subdomain = ""kyy"", time_id = 56)\nts_raw\n#> # A tibble: 147,519 \xc3\x97 3\n#> date value comment\n#> \n#> 1 1985-05-06 08:00:00 0 1 \n#> 2 1985-05-06 08:30:00 0 1 \n#> 3 1985-05-06 09:00:00 0 1 \n#> 4 1985-05-06 09:30:00 0 1 \n#> 5 1985-05-06 10:00:00 0 1 \n#> 6 1985-05-06 10:30:00 0 1 \n#> 7 1985-05-06 11:00:00 0 1 \n#> 8 1985-05-06 11:30:00 0 1 \n#> 9 1985-05-06 12:00:00 0 1 \n#> 10 1985-05-06 12:30:00 0 1 \n#> # \xe2\x80\xa6 with 147,509 more rows\n```\n\nLet\xe2\x80\x99s create a plot:\n\n``` r\nggplot(data = ts_raw, aes(x = date, y = value))+\n geom_line()+\n labs(title= ""30 min precipitation for station 200200"",\n x=""Date"", y = ""Rain height (mm)"")+\n theme_classic()\n```\n\n![](man/figures/README-plot_time_series-1.png)\n\n## Meta\n\n- Bug reports, suggestions, and code are welcome. Please see\n [Contributing](https://github.com/ropensci/hydroscoper/blob/master/CONTRIBUTING.md).\n- License:\n - All code is licensed MIT.\n - All data are from the public data sources in\n `http://www.hydroscope.gr/`.\n- To cite `hydroscoper` please use:\n\n\n\n Vantas Konstantinos, (2018). hydroscoper: R interface to the Greek National Data Bank for\n Hydrological and Meteorological Information. 
Journal of Open Source Software,\n 3(23), 625 DOI:10.21105/joss.00625\n\nor the BibTeX entry:\n\n @Article{kvantas2018,\n author = {Konstantinos Vantas},\n title = {{hydroscoper}: R interface to the Greek National Data Bank for Hydrological and Meteorological Information},\n doi = {10.21105/joss.00625},\n year = {2018},\n month = {mar},\n publisher = {The Open Journal},\n volume = {3},\n number = {23},\n journal = {The Journal of Open Source Software}\n }\n\n[![ropensci_footer](http://ropensci.org/public_images/github_footer.png)](https://ropensci.org)\n'",",https://doi.org/10.5281/zenodo.1196540,https://doi.org/10.21105/joss.00625","2017/12/13, 08:33:53",2142,CUSTOM,0,301,"2022/09/29, 14:56:06",0,1,17,0,391,0,0.0,0.019230769230769273,"2021/03/21, 16:33:42",1.4,0,3,false,,false,true,,,https://github.com/ropensci,https://ropensci.org/,"Berkeley, CA",,,https://avatars.githubusercontent.com/u/1200269?v=4,,, WRF-Hydro,A community modeling system and framework for hydrologic modeling and model coupling.,NCAR,https://github.com/NCAR/wrf_hydro_nwm_public.git,github,"modeling,hydrologic-modeling,hydrology,earth-science,wrf-hydro,fortran",Freshwater and Hydrology,"2023/10/17, 16:09:22",158,0,36,true,Fortran,National Center for Atmospheric Research,NCAR,"Fortran,Python,C,Makefile,Shell,CMake,NCL,Perl,Forth",https://ral.ucar.edu/projects/wrf_hydro,"b'# WRF-Hydro\xc2\xae \n\n[![Build Status](https://github.com/NCAR/wrf_hydro_nwm_public/actions/workflows/test-pr.yml/badge.svg?branch=main)](https://github.com/NCAR/wrf_hydro_nwm_public/actions/workflows/test-pr.yml)\n[![Release](https://img.shields.io/github/release/NCAR/wrf_hydro_nwm_public.svg)](https://github.com/NCAR/wrf_hydro_nwm_public/releases/latest)\n[![DOI](https://zenodo.org/badge/DOI/10.5281/zenodo.3625237.svg)](https://doi.org/10.5281/zenodo.3625237)\n\n## Description\nThis is the code repository for [WRF-Hydro\xc2\xae](https://ral.ucar.edu/projects/wrf_hydro).\n\nWRF-Hydro is a community modeling system and framework for hydrologic modeling and model coupling. 
In 2016 a configuration of WRF-Hydro was implemented as the [National Water Model](http://water.noaa.gov/about/nwm) (NWM) for the continental United States.\n\n## Documentation\nDocumentation and in-depth build instructions can be found on our [project website](https://ral.ucar.edu/projects/wrf_hydro/technical-description-user-guide).\nQuick [build instructions](docs/BUILD.md) for CMake are also available.\n\n## Resources and Support\nFor news and updates regarding the WRF-Hydro project please subscribe to our [email list](https://ral.ucar.edu/projects/wrf_hydro/subscribe).\n\nFor user support and general inquiries please use our [contact form](https://ral.ucar.edu/projects/wrf_hydro/contact).\n\nIf you have found a bug or would like to propose changes to the model code please refer to our [contributing guidelines](.github/CONTRIBUTING.md).\n\n## Contributions\nFor more information on how to contribute to this project please refer to our [contributing guidelines](.github/CONTRIBUTING.md).\n\n## License\nThe license and terms of use for this software can be found [here](LICENSE.txt).\nThe Crocus snowpack module and related files are from the SURFEX surface model developed by M\xc3\xa9t\xc3\xa9o-France, the French national meteorological service.\nThese files are under the [CeCILL-C](http://www.cecill.info/licences/Licence_CeCILL-C_V1-en.html) license.\n\n## Acknowledgements\nFunding support for the development and application of the WRF-Hydro system has been provided by:\n- The National Science Foundation and the National Center for Atmospheric Research\n- The National Oceanic and Atmospheric Administration (NOAA) Office of Water Prediction (OWP)\n- The U.S. National Weather Service (NWS)\n- The U.S. Department of Energy (DOE)\n- The Colorado Water Conservation Board\n- Baron Advanced Meteorological Services\n- National Aeronautics and Space Administration (NASA)\n'",",https://doi.org/10.5281/zenodo.3625237","2018/02/16, 21:25:17",2077,CUSTOM,35,2782,"2023/10/11, 15:21:15",122,456,586,53,14,13,1.0,0.6569709127382146,"2021/12/10, 19:00:01",v5.2.0,0,29,false,,true,true,,,https://github.com/NCAR,http://ncar.ucar.edu,"Boulder, CO",,,https://avatars.githubusercontent.com/u/2007542?v=4,,, rwrfhydro,"A community-contributed tool box for managing, analyzing, and visualizing WRF Hydro (and HydroDART) input and output files in R.",NCAR,https://github.com/NCAR/rwrfhydro.git,github,,Freshwater and Hydrology,"2021/02/23, 00:13:01",72,0,5,false,R,National Center for Atmospheric Research,NCAR,R,,"b'\n\n\n# rwrfhydro\n\n[![Travis-CI Build\nStatus](https://travis-ci.org/NCAR/rwrfhydro.png?branch=devBranch)](https://travis-ci.org/NCAR/rwrfhydro)\n\nA community-contributed tool box for managing, analyzing, and\nvisualizing WRF Hydro (and HydroDART) input and output files in R.\n\n# Purpose\n\nIntentionally, \xe2\x80\x9crwrfhydro\xe2\x80\x9d can be read as \xe2\x80\x9cour wrf hydro\xe2\x80\x9d. The purpose\nof this R package is to focus community development of tools for working\nwith and analyzing data related to the WRF Hydro model. These tools are\nboth free and open-source, just like R, which should help make them\naccessible and popular. 
For users new to R, several introductory\nresources are listed below.\n\nThe purposes of this README are 1) to get you started using rwrfhydro\nand 2) to explain the basics (and then some) of how we develop the\npackage so you can quickly start adding your contributions.\n\n# Table of Contents\n\n - [Installing](#installing)\n - [Using](#using)\n - [Developing](#developing)\n - [Version control for collaboration:\n Github](#version-control-for-collaboration-github)\n - [Forking](#forking-and-cloning)\n - [devBranch and pull\n requests](#devbranch-and-pull-requests) \n - [Not using Github](#not-using-github)\n - [Workflow: git, R packaging, and\n you](#workflow-git-r-packaging-and-you)\n - [Our best practices](#our-best-practices)\n - [R package best practices and code\n style](#r-package-best-practices-and-code-style)\n - [Organizing functions](#organizing-functions)\n - [R code style](#r-code-style)\n - [Packages are NOT scripts](#packages-are-not-scripts)\n - [Documentation with\n roxygen](#documentation-with-roxygen)\n - [Objects in rwrfhydro](#objects-in-rwrfhydro)\n - [Graphics](#graphics)\n - [R Package development resources](#r-package-development-resources)\n - [Introductory R resources](#introductory-r-resources)\n\n# Installing\n\nInstalling rwrfhydro (not on [CRAN](http://cran.r-project.org/)) is\nfacilitated by the devtools package (on CRAN), so devtools is installed\nfirst. The following is done for the initial install or to update the\nrwrfhydro package.\n\n``` r\ninstall.packages(""devtools"")\ndevtools::install_github(""NCAR/rwrfhydro"")\n```\n\nThe very first time this is run, it can take a while to install all the\npackage dependencies listed as \xe2\x80\x9cImports\xe2\x80\x9d in the\n[DESCRIPTION](https://github.com/NCAR/rwrfhydro/blob/master/DESCRIPTION)\nfile.\n\nTo check for updates once rwrfhydro is loaded, run `CheckForUpdates()`.\n\nTo install branches other than master, perhaps from your own fork:\n\n``` r\ndevtools::install_github(""username/rwrfhydro"", ref=\'myBranch\')\n```\n\nImportantly, beta functionality can be installed using:\n\n``` r\ndevtools::install_github(""NCAR/rwrfhydro"", ref=\'devBranch\')\n```\n\nWe are finally gaining some Windows users and have attempted to improve\nportability of rwrfhydro to this system. A primary dependency of\nrwrfhydro is ncdf4. This ncdf4 package binary can be installed in the\nfollowing way: First, obtain the binary from\n. Then in an R session\n\n``` r\ninstall.packages(file.choose(), repos=NULL, type = ""binary"")\n```\n\nR will open a window for you to choose the downloaded zip file and will\ninstall it.\n\n# Using\n\nAfter the one-time install or any subsequent re-installs/updates, simply\nload the rwrfhydro library in an R session:\n\n``` r\nlibrary(rwrfhydro)\n```\n\nand now the package (namespace) is available.\n\n[*Online\nvignettes*](https://github.com/NCAR/rwrfhydro/blob/master/vignettes.Rmd)\n(or in R `browseVignettes(""rwrfhydro"")`) are probably the easiest way to\nget in-depth, thematic overviews of rwrfhydro functionality.\n\nTo get package metadata and a listing of functions:\n`library(help=rwrfhydro)`. Just the functions:\n`ls(\'package:rwrfhydro\')`. For specific functionality see function help\n(e.g. `?VisualizeDomain` or `help(VisualizeDomain)`).\n\n# Developing and bug reports\n\nBugs are to be reported\n[here](https://github.com/NCAR/rwrfhydro/issues). 
If you want to help fix bugs and get fixes into the code, please continue reading about\ndeveloping.\n\nThere are four main aspects of developing the code base and\ncontributing:\n\n - Version control for collaboration: Not terribly interesting but\n incredibly useful. For those new to the Git/Github process, it can\n be a bit daunting. Please contact us for some help; we do want to\n get your useful code into the main repository\\!\n\n - R Packaging: Again, not very interesting, but critical for creating\n the extremely useful nature of R packages. Fortunately, the\n `devtools` package simplifies packaging tremendously and so figures\n prominently in the development process we sketch below. The main\n details have been sorted out, so contributing new functions is\n generally fairly easy.\n\n - Our best practices: This ranges from fundamental to fussy.\n\n - Getting in touch with the community: Lots of aspects of this tool\n are under active development. Don\xe2\x80\x99t duplicate efforts, extend them\\!\n We will establish a site for communicating development tasks and\n statuses on the github wiki.\n\nR packaging and some version control are treated in [Hadley Wickham\xe2\x80\x99s\nbook on R Packages](http://r-pkgs.had.co.nz/). Specific sections of this\nbook are linked below. Further resources on R package development are\nlisted at the end of this page.\n\n## Version control for collaboration: Github\n\nInstead of going straight to developing, we recommend that you install\nrwrfhydro using `devtools::install_github(\'NCAR/rwrfhydro\')` first,\nbecause this streamlines the installation of package dependencies. Note\nthat `devtools::install_github(\'NCAR/rwrfhydro\')` installs rwrfhydro\ninto your default library path (`.libPaths()[1]`) and that the source\ncode is not included.\n\nThe very best way to obtain and edit the source is to \xe2\x80\x9cfork\xe2\x80\x9d rwrfhydro\non github and then clone *your* repository to your machine(s) of\nchoice. You edit your fork and, when it\xe2\x80\x99s ready, you submit a pull\nrequest to get the changes back to the main (upstream) fork of\nrwrfhydro. More details are provided below. Your cloned git repository\nis not in your default R library path (`.libPaths()[1]`), but somewhere\nelse where you choose to keep your development code. However, devtools\nallows you to build your development package into your library path.\nThis means that after you add some code locally, you can\n`library(rwrfhydro)` from other R sessions on that machine with your\nchanges appearing in the package. The basic use of devtools is outlined\nbelow. It greatly streamlines all aspects of developing R packages and\nis highly recommended. Particularly, it makes it easy to go from github\nor local changes to an R package.\n\n### Forking and cloning\n\nPlease fork the repository to contribute. A fork is a separate copy of\nthe main repository on which you have write permissions. Note that you\ndo not have write permissions on any other fork of the repository.\n[Forking is trivial in\nGithub](https://help.github.com/articles/fork-a-repo/). You have to have\na free (for open-source repositories) account to fork on github.\n\nNext you\xe2\x80\x99ll clone *your* fork to your local computer and you\xe2\x80\x99ll\ninteract between your forked repository on github, which is called\n\xe2\x80\x9corigin\xe2\x80\x9d. The repo \xe2\x80\x9cNCAR/rwrfhydro\xe2\x80\x9d is known as the \xe2\x80\x9cupstream\xe2\x80\x9d fork.\nThis is the \xe2\x80\x9cofficial\xe2\x80\x9d repo. 
It\xe2\x80\x99s also called upstream because changes\nto it should always flow to all other repos so that they can easily sync\ntheir separate changes back to it. Keep your fork sync\xe2\x80\x99d with upstream\nas much/often as possible to avoid painful merges; github notifies\ndownstream forks of changes to upstream.\n\nBecause you don\xe2\x80\x99t have write permissions to \xe2\x80\x9cupstream\xe2\x80\x9d (or any other\nfork), you have to request that your changes be pulled upstream. This is\ndone via a pull request on github (website). We give some tips below and\ngive a general overview of forking on github in [this\ndocument](https://docs.google.com/document/d/1DxsViogPdA0uObHgNx4YFKd4ClC-m9UFcX0rO-ZJTY0/edit?usp=sharing).\n\n### devBranch and pull requests\n\nWe maintain two main branches of rwrfhydro: master and devBranch. You\nshould *never work on the master branch*. All changes have to pass\nthrough the devBranch before going into master, and this is\ncontrolled by the package maintainer. Therefore, devBranch is where your\n[pull requests](https://help.github.com/articles/using-pull-requests/)\nwill go. Other branches on your fork are up to you. How you get your\ncode into your fork\xe2\x80\x99s devBranch is your choice. One suggestion is to\nwork on your personal branch. Then when various files are ready to be\ncontributed to `devBranch`, you first do `git checkout devBranch`,\nfollowed by `git checkout myBranch -- path/to/file` for each file you\ndesire to copy from `myBranch` into `devBranch`. Finally the `git add`\nand `git commit` formally put these files into `devBranch`. Some more\ndetails on using git are provided in the\n[workflow](#workflow-git-r-packaging-and-you) overview below.\n\n### Not using Github\n\nThis is not recommended, but might be possible. It will certainly hinder\nyour interaction with the upstream repo.\n\n## Travis-CI and `R CMD check`\n\nThe rwrfhydro repo is configured to build on a third-party virtual linux\nmachine with every push or pull request to the master or devBranch\nbranches. This service is known as Travis-CI (continuous integration).\nThis means your pull requests are automatically checked by `R CMD\ncheck`, which keeps errors from creeping into the upstream code. There\nare a variety of hurdles to getting code to build on Travis-CI,\nincluding installing requisite system and R packages, which can be\nchallenging but worth it for the debugging provided by automated builds\nin conjunction with `R CMD check`.\n\n`R CMD check` accepts a variety of arguments. Ultimately, it 1) checks\nthe source for consistency including across platforms (Windows, OSX,\nlinux), 2) runs all specified code tests, essentially regression tests,\n3) runs all the examples provided in the documentation, and 4) builds\nall the vignettes. Currently, we are skipping vignette building until we\ncan streamline several of these.\n\nYou can configure your own fork to build on Travis-CI and you can push\nfrequently to check for errors. 
This is nearly identical to (slightly\nmore stringent than) running `devtools::check()`, but all you have to do\nis push your commits.\n\n## Workflow: git, R Packaging, and you\n\nWorkflow is approximately this:\n\n - Fork project on github\n\n - Clone *your* github fork to your local machine(s)\n\n - Set the upstream repo (`git remote add upstream\n https://github.com/NCAR/rwrfhydro` - different syntax for ssh\n access)\n\n - Checkout your devBranch (`git checkout devBranch`)\n\n - Development cycles:\n \n - Optional: Create a topic branch off of devBranch in git (e.g.\n `git checkout -b myBranch`) and push this to github (`git push\n origin myBranch`).\n - Write code (in these dirs: R/, NAMESPACE, src/, data/, inst/).\n - Write documentation (in these dirs: man/, vignettes/).\n - Write tests (in this dir: test/).\n - Document and check with devtools: `devtools::document();\n devtools::check_man(); devtools::check()`\n - Commit to your branch with git. (`git commit -am \'Some cool\n features were needed.\'`)\n - Push to github (`git push origin branch`). If Travis-CI is\n configured, this can trigger an `R CMD check` on Travis.\n\n - To get code back to the main repository/fork:\n \n - If on myBranch: You probably want to merge devBranch into\n myBranch: `git pull upstream devBranch`.\n - If on myBranch: `git checkout devBranch`. Then either wholesale merge\n your work from origin (`git pull origin myBranch`) or cherry\n pick files (`git checkout myBranch -- path/to/file` for each\n file) into devBranch.\n - If on myBranch: commit, `git commit -am \'Fixes and new\n features\'`.\n - If not previously on myBranch and did not do a wholesale merge\n in the previous step: Sync with the ""upstream"" devBranch: [See\n here.](https://help.github.com/articles/syncing-a-fork/) If the\n upstream repo is set: `git pull upstream devBranch`.\n - Push your devBranch to your fork on github: `git push origin\n devBranch`.\n - Submit a [pull request on\n github](https://help.github.com/articles/using-pull-requests/)\n on devBranch to upstream (NCAR/rwrfhydro).\n\n## Our best practices\n\n### R package best practices and code style\n\n\n\n#### Organizing functions\n\n\n\n - Do NOT put all functions in a single file, nor each in their own\n file. Functions should be grouped by file and may occasionally need\n to be moved to new or different files as new functions are written.\n - File names end in .R and are all lowercase with \\_ used to separate\n words. (All lowercase (except the .R) helps ensure compatibility\n with Windows developers.)\n\n#### R code style\n\n\n\n - Generally follow Google\'s R style guide with a preference for\n variableName (first-lower camel case) over variable.name (period\n distinction). Note that functions are first-upper camel case,\n e.g. FunctionName.\n \n - Variables are nouns. Functions are verbs.\n - There are lots of other style considerations to learn: indents, braces,\n line length, assignment, comment style.\n\n#### Packages are NOT scripts\n\n\n\n - Don\'t use library() or require(). Use the DESCRIPTION to specify\n your package\'s requirements.\n - Use package::function() to use functions from external packages. Make\n sure the package and version are listed in DESCRIPTION.\n - Never use source() to load code from a file. Rely on\n devtools::load\\_all() to automatically source all files in R/.\n - Don\'t modify global options() or graphics par(). 
Put state-changing\n operations in functions that the user can call when they want.\n - Don\'t save files to disk with write(), write.csv(), or saveRDS().\n Use data/ to cache important data files.\n\n#### Documentation with roxygen\n\n\n\nOnce you get used to this, you will love writing documentation as you go\nfor your R functions.\n\n - Roxygen comments start with \\#\' and come before a function. All the\n roxygen lines preceding a function are called a block. Each line\n should be wrapped in the same way as your code, normally at 80\n characters.\n - Blocks are broken up into tags, which look like @tagName details.\n The content of a tag extends from the end of the tag name to the\n start of the next tag (or the end of the block). Because @ has a\n special meaning in roxygen, you need to write @@ if you want to add\n a literal @ to the documentation (this is mostly important for email\n addresses and for accessing slots of S4 objects).\n - Each block includes some text before the first tag. This is called\n the introduction, and is parsed specially:\n - The first sentence becomes the title of the documentation. That\'s\n what you see when you look at help(package = mypackage) and is shown\n at the top of each help file. It should fit on one line, be written\n in sentence case, and end in a full stop.\n - The second paragraph is the description: this comes first in the\n documentation and should briefly describe what the function does.\n - The third and subsequent paragraphs go into the details: this is a\n (often long) section that is shown after the argument description\n and should go into detail about how the function works.\n - All objects must have a title and description. Details are optional.\n - GetPkgMeta: NOTE that this function only works for packages *not*\n installed by devtools::load\\_all(). The function analyzes the\n @keywords and @concepts tags supplied in the roxygen documentation\n to categorize and relate functions and objects in rwrfhydro (or any\n other package). Keywords follow R conventions (see this obscure link\n which I need to find). Concepts are customized to rwrfhydro. The\n current list of keywords and concepts can be gathered from the\n latest version of devBranch using the following command, which\n gives, as of this writing:\n\n\n\n``` r\n> GetPkgMeta(listMetaOnly=TRUE)\n-----------------------------------\nrwrfhydro concepts\n-----------------------------------\nAmeriflux\nDART\ndata\ndataAnalysis\ndataGet\ndataMgmt\ngeospatial\ngetData\nGHCN\nmodelEval\nMODIS\nncdf\nnudging\nplot\nSNODAS\nSNOTEL\nStreamflow\nusgs\nusgsStreamObs\n\n-----------------------------------\nrwrfhydro keywords\n-----------------------------------\ndata\ndatabase\nhplot\ninternal\nIO\nmanip\nsmooth\nts\nunivar\nutilities\n```\n\n### Objects in rwrfhydro\n\nWe will probably need to develop some S3 classes or reuse some from\nother packages. List of possible objects: a gaugePts object for organizing\n""frxst points"", both locations and data.\n\n### Graphics\n\nWe need to resolve whether we are going to use base graphics or ggplot or\nboth. I\'m leaning towards both. 
Not all plotting routines always have to\nbe available for a given function, but I think that both will probably\ndevelop over time.\n\nBecause ggplot has a big learning curve, we can return closures which 1)\nallow basic aspects of the plot to be tweaked and make the plot when\ncalled, and 2) return the basic ggplot object, which can then also be\nextended with ggplot commands. I made an example of this in\nVisualizeDomain.R for ggmap/ggplot objects.\n\n# R Package development resources\n\n# Introductory R resources (somewhat random)\n\n - My introduction to R: multiple resources but, sorry, the video\n link is broken.\n - My R cheat sheet (also available in LaTeX in the above link).\n - The popular YouTube series on R by Roger Peng.\n'",,"2015/01/17, 03:44:03",3204,CUSTOM,0,784,"2021/02/23, 00:13:02",13,96,106,0,975,2,0.0,0.42409638554216866,"2021/05/10, 17:16:23",v1.0.1,0,12,false,,false,true,,,https://github.com/NCAR,http://ncar.ucar.edu,"Boulder, CO",,,https://avatars.githubusercontent.com/u/2007542?v=4,,, PCR-GLOBWB_model,A large-scale hydrological model intended for global to regional studies.,UU-Hydro,https://github.com/UU-Hydro/PCR-GLOBWB_model.git,github,,Freshwater and Hydrology,"2023/06/15, 09:00:55",103,0,21,true,Python,UU-Hydro,UU-Hydro,"Python,Shell,Cython",,"b'# PCR-GLOBWB\n\nPCR-GLOBWB (PCRaster Global Water Balance) is a large-scale hydrological model intended for global to regional studies and developed at the Department of Physical Geography, Utrecht University (Netherlands). This repository holds the model scripts of PCR-GLOBWB. \n\ncontact: Edwin H. Sutanudjaja (E.H.Sutanudjaja@uu.nl).\n\nPlease also see the file README.txt.\n\nMain reference/paper: Sutanudjaja, E. H., van Beek, R., Wanders, N., Wada, Y., Bosmans, J. H. C., Drost, N., van der Ent, R. J., de Graaf, I. E. M., Hoch, J. M., de Jong, K., Karssenberg, D., López López, P., Peßenteiner, S., Schmitz, O., Straatsma, M. W., Vannametee, E., Wisser, D., and Bierkens, M. F. P.: PCR-GLOBWB 2: a 5 arcmin global hydrological and water resources model, Geosci. Model Dev., 11, 2429-2453, https://doi.org/10.5194/gmd-11-2429-2018, 2018.\n\n## Input and output files (including OPeNDAP-based access)\n\nPCR-GLOBWB input and output files for the runs made in Sutanudjaja et al. (2018, https://doi.org/10.5194/gmd-11-2429-2018) are available on https://geo.data.uu.nl/research-pcrglobwb/pcr-globwb_gmd_paper_sutanudjaja_et_al_2018/. For requesting access, please send an e-mail to E.H.Sutanudjaja@uu.nl.\n\nThe input files (and some output files) are also available on the OPeNDAP server: https://opendap.4tu.nl/thredds/catalog/data2/pcrglobwb/catalog.html. The OPeNDAP protocol (https://www.opendap.org) allows users to access PCR-GLOBWB input files from the remote server and perform PCR-GLOBWB runs **without** the need to download the input files (with total size ~250 GB for the global extent).\n\n## How to install\n\nPlease follow these steps to install PCR-GLOBWB:\n\n 1. You will need a working Python environment; we recommend installing Miniconda, particularly for Python 3. Follow the instructions given at https://docs.conda.io/en/latest/miniconda.html. The user guide and short reference on conda can be found [here](https://docs.conda.io/projects/conda/en/latest/user-guide/cheatsheet.html).\n\n 2. 
Get the requirements/environment file [conda_env/pcrglobwb_py3.yml](conda_env/pcrglobwb_py3.yml) from this repository and use it to install all modules required (e.g. PCRaster, netCDF4) to run PCR-GLOBWB:\n\n `conda env create --name pcrglobwb_python3 -f pcrglobwb_py3.yml`\n\n This will create an environment named *pcrglobwb_python3*.\n\n 3. Activate the environment in a command prompt:\n\n `conda activate pcrglobwb_python3`\n\n 4. Clone or download this repository. We suggest using the latest version of the model, which should also be on the default branch. \n\n `git clone https://github.com/UU-Hydro/PCR-GLOBWB_model.git`\n\n This will clone PCR-GLOBWB into the current working directory.\n\n\n## PCR-GLOBWB configuration .ini file\n\nFor running PCR-GLOBWB, a configuration .ini file is required. Some configuration .ini file examples are given in the *config* directory. To be able to run PCR-GLOBWB using these .ini file examples, there are at least two things that must be adjusted. \n\nFirst, please make sure that you edit or set the *outputDir* (output directory) to a directory to which you have write access. You do not need to create this directory manually. \n\nMoreover, please also make sure that the *cloneMap* file is stored locally on your computing machine. The *cloneMap* file defines the spatial resolution and extent of your study area and must be in the PCRaster format. Some examples are given in this repository [clone_landmask_maps/clone_landmask_examples.zip](clone_landmask_maps/clone_landmask_examples.zip).\n\nBy default, the configuration .ini file examples given in the *config* directory will use PCR-GLOBWB input files from the 4TU.ResearchData server, as set in their *inputDir* (input directory): \n\n`inputDir = https://opendap.4tu.nl/thredds/dodsC/data2/pcrglobwb/version_2019_11_beta/pcrglobwb2_input/`\n\nThis can be adjusted to any (local) location, e.g. if you have the input files stored locally on your computing machine. \n\n\n## How to run\n\nPlease make sure that the correct conda environment is activated in a command prompt:\n\n`conda activate pcrglobwb_python3`\n\nGo to the PCR-GLOBWB *model* directory. You can start a PCR-GLOBWB run using the following command: \n\n`python deterministic_runner.py <configuration_file>`\n\nwhere `<configuration_file>` is the configuration .ini file of PCR-GLOBWB. 
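\n\nThe .ini adjustments described above can also be scripted. Below is a minimal sketch, not part of PCR-GLOBWB itself: it assumes the example .ini files parse with Python\'s configparser and that the relevant keys live in a *globalOptions* section (treat the section name and file names as assumptions and check them against the examples in the *config* directory).\n\n```python\nfrom configparser import ConfigParser\n\nconfig = ConfigParser(interpolation=None)\nconfig.optionxform = str  # preserve camelCase keys such as outputDir\nconfig.read(\'config/example.ini\')  # hypothetical example file name\n\nopts = config[\'globalOptions\']\nopts[\'outputDir\'] = \'/scratch/my_pcrglobwb_run/\'   # a directory you can write to\nopts[\'cloneMap\'] = \'/data/clone_maps/my_clone.map\'  # local PCRaster clone map\n# inputDir may stay pointed at the OPeNDAP server, or at a local copy:\n# opts[\'inputDir\'] = \'/data/pcrglobwb2_input/\'\n\nwith open(\'config/my_run.ini\', \'w\') as f:\n    config.write(f)\n```\n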
\n'",",https://doi.org/10.5194/gmd-11-2429-2018,https://doi.org/10.5194/gmd-11-2429-2018","2016/08/17, 15:20:35",2625,GPL-3.0,12,6275,"2023/06/15, 08:58:19",4,18,20,3,132,1,0.0,0.37254901960784315,"2022/11/08, 05:07:29",v2.3.0_alpha,0,5,false,,false,true,,,https://github.com/UU-Hydro,http://www.globalhydrology.nl/,"Utrecht, Netherlands",,,https://avatars.githubusercontent.com/u/8937096?v=4,,, HydroShare,A collaborative website for better access to data and models in the hydrologic sciences.,hydroshare,https://github.com/hydroshare/hydroshare.git,github,"django,django-rest-framework,postgresql,docker,hydroshare,python,hydrologic-sciences,collaboration,javascript,hydrology,hydrologic-modeling,hydrology-stormwater-analysis,hydrologic-networks,hydrologic-database,hydro,irods,nginx,solr",Freshwater and Hydrology,"2023/10/16, 12:48:45",171,0,21,true,Python,HydroShare,hydroshare,"Python,JavaScript,HTML,CSS,Shell,Vue,R,Dockerfile",https://www.hydroshare.org,"b'# HydroShare _(hydroshare)_\n\nHydroShare is a website and hydrologic information system for sharing hydrologic data and models aimed at giving users the cyberinfrastructure needed to innovate and collaborate in research to solve water problems.\n\n#### Nightly Build Status generated by [Jenkins CI](http://ci.hydroshare.org:8080) (develop branch)\n\n| Workflow | Clean | Build/Deploy | Unit Tests | Flake8 | Requirements |\n| -------- | ----- | ------------ | ---------- | -------| ------------ |\n| [![Build Status](http://ci.hydroshare.org:8080/job/nightly-build-workflow/badge/icon?style=plastic)](http://ci.hydroshare.org:8080/job/nightly-build-workflow/) | [![Build Status](http://ci.hydroshare.org:8080/job/nightly-build-clean/badge/icon?style=plastic)](http://ci.hydroshare.org:8080/job/nightly-build-clean/) | [![Build Status](http://ci.hydroshare.org:8080/job/nightly-build-deploy/badge/icon?style=plastic)](http://ci.hydroshare.org:8080/job/nightly-build-deploy/) | [![Build Status](http://ci.hydroshare.org:8080/job/nightly-build-test/badge/icon?style=plastic)](http://ci.hydroshare.org:8080/job/nightly-build-test/) | [![Build Status](http://ci.hydroshare.org:8080/job/nightly-build-flake8/badge/icon?style=plastic)](http://ci.hydroshare.org:8080/job/nightly-build-flake8/) | [![Requirements Status](https://requires.io/github/hydroshare/hs_docker_base/requirements.svg?branch=develop)](https://requires.io/github/hydroshare/hs_docker_base/requirements/?branch=master) | \n\nHydroShare is a website and hydrologic information system for sharing hydrologic data and models aimed at providing the cyberinfrastructure needed to enable innovation and collaboration in research to solve water problems. HydroShare is designed to advance hydrologic science by enabling the scientific community to more easily and freely share products resulting from their research, not just the scientific publication summarizing a study, but also the data and models used to create the scientific publication. 
With HydroShare users can: (1) share data and models with colleagues; (2) manage who has access to shared content; (3) share, access, visualize and manipulate a broad set of hydrologic data types and models; (4) use the web services API to program automated and client access; (5) publish data and models to meet the requirements of research project data management plans; (6) discover and access data and models published by others; and (7) use web apps to visualize, analyze, and run models on data in HydroShare.\n\n## Install\n\nPrerequisites\n\nSupported OS (developer laptops): macOS 10.12+, Win10+ Pro, Ent, Edu, Acad Pro, Acad Ent, CentOS 7 and Ubuntu/Lubuntu 18+ LTS\n\nWe ran into trouble with Lubuntu 16.04 LTS, so Ubuntu 16.04 LTS probably does not work either.\n\nFamiliarity with docker and git is required to work with HydroShare.\n\nSome VM skills such as network settings (Bridge/NAT/Host only) and file sharing are needed if you work with a virtual machine.\n\nFor Windows, this link is required to proceed - https://docs.google.com/document/d/1wIQEYq3OkWmzPTHeyGyjXLZWrinEXojJPBTJq7fczL8/edit#heading=h.mfmd8m9mxvsl\n\n \n\nOne-Time Install\n\nTables are provided (in Courier font) throughout this wiki for copy-paste of entire blocks.\n\n1. Open a terminal (macOS, Linux) or command prompt (Windows).\nNavigate to where you will store the source code, for example /Users/yourname/repo/\n\nTypically you will find it under this directory:\n \n cd ~/repo\n\n2. Get the code.\n\nNote: the default branch is set to the develop branch.\n\n git clone https://github.com/hydroshare/hydroshare.git\n\n git checkout \n\nTo get current solr revision fixes:\n\n cd hydroshare\n\n a. git pull\n\n b. docker exec -u hydro-service -ti hydroshare python manage.py solr_update\n\n \nIt\'s very important that you DO NOT change the directory name after cloning. Leave it as ""hydroshare"".\nIf you are running inside a virtual machine such as HydroDev Ubuntu 18.04 from here, you need to:\nReplace all FQDN_OR_IP in the file nginx/config-files/hydroshare-local-nginx.conf.template with your VM IP address (four places in total need to be replaced).\n\nAdd a new line to the .gitignore file to make sure you will not commit your local settings to GitHub:\n\n /nginx/config-files/hydroshare-local-nginx.conf.template\n\nIf you are running Windows 10, please make sure no process is listening on port 80. This may be a pain for Windows 10 users; we found a very useful link here. However, if you can\'t stop the process which is listening on port 80, you need to do these steps:\n\nChange the nginx port by modifying this file: local-dev.yml (change the port setting from 80:80 to 8080:80 in the nginx section)\n\nReplace line 26 in the file nginx/config-files/hydroshare-local-nginx.conf.template from\n\n if ($http_host != ""FQDN_OR_IP"") {\n\nto\n\n if ($http_host != ""FQDN_OR_IP:8080"") {\n\nReplace line 35 in the file nginx/config-files/hydroshare-local-nginx.conf.template from\n\n rewrite ^ http://FQDN_OR_IP$request_uri permanent;\n\nto\n\n rewrite ^ http://FQDN_OR_IP:8080$request_uri permanent;\n\nAdd two new lines to the .gitignore file to make sure you will not commit your local settings to GitHub:\n\n local-dev.yml\n\n /nginx/config-files/hydroshare-local-nginx.conf.template\n\n3. Log into Docker via the application and command line.\nCommand line: \n \n docker login \n \n You will be asked to enter your username and password. \n\n4. 
Launch the stack\n\n ./local-dev-first-start-only.sh\n\nFollow the on-screen instructions to continue.\n\nOn completion, run the following command to launch HydroShare: \n\n docker-compose -f local-dev.yml up \n\n5. Sanity Checks\n\n Some WARNINGs are normal. \n\n HydroShare is available in your browser at http://localhost (or http://localhost:8080 in case you are running inside a VM).\n\n The default admin page is http://localhost/admin\n\n The default admin account is admin:default\n\n Swagger API docs: http://localhost/hsapi/\n\n (These checks can also be scripted; see the sketch after this section.)\n\n6. Start & Stop & Log\n\nTo start HydroShare, you only need to open a shell window, change to the HydroShare code directory, then run\n\n docker-compose -f local-dev.yml (up | down) [-d] [--build]\n\nNote: the bracketed -d for daemon is optional, and you don\'t paste in the brackets.\n\n Use the -d option if you want to type new commands in this window or don\'t want to see the real-time output log.\n\n Use the --build option in case docker keeps an image in cache and does not update correctly while modifying the Dockerfile and working with PyCharm.\n\nCREATE NEW ACCOUNT - This is the same as it\'s always been in HydroShare. Ask a teammate or hack at it. Basically, open a hydroshare console window, then use the UI to sign up for a new account and watch the hydroshare container console (docker logs hydroshare) for a verification link; paste that into your browser and save the new account in the UI.\n\nIf your development machine is slow, the defaultworker container may not work properly because it won\'t see the iRODS containers alive before its timeout. Please wait a few seconds, open a new terminal window, then restart this container with the command docker restart defaultworker. Alternatively, start the whole HydroShare system with ./slow-start.sh (no need to run docker-compose -f local-dev.yml up), but this way you will not have a native log window.\n\n\n\nTo stop HydroShare, simply close the running window, or open a new window and run\n\n docker-compose -f local-dev.yml down\n\nAll data is persisted for the next start.\n\nTo see the logs in case you started with the -d option, open a window and run\n\n docker-compose -f local-dev.yml logs\n\nOr\n\n docker logs \n\nBranching\nWhen you switch to a new branch, just bring the stack down and up again. Sometimes you can get away with a warm restart of the stack or even relying on the Django debug mode (doing nothing but waiting). 
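\n\nThe sanity checks in step 5 can be scripted. Below is a minimal sketch, assuming a local dev instance that serves the URLs listed above (adjust to :8080 when running inside a VM); this is a convenience check, not part of the HydroShare tooling:\n\n```python\nfrom urllib.request import urlopen\nfrom urllib.error import URLError\n\n# URLs from step 5 of the install instructions\nURLS = [\n    \'http://localhost\',\n    \'http://localhost/admin\',\n    \'http://localhost/hsapi/\',\n]\n\nfor url in URLS:\n    try:\n        with urlopen(url, timeout=10) as resp:\n            print(f\'{url}: HTTP {resp.status}\')\n    except URLError as exc:\n        print(f\'{url}: unreachable ({exc.reason})\')\n```\n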
\n\n## Usage\n\nFor all intents and purposes, Hydroshare is a large Python/Django application with some extra features and technologies added on:\n- SOLR for searching\n- Redis for caching\n- RabbitMQ for concurrency and serialization\n- iRODS for a federated file system\n- PostgreSQL for the database backend\n\n#### The `hsctl` Script\n\nThe `hsctl` script is your primary tool for interacting with and running tasks against your Hydroshare install. It has the syntax `./hsctl [command]` where `[command]` is one of:\n\n- `loaddb`: Deletes the existing database and reloads the database specified in the `hydroshare-config.yaml` file.\n- `managepy [args]`: Executes a `python manage.py [args]` call on the running hydroshare container.\n- `maint_off`: Removes the maintenance page from view (only if NGINX is being used).\n- `maint_on`: Displays the maintenance page in the browser (only if NGINX is being used).\n- `rebuild`: Stops, removes and deletes only the hydroshare docker containers and images while retaining the database contents on the subsequent build as defined in the `hydroshare-config.yaml` file.\n- `rebuild --db`: Fully stops, removes and deletes any prior hydroshare docker containers, images and database contents prior to installing a clean copy of the hydroshare codebase as defined in the `hydroshare-config.yaml` file.\n- `rebuild_index`: Rebuilds the solr/haystack index in a non-interactive way.\n- `restart`: Restarts the django server only (and nginx if applicable).\n- `start`: Starts all containers as defined in the `docker-compose.yml` file (and nginx if applicable).\n- `stop`: Stops all containers as defined in the `docker-compose.yml` file.\n- `update_index`: Updates the solr/haystack index in a non-interactive way.\n\n## Testing and Debugging\n\n### Testing\n\nTests are run via normal Django tools and conventions. However, you should use the `hsctl` script mentioned above with the `managepy` command. For example: `./hsctl managepy test hs_core.tests.api.rest.test_resmap --keepdb`.\n\nThere are currently over 600 tests in the system, so it is highly recommended that you run the test suites separately from one another.\n\n### Debugging\n\nYou can debug via PyCharm by following the instructions [here](https://docs.google.com/document/d/1w3hWAPMEUBL4qTjpHb5sYMWEiWFqwaarI0NkpKz3r6w/edit#).\n\n## Other Configuration Options\n\n### Local iRODS\n\nLocal iRODS is _not_ required for development unless you are specifically working on the iRODS integration. However, if you want to work with iRODS or you simply want to learn about it, you can enable it locally.\n\n### Local HTTPS\n\nTo enable HTTPS locally:\n1. edit `config/hydroshare-config.template` and change the two values under `### Deployment Options ###` to `true` like so:\n```\n### Deployment Options ###\nUSE_NGINX: true\nUSE_SSL: true\n```\n\n2. restart local Hydroshare:\n\n docker-compose -f local-dev.yml down\n docker-compose -f local-dev.yml up -d\n\n## Contribute\n\nThere are many ways to contribute to Hydroshare. Review the [Contributing guidelines](https://github.com/hydroshare/hydroshare/blob/develop/docs/contributing.rst) and github practices for information on\n1. Opening issues for any bugs you find or suggestions you may have\n2. Developing code to contribute to HydroShare \n3. Developing a HydroShare App\n4. Submitting pull requests with code changes for review\n\n## License \n\nHydroshare is released under the BSD 3-Clause License. This means that [you can do what you want, so long as you don\'t mess with the trademark, and as long as you keep the license with the source code](https://tldrlegal.com/license/bsd-3-clause-license-(revised)).\n\n©2017 CUAHSI. This material is based upon work supported by the National Science Foundation (NSF) under awards 1148453 and 1148090. 
Any opinions, findings, conclusions, or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the NSF.\n\n'",,"2014/10/02, 02:19:41",3311,BSD-3-Clause,1305,17136,"2023/10/23, 16:00:37",218,2324,5029,457,2,11,1.1,0.7380270485282419,"2023/10/19, 23:50:30",2.10,0,40,false,,false,false,,,https://github.com/hydroshare,http://hydroshare.cuahsi.org/,,,,https://avatars.githubusercontent.com/u/3444493?v=4,,, SOILWAT2,An ecosystem water balance simulation model.,DrylandEcology,https://github.com/DrylandEcology/SOILWAT2.git,github,,Freshwater and Hydrology,"2023/10/24, 19:02:41",4,0,0,true,C,Dryland Ecology,DrylandEcology,"C,C++,R,Shell,Makefile",,"b'\n[![gh nix build status][1]][2]\n[![gh win build status][3]][2]\n[![github release][5]][6]\n[![DOI][7]][8]\n[![license][9]][10]\n[![codecov status][11]][12]\n[![doc status][4]][2]\n\n\n\n[1]: https://github.com/DrylandEcology/SOILWAT2/actions/workflows/main_nix.yml/badge.svg?branch=master\n[2]: https://github.com/DrylandEcology/SOILWAT2/actions/workflows\n[3]: https://github.com/DrylandEcology/SOILWAT2/actions/workflows/main_win.yml/badge.svg?branch=master\n[4]: https://github.com/DrylandEcology/SOILWAT2/actions/workflows/check_doc.yml/badge.svg?branch=master\n\n[5]: https://img.shields.io/github/release/DrylandEcology/SOILWAT2.svg\n[6]: https://github.com/DrylandEcology/SOILWAT2/releases\n[7]: https://zenodo.org/badge/9551524.svg\n[8]: https://zenodo.org/badge/latestdoi/9551524\n[9]: https://img.shields.io/github/license/DrylandEcology/SOILWAT2.svg\n[10]: https://www.gnu.org/licenses/gpl.html\n[11]: https://codecov.io/gh/DrylandEcology/SOILWAT2/branch/master/graph/badge.svg\n[12]: https://codecov.io/gh/DrylandEcology/SOILWAT2\n\n[SOILWAT2]: https://github.com/DrylandEcology/SOILWAT2\n[rSOILWAT2]: https://github.com/DrylandEcology/rSOILWAT2\n[STEPWAT2]: https://github.com/DrylandEcology/STEPWAT2\n[issues]: https://github.com/DrylandEcology/SOILWAT2/issues\n[pull request]: https://github.com/DrylandEcology/SOILWAT2/pulls\n[guidelines]: https://github.com/DrylandEcology/workflow_guidelines\n[doxygen]: https://github.com/doxygen/doxygen\n[GoogleTest]: https://github.com/google/googletest\n[semantic versioning]: https://semver.org/\n\n
\n\n\n# SOILWAT2\n\nSOILWAT2 is an ecosystem water balance simulation model.\n\nThis repository of `SOILWAT2` contains the same code that is\nused by [rSOILWAT2][] and [STEPWAT2][].\n\nIf you utilize this model, please cite appropriate references, and we would\nlike to hear about your particular study (and would especially appreciate a\ncopy of any published paper).\n\n\nSome references\n\n* Bradford, J. B., D. R. Schlaepfer, and W. K. Lauenroth. 2014. Ecohydrology of\n adjacent sagebrush and lodgepole pine ecosystems: The consequences of climate\n change and disturbance. Ecosystems 17:590-605.\n* Palmquist, K.A., Schlaepfer, D.R., Bradford, J.B., and Lauenroth, W.K. 2016.\n Mid-latitude shrub steppe plant communities: climate change consequences for\n soil water resources. Ecology 97:2342-2354.\n* Schlaepfer, D. R., W. K. Lauenroth, and J. B. Bradford. 2012. Ecohydrological\n niche of sagebrush ecosystems. Ecohydrology 5:453-466.\n\n
\n\n\n## Table of contents\n\n1. [How to get started](#get_started)\n 1. [Compilation](#compile)\n 2. [Documentation](#get_documentation)\n2. [How to contribute](#contribute)\n 1. [SOILWAT2 code](#SOILWAT2_code)\n 2. [Code guidelines](#follow_guidelines)\n 3. [Code documentation](#code_documentation)\n 4. [Code tests](#code_tests)\n 5. [Code debugging](#code_debugging)\n 6. [Code versioning](#code_versioning)\n 7. [Reverse dependencies](#revdep)\n3. [Some additional notes](#more_notes)\n\n
\n\n\n\n\n## How to get started\n\nSOILWAT2 comes with a\n[detailed manual](doc/additional_pages/A_SOILWAT2_user_guide.md)\nand short overviews of\n[inputs](doc/additional_pages/SOILWAT2_Inputs.md)\nand [outputs](doc/additional_pages/SOILWAT2_Outputs.md).\nFull code documentation may be built locally; see [here](#get_documentation).\n\n\n\n### Compilation\n * Requirements:\n - the `gcc` or `clang/llvm` toolchains compliant with `C99`\n - for unit tests (using `googletest`)\n - toolchains compliant with `C++14`\n - `POSIX API`\n - GNU-compliant `make`\n - On Windows OS: an installation of `cygwin`\n\n * Clone the repository\n (details can be found in the\n [manual](doc/additional_pages/A_SOILWAT2_user_guide.md)), for instance,\n ```{.sh}\n git clone --recursive https://github.com/DrylandEcology/SOILWAT2.git SOILWAT2\n ```\n\n * Build with `make` (see `make help` to print information about all\n available targets), for instance,\n ```{.sh}\n cd SOILWAT2/\n make\n ```\n\n You can use a specific compiler, e.g.,\n ```{.sh}\n CC=gcc make\n CC=clang make\n ```\n
\n\n\n\n### Documentation\n * Use [doxygen][] to generate help pages (locally) on the command-line with\n `make doc` (which basically runs `doxygen doc/Doxyfile`)\n\n * View documentation in your browser with `make doc_open`\n\n
\n\n\n\n## How to contribute\nYou can help us in different ways:\n\n1. Reporting [issues][]\n2. Contributing code and sending a [pull request][]\n\n
\n\n\n\n### `SOILWAT2` code is used as part of three applications\n * Stand-alone,\n\n * Part/submodule of [STEPWAT2][] (code flag `STEPWAT`), and\n\n * Part/submodule of the R package [rSOILWAT2][] (code flag `RSOILWAT`)\n\nChanges in `SOILWAT2` must be reflected by updates to `STEPWAT2` or `rSOILWAT2`;\nplease see section [reverse dependencies](#revdep).\n\n
\n\n\n\n### Follow our guidelines as detailed [here][guidelines]\n\n
\n\n\n### Code development, documentation, and tests go together\n\nWe develop code on development branches and,\nafter they are reviewed and pass our checks,\nmerge them into the master branch for release.\n\n\n\n#### Code documentation\n * Document new code with [doxygen][] inline documentation\n\n * Check that new documentation renders correctly and does not\n generate `doxygen` warnings, i.e.,\n run `make doc` and check that it returns successfully\n (it checks that `doxygen doc/Doxyfile | grep warning` is empty).\n Also check that new or amended documentation displays\n as intended by opening `doc/html/index.html` and navigating to the item\n in question\n\n * Keep `doc/html/` local, i.e., don\'t push to the repository\n\n * Use regular c-style comments for additional code documentation\n\n
\n\n\n\n#### Code tests\n__Testing framework__\n\nThe goal is to cover all code development with new or amended tests.\n`SOILWAT2` comes with unit tests and integration tests.\nAdditionally, the github repository runs continuous integration checks.\n\n
\n\n__Unit tests__\n\nWe use [GoogleTest][] for unit tests\nto check that individual units of code, e.g., functions, work as expected.\n\nThese tests are organized in the folder `test/`\nin files with the naming scheme `test_*.cc`.\n\nNote: `SOILWAT2` is written in C whereas `GoogleTest` is a C++ framework; this\ncauses some complications, see `makefile`.\n\n\nRun unit tests locally on the command-line with\n```{.sh}\n make test_run # compiles and executes the tests\n make test_severe # compiles/executes with strict/severe flags\n make clean_test # cleans build artifacts\n```\n\n\n__Miscellaneous scripts for tests__\n\nUsers of `SOILWAT2` work with a variety of compilers across different platforms\nand we aim to test that this works across a reasonable selection.\nWe can do that manually or use the bash-script `tools/check_SOILWAT2.sh`\nwhich runs tests with different compiler versions.\nPlease note that this script currently works only with `macports`.\n\n\n`SOILWAT2` is a deterministic simulation model; however, running unit tests\nrepeatedly may be helpful for debugging in rare situations. For that,\nthe bash-script `tools/many_test_runs.sh` will run the tests `N` times and\nreport only unit test failures, e.g.,\n```{.sh}\n ./tools/many_test_runs.sh # will run a default (currently, 10) number of times\n N=3 ./tools/many_test_runs.sh # will run 3 replicates\n```\n
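\nThe same idea can be expressed in a few lines of Python. A minimal sketch, assuming the unit-test executable was built as `sw_test` (the file name used in the sanitizer notes below); this is an illustration, not a replacement for `tools/many_test_runs.sh`:\n```{.py}\nimport subprocess\nimport sys\n\n# number of replicates, analogous to N for many_test_runs.sh\nn = int(sys.argv[1]) if len(sys.argv) > 1 else 10\n\nfor i in range(1, n + 1):\n    result = subprocess.run([\'./sw_test\'], capture_output=True, text=True)\n    if result.returncode != 0:  # report failures only\n        print(f\'run {i}/{n} failed:\')\n        print(result.stdout)\n        print(result.stderr, file=sys.stderr)\n```\n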
\n\n__Integration tests__\n\nWe use integration tests to check that the entire simulation model works\nas expected when used in a real-world application setting.\n\nThe folder `tests/example/` contains all necessary inputs to run `SOILWAT2`\nfor one generic location\n(it is a relatively wet and cool site in the sagebrush steppe).\n\n```{.sh}\n make bin_run\n```\n\nThe simulated output is stored at `tests/example/Output/`.\n\n\nAnother use case is to compare output of a new (development) branch to output\nfrom a previous (reference) release.\n\nDepending on the purpose of the development branch,\nthe new output should either be exactly the same as the reference output or\ndiffer in specific ways in specific variables.\n\nThe following steps provide a starting point for such comparisons\n(a Python variant of this comparison is sketched below, after the\nadditional tests):\n\n```{.sh}\n # Simulate on the reference branch and copy output to ""Output_ref""\n git checkout master\n make bin_run\n cp -r tests/example/Output tests/example/Output_ref\n\n # Switch to the development branch and run the same simulation\n git checkout <development_branch>\n make bin_run\n\n # Compare the two sets of outputs\n # * List all output files and determine whether they are exactly the same\n diff tests/example/Output/ tests/example/Output_ref/ -qs\n```\n\n\n__Additional tests__\n\nAdditional output can be generated by passing appropriate flags when running\nunit tests. Scripts are available to analyze such output.\nCurrently, the following is implemented:\n\n - Sun hour angles plots for horizontal and tilted surfaces\n\n 1. Numbers of daylight hours and of sunrise(s)/sunset(s)\n for each latitude and day of year for some slope/aspect combinations\n 2. Sunrise(s)/sunset(s) hour angles for each latitude\n and some slope/aspect/day of year combinations\n\n```{.sh}\n CPPFLAGS=-DSW2_SolarPosition_Test__hourangles_by_lat_and_doy make test_run\n Rscript tools/plot__SW2_SolarPosition_Test__hourangles_by_lat_and_doy.R\n\n CPPFLAGS=-DSW2_SolarPosition_Test__hourangles_by_lats make test_run\n Rscript tools/plot__SW2_SolarPosition_Test__hourangles_by_lats.R\n```\n\n - PET plots as function of radiation, relative humidity, wind speed, and cover\n\n```{.sh}\n CPPFLAGS=-DSW2_PET_Test__petfunc_by_temps make test_run\n Rscript tools/plot__SW2_PET_Test__petfunc_by_temps.R\n```\n
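\nAs promised above, the output comparison can also be scripted. A minimal sketch in Python using only the standard library; it mirrors `diff -qs` over the two output folders created in the integration-test example:\n```{.py}\nimport filecmp\n\nref = \'tests/example/Output_ref\'\nnew = \'tests/example/Output\'\n\n# compare all files the two directories have in common, byte by byte\ncommon = filecmp.dircmp(ref, new)\nmatch, mismatch, errors = filecmp.cmpfiles(ref, new, common.common_files,\n                                           shallow=False)\n\nprint(\'identical:\', sorted(match))\nprint(\'differing:\', sorted(mismatch))\nprint(\'only in reference run:\', sorted(common.left_only))\nprint(\'only in new run:\', sorted(common.right_only))\n```\n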
\n\n\n__Continuous integration checks__\n\nDevelopment/feature branches can only be merged into the main branch and\nreleased if they pass all checks on the continuous integration servers\n(see `.github/workflows/`).\n\nPlease run the ""severe"", ""sanitizer"", and ""leak"" targets locally\n(see also `tools/check_SOILWAT2.sh`)\n```{.sh}\n make clean_build bin_debug_severe\n make clean_test test_severe\n```\n\n
\n\n__Sanitizers & leaks__\n\nRun the simulation and tests with the `leaks` program. For instance,\n```{.sh}\n make clean_build bin_leaks\n make clean_test test_leaks\n```\n\nRun the simulation and tests with sanitizers. For instance,\n```{.sh}\n make clean_build bin_sanitizer\n make clean_test test_sanitizer\n```\n\nThe address sanitizer may not work correctly and/or fail when used with the\n`Apple-clang` version that is shipped with macOS\n(see [Sanitizer issue #1026](https://github.com/google/sanitizers/issues/1026)).\nA separate installation of `clang` may be required,\ne.g., via `homebrew` or `macports`.\n\n\nIf `clang` is installed in a non-default location and\nshared dynamic libraries are not picked up correctly, then\nthe test executable may throw an error `... dyld: Library not loaded ...`.\nThis can be fixed, for instance, with the following steps\n(details depend on the specific setup; below is for `macports` and `clang-8.0`):\n\n```{.sh}\n # build test executable with clang and leak detection\n CXX=clang++ ASAN_OPTIONS=detect_leaks=1 LSAN_OPTIONS=suppressions=.LSAN_suppr.txt make clean test_severe\n\n # check faulty library path\n otool -L sw_test\n\n # figure out the correct library path and insert it, e.g.,\n install_name_tool -change /opt/local/libexec/llvm-8.0/lib/libclang_rt.asan_osx_dynamic.dylib /opt/local/libexec/llvm-8.0/lib/clang/8.0.0/lib/darwin/libclang_rt.asan_osx_dynamic.dylib sw_test\n\n # run tests\n make test_run\n```\n\n
\n\n\n\n#### Debugging\n Debugging is controlled at two levels:\n * at the preprocessor (pass `-DSWDEBUG`):\n all debug code is wrapped by this flag so that it does not end up in\n production code; unit testing is compiled in debugging mode.\n\n * in functions with local debug variable flags (`int debug = 1;`):\n debug code can be conditional on such a variable, e.g.,\n\n```{.c}\n void foo() {\n #ifdef SWDEBUG\n int debug = 1;\n #endif\n ...\n #ifdef SWDEBUG\n if (debug) swprintf(""hello, this is debugging code\\n"");\n ...\n #endif\n ...\n }\n```\n\n * Clean, compile and run optimized `SOILWAT2`-standalone in debugging mode\n with, e.g.,\n\n```{.sh}\n make bin_run CPPFLAGS=-DSWDEBUG\n```\n\n * Alternatively, use the pre-configured debugging targets\n `bin_debug` and `bin_debug_severe`, for instance, with\n\n```{.sh}\n make bin_debug_severe\n```\n\n
\n\n\n\n\n#### Version numbers\n\nWe attempt to follow guidelines of [semantic versioning][] with version\nnumbers of `MAJOR.MINOR.PATCH`;\nhowever, our version number updates are focusing\non simulated output (e.g., identical output -> increase patch number) and\non dependencies `STEPWAT2` and `rSOILWAT2`\n(e.g., no updates required -> increase patch number).\n\nWe create a new release for each update to the master branch.\nThe master branch is updated via pull requests from development branches\nafter they are reviewed and pass required checks.\n\n\n
\n\n\n\n## Reverse dependencies\n\n`STEPWAT2` and `rSOILWAT2` depend on `SOILWAT2`;\nthey utilize the master branch of `SOILWAT2` as a submodule.\nThus, changes in `SOILWAT2` need to be propagated to `STEPWAT2` and `rSOILWAT2`.\n\nThe following steps can serve as a starting point for resolving\nthese cross-repository reverse dependencies:\n\n 1. Create a development branch `branch_*` in `SOILWAT2`\n 1. Create respective development branches in `STEPWAT2` and `rSOILWAT2`\n 1. Update the `SOILWAT2` submodule of `STEPWAT2` and `rSOILWAT2` as the first\n commit on these new development branches:\n * Have `.gitmodules` point to the new `SOILWAT2` branch `branch_*`\n * Update the submodule: `git submodule update --remote`\n 1. Develop and test code and follow the guidelines of `STEPWAT2` and `rSOILWAT2`\n 1. Create pull requests for each development branch\n 1. Merge the `SOILWAT2` pull request once development is finalized, reviewed, and\n sufficiently tested across all three repositories;\n create a new `SOILWAT2` [release](#code_versioning)\n 1. Finalize the development branches in `STEPWAT2` and `rSOILWAT2`\n * Have `.gitmodules` point to the new `SOILWAT2` release on `master`\n * Update the submodule: `git submodule update --remote`\n 1. Handle the pull requests for `STEPWAT2` and `rSOILWAT2` according to\n their guidelines\n\n
\n\n\n\n## Notes\n\n__Organization renamed from Burke-Lauenroth-Lab to DrylandEcology on Dec 22, 2017__\n\nAll existing information should\n[automatically be redirected](https://help.github.com/articles/renaming-a-repository/)\nto the new name.\nContributors are encouraged, however, to update local clones to\n[point to the new URL](https://help.github.com/articles/changing-a-remote-s-url/),\ni.e.,\n\n```{.sh}\ngit remote set-url origin https://github.com/DrylandEcology/SOILWAT2.git\n```\n\n\n__Repository renamed from SOILWAT to SOILWAT2 on Feb 23, 2017__\n\nAll existing information should\n[automatically be redirected](https://help.github.com/articles/renaming-a-repository/)\nto the new name.\nContributors are encouraged, however, to update local clones to\n[point to the new URL](https://help.github.com/articles/changing-a-remote-s-url/),\ni.e.,\n\n```{.sh}\ngit remote set-url origin https://github.com/DrylandEcology/SOILWAT2.git\n```\n'",",https://zenodo.org/badge/latestdoi/9551524\n","2013/04/19, 18:11:54",3841,GPL-3.0,439,1985,"2023/10/24, 19:02:47",55,128,323,51,1,3,3.7,0.32516891891891897,"2023/10/24, 19:05:42",v7.2.0,0,12,false,,false,false,,,https://github.com/DrylandEcology,,,,,https://avatars.githubusercontent.com/u/4193619?v=4,,, RivGraph,Extracting and quantifying graphical representations of river and delta channel networks from binary masks.,jonschwenk,https://github.com/VeinsOfTheEarth/RivGraph.git,github,"python,manuscript",Freshwater and Hydrology,"2023/10/19, 15:44:26",65,0,19,true,Python,Veins of the Earth,VeinsOfTheEarth,"Python,TeX",https://veinsoftheearth.github.io/RivGraph/,"b'[![build](https://github.com/VeinsOfTheEarth/RivGraph/actions/workflows/build.yml/badge.svg)](https://github.com/VeinsOfTheEarth/RivGraph/actions/workflows/build.yml)\n[![Coverage Status](https://coveralls.io/repos/github/jonschwenk/RivGraph/badge.svg)](https://coveralls.io/github/jonschwenk/RivGraph)\n![docs](https://github.com/VeinsOfTheEarth/RivGraph/workflows/docs/badge.svg)\n[![DOI](https://joss.theoj.org/papers/10.21105/joss.02952/status.svg)](https://doi.org/10.21105/joss.02952)\n
\n\n[![RivGraph logo](https://github.com/VeinsOfTheEarth/RivGraph/blob/master/docs/logos/rg_logo_full.png)](https://VeinsOfTheEarth.github.io/RivGraph/ ""Go to documentation."")\n\nAbout\n-----\n\nRivGraph is a Python package that provides tools for converting a binary mask of a channel network into a directed, weighted graph (i.e. a set of connected links and nodes).\n\n![Core functionality of RivGraph.\\label{fig:corefunctions}](https://github.com/VeinsOfTheEarth/RivGraph/blob/master/examples/images/rivgraph_overview_white.PNG)\n\nThe figure above demonstrates the core components of RivGraph, but many other features are provided, including:\n\n- Morphologic metrics (lengths, widths, branching angles, braiding indices)\n- Algebraic representations of the channel network graph\n- Topologic metrics (both topologic and dynamic such as alternative paths, flux sharing, entropies, mutual information, etc.)\n- Tools for cleaning and preparing your binary channel network mask\n- Island detection, metrics, and filtering\n- Mesh generation for characterizing along-river characteristics\n- (beta) Tools for centerline migration analysis\n\nAll of RivGraph\'s functionality maintains and respects georeferencing information. If you start with a georeferenced mask (e.g. a GeoTIFF), RivGraph exports your results in the CRS (coordinate reference system) of your mask for convenient mapping, analysis, and fusion with other datasets in a GIS.\n\nYou can see some description of RivGraph\'s functionality via this [AGU poster](https://www.researchgate.net/publication/329845073_Automatic_Extraction_of_Channel_Network_Topology_RivGraph), and the flow directionality logic and validation is described in our [ESurf Dynamics paper](https://www.earth-surf-dynam.net/8/87/2020/esurf-8-87-2020.html). Examples demonstrating the basic RivGraph features are available for a [delta channel network](https://github.com/VeinsOfTheEarth/RivGraph/blob/master/examples/delta_example.ipynb) and a [braided river](https://github.com/VeinsOfTheEarth/RivGraph/blob/master/examples/braided_river_example.ipynb).\n\nInstalling\n-----\n\nRivGraph is hosted at conda-forge. We recommend installing into a fresh conda environment to minimize the risk of dependency clashes. The easiest way to do this is to open Terminal (Mac/Unix) or Anaconda Prompt (Windows) and type:\n\n
```\nconda create -n rivgraph_env rivgraph -c conda-forge\nconda activate rivgraph_env\n```
\n\nYou may then want to install Spyder or your preferred IDE. Conda should fetch all the required dependencies and handle versioning.\n\nIf you want to install RivGraph into an already-existing environment, you can run
```\nconda activate myenv\nconda install rivgraph -c conda-forge\n```
\n\nYou may also [install RivGraph from this Github repo](https://VeinsOfTheEarth.github.io/RivGraph/install/index.html#installation-from-source).\n\nInstructions for testing your installation are available [here](https://VeinsOfTheEarth.github.io/RivGraph/install/index.html#installation-from-source).\n\nHow to use?\n-----\n\nPlease see the [documentation](https://VeinsOfTheEarth.github.io/RivGraph/) for more detailed instructions.\n\nRivGraph requires that you provide a binary mask of your network. [This page](https://VeinsOfTheEarth.github.io/RivGraph/maskmaking/index.html) provides some help, hints, and tools for finding or creating your mask.\n\nTo see what RivGraph does and how to operate it, you can work through the [Colville Delta example](https://github.com/VeinsOfTheEarth/RivGraph/blob/master/examples/delta_example.ipynb) or the [Brahmaputra River example](https://github.com/VeinsOfTheEarth/RivGraph/blob/master/examples/braided_river_example.ipynb). Both examples include sample masks.\n\nRivGraph contains two primary classes (`delta` and `river`) that provide convenient methods for creating a processing workflow for a channel network. As the examples demonstrate, you can instantiate a delta or river class, then apply associated methods for each. After looking at the examples, take a look at [classes.py](https://github.com/VeinsOfTheEarth/RivGraph/blob/master/rivgraph/classes.py) to understand what methods are available.\n\n**Note**: there are many functions under the hood that may be useful to you. Check out the [im_utils script](https://github.com/VeinsOfTheEarth/RivGraph/blob/master/rivgraph/im_utils.py) (image utilities) in particular for functions to help whip your mask into shape!\n
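\nAs a taste of that workflow, here is a minimal sketch following the Colville delta example; the method names mirror the example notebook but should be verified against classes.py, and the mask/results paths are placeholders:\n\n```python\nfrom rivgraph.classes import delta\n\n# a georeferenced binary mask (GeoTIFF) and a results folder (placeholders)\ncolville = delta(\'Colville\', \'path/to/colville_mask.tif\',\n                 results_folder=\'path/to/results\')\n\ncolville.compute_network()         # resolve links and nodes from the mask\ncolville.assign_flow_directions()  # set a flow direction for each link\ncolville.to_geovectors()           # export georeferenced links and nodes\n```\n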
\nContributing\n------------\n\nIf you think you\'re not skilled or experienced enough to contribute, think again! We agree wholeheartedly with the sentiments expressed by this [Imposter syndrome disclaimer](https://github.com/Unidata/MetPy#contributing). We welcome all forms of user contributions including feature requests, bug reports, documentation requests, and code. Simply open an issue in the [tracker](https://github.com/VeinsOfTheEarth/RivGraph/issues). For code development contributions, please contact us via email to be added to our slack channel where we can hash out a plan for your contribution.\n\nCiting RivGraph\n------------\n\nCitations help us justify the effort that goes into building and maintaining this project. If you used RivGraph for your research, please consider citing us.\n\nIf you use RivGraph\'s flow directionality algorithms, please cite our [ESurf Dynamics paper](https://www.earth-surf-dynam.net/8/87/2020/esurf-8-87-2020.html). Additionally, if you publish work wherein RivGraph was used to process your data, please cite our [JOSS Paper](https://joss.theoj.org/papers/10.21105/joss.02952).\n\nContacting us\n-------------\n\nThe best way to get in touch is to [open an issue](https://github.com/VeinsOfTheEarth/rivgraph/issues/new) or comment on any open issue or pull request. Otherwise, send an email to j.........k@gmail.com\n\nLicense\n------------\n\nThis is free software: you can redistribute it and/or modify it under the terms of the **BSD 3-clause License**. A copy of this license is provided in [LICENSE.txt](https://github.com/VeinsOfTheEarth/RivGraph/blob/master/LICENSE.txt).\n\nRivGraph has been assigned number C19049 by the Feynman Center for Innovation.\n'",",https://doi.org/10.21105/joss.02952","2019/03/05, 19:17:19",1695,CUSTOM,30,636,"2023/10/19, 15:44:30",8,53,90,10,6,1,0.7,0.5796545105566219,"2022/08/10, 23:22:35",v0.5.0,3,7,false,,false,false,,,https://github.com/VeinsOfTheEarth,,,,,https://avatars.githubusercontent.com/u/88787335?v=4,,, WaterDetect,"End-to-end algorithm to generate open water cover mask, specially conceived for L2A Sentinel 2 imagery from MAJA1 processor, without any a priori knowledge on the scene.",cordmaur,https://github.com/cordmaur/WaterDetect.git,github,,Freshwater and Hydrology,"2023/05/24, 14:23:47",161,6,35,true,Jupyter Notebook,,,"Jupyter Notebook,Python",,"b'# WaterDetect\n\n[![DOI](https://zenodo.org/badge/224832878.svg)](https://zenodo.org/badge/latestdoi/224832878)\n\n## Synopsis\n\nWaterDetect is an end-to-end algorithm to generate open water cover masks, specially conceived for L2A Sentinel 2 imagery from the [MAJA](https://logiciels.cnes.fr/en/content/maja)1 processor, without any a priori knowledge of the scene. It can also be used for Landsat 8 images and for other multispectral clustering/segmentation tasks.
\n\nThe water masks produced by WaterDetect were primarily designed for water quality product computation (Obs2Co processing chain) and are also used for multi-temporal water maps (Surfwater processing chain). Both chains are supported by the ""SWOT-Downstream"" and TOSCA programs by CNES. Products are provided by the THEIA / Hydroweb-NG platform. \n\nThe WaterDetect algorithm uses a multidimensional agglomerative clustering technique on a subsample of the scene\'s pixels to group them into classes, and a naive Bayes classifier to generalize the results to the whole scene, as summarized in the following picture:\n\n![Screenshot](GraphicalAbstract.JPG)\n\nAll the details and tests have been described in the article Automatic Water Detection from Multidimensional Hierarchical Clustering for Sentinel-2 Images and a Comparison with Level 2A Processors, under revision by the journal Remote Sensing of Environment.\n\n## How to cite\nCordeiro, M. C. R.; Martinez, J.-M.; Peña-Luque, S. Automatic Water Detection from Multidimensional Hierarchical Clustering for Sentinel-2 Images and a Comparison with Level 2A Processors. Remote Sensing of Environment 2021, 253, 112209. https://doi.org/10.1016/j.rse.2020.112209.\n\n## Changelog\n### Release 1.5.15\n- Corrected PyPDF2 deprecation error.
\n\n### Release 1.5.13\n- Code updated to comply with Sen2Cor processing baseline 04.00, which started on January 25th, 2022.
\nA BoA offset was added to the Sen2Cor workflow to deal with negative values. Official release:\nhttps://sentinels.copernicus.eu/documents/247904/4830984/OMPC.CS.DQR.002.07-2022%20-%20i52r0%20-%20MSI%20L2A%20DQR%20August%202022.pdf/36edbb04-0c6c-fba3-5c34-0ba3be82e91c\n\n### Release 1.5.12\n- Minor updates to make it compatible with the `waterquality` package. For more information, check the waterquality package here: https://github.com/cordmaur/WaterQuality\n\n### Release 1.5.11\n- Bug fix when loading L1C and S2COR images on MacOS.\n\n### Release 1.5.9\n- Added external mask processing through the command `process_ext_masks`. It prepares an external mask (e.g. created by FMask) to be used by WaterDetect.\n\n### Release 1.5.8\n- Corrected GlintMode to work on S2_THEIA images\n- Added calculation of the MBWI index inside the DWImageClustering class\n\n### Release 1.5.7 \n- New entry point runWaterDetect.py \n- Namespace correction for different versions of the sklearn package \n- New treatment for negative reflectance values (individual pixel correction)\n- Added a regularization option to avoid extreme values on Normalized Difference indices\n- New water cluster detection method based on lowest Nir reflectance (""minnir"")\n- Updated reporting. The invalid mask is superimposed on the RGB scene representation\n- Added support for Sen2Cor internal masks\n- GLINT mode (for entire scenes only). Creates a glint heatmap based on viewing and solar angles and updates the thresholds to include waters with sun glint in the final mask\n\n\n## Tutorial\nThe following topics cover the first steps to install and run the library. For a more comprehensive tutorial with code samples and results, please refer to this tutorial: https://cordmaur.medium.com/water-detection-in-high-resolution-satellite-images-using-the-waterdetect-python-package-7c5a031e3d16.\n\n## Supported Formats\nThe algorithm has been developed taking into account atmospherically corrected images from MAJA, as described in the paper. However, other image formats are also supported.\nAt present, the following image formats are supported:\n* Sentinel 2 - L2A from MAJA: the products can be downloaded from (https://www.theia-land.fr/en/product/sentinel-2-surface-reflectance/)\n* Sentinel 2 - L2A from Sen2Cor: The L2A products processed by Sen2Cor are available at Copernicus SciHub (https://scihub.copernicus.eu/)\n* Sentinel 2 - L1C: L1C Sentinel 2 images can be downloaded from Copernicus SciHub (https://scihub.copernicus.eu/)\n* Landsat 8 - To be validated\n\n## Dependencies\nThe required libraries are:\n```\nGDAL>=3.0.2\nmatplotlib>=3.1.2\nPyPDF2>=1.26.0\nscipy>=1.3.2\nscikit-learn>=0.22\nscikit-image>=0.16.2\nnumpy>=1.17\nPIL>=8.0\nlxml>=4.5\n```\n### Note 1:\nGDAL is required to open the satellite images. It\'s still possible to use WaterDetect without GDAL, from a python console or jupyter notebook, by loading the rasters manually and passing all the necessary bands to the DWImageClustering class. Check the topic ""Usage from Console"" for more information.\n\n### Note 2:\nScikit-Image is only necessary to run the Otsu threshold method. \n\nThe script test_dependencies.py can be used to check whether all libraries are loading correctly. Simply run:\n\n python test_dependencies.py\n\n## Installation\nThe easiest way to install the waterdetect package is with the `pip` command:
\n`pip install waterdetect`\n\nAlternatively, you can clone the repository and install from its root through the following commands:\n```\ngit clone https://github.com/cordmaur/WaterDetect.git\ncd WaterDetect\npip install .\n```\n\nOnce installed, a `waterdetect` entry point is created in the path of the environment.\nOne can check the installation and options by running `waterdetect --help`. If GDAL is not found, a message will be raised indicating that waterdetect will only run from a console.\n```\nusage: waterdetect [-h] [-GC] [-i INPUT] [-o OUT] [-s SHP] [-p PRODUCT]\n [-c CONFIG]\n\nThe waterdetect is a high speed water detection algorithm for satellite\nimages. It will loop through all images available in the input folder and\nwrite results for every combination specified in the .ini file to the output\nfolder. It can also run for single images from a Python console or Jupyter\nnotebook. Refer to the online documentation.\n\noptional arguments:\n -h, --help show this help message and exit\n -GC, --GetConfig Copy the WaterDetect.ini from the package into the\n specified directory and skips the processing. Once\n copied you can edit the .ini file and launch the\n waterdetect without the -c option.\n -i INPUT, --input INPUT\n The products input folder. Required.\n -o OUT, --out OUT Output directory. Required.\n -s SHP, --shp SHP SHP file. Optional.\n -p PRODUCT, --product PRODUCT\n The product to be processed (S2_THEIA, L8_USGS, S2_L1C\n or S2_S2COR)\n -c CONFIG, --config CONFIG\n Configuration .ini file. If not specified,\n WaterDetect.ini from the current dir is used as default.\n\nTo copy the package\'s default .ini file into the current directory, type\n`waterdetect -GC .` without other arguments and it will copy WaterDetect.ini\ninto the current directory.\n```\n\n### Config File\nwaterdetect needs a config file that specifies the bands used in the clustering process as well as other parameters.\nTo obtain the default version of this file, one can use `waterdetect -GC` and the file WaterDetect.ini will be copied into the current working folder.\n\n## Usage as Script\nThe basic usage of waterdetect is:
\n`waterdetect -i c:/input_folder -o c:/output_folder -p S2_THEIA`\n\nThe input directory should contain the uncompressed folders for the images. The script will loop through all folders in the input directory and save the water masks, graphs and reports to the output folder. The output folder must be created beforehand.\n\nIf the config file is not specified, the script will search for WaterDetect.ini in the current folder.\n\n## Usage from Console\nOnce properly installed, WaterDetect can be run from a console or a Jupyter notebook by importing the package and calling DWWaterDetect.run_water_detect.\n\n```\n>>> import waterdetect as wd\n>>> !waterdetect -GC\n>>> wd.DWWaterDetect.run_water_detect(input_folder=\'D:\\Images\\Input\\\',\n output_folder=\'D:\\Images\\Output\',\n shape_file=\'D:\\Shp\\SomeShapefile.shp\',\n single_mode=False,\n product=wd.DWProducts.Sentinel2_THEIA,\n config_file=\'WaterDetect.ini\'\n )\n```\nFor more information on how to use it from a Jupyter notebook, in batch or single mode, or to use it with other satellite images or without GDAL, please refer to the tutorial available here: https://towardsdatascience.com/water-detection-in-high-resolution-satellite-images-using-the-waterdetect-python-package-7c5a031e3d16.
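\n\nNote 1 above mentions running without GDAL by passing band arrays directly to the DWImageClustering class. Below is a minimal sketch of that path; the class and keyword names follow the linked tutorial, so treat the exact signatures as assumptions to verify against the package (and the random arrays are stand-ins for real reflectance bands):\n\n```\nimport numpy as np\nimport waterdetect as wd\n\nshape = (500, 500)\nbands = {  # reflectance arrays, loaded beforehand with any raster reader\n    \'Green\': np.random.rand(*shape).astype(\'float32\'),\n    \'Nir\': np.random.rand(*shape).astype(\'float32\'),\n    \'Mir2\': np.random.rand(*shape).astype(\'float32\'),\n}\ninvalid_mask = np.zeros(shape, dtype=bool)  # pixels to exclude\n\nconfig = wd.DWConfig(config_file=\'WaterDetect.ini\')\nclusterer = wd.DWImageClustering(bands=bands, bands_keys=[\'mndwi\', \'ndwi\'],\n                                 invalid_mask=invalid_mask, config=config)\nwater_mask = clusterer.run_detect_water()  # binary water mask (numpy array)\n```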
\n> Supervisor: Jean-Michel Martinez (IRD/GET)
\n> Validation dataset: Santiago Pena Luque (CNES) \n\n### Institutions\n* ANA - Agência Nacional de Águas (https://www.gov.br/ana/en/)\n* GET - Géosciences Environnement Toulouse (https://www.get.omp.eu/)\n* IRD - Institut de Recherche pour le Développement (https://en.ird.fr/)\n* CNES - Centre National d\'Études Spatiales (https://cnes.fr/fr)\n\n## License\nThis code is licensed under the [GNU General Public License v3.0](https://github.com/cordmaur/WaterDetect/blob/master/LICENSE). Please refer to GNU\'s webpage (https://www.gnu.org/licenses/gpl-3.0.en.html) for details.\n\n## Reference\n(1) Hagolle, O.; Huc, M.; Pascual, D. V.; Dedieu, G. A Multi-Temporal Method for Cloud Detection, Applied to FORMOSAT-2, VENµS, LANDSAT and SENTINEL-2 Images. Remote Sensing of Environment 2010, 114 (8), 1747–1755. https://doi.org/10.1016/j.rse.2010.03.002.\n\n(2) Cordeiro, M. C. R.; Martinez, J.-M.; Peña-Luque, S. Automatic Water Detection from Multidimensional Hierarchical Clustering for Sentinel-2 Images and a Comparison with Level 2A Processors. Remote Sensing of Environment 2021, 253, 112209. https://doi.org/10.1016/j.rse.2020.112209.\n\n'",",https://zenodo.org/badge/latestdoi/224832878,https://doi.org/10.1016/j.rse.2020.112209.\n\n##,https://doi.org/10.1016/j.rse.2010.03.002.\n\n(2),https://doi.org/10.1016/j.rse.2020.112209.\n\n","2019/11/29, 10:34:57",1426,Apache-2.0,9,200,"2023/03/20, 12:38:31",2,8,24,11,219,0,0.0,0.24827586206896557,"2023/02/20, 22:44:08",v1.5.15,0,5,false,,false,false,"Nikhil-Reddy-02/water_quality_analysis,tayerthiaggo/irivermetrics,mliconti/River-Pollution-Satellite-Detection-and-Warning-System,ShmulTomer/River-Sediment-Satellite-Detection,GolAnd071/FinalProject,cordmaur/WaterQuality",,,,,,,,,, FLAREr,"Flexible, scalable, robust, and near-real time iterative ecological forecasts in lakes and reservoirs.",FLARE-forecast,https://github.com/FLARE-forecast/FLAREr.git,github,,Freshwater and Hydrology,"2023/10/16, 18:53:15",6,0,1,true,R,,FLARE-forecast,"R,Rez",http://flare-forecast.org/FLAREr/,"b'\n [![R-CMD-check](https://github.com/FLARE-forecast/FLAREr/actions/workflows/R-CMD-check.yaml/badge.svg)](https://github.com/FLARE-forecast/FLAREr/actions/workflows/R-CMD-check.yaml)\n[![Codecov test coverage](https://codecov.io/gh/FLARE-forecast/FLAREr/branch/master/graph/badge.svg)](https://codecov.io/gh/FLARE-forecast/FLAREr?branch=master)\n\n\n# FLAREr\n\nThis document serves as a user\'s guide and a tutorial for the FLARE (Forecasting Lake and Reservoir Ecosystems) system ([Thomas et al. 2020](https://agupubs.onlinelibrary.wiley.com/doi/abs/10.1029/2019WR026138)). FLARE generates forecasts and forecast uncertainty of water temperature and water quality for a 1 to 35-day-ahead time horizon at multiple depths of a lake or reservoir. It uses data assimilation to update the initial starting point for a forecast and the model parameters based on real-time statistical comparisons to observations. It has been developed, tested, and evaluated for Falling Creek Reservoir in Vinton, VA ([Thomas et al. 2020](https://agupubs.onlinelibrary.wiley.com/doi/abs/10.1029/2019WR026138)) and National Ecological Observatory Network lakes ([Thomas et al. 
2023](https://doi.org/10.1002/fee.2623)).\n\nFLAREr is a set of R scripts that:\n\n* Generate the inputs and configuration files required by the General Lake Model (GLM)\n* Apply data assimilation to GLM\n* Process and archive forecast output\n* Visualize forecast output\n\nFLARE uses the 1-D General Lake Model ([Hipsey et al. 2019](https://www.geosci-model-dev.net/12/473/2019/)) as the mechanistic process model that predicts hydrodynamics of the lake or reservoir. For forecasts of water quality, it uses GLM with the Aquatic Ecosystem Dynamics library. The binaries for GLM and GLM-AED are included in the FLARE code that is available on GitHub. FLARE requires GLM version 3.3 or higher.\n\nMore information about the GLM can be found here:\n\n* [GLM 3.0.0 manuscript](https://www.geosci-model-dev.net/12/473/2019/) \n* [GLM on GitHub](https://github.com/AquaticEcoDynamics/glm-aed)\n* [GLM users guide](https://aquaticecodynamics.github.io/glm-workbook/) \n\nFLARE development has been supported by grants from the U.S. National Science Foundation (CNS-1737424, DBI-1933016, DBI-1933102).\n\n## Installation\n\nYou will need to download the necessary packages prior to running.\n\n```\nremotes::install_github(""FLARE-forecast/FLAREr"")\n```\n## Use\n\nFLAREr is a set of functions that address key steps in the forecasting workflow. \n\n### Requires\n\nUser-generated *in situ* observations, meteorology, and inflow/outflow in a specified format. See the FLARE example vignette for the format specification.\n\n### Quick Run\n\nThe code below will produce a single forecast for Falling Creek Reservoir using configuration files included with the package.\n\n```\nlibrary(arrow)\nlibrary(tidyverse)\nlibrary(FLAREr)\n\ntmp <- tempdir()\nfile.copy(system.file(""extdata"", package = ""FLAREr""), tmp, recursive = TRUE)\nlake_directory <- file.path(tmp, ""example"")\nrun_flare(lake_directory = lake_directory, configure_run_file = ""configure_run.yml"", config_set_name = ""default"")\n\nopen_dataset(file.path(lake_directory, ""forecasts/parquet"")) |> \n filter(variable == ""temperature"",\n depth == 1) |> \n collect() |> \n ggplot(aes(x = datetime, y = prediction, group = parameter)) +\n geom_line() +\n geom_vline(aes(xintercept = as_datetime(reference_datetime))) +\n labs(title = ""1 m water temperature forecast"")\n```\n\n'",",https://doi.org/10.1002/fee.2623","2020/09/02, 19:32:21",1148,MIT,166,676,"2023/10/16, 18:53:16",5,60,65,20,9,0,0.1,0.36590038314176243,"2022/02/16, 01:10:45",v2.2.1,0,5,false,,false,false,,,https://github.com/FLARE-forecast,,,,,https://avatars.githubusercontent.com/u/62960078?v=4,,, Buhayra,Obtaining water extent of small reservoirs in semi-arid regions from satellite data in real-time.,jmigueldelgado,https://github.com/jmigueldelgado/buhayra.git,github,"dams,sentinel,iwrm,reservoirs,filters",Freshwater and Hydrology,"2021/08/13, 21:21:11",10,0,0,false,Python,,,"Python,Shell",,"b""# Buhayra\n\nBuhayra (from al-buhayra) is a prototype application aiming at obtaining **water extent of small reservoirs** in semi-arid regions from satellite data in **real-time**. It collects, filters and processes weekly reservoir extents from Sentinel-1 for northeast Brazil and stores this geo-referenced information in a structured data model. 
This work has been funded by the German Research Foundation [DFG](http://gepris.dfg.de/gepris/projekt/266418622) under project number 266418622 and runs on the compute server of the Institute of Environmental Sciences and Geography of the University of Potsdam.\n\nPreliminary results can be found [on this buhayra-app](http://seca-vista.geo.uni-potsdam.de:3838/buhayra-app/). Click on the lakes to obtain plots and current state.\n\n## Before you start...\n\nRead about configurations and setup on the [wiki](https://github.com/jmigueldelgado/buhayra/wiki) and create and configure your location file accordingly (in `buhayra/parameters/location.yml`).\n\nThe scripts are suited to work on a PBS cluster or at least a dedicated machine with large RAM. There is a crontab that schedules the jobs to run once a week or more often. Although there are [conda environment files](https://conda.io/docs/user-guide/tasks/manage-environments.html#sharing-an-environment) to go with this repo, some libraries are quite machine specific and the currently used environments evolve a lot due to the experimental nature of this repo. Please contact me in case you want to use any of this.\n\n## What it does\n\nIn short, the following steps are done sequentially:\n\n- query the [Copernicus Open Access Hub](https://scihub.copernicus.eu/) for Sentinel-1 scenes ingested in the past 7 days. Download scenes.\n\n- calibrate, speckle-filter, correct geometry with [snappy](http://step.esa.int/main/toolboxes/snap/) (for SAR data)\n\n- subset based on a [global surface water database](https://global-surface-water.appspot.com/faq) from JRC\n\n- apply [minimum error thresholding](https://www.sciencedirect.com/science/article/abs/pii/0031320386900300)\n\n- polygonize and insert into PostGIS (with the amazing [GDAL](https://gdal.org/))\n\n## Visualization\n\nVisualization is provided by a demo [dashboard under development](http://seca-vista.geo.uni-potsdam.de:3838/buhayra-app/).\n\n![example output](https://raw.githubusercontent.com/jmigueldelgado/buhayra/master/documents/screenshot.png)\n\nAn evaluation of the results is given by [valbuhayra](https://github.com/jmigueldelgado/valbuhayra)\n\n## In progress:\n\n- combine the water extent collection with bathymetric survey from TanDEM-X\n\n## Some of our references\n\nWe were at ESA's [_mapping water bodies from space 2nd conference_](http://mwbs2018.esa.int/) in Frascati (Rome), and at the [_World Water Forum_](http://www.worldwaterforum8.org/) in Brasília 2018.\n\n[Shuping's talk](documents/presentation167.pdf) and [Martin's poster](documents/poster_08.pdf) in Frascati. 
[My talk](documents/wwf2018.pdf) in Bras\xc3\xadlia.\n""",,"2018/04/09, 14:47:23",2025,MIT,0,974,"2021/02/03, 20:56:09",6,0,43,0,994,0,0,0.011627906976744207,,,0,2,false,,false,false,,,,,,,,,,, Wflow,"A Julia package that provides a hydrological modeling framework, as well as several different vertical and lateral concepts that can be used to run hydrological simulations.",Deltares,https://github.com/Deltares/Wflow.jl.git,github,"wflow,julia,hydrology,hydrological-modelling,hydrodynamics",Freshwater and Hydrology,"2023/09/27, 09:45:31",90,0,35,true,Julia,Stichting Deltares,Deltares,"Julia,Rich Text Format,XSLT,Dockerfile",https://deltares.github.io/Wflow.jl/dev/,"b""# Wflow\n\n[![Build Status](https://github.com/Deltares/Wflow.jl/workflows/CI/badge.svg)](https://github.com/Deltares/Wflow.jl/actions)\n[![Coverage](https://codecov.io/gh/Deltares/Wflow.jl/branch/master/graph/badge.svg)](https://codecov.io/gh/Deltares/Wflow.jl)\n[![Dev](https://img.shields.io/badge/docs-dev-blue.svg)](https://deltares.github.io/Wflow.jl/dev)\n[![Stable](https://img.shields.io/badge/docs-stable-blue.svg)](https://deltares.github.io/Wflow.jl/stable)\n[![DOI](https://zenodo.org/badge/246787232.svg)](https://zenodo.org/badge/latestdoi/246787232)\n[![ColPrac: Contributor's Guide on Collaborative Practices for Community Packages](https://img.shields.io/badge/ColPrac-Contributor's%20Guide-blueviolet)](https://github.com/SciML/ColPrac)\n\nWflow is a [Julia](https://julialang.org/) package that provides a hydrological modeling\nframework, as well as several different vertical and lateral concepts that can be used to\nrun hydrological simulations. It is a continuation of the work available\n[here](https://github.com/openstreams/wflow).\n\n## Documentation\nSee [stable](https://deltares.github.io/Wflow.jl/stable) or\n[dev](https://deltares.github.io/Wflow.jl/dev) for the documentation.\n\n## Installation\nFor the installation enter the Pkg REPL by pressing `]` from the Julia REPL, and then\n```julia-repl\npkg> add Wflow\n```\nA more detailed description, including a development install, is available\n[here](https://deltares.github.io/Wflow.jl/dev/quick-start).\n\n## Contributions and reporting issues\nWe welcome reporting of issues [here](https://github.com/Deltares/Wflow.jl/issues). Please\nprovide a minimum working example so we are able to reproduce the issue. Furthermore, we\nwelcome contributions. We follow the [ColPrac guide for collaborative\npractices](https://github.com/SciML/ColPrac). New contributors should make sure to read that\nguide.\n\n## Citing\nFor citing our work see the Zenodo badge above, that points to the latest release.\n""",",https://zenodo.org/badge/latestdoi/246787232","2020/03/12, 08:56:58",1322,MIT,152,1039,"2023/09/08, 11:39:45",43,180,254,54,47,4,1.4,0.5184794086589228,"2023/09/27, 10:33:54",v0.7.2,0,13,false,,false,false,,,https://github.com/Deltares,https://www.deltares.nl/en/,"Delft, The Netherlands",,,https://avatars.githubusercontent.com/u/6613768?v=4,,, ParFlow,"An open-source, modular, parallel watershed flow model.",parflow,https://github.com/parflow/parflow.git,github,,Freshwater and Hydrology,"2023/09/12, 16:37:12",142,13,24,true,C,Parflow,parflow,"C,Tcl,Python,Fortran,C++,CMake,M4,Shell,Cuda,Slim,Perl,Makefile,PostScript,Dockerfile,Roff,R,Pascal,MATLAB,Batchfile",http://parflow.org,"b'# ParFlow\n\n![ParFlow CI Test](https://github.com/parflow/parflow/workflows/ParFlow%20CI%20Test/badge.svg)\n\nParFlow is an open-source, modular, parallel watershed flow model. 
It\nincludes fully-integrated overland flow, the ability to simulate\ncomplex topography, geology and heterogeneity and coupled land-surface\nprocesses including the land-energy budget, biogeochemistry and snow\n(via CLM). It is multi-platform and runs with a common I/O structure\nfrom laptop to supercomputer. ParFlow is the result of a long,\nmulti-institutional development history and is now a collaborative\neffort between CSM, LLNL, UniBonn and UCB. ParFlow has been coupled to\nthe mesoscale, meteorological code ARPS and the NCAR code WRF.\n\nFor an overview of the major features and capabilities see the\nfollowing paper: [Simulating coupled surface–subsurface flows with\nParFlow v3.5.0: capabilities, applications, and ongoing development of\nan open-source, massively parallel, integrated hydrologic\nmodel](https://www.geosci-model-dev.net/13/1373/2020/gmd-13-1373-2020.pdf).\n\nAn online version of the users manual is available on [Read the\nDocs: Parflow Users\nManual](https://parflow.readthedocs.io/en/latest/index.html). The\nmanual contains additional documentation on how to use ParFlow and\nset up input files. A quick start is included below. A PDF version is\navailable at [Parflow Users\nManual PDF](https://parflow.readthedocs.io/_/downloads/en/latest/pdf/).\n\n### Citing Parflow\n\nIf you want the DOI for a specific release see:\n[Zenodo](https://zenodo.org/search?page=1&size=20&q=parflow&version)\n\nA generic DOI that always links to the most current release:\n[![DOI](https://zenodo.org/badge/DOI/10.5281/zenodo.4816884.svg)](https://doi.org/10.5281/zenodo.4816884)\n\nIf you use ParFlow in a publication and wish to cite a paper reference,\nplease use the following papers that describe the model physics:\n\n* Ashby S.F. and R.D. Falgout, Nuclear Science and Engineering 124:145-159, 1996\n* Jones, J.E. and C.S. Woodward, Advances in Water Resources 24:763-774, 2001\n* Kollet, S.J. and R.M. Maxwell, Advances in Water Resources 29:945-958, 2006\n* Maxwell, R.M. Advances in Water Resources 53:109-117, 2013\n\nIf you use ParFlow coupled to CLM in a publication, please also cite\ntwo additional papers that describe the coupled model physics:\n\n* Maxwell, R.M. and N.L. Miller, Journal of Hydrometeorology 6(3):233-247, 2005\n* Kollet, S.J. and R.M. Maxwell, Water Resources Research 44:W02402, 2008\n\n### Additional Parflow resources\n\nThe ParFlow website has additional information on the project:\n- [Parflow Web Site](https://parflow.org/)\n\nYou can join the Parflow Google Group/mailing list to communicate with\nthe Parflow developers and users. In order to post you will have to\njoin the group; old posts are visible without joining:\n- [Parflow-Users](https://groups.google.com/g/parflow)\n\nThe most recent build/installation guides are now located on the Parflow Wiki:\n- [Parflow Installation guides](https://github.com/parflow/parflow/wiki/ParFlow-Installation-Guides)\n\nA Parflow blog is available with notes from users on how to use Parflow:\n- [Parflow Blog](http://parflow.blogspot.com/)\n\nTo report Parflow bugs, please use the GitHub issue tracker for Parflow:\n- [Parflow Issue Tracker](https://github.com/parflow/parflow/issues)\n\n## Quick Start on Unix/Linux\n\nImportant note for users who have built with Autoconf: the CMake\nconfigure process is one step by default. 
Most builds of ParFlow\nare on MPP architectures or workstations where the login node and\ncompute nodes are the same architecture, so the default build process builds\nboth the ParFlow executable and tools with the same compilers and\nlibraries in one step. This will hopefully make building easier for\nthe majority of users. It is still possible to build the two\ncomponents separately; see the instructions below for building pftools and\npfsimulator separately.\n\nCMake supports builds for several operating systems and IDE tools\n(like Visual Studio on Windows and XCode on MacOS). The ParFlow team\nhas not tested building on platforms other than Linux; there will\nlikely be some issues on other platforms. The ParFlow team welcomes\nbug reports and patches if you attempt other builds.\n\n### Step 1: Setup\n\nDecide where to install ParFlow and associated libraries.\n\nSet the environment variable `PARFLOW_DIR` to the chosen location:\n\nFor bash:\n\n```shell\n export PARFLOW_DIR=/home/snoopy/parflow\n``` \n\nFor csh and tcsh:\n\n```shell\n setenv PARFLOW_DIR /home/snoopy/parflow\n```\n\n### Step 2: Extract the Source\n\nExtract the source files from the compressed tar file.\n\nObtain the release from the ParFlow GitHub web site:\n\nhttps://github.com/parflow/parflow/releases\n\nand extract the release. Here we assume you are building in a new\nsubdirectory in your home directory:\n\n```shell\n mkdir ~/parflow \n cd ~/parflow \n tar -xvf ../parflow.tar.gz\n```\n\nNote the ParFlow tar file will have a different name based on the\nversion number.\n\nIf you are not using GNU tar or have a very old version of GNU tar you\nwill need to uncompress the file first:\n\n```shell\n mkdir ~/parflow \n cd ~/parflow \n gunzip ../parflow.tar.gz\n tar -xvf ../parflow.tar\n```\n\n### Step 3: Running CMake to configure ParFlow\n\nCMake is a utility that sets up makefiles for building ParFlow. CMake\nallows setting the compiler to use and other options. First create a\ndirectory for the build. It is generally recommended to build outside\nof the source directory to keep things clean. For example,\nrestarting a failed build with a separate build directory simply\ninvolves removing the build directory.\n\n#### Building with the ccmake GUI\n\nYou can control build options for ParFlow using the ccmake GUI.\n\n```shell\n mkdir build\n cd build\n ccmake ../parflow \n```\nAt a minimum, you will want to set the CMAKE_INSTALL_PREFIX value to the same thing\nas PARFLOW_DIR was set to above. Other variables should be set as desired.\n\nAfter setting a variable, \'c\' will configure ParFlow. When you are\ncompletely done setting configuration options, use \'g\' to generate the\nconfiguration and exit ccmake.\n\nIf you are new to CMake, the creators of CMake provide some additional ccmake usage notes here:\n\nhttps://cmake.org/runningcmake/\n\n#### Building with the cmake command line\n\nCMake may also be configured from the command line using the cmake\ncommand. Instructions to build with different accelerator backends are found in the following documents: [CUDA, KOKKOS](README-GPU.md), [OpenMP](README-OPENMP.md). The default will configure a sequential version of ParFlow\nusing MPI libraries. 
In the following example, CLM is also enabled.\n\n```shell\n mkdir build\n cd build\n cmake ../parflow \\\n \t -DCMAKE_INSTALL_PREFIX=${PARFLOW_DIR} \\\n \t -DPARFLOW_HAVE_CLM=ON\n```\n\nIf TCL is not installed in the standard locations (/usr or /usr/local)\nyou need to specify the path to the tclsh location:\n\n```shell\n\t-DTCL_TCLSH=${PARFLOW_TCL_DIR}/bin/tclsh8.6\n```\n\nBuilding a parallel version of ParFlow requires that the communications\nlayer to use be set. The most common option will be MPI. Here\nis a minimal example of an MPI build with CLM:\n\n```shell\n mkdir build\n cd build\n cmake ../parflow \\\n \t -DCMAKE_INSTALL_PREFIX=${PARFLOW_DIR} \\\n \t -DPARFLOW_HAVE_CLM=ON \\\n\t -DPARFLOW_AMPS_LAYER=mpi1\n```\n\nHere is a more complex example where the locations of various external\npackages are specified and some features are enabled:\n\n```shell\n mkdir build\n cd build\n cmake ../parflow \\\n -DPARFLOW_AMPS_LAYER=mpi1 \\\n\t-DHYPRE_ROOT=${PARFLOW_HYPRE_DIR} \\\n\t-DHDF5_ROOT=${PARFLOW_HDF5_DIR} \\\n\t-DSILO_ROOT=${PARFLOW_SILO_DIR} \\\n\t-DCMAKE_BUILD_TYPE=Debug \\\n\t-DPARFLOW_ENABLE_TIMING=TRUE \\\n\t-DPARFLOW_HAVE_CLM=ON \\\n\t-DCMAKE_INSTALL_PREFIX=${INSTALL_DIR}\n```\n\n### Step 4: Building and installing\n\nOnce CMake has configured and created a set of Makefiles, building is\neasy:\n\n```shell\n cd build\n make \n make install\n```\n\n### Step 5: Running a sample problem\n\nIf all went well, a sample ParFlow problem can be run using:\n\n```shell\n cd parflow/test\n tclsh default_single.tcl 1 1 1\n```\n\nNote that the environment variable `PARFLOW_DIR` must be set for this\nto work and it assumes tclsh is in your path. Make sure to use the\nsame TCL shell as was used in the cmake configure.\n\nSome parallel machines do not allow launching a parallel executable\nfrom the login node; you may need to run this command in a batch file\nor by starting a parallel interactive session.\n\n## Building documentation\n\n### User Manual\n\nAn online version of the user manual is also available on [Read the\nDocs: Parflow Users\nManual](https://parflow.readthedocs.io/en/latest/index.html); a PDF\nversion is available at [Parflow Users\nManual PDF](https://parflow.readthedocs.io/_/downloads/en/latest/pdf/).\n\n#### Generating the user manual in HTML\n\nAn HTML version of the user manual for Parflow may be built using:\n\n```shell\ncd docs/user_manual\npip install -r requirements.txt\n\nmake html\n```\n\nThe main HTML page is created at _build/html/index.html. Open this using \na browser. On MacOS:\n\n```shell\nopen _build/html/index.html\n```\n\nor on Linux:\n\n```shell\nfirefox _build/html/index.html\n```\n\n#### Generating the user manual in PDF\n\nA PDF version of the user manual for Parflow may be built using:\n\n```shell\ncd docs/user_manual\npip install -r requirements.txt\n\nmake latexpdf\n```\n\nThis command is currently failing for a number of users, possibly due\nto old LaTeX installs. We are currently investigating.\n\n### Code documentation\n\nParflow is moving to using Doxygen for code documentation. The documentation is currently very sparse.\n\nAdding the -DPARFLOW_ENABLE_DOXYGEN=TRUE option to the CMake configure\nwill enable building of the code documentation. 
After CMake has been\nrun, the Doxygen code documentation is built with:\n\n```shell\n cd build\n make doxygen\n```\n\nHTML pages are generated in the build/docs/doxygen/html directory.\n\n### ParFlow keys documentation\n\n```shell\n cmake \\\n -S ./parflow \\\n -B ./build-docker \\\n -D BUILD_TESTING=OFF \\\n -D PARFLOW_ENABLE_TOOLS=OFF \\\n -D PARFLOW_ENABLE_SIMULATOR=OFF \\\n -D PARFLOW_ENABLE_KEYS_DOC=ON \\\n -D PARFLOW_ENABLE_PYTHON=ON \\\n -D PARFLOW_PYTHON_VIRTUAL_ENV=ON\n\n cd ./build-docker && make ParFlowKeyDoc\n```\n\nOn MacOS the key documentation may be viewed with `open` or use a browser to open the index.html file:\n\n```\n open ./build-docker/docs/user_manual/build-site/index.html\n```\n\n## Configure options\n\nA number of packages are optional for building ParFlow. The optional\npackages are enabled by setting the PARFLOW_ENABLE_<package> value to `TRUE` or\nby setting the <package>_ROOT=<directory> value. If a package is enabled\nusing an ENABLE flag, CMake will attempt to find the package\nin standard locations. Explicitly setting the location using the ROOT\nvariable for a package automatically enables it; you don\'t need to\nspecify both values.\n\nHere are some common packages:\n\n- __SIMULATOR__: The simulator is the core of ParFlow as it represents the simulation code.\n- __DOCKER__: This provides helpers for building Docker images with ParFlow enabled in them.\n- __DOXYGEN__: Doxygen and building of code documentation (C/Fortran).\n- __ETRACE__: builds ParFlow with etrace\n- __HDF5__: builds ParFlow with HDF5 which is required for the _NETCDF_ file format.\n- __HYPRE__: builds ParFlow with Hypre\n- __KEYS_DOC__: builds documentation (rst files) from key definitions.\n- __LATEX__: enables LaTeX and building of documentation (Manual PDF)\n- __NETCDF__: builds ParFlow with NetCDF. (If ON, HDF5 is required)\n- __PROFILING__: This enables extra code execution for code profiling.\n- __TIMING__: enables timing of key Parflow functions; may slow down performance\n- __TOOLS__: enables building of the Parflow tools (TCL version)\n- __VALGRIND__: builds ParFlow with Valgrind support\n- __PYTHON__: This enables building the Python version of __pftools__.\n- __SILO__: builds ParFlow with Silo.\n- __SLURM__: builds ParFlow with SLURM support (SLURM is a queuing system on HPC).\n- __SUNDIALS__: builds ParFlow with SUNDIALS\n- __SZLIB__: builds ParFlow with SZlib compression library\n- __ZLIB__: builds ParFlow with Zlib compression library\n\n### How to specify the launcher command used to run MPI applications\n\nThere are multiple ways to run MPI applications such as mpiexec,\nmpirun, srun, and aprun. The command used is dependent on the job\nsubmission system used. By default CMake will attempt to determine an\nappropriate tool; a process that does not always yield the correct result.\n\nThere are several ways to modify the CMake guess on how applications\nshould be run. At configure time you may override the MPI launcher\nusing:\n\n```shell \n -DMPIEXEC=""<executable>""\n -DMPIEXEC_NUMPROC_FLAG=""<flag>""\n```\n\nAn example for mpiexec is -DMPIEXEC=""mpiexec"" -DMPIEXEC_NUMPROC_FLAG=""-n"".\n\nThe ParFlow script to run MPI applications will also include options\nspecified in the environment variable PARFLOW_MPIEXEC_EXTRA_FLAGS on\nthe MPI execution command line. 
For example, when running with OpenMPI\non a single workstation, the following will enable running more MPI\ntasks than cores and disable the busy loop waiting to improve\nperformance:\n\n```shell\n export PARFLOW_MPIEXEC_EXTRA_FLAGS=""--mca mpi_yield_when_idle 1 --oversubscribe""\n```\n\nLastly, the TCL script can explicitly set the command to invoke for\nrunning ParFlow. This is done by setting the Process.Command key in\nthe input database. For example, to use the mpiexec command and\ncontrol the CPU set used, the following command string can be used:\n\n```shell\n pfset Process.Command ""mpiexec -cpu-set 1 -n %d parflow %s""\n```\n\nThe \'%d\' will be replaced with the number of processes (computed using\nthe Process.Topology values: P * Q * R) and the \'%s\' will be replaced\nby the name supplied to the pfrun command for the input database name.\nThe following shows how the default_single.tcl script could be\nmodified to use the custom command string:\n\n```shell\n pfset Process.Command ""mpiexec -cpu-set 1 -n %d parflow %s""\n pfrun default_single\n pfundist default_single\n```\n\nA Python sketch of the same idea is given below.\n
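The same keys can also be set from the Python version of __pftools__ (built with PARFLOW_ENABLE_PYTHON, or installed separately with pip). The snippet below is an illustrative sketch rather than an official example; it only defines the keys and shows how the command template expands, whereas a complete input deck would set many more keys before calling `run.run()`.

```python
from parflow import Run

# Define a run named after the input database (sketch only; a real deck
# would set the full set of ParFlow keys before running)
run = Run(\'default_single\', __file__)
run.Process.Topology.P = 2
run.Process.Topology.Q = 2
run.Process.Topology.R = 1
run.Process.Command = \'mpiexec -cpu-set 1 -n %d parflow %s\'

# ParFlow substitutes %d with P * Q * R and %s with the database name:
n_procs = run.Process.Topology.P * run.Process.Topology.Q * run.Process.Topology.R
print(run.Process.Command % (n_procs, \'default_single\'))
# -> mpiexec -cpu-set 1 -n 4 parflow default_single
```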
## Building simulator and tools support separately\n\nThis section is for advanced users running on heterogeneous HPC architectures.\n\nParFlow is composed of two main components that may be configured and\nbuilt separately. Some HPC platforms are heterogeneous with the login\nnode being different than the compute nodes. The ParFlow system has\nan executable for the simulator which needs to run on the compute\nnodes and a set of TCL libraries used for problem setup that can be\nrun on the login node.\n\nThe CMake variables PARFLOW_ENABLE_SIMULATOR and PARFLOW_ENABLE_TOOLS\ncontrol which component is configured. By default both are `TRUE`. To\nbuild separately use two build directories and run cmake in each to\nbuild the simulator and tools components separately. By specifying\ndifferent compilers and options for each, one can target different\narchitectures for each component.\n\n# Using Docker\n\nParFlow includes a Docker file for configuring a Docker image for\nrunning ParFlow.\n\n## Pre-built Docker Image\n\nA Docker image for ParFlow is available on Docker hub. See the\nfollowing section for how to run the Docker image. The latest Docker\nimage is automatically downloaded by Docker when run.\n\n## Running ParFlow with Docker\n\nThe https://github.com/parflow/docker repository contains an example\nsetup for running ParFlow in a Docker instance. See the README.md\nfile in this repository for more information.\n\n## Building the Docker image\n\nIf you want to build a Docker image, the build script in the bin\ndirectory will build an image using the latest ParFlow source in the\nmaster branch. If you want to build a different version of ParFlow\nyou will need to modify the \'Dockerfile\'.\n\n### Unix/Linux/MacOS\n\n```shell\n./bin/docker-build.sh\n```\n\n### Windows\n\n```PowerShell\n.\\bin\\docker-build.bat\n```\n\n## Building the Docker image with CMake (experimental, not supported)\n\nRather than building ParFlow on your computer, you can use the build\nsystem to create a container and build ParFlow in it.\n\n```shell\ncmake \\\n -S ./parflow \\\n -B ./build-docker \\\n -D BUILD_TESTING=OFF \\\n -D PARFLOW_ENABLE_TOOLS=OFF \\\n -D PARFLOW_ENABLE_SIMULATOR=OFF \\\n -D PARFLOW_ENABLE_DOCKER=ON\n\ncd ./build-docker && make DockerBuildRuntime\n```\n\nFor more information look into our [Docker Readme](./docker/README.md).\n\n\n## Release\n\nCopyright (c) 1995-2021, Lawrence Livermore National Security LLC. \n\nProduced at the Lawrence Livermore National Laboratory. \n\nWritten by the Parflow Team (see the CONTRIBUTORS file)\n\nCODE-OCEC-08-103. All rights reserved.\n\nParflow is released under the GNU Lesser General Public License version 2.1\n\nFor details and restrictions, please read the LICENSE.txt file.\n- [LICENSE](./LICENSE.txt)\n'",",https://zenodo.org/search?page=1&size=20&q=parflow&version,https://doi.org/10.5281/zenodo.4816884","2016/05/25, 03:35:57",2710,CUSTOM,38,818,"2023/09/12, 16:38:25",82,296,411,63,43,7,0.3,0.19354838709677424,"2023/01/06, 18:29:16",v3.12.0,0,33,false,,false,true,"hydroframe/hf_point_data,hydroframe/hf_hydrodata,hydroframe/hydroframe,hydroframe/subsettools,Kitware/conceptual-modeler,olearypatrick/conceptual-modeler,hydroframe/PF_Model_Evaluation,wh3248/bill-play,hydroframe/parflow_python_shortcourse,HydroFrame-ML/sandtank-ml,parflow-resources/parflow-python-container,DrewLazzeriKitware/simulation-modeler,hydroframe/Subsetting",,https://github.com/parflow,https://www.parflow.org,,,,https://avatars.githubusercontent.com/u/19506192?v=4,,, River Runner,Visualize the path of a rain droplet from any point in the contiguous United States to its end point.,sdl60660,https://github.com/sdl60660/river-runner.git,github,"hydrology,mapping,usgs,usgs-data,usgs-api,visualization,data-visualization,mapbox,mapping-tools,svelte,river-runner,yellowstone-national-park,nhdplus,ocean,geology,3d,topography,nhdplus-data,mountain-features,sci-viz",Freshwater and Hydrology,"2023/10/22, 00:08:05",368,0,31,true,Svelte,,,"Svelte,JavaScript,GLSL,Python,HTML,CSS,Procfile,SCSS",https://river-runner-global.samlearner.com/,"b'# River Runner\n\nThis project visualizes the path of a rain droplet from any point in the world to its end point (usually an ocean or an inland water feature). It will find the closest river/stream flowline coordinate to a click/search and then animate along that flowline\'s downstream path. The data used in this project comes from the [River Runner API](https://ksonda.github.io/global-river-runner/), which is based on several open source projects and datasets. 
Similar data, initially used for the project, came from the USGS\'s [NHDPlus data](https://www.usgs.gov/core-science-systems/ngp/national-hydrography/nhdplus-high-resolution) and their [NLDI API](https://waterdata.usgs.gov/blog/nldi-intro/) \n\nI\'ve used mapbox to animate the downstream path, but needed to make all sorts of adjustments for elevation and bearing changes to prevent jerkiness/nausea (just moving from point to point feels a little like flying through turbulence while shaking your head side-to-side).\n\nI\'ve hosted a dataset with NHDPlus [Value Added Attributes](https://www.usgs.gov/core-science-systems/ngp/national-hydrography/value-added-attributes-vaas) on Firebase, which allows me to group flowlines into their parent features and determine distances quickly.\n\n**Note**: The newly-released, global version of this project is in beta. We currently have relatively poor coverage of river names outside of the United States, which we are hoping to fill out, as well as some UX edge-cases and bugs that we hope to resolve.\n\n## Examples\n\nHere are a couple of examples of what it looks like in action.\n\nThis is a section of the path from eastern Turkey to the Persian Gulf:\n\n![Screenshot of the river runner in progress from eastern Turkey to the Persian Gulf. Mountain features and river are visible.](https://github.com/sdl60660/river-runner/blob/main/public/images/preview_image.png?raw=true)\n\nHere\'s part of the path from Southwest Arizona down to the Mexican border:\n\n![Screenshot of the river runner in progress from Southwest Arizona to Mexican border. Mountain features, desert, and river are visible.](https://github.com/sdl60660/river-runner/blob/main/public/images/example-2-az.png?raw=true)\n\nYou can look at a heatmap of previous searches [here](https://river-runner-query-heatmap.vercel.app/) or find a list of some of our favorite paths [here](https://docs.google.com/document/d/1EqRNDvvCwJdfNvejHzw-0zCd6Ax-0i7nyHkU4h0M9Kg/edit?usp=sharing)\n\n## Running this on your own\n\nIf you\'d like to run this locally and play around with it, just run the following commands in your terminal (assuming you have [npm](https://www.npmjs.com/get-npm) installed):\n\n1. `git clone https://github.com/sdl60660/river-runner.git`\n2. `cd river-runner`\n3. `npm install`\n4. `npm run dev` (then follow the link to the local server, probably `http://localhost:5000`).\n5. If you\'re running this on your own or forking into a new app, please replace the Mapbox Access Token strings in `src/access_tokens.js` with your own. You can generate a couple of tokens (for free), by creating a Mapbox account and visiting [this page](https://account.mapbox.com/access-tokens/). You\'ll need to generate two separate tokens to replace the ones in the existing file, but it does not matter which serves as the primary token and which serves as the secondary token.\n\n## Supporters\n\nThank you to [Mapbox](https://www.mapbox.com/) for sponsoring this project!\n\n\n\n## Updates\n* **January 2022**: The [global version](https://river-runner-global.samlearner.com/) of this tool is now released and in beta! While some lingering issues are resolved and it remains in beta, it can be found on this branch, while the original, US-only version is preserved [here](https://github.com/sdl60660/river-runner/tree/us-only) in Github, and at its original URL: https://river-runner.samlearner.com/. 
This is to avoid any breaking changes to existing share links/paths due to any discrepancies and because minor US issues persist on the global version, mainly when paths involve dams, canals, or conduits.\n\nIf you\'d like to be notified about major updates to the tool, you can sign up for an email list [here](https://tinyletter.com/samlearner).\n'",,"2021/05/08, 21:41:33",900,GPL-3.0,53,634,"2022/06/29, 17:13:18",9,12,23,0,483,0,0.0,0.0,"2022/07/03, 16:24:54",v2.3.1-beta,0,1,true,"github,patreon,open_collective,ko_fi,tidelift,community_bridge,liberapay,issuehunt,otechie,custom",false,false,,,,,,,,,,, visGWDB,A framework for groundwater-level informatics.,map/gw,,custom,,Freshwater and Hydrology,,,,,,,,,,https://code.usgs.gov/map/gw/visGWDBmrva,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,, LakePy,Pythonic user-centered front-end to the Global Lake Level Database.,ESIPFed,https://github.com/ESIPFed/LakePy.git,github,,Freshwater and Hydrology,"2022/02/17, 23:52:53",32,0,1,false,Jupyter Notebook,ESIP,ESIPFed,"Jupyter Notebook,Python,TeX,Shell",,"b'## LakePy\n[![DOI](https://zenodo.org/badge/322186575.svg)](https://zenodo.org/badge/latestdoi/322186575)\n\n
\n\nLakePy is the pythonic user-centered front-end to the [Global Lake Level Database](https://github.com/ESIPFed/Global-Lake-Level-Database). This package can instantly\n deliver lake water levels for some 2000+ lakes scattered across the globe. Data comes from three sources (so far!):\n - [United States Geological Survey National Water Information System](https://waterdata.usgs.gov/nwis)\n - [United States Department of Agriculture: Foreign Agricultural Service\'s G-REALM Database](https://ipad.fas.usda.gov/cropexplorer/global_reservoir/)\n - [Theia\'s HydroWeb Database](http://hydroweb.theia-land.fr/)\n\n \n**Funding for this work comes from the Earth Science Information Partners (ESIP) Winter 2020 Grant**\n\n_See the funded proposal [here](https://www.esipfed.org/wp-content/uploads/2020/04/Gearon.pdf)_\n\n## Motivation\nLake level data is incredibly important to federal and local governments, scientists, and citizens. Until now,\naccessing lake level data has involved laborious data preparation and wrangling. We aim to provide this data quickly\nand on-demand.\n\n## Software Used\nBuilt with:\n- [Python](https://www.python.org/)\n - [Pandas](https://pandas.pydata.org/)\n - [PyMySQL](https://pymysql.readthedocs.io/en/latest/)\n - [Boto3](https://boto3.readthedocs.io/)\n- [Amazon MySQL RDS](https://aws.amazon.com/rds/mysql/)\n- [Amazon API Gateway](https://aws.amazon.com/api-gateway/)\n- [Amazon Lambda](https://aws.amazon.com/lambda/)\n\n## Quickstart\n- For a full API Reference, please consult LakePy\'s [documentation](http://lakepydocs.com.s3-website.us-east-2.amazonaws.com)\n- For a list of Lakes with corresponding ID numbers, see the [Lake Reference List](docs/resources/LakeReferenceList_Oct2021.csv)\n### Installation\n```\npip install lakepy\n```\nIf you are using conda for package management you can\n [still use pip!](https://medium.com/@msarahan/anaconda-also-comes-with-pip-and-you-can-use-it-to-install-pypi-packages-into-conda-environments-9e7f021509f7)\n### Searching the Global Lake Level Database\n The database can be searched using a name, a source (""grealm"", ""hydroweb"", or ""usgs""), or an identification number. The best practice for searching is to first specify a name.\n \n Let\'s search for [Lake Mead](https://en.wikipedia.org/wiki/Lake_Mead) by instantiating a Lake() object.\n```\nimport lakepy as lk\nmy_lake = lk.search(""mead"")\n```\nIf there is more than one Lake matching ""Mead"", the search function will return a RuntimeWarning and display a table.\n\n> ""Search Result: \'Mead\' has more than 1 Result. Showing the 2 most relevant results.\nSpecify \'id_No\' or narrow search name.""\n\n| | id_No | source | lake_name |\n|---:|--------:|:---------|:------------------------------------|\n| 0 | 138 | hydroweb | Mead |\n| 1 | 1556 | usgs | MEAD LAKE WEST BAY NEAR WILLARD, WI |\n\nWe will select id_No 138 corresponding to Lake Mead from HydroWeb\'s database and re-run our search in 1 of 2 ways:\n- Specify the **id_No** explicitly as a string\n\n```\nmy_lake = lk.search(id_No = ""138"")\n```\n\n- Specify a **name** and a **source**\n```\nmy_lake = lk.search(name=""mead"", source=""hydroweb"", markdown=True)\n```\nWe _highly recommend_ specifying an id_No _whenever possible_ to avoid issues with similarly named lakes. 
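For instance, a complete mini-workflow based on the calls documented in this README (an unambiguous id_No search, followed by use of the pandas dataframe behind the Lake object) could look like the sketch below; it assumes the database is reachable from your environment.

```python
import lakepy as lk

# Unambiguous search using the id_No from the table above
my_lake = lk.search(id_No=""138"")

# The water levels sit in a regular pandas DataFrame
print(my_lake.dataframe[""water_level""].describe())

# Native plotting helper, described in the Plotting section below
my_lake.plot_timeseries()
```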
Either way (by id_No, or by name and source), the search returns a metadata dataframe, rendered here as markdown:\n\n| | id_No | source | lake_name | basin | status | country | end_date | latitude | longitude | identifier | start_date |\n|---:|--------:|:---------|:------------|:---------|:---------|:----------|:-----------------|-----------:|------------:|:-------------|:-----------------|\n| 0 | 138 | hydroweb | Mead | Colorado | research | USA | 2014-12-29 00:21 | 36.13 | -114.45 | L_mead | 2000-06-14 10:22 |\n\nIt is important to note that different databases will return different types and amounts of metadata. Currently\n latitude & longitude are only available from the USGS and HydroWeb databases, but GREALM lakes will soon have them!\n \n### Lake() object\n \n The ""my_lake"" variable is now an object of class Lake() which comes with several attributes:\n \n - name\n - country\n - continent _(currently not supported for HydroWeb)_\n - source\n - original_id\n - id_No\n - observation_period\n - latitude _(currently not supported for G-REALM)_\n - longitude _(currently not supported for G-REALM)_\n - misc_data \n - dataframe \n - data \n\nImportantly, my_lake.dataframe and my_lake.data are pandas dataframe instances with associated methods:\n\n```\nmy_lake.dataframe.describe().to_markdown()\n```\n\n| | water_level |\n|:------|--------------:|\n| count | 119 |\n| mean | 342.807 |\n| std | 7.34547 |\n| min | 330.75 |\n| 25% | 337.905 |\n| 50% | 342.26 |\n| 75% | 347.555 |\n| max | 365.43 |\n\n### Plotting\n\nLakePy allows for native time series plotting as well as map-view plots.\n```\nmy_lake.plot_timeseries()\n```\nPlotly (default)\n![](docs/resources/plotly.png)\n---\nSeaborn/Matplotlib\n```\nmy_lake.plot_timeseries(how=\'seaborn\')\n```\n![](docs/resources/seaborn.png)\n---\n```\nmy_lake.plot_mapview()\n```\n![](docs/resources/contextily.png)\n\n## API Reference\nPlease refer to the [LakePy documentation](http://lakepydocs.com.s3-website.us-east-2.amazonaws.com)\n\n## Contribute\n\nWe would love your help in making this project better. Please refer to our\n[contribution guide](https://github.com/ESIPFed/LakePy/blob/master/docs/contributing.md) to learn how. \n\n## Citing LakePy\nPlease consider citing us if you use LakePy in your research! The recommended citation is:\n> James Gearon, & John Franey. (2021, January 4). ESIPFed/LakePy v2.1.0 (Version v2.1.0). Zenodo. http://doi.org/10.5281/zenodo.4415936\n\n## Credits\nThis work is based on funding provided by the ESIP Lab with support from the National Aeronautics and Space\nAdministration (NASA), National Oceanic and Atmospheric Administration (NOAA) and the United States Geological\nSurvey (USGS). LakePy received additional, generous support in 2021 from Derek Masaki and Farial Shahnaz. 
Many thanks to them.\n\n## License\n\nMIT \xc2\xa9 [James Hooker Gearon & John Franey](https://github.com/ESIPFed/GlobalLakeLevelDatabase/blob/master/LICENSE)\n'",",https://zenodo.org/badge/latestdoi/322186575,http://doi.org/10.\n5281/zenodo.4415936\n\n##","2020/12/17, 05:04:29",1042,MIT,0,171,"2021/10/12, 19:30:55",0,1,1,0,743,0,0.0,0.30434782608695654,"2021/01/04, 16:58:48",v2.1.0,0,4,false,,false,false,,,https://github.com/ESIPFed,http://esipfed.org,United States,,,https://avatars.githubusercontent.com/u/3588047?v=4,,, rivr,Designed as an educational tool for students and instructors of undergraduate and graduate courses in open channel hydraulics.,mkoohafkan,https://github.com/mkoohafkan/rivr.git,github,hydraulics,Freshwater and Hydrology,"2023/02/14, 07:15:45",20,0,2,true,R,,,"R,C++,TeX",https://hydroecology.net/rivr/,"b'# rivr: A package for teaching open-channel hydraulics \n\n\n[![CRAN_Status_Badge](http://www.r-pkg.org/badges/version/rivr)](http://cran.r-project.org/package=rivr)\n[![R-CMD-check](https://github.com/mkoohafkan/rivr/actions/workflows/R-CMD-check.yaml/badge.svg)](https://github.com/mkoohafkan/rivr/actions/workflows/R-CMD-check.yaml)\n\n\nSee the documentation at [hydroecology.net/rivr](https://hydroecology.net/rivr)\n\nThis package is designed as an educational tool for students and instructors \nof undergraduate and graduate courses in open channel hydraulics. Functions are \nprovided for computing flow and channel geometry, normal and critical depth, \ngradually-varied water-surface profiles (e.g. backwater curves) and unsteady \nflow (e.g. flood wave routing). For more information, see [our article in the R Journal](https://journal.r-project.org/archive/2015-2/koohafkan-younis.pdf).\n'",,"2015/01/11, 22:31:02",3209,GPL-3.0,4,85,"2021/12/01, 00:18:21",2,0,4,1,694,0,0,0.0,"2021/01/26, 03:39:25",v1.2-3,0,1,false,,false,false,,,,,,,,,,, eWaterCycle,Makes it easier to use hydrological models without having intimate knowledge about how to install and run the models.,eWaterCycle,https://github.com/eWaterCycle/ewatercycle.git,github,,Freshwater and Hydrology,"2023/10/10, 08:12:52",26,0,4,true,Python,,eWaterCycle,Python,https://ewatercycle.readthedocs.io/en/latest/,"b'# ewatercycle\n\n![image](https://github.com/eWaterCycle/ewatercycle/raw/main/docs/examples/logo.png)\n\nA Python package for running hydrological models.\n\n[![image](https://github.com/eWaterCycle/ewatercycle/actions/workflows/ci.yml/badge.svg)](https://github.com/eWaterCycle/ewatercycle/actions/workflows/ci.yml)\n[![image](https://sonarcloud.io/api/project_badges/measure?project=eWaterCycle_ewatercycle&metric=alert_status)](https://sonarcloud.io/dashboard?id=eWaterCycle_ewatercycle)\n[![image](https://sonarcloud.io/api/project_badges/measure?project=eWaterCycle_ewatercycle&metric=coverage)](https://sonarcloud.io/component_measures?id=eWaterCycle_ewatercycle&metric=coverage)\n[![Documentation Status](https://readthedocs.org/projects/ewatercycle/badge/?version=latest)](https://ewatercycle.readthedocs.io/en/latest/?badge=latest)\n[![PyPI](https://img.shields.io/pypi/v/ewatercycle)](https://pypi.org/project/ewatercycle/)\n[![image](https://img.shields.io/badge/fair--software.eu-%E2%97%8F%20%20%E2%97%8F%20%20%E2%97%8F%20%20%E2%97%8F%20%20%E2%97%8B-yellow)](https://fair-software.eu)\n[![image](https://zenodo.org/badge/DOI/10.5281/zenodo.5119389.svg)](https://doi.org/10.5281/zenodo.5119389)\n[![Research Software Directory 
Badge](https://img.shields.io/badge/rsd-ewatercycle-00a3e3.svg)](https://www.research-software.nl/software/ewatercycle)\n[![SQAaaS badge shields.io](https://img.shields.io/badge/sqaaas%20software-silver-lightgrey)](https://api.eu.badgr.io/public/assertions/1iy8I58zRvm7P9en2q0Egg ""SQAaaS silver badge achieved"")\n\nThe eWaterCycle package makes it easier to use hydrological models\nwithout having intimate knowledge about how to install and run the\nmodels.\n\n- Uses containers for running models in an isolated and portable way\n with [grpc4bmi](https://github.com/eWaterCycle/grpc4bmi)\n- Generates rain and sunshine required for the model using\n [ESMValTool](https://www.esmvaltool.org/)\n- Supports observation data from [GRDC or\n USGS](https://ewatercycle.readthedocs.io/en/latest/observations.html)\n- Exposes a [simple\n interface](https://ewatercycle.readthedocs.io/en/latest/user_guide.html)\n to quickly get up and running\n\n## Install\n\nThe ewatercycle package needs some geospatial non-Python packages to\ngenerate forcing data. It is preferred to create a Conda environment to\ninstall those dependencies:\n\n```shell\nwget https://raw.githubusercontent.com/eWaterCycle/ewatercycle/main/environment.yml\nconda install mamba -n base -c conda-forge -y\nmamba env create --file environment.yml\nconda activate ewatercycle\n```\n\nThe ewatercycle package is installed with\n\n```shell\npip install ewatercycle\n```\n\nThe ewatercycle package ships without any models. Models are packaged in [plugins](https://ewatercycle.readthedocs.io/en/latest/plugins.html). To install all endorsed plugins, use\n\n```shell\npip install ewatercycle-hype ewatercycle-lisflood ewatercycle-marrmot ewatercycle-pcrglobwb ewatercycle-wflow ewatercycle-leakybucket\n```\n\nBesides installing software you will need to create a configuration\nfile, download several data sets and get container images. 
See the\n[system setup\nchapter](https://ewatercycle.readthedocs.org/en/latest/system_setup.html)\nfor instructions.\n\n## Usage\n\nExample using the [Marrmot M14\n(TOPMODEL)](https://github.com/wknoben/MARRMoT/blob/master/MARRMoT/Models/Model%20files/m_14_topmodel_7p_2s.m)\nhydrological model on Merrimack catchment to generate forcing, run it\nand produce a hydrograph.\n\n```python\nimport pandas as pd\nimport ewatercycle.analysis\nimport ewatercycle.forcing\nimport ewatercycle.models\nimport ewatercycle.observation.grdc\n\nforcing = ewatercycle.forcing.generate(\n target_model=\'marrmot\',\n dataset=\'ERA5\',\n start_time=\'2010-01-01T00:00:00Z\',\n end_time=\'2010-12-31T00:00:00Z\',\n shape=\'Merrimack/Merrimack.shp\'\n)\n\nmodel = ewatercycle.models.MarrmotM14(version=""2020.11"", forcing=forcing)\n\ncfg_file, cfg_dir = model.setup(\n threshold_flow_generation_evap_change=0.1,\n leakage_saturated_zone_flow_coefficient=0.99,\n zero_deficit_base_flow_speed=150.0,\n baseflow_coefficient=0.3,\n gamma_distribution_phi_parameter=1.8\n)\n\nmodel.initialize(cfg_file)\n\nobservations_df, station_info = ewatercycle.observation.grdc.get_grdc_data(\n station_id=4147380,\n start_time=model.start_time_as_isostr,\n end_time=model.end_time_as_isostr,\n column=\'observation\',\n)\n\nsimulated_discharge = []\ntimestamps = []\nwhile (model.time < model.end_time):\n model.update()\n value = model.get_value(\'flux_out_Q\')[0]\n # flux_out_Q unit conversion factor from mm/day to m3/s\n area = 13016500000.0 # from shapefile in m2\n conversion_mmday2m3s = 1 / (1000 * 24 * 60 * 60)\n simulated_discharge.append(value * area * conversion_mmday2m3s)\n timestamps.append(model.time_as_datetime.date())\nsimulated_discharge_df = pd.DataFrame({\'simulated\': simulated_discharge}, index=pd.to_datetime(timestamps))\n\newatercycle.analysis.hydrograph(simulated_discharge_df.join(observations_df), reference=\'observation\')\n\nmodel.finalize()\n```\n\nMore examples can be found in the plugins listed in the\n[documentation](https://ewatercycle.readthedocs.io/en/latest/plugins.html).\n\n## Contributing\n\nIf you want to contribute to the development of ewatercycle package,\nhave a look at the [contribution guidelines](CONTRIBUTING.md).\n\n## License\n\nCopyright (c) 2018, Netherlands eScience Center & Delft University of\nTechnology\n\nApache Software License 2.0\n'",",https://doi.org/10.5281/zenodo.5119389","2018/08/29, 15:47:07",1883,Apache-2.0,21,205,"2023/10/06, 10:28:41",99,164,288,50,19,4,1.9,0.49586776859504134,"2023/10/10, 08:14:20",2.0.0,0,11,false,,true,true,,,https://github.com/eWaterCycle,http://www.ewatercycle.org,,,,https://avatars.githubusercontent.com/u/12843269?v=4,,, RHESSys,The Regional Hydro-Ecologic Simulation System.,RHESSys,https://github.com/RHESSys/RHESSys.git,github,,Freshwater and Hydrology,"2023/07/24, 16:55:11",80,0,9,true,C,,RHESSys,"C,C++,Makefile,Yacc,Python,Awk,Shell,R,Lex,Dockerfile,QMake",,"b'```diff\n+ TRUNK IS NOW THE DEFAULT BRANCH (REPLACING MASTER).\n+ THE MASTER BRANCH WILL REMAIN FOR ARCHIVAL PURPOSES, BUT TRUNK SHOULD NOW BE USED AS THE MAIN BRANCH.\n- SEE NEW CHANGES. \n- NEW CODE MAY NOT BE BACKWARD COMPATIBLE WITH YOUR CURRENT FILES. 
\n- PLEASE REVIEW INFORMATION ABOUT CHANGES ON THE WHAT\'S NEW WIKI PAGE \n```\nhttps://github.com/RHESSys/RHESSys/wiki/What\'s-New\n\nRHESSys - The Regional Hydro-Ecologic Simulation System\n=======================================================\n\nGithub is the new home for the RHESSys code repository.\n\nThe project homepage is at http://fiesta.bren.ucsb.edu/~rhessys/\n\nThe old SVN repository was at http://sourceforge.net/projects/rhessys/ \n\nBranches\n--------\nThe ""develop"" branch should be used for day-to-day development, with\nRHESSys releases pushed to the ""trunk"" (formerly ""master"") branch periodically (for example\nyearly).\n\n\nContinuous Build and Test\n-------------------------\n\nWe are using Travis-CI (http://travis-ci.org) to host our continuous integration efforts. Continuous integration helps us run our test suite upon every commit to this repository and let us know if and when we break the build.\n\nThe current build status is: [![Build Status](https://travis-ci.org/RHESSys/RHESSys.png?branch=develop)](https://travis-ci.org/RHESSys/RHESSys)\n\nThe above icon should be clickable and point to the latest build at Travis-CI: https://travis-ci.org/RHESSys/RHESSys\n\nThe `.travis.yml` configuration file defines how this project is hooked to Travis-CI. Github has a post-commit hook that is fired upon every commit to this repository. This post-commit hook uses an authentication token to login to Travis-CI and run the configured steps on a virtual machine. A return value of 0 means success and generates a \'green\' status indicator (hopefully illustrated in the previous paragraph).\n\nWhile the code is successfully compiling and running, there are a significant number of compiler warnings at this time:\n\n $ GISBASE=/usr/lib/grass64 make 2>&1 | grep warning | wc -l\n 1233\n\nTests\n-----\n\nThe Create Flowpaths subproject has a growing suite of tests that can be run via `make test`. Tests are defined as .c files in the `cf/test/src` directory and will automatically get compiled and run by the `make test` target.\n\nCode Coverage\n-------------\n\nThe Create Flowpaths subproject also has a code coverage script. This script will use [gcov](http://gcc.gnu.org/onlinedocs/gcc/Gcov.html) and [lcov](http://ltp.sourceforge.net/coverage/lcov.php) to generate an HTML coverage report and show where more tests are needed by illustrating which lines of code are not being exercised by the existing tests.\n\nRun the code coverage script:\n\n cd cf/\n ./generate_coverage.sh\n\nThis will generate an HTML report in the newly formed `cf/coverage_report/` directory. This new directory can be copied to a webserver or opened directly in your web browser.\n\nStatic Analysis\n---------------\n\nRHESSys can be analyzed by [cppcheck](http://cppcheck.sourceforge.net/) in a few seconds with the following command:\n\n cppcheck . --quiet\n\nOR to see the output, and save the errors out to a textfile:\n\n cppcheck . 2> err.txt\n cat err.txt\n\n $ wc -l err.txt\n 16\n\nStatic analysis will show things like memory leaks, out-of-bound references, and null pointers. 
It is generally assumed to be a good thing to have your code be ""static analysis clean"".\n'",,"2013/04/11, 02:32:19",3850,Apache-2.0,23,1856,"2022/11/29, 01:57:28",73,34,82,1,331,6,0.0,0.7263940520446097,"2021/08/23, 17:47:55",RHESSys-7.4,0,19,false,,false,false,,,https://github.com/RHESSys,,,,,https://avatars.githubusercontent.com/u/4058983?v=4,,, Pastas,An open-source Python framework for the analysis of groundwater time series.,pastas,https://github.com/pastas/pastas.git,github,"hydrology,groundwater,python,timeseries,analysis,pastas",Freshwater and Hydrology,"2023/08/17, 14:44:34",325,7,56,true,Python,Pastas,pastas,Python,https://pastas.readthedocs.io,"b'Pastas: Analysis of Groundwater Time Series\n===========================================\n\n.. image:: /doc/_static/logo_small.png\n :width: 200px\n :align: left\n\n.. image:: https://github.com/pastas/pastas/actions/workflows/ci.yml/badge.svg?branch=master\n :target: https://github.com/pastas/pastas/actions/workflows/ci.yml\n.. image:: https://img.shields.io/pypi/v/pastas.svg\n :target: https://pypi.python.org/pypi/pastas\n.. image:: https://img.shields.io/pypi/l/pastas.svg\n :target: https://mit-license.org/\n.. image:: https://img.shields.io/pypi/pyversions/pastas\n :target: https://pypi.python.org/pypi/pastas\n.. image:: https://img.shields.io/pypi/dm/pastas\n :target: https://pypi.org/project/pastas/\n.. image:: https://zenodo.org/badge/DOI/10.5281/zenodo.1465866.svg\n :target: https://doi.org/10.5281/zenodo.1465866\n.. image:: https://api.codacy.com/project/badge/Grade/952f41c453854064ba0ee1fa0a0b4434\n :target: https://www.codacy.com/gh/pastas/pastas\n.. image:: https://api.codacy.com/project/badge/Coverage/952f41c453854064ba0ee1fa0a0b4434\n :target: https://www.codacy.com/gh/pastas/pastas\n.. image:: https://readthedocs.org/projects/pastas/badge/?version=latest\n :target: https://pastas.readthedocs.io/en/latest/?badge=latest\n.. image:: https://mybinder.org/badge_logo.svg\n :target: https://mybinder.org/v2/gh/pastas/pastas/master?filepath=examples%2Fnotebooks%2F1_basic_model.ipynb\n\nPastas: what is it?\n~~~~~~~~~~~~~~~~~~~\nPastas is an open source Python package for processing, simulating and analyzing\ngroundwater time series. The object-oriented structure allows for the quick\nimplementation of new model components. Time series models can be created,\ncalibrated, and analysed with just a few lines of Python code with the\nbuilt-in optimization, visualisation, and statistical analysis tools.\n\nDocumentation & Examples\n~~~~~~~~~~~~~~~~~~~~~~~~\n- Documentation is provided on the dedicated website `pastas.dev `_\n- Examples can be found on the `examples directory on the documentation website `_\n- View and edit a working example notebook of a Pastas model in `MyBinder `_\n- A list of publications that use Pastas is available in a `dedicated Zotero group `_\n\nGet in Touch\n~~~~~~~~~~~~\n- Questions on Pastas can be asked and answered on `Github Discussions `_.\n- Bugs, feature requests and other improvements can be posted as `Github Issues `_.\n- Pull requests will only be accepted on the development branch (dev) of\n this repository. Please take a look at the `developers section\n `_ on the documentation website for more\n information on how to contribute to Pastas.\n\nQuick installation guide\n~~~~~~~~~~~~~~~~~~~~~~~~\nTo install Pastas, a working version of Python 3.8, 3.9, 3.10, or 3.11 has to be\ninstalled on your computer. 
We recommend using the `Anaconda Distribution\n`_ as it includes most of the python\npackage dependencies and the Jupyter Notebook software to run the notebooks.\nHowever, you are free to install any Python distribution you want.\n\nStable version\n--------------\nTo get the latest stable version, use::\n\n pip install pastas\n\nUpdate\n------\nTo update pastas, use::\n\n pip install pastas --upgrade\n\nDevelopers\n----------\nTo get the latest development version, use::\n\n pip install git+https://github.com/pastas/pastas.git@dev#egg=pastas\n\nRelated packages\n~~~~~~~~~~~~~~~~\n- `Pastastore `_ is a Python package for managing multiple timeseries and pastas models\n- `Metran `_ is a Python package to perform multivariate timeseries analysis using a technique called dynamic factor modelling.\n- `Hydropandas `_ can be used to obtain Dutch timeseries (KNMI, Dinoloket, ..)\n- `PyEt `_ can be used to compute potential evaporation from meteorological variables.\n\nDependencies\n~~~~~~~~~~~~\nPastas depends on a number of Python packages, all of which are automatically\ninstalled when using the pip install manager. To summarize, the dependencies\nnecessary for a minimal functioning installation of Pastas are:\n\n- numpy>=1.7\n- matplotlib>=3.1\n- pandas>=1.1\n- scipy>=1.8\n- numba>=0.51\n\nTo install the most important optional dependencies (the LmFit solver and the Latexify function visualisation) at the same time as Pastas, use::\n\n pip install pastas[full]\n\nor for the development version use::\n\n pip install git+https://github.com/pastas/pastas.git@dev#egg=pastas[full]\n\nHow to Cite Pastas?\n~~~~~~~~~~~~~~~~~~~\nIf you use Pastas in one of your studies, please cite the Pastas article in Groundwater:\n\n- Collenteur, R.A., Bakker, M., Calj\xc3\xa9, R., Klop, S.A., Schaars, F. (2019) `Pastas: open source software for the analysis of groundwater time series `_. Groundwater. doi: 10.1111/gwat.12925.\n\nTo cite a specific version of Pastas, you can use the DOI provided for each official release (>0.9.7) through Zenodo. Click on the link to get a specific version and DOI, depending on the Pastas version.\n\n- Collenteur, R., Bakker, M., Calj\xc3\xa9, R. & Schaars, F. (XXXX). Pastas: open-source software for time series analysis in hydrology (Version X.X.X). Zenodo. http://doi.org/10.5281/zenodo.1465866\n\n'",",https://doi.org/10.5281/zenodo.1465866\n,http://doi.org/10.5281/zenodo.1465866\n\n","2016/04/15, 07:29:20",2749,MIT,223,2281,"2023/09/10, 15:41:56",49,261,531,210,45,11,1.6,0.46770155232849275,"2023/08/17, 14:45:53",v1.2.0,0,12,false,,true,true,"rckwzrd/hydro-tools,pastas/pastastore,Rishav1996/AI-MLOps-TimeSeries,ykato27/Time-Series-Forecasting,pastas/metran,MAMBA-python/course-material,KWR-Water/hgc",,https://github.com/pastas,https://pastas.readthedocs.io/en/latest/,,,,https://avatars.githubusercontent.com/u/18461359?v=4,,, river-dl,Deep learning model for predicting environmental variables on river systems.,USGS-R,https://github.com/USGS-R/river-dl.git,github,,Freshwater and Hydrology,"2023/06/02, 19:46:38",21,0,2,false,Python,USGS-R,USGS-R,"Python,Dockerfile",,"b""Active development moved to [https://code.usgs.gov/wma/wp/river-dl](https://code.usgs.gov/wma/wp/river-dl)\n\n\n# Deep Graph Convolutional Neural Network for Predicting Environmental Variables on River Networks\nThis repository contains code for predicting environmental variables on river networks. 
The models included are all either\ntemporally or spatiotemporally aware and incorporate information from the river network. The original intent of \nthis repository was to predict stream temperature and streamflow. \n\nThis work is being developed by researchers in the Data Science branch of the U.S. Geological Survey and researchers at the \nUniversity of Minnesota in Vipin Kumar's lab. Sources for specific models are included as comments within the code.\n\n# Running the code\nThere are functions for facilitating pre-processing and post-processing of the data in addition to running the models themselves. \nIncluded within the [workflow_examples](workflow_examples) folder of the repository are a number of example Snakemake workflows that show how to\nrun the entire process with a variety of models and end-goals. \n\n### To run the Snakemake workflow locally:\n\n1. Install the dependencies in the `environment.yaml` file. With conda you can do this with `conda env create -f environment.yaml`\n2. Activate your conda environment `source activate rdl_torch_tf`\n3. Install the local `river-dl` package by `pip install path/to/river-dl/` (_optional_)\n4. Edit the river-dl run configuration (including paths for I/O data) in the appropriate `config.yml`\nfrom the [workflow_examples](workflow_examples) folder.\n5. Run Snakemake with `snakemake --configfile config.yml -s Snakemake --cores `\n\n### To run the Snakemake Workflow on TallGrass\n1. Request a GPU allocation and start an interactive shell\n\n salloc -N 1 -t 2:00:00 -p gpu -A --gres=gpu:1 \n srun -A --pty bash\n\n2. Load the necessary cuda toolkit module and add paths to the cudnn drivers\n \n module load cuda11.3/toolkit/11.3.0 \n export LD_LIBRARY_PATH=/cm/shared/apps/nvidia/TensorRT-6.0.1.5/lib:/cm/shared/apps/nvidia/cudnn_8.0.5/lib64:$LD_LIBRARY_PATH\n3. Follow steps 1-5 above as you would to run the workflow locally (note: you may need to change `tensorflow`\nto `tensorflow-gpu` in the `environment.yaml`). \n\n_After building your environment, you may want to make sure the recommended versions of PyTorch and CUDA were installed\naccording to the [PyTorch documentation](https://pytorch.org/). You can see the installed versions\nby calling `conda list` within your activated environment._\n\n### The data\nThe data used to run this model currently are specific to the Delaware River Basin but will soon be made more generic.\n\n___\n\n### Disclaimer\nThis software is in the public domain because it contains materials that originally came from the U.S. Geological Survey, an agency of the United States Department of Interior. For more information, see the official USGS copyright policy\n\nAlthough this software program has been used by the U.S. Geological Survey (USGS), no warranty, expressed or implied, is made by the USGS or the U.S. 
Government as to the accuracy and functioning of the program and related program material nor shall the fact of distribution constitute any such warranty, and no responsibility is assumed by the USGS in connection therewith.\n\nThis software is provided \xe2\x80\x9cAS IS.\xe2\x80\x9d\n""",,"2020/01/23, 21:59:16",1371,CC0-1.0,52,926,"2023/06/02, 10:33:16",26,90,194,8,145,2,9.1,0.4275956284153005,"2021/03/12, 16:13:31",paper_v0,0,7,false,,true,false,,,https://github.com/USGS-R,https://owi.usgs.gov/R/,,,,https://avatars.githubusercontent.com/u/3188813?v=4,,, VIC,A macroscale hydrologic model that solves full water and energy balances.,UW-Hydro,https://github.com/UW-Hydro/VIC.git,github,"hydrology,climate,land-surface,streamflow",Freshwater and Hydrology,"2021/12/15, 02:34:12",233,0,29,true,C,UW Hydro | Computational Hydrology,UW-Hydro,"C,Python,Fortran,Makefile,Shell,Dockerfile",http://vic.readthedocs.io,"b'# Variable Infiltration Capacity (VIC) Model\n\n| VIC Links & Badges | |\n|------------------------|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|\n| VIC Documentation | [![Documentation Status](https://readthedocs.org/projects/vic/badge/?version=latest)](http://vic.readthedocs.org/en/latest/) |\n| VIC Users Listserve | [![VIC Users Listserve](https://img.shields.io/badge/VIC%20Users%20Listserve-Active-blue.svg)](https://mailman.u.washington.edu/mailman/listinfo/vic_users) |\n| Developers Gitter Room | [![Join the chat at https://gitter.im/UW-Hydro/VIC](https://badges.gitter.im/Join%20Chat.svg)](https://gitter.im/UW-Hydro/VIC?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge&utm_content=badge) |\n| License | [![GitHub license](https://img.shields.io/badge/license-MIT-blue.svg)](https://raw.githubusercontent.com/UW-Hydro/VIC/master/LICENSE.txt) |\n| Current Release DOI | [![DOI](https://zenodo.org/badge/7766/UW-Hydro/VIC.svg)](https://zenodo.org/badge/latestdoi/7766/UW-Hydro/VIC) |\n\n----------\n\nThis repository serves as the public source code repository of the **Variable Infiltration Capacity Model**, better known as **VIC**. VIC documentation can be read on the [VIC documentation website](http://vic.readthedocs.org).\n\nThe Variable Infiltration Capacity (VIC) macroscale hydrological model (MHM) has been developed over the last two decades at the [University of Washington](http://uw-hydro.github.io/) and [Princeton University](http://hydrology.princeton.edu) in collaboration with a large number of other researchers around the globe. Development and maintenance of the official version of the VIC model is currently coordinated by the [UW Hydro | Computational Hydrology group](http://www.hydro.washington.edu) in the [Department of Civil and Environmental Engineering](http://www.ce.washington.edu) at the [University of Washington](http://www.washington.edu). All development activity is coordinated via the [VIC github page](https://github.com/UW-Hydro/VIC), where you can also find all archived, current, beta, and development versions of the model.\n\nA skeletal first version of the VIC model was introduced to the community by [Wood et al. [1992]](http://dx.doi.org/10.1029/91JD01786) and a greatly expanded version, from which current variations evolved, is described by [Liang et al. [1994]](http://dx.doi.org/10.1029/94jd00483). 
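For reference, the model's namesake relation as published in Liang et al. [1994] is the variable infiltration capacity curve, which gives the fraction of grid-cell area `A` whose infiltration capacity is no greater than `i` (a sketch of the published formula, included here for orientation):

```latex
A = 1 - \left(1 - \frac{i}{i_m}\right)^{b}
```

Here `i_m` is the maximum infiltration capacity within the cell and `b` is a shape parameter controlling the subgrid variability.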
As compared to other MHMs, VIC\xe2\x80\x99s distinguishing hydrological features are its representation of subgrid variability in soil storage capacity as a spatial probability distribution to which surface runoff is related, and its parameterization of base flow, which occurs from a lower soil moisture zone as a nonlinear recession. Movement of moisture between the soil layers is modeled as gravity drainage, with the unsaturated hydraulic conductivity a function of the degree of saturation of the soil. Spatial variability in soil properties, at scales smaller than the grid scale, is represented statistically, without assigning infiltration parameters to specific subgrid locations. Over time, many additional features and representations of physical processes have been added to the model. VIC has been used in a large number of regional and continental scale (even global) hydrological studies. In 2016, VIC version 5 was released. This was a major update to the VIC source code focusing mainly on infrastructure improvements. The development of VIC-5 is detailed in [Hamman et al. 2018](https://doi.org/10.5194/gmd-11-3481-2018). A selection of VIC applications can be found on the [VIC references page](http://vic.readthedocs.org/en/latest/Documentation/References/).\n\nEvery new application addresses new problems and conditions that the model may not currently be able to handle, and as such the model is always under development. The VIC model is an open source development project, which means that contributions are welcome, including to the VIC documentation.\n\nBy placing the original source code archive on GitHub, we hope to encourage a more collaborative development environment. A tutorial on how to use the VIC git repository and how to contribute your changes to the VIC model can be found on the [working with git page](http://vic.readthedocs.org/en/latest/Development/working-with-git/). The most stable version of the model is in the `master` branch, while beta versions of releases under development can be obtained from the `development` branch of this repository.\n\nVIC is a research model developed by graduate students, post-docs and research scientists over a long period of time (since the early 1990s). Every new VIC application addresses new problems and conditions which the model may not currently be able to handle. As a result, the model is always under development. Because of the incremental nature of this development, not all sections of the code are equally mature and not every combination of model options has been exhaustively tested or is guaranteed to work. While you are more than welcome to use VIC in your own research endeavors, ***the model code comes with no guarantees, expressed or implied, as to suitability, completeness, accuracy, and whatever other claim you would like to make***. In addition, the model has no graphical user interface, nor does it include a large set of analysis tools, since most people want to use their own set of tools.\n\nWhile we would like to hear about your particular application (especially a copy of any published paper), we cannot give you individual support in setting up and running the model. The [VIC documentation website](http://vic.readthedocs.org) includes reasonably complete instructions on how to run the model, as well as the opportunity to sign up for the VIC Users Email List. The [VIC listserve](https://mailman.u.washington.edu/mailman/listinfo/vic_users) should be used for questions about model setup and application. 
It is basically VIC users helping other VIC users. All other exchanges about VIC source code are managed through the [VIC github page](https://github.com/UW-Hydro/VIC).\n\nIf you make use of this model, please acknowledge [Liang et al. [1994]](http://dx.doi.org/10.1029/94jd00483) and [Hamman et al. [2018]](https://doi.org/10.5194/gmd-11-3481-2018) plus any other references appropriate to the features you used that are cited in the [model overview](http://vic.readthedocs.org/en/latest/Overview/ModelOverview/).\n'",",https://zenodo.org/badge/latestdoi/7766/UW-Hydro/VIC,https://doi.org/10.5194/gmd-11-3481-2018,https://doi.org/10.5194/gmd-11-3481-2018","2013/02/07, 02:30:14",3913,MIT,0,2773,"2023/08/22, 03:40:11",128,484,811,4,65,2,0.7,0.7221438645980254,"2021/12/14, 19:34:38",5.1.0,0,14,false,,false,true,,,https://github.com/UW-Hydro,https://uw-hydro.github.io/,"Seattle, WA",,,https://avatars.githubusercontent.com/u/3475662?v=4,,, Badlands,"Basin and Landscape Dynamics is a long-term surface evolution model built to simulate landscape development, sediment transport and sedimentary basins formation from upstream regions down to marine environments.",badlands-model,https://github.com/badlands-model/badlands.git,github,"basin-evolution,landscape-dynamics,geosciences,sedimentology,stratigraphy,marine,climate,geomorphology",Freshwater and Hydrology,"2023/10/26, 01:28:55",121,0,19,true,Python,Badlands,badlands-model,"Python,Fortran,C,Makefile,Shell",https://badlands.readthedocs.io,"b'Badlands - Basin & Landscape Dynamics\n=====\n\n[![code](https://img.shields.io/badge/code-badlands-orange)](https://pypi.org/project/badlands)\n[![PyPI](https://img.shields.io/pypi/v/badlands)](https://pypi.org/project/badlands)\n[![code](https://img.shields.io/badge/code-companion-orange)](https://pypi.org/project/badlands-companion)\n[![PyPI](https://img.shields.io/pypi/v/badlands-companion)](https://pypi.org/project/badlands-companion)\n\n\n[![Documentation Status](https://readthedocs.org/projects/badlands/badge/?version=latest)](https://badlands.readthedocs.io/en/latest/?badge=latest) [![DOI](https://zenodo.org/badge/51286954.svg)](https://zenodo.org/badge/latestdoi/51286954)\n\n\n[![Docker Pulls](https://img.shields.io/docker/pulls/badlandsmodel/pybadlands-demo-serial)](https://cloud.docker.com/u/badlandsmodel/repository/docker/badlandsmodel/badlands)\n[![Binder](https://mybinder.org/badge_logo.svg)](https://mybinder.org/v2/gh/badlands-model/badlands-teaching/binder)\n\n
\n\n\nBasin and Landscape Dynamics (Badlands) is a long-term surface evolution model built to simulate landscape development, sediment transport and sedimentary basins formation from upstream regions down to marine environments.\n\n\n> This program is a free software: you can redistribute it and/or modify it under the terms of the GNU Lesser General Public License as published by the Free Software Foundation, either version 3 of the License, or (at your option) any later version.\n\n\n## What\xe2\x80\x99s in the box?\n\nBadlands is an open-source Python-based code and can be used to simulate:\n\n+ hillslope processes (linear & non-linear diffusion),\n+ fluvial incision (Stream Power Law, Transport Capacity Law)\n+ sediment transport and deposition,\n+ wave-induced longshore drift transport,\n+ reef growth and carbonate platform formation,\n+ submarine gravity currents (turbidity currents),\n+ spatially and temporally varying tectonic (horizontal + vertical displacements) and\n+ effects of climate changes (rainfall) and/or sea-level fluctuations.\n\n\n## Documentation & Installation\n\n**https://badlands.readthedocs.io/**\n'",",https://zenodo.org/badge/latestdoi/51286954","2016/02/08, 08:45:11",2816,GPL-3.0,15,496,"2023/10/25, 05:13:03",4,14,34,5,0,1,0.0,0.2085106382978723,"2019/08/01, 04:07:57",v2.2.0,0,7,false,,false,false,,,https://github.com/badlands-model,www.earthcolab.org,School of Geosciences - University of Sydney,,,https://avatars.githubusercontent.com/u/11727410?v=4,,, LAGOSNE,Interface to the LAke multi-scaled GeOSpatial & temporal database.,cont-limno,https://github.com/cont-limno/LAGOSNE.git,github,"water-quality,ecology,geoscience,limnology,cran,rstats",Freshwater and Hydrology,"2023/06/16, 20:09:00",13,0,2,true,R,Continental Limnology,cont-limno,"R,TeX",https://cont-limno.github.io/LAGOSNE/,"b'\n\n\n[![Project Status: Active - The project has reached a stable, usable\nstate and is being actively\ndeveloped.](https://www.repostatus.org/badges/latest/active.svg)](https://www.repostatus.org/#active)\n[![R-CMD-check](https://github.com/cont-limno/LAGOSNE/actions/workflows/R-CMD-check.yml/badge.svg)](https://github.com/cont-limno/LAGOSNE/actions/workflows/R-CMD-check.yml)\n[![CRAN_Status_Badge](http://www.r-pkg.org/badges/version/LAGOSNE)](https://cran.r-project.org/package=LAGOSNE)\n[![CRAN RStudio mirror\ndownloads](http://cranlogs.r-pkg.org/badges/LAGOSNE)](https://cran.r-project.org/package=LAGOSNE)\n\n[![NSF-1065786](https://img.shields.io/badge/NSF-1065786-blue.svg)](https://www.nsf.gov/awardsearch/showAward?AWD_ID=1065786)\n[![NSF-1638679](https://img.shields.io/badge/NSF-1638679-blue.svg)](https://www.nsf.gov/awardsearch/showAward?AWD_ID=1638679)\n[![NSF-1065649](https://img.shields.io/badge/NSF-1065649-blue.svg)](https://www.nsf.gov/awardsearch/showAward?AWD_ID=1065649)\n[![NSF-1065818](https://img.shields.io/badge/NSF-1065818-blue.svg)](https://www.nsf.gov/awardsearch/showAward?AWD_ID=1065818)\n[![NSF-1638554](https://img.shields.io/badge/NSF-1638554-blue.svg)](https://www.nsf.gov/awardsearch/showAward?AWD_ID=1638554)\n\n# LAGOSNE \n\nThe `LAGOSNE` package provides an R interface to download LAGOS-NE data,\nstore this data locally, and perform a variety of filtering and\nsubsetting operations.\n\nLAGOS-NE contains data for 51,101 lakes and reservoirs larger than 4 ha\nin 17 lake-rich US states. 
The database includes 3 data modules for:\nlake location and physical characteristics for all lakes; ecological\ncontext (i.e., the land use, geologic, climatic, and hydrologic setting\nof lakes) for all lakes; and in situ measurements of lake water quality\nfor a subset of the lakes from the past 3 decades for approximately\n2,600-12,000 lakes depending on the variable (see Soranno et al.\xc2\xa02017\n[below](https://github.com/cont-limno/LAGOSNE#references)).\n\n## Installation\n\n``` r\n# install stable version from CRAN\ninstall.packages(""LAGOSNE"")\n\n# install development version from Github\n# install devtools if not found - install.packages(""devtools"")\n# devtools::install_github(""cont-limno/LAGOSNE"", dependencies = TRUE)\n```\n\n### Data\n\nThe `lagosne_get` function downloads the LAGOSNE files corresponding to\nthe specified version from the [EDI data\nrepository](https://portal.edirepository.org/nis/home.jsp). Files are\nstored in a temporary directory before being \xe2\x80\x9ccompiled\xe2\x80\x9d to an `R` data\nformat in the location specified by the `dest_folder` argument.\nRecommended setting is `lagos_path()`. Data only needs to be downloaded\none time per version per machine. Each `LAGOSNE`\n[module](https://cont-limno.github.io/LAGOSNE/articles/lagosne_structure.html)\nhas a unique version number. However, only the limno module has been\ndynamically updated. Therefore the `LAGOSNE` `R` package uses the limno\nmodule version number to check-out specific datasets. **The latest\nversion of the `LAGOSNE` dataset is 1.087.3.**\n\n``` r\nlibrary(LAGOSNE)\nlagosne_get(dest_folder = lagos_path())\n```\n\n## Usage\n\n### Load Package\n\n``` r\nlibrary(LAGOSNE)\n```\n\n### Load data\n\nThe `lagosne_load` function returns a named list of `data.frame`\nobjects. 
Use the `names()` function to see a list of available data\nframes `names(dt)`.\n\n``` r\ndt <- lagosne_load()\nnames(dt)\n```\n\n #> [1] ""county"" ""county.chag"" ""county.conn"" \n #> [4] ""county.lulc"" ""edu"" ""edu.chag"" \n #> [7] ""edu.conn"" ""edu.lulc"" ""hu4"" \n #> [10] ""hu4.chag"" ""hu4.conn"" ""hu4.lulc"" \n #> [13] ""hu8"" ""hu8.chag"" ""hu8.conn"" \n #> [16] ""hu8.lulc"" ""hu12"" ""hu12.chag"" \n #> [19] ""hu12.conn"" ""hu12.lulc"" ""iws"" \n #> [22] ""iws.conn"" ""iws.lulc"" ""state"" \n #> [25] ""state.chag"" ""state.conn"" ""state.lulc"" \n #> [28] ""buffer100m"" ""buffer100m.lulc"" ""buffer500m"" \n #> [31] ""buffer500m.conn"" ""buffer500m.lulc"" ""lakes.geo"" \n #> [34] ""epi_nutr"" ""lakes_limno"" ""lagos_source_program""\n #> [37] ""locus""\n\n#### Locate tables containing a variable\n\n``` r\nquery_lagos_names(""secchi"")\n```\n\n #> [1] ""epi_nutr""\n\n#### Preview a table\n\n``` r\nhead(dt$state)\n#> state state_name state_zoneid state_lat state_long state_pct_in_nwi\n#> 1 IA Iowa State_13 42.07456 -93.49983 100\n#> 2 MA Massachusetts State_2 42.25762 -71.81240 100\n#> state_ha_in_nwi state_ha\n#> 1 14573561 14573561\n#> 2 2101262 2101262\n```\n\n#### Preview a specific lake\n\n``` r\nlake_info(name = ""Pine Lake"", state = ""Iowa"")\n# or using a lagoslakeid\n# lake_info(lagoslakeid = 4389)\n```\n\n #> lagoslakeid nhdid nhd_lat nhd_long lagosname1 meandepth\n #> 1 4510 155845265 42.37833 -93.05967 UPPER PINE LAKE 2.21\n #> meandepthsource maxdepth maxdepthsource legacyid gnis_name lake_area_ha\n #> 1 IA_CHEMISTRY 4.88 IA_CHEMISTRY 122 Pine Lake 36.07355\n #> lake_perim_meters nhd_fcode nhd_ftype iws_zoneid hu4_zoneid hu6_zoneid\n #> 1 5671.001 39004 390 IWS_51040 HU4_57 HU6_78\n #> hu8_zoneid hu12_zoneid edu_zoneid county_zoneid state_zoneid elevation_m\n #> 1 HU8_400 HU12_3008 EDU_23 County_275 State_13 300.23\n #> state state_name state_lat state_long state_pct_in_nwi state_ha_in_nwi\n #> 1 IA Iowa 42.07456 -93.49983 100 14573561\n #> state_ha lakeconnection iws_ha\n #> 1 14573561 DR_Stream 3593.379\n\n#### Read table metadata\n\n``` r\nhelp.search(""datasets"", package = ""LAGOSNE"")\n```\n\n| Package | Topic | Title |\n|:--------|:---------------------|:--------------------------------------------------------------|\n| LAGOSNE | chag | Climate, Hydrology, Atmospheric, and Geologic (CHAG) Datasets |\n| LAGOSNE | classifications | LAGOSNE Spatial Classifications Metadata |\n| LAGOSNE | conn | Connectivity Datasets |\n| LAGOSNE | epi_nutr | Epilimnion Water Quality Data |\n| LAGOSNE | lagos_source_program | LAGOSNE sources |\n| LAGOSNE | lagoslakes | Lake Geospatial Metadata |\n| LAGOSNE | lakes_limno | Metadata for Lakes with Water Quality |\n| LAGOSNE | locus | Metadata for all lakes \\> 1ha |\n| LAGOSNE | lulc | Land Use Land Cover (LULC) Data Frames |\n\n### Select data\n\n`lagosne_select` is a convenience function whose primary purpose is to\nprovide users with the ability to select subsets of LAGOS tables that\ncorrespond to specific keywords (see `LAGOSNE:::keyword_partial_key()`\nand `LAGOSNE:::keyword_full_key()`). 
See\n[here](http://adv-r.had.co.nz/Subsetting.html) for a comprehensive\ntutorial on generic `data.frame` subsetting.\n\n``` r\n# specific variables\nhead(lagosne_select(table = ""epi_nutr"", vars = c(""tp"", ""tn""), dt = dt))\n#> tp tn\n#> 1 29.00 NA\n#> 2 136.56 3521.7\nhead(lagosne_select(table = ""iws.lulc"", vars = c(""iws_nlcd2011_pct_95""), dt = dt))\n#> iws_nlcd2011_pct_95\n#> 1 0.04\n\n# categories\nhead(lagosne_select(table = ""locus"", categories = ""id"", dt = dt))\n#> lagoslakeid iws_zoneid hu4_zoneid hu6_zoneid hu8_zoneid hu12_zoneid\n#> 1 3201 HU4_11 HU6_12 HU8_47 HU12_16312\n#> 2 4510 IWS_51040 HU4_57 HU6_78 HU8_400 HU12_3008\n#> edu_zoneid county_zoneid state_zoneid\n#> 1 EDU_27 County_331 State_2\n#> 2 EDU_23 County_275 State_13\nhead(lagosne_select(table = ""epi_nutr"", categories = ""waterquality"", dt = dt))\n#> chla colora colort dkn doc nh4 no2 no2no3 srp tdn tdp tkn tn toc ton\n#> 1 16.60 60 NA NA NA NA NA NA NA NA NA NA NA NA NA\n#> 2 30.64 NA NA NA NA NA NA 1619.6 NA NA NA NA 3521.7 NA NA\n#> tp secchi\n#> 1 29.00 1.70\n#> 2 136.56 0.65\nhead(lagosne_select(table = ""hu4.chag"", categories = ""deposition"", dt = dt)[,1:4])\n#> hu4_dep_no3_1985_min hu4_dep_no3_1985_max hu4_dep_no3_1985_mean\n#> 1 7.2171 10.0448 7.9366\n#> 2 9.5538 21.1791 15.5290\n#> hu4_dep_no3_1985_std\n#> 1 0.3868\n#> 2 2.2330\n\n# mix of specific variables and categories\nhead(lagosne_select(table = ""epi_nutr"", vars = ""programname"", \n categories = c(""id"", ""waterquality""), dt = dt))\n#> programname lagoslakeid chla colora colort dkn doc nh4 no2 no2no3 srp tdn\n#> 1 MA_DEP 3201 16.60 60 NA NA NA NA NA NA NA NA\n#> 2 IA_CHEM 4510 30.64 NA NA NA NA NA NA 1619.6 NA NA\n#> tdp tkn tn toc ton tp secchi eventida10873\n#> 1 NA NA NA NA NA 29.00 1.70 45773\n#> 2 NA NA 3521.7 NA NA 136.56 0.65 64904\n```\n\n## Published LAGOSNE subsets\n\n``` r\n# Oliver et al. 2015\nlagos_get_oliver_2015()\nhead(lagos_load_oliver_2015())\n\n# Collins et al. 2017\nlagos_get_collins_2017()\nhead(lagos_load_collins_2017())\n```\n\n## Legacy Versions\n\n### R Package\n\nTo install versions of `LAGOSNE` compatible with older versions of\nLAGOS-NE run the following command where `ref` is set to the desired\nversion (in the example, it is version 1.087.1):\n\n``` r\n# install devtools if not found\n# install.packages(""devtools"")\ndevtools::install_github(""cont-limno/LAGOSNE"", ref = ""v1.087.1"")\n```\n\n## References\n\nOliver, SK, PA Soranno, CE Fergus, T Wagner, K Webster, CE Scott, LA\nWinslow, J Downing, and EH Stanley. 2015. \xe2\x80\x9cLAGOS - Predicted and\nObserved Maximum Depth Values for Lakes in a 17-State Region of the\nU.S.\xe2\x80\x9d\n.\n\nSoranno, P.A., Bacon, L.C., Beauchene, M., Bednar, K.E., Bissell, E.G.,\nBoudreau, C.K., Boyer, M.G., Bremigan, M.T., Carpenter, S.R., Carr, J.W.\nCheruvelil, K.S., and \xe2\x80\xa6 , 2017. LAGOS-NE: A multi-scaled geospatial and\ntemporal database of lake ecological context and water quality for\nthousands of US lakes. GigaScience,\n\n\nSoranno, PA, EG Bissell, KS Cheruvelil, ST Christel, SM Collins, CE\nFergus, CT Filstrup, et al.\xc2\xa02015. \xe2\x80\x9cBuilding a Multi-Scaled Geospatial\nTemporal Ecology Database from Disparate Data Sources: Fostering Open\nScience and Data Reuse.\xe2\x80\x9d Gigascience 4 (1).\n.\n\nStachelek J., Oliver S. 2017. LAGOSNE: Interface to the Lake\nMulti-scaled Geospatial and Temporal Database. R package version 1.1.0.\n\n\nSoranno P, Cheruvelil K. 2017. 
LAGOS-NE-LOCUS v1.01: a module for\nLAGOS-NE, a multi-scaled geospatial and temporal database of lake\necological context and water quality for thousands of U.S. Lakes:\n1925\xe2\x80\x932013. Environmental Data Initiative. \n\nSoranno P, Cheruvelil K. 2019. LAGOS-NE-LIMNO v1.087.3: a module for\nLAGOS-NE, a multi-scaled geospatial and temporal database of lake\necological context and water quality for thousands of U.S. Lakes:\n1925\xe2\x80\x932013. Environmental Data Initiative.\n.\n\nSoranno P, Cheruvelil K. 2017. LAGOS-NE-GEO v1.05: a module for\nLAGOS-NE, a multi-scaled geospatial and temporal database of lake\necological context and water quality for thousands of U.S. Lakes:\n1925\xe2\x80\x932013. Environmental Data Initiative.\n\n'",",https://doi.org/10.1093/gigascience/gix101,https://doi.org/10.6073/PASTA/0C23A789232AB4F92107E26F70A7D8EF,https://doi.org/10.6073/PASTA/08C6F9311929F4874B01BCC64EB3B2D7,https://doi.org/10.6073/PASTA/16F4BDAA9607C845C0B261A580730A7A","2016/09/21, 15:37:29",2590,GPL-3.0,25,633,"2023/06/16, 18:49:42",3,5,111,1,131,0,0.0,0.2522796352583586,"2023/06/17, 00:31:26",v2.0.3,0,5,false,,false,false,,,https://github.com/cont-limno,https://lagoslakes.org,,,,https://avatars.githubusercontent.com/u/24610902?v=4,,, RiverREM,Make river relative elevation models and REM visualizations from an input digital elevation model.,OpenTopography,https://github.com/OpenTopography/RiverREM.git,github,"lidar,digital-elevation-model,visualization",Freshwater and Hydrology,"2022/08/16, 19:08:48",108,0,44,true,Python,OpenTopography Facility,OpenTopography,"Python,Batchfile,Shell",https://opentopography.org/blog/new-package-automates-river-relative-elevation-model-rem-generation,"b""[![NSF-1948997](https://img.shields.io/badge/NSF-1948997-blue.svg)](https://nsf.gov/awardsearch/showAward?AWD_ID=1948997) [![NSF-1948994](https://img.shields.io/badge/NSF-1948994-blue.svg)](https://nsf.gov/awardsearch/showAward?AWD_ID=1948994) [![NSF-1948857](https://img.shields.io/badge/NSF-1948857-blue.svg)](https://nsf.gov/awardsearch/showAward?AWD_ID=1948857)\n\n[![Conda](https://img.shields.io/conda/v/conda-forge/riverrem?color=success)](https://anaconda.org/conda-forge/riverrem) [![Conda](https://img.shields.io/conda/dn/conda-forge/riverrem?color=success)](https://anaconda.org/conda-forge/riverrem)\n\n# RiverREM\n\nRiverREM is a Python package for automatically generating river relative elevation model (REM) visualizations from nothing but an input digital elevation model (DEM). The package uses the OpenStreetMap API to retrieve river centerline geometries over the DEM extent. Interpolation of river elevations is automatically handled using a sampling scheme based on raster resolution and river sinuosity to create striking high-resolution visualizations without interpolation artefacts straight out of the box and without additional manual steps. The package also contains a helper class for creating DEM raster visualizations. 
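A sketch of using that helper class follows; the `RasterViz` class and its method names below are assumptions based on the project documentation and should be verified there:

```python
from riverrem.RasterViz import RasterViz  # helper class name assumed

# point the helper at a DEM, then render common visualization products
viz = RasterViz('/path/to/dem.tif')
viz.make_hillshade(multidirectional=True)  # hillshade raster
viz.make_color_relief(cmap='mako_r')       # color-relief from a named colormap
viz.make_hillshade_color()                 # blended hillshade + color-relief
```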
See the [documentation](https://opentopography.github.io/RiverREM/) pages for more details.\n\nFor more information on REMs and this project see [this OpenTopography blog post](https://opentopography.org/blog/new-package-automates-river-relative-elevation-model-rem-generation).\n\n![birch_creek_REM](docs/pics/birch_crop.png)\n\n## Installation\n\nInstall via conda:\n\n```bash\nconda install -c conda-forge riverrem\n```\n\nor clone this repo and create a conda environment from the `environment.yml`:\n\n```bash\ngit clone https://github.com/opentopography/RiverREM.git\ncd RiverREM\nconda env create -n riverrem --file environment.yml\n```\n\nIn order to handle dependencies such as GDAL and OSMnx, it is highly recommended to install with `conda` instead of `pip` for ease of use. \n\n## Usage\n\n1. Get a DEM for the area of interest. Some sources for free topographic data:\n\n - [OpenTopography](https://opentopography.org/)\n - [USGS](https://apps.nationalmap.gov/downloader/)\n - [Comprehensive list of DEM sources](https://github.com/DahnJ/Awesome-DEM)\n\n2. Create an REM visualization with default arguments:\n\n ```python\n from riverrem.REMMaker import REMMaker\n # provide the DEM file path and desired output directory\n rem_maker = REMMaker(dem='/path/to/dem.tif', out_dir='/out/dir/')\n # create an REM\n rem_maker.make_rem()\n # create an REM visualization with the given colormap\n rem_maker.make_rem_viz(cmap='mako_r')\n ```\n\nOptions for adjusting colormaps, shading, interpolation parameters, and more are detailed in the [documentation](https://opentopography.github.io/RiverREM/).\n\n![yukon_flats_REM](docs/pics/yukon_crop.png)\n\n## Troubleshooting\n\n- No river in DEM extent or inaccurate centerline: Use the [OSM editor](https://www.openstreetmap.org/edit) to \n create/modify river centerline(s). Alternatively, a user-provided centerline can be input to override the OSM centerline. See the [documentation](https://opentopography.github.io/RiverREM) for more details.\n\n## Issues\n\nSubmitting [issues](https://github.com/OpenTopography/RiverREM/issues), bugs, or suggested feature improvements are highly encouraged for this repository.\n\n## References\n\nThis is the OpenTopography fork of https://github.com/klarrieu/RiverREM by Kenneth Larrieu. This package was made possible and inspired by the following:\n\n- The [beautiful REMs](https://www.dnr.wa.gov/publications/ger_presentations_dmt_2016_coe.pdf) popularized by [Daniel Coe](https://dancoecarto.com/creating-rems-in-qgis-the-idw-method)\n- [DahnJ](https://github.com/DahnJ)'s implementation of [REMs using xarray](https://github.com/DahnJ/REM-xarray)\n- Geoff Boeing's [OSMnx](https://geoffboeing.com/publications/osmnx-complex-street-networks/) Python package leveraging the OSM Overpass API\n- The [UNAVCO](https://www.unavco.org/) Student Internship Program\n- The team at [OpenTopography](https://opentopography.org/) for supporting this effort under the following U.S. 
National Science Foundation award numbers: 1948997, 1948994, 1948857.\n\n\n![neches_REM](docs/pics/neches_REM_view.png)\n""",,"2022/08/03, 22:27:48",448,GPL-3.0,0,105,"2023/10/19, 15:45:25",0,0,7,3,6,0,0,0.0449438202247191,"2022/08/14, 02:35:57",v1.0.4,0,3,false,,false,false,,,https://github.com/OpenTopography,www.opentopography.org,"San Diego Supercomputer Center, UC San Diego",,,https://avatars.githubusercontent.com/u/3650515?v=4,,, mHM,The mesoscale Hydrological Model.,mhm,,custom,,Freshwater and Hydrology,,,,,,,,,,https://git.ufz.de/mhm/mhm,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,, LISF,A software framework for high performance terrestrial hydrology modeling and data assimilation developed with the goal of integrating satellite and ground-based observational data products.,NASA-LIS,https://github.com/NASA-LIS/LISF.git,github,,Freshwater and Hydrology,"2023/10/23, 17:07:38",96,0,26,true,Fortran,,NASA-LIS,"Fortran,C,CSS,Python,Visual Basic 6.0,JavaScript,HTML,Shell,C++,Perl,Mathematica,Makefile,Roff,GDScript,SCSS,Mask,nesC,NCL,MATLAB,GLSL",,"b'= Land Information System Framework (LISF)\n\nifdef::env-github[]\n:tip-caption: :bulb:\n:note-caption: :information_source:\n:important-caption: :heavy_exclamation_mark:\n:caution-caption: :fire:\n:warning-caption: :warning:\nendif::[]\n\n:url-lis-website: https://lis.gsfc.nasa.gov\n:url-hsl-website: https://earth.gsfc.nasa.gov/hydro\n\n// HTML passthrough to float LIS logo to the right\n++++\n\n++++\n\n== Overview\n\nThe link:{url-lis-website}[Land Information System (LIS)] is a software framework for high performance terrestrial hydrology modeling and data assimilation developed with the goal of integrating satellite and ground-based observational data products and advanced modeling techniques to produce optimal fields of land surface states and fluxes.\n\nDevelopment of LIS is led by the link:{url-hsl-website}[Hydrological Sciences Laboratory] at NASA\'s Goddard Space Flight Center.\n\nThe software suite consists of three modeling components:\n\n. *Land surface Data Toolkit (LDT)*, a formal environment that handles the data-related requirements of LIS including land surface parameter processing, geospatial transformations, consistency checks, data assimilation preprocessing, and forcing bias correction,\n. *Land Information System (LIS)*, the modeling system that encapsulates physical models, data assimilation algorithms, optimization and uncertainty estimation algorithms, and high performance computing support, and\n. *Land surface Verification Toolkit (LVT)*, a formal model verification and benchmarking environment that can be used for enabling rapid prototyping and evaluation of model simulations by comparing against a comprehensive suite of in-situ, remote sensing, and model and reanalysis data products.\n\nPlease visit our website for more information: {url-lis-website}\n\n== Documentation\n\nOnline documentation is available link:https://nasa-lis.github.io/LISF/[here].\n\nNavigate into the link:https://github.com/NASA-LIS/LISF/tree/master/docs[`docs/`] subdirectory of this repository to view our documentation. See the `docs/README` file for instructions.\n\n== Support\n\nVisit our link:https://github.com/NASA-LIS/LISF/discussions[Discussions] forum on GitHub.\n\n_Please note that the LIS team is small and provides support as time allows. As such, we ask for your patience when requesting assistance._\n\n== Contributing\n\nWe welcome feedback and contributions from the LISF user community. 
Please review our link:https://github.com/NASA-LIS/LISF/blob/master/CONTRIBUTING.md[Contribution Guidelines] before opening an issue or pull request.\n\n== License\n\nThe Land Information System Framework (LISF) is released under the terms and conditions of the Apache License, Version 2.0. See https://www.apache.org/licenses/LICENSE-2.0.\n\nPlease see the LICENSES subdirectories under `ldt/`, `lis/`, and `lvt/` for the licenses of the third-party components within LISF.\n\n== Citation\n\nPlease use the following citations to cite or refer to the LIS software:\n\nKumar, S.V., C.D. Peters-Lidard, Y. Tian, P.R. Houser, J. Geiger, S. Olden, L. Lighty, J.L. Eastman, B. Doty, P. Dirmeyer, J. Adams, K. Mitchell, E. F. Wood, and J. Sheffield, 2006: Land Information System - An interoperable framework for high resolution land surface modeling. _Environ. Modeling & Software_, 21, 1402-1415, link:https://doi.org/10.1016/j.envsoft.2005.07.004[doi:10.1016/j.envsoft.2005.07.004]\n\nPeters-Lidard, C.D., P.R. Houser, Y. Tian, S.V. Kumar, J. Geiger, S. Olden, L. Lighty, B. Doty, P. Dirmeyer, J. Adams, K. Mitchell, E.F. Wood, and J. Sheffield, 2007: High-performance Earth system modeling with NASA/GSFC\'s Land Information System. Innovations in Systems and Software Engineering, 3(3), 157-165, link:https://doi.org/10.1007/s11334-007-0028-x[doi:10.1007/s11334-007-0028-x]\n'",",https://doi.org/10.1016/j.envsoft.2005.07.004,https://doi.org/10.1007/s11334-007-0028-x","2018/11/07, 17:39:35",1813,Apache-2.0,531,3300,"2023/10/23, 03:48:13",108,726,1002,153,3,11,0.0,0.6559139784946236,"2023/10/23, 17:35:49",master-latest,2,28,false,,false,true,,,https://github.com/NASA-LIS,,,,,https://avatars.githubusercontent.com/u/42041231?v=4,,, Conceptual Functional Equivalent,A conceptual rainfall-runoff model with an implementation of the Basic Model Interface.,NOAA-OWP,https://github.com/NOAA-OWP/cfe.git,github,"bmi,csdms,hydrology",Freshwater and Hydrology,"2023/08/29, 17:47:50",17,0,9,true,C,,NOAA-OWP,"C,Python,C++,Shell,CMake,Makefile",,"b'# Conceptual Functional Equivalent (CFE) Model\n\nCFE (Conceptual Functional Equivalent) is a simplified conceptual model written by Fred Ogden that is designed to be functionally equivalent to the National Water Model. To see the original author code, which is not BMI compatible, please refer to the [original_author_code](https://github.com/NOAA-OWP/cfe/tree/master/original_author_code) directory. For more information on the hypotheses and ideas underpinning the CFE model, see the [T-shirt Approximation of the National Water Model versions 1.2, 2.0, and 2.1](https://github.com/NOAA-OWP/cfe/blob/master/MODEL.md) section of this document. The remainder of this document discusses the BMI enabled and expanded CFE model. 
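Since BMI coupling is central to the examples below, a generic sketch of how a framework drives any BMI-enabled model may help. This is illustrative Python using the standard BMI function names only; `SomeBmiWrapper`, the config path, and the `"Q_OUT"` variable name are assumptions, and CFE itself implements these functions in C:

```python
import numpy as np

model = SomeBmiWrapper()            # hypothetical object implementing the BMI spec
model.initialize("cfe_config.ini")  # read a CFE-style model configuration file
while model.get_current_time() < model.get_end_time():
    model.update()                  # advance the model one timestep
streamflow = np.empty(1)
model.get_value("Q_OUT", streamflow)  # variable name assumed; check the model's BMI outputs
model.finalize()                    # release resources
```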
\n\n## Build and Run Instructions\nDetailed instructions on how to build and run CFE can be found in the [INSTALL](https://github.com/NOAA-OWP/cfe/blob/master/INSTALL.md) guide.\n - Test examples highlights\n - Unittest (see [tests](https://github.com/NOAA-OWP/cfe/blob/master/test/README.md))\n - Example 1 (standalone mode): CFE reads local forcing data\n - Example 2 (pseudo framework mode): CFE coupled to AORC (AORC provides forcing data through BMI)\n - Example 3 (pseudo framework mode): CFE coupled to AORC (provides forcing data through BMI) and PET (provides potential evapotranspiration via BMI)\n - Example 4 (pseudo framework mode): Example #3 repeated with rootzone-based actual evapotranspiration\n - Example 5 (nextgen framework mode): CFE coupled to PET module\n \n## Model Configuration File\nA detailed description of the parameters for model configuration is provided [here](https://github.com/NOAA-OWP/cfe/tree/master/configs/README.md).\n\n## Getting help\nFor questions, please contact XYZ, the main maintainer of the repository.\n\n## Known issues or raise an issue\nWe are constantly looking to improve the model and/or fix bugs as they arise. Please see the GitHub Issues for known issues; if you want to suggest adding a capability or to report a bug, please open an issue.\n\n## Getting involved\nSee the general instructions for contributing to model development ([instructions](https://github.com/NOAA-OWP/cfe/blob/master/CONTRIBUTING.md)) or simply fork the repository and submit a pull request.\n'",,"2021/05/26, 11:23:27",882,CUSTOM,21,109,"2023/10/10, 13:09:54",23,59,75,22,15,3,0.8,0.7272727272727273,,,0,12,false,,false,true,,,https://github.com/NOAA-OWP,,,,,https://avatars.githubusercontent.com/u/60660814?v=4,,, NeuralHydrology,Python library to train neural networks with a strong focus on hydrological applications.,neuralhydrology,https://github.com/neuralhydrology/neuralhydrology.git,github,,Freshwater and Hydrology,"2023/08/18, 09:28:12",249,1,74,true,Python,Neural Hydrology,neuralhydrology,Python,https://neuralhydrology.readthedocs.io/,"b'![#](docs/source/_static/img/neural-hyd-logo-black.png)\n\nPython library to train neural networks with a strong focus on hydrological applications.\n\nThis package has been used extensively in research over the last few years and has been used in various academic publications. \nThe core idea of this package is modularity in all places to allow easy integration of new datasets, new model \narchitectures or any training-related aspects (e.g. loss functions, optimizers, regularization). \nOne of the core concepts of this code base is configuration files, which let anyone train neural networks without\ntouching the code itself. 
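For example, a full training run can be launched from such a configuration file alone; a minimal sketch (the entry point follows the project documentation, and the config path is hypothetical):

```python
from pathlib import Path

from neuralhydrology.nh_run import start_run

# every data, model, and training choice lives in the YAML config, not in code
start_run(config_file=Path("basin.yml"))
```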
The NeuralHydrology package is built on top of the deep learning framework \n[PyTorch](https://pytorch.org/), since it has proven to be the most flexible and useful for research purposes.\n\nWe (the AI for Earth Science group at the Institute for Machine Learning, Johannes Kepler University, Linz, Austria) are using\nthis code in our day-to-day research and will continue to integrate our new research findings into this public repository.\n\n- Documentation: [neuralhydrology.readthedocs.io](https://neuralhydrology.readthedocs.io)\n- Research Blog: [neuralhydrology.github.io](https://neuralhydrology.github.io)\n- Bug reports/Feature requests [https://github.com/neuralhydrology/neuralhydrology/issues](https://github.com/neuralhydrology/neuralhydrology/issues)\n\n# Cite NeuralHydrology\n\nIn case you use NeuralHydrology in your research or work, it would be highly appreciated if you include a reference to our [JOSS paper](https://joss.theoj.org/papers/10.21105/joss.04050#) in any kind of publication.\n\n```bibtex\n@article{kratzert2022joss,\n title = {NeuralHydrology --- A Python library for Deep Learning research in hydrology},\n author = {Frederik Kratzert and Martin Gauch and Grey Nearing and Daniel Klotz},\n journal = {Journal of Open Source Software},\n publisher = {The Open Journal},\n year = {2022},\n volume = {7},\n number = {71},\n pages = {4050},\n doi = {10.21105/joss.04050},\n url = {https://doi.org/10.21105/joss.04050},\n}\n```\n\n# Contact\n\nFor questions or comments regarding the usage of this repository, please use the [discussion section](https://github.com/neuralhydrology/neuralhydrology/discussions) on Github. For bug reports and feature requests, please open an [issue](https://github.com/neuralhydrology/neuralhydrology/issues) on GitHub.\nIn special cases, you can also reach out to us by email: neuralhydrology(at)googlegroups.com\n'",",https://doi.org/10.21105/joss.04050","2020/09/30, 07:16:56",1120,BSD-3-Clause,63,282,"2023/09/08, 13:59:03",2,55,96,29,47,0,1.0,0.5265486725663717,"2023/08/18, 11:36:21",v.1.9.0,0,8,false,,false,true,dmbrmv/my_dissertation,,https://github.com/neuralhydrology,https://neuralhydrology.github.io/,,,,https://avatars.githubusercontent.com/u/59032226?v=4,,, Surface water network,A Python package to create and analyze surface water networks.,mwtoews,https://github.com/mwtoews/surface-water-network.git,github,"hydrology,python,modflow,surface-water",Freshwater and Hydrology,"2023/10/23, 17:43:08",24,0,8,true,Python,,,"Python,FreeBasic",https://mwtoews.github.io/surface-water-network/,"b""# Surface water network\n[![DOI](https://zenodo.org/badge/187739645.svg)](https://zenodo.org/badge/latestdoi/187739645)\n[![Codacy](https://api.codacy.com/project/badge/Grade/420bcd8896c14f18b2077dd987c78849)](https://app.codacy.com/manual/mwtoews/surface-water-network?utm_source=github.com&utm_medium=referral&utm_content=mwtoews/surface-water-network&utm_campaign=Badge_Grade_Dashboard)\n[![Codcov](https://codecov.io/gh/mwtoews/surface-water-network/branch/main/graph/badge.svg)](https://codecov.io/gh/mwtoews/surface-water-network)\n[![CI](https://github.com/mwtoews/surface-water-network/actions/workflows/tests.yml/badge.svg?branch=main)](https://github.com/mwtoews/surface-water-network/actions/workflows/tests.yml)\n\nA Python package to create and analyze surface water networks.\n\n\n## Python packages\n\nPython 3.8+ is required.\n\n### Required\n\n - `geopandas >=0.9` - process spatial data similar to pandas\n - `packaging` - used to check package versions\n - 
`pandas >=1.2` - tabular data analysis\n - `pyproj >=2.2` - spatial projection support\n - `rtree` - spatial index support\n\n### Optional\n\n - `flopy` - read/write MODFLOW models\n - `netCDF4` - used to read TopNet files\n\n## Testing\n\nRun `pytest -v` or `python3 -m pytest -v`\n\nFor faster multi-core `pytest -v -n 2` (with `pytest-xdist`)\n\nTo run doctests `pytest -v swn --doctest-modules`\n\n## Examples\n\n```python\nimport geopandas\nimport pandas as pd\nimport swn\n```\n\nRead from Shapefile:\n```python\nshp_srs = 'tests/data/DN2_Coastal_strahler1z_stream_vf.shp'\nlines = geopandas.read_file(shp_srs)\nlines.set_index('nzsegment', inplace=True, verify_integrity=True) # optional\n```\n\nOr, read from PostGIS:\n```python\nfrom sqlalchemy import create_engine, engine\n\ncon_url = engine.url.URL(drivername='postgresql', database='scigen')\ncon = create_engine(con_url)\nsql = 'SELECT * FROM wrc.rec2_riverlines_coastal'\nlines = geopandas.read_postgis(sql, con)\nlines.set_index('nzsegment', inplace=True, verify_integrity=True) # optional\n```\n\nInitialise and create network:\n```python\nn = swn.SurfaceWaterNetwork.from_lines(lines.geometry)\nprint(n)\n# \n```\n\nPlot the network, write a Shapefile, write and read a SurfaceWaterNetwork file:\n```python\nn.plot()\n\nswn.file.gdf_to_shapefile(n.segments, 'segments.shp')\n\nn.to_pickle('network.pkl')\nn = swn.SurfaceWaterNetwork.from_pickle('network.pkl')\n```\n\nRemove segments that meet a condition (stream order), or that are\nupstream/downstream from certain locations:\n```python\nn.remove(\n n.segments.stream_order == 1,\n segnums=n.gather_segnums(upstream=3047927))\n```\n\nRead flow data from a TopNet netCDF file, convert from m3/s to m3/day:\n```python\n\nnc_path = 'tests/data/streamq_20170115_20170128_topnet_03046727_strahler1.nc'\nflow = swn.file.topnet2ts(nc_path, 'mod_flow', 86400)\n# remove time and truncate to closest day\nflow.index = flow.index.floor('d')\n\n# 7-day mean\nflow7d = flow.resample('7D').mean()\n\n# full mean\nflow_m = pd.DataFrame(flow.mean(0)).T\n```\n\nProcess a MODFLOW/flopy model:\n```python\nimport flopy\n\nm = flopy.modflow.Modflow.load('h.nam', model_ws='tests/data', check=False)\nnm = swn.SwnModflow.from_swn_flopy(n, m)\nnm.default_segment_data()\nnm.set_segment_data_inflow(flow_m)\nnm.plot()\nnm.to_pickle('sfr_network.pkl')\nnm = swn.SwnModflow.from_pickle('sfr_network.pkl', n, m)\nnm.set_sfr_obj()\nm.sfr.write_file('file.sfr')\nnm.grid_cells.to_file('grid_cells.shp')\nnm.reaches.to_file('reaches.shp')\n```\n\n## Citation\n\nToews, M. W.; Hemmings, B. 2019. A surface water network method for generalising streams and rapid groundwater model development. In: New Zealand Hydrological Society Conference, Rotorua, 3-6 December, 2019. p. 
166-169.\n""",",https://zenodo.org/badge/latestdoi/187739645","2019/05/21, 01:32:38",1619,BSD-3-Clause,33,295,"2023/10/23, 17:43:12",13,63,73,18,2,3,0.0,0.10104529616724733,"2023/05/25, 00:33:58",0.6,0,4,false,,false,false,,,,,,,,,,, Lekan,"Provide a software that assists the user doing hydrological and hydraulic studies for flood mapping and forecasting, hydraulic structure design, or other tasks linked to natural surface flow.",vcloarec,https://github.com/vcloarec/ReosProject.git,github,"flooding,hydraulic-modeling,hydrology",Freshwater and Hydrology,"2023/09/04, 01:01:07",85,0,34,true,C++,,,"C++,Fortran,Python,CMake,Perl,QMake,Shell,PowerShell,C,NSIS,Batchfile,QML",https://www.reos.site/en/home/,"b'# Reos Project\n\nThe aim of this project is to provide free and open-source tools for hydrological and hydraulic analysis.\n\n# Lekan\n\nThe aim of Lekan is to assist the user with hydrological or hydraulic studies.\n\nThe user works in a GIS environment based on the QGIS engine.\n\nA self-installer of this tool can be downloaded [here](https://www.reos.site/en/reos-project/download/) (for Windows). For other platforms, you can build the project yourself.\n\nDescription and documentation of the functionality provided by the current release are available in the wiki of this repo [here](https://github.com/vcloarec/ReosProject/wiki).\n\nFor people who want to test, the development version can be downloaded here: https://nightly.link/vcloarec/ReosProject/workflows/windows_test/master/Lekan%20Windows%20Installer.zip\n\n## Development\n\nLekan is under constant development: depending on available time and resources, new features are regularly added. The only limits to providing new functionality that helps users in hydrological/hydraulic studies are imagination and ... time. Help is very welcome; to contribute, see [here](https://www.reos.site/en/how-to-support/) how to do so.\n\n## Work in progress\n\nDevelopment efforts are currently concentrated on allowing the user to build 2D hydraulic models directly from Lekan:\n\n- generate and edit the mesh structure\n- apply/edit topography on the mesh\n- apply/edit roughness\n- define boundary conditions and link them to other parts of the hydraulic network (watershed/hydrograph nodes, hydraulic routing links)\n- launch an external calculation engine (an existing modeling engine) from Lekan\n- visualize and post-process results in Lekan\n\n![](mesh_r.gif)\n\n'",,"2019/04/26, 20:48:29",1643,CUSTOM,449,1045,"2023/09/04, 01:01:08",16,167,202,166,52,1,0.0,0.0,"2023/06/01, 19:21:22",2.3.3,0,1,false,,false,false,,,,,,,,,,, RUBEM,A distributed hydrological model to calculate monthly flows with changes in land use over time.,LabSid-USP,https://github.com/LabSid-USP/RUBEM.git,github,"hydrology,hydrological-model,water,catchment,remote-sensing,grid,pcraster,simulation-model",Freshwater and Hydrology,"2023/05/17, 21:44:44",6,0,4,true,Python,LabSid,LabSid-USP,Python,https://labsid.poli.usp.br/softwares/rubem-hydrological/,"b'\n[![Documentation Status][readthedocs-shield]][readthedocs-url]\n[![Contributors][contributors-shield]][contributors-url]\n[![GitHub Commit Activity][commit-activity-shield]][commit-activity-url]\n[![Forks][forks-shield]][forks-url]\n[![Stargazers][stars-shield]][stars-url]\n[![Issues][issues-shield]][issues-url]\n[![LabSid YouTube Channel][youtube-shield]][youtube-url]\n\n
# RUBEM

### Rainfall rUnoff Balance Enhanced Model

RUBEM is a distributed hydrological model to calculate monthly flows with changes in land use over time.

Explore the docs » · Support Form · Report Bug · Request Feature

## Table of Contents

1. About The Project
2. Getting Started
3. Usage
4. Roadmap
5. Contributing
6. License
7. Contact
8. Acknowledgements
\n\n## About The Project\n\nThe Rainfall rUnoff Balance Enhanced Model (RUBEM) is a hydrological model for transforming precipitation into surface and subsurface runoff. The model is based on equations that represent the physical processes of the hydrological cycle, with spatial distribution defined by pixel, in distinct vegetated and non-vegetated covers. It has the flexibility to study a wide range of applications, including impacts of changes in climate and land use, and has flexible spatial resolution; the inputs are raster-type matrix files obtained from remote sensing data, and the model operates with a reduced number of parameters arranged in a configuration file that facilitates its modification across the study area.\n\n### Main features\n\nThe model was developed based on classical concepts of hydrological processes and equations based mainly on SPHY (TERINK et al., 2015), WEAP (YATES et al., 2005), and WetSpass-M (ABDOLLAHI et al., 2017). The main features of the developed model are:\n\n- Distributed monthly step model;\n- Hydrological processes based on the soil water balance in each pixel, with total flow calculated by composing the resulting accumulated flow according to the drainage network flow direction established by the digital elevation model (DEM);\n- Calculations for two zones: rootzone and saturated;\n- Evapotranspiration and interception processes based on vegetation indices: Leaf Area Index (LAI), Photosynthetically Active Radiation Fraction (FPAR) and Normalized Difference Vegetation Index (NDVI); and\n- Sub-pixel-level coverage classification, represented by four fractions giving the percentage of total pixel area covered exclusively by: vegetated area, bare soil area, water area and impervious area.\n\n\n## Getting Started\n\nTo get a local copy up and running, follow these simple steps.\n\n### Prerequisites\n\nThese are the things you need to use the software and how to install them.\n\n* From the Miniconda base environment, create a new conda environment\n ```sh\n conda create --name rubem python=3.7\n ```\n * Activate the new environment\n\n Windows\n\n ```powershell\n conda activate rubem\n ```\n \n Linux, macOS\n \n ```sh\n source activate rubem\n ```\n \n * Install the GDAL conda package\n \n ```sh\n conda install -c conda-forge gdal \n ```\n \n * Install the PCRaster conda package\n \n ```sh\n conda install -c conda-forge pcraster \n ```\n\n### Installation\n\n1. Download the latest release zip file from the [releases page](https://github.com/LabSid-USP/RUBEM/releases);\n2. Extract the zip, and copy the extracted root directory into a local directory.\n\n\n\n## Usage\n\n * Typical usage example\n ```sh\n python rubem.py --configfile config.ini\n ```\n * Help usage example\n ```sh\n python rubem.py -h\n ``` \n\n_For more examples, please refer to the [Documentation](https://rubem.readthedocs.io/en/latest)_.\n\n\n## Roadmap\n\nSee the [open issues](https://github.com/LabSid-USP/RUBEM/issues) for a list of proposed features (and known issues).\n\n\n\n## Contributing\n\nContributions are what make the open source community such an amazing place to learn, inspire, and create. Any contributions you make are **greatly appreciated**. See [`CONTRIBUTING.md`](https://github.com/LabSid-USP/RUBEM/blob/main/CONTRIBUTING.md) for more information.\n\n\n## License\n\nDistributed under the GPLv3 License. 
See [`LICENSE.md`](https://github.com/LabSid-USP/RUBEM/blob/main/LICENSE.md) for more information.\n\n\n## Contact\n\nIn any of our communication channels please abide by the [RUBEM Code of Conduct](https://github.com/LabSid-USP/RUBEM). In summary, being friendly and patient, considerate, respectful, and careful in your choice of words.\n\n- Contact us at: [rubem.hydrological@labsid.eng.br](mailto:rubem.hydrological@labsid.eng.br)\n\n- Support Form: [https://forms.gle/JmxWKoXh4C29V2rD8](https://forms.gle/JmxWKoXh4C29V2rD8)\n\n- Project Link: [https://github.com/LabSid-USP/RUBEM](https://github.com/LabSid-USP/RUBEMHydrological)\n\n\n\n## Acknowledgements\n\n* [Laborat\xc3\xb3rio de Sistemas de Suporte a Decis\xc3\xb5es Aplicados \xc3\xa0 Engenharia Ambiental e de Recursos H\xc3\xaddricos](http://labsid.eng.br/Contato.aspx)\n* [Departamento de Engenharia Hidr\xc3\xa1ulica e Ambiental da Escola Polit\xc3\xa9cnica da Universidade de S\xc3\xa3o Paulo](http://www.pha.poli.usp.br/)\n* [Fundo Patrimonial Amigos da Poli](https://www.amigosdapoli.com.br/)\n\n\n[readthedocs-shield]: https://readthedocs.org/projects/rubem/badge/?version=latest\n[readthedocs-url]: https://rubem.readthedocs.io/en/latest/?badge=latest\n[contributors-shield]: https://img.shields.io/github/contributors/LabSid-USP/RUBEM\n[contributors-url]: https://github.com/LabSid-USP/RUBEM/graphs/contributors\n[commit-activity-shield]: https://img.shields.io/github/commit-activity/m/LabSid-USP/RUBEM\n[commit-activity-url]: https://github.com/LabSid-USP/RUBEM/pulse\n[forks-shield]: https://img.shields.io/github/forks/LabSid-USP/RUBEM\n[forks-url]: https://github.com/LabSid-USP/RUBEM/network/members\n[stars-shield]: https://img.shields.io/github/stars/LabSid-USP/RUBEM\n[stars-url]: https://github.com/LabSid-USP/RUBEM/stargazers\n[issues-shield]: https://img.shields.io/github/issues/LabSid-USP/RUBEM\n[issues-url]: https://github.com/LabSid-USP/RUBEM/issues\n[license-shield]: https://img.shields.io/github/license/LabSid-USP/RUBEM\n[license-url]: https://github.com/LabSid-USP/RUBEM/blob/master/LICENSE.md\n[youtube-shield]: https://img.shields.io/youtube/channel/subscribers/UCZOGKRCW5mQOY9_w8L7lKJg\n[youtube-url]: https://www.youtube.com/user/labsidengbr\n'",,"2021/01/28, 19:58:34",1000,GPL-3.0,38,370,"2023/05/17, 21:47:36",1,54,101,11,161,0,0.2,0.12101910828025475,"2023/05/17, 21:45:08",v0.2.2-beta.1,0,3,false,,true,true,,,https://github.com/LabSid-USP,www.labsid.eng.br,"São Paulo, Brazil",,,https://avatars.githubusercontent.com/u/70539830?v=4,,, pywatershed,"A sustainable integrated, hydrologic modeling framework for the U.S. 
Geological Survey.",EC-USGS,https://github.com/EC-USGS/pywatershed.git,github,"surfacehydrology,hydrology,water,modflow,python,modflow6,prms,swb",Freshwater and Hydrology,"2023/10/25, 01:28:53",24,0,20,true,Jupyter Notebook,Enterprise Capacity Development Workspace,EC-USGS,"Jupyter Notebook,Fortran,Python,C,Makefile,Shell,Meson,Batchfile",https://pywatershed.readthedocs.io,"b'# pywatershed\n\n[![ci-badge](https://github.com/ec-usgs/pywatershed/workflows/CI/badge.svg?branch=develop)](https://github.com/ec-usgs/pywatershed/actions?query=workflow%3ACI)\n[![codecov-badge](https://codecov.io/gh/ec-usgs/pywatershed/branch/main/graph/badge.svg)](https://codecov.io/gh/ec-usgs/pywatershed)\n[![Documentation Status](https://readthedocs.org/projects/pywatershed/badge/?version=latest)](https://pywatershed.readthedocs.io/en/latest/?badge=latest)\n[![asv](http://img.shields.io/badge/benchmarked%20by-asv-green.svg?style=flat)](https://github.com/ec-usgs/pywatershed)\n[![Formatted with black](https://img.shields.io/badge/code%20style-black-000000.svg)](https://github.com/python/black)\n\n[![Available on pypi](https://img.shields.io/pypi/v/pywatershed.svg)](https://pypi.python.org/pypi/pywatershed)\n[![PyPI Status](https://img.shields.io/pypi/status/pywatershed.svg)](https://pypi.python.org/pypi/pywatershed)\n[![PyPI Versions](https://img.shields.io/pypi/pyversions/pywatershed.svg)](https://pypi.python.org/pypi/pywatershed)\n\n[![Anaconda-Server Badge](https://anaconda.org/conda-forge/pywatershed/badges/version.svg)](https://anaconda.org/conda-forge/pywatershed)\n[![Anaconda-Server Badge](https://anaconda.org/conda-forge/pywatershed/badges/platforms.svg)](https://anaconda.org/conda-forge/pywatershed)\n\n[![WholeTale](https://raw.githubusercontent.com/whole-tale/wt-design-docs/master/badges/wholetale-explore.svg)](https://dashboard.wholetale.org/run/64ae29e8a887f48b9f173678?tab=metadata)\n\n\n\n\n**Table of Contents**\n\n- [Purpose](#purpose)\n- [Installation](#installation)\n- [Contributing](#contributing)\n- [Example Notebooks](#example-notebooks)\n- [Overview of Repository Contents](#overview-of-repository-contents)\n- [Disclaimer](#disclaimer)\n\n\n\n## Purpose\n\nThe purpose of this repository is to refactor and redesign the [PRMS modeling\nsystem](https://www.usgs.gov/software/precipitation-runoff-modeling-system-prms)\nwhile maintaining its functionality. Code modernization is a step towards\nunification with [MODFLOW 6 (MF6)](https://github.com/MODFLOW-USGS/modflow6).\n\nThe following motivations are taken from our [AGU poster from December\n2022](https://agu2022fallmeeting-agu.ipostersessions.com/default.aspx?s=05-E1-C6-40-DF-0D-4D-C7-4E-DE-D2-61-02-05-8F-0A)\nwhich provides additional details on motivations, project status, and current\ndirections of this project as of approximately January 2023.\n\nGoals of the USGS Enterprise Capacity (EC) project include:\n\n * A sustainable integrated, hydrologic modeling framework for the U.S.\n Geological Survey (USGS)\n * Interoperable modeling\xc2\xa0across the USGS, partner agencies, and academia\n\nGoals for EC Watershed Modeling:\n\n * Couple the Precipitation-Runoff Modeling System (PRMS, e.g. Regan et al,\n\t2018)\xc2\xa0 with MODFLOW 6 (MF6, e.g. 
Langevin et al, 2017) in a sustainable\n\tway\n * Redesign PRMS to be more modern and flexible\n * Prioritize process representations in the current National Hydrological\n Model (NHM) based on\xc2\xa0PRMS 5.2.1\n\nPrototype an EC watershed model: ""pywatershed""\n\n * Redesign PRMS quickly in python\n * Couple to MF6\xc2\xa0via BMI/XMI interface (Hughes et al, 2021;\xc2\xa0Hutton et al, 2020)\n * Establish a\xc2\xa0prototyping ground for EC codes that couples\xc2\xa0to the compiled\n\tframework: low cost proof of concepts (at the price of potentially\xc2\xa0less\n computational performance)\n * Enable process representation\xc2\xa0hypothesis testing\n * Use cutting-edge techniques and technologies to improve models \n * Machine learning, automatic differentiation \n * Address challenges of modeling across space and time scales \n * Transition prototype watershed model to compiled EC code\n\n\n## Installation\n\n`pywatershed` uses Python 3.9 or 3.10.\n\nThe `pywatershed` package is [available on\nPyPI](https://pypi.org/project/pywatershed/) but installation of all\ndependency sets (lint, test, optional, doc, and all) may not be reliable on\nall platforms. \n\nThe `pywatershed` package is [available on\nconda-forge](https://anaconda.org/conda-forge/pywatershed). This installation\nis the quickest way to get up and running but provides only the minimal set of\ndependencies (it includes neither jupyter nor all packages needed for running the\nexample notebooks, and is not suitable for development purposes). \n\nWe recommend the following installation procedures to get fully-functional\nenvironments for running `pywatershed` and its example notebooks. We strongly\nrecommend using [Mamba](https://mamba.readthedocs.io/en/latest/) to first\ninstall dependencies from the `environment_w_jupyter.yml` file in the\nrepository before installing `pywatershed` itself. Mamba will be much faster\nthan Anaconda (but the conda command could also be used). \n\nIf you wish to use the stable release, you will use `main` in place of \n`<branch>` in the following commands. If you want to follow development, you\'ll\nuse `develop` instead.\n\nWithout using `git` (directly), you may:\n```\ncurl -L -O https://raw.githubusercontent.com/EC-USGS/pywatershed/<branch>/environment_w_jupyter.yml\nmamba env create -f environment_w_jupyter.yml\nconda activate pws\npip install git+https://github.com/EC-USGS/pywatershed.git@<branch>\n```\n\nOr to use `git` and to be able to develop:\n\n```\ngit clone https://github.com/EC-USGS/pywatershed.git\ncd pywatershed\nmamba env create -f environment_w_jupyter.yml\nconda activate pws\npip install -e .\n```\n\n(If you want to name the environment other than the default `pws`, use the\ncommand \n`mamba env update --name your_env_name --file environment_w_jupyter.yml --prune`.\nYou will also need to activate this environment by name.)\n\n\nWe install the `environment_w_jupyter.yml` to provide all known dependencies \nincluding those for running the example notebooks. (The `environment.yml` \ndoes not contain jupyter or jupyterlab because this interferes with installation\non WholeTale; see the Example Notebooks section below.)
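\n\nAfter either route, a quick import check confirms the environment works (a minimal sketch; the standard `__version__` attribute is assumed here rather than documented above):\n\n```python\n# Run inside the activated pws environment.\nimport pywatershed\n\nprint(pywatershed.__version__)  # assumed standard version attribute\n```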
\n\n## Contributing\n\nSee the [developer documentation](./DEVELOPER.md) for instructions on setting up\na development environment. See the [contribution guide](./CONTRIBUTING.md) to\ncontribute to this project.\n\n## Example Notebooks\n\nFor introductory example notebooks, look in the\n[`examples/`](https://github.com/EC-USGS/pywatershed/tree/main/examples)\ndirectory in the repository. Numbered starting at 00, these are meant to be\ncompleted in order. Non-numbered notebooks cover additional topics. These\nnotebooks are not yet covered by testing and so may be expected to have some\nissues until they are added to testing. In `examples/developer/` there are\nnotebooks of interest to developers who may want to learn about running the\nsoftware tests.\n\nThough no notebook outputs are saved in GitHub, these notebooks can easily be\nnavigated to and run in WholeTale containers (free but sign-up or log-in\nrequired). This is a very easy and quick way to get started without needing to\ninstall pywatershed requirements yourself. WholeTale is an NSF funded project\nand supports logins from many institutions, e.g. the USGS, and you may not need\nto register.\n\nThere are containers for both the `main` and `develop` branches.\n\n[![WholeTale](https://raw.githubusercontent.com/whole-tale/wt-design-docs/master/badges/wholetale-explore.svg)](https://dashboard.wholetale.org)\n\n * [WholeTale container for latest release (main\n\tbranch)](https://dashboard.wholetale.org/run/64ae29e8a887f48b9f173678?tab=metadata)\n * [WholeTale container for develop\n\tbranch](https://dashboard.wholetale.org/run/64ae25c3a887f48b9f1735c8?tab=metadata)\n\nWholeTale will give you a jupyter-lab running in the root of this\nrepository. You can navigate to `examples/` and then open and run the notebooks\nof your choice. The develop container may require the user to update the\nrepository (`git pull origin`) to stay current with development.\n\n## Overview of Repository Contents\n\nThe contents of the directories at this level are described below. Therein you may discover\nanother README.md for more information.\n\n```\n.github/: GitHub actions, scripts and Python environments for continuous integration (CI) and releasing\nasv_benchmarks/: performance benchmarking by ASV\nautotest/: pywatershed package testing using pytest\nautotest_exs/: pywatershed example notebook testing using pytest\nbin/: PRMS executables distributed\ndoc/: Package/code documentation source code\nevaluation/: tools for evaluation of pywatershed\nexamples/: How to use the package, mostly jupyter notebooks\nprms_src/: PRMS source used for generating executables in bin/\npywatershed/: Package source\nreference/: Ancillary materials for development\nresources/: Static stuff like images\ntest_data/: Data used for automated testing\n```\n\n## Disclaimer\n\nThis information is preliminary or provisional and is subject to revision. It is\nbeing provided to meet the need for timely best science. The information has not\nreceived final approval by the U.S. Geological Survey (USGS) and is provided on\nthe condition that neither the USGS nor the U.S. Government shall be held liable\nfor any damages resulting from the authorized or unauthorized use of the\ninformation.\n\nFrom: https://www2.usgs.gov/fsp/fsp_disclaimers.asp#5\n\nThis software is in the public domain because it contains materials that\noriginally came from the U.S. Geological Survey, an agency of the United States\nDepartment of Interior. For more information, see the [official USGS copyright\npolicy](https://www.usgs.gov/information-policies-and-instructions/copyrights-and-credits\n""official USGS copyright policy"").\n\nAlthough this software program has been used by the USGS, no warranty, expressed\nor implied, is made by the USGS or the U.S. 
Government as to the accuracy and\nfunctioning of the program and related program material nor shall the fact of\ndistribution constitute any such warranty, and no responsibility is assumed by\nthe USGS in connection therewith. This software is provided ""AS IS.""\n'",,"2022/01/20, 21:15:49",643,CC0-1.0,558,1138,"2023/10/25, 15:16:02",26,202,221,124,0,3,0.4,0.12790697674418605,"2023/07/21, 03:18:35",0.2.1,0,9,false,,false,true,,,https://github.com/EC-USGS,,,,,https://avatars.githubusercontent.com/u/100240913?v=4,,, pyMETRIC,"A set of Python based tools developed for estimating and mapping evapotranspiration for large areas, utilizing the Landsat image archive.",WSWUP,https://github.com/WSWUP/pymetric.git,github,,Freshwater and Hydrology,"2022/07/01, 19:45:44",30,0,6,false,Python,Western States Water Use Program (WSWUP),WSWUP,Python,,"b'# pyMETRIC\n\npyMETRIC is a set of Python based tools developed for estimating and mapping evapotranspiration (ET) for large areas, utilizing the Landsat image archive. This framework currently computes ET estimates using the [METRIC](http://www.uidaho.edu/cals/kimberly-research-and-extension-center/research/water-resources) surface energy balance model, developed at the University of Idaho.\n \nIn order to produce ET estimates, pyMETRIC produces ancillary rasters from Landsat data products. These products are stored within the pyMETRIC data structure, and may be useful for tasks tangentially related to ET mapping. The raster datasets produced during typical processing include the following:\n- Albedo\n- LAI (Leaf Area Index)\n- NDVI (Normalized Difference Vegetation Index)\n- NDWI (Normalized Difference Water Index)\n- Top of Atmosphere Reflectance\n\nIn addition to creating ET maps from Landsat images, pyMETRIC includes functionality to interpolate annual/seasonal/monthly ET maps, from individually processed ET maps.\n\n## Install\n\nDetails on installing pyMETRIC, Python, and necessary modules can be found in the [installation instructions](docs/INSTALL.md).\n\n## Example\n\nA detailed walk-through on the setup and operation of pyMETRIC has been assembled in the following series of documentation. These examples are set up to process a portion of the Harney Basin, located in eastern Oregon. The documentation is contained in the following links:\n1. [Data Preparation](docs/EXAMPLE_DATA.md)\n2. [Project Setup](docs/EXAMPLE_SETUP.md)\n3. [Running METRIC](docs/EXAMPLE_METRIC.md)\n\n## References\n\n* [Satellite-Based Energy Balance for Mapping Evapotranspiration with Internalized Calibration (METRIC)-Model](https://ascelibrary.org/doi/abs/10.1061/(ASCE)0733-9437(2007)133:4(380))\n* [Satellite-Based Energy Balance for Mapping Evapotranspiration with Internalized Calibration (METRIC)-Applications](https://ascelibrary.org/doi/abs/10.1061/(ASCE)0733-9437(2007)133:4(395))\n* [Assessing calibration uncertainty and automation for estimating evapotranspiration from agricultural areas using METRIC](https://www.dri.edu/images/stories/divisions/dhs/dhsfaculty/Justin-Huntington/Morton_et_al._2013.pdf)\n\n## Limitations\n\nMETRIC requires an assemblage of several datasets in order to produce accurate estimates of evapotranspiration. The pyMETRIC framework serves to download and process the required data. Please note that this code is written for the data as it is currently provided; however, the data and its formatting are controlled by the data providers and by third-party hosts. 
The maintainers of pyMETRIC will attempt to keep the package functional; however, changes in the data and data availability may impact the functionality of pyMETRIC.\n\n## Directory Structure\n\nWhen initially downloading or cloning pyMETRIC, this directory does not contain data necessary for estimating ET. As python scripts are run as prescribed in [""Data Preparation""](docs/EXAMPLE_DATA.md) and [""Project Setup""](docs/EXAMPLE_SETUP.md), the top level directory will be populated with additional directories containing support data. These folders will be assigned names according to the directory contents (e.g. ""cdl"", ""dem"", ""gridmet"", etc.). Ideally these data directories will be populated with project-agnostic data (for example, ""dem"" may contain a digital elevation model (DEM) for the entire continental United States). The support data will be processed by pyMETRIC, which will isolate and subset the relevant data for processing. \n\nTo serve as an example, the ""example"" directory is included in the top-level directory. The ""example"" directory is an example of a ""project directory"", which should contain information specific to your chosen study area or area of interest. As pyMETRIC is run according to [""Running METRIC""](docs/EXAMPLE_METRIC.md), the processed data will be stored within the project directory.\n'",,"2018/05/10, 23:57:09",1994,Apache-2.0,0,156,"2022/08/02, 22:51:55",35,7,50,0,449,3,0.0,0.23357664233576647,"2018/06/29, 16:03:02",v0.1.0,0,5,false,,false,false,,,https://github.com/WSWUP,https://www.dri.edu/western-states-water-use-program,"Reno, NV",,,https://avatars.githubusercontent.com/u/23201768?v=4,,, SWAT,"The Soil & Water Assessment Tool is a small watershed to river basin-scale model used to simulate the quality and quantity of surface and ground water and predict the environmental impact of land use, land management practices, and climate change.",blacklandgrasslandmodels/swat_development/src,,custom,,Freshwater and Hydrology,,,,,,,,,,https://bitbucket.org/blacklandgrasslandmodels/swat_development/src/master/,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,, HydroMap,Mapping of groundwater level for realistic flowpaths using semi-automated kriging.,peterson-tim-j,https://github.com/peterson-tim-j/HydroMap.git,github,"hydrology,hydrogeology,groundwater,geostatistics,geostatiscs,gstat,kriging,potentiometer,depth-to-watertable",Freshwater and Hydrology,"2023/09/05, 00:52:23",12,0,2,true,R,,,R,,"b'# HydroMap\n\n_HydroMap_ is a powerful R package for kriging interpolation of sparse groundwater level observations - that is, groundwater potentiometry. Key features include:\n * land surface elevation, topographic form (e.g. valleys and ridges) and the smoothness of the groundwater head (relative to the terrain) are accounted for in the mapping.\n * categorical land types such as geology can be included in the mapping.\n * fixed head boundary conditions, such as the ocean, can be included and importantly they only have an influence if there are no nearby observation points.\n * all mapping parameters can be calibrated using split-sample maximum likelihood estimation.\n * kriging variogram parameters can be calibrated - which trials have shown to significantly reduce the prediction error.\n\nImportantly, this is a **beta** release. There is basic documentation (see [PDF MANUAL](https://github.com/peterson-tim-j/HydroMap/blob/master/hydroMap.pdf)) and the code has been tested. 
In the future, additional functions will be made accessible to the user and documented.\n\nTo illustrate the performance of _HydroMap_, Fig. 1 shows the interpolated watertable depth for Victoria, Australia, in April 2000. It was derived using the conventional kriging approach of the DEM elevation being the only predictor of the head. It shows the water table to be very smooth - which somewhat counter-intuitively arises because the interpolated water level elevation is very noisy. That is, whenever the DEM elevation rises so do the heads. Fig. 2 shows that when the above _HydroMap_ features are included in the mapping then the heads become smooth and, consequently, the depth to the water table becomes ""noisy"" with significantly greater variability shown in the mountainous regions of the east.\n\n![Conventional kriging approach](https://user-images.githubusercontent.com/8623994/44770420-57776580-abab-11e8-9b95-ff54604ba6e3.png)\nFigure 1. Depth to water table for Victoria, Australia, derived using only the DEM elevation as a predictor of groundwater head.\n\n![HydroMap approach](https://user-images.githubusercontent.com/8623994/44770783-79bdb300-abac-11e8-9404-d0d7a4b4e9f4.png)\nFigure 2. Depth to water table for Victoria, Australia, derived using the key features of _HydroMap_.\n\n# Installation\n\nTo install the package, follow these steps:\n\n1. Open R and install the required packages using the following R command: `install.packages(c(""sp"", ""grid"", ""gstat"",""raster"", ""parallel"", ""rgenoud"", ""snow"",""RSAGA""))`\n1. Download the [latest release](https://github.com/peterson-tim-j/HydroMap/releases).\n1. Install _HydroMap_ using the R command: `install.packages(\'HydroMap.tar.gz\',repos = NULL)`\n1. Load _HydroMap_ using the R commands: `library(""HydroMap"")`\n1. Open the help documentation using `?krige.head` and follow the example. \n\n# Examples: Data setup\nThe examples below all use the same gridded data and point data. Below, the data is imported and cropped to central northern Victoria. Also, here the calibration training and prediction data are pre-defined. This is done to ensure that the results from each of the following examples are comparable. The maps below show the location and head of the calibration training data and prediction data.\n\nFigure 3. Point data for examples\n\n```R\nlibrary(RSAGA)\n\n# Setup RSAGA with the paths to the required modules. Note you will need to do this yourself for your own\n# installation of SAGA.\nset.env(saga.path = \'C:/Program Files (x86)/saga-9.0.1_x64\',saga.modules = \'C:/Program Files (x86)/saga-9.0.1_x64/tools\')\n\n# Load water table observations from April 2000 for Victoria, Australia and a 250m state-wide DEM.\n# Note, to import your own point data (as a CSV file) use the command: obs.data <- import.pointData(\'myPointData.csv\')\n# Using this import function will ensure that no single location has >1 observations. For details of this import function \n# use: ?import.pointData\ndata(\'victoria.groundwater\')\n\n# Crop this state-wide DEM and data points to a small region in the centre north. 
\n# If you have your own ESRI ASC DEM, then you can import it using: DEM = import.DEM(\'myFile.asc\')\nDEM <- raster::crop(raster::raster(DEM), raster::extent(2400000, 2500000, 2550000, 2650000))\nDEM = as(DEM,\'SpatialGridDataFrame\')\nobs.data <- raster::crop(obs.data, DEM, inverse = F) \n\n# Load a model variogram and mapping parameters found to be effective.\ndata(\'mapping.parameters\')\n\n# Calculate the depth to water table (DTWT)\nobs.data$DTWT = obs.data$elev - obs.data$head\n\n# Convert DTWT to categories, to aid mapping\nobs.data$DTWT.cats =cut(obs.data$DTWT,breaks=c(-Inf,0,2,5,10,25,50, Inf ), \n labels=c(\'<0m\',\'0-2m\',\'2-5m\',\'5-10m\',\'10-25m\',\'25-50m\',\'>50m\'),include.lowest=T)\n\n# Enforce a minimum error variance of (5 cm)^2 for the groundwater head elevation. \nobs.data$total_err_var = pmax(obs.data$total_err_var, 0.05^2)\n\n# Define the prediction data by randomly sampling 25% of the observed data points. \n# The remaining 75% of data points are used for training. \nnObs = nrow(obs.data);\nnObs.prediction = floor(0.25*nObs)\nset.seed(123456, sample.kind=\'default\')\npredictionData.index = sample(1:nObs, nObs.prediction, replace=F)\npredictionData = obs.data[predictionData.index,]\ntrainingData = obs.data[-predictionData.index,]\n\n# Plot the depth to water table of the prediction and training data\npng(\'Example1_pointData_A.png\')\nsp::spplot(predictionData, \'DTWT.cats\',scales = list(draw = TRUE), main=\'Prediction data DTWT [m]\')\ndev.off()\npng(\'Example1_pointData_B.png\')\nsp::spplot(trainingData, \'DTWT.cats\',scales = list(draw = TRUE), main=\'Training data DTWT [m]\')\ndev.off()\n\n# Define the variogram model. This can be either a full variogram object (see `gstat::vgm`),\n# which is then calibrated, or simply a type of variogram, as used here. When the type is input then \n# the initial variogram parameters are estimated using the residuals of the observed head and the \n# covariates, using ordinary least squares. Later calibration of these parameters uses a range of 0.1\n# and 10 times the initial estimates.\nvariogram.model = \'Mat\';\n```\n# Example 1: Kriging with only land surface elevation \n\nThis is the simplest example. The point groundwater head observations are spatially interpolated using only the land surface elevation (from the digital elevation model, DEM). Only the variogram parameters are calibrated to minimise the prediction error. \n```R\n# Define the covariates for the kriging.\nf <- as.formula(\'head ~ elev\')\n\n# Calibrate the mapping parameters with 25% of the data randomly selected and using 2 cores.\n# NOTE 1: The rigor of the calibration is best controlled using the pop.size.multiplier input. \n# Here the size of the population of random guesses equals four times the number of calibration \n# parameters.\ncalib.results.example1 <- krige.head.calib(formula=f, grid=DEM, data=trainingData, newdata=predictionData, \n nmin=0, nmax=Inf, maxdist=Inf, omax=0, data.errvar.colname=\'total_err_var\',\n model = variogram.model, fit.variogram.type=1, smooth.std =NA,\n pop.size.multiplier=4, max.generations=100, debug.level=0, use.cluster = 2)\n\n# Do the interpolation of the point data using the calibration results. \n# NOTE: All of the observed data is used for the calibration. All CPU cores are also used. 
\nhead.grid.example1 <- krige.head(calibration.results = calib.results.example1, data=obs.data, use.cluster = T)\n\n# Map the head elevation and kriging uncertainty.\npng(\'Example1_head.png\')\nraster::plot(raster::raster(head.grid.example1),\'head\')\nraster::contour(raster::raster(head.grid.example1,1), levels = seq(70,125,by=5), add=T)\ndev.off()\n\n# Categorise the DTWT to seven classes.\nhead.grid.example1$DTWT.cats =cut(head.grid.example1$DTWT,breaks=c(-Inf,0,2,5,10,25,50, Inf ), \n labels=c(\'<0m\',\'0-2m\',\'2-5m\',\'5-10m\',\'10-25m\',\'25-50m\',\'>50m\'),include.lowest=T)\n\n# Map the categorised depth to water table.\npng(\'Example1_DTWT.png\')\nsp::spplot(head.grid.example1,\'DTWT.cats\', scales = list(draw = TRUE))\ndev.off()\n```\n\n# Example 2: Kriging with land surface elevation and smoothing\n\nIn this example, point groundwater head observations are spatially interpolated using co-variates of the land surface elevation (from the digital elevation model, DEM) and a local smoothing of the DEM. The magnitude of the smoothing is calibrated along with the variogram parameters. Note, the only differences from example 1 are the formula and the parameters being calibrated (i.e. `smooth.std`). Below is a map of the final estimate of the groundwater elevation (head), and depth to water table (DTWT).\n\nFigure 6. Example 2 groundwater elevation.\n\nFigure 7. Example 2 depth to water table.\n
\n\n```R\n# Define the covariates for the kriging.\nf <- as.formula(\'head ~ elev + smoothing\')\n\n# Calibrate the mapping parameters with 25% of the data randomly selected and using 2 cores.\n# NOTE: Here the smoothing parameter can vary between 0.5 and 5.0. \ncalib.results.example2 <- krige.head.calib(formula=f, grid=DEM, data=trainingData, newdata=predictionData, \n nmin=0, nmax=Inf, maxdist=Inf, omax=0, data.errvar.colname=\'total_err_var\', model = \n variogram.model, fit.variogram.type=1, smooth.std = c(0.5, 5.0),\n pop.size.multiplier=4, max.generations =200, debug.level=0, use.cluster = 2)\n\n# Do the interpolation of the point data using the calibration results. \n# NOTE: All of the observed data is used for the calibration. All CPU cores are also used. \nhead.grid.example2 <- krige.head(calibration.results = calib.results.example2, data=obs.data, use.cluster = T)\n\n# Map the head elevation and kriging uncertainty.\npng(\'Example2_head.png\')\nraster::plot(raster::raster(head.grid.example2),\'head\')\nraster::contour(raster::raster(head.grid.example2,1), levels = seq(70,125,by=5), add=T)\ndev.off()\n\n# Categorise the DTWT to seven classes.\nhead.grid.example2$DTWT.cats =cut(head.grid.example2$DTWT,breaks=c(-Inf,0,2,5,10,25,50, Inf ), \n labels=c(\'<0m\',\'0-2m\',\'2-5m\',\'5-10m\',\'10-25m\',\'25-50m\',\'>50m\'),include.lowest=T)\n\n# Map the categorised depth to water table.\npng(\'Example2_DTWT.png\')\nsp::spplot(head.grid.example2,\'DTWT.cats\', scales = list(draw = TRUE))\ndev.off()\n```\n\n# Example 3: Kriging with land surface elevation, smoothing and MrVBF and MrRTF\n\nIn this example, point groundwater head observations are spatially interpolated using co-variates of the land surface elevation (from the digital elevation model, DEM), the MrVBF measure of valley bottom flatness, the MrRTF measure of ridge top flatness, and a local smoothing of the DEM. The magnitude of the smoothing is calibrated along with the variogram parameters. Below is a map of the final estimate of the groundwater elevation (head), and depth to water table (DTWT).\n\nFigure 8. Example 3 groundwater elevation.\n\nFigure 9. Example 3 depth to water table.\n
\n\n```R\n# Define the covariates for the kriging.\nf <- as.formula(\'head ~ elev + smoothing + log(MrVBF) + log(MrRTF)\')\n\n# Calibrate the mapping parameters with 25% of the data randomly selected and using 2 cores.\ncalib.results.example3 <- krige.head.calib(formula=f, grid=DEM, data=trainingData, newdata=predictionData, \n nmin=0, nmax=Inf, maxdist=Inf, omax=0, data.errvar.colname=\'total_err_var\', model = \n variogram.model, fit.variogram.type=1, smooth.std = c(0.5, 5.0),\n pop.size.multiplier=4, debug.level=0, use.cluster = 2)\n\n# Do the interpolation of the point data using the calibration results. \n# NOTE: All of the observed data is used for the calibration. All CPU cores are also used. \nhead.grid.example3 <- krige.head(calibration.results = calib.results.example3, data=obs.data, use.cluster = T)\n\n# Map the head elevation and kriging uncertainty.\npng(\'Example3_head.png\')\npar(mar = c(2, 2, 1, 1))\nraster::plot(raster::raster(head.grid.example3),\'head\')\nraster::contour(raster::raster(head.grid.example3,1), levels = seq(70,125,by=5), add=T)\ndev.off()\n\n# Categorise the DTWT to seven classes.\nhead.grid.example3$DTWT.cats =cut(head.grid.example3$DTWT,breaks=c(-Inf,0,2,5,10,25,50, Inf ), \n labels=c(\'<0m\',\'0-2m\',\'2-5m\',\'5-10m\',\'10-25m\',\'25-50m\',\'>50m\'),include.lowest=T)\n\n# Map the categorised depth to water table.\npng(\'Example3_DTWT.png\')\npar(mar = c(1, 1, 1, 1))\nsp::spplot(head.grid.example3,\'DTWT.cats\', scales = list(draw = TRUE))\ndev.off()\n```'",,"2018/07/08, 23:56:17",1935,GPL-3.0,41,55,"2023/08/15, 22:50:05",1,1,5,4,71,0,0.0,0.0,"2023/09/05, 01:05:03",0.2.3.4,0,1,false,,false,false,,,,,,,,,,, HydroSight,"A statistical toolbox for data-driven insights into groundwater dynamics and aquifer properties. Many hundreds of bores can be easily analysed, all without any programming.",peterson-tim-j,https://github.com/peterson-tim-j/HydroSight.git,github,"hydrology,groundwater,timeseries,hydrogeology,matlab,recharge,aquifer-parameters",Freshwater and Hydrology,"2023/04/05, 02:08:56",38,0,7,true,MATLAB,,,"MATLAB,C",,"b' \n\n# _HydroSight_: _Open-source data-driven hydrogeological insights_\n \n[![Testing](https://github.com/peterson-tim-j/HydroSight/actions/workflows/testHydroSight.yml/badge.svg)](https://github.com/peterson-tim-j/HydroSight/actions/workflows/testHydroSight.yml) [![Codecov](https://img.shields.io/codecov/c/github/peterson-tim-j/HydroSight?logo=CODECOV)](https://app.codecov.io/github/peterson-tim-j/HydroSight) [![GitHub release](https://img.shields.io/github/release/peterson-tim-j/HydroSight)](https://github.com/peterson-tim-j/HydroSight/releases/) [![View HydroSight on File Exchange](https://www.mathworks.com/matlabcentral/images/matlab-file-exchange.svg)](https://au.mathworks.com/matlabcentral/fileexchange/48546-hydrosight) [![Github All Releases](https://img.shields.io/github/downloads/peterson-tim-j/HydroSight/total.svg?style=flat)]() [![GitHub license](https://img.shields.io/github/license/peterson-tim-j/HydroSight)](https://github.com/peterson-tim-j/HydroSight/blob/master/LICENSE) [![GitHub forks](https://img.shields.io/github/forks/peterson-tim-j/HydroSight)](https://github.com/peterson-tim-j/HydroSight/network) [![GitHub stars](https://img.shields.io/github/stars/peterson-tim-j/HydroSight)](https://github.com/peterson-tim-j/HydroSight/stargazers)\n\nHydroSight is a statistical toolbox for data-driven insights into groundwater dynamics and aquifer properties. 
Many hundreds of bores can be easily analysed, all without any programming, to quantify:\n\n* drivers of groundwater trends, e.g. climate and pumping ([Shapoori et al., 2015a](https://github.com/peterson-tim-j/HydroSight/blob/master/documentation/html/papers/Shapoori_2015A.pdf)) and land use change ([Peterson and Western, 2014](https://doi.org/10.1029/2017WR021838)).\n* recharge over time ([Peterson et al., 2019](https://doi.org/10.1111/gwat.12946)).\n* aquifer hydraulic properties ([Shapoori et al., 2015c](https://github.com/peterson-tim-j/HydroSight/blob/master/documentation/html/papers/Shapoori_2015C.pdf), [Peterson et al., 2019](https://doi.org/10.1111/gwat.12946))\n* statistical identification of the major groundwater processes ([Shapoori et al., 2015b](https://github.com/peterson-tim-j/HydroSight/blob/master/documentation/html/papers/Shapoori_2015B.pdf)).\n* interpolate or extrapolate hydrographs to a regular time step ([Peterson & Western, 2018](https://doi.org/10.1029/2017WR021838)).\n* simulate groundwater level under different climate or, say, pumping scenarios.\n* hydrograph monitoring errors and outliers ([Peterson et al., 2018](https://doi.org/10.1007/s10040-017-1660-7)).\n\n## Installation Options\n\n_HydroSight_ is operating system independent and has been tested on Windows 10+ and Linux (Ubuntu 20.04 LTS). There are four installation options:\n1. Stand-alone app within Windows. The latest .exe is [available here](https://github.com/peterson-tim-j/HydroSight/releases).\n1. Install _HydroSight_ Matlab source code by (i) downloading the [source code](https://github.com/peterson-tim-j/HydroSight/releases), (ii) unzipping the downloaded file, (iii) setting the Matlab _Current Folder_ to where the file was unzipped and (iv) entering ``HydroSight`` into the Matlab _Command Window_.\n1. Install _HydroSight_ from within Matlab using the _Add-Ons_ menu item and searching for _HydroSight_. From the _Add_ button select _Add to Matlab_. Once installed, enter ``HydroSight`` into the Matlab _Command Window_. \n1. Compile your own stand-alone app from within Matlab by (i) downloading the [source code](https://github.com/peterson-tim-j/HydroSight/releases) and (ii) running the command: ``makeStandaloneHydroSight()``\n\nFor further details see the [installation wiki page](https://github.com/peterson-tim-j/HydroSight/wiki).\n\n## Examples\nMultiple examples are built into the _HydroSight_ GUI, each highlighting aspects of the above papers. Soon, each example will be supported by online videos. In the meantime major aspects of the graphical interface and the algorithms are outlined on the [wiki page](https://github.com/peterson-tim-j/HydroSight/wiki).\n\n_HydroSight_ can also be run from the Matlab command window. For an example of this [see here](https://github.com/peterson-tim-j/HydroSight/blob/master/algorithms/models/TransferNoise/Example_model/example_TFN_model.m).\n\n## What does _HydroSight_ look like?\n\nThe _HydroSight_ graphical interface includes tabs for each step in the modelling of groundwater hydrographs:\n1. Project documentation.\n2. Hydrograph outlier detection.\n1. Time-series model construction, specifically defining the data and the form of the model.\n1. Model calibration and tools to examine the internal dynamics of the calibrated model, e.g. recharge. The screenshot below shows this tab and an estimate of the annual groundwater recharge. \n1. Model simulations, allowing hydrograph decomposition, exploration of scenarios (e.g. 
different climate or pumping), hindcasting and interpolation.\n\n![_HydroSight_ Recharge estimation](https://user-images.githubusercontent.com/8623994/190363849-d6e8f457-7891-4213-8ace-71076e69e4f6.png)\n\n## Contributing\n\n_HydroSight_ is an ongoing research project that depends upon [your support](https://github.com/peterson-tim-j/HydroSight/wiki/Support#giving-support-to-hydrosight). Two easy ways to support us are:\n1. Give us a GitHub \xe2\xad\x90. \n2. Cite the relevant papers (using the ""Cite Project"" option within the GUI). \n\nAnd, if _HydroSight_ doesn\'t do what you need then [Support](https://github.com/peterson-tim-j/HydroSight/wiki/Support#giving-support-to-hydrosight) gives more options.'",",https://doi.org/10.1029/2017WR021838,https://doi.org/10.1111/gwat.12946,https://doi.org/10.1111/gwat.12946,https://doi.org/10.1029/2017WR021838,https://doi.org/10.1007/s10040-017-1660-7","2014/09/10, 01:44:57",3333,GPL-3.0,103,257,"2023/03/27, 23:28:30",5,0,59,8,212,1,0,0.0,"2023/03/29, 05:41:06",V1.41.4,0,1,false,,false,false,,,,,,,,,,, basin3d,"A generalized data synthesis model that applies across a variety of earth science observation types (hydrology, geochemistry, climate etc.).",BASIN-3D,https://github.com/BASIN-3D/basin3d.git,github,,Freshwater and Hydrology,"2023/10/26, 01:16:50",11,3,6,true,Python,,BASIN-3D,Python,,"b""[![Python package](https://github.com/BASIN-3D/basin3d/actions/workflows/main.yml/badge.svg)](https://github.com/BASIN-3D/basin3d/actions/workflows/main.yml)\n[![Pypi](https://img.shields.io/pypi/v/basin3d)](https://pypi.org/project/basin3d/)\n\n# basin3d\nBroker for Assimilation, Synthesis and Integration of eNvironmental Diverse, Distributed Datasets. \n\n![basin3d](https://user-images.githubusercontent.com/20212666/112556236-ff1a9b80-8d86-11eb-9009-25b658ce41e0.png)\n\nBASIN-3D is a software ecosystem that synthesizes diverse earth science data from a variety of remote data sources on-demand, without the need for storing data in a single database. It is designed to parse, translate, and synthesize diverse observations from well-curated repositories into standardized formats for scientific uses such as analysis and visualization.\n\nbasin3d is the core BASIN-3D application that uses a generalized data synthesis model that applies across a variety of earth science observation types (hydrology, geochemistry, climate etc.). \n\nbasin3d has available plugins that can connect to specific data sources of interest, and map the data source vocabularies to the basin3d synthesis models.\n\n\n\n## Getting Started\n\n### Install\n\nInstall a source distribution with pip:\n\n $ pip install basin3d\n \nMake sure your installation was successful:\n\n $ python\n >>> import basin3d\n >>>\n
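\nOnce a plugin is installed, data synthesis follows a register-then-query pattern. The sketch below is illustrative only: the plugin module path and the `register` call are assumptions drawn from the plugin description above, so check the basin3d documentation for the exact names.\n\n # Hypothetical sketch (names are assumptions, not a confirmed API):\n >>> from basin3d import synthesis\n >>> synthesizer = synthesis.register(['basin3d.plugins.usgs.USGSDataSourcePlugin'])\n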
\n## Documentation\n\nSee the latest basin3d documentation [here](https://basin3d.readthedocs.io/en/latest/)\n\n\n## Contributing\n\nIf you\xe2\x80\x99re interested in contributing to basin3d, check out our [contributing guidelines](CONTRIBUTING.md). They will help explain why, what, and how to get started.\n\n\n## Changelog\nSee the [changelog](https://basin3d.readthedocs.io/en/stable/changelog.html) for a history of updates and changes to basin3d\n\n## Authors\n\n* **Charuleka Varadharajan** - [LBL](https://eesa.lbl.gov/profiles/charuleka-varadharajan/)\n* **Valerie Hendrix** - [LBL](https://crd.lbl.gov/departments/data-science-and-technology/uss/staff/valerie-hendrix)\n* **Danielle Svehla Christianson** - [LBL](https://crd.lbl.gov/departments/data-science-and-technology/uss/staff/danielle-christianson/)\n* **Catherine Wong** - [LBL](https://crd.lbl.gov/departments/data-science-and-technology/uss)\n\n\n## Copyright\n\nBroker for Assimilation, Synthesis and Integration of eNvironmental Diverse, Distributed Datasets (BASIN-3D) Copyright (c) 2019, The\nRegents of the University of California, through Lawrence Berkeley National\nLaboratory (subject to receipt of any required approvals from the U.S.\nDept. of Energy). All rights reserved.\n\nIf you have questions about your rights to use or distribute this software,\nplease contact Berkeley Lab's Intellectual Property Office at\nIPO@lbl.gov.\n\nNOTICE. This Software was developed under funding from the U.S. Department\nof Energy and the U.S. Government consequently retains certain rights. As\nsuch, the U.S. Government has been granted for itself and others acting on\nits behalf a paid-up, nonexclusive, irrevocable, worldwide license in the\nSoftware to reproduce, distribute copies to the public, prepare derivative\nworks, and perform publicly and display publicly, and to permit others to do\nso.\n\n## License\n\nSee [LICENSE](https://basin3d.readthedocs.io/en/stable/license_agreement.html) file for licensing details\n\n## Acknowledgments\n\nThis research is supported as part of the Watershed Function Scientific Focus Area and a DOE Early Career Project funded by the U.S. Department of Energy, Office of Science, Office of Biological and Environmental Research under Award no. DE-AC02-05CH11231. This research used resources of the National Energy Research Scientific Computing Center (NERSC), a U.S. Department of Energy Office of Science User Facility operated under Contract No. DE-AC02-05CH11231.\n\n """,,"2020/12/09, 22:41:47",1050,CUSTOM,28,98,"2023/10/26, 01:16:52",45,45,128,50,0,1,1.1,0.5147058823529411,"2023/10/12, 12:06:19",0.5.0,0,4,false,,true,true,"narest-qa/repo37,BASIN-3D/django-basin3d,BASIN-3D/basin3d-views",,https://github.com/BASIN-3D,,,,,https://avatars.githubusercontent.com/u/73316616?v=4,,, hspfbintoolbox,A Python script and library of functions to read Hydrological Simulation Program Fortran (HSPF) binary files and print to screen.,timcera,https://github.com/timcera/hspfbintoolbox.git,github,"hydrology,simulation,cli,python",Freshwater and Hydrology,"2023/10/12, 23:59:16",8,6,4,true,Python,,,Python,,"b"".. image:: https://github.com/timcera/hspfbintoolbox/actions/workflows/python-package.yml/badge.svg\n :alt: Tests\n :target: https://github.com/timcera/hspfbintoolbox/actions/workflows/python-package.yml\n :height: 20\n\n.. image:: https://img.shields.io/coveralls/github/timcera/hspfbintoolbox\n :alt: Test Coverage\n :target: https://coveralls.io/r/timcera/hspfbintoolbox?branch=master\n :height: 20\n\n.. image:: https://img.shields.io/pypi/v/hspfbintoolbox.svg\n :alt: Latest release\n :target: https://pypi.python.org/pypi/hspfbintoolbox/\n :height: 20\n\n.. 
image:: http://img.shields.io/pypi/l/hspfbintoolbox.svg\n :alt: BSD-3 clause license\n :target: https://pypi.python.org/pypi/hspfbintoolbox/\n :height: 20\n\n.. image:: http://img.shields.io/pypi/dd/hspfbintoolbox.svg\n :alt: hspfbintoolbox downloads\n :target: https://pypi.python.org/pypi/hspfbintoolbox/\n :height: 20\n\n.. image:: https://img.shields.io/pypi/pyversions/hspfbintoolbox\n :alt: PyPI - Python Version\n :target: https://pypi.org/project/hspfbintoolbox/\n :height: 20\n\nDocumentation for hspfbintoolbox\n================================\nThe ``hspfbintoolbox`` is a Python script and library of functions to read\nHydrological Simulation Program Fortran (HSPF) binary files and print to\nscreen. The time series can then be redirected to a file, or piped to other\ncommand line programs like ``tstoolbox``.\n\nRequirements\n------------\n\n* python 3.7 or later\n\n* tstoolbox - utilities to process time-series\n\nInstallation\n------------\npip\n~~~\n.. code-block:: bash\n\n pip install hspfbintoolbox\n\nconda\n~~~~~\n.. code-block:: bash\n\n conda install -c conda-forge hspfbintoolbox\n\n\nUsage - Command Line\n--------------------\nJust run 'hspfbintoolbox --help' to get a list of subcommands:\n\n catalog\n Prints out a catalog of data sets in the binary file.\n\n extract\n Prints out data to the screen from an HSPF binary output file.\n\nFor the subcommands that output data, it is printed to the screen and you can\nthen redirect it to a file.\n\nUsage - API\n-----------\nYou can use all of the command line subcommands as functions. The function\nsignature is identical to the command line subcommands. The return is always\na PANDAS DataFrame. Input can be a CSV or TAB separated file, or a PANDAS\nDataFrame and is supplied to the function via the 'input_ts' keyword.\n\nSimply import hspfbintoolbox::\n\n import hspfbintoolbox\n\n # Then you could call the functions\n ntsd = hspfbintoolbox.extract('tests/test.hbn', 'yearly', ',905,,AGWS')\n\n # Once you have a PANDAS DataFrame you can use that as input.\n ntsd = tstoolbox.aggregate(statistic='mean', agg_interval='daily', input_ts=ntsd)\n
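\nSince ``extract`` returns a PANDAS DataFrame, its output drops straight into\npandas workflows. A minimal sketch (the HBN path and label pattern reuse the\nexample above; the output filename is illustrative)::\n\n import hspfbintoolbox\n\n ntsd = hspfbintoolbox.extract('tests/test.hbn', 'yearly', ',905,,AGWS')\n\n # Write the extracted yearly series to CSV using plain pandas.\n ntsd.to_csv('agws_yearly.csv')\n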
""",,"2014/12/14, 07:34:53",3237,BSD-3-Clause,20,182,"2022/08/14, 14:32:51",0,4,4,0,437,0,0.5,0.02285714285714291,,,0,4,false,,false,true,"tomjobes/hspf_utils,timcera/tsblender,timcera/tstoolbox,timcera/hspf_utils,timcera/hspf_water_balance,paulsenne/hsp2",,,,,,,,,, MOM6,A numerical representation of the ocean fluid with applications from the process scale to the planetary circulation scale.,NOAA-GFDL,https://github.com/NOAA-GFDL/MOM6.git,github,,Ocean Circulation Models,"2023/10/25, 15:37:12",15,0,12,true,Fortran,NOAA - Geophysical Fluid Dynamics Laboratory,NOAA-GFDL,"Fortran,M4,Python,Makefile,Shell,C",,"b'[![Read The Docs Status](https://readthedocs.org/projects/mom6/badge/?version=main)](https://mom6.readthedocs.io/en/main/?badge=main)\n[![codecov](https://codecov.io/gh/NOAA-GFDL/MOM6/branch/dev/gfdl/graph/badge.svg?token=uF8SVydCdp)](https://codecov.io/gh/NOAA-GFDL/MOM6)\n\n# MOM6\n\nThis is the MOM6 source code.\n\n\n# Where to find information\n\nStart at the [MOM6-examples wiki](https://github.com/NOAA-GFDL/MOM6-examples/wiki) which has installation instructions.\n\n[Source code documentation](http://mom6.readthedocs.io/) is hosted on Read the Docs.\n\n\n# What files are what\n\nThe top level directory structure groups source code and input files as follows:\n\n| File/directory | Purpose |\n| -------------- | ------- |\n| ```LICENSE.md``` | A copy of the GNU Lesser General Public License, version 3. |\n| ```README.md``` | This file with basic pointers to more information. |\n| ```src/``` | Contains the source code for MOM6 that is always compiled. |\n| ```config_src/``` | Contains optional source code depending on mode and configuration such as dynamic-memory versus static, ocean-only versus coupled. |\n| ```pkg/``` | Contains third party (non-MOM6 or FMS) code that is compiled into MOM6. |\n| ```docs/``` | Workspace for generated documentation. See [docs/README.md](docs/README.md) |\n| ```.testing/``` | Contains the verification test suite. See [.testing/README.md](.testing/README.md) |\n| ```ac/``` | Contains the autoconf build configuration files. See [ac/README.md](ac/README.md) |\n\n\n# Quick start guide\n\nTo quickly get started and build an ocean-only MOM6 executable, see the\n[autoconf README](ac/README.md).\n\nFor setting up an experiment, or building an executable for coupled modeling,\nconsult the [MOM6-examples wiki](https://github.com/NOAA-GFDL/MOM6-examples/wiki).\n\n\n# Disclaimer\n\nThe United States Department of Commerce (DOC) GitHub project code is provided\non an ""as is"" basis and the user assumes responsibility for its use. DOC has\nrelinquished control of the information and no longer has responsibility to\nprotect the integrity, confidentiality, or availability of the information. Any\nclaims against the Department of Commerce stemming from the use of its GitHub\nproject will be governed by all applicable Federal law. Any reference to\nspecific commercial products, processes, or services by service mark,\ntrademark, manufacturer, or otherwise, does not constitute or imply their\nendorsement, recommendation or favoring by the Department of Commerce. The\nDepartment of Commerce seal and logo, or the seal and logo of a DOC bureau,\nshall not be used in any manner to imply endorsement of any commercial product\nor activity by DOC or the United States Government.\n\nThis project code is made available through GitHub but is managed by NOAA-GFDL\nat https://gitlab.gfdl.noaa.gov.\n'",,"2021/11/12, 20:39:58",712,CUSTOM,500,11090,"2023/10/25, 16:13:15",29,448,478,301,0,9,0.9,0.6584910690482537,,,0,64,false,,false,false,,,https://github.com/NOAA-GFDL,www.gfdl.noaa.gov,"Princeton, New Jersey",,,https://avatars.githubusercontent.com/u/11219395?v=4,,, MOM5,A numerical ocean model based on the hydrostatic primitive equations.,mom-ocean,https://github.com/mom-ocean/MOM5.git,github,,Ocean Circulation Models,"2023/08/08, 03:39:59",77,0,2,true,Fortran,Modular Ocean Model (Community Supported Version),mom-ocean,"Fortran,C,C++,TeX,Shell,Pawn,HTML,Makefile,Perl,Python,CMake,Gnuplot,NASL,Pan,CSS,Pascal,Assembly",https://mom-ocean.github.io/,b'| Compile | Model run (fast) | Bit Repro (fast)|\n| :-------: | :--------: | :--------: |\n| [![GitHub Build Status](https://github.com/mom-ocean/MOM5/workflows/CI/badge.svg)](https://github.com/mom-ocean/MOM5/actions?query=workflow%3ACI) | [![Fast model run](https://accessdev.nci.org.au/jenkins/buildStatus/icon?job=mom-ocean.org/MOM5_run)](https://accessdev.nci.org.au/jenkins/buildStatus/icon?job=mom-ocean.org/MOM5_run) | [![Bit Reproducibility](https://accessdev.nci.org.au/jenkins/buildStatus/icon?job=mom-ocean.org/MOM5_bit_reproducibility)](https://accessdev.nci.org.au/jenkins/buildStatus/icon?job=mom-ocean.org/MOM5_bit_reproducibility) |\n\n\n# The Modular Ocean Model\n\nMOM is a numerical ocean model based on the hydrostatic primitive equations. 
Development of the model is managed through this GitHub site.\n\nContributions from users and developers are always welcome. Any questions should be directed to the [mailing list](https://groups.google.com/forum/#!forum/mom-users).\n\nTo get started with MOM please consult the [quickstart guide](https://mom-ocean.github.io/docs/quick-start-guide/). More information can be found in the [online documentation](https://mom-ocean.github.io/).\n\n',,"2012/06/27, 04:29:44",4138,LGPL-3.0,6,980,"2023/08/08, 03:39:59",63,234,320,9,79,7,0.5,0.5414746543778801,"2014/03/24, 03:51:28",5.1.0,0,19,false,,false,false,,,https://github.com/mom-ocean,https://mom-ocean.github.io,,,,https://avatars.githubusercontent.com/u/26398454?v=4,,, Bergen Layered Ocean Model,"Employs an isopycnic vertical coordinate, with near-isopycnic interior layers and variable density layers in the surface mixed boundary layer.",NorESMhub,https://github.com/NorESMhub/BLOM.git,github,,Ocean Circulation Models,"2023/10/24, 11:54:11",13,0,3,true,Fortran,Norwegian Earth System Modeling hub,NorESMhub,"Fortran,C++,Python,Meson,Jupyter Notebook,Shell,C,Roff",,"b'# BLOM: Bergen Layered Ocean Model\n\nThis is the source code of BLOM and includes the ocean biogeochemistry\nmodel iHAMOCC. BLOM is the ocean component of the Norwegian Earth System\nModel ().\n\n## BLOM documentation\n\nSince BLOM is mainly used in connection with the NorESM system, the BLOM user documentation has been integrated\ninto the general NorESM documentation on ReadTheDocs ().\n- [Running OMIP-type experiments](https://noresm-docs.readthedocs.io/en/latest/configurations/omips.html#blom)\n- [BLOM model description](https://noresm-docs.readthedocs.io/en/latest/model-description/ocn_model.html)\n- [iHAMOCC model description](https://noresm-docs.readthedocs.io/en/latest/model-description/ocn_model.html)\n\nThe [BLOM wiki](https://github.com/NorESMhub/BLOM/wiki) contains information about\nBLOM-specific topics that are not considered relevant for the general NorESM documentation:\n- working with the BLOM git repository on GitHub\n- running BLOM/iHAMOCC stand-alone test cases\n- details about model structure\n\n### Building a stand-alone BLOM executable with meson\nWhen compiling BLOM with NorESM, the NorESM build system should be used. A stand-alone\nBLOM executable can be built by using the meson build system.\nTo build the code ensure that [`Meson`](https://mesonbuild.com/) is available.\nThe following will build the default version of BLOM _without_ `MPI`.\n\n```bash\n$ meson setup builddir --buildtype=debugoptimized\n$ meson compile -C builddir\n```\n\nThe executable `blom` file will then be stored in the `./builddir` directory.\n\nSee the [BLOM/iHAMOCC stand-alone](https://github.com/NorESMhub/BLOM/wiki/Run-BLOM-iHAMOCC-stand-alone)\nwiki page for further instructions on how to configure the meson build system.\n\n### Running tests\nAfter successfully building the code it can be a good idea to test that the code\nbehaves as expected and that changes to the code do not affect the output.\n\nTests can be run with the following:\n\n```bash\n$ meson test -C builddir\n```\n\nThe previous command will run all the test suites defined for BLOM. To run tests\nquicker one can select a few tests to run or just a single test suite. To list\nthe available tests run `meson test -C builddir --list`. 
One can then run a\nsingle test with:\n\n```bash\n$ meson test -C builddir ""run single_column""\n```\n\n## Contribute to BLOM/iHAMOCC development\n\nThe [CONTRIBUTING.md](CONTRIBUTING.md) file includes instructions on how to contribute\nto the BLOM/iHAMOCC model system. The [BLOM wiki](https://github.com/NorESMhub/BLOM/wiki) \nincludes more detailed instructions on how to work with the BLOM git repository with your\nown fork on GitHub.\n\n## License\n\nBLOM is licensed under the GNU Lesser General Public License - see the\n[COPYING](COPYING) and [COPYING.LESSER](COPYING.LESSER) files for\ndetails.\n'",,"2020/02/04, 16:32:04",1359,LGPL-3.0,202,900,"2023/10/20, 13:57:23",27,194,250,82,5,5,6.9,0.6945054945054945,"2022/12/08, 08:00:40",v1.3.0,0,18,false,,false,true,,,https://github.com/NorESMhub,https://NorESM-docs.readthedocs.io/en/latest,,,,https://avatars.githubusercontent.com/u/12492281?v=4,,, Oceananigans.jl,Fast and friendly fluid dynamics on CPUs and GPUs.,CliMA,https://github.com/CliMA/Oceananigans.jl.git,github,"climate,ocean,fluid-dynamics,julia,gpu,climate-change,machine-learning,data-assimilation",Ocean Circulation Models,"2023/10/25, 23:19:34",827,0,115,true,Julia,Climate Modeling Alliance,CliMA,"Julia,Mathematica,Python,TeX,Dockerfile",https://clima.github.io/OceananigansDocumentation/stable,"b'\n
# Oceananigans.jl\n\n\xf0\x9f\x8c\x8a Fast and friendly ocean-flavored Julia software for simulating incompressible fluid dynamics in Cartesian and spherical shell domains on CPUs and GPUs. https://clima.github.io/OceananigansDocumentation/stable\n
\n\nOceananigans is a fast, friendly, flexible software package for finite volume simulations of the nonhydrostatic\nand hydrostatic Boussinesq equations on CPUs and GPUs.\nIt runs on GPUs (wow, fast!), though we believe Oceananigans makes the biggest waves\nwith its ultra-flexible user interface that makes simple simulations easy, and complex, creative simulations possible.\n\nOceananigans.jl is developed by the [Climate Modeling Alliance](https://clima.caltech.edu) and heroic external collaborators.\n\n## Contents\n\n* [Installation instructions](#installation-instructions)\n* [Running your first model](#running-your-first-model)\n* [The Oceananigans knowledge base](#the-oceananigans-knowledge-base)\n* [Citing](#citing)\n* [Contributing](#contributing)\n* [Movies](#movies)\n* [Performance benchmarks](#performance-benchmarks)\n\n## Installation instructions\n\nOceananigans is a [registered Julia package](https://julialang.org/packages/). So to install it,\n\n1. [Download Julia](https://julialang.org/downloads/) (version 1.9 or later).\n\n2. Launch Julia and type\n\n```julia\njulia> using Pkg\n\njulia> Pkg.add(""Oceananigans"")\n```\n\nThis installs the latest version that\'s _compatible with your current environment_.\nDon\'t forget to *be careful* \xf0\x9f\x8f\x84 and check which Oceananigans you installed:\n\n```julia\njulia> Pkg.status(""Oceananigans"")\n```\n\n## Running your first model\n\nLet\'s run a two-dimensional, horizontally-periodic simulation of turbulence using 128\xc2\xb2 finite volume cells for 4 non-dimensional time units:\n\n```julia\nusing Oceananigans\ngrid = RectilinearGrid(CPU(), size=(128, 128), x=(0, 2\xcf\x80), y=(0, 2\xcf\x80), topology=(Periodic, Periodic, Flat))\nmodel = NonhydrostaticModel(; grid, advection=WENO())\n\xcf\xb5(x, y, z) = 2rand() - 1\nset!(model, u=\xcf\xb5, v=\xcf\xb5)\nsimulation = Simulation(model; \xce\x94t=0.01, stop_time=4)\nrun!(simulation)\n```\n\nBut there\'s more: changing `CPU()` to `GPU()` makes this code run on a CUDA-enabled Nvidia GPU.\n\nDive into [the documentation](https://clima.github.io/OceananigansDocumentation/stable/) for more code examples and tutorials.\nBelow, you\'ll find movies from GPU simulations along with CPU and GPU [performance benchmarks](https://github.com/clima/Oceananigans.jl#performance-benchmarks).\n\n## The Oceananigans knowledge base\n\nIt\'s _deep_ and includes:\n\n* [Documentation](https://clima.github.io/OceananigansDocumentation/stable) that provides\n * example Oceananigans scripts,\n * tutorials that describe key Oceananigans objects and functions,\n * explanations of Oceananigans finite-volume-based numerical methods,\n * details of the dynamical equations solved by Oceananigans models, and\n * a library documenting all user-facing Oceananigans objects and functions.\n* [Discussions on the Oceananigans GitHub](https://github.com/CliMA/Oceananigans.jl/discussions), covering topics like\n * [""Computational science""](https://github.com/CliMA/Oceananigans.jl/discussions/categories/computational-science), or how to science and set up numerical simulations in Oceananigans, and\n * [""Experimental features""](https://github.com/CliMA/Oceananigans.jl/discussions?discussions_q=experimental+features), which covers new and sparsely-documented features for those who like to live dangerously.\n \n If you\'ve got a question or something, anything! 
to talk about, don\'t hesitate to [start a new discussion](https://github.com/CliMA/Oceananigans.jl/discussions/new?).\n* The [Oceananigans wiki](https://github.com/CliMA/Oceananigans.jl/wiki) contains practical tips for [getting started with Julia](https://github.com/CliMA/Oceananigans.jl/wiki/Installation-and-getting-started-with-Oceananigans), [accessing and using GPUs](https://github.com/CliMA/Oceananigans.jl/wiki/Accessing-GPUs-and-using-Oceananigans-on-GPUs), and [productive workflows when using Oceananigans](https://github.com/CliMA/Oceananigans.jl/wiki/Productive-Oceananigans-workflows-and-Julia-environments).\n* The `#oceananigans` channel on the [Julia Slack](https://julialang.org/slack/), which accesses ""institutional knowledge"" stored in the minds of the amazing Oceananigans community.\n* [Issues](https://github.com/CliMA/Oceananigans.jl/issues) and [pull requests](https://github.com/CliMA/Oceananigans.jl/pulls) also contain lots of information about problems we\'ve found, solutions we\'re trying to implement, and dreams we\'re dreaming to make tomorrow better \xf0\x9f\x8c\x88.\n\n## Citing\n\nIf you use Oceananigans.jl as part of your research, teaching, or other activities, we would be grateful if you could cite our work and mention Oceananigans.jl by name.\n\n```bibtex\n@article{OceananigansJOSS,\n doi = {10.21105/joss.02018},\n url = {https://doi.org/10.21105/joss.02018},\n year = {2020},\n publisher = {The Open Journal},\n volume = {5},\n number = {53},\n pages = {2018},\n author = {Ali Ramadhan and Gregory LeClaire Wagner and Chris Hill and Jean-Michel Campin and Valentin Churavy and Tim Besard and Andre Souza and Alan Edelman and Raffaele Ferrari and John Marshall},\n title = {{Oceananigans.jl: Fast and friendly geophysical fluid dynamics on GPUs}},\n journal = {Journal of Open Source Software}\n}\n```\n\nWe also maintain a [list of publications using Oceananigans.jl](https://clima.github.io/OceananigansDocumentation/stable/#Papers-and-preprints-using-Oceananigans). If you have work using Oceananigans.jl that you would like to have listed there, please open a pull request to add it or let us know!\n\n## Contributing\n\nIf you\'re interested in contributing to the development of Oceananigans we want your help no matter how big or small a contribution you make!\nCause we\'re all in this together.\n\nIf you\'d like to work on a new feature, or if you\'re new to open source and want to crowd-source neat projects that fit your interests, you should [start a discussion](https://github.com/CliMA/Oceananigans.jl/discussions/new?) 
right away.\n\nFor more information, check out our [contributor\'s guide](https://clima.github.io/OceananigansDocumentation/stable/contributing/).\n\n## Movies\n\n### [Deep convection](https://www.youtube.com/watch?v=kpUrxnKKMjI)\n\n[![Watch deep convection in action](https://raw.githubusercontent.com/ali-ramadhan/ali-ramadhan.Github.io/master/img/surface_temp_3d_00130_halfsize.png)](https://www.youtube.com/watch?v=kpUrxnKKMjI)\n\n### [Free convection](https://www.youtube.com/watch?v=yq4op9h3xcU)\n\n[![Watch free convection in action](https://raw.githubusercontent.com/ali-ramadhan/ali-ramadhan.Github.io/master/img/free_convection_0956.png)](https://www.youtube.com/watch?v=yq4op9h3xcU)\n\n### [Winds blowing over the ocean](https://www.youtube.com/watch?v=IRncfbvuiy8)\n\n[![Watch winds blowing over the ocean](https://raw.githubusercontent.com/ali-ramadhan/ali-ramadhan.Github.io/master/img/wind_stress_0400.png)](https://www.youtube.com/watch?v=IRncfbvuiy8)\n\n### [Free convection with wind stress](https://www.youtube.com/watch?v=ob6OMQgPfI4)\n\n[![Watch free convection with wind stress in action](https://raw.githubusercontent.com/ali-ramadhan/ali-ramadhan.Github.io/master/img/wind_stress_unstable_7500.png)](https://www.youtube.com/watch?v=ob6OMQgPfI4)\n\n## Performance benchmarks\n\nWe\'ve performed some preliminary performance benchmarks (see the [performance benchmarks](https://clima.github.io/OceananigansDocumentation/stable/appendix/benchmarks/) section of the documentation) by initializing models of various sizes and measuring the wall clock time taken per model iteration (or time step).\n\nThis is not really a fair comparison, as we haven\'t parallelized across all of the CPU\'s cores; we will revisit these benchmarks once Oceananigans.jl can run on multiple CPUs and GPUs.\n\nTo fully saturate the computing power of a GPU such as an Nvidia Tesla V100 or a Titan V, the model should have roughly 10 million grid points or more.\n\nSometimes, counter-intuitively, running with `Float32` is slower than with `Float64`. This is likely due to type mismatches causing slowdowns, as floats have to be converted between 32-bit and 64-bit, an issue that needs to be addressed meticulously.
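\n\nTo make the pitfall concrete, here is a minimal sketch of such a type mismatch. It is our illustration rather than part of the Oceananigans benchmarks, and it uses NumPy for brevity (Oceananigans itself is Julia, where the same promotion rules apply); the array names are invented:\n\n```python\nimport numpy as np\n\nn = 10_000_000\nu = np.zeros(n, dtype=np.float32)  # 32-bit model state\nk = np.full(n, 0.01)               # float64 by default: the hidden mismatch\n\nprint((k * u).dtype)                     # float64 -- u is converted up on every operation\nprint((k.astype(np.float32) * u).dtype)  # float32 -- no conversions, full Float32 benefit\n```\n\nIn Julia the same promotion happens whenever a `Float64` constant or field sneaks into an otherwise `Float32` model.\n\n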
Due to other bottlenecks such as memory accesses and\nGPU register pressure, `Float32` models may not provide much of a speedup so the main benefit becomes\nlower memory costs (by around a factor of 2).\n\n![Performance benchmark plots](https://user-images.githubusercontent.com/20099589/89906791-d2c85b00-dbb9-11ea-969a-4b8db2c31680.png)\n'",",https://doi.org/10.21105/joss.02018,https://doi.org/10.21105/joss.02018","2018/10/13, 14:15:44",1838,MIT,1262,12274,"2023/10/25, 02:17:16",202,1771,2965,602,1,55,6.4,0.6601414865781567,"2023/10/17, 22:40:19",v0.89.3,0,46,false,,false,true,,,https://github.com/CliMA,https://clima.caltech.edu,,,,https://avatars.githubusercontent.com/u/43161188?v=4,,, NEMO,Nucleus for European Modelling of the Ocean is a state-of-the-art modeling framework for research activities and forecasting services in ocean and climate sciences.,nemo,,custom,,Ocean Circulation Models,,,,,,,,,,https://forge.nemo-ocean.eu/nemo/nemo,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,, GOLD,Stands for Generalized Ocean Layer Dynamics and is a hybrid coordinate finite volume ocean model code funded by NOAA and developed by the ocean group at NOAA-GFDL and Princeton University.,archive/p,,custom,,Ocean Circulation Models,,,,,,,,,,https://code.google.com/archive/p/gold-omod/,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,, Veros,Powerful tool that makes high-performance ocean modeling approachable and fun.,team-ocean,https://github.com/team-ocean/veros.git,github,"oceanography,python,parallel,multi-core,geophysics,jax,climate,distributed,gpu",Ocean Circulation Models,"2023/10/20, 07:41:00",270,5,36,true,Python,TeamOcean,team-ocean,"Python,Cython,Cuda,C",https://veros.readthedocs.io,"b'

\nVersatile Ocean Simulation in Pure Python\n
\n\nVeros, *the versatile ocean simulator*, aims to be the swiss army knife of ocean modeling. It is a full-fledged primitive equation ocean model that supports anything between idealized toy models and [realistic, high-resolution, global ocean simulations](https://agupubs.onlinelibrary.wiley.com/doi/10.1029/2021MS002717). And because Veros is written in pure Python, the days of struggling with complicated model setup workflows, ancient programming environments, and obscure legacy code are finally over.\n\n*In a nutshell, we want to enable high-performance ocean modelling with a clear focus on flexibility and usability.*\n\nVeros supports a NumPy backend for small-scale problems, and a\nhigh-performance [JAX](https://github.com/google/jax) backend\nwith CPU and GPU support. It is fully parallelized via MPI and supports\ndistributed execution on any number of nodes, including multi-GPU architectures (see also [our benchmarks](https://veros.readthedocs.io/en/latest/more/benchmarks.html)).\n\nThe dynamical core of Veros is based on [pyOM2](https://wiki.cen.uni-hamburg.de/ifm/TO/pyOM2), an ocean model with a Fortran backend and Fortran and Python frontends.\n\nTo learn more about Veros, make sure to [visit our documentation](https://veros.readthedocs.io/en/latest/).\n\n#### How about a demonstration?\n\n

\n(0.25\xc3\x970.25\xc2\xb0 high-resolution model spin-up, click for better\nquality)\n
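\nTo give a first taste of what driving Veros looks like, here is a schematic skeleton of a *setup script* (the mechanism is explained under *Basic usage* below). The class name is invented and the hooks are left empty; the method names follow the pattern of the bundled ACC setup, so treat this as orientation rather than a working ocean, and run `veros copy-setup acc` for the real thing:\n\n```python\n# Schematic sketch of a Veros setup script -- not a working configuration.\n# The set_* hooks mirror the bundled ACC setup; each must be filled in.\nfrom veros import VerosSetup, veros_routine\n\n\nclass ToyOceanSetup(VerosSetup):\n    @veros_routine\n    def set_parameter(self, state):\n        settings = state.settings\n        settings.nx, settings.ny, settings.nz = 30, 42, 15\n        settings.dt_tracer = settings.dt_mom = 3600.0  # seconds\n        settings.runlen = 10 * 86400.0  # ten model days\n\n    @veros_routine\n    def set_grid(self, state):\n        pass  # define the grid spacing on state.variables\n\n    # ... likewise set_coriolis, set_topography, set_initial_conditions,\n    # set_forcing, set_diagnostics, and after_timestep ...\n```\n\nOnce every hook is implemented, calling the `setup` and `run` methods on an instance (which is exactly what `veros run` does for you) integrates the model.\n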

\n\n## Features\n\nVeros provides\n\n- a fully staggered **3-D grid geometry** (*C-grid*)\n- support for both **idealized and realistic configurations** in\n Cartesian or pseudo-spherical coordinates\n- several **friction and advection schemes**\n- isoneutral mixing, eddy-kinetic energy, turbulent kinetic energy,\n and internal wave energy **parameterizations**\n- several **pre-implemented diagnostics** such as energy fluxes,\n variable time averages, and a vertical overturning stream function\n (written to netCDF4 output)\n- **pre-configured idealized and realistic set-ups** that are ready to\n run and easy to adapt\n- **accessibility and extensibility** - thanks to the\n power of Python!\n\n## Veros for the impatient\n\nA minimal example to install and run Veros:\n\n```bash\n$ pip install veros\n$ veros copy-setup acc --to /tmp/acc\n$ veros run /tmp/acc/acc.py\n```\n\nFor more detailed installation instructions, have a look at [our\ndocumentation](https://veros.readthedocs.io).\n\n## Basic usage\n\nTo run Veros, you need to set up a model --- i.e., specify which settings\nand model domain you want to use. This is done by subclassing the\n`VerosSetup` base class in a *setup script* that is written in Python (a schematic skeleton is sketched above). You\nshould use the `veros copy-setup` command to copy one into your current\nfolder. A good place to start is the\n[ACC model](https://github.com/team-ocean/veros/blob/main/veros/setups/acc/acc.py):\n\n```bash\n$ veros copy-setup acc\n```\n\nAfter setting up your model, all you need to do is call the `setup` and\n`run` methods on your setup class. The pre-implemented setups can all be\nexecuted via `veros run`:\n\n```bash\n$ veros run acc.py\n```\n\nFor more information on using Veros, have a look at [our\ndocumentation](http://veros.readthedocs.io).\n\n## Contributing\n\nContributions to Veros are always welcome, no matter if you spotted an\ninaccuracy in [the documentation](https://veros.readthedocs.io), wrote a\nnew setup, fixed a bug, or even extended Veros\\\' core mechanics. There\nare two ways to contribute:\n\n1. If you want to report a bug or request a missing feature, please\n [open an issue](https://github.com/team-ocean/veros/issues). If you\n are reporting a bug, make sure to include all relevant information\n for reproducing it (ideally through a *minimal* code sample).\n2. If you want to fix the issue yourself, or wrote an extension for\n Veros - great! You are welcome to submit your code for review by\n committing it to a repository and opening a [pull\n request](https://github.com/team-ocean/veros/pulls). However,\n before you do so, please check [the contribution\n guide](http://veros.readthedocs.io/quickstart/get-started.html#enhancing-veros)\n for some tips on testing and benchmarking, and to make sure that\n your modifications adhere to our style policies.
Most importantly,\n please ensure that you follow the [PEP8\n guidelines](https://www.python.org/dev/peps/pep-0008/), use\n *meaningful* variable names, and document your code using\n [Google-style\n docstrings](http://sphinxcontrib-napoleon.readthedocs.io/en/latest/example_google.html).\n\n## How to cite\n\nIf you use Veros in scientific work, please consider citing [the following publication](https://gmd.copernicus.org/articles/11/3299/2018/):\n\n```bibtex\n@article{hafner_veros_2018,\n\ttitle = {Veros v0.1 \xe2\x80\x93 a fast and versatile ocean simulator in pure {Python}},\n\tvolume = {11},\n\tissn = {1991-959X},\n\turl = {https://gmd.copernicus.org/articles/11/3299/2018/},\n\tdoi = {10.5194/gmd-11-3299-2018},\n\tnumber = {8},\n\tjournal = {Geoscientific Model Development},\n\tauthor = {H\xc3\xa4fner, Dion and Jacobsen, Ren\xc3\xa9 L\xc3\xb8we and Eden, Carsten and Kristensen, Mads R. B. and Jochum, Markus and Nuterman, Roman and Vinter, Brian},\n\tmonth = aug,\n\tyear = {2018},\n\tpages = {3299--3312},\n}\n```\n\nOr have a look at [our documentation](https://veros.readthedocs.io/en/latest/more/publications.html)\nfor more publications involving Veros.\n'",",https://zenodo.org/badge/latestdoi/87419383","2017/04/06, 10:59:21",2393,MIT,216,1731,"2023/10/20, 07:40:49",22,477,523,140,5,2,0.0,0.3416058394160584,"2023/10/11, 08:43:27",v1.5.1,0,10,false,,false,false,"team-ocean/veris,cwight2021/jupyter-docker-stacks,team-ocean/veros-extra-setups,ucphhpc/nbi-jupyter-docker-stacks,team-ocean/veros-bgc",,https://github.com/team-ocean,https://www.nbi.ku.dk/english/research/pice/oceanography/,Copenhagen,,,https://avatars.githubusercontent.com/u/57774860?v=4,,, MITgcm,A flexible non-hydrostatic formulation that efficiently simulates fluid phenomena over a wide range of scales.,MITgcm,https://github.com/MITgcm/MITgcm.git,github,"ocean-modelling,mitgcm,gfd,automatic-differentiation,data-assimilation,exoplanets,climate-science",Ocean Circulation Models,"2023/10/25, 14:50:13",296,0,40,true,Fortran,MITgcm,MITgcm,"Fortran,C,MATLAB,Jupyter Notebook,Shell,Python,Sage,HTML,Makefile,Perl,Roff,TeX,M,Slice,Terra,Limbo,Mercury,Objective-C,ColdFusion,Awk,Dockerfile,sed,Verilog",http://mitgcm.org/,b'[![Build Status](https://github.com/MITgcm/MITgcm/workflows/build/badge.svg)](https://github.com/MITgcm/MITgcm/actions)\n[![Documentation Status](http://readthedocs.org/projects/mitgcm/badge/?version=latest)](http://mitgcm.readthedocs.io/en/latest/?badge=latest)\n[![DOI](https://zenodo.org/badge/DOI/10.5281/zenodo.1409237.svg)](https://doi.org/10.5281/zenodo.1409237)\n\n# MITgcm\n\nMIT General Circulation Model master code and documentation. 
The main MITgcm webpage can be found [here](http://mitgcm.org).\n\n## Documentation\n\nAccess the latest documentation [here](http://mitgcm.readthedocs.io/en/latest/)\n',",https://doi.org/10.5281/zenodo.1409237","2018/01/31, 20:38:06",2093,MIT,129,19805,"2023/10/25, 13:50:10",147,524,641,126,0,34,1.9,0.4951722682424946,"2023/08/02, 13:52:59",checkpoint68r,1,62,false,,false,false,,,https://github.com/MITgcm,,,,,https://avatars.githubusercontent.com/u/11047419?v=4,,, ccpp-physics,"The Common Community Physics Package is designed to facilitate the implementation of physics innovations in state-of-the-art atmospheric models, the use of various models to develop physics, and the acceleration of transition of physics innovations to operational NOAA models.",NCAR,https://github.com/NCAR/ccpp-physics.git,github,,Ocean Circulation Models,"2023/10/05, 14:23:20",48,0,8,true,Fortran,National Center for Atmospheric Research,NCAR,"Fortran,TeX,CSS,CMake,JavaScript,HTML,Python",,"b""# CCPP Physics\n\nThe Common Community Physics Package (CCPP) is designed to facilitate the implementation of physics innovations in state-of-the-art atmospheric models, the use of various models to develop physics, and the acceleration of transition of physics innovations to operational NOAA models.\n\nPlease see more information about the CCPP at the locations below.\n\n- [CCPP website hosted by the Developmental Testbed Center (DTC)](https://dtcenter.org/ccpp)\n- [CCPP public release information](https://dtcenter.org/community-code/common-community-physics-package-ccpp/download)\n- [CCPP Technical Documentation](https://ccpp-techdoc.readthedocs.io/en/v6.0.0/)\n- [CCPP Scientific Documentation](https://dtcenter.ucar.edu/GMTB/v6.0.0/sci_doc/index.html)\n- [CCPP Physics GitHub wiki](https://github.com/NCAR/ccpp-physics/wiki)\n- [CCPP Framework GitHub wiki](https://github.com/NCAR/ccpp-framework/wiki)\n\nFor the use of CCPP with its Single Column Model, see the [Single Column Model User's Guide](https://dtcenter.org/sites/default/files/paragraph/scm-ccpp-guide-v6-0-0.pdf).\n\nFor the use of CCPP with NOAA's Unified Forecast System (UFS), see the [UFS Medium-Range Application User's Guide](https://ufs-mrweather-app.readthedocs.io/en/latest), the [UFS Short-Range Application User's Guide](https://ufs-srweather-app.readthedocs.io/en/latest) and the [UFS Weather Model User's Guide](https://ufs-weather-model.readthedocs.io/en/latest).\n\nQuestions can be directed to the [CCPP User Support Forum](https://dtcenter.org/forum/ccpp-user-support) or posted in the [CCPP Physics GitHub discussions](https://github.com/NCAR/ccpp-physics/discussions) or [CCPP Framework GitHub discussions](https://github.com/NCAR/ccpp-framework/discussions). 
When using the CCPP with NOAA's UFS, questions can be posted in the [UFS Weather Model](https://forums.ufscommunity.org/forum/ufs-weather-model) section of the [UFS Forum](https://forums.ufscommunity.org).\n\n## Corresponding CCPP Standard Names dictionary\n\nThis revision of the CCPP physics library is compliant with [version 0.1.1 of the CCPP Standard Names dictionary](https://github.com/ESCOMP/CCPPStandardNames/releases/tag/v0.1.1).\n\n## Licensing\n\nThe Apache license will be in effect unless superseded by an existing license in specific files.\n""",,"2017/06/08, 17:03:06",2330,CUSTOM,507,5023,"2023/10/05, 14:23:21",44,765,933,58,20,3,2.3,0.7146720757268424,"2022/08/01, 20:03:07",v6.0.0,0,66,false,,false,false,,,https://github.com/NCAR,http://ncar.ucar.edu,"Boulder, CO",,,https://avatars.githubusercontent.com/u/2007542?v=4,,, MOHID-Lagrangian,"Mainly developed for oceanographic and fluvial modeling, application to atmospheric and other planetary settings should be trivial.",Mohid-Water-Modelling-System,https://github.com/Mohid-Water-Modelling-System/MOHID-Lagrangian.git,github,"fortran,lagrangian-ocean-modelling,oop,mohid,lagrangian,particle,faecal,tracer",Ocean Circulation Models,"2023/09/29, 17:37:55",15,0,1,true,C,MOHIDWater Modelling System,Mohid-Water-Modelling-System,"C,Fortran,Makefile,Shell,HTML,C++,M4,CMake,Roff,PLSQL,TeX,Pascal,LiveScript,Assembly,Python,Yacc,Ada,Perl,C#,Lex,Java,DIGITAL Command Language,Batchfile,CSS,Pawn,Tcl,SAS,Module Management System,Scilab",http://www.mohid.com,"b'## MOHID Lagrangian - v20.10\n\nMOHID Lagrangian is a comprehensive high-performance Lagrangian tracer model, with sources, sinks, particle types and several options for forcing and I/O.\nAlthough mainly developed for oceanographic and fluvial contexts, application to atmospheric and other planetary settings should be trivial. \n\nAvailable functionalities are\n\n- Robust pre-processing, modelling and post-processing tools\n- Support for netcdf-cf files with currents, winds and wave fields, as well as water quality (salinity, temperature)\n- Ability to model passive, buoyant and degrading tracers\n- Stokes drift, windage, beaching, resuspension and turbulent diffusion models and options\n- Ability to model millions of tracers on a modest laptop machine\n- Simple and fully documented simulation set-up files, ready to be abstracted by a UI\n- Raw VTK time-encoded output, directly compatible with ParaView and other standard post-processors and renderers\n- Flexible Python post-processor, using cross-simulation reusable post-processing recipes, ready to be automated\n- Computation of volumetric averages and cumulative integrations, exporting the results to standard netCDF files, so you can explore the results using GIS software or publish to a THREDDS server\n- Production of high-quality mapped plots and shapefiles using matplotlib and pandas, allowing for arbitrary calendar, integration types, subdomains including polygons and plot type combinations\n- Documentation on installation, code structure, case preparation, post-processing and general usage.
Fully self-contained examples to get you started\n- Pre-built Windows executable\n- Cross-platform compliant, tested and deployed\n- CMake-based project, easy to set up for local compilation if required\n \nOutput examples\n \n![Vigo3D](https://github.com/mohid-water-modelling-system/MOHID-Lagrangian/blob/dev/docs/Vigo3DnoDiffusion.gif)\n\n*3D passive tracers on a [MOHID](http://www.mohid.com) operational currents solution in Vigo region, Galiza, Spain.*\n\n![Atlantic1](https://github.com/mohid-water-modelling-system/MOHID-Lagrangian/blob/dev/docs/Atlantic_2016_2017_density.gif)\n\n*Floating passive tracers on a [CMEMS](http://marine.copernicus.eu/) Atlantic currents solution.*\n\n![Arousa](https://github.com/Mohid-Water-Modelling-System/MOHID-Lagrangian/blob/master/docs/diff-mean-n_counts_PolygonTest.png)\n\n*Hourly mean tracer concentration on the Arousa intertidal test case.*\n\n![PCOMS](https://github.com/Mohid-Water-Modelling-System/MOHID-Lagrangian/blob/master/docs/mean-concentration_area_Box1.png)\n\n*Mean tracer concentration on the PCOMS test case.*\n\n![PCOMS2](https://github.com/Mohid-Water-Modelling-System/MOHID-Lagrangian/blob/master/docs/mean-concentration_area_n_counts_global.png)\n\n*Mean tracer concentration on the PCOMS test case using the EU Marine Directives polygons.*\n\nCheck out our [code documentation page](https://mohid-water-modelling-system.github.io/MOHID-Lagrangian/)!\n\n## Help, Bugs, Feedback\nIf you need help with MOHID Lagrangian or MOHID, want to keep up with progress, chat with developers or ask any other questions about MOHID, you can hang out by mail: or consult our [MOHID wiki](http://wiki.mohid.com). You can also subscribe to our [MOHID forum](http://forum.mohid.com). To report bugs, please create a GitHub issue or contact any developers. For more information, consult \n\n## License\nGNU General Public License. See the [GNU General Public License](http://www.gnu.org/copyleft/gpl.html) web page for more information.\n\n\n\n[![License](https://img.shields.io/badge/license-GNU%20GeneraL%20Public%20License%20v3,%20GPLv3-blue.svg)]()\n'",,"2018/02/23, 14:31:13",2070,GPL-3.0,24,1600,"2023/01/26, 10:58:49",3,97,98,2,272,0,0.0,0.23146747352496222,"2022/10/13, 15:23:38",v22.10,0,4,false,,false,false,,,https://github.com/Mohid-Water-Modelling-System,www.mohid.com,"Lisbon, Portugal",,,https://avatars.githubusercontent.com/u/15635544?v=4,,, Mohid,"A modular finite volumes water-modeling system written in ANSI-Fortran95 using an Object-oriented programming philosophy, integrating diverse mathematical models and supporting graphical user interfaces that manage all the pre- and post-processing.",Mohid-Water-Modelling-System,https://github.com/Mohid-Water-Modelling-System/Mohid.git,github,"catchment,circulation,estuary,mohid,ocean,watershed,fortran,biogeochemical-model,hydrodynamic-modeling,lagrangian-ocean-modelling,oil-spills,open-source,watershed-modeling",Ocean Circulation Models,"2023/10/25, 13:28:59",65,0,7,true,Fortran,MOHIDWater Modelling System,Mohid-Water-Modelling-System,"Fortran,C,Roff,C++,C#,Shell,Pascal,Makefile,Cuda,Rich Text Format,CMake,Batchfile",http://www.mohid.com,"b'# MOHID - Water Modelling System \n\nMOHID is short for Modelo Hidrodin\xc3\xa2mico, which is Portuguese for hydrodynamic model.
MOHID is a three-dimensional water modelling system, developed by MARETEC (Marine and Environmental Technology Research Center) at Instituto Superior T\xc3\xa9cnico (IST), which belongs to Lisbon University.\n\n## What is this repository?\nThis is the OFFICIAL MOHID Water Modelling System repository.\n\n## Overview\nMOHID is a modular finite volumes water-modelling system written in ANSI-Fortran95 using an Object-oriented programming philosophy, integrating diverse mathematical models and supporting graphical user interfaces that manage all the pre- and post-processing. \nMOHID allows the adoption of an integrated modelling philosophy, not only of processes (physical and biogeochemical), but also of different scales (allowing the use of nested models) and systems (estuaries and watersheds), due to the adoption of an object oriented programming philosophy.\nThe development of MOHID started back in 1985. Since then, a continuous effort to develop new features has been maintained. Model updates and improvements have been made available on a regular basis and have been used in the framework of many research and engineering projects.\nAll programs included in the MOHID Water Modelling System are built on top of one or more base libraries, and the two core executable files sit at the top of the pyramid:\n* MOHID Water \xe2\x80\x93 Three-dimensional mathematical model to simulate surface water bodies.\n* MOHID Land \xe2\x80\x93 Watershed mathematical model, or hydrological transport model, designed to simulate drainage basins and aquifers.\n\nSmaller utility programs are easily built on top of the libraries; these are usually designed for pre- or post-processing of model results. These support tools are normally managed by graphical user interfaces, which allow management of input data, control of program execution, and analysis of output results, along with other pre- and post-processing operations.\nThe integration of MOHID\xe2\x80\x99s different tools can be easily achieved since these tools are based on the same framework. This coupling can thus be used to study the water cycle and its associated processes in an integrated approach.\n\n## Help, Bugs, Feedback\nIf you need help with MOHID, want to keep up with progress, chat with developers or ask any other questions about MOHID, you can hang out by mail: or consult our [MOHID wiki](http://wiki.mohid.com). You can also subscribe to our [MOHID forum](http://forum.mohid.com). To report bugs, please create a GitHub issue or contact any developers. For more information, consult \n\n## License\nGNU General Public License. See the [GNU General Public License](http://www.gnu.org/copyleft/gpl.html) web page for more information.\n'",,"2015/11/05, 16:52:33",2911,GPL-3.0,114,3113,"2023/09/20, 17:48:45",11,57,63,9,35,0,0.0,0.5814049586776859,"2022/10/13, 15:18:43",v22.10,0,25,false,,false,false,,,https://github.com/Mohid-Water-Modelling-System,www.mohid.com,"Lisbon, Portugal",,,https://avatars.githubusercontent.com/u/15635544?v=4,,, CDFTOOLS,A Fortran package for analysis and diagnostics on NEMO ocean model output.,meom-group,https://github.com/meom-group/CDFTOOLS.git,github,,Ocean Circulation Models,"2023/08/02, 08:28:26",30,0,4,true,Fortran,MEOM Research Group,meom-group,"Fortran,HTML,TeX,Makefile,Smarty,Shell,Pug,Ada,CSS",,"b'# CDFTOOLS\n CDFTOOLS is a diagnostic package written in Fortran 90 for the analysis of NEMO model output, initiated in the framework of the DRAKKAR project ().
It is now available on GitHub under the CeCILL license ().\n\n CDFTOOLS is an open source package and contributions from other groups are welcome. The Git workflow policy is still to be defined.\n\n The current version of CDFTOOLS is version 4. (See changes in paragraph *New features in version 4*, below.)\n\n## Using CDFTOOLS\n\n#### Cloning the git repository\nTo retrieve a copy of the CDFTOOLS source code and create a working directory, run the following on the command line: \n\n```> git clone https://github.com/meom-group/CDFTOOLS ```\n\n#### Compiling CDFTOOLS\nAll the Fortran sources are in the src/ subdirectory. In src/ there is a Makefile for compiling the sources. Compiler- and machine-related definitions are supposed to be collected in a `make.macro` file. Some examples of `make.macro` are given in the Macrolib directory and can be used as templates for a new compiler or new machine. Good practice is then to make a link: \n\n```> cd src/```\n\n```> ln -sf ../Macrolib/macro.MACHINE make.macro```\n\nIn the `make.macro` file, the path to the netCDF library is specified, as well as the compiler name and the flags used. In order to activate netcdf4/HDF5 chunking and deflation (available in some cdftools), you need to set: \n\n```NC4=-Dkey_netcdf4 ```\n\nin the make.macro file; otherwise set\n\n```NC4= ```\n\nIn order to activate the CMIP6 variable naming convention (for input files), you need to set:\n\n```CMIP6=-Dkey_CMIP6 ```\n\nThen using `make` (or even `make -j n` if you can compile on n cores), you will have the cdftools executables available in the bin/ subdirectory. The executable files are ignored by git.\n\n\n#### Running CDFTOOLS\nCDFTOOLS is a collection of programs. Every program performs one or more computations using a set of input files, outputs the results as a netCDF file, and may also print some results to standard output. \n\nCDFTOOLS coding rules imply that a `usage message` is displayed when just running the tool without any arguments (or with -h). At the moment it is the only up-to-date documentation. \n\nAs CDFTOOLS is a collection of programs, a full diagnostic of model output can be performed by writing a script using a sequence of tools. This is done for example in the Drakkar Monitoring Tools (DMONTOOLS, soon available on GitHub!).\n\n## Coding CDFTOOLS\n#### Coding rules\n##### Syntax\nThe coding rules are the NEMO coding rules, strictly followed. The idea is that people familiar with NEMO are familiar with CDFTOOLS. In DEV_TOOLS/, Fortran templates are available for program, module, routine, and function headers, as well as a template for the `usage message`.\n##### Run time behaviour\nAny `cdftool`, run without arguments or with option -h, should display a short documentation (`usage message`), similar to a Unix man page, describing the purpose of the tool, the syntax (arguments, options, etc.) and giving details on the output files. For some tools, mesh and/or mask files are required to be present in the working directory, with the respective names `mesh_hgr.nc`, `mesh_zgr.nc` or `mask.nc` (links are OK). The usage message should indicate the required files.\n\nExample:\n\n\n```> cdfcurl```\n\n usage : cdfcurl -u U-file U-var -v V-file V-var -l levlist [-T] [-8]...\n ... [-surf] [-overf] [-nc4] [-o OUT-file ]\n \n PURPOSE :\n Compute the curl of a vector field, at a specified level.\n If level is specified as 0, assume that the input files are\n forcing files, presumably on A-grid.
In this latter case, the\n vector field is interpolated on the C-grid. In any case, the\n curl is computed on the F-point (unless -T option is used).\n \n ARGUMENTS :\n -u U-file U-var : file and variable name for zonal component\n -v V-file V-var : file and variable name for meridional component\n -l levlist : levels to be processed. If set to 0, assume forcing file\n in input. Example of recognized syntax :\n -l ""1,10,30"" or -l ""1-20"" or even -l ""1-3,10-20,30-""\n -l 1 . Note that -l ""3-"" set a levlist from 3 to the bottom\n \n OPTIONS :\n -T : compute curl at T point instead of default F-point\n -8 : save in double precision instead of standard simple precision.\n -surf : work with single level C-grid (not forcing)\n -overf : store the ratio curl/f where f is the coriolis parameter\n -nc4 : use netcdf4 output with chunking and deflation 1\n -o OUT-file : specify output file name instead of curl.nc\n \n REQUIRED FILES :\n mesh_hgr.nc\n \n OUTPUT : \n netcdf file : curl.nc\n variables : socurl or socurlt (if -T option), units : s^-1\n or socurloverf, no units (if -overf option)\n\n##### Improving/modifying existing tools\n It is possible to improve (of course!) or modify any tool, but one important rule is that the modified tool must still accept the previous syntax, to avoid breaking existing scripts that use CDFTOOLS. If for some reason this is not possible, a discussion should be held to reach a common decision. If necessary, some old options may be documented as obsolete in the usage message, which means that they may be removed in a future release. \n\n## New features in version 4\n#### Modified user interface\n * All arguments are passed with a `-key` switch. No more `free` arguments. Example: `cdfmoy -l fich1.nc fich2.nc`\n * Added `-o` and `-nc4` options in all tools (when relevant). With `-o` the default output name can be changed, allowing easier parallelisation. With `-nc4` the output file will use the netCDF4/HDF5 format with chunking and deflation level 1.\n * Use of environment variables CDFT_xxx for overriding the default names of auxiliary files such as mesh_hgr.nc, mask.nc, etc. So far there is support for \n\n CDFT_MESH_HGR\n CDFT_MESH_ZGR\n CDFT_MASK\n CDFT_BASINS\n CDFT_COORD\n\n#### Support for vvl simulations\n * When relevant, the switch `-vvl` indicates that the vertical metrics are time-varying. In that case, CDFTOOLS assumes that the vertical metrics are saved in the same file as the data.\n\n#### Support for CMIP6 naming convention\n * When the code is compiled with the CPP key key_CMIP6 set, the default variable names are taken from modcdfnames_CMIP6.h90 instead of the standard DRAKKAR names.\n\n#### Simplification\n * The codes have been cleaned of obsolescent constructs. Coding rules were reinforced.\n * Obsolete tools were removed or merged as options in more generic tools. \n\n#### Improved documentation\n * Gathering the `usage` messages into man pages still works (`make man`). Readability of the man pages is improved by grouping the tools by category. The `usage` messages have been reviewed in order to give better information.\n * The man pages are automatically translated to an HTML document that can be visualized from any browser.\n\n#### Back to release 3:\n * The last release of version 3 has been tagged as v3.0.2.
Use this tag if you want to stay at version 3.\n\n#### Introducing TEOS10 in the CDFTOOLS\n * This introduction is made following the NEMO coding of eosbn2, using a polynomial form (Roquet et al., Ocean Modelling, 2015) for both the EOS80 and TEOS10 equations of state, with a side effect of slightly changing the results (even when using EOS80). \nThe last commit before the introduction of this change corresponds to tag v4.0.0. \nAlso note that if you decide to use TEOS10, all relevant CDFTOOLS now have an option (-teos10) that forces the EOS used to be TEOS10. Without this option, EOS80 (polynomial form) is used. \nLast but **important**: \nWhen using TEOS10, temperatures should be Conservative Temperature (CT, DegC) and salinity should be Absolute Salinity (SA, g/kg). \nWhen using EOS80, temperatures should be Potential Temperature (PT, DegC) and salinity should be Practical Salinity (SP, PSU). \nAs of Oct. 2021, no sanity check is performed for controlling this important point.\n\n#### Interface with the GSW library\n * The [GSW library](http://www.teos-10.org/pubs/gsw/html/gsw_contents.html#1) provides a collection of functions and routines linked\n with the TEOS-10 equation of state for seawater, using Conservative Temperature (CT) and Absolute Salinity (SA). \n In CDFTOOLS, `cdf_gsw` is an interface to the GSW toolbox. To use it, key_GSW must be defined in make.macro and `libgsw.a` must\n be precompiled on your system. Up to now, only a subset of the GSW functions is interfaced, but cdf_gsw provides a useful framework\n for interfacing other functions.\n\n\n\n'",,"2016/09/15, 12:05:01",2596,GPL-3.0,17,1382,"2022/10/24, 13:07:52",24,27,32,0,366,3,0.0,0.04709576138147564,"2017/07/21, 13:07:56",v3.0.2,0,10,false,,false,false,,,https://github.com/meom-group,http://meom-group.github.io/,"Grenoble, France",,,https://avatars.githubusercontent.com/u/18685991?v=4,,, GOTM,The General Ocean Turbulence Model is an ambitious name for a one-dimensional water column model for marine and limnological applications.,gotm-model,https://github.com/gotm-model/code.git,github,,Ocean Circulation Models,"2023/10/17, 15:22:34",45,0,10,true,Fortran,GOTM,gotm-model,"Fortran,Pascal,C,CMake,Python,Shell",https://gotm.net,"b'[![Build Status](https://travis-ci.org/gotm-model/code.svg?branch=master)](https://travis-ci.org/gotm-model/code)\n\n## What is GOTM?\n\nGOTM - the **G**eneral **O**cean **T**urbulence **M**odel - is an ambitious name for a one-dimensional water column model for marine and limnological applications. It is coupled to a choice of traditional as well as state-of-the-art parameterisations for vertical turbulent mixing.
The package consists of the FORTRAN source code, a number of idealised and realistic test cases, and scientific documentation, all published under the GNU public license.\n\nA comprehensive description, including compilation instructions, is given at the official [GOTM homepage](http://www.gotm.net/portfolio/software).\n\n'",,"2015/12/11, 10:46:27",2875,GPL-2.0,27,2233,"2023/09/21, 09:29:19",22,12,20,3,34,7,0.2,0.4834254143646409,"2023/10/17, 15:14:50",v6.0.6,0,8,false,,false,false,,,https://github.com/gotm-model,https://gotm.net,,,,https://avatars.githubusercontent.com/u/16253606?v=4,,, ROMS,"A free-surface, terrain-following, primitive equations ocean model widely used by the scientific community for a diverse range of applications.",myroms,https://github.com/myroms/roms.git,github,,Ocean Circulation Models,"2023/09/25, 19:39:05",19,0,19,true,Fortran,Regional Ocean Modeling System (ROMS),myroms,"Fortran,C,Shell,Makefile,CMake,Perl",https://github.com/myroms/roms/wiki,"b""# Regional Ocean Modeling System (ROMS)\n\n![ROMS_Picture](https://github.com/myroms/roms/assets/23062912/d72765ed-9d55-4109-84fc-c51b05832adb)\n\n# License\n\n**Copyright (c) 2002-2023 The ROMS/TOMS Group**\n\n[![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT)\n\n# Overview\n\n**ROMS** solves the free-surface, hydrostatic, flux form of the primitive\nequations over variable bathymetry using stretched terrain-following coordinates in the\nvertical and orthogonal curvilinear coordinates in the horizontal. The finite\nvolume grid is discretized on a staggered Arakawa C-grid. Detailed information\nabout its governing equations, numerical discretization, algorithms, usage, and\ntutorials is available in the **WikiROMS** documentation portal at\n**`www.myroms.org/wiki`**.\n\nThe dynamical kernel of **ROMS** comprises four separate models, including\nthe nonlinear (**NLM**), perturbation tangent linear (**TLM**), finite amplitude\ntangent linear (**RPM**), and adjoint (**ADM**). They are located in the\n**Nonlinear**, **Tangent**, **Representer**, and **Adjoint** sub-directories,\nrespectively. The **TLM** and **ADM** were hand-coded from the discrete **NLM**\ncode using the recipes of Giering and Kaminski (1998). Therefore, any change to\nits dynamical and numerical kernels will affect the symmetry of the **TLM** and\n**ADM** operators. The discrete adjoint is exact and is defined relative to\nthe inner product that prescribes the L2-norm.\n\nThis official community version of **ROMS** is developed and maintained at Rutgers,\nThe State University of New Jersey, New Brunswick, New Jersey, USA. Currently, this\n**git** repository contains the following branches:\n\n- **main**: Tagged versions and the latest stable release version of **ROMS**\n- **develop**: Main development branch of **ROMS**. It is not recommended for public\n consumption but passes the internal tests. It is intended for **ROMS** superusers\n and beta testers.\n- **feature branches**: Research and new development branches recommended to\nsuperusers and beta testers.\n\nCheck the **wiki** for more information:\n\n```\nhttps://github.com/myroms/roms/wiki\nhttps://github.com/myroms/roms/wiki/ROMS-Branches\n```\n\n# Instructions\n\nThe **ROMS** framework is intended for users interested in ocean modeling. It\nrequires an extensive background in ocean dynamics, numerical modeling, and\ncomputers to configure, run, and analyze the results to ensure you get the\ncorrect solution for your application.
Therefore, **we highly recommend** users\nregister at https://www.myroms.org and set up a **username** and **password** to\naccess the **ROMS** forum, email notifications for bugs/updates, technical\nsupport from the community, **trac** code maintenance history, tutorials, workshops,\nand publications. The **ROMS** user forum has over 24,000 posts with helpful\ninformation. Technical support is limited to registered users. We **do not**\nprovide technical support or answer usage questions on **GitHub**. \n\nThis **GitHub** version is now the official **git** repository for downloading,\nupdating, improving, and correcting defects/bugs in the **ROMS** source code.\nAlso, it is the version used in the **ROMS-JEDI** interface hosted at\nhttps://github.com/JCSDA-internal, which is currently private. Use the following\ncommand to download the **ROMS** source code:\n```\ngit clone https://github.com/myroms/roms.git (default)\ngit clone https://github.com/myroms/roms.git \n```\nThe idealized and realistic **ROMS** Test Cases and the Matlab processing\nsoftware can be downloaded from:\n```\ngit clone https://github.com/myroms/roms_test.git\ngit clone https://github.com/myroms/roms_matlab.git\n```\nWe highly recommend that users define the **`ROMS_ROOT_DIR`** variable in their\nshell login environment, specifying where they cloned/downloaded\nthe **ROMS** source code, Test Cases, and Matlab processing software:\n```\nsetenv ROMS_ROOT_DIR MyDownloadLocationDirectory\n```\nThe **build** scripts will use this environment variable when compiling any of\nthe **ROMS Test Cases** without the need to customize the location of the\n**ROMS** source code. Also, it is used for loading the path of Matlab scripts in\nthe **startup.m** configuration file.\n\nThe **doxygen** version of **ROMS** is available at:\n```\nhttps://www.myroms.org/doxygen\n```\nRegistered users of **ROMS** have access to:\n\n- **ROMS** User's Forum for technical support from the community:\n ```\n https://www.myroms.org/forum\n ```\n- **ROMS Trac** source code maintenance and evolution:\n ```\n https://www.myroms.org/projects/src\n ```\n- **WikiROMS** documentation and tutorials plus editing:\n ```\n https://www.myroms.org/wiki\n ```\n""",,"2023/06/30, 19:28:22",117,CUSTOM,202,944,"2023/09/25, 19:39:07",4,16,16,16,30,4,0.0,0.0,,,0,1,false,,false,false,,,https://github.com/myroms,https://www.myroms.org/wiki,United States of America,,,https://avatars.githubusercontent.com/u/138157068?v=4,,, pyroms,A collection of tools to process input and output files from the Regional Ocean Modeling System.,ESMG,https://github.com/ESMG/pyroms.git,github,,Ocean Circulation Models,"2023/03/10, 00:22:13",119,0,18,true,Python,,ESMG,"Python,Fortran,Makefile",,"b'# Pyroms\n\nWelcome to Pyroms!\n\nPyroms is a collection of tools to process input and output files\nfrom the Regional Ocean Modeling System, [ROMS](https://www.myroms.org/). It was originally\nstarted by Rob Hetland as a googlecode project, then he morphed it\ninto octant, also at googlecode. Frederic Castruccio then created a\nfork and renamed it back to pyroms.\n\nPyroms is now hosted on GitHub.com in the [ESMG/pyroms](https://github.com/ESMG/pyroms) repository. This version is on the [python3](https://github.com/ESMG/pyroms/tree/python3) branch. It requires Python 3.4 or later.\n\n## Installation\n\nPyroms is still a bit rough around the edges, particularly with regard to installation.
Recent development has been done in Python environments managed by [Conda](https://docs.conda.io/en/latest/). However, Pyroms itself cannot yet be installed with Conda.\n\nIf you are starting from scratch, we recommend that you install\n[Anaconda](https://www.anaconda.com/) or\n[Miniconda](https://docs.conda.io/en/latest/miniconda.html) and create a Python 3 environment (as of December 2020, version 3.8 is your best bet) for Pyroms and your other scientific software. You should also consider making conda-forge your default channel. See the [conda-forge tips and tricks page](https://conda-forge.org/docs/user/tipsandtricks.html).\n\nIf you don\'t want to use Conda, that\'s fine, but you will have to do more of the work yourself.\n\n## Prerequisites\n\nThe following are required and are all available from [Conda-Forge](https://conda-forge.org/).\n\n * Python >= 3.4 (Python 3.8 currently recommended for new environments)\n * [numpy](https://numpy.org/)\n * [scipy](https://www.scipy.org/)\n * [matplotlib](https://matplotlib.org/)\n * [basemap](https://matplotlib.org/basemap/)\n * [netcdf4](https://unidata.github.io/netcdf4-python/netCDF4/index.html)\n * [cftime](https://unidata.github.io/cftime/)\n * [lpsolve55](https://github.com/chandu-atina/lp_solve_python_3x)\n * [pip](https://pypi.org/project/pip/)\n\nThe following is optional: Pyroms can be built and run without it, but some of the functionality will be missing.\n\n * scrip, a Python implementation of [SCRIP](https://github.com/SCRIP-Project/SCRIP),\n the Spherical Coordinate Remapping and Interpolation Package. This is used by the pyroms\n module. The Python scrip code (a rather old version) is\n [bundled in pyroms](https://github.com/ESMG/pyroms/tree/python3/pyroms/external/scrip)\n and can be built and installed separately as described below. In the future we plan to\n move from the bundled scrip code to a stand-alone package like\n [ESMF/ESMPy](https://www.earthsystemcog.org/projects/esmpy/) or\n [PySCRIP](https://github.com/dchandan/PySCRIP).\n\nThe following is optional and provides high-resolution coastlines for basemap:\n\n * [basemap-data-hires](https://anaconda.org/conda-forge/basemap-data-hires/)\n\n## Install from source\n\nTo clone a copy of the source and install the pyroms packages, you can use the following commands:\n```\n# Cd to a convenient directory\n$ git clone https://github.com/ESMG/pyroms.git\n$ pip install -e pyroms/pyroms\n$ pip install -e pyroms/pyroms_toolbox\n$ pip install -e pyroms/bathy_smoother\n```\n\nThis installs three PIP packages with the names pyroms, pyroms\\_toolbox and bathy\\_smoother,\neach with an [eponymous](https://en.wiktionary.org/wiki/eponymous) module.\n\nAn [editable-mode](https://pip.pypa.io/en/stable/reference/pip_install/#editable-installs) installation is recommended because it means changes you make to your copy of the source code will take effect when you import the modules. If you don\'t want this, you can omit the ""-e"" option.\n\nThe ""pip install"" command runs ""python setup.py install"" (or ""python setup.py develop"" with the ""-e"" switch) in each of the subdirectories listed.
The ""pip install"" form is recommended because it allow easy removal (see below)\n\nThe above should work on most Linuces and on OSX with the system gcc and gfortran compilers.\nThey have also been verified to work in a Conda environment on Windows,\nprovided you install the\n[m2w64-gcc](https://anaconda.org/msys2/m2w64-gcc) and [m2w64-gfortran](https://anaconda.org/msys2/m2w64-gcc-fortran) compilers.\n\n## Install scrip\n\nIf you install as above and try to import the three Pyroms modules without having installed\nscrip you will get a warning like this:\n\n```\n$ python\nPython 3.8.5 | packaged by conda-forge | (default, Aug 29 2020, 01:22:49)\n[GCC 7.5.0] on linux\nType ""help"", ""copyright"", ""credits"" or ""license"" for more information.\n>>> import pyroms\nWARNING:root: scrip could not be imported. Remapping functions will not be available\n>>> import pyroms_toolbox\n>>> import bathy_smoother\n```\n\nThe scrip module is not available via Conda or any other package repository and we are looking at alternatives. In the meantime, scrip can be built and installed from source as follows\n\n```\n# Start in the directory into which you cloned pyroms and cd to the SCRIP\n# source directory\n$ cd pyroms/pyroms/external/scrip/source/\n\n# Print the location of the active Conda environment (which is called ""python38""\n# in this case). The active environment location is used to find the netCDF and\n# other libraries.\n$ conda info | grep ""active env location""\n active env location : /home/hadfield/miniconda3/envs/python38\n\n# Run make to build the scrip Python extension and install it into the Conda\n# environment. The makefile calculates a variable called SCRIP_EXT_DIR, into\n# which it installs the scrip Python extension. If pyroms has been installed\n# in editable (development) mode, set the DEVELOP variable to a non-empty value.\n$ export PREFIX=/home/hadfield/miniconda3/envs/python38\n$ make DEVELOP=1 PREFIX=$PREFIX install\n$ mv -vf scrip*.so ../../../pyroms\n\xe2\x80\x98scrip.cpython-38-x86_64-linux-gnu.so\xe2\x80\x99 -> \xe2\x80\x98../../../pyroms/scrip.cpython-38-x86_64-linux-gnu.so\xe2\x80\x99\n```\n\n## Removal\n\nTo remove the three Pyroms packages you can use the ""pip uninstall"" command, referring to the packages by their package names\n\n```\n# Run from any directory in the same environment as you installed\n# and use the package name\n$ pip uninstall pyroms\n$ pip uninstall pyroms_toolbox\n$ pip uninstall bathy_smoother\n```\n\nIf you have built and installed the scrip extension from the makefile as above, you can also uninstall it with the makefile. The PREFIX does not need to be set in this case.\n\n```\n# Start in the directory into which you cloned pyroms and cd to the SCRIP\n# source directory\n$ cd pyroms/pyroms/external/scrip/source/\n\n# Remove with make.\n$ make DEVELOP=1 uninstall\n```\n\n## Running Pyroms\n\nWe have a gridid.txt file that\'s pointed to by the PYROMS\\_GRIDID\\_FILE\nenvironment variable. If you are operating on files containing\nsufficient grid information already, you won\'t need to use this.\nAn example is provided in the examples directory.\n\n\n## Doxygen\n\nRunning ""doxygen .doxygen"" in any of pyroms, pyroms\\_toolbox or\nbathy\\_smoother will generate doxygen files. Edit the .doxygen files to\nspecify html vs. 
some other output format.\n'",,"2010/10/21, 17:22:48",4752,CUSTOM,10,168,"2023/03/10, 00:23:19",22,15,27,2,230,1,0.1,0.20155038759689925,"2020/03/25, 21:48:47",v1.0.0,0,8,false,,false,false,,,https://github.com/ESMG,,,,,https://avatars.githubusercontent.com/u/18122841?v=4,,, wrfhydropy,Provides an end-to-end python interface to support reproducible research and construction of workflows involving the WRF-Hydro model.,NCAR,https://github.com/NCAR/wrf_hydro_py.git,github,,Ocean Circulation Models,"2023/07/28, 21:27:44",54,15,10,true,Python,National Center for Atmospheric Research,NCAR,Python,,"b""![](https://ral.ucar.edu/sites/default/files/public/wrf_hydro_symbol_logo_2017_09_150pxby63px.png) WRF-HYDRO-PY ![](https://www.python.org/static/community_logos/python-powered-h-140x182.png)\n\n[![Build Status](https://travis-ci.org/NCAR/wrf_hydro_py.svg?branch=master)](https://travis-ci.org/NCAR/wrf_hydro_py)\n[![Coverage Status](https://coveralls.io/repos/github/NCAR/wrf_hydro_py/badge.svg?branch=master&service=github)](https://coveralls.io/github/NCAR/wrf_hydro_py?branch=master)\n[![PyPI](https://img.shields.io/pypi/v/wrfhydropy.svg)](https://pypi.python.org/pypi/wrfhydropy)\n[![GitHub release](https://img.shields.io/github/release/NCAR/wrf_hydro_py.svg)](https://github.com/NCAR/wrf_hydro_py/releases/latest)\n[![Documentation Status](https://readthedocs.org/projects/wrfhydropy/badge/?version=latest)](https://wrfhydropy.readthedocs.io/en/latest/?badge=latest)\n\n**IMPORTANT:** This package is in the very early stages of development and the package API may change at any time. It is not recommended that this package be used for significant work until version 0.1.\n\n## Description\n*wrfhydropy* provides an end-to-end Python interface to support reproducible research and construction of workflows involving the \nWRF-Hydro model. See the docs for an extended description of [what-and-why wrfhydropy](https://wrfhydropy.readthedocs.io/en/latest/what-and-why.html).\n\n## Documentation\nDocumentation is available on-line through `help()` and via [readthedocs](https://wrfhydropy.readthedocs.io/en/latest/index.html). Documentation is a work in progress; please feel free to help improve the documentation or to make an issue when the docs are inaccurate!\n\n## Contributing standards\nFailure to adhere to contributing standards may result in your Pull Request being rejected.\n\n### pep8speaks\nAll pull requests will be linted automatically by pep8speaks and reported as a comment on the pull request. The pep8speaks configuration is specified in .pep8speaks.yml. All pull requests must satisfy pep8speaks. \nLocal linting can be performed after a `pip install` of [pycodestyle](https://github.com/PyCQA/pycodestyle). Pep8speaks linting reports also update with updated pull requests.\n\n### Additional Style Guidelines\n* Max line length: 100 chars.\n* docstrings: [Google style](http://sphinxcontrib-napoleon.readthedocs.io/en/latest/example_google.html)\n* All other guidance follows [Google style guide](https://google.github.io/styleguide/pyguide.html)\n* General advice: [Hitchhiker's guide to code style](https://goo.gl/hqbW4r)\n\n### Testing\nAll pull requests must pass automated testing (via TravisCI). Testing can be performed locally by running `pytest` in the `wrfhydropy/tests` directory. Currently, this testing relies on the [`nccmp`](https://gitlab.com/remikz/nccmp) binary for comparing netcdf files.
A Docker container can be supplied for testing on request (and documentation will subsequently be placed here).\n\n### Coverage\nTesting concludes by submitting a request to [coveralls](https://coveralls.io/). This automatically reports changes in code coverage caused by the testing. Coverage should be maximized with every pull request. That is, all new functions or classes must be accompanied by comprehensive additional unit/integration tests in the `wrf_hydro_py/wrfhydropy/tests` directory. Running coverage locally can be achieved by `pip` installing [`coverage`](https://pypi.org/project/coverage/) and [`pytest-cov`](https://pypi.org/project/pytest-cov/) following a process similar to the following: \n```\ncd wrfhydropy/tests/\npytest --cov=wrfhydropy \ncoverage html -d coverage_html\nchrome coverage_html/index.html # or your browser of choice\n```\n""",,"2018/03/12, 14:41:50",2053,CUSTOM,20,985,"2023/10/18, 02:22:01",34,193,210,8,8,6,0.2,0.10270270270270265,"2018/08/22, 22:19:05",0.0.12,0,6,false,,false,false,"NCAR/github-actions-test,criedel40/ICEPACK-DART,johnsonbk/DART,hkershaw-brown/DART-hackathon21,NCAR/DART,NCAR/DART-lorenz96-lag,pvelissariou1/ADC-WW3-NWM-SCHISM-NEMS,hkershaw-brown/dartthedocs,hkershaw-brown/DART-autocompile,hkershaw-brown/dart-jekyll-test,hkershaw-brown/feature-preprocess,hkershaw-brown/rttov-test,jmccreight/DART,jdmattern/wrf_hydro_nwm_public,NCAR/wrf_hydro_nwm_public",,https://github.com/NCAR,http://ncar.ucar.edu,"Boulder, CO",,,https://avatars.githubusercontent.com/u/2007542?v=4,,, HYCOM,"A three-dimensional depiction of the ocean state at fine resolution in real time, provision of boundary conditions for coastal and regional models, and provision of oceanic boundary conditions for a global coupled ocean-atmosphere prediction model.",HYCOM,https://github.com/HYCOM/HYCOM-src.git,github,"earth-science,ocean-modelling",Ocean Circulation Models,"2023/09/19, 15:34:07",25,0,6,true,Fortran,HYCOM.org,HYCOM,"Fortran,C,Makefile,Shell",,"b'# HYCOM-src\n\nThis is the HYCOM-src repository for the HYbrid Coordinate Ocean Model (HYCOM) source code.\n\nHYCOM uses conventional Primitive Equation 2nd order finite volume dynamics in the horizontal and solves the layered continuity equation. However, in the vertical it uses the Arbitrary Lagrangian Eulerian (ALE) approach to extend its range of applicability beyond that of a standard layered Ocean General Circulation Model (OGCM).\n\n# Where to find the information\n\nAll information about HYCOM-src (installation, compilation) is described on the [HYCOM-src Wiki](https://github.com/HYCOM/HYCOM-src/wiki) page. \n'",,"2019/02/20, 14:54:32",1708,MIT,12,129,"2023/09/08, 13:31:53",1,12,17,3,47,0,0.3,0.5849056603773585,"2019/12/05, 21:56:18",2.3.01,0,6,false,,false,false,,,https://github.com/HYCOM,https://hycom.org/hycom/overview,"Tallahassee, FL",,,https://avatars.githubusercontent.com/u/30185783?v=4,,, leaflet-velocity,"Create a canvas visualization layer for direction and intensity of arbitrary velocities (e.g.
wind, ocean current).",danwild,https://github.com/onaci/leaflet-velocity.git,github,"leaflet,velocity,wind,water,ocean,current,weather,visualization,visualisation",Waves and Currents,"2023/03/15, 00:28:31",527,154,98,true,JavaScript,CSIRO Oceans and Atmosphere - Coastal Informatics Team,onaci,"JavaScript,CSS",https://onaci.github.io/leaflet-velocity/,"b'# leaflet-velocity [![NPM version][npm-image]][npm-url] [![NPM Downloads][npm-downloads-image]][npm-url]\n\n## Version 2 Notice\n\nAs of version 2, `leaflet-velocity` is now under [CSIRO](https://www.csiro.au)\'s [Open Source Software Licence Agreement](LICENSE.md), which is variation of the BSD / MIT License.\n\nThere are no other plans for changes to licensing, and the project will remain open source.\n\n---\n\nA plugin for Leaflet (v1.0.3, and v0.7.7) to create a canvas visualisation layer for direction and intensity of arbitrary velocities (e.g. wind, ocean current).\n\nLive Demo: https://onaci.github.io/leaflet-velocity/\n\n- Uses a modified version of [WindJS](https://github.com/Esri/wind-js) for core functionality.\n- Similar to [wind-js-leaflet](https://github.com/danwild/wind-js-leaflet), however much more versatile (provides a generic leaflet layer, and not restricted to wind).\n- Data input format is the same as output by [wind-js-server](https://github.com/danwild/wind-js-server), using [grib2json](https://github.com/cambecc/grib2json).\n\n![Screenshot](/screenshots/velocity.gif?raw=true)\n\n## Example use:\n\n```javascript\nvar velocityLayer = L.velocityLayer({\n displayValues: true,\n displayOptions: {\n // label prefix\n velocityType: ""Global Wind"",\n\n // leaflet control position\n position: ""bottomleft"",\n\n // no data at cursor\n emptyString: ""No velocity data"",\n\n // see explanation below\n angleConvention: ""bearingCW"",\n\n // display cardinal direction alongside degrees\n showCardinal: false,\n\n // one of: [\'ms\', \'k/h\', \'mph\', \'kt\']\n speedUnit: ""ms"",\n\n // direction label prefix\n directionString: ""Direction"",\n\n // speed label prefix\n speedString: ""Speed"",\n },\n data: data, // see demo/*.json, or wind-js-server for example data service\n\n // OPTIONAL\n minVelocity: 0, // used to align color scale\n maxVelocity: 10, // used to align color scale\n velocityScale: 0.005, // modifier for particle animations, arbitrarily defaults to 0.005\n colorScale: [], // define your own array of hex/rgb colors\n onAdd: null, // callback function\n onRemove: null, // callback function\n opacity: 0.97, // layer opacity, default 0.97\n\n // optional pane to add the layer, will be created if doesn\'t exist\n // leaflet v1+ only (falls back to overlayPane for < v1)\n paneName: ""overlayPane"",\n});\n```\n\nThe angle convention option refers to the convention used to express the wind direction as an angle from north direction in the control.\nIt can be any combination of `bearing` (angle toward which the flow goes) or `meteo` (angle from which the flow comes),\nand `CW` (angle value increases clock-wise) or `CCW` (angle value increases counter clock-wise). If not given defaults to `bearingCCW`.\n\nThe speed unit option refers to the unit used to express the wind speed in the control.\nIt can be `m/s` for meter per second, `k/h` for kilometer per hour or `kt` for knots. 
If not given defaults to `m/s`.\n\n## Public methods\n\n| method | params | description |\n| ------------ | ---------- | --------------------------------- |\n| `setData` | `{Object}` | update the layer with new data |\n| `setOptions` | `{Object}` | update the layer with new options |\n\n## Build / watch\n\n```shell\nnpm run watch\n```\n\n## Reference\n\n`leaflet-velocity` is possible because of things like:\n\n- [L.CanvasOverlay.js](https://gist.github.com/Sumbera/11114288)\n- [WindJS](https://github.com/Esri/wind-js)\n- [earth](https://github.com/cambecc/earth)\n\n## Example data\n\nData shown for the Great Barrier Reef has been derived from [CSIRO\'s eReefs products](https://research.csiro.au/ereefs/)\n\n## License\n\nCSIRO Open Source Software Licence Agreement (variation of the BSD / MIT License)\n\n[npm-image]: https://badge.fury.io/js/leaflet-velocity.svg\n[npm-url]: https://www.npmjs.com/package/leaflet-velocity\n[npm-downloads-image]: https://img.shields.io/npm/dt/leaflet-velocity.svg\n'",,"2017/02/05, 23:47:04",2453,CUSTOM,14,146,"2023/03/13, 23:11:41",21,33,78,6,226,3,0.0,0.3173076923076923,"2023/03/15, 00:29:23",v2.1.4,0,8,false,,false,false,"Topsya/home-work,madelinap/lapetitefleur,joohe71/map_visualization_test,nafisazizir/sparc-frontend,Siddhartha-ui/ReconData,tfjackc/sol_mapping,RAHULFROST7/Reports-Plastic-Detection,MrPhyaeSoneThwim/weather-map,Lycee-Experimental/nsi-lxp,Joelkohym/Bespojke,GlenWz/someCodeOfGIS,yazanismail1/digitalPulse,osgeocn/QuaGIS,RasikaRavi/locationmap,MAYHEM-Lab/Depot,HachaniAli/Smart-city,EgorSherstnev/MapApp,Omar-Abel/ProyectoIA-MR,ohheyitsdave/maps.sensor.community,govindpal5101999/weather-app,vikingnope/worther,ducphatnguyen/admindashboard,PortablePy/3.10.8.0,jacqueszk/MiningMaps,ronalgomezjmu/raston-flask,TrellixVulnTeam/streamlit_P7B5,mashoodj/wind-animation,PortablePy/3.10.9.0,PortablePy/3.11.1.0,PortablePy/3.09.10.0,PortablePy/3.08.12.3,PortablePy/3.09.04.0,coolest-team/atmosphere-web,PowderL/aqstmapdemo,KaziJahidurRahaman/dashboard-for-remotesensing-django,LanSevcnikar/IAZiOPzZD-private,tstojanovic8232/NWT-KTS-projekat,PaulESantos/talks_templates,slarkjm0803/vuetify-frontend,Jen-Wenzel/impact_monitoring,shihongbo1314/yunnanlincangtianqing,waldbrandpraevention/frontend,UCA-Datalab/smartship-landing-page,marc2541/ProjRecherche,ilyesMeritis/lefletLayerExample,appidaniel97/venv-geemap,dongyi1996/gee-manual,bukun/TorCMS,driver005/Frontend,tyson-swetnam/ipyleaflet,nbellias/AngularLeaflet,crabtr26/jupyterlite_dist,bella-mir/set-of-maps,ceeoinnovations/OnshapeFlaskServer,lweyajoe/jupyter-notebook,jthomasmock/quarto-presentation,XiHuang1999/EsGrassBiomass,tradingstrategy-ai/jupyterlite-pyodide-build,stcorp/ipyleaflet-gl-vector-layer-plugin,tix06/jupyterlite_test,manhninh/demo-netcdf,weacast/weacast,iiartpaint/jupyterlite,adminwayra/STREAMLIT-DEMO,JacobJeppesen/jupyterlite-django-deployment,wxwgh/weatherchaotunew,pjgueno/luft_node,airalab/sensors.robonomics.network,liangzuan1983/gis-changchun-info-vue,kamclar/jl,fredericturmel/karakella,lbybrian/test,1142235090/hanzhao-project,ichacahyawulan/leaflet-openweathermap,nsp-team3/north-sea-port-team-3,codelink-ai/jupyterlite,fthomas61/fthomas61.github.io,NewGardenFarms/feinstaub-map-v2,NewGardenFarms/network-observer,Priya-2016/myrepo,m-farahmand/forecast_web,wxwgh/weatherchaotu,Jero2760/jupyterlite_example,mcsmonk/jupyterlite,andresfelipe9619/todosurf-react-map,opendata-stuttgart/network-observer,shamil8/prpr-map,hoagVu/satellite-image-processing-web,Pobx/angular-leaflet,krishnaglodha/geopython2021-Geospatial-
analysis-101,pjgueno/SCPublicData,Dayong99/South-China-Sea,EdonaHaziraj/Safometer,esglobe/gee-manual,wangmmeng/earlyWarning,duduita/react-icea_project,15895467489/qxpcvueapp,brianguzman0321/eric-cdt-frontend,yuanyuan-xy/typhoon,xionkq/WorkSpace,AlexRogalskiy/weather-map,acidburn0zzz/ipyleaflet,Fjldream/11.27vu,fengjutian/Vue2Leaflet,fengjutian/wind-leaflet-vue,bonomali/ipyleaflet,mohsulthana/sipongi-nuxt,pjgueno/feinstaub-map-v2-custom2,Phanty133/VITC2020,pjgueno/feinstaub-map-v2-sidebar,mi3nts/mints-air-quality-live-web,pjgueno/feinstaub-map-v2-custom,vacqq/LAN_vue,vacqq/vue_demo,terrygreen0606/Vue-Dashboard-App,OAC-TW/OAC-SPA,Global-Mapping-Hub/fire-dashboard,Danney100/Vessel-vue-vutify-vuex,glory6ms/height,a-bouts/nav-ui,sisyphean-timmy/106062-OacMap,simonsobs/psplay,weatherforce/ipyleaflet-reset,GithubDengmingchen/databook,xgarrido/psplay,thibautlouis/psplay,qingqibing/leaflet_ocean-weather,mokrayaGISka/react-leaflet-lesson6,mi3nts/sharedairdfw_map,didv097/cdt,weatherforce/ipyleaflet-legend,mi3nts/MINTS-AQ-FRONTEND-S20,nilotpolaaa/IndiaForecast1,nilotpolaaa/WeatherDashboard,hidrogit/Training,ligithubxn/databook,weishuliang/databookht,Senior-Design-UTD-MINTS/MINTS-AQ-FRONTEND,kimjulia1117/wind-frontend,songtianlun/gis-changchun-info-vue,kalisio/crisis,weacast/weacast-leaflet,kalisio/kano,gambinish/geoFAD,databooks/databook,openthings/databook-old,jupyter-widgets/ipyleaflet,christensenmichael0/OceanMapper,0nza1101/ionic5-leaflet-velocity,linghuam/ocean-weather,chriswilley/angular2-leaflet-base,weacast/weacast-client,RegatteBzh/html-client,weacast/weacast-app",,https://github.com/onaci,https://www.csiro.au/en/about/people/business-units/Oceans-and-Atmosphere,Australia,,,https://avatars.githubusercontent.com/u/34732655?v=4,,, OpenDrift,"A software for modeling the trajectories and fate of objects or substances drifting in the ocean, or even in the atmosphere.",OpenDrift,https://github.com/OpenDrift/opendrift.git,github,"python,ocean-modelling,trajectory,ocean",Waves and Currents,"2023/10/20, 13:04:37",205,1,44,true,Python,OpenDrift,OpenDrift,"Python,Shell,Dockerfile",https://opendrift.github.io,"b'[![Build Status](https://circleci.com/gh/OpenDrift/opendrift.svg?style=svg)](https://app.circleci.com/pipelines/github/OpenDrift/opendrift)\n[![Coverage Status](https://coveralls.io/repos/github/OpenDrift/opendrift/badge.svg?branch=master)](https://coveralls.io/github/OpenDrift/opendrift?branch=master)\n[![DOI](https://zenodo.org/badge/DOI/10.5281/zenodo.582321.svg)](https://doi.org/10.5281/zenodo.582321)\n[![Slack](https://img.shields.io/badge/slack-opendrift-yellow.svg)](https://join.slack.com/t/opendrift-dev/shared_invite/zt-ozansc5h-AzMOOS9jOs~3CBihRR37Lw)\n\nopendrift\n=========\n\n![Image](https://github.com/opendrift/opendrift/blob/master/docs/opendrift_logo.png)\n\nOpenDrift is a software for modeling the trajectories and fate of objects or substances drifting in the ocean, or even in the atmosphere.\n\n![OpenDrift animation](https://dl.dropboxusercontent.com/s/u9apyh7ci1mdowg/opendrift.gif?dl=0)\n\n[Documentation and installation instructions can be found here](https://opendrift.github.io/install.html).\n\nDevelopment\n===========\n\nWe have a [slack-organization open for anyone to join](https://join.slack.com/t/opendrift-dev/shared_invite/zt-ozansc5h-AzMOOS9jOs~3CBihRR37Lw).\n'",",https://doi.org/10.5281/zenodo.582321","2013/12/16, 13:32:04",3600,GPL-2.0,257,4015,"2023/10/25, 
07:55:47",73,739,1033,153,0,12,0.0,0.28456042249145697,,,0,20,false,,false,false,olavforland/GhostNetModeling,,https://github.com/OpenDrift,https://opendrift.github.io/,,,,https://avatars.githubusercontent.com/u/23311271?v=4,,, dorado,Simulating passive Lagrangian particle transport over flow-fields from any 2D shallow-water hydrodynamic model using a weighted random walk methodology.,passaH2O,https://github.com/passaH2O/dorado.git,github,"lagrangian,particles,particle-transport,tracer,particle-tracking,particle-tracing,random-walk,rivers,hydrodynamic-modeling,simulator,particle,particle-routing,passive-tracers,numerical-modeling,numerical-modelling",Waves and Currents,"2023/06/20, 20:54:27",45,0,22,true,Python,PassaH2O Group,passaH2O,"Python,TeX",https://passah2o.github.io/dorado,"b'# dorado - Lagrangian particle routing\n![build](https://github.com/passaH2O/dorado/workflows/build/badge.svg)\n[![codecov](https://codecov.io/gh/passaH2O/dorado/branch/master/graph/badge.svg?token=A4MWN4K1XJ)](https://codecov.io/gh/passaH2O/dorado)\n![PyPI - Python Version](https://img.shields.io/pypi/pyversions/pydorado)\n[![PyPI version](https://badge.fury.io/py/pydorado.svg)](https://badge.fury.io/py/pydorado)\n[![status](https://joss.theoj.org/papers/f1f61292f998588ae06bb1cd14dd4063/status.svg)](https://joss.theoj.org/papers/f1f61292f998588ae06bb1cd14dd4063)\n
\n\ndorado is a Python package for simulating passive Lagrangian particle transport over flow-fields from any 2D shallow-water hydrodynamic model using a weighted random walk methodology.\n\nFor user guides and detailed examples, refer to the [documentation](https://passah2o.github.io/dorado/index.html).\n\n### Example Uses:\n\n### Particles on an Unsteady ANUGA Flow Field of the Wax Lake Delta\n
\n\n### Particles on a DeltaRCM Simulated Delta\n
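Both example animations above are produced with the same basic routing loop. The following is a minimal, hypothetical sketch of that pattern; the `dorado.particle_track` names (`modelParams`, `Particles`, `generate_particles`, `run_iteration`) follow the project's quickstart documentation but are assumptions here, so check the docs for your installed version:

```python
# Hypothetical dorado routing sketch: API names assumed from the
# dorado quickstart; the flow field is synthetic, not real model output.
import numpy as np
import dorado.particle_track as pt

ny, nx = 100, 100
params = pt.modelParams()             # container for grid + flow fields (assumed)
params.dx = 50.0                      # grid cell size [m]
params.depth = np.ones((ny, nx))      # water depth [m]
params.qx = np.zeros((ny, nx))        # unit discharge, x-component
params.qy = np.ones((ny, nx))         # unit discharge, y-component

particles = pt.Particles(params)
# Seed 50 tracers inside a small square of cell indices
particles.generate_particles(50, list(range(40, 60)), list(range(40, 60)))
walk_data = particles.run_iteration() # one step of the weighted random walk
```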
\n\n## Installation:\ndorado supports Python 2.7 as well as Python 3.5+. For the full distribution including examples, clone this repository using `git clone` and run `python setup.py install` from the cloned directory. To test this ""full"" installation, you must first install `pytest` via `pip install pytest`. Then from the cloned directory the command `pytest` can be run to ensure that your installed distribution passes all of the unit tests.\n\nFor a lightweight distribution including just the core functionality, use `pip` to install via PyPI:\n\n pip install pydorado\n \nInstallation using `conda` via `conda-forge` is also supported:\n\n conda install -c conda-forge pydorado\n\nFor additional installation options and instructions, refer to the [documentation](https://passah2o.github.io/dorado/install/index.html).\n\n## Contributing\nWe welcome contributions to the dorado project. Please open an issue or a pull request if there is functionality you would like to see or propose. Refer to our [contributing guide](https://passah2o.github.io/dorado/misc/contributing.html) for more information.\n\n## Citing\nIf you use this package and wish to cite it, please use the [Journal of Open Source Software article](https://joss.theoj.org/papers/10.21105/joss.02585#).\n\n## Funding Acknowledgments\nThis work was supported in part by NSF EAR-1719670, the NSF GRFP under grant DGE-1610403 and the NASA Earth Venture Suborbital (EVS) award 17-EVS3-17_1-0009 in support of the DELTA-X project.\n'",,"2020/07/28, 21:09:07",1184,MIT,12,410,"2023/06/20, 20:38:07",4,28,41,7,127,1,0.2,0.1962962962962963,"2023/06/20, 20:55:11",v2.5.3,0,4,false,,false,false,,,https://github.com/passaH2O,https://sites.google.com/site/passalacquagroup/home,,,,https://avatars.githubusercontent.com/u/30088719?v=4,,, parcels,"Can be used to track passive and active particulates such as water, plankton, plastic and fish.",OceanParcels,https://github.com/OceanParcels/parcels.git,github,"ocean-circulation-models,lagrangian-ocean-modelling,particles",Waves and Currents,"2023/10/25, 08:11:55",245,21,22,true,Python,,OceanParcels,"Python,C",http://www.oceanparcels.org,"b'## Parcels\n\n**Parcels** (**P**robably **A** **R**eally **C**omputationally **E**fficient **L**agrangian **S**imulator) is a set of Python classes and methods to create customisable particle tracking simulations using output from Ocean Circulation models. Parcels can be used to track passive and active particulates such as water, plankton, [plastic](http://www.topios.org/) and [fish](https://github.com/Jacketless/IKAMOANA).\n\n![Arctic-SO-medusaParticles](https://github.com/OceanParcels/oceanparcels_website/blob/master/images/homepage.gif)\n\n*Animation of virtual particles carried by ocean surface flow in the global oceans. The particles are advected with [Parcels](http://oceanparcels.org/) in data from the [NEMO Ocean Model](https://www.nemo-ocean.eu/).*\n\n### Parcels manuscript and code\n\nThe manuscript detailing the first release of Parcels, version 0.9, has been published in [Geoscientific Model Development](https://www.geosci-model-dev.net/10/4175/2017/gmd-10-4175-2017.html) and can be cited as\n\n*Lange, M and E van Sebille (2017) Parcels v0.9: prototyping a Lagrangian Ocean Analysis framework for the petascale age. Geoscientific Model Development, 10, 4175-4186. 
https://doi.org/10.5194/gmd-2017-167*\n\nThe manuscript detailing version 2.0 of Parcels is available at [Geoscientific Model Development](https://www.geosci-model-dev.net/12/3571/2019/gmd-12-3571-2019-discussion.html) and can be cited as:\n\n*Delandmeter, P and E van Sebille (2019) The Parcels v2.0 Lagrangian framework: new field interpolation schemes. Geoscientific Model Development, 12, 3571-3584. https://doi.org/10.5194/gmd-12-3571-2019*\n\nThe manuscript detailing the performance of Parcels v2.4 is available at [Computers & Geosciences](https://doi.org/10.1016/j.cageo.2023.105322) and can be cited as:\n\n*Kehl, C, PD Nooteboom, MLA Kaandorp and E van Sebille (2023) Efficiently simulating Lagrangian particles in large-scale ocean flows \xe2\x80\x94 Data structures and their impact on geophysical applications, Computers and Geosciences, 175, 105322. https://doi.org/10.1016/j.cageo.2023.105322*\n\n### Further information\n\nSee [oceanparcels.org](http://oceanparcels.org/) for further information about [installing](https://docs.oceanparcels.org/en/latest/installation.html) and [running](https://docs.oceanparcels.org/en/latest/documentation.html) the Parcels code, as well as extended [documentation](https://docs.oceanparcels.org/en/latest/reference.html) of the methods and classes.\n\n\n### Launch Parcels Tutorials on [mybinder.org](https://mybinder.org/v2/gh/OceanParcels/parcels/master?labpath=docs%2Fexamples%2Fparcels_tutorial.ipynb)\n\n[![Binder](https://mybinder.org/badge_logo.svg)](https://mybinder.org/v2/gh/OceanParcels/parcels/master?labpath=docs%2Fexamples%2Fparcels_tutorial.ipynb)\n[![unit-tests](https://github.com/OceanParcels/parcels/actions/workflows/unit-tests.yml/badge.svg)](https://github.com/OceanParcels/parcels/actions/workflows/unit-tests.yml)\n[![codecov](https://codecov.io/gh/OceanParcels/parcels/branch/master/graph/badge.svg)](https://codecov.io/gh/OceanParcels/parcels)\n[![Anaconda-release](https://anaconda.org/conda-forge/parcels/badges/version.svg)](https://anaconda.org/conda-forge/parcels/)\n[![Anaconda-date](https://anaconda.org/conda-forge/parcels/badges/latest_release_date.svg)](https://anaconda.org/conda-forge/parcels/)\n[![Zenodo](https://zenodo.org/badge/DOI/10.5281/zenodo.823561.svg)](https://doi.org/10.5281/zenodo.823561)\n[![CII Best Practices](https://bestpractices.coreinfrastructure.org/projects/5353/badge)](https://bestpractices.coreinfrastructure.org/projects/5353)\n'",",https://doi.org/10.5194/gmd-2017-167*\n\nThe,https://doi.org/10.5194/gmd-12-3571-2019*\n\nThe,https://doi.org/10.1016/j.cageo.2023.105322,https://doi.org/10.1016/j.cageo.2023.105322*\n\n###,https://doi.org/10.5281/zenodo.823561","2015/09/29, 09:43:00",2948,MIT,680,5886,"2023/10/25, 08:11:59",12,780,1314,269,0,1,0.2,0.4808229616459233,"2023/10/10, 13:28:44",v3.0.0,0,45,false,,true,false,"OceanParcels/Lagrangian_diags,chyalexcheng/grainLearning,cspencerjones/amoc_heat_code,vinguyen777/Data-Science-on-Agriculture,MrBounty/Data_portfolio,rbucciarelli/ocean-parcels-dev,robot144/northsea_particles,faahrin/UKESM1_Ozone_clustering,GrainLearning/grainLearning,marcellinus-witarsah/2023-ey-open-science-data-challenge-1,euroargodev/VirtualFleet,zoeyjiao1104/BOG_practicum,tedw0ng/wri-agriadapt-demo,ngam/ngc-ext-pangeo,nia-jones/offer_day_sos,geo-smart/SnowCast,UBC-MOAD/MoaceanParcels,pgader/FundAndAppl-AI-STEM,axiom-data-science/oot-workflows,Orthogonal-Research-Lab/CGS,jimboH/CGS",,https://github.com/OceanParcels,http://oceanparcels.org,,,,https://avatars.githubusercontent.com/u/14887518?v=4,,, 
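To make the Parcels workflow above concrete, here is a schematic advection sketch following the pattern of the Parcels tutorials; the file name and the variable/dimension mappings are placeholders, not a real dataset:

```python
from datetime import timedelta
from parcels import AdvectionRK4, FieldSet, JITParticle, ParticleSet

# Placeholder hydrodynamic input; the variables/dimensions dicts map the
# names in the file to the U/V fields Parcels expects.
fieldset = FieldSet.from_netcdf(
    "model_output.nc",  # placeholder file name
    variables={"U": "uo", "V": "vo"},
    dimensions={"lon": "longitude", "lat": "latitude", "time": "time"},
)

# Seed one passive particle and advect it with 4th-order Runge-Kutta
pset = ParticleSet(fieldset=fieldset, pclass=JITParticle, lon=[3.0], lat=[52.0])
output_file = pset.ParticleFile(name="trajectories.zarr",
                                outputdt=timedelta(hours=6))
pset.execute(AdvectionRK4, runtime=timedelta(days=5), dt=timedelta(minutes=10),
             output_file=output_file)
```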
wavespectra,An open source project for working with ocean wave spectral data.,wavespectra,https://github.com/wavespectra/wavespectra.git,github,"ocean,python,spectra,wave,xarray,statistics",Waves and Currents,"2023/10/20, 08:33:31",42,10,16,true,Python,Wavespectra,wavespectra,"Python,Fortran,Makefile",https://wavespectra.readthedocs.io/en/latest/,"b""wavespectra\n===========\nPython library for ocean wave spectra.\n\n.. image:: https://zenodo.org/badge/205463939.svg\n :target: https://zenodo.org/badge/latestdoi/205463939\n.. image:: https://img.shields.io/github/actions/workflow/status/wavespectra/wavespectra/python-publish.yml\n :target: https://github.com/wavespectra/wavespectra/actions\n :alt: GitHub Workflow Status (with event)\n.. image:: https://coveralls.io/repos/github/wavespectra/wavespectra/badge.svg\n :target: https://coveralls.io/github/wavespectra/wavespectra\n.. image:: https://readthedocs.org/projects/wavespectra/badge/?version=latest\n :target: https://wavespectra.readthedocs.io/en/latest/\n.. image:: https://img.shields.io/pypi/v/wavespectra.svg\n :target: https://pypi.org/project/wavespectra/\n.. image:: https://img.shields.io/pypi/dm/wavespectra\n :target: https://pypistats.org/packages/wavespectra\n :alt: PyPI - Downloads\n.. image:: https://img.shields.io/badge/code%20style-black-000000.svg\n :target: https://github.com/python/black\n.. image:: https://img.shields.io/pypi/pyversions/wavespectra\n :target: https://pypi.org/project/wavespectra/\n :alt: PyPI - Python Version\n\nMain contents:\n--------------\n* SpecArray_: extends xarray's `DataArray`_ with methods to manipulate wave spectra and calculate spectral statistics.\n* SpecDataset_: wrapper around `SpecArray`_ with methods for selecting and saving spectra in different formats.\n\nDocumentation:\n--------------\nThe documentation is hosted on ReadTheDocs at https://wavespectra.readthedocs.io/en/latest/.\n\nInstall:\n--------\nWhere to get it\n~~~~~~~~~~~~~~~\nThe source code is currently hosted on GitHub at: https://github.com/wavespectra/wavespectra\n\nBinary installers for the latest released version are available at the `Python package index`_.\n\nInstall from pypi\n~~~~~~~~~~~~~~~~~\n.. code:: bash\n\n # Default install, miss some dependencies and functionality\n pip install wavespectra\n\n # Complete install\n pip install wavespectra[extra]\n\nInstall from conda\n~~~~~~~~~~~~~~~~~~~\n.. code:: bash\n\n # wavespectra is available in the conda-forge channel\n conda install -c conda-forge wavespectra\n\nInstall from sources\n~~~~~~~~~~~~~~~~~~~~\nInstall requirements. Navigate to the base root of wavespectra_ and execute:\n\n.. code:: bash\n\n # Default install, miss some dependencies and functionality\n pip install -r requirements/default.txt\n\n # Also, for complete install\n pip install -r requirements/extra.txt\n\nThen install wavespectra:\n\n.. code:: bash\n\n python setup.py install\n\n # Run pytest integration\n python setup.py test\n\nAlternatively, to install in `development mode`_:\n\n.. code:: bash\n\n pip install -e .\n\nCode structure:\n---------------\nThe two main classes SpecArray_ and SpecDataset_ are defined as `xarray accessors`_. The accessors are registered on xarray's DataArray_ and Dataset_ respectively as a new namespace called `spec`.\n\nTo use methods in the accessor classes simply import the classes into your code and they will be available to your xarray.Dataset or xarray.DataArray instances through the `spec` attribute, e.g.\n\n.. 
code:: python\n\n import datetime\n import numpy as np\n import xarray as xr\n\n from wavespectra.specarray import SpecArray\n from wavespectra.specdataset import SpecDataset\n\n coords = {'time': [datetime.datetime(2017, 1, n+1) for n in range(2)],\n 'freq': [0.05,0.1],\n 'dir': np.arange(0,360,120)}\n efth = xr.DataArray(data=np.random.rand(2,2,3),\n coords=coords,\n dims=('time','freq', 'dir'),\n name='efth')\n\n In [1]: efth\n Out[1]:\n \n array([[[ 0.100607, 0.328229, 0.332708],\n [ 0.532 , 0.665938, 0.177731]],\n\n [[ 0.469371, 0.002963, 0.627179],\n [ 0.004523, 0.682717, 0.09766 ]]])\n Coordinates:\n * freq (freq) float64 0.05 0.1\n * dir (dir) int64 0 120 240\n * time (time) datetime64[ns] 2017-01-01 2017-01-02\n\n In [2]: efth.spec\n Out[2]:\n \n array([[[ 0.100607, 0.328229, 0.332708],\n [ 0.532 , 0.665938, 0.177731]],\n\n [[ 0.469371, 0.002963, 0.627179],\n [ 0.004523, 0.682717, 0.09766 ]]])\n Coordinates:\n * freq (freq) float64 0.05 0.1\n * dir (dir) int64 0 120 240\n * time (time) datetime64[ns] 2017-01-01 2017-01-02\n\n In [3]: efth.spec.hs()\n Out[3]:\n \n array([ 10.128485, 9.510618])\n Coordinates:\n * time (time) datetime64[ns] 2017-01-01 2017-01-02\n Attributes:\n standard_name: sea_surface_wave_significant_height\n units: m\n\nSpecDataset provides a wrapper around the methods in SpecArray. For instance, these produce the same result:\n\n.. code:: python\n\n In [4]: dset = efth.to_dataset(name='efth')\n\n In [5]: tm01 = dset.spec.tm01()\n\n In [6]: tm01.identical(dset.efth.spec.tm01())\n Out[6]: True\n\nData requirements:\n------------------\n\nSpecArray_ methods require DataArray_ to have the following attributes:\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n- wave frequency coordinate in `Hz` named as `freq` (required).\n- wave direction coordinate in `degree` (coming from) named as `dir` (optional for 1D, required for 2D spectra).\n- wave energy density data in `m2/Hz/degree` (2D) or `m2/Hz` (1D) named as `efth`.\n\nSpecDataset_ methods require xarray's Dataset_ to have the following attributes:\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n- spectra DataArray named as `efth`, complying with the above specifications\n\nExamples:\n---------\n\nDefine and plot spectra history from example SWAN_ spectra file:\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n\n.. code:: python\n\n from wavespectra import read_swan\n\n dset = read_swan('/source/wavespectra/tests/manus.spec')\n spec_hist = dset.isel(lat=0, lon=0).sel(freq=slice(0.05,0.2)).spec.oned().T\n spec_hist.plot.contourf(levels=10)\n\n.. _SpecArray: https://github.com/wavespectra/wavespectra/blob/master/wavespectra/specarray.py\n.. _SpecDataset: https://github.com/wavespectra/wavespectra/blob/master/wavespectra/specdataset.py\n.. _DataArray: http://xarray.pydata.org/en/stable/generated/xarray.DataArray.html\n.. _Dataset: http://xarray.pydata.org/en/stable/generated/xarray.Dataset.html\n.. _readspec: https://github.com/wavespectra/wavespectra/blob/master/wavespectra/readspec.py\n.. _xarray accessors: http://xarray.pydata.org/en/stable/internals.html?highlight=accessor\n.. _SWAN: http://swanmodel.sourceforge.net/online_doc/swanuse/node50.html\n.. _Python package index: https://pypi.python.org/pypi/wavespectra\n.. _wavespectra: https://github.com/wavespectra/wavespectra\n.. 
_development mode: https://pip.pypa.io/en/latest/reference/pip_install/#editable-installs\n""",",https://zenodo.org/badge/latestdoi/205463939\n","2019/08/30, 22:19:36",1517,MIT,102,1068,"2023/10/03, 08:05:56",14,35,82,24,22,1,1.2,0.1086294416243655,"2023/08/29, 01:07:07",v3.15.1,0,14,false,,true,true,"rom-py/rompy,akeow/WecOptTool,oalmeyda/WecOptTool,michaelcdevin/WecOptTool,dtgaebe/WecOptTool,cmichelenstrofer/WecOptTool,ryancoe/WecOptTool,sandialabs/WecOptTool,rsoutelino/pyromsgui,metocean/mirospy",,https://github.com/wavespectra,,,,,https://avatars.githubusercontent.com/u/54727176?v=4,,, LESbrary.jl,Generating a library of ocean turbulence large eddy simulation data to train ocean and climate models.,CliMA,https://github.com/CliMA/LESbrary.jl.git,github,,Waves and Currents,"2023/06/02, 06:45:43",27,0,2,true,Julia,Climate Modeling Alliance,CliMA,"Julia,Python",,"b'# LESbrary.jl\n\n[![DOI](https://zenodo.org/badge/DOI/10.5281/zenodo.7997001.svg)](https://doi.org/10.5281/zenodo.7997001)\n\nThis package is a framework for building a library of large eddy simulations (LES) of ocean surface boundary layer turbulence \xe2\x80\x94 the _LESbrary_ \xe2\x80\x94 with [Oceananigans.jl](https://github.com/climate-machine/Oceananigans.jl). The LESbrary will archive turbulence data for both idealized and realistic oceanic scenarios.\n\nLESbrary.jl is developed by the [Climate Modeling Alliance](https://clima.caltech.edu) and collaborators.\n\n## Adding LESbrary.jl to a julia environment\n\nTo add `LESbrary.jl` to your julia environment, type\n\n```julia\njulia> using Pkg\n\njulia> Pkg.add(url=""https://github.com/CliMA/LESbrary.jl.git"")\n```\n\nat the REPL. This will allow you to write `using LESbrary` in your scripts (and thereby run the scripts in `/examples`).\n\n## Accessing data generated by LESbrary.jl\n\nOutput from LESbrary.jl simulations --- currently somewhat haphazardly organized --- is accessible at https://engaging-web.mit.edu/~alir/lesbrary/\n'",",https://doi.org/10.5281/zenodo.7997001","2019/08/08, 15:40:56",1539,MIT,1,592,"2022/04/11, 23:11:37",24,95,115,0,562,3,0.7,0.34249471458773784,"2023/06/02, 06:43:42",v0.1.1,0,8,false,,false,false,,,https://github.com/CliMA,https://clima.caltech.edu,,,,https://avatars.githubusercontent.com/u/43161188?v=4,,, CO2SYS, A MATLAB (or Octave) project to compute variables of ocean CO2 systems.,jamesorr,https://github.com/jamesorr/CO2SYS-MATLAB.git,github,,Ocean Carbon and Temperature,"2020/06/29, 14:22:22",22,0,4,false,Jupyter Notebook,,,"Jupyter Notebook,MATLAB",,"b'**CITATION**\n\n- If you use any CO2SYS related software, please cite the original work by Lewis and Wallace (1998).\n- If you use CO2SYS.m, please cite van Heuven et al (2011).\n- If you use errors.m or derivnum.m, please cite Orr et al. (2018).\n\n**CO2SYS-MATLAB versions**\n\n- 1.1 (Sept 2011): van Heuven et al. (2011) \n- 2.0 (20 Dec 2016): includes uncertainty propagation\n- 2.0.1 (11 Oct 2017): supports TEOS-10 standards (conservattive temperature, absolute salinity)\n- 2.0.2 (17 Oct 2017): Octave enhancements changed to be MATLAB compatible\n- 2.0.3 (4 Jun 2018): examples added as Jupyter notebooks\n- 2.0.4 (10 Nov 2018): defaults for standard uncertainties in constants (epK vector and eBt) made consistent with Orr et al. (2018), i.e., final published version \n- 2.0.5 (23 Nov 2018): fixed bug in eBt propagation to deriv array (thanks A. 
Cochon)\n- 2.1 (29 Jun 2020): fixed bug in derivnum affecting OUT results (linked to TEMPOUT); masked derivs of input vars in derivnum\n\n**ABOUT CO2SYS**\n\nHere you will find a MATLAB-version of CO2SYS, originally written for\nDOS. CO2SYS calculates and returns a detailed state of the carbonate system for\noceanographic water samples, if supplied with sufficient input. Use the CO2SYS\nfunction as you would use any other MATLAB inline function, i.e.,\na=func(b,c). For much detail about how to use CO2SYS, simply type ""help CO2SYS""\nin MATLAB. The help function also works for the two new uncertainty propagation\nroutines (errors and derivnum). For details on the internal workings of CO2SYS,\nplease refer to the original publication (Lewis and Wallace, 1998) available at\nhttp://cdiac.ornl.gov/oceans/co2rprt.html. Since CO2SYS and the two new\nroutines each allow input of vectors, with just one call they can process many\nsamples. Each sample may have a different salinity, temperature, pH scale,\ndissociation constants, etc.\n\n**HISTORY**\n\nThe original version for DOS was written by Lewis and Wallace\n(1998). That was translated to MATLAB by Denis Pierrot at CIMAS,\nUniversity of Miami, Miami, Florida. Then that code was vectorized,\nrefined, and optimized for computational speed by Steven van Heuven,\nUniversity of Groningen, The Netherlands. Although functionality was\nadded, the output of the CO2SYS function has not changed in form. All\nversions of CO2SYS that are available at CDIAC (DOS, Excel, MATLAB)\nshould produce nearly identical results when supplied with identical\ninput. Indeed, close agreement between these different versions of\nCO2SYS was demonstrated by Orr et al. (2015). More recently,\nCO2SYS-MATLAB has been modified to include uncertainty propagation\n(Orr et al., 2018): the main routine CO2SYS.m was altered slightly,\nwhile two new routines were added (errors.m and derivnum.m)\n\nIf you discover inconsistencies or have a more general bug report for\nCO2SYS.m, please notify S. van Heuven (svheuven at gmail.com), Denis\nPierrot (Denis.Pierrot at noaa.gov), or Alex Kozyr (kozyr at\nornl.gov). For any concerns about the uncertainty propagation routines\n(errors.m and derivnum.m), please contact James Orr (james.orr at\nlsce.ipsl.fr)\n\n**INSTALLING**\n\nDownload the m-files in the src directory (CO2SYS.m, errors.m, and derivnum.m);\nyou may also wish to download the examples in the examples directory. Place\nthese files in a local directory that is in MATLAB\'s search path, or add the\ndirectory where they are located to MATLAB\'s search path. The latter can be\ndone with MATLAB\'s addpath command, for example\n\naddpath (""my_matlab_directory/my_subdir"")\n\nThen run either of the examples in Matlab, or start using the CO2SYS routine\nstraight away.\n\n**COMPATIBILITY**\n\nBesides their use in MATLAB, the three functions (CO2SYS.m, derivnum.m, and\nerrors.m) also work well under octave, GNU\'s MATLAB clone.\n\n**EXAMPLES**\n\nExample MATLAB scripts demonstrating use of CO2SYS can be found in the\nexamples directory. Using the two new routines is similar, adding only\na few new arguments, e.g., for input uncertainties. More elaborate\nexamples are also available in another form in the \'notebooks\'\ndirectory. Either click on those files to visualize them (as HTML) or\ndownload those files and use them interactively as jupyter\nnotebooks. 
Within MATLAB or octave, you may also use the native help\nfacility, i.e., by typing ""help errors"" or ""help derivnum"" to find out\nmore.\n\n**REFERENCES**\n\nLewis, E. and Wallace, D. W. R. (1998) Program Developed for CO2\nSystem Calculations, ORNL/CDIAC-105, Carbon Dioxide Inf. Anal. Cent.,\nOak Ridge Natl. Lab., Oak Ridge, Tenn., 38 pp.,\nhttps://salish-sea.pnnl.gov/media/ORNL-CDIAC-105.pdf\n\nOrr, J. C., J.-P. Gattuso, and J.-M. Epitalon (2015) Comparison of ten\npackages that compute ocean carbonate chemistry, Biogeosciences, 12,\n1483\xe2\x80\x931510, https://doi.org/10.5194/bg-12-1483-2015 .\n\nOrr, J.C., J.-M. Epitalon, A. G. Dickson, and J.-P. Gattuso (2018) Routine\nuncertainty propagation for the marine carbon dioxide system, in prep. for\nMar. Chem., in press, https://doi.org/10.1016/j.marchem.2018.10.006.\n\nvan Heuven, S., D. Pierrot, J.W.B. Rae, E. Lewis, and D.W.R. Wallace (2011)\nMATLAB Program Developed for CO2 System Calculations. ORNL/CDIAC-105b. Carbon\nDioxide Information Analysis Center, Oak Ridge National Laboratory, U.S.\nDepartment of Energy, Oak Ridge, Tennessee. https://doi.org/10.3334/CDIAC/otg.CO2SYS_MATLAB_v1.1\n\n'",",https://doi.org/10.5194/bg-12-1483-2015,https://doi.org/10.1016/j.marchem.2018.10.006.\n\nvan,https://doi.org/10.3334/CDIAC/otg.CO2SYS_MATLAB_v1.1\n\n","2016/12/20, 17:29:21",2500,MIT,0,51,"2021/12/23, 01:19:12",6,1,2,0,672,2,0.0,0.0,,,0,1,false,,false,false,,,,,,,,,,, PyCO2SYS,Marine carbonate system calculations in Python.,mvdh7,https://github.com/mvdh7/PyCO2SYS.git,github,"carbon-dioxide,chemistry,chemistry-solver,co2sys,oceanography,python,seawater-carbonate-chemistry,marine-chemistry",Ocean Carbon and Temperature,"2023/07/05, 08:49:04",42,13,9,true,Python,,,"Python,MATLAB,TeX",https://PyCO2SYS.readthedocs.io,"b'# PyCO2SYS\n\n[![Tests](https://github.com/mvdh7/PyCO2SYS/workflows/Tests/badge.svg?branch=main)](https://github.com/mvdh7/PyCO2SYS/actions)\n[![pypi badge](https://img.shields.io/pypi/v/PyCO2SYS.svg?style=popout)](https://pypi.org/project/PyCO2SYS/)\n[![DOI](https://img.shields.io/badge/DOI-10.5281%2Fzenodo.3744275-informational)](https://doi.org/10.5281/zenodo.3744275)\n[![Docs](https://readthedocs.org/projects/pyco2sys/badge/?version=latest&style=flat)](https://pyco2sys.readthedocs.io/en/latest/)\n[![Coverage](https://github.com/mvdh7/PyCO2SYS/blob/main/.misc/coverage.svg)](https://github.com/mvdh7/PyCO2SYS/blob/main/.misc/coverage.txt)\n[![License: GPL v3](https://img.shields.io/badge/License-GPLv3-blue.svg)](https://www.gnu.org/licenses/gpl-3.0)\n[![Code style: black](https://img.shields.io/badge/code%20style-black-000000.svg)](https://github.com/psf/black)\n\n**Contents:**\n\n- [PyCO2SYS](#pyco2sys)\n - [Introduction](#introduction)\n - [Citation](#citation)\n - [Installation](#installation)\n - [Documentation](#documentation)\n - [Basic use](#basic-use)\n - [About](#about)\n - [License](#license)\n\n## Introduction\n\n**PyCO2SYS** is a Python implementation of CO2SYS, based on the [MATLAB v2.0.5](https://github.com/jamesorr/CO2SYS-MATLAB) but also including the updates made for [MATLAB CO2SYS v3](https://github.com/jonathansharp/CO2-System-Extd) as well as some additional related calculations. PyCO2SYS solves the full marine carbonate system from the values of any two of its parameters.\n\nEvery combination of input parameters has been tested, with differences in the results small enough to be attributable to floating point errors and iterative solver endpoint differences (i.e. negligible). 
See the scripts in [validate](https://github.com/mvdh7/PyCO2SYS/tree//validate) to see how and check this for yourself, and their [discussion](https://pyco2sys.readthedocs.io/en/latest/validate/) in the online docs. **Please [let us know](https://github.com/mvdh7/PyCO2SYS/issues) ASAP if you discover a discrepancy that we have not spotted!**\n\nDocumentation is available online at [PyCO2SYS.readthedocs.io](https://pyco2sys.readthedocs.io/en/latest/).\n\nThere are also some usage examples that you can either download or run live in your web browser (with no Python installation required) at [PyCO2SYS-examples](https://github.com/mvdh7/PyCO2SYS-examples#pyco2sys-examples).\n\n## Citation\n\nA paper describing PyCO2SYS is freely available:\n\n> Humphreys, M. P., Lewis, E. R., Sharp, J. D., and Pierrot, D. (2022). PyCO2SYS v1.8: marine carbonate system calculations in Python. *Geoscientific Model Development* 15, 15-43. [doi:10.5194/gmd-15-15-2022](https://doi.org/10.5194/gmd-15-15-2022).\n\nThe citation for the PyCO2SYS code is:\n\n> Humphreys, M. P., Schiller, A. J., Sandborn, D. E., Gregor, L., Pierrot, D., van Heuven, S. M. A. C., Lewis, E. R., and Wallace, D. W. R. (2022). PyCO2SYS: marine carbonate system calculations in Python. *Zenodo.* [doi:10.5281/zenodo.3744275](https://doi.org/10.5281/zenodo.3744275).\n\nThe DOI above refers to all versions of PyCO2SYS. Please also specify the version number that you used. You can find this in Python with:\n\n```python\nimport PyCO2SYS as pyco2\npyco2.hello()\n```\n\nAs per the instructions in the [the CO2SYS-MATLAB repo](https://github.com/jamesorr/CO2SYS-MATLAB), you should also consider citing the original work by [Lewis and Wallace (1998)](https://pyco2sys.readthedocs.io/en/latest/refs/#l).\n\n## Installation\n\nIf you manage Python with conda, we recommend that you first install NumPy, pandas and xarray into the environment where PyCO2SYS is to be installed with conda.\n\nThen, you can install from the Python Package Index:\n\n pip install PyCO2SYS\n\nUpdate an existing installation:\n\n pip install PyCO2SYS --upgrade --no-cache-dir\n\n## Documentation\n\nDocumentation for the current release, based on the `main` branch, is available at [PyCo2SYS.readthedocs.io](https://pyco2sys.readthedocs.io/en/latest/). The documentation for the in-development next version, based on the `develop` branch, is rendered at [mvdh.xyz/PyCO2SYS](https://mvdh.xyz/PyCO2SYS/).\n\n## Basic use\n\nThe only function you need is `pyco2.sys`. To solve the marine carbonate system from two of its parameters (`par1` and `par2`), just use:\n\n```python\nimport PyCO2SYS as pyco2\nresults = pyco2.sys(par1, par2, par1_type, par2_type, **kwargs)\n```\n\nThe keys to the `results` dict are described in the [documentation](https://pyco2sys.readthedocs.io/en/latest/co2sys_nd/#results). Arguments should be provided as scalars or NumPy arrays in any mutually broadcastable combination. 
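As a minimal illustration (values are arbitrary; parameter types 1 and 2 denote total alkalinity and dissolved inorganic carbon):

```python
import PyCO2SYS as pyco2

# Solve the carbonate system from total alkalinity and DIC at
# illustrative lab conditions; units follow the PyCO2SYS defaults.
results = pyco2.sys(
    par1=2300.0, par1_type=1,     # total alkalinity [umol/kg]
    par2=2100.0, par2_type=2,     # dissolved inorganic carbon [umol/kg]
    salinity=35, temperature=25,  # practical salinity, deg C
)
print(results["pH"], results["pCO2"])
```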
A large number of optional `kwargs` can be provided to specify everything beyond the carbonate system parameters \xe2\x80\x94 [read the docs!](https://pyco2sys.readthedocs.io/en/latest/co2sys_nd/).\n\nYou can also look at the [examples Notebooks](https://github.com/mvdh7/PyCO2SYS-examples) that you can try out without needing to install anything on your computer.\n\n## About\n\nPyCO2SYS is maintained by [Dr Matthew Humphreys](https://mvdh.xyz/) at the [NIOZ (Royal Netherlands Institute for Sea Research)](https://www.nioz.nl/en) with the support of the main developers of all previous versions of CO2SYS.\n\nContributions are welcome - please check the [guidelines](https://github.com/mvdh7/PyCO2SYS/blob/main/CONTRIBUTING.md) before setting to work.\n\n## License\n\nPyCO2SYS is licensed under the [GNU General Public License version 3 (GPLv3)](https://www.gnu.org/licenses/gpl-3.0.en.html).\n'",",https://doi.org/10.5281/zenodo.3744275,https://doi.org/10.5194/gmd-15-15-2022,https://doi.org/10.5281/zenodo.3744275","2020/01/30, 15:29:26",1364,GPL-3.0,39,987,"2023/02/06, 10:12:22",4,44,63,5,261,0,0.0,0.04638619201726002,"2023/01/19, 09:56:57",v1.8.2,0,6,false,,false,true,"takaito1/easX305_F2023,mvdh7/calkulate,ACTOMtoolbox/Code,takaito1/eas6133,phezel/eshape_carbon_flux,Reindra12/deteksi_kendaraan,takaito1/easX305_2022,mvdh7/foram,takaito1/easX305,BjerknesClimateDataCentre/bluecloud,andresawa/fluxes_exercice,mvdh7/vu-practical,mvdh7/PyCO2SYS-examples",,,,,,,,,, FluxEngine,An open source atmosphere ocean gas flux data processing toolbox.,oceanflux-ghg,https://github.com/oceanflux-ghg/FluxEngine.git,github,,Ocean Carbon and Temperature,"2022/11/04, 17:31:58",26,1,3,true,Jupyter Notebook,,,"Jupyter Notebook,Python,HTML,Shell",,"b""FluxEngine\n==========\n\nThe FluxEngine is an open source atmosphere-ocean gas flux data processing toolbox. The toolbox has so far contributed to 13 different journal publications, resulting in 5 press releases, contributed to 2 completed PhDs and 1 ongoing PhD, has been used within 5 UK and EU research projects and has been used in undergraduate and masters level teaching. It is now being used within the European Integrated Carbon Observing System (ICOS). This work collectively identifies and quantifies the importance of the oceans in regulating and storing carbon. \n\nKnown issues with v4.0.7\n----\n01 Sep 2022 - There is a known error with calculating fluxes using concentration data (thanks Silvie Lainela for finding this error and Tom Holding for identify the work around). With this option you have to set and use conca and concw as the main inputs. But the tools incorrectly request pgas_air and pgas_sw inputs (which are not used in the flux calculation when you use concentration data). To overcome this simply define and pass pgas_air and pgas_sw data as an additional input i.e. create a file containing NaN or 0s and add the information into the configuration file. This will allow the calculations to complete and the pgas data will not be used in the calculation. This error will be fixed in the next release of the fluxengine.\n\n\nVersion 4.0.\n----\nv4.0.7 released May 2022 - Small update adding flexibility to custom gas transfer velocity parameterisations.\n\nv4.0.2 released July 2020 - released as major FluxEngine v4 release - Small updates for re-analysis tool compatbility with SOCATv2020 release.\n\nv4.0.1 released May 2020.\n\nVersion 4.0.7 uses Python 3 (but still contains all of the functionality of FluxEngine v3.0). 
Version 4.0 is available to install from PyPi (https://pypi.org/project/fluxengine/) or GitHub (https://github.com/oceanflux-ghg/FluxEngine). This update means that the FluxEngine is now a standard Python package and this has simplified the installation process. For example, you can now install the FluxEngine by using the Python Package Installer, pip, using the command 'pip install fluxengine'. Whilst providing this update we also removed some functionality that was no longer needed which resulted in the FluxEngine configuration file format changing slightly and a new commandline tool is provided to update old configuration files (fe_update_config.py). Version 4.0 has been verified against reference runs using SOCATv4 pCO2 and Takahashi et al (2009) data sets, as described in Shutler et al. (2016) http://journals.ametsoc.org/doi/abs/10.1175/JTECH-D-14-00204.1. All results were consistent with those produced using v3.0. The description of all FluxEngine functionality can be found in Shutler et al., (2016) and Holding et al., (2019) https://doi.org/10.5194/os-15-1707-2019 and full documentation/usage instructions are included as a PDF documentation on the GitHub page.\n\nPlease reference these journal publications when using this toolbox and presenting its output in any publications.\n\nRe-analysed SOCAT datasets\n----\nEach year we used the FluxEngine to reanalyse the latest version of the Surface Ocean CO2 Atlas (SOCAT) database (https://www.socat.info). We provide this service for free and typically provide links to the reanalysed data a few weeks after the new release of the SOCAT database. The original SOCAT data are all sampled from different depths and each measurement is tied to a temperature values, but the sources of the temperature data varies. This makes these data less ideal for global air-sea flux analyses. To enable an accurate air-sea flux calculation they need to be reanalysed to a common depth and temperature dataset. The reanalysed SOCAT data that we provide are all referenced to a common sampling depth and temperature dataset and the pairing between CO2 data and temperature is retained. The tools and methods that perform the reanalysis are detailed within the FluxEngine publications. The reanalysed datasets contain the individual cruise version of the SOCAT database and the gridded SOCAT data. \n\nPlease read the dataset metadata when using these data and please follow the guidelines in the metadata when referencing and acknowleging their use. \n\nSims, R., Holding T, Ashton I, Shutler, J (2021) Reanalysed (depth and temperature consistent) surface ocean CO\xe2\x82\x82 atlas (SOCAT) version 2021\n\nHolding T, Ashton I, Shutler, J (2020) Reanalysed (depth and temperature consistent) surface ocean CO\xe2\x82\x82 atlas (SOCAT) version 2020, ICOS carbon portal, https://doi.org/10.18160/vmt4-4563\n\nHolding T, Ashton I, Shutler, J (2019). Reanalysed (depth and temperature consistent) surface ocean CO\xe2\x82\x82 atlas (SOCAT) version 2019, Pangaea, https://doi.pangaea.de/10.1594/PANGAEA.905316\n\n\n\nExample news articles (resulting from research performed using the FluxEngine)\n----\n1. The Daily Mail 'World's oceans soak up 900 million tonnes of CO2 a year MORE than previously thought \xe2\x80\x94 the amount emitted by 2.2 billion petrol cars' https://www.dailymail.co.uk/sciencetech/article-8698067/Worlds-oceans-soak-carbon-previously-thought.html\n\n2. 
Phys.org (2019) 'Satellites are key to monitoring ocean carbon' https://phys.org/news/2019-11-satellites-key-ocean-carbon.html\n\n3. ESA news (2019), 'Can oceans turn the tide on the climate crisis' https://www.esa.int/Our_Activities/Observing_the_Earth/Can_oceans_turn_the_tide_on_the_climate_crisis\n\n4. The Guardian (2018), 'Invisible scum on sea cuts CO2 exchange with air by up to 50%' https://www.theguardian.com/environment/2018/may/28/invisible-scum-on-sea-cuts-co2-exchange-with-air-by-up-to-50\n\n5. BBC News (2016), 'How Northern European waters soak up carbon dioxide' https://www.bbc.co.uk/news/science-environment-35654938\n\n\nAnimation \n----\nA short animation explaining the concepts of atmosphere-ocean gas exchange, why this is important, and what the FluxEngine enables \nhttp://due.esrin.esa.int/stse/videos/page_video013.php\n\n\nJournal publications (which use FluxEngine and/or FluxEngine outputs)\n----\nGuti\xc3\xa9rrez Loza, L., Wallin, M. B., Sahl\xc3\xa9e, E., Holding, T., Shutler, J. D., Rehder, G. & Rutgersson, A. (2021). Air\xe2\x80\x93sea CO2 exchange in the Baltic Sea\xe2\x80\x94A sensitivity analysis of the gas transfer velocity. Journal of Marine Systems, 222, https://www.sciencedirect.com/science/article/pii/S0924796321001007\n\nFriedlingstein et al, (2021), Global Carbon Budget 2020, Earth Syst. Sci. Data, 12, 3269\xe2\x80\x933340, https://doi.org/10.5194/essd-12-3269-2020. \n\nWatson, A.J., Schuster, U., Shutler, J.D. et al. Revised estimates of ocean-atmosphere CO2 flux are consistent with ocean carbon inventory. Nature Communications 11, 4422 (2020). https://doi.org/10.1038/s41467-020-18203-3 \n\nKitidis, V., Shutler, JD., Ashton, I. et al (2020) Winter weather controls net influx of atmospheric CO2 on the north-west European shelf, Scientific Reports, 9(20153), https://www.nature.com/articles/s41598-019-56363-5\n\nShutler, JD, Wanninkhof, R, Nightingale, PD, Woolf, DK, Bakker, DCE, Watson, A, Ashton, I, Holding, T, Chapron, B, Quilfen, Y, Fairall, C, Schuster, U, Nakajima, M, Donlon, D (2019) Satellites are critical for addressing critical science priorities for quantifying ocean carbon, Frontiers in Ecology and the Environment, https://doi.org/10.1002/fee.2129\n\nWoolf DK, Shutler JD, Goddijn\xe2\x80\x90Murphy L, Watson AJ, Chapron B, Nightingale PD, Donlon CJ, Piskozub J, Yelland MJ, Ashton I, et al (2019). Key Uncertainties in the Recent Air\xe2\x80\x90Sea Flux of CO2. Global Biogeochemical Cycles, https://doi.org/10.1029/2018GB006041\n\nHolding, T., Ashton, I. G., Shutler, J. D., Land, P. E., Nightingale, P. D., Rees, A. P., Brown, I., Piolle, J.-F., Kock, A., Bange, H. W., Woolf, D. K., Goddijn-Murphy, L., Pereira, R., Paul, F., Girand-Ardhuin, F., Chapron, B., Rehder, G., Ardhuin, F., Donlon, C. J. (2019) The FluxEngine air\xe2\x80\x93sea gas flux toolbox: simplified interface and extensions for in situ analyses and multiple sparingly soluble gases, Ocean Sci., https://doi.org/10.5194/os-15-1707-2019\n\nHenson SA, Humphreys MP, Land PE, Shutler JD, Goddijn-Murphy L, Warren M (2018). Controls on open-ocean North Atlantic \xce\x94pCO2 at seasonal and interannual timescales are different. Geophysical Research Letters, doi:10.1029/2018GL078797\n\nPereira R, Ashton, I, Sabbaghzadeh, B, Shutler, JD and Upstill-Goddard RC (2018). Reduced air\xe2\x80\x93sea CO2 exchange in the Atlantic Ocean due to biological surfactants. Nature Geoscience, 1. 
doi: 10.1038/s41561-018-0136-2\n\nHolding T, Ashton I, Woolf DK, Shutler JD (2018): FluxEngine v2.0 and v3.0 reference and verification data, PANGAEA, doi: 10.1594/PANGAEA.890118\n\nWrobel, I. (2017) Monthly dynamics of carbon dioxide exchange across the sea surface of the Arctic Ocean in response to changes in gas transfer velocity and partial pressure of CO2 in 2010. Oceanologia, 59(4), 445-459, doi: 10.1016/j.oceano.2017.05.001.\n\nAshton IG, Shutler JD, Land PE, Woolf DK, Quartly GD (2016), A Sensitivity Analysis of the Impact of Rain on Regional and Global Sea-Air Fluxes of CO2. PLoS ONE 11(9): e0161105. doi: 10.1371/journal.pone.0161105.\n\nWrobel I, Piskozub J (2016) Effect of gas-transfer velocity parameterization choice on air\xe2\x80\x93sea CO2 fluxes in the North Atlantic Ocean and the European Arctic, Ocean Science, 12, 1091-1103, doi: 10.5194/os-12-1091-2016.\n\nShutler JD, Land PE, Piolle J-F, Woolf DK, Goddijn-Murphy L, Paul F, Girard-Ardhuin F, Chapron B, Donlon CJ (2016), FluxEngine: a flexible processing system for calculating atmosphere-ocean carbon dioxide gas fluxes and climatologies, Journal of Atmospheric and Oceanic Technology, doi: 10.1175/JTECH-D-14-00204.1\n\nR\xc3\xb6denbeck C, Bakker DCE, Gruber N, Iida Y, Jacobson AR, Jones S, Landsch\xc3\xbctzer P, Metzl N, Nakaoka S, Olsen A, Park G-H, Peylin P, Rodgers KB, Sasse TP, Schuster U, Shutler JD, Valsala V, Wanninkhof R, and Zeng J (2015) Data-based estimates of the ocean carbon sink variability \xe2\x80\x93 first results of the Surface Ocean pCO2 Mapping intercomparison (SOCOM), Biogeosciences, 12, 7251-7278, doi: 10.5194/bg-12-7251-2015.\n\n\n\nInformation about older versions of the FluxEngine toolbox\n----\n\nVersion 3.0 (static as of 02 August 2019).\n----\nv3.0 (first release April 2018, updated September 2018, February 2019, April 2019, June 2019, July 2019, static 02 August 2019)\n\nVersion 3 (v3.0) uses Python 2.7 and has been verified against reference runs using SOCATv4 pCO2 and all results were identical to those produced using FluxEngine v2.0. A more comprehensive verification has been performed using references runs of the Takahashi et al. (2009) dataset as described in Shutler et al. (2016) http://journals.ametsoc.org/doi/abs/10.1175/JTECH-D-14-00204.1. All results were identical to those produced using v1.0 and v2.0. A journal paper describing the v3.0 updates is now available, Holding et al., (2019) and can be found here https://doi.org/10.5194/os-15-1707-2019. \n\nPlease reference these journal publications when using this toolbox and presenting its output in any publications.\n\nThe FluxEngine v3.0 updates and extensions were funded by the European Space Agency (ESA) research projects (OceanFlux Evolution, SKIM SciSoc) and two European Union (EU) research projects (Ringo and Integral). The two EU studies are preparatory projects for the European Integrated Carbon Observing System (ICOS). v3.0 additions to the toolbox include:\n\n \xe2\x80\xa2 A more flexible way of specifying input data in the configuration files.\n \xe2\x80\xa2 Data pre-processing options (e.g. 
unit conversion).\n \xe2\x80\xa2 Python is used for all tools, allowing a more streamlined workflow.\n \xe2\x80\xa2 A move toward an API-like toolkit, beyond a simple set of commandline tools.\n \xe2\x80\xa2 A more modularised structure to the code including modular k parameterisation and data pre-processing options.\n \xe2\x80\xa2 Metadata and default options specified in an xml file (settings.xml).\n \xe2\x80\xa2 Automatic verification scripts that use SOCATv4 and Takahashi09 reference datasets.\n \xe2\x80\xa2 Tools for simplifying analysis of in situ data (e.g. SOCAT format data from research cruises and fixed stations).\n \xe2\x80\xa2 Improvements for calculating N2O and CH4 gas fluxes (now using MOMENTO data format).\n\nv2.0 (July 2016)\n----\nThese updates have been verified against Takahashi (2009) using the verification options within the code. All results were identical to those derived from v1.0.\nThe updates included contribute to further publications in preparation and further details will be posted here following publication.\nThe updates include improved:\n\n \xe2\x80\xa2 handling for irregular grids,\n \xe2\x80\xa2 handling for different gases including O2, N2O and CH4, \n \xe2\x80\xa2 handling for in-situ data.\n\nSpecifically, data on irregular grids can now be handled through the main flux calculations. Note: the ofluxghg-flux-budgets.py tool is only valid for regular (1deg x 1deg) grids. \nIn-situ data should be put in separate netCDF files and the last two digits of the filename needs to represent the month of interest as a two digit number. e.g. January -> \xe2\x80\x9901\xe2\x80\x99. \nTo operate the system with different gases, the appropriate switch should be changed in ofluxghg-flux-calc.py. Please use ofluxghg-flux-calc.py --help for further information.\n\n\nv1.0 (09 March 2016)\n----\nThe FluxEngine open source atmosphere-ocean gas flux data processing tools. The license for this software toolbox can be found within this github repository.\nPlease reference the publication linked below when using this toolbox and presenting its output in any publications.\nA journal paper describing the toolbox has been published here: Shutler et al., (2016) http://journals.ametsoc.org/doi/abs/10.1175/JTECH-D-14-00204.1\nPlease send any feedback and comments to Jamie Shutler, email: j.d.shutler@exeter.ac.uk\nThe FluxEngine software was originally developed by The European Space Agency OceanFlux Greenhouse Gases and Evolution project teams.\n\n""",",https://doi.org/10.5194/os-15-1707-2019,https://doi.org/10.18160/vmt4-4563\n\nHolding,https://doi.org/10.5194/essd-12-3269-2020,https://doi.org/10.1038/s41467-020-18203-3,https://doi.org/10.1002/fee.2129\n\nWoolf,https://doi.org/10.1029/2018GB006041\n\nHolding,https://doi.org/10.5194/os-15-1707-2019\n\nHenson,https://doi.org/10.5194/os-15-1707-2019","2014/10/03, 10:10:37",3309,CUSTOM,3,381,"2021/09/23, 13:08:15",1,59,60,0,762,1,0.0,0.26,"2020/07/30, 08:21:27",fev4,0,3,false,,false,false,JamieLab/Teaching,,,,,,,,,, seacarb,An R package that calculates various parameters of the carbonate system in seawater.,jpgattuso,https://github.com/jpgattuso/seacarb-git.git,github,,Ocean Carbon and Temperature,"2023/05/22, 06:14:39",8,0,1,true,R,,,R,,"b'seacarb\n=======\n\nCarbonate chemistry with R\n\nIn 2003, Aur\xc3\xa9lien Proye and I put together seacarb, an R package that calculates various parameters of the carbonate system in seawater. The package was subsequently upgraded. 
In 2008, a new version (2.0) was built with the assistance of H\xc3\xa9lo\xc3\xafse Lavigne. Five additional functions were included to assist the design of perturbation experiments to investigate ocean acidification and other functions were revised in order to strictly follow the ""Guide to best practices for ocean CO2 measurements"" (Dickson et al., 2007). Seacarb uses equations mostly from the following sources:\n\n- Dickson A. G., Sabine C. L. & Christian J. R., 2007. Guide to best practices for ocean CO2 measurements. PICES Special Publication 3:1-191.\n- DOE, 1994. Handbook of methods for the analysis of the various parameters of the carbon dioxide system in sea water; version 2. Dickson, A. G. and Goyet, C., editors. ORNL/CDIAC-74, 1994.\n- Frankignoulle M., 1994. A complete set of buffer factors for acid/base CO2 system in seawater. Journal of Marine Systems 5: 111-118.\n- Orr J. C., Epitalon J.-M., Dickson A. G. & Gattuso J.-P., 2018. Routine uncertainty propagation for the marine carbon dioxide system. Marine Chemistry 207:84-107.\n- Zeebe R. E. & Wolf-Gladrow D. A., 2001. CO2 in seawater: equilibrium, kinetics, isotopes. Amsterdam: Elsevier, 346 pp.\n\nA portion of the code has been adapted, with permission from the authors, from the Matlab files mentioned above. I am grateful to Richard Zeebe and Dieter Wolf-Gladrow for that. H\xc3\xa9lo\xc3\xafse Lavigne contributed a lot to seacarb in 2008 and 2009; this was critical to the launch of version 2.0 and subsequent updates. Portions of code and/or corrections have also been contributed by Jean-Marie Epitalon (2004, 2018), Bernard Gentili (2006), Karline Soetaert (2007) and Jim Orr (2007, 2010, 2018). Jean-Marie Epitalon considerably improved the code, leading to much faster calculations in version 3.0.\n\nOrr et al. (2015) assessed seacarb together with other packages which calculate the seawater carbonate chemistry. In 2018, option to calculate uncertainty propagation was added (Orr et al., 2018).\n\n- Orr J. C., Epitalon J.-M., Dickson A. G. & Gattuso J.-P., 2018. Routine uncertainty propagation for the marine carbon dioxide system. Marine Chemistry 207:84-107. http://dx.doi.org/10.1016/j.marchem.2018.10.006\n- Orr J. C., Epitalon J.-M. & Gattuso J.-P., 2015. Comparison of ten packages that compute ocean carbonate chemistry. Biogeosciences 12:1483-1510. http://www.biogeosciences.net/12/1483/2015/\n\nThis program is provided free under the GNU General Public License (GNU GPL). It will be improved using the comments that I will receive. If you are new to R, please check the manuals and FAQs available on the R-project web site to get information on how to install R and the seacarb package on your system. Please only report and comment on seacarb, not on general problems related to R.\n\nBriefly, after installing R and if you have an Internet connection, here is the simplest way to install seacarb:\n\n- Launch R\n- To install seacarb (to be done only once), type the following command: install.packages(""seacarb"")\n- To load the seacarb package into memory in order to use it (to be done each time R is launched), type the following command: library(seacarb)\n\nThe seacarb package can be downloaded from the Comprehensive R Archive Network (CRAN; see link below). The documentation is included in the package and is accessible using standard R commands. Please give due credit to the publications mentioned above and cite seacarb as follows:\n\nGattuso J.-P., Epitalon J.-M., Lavigne H. & Orr J., 2021. seacarb: seawater carbonate chemistry. 
R package version 3.3.0. http://CRAN.R-project.org/package=seacarb\n\n'",,"2014/09/01, 06:39:34",3341,CUSTOM,4,362,"2023/05/21, 09:36:10",2,66,73,1,157,0,0.0,0.32820512820512826,"2021/03/12, 05:39:32",v3.2.16,0,6,false,,false,false,,,,,,,,,,, TSG-QC,Analysis and validation of underway Sea Surface Temperature and Sea Surface Salinity measurements from a SeaBird Thermosalinograph.,us191,,custom,,Ocean Carbon and Temperature,,,,,,,,,,https://forge.ird.fr/us191/TSG-QC,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,, Ocean Health Index Scores,"Provide invaluable, comprehensive, and quantitative assessments of progress towards healthy and sustainable oceans.",OHI-Science,https://github.com/OHI-Science/ohi-global.git,github,,Ocean Carbon and Temperature,"2023/10/02, 23:41:03",14,0,4,true,HTML,Ocean Health Index - Science,OHI-Science,"HTML,PostScript,R,TeX,JavaScript",http://ohi-science.org/ohi-global,"b'ohi-global\n==========\n \n### Ocean Health Index assessment of global EEZ regions \nThis repository includes files for the global OHI assessment using eez boundaries for 220 country and territorial regions.\n\nGeneral information about file structure is here: http://ohi-science.org/manual/#file-system-organization\n\n\n*eez* \n\nFiles associated with calculating the OHI scores for scenarios from 2012 to present.\n\n*metadata_documentation* \n\nDatabase for the data layers (data sources, names, etc.) and functions to create different data formats. layers_eez_base.csv describes filepaths of data layers used to calculate scores. \n\n*documents* \nSource documents and code to create method documents. \n\n*yearly_results* \nPost assessment analysis and visualization of data\n\n'",,"2014/06/11, 21:28:07",3423,GPL-2.0,78,1825,"2022/11/07, 21:56:44",0,52,52,1,352,0,0.0,0.6013071895424836,"2022/11/07, 21:54:44",v2022.1,0,20,false,,false,false,,,https://github.com/OHI-Science,http://ohi-science.org/,"Santa Barbara, CA",,,https://avatars.githubusercontent.com/u/5827205?v=4,,, ERSEM,"A marine biogeochemical and ecosystem model that describes the cycling of carbon, nitrogen, phosphorus, silicon, oxygen and iron through the lower trophic level pelagic and benthic ecosystems.",pmlmodelling,https://github.com/pmlmodelling/ersem.git,github,,Ocean Carbon and Temperature,"2023/06/30, 12:17:55",19,0,11,true,Fortran,Marine Systems Modelling group at the Plymouth Marine Laboratory,pmlmodelling,"Fortran,Python,Shell,CMake",,"b'![Tests](https://github.com/pmlmodelling/ersem/workflows/build-ersem/badge.svg) \n[![Documentation Status](https://readthedocs.org/projects/ersem/badge/?version=latest)](https://ersem.readthedocs.io/en/latest/?badge=latest)\n\n# ERSEM\n\n![ERSEM diagram](docs/images/ERSEM.png)\n\n[ERSEM](http://ersem.com) is a marine biogeochemical and ecosystem model. It describes the\ncycling of carbon, nitrogen, phosphorus, silicon, oxygen and iron through \nthe lower trophic level pelagic and benthic ecosystems. 
\n\n- **Installation Instructions:** conda: https://ersem.readthedocs.io/en/latest/tutorials/index.html#conda-installation, source: https://ersem.readthedocs.io/en/latest/developers/index.html\n- **User Documentation & Example Usage:** https://ersem.readthedocs.io/en/latest/\n- **Automated Tests:** Run via GitHub Actions https://github.com/pmlmodelling/ersem/actions\n- **Acknowledgements:** https://ersem.readthedocs.io/en/latest/acknowledgements.html\n- **License:** https://ersem.readthedocs.io/en/latest/license.html\n\n# Support\n\nWe strongly encourage everyone using the ERSEM code to register as a user by filling a\n[short registration form](https://forms.office.com/r/X0iXv8AvTC). \nWe\xe2\x80\x99d love to get a better understanding of who is using the ERSEM code and for what \napplications, as it will help us to provide the best support to the user community. \nIt will also allow you to receive information and news on the latest model developments.\n\n# How to cite\n\nTo refer to ERSEM in publications, please cite:\n\nButensch\xc3\xb6n, M., Clark, J., Aldridge, J.N., Allen, J.I., Artioli, Y.,\nBlackford, J., Bruggeman, J., Cazenave, P., Ciavatta, S., Kay, S., Lessin, G.,\nvan Leeuwen, S., van der Molen, J., de Mora, L., Polimene, L., Sailley, S.,\nStephens, N., Torres, R. (2016). ERSEM 15.06: a generic model for marine\nbiogeochemistry and the ecosystem dynamics of the lower trophic levels.\nGeoscientific Model Development, 9(4), 1293\xe2\x80\x931339.\ndoi: [10.5194/gmd-9-1293-2016](https://doi.org/10.5194/gmd-9-1293-2016).\n\nTo refer specifically to the ERSEM source code, you may use its Zenodo DOI:\n\n[![DOI](https://zenodo.org/badge/302390544.svg)](https://zenodo.org/badge/latestdoi/302390544)\n\n'",",https://doi.org/10.5194/gmd-9-1293-2016,https://zenodo.org/badge/latestdoi/302390544","2020/10/08, 15:51:14",1112,GPL-3.0,15,736,"2023/06/30, 12:17:56",47,35,67,11,117,3,1.3,0.4003623188405797,"2022/11/07, 18:15:16",22.11,0,11,false,,false,false,,,https://github.com/pmlmodelling,https://pml.ac.uk/science/Marine-Systems-Modelling,,,,https://avatars.githubusercontent.com/u/72567641?v=4,,, AIBECS.jl,A Julia package that provides ocean biogeochemistry modelers with an easy-to-use interface for creating and running models of the ocean system.,JuliaOcean,https://github.com/JuliaOcean/AIBECS.jl.git,github,"ocean,oceanography,ocean-circulation,modeling,model,models,global,optimization,inverse-model,marine,marine-tracer,tracer,transport,julia,awesome,biogeochemistry,biogeochemical,biogeochemical-model,pathways,fluxes",Ocean Carbon and Temperature,"2022/10/28, 14:28:44",36,0,5,true,Julia,,JuliaOcean,Julia,,"b'\n \n\n\n# AIBECS.jl\n\n*The ideal tool for exploring global marine biogeochemical cycles.*\n\n
\n\n\n\n\n**AIBECS** (for **A**lgebraic **I**mplicit **B**iogeochemical **E**lemental **C**ycling **S**ystem, pronounced like the cool [ibex](https://en.wikipedia.org/wiki/Ibex)) is a Julia package that provides ocean biogeochemistry modellers with an easy-to-use interface for creating and running models of the ocean system.\n\nAIBECS is a system because it allows you to choose some biogeochemical tracers, define their interactions, select an ocean circulation and *Voil\xc3\xa0!* \xe2\x80\x94 your model is ready to run.\n\n## Getting started\n\nIf you are new to AIBECS, head over to the [documentation](https://JuliaOcean.github.io/AIBECS.jl/stable/) and look for the tutorials.\n(You can also click on the big ""Documentation"" badge above.)\n\n## Concept\n\nThis package was developed to exploit linear-algebra tools and algorithms in Julia to efficiently simulate marine tracers.\nAIBECS represents global biogeochemical cycles with a discretized system of nonlinear ordinary differential equations that takes the generic form\n\n$$\\frac{\xe2\x88\x82\\boldsymbol{x}}{\xe2\x88\x82t} + \\mathbf{T} \\boldsymbol{x} = \\boldsymbol{G}(\\boldsymbol{x})$$\n\nwhere $\\boldsymbol{x}$ represents the model state variables, i.e., the marine tracer(s) concentration.\nFor a single tracer, $\\boldsymbol{x}$ can be interpreted as the 3D field of its concentration.\nIn AIBECS, $\\boldsymbol{x}$ is represented as a column vector (that\'s why it\'s **bold** and *italic*).\n\nThe operator $\\mathbf{T}$ is a spatial differential operator that represents the transport of tracers.\nFor example, for a single tracer transported by ocean circulation,\n\n$$\\mathbf{T} = \\nabla \\cdot(\\boldsymbol{u} - \\mathbf{K}\\nabla)$$\n\nrepresents the effects of advection and eddy-diffusion\n(where $\\boldsymbol{u}$ is the 3D vector of the marine currents and $\\mathbf{K}$ is a 3\xc3\x973 eddy-diffusivity matrix).\nThus, $\\mathbf{T}$ ""acts"" on $\\boldsymbol{x}$ such that $\\mathbf{T}\\boldsymbol{x}$ is the flux divergence of that tracer.\nIn AIBECS, $\\mathbf{T}$ is represented by a matrix (that\'s why it\'s **bold** and upright).\n\nLastly, the right-hand side, $\\boldsymbol{G}(\\boldsymbol{x})$, represents the local sources minus sinks of each tracer, which must be provided as functions of the tracer(s) $\\boldsymbol{x}$.\n\nTo simulate tracers using the AIBECS, you just need to define the transport operator $\\mathbf{T}$ and the net sources and sinks $\\boldsymbol{G}$.\nThat\'s pretty much the whole concept!\n\n## References\n\nIf you use this package, please cite it.\n\nIf you use data provided by this package (like the ocean circulation from the OCIM), please cite them as well.\n\nFor convenience, all the references are available in [BibTeX](https://en.wikipedia.org/wiki/BibTeX) format in the [CITATION.bib](./CITATION.bib) file.\n\nAlso, if you want to do research using the AIBECS, and you think I could help, do not hesitate to contact me directly (contacts on my [website](www.bpasquier.com)); I would be happy to contribute and collaborate!\n\n\n\nThe authors acknowledge funding from the Department of Energy grant DE-SC0016539 and from the National Science Foundation grant 1658380.\n'",",https://doi.org/10.21105/joss.03814,https://doi.org/10.5281/zenodo.2864051","2019/05/08, 05:03:31",1632,MIT,1,373,"2022/11/02, 21:55:38",30,35,65,2,357,0,0.0,0.04815864022662886,"2022/10/18, 22:20:00",v0.13.2,0,3,false,,false,false,,,https://github.com/JuliaOcean,,,,,https://avatars.githubusercontent.com/u/41747359?v=4,,, mocsy,Routines
to model ocean carbonate system thermodynamics.,jamesorr,https://github.com/jamesorr/mocsy.git,github,,Ocean Carbon and Temperature,"2020/01/23, 14:19:05",16,0,2,false,Fortran,,,"Fortran,Jupyter Notebook,C++,Python,Makefile,M4",,"b'mocsy\n=====\n\nRoutines to model ocean carbonate system thermodynamics\n\nSynopsis: mocsy is a Fortran 95 package designed to compute all\ncarbonate system variables from total dissolved inorganic carbon (DIC)\nand total alkalinity, particularly from models. It updates previous\nOCMIP code, avoids 3 common model approximations, and offers the\nbest-practice constants as well as more recent options. It agrees with\nCO2SYS within 0.005%.\n\nThe mocsy package is described by Orr and Epitalon (2015) and has been\ncompared to other packages that compute marine carbonate chemistry\n(Orr et al., 2015). More recently, new routines were added to\npropagate uncertainties and compute sensitivities of derived variables\nto input variables (Orr et al., 2018).\n\n**Documentation and Examples**\n\n* Documentation: http://ocmip5.ipsl.jussieu.fr/mocsy/\n* Example scripts: see *examples* directory\n* Jupyter notebooks (interactive examples): see *notebooks* directory\n\n**REFERENCES**\n\nOrr, J. C. and Epitalon, J.-M. (2015) Improved routines to model the\nocean carbonate system: mocsy 2.0, Geosci. Model Dev., 8, 485-499,\nhttps://doi.org/10.5194/gmd-8-485-2015 .\n\nOrr, J. C., J.-P. Gattuso, and J.-M. Epitalon (2015) Comparison of ten\npackages that compute ocean carbonate chemistry, Biogeosciences, 12,\n1483\xe2\x80\x931510, https://doi.org/10.5194/bg-12-1483-2015 .\n\nOrr, J.C., J.-M. Epitalon, A. G. Dickson, and J.-P. Gattuso (2018) Routine\nuncertainty propagation for the marine carbon dioxide system,\nMar. Chem., in press, https://doi.org/10.1016/j.marchem.2018.10.006 .\n\n'",",https://doi.org/10.5194/bg-12-1483-2015,https://doi.org/10.1016/j.marchem.2018.10.006","2014/01/21, 14:04:27",3564,MIT,0,157,"2017/06/02, 20:53:15",2,4,5,0,2336,0,0.0,0.0,,,0,1,false,,false,false,,,,,,,,,,, cbsyst,A Python module for calculating seawater carbon and boron chemistry.,oscarbranson,https://github.com/oscarbranson/cbsyst.git,github,"python,ocean,oceanography,carbon,boron,isotope",Ocean Carbon and Temperature,"2023/08/09, 10:19:36",24,5,4,true,Python,,,"Python,Makefile",,"b'
\n\n**A Python module for calculating seawater carbon and boron chemistry.** \n\nThis will be particularly useful for anyone thinking about oceans in the distant past, when Mg and Ca concentrations were different. I use [Mathis Hain\'s MyAMI model](http://www.mathis-hain.net/resources/Hain_et_al_2015_GBC.pdf) to adjust speciation constants for Mg and Ca concentration.\n\n***Tested** in the modern ocean against GLODAPv2 data (see below). Performs as well as Matlab CO2SYS.*\n\n## Work in Progress:\n- [ ] [Compare to CO2SYS](https://github.com/oscarbranson/cbsyst/issues/6), a la [Orr et al (2015)](http://www.biogeosciences.net/12/1483/2015/bg-12-1483-2015.pdf)?\n\nIf anyone wants to help with any of this, please do contribute!\nA full list of bite-sized tasks that need doing is available on the [Issues](https://github.com/oscarbranson/cbsyst/issues) page.\n\n## Acknowledgement\nThe development of `cbsyst` has been greatly aided by [CO2SYS](http://cdiac.ornl.gov/oceans/co2rprt.html), and the [Matlab conversion of CO2SYS](http://cdiac.ornl.gov/ftp/oceans/co2sys/).\nIn particular, these programs represent a gargantuan effort to find the most appropriate coefficient formulations and parameterisations from typo-prone literature.\nCO2SYS has also provided an invaluable benchmark throughout development.\n\n## Data Comparison\nI have used the [GLODAPv2 data set](cbsyst/test_data/GLODAP_data/Olsen_et_al-2016_GLODAPv2.pdf) to test how well `cbsyst` works with modern seawater.\n\n### Method:\nImport the entire GLODAPv2 data set, remove all data where `flag != 2` (2 = good data), and exclude all rows that don\'t have all of (salinity, temperature, pressure, tco2, talk, phosphate, silicate and phtsinsitutp) - i.e. salinity, temperature, pressure, nutrients and all three measured carbonate parameters.\nThe resulting dataset contains 79,896 bottle samples. \nThe code used to process the raw GLODAPv2 data is available [here](cbsyst/test_data/GLODAP_data/get_GLODAP_data.py).\n\nNext, calculate the carbonate system from sets of two of the measured carbonate parameters, and compare the calculated third parameter to the measured third parameter (i.e. calculate Alkalinity from pH and DIC, then compare calculated vs. measured Alkalinities). The code for making these comparison plots is [here](cbsyst/test_data/GLODAP_data/plot_GLODAPv2_comparison.py).\n\n### Results:\n**Calculated pH** (from DIC and Alkalinity) is offset from measured values by -0.00061 (-0.029/+0.029).\n![Calculated vs Measured pH](cbsyst/test_data/GLODAP_data/Figures/pH_comparison.png)\n\n**Calculated Alkalinity** (from pH and DIC) is offset from measured values by 0.23 (-12/+11) umol/kg.\n![Calculated vs Measured TA](cbsyst/test_data/GLODAP_data/Figures/TA_comparison.png)\n\n**Calculated DIC** (from pH and Alkalinity) is offset from measured values by -0.22 (-11/+11) umol/kg.\n![Calculated vs Measured DIC](cbsyst/test_data/GLODAP_data/Figures/DIC_comparison.png)\n\nReported statistics are median \xc2\xb195% confidence intervals extracted from the residuals (n = 79,896).\n\nData are identical, to within rounding errors, to values calculated by Matlab CO2SYS (v1.1).\n\n### Conclusions:\n`cbsyst` does a good job of fitting the GLODAPv2 dataset!\n\n## Technical Details\n### Constants\nConstants are calculated by an adaptation of [Mathis Hain\'s MyAMI model](http://www.mathis-hain.net/resources/Hain_et_al_2015_GBC.pdf). 
\nThe [original MyAMI code](https://github.com/MathisHain/MyAMI) is available on GitHub.\nA stripped-down version of MyAMI is [packaged with cbsyst](cbsyst/MyAMI_V2.py).\nIt has been modified to make it faster (by vectorising) and more \'Pythonic\'.\nAll the Matlab interface code has been removed.\n\nConstants not provided by MyAMI (KP1, KP2, KP3, KSi, KF) are formulated following [Dickson, Sabine & Christian\'s (2007) \'Guide to best practices for ocean CO2 measurements.\'](http://cdiac.ornl.gov/oceans/Handbook_2007.html).\n\nPressure corrections are applied to the calculated constants following Eqns. 38-40 of [Millero et al (2007)](cbsyst/docs/Millero_2007_ChemicalReview.pdf), using (typo-corrected) constants in their Table 5.\nAll constants are on the pH Total scale.\n\n### Calculations\nSpeciation calculations follow [Zeebe and Wolf-Gladrow (2001)](https://www.elsevier.com/books/co2-in-seawater-equilibrium-kinetics-isotopes/zeebe/978-0-444-50946-8).\nCarbon speciation calculations are described in Appendix B, except where Alkalinity is involved, in which case the formulations of [Ernie Lewis\' CO2SYS](http://cdiac.ornl.gov/oceans/co2rprt.html) are used.\nBoron speciation calculations are described in Eqns. 3.4.43 - 3.4.46.\n\nBoron isotopes are calculated in terms of fractional abundances instead of delta values, as outlined [here](cbsyst/docs/B_systematics.pdf).\nDelta values can be provided as an input, and are given as an output.\n\n\n# Installation\n\n**Requires Python 3.5+**. \nDoes *not* work in 2.7. Sorry.\n\n### PyPi\n```bash\npip install cbsyst\n```\n\n### Conda-Forge\n```bash\nconda install cbsyst -c conda-forge\n```\n\n## Example Usage\n\n```python\nimport cbsyst as cb\nimport numpy as np\n\n# Create pH master variable for demo\npH = np.linspace(7,11,100) # pH on Total scale\n\n# Example Usage\n# -------------\n# The following functions can be used to calculate the\n# speciation of C and B in seawater, and the isotope\n# fractionation of B, given minimal input parameters.\n#\n# See the docstring for each function for info on\n# required minimal parameters.\n\n# Carbon system only\nCsw = cb.Csys(pHtot=pH, DIC=2000.)\n\n# Boron system only\nBsw = cb.Bsys(pHtot=pH, BT=433., dBT=39.5)\n\n# Carbon and Boron systems\nCBsw = cb.CBsys(pHtot=pH, DIC=2000., BT=433., dBT=39.5)\n\n# NOTE:\n# At present, each function call can only be used to\n# calculate a single minimal-parameter combination -\n# i.e. 
you can\'t pass it multiple arrays of parameters\n# with different combinations of parameters, as in\n# the Matlab CO2SYS code.\n\n# Example Output\n# --------------\n# The functions return a Bunch (modified dict with \'.\' \n# attribute access) containing all system parameters\n# and constants.\n#\n# Output for a single input condition shown for clarity:\n\nout = cb.CBsys(pHtot=8.1, DIC=2000., BT=433., dBT=39.5)\nout\n\n>>> {\'ABO3\': array([ 0.80882931]),\n \'ABO4\': array([ 0.80463763]),\n \'ABT\': array([ 0.80781778]),\n \'BO3\': array([ 328.50895695]),\n \'BO4\': array([ 104.49104305]),\n \'BT\': array([ 433.]),\n \'CO2\': array([ 9.7861814]),\n \'CO3\': array([ 238.511253]),\n \'Ca\': array([ 0.0102821]),\n \'DIC\': array([ 2000.]),\n \'H\': array([ 7.94328235e-09]),\n \'HCO3\': array([ 1751.7025656]),\n \'Ks\': {\'K0\': array([ 0.02839188]),\n \'K1\': array([ 1.42182814e-06]),\n \'K2\': array([ 1.08155475e-09]),\n \'KB\': array([ 2.52657299e-09]),\n \'KS\': array([ 0.10030207]),\n \'KW\': array([ 6.06386369e-14]),\n \'KspA\': array([ 6.48175907e-07]),\n \'KspC\': array([ 4.27235093e-07])},\n \'Mg\': array([ 0.0528171]),\n \'S\': array([ 35.]),\n \'T\': array([ 25.]),\n \'TA\': array([ 2333.21612227]),\n \'alphaB\': array([ 1.02725]),\n \'dBO3\': array([ 46.30877684]),\n \'dBO4\': array([ 18.55320208]),\n \'dBT\': array([ 39.5]),\n \'deltas\': True,\n \'fCO2\': array([ 344.68238018]),\n \'pCO2\': array([ 345.78871573]),\n \'pHtot\': array([ 8.1]),\n \'pdict\': None}\n\n# All of the calculated output arrays will be the same length as the longest\n# input array.\n\n# Access individual parameters by:\nout.CO3\n\n>>> array([ 238.511253])\n\n# Output data for external use:\ndf = cb.data_out(out, \'example_export.csv\')\n\n# This returns a pandas.DataFrame object with all C and B parameters.\n# It also saves the data to the specified file. The extension of the\n# file determines the format it is saved in (see data_out docstring).\n\n```\n\n## Technical Note: What is a `Bunch`?\n\nFor code readability and convenience, I\'ve used Bunch objects instead of traditional dicts.\nA [Bunch](cbsyst/helpers.py#L6) is a modification of a dict, which allows attribute access via the dot (.) 
operator.\nApart from that, it works *exactly* like a normal dict (all the usual methods are available transparently).\n\n**Example:**\n```python\nfrom cbsyst.helpers import Bunch\n\n# Make a bunch\nbun = Bunch({\'a\': 1,\n \'b\': 2})\n\n# Access items of bunch...\n# as a dict:\nbun[\'a\']\n\n>>> 1\n\n# as a Bunch:\nbun.a\n\n>>> 1\n```'",",https://doi.org/10.5281/zenodo.1402261","2017/06/08, 04:16:07",2331,MIT,36,431,"2023/08/09, 10:19:37",11,12,24,2,77,2,1.3,0.05012531328320802,"2023/03/14, 09:43:14",0.4.7,0,5,false,,false,true,"narest-qa/repo55,lfcd2/QES_Lent_Practicals,Quantitative-Environmental-Science/OceanTools,oscarbranson/teaching,nicpittman/tropical_pacific_carbon_export",,,,,,,,,, Open Acidification Project,Apparatus to determine total alkalinity in sea water using an open-cell titration.,Open-Acidification,https://github.com/Open-Acidification/AlkalinityTitrator.git,github,"alkalinity-titrator,ocean-acidification,raspberry-pi",Ocean Carbon and Temperature,"2023/06/01, 03:14:42",6,0,2,true,Python,Open Acidification Project,Open-Acidification,"Python,C++,Shell",https://open-acidification.github.io/,"b'# Alkalinity Titrator Project\n\n\n[![All Contributors](https://img.shields.io/badge/all_contributors-8-orange.svg?style=flat-square)](#contributors-)\n\n\n## Project motivations\n\nAs CO2 levels increase, the ocean absorbs more CO2 and becomes more acidic. There currently exists a large deficit of data on how this affects wildlife. Alkalinity Titrators are needed for ocean acidification research\xe2\x80\x8b. Currently, available models are expensive ($10,000-$25,000)\xe2\x80\x8b. Models on the lower end of the price range are not automated and are therefore time intensive.\n\nThis project aims to make ocean acidification research more widely available by lowering the cost of alkalinity titrators.\n\nThe problems that the alkalinity-titrator seeks to fix are as follows:\n\n- Lower the cost of ocean science equipment by using inexpensive, widely-available parts\n- Automate the titration process, saving time and effort when determining total alkalinity\n\nThe titration process used in this project is based on SOP 3b from\n\n```Christian, James Robert, Andrew G. Dickson, and Christopher L. Sabine. Guide to Best Practices for Ocean CO2 Measurements. Sidney, B.C.: North Pacific Marine Science Organization, 2007.```\n\n## Current Development Note\n\nThe most recent development in this project is the implementation of a UI State Machine framework (see the titration/utils/UIState folder for the UI states implemented). While the UI State Machine framework has been fully implemented, the actual titration processes and routines have not been integrated with the UI State Machine (see GitHub Issues for further specifications).\n\n## Setup and Installation\n\n### Setting up the Raspberry Pi\n\nRefer to for instructions on setting up the Raspberry Pi (note: headless setup is not required if a keyboard and monitor are available). Raspbian Lite has everything needed, but the desktop version can be downloaded if working with a GUI is preferable.\n\n### Installing software\n\nRun standard updates on the pi:\n\n``` sh\nsudo apt-get update \nsudo apt-get upgrade\n```\n\nThis project utilizes SPI and I2C protocols, both of which often come disabled on the pi. 
To enable them, run:\n\n``` sh\nsudo raspi-config\n```\n\nand navigate to ""Interfacing Options""; enable both SPI and I2C.\n\nInstall git:\n\n``` sh\nsudo apt-get install git\n```\n\nClone the alkalinity titrator repository to the pi:\n\n``` sh\ngit clone https://github.com/Open-Acidification/alkalinity-titrator.git\n```\n\nRun the installation script:\n\n``` sh\nsudo ./install.sh\n```\n\n## User Instructions\n\n### Run on Device\n\nTo run (with the UI State Machine integrated):\n\n``` sh\n./run.sh\n```\n\n### Run in Local Environment\n\nTo run in a local environment with mocked devices (with the UI State Machine integrated):\n\n``` sh\n./run_mocked.sh\n```\n\n## Testing\n\nTo run the Pytest tests for the devices and UI states:\n\n``` sh\n./test.sh\n```\n\n## Pins\n\n### Temperature probe ([MAX31865 breakout board](https://learn.adafruit.com/adafruit-max31865-rtd-pt100-amplifier/python-circuitpython))\n\n- PIN 1 (3.3v) to sensor VIN\n- PIN 9 to sensor GND\n- PIN 19/BCM 10 to sensor SDI\n- PIN 21/BCM 21 to sensor SDO\n- PIN 23/BCM 23 to sensor CLK\n- PIN 29/BCM 5 to sensor CS (or use any other free GPIO pin)\n\n### pH probe ([ADS1115 analog converter](https://learn.adafruit.com/adafruit-4-channel-adc-breakouts/python-circuitpython))\n\n- PIN 17 (3.3v) to ADS1115 VDD - remember the maximum input voltage to any ADC channel cannot exceed this VDD 3V value!\n- PIN 6 to ADS1115 GND\n- PIN 5/BCM 3 SCL to ADS1115 SCL\n- PIN 3/BCM 2 to ADS1115 SDA\n\n## Libraries\n\n1. Circuit Python - used for communicating with the PT1000
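For reference, reading the PT1000 through the MAX31865 breakout wired as above might look like the CircuitPython snippet below. This is a hedged sketch based on Adafruit's documented `adafruit_max31865` API (linked in the Pins section), not code from this repository; the PT1000 values (`rtd_nominal=1000.0`, `ref_resistor=4300.0`) and the chip-select pin are assumptions taken from that guide and the wiring list.

```python
# Illustrative CircuitPython sketch (not part of this repository).
import board
import busio
import digitalio
import adafruit_max31865

spi = busio.SPI(board.SCK, MOSI=board.MOSI, MISO=board.MISO)
cs = digitalio.DigitalInOut(board.D5)  # PIN 29/BCM 5 per the wiring list

# PT1000 nominal resistance and reference resistor assumed per the
# Adafruit MAX31865 guide linked above.
sensor = adafruit_max31865.MAX31865(spi, cs, rtd_nominal=1000.0,
                                    ref_resistor=4300.0)

print("Temperature: {:.2f} C".format(sensor.temperature))
```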
\n\n## Contributors \xe2\x9c\xa8\n\nThanks goes to these wonderful people ([emoji key](https://allcontributors.org/docs/en/emoji-key)):\n\n- Kaden Sukachevin: \xf0\x9f\x92\xbb \xf0\x9f\x93\x96 \xf0\x9f\x90\x9b\n- Preston Carman: \xf0\x9f\x92\xbb \xf0\x9f\x90\x9b\n- Konrad McClure: \xf0\x9f\x92\xbb\n- Noah-Griffith: \xf0\x9f\x92\xbb\n- Barun Debnath: \xf0\x9f\x92\xbb\n- Kieran Sukachevin: \xe2\x9a\xa0\xef\xb8\x8f \xf0\x9f\x92\xbb\n- Josh Soref: \xf0\x9f\x92\xbb\n- TaylorSmith28: \xe2\x9a\xa0\xef\xb8\x8f \xf0\x9f\x92\xbb
\n\nThis project follows the [all-contributors](https://github.com/all-contributors/all-contributors) specification. Contributions of any kind welcome!\n'",,"2020/01/23, 08:20:38",1371,MIT,35,181,"2023/06/16, 01:04:36",20,76,107,69,132,0,3.5,0.3975903614457831,"2023/06/01, 15:15:39",v23.06.01,2,8,false,,true,true,,,https://github.com/Open-Acidification,https://open-acidification.github.io/,,,,https://avatars.githubusercontent.com/u/57500501?v=4,,, m_mhw,Detect and analyse spatial marine heatwaves.,ZijieZhaoMMHW,https://github.com/ZijieZhaoMMHW/m_mhw1.0.git,github,"climate-science,data-analysis,heatwaves,marine-heatwaves,matlab",Ocean Carbon and Temperature,"2023/10/04, 11:08:10",32,0,15,true,MATLAB,,,"MATLAB,M,Python",,"b'm_mhw\n==================================================================\n[![DOI](http://joss.theoj.org/papers/10.21105/joss.01124/status.svg)](https://doi.org/10.21105/joss.01124)\n\nThe **`m_mhw`** toolbox is a MATLAB-based tool designed to detect and analyse spatial marine heatwaves (MHWs). Previously, approaches to detecting and analysing MHW time series have been applied in python (https://github.com/ecjoliver/marineHeatWaves, written by Eric C. J. Oliver) and R (Schlegel and Smit, 2018). \n\nThe **`m_mhw`** toolbox is designed 1) to determine spatial MHWs according to the definition provided in Hobday et al. (2016) and marine cold spells (MCSs) introduced in Schlegel et al. (2017); 2) to visualize MHW/MCS events in a particular location during a period; 3) to explore the mean states and trends of MHW metrics, such as was done in Oliver et al. (2018). \n\nThe detection of MHW/MCS in each grid of data is done by simple loops instead of parallel computation. This is due to the fact that the number of detected MHW/MCS events is unknown in advance, hence the resultant output MHW matrix has to be changed in each step. This is against the principle of parallel computation, which requires the independence of each step of calculation. Although we could still achieve parallel computation by detecting and storing each MHW independently, it means that we would need to add another loop to combine them into a new matrix. If we did so, the resultant code would only be a little faster than the original code using simple loops. Additionally, introducing parallel computation could also increase the complexity of this toolbox, which could be hard for new MATLAB users to handle.\n\nInstallation\n-------------\n\nThis toolbox can be installed by downloading this repository and adding its path in your MATLAB.\n\nRequirements\n-------------\n\nThe MATLAB Statistics and Machine Learning Toolbox. [m_map](https://www.eoas.ubc.ca/~rich/map.html) is recommended for running the examples.\n\nFunctions\n-------------\n
| Function | Description |
|----------|-------------|
| `detect()` | The main function, aiming to detect spatial MHW/MCS events following the definition given by Hobday et al. (2016). |
| `event_line()` | The function to create a line plot of MHW/MCS in a particular grid during a particular period. |
| `mean_and_trend()` | The function to calculate spatial mean states and annual trends of MHW/MCS properties. |
| `composites()` | The function to calculate composites for a particular dataset across a particular index. |
\n\nAdditionally, this toolbox also provides sea surface temperature off eastern Tasmania [147-155E, 45-37S] during 1982-2015, extracted from NOAA OI SST V2 (Reynolds et al., 2007).\n\nInputs and outputs\n--------------------\n\nThe core function `detect` needs some inputs:\n
| Variable | Description |
|----------|-------------|
| `temp` | A 3D matrix containing temperature data. |
| `time` | A numeric vector indicating the time corresponding to `temp` in the format of `datenum()`. |
| `cli_start` | A numeric value indicating the starting date for calculating climatology in the format of `datenum()`. |
| `cli_end` | A numeric value indicating the ending date for calculating climatology in the format of `datenum()`. |
| `mhw_start` | A numeric value indicating the starting date for detection of MHW in the format of `datenum()`. |
| `mhw_end` | A numeric value indicating the ending date for detection of MHW in the format of `datenum()`. |
\n\nThe core function `detect` returns four outputs: `MHW`, `mclim`, `m90` and `mhw_ts`. Their descriptions are summarized in the following table.\n
| Variable | Description |
|----------|-------------|
| `MHW` | A table containing all detected MHW/MCS events, where every row corresponds to a particular event and every column indicates a metric or property. |
| `mclim` | A 3D numeric matrix of size (x,y,366), containing climatologies in each grid for every Julian day. |
| `m90` | A 3D numeric matrix of size (x,y,366), containing thresholds in each grid for every Julian day. |
| `mhw_ts` | A 3D numeric matrix of size (x,y,(datenum(MHW_end)-datenum(MHW_start)+1)), containing daily MHW/MCS intensity. 0 in this variable indicates that the corresponding day is not in a MHW/MCS event and NaN indicates a missing value or land. |
\n\nThe major output `MHW` contains all detected MHW/MCS events, characterized by 9 different properties, including:\n
| Property | Description |
|----------|-------------|
| `mhw_onset` | A numeric vector indicating the onset date (YYYYMMDD) of each event. |
| `mhw_end` | Similar to `mhw_onset`, but indicating the end date (YYYYMMDD). |
| `mhw_dur` | A numeric vector indicating the duration (days) of each event. |
| `int_max` | A numeric vector indicating the maximum intensity of each event in units of deg. C. |
| `int_mean` | A numeric vector indicating the mean intensity of each event in units of deg. C. |
| `int_var` | A numeric vector indicating the variance of intensity of each event. |
| `int_cum` | A numeric vector indicating the cumulative intensity of each event in units of deg. C x days. |
| `xloc` | A numeric vector indicating the location of each event in the x-dimension of the temperature data. |
| `yloc` | A numeric vector indicating the location of each event in the y-dimension of the temperature data. |
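The detection logic behind these inputs and outputs (a seasonally varying 90th-percentile threshold and a minimum event duration, per Hobday et al., 2016) can be illustrated in a language-neutral way. `m_mhw` itself is MATLAB; the following is only a minimal Python/NumPy sketch of the idea for a single grid point, with illustrative names and a simplified climatology window, not code from the toolbox.

```python
import numpy as np

def detect_mhw_1d(doy, temp, pctile=90, min_duration=5):
    """Toy Hobday et al. (2016)-style MHW detection for one grid point.

    doy  : integer day-of-year (1..366) of each daily observation
    temp : temperature time series, same length as doy
    Returns a list of (start_index, end_index) event spans.
    """
    # Seasonally varying threshold: the chosen percentile of all values
    # whose day-of-year falls within +/- 5 days (the real toolbox also
    # smooths the climatology; that step is omitted here for brevity).
    thresh = np.empty(len(temp), dtype=float)
    for d in np.unique(doy):
        window = np.isin(doy, (np.arange(d - 5, d + 6) - 1) % 366 + 1)
        thresh[doy == d] = np.percentile(temp[window], pctile)

    # An MHW is a run of at least `min_duration` consecutive days above
    # the threshold; a trailing False closes a run still open at the end.
    exceed = np.append(temp > thresh, False)
    events, start = [], None
    for i, above in enumerate(exceed):
        if above and start is None:
            start = i
        elif not above and start is not None:
            if i - start >= min_duration:
                events.append((start, i - 1))
            start = None
    return events
```

The real `detect` additionally returns the gridded climatology (`mclim`), threshold (`m90`) and daily intensity (`mhw_ts`) fields, and records the event properties listed in the `MHW` table above.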
\n\nFor information on the other functions, please see the `help` text in MATLAB. For practical tutorials and examples, please see the following contents.\n\nExample\n----------\n\nWe provide [examples](https://github.com/ZijieZhaoMMHW/m_mhw1.0/tree/master/examples) of how to use the functions in **`m_mhw`** and how to apply them to real-world data. \n\nCurrent examples include:\n\n[An example about how to apply m_mhw to real-world data](https://github.com/ZijieZhaoMMHW/m_mhw1.0/blob/master/examples/an_example.md) [(Codes)](https://github.com/ZijieZhaoMMHW/m_mhw1.0/blob/master/examples/an_example.m)\n\n[Analysing seasonality and monthly variability of MHWs](https://github.com/ZijieZhaoMMHW/m_mhw1.0/blob/master/examples/seasonality.md) [(Codes)](https://github.com/ZijieZhaoMMHW/m_mhw1.0/blob/master/examples/seasonality.m)\n\n[EOF analysis on annual MHW days](https://github.com/ZijieZhaoMMHW/m_mhw1.0/blob/master/examples/mhweof.md) [(Codes)](https://github.com/ZijieZhaoMMHW/m_mhw1.0/blob/master/examples/mhweof.m)\n\n[MHW Category Analysis](https://github.com/ZijieZhaoMMHW/m_mhw1.0/blob/master/examples/category_analysis.md) [(Codes)](https://github.com/ZijieZhaoMMHW/m_mhw1.0/blob/master/examples/category_analysis.m)\n\n[EOF analysis on annual MHW cumulative intensity](https://github.com/ZijieZhaoMMHW/m_mhw1.0/blob/master/examples/mhweof_int.md) [(Codes)](https://github.com/ZijieZhaoMMHW/m_mhw1.0/blob/master/examples/mhweof_int.m)\n\nIssues\n--------------------\n\nThe results from this toolbox may be slightly different from the outputs of the Python and R modules. This is due to the fact that MATLAB follows different rules to calculate percentile thresholds. The number of detected events from this toolbox may be slightly lower than that from Python and R. Please see a [comparison](https://github.com/ZijieZhaoMMHW/m_mhw1.0/blob/master/testing/compare_M_R.md). If you would like to get the same outputs as python, please set the optional input `\'percentile\'` as `\'python\'` (default is `\'matlab\'`).\n\nContributing to **`m_mhw`**\n----------\n\nTo contribute to the package please follow the guidelines [here](https://github.com/ZijieZhaoMMHW/m_mhw1.0/blob/master/docs/Contributing_to_mmhw.md).\n\nPlease use this [link](https://github.com/ZijieZhaoMMHW/m_mhw1.0/issues) to report any bugs found.\n\nReferences\n----------\n\nHobday, A.J. et al. (2016). A hierarchical approach to defining marine heatwaves, Progress in Oceanography, 141, pp.\xc2\xa0227-238.\n\nSchlegel, R. W., Oliver, E. C. J., Wernberg, T. W., Smit, A. J., 2017. Nearshore and offshore co-occurrences of marine heatwaves and cold-spells. Progress in Oceanography, 151, pp.\xc2\xa0189-205.\n\nSchlegel, R. W. and Smit, A. J, 2018. heatwaveR: A central algorithm for the detection of heatwaves and cold-spells. The Journal of Open Source Software, 3, p.821.\n\nOliver, E.C., Lago, V., Hobday, A.J., Holbrook, N.J., Ling, S.D. and Mundy, C.N., 2018. Marine heatwaves off eastern Tasmania: Trends, interannual variability, and predictability. Progress in Oceanography, 161, pp.116-130.\n\nReynolds, Richard W., Thomas M. Smith, Chunying Liu, Dudley B. Chelton, Kenneth S. Casey, Michael G. Schlax, 2007: Daily High-Resolution-Blended Analyses for Sea Surface Temperature. J. Climate, 20, 5473-5496. 
\n\nContact\n-------\n\nZijie Zhao\n\nSchool of Earth Science, The University of Melbourne\n\nParkville VIC 3010, Melbourne, Australia\n\nE-mail: \n\nMaxime Marin\n\nCSIRO Oceans & Atmosphere, Indian Ocean Marine Research Centre\n\nCrawley 6009, Western Australia, Australia\n\nE-mail: \n\n'",",https://doi.org/10.21105/joss.01124","2018/11/07, 23:08:47",1813,GPL-3.0,36,122,"2020/11/05, 05:11:45",11,0,8,0,1085,5,0,0.0,,,0,1,false,,false,false,,,,,,,,,,, AutoQC,A testing suite for automatic quality control checks of subsurface ocean temperature observations.,IQuOD,https://github.com/IQuOD/AutoQC.git,github,,Ocean Carbon and Temperature,"2022/07/17, 20:30:44",27,0,5,false,Python,International Quality-controlled Ocean Database,IQuOD,"Python,Shell",,"b""AutoQC\n======\n\n[![Build Status](https://travis-ci.org/IQuOD/AutoQC.svg?branch=master)](https://travis-ci.org/IQuOD/AutoQC)\n\n## Introduction\n\nRecent studies suggest that changes to global climate, such as have been seen at the Earth's land and ocean surface, are also making their way into the deep ocean, which is the largest active storage system for heat and carbon available on the timescale of a human lifetime. Historical measurements of subsurface ocean temperature are essential to the scientific research investigating the changes in the amount of heat stored in the ocean and also to other climate research activities such as combining observations with numerical models to provide estimates of the global ocean's and Earth's climate state in the past and predictions for the future. Unfortunately, as with all observations, these measurements contain errors and biases that must be identified to prevent a negative impact on the applications and investigations that rely on them. Various groups from around the world have developed quality control tests to perform this important task. However, this has led to duplication of effort, code that is not easily available to other researchers and the introduction of climate model differences solely due to the varying performance of these software systems whose nuances relative to one another are poorly known.\n\nRecently, an international team of researchers has decided to work together to break down the barriers between the various groups and countries through the formation of the IQuOD (International Quality Controlled Dataset) initiative. One of the key aims is to intercompare the performance of the various automatic quality control tests that are presently being run to determine a best performing set. This work has started. However, it currently involves individuals running test datasets through their own systems and is being confounded by complications associated with the differences in the file formats and systems that are in use in the various labs and countries.\n\nThe IQuOD proposal is to set up an open quality control benchmarking system. Work will begin by implementing a battery of simple tests to run on some test data, and producing summary statistics and visualizations of the results. 
Later goals include helping researchers either wrap their existing C, Fortran and MATLAB test functions in Python for use in this test suite, or re-implement those tests in native Python.\n\n## Dependencies & Setup:\n\n### Local Install\n\n **Tested on Ubuntu 16.04**\n\n To clone this project and set it up, make sure `git` is installed, then:\n\n ```\n $ git clone https://github.com/IQuOD/AutoQC\n $ cd AutoQC\n $ source install.sh\n ```\n\n### Containerized Install\n\n To run AutoQC in a containerized environment, make sure `docker` is installed, then:\n\n ```\n $ docker image run -it -v /my/data/directory:/rawdata iquod/autoqc:ubuntu-16.04 bash\n ```\n\n Anything in `/my/data/directory` on your machine will be available at `/rawdata` inside the container, and vice versa. Use this to add raw WOD-ASCII data to your container, and add multiple `-v origin:destination` paths to include multiple directories in the same way.\n\n You may also want to `git pull origin master` inside the `/AutoQC` directory inside your container, to fetch the latest version of the project.\n\n## Usage\n\nAutoQC runs in three steps: database construction, qc running, and result summarization.\n\n### Database Construction\n\n```\npython build-db.py filename tablename\n```\n\nwhere `filename` is the name of a WOD-ASCII file to read profiles from, and `tablename` is the name of a postgres table to write the results to; `tablename` will be created if it doesn't\nexist, or appended to if it does. `tablename` will have the following columns:\n\ncolumn name | description\n------------|-----------\n`raw` | the raw WOD-ASCII text originally found in the input file\n`truth` | whether any temperature qc levels were flagged at 3 or greater\n`uid` | unique profile serial number\n`year` | timestamp year\n`month` | timestamp month, integers [1,12]\n`day` | timestamp day, integers [1,31]\n`time` | timestamp walltime, real [0,24)\n`lat` | profile latitude\n`long` | profile longitude\n`cruise` | cruise id\n`probe` | probe index, per WOD specifications\n\nAdditionally, there is a column in the table for the qc results of every test found in the `/qctests` directory; these columns are filled in in the next step.\n\n### QC Execution\n\n```\npython AutoQC.py tablename nProcessors\n```\n\nwhere `tablename` is the postgres table to pull profiles from (probably the same as `tablename` in the last step), and `nProcessors` is how many processors you'd like to parallelize over.\n\n### Result Summary\n\n```\npython summarize-results.py tablename\n```\n\nwhere `tablename` is the postgres table used in the previous steps. A summary of true flags, true passes, false positives and false negatives is generated for each test.\n\n\n## Testing\n\n### Testing Data\nEach quality control test must be written as its own file in `/qctests`, of the form `def test(p, parameters)`, where `p` is a profile object; each test returns a bool, where `True` indicates the test has *failed*.\n`parameters` is a dictionary for conveniently persisting *static* parameters and sharing them between threads; if your test has a great deal of parameters to load before it runs, include alongside its definition a `loadParameters(dict)` method, which writes those\nparameters to keys of your choosing on the dictionary passed in as an argument to `loadParameters`. That dictionary will subsequently be passed into every qc test as the `parameters` argument. 
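To make that interface concrete, a hypothetical `/qctests` module might look like the sketch below. The file name, parameter key, thresholds, and the `p.t()` accessor for per-level temperatures (modelled on the wodpy profile class linked under Profile Objects Specification) are illustrative assumptions, not code from this repository.

```python
# qctests/EXAMPLE_range_check.py -- hypothetical sketch, not part of AutoQC.
import numpy as np

def loadParameters(parameterStore):
    # Persist static parameters once; the framework hands this dict to
    # every test() call as the `parameters` argument.
    parameterStore['EXAMPLE_range_check'] = {'t_min': -2.5, 't_max': 40.0}

def test(p, parameters):
    # p is a profile object; return True if this profile FAILS the check.
    bounds = parameters['EXAMPLE_range_check']
    t = np.asarray(p.t())  # per-level temperatures (wodpy-style accessor)
    return bool(np.any((t < bounds['t_min']) | (t > bounds['t_max'])))
```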
Calling this `loadParameters` function is done automatically by the qc framework;\nit is enough for you to just write it, and the parameters you want will be available in your qc test on the keys you defined on the `parameters` object.\n\n### Testing Code\nTo run the code tests:\n\n```\npip install nose\nnosetests tests/*.py\n```\n\n## Profile Objects Specification\nSee [wodpy package](https://github.com/IQuOD/wodpy) for more information on the WodProfile class, a decoding helper for the WOD ASCII format.\n\n## Contributing\nQuality control checks waiting to be implemented are listed in the Issues. If you would like to work on coding up a check, please assign yourself to the issue to avoid others duplicating the effort.\nIf you have an idea for a new QC check, please open an issue and let us know, so we can help get you started on the right track.\n""",,"2014/07/10, 11:00:05",3394,MIT,0,926,"2022/07/17, 20:30:44",18,203,262,0,465,1,0.9,0.3517441860465116,"2022/01/09, 22:12:37",publication-2022,0,6,false,,false,false,,,https://github.com/IQuOD,,,,,https://avatars.githubusercontent.com/u/8112559?v=4,,, marineHeatWaves,A module for Python which implements the Marine Heatwave definition of Hobday et al. (2016).,ecjoliver,https://github.com/ecjoliver/marineHeatWaves.git,github,,Ocean Carbon and Temperature,"2022/05/19, 18:17:09",105,0,28,false,Python,,,Python,,"b'# Marine Heatwaves detection code\n\nmarineHeatWaves is a module for python which implements the Marine Heatwave (MHW) definition of Hobday et al. (2016). A version written in R is also [available](https://robwschlegel.github.io/heatwaveR/index.html).\n\n# Contents\n\n|File |Description|\n|---------------------|-----------|\n|CHANGES.txt |A list of software versions and changes|\n|docs/ |Documentation folder|\n|LICENSE.txt |Software license information|\n|marineHeatWaves.py |marineHeatWaves module|\n|README.md |This file|\n|setup.py |Installation script (see below)|\n\n# Installation\n\nThis module can be installed in one of two ways:\n\n1. Standard python install. On Linux/UNIX or OS X run the following command in the terminal: \n ```\n python setup.py install\n ``` \n or on Windows run this at the command prompt (not tested) \n ```\n setup.py install\n ```\n2. Alternatively, just copy marineHeatWaves.py to your working directory or any other directory from which Python can import modules.\n\nPrerequisite Python modules include numpy, scipy, and datetime.\n\n# Documentation and Usage\n\nInside the documentation folder are the following helpful files and scripts:\n\n|File |Description|\n|--------------------------|-----------|\n|marineHeatWaves_manual.htm|HTML file of IPython notebook outlining use of marineHeatWaves code to detect the ""big three"" historical marine heatwaves. Original data files (NOAA OI SST hi-res) not supplied due to copyright.|\n|example_synthetic.ipynb |IPython notebook outlining use of marineHeatWaves code to detect events from a synthetic time series. This notebook can be run by the user as it relies only on internally-generated synthetic temperature data.|\n|example_synthetic.html |Static HTML version of example_synthetic.ipynb.|\n|mhw_stats.py |Script with some examples of how to output plots, stats, and data files from marineHeatWaves detection code. Requires a subfolder to be created with the name \'mhw_stats\', to which all files are output.|\n\n# References\n\nHobday, A.J. et al. (2016), A hierarchical approach to defining marine heatwaves, Progress in Oceanography, 141, pp. 
227-238, doi: 10.1016/j.pocean.2015.12.014 [pdf](http://passage.phys.ocean.dal.ca/~olivere/docs/Hobdayetal_2016_PO_HierarchMHWDefn.pdf)\n\n# Acknowledgements\n\nThe code was written by Eric C. J. Oliver.\n\nContributors to the Marine Heatwaves definition and its numerical implementation include Alistair J. Hobday, Lisa V. Alexander, Sarah E. Perkins, Dan A. Smale, Sandra C. Straub, Jessica Benthuysen, Michael T. Burrows, Markus G. Donat, Ming Feng, Neil J. Holbrook, Pippa J. Moore, Hillary A. Scannell, Alex Sen Gupta, and Thomas Wernberg.\n\n# Contact\n\nEric C. J. Oliver \nDepartment of Oceanography \nDalhousie University \nHalifax, Nova Scotia, Canada \nt: (61) 902 494-2505 \ne: eric.oliver@dal.ca \nw: http://ecjoliver.weebly.com \nw: https://github.com/ecjoliver \n\n'",,"2015/04/28, 07:33:04",3102,CUSTOM,0,48,"2022/07/17, 20:30:44",6,0,0,0,465,0,0,0.0,"2016/03/16, 06:12:02",v0.16,0,1,false,,false,false,,,,,,,,,,, heatwaveR,Contains the original functions from the RmarineHeatWaves package that calculate and display marine heatwaves according to the definition of Hobday et al. (2016).,robwschlegel,https://github.com/robwschlegel/heatwaveR.git,github,,Ocean Carbon and Temperature,"2023/10/26, 02:48:01",40,0,6,true,R,,,"R,JavaScript,TeX,C++,HTML,CSS",https://robwschlegel.github.io/heatwaveR/,"b'# heatwaveR \n\n[![CRAN_Status_Badge](http://www.r-pkg.org/badges/version/heatwaveR)](https://cran.r-project.org/package=heatwaveR)\n[![R-CMD-check](https://github.com/robwschlegel/heatwaveR/workflows/R-CMD-check/badge.svg)](https://github.com/robwschlegel/heatwaveR/actions)\n[![Codecov test\ncoverage](https://codecov.io/gh/robwschlegel/heatwaveR/branch/master/graph/badge.svg)](https://app.codecov.io/gh/robwschlegel/heatwaveR?branch=master)\n[![JOSS](http://joss.theoj.org/papers/10.21105/joss.00821/status.svg)](https://doi.org/10.21105/joss.00821)\n[![DOI](https://zenodo.org/badge/DOI/10.5281/zenodo.1324308.svg)](https://doi.org/10.5281/zenodo.1324308)\n[![Downloads](https://cranlogs.r-pkg.org/badges/grand-total/heatwaveR)](https://cran.r-project.org/package=heatwaveR)\n\nThe **`heatwaveR`** package is a project-wide update to the\n[**`RmarineHeatWaves`**](https://github.com/ajsmit/RmarineHeatWaves)\npackage, which is itself a translation of the original [Python\ncode](https://github.com/ecjoliver/marineHeatWaves) written by Eric C.\nJ. Oliver. The **`heatwaveR`** package also uses the same naming\nconventions for objects, columns, and arguments as the Python code.\n\nThe **`heatwaveR`** R package contains the original functions from the\n**`RmarineHeatWaves`** package that calculate and display marine\nheatwaves (MHWs) according to the definition of Hobday et al.\xc2\xa0(2016) as\nwell as calculating and visualising marine cold-spells (MCSs) as first\nintroduced in Schlegel et al.\xc2\xa0(2017a). It also contains the\nfunctionality to calculate the categories of MHWs as outlined in Hobday\net al.\xc2\xa0(2018).\n\nThis package does what **`RmarineHeatWaves`** does, but faster. The\nentire package has been deconstructed and modularised, and we are\ncontinuing to implement slow portions of the code in C++. This has\nalleviated the bottlenecks that slowed down the climatology creation\nportions of the code as well as generally creating an overall increase\nin the speed of the calculations. Currently the R code runs about as\nfast as the original python functions, at least in as far as applying it\nto single time series of temperatures. 
Readers familiar with both\nlanguages will know about the ongoing debate around the relative speed\nof the two languages. In our experience, R can be as fast as Python,\nprovided that attention is paid to finding ways to reduce the\ncomputational inefficiencies that stem from i) the liberal use of\ncomplex and inefficient non-atomic data structures, such as data frames;\nii) the reliance on non-vectorised calculations such as loops; and iii)\nlazy (but convenient) coding that comes from drawing too heavily on the\n`tidyverse` suite of packages. We will continue to ensure that\n**`heatwaveR`** becomes more and more efficient so that it can be\napplied to large gridded data products with ease. To that end, the\nextension package\n[**`heatwave3`**](https://robwschlegel.github.io/heatwave3/index.html)\nhas been developed. This helps the user to apply the code from\n**`heatwaveR`** directly onto their NetCDF and other 3D gridded data\nfiles.\n\n**`heatwaveR`** was also developed and released in order to better\naccommodate the inclusion of the definitions of atmospheric heatwaves in\naddition to MHWs. Additionally, **`heatwaveR`** also provides the first\nimplementation of a definition for a \xe2\x80\x98compound heatwave\xe2\x80\x99. There are\ncurrently multiple different definitions for this type of event, each\nof which has arguments provided for it within the `ts2clm()` and\n`detect_event()` functions.\n\nThis package may be installed from CRAN by typing the following command\ninto the console:\n\n`install.packages(""heatwaveR"")`\n\nOr the development version may be installed from GitHub with:\n\n`devtools::install_github(""robwschlegel/heatwaveR"")`\n\n## The functions\n\n| Function | Description |\n|-------------------|------------------|\n| `ts2clm()` | Constructs seasonal and threshold climatologies as per the definition of Hobday et al.\xc2\xa0(2016). |\n| `detect_event()` | The main function which detects the events as per the definition of Hobday et al.\xc2\xa0(2016). |\n| `block_average()` | Calculates annual means for event metrics. |\n| `category()` | Applies event categories to the output of `detect_event()` based on Hobday et al.\xc2\xa0(2018). |\n| `exceedance()` | A function similar to `detect_event()` but that detects consecutive days above/below a given static threshold. |\n| `event_line()` | Creates a time series line graph of the heatwave or cold-spell results from `detect_event()`. |\n| `lolli_plot()` | Creates a lolliplot time series of a selected event metric from the results generated by `detect_event()`. |\n| `geom_flame()` | Creates flame polygons of heatwaves or cold-spells from a time series. |\n| `geom_lolli()` | Creates lolliplots from a time series of a selected event metric. |\n\nThe package also provides data of observed SST records for three\nhistorical MHWs: the 2011 Western Australia event, the 2012 Northwest\nAtlantic event, and the 2003 Mediterranean event.\n\n## The heatwave metrics\n\nThe `detect_event()` function will return a list of two tibbles (see the\n**`tidyverse`**), `climatology` and `event`, which are the time series\nclimatology and MHW (or MCS) events, respectively. The climatology\ncontains the full time series of daily temperatures, as well as the\nseasonal climatology, the threshold and various aspects of the events\nthat were detected. 
The software was designed for detecting extreme\nthermal events, and the units specified below reflect that intended\npurpose. However, various other kinds of extreme events (e.g.\xc2\xa0rainfall)\nmay be detected according to the \xe2\x80\x98heatwave\xe2\x80\x99 specifications, and if that\nis the case, the appropriate `minDuration` etc. and units of measurement\nneed to be determined by the user.\n\n| Climatology metric | Description |\n|---------------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|\n| `doy` | Julian day (day-of-year). For non-leap years it runs 1\xe2\x80\xa659 and 61\xe2\x80\xa6366, while leap years run 1\xe2\x80\xa6366. This column will be named differently if another name was specified to the `doy` argument. |\n| `t` | The date of the temperature measurement. This column will be named differently if another name was specified to the `x` argument. |\n| `temp` | If the software was used for the purpose for which it was designed, seawater temperature (deg. C) on the specified date will be returned. This column will of course be named differently if another kind of measurement was specified to the `y` argument. |\n| `seas` | Climatological seasonal cycle (deg. C). |\n| `thresh` | Seasonally varying threshold (e.g., 90th percentile) (deg. C). |\n| `var` | Variance (standard deviation) per `doy` of `temp` (deg. C). (not returned by default as of v0.3.5) |\n| `threshCriterion` | Boolean indicating if `temp` exceeds `thresh`. |\n| `durationCriterion` | Boolean indicating whether periods of consecutive `threshCriterion` are \\>= `minDuration`. |\n| `event` | Boolean indicating if all criteria that define a MHW or MCS are met. |\n| `event_no` | A sequential number indicating the ID and order of occurrence of the MHWs or MCSs. |\n\nThe events are summarised using a range of event metrics:\n\n| Event metric | Description |\n|------------------------|-----------------------------------------------------------------------------------------------------------------------------------------------------|\n| `event_no` | A sequential number indicating the ID and order of the events. This allows one to match/join results between the `climatology` and `event` outputs. |\n| `index_start` | Row number from the given time series where the event starts. |\n| `index_peak` | Row number from the given time series where the event peaks. |\n| `index_end` | Row number from the given time series where the event ends. |\n| `duration` | Duration of event (days). |\n| `date_start` | Start date of event (date). |\n| `date_peak` | Date of event peak (date). |\n| `date_end` | End date of event (date). |\n| `intensity_mean` | Mean intensity (deg. C). |\n| `intensity_max` | Maximum (peak) intensity (deg. C). |\n| `intensity_var` | Intensity variability (standard deviation) (deg. C). |\n| `intensity_cumulative` | Cumulative intensity (deg. C x days). |\n| `rate_onset` | Onset rate of event (deg. C / day). |\n| `rate_decline` | Decline rate of event (deg. C / day). 
|\n\n`intensity_max_relThresh`, `intensity_mean_relThresh`,\n`intensity_var_relThresh`, and `intensity_cumulative_relThresh` are as\nabove except relative to the threshold (e.g., 90th percentile) rather\nthan the seasonal climatology.\n\n`intensity_max_abs`, `intensity_mean_abs`, `intensity_var_abs`, and\n`intensity_cumulative_abs` are as above except as absolute magnitudes\nrather than relative to the seasonal climatology or threshold.\n\nNote that `rate_onset` and `rate_decline` will return `NA` when the\nevent begins/ends on the first/last day of the time series. This may be\nparticularly evident when the function is applied to large gridded data\nsets. Although the other metrics do not contain any errors and provide\nsensible values, please take this into account in the interpretation of\nthe output. It must also be noted that events whose `date_peak` occurs on\nthe same day as the `date_start` or `date_end` of the event will return\nsmall negative values. This tends to only occur in areas with persistent\nice cover. The authors are currently thinking about how best to handle\nthis exception.\n\n## The Vignettes\n\nFor detailed explanations and walkthroughs on the use of the\n**`heatwaveR`** package please click on the Vignettes tab in the toolbar\nabove, or follow the links below:\n\n- For a basic introduction to the [detection and\n visualisation](https://robwschlegel.github.io/heatwaveR/articles/detection_and_visualisation.html)\n of events.\n- For an explanation on the use of the\n [exceedance](https://robwschlegel.github.io/heatwaveR/articles/exceedance.html)\n function.\n- For a walkthrough on the calculation and visualisation of [event\n categories](https://robwschlegel.github.io/heatwaveR/articles/event_categories.html).\n- For examples on the calculation of atmospheric events with\n [alternative\n thresholds](https://robwschlegel.github.io/heatwaveR/articles/complex_clims.html).\n- For a demonstration on how to [download and prepare OISST\n data](https://robwschlegel.github.io/heatwaveR/articles/OISST_preparation.html).\n- Which may then have the `detect_event()` function applied to the\n [gridded\n data](https://robwschlegel.github.io/heatwaveR/articles/gridded_event_detection.html),\n and then fit a GLM and plot the results.\n\n## The Marine Heatwave Tracker\n\nTo see the **`heatwaveR`** package in action, check out the [Marine\nHeatwave Tracker](https://www.marineheatwaves.org/tracker.html) website.\nThis is a daily updating global analysis of where in the world marine\nheatwaves are occurring. It has near real-time information as well as\nhistoric data going back to January 1st, 1982 and uses the Hobday et\nal.\xc2\xa0(2018) colour scheme to show how intense the MHWs are.\n\n## Contributing to **`heatwaveR`**\n\nTo contribute to the package please follow the guidelines\n[here](https://robwschlegel.github.io/heatwaveR/CONTRIBUTING.html).\n\nPlease use this [link](https://github.com/robwschlegel/heatwaveR/issues)\nto report any bugs found.\n\n## Citing **`heatwaveR`**\n\nBecause **`heatwaveR`** is and always will be free-to-use open source\nsoftware, its citation in scientific literature and other sources is the\nprimary metric through which the continued development of this package\nis motivated. Therefore, if the **`heatwaveR`** package is used in\nany analyses please acknowledge this through the following citation:\n\nRobert W. Schlegel and Albertus J. Smit (2018). heatwaveR: A central\nalgorithm for the detection of heatwaves and cold-spells. 
Journal of\nOpen Source Software, 3(27), 821, \n\nThe BibTeX citation may be accessed in R with:\n\n`citation(""heatwaveR"")`\n\nFor a list of sources that have cited **`heatwaveR`** see the Citations\ntab in the toolbar at the top of this page. If you do not see your\npublication in the list of citations and would like it added please\ncontact the developer (see below).\n\n## References\n\nHobday, A.J. et al.\xc2\xa0(2016). A hierarchical approach to defining marine\nheatwaves. Progress in Oceanography, 141, pp.\xc2\xa0227-238.\n\nSchlegel, R. W., Oliver, E. C. J., Wernberg, T. W., Smit, A. J. (2017a).\nNearshore and offshore co-occurrences of marine heatwaves and\ncold-spells. Progress in Oceanography, 151, pp.\xc2\xa0189-205.\n\nSchlegel, R. W., Oliver, E. C., Perkins-Kirkpatrick, S., Kruger, A.,\nSmit, A. J. (2017b). Predominant atmospheric and oceanic patterns during\ncoastal marine heatwaves. Frontiers in Marine Science, 4, 323.\n\nHobday, A. J., Oliver, E. C. J., Sen Gupta, A., Benthuysen, J. A.,\nBurrows, M. T., Donat, M. G., Holbrook, N. J., Moore, P. J., Thomsen, M.\nS., Wernberg, T., Smale, D. A. (2018). Categorizing and naming marine\nheatwaves. Oceanography 31(2).\n\n## Acknowledgements\n\nThe Python code was written by Eric C. J. Oliver.\n\nContributors to the Marine Heatwaves definition and its numerical\nimplementation include Alistair J. Hobday, Lisa V. Alexander, Sarah E.\nPerkins, Dan A. Smale, Sandra C. Straub, Jessica Benthuysen, Michael T.\nBurrows, Markus G. Donat, Ming Feng, Neil J. Holbrook, Pippa J. Moore,\nHillary A. Scannell, Alex Sen Gupta, and Thomas Wernberg.\n\nThe translation from Python to R was done by A. J. Smit and the graphing\nfunctions were contributed by Robert. W. Schlegel.\n\n## Contact\n\nRobert W. Schlegel\n\nData Scientist\n\nLaboratoire d\xe2\x80\x99Oc\xc3\xa9anographie de Villefranche-sur-Mer, LOV\n\nInstitut de la Mer de Villefranche, IMEV\n\n\n'",",https://doi.org/10.21105/joss.00821,https://doi.org/10.5281/zenodo.1324308,https://doi.org/10.21105/joss.00821","2018/04/24, 09:33:11",2010,CUSTOM,58,556,"2023/07/07, 12:51:23",1,2,31,8,110,0,0.5,0.3239130434782609,"2021/01/11, 09:58:38",v0.4.5,0,3,false,,true,true,,,,,,,,,,, py-wave-runup,A Python module which makes it easy for coastal engineers and scientists to test and use various empirical wave runup models which have been published in literature.,chrisleaman,https://github.com/chrisleaman/py-wave-runup.git,github,"coastal,coastal-modelling,coastal-engineering,beach,beaches",Coastal and Reefs,"2021/07/07, 07:41:05",30,0,7,false,Python,,,Python,,"b'=================\nPython Wave Runup\n=================\n::\n\n Empirical wave runup models implemented in Python for coastal engineers and scientists.\n\n.. image:: https://zenodo.org/badge/180274721.svg\n :target: https://zenodo.org/badge/latestdoi/180274721\n\n.. image:: https://img.shields.io/pypi/v/py-wave-runup.svg\n :target: https://pypi.python.org/pypi/py-wave-runup\n\n.. image:: https://img.shields.io/travis/com/chrisleaman/py-wave-runup.svg\n :target: https://travis-ci.com/chrisleaman/py-wave-runup\n\n.. image:: https://readthedocs.org/projects/py-wave-runup/badge/?version=latest\n :target: https://py-wave-runup.readthedocs.io/en/latest/?badge=latest\n :alt: Documentation Status\n\n.. image:: https://codecov.io/gh/chrisleaman/py-wave-runup/branch/master/graph/badge.svg\n :target: https://codecov.io/gh/chrisleaman/py-wave-runup\n\n.. 
image:: https://img.shields.io/badge/code%20style-black-000000.svg\n :target: https://github.com/ambv/black\n\n\nContents\n----------\n- `Installation`_\n- `Usage`_\n- `Documentation`_\n- `Background`_\n- `Contributing`_\n- `Citation`_\n- `License`_\n- `References`_\n\n\n\nInstallation\n------------\n\nInstallation of ``py-wave-runup`` can be done with pip:\n\n.. code:: bash\n\n pip install py-wave-runup\n\n\nUsage\n-----\n\nThe following `wave runup models`_ are available for use:\n\n- ``models.Stockdon2006``: The most commonly cited and widely used runup model.\n- ``models.Power2018``: Based on the Gene-Expression Programming technique.\n- ``models.Holman1986``: Incorporated wave setup using Duck, NC measurements.\n- ``models.Nielsen2009``: Based on runup measurements from NSW, Australia.\n- ``models.Ruggiero2001``: Based on runup measurements from dissipative Oregon beaches.\n- ``models.Vousdoukas2012``: Based on runup from the European Atlantic coast.\n- ``models.Senechal2011``: Based on extreme storm conditions at Truc Vert, France.\n- ``models.Beuzen2019``: Gaussian Process (GP) runup model.\n- ``models.Passarella2018``: Genetic Programming (infragravity and total) swash model. \n\nTo calculate runup, setup and swash, define your offshore conditions in your\nselected runup model; then you can access each parameter:\n\n.. code:: python\n\n from py_wave_runup import models\n\n model_sto06 = models.Stockdon2006(Hs=4, Tp=12, beta=0.1)\n\n model_sto06.R2 # 2.54\n model_sto06.setup # 0.96\n model_sto06.sinc # 2.06\n model_sto06.sig # 1.65\n\n.. _wave runup models: https://py-wave-runup.readthedocs.io/en/develop/models.html\n\nDocumentation\n-------------\nDocumentation is located at https://py-wave-runup.readthedocs.io.\n\n\nBackground\n----------\n\nWave runup refers to the final part of a wave\'s journey as it travels from offshore\nonto the beach. It is observable by anyone who goes to the beach and watches the edge\nof the water run up and run down the beach. It comprises two components:\n\n - **setup**: the height of the time-averaged superelevation of the mean water level\n above the Still Water Level (SWL)\n - **swash**: the height of the time-varying fluctuation of the instantaneous water\n level about the setup elevation\n\nSetup, swash and other components of Total Water Level (TWL) rise are shown in this\nhandy figure below.\n\n.. image:: https://raw.githubusercontent.com/chrisleaman/py-wave-runup/master/docs/_static/VitousekDoubling2017Fig1.jpg\n :width: 500 px\n :align: center\n..\n\n | Figure from Vitousek et al. (2017) [#vit17]_\n\nWave runup can contribute a significant portion of the increase in TWL in coastal\nstorms causing erosion and inundation. For example, Stockdon et al. (2006) [#sto06]_\ncollated data from numerous experiments, some of which showed wave runup 2% exceedance\nheights in excess of 3 m during some storms.\n\nGiven the impact such a large increase in TWL can have on coastlines, there has been\nmuch research conducted to try to improve our understanding of wave runup processes.\nAlthough there are many processes which can influence wave runup (such as nonlinear\nwave transformation, wave reflection, three-dimensional effects, porosity, roughness,\npermeability and groundwater) [#cem06]_, many attempts have been made to derive\nempirical relationships based on easily measurable parameters. 
\n\nDocumentation\n-------------\nDocumentation is located at https://py-wave-runup.readthedocs.io.\n\n\nBackground\n----------\n\nWave runup refers to the final part of a wave\'s journey as it travels from offshore\nonto the beach. It is observable by anyone who goes to the beach and watches the edge\nof the water ""run up"" and run down the beach. It comprises two components:\n\n - **setup**: the height of the time averaged superelevation of the mean water level\n above the Still Water Level (SWL)\n - **swash**: the height of the time varying fluctuation of the instantaneous water\n level about the setup elevation\n\nSetup, swash and other components of Total Water Level (TWL) rise are shown in this\nhandy figure below.\n\n.. image:: https://raw.githubusercontent.com/chrisleaman/py-wave-runup/master/docs/_static/VitousekDoubling2017Fig1.jpg\n :width: 500 px\n :align: center\n..\n\n | Figure from Vitousek et al. (2017) [#vit17]_\n\nWave runup can contribute a significant portion of the increase in TWL in coastal\nstorms causing erosion and inundation. For example, Stockdon et al. (2006) [#sto06]_\ncollated data from numerous experiments, some of which showed wave runup 2% exceedance\nheights in excess of 3 m during some storms.\n\nGiven the impact such a large increase in TWL can have on coastlines, there has been\nmuch research conducted to try to improve our understanding of wave runup processes.\nAlthough there are many processes which can influence wave runup (such as nonlinear\nwave transformation, wave reflection, three-dimensional effects, porosity, roughness,\npermeability and groundwater) [#cem06]_, many attempts have been made to derive\nempirical relationships based on easily measurable parameters. Typically, empirical\nwave runup models include:\n\n - **Hs**: significant wave height\n - **Tp**: peak wave period\n - **beta**: beach slope\n\nThis Python package attempts to consolidate the work done by others in this field and\ncollate the numerous empirical relationships for wave runup which have been published.\n\nContributing\n------------\n\nAs there are many different empirical wave models out there, contributions are most\nwelcome. If you don\'t feel confident about changing the code yourself, feel free to open\na `Github issue`_ and let us know what could be added. Otherwise, follow the steps below\nto create a Pull Request:\n\n.. _Github issue: https://github.com/chrisleaman/py-wave-runup/issues\n\n1. Fork it (https://github.com/chrisleaman/py-wave-runup/fork)\n2. Create the development environment:\n\n - For pip, run ``pip install --pre -r requirements.txt``\n - For `poetry`_, run ``poetry install``\n - For `anaconda`_, run ``conda env create -f environment.yml``\n\n3. Create your feature branch (``git checkout -b feature/fooBar``)\n4. Install pre-commit hooks for automatic formatting (``pre-commit run -a``)\n5. Add your code!\n6. Add and run tests (``pytest``)\n7. Update and check documentation compiles (``sphinx-build -M html "".\\docs"" "".\\docs\\_build""``)\n8. Commit your changes (``git commit -am \'Add some fooBar\'``)\n9. Push to the branch (``git push origin feature/fooBar``)\n10. Create a new Pull Request\n\n.. _poetry: https://python-poetry.org/\n.. _anaconda: https://www.anaconda.com/distribution/#download-section\n\n\nCitation\n--------\n\nIf this package has been useful to you, please cite the following DOI: https://doi.org/10.5281/zenodo.2667464\n\n\nLicense\n--------\n\nDistributed under the GNU General Public License v3.\n\n\nReferences\n----------\n\n.. [#vit17] Vitousek, Sean, Patrick L. Barnard, Charles H. Fletcher, Neil Frazer,\n Li Erikson, and Curt D. Storlazzi. ""Doubling of Coastal Flooding Frequency\n within Decades Due to Sea-Level Rise."" Scientific Reports 7, no. 1 (May 18,\n 2017): 1399. https://doi.org/10.1038/s41598-017-01362-7.\n.. [#sto06] Stockdon, Hilary F., Robert A. Holman, Peter A. Howd, and Asbury H. Sallenger.\n ""Empirical Parameterization of Setup, Swash, and Runup."" Coastal Engineering 53,\n no. 7 (May 1, 2006): 573-88. https://doi.org/10.1016/j.coastaleng.2005.12.005\n.. [#cem06] United States, Army, and Corps of Engineers. Coastal Engineering Manual.\n Washington, D.C.: U.S. 
Army Corps of Engineers, 2006.\n'",",https://zenodo.org/badge/latestdoi/180274721,https://doi.org/10.5281/zenodo.2667464,https://doi.org/10.1038/s41598-017-01362-7,https://doi.org/10.1016/j.coastaleng.2005.12.005","2019/04/09, 03:07:38",1661,CUSTOM,0,257,"2022/09/01, 06:06:44",22,207,212,0,419,11,0.0,0.2233502538071066,"2020/01/29, 04:53:42",v0.1.10,0,7,false,,false,false,,,,,,,,,,, CoastSat,Enables users to obtain time-series of shoreline position at any coastline worldwide from 30+ years of publicly available satellite imagery.,kvos,https://github.com/kvos/CoastSat.git,github,"google-earth-engine,earth-engine,remote-sensing,satellite-images,coastal-engineering,shoreline-detection",Coastal and Reefs,"2023/09/22, 06:26:22",565,0,112,true,Jupyter Notebook,,,"Jupyter Notebook,Python",http://coastsat.wrl.unsw.edu.au/,"b'# CoastSat\n[![Last Commit](https://img.shields.io/github/last-commit/kvos/CoastSat)](\nhttps://github.com/kvos/CoastSat/commits/)\n[![GitHub release](https://img.shields.io/github/release/kvos/CoastSat)](https://GitHub.com/kvos/CoastSat/releases/)\n[![DOI](https://zenodo.org/badge/DOI/10.5281/zenodo.2779293.svg)](https://doi.org/10.5281/zenodo.2779293)\n[![License: GPL v3](https://img.shields.io/badge/License-GPLv3-blue.svg)](https://www.gnu.org/licenses/gpl-3.0)\n[![Join the chat at https://gitter.im/CoastSat/community](https://badges.gitter.im/spyder-ide/spyder.svg)](https://gitter.im/CoastSat/community)\n![GitHub issues](https://img.shields.io/github/issues/kvos/CoastSat)\n![GitHub commit activity](https://img.shields.io/github/commit-activity/y/kvos/CoastSat)\n\nCoastSat is an open-source software toolkit written in Python that enables users to obtain time-series of shoreline position at any coastline worldwide from 38+ years (and growing) of publicly available satellite imagery.\n\n![Alt text](https://github.com/kvos/CoastSat/blob/master/doc/example.gif)\n\n:point_right: Visit the [CoastSat website](http://coastsat.wrl.unsw.edu.au/) to explore and download regional-scale datasets of satellite-derived shorelines and beach slopes generated with CoastSat in different regions (Pacific Rim, US Atlantic coast).\n\n:point_right: Useful publications describing the toolbox:\n\n- Shoreline detection algorithm: https://doi.org/10.1016/j.envsoft.2019.104528 (Open Access)\n- Accuracy assessment: https://doi.org/10.1016/j.coastaleng.2019.04.004\n- Challenges in meso-macrotidal environments: https://doi.org/10.1016/j.geomorph.2021.107707\n- Basin-scale shoreline mapping (Pacific): https://www.nature.com/articles/s41561-022-01117-8 (The Conversation article [here](https://theconversation.com/millions-of-satellite-images-reveal-how-beaches-around-the-pacific-vanish-or-replenish-in-el-nino-and-la-nina-years-198505))\n- Beach slope estimation: https://doi.org/10.1029/2020GL088365 (preprint [here](https://www.essoar.org/doi/10.1002/essoar.10502903.2))\n- Beach-face slope dataset for Australia: https://doi.org/10.5194/essd-14-1345-2022\n\n:point_right: Other repositories and extensions related to the toolbox:\n- [SDS_Benchmark](https://github.com/SatelliteShorelines/SDS_Benchmark): testbed for satellite-derived shoreline mapping algorithms.\n- [CoastSat.slope](https://github.com/kvos/CoastSat.slope): estimates the beach-face slope from the satellite-derived shorelines obtained with CoastSat.\n- [CoastSat.PlanetScope](https://github.com/ydoherty/CoastSat.PlanetScope): shoreline extraction for PlanetScope Dove imagery (near-daily since 2017 at 3m 
resolution).\n- [InletTracker](https://github.com/VHeimhuber/InletTracker): monitoring of intermittent open/close estuary entrances.\n- [CoastSat.islands](https://github.com/mcuttler/CoastSat.islands): 2D planform measurements for small reef islands.\n- [CoastSeg](https://github.com/dbuscombe-usgs/CoastSeg): image segmentation, deep learning, doodler.\n- [CoastSat.Maxar](https://github.com/kvos/CoastSat.Maxar): shoreline extraction on Maxar World-View images (in progress)\n\n:star: **If you like the repo put a star on it!** :star:\n\n### Latest updates\n\n:arrow_forward: *(2023/07/07)*\nCoastSat v2.3: addition of a better cloud mask for Sentinel-2 imagery using the s2cloudless collection on GEE\n\n### Project description\n\nSatellite remote sensing can provide low-cost long-term shoreline data capable of resolving the temporal scales of interest to coastal scientists and engineers at sites where no in-situ field measurements are available. CoastSat enables the non-expert user to extract shorelines from Landsat 5, Landsat 7, Landsat 8, Landsat 9 and Sentinel-2 images.\nThe shoreline detection algorithm implemented in CoastSat is optimised for sandy beach coastlines. It combines a sub-pixel border segmentation and an image classification component, which refines the segmentation into four distinct categories such that the shoreline detection is specific to the sand/water interface.\n\nThe toolbox has the following functionalities:\n1. easy retrieval of satellite imagery spanning the user-defined region of interest and time period from Google Earth Engine, including state-of-the-art pre-processing steps (re-projecting the different bands, pansharpening, advanced cloud masking).\n2. automated extraction of shorelines from all the selected images using a sub-pixel resolution technique.\n3. intersection of the 2D shorelines with user-defined shore-normal transects.\n4. tidal correction using tide/water levels and an estimate of the beach slope.\n5. post-processing of the shoreline time-series, despiking and seasonal averaging.\n6. validation example at Narrabeen-Collaroy beach, Sydney.\n\n### Table of Contents\n\n- [Installation](#installation)\n- [Usage](#usage)\n - [Retrieval of the satellite images](#retrieval)\n - [Shoreline detection](#detection)\n - [Shoreline change time-series](#analysis)\n - [Tidal correction](#correction)\n - [Post-processing (seasonal averages and linear trends)](#postprocessing)\n - [Validation against survey data](#validation)\n- [Contributing and Issues](#issues)\n- [References](#references)\n\n## 1. Installation\n\n### 1.1 Create an environment with Anaconda\n\nTo run the toolbox you first need to install the required Python packages in an environment. To do this we will use **Anaconda**, which can be downloaded freely [here](https://www.anaconda.com/download/). 
If you are a more advanced user and have **Mamba** installed, use Mamba as it will install everything faster and without problems.\n\nOnce you have it installed on your PC, open the Anaconda prompt (in Mac and Linux, open a terminal window) and use the `cd` command (change directory) to go to the folder where you have downloaded this repository.\n\nCreate a new environment named `coastsat` with all the required packages by entering these commands in succession:\n\n```\nconda create -n coastsat\nconda activate coastsat\nconda install -c conda-forge geopandas earthengine-api scikit-image matplotlib astropy notebook -y\npip install pyqt5\n```\n\nAll the required packages have now been installed in an environment called `coastsat`. Always make sure that the environment is activated with:\n\n```\nconda activate coastsat\n```\n\nTo confirm that you have successfully activated the environment, your terminal command line prompt should now start with `(coastsat)`.\n\n:warning: **In case errors are raised** :warning:: clean things up with the following command (better to have the Anaconda Prompt open as administrator) before attempting to install `coastsat` again:\n```\nconda clean --all\n```\n\nYou can also install the packages with the **Anaconda Navigator**, in the *Environments* tab. For more details, the following [link](https://docs.conda.io/projects/conda/en/latest/user-guide/tasks/manage-environments.html#creating-an-environment-with-commands) shows how to create and manage an environment with Anaconda.\n\n### 1.2 Activate Google Earth Engine Python API\n\nFirst, you need to request access to Google Earth Engine at https://signup.earthengine.google.com/. Then install `gcloud`: go to https://cloud.google.com/sdk/docs/install and install the gcloud CLI. After you have installed it, close the Anaconda Prompt and restart it.\n\nWith the `coastsat` environment activated, run the following command on the Anaconda Prompt to link your environment to the GEE server:\n\n```\nearthengine authenticate --auth_mode=notebook\n```\n\nNow you are ready to start using the CoastSat toolbox!\n\n:warning: remember to always activate the environment with `conda activate coastsat` each time you are preparing to use the toolbox.\n\n## 2. Usage\n\nAn example of how to run the software in a Jupyter Notebook is provided in the repository (`example_jupyter.ipynb`). To run this, first activate your `coastsat` environment with `conda activate coastsat` (if not already active), and then type:\n\n```\njupyter notebook\n```\n\nA web browser window will open. Point to the directory where you downloaded this repository and click on `example_jupyter.ipynb`. A Jupyter Notebook combines formatted text and code. To run the code, place your cursor inside one of the code sections and click on the `run cell` button (or press `Shift` + `Enter`) and progress forward.\n\n![image](https://user-images.githubusercontent.com/7217258/165960239-e8870f7e-0dab-416e-bbdd-089b136b7d20.png)\n\nIf you prefer to use **Spyder** or other integrated development environments (IDEs), a Python script named `example.py` is also included in the repository. If using **Spyder**, make sure that the Graphics Backend is set to **Automatic** and not **Inline** (as this mode doesn\'t allow you to interact with the figures). 
To change this setting go to Preferences>IPython console>Graphics.\n\nThe following sections show an example of how to use the toolbox at Narrabeen-Collaroy beach (Australia).\n\n### 2.1 Retrieval of the satellite images\n\nTo retrieve the available satellite images from the GEE server, cropped around the user-defined region of coastline for the particular time period of interest, the following variables are required:\n- `polygon`: the coordinates of the region of interest (longitude/latitude pairs in WGS84)\n- `dates`: dates over which the images will be retrieved (e.g., `dates = [\'2017-12-01\', \'2018-01-01\']`)\n- `sat_list`: satellite missions to consider (e.g., `sat_list = [\'L5\', \'L7\', \'L8\', \'L9\', \'S2\']` for Landsat 5, 7, 8, 9 and Sentinel-2 collections)\n- `sitename`: name of the site (this is the name of the subfolder where the images and other accompanying files will be stored)\n- `filepath`: filepath to the directory where the data will be stored\n- `landsat_collection`: whether to use Collection 1 (`C01`) or Collection 2 (`C02`). Note that after 2022/01/01, Landsat images are only available in Collection 2. Landsat 9 is therefore only available as Collection 2. So if the user has selected `C01`, images prior to 2022/01/01 will be downloaded from Collection 1, while images captured after that date will be automatically taken from `C02`.\n\nThe call `metadata = SDS_download.retrieve_images(inputs)` will launch the retrieval of the images and store them as .TIF files (under */filepath/sitename*). The metadata contains the exact time of acquisition (in UTC time) of each image, its projection and its geometric accuracy. If the images have already been downloaded previously and the user only wants to run the shoreline detection, the metadata can be loaded directly by running `metadata = SDS_download.get_metadata(inputs)`.
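\n\nFor example, the inputs can be gathered in a dictionary as sketched below (an illustrative addition, not from the original README; the polygon coordinates are placeholders, with the first and last vertex repeated to close the polygon):\n```\ninputs = {\n    \'polygon\': [[[151.30, -33.70], [151.32, -33.70], [151.32, -33.74],\n                 [151.30, -33.74], [151.30, -33.70]]],\n    \'dates\': [\'2017-12-01\', \'2018-01-01\'],\n    \'sat_list\': [\'S2\'],\n    \'sitename\': \'NARRA\',\n    \'filepath\': \'./data\',\n    \'landsat_collection\': \'C02\',\n}\nmetadata = SDS_download.retrieve_images(inputs)\n```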
\n\nThe screenshot below shows an example of inputs that will retrieve all the images of Collaroy-Narrabeen (Australia) acquired by Sentinel-2 in December 2017.\n\n![doc1](https://user-images.githubusercontent.com/7217258/166197244-9f41de17-f387-40a6-945e-8a78b581c4b1.png)\n\n:warning: The area of the polygon should not exceed 100 km2, so for very long beaches split it into multiple smaller polygons.\n\n### 2.2 Shoreline detection\n\nTo map the shorelines, the following user-defined settings are needed:\n- `cloud_thresh`: threshold on maximum cloud cover that is acceptable on the images (value between 0 and 1 - this may require some initial experimentation).\n- `dist_clouds`: buffer around cloud pixels where shoreline is not mapped (in metres).\n- `output_epsg`: EPSG code defining the spatial reference system of the shoreline coordinates. It has to be a cartesian coordinate system (i.e. projected) and not a geographical coordinate system (in latitude and longitude angles). See http://spatialreference.org/ to find the EPSG number corresponding to your local coordinate system. If unsure, use 3857, which is the Web Mercator projection.\n- `check_detection`: if set to `True` the user can quality control each shoreline detection interactively (recommended when mapping shorelines for the first time) and accept/reject each shoreline.\n- `adjust_detection`: if users want more control over the detected shorelines, they can set this parameter to `True`; they will then be able to manually adjust the threshold used to map the shoreline on each image.\n- `save_figure`: if set to `True` a figure of each mapped shoreline is saved under */filepath/sitename/jpg_files/detection*, even if the two previous parameters are set to `False`. Note that this may slow down the process.\n\nThere are additional parameters (`min_beach_area`, `min_length_sl`, `cloud_mask_issue`, `sand_color` and `pan_off`) that can be tuned to optimise the shoreline detection (for advanced users only). For the moment, leave these parameters set to their default values; we will see later how they can be modified.\n\nAn example of settings is provided here:\n\n![image](https://user-images.githubusercontent.com/7217258/182158840-ef1c527c-6ddb-44ab-a6fc-f4b46c8b0127.png)
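\n\nIn code, the settings can be written as a dictionary along the following lines (an illustrative sketch, not from the original README; the numeric values are placeholders, and `28356` is simply a projected EPSG code for the Sydney region, GDA94 / MGA zone 56):\n```\nsettings = {\n    \'cloud_thresh\': 0.1,        # maximum fraction of cloud cover accepted\n    \'dist_clouds\': 300,         # buffer (m) around clouds where no shoreline is mapped\n    \'output_epsg\': 28356,       # projected (cartesian) spatial reference system\n    \'check_detection\': True,    # interactively accept/reject each detection\n    \'adjust_detection\': False,  # manually adjust the mapping threshold on each image\n    \'save_figure\': True,        # save a figure of every mapped shoreline\n    \'inputs\': inputs,           # the inputs defined in section 2.1\n}\n```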
\n\nOnce all the settings have been defined, the batch shoreline detection can be launched by calling:\n```\noutput = SDS_shoreline.extract_shorelines(metadata, settings)\n```\nWhen `check_detection` is set to `True`, a figure like the one below appears and asks the user to manually accept/reject each detection by pressing **on the keyboard** the `right arrow` (\xe2\x87\xa8) to `keep` the shoreline or `left arrow` (\xe2\x87\xa6) to `skip` the mapped shoreline. The user can break the loop at any time by pressing `escape` (nothing will be saved though).\n\n![map_shorelines](https://user-images.githubusercontent.com/7217258/60766769-fafda480-a0f1-11e9-8f91-419d848ff98d.gif)\n\nWhen `adjust_detection` is set to `True`, a figure like the one below appears and the user can adjust the position of the shoreline by clicking on the histogram of MNDWI pixel intensities. Once the threshold has been adjusted, press `Enter` and then accept/reject the image with the keyboard arrows.\n\n![Alt text](https://github.com/kvos/CoastSat/blob/master/doc/adjust_shorelines.gif)\n\nOnce all the shorelines have been mapped, the output is available in two different formats (saved under */filepath/data/SITENAME*):\n- `SITENAME_output.pkl`: contains a list with the shoreline coordinates, the exact timestamp at which the image was captured (UTC time), the geometric accuracy and the cloud cover of each individual image. This list can be manipulated with Python; a snippet of code to plot the results is provided in the example script.\n- `SITENAME_output.geojson`: this output can be visualised in a GIS software (e.g., QGIS, ArcGIS).\n\nThe figure below shows how the satellite-derived shorelines can be opened in a GIS software (QGIS) using the `.geojson` output. Note that the coordinates in the `.geojson` file are in the spatial reference system defined by the `output_epsg`.
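\n\nAs a quick illustration of working with the `.pkl` output (a sketch, assuming the output dictionary exposes `\'dates\'` and `\'shorelines\'` entries, with each shoreline stored as an N x 2 array of projected coordinates):\n```\nimport pickle\nimport matplotlib.pyplot as plt\n\nwith open(\'./data/NARRA/NARRA_output.pkl\', \'rb\') as f:\n    output = pickle.load(f)\n\n# plot every mapped shoreline, labelled by acquisition date\nfig, ax = plt.subplots()\nfor date, sl in zip(output[\'dates\'], output[\'shorelines\']):\n    ax.plot(sl[:, 0], sl[:, 1], label=date.strftime(\'%Y-%m-%d\'))\nax.set_xlabel(\'Eastings (m)\')\nax.set_ylabel(\'Northings (m)\')\nax.legend()\nplt.show()\n```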
\n\n#### 2.2.1 Reference shoreline\n\nBefore running the batch shoreline detection, there is the option to manually digitize a reference shoreline on one cloud-free image. This reference shoreline helps to reject outliers and false detections when mapping shorelines as it only considers as valid shorelines the points that are within a defined distance from this reference shoreline. **It is highly recommended to use a reference shoreline**.\n\nThe user can manually digitize one or several reference shorelines on one of the images by calling:\n```\nsettings[\'reference_shoreline\'] = SDS_preprocess.get_reference_sl_manual(metadata, settings)\nsettings[\'max_dist_ref\'] = 100 # max distance (in meters) allowed from the reference shoreline\n```\nThis function allows the user to click points along the shoreline on cloud-free satellite images, as shown in the animation below.\n\n![ref_shoreline](https://user-images.githubusercontent.com/7217258/70408922-063c6e00-1a9e-11ea-8775-fc62e9855774.gif)\n\nThe maximum distance (in metres) allowed from the reference shoreline is defined by the parameter `max_dist_ref`. This parameter is set to a default value of 100 m. If you think that a 100 m buffer from the reference shoreline will not capture the shoreline variability at your site, increase the value of this parameter. This may be the case for large nourishments or eroding/accreting coastlines.\n\n#### 2.2.2 Advanced shoreline detection parameters\n\nAs mentioned above, there are some additional parameters that can be modified to optimise the shoreline detection:\n- `min_beach_area`: minimum allowable object area (in metres^2) for the class \'sand\'. During the image classification, some features (for example, building roofs) may be incorrectly labelled as sand. To correct this, all the objects classified as sand containing less than a certain number of connected pixels are removed from the sand class. The default value is 4500 m^2, which corresponds to 20 connected pixels of 15 m x 15 m. If you are looking at a very small beach (<20 connected pixels on the images), try decreasing the value of this parameter.\n- `min_length_sl`: minimum length (in metres) of shoreline perimeter to be valid. This can be used to discard small features that are detected but do not correspond to the actual shoreline. The default value is 500 m. If the shoreline that you are trying to map is shorter than 500 m, decrease the value of this parameter.\n- `cloud_mask_issue`: the cloud mask algorithm applied to Landsat images by USGS, namely CFMASK, sometimes has difficulties with very bright features such as beaches or white-water in the ocean. This may result in pixels corresponding to a beach being identified as clouds and appearing as masked pixels on your images. If this issue seems to be present in a large proportion of images from your local beach, you can switch this parameter to `True` and CoastSat will remove from the cloud mask the pixels that form very thin linear features, as often these are beaches and not clouds. Only activate this parameter if you observe this very specific cloud mask issue; otherwise leave it at the default value of `False`.\n- `sand_color`: this parameter can take 4 values: `default`, `latest`, `dark` or `bright`. Only change this parameter if you are seeing that with `default` the sand pixels are not being classified as sand (in orange). If your beach has dark sand (grey/black sand beaches), you can set this parameter to `dark` and the classifier will be able to pick up the dark sand. 
On the other hand, if your beach has white sand and the `default` classifier is not picking it up, switch this parameter to `bright`. The `latest` classifier contains all the training data and can pick up sand in most environments (but not as accurately). At this stage the different classifiers are only available for Landsat images (soon for Sentinel-2 as well).\n- `pan_off`: by default Landsat 7, 8 and 9 images are pan-sharpened using the panchromatic band and a PCA algorithm. If for any reason you prefer not to pan-sharpen the Landsat images, switch it off by setting `pan_off` to `True`.
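\n\nIn code, tuning these advanced parameters amounts to updating the settings dictionary before calling `SDS_shoreline.extract_shorelines()` (an illustrative sketch using the default values quoted above):\n```\nsettings[\'min_beach_area\'] = 4500      # minimum sand object area (m^2)\nsettings[\'min_length_sl\'] = 500        # minimum shoreline perimeter length (m)\nsettings[\'cloud_mask_issue\'] = False   # only set to True for the CFMASK issue described above\nsettings[\'sand_color\'] = \'default\'     # or \'latest\', \'dark\', \'bright\'\nsettings[\'pan_off\'] = False            # set to True to disable pan-sharpening\n```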
\n\n#### 2.2.3 Re-training the classifier\nCoastSat\'s shoreline mapping algorithm uses an image classification scheme to label each pixel into 4 classes: sand, water, white-water and other land features. While this classifier has been trained using a wide range of different beaches, it may be that it does not perform very well at specific sites that it has never seen before. You can train a new classifier with site-specific training data in a few minutes by following the Jupyter notebook in [re-train CoastSat classifier](https://github.com/kvos/CoastSat/blob/master/doc/train_new_classifier.md).\n\n### 2.3 Shoreline change time-series\n\nThis section shows how to obtain time-series of shoreline change along shore-normal transects. Each transect is defined by two points, its origin and a second point that defines its length and orientation. The origin is always defined first and located landwards, the second point is located seawards. There are 3 options to define the coordinates of the transects:\n1. Interactively draw shore-normal transects along the mapped shorelines:\n```\ntransects = SDS_transects.draw_transects(output, settings)\n```\n2. Load the transect coordinates from a .geojson file:\n```\ntransects = SDS_tools.transects_from_geojson(path_to_geojson_file)\n```\n3. Create the transects by manually providing the coordinates of two points:\n```\ntransects = dict([])\ntransects[\'Transect 1\'] = np.array([[342836, 6269215], [343315, 6269071]])\ntransects[\'Transect 2\'] = np.array([[342482, 6268466], [342958, 6268310]])\ntransects[\'Transect 3\'] = np.array([[342185, 6267650], [342685, 6267641]])\n```\n\n:warning: if you choose option 2 or 3, make sure that the points that you are providing are in the spatial reference system defined by `settings[\'output_epsg\']`.\n\nOnce the shore-normal transects have been defined, the intersection between the 2D shorelines and the transects is computed with the following function:\n```\nsettings[\'along_dist\'] = 25\ncross_distance = SDS_transects.compute_intersection(output, transects, settings)\n```\nThe parameter `along_dist` defines the along-shore distance around the transect over which shoreline points are selected to compute the intersection. The default value is 25 m, which means that the intersection is computed as the median of the points located within 25 m of the transect (50 m alongshore-median). This helps to smooth out localised water levels in the swash zone.\n\nAn example is shown in the animation below:\n\n![transects](https://user-images.githubusercontent.com/7217258/49990925-8b985a00-ffd3-11e8-8c54-57e4bf8082dd.gif)\n\nThere is also a more advanced function to compute the intersections, `SDS_transects.compute_intersection_QA()`, which provides more quality control and can deal with small loops, multiple intersections, false detections etc. It is recommended to use this function as it can provide cleaner shoreline time-series. See the [Jupyter Notebook](https://github.com/kvos/CoastSat/blob/master/example_jupyter.ipynb) for an example of how to use it. An example of parameter values is provided below; the default parameters should work in most cases (leave as is if unsure).\n\n![image](https://user-images.githubusercontent.com/7217258/182160883-5edfb8f9-e668-440c-b55c-87e8697a2b64.png)\n\n- `along_dist`: (in metres) alongshore distance to calculate the intersection (median of points within this distance).\n- `min_points`: minimum number of shoreline points to calculate an intersection.\n- `max_std`: (in metres) maximum STD for the shoreline points within the alongshore range; if the STD is above this value a NaN is returned for this intersection.\n- `max_range`: (in metres) maximum RANGE for the shoreline points within the alongshore range; if the RANGE is above this value a NaN is returned for this intersection.\n- `min_chainage`: (in metres) furthest distance landward of the transect origin at which an intersection is accepted; beyond this point a NaN is returned.\n- `multiple_inter`: (\'auto\',\'nan\',\'max\') defines how to deal with multiple shoreline intersections.\n- `auto_prc`: (value between 0 and 1, by default 0.1) percentage of the time that a multiple intersection needs to be present to use the max in \'auto\' mode.\n\nThe `multiple_inter` setting helps to deal with multiple shoreline intersections along the same transect. This is quite common, for example when there is a lagoon behind the beach and the transect crosses two water bodies. The function will try to identify these cases and the user can choose whether to:\n- `\'nan\'`: always assign a NaN when there are multiple intersections.\n- `\'max\'`: always take the max (the intersection furthest seaward).\n- `\'auto\'`: let the function decide transect by transect, and if it thinks there are two water bodies, take the max.\n\nIf `\'auto\'` is chosen, the `auto_prc` parameter will define when to use the max; by default it is set to 0.1, which means that the function assumes there are two water bodies if 10% of the time-series shows multiple intersections.\n\n### 2.4 Tidal Correction\n\nEach satellite image is captured at a different stage of the tide, therefore a tidal correction is necessary to remove the apparent shoreline changes caused by tidal fluctuations.\n\nIn order to tidally-correct the time-series of shoreline change you will need the following data:\n- Time-series of water/tide level: this can be formatted as a .csv file, an example is provided [here](https://github.com/kvos/CoastSat/blob/master/examples/NARRA_tides.csv). Make sure that the dates are in UTC time as the CoastSat shorelines are always in UTC time. Also, the vertical datum needs to be approximately Mean Sea Level.\n\n- An estimate of the beach-face slope along each transect. If you don\'t have this data you can obtain it using [CoastSat.slope](https://github.com/kvos/CoastSat.slope), see [Vos et al. 2020](https://doi.org/10.1029/2020GL088365) for more details (preprint available [here](https://www.essoar.org/doi/10.1002/essoar.10502903.2)).\n\nWave setup and runup corrections are not included in the toolbox, but for more information on these additional corrections see [Castelle et al. 2021](https://doi.org/10.1016/j.geomorph.2021.107707).
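\n\nConceptually, under the assumption of a planar beach face, the horizontal correction applied to each transect is (tide level - reference elevation) / beach slope: when the tide sits above the reference elevation the waterline is displaced landward, so the measured position is shifted back seaward by that amount. A minimal sketch with illustrative numbers (not from the original README):\n```\nimport numpy as np\n\nbeach_slope = 0.1                     # tan(beta) along this transect, e.g. from CoastSat.slope\nreference_elevation = 0.7             # elevation (m above MSL) to reduce the shorelines to\ntides_m = np.array([0.3, -0.2, 0.5])  # tide level (m above MSL) at each shoreline date\ncross_distance = {\'Transect 1\': np.array([120.0, 115.0, 118.0])}  # from compute_intersection\n\n# horizontal translation of the waterline caused by the tide\ncorrection = (tides_m - reference_elevation) / beach_slope\ncross_distance_corrected = cross_distance[\'Transect 1\'] + correction\n```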
\n\n### 2.5 Post-processing (seasonal averages and linear trends)\n\nThe tidally-corrected time-series can be post-processed to remove outliers with the despiking algorithm `SDS_transects.reject_outliers()`. This function was developed to remove obvious outliers in the time-series by removing the points that do not make physical sense in a shoreline change setting. For example, the shoreline can experience rapid erosion after a large storm, but it will then take time to recover and return to its previous state. Therefore, if the shoreline erodes/accretes suddenly by a significant amount (`max_cross_change`) and then immediately returns to its previous state, this spike does not make any physical sense and can be considered an outlier.\n\n![image](https://user-images.githubusercontent.com/7217258/182162154-9d8da81d-a5fc-486e-baf6-55e2a5782096.png)\n\nThis function also checks that the Otsu thresholds used to map the shoreline are within the typical range defined by `otsu_threshold`, with values outside this range (typically -0.5 to 0) identified as outliers.\n\n![otsu_threhsolds](https://github.com/kvos/CoastSat/assets/7217258/86ffa18b-206d-418c-84df-fbd369c28757)\n\nAdditionally, a set of functions to compute seasonal averages, monthly averages and linear trends on the shoreline time-series is provided.\n\n`SDS_transects.seasonal_averages()`\n\n![NA1_seasonally](https://github.com/kvos/CoastSat/assets/7217258/c98cfb7e-b6c6-45d6-9168-86b3c7cb5ed9)\n\n`SDS_transects.monthly_averages()`\n\n![NA1_monthly](https://github.com/kvos/CoastSat/assets/7217258/6bc3fd62-47d7-4e5f-bdec-a04e5a8b4142)\n\n:warning: given that the shoreline time-series are not uniformly sampled and there is a higher density of datapoints towards the end of the record (more satellites in orbit), it is best to estimate the long-term trends on the seasonally-averaged shoreline time-series, as the trend estimated on the raw time-series may be biased towards the end of the record.\n\n### 2.6 Validation against survey data\n\nThis section provides code to compare the satellite-derived shorelines against the survey data for Narrabeen, available at http://narrabeen.wrl.unsw.edu.au/.\n\n![comparison_transect_PF1](https://user-images.githubusercontent.com/7217258/183917858-d2fefdaf-f215-42d4-b103-3cbab636079e.jpg)\n\n## Contributing and Issues\nHaving a problem? Post an issue in the [Issues page](https://github.com/kvos/coastsat/issues) (please do not email).\n\nIf you are willing to contribute, check out our todo list in the [Projects page](https://github.com/kvos/CoastSat/projects/1).\n1. Fork the repository (https://github.com/kvos/coastsat/fork).\nA fork is a copy on which you can make your changes.\n2. Create a new branch on your fork\n3. Commit your changes and push them to your branch\n4. When the branch is ready to be merged, create a Pull Request (how to make a clean pull request explained [here](https://gist.github.com/MarcDiethelm/7303312))\n\n## References and Datasets\n\nThis section provides a list of references that use the CoastSat toolbox as well as existing shoreline datasets extracted with CoastSat.\n\n### Publications\n\n- Vos K., Splinter K.D., Harley M.D., Simmons J.A., Turner I.L. (2019). CoastSat: a Google Earth Engine-enabled Python toolkit to extract shorelines from publicly available satellite imagery. *Environmental Modelling and Software*. 122, 104528. https://doi.org/10.1016/j.envsoft.2019.104528 (Open Access)\n\n- Vos K., Harley M.D., Splinter K.D., Simmons J.A., Turner I.L. (2019). Sub-annual to multi-decadal shoreline variability from publicly available satellite imagery. *Coastal Engineering*. 150, 160\xe2\x80\x93174. 
https://doi.org/10.1016/j.coastaleng.2019.04.004\n\n- Vos K., Harley M.D., Splinter K.D., Walker A., Turner I.L. (2020). Beach slopes from satellite-derived shorelines. *Geophysical Research Letters*. 47(14). https://doi.org/10.1029/2020GL088365 (Open Access preprint [here](https://www.essoar.org/doi/10.1002/essoar.10502903.2))\n\n- Vos K., Deng W., Harley M.D., Turner I.L., Splinter K.D. (2022). Beach-face slope dataset for Australia. *Earth System Science Data*. 14(3), 1345-1357. https://doi.org/10.5194/essd-14-1345-2022\n\n- Vos K., Harley M.D., Turner I.L. et al. (2023). Pacific shoreline erosion and accretion patterns controlled by El Ni\xc3\xb1o/Southern Oscillation. *Nature Geoscience*. 16, 140\xe2\x80\x93146. https://doi.org/10.1038/s41561-022-01117-8\n\n- Castelle B., Masselink G., Scott T., Stokes C., Konstantinou A., Marieu V., Bujan S. (2021). Satellite-derived shoreline detection at a high-energy meso-macrotidal beach. *Geomorphology*. 383, 107707. https://doi.org/10.1016/j.geomorph.2021.107707\n\n- Castelle B., Ritz A., Marieu V., Lerma A.N., Vandenhove M. (2022). Primary drivers of multidecadal spatial and temporal patterns of shoreline change derived from optical satellite imagery. *Geomorphology*. 413, 108360. https://doi.org/10.1016/j.geomorph.2022.108360\n\n- Konstantinou A., Scott T., Masselink G., Stokes K., Conley D., Castelle B. (2023). Satellite-based shoreline detection along high-energy macrotidal coasts and influence of beach state. *Marine Geology*. 107082. https://doi.org/10.1016/j.margeo.2023.107082\n\n### Datasets\n\n- Time-series of shoreline change along the Pacific Rim (v1.4) [Data set]. Zenodo. https://doi.org/10.5281/zenodo.7758183\n\n- Time-series of shoreline change along the U.S. Atlantic coast: U.S. Geological Survey data release. https://doi.org/10.5066/P9BQQTCI\n\n- Time-series of shoreline change for the Klamath River Littoral Cell (California) (1.0) [Data set]. Zenodo. https://doi.org/10.5281/zenodo.7641757\n\n- Beach-face slope dataset for Australia (Version 2) [Data set]. Zenodo. 
https://doi.org/10.5281/zenodo.7272538\n\n- Training dataset used for pixel-wise classification in CoastSat (initial version): https://doi.org/10.5281/zenodo.3334147\n'",",https://doi.org/10.5281/zenodo.2779293,https://doi.org/10.1016/j.envsoft.2019.104528,https://doi.org/10.1016/j.coastaleng.2019.04.004,https://doi.org/10.1016/j.geomorph.2021.107707,https://doi.org/10.1029/2020GL088365,https://doi.org/10.5194/essd-14-1345-2022,https://doi.org/10.1029/2020GL088365,https://doi.org/10.1016/j.geomorph.2021.107707,https://doi.org/10.1016/j.envsoft.2019.104528,https://doi.org/10.1016/j.coastaleng.2019.04.004,https://doi.org/10.1029/2020GL088365,https://doi.org/10.5194/essd-14-1345-2022,https://doi.org/10.1038/s41561-022-01117-8,https://doi.org/10.1016/j.geomorph.2021.107707,https://doi.org/10.1016/j.geomorph.2022.108360,https://doi.org/10.1016/j.margeo.2023.107082,https://doi.org/10.5281/zenodo.7758183,https://doi.org/10.5066/P9BQQTCI,https://doi.org/10.5281/zenodo.7641757,https://doi.org/10.5281/zenodo.7272538,https://doi.org/10.5281/zenodo.3334147","2018/09/28, 06:37:19",1853,GPL-3.0,43,457,"2023/10/06, 09:51:16",38,98,392,85,19,1,0.0,0.1098039215686275,"2023/07/07, 06:31:02",v2.3,2,10,false,,false,false,,,,,,,,,,, PySAMOSA,"A Python-based software for processing open ocean and coastal waveforms from SAR satellite altimetry to measure sea surface heights, wave heights, and wind speed for the oceans and inland waters.",floschl,https://github.com/floschl/pysamosa.git,github,,Coastal and Reefs,"2023/08/13, 08:31:57",12,0,12,true,Jupyter Notebook,,,"Jupyter Notebook,Python,Makefile,Shell,Cython",,"b'# PySAMOSA\n\n![CI](https://github.com/floschl/pysamosa/actions/workflows/ci.yml/badge.svg)\n![Release](https://github.com/floschl/pysamosa/actions/workflows/release.yml/badge.svg)\n![PyPI](https://img.shields.io/pypi/v/pysamosa)\n[![DOI](https://zenodo.org/badge/646028227.svg)](https://zenodo.org/badge/latestdoi/646028227)\n[![License: LGPL v3](https://img.shields.io/badge/License-LGPL_v3-blue.svg)](https://www.gnu.org/licenses/lgpl-3.0)
\n\nPySAMOSA is a Python-based software for processing open ocean and coastal waveforms from SAR satellite\naltimetry to measure sea surface heights, wave heights, and wind speed for the oceans and inland waters.\nSatellite altimetry is a space-borne remote sensing technique used for Earth observation. More details on satellite\naltimetry can be found [here](https://www.altimetry.info/file/Radar_Altimetry_Tutorial.pdf).\n\nThe process of extracting the three geophysical parameters from the reflected echoes/waveforms is called retracking. The measured (noisy) waveforms are fitted against the open ocean power return echo waveform model SAMOSA2 [1,2].\n\nIn the coastal zone, the return echoes are affected by spurious signals from strongly reflective targets such as sand and mud banks, tidal flats, shipping platforms, sheltered bays, or calm waters close to the shoreline.\n\nThe following European Space Agency (ESA) satellite altimetry missions are supported:\n- Sentinel-3 (S3)\n- Sentinel-6 Michael Freilich (S6-MF)\n\nThe software retracks the waveforms, i.e. the Level-1b (L1b) data, to extract the\nretracked variables SWH, range, and Pu.\n\nThe open ocean retracker implementation specification documents from the official EUMETSAT baseline are used (S3 [1],\nS6-MF [2]).\n\nFor retracking coastal waveforms the following retrackers are implemented:\n- SAMOSA+ [3]\n- CORAL [4,5]\n\nIn addition, FF-SAR-processed S6-MF data can be retracked using the zero-Doppler beam of the SAMOSA2 model and a\nspecially adapted $\\alpha_p$ LUT, created by the ESA L2 GPP project [7]. The application of the FF-SAR-processed data\nhas been validated in [5].\n\nNot validated (experimental) features:\n- CryoSat-2 (CS2) support\n- SAMOSA++ coastal retracker [2]\n- Monte Carlo SAMOSA2 simulator\n\n## Getting-started\n\n### Usage\n\nInstall pysamosa into your environment\n\n $ pip install pysamosa\n\nThis sample retracks a single L1b file from the S6-MF mission:\n\n``` python\nfrom pathlib import Path\nimport numpy as np\n\nfrom pysamosa.common_types import L1bSourceType\nfrom pysamosa.data_access import data_vars_s3, data_vars_s6\nfrom pysamosa.retracker_processor import RetrackerProcessor\nfrom pysamosa.settings_manager import get_default_base_settings, SettingsPreset\n\n\nl1b_files = []\n# l1b_files.append(Path(\'S6A_P4_1B_HR______20211120T051224_20211120T060836_20220430T212619_3372_038_018_009_EUM__REP_NT_F06.nc\'))\nl1b_files.append(Path.cwd().parent / \'.data\' / \'s6\' / \'l1b\' / \'S6A_P4_1B_HR______20211120T051224_20211120T060836_20220430T212619_3372_038_018_009_EUM__REP_NT_F06.nc\')\n\nl1b_src_type = L1bSourceType.EUM_S6_F06\ndata_vars = data_vars_s6\n\n# configure coastal CORAL retracker\npres = SettingsPreset.CORALv2\nrp_sets, retrack_sets, fitting_sets, wf_sets, sensor_sets = get_default_base_settings(settings_preset=pres, l1b_src_type=l1b_src_type)\n\nrp_sets.nc_dest_dir = l1b_files[0].parent / \'processed\'\nrp_sets.n_offset = 0\nrp_sets.n_inds = 0 # 0 means all\nrp_sets.n_procs = 6 # use 6 cores in parallel\nrp_sets.skip_if_exists = False\n\nadditional_nc_attrs = {\n \'L1B source type\': l1b_src_type.value.upper(),\n \'Retracker preset\': pres.value.upper(),\n}\n\nrp = RetrackerProcessor(l1b_source=l1b_files, l1b_data_vars=data_vars[\'l1b\'],\n rp_sets=rp_sets,\n retrack_sets=retrack_sets,\n fitting_sets=fitting_sets,\n wf_sets=wf_sets,\n sensor_sets=sensor_sets,\n nc_attrs_kw=additional_nc_attrs,\n bbox=[np.array([-29.05, -29.00, 0, 360])],\n )\n\nrp.process() # start processing
\n\nprint(rp.output_l2) # retracked L2 output can be found in here\n```\n\nA running minimal working example for retracking is shown in `notebooks/retracking_example.ipynb`.\n\n### Development\n\nIt is highly recommended to use a proper Python IDE, such as\n[PyCharm Community](https://www.jetbrains.com/pycharm/download/) or Visual Studio Code.\nUsing the IDE will allow you to familiarise yourself better with the code, debug and extend it.\n\nClone the repo\n\n $ git clone {repo_url}\n\nEnter the cloned directory\n\n $ cd pysamosa\n\nInstall dependencies into your conda env/virtualenv\n\n $ pip install -r requirements.txt\n\nCompile the .pyx files (e.g. model_helpers.pyx) by running cython to build the extensions.\nFor Windows users: an installed C/C++ compiler may be required for installation, e.g. MSVC, which comes with\nthe free [Visual Studio Community](https://visualstudio.microsoft.com/vs/community/)\n\n $ python setup.py build_ext --inplace\n\nOptional: Compile on an HPC cluster (not normally required)\n\n $ LDSHARED=""icc -shared"" CC=icc python setup.py build_ext --inplace\n\n## Tips\n\nThe following list provides a brief description of the recommended use of the software.\n1. **Getting-started with Jupyter Notebook**\nThe `notebooks/retracking_example.ipynb` contains a sample script showing how to retrack a sample EUMETSAT baseline L1b file.\nThe retracked SWH and range data are compared with the EUMETSAT baseline L2 data. The `notebooks/demo_script.py` provides\nthe code example from above to quickly launch a small retracking example.\n\n2. **More entry points**\nThe files `main_s3.py`, `main_s6.py`, `main_cs.py` (`main_*.py`) etc. serve as entry points for batch processing of\n multiple nc files.\nA list of L1b files (or a single file) is read for retracking; the files are either fully retracked or retracked only within the given\n bounding box (bbox) parameter. A retracked L2 file is written out per processed\n L1b file.\n\n3. **Settings**\nThe `RetrackerProcessor` inputs require the `RetrackerProcessorSettings`, `RetrackerSettings`, `FittingSettings`,\n `WaveformSettings`, and `SensorSettings` objects to be inserted during initialisation. The default settings of these settings objects can be retrieved with the `get_default_base_settings` function based on the\n settings `L1bSourceType` and `SettingsPreset`.\n For instance, the following code snippet is taken from the `main_s3.py` file and retracks Sentinel-3 data with the default SAMOSA-based open ocean retracker with no SettingsPreset (100 waveforms from measurement index 25800,\n and using 6 cores).\n```python\n l1b_src_type = L1bSourceType.EUM_S3\n pres = SettingsPreset.NONE # use this for the standard SAMOSA-based retracker [2]\n # pres = SettingsPreset.CORALv2 # use this for CORALv2 [5]\n # pres = SettingsPreset.NONE # use this for SAMOSA+ [3]\n rp_sets, retrack_sets, fitting_sets, wf_sets, sensor_sets = get_default_base_settings(settings_preset=pres, l1b_src_type=l1b_src_type)\n\n rp_sets.nc_dest_dir = nc_dest_path / run_name\n rp_sets.n_offset = 25800\n rp_sets.n_inds = 100\n rp_sets.n_procs = 6\n rp_sets.skip_if_exists = False\n```
\n\n4. **Evaluation environment**\nThere are several unit tests located in `./pysamosa/tests/` that aim to analyse the retracked output in more detail.\nThe most important test scripts are `test_retrack_multi.py`, which includes retracking of small along-track\nsegments of the S3A, S6, CS2 missions (and a generic input nc file), and\n`test_retrack_single`, which allows you to check the retracking result of a single waveform and compare it to a reference\n retracking result.\n\nPlease uncomment the line `mpl.use(\'TkAgg\')` in the file `conftest.py` to\nplot the test output, which is particularly useful for the retracking tests in the files `tests/test_retrack_multi.py` and `tests/test_retrack_single.py`.\n\n\n5. **Difference between CORALv1 and CORALv2**\n- v2 has two additional extensions that were required for S6-MF:\n- `retrack_sets.interference_masking_mask_before_le = True`:\nInterference signals before the leading edge are also masked out by the adaptive interference mitigation scheme (AIM, CORAL feature)\n- `fitting_sets.Fit_Var_2_MinMax_Hs = (0.0, 20)`:\nthe lower SWH boundary for the fitting procedure is set to 0.0, as defined in [2]\n\n6. **Quality flag**\nDuring the retracking process, the quality flag variables `swh_qual` and `range_qual` (where the latter is just a copy of the former)\nare part of the retracked output and indicate the quality of the retracking of each individual waveform (0=good, 1=bad).\nThis makes a difference particularly in coastal scenarios where the waveforms are affected by spurious signals which tend\nto cause incorrectly retracked waveforms. The CORAL coastal retracker maximises the number of valid records in the coastal zone.\nWe therefore emphasise the importance of considering the `swh_qual`/`range_qual` quality flags in the retracked product.
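\n\nAs an illustration of applying these flags (a sketch, not from the original README: the file name is hypothetical, and it assumes the retracked L2 netCDF exposes an `swh` variable alongside `swh_qual`):\n```python\nimport xarray as xr\n\n# open one of the retracked L2 files written by the RetrackerProcessor\nl2 = xr.open_dataset(\'processed/my_retracked_l2_file.nc\')\n\n# keep only records flagged as good (0=good, 1=bad)\nswh_good = l2[\'swh\'].where(l2[\'swh_qual\'] == 0)\n```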
\n\n## Validation\n\n### Run tests\n\nTo run all the unit tests (using the pytest framework), run\n\n $ pytest\n\n### Comparison with EUMETSAT L2 baseline data\n\nComparison of a retracked open ocean segment from the S3 and S6-MF missions with the EUMETSAT L2 baseline (S3: 004,\nS6-MF: F06)\n(generated by the `notebooks/retracking_example.ipynb` Jupyter notebook)\n\nS3 | S6-MF\n:-:|:-:\n![](https://github.com/floschl/pysamosa/blob/main/resources/S3_comparison_w_baseline.jpg?raw=true) | ![](https://github.com/floschl/pysamosa/blob/main/resources/S6_comparison_w_baseline.jpg?raw=true)\n\n## Contributions\n\nThis software is intended to be a community-based project. Contributions to this project are very welcome.\nTo contribute:\n- Fork this repository\n- Submit a pull request to be merged back into this repository.\n\nBefore submitting changes, please check that your changes pass flake8, black, isort and the\n tests, including testing other Python versions with tox:\n\n $ flake8 pysamosa tests scripts\n $ black . --check --diff\n $ isort . --check-only --diff\n $ pytest\n $ tox\n\nIf your pull request is accepted, you will be included in the next official release and will be listed as a\nco-author for the DOI link created by Zenodo.\n\n## Future work\n\nPossible developments of this project are:\n\nRetracking-related\n- Align CS-2 retracking with the CS-2 baseline processing chain, validate against\n[SAMpy](https://github.com/cls-obsnadir-dev/SAMPy) developed as part of the [ESA Cryo-TEMPO project](https://earth.esa.int/eogateway/documents/20142/37627/Cryo-TEMPO-ATBD-Coastal-Oceans.pdf)\n- Implement evolutions of EUMETSAT\'s baseline processing chain [6], e.g. the numerical retracking planned\n for Q3/2023\n\nSoftware-related\n- Create a notebook for a coastal retracking demo\n- Create richer documentation (readthedocs)\n\n## Citation\n\nIf you use this software or the code, please cite this DOI:\n\nFlorian Schlembach; Marcello Passaro. PySAMOSA: An Open-source Software Framework for Retracking SAMOSA-based, Open\nOcean and Coastal Waveforms of SAR Satellite Altimetry. Zenodo. https://zenodo.org/badge/latestdoi/646028227.\n\n## Acknowledgement\n\nThe authors are grateful to\n\nSalvatore Dinardo for his support in implementing the SAMOSA-based and SAMOSA+ [3] retracking algorithms.\n\nThis package was created with [Cookiecutter](https://github.com/audreyr/cookiecutter) and the\n[audreyr/cookiecutter-pypackage](https://github.com/audreyr/cookiecutter-pypackage) project template.\n\n## References\n\n[1] SAMOSA Detailed Processing Model: Christine Gommenginger, Cristina Martin-Puig, Meric Srokosz, Marco Caparrini, Salvatore Dinardo, Bruno Lucas, Marco Restano, Am\xc3\xa9rico Ambr\xc3\xb3zio and J\xc3\xa9r\xc3\xb4me Benveniste, Detailed Processing Model of the Sentinel-3 SRAL SAR altimeter ocean waveform retracker, Version 2.5.2, 31 October 2017, Under ESA-ESRIN Contract No. 20698/07/I-LG (SAMOSA), Restricted access as defined in the Contract, J\xc3\xa9r\xc3\xb4me Benveniste (Jerome.Benvensite@esa.int) pers. comm.\n\n[2] EUMETSAT. Sentinel-6/Jason-CS ALT Level 2 Product Generation Specification (L2 ALT PGS), Version V4D; 2022.\nhttps://www.eumetsat.int/media/48266.\n\n[3] Dinardo, Salvatore. \xe2\x80\x98Techniques and Applications for Satellite SAR Altimetry over Water, Land and Ice\xe2\x80\x99.\nDissertation, Technische Universit\xc3\xa4t, 2020. https://tuprints.ulb.tu-darmstadt.de/11343/.\n\n[4] Schlembach, F.; Passaro, M.; Dettmering, D.; Bidlot, J.; Seitz, F. Interference-Sensitive Coastal SAR Altimetry\nRetracking Strategy for Measuring Significant Wave Height. Remote Sensing of Environment 2022, 274, 112968. https://doi.org/10.1016/j.rse.2022.112968.\n\n[5] Schlembach, F.; Ehlers, F.; Kleinherenbrink, M.; Passaro, M.; Dettmering, D.; Seitz, F.; Slobbe, C. Benefits of Fully Focused SAR Altimetry to Coastal Wave Height Estimates: A Case Study in the North Sea. Remote Sensing of Environment 2023, 289, 113517. https://doi.org/10.1016/j.rse.2023.113517.\n\n[6] Scharroo, R.; Martin-Puig, C.; Meloni, M.; Nogueira Loddo, C.; Grant, M.; Lucas, B. Sentinel-6 Products Status. Ocean Surface Topography Science Team (OSTST) meeting in Venice 2022. 
https://doi.org/10.24400/527896/a03-2022.3671.\n\n[7] ESA L2 GPP Project: FF-SAR SAMOSA LUT generation was funded under ESA contract 4000118128/16/NL/AI.\n'",",https://zenodo.org/badge/latestdoi/646028227,https://zenodo.org/badge/latestdoi/646028227,https://doi.org/10.1016/j.rse.2022.112968,https://doi.org/10.1016/j.rse.2023.113517,https://doi.org/10.24400/527896/a03-2022.3671","2023/05/27, 04:09:17",152,LGPL-3.0,130,130,"2023/10/06, 09:51:16",0,0,0,0,19,0,0,0.0078125,"2023/08/13, 08:32:20",v1.0.0,0,2,false,,false,false,,,,,,,,,,, Digital Earth Australia Coastlines,Extracting tidally-constrained annual shorelines and robust rates of coastal change from freely available Earth observation data at continental scale.,GeoscienceAustralia,https://github.com/GeoscienceAustralia/dea-coastlines.git,github,"digitalearthaustralia,opendatacube,coastal-change,remote-sensing,erosion,coastal-dynamics,dea,coastlines,satellite-imagery,python,jupyter-notebook,geospatial,earth-observation,gis",Coastal and Reefs,"2023/08/29, 00:14:00",46,0,15,true,PLpgSQL,Geoscience Australia,GeoscienceAustralia,"PLpgSQL,Jupyter Notebook,Python,QML,Dockerfile,Shell",https://cmi.ga.gov.au/data-products/dea/581/dea-coastlines,"b'> \xf0\x9f\x8c\x8f **Note: This branch contains the Australia-specific implementation of DEA Coastlines.** For a more flexible implementation adapted to globally available USGS Collection 2 Level 2 Landsat data, we recommend adapting the [Digital Earth Africa Coastlines](https://github.com/digitalearthafrica/deafrica-coastlines/) implementation instead.\n\n![Digital Earth Australia Coastlines](visualisation/images/DEACoastlines_header.gif)\n\n# Digital Earth Australia Coastlines\n\n[![DOI](https://img.shields.io/badge/DOI-10.1016/j.rse.2021.112734-0e7fbf.svg)](https://doi.org/10.1016/j.rse.2021.112734)\n[![License](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](https://opensource.org/licenses/Apache-2.0)\n[![codecov](https://codecov.io/gh/GeoscienceAustralia/dea-coastlines/branch/develop/graph/badge.svg?token=7HXSIPGT5I)](https://codecov.io/gh/GeoscienceAustralia/dea-coastlines)\n[![example workflow](https://github.com/GeoscienceAustralia/dea-coastlines/actions/workflows/test.yaml/badge.svg)](https://github.com/GeoscienceAustralia/dea-coastlines/actions/workflows/test.yaml)\n\n**License:** The code in this repository is licensed under the [Apache License, Version 2.0](https://www.apache.org/licenses/LICENSE-2.0). Digital Earth Australia data is licensed under the [Creative Commons by Attribution 4.0 license](https://creativecommons.org/licenses/by/4.0/).\n\n**Contact:** For assistance with any of the Python code or Jupyter Notebooks in this repository, please post a [Github issue](https://github.com/GeoscienceAustralia/dea-coastlines/issues/new). For questions or more information about this workflow, email Robbi.BishopTaylor@ga.gov.au.\n\n**To cite:**\n> Bishop-Taylor, R., Nanson, R., Sagar, S., Lymburner, L. (2021). Mapping Australia\'s dynamic coastline at mean sea level using three decades of Landsat imagery. _Remote Sensing of Environment_, 267, 112734. Available: https://doi.org/10.1016/j.rse.2021.112734\n\n> Nanson, R., Bishop-Taylor, R., Sagar, S., Lymburner, L., (2022). Geomorphic insights into Australia\'s coastal change using a national dataset derived from the multi-decadal Landsat archive. _Estuarine, Coastal and Shelf Science_, 265, p.107712. 
Available: https://doi.org/10.1016/j.ecss.2021.107712\n\n> Bishop-Taylor, R., Sagar, S., Lymburner, L., Alam, I., Sixsmith, J. (2019). Sub-pixel waterline extraction: characterising accuracy and sensitivity to indices and spectra. _Remote Sensing_, 11 (24):2984. Available: https://doi.org/10.3390/rs11242984\n\n---\n\n[**Digital Earth Australia Coastlines**](https://maps.dea.ga.gov.au/story/DEACoastlines) is a continental dataset that includes annual shorelines and rates of coastal change along the entire Australian coastline from 1988 to the present.\n\nThe product combines satellite data from Geoscience Australia\'s [Digital Earth Australia program](https://www.ga.gov.au/dea) with tidal modelling to map the typical location of the coastline at mean sea level for each year. The product enables trends of coastal erosion and growth to be examined annually at both a local and continental scale, and for patterns of coastal change to be mapped historically and updated regularly as data continues to be acquired. This allows current rates of coastal change to be compared with that observed in previous years or decades.\n\nThe ability to map shoreline positions for each year provides valuable insights into whether changes to our coastline are the result of particular events or actions, or a process of more gradual change over time. This information can enable scientists, managers and policy makers to assess impacts from the range of drivers affecting our coastlines and potentially assist planning and forecasting for future scenarios.\n\n#### Applications\n* Monitoring and mapping rates of coastal erosion along the Australian coastline\n* Prioritising and evaluating the impacts of local and regional coastal management based on historical coastline change\n* Modelling how coastlines respond to drivers of change, including extreme weather events, sea level rise or human development\n* Supporting geomorphological studies of how and why coastlines have changed across time\n\n---\n\n## Table of contents\n- [Digital Earth Australia Coastlines](#digital-earth-australia-coastlines)\n - [Applications](#applications)\n - [Table of contents](#table-of-contents)\n - [Repository code](#repository-code)\n - [Getting started](#getting-started)\n - [FES2014 tidal model](#fes2014-tidal-model)\n - [Python modules](#python-modules)\n - [Jupyter notebooks](#jupyter-notebooks)\n - [Running a DEA Coastlines analysis using the command-line interface (CLI)](#running-a-dea-coastlines-analysis-using-the-command-line-interface-cli)\n - [Analysis outputs](#analysis-outputs)\n - [Data access](#data-access)\n - [Data download](#data-download)\n - [Interactive map](#interactive-map)\n - [Loading DEA Coastlines data from the Web Feature Service (WFS) using Python](#loading-dea-coastlines-data-from-the-web-feature-service-wfs-using-python)\n - [Loading DEA Coastlines data from the Web Feature Service (WFS) using R](#loading-dea-coastlines-data-from-the-web-feature-service-wfs-using-r)\n - [Jupyter Notebook](#jupyter-notebook)\n - [Credits](#credits)\n - [References](#references)\n\n---\n\n## Repository code\nThe code in this repository is built on the Digital Earth Australia implementation of the [Open Data Cube](https://www.opendatacube.org/) software for accessing, managing, and analyzing large quantities of Earth observation (EO) data.\nThe code currently runs on the [Digital Earth Australia Sandbox](https://app.sandbox.dea.ga.gov.au/) infrastructure.\n\n#### Getting started\n\nClone the `dea-coastlines` repository and check out the 
`develop` branch:\n```\ngit clone https://github.com/GeoscienceAustralia/dea-coastlines.git\ncd dea-coastlines\ngit checkout --track origin/develop\n```\n\n##### FES2014 tidal model\nDEA Coastlines uses the FES2014 tidal model to account for the influence of tide on shoreline positions.\nTo install this tidal model, follow the [Setting up tidal models for DEA Coastlines guide on the Wiki](https://github.com/GeoscienceAustralia/dea-coastlines/wiki/Setting-up-tidal-models-for-DEA-Coastlines).\n\n#### Python modules\n\nCode in this repository is included in the `coastlines` Python package, which contains three main modules. These are intended to be run in the following order:\n\n1. [`coastlines.raster`](coastlines/raster.py): This module conducts raster generation for DEA Coastlines. This analysis is processed on individual study area tiles to minimise peak memory usage.\n\n * Load stack of all available Landsat 5, 7 and 8 satellite imagery for a location using [ODC Virtual Products](https://docs.dea.ga.gov.au/notebooks/Frequently_used_code/Virtual_products.html)\n * Convert each satellite image into a remote sensing water index (e.g. MNDWI)\n * For each satellite image, model ocean tides into a tidal modelling grid based on the exact time of image acquisition\n * Interpolate tide heights into the spatial extent of the image stack\n * Mask out high and low tide pixels by removing all observations acquired outside of 50 percent of the observed tidal range centered over mean sea level\n * Combine tidally-masked data into annual median composites that capture the most representative position of the shoreline at approximately mean sea level each year\n\n2. [`coastlines.vector`](coastlines/vector.py): This module conducts vector subpixel coastline extraction and rates of change statistics calculation. This analysis is processed on individual study area tiles to minimise peak memory usage.\n\n * Apply morphological extraction algorithms to mask annual median composite rasters to a valid coastal region\n * Extract shoreline vectors using subpixel waterline extraction ([Bishop-Taylor et al. 2019b](https://doi.org/10.3390/rs11242984))\n * Compute rates of coastal change at every 30 m along the coastline using linear regression\n\n3. [`coastlines.continental`](coastlines/continental.py): This module combines tiled layers into seamless continental-scale vector files:\n\n * Combines multiple output shoreline and rates of change statistics point vectors into single continental datasets\n * Aggregates this data to produce moving window coastal change hotspot datasets that summarise coastal change at regional and continental scale.\n\n\n#### Jupyter notebooks\nAn interactive walk-through of each step of the tiled raster and vector DEA Coastlines workflow and the continental layer generation is provided in the following Jupyter Notebooks. These notebooks can be run on the [DEA Sandbox](https://app.sandbox.dea.ga.gov.au/) to assist in prototyping or troubleshooting:\n* [DEA Coastlines raster generation](notebooks/DEACoastlines_generation_raster.ipynb)\n* [DEA Coastlines vector generation](notebooks/DEACoastlines_generation_vector.ipynb)\n* [DEA Coastlines continental hotspots](notebooks/DEACoastlines_generation_continental.ipynb)\n\n### Running a DEA Coastlines analysis using the command-line interface (CLI)\n\nThese three modules have a command-line interface that can be used to automate each stage of the analysis. 
#### Jupyter notebooks
An interactive walk-through of each step of the tiled raster and vector DEA Coastlines workflow and the continental layer generation is provided in the following Jupyter Notebooks. These notebooks can be run on the [DEA Sandbox](https://app.sandbox.dea.ga.gov.au/) to assist in prototyping or troubleshooting:
* [DEA Coastlines raster generation](notebooks/DEACoastlines_generation_raster.ipynb)
* [DEA Coastlines vector generation](notebooks/DEACoastlines_generation_vector.ipynb)
* [DEA Coastlines continental hotspots](notebooks/DEACoastlines_generation_continental.ipynb)

#### Running a DEA Coastlines analysis using the command-line interface (CLI)

These three modules have a command-line interface that can be used to automate each stage of the analysis. An example of using these tools is provided in the following Jupyter Notebook:
* [DEA Coastlines generation using command line tools](notebooks/DEACoastlines_generation_CLI.ipynb)

For help using these command line tools, run:
```
python -m coastlines.raster --help
python -m coastlines.vector --help
python -m coastlines.continental --help
```

#### Analysis outputs
Files generated by DEA Coastlines are exported to the `data` directory.

Temporary raster and vector outputs produced by [`coastlines.raster`](coastlines/raster.py) and [`coastlines.vector`](coastlines/vector.py) for each study area grid cell are exported to:
```
data/interim/raster/{unique_analysis_name}/{unique_analysis_name}_{study_area_name}
data/interim/vector/{unique_analysis_name}/{unique_analysis_name}_{study_area_name}
```

Once all study area grid cells have been processed, these are combined into a continental-scale output GeoPackage vector file and zipped ESRI Shapefiles using [`coastlines.continental`](coastlines/continental.py). These final outputs are exported to:
```
data/processed/{unique_analysis_name}/coastlines_{continental_version}.gpkg
data/processed/{unique_analysis_name}/coastlines_{continental_version}.shp.zip
```

---
## Data access

#### Data download

To download DEA Coastlines data for the entire Australian coastline, visit the "Access" tab of the [Geoscience Australia DEA Coastlines product description](https://cmi.ga.gov.au/data-products/dea/581/dea-coastlines#access) and follow the instructions under "Access notes". Data is available in two formats:

* GeoPackage (recommended): suitable for QGIS; includes built-in symbology for easier interpretation
* ESRI Shapefiles: suitable for ArcMap and QGIS
#### Interactive map

To explore DEA Coastlines on an interactive map, visit the [Digital Earth Australia Maps platform](https://maps.dea.ga.gov.au/story/DEACoastlines).

![Zooming to annual rates of change and plotting chart in DEA Maps](https://data.dea.ga.gov.au/projects/coastlines/DEACoastLines_DEAMaps_1.gif)


#### Loading DEA Coastlines data from the Web Feature Service (WFS) using Python

DEA Coastlines data can be loaded directly in a Python script or Jupyter Notebook using the DEA Coastlines Web Feature Service (WFS) and `geopandas`:

```
import geopandas as gpd

# Specify bounding box
ymax, xmin = -33.65, 115.28
ymin, xmax = -33.66, 115.30

# Set up WFS requests for annual shorelines & rates of change points
deacl_annualshorelines_wfs = f'https://geoserver.dea.ga.gov.au/geoserver/wfs?' \
                             f'service=WFS&version=1.1.0&request=GetFeature' \
                             f'&typeName=dea:shorelines_annual&maxFeatures=1000' \
                             f'&bbox={ymin},{xmin},{ymax},{xmax},' \
                             f'urn:ogc:def:crs:EPSG:4326'
deacl_ratesofchange_wfs = f'https://geoserver.dea.ga.gov.au/geoserver/wfs?' \
                          f'service=WFS&version=1.1.0&request=GetFeature' \
                          f'&typeName=dea:rates_of_change&maxFeatures=1000' \
                          f'&bbox={ymin},{xmin},{ymax},{xmax},' \
                          f'urn:ogc:def:crs:EPSG:4326'

# Load DEA Coastlines data from WFS using geopandas
deacl_annualshorelines_gdf = gpd.read_file(deacl_annualshorelines_wfs)
deacl_ratesofchange_gdf = gpd.read_file(deacl_ratesofchange_wfs)

# Ensure CRSs are set correctly
deacl_annualshorelines_gdf.crs = 'EPSG:3577'
deacl_ratesofchange_gdf.crs = 'EPSG:3577'

# Optional: keep only rates of change points with "good" certainty
# (i.e. no poor quality flags)
deacl_ratesofchange_gdf = deacl_ratesofchange_gdf.query("certainty == 'good'")
```

#### Loading DEA Coastlines data from the Web Feature Service (WFS) using R

DEA Coastlines data can be loaded directly into `R` using the DEA Coastlines Web Feature Service (WFS) and `sf`:

```
library(magrittr)
library(glue)
library(sf)

# Specify bounding box
xmin = 115.28
xmax = 115.30
ymin = -33.66
ymax = -33.65

# Read in DEA Coastlines annual shoreline data, using `glue` to insert our bounding
# box into the string, and `sf` to load the spatial data from the Web Feature Service
# and set the Coordinate Reference System to Australian Albers (EPSG:3577)
deacl_annualshorelines = "https://geoserver.dea.ga.gov.au/geoserver/wfs?service=WFS&version=1.1.0&request=GetFeature&typeName=dea:shorelines_annual&maxFeatures=1000&bbox={ymin},{xmin},{ymax},{xmax},urn:ogc:def:crs:EPSG:4326" %>%
  glue::glue() %>%
  sf::read_sf() %>%
  sf::st_set_crs(3577)

# Read in DEA Coastlines rates of change points
deacl_ratesofchange = "https://geoserver.dea.ga.gov.au/geoserver/wfs?service=WFS&version=1.1.0&request=GetFeature&typeName=dea:rates_of_change&maxFeatures=1000&bbox={ymin},{xmin},{ymax},{xmax},urn:ogc:def:crs:EPSG:4326" %>%
  glue::glue() %>%
  sf::read_sf() %>%
  sf::st_set_crs(3577)
```

#### Jupyter Notebook
An [Introduction to DEA Coastlines](https://docs.dea.ga.gov.au/notebooks/DEA_datasets/DEA_Coastlines.html) Jupyter notebook providing additional useful tools for loading and analysing DEA Coastlines data can be found on the [DEA Notebooks repository](https://github.com/GeoscienceAustralia/dea-notebooks). This notebook is available on the interactive [DEA Sandbox learning and analysis environment](https://docs.dea.ga.gov.au/setup/Sandbox/sandbox.html) for easy access via a web browser.

---
## Credits
Tidal modelling is provided by the [FES2014 global tidal model](https://www.aviso.altimetry.fr/es/data/products/auxiliary-products/global-tide-fes/description-fes2014.html), implemented using the [pyTMD Python package](https://github.com/tsutterley/pyTMD). FES2014 was produced by NOVELTIS, LEGOS, CLS Space Oceanography Division and CNES. It is distributed by AVISO, with support from CNES (http://www.aviso.altimetry.fr/).


## References
> Bishop-Taylor, R., Nanson, R., Sagar, S., Lymburner, L. (2021). Mapping Australia's dynamic coastline at mean sea level using three decades of Landsat imagery. _Remote Sensing of Environment_, 267, 112734. Available: https://doi.org/10.1016/j.rse.2021.112734

> Bishop-Taylor, R., Sagar, S., Lymburner, L., & Beaman, R. J. (2019a). Between the tides: Modelling the elevation of Australia's exposed intertidal zone at continental scale. _Estuarine, Coastal and Shelf Science_, 223, 115-128. Available: https://doi.org/10.1016/j.ecss.2019.03.006

> Bishop-Taylor, R., Sagar, S., Lymburner, L., Alam, I., & Sixsmith, J. (2019b). Sub-pixel waterline extraction: Characterising accuracy and sensitivity to indices and spectra. _Remote Sensing_, 11(24), 2984. Available: https://doi.org/10.3390/rs11242984

> Nanson, R., Bishop-Taylor, R., Sagar, S., Lymburner, L., (2022). Geomorphic insights into Australia's coastal change using a national dataset derived from the multi-decadal Landsat archive. _Estuarine, Coastal and Shelf Science_, 265, 107712.
Available: https://doi.org/10.1016/j.ecss.2021.107712\n'",",https://doi.org/10.1016/j.rse.2021.112734,https://doi.org/10.1016/j.rse.2021.112734\n\n,https://doi.org/10.1016/j.ecss.2021.107712\n\n,https://doi.org/10.3390/rs11242984\n\n---\n\n,https://doi.org/10.3390/rs11242984,https://doi.org/10.1016/j.rse.2021.112734\n\n,https://doi.org/10.1016/j.ecss.2019.03.006\n\n,https://doi.org/10.3390/rs11242984\n\n,https://doi.org/10.1016/j.ecss.2021.107712\n","2020/06/30, 00:32:57",1213,Apache-2.0,184,515,"2023/08/29, 00:14:00",2,39,83,18,58,0,0.3,0.09375,"2023/08/07, 03:55:00",2.1.1,0,3,false,,false,false,,,https://github.com/GeoscienceAustralia,http://www.ga.gov.au/,"Canberra, Australia",,,https://avatars.githubusercontent.com/u/4704285?v=4,,, Thetis,An unstructured grid coastal ocean model built using the Firedrake finite element framework.,thetisproject,https://github.com/thetisproject/thetis.git,github,,Coastal and Reefs,"2023/08/21, 14:01:56",59,0,12,true,Python,,thetisproject,"Python,Makefile",,b'# Thetis\n\nFinite element model for simulating coastal and estuarine flows.\n\nSee [thetisproject.org](http://thetisproject.org/) for documentation and installation instructions.\n\nThis project is licensed under the terms of the MIT license.\n',,"2015/12/11, 18:43:31",2875,CUSTOM,37,1999,"2023/08/18, 13:42:23",21,266,327,18,68,9,2.1,0.3755707762557078,"2022/06/24, 07:52:44",Thetis_20220624,0,18,false,,false,false,,,https://github.com/thetisproject,,,,,https://avatars.githubusercontent.com/u/17277005?v=4,,, OceanMesh2D,Precise distance-based two-dimensional automated mesh generation toolbox intended for coastal ocean/shallow water flow models.,CHLNDDEV,https://github.com/CHLNDDEV/OceanMesh2D.git,github,"meshes,geophysics,shallow-water-equations,coastal-modelling,multiscale-simulation,open-source",Coastal and Reefs,"2023/08/11, 16:07:01",156,0,35,true,MATLAB,,,"MATLAB,C++,C,M,Mathematica,TeX,Shell",https://www.overleaf.com/read/hsqjhvtbkgvj,"b'

Precise distance-based two-dimensional automated mesh generation toolbox intended for coastal ocean/shallow water flow models.

Table of contents
=================

 * [OceanMesh2D](#oceanmesh2d)
 * [Table of contents](#table-of-contents)
 * [Getting help](#getting-help)
 * [Contributing](#contributing)
 * [Code framework](#code-framework)
 * [Starting out](#starting-out)
 * [Testing](#testing)
 * [References](#references)
 * [Gallery](#gallery)
 * [Changelog](#changelog)


## IMPORTANT NOTE:
This is the default and recommended `PROJECTION` branch. Please use it unless you require the legacy code (`MASTER` branch) or the absolute newest features (`DEV` branch).

OceanMesh2D is a set of user-friendly MATLAB functions to generate two-dimensional (2D) unstructured meshes for coastal ocean circulation problems. These meshes are based on a variety of feature-driven geometric and bathymetric mesh size functions, which are generated according to user-defined parameters. Mesh generation is achieved through a force-balance algorithm combined with a number of topological improvement strategies aimed at improving the worst-case triangle quality. The software embeds the mesh generation process into an object-oriented framework that contains pre- and post-processing workflows, which makes mesh generation flexible, reproducible, and scriptable.

Getting help
==============

Besides posting [issues](https://github.com/CHLNDDEV/OceanMesh2D/issues) with the code on GitHub, you can also ask questions via our Slack channel [here](https://join.slack.com/t/oceanmesh2d/shared_invite/zt-1b96mhvhw-sHUhP2emepHlGtw0~fWmAg).

Note: if the Slack invite link isn't working, please send either one of us an email and we'll fix it. By default, the invite link expires every 30 days.

Otherwise, please reach out to either Dr. William Pringle (wpringle@anl.gov) or Dr. Keith Roberts (keithrbt0@gmail.com) with questions or concerns, or feel free to start an Issue in the issues tab above.


Contributing
============

All contributions are welcome!

To contribute to the software:

1. [Fork](https://docs.github.com/en/free-pro-team@latest/github/getting-started-with-github/fork-a-repo) the repository.
2. Clone the forked repository, add your contributions and push the changes to your fork.
3. Create a [Pull request](https://github.com/CHLNDDEV/OceanMesh2D/pulls)

Before creating the pull request, make sure that the examples pass.

Some things that will increase the chance that your pull request is accepted:
- Write minimal working [examples](https://en.wikipedia.org/wiki/Minimal_working_example#:~:text=In%20computing%2C%20a%20minimal%20working,to%20be%20demonstrated%20and%20reproduced) that demonstrate the functionality.
- Write good commit and pull request messages.

Code framework
================
`OceanMesh2D` consists of four standalone classes that are called in sequence.
It requires no paid toolboxes to build meshes and has been tested to work with a trial version of MATLAB.

    OceanMesh2D::
    ├── geodata -- process geospatial data.
    ├── edgefx  -- build mesh size functions.
    ├── meshgen -- generate mesh based on mesh size functions and boundaries.
    └── msh     -- store, write, read, inspect, and visualize meshes and their auxiliary components for numerical simulation.

Starting Out
============

Clone or download and unzip the current [repository](https://github.com/CHLNDDEV/OceanMesh2D/archive/Projection.zip)

PLEASE READ THE USER GUIDE!
A recent pdf of the user guide is located in this branch. For a continually updated version, click [here](https://www.overleaf.com/read/hsqjhvtbkgvj#/54715995/) (wait for compilation and then click download PDF)

Run the "setup.sh" bash script to download the required m_map package and the base datasets:
- GSHHG global shoreline
- SRTM15_PLUS global topobathy DEM

Additional data required for some of the following examples must be downloaded manually from [here](https://drive.google.com/open?id=1LeQJFKaVCM2K59pKO9jDcB02yjTmJPmL). Specifically, Examples 2, 3, 4, 5 and 5b require additional datasets from the Google Drive folder, while the base datasets are sufficient for the other examples.
```
Featured in          ┌╼  Examples/Example_1_NZ.m   %<- A simple mesh around South Island, New Zealand, that uses the GSHHS shoreline.
user guide           ├── Examples/Example_2_NY.m   %<- A high-resolution mesh around the New York/Manhattan area that uses a DEM created from LiDAR data.
                     └── Examples/Example_3_ECGC.m %<- Builds a mesh for the western North Atlantic with a local high-resolution nest around New York.
Featured in          ┌╼  Examples/Example_4_PRVI.m %<- Builds a mesh for the western North Atlantic with three high-resolution nests around Puerto Rico and the US Virgin Islands.
Geoscientific Model  ├── Examples/Example_5_JBAY.m %<- An extremely high-fidelity (15-m) mesh from LiDAR data around Jamaica Bay with CFL-limiting.
Development paper[1] └── Examples/Example_6_GBAY.m %<- An example of the polyline/thalweg mesh size function along the Houston Ship Channel.

```

See [Testing](#testing) to test OceanMesh2D on your system.

Testing
==========

To ensure the software is fully functional on your system before building some crazy meshes, it is strongly recommended to run the tests (RunTests.m) in the Tests/ directory.

We test all pull requests using this test suite on a local host before accepting. For substantial pull requests, we will also test the Examples from the Examples/ directory.

References
==============

If you make use of `OceanMesh2D`, please include a reference to [1], and to any of [2]-[5] if pertinent ([latex .bib file](https://github.com/CHLNDDEV/OceanMesh2D/tree/Projection/UserGuide/OceanMesh2D_library.bib)). We would also appreciate using our [logo](https://github.com/CHLNDDEV/OceanMesh2D/tree/Projection/imgs) in a presentation featuring `OceanMesh2D`.
```

[1] - Roberts, K. J., Pringle, W. J., and Westerink, J. J., 2019.
      OceanMesh2D 1.0: MATLAB-based software for two-dimensional unstructured mesh generation in coastal ocean modeling,
      Geoscientific Model Development, 12, 1847-1868. https://doi.org/10.5194/gmd-12-1847-2019.
[2] - Roberts, K. J., Pringle, W. J., 2018.
      OceanMesh2D: User guide - Precise distance-based two-dimensional automated mesh generation toolbox intended for coastal
      ocean/shallow water. https://doi.org/10.13140/RG.2.2.21840.61446/2.
[3] - Roberts, Keith J. Unstructured Mesh Generation and Dynamic Load Balancing for Coastal Ocean Hydrodynamic Simulation, 2019.
      PhD Thesis, University of Notre Dame. https://curate.nd.edu/show/4q77fr0022c.
[4] - Roberts, Keith J., Pringle, W. J., Westerink, J. J., Contreras, M.T., Wirasaet, D., 2019.
      On the automatic and a priori design of unstructured mesh resolution for coastal ocean circulation models,
      Ocean Modelling, 144, 101509. https://doi.org/10.1016/j.ocemod.2019.101509.
[5] - Pringle, W. J., Wirasaet, D., Roberts, K. J., and Westerink, J. J., 2021.
      Global Storm Tide Modeling with ADCIRC v55: Unstructured Mesh Design and Performance,
      Geoscientific Model Development, 14(2), 1125-1145. https://doi.org/10.5194/gmd-14-1125-2021.

```
In addition, best practice when using software in a scientific publication is to cite the permanent DOI corresponding to the version used (e.g., for reproducibility). All our releases are archived at the following `Zenodo` repository DOI [link](https://doi.org/10.5281/zenodo.1341384):
```
Authors (202X). CHLNDDEV/OceanMesh2D: OceanMesh2D VX.X. Zenodo. https://doi.org/10.5281/zenodo.1341384
```
Please fill in the version (VX.X), author list and year corresponding to the version used.

We would also like to acknowledge various scripts and algorithms from [`mesh2d`](https://github.com/dengwirda/mesh2d) included in OceanMesh2D that have been developed by @dengwirda. Please also see [`JIGSAW-GEO`](https://github.com/dengwirda/jigsaw-geo-matlab):
```
[i] - Engwirda, D., 2017.
      JIGSAW-GEO (1.0): Locally orthogonal staggered unstructured grid generation for general circulation modelling on the sphere.
      Geoscientific Model Development, 10(6), 2117–2140. https://doi.org/10.5194/gmd-10-2117-2017.
```

## DISCLAIMER
The boundary of the meshing domain must be a polygon (first point equals the last and non-self-intersecting), but it does not need to be simplified. Read the user guide for more information about the inputs.

Gallery
=========
* These images can be made by running the [examples](https://github.com/CHLNDDEV/OceanMesh2D/tree/Projection/Examples)


* The following images are provided by happy users. Please send us your meshes.

Jiangchao Qiu, from his [publication](https://agupubs.onlinelibrary.wiley.com/doi/abs/10.1029/2021EF002638):


Changelog
=========

The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/)

## Unreleased (on current HEAD of the Projection branch)
### Added
- `namelist` and `RSTIMNC` input arguments for the `Make_f15.m` fort.15 generator; updated the help message for all input arguments to `Make_f15`. https://github.com/CHLNDDEV/OceanMesh2D/pull/283
- New stability namelist options to the `Make_f15.m` fort.15 generator. https://github.com/CHLNDDEV/OceanMesh2D/pull/283
- Options for `mesh2dgen` to choose the method of mesh generation (`kind`) and the maximum iteration count (`iter`). https://github.com/CHLNDDEV/OceanMesh2D/pull/272
- Option `improve_with_reduced_quality` to `meshgen`, allowing mesh improvements even when quality is decreasing or a large number of nodes is eliminated, which is sometimes necessary to force the advancement in mesh quality.
- Option `delaunay_elim_on_exit` to `meshgen` to skip the last call to `delaunay_elim` to potentially avoid deleting boundary elements.
- Geoid offset nodal attribute in the `Calc_f13` subroutine. https://github.com/CHLNDDEV/OceanMesh2D/pull/251
- Support for writing Self Attraction and Loading (SAL) files in NetCDF for the ADCIRC model. https://github.com/CHLNDDEV/OceanMesh2D/pull/231
### Changed
- Removed namelists from the default setup in the `Make_f15.m` fort.15 generator; they are now invoked by the user as an input argument instead. https://github.com/CHLNDDEV/OceanMesh2D/pull/283
- Use implicit smoother (`ds=1`) in `msh.clean` when fix points are present. https://github.com/CHLNDDEV/OceanMesh2D/pull/283
- Default filename for the dynamicWaterLevelCorrection is now `null` so that it is not invoked by default. https://github.com/CHLNDDEV/OceanMesh2D/pull/272
- Default mesh improvement strategy is now `ds = 2`.
- Retrieve boundary indices in the `msh.get_boundary_of_mesh` method. https://github.com/CHLNDDEV/OceanMesh2D/pull/259
- `msh.offset63` struct and associated write/make routines for dynamicwaterlevel offset functionality. https://github.com/CHLNDDEV/OceanMesh2D/pull/259
- dynamicWaterLevelCorrection to fort.15 namelist, and PRBCKGRND option to met fort.15 namelist. https://github.com/CHLNDDEV/OceanMesh2D/pull/261
### Fixed
- Writing out the wave coupling timestep `RSTIMNC` on the `WTIMNC` line of fort.15 when `NWS > 300`. https://github.com/CHLNDDEV/OceanMesh2D/pull/283
- `make_bc` call with empty varargin, e.g., when calling inner. https://github.com/CHLNDDEV/OceanMesh2D/pull/283
- `Make_offset63` call with constant offset (length 1 `time_vector`). https://github.com/CHLNDDEV/OceanMesh2D/pull/283
- `ourKNNsearch` call in the `nanfill` option of `Griddata` (`msh.interp`). https://github.com/CHLNDDEV/OceanMesh2D/pull/283
- `m_map` link in `setup.sh` (Issue #277). https://github.com/CHLNDDEV/OceanMesh2D/pull/283
- Updated `Calc_f13.m` to avoid an "Unrecognized variable" error by ensuring "broken" is always defined. https://github.com/CHLNDDEV/OceanMesh2D/pull/282
- Fixed test for likely geographic coordinates in `Make_f15.m`. https://github.com/CHLNDDEV/OceanMesh2D/pull/282
- Updated `Gridded_to_Mesh_SeaBed_DepthAveraged.m` to fix the infinite loop when using `Cal_IT_Fric.m` by filling in the NaNs at greater depths with values from above. https://github.com/CHLNDDEV/OceanMesh2D/pull/280
- Recursive cleaning issues: infinite loop and preservation of fixed points.
- `msh.interp` method for a `K` argument of length 1, and for the test to determine whether the bathymetry grid is irregular. https://github.com/CHLNDDEV/OceanMesh2D/pull/259
- Printing of namelist character strings or numbers. https://github.com/CHLNDDEV/OceanMesh2D/pull/261
- `Make_offset63.m` time interval computation. https://github.com/CHLNDDEV/OceanMesh2D/pull/261 and https://github.com/CHLNDDEV/OceanMesh2D/pull/272
- Removed dependency on the statistics toolbox when using the 'nanfill' option in `msh.interp`. https://github.com/CHLNDDEV/OceanMesh2D/pull/269
- Missing routines for reading in elvstaname and velstaname in readfort15.m by adding the readlinevecname() method. https://github.com/CHLNDDEV/OceanMesh2D/pull/281
- Incorrect reference to `ibtype` was changed to `ibtypee` in the `map_mesh_properties` function. https://github.com/CHLNDDEV/OceanMesh2D/pull/298

## [5.0.0] - 2021-07-29
### Added
- `meshgen.build()` will now rewind the iteration set in the case that mesh improvement cannot improve the qualities. https://github.com/CHLNDDEV/OceanMesh2D/pull/234
- `msh.plot()` now has a `cmap` kwarg in which the user can specify any `cmocean` colormap
- `radius_separated_points` function that trims the points in the mesh to have a specified resolution, which can be used before `m_quiver` so that vectors are evenly plotted. https://github.com/CHLNDDEV/OceanMesh2D/pull/225
- Deleting boundary conditions by specifying their indices in the `msh.object.bd` field. See https://github.com/CHLNDDEV/OceanMesh2D/pull/205
- Ability for the user to set their own axis limits when plotting with `msh.plot()`. https://github.com/CHLNDDEV/OceanMesh2D/pull/224
### Fixed
- Minor fix to `msh.make_bc` using the `auto` method. https://github.com/CHLNDDEV/OceanMesh2D/pull/237
- Correction in setting stereographic projection bounds in `setProj` to make sure points are not pushed outside and become NaNs (was limited to a radius of 178 deg but can now go up to the full 180 deg). https://github.com/CHLNDDEV/OceanMesh2D/pull/225
- Correctly deleting weirs from the boundary object through the `make_bc` delete method. See https://github.com/CHLNDDEV/OceanMesh2D/pull/205
- Array format fix for reading in ibtype and nvell from the fort.14 file and when executing carry_over_weirs. See https://github.com/CHLNDDEV/OceanMesh2D/pull/206
- Fix for irregular grid spacings in DEMs. See https://github.com/CHLNDDEV/OceanMesh2D/pull/204
- Tidal constituents for `Make_f15` can now contain "major8" in addition to other constituents in the string/cell array https://github.com/CHLNDDEV/OceanMesh2D/pull/221
- Correctly collect NDBC and NOS stations in the mesh when creating the `fort15` file using `Make_f15` for meteorological, velocity and elevation records https://github.com/CHLNDDEV/OceanMesh2D/pull/242
### Changed
- `msh.plot()` using the type `bd` option now creates a legend for the different boundary condition types. https://github.com/CHLNDDEV/OceanMesh2D/pull/247
- Forcing facecolor to white in `m_trimesh` so that it does not interfere with the background color option. https://github.com/CHLNDDEV/OceanMesh2D/pull/245
- Made the topographic elevation bound option for the max_ele, wl, slp, and g `edgefx` kwargs consistent; an explanation of this option is included in the `edgefx` help. https://github.com/CHLNDDEV/OceanMesh2D/pull/230
- `m_plot()` function calls `m_grid()` with the background color input kwarg (if the `backcolor` option is used) instead of manual application. https://github.com/CHLNDDEV/OceanMesh2D/pull/225
- `tidal_data_to_ob` function called from `Make_f15` populates boundary condition tidal constituents that do not exist in the tidal database with zero values so that the user can add user-defined values later (previously it did not populate them). https://github.com/CHLNDDEV/OceanMesh2D/pull/225
- Improved `cmocean` pivot handling with discrete colormaps. https://github.com/CHLNDDEV/OceanMesh2D/pull/225
- Renamed `Calc_NLCD_Mannings` to `Calc_Mannings_Landcover`, making an option for the 'ccap' landcover type in addition to 'nlcd' (default), and added the ability to use user-specified interpolation (e.g., nearest, linear, cell-averaging, etc.) of the landcover data to the mesh vertices. https://github.com/CHLNDDEV/OceanMesh2D/pull/221

## [4.0.0] - 2021-03-14
### Added
- `mesh2d` interface improvements to filter small polygons.
- Support for creation of `fort.20` files for forcing rivers by @Jiangchao3
- Mesh "cleaning" modes moderate and aggressive transfer nodal attributes via improvements to `msh.map_mesh_properties`
- `msh.remove_attribute()` method to remove f13 attribute(s)
- New `msh.interp()` `slope_calc` kwarg option to set the method of computing the topographic gradients (slopes), either `rms` [default] or `abs`
- New `extract_subdomain()` `keep_numbering` kwarg option to keep the full mesh triangulation numbering on the subdomain [off by default].

### Changed
- `msh.plot()` overhaul. All options are specified via kwargs.
- The `msh.plot()` option `subset` is now called `subdomain`
- The `msh.plot()` arbitrary f13 option now utilizes the `colormap` kwarg
- `utilities/extract_subdomain` is now called with kwargs.
- Cleaned up `msh.bound_courant_number()` to use `msh.get_boundary_of_mesh()` for Delaunay-triangulation and to allow `msh.clean()` to do the transfer of attributes automatically.
- `msh.plus(obj1,obj2)` can now carry over obj2 f13 attributes if they also exist in obj1
- `msh()` more efficient storing of boundary conditions read in from fort.xx files, and `msh.write()` can write out arbitrary vertex indices (instead of just 1 to NP).

### Fixed
- Boundary labeling fix
- Prompt when labeling bcs using the `outer` kwarg in `make_bc`
- Fix for boundary condition mapping in `msh.map_mesh_properties()`, especially for weirs/barriers
- Fix for barrier mapping in the `msh.plus()` routine
- Fix for `msh.make_bc(m,auto,gdat)` where gdat is empty. In this case it uses the depths on the mesh to determine the open boundaries.
- Check for the `poly2ccw` mapping toolbox function in `kml2struct`
- Fix for `msh.plot()` on log colormap when plotting f13 attributes
### Deleted
- Deprecated `msh.CheckTimestep()` in favour of `msh.bound_courant_number`.
  Added an error message and instructions in the `CheckTimestep` help.

## [3.3.0] - 2020-12-21
### Fixed
- Users without the mapping toolbox could not read in shapefiles because of a bug that required them to have 3D shapefiles.
- Plotting `gdat` with no shoreline.
- Plotting a mesh's bathymetry with a non-zero datum using cmocean.
- Cell-averaging interpolation method in msh.interp fixed for unequal lon-lat DEM grid spacings

### Added
- Mesh patch smoother
- Ability to remesh arbitrary patches of elements within the domain while respecting user-defined mesh sizes and the patch boundaries.
- Ability to use the TPXO9 Atlas for the tidal bcs and sponge (inside tidal_data_to_ob.m and Calc_Sponge.m) by using '**' wildcards in place of the constituent name within the tidal atlas filename (the atlas has an individual file for each constituent).
- Introduced the 'auto_outer' option for the make_bc msh method, which populates the bc for the outermost mesh boundary polygon (ignores islands)
- Changelog to README
- "mapMeshProperties" msh method ports over mesh properties for a mesh subset
- 'invert' option in the msh.interp method to turn off the DEM value inversion typically performed

### Changed
- For the make_bc msh method 'auto'/'auto_outer' options, allowing the 'depth' method of classification to use the interpolated depths on the mesh if gdat is empty.
- Improved help for the make_bc msh method, Make_f15.m and Calc_Sponge.m
- Renamed "ExtractSubDomain.m" to "extract_subdomain.m"
- Improved "extract_subdomain.m" help and facilitated NaN-delimited polygons
- Ability to return the boundary as a cell in the "get_boundary_of_mesh" msh method
- "Example_1_NZ.m" includes an example of plotting bcs of a msh subset
- Using the "mapMeshProperties" method in "fixmeshandcarry"
- Using "fixmeshandcarry" in the "cat" msh method
- Improved warning and error messages for the "interp" msh method
- Added geofactor into "writefort15" for the GAHM vortex model
'",",https://doi.org/10.5194/gmd-12-1847-2019.\n,https://doi.org/10.13140/RG.2.2.21840.61446/2.\n,https://doi.org/10.1016/j.ocemod.2019.101509.\n,https://doi.org/10.5194/gmd-14-1125-2021.\n\n```\nIn,https://doi.org/10.5281/zenodo.1341384,https://doi.org/10.5281/zenodo.1341384\n```\nPlease,https://doi.org/10.5194/gmd-10-2117-2017.\n```\n\n##","2018/06/28, 18:52:40",1945,GPL-3.0,27,785,"2023/08/11, 16:07:02",35,169,267,22,75,5,1.3,0.20993227990970653,"2021/08/02, 01:58:54",V5.0,0,12,false,,false,false,,,,,,,,,,, oceanmesh,A Python package for the development of unstructured triangular meshes that are used in the simulation of coastal ocean circulation.,CHLNDDEV,https://github.com/CHLNDDEV/oceanmesh.git,github,"mesh-generation,coastal-modelling,python",Coastal and Reefs,"2023/05/27, 14:14:05",44,0,13,true,Python,,,"Python,C++,Batchfile,CMake,Makefile",,"b'oceanmesh: Automatic coastal ocean mesh generation
=====================================================
:ocean: :cyclone:

[![Tests](https://github.com/CHLNDDEV/oceanmesh/actions/workflows/testing.yml/badge.svg)](https://github.com/CHLNDDEV/oceanmesh/actions/workflows/testing.yml)

[![CodeCov](https://codecov.io/gh/CHLNDDEV/oceanmesh/branch/master/graph/badge.svg)](https://codecov.io/gh/CHLNDDEV/oceanmesh)

Coastal ocean mesh generation from vector and raster GIS data.

Table of contents
=================

 * [oceanmesh](#oceanmesh)
 * [Table of contents](#table-of-contents)
 * [Functionality](#functionality)
 * [Citing](#citing)
 * [Questions or problems](#questions-or-problems)
 * [Installation](#installation)
 * [Examples](#examples)
 * [Setting the region](#setting-the-region)
 * [Reading in geophysical data](#reading-in-geophysical-data)
 * [Building mesh sizing functions](#building-mesh-sizing-functions)
 * [Mesh generation](#mesh-generation)
 * [Multiscale mesh generation](#multiscale-mesh-generation)
 * [Cleaning up the mesh](#cleaning-up-the-mesh)
 * [Testing](#testing)
 * [License](#license)


Functionality
=============

* A Python package for the development of unstructured triangular meshes that are used in the simulation of coastal ocean circulation. The software integrates mesh generation directly with geophysical datasets such as topo-bathymetric rasters/digital elevation models and shapefiles representing coastal features. It provides the necessary pre- and post-processing tools to ultimately perform a successful numerical simulation with the developed model.
  * Automatically deals with arbitrarily complex shoreline vector datasets that represent complex coastal boundaries, incorporating the data automatically into the mesh generation process.
  * A variety of commonly used mesh size functions to distribute element sizes, which can easily be controlled via a simple scripting application interface.
  * Mesh checking and clean-up methods to avoid simulation problems.


Citing
======

The Python version of the algorithm does not yet have a citation; however, similar algorithms and ideas are shared between both versions.

```
[1] - Roberts, K. J., Pringle, W. J., and Westerink, J. J., 2019.
      OceanMesh2D 1.0: MATLAB-based software for two-dimensional unstructured mesh generation in coastal ocean modeling,
      Geoscientific Model Development, 12, 1847-1868. https://doi.org/10.5194/gmd-12-1847-2019.
```

Questions or problems
======================

Besides posting issues with the code on GitHub, you can also ask questions via our Slack channel [here](https://join.slack.com/t/oceanmesh2d/shared_invite/zt-su1q3lh3-C_j6AIOQPrewqZnanhzN7g).

Otherwise, please reach out to Dr. Keith Roberts (keithrbt0@gmail.com) with questions or concerns!

Please include version information when posting bug reports.
`oceanmesh` uses [versioneer](https://github.com/python-versioneer/python-versioneer).

The version can be inspected through Python
```
python -c "import oceanmesh; print(oceanmesh.__version__)"
```
or through
```
python setup.py version
```
in the working directory.

To see what's going on with `oceanmesh` when running scripts, you can turn on logging (which is suppressed by default) by importing the two modules `sys` and `logging` and then placing one of the three following logging commands after the imports in your calling script. The amount of information returned is the greatest with `logging.DEBUG` and the least with `logging.WARNING`.
```
import logging
import sys

logging.basicConfig(stream=sys.stdout, level=logging.WARNING)
# logging.basicConfig(stream=sys.stdout, level=logging.INFO)
# logging.basicConfig(stream=sys.stdout, level=logging.DEBUG)
```

Installation
============

The notes below refer to installation on platforms other than MS Windows. For Windows, refer to the following section.
For installation, `oceanmesh` needs [cmake](https://cmake.org/) and [CGAL](https://www.cgal.org/):

    sudo apt install cmake libcgal-dev

CGAL can also be installed with [`conda`](https://www.anaconda.com/products/individual):

    conda install -c conda-forge cgal

After that, clone the repo, and `oceanmesh` can be updated/installed using pip.

    pip install -U -e .

On some clusters/HPC systems, in order to install CGAL you may need to load/install [gmp](https://gmplib.org/) and [mpfr](https://www.mpfr.org/).
For example, to install:

    sudo apt install libmpfr-dev libgmp3-dev

Installation on Windows
=======================

Python under Windows can easily experience DLL hell due to version incompatibilities. Such is the case for the combination of support packages required for OceanMesh. For this reason, we have provided 'install_cgal.bat' to build a CGAL development distribution separately as a prerequisite.

Prerequisites to build CGAL using the provided batch file are as follows:

* Windows 10 or later
* Visual Studio with C++
* CMake
* Git

After successful installation of a CGAL development package, proceed via one of the two options below to generate a Python environment with OceanMesh installed.

If you are using a conda-based Python distribution, then 'install_oceanmesh.bat' should take care of everything, provided no package conflicts arise.

If you have a different Python distribution, or if you do not want to use packages from conda-forge, then the following process may be required:

1. Obtain binary wheels for your Python distribution for the latest GDAL, Fiona, and Rasterio (https://www.lfd.uci.edu/~gohlke/pythonlibs).
2. Create a new virtual environment and activate it.
3. Execute: cmd.exe /C "for %f in (GDAL\*.whl Fiona\*.whl rasterio\*.whl) do pip install %f"
4. Execute: pip install geopandas rasterio scikit-fmm pybind11
5. Execute: python setup.py install

:warning:

**WARNING: THIS PROGRAM IS IN ACTIVE DEVELOPMENT. INSTALLATION IS ONLY RECOMMENDED FOR DEVELOPERS AT THIS TIME. WHEN A STABLE API IS REACHED, THE PROGRAM WILL BE AVAILABLE VIA pypi**

Examples
==========

Setting the region
------------------
`oceanmesh` can mesh a polygonal region in the vast majority of coordinate reference systems (CRS). Thus, all `oceanmesh` scripts begin by declaring the extent and CRS to be used and/or transforming to a different CRS, like this:

```python
import oceanmesh as om

EPSG = 32619  # A Python int, dict, or str containing the CRS information (in this case UTM19N)
bbox = (
    -70.29637,
    -69.65537,
    43.56508,
    43.88338,
)  # the extent of the domain as (xmin, xmax, ymin, ymax) (can also be a multi-polygon delimited by rows of np.nan)
extent = om.Region(
    extent=bbox, crs=4326
)  # set the region (the bbox is given here in EPSG:4326 or WGS84)
extent = extent.transform_to(EPSG)  # now transform to the desired EPSG (UTM19N)
print(
    extent.bbox
)  # now the extents are in the desired CRS and can be passed to various functions later on
```


Reading in geophysical data
---------------------------
`oceanmesh` uses shoreline vector datasets (i.e., ESRI shapefiles) and digital elevation models (DEMs) to construct mesh size functions and signed distance functions to adapt mesh resolution for complex and irregularly shaped coastal ocean domains.

Shoreline datasets are necessary to build signed distance functions, which define the meshing domain.
Here we show how to download a world shoreline dataset referred to as [GSHHG](https://www.ngdc.noaa.gov/mgg/shorelines/) and read it into `oceanmesh`.

```python
import zipfile

import requests

import oceanmesh as om

# Download and load the GSHHS shoreline
url = "http://www.soest.hawaii.edu/pwessel/gshhg/gshhg-shp-2.3.7.zip"
filename = url.split("/")[-1]
with open(filename, "wb") as f:
    r = requests.get(url)
    f.write(r.content)

with zipfile.ZipFile("gshhg-shp-2.3.7.zip", "r") as zip_ref:
    zip_ref.extractall("gshhg-shp-2.3.7")

fname = "gshhg-shp-2.3.7/GSHHS_shp/f/GSHHS_f_L1.shp"
EPSG = 4326  # EPSG code for WGS84, which is what you want to mesh in
# Specify an extent to read in and a minimum mesh size in the units of the projection
extent = om.Region(extent=(-75.000, -70.001, 40.0001, 41.9000), crs=EPSG)
min_edge_length = 0.01  # in the units of the projection!
shoreline = om.Shoreline(
    fname, extent.bbox, min_edge_length, crs=EPSG
)  # NB: the Shoreline class assumes WGS84:4326 if not specified
shoreline.plot(
    xlabel="longitude (WGS84 degrees)",
    ylabel="latitude (WGS84 degrees)",
    title="shoreline boundaries",
)
# Using our shoreline, we create a signed distance function
# which will be used for meshing later on.
sdf = om.signed_distance_function(shoreline)
```
![Figure_1](https://user-images.githubusercontent.com/18619644/133544070-2d0f2552-c29a-4c44-b0aa-d3649541af4d.png)

DEMs are used to build some mesh size functions (e.g., wavelength, enforcing size bounds, enforcing maximum Courant bounds) but are not essential for mesh generation purposes. The DEM used below, 'EastCoast.nc', was created using the Python package [elevation](https://github.com/bopen/elevation) with the following command:

```
eio clip -o EastCoast.nc --bounds -74.4 40.2 -73.4 41.2
```
This data is a clip from the [SRTM 30 m](https://lpdaac.usgs.gov/products/srtmgl1nv003/) elevation dataset.

```python
import oceanmesh as om

fdem = "datasets/EastCoast.nc"

# Digital Elevation Models (DEMs) can be read into oceanmesh in
# either the NetCDF format or GeoTiff format provided they are
# in geographic coordinates (WGS84)

# If no extents are passed (i.e., the kwarg bbox), then the entire extent of the
# DEM is read into memory.
# Note: the DEM will be projected to the desired CRS automatically.
EPSG = 4326
dem = om.DEM(fdem, crs=EPSG)
dem.plot(
    xlabel="longitude (WGS84 degrees)",
    ylabel="latitude (WGS84 degrees)",
    title="SRTM 30m",
    cbarlabel="elevation (meters)",
    vmin=-10,  # minimum elevation value in plot
    vmax=10,  # maximum elevation value in plot
)
```
![Figure_2](https://user-images.githubusercontent.com/18619644/133544110-44497a6b-4a5a-482c-9447-cdc2f3663f17.png)

Defining the domain
---------------------
The domain is defined by a signed distance function. A signed distance function can be automatically generated from a complex coastal ocean domain as follows:

```python
import oceanmesh as om

fname = "gshhg-shp-2.3.7/GSHHS_shp/f/GSHHS_f_L1.shp"
EPSG = 4326  # EPSG:4326 or WGS84
extent = om.Region(extent=(-75.00, -70.001, 40.0001, 41.9000), crs=EPSG)
min_edge_length = 0.01  # minimum mesh size in domain in projection
shoreline = om.Shoreline(fname, extent.bbox, min_edge_length)

# build a signed distance function automatically
sdf = om.signed_distance_function(shoreline)
```
In some situations, it is necessary to flip the definition of what's `inside` the meshing domain.
This can be accomplished via the `invert` kwarg of `signed_distance_function`.

```python
sdf = om.signed_distance_function(shoreline, invert=True)
```

Setting `invert=True` will mesh the 'land side' of the domain rather than the ocean.

Building mesh sizing functions
------------------------------
In `oceanmesh`, mesh resolution can be controlled according to a variety of feature-driven geometric and topo-bathymetric functions. In this section, we briefly explain the major functions and present examples and code. Reasonable values for some of these mesh sizing functions and their effect on the numerical simulation of barotropic tides were investigated in [Roberts et al., 2019](https://www.sciencedirect.com/science/article/abs/pii/S1463500319301222)

All mesh size functions are defined on regular Cartesian grids. The properties of these grids are abstracted by the [Grid](https://github.com/CHLNDDEV/oceanmesh/blob/40baeeae313eb8ef285acc395c671c36c1b9605f/oceanmesh/grid.py#L33) class.

### Distance and feature size

A high degree of mesh refinement is often necessary near the shoreline boundary to capture its geometric complexity. If mesh resolution is poorly distributed, critical conveyances may be missed, leading to larger-scale errors in the nearshore circulation. Thus, a mesh size function that is equal to a user-defined minimum mesh size h0 along the shoreline boundary, growing as a linear function of the signed distance d from it, may be appropriate.

```python
import oceanmesh as om

fname = "gshhg-shp-2.3.7/GSHHS_shp/f/GSHHS_f_L1.shp"
EPSG = 4326  # EPSG:4326 or WGS84
extent = om.Region(extent=(-75.00, -70.001, 40.0001, 41.9000), crs=EPSG)
min_edge_length = 0.01  # minimum mesh size in domain in projection
shoreline = om.Shoreline(fname, extent.bbox, min_edge_length)
edge_length = om.distance_sizing_function(shoreline, rate=0.15)
ax = edge_length.plot(
    xlabel="longitude (WGS84 degrees)",
    ylabel="latitude (WGS84 degrees)",
    title="Distance sizing function",
    cbarlabel="mesh size (degrees)",
    hold=True,
)
shoreline.plot(ax=ax)
```
![Figure_3](https://user-images.githubusercontent.com/18619644/133544111-314cb668-7fd2-45db-b754-4dc204617628.png)
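The growth law implied above can be sketched in plain Python. This is illustrative only: the linear rate mirrors `rate=0.15` from the snippet above, and the cap is assumed from the `max_edge_length` kwarg used in later examples; it is not oceanmesh's internal implementation.

```python
# Hypothetical illustration of the linear growth law h(d) = h0 + rate * |d|,
# capped at a maximum edge length where one is supplied.
h0, rate, max_edge_length = 0.01, 0.15, 0.05

def edge_length(d):
    return min(h0 + rate * abs(d), max_edge_length)

print(edge_length(0.0))  # 0.01  -> the minimum size, on the shoreline itself
print(edge_length(0.1))  # 0.025 -> growing linearly a tenth of a degree away
print(edge_length(1.0))  # 0.05  -> capped at the maximum edge length
```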
One major drawback of the distance mesh size function is that the minimum mesh size will be placed evenly along straight stretches of shoreline. If the distance mesh size function generates more vertices than your application can tolerate, a feature mesh size function that places resolution according to the geometric width of the shoreline should be employed instead ([Conroy et al., 2012](https://link.springer.com/article/10.1007/s10236-012-0574-0); [Koko, 2015](https://ideas.repec.org/a/eee/apmaco/v250y2015icp650-664.html)).

In this function, the feature size (e.g., the width of channels and/or tributaries and the radius of curvature of the shoreline) along the coast is estimated by computing distances to the medial axis of the shoreline geometry. In `oceanmesh`, we have implemented an approximate medial axis method closely following [Koko (2015)](https://ideas.repec.org/a/eee/apmaco/v250y2015icp650-664.html).

```python
import oceanmesh as om

fname = "gshhg-shp-2.3.7/GSHHS_shp/f/GSHHS_f_L1.shp"
EPSG = 4326  # EPSG:4326 or WGS84
extent = om.Region(extent=(-75.00, -70.001, 40.0001, 41.9000), crs=EPSG)
min_edge_length = 0.01  # minimum mesh size in domain in projection
shoreline = om.Shoreline(fname, extent.bbox, min_edge_length)
sdf = om.signed_distance_function(shoreline)
# Visualize the medial points
edge_length = om.feature_sizing_function(
    shoreline, sdf, max_edge_length=0.05, plot=True
)
ax = edge_length.plot(
    xlabel="longitude (WGS84 degrees)",
    ylabel="latitude (WGS84 degrees)",
    title="Feature sizing function",
    cbarlabel="mesh size (degrees)",
    hold=True,
    xlim=[-74.3, -73.8],
    ylim=[40.3, 40.8],
)
shoreline.plot(ax=ax)
```
![Figure_4](https://user-images.githubusercontent.com/18619644/133544112-d5fde284-6839-4e45-901d-c81bca9b8000.png)

### Enforcing mesh size gradation

Some mesh size functions will not produce smooth element size transitions when meshed, and this can lead to problems with numerical simulation. All mesh size functions can thus be graded such that neighboring mesh sizes are bounded above by a constant. Mesh grading edits coarser regions and preserves the finer mesh resolution zones.

Repeating the above but applying a gradation rate of 15% produces the following:
```python
import oceanmesh as om

fname = "gshhg-shp-2.3.7/GSHHS_shp/f/GSHHS_f_L1.shp"
EPSG = 4326  # EPSG:4326 or WGS84
extent = om.Region(extent=(-75.00, -70.001, 40.0001, 41.9000), crs=EPSG)
min_edge_length = 0.01  # minimum mesh size in domain in projection
shoreline = om.Shoreline(fname, extent.bbox, min_edge_length)
sdf = om.signed_distance_function(shoreline)
edge_length = om.feature_sizing_function(shoreline, sdf, max_edge_length=0.05)
edge_length = om.enforce_mesh_gradation(edge_length, gradation=0.15)
ax = edge_length.plot(
    xlabel="longitude (WGS84 degrees)",
    ylabel="latitude (WGS84 degrees)",
    title="Feature sizing function with gradation bound",
    cbarlabel="mesh size (degrees)",
    hold=True,
    xlim=[-74.3, -73.8],
    ylim=[40.3, 40.8],
)
shoreline.plot(ax=ax)
```
![Figure_5](https://user-images.githubusercontent.com/18619644/133544114-cedc0750-b33a-4b7c-9fa5-d14b4e169c40.png)

### Wavelength-to-gridscale

In shallow water theory, the wave celerity, and hence the wavelength λ, is proportional to the square root of the depth of the water column. This relationship indicates that more mesh resolution at shallower depths is required to resolve waves that are shorter than those in deep water. With this considered, a mesh size function hwl that ensures a certain number of elements are present per wavelength (usually of the M2-dominant semi-diurnal tidal species, but the frequency of the dominant wave can be specified via kwargs) can be deduced.
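To make the scaling concrete, here is a back-of-the-envelope calculation (plain Python arithmetic, not oceanmesh's internals): in shallow water λ = T√(gH), so targeting N elements per wavelength gives a local mesh size of roughly λ/N.

```python
import math

g = 9.81          # gravitational acceleration (m/s^2)
T = 12.42 * 3600  # M2 tidal period (s)
N = 100           # target elements per wavelength (the `wl` parameter below)

for H in (10.0, 100.0, 1000.0):        # water depth (m)
    wavelength = T * math.sqrt(g * H)  # shallow-water wavelength (m)
    print(f"H = {H:6.0f} m -> h ~ {wavelength / N / 1000:5.1f} km")
```

Shallower water thus demands smaller elements, which is exactly what the sizing function encodes.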
In this snippet, as before, we compute the feature size function, but now we also compute the wavelength-to-gridscale sizing function using the SRTM dataset and take the minimum of all the functions before grading. We discretize the M2 wavelength with 100 elements (`wl=100`):
```python
import oceanmesh as om

fdem = "datasets/EastCoast.nc"
fname = "gshhg-shp-2.3.7/GSHHS_shp/f/GSHHS_f_L1.shp"

min_edge_length = 0.01

dem = om.DEM(fdem, crs=4326)
shoreline = om.Shoreline(fname, dem.bbox, min_edge_length)
sdf = om.signed_distance_function(shoreline)
edge_length1 = om.feature_sizing_function(shoreline, sdf, max_edge_length=0.05)
edge_length2 = om.wavelength_sizing_function(
    dem, wl=100, period=12.42 * 3600
)  # use the M2-tide period (in seconds)
# Compute the minimum of the sizing functions
edge_length = om.compute_minimum([edge_length1, edge_length2])
edge_length = om.enforce_mesh_gradation(edge_length, gradation=0.15)
ax = edge_length.plot(
    xlabel="longitude (WGS84 degrees)",
    ylabel="latitude (WGS84 degrees)",
    title="Feature sizing function + wavelength + gradation bound",
    cbarlabel="mesh size (degrees)",
    hold=True,
    xlim=[-74.3, -73.8],
    ylim=[40.3, 40.8],
)
shoreline.plot(ax=ax)
```
![Figure_7](https://user-images.githubusercontent.com/18619644/133544116-ba0f9404-a01e-4b30-bb0d-841c8f61224d.png)

### Resolving bathymetric gradients

The distance, feature size, and/or wavelength mesh size functions can lead to coarse mesh resolution in deeper waters that under-resolves and smooths over the sharp topographic gradients that characterize the continental shelf break. These slope features can be important for coastal ocean models in order to capture dissipative effects driven by the internal tides, transmissional reflection at the shelf break that controls the astronomical tides, and trapped shelf waves. The bathymetry field often contains excessive detail that is not relevant for most flows; thus, the bathymetry can be smoothed by a variety of filters (e.g., lowpass, bandpass, and highpass filters) before calculating the mesh sizes.

```python
import oceanmesh as om

fdem = "datasets/EastCoast.nc"
fname = "gshhg-shp-2.3.7/GSHHS_shp/f/GSHHS_f_L1.shp"

EPSG = 4326  # EPSG:4326 or WGS84
bbox = (-74.4, -73.4, 40.2, 41.2)
extent = om.Region(extent=bbox, crs=EPSG)
dem = om.DEM(fdem, crs=4326)

min_edge_length = 0.0025  # minimum mesh size in domain in projection
max_edge_length = 0.10  # maximum mesh size in domain in projection
shoreline = om.Shoreline(fname, extent.bbox, min_edge_length)
sdf = om.signed_distance_function(shoreline)

edge_length1 = om.feature_sizing_function(
    shoreline,
    sdf,
    max_edge_length=max_edge_length,
    crs=EPSG,
)
edge_length2 = om.bathymetric_gradient_sizing_function(
    dem,
    slope_parameter=5.0,
    filter_quotient=50,
    min_edge_length=min_edge_length,
    max_edge_length=max_edge_length,
    crs=EPSG,
)
edge_length3 = om.compute_minimum([edge_length1, edge_length2])
edge_length3 = om.enforce_mesh_gradation(edge_length3, gradation=0.15)
```

Cleaning up the mesh
--------------------

After mesh generation has terminated, a secondary round of mesh improvement strategies is applied, focused on improving the geometrically worst-quality triangles that often occur near the boundary of the mesh and can make simulation impossible. Low-quality triangles can occur near the mesh boundary because the geospatial datasets used may contain features that have horizontal length scales smaller than the minimum mesh resolution. To handle this issue, a set of algorithms is applied that iteratively addresses the vertex connectivity problems.
The application of the following mesh improvement strategies results in a simplified mesh boundary that conforms to the user-requested minimum element size.

Topological defects in the mesh can be removed by ensuring that it is valid, defined as having the following properties:

1. the vertices of each triangle are arranged in counterclockwise order;

2. conformity (a triangle is not allowed to have a vertex of another triangle in its interior); and

3. traversability (the number of boundary segments is equal to the number of boundary vertices, which guarantees a unique path along the mesh boundary).

Here are some of the relevant functions to address these common problems.

```python
# Address (1) above.
points, cells = fix_mesh(points, cells)
# Addresses (2)-(3) above. Remove degenerate mesh faces and other common problems in the mesh
points, cells = make_mesh_boundaries_traversable(points, cells)
# Remove elements (i.e., "faces") connected to only one other face
# These typically occur in channels at or near the grid scale.
points, cells = delete_faces_connected_to_one_face(points, cells)
# Remove low-quality boundary elements with quality less than min_qual
points, cells = delete_boundary_faces(points, cells, min_qual=0.15)
# Apply a Laplacian smoother that preserves the element density
points, cells = laplacian2(points, cells)
```


Mesh generation
----------------
Mesh generation is based on the [DistMesh algorithm](http://persson.berkeley.edu/distmesh/) and requires only a signed distance function and a mesh sizing function. These two functions can be defined through the commands elaborated above; however, they can also be straightforward functions that take an array of point coordinates and return the signed distance/desired mesh size.
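For instance, here is a toy, self-contained pair of such functions for a unit-circle domain (hypothetical names, written in plain NumPy rather than taken from the oceanmesh API) illustrating the convention that the signed distance is negative inside the domain, zero on its boundary, and positive outside:

```python
import numpy as np

# Toy signed distance function for a unit circle centred at the origin:
# negative inside the domain, zero on the boundary, positive outside.
def circle_sdf(points):
    return np.hypot(points[:, 0], points[:, 1]) - 1.0

# A matching sizing function: smallest on the boundary, growing away from it.
def circle_sizing(points):
    return 0.05 + 0.15 * np.abs(circle_sdf(points))

pts = np.array([[0.0, 0.0], [1.0, 0.0], [2.0, 0.0]])
print(circle_sdf(pts))     # roughly [-1, 0, 1]
print(circle_sizing(pts))  # roughly [0.2, 0.05, 0.2]
```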
In this example, we demonstrate all of the above to build a mesh around New York, United States, with an approximate minimum element size of around 1 km expanding linearly with distance from the shoreline to an approximate maximum element size of 5 km.

**Here we use the GSHHS shoreline (available [here](http://www.soest.hawaii.edu/pwessel/gshhg/gshhg-shp-2.3.7.zip)) and the Python package `meshio` to write the mesh to a VTK file for visualization in ParaView. Other mesh formats are possible; see `meshio` for more details.**

```python
import meshio
import oceanmesh as om

fname = "gshhg-shp-2.3.7/GSHHS_shp/f/GSHHS_f_L1.shp"

EPSG = 4326  # EPSG:4326, otherwise known as WGS84
extent = om.Region(extent=(-75.00, -70.001, 40.0001, 41.9000), crs=EPSG)
min_edge_length = 0.01  # minimum mesh size in domain in projection

shore = om.Shoreline(fname, extent.bbox, min_edge_length)

edge_length = om.distance_sizing_function(shore, max_edge_length=0.05)

domain = om.signed_distance_function(shore)

points, cells = om.generate_mesh(domain, edge_length)

# remove degenerate mesh faces and other common problems in the mesh
points, cells = om.make_mesh_boundaries_traversable(points, cells)

points, cells = om.delete_faces_connected_to_one_face(points, cells)

# remove boundary elements with quality less than 15%
points, cells = om.delete_boundary_faces(points, cells, min_qual=0.15)

# apply a Laplacian smoother
points, cells = om.laplacian2(points, cells)

# write the mesh with meshio
meshio.write_points_cells(
    "new_york.vtk",
    points,
    [("triangle", cells)],
    file_format="vtk",
)
```

![new_york](https://user-images.githubusercontent.com/18619644/132709756-1759ef99-f810-4edc-9710-66226e851a50.png)

Multiscale mesh generation
---------------------------

The major downside of the DistMesh algorithm is that it cannot handle regional domains with fine mesh refinement or variable datasets, due to the intense memory requirements. The multiscale mesh generation technique addresses these problems and enables an arbitrary number of refinement zones to be incorporated seamlessly into the domain.

Areas of finer refinement are specified using the `generate_multiscale_mesh` function. In this case, the user passes lists of signed distance and edge length functions to the mesh generator, but besides this the user API remains the same as in the previous mesh generation example.
The mesh sizing transitions between nests are handled automatically, to produce meshes suitable for FEM and FVM numerical simulations, through the parameters prefixed with "blend".

```python
import matplotlib.gridspec as gridspec
import matplotlib.pyplot as plt
import matplotlib.tri as tri
import numpy as np

import oceanmesh as om

fname = "gshhg-shp-2.3.7/GSHHS_shp/f/GSHHS_f_L1.shp"
EPSG = 4326  # EPSG:4326 or WGS84
extent1 = om.Region(extent=(-75.00, -70.001, 40.0001, 41.9000), crs=EPSG)
min_edge_length1 = 0.01  # minimum mesh size in domain in projection
bbox2 = np.array(
    [
        [-73.9481, 40.6028],
        [-74.0186, 40.5688],
        [-73.9366, 40.5362],
        [-73.7269, 40.5626],
        [-73.7231, 40.6459],
        [-73.8242, 40.6758],
        [-73.9481, 40.6028],
    ],
    dtype=float,
)
extent2 = om.Region(extent=bbox2, crs=EPSG)
min_edge_length2 = 4.6e-4  # minimum mesh size in domain in projection
s1 = om.Shoreline(fname, extent1.bbox, min_edge_length1)
sdf1 = om.signed_distance_function(s1)
el1 = om.distance_sizing_function(s1, max_edge_length=0.05)
s2 = om.Shoreline(fname, extent2.bbox, min_edge_length2)
sdf2 = om.signed_distance_function(s2)
el2 = om.distance_sizing_function(s2)
# Control the element size transition
# from coarse to fine with the kwargs prefixed with `blend`
points, cells = om.generate_multiscale_mesh(
    [sdf1, sdf2],
    [el1, el2],
)
# remove degenerate mesh faces and other common problems in the mesh
points, cells = om.make_mesh_boundaries_traversable(points, cells)
# remove singly connected elements (elements connected to only one other element)
points, cells = om.delete_faces_connected_to_one_face(points, cells)
# remove poor boundary elements with quality < 15%
points, cells = om.delete_boundary_faces(points, cells, min_qual=0.15)
# apply a Laplacian smoother that preserves the mesh size distribution
points, cells = om.laplacian2(points, cells)

# plot it, showing the different levels of resolution
triang = tri.Triangulation(points[:, 0], points[:, 1], cells)
gs = gridspec.GridSpec(2, 2)
gs.update(wspace=0.5)
plt.figure()

bbox3 = np.array(
    [
        [-73.78, 40.60],
        [-73.75, 40.60],
        [-73.75, 40.64],
        [-73.78, 40.64],
        [-73.78, 40.60],
    ],
    dtype=float,
)

ax = plt.subplot(gs[0, 0])
ax.set_aspect("equal")
ax.triplot(triang, "-", lw=1)
ax.plot(bbox2[:, 0], bbox2[:, 1], "r--")
ax.plot(bbox3[:, 0], bbox3[:, 1], "m--")

ax = plt.subplot(gs[0, 1])
ax.set_aspect("equal")
ax.triplot(triang, "-", lw=1)
ax.plot(bbox2[:, 0], bbox2[:, 1], "r--")
ax.set_xlim(np.amin(bbox2[:, 0]), np.amax(bbox2[:, 0]))
ax.set_ylim(np.amin(bbox2[:, 1]), np.amax(bbox2[:, 1]))
ax.plot(bbox3[:, 0], bbox3[:, 1], "m--")

ax = plt.subplot(gs[1, :])
ax.set_aspect("equal")
ax.triplot(triang, "-", lw=1)
ax.set_xlim(-73.78, -73.75)
ax.set_ylim(40.60, 40.64)
plt.show()
```
![Multiscale](https://user-images.githubusercontent.com/18619644/136119785-8746552d-4ff6-44c3-9aa1-3e4981ba3518.png)


See the tests inside the `testing/` folder for more inspiration. Work is ongoing on this package.

Testing
============

To run the `oceanmesh` unit tests (and turn off plots), check out this repository and type `tox`.
`tox` can be installed via pip.\n\nLicense\n=======\n\nThis software is published under the [GPLv3 license](https://www.gnu.org/licenses/gpl-3.0.en.html)\n'",",https://doi.org/10.5194/gmd-12-1847-2019","2020/07/27, 02:21:05",1186,GPL-3.0,3,36,"2023/06/09, 07:04:47",12,40,56,10,138,2,0.6,0.4736842105263158,"2020/12/21, 23:52:51",V0.0.1,0,7,false,,false,false,,,,,,,,,,, AeoLiS,"Simulating aeolian sediment transport in situations where supply-limiting factors are important, like in coastal environments.",openearth,https://github.com/openearth/aeolis-python.git,github,,Coastal and Reefs,"2023/10/18, 12:02:06",21,0,11,true,Python,OpenEarth,openearth,Python,http://aeolis.readthedocs.io/,"b'![AeoLiS Banner](https://github.com/openearth/aeolis-shortcourse/blob/main/Sandmotor/notebooks/logo.png)\n\n[![ReadTheDocs](http://readthedocs.org/projects/aeolis/badge/?version=latest)](http://aeolis.readthedocs.io/en/latest/)\n[![PyPI](https://img.shields.io/pypi/v/aeolis.svg)](https://pypi.python.org/pypi/aeolis)\n[![PyPI_versions](https://img.shields.io/pypi/pyversions/aeolis.svg)](https://pypi.python.org/pypi/aeolis)\n[![PyPI_status](https://img.shields.io/pypi/status/aeolis.svg)](https://pypi.python.org/pypi/aeolis)\n[![PyPI_format](https://img.shields.io/pypi/format/aeolis.svg)](https://pypi.python.org/pypi/aeolis)\n[![License](https://img.shields.io/pypi/l/aeolis.svg)](https://pypi.python.org/pypi/aeolis)\n[![DOI](https://zenodo.org/badge/7830/openearth/aeolis-python.svg)](https://zenodo.org/badge/latestdoi/7830/openearth/aeolis-python)\n\n# AeoLiS\nAeoLiS is a process-based model for simulating aeolian sediment transport in situations where supply-limiting factors are important,\nlike in coastal environments. Supply limitations currently supported\nare soil moisture content, sediment sorting and armouring, bed slope\neffects, air humidity and roughness elements.\n\nhttps://github.com/openearth/aeolis-python/assets/14054272/128684d6-73ac-4a5f-a186-51559679bd66\n\n## Installation\n\n**Requirements:**\n\n- Python 3.9 or newer\n- pip 22.0 or newer\n- netCDF4\n\n### Installing from PyPI\n\nOn the command line of your working environment (Bash/Shell, Conda, Mamba, or similar), run the following:\n\n```shell\npip install aeolis\n```\n\n> For Windows users, the recommended way to install AeoLiS is to use [Anaconda](https://docs.anaconda.com/free/anaconda/install/windows/).\n\n\n### Installing from source\n\n1. Clone the repository using Git, or download the source code.\n\n2. AeoLiS users may install the package with only the required dependencies. Go to the `aeolis-python` directory and install using pip:\n ```shell\n cd aeolis-python/\n pip install .\n ```\n\n3. AeoLiS users who intend to modify the source code can install additional dependencies for testing and documentation as follows. Go to the root directory `aeolis-python/` and run:\n\n ```shell\n pip install -e .[dev]\n ```\n\n### Running AeoLiS\n\nExamples from the command line:\n\n```shell\naeolis run \n# or wind module\naeolis wind --mean=6 --duration=3600\n```\n\n## Documentation\nDetailed documentation can be found at [AeoLiS ReadTheDocs](http://aeolis.readthedocs.io/)\n\n\n## AeoLiS Developer Team\nMaintenance and development are done by a group of very enthusiastic people.\n\n**Get Involved:**\nRead our [Contribution Guidelines](CONTRIBUTING.md) to learn how you can help develop AeoLiS.\n\n**Current Members:**\n\n- [Bart van Westen](mailto:Bart.vanWesten@deltares.nl) at Deltares\n- [Nick Cohn](mailto:nick.cohn@usace.army.mil) at U.S.
Army Engineer Research and Development Center (ERDC)\n- [Sierd de Vries](mailto:Sierd.deVries@tudelft.nl) (founder) at Delft University of Technology\n- [Christa van IJzendoorn](mailto:C.O.vanIJzendoorn@tudelft.nl) at Delft University of Technology\n- [Caroline Hallin](mailto:E.C.Hallin@tudelft.nl) at Delft University of Technology\n- [Glenn Strypsteen](mailto:glenn.strypsteen@kuleuven.be) at Katholieke Universiteit Leuven\n- [Janelle Skaden](mailto:Janelle.E.Skaden@usace.army.mil) at U.S. Army Engineer Research and Development Center (ERDC)\n\n**Previous Members & Contributors:**\n- [Bas Hoonhout](mailto:bas@hoonhout.com) (founder)\n- Tom Pak\n- Pieter Rauwoens\n- Lisa Meijer\n\n## Citation\n\nPlease cite this software as follows:\n\n*de Vries, S., Hallin, C., van IJzendoorn, C., van Westen, B., Cohn, N., Strypsteen, G., Skaden, J., Agrawal, N., & Garcia Alvarez, M. (2023). AeoLiS (Version 3.0.0.rc1) [Computer software]. https://github.com/openearth/aeolis-python*\n\n## Acknowledgements\n\n- AeoLiS is supported by the [Digital Competence Centre](https://dcc.tudelft.nl), Delft University of Technology.\n- The contributing guidelines for AeoLiS are derived from the [NLeSC/python-template](https://github.com/NLeSC/python-template) and the [numpy contributing guide](https://numpy.org/devdocs/dev/index.html#development-process-summary).\n\n© (2023) AeoLiS Development Team, Delft, The Netherlands.\n'",",https://zenodo.org/badge/latestdoi/7830/openearth/aeolis-python","2015/12/30, 11:32:38",2856,GPL-3.0,211,795,"2023/10/18, 12:02:06",27,87,138,110,7,0,0.9,0.7712519319938176,"2023/10/18, 12:07:54",v3.0.0.rc1,0,11,false,,false,true,,,https://github.com/openearth,http://publicwiki.deltares.nl/display/OET,Delft,,,https://avatars.githubusercontent.com/u/6883038?v=4,,, REEF3D,"An efficiently parallelized hydrodynamics framework with a focus on coastal, marine and hydraulic engineering flows.",REEF3D,https://github.com/REEF3D/REEF3D.git,github,,Coastal and Reefs,"2023/08/30, 10:22:27",61,0,25,true,C++,REEF3D,REEF3D,"C++,C,Makefile,Dockerfile",,,,"2018/11/11, 09:38:48",1809,GPL-3.0,549,2361,"2023/09/04, 09:49:00",1,27,35,15,51,1,0.0,0.003547896604156109,"2023/08/30, 10:26:40",23.08,0,3,false,,false,false,,,https://github.com/REEF3D,www.reef3d.com,"NTNU Trondheim, Norway",,,https://avatars.githubusercontent.com/u/44928822?v=4,,, pygetm,A Python rewrite of the General Estuarine Transport Model.,BoldingBruggeman,https://github.com/BoldingBruggeman/getm-rewrite.git,github,estuary-earth-science,Coastal and Reefs,"2023/10/14, 14:38:51",5,0,3,true,Python,Bolding & Bruggeman ApS,BoldingBruggeman,"Python,Fortran,Jupyter Notebook,Cython,CMake,C,Batchfile,Makefile,Shell",https://pygetm.readthedocs.io/,"b'# pygetm\n\nThis is a rewrite of the [General Estuarine Transport Model (GETM)](https://getm.eu).\nIt is mostly written in Python; only performance-critical sections of the code are implemented in Fortran.\n\n## Installing\n\nYou will need the [Anaconda Python distribution](https://www.anaconda.com/products/individual). On many systems it is already installed: try running `conda --version`.\nIf that fails, you may need to load an Anaconda module first: try `module load anaconda` or `module load anaconda3`. If that still does not give you a working `conda` command,\nyou may want to install [Miniconda](https://docs.conda.io/en/latest/miniconda.html).\n\nBefore using conda for the very first time, you will need to initialize its environment:\n\n```\nconda init bash\n```\n\nIf you are using a shell other than bash, replace `bash` with the name of your shell (see `conda init -h` for supported ones), or use\n`conda init --all`.\n\nThis needs to be done just once, as it modifies your `.bashrc`, which is sourced every time you log in.\nAfter this, restart your shell by logging out and back in.\n\n### Installation with conda (currently Linux/Windows only)\n\nTo install or update pygetm:\n\n```\nconda install pygetm -c bolding-bruggeman -c conda-forge\n```\n\n### Manual build and install\n\nIf you need a customized version of pygetm, for instance one built with specific compiler options, or with specific biogeochemical models that are not part of the standard [FABM](https://fabm.net) distribution, you can manually obtain the pygetm source code, build it, and then install it.\n\n#### Linux/Mac\n\nTo obtain the repository with setups and scripts, set up your conda environment, and build and install pygetm:\n\n```\ngit clone --recursive https://github.com/BoldingBruggeman/getm-rewrite.git\ncd getm-rewrite\nconda env create -f environment.yml\nconda activate pygetm\nsource ./install\n```\n\nIf you are using a shell other than bash, you may need to replace `source` in the last line with `bash`. If you are installing on an HPC system that already has a Fortran compiler and MPI libraries that you would like to use, replace `environment.yml` with `environment-min.yml` in the above.\n\nTo customize the build step, you typically edit [the install script](https://github.com/BoldingBruggeman/getm-rewrite/blob/devel/install). 
For instance, you can:\n* Specify a specific Fortran compiler to use by adding `-DCMAKE_Fortran_COMPILER=` in the first call to cmake\n* Customize compiler flags by adding `export FFLAGS=` before the first call to cmake\n* Configure FABM by adding options (e.g., `-DFABM_INSTITUTES`, `-DFABM__BASE`) to the first call to cmake\n\n#### Windows\n\nAs on other platforms, you need [Anaconda](https://www.anaconda.com/products/individual) or [Miniconda](https://docs.conda.io/en/latest/miniconda.html). In addition, you need to ensure that software to obtain and build Fortran code is available. Therefore, install:\n\n* [Git for Windows](https://git-scm.com/download/win)\n* [Visual Studio Community 2019](https://my.visualstudio.com/Downloads?q=visual%20studio%202019&wt.mc_id=o~msft~vscom~older-downloads)\n* [Intel Fortran Compiler](https://www.intel.com/content/www/us/en/developer/articles/tool/oneapi-standalone-components.html#fortran)\n* [Microsoft MPI](https://www.microsoft.com/en-us/download/details.aspx?id=100593) - you need both the runtime library and the Software Development Kit\n\nNow obtain the repository with setups and scripts, set up your conda environment, and build and install pygetm:\n\n```\ngit clone --recursive https://github.com/BoldingBruggeman/getm-rewrite.git\ncd getm-rewrite\nconda env create -f environment-min.yml\nconda activate pygetm\n\nmkdir build\ncd build\ncmake ..\\python\ncmake --build . --config Release\ncmake --install .\n```\n\n#### Staying up to date\n\nTo update this repository including its submodules (GOTM, FABM, etc.), make sure you are in the getm-rewrite directory and execute:\n\n```\ngit pull\ngit submodule update --init --recursive\nconda env update -f \nconda activate pygetm\nsource ./install\n```\n\nIn the above, replace `` with the name of the environment file you used previously: `environment.yml` for stand-alone conda environments, or `environment-min.yml` for a setup that uses the local MPI implementation and Fortran compiler.\n\n## Using pygetm\n\nYou should always activate the correct Python environment before you use the model, with `conda activate pygetm`.\nThis needs to be done any time you start a new shell.\n\n### Jupyter Notebooks\n\nThe best place to start is the [`python/examples`](https://github.com/BoldingBruggeman/getm-rewrite/tree/devel/python/examples) directory with Jupyter Notebooks that demonstrate the functionality of the model:\n\n```\ncd python/examples\npython -m jupyterlab\n```\n\n### Simulations\n\nSome of the original GETM test cases have been ported to pygetm:\n\n* [north_sea](https://github.com/BoldingBruggeman/getm-rewrite/blob/devel/python/examples/north_sea_legacy.py) - including [an extended version](https://github.com/BoldingBruggeman/getm-rewrite/blob/devel/python/examples/north_sea.py) that shows new pygetm features such as command-line configurability.\n* [box_spherical](https://github.com/BoldingBruggeman/getm-rewrite/blob/devel/python/examples/box_spherical.py)\n* [seamount](https://github.com/BoldingBruggeman/getm-rewrite/blob/devel/python/examples/seamount.py)\n\nTo run a simulation:\n\n```\npython [OPTIONS]\n```\n\nTo run in parallel:\n\n```\nmpiexec -n python [OPTIONS]\n```\n\n## Generating optimal subdomain division\n\nA tool is included to calculate an optimal subdomain division, given only a valid bathymetry file _topo.nc_ and the number of processes the setup is to be run on. The tool searches for a solution with the smallest subdomain size that still covers the entire calculation domain. 
Output of the command (a python pickle file) is used directly as an argument when the domain object is created in the Python run-script.\n\nThe command to generate the subdomain division is:\n```bash\npygetm-subdiv optimize --legacy --pickle subdiv_7.pickle topo.nc 7\n```\nThe calculated layout can be shown - and plotted - via:\n```bash\npygetm-subdiv show subdiv_7.pickle --plot\n```\nThis results in a layout like:\n\n\n\nThe example is for the standard North Sea case provided by legacy GETM.\n\nTo use a given subdomain division at runtime, the call creating the domain object should look similar to:\n\n```python\ndomain = pygetm.legacy.domain_from_topo(os.path.join(getm_setups_dir, \'NorthSea/Topo/NS6nm.v01.nc\'), nlev=30, z0_const=0.001, tiling=\'subdiv.pickle\')\n```\n\nNote the *tiling* argument.\n\n## Contributing\n\nHow to contribute to the development:\n\n 1. Make a [fork](https://github.com/BoldingBruggeman/getm-rewrite/projects/4) - upper right - of the repository to your private GitHub account(\*) \n 2. Make a [pull request](https://docs.github.com/en/free-pro-team@latest/github/collaborating-with-issues-and-pull-requests/about-pull-requests)\n\nNote that all communication in relation to the development of GETM is done via GitHub - using [tickets](https://github.com/BoldingBruggeman/tickets/issues), [code](https://github.com/BoldingBruggeman/getm-rewrite), [issues](https://github.com/BoldingBruggeman/getm-rewrite/issues), and [projects](https://github.com/BoldingBruggeman/getm-rewrite/projects).\n\n\n(\*) If you use a service other than GitHub for your daily work - please have a look [here](https://stackoverflow.com/questions/37672694/can-i-submit-a-pull-request-from-gitlab-com-to-github)\n\nhttps://yarchive.net/comp/linux/collective_work_copyright.html\n'",,"2020/06/30, 07:28:13",1212,CUSTOM,220,1753,"2021/06/02, 09:34:05",13,3,12,0,875,0,0.3333333333333333,0.18713105076741443,,,0,4,false,,false,false,,,https://github.com/BoldingBruggeman,www.bolding-bruggeman.com,,,,https://avatars.githubusercontent.com/u/11145431?v=4,,, HyRiver,A Python software stack for retrieving hydroclimate data from web services.,cheginit,https://github.com/hyriver/hyriver.github.io.git,github,"python,webservice,hydrology,climate,data",Ocean Data Processing and Access,"2023/09/23, 15:05:09",81,0,26,true,Makefile,HyRiver,hyriver,"Makefile,Python",https://docs.hyriver.io,"b"".. image:: https://raw.githubusercontent.com/hyriver/HyRiver-examples/main/notebooks/_static/hyriver_logo_text.png\n :target: https://github.com/hyriver/HyRiver-examples\n\n|\n\n.. |pygeohydro| image:: https://github.com/hyriver/pygeohydro/actions/workflows/test.yml/badge.svg\n :target: https://github.com/hyriver/pygeohydro/actions/workflows/test.yml\n :alt: Github Actions\n\n.. |pygeoogc| image:: https://github.com/hyriver/pygeoogc/actions/workflows/test.yml/badge.svg\n :target: https://github.com/hyriver/pygeoogc/actions/workflows/test.yml\n :alt: Github Actions\n\n.. |pygeoutils| image:: https://github.com/hyriver/pygeoutils/actions/workflows/test.yml/badge.svg\n :target: https://github.com/hyriver/pygeoutils/actions/workflows/test.yml\n :alt: Github Actions\n\n.. |pynhd| image:: https://github.com/hyriver/pynhd/actions/workflows/test.yml/badge.svg\n :target: https://github.com/hyriver/pynhd/actions/workflows/test.yml\n :alt: Github Actions\n\n.. |py3dep| image:: https://github.com/hyriver/py3dep/actions/workflows/test.yml/badge.svg\n :target: https://github.com/hyriver/py3dep/actions/workflows/test.yml\n :alt: Github Actions\n\n.. 
|pydaymet| image:: https://github.com/hyriver/pydaymet/actions/workflows/test.yml/badge.svg\n :target: https://github.com/hyriver/pydaymet/actions/workflows/test.yml\n :alt: Github Actions\n\n.. |pynldas2| image:: https://github.com/hyriver/pynldas2/actions/workflows/test.yml/badge.svg\n :target: https://github.com/hyriver/pynldas2/actions/workflows/test.yml\n :alt: Github Actions\n\n.. |async| image:: https://github.com/hyriver/async-retriever/actions/workflows/test.yml/badge.svg\n :target: https://github.com/hyriver/async-retriever/actions/workflows/test.yml\n :alt: Github Actions\n\n.. |signatures| image:: https://github.com/hyriver/hydrosignatures/actions/workflows/test.yml/badge.svg\n :target: https://github.com/hyriver/hydrosignatures/actions/workflows/test.yml\n :alt: Github Actions\n\n.. |geoh_stat| image:: https://static.pepy.tech/personalized-badge/pygeohydro?period=total&left_color=blue&right_color=yellowgreen&left_text=PyGeoHydro\n :target: https://github.com/hyriver/pygeohydro\n :alt: Download Stat\n\n.. |ogc_stat| image:: https://static.pepy.tech/personalized-badge/pygeoogc?period=total&left_color=blue&right_color=yellowgreen&left_text=PyGeoOGC\n :target: https://github.com/hyriver/pygeoogc\n :alt: Download Stat\n\n.. |utils_stat| image:: https://static.pepy.tech/personalized-badge/pygeoutils?period=total&left_color=blue&right_color=yellowgreen&left_text=PyGeoUtils\n :target: https://github.com/hyriver/pygeoutils\n :alt: Download Stat\n\n.. |nhd_stat| image:: https://static.pepy.tech/personalized-badge/pynhd?period=total&left_color=blue&right_color=yellowgreen&left_text=PyNHD\n :target: https://github.com/hyriver/pynhd\n :alt: Download Stat\n\n.. |3dep_stat| image:: https://static.pepy.tech/personalized-badge/py3dep?period=total&left_color=blue&right_color=yellowgreen&left_text=Py3DEP\n :target: https://github.com/hyriver/py3dep\n :alt: Download Stat\n\n.. |day_stat| image:: https://static.pepy.tech/personalized-badge/pydaymet?period=total&left_color=blue&right_color=yellowgreen&left_text=PyDaymet\n :target: https://github.com/hyriver/pydaymet\n :alt: Download Stat\n\n.. |nldas_stat| image:: https://static.pepy.tech/personalized-badge/pynldas2?period=total&left_color=blue&right_color=yellowgreen&left_text=PyNLDAS2\n :target: https://github.com/hyriver/pynldas2\n :alt: Download Stat\n\n.. |async_stat| image:: https://static.pepy.tech/personalized-badge/async-retriever?period=total&left_color=blue&right_color=yellowgreen&left_text=AsyncRetriever\n :target: https://github.com/hyriver/async-retriever\n :alt: Download Stat\n\n.. |sig_stat| image:: https://static.pepy.tech/personalized-badge/hydrosignatures?period=total&left_color=blue&right_color=yellowgreen&left_text=HydroSignatures\n :target: https://github.com/hyriver/hydrosignatures\n :alt: Download Stat\n\n.. _PyGeoHydro: https://github.com/hyriver/pygeohydro\n.. _PyGeoOGC: https://github.com/hyriver/pygeoogc\n.. _PyGeoUtils: https://github.com/hyriver/pygeoutils\n.. _PyNHD: https://github.com/hyriver/pynhd\n.. _Py3DEP: https://github.com/hyriver/py3dep\n.. _PyDaymet: https://github.com/hyriver/pydaymet\n.. _PyNLDAS2: https://github.com/hyriver/pynldas2\n.. _HydroSignatures: https://github.com/hyriver/hydrosignatures\n\n.. image:: https://mybinder.org/badge_logo.svg\n :target: https://mybinder.org/v2/gh/hyriver/HyRiver-examples/main?urlpath=lab/tree/notebooks\n :alt: Binder\n\n.. 
image:: https://github.com/hyriver/hyriver.github.io/actions/workflows/gh-pages.yml/badge.svg\n :target: https://github.com/hyriver/hyriver.github.io/actions/workflows/gh-pages.yml\n :alt: Build Website\n\n.. image:: https://joss.theoj.org/papers/b0df2f6192f0a18b9e622a3edff52e77/status.svg\n :target: https://joss.theoj.org/papers/b0df2f6192f0a18b9e622a3edff52e77\n :alt: JOSS\n\n=============== ==================================================================== ============\nPackage Description CI\n=============== ==================================================================== ============\n|nhd_stat| Navigate and subset NHDPlus (MR and HR) using web services |pynhd|\n|3dep_stat| Access topographic data through National Map's 3DEP web service |py3dep|\n|geoh_stat| Access NWIS, NID, WQP, eHydro, NLCD, CAMELS, and SSEBop databases |pygeohydro|\n|day_stat| Access daily, monthly, and annual climate data via Daymet |pydaymet|\n|nldas_stat| Access hourly NLDAS-2 data via web services |pynldas2|\n|sig_stat| A collection of tools for computing hydrological signatures |signatures|\n|async_stat| High-level API for asynchronous requests with persistent caching |async|\n|ogc_stat| Send queries to any ArcGIS RESTful-, WMS-, and WFS-based services |pygeoogc|\n|utils_stat| Utilities for manipulating geospatial, (Geo)JSON, and (Geo)TIFF data |pygeoutils|\n=============== ==================================================================== ============\n\n\nHyRiver: Hydroclimate Data Retriever\n====================================\n\nFeatures\n--------\n\n`HyRiver `__ is a software stack consisting of eight\nPython libraries that are designed to aid in hydroclimate analysis through web services.\nCurrently, this project only includes hydrology and climatology data\nwithin the US. Some major capabilities of HyRiver are:\n\n* Easy access to many web services for subsetting data on the server side and returning the requests\n as masked Datasets or GeoDataFrames.\n* Splitting large requests into smaller chunks under the hood, since web services often limit\n the number of features per request; thus, the only bottleneck for subsetting the data\n is your local machine's memory.\n* Navigating and subsetting the NHDPlus database (both medium- and high-resolution) using web services.\n* Cleaning up the vector NHDPlus data, fixing some common issues, and computing vector-based\n accumulation through a river network.\n* A URL inventory for many popular (and tested) web services.\n* Some utilities for manipulating the obtained data and their visualization.\n\n.. image:: https://docs.hyriver.io/_images/hyriver_deps.png\n :target: https://docs.hyriver.io\n\nPlease visit the `examples `__\nwebpage to see some example notebooks. You can also watch these videos for a quick overview\nof ``HyRiver`` capabilities:\n\n* `Pangeo Showcase `__\n* `ESIP IT&I `__\n* `WaterHackWeek 2020 `__\n* `UH Seminar `__\n\nYou can also try this project without installing it on your system by clicking on the binder\nbadge. A Jupyter Lab instance with the HyRiver software stack pre-installed will be launched\nin your web browser, and you can start coding!\n\nPlease note that this project is in its early development stages; while the provided\nfunctionalities should be stable, changes in the APIs are possible in new releases. We would\nstill appreciate it if you gave this project a try and provided feedback. 
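\n\nAs a brief, hedged sketch of a typical workflow (this example is not part of the project's README; ``NLDI.get_basins`` and ``py3dep.get_map`` follow the packages' documented APIs, but verify the exact signatures against the current docs):\n\n.. code-block:: python\n\n    import py3dep\n    from pynhd import NLDI\n\n    # Drainage basin upstream of USGS gauge 01031500 (an arbitrary example\n    # station), returned by the NLDI web service as a GeoDataFrame\n    basin = NLDI().get_basins(""01031500"")\n\n    # Request the 3DEP DEM for that basin at 90-m resolution; the result\n    # comes back as an xarray object ready for analysis\n    dem = py3dep.get_map(""DEM"", basin.geometry.iloc[0], resolution=90, geo_crs=""epsg:4326"", crs=""epsg:4326"")\n\n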
Contributions are most welcome.\n\nMoreover, requests for additional databases and functionalities can be submitted via issue trackers\nof packages.\n\nCitation\n--------\nIf you use any of HyRiver packages in your research, we appreciate citations:\n\n.. code-block:: bibtex\n\n @article{Chegini_2021,\n author = {Chegini, Taher and Li, Hong-Yi and Leung, L. Ruby},\n doi = {10.21105/joss.03175},\n journal = {Journal of Open Source Software},\n month = {10},\n number = {66},\n pages = {1--3},\n title = {{HyRiver: Hydroclimate Data Retriever}},\n volume = {6},\n year = {2021}\n }\n\nInstallation\n------------\n\nYou can install all the packages using ``pip``:\n\n.. code-block:: console\n\n $ pip install py3dep pynhd pygeohydro pydaymet pynldas2 hydrosignatures pygeoogc pygeoutils async-retriever\n\nPlease note that installation with ``pip`` fails if ``libgdal`` is not installed on your system.\nYou should install this package manually beforehand. For example, on Ubuntu-based distros\nthe required package is ``libgdal-dev``. If this package is installed on your system\nyou should be able to run ``gdal-config --version`` successfully.\n\nAlternatively, you can install them using ``conda``:\n\n.. code-block:: console\n\n $ conda install -c conda-forge py3dep pynhd pygeohydro pydaymet pynldas2 hydrosignatures pygeoogc pygeoutils async-retriever\n\nor ``mambaforge`` (recommended):\n\n.. code-block:: console\n\n $ mamba install py3dep pynhd pygeohydro pydaymet pynldas2 hydrosignatures pygeoogc pygeoutils async-retriever\n\nAdditionally, you can create a new environment, named ``hyriver`` with all the packages\nand optional dependencies installed with ``mambaforge`` using the provided\n``environment.yml`` file:\n\n.. code-block:: console\n\n $ mamba env create -f ./environment.yml\n\n.. image:: https://raw.githubusercontent.com/hyriver/HyRiver-examples/main/notebooks/_static/flow_accumulation.png\n :target: https://github.com/hyriver/HyRiver-examples\n""",,"2021/03/04, 02:46:42",966,CUSTOM,86,344,"2023/09/05, 00:02:59",0,13,21,11,51,0,0.0,0.4375,"2021/10/26, 23:06:22",v0.11,0,3,false,,true,true,,,https://github.com/hyriver,https://docs.hyriver.io,United States of America,,,https://avatars.githubusercontent.com/u/109259800?v=4,,, argopy,"A global network of nearly 4000 autonomous probes measuring pressure, temperature and salinity from the surface to 2000m depth every 10 days.",euroargodev,https://github.com/euroargodev/argopy.git,github,"argo,python,argo-floats,oceanography,argo-data",Ocean Data Processing and Access,"2023/10/20, 14:01:26",159,13,29,true,Python,Euro-Argo ERIC,euroargodev,"Python,HTML,CSS,Shell,Dockerfile,nesC",https://argopy.readthedocs.io,"b'|
``argopy`` is a Python library dedicated to Argo data access, visualisation and manipulation for regular users as well as Argo experts and operators|\n|:---------:|\n|[![DOI][joss-badge]][joss-link] ![CI][ci-badge] [![codecov][cov-badge]][cov-link] [![Documentation][rtd-badge]][rtd-link] [![Pypi][pip-badge]][pip-link] [![Conda][conda-badge]][conda-link]|\n\n[joss-badge]: https://img.shields.io/badge/DOI-10.21105%2Fjoss.02425-brightgreen\n[joss-link]: https://dx.doi.org/10.21105/joss.02425\n[ci-badge]: https://github.com/euroargodev/argopy/actions/workflows/pytests.yml/badge.svg\n[cov-badge]: https://codecov.io/gh/euroargodev/argopy/branch/master/graph/badge.svg\n[cov-link]: https://codecov.io/gh/euroargodev/argopy\n[rtd-badge]: https://img.shields.io/readthedocs/argopy?logo=readthedocs\n[rtd-link]: https://argopy.readthedocs.io/en/latest/?badge=latest\n[pip-badge]: https://img.shields.io/pypi/v/argopy\n[pip-link]: https://pypi.org/project/argopy/\n[conda-badge]: https://img.shields.io/conda/vn/conda-forge/argopy?logo=anaconda\n[conda-link]: https://anaconda.org/conda-forge/argopy\n\n### Documentation\n\nThe official documentation is hosted on ReadTheDocs.org: https://argopy.readthedocs.io\n\n### Install\n\nBinary installers for the latest released version are available at the [Python Package Index (PyPI)](https://pypi.org/project/argopy/) and on [Conda](https://anaconda.org/conda-forge/argopy).\n\n```bash\n# conda\nconda install -c conda-forge argopy\n```\n```bash\n# or PyPI\npip install argopy\n```\n\n``argopy`` is continuously tested to work on most OSes (Linux, Mac, Windows) and with Python versions >= 3.8.\n\n### Usage\n\n```python\n# Import the main data fetcher:\nfrom argopy import DataFetcher as ArgoDataFetcher\n```\n```python\n# Define what you want to fetch... \n# a region:\nArgoSet = ArgoDataFetcher().region([-85,-45,10.,20.,0,10.])\n# floats:\nArgoSet = ArgoDataFetcher().float([6902746, 6902747, 6902757, 6902766])\n# or specific profiles:\nArgoSet = ArgoDataFetcher().profile(6902746, 34)\n```\n```python\n# then fetch and get data as xarray datasets:\nds = ArgoSet.load().data\n# or\nds = ArgoSet.to_xarray()\n```\n```python\n# you can even plot some information:\nArgoSet.plot(\'trajectory\') \n```\n\nThere are many more usage options and fine-tunings that allow you to access and manipulate Argo data:\n- [filters at fetch time](https://argopy.readthedocs.io/en/latest/user_mode.html) (standard vs expert users, automatically select QC flags or data mode, ...)\n- [select data sources](https://argopy.readthedocs.io/en/latest/data_sources.html) (erddap, ftp, local, argovis, ...)\n- [manipulate data](https://argopy.readthedocs.io/en/latest/data_manipulation.html) (points, profiles, interpolations, binning, ...)\n- [visualisation](https://argopy.readthedocs.io/en/latest/visualisation.html) (trajectories, topography, histograms, ...)\n- [tools for Quality Control](https://argopy.readthedocs.io/en/latest/data_quality_control.html) (OWC, figures, ...)\n- [access meta-data and other Argo-related datasets](https://argopy.readthedocs.io/en/latest/metadata_fetching.html) (index, reference tables, deployment plans, topography, ...)\n- [improve performance](https://argopy.readthedocs.io/en/latest/performances.html) (caching, parallel data fetching)\n\nJust check out [the documentation](https://argopy.readthedocs.io) for more! 
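\n\nAs a small, hedged illustration of that fine-tuning (the `src` and `mode` keywords follow the argopy documentation, but verify the currently supported values there):\n\n```python\nfrom argopy import DataFetcher as ArgoDataFetcher\n\n# Pick a data source and user mode explicitly, then fetch a region\n# defined as [lon_min, lon_max, lat_min, lat_max, pres_min, pres_max]:\nArgoSet = ArgoDataFetcher(src=\'erddap\', mode=\'standard\').region([-85, -45, 10., 20., 0, 10.])\nds = ArgoSet.to_xarray()\n```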
\n\n## Development and contributions \n\nSee our development roadmap here: https://github.com/euroargodev/argopy/milestone/3\n\nCheckout [the contribution page](https://argopy.readthedocs.io/en/latest/contributing.html) if you want to get involved and help maintain or develop ``argopy``.\n'",,"2020/03/17, 16:14:32",1317,EUPL-1.2,795,2277,"2023/10/10, 09:30:12",17,177,275,70,15,3,0.0,0.06358087487283826,"2023/09/29, 13:28:03",v0.1.14,3,10,false,,true,true,"oceanhackweek/ohw23_proj_argo_ml,narest-qa/repo18,euroargodev/VirtualFleet,zoeyjiao1104/BOG_practicum,st-howard/ArgoBV,euroargodev/boundary_currents_pcm,ngam/ngc-ext-pangeo,euroargodev/argopy-data,griverat/dmelon,closes/fdls,euroargodev/floatoftheday,euroargodev/argopy-status,euroargodev/argoonlineschool",,https://github.com/euroargodev,https://www.euro-argo.eu,,,,https://avatars.githubusercontent.com/u/57987739?v=4,,, tidyhydat,An R package to import Water Survey of Canada hydrometric data and make it tidy.,ropensci,https://github.com/ropensci/tidyhydat.git,github,"r,tidy-data,government-data,water-resources,rstats,r-package,hydrology,hydrometrics,citz",Ocean Data Processing and Access,"2023/08/17, 21:59:32",70,0,5,true,R,rOpenSci,ropensci,"R,CSS,TeX",https://docs.ropensci.org/tidyhydat,"b'\n\n# tidyhydat \n\n\n\n[![License](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](https://opensource.org/license/apache-2-0/)\n[![Coverage\nstatus](https://codecov.io/gh/ropensci/tidyhydat/branch/master/graph/badge.svg)](https://codecov.io/github/ropensci/tidyhydat?branch=master)\n[![R build\nstatus](https://github.com/ropensci/tidyhydat/workflows/R-CMD-check/badge.svg)](https://github.com/ropensci/tidyhydat/actions)\n\n[![CRAN\\_Status\\_Badge](https://www.r-pkg.org/badges/version/tidyhydat)](https://cran.r-project.org/package=tidyhydat)\n[![CRAN\nDownloads](https://cranlogs.r-pkg.org/badges/tidyhydat?color=brightgreen)](https://CRAN.R-project.org/package=tidyhydat)\n[![cran\nchecks](https://badges.cranchecks.info/worst/tidyhydat.svg)](https://cran.r-project.org/web/checks/check_results_tidyhydat.html)\n[![r-universe](https://ropensci.r-universe.dev/badges/tidyhydat)](https://ropensci.r-universe.dev/builds)\n[![](http://badges.ropensci.org/152_status.svg)](https://github.com/ropensci/software-review/issues/152)\n[![DOI](http://joss.theoj.org/papers/10.21105/joss.00511/status.svg)](https://doi.org/10.21105/joss.00511)\n[![DOI](https://zenodo.org/badge/100978874.svg)](https://zenodo.org/badge/latestdoi/100978874)\n[![R-CMD-check](https://github.com/ropensci/tidyhydat/actions/workflows/R-CMD-check.yaml/badge.svg)](https://github.com/ropensci/tidyhydat/actions/workflows/R-CMD-check.yaml)\n\n\n## What does `tidyhydat` do?\n\n- Provides functions (`hy_*`) that access hydrometric data from the\n HYDAT database, a national archive of Canadian hydrometric data and\n return tidy data.\n- Provides functions (`realtime_*`) that access Environment and\n Climate Change Canada\xe2\x80\x99s real-time hydrometric data source.\n- Provides functions (`search_*`) that can search through the\n approximately 7000 stations in the database and aid in generating\n station vectors\n- Keep functions as simple as possible. 
For example, for daily flows,\n the `hy_daily_flows()` function queries the database, *tidies* the\n data and returns a [tibble](https://tibble.tidyverse.org/) of daily\n flows.\n\n## Installation\n\nYou can install `tidyhydat` from CRAN:\n\n install.packages(""tidyhydat"")\n\nTo install the development version of the `tidyhydat` package, you can\ninstall directly from the rOpenSci development server:\n\n install.packages(""tidyhydat"", repos = ""https://dev.ropensci.org"")\n\n## Usage\n\nMore documentation on `tidyhydat` can be found at the rOpenSci doc page:\n\n\nWhen you install `tidyhydat`, several other packages will be installed\nas well. One of those packages, `dplyr`, is useful for data\nmanipulations and is used regularly here. To actually use `dplyr` in\na session you must explicitly load it. A helpful `dplyr` tutorial can be\nfound\n[here](https://cran.r-project.org/package=dplyr/vignettes/dplyr.html).\n\n library(tidyhydat)\n library(dplyr)\n\n### HYDAT download\n\nTo use many of the functions in the `tidyhydat` package you will need to\ndownload a version of the HYDAT database, Environment and Climate Change\nCanada\xe2\x80\x99s database of historical hydrometric data, and then tell R where to\nfind it. Conveniently, `tidyhydat` does all this for you via:\n\n download_hydat()\n\nThis downloads (with your permission) the most recent version of HYDAT\nand then saves it in a location on your computer where `tidyhydat`\xe2\x80\x99s\nfunctions will look for it. Do be patient though, as this can take a long\ntime! To see where HYDAT was saved you can run `hy_default_db()`. Now\nthat you have HYDAT downloaded and ready to go, you are all set to begin\nlooking at Canadian hydrometric data.\n\n### Real-time\n\nTo download real-time data using the datamart, we can use approximately\nthe same conventions discussed above. 
Using `realtime_dd()` we can\neasily select specific stations by supplying a station of interest:\n\n realtime_dd(station_number = ""08MF005"")\n #> Queried on: 2023-04-04 12:54:46 (UTC)\n #> Date range: 2023-03-05 to 2023-04-04 \n #> # A tibble: 17,118 \xc3\x97 8\n #> STATION_NUMBER PROV_TE\xe2\x80\xa6\xc2\xb9 Date Param\xe2\x80\xa6\xc2\xb2 Value Grade Symbol Code \n #> \n #> 1 08MF005 BC 2023-03-05 08:00:00 Flow 571 1 \n #> 2 08MF005 BC 2023-03-05 08:05:00 Flow 572 1 \n #> 3 08MF005 BC 2023-03-05 08:10:00 Flow 571 1 \n #> 4 08MF005 BC 2023-03-05 08:15:00 Flow 571 1 \n #> 5 08MF005 BC 2023-03-05 08:20:00 Flow 571 1 \n #> 6 08MF005 BC 2023-03-05 08:25:00 Flow 572 1 \n #> 7 08MF005 BC 2023-03-05 08:30:00 Flow 572 1 \n #> 8 08MF005 BC 2023-03-05 08:35:00 Flow 571 1 \n #> 9 08MF005 BC 2023-03-05 08:40:00 Flow 572 1 \n #> 10 08MF005 BC 2023-03-05 08:45:00 Flow 573 1 \n #> # \xe2\x80\xa6 with 17,108 more rows, and abbreviated variable names \xc2\xb9\xe2\x80\x8bPROV_TERR_STATE_LOC,\n #> # \xc2\xb2\xe2\x80\x8bParameter\n\nOr we can use `realtime_ws`:\n\n realtime_ws(\n station_number = ""08MF005"",\n parameters = c(46, 5), ## see param_id for a list of codes\n start_date = Sys.Date() - 14,\n end_date = Sys.Date()\n )\n #> Warning: One or more parsing issues, call `problems()` on your data frame for details,\n #> e.g.:\n #> dat <- vroom(...)\n #> problems(dat)\n #> All station successfully retrieved\n #> All parameters successfully retrieved\n #> # A tibble: 4,384 \xc3\x97 10\n #> STATIO\xe2\x80\xa6\xc2\xb9 Date Name_En Value Unit Grade Symbol Appro\xe2\x80\xa6\xc2\xb2 Param\xe2\x80\xa6\xc2\xb3\n #> \n #> 1 08MF005 2023-03-21 00:00:00 Water \xe2\x80\xa6 5.06 \xc2\xb0C -1 NA 5\n #> 2 08MF005 2023-03-21 01:00:00 Water \xe2\x80\xa6 4.65 \xc2\xb0C -1 NA 5\n #> 3 08MF005 2023-03-21 02:00:00 Water \xe2\x80\xa6 4.63 \xc2\xb0C -1 NA 5\n #> 4 08MF005 2023-03-21 03:00:00 Water \xe2\x80\xa6 4.22 \xc2\xb0C -1 NA 5\n #> 5 08MF005 2023-03-21 04:00:00 Water \xe2\x80\xa6 4.4 \xc2\xb0C -1 NA 5\n #> 6 08MF005 2023-03-21 05:00:00 Water \xe2\x80\xa6 3.94 \xc2\xb0C -1 NA 5\n #> 7 08MF005 2023-03-21 06:00:00 Water \xe2\x80\xa6 4 \xc2\xb0C -1 NA 5\n #> 8 08MF005 2023-03-21 07:00:00 Water \xe2\x80\xa6 4 \xc2\xb0C -1 NA 5\n #> 9 08MF005 2023-03-21 08:00:00 Water \xe2\x80\xa6 3.76 \xc2\xb0C -1 NA 5\n #> 10 08MF005 2023-03-21 09:00:00 Water \xe2\x80\xa6 3.7 \xc2\xb0C -1 NA 5\n #> # \xe2\x80\xa6 with 4,374 more rows, 1 more variable: Code , and abbreviated variable\n #> # names \xc2\xb9\xe2\x80\x8bSTATION_NUMBER, \xc2\xb2\xe2\x80\x8bApproval, \xc2\xb3\xe2\x80\x8bParameter\n\n## Compare realtime\\_ws and realtime\\_dd\n\n`tidyhydat` provides two methods to download realtime data.\n`realtime_dd()` provides a function to import .csv files from\n[here](https://dd.weather.gc.ca/hydrometric/csv/). `realtime_ws()` is a\nclient for a web service hosted by ECCC. `realtime_ws()` has several\ndifferences from `realtime_dd()`. These include:\n\n- *Speed*: `realtime_ws()` is much faster for larger queries\n (i.e.\xc2\xa0many stations). For single-station queries, `realtime_dd()`\n is more appropriate.\n- *Length of record*: `realtime_ws()` records go back further in\n time.\n- *Type of parameters*: `realtime_dd()` is restricted to river flow\n (either flow or level) data. In contrast, `realtime_ws()` can\n download several different parameters, depending on what is available\n for that station. 
See `data(""param_id"")` for a list and explanation\n of the parameters.\n- *Date/Time filtering*: `realtime_ws()` provides argument to select a\n date range. Selecting a data range with `realtime_dd()` is not\n possible until after all files have been downloaded.\n\n### Plotting\n\nPlot methods are also provided to quickly visualize realtime data:\n\n realtime_ex <- realtime_dd(station_number = ""08MF005"")\n\n plot(realtime_ex)\n\n![](man/figures/README-unnamed-chunk-8-1.png)\n\nand also historical data:\n\n hy_ex <- hy_daily_flows(station_number = ""08MF005"", start_date = ""2013-01-01"")\n\n plot(hy_ex)\n\n![](man/figures/README-unnamed-chunk-9-1.png)\n\n## Getting Help or Reporting an Issue\n\nTo report bugs/issues/feature requests, please file an\n[issue](https://github.com/ropensci/tidyhydat/issues/).\n\nThese are very welcome!\n\n## How to Contribute\n\nIf you would like to contribute to the package, please see our\n[CONTRIBUTING](https://github.com/ropensci/tidyhydat/blob/master/CONTRIBUTING.md)\nguidelines.\n\nPlease note that this project is released with a [Contributor Code of\nConduct](https://github.com/ropensci/tidyhydat/blob/master/CODE_OF_CONDUCT.md).\nBy participating in this project you agree to abide by its terms.\n\n## Citation\n\nGet citation information for `tidyhydat` in R by running:\n\n\n To cite package \'tidyhydat\' in publications use:\n\n Albers S (2017). ""tidyhydat: Extract and Tidy Canadian Hydrometric\n Data."" _The Journal of Open Source Software_, *2*(20).\n doi:10.21105/joss.00511 ,\n .\n\n A BibTeX entry for LaTeX users is\n\n @Article{,\n title = {tidyhydat: Extract and Tidy Canadian Hydrometric Data},\n author = {Sam Albers},\n doi = {10.21105/joss.00511},\n url = {http://dx.doi.org/10.21105/joss.00511},\n year = {2017},\n publisher = {The Open Journal},\n volume = {2},\n number = {20},\n journal = {The Journal of Open Source Software},\n }\n\n[![ropensci\\_footer](https://ropensci.org/public_images/ropensci_footer.png)](https://ropensci.org)\n\n## License\n\nCopyright 2017 Province of British Columbia\n\nLicensed under the Apache License, Version 2.0 (the \xe2\x80\x9cLicense\xe2\x80\x9d); you may\nnot use this file except in compliance with the License. You may obtain\na copy of the License at\n\n\n\nUnless required by applicable law or agreed to in writing, software\ndistributed under the License is distributed on an \xe2\x80\x9cAS IS\xe2\x80\x9d BASIS,\nWITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\nSee the License for the specific language governing permissions and\nlimitations under the License.\n'",",https://doi.org/10.21105/joss.00511,https://zenodo.org/badge/latestdoi/100978874,https://doi.org/10.21105/joss.00511","2017/08/21, 18:01:23",2256,Apache-2.0,21,726,"2023/08/18, 15:49:52",9,53,186,8,68,2,0.5,0.04606240713224363,"2022/10/18, 04:09:15",0.5.7,0,9,false,,true,true,,,https://github.com/ropensci,https://ropensci.org/,"Berkeley, CA",,,https://avatars.githubusercontent.com/u/1200269?v=4,,, OceansDB,A database of marine reference data like climatologies and bathymetry.,castelao,https://github.com/castelao/oceansdb.git,github,,Ocean Data Processing and Access,"2023/02/22, 03:42:30",20,8,5,true,Python,,,"Python,Makefile",,"b""========\nOceansDB\n========\n\n.. image:: https://zenodo.org/badge/52222122.svg\n :target: https://zenodo.org/badge/latestdoi/52222122\n\n.. 
image:: https://readthedocs.org/projects/oceansdb/badge/?version=latest\n :target: http://oceansdb.readthedocs.org/en/latest/?badge=latest\n :alt: Documentation Status\n\n.. image:: https://img.shields.io/travis/castelao/oceansdb.svg\n :target: https://travis-ci.org/castelao/oceansdb\n\n.. image:: https://img.shields.io/pypi/v/oceansdb.svg\n :target: https://pypi.python.org/pypi/oceansdb\n\n\nA package to subsample, or interpolate, climatologies like WOA to any coordinates.\n\nThis package started with functions to obtain climatological values to compare with measured data, allowing a quality control check by comparison. It hence needed to work for any coordinates requested. I split these functionalities from `CoTeDe `_ into this standalone package to allow more people to use it for other purposes.\n\n* Free software: 3-clause BSD style license - see LICENSE.rst \n* Documentation: https://oceansdb.readthedocs.io.\n\nFeatures\n--------\n\n- If the database files are not locally available, they are automatically downloaded.\n\n- Extracts, or interpolates if necessary, climatological data at the requested coordinates;\n\n - Can request a single point, a profile or a section;\n\n - Ready to handle -180 to 180 or 0 to 360 coordinate systems;\n\n- Ready to use with:\n\n - World Ocean Atlas (WOA)\n\n - CSIRO Atlas of Regional Seas (CARS)\n\n - ETOPO (topography)\n\nQuick how-to\n---------------\n\nInside python:\n\n.. code-block:: python\n\n >>> import oceansdb\n >>> with oceansdb.WOA() as db:\n\nFind out what is available:\n\n.. code-block:: python\n\n >>> db.keys()\n\nAverage temperature at one point:\n\n.. code-block:: python\n\n >>> t = db['sea_water_temperature'].extract(var='mean', doy=136.875, depth=0, lat=17.5, lon=-37.5)\n\nA profile of salinity:\n\n.. code-block:: python\n\n >>> t = db['sea_water_salinity'].extract(var='mean', doy=136.875, depth=[0, 10, 15, 18], lat=17.5, lon=-37.5)\n\nA full depth section of temperature:\n\n.. code-block:: python\n\n >>> t = db['sea_water_temperature'].extract(var='mean', doy=136.875, lat=17.48, lon=[-39, -37.5, -35.2])\n\nUsing CARS instead of WOA:\n\n.. code-block:: python\n\n >>> with oceansdb.CARS() as db:\n ...     t = db['sea_water_temperature'].extract(var='mean', doy=136.875, lat=17.48, lon=[-39, -37.5, -35.2], depth=[0,10,120,280])\n\nOr to get the topography for one point at 1 arc-minute resolution:\n\n.. 
code-block:: python\n\n >>> with oceansdb.ETOPO(resolution='1min') as db:\n ...     h = db['topography'].extract(lat=17.5, lon=0)\n""",",https://zenodo.org/badge/latestdoi/52222122","2016/02/21, 18:52:28",2803,BSD-3-Clause,4,335,"2023/02/22, 03:42:32",7,8,18,4,246,1,0.25,0.007352941176470562,"2017/05/18, 01:14:32",0.8,0,3,false,,false,true,"nlesc-eTAOC/Omuse-POP-Model,castelao/CoTeDe,CI-CMG/auto-qc-pipeline,jmetteUni/CoTeDe-modified,PabloOtero/CTDChecker,petejan/imos-tools,UoA-eResearch/argo,BjerknesClimateDataCentre/xover",,,,,,,,,, stglib,Routines used by the USGS Coastal/Marine Hazards & Resources Program to process oceanographic time-series data.,USGS-CMG,https://github.com/USGS-CMG/stglib.git,github,,Ocean Data Processing and Access,"2023/10/12, 22:45:08",13,0,8,true,Python,USGS Coastal and Marine Geology Program,USGS-CMG,"Python,Jupyter Notebook,MATLAB,Mathematica",,"b'# stglib - Process data from a variety of oceanographic instrumentation\n\n[![Documentation Status](https://readthedocs.org/projects/stglib/badge/?version=latest)](http://stglib.readthedocs.io/en/latest/?badge=latest)\n![stglib](https://github.com/USGS-CMG/stglib/workflows/stglib/badge.svg)\n[![Anaconda-Server Badge](https://anaconda.org/conda-forge/stglib/badges/version.svg)](https://anaconda.org/conda-forge/stglib)\n\nThis package contains code to process data from a variety of oceanographic instrumentation, consistent with the procedures of the USGS [Coastal/Marine Hazards and Resources Program](https://marine.usgs.gov) (formerly the Coastal and Marine Geology Program).\n\nCurrently, this package has at least partial support for:\n\n- Nortek Aquadopp profilers, in mean-current, wave-burst, and HR modes\n- Nortek Vector velocimeters\n- Nortek Signature profilers\n- RBR pressure (including waves) and turbidity sensors\n- YSI EXO2 water-quality sondes\n- SonTek IQ flow monitors\n- WET Labs sensors, including ECO NTUSB and ECO PAR\n- Onset HOBO pressure sensors\n- Vaisala Weather Transmitter WXT sensors\n- In-Situ Aqua TROLL sensors\n- RD Instruments ADCPs\n- Moving-boat ADCP data processed using [QRev](https://hydroacoustics.usgs.gov/movingboat/QRev.shtml), for use in index-velocity computation\n- EofE ECHOLOGGER altimeters\n\nWe have plans to support:\n\n- RDI Sentinel V profilers\n\nThis package makes heavy use of [NumPy](http://www.numpy.org), [xarray](http://xarray.pydata.org/en/stable/), and [netCDF4](http://unidata.github.io/netcdf4-python/). 
It works on Python 3.8+.\n\n[Read the documentation](http://stglib.readthedocs.io/).\n'",,"2017/12/18, 22:44:25",2137,CUSTOM,131,755,"2023/10/12, 22:45:08",10,108,147,84,13,3,0.0,0.05841446453407506,"2023/10/05, 18:21:43",v0.10.0,0,8,false,,false,false,,,https://github.com/USGS-CMG,http://marine.usgs.gov,,,,https://avatars.githubusercontent.com/u/11758291?v=4,,, noaa_coops, A Python wrapper for the NOAA CO-OPS Tides & Currents Data and Metadata APIs.,GClunies,https://github.com/GClunies/noaa_coops.git,github,"noaa,coops,currents,metocean,sensors-data,tides,water-level,weather-api,python",Ocean Data Processing and Access,"2023/05/15, 05:34:12",62,0,24,true,Python,,,"Python,Makefile",,"b'# noaa_coops\n\n[![PyPI](https://img.shields.io/pypi/v/noaa_coops.svg)](https://pypi.python.org/pypi/noaa-coops)\n[![PyPI - Python Version](https://img.shields.io/pypi/pyversions/noaa_coops.svg)](https://pypi.python.org/pypi/noaa-coops)\n\nA Python wrapper for the NOAA CO-OPS Tides & Currents [Data](https://tidesandcurrents.noaa.gov/api/)\nand [Metadata](https://tidesandcurrents.noaa.gov/mdapi/latest/) APIs.\n\n## Installation\nThis package is distributed via [PyPi](https://pypi.org/project/noaa-coops/) and can be installed using , `pip`, `poetry`, etc.\n```bash\n# Install with pip\n\xe2\x9d\xaf pip install noaa_coops\n\n# Install with poetry\n\xe2\x9d\xaf poetry add noaa_coops\n```\n\n## Getting Started\n\n### Stations\nData is accessed via `Station` class objects. Each station is uniquely identified by an `id`. To initialize a `Station` object, run:\n\n```python\n>>> from noaa_coops import Station\n>>> seattle = Station(id=""9447130"") # Create Station object for Seattle (ID = 9447130)\n```\n\nStations and their IDs can be found using the Tides & Currents [mapping interface](https://tidesandcurrents.noaa.gov/). Alternatively, you can search for stations in a bounding box using the `get_stations_from_bbox` function, which will return a list of stations found in the box (if any).\n```python\n>>> from pprint import pprint\n>>> from noaa_coops import Station, get_stations_from_bbox\n>>> stations = get_stations_from_bbox(lat_coords=[40.389, 40.9397], lon_coords=[-74.4751, -73.7432])\n>>> pprint(stations)\n[\'8516945\', \'8518750\', \'8519483\', \'8531680\']\n>>> station_one = Station(id=""8516945"")\n>>> pprint(station_one.name)\n\'Kings Point\'\n```\n\n### Metadata\nStation metadata is stored in the `.metadata` attribute of a `Station` object. 
Additionally, the keys of the metadata attribute dictionary are also assigned as attributes of the station object itself.\n\n```python\n>>> from pprint import pprint\n>>> from noaa_coops import Station\n>>> seattle = Station(id=""9447130"")\n>>> pprint(list(seattle.metadata.items())[:5]) # Print first 3 items in metadata\n[(\'tidal\', True), (\'greatlakes\', False), (\'shefcode\', \'EBSW1\')] # Metadata dictionary can be very long\n>>> pprint(seattle.lat_lon[\'lat\']) # Print latitude\n47.601944\n>>> pprint(seattle.lat_lon[\'lon\']) # Print longitude\n-122.339167\n```\n\n### Data Inventory\nA description of a Station\'s data products and available dates can be accessed via the `.data_inventory` attribute of a `Station` object.\n\n```python\n>>> from noaa_coops import Station\n>>> from pprint import pprint\n>>> seattle = Station(id=""9447130"")\n>>> pprint(seattle.data_inventory)\n{\'Air Temperature\': {\'end_date\': \'2019-01-02 18:36\',\n \'start_date\': \'1991-11-09 01:00\'},\n \'Barometric Pressure\': {\'end_date\': \'2019-01-02 18:36\',\n \'start_date\': \'1991-11-09 00:00\'},\n \'Preliminary 6-Minute Water Level\': {\'end_date\': \'2023-02-05 19:54\',\n \'start_date\': \'2001-01-01 00:00\'},\n \'Verified 6-Minute Water Level\': {\'end_date\': \'2022-12-31 23:54\',\n \'start_date\': \'1995-06-01 00:00\'},\n \'Verified High/Low Water Level\': {\'end_date\': \'2022-12-31 23:54\',\n \'start_date\': \'1977-10-18 02:18\'},\n \'Verified Hourly Height Water Level\': {\'end_date\': \'2022-12-31 23:00\',\n \'start_date\': \'1899-01-01 00:00\'},\n \'Verified Monthly Mean Water Level\': {\'end_date\': \'2022-12-31 23:54\',\n \'start_date\': \'1898-12-01 00:00\'},\n \'Water Temperature\': {\'end_date\': \'2019-01-02 18:36\',\n \'start_date\': \'1991-11-09 00:00\'},\n \'Wind\': {\'end_date\': \'2019-01-02 18:36\', \'start_date\': \'1991-11-09 00:00\'}}\n```\n\n### Data Retrieval\nAvailable data products can be found in NOAA CO-OPS Data API docs.\n\nStation data can be fetched using the `.get_data` method on a `Station` object. Data is returned as a Pandas [DataFrame](https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.html) for ease of use and analysis. DataFrame columns are named according to the NOAA CO-OPS API [docs](https://api.tidesandcurrents.noaa.gov/api/prod/responseHelp.html), with the `t` column (timestamp) set as the DataFrame index.\n\nThe example below fetches water level data from the Seattle station (id=9447130) for a 1 month period. The corresponding [web output](https://tidesandcurrents.noaa.gov/waterlevels.html?id=9447130&units=metric&bdate=20150101&edate=20150131&timezone=GMT&datum=MLLW) is shown below the code as a reference.\n\n```python\n>>> from noaa_coops import Station\n>>> seattle = Station(id=""9447130"")\n>>> df_water_levels = seattle.get_data(\n... begin_date=""20150101"",\n... end_date=""20150131"",\n... product=""water_level"",\n... datum=""MLLW"",\n... units=""metric"",\n... time_zone=""gmt"")\n>>> df_water_levels.head()\n v s f q\nt\n2015-01-01 00:00:00 1.799 0.023 0,0,0,0 v\n2015-01-01 00:06:00 1.718 0.018 0,0,0,0 v\n2015-01-01 00:12:00 1.639 0.013 0,0,0,0 v\n2015-01-01 00:18:00 1.557 0.012 0,0,0,0 v\n2015-01-01 00:24:00 1.473 0.014 0,0,0,0 v\n\n```\n\n![image](https://user-images.githubusercontent.com/28986302/233147224-765fbe05-372c-40f3-8bbe-4102536e7ff3.png)\n\n\n## Development\n\n### Requirements\nThis package and its dependencies are managed using [poetry](https://python-poetry.org/). 
To install the development environment for `noaa_coops`, first install poetry, then run (inside the repo):\n\n```bash\npoetry install\n```\n\n### TODO\nClick [here](https://github.com/GClunies/noaa_coops/issues) for a list of existing issues and to submit a new one.\n\n### Contribution\nContributions are welcome, feel free to submit a pull request.\n'",,"2019/04/06, 16:19:40",1663,Apache-2.0,55,176,"2023/06/09, 19:44:07",2,36,54,27,138,2,0.4,0.09090909090909094,,,0,7,false,,false,false,,,,,,,,,,, Ferret,An interactive computer visualization and analysis environment designed to meet the needs of oceanographers and meteorologists analyzing large and complex gridded datasets.,NOAA-PMEL,https://github.com/NOAA-PMEL/Ferret.git,github,,Ocean Data Processing and Access,"2020/12/14, 19:08:18",50,0,4,true,Fortran,Pacific Marine Environmental Laboratory,NOAA-PMEL,"Fortran,Gnuplot,C,PostScript,HTML,Shell,Roff,Makefile,C++,PHP,M4,Objective-C,Assembly,Forth,NASL,Pascal,sed,Awk,Max,DIGITAL Command Language",https://ferret.pmel.noaa.gov/Ferret/,"b'# Ferret\nThe Ferret program from NOAA/PMEL. \nSee [https://ferret.pmel.noaa.gov/Ferret/](https://ferret.pmel.noaa.gov/Ferret/)\nfor more information about Ferret and PyFerret.\n\nThis repository is regularly synchronized with Ferret repository at PMEL\n(the trunk of the ferret project in the subversion repository at PMEL)\nusing git-svn.\n\n#### Legal Disclaimer\n*This repository is a software product and is not official communication\nof the National Oceanic and Atmospheric Administration (NOAA), or the\nUnited States Department of Commerce (DOC). All NOAA GitHub project\ncode is provided on an \'as is\' basis and the user assumes responsibility\nfor its use. Any claims against the DOC or DOC bureaus stemming from\nthe use of this GitHub project will be governed by all applicable Federal\nlaw. Any reference to specific commercial products, processes, or services\nby service mark, trademark, manufacturer, or otherwise, does not constitute\nor imply their endorsement, recommendation, or favoring by the DOC.\nThe DOC seal and logo, or the seal and logo of a DOC bureau, shall not\nbe used in any manner to imply endorsement of any commercial product\nor activity by the DOC or the United States Government.*\n\n## Ferret Documentation\n\nFor more information on using Ferret, see the Ferret documentation under\n[https://ferret.pmel.noaa.gov/Ferret/](https://ferret.pmel.noaa.gov/Ferret/)\n\nInformation about the Ferret email users group, and archives of past discussions\nfrom the group (which should be searched prior to sending a question to the email\nusers group) can be found at\n[https://ferret.pmel.noaa.gov/Ferret/email-users-group](https://ferret.pmel.noaa.gov/Ferret/email-users-group)\n\n## If you build Ferret from these source files:\n\n1. We highly recommend you build and use PyFerret (see:\n[https://github.com/NOAA-PMEL/PyFerret](https://github.com/NOAA-PMEL/PyFerret))\ninstead of Ferret.\nPyFerret provides publication-quality graphics and much more funtionality while still maintaining\nthe Ferret command prompt and compatibility with existing Ferret scripts.\nPyFerret uses Qt for displayed graphics, removing the need for X-Windows software libraries and \nheader files for Ferret builds. PyFerret maintains this Ferret repository as an upstream\nsource, so updates to Ferret will also be found in the ""Ferret engine"" within PyFerret.\n\n2. 
If you still wish to build Ferret, more information on building Ferret can be found in the \n[README_build_ferret](https://github.com/NOAA-PMEL/Ferret/blob/master/README_build_ferret) \nand the \n[README_ferret_mac_homebrew.md](https://github.com/NOAA-PMEL/Ferret/blob/master/README_ferret_mac_homebrew.md)\nfiles in this repository.\nThe latter file also provides some useful information about building Ferret on Linux and other Unix-type systems. \n\n3. The `site_specific.mk.in` and `external_functions/ef_utilites/site_specific.mk.in`\nfiles in the repository must be copied to files without the `.in` extensions, and\nthe contents of these `site_specific.mk` files edited for your system configuration.\nThe `site_specific.mk` files will be ignored by git (the name was added to\n`.gitignore`) so your customized configuration files will not be added to your\nrepository if you have cloned this repository.\n\n4. The definitions of CC, FC, and LD (the last only for building external\nfunctions written in Fortran) were moved to the `site_specific.mk.in` files.\n(These were previously defined in the `platform_specific.mk.*` files.)\nIf you already have customized `site_specific.mk` files, please update your\ncustomized files with these additional definitions.\n\n'",,"2016/02/09, 21:14:32",2815,Unlicense,0,8191,"2023/10/13, 08:51:55",589,8,1410,3,12,0,0.0,0.3442249240121581,"2020/06/25, 18:16:32",v7.6.0,0,6,false,,false,false,,,https://github.com/NOAA-PMEL,http://www.pmel.noaa.gov,"Seattle, WA",,,https://avatars.githubusercontent.com/u/16405663?v=4,,, Blueant,Environmental data for Antarctic and Southern Ocean science.,AustralianAntarcticDivision,https://github.com/AustralianAntarcticDivision/blueant.git,github,,Ocean Data Processing and Access,"2023/09/04, 03:19:56",12,0,0,true,R,Australian Antarctic Division,AustralianAntarcticDivision,R,https://australianantarcticdivision.github.io/blueant/,"b'\n\n\n\n\n[![R-CMD-check](https://github.com/AustralianAntarcticDivision/blueant/actions/workflows/R-CMD-check.yaml/badge.svg)](https://github.com/AustralianAntarcticDivision/blueant/actions/workflows/R-CMD-check.yaml)\n[![Codecov test\ncoverage](https://codecov.io/gh/AustralianAntarcticDivision/blueant/branch/master/graph/badge.svg)](https://codecov.io/gh/AustralianAntarcticDivision/blueant?branch=master)\n[![Project Status: Active - The project has reached a stable, usable\nstate and is being actively\ndeveloped.](http://www.repostatus.org/badges/latest/active.svg)](http://www.repostatus.org/#active)\n\n\n# Blueant\n\nBlueant provides a set of data source configurations to use with the\n[bowerbird](https://github.com/AustralianAntarcticDivision/bowerbird)\npackage. These data sources are themed around Antarctic and Southern\nOcean data, and include a range of oceanographic, meteorological,\ntopographic, and other environmental data sets. Blueant will allow you\nto download data from these external data providers to your local file\nsystem, and to keep that data collection up to date.\n\n## Installing\n\n``` r\n## Download and install blueant in R\ninstall.packages(""blueant"", repos = c(SCAR = ""https://scar.r-universe.dev"",\n CRAN = ""https://cloud.r-project.org""))\n\n## or\n## install.packages(""remotes"") ## if needed\nremotes::install_github(""AustralianAntarcticDivision/blueant"")\n```\n\n## Usage overview\n\n### Configuration\n\nBuild up a configuration by first defining global options such as the\ndestination on your local file system. 
Usually you would choose this\ndestination data directory to be a persistent location, suitable for a\ndata library. For demonstration purposes here we\xe2\x80\x99ll just use a temporary\ndirectory:\n\n``` r\nlibrary(blueant)\nmy_data_dir <- tempdir()\ncf <- bb_config(local_file_root = my_data_dir)\n```\n\nAdd data sources from those provided by blueant. A summary of these\nsources is given at the end of this document. Here we\xe2\x80\x99ll use the \xe2\x80\x9cGeorge\nV bathymetry\xe2\x80\x9d data source as an example:\n\n``` r\nmysrc <- sources(""George V bathymetry"")\ncf <- cf %>% bb_add(mysrc)\n```\n\nThis data source is fairly small (around 200MB, see\n`mysrc$collection_size`). Be sure to check the `collection_size`\nparameter of your chosen data source before running the synchronization.\nSome of these collections are quite large (see the summary table at the\nbottom of this document).\n\n### Synchronization\n\nOnce the configuration has been defined and the data source added to it,\nwe can run the sync process. We set `verbose = TRUE` here so that we see\nadditional progress output:\n\n``` r\nstatus <- bb_sync(cf, verbose = TRUE)\n```\n\n ## \n ## Wed Aug 31 09:31:14 2022\n ## Synchronizing dataset: George V bathymetry\n ## Source URL c(""http://public.services.aad.gov.au/datasets/science/GVdem_2008_netcdf/gvdem100m_v3.nc"", ""http://public.services.aad.gov.au/datasets/science/GVdem_2008_netcdf/gvdem250m_v3.nc"", ""http://public.services.aad.gov.au/datasets/science/GVdem_2008_netcdf/gvdem500m_v3.nc"", ""http://public.services.aad.gov.au/datasets/science/GVdem_2008_netcdf/gvdem1000m_v3.nc"")\n ## --------------------------------------------------------------------------------------------\n ## \n ## this dataset path is: /tmp/data/public.services.aad.gov.au/datasets/science/GVdem_2008_netcdf\n ## downloading file 1 of 4: http://public.services.aad.gov.au/datasets/science/GVdem_2008_netcdf/gvdem100m_v3.nc ... done.\n ## downloading file 2 of 4: http://public.services.aad.gov.au/datasets/science/GVdem_2008_netcdf/gvdem250m_v3.nc ... done.\n ## downloading file 3 of 4: http://public.services.aad.gov.au/datasets/science/GVdem_2008_netcdf/gvdem500m_v3.nc ... done.\n ## downloading file 4 of 4: http://public.services.aad.gov.au/datasets/science/GVdem_2008_netcdf/gvdem1000m_v3.nc ... done.\n ## \n ## Wed Aug 31 09:32:03 2022 dataset synchronization complete: George V bathymetry\n\nCongratulations\\! You now have your own local copy of this data set. 
The\nfiles in this data set have been stored in a data-source-specific\nsubdirectory of our local file root, with details given by the returned\n`status` object:\n\n``` r\nmyfiles <- status$files[[1]]\nmyfiles\n## # A tibble: 4 \xc3\x97 3\n## url \n## \n## 1 http://public.services.aad.gov.au/datasets/science/GVdem_2008_netcdf/gvdem100\xe2\x80\xa6\n## 2 http://public.services.aad.gov.au/datasets/science/GVdem_2008_netcdf/gvdem250\xe2\x80\xa6\n## 3 http://public.services.aad.gov.au/datasets/science/GVdem_2008_netcdf/gvdem500\xe2\x80\xa6\n## 4 http://public.services.aad.gov.au/datasets/science/GVdem_2008_netcdf/gvdem100\xe2\x80\xa6\n## file \n## \n## 1 /tmp/data/public.services.aad.gov.au/datasets/science/GVdem_2008_netcdf/gvdem\xe2\x80\xa6\n## 2 /tmp/data/public.services.aad.gov.au/datasets/science/GVdem_2008_netcdf/gvdem\xe2\x80\xa6\n## 3 /tmp/data/public.services.aad.gov.au/datasets/science/GVdem_2008_netcdf/gvdem\xe2\x80\xa6\n## 4 /tmp/data/public.services.aad.gov.au/datasets/science/GVdem_2008_netcdf/gvdem\xe2\x80\xa6\n## note \n## \n## 1 downloaded\n## 2 downloaded\n## 3 downloaded\n## 4 downloaded\n```\n\nThe data sources provided by blueant can be read, manipulated, and\nplotted using a range of other R packages, including\n[RAADTools](https://github.com/AustralianAntarcticDivision/raadtools)\nand [raster](https://cran.r-project.org/package=raster). In this case\nthe data files are netcdf, which can be read by `raster`:\n\n``` r\nlibrary(raster)\nx <- raster(myfiles$file[grepl(""gvdem500m_v3"", myfiles$file)])\nplot(x)\n```\n\n![](man/figures/README-rasterplot-1.png)\n\n## Nuances\n\n### Choosing a data directory\n\nIt\xe2\x80\x99s up to you where you want your data collection kept, and to provide\nthat location to bowerbird. A common use case for bowerbird is\nmaintaining a central data collection for multiple users, in which case\nthat location is likely to be some sort of networked file share.\nHowever, if you are keeping a collection for your own use, you might\nlike to look at to help find a\nsuitable directory location.\n\n### Authentication\n\nSome data providers require users to log in. These are indicated by the\n`authentication_note` column in the configuration table. For these\nsources, you will need to provide your user name and password, e.g.:\n\n``` r\nsrc <- sources(name=""CMEMS global gridded SSH reprocessed (1993-ongoing)"")\nsrc$user <- ""yourusername""\nsrc$password <- ""yourpassword""\ncf <- bb_add(cf, src)\n\n## or, using the pipe operator\nmysrc <- bb_example_sources(""CMEMS global gridded SSH reprocessed (1993-ongoing)"") %>%\n bb_modify_source(user = ""yourusername"", password = ""yourpassword"")\ncf <- cf %>% bb_add(mysrc)\n```\n\n### Writing and modifying data sources\n\nThe bowerbird documentation is a good place to start to find out more\nabout writing your own data sources or modifying existing ones.\n\n#### Reducing download sizes\n\nSometimes you might only want part of a data collection. Perhaps you\nonly want a few years from a long-term collection, or perhaps the data\nare provided in multiple formats and you only need one. 
If the data\nsource uses the `bb_handler_rget` method, you can restrict what is\ndownloaded by modifying the arguments passed through the data source\xe2\x80\x99s\n`method` parameter, particularly the `accept_follow`, `reject_follow`,\n`accept_download`, and `reject_download` options.\n\nFor example, the CERSAT SSM/I sea ice concentration data are arranged in\nyearly directories, and so it is fairly easy to restrict ourselves to,\nsay, only the 2017 data:\n\n``` r\nmysrc <- sources(""CERSAT SSM/I sea ice concentration"")\n\n## first make sure that the data source doesn\'t already have an accept_follow parameter defined\n""accept_follow"" %in% names(mysrc$method[[1]])\n\n## nope, so we can safely go ahead and impose our own\nmysrc$method[[1]]$accept_follow <- ""/2017""\ncf <- cf %>% bb_add(mysrc)\n```\n\nAlternatively, for data sources that are divided into subdirectories,\none could replace the whole-data-source `source_url` with one or more\nthat point to specific yearly (or other) subdirectories. For example,\nthe default `source_url` for the CERSAT sea ice data above is\n`ftp://ftp.ifremer.fr/ifremer/cersat/products/gridded/psi-concentration/data/antarctic/daily/netcdf/`\n(which has yearly subdirectories). So e.g.\xc2\xa0for 2016 and 2017 data we\ncould do:\n\n``` r\nmysrc <- sources(""CERSAT SSM/I sea ice concentration"")\nmysrc$source_url[[1]] <- c(\n ""ftp://ftp.ifremer.fr/ifremer/cersat/products/gridded/psi-concentration/data/antarctic/daily/netcdf/2016/"",\n ""ftp://ftp.ifremer.fr/ifremer/cersat/products/gridded/psi-concentration/data/antarctic/daily/netcdf/2017/"")\ncf <- cf %>% bb_add(mysrc)\n```\n\n#### Defining new data sources\n\nIf the blueant data sources don\xe2\x80\x99t cover your needs, you can define your\nown using the `bb_source` function. See the bowerbird documentation.\n\n## Data source summary\n\nThese are the data source definitions that are provided as part of the\nblueant package.\n\n### Data group: Altimetry\n\n#### CMEMS global gridded SSH near-real-time\n\nFor the Global Ocean - Multimission altimeter satellite gridded sea\nsurface heights and derived variables computed with respect to a\ntwenty-year mean. Previously distributed by Aviso+, no change in the\nscientific content. All the missions are homogenized with respect to a\nreference mission which is currently Jason-3. The acquisition of various\naltimeter data is a few days at most. VARIABLES\n\n - sea\\_surface\\_height\\_above\\_sea\\_level (SSH)\n\n - surface\\_geostrophic\\_eastward\\_sea\\_water\\_velocity\\_assuming\\_sea\\_level\\_for\\_geoid\n (UVG)\n\n - surface\\_geostrophic\\_northward\\_sea\\_water\\_velocity\\_assuming\\_sea\\_level\\_for\\_geoid\n (UVG)\n\n - sea\\_surface\\_height\\_above\\_geoid (SSH)\n\n - surface\\_geostrophic\\_eastward\\_sea\\_water\\_velocity (UVG)\n\n - surface\\_geostrophic\\_northward\\_sea\\_water\\_velocity (UVG)\n\nAuthentication note: Copernicus Marine login required, see\n\n\nApproximate size: 3 GB\n\nDocumentation link:\n\n\n#### CMEMS global gridded SSH reprocessed (1993-ongoing)\n\nFor the Global Ocean - Multimission altimeter satellite gridded sea\nsurface heights and derived variables computed with respect to a\ntwenty-year mean. Previously distributed by Aviso+, no change in the\nscientific content. All the missions are homogenized with respect to a\nreference mission which is currently OSTM/Jason-2. 
VARIABLES\n\n - sea\\_surface\\_height\\_above\\_sea\\_level (SSH)\n\n - surface\\_geostrophic\\_eastward\\_sea\\_water\\_velocity\\_assuming\\_sea\\_level\\_for\\_geoid\n (UVG)\n\n - surface\\_geostrophic\\_northward\\_sea\\_water\\_velocity\\_assuming\\_sea\\_level\\_for\\_geoid\n (UVG)\n\n - sea\\_surface\\_height\\_above\\_geoid (SSH)\n\n - surface\\_geostrophic\\_eastward\\_sea\\_water\\_velocity (UVG)\n\n - surface\\_geostrophic\\_northward\\_sea\\_water\\_velocity (UVG)\n\nAuthentication note: Copernicus Marine login required, see\n\n\nApproximate size: 310 GB\n\nDocumentation link:\n\n\n#### CNES-CLS2013 Mean Dynamic Topography\n\nCNES-CLS2013 Mean dynamic topography over the 1993-2012 period of the\nsea surface height above geoid. The MDT\\_CNES-CLS13 is an estimate of\nthe ocean MDT for the 1993-2012 period. Since April 2014 (Duacs 2014,\nv15.0 version), the Ssalto/Duacs (M)SLA products are computed relative\nto 1993-2012 period that is consistent with this new MDT CNES-CLS13.\nBased on 2 years of GOCE data, 7 years of GRACE data, and 20 years of\naltimetry and in-situ data (hydrologic and drifters data).\n\nAuthentication note: AVISO login required, see\n\n\nApproximate size: 0.1 GB\n\nDocumentation link:\n\n\n#### Delayed-time finite size Lyapunov exponents\n\nThe maps of Backward-in-time, Finite-Size Lyapunov Exponents (FSLEs) and\nOrientations of associated eigenvectors are computed over 21-year\naltimetry period and over global ocean within the SALP/Cnes project in\ncollaboration with CLS, LOcean and CTOH. These products provide the\nexponential rate of separation of particle trajectories initialized\nnearby and advected by altimetry velocities. FSLEs highlight the\ntransport barriers that control the horizontal exchange of water in and\nout of eddy cores.\n\nAuthentication note: AVISO login required, see\n\n\nApproximate size: 1200 GB\n\nDocumentation link:\n\n\n#### Gridded Sea Level Heights and geostrophic currents - Antarctic Ocean\n\nExperimental Ssalto/Duacs gridded multimission altimeter products\ndedicated to Antarctic Ocean. This dataset is one of the experimental\nproducts which are available on the SSALTO/DUACS experimental products.\nMultimission sea level heights computed with respect to a twenty-year\nmean and associated geostrophic current anomalies. The formal error is\nalso included.\n\nAuthentication note: AVISO login required, see\n\n\nApproximate size: 4.5 GB\n\nDocumentation link:\n\n\n#### Near-real-time finite size Lyapunov exponents\n\nThe maps of Backward-in-time, Finite-Size Lyapunov Exponents (FSLEs) and\nOrientations of associated eigenvectors are computed over 21-year\naltimetry period and over global ocean within the SALP/Cnes project in\ncollaboration with CLS, LOcean and CTOH. These products provide the\nexponential rate of separation of particle trajectories initialized\nnearby and advected by altimetry velocities. FSLEs highlight the\ntransport barriers that control the horizontal exchange of water in and\nout of eddy cores.\n\nAuthentication note: AVISO login required, see\n\n\nApproximate size: 100 GB\n\nDocumentation link:\n\n\n#### WAVERYS Global Ocean Waves Reanalysis\n\nGLOBAL\\_REANALYSIS\\_WAV\\_001\\_032 for the global wave reanalysis\ndescribing past sea states since years 1993. This product also bears the\nname of WAVERYS within the GLO-HR MFC, for correspondence to other\nglobal multi-year products like GLORYS, BIORYS, etc. 
The core of WAVERYS\nis based on the MFWAM model, a third generation wave model that\ncalculates the wave spectrum, i.e.\xc2\xa0the distribution of sea state energy\nin frequency and direction on a 1/5-degree irregular grid. Average wave\nquantities derived from this wave spectrum, such as the SWH (significant\nwave height) or the average wave period, are delivered on a regular\n1/5-degree grid with a 3h time step. The wave spectrum is discretized\ninto 30 frequencies obtained from a geometric sequence with first member\n0.035 Hz and a common ratio of 7.5. WAVERYS takes into account oceanic currents\nfrom the GLORYS12 physical ocean reanalysis and assimilates significant\nwave height observed from historical altimetry missions and directional\nwave spectra from Sentinel 1 SAR from 2017 onwards.\n\nAuthentication note: Copernicus Marine login required, see\n\n\nApproximate size: 1100 GB\n\nDocumentation link:\n\n\n### Data group: Biology\n\n#### CMEMS Global ocean low and mid trophic levels biomass content hindcast\n\nThe Low and Mid-Trophic Levels (LMTL) reanalysis for the global ocean is\nproduced at CLS on behalf of the Global Ocean Marine Forecasting Center. It\nprovides 2D fields of biomass content of zooplankton and six groups of\nmicronekton. It uses the LMTL component of the CLS dynamical population\nmodel. No data assimilation has been\ndone. This product also contains forcing data: net primary production,\neuphotic depth, the depth of each pelagic layer that zooplankton and\nmicronekton inhabit, and average temperature and currents over the\npelagic layers.\n\nAuthentication note: Copernicus Marine login required, see\n\n\nApproximate size: not specified\n\nDocumentation link:\n\n\n#### Myctobase\n\n
The global importance of mesopelagic fish is increasingly recognised,\nbut they remain poorly studied. This is particularly true in the\nSouthern Ocean, where mesopelagic fishes are both key predators and\nprey, but where the remote environment makes sampling them challenging.\nDespite this, multiple national Antarctic research programs have\nundertaken regional sampling of mesopelagic fish over several decades.\nHowever, data are dispersed, and sampling methodologies often differ,\nprecluding comparisons and limiting synthetic analyses. Here, we have\ncollated and standardised existing survey data of mesopelagic fishes\ninto a circumpolar dataset called Myctobase. To date, Myctobase holds\n17,491 occurrence and 11,190 abundance records from 4780 net hauls from\n72 different research cruises. Data include trait-based information of\nindividuals, including standard length, weight and life-stage. Data span\nthe years 1991 to 2019. Detailed metadata has also been provided for\neach sampling event, including the date, time, position (latitude,\nlongitude, and depth), sampling protocol, net type, net mesh size, tow\nspeed, volume filtered and haul type (routine, target, random).\n\nThe dataset comprises three comma-separated files. The first file\n(event.csv) describes the survey methodology. The second file\n(groupOccurrence.csv) contains the catch data, linked to the survey\nmethodology by an event ID. The final file (individualOccurrence.csv)\ncontains measurements of individuals. Each row contains the event and\noccurrence ID, which links each measurement to the first and second\nfile (see the sketch below). See the associated metadata record,\n\xe2\x80\x98definitions.xlsx\xe2\x80\x99, for definitions and units of each variable.\n\nThe final dataset was subject to quality control and validation\nprocesses. Entries with ambiguous or incomplete records were identified\nwith a \xe2\x80\x980\xe2\x80\x99 in the column labelled \xe2\x80\x98validation\xe2\x80\x99\n(event.csv), and a description of the missing data can be found in the\nfollowing column, labelled \xe2\x80\x98validationDescription\xe2\x80\x99.\n\nThe taxonomic name for each individual was verified against the World\nRegister of Marine Species.\n
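As a minimal sketch of how the three files link together (file names as described above; the ID column names ""eventID"" and ""occurrenceID"" are assumptions, to be checked against definitions.xlsx):\n\n``` r\n## Hypothetical column names (""eventID"", ""occurrenceID"") -- check\n## definitions.xlsx for the real ones before relying on this.\nevents <- read.csv(""event.csv"")\ngroups <- read.csv(""groupOccurrence.csv"")\nindividuals <- read.csv(""individualOccurrence.csv"")\n\n## catch records joined to their sampling methodology\ncatch <- merge(groups, events, by = ""eventID"")\n\n## individual measurements with event and occurrence context\nindiv <- merge(individuals, catch, by = c(""eventID"", ""occurrenceID""))\n```\n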
\n\nApproximate size: 0.009 GB\n\nDocumentation link: \n\n#### SCAR RAATD data filtered\n\nTracking data from 17 species of Antarctic and subantarctic seabirds,\nmarine mammals, and penguins. This data set is the \xe2\x80\x98filtered\xe2\x80\x99 version of\nthe data files. These files contain position estimates that have been\nprocessed using a state-space model in order to estimate locations at\nregular time intervals. For technical details of the filtering process,\nconsult the data paper. The filtering code can be found in the\n repository.\n\nApproximate size: 1.2 GB\n\nDocumentation link: \n\n#### SCAR RAATD data standardised\n\nTracking data from 17 species of Antarctic and subantarctic seabirds,\nmarine mammals, and penguins. This data set is the \xe2\x80\x98standardized\xe2\x80\x99\nversion of the data files. These files contain position estimates as\nprovided by the original data collectors (generally, raw Argos or GPS\nlocations, or estimated GLS locations). Original data files have been\nconverted to a common format and quality-checking applied, but have not\nbeen further filtered or interpolated.\n\nApproximate size: 0.3 GB\n\nDocumentation link: \n\n#### SCAR RAATD model outputs\n\nSingle-species habitat importance maps for 17 species of Antarctic and\nsubantarctic seabirds, marine mammals, and penguins. The data also\ninclude the integrated maps that incorporate all species (weighted by\ncolony size, and unweighted)\n\nApproximate size: 0.3 GB\n\nDocumentation link: \n\n#### SEAPODYM Zooplankton & Micronekton weekly potential and biomass distribution\n\nThe zooplankton & micronekton biomass distributions are outputs of the\nSEAPODYM Low and Mid-Trophic Levels (LMTL) model (Lehodey et al., 1998;\n2010; 2015). SEAPODYM-LMTL model simulates the spatial and temporal\ndynamics of six micronekton and one zooplankton functional groups\nbetween the sea surface and \\~1000m. The model is driven by ocean\ntemperature, horizontal currents, primary production and euphotic depth.\nPrimary production can be outputs from biogeochemical models or derived\nfrom ocean color satellite data using empirical optical models (e.g.,\nBehrenfeld and Falkowski 1997).\n\nAuthentication note: Requires registration, see\n\n\nApproximate size: not specified\n\nDocumentation link:\n\n\n#### Southern Ocean Continuous Plankton Recorder\n\nContinuous Plankton Recorder (CPR) surveys from the Southern Ocean.\nZooplankton species, numbers and abundance data are recorded on a\ncontinuous basis while vessels are in transit\n\nApproximate size: 0.1 GB\n\nDocumentation link: \n\n### Data group: Meteorological\n\n#### Antarctic Mesoscale Prediction System grib files\n\nThe Antarctic Mesoscale Prediction System - AMPS - is an experimental,\nreal-time numerical weather prediction capability that provides support\nfor the United States Antarctic Program, Antarctic science, and\ninternational Antarctic efforts.\n\nApproximate size: not specified\n\nDocumentation link: \n\n### Data group: Modelling\n\n#### Southern Ocean marine environmental data\n\nA collection of gridded marine environmental data layers suitable for\nuse in Southern Ocean species distribution modelling. All environmental\nlayers have been generated at a spatial resolution of 0.1 degrees,\ncovering the Southern Ocean extent (80 degrees S - 45 degrees S, -180 -\n180 degrees). 
The layers include information relating to bathymetry, sea\nice, ocean currents, primary production, particulate organic carbon, and\nother oceanographic data.\n\nApproximate size: 0.1 GB\n\nDocumentation link: \n\n### Data group: Ocean colour\n\n#### Oceandata MODIS Aqua Level-3 binned daily RRS\n\nDaily remote-sensing reflectance from MODIS Aqua. RRS is used to produce\nstandard ocean colour products such as chlorophyll concentration\n\nAuthentication note: Requires Earthdata login, see\n. Note that you will also need to\nauthorize the application \xe2\x80\x98OB.DAAC Data Access\xe2\x80\x99 (see \xe2\x80\x98My Applications\xe2\x80\x99\nat )\n\nApproximate size: 800 GB\n\nDocumentation link: \n\n#### Oceandata MODIS Aqua Level-3 mapped daily 4km chl-a\n\nDaily remote-sensing chlorophyll-a from the MODIS Aqua satellite at 4km\nspatial resolution\n\nAuthentication note: Requires Earthdata login, see\n. Note that you will also need to\nauthorize the application \xe2\x80\x98OB.DAAC Data Access\xe2\x80\x99 (see \xe2\x80\x98My Applications\xe2\x80\x99\nat )\n\nApproximate size: 40 GB\n\nDocumentation link: \n\n#### Oceandata MODIS Aqua Level-3 mapped monthly 9km chl-a\n\nMonthly remote-sensing chlorophyll-a from the MODIS Aqua satellite at\n9km spatial resolution\n\nAuthentication note: Requires Earthdata login, see\n. Note that you will also need to\nauthorize the application \xe2\x80\x98OB.DAAC Data Access\xe2\x80\x99 (see \xe2\x80\x98My Applications\xe2\x80\x99\nat )\n\nApproximate size: 8 GB\n\nDocumentation link: \n\n#### Oceandata SeaWiFS Level-3 binned daily RRS\n\nDaily remote-sensing reflectance from SeaWiFS. RRS is used to produce\nstandard ocean colour products such as chlorophyll concentration\n\nAuthentication note: Requires Earthdata login, see\n. Note that you will also need to\nauthorize the application \xe2\x80\x98OB.DAAC Data Access\xe2\x80\x99 (see \xe2\x80\x98My Applications\xe2\x80\x99\nat )\n\nApproximate size: 130 GB\n\nDocumentation link: \n\n#### Oceandata SeaWiFS Level-3 mapped monthly 9km chl-a\n\nMonthly remote-sensing chlorophyll-a from the SeaWiFS satellite at 9km\nspatial resolution\n\nAuthentication note: Requires Earthdata login, see\n. Note that you will also need to\nauthorize the application \xe2\x80\x98OB.DAAC Data Access\xe2\x80\x99 (see \xe2\x80\x98My Applications\xe2\x80\x99\nat )\n\nApproximate size: 7.2 GB\n\nDocumentation link: \n\n#### Oceandata VIIRS Level-3 binned daily RRS\n\nDaily remote-sensing reflectance from VIIRS. RRS is used to produce\nstandard ocean colour products such as chlorophyll concentration\n\nAuthentication note: Requires Earthdata login, see\n. Note that you will also need to\nauthorize the application \xe2\x80\x98OB.DAAC Data Access\xe2\x80\x99 (see \xe2\x80\x98My Applications\xe2\x80\x99\nat )\n\nApproximate size: 180 GB\n\nDocumentation link: \n\n#### Oceandata VIIRS Level-3 mapped 32-day 9km chl-a\n\nRolling 32-day composite remote-sensing chlorophyll-a from the VIIRS\nsatellite at 9km spatial resolution\n\nAuthentication note: Requires Earthdata login, see\n. Note that you will also need to\nauthorize the application \xe2\x80\x98OB.DAAC Data Access\xe2\x80\x99 (see \xe2\x80\x98My Applications\xe2\x80\x99\nat )\n\nApproximate size: 4 GB\n\nDocumentation link: \n\n#### Oceandata VIIRS Level-3 mapped daily 4km chl-a\n\nDaily remote-sensing chlorophyll-a from the VIIRS satellite at 4km\nspatial resolution\n\nAuthentication note: Requires Earthdata login, see\n. 
Note that you will also need to\nauthorize the application \xe2\x80\x98OB.DAAC Data Access\xe2\x80\x99 (see \xe2\x80\x98My Applications\xe2\x80\x99\nat )\n\nApproximate size: 50 GB\n\nDocumentation link: \n\n#### Oceandata VIIRS Level-3 mapped monthly 9km chl-a\n\nMonthly remote-sensing chlorophyll-a from the VIIRS satellite at 9km\nspatial resolution\n\nAuthentication note: Requires Earthdata login, see\n. Note that you will also need to\nauthorize the application \xe2\x80\x98OB.DAAC Data Access\xe2\x80\x99 (see \xe2\x80\x98My Applications\xe2\x80\x99\nat )\n\nApproximate size: 1 GB\n\nDocumentation link: \n\n#### Oceandata VIIRS Level-3 mapped seasonal 9km chl-a\n\nSeasonal remote-sensing chlorophyll-a from the VIIRS satellite at 9km\nspatial resolution\n\nAuthentication note: Requires Earthdata login, see\n. Note that you will also need to\nauthorize the application \xe2\x80\x98OB.DAAC Data Access\xe2\x80\x99 (see \xe2\x80\x98My Applications\xe2\x80\x99\nat )\n\nApproximate size: 0.5 GB\n\nDocumentation link: \n\n#### Southern Ocean summer chlorophyll-a climatology (Johnson)\n\nClimatological summer chlorophyll-a layer for the Southern Ocean south\nof 40S, following the OC3M algorithm of Johnson et al.\xc2\xa0(2013)\n\nApproximate size: 0.05 GB\n\nDocumentation link: \n\n### Data group: Oceanographic\n\n#### Argo ocean basin data (USGODAE)\n\nArgo float data from the Global Data Access Centre in Monterey, USA (US\nGlobal Ocean Data Assimilation Experiment). These are multi-profile\nnetcdf files divided by ocean basin.\n\nApproximate size: not specified\n\nDocumentation link: \n\n#### Argo profile data\n\nArgo profile data from .\n\nApproximate size: not specified\n\nDocumentation link: \n\n#### Argo profile data (USGODAE)\n\nArgo profile data from the Global Data Access Centre in Monterey, USA\n(US Global Ocean Data Assimilation Experiment).\n\nApproximate size: not specified\n\nDocumentation link: \n\n#### CSIRO Atlas of Regional Seas 2009\n\nCARS is a digital climatology, or atlas of seasonal ocean water\nproperties.\n\nApproximate size: 2.8 GB\n\nDocumentation link: \n\n#### Effects of Sound on the Marine Environment\n\nESME uses publically available environmental data sources that provide\ndetailed information about the ocean, in the form of four primary\ndatabases supplied by the Oceanographic and Atmospheric Master Library\n(OAML). (1) Bottom Sediment Type (BST) v 2.0 : This database provides\ninformation on the type of sediment on the ocean bottom, which affects\nits acoustic reflectivity. Available data resolutions: 2 min, 0.1 min.\n(2) Digital Bathymetry Database (DBDB) v 5.4 : This database provides\ninformation on the depth of the water column. Available data\nresolutions: 2 min, 1 min, .5 min, .1 min, 0.05 min. (3) Generalized\nDigital Environment Model (GDEM) v 3.0 : This database provides water\ntemperature and water salinity data for a selected month or months of\ntime, which is used to calculate the changes in the speed of sound in\nwater. Available data resolution: 15 min. (4) Surface Marine Gridded\nClimatology (SMGC) v 2.0 : This database provides wind speed data for a\nselected month or months. Wind speed, and consequently surface roughness\nand wave height, affect the surface\xe2\x80\x99s acoustic reflectivity. 
Available\ndata resolution: 60 min.\n\nApproximate size: 5 GB\n\nDocumentation link: \n\n#### World Ocean Atlas 2009\n\nWorld Ocean Atlas 2009 (WOA09) is a set of objectively analyzed (1\ndegree grid) climatological fields of in situ temperature, salinity,\ndissolved oxygen, Apparent Oxygen Utilization (AOU), percent oxygen\nsaturation, phosphate, silicate, and nitrate at standard depth levels\nfor annual, seasonal, and monthly compositing periods for the World\nOcean. It also includes associated statistical fields of observed\noceanographic profile data interpolated to standard depth levels on both\n1 degree and 5 degree grids\n\nApproximate size: 6 GB\n\nDocumentation link: \n\n#### World Ocean Atlas 2013 V2\n\nWorld Ocean Atlas 2013 version 2 (WOA13 V2) is a set of objectively\nanalyzed (1 degree grid) climatological fields of in situ temperature,\nsalinity, dissolved oxygen, Apparent Oxygen Utilization (AOU), percent\noxygen saturation, phosphate, silicate, and nitrate at standard depth\nlevels for annual, seasonal, and monthly compositing periods for the\nWorld Ocean. It also includes associated statistical fields of observed\noceanographic profile data interpolated to standard depth levels on 5\ndegree, 1 degree, and 0.25 degree grids\n\nApproximate size: 57 GB\n\nDocumentation link: \n\n### Data group: Reanalysis\n\n#### CCMP Wind Product V2\n\nThe Cross-Calibrated Multi-Platform (CCMP) gridded surface vector winds\nare produced using satellite, moored buoy, and model wind data, and are\na Level-3 ocean vector wind analysis product. The V2 CCMP processing\ncombines Version-7 RSS radiometer wind speeds, QuikSCAT and ASCAT\nscatterometer wind vectors, moored buoy wind data, and ERA-Interim model\nwind fields using a Variational Analysis Method (VAM) to produce four\nmaps daily of 0.25 degree gridded vector winds\n\nApproximate size: 120 GB\n\nDocumentation link:\n\n\n#### NCEP-DOE Reanalysis 1 monthly averages\n\nThe NCEP/NCAR Reanalysis 1 project uses a state-of-the-art\nanalysis/forecast system to perform data assimilation using past data\nfrom 1948 to the present. Monthly averages are calculated from the\n6-hourly model output.\n\nApproximate size: 2 GB\n\nDocumentation link:\n\n\n#### NCEP-DOE Reanalysis 2 monthly averages\n\nNCEP-DOE Reanalysis 2 is an improved version of the NCEP Reanalysis I\nmodel that fixed errors and updated parameterizations of physical\nprocesses. 
Monthly averages are calculated from the 6-hourly model\noutput.\n\nApproximate size: 2 GB\n\nDocumentation link:\n\n\n### Data group: Sea ice\n\n#### Artist AMSR-E sea ice concentration\n\nPassive microwave estimates of daily sea ice concentration at 6.25km\nspatial resolution, from 19-Jun-2002 to 2-Oct-2011.\n\nApproximate size: 25 GB\n\nDocumentation link:\n\n\n#### Artist AMSR-E supporting files\n\nGrids and other support files for Artist AMSR-E passive microwave sea\nice data.\n\nApproximate size: 0.01 GB\n\nDocumentation link:\n\n\n#### Artist AMSR2 near-real-time 3.125km sea ice concentration\n\nNear-real-time passive microwave estimates of daily sea ice\nconcentration at 3.125km spatial resolution (full Antarctic coverage).\n\nApproximate size: 100 GB\n\nDocumentation link:\n\n\n#### Artist AMSR2 near-real-time sea ice concentration\n\nNear-real-time passive microwave estimates of daily sea ice\nconcentration at 6.25km spatial resolution, from 24-July-2012 to\npresent.\n\nApproximate size: 11 GB\n\nDocumentation link:\n\n\n#### Artist AMSR2 supporting files\n\nGrids and landmasks for Artist AMSR2 passive microwave sea ice data.\n\nApproximate size: 0.02 GB\n\nDocumentation link:\n\n\n#### CERSAT SSM/I sea ice concentration\n\nPassive microwave sea ice concentration data at 12.5km resolution,\n3-Dec-1991 to present\n\nApproximate size: 2.5 GB\n\nDocumentation link:\n\n\n#### CERSAT SSM/I sea ice concentration supporting files\n\nGrids for the CERSAT SSM/I sea ice concentration data.\n\nApproximate size: 0.01 GB\n\nDocumentation link:\n\n\n#### Circum-Antarctic landfast sea ice extent, 2000-2018 - version 2.2\n\nThis dataset (provided as a series of CF-compatible netcdf file)\nconsists of 432 consecutive maps of Antarctic landfast sea ice, derived\nfrom NASA MODIS imagery. There are 24 maps per year, spanning the 18\nyear period from March 2000 to Feb 2018. The data are provided in a\npolar stereographic projection with a latitude of true scale at 70 S\n(i.e., to maintain compatibility with the NSIDC polar stereographic\nprojection).\n\nApproximate size: 8 GB\n\nDocumentation link: \n\n#### MODIS Composite Based Maps of East Antarctic Fast Ice Coverage\n\nMaps of East Antarctic landfast sea-ice extent, generated from approx.\n250,000 1 km visible/thermal infrared cloud-free MODIS composite imagery\n(augmented with AMSR-E 6.25-km sea-ice concentration composite imagery\nwhen required). Coverage from 2000-03-01 to 2008-12-31\n\nApproximate size: 0.4 GB\n\nDocumentation link: \n\n#### National Ice Center Antarctic daily sea ice charts\n\nThe USNIC Daily Ice Edge product depicts the daily sea ice pack in red\n(8-10/10ths or greater of sea ice), and the Marginal Ice Zone (MIZ) in\nyellow. The marginal ice zone is the transition between the open ocean\n(ice free) and pack ice. The MIZ is very dynamic and affects the\nair-ocean heat transport, as well as being a significant factor in\nnavigational safety. The daily ice edge is analyzed by sea ice experts\nusing multiple sources of near real time satellite data, derived\nsatellite products, buoy data, weather, and analyst interpretation of\ncurrent sea ice conditions. 
The product is a current depiction of the\nlocation of the ice edge vice a satellite derived ice edge product.\n\nApproximate size: not specified\n\nDocumentation link: \n\n#### Nimbus Ice Edge Points from Nimbus Visible Imagery\n\nThis data set (NmIcEdg2) estimates the location of the North and South\nPole sea ice edges at various times during the mid to late 1960s, based\non recovered Nimbus 1 (1964), Nimbus 2 (1966), and Nimbus 3 (1969)\nvisible imagery.\n\nAuthentication note: Requires Earthdata login, see\n. Note that you will also need to\nauthorize the application \xe2\x80\x98NSIDC\\_DATAPOOL\\_OPS\xe2\x80\x99 (see \xe2\x80\x98My Applications\xe2\x80\x99\nat )\n\nApproximate size: 0.1 GB\n\nDocumentation link: \n\n#### NSIDC passive microwave supporting files\n\nGrids and other support files for NSIDC passive microwave sea ice data.\n\nApproximate size: 0.1 GB\n\nDocumentation link: \n\n#### NSIDC SMMR-SSM/I Nasateam near-real-time sea ice concentration\n\nNear-real-time passive microwave estimates of sea ice concentration at\n25km, daily resolution. For older, quality-controlled data see the\n\xe2\x80\x9cNSIDC SMMR-SSM/I Nasateam sea ice concentration\xe2\x80\x9d source\n\nApproximate size: 0.6 GB\n\nDocumentation link: \n\n#### NSIDC SMMR-SSM/I Nasateam sea ice concentration\n\nPassive microwave estimates of sea ice concentration at 25km spatial\nresolution. Daily and monthly resolution, available from 1-Oct-1978 to\npresent. Data undergo a quality checking process and are updated\nannually. More recent data if required are available via the \xe2\x80\x9cNSIDC\nSMMR-SSM/I Nasateam near-real-time sea ice concentration\xe2\x80\x9d source.\n\nAuthentication note: Requires Earthdata login, see\n. Note that you will also need to\nauthorize the application \xe2\x80\x98NSIDC\\_DATAPOOL\\_OPS\xe2\x80\x99 (see \xe2\x80\x98My Applications\xe2\x80\x99\nat )\n\nApproximate size: 10 GB\n\nDocumentation link: \n\n#### Polarview Sentinel-1 imagery\n\nSentinel-1 imagery from polarview.aq\n\nApproximate size: not specified\n\nDocumentation link: \n\n#### Sea ice lead climatologies\n\nLong-term relative sea ice lead frequencies for the Arctic (November -\nApril 2002/03 - 2018/19) and Antarctic (April - September 2003 - 2019).\nIce surface temperature data (MYD/MOD29 col.\xc2\xa06) from the\nModerate-Resolution Imaging Spectroradiometer are used to derive daily\nobservations of sea ice leads in both polar regions. Sea ice leads are\ndefined as significant local surface temperature anomalies and they are\nautomatically identified during a two-stage process, including 1) the\ntile-based retrieval of potential sea ice leads and 2) the\nidentification of cloud artefacts using fuzzy logic (see Reiser et al.,\n2020 for further details). Subsequently, all daily sea ice lead maps are\ncombined to long-term averages showing the climatological distribution\nof leads in the Arctic and Antarctic. The dataset represents an update\nfor the Arctic (Willmes & Heinemann, 2016) and is the first for the\nAntarctic. 
These maps reveal that multiple distinct features with\nincreased lead frequencies are present that are related to bathymetric\nstructures, e.g.\xc2\xa0the continental shelf break or ridges and troughs.\n\nApproximate size: 0.25 GB\n\nDocumentation link: \n\n### Data group: Sea surface temperature\n\n#### GHRSST Level 4 MUR Global Foundation SST v4.1\n\nA Group for High Resolution Sea Surface Temperature (GHRSST) Level 4 sea\nsurface temperature analysis produced as a retrospective dataset (four\nday latency) at the JPL Physical Oceanography DAAC using wavelets as\nbasis functions in an optimal interpolation approach on a global 0.011\ndegree grid. The version 4 Multiscale Ultrahigh Resolution (MUR) L4\nanalysis is based upon nighttime GHRSST L2P skin and subskin SST\nobservations from several instruments including the NASA Advanced\nMicrowave Scanning Radiometer-EOS (AMSRE), the Moderate Resolution\nImaging Spectroradiometer (MODIS) on the NASA Aqua and Terra platforms,\nthe US Navy microwave WindSat radiometer and in situ SST observations\nfrom the NOAA iQuam project. The ice concentration data are from the\narchives at the EUMETSAT Ocean and Sea Ice Satellite Application\nFacility (OSI SAF) High Latitude Processing Center and are also used for\nan improved SST parameterization for the high-latitudes. This data set\nis funded by the NASA MEaSUREs program\n(),\nand created by a team led by Dr.\xc2\xa0Toshio Chin from JPL.\n\nApproximate size: 2000 GB\n\nDocumentation link:\n\n\n#### NOAA Extended Reconstructed SST V3b\n\nA global monthly SST analysis from 1854 to the present derived from\nICOADS data with missing data filled in by statistical methods\n\nApproximate size: 0.3 GB\n\nDocumentation link:\n\n\n#### NOAA Extended Reconstructed SST V5\n\nA global monthly sea surface temperature dataset derived from the\nInternational Comprehensive Ocean-Atmosphere Dataset (ICOADS)\n\nApproximate size: 0.3 GB\n\nDocumentation link:\n\n\n#### NOAA OI 1/4 Degree Daily SST AVHRR\n\nSea surface temperature at 0.25 degree daily resolution, from 1-Sep-1981\nto present\n\nApproximate size: 140 GB\n\nDocumentation link:\n\n\n#### NOAA OI 1/4 Degree Daily SST AVHRR v2\n\nSea surface temperature at 0.25 degree daily resolution, from 1-Sep-1981\nto Apr-2020\n\nApproximate size: 140 GB\n\nDocumentation link:\n\n\n#### NOAA OI SST V2\n\nWeekly and monthly mean and long-term monthly mean SST data, 1-degree\nresolution, 1981 to present. Ice concentration data are also included,\nwhich are the ice concentration values input to the SST analysis\n\nApproximate size: 0.9 GB\n\nDocumentation link:\n\n\n#### Oceandata MODIS Aqua Level-3 mapped monthly 9km SST\n\nMonthly remote-sensing SST from the MODIS Aqua satellite at 9km spatial\nresolution\n\nApproximate size: 7 GB\n\nDocumentation link: \n\n#### Oceandata MODIS Terra Level-3 mapped monthly 9km SST\n\nMonthly remote-sensing sea surface temperature from the MODIS Terra\nsatellite at 9km spatial resolution\n\nApproximate size: 7 GB\n\nDocumentation link: \n\n### Data group: Topography\n\n#### AAS\\_4116\\_Coastal\\_Complexity\n\nThe Antarctic outer coastal margin is the key interface between the\nmarine and terrestrial environments. Its physical configuration\n(including both length scale of variation and orientation/aspect) has\ndirect bearing on several closely associated cryospheric, biological,\noceanographical and ecological processes. This dataset provides a\ncharacterisation of Antarctic coastal complexity. 
At each point, a\ncomplexity metric is calculated at length scales from 1 to 256 km,\ngiving a multiscale estimate of the magnitude and direction of\nundulation or complexity at each point location along the entire\ncoastline.\n\nApproximate size: 0.05 GB\n\nDocumentation link: \n\n#### Bedmap2\n\nBedmap2 is a suite of gridded products describing surface elevation,\nice-thickness and the sea floor and subglacial bed elevation of the\nAntarctic south of 60S.\n\nApproximate size: 3.3 GB\n\nDocumentation link: \n\n#### Cryosat-2 digital elevation model\n\nA New Digital Elevation Model of Antarctica derived from 6 years of\ncontinuous CryoSat-2 measurements\n\nApproximate size: 2 GB\n\nDocumentation link: \n\n#### EGM2008 Global 2.5 Minute Geoid Undulations\n\nEach zip file contains an ESRI GRID raster data set of 2.5-minute geoid\nundulation values covering a 45 x 45 degree area. Each raster file has a\n2.5-minute cell size and is a subset of the global 2.5 x 2.5-minute grid\nof pre-computed geoid undulation point values found on the EGM2008-WGS\n84 Version web page. This ESRI GRID format represents a continuous\nsurface of geoid undulation values where each 2.5-minute raster cell\nderives its value from the original pre-computed geoid undulation point\nvalue located at the SW corner of each cell.\n\nApproximate size: not specified\n\nDocumentation link:\n\n\n#### ETOPO1 bathymetry\n\nETOPO1 is a 1 arc-minute global relief model of Earth\xe2\x80\x99s surface that\nintegrates land topography and ocean bathymetry.\n\nApproximate size: 1.3 GB\n\nDocumentation link: \n\n#### ETOPO2 bathymetry\n\n2-Minute Gridded Global Relief Data (ETOPO2v2c)\n\nApproximate size: 0.3 GB\n\nDocumentation link: \n\n#### GEBCO 2014 bathymetry\n\nA global grid at 30 arc-second intervals. Originally published in 2014,\nlast updated in April 2015. The data set is largely based on a database\nof ship-track soundings with interpolation between soundings guided by\nsatellite-derived gravity data. Where they improve on this model, data\nsets generated from other methods are included. The grid is accompanied\nby a Source Identifier Grid (SID). This indicates if the corresponding\ncells in the GEBCO\\_2014 Grid are based on soundings, pre-generated\ngrids or interpolation.\n\nApproximate size: 1.2 GB\n\nDocumentation link:\n\n\n#### GEBCO 2014 bathymetry SID\n\nA global grid at 30 arc-second intervals. Originally published in 2014,\nlast updated in April 2015. The data set is largely based on a database\nof ship-track soundings with interpolation between soundings guided by\nsatellite-derived gravity data. Where they improve on this model, data\nsets generated from other methods are included. The grid is accompanied\nby a Source Identifier Grid (SID). This indicates if the corresponding\ncells in the GEBCO\\_2014 Grid are based on soundings, pre-generated\ngrids or interpolation.\n\nApproximate size: 0.1 GB\n\nDocumentation link:\n\n\n#### GEBCO 2019 bathymetry\n\nThe GEBCO\\_2019 Grid is the latest global bathymetric product released\nby the General Bathymetric Chart of the Oceans (GEBCO). The GEBCO\\_2019\nproduct provides global coverage, spanning 89d 59\xe2\x80\x99 52.5\xe2\x80\x9cN, 179d 59\xe2\x80\x99\n52.5\xe2\x80\x9dW to 89d 59\xe2\x80\x99 52.5\xe2\x80\x9cS, 179d 59\xe2\x80\x99 52.5\xe2\x80\x9dE on a 15 arc-second grid. It\nconsists of 86400 rows x 43200 columns, giving 3,732,480,000 data\npoints. 
The data values are pixel-centre registered i.e.\xc2\xa0they refer to\nelevations at the centre of grid cells.\n\nApproximate size: 13 GB\n\nDocumentation link:\n\n\n#### GEBCO 2021 bathymetry\n\nThe GEBCO\\_2021 Grid is a global terrain model for ocean and land,\nproviding elevation data, in meters, on a 15 arc-second interval grid.\nIt includes a number of additional data sets compared to the GEBCO\\_2020\nGrid. The grid is accompanied by a Type Identifier (TID) Grid, giving\ninformation on the types of source data that the GEBCO\\_2021 Grid is\nbased on. The primary GEBCO\\_2021 grid contains land and ice surface\nelevation information - as provided for previous GEBCO grid releases. In\naddition, for the 2021 release, we have made available a version with\nunder-ice topography/bathymetry information for Greenland and\nAntarctica.\n\nApproximate size: 11.5 GB\n\nDocumentation link:\n\n\n#### GEBCO 2021 sub-ice bathymetry\n\nThe GEBCO\\_2021 Grid is a global terrain model for ocean and land,\nproviding elevation data, in meters, on a 15 arc-second interval grid.\nIt includes a number of additional data sets compared to the GEBCO\\_2020\nGrid. The grid is accompanied by a Type Identifier (TID) Grid, giving\ninformation on the types of source data that the GEBCO\\_2021 Grid is\nbased on. The primary GEBCO\\_2021 grid contains land and ice surface\nelevation information - as provided for previous GEBCO grid releases. In\naddition, for the 2021 release, we have made available a version with\nunder-ice topography/bathymetry information for Greenland and\nAntarctica.\n\nApproximate size: 11.5 GB\n\nDocumentation link:\n\n\n#### GEBCO 2021 type identifier grid\n\nThe GEBCO\\_2021 Grid is a global terrain model for ocean and land,\nproviding elevation data, in meters, on a 15 arc-second interval grid.\nIt includes a number of additional data sets compared to the GEBCO\\_2020\nGrid. The grid is accompanied by a Type Identifier (TID) Grid, giving\ninformation on the types of source data that the GEBCO\\_2021 Grid is\nbased on. The primary GEBCO\\_2021 grid contains land and ice surface\nelevation information - as provided for previous GEBCO grid releases. In\naddition, for the 2021 release, we have made available a version with\nunder-ice topography/bathymetry information for Greenland and\nAntarctica.\n\nApproximate size: 4.5 GB\n\nDocumentation link:\n\n\n#### George V bathymetry\n\nThis dataset comprises Digital Elevation Models (DEMs) of varying\nresolutions for the George V and Terre Adelie continental margin,\nderived by incorporating all available singlebeam and multibeam point\ndepth data.\n\nApproximate size: 0.15 GB\n\nDocumentation link:\n\n\n#### Geoscience Australia multibeam bathymetric grids of the Macquarie Ridge\n\nThis is a compilation of all the processed multibeam bathymetry data\nthat are publicly available in Geoscience Australia\xe2\x80\x99s data holding for\nthe Macquarie Ridge.\n\nApproximate size: 0.4 GB\n\nDocumentation link: \n\n#### GSHHG coastline data\n\nA Global Self-consistent, Hierarchical, High-resolution Geography\nDatabase\n\nApproximate size: 0.6 GB\n\nDocumentation link: \n\n#### IBCSO bathymetry\n\nThe International Bathymetric Chart of the Southern Ocean (IBCSO)\nVersion 1.0 is a new digital bathymetric model (DBM) portraying the\nseafloor of the circum-Antarctic waters south of 60S. IBCSO is a\nregional mapping project of the General Bathymetric Chart of the Oceans\n(GEBCO). 
The IBCSO Version 1.0 DBM has been compiled from all available\nbathymetric data collectively gathered by more than 30 institutions from\n15 countries. These data include multibeam and single-beam echo\nsoundings, digitized depths from nautical charts, regional bathymetric\ngridded compilations, and predicted bathymetry. Specific gridding\ntechniques were applied to compile the DBM from the bathymetric data of\ndifferent origin, spatial distribution, resolution, and quality. The\nIBCSO Version 1.0 DBM has a resolution of 500 x 500 m, based on a polar\nstereographic projection, and is publicly available together with a\ndigital chart for printing from the project website (www.ibcso.org) and\nat .\n\nApproximate size: 4.3 GB\n\nDocumentation link: \n\n#### IBCSO chart for printing\n\nThe IBCSO Poster, 2013, is a polar stereographic view of the Southern\nOcean displaying bathymetric contours south of 60S at a scale of\n1:7,000,000. The poster size is 39.25 x 47.125 inches.\n\nApproximate size: 0.2 GB\n\nDocumentation link: \n\n#### IBCSOv2 bathymetry\n\nThe International Bathymetric Chart of the Southern Ocean Version 2\n(IBCSO v2) is a digital bathymetric model for the area south of 50S with\nspecial emphasis on the bathymetry of the Southern Ocean. IBCSO v2 has a\nresolution of 500 m x 500 m in a Polar Stereographic projection. The\ntotal data coverage of the seafloor is 23.79% with a multibeam-only data\ncoverage of 22.32%. The remaining 1.47% include singlebeam and other\ndata. IBCSO v2 is the most authoritative seafloor map of the area south\nof 50S.\n\nApproximate size: 0.5 GB\n\nDocumentation link: \n\n#### Kerguelen Plateau bathymetric grid 2010\n\nThis data replaces the digital elevation model (DEM) for the Kerguelen\nPlateau region produced in 2005 (Sexton 2005). The revised grid has been\ngridded at a grid pixel resolution of 0.001-arc degree (about 100 m).\nThe new grid utilised the latest data sourced from ship-based multibeam\nand singlebeam echosounder surveys, and satellite remotely-sensed data.\nReport Reference: Beaman, R.J. and O\xe2\x80\x99Brien, P.E., 2011. Kerguelen\nPlateau bathymetric grid, November 2010. Geoscience Australia, Record,\n2011/22, 18 pages.\n\nApproximate size: 0.7 GB\n\nDocumentation link: \n\n#### Natural Earth 10m physical vector data\n\nNatural Earth is a public domain map dataset available at 1:10m, 1:50m,\nand 1:110 million scales.\n\nApproximate size: 0.2 GB\n\nDocumentation link:\n\n\n#### New Zealand Regional Bathymetry 2016\n\nThe NZ 250m gridded bathymetric data set and imagery, Mitchell et\nal.\xc2\xa02012, released 2016.\n\nApproximate size: 1.3 GB\n\nDocumentation link:\n\n\n#### Radarsat Antarctic digital elevation model V2\n\nThe high-resolution Radarsat Antarctic Mapping Project (RAMP) digital\nelevation model (DEM) combines topographic data from a variety of\nsources to provide consistent coverage of all of Antarctica. Version 2\nimproves upon the original version by incorporating new topographic\ndata, error corrections, extended coverage, and other modifications.\n\nApproximate size: 5.3 GB\n\nDocumentation link: \n\n#### Reference Elevation Model of Antarctica mosaic tiles\n\nThe Reference Elevation Model of Antarctica (REMA) is a high resolution,\ntime-stamped digital surface model of Antarctica at 8-meter spatial\nresolution. 
REMA is constructed from hundreds of thousands of individual\nstereoscopic Digital Elevation Models (DEM) extracted from pairs of\nsubmeter (0.32 to 0.5 m) resolution DigitalGlobe satellite imagery.\nVersion 1 of REMA includes approximately 98% of the contiguous\ncontinental landmass extending to maximum of roughly 88 degrees S.\nOutput DEM raster files are being made available as both \xe2\x80\x98strip\xe2\x80\x99 files\nas they are output directly from SETSM that preserve the original source\nmaterial temporal resolution, as well as mosaic tiles that are compiled\nfrom multiple strips that have been co-registered, blended, and\nfeathered to reduce edge-matching artifacts.\n\nApproximate size: 1.2 GB\n\nDocumentation link: \n\n#### Revision of the Kerguelen Plateau bathymetric grid\n\nThe existing regional bathymetric grid of the Kerguelen Plateau,\nsouth-west Indian Ocean, was updated using new singlebeam echosounder\ndata from commercial fishing and research voyages, and some new\nmultibeam swath bathymetry data. Source bathymetry data varies from\nInternational Hydrographic Organisation (IHO) S44 Order 1a to 2. The\nsource data were subjected to area-based editing to remove data spikes,\nthen combined with the previous Sexton (2005) grid to produce a new grid\nwith a resolution of 0.001-arcdegree. Satellite-derived datasets were\nused to provide island topography and to fill in areas of no data. The\nnew grid improves the resolution of morphological features observed in\nearlier grids, including submarine volcanic hills on the top of the\nKerguelen Plateau and a complex of submarine channels draining the\nsouthern flank of the bank on which Heard Island sits\n\nApproximate size: 0.7 GB\n\nDocumentation link: \n\n#### RTOPO-1 Antarctic ice shelf topography\n\nSub-ice shelf circulation and freezing/melting rates in ocean general\ncirculation models depend critically on an accurate and consistent\nrepresentation of cavity geometry. The goal of this work is to compile\nindependent regional fields into a global data set. We use the S-2004\nglobal 1-minute bathymetry as the backbone and add an improved version\nof the BEDMAP topography for an area that roughly coincides with the\nAntarctic continental shelf. Locations of the merging line have been\ncarefully adjusted in order to get the best out of each data set.\nHigh-resolution gridded data for upper and lower ice surface topography\nand cavity geometry of the Amery, Fimbul, Filchner-Ronne, Larsen C and\nGeorge VI Ice Shelves, and for Pine Island Glacier have been carefully\nmerged into the ambient ice and ocean topographies. Multibeam survey\ndata for bathymetry in the former Larsen B cavity and the southeastern\nBellingshausen Sea have been obtained from the data centers of Alfred\nWegener Institute (AWI), British Antarctic Survey (BAS) and\nLamont-Doherty Earth Observatory (LDEO), gridded, and again carefully\nmerged into the existing bathymetry map.\n\nApproximate size: 4.1 GB\n\nDocumentation link: \n\n#### Shuttle Radar Topography Mission elevation data SRTMGL1 V3\n\nGlobal 1-arc-second topographic data generated from NASA\xe2\x80\x99s Shuttle Radar\nTopography Mission. 
Version 3.0 (aka SRTM Plus or Void Filled) removes\nall of the void areas by incorporating data from other sources such as\nthe ASTER GDEM.\n\nAuthentication note: Requires Earthdata login, see\n\n\nApproximate size: 620 GB\n\nDocumentation link:\n\n\n#### Smith and Sandwell bathymetry\n\nGlobal seafloor topography from satellite altimetry and ship depth\nsoundings\n\nApproximate size: 1.4 GB\n\nDocumentation link: \n'",",https://doi.org/10.5281/zenodo.6809070,https://doi.org/10.4225/15/5afcadad6c130,https://doi.org/10.4225/15/5afcb927e8162,https://doi.org/10.26179/5d64b361ca8ec,https://doi.org/10.26179/fbfd-0828,https://doi.org/10.26179/5b8f30e30d4f3,https://doi.org/10.4225/15/5906b48f70bf9,https://doi.org/10.26179/5d267d1ceb60c,https://doi.org/10.4225/15/5667AC726B224,https://doi.org/10.26179/5d1af0ba45c03,https://doi.org/10.5194/tc-2017-223,https://doi.org/10.4225/25/53D9B12E0F96E","2017/08/22, 02:04:24",2256,CUSTOM,28,285,"2023/08/14, 01:50:35",16,2,16,1,73,0,0.0,0.018050541516245522,,,0,3,false,,false,false,,,https://github.com/AustralianAntarcticDivision,,,,,https://avatars.githubusercontent.com/u/8952518?v=4,,, VAPOR,"The Visualization and Analysis Platform for Ocean, Atmosphere and Solar Researchers.",NCAR,https://github.com/NCAR/VAPOR.git,github,"visualization,atmosphere,science",Ocean Data Processing and Access,"2023/10/24, 14:44:24",156,0,35,true,C++,National Center for Atmospheric Research,NCAR,"C++,C,Python,GLSL,NCL,CMake,IDL,Perl,Shell,Prolog,HTML,Dockerfile,Objective-C++,Awk,Batchfile",https://www.vapor.ucar.edu/,"b""[![DOI](https://zenodo.org/badge/DOI/10.5281/zenodo.8325689.svg)](https://doi.org/10.5281/zenodo.8325689)\n[![CircleCI](https://circleci.com/gh/NCAR/VAPOR.svg?style=svg)](https://circleci.com/gh/NCAR/VAPOR) \n\n## Vapor:\n\n**VAPOR** is the **V**isualization and **A**nalysis **P**latform for **O**cean, Atmosphere, and Solar **R**esearchers. VAPOR provides an interactive 3D visualization environment that can also produce animations and still frame images. VAPOR runs on most UNIX and Windows systems equipped with modern 3D graphics cards.\n\nThe VAPOR Data Collection (**VDC**) data model allows users progressively access the fidelity of their data, allowing for the visualization of terascale data sets on commodity hardware. VAPOR can also directly import data formats including WRF, MOM, POP, ROMS, and some GRIB and NetCDF files.\n\nUsers can perform ad-hoc analysis with VAPOR's interactive Python interpreter; which allows for the creation, modification, and visualization of new variables based on input model data.\n\nVAPOR is a product of the **National Center for Atmospheric Research's Computational and Information Systems Lab**. Support for VAPOR is provided by the U.S. 
**National Science Foundation** (grants # 03-25934 and 09-06379, ACI-14-40412), and by the **Korea Institute of Science and Technology Information**\n\nProject homepage and binary releases can be found at [https://www.vapor.ucar.edu/](https://www.vapor.ucar.edu/)\n\n## Citation\nIf VAPOR benefits your research, please kindly cite [this publication](https://www.mdpi.com/2073-4433/10/9/488):\n```\n@Article{atmos10090488,\nAUTHOR = {Li, Shaomeng and Jaroszynski, Stanislaw and Pearse, Scott and Orf, Leigh and Clyne, John},\nTITLE = {VAPOR: A Visualization Package Tailored to Analyze Simulation Data in Earth System Science},\nJOURNAL = {Atmosphere},\nVOLUME = {10},\nYEAR = {2019},\nNUMBER = {9},\nARTICLE-NUMBER = {488},\nURL = {https://www.mdpi.com/2073-4433/10/9/488},\nISSN = {2073-4433},\nABSTRACT = {Visualization is an essential tool for analysis of data and communication of findings in the sciences, and the Earth System Sciences (ESS) are no exception. However, within ESS, specialized visualization requirements and data models, particularly for those data arising from numerical models, often make general purpose visualization packages difficult, if not impossible, to use effectively. This paper presents VAPOR: a domain-specific visualization package that targets the specialized needs of ESS modelers, particularly those working in research settings where highly-interactive exploratory visualization is beneficial. We specifically describe VAPOR’s ability to handle ESS simulation data from a wide variety of numerical models, as well as a multi-resolution representation that enables interactive visualization on very large data while using only commodity computing resources. We also describe VAPOR’s visualization capabilities, paying particular attention to features for geo-referenced data and advanced rendering algorithms suitable for time-varying, 3D data. Finally, we illustrate VAPOR’s utility in the study of a numerically- simulated tornado. 
Our results demonstrate both ease-of-use and the rich capabilities of VAPOR in such a use case.},\nDOI = {10.3390/atmos10090488}\n}\n```\n\n## Project Members:\n\n- Nihanth Cherukuru\n- John Clyne\n- Scott Pearse\n- Samuel Li\n- Stanislaw Jaroszynski\n- Kenny Gruchalla\n- Niklas Roeber\n- Pamela Gillman\n\n![Vapor Banner](share/images/vapor_banner.png)\n""",",https://doi.org/10.5281/zenodo.8325689","2017/06/14, 19:25:05",2324,CUSTOM,73,4480,"2023/10/05, 17:14:03",499,1280,2978,193,20,1,1.0,0.6983906770255273,"2023/10/19, 13:34:33",Weekly,0,12,false,,false,true,,,https://github.com/NCAR,http://ncar.ucar.edu,"Boulder, CO",,,https://avatars.githubusercontent.com/u/2007542?v=4,,, Ocean-Data-Map-Project,A Data Visualization tool that enables users to discover and view 3D ocean model output quickly and easily.,DFO-Ocean-Navigator,https://github.com/DFO-Ocean-Navigator/Ocean-Data-Map-Project.git,github,"science,ocean-model,oceanography,javascript,netcdf,python,ocean-navigator,gis,data-visualization",Ocean Data Processing and Access,"2023/10/11, 17:02:38",44,0,7,true,Python,Ocean Navigator,DFO-Ocean-Navigator,"Python,JavaScript,HTML,SCSS,Shell,CSS,EJS",http://navigator.oceansdata.ca,"b'# Ocean Navigator\n\n[![CodeFactor](https://www.codefactor.io/repository/github/dfo-ocean-navigator/ocean-data-map-project/badge)](https://www.codefactor.io/repository/github/dfo-ocean-navigator/ocean-data-map-project)\n[![Lint Python](https://github.com/DFO-Ocean-Navigator/Ocean-Data-Map-Project/actions/workflows/lint_python.yml/badge.svg)](https://github.com/DFO-Ocean-Navigator/Ocean-Data-Map-Project/actions/workflows/lint_python.yml)\n[![Python tests](https://github.com/DFO-Ocean-Navigator/Ocean-Data-Map-Project/actions/workflows/python-tests.yml/badge.svg)](https://github.com/DFO-Ocean-Navigator/Ocean-Data-Map-Project/actions/workflows/python-tests.yml)\n\n## Contents\n* Overview\n* Development\n* Automate CLASS4 pickle generation\n\n---\n\n## Overview\n\nOcean Navigator is a Data Visualization tool that enables users to discover and view 3D ocean model output quickly and easily.\n\nThe model outputs are stored as [NetCDF4](https://en.wikipedia.org/wiki/NetCDF) files. Our file management is now handled by an SQLite3 process that incrementally scans the files for a dataset, and updates a corresponding table so that the Python layer can only open the exact files required to perform computations; as opposed to the THREDDS aggregation approach which serves all the files in a dataset as a single netcdf file. The THREDDS approach was unable to scale to the sheer size of the datasets we deal with.\n\nThe server-side component of the Ocean Navigator is written in Python 3, using the Flask web API. Conceptually, it is broken down into three components:\n\n-\tQuery Server\n\n\tThis portion returns metadata about the selected dataset in JSON format. These queries include things like the list of variables in the dataset, the times covered, the list of depths for that dataset, etc.\n\n\tThe other queries include things such as predefined areas (NAFO divisions, EBSAs, etc), and ocean drifter paths. The drifter paths are loaded from NetCDF files, but all the other queries are loaded from KML files.\n\n-\tPlotting\n\n\tThis portion generates an image plot, which could be a map with surface fields (or fields at a particular depth), a transect through a defined part of the ocean, depth profiles of one or more points, etc. 
We use the matplotlib python module to generate the plots.\n\n\tBecause the model grid rarely lines up with the map projection, and profiles and transects don\'t necessarily fall on model grid points, we employ some regridding and interpolation to generate these plots. For example, for a map plot, we select all the model points that fall within the area, plus some extra around the edges and regrid to a 500x500 grid that is evenly spaced over the projection area. An added benefit of this regridding is that we can directly compare across models with different grids. This allows us to calculate anomalies on the fly by comparing the model to a climatology. In theory, this would also allow for computing derived outputs from variables in different datasets with different native grids.\n\n-\tTile Server\n\n\tThis portion is really a special case of the plotting component. The tile server serves 256x256 pixel tiles at different resolutions and projections that can be used by the OpenLayers web mapping API. This portion doesn\'t use matplotlib, as the tiles don\'t have axis labels, titles, legends, etc. The same style of interpolation/regridding is done to generate the data for the images.\n\n\tThe generated tiles are cached to disk after they are generated the first time, this allows the user request to bypass accessing the NetCDF files entirely on subsequent requests.\n\nThe user interface is written in Javascript using the React framework. This allows for a single-page, responsive application that offloads as much processing from the server onto the user\'s browser as possible. For example, if the user chooses to load points from a CSV file, the file is parsed in the browser and only necessary parts of the result are sent back to the server for plotting.\n\nThe main display uses the OpenLayers mapping API to allow the user to pan around the globe to find the area of interest. It also allows the user to pick an individual point to get more information about, draw a transect on the map, or draw a polygon to extract a map or statistics for an area.\n\n---\n\n## Development\n\n### Local Installation\nThe instructions for performing a local installation of the Ocean Data Map Project are available at:\n[https://github.com/DFO-Ocean-Navigator/Navigator-Installer/blob/master/README.md](https://github.com/DFO-Ocean-Navigator/Navigator-Installer/blob/master/README.md)\n\n* While altering Javascript code, it can be actively transpiled using:\n\t* `cd oceannavigator/frontend`\n\t* `yarn run dev`\n* There\'s also a linter available: `yarn run lint`.\n* For production use the command: \n\t* `rm -r oceannavigator/frontend`\n\t* `cd oceannavigator/frontend`\n\t* `yarn run build`\n\n### SQLite3 backend\nSince we\'re now using a home-grown indexing solution, as such there is now no ""server"" to host the files through a URL (at the moment). You also need to install the dependencies for the [netcdf indexing tool](https://github.com/DFO-Ocean-Navigator/netcdf-timestamp-mapper). Then, download a released binary for Linux systems [here](https://github.com/DFO-Ocean-Navigator/netcdf-timestamp-mapper/releases). You should go through the README for basic setup and usage details.\n\nThe workflow to import new datasets into the Navigator has also changed:\n1. Run the indexing tool linked above.\n2. Modify `datasetconfig.json` so that the `url` attribute points to the absolute path of the generated `.sqlite3` database.\n3. 
Restart the web server.\n\n### Running the webserver for development\nAssuming the above installation script succeeded, your PATH should be set to point towards `${HOME}/miniconda/3/amd64/bin`, and the `navigator` conda environment has been activated.\n* Debug server (single-threaded):\n\t* `python ./bin/runserver.py`\n* Multi-threaded (via gunicorn):\n\t* `./bin/runserver.sh`\n\n### Running the webserver for production\nThe launch-web-service.sh script automatically determines how many processors are available, determines the platform\'s IP address and a usable port above 5000, and prints out the IP and port information. The IP:PORT information can then be copied into a web browser to access the Ocean Navigator web service, either locally or shared with others. The script also copies everything being written to stdout into the ${HOME}/launch-on-web-service.log file.\n* Multi-threaded (via gunicorn):\n * `./bin/launch-web-service.sh`\n\n### Coding Style (Javascript)\nJavascript is a dynamically-typed language, so it is important to write clear and concise code that demonstrates its exact purpose.\n\n* Comment any code whose intention may not be self-evident (safer to have more comments than none at all).\n* Use `var`, `let`, and `const` appropriately when declaring variables:\n\t* `var`: scoped to the nearest function block. Modern ES6/Javascript doesn\'t really use this anymore because it usually leads to scoping conflicts. However, `var` allows re-declaration of a variable.\n\t* `let`: a keyword introduced in the ES6 standard which is scoped to the *nearest block*. It\'s very useful when using `for()` loops (and similar), so don\'t predefine the loop variable:\n\n\t\t* Bad:\n\t\t\t```\n\t\t\t\tmyFunc() {\n\t\t\t\t\tvar i;\n\t\t\t\t\t...\n\t\t\t\t\t// Some code\n\t\t\t\t\t...\n\t\t\t\t\tfor (i = 0; i < something; ++i) {\n\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t```\n\t\t* Good:\n\t\t\t```\n\t\t\t\tmyFunc() {\n\t\t\t\t\t...\n\t\t\t\t\t// Some code\n\t\t\t\t\t...\n\t\t\t\t\tfor (let i = 0; i < something; ++i) {\n\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t```\n\t\t\n\t\tKeep in mind that `let` *does not* allow re-declaration of a variable.\n\n\t* `const`: functionally identical to the `let` keyword, but it disallows re-assignment. Just like const-correctness in C++, `const` is a great candidate for most variable declarations, as it immediately states ""I am not being changed"". This leads to the next rule.\n* Use `const` when declaring l-values with `require()`. Example:\n\t```\n\t\tconst LOADING_IMAGE = require(""../images/bar_loader.gif"");\n\t```\n* Unless using `for` loops, *DO NOT* use single-letter variables! It\'s an extreme nuisance for other programmers to understand the intention of code littered with variables like `s`, `t`, etc. Slightly more verbose code that is extremely clear results in a much lower risk of bugs.\n\t* Bad:\n\t\t```\n\t\t\t$.when(var_promise, time_promise).done(function(v, t) {\n\t\t\t\t// Some code\n\t\t\t\t...\n\t\t\t});\n\t\t```\n\t* Good:\n\t\t```\n\t\t\t$.when(var_promise, time_promise).done(function(variable, time) {\n\t\t\t\t// Some code\n\t\t\t\t...\n\t\t\t});\n\t\t```\n* Try to avoid massive `if` chains. Obviously the most important thing is to get a feature/bugfix working. 
However if it results in a whole bunch of nested `if` statements, or `if`-`for`-`if`-`else`, etc., try to take that working result and incorporate perhaps a `switch`, or hashtable to make your solution cleaner, and more performant. If it\'s unavoidable, a well-placed comment would reduce the likelihood of a fellow developer trying to optimize it.\n\n### Coding Style (Python)\nComing soon...\n\n## Automate CLASS4 pickle generation\n\nIn order to generate the class4.pickle file daily. You should create a crontab entry for the user account hosting the Ocean Navigator instance. Use the command `crontab -e` to add the string `@daily ${HOME}/Ocean-Data-Map-Project/bin/launch-pickle.sh`. Then once a day at midnight the script launch-pickle.sh will index all the CLASS4 files.\n\n## Proper handling of the datasetconfig.json and oceannavigator.cfg configuration files\n\nIn order to provide a production ready and off-site configuration files. We have implemented a new configurations repository. When people clone the Ocean-Data-Map-Project repository they will need to perform an additional step of updating any defined submodules. The following command changes your working directory to your local Ocean-Data-Map-Project directory and then updates the submodules recursively.\n\n* cd ${HOME}/Ocean-Data-Map-Project ; git submodule update --init --recursive\n'",,"2017/07/06, 19:29:57",2302,GPL-3.0,32,1315,"2023/10/11, 17:02:40",134,571,968,95,14,4,0.7,0.6346153846153846,"2023/09/27, 13:40:30",v9.1,3,18,false,,false,false,,,https://github.com/DFO-Ocean-Navigator,http://navigator.oceansdata.ca,,,,https://avatars.githubusercontent.com/u/29958846?v=4,,, OceanSpy,An open source and user-friendly Python package that enables scientists and interested amateurs to analyze and visualize ocean model datasets.,hainegroup,https://github.com/hainegroup/oceanspy.git,github,"oceanography,ocean,physical-oceanography",Ocean Data Processing and Access,"2023/10/24, 04:50:23",85,0,15,true,Jupyter Notebook,Thomas Haine Research Group,hainegroup,"Jupyter Notebook,Python,CSS,TeX",https://oceanspy.readthedocs.io,"b"".. _readme:\n\n======================================================================================\nOceanSpy - A Python package to facilitate ocean model data analysis and visualization.\n======================================================================================\n\n|OceanSpy|\n\n|version| |conda forge| |docs| |CI| |pre-commit| |codecov| |black| |license| |doi| |JOSS| |binder|\n\n.. admonition:: Interactive Demo\n\n Check out the interactive demonstration of OceanSpy at `www.bndr.it/gfvgd `_\n\nFor publications, please cite the following paper:\n\nAlmansi, M., R. Gelderloos, T. W. N. Haine, A. Saberi, and A. H. Siddiqui (2019). OceanSpy: A Python package to facilitate ocean model data analysis and visualization. *Journal of Open Source Software*, 4(39), 1506, doi: https://doi.org/10.21105/joss.01506 .\n\nThis material is based upon work supported by the National Science Foundation under Grant Numbers 1835640, 124330, 118123, and 1756863. 
Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the National Science Foundation.\n\nWhat is OceanSpy?\n-----------------\n**OceanSpy** is an open-source and user-friendly Python package that enables scientists and interested amateurs to analyze and visualize ocean model datasets.\nOceanSpy builds on software packages developed by the Pangeo_ community, in particular xarray_, dask_, and xgcm_.\nThe integration of dask facilitates scalability, which is important for the petabyte-scale simulations that are becoming available.\n\nWhy OceanSpy?\n-------------\nSimulations of ocean currents using numerical circulation models are becoming increasingly realistic.\nAt the same time, these models generate increasingly large volumes of model output data, making the analysis of model data harder.\nUsing OceanSpy, model data can be easily analyzed in the way observational oceanographers analyze field measurements.\n\nHow to use OceanSpy?\n--------------------\nOceanSpy can be used as a standalone package for analysis of local circulation model output, or it can be run on a remote data-analysis cluster, such as the Johns Hopkins University SciServer_ system, which hosts several simulations and is publicly available (see `SciServer Access`_, and `Datasets`_).\n\n.. note::\n\n OceanSpy has been developed and tested using MITgcm output. However, it is designed to work with any (structured grid) ocean general circulation model. OceanSpy's architecture allows to easily implement model-specific features, such as different grids, numerical schemes for vector calculus, budget closures, and equations of state. We actively seek input and contributions from users of other ocean models (`feedback submission`_).\n\n\n\n\n.. _Pangeo: http://pangeo-data.github.io\n.. _xarray: http://xarray.pydata.org\n.. _dask: https://dask.org\n.. _xgcm: https://xgcm.readthedocs.io\n.. _SciServer: http://www.sciserver.org\n.. _`SciServer Access`: https://oceanspy.readthedocs.io/en/latest/sciserver.html\n.. _Datasets: https://oceanspy.readthedocs.io/en/latest/datasets.html\n.. _`feedback submission`: https://github.com/hainegroup/oceanspy/issues\n\n.. |OceanSpy| image:: https://github.com/hainegroup/oceanspy/raw/main/docs/_static/oceanspy_logo_blue.png\n :alt: OceanSpy image\n :target: https://oceanspy.readthedocs.io\n\n.. |version| image:: https://img.shields.io/pypi/v/oceanspy.svg?style=flat\n :alt: PyPI\n :target: https://pypi.python.org/pypi/oceanspy\n\n.. |conda forge| image:: https://anaconda.org/conda-forge/oceanspy/badges/version.svg\n :alt: conda-forge\n :target: https://anaconda.org/conda-forge/oceanspy\n\n.. |docs| image:: http://readthedocs.org/projects/oceanspy/badge/?version=latest\n :alt: Documentation\n :target: http://oceanspy.readthedocs.io/en/latest/?badge=latest\n\n.. |CI| image:: https://img.shields.io/github/workflow/status/hainegroup/oceanspy/CI?logo=github\n :alt: CI\n :target: https://github.com/hainegroup/oceanspy/actions\n\n.. |codecov| image:: https://codecov.io/github/hainegroup/oceanspy/coverage.svg?branch=main\n :alt: Coverage\n :target: https://codecov.io/github/hainegroup/oceanspy?branch=main\n\n.. |black| image:: https://img.shields.io/badge/code%20style-black-000000.svg\n :alt: black\n :target: https://github.com/psf/black\n\n.. |license| image:: https://img.shields.io/github/license/mashape/apistatus.svg\n :alt: License\n :target: https://github.com/hainegroup/oceanspy\n\n.. 
|doi| image:: https://zenodo.org/badge/DOI/10.5281/zenodo.3270646.svg\n :alt: doi\n :target: https://doi.org/10.5281/zenodo.3270646\n\n.. |JOSS| image:: http://joss.theoj.org/papers/10.21105/joss.01506/status.svg\n :alt: JOSS\n :target: https://doi.org/10.21105/joss.01506\n\n.. |binder| image:: https://mybinder.org/badge_logo.svg\n :alt: binder\n :target: https://mybinder.org/v2/gh/hainegroup/oceanspy.git/main?filepath=binder\n\n.. |pre-commit| image:: https://results.pre-commit.ci/badge/github/hainegroup/oceanspy/main.svg\n :target: https://results.pre-commit.ci/latest/github/hainegroup/oceanspy/main\n :alt: pre-commit.ci status\n""",",https://doi.org/10.21105/joss.01506,https://doi.org/10.5281/zenodo.3270646\n\n,https://doi.org/10.21105/joss.01506\n\n","2018/04/09, 06:20:24",2025,MIT,73,1166,"2023/10/24, 04:50:24",12,258,378,119,2,0,0.7,0.5451092117758785,"2023/04/04, 01:53:47",v0.3.4,0,11,false,,false,true,,,https://github.com/hainegroup,,,,,https://avatars.githubusercontent.com/u/61514340?v=4,,, oce,An R package for oceanographic processing.,dankelley,https://github.com/dankelley/oce.git,github,"oceanography,r",Ocean Data Processing and Access,"2023/10/25, 22:44:35",136,0,10,true,R,,,"R,C++,Fortran,C,HTML,Makefile,MATLAB,TeX,Shell,Python",http://dankelley.github.io/oce/,"b'# oce \n\n\n\n[![CRAN\\_Status\\_Badge](https://www.r-pkg.org/badges/version/oce)](https://cran.r-project.org/package=oce)\n[![status](https://joss.theoj.org/papers/10.21105/joss.03594/status.svg)](https://joss.theoj.org/papers/10.21105/joss.03594)\n[![Project Status: Active \xe2\x80\x93 The project has reached a stable, usable state and is being actively developed.](http://www.repostatus.org/badges/latest/active.svg)](https://www.repostatus.org/)\n[![GitHub last commit](https://img.shields.io/github/last-commit/dankelley/oce)](https://img.shields.io/github/last-commit/dankelley/oce)\n[![R-CMD-check](https://github.com/dankelley/oce/actions/workflows/R-CMD-check.yaml/badge.svg)](https://github.com/dankelley/oce/actions/workflows/R-CMD-check.yaml)\n![RStudio CRAN mirror downloads](https://cranlogs.r-pkg.org/badges/last-month/oce)\n![RStudio CRAN mirror downloads](https://cranlogs.r-pkg.org/badges/last-week/oce)\n![RStudio CRAN mirror downloads](https://cranlogs.r-pkg.org/badges/last-day/oce)\n![RStudio CRAN mirror downloads](https://cranlogs.r-pkg.org/badges/grand-total/oce)\n\n\n## Why use R for oceanographic analysis?\n\nThe R language is popular in many branches of science, and Oceanography is no\nexception. With its broad statistical support, R is a natural choice for\noceanographers in the biological, chemical and geological sub-disciplines.\nHowever, some physical oceanographers have remained attached to Matlab, which\nwas widely adopted during the 1990s. Lately, this has been changing, as\noceanographers turn to open-source systems such as Python and R. A particular\nstrength of R is its provision of many powerful and well-vetted packages for\nhandling specialized calculations. The oce package is a prime example.\n\n## What the oce package provides\n\nThe oce package handles a wide variety of tasks that come up in the analysis of\nOceanographic data. In addition to the present README file, a brief sketch of\nthe package has been written by the core developers (Kelley Dan E., Clark\nRichards and Chantelle Layton, 2022. [oce: an R package for Oceanographic\nAnalysis](https://doi.org/10.21105/joss.03594). 
Journal of Open Source\nSoftware, 7(71), 3594), and the primary developer uses the package extensively\nin his book about the place of R in oceanographic analysis\n(Kelley, Dan E., 2018.\n[Oceanographic Analysis with R](https://link.springer.com/book/10.1007/978-1-4939-8844-0).\nNew York. Springer-Verlag ISBN 978-1-4939-8844-0). Details of oce functions\nare provided within the R help system, and in the package\n[webpage](https://dankelley.github.io/oce/).\n\n## Installing oce\n\nStable versions of oce are available from CRAN, and may be installed\nfrom within R, in the same way as other packages. However, the CRAN\nversion is only updated a few times a year (pursuant to policy), so many\nusers install the `""develop""` branch instead. This branch may be updated\nseveral times per day, as the authors fix bugs or add features that are\nmotivated by day-to-day usage. This is the branch favoured by users who\nneed new features or who would wish to contribute to Oce development.\n\nThe easy way to install the `""develop""` branch is to execute the\nfollowing commands in R.\n\n remotes::install_github(""dankelley/oce"", ref=""develop"")\n\nand most readers should also install Ocedata, with\n\n remotes::install_github(""dankelley/ocedata"", ref=""main"")\n\n## Evolution of oce\n\nOce is emphatically an open-source system, and so the participation of\nusers is very important. This is why Git is used for version control of\nthe Oce source code, and why GitHub is the host for that code. Users are\ninvited to take part in the development process, by suggesting features,\nby reporting bugs, or just by watching as others do such things.\nOceanography is a collaborative discipline, so it makes sense that the\nevolution of Oce be similarly collaborative.\n\n## Examples using built-in datasets\n\n### CTD\n\n library(oce)\n data(ctd)\n plot(ctd, which=c(1,2,3,5), type=""l"", span=150)\n\n![Sample CTD plot.](https://raw.githubusercontent.com/dankelley/oce/develop/oce-demo-1.png)\n\n\n### Acoustic Doppler profiler\n\n library(oce)\n data(adp)\n plot(adp)\n\n![Sample adp plot.](https://raw.githubusercontent.com/dankelley/oce/develop/oce-demo-2.png)\n\n### Sealevel and tides\n\n library(oce)\n data(sealevel)\n m <- tidem(sealevel)\n par(mfrow=c(2, 1))\n plot(sealevel, which=1)\n plot(m)\n\n![Sample sealevel plot.](https://raw.githubusercontent.com/dankelley/oce/develop/oce-demo-3.png)\n\n### Echosounder\n\n library(oce)\n data(echosounder)\n plot(echosounder, which=2, drawTimeRange=TRUE, drawBottom=TRUE)\n\n![Sample echosounder plot.](https://raw.githubusercontent.com/dankelley/oce/develop/oce-demo-4.png)\n\n### Map\n\n library(oce)\n par(mar=rep(0.5, 4))\n data(endeavour, package=""ocedata"")\n data(coastlineWorld, package=""oce"")\n mapPlot(coastlineWorld, col=""gray"")\n mapPoints(endeavour$longitude, endeavour$latitude, pch=20, col=""red"")\n\n![Sample map plot.](https://raw.githubusercontent.com/dankelley/oce/develop/oce-demo-5.png)\n\n### Landsat image\n\n library(ocedata)\n library(oce)\n data(landsat)\n plot(landsat)\n\n![Sample landsat image plot.](https://raw.githubusercontent.com/dankelley/oce/develop/oce-demo-6.png)\n\n'",",https://doi.org/10.21105/joss.03594","2010/03/19, 21:23:42",4968,GPL-3.0,431,10108,"2023/10/21, 14:20:07",13,133,2139,155,4,0,0.5,0.04479180349402334,"2022/03/30, 13:27:08",v1.7.2,0,13,false,,true,true,,,,,,,,,,, GPM-API,"Provides an easy-to-use python interface to download, read, process and visualize most of the products of the Global Precipitation Measurement Mission (GPM) 
data archive.",ghiggi,https://github.com/ghiggi/gpm_api.git,github,,Ocean Data Processing and Access,"2023/10/24, 22:08:32",25,0,23,true,Jupyter Notebook,,,"Jupyter Notebook,Python",https://gpm-api.readthedocs.io,"b'# Welcome to GPM-API\n[![DOI](https://zenodo.org/badge/286664485.svg)](https://zenodo.org/badge/latestdoi/286664485)\n[![PyPI version](https://badge.fury.io/py/gpm_api.svg)](https://badge.fury.io/py/gpm_api)\n[![Conda Version](https://img.shields.io/conda/vn/conda-forge/gpm_api.svg)](https://anaconda.org/conda-forge/gpm_api)\n[![Tests](https://github.com/ghiggi/gpm_api/actions/workflows/tests.yml/badge.svg)](https://github.com/ghiggi/gpm_api/actions/workflows/tests.yml)\n[![Coverage Status](https://coveralls.io/repos/github/ghiggi/gpm_api/badge.svg?branch=main)](https://coveralls.io/github/ghiggi/gpm_api?branch=main)\n[![Documentation Status](https://readthedocs.org/projects/gpm-api/badge/?version=latest)](https://gpm-api.readthedocs.io/en/latest/?badge=latest)\n[![Code style: black](https://img.shields.io/badge/code%20style-black-000000.svg)](https://github.com/ambv/black)\n[![License](https://img.shields.io/github/license/ghiggi/gpm_api)](https://github.com/ghiggi/gpm_api/blob/master/LICENSE)\n\nThe GPM-API is still in development. Feel free to try it out and to report issues or to suggest changes.\n\n## Quick start\nGPM-API provides an easy-to-use python interface to download, read, process and visualize most\nof the products of the Global Precipitation Measurement Mission (GPM) data archive.\n\nThe list of available products can be retrieved using:\n\n```python\nimport gpm_api\n\ngpm_api.available_products(product_type=""RS"") # research products\ngpm_api.available_products(product_type=""NRT"") # near-real-time products\n\n```\n\nBefore starting using GPM-API, we highly suggest to save into a configuration file:\n1. your credentials to access the [NASA Precipitation Processing System (PPS) servers][PPS_link]\n2. 
the directory on the local disk where to save the GPM datasets of interest.\n\nTo facilitate the creation of the configuration file, you can start from the following snippet:\n\n```python\nimport gpm_api\n\nusername = """"      # likely your mail\npassword = """"      # likely your mail\ngpm_base_dir = """"  # local directory where to save the GPM data\n# ... then pass these values to the configuration helper described in the\n# GPM-API documentation, which writes them to the configuration file.\n```\n\nDaily files of a given product can then be downloaded from the terminal with `download_daily_gpm_data <product> <year> <month> <day>`:\n\n```bash\n download_daily_gpm_data 2A-DPR 2022 7 22\n```\n\nA GPM granule can be opened in python using:\n\n```python\nimport gpm_api\n\nds = gpm_api.open_granule(filepath)  # filepath: local path to a downloaded granule\n```\n\nwhile multiple granules over a specific time period can be opened using:\n\n```python\nimport gpm_api\nimport datetime\n\nproduct = ""2A-DPR""\nproduct_type = ""RS""\nversion = 7\n\nstart_time = datetime.datetime(2020, 7, 22, 0, 1, 11)\nend_time = datetime.datetime(2020, 7, 22, 0, 23, 5)\nds = gpm_api.open_dataset(\n    product=product,\n    product_type=product_type,\n    version=version,\n    start_time=start_time,\n    end_time=end_time,\n)\n```\n\nLook at the [Tutorials][tutorial_link] to learn how to analyse and visualize the GPM products!\n\n## Installation\n\n### pip\n\nGPM-API can be installed via [pip][pip_link] on Linux, Mac, and Windows.\nOn Windows you can install [WinPython][winpy_link] to get Python and pip\nrunning.\nBefore installing GPM-API, we highly suggest installing Cartopy first with\n`conda install cartopy>=0.21.0`, to avoid [GEOS](https://libgeos.org/) library\nversion incompatibilities when the Cartopy package is installed.\n\nThen, install the GPM-API package by typing the following command in the command terminal:\n\n    pip install gpm_api\n\nTo install the latest development version via pip, see the\n[documentation][doc_install_link].\n\n### conda [NOT YET AVAILABLE]\n\nGPM-API can be installed via [conda][conda_link] on Linux, Mac, and Windows.\nInstall the package by typing the following command in a command terminal:\n\n    conda install gpm_api\n\nIn case conda-forge is not set up for your system yet, see the easy-to-follow\ninstructions on [conda forge][conda_forge_link].\n\n## Documentation for GPM-API\n\nYou can find the documentation under [gpm_api.readthedocs.io][doc_link].\n\n### Tutorials and Examples\n\nThe documentation also includes some [tutorials][tut_link], showing the most important use cases of GPM-API.\nThese tutorials are also available as Jupyter Notebooks and in Google Colab:\n\n- 1. Download the GPM products [[Notebook][tut1_download_link]][[Colab][colab1_download_link]]\n- 2. Introduction to the IMERG products [[Notebook][tut2_imerg_link]][[Colab][colab2_imerg_link]]\n- 2. Introduction to the PMW 1B and 1C products [[Notebook][tut2_pmw1bc_link]][[Colab][colab_pmw1bc_link]]\n- 2. Introduction to the PMW 2A products [[Notebook][tut2_pmw2a_link]][[Colab][colab2_pmw2a_link]]\n- 2. Introduction to the RADAR 2A products [[Notebook][tut2_radar_2a_link]][[Colab][colab2_radar_2a_link]]\n- 2. Introduction to the CORRA 2B products [[Notebook][tut2_corra_2b_link]][[Colab][colab2_corra_2b_link]]\n- 2. Introduction to the Latent Heating products [[Notebook][tut2_lh_link]][[Colab][colab2_lh_link]]\n- 2. Introduction to the ENVironment products [[Notebook][tut2_env_link]][[Colab][colab2_env_link]]\n- 3. Introduction to image labeling and patch extraction [[Notebook][tut3_label_link]][[Colab][colab3_label_link]]\n- 3. 
Introduction to image patch extraction [[Notebook][tut3_patch_link]][[Colab][colab3_patch_link]]\n\nThe associated python scripts are also provided in the `tutorial` folder.\n\n## Citation\n\nIf you are using GPM-API in your publication please cite our paper:\n\nTODO: GMD\n\nYou can cite the Zenodo code publication of GPM-API by:\n\n> Ghiggi Gionata & XXXX . ghiggi/gpm_api. Zenodo. https://doi.org/10.5281/zenodo.7753488\n\nIf you want to cite a specific version, have a look at the [Zenodo site](https://doi.org/10.5281/zenodo.7753488).\n\n## Requirements:\n\n- [xarray](https://docs.xarray.dev/en/stable/)\n- [dask](https://www.dask.org/)\n- [cartopy](https://scitools.org.uk/cartopy/docs/latest/)\n- [pyresample](https://pyresample.readthedocs.io/en/latest/)\n- [h5py](https://github.com/h5py/h5py)\n- [curl](https://curl.se/)\n- [wget](https://www.gnu.org/software/wget/)\n\n### Optional\n\n- [zarr](https://zarr.readthedocs.io/en/stable/)\n- [dask_image](https://image.dask.org/en/latest/)\n- [skimage](https://scikit-image.org/)\n\n## License\n\nThe content of this repository is released under the terms of the [MIT](LICENSE) license.\n\n[PPS_link]: https://gpm.nasa.gov/data/sources/pps-research\n[tutorial_link]: https://github.com/ghiggi/gpm_api/tree/master#tutorials-and-examples\n\n[pip_link]: https://pypi.org/project/gstools\n[conda_link]: https://docs.conda.io/en/latest/miniconda.html\n[conda_forge_link]: https://github.com/conda-forge/gpm_api-feedstock#installing-gpm_api\n[conda_pip]: https://docs.conda.io/projects/conda/en/latest/user-guide/tasks/manage-pkgs.html#installing-non-conda-packages\n[pipiflag]: https://pip-python3.readthedocs.io/en/latest/reference/pip_install.html?highlight=i#cmdoption-i\n[winpy_link]: https://winpython.github.io/\n\n[doc_link]: https://gpm_api.readthedocs.io/projects/gpm_api/en/stable/\n[doc_install_link]: https://gpm_api.readthedocs.io/projects/gpm_api/en/stable/#pip\n\n[tut1_download_link]: https://github.com/ghiggi/gpm_api/tree/master/tutorials\n[colab1_download_link]: https://github.com/ghiggi/gpm_api/tree/master/tutorials\n\n[tut2_imerg_link]: https://github.com/ghiggi/gpm_api/tree/master/tutorials\n[colab2_imerg_link]: https://github.com/ghiggi/gpm_api/tree/master/tutorials\n\n[tut2_pmw1bc_link]: https://github.com/ghiggi/gpm_api/tree/master/tutorials\n[colab_pmw1bc_link]: https://github.com/ghiggi/gpm_api/tree/master/tutorials\n\n[tut2_pmw2a_link]: https://github.com/ghiggi/gpm_api/tree/master/tutorials\n[colab2_pmw2a_link]: https://github.com/ghiggi/gpm_api/tree/master/tutorials\n\n[tut2_radar_2a_link]: https://github.com/ghiggi/gpm_api/tree/master/tutorials\n[colab2_radar_2a_link]: https://github.com/ghiggi/gpm_api/tree/master/tutorials\n\n[tut2_corra_2b_link]: https://github.com/ghiggi/gpm_api/tree/master/tutorials\n[colab2_corra_2b_link]: https://github.com/ghiggi/gpm_api/tree/master/tutorials\n\n[tut2_lh_link]: https://github.com/ghiggi/gpm_api/tree/master/tutorials\n[colab2_lh_link]: https://github.com/ghiggi/gpm_api/tree/master/tutorials\n\n[tut2_env_link]: https://github.com/ghiggi/gpm_api/tree/master/tutorials\n[colab2_env_link]: https://github.com/ghiggi/gpm_api/tree/master/tutorials\n\n[tut3_label_link]: https://github.com/ghiggi/gpm_api/tree/master/tutorials\n[colab3_label_link]: https://github.com/ghiggi/gpm_api/tree/master/tutorials\n\n[tut3_patch_link]: https://github.com/ghiggi/gpm_api/tree/master/tutorials\n[colab3_patch_link]: 
https://github.com/ghiggi/gpm_api/tree/master/tutorials\n'",",https://zenodo.org/badge/latestdoi/286664485,https://doi.org/10.5281/zenodo.7753488\n\nIf,https://doi.org/10.5281/zenodo.7753488","2020/08/11, 06:30:57",1170,MIT,349,388,"2023/10/19, 14:43:48",3,14,16,14,6,2,7.9,0.2877094972067039,"2023/09/15, 10:14:51",v0.2.5,0,5,false,,false,true,,,,,,,,,,, hddtools,"An open source project designed to facilitate access to a variety of online open data sources relevant for hydrologists and, in general, environmental scientists and practitioners.",ropensci,https://github.com/ropensci/hddtools.git,github,"r,rstats,r-package,hydrology,sepa,precipitation,mopex,data60uk,grdc,kgclimateclass,peer-reviewed",Ocean Data Processing and Access,"2022/07/18, 11:51:48",42,0,2,false,R,rOpenSci,ropensci,"R,TeX",https://docs.ropensci.org/hddtools,"b'# hddtools: Hydrological Data Discovery Tools\n\n\n[![DOI](https://zenodo.org/badge/22423032.svg)](https://zenodo.org/badge/latestdoi/22423032)\n[![status](https://joss.theoj.org/papers/10.21105/joss.00056/status.svg)](https://joss.theoj.org/papers/10.21105/joss.00056)\n\n[![CRAN Status\nBadge](http://www.r-pkg.org/badges/version/hddtools)](https://cran.r-project.org/package=hddtools)\n[![CRAN Total\nDownloads](http://cranlogs.r-pkg.org/badges/grand-total/hddtools)](https://cran.r-project.org/package=hddtools)\n[![CRAN Monthly\nDownloads](http://cranlogs.r-pkg.org/badges/hddtools)](https://cran.r-project.org/package=hddtools)\n\n[![R-CMD-check](https://github.com/ropensci/hddtools/workflows/R-CMD-check/badge.svg)](https://github.com/ropensci/hddtools/actions)\n[![codecov.io](https://codecov.io/github/ropensci/hddtools/coverage.svg?branch=master)](https://codecov.io/github/ropensci/hddtools?branch=master)\n[![](https://badges.ropensci.org/73_status.svg)](https://github.com/ropensci/software-review/issues/73)\n\n\n`hddtools` stands for Hydrological Data Discovery Tools. This R package\nis an open source project designed to facilitate access to a variety of\nonline open data sources relevant for hydrologists and, in general,\nenvironmental scientists and practitioners.\n\nThis typically implies the download of a metadata catalogue, selection\nof information needed, a formal request for dataset(s), de-compression,\nconversion, manual filtering and parsing. All those operations are made\nmore efficient by re-usable functions.\n\nDepending on the data license, functions can provide offline and/or\nonline modes. When redistribution is allowed, for instance, a copy of\nthe dataset is cached within the package and updated twice a year. This\nis the fastest option and also allows offline use of package\xe2\x80\x99s\nfunctions. When re-distribution is not allowed, only online mode is\nprovided.\n\n## Installation\n\nGet the stable version from CRAN:\n\n``` r\ninstall.packages(""hddtools"")\n```\n\nOr the development version from GitHub using the package `remotes`:\n\n``` r\ninstall.packages(""remotes"")\nremotes::install_github(""ropensci/hddtools"")\n```\n\nLoad the `hddtools` package:\n\n``` r\nlibrary(""hddtools"")\n```\n\n## Data sources and Functions\n\nThe package contains functions to interact with the data providers\nlisted below. 
For examples of the various functionalities see the\n[vignette](https://github.com/ropensci/hddtools/blob/master/vignettes/hddtools_vignette.Rmd).\n\n - [KGClimateClass](http://koeppen-geiger.vu-wien.ac.at/): The Köppen\n Climate Classification map is used for classifying the world’s\n climates based on the annual and monthly averages of temperature and\n precipitation.\n\n - [GRDC](http://www.bafg.de/GRDC/): The Global Runoff Data Centre\n (GRDC) provides datasets for all the major rivers in the world.\n\n - [Data60UK](http://tdwg.catchment.org/datasets.html): The Data60UK\n initiative collated datasets of areal precipitation and streamflow\n discharge across 61 gauging sites in England and Wales (UK).\n\n - [MOPEX](http://tdwg.catchment.org/datasets.html): This dataset\n contains historical hydrometeorological data and river basin\n characteristics for hundreds of river basins in the US.\n\n - [SEPA](https://www2.sepa.org.uk/WaterLevels/): The Scottish\n Environment Protection Agency (SEPA) provides river level data for\n hundreds of gauging stations in the UK.\n\n## Meta\n\n - This package and functions herein are part of an experimental\n open-source project. They are provided as is, without any guarantee.\n - Please note that this project is released with a [Contributor Code\n of Conduct](https://github.com/ropensci/hddtools/blob/master/CONDUCT.md).\n By participating in this project you agree to abide by its terms.\n - Please [report any issues or bugs](https://github.com/ropensci/hddtools/issues).\n - License: [GPL-3](https://opensource.org/licenses/GPL-3.0)\n - This package was reviewed by [Erin Le\n Dell](https://github.com/ledell) and [Michael\n Sumner](https://github.com/mdsumner) for submission to rOpenSci (see\n the review [here](https://github.com/ropensci/software-review/issues/73)) and\n to the Journal of Open Source Software (see the review status\n [here](https://github.com/openjournals/joss-reviews/issues/56)).\n - Cite `hddtools`: `citation(package = ""hddtools"")`\n\n
\n\n[![ropensci\\_footer](https://ropensci.org/public_images/github_footer.png)](https://ropensci.org)\n'",",https://zenodo.org/badge/latestdoi/22423032","2014/07/30, 10:39:49",3374,MIT,0,245,"2022/07/18, 11:52:18",5,5,29,0,464,1,0.2,0.10126582278481011,"2017/04/02, 09:04:42",v0.7,0,6,false,,false,false,,,https://github.com/ropensci,https://ropensci.org/,"Berkeley, CA",,,https://avatars.githubusercontent.com/u/1200269?v=4,,, hydrobr,"Help users select, download and clean data from pluvio- and fluviometric stations from the Brazilian National Water Agency.",hydroversebr,https://github.com/hydroversebr/hydrobr.git,github,,Ocean Data Processing and Access,"2023/08/15, 01:59:45",15,0,4,true,R,,,R,,"b'[![lifecycle](https://img.shields.io/badge/lifecycle-experimental-orange.svg)](https://www.tidyverse.org/lifecycle/#experimental) \n[![license](https://img.shields.io/badge/license-GPL3-lightgrey.svg)](https://choosealicense.com/)\n\n\n\n
# hydrobr\n\n## Description\n\nThe hydrobr package was developed to help users select, download and clean data from pluvio- and fluviometric stations of the Brazilian National Water Agency (pt: Agência Nacional de Águas - ANA). The data are made available by ANA and form one of the main databases for hydrological studies in Brazil. This is a beta version (0.0.0.9000), which the developers have tested in only a few Brazilian basins. Please cross-check that the processed data are correct for your study case, and let us know if they are not. Functions for statistical analyses are currently under development.\n\nThis is a voluntary initiative from a few Brazilian hydrologists and part of hydroversebr. Package improvement is open to enthusiasts. Contact us if you want to be part of the team and help us develop this project.\n\n## Installing this package\n\nYou can download and install the most up-to-date version directly from this repository. The procedure is:\n1. Install the package ""devtools"" (you only have to do this once; note that this will also install several dependencies)\n2. Load the devtools library\n3. Install the package.\n\nThe commands are:\n``` R\nif (!require(devtools)) install.packages(""devtools"")\nlibrary(devtools)\ninstall_github(""hydroversebr/hydrobr"", build_vignettes = TRUE)\n```\nTo read the vignettes and examples of how to use the package:\n\n``` R\nvignette(package = \'hydrobr\', topic = \'intro_to_hydrobr\')\n```\n\n## Contact\n\n
\n \n\n ![](https://komarev.com/ghpvc/?username=hydrobr)\n\n\n\n'",,"2022/05/13, 16:04:00",530,GPL-3.0,4,222,"2022/09/30, 14:23:20",0,47,47,0,390,0,0.0,0.2543352601156069,"2022/08/04, 16:30:59",v0.0.0.9000,0,3,false,,false,false,,,,,,,,,,, SCREAM,"A global atmosphere model targeted towards 3 km (""cloud resolving"") resolution.",E3SM-Project,https://github.com/E3SM-Project/scream.git,github,"climate,atmosphere-model,kokkos,gcm,hpc,cxx,e3sm",Atmospheric Composition and Dynamics,"2023/10/19, 19:47:20",58,0,28,true,Fortran,Energy Exascale Earth System Model Project,E3SM-Project,"Fortran,C++,TeX,Python,HTML,Perl,C,CMake,NCL,Shell,NewLisp,Makefile,Roff,JavaScript,XSLT,sed,MATLAB,CSS,Pascal,NASL,Dockerfile,Puppet,Forth,Assembly,Raku,SourcePawn",https://e3sm-project.github.io/scream/,"b""[![E3SM Logo](https://e3sm.org/wp-content/themes/e3sm/assets/images/e3sm-logo.png)](https://e3sm.org)\n\nEnergy Exascale Earth System Model (E3SM)\n================================================================================\n\nE3SM is a state-of-the-art fully coupled model of the Earth's climate including\nimportant biogeochemical and cryospheric processes. It is intended to address\nthe most challenging and demanding climate-change research problems and\nDepartment of Energy mission needs while efficiently using DOE Leadership\nComputing Facilities. \n\nDOI: [10.11578/E3SM/dc.20230110.5](http://dx.doi.org/10.11578/E3SM/dc.20230110.5)\n\nPlease visit the [project website](https://e3sm.org) or our [Confluence site](https://acme-climate.atlassian.net/wiki/spaces/DOC/overview)\nfor further details.\n\nFor questions about the model, use [Github Discussions](https://github.com/E3SM-Project/E3SM/discussions)\n\nTable of Contents \n--------------------------------------------------------------------------------\n- [Quick Start](#quickstart)\n- [Supported Machines](#supportedmachines)\n- [Running](#running)\n- [Contributing](#contributing)\n- [Acknowledge](#acknowledge)\n- [License](#license)\n\nQuick Start\n--------------------------------------------------------------------------------\nThe [Quick Start](https://e3sm.org/model/running-e3sm/e3sm-quick-start/) page \nincludes instructions on obtaining the necessary code and input data for model \nsetup and execution on a supported machine.\n\nSupported Machines \n--------------------------------------------------------------------------------\nE3SM is a high-performance computing application and generally requires a\ncapable compute cluster to run a scientifically validated case at a useful\nsimulation speed.\n\nTo run E3SM, it is recommended that you obtain time on a \n[Supported Machine](https://e3sm.org/model/running-e3sm/supported-machines/).\n\nRunning\n--------------------------------------------------------------------------------\nPlease refer to [Running E3SM](https://e3sm.org/model/running-e3sm/) \n for instructions on running the model. \n\nContributing\n--------------------------------------------------------------------------------\nPlease refer to [Contributing](CONTRIBUTING.md) for details on our code development\nprocess.\n\nAcknowledgement\n--------------------------------------------------------------------------------\nThe Energy Exascale Earth System Model (E3SM) Project should be acknowledged in\npublications as the origin of the model using\n[these guidelines](https://e3sm.org/resources/policies/acknowledge-e3sm/).\n\nIn addition, the software should be cited. 
For your convenience,\nthe following BibTeX entry is provided.\n```TeX\n@misc{e3sm-model,\n\ttitle = {{Energy Exascale Earth System Model (E3SM)}},\n\tauthor = {{E3SM Project}},\n\tabstractNote = {{E3SM} is a state-of-the-art fully coupled model of the {E}arth's \n\t\tclimate including important biogeochemical and cryospheric processes.},\n\thowpublished = {[Computer Software] \\url{https://dx.doi.org/10.11578/E3SM/dc.20230110.5}},\n\turl = {https://dx.doi.org/10.11578/E3SM/dc.20230110.5},\n\tdoi = {10.11578/E3SM/dc.20230110.5},\n\tyear = 2023,\n\tmonth = jan,\n}\n```\n\nLicense\n--------------------------------------------------------------------------------\nThe E3SM model is available under a BSD 3-clause license.\nPlease see [LICENSE](LICENSE) for details.\n\n""",,"2018/06/27, 20:52:18",1946,CUSTOM,3374,55909,"2023/10/19, 19:47:23",214,1760,2368,605,6,13,2.4,0.8938671841557033,"2021/03/25, 18:42:10",SCREAMv0,1,195,false,,false,true,,,https://github.com/E3SM-Project,http://e3sm.org,,,,https://avatars.githubusercontent.com/u/7558558?v=4,,, qgs,"Models the dynamics of a 2-layer quasi-geostrophic channel atmosphere on a beta-plane, coupled to a simple land or shallow-water ocean component.",Climdyn,https://github.com/Climdyn/qgs.git,github,"atmospheric-models,python,meteorology,climate-variability,numba,ocean-atmosphere-model,climate",Atmospheric Composition and Dynamics,"2023/07/11, 12:22:17",28,0,6,true,Jupyter Notebook,RMIB - Dynamical Meteorology and Climatology,Climdyn,"Jupyter Notebook,Python",,"b'\nQuasi-Geostrophic Spectral model (qgs)\n======================================\n\n\n[![PyPI version](https://badge.fury.io/py/qgs.svg)](https://badge.fury.io/py/qgs)\n[![PyPI pyversions](https://img.shields.io/pypi/pyversions/qgs.svg)](https://pypi.org/project/qgs/)\n[![DOI](https://zenodo.org/badge/246609584.svg)](https://zenodo.org/badge/latestdoi/246609584)\n[![DOI](https://joss.theoj.org/papers/10.21105/joss.02597/status.svg)](https://doi.org/10.21105/joss.02597)\n[![Documentation Status](https://readthedocs.org/projects/qgs/badge/?version=latest)](https://qgs.readthedocs.io/en/latest/?badge=latest)\n[![tests](https://github.com/Climdyn/qgs/actions/workflows/checks.yml/badge.svg?branch=master)](https://github.com/Climdyn/qgs/actions/workflows/checks.yml)\n[![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT)\n\nGeneral Information\n-------------------\n\nqgs is a Python implementation of an atmospheric model for midlatitudes. It models the dynamics of\na 2-layer [quasi-geostrophic](https://en.wikipedia.org/wiki/Quasi-geostrophic_equations) channel\natmosphere on a [beta-plane](https://en.wikipedia.org/wiki/Beta_plane), coupled to a simple land or\n[shallow-water](https://en.wikipedia.org/wiki/Shallow_water_equations) ocean component. 
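For orientation, the beta-plane mentioned above is the standard linearization of the Coriolis parameter about a reference latitude (textbook background, not a statement of qgs internals):

$$
f = f_0 + \beta y, \qquad f_0 = 2\Omega\sin\varphi_0, \qquad \beta = \frac{2\Omega\cos\varphi_0}{R_E},
$$

where $\Omega$ is the Earth's rotation rate, $R_E$ the Earth's radius, $\varphi_0$ the reference latitude, and $y$ the northward distance from that latitude. The quasi-geostrophic channel equations the package solves are derived under this approximation.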
\n\n![](https://github.com/Climdyn/qgs/blob/master/misc/figs/readme.gif?raw=true)\n\n> **You can try qgs online !** \n> Simply click on one of the following links to access an introductory tutorial:\n> [![Open in colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/Climdyn/qgs/blob/master/notebooks/introduction_qgs.ipynb)\n> [![Binder](https://mybinder.org/badge_logo.svg)](https://mybinder.org/v2/gh/Climdyn/qgs/master?filepath=notebooks/introduction_qgs.ipynb)\n> [](https://deepnote.com/launch?name=MyProject&url=https://github.com/Climdyn/qgs/tree/master/notebooks/introduction_qgs.ipynb)\n\n\nAbout\n-----\n\n(c) 2020-2023 qgs Developers and Contributors\n\nPart of the code originates from the Python [MAOOAM](https://github.com/Climdyn/MAOOAM) implementation by Maxime Tondeur and Jonathan Demaeyer.\n\nSee [LICENSE.txt](https://raw.githubusercontent.com/Climdyn/qgs/master/LICENSE.txt) for license information.\n\n**Please cite the code description article if you use (a part of) this software for a publication:**\n\n* Demaeyer J., De Cruz, L. and Vannitsem, S. , (2020). qgs: A flexible Python framework of reduced-order multiscale climate models. \n *Journal of Open Source Software*, **5**(56), 2597, [https://doi.org/10.21105/joss.02597](https://doi.org/10.21105/joss.02597).\n\nPlease consult the qgs [code repository](http://www.github.com/Climdyn/qgs) for updates.\n\n\nInstallation\n------------\n\n#### With pip\n\nThe easiest way to install and run qgs is to use [pip](https://pypi.org/).\nType in a terminal\n\n pip install qgs\n\nand you are set!\n\nAdditionally, you can clone the repository\n\n git clone https://github.com/Climdyn/qgs.git\n\nand perform a test by running the script\n\n python qgs/qgs_rp.py\n\nto see if everything runs smoothly (this should take less than a minute).\n\n> **Note:** \n> With the pip installation, in order to be able to generate the movies with the diagnostics, \n> you need to install separately [ffmpeg](https://ffmpeg.org/).\n\n#### With Anaconda\n\nThe second easiest way to install and run qgs is to use an appropriate environment created through [Anaconda](https://www.anaconda.com/).\n\nFirst install Anaconda and clone the repository:\n\n git clone https://github.com/Climdyn/qgs.git\n\nThen install and activate the Python3 Anaconda environment:\n\n conda env create -f environment.yml\n conda activate qgs\n\nYou can then perform a test by running the script\n\n python qgs_rp.py\n \nto see if everything runs smoothly (this should take less than a minute).\n\n#### Note for Windows and MacOS users\n\nPresently, qgs is compatible with Windows and MacOS but users wanting to use qgs inside their Python scripts must guard the main script with a\n\n```\nif __name__ == ""__main__"":\n```\n\nclause and add the following lines below\n\n```\n from multiprocessing import freeze_support\n freeze_support()\n```\n\nAbout this usage, see for example the main scripts `qgs_rp.py` and `qgs_maooam.py` in the root folder.\nNote that the Jupyter notebooks are not concerned by this recommendation and work perfectly well on both operating systems.\n\n> **Why?** These lines are required to make the multiprocessing library works with these operating systems. 
See [here](https://docs.python.org/3.8/library/multiprocessing.html) for more details, \n> and in particular [this section](https://docs.python.org/3.8/library/multiprocessing.html#the-spawn-and-forkserver-start-methods).\n\n\n#### Activating DifferentialEquations.jl optional support\n\nIn addition to the qgs builtin Runge-Kutta integrator, the qgs model can alternatively be integrated with a package called [DifferentialEquations.jl](https://github.com/SciML/DifferentialEquations.jl) written in [Julia](https://julialang.org/), and available through the\n[diffeqpy](https://github.com/SciML/diffeqpy) python package.\nThe diffeqpy package first installation step is done by Anaconda in the qgs environment but then you must [install Julia](https://julialang.org/downloads/) and follow the final manual installation instruction found in the [diffeqpy README](https://github.com/SciML/diffeqpy).\n\nThese can be summed up as opening a terminal and doing:\n```\nconda activate qgs\npython\n```\nand then inside the Python command line interface do:\n\n```\n>>> import diffeqpy\n>>> diffeqpy.install()\n```\nwhich will then finalize the installation. An example of a notebook using this package is available in the documentation and on [readthedocs](https://qgs.readthedocs.io/en/latest/files/examples/diffeq.html).\n\nDocumentation\n-------------\n\nTo build the documentation, please run (with the conda environment activated):\n\n cd documentation\n make html\n\nYou may need to install [make](https://www.gnu.org/software/make/) if it is not already present on your system.\nOnce built, the documentation is available [here](./documentation/build/html/index.html).\n\nThe documentation is also available online on read the docs: [https://qgs.readthedocs.io/](https://qgs.readthedocs.io/)\n\nUsage\n-----\n\nqgs can be used by editing and running the script `qgs_rp.py` and `qgs_maooam.py` found in the main folder.\n\nFor more advanced usages, please read the [User Guides](https://qgs.readthedocs.io/en/latest/files/user_guide.html).\n\nExamples\n--------\n\nAnother nice way to run the model is through the use of Jupyter notebooks. 
\nSimple examples can be found in the [notebooks folder](./notebooks).\nFor instance, running \n\n conda activate qgs\n cd notebooks\n jupyter-notebook\n \nwill lead you to your favorite browser where you can load and run the examples.\n\nDependencies\n------------\n\nqgs needs mainly:\n\n * [Numpy](https://numpy.org/) for numeric support\n * [sparse](https://sparse.pydata.org/) for sparse multidimensional arrays support\n * [Numba](https://numba.pydata.org/) for code acceleration\n * [Sympy](https://www.sympy.org/) for symbolic manipulation of inner products\n \nCheck the yaml file [environment.yml](https://raw.githubusercontent.com/Climdyn/qgs/master/environment.yml) for the dependencies.\n\nForthcoming developments\n------------------------\n\n* Scientific development (short-to-mid-term developments)\n + Non-autonomous equation (seasonality, etc...)\n + Energy diagnostics\n* Technical mid-term developments\n + Dimensionally robust Parameter class operation\n + Vectorization of the tensor computation\n* Long-term development track\n + Active advection\n + True quasi-geostrophic ocean when using ocean model version\n + Salinity in the ocean \n + Symbolic PDE equation specification\n + Numerical basis of function\n \nContributing to qgs\n-------------------\n\nIf you want to contribute actively to the roadmap detailed above, please contact the main authors.\n\nIn addition, if you have made changes that you think will be useful to others, please feel free to suggest these as a pull request on the [qgs Github repository](https://github.com/Climdyn/qgs).\n\nMore information and guidance about how to do a pull request for qgs can be found in the documentation [here](https://qgs.readthedocs.io/en/latest/files/general_information.html#contributing-to-qgs).\n\nOther atmospheric models in Python\n----------------------------------\n\nNon-exhaustive list:\n\n* [Q-GCM](http://q-gcm.org/): A mid-latitude grid based ocean-atmosphere model like MAOOAM. 
Code in Fortran,\n interface is in Python.\n* [pyqg](https://github.com/pyqg/pyqg): A pseudo-spectral python solver for quasi-geostrophic systems.\n* [Isca](https://execlim.github.io/IscaWebsite/index.html): Research GCM written in Fortran and largely\n configured with Python scripts, with internal coding changes required for non-standard cases.\n \n \n'",",https://zenodo.org/badge/latestdoi/246609584,https://doi.org/10.21105/joss.02597,https://doi.org/10.21105/joss.02597,https://doi.org/10.21105/joss.02597","2020/03/11, 15:32:55",1323,MIT,42,109,"2023/07/11, 12:22:22",0,17,27,17,106,0,0.2,0.06603773584905659,"2023/06/19, 07:54:22",v0.2.8,0,3,false,,false,false,,,https://github.com/Climdyn,http://climdyn.meteo.be,"Brussels, Belgium",,,https://avatars.githubusercontent.com/u/17494336?v=4,,, pyglow,A Python module that wraps several upper atmosphere climatological models written in FORTRAN.,timduly4,https://github.com/timduly4/pyglow.git,github,,Atmospheric Composition and Dynamics,"2023/05/02, 19:07:51",94,0,15,true,Fortran,,,"Fortran,Python,Makefile,Dockerfile",,"b'Semaphore CI: [![Build Status](https://semaphoreci.com/api/v1/timduly4/pyglow/branches/master/badge.svg)](https://semaphoreci.com/timduly4/pyglow)\n\nTravis CI: [![Build Status](https://travis-ci.com/timduly4/pyglow.svg?branch=master)](https://travis-ci.com/timduly4/pyglow)\n\n\n\n[_(airglow viewed aboard the ISS)_](http://en.wikipedia.org/wiki/File:Cupola_above_the_darkened_Earth.jpg)\n\n# Overview\n\n`pyglow` is a Python module that wraps several upper atmosphere climatological models written in FORTRAN, such as the Horizontal Wind Model (HWM), the International Geomagnetic Reference Field (IGRF), the International Reference Ionosphere (IRI), and the Mass Spectrometer and Incoherent Scatter Radar (MSIS).\n\nIt includes the following upper atmospheric models:\n\n * HWM 1993\n * HWM 2007\n * HWM 2014\n * IGRF 11\n * IGRF 12\n * IRI 2012\n * IRI 2016\n * MSIS 2000\n\npyglow also provides access to the the following geophysical indices:\n * AP\n * Kp\n * F10.7\n * DST\n * AE\n\n`pyglow` offers access to these models & indices in a convenient, high-level object-oriented interface within Python.\n\n# Prerequisites\n\n`pyglow` requires the following packages for installation:\n\n1. `gfortran` (`$ sudo apt-get install gfortran`)\n2. `f2py` (`$ pip install numpy --upgrade`)\n3. Python packages listed in `requirements.txt` (`$ pip install -r requirements.txt`)\n\n# Installation\n\n### I\'m Feeling Lucky:\n\nFirst, checkout the repository:\n\n```\n$ git clone git://github.com/timduly4/pyglow.git pyglow\n```\n\nChange directories into the repository folder, compile the f2py bindings, then install the Python package:\n```\n$ cd pyglow/\n$ make -C src/pyglow/models source\n$ python3 setup.py install --user\n```\n### Troubleshooting\n\nAs of Apr 2023, pyglow is far behind on maintenance. A fresh installation is difficult. 
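Once the compiled modules are in place, a quick smoke test helps confirm that the f2py bindings load. This is a minimal sketch built around the `Point` class referenced in the Hints section below; the constructor signature `Point(dn, lat, lon, alt)`, the `run_iri()` method and the `ne` attribute are assumptions based on the project's example scripts, so check `./examples` for the authoritative calls:

```python
# Minimal pyglow smoke test (names hedged; see ./examples for the real usage).
from datetime import datetime

import pyglow

dn = datetime(2011, 3, 23, 9, 30)         # UTC time of interest
pt = pyglow.Point(dn, 0.0, -80.0, 250.0)  # assumed order: lat [deg], lon [deg], alt [km]

pt.run_iri()   # assumed method: run the IRI climatological model at this point
print(pt.ne)   # assumed attribute: IRI electron density [cm^-3]
```

If this prints a number instead of raising an `ImportError`, the compiled model bindings are on your path.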
In case it helps, here are the steps I used to successfully install it:\n- use Python3.8 (pyglow is incompatible with Python3.10)\n- `pip install ipykernel jupyter numpy==1.20 scipy matplotlib future charset_normalizer pandas==1.3`\n\nA solution to Issue #139 is sorely needed.\n\n\n### Trouble in downloading model files:\n\nIf you have problems downloading files from the official websites, follow these steps:\n\n(1) Create a local HTTP server:\n\n```\n$ cd static/\n$ python3 -m http.server 8080\n```\n\n(2) Edit the file `src/pyglow/models/Makefile`, replacing the appropriate line with the following code:\n\n```\ndownload:\n python get_models_offline.py\n```\n\n(3) Compile the f2py bindings, then install the Python package:\n\n```\n$ cd pyglow/\n$ make -C src/pyglow/models source\n$ python3 setup.py install --user\n```\n\nNote: The model files may not be the latest.\n\n### Individual installation steps:\n\nIf you have trouble, follow the individual installation steps:\n\n(1) Download the package:\n```\n$ git clone git://github.com/timduly4/pyglow.git\n$ cd pyglow/\n```\n\n(2) Download the climatological models and wrap them with f2py:\n```\n$ cd ./src/pyglow/models/\n$ make all\n```\n * If successful, there should be a `*.so` file in each of the `./models/dl_models//` directories:\n\n ```\n $ find . -name ""*.so""\n ./dl_models/hwm07/hwm07py.so\n ./dl_models/hwm93/hwm93py.so\n ./dl_models/hwm14/hwm14py.so\n ./dl_models/igrf11/igrf11py.so\n ./dl_models/igrf12/igrf12py.so\n ./dl_models/iri12/iri12py.so\n ./dl_models/iri16/iri16py.so\n ./dl_models/msis/msis00py.so\n ```\n\n(3) Install the Python package:\n```\n$ cd ../../../ # get back to root directory\n$ python3 setup.py install --user\n```\n * On a Mac, the folder `pyglow` and the `*.so` files from `./models/dl_models//` should be in `/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages`\n * The `--user` flag installs the package locally (i.e., you do not need `sudo` access)\n\n# Unit tests\n\nSee unit tests in `./test`. For example, run the unittest suite with:\n\n`$ pytest test/`\n\n(Be sure that the f2py modules have been compiled via `$ make -C src/pyglow/models source` first.)\n\n# Examples\n\nSee example scripts located in `./examples` for example calls to `pyglow`.\n\n# Docker\n\nWe\'ve included a Dockerfile for `pyglow`. To build the image:\n\n`$ docker build -t pyglow .`\n\nThis will compile and install pyglow within the Docker container.\n\nRun the unit tests within the container via:\n\n`$ docker run pyglow`\n\n# Hints\n\n### General\n1. Use tab completion in ipython to view the full set of member data and variables available in the Point class.\n * For example, in the test code, type `pt.` and press Tab; the available attributes and methods will be listed.
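\n\nAs a concrete starting point, a session might look like the sketch below. The `Point` class is described above, but the constructor arguments and the `run_iri()`/`ne` names shown here are assumptions based on typical usage; see `./examples` for the authoritative calls.\n\n```python\nfrom datetime import datetime\nimport pyglow\n\n# hypothetical sample point: time, latitude, longitude, altitude [km]\npt = pyglow.Point(datetime(2015, 3, 23, 15, 30), 40.0, -88.0, 250.0)\npt.run_iri()  # run the IRI climatological model at this point (assumed method name)\nprint(pt.ne)  # electron density (assumed attribute)\n```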
\n\n### Updating geophysical indices with `pyglow.update_indices()`\nYou\'ll need to download the geophysical indices as they become available. pyglow provides the `update_indices()` function for this:\n\n```\n# Grabs indices between 2016 and 2018:\n$ python3 -c ""import pyglow; pyglow.update_indices(2016, 2018)""\n```\n\nNote: you only need to run this function when you would like to update the indices.\n\nYou can check whether you have the geophysical indices between two dates with:\n```\n$ python3 -c ""import pyglow; pyglow.check_stored_indices(\'2015-01-01\', \'2019-01-01\')""\n\nChecking: input date range:\n 2015-01-01\n to\n 2019-01-01\n>> We have all of the geophysical indices files between these dates.\n```\n\n# Uninstallation\n\nThe install directory for pyglow can be obtained via `python3 -c ""import pyglow; print(pyglow.__file__)""`. For example:\n```\n~ $ python3 -c ""import pyglow; print(pyglow.__file__)""\n/Users/duly/Library/Python/3.7/lib/python/site-packages/pyglow/__init__.pyc\n```\nThis tells you the installation location, and then you can remove the package with:\n```\n~ $ rm -rf /Users/duly/Library/Python/3.7/lib/python/site-packages/pyglow\n```\n'",,"2013/08/09, 17:06:55",3729,MIT,6,216,"2023/05/02, 19:07:52",27,59,126,5,176,2,0.0,0.20855614973262027,,,0,10,false,,false,false,,,,,,,,,,, Mission Support System,A collaboration server to plan atmospheric research flights.,Open-MSS,https://github.com/Open-MSS/MSS.git,github,"mission-support-system,flight-planning,python,conda-forge",Atmospheric Composition and Dynamics,"2023/10/12, 01:39:59",47,0,19,true,Python,,Open-MSS,"Python,HTML,Shell,Mako,Batchfile",https://open-mss.github.io,"b'Mission Support System Usage Guidelines\n=======================================\n\nWelcome to the Mission Support System software for planning\natmospheric research flights. This document is intended to point you\nin the right direction in order to get the software working on your\ncomputer.\n\n\nInstalling MSS\n==============\n\nAutomatically\n-------------\n\n- For **Windows**, go [here](https://github.com/Open-MSS/mss-install/blob/main/Windows.bat?raw=1)\n - Right click on the webpage and select ""Save as..."" to download the file\n - Double click the downloaded file and follow further instructions\n - For fully automatic installation, open cmd and execute it with `/Path/To/Windows.bat -a`\n- For **Linux/Mac**, go [here](https://github.com/Open-MSS/mss-install/blob/main/LinuxMac.sh?raw=1)\n - Right click on the webpage and select ""Save as..."" to download the file\n - Make it executable via `chmod +x LinuxMac.sh`\n - Execute it and follow further instructions `./LinuxMac.sh`\n - For fully automatic installation, run it with the -a parameter `./LinuxMac.sh -a`\n\nManually\n--------\n\nAs a **beginner**, start with an installation of Mambaforge:\nget [mambaforge](https://github.com/conda-forge/miniforge#mambaforge) for your operating system.\n\n\nYou must install mss into a new environment to ensure the most recent\nversions of its dependencies (On the Anaconda Prompt on Windows, you have\nto leave out the \'source\' here and below).\n\n```\n $ mamba create -n mssenv\n $ mamba activate mssenv\n (mssenv) $ mamba install mss python\n```\nFor updating an existing MSS installation to the current version, it is\nbest to install it into a new environment. If an existing environment\nis to be updated instead, it is important to update all packages in this\nenvironment. 
\n\n```\n $ mamba activate mssenv\n (mssenv) $ msui --update\n```\n\nIt is possible to list all versions of `mss` available on your platform with:\n\n```\n $ mamba search mss --channel conda-forge\n```\n\nFor a simple test you can set up a demo-data WMS server and start a mscolab server with default settings:\n\n```\n (mssenv) $ mswms_demodata --seed\n (mssenv) $ export PYTHONPATH=~/mss\n (mssenv) $ mswms &\n (mssenv) $ mscolab start &\n (mssenv) $ msui\n```\n\n\n\n\nCurrent release info\n====================\n[![Conda Version](https://img.shields.io/conda/vn/conda-forge/mss.svg)](https://anaconda.org/conda-forge/mss)\n[![DOI](https://zenodo.org/badge/DOI/10.5281/zenodo.6572620.svg)](https://doi.org/10.5281/zenodo.6572620)\n[![Conda Platforms](https://img.shields.io/conda/pn/conda-forge/mss.svg)](https://anaconda.org/conda-forge/mss)\n[![DOCS](https://img.shields.io/badge/%F0%9F%95%AE-docs-green.svg)](http://mss.rtd.io)\n[![Conda Recipe](https://img.shields.io/badge/recipe-mss-green.svg)](https://anaconda.org/conda-forge/mss) \n[![Conda Downloads](https://img.shields.io/conda/dn/conda-forge/mss.svg)](https://anaconda.org/conda-forge/mss)\n[![Coverage Status](https://coveralls.io/repos/github/Open-MSS/MSS/badge.svg?branch=develop)](https://coveralls.io/github/Open-MSS/MSS?branch=develop)\n\n\nPublications\n============\n\nPlease read the reference documentation\n\n Bauer, R., Groo\xc3\x9f, J.-U., Ungermann, J., B\xc3\xa4r, M., Geldenhuys, M., and Hoffmann, L.: The Mission Support\n System (MSS v7.0.4) and its use in planning for the SouthTRAC aircraft campaign, Geosci.\n Model Dev., 15, 8983\xe2\x80\x938997, https://doi.org/10.5194/gmd-15-8983-2022, 2022.\n\n Rautenhaus, M., Bauer, G., and Doernbrack, A.: A web service based\n tool to plan atmospheric research flights, Geosci. Model Dev., 5,\n 55-71, https://doi.org/10.5194/gmd-5-55-2012, 2012.\n\nand the paper\'s Supplement (which includes a tutorial) before using the\napplication. The documents are available at:\n\n- http://www.geosci-model-dev.net/5/55/2012/gmd-5-55-2012.pdf\n- http://www.geosci-model-dev.net/5/55/2012/gmd-5-55-2012-supplement.pdf\n\nFor copyright information, please see the files NOTICE and LICENSE, located\nin the same directory as this README file.\n \n\n When using this software, please be so kind as to acknowledge its use by\n citing the above mentioned reference documentation in publications,\n presentations, reports, etc. that you create. Thank you very much.\n\n\n\n'",",https://doi.org/10.5281/zenodo.6572620,https://doi.org/10.5194/gmd-15-8983-2022,https://doi.org/10.5194/gmd-5-55-2012","2020/12/03, 21:52:50",1056,Apache-2.0,235,3651,"2023/10/25, 19:50:20",123,787,1941,470,0,6,1.4,0.5884034299714169,"2023/10/12, 01:47:01",8.3.2,1,41,false,,true,true,,,https://github.com/Open-MSS,,,,,https://avatars.githubusercontent.com/u/75254179?v=4,,, MiMA,Model of an idealized Moist Atmosphere: Intermediate-complexity General Circulation Model with full radiation.,mjucker,https://github.com/mjucker/MiMA.git,github,"fortran,gcm,climate-model,atmospheric-science,atmospheric-modelling",Atmospheric Composition and Dynamics,"2023/06/09, 11:09:11",31,0,4,true,Fortran,,,"Fortran,HTML,C++,C,Pawn,Perl,CMake,Shell,NetLinx,NASL,Python,Assembly,Makefile,SourcePawn",,"b'# Model of an idealized Moist Atmosphere (MiMA) [![DOI](https://zenodo.org/badge/DOI/10.5281/zenodo.597136.svg)](https://doi.org/10.5281/zenodo.597136)\nMiMA is an intermediate-complexity General Circulation Model with interactive water vapor and full radiation. 
It is published in\n\n[M Jucker and EP Gerber, 2017: *Untangling the annual cycle of the tropical tropopause layer with an idealized moist model*, Journal of Climate 30, 7339-7358](http://dx.doi.org/10.1175/JCLI-D-17-0127.1)\n\nfor v1.x and \n\n[Garfinkel et al. (2020): *The building blocks of Northern Hemisphere wintertime stationary waves*, Journal of Climate](http://journals.ametsoc.org/doi/10.1175/JCLI-D-19-0181.1)\n\nfor v2.0.\n\nPlease see the documentation [online](http://mjucker.github.io/MiMA/) or [in the docs folder](docs/) for information about the model.\n\nSee the 30 second trailer on [YouTube](https://www.youtube.com/watch?v=8UfaFnGtCrk ""Model of an idealized Moist Atmosphere (MiMA)""): \n\n[![MiMA thumbnail](https://img.youtube.com/vi/8UfaFnGtCrk/0.jpg)](https://www.youtube.com/watch?v=8UfaFnGtCrk ""Model of an idealized Moist Atmosphere (MiMA)"")\n\nAlthough free to use under a GPLv3 license, we ask you to cite the relevant scientific work given in the documentation in any publications using this code.\n\n## License\n\nMiMA is distributed under a GNU GPLv3 license. That means you have permission to use, modify, and distribute the code, even for commercial use. However, you must make your code publicly available under the same license. See LICENSE.txt for more details.\n\nAM2 is distributed under a GNU GPLv2 license. That means you have permission to use, modify, and distribute the code, even for commercial use. However, you must make your code publicly available under the same license.\n\nRRTM/RRTMG: Copyright \xc2\xa9 2002-2010, Atmospheric and Environmental Research, Inc. (AER, Inc.). This software\nmay be used, copied, or redistributed as long as it is not sold and this copyright notice is reproduced\non each copy made. This model is provided as is without any express or implied warranties.\n'",",https://doi.org/10.5281/zenodo.597136","2015/05/21, 12:46:24",3079,GPL-3.0,1,354,"2023/06/09, 11:11:08",0,32,35,4,138,0,0.0,0.06666666666666665,"2020/02/04, 21:53:46",v1.1,0,5,false,,false,false,,,,,,,,,,, Isca,A framework for the idealized modeling of the global circulation of planetary atmospheres at varying levels of complexity and realism.,ExeClim,https://github.com/ExeClim/Isca.git,github,"planetary-atmospheres,atmospheric-science,atmospheric-modelling,geophysical-fluid-dynamics,climate-model",Atmospheric Composition and Dynamics,"2023/10/19, 10:39:53",124,0,17,true,Fortran,Exeter Climate Systems,ExeClim,"Fortran,C++,C,HTML,Python,Pawn,Perl,Shell,NASL,Dockerfile,Groovy,SourcePawn",https://execlim.github.io/IscaWebsite,"b'

\n\nIsca is a framework for the idealized modelling of the global circulation of\nplanetary atmospheres at varying levels of complexity and realism. The\nframework is an outgrowth of models from GFDL designed for Earth\'s atmosphere,\nbut it may readily be extended into other planetary regimes. Various forcing\nand radiation options are available. At the simple end of the spectrum a\nHeld-Suarez case is available. An idealized grey radiation scheme, a grey\nscheme with moisture feedback, a two-band scheme and a multi-band scheme are\nalso available, all with simple moist effects and astronomically-based solar\nforcing. At the complex end of the spectrum the framework provides a direct\nconnection to comprehensive atmospheric general circulation models.\n\nFor Earth modelling, options include an aqua-planet and configurable (idealized\nor realistic) continents with idealized or realistic topography. Continents may\nbe defined by changing albedo, heat capacity and evaporative parameters, and/or\nby using a simple bucket hydrology model. Oceanic Q-fluxes may be added to\nreproduce specified sea-surface temperatures, with any continents or on an\naquaplanet. Planetary atmospheres may be configured by changing planetary size,\nsolar forcing, atmospheric mass, radiative, and other parameters.\n\nThe underlying model is written in Fortran and may largely be configured with\nPython scripts, with internal coding changes required for non-standard cases.\nPython scripts are also used to run the model on different architectures, to\narchive the output, and for diagnostics, graphics, and post-processing. All of\nthese features are publicly available on a Git-based repository.\n\n## Getting Started\n\nA python module `isca` (note lowercase) is provided alongside the Fortran source code; it should do a lot of the heavy lifting of compiling, configuring and running the model for you. Isca can be compiled, run and configured without using python, but using the python wrapper is recommended.\n\n### Installing the `isca` python module\n\nThe python module is found in the `src` directory and can be installed using `pip`. It\'s recommended that you use some sort of python environment manager to do this, such as a conda distribution with a dedicated environment (in the code below called ""`isca_env`""), or `virtualenv` instead. This ""getting started"" will show you how to create a python environment that includes Isca\'s required packages, and then install the model. \n\n1. **Install [Miniforge](https://github.com/conda-forge/miniforge)**\n\n*Recommended step*: Some workstations may have outdated default python and conda installations, which may cause conflicts during installation. As a lightweight solution to get up-to-date installations, we recommend downloading [Miniforge](https://github.com/conda-forge/miniforge).\nTo ensure this works as expected, check that `$PYTHONPATH` is unset and that your `.bashrc` does not contain `module load` statements that may cause conda conflicts.\n\n*If you have a recent conda version installed in your home directory already, you may wish to skip this step.*\n\n2. **Check out or download this repository**\n\nTo begin you\'ll need a copy of the source code. Either fork the Isca repository to your own github username, or clone directly from the ExeClim group.\n\n```{bash}\n$ git clone https://github.com/ExeClim/Isca\n$ cd Isca\n```\n\n3. 
**Create a conda environment**\n\nRequirements for Isca can be installed via the .yml file included with the model in `Isca/ci/environment-py3.9.yml`.\nNavigate to the downloaded Isca folder, and create a conda environment `isca_env` containing the required packages using: \n```{bash}\n$ conda env create -f ci/environment-py3.9.yml\n```\nThen activate the environment; you\'ll need to do this each time you launch a new bash session.\n```{bash}\n$ conda activate isca_env\n```\n\n4. **Install the model**\n\nNow install the `isca` python module in ""development mode"". This will allow you, if you wish, to edit the `src/extra/python/isca` files and have those changes be used when you next run an Isca script. Navigate to `Isca/src/extra/python/` and run:\n\n```{bash}\n(isca_env)$ pip install -e .\n...\nSuccessfully installed Isca\n```\n\n### Compiling for the first time\n\nAt Exeter University, Isca is compiled using:\n\n* Intel Compiler Suite 14.0\n* OpenMPI 10.0.1\n* NetCDF 4.3.3.1\n* git 2.1.2\n\nDifferent workstations/servers at different institutions will have different compilers and libraries available. The Isca framework assumes you have something similar to our stack at Exeter, but provides a hook for you to configure the environment in which the model is run.\n\nBefore Isca is compiled/run, an environment is first configured which loads the specific compilers and libraries necessary to build the code. This is done by setting the environment variable `GFDL_ENV` in your session.\n\nFor example, on the EMPS workstations at Exeter, I have the following in my `~/.bashrc`:\n\n```{bash}\n# directory of the Isca source code\nexport GFDL_BASE=/scratch/jamesp/Isca \n# ""environment"" configuration for emps-gv4\nexport GFDL_ENV=emps-gv\n# temporary working directory used in running the model\nexport GFDL_WORK=/scratch/jamesp/gfdl_work\n# directory for storing model output\nexport GFDL_DATA=/scratch/jamesp/gfdl_data\n```\n\nThe value of `GFDL_ENV` corresponds to a file in `src/extra/env` that is sourced before each run or compilation. For an example that you could adapt to work on your machine, see `src/extra/env/emps-gv`.\n\nWe are not able to provide support in configuring your environment at institutions other than Exeter University - we suggest that you contact your friendly local sysops technician for guidance in getting the compilers and libraries collated if you are not sure how to proceed.\n\nIf you work at another large institution and have successfully compiled and run Isca, we welcome you to commit your own environment config to `/src/extra/env/my-new-env` for future scientists to benefit from and avoid the pain of debugging compilation!\n\n## Running the model\n\nOnce you have installed the `isca` python module you will most likely want to try a compilation and run a simple test case. There are several test cases highlighting features of Isca in the `exp/test_cases` directory.\n\nA good place to start is the famous Held-Suarez dynamical core test case. Take a look at the python file for an idea of how an Isca experiment is constructed and then try to run it.\n```\n(isca_env)$ cd $GFDL_BASE/exp/test_cases/held_suarez\n(isca_env)$ python held_suarez_test_case.py\n```\nThe `held_suarez_test_case.py` experiment script will attempt to compile the source code for the dry dynamical core and then run for several iterations.\n\n
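In outline, such a script combines a code base, a diagnostics table and namelist options into an `Experiment` object. The sketch below is schematic and abridged; the class and method names follow the test cases but should be treated as assumptions, so defer to `exp/test_cases` for working scripts:\n\n```python\n# schematic Isca experiment script (sketch; a real script also sets exp.namelist)\nfrom isca import DryCodeBase, DiagTable, Experiment, GFDL_BASE\n\ncb = DryCodeBase.from_directory(GFDL_BASE)  # source code to compile\ncb.compile()  # builds inside the $GFDL_ENV environment\n\ndiag = DiagTable()\ndiag.add_file(\'atmos_monthly\', 30, \'days\', time_units=\'days\')\ndiag.add_field(\'dynamics\', \'temp\', time_avg=True)\n\nexp = Experiment(\'held_suarez_sketch\', codebase=cb)\nexp.diag_table = diag\nexp.run(1, use_restart=False, num_cores=6)  # first month\n```\n\n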
It is likely that the first time you run the script, compilation will fail. Debug, adjust your environment file as necessary, and then rerun the python script to try again.\n\nOnce the code has successfully compiled, the script will continue on to run the model distributed over some number of cores. Once it completes, netCDF diagnostic files will be saved to `$GFDL_DATA/held_suarez_test_case/run####`.\n\nOnce you\'ve got an environment file that works for your machine saved in `src/extra/env`, all of the test cases should now compile and run - you\'re now ready to start running your own experiments!\n\n## Site-specific help\n\nThere are some site-specific guides to running Isca on your local system located in the directory [exp/site_specific/](https://github.com/ExeClim/Isca/tree/master/exp/site_specific).\n\n## Contributing to Isca\n\nIf you have made changes that you think will be useful to others, please feel free to suggest these as a Github pull request.\nThese might include adding site-specific configurations that could be useful to future users, basic bug fixes, or the addition of new options or modules for modeling your planet of choice. \nAn Isca team member will then review your Pull Request and suggest any changes needed before merging it in. Things to consider:\n- Before submitting a pull request, double check that the branch to be merged contains only changes you wish to add to the master branch. This will save time in reviewing the code.\n- If you add a new feature to the Fortran code, please make it off by default so that other users\' results won\'t change if they update from the master. \n- For any changes to model Fortran files, please run the trip-tests found in [/Isca/exp/test_cases/trip_test/](https://github.com/ExeClim/Isca/tree/master/exp/test_cases/trip_test). These compile and perform brief runs of some standard configurations to help identify any accidental changes to the model your commits may have caused. Isca includes a broad range of options, so try to take into consideration whether the changes you make will affect other configurations while you implement them.\n- For substantial additions of code, for example an entirely new module, please also include a test case in [/Isca/exp/test_cases/](https://github.com/ExeClim/Isca/tree/master/exp/test_cases/). This helps in testing that the option works as expected, and provides support for future users in using the new configuration.\n- Please do not make changes to existing test cases; these are here for trip-testing as well as user guidance.\n- As well as developing Isca, we are all Isca users ourselves, so responding to Pull Requests may take time, but we will aim to respond to urgent queries as soon as we can.\n\nFor more information, please read the [contributing guide](https://github.com/execlim/Isca/blob/master/docs/source/contributing.rst).\n\n## License\n\nIsca is distributed under a GNU GPLv3 license. See the [`LICENSE`](LICENSE) file for details. \n\nRRTM/RRTMG: Copyright \xc2\xa9 2002-2010, Atmospheric and Environmental Research, Inc. (AER, Inc.). \nThis software may be used, copied, or redistributed as long as it is not sold and this \ncopyright notice is reproduced on each copy made. This model is provided as is without \nany express or implied warranties.\n\nSome of the code provided in the `src/atmos_params/socrates/interface` folder was provided by the UK Met Office,\nand is therefore covered by British Crown Copyright. The copyright statement at the top of the \nrelevant code is provided below. 
For the `copyright.txt` referred to in this statement, please see the\nSocrates source code itself, which is downloadable from the Met Office, and is not packaged with Isca.\n\n```\n! *****************************COPYRIGHT*******************************\n! (C) Crown copyright Met Office. All rights reserved.\n! For further details please refer to the file COPYRIGHT.txt\n! which you should have received as part of this distribution.\n! *****************************COPYRIGHT*******************************\n```\n\nThe `check_disk_space.py` script, which is used as part of the email-alerts functionality\nof the `gfdl` module, was written by Giampaolo Rodola and is released under the MIT license.\n\nThe parts of Isca provided by GFDL are also released under a GNU GPL license. A copy of the \nrelevant GFDL license statement is provided below.\n\n```\n!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!\n!! !!\n!! GNU General Public License !!\n!! !!\n!! This file is part of the Flexible Modeling System (FMS). !!\n!! !!\n!! FMS is free software; you can redistribute it and/or modify it !!\n!! under the terms of the GNU General Public License as published by !!\n!! the Free Software Foundation, either version 3 of the License, or !!\n!! (at your option) any later version. !!\n!! !!\n!! FMS is distributed in the hope that it will be useful, !!\n!! but WITHOUT ANY WARRANTY; without even the implied warranty of !!\n!! MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the !!\n!! GNU General Public License for more details. !!\n!! !!\n!! You should have received a copy of the GNU General Public License !!\n!! along with FMS. if not, see: http://www.gnu.org/licenses/gpl.txt !!\n!! !!\n!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!\n```\n'",",https://doi.org/10.5194/gmd-11-843-2018","2017/07/15, 00:13:40",2294,GPL-3.0,8,1727,"2023/10/19, 10:39:53",55,126,200,7,6,27,1.1,0.5332314744079449,"2018/02/02, 16:11:51",v1.0,0,20,false,,false,false,,,https://github.com/ExeClim,,,,,https://avatars.githubusercontent.com/u/13350658?v=4,,, pyvortex,Equivalent Latitude and polar vortex edge calculation using Nash criteria.,pankajkarman,https://github.com/pankajkarman/pyvortex.git,github,,Atmospheric Composition and Dynamics,"2021/08/04, 05:24:13",2,0,1,false,Python,,,"Python,Jupyter Notebook,Makefile",,"b'_________________\n\n[![PyPI version](https://badge.fury.io/py/pyvortex.svg)](http://badge.fury.io/py/pyvortex)\n[![License](https://img.shields.io/github/license/mashape/apistatus.svg)](https://pypi.python.org/pypi/pyvortex/)\n[![Downloads](https://pepy.tech/badge/pyvortex)](https://pepy.tech/project/pyvortex)\n[![Code style: black](https://img.shields.io/badge/code%20style-black-000000.svg)](https://github.com/psf/black)\n_________________\n\n### About\n\nThe module `pyvortex` consists of functions to calculate the [equivalent latitude](https://journals.ametsoc.org/doi/citedby/10.1175/1520-0469%282003%29060%3C0287%3ATELADT%3E2.0.CO%3B2) and the edge of a polar vortex using the [Nash criteria](https://agupubs.onlinelibrary.wiley.com/doi/10.1029/96JD00066).\n\n### Installation\n\n```\npip install pyvortex\n```\n\nor install the latest version using\n```\npip install git+https://github.com/pankajkarman/pyvortex.git\n\n```\n\n### Documentation\n\nLatest documentation is available [here](https://pankajkarman.github.io/pyvortex/).\n\n\n### Usage\n\n`pyvortex` is easy to use. 
Just import:\n\n```python\nimport pyvortex as vr\n```\n\n#### Northern Hemisphere\n\nInstantiate the `PolarVortex` class using: \n```python\npol = PolarVortex(pv, uwind)\n```\nGet the equivalent latitude for the provided vorticity data as:\n```python\neql = pol.get_eql()\n```\nIf you want to get both the equivalent latitude and the vortex edge, just use:\n```python\neql = pol.get_edge(min_eql=30)\n```\nExample:\n![Arctic Vortex](./example/arctic_polar_vortex_20110201.gif)\n\n#### Southern Hemisphere\n\nFlip pv and uwind along the latitude dimension and multiply pv by -1. All other things will be the same.\n
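\nFor instance (a minimal sketch; it assumes plain numpy arrays with latitude as the first axis, and that `PolarVortex` is accessible via the `pyvortex` namespace imported above):\n\n```python\nimport numpy as np\nimport pyvortex as vr\n\n# hypothetical placeholder fields on a (lat, lon) grid\npv = np.random.randn(181, 360)\nuwind = np.random.randn(181, 360)\n\n# flip along the latitude axis and change the sign of pv\npv_sh = -np.flip(pv, axis=0)\nuwind_sh = np.flip(uwind, axis=0)\n\npol = vr.PolarVortex(pv_sh, uwind_sh)\neql_sh = pol.get_edge(min_eql=30)\n```\n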
\nExample:\n\n![Polar Vortex](./example/antarctic.gif)\n'",,"2020/05/31, 20:38:59",1242,MIT,0,16,"2023/10/19, 10:39:53",2,0,0,0,6,0,0,0.0,,,0,1,false,,false,false,,,,,,,,,,, ClimaAtmos.jl,A library for building atmospheric circulation models that is designed from the outset to leverage data assimilation and machine learning tools.,CliMA,https://github.com/CliMA/ClimaAtmos.jl.git,github,"climate,fluid-dynamics,machine-learning,data-assimilation,optimization,julia",Atmospheric Composition and Dynamics,"2023/10/25, 22:42:09",58,0,40,true,Julia,Climate Modeling Alliance,CliMA,"Julia,Shell",,"b'\n
# ClimaAtmos.jl\n\nAtmosphere components of the CliMA software stack.\n

\n\n[![docsbuild][docs-bld-img]][docs-bld-url]\n[![dev][docs-dev-img]][docs-dev-url]\n[![ghaci][gha-ci-img]][gha-ci-url]\n[![buildkite][bk-ci-img]][bk-ci-url]\n[![codecov][codecov-img]][codecov-url]\n[![discussions][discussions-img]][discussions-url]\n[![col-prac][col-prac-img]][col-prac-url]\n\n[docs-bld-img]: https://github.com/CliMA/ClimaAtmos.jl/workflows/Documentation/badge.svg\n[docs-bld-url]: https://github.com/CliMA/ClimaAtmos.jl/actions?query=workflow%3ADocumentation\n\n[docs-dev-img]: https://img.shields.io/badge/docs-dev-blue.svg\n[docs-dev-url]: https://CliMA.github.io/ClimaAtmos.jl/dev/\n\n[gha-ci-img]: https://github.com/CliMA/ClimaAtmos.jl/actions/workflows/ci.yml/badge.svg\n[gha-ci-url]: https://github.com/CliMA/ClimaAtmos.jl/actions/workflows/ci.yml\n\n[bk-ci-img]: https://badge.buildkite.com/2a31b42d67409c27660a0dcce65b49294cd9c6b9f14c12f21e.svg\n[bk-ci-url]: https://buildkite.com/clima/climaatmos-ci\n\n[codecov-img]: https://codecov.io/gh/CliMA/ClimaAtmos.jl/branch/main/graph/badge.svg\n[codecov-url]: https://codecov.io/gh/CliMA/ClimaAtmos.jl\n\n[col-prac-img]: https://img.shields.io/badge/ColPrac-Contributor\'s%20Guide-blueviolet?style=flat-square\n[col-prac-url]: https://github.com/SciML/ColPrac\n\n[discussions-img]: https://img.shields.io/badge/Ask%20us-anything-1abc9c.svg?style=flat-square\n[discussions-url]: https://github.com/CliMA/ClimaAtmos.jl/discussions\n\n\nClimaAtmos.jl provides the atmosphere components of the CliMA software stack. We strive for a user interface that makes ClimaAtmos.jl as friendly and intuitive to use as possible, allowing users to focus on the science.\n\n## Installation instructions\n\nRecommended Julia: Stable release v1.9.3\n\nClimaAtmos.jl is a [registered Julia package](https://julialang.org/packages/). To install it:\n\n```julia\njulia> using Pkg\n\njulia> Pkg.add(""ClimaAtmos"")\n```\n\nAlternatively, download the `ClimaAtmos`\n[source](https://github.com/CliMA/ClimaAtmos.jl) with:\n\n```\n$ git clone https://github.com/CliMA/ClimaAtmos.jl.git\n```\n\nNow change into the `ClimaAtmos.jl` directory with\n\n```\n$ cd ClimaAtmos.jl\n```\n\nTo use ClimaAtmos, you need to instantiate all dependencies with:\n\n```\n$ julia --project\njulia> ]\n(ClimaAtmos) pkg> instantiate\n```\n\n## Running instructions\n\nCurrently, the simulations are stored in the `test` folder. Run all the test cases with the following commands.\n\nFirst, we instantiate the test environment by launching (from `ClimaAtmos.jl/`) Julia with the `test/` environment active:\n\n```\n$ julia --project=test\n```\n\nThen, once in the Julia REPL, we switch to the package manager by pressing `]`:\n\n```julia\njulia> ]\n```\n\nOnce in the package manager, we `develop` the `ClimaAtmos.jl/` directory:\n\n```pkg\ntest> dev .\n```\n\nNow, we can switch back to the Julia REPL by escaping and run the test suite interactively:\n\n```julia\njulia> include(joinpath(""test"", ""runtests.jl""))\n```\nOr escape the Julia REPL and run from the command line:\n\n```\n$ julia --project=test test/runtests.jl\n```\n\nIf you run into issues when running the test suite this way, please open an issue.\n\n## Contributing\n\nIf you\'re interested in contributing to the development of ClimaAtmos, we want your help no matter how big or small a contribution you make! 
It\'s always great to have new people look at the code with fresh eyes: you will see errors that other developers have missed.\n\nLet us know by [opening an issue](https://github.com/CliMA/ClimaAtmos.jl/issues/new) if you\'d like to work on a new feature.\n\nHere are the rule-of-thumb [coding style](https://clima.github.io/ClimateMachine.jl/latest/DevDocs/CodeStyle/) and [unicode usage restrictions](https://clima.github.io/ClimateMachine.jl/latest/DevDocs/AcceptableUnicode/).\n\nFor more information, check out our [contributor\'s guide](https://clima.github.io/ClimaAtmos.jl/dev/contributor_guide/).\n'",,"2021/06/17, 17:25:24",860,Apache-2.0,1740,3291,"2023/10/26, 00:43:18",143,1645,2146,1229,0,31,0.5,0.5804303278688525,"2023/10/10, 22:38:28",v0.16.2,1,30,false,,false,false,,,https://github.com/CliMA,https://clima.caltech.edu,,,,https://avatars.githubusercontent.com/u/43161188?v=4,,, WaveBreaking," A python package that provides detection, classification and tracking of Rossby Wave Breaking in weather and climate data.",skaderli,https://github.com/skaderli/WaveBreaking.git,github,"classification,detection,tracking",Atmospheric Composition and Dynamics,"2023/09/13, 09:41:17",12,0,7,true,Python,,,"Python,Makefile",,"b'.. image:: https://img.shields.io/pypi/v/wavebreaking.svg\n :target: https://pypi.python.org/pypi/wavebreaking\n \n.. image:: https://img.shields.io/github/license/skaderli/wavebreaking\n :target: https://github.com/skaderli/wavebreaking/blob/master/LICENSE\n :alt: License\n \n.. image:: https://zenodo.org/badge/431515314.svg\n :target: https://zenodo.org/badge/latestdoi/431515314\n \n.. image:: https://readthedocs.org/projects/wavebreaking/badge/?version=latest\n :target: https://wavebreaking.readthedocs.io/en/latest/?version=latest\n :alt: Documentation Status\n \n.. image:: https://www.codefactor.io/repository/github/skaderli/wavebreaking/badge\n :target: https://www.codefactor.io/repository/github/skaderli/wavebreaking\n :alt: CodeFactor\n\n====================================================================================\nWaveBreaking - Detection, Classification and Tracking of Rossby Wave Breaking\n====================================================================================\n\n.. image:: https://raw.githubusercontent.com/skaderli/WaveBreaking/main/docs/figures/readme.gif\n :alt: readme gif\n\n.. start_intro\n \nWaveBreaking is a Python package that provides detection, classification and tracking of Rossby Wave Breaking (RWB) in weather and climate data. The detection of RWB is based on analyzing the dynamical tropopause, represented by a closed contour line encircling the pole, such as the 2 Potential Vorticity Units (PVU) contour line in Potential Vorticity (PV) fields. By applying three different breaking indices, regions of RWB are identified and different characteristics of the events such as area and intensity are calculated. The event tracking provides information about the temporal evolution of the RWB events. Finally, the implemented plotting methods allow for a first visualization. This tool was developed during my master studies at the University of Bern. \n\nThe detection of RWB is based on applying an RWB index to the dynamical tropopause. The WaveBreaking package provides three different RWB indices:\n\n* **Streamer Index:** The streamer index is based on work by `Wernli and Sprenger (2007)`_ (and `Sprenger et al. 2017`_). Streamers are elongated structures present on the contour line that represents the dynamical tropopause. 
They can be described by a pair of contour points that are close together considering their geographical distance but far apart considering their distance connecting the points on the contour. Further description can be found in my `master thesis `_.\n\n* **Overturning Index:** The overturning index is based on work by `Barnes and Hartmann (2012)`_. This index identifies overturning structures of the contour line. An overturning of the contour line is present if the contour intersects at least three times with the same longitude. Further description can be found in my `master thesis `_.\n\n* **Cutoff Index:** The Cutoff Index provides information about the decaying of a wave breaking event. From a Potential Vorticity perspective, a wave breaking event is formed by an elongation of the 2 PVU contour line. These so-called streamers can elongate further until they separate from the main stratospheric or tropospheric body. The separated structure is referred to as a cutoff (`Wernli and Sprenger (2007)`_).\n\n.. _`Wernli and Sprenger (2007)`: https://journals.ametsoc.org/view/journals/atsc/64/5/jas3912.1.xml\n.. _`Sprenger et al. 2017`: https://journals.ametsoc.org/view/journals/bams/98/8/bams-d-15-00299.1.xml\n.. _`Barnes and Hartmann (2012)`: https://agupubs.onlinelibrary.wiley.com/doi/full/10.1029/2012JD017469\n\nThe tool is designed to analyze gridded data provided as an `xarray.DataArray `_. Output is provided either in a `geopandas.GeoDataFrame `_ or in an `xarray.DataArray `_.\n\nParts of the data setup functions and of the tracking function are based on the `ConTrack - Contour Tracking `_ tool developed by `Daniel Steinfeld `_. \n\n**Important information:**\n\n* Free software: MIT license\n* Further documentation about the implemented methods can be found in my `master thesis `_.\n\n**Referencing:**\n\n* Please cite WaveBreaking in your publication: Kaderli, S., 2023. WaveBreaking - Detection, Classification and Tracking of Rossby Wave Breaking. https://doi.org/10.5281/zenodo.8123188\n* If you are using the Streamer Index, please cite `Wernli and Sprenger (2007)`_ (and `Sprenger et al. 2017`_).\n* If you are using the Overturning Index, please cite `Barnes and Hartmann (2012)`_.\n\n.. end_intro\n\n.. start_installation\n\nInstallation\n-------------\n\nStable release\n~~~~~~~~~~~~~~~\nTo install WaveBreaking, run this command in your terminal:\n \n.. code-block:: \n\n pip install wavebreaking\n\nThis is the preferred method to install WaveBreaking, as it will always install the most recent stable release. \nYour virtual environment is automatically checked for the necessary dependencies. \nAfter the installation, you can start calculating RWB events by following the tutorial below.\n\nFrom sources\n~~~~~~~~~~~~~\n\nThe sources for WaveBreaking can be downloaded in two different ways. You can either install WaveBreaking directly from the GitHub repository:\n\n.. code-block:: \n\n pip install git+https://github.com/skaderli/WaveBreaking\n\nOr you can clone the GitHub repository first and then install WaveBreaking locally. For that, start with setting the working directory and cloning the repository.\n\n.. code-block:: \n\n git clone https://github.com/skaderli/WaveBreaking.git\n cd /path/to/local/WaveBreaking\n\nSecond, set up the conda environment and install the necessary dependencies (this may take some time):\n\n..
code-block:: \n\n conda create -y -n wb_env\n conda env update -f environment.yml -n wb_env\n\nNow activate the environment and install the WaveBreaking package locally by using the developer mode \xe2\x80\x9c-e\xe2\x80\x9d:\n\n.. code-block::\n\n conda activate wb_env\n pip install -e .\n\nTo check if the installation was successful, perform some tests:\n\n.. code-block::\n \n python -m unittest tests.test_wavebreaking\n \n.. end_installation\n\n.. start_tutorial_part1\n\nTutorial\n---------\n\nThis tutorial shows how to calculate RWB events step by step. After successfully installing WaveBreaking, the module needs to be imported. Make sure that the Python kernel with the correct virtual environment (where WaveBreaking is installed) is running.\n\n.. code-block:: python\n\n import wavebreaking as wb\n \nMore information about the functions presented below can be found in the `documentation `_.\n\nPlease note that the algorithm depends on the order of the spatial dimensions. Both the longitude and latitude dimensions should be in ascending order. Although the WaveBreaking tool identifies and adjusts descending coordinates, the dataset should be checked and adapted before starting the calculation to get the best performance. \n \nData pre-processing:\n~~~~~~~~~~~~~~~~~~~~~ \n\nOptionally, the variable intended for the RWB calculations can be smoothed. The smoothing routine applies by default a 5-point smoothing (not diagonally) with a double-weighted center and an adjustable number of smoothing passes. Since the smoothing is based on the scipy.ndimage.convolve function, array-like weights and the mode for handling boundary values can be passed as an argument. This routine returns a xarray.DataArray with the variable ""smooth_"". \n\n.. code-block:: python\n\n # read data\n import xarray as xr\n demo_data = xr.open_dataset(""tests/data/demo_data.nc"")\n\n # smooth variable with 5 passes\n import numpy as np\n smoothed = wb.calculate_smoothed_field(data=demo_data.PV, \n passes=5,\n weights=np.array([[0, 1, 0], [1, 2, 1], [0, 1, 0]]), # optional\n mode=""wrap"") # optional\n \nThe wavebreaking module calculates the intensity for each identified event, if an intensity field is provided. In my master thesis, the intensity is represented by the momentum flux derived from the product of the (daily) zonal deviations of both wind components. The routine creates a xarray.DataArray with the variable ""mflux"". More information can be found in my `master thesis `_.\n\n.. code-block:: python\n\n # calculate momentum flux\n mflux = wb.calculate_momentum_flux(u=demo_data.U, \n v=demo_data.V)\n \n \nContour calculation:\n~~~~~~~~~~~~~~~~~~~~\n \nAll RWB indices are based on a contour line representing the dynamical tropopause. The ""calculate_contours()"" function calculates the dynamical tropopause on the desired contour levels (commonly the 2 PVU level for Potential Vorticity). The function supports several contour levels at a time which allows for processing data of both hemispheres at the same time (e.g., contour levels -2 and 2). The contour calculation is also included in the RWB index functions and doesn\'t need to be performed beforehand. However, you can also pass the contours directly to the index functions. This is especially useful if you want to perform the calculation of several indices at once. 
\n\nIf the input field is periodic, the parameter ""periodic_add"" can be used to extend the field in the longitudinal direction (default 120 degrees) to correctly extract the contour at the date border. With ""original_coordinates = False"", array indices are returned (used for the index calculations) instead of original coordinates. The routine returns a geopandas.GeoDataFrame with a geometry column and some properties for each contour. \n\n.. code-block:: python\n\n # calculate contours\n contours = wb.calculate_contours(data=smoothed, \n contour_levels=[-2, 2], \n periodic_add=120, # optional\n original_coordinates=True) # optional\n \n\nIndex calculation:\n~~~~~~~~~~~~~~~~~~~\n\nAll three RWB indices perform the contour calculation before identifying the RWB events. If you pass the separately calculated contours, the contour calculation is skipped. For the streamer index, the default parameters are taken from `Wernli and Sprenger (2007)`_ (and `Sprenger et al. 2017`_) and for the overturning index from `Barnes and Hartmann (2012)`_. If the intensity is provided (momentum flux, see data pre-processing), it is calculated for each event. All index functions create a geopandas.GeoDataFrame with a geometry column and some properties for each event. \n\n.. code-block:: python\n\n # calculate streamers\n streamers = wb.calculate_streamers(data=smoothed, \n contour_levels=[-2, 2], \n contours=contours, # optional\n geo_dis=800, # optional\n cont_dis=1200, # optional\n intensity=mflux, # optional\n periodic_add=120) # optional\n \n.. code-block:: python \n\n # calculate overturnings\n overturnings = wb.calculate_overturnings(data=smoothed, \n contour_levels=[-2, 2],\n contours=contours, # optional\n range_group=5, # optional\n min_exp=5, # optional\n intensity=mflux, # optional\n periodic_add=120) # optional\n \n.. code-block:: python\n \n # calculate cutoffs\n cutoffs = wb.calculate_cutoffs(data=smoothed, \n contour_levels=[-2, 2],\n contours=contours, # optional\n min_exp=5, # optional\n intensity=mflux, # optional\n periodic_add=120) # optional\n \nEvent classification:\n~~~~~~~~~~~~~~~~~~~~~~\n\nThe event classification is based on selecting the events of interest from the geopandas.GeoDataFrame provided by the index calculation functions. \n\nSome suggested classifications:\n\n.. code-block:: python\n\n # stratospheric and tropospheric (only for streamers and cutoffs)\n stratospheric = events[events.mean_var >= contour_level]\n tropospheric = events[events.mean_var < contour_level]\n \n # anticyclonic and cyclonic by intensity for the Northern Hemisphere\n anticyclonic = events[events.intensity >= 0]\n cyclonic = events[events.intensity < 0]\n \n # anticyclonic and cyclonic by intensity for the Southern Hemisphere\n anticyclonic = events[events.intensity <= 0]\n cyclonic = events[events.intensity > 0]\n \n # anticyclonic and cyclonic by orientation (only for overturning events)\n anticyclonic = events[events.orientation == ""anticyclonic""]\n cyclonic = events[events.orientation == ""cyclonic""]\n\n\nIn addition, a subset of events with certain characteristics can be selected, e.g. the 10% largest events:\n\n.. code-block:: python\n\n # 10 percent largest events\n large = events[events.event_area >= events.event_area.quantile(0.9)]\n\n\nTransform to DataArray:\n~~~~~~~~~~~~~~~~~~~~~~~\n\nTo calculate and visualize the occurrence of RWB events, it comes in handy to transform the coordinates of the events into a xarray.DataArray. The ""to_xarray"" function flags every grid cell where an event is present with the value 1. Before the transformation, it is suggested to classify the events first and use, for example, only stratospheric events. \n\n.. code-block:: python\n\n # classify events\n stratospheric = streamers[streamers.mean_var.abs() >= 2]\n \n # transform to xarray.DataArray\n flag_array = wb.to_xarray(data=smoothed, \n events=stratospheric)\n
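\nFor a quick occurrence statistic, the flagged array can then simply be averaged over time (a minimal sketch; it assumes the time dimension of ""flag_array"" is named ""time""):\n\n.. code-block:: python\n\n    # occurrence frequency in percent of time steps (sketch)\n    freq = flag_array.mean(dim=\'time\') * 100\n    freq.plot()  # quick-look map using the built-in xarray plotting\n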
The ""to_xarray"" function flags every grid cell where an event is present with the value 1. Before the transformation, it is suggested to classify the events first and only use for example stratospheric events. \n\n.. code-block:: python\n\n # classify events\n stratospheric = streamers[streamers.mean_var.abs() >= 2]\n \n # transform to xarray.DataArray\n flag_array = wb.to_xarray(data=smoothed, \n events=stratospheric)\n\n \nVisualization: \n~~~~~~~~~~~~~~~\n\nWaveBreaking provides two options to do a first visual analysis of the output. Both options are based on the xarray.DataArray with the flagged grid cells from the ""to_xarray"" function. \n\nTo analyze a specific large scale situation, the RWB events on a single time steps can be plotted:\n\n.. code-block:: python\n\n # import cartopy for projection\n import cartopy.crs as ccrs\n \n wb.plot_step(flag_data=flag_array,\n step=""1959-06-05T06"", #index or date\n data=smoothed, # optional\n contour_level=[-2, 2], # optional\n proj=ccrs.PlateCarree(), # optional\n size=(12,8), # optional\n periodic=True, # optional\n labels=True,# optional\n levels=None, # optional\n cmap=""Blues"", # optional\n color_events=""gold"", # optional\n title="""") # optional\n\n.. end_tutorial_part1\n\n.. image:: https://raw.githubusercontent.com/skaderli/WaveBreaking/main/docs/figures/plot_step.png\n :alt: plot step \n \n.. start_tutorial_part2 \n \nThe analyze Rossby wave breaking from a climatological perspective, the occurrence (for specific seasons) can be plotted:\n\n.. code-block:: python\n\n wb.plot_clim(flag_data=flag_array, \n seasons=None, # optional\n proj=ccrs.PlateCarree(), # optional\n size=(12,8), # optional\n smooth_passes=0, # optional\n periodic=True, # optional\n labels=True, # optional\n levels=None, # optional\n cmap=None, # optional\n title="""") # optional\n\n.. end_tutorial_part2\n\n.. image:: https://raw.githubusercontent.com/skaderli/WaveBreaking/main/docs/figures/plot_climatology.png\n :alt: plot climatology \n\n.. start_tutorial_part3\n \nEvent tracking:\n~~~~~~~~~~~~~~~~\n\nLast but not least, WaveBreaking provides a routine to track events over time. Beside the time range of the temporal tracking, two methods for defining the spatial coherence are available. Events receive the same label if they either spatially overlap (method ""by_overlapping"") or if the centre of mass lies in a certain radius (method ""by_radius""). Again, it is suggested to classify the events first and only use for example stratospheric events. This routine adds a column ""label"" to the events geopandas.GeoDataFrame.\n\n.. code-block:: python\n\n # classify events\n anticyclonic = overturnings[overturnings.orientation == ""anticyclonic""]\n\n # track events\n tracked = wb.track_events(events=anticyclonic, \n time_range=6, #time range for temporal tracking in hours\n method=""by_overlap"", #method for tracking [""by_overlap"", ""by_distance""], optional\n buffer=0, # buffer in degrees for polygons overlapping, optional\n overlap=0, # minimum overlap percentage, optinal\n distance=1000) # distance in km for method ""by_distance""\n\nThe result can be visualized by plotting the paths of the tracked events:\n\n.. code-block:: python\n \n wb.plot_tracks(data=smoothed,\n events=tracked, \n proj=ccrs.PlateCarree(), # optional\n size=(12,8), # optional\n min_path=0, # optional\n plot_events=True, # optional\n labels=True, # optional\n title="""") # optional\n \n \n.. end_tutorial_part3\n \n.. 
.. image:: https://raw.githubusercontent.com/skaderli/WaveBreaking/main/docs/figures/plot_tracks.png\n :alt: plot tracks\n\nCredits\n---------\n\n* The installation guide is to some extent based on the `ConTrack - Contour Tracking `_ tool developed by `Daniel Steinfeld `_. \n\n* This package was created with Cookiecutter_ and the `audreyr/cookiecutter-pypackage`_ project template.\n\n.. _Cookiecutter: https://github.com/audreyr/cookiecutter\n.. _`audreyr/cookiecutter-pypackage`: https://github.com/audreyr/cookiecutter-pypackage\n'",",https://zenodo.org/badge/latestdoi/431515314\n,https://doi.org/10.5281/zenodo.8123188\n*","2021/11/24, 14:28:46",700,MIT,363,363,"2023/09/09, 21:46:18",0,1,2,2,46,0,0.0,0.0,"2023/07/07, 06:35:20",v.0.3.7,0,1,false,,false,true,,,,,,,,,,, typhon,A collection of tools for atmospheric research with Python 3.,atmtools,https://github.com/atmtools/typhon.git,github,"python,python3,science,radiative-transfer,atmospheric-science",Atmospheric Composition and Dynamics,"2023/02/09, 06:57:37",56,22,10,true,Python,,atmtools,"Python,TeX,Shell",http://www.radiativetransfer.org/,"b'[![PyPI version](https://badge.fury.io/py/typhon.svg)](https://badge.fury.io/py/typhon)\n[![Anaconda-Server Badge](https://anaconda.org/rttools/typhon/badges/installer/conda.svg)](https://anaconda.org/rttools/typhon)\n[![DOI](https://zenodo.org/badge/DOI/10.5281/zenodo.1300318.svg)](https://doi.org/10.5281/zenodo.1300318)\n[![Test](https://github.com/atmtools/typhon/workflows/Test/badge.svg?branch=master)](https://github.com/atmtools/typhon/commits/master)\n\n# typhon - Tools for atmospheric research\n\n## Installation\nTyphon requires Python version 3.6 or higher. The recommended way to get Python\nis through [Anaconda], but of course any other Python distribution also\nworks.\n\n### Stable release\nThe latest stable release of typhon can be installed using ``conda`` \n(recommended)\n```bash\n$ conda install -c rttools typhon\n```\nor ``pip``\n```bash\n$ pip install typhon\n```\n\n### Development version\nCheck our information on how to [setup a development environment](CONDA-ENV.md)\nfor typhon.\n\n## Testing\nTyphon contains a simple testing framework using [pytest]. It is good\npractice to write tests for all your functions and classes. Those tests may not\nbe too extensive but should cover the basic use cases to ensure correct\nbehavior through further development of the package.\n\nTests can be run on the command line...\n```bash\n$ pytest --pyargs typhon\n```\nor using the Python interpreter:\n```python\nimport typhon\ntyphon.test()\n```\n\n## Configuration\nTyphon supports a configuration file in ``configparser`` syntax. The\nconfiguration is handled by the ``typhon.config`` module. The default file\nlocation is ``~/.typhonrc`` but can be changed using the ``TYPHONRC``\nenvironment variable.\n
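\nFor illustration, such a file can be read back with the standard library; the section and key below are invented placeholders, not typhon defaults:\n\n```python\nimport configparser\nimport os\n\nconfig = configparser.ConfigParser()\nconfig.read(os.environ.get(\'TYPHONRC\', os.path.expanduser(\'~/.typhonrc\')))\n# hypothetical section/key, for illustration only\nprint(config.get(\'environment\', \'ARTS_DATA_PATH\', fallback=\'not set\'))\n```\n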
\n\n## Documentation\nA recent build of the documentation is accessible\n[online](http://radiativetransfer.org/misc/typhon/doc-trunk).\nKindly note that bleeding edge features might not be covered.\n\n[Sphinx]: http://www.sphinx-doc.org\n[Anaconda]: https://www.continuum.io/downloads\n[pytest]: https://docs.pytest.org/\n'",",https://doi.org/10.5281/zenodo.1300318","2015/10/19, 07:07:24",2928,MIT,14,2108,"2023/02/09, 06:57:41",12,357,404,7,258,2,0.9,0.5740259740259741,"2021/12/16, 12:21:28",v0.9.0,0,17,false,,true,true,"leric2/GROMORA-harmo,JoelPang/tax_calculator,Smpljack/retrieval,SEE-GEO/DARDAR_ERA_interpolation,dmserg/CovidSpbAnalytics,lkluft/cloud-base,stefanbuehler/jupyter_arts_olr_spectrum,olemke/jupyarts,eurec4a/eurec4a-environment,diegojco/konrad,simonpf/artssat,tmieslinger/myutils,patternizer/AVHRR_NOISE,Xi-CAM/Xi-cam.Acquire,patternizer/SST_CDR_STATS,smichel/py_rainprog,jonas-hagen/pyretrievals,pcdshub/lightpath,jonas-hagen/wirac-retrievals,FIDUCEO/FCDR_HIRS,atmtools/konrad,gerritholl/pyatmlab",,https://github.com/atmtools,,,,,https://avatars.githubusercontent.com/u/17474833?v=4,,, Pace,An implementation of the FV3GFS / SHiELD atmospheric model developed by NOAA/GFDL using the GT4Py domain-specific language in Python.,ai2cm,https://github.com/ai2cm/pace.git,github,"climate-modeling,hpc,python",Atmospheric Composition and Dynamics,"2022/12/22, 21:45:34",31,7,19,true,Python,ai2cm,ai2cm,"Python,Shell,Jupyter Notebook,Makefile,Dockerfile",https://ai2cm.github.io/pace/,"b""[![CircleCI][circleci-shield]][circleci-url]\n[![Contributors][contributors-shield]][contributors-url]\n[![Stargazers][stars-shield]][stars-url]\n[![Issues][issues-shield]][issues-url]\n[![Apache License][license-shield]][license-url]\n\n# Pace\n\nPace is an implementation of the FV3GFS / SHiELD atmospheric model developed by NOAA/GFDL using the GT4Py domain-specific language in Python. The model can be run on a laptop using a Python-based backend or on thousands of heterogeneous compute nodes of a large supercomputer.\n\nFull Sphinx documentation can be found at [https://ai2cm.github.io/pace/](https://ai2cm.github.io/pace/).\n\n**WARNING** This repo is under active development - supported features and procedures can change rapidly and without notice.\n## Quickstart - bare metal\n\n### Build\n\nPace requires GCC > 9.2, MPI, and Python 3.8 on your system, and CUDA is required to run with a GPU backend. 
You will also need the headers of the boost libraries in your `$PATH` (boost itself does not need to be installed).\n\n```shell\ncd BOOST/ROOT\nwget https://boostorg.jfrog.io/artifactory/main/release/1.79.0/source/boost_1_79_0.tar.gz\ntar -xzf boost_1_79_0.tar.gz\nmkdir -p boost_1_79_0/include\nmv boost_1_79_0/boost boost_1_79_0/include/\nexport BOOST_ROOT=BOOST/ROOT/boost_1_79_0\n```\n\nWhen cloning Pace you will need to update the repository's submodules as well:\n```shell\ngit clone --recursive https://github.com/ai2cm/pace.git\n```\nor if you have already cloned the repository:\n```\ngit submodule update --init --recursive\n```\n\nWe recommend creating a python `venv` or conda environment specifically for Pace.\n\n```shell\npython3 -m venv venv_name\nsource venv_name/bin/activate\n```\n\nInside of your pace `venv` or conda environment, pip install the Python requirements, GT4Py, and Pace:\n```shell\npip3 install -r requirements_dev.txt -c constraints.txt\n```\n\nShell scripts to install Pace on specific machines such as Gaea can be found in `examples/build_scripts/`.\n\n### Run\n\nWith the environment activated, you can run an example baroclinic test case with the following command:\n```shell\nmpirun -n 6 python3 -m pace.driver.run driver/examples/configs/baroclinic_c12.yaml\n\n# or with oversubscribe if you do not have at least 6 cores\nmpirun -n 6 --oversubscribe python3 -m pace.driver.run driver/examples/configs/baroclinic_c12.yaml\n```\n\nAfter the run completes, you will see an output directory `output.zarr`. An example to visualize the output is provided in `driver/examples/plot_output.py`. See the [driver example](driver/examples/README.md) section for more details.\n
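\nSince the output is written as a Zarr store, a quick way to inspect it interactively is via xarray (a small sketch, assuming xarray and a zarr reader are available in your environment):\n\n```python\nimport xarray as xr\n\n# open the Zarr store produced by the example run and list its variables\nds = xr.open_zarr('output.zarr')\nprint(ds)\n```\n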
\n\n## Quickstart - Docker\n### Build\n\nWhile it is possible to install and build pace bare-metal, we can ensure all system libraries are installed with the correct versions by using a Docker container to test and develop pace.\n\nFirst, you will need to update the git submodules so that any dependencies are cloned and at the correct version:\n```shell\ngit submodule update --init --recursive\n```\n\nThen build the `pace` docker image at the top level.\n```shell\nmake build\n```\n### Run\n\n```shell\nmake dev\nmpirun --mca btl_vader_single_copy_mechanism none -n 6 python3 -m pace.driver.run /pace/driver/examples/configs/baroclinic_c12.yaml\n```\n\n## Running translate tests\n\nSee the [translate tests](stencils/pace/stencils/testing/README.md) section for more information.\n\n## Repository structure\n\nThe top-level directory contains the main components of pace such as the dynamical core, the physical parameterizations and utilities.\n\nThis git repository is laid out as a mono-repo, containing multiple independent projects. Because of this, it is important not to introduce unintended dependencies between projects. The graph below indicates that a project depends on another by an arrow pointing from the parent project to its dependency. For example, the tests for fv3core should be able to run with only files contained under the fv3core and util projects, and should not access any files in the driver or physics packages. Only the top-level tests in Pace are allowed to read all files.\n\n![Graph of interdependencies of Pace modules, generated from dependences.dot](./dependencies.svg)\n\n\n## ML emulation\n\nAn example of the integration of an ML model replacing the microphysics parametrization is available on the `feature/microphysics-emulator` branch.\n\n[circleci-shield]: https://dl.circleci.com/status-badge/img/gh/ai2cm/pace/tree/main.svg?style=svg\n[circleci-url]: https://dl.circleci.com/status-badge/redirect/gh/ai2cm/pace/tree/main\n[contributors-shield]: https://img.shields.io/github/contributors/ai2cm/pace.svg\n[contributors-url]: https://github.com/ai2cm/pace/graphs/contributors\n[stars-shield]: https://img.shields.io/github/stars/ai2cm/pace.svg\n[stars-url]: https://github.com/ai2cm/pace/stargazers\n[issues-shield]: https://img.shields.io/github/issues/ai2cm/pace.svg\n[issues-url]: https://github.com/ai2cm/pace/issues\n[license-shield]: https://img.shields.io/github/license/ai2cm/pace.svg\n[license-url]: https://github.com/ai2cm/pace/blob/main/LICENSE.md\n""",,"2021/08/02, 22:05:11",814,Apache-2.0,36,1145,"2023/07/03, 13:47:47",21,430,450,107,114,4,0.0,0.8034979423868313,"2022/12/20, 18:29:01",v0.2.0,0,18,false,,false,true,"ai2cm/SHiELD-wrapper,dspwithaheart/pace,pchakraborty/pace-snapshot,ai2cm/fv3gfs-fortran,ai2cm/fv3net,ai2cm/fv3gfs-wrapper,ai2cm/pace",,https://github.com/ai2cm,https://allenai.org/climate-modeling,"Seattle WA, USA",,,https://avatars.githubusercontent.com/u/55798839?v=4,,, Project Horus,An Amateur Radio High Altitude Ballooning project.,projecthorus,https://github.com/projecthorus/radiosonde_auto_rx.git,github,,Atmospheric Composition and Dynamics,"2023/10/07, 05:41:53",428,0,75,true,C,Project Horus,projecthorus,"C,Python,HTML,JavaScript,C++,CSS,Perl,Dockerfile,Shell,Makefile",,"b""![auto_rx logo](autorx.png)\n# Automatic Radiosonde Receiver Utilities\n\n**Please refer to the [auto_rx wiki](https://github.com/projecthorus/radiosonde_auto_rx/wiki) for the latest information.**\n\nThis project is built around [rs1729's RS](https://github.com/rs1729/RS) demodulators, and provides a set of utilities ('auto_rx') to allow automatic reception and uploading of [Radiosonde](https://en.wikipedia.org/wiki/Radiosonde) positions to multiple services, including:\n\n* The [SondeHub Radiosonde Tracker](https://tracker.sondehub.org) - a tracking website specifically designed for tracking radiosondes!\n* APRS-IS, for display on sites such as [radiosondy.info](https://radiosondy.info). (Note that aprs.fi now blocks radiosonde traffic.)\n* [ChaseMapper](https://github.com/projecthorus/chasemapper) for mobile\n radiosonde chasing.\n\nAuto-RX's [Web Interface](https://github.com/projecthorus/radiosonde_auto_rx/wiki/Web-Interface-Guide) provides a way of seeing the live status of your station, and also a means of reviewing and analysing previous radiosonde flights. 
Collected meteorological data can be plotted in the common 'Skew-T' format.\n\n### Radiosonde Support Matrix\n\nManufacturer | Model | Position | Temperature | Humidity | Pressure | XDATA\n-------------|-------|----------|-------------|----------|----------|------\nVaisala | RS92-SGP/NGP | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark:\nVaisala | RS41-SG/SGP/SGM | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: (for -SGP) | :heavy_check_mark:\nGraw | DFM06/09/17 | :heavy_check_mark: | :heavy_check_mark: | :x: | :x: | :heavy_check_mark:\nMeteomodem | M10 | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | Not Sent | :x:\nMeteomodem | M20 | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: (For some models) | :x:\nIntermet Systems | iMet-4 | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark:\nIntermet Systems | iMet-54 | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | Not Sent | :x:\nLockheed Martin | LMS6-400/1680 | :heavy_check_mark: | :x: | :x: | :x: | Not Sent\nMeisei | iMS-100 | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :x: | Not Sent\nMeisei | RS11G | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :x: | Not Sent\nMeteo-Radiy | MRZ-H1 (400 MHz) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :x: | Not Sent\nMeteosis | MTS01 | :heavy_check_mark: | :heavy_check_mark: | :x: | :x: | Not Sent\n\nSupport for other radiosondes may be added as required - please send us sondes to test with! If you have any information about telemetry formats, we'd love to hear from you (see our contact details below).\n\nImprovements from the upstream RS codebase will be merged into this codebase when/where appropriate. A big thanks to rs1729 for continuing to develop and improve these decoders, and working with us to make auto_rx decode *all* the radiosondes!\n\n### Updates\n\n**This software is under regular development. Please [update regularly](https://github.com/projecthorus/radiosonde_auto_rx/wiki/Performing-Updates) to get bug-fixes and improvements!**\n\nPlease consider joining the Google Group to receive updates on new software features:\nhttps://groups.google.com/forum/#!forum/radiosonde_auto_rx\n\n## Presentations\n* Linux.conf.au 2019 - https://www.youtube.com/watch?v=YBy-bXEWZeM\n* UKHAS Conference 2019 - [Presented via Skype](https://youtu.be/azDJmMywBgw?t=643) which had some audio issues at the start. Slides [here](https://rfhead.net/sondes/auto_rx_presentation_UKHAS2019.pdf).\n\n## Contacts\n* [Mark Jessop](https://github.com/darksidelemm) - vk5qi@rfhead.net\n* [Michaela Wheeler](https://github.com/TheSkorm) - radiosonde@michaela.lgbt\n\n## Licensing Information\nAll software within this repository is licensed under the GNU General Public License v3. Refer to this repository's LICENSE file for the full license text.\n\nRadiosonde telemetry data captured via this software and uploaded into the [Sondehub](https://sondehub.org/) Database system is licensed under [Creative Commons BY-SA v2.0](https://creativecommons.org/licenses/by-sa/2.0/). \nTelemetry data uploaded into the APRS-IS network is generally considered to be released into the public domain. \n\nBy uploading data into these systems (by enabling the relevant uploaders within the `station.cfg` file) you as the user agree for your data to be made available under these licenses.
Note that uploading to Sondehub is enabled by default. """,,"2017/12/16, 10:43:18",2139,GPL-3.0,253,2139,"2023/10/23, 20:39:49",43,501,783,145,2,3,0.0,0.5007610350076104,"2023/10/07, 08:37:02",v1.7.1,0,24,true,"patreon,custom",false,false,,,https://github.com/projecthorus,http://projecthorus.org/,,,,https://avatars.githubusercontent.com/u/15122500?v=4,,, ANEMOI,Large-eddy simulation code written in CUDA Fortran for simulating atmospheric boundary layer flows.,moulin1024,https://github.com/moulin1024/ANEMOI.git,github,"computational-fluid-dynamics,large-eddy-simulation,cuda-fortran",Atmospheric Composition and Dynamics,"2023/08/31, 20:12:55",13,0,7,true,Fortran,,,"Fortran,Python,Makefile,Shell,Jupyter Notebook",,"b""## ANEMOI\n@author: Mou Lin (mou.lin@epfl.ch), Tristan Revaz (tristan.revaz@epfl.ch)\n![ANEMOI flow animation](./animation.gif)\n\n### Introduction\nANEMOI is a large-eddy simulation code written in CUDA Fortran for simulating atmospheric boundary layer flows. It solves the filtered continuity equation and the filtered Navier-Stokes equations (using the Boussinesq approximation). The numerical method used in this code is based on the PhD thesis of Albertson, 1996 [1] (attached in doc/theory).\n\nIts main features can be summarized as follows: It uses a second-order Adams\xe2\x80\x93Bashforth explicit scheme for time advancement and a hybrid pseudospectral finite-difference scheme for the spatial discretization. The lateral boundary conditions are periodic. The top boundary condition is set up as a flux-free condition. The bottom boundary condition requires the calculation of the instantaneous surface shear stress, which is accomplished through the local application of Monin\xe2\x80\x93Obukhov similarity theory. The SGS fluxes of momentum are parameterized using Lagrangian scale-dependent dynamic models [2].
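For reference, the second-order Adams–Bashforth step mentioned above advances a field $u$ with tendency $f$ as (standard textbook form of the scheme, not taken from the ANEMOI documentation):

$$u^{n+1} = u^{n} + \Delta t \left( \tfrac{3}{2} f^{n} - \tfrac{1}{2} f^{n-1} \right)$$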
\n\n### Code structure:\n\n\n```bash\nWireLES2/\n\xe2\x94\x9c\xe2\x94\x80\xe2\x94\x80 doc (documentation dir)\n\xe2\x94\x9c\xe2\x94\x80\xe2\x94\x80 job (simulation job dir)\n\xe2\x94\x82 \xe2\x94\x94\xe2\x94\x80\xe2\x94\x80 [job1_name] \n\xe2\x94\x82 \xe2\x94\x9c\xe2\x94\x80\xe2\x94\x80 input (contains config file)\n\xe2\x94\x82 \xe2\x94\x94\xe2\x94\x80\xe2\x94\x80 src (copied and compiled src dir)\n\xe2\x94\x82\t\xe2\x94\x94\xe2\x94\x80\xe2\x94\x80 [job2_name]\n\xe2\x94\x82\t\xe2\x94\x94\xe2\x94\x80\xe2\x94\x80 ...\n\xe2\x94\x9c\xe2\x94\x80\xe2\x94\x80 prc (preprocessing python script dir)\n\xe2\x94\x94\xe2\x94\x80\xe2\x94\x80 src (CUDA Fortran code dir)\n```\n### How to run\nRun the following line in the terminal first, or add it to ~/.bashrc:\n```bash\nalias wireles='python prc/wireles.py'\n```\nthen you can run the various applications with the line\n```bash\nwireles [application name] [case name]\n```\napplications list:\n- create: create case with dummy default file\n- remove: remove case completely\n- clean: remove all files except input/config\n- edit: edit config file\n- pre: pre-process case\n- solve: submit case to the cluster through slurm\n- debug: run the case on the local machine or in cluster interactive debug mode\n- make: compile the case\n- post: quick built-in post processing of the simulation data\n- anime: produce an animation of the flow from instant fields output\n- h5gen: generate a self-describing hdf5 file of the simulation data for customized postprocessing\n\nA simple workflow: create --> edit --> pre --> solve/debug --> post/anime\n \nTo make life easier, a bash script *test_run.sh* has been provided to allow the new user to run an example case easily. \n\n \n### Reference\n [1] Albertson, J.D., 1996. Large eddy simulation of land-atmosphere interaction. University of California, Davis.\n \n [2] Bou-Zeid, E., Meneveau, C. and Parlange, M., 2005. A scale-dependent Lagrangian dynamic model for large eddy simulation of complex turbulent flows.
Physics of Fluids, 17(2), p.025105.\n\n\n""",,"2020/09/10, 09:43:21",1140,MIT,35,258,"2023/10/23, 20:39:49",1,0,0,0,2,0,0,0.012345679012345734,,,0,2,false,,false,false,,,,,,,,,,, CIS,"An open source command-line tool for easy collocation, visualization, analysis, and comparison of diverse gridded and ungridded datasets used in atmospheric science.",cedadev,https://github.com/cedadev/cis.git,github,"geo,climate-data,python,earth-science,analysis",Atmospheric Composition and Dynamics,"2023/10/13, 22:16:15",41,6,11,true,Python,Centre for Environmental Data Analysis Developers,cedadev,"Python,Shell",www.cistools.net,"b""CIS\n===\n\n[![Join the chat at https://gitter.im/cedadev/cis](https://badges.gitter.im/cedadev/cis.svg)](https://gitter.im/cedadev/cis?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge&utm_content=badge)\n[![Build Status](https://circleci.com/gh/cedadev/cis/tree/master.svg?style=svg)](https://circleci.com/gh/cedadev/cis/tree/master)\n[![codecov](https://codecov.io/gh/cedadev/cis/branch/master/graph/badge.svg?token=8QTY6ZZN95)](https://codecov.io/gh/cedadev/cis)\n[![Documentation Status](https://readthedocs.org/projects/cis/badge/?version=latest)](https://readthedocs.org/projects/cis/?badge=latest)\n[![Downloads](https://anaconda.org/conda-forge/cis/badges/downloads.svg)](https://anaconda.org/conda-forge/cis/files)\n[![DOI](https://zenodo.org/badge/DOI/10.5281/zenodo.4518867.svg)](https://doi.org/10.5281/zenodo.4518867)\n\n\nCIS is an open source Python library and command-line tool for the easy collocation, visualization, analysis, and comparison of a\ndiverse set of gridded and ungridded datasets used across earth sciences. Visit our homepage at www.cistools.net.\n\nFor issue tracking and improvement suggestions please see our JIRA project at: https://jira.ceh.ac.uk/projects/JASCIS/issues.\n\nInstallation\n------------\n\nA pre-packaged version of CIS is available for installation using conda for Linux, Mac OSX and Windows.\n\nOnce conda is installed, you can easily install CIS with the following command:\n\n conda install -c conda-forge cis\n\nIf you don\xe2\x80\x99t already have conda, you must first download and install it.\nAnaconda is a free Python distribution that includes conda and many common scientific and data analysis libraries, and is available here: http://continuum.io/downloads.\nFurther documentation on using Anaconda and the features it provides can be found here: http://docs.continuum.io/anaconda/index.html\n\nMore details for installing CIS from source, and other package sources, can be found in the get-started [documentation](http://cistools.net/get-started#installation).\n\n\nContact\n-------\n\nPhilip.Kershaw@stfc.ac.uk, Philip.Stier@physics.ox.ac.uk or Duncan.Watson-Parris@physics.ox.ac.uk\n\n\nCopyright and licence\n---------------------\n\n(C) University of Oxford 2013\n\nThis file is part of the Community Intercomparison Suite (CIS).\n\nCIS is free software: you can redistribute it and/or modify it under\nthe terms of the GNU Lesser General Public License as published by the\nFree Software Foundation, either version 3 of the License, or\n(at your option) any later version.\n\nCIS is distributed in the hope that it will be useful,\nbut WITHOUT ANY WARRANTY; without even the implied warranty of\nMERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\nGNU Lesser General Public License for more details.\n\nYou should have received a copy of the GNU Lesser General Public License\nalong with CIS.
If not, see <http://www.gnu.org/licenses/>.\n\nWe gratefully use a number of NASA Visible Earth 'Blue Marble' raster \nimages in this repository.\n""",",https://doi.org/10.5281/zenodo.4518867","2014/04/02, 14:50:20",3493,LGPL-3.0,9,3160,"2023/10/13, 22:16:16",17,16,28,2,12,4,0.3,0.3183649971214738,"2023/05/24, 03:14:58",1.7.8,0,8,false,,false,false,"narest-qa/repo68,DarkS1deX/DarkOs1nt,cp4cds/qcapp,duncanwp/CALIOPy,duncanwp/cmorize,pytroll/cis-troll-match",,https://github.com/cedadev,http://www.ceda.ac.uk,UK,,,https://avatars.githubusercontent.com/u/1781681?v=4,,, EMC²,An open source framework for atmospheric model and observational column comparison.,columncolab,https://github.com/columncolab/EMC2.git,github,,Atmospheric Composition and Dynamics,"2023/07/28, 15:22:27",8,0,3,true,Python,,columncolab,"Python,Shell",,"b'EMC\xc2\xb2: the Earth Model Column Collaboratory\n==========================================\n\n.. image:: https://img.shields.io/pypi/v/emc2.svg\n :target: https://pypi.python.org/pypi/emc2\n :alt: Latest PyPI version\n\n.. image:: https://travis-ci.org/columncolab/EMC2.png\n :target: https://travis-ci.org/columncolab/EMC2\n :alt: Latest Travis CI build status\n\nAn open source framework for atmospheric model and observational column comparison.\nSupported by the Atmospheric Systems Research (ASR) program of the United States Department of Energy.\n\nThe Earth Model Column Collaboratory (EMC\xc2\xb2) is inspired by past work comparing remotely sensed zenith-pointing\nmeasurements to climate models and their single-column model modes (SCMs)\n(e.g., Bodas-Salcedo et al., 2008; Lamer et al. 2018; Swales et al. 2018).\n\nEMC\xc2\xb2 provides an open source software framework to:\n\n1. Represent both ARM measurements and GCM columns in the Python programming\n language building on the Atmospheric Community Toolkit (ACT, Theisen et al. 2019)\n and leveraging the EMC\xc2\xb2 team\xe2\x80\x99s success with Py-ART (Helmus and Collis 2016).\n2. Scale GCM outputs (using the cloud fraction) to compare with sub-grid-scale column measurements\n using a modular sub column generator designed to run off-line on time series extracted from\n existing GCM/SCM output.\n3. Enable a suite of comparisons between ARM (and other) column measurements and\n the GCM model subcolumns.\n\nA detailed description of EMC\xc2\xb2 is provided in Silber et al. (GMD, 2022;\nhttps://doi.org/10.5194/gmd-15-901-2022).\n\n\nUsage\n-----\n\nFor details on how to use EMC\xc2\xb2, please see the Documentation (https://columncolab.github.io/EMC2).\n\nInstallation\n------------\n\nIn order to install EMC\xc2\xb2, you can use either pip or anaconda. In a terminal, simply type either of::\n\n$ pip install emc2\n$ conda install -c conda-forge emc2\n\nIn addition, if you want to build EMC\xc2\xb2 from source and install, type in the following commands::\n\n$ git clone https://github.com/columncolab/EMC2\n$ cd EMC2\n$ pip install .\n\nRequirements\n^^^^^^^^^^^^\n\nEMC\xc2\xb2 requires Python 3.6+ as well as: \n * Atmospheric Community Toolkit (https://arm-doe.github.io/ACT). \n * Numpy (https://numpy.org)\n * Scipy (https://scipy.org)\n * Matplotlib (https://matplotlib.org)\n * Xarray (http://xarray.pydata.org)\n * Pandas (https://pandas.pydata.org/)\n \nLicence\n-------\n\nCopyright 2021 Authors\n\nRedistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met:\n\n1.
Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer.\n\n2. Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution.\n\n3. Neither the name of the copyright holder nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission.\n\nTHIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS ""AS IS"" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.\n\nAuthors\n-------\n\n`EMC\xc2\xb2` was written by `Robert Jackson `_ and `Israel Silber `_.\nCollaborators and Contributors include `Scott Collis `_, and Ann Fridlind (NASA GISS). \nPlease don\'t hesitate to reach out to contributors `Jingjing Tian `_ and `Yuying Zhang `_ if you have any questions regarding the statistics_LLNL module.\n\nReferences\n----------\n\nBodas-Salcedo, A., Webb, M. J., Brooks, M. E., Ringer, M. A., Williams, K. D., Milton, S. F., and Wilson, D. R. (2008), Evaluating cloud systems in the Met Office global forecast model using simulated CloudSat radar reflectivities, Journal of Geophysical Research: Atmospheres, 113, https://doi.org/10.1029/2007JD009620, https://agupubs.onlinelibrary.wiley.com/doi/abs/10.1029/2007JD009620.\n\nEynard-Bontemps, G., R Abernathey, J. Hamman, A. Ponte, W. Rath, (2019), The Pangeo Big Data Ecosystem and its use at CNES. In P. Soille, S. Loekken, and S. Albani, Proc. of the 2019 conference on Big Data from Space (BiDS\xe2\x80\x992019), 49-52. EUR 29660 EN, Publications Office of the European Union, Luxembourg. ISBN: 978-92-76-00034-1, doi:10.2760/848593.\n\nHelmus, J., Collis, S. (2016), The Python ARM Radar Toolkit (Py-ART), a Library for Working with Weather Radar Data in the Python Programming Language. Journal of Open Research Software 4. https://doi.org/10.5334/jors.119\n\nJupyter et al. (2018), ""Binder 2.0 - Reproducible, Interactive, Sharable Environments for Science at Scale,"" Proceedings of the 17th Python in Science Conference, 10.25080/Majora-4af1f417-011\n\nLamer, K. (2018), Relative Occurrence of Liquid Water, Ice and Mixed-Phase Conditions within Various Cloud and Precipitation Regimes: Long Term Ground-Based Observations for GCM Model Evaluation, The Pennsylvania State University, PhD dissertation.\n\nSwales, D.J., Pincus, R., Bodas-Salcedo, A. (2018), The Cloud Feedback Model Intercomparison Project Observational Simulator Package: Version 2. Geosci. Model Dev. 11, 77\xe2\x80\x9381. https://doi.org/10.5194/gmd-11-77-2018\n\nTheisen et al.
(2019), Atmospheric Community Toolkit: https://github.com/ANL-DIGR/ACT.\n'",",https://doi.org/10.5334/jors.119\n\nJupyter,https://doi.org/10.5194/gmd-11-77-2018\n\nTheisen","2019/10/07, 14:57:34",1479,BSD-3-Clause,30,556,"2023/07/28, 15:22:28",2,94,107,7,89,0,0.5,0.327433628318584,"2023/04/07, 21:57:17",1.3.0,0,4,false,,false,false,,,https://github.com/columncolab,,,,,https://avatars.githubusercontent.com/u/54288448?v=4,,, Unidata Science Gateway,"Provide the academic and research community an environment they can employ to access, analyze and visualize real-time and case study Earth system science data.",Unidata,https://github.com/Unidata/science-gateway.git,github,"cloud-computing,unidata,xsede,geoscience,atmospheric-science,nsf,science-gateways,jupyterhub,escience",Atmospheric Composition and Dynamics,"2023/10/18, 15:47:50",17,0,2,true,TeX,Unidata,Unidata,"TeX,Jupyter Notebook,Shell,HTML,CSS,Dockerfile,JavaScript,Python,Java",https://science-gateway.unidata.ucar.edu/,"b'[![DOI](https://img.shields.io/static/v1?label=DOI&message=10.5065/688s-2w73&color=blue)](https://doi.org/10.5065/688s-2w73) [![License: BSD-3-Clause](https://img.shields.io/badge/License-BSD--3--Clause-green)](https://opensource.org/licenses/BSD-3-Clause)\n\n\n# Unidata Science Gateway on the NSF Jetstream Cloud\n\n![img](https://github.com/Unidata/science-gateway/blob/master/jetstream.png ""Jetstream"")\n\nWelcome to this README on running Unidata Science Gateway resources on the NSF Jetstream cloud. This document points to a collection of READMEs for running Unidata technologies on Jetstream. Browse below on the following topics:\n\n- [Creating a Zero To JupyterHub Cluster on Jetstream](vms/jupyter/readme.md)\n- [Creating a THREDDS VM on Jetstream](vms/thredds/readme.md)\n- [Creating a THREDDS AWS Nexrad VM on Jetstream](vms/thredds-aws/readme.md)\n- [Creating a Science Gateway VM on Jetstream](vms/science-gateway/readme.md)\n- [Creating an IDD Relay VM on Jetstream](vms/idd-relay/readme.md)\n- [Creating an IDD Archiver VM on Jetstream](vms/idd-archiver/readme.md)\n- [Creating an ADDE VM on Jetstream](vms/mcidas/readme.md)\n- [Creating a RAMADDA VM on Jetstream](vms/ramadda/readme.md)\n- [Running VMs on Jetstream with OpenStack](openstack/readme.md)\n'",",https://doi.org/10.5065/688s-2w73","2017/03/02, 20:15:55",2428,BSD-3-Clause,155,741,"2023/10/03, 21:58:08",19,683,850,145,22,14,0.0,0.15896739130434778,"2021/09/21, 15:42:07",v1.0.0,0,7,false,,false,false,,,https://github.com/Unidata,https://www.unidata.ucar.edu/,"Boulder, Colorado, USA",,,https://avatars.githubusercontent.com/u/613345?v=4,,, SounderPy,A python package that helps you to access and plot vertical profile data for meteorological analysis.,kylejgillett,https://github.com/kylejgillett/sounderpy.git,github,"data-analysis-python,meteorology,python,weather,weather-data,atmospheric-science,atmospheric-sciences",Atmospheric Composition and Dynamics,"2023/10/17, 21:22:10",21,0,21,true,Python,,,Python,https://pypi.org/project/sounderpy/,"b'
\r\n\r\n\r\n# SounderPy - Vertical Profile Data Retrieval and Analysis Tool For Python\r\nLATEST VERSION: v2.0.6 | RELEASED: October 4th, 2023 | COPYRIGHT Kyle J Gillett, 2023\r\n### [VISIT SOUNDERPY DOCUMENTATION HERE](https://github.com/kylejgillett/sounderpy/wiki)\r\n#### [CHECK OUT EXAMPLES & TUTORIALS](https://github.com/kylejgillett/sounderpy/blob/main/examples)\r\nA Python package that helps you to access and plot vertical profile data for meteorological analysis \r\n\r\n[![PyPI Package](https://img.shields.io/pypi/v/sounderpy.svg)](https://pypi.python.org/pypi/sounderpy/)\r\n[![PyPI Downloads](https://img.shields.io/pypi/dm/sounderpy.svg)](https://pypi.python.org/pypi/sounderpy/)\r\n[![PyPI license](https://img.shields.io/pypi/l/ansicolortags.svg)](https://github.com/kylejgillett/sounderpy/blob/main/LICENSE.txt)\r\n[![PyPI pyversions](https://img.shields.io/pypi/pyversions/sounderpy.svg)](https://pypi.python.org/pypi/sounderpy/)\r\n[![GitHub commits](https://badgen.net/github/commits/kylejgillett/sounderpy)](https://GitHub.com/kylejgillett/sounderpy/commit/)\r\n[![Maintainer](https://img.shields.io/badge/maintainer-kylejgillett-blue)](https://github.com/kylejgillett)\r\n[![made-with-python](https://img.shields.io/badge/Made%20with-Python-1f425f.svg)](https://www.python.org/)\r\n[![DOI](https://zenodo.org/badge/DOI/10.5281/zenodo.10011851.svg)](https://doi.org/10.5281/zenodo.10011851)\r\n\r\n
\r\n\r\n\r\n\r\n-----\r\n## What is SounderPy:\r\n\r\n- SounderPy is a Python package used to access vertical profile data for calculations or plotting of a vertical profile (sounding). SounderPy\'s main use is for getting the data, but some basic plotting tools are included.\r\n\r\n## SounderPy is currently capable of accessing and processing data from:\r\n\r\n- ECMWF CDS ERA5 reanalysis [1940-present] *note: you must set up an account through the CDS to unlock ERA5 data. (see: https://cds.climate.copernicus.eu/api-how-to)\r\n- UNIDATA THREDDS TDS RAP reanalysis [2005-present]\r\n- UNIDATA THREDDS TDS RUC reanalysis [2005-2020]\r\n- The University of Wyoming RAOB archive [1973-present, depending on station]\r\n- Iowa State University\'s RAOB archive [1945-present, depending on station]\r\n- The IGRAv2 Observed profile archive [1905-present, depending on station]\r\n- Iowa State University\'s BUFKIT archive [2011-present, depending on station & model]\r\n- Penn State University\'s BUFKIT archive [most recent run, depending on station & model]\r\n- UNIDATA THREDDS TDS RAP-most-recent-analysis [now, most recent analysis only]\r\n- OU Aircraft Communications, Addressing and Reporting System (ACARS) [2019-present]\r\n- NCEP FNL 0.25deg Gridded Reanalysis Dataset [2021-present]\r\n\r\n\r\n## Why SounderPy?\r\n- Sometimes data is tough to find, and oftentimes it is even tougher to get in the format you like. SounderPy gets you this data!\r\n- The code needed for loading and parsing vertical data (especially from models) can be large and messy. SounderPy keeps it hidden away in a PyPI package -- just import and call SounderPy functions to keep your code clean!\r\n\r\n-------\r\n\r\n## How to use SounderPy:\r\n1. Make sure your environment has the required dependencies:\r\n - cdsapi>=0.6.1\r\n - ecape>0.0.0\r\n - matplotlib>=3.3.0, <=3.7.1\r\n - metpy>=1.5.1\r\n - netcdf4>=1.6.4\r\n - numpy>=1.20.0\r\n - pandas>=1.2.0\r\n - siphon>=0.9\r\n - scipy>= 1.10.1\r\n - xarray>=0.18.0\r\n\r\n2. ```\r\n pip install sounderpy\r\n ```\r\n Find it at https://pypi.org/project/sounderpy/\r\n3. ```\r\n import sounderpy as spy\r\n ```\r\n4. ```\r\n year = \'2011\' \r\n month = \'04\'\r\n day = \'27\'\r\n hour = \'22\'\r\n latlon = [33.19, -87.46]\r\n method = \'rap\' \r\n ```\r\n5. ```\r\n raw_data = spy.get_model_data(method, latlon, year, month, day, hour)\r\n ```\r\n6. ```\r\n clean_data = spy.parse_data(raw_data)\r\n ```\r\n------\r\n and boom! Now you have a callable dictionary of vertical profile reanalysis data including... \r\n \r\n1. Temperature\r\n2. Dewpoint\r\n3. Relative Humidity\r\n4. Pressure\r\n5. Height \r\n6. U-component Wind \r\n7. V-component Wind\r\n\r\n\r\nYou can make a quick sounding plot of the data using built-in MetPy plotting functions! Just call...\r\n`spy.metpy_sounding(clean_data)`\r\n
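Putting steps 2–6 together, a minimal end-to-end sketch (same example values as above; assumes the dependencies from step 1 are installed):

```python
import sounderpy as spy

# RAP reanalysis profile for 22Z on April 27, 2011, near Tuscaloosa, AL
year, month, day, hour = '2011', '04', '27', '22'
latlon = [33.19, -87.46]

raw_data = spy.get_model_data('rap', latlon, year, month, day, hour)
clean_data = spy.parse_data(raw_data)  # dict of T, Td, RH, p, z, u, v
spy.metpy_sounding(clean_data)         # quick skew-T plot via MetPy
```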
\r\n\r\n
\r\n\r\n\r\nor for a hodograph-only plot...\r\n\r\n`spy.metpy_hodograph(clean_data)`\r\n\r\n\r\n
\r\n\r\n
\r\n\r\n------\r\n\r\nor a looping GIF using this tutorial: *[SounderPy Looping GIF Tutorial](https://github.com/kylejgillett/sounderpy/blob/main/examples/sounderpy-gif_tutorial.ipynb)*\r\n\r\n\r\n
\r\n\r\n
\r\n\r\n------\r\n\r\n## AUTHORS AND CONTRIBUTORS\r\n### **AUTHOR: Kyle J Gillett, Central Michigan University** \r\n#### CONTRIBUTOR: Scott Thomas, NWS Grand Rapids \r\n\r\n------\r\n\r\n\r\n## CITING SOUNDERPY\r\n[![DOI](https://zenodo.org/badge/DOI/10.5281/zenodo.10011851.svg)](https://doi.org/10.5281/zenodo.10011851)\r\n\r\nin AMS format:\r\n\r\n- Gillett, K., 2023: SounderPy: Vertical Profile Data Retrieval & Analysis Tool for Python (Version 2.0.6). Py-Pi, https://pypi.org/project/sounderpy/\r\n\r\n------\r\n\r\n\r\n\r\n## REFERENCES\r\n- Harris, C.R., Millman, K.J., van der Walt, S.J. et al. Array programming with NumPy. Nature 585, 357\xe2\x80\x93362 (2020). DOI: 10.1038/s41586-020-2649-2.\r\n \r\n- Hoyer, S. & Hamman, J., (2017). xarray: N-D labeled Arrays and Datasets in Python. Journal of Open Research Software. 5(1), p.10. DOI: https://doi.org/10.5334/jors.148\r\n \r\n- J. D. Hunter, ""Matplotlib: A 2D Graphics Environment"", Computing in Science & Engineering, vol. 9, no. 3, pp. 90-95, 2007.\r\n\r\n- Ryan M. May, Sean C. Arms, Patrick Marsh, Eric Bruning, John R. Leeman, Kevin Goebbert, Jonathan E. Thielen, Zachary S Bruick, and M. Drew. Camron. Metpy: a Python package for meteorological data. 2023. URL: Unidata/MetPy, doi:10.5065/D6WW7G29.\r\n\r\n- Ryan M. May, Sean C. Arms, John R. Leeman, and Chastang, J. Siphon: A collection of Python Utilities for Accessing Remote Atmospheric and Oceanic Datasets. Unidata. 2017. [Available online at https://github.com/Unidata/siphon.] doi:10.5065/D6CN72NW.\r\n\r\n- Pauli Virtanen, Ralf Gommers, Travis E. Oliphant, Matt Haberland, Tyler Reddy, David Cournapeau, Evgeni Burovski, Pearu Peterson, Warren Weckesser, Jonathan Bright, St\xc3\xa9fan J. van der Walt, Matthew Brett, Joshua Wilson, K. Jarrod Millman, Nikolay Mayorov, Andrew R. J. Nelson, Eric Jones, Robert Kern, Eric Larson, CJ Carey, \xc4\xb0lhan Polat, Yu Feng, Eric W. Moore, Jake VanderPlas, Denis Laxalde, Josef Perktold, Robert Cimrman, Ian Henriksen, E.A. Quintero, Charles R Harris, Anne M. Archibald, Ant\xc3\xb4nio H. Ribeiro, Fabian Pedregosa, Paul van Mulbregt, and SciPy 1.0 Contributors. (2020) SciPy 1.0: Fundamental Algorithms for Scientific Computing in Python. Nature Methods, 17(3), 261-272.\r\n'",",https://doi.org/10.5281/zenodo.10011851,https://doi.org/10.5281/zenodo.10011851,https://doi.org/10.5334/jors.148\r\n","2023/07/06, 21:31:30",111,MIT,176,176,"2023/09/26, 14:13:22",2,1,12,12,29,0,0.0,0.0,"2023/10/17, 00:58:48",v2.0.6,0,1,false,,false,false,,,,,,,,,,, AC_tools,Contains functions and scripts used for working with atmospheric model output and observational data.,tsherwen,https://github.com/tsherwen/AC_tools.git,github,"geos-chem,geos5,atmospheric-chemistry,scientific-computing,kpp,hemco,geos-cf,ctm",Atmospheric Composition and Dynamics,"2023/03/02, 15:59:27",10,0,3,true,Python,,,"Python,Shell",,"b'AC_tools: Atmospheric Chemistry (AC) tools\n======================================\n.. image:: https://zenodo.org/badge/DOI/10.5281/zenodo.4789901.svg\n :target: https://doi.org/10.5281/zenodo.4789901\n\n**Atmospheric Chemistry Tools (AC_Tools)** contains functions and scripts used for \nworking with atmospheric model output and observational data. 
\nMany functions are included for working with global and regional \nchemical transport model (CTM) output from the GEOS-Chem model.\n\nThis package started as just a collection of scripts that were\nfound to be useful for work in atmospheric chemistry and now\nsimply aims to contain functionality outside the remit of the \nmore specialised community packages (e.g. PyGChem_, xbpch_, and \ngcpy_) and use the existing Python stack (e.g. dask_, xarray_, \npandas_). \n`Please raise any questions/comments or bugs as issues here on GitHub `_ \nand `pull requests are welcome! `_\n\nInstallation\n------------\n\n**AC_Tools** is currently only installable from source. To do this, you\ncan either install directly via pip (recommended, and this includes any dependencies)::\n\n\n $ pip install git+https://github.com/tsherwen/AC_tools.git\n\n\nor (not recommended), clone the source directory and manually install::\n\n\n $ git clone https://github.com/tsherwen/AC_tools.git\n $ cd AC_tools\n $ python setup.py install\n\n\nIf you specifically wish to use the legacy ""bpch to NetCDF"" (`bpch2netCDF`_) capability, you will need to run AC_tools in a Python2 environment due to use of an `iris`_ backend via `PyGChem`_. In this specific case, please install using the latter method and ensure that you have `iris`_ (version 1.13.0) installed. You will also need to install `PyGChem`_ (version 0.3.0) by the same route. \n\n\nQuick Start\n-----------\n\nFunctions within **AC_Tools** can be used for various tasks for handling model output and observations. \n\nAn example would be importing NetCDF files or converting ctm.bpch files from a directory of GEOS-Chem_ output (with ``tracerinfo.dat`` and ``diaginfo.dat`` files). Or using GEOS-Chem_ NetCDF output to make a quick plot of surface ozone. \n\nIf using within a python3 environment with GEOS-Chem NetCDF output:\n\n.. code:: python\n\n import AC_tools as AC\n import matplotlib.pyplot as plt\n folder = \'\'\n # Get the GEOS-Chem NetCDF output as an xarray dataset object\n # NOTE: this is just a wrapper of get_GEOSChem_files_as_ds, which can retrieve GEOS-Chem NetCDFs as a dataset\n ds = AC.GetSpeciesConcDataset(wd=folder)\n # Average dataset over time\n ds = ds.mean(dim=\'time\') \n # Select the surface level\n ds = ds.sel( lev=ds.lev[0] ) \n # Select ozone and do a basic plot\n spec = \'O3\' \n #ds[\'SpeciesConc_\'+spec].plot() # very simple plot\n AC.quick_map_plot( ds, var2plot=\'SpeciesConc_\'+spec) # basic lat-lon plot\n plt.show()\n # Get global average surface CO \n spec = \'CO\'\n ratio = (ds[\'SpeciesConc_\'+spec] * ds[\'AREA\']).sum() / ds[\'AREA\'].sum()\n ratio = float(ratio.values) \n # Make a formatted string and then print using this to screen\n prt_str = ""The global average surface mixing ratio of {spec} (ppbv) is: {ratio}"" \n print(prt_str.format(spec=spec, ratio=ratio*1E9))\n\n\nIf using within a python2 environment, the below example is a way of accessing GEOS-Chem data. The data is converted from bpch to NetCDF by default via an iris backend through PyGChem (using bpch2netCDF.py).\n\n..
code:: python\n\n import AC_tools as AC\n folder = \'\'\n # Get the atmospheric ozone burden in Gg O3 as a np.array\n array = AC.get_O3_burden_bpch(folder)\n print( ""The ozone burden is: {burden}"".format(burden=array.sum()))\n # Get surface area for resolution \n s_area = get_surface_area(res)[..., 0] # m2 land map\n # Get global average surface CO \n spec = \'CO\'\n array = AC.get_GC_output(wd=folder, vars=[\'IJ_AVG_S__{}\'.format(spec)])\n ratio = AC.get_2D_arr_weighted_by_X(array, res=\'4x5\', s_area=s_area) \n # Make a formatted string and then print using this to screen\n prt_str = ""The global average surface mixing ratio of {spec} (ppbv) is: {ratio}""\n print( prt_str.format(spec=spec, ratio=ratio*1E9))\n \n \nUsage\n------------\n\nExample analysis code for using AC_tools is available in the \nscripts folder. \n\nFor more information, please visit the AC_tools_wiki_.\n\n\nLicense\n-------\n\nCopyright (c) 2015 `Tomas Sherwen`_\n\nThis work is licensed under a permissive MIT License.\n\nContact\n-------\n\n`Tomas Sherwen`_ - tomas.sherwen@york.ac.uk\n\n.. _`Tomas Sherwen`: http://github.com/tsherwen\n.. _conda: http://conda.pydata.org/docs/\n.. _dask: http://dask.pydata.org/\n.. _licensed: LICENSE\n.. _GEOS-Chem: http://www.geos-chem.org\n.. _xarray: http://xarray.pydata.org/\n.. _pandas: https://pandas.pydata.org/\n.. _gcpy: https://github.com/geoschem/gcpy\n.. _PyGChem: https://github.com/benbovy/PyGChem\n.. _xbpch: https://github.com/darothen/xbpch\n.. _iris: https://scitools.org.uk/iris/docs/latest/\n.. _bpch2netCDF: https://github.com/tsherwen/AC_tools/blob/master/Scripts/bpch2netCDF.py\n.. _AC_tools_wiki: https://github.com/tsherwen/AC_tools/wiki\n'",",https://doi.org/10.5281/zenodo.4789901\n\n**Atmospheric","2015/09/24, 14:16:58",2953,MIT,3,1043,"2023/03/02, 15:59:27",9,81,116,1,237,0,0.0,0.1605603448275862,"2021/08/06, 15:25:04",v0.1.3,0,5,false,,false,false,,,,,,,,,,, ACT,The Atmospheric data Community Toolkit is an open source Python toolkit for working with atmospheric time-series datasets of varying dimensions.,ARM-DOE,https://github.com/ARM-DOE/ACT.git,github,"atmospheric-science,visualization,time-series,meteorological-data,meteorology,retrieval,corrections,closember",Atmospheric Composition and Dynamics,"2023/10/13, 17:18:40",120,31,38,true,Python,ARM User Facility,ARM-DOE,"Python,TeX",https://ARM-DOE.github.io/ACT/,"b""========================================\r\nAtmospheric data Community Toolkit (ACT)\r\n========================================\r\n\r\n|AnacondaCloud| |CodeCovStatus| |Build| |Docs|\r\n\r\n|CondaDownloads| |Zenodo| |ARM|\r\n\r\n.. |AnacondaCloud| image:: https://anaconda.org/conda-forge/act-atmos/badges/version.svg\r\n :target: https://anaconda.org/conda-forge/act-atmos\r\n\r\n.. |CondaDownloads| image:: https://anaconda.org/conda-forge/act-atmos/badges/downloads.svg\r\n :target: https://anaconda.org/conda-forge/act-atmos/files\r\n\r\n.. |Zenodo| image:: https://zenodo.org/badge/DOI/10.5281/zenodo.3855537.svg\r\n :target: https://doi.org/10.5281/zenodo.3855537\r\n\r\n.. |CodeCovStatus| image:: https://codecov.io/gh/ARM-DOE/ACT/branch/main/graph/badge.svg\r\n :target: https://codecov.io/gh/ARM-DOE/ACT\r\n\r\n.. |ARM| image:: https://img.shields.io/badge/Sponsor-ARM-blue.svg?colorA=00c1de&colorB=00539c\r\n :target: https://www.arm.gov/\r\n\r\n.. |Docs| image:: https://github.com/ARM-DOE/ACT/actions/workflows/build-docs.yml/badge.svg\r\n :target: https://github.com/ARM-DOE/ACT/actions/workflows/build-docs.yml\r\n\r\n.. 
|Build| image:: https://github.com/ARM-DOE/ACT/actions/workflows/python-package-conda.yml/badge.svg\r\n :target: https://github.com/ARM-DOE/ACT/actions/workflows/python-package-conda_linux.yml\r\n\r\nThe Atmospheric data Community Toolkit (ACT) is an open source Python toolkit for working with atmospheric time-series datasets of varying dimensions. The toolkit has functions for every part of the scientific process: discovery, IO, quality control, corrections, retrievals, visualization, and analysis. It is a community platform for sharing code with the goal of reducing duplication of effort and better connecting the science community with programs such as the `Atmospheric Radiation Measurement (ARM) User Facility `_. Overarching development goals will be updated on a regular basis as part of the `Roadmap `_ .\r\n\r\n|act|\r\n\r\n.. |act| image:: ./docs/source/act_plots.png\r\n\r\n
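A minimal sketch of that workflow (the data file name here is hypothetical; the reading and plotting calls follow the ACT v1.x documentation)::

    import act
    import matplotlib.pyplot as plt

    # Read an ARM-format netCDF file into an xarray Dataset
    ds = act.io.armfiles.read_netcdf('sgpmetE13.b1.20190101.000000.cdf')

    # Quick time-series plot of one variable from the file
    display = act.plotting.TimeSeriesDisplay(ds)
    display.plot('temp_mean')
    plt.show()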
Please report any issues or feature requests by submitting an `Issue `_. Additionally, our `discussions boards `_ are open for ideas, general discussions or questions, and show and tell!\r\n\r\nAnnouncements\r\n~~~~~~~~~~~~~\r\n\r\nFollowing GitHub standards for a more open community and inclusiveness, ACT main branch will be renamed from master to main.\r\n\r\nhttps://github.com/github/renaming\r\nhttps://www.git-tower.com/learn/git/faq/git-rename-master-to-main\r\n\r\nFor those using ACT with anaconda and pip, there will be no changes. If you are using a fork of ACT, you can rename the branch to main under the branch settings on GitHub.\r\n\r\nCommands to switch naming locally can be found here:\r\nhttps://www.git-tower.com/learn/git/faq/git-rename-master-to-main\r\n\r\nImportant Links\r\n~~~~~~~~~~~~~~~\r\n\r\n* Documentation: https://arm-doe.github.io/ACT/\r\n* Examples: https://arm-doe.github.io/ACT/source/auto_examples/index.html\r\n* Issue Tracker: https://github.com/ARM-DOE/ACT/issues\r\n\r\nCiting\r\n~~~~~~\r\n\r\nIf you use ACT to prepare a publication, please cite the DOI listed in the badge above, which is updated with every version release to ensure that contributors get appropriate credit. DOI is provided through Zenodo.\r\n\r\nDependencies\r\n~~~~~~~~~~~~\r\n\r\n* `xarray `_\r\n* `NumPy `_\r\n* `SciPy `_\r\n* `matplotlib `_\r\n* `skyfield `_\r\n* `pandas `_\r\n* `dask `_\r\n* `Pint `_\r\n* `PyProj `_\r\n* `Six `_\r\n* `Requests `_\r\n* `MetPy `_\r\n* `fsspec `_\r\n* `lazy_loader `_\r\n* `cmweather `_\r\n\r\nOptional Dependencies\r\n~~~~~~~~~~~~~~~~~~~~~\r\n\r\n* `MPL2NC `_ Reading binary MPL data.\r\n* `Cartopy `_ Mapping and geoplots\r\n* `Py-ART `_ Reading radar files, plotting and corrections\r\n* `scikit-posthocs `_ Using interquartile range or generalized Extreme Studentized Deviate quality control tests\r\n* `icartt `_ icartt is an ICARTT file format reader and writer for Python\r\n* `PySP2 `_ PySP2 is a python package for reading and processing Single Particle Soot Photometer (SP2) datasets.\r\n\r\nInstallation\r\n~~~~~~~~~~~~\r\n\r\nACT can be installed a few different ways. One way is to install using pip.\r\nWhen installing with pip, the ACT dependencies found in\r\n`requirements.txt `_ will also be installed. To install using pip::\r\n\r\n pip install act-atmos\r\n\r\nThe easiest method for installing ACT is to use the conda packages from\r\nthe latest release. To do this you must download and install\r\n`Anaconda `_ or\r\n`Miniconda `_.\r\nWith Anaconda or Miniconda installed, it is recommended to create a new conda\r\nenvironment when using ACT or even other packages. To create a new\r\nenvironment based on the `environment.yml `_::\r\n\r\n conda env create -f environment.yml\r\n\r\nOr for a basic environment and downloading optional dependencies as needed::\r\n\r\n conda create -n act_env -c conda-forge python=3.11 act-atmos\r\n\r\nBasic command in a terminal or command prompt to install the latest version of\r\nACT::\r\n\r\n conda install -c conda-forge act-atmos\r\n\r\nTo update an older version of ACT to the latest release use::\r\n\r\n conda update -c conda-forge act-atmos\r\n\r\nIf you are using mamba::\r\n\r\n mamba install -c conda-forge act-atmos\r\n\r\nIf you do not wish to use Anaconda or Miniconda as a Python environment or want\r\nto use the latest, unreleased version of ACT see the section below on\r\n**Installing from source**.\r\n\r\nInstalling from Source\r\n~~~~~~~~~~~~~~~~~~~~~~\r\n\r\nInstalling ACT from source is the only way to get the latest updates and\r\nenhancements to the software that have not yet made it into a release.\r\nThe latest source code for ACT can be obtained from the GitHub repository,\r\nhttps://github.com/ARM-DOE/ACT. Either download and unpack the\r\n`zip file `_ of\r\nthe source code or use git to check out the repository::\r\n\r\n git clone https://github.com/ARM-DOE/ACT.git\r\n\r\nTo install in your home directory, use::\r\n\r\n python setup.py install --user\r\n\r\nTo install for all users on Unix/Linux::\r\n\r\n python setup.py build\r\n sudo python setup.py install\r\n\r\nDevelopment install using pip from within the ACT directory::\r\n\r\n pip install -e .\r\n\r\nContributing\r\n~~~~~~~~~~~~\r\n\r\nACT is an open source, community software project. Contributions to the\r\npackage are welcomed from all users.\r\n\r\nThe latest source code can be obtained with the command::\r\n\r\n git clone https://github.com/ARM-DOE/ACT.git\r\n\r\nIf you are planning on making changes that you would like included in ACT,\r\nforking the repository is highly recommended.\r\n\r\nWe welcome contributions for all uses of ACT, provided the code can be\r\ndistributed under the BSD 3-clause license. A copy of this license is\r\navailable in the **LICENSE.txt** file in this directory. For more on\r\ncontributing, see the `contributor's guide. `_\r\n\r\nTesting\r\n~~~~~~~\r\nFor testing, we use pytest.
To install pytest::\r\n\r\n $ conda install -c conda-forge pytest\r\n\r\nAnd for matplotlib image testing with pytest::\r\n\r\n $ conda install -c conda-forge pytest-mpl\r\n\r\nAfter installation, you can launch the test suite from outside the\r\nsource directory (you will need to have pytest installed and for the mpl\r\nargument need pytest-mpl)::\r\n\r\n $ pytest --mpl --pyargs act\r\n\r\nIn-place installs can be tested using the `pytest` command from within\r\nthe source directory.\r\n""",",https://doi.org/10.5281/zenodo.3855537\r\n\r\n","2019/03/11, 15:19:03",1689,CUSTOM,125,1731,"2023/10/13, 17:20:51",24,529,699,204,12,6,1.4,0.505223880597015,"2023/09/29, 18:36:16",v1.5.2,0,12,false,,true,true,"ME-Data-Pipeline-Software/MHKiT-Pipelines,ME-Data-Pipeline-Software/tutorial_pipelines,ME-Data-Pipeline-Software/sofar_spotter_pipelines,ME-Data-Pipeline-Software/acoustic_Doppler_pipelines,narest-qa/repo2,EVS-ATMOS/air-quality-sensors,jrwebster/HighIQ,portal-cat/nino_tsdat,Fxe/Dataverse,a2edap/ingest-oracle,CROCUS-Urban/instrument-cookbooks,tsdat/tsdat,a2edap/ingest-timestream,whitij/tsdat-mcrl,whitij/tsdat-mcrl-local,StefanoWind/tsdat_4x,a2edap/ingest-buoy,kefeimo/ncei-data-ingest-tutorial,Alex-McVey/TSDAT_Ingests,luonaaaa/tsdat-ecobee,liza183/tsdat_template_new,liza183/tsdat_template,tsdat/pipeline-template,kenkehoe/AtmosphericPythonCourse,cyclogenesis-au/opaware,ARM-Development/RadTraQ,columncolab/EMC2,rcjackson/HighIQ,ARM-DOE/PySP2,AdamTheisen/PrecipBE,AdamTheisen/AOSShipExhaustML",,https://github.com/ARM-DOE,,,,,https://avatars.githubusercontent.com/u/2540600?v=4,,, Freva,A data search and analysis platform developed by the atmospheric science community for the atmospheric science community.,freva,,custom,,Atmospheric Composition and Dynamics,,,,,,,,,,https://gitlab.dkrz.de/freva/evaluation_system,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,, MPTRAC,A Lagrangian particle dispersion model for the analysis of atmospheric transport processes in the free troposphere and stratosphere.,slcs-jsc,https://github.com/slcs-jsc/mptrac.git,github,"atmospheric-modelling,dispersion-model,stratosphere,trajectories,dispersion,atmospheric-science,climate,climate-science,troposphere,meteorology,high-performance-computing",Atmospheric Dispersion and Transport,"2023/10/23, 08:37:39",31,0,14,true,C,Simulation and Data Laboratory Climate Science,slcs-jsc,"C,Shell,Makefile",,"b'# Massive-Parallel Trajectory Calculations\n\nMassive-Parallel Trajectory Calculations (MPTRAC) is a Lagrangian particle dispersion model for the analysis of atmospheric transport processes in the free troposphere and stratosphere.\n\n![logo](https://github.com/slcs-jsc/mptrac/blob/master/docs/logo/MPTRAC_320px.png)\n\n[![release (latest by date)](https://img.shields.io/github/v/release/slcs-jsc/mptrac)](https://github.com/slcs-jsc/mptrac/releases)\n[![commits since latest release (by SemVer)](https://img.shields.io/github/commits-since/slcs-jsc/mptrac/latest)](https://github.com/slcs-jsc/mptrac/commits/master)\n[![last commit](https://img.shields.io/github/last-commit/slcs-jsc/mptrac.svg)](https://github.com/slcs-jsc/mptrac/commits/master)\n[![top language](https://img.shields.io/github/languages/top/slcs-jsc/mptrac.svg)](https://github.com/slcs-jsc/mptrac/tree/master/src)\n[![code size in 
bytes](https://img.shields.io/github/languages/code-size/slcs-jsc/mptrac.svg)](https://github.com/slcs-jsc/mptrac/tree/master/src)\n[![codacy](https://api.codacy.com/project/badge/Grade/a9de7b2239f843b884d2a4eb583726c9)](https://app.codacy.com/gh/slcs-jsc/mptrac?utm_source=github.com&utm_medium=referral&utm_content=slcs-jsc/mptrac&utm_campaign=Badge_Grade_Settings)\n[![codecov](https://codecov.io/gh/slcs-jsc/mptrac/branch/master/graph/badge.svg?token=4X6IEHWUBJ)](https://codecov.io/gh/slcs-jsc/mptrac)\n[![tests](https://img.shields.io/github/actions/workflow/status/slcs-jsc/mptrac/tests.yml?branch=master&label=tests)](https://github.com/slcs-jsc/mptrac/actions)\n[![docs](https://img.shields.io/github/actions/workflow/status/slcs-jsc/mptrac/docs.yml?branch=master&label=docs)](https://slcs-jsc.github.io/mptrac)\n[![license](https://img.shields.io/github/license/slcs-jsc/mptrac.svg)](https://github.com/slcs-jsc/mptrac/blob/master/COPYING)\n[![doi](https://zenodo.org/badge/DOI/10.5281/zenodo.4400597.svg)](https://doi.org/10.5281/zenodo.4400597)\n\n## Features\n\n* MPTRAC calculates air parcel trajectories by solving the kinematic equation of motion using given horizontal wind and vertical velocity fields from global reanalyses or forecasts.\n* Mesoscale diffusion and subgrid-scale wind fluctuations are simulated using the Langevin equation to add stochastic perturbations to the trajectories. A new inter-parcel exchange module represents mixing of air.\n* Additional modules are implemented to simulate convection, sedimentation, exponential decay, gas and aqueous phase chemistry, and wet and dry deposition.\n* Meteorological data pre-processing code provides estimates of the boundary layer, convective available potential energy, geopotential heights, potential vorticity, and tropopause data.\n* Various output methods for particle, grid, ensemble, profile, sample, and station data. 
Gnuplot and ParaView interfaces for visualization.\n* MPI-OpenMP-OpenACC hybrid parallelization and distinct code optimizations for efficient use from single workstations to HPC and GPU systems.\n* Distributed as open source under the terms of the GNU GPL.\n\n## Getting started\n\n### Prerequisites\n\nThis README file describes the installation of MPTRAC on a Linux system.\n\nThe following software dependencies are mandatory for the compilation of MPTRAC:\n\n* the [GNU make](https://www.gnu.org/software/make) build tool\n* the C compiler of the [GNU Compiler Collection (GCC)](https://gcc.gnu.org)\n* the [GNU Scientific Library (GSL)](https://www.gnu.org/software/gsl) for numerical calculations\n* the [netCDF library](http://www.unidata.ucar.edu/software/netcdf) for file-I/O\n\nOptionally, the following software is required to enable further capabilities of MPTRAC:\n\n* the distributed version control system [Git](https://git-scm.com/) to access the code repository\n* the [HDF5 library](https://www.hdfgroup.org/solutions/hdf5) to enable the netCDF4 file format\n* the [Zstandard library](https://facebook.github.io/zstd) and the [zfp library](https://computing.llnl.gov/projects/zfp) for compressed meteo data\n* the [NVIDIA HPC Software Development Kit](https://developer.nvidia.com/hpc-sdk) for GPU support\n* an MPI library such as [OpenMPI](https://www.open-mpi.org) or [ParaStation MPI](https://github.com/ParaStation/psmpi) for HPC support\n* the graphing utility [gnuplot](http://www.gnuplot.info) for visualization\n\nSome of the software is provided along with the MPTRAC repository, please see the next section.\n\n### Installation\n\nStart by downloading the most recent or any of the earlier [MPTRAC releases on GitHub](https://github.com/slcs-jsc/mptrac/releases). Unzip the release file:\n\n unzip mptrac-x.y.zip\n\nAlternatively, you can retrieve the most recent development version of the software from the GitHub repository:\n\n git clone https://github.com/slcs-jsc/mptrac.git\n\nSeveral libraries provided along with MPTRAC can be compiled and installed by running a build script:\n\n cd [mptrac_directory]/libs\n ./build.sh -a\n\nNext, change to the source directory and edit the `Makefile` according to your needs:\n\n cd [mptrac_directory]/src\n emacs Makefile\n\nIn particular, you might want to check:\n\n* Edit the `LIBDIR` and `INCDIR` paths to point to the directories where the GSL, netCDF, and other libraries are located on your system.\n\n* By default, the MPTRAC binaries will be linked statically, i.e., they can be copied and used on other machines. However, static compilation sometimes causes issues, e.g., in combination with dynamically compiled GSL and netCDF libraries or when using MPI and OpenACC. In this case, disable the `STATIC` flag and remember to set the `LD_LIBRARY_PATH` to include the paths to the shared libraries.\n\n* To make use of the MPI parallelization of MPTRAC, the `MPI` flag needs to be enabled. Further steps will require an MPI library such as OpenMPI to be available on your system. To make use of the OpenACC parallelization, the `GPU` flag needs to be enabled. The NVIDIA HPC SDK is required to compile the GPU code. The OpenMP parallelization of MPTRAC is always enabled.\n\nNext, try to compile the code:\n\n make [-j]\n\nTo run the test cases to check the installation, please use:\n\n make check\n\nThis will run sequentially through a set of tests. The execution of the tests will stop if any of the tests fails.
Please inspect the log messages.\n\n### Run the example\n\nA simple example is provided, illustrating how to simulate the dispersion of volcanic ash from the eruption of the Puyehue-Cord\xc3\xb3n Caulle volcano, Chile, in June 2011.\n\nThe example can be found in the `projects/example` subdirectory. The `projects` subdirectory can also be used to store the results of your own simulation experiments with MPTRAC.\n\nThe example simulation is controlled by a shell script:\n\n cd mptrac/projects/example\n ./run.sh\n\nPlease see the script `run.sh` on how to invoke MPTRAC programs such as `atm_init` and `atm_split` to initialize trajectory seeds and `trac` to calculate the trajectories.\n\nThe script generates simulation output in the `examples/data` subdirectory. The corresponding reference data can be found in `examples/data.ref`.\n\nA set of plots of the simulation output at different time steps after the eruption, generated by means of the `gnuplot` graphing tool, can be found in `examples/plots`. The plots should look similar to the output provided in `examples/plots.ref`.\n\nExample plots of the particle positions and grid output on 6 and 8 June 2011 are included there.\n
\n\n## Further information\n\nMore detailed information for users of MPTRAC is provided in the [user manual](https://slcs-jsc.github.io/mptrac).\n\nThese are the main scientific publications providing information on MPTRAC:\n\n* Hoffmann, L., Baumeister, P. F., Cai, Z., Clemens, J., Griessbach, S., G\xc3\xbcnther, G., Heng, Y., Liu, M., Haghighi Mood, K., Stein, O., Thomas, N., Vogel, B., Wu, X., and Zou, L.: Massive-Parallel Trajectory Calculations version 2.2 (MPTRAC-2.2): Lagrangian transport simulations on graphics processing units (GPUs), Geosci. Model Dev., 15, 2731\xe2\x80\x932762, https://doi.org/10.5194/gmd-15-2731-2022, 2022.\n\n* Hoffmann, L., T. R\xc3\xb6\xc3\x9fler, S. Griessbach, Y. Heng, and O. Stein, Lagrangian transport simulations of volcanic sulfur dioxide emissions: Impact of meteorological data products, J. Geophys. Res. Atmos., 121, 4651-4673, https://doi.org/10.1002/2015JD023749, 2016. \n\nAdditional references are collected on the [references web site](https://slcs-jsc.github.io/mptrac/references/).\n\nInformation for developers of MPTRAC is provided in the [doxygen manual](https://slcs-jsc.github.io/mptrac/doxygen).\n\n## Contributing\n\nWe are interested in supporting operational and research applications with MPTRAC.\n\nYou can submit bug reports or feature requests on the [issue tracker](https://github.com/slcs-jsc/mptrac/issues).\n\nProposed code modifications can be submitted as [pull requests](https://github.com/slcs-jsc/mptrac/pulls).\n\nPlease do not hesitate to contact us if you have any questions or need support.\n\n## License\n\nMPTRAC is distributed under the [GNU General Public License v3.0](https://github.com/slcs-jsc/mptrac/blob/master/COPYING).\n\nPlease see the [citation file](https://github.com/slcs-jsc/mptrac/blob/master/CITATION.cff) for further information on citing the MPTRAC model in scientific publications.\n\n## Contact\n\nDr. Lars Hoffmann\n\nJ\xc3\xbclich Supercomputing Centre, Forschungszentrum J\xc3\xbclich\n\ne-mail: l.hoffmann@fz-juelich.de\n'",",https://doi.org/10.5281/zenodo.4400597,https://doi.org/10.5194/gmd-15-2731-2022,https://doi.org/10.1002/2015JD023749","2019/12/27, 13:02:15",1398,GPL-3.0,379,1065,"2023/09/21, 11:18:59",7,9,14,5,34,0,0.0,0.07165109034267914,"2023/07/27, 17:42:20",v2.5,0,8,false,,false,false,,,https://github.com/slcs-jsc,http://www.fz-juelich.de/ias/jsc/slcs,"Forschungszentrum Jülich, Germany",,,https://avatars.githubusercontent.com/u/14200814?v=4,,, GRAL,A Lagrangian dispersion model with reasonable demands on computational times and sensible accuracy.,GralDispersionModel,https://github.com/GralDispersionModel/GRAL.git,github,,Atmospheric Dispersion and Transport,"2022/11/11, 16:55:15",15,0,1,true,C#,,GralDispersionModel,C#,,"b'# GRAL Dispersion Model
\nAtmospheric dispersion modeling in complex terrain and in situations with low wind speeds is still challenging. Nevertheless, air pollution has to be assessed in such environments.\nIt is therefore necessary to develop models and methods that allow for such assessments with reasonable demands on computational times and with sensible accuracy.\nThis has been the motivation for the development of the Lagrangian dispersion model GRAL at the Graz University of Technology, Institute for Internal Combustion Engines and Thermodynamics, ever since 1999.
\nDr. Dietmar Oettl from the Office of the Provincial Government of Styria (2006 - 2020) and Markus Kuntner (since 2016), Austria, are further developing the model. In 2019, the decision was made to publish GRAL as Open Source.
\n\nThe basic principle of Lagrangian models is the tracing/tracking of a multitude of fictitious particles moving on trajectories within a 3-d windfield. GRAL provides a CFD model for the flow calculation around buildings or microscale terrain structures. To take the presence of topography into account, GRAL can be linked with the prognostic wind field model GRAMM.
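In compact form, the particle-tracking principle described above reads (standard Lagrangian-model notation, not taken from the GRAL documentation):

$$\frac{d\mathbf{x}_p}{dt} = \mathbf{u}(\mathbf{x}_p, t) + \mathbf{u}'(t)$$

where $\mathbf{u}$ is the resolved 3-d wind interpolated to the particle position and $\mathbf{u}'$ is a stochastic fluctuation representing turbulence.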
\nTo speed up the calculation, GRAL is parallelized, the CFD model supports SSE/AVX vectorization, and the latest performance optimizations of the .NET framework are used.
\n\n## Validation\nGRAL has been used to calculate a large number of validation data sets, which are continuously updated and documented in the GRAL manual. The manual is available on the GRAL homepage in the download section. Individual validation projects are also published on GitHub.
\n\n## Usage\nAll input and output files are plain text and/or binary files that you can create or evaluate yourself. The file formats are documented in the GRAL manual.
\nWe are also developing a comprehensive graphical user interface (GUI) to simplify handling of the numerous input values, display the results, and allow verification of your model input and output. Our goal is to provide the user with a simple and comprehensive model-checking tool to support the quality requirements of dispersion calculations. The GUI is also free and fully published as open source.
\n\n## Built With\n* [Visual Studio 2019](https://visualstudio.microsoft.com/de/downloads/) \n* [Visual Studio Code](https://code.visualstudio.com/)\n\n## Official Release and Documentation\nThe current validated and signed GRAL version, the documentation and a recommendation guide are available at the [GRAL homepage](https://gral.tugraz.at/)\n\n## Contributing\nEveryone is invited to contribute to the project: [Contributing](Contributing.md)\n \n## Versioning\nThe version number includes the release year and the release month, e.g. 20.01.\n\n## License\nThis project is licensed under the GPL 3.0 License - see the [License](License.md) file for details\n'",,"2019/12/27, 15:05:08",1398,GPL-3.0,4,87,"2022/12/16, 19:09:34",0,30,34,3,313,0,0.0,0.012499999999999956,"2022/11/05, 09:07:19",V2209,0,1,false,,true,true,,,https://github.com/GralDispersionModel,,,,,https://avatars.githubusercontent.com/u/56819015?v=4,,, SNAP,A Lagrangian-type atmospheric dispersion model specialized in modelling the dispersion of radioactive debris.,metno,https://github.com/metno/snap.git,github,,Atmospheric Dispersion and Transport,"2023/10/12, 14:07:06",20,0,3,true,Fortran,Norwegian Meteorological Institute,metno,"Fortran,Python,Shell,Makefile,HTML,XSLT",,"b'# SNAP\n\nSNAP, the Severe Nuclear Accident Programme, is a Lagrangian-type\natmospheric dispersion model specialized in modelling the dispersion\nof radioactive debris. A model description can be found at\n[this link](https://drive.google.com/file/d/0B8SjSRklVkHkQXoxY1VQdE0wdnM/view?usp=sharing&resourcekey=0-BBP4nQlukt1M66uNzJz1BA).\n\n\n## Meteorological input fields\n\nSNAP needs meteorological driver data from NWP models in sigma or\neta-hybrid model-levels, in the netcdf format. The minimum\nlist of parameters for the surface layer is:\n\n * surface-air-pressure\n * precipitation (optionally split into convective and large-scale)\n * x- and y-wind-10m\n\nAnd for the model layers:\n\n * x- and y-wind\n * air-temperature or potential-temperature\n * ap and b hybrid level values, or sigma level values\n\nParameter names can be specified in [readfield_nc.f90](src/common/readfield_nc.f90).\n\nAn example of how to set up downloading of freely available meteorological data\nfrom the NOAA GFS model can be found under [src/naccident/examples/gfs/](./src/naccident/examples/gfs/)\n\n\n## Dependencies\n\nSNAP requires the following libraries and programs to be installed for\ncompilation:\n\n * fortran77/90 compiler, e.g. gfortran or ifort\n * NetCDF (netcdf > 4.1.1)\n * NetCDF-fortran\n * Python3 (optional)\n * git (optional)\n * fimex (optional)\n\n\n## Installation\n\nCreate a file `current.mk` in the `src` directory. Use e.g. the file\n[ubuntuXenial.mk](src/ubuntuXenial.mk)\nas a template. The most important parameters to modify are NCDIR and\nBINDIR, where the final files will be installed to.\nThe MIINC and MILIB settings should be uncommented.\n\nThen, in the `src` directory, run:\n\n```sh\nmake install\n```\n\nThis will install `bsnap_naccident` to `BINDIR`. Run SNAP using\nthe command\n\n```sh\nbsnap_naccident snap.input\n```\n\nExamples of `snap.input` can be found in the directory [src/naccident/examples/](src/naccident/examples).\n\n### Versioning\n\nThe master branch in git is used for development. Stable versions are tagged as \'vX.YY.ZZ\'. Releases should also have a DOI for citation, see https://doi.org/10.5281/zenodo.1155159. For the user-interface snappy, we use tags like \'snappy-vX.YY.ZZ\' with independent version numbers.
Other tags are used internally.\n\n\nThe build system uses automatic versioning based on git tags and revision numbers and embeds this into the resulting program. If git or python3 is unavailable, this logic can be bypassed by setting the environment variable VERSION to some value, e.g.\n```sh\nenv VERSION=""some_version_number"" make install\n```\n\n\n## License\n\n```\nSNAP: Severe Nuclear Accident Programme\nCopyright (C) 1992-2023 Norwegian Meteorological Institute\n\nSNAP is free software: you can\nredistribute it and/or modify it under the terms of the\nGNU General Public License as published by the\nFree Software Foundation, either version 3 of the License, or\n(at your option) any later version.\n\nThis program is distributed in the hope that it will be useful,\nbut WITHOUT ANY WARRANTY; without even the implied warranty of\nMERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\nGNU General Public License for more details.\n\nYou should have received a copy of the GNU General Public License\nalong with this program, i.e. see COPYING for more information.\nIf not, see <https://www.gnu.org/licenses/>.\n```\n'",",https://doi.org/10.5281/zenodo.1155159","2017/10/30, 09:34:18",2186,GPL-3.0,239,1643,"2023/10/12, 06:49:09",5,114,125,31,13,1,2.3,0.3742414025623736,"2023/09/05, 12:21:21",dry_deposition,0,6,false,,false,true,,,https://github.com/metno,https://www.met.no,Norway,,,https://avatars.githubusercontent.com/u/893264?v=4,,, GasDispersion.jl,"A set of tools for atmospheric dispersion modeling of gaseous releases, such as might occur during an emergency at a chemical plant or more routinely from a stack.",aefarrell,https://github.com/aefarrell/GasDispersion.jl.git,github,"chemical-engineering,process-safety",Atmospheric Dispersion and Transport,"2023/10/01, 20:19:57",9,0,6,true,Julia,,,Julia,,"b'# GasDispersion.jl\n[![LICENSE](https://img.shields.io/badge/license-MIT-lightgrey.svg)](https://github.com/aefarrell/GasDispersion.jl/blob/main/LICENSE)\n[![Documentation](https://img.shields.io/badge/docs-dev-blue)](https://aefarrell.github.io/GasDispersion.jl/dev/)\n[![Build Status](https://github.com/aefarrell/GasDispersion.jl/workflows/CI/badge.svg)](https://github.com/aefarrell/GasDispersion.jl/actions)\n[![Coverage](https://codecov.io/gh/aefarrell/GasDispersion.jl/branch/main/graph/badge.svg?token=PB3LOR80K2)](https://codecov.io/gh/aefarrell/GasDispersion.jl)\n\nGasDispersion.jl is a set of tools for atmospheric dispersion modeling of\ngaseous releases, such as might occur during an emergency at a chemical plant\nor more routinely from a stack. This is intended to be the level of dispersion\nmodeling used to support consequence analysis or QRA, such as is described in *Lee\'s\nLoss Prevention in the Process Industries* or the CCPS *Guidelines for\nConsequence Analysis of Chemical Releases*.\n\n## Installation\n\nGasDispersion.jl can be installed using Julia\'s built-in package manager. In a\nJulia session, enter the package manager mode by hitting `]`, then run the\ncommand\n\n```julia\npkg> add GasDispersion\n```\n\n\n## Example usage\n\nThis scenario is adapted from the CCPS *Guidelines for Consequence Analysis of\nChemical Releases*, pg. 47.\n\nSuppose we wish to model the dispersion of gaseous propane from a leak in a storage tank, where the leak is from a 10 mm hole that is 3.5 m above the ground and the propane is at 25°C and 4 barg. Assume the discharge coefficient $c_{D} = 0.85$.
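\n\nNote that these storage conditions correspond to the absolute values used in the snippet below: an absolute pressure of $400000 + 101325 = 501325$ Pa and a temperature of $25 + 273.15 = 298.15$ K.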
\n\nFor ambient conditions we assume the atmosphere is dry air at standard conditions \nof 1 atm and 25°C, with a wind speed of 1.5 m/s and class F stability (a ""worst case""\natmospheric stability), which is the default atmosphere if nothing else is specified.\n\n\n```julia\nusing GasDispersion\n\npropane = Substance(name = :propane,\n gas_density = 9.7505, # kg/m^3, NIST Webbook\n liquid_density = 526.13, # kg/m^3, NIST Webbook\n reference_temp= 298.15, # K\n reference_pressure= 501325, # Pa\n boiling_temp = 231.04, # K, NIST Webbook\n latent_heat = 16.25/44.0956, # J/kg, NIST Webbook\n gas_heat_capacity = 1.6849, # J/kg/K, NIST Webbook\n liquid_heat_capacity = 2.2460) # J/kg/K, NIST Webbook\n\nscn = scenario_builder(propane, JetSource; \n phase = :gas,\n diameter = 0.01, # m\n dischargecoef = 0.85,\n k = 1.15, # heat capacity ratio, from Crane\'s\n temperature = 298.15, # K\n pressure = 501325, # Pa\n height = 3.5) # m, height of hole above the ground\n```\n\nThis generates a `Scenario` defined for a gas jet discharging into dry air\nat standard conditions. Once we have this defined we can determine the\nconcentration at any point downwind of the release point, assuming the release\nis a continuous plume, using\n\n```julia\n# returns a callable\np = plume(scn, GaussianPlume)\n\np(x,y,z) # gives the concentration in kg/m^3 at the point x, y, z\n```\n'",,"2021/09/04, 17:06:24",781,MIT,62,118,"2023/10/01, 20:18:08",3,16,25,16,24,0,0.0,0.0,"2023/10/01, 20:43:57",v0.1.1,0,1,false,,false,false,,,,,,,,,,, CloudDrift,"Accelerates the use of Lagrangian data for atmospheric, oceanic, and climate sciences.",Cloud-Drift,https://github.com/Cloud-Drift/clouddrift.git,github,"climate-data,climate-science,data-structures,oceanography,python",Atmospheric Dispersion and Transport,"2023/10/25, 15:12:13",25,5,17,true,Python,CloudDrift,Cloud-Drift,Python,https://clouddrift.org/,"b'# CloudDrift\n![CI](https://github.com/Cloud-Drift/clouddrift/workflows/CI/badge.svg)\n[![Documentation Status](https://github.com/Cloud-Drift/clouddrift/actions/workflows/docs.yml/badge.svg)](https://cloud-drift.github.io/clouddrift)\n[![codecov](https://codecov.io/gh/Cloud-Drift/clouddrift/branch/main/graph/badge.svg)](https://codecov.io/gh/Cloud-Drift/clouddrift/)\n[![Binder](https://mybinder.org/badge_logo.svg)](https://mybinder.org/v2/gh/Cloud-Drift/clouddrift-examples/main?labpath=notebooks)\n[![Available on conda-forge](https://anaconda.org/conda-forge/clouddrift/badges/version.svg?style=flat-square)](https://anaconda.org/conda-forge/clouddrift/)\n[![Available on pypi](https://img.shields.io/pypi/v/clouddrift.svg?style=flat-square&color=blue)](https://pypi.org/project/clouddrift/)\n[![Code style: black](https://img.shields.io/badge/code%20style-black-000000.svg)](https://github.com/psf/black)\n[![NSF-2126413](https://img.shields.io/badge/NSF-2126413-blue.svg)](https://nsf.gov/awardsearch/showAward?AWD_ID=2126413)\n[![Hits](https://hits.seeyoufarm.com/api/count/incr/badge.svg?url=https%3A%2F%2Fgithub.com%2FCloud-Drift%2Fclouddrift&count_bg=%2368C563&title_bg=%23555555&icon=&icon_color=%23E7E7E7&title=hits&edge_flat=false)](https://hits.seeyoufarm.com)\n\nCloudDrift is a Python package that accelerates the use of Lagrangian data for atmospheric, oceanic, and climate sciences.\nIt is funded by [NSF EarthCube](https://www.earthcube.org/info) through the\n[EarthCube Capabilities Grant No. 2126413](https://www.nsf.gov/awardsearch/showAward?AWD_ID=2126413).
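\n\nMuch of the package is organised around contiguous ragged arrays, which pack observations from many trajectories of unequal length into flat arrays together with a per-trajectory row size (see the scope list below). Conceptually, in plain NumPy (a sketch of the data structure only, not of the CloudDrift API):\n\n```python\nimport numpy as np\n\n# all observations from every trajectory, concatenated into one flat array\nobs = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])\n# number of observations belonging to each trajectory\nrowsize = np.array([2, 4])\n\n# recover the individual trajectories\ntrajectories = np.split(obs, np.cumsum(rowsize)[:-1])\nprint(trajectories) # [array([1., 2.]), array([3., 4., 5., 6.])]\n```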
\n\nRead the [documentation](https://cloud-drift.github.io/clouddrift) or explore\nthe [Jupyter Notebook Examples](https://github.com/Cloud-Drift/clouddrift-examples).\n\n## Using CloudDrift\n\nStart by reading the [documentation](https://cloud-drift.github.io/clouddrift).\n\nExample Jupyter notebooks that showcase the library, as well as scripts\nto process various Lagrangian datasets, can be found in\n[clouddrift-examples](https://github.com/Cloud-Drift/clouddrift-examples), [gdp-get-started](https://github.com/Cloud-Drift/gdp-get-started), [mosaic-get-started](https://github.com/Cloud-Drift/mosaic-get-started), or [a demo for the EarthCube community workshop 2023](https://github.com/Cloud-Drift/e3-comm-workshop-2023).\n\n## Contributing and scope\n\nWe welcome contributions from the community.\nIf you would like to propose an idea for a new feature or contribute your own\nimplementation, please follow these steps:\n\n1. Open a new [issue](https://github.com/Cloud-Drift/clouddrift/issues) to discuss your proposal.\n2. Once we agree on a general way forward, [fork the repository](https://docs.github.com/en/github-ae@latest/get-started/quickstart/fork-a-repo) and [create a\n new branch](https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/proposing-changes-to-your-work-with-pull-requests/creating-and-deleting-branches-within-your-repository) for your contribution.\n3. Write your code and [tests](https://docs.github.com/en/actions/automating-builds-and-tests). Please follow the same style as the rest of the\n codebase and ensure that all new functionality is covered by your tests.\n4. Open a pull request and request a review.\n\nThe scope of CloudDrift includes:\n\n* Working with contiguous ragged-array data; for example, see the\n [`clouddrift.ragged`](https://cloud-drift.github.io/clouddrift/_autosummary/clouddrift.ragged.html) module.\n* Common scientific analysis of Lagrangian data, oceanographic or otherwise;\n for example, see the\n [`clouddrift.kinematics`](https://cloud-drift.github.io/clouddrift/_autosummary/clouddrift.kinematics.html),\n [`clouddrift.signal`](https://cloud-drift.github.io/clouddrift/_autosummary/clouddrift.signal.html), and\n [`clouddrift.wavelet`](https://cloud-drift.github.io/clouddrift/_autosummary/clouddrift.wavelet.html) modules.\n* Processing existing Lagrangian datasets into a common data structure and format;\n for example, see the [`clouddrift.adapters.mosaic`](https://cloud-drift.github.io/clouddrift/_autosummary/clouddrift.adapters.mosaic.html) module.\n* Making cloud-optimized ragged-array datasets easily accessible; for example,\n see the [`clouddrift.datasets`](https://cloud-drift.github.io/clouddrift/_autosummary/clouddrift.datasets.html) module.\n\nIf you have an idea that does not fit into the scope of CloudDrift but you think\nit should, please open an issue to discuss it.\n\n## Getting started\n\n### Install CloudDrift\n\nYou can install the latest release of CloudDrift using [pip](https://pypi.org/project/clouddrift/) or [conda](https://anaconda.org/conda-forge/clouddrift).\n\n#### Latest official release:\n##### pip:\n\nIn your virtual environment, type:\n\n```\npip install clouddrift\n```\n\n##### Conda:\n\nFirst add `conda-forge` to your channels in your Conda configuration (`~/.condarc`):\n\n```\nconda config --add channels conda-forge\nconda config --set channel_priority strict\n```\n\nthen install CloudDrift:\n\n```\nconda install 
clouddrift\n```\n\n#### Development branch:\n\nIf you need the latest development version, you can install it directly from this GitHub repository.\n\n##### pip:\n\nIn your virtual environment, type:\n\n```\npip install git+https://github.com/cloud-drift/clouddrift\n```\n\n##### Conda:\n```\nconda env create -f environment.yml\n```\nwith the environment [file](https://github.com/Cloud-Drift/clouddrift/blob/main/environment.yml) located in the main repository.\n\n### Run the tests\n\nTo run the tests, you need to first download the CloudDrift source code from\nGitHub:\n\n```\ngit clone https://github.com/cloud-drift/clouddrift\ncd clouddrift/\n```\n\nand create the virtual environment.\n\nWith pip:\n\n```\npython3 -m venv .venv\nsource .venv/bin/activate\npip install .\n```\n\nWith Conda:\n\n```\nconda env create -f environment.yml\nconda activate clouddrift\n```\n\nThen, run the tests like this:\n\n```\npython -m unittest tests/*.py\n```\n\n### Installing CloudDrift on unsupported platforms\n\nOne or more dependencies of CloudDrift may not have pre-built wheels for\nplatforms like IBM Power9 or Raspberry Pi.\nIf you are using pip to install CloudDrift and are getting errors during the\ninstallation step, try installing CloudDrift using Conda.\nIf you still have issues installing CloudDrift, you may need to install system\ndependencies first.\nPlease let us know by opening an\n[issue](https://github.com/Cloud-Drift/clouddrift/issues/new) and we will do our\nbest to help you.\n\n## Found an issue or need help?\n\nPlease create a new issue [here](https://github.com/Cloud-Drift/clouddrift/issues/new)\nand provide as much detail as possible about your problem or question.\n'",,"2021/11/03, 14:43:06",721,MIT,169,280,"2023/10/23, 13:16:19",27,151,255,226,2,2,1.8,0.5358361774744027,"2023/10/09, 18:04:36",v0.24.0,0,3,false,,false,false,"narest-qa/repo70,philippemiron/clouddrift-examples,selipot/clouddrift-examples,Cloud-Drift/clouddrift-examples,milancurcic/clouddrift-examples",,https://github.com/Cloud-Drift,https://clouddrift.org,United States of America,,,https://avatars.githubusercontent.com/u/91622877?v=4,,, IPART,"A Python package for automated Atmospheric River (AR) detection, axis finding and AR tracking from gridded Integrated Vapor Transport data.",ihesp,https://github.com/ihesp/IPART.git,github,,Atmospheric Dispersion and Transport,"2023/07/12, 06:42:36",17,0,11,true,Python,,ihesp,"Python,Jupyter Notebook,TeX",https://ipart.readthedocs.io/en/latest/,"b'# Image-Processing based Atmospheric River Tracking (IPART) algorithms\n\n[![DOI](https://joss.theoj.org/papers/10.21105/joss.02407/status.svg)](https://doi.org/10.21105/joss.02407)\n\n## Introduction\n\nIPART (Image-Processing based Atmospheric River Tracking) is a Python package\nfor automated Atmospheric River (AR) detection, axis finding and AR tracking\nfrom gridded Integrated Vapor Transport (IVT) data, for instance Reanalysis\ndatasets, or model simulations.\n\nIPART is intended for researchers and students who are interested in the\nfield of atmospheric river studies in the present day climate or future\nprojections. 
Unlike the conventional detection methods that rely on magnitude\nthresholding on the intensities of atmospheric vapor fluxes, IPART tackles the\ndetection task from a spatio-temporal scale perspective and is thus\nfree from magnitude thresholds.\n\n## Documentation\n\nFurther documentation can be found at [https://ipart.readthedocs.io/en/latest/](https://ipart.readthedocs.io/en/latest/).\nA description of the methods is given in this work: [Xu, G., Ma, X., Chang, P., and Wang, L.: Image-processing-based atmospheric river tracking method version 1 (IPART-1), Geosci. Model Dev., 13, 4639–4662, https://doi.org/10.5194/gmd-13-4639-2020, 2020.](https://doi.org/10.5194/gmd-13-4639-2020).\n\n\n## Example use case\n\n\n| ![fig3](joss/fig3.png) |\n| :--: |\n|*(a) The IVT field in kg/m/s at 1984-01-26 00:00 UTC over the Northern Hemisphere. (b) the IVT reconstruction field (IVT_rec) at the same time point. (c) the IVT anomaly field (IVT_ano) from the THR process at the same time point.*|\n\n| ![](joss/ar_track_198424.png) |\n| :--: |\n|*Locations of a track labelled ""198424"" found in year 1984. Black to yellow color scheme indicates the evolution.*|\n\n\n\n## Dependencies\n\n* Python 2.7 or Python 3.7.\n* netCDF4 (tested 1.4.2, 1.5.3 in py2, tested 1.5.3 in py3)\n* numpy (developed in 1.16.5 in py2, tested 1.18.1, 1.19.0 in py3)\n* scipy (developed in 1.2.1 in py2, tested 1.4.1, 1.5.1 in py3)\n* matplotlib (tested 2.2.5 in py2, tested 3.3.1 in py3)\n* pandas (developed in 0.23.4, 0.24.2 in py2, tested 1.0.3, 1.0.5 in py3)\n* networkx (developed in 1.11 and 2.2 in py2, tested 2.4 in py3)\n* scikit-image (developed in 0.14.2, 0.14.3 in py2, tested 0.16.2, 0.17.2 in py3)\n* cartopy (optional, only used for plotting. Tested 0.17.0 in py2, tested 1.18.0 in py3)\n* opencv (optional but recommended. Tested 4.5.5 in py3. Used to speed up some computations, new in v3.2.0)\n* OS: Linux or Mac, may work on Windows.\n\n## Installation\n\nWe recommend building the Python environment using [Anaconda](https://www.anaconda.com/distribution/).\n\n\n### Install from conda-forge\n\nIn your working Python environment:\n\n```\nconda install -c conda-forge ipart\n```\n\nwill install `ipart` and its dependencies for Python 3.\n\n\n### Create conda environment using environment file\n\nThis approach will install the optional `cartopy` package and allow you to run\nthe notebook examples.\n\nAfter Anaconda installation, git clone this repository:\n\n```\ngit clone https://github.com/ihesp/IPART\n```\n\nThen build a new conda environment using the environment file provided. For example:\n\n```\ncd IPART\nconda env create -f environment_py3.yml\n```\n\nThis creates a new environment named `ipartpy3`. 
Activate the environment using\n\n```\nconda activate ipartpy3\n```\n\nAfter that, you can check the list of packages installed by\n\n```\nconda list\n```\n\nSimilarly for Python 2.7, use\n\n```\nconda env create -f environment_py2.yml\n```\n\nFinally install IPART using:\n\n```\npip install -e .\n```\n\n\n## Tests\n\nTo validate the installation, start a new Python session and run\n\n```\nimport ipart\n```\n\nIf nothing prints out, the installation is successful.\n\nThe `tests` folder also contains a number of `unittest`s; to run them (only if you have done a source-code install):\n\n```\npython -m unittest discover -s tests\n```\n\n\n\n## Inventory\n\n* docs: readthedocs documentation.\n* ipart: core module functions.\n* notebooks: a series of Jupyter notebooks illustrating the major functionalities of the package.\n* scripts: example computation scripts. Can be used as templates to quickly develop your own working scripts.\n\n\n## Changelog\n\n### v3.5.0\n\nMinor fix:\n\n* When data resolution is higher than 1.0 degree, put axis-finding using a down-sampled AR mask in a `try` block. If it fails, revert to axis-finding using the original resolution.\n\n### v3.4.0\n\nMinor fixes:\n\n* fix a bug in latitudinal range filtering when data cover both the Northern and Southern Hemispheres.\n* more robust handling of zonally cyclic data.\n* (related to a change in v3.3.0) a better way to prevent potential [matplotlib memory leaking](https://github.com/matplotlib/matplotlib/issues/20490).\n\n\n### v3.3.0\n\n* Minor fixes\n\nUse `agg` backend of `matplotlib` in `utils/funcs.py` to prevent [memory leaking](https://github.com/matplotlib/matplotlib/issues/20490).\n\nAllow specifying the calendar type (e.g. `noleap`) when reading netCDF data using `readNC()`:\n`readNC(data_path, varid, calendar=\'noleap\')`.\n\n### v3.2.0\n\n* Speed optimization for the AR detection task.\n\nFor computations in `scripts/detect_ARs.py` and\n`scripts/detect_ARs_generator_version.py`, expect to see a 200 - 300 % speed up\n(only when the data resolution is higher than 1.0 degree latitude/longitude).\n\nIf the `opencv` module is also installed, up to 300 - 500 % speed gain\n(tested with 0.25 degree resolution data).\n\n### v3.0\n\nMake algorithms zonally cyclic.\n\n### v2.0\n\n* restructure into a module `ipart`, separating the module from the scripts.\n* add a `findARsGen()` generator function to yield results at each time point separately.\n\n### v1.0\n\n* initial upload. Can perform AR detection and tracing through time.\n\n\n\n## Contributing\n\nFollowing the guidelines by the [Neurohackademy 2020 curriculum](https://github.com/neurohackademy/nh2020-curriculum), we welcome\ncontributions from the community. Please create a fork of the project on GitHub\nand use a pull request to propose your changes. 
We strongly encourage creating\nan issue before starting to work on major changes, to discuss these changes\nfirst.\n\n## Citation\n\nIf you use `IPART` in published research, please cite it by referencing the\n[peer-reviewed work published in JOSS](https://doi.org/10.21105/joss.02407).\n\n## Getting help\n\nPlease post issues on the project GitHub page.\n'",",https://doi.org/10.21105/joss.02407,https://doi.org/10.5194/gmd-13-4639-2020,https://doi.org/10.5194/gmd-13-4639-2020,https://doi.org/10.21105/joss.02407","2020/01/02, 15:38:55",1392,GPL-3.0,2,177,"2021/11/12, 07:25:28",7,9,15,0,712,0,0.0,0.08484848484848484,"2023/07/13, 06:59:54",v3.6.1,0,5,false,,false,false,,,https://github.com/ihesp,,,,,https://avatars.githubusercontent.com/u/48726625?v=4,,, GEOS-Chem,"Advance understanding of human and natural influences on the environment through a comprehensive, state-of-the-science, readily accessible global model of atmospheric composition.",geoschem,https://github.com/geoschem/geos-chem.git,github,"cloud-computing,atmospheric-modelling,aws,scientific-computing,bash-script,configuration,integration-tests,greenhouse-gases,aerosols,atmospheric-chemistry,atmospheric-composition,carbon-cycle,climate,mercury,particulate-matter,atmospheric-chemistry-modeling,fortran,hg,run-directory,earth-system-modeling",Atmospheric Chemistry and Aerosol,"2023/10/23, 15:38:15",146,0,31,true,Fortran,GEOS-Chem,geoschem,"Fortran,Shell,C,Python,CMake,Makefile,Perl",http://geos-chem.org,"b'[![Release](https://img.shields.io/github/v/release/geoschem/geos-chem?label=Latest%20Release)](http://wiki.geos-chem.org/GEOS-Chem_versions)\n[![DOI](https://zenodo.org/badge/DOI/10.5281/zenodo.1343546.svg)](https://doi.org/10.5281/zenodo.1343546)\n[![License](https://img.shields.io/badge/License-MIT-blue.svg)](https://github.com/geoschem/geos-chem/blob/master/LICENSE.txt)\n\n## Description\n\nThis repository contains the __GEOS-Chem science codebase__. Included in this repository are:\n\n * The source code for GEOS-Chem science routines;\n * Scripts to create GEOS-Chem run directories;\n * Template configuration files that specify run-time options;\n * Scripts to run GEOS-Chem tests;\n * Driver routines (e.g. `main.F90`) that enable GEOS-Chem to be run in several different implementations (as GEOS-Chem ""Classic"", as GCHP, etc.)\n\n### Version 12.9.3 and prior\n\nGEOS-Chem 12.9.3 was the last version in which this ""Science Codebase"" repository was used in a standalone manner.\n\n### Version 13.0.0 and later\n\nGEOS-Chem 13.0.0 and later versions use this ""Science Codebase"" repository as a submodule within the [GCClassic](https://github.com/geoschem/GCClassic) and [GCHP](https://github.com/geoschem/GCHP) repositories.\n\nReleases for GEOS-Chem 13.0.0 and later versions will be issued at the [GCClassic](https://github.com/geoschem/GCClassic) and [GCHP](https://github.com/geoschem/GCHP) Github repositories. We will also tag and release the corresponding versions at this repository for the sake of completeness.\n\n## User Manuals\n\nEach implementation of GEOS-Chem has its own manual page. 
For more information, please see:\n\n* __GEOS-Chem ""Classic"":__ [https://geos-chem.readthedocs.io](https://geos-chem.readthedocs.io)\n\n* __GCHP:__ [https://gchp.readthedocs.io](https://gchp.readthedocs.io)\n\n* __WRF-GC:__ [http://wrf.geos-chem.org](http://wrf.geos-chem.org)\n\n* __Other documentation:__ [View related documentation @ ReadTheDocs](https://geos-chem.readthedocs.io/en/latest/geos-chem-shared-docs/supplemental-guides/related-docs.html)\n\n## About GEOS-Chem\n\nGEOS-Chem is a global 3-D model of atmospheric chemistry driven by meteorological input from the Goddard Earth Observing System (GEOS) of the [NASA Global Modeling and Assimilation Office](http://gmao.gsfc.nasa.gov/). It is applied by [research groups around the world](http://geos-chem.org/people.html) to a wide range of atmospheric composition problems. Scientific direction of the model is provided by the international [GEOS-Chem Steering Committee](http://geos-chem.org/steering-committee.html) and by [User Working Groups](http://geos-chem.org/working-groups.html). The model is managed by the [GEOS-Chem Support Team](http://geos-chem.org/support-team.html), based at Harvard University and Washington University, with support from the US NASA Earth Science Division, the Natural Sciences and Engineering Research Council of Canada, and the Nanjing University of Information Science and Technology.\n\nGEOS-Chem is a grass-roots open-access model owned by its [users](http://geos-chem.org/people.html), and ownership implies some responsibilities as listed in our [welcome page for new users](http://geos-chem.org/welcome.html).\n'",",https://doi.org/10.5281/zenodo.1343546","2018/06/19, 16:55:12",1954,CUSTOM,804,9845,"2023/10/23, 20:49:46",123,482,1792,557,2,21,1.2,0.5509245670678016,"2023/10/23, 16:17:18",14.2.2,0,50,false,,false,false,,,https://github.com/geoschem,http://www.geos-chem.org,International,,,https://avatars.githubusercontent.com/u/8321017?v=4,,, gcpy,A Python-based toolkit containing useful functions for working specifically with the GEOS-Chem model of atmospheric chemistry and composition.,geoschem,https://github.com/geoschem/gcpy.git,github,"python,cloud-computing,visualization-tools,geos-chem,atmospheric-modelling,atmospheric-chemistry,scientific-computing,plotting-in-python,benchmarking,python-toolkit,xarray,cartopy,plots,numpy",Atmospheric Chemistry and Aerosol,"2023/03/09, 16:58:43",46,0,9,true,Python,GEOS-Chem,geoschem,"Python,Jupyter Notebook,Shell,Dockerfile",https://gcpy.readthedocs.io,"b""[![License](https://img.shields.io/badge/License-MIT-blue.svg)](https://github.com/geoschem/gcpy/blob/master/LICENSE.txt)\n\n# GCPy: Python toolkit for GEOS-Chem\n\n**GCPy** is a Python-based toolkit containing useful functions for working specifically with the GEOS-Chem model of atmospheric chemistry and composition.\n\nGCPy aims to build on the well-established scientific Python technical stack, leveraging tools like cartopy and xarray to simplify the task of working with model output and performing atmospheric chemistry analyses.\n\n\n\n## What GCPy was intended to do:\n\n1. Produce plots and tables from GEOS-Chem output using simple function calls (see the sketch after this list).\n2. Generate the standard evaluation plots and tables from GEOS-Chem benchmark output.\n3. Obtain GEOS-Chem's horizontal/vertical grid information.\n4. Implement GCHP-specific regridding functionalities (e.g. cubed-sphere to lat-lon regridding).\n5. Provide example scripts for creating specific types of plots or analysis from GEOS-Chem output.
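\n\nAs a flavor of that stack, GEOS-Chem diagnostic output can be opened and plotted with xarray and matplotlib directly (a minimal sketch rather than GCPy's own plotting functions; the file and variable names here are hypothetical):\n\n```python\nimport matplotlib.pyplot as plt\nimport xarray as xr\n\n# open a (hypothetical) GEOS-Chem species-concentration diagnostics file\nds = xr.open_dataset('GEOSChem.SpeciesConc.20190701_0000z.nc4')\n\n# plot the surface level of one species at the first output time\nds['SpeciesConc_O3'].isel(time=0, lev=0).plot()\nplt.show()\n```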
\n\n## What GCPy was not intended to do:\n\n1. General NetCDF file modification (crop a domain, extract some variables):\n * Use [xarray](http://xarray.pydata.org) instead.\n * Also see [our *Working with netCDF data files* wiki page](http://wiki.geos-chem.org/Working_with_netCDF_data_files).\n2. Statistical analysis:\n * Use [scipy](http://www.scipy.org)/[scikit-learn](https://scikit-learn.org) tools instead.\n3. Machine Learning:\n * Use the standard machine learning utilities ([pytorch](https://pytorch.org), [tensorflow](https://www.tensorflow.org), [julia](https://julialang.org), etc.)\n\n\n## Documentation:\n\nFor more information on installing and using GCPy, visit the official documentation at [gcpy.readthedocs.io](https://gcpy.readthedocs.io/).\n\n\n## License\n\nGCPy is distributed under the MIT license. Please read the license documents LICENSE.txt and AUTHORS.txt, which are located in the root folder.\n\n\n## Contact\n\nTo contact us, please [open a new issue on the issue tracker connected to this repository](https://github.com/geoschem/gcpy/issues/new/choose). You can ask a question, report a bug, or request a new feature.\n""",,"2018/07/26, 14:43:02",1917,CUSTOM,66,1205,"2023/10/24, 17:22:47",25,70,241,90,1,1,3.8,0.64030131826742,"2023/03/09, 17:12:30",1.3.3,0,11,false,,false,false,,,https://github.com/geoschem,http://www.geos-chem.org,International,,,https://avatars.githubusercontent.com/u/8321017?v=4,,, PartMC,Particle-resolved Monte Carlo code for atmospheric aerosol simulation.,compdyn,https://github.com/compdyn/partmc.git,github,,Atmospheric Chemistry and Aerosol,"2023/08/23, 01:48:57",26,0,8,true,Python,,compdyn,"Python,Fortran,Gnuplot,Shell,HTML,C,CMake,TeX,Dockerfile",http://lagrange.mechse.illinois.edu/partmc/,"b'![PartMC](https://raw.githubusercontent.com/wiki/compdyn/partmc/logo.svg)\n======\n\nPartMC: Particle-resolved Monte Carlo code for atmospheric aerosol simulation\n\n[![Latest version](https://img.shields.io/github/tag/compdyn/partmc.svg?label=version)](https://github.com/compdyn/partmc/blob/master/ChangeLog.md) [![Docker build status](https://img.shields.io/docker/automated/compdyn/partmc.svg)](https://hub.docker.com/r/compdyn/partmc/builds/) [![Github Actions Status](https://github.com/compdyn/partmc/workflows/CI/badge.svg?branch=master)](https://github.com/compdyn/partmc/actions/workflows/main.yml) [![License](https://img.shields.io/github/license/compdyn/partmc.svg)](https://github.com/compdyn/partmc/blob/master/COPYING) [![DOI](https://zenodo.org/badge/24058992.svg)](https://zenodo.org/badge/latestdoi/24058992)\n\nVersion 2.7.0 \nReleased 2023-08-11\n\n**Source:** \n\n**Homepage:** \n\n**Cite as:** M. West, N. Riemer, J. Curtis, M. Michelotti, and J. Tian (2023) PartMC, [![version](https://img.shields.io/github/release/compdyn/partmc.svg?label=version)](https://github.com/compdyn/partmc), [![DOI](https://zenodo.org/badge/24058992.svg)](https://zenodo.org/badge/latestdoi/24058992)\n\nCopyright (C) 2005-2022 Nicole Riemer and Matthew West \nPortions copyright (C) Andreas Bott, Richard Easter, Jeffrey Curtis,\nMatthew Michelotti, and Jian Tian \nLicensed under the GNU General Public License version 2 or (at your\noption) any later version. \nFor details see the file COPYING or\n.\n\n**References:**\n\n * N. Riemer, M. West, R. A. Zaveri, and R. C. Easter (2009)\n Simulating the evolution of soot mixing state with a\n particle-resolved aerosol model, _J. Geophys. 
Res._ 114(D09202),\n .\n * N. Riemer, M. West, R. A. Zaveri, and R. C. Easter (2010)\n Estimating black carbon aging time-scales with a\n particle-resolved aerosol model, _J. Aerosol Sci._ 41(1),\n 143-158, .\n * R. A. Zaveri, J. C. Barnard, R. C. Easter, N. Riemer, and M. West\n (2010) Particle-resolved simulation of aerosol size, composition,\n mixing state, and the associated optical and cloud condensation\n nuclei activation properties in an evolving urban plume,\n _J. Geophys. Res._ 115(D17210),\n .\n * R. E. L. DeVille, N. Riemer, and M. West (2011) Weighted Flow\n Algorithms (WFA) for stochastic particle coagulation,\n _J. Comp. Phys._ 230(23), 8427-8451,\n \n * J. Ching, N. Riemer, and M. West (2012) Impacts of black carbon\n mixing state on black carbon nucleation scavenging: Insights from\n a particle-resolved model, _J. Geophys. Res._ 117(D23209),\n \n * M. D. Michelotti, M. T. Heath, and M. West (2013) Binning for\n efficient stochastic multiscale particle simulations, _Multiscale\n Model. Simul._ 11(4), 1071-1096,\n \n * N. Riemer and M. West (2013) Quantifying aerosol mixing state\n with entropy and diversity measures, _Atmos. Chem. Phys._ 13,\n 11423-11439, \n * J. Tian, N. Riemer, M. West, L. Pfaffenberger, H. Schlager, and\n A. Petzold (2014) Modeling the evolution of aerosol particles in\n a ship plume using PartMC-MOSAIC, _Atmos. Chem. Phys._ 14,\n 5327-5347, \n * R. M. Healy, N. Riemer, J. C. Wenger, M. Murphy, M. West,\n L. Poulain, A. Wiedensohler, I. P. O\'Connor, E. McGillicuddy,\n J. R. Sodeau, and G. J. Evans (2014) Single particle diversity\n and mixing state measurements, _Atmos. Chem. and Phys._ 14,\n 6289-6299, \n * J. H. Curtis, M. D. Michelotti, N. Riemer, M. Heath, and M. West\n (2016) Accelerated simulation of stochastic particle removal\n processes in particle-resolved aerosol models, _J. Comp. Phys._\n 322, 21-32, \n * J. Ching, N. Riemer, and M. West (2016) Black carbon mixing state\n impacts on cloud microphysical properties: Effects of aerosol\n plume and environmental conditions, _J. Geophys. Res._ 121(10),\n 5990-6013, \n * J. Ching, J. Fast, M. West, and N. Riemer (2017) Metrics to\n quantify the importance of mixing state for CCN activity, _Atmos.\n Chem. and Phys._ 17, 7445-7458,\n \n * J. Tian, B. T. Brem, M. West, T. C. Bond, M. J. Rood, and\n N. Riemer (2017) Simulating aerosol chamber experiments with the\n particle-resolved aerosol model PartMC, _Aerosol Sci. Technol._\n 51(7), 856-867, \n * J. H. Curtis, N. Riemer, and M. West (2017) A single-column\n particle-resolved model for simulating the vertical distribution\n of aerosol mixing state: WRF-PartMC-MOSAIC-SCM v1.0,\n _Geosci. Model Dev._ 10, 4057-4079,\n \n * J. Ching, M. West, and N. Riemer (2018) Quantifying impacts of\n aerosol mixing state on nucleation-scavenging of black carbon\n aerosol particles, _Atmosphere_ 9(1), 17,\n \n * M. Hughes, J. K. Kodros, J. R. Pierce, M. West, and N. Riemer\n (2018) Machine learning to predict the global distribution of\n aerosol mixing state metrics, _Atmosphere_ 9(1), 15,\n \n * R. E. L. DeVille, N. Riemer, and M. West (2019) Convergence of a\n generalized Weighted Flow Algorithm for stochastic particle\n coagulation, _Journal of Computational Dynamics_ 6(1), 69-94,\n \n * N. Riemer, A. P. Ault, M. West, R. L. Craig, and J. H. Curtis\n (2019) Aerosol mixing state: Measurements, modeling, and impacts,\n _Reviews of Geophysics_ 57(2), 187-249,\n \n * C. Shou, N. Riemer, T. B. Onasch, A. J. Sedlacek, A. T. Lambe,\n E. R. Lewis, P. Davidovits, and M. 
West (2019) Mixing state\n evolution of agglomerating particles in an aerosol chamber:\n Comparison of measurements and particle-resolved simulations,\n _Aerosol Science and Technology_ 53(11), 1229-1243,\n \n * J. T. Gasparik, Q. Ye, J. H. Curtis, A. A. Presto, N. M. Donahue,\n R. C. Sullivan, M. West, and N. Riemer (2020) Quantifying errors\n in the aerosol mixing-state index based on limited particle\n sample size, _Aerosol Science and Technology_ 54(12), 1527-1541,\n \n * Z. Zheng, J. H. Curtis, Y. Yao, J. T. Gasparik, V. G. Anantharaj,\n L. Zhao, M. West, and N. Riemer (2021) Estimating submicron\n aerosol mixing state at the global scale with machine learning\n and earth system modeling, _Earth and Space Science_ 8(2),\n e2020EA001500, \n\n\nRunning PartMC with Docker\n==========================\n\nThis is the fastest way to get running.\n\n* **_Step 1:_** Install [Docker Community Edition](https://www.docker.com/community-edition).\n * On Linux and MacOS this is straightforward. [Download from here](https://store.docker.com/search?type=edition&offering=community).\n * On Windows the best version is [Docker Community Edition for Windows](https://store.docker.com/editions/community/docker-ce-desktop-windows), which requires Windows 10 Pro/Edu.\n\n* **_Step 2:_** (Optional) Run the PartMC test suite with:\n\n docker run -it --rm compdyn/partmc bash -c \'cd /build; make test\'\n\n* **_Step 3:_** Run a scenario like the following. This example uses `partmc/scenarios/4_chamber`. This mounts the current directory (`$PWD`, replace with `%cd%` on Windows) into `/run` inside the container, changes into that directory, and then runs PartMC.\n\n cd partmc/scenarios/4_chamber\n docker run -it --rm -v $PWD:/run compdyn/partmc bash -c \'cd /run; /build/partmc chamber.spec\'\n\nIn the above `docker run` command the arguments are:\n\n- `-it`: activates ""interactive"" mode so Ctrl-C works to kill the command\n- `--rm`: remove temporary docker container files after running\n- `-v LOCAL:REMOTE`: mount the `LOCAL` directory to the `REMOTE` directory inside the container\n- `compdyn/partmc`: the docker image to run\n- `bash -c \'COMMAND\'`: run `COMMAND` inside the docker container\n\nThe directory structure inside the docker container is:\n\n /partmc # a copy of the partmc git source code repository\n /build # the directory in which partmc was compiled\n /build/partmc # the compiled partmc executable\n /run # the default directory to run in\n\n\nDependencies\n============\n\nRequired dependencies:\n\n * Fortran 2003 compiler - or similar\n * CMake version 2.6.4 or higher - \n * NetCDF version 4.2 or higher -\n \n\nOptional dependencies:\n\n * CAMP chemistry code - \n * MOSAIC chemistry code version 2012-01-25 - Available from Rahul\n Zaveri - \n * MPI parallel support - \n * GSL for random number generators -\n \n * SUNDIALS ODE solver for condensation support -\n \n * gnuplot for testcase plotting - \n\n\nInstallation\n============\n\n1. Install cmake and NetCDF (see above). The NetCDF libraries are\n required to compile PartMC. The `netcdf.mod` Fortran 90 module file\n is required, and it must be produced by the same compiler being\n used to compile PartMC.\n\n2. Unpack PartMC:\n\n tar xzvf partmc-2.7.0.tar.gz\n\n3. Change into the main PartMC directory (where this README file is\n located):\n\n cd partmc-2.7.0\n\n4. Make a directory called `build` and change into it:\n\n mkdir build\n cd build\n\n5. If desired, set environment variables to indicate the install\n locations of supporting libraries. 
If running `echo $SHELL`\n indicates that you are running `bash`, then you can do something\n like:\n\n export NETCDF_HOME=/\n export MOSAIC_HOME=${HOME}/mosaic-2012-01-25\n export SUNDIALS_HOME=${HOME}/opt\n export GSL_HOME=${HOME}/opt\n\n Of course the exact directories will depend on where the libraries\n are installed. You only need to set variables for libraries\n installed in non-default locations, and only for those libraries\n you want to use. Everything except NetCDF is optional.\n\n If `echo $SHELL` instead indicates `tcsh` or similar, then the environment\n variables can be set like `setenv NETCDF_HOME /` and similarly.\n\n6. Run cmake with the main PartMC directory as an argument (note the\n double-c):\n\n ccmake ..\n\n7. Inside ccmake press `c` to configure, edit the values as needed,\n press `c` again, then `g` to generate. Optional libraries can be\n activated by setting the `ENABLE` variable to `ON`. For a parallel\n build, toggle advanced mode with `t` and set the\n `CMAKE_Fortran_COMPILER` to `mpif90`, then reconfigure.\n\n8. Optionally, enable compiler warnings by pressing `t` inside ccmake\n to enable advanced options and then setting `CMAKE_Fortran_FLAGS`\n to:\n\n -O2 -g -fimplicit-none -W -Wall -Wconversion -Wunderflow -Wimplicit-interface -Wno-compare-reals -Wno-unused -Wno-unused-parameter -Wno-unused-dummy-argument -fbounds-check\n\n9. Compile PartMC and test it as follows.\n\n make\n make test\n\n10. To run just a single test do something like:\n\n ctest -R bidisperse # argument is a regexp for test names\n\n11. To see what make is doing run it like:\n\n VERBOSE=1 make\n\n12. To run tests with visible output or to make some plots from the\n tests run them as follows. Note that tests often rely on earlier\n tests in the same directory, so always run `test_1`, then\n `test_2`, etc. Tests occasionally fail due to random sampling, so\n re-run the entire sequence after failures. For example:\n\n cd test_run/emission\n ./test_emission_1.sh\n ./test_emission_2.sh\n ./test_emission_3.sh # similarly for other tests\n gnuplot -persist plot_species.gnuplot # etc...\n\n13. To run full scenarios, do, for example:\n\n cd ../scenarios/1_urban_plume\n ./1_run.sh\n\n\nUsage\n=====\n\nThe main `partmc` command reads `.spec` files and does the run\nspecified therein. Either particle-resolved runs, sectional-code runs,\nor exact solutions can be generated. A run produces one NetCDF file\nper output timestep, containing per-particle data (from\nparticle-resolved runs) or binned data (from sectional or exact\nruns). The `extract_*` programs can read these per-timestep NetCDF\nfiles and output ASCII data (the `extract_sectional_*` programs are\nused for sectional and exact model output).\n'",",https://zenodo.org/badge/latestdoi/24058992,https://zenodo.org/badge/latestdoi/24058992","2014/09/15, 14:13:51",3327,GPL-2.0,4,2729,"2023/08/23, 01:48:58",55,64,136,7,64,8,1.6,0.20896084337349397,"2023/08/11, 20:24:26",2.7.0,0,8,false,,false,false,,,https://github.com/compdyn,,,,,https://avatars.githubusercontent.com/u/8762060?v=4,,, PyCHAM,"CHemistry with Aerosol Microphysics in Python box model for Windows, Linux and Mac.",simonom,https://github.com/simonom/PyCHAM.git,github,"atmospheric-modelling,scipy,chemical-scheme,aerosol-microphysics,aerosol-chambers,indoor-air-quality",Atmospheric Chemistry and Aerosol,"2023/10/23, 15:22:48",45,2,7,true,Python,,,"Python,TeX",,"b'


\n\nWelcome to the PyCHAM (CHemistry with Aerosol Microphysics in Python Box Model) software for modelling indoor environments, including aerosol chambers. Funding has been provided by the [EUROCHAMP-2020 research project](http://www.eurochamp.org) and the National Centre for Atmospheric Science ([NCAS](https://www.ncas.ac.uk/en/)). Please open an issue on the GitHub repository or contact Simon O\'Meara (simon.omeara@manchester.ac.uk) with any issues, comments or suggestions.\n\nPyCHAM is an open-source computer code (written in Python) for simulating aerosol chambers. It is supplied under the GNU General Public License v3.0. The license document is provided with the software (LICENSE) and contains information about modification and conveying.\n\n# Table of Contents\n1. [Documentation](#Documentation)\n2. [Installation](#Installation)\n3. [Running](#Running)\n4. [Testing](#Testing)\n5. [Inputs](#Inputs)\n6. [Outputs](#Outputs)\n7. [Photochemistry](#Photochemistry)\n8. [Gas-particle Partitioning](#Gas-particle-Partitioning)\n9. [Numerical Considerations](#Numerical-Considerations)\n10. [Quick Plotting Tab](#Quick-Plotting-Tab)\n11. [Flow Mode](#Flow-Mode)\n12. [Indoor Air Quality Modelling](#Indoor-Air-Quality-Modelling)\n13. [Frequently Asked Questions](#Frequently-Asked-Questions)\n14. [Acknowledgements](#Acknowledgements)\n\n## Documentation\n\nThe README file you are now viewing serves as the PyCHAM manual, explaining how to set up the software and use it. As an additional resource, we also provide an [introductory video](https://www.youtube.com/watch?v=W8NbcU8WHeg&t=506s).\n\nThe [article](https://doi.org/10.21105/joss.01918) published in the Journal of Open Source Software explains the underlying mechanisms of PyCHAM and its purpose. This article was reviewed using v0.2.4 of PyCHAM. Additionally, the [article](https://doi.org/10.5194/gmd-14-675-2021) published in Geoscientific Model Development provides a detailed introduction of PyCHAM and its use. This article was reviewed using v2.1.1 of PyCHAM. The DOI for all PyCHAM releases is: [10.5281/zenodo.3752676](https://www.doi.org/10.5281/zenodo.3752676).\n\n\n\nVersion numbers of PyCHAM try to adhere to the semantics described by [semver](https://semver.org).\n\n## Installation\n\nThere are two options for installing: via conda and via pip. The pip method takes longer as the openbabel package has to be installed separately. The instructions below for the pip method currently apply only to Linux and macOS, whilst the conda instructions apply to Windows, Linux and macOS.\n\n\n## Install via conda\n\n1. Download the PyCHAM repository from github.com/simonom/PyCHAM\n\n2. Download and install the package manager Miniconda (Anaconda is also suitable but takes more memory and takes longer to install) using the following address and selecting the appropriate operating system version: https://docs.conda.io/en/latest/miniconda.html.\n\n3. Ensure conda is operating correctly; the method varies between operating systems and is explained [here](https://docs.conda.io/projects/conda/en/latest/user-guide/getting-started.html)\n\nThe following steps are at the command line:\n\n4. cd into the directory where the PyCHAM package is stored.\n\n5. Use the following command to install: conda env create -f PyCHAM_OSenv.yml -n PyCHAM, where OS is replaced by your operating system name (win (for Windows), lin (for Linux), mac (for macOS)).\n\n6. Activate the environment with the command: conda activate PyCHAM\n\n6a. 
For Windows (please skip this step if on Linux or Mac), it has been noted that, when following the above steps, openbabel does not always link to its data directory. You can test this by looking for open babel warning messages when running the PyCHAM examples (to run PyCHAM please see [Running](#Running)). A message such as \'the aromatic.txt file could not be found\' indicates the data library is not set. To set it, search for Control Panel in the Windows search bar, then User Accounts and User Accounts again. Then select \'Change my environment variables\'; for User variables, select New, then create the new variable with the Variable name \'BABEL_DATADIR\' and the Variable value set to the path of the relevant open babel data directory. To identify your open babel data directory you should look for the open babel folder containing aromatic.txt at a location such as: C:\\Users\\your_account_name\\\\.conda\\pkgs\\openbabel-2.4.1-py37_5\\share\\openbabel. Once the path is supplied to the Variable value box, select OK to create the variable, then open a new Anaconda prompt to restart PyCHAM.\n\n6b. On Mac (please skip this step if on Linux or Windows), it has been noted that, when following the above steps and then running PyCHAM (to run PyCHAM please see [Running](#Running)), an error is given stating that Developer Tools have not been installed. If this error displays, please follow the on-screen instructions to install Developer Tools.\n\n7. Install is complete for Linux, Mac and Windows; to run PyCHAM please see [Running](#Running). \n\n## Install via pip\n\n1) Ensure that swig is installed on your system. For example, for macOS, a command at the command line like: brew install swig, and for Linux a command at the command line like: sudo apt install swig\n\n2) Ensure that eigen is available on your system. For example, for macOS, a command at the command line like: brew install eigen, and for Linux, download the latest stable eigen release [here](http://eigen.tuxfamily.org/index.php?title=Main_Page#Download), then unzip and move the unzipped folder into the /usr/local/include folder using a command at the command line like: sudo mv eigen-3.3.9 /usr/local/include\n\n3) Create a virtual environment in a suitable location running with at least Python 3.6. For example, if your command line recognises Python 3.6 as python3.6, the command to make a virtual environment called 3env on macOS is: python3.6 -m venv 3env. Note that python3.6 in this example should be replaced with the appropriate command for recognising python on your machine (often just python).\n\n4) Activate this environment. For example, for a virtual environment called 3env on macOS and Linux the command at the command line is: source 3env/bin/activate. For example, for a virtual environment called 3env on Windows the command at the command line is: .\\3env\\Scripts\\activate\n\n5) openbabel must be installed separately from PyCHAM; begin by downloading and unzipping the tar file (file name containing .tar.) for the latest version of openbabel on [github](https://github.com/openbabel/openbabel/releases). Note that the unzipped version can be stored in the Downloads folder for the installation process.\n\nThe following steps are at the command line:\n\n6) At the command line cd into the unzipped openbabel folder, for example: cd openbabel-3.1.1\n\n7) Create a build directory: mkdir build\n\n8) Change into the build directory: cd build\n\n9) Using cmake, prepare the openbabel build files. 
This requires several specifications; the -DCMAKE_INSTALL_PREFIX specification should be the path to the site-packages folder of the virtual environment created above: cmake -DRUN_SWIG=ON -DPYTHON_BINDINGS=ON -DCMAKE_INSTALL_PREFIX=~/3env/lib/python3.6/site-packages ..\n\n9a) If the above command causes a message at the command prompt that eigen cannot be found, this can be fixed by stating its location, for example: \ncmake .. -DRUN_SWIG=ON -DPYTHON_BINDINGS=ON -DCMAKE_INSTALL_PREFIX=~/3env/lib/python3.6/site-packages -DEIGEN3_INCLUDE_DIR=/usr/local/Cellar/eigen/3.3.9/include/eigen3\n\n10) Complete installation with: make install\n\n11) Test that openbabel is functioning with: python\n\n11a) Then inside the python interpreter use: import openbabel\n\n11b) If this works fine (no error message), continue to step 12. If this returns: ModuleNotFoundError: No module named \'openbabel\', then quit the python interpreter: quit()\n\n11c) If the error in the step above was seen, add the openbabel path to the Python path, for example: export PYTHONPATH=~/3env/lib/python3.6/site-packages/lib/python3.6/site-packages/openbabel:$PYTHONPATH\n\n12) Test that pybel is functioning with: python\n\n12a) Then inside the python interpreter use: import pybel\n\n12b) If no error message is seen, continue to the next step. During testing this threw a relative import error which was corrected by changing the relevant line (given in the error message) in the pybel.py file (file location given in the error message) from ""from . import openbabel as ob"" to ""import openbabel as ob""\n\n13) Ensure pip and wheel are up to date: pip install --upgrade pip wheel\n\n14) Install PyCHAM and its dependencies in the virtual environment: python -m pip install --upgrade PyCHAM\n\nInstall is complete; to run PyCHAM please see [Running](#Running).\n\n## Running\n\n1. For model inputs, ensure you have: a .txt file chemical reaction scheme, a .xml file for converting the component names used in the chemical reaction scheme file to SMILES strings and a .txt file stating values of model variables (e.g. temperature) - see details about these three files below and note that example files are available in PyCHAM/input\n\n2. Once [Installation](#Installation) is complete and the appropriate environment has been activated (see [Installation](#Installation)), use the command line to change into the top level directory PyCHAM (the directory above the PyCHAM __main__ file).\n\n3. There are two choices for starting up the programme. If you are new to PyCHAM, and/or have simulations that can be practically selected manually, then begin the programme from the command line to use PyCHAM via the graphical user interface: python PyCHAM. Alternatively, if you wish to automate simulation setup and run, then edit the module \'automated_setup_and_call.py\' to your needs and call this from the command line: python automated_setup_and_call\n\n4. The PyCHAM graphical user interface (GUI) should now display on your screen. Using the \'Simulate\' tab, one can select the folder containing all input files using the \'Select Folder Containing Input Files\' button. This will search the selected folder for the input files (chemical reaction scheme, xml and model variables). For the chemical scheme, files with filenames including \'chem\' will be identified. For the xml, files with filenames including \'xml\' will be identified. For the model variables, files with filenames including \'var\' will be identified. 
Any identified files will then be displayed in the GUI (see below for details on the contents of the chemical scheme, xml and model variables input files).\n\n5. To select any of the input files individually, one can use the corresponding GUI button.\n\n6. The GUI will display the found inputs provided in the selected model variables file. For inputs not stated in this file, the displayed variables are default.\n\n7. Problems with the input files will be displayed in the GUI - this functionality is under development, meaning that not all problems are currently captured.\n\n8. Once the first simulation is ready (through selection of the desired combination of correct input files described above), the user chooses between a single simulation or adding to batch, with the latter allowing multiple simulations to be queued.\n\n9a. If the user chooses a single simulation to run, a progress bar will be shown, representing the time through the experiment as a fraction of the total experiment time.\n\n9b. If the user chooses to add to batch, then further simulations can be chosen by repeating steps 4-7 above. When ready, the batch can be run with the start series of simulations button. The progress bar then represents individual experiments and the current simulation is shown in the GUI. Note that, when adding to batch, input files should be located in different folders (rather than changing the inputs inside a folder that has already been added to the batch).\n\n10. The \'Plot\' tab allows multiple plotting options. The Standard Results Plot produces two sub-plots in one figure: one with the particle number distribution, secondary aerosol mass, and particle number concentration against time, and another plot that shows the gas-phase concentrations of specified components with time (the specified components are those with initial concentrations given in the model variables file).\n\n11. The \'Quit\' button will stop the programme. If it does not work, the ctrl+z key combination in the console window will cease operations safely. In both cases Python will release all memory associated with the simulation.\n\n## Testing\n\nUnit tests for PyCHAM modules can be found in the PyCHAM/unit_tests folder. Call these tests from the home folder for PyCHAM, with: python test_module.py, where module is replaced by the name of the PyCHAM module to be tested. For some unit tests, example inputs are required; the chemical scheme files for these are stored in unit_tests/input with file names beginning with test_ ..., therefore we recommend that users do not use chemical schemes with the same naming convention, to prevent confusion. Where required, model variables for unit tests either use the default values or those given in the unit test script and use the xml file provided in PyCHAM/input.\n\nContinuous integration testing can be completed using the \'.travis.yml\' (home folder) and \'test_TravisCI.py\' (unit_tests folder) files at the [Travis CI website](https://travis-ci.com).\n\nExample run output is saved in the PyCHAM/output/example_scheme folder. To reproduce this, select from PyCHAM/input example_scheme.txt for the chemical scheme, example_xml.xml for the xml file and example_model_var.txt for the model variables. 
Note that the example output may vary between releases, so please check correspondence with your release.\n\n## Inputs\n\n## Chemical Scheme file\n\nThe chemical scheme file states the reactions and their rate coefficients in the gas and aqueous phases.\n\nAn example chemical scheme file is given in the PyCHAM/input folder, called \'example_scheme.txt\', which has been obtained from the [Master Chemical Mechanism (MCM) website](http://mcm.leeds.ac.uk/MCM/) (KPP version) and modified. \n\nResults are automatically saved in PyCHAM/output/name_of_chemical_scheme_file/name_given_in_model_variables_input_file_for_saving. \n\nThe unit tests described above save results with the prefix \'test_\', therefore we recommend using a different convention for chemical scheme names to prevent confusion.\n\nMarkers are required to recognise different sections of the chemical scheme. The default markers are for the MCM KPP format; however, others can be specified using the chem_scheme_markers input in the model variables input file. A guide to chem_scheme_markers is given in the Model Variables .txt file section below. This includes how to distinguish between gas- and aqueous-phase reactions.\n\nReaction rate coefficients for chemical reactions and generic rate coefficients must adhere to the following rules: the expression for the rate coefficient can use Fortran-type scientific notation or Python-type; acceptable math functions: EXP, exp, dsqrt, dlog, LOG, dabs, LOG10, numpy.exp, numpy.sqrt, numpy.log, numpy.abs, numpy.log10; rate coefficients may be functions of TEMP, RH, M, N2, O2, where TEMP is temperature (K), RH is relative humidity (0-1), and M, N2 and O2 are the concentrations of third body, nitrogen and oxygen, respectively (# molecules/cm3 (air)).\n\nInside the chemical scheme file, the expression for the reaction rate coefficient of a chemical reaction and the reaction itself must be contained on the same line of the file, with some delimiter (described above with chem_scheme_markers) separating them.
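\n\nTo illustrate these rules, a rate coefficient expression of the permitted form can be evaluated in plain Python (a minimal sketch, not code from PyCHAM itself; the expression and values are illustrative only):\n\n```python\nimport numpy\n\nTEMP = 298.15 # temperature (K)\nRH = 0.65 # relative humidity (0-1)\nM = 2.46e19 # third body concentration (# molecules/cm3 (air))\nN2 = 0.79*M # nitrogen concentration (# molecules/cm3 (air))\nO2 = 0.21*M # oxygen concentration (# molecules/cm3 (air))\n\n# an illustrative expression using the permitted names and math functions\nrate_expression = \'1.4e-12*numpy.exp(-1310./TEMP)\'\n\nrate_coefficient = eval(rate_expression)\nprint(rate_coefficient) # approximately 1.7e-14\n```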
\n\n## Chemical Scheme .xml file\n\nAn example is given in the inputs folder (of the Github repository), called \'example_xml.xml\'. It has a two line header, the first states that the mechanism is beginning (`<mechanism>`) and the second states that the species definition is beginning (`<species_defs>`). The end of the species list must be marked (`</species_defs>`) and finally, the end of the mechanism must be marked (`</mechanism>`). \n\nBeneath this, every component included in the reactions of the chemical scheme must have its SMILES string given. To add new components, use this three line example per new component:\n\n`<species species_number=""s6058"" species_name=""O2"">`\n\n`<smiles>O=O</smiles>`\n\n`</species>`\n\nHere the first line states that the species definition is beginning and gives a unique code, s6058 in this case, and its chemical scheme name, in this case O2. The second line provides the SMILES string, in this case O=O. The third line states that the definition is finished. For information on SMILES please see: [SMILES website](https://daylight.com/smiles/index.html).\n\nFor Master Chemical Mechanism chemical schemes the associated xml file can be acquired from the MCM website.\n\n\n## Model Variables .txt file\n\nAn example is provided in the inputs folder (of the Github repository), called \'example_model_var.txt\'. This can include the following variables, separated by a return (so one line per variable); note that if a variable is irrelevant for your simulation, it can be omitted and will be replaced by the default.\nIn addition, you can find more information about photochemistry and flow mode instruments in the corresponding sections of this README file.\n\n| Input Name | Description |\n| ---------- | ---------- |\n| res_file_name = | Name of folder to save results to. Note that by default results will be saved to PyCHAM/PyCHAM/output/name of chemical scheme used/res_file_name, however the user can state their own path to results, including the name of the folder they want saving to, e.g.: res_file_name = my documents/my PyCHAM results saves to: my documents/my PyCHAM results. |\n| total_model_time = | Total experiment time to be simulated (s) |\n| update_step = | Time (s) interval for updating integration constants (specifically natural light intensity (if applicable) and particle number concentration due to its change during any of: coagulation, particle loss to wall and/or nucleation). Defaults to 1 s. Can be set to more than the total_model_time variable above to prevent updates. |\n| recording_time_step = | Time interval for recording results (s). Must be at least the value of update_step if particles are present (number_size_bins variable below greater than zero). Defaults to 60 s. Note that recorded values represent values at the recording interval. Therefore, instantaneous injections at the recording interval are registered. |\n| size_structure = | The size structure for the sectional approach to particles of varying size. Set to 0 for moving-centre (default) and 1 for full-moving. |\n| number_size_bins = | Number of size bins (excluding wall); to turn off particle considerations set to 0 (which is also the default), likewise set the pconc and seed_name variables (below) off. Must be integer (e.g. 1) not float (e.g. 1.0) |\n| lower_part_size = | Radius of smallest size bin boundary (um), defaults to 0.0 um |\n| upper_part_size = | Radius of largest size bin boundary (um), defaults to 0.5 um |\n| space_mode = | lin for linear spacing of size bins in radius space, or log for logarithmic spacing of size bins in radius space, if empty defaults to linear spacing. |\n| wall_on = | 1 to consider wall for gas-wall partitioning and particle deposition to wall, 0 to neglect these processes. Defaults to 1. However, in addition, please note the described defaults below for model variables relevant to wall processes. |\n| number_wall_bins = | The number of wall bins to use, defaults to one. Note that if wall_on set to zero, then this variable will be ignored and no wall will be simulated. |\n| mass_trans_coeff = | Mass transfer coefficients for vapour-wall partitioning (/s), if left empty defaults to zero (which implies no partitioning with wall). If using multiple wall bins, then separate values by a comma and ensure alignment with multiple values in the eff_abs_wall_massC variable (described below). 
To specify mass transfer coefficients for vapour-wall partitioning (/s) for specific components on specific wall, use the syntax: ComponentNameInChemicalScheme_walln_mtc, where ComponentNameInChemicalScheme is the name of the component as used in the chemical scheme (case sensitive), n is the wall number (beginning at 1), and mtc is the vapour-wall partitioning mass transfer coefficient (/s) for that chemical on that surface. To separate different components use a semi-colon. For example, to specify NO2 and HONO transfer coefficients onto the first surface, whilst all other components have the same transfer coefficient to that surface of 1x10-4 /s, and all components (including NO2 and HONO) have a transfer coefficient to a second surface of 1x10-5 /s: mass_trans_coeff = 1e-4; NO2_wall1_2e-4; HONO_wall1_3e-4, 1e-5|\n| eff_abs_wall_massC = | Effective absorbing wall mass concentrations (g/m3 (air)), if left empty defaults to zero (which implies no partitioning with wall). If using multiple wall bins, then separate values by a comma and ensure alignment with multiple values in the mass_trans_coeff variable (described above). |\n| temperature = | Air temperature inside the chamber (K). At least one value must be given for the experiment start (times corresponding to temperatures given in tempt variable below). If multiple values, representing changes in temperature at different times, then separate by a comma. For example, if the temperature at experiment start is 290.0 K and this increases to 300.0 K after 3600.0 s of the experiment, inputs are: temperature = 290.0, 300.0, tempt = 0.0, 3600.0. A change in temperature during the simulation will automatically cause relative humidity, chamber pressure, component volatilities and gas-phase diffusivities to change accordingly. |\n| tempt = | Times since start of experiment (s) at which the temperature(s) set by the temperature variable above, are reached. Defaults to 0.0 if left empty as at least the temperature at experiment start needs to be known. If multiple values, representing changes in temperature at different times, then separate by a comma. For example, if the temperature at experiment start is 290.0 K and this increases to 300.0 K after 3600.0 s of the experiment, inputs are: temperature = 290.0, 300.0; tempt = 0.0, 3600.0 |\n| p_init = | Pressure of air inside the chamber (Pa) |\n| rh = | Relative Humidity (fraction, 0-1), if this changes during the simulation, values at different times should be separated by a comma, with the corresponding times provided in the rht model variable. Defaults to 0.65. If this model variable is used, a relative humidity at experiment start must be provided. For example, for an experiment starting at relative humidity 0.8 and increasing to 0.9 after 30 minutes, inputs would be: rh = 0.8, 0.9 and rht = 0., 1800.. Note that relative humidity cannot be changed through Compt and associated model variables for instantaneous injection of gas-phase components to prevent conflicts with this rh model variable. Note also that relative humidity will change in response to changing temperature (temperature model variable). It is possible to specify a change in relative humidity that is too great for PyCHAM to maintain stability in the ODE solver, see the [Numerical Considerations](#Numerical-Considerations) section below for more information on this. |\n| rht = | Times (s) through simulation at which the relative humidities stated in the rh model variable are reached. 
Defaults to 0, which implies a constant relative humidity. If times provided, a time of 0 (experiment start) must also be provided along with a corresponding relative humidity in the rh model variable. For example, for an experiment starting at relative humidity 0.8 and increasing to 0.9 after 30 minutes, inputs would be: rh = 0.8, 0.9 and rht = 0., 1800.|\n| lat = | Latitude (degrees) for natural light intensity (if applicable, leave empty if not (if experiment is dark set light_status below to 0 for all times)). |\n| lon = | Longitude (degrees) for natural light intensity (if applicable, leave empty if not (if experiment is dark set light_status below to 0 for all times)). |\t\n| DayOfYear = | Day of the year for natural light intensity (if applicable, leave empty if not (if experiment is dark set light_status below to 0 for all times)), must be integer between 1 and 365. |\n| daytime_start = | Time of day experiment starts, for natural light intensity (if applicable, leave empty if not (if experiment is dark set light_status below to 0 for all times)) (Greenwich Mean Time (GMT)/Coordinated Universal Time (UTC) in seconds (not hours:minutes:seconds)). |\n| act_flux_path = | Path to the csv file containing the actinic flux values; use only if you wish to specify actinic fluxes. The file should have a line for each wavelength, with the first number in each line representing the wavelength in nm, and the second number separated from the first by a comma stating the flux (Photons/cm2/nm/s) at that wavelength. No headers should be present in this file. Example of file given by /PyCHAM/photofiles/Example_act_flux.csv and an example of the act_flux_path variable is: act_flux_path = /PyCHAM/photofiles/Example_act_flux.csv. Note, please include the .csv in the variable name if this is part of the file name. If the chamber light status is set to illuminated and a Master Chemical Mechanism chemical scheme is used, PyCHAM defaults to estimating the MCM photolysis reactions based on natural solar radiation using the parameterisation of Hayman (1997) which is described in [Saunders et al. (2003)](https://doi.org/10.5194/acp-3-161-2003) and which requires estimation of the solar zenith angle as described by the textbook chapter ""The Atmosphere and UV-B Radiation at Ground Level"" by S. Madronich (in \'Environmental UV Photobiology\' textbook, 1993). It is not necessary to state the full path to the actinic flux file - if the file is saved in the photofiles folder of PyCHAM, it is only necessary to state the name of the file. |\n| photo_par_file = | Name of txt file stored in PyCHAM/photofiles containing the wavelength-dependent absorption cross-sections and quantum yields for photochemistry. If left empty defaults to MCMv3.2 recommended values (http://mcm.leeds.ac.uk/MCMv3.3.1/parameters/photolysis.htt), which come as part of PyCHAM. File must be of .txt format with the formatting:
J_n_axs
wv_m, axs_m
J_n_qy
wv_M, qy_m
J_end
where n is the photochemical reaction number, axs represents the absorption cross-section (cm2/molecule), wv is wavelength (nm), _m is the wavelength number, and qy represents quantum yield (fraction). J_end marks the end of the photolysis file. An example is provided in PyCHAM/photofiles/example_inputs.txt. Note, please include the .txt in the file name. |\n| ChamSA = | Chamber surface area (m2), used if the Rader and McMurry wall loss of particles option (McMurry_flag) is set to 1 (on) below. Note that the model will convert this to a chamber radius by assuming the chamber is a sphere.|\n| coag_on = | set to 1 (default if left empty) for coagulation to be modelled, or set to zero to omit coagulation|\n| nucv1 = | Nucleation parameterisation value 1 to control the total number of newly formed particles|\n| nucv2 = | Nucleation parameterisation value 2 to control the start time of nucleation|\n| nucv3 = | Nucleation parameterisation value 3 to control the duration of nucleation|\n| nuc_comp = | Name of component contributing to nucleation (only one allowed), must correspond to a name in the chemical scheme file, or \'core\' for a generic zero vapour pressure component. Defaults to \'core\'.|\n| new_partr = | Radius of newly nucleated particles (cm), if empty defaults to 2.0e-7 cm. |\n| inflectDp = | Particle diameter (m) at which the particle deposition to wall rate function has an inflection point. Defaults to 1e-6 m (== 1 um). |\n| Grad_pre_inflect = | Gradient of the logarithm of particle wall deposition rate against the logarithm of particle diameter before inflection. For example, for the rate to decrease by an order of magnitude every order of magnitude increase in particle diameter, set to 1. Defaults to 0.|\n| Grad_post_inflect = | Gradient of the logarithm of particle wall deposition rate against the logarithm of particle diameter after inflection . For example, for the rate to increase by an order of magnitude every order of magnitude increase in particle diameter, set to 1. Defaults to 0.|\n| Rate_at_inflect = | Particle deposition rate to wall at inflection (/s). Defaults to 0.|\n| part_charge_num = | Average number of charges per particle, only required if the McMurry and Rader (1985) model for particle deposition to walls is selected.|\n| elec_field = | Average electric field inside the chamber (g.m/A.s3), only required if the McMurry and Rader (1985) model for particle deposition to walls is selected |\n| McMurry_flag = | 0 to use a user-defined particle to wall deposition rate as a function of particle size. With the user defining this function through the model variables (inflectDp, Grad_pre_inflect, Grad_post_inflect, Rate_at_inflect, which are described above). 1 to use the McMurry and Rader (1985, doi: 10.1080/02786828508959054) method for particle wall loss, which uses the chamber surface area given by ChamSA above, average number of charges per particle (part_charge_num above) and average electric field inside chamber (elec_field above), defaults to no particle wall loss if empty, similarly -1 turns off particle wall loss. Note, that if using the McMurry and Rader approach, it will be assumed that the surface is spherical.|\n| C0 = | Initial concentrations of components present at the experiment start (ppb), must correspond to component names in Comp0 variable below. Can affect the gas-phase and/or surface (e.g. wall), see the Comp0 dexcription below for how to distinguish between reservoirs. 
To select the initial concentrations from the end of a previously saved simulation. Then state the path (without quotation marks) as the value for C0. E.g.: C0 = /path/to/outputs/from/previous/simulation |\n| Comp0 = | Names of components present at experiment start (in the order corresponding to their concentrations in C0). Note, this is case sensitive, with the case matching that in the chemical scheme file. For components in the gas-phase, only their name (chemical scheme name) is needed. For surfaces, postfix chemical scheme names with \'_walln\', where n is the surface (e.g. wall) number starting from 1.|\n| Ct = | The concentrations of components following instantaneous injection at some time after experiment start (ppb). Seperate injections at different times with commas. Seperate different components with a semicolon. E.g., if k ppb of component A injected after m seconds and j ppb of component B injected after n (n>m) seconds, then injectt should be m, n and Compt should be A, B and Ct should be k,0;0,j. Note, this is for components with concentrations allowed to change, see const_comp for those with invariable concentrations. |\n| Compt = | Chemical scheme name of component injected instantaneously at some time after experiment start. Note, this is case sensitive, with the case matching that in the chemical scheme file - note this for components with concentrations allowed to change, see const_comp for those with invariable concentrations. Also note that water should not be stated here, rather, for varying relative humidity, use the rh and rht model variables. Separate components with a comma. |\n| injectt = | Time(s) at which instantaneous injections occur (seconds), which correspond to the concentrations in Ct. Separate multiple values (representing injection at multiple times) with commas. If multiple components are injected after the start time, then this input should still consist of just one series of times as these will apply to all components. E.g., if k ppb of component A injected after m seconds and j ppb of component B injected after n (n>m) seconds, then this input should be m, n and Compt should be A, B and the Ct should be k,0;0,j Note this is for components with concentrations allowed to change, see const_comp for those with invariable concentrations. |\n| const_comp = | Name of component with continuous gas-phase concentration inside chamber. Note, this is case sensitive, with the case matching that in the chemical file. Defaults to nothing if left empty. To specifically account for constant influx, see const_infl variable below.|\n| obs_file = | Name of xlsx file containing concentrations (molecules/cm3) of components and times (s) through experiment. Component chemical scheme names must be in the first row, time must vary with later rows and the first column must contain times (s), whilst later columns contain the concentrations (molecules/cm3) of the components given in the first row. If this variable is provided PyCHAM will automatically fix the component concentrations to those provided in this file. The sheet name to extract information from must be called PyCHAMobs.|\n| cont_infl = | Name of component(s) with continuous gas-phase influx to chamber. Note, this is case sensitive, with the case matching that in the chemical file. Defaults to nothing if left empty. For constant gas-phase concentration see const_comp variable above. Should be one dimensional array covering all components. 
For example, if component A has continuous influx of K ppb/s from 0 s to 10 s and component B has continuous influx of J ppb/s from 5 s to 20 s, the input is: cont_infl = A, B Cinfl = K, K, 0, 0; 0, J, J, 0 cont_infl_t = 0, 5, 10, 20 therefore, the semicolon in Cinfl is used to distinguish the influxes of different components. If information on influxing component names, times and concentrations is more neatly held in an excel spreadsheet, then state the path to that spreadsheet as the value for cont_infl. Do not contain the path inside quotation marks. An example of stating the path inside a model variables file can be seen in PyCHAM/input/ind_AQ_ex/model_var_test.txt. An example of such a spreadsheet is available in PyCHAM/input/ind_AQ_ex/cont_infl.xlsx. Note that inside this example spreadsheet, the following formatting rules are exemplified: i) in the first row, first column cell, state the units of the emission rate: ppb/s for parts per billion per second, molec/cm3/s for number of molecules per cm cubed per second, ii) in the first row, column 2 onwards, state the times (s) through simulation that emissions relate to, iii) in the first column, row 2 onwards |\n| cont_infl_t = | Times during which continuous influx of each component given in the cont_infl variable occurs, with the rate of their influx given in the Cinfl variable. Should be one dimensional array covering all components. For example, if component A has constant influx of K ppb/s from 0 s to 10 s and component B has constant influx of J ppb/s from 5 s to 20 s, the input is: cont_infl = A, B Cinfl = K, K, 0, 0; 0, J, J, 0 cont_infl_t = 0, 5, 10, 20 therefore, the semicolon in Cinfl is used to distinguish the influxes of different components. |\n| cont_infl_tf = | Flag for denoting how to treat continuous influxes with respect to time. 0 (default) means that continuous influx times stated explicitly. 1 means that continuous influxes repeated on a 24 hour loop, in this instance, only the first 24 hours of continuous influxes will be considered. |\n| Cinfl = | Rate of gas-phase influx of components with continuous influx (stated in the cont_infl variable above). In units of ppb/s. Defaults to zero if left empty. If multiple components affected, their influx rate should be separated by a semicolon, with a rate given for all times presented in const_infl_t (even if this is constant from the previous time step for a given component). For example, if component A has continuous influx of K ppb/s from 0 s to 10 s and component B has continuous influx of J ppb/s from 5 s to 20 s, the input is: cont_infl = A, B Cinfl = K, K, 0, 0; 0, J, J, 0 cont_infl_t = 0, 5, 10, 20 therefore, the semicolon in Cinfl is used to distinguish the influxes of different components. Cannot be an expression, e.g. 1.e-1, must be number, e.g. 0.1 instead of 1.e-1. |\n| remove_influx_not_in_scheme = | A flag to tell PyCHAM whether or not to ignore continuous influxes of components that are not found in the chemical scheme. Default to 0, which means do not ignore (in which case any components with continuous influxes not found in the chemical scheme cause an error message, but can be set to 1, which ignores such components. |\n| dens_Comp = | Chemical scheme names of components with a specified density, if more than one name then separate with comma. The number of names must match the number of densities provided in the dens input. Default is to estimate density based on the SMILE string of each component and the Girolami method contained in UManSysProp. 
|\n| dens = | The density of components specified in the dens_Comp input above (g/cm3), if more than one density then separate with a comma. The number of densities must match the number of names provided in the dens_Comp input. Default is to estimate density based on the SMILE string of each component and the Girolami method contained in UManSysProp. |\n| vol_Comp = | Names of components with vapour pressures to be manually assigned from volP, names must correspond to those in the chemical scheme file and if more than one, separated by commas. Can be left empty, which is the default (in which case vapour pressures are estimated from a vapour pressure estimation method by UManSysProp). To specify a group of components based on their estimated vapour pressures (Pa), use inequalities, e.g., for all components with vapour pressures less than 1.e-2 Pa: all_<1.e-2. And to specify for a certain wall, postfix with \\_walln, where n is the wall number (starting at 1 for the first wall), e.g. for all components with estimated vapour pressure less than 1e-2 Pa and for the second wall: all_<1.e-2_wall2. |\n| volP = | Vapour pressures (Pa) of components with names given in vol_Comp variable above, where one vapour pressure must be stated for each component named in vol_Comp and multiple values should be separated by a comma. Acceptable to use e for standard notation, such as 1.e-2 for 0.01 Pa. To specify the vapour pressure associated with a particular wall (wall order given by the order for the mass_trans_coeff and mass_trans_coeff variables described above), inside the vol_Comp variable described above, use _walln to postfix the relevant component/vapour pressure category name, where n is the wall number (starting at 1 for the first wall). |\n| act_comp = | Names of components (names given in the chemical scheme) with activity coefficients stated in act_user variable below (if multiple names, separate with a comma). Must have same length as act_user.|\n| act_user = | Activity coefficients of components with names given in act_comp variable above, if multiple values then separate with a comma. Must have same length as act_comp. |\n| accom_coeff_comp = | Names of components (corresponding to names in chemical scheme file) with accommodation coefficients set by the user in the accom_coeff_user variable below, therefore length must equal that of accom_coeff_user. Multiple names must be separated by a comma. For any components not mentioned in accom_coeff_comp, accommodation coefficient defaults to 1.0. For an introduction to accommodation coefficients, the recommended reading is page 525 of [Seinfeld and Pandis 2016](https://www.wiley.com/en-us/Atmospheric+Chemistry+and+Physics%3A+From+Air+Pollution+to+Climate+Change%2C+3rd+Edition-p-9781118947401). |\n| accom_coeff_user = | Accommodation coefficients (dimensionless) of the components with names given in the variable accom_coeff_comp variable, therefore number of accommodation coefficients must equal number of names, with multiple coefficients separated by a comma. Can be a function of radius (m), in which case use the variable name radius, e.g: for NO2 and N2O5 with accommodation coefficients set to 1.0 and 6.09e-08/Rp, respectively, where Rp is radius of particle at a given time (m), you would use the inputs: accom_coeff_comp = NO2, N2O5 accom_coeff_user = 1., 6.09e-08/radius. For any components not mentioned in accom_coeff_comp, accommodation coefficient defaults to 1. 
See the description for the accom_coeff_comp variable for recommended reading on accommodation coefficients. |\n| pconct = | Times (seconds) at which seed particles of number concentration given in pconc are introduced to the chamber (by default this assumed to be instantaneous injection but a continuous injection can be specified using the pcont variable). If introduced at multiple times, separate times by a semicolon. For example, for a two size bin simulation with 10 and 5 particles/cm3 in the first and second size bin respectively introduced at time 0 s, and later at time 120 s seed particles of concentration 6 and 0 particles/cm3 in the first and second size bin respectively are introduced, the pconc input is: pconc = 10, 5; 6, 0 and the pconct input is: pconct = 0; 120 and the number_size_bins input is: number_size_bins = 2. Only one initial (pconct = 0.0 s) number size distribution is allowed. If you wish to have injection of particles after experiment start but close to time = 0 s please use a pconct value that is greater than 0.0 s but small compared to the recording time step. |\n| pconctf = | Flag for treatment of particle influxes. 0 (the default) means to treat injection times as explicitly stated in pconct, 1 means to repeat influxes every 24 hours, in this case any injections beyond 24 hours that are given by pconct and pconc are ignored. |\n| pconc = | Either total particle concentration per mode (modes separated by a colon), or particle concentration per size bin, in which case length should equal number of particle size bins and values should be separated by a comma (# particles/cm3 (air)). If total particle concentration per mode, the particles per mode will be spread across size bins, with the degree and location of spread based on the values in the std and mean_rad inputs, respectively. If seed aerosol introduced at multiple times during the simulation, separate times using a semicolon, however maintain consistency between times as to whether number size distributions are being expressed in terms of modes or explicitly via the concentration per size bin. For example, for a two size bin simulation with 10 and 5 particles/cm3 in the first and second size bin respectively introduced at time 0 s, and later at time 120 s seed particles of concentration 6 an 0 particles/cm3 in the first and second size bin respectively are introduced, the pconc input is: pconc = 10, 5; 6, 0 and the pconct input is: pconct = 0; 120 and the number_size_bins input is: number_size_bins = 2. When transferring measured number concentration from a particle counter the unit of #particles/cm3 used for the pconc input refers to total number concentration (dN) rather than the density spectral (dN/dLogDp). |\n| pcont = | Flag for whether the injection of particles given by pconct, pconc and associated inputs is continuous or instantaneous. Defaults to instantaneous (flag = 0), in which case units of pconc are # particles/cm3 or can be set to 1 for continuous, in which case units of pconc are # particles/cm3.s. E.g., to change the example given in pconc description from the default two instantaneous injections to a continuous injection followed by an instantaneous injection: pcont = 1; 0. |\n| seed_name = | Name of components comprising seed particles, can either be core for a component not present in the chemical scheme, or a name from the chemical scheme. If more than one component then separate names with a comma. Defaults to core. 
Do not include water in seed_name, instead use the chamber relative humidity setting along with the Vwat_inc and seed_eq_wat model variables to determine water\'s contribution and timing of contribution to seed particles. PyCHAM will error if a name given here is not included in the chemical scheme (and isn\'t core). It can be included in the chemical scheme with a zero reaction rate coefficient. IMPORTANT: if the components given in seed_name are volatile, the PyCHAM ODE solver may error since particles will evaporate. To prevent this, set the vapour pressures of seed components using the vol_Comp and volP model variables explained above. Note that seed components without a manually assigned vapour pressure (using the vol_Comp and volP model variables) will be automatically assigned a low vapour pressure to stop evaporation, but this can be changed through manual assignment using the vol_Comp and volP model variables. |\n| seed_mw = | Molecular weight of seed component (g/mol), if empty defaults to that of ammonium sulphate (132.14 g/mol). This only needs to be specified if seed_name input contains core. If seed_name is a component(s) from the chemical scheme, then its molecular weight is estimated by Pybel (based on the component SMILE strings). |\n| seed_dens = | Density of seed material (g/cm3), defaults to 1.0 g/cm3 if left empty. This only needs to be specified if the seed_name contains core. If seed_name is a component(s) from the chemical scheme, then density should be specified using the dens_comp and dens inputs, otherwise density is estimated by UManSysProp (based on the component SMILE string). |\n| seedx = | Mole fraction of non-water components in dry (no water) seed particles. Defaults to equal mole fractions for all non-water components stated in the seed_name model variable. Must match length of seed_name with the mole fractions of different components separated by a semicolon. If just one mole fraction provided for each component it will be assumed that this applies to all size bins. If mole fractions are to be provided for each size bin, then separate values with a comma and the number of values per component must equal the number_size_bins variable above (the first value representing the smallest size bin and so on). For example, for two components (seed_name = AMM_SUL, SOOT) and two size bins with mole fractions for the first component (AMM_SUL) of 0.1 and 0.2 in size bin 1 and size bin 2 and mole fractions for the second component (SOOT) of 0.9 and 0.8 in size bin 1 and size bin 2: seedx = 0.1, 0.2; 0.9, 0.8, where the component order (first, second, etc.), is given by the order stated in seed_name. Please note that PyCHAM does not automatically estimate the effect of particle composition on activity coefficients (which affect gas-particle partitioning), nor reactions at the particle surface, nor reactions in the particle bulk. Please see the [Gas-particle Partitioning](#Gas-particle-Partitioning) section for more related information. |\n| Vwat_inc = | Flag to say whether (set to 1) or not (set to 0) water volume is accounted for in the seed particle number size distribution. Default is 1. |\n| mean_rad = | Mean radius of particles (um). If in number size distributions given in modal mode, then mean_rad should represent the mean radius of the lognormal size distribution per mode (modes separated by a colon). Whereas if particle number concentrations given per size bin, mean_rad . 
Defaults to mean radius of the particle size bin radius bounds given by lower_part_size and upper_part_size inputs. If seed particles are introduced at more than one time, then mean_rad for the different times should be separated by a semicolon. For example, if seed particle with two modes of mean radii of 1.e-2 and 1.e-1 um introduced at start and with mean radii of 2.0e-2 and 2.e-1 um introduced after 120 s, the mean_rad input is: mean_rad = 1.e-2 : 1.e-1 ; 2.e-2 : 2.e-1 and the pconct input is pconct = 0. ; 120. |\n| seed_eq_wat = | If water is not included in the provided seed particle number size distribution (determined by the Vwat_inc model variable), this variable determines whether (1) or not (0) to allow water vapour in chamber (determined by the relative humidity) to equilibrate with seed particles prior to the experiment starting. Defaults to 1 which allows equilibrium. |\n| std = | Geometric mean standard deviation of seed particle number concentration (dimensionless) when total particle number concentrations per mode provided in pconc variable. If more than one mode, separate modes with a colon. Role explained online in scipy.stats.lognorm page, under pdf method: https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.lognorm.html. If left empty defaults to 1.2. If seed particles introduced at multiple times, then separate std for different times using a semicolon. For example, if seed particle with modes of standard deviation 1.2 and 1.3 introduced at start and with standard deviations of 1.4 and 1.5 introduced after 120 s, the std input is: std = 1.2 : 1.3 ; 1.4 : 1.5 and the pconct input is: pconct = 0. ; 120.. Note that std must be greater than 1.0, with wider distributions given by greater values. Will error if a number less than or equal to 1.0 supplied.|\n| seed_diss = | Dissociation constant(s) for seed component(s) (dimensionless), if empty defaults to one. If more than one component comprising seed particles please separate their dissociation constants with a comma and ensure that number of constants matches number of components named in seed_name input. |\n| z_prt_coeff = | Fraction of the total gas-particle partitioning coefficient below which partitioning of components (including water) to a size bin is treated as zero. Defaults to one billionth (1.0e-9). This setting is necessary for ODE solver stability when some particle size bins have relatively very small surface area. To the best of our knowledge, the default value of one billionth has no significant effect on model results. Please note that this is a variable required for numerical practicality and not for improved representation of simulated processes. See the [Numerical Considerations](#Numerical-Considerations) section for further information. |\n| light_time = | Times (s) for light status, corresponding to the elements of light_status (below), defaults to 0.0s (start of experiment). Use this setting regardless of whether light is natural or artificial (chamber lamps). For example, for a 4 hour experiment, with lights on for first half and lights off for second, use: light_time = 0.0, 7200.0 light_status = 1, 0. light_time must include 0 (experiment start) and a corresponding light status in the light_status model variable. |\n| light_status = | 1 for lights on and 0 for lights off, with times given in light_time (above), if empty defaults to lights off for whole experiment. Setting to off (0) means that even if variables defining light intensity above, the simulation will be dark. 
Use this variable for both natural and artificial (chamber lamps) light. The setting for a particular time is recognised when the time through experiment reached the time given in light_time. For example, for a 4 hour experiment, with lights on for first half and lights off for second, use: light_time = 0.0, 7200.0 light_status = 1, 0. If status not given for the experiment start (0.0 s), default is lights off at experiment start. |\n| trans_fac = | Transmission factor (0-1) for light. Use 1 for unattenuated light and lower values to represent attenuated light transmission, such as clouds or windows. Note, if using the Hayman approach to estimating MCM photolysis rates (for natural light), a single value should be supplied, which will be applied to all MCM photolysis reactions (i.e. not wavelength-dependent). For wavelength-dependent transmission factors (which are available when the user supplies their own actinic flux, see act_flux_path model variable), starting with the longest wavelength region, state the wavelength (nm) followed by the transmission factor (0-1 fraction) with an underscore separating the two. For example, for 49.2 \\% for wavelengths above UV and 15 \\% for wavelengths in and below UV range: 400_0.492, 0_0.15. For transmission factors at different times, separate times with a semi-colon and provide corresponding times (s) in trans_fact, e.g. for 1 from 0-30 minutes and 0 from 30-60 minutes: trans_fac = 1; 0 and trans_fact = 0.0; 1800. |\n| trans_fact = | Times (s) corresponding to the start time of the transmission factors given by trans_fac. Seperate different times with a semi-colon. E.g. for 1 from 0-30 minutes and 0 from 30-60 minutes: trans_fac = 1; 0 and trans_fact = 0.0; 1800. |\n| tf_UVC = | Fraction (0-1) of UVC light (100-280 nm) (where relevant) stated in the provided actinic flux file (specified in the act_flux_path model variable) allowed into chamber. E.g. when a UV-C lamp has variable input. |\n| tf_UVCt = | Times (s) through experiment when values for the tf_UVC model variable are valid. Defaults to 0.0 s (start of experiment), provide values in the same manner as described for the light_time model variable. |\n| tracked_comp = | Name of component(s) to track rate of concentration change (molecules/cm3/s); must match name given in chemical scheme (description of how to track multiple components with a group name given later in this section), and if multiple components given they must be separated by a comma. Can be left empty and then defaults to tracking no components. Use RO2_ind and RO_ind to track all individual alkyl peroxy radicals and alkoxy radicals, respectively. |\n| umansysprop_update = | Flag to update the UManSysProp module via internet connection: set to 1 to update and 0 to not update. If empty defaults to no update. In the case of no update, the module PyCHAM checks whether an existing UManSysProp module is available and if not tries to update via the internet. If update requested and either no internet or UManSysProp repository page is down, code stops with an error. |\n| chem_scheme_markers = | Markers denoting various sections of the user\'s chemical scheme. If left empty defaults to Kinetic Pre-Processor (KPP) formatting. 
If provided, must have following elements separated with commas (brackets at start of description give pythonic index): (0) marker for start of gas-phase reaction lines (just the first element), note this must be different to that for aqueous-phase reaction, (1) marker for peroxy radical list starting, note that this should occur at the start of the peroxy radical list in the chemical scheme file, (2) marker between peroxy radical names, (3) prefix to peroxy radical name, (4) string after peroxy radical name, (5) marker for end of peroxy radical list (if no marker, then leave empty), (6) marker for RO2 list continuation onto next line, note this may be the same as marker between peroxy radical names, (7) marker at the end of each line containing generic rate coefficients, (8) marker for start of aqueous-phase reaction lines (just the first element), note this must be different to that for gas-phase reaction, (9) marker for start of reaction rate coefficient section of an equation line (note this must be the same for gas- and aqueous-phase reactions), (10) marker for start of equation section of an equation line (note this must be the same for gas- and aqueous-phase reactions), (11) final element of an equation line (should be constant for all phases of reactions), (12) marker for start of reaction lines corresponing to surface (e.g. wall) (just the first element), note this must be different to that for gas-phase and aqueous-phase reaction. For example, for the MCM KPP format (which only includes gas-phase reactions): chem_scheme_markers = {, RO2, +, C(ind_, ), , &, , , :, }, ;, |\n| int_tol = | Integration tolerances, with absolute tolerance first followed by relative tolerance. Separate absolute and relative tolerance with a comma, for example: 1.e-6, 1.e-7. In ,https://docs.scipy.org/doc/scipy/reference/generated/scipy.integrate.solve_ivp.html, is explained that absolute tolerance controls absolute accuracy (number of correct decimal places), and relative tolerance controls relative accuracy (number of correct digits), since problems solved in PyCHAM often have several orders of magnitude difference in the absolute concentratation and rate of change of components, PyCHAM defaults to: 1.e9 for absolute and 1.e-3 for relative (i.e. relative tolerance dominates allowable error in integration in default PyCHAM). |\n| dil_fac = | Volume fraction per second air is diluted by, should be just a single number. Defaults to zero if left empty. For different dilutions at different times, separate dilutions of different times with a comma, and align with the dil_fact model variable described below. |\n| dil_fact = | Times through simulation (s), that dilutions given by the model variable dil_fac (described above) begin occuring at. Defaults to 0.0 s (start of simulation). Separate different times with a comma. |\n| H2O_hist = | Flag for particle-phase history with respect to water partitioning: 0 for dry and therefore on the deliquescence curve, 1 for wet and therefore on the efflorescence curve. Defaults to 1 if left empty.|\n| drh_ft = | Expression for deliquescence relative humidity (fraction between 0-1) as a function of temperature, where the usual python math symbols should be used for mathematical functions and TEMP should be used to represent temperature which has units K. E.g. for a deliquescence relative humidity at 298.15 K of 0.5 and an increase/decrease of 0.001 for every unit decrease/increase in temperature: drh_ft = 0.5-(1.e-3*(TEMP-298.15)). 
Defaults to a deliquescence relative humidity of 0.0 at all temperatures if left empty (which combined with the default H2O_hist model variable of 1 would result in the assumption of no crystallisation and therefore particle-phase always treated as a solution). For general information on deliquescence and efflorescence, please see page 410 of [Seinfeld and Pandis 2016](https://www.wiley.com/en-us/Atmospheric+Chemistry+and+Physics%3A+From+Air+Pollution+to+Climate+Change%2C+3rd+Edition-p-9781118947401) and references therein. Research into the gas-particle partitioning of water for various types of particles is ongoing, and a literature search is recommended for a particular PyCHAM simulation setup. Please see the [Gas-particle Partitioning](#Gas-particle-Partitioning) section for information on how PyCHAM uses this model variable.|\n| erh_ft = | Expression for efflorescence relative humidity (fraction between 0-1) as a function of temperature, where the usual python math symbols should be used for mathematical functions and TEMP should be used to represent temperature which has units K. E.g. for an efflorescence relative humidity at 298.15 K of 0.5 and an increase/decrease of 0.001 for every unit decrease/increase in temperature: erh_ft = 0.5-(1.e-3*(TEMP-298.15)). Defaults to an efflorescence relative humidity of 0.0 at all temperatures if left empty (which combined with the default H2O_hist model variable of 1 would result in the assumption of no crystallisation and therefore particle-phase always treated as a solution). For general information on deliquescence and efflorescence, please see page 410 of [Seinfeld and Pandis 2016](https://www.wiley.com/en-us/Atmospheric+Chemistry+and+Physics%3A+From+Air+Pollution+to+Climate+Change%2C+3rd+Edition-p-9781118947401) and references therein. Research into the gas-particle partitioning of water for various types of particles is ongoing, and a literature search is recommended for a particular PyCHAM simulation setup. Please see the [Gas-particle Partitioning](#Gas-particle-Partitioning) section for information on how PyCHAM uses this model variable.|\n| ser_H2O = | Integer value for whether to separate the integration of the water partitioning between vapour and particle problem from integration of other processes. Set to 0 to turn off separation and set to 1 to turn on (1 is default). See [Numerical Considerations](#Numerical-Considerations) for more information. |\n\n## Outputs\n\nModel results are saved to the folder specified by the user in the model variables file, using the model variable res_file_name (described above). PyCHAM automatically places this folder inside the PyCHAM/output/name of chemical scheme/ folder. Several files are stored in this output folder. The concentrations of components is stored in the file with name beginning \'concentrations_all_components_all_times\'. This is a comma separated value (csv) file. 
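To make the layout concrete, below is a minimal sketch of a model variables file; all values are purely illustrative (not recommendations), and APINENE is assumed here to be a component name in the chemical scheme:

```
res_file_name = my_example_results
total_model_time = 7200.
update_step = 60.
recording_time_step = 60.
temperature = 290.0, 300.0
tempt = 0.0, 3600.0
p_init = 101325.
rh = 0.65
light_status = 0
number_size_bins = 0
cont_infl = APINENE
Cinfl = 0.1, 0.1, 0.
cont_infl_t = 0., 1800., 3600.
```

Each line holds one variable, and any variable not stated takes its default as described in the table above.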
## Outputs

Model results are saved to the folder specified by the user in the model variables file, using the model variable res_file_name (described above). PyCHAM automatically places this folder inside the PyCHAM/output/name of chemical scheme/ folder. Several files are stored in this output folder. The concentrations of components are stored in the file with a name beginning 'concentrations_all_components_all_times'. This is a comma separated value (csv) file.

If you would like suggestions for code to open and view results, please see below, along with the plotter_gp.py and retr_out.py files.

A minimum working example (for plotting the time profile of the gas-phase concentration of a given component) is given below; note that you may need to activate the PyCHAM environment in order to have the necessary packages (numpy and matplotlib) available:

```
# import packages for array handling and plotting
import numpy as np
import matplotlib.pyplot as plt

# state path to output folder (for the Windows operating system use \\ to separate folders rather than /)
output_by_sim = 'path to your output folder'

# combine folder path with specific file name for component names
fname = str(output_by_sim + '/model_and_component_constants')

# open file containing component names
const_in = open(fname)
# create empty dictionary to hold file contents
const = {}

# loop through lines of file containing component names
for line in const_in.readlines():

    dlist = [] # empty list to hold values
    for i in line.split(',')[1::]:

        if (str(line.split(',')[0]) == 'chem_scheme_names') or (str(line.split(',')[0]) == 'SMILES') or (str(line.split(',')[0]) == 'space_mode'):
            # strip formatting characters from around each value
            i = i.strip('\n')
            i = i.strip('[')
            i = i.strip(']')
            i = i.strip(' ')
            i = i.strip('\'')
            dlist.append(str(i))

    const[str(line.split(',')[0])] = dlist

# close file with component names
const_in.close()

# isolate component names from dictionary
comp_names = const['chem_scheme_names']

# withdraw times (s)
fname = str(output_by_sim + '/time')
# load times
t_array = np.loadtxt(fname, delimiter=',', skiprows=1)
timehr = t_array/3600.0 # convert from s to hr

# combine folder path with specific file name for component concentrations
fname = str(output_by_sim + '/concentrations_all_components_all_times_gas_particle_wall')
# load file, omitting headers
y = np.loadtxt(fname, delimiter=',', skiprows=1)

# state name of component you want to plot
comp_names_to_plot = 'APINENE'

# get index of this component
indx_plot = [comp_names.index(comp_names_to_plot.strip())]

# make plot of gas-phase concentration against time
plt.plot(timehr, y[:, indx_plot])

# show plot
plt.show()
```

## Photochemistry
Chemical schemes may include photochemical reactions, where the rate of reaction depends on light intensity. Several of the model variables described in the Model Variables .txt file section above are relevant to correct modelling of photochemistry and are detailed further here.

The input variables light_status and light_time determine when the chamber is illuminated or dark.

The input variable act_flux_path states the actinic flux (photon/cm2/nm/s) as a function of wavelength (nm). For chambers with artificial light (lamps) it is necessary to supply this file so that PyCHAM knows the light intensity spectrum. PyCHAM will automatically interpolate the wavelengths and corresponding actinic fluxes given in act_flux_path to unit wavelength resolution (every 1 nm) to ensure correct integration of the photolysis rate across the spectrum. Inside the photofiles folder are examples of act_flux_path files (e.g. Example_act_flux.csv), including the file for the Manchester Aerosol Chamber (MAC) (MAC_Actinic_Flux_Spectrum.csv). The required format is a comma separated value file with wavelength (nm) in the first column and the corresponding actinic flux (photon/cm2/nm/s) in the second column. No headers are allowed.
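As a sketch of that layout (the flux values below are invented placeholders, not measurements), the first few lines of such a file could look like:

```
300, 1.0e13
301, 1.1e13
302, 1.2e13
```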
For chambers with natural light (open roof), users may also supply an act_flux_path file representing the relevant solar light intensity spectrum. However, if natural light is present and the chemical scheme is derived from the Master Chemical Mechanism, then PyCHAM will use the parameterisation of Hayman (1997), described in [Saunders et al. (2003)](https://doi.org/10.5194/acp-3-161-2003), to estimate the photolysis rates of the Master Chemical Mechanism. In this model setup users may also supply the day number of the year (DayOfYear model variable), the time of day that the experiment starts (daytime_start model variable) (Greenwich Mean Time (GMT)/Coordinated Universal Time (UTC) in seconds (not hours:minutes:seconds)), the latitude (lat model variable) (degrees) and the longitude (lon model variable) (degrees). These inputs allow the solar zenith angle to be calculated according to the first chapter of the 'Environmental UV Photobiology' textbook (1993): "The Atmosphere and UV-B Radiation at Ground Level" by S. Madronich. This treatment of solar radiation, and derivation of the MCM photolysis rates from it, is the default setting when light_status is set to illuminated.

Photolysis rate is the product of actinic flux, component absorption cross-sections (wavelength dependent) and quantum yield (wavelength dependent), integrated over the relevant range of the light spectrum. By default, PyCHAM assumes that the Master Chemical Mechanism photolysis reaction rate coefficients require estimation. For this reason, the PyCHAM software comes with the component absorption cross-sections and quantum yields recommended by the Master Chemical Mechanism v3.3.1 website: http://mcm.leeds.ac.uk/MCMv3.3.1/parameters/photolysis.htt.

For photolysis reactions in chemical schemes other than the Master Chemical Mechanism, the user can supply their own file of component absorption cross-sections and quantum yields (the values are to be contained in the same file). The file name should be stated in the photo_par_file model variable and the file should be stored in PyCHAM/photofiles. A short example is given in the photofiles folder, called example_inputs.txt. The file must be of .txt format with the formatting:
J_n_axs
wv_m, axs_m
J_n_qy
wv_m, qy_m
J_end
where n is the photochemical reaction number, axs represents the absorption cross-section (cm2/molecule), wv is wavelength (nm), _m is the wavelength number, and qy represents quantum yield (fraction). J_end marks the end of the photolysis file.

If a lamp power is given in terms of watts (J/s) and a spectrum per unit wavelength is provided, then it is possible to convert to actinic flux in units of photons/s. To do this, use the photon energy formula E = h*c/lambda to get the energy of one photon, where E is the photon energy (J), h is Planck's constant (6.626e-34 J s), c is the speed of light (3e8 m/s) and lambda is the wavelength (m). Then divide the provided watts (J/s) per unit wavelength by this photon energy to get the actinic flux (photons/s) (note that by definition the actinic flux corresponds to a unit wavelength and a unit area).
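A minimal sketch of this conversion follows; the 0.1 W/nm spectral power at 350 nm is an invented value for illustration, not a property of any particular lamp:

```
# convert lamp spectral power (W/nm) to photon flux (photons/s per nm)
h = 6.626e-34  # Planck's constant (J s)
c = 3.0e8  # speed of light (m/s)

wavelength = 350.0e-9  # wavelength (m), i.e. 350 nm
power_per_nm = 0.1  # illustrative lamp power at this wavelength (J/s per nm)

E_photon = h*c/wavelength  # energy of one photon (J), ~5.7e-19 J here
photon_flux = power_per_nm/E_photon  # photons/s per nm

print(photon_flux)  # approximately 1.8e17 photons/s per nm
```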
## Gas-particle Partitioning
Whilst the numerical treatment of gas-particle partitioning is described in the [Geophysical Model Development (GMD) paper](https://doi.org/10.5194/gmd-14-675-2021), here the link between model variables and gas-particle partitioning is described further.

Setting the seed particle composition through the model variables seed_name and seedx has no automatic effect on the activity coefficients of components (and therefore on gas-particle partitioning via activity), since PyCHAM does not automatically estimate activity coefficients.

To estimate gas-particle partitioning, PyCHAM uses the difference in concentration of a component between the gas and particle phases, which depends on the mole fraction of that component in the particle phase. Particle-phase mole fractions are a function of the concentrations of all components present in a particle and, for seed particle components, depend on the product of their particle-phase concentration and their dissociation constants (seed_diss model variable). The dissociation constant is the number of ions an inorganic component dissociates into at infinite dilution per molecule of that component, e.g. for ammonium sulphate the constant is 3 and for sodium chloride it is 2. Gas-particle partitioning is also a function of the activity coefficient of a component with respect to particles. The activity coefficient is not estimated automatically in PyCHAM, but may be set by the user through the act_user and act_comp model variables. This is currently limited to one value per component for all size bins throughout a whole simulation. More information on activity coefficients can be found on page 407 of [Seinfeld and Pandis 2016](https://www.wiley.com/en-us/Atmospheric+Chemistry+and+Physics%3A+From+Air+Pollution+to+Climate+Change%2C+3rd+Edition-p-9781118947401).

The default model treatment for gas-particle partitioning of water is the same as for other components, including the ability of the user to set the activity coefficient for water. The user may additionally (for water only) use the drh_ft, erh_ft and H2O_hist model variables to set whether water is able to partition to particles or not at a given relative humidity and water partitioning history. If partitioning is not allowed, particles are modelled as completely dry (zero mole fraction of water), whilst if partitioning is allowed, water partitions as described above in this section and in the [Geophysical Model Development (GMD) paper](https://doi.org/10.5194/gmd-14-675-2021). PyCHAM cannot automatically determine deliquescence or efflorescence relative humidities, nor activity coefficients of water.

## Numerical Considerations
In this section, aspects of PyCHAM affecting numerical stability and computation time speed-up are discussed.

Speed-up of computation time can be achieved by solving gas-particle partitioning of water separately from other processes (the other processes being gas-wall partitioning of water, partitioning of non-water components between gas-particle and gas-wall, and chemical reactions). For a system with ~1000 chemical reactions and 32 particle size bins, a speed-up of a factor of ~500 was seen when this separation was introduced. Separation is done by default, but can be turned off by setting the ser_H2O model variable to 0. To the best of our knowledge, solving water gas-particle partitioning separately has negligible effect on integration estimates.

The Ordinary Differential Equation (ODE) solver package is [solve_ivp](https://docs.scipy.org/doc/scipy/reference/generated/scipy.integrate.solve_ivp.html) from Scipy. For integration of the vapour-particle partitioning of water problem, the Radau method is used, as testing indicates this gives the least computation time. For integration of the other processes (vapour-particle partitioning of non-water components and chemistry), the backward differentiation formula (BDF) method is used, as it is well suited to stiff problems.

The user can supply their own integration tolerances (int_tol) in the model variables file. By default, PyCHAM uses tolerances that were found to suit the problems presented in the GMD software description paper (cited above). However, non-stiff problems can be solved with less computation using higher integration tolerances, whilst stiffer problems may become unstable unless lower tolerances are used.

The ODE solver can fail to find a stable solution. Sometimes this is due to a problem being too stiff. If this occurs, a message is printed to the PyCHAM GUI. The initial message states that instability has been noted and that a smaller integration time step is being tried to improve stability. Below a certain small time step the message changes and the simulation stops: it has been decided that a stable solution cannot be found. In this situation, to help the user identify the cause of instability, a file called ODE_solver_break_relevant_fluxes is produced, which contains the fluxes of components with negative concentrations output by the ODE solver (negative concentrations are physically unrealistic and therefore indicative of causing/resulting from integration instability). The user can identify small or large fluxes (relative to other fluxes in this file) to gain an indication of the processes causing instability, and therefore determine whether a change of inputs is possible or whether PyCHAM is unsuitable for the problem in question.

Stability of the ODE solver may be compromised by a change in gas-particle partitioning of water that is too great for the solver. We therefore recommend that instantaneous changes to relative humidity do not surpass 0.01. The example input titled rh_change_example gives an example of relatively large changes to relative humidity that PyCHAM can cope with; much greater changes would cause instability.

If the user-supplied model variables include a seed component that is volatile, then it may evaporate. This can cause instability in the ODE solver. Because this scenario is (in our experimental experience) unlikely in reality, PyCHAM automatically makes seed components non-volatile. However, this can be overridden by specifying a pure component saturation vapour pressure for seed components in the user-supplied model variables.
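As an illustration, the following model variable lines pin a seed component to a low vapour pressure using the vol_Comp and volP syntax from the table above; the component name AMM_SUL and the pressure value are invented for illustration:

```
seed_name = AMM_SUL
vol_Comp = AMM_SUL
volP = 1.e-10
```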
Computation time speed-up and ODE solver stability may be gained by increasing the value of the user-supplied model variable z_prt_coeff. This variable determines for which size bins gas-particle partitioning of components (including water) is allowed. If the gas-particle partitioning coefficient of a particle size bin is sufficiently low that it comprises less than this fraction of the total gas-particle partitioning coefficient (the sum of coefficients across all size bins), then partitioning to this size bin is treated as zero. If a size bin represents a relatively very small fraction of total partitioning, the partitioning problem can become too stiff and prevent the ODE solver from concluding. To the best of our knowledge, the default value of 1.e-9 for z_prt_coeff has negligible effect on model estimates. However, if users state a higher number, we recommend a sensitivity test for their simulation between this user-supplied number and one an order of magnitude higher, to check whether a significant effect is seen. If not, the user-supplied number is shown to be conservative, as it does not affect results; if it does, the user must decide whether the loss of accuracy is acceptable.

A change in boundary conditions, such as lighting, can cause concentrations to tend toward zero. In turn, this can cause negative concentrations of components. Therefore, limited negativity is allowed when checking for instability in the solution. This can be seen inside the ode_updater module, below the call to the ode_solv module. An example input folder called neg_conc_example can invoke the allowable negative concentrations. Gas-phase (and possibly particle-phase) output for the components with MCM names O, O1D and PROPALO shows a tendency toward zero after 50 minutes of the experiment, with fluctuations above and below zero.

## Quick Plotting Tab

Particle mass concentration (whether the total of all components or excluding certain components) is calculated based on the concentration of components and their molecular weight. At a given time, for a given component in a given particle size bin (note that the concentration of a component represents the sum over all particles present in a size bin), the formula is: ((# molecules/cm3)/(Avogadro's Number (# molecules/mol)))*(g/mol)*1.e12 = ug/m3. This equation can be extended over certain size bins and certain groupings of components to attain the desired particle-phase mass concentration.

## Flow Mode

When preparing model variable inputs for an instrument in flow mode (e.g. a flow tube or a chamber in flow reactor mode), the dilution factor and continuous influx of component model variables can be used, as sketched below. To simulate removal of a constant fraction of the chamber's volume per second, set the dil_fac model variable accordingly. For example, if the residence time in the instrument is 10 seconds, then 0.1 of the volume is removed per second, so use dil_fac = 0.1. If components of interest (including potentially water) are injected into the instrument to replace the components lost through chamber air being extracted, then use the model variables cont_infl, cont_infl_t and Cinfl to describe their continuous influx.
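For instance, a sketch of these lines for a 10 s residence time, with a continuous influx of a component named APINENE (a hypothetical choice here) at an illustrative 0.05 ppb/s throughout:

```
dil_fac = 0.1
cont_infl = APINENE
Cinfl = 0.05
cont_infl_t = 0.
```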
## Indoor Air Quality Modelling

For simulating atmospheres inside a building, an example set of input files is provided at PyCHAM/input/ind_AQ_ex. When simulating buildings for the first time, it is recommended to first read the notes above regarding Flow Mode, because ventilation of the building will extract a certain fraction of air per second, which will be replaced by outdoor air.

Other features that are relevant to indoor air simulation include: the wavelength-dependent attenuation of solar radiation (see the trans_fac model variable above); differing rates of deposition of gases and vapours to surfaces (possibly due to differing reaction rates on surfaces) (see the vol_Comp model variable above); and the emission of gases/vapours into the building, either from outdoors or from indoor surfaces, which can be dealt with through the cont_infl model variable (described above), which has the option of specifying a path to a spreadsheet containing emissions into the gas phase (e.g. if the number of components and/or the number of time points is sufficiently great that the information is more neatly contained in a spreadsheet).

PyCHAM has been tested against observations from homes with the purpose of verifying that it can include the key aspects of atmospheric science that are particularly important for the indoor environment: surface-gas-particle partitioning of organics (verified against [Lunderberg et al. (2020)](https://dx.doi.org/10.1021/acs.est.0c00966)), ozone reaction on surfaces to generate oxidised organics (verified against [Morrison et al. (2022)](https://pubs.rsc.org/en/content/articlelanding/2022/EM/D2EM00307D)), the effect of ventilation, the effect of indoor emissions on indoor aerosol (gas and particle), infiltration of outdoor pollution indoors, and infiltration of indoor pollution outdoors.

The [Lunderberg et al. (2020)](https://dx.doi.org/10.1021/acs.est.0c00966) study observes that components with volatilities comparable to C13-C22 alkanes show increased mixing ratios in the gas-phase+particle-phase with increased temperature, with decreased dependency as volatility decreases. They conclude that increased temperature decreases the fraction of these components on surfaces. Furthermore, components with volatilities comparable to C23-C31 alkanes show increased mixing ratios in the gas-phase+particle-phase with increased mass concentrations of particulate matter with diameter below 2.5 um. They conclude that a reservoir of these components on surfaces acts to modulate the particle-phase concentration. The PyCHAM inputs to generate the Lunderberg et al. (2020) results and the code to generate the PyCHAM results below are stored in PyCHAM/input/ind_AQ_ex/Lunderberg2020. Note that to run this code on your system, you will need to set the dir_path and ret_path variables to the paths for your system. The plots below show results from PyCHAM simulations and reproduce the Lunderberg et al. (2020) observations.

![Lunderberg Fig. 2](https://github.com/simonom/PyCHAM/blob/master/Images_for_readme/Lunderberg2020Fig2.png "Figure 2")
![Lunderberg Fig. 3](https://github.com/simonom/PyCHAM/blob/master/Images_for_readme/Lunderberg2020Fig3.png "Figure 3")
## Acknowledgements\nThis project has received funding from the European Union\'s Horizon 2020 research and innovation programme under grant agreement No 730997. Simon O\'Meara received funding support from the Natural Environment Research Council through the National Centre for Atmospheric Science.\n'",",https://doi.org/10.21105/joss.01918,https://doi.org/10.5194/gmd-14-675-2021,https://doi.org/10.5194/acp-3-161-2003,https://doi.org/10.5194/acp-3-161-2003,https://doi.org/10.5194/gmd-14-675-2021,https://doi.org/10.5194/gmd-14-675-2021","2019/09/13, 13:22:51",1503,GPL-3.0,93,304,"2022/03/08, 09:24:56",0,8,22,0,596,0,0.125,0.006734006734006703,"2020/12/17, 14:52:45",v2.1.1,0,2,false,,false,true,"Pravansh5/My_awesome_cart,kidiloskahyper45/kidiloska-Isabella-Rbot",,,,,,,,,, Chemical Lagrangian Model of the Stratosphere,A world leader in simulating exchange processes in the atmosphere across transport barriers such as stratosphere-troposphere exchange.,clams,,custom,,Atmospheric Chemistry and Aerosol,,,,,,,,,,https://jugit.fz-juelich.de/clams/CLaMS,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,, PyBox,A Python based box-model generator and simulator designed for atmospheric chemistry and aerosol studies.,loftytopping,https://github.com/loftytopping/PyBox.git,github,"chemistry,numba,fortran,atmospheric-science",Atmospheric Chemistry and Aerosol,"2020/11/03, 12:57:05",35,0,5,false,Python,,,"Python,Dockerfile,TeX",,"b'# PyBox [![DOI](http://joss.theoj.org/papers/10.21105/joss.00755/status.svg)](https://doi.org/10.21105/joss.00755)\n\nPyBox is a Python based box-model generator and simulator designed for atmospheric chemistry and aerosol studies. 
The first phase of the PyBox project is to develop a gas phase model, using the reaction information within the [Master Chemical Mechanism (MCM)](http://mcm.leeds.ac.uk/MCM/) as the basis, coupled with an idealised sectional aerosol model. PyBox also relates component properties, using molecular structural information, through the [UManSysProp](http://umansysprop.seaes.manchester.ac.uk) informatics suite. Public releases will occur according to the new processes added, agreement from any partner contributors and/or associated peer-reviewed papers.\n\nPlease check the project wiki page for more information on updates and planned releases. You can also check the current PyBox documentation at https://pybox.readthedocs.io/en/latest/\n\nThis project is licensed under the terms of the GNU General Public License v3.0, as provided with this repository. \n\n# Table of contents\n1. [Model overview](#Model-overview)\n2. [Dependencies and installation](#Dependencies)\n * 2a. [Using your own machine](#own)\n * 2b. [Binder](#Binder)\n * 2c. [Docker](#Docker)\n3. [Folder structure and running the model](#Folder-Structure)\n4. [Unit tests](#Automated-unit-tests)\n5. [Contributing](#Contributing)\n6. [Code of Conduct](#Code-of-Conduct)\n7. [Citation](#Citation)\n\n## Model overview\n\nPyBox works on the basis of reading a file that defines reactions between compounds in the gas phase and the associated reaction coefficient. For example, take the MCM [Alpha-Pinene](https://en.wikipedia.org/wiki/Alpha-Pinene) chemical mechanism file \'MCM_APINENE.eqn.txt\' stored in the \'mechanism_files\' directory of PyBox. This contains the following snippet of text:\n\n {125.} \t C96OOH + OH = C96O2 : \t1.90D-12*EXP(190/TEMP) \t;\n {126.} \t C96OOH + OH = NORPINAL + OH : \t1.30D-11 \t;\n {127.} \t C96OOH = C96O + OH : \tJ(41)+J(22) \t;\n {128.} \t C96NO3 + OH = NORPINAL + NO2 : \t2.88D-12 \t;\n {129.} \t C96NO3 = C96O + NO2 : \tJ(53)+J(22) \t;\n {130.} \t C96O = C97O2 : \t4.20D+10*EXP(-3523/TEMP) \t;\n {131.} \t C96OH + OH = NORPINAL + HO2 : \t7.67D-12 \t;\n {132.} \t C96OH = C96O + HO2 : \tJ(22) \t;\n\nThe equation number is defined first, followed by the reactants/products along with a defined rate coefficient. This equation file is parsed by functions in \'Parse_eqn_file.py\', providing information that can be used to set up and solve the relevant ordinary differential equations (ODEs) to simulate the evolution of the chemical mechanism. Each component in this chemical mechanism also has an associated record of chemical structure in the form of a [SMILES string](http://www.daylight.com/dayhtml/doc/theory/theory.smiles.html). This information is carried in a .xml file, provided by the MCM, and stored in the root directory of PyBox. Why is this important? Well, this information is taken by the [UManSysProp](http://umansysprop.seaes.manchester.ac.uk) informatics suite and allows us to predict properties of each compound, which in turn indicate whether it is likely to remain in the gas phase or condense to an existing particulate phase through gas-to-particle partitioning.
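To make the structure of these equation lines concrete, here is a small, self-contained sketch of how one such line could be parsed; this is an illustration under assumptions, not the actual logic of 'Parse_eqn_file.py':

```python
# Illustrative parse of one MCM-style equation line (an assumption-based
# sketch; not the actual Parse_eqn_file.py implementation).
import re

line = "{126.} \t C96OOH + OH = NORPINAL + OH : \t1.30D-11 \t;"

match = re.match(r"\{(\d+)\.\}\s*(.+?)=(.+?):\s*(.+?)\s*;", line)
number, lhs, rhs, rate = match.groups()

reactants = [s.strip() for s in lhs.split("+")]
products = [s.strip() for s in rhs.split("+")]
# Fortran-style 'D' exponents become Python 'E' exponents
rate_coefficient = float(rate.replace("D", "E"))

print(number, reactants, products, rate_coefficient)
# 126 ['C96OOH', 'OH'] ['NORPINAL', 'OH'] 1.3e-11
```

Note that rate expressions containing functions (e.g. EXP(190/TEMP) or J(41)) cannot simply be converted to a float; in practice they must be evaluated against the current temperature and photolysis rates.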
Before we take a look at the directory structure provided in this repository, let\'s deal with the dependencies.\n\n## Dependencies and installation \n\nPyBox has been built in the [Anaconda environment](https://www.anaconda.com/download/#macos). [Assimulo](http://www.jmodelica.org/assimulo) is *currently* the numerical core of PyBox. The Assimulo Ordinary Differential Equation (ODE) solver package allows us to use solvers designed for stiff systems. [UManSysProp](http://umansysprop.seaes.manchester.ac.uk) is used to automate predictions of pure component and mixture properties to allow gas-to-particle partitioning simulations *if you need to run the partitioning option*. \n \n### 2(a). Using your own machine\n\nHaving a Python distribution on your own machine is attractive for a number of reasons, not least gaining familiarity with building projects in your own time. If you haven\'t already, I would recommend installing the Anaconda distribution. You can download a copy using [this link](https://www.anaconda.com/products/individual). That page will give you the option to download a version for Windows, Mac or Linux. Download the graphical installer and, typically, accept all options. Once you have installed it, open a terminal. On Windows, go to the menu of options and find \'Anaconda Prompt\' under the Anaconda folder. On a Mac, go to Finder -> Utilities -> Terminal. In this terminal, type:\n\n> python\n\nDo you see a reference to Anaconda? For example, you may see something *similar* to:\n\n> Python 3.7.6 (default, Jan 8 2020, 20:23:39) [MSC v.1916 64 bit (AMD64)] :: Anaconda, Inc. on win32\n\nNow we are going to create a virtual environment to run our notebooks in. Virtual environments are a great way of maintaining a \'work space\' that is separate from your default installation. For example, if you start installing lots of bespoke modules, you may sometimes come across a clash of version numbers which then becomes tricky to maintain. In the worst-case scenario, this would require a re-installation of Python. So let\'s create a virtual environment for our project. You can switch these virtual environments on and off from the command line/terminal whenever you need them.\n\nIf you are on Windows, go back to the Anaconda prompt. If you are on a Mac or Linux, go back to the terminal. First we need to clone this repository. We should use Git for this, because with Git you can keep pulling updates from this repository. If you do not already have Git installed on your machine, you can get it from the [download page](https://git-scm.com/downloads). Once you have installed it, at the prompt/terminal type:\n\n> git clone https://github.com/loftytopping/PyBox.git\n\nThis will download the project to the location you are in already. You can change this location before running the above command, or move the folder later. Github also gives you the option to download a ZIP file of the entire project if you cannot or do not want to use Git. Once you have the project downloaded, open a command prompt/terminal and navigate to the project folder. We are now going to use the file \'environment.yml\' to create a new virtual environment. Run the following command:\n\n> conda env create -f environment.yml\n\nYou will see a number of packages being downloaded by the conda package manager, which is part of the Anaconda distribution. Accept any requests and, when finished, you will see a message that resembles the following:\n\n To activate this environment, use\n \n $ conda activate PyBox\n \n To deactivate an active environment, use\n \n $ conda deactivate\n \nThese are the commands for switching this new virtual environment on/off. Let\'s switch it on. Type the following in the command prompt/terminal:\n\n> conda activate PyBox\n\nIn the command prompt, you will see the environment name change from (base) to (PyBox). Now we can run the default gas phase example. 
Still within the project folder, type the following:\n\n> python Gas_simulation.py \n\n### 2(b). Binder \n\nIf you do not, or cannot, run Python from your own machine, we have provided the ability for you to interact with these files using Binder. The Binder project offers an easy place to share computing environments with everyone. It allows users to specify custom environments and share them with a [single link](https://jupyter.org/binder). Indeed, clicking the link below will spin up an individual session for you. Please bear in mind it can take a while to start, and if idle for a short period these sessions will stop. However, you can download your notebook file during the session. Every time you start a Binder link, it will start from scratch.\n\n[![Binder](https://mybinder.org/badge_logo.svg)](https://mybinder.org/v2/gh/loftytopping/Env_modelling/master)\n\n\n### 2(c). Using a Docker container \n\nAs a method for a \'fully automated setup\', I have provided the option to set up and run PyBox within a Docker container. I have provided a Dockerfile that will automatically build all dependencies within a new container based on the Ubuntu:16.04 image. To build the new image, assuming you have Docker installed, run the following command in the directory of the supplied Dockerfile:\n\n> docker build -t pybox .\n\nAfter this has completed (which may take some time), type the following to see your new image listed:\n\n> docker images\n\nTo create and run a new container based on this image, with the name \'project_pybox\', type:\n\n> docker run --name=project_pybox -it pybox\n\nThis will take you into the container. So, let\'s run the gas phase model in PyBox whilst you are there. Change directory to where PyBox is located:\n\n> cd /Code/Git_repos/PyBox/\n\nLet\'s run the default simulation:\n\n> python Gas_simulation.py \n\nDon\'t worry about the error message regarding the Matplotlib plots; this is a result of working in a Docker container. For those not familiar with standard Docker commands, please check the brief instructions provided in the Docker_README.txt file, where I give some additional examples of how to stop, restart and delete the PyBox container. \n\n## Folder structure and running the model \n\nBefore we run PyBox, let\'s make sure you have the correct link to the UManSysProp suite. Once you have cloned the UManSysProp repository, you will need to add its location in the Python script \'Property_calculation.py\' within the \'Aerosol\' directory of PyBox. As with the Assimulo package, you can test this import by opening an interactive Python shell and typing:\n\n> import sys\n\n> sys.path.append(\'<-- add your path here -->/UManSysProp_public/\')\n\n> from umansysprop import boiling_points\n\n> from umansysprop import vapour_pressures\n\n> from umansysprop import critical_properties\n\n> from umansysprop import liquid_densities\n\n> from umansysprop import partition_models\n\n> from umansysprop.forms import CoreAbundanceField\n\nIf you are happy all dependencies are installed and working, to run PyBox \'out of the box\', type the following in the root directory:\n\n> python Gas_simulation.py\n\nIf you are not running within a Docker container, you will see a plot displaying the concentration of two compounds over time. To understand what this simulation has actually done, let us now discuss the repository structure.\n\n### Directory layout\n
\n    .                   # Gas phase only model [using Numba]\n    ├── f2py            # Gas phase only model [using f2py Fortran to Python Interface Generator]\n    ├── Aerosol         # Coupled gas and gas-to-particle partitioning routines\n    │   └── f2py        # Coupled gas and gas-to-particle partitioning routines [using f2py]\n    ├── test            # Automated unit tests\n    │   └── data        # Data used in the automated unit tests\n    ├── mechanism_files # Copies of chemical mechanisms\n    ├── LICENSE\n    └── README.md\n \nCurrently there are two versions of the gas phase only model; one held within the root directory and the other in the folder \'f2py\':\n \n#### 1) Python [using Numba]\nThis is the default version in the root directory. Recall the parsing of the equation file? After new Python scripts are created for use within the ODE solvers, the [Numba](https://numba.pydata.org) package compiles these before the first simulation. Numba does this as the modules are imported. You will therefore find the initial pre-simulation stages of the first simulation take some time, but not in subsequent model simulations if you wish to study a fixed chemical mechanism. In this case Numba will not need to re-compile, even when you start with new initial conditions. Once you have conducted your first simulation, you may change the following within \'Gas_simulation.py\':\n\n files_exist = False\n\nto\n\n files_exist = True\n\nThe current version of PyBox provides you with an out-of-the-box example. It is based on the MCM representation of the degradation of [Alpha-Pinene](https://en.wikipedia.org/wiki/Alpha-Pinene). The Alpha-Pinene mechanism file is stored within the \'mechanism_files\' folder and referenced in the \'Gas_simulation.py\' file through: \n\n filename=\'MCM_APINENE\'\n\nAs already noted, to run the model, once you are happy all dependencies are installed, type the following from the root directory:\n\n> python Gas_simulation.py\n\nYou can modify the ambient conditions and species concentrations in \'Gas_simulation.py\'. First you can define ambient conditions and simulation time, the defaults given as:\n\n temp=288.15 # Kelvin\n RH=0.5 # RH/100% [range 0-0.99]\n #Define a start time \n hour_of_day=12.0 # 24 hr format\n simulation_time= 7200.0 # seconds\n batch_step=100.0 # seconds\n \nThe \'batch_step\' variable allows us to define when to stop/start/record outputs from our simulation for later use. The ODE methods can provide output at every internal time-step, so it is up to the user to use this information, or not, within the output of the \'ODE_solver.py\' script. A generic sketch of this batch-stepping pattern is given below.
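The batch-stepping pattern itself is generic; the sketch below illustrates it with scipy rather than PyBox's Assimulo-based solver, and the function and variable names are hypothetical:

```python
# Generic sketch of batch-stepped ODE integration (illustration only;
# PyBox itself uses Assimulo, and these names are hypothetical).
import numpy as np
from scipy.integrate import solve_ivp

def dydt(t, y):
    # toy first-order decay standing in for a full chemical mechanism
    return -1.0e-4 * y

simulation_time = 7200.0   # seconds
batch_step = 100.0         # seconds
y = np.array([30.0])       # e.g. an initial mixing ratio (ppb)

times, outputs = [0.0], [y.copy()]
t = 0.0
while t < simulation_time:
    # integrate one batch, then record only the state at the batch end
    sol = solve_ivp(dydt, (t, t + batch_step), y, method="BDF")
    y = sol.y[:, -1]
    t += batch_step
    times.append(t)
    outputs.append(y.copy())
```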
Following this, the default option for species concentrations is provided as:\n\n # Define initial concentrations, in ppb, of species using names from KPP file\n species_initial_conc=dict()\n species_initial_conc[\'O3\']=18.0\n species_initial_conc[\'APINENE\']=30.0\n\nIf you are not running within a Docker container, running \'Gas_simulation.py\' as provided will show a simple plot of Alpha-Pinene concentration decay over 2 hours.\n\n#### 2) Python + Fortran [using f2py Fortran to Python Interface Generator] \nWhilst the above variant uses the Numba package, in the folder \'f2py\' the same model is constructed using the [F2Py](https://docs.scipy.org/doc/numpy/f2py/) package, where functions that define the ODEs are converted into pre-compiled Fortran modules with the option to use [OpenMP](http://www.openmp.org) to exploit the number of cores available to you on any given platform. As before, please check the relevant files for defining initial conditions and species concentrations, and expect some compilation time during the first run. To run this simulation, type the following from the f2py directory:\n\n> python Gas_simulation_f2py.py\n\n\nExample output from the default gas phase simulation of Alpha-Pinene\n\n### Aerosol\n\nIn the Aerosol folder you can find gas-to-particle partitioning frameworks. There are two examples provided. The first, within the f2py folder, simulates the partitioning of compounds, again from the Alpha-Pinene chemical mechanism, to 16 size bins as the mechanism evolves over time. This uses properties calculated from the UManSysProp suite. To run the model, once you are happy all dependencies are installed, type the following from the Aerosol/f2py directory:\n\n> python Aerosol_simulation_f2py.py\n\nIn addition to the species concentrations and ambient conditions, you can change the size distribution and number of size bins in the following:\n\n num_bins=16 #Number of size bins\n total_conc=100 #Total particles per cc\n std=2.2 #Standard Deviation\n lowersize=0.01 #microns\n uppersize=1.0 #microns\n meansize=0.2 #microns\n \nPlease note this does require some knowledge of typical aerosol size distributions and reasonable number concentrations. Within the folder \'Fixed_yield\' is a partitioning-only model, using fixed total concentrations of compounds in the gas phase. It is important to note that much more work is planned on the aerosol model, since there are multiple properties and processes that affect gas-to-particle partitioning. The current version is not far beyond the most basic used in atmospheric research. Nonetheless, PyBox is designed with the community in mind and my goal is to include all relevant processes. This ethos is captured in the following note on contributing to the project.\n\n\n\nExample total organic aerosol loading from the default aerosol simulation of Alpha-Pinene\n\n\nExample normalised contributions per size bin for the default fixed yield simulations\n\n\n## Unit tests \n\nWithin the test folder, run the following command:\n\n> python test_modules.py -v\n\nThis will use the unittest Python module to test the output of generated functions used within the ODE solvers against pre-generated .npy files provided in the data subfolder of test.\n\n## Contributing\n\nContributions to PyBox are more than welcome. Box-models of aerosol systems can rely on many different process representations. It is thus difficult to define a \'standard\' full complexity model. 
There are many developments planned for PyBox, which you can follow from a scientific perspective in the project wiki. I am therefore very happy to discuss ideas for improvement and how to add/remove features. There are two key rules to follow:\n\n - Any addition must include appropriate unit tests\n - Any addition from a scientific process perspective must include a link to a peer-reviewed paper before it is accepted into the public branch\n\nPlease use the issue tracker at https://github.com/loftytopping/PyBox/issues if you want to notify me of an issue or need support. If you want to contribute, please either create an issue or make a pull request. Alternatively, come and see us in Manchester and/or let\'s meet for a coffee and a chat!\n\n## Code of Conduct\n\nPlease note that this project is released with a Contributor Code of Conduct. By participating in this project you agree to abide by its [terms](code-of-conduct.md). There needs to be greater support and recognition for software development and developers. PyBox can act as a vehicle for enabling better collaboration and therefore better science.\n\n## Citation\n\nIf you use PyBox in any study, we ask that you reference our paper in the Journal of Open Source Software [![DOI](http://joss.theoj.org/papers/10.21105/joss.00755/status.svg)](https://doi.org/10.21105/joss.00755). Citation: Topping et al. (2018). PyBox: An automated box-model generator for atmospheric chemistry and aerosol simulations. Journal of Open Source Software, 3(28), 755, https://doi.org/10.21105/joss.00755\n'",",https://doi.org/10.21105/joss.00755,https://doi.org/10.21105/joss.00755,https://doi.org/10.21105/joss.00755\n","2018/04/18, 10:09:33",2016,GPL-3.0,0,236,"2018/08/12, 19:43:58",5,0,7,0,1900,0,0,0.0,"2018/08/14, 11:27:32",1.0.1,0,1,false,,false,false,,,,,,,,,,, MICM Chemistry,A unique chemistry module that can be implemented in any atmosphere model used at NCAR.,NCAR,https://github.com/NCAR/micm.git,github,"atmospheric-chemistry,atmospheric-modeling,atmospheric-science,ode-solver,cuda,gpu,gpu-acceleration,hpc",Atmospheric Chemistry and Aerosol,"2023/10/25, 22:23:03",4,0,4,true,C++,National Center for Atmospheric Research,NCAR,"C++,CMake,Cuda,Python,PLSQL,Dockerfile",https://ncar.github.io/micm/,"b'MICM Chemistry\n==============\n\nModel Independent Chemical Module. MICM can be used to configure and solve atmospheric chemistry systems.\n\n[![GitHub Releases](https://img.shields.io/github/release/NCAR/micm.svg)](https://github.com/NCAR/micm/releases)\n[![License](https://img.shields.io/github/license/NCAR/micm.svg)](https://github.com/NCAR/micm/blob/master/LICENSE)\n[![Docker builds](https://github.com/NCAR/micm/actions/workflows/docker_and_coverage.yml/badge.svg)](https://github.com/NCAR/micm/actions/workflows/docker_and_coverage.yml)\n[![Windows](https://github.com/NCAR/micm/actions/workflows/windows.yml/badge.svg)](https://github.com/NCAR/micm/actions/workflows/windows.yml)\n[![Mac](https://github.com/NCAR/micm/actions/workflows/mac.yml/badge.svg)](https://github.com/NCAR/micm/actions/workflows/mac.yml)\n[![Ubuntu](https://github.com/NCAR/micm/actions/workflows/ubuntu.yml/badge.svg)](https://github.com/NCAR/micm/actions/workflows/ubuntu.yml)\n[![codecov](https://codecov.io/gh/NCAR/micm/branch/main/graph/badge.svg?token=ATGO4DKTMY)](https://codecov.io/gh/NCAR/micm)\n[![DOI](https://zenodo.org/badge/294492778.svg)](https://zenodo.org/badge/latestdoi/294492778)\n\nCopyright (C) 2018-2023 National Center for Atmospheric Research\n\n\n


\n\n> **Note**\n> MICM 3.x.x is part of a refactor and may include breaking changes across minor revision numbers\n> and partially implemented features.\n\n\n# Getting Started\n\n## Installing MICM locally\nTo build and install MICM locally, you must have CMake installed on your machine.\n\nOpen a terminal window, navigate to a folder where you would like the MICM files to exist,\nand run the following commands:\n\n```\ngit clone https://github.com/NCAR/micm.git\ncd micm\nmkdir build\ncd build\nccmake ..\nsudo make install -j 8\n```\n\nTo run the tests:\n\n```\nmake test\n```\n\nIf you would later like to uninstall MICM, you can run\n`sudo make uninstall` from the `build/` directory.\n\n## Options\n\nThere are multiple options for running MICM. You can use [json](https://github.com/nlohmann/json)\nto configure a solver, [llvm](https://llvm.org/) to JIT-compile\nsolvers on CPUs or [cuda](https://developer.nvidia.com/cuda-zone)-based solvers to solve chemistry on GPUs.\nPlease [read our docs](https://ncar.github.io/micm/getting_started.html) \nto learn how to enable these options.\n\n## Running a MICM Docker container\n\nYou must have [Docker Desktop](https://www.docker.com/get-started) installed and running.\nWith Docker Desktop running, open a terminal window.\nTo build the latest MICM release, run the following command to start the MICM container:\n\n```\ndocker run -it ghcr.io/ncar/micm:release bash\n```\n\nTo build the latest pre-release version of MICM, instead run:\n\n```\ngit clone https://github.com/NCAR/micm.git\ncd micm\ndocker build -t micm -f docker/Dockerfile .\ndocker run -it micm bash\n```\n\nInside the container, you can run the MICM tests from the `/build/` folder:\n\n```\ncd /build/\nmake test\n```\n\n# Using MICM\n\nThe following example solves the fictitious chemical system:\n\n```\nfoo --k1--> 0.8 bar + 0.2 baz\nfoo + bar --k2--> baz\n```\nThe `k1` and `k2` rate constants are for Arrhenius reactions. See the [MICM documentation](https://ncar.github.io/micm/) for details on the types of reactions available in MICM and how to configure them.
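For orientation, the two rate constants can be evaluated by hand. The sketch below assumes the conventional k = A * exp(C/T) form suggested by the `A_` and `C_` parameters used in the example code (other MICM Arrhenius parameters are taken to be at their neutral defaults):

```python
# Hand evaluation of the example's rate constants, assuming the
# k = A * exp(C / T) form implied by the A_ and C_ parameters below.
import math

T = 287.45  # K, the temperature set in the example

k1 = 1.0e-3 * math.exp(0.0 / T)    # r1: A_ = 1.0e-3, C_ defaulting to 0
k2 = 1.0e-5 * math.exp(110.0 / T)  # r2: A_ = 1.0e-5, C_ = 110.0

print(k1, k2)  # 1.0e-3 and roughly 1.47e-5
```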
To solve this system save the following code in a file named `foo_chem.cpp`:\n\n```c++\n#include <iomanip>\n#include <iostream>\n\n#include <micm/process/arrhenius_rate_constant.hpp>\n#include <micm/solver/rosenbrock.hpp>\n\nusing namespace micm;\n\nint main(const int argc, const char *argv[])\n{\n auto foo = Species{ ""Foo"" };\n auto bar = Species{ ""Bar"" };\n auto baz = Species{ ""Baz"" };\n\n Phase gas_phase{ std::vector<Species>{ foo, bar, baz } };\n\n System chemical_system{ SystemParameters{ .gas_phase_ = gas_phase } };\n\n Process r1 = Process::create()\n .reactants({ foo })\n .products({ Yield(bar, 0.8), Yield(baz, 0.2) })\n .rate_constant(ArrheniusRateConstant({ .A_ = 1.0e-3 }))\n .phase(gas_phase);\n\n Process r2 = Process::create()\n .reactants({ foo, bar })\n .products({ Yield(baz, 1) })\n .rate_constant(ArrheniusRateConstant({ .A_ = 1.0e-5, .C_ = 110.0 }))\n .phase(gas_phase);\n\n std::vector<Process> reactions{ r1, r2 };\n\n RosenbrockSolver<> solver{ chemical_system, reactions, RosenbrockSolverParameters::three_stage_rosenbrock_parameters() };\n\n State state = solver.GetState();\n\n state.conditions_[0].temperature_ = 287.45; // K\n state.conditions_[0].pressure_ = 101319.9; // Pa\n state.SetConcentration(foo, 20.0); // mol m-3\n\n std::cout << std::setw(5) << ""time [s],"" \n << std::setw(13) << ""foo, ""\n << std::setw(12) << ""bar, ""\n << std::setw(10) << ""baz"" << std::endl;\n for (int i = 0; i < 10; ++i)\n {\n auto result = solver.Solve(500.0, state);\n state.variables_ = result.result_;\n std::cout << std::setfill(\' \') << std::fixed\n << std::setw(8) << std::setprecision(2) << i * 500.0 << "", ""\n << std::setw(10) << std::setprecision(4) << state.variables_[0][state.variable_map_[""Foo""]] << "", ""\n << std::setw(10) << std::setprecision(4) << state.variables_[0][state.variable_map_[""Bar""]] << "", "" \n << std::setw(10) << std::setprecision(4) << state.variables_[0][state.variable_map_[""Baz""]]\n << std::endl;\n }\n\n return 0;\n}\n```\n\nTo build and run the example using GNU (assuming the default install location):\n```\ng++ -o foo_chem foo_chem.cpp -I/usr/local/micm-3.1.0/include -std=c++20\n./foo_chem\n```\n\nOutput:\n```\ntime [s], foo, bar, baz\n 0.00, 11.8435, 5.9048, 1.9070\n 500.00, 6.7920, 9.0460, 3.3173\n 1000.00, 3.8287, 10.7406, 4.2105\n 1500.00, 2.1381, 11.6637, 4.7394\n 2000.00, 1.1879, 12.1695, 5.0425\n 2500.00, 0.6581, 12.4475, 5.2133\n 3000.00, 0.3640, 12.6007, 5.3086\n 3500.00, 0.2011, 12.6851, 5.3616\n 4000.00, 0.1110, 12.7317, 5.3909\n 4500.00, 0.0613, 12.7574, 5.4071\n```\n
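As a rough cross-check of what the solver is doing, the same two-reaction system can be integrated with an off-the-shelf stiff ODE solver. The sketch below is plain scipy, not MICM, and reuses the rate constants and initial condition from the example (note that the example's table prints each state against the start time of its 500 s step, so this sketch's t = 500 row corresponds to the table's 0.00 row):

```python
# Independent integration of the foo/bar/baz system above with scipy
# (a cross-check sketch, not MICM).
import math
import numpy as np
from scipy.integrate import solve_ivp

T = 287.45
k1 = 1.0e-3 * math.exp(0.0 / T)
k2 = 1.0e-5 * math.exp(110.0 / T)

def dydt(t, y):
    foo, bar, baz = y
    r1 = k1 * foo
    r2 = k2 * foo * bar
    return [-r1 - r2,        # foo: consumed by both reactions
            0.8 * r1 - r2,   # bar: 0.8 yield from r1, consumed by r2
            0.2 * r1 + r2]   # baz: 0.2 yield from r1, produced by r2

sol = solve_ivp(dydt, (0.0, 5000.0), [20.0, 0.0, 0.0],
                method="BDF", t_eval=np.arange(500.0, 5500.0, 500.0),
                rtol=1e-8, atol=1e-10)
for t, (foo, bar, baz) in zip(sol.t, sol.y.T):
    print(f"{t:8.2f}, {foo:10.4f}, {bar:10.4f}, {baz:10.4f}")
```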
# Citation\n\nMICM is part of the MUSICA project and can be cited by reference to the MUSICA vision paper. The BibTeX entry below can be used to generate a citation for this.\n\n```\n@Article { acom.software.musica-vision,\n author = ""Gabriele G. Pfister and Sebastian D. Eastham and Avelino F. Arellano and Bernard Aumont and Kelley C. Barsanti and Mary C. Barth and Andrew Conley and Nicholas A. Davis and Louisa K. Emmons and Jerome D. Fast and Arlene M. Fiore and Benjamin Gaubert and Steve Goldhaber and Claire Granier and Georg A. Grell and Marc Guevara and Daven K. Henze and Alma Hodzic and Xiaohong Liu and Daniel R. Marsh and John J. Orlando and John M. C. Plane and Lorenzo M. Polvani and Karen H. Rosenlof and Allison L. Steiner and Daniel J. Jacob and Guy P. Brasseur"",\n title = ""The Multi-Scale Infrastructure for Chemistry and Aerosols (MUSICA)"",\n journal = ""Bulletin of the American Meteorological Society"",\n year = ""2020"",\n publisher = ""American Meteorological Society"",\n address = ""Boston MA, USA"",\n volume = ""101"",\n number = ""10"",\n doi = ""10.1175/BAMS-D-19-0331.1"",\n pages= ""E1743 - E1760"",\n url = ""https://journals.ametsoc.org/view/journals/bams/101/10/bamsD190331.xml""\n}\n```\n\n# Community and contributions\nWe welcome contributions and feedback from anyone, on everything from updating\nthe content or appearance of the documentation to new and\ncutting-edge science.\n\n- [Collaboration](https://github.com/NCAR/musica/blob/main/docs/Software%20Development%20Plan.pdf)\n - Anyone interested in scientific collaboration\nwhich would add new software functionality should read the [MUSICA software development plan](https://github.com/NCAR/musica/blob/main/docs/Software%20Development%20Plan.pdf).\n\n- [Code of conduct](CODE_OF_CONDUCT.md)\n - Please read this through so you understand the expectations for how to interact with this project.\n\n- [Contributor\'s guide](https://ncar.github.io/micm/contributing/index.html)\n - Before submitting a PR, please thoroughly read this so you understand our expectations. We reserve the right to reject any PR not meeting our guidelines.\n\n\n# Documentation\nPlease see the [MICM documentation](https://ncar.github.io/micm/) for detailed\ninstallation and usage instructions.\n\n# License\n\n- [Apache 2.0](/LICENSE)\n\nCopyright (C) 2018-2023 National Center for Atmospheric Research\n'",",https://zenodo.org/badge/latestdoi/294492778","2020/09/10, 18:37:15",1140,Apache-2.0,806,1017,"2023/10/25, 22:23:04",37,175,286,276,0,3,1.4,0.6305587229190421,"2023/09/25, 20:32:39",v3.2.0,0,12,false,,true,false,,,https://github.com/NCAR,http://ncar.ucar.edu,"Boulder, CO",,,https://avatars.githubusercontent.com/u/2007542?v=4,,, PySDM,Pythonic particle-based warm-rain/aqueous-chemistry cloud microphysics package.,open-atmos,https://github.com/open-atmos/PySDM.git,github,"physics-simulation,monte-carlo-simulation,gpu-computing,atmospheric-modelling,particle-system,numba,thrust,nvrtc,pint,atmospheric-physics,python,simulation,gpu,cuda,research,pypi-package",Atmospheric Chemistry and Aerosol,"2023/10/25, 21:45:25",45,7,13,true,Python,,open-atmos,"Python,TeX,Shell",https://open-atmos.github.io/PySDM/,"b'# PySDM\n\n[![Python 3](https://img.shields.io/static/v1?label=Python&logo=Python&color=3776AB&message=3)](https://www.python.org/)\n[![LLVM](https://img.shields.io/static/v1?label=LLVM&logo=LLVM&color=gold&message=Numba)](https://numba.pydata.org)\n[![CUDA](https://img.shields.io/static/v1?label=CUDA&logo=nVidia&color=87ce3e&message=ThrustRTC)](https://pypi.org/project/ThrustRTC/)\n[![Linux OK](https://img.shields.io/static/v1?label=Linux&logo=Linux&color=yellow&message=%E2%9C%93)](https://en.wikipedia.org/wiki/Linux)\n[![macOS OK](https://img.shields.io/static/v1?label=macOS&logo=Apple&color=silver&message=%E2%9C%93)](https://en.wikipedia.org/wiki/macOS)\n[![Windows 
OK](https://img.shields.io/static/v1?label=Windows&logo=Windows&color=white&message=%E2%9C%93)](https://en.wikipedia.org/wiki/Windows)\n[![Jupyter](https://img.shields.io/static/v1?label=Jupyter&logo=Jupyter&color=f37626&message=%E2%9C%93)](https://jupyter.org/)\n[![Maintenance](https://img.shields.io/badge/Maintained%3F-yes-green.svg)](https://github.com/open-atmos/PySDM/graphs/commit-activity)\n[![OpenHub](https://www.openhub.net/p/atmos-cloud-sim-uj-PySDM/widgets/project_thin_badge?format=gif)](https://www.openhub.net/p/atmos-cloud-sim-uj-PySDM)\n[![status](https://joss.theoj.org/papers/62cad07440b941f73f57d187df1aa6e9/status.svg)](https://joss.theoj.org/papers/62cad07440b941f73f57d187df1aa6e9)\n[![DOI](https://zenodo.org/badge/199064632.svg)](https://zenodo.org/badge/latestdoi/199064632) \n[![EU Funding](https://img.shields.io/static/v1?label=EU%20Funding%20by&color=103069&message=FNP&logoWidth=25&logo=image/png;base64,iVBORw0KGgoAAAANSUhEUgAAAC4AAAAeCAYAAABTwyyaAAAEzklEQVRYw9WYS2yUVRiGn3P5ZzozpZ3aUsrNgoKlKBINmkhpCCwwxIAhsDCpBBIWhmCMMYTEhSJ4i9EgBnSBEm81MRrFBhNXEuUSMCopiRWLQqEGLNgr085M5//POS46NNYFzHQ6qGc1i5nzP/P973m/9ztCrf7A8T9csiibCocUbvTzfxLcAcaM3cY3imXz25lT3Y34G7gQYAKV3+bFAHcATlBTPogJNADG92iY28FHW97kyPbnuW/W7xgzAhukQ9xe04PJeOT0HkQRwK0TlEeGWb/kOO9v3kdD3a8YK9GhDMfa6mg9fxunOm/lWPtcpDI4K7n/jnN8+uQbrFrUSiwU/DtSEUB/MsKKBT+zYslJqiYNgVE4JwhHkzy86wlWvrKVWDSZ/YFjZlU39yw4y/rGoyQGowWB67zl4QQue+jssMdXrQvZ/00jyeHwqCgDKwnsiJjSvkYAxsG5K9WsenYbJdqAtAjhCIxCSZt/4fK1w5A2WCvxrUAKCHwNVoA2aGmvq11jJQQapEXrgMBKqmJJugejKGWLIxXrBPFoigfv/omd675gRkU/xgqUDlAhH3UDaAAlLSqUQekAYyVTyhLs3tDMsvntlIYzOFcEcOcEGd9jx9oDbGs6QO0t/Tijxi9S4bhzxiWaVh5m94Zm0n7oui4ybo0raUlcncQnxx+g+WgDF/vLoYDmoqSl/dJUnt7XRCoTZjij0Z6Pc2LiNS4EBBkNvoeOJXN+yPWWSZeANOhwJq/98nKVwNdoL8B5AROxBKBL0gjh8DMhdCh3eJnrA0yqhLpplwmyup6IajvAOIGfKGVx3VmCRGnOMpe5QAdG0bT8CAeeep0d6z6nqjSJnQiZWEllLMWrmz6k+fE9rGk8MVqYgsGv5ZH2i1Opr+9kajzB5d74hKQ+KS3d/WVMLhtgdu1lriRiOR/4nDVunaR24x7qp3UV5Cb/fJvC83nv26W81LIK58SYNFmwq4hsGx/5BwKlzYRma2NUthgOJSew4i7ru9nJYCQF5tApb2yvjiDQKJV/IfJKh0o6qssSLKv/jcAoRKHQQzE2Lj2OMV5OkWFc4MZIpsev8uXWXRx6ZicbGk8QZLxxgwe+x/rlR3h3816+f2E7lbEU+ZDn3vKVpePCdFovzCISHqbl5EIoQOteKMPB1rto65zNyfOz+KOrGl06lHPQyi/WOohH0/T0l1MZH6A3GUEKl7Pmr2la6wBrBWWRDP2DUcqjKVKBGom9RZmABAykwnglafpSJSPQvsfiOR0EQ7ExVmazA8cY6N4K1iw6RdAXRwi4mgrheT5Dvs4LeuS81a15Ll/3dQisFVSVpnj7sf1sX/sZvhAc+6UOrQyBVUQ8gx/orFmDsZqtaw/y1qZ9zKjp5vDpenyjcNe+cLNmTiUdf/bEOddVQ0VpgsOn54ET+EYxvWKALSu+5tGG76it7MNaiZKGQ23zCIcMfUMxBnrjN3fmHHvCAlp+vJcXWx6itqoXpAEnUNLx8iMfo5Xh1i17R3PJYCpC2cZ3qK3sQ8WGEDDuXlAQuFKGHzpmopXhTNfk0bmxs7uC1w6uJul79AxFkMIiBJy5UoUWjrZLU5DCFdTARDHuDqVw+OkSwI0MCEW4gtNF2BPrBCo8fKNbtILWX9aUDqFqHnn7AAAAAElFTkSuQmCC)](https://www.fnp.org.pl/en/)\n[![PL Funding](https://img.shields.io/static/v1?label=PL%20Funding%20by&color=d21132&message=NCN&logoWidth=25&logo=image/png;base64,iVBORw0KGgoAAAANSUhEUgAAABQAAAANCAYAAACpUE5eAAAABmJLR0QA/wD/AP+gvaeTAAAAKUlEQVQ4jWP8////fwYqAiZqGjZqIHUAy4dJS6lqIOMdEZvRZDPcDQQAb3cIaY1Sbi4AAAAASUVORK5CYII=)](https://www.ncn.gov.pl/?language=en)\n[![US 
Funding](https://img.shields.io/static/v1?label=US%20DOE%20Funding%20by&color=267c32&message=ASR&logoWidth=25&logo=image/png;base64,iVBORw0KGgoAAAANSUhEUgAAAB4AAAAQCAMAAAA25D/gAAAABGdBTUEAALGPC/xhBQAAACBjSFJNAAB6JgAAgIQAAPoAAACA6AAAdTAAAOpgAAA6mAAAF3CculE8AAAASFBMVEVOTXyyIjRDQnNZWINZWITtzdFUU4BVVIFVVYHWiZM9PG/KZnNXVoJaWYT67/FKSXhgX4hgX4lcW4VbWoX03uHQeIN2VXj///9pZChlAAAAAWJLR0QXC9aYjwAAAAd0SU1FB+EICRMGJV+KCCQAAABdSURBVBjThdBJDoAgEETRkkkZBBX0/kd11QTTpH1/STqpAAwWBkobSlkGbt0o5xmEfqxDZJB2Q6XMoBwnVSbTylWp0hi42rmbwTOYPDfR5Kc+07IIUQQvghX9THsBHcES8/SiF0kAAAAldEVYdGRhdGU6Y3JlYXRlADIwMTctMDgtMDlUMTk6MDY6MzcrMDA6MDCX1tBgAAAAJXRFWHRkYXRlOm1vZGlmeQAyMDE3LTA4LTA5VDE5OjA2OjM3KzAwOjAw5oto3AAAAABJRU5ErkJggg==)](https://asr.science.energy.gov/)\n\n[![License: GPL v3](https://img.shields.io/badge/License-GPL%20v3-blue.svg)](https://www.gnu.org/licenses/gpl-3.0.html)\n\n[![Github Actions Build Status](https://github.com/open-atmos/PySDM/workflows/tests+artifacts+pypi/badge.svg?branch=main)](https://github.com/open-atmos/PySDM/actions)\n[![Appveyor Build status](http://ci.appveyor.com/api/projects/status/github/open-atmos/PySDM?branch=main&svg=true)](https://ci.appveyor.com/project/slayoo/pysdm/branch/main)\n[![Coverage Status](https://codecov.io/gh/open-atmos/PySDM/branch/main/graph/badge.svg)](https://app.codecov.io/gh/open-atmos/PySDM) \n[![PyPI version](https://badge.fury.io/py/PySDM.svg)](https://pypi.org/project/PySDM)\n[![API docs](https://img.shields.io/badge/API_docs-pdoc3-blue.svg)](https://open-atmos.github.io/PySDM/)\n\nPySDM is a package for simulating the dynamics of populations of particles. \nIt is intended to serve as a building block for simulation systems modelling\n fluid flows involving a dispersed phase,\n with PySDM being responsible for representation of the dispersed phase.\nCurrently, the development is focused on atmospheric cloud physics\n applications, in particular on modelling the dynamics of particles immersed in moist air \n using the particle-based (a.k.a. super-droplet) approach \n to represent aerosol/cloud/rain microphysics.\nThe package features a Pythonic high-performance implementation of the \n Super-Droplet Method (SDM) Monte-Carlo algorithm for representing collisional growth \n ([Shima et al. 2009](https://rmets.onlinelibrary.wiley.com/doi/abs/10.1002/qj.441)), hence the name. \n\nThere is a growing set of example Jupyter notebooks exemplifying how to perform \n various types of calculations and simulations using PySDM.\nMost of the example notebooks reproduce results and plots from the literature; see below for \n a list of examples and links to the notebooks (which can be either executed or viewed \n ""in the cloud"").\n\nPySDM has two alternative parallel number-crunching backends \n available: multi-threaded CPU backend based on [Numba](http://numba.pydata.org/) \n and GPU-resident backend built on top of [ThrustRTC](https://pypi.org/project/ThrustRTC/).\nThe [`Numba`](https://open-atmos.github.io/PySDM/PySDM/backends/numba/numba.html) backend (aliased ``CPU``) features multi-threaded parallelism for \n multi-core CPUs; it uses the just-in-time compilation technique based on the LLVM infrastructure.\nThe [`ThrustRTC`](https://open-atmos.github.io/PySDM/PySDM/backends/thrustRTC/thrustRTC.html) backend (aliased ``GPU``) offers GPU-resident operation of PySDM\n leveraging the [SIMT](https://en.wikipedia.org/wiki/Single_instruction,_multiple_threads) \n parallelisation model. 
\nUsing the ``GPU`` backend requires nVidia hardware and a [CUDA driver](https://developer.nvidia.com/cuda-downloads).\n\nFor an overview of PySDM features (and the preferred way to cite PySDM in papers), please refer to our JOSS papers:\n- [Bartman et al. 2022](https://doi.org/10.21105/joss.03219) (PySDM v1).\n- [de Jong, Singer et al. 2023](https://doi.org/10.21105/joss.04968) (PySDM v2).\n \nPySDM includes an extension of the SDM scheme to represent collisional breakup, described in [de Jong, Mackay et al. 2023](https://doi.org/10.5194/gmd-16-4193-2023). \nFor a list of talks and other materials on PySDM, see the [project wiki](https://github.com/open-atmos/PySDM/wiki).\n\nA [pdoc-generated](https://pdoc3.github.io/pdoc) documentation of the PySDM public API is maintained at: [https://open-atmos.github.io/PySDM](https://open-atmos.github.io/PySDM) \n\n## Example Jupyter notebooks (reproducing results from literature):\n\nSee [PySDM-examples README](https://github.com/open-atmos/PySDM/blob/main/examples/README.md).\n\n![animation](https://github.com/open-atmos/PySDM/wiki/files/kinematic_2D_example.gif)\n\n## Dependencies and Installation\n\nPySDM dependencies are: [Numpy](https://numpy.org/), [Numba](http://numba.pydata.org/), [SciPy](https://scipy.org/), \n[Pint](https://pint.readthedocs.io/), [chempy](https://pypi.org/project/chempy/), \n[pyevtk](https://pypi.org/project/pyevtk/),\n[ThrustRTC](https://fynv.github.io/ThrustRTC/) and [CURandRTC](https://github.com/fynv/CURandRTC).\n\nTo install PySDM using ``pip``, use: ``pip install PySDM`` \n(or ``pip install git+https://github.com/open-atmos/PySDM.git`` to get updates\nbeyond the latest release).\n\nConda users may use ``pip`` as well; see the [Installing non-conda packages](https://docs.conda.io/projects/conda/en/latest/user-guide/tasks/manage-pkgs.html#installing-non-conda-packages) section in the conda docs. 
Dependencies of PySDM are available at the following conda channels:\n- numba: [numba](https://anaconda.org/numba/numba)\n- conda-forge: [pyevtk](https://anaconda.org/conda-forge/pyevtk) and [pint](https://anaconda.org/conda-forge/pint)\n- fyplus: [ThrustRTC](https://anaconda.org/fyplus/thrustrtc), [CURandRTC](https://anaconda.org/fyplus/curandrtc)\n- bjodah: [chempy](https://anaconda.org/bjodah/chempy)\n- nvidia: [cudatoolkit](https://anaconda.org/nvidia/cudatoolkit)\n\nFor development purposes, we suggest cloning the repository and installing it using ``pip install -e .``.\nTest-time dependencies can be installed with ``pip install -e .[tests]``.\n\nPySDM examples constitute the [``PySDM-examples``](https://github.com/open-atmos/PySDM-examples) package.\nThe examples have additional dependencies listed in the [``PySDM_examples`` package ``setup.py``](https://github.com/open-atmos/PySDM/blob/main/examples/setup.py) file.\nRunning the example Jupyter notebooks requires the ``PySDM_examples`` package to be installed.\nThe suggested install and launch steps are:\n```\ngit clone https://github.com/open-atmos/PySDM.git\ncd PySDM/examples\npip install -e .\njupyter-notebook\n```\nAlternatively, one can also install the examples package from pypi.org by \nusing ``pip install PySDM-examples`` (note that this covers only the supporting .py files,\nnot the notebooks themselves).\n\n## Submodule organization\n```mermaid\nmindmap\n root((PySDM))\n Builder\n Formulae\n Particulator\n ((attributes))\n (physics)\n DryVolume: ExtensiveAttribute\n Kappa: DerivedAttribute\n ...\n (chemistry)\n Acidity\n ...\n (...)\n ((backends))\n CPU\n GPU\n ((dynamics))\n AqueousChemistry\n Collision\n Condensation\n ...\n ((environments))\n Box\n Parcel\n Kinematic2D\n ...\n ((initialisation))\n (spectra)\n Lognormal\n Exponential\n ...\n (sampling)\n (spectral_sampling)\n ConstantMultiplicity\n UniformRandom\n Logarithmic\n ...\n (...) \n (...)\n ((physics))\n (hygroscopicity)\n KappaKoehler\n ...\n (condensation_coordinate)\n Volume\n VolumeLogarithm\n (...)\n ((products))\n (size_spectral)\n EffectiveRadius\n WaterMixingRatio\n ...\n (ambient_thermodynamics)\n AmbientRelativeHumidity\n ...\n (...)\n```\n\n## Hello-world coalescence example in Python, Julia and Matlab\n\nIn order to depict the PySDM API with a practical example, the following\n listings provide sample code roughly reproducing \n Figure 2 from the [Shima et al. 2009 paper](http://doi.org/10.1002/qj.441)\n using PySDM from Python, Julia and Matlab.\nIt is a [`Coalescence`](https://open-atmos.github.io/PySDM/PySDM/dynamics/coalescence.html)-only set-up in which the initial particle size \n spectrum is [`Exponential`](https://open-atmos.github.io/PySDM/PySDM/initialisation/spectra.html#PySDM.initialisation.spectra.Exponential) and is deterministically sampled to match\n the condition of each super-droplet having equal initial multiplicity:\n
\n**Julia**\n\n```Julia\nusing Pkg\nPkg.add(""PyCall"")\nPkg.add(""Plots"")\nPkg.add(""PlotlyJS"")\n\nusing PyCall\nsi = pyimport(""PySDM.physics"").si\nConstantMultiplicity = pyimport(""PySDM.initialisation.sampling.spectral_sampling"").ConstantMultiplicity\nExponential = pyimport(""PySDM.initialisation.spectra"").Exponential\n\nn_sd = 2^15\ninitial_spectrum = Exponential(norm_factor=8.39e12, scale=1.19e5 * si.um^3)\nattributes = Dict()\nattributes[""volume""], attributes[""multiplicity""] = ConstantMultiplicity(spectrum=initial_spectrum).sample(n_sd)\n```\n
\n
\n**Matlab**\n\n```Matlab\nsi = py.importlib.import_module(\'PySDM.physics\').si;\nConstantMultiplicity = py.importlib.import_module(\'PySDM.initialisation.sampling.spectral_sampling\').ConstantMultiplicity;\nExponential = py.importlib.import_module(\'PySDM.initialisation.spectra\').Exponential;\n\nn_sd = 2^15;\ninitial_spectrum = Exponential(pyargs(...\n \'norm_factor\', 8.39e12, ...\n \'scale\', 1.19e5 * si.um ^ 3 ...\n));\ntmp = ConstantMultiplicity(initial_spectrum).sample(int32(n_sd));\nattributes = py.dict(pyargs(\'volume\', tmp{1}, \'multiplicity\', tmp{2}));\n```\n
\n
\n**Python**\n\n```Python\nfrom PySDM.physics import si\nfrom PySDM.initialisation.sampling.spectral_sampling import ConstantMultiplicity\nfrom PySDM.initialisation.spectra.exponential import Exponential\n\nn_sd = 2 ** 15\ninitial_spectrum = Exponential(norm_factor=8.39e12, scale=1.19e5 * si.um ** 3)\nattributes = {}\nattributes[\'volume\'], attributes[\'multiplicity\'] = ConstantMultiplicity(initial_spectrum).sample(n_sd)\n```\n
\n\nThe key element of the PySDM interface is the [``Particulator``](https://open-atmos.github.io/PySDM/PySDM/particulator.html) \n class, instances of which are used to manage the system state and control the simulation.\nInstantiation of the [``Particulator``](https://open-atmos.github.io/PySDM/PySDM/particulator.html) class is handled by the [``Builder``](https://open-atmos.github.io/PySDM/PySDM/builder.html),\n as exemplified below:\n
\n**Julia**\n\n```Julia\nBuilder = pyimport(""PySDM"").Builder\nBox = pyimport(""PySDM.environments"").Box\nCoalescence = pyimport(""PySDM.dynamics"").Coalescence\nGolovin = pyimport(""PySDM.dynamics.collisions.collision_kernels"").Golovin\nCPU = pyimport(""PySDM.backends"").CPU\nParticleVolumeVersusRadiusLogarithmSpectrum = pyimport(""PySDM.products"").ParticleVolumeVersusRadiusLogarithmSpectrum\n\nradius_bins_edges = 10 .^ range(log10(10*si.um), log10(5e3*si.um), length=32) \n\nbuilder = Builder(n_sd=n_sd, backend=CPU())\nbuilder.set_environment(Box(dt=1 * si.s, dv=1e6 * si.m^3))\nbuilder.add_dynamic(Coalescence(collision_kernel=Golovin(b=1.5e3 / si.s)))\nproducts = [ParticleVolumeVersusRadiusLogarithmSpectrum(radius_bins_edges=radius_bins_edges, name=""dv/dlnr"")] \nparticulator = builder.build(attributes, products)\n```\n
\n
\n**Matlab**\n\n```Matlab\nBuilder = py.importlib.import_module(\'PySDM\').Builder;\nBox = py.importlib.import_module(\'PySDM.environments\').Box;\nCoalescence = py.importlib.import_module(\'PySDM.dynamics\').Coalescence;\nGolovin = py.importlib.import_module(\'PySDM.dynamics.collisions.collision_kernels\').Golovin;\nCPU = py.importlib.import_module(\'PySDM.backends\').CPU;\nParticleVolumeVersusRadiusLogarithmSpectrum = py.importlib.import_module(\'PySDM.products\').ParticleVolumeVersusRadiusLogarithmSpectrum;\n\nradius_bins_edges = logspace(log10(10 * si.um), log10(5e3 * si.um), 32);\n\nbuilder = Builder(pyargs(\'n_sd\', int32(n_sd), \'backend\', CPU()));\nbuilder.set_environment(Box(pyargs(\'dt\', 1 * si.s, \'dv\', 1e6 * si.m ^ 3)));\nbuilder.add_dynamic(Coalescence(pyargs(\'collision_kernel\', Golovin(1.5e3 / si.s))));\nproducts = py.list({ ParticleVolumeVersusRadiusLogarithmSpectrum(pyargs( ...\n \'radius_bins_edges\', py.numpy.array(radius_bins_edges), ...\n \'name\', \'dv/dlnr\' ...\n)) });\nparticulator = builder.build(attributes, products);\n```\n
\n
\n**Python**\n\n```Python\nimport numpy as np\nfrom PySDM import Builder\nfrom PySDM.environments import Box\nfrom PySDM.dynamics import Coalescence\nfrom PySDM.dynamics.collisions.collision_kernels import Golovin\nfrom PySDM.backends import CPU\nfrom PySDM.products import ParticleVolumeVersusRadiusLogarithmSpectrum\n\nradius_bins_edges = np.logspace(np.log10(10 * si.um), np.log10(5e3 * si.um), num=32)\n\nbuilder = Builder(n_sd=n_sd, backend=CPU())\nbuilder.set_environment(Box(dt=1 * si.s, dv=1e6 * si.m ** 3))\nbuilder.add_dynamic(Coalescence(collision_kernel=Golovin(b=1.5e3 / si.s)))\nproducts = [ParticleVolumeVersusRadiusLogarithmSpectrum(radius_bins_edges=radius_bins_edges, name=\'dv/dlnr\')]\nparticulator = builder.build(attributes, products)\n```\n
\n\nThe ``backend`` argument may be set to ``CPU`` or ``GPU``,\n which translates to choosing the multi-threaded backend or the \n GPU-resident computation mode, respectively.\nThe employed [`Box`](https://open-atmos.github.io/PySDM/PySDM/environments/box.html) environment corresponds to a zero-dimensional framework\n (particle positions are not considered).\nThe vectors of particle multiplicities ``n`` and particle volumes ``v`` are\n used to initialise super-droplet attributes.\nThe [`Coalescence`](https://open-atmos.github.io/PySDM/PySDM/dynamics/coalescence.html)\n Monte-Carlo algorithm (Super Droplet Method) is registered as the only\n dynamic in the system.\nFinally, the [`build()`](https://open-atmos.github.io/PySDM/PySDM/builder.html#PySDM.builder.Builder.build) method is used to obtain an instance\n of [`Particulator`](https://open-atmos.github.io/PySDM/PySDM/particulator.html#PySDM.particulator.Particulator) which can then be used to control time-stepping and\n access the simulation state.\n\nThe [`run(nt)`](https://open-atmos.github.io/PySDM/PySDM/particulator.html#PySDM.particulator.Particulator.run) method advances the simulation by ``nt`` timesteps.\nIn the listing below, its usage is interleaved with plotting logic\n which displays a histogram of the particle mass distribution \n at selected timesteps:\n
\n**Julia**\n\n```Julia\nrho_w = pyimport(""PySDM.physics.constants_defaults"").rho_w\nusing Plots; plotlyjs()\n\nfor step = 0:1200:3600\n particulator.run(step - particulator.n_steps)\n plot!(\n radius_bins_edges[1:end-1] / si.um,\n particulator.products[""dv/dlnr""].get()[:] * rho_w / si.g,\n linetype=:steppost,\n xaxis=:log,\n xlabel=""particle radius [µm]"",\n ylabel=""dm/dlnr [g/m^3/(unit dr/r)]"",\n label=""t = $step s""\n ) \nend\nsavefig(""plot.svg"")\n```\n
\n
\n**Matlab**\n\n```Matlab\nrho_w = py.importlib.import_module(\'PySDM.physics.constants_defaults\').rho_w;\n\nfor step = 0:1200:3600\n particulator.run(int32(step - particulator.n_steps));\n x = radius_bins_edges / si.um;\n y = particulator.products{""dv/dlnr""}.get() * rho_w / si.g;\n stairs(...\n x(1:end-1), ... \n double(py.array.array(\'d\',py.numpy.nditer(y))), ...\n \'DisplayName\', sprintf(""t = %d s"", step) ...\n );\n hold on\nend\nhold off\nset(gca,\'XScale\',\'log\');\nxlabel(\'particle radius [µm]\')\nylabel(""dm/dlnr [g/m^3/(unit dr/r)]"")\nlegend()\n```\n
\n
\n**Python**\n\n```Python\nfrom PySDM.physics.constants_defaults import rho_w\nfrom matplotlib import pyplot\n\nfor step in [0, 1200, 2400, 3600]:\n particulator.run(step - particulator.n_steps)\n pyplot.step(x=radius_bins_edges[:-1] / si.um,\n y=particulator.products[\'dv/dlnr\'].get()[0] * rho_w / si.g,\n where=\'post\', label=f""t = {step}s"")\n\npyplot.xscale(\'log\')\npyplot.xlabel(\'particle radius [µm]\')\npyplot.ylabel(""dm/dlnr [g/m$^3$/(unit dr/r)]"")\npyplot.legend()\npyplot.savefig(\'readme.png\')\n```\n
\n\nThe resultant plot (generated with the Python code) looks as follows:\n\n![plot](https://github.com/open-atmos/PySDM/releases/download/tip/readme.png)\n\nThe component submodules used to create this simulation are visualized below:\n```mermaid\n graph\n COAL["":Coalescence""] --->|passed as arg to| BUILDER_ADD_DYN([""Builder.add_dynamic()""])\n BUILDER_INSTANCE[""builder :Builder""] -...-|has a method| BUILDER_BUILD([""Builder.build()""])\n ATTRIBUTES[attributes: dict] -->|passed as arg to| BUILDER_BUILD\n N_SD[""n_sd :int""] -->|passed as arg to| BUILDER_INIT\n BUILDER_INSTANCE -..-|has a method| BUILDER_SET_ENV([""Builder.set_environment()""])\n BUILDER_INIT([""Builder.__init__()""]) ----->|instantiates| BUILDER_INSTANCE\n BUILDER_INSTANCE -..-|has a method| BUILDER_ADD_DYN([""Builder.add_dynamic()""])\n ENV_INIT([""Box.__init__()""]) --->|instantiates| ENV\n DT[dt :float] ----->|passed as arg to| ENV_INIT\n DV[dv :float] ----->|passed as arg to| ENV_INIT\n ENV["":Box""] -->|passed as arg to| BUILDER_SET_ENV\n B[""b: float""] --->|passed as arg to| KERNEL_INIT([""Golovin.__init__()""])\n KERNEL_INIT -->|instantiates| KERNEL\n KERNEL[collision_kernel: Golovin] -->|passed as arg to| COAL_INIT([""Coalescence.__init__()""])\n COAL_INIT -->|instantiates| COAL\n PRODUCTS[products: list] ----->|passed as arg to| BUILDER_BUILD\n NORM_FACTOR[norm_factor: float]-->|passed as arg to| EXP_INIT\n SCALE[scale: float]-->|passed as arg to| EXP_INIT\n EXP_INIT([""Exponential.__init__()""]) -->|instantiates| IS\n IS[""initial_spectrum :Exponential""] -->|passed as arg to| CM_INIT\n CM_INIT([""ConstantMultiplicity.__init__()""]) -->|instantiates| CM_INSTANCE\n CM_INSTANCE["":ConstantMultiplicity""] -.-|has a method| SAMPLE\n SAMPLE([""ConstantMultiplicity.sample()""]) -->|returns| n\n SAMPLE -->|returns| volume\n n -->|added as element of| ATTRIBUTES\n PARTICULATOR_INSTANCE -.-|has a method| PARTICULATOR_RUN([""Particulator.run()""])\n volume -->|added as element of| ATTRIBUTES\n BUILDER_BUILD -->|returns| PARTICULATOR_INSTANCE[""particulator :Particulator""]\n PARTICULATOR_INSTANCE -.-|has a field| PARTICULATOR_PROD([""Particulator.products:dict""])\n BACKEND_INSTANCE[""backend :CPU""] -->|passed as arg to| BUILDER_INIT\n PRODUCTS -.-|accessible via| PARTICULATOR_PROD\n NP_LOGSPACE([""np.logspace()""]) -->|returns| EDGES \n EDGES[radius_bins_edges: np.ndarray] -->|passed as arg to| SPECTRUM_INIT\n SPECTRUM_INIT[""ParticleVolumeVersusRadiusLogarithmSpectrum.__init__()""] -->|instantiates| SPECTRUM\n SPECTRUM["":ParticleVolumeVersusRadiusLogarithmSpectrum""] -->|added as element of| PRODUCTS\n\n click COAL ""https://open-atmos.github.io/PySDM/PySDM/dynamics/collisions/collision.html""\n click BUILDER_INSTANCE ""https://open-atmos.github.io/PySDM/PySDM/builder.html""\n click BUILDER_INIT ""https://open-atmos.github.io/PySDM/PySDM/builder.html""\n click BUILDER_ADD_DYN ""https://open-atmos.github.io/PySDM/PySDM/builder.html""\n click BUILDER_SET_ENV ""https://open-atmos.github.io/PySDM/PySDM/builder.html""\n click ENV_INIT ""https://open-atmos.github.io/PySDM/PySDM/environments/index.html""\n click ENV ""https://open-atmos.github.io/PySDM/PySDM/environments/index.html""\n click KERNEL_INIT ""https://open-atmos.github.io/PySDM/PySDM/dynamics/collisions/collision_kernels/index.html""\n click KERNEL ""https://open-atmos.github.io/PySDM/PySDM/dynamics/collisions/collision_kernels/index.html""\n click EXP_INIT ""https://open-atmos.github.io/PySDM/PySDM/initialisation/spectra/index.html""\n click IS 
""https://open-atmos.github.io/PySDM/PySDM/initialisation/spectra/index.html""\n click CM_INIT ""https://open-atmos.github.io/PySDM/PySDM/initialisation/sampling/spectral_sampling.html""\n click CM_INSTANCE ""https://open-atmos.github.io/PySDM/PySDM/initialisation/sampling/spectral_sampling.html""\n click SAMPLE ""https://open-atmos.github.io/PySDM/PySDM/initialisation/sampling/spectral_sampling.html""\n click PARTICULATOR_INSTANCE ""https://open-atmos.github.io/PySDM/PySDM/particulator.html""\n click BACKEND_INSTANCE ""https://open-atmos.github.io/PySDM/PySDM/backends/numba.html""\n click BUILDER_BUILD ""https://open-atmos.github.io/PySDM/PySDM/builder.html""\n click NP_LOGSPACE ""https://numpy.org/doc/stable/reference/generated/numpy.logspace.html""\n click SPECTRUM_INIT ""https://open-atmos.github.io/PySDM/PySDM/products/size_spectral/particle_volume_versus_radius_logarithm_spectrum.html""\n click SPECTRUM ""https://open-atmos.github.io/PySDM/PySDM/products/size_spectral/particle_volume_versus_radius_logarithm_spectrum.html""\n```\n\n## Hello-world condensation example in Python, Julia and Matlab\n\nIn the following example, a condensation-only setup is used with the adiabatic \n[`Parcel`](https://open-atmos.github.io/PySDM/PySDM/environments/parcel.html) environment.\nAn initial [`Lognormal`](https://open-atmos.github.io/PySDM/PySDM/initialisation/spectra.html#PySDM.initialisation.spectra.Lognormal)\nspectrum of dry aerosol particles is first initialised to equilibrium wet size for the given\ninitial humidity. \nSubsequent particle growth due to [`Condensation`](https://open-atmos.github.io/PySDM/PySDM/dynamics/condensation.html) of water vapour (coupled with the release of latent heat)\ncauses a subset of particles to activate into cloud droplets.\nResults of the simulation are plotted against vertical \n[`ParcelDisplacement`](https://open-atmos.github.io/PySDM/PySDM/products/housekeeping/parcel_displacement.html)\nand depict the evolution of \n[`PeakSupersaturation`](https://open-atmos.github.io/PySDM/PySDM/products/condensation/peak_supersaturation.html), \n[`EffectiveRadius`](https://open-atmos.github.io/PySDM/PySDM/products/size_spectral/effective_radius.html), \n[`ParticleConcentration`](https://open-atmos.github.io/PySDM/PySDM/products/size_spectral/particle_concentration.html#PySDM.products.particles_concentration.ParticleConcentration) \nand the \n[`WaterMixingRatio `](https://open-atmos.github.io/PySDM/PySDM/products/size_spectral/water_mixing_ratio.html).\n\n
\nJulia (click to expand)\n\n```Julia\nusing PyCall\nusing Plots; plotlyjs()\nsi = pyimport(""PySDM.physics"").si\nspectral_sampling = pyimport(""PySDM.initialisation.sampling"").spectral_sampling\ndiscretise_multiplicities = pyimport(""PySDM.initialisation"").discretise_multiplicities\nLognormal = pyimport(""PySDM.initialisation.spectra"").Lognormal\nequilibrate_wet_radii = pyimport(""PySDM.initialisation"").equilibrate_wet_radii\nCPU = pyimport(""PySDM.backends"").CPU\nAmbientThermodynamics = pyimport(""PySDM.dynamics"").AmbientThermodynamics\nCondensation = pyimport(""PySDM.dynamics"").Condensation\nParcel = pyimport(""PySDM.environments"").Parcel\nBuilder = pyimport(""PySDM"").Builder\nFormulae = pyimport(""PySDM"").Formulae\nproducts = pyimport(""PySDM.products"")\n\nenv = Parcel(\n dt=.25 * si.s,\n mass_of_dry_air=1e3 * si.kg,\n p0=1122 * si.hPa,\n initial_water_vapour_mixing_ratio=20 * si.g / si.kg,\n T0=300 * si.K,\n w= 2.5 * si.m / si.s\n)\nspectrum = Lognormal(norm_factor=1e4/si.mg, m_mode=50*si.nm, s_geom=1.4)\nkappa = .5 * si.dimensionless\ncloud_range = (.5 * si.um, 25 * si.um)\noutput_interval = 4\noutput_points = 40\nn_sd = 256\n\nformulae = Formulae()\nbuilder = Builder(backend=CPU(formulae), n_sd=n_sd)\nbuilder.set_environment(env)\nbuilder.add_dynamic(AmbientThermodynamics())\nbuilder.add_dynamic(Condensation())\n\nr_dry, specific_concentration = spectral_sampling.Logarithmic(spectrum).sample(n_sd)\nv_dry = formulae.trivia.volume(radius=r_dry)\nr_wet = equilibrate_wet_radii(r_dry=r_dry, environment=env, kappa_times_dry_volume=kappa * v_dry)\n\nattributes = Dict()\nattributes[""multiplicity""] = discretise_multiplicities(specific_concentration * env.mass_of_dry_air)\nattributes[""dry volume""] = v_dry\nattributes[""kappa times dry volume""] = kappa * v_dry\nattributes[""volume""] = formulae.trivia.volume(radius=r_wet) \n\nparticulator = builder.build(attributes, products=[\n products.PeakSupersaturation(name=""S_max"", unit=""%""),\n products.EffectiveRadius(name=""r_eff"", unit=""um"", radius_range=cloud_range),\n products.ParticleConcentration(name=""n_c_cm3"", unit=""cm^-3"", radius_range=cloud_range),\n products.WaterMixingRatio(name=""liquid water mixing ratio"", unit=""g/kg"", radius_range=cloud_range),\n products.ParcelDisplacement(name=""z"")\n])\n \ncell_id=1\noutput = Dict()\nfor (_, product) in particulator.products\n output[product.name] = Array{Float32}(undef, output_points+1)\n output[product.name][1] = product.get()[cell_id]\nend \n \nfor step = 2:output_points+1\n particulator.run(steps=output_interval)\n for (_, product) in particulator.products\n output[product.name][step] = product.get()[cell_id]\n end \nend \n\nplots = []\nylbl = particulator.products[""z""].unit\nfor (_, product) in particulator.products\n if product.name != ""z""\n append!(plots, [plot(output[product.name], output[""z""], ylabel=ylbl, xlabel=product.unit, title=product.name)])\n end\n global ylbl = """"\nend\nplot(plots..., layout=(1, length(output)-1))\nsavefig(""parcel.svg"")\n```\n
\n
\nMatlab (click to expand)\n\n```Matlab\nsi = py.importlib.import_module(\'PySDM.physics\').si;\nspectral_sampling = py.importlib.import_module(\'PySDM.initialisation.sampling\').spectral_sampling;\ndiscretise_multiplicities = py.importlib.import_module(\'PySDM.initialisation\').discretise_multiplicities;\nLognormal = py.importlib.import_module(\'PySDM.initialisation.spectra\').Lognormal;\nequilibrate_wet_radii = py.importlib.import_module(\'PySDM.initialisation\').equilibrate_wet_radii;\nCPU = py.importlib.import_module(\'PySDM.backends\').CPU;\nAmbientThermodynamics = py.importlib.import_module(\'PySDM.dynamics\').AmbientThermodynamics;\nCondensation = py.importlib.import_module(\'PySDM.dynamics\').Condensation;\nParcel = py.importlib.import_module(\'PySDM.environments\').Parcel;\nBuilder = py.importlib.import_module(\'PySDM\').Builder;\nFormulae = py.importlib.import_module(\'PySDM\').Formulae;\nproducts = py.importlib.import_module(\'PySDM.products\');\n\nenv = Parcel(pyargs( ...\n \'dt\', .25 * si.s, ...\n \'mass_of_dry_air\', 1e3 * si.kg, ...\n \'p0\', 1122 * si.hPa, ...\n \'initial_water_vapour_mixing_ratio\', 20 * si.g / si.kg, ...\n \'T0\', 300 * si.K, ...\n \'w\', 2.5 * si.m / si.s ...\n));\nspectrum = Lognormal(pyargs(\'norm_factor\', 1e4/si.mg, \'m_mode\', 50 * si.nm, \'s_geom\', 1.4));\nkappa = .5;\ncloud_range = py.tuple({.5 * si.um, 25 * si.um});\noutput_interval = 4;\noutput_points = 40;\nn_sd = 256;\n\nformulae = Formulae();\nbuilder = Builder(pyargs(\'backend\', CPU(formulae), \'n_sd\', int32(n_sd)));\nbuilder.set_environment(env);\nbuilder.add_dynamic(AmbientThermodynamics());\nbuilder.add_dynamic(Condensation());\n\ntmp = spectral_sampling.Logarithmic(spectrum).sample(int32(n_sd));\nr_dry = tmp{1};\nv_dry = formulae.trivia.volume(pyargs(\'radius\', r_dry));\nspecific_concentration = tmp{2};\nr_wet = equilibrate_wet_radii(pyargs(...\n \'r_dry\', r_dry, ...\n \'environment\', env, ...\n \'kappa_times_dry_volume\', kappa * v_dry...\n));\n\nattributes = py.dict(pyargs( ...\n \'multiplicity\', discretise_multiplicities(specific_concentration * env.mass_of_dry_air), ...\n \'dry volume\', v_dry, ...\n \'kappa times dry volume\', kappa * v_dry, ... 
\n \'volume\', formulae.trivia.volume(pyargs(\'radius\', r_wet)) ...\n));\n\nparticulator = builder.build(attributes, py.list({ ...\n products.PeakSupersaturation(pyargs(\'name\', \'S_max\', \'unit\', \'%\')), ...\n products.EffectiveRadius(pyargs(\'name\', \'r_eff\', \'unit\', \'um\', \'radius_range\', cloud_range)), ...\n products.ParticleConcentration(pyargs(\'name\', \'n_c_cm3\', \'unit\', \'cm^-3\', \'radius_range\', cloud_range)), ...\n products.WaterMixingRatio(pyargs(\'name\', \'liquid water mixing ratio\', \'unit\', \'g/kg\', \'radius_range\', cloud_range)) ...\n products.ParcelDisplacement(pyargs(\'name\', \'z\')) ...\n}));\n\ncell_id = int32(0);\noutput_size = [output_points+1, length(py.list(particulator.products.keys()))];\noutput_types = repelem({\'double\'}, output_size(2));\noutput_names = [cellfun(@string, cell(py.list(particulator.products.keys())))];\noutput = table(...\n \'Size\', output_size, ...\n \'VariableTypes\', output_types, ...\n \'VariableNames\', output_names ...\n);\nfor pykey = py.list(keys(particulator.products))\n get = py.getattr(particulator.products{pykey{1}}.get(), \'__getitem__\');\n key = string(pykey{1});\n output{1, key} = get(cell_id);\nend\n\nfor i=2:output_points+1\n particulator.run(pyargs(\'steps\', int32(output_interval)));\n for pykey = py.list(keys(particulator.products))\n get = py.getattr(particulator.products{pykey{1}}.get(), \'__getitem__\');\n key = string(pykey{1});\n output{i, key} = get(cell_id);\n end\nend\n\ni=1;\nfor pykey = py.list(keys(particulator.products))\n product = particulator.products{pykey{1}};\n if string(product.name) ~= ""z""\n subplot(1, width(output)-1, i);\n plot(output{:, string(pykey{1})}, output.z, \'-o\');\n title(string(product.name), \'Interpreter\', \'none\');\n xlabel(string(product.unit));\n end\n if i == 1\n ylabel(string(particulator.products{""z""}.unit));\n end\n i=i+1;\nend\nsaveas(gcf, ""parcel.png"");\n```\n
\n
\nPython (click to expand)\n\n```Python\nfrom matplotlib import pyplot\nfrom PySDM.physics import si\nfrom PySDM.initialisation import discretise_multiplicities, equilibrate_wet_radii\nfrom PySDM.initialisation.spectra import Lognormal\nfrom PySDM.initialisation.sampling import spectral_sampling\nfrom PySDM.backends import CPU\nfrom PySDM.dynamics import AmbientThermodynamics, Condensation\nfrom PySDM.environments import Parcel\nfrom PySDM import Builder, Formulae, products\n\nenv = Parcel(\n dt=.25 * si.s,\n mass_of_dry_air=1e3 * si.kg,\n p0=1122 * si.hPa,\n initial_water_vapour_mixing_ratio=20 * si.g / si.kg,\n T0=300 * si.K,\n w=2.5 * si.m / si.s\n)\nspectrum = Lognormal(norm_factor=1e4 / si.mg, m_mode=50 * si.nm, s_geom=1.4)\nkappa = .5 * si.dimensionless\ncloud_range = (.5 * si.um, 25 * si.um)\noutput_interval = 4\noutput_points = 40\nn_sd = 256\n\nformulae = Formulae()\nbuilder = Builder(backend=CPU(formulae), n_sd=n_sd)\nbuilder.set_environment(env)\nbuilder.add_dynamic(AmbientThermodynamics())\nbuilder.add_dynamic(Condensation())\n\nr_dry, specific_concentration = spectral_sampling.Logarithmic(spectrum).sample(n_sd)\nv_dry = formulae.trivia.volume(radius=r_dry)\nr_wet = equilibrate_wet_radii(r_dry=r_dry, environment=env, kappa_times_dry_volume=kappa * v_dry)\n\nattributes = {\n \'multiplicity\': discretise_multiplicities(specific_concentration * env.mass_of_dry_air),\n \'dry volume\': v_dry,\n \'kappa times dry volume\': kappa * v_dry,\n \'volume\': formulae.trivia.volume(radius=r_wet)\n}\n\nparticulator = builder.build(attributes, products=[\n products.PeakSupersaturation(name=\'S_max\', unit=\'%\'),\n products.EffectiveRadius(name=\'r_eff\', unit=\'um\', radius_range=cloud_range),\n products.ParticleConcentration(name=\'n_c_cm3\', unit=\'cm^-3\', radius_range=cloud_range),\n products.WaterMixingRatio(name=\'liquid water mixing ratio\', unit=\'g/kg\', radius_range=cloud_range),\n products.ParcelDisplacement(name=\'z\')\n])\n\ncell_id = 0\noutput = {product.name: [product.get()[cell_id]] for product in particulator.products.values()}\n\nfor step in range(output_points):\n particulator.run(steps=output_interval)\n for product in particulator.products.values():\n output[product.name].append(product.get()[cell_id])\n\nfig, axs = pyplot.subplots(1, len(particulator.products) - 1, sharey=""all"")\nfor i, (key, product) in enumerate(particulator.products.items()):\n if key != \'z\':\n axs[i].plot(output[key], output[\'z\'], marker=\'.\')\n axs[i].set_title(product.name)\n axs[i].set_xlabel(product.unit)\n axs[i].grid()\naxs[0].set_ylabel(particulator.products[\'z\'].unit)\npyplot.savefig(\'parcel.svg\')\n```\n\n\nThe resultant plot (generated with the Matlab code) looks as follows:\n\n![plot](https://github.com/open-atmos/PySDM/releases/download/tip/parcel.png)\n\n## Contributing, reporting issues, seeking support \n\n#### Our technological stack: \n[![Python 3](https://img.shields.io/static/v1?label=+&logo=Python&color=darkred&message=Python)](https://www.python.org/)\n[![Numba](https://img.shields.io/static/v1?label=+&logo=Numba&color=orange&message=Numba)](https://numba.pydata.org)\n[![LLVM](https://img.shields.io/static/v1?label=+&logo=LLVM&color=gold&message=LLVM)](https://llvm.org)\n[![CUDA](https://img.shields.io/static/v1?label=+&logo=nVidia&color=darkgreen&message=ThrustRTC/CUDA)](https://pypi.org/project/ThrustRTC/)\n[![NumPy](https://img.shields.io/static/v1?label=+&logo=numpy&color=blue&message=NumPy)](https://numpy.org/)\n[![pytest](https://img.shields.io/static/v1?label=+&logo=pytest&color=purple&message=pytest)](https://pytest.org/) \n[![Colab](https://img.shields.io/static/v1?label=+&logo=googlecolab&color=darkred&message=Colab)](https://colab.research.google.com/)\n[![Codecov](https://img.shields.io/static/v1?label=+&logo=codecov&color=orange&message=Codecov)](https://codecov.io/)\n[![PyPI](https://img.shields.io/static/v1?label=+&logo=pypi&color=gold&message=PyPI)](https://pypi.org/)\n[![GithubActions](https://img.shields.io/static/v1?label=+&logo=github&color=darkgreen&message=GitHub Actions)](https://github.com/features/actions)\n[![Jupyter](https://img.shields.io/static/v1?label=+&logo=Jupyter&color=blue&message=Jupyter)](https://jupyter.org/)\n[![PyCharm](https://img.shields.io/static/v1?label=+&logo=pycharm&color=purple&message=PyCharm)](https://www.jetbrains.com/pycharm/)\n\nWhen submitting new code to the project, please use [GitHub pull requests](https://github.com/open-atmos/PySDM/pulls) - they help keep a record of code authorship, \ntrack and archive the code review workflow, and let everyone benefit\nfrom the continuous integration setup which automates execution of tests \nagainst the newly added code. \n\nCode contributions are assumed to imply transfer of copyright.\nShould there be a need to make an exception, please indicate it when creating\na pull request or contributing code in any other way. In any case, \nthe license of the contributed code must be compatible with GPL v3.\n\nDeveloping the code, we follow [The Way of Python](https://www.python.org/dev/peps/pep-0020/) and \nthe [KISS principle](https://en.wikipedia.org/wiki/KISS_principle).\nThe codebase has greatly benefited from [PyCharm code inspections](https://www.jetbrains.com/help/pycharm/code-inspection.html)\nand [Pylint](https://pylint.org), [Black](https://black.readthedocs.io/en/stable/) and [isort](https://pycqa.github.io/isort/)\ncode analysis (which are all part of the CI workflows).\n\nWe also use [pre-commit hooks](https://pre-commit.com). \nIn our case, the hooks modify files and re-format them.\nThe pre-commit hooks can be run locally, and then the resultant changes need to be staged before committing.\nTo set up the hooks locally, install pre-commit via `pip install pre-commit` and\nset up the git hooks via `pre-commit install` (this needs to be done every time you clone the project).\nTo run all pre-commit hooks, run `pre-commit run --all-files`.\nThe `.pre-commit-config.yaml` file can be modified in case new hooks are to be added or\n existing ones need to be altered. \n\nFurther hints addressed at PySDM developers are maintained in the [open-atmos/python-dev-hints Wiki](https://github.com/open-atmos/python-dev-hints/wiki).\n\nIssues regarding any incorrect, unintuitive or undocumented behaviour of\nPySDM are best reported on the [GitHub issue tracker](https://github.com/open-atmos/PySDM/issues/new).\nFeature requests are recorded in the ""Ideas..."" [PySDM wiki page](https://github.com/open-atmos/PySDM/wiki/Ideas-for-new-features-and-examples).\n\nWe encourage using the [GitHub Discussions](https://github.com/open-atmos/PySDM/discussions) feature\n(rather than the issue tracker) for seeking support in understanding, using and extending PySDM code.\n\nWe look forward to your contributions and feedback.\n\n## Credits\n\nThe development and maintenance of PySDM is led by [Sylwester Arabas](https://github.com/slayoo/).\n[Piotr Bartman](https://github.com/piotrbartman/) was the architect and main developer \nof technological solutions in PySDM. \nThe suite of examples shipped with PySDM includes contributions from researchers \nfrom [Jagiellonian University](https://en.uj.edu.pl/en) departments of computer science, physics and chemistry;\nand from \n[Caltech\'s Climate Modelling Alliance](https://clima.caltech.edu/).\n\nDevelopment of PySDM was initially supported by the EU through a grant of the \n[Foundation for Polish Science](https://www.fnp.org.pl/) (POIR.04.04.00-00-5E1C/18) \nrealised at the [Jagiellonian University](https://en.uj.edu.pl/en).\nThe immersion freezing support in PySDM is developed with support from the\nUS Department of Energy [Atmospheric System Research](https://asr.science.energy.gov/) programme\nthrough a grant realised at the \n[University of Illinois at Urbana-Champaign](https://illinois.edu/).\n\ncopyright: [Jagiellonian University](https://en.uj.edu.pl/en) \nlicence: [GPL v3](https://www.gnu.org/licenses/gpl-3.0.html)\n\n## Related resources and open-source projects\n\n### SDM patents (some expired, some withdrawn):\n- https://patents.google.com/patent/US7756693B2\n- https://patents.google.com/patent/EP1847939A3\n- https://patents.google.com/patent/JP4742387B2\n- https://patents.google.com/patent/CN101059821B\n\n### Other SDM implementations:\n- SCALE-SDM (Fortran): \n https://github.com/Shima-Lab/SCALE-SDM_BOMEX_Sato2018/blob/master/contrib/SDM/sdm_coalescence.f90\n- Pencil Code (Fortran): \n https://github.com/pencil-code/pencil-code/blob/master/src/particles_coagulation.f90\n- PALM LES (Fortran): \n https://palm.muk.uni-hannover.de/trac/browser/palm/trunk/SOURCE/lagrangian_particle_model_mod.f90\n- libcloudph++ (C++): \n https://github.com/igfuw/libcloudphxx/blob/master/src/impl/particles_impl_coal.ipp\n- LCM1D (Python): \n https://github.com/SimonUnterstrasser/ColumnModel\n- superdroplet (Cython/Numba/C++11/Fortran 2008/Julia): \n https://github.com/darothen/superdroplet\n- NTLP (FORTRAN): \n https://github.com/Folca/NTLP/blob/SuperDroplet/les.F\n\n### Non-SDM probabilistic particle-based coagulation solvers\n\n- PartMC (Fortran): \n https://github.com/compdyn/partmc\n\n### Python models with discrete-particle (moving-sectional) representation of particle size spectrum\n\n- pyrcel: https://github.com/darothen/pyrcel\n- PyBox: https://github.com/loftytopping/PyBox\n- py-cloud-parcel-model: https://github.com/emmasimp/py-cloud-parcel-model\n'",",https://zenodo.org/badge/latestdoi/199064632,https://doi.org/10.21105/joss.03219,https://doi.org/10.21105/joss.04968,http://doi.org/10.1002/qj.441","2019/07/26, 
18:41:26",1552,GPL-3.0,158,3636,"2023/10/25, 21:45:35",120,636,1045,224,0,19,2.0,0.3688410825815406,"2021/10/21, 22:25:13",tip,0,16,false,,false,false,"slayoo/PyPartMC-examples,open-atmos/PyPartMC,abulenok/PySuperDropletLES,slayoo/PyMPDATA-MPI,Delcior/PySuperDropletLES,open-atmos/PyMPDATA-MPI,slayoo/PySDM-examples",,https://github.com/open-atmos,,,,,https://avatars.githubusercontent.com/u/87869712?v=4,,,
pyrcel,"An implementation of a simple, adiabatic cloud parcel model for use in aerosol-cloud interaction studies.",darothen,https://github.com/darothen/pyrcel.git,github,,Atmospheric Chemistry and Aerosol,"2023/09/26, 06:22:47",19,0,3,true,Python,,,Python,,"b""pyrcel: cloud parcel model\n==========================\n\n![sample parcel model run](docs/figs/model_example.png)\n\n[![DOI](https://zenodo.org/badge/12927551.svg)](https://zenodo.org/badge/latestdoi/12927551)[![PyPI Version](https://badge.fury.io/py/pyrcel.svg)](https://badge.fury.io/py/pyrcel)[![CircleCI Build Status](https://circleci.com/gh/darothen/pyrcel/tree/master.svg?style=svg)](https://circleci.com/gh/darothen/pyrcel/tree/master)[![Documentation Status](https://readthedocs.org/projects/pyrcel/badge/?version=stable)](http://pyrcel.readthedocs.org/en/stable/?badge=stable)\n\n\nThis is an implementation of a simple, adiabatic cloud parcel model for use in\naerosol-cloud interaction studies. [Rothenberg and Wang (2016)](http://journals.ametsoc.org/doi/full/10.1175/JAS-D-15-0223.1) discuss the model in detail and its improvements\n and changes over [Nenes et al (2001)][nenes2001]:\n\n* Implementation of \xce\xba-K\xc3\xb6hler theory for condensation physics ([Petters and\nKreidenweis, 2007][pk2007])\n* Extension of model to handle arbitrary sectional representations of aerosol\npopulations, based on user-controlled empirical or parameterized size distributions\n* Improved, modular numerical framework for integrating the model, including bindings\nto several different stiff integrators:\n - `lsoda` - [scipy ODEINT wrapper](http://docs.scipy.org/doc/scipy/reference/generated/scipy.integrate.odeint.html)\n - `vode, lsode*, lsoda*` - ODEPACK via [odespy][hplgit]\n - `cvode` - SUNDIALS via [Assimulo](http://www.jmodelica.org/assimulo_home/index.html#)\n\namong other details. It also includes a library of droplet activation routines and scripts/notebooks for evaluating those schemes against equivalent calculations done with the parcel model.\n\nUpdated code can be found in the project [github repository](https://github.com/darothen/pyrcel). If you'd like to use this code or have any questions about it, please [contact the author][author_email]. In particular, if you use this code for research purposes, be sure to carefully read through the model and ensure that you have tweaked/configured it for your purposes (e.g., modifying the accommodation coefficient or other derived quantities).\n\n[Detailed documentation is available](http://pyrcel.readthedocs.org/en/latest/index.html), including a [scientific description](http://pyrcel.readthedocs.org/en/latest/sci_descr.html), [installation details](http://pyrcel.readthedocs.org/en/latest/install.html), and a [basic example](http://pyrcel.readthedocs.org/en/latest/examples/basic_run.html) which produces a figure like the plot at the top of this page.\n\nRequirements\n------------\n\n**Required**\n\n* Python >= 3.7\n* [numba](http://numba.pydata.org)\n* [NumPy](http://www.numpy.org)\n* [SciPy](http://www.scipy.org)\n* [pandas](http://pandas.pydata.org) - v0.25+\n* [xarray](http://xarray.pydata.org/en/stable/) - v2023+\n* [PyYAML](http://pyyaml.org/)\n\n**Optional**\n\nThe following packages are used for better numerics (ODE solving)\n\n* [Assimulo](http://www.jmodelica.org/assimulo)\n\nThe easiest way to satisfy the basic requirements for building and running the\nmodel is to use the [Anaconda](http://continuum.io/downloads) scientific Python\ndistribution. Alternatively, a\n[miniconda environment](http://conda.pydata.org/docs/using/envs.html) is\nprovided to quickly set up and get the model running. Assimulo's dependency on\nthe SUNDIALS library makes it a little bit tougher to install in an automated\nfashion, so it has not been included in the automatic setup provided here; you\nshould refer to [Assimulo's documentation](http://www.jmodelica.org/assimulo_home/installation.html)\nfor more information on its installation process. Note that many components of\nthe model and package can be used without Assimulo.\n\nDevelopment\n-----------\n\n[http://github.com/darothen/pyrcel]()\n\nPlease fork this repository if you intend to develop the model further so that the\ncode's provenance can be maintained.\n\nLicense\n-------\n\n[All scientific code should be licensed](http://www.astrobetter.com/the-whys-and-hows-of-licensing-scientific-code/). This code is released under the New BSD (3-clause) [license](LICENSE.md).\n\n[author_email]: mailto:daniel@danielrothenberg.com\n[nenes2001]: http://nenes.eas.gatech.edu/Preprints/KinLimitations_TellusPP.pdf\n[pk2007]: http://www.atmos-chem-phys.net/7/1961/2007/acp-7-1961-2007.html\n[hplgit]: https://github.com/hplgit/odespy\n""",",https://zenodo.org/badge/latestdoi/12927551","2013/09/18, 15:53:10",3689,BSD-3-Clause,28,395,"2023/09/26, 02:54:45",9,7,9,5,30,0,0.14285714285714285,0.0714285714285714,"2023/09/26, 06:25:51",v1.3.2,0,3,false,,false,false,,,,,,,,,,,
ORAC,"An optimal estimation retrieval scheme for the estimation of aerosol and cloud properties from a wide range of visible-infrared imaging satellites, such as MODIS, AATSR, AVHRR and SEVIRI.",ORAC-CC,https://github.com/ORAC-CC/orac.git,github,,Atmospheric Chemistry and Aerosol,"2023/06/26, 10:11:07",25,0,3,true,Fortran,,,"Fortran,C,Python,C++,Pawn,Makefile,Yacc,Shell,Assembly,NASL,CMake,Lex,Perl,Batchfile",,"b'Documentation for ORAC is managed through the Wiki at:\r\n\r\nhttps://github.com/ORAC-CC/orac/wiki\r\n\r\nORAC is licensed under the GNU General Public License (GPL), Version 3. See the\r\nfile COPYING included with this source for more details.\r\n'",,"2017/11/27, 19:35:57",2158,GPL-3.0,90,2335,"2023/06/26, 14:40:02",29,44,60,13,121,6,4.2,0.4768480909829407,"2021/08/27, 14:45:00",v08-beta,0,8,false,,false,false,,,,,,,,,,,
METplus,A verification framework that spans a wide range of temporal (warn-on-forecast to climate) and spatial (storm to global) scales.,dtcenter,https://github.com/dtcenter/METplus.git,github,,Meteorological Observation and Forecast,"2023/09/08, 23:00:12",84,0,11,true,Python,Developmental Testbed Center,dtcenter,"Python,Shell,Dockerfile,Common Lisp",https://metplus.readthedocs.io,"b'Model Evaluation Tools Plus (METplus) Repository\n================================================\n\n\n[![Tests](https://github.com/DTCenter/METplus/actions/workflows/testing.yml/badge.svg?event=push)](https://github.com/DTCenter/METplus/actions/workflows/testing.yml)\n[![Docs](https://img.shields.io/badge/Documentation-latest-brightgreen.svg)](https://metplus.readthedocs.io)\n[![DOI](https://zenodo.org/badge/DOI/10.5281/zenodo.5567804.svg)](https://doi.org/10.5281/zenodo.5567804)\n\nWelcome to the METplus GitHub repository hosted for the community through\nthe Developmental Testbed Center (DTC).\n\nMETplus is a Python scripting infrastructure around the MET verification tools\n(and eventually METViewer, a tool used for plotting MET output verification statistics).\n\nDocumentation for both Users and Contributors can be found [here](https://metplus.readthedocs.io).\nFor more information about the entire suite of tools, please visit the\n[DTC METplus website](https://dtcenter.org/community-code/metplus).\n\nSupport for the METplus components is provided through the\n[METplus Discussions](https://github.com/dtcenter/METplus/discussions) forum.\nUsers are welcome and encouraged to answer or address each other\'s questions there! For more\ninformation, please read\n""[Welcome to the METplus Components Discussions](https://github.com/dtcenter/METplus/discussions/939)"".\n\nThis infrastructure utilizes the NCEP produtil package, which is a platform-independent\nweather and ocean forecasting utility package developed at the National Oceanic\nand Atmospheric Administration (NOAA).\n\nFor information about the support provided for releases, see our [Release Support Policy](https://metplus.readthedocs.io/en/develop/Release_Guide/index.html#release-support-policy).\n'",",https://doi.org/10.5281/zenodo.5567804","2016/08/25, 15:19:30",2617,Apache-2.0,232,6665,"2023/10/25, 23:01:57",136,713,1559,318,0,1,1.2,0.5261904761904762,"2023/09/15, 20:04:40",v6.0.0-beta1,0,29,false,,false,false,,,https://github.com/dtcenter,https://dtcenter.org,,,,https://avatars.githubusercontent.com/u/67171881?v=4,,,
WRF-ARW,The official repository for the Weather Research and Forecasting model.,wrf-model,https://github.com/wrf-model/WRF.git,github,"wrf,weather,climate,forecast,nwp,model,simulation,fortran",Meteorological Observation and Forecast,"2023/07/25, 21:06:01",1046,0,211,true,Fortran,Weather Research and Forecasting Model,wrf-model,"Fortran,Roff,C++,C,NASL,Shell,Assembly,Makefile,Pawn,NCL,PLSQL,Pascal,Perl,Yacc,Lex,M4,HTML,Forth,MATLAB,Emacs Lisp,EmberScript,sed,SourcePawn,NewLisp",,"b'### WRF-ARW Modeling System ###\n\nWe request that all new users of WRF please register. This allows us to better determine how to support and develop the model. 
Please register using this form: [https://www2.mmm.ucar.edu/wrf/users/download/wrf-regist.php](https://www2.mmm.ucar.edu/wrf/users/download/wrf-regist.php).\n\nFor an overview of the WRF modeling system, along with information regarding downloads, user support, documentation, publications, and additional resources, please see the WRF Model Users\' Web Site: [https://www2.mmm.ucar.edu/wrf/users/](https://www2.mmm.ucar.edu/wrf/users/).\n \nInformation regarding WRF Model citations (including a DOI) can be found here: [https://www2.mmm.ucar.edu/wrf/users/citing_wrf.html](https://www2.mmm.ucar.edu/wrf/users/citing_wrf.html).\n\nThe WRF Model is open-source code in the public domain, and its use is unrestricted. The name ""WRF"", however, is a registered trademark of the University Corporation for Atmospheric Research. The WRF public domain notice and related information may be found here: [https://www2.mmm.ucar.edu/wrf/users/public.html](https://www2.mmm.ucar.edu/wrf/users/public.html).\n\n\n'",,"2016/08/16, 20:39:14",2626,CUSTOM,66,6537,"2023/10/12, 18:09:46",169,1552,1758,124,13,33,1.0,0.6472757066775912,"2023/07/25, 21:26:30",v4.5.1,0,123,false,,false,false,,,https://github.com/wrf-model,http://www2.mmm.ucar.edu/wrf/users/,"Boulder, Colorado, USA",,,https://avatars.githubusercontent.com/u/12666893?v=4,,,
wrf-python,A collection of diagnostic and interpolation routines for use with output from the Weather Research and Forecasting Model.,NCAR,https://github.com/NCAR/wrf-python.git,github,,Meteorological Observation and Forecast,"2023/06/16, 22:26:09",363,0,44,true,Python,National Center for Atmospheric Research,NCAR,"Python,C,Fortran,Jupyter Notebook,NCL,Shell,Batchfile",,"b'wrf-python\n==============\n\nA collection of diagnostic and interpolation routines for use with output from the Weather Research and Forecasting (WRF-ARW) Model.\n\nThis package provides over 30 diagnostic calculations, several interpolation routines, and utilities to help with plotting via cartopy, basemap, or PyNGL. The functionality is similar to what is provided by the NCL WRF package.\n\n\nInstallation\n----------------------------\n\n conda install -c conda-forge wrf-python\n\nDocumentation\n----------------------------------\n\nhttp://wrf-python.rtfd.org\n\n\nCitation\n------------------\n\nIf you use this software, please cite it as described at the [WRF-Python - Citation](\nhttps://wrf-python.readthedocs.io/en/latest/citation.html) page.\n\n\n--------------------\n\n*The National Center for Atmospheric Research is sponsored by the National\nScience Foundation. 
Any opinions, findings and conclusions or recommendations\nexpressed in this material do not necessarily reflect the views of the\nNational Science Foundation.*\n\n\n'",,"2016/05/23, 20:55:40",2711,Apache-2.0,4,555,"2023/08/29, 22:41:20",59,53,158,13,57,1,1.3,0.6153846153846154,"2022/05/26, 16:33:58",v1.3.4.1,0,17,false,,true,true,,,https://github.com/NCAR,http://ncar.ucar.edu,"Boulder, CO",,,https://avatars.githubusercontent.com/u/2007542?v=4,,, Open-Meteo,Global weather API for non-commercial use with hourly weather forecast.,open-meteo,https://github.com/open-meteo/open-meteo.git,github,"weather-api,weather,weather-forecast",Meteorological Observation and Forecast,"2023/10/25, 10:32:41",1370,0,846,true,Swift,Open-Meteo,open-meteo,"Swift,C,Dockerfile,Shell",https://open-meteo.com,"b'# \xf0\x9f\x8c\xa4 Open-Meteo Weather API\n\n[![Test](https://github.com/open-meteo/open-meteo/actions/workflows/test.yml/badge.svg?branch=main)](https://github.com/open-meteo/open-meteo/actions/workflows/test.yml) [![codebeat badge](https://codebeat.co/badges/af28fed6-9cbf-41df-96a1-9bba03ae3c53)](https://codebeat.co/projects/github-com-open-meteo-open-meteo-main) [![GitHub license](https://img.shields.io/github/license/open-meteo/open-meteo)](https://github.com/open-meteo/open-meteo/blob/main/LICENSE) [![license: CC BY 4.0](https://img.shields.io/badge/license-CC%20BY%204.0-lightgrey.svg)](https://creativecommons.org/licenses/by/4.0/) [![Twitter](https://img.shields.io/badge/follow-%40open_meteo-1DA1F2?logo=twitter&style=social)](https://twitter.com/open_meteo) [![Mastodon](https://img.shields.io/mastodon/follow/109320332765909743?domain=https%3A%2F%2Ffosstodon.org)](https://fosstodon.org/@openmeteo) [![DOI](https://zenodo.org/badge/DOI/10.5281/zenodo.7970649.svg)](https://doi.org/10.5281/zenodo.7970649)\n\n\nOpen-Meteo is an open-source weather API and offers free access for non-commercial use. No API key is required. You can use it immediately!\n\nHead over to https://open-meteo.com! 
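\nFor a quick feel of the API, the forecast endpoint can be queried directly over HTTP - a minimal sketch in Python, assuming the third-party `requests` package is installed; the endpoint and parameters are the ones from the example link in the Resources section below, while the response layout (parallel `time` and `temperature_2m` arrays under `hourly`) is assumed from the API documentation:\n\n```python\n# Minimal sketch: fetch hourly 2 m temperature for the example coordinates (Berlin).\n# No API key is required; `requests` is an assumed third-party dependency.\nimport requests\n\nresponse = requests.get(\n \'https://api.open-meteo.com/v1/forecast\',\n params={\'latitude\': 52.52, \'longitude\': 13.41, \'hourly\': \'temperature_2m\'},\n timeout=30,\n)\nresponse.raise_for_status()\ndata = response.json()\n\n# Print the first forecast timestamp and the corresponding temperature.\nprint(data[\'hourly\'][\'time\'][0], data[\'hourly\'][\'temperature_2m\'][0])\n```\n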
Stay up to date with our blog at https://openmeteo.substack.com.\n\n## Features\n- [Hourly weather forecast](https://open-meteo.com/en/docs) for up to 16 days\n- Global weather models with 11 km and regional models up to 1.5 km resolution\n- Weather model updates every hour for Europe and North America\n- 80 years of data via the [Historical Weather API](https://open-meteo.com/en/docs/historical-weather-api)\n- Based on the best weather models: [NOAA GFS with HRRR](https://open-meteo.com/en/docs/gfs-api), [DWD ICON](https://open-meteo.com/en/docs/dwd-api), [MeteoFrance Arome&Arpege](https://open-meteo.com/en/docs/meteofrance-api), [ECMWF IFS](https://open-meteo.com/en/docs/ecmwf-api), [JMA](https://open-meteo.com/en/docs/jma-api), [GEM HRDPS](https://open-meteo.com/en/docs/gem-api), [MET Norway](https://open-meteo.com/en/docs/metno-api)\n- [Marine Forecast API](https://open-meteo.com/en/docs/marine-weather-api), [Air Quality API](https://open-meteo.com/en/docs/air-quality-api), [Geocoding API](https://open-meteo.com/en/docs/geocoding-api), [Elevation API](https://open-meteo.com/en/docs/elevation-api), [Flood API](https://open-meteo.com/en/docs/flood-api)\n- Lightning fast APIs with response times below 10 ms\n- Servers located in Europe and North America with GeoDNS for best latency and high-availability\n- No API key required, CORS supported, no ads, no tracking, not even cookies\n- Free for non-commercial use with data under Attribution 4.0 International (CC BY 4.0)\n- Source code available under AGPLv3\n\n## How does Open-Meteo work?\nOpen-Meteo utilizes open-data weather forecasts provided by national weather services. These services offer numerical weather predictions that are free to download. However, working with these models can be challenging, as it requires expertise in binary file formats, grid-systems, projections, and the fundamentals of weather predictions.\n\nLike many other weather APIs, Open-Meteo integrates high-resolution local and global weather models. Over 2 TB of data are downloaded and processed daily from multiple national weather services. The collected data is then stored in local files using a customized file format and compression technique to enhance access to time-series data such as a 14-day temperature forecast.\n\nIn contrast to other weather APIs, Open-Meteo provides complete access to its source code, and all data sources are openly listed, crediting the national weather services for their work. With Docker or prebuilt Ubuntu packages, it is possible to launch your own weather API within minutes. By providing the source code, users can conduct detailed verifications of the weather data processing and even make modifications themselves. Contributions are highly encouraged and welcomed.\n\nThe API is available for non-commercial use at no cost. Despite being free of charge, the forecast accuracy is top-notch. The API utilizes a vast array of local weather models with rapid updates, ensuring that the most precise forecast is generated for any location globally.\n\n## Resources\n- All API documentation can be found on https://open-meteo.com. The source code for the website, documentation and API generator is available here: https://github.com/open-meteo/open-meteo-website\n- The free non-commercial API is hosted at [https://api.open-meteo.com](https://api.open-meteo.com/v1/forecast?latitude=52.52&longitude=13.41&hourly=temperature_2m) using GeoDNS to route requests to servers in Europe and North America (HTTPS is optional). The API source code is in this current repository.\n- The geocoding API source code is available in a separate repository https://github.com/open-meteo/geocoding-api\n- Larger changes are announced in the [Open-Meteo Blog](https://openmeteo.substack.com)\n\n## Who is using Open-Meteo?\nApps:\n- [WeatherGraph](https://weathergraph.app) Apple Watch App\n- [Slideshow](https://slideshow.digital/) Digital Signage app for Android\n- [weewx-DWD](https://github.com/roe-dl/weewx-DWD) Weather forecasts etc. for WeeWX\n- [omWeather](https://github.com/woheller69/omweather) Android Weather App\n- [solXpect](https://github.com/woheller69/solxpect) Android app which forecasts the output of your solar power plant\n- [Raindrop](https://github.com/metalfoxdev/Raindrop) Simple and intuitive weather app for the Linux terminal.\n- [Weatherian](https://weatherian.com/) Multi-model meteogram (multi-platform)\n- [WeatherAI](https://play.google.com/store/apps/details?id=com.kingfu.weatherai) WeatherAI offers an intuitive user experience that makes checking the weather a breeze.\n- [Weather](https://github.com/GustavLindberg99/AndroidWeather) Free, open source, simple and complete weather app for Android\n- [DroneWeather](https://play.google.com/store/apps/details?id=xyz.droneweather.app) Weather forecasts, satellite count, and KP index for drone pilots.\n- [Clima](https://f-droid.org/packages/co.prestosole.clima/) Beautiful, minimal, and fast weather app\n- [SkyMuse](https://github.com/cakephone/skymuse) Minimal, privacy-respecting weather app. Built with web technologies.\n- [Weather Please](https://github.com/ggaidelevicius/weather-please/) Clean and minimal new tab replacement for browsers\n- [QuickWeather](https://github.com/TylerWilliamson/QuickWeather) Fast, free, and open source Android app\n- [Rain](https://github.com/DarkMooNight/Rain) Free, open source, beautiful, minimal and fast weather app\n\nRepositories:\n- [Captain Cold](https://github.com/cburton-godaddy/captain-cold) Simple Open-Meteo -> Discord integration\n- [wthrr-the-weathercrab](https://github.com/tobealive/wthrr-the-weathercrab) Weather companion for the terminal\n- [Weather-Cli](https://github.com/Rayrsn/Weather-Cli) A CLI program written in golang that allows you to get weather information from the terminal\n- [Homepage](https://github.com/benphelps/homepage/) A highly customizable homepage (or startpage / application dashboard) with Docker and service API integrations.\n- [Spots Guru](https://www.spots.guru) Weather forecast for the lazy, the best wind & wave spots around you.\n- [WeatherReport.jl](https://github.com/vnegi10/WeatherReport.jl) A simple weather app for the Julia REPL\n\nOther:\n- [Menubar Weather](https://www.raycast.com/koinzhang/menubar-weather) A Raycast extension that displays live weather information in your menu bar\n- Contributions welcome!\n\nDo you use Open-Meteo? Please open a pull request and add your repository or app to the list!\n\n## Client SDKs\n- Go https://github.com/HectorMalot/omgo\n- Python https://github.com/m0rp43us/openmeteopy\n- Kotlin https://github.com/open-meteo/open-meteo-api-kotlin\n- .Net / C# https://github.com/AlienDwarf/open-meteo-dotnet\n- PHP Laravel https://github.com/michaelnabil230/laravel-weather\n- R https://github.com/tpisel/openmeteo\n- PHP Symfony 6.2 https://gitlab.com/flibidi67/open-meteo\n- PHP for Geocoding API: https://gitlab.com/flibidi67/open-meteo-geocoding\n- Android library for Geocoding API: https://github.com/woheller69/OmGeoDialog\n- Rust: https://github.com/angelodlfrtr/open-meteo-rs\n\nContributions welcome! Writing an SDK for Open-Meteo is more than welcome and a great way to help users.\n\n## Support\nIf you encounter bugs while using Open-Meteo APIs, please file a new issue ticket. For general ideas or Q&A please use the [Discussion](https://github.com/open-meteo/open-meteo/discussions) section on GitHub. Thanks!\n\nFor other enquiries please contact info@open-meteo.com\n\n\n## Run your own API\nInstructions to use Docker to run your own weather API are available in the [getting started guide](/docs/getting-started.md).\n\n\n\n## Terms & Privacy\nOpen-Meteo APIs are free for open-source developers and non-commercial use. We do not restrict access, but ask for fair use.\n\nIf your application exceeds 10\'000 requests per day, please contact us. We reserve the right to block applications and IP addresses that misuse our service.\n\nFor commercial use of Open-Meteo APIs, please contact us.\n\nAll data is provided as is without any warranty.\n\nWe do not collect any personal data. We do not share any personal information. We do not integrate any third party analytics, ads, beacons or plugins.\n\n## Data License\nAPI data are offered under Attribution 4.0 International (CC BY 4.0)\n\nYou are free to share: copy and redistribute the material in any medium or format and adapt: remix, transform, and build upon the material.\n\nAttribution: You must give appropriate credit, provide a link to the license, and indicate if changes were made. You may do so in any reasonable manner, but not in any way that suggests the licensor endorses you or your use.\n\nYou must include a link next to any location where Open-Meteo data are displayed, like:\n\nWeather data by Open-Meteo.com\n\n\n## Source Code License\nOpen-Meteo is open-source under the GNU Affero General Public License Version 3 (AGPLv3) or any later version. You can [find the license here](LICENSE). 
Exceptions are third party source-code with individual licensing in each file.\n'",",https://doi.org/10.5281/zenodo.7970649","2021/06/24, 09:48:38",853,AGPL-3.0,1083,1501,"2023/10/25, 10:16:26",63,118,283,253,0,1,0.0,0.051386071670047384,"2023/10/10, 13:05:31",0.2.89,0,37,true,"github,patreon,open_collective,ko_fi,tidelift,community_bridge,liberapay,issuehunt,otechie,lfx_crowdfunding,custom",false,false,,,https://github.com/open-meteo,https://open-meteo.com,,,,https://avatars.githubusercontent.com/u/86407831?v=4,,, stationaRy,Get hourly meteorological data from one of thousands of global stations.,rich-iannone,https://github.com/rich-iannone/stationaRy.git,github,"r,met-data,dataset,global",Meteorological Observation and Forecast,"2020/05/04, 16:36:00",248,0,11,false,R,,,R,http://rich-iannone.github.io/stationaRy/,"b'\n\n\n# stationaRy \n\n[![CRAN\nstatus](https://www.r-pkg.org/badges/version/stationaRy)](https://CRAN.R-project.org/package=stationaRy)\n[![R build\nstatus](https://github.com/rich-iannone/stationaRy/workflows/R-CMD-check/badge.svg)](https://github.com/rich-iannone/stationaRy/actions)\n[![Codecov test\ncoverage](https://codecov.io/gh/rich-iannone/stationaRy/branch/master/graph/badge.svg)](https://codecov.io/gh/rich-iannone/stationaRy?branch=master)\n\n## Overview\n\nGet meteorological data from met stations located all over the world.\nThat\xe2\x80\x99s what you can do with this **R** package. There are *LOTS* of\nstations too (29,729 available in this dataset) and many have data that\ngo pretty far back in time. The data comes from the Integrated Surface\nDataset (ISD), which is maintained by the National Oceanic and\nAtmospheric Administration (NOAA).\n\n### Retrieving Met Data with a `station_id`\n\nLet\xe2\x80\x99s get some met data from La Guardia Airport in New York City (the\nstation ID value is `""725030-14732""`). This station has a pretty long\nhistory (starting operations in 1973) but we\xe2\x80\x99ll just obtain data from\nthe years of 2017 and 2018.\n\n``` r\nlga_met_data <- \n get_met_data(\n station_id = ""725030-14732"",\n years = 2017:2018\n )\n```\n\n``` r\nlga_met_data\n#> # A tibble: 17,520 x 10\n#> id time temp wd ws atmos_pres dew_point rh\n#> \n#> 1 7250\xe2\x80\xa6 2017-01-01 00:00:00 7.2 230 5.7 1012. -4.4 43.5\n#> 2 7250\xe2\x80\xa6 2017-01-01 01:00:00 7.8 230 4.6 1012. -3.9 43.4\n#> 3 7250\xe2\x80\xa6 2017-01-01 02:00:00 7.2 230 3.6 1012. -2.2 51.3\n#> 4 7250\xe2\x80\xa6 2017-01-01 03:00:00 7.8 240 5.7 1013. -3.3 45.4\n#> 5 7250\xe2\x80\xa6 2017-01-01 04:00:00 7.8 240 4.6 1013. -3.9 43.4\n#> 6 7250\xe2\x80\xa6 2017-01-01 05:00:00 8.3 240 4.6 1014. -4.4 40.4\n#> 7 7250\xe2\x80\xa6 2017-01-01 06:00:00 8.3 250 5.1 1015. -3.9 41.9\n#> 8 7250\xe2\x80\xa6 2017-01-01 07:00:00 8.3 260 5.7 1016. -3.3 43.8\n#> 9 7250\xe2\x80\xa6 2017-01-01 08:00:00 8.3 240 5.1 1017. -2.8 45.5\n#> 10 7250\xe2\x80\xa6 2017-01-01 09:00:00 8.3 260 6.2 1019. -2.8 45.5\n#> # \xe2\x80\xa6 with 17,510 more rows, and 2 more variables: ceil_hgt ,\n#> # visibility \n```\n\n### Discovering Met Stations\n\nAt a minimum we need a station\xe2\x80\x99s identifier to obtain its met data. We\ncan start the process of getting an identifier by accessing the entire\ncatalog of station metadata with the `get_station_metadata()` function.\nThe output tibble has station `id` values in the first column. 
Let\xe2\x80\x99s get\na subset of stations from that: those stations that are located in\nNorway.\n\n``` r\nstations_norway <- \n get_station_metadata() %>%\n dplyr::filter(country == ""NO"")\n\nstations_norway\n#> # A tibble: 405 x 16\n#> id usaf wban name country state icao lat lon elev begin_date\n#> \n#> 1 0100\xe2\x80\xa6 0100\xe2\x80\xa6 99999 BOGU\xe2\x80\xa6 NO ENRS NA NA NA 2001-09-27\n#> 2 0100\xe2\x80\xa6 0100\xe2\x80\xa6 99999 JAN \xe2\x80\xa6 NO ENJA 70.9 -8.67 9 1931-01-01\n#> 3 0100\xe2\x80\xa6 0100\xe2\x80\xa6 99999 ROST NO NA NA NA 1986-11-20\n#> 4 0100\xe2\x80\xa6 0100\xe2\x80\xa6 99999 SORS\xe2\x80\xa6 NO ENSO 59.8 5.34 48.8 1986-11-20\n#> 5 0100\xe2\x80\xa6 0100\xe2\x80\xa6 99999 BRIN\xe2\x80\xa6 NO 61.4 5.87 327 1987-01-17\n#> 6 0100\xe2\x80\xa6 0100\xe2\x80\xa6 99999 RORV\xe2\x80\xa6 NO 64.8 11.2 14 1987-01-16\n#> 7 0100\xe2\x80\xa6 0100\xe2\x80\xa6 99999 FRIGG NO ENFR 60.0 2.25 48 1988-03-20\n#> 8 0100\xe2\x80\xa6 0100\xe2\x80\xa6 99999 VERL\xe2\x80\xa6 NO 80.0 16.2 8 1986-11-09\n#> 9 0100\xe2\x80\xa6 0100\xe2\x80\xa6 99999 HORN\xe2\x80\xa6 NO 77 15.5 12 1985-06-01\n#> 10 0100\xe2\x80\xa6 0100\xe2\x80\xa6 99999 NY-A\xe2\x80\xa6 NO ENAS 78.9 11.9 8 1973-01-01\n#> # \xe2\x80\xa6 with 395 more rows, and 5 more variables: end_date ,\n#> # begin_year , end_year , tz_name , years \n```\n\nThis table can be even more greatly reduced to isolate the stations of\ninterest. For example, we could elect to get only high-altitude stations\n(above 1000 meters) in Norway.\n\n``` r\nnorway_high_elev <-\n stations_norway %>% \n dplyr::filter(elev > 1000)\n\nnorway_high_elev\n#> # A tibble: 12 x 16\n#> id usaf wban name country state icao lat lon elev begin_date\n#> \n#> 1 0122\xe2\x80\xa6 0122\xe2\x80\xa6 99999 MANN\xe2\x80\xa6 NO 62.4 7.77 1294 2010-03-15\n#> 2 0123\xe2\x80\xa6 0123\xe2\x80\xa6 99999 HJER\xe2\x80\xa6 NO 62.2 9.55 1012 2010-09-07\n#> 3 0134\xe2\x80\xa6 0134\xe2\x80\xa6 99999 MIDT\xe2\x80\xa6 NO 60.6 7.27 1162 2011-11-25\n#> 4 0135\xe2\x80\xa6 0135\xe2\x80\xa6 99999 FINS\xe2\x80\xa6 NO 60.6 7.53 1208 2003-03-30\n#> 5 0135\xe2\x80\xa6 0135\xe2\x80\xa6 99999 FINS\xe2\x80\xa6 NO 60.6 7.5 1224 1973-01-02\n#> 6 0135\xe2\x80\xa6 0135\xe2\x80\xa6 99999 SAND\xe2\x80\xa6 NO 60.2 7.48 1250 2004-01-07\n#> 7 0136\xe2\x80\xa6 0136\xe2\x80\xa6 99999 JUVV\xe2\x80\xa6 NO 61.7 8.37 1894 2009-06-26\n#> 8 0136\xe2\x80\xa6 0136\xe2\x80\xa6 99999 SOGN\xe2\x80\xa6 NO 61.6 8 1413 1979-03-01\n#> 9 0137\xe2\x80\xa6 0137\xe2\x80\xa6 99999 KVIT\xe2\x80\xa6 NO 61.5 10.1 1028 1973-01-01\n#> 10 0143\xe2\x80\xa6 0143\xe2\x80\xa6 99999 MIDT\xe2\x80\xa6 NO 59.8 6.98 1081 1973-01-01\n#> 11 0144\xe2\x80\xa6 0144\xe2\x80\xa6 99999 BLAS\xe2\x80\xa6 NO 59.3 6.87 1105. 1973-01-01\n#> 12 0146\xe2\x80\xa6 0146\xe2\x80\xa6 99999 GAUS\xe2\x80\xa6 NO 59.8 8.65 1804. 2014-06-05\n#> # \xe2\x80\xa6 with 5 more variables: end_date , begin_year , end_year ,\n#> # tz_name , years \n```\n\nThe station IDs from the tibble can be transformed into a vector of\nstation IDs with `dplyr::pull()`.\n\n``` r\nnorway_high_elev %>% dplyr::pull(id)\n#> [1] ""012200-99999"" ""012390-99999"" ""013460-99999"" ""013500-99999"" ""013510-99999""\n#> [6] ""013520-99999"" ""013620-99999"" ""013660-99999"" ""013750-99999"" ""014330-99999""\n#> [11] ""014400-99999"" ""014611-99999""\n```\n\nSuppose you\xe2\x80\x99d like to collect several years of met data from a\nparticular station and fetch only the observations that meet some set of\nconditions. 
Here\xe2\x80\x99s an example of obtaining temperatures above 16 degrees\nCelsius from the high-altitude `""JUVVASSHOE""` station in Norway and\nadding a column with temperatures in degrees Fahrenheit.\n\n``` r\nstation_data <- \n get_station_metadata() %>%\n dplyr::filter(name == ""JUVVASSHOE"") %>%\n dplyr::pull(id) %>%\n get_met_data(years = 2011:2019)\n\nhigh_temp_data <-\n station_data %>%\n dplyr::select(id, time, wd, ws, temp) %>% \n dplyr::filter(temp > 16) %>%\n dplyr::mutate(temp_f = ((temp * (9/5)) + 32) %>% round(1)) %>%\n dplyr::arrange(dplyr::desc(temp_f))\n```\n\n``` r\nhigh_temp_data\n#> # A tibble: 50 x 6\n#> id time wd ws temp temp_f\n#> \n#> 1 013620-99999 2019-07-26 15:00:00 160 5 18.5 65.3\n#> 2 013620-99999 2019-07-26 17:00:00 210 3 18.4 65.1\n#> 3 013620-99999 2019-07-26 18:00:00 180 2 18.3 64.9\n#> 4 013620-99999 2019-07-26 16:00:00 180 4 18.2 64.8\n#> 5 013620-99999 2014-07-23 16:00:00 270 2 17.6 63.7\n#> 6 013620-99999 2019-07-26 14:00:00 150 4 17.5 63.5\n#> 7 013620-99999 2014-07-23 17:00:00 300 4 17.3 63.1\n#> 8 013620-99999 2019-07-28 16:00:00 130 6 17.3 63.1\n#> 9 013620-99999 2014-07-23 18:00:00 280 3 17.2 63 \n#> 10 013620-99999 2018-07-04 15:00:00 340 2 17.2 63 \n#> # \xe2\x80\xa6 with 40 more rows\n```\n\n### Additional Data Fields\n\nThere can be a substantial amount of additional met data beyond wind\nspeed, ambient temperature, etc. However, these additional fields can\nvary greatly across stations. The nomenclature for the additional\ncategories of data uses \xe2\x80\x98two-letter + digit\xe2\x80\x99 identifiers (e.g., `AA1`,\n`GA1`, etc.). Within each category are numerous fields, where the\nvariables are coded as `[identifier]_[index]`. More information about\nthese additional data fields can be found in [this PDF\ndocument](http://www1.ncdc.noaa.gov/pub/data/ish/ish-format-document.pdf).\n\nTo find out which categories of additional data fields are available for\na station, we can use the `station_coverage()` function. You\xe2\x80\x99ll get a\ntibble with the available additional categories and their counts over\nthe specified period.\n\n``` r\nadditional_data_fields <-\n get_station_metadata() %>%\n dplyr::filter(name == ""JUVVASSHOE"") %>%\n dplyr::pull(id) %>%\n station_coverage(years = 2015)\n```\n\n``` r\nadditional_data_fields\n#> # A tibble: 87 x 3\n#> id category count\n#> \n#> 1 013620-99999 AA1 0\n#> 2 013620-99999 AB1 0\n#> 3 013620-99999 AC1 0\n#> 4 013620-99999 AD1 0\n#> 5 013620-99999 AE1 0\n#> 6 013620-99999 AG1 0\n#> 7 013620-99999 AH1 0\n#> 8 013620-99999 AI1 0\n#> 9 013620-99999 AJ1 194\n#> 10 013620-99999 AK1 0\n#> # \xe2\x80\xa6 with 77 more rows\n```\n\nWe can use **purrr**\xe2\x80\x99s `map_df()` function to get additional data field\ncoverage for a subset of stations (those that are near sea level and\nhave data in 2019). 
With the `station_coverage()` function set to output\ntibbles in `wide` mode (one row per station, field categories as\ncolumns, and counts of observations as values), we can ascertain which\nstations have the particular fields we need.\n\n``` r\nstns <- \n get_station_metadata() %>%\n dplyr::filter(country == ""NO"", elev <= 5 & end_year == 2019)\n\ncoverage_tbl <- \n purrr::map_df(\n seq(nrow(stns)),\n function(x) {\n stns %>%\n dplyr::pull(id) %>%\n .[[x]] %>%\n station_coverage(\n years = 2019,\n wide_tbl = TRUE\n )\n }\n )\n```\n\n``` r\ncoverage_tbl\n#> # A tibble: 1 x 88\n#> id AA1 AB1 AC1 AD1 AE1 AG1 AH1 AI1 AJ1 AK1 AL1 AM1\n#> * \n#> 1 01167\xe2\x80\xa6 491 0 0 0 0 0 0 0 167 0 0 0\n#> # \xe2\x80\xa6 with 75 more variables: AN1 , AO1 , AP1 , AU1 ,\n#> # AW1 , AX1 , AY1 , AZ1 , CB1 , CF1 ,\n#> # CG1 , CH1 , CI1 , CN1 , CN2 , CN3 ,\n#> # CN4 , CR1 , CT1 , CU1 , CV1 , CW1 ,\n#> # CX1 , CO1 , CO2 , ED1 , GA1 , GD1 ,\n#> # GF1 , GG1 , GH1 , GJ1 , GK1 , GL1 ,\n#> # GM1 , GN1 , GO1 , GP1 , GQ1 , GR1 ,\n#> # HL1 , IA1 , IA2 , IB1 , IB2 , IC1 ,\n#> # KA1 , KB1 , KC1 , KD1 , KE1 , KF1 ,\n#> # KG1 , MA1 , MD1 , ME1 , MF1 , MG1 ,\n#> # MH1 , MK1 , MV1 , MW1 , OA1 , OB1 ,\n#> # OC1 , OE1 , RH1 , SA1 , ST1 , UA1 ,\n#> # UG1 , UG2 , WA1 , WD1 , WG1 \n```\n\nFor the `""KAWAIHAE""` station in Hawaii, some interesting data fields are\navailable. In particular, its `SA1` category provides sea surface\ntemperature data, where the `sa1_1` and `sa1_2` variables represent the\nsea surface temperature and its quality code.\n\nCombining the use of `get_met_data()` with functions from **dplyr**, we\ncan create a table of the mean ambient and sea-surface temperatures by\nmonth. The additional data is included in the met data table by using\nthe `add_fields` argument and specifying the `""SA1""` category (multiple\ncategories can be included).\n\n``` r\nkawaihae_sst <- \n get_met_data(\n station_id = ""997173-99999"",\n years = 2017:2018,\n add_fields = ""SA1""\n ) %>%\n dplyr::mutate(\n year = lubridate::year(time),\n month = lubridate::month(time)\n ) %>%\n dplyr::filter(sa1_2 == 1) %>%\n dplyr::group_by(year, month) %>%\n dplyr::summarize(\n avg_temp = mean(temp, na.rm = TRUE),\n avg_sst = mean(sa1_1, na.rm = TRUE)\n )\n```\n\n``` r\nkawaihae_sst\n#> # A tibble: 6 x 4\n#> # Groups: year [2]\n#> year month avg_temp avg_sst\n#> \n#> 1 2017 12 24.0 25.7\n#> 2 2018 1 23.8 25.2\n#> 3 2018 2 23.7 25.1\n#> 4 2018 3 23.8 25.0\n#> 5 2018 4 25.6 26.3\n#> 6 2018 12 26.5 25.9\n```\n\n## Installation\n\nThe **stationaRy** package can be easily installed from CRAN.\n\n``` r\ninstall.packages(""stationaRy"")\n```\n\nTo install the development version of **stationaRy**, use the following:\n\n``` r\ninstall.packages(""devtools"")\nremotes::install_github(""rich-iannone/stationaRy"")\n```\n\nIf you encounter a bug, have usage questions, or want to share ideas to\nmake this package better, feel free to file an\n[issue](https://github.com/rich-iannone/stationaRy/issues).\n\n## License\n\nMIT \xc2\xa9 Richard Iannone\n'",,"2013/11/21, 02:06:45",3626,CUSTOM,0,918,"2021/01/14, 22:49:50",4,4,26,0,1014,0,0.0,0.004439511653718142,"2019/09/25, 15:01:35",v0.5.0,0,4,false,,true,false,,,,,,,,,,, weathercan,This package makes it easier to search for and download multiple months/years of historical weather data from the Environment and Climate Change Canada (ECCC) website.,ropensci,https://github.com/ropensci/weathercan.git,github,"weather-data,environment-canada,weather-downloader,r,rstats,r-package,peer-reviewed",Meteorological 
Observation and Forecast,"2023/09/20, 22:05:19",97,0,10,true,R,rOpenSci,ropensci,"R,TeX,HTML",https://docs.ropensci.org/weathercan,"b'\n# weathercan \n\n[![:name status\nbadge](https://ropensci.r-universe.dev/badges/:name)](https://ropensci.r-universe.dev)\n[![weathercan status\nbadge](https://ropensci.r-universe.dev/badges/weathercan)](https://ropensci.r-universe.dev)\n[![R-CMD-check](https://github.com/ropensci/weathercan/workflows/R-CMD-check/badge.svg)](https://github.com/ropensci/weathercan/actions)\n[![codecov](https://codecov.io/gh/ropensci/weathercan/branch/main/graph/badge.svg)](https://app.codecov.io/gh/ropensci/weathercan)\n\n[![](https://badges.ropensci.org/160_status.svg)](https://github.com/ropensci/software-review/issues/160)\n[![DOI](https://zenodo.org/badge/60650396.svg)](https://zenodo.org/badge/latestdoi/60650396)\n[![DOI](http://joss.theoj.org/papers/10.21105/joss.00571/status.svg)](https://doi.org/10.21105/joss.00571)\n\n\n\nThis package makes it easier to search for and download multiple\nmonths/years of historical weather data from [Environment and Climate\nChange Canada (ECCC)\nwebsite](https://climate.weather.gc.ca/historical_data/search_historic_data_e.html).\n\nBear in mind that these downloads can be fairly large and performing\nmultiple downloads may use up ECCC\xe2\x80\x99s bandwidth unnecessarily. Try to\nstick to what you need.\n\nFor more details and tutorials checkout the [weathercan\nwebsite](https://docs.ropensci.org/weathercan/) (or see the [development\ndocs](http://ropensci.github.io/weathercan/))\n\n> Check out the Demo weathercan shiny dashboard\n> ([html](https://steffilazerte.shinyapps.io/weathercan_shiny/);\n> [source](https://github.com/steffilazerte/weathercan_shiny))\n\n## Installation\n\nYou can install `weathercan` from the [rOpenSci\nr-Universe](https://ropensci.r-universe.dev/ui/):\n\n``` r\ninstall.packages(""weathercan"", \n repos = c(""https://ropensci.r-universe.dev"", \n ""https://cloud.r-project.org""))\n```\n\nView the available vignettes with `vignette(package = ""weathercan"")`\n\nView a particular vignette with, for example,\n`vignette(""weathercan"", package = ""weathercan"")`\n\n## General usage\n\nTo download data, you first need to know the `station_id` associated\nwith the station you\xe2\x80\x99re interested in.\n\n### Stations\n\n`weathercan` includes the function `stations()` which returns a list of\nstations and their details (including `station_id`).\n\n``` r\nhead(stations())\n```\n\n ## # A tibble: 6 \xc3\x97 16\n ## prov station_name station_id climate_id WMO_id TC_id lat lon elev tz interval start end normals normals_1981_2010 normals_1971_2000\n ## \n ## 1 AB DAYSLAND 1795 301AR54 NA 52.9 -112. 689. Etc/GMT+7 day 1908 1922 FALSE FALSE FALSE \n ## 2 AB DAYSLAND 1795 301AR54 NA 52.9 -112. 689. Etc/GMT+7 hour NA NA FALSE FALSE FALSE \n ## 3 AB DAYSLAND 1795 301AR54 NA 52.9 -112. 689. Etc/GMT+7 month 1908 1922 FALSE FALSE FALSE \n ## 4 AB EDMONTON CORONATION 1796 301BK03 NA 53.6 -114. 671. Etc/GMT+7 day 1978 1979 FALSE FALSE FALSE \n ## 5 AB EDMONTON CORONATION 1796 301BK03 NA 53.6 -114. 671. Etc/GMT+7 hour NA NA FALSE FALSE FALSE \n ## 6 AB EDMONTON CORONATION 1796 301BK03 NA 53.6 -114. 671. 
Etc/GMT+7 month 1978 1979 FALSE FALSE FALSE\n\n``` r\nglimpse(stations())\n```\n\n ## Rows: 26,382\n ## Columns: 16\n ## $ prov ""AB"", ""AB"", ""AB"", ""AB"", ""AB"", ""AB"", ""AB"", ""AB"", ""AB"", ""AB"", ""AB"", ""AB"", ""AB"", ""AB"", ""AB"", ""AB"", ""AB"", ""AB"", ""AB"", ""AB"", ""AB"", ""AB"", \xe2\x80\xa6\n ## $ station_name ""DAYSLAND"", ""DAYSLAND"", ""DAYSLAND"", ""EDMONTON CORONATION"", ""EDMONTON CORONATION"", ""EDMONTON CORONATION"", ""FLEET"", ""FLEET"", ""FLEET"", \xe2\x80\xa6\n ## $ station_id 1795, 1795, 1795, 1796, 1796, 1796, 1797, 1797, 1797, 1798, 1798, 1798, 1799, 1799, 1799, 1800, 1800, 1800, 1801, 1801, 1801, 1802, \xe2\x80\xa6\n ## $ climate_id ""301AR54"", ""301AR54"", ""301AR54"", ""301BK03"", ""301BK03"", ""301BK03"", ""301B6L0"", ""301B6L0"", ""301B6L0"", ""301B8LR"", ""301B8LR"", ""301B8LR"", \xe2\x80\xa6\n ## $ WMO_id NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, \xe2\x80\xa6\n ## $ TC_id NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, \xe2\x80\xa6\n ## $ lat 52.87, 52.87, 52.87, 53.57, 53.57, 53.57, 52.15, 52.15, 52.15, 53.20, 53.20, 53.20, 52.40, 52.40, 52.40, 54.08, 54.08, 54.08, 53.52,\xe2\x80\xa6\n ## $ lon -112.28, -112.28, -112.28, -113.57, -113.57, -113.57, -111.73, -111.73, -111.73, -110.15, -110.15, -110.15, -115.20, -115.20, -115.2\xe2\x80\xa6\n ## $ elev 688.8, 688.8, 688.8, 670.6, 670.6, 670.6, 838.2, 838.2, 838.2, 640.0, 640.0, 640.0, 1036.0, 1036.0, 1036.0, 585.2, 585.2, 585.2, 668\xe2\x80\xa6\n ## $ tz ""Etc/GMT+7"", ""Etc/GMT+7"", ""Etc/GMT+7"", ""Etc/GMT+7"", ""Etc/GMT+7"", ""Etc/GMT+7"", ""Etc/GMT+7"", ""Etc/GMT+7"", ""Etc/GMT+7"", ""Etc/GMT+7"", ""E\xe2\x80\xa6\n ## $ interval ""day"", ""hour"", ""month"", ""day"", ""hour"", ""month"", ""day"", ""hour"", ""month"", ""day"", ""hour"", ""month"", ""day"", ""hour"", ""month"", ""day"", ""hour\xe2\x80\xa6\n ## $ start 1908, NA, 1908, 1978, NA, 1978, 1987, NA, 1987, 1987, NA, 1987, 1980, NA, 1980, 1980, NA, 1980, 1986, NA, 1986, 1987, NA, 1987, 1986\xe2\x80\xa6\n ## $ end 1922, NA, 1922, 1979, NA, 1979, 1990, NA, 1990, 1998, NA, 1998, 2009, NA, 2007, 1981, NA, 1981, 2019, NA, 2007, 1991, NA, 1991, 1995\xe2\x80\xa6\n ## $ normals FALSE, FALSE, FALSE, FALSE, FALSE, FALSE, FALSE, FALSE, FALSE, FALSE, FALSE, FALSE, TRUE, TRUE, TRUE, FALSE, FALSE, FALSE, TRUE, TRU\xe2\x80\xa6\n ## $ normals_1981_2010 FALSE, FALSE, FALSE, FALSE, FALSE, FALSE, FALSE, FALSE, FALSE, FALSE, FALSE, FALSE, TRUE, TRUE, TRUE, FALSE, FALSE, FALSE, TRUE, TRU\xe2\x80\xa6\n ## $ normals_1971_2000 FALSE, FALSE, FALSE, FALSE, FALSE, FALSE, FALSE, FALSE, FALSE, FALSE, FALSE, FALSE, FALSE, FALSE, FALSE, FALSE, FALSE, FALSE, FALSE,\xe2\x80\xa6\n\nYou can look through this data frame directly, or you can use the\n`stations_search` function:\n\n``` r\nstations_search(""Kamloops"", interval = ""hour"")\n```\n\n ## # A tibble: 3 \xc3\x97 16\n ## prov station_name station_id climate_id WMO_id TC_id lat lon elev tz interval start end normals normals_1981_2010 normals_1971_2000\n ## \n ## 1 BC KAMLOOPS A 1275 1163780 71887 YKA 50.7 -120. 345. Etc/GMT+8 hour 1953 2013 TRUE TRUE TRUE \n ## 2 BC KAMLOOPS A 51423 1163781 71887 YKA 50.7 -120. 345. Etc/GMT+8 hour 2013 2023 FALSE FALSE FALSE \n ## 3 BC KAMLOOPS AUT 42203 1163842 71741 ZKA 50.7 -120. 
345 Etc/GMT+8 hour 2006 2023 FALSE FALSE FALSE\n\nTime frame must be one of \xe2\x80\x9chour\xe2\x80\x9d, \xe2\x80\x9cday\xe2\x80\x9d, or \xe2\x80\x9cmonth\xe2\x80\x9d.\n\nYou can also search by proximity:\n\n``` r\nstations_search(coords = c(50.667492, -120.329049), dist = 20, interval = ""hour"")\n```\n\n ## # A tibble: 3 \xc3\x97 17\n ## prov station_name station_id climate_id WMO_id TC_id lat lon elev tz interval start end normals normals_1981_2010 normals_1971_2000 distance\n ## \n ## 1 BC KAMLOOPS A 1275 1163780 71887 YKA 50.7 -120. 345. Etc/GMT+8 hour 1953 2013 TRUE TRUE TRUE 8.61\n ## 2 BC KAMLOOPS AUT 42203 1163842 71741 ZKA 50.7 -120. 345 Etc/GMT+8 hour 2006 2023 FALSE FALSE FALSE 8.61\n ## 3 BC KAMLOOPS A 51423 1163781 71887 YKA 50.7 -120. 345. Etc/GMT+8 hour 2013 2023 FALSE FALSE FALSE 9.26\n\nYou can update this list of stations with\n\n``` r\nstations_dl()\n```\n\n ## According to Environment Canada, Modified Date: 2023-01-24 23:30 UTC\n\n ## Environment Canada Disclaimers:\n ## ""Station Inventory Disclaimer: Please note that this inventory list is a snapshot of stations on our website as of the modified date, and may be subject to change without notice.""\n ## ""Station ID Disclaimer: Station IDs are an internal index numbering system and may be subject to change without notice.""\n\n ## Stations data saved...\n ## Use `stations()` to access most recent version and `stations_meta()` to see when this was last updated\n\nAnd check when it was last updated with\n\n``` r\nstations_meta()\n```\n\n ## $ECCC_modified\n ## [1] ""2023-01-24 23:30:00 UTC""\n ## \n ## $weathercan_modified\n ## [1] ""2023-09-20""\n\n**Note:** For reproducibility, if you are using the stations list to\ngather your data, it can be a good idea to take note of the ECCC date of\nmodification and include it in your reports/manuscripts.\n\n### Weather\n\nOnce you have your `station_id`(s) you can download weather data:\n\n``` r\nkam <- weather_dl(station_ids = 51423, start = ""2018-02-01"", end = ""2018-04-15"")\n```\n\n ## As of weathercan v0.3.0 time display is either local time or UTC\n ## See Details under ?weather_dl for more information.\n ## This message is shown once per session\n\n``` r\nkam\n```\n\n ## # A tibble: 1,776 \xc3\x97 37\n ## station_name station_id station_operator prov lat lon elev climate_id WMO_id TC_id date time year month day hour weather hmdx\n ## \n ## 1 KAMLOOPS A 51423 NA BC 50.7 -120. 345. 1163781 71887 YKA 2018-02-01 2018-02-01 00:00:00 2018 02 01 00:00 NA\n ## 2 KAMLOOPS A 51423 NA BC 50.7 -120. 345. 1163781 71887 YKA 2018-02-01 2018-02-01 01:00:00 2018 02 01 01:00 Snow NA\n ## 3 KAMLOOPS A 51423 NA BC 50.7 -120. 345. 1163781 71887 YKA 2018-02-01 2018-02-01 02:00:00 2018 02 01 02:00 NA\n ## 4 KAMLOOPS A 51423 NA BC 50.7 -120. 345. 1163781 71887 YKA 2018-02-01 2018-02-01 03:00:00 2018 02 01 03:00 NA\n ## 5 KAMLOOPS A 51423 NA BC 50.7 -120. 345. 1163781 71887 YKA 2018-02-01 2018-02-01 04:00:00 2018 02 01 04:00 Cloudy NA\n ## 6 KAMLOOPS A 51423 NA BC 50.7 -120. 345. 1163781 71887 YKA 2018-02-01 2018-02-01 05:00:00 2018 02 01 05:00 NA\n ## 7 KAMLOOPS A 51423 NA BC 50.7 -120. 345. 1163781 71887 YKA 2018-02-01 2018-02-01 06:00:00 2018 02 01 06:00 NA\n ## 8 KAMLOOPS A 51423 NA BC 50.7 -120. 345. 1163781 71887 YKA 2018-02-01 2018-02-01 07:00:00 2018 02 01 07:00 Cloudy NA\n ## 9 KAMLOOPS A 51423 NA BC 50.7 -120. 345. 1163781 71887 YKA 2018-02-01 2018-02-01 08:00:00 2018 02 01 08:00 NA\n ## 10 KAMLOOPS A 51423 NA BC 50.7 -120. 345. 
1163781 71887 YKA 2018-02-01 2018-02-01 09:00:00 2018 02 01 09:00 NA\n ## # \xe2\x84\xb9 1,766 more rows\n\nYou can also download data from multiple stations at once:\n\n``` r\nkam_pg <- weather_dl(station_ids = c(48248, 51423), start = ""2018-02-01"", end = ""2018-04-15"")\n```\n\n## Climate Normals\n\nTo access climate normals, you first need to know the `climate_id`\nassociated with the station you\xe2\x80\x99re interested in.\n\n``` r\nstations_search(""Winnipeg"", normals_years = ""current"")\n```\n\n ## # A tibble: 1 \xc3\x97 13\n ## prov station_name station_id climate_id WMO_id TC_id lat lon elev tz normals normals_1981_2010 normals_1971_2000\n ## \n ## 1 MB WINNIPEG RICHARDSON INT\'L A 3698 5023222 71852 YWG 49.9 -97.2 239. Etc/GMT+6 TRUE TRUE TRUE\n\nThen you can download the climate normals with the `normals_dl()`\nfunction.\n\n``` r\nn <- normals_dl(""5023222"")\n```\n\nSee the [Getting\nStarted](https://docs.ropensci.org/weathercan/articles/weathercan.html)\nvignette for more details.\n\n## Citation\n\n``` r\ncitation(""weathercan"")\n```\n\n ## To cite \'weathercan\' in publications, please use:\n ## \n ## LaZerte, Stefanie E and Sam Albers (2018). weathercan: Download and format weather data from Environment and Climate Change Canada. The\n ## Journal of Open Source Software 3(22):571. doi:10.21105/joss.00571.\n ## \n ## A BibTeX entry for LaTeX users is\n ## \n ## @Article{,\n ## title = {{weathercan}: {D}ownload and format weather data from Environment and Climate Change Canada},\n ## author = {Stefanie E LaZerte and Sam Albers},\n ## journal = {The Journal of Open Source Software},\n ## volume = {3},\n ## number = {22},\n ## pages = {571},\n ## year = {2018},\n ## url = {https://joss.theoj.org/papers/10.21105/joss.00571},\n ## }\n\n## License\n\nThe data and the code in this repository are licensed under multiple\nlicences. All code is licensed\n[GPL-3](https://www.gnu.org/licenses/gpl-3.0.en.html). All weather data\nis licensed under the [Open Government License -\nCanada](http://open.canada.ca/en/open-government-licence-canada).\n\n## `weathercan` in the wild!\n\n- Browse [`weathercan` use cases](https://ropensci.org/usecases/) on\n rOpenSci.org\n- Check out the [`weathercan` Shiny\n App](https://nickrongkp.shinyapps.io/WeatherCan/) by Nick Rong\n (@nickyrong) and Nathan Smith (@WraySmith)\n- R package [`RavenR`](https://github.com/rchlumsk/RavenR/tree/master/R)\n has functions for converting ECCC data downloaded by `weathercan` to\n the .rvt format for Raven.\n- R package [`meteoland`](https://github.com/emf-creaf/meteoland) has\n functions for converting ECCC data downloaded by `weathercan` to the\n format required for use in `meteoland`.\n\n## Similar packages\n\n**[`rclimateca`](https://github.com/paleolimbot/rclimateca)**\n\n`weathercan` and `rclimateca` were developed at roughly the same time\nand, as a result, both present up-to-date methods for accessing and\ndownloading data from ECCC. The largest differences between the two\npackages are:\n\n- a) `weathercan` includes functions for interpolating weather data and\n directly integrating it into other data sources (see the sketch below);\n- b) `weathercan` actively seeks to apply tidy data principles in R and\n integrates well with the tidyverse, including using tibbles and nested\n list-columns;\n- c) `rclimateca` contains arguments for specifying short vs.\xc2\xa0long data\n formats;\n- d) `rclimateca` has the option of formatting data in the MUData format\n using the [`mudata2`](https://cran.r-project.org/package=mudata2)\n package by the same author.\n\n
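As an illustration of point (a), a minimal sketch of interpolating\nhourly weather onto another dataset\xe2\x80\x99s timestamps with\n`weather_interp()` (`my_data` is a hypothetical data frame with a\nPOSIXct `time` column; `kam` is the hourly weather downloaded above):\n\n``` r\n# hypothetical usage sketch; see ?weather_interp for details\nmy_data_weather <- weather_interp(data = my_data, weather = kam,\n cols = ""temp"")\n```\n\n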
**[`CHCN`](https://cran.r-project.org/package=CHCN)**\n\n`CHCN` is an older package, last updated in 2012. Unfortunately, ECCC\nupdated their services within the last couple of years, which caused\nmany of the previous web scrapers to fail. `CHCN` relies on a\ndecommissioned [older web scraper](https://quickcode.io/) and so is\ncurrently broken.\n\n## Contributions\n\nWe welcome any and all contributions! To make the process as painless as\npossible for all involved, please see our [guide to\ncontributing](CONTRIBUTING.md).\n\n## Code of Conduct\n\nPlease note that this project is released with a [Contributor Code of\nConduct](https://ropensci.org/code-of-conduct/). By participating in\nthis project you agree to abide by its terms.\n\n[![ropensci_footer](http://ropensci.org/public_images/ropensci_footer.png)](https://ropensci.org)\n'",",https://zenodo.org/badge/latestdoi/60650396,https://doi.org/10.21105/joss.00571","2016/06/07, 22:20:22",2696,GPL-3.0,54,844,"2023/09/20, 22:05:24",8,39,131,10,35,0,0.2,0.10951760104302477,"2023/09/20, 22:06:21",v0.7.1,0,6,false,,false,true,,,https://github.com/ropensci,https://ropensci.org/,"Berkeley, CA",,,https://avatars.githubusercontent.com/u/1200269?v=4,,, metR,Several functions and utilities that make R better for handling meteorological data in the tidy data paradigm.,eliocamp,https://github.com/eliocamp/metR.git,github,"r,atmospheric-science,ggplot2,r-package,rstats,visualization",Meteorological Observation and Forecast,"2023/03/25, 14:14:33",134,0,15,true,R,,,R,https://eliocamp.github.io/metR/,"b'\n\n\n# metR \n\n[![R build\nstatus](https://github.com/eliocamp/metR/workflows/R-CMD-check/badge.svg)](https://github.com/eliocamp/metR/actions)\n[![Coverage\nstatus](https://codecov.io/gh/eliocamp/metR/branch/master/graph/badge.svg?token=jVznWTMCpz)](https://app.codecov.io/gh/eliocamp/metR)\n[![CRAN\nstatus](http://www.r-pkg.org/badges/version/metR)](https://cran.r-project.org/package=metR)\n[![DOI](https://zenodo.org/badge/96357263.svg)](https://zenodo.org/badge/latestdoi/96357263)\n\nmetR packages several functions and utilities that make R better for\nhandling meteorological data in the tidy data paradigm. It started\nmostly as a packaging of assorted wrappers and tricks that I wrote for\nmy day-to-day work as a researcher in atmospheric sciences. Since then,\nit has grown organically, driven by my own needs and feedback from users.\n\nConceptually it\xe2\x80\x99s divided into *visualization tools* and *data tools*.\nThe former are geoms, stats and scales that help with plotting using\n[ggplot2](https://ggplot2.tidyverse.org/index.html), such as\n`stat_contour_fill()` or `scale_y_level()`, while the latter are\nfunctions for common data processing tasks in the atmospheric sciences,\nsuch as `Derivate()` or `EOF()`; these are implemented to work in the\n[data.table](https://github.com/Rdatatable/data.table/wiki) paradigm,\nbut also work with regular data frames.\n\nCurrently metR is in development but maturing. Most functions check\narguments and there are some tests. However, some functions might change\ntheir interface, and functionality can be moved to other packages, so\nplease bear that in mind.\n\n
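As a quick taste of the visualization tools, a minimal sketch using only\nthe built-in `geopotential` dataset that also appears in the Examples\nbelow (filled contours of geopotential height for a single date):\n\n``` r\nlibrary(metR)\nlibrary(ggplot2)\ndata(geopotential)\n\n# subset one date and plot filled contours of geopotential height\none_date <- geopotential[geopotential$date == min(geopotential$date), ]\nggplot(one_date, aes(lon, lat)) +\n geom_contour_fill(aes(z = gh))\n```\n\n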
## Installation\n\nYou can install metR from CRAN with:\n\n``` r\ninstall.packages(""metR"")\n```\n\nOr the development version ([![Build\nStatus](https://github.com/eliocamp/metR/workflows/R-CMD-check/badge.svg?branch=dev)](https://github.com/eliocamp/metR/actions?query=workflow%3AR-CMD-check))\nwith:\n\n``` r\n# install.packages(""devtools"")\ndevtools::install_github(""eliocamp/metR@dev"")\n```\n\nIf you need to read netcdf files, you might need to install the netcdf\nand udunits2 libraries. On Ubuntu and its derivatives this can be done\nby typing\n\n sudo apt install libnetcdf-dev netcdf-bin libudunits2-dev\n\n## Citing the package\n\nIf you use metR in your research, please consider citing it. You can get\ncitation information with\n\n``` r\ncitation(""metR"")\n#> \n#> To cite metR in publications use:\n#> \n#> \n#> \n#> A BibTeX entry for LaTeX users is\n#> \n#> @Manual{,\n#> title = {metR: Tools for Easier Analysis of Meteorological Fields},\n#> author = {Elio Campitelli},\n#> year = {2021},\n#> note = {R package version 0.13.0.9000},\n#> url = {https://github.com/eliocamp/metR},\n#> doi = {10.5281/zenodo.2593516},\n#> }\n```\n\n## Examples\n\nIn this example we easily perform Principal Components Decomposition\n(EOF) on monthly geopotential height, then compute the geostrophic wind\nassociated with this field and plot the field with filled contours and\nthe wind with streamlines.\n\n``` r\nlibrary(metR)\nlibrary(data.table)\nlibrary(ggplot2)\ndata(geopotential)\n# Use Empirical Orthogonal Functions to compute the Antarctic Oscillation\ngeopotential <- copy(geopotential)\ngeopotential[, gh.t.w := Anomaly(gh)*sqrt(cos(lat*pi/180)),\n by = .(lon, lat, month(date))]\naao <- EOF(gh.t.w ~ lat + lon | date, data = geopotential, n = 1)\naao$left[, c(""u"", ""v"") := GeostrophicWind(gh.t.w/sqrt(cos(lat*pi/180)), \n lon, lat)]\n\n# AAO field\nbinwidth <- 0.01\nggplot(aao$left, aes(lon, lat)) +\n geom_contour_fill(aes(z = gh.t.w/sqrt(cos(lat*pi/180)), \n fill = after_stat(level)), binwidth = binwidth,\n xwrap = c(0, 360)) +\n geom_streamline(aes(dx = dlon(u, lat), dy = dlat(v)),\n linewidth = 0.4, L = 80, skip = 3, xwrap = c(0, 360)) +\n scale_x_longitude() +\n scale_y_latitude(limits = c(-90, -20)) +\n scale_fill_divergent_discretised(name = ""AAO pattern"") +\n coord_polar()\n#> Warning in .check_wrap_param(list(...)): \'xwrap\' and \'ywrap\' will be\n#> deprecated. 
Use ggperiodic::periodic instead.\n```\n\n![](man/figures/field-1.png)\n\n``` r\n# AAO signal\nggplot(aao$right, aes(date, gh.t.w)) +\n geom_line() +\n geom_smooth(span = 0.4)\n#> `geom_smooth()` using method = \'loess\' and formula = \'y ~ x\'\n```\n\n![](man/figures/timeseries-1.png)\n\nYou can read more in the vignettes: [Visualization\ntools](https://eliocamp.github.io/metR/articles/Visualization-tools.html)\nand [Working with\ndata](https://eliocamp.github.io/metR/articles/Working-with-data.html).\n'",",https://zenodo.org/badge/latestdoi/96357263","2017/07/05, 20:09:40",2303,GPL-3.0,25,991,"2023/05/08, 14:48:54",26,10,155,11,170,3,0.0,0.003089598352214229,"2023/03/25, 14:03:11",v0.14.0,0,4,false,,false,false,,,,,,,,,,, climate,The goal of the climate R package is to automate downloading of meteorological and hydrological data from publicly available repositories.,bczernecki,https://github.com/bczernecki/climate.git,github,"climate,climate-data,meteorological-data,meteorology,ogimet,imgw,sounding,r,noaa-data,r-package",Meteorological Observation and Forecast,"2023/04/01, 13:48:48",62,0,13,true,R,,,R,https://bczernecki.github.io/climate/,"b'# climate \n\n\n\n[![R-CMD-check](https://github.com/bczernecki/climate/workflows/R-CMD-check/badge.svg)](https://github.com/bczernecki/climate/actions)\n[![HTML5 check](https://github.com/bczernecki/climate/actions/workflows/html5-check.yaml/badge.svg?branch=master)](https://github.com/bczernecki/climate/actions/workflows/html5-check.yaml)\n[![Codecov test\ncoverage](https://codecov.io/gh/bczernecki/climate/branch/dev/graph/badge.svg)](https://app.codecov.io/gh/bczernecki/climate?branch=dev)\n\n[![CRAN\nstatus](https://www.r-pkg.org/badges/version/climate)](https://cran.r-project.org/package=climate)\n[![CRAN RStudio mirror\ndownloads](http://cranlogs.r-pkg.org/badges/climate)](https://cran.r-project.org/package=climate)\n[![](http://cranlogs.r-pkg.org/badges/grand-total/climate?color=brightgreen)](https://cran.r-project.org/package=climate)\n\n\nThe goal of the **climate** R package is to automate downloading of *in-situ* meteorological\nand hydrological data from publicly available repositories:\n\n- OGIMET [(ogimet.com)](http://ogimet.com/index.phtml.en) - up-to-date collection of SYNOP data\n- University of Wyoming - atmospheric vertical profiling data (http://weather.uwyo.edu/upperair/)\n- National Oceanic & Atmospheric Administration - Earth System Research Laboratories - Global Monitoring Laboratory [(NOAA)](https://gml.noaa.gov/ccgg/trends/)\n- Polish Institute of Meteorology and Water Management - National Research Institute [(IMGW-PIB)](https://dane.imgw.pl/)\n- National Oceanic & Atmospheric Administration - National Climatic Data Center - Integrated Surface Hourly (ISH) [(NOAA)](https://www1.ncdc.noaa.gov/pub/data/noaa/)\n\n## Installation\n\nThe stable release of the **climate** package from the [CRAN](https://CRAN.R-project.org) repository can be installed with:\n\n``` r\ninstall.packages(""climate"")\n```\n\nIt is highly recommended to install the most up-to-date development version of **climate** from [GitHub](https://github.com/bczernecki/climate) with:\n\n``` r\nlibrary(remotes)\ninstall_github(""bczernecki/climate"")\n```\n\n## Overview\n\n### Meteorological data\n\n- **meteo_ogimet()** - Downloading hourly and daily meteorological data from the SYNOP stations available in the ogimet.com collection.\nAny meteorological (aka SYNOP) station working under the World Meteorological Organization framework after year 2000 should be 
accessible.\n\n- **meteo_imgw()** - Downloading hourly, daily, and monthly meteorological data from the SYNOP/CLIMATE/PRECIP stations available in the danepubliczne.imgw.pl collection. \nIt is a wrapper for `meteo_monthly()`, `meteo_daily()`, and `meteo_hourly()`.\n\n- **meteo_noaa_hourly()** - Downloading hourly NOAA Integrated Surface Hourly (ISH) meteorological data - some stations have a more than 100-year-long history of observations\n\n- **sounding_wyoming()** - Downloading measurements of the vertical profile of the atmosphere (aka rawinsonde data)\n\n- **meteo_noaa_co2()** - Downloading monthly CO2 measurements from the Mauna Loa Observatory\n\n \n### Hydrological data\n\n- **hydro_imgw()** - Downloading hourly, daily, and monthly hydrological data from the SYNOP / CLIMATE / PRECIP stations available in the\ndanepubliczne.imgw.pl collection.\nIt is a wrapper for a previously developed set of functions such as: `hydro_annual()`, `hydro_monthly()`, and `hydro_daily()`.\n\n### Auxiliary functions and datasets\n\n- **stations_ogimet()** - Downloading information about all stations available in the selected\ncountry in the Ogimet repository\n- **nearest_stations_ogimet()** - Downloading information about the nearest stations to the selected point using the Ogimet repository\n- **nearest_stations_noaa()** - Downloading information about the nearest stations to the selected point available for the selected country in the NOAA ISH meteorological repository\n- **nearest_stations_imgw()** - List of nearby meteorological or hydrological IMGW-PIB stations in Poland\n- **imgw_meteo_stations** - Built-in metadata from the IMGW-PIB repository for meteorological stations, their geographical\ncoordinates, and ID numbers\n- **imgw_hydro_stations** - Built-in metadata from the IMGW-PIB repository for hydrological stations, their geographical\ncoordinates, and ID numbers\n- **imgw_meteo_abbrev** - Dictionary explaining variables available for meteorological stations (from the IMGW-PIB repository)\n- **imgw_hydro_abbrev** - Dictionary explaining variables available for hydrological stations (from the IMGW-PIB repository)\n\n\n\n## Example 1\n#### Download an hourly dataset from the NOAA ISH meteorological repository:\n\n``` r\nlibrary(climate)\nnoaa <- meteo_noaa_hourly(station = ""123300-99999"", year = 2018:2019) # station ID: Poznan, Poland\nhead(noaa)\n\n# year month day hour lon lat alt t2m dpt2m ws wd slp visibility\n# 2019 1 1 0 16.85 52.417 84 3.3 2.3 5 220 1025.0 6000\n# 2019 1 1 1 16.85 52.417 84 3.7 3.0 4 220 1024.2 1500\n# 2019 1 1 2 16.85 52.417 84 4.2 3.6 4 220 1022.5 1300\n# 2019 1 1 3 16.85 52.417 84 5.2 4.6 5 240 1021.2 1900\n```\n\n\n\n## Example 2\n#### Finding the nearest meteorological stations in a given country using the OGIMET data source:\n\n``` r\nlibrary(climate)\n# find the 100 nearest UK stations to longitude 1W and latitude 53N:\n\nnearest_stations_ogimet(country = ""United+Kingdom"",\n date = Sys.Date(),\n add_map = TRUE,\n point = c(-1, 53),\n no_of_stations = 100\n)\n\n# wmo_id station_names lon lat alt distance [km]\n# 03354 Nottingham Weather Centre -1.250005 53.00000 117 28.04973\n# 03379 Cranwell -0.500010 53.03333 67 56.22175\n# 03377 Waddington -0.516677 53.16667 68 57.36093\n# 03373 Scampton -0.550011 53.30001 57 60.67897\n# 03462 Wittering -0.466676 52.61668 84 73.68934\n# 03544 Church Lawford -1.333340 52.36667 107 80.29844\n# ...\n```\n\n![100 nearest stations to given coordinates in the UK](http://iqdata.eu/kolokwium/uk.png)\n\n
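The vertical sounding download mentioned in the overview\n(`sounding_wyoming()`) works in the same spirit; a minimal sketch, with\nan illustrative WMO ID and date (argument names as in the package\ndocumentation):\n\n``` r\nlibrary(climate)\n# rawinsonde profile for WMO station 12120, 2019-04-04 00 UTC\nnp <- sounding_wyoming(wmo_id = 12120, yy = 2019, mm = 4, dd = 4, hh = 0)\nstr(np)\n```\n\n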
## Example 3\n#### Downloading daily (or hourly) data from a global (OGIMET) repository knowing its ID (see also `nearest_stations_ogimet()`):\n``` r\nlibrary(climate)\no = meteo_ogimet(date = c(Sys.Date() - 5, Sys.Date() - 1), \n interval = ""daily"",\n coords = FALSE, \n station = 12330)\nhead(o)\n\n#> station_ID Date TemperatureCAvg TemperatureCMax TemperatureCMin TdAvgC HrAvg WindkmhDir\n#> 3 12330 2019-12-21 8.8 13.2 4.9 5.3 79.3 SSE\n#> 4 12330 2019-12-20 5.4 8.5 -1.2 4.5 92.4 ESE\n#> 5 12330 2019-12-19 3.8 10.3 -3.0 1.9 89.6 SW\n#> 6 12330 2019-12-18 6.3 9.0 2.2 4.1 84.8 S\n#> 7 12330 2019-12-17 4.9 7.6 0.3 2.9 87.2 SSE\n#> WindkmhInt WindkmhGust PresslevHp Precmm TotClOct lowClOct SunD1h VisKm SnowDepcm PreselevHp\n#> 3 11.4 39.6 995.9 1.8 3.6 2.0 6.7 21.4 NA\n#> 4 15.0 NA 1015.0 0.0 6.4 0.6 1.0 8.0 NA\n#> 5 7.1 NA 1020.4 0.0 5.2 5.9 2.5 14.1 NA\n#> 6 9.2 NA 1009.2 0.0 5.7 2.7 1.4 12.2 NA\n#> 7 7.2 NA 1010.8 0.1 6.2 4.6 13.0 NA\n```\n\n## Example 4\n#### Downloading monthly/daily/hourly meteorological/hydrological data from the Polish (IMGW-PIB) repository:\n\n``` r\nm = meteo_imgw(interval = ""monthly"", rank = ""synop"", year = 2000, coords = TRUE)\nhead(m)\n#> rank id X Y station yy mm tmax_abs\n#> 575 SYNOPTYCZNA 353230295 23.16228 53.10726 BIA\xc5\x81YSTOK 2000 1 5.3\n#> 577 SYNOPTYCZNA 353230295 23.16228 53.10726 BIA\xc5\x81YSTOK 2000 2 10.6\n#> 578 SYNOPTYCZNA 353230295 23.16228 53.10726 BIA\xc5\x81YSTOK 2000 3 14.8\n#> 579 SYNOPTYCZNA 353230295 23.16228 53.10726 BIA\xc5\x81YSTOK 2000 4 27.8\n#> 580 SYNOPTYCZNA 353230295 23.16228 53.10726 BIA\xc5\x81YSTOK 2000 5 29.3\n#> 581 SYNOPTYCZNA 353230295 23.16228 53.10726 BIA\xc5\x81YSTOK 2000 6 32.6\n#> tmax_mean tmin_abs tmin_mean t2m_mean_mon t5cm_min rr_monthly\n#> 575 0.4 -16.5 -4.5 -2.1 -23.5 34.2\n#> 577 4.1 -10.4 -1.4 1.3 -12.9 25.4\n#> 578 6.2 -6.4 -1.0 2.4 -9.4 45.5\n#> 579 17.9 -4.6 4.7 11.5 -8.1 31.6\n#> 580 21.3 -4.3 5.7 13.8 -8.3 9.4\n#> 581 23.1 1.0 9.6 16.6 -1.8 36.4\n\nh = hydro_imgw(interval = ""semiannual_and_annual"", year = 2010:2011)\nhead(h)\n id station riv_or_lake hyy idyy Mesu idex H beyy bemm bedd behm\n3223 150210180 ANNOPOL Wis\xc5\x82a (2) 2010 13 H 1 227 2009 12 19 NA\n3224 150210180 ANNOPOL Wis\xc5\x82a (2) 2010 13 H 2 319 NA NA NA NA\n3225 150210180 ANNOPOL Wis\xc5\x82a (2) 2010 13 H 3 531 2010 3 3 18\n3226 150210180 ANNOPOL Wis\xc5\x82a (2) 2010 14 H 1 271 2010 8 29 NA\n3227 150210180 ANNOPOL Wis\xc5\x82a (2) 2010 14 H 1 271 2010 10 27 NA\n3228 150210180 ANNOPOL Wis\xc5\x82a (2) 2010 14 H 2 392 NA NA NA NA\n```\n\n## Example 5\n#### Create a Walter & Lieth climatic diagram based on downloaded data\n\n\n``` r\nlibrary(climate)\nlibrary(dplyr)\n\ndf = meteo_imgw(interval = \'monthly\', rank = \'synop\', year = 1991:2019, station = ""POZNA\xc5\x83"") \ndf2 = select(df, station:t2m_mean_mon, rr_monthly)\n\nmonthly_summary = df2 %>% \n group_by(mm) %>% \n summarise(tmax = mean(tmax_abs, na.rm = TRUE), \n tmin = mean(tmin_abs, na.rm = TRUE),\n tavg = mean(t2m_mean_mon, na.rm = TRUE), \n prec = sum(rr_monthly) / n_distinct(yy)) \n\n# reorder columns to prec, tmax, tmin, tavg and transpose to the wide\n# month-by-month layout that climatol::diagwl() expects\nmonthly_summary = as.data.frame(t(monthly_summary[, c(5,2,3,4)])) \nmonthly_summary = round(monthly_summary, 1)\ncolnames(monthly_summary) = month.abb\nprint(monthly_summary)\n\n# Jan Feb Mar Apr May Jun Jul Aug Sep Oct Nov Dec\n# prec 37.1 31.3 38.5 31.3 53.9 60.8 94.8 59.6 40.5 39.7 35.7 38.6\n# tmax 8.7 11.2 17.2 23.8 28.3 31.6 32.3 31.8 26.9 21.3 14.3 9.8\n# tmin -15.0 -11.9 -7.6 -3.3 1.0 5.8 8.9 7.5 2.7 -2.4 -5.2 -10.4\n# tavg -1.0 0.5 3.7 9.4 14.4 17.4 19.4 19.0 14.3 9.1 4.5 0.8\n\n# create the plot with the ""climatol"" 
package:\nclimatol::diagwl(monthly_summary, mlab = ""en"", \n est = ""POZNA\xc5\x83"", alt = NA, \n per = ""1991-2019"", p3line = FALSE)\n```\n\n![Walter and Lieth climatic diagram for Poznan, Poland](http://iqdata.eu/kolokwium/poznan.svg)\n\n## Example 6\n#### Download the monthly CO2 dataset from the Mauna Loa observatory\n\n``` r\nlibrary(climate)\nlibrary(ggplot2)\nlibrary(ggthemes)\n\nco2 = meteo_noaa_co2()\nhead(co2)\nco2$date = ISOdate(co2$yy, co2$mm, 1)\nggplot(co2, aes(date, co2_avg)) + \n geom_line() + \n geom_smooth() +\n theme_bw() +\n labs(\n title = ""Carbon Dioxide (CO2)"",\n subtitle = ""Mauna Loa Observatory"",\n caption = ""data source: NOAA\n visualization: Bartosz Czernecki / R climate package"",\n x = """",\n y = ""ppm""\n)\n\n```\n\n![CO2 monthly concentration, Mauna Loa observatory](http://iqdata.eu/kolokwium/co2_chart.svg)\n\n\n## Example 7\n#### Use ""climate"" inside a Python environment via rpy2\n\n```python\n# load required packages\nfrom rpy2.robjects.packages import importr\nimport rpy2.robjects as robjects\nimport pandas as pd\nimport datetime as dt\n\n# load climate package (make sure that it was installed in R before)\nimportr(\'climate\')\n# test functionality e.g. with the meteo_ogimet function for New York - La Guardia:\ndf = robjects.r[\'meteo_ogimet\'](interval = ""daily"", station = 72503,\n date = robjects.StrVector([\'2022-05-01\', \'2022-06-15\']))\n# optionally - transform the object to a pandas data frame and rename columns + fix datetime:\nres = pd.DataFrame(df).transpose()\nres.columns = df.colnames\nres[\'Date\'] = pd.TimedeltaIndex(res[\'Date\'], unit=\'d\') + dt.datetime(1970,1,1)\nres.head()\n\n>>> res[res.columns[0:7]].head()\n# station_ID Date TemperatureCAvg ... TemperatureCMin TdAvgC HrAvg\n#0 72503.0 2022-06-15 23.5 ... 19.4 10.9 45.2\n#1 72503.0 2022-06-14 25.0 ... 20.6 16.1 59.0\n#2 72503.0 2022-06-13 20.4 ... 17.8 16.0 74.8\n#3 72503.0 2022-06-12 21.3 ... 18.3 12.0 57.1\n#4 72503.0 2022-06-11 22.6 ... 17.8 8.1 40.1\n\n```\n\n## Acknowledgment\n\nOgimet.com, the University of Wyoming, the Institute of Meteorology and Water Management - National Research Institute (IMGW-PIB), and the National Oceanic & Atmospheric Administration (NOAA) - Earth System Research Laboratory, Global Monitoring Division and Integrated Surface Hourly (NOAA ISH) are the sources of the data.\n\n## Contribution\n\nContributions to this package are welcome. \nThe preferred method of contribution is through a GitHub pull request. \nFeel free also to contact us by creating [an issue](https://github.com/bczernecki/climate/issues).\n\n\n## Citation\n\nTo cite the `climate` package in publications, please use [this paper](https://www.mdpi.com/2071-1050/12/1/394):\n\nCzernecki, B.; G\xc5\x82ogowski, A.; Nowosad, J. Climate: An R Package to Access Free In-Situ Meteorological and Hydrological Datasets for Environmental Assessment. Sustainability 2020, 12, 394. 
https://doi.org/10.3390/su12010394""\n\nLaTeX/BibTeX version can be obtained with:\n```\nlibrary(climate)\ncitation(""climate"")\n```\n\n'",",https://doi.org/10.3390/su12010394""\n\nLaTeX/BibTeX","2019/07/17, 19:49:40",1561,CUSTOM,1,396,"2023/09/22, 09:47:47",3,29,84,6,33,1,1.1,0.4,"2022/10/12, 13:22:29",v1.1.0,0,4,false,,false,false,,,,,,,,,,, rdwd,"An R package to select, download and read climate data from the German Weather Service.",brry,https://github.com/brry/rdwd.git,github,,Meteorological Observation and Forecast,"2023/09/27, 12:19:31",62,0,11,true,R,,,R,https://bookdown.org/brry/rdwd,"b'# rdwd\n\n\n`rdwd` is an [R](https://www.r-project.org/) package to select, download and read climate data from the \nGerman Weather Service (Deutscher Wetterdienst, DWD). \nThe DWD provides thousands of datasets with weather observations online at \n[opendata.dwd.de](https://opendata.dwd.de/climate_environment/CDC/observations_germany/climate/). \nSince May 2019, `rdwd` also supports reading the Radolan (binary) raster data at \n[grids_germany](https://opendata.dwd.de/climate_environment/CDC/grids_germany/).\n\n`rdwd` is available on CRAN:\n[![CRAN_Status_Badge](http://www.r-pkg.org/badges/version-last-release/rdwd)](https://cran.r-project.org/package=rdwd) \n[![downloads](http://cranlogs.r-pkg.org/badges/rdwd)](https://www.r-pkg.org/services)\n[![Rdoc](http://www.rdocumentation.org/badges/version/rdwd)](https://www.rdocumentation.org/packages/rdwd)\n![""rdwd dependencies""](https://tinyverse.netlify.com/badge/rdwd)\n\nIt has been presented at [FOSDEM 2017](https://archive.fosdem.org/2017/schedule/event/geo_weather/)\nand [UseR!2017](https://user2017.sched.com/event/Axr3/rdwd-manage-german-weather-observations) in Brussels and with a 5 Minute [video](https://youtu.be/KOYZPMMgiHo?t=233) at [e-Rum2020](https://milano-r.github.io/erum2020program/lightning-talks.html#rdwd-r-interface-to-german-weather-service-data),\nfeatured in Rstudio\'s [data package list](https://rviews.rstudio.com/2017/02/17/january-new-data-packages), \nwritten about in [OSOR](https://joinup.ec.europa.eu/collection/open-source-observatory-osor/news/study-german-weather-data) and used e.g. for\n[NDR: Starkregen im Norden](https://story.ndr.de/starkregen-im-norden/index.html).\nDevelopment of `rdwd` was triggered 2016 by flash flood research in Braunsbach \n([1](https://www.uni-potsdam.de/en/natriskchange/qualification-program/task-force-braunsbach-flash-flood-2016), [2](https://doi.org/10.1016/j.scitotenv.2018.02.241),\n[3](https://doi.org/10.5675/HyWa_2017,3_1),\n[4](https://publishup.uni-potsdam.de/frontdoor/index/index/docId/39488)).\n\n\n```diff\n- HELP NEEDED\n- with the new 5-minute data (April 2022), the fileIndex etc are getting very big.\n- ideas on package size reduction are welcome at https://github.com/brry/rdwd/issues/35\n```\n\n### Documentation\n\nA website with more information, examples, use cases and an interactive map of the DWD stations\ncan be found at \n\n\n### Usage\n\nUsage for observational weather data from the measuring stations usually looks something like the following:\n\n```R\n# Download and install (once only):\ninstall.packages(""rdwd"")\n# Load the package into library (needed in every R session):\nlibrary(rdwd)\n\n# select a dataset (e.g. 
last year\'s daily climate data from Potsdam city):\nlink <- selectDWD(""Potsdam"", res=""daily"", var=""kl"", per=""recent"")\n\n# Actually download that dataset, returning the local storage file name:\nfile <- dataDWD(link, read=FALSE)\n# Read the file from the zip folder:\nclim <- readDWD(file, varnames=TRUE) # can happen directly in dataDWD\n\n# Inspect the data.frame:\nstr(clim)\n# Quick time series graphic:\nplotDWD(clim, ""FM.Windgeschwindigkeit"")\n```\n\nFor data interpolated onto a 1 km raster, including radar data up to the last hour,\nsee the corresponding [chapter](https://bookdown.org/brry/rdwd/raster-data.html) on the website.\n\n\n### App\nSince April 2023, there is an [interactive app](https://brry.shinyapps.io/wetter/) to compare weather periods:\n\n\n\nWith `rdwd::app()`, you can run this locally with cached data, i.e. faster responses.\n\n### New to R\n\nIf you\'re new to R, these links might help you to get started:\n\n- [install R & Rstudio](https://github.com/brry/course/#install)\n- [brief introduction to R](https://github.com/brry/hour)\n- [very large set of slides I use for my courses](https://github.com/brry/course/#slides)\n\nback to `rdwd`:\n\n\n### Installation\n\n#### Normal\n```R\ninstall.packages(""rdwd"")\n```\n\n#### Latest version\n```R\nrdwd::updateRdwd()\n# checks version and (if needed) calls remotes::install_github(""brry/rdwd"", build_vignettes=TRUE)\n```\n\n#### Full\nSuggested (not mandatory) dependencies: \n```R\ninstall.packages(""rdwd"", dependencies=""Suggests"") \n```\n\n- `RCurl` for indexFTP and selectDWD(..., current=TRUE)\n- `data.table`, `bit64` for readDWD(..., fread=TRUE)\n- `terra`, `stars`, `R.utils`, `ncdf4`, `dwdradar` for readDWD with gridded data\n- `readr` for readDWD.stand(..., fast=TRUE)\n- `knitr`, `rmarkdown`, `testthat`, `roxygen2`, `devtools`, `remotes`, `XML`, `gsheet` for local testing, development and documentation\n- `leaflet`, `OSMscale` for interactive/static maps, see [OSMscale installation tips](https://github.com/brry/OSMscale#installation)\n- `shiny` for the interactive weather comparison app\n\nNote: on Linux (Ubuntu), install `RCurl` via the terminal (CTRL+ALT+T, note lowercase rcurl):\n```\nsudo apt install r-cran-rcurl\n```\n'",",https://doi.org/10.1016/j.scitotenv.2018.02.241,https://doi.org/10.5675/HyWa_2017,3_1","2016/10/19, 15:07:55",2562,CUSTOM,63,866,"2023/09/16, 08:53:25",2,2,38,6,39,0,0.0,0.006968641114982632,"2023/06/17, 09:08:32",v1.8,0,2,false,,false,false,,,,,,,,,,, MetPy,"A collection of tools in Python for reading, visualizing and performing calculations with weather data.",Unidata,https://github.com/Unidata/MetPy.git,github,"python,atmospheric-science,meteorology,weather,plotting,scientific-computations,hodograph,skew-t,weather-data,hacktoberfest",Meteorological Observation and Forecast,"2023/10/25, 21:13:03",1109,324,184,true,Python,Unidata,Unidata,"Python,Dockerfile,Ruby,Makefile",https://unidata.github.io/MetPy/,"b'MetPy\n=====\n\n[![MetPy Logo](https://github.com/Unidata/MetPy/raw/main/docs/_static/metpy_150x150.png)](https://unidata.github.io/MetPy/)\n[![Unidata Logo](https://github.com/Unidata/MetPy/raw/main/docs/_static/unidata_150x150.png)](https://www.unidata.ucar.edu)\n\n[![License](https://img.shields.io/pypi/l/metpy.svg)](https://pypi.python.org/pypi/MetPy/)\n[![Gitter](https://badges.gitter.im/Unidata/MetPy.svg)](https://gitter.im/Unidata/MetPy?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge)\n[![PRs 
Welcome](https://img.shields.io/badge/PRs-welcome-brightgreen.svg?style=round-square)](https://egghead.io/series/how-to-contribute-to-an-open-source-project-on-github)\n\n[![Latest Docs](https://github.com/Unidata/MetPy/workflows/Build%20Docs/badge.svg)](http://unidata.github.io/MetPy)\n[![PyPI Package](https://img.shields.io/pypi/v/metpy.svg)](https://pypi.python.org/pypi/MetPy/)\n[![Conda Package](https://anaconda.org/conda-forge/metpy/badges/version.svg)](https://anaconda.org/conda-forge/metpy)\n[![PyPI Downloads](https://img.shields.io/pypi/dm/metpy.svg)](https://pypi.python.org/pypi/MetPy/)\n[![Conda Downloads](https://anaconda.org/conda-forge/metpy/badges/downloads.svg)](https://anaconda.org/conda-forge/metpy)\n\n[![PyPI Tests](https://github.com/Unidata/MetPy/workflows/PyPI%20Tests/badge.svg)](https://github.com/Unidata/MetPy/actions?query=workflow%3A%22PyPI+Tests%22)\n[![Conda Tests](https://github.com/Unidata/MetPy/workflows/Conda%20Tests/badge.svg)](https://github.com/Unidata/MetPy/actions?query=workflow%3A%22Conda+Tests%22)\n[![Code Coverage Status](https://codecov.io/github/Unidata/MetPy/coverage.svg?branch=main)](https://codecov.io/github/Unidata/MetPy?branch=main)\n[![Codacy Badge](https://app.codacy.com/project/badge/Grade/2e64843f595c42e991457cb76fcfa769)](https://www.codacy.com/gh/Unidata/MetPy/dashboard)\n[![Code Climate](https://codeclimate.com/github/Unidata/MetPy/badges/gpa.svg)](https://codeclimate.com/github/Unidata/MetPy)\n\nMetPy is a collection of tools in Python for reading, visualizing and\nperforming calculations with weather data.\n\nMetPy follows [semantic versioning](https://semver.org) in its version number. This means\nthat any MetPy ``1.x`` release will be backwards compatible with an earlier ``1.y`` release. By\n""backward compatible"", we mean that **correct** code that works on a ``1.y`` version will work\non a future ``1.x`` version.\n\nFor additional MetPy examples not included in this repository, please see the [Unidata Python\nGallery](https://unidata.github.io/python-gallery/).\n\nWe support Python >= 3.9.\n\nNeed Help?\n----------\n\nNeed help using MetPy? Found an issue? Have a feature request? Check out our\n[support page](https://github.com/Unidata/MetPy/blob/main/SUPPORT.md).\n\nImportant Links\n---------------\n\n- [HTML Documentation](http://unidata.github.io/MetPy)\n- [Unidata Python Gallery](https://unidata.github.io/python-gallery/)\n- ""metpy"" tagged questions on [Stack Overflow](https://stackoverflow.com/questions/tagged/metpy)\n- [Gitter chat room](https://gitter.im/Unidata/MetPy)\n\nDependencies\n------------\n\nOther required packages:\n\n- Numpy\n- Scipy\n- Matplotlib\n- Pandas\n- Pint\n- Xarray\n\nThere is also an optional dependency on the pyproj library for geographic\nprojections (used with cross sections, grid spacing calculation, and the GiniFile interface).\n\nSee the [installation guide](https://unidata.github.io/MetPy/latest/userguide/installguide.html)\nfor more information.\n\nCode of Conduct\n---------------\n\nWe want everyone to feel welcome to contribute to MetPy and participate in discussions. In that\nspirit please have a look at our [Code of Conduct](https://github.com/Unidata/MetPy/blob/main/CODE_OF_CONDUCT.md).\n\nContributing\n------------\n\n**Imposter syndrome disclaimer**: We want your help. No, really.\n\nThere may be a little voice inside your head that is telling you that you\'re not ready to be\nan open source contributor; that your skills aren\'t nearly good enough to contribute. 
What\ncould you possibly offer a project like this one?\n\nWe assure you - the little voice in your head is wrong. If you can write code at all,\nyou can contribute code to open source. Contributing to open source projects is a fantastic\nway to advance one\'s coding skills. Writing perfect code isn\'t the measure of a good developer\n(that would disqualify all of us!); it\'s trying to create something, making mistakes, and\nlearning from those mistakes. That\'s how we all improve, and we are happy to help others learn.\n\nBeing an open source contributor doesn\'t just mean writing code, either. You can help out by\nwriting documentation, tests, or even giving feedback about the project (and yes - that\nincludes giving feedback about the contribution process). Some of these contributions may be\nthe most valuable to the project as a whole, because you\'re coming to the project with fresh\neyes, so you can see the errors and assumptions that seasoned contributors have glossed over.\n\nFor more information, please see the [contributing guide](https://github.com/Unidata/MetPy/blob/main/CONTRIBUTING.md).\n\nPhilosophy\n----------\n\nThe space MetPy aims for is GEMPAK (and maybe NCL)-like functionality, in a way that plugs\neasily into the existing scientific Python ecosystem (numpy, scipy, matplotlib). So, if you\ntake the average GEMPAK script for a weather map, you need to:\n\n- read data\n- calculate a derived field\n- show on a map/skew-T\n\nOne of the benefits hoped to be achieved over GEMPAK is to make it easier to use these routines for\nany meteorological Python application; this means making it easy to pull out the LCL\ncalculation and just use that, or to reuse the Skew-T with your own data code. MetPy also prides\nitself on being well-documented and well-tested, so that ongoing maintenance is easily\nmanageable.\n\nThe intended audience is that of GEMPAK: researchers, educators, and anyone wanting to script\nup weather analysis. It doesn\'t even have to be scripting; the hope is that all Python\nmeteorology tools can benefit from MetPy. 
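As a taste of that pull-out-one-calculation idea, a minimal sketch of\ncomputing just the LCL on its own (the input values here are purely\nillustrative; `lcl` and `units` come from MetPy\'s documented\n`metpy.calc` and `metpy.units` modules):\n\n```python\nfrom metpy.calc import lcl\nfrom metpy.units import units\n\n# unit-aware surface pressure, temperature and dewpoint\npressure = 1000. * units.hPa\ntemperature = 25. * units.degC\ndewpoint = 18. * units.degC\n\n# pressure and temperature at the lifting condensation level\nlcl_pressure, lcl_temperature = lcl(pressure, temperature, dewpoint)\nprint(lcl_pressure, lcl_temperature)\n```\n\n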
Conversely, it\'s hoped to be the meteorological\nequivalent of the audience of scipy/scikit-learn/skimage.\n'",,"2011/02/25, 04:20:25",4626,BSD-3-Clause,807,5811,"2023/10/25, 20:13:31",330,1910,2713,417,0,39,0.9,0.4745844374849434,"2023/07/07, 18:45:55",v1.5.1,10,71,false,,true,true,"nrb171/Automatic-Publication-Management-System,dwest77a/kerchunk-builder,msearsie3/AMS_workshop,developmentseed/titiler-xarray,nbarjun/SCAFET,nrb171/Automatic-Publication-Summarization,djzurawski/dan-weather-suite,Artur-Nurm/sf_data_science,bgorr/satplan,jejjohnson/ocn-tools,nriet/algorithm,terezapohankova/pySEBS,NOAA-GSL/VxIngest,chyalexcheng/grainLearning,NuttaSkanupong/NS-ML-CWCNN,nobodyyyyyy/evaporation_ducts,thunderhoser/ml_for_wildfire_wpo,SexyJoJo/HeBei_product,seakers/DMASpy,jancgreyling/cookiecutter_sai,wharf-wombat/goes2go,nicalcoca/qgeoidcolweb-b,xzwbsz/DGFormer,kylejgillett/sounderpy,ClariNerd617/ClariNerd617Backend,miky21121996/MO_project,miky21121996/HFR-project,TangoIndiaMango/porfolio_backend,USA-RedDragon/python-gis,vinguyen777/Data-Science-on-Agriculture,marcwatine/CC-LUT,2lambda123/svrstormsig,DuobleFlowTwentysix/ST-GRF,USA-RedDragon/nws-slack-bot,JSpencerPittman/Ecozo,PLAknazaXM/Education,Tanvi-Jain01/Delhi-AirQuality-,EVS-ATMOS/air-quality-sensors,HenryWinterbottom-NOAA/ufs_diags,acblackford/wedgj,phoenixes94/A-robust-error-correction-method-for-NWP-wind-speed-based-on-VMD-PCA-RF,swiftsoftwaregroup/tropess-notes,citylikeamradio/ecape,mishrakeshav/Radiosonde-Ground-Station-Software,Tanvi-Jain01/Delhi-AirQuality,timothyvinzent/Capstone,1chooo/atmo-lab,actris-cloudnet/mwrpy,philine-bommer/Climate_X_Quantus,zmlabe/Attribution_SpringNA,PLAknazaXM/Chem-112-Lab-Code,FelipeSBarros/EngolindoFumaca_paper,jejjohnson/oceanbench,Fxe/Dataverse,TrellixVulnTeam/pyschism-sciclone-tests_5BBQ,nasa/svrstormsig,CROCUS-Urban/instrument-cookbooks,yutong-xia/EE5907-Pattern-Recognistion,xzwbsz/DCPGRNcode,romainpilon/cloudbandPy,xzwbsz/DCPGRN,khazidhea/jua_warnings_api,jtfedd/3d-radar,daniloceano/cyclone_thermodynamics,OceanTrader1/pimospheric,gzlijiaming/MingNet,sauterto/clim_env_hydro,Pavan-lanka/object_detection,LinOuyang/pybarnes,oloapinivad/ECmean4,dunned024/WSIHT,GrainLearning/grainLearning,dennissergeev/arke,Mitchell-D/aes670hw2,bsepaul/climate-models,zhangslTHU/pyhon_binder_repo,marcellinus-witarsah/2023-ey-open-science-data-challenge-1,Madhumitha11-s/AirPy,bartulo/portugal-binder,a1amador/wgpack,chris-hedemann/ml-project-air-pollution,xmnlab/satellite-weather-downloader,theofilis/netcdfella,luabida/satellite-weather-downloader,vosps/tropical_cyclone,cycle13/uwnet-1,ai2cm/uwnet,dtcenter/METviewer,SEE-GEO/ccic,GEUS-Glaciology-and-Climate/GC-Net-evaluation,Wend3620/visual,nhm-usgs/gfsetl,stresearch/gaia,amotl/herbie-without-docs,openclimateinitiative/surface-demo,igmk/actris_mwr_pro,EOanalyticsLTD/FLARES,posener/gliders,dtcenter/METplotpy,gzileni/pyImport,dtcenter/METcalcpy,ARM-DOE/ACT,a2edap/ingest-buoy,bertranddelvaux/data-web-tc,coecms/frontdetection,drivendataorg/snowcast-showdown,david-rx/fish-mip-emulator,osl-incubator/satellite-weather-downloader,bartulo/jupyter-cilifo,marcoscole/cosipy,kdrushka/oceanliner,SteepAtticStairs/AtticServer,glidar-project/glidar-analyst,blaylockbk/goes2go,djzurawski/mpas-realtime,jroettenbacher/phd_base,nikfot/netcdfella,igorkso/estatisticas_meteorologicas,luabida/ADClima,ai2es/ptype-physical,Vipermdl/Oxyformer,UNISvalbard/unisacsi,cal-adapt/climakitae,ryandbair/Herbie,brown-ccv/jupyterhub-docker-images,amaurylancelin/npFDT,tedw0ng/wri-agriadapt-demo,dooley-ch/whatstheweathe
r,bmanjaree/oceanliner_back-up,suhendra0812/cmems_opendap,U-S-NRL-Marine-Meteorology-Division/xnrl,Geet-George/recode_smocs,tanukansalgit/weather-plotting-distributed-systems,apache/incubator-sdap-nexus,dlr-pa/climaccf,pinstripeninjas/met-breakfast,Kang-ChangWoo/ERA05-extractor,daniloceano/lorenz-cycle,CUG-hydro/CUG-HydroMeteorology.py,ShyftSolutions/exploring-wx-data,physicsgoddess1972/GOES-mapping,observingClouds/pysonde,bnb32/spring_onset,dnerini/startleiter,ngam/ngc-ext-pangeo,saramoira/tropical-cyclone-explorer,jgmsantos/Livro-Python,HaominYu0/CGF,JamesBassford/Rain_App,AML-CS/wrf-baq-0.5km,airavata-courses/vulcan,mrshll/saltsavvy,ywbarton/panel-weather,flyaheadl/ll_newstart,asselapathirana/oro,kamaal44/Termux-2,RyosukeDTomita/meteoplot,franzihe/CloudSat_ERA5_CMIP6_analysis,Geet-George/quickdrop,zmlabe/SAI,GeologicalSurveySouthAustralia/SA-geochemical-maps,nhm-usgs/grd2shp,Olympic-Tibidi/Tides,moroots/DATAS,djzurawski/weather-products,airavata-courses/terra,wxmann/sounding-api,EarthObservationSimulator/instrupy,glidar-project/glidar-model,MODAP/datascience,sd19surf/docker_metpy,NOAA-GSL/hrrr-smoke-vis,climate-enigma/dparcel,crouchr/yoctopuce-sensor,briis/pyweatherflowudp,jglauner74/glauner-weather,NOAA-GSL/ml4tc,bnb32/gcm_aws,Quinticx/PyNEXRAD,rpmanser/large_ic_ensembles,Skyehawk/BackBuilding_QuasiStationary_Tstorm_ID,berkeley-dsep-infra/datahub,opencdms/surface,NCAR/rechunk_retro_nwm_v21,AlFontal/covid-climate-signatures,blaylockbk/Herbie,kwodzicki/tamu_met_products,cryotools/cosipy,schism-dev/pyschism,nforceroh/METAR,zmlabe/predictGMSTrate,pyinstaller/pyinstaller-hooks-contrib,LuisSevillano/goes16-processing,nmcdev/metdig,vidurmithal/covid-fire-aq,ramanakumars/GOESplot,RADutchie/pygeochemtools,rmcd-mscb/grd2shp_xagg,ahsouri/GOESVisualizer,nbren12/uwnet,xiwei-ff/leida_data,suvodip1212/bemuda,NUsav77/Open-Ended-Capstone-Step-3,NOAA-GSL/ml4convection,kwodzicki/dcotssUtils,physicsgoddess1972/Precipitable-Water-Model,NCAR/geocat-comp,Open-MSS/data-retrieval,DWesl/weather-data-downloader,J-Wall/camfi,crouchr/display-generator,thunderhoser/ml4tc,genericnme/mettool,asselapathirana/orographic_rainfall,thunderhoser/ml4convection,Marilyth/mss-data-retrieval,thunderhoser/ml4rt,NeiliNeji/Covid-19,sodoesaburningbus/pysonde,ai2cm/fv3net,adair-kovac/TetheredBalloon-7710,hkershaw-brown/dartthedocs,Unidata/siphon,crouchr/webcamd,C4IROcean/python_sdk_example_notebooks,crouchr/synopsisd,mhaberler/radiosonde-datacollector,crouchr/metfuncs,EFisher828/CarolinaWxGroupFrequent,zmlabe/ModelBiasesANN,RomiNahir/bootstrap-demo,EFisher828/ROABVerification,observingClouds/trade-wind-course_2021,genericnme/genericnme.github.io,zmlabe/DataVizStudies,zmlabe/ExtremeEvents,Daviology38/AMS_Python_2021_Project_Coe,fatekong/GE-STDGN,hkershaw-brown/DART-autocompile,hkershaw-brown/dart-jekyll-test,hkershaw-brown/feature-preprocess,rpmanser/AMS101-Dask-in-MetPy,Divyaakula28/TvChannelSchedule,cuspaceflight/CamPyRoS,mhaberler/jumpvis,mhaberler/dwd-trajectory,hkershaw-brown/rttov-test,matzegoebel/run_WRF,jmccreight/DART,akrherz/pyWWA,zmlabe/InternalSignal,berland/pyrotun,ovaisnazir/Master-Thesis,rahulsingh2310/django-react,vchaparro/wind-power-forecasting,cemac/jupyterhub_provisioning,110621013/weatheric2,shuowang-ai/PM2.5-GNN,YnnuSL/PM2.5-GNN,NateWeiler/Resources,pmlmodelling/ncplot,pmlmodelling/nctoolkit,C4IROcean/odp-sdk-python,guidocioni/icon_forecasts,crazymidnight/weather,oubliss/Profiles,marciohssilveira/met_charts,tmiyachi/vue-emagram,zjfstart/kb,thunderhoser/cira_ml_short_course,srmullens/jet_stream_bot,marciohssilveir
a/gfs_soundings,jthielen/OpenMosaic,antarcticrainforest/esm_analysis,nmcdev/nmc_met_graphics,areed145/kk6gpv-metpy,ac0015/wrf-ens-tools,leosaffin/scripts,andrewbrown31/SCW-analysis,Geet-George/JOANNE,DishaTalreja3/Science-Gateway-for-Weather-Forecasting,nmcdev/nmc_met_base,fvalka/nwp-sounding,Denizhan-Yigitbas/PotHoles_DSCI400,fvalka/icon-skewt-plot,airavata-courses/Orenda,EUREC4A-UK/twin-otter,DS4Earth/sp2020,jgodwinWX/sfcplots_v2,exoclim/aeolus,liyuan3970/study_demo,nmcdev/nmc_met_map,areed145/kk6gpv-workers,observingClouds/eurec4a_snd,weiming9115/MSEplots,apottr/nexrad-something,apottr/nexrad-process,frontogenesis/flask,yotf/meteoDash,zmlabe/AntSeaIceVari,metwork-framework/mfextaddon_scientific,zmlabe/StratoVari,kuchaale/X-regression,JoDeMiro/Eniko,jaws/jaws,kms22134/tamu_met_products,zmlabe/AMIP_Simu,launda/learn_flask,brianmapes/MSEplot,CyanideCN/PyCINRAD,kwodzicki/WxStationPlots,thunderhoser/GewitterGefahr,hawson/Nexfetch,meteorolog90/PROGRAM,zmlabe/ThicknessSensitivity,zmlabe/SeaIceQBO,weiming9115/Working-Space,zmlabe/ClimatePython,zmlabe/IceVarFigs,Unidata/drilsdown,NORCatUofC/rain,jrleeman/skewtweb,akrherz/pyIEM,ASRCsoft/wxprofilers",,https://github.com/Unidata,https://www.unidata.ucar.edu/,"Boulder, Colorado, USA",,,https://avatars.githubusercontent.com/u/613345?v=4,,, wetterdienst,Trying to make access to weather data in Python feel like a warm summer breeze.,earthobservations,https://github.com/earthobservations/wetterdienst.git,github,"deutscher-wetterdienst,germany,open-source,open-data,historical-data,time-series,dwd,radar,weather,weatherservice,weather-forecast,weather-api,weather-station,eccc,canada,data,united-states,uk,hydrology,meteorology",Meteorological Observation and Forecast,"2023/10/24, 07:41:59",289,25,83,true,Python,,earthobservations,"Python,CSS",https://wetterdienst.readthedocs.io/,"b'Wetterdienst - Open weather data for humans\n###########################################\n\n.. |pic1| image:: https://raw.githubusercontent.com/earthobservations/wetterdienst/main/docs/img/german_weather_stations.png\n :alt: German weather stations managed by Deutscher Wetterdienst\n :width: 32 %\n\n.. |pic2| image:: https://raw.githubusercontent.com/earthobservations/wetterdienst/main/docs/img/temperature_ts.png\n :alt: temperature timeseries of Hohenpeissenberg/Germany\n :width: 32 %\n\n.. |pic3| image:: https://raw.githubusercontent.com/earthobservations/wetterdienst/main/docs/img/hohenpeissenberg_warming_stripes.png\n :alt: warming stripes of Hohenpeissenberg/Germany\n :width: 32 %\n\n|pic1| |pic2| |pic3|\n\n**Warning**\n\nThis library is a work in progress!\n\nBreaking changes should be expected until a 1.0 release, so version pinning is recommended.\n\n.. note::\n\n Wetterdienst 0.57.0 switched from pandas to Polars, which may cause breaking changes\n for certain user-space code heavily using pandas idioms, because Wetterdienst now\n returns a `Polars DataFrame`_. If you absolutely must use a pandas DataFrame, you can\n cast the Polars DataFrame to pandas by using the ``.to_pandas()`` method.\n\n**What our customers say:**\n\n""Our house is on fire. I am here to say, our house is on fire. I saw it with my own eyes using **wetterdienst**\nto get the data."" - Greta Thunberg\n\n\xe2\x80\x9cYou must be the change you wish to see in the world. 
And when it comes to climate I use **wetterdienst**.\xe2\x80\x9d - Mahatma Gandhi\n\n""Three things are (almost) infinite: the universe, human stupidity and the temperature time series of\nHohenpeissenberg, Germany I got with the help of **wetterdienst**; and I\'m not sure about the universe."" - Albert Einstein\n\n""We are the first generation to feel the effect of climate change and the last generation who can do something about\nit. I used **wetterdienst** to analyze the climate in my area and I can tell it\'s getting hot in here."" - Barack Obama\n\n.. image:: https://github.com/earthobservations/wetterdienst/actions/workflows/tests.yml/badge.svg?branch=main\n :target: https://github.com/earthobservations/wetterdienst/actions?workflow=Tests\n :alt: CI: Overall outcome\n.. image:: https://codecov.io/gh/earthobservations/wetterdienst/branch/main/graph/badge.svg\n :target: https://codecov.io/gh/earthobservations/wetterdienst\n :alt: CI: Code coverage\n.. image:: https://img.shields.io/pypi/v/wetterdienst.svg\n :target: https://pypi.org/project/wetterdienst/\n :alt: PyPI version\n.. image:: https://img.shields.io/conda/vn/conda-forge/wetterdienst.svg\n :target: https://anaconda.org/conda-forge/wetterdienst\n :alt: Conda version\n\n.. image:: https://img.shields.io/pypi/status/wetterdienst.svg\n :target: https://pypi.python.org/pypi/wetterdienst/\n :alt: Project status (alpha, beta, stable)\n.. image:: https://static.pepy.tech/personalized-badge/wetterdienst?period=month&units=international_system&left_color=grey&right_color=blue&left_text=PyPI%20downloads/month\n :target: https://pepy.tech/project/wetterdienst\n :alt: PyPI downloads\n.. image:: https://img.shields.io/conda/dn/conda-forge/wetterdienst.svg?label=Conda%20downloads\n :target: https://anaconda.org/conda-forge/wetterdienst\n :alt: Conda downloads\n.. image:: https://img.shields.io/github/license/earthobservations/wetterdienst\n :target: https://github.com/earthobservations/wetterdienst/blob/main/LICENSE\n :alt: Project license\n.. image:: https://img.shields.io/pypi/pyversions/wetterdienst.svg\n :target: https://pypi.python.org/pypi/wetterdienst/\n :alt: Python version compatibility\n\n.. image:: https://readthedocs.org/projects/wetterdienst/badge/?version=latest\n :target: https://wetterdienst.readthedocs.io/en/latest/?badge=latest\n :alt: Documentation status\n.. image:: https://img.shields.io/badge/code%20style-black-000000.svg\n :target: https://github.com/psf/black\n :alt: Documentation: Black\n\n.. image:: https://zenodo.org/badge/160953150.svg\n :target: https://zenodo.org/badge/latestdoi/160953150\n :alt: Citation reference\n\n\n.. overview_start_marker\n\nIntroduction\n############\n\nOverview\n********\n\nWelcome to Wetterdienst, your friendly weather service library for Python.\n\nWe are a group of like-minded people trying to make access to weather data in\nPython feel like a warm summer breeze, similar to other projects like\nrdwd_ for the R language, which originally drew our interest in this project.\nOur long-term goal is to provide access to multiple weather services as well as other\nrelated agencies such as river measurements. With ``wetterdienst`` we try to use modern\nPython technologies all over the place. 
The library is based on polars_ (we <3 pandas_, it is still part of some\nIO processes) across the board, uses Poetry_ for package administration and GitHub Actions for all things CI.\nOur users are an important part of development, since we do not currently use the\ndata we provide ourselves and only implement what we think would be best. Therefore,\ncontributions and feedback, whether data related or library related, are very\nwelcome! Just hand in a PR or Issue if you think we should include a new feature or data\nsource.\n\n.. _rdwd: https://github.com/brry/rdwd\n.. _polars: https://www.pola.rs/\n.. _pandas: https://pandas.pydata.org/\n.. _Poetry: https://python-poetry.org/\n\nData\n****\n\nFor an overview of the data we have currently made available and under which\nlicense it is published, take a look at the data_ section. Detailed information\non datasets and parameters is given at the coverage_ subsection. Licenses and\nusage requirements may differ for each provider, so check this out before including\nthe data in your project to be sure that you fulfill copyright requirements!\n\n.. _data: https://wetterdienst.readthedocs.io/en/latest/data/index.html\n.. _coverage: https://wetterdienst.readthedocs.io/en/improve-documentation/data/coverage.html\n.. _map: https://bookdown.org/brry/rdwd/interactive-map.html\n.. _table: https://bookdown.org/brry/rdwd/available-datasets.html\n\nHere is a short glimpse of the data that is included:\n\n.. coverage_start_marker\n\nDWD (Deutscher Wetterdienst / German Weather Service / Germany)\n - Historical Weather Observations\n - Historical (last ~300 years), recent (500 days to yesterday), now (yesterday up to last hour)\n - Every minute to yearly resolution\n - Time series of stations in Germany\n - see the rdwd pages for an interactive map_ and table_ of available datasets\n - Mosmix - statistically optimized scalar forecasts extracted from weather models\n - Point forecast\n - 5400 stations worldwide\n - Both MOSMIX-L and MOSMIX-S are supported\n - Up to 115 parameters\n - DMO - time series extracted from weather models\n - Point forecast\n - 5400 stations worldwide\n - Up to 115 parameters\n - Road Weather Observations\n - Historical weather observations of German highway stations\n - Radar\n - 16 locations in Germany\n - All of Composite, Radolan, Radvor, Sites and Radolan_CDC\n - Radolan: calibrated radar precipitation\n - Radvor: radar precipitation forecast\n\nECCC (Environnement et Changement Climatique Canada / Environment and Climate Change Canada / Canada)\n - Historical Weather Observations\n - Historical (last ~180 years)\n - Hourly, daily, monthly, (annual) resolution\n - Time series of stations in Canada\n\nNOAA (National Oceanic And Atmospheric Administration / National Oceanic And Atmospheric Administration / United States Of America)\n - Global Historical Climatology Network\n - Historical, daily weather observations from around the globe\n - more than 100k stations\n - data for weather services which don\'t publish data themselves\n\nWSV (Wasserstra\xc3\x9fen- und Schifffahrtsverwaltung des Bundes / Federal Waterways and Shipping Administration)\n - Pegelonline\n - data of river network of Germany\n - coverage of last 30 days\n - parameters like stage, runoff and more related to rivers\n\nEA (Environment Agency)\n - Hydrology\n - data of river network of UK\n - parameters flow and ground water stage\n\nNWS (NOAA National Weather Service)\n - Observation\n - recent observations (last week) of US weather stations\n - currently the list of stations is not completely right as we use a diverging source!\n\n
Eaufrance\n - Hubeau\n - data of river network of France (continental)\n - parameters flow and stage of rivers of last 30 days\n\nGeosphere (Geosphere Austria, formerly Central Institution for Meteorology and Geodynamics)\n - Observation\n - historical meteorological data of Austrian stations\n\nIMGW (Institute of Meteorology and Water Management)\n - Meteorology\n - meteorological data of Polish weather stations\n - daily and monthly summaries\n - Hydrology\n - hydrological data of Polish river stations\n - daily and monthly summaries\n\nTo get better insight on which data we have currently made available and under which\nlicense those are published, take a look at the data_ section.\n\n.. coverage_end_marker\n\nFeatures\n********\n\n- APIs for stations and values\n- Get stations nearby a selected location\n- Define your request by arguments such as `parameter`, `period`, `resolution`,\n `start date`, `end date`\n- Define general settings in the Settings context\n- Command line interface\n- Web API via FastAPI\n- Rich UI features like the wetterdienst explorer and the `streamlit app`_\n- Run SQL queries on the results\n- Export results to databases and other data sinks\n- Public Docker image\n- Interpolation and summary of station values\n\n.. _streamlit app: https://wetterdienst.streamlit.app\n\nSetup\n*****\n\nNative\n======\n\nVia PyPI (standard):\n\n.. code-block:: bash\n\n pip install wetterdienst\n\nVia GitHub (most recent):\n\n.. code-block:: bash\n\n pip install git+https://github.com/earthobservations/wetterdienst\n\nThere are some extras available for ``wetterdienst``. Use them like:\n\n.. code-block:: bash\n\n pip install wetterdienst[http,sql]\n\n- docs: Install the Sphinx documentation generator.\n- ipython: Install the iPython stack.\n- export: Install openpyxl for Excel export and pyarrow for writing files in Feather and Parquet format.\n- http: Install HTTP API prerequisites.\n- sql: Install DuckDB for querying data using SQL.\n- duckdb: Install support for DuckDB.\n- influxdb: Install support for InfluxDB.\n- cratedb: Install support for CrateDB.\n- mysql: Install support for MySQL.\n- postgresql: Install support for PostgreSQL.\n- interpolation: Install support for station interpolation.\n\nIn order to check the installation, invoke:\n\n.. code-block:: bash\n\n wetterdienst --help\n\n.. _run-in-docker:\n\nDocker\n======\n\nDocker images for each stable release will get pushed to GitHub Container Registry.\n\nThere are images in two variants, ``wetterdienst-standard`` and ``wetterdienst-full``.\n\n``wetterdienst-standard`` will contain a minimum set of 3rd-party packages,\nwhile ``wetterdienst-full`` will try to serve a full environment, including\n*all* of the optional dependencies of Wetterdienst.\n\nPull the Docker image:\n\n.. code-block:: bash\n\n docker pull ghcr.io/earthobservations/wetterdienst-standard\n\nLibrary\n-------\n\nUse the latest stable version of ``wetterdienst``:\n\n.. code-block:: bash\n\n $ docker run -ti ghcr.io/earthobservations/wetterdienst-standard\n Python 3.8.5 (default, Sep 10 2020, 16:58:22)\n [GCC 8.3.0] on linux\n\n.. code-block:: python\n\n import wetterdienst\n wetterdienst.__version__\n\nCommand line script\n-------------------\n\nThe ``wetterdienst`` command is also available:\n\n.. 
code-block:: bash\n\n    # Make an alias to use it conveniently from your shell.\n    alias wetterdienst=\'docker run -ti ghcr.io/earthobservations/wetterdienst-standard wetterdienst\'\n\n    wetterdienst --help\n    wetterdienst --version\n    wetterdienst info\n\n\nRaspberry Pi / LINUX ARM\n========================\n\nTo run wetterdienst on a Raspberry Pi, you need to install **numpy**\nand **lxml** prior to installing wetterdienst, by running the following\nlines:\n\n.. code-block:: bash\n\n    # not all of these packages may be required to get lxml running\n    sudo apt-get install gfortran\n    sudo apt-get install libopenblas-base\n    sudo apt-get install libopenblas-dev\n    sudo apt-get install libatlas-base-dev\n    sudo apt-get install python3-lxml\n\nAdditionally, expanding the swap to 2048 MB may be required; this can be done via the swap file:\n\n.. code-block:: bash\n\n    sudo nano /etc/dphys-swapfile\n\nThanks `chr-sto`_ for reporting back to us!\n\n\n.. _chr-sto: https://github.com/chr-sto\n\nExample\n*******\n\n**Task: Get historical climate summary for two German stations between 1990 and 2020**\n\nLibrary\n=======\n\n.. code-block:: python\n\n    >>> import polars as pl\n    >>> _ = pl.Config.set_tbl_hide_dataframe_shape(True)\n    >>> from wetterdienst import Settings\n    >>> from wetterdienst.provider.dwd.observation import DwdObservationRequest\n    >>> settings = Settings( # default\n    ...     ts_shape=""long"", # tidy data\n    ...     ts_humanize=True, # humanized parameters\n    ...     ts_si_units=True # convert values to SI units\n    ... )\n    >>> request = DwdObservationRequest(\n    ...     parameter=[""climate_summary""],\n    ...     resolution=""daily"",\n    ...     start_date=""1990-01-01"", # if no timezone is given, UTC is assumed\n    ...     end_date=""2020-01-01"", # if no timezone is given, UTC is assumed\n    ...     settings=settings\n    ... 
).filter_by_station_id(station_id=(1048, 4411))\n >>> stations = request.df\n >>> stations.head()\n \xe2\x94\x8c\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\xac\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\xac\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\xac\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\xac\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\xac\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\xac\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\xac\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x90\n \xe2\x94\x82 station_id \xe2\x94\x86 from_date \xe2\x94\x86 to_date \xe2\x94\x86 height \xe2\x94\x86 latitude \xe2\x94\x86 longitude \xe2\x94\x86 name \xe2\x94\x86 state \xe2\x94\x82\n \xe2\x94\x82 --- \xe2\x94\x86 --- \xe2\x94\x86 --- \xe2\x94\x86 --- \xe2\x94\x86 --- \xe2\x94\x86 --- \xe2\x94\x86 --- \xe2\x94\x86 --- \xe2\x94\x82\n \xe2\x94\x82 str \xe2\x94\x86 datetime[\xce\xbcs, \xe2\x94\x86 datetime[\xce\xbcs, \xe2\x94\x86 f64 \xe2\x94\x86 f64 \xe2\x94\x86 f64 \xe2\x94\x86 str \xe2\x94\x86 str \xe2\x94\x82\n \xe2\x94\x82 \xe2\x94\x86 UTC] \xe2\x94\x86 UTC] \xe2\x94\x86 \xe2\x94\x86 \xe2\x94\x86 \xe2\x94\x86 \xe2\x94\x86 \xe2\x94\x82\n \xe2\x95\x9e\xe2\x95\x90\xe2\x95\x90\xe2\x95\x90\xe2\x95\x90\xe2\x95\x90\xe2\x95\x90\xe2\x95\x90\xe2\x95\x90\xe2\x95\x90\xe2\x95\x90\xe2\x95\x90\xe2\x95\x90\xe2\x95\xaa\xe2\x95\x90\xe2\x95\x90\xe2\x95\x90\xe2\x95\x90\xe2\x95\x90\xe2\x95\x90\xe2\x95\x90\xe2\x95\x90\xe2\x95\x90\xe2\x95\x90\xe2\x95\x90\xe2\x95\x90\xe2\x95\x90\xe2\x95\x90\xe2\x95\xaa\xe2\x95\x90\xe2\x95\x90\xe2\x95\x90\xe2\x95\x90\xe2\x95\x90\xe2\x95\x90\xe2\x95\x90\xe2\x95\x90\xe2\x95\x90\xe2\x95\x90\xe2\x95\x90\xe2\x95\x90\xe2\x95\x90\xe2\x95\x90\xe2\x95\xaa\xe2\x95\x90\xe2\x95\x90\xe2\x95\x90\xe2\x95\x90\xe2\x95\x90\xe2\x95\x90\xe2\x95\x90\xe2\x95\x90\xe2\x95\xaa\xe2\x95\x90\xe2\x95\x90\xe2\x95\x90\xe2\x95\x90\xe2\x95\x90\xe2\x95\x90\xe2\x95\x90\xe2\x95\x90\xe2\x95\x90\xe2\x95\x90\xe2\x95\xaa\xe2\x95\x90\xe2\x95\x90\xe2\x95\x90\xe2\x95\x90\xe2\x95\x90\xe2\x95\x90\xe2\x95\x90\xe2\x95\x90\xe2\x95\x90\xe2\x95\x90\xe2\x95\x90\xe2\x95\xaa\xe2\x95\x90\xe2\x95\x90\xe2\x95\x90\xe2\x95\x90\xe2\x95\x90\xe2\x95\x90\xe2\x95\x90\xe2\x95\x90\xe2\x95\x90\xe2\x95\x90\xe2\x95\x90\xe2\x95\x90\xe2\x95\x90\xe2\x95\xaa\xe2\x95\x90\xe2\x95\x90\xe2\x95\x90\xe2\x95\x90\xe2\x95\x90\xe2\x95\x90\xe2\x95\x90\xe2\x95\x90\xe2\x95\x90\xe2\x95\xa1\n \xe2\x94\x82 01048 \xe2\x94\x86 1934-01-01 \xe2\x94\x86 ... \xe2\x94\x86 228.0 \xe2\x94\x86 51.1278 \xe2\x94\x86 13.7543 \xe2\x94\x86 Dresden-Klo \xe2\x94\x86 Sachsen \xe2\x94\x82\n \xe2\x94\x82 \xe2\x94\x86 00:00:00 UTC \xe2\x94\x86 00:00:00 UTC \xe2\x94\x86 \xe2\x94\x86 \xe2\x94\x86 \xe2\x94\x86 tzsche \xe2\x94\x86 \xe2\x94\x82\n \xe2\x94\x82 04411 \xe2\x94\x86 1979-12-01 \xe2\x94\x86 ... 
\xe2\x94\x86 155.0 \xe2\x94\x86 49.9195 \xe2\x94\x86 8.9671 \xe2\x94\x86 Schaafheim- \xe2\x94\x86 Hessen \xe2\x94\x82\n \xe2\x94\x82 \xe2\x94\x86 00:00:00 UTC \xe2\x94\x86 00:00:00 UTC \xe2\x94\x86 \xe2\x94\x86 \xe2\x94\x86 \xe2\x94\x86 Schlierbach \xe2\x94\x86 \xe2\x94\x82\n \xe2\x94\x94\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\xb4\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\xb4\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\xb4\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\xb4\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\xb4\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\xb4\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\xb4\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x98\n >>> values = request.values.all().df\n >>> values.head()\n \xe2\x94\x8c\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\xac\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\xac\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\xac\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\xac\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\xac\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x90\n \xe2\x94\x82 station_id \xe2\x94\x86 dataset \xe2\x94\x86 parameter \xe2\x94\x86 date \xe2\x94\x86 value \xe2\x94\x86 quality \xe2\x94\x82\n \xe2\x94\x82 --- \xe2\x94\x86 --- \xe2\x94\x86 --- \xe2\x94\x86 --- \xe2\x94\x86 --- \xe2\x94\x86 --- \xe2\x94\x82\n \xe2\x94\x82 str \xe2\x94\x86 str \xe2\x94\x86 str \xe2\x94\x86 datetime[\xce\xbcs, UTC] \xe2\x94\x86 f64 \xe2\x94\x86 f64 \xe2\x94\x82\n 
\xe2\x95\x9e\xe2\x95\x90\xe2\x95\x90\xe2\x95\x90\xe2\x95\x90\xe2\x95\x90\xe2\x95\x90\xe2\x95\x90\xe2\x95\x90\xe2\x95\x90\xe2\x95\x90\xe2\x95\x90\xe2\x95\x90\xe2\x95\xaa\xe2\x95\x90\xe2\x95\x90\xe2\x95\x90\xe2\x95\x90\xe2\x95\x90\xe2\x95\x90\xe2\x95\x90\xe2\x95\x90\xe2\x95\x90\xe2\x95\x90\xe2\x95\x90\xe2\x95\x90\xe2\x95\x90\xe2\x95\x90\xe2\x95\x90\xe2\x95\x90\xe2\x95\x90\xe2\x95\xaa\xe2\x95\x90\xe2\x95\x90\xe2\x95\x90\xe2\x95\x90\xe2\x95\x90\xe2\x95\x90\xe2\x95\x90\xe2\x95\x90\xe2\x95\x90\xe2\x95\x90\xe2\x95\x90\xe2\x95\x90\xe2\x95\x90\xe2\x95\x90\xe2\x95\x90\xe2\x95\x90\xe2\x95\x90\xe2\x95\x90\xe2\x95\x90\xe2\x95\xaa\xe2\x95\x90\xe2\x95\x90\xe2\x95\x90\xe2\x95\x90\xe2\x95\x90\xe2\x95\x90\xe2\x95\x90\xe2\x95\x90\xe2\x95\x90\xe2\x95\x90\xe2\x95\x90\xe2\x95\x90\xe2\x95\x90\xe2\x95\x90\xe2\x95\x90\xe2\x95\x90\xe2\x95\x90\xe2\x95\x90\xe2\x95\x90\xe2\x95\x90\xe2\x95\x90\xe2\x95\x90\xe2\x95\x90\xe2\x95\x90\xe2\x95\x90\xe2\x95\xaa\xe2\x95\x90\xe2\x95\x90\xe2\x95\x90\xe2\x95\x90\xe2\x95\x90\xe2\x95\x90\xe2\x95\x90\xe2\x95\xaa\xe2\x95\x90\xe2\x95\x90\xe2\x95\x90\xe2\x95\x90\xe2\x95\x90\xe2\x95\x90\xe2\x95\x90\xe2\x95\x90\xe2\x95\x90\xe2\x95\xa1\n \xe2\x94\x82 01048 \xe2\x94\x86 climate_summary \xe2\x94\x86 cloud_cover_total \xe2\x94\x86 1990-01-01 00:00:00 UTC \xe2\x94\x86 100.0 \xe2\x94\x86 10.0 \xe2\x94\x82\n \xe2\x94\x82 01048 \xe2\x94\x86 climate_summary \xe2\x94\x86 cloud_cover_total \xe2\x94\x86 1990-01-02 00:00:00 UTC \xe2\x94\x86 100.0 \xe2\x94\x86 10.0 \xe2\x94\x82\n \xe2\x94\x82 01048 \xe2\x94\x86 climate_summary \xe2\x94\x86 cloud_cover_total \xe2\x94\x86 1990-01-03 00:00:00 UTC \xe2\x94\x86 91.25 \xe2\x94\x86 10.0 \xe2\x94\x82\n \xe2\x94\x82 01048 \xe2\x94\x86 climate_summary \xe2\x94\x86 cloud_cover_total \xe2\x94\x86 1990-01-04 00:00:00 UTC \xe2\x94\x86 28.75 \xe2\x94\x86 10.0 \xe2\x94\x82\n \xe2\x94\x82 01048 \xe2\x94\x86 climate_summary \xe2\x94\x86 cloud_cover_total \xe2\x94\x86 1990-01-05 00:00:00 UTC \xe2\x94\x86 91.25 \xe2\x94\x86 10.0 \xe2\x94\x82\n \xe2\x94\x94\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\xb4\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\xb4\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\xb4\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\xb4\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\xb4\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x80\xe2\x94\x98\n\n.. code-block:: python\n\n values.to_pandas() # to get a pandas DataFrame and e.g. create some matplotlib plots\n\nClient\n======\n\n.. 
code-block:: bash\n\n    # Get a list of all stations for daily climate summary data in JSON format\n    wetterdienst stations --provider=dwd --network=observations --parameter=kl --resolution=daily\n\n    # Get daily climate summary data for specific stations\n    wetterdienst values --provider=dwd --network=observations --station=1048,4411 --parameter=kl --resolution=daily\n\nFurther examples (code samples) can be found in the examples_ folder.\n\n.. _examples: https://github.com/earthobservations/wetterdienst/tree/main/example\n
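\nNearby stations (sketch)\n========================\n\nA minimal, untested sketch of the ``Get stations nearby a selected location`` feature referenced in the Features list above. The ``filter_by_rank`` method and its ``latlon``/``rank`` arguments are assumptions based on the API documentation; check there for the exact signature:\n\n.. code-block:: python\n\n    from wetterdienst.provider.dwd.observation import DwdObservationRequest\n\n    # same request type as in the Library example above\n    request = DwdObservationRequest(\n        parameter=[""climate_summary""],\n        resolution=""daily"",\n    )\n    # rank the three stations closest to Dresden city centre\n    stations = request.filter_by_rank(latlon=(51.05, 13.74), rank=3)\n    print(stations.df)\n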
\n.. overview_end_marker\n\nAcknowledgements\n****************\n\nWe want to acknowledge all environmental agencies that provide their data openly and free\nof charge, first and foremost for the sake of endless research possibilities.\n\nWe want to acknowledge JetBrains_ and the `JetBrains OSS Team`_ for providing us with\nlicenses for PyCharm Pro, which we are using for development.\n\nWe want to acknowledge all contributors for being part of the improvements to this\nlibrary that make it better and better every day.\n\n.. _JetBrains: https://www.jetbrains.com/\n.. _JetBrains OSS Team: https://github.com/JetBrains\n\nImportant Links\n***************\n\n- Full documentation: https://wetterdienst.readthedocs.io/\n- Usage: https://wetterdienst.readthedocs.io/en/latest/usage/\n- Contribution: https://wetterdienst.readthedocs.io/en/latest/contribution/\n- Known Issues: https://wetterdienst.readthedocs.io/en/latest/known_issues/\n- Changelog: https://wetterdienst.readthedocs.io/en/latest/changelog.html\n- Examples (runnable scripts): https://github.com/earthobservations/wetterdienst/tree/main/example\n- Benchmarks: https://github.com/earthobservations/wetterdienst/tree/main/benchmarks\n\n\n.. _Polars DataFrame: https://pola-rs.github.io/polars/py-polars/html/reference/dataframe/\n'",",https://zenodo.org/badge/latestdoi/160953150\n","2018/12/08, 15:39:42",1782,MIT,364,1243,"2023/10/24, 07:42:00",21,777,994,302,1,6,0.0,0.3864468864468864,"2023/10/24, 07:43:26",v0.65.0,0,14,true,"github,patreon,custom",true,true,"earthobservations/wetterdienst,RWTH-EBC/AixWeather,UTEC-GmbH/UTEC-Tools,JuliaMelle01/bb_rhythm,Lawdrian/benchmark-system-gb,earthobservations/simplecast,smba/meteo,ThomasRgbg/dataimport,Lemurnaut/BicyleAccidentsBremen,niclashoyer/simplecast,earthobservations/wetterdienst-ui,earthobservations/quadrantic,lasinludwig/st_test,MyPyDavid/HeFDI-Forschungsdatentag-2022,Jeon-peng/dongil,eUgEntOptIc44/dwd-mosmix-stations,padmalcom/AISpeechAssistant,CAMELS-DE/dataset-builder,Macilias/django-kurs,vduseev/number-encoding,leoschleier/acopf,GeoStat-Examples/gstools-temperature-trend,reknih/monitor,jokus-pokus/Gesundheitsamt,tinoetzold/PV-Prognose",,https://github.com/earthobservations,,Germany,,,https://avatars.githubusercontent.com/u/67055674?v=4,,, AWIPS,The Advanced Weather Interactive Processing System is a meteorological display and analysis package originally developed by the National Weather Service and Raytheon.,Unidata,https://github.com/Unidata/awips2.git,github,"meteorology,forecasting,hdf5,gempak,data-visualization,data-server,awips,java,spring,python,httpd-server,postgresql,hibernate,python-awips,cave,awips-data,edex,foss",Meteorological Observation and Forecast,"2023/10/25, 06:05:50",154,0,26,true,Java,Unidata,Unidata,"Java,Python,Shell,PLpgSQL,HTML,Groovy,TSQL,C,Ruby,JavaScript,C++,Perl,XSLT,Batchfile,GLSL,CSS,Dockerfile,Roff,Raku",http://unidata.github.io/awips2/,"b""# Unidata AWIPS\n\n[https://www.unidata.ucar.edu/software/awips/](https://www.unidata.ucar.edu/software/awips/)\n\n[![GitHub release](https://img.shields.io/github/release/Unidata/awips2/all.svg)]() [![Travis Badge](https://travis-ci.org/Unidata/awips2.svg?branch=unidata_18.2.1)](https://travis-ci.org/Unidata/awips2)\n\nThe Advanced Weather Interactive Processing System (AWIPS) is a meteorological software package. It is used for decoding, displaying, and analyzing data, and was originally developed for the National Weather Service (NWS) by Raytheon. A division at UCAR, the Unidata Program Center (UPC), develops and supports a modified non-operational version of AWIPS for use in research and education by [UCAR member institutions](http://president.ucar.edu/governance/members/universities-representatives). This is released as open source software, free to download and use by anyone.\n\nAWIPS takes a unified approach to data ingest, where most data ingested into the system comes through the LDM client pulling data feeds from the [Unidata IDD](https://www.unidata.ucar.edu/projects/#idd). Various raw data and product files (netCDF, grib, BUFR, ASCII text, gini, AREA) are decoded and stored as HDF5 files and Postgres metadata by [EDEX](docs/install/install-edex), which serves products and data over HTTP.\n\nUnidata supports two data visualization frameworks: [CAVE](docs/install/install-cave) (an Eclipse-built Java application which runs on Linux, Mac, and Windows), and [python-awips](docs/python/overview) (a python package).\n\n> **Note**: Our version of CAVE is a **non-operational** version. It does not support some features of NWS AWIPS. Warnings and alerts cannot be issued from Unidata's CAVE, and some additional functionality may be unavailable as well.\n\n\n![CAVE](https://www.unidata.ucar.edu/software/awips2/images/Unidata_AWIPS2_CAVE.png)\n\n---\n\n## License\n\nUnidata AWIPS source code and binaries (RPMs) are considered to be in the public domain, meaning there are no restrictions on any download, modification, or distribution in any form (original or modified). Unidata AWIPS license information can be found [here](https://github.com/Unidata/awips2/blob/unidata_18.2.1/LICENSE).\n\n---\n\n## AWIPS Data in the Cloud\n\nUnidata and XSEDE Jetstream have partnered to offer an EDEX data server in the cloud, open to the community. 
Select the server in the Connectivity Preferences dialog, or enter **`edex-cloud.unidata.ucar.edu`** (without *http://* before, or *:9581/services* after).\n\n![EDEX in the cloud](docs/images/boEbFSf28t.gif)\n\n\n# Documentation - http://unidata.github.io/awips2/\n\n* [Unidata AWIPS User Manual](http://unidata.github.io/awips2/)\n* [How to Install CAVE](http://unidata.github.io/awips2/install/install-cave)\n* [How to Install EDEX](http://unidata.github.io/awips2/install/install-edex)\n* [Starting and Stopping EDEX](http://unidata.github.io/awips2/install/start-edex)\n* [The D2D Perspective](http://unidata.github.io/awips2/cave/d2d-perspective)\n* [The Localization Perspective](http://unidata.github.io/awips2/cave/localization-perspective)\n* [AWIPS Development Environment (ADE)](http://unidata.github.io/awips2/dev/awips-development-environment)\n* [python-awips Data Access Framework](http://unidata.github.io/python-awips/)\n* [awips2-users Mailing List Archives](https://www.unidata.ucar.edu/mailing_lists/archives/awips2-users/)\n\n\t* [(click to subscribe)](mailto:awips2-users-join@unidata.ucar.edu)""",,"2014/05/01, 00:59:04",3465,CUSTOM,163,1529,"2023/10/24, 12:54:33",74,298,525,102,1,0,0.6,0.41947852760736193,"2023/07/12, 15:04:49",20.3.2-0.4-release,0,6,false,,false,false,,,https://github.com/Unidata,https://www.unidata.ucar.edu/,"Boulder, Colorado, USA",,,https://avatars.githubusercontent.com/u/613345?v=4,,, Metview Python bindings,"Python interface to Metview, a meteorological workstation and batch system for accessing, examining, manipulating and visualising meteorological data.",ecmwf,https://github.com/ecmwf/metview-python.git,github,,Meteorological Observation and Forecast,"2023/08/09, 13:22:29",115,12,21,true,Python,European Centre for Medium-Range Weather Forecasts,ecmwf,"Python,Makefile,C",https://metview.readthedocs.io/en/latest/,"b'\nMetview Python bindings\n=======================\n\nPython interface to Metview, a meteorological workstation and batch system for accessing, examining, manipulating and visualising meteorological data.\nSee documentation at https://metview.readthedocs.io/en/latest/index.html\n\n\nTry the example notebooks on Binder!\n------------------------------------\nClick the link below to start a Binder session to try the examples online now:\n\n.. 
image:: https://mybinder.org/badge_logo.svg\n :target: https://mybinder.org/v2/gh/ecmwf/metview-python/master?filepath=examples\n\n\nRequirements\n------------\n\n- A working Metview 5 installation (at least version 5.0.3, ideally 5.3.0 or above), either from binaries or built from source.\n Conda packages are available for Linux, and native packages are available for many Linux distributions.\n See https://metview.readthedocs.io/en/latest/install.html\n\n - An alternative is to build from the Metview Source Bundle.\n See https://confluence.ecmwf.int/metview/The+Metview+Source+Bundle\n\n- If Metview is not installed in a default location, ensure that the command \'metview\' will run this version by setting your PATH to include the \'bin\' directory\n from where you built or installed it.\n\n- A Python 3 interpreter (ideally version >= 3.5)\n\n\nInstall\n-------\n\nThe package is installed from PyPI with::\n\n    $ pip install metview\n\n\nor from conda-forge with::\n\n    $ conda install metview-python -c conda-forge\n\n\nTest\n----\n\nYou may run a simple selfcheck command to ensure that your system is set up correctly::\n\n    $ python -m metview selfcheck\n    Hello world - printed from Metview!\n    Trying to connect to a Metview installation...\n    Metview version 5.2.0 found\n    Your system is ready.\n\n\nTo manually test that your system is properly set up, open a Python 3 interpreter and try::\n\n    >>> import metview as mv\n    >>> mv.lowercase(\'Hello World!\')\n    \'hello world!\'\n\n\nExamples\n--------\n\nThe [examples](examples) folder contains some Jupyter notebooks and some standalone examples for you to try out!\n\n\nProject resources\n-----------------\n\n============= =========================================================\nDevelopment https://github.com/ecmwf/metview-python\nDownload https://pypi.org/project/metview\nCode quality .. image:: https://travis-ci.com/ecmwf/metview-python.svg?branch=master\n :target: https://travis-ci.com/ecmwf/metview-python\n :alt: Build Status on Travis CI\n .. 
image:: https://coveralls.io/repos/ecmwf/metview-python/badge.svg?branch=master&service=github\n :target: https://coveralls.io/github/ecmwf/metview-python\n :alt: Coverage Status on Coveralls\n============= =========================================================\n\n\nLicense\n-------\n\nCopyright 2017-2021 European Centre for Medium-Range Weather Forecasts (ECMWF).\n\nLicensed under the Apache License, Version 2.0 (the ""License"");\nyou may not use this file except in compliance with the License.\nYou may obtain a copy of the License at: http://www.apache.org/licenses/LICENSE-2.0\nUnless required by applicable law or agreed to in writing, software\ndistributed under the License is distributed on an ""AS IS"" BASIS,\nWITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\nSee the License for the specific language governing permissions and\nlimitations under the License.\n'",,"2018/06/08, 12:37:53",1965,Apache-2.0,23,1135,"2023/10/13, 15:24:31",15,8,38,2,12,0,0.0,0.4527750730282376,"2021/06/14, 11:04:57",1.7.2,1,12,false,,false,true,"Steven-Mugisha/drought_pakistan,sferics/obs-processing,ECMWFCode4Earth/surface_benchmarking,neherdata/weather-tools,ktrask/ProbabilisticWeatherForecast,ep-infosec/50_google_weather-tools,Marine-Weather-Intelligence/mwi_tools,google/weather-tools,ecmwf/ecmwf-data,metwork-framework/mfextaddon_scientific,launda/avguide,ecmwf/ecpoint-calibrate",,https://github.com/ecmwf,www.ecmwf.int,"Shinfield Park, Reading, United Kingdom",,,https://avatars.githubusercontent.com/u/6368067?v=4,,, Herbie,A python package that downloads recent and archived numerical weather prediction model output from different cloud archive sources.,blaylockbk,https://github.com/blaylockbk/Herbie.git,github,"grib,hrrr,cfgrib,xarray,noaa-data,big-data-program,python,rap,nomads,grib2,gfs,download,ecmwf-data,numerical-weather-prediction,open-data",Meteorological Observation and Forecast,"2023/10/15, 04:55:42",290,0,120,true,Python,,,"Python,Makefile",https://herbie.readthedocs.io/,"b'\n\n\n\n# Herbie: Retrieve NWP Model Data \xf0\x9f\x8f\x81\n\n\n\n[![](https://img.shields.io/pypi/v/herbie-data)](https://pypi.python.org/pypi/herbie-data/)\n[![Conda Version](https://img.shields.io/conda/vn/conda-forge/herbie-data.svg)](https://anaconda.org/conda-forge/herbie-data)\n[![DOI](https://zenodo.org/badge/275214142.svg)](https://zenodo.org/badge/latestdoi/275214142)\n\n![License](https://img.shields.io/github/license/blaylockbk/Herbie)\n[![Code style: black](https://img.shields.io/badge/code%20style-black-000000.svg)](https://github.com/psf/black)\n[![Tests (Conda)](https://github.com/blaylockbk/Herbie/actions/workflows/tests-conda.yml/badge.svg)](https://github.com/blaylockbk/Herbie/actions/workflows/tests-conda.yml)\n[![Tests (Python)](https://github.com/blaylockbk/Herbie/actions/workflows/tests-python.yml/badge.svg)](https://github.com/blaylockbk/Herbie/actions/workflows/tests-python.yml)\n[![Documentation Status](https://readthedocs.org/projects/herbie/badge/?version=latest)](https://herbie.readthedocs.io/?badge=latest)\n[![Python](https://img.shields.io/pypi/pyversions/herbie-data.svg)](https://pypi.org/project/herbie-data/)\n[![Conda Recipe](https://img.shields.io/badge/recipe-herbie--data-green.svg)](https://anaconda.org/conda-forge/herbie-data)\n[![Conda Downloads](https://img.shields.io/conda/dn/conda-forge/herbie-data.svg)](https://anaconda.org/conda-forge/herbie-data)\n[![Conda 
Platforms](https://img.shields.io/conda/pn/conda-forge/herbie-data.svg)](https://anaconda.org/conda-forge/herbie-data)\n\n\n\n
\n\n**Herbie** is a python package that downloads recent and archived numerical weather prediction (NWP) model output from different cloud archive sources. **Its most popular capability is to download HRRR model data.** NWP data in GRIB2 format can be read with xarray+cfgrib. Much of this data is made available through the [NOAA Open Data Dissemination](https://www.noaa.gov/information-technology/open-data-dissemination) (NODD) Program (formerly the Big Data Program), which has made weather data more accessible than ever before.\n\nHerbie helps you discover, download, and read data from:\n\n- [High Resolution Rapid Refresh (HRRR)](https://herbie.readthedocs.io/en/latest/user_guide/_model_notebooks/hrrr.html) | [HRRR-Alaska](https://herbie.readthedocs.io/en/latest/user_guide/_model_notebooks/hrrrak.html)\n- [Rapid Refresh (RAP)](https://herbie.readthedocs.io/en/latest/user_guide/_model_notebooks/rap.html)\n- [Global Forecast System (GFS)](https://herbie.readthedocs.io/en/latest/user_guide/_model_notebooks/gfs.html)\n- [Global Ensemble Forecast System (GEFS)](https://herbie.readthedocs.io/en/latest/user_guide/_model_notebooks/gefs.html)\n- [ECMWF Open Data Forecast Products](https://herbie.readthedocs.io/en/latest/user_guide/_model_notebooks/ecmwf.html)\n- [North American Mesoscale Model (NAM)](https://github.com/blaylockbk/Herbie/blob/main/docs/user_guide/_model_notebooks/nam.ipynb)\n- [National Blend of Models (NBM)](https://herbie.readthedocs.io/en/latest/user_guide/_model_notebooks/nbm.html)\n- [Rapid Refresh Forecast System - Prototype (RRFS)](https://herbie.readthedocs.io/en/latest/user_guide/_model_notebooks/rrfs.html)\n- [Real-Time/Un-Restricted Mesoscale Analysis (RTMA/URMA)](https://herbie.readthedocs.io/en/latest/user_guide/_model_notebooks/rtma.html)\n- [Hurricane Analysis And Forecast System (HAFS)](https://herbie.readthedocs.io/en/latest/user_guide/_model_notebooks/hafs.html)\n\n# \xf0\x9f\x93\x93 [Herbie Documentation](https://herbie.readthedocs.io/)\n\n## Installation\n\nThe easiest way to install Herbie and its dependencies is with [Conda](https://docs.conda.io/projects/conda/en/latest/user-guide/tasks/manage-environments.html) from conda-forge.\n\n```bash\nconda install -c conda-forge herbie-data\n```\n\nYou may also create the provided Conda environment, **[`environment.yml`](https://github.com/blaylockbk/Herbie/blob/main/environment.yml)**.\n\n```bash\n# Download environment file\nwget https://github.com/blaylockbk/Herbie/raw/main/environment.yml\n\n# Modify that file if you wish.\n\n# Create the environment\nconda env create -f environment.yml\n\n# Activate the environment\nconda activate herbie\n```\n\nAlternatively, Herbie is published on PyPI and you can install it with pip, _but_ it requires some dependencies that you will have to install yourself:\n\n- Python 3.8 to 3.11\n- cURL\n- [Cartopy](https://scitools.org.uk/cartopy/docs/latest/installing.html), which requires GEOS and Proj.\n- [cfgrib](https://github.com/ecmwf/cfgrib), which requires eccodes.\n- _Optional:_ wgrib2\n- _Optional:_ [Carpenter Workshop](https://github.com/blaylockbk/Carpenter_Workshop)\n\nWhen those are installed within your environment, _then_ you can install Herbie with pip.\n\n```bash\n# Latest published version\npip install herbie-data\n\n# ~~ or ~~\n\n# Most recent changes\npip install git+https://github.com/blaylockbk/Herbie.git\n```\n\n## Capabilities\n\n- Search for model output from different data sources.\n- Download full GRIB2 files.\n- Download subset GRIB2 files (by grib field).\n- 
Read data with xarray.\n- Read index file with Pandas.\n- Plot data with Cartopy (very early development).\n\n```mermaid\n graph TD;\n d1[(HRRR)] -..-> H\n d2[(RAP)] -.-> H\n d3[(GFS)] -..-> H\n d33[(GEFS)] -.-> H\n d4[(ECMWF)] -..-> H\n d5[(NBM)] -.-> H\n d6[(RRFS)] -..-> H\n d7[(RTMA)] -.-> H\n d8[(URMA)] -..-> H\n H((Herbie))\n H --- .inventory\n H --- .download\n H --- .xarray\n\n style H fill:#d8c89d,stroke:#0c3576,stroke-width:4px,color:#000000\n```\n\n```python\nfrom herbie import Herbie\n\n# Herbie object for the HRRR model 6-hr surface forecast product\nH = Herbie(\n \'2021-01-01 12:00\',\n model=\'hrrr\',\n product=\'sfc\',\n fxx=6\n)\n\n# Look at file contents\nH.inventory()\n\n# Download the full GRIB2 file\nH.download()\n\n# Download a subset, like all fields at 500 mb\nH.download("":500 mb"")\n\n# Read subset with xarray, like 2-m temperature.\nH.xarray(""TMP:2 m"")\n```\n
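\nAs a follow-on to the example above, here is a hedged, untested sketch of opening a downloaded subset yourself with xarray + cfgrib instead of calling `H.xarray()`. It assumes `H.download()` returns the local path of the downloaded GRIB2 file; check the documentation to confirm:\n\n```python\nimport xarray as xr\nfrom herbie import Herbie\n\n# Herbie object for the HRRR model 6-hr surface forecast product\nH = Herbie(""2021-01-01 12:00"", model=""hrrr"", product=""sfc"", fxx=6)\n\n# download only the 2-m temperature field (assumed to return the local file path)\npath = H.download(""TMP:2 m"")\n\n# open the subset with the cfgrib engine (requires eccodes, see Installation)\nds = xr.open_dataset(path, engine=""cfgrib"")\nprint(ds)\n```\n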
\n## Data Sources\n\nHerbie downloads model data from the following sources, but can be extended to include others:\n\n- [NOMADS](https://nomads.ncep.noaa.gov/)\n- [NOAA Open Data Dissemination Program (NODD)](https://www.noaa.gov/information-technology/open-data-dissemination) partners (i.e., AWS, Google, Azure).\n- [ECMWF Open Data](https://www.ecmwf.int/en/forecasts/datasets/open-data) Azure storage\n- University of Utah CHPC Pando archive\n- Local file system\n\n## History\n\nDuring my PhD at the University of Utah, I created, at the time, the [only publicly-accessible archive of HRRR data](http://hrrr.chpc.utah.edu/). Over 1,000 research scientists and professionals used that archive.\n\n> Blaylock B., J. Horel and S. Liston, 2017: Cloud Archiving and Data Mining of High Resolution Rapid Refresh Model Output. Computers and Geosciences. 109, 43-50. https://doi.org/10.1016/j.cageo.2017.08.005
\n\nIn the latter half of 2020, the HRRR dataset from 2014 to present was made available through the [NODD Program](https://www.noaa.gov/information-technology/open-data-dissemination) (formerly NOAA\'s Big Data Program). Herbie organizes and expands my original download scripts into a more coherent package with the extended ability to download data for other models from many different archive sources.\n\nI originally released this package under the name \xe2\x80\x9cHRRR-B\xe2\x80\x9d because it only worked with the HRRR dataset; the \xe2\x80\x9cB\xe2\x80\x9d was for my first-name initial. Since then, I have added the ability to download RAP, GFS, ECMWF, GEFS, RRFS, and others, with potentially more models in the future. Thus, this package was renamed **_Herbie_**, after one of my favorite childhood movie characters.\n\nThe University of Utah MesoWest group now manages a [HRRR archive in Zarr format](http://hrrr.chpc.utah.edu/). Maybe someday, Herbie will be able to take advantage of that archive.\n\n## How to Cite and Acknowledge\n\nIf Herbie played an important role in your work, please [tell me about it](https://github.com/blaylockbk/Herbie/discussions/categories/show-and-tell)! Also, consider including a citation or acknowledgement in your article or product.\n\n**_Suggested Citation_**\n\n> Blaylock, B. K. (2022). Herbie: Retrieve Numerical Weather Prediction Model Data (Version 2022.9.0) [Computer software]. https://doi.org/10.5281/zenodo.4567540\n\n**_Suggested Acknowledgment_**\n\n> A portion of this work used code generously provided by Brian Blaylock\'s Herbie python package (https://doi.org/10.5281/zenodo.4567540)\n\n---\n\n**Thanks for using Herbie, and happy racing!**\n\n\xf0\x9f\x8f\x81 Brian\n\n
\n\n| | |\n| :-: | ----------------------------------------------------------------------------------- |\n| \xf0\x9f\x91\xa8\xf0\x9f\x8f\xbb\xe2\x80\x8d\xf0\x9f\x92\xbb | [Contributing Guidelines](https://herbie.readthedocs.io/user_guide/contribute.html) |\n| \xf0\x9f\x92\xac | [GitHub Discussions](https://github.com/blaylockbk/Herbie/discussions) |\n| \xf0\x9f\x9a\x91 | [GitHub Issues](https://github.com/blaylockbk/Herbie/issues) |\n| \xf0\x9f\x8c\x90 | [Personal Webpage](http://home.chpc.utah.edu/~u0553130/Brian_Blaylock/home.html) |\n| \xf0\x9f\x8c\x90 | [University of Utah HRRR archive](http://hrrr.chpc.utah.edu/) |\n\n
\n\nP.S. If you like Herbie, check out my other repos:\n\n- [\xf0\x9f\x8c\x8e GOES-2-go](https://github.com/blaylockbk/goes2go): A python package to download GOES-East/West data and make RGB composites.\n- [\xf0\x9f\x8c\xa1 SynopticPy](https://github.com/blaylockbk/SynopticPy): A python package to download mesonet data from the Synoptic API.\n- [\xf0\x9f\x94\xa8 Carpenter Workshop](https://github.com/blaylockbk/Carpenter_Workshop): A python package with various tools I made that are useful (like easy functions to build Cartopy maps).\n- [\xf0\x9f\x92\xac Bubble Print](https://github.com/blaylockbk/BubblePrint): A silly little python package that gives your print statements personality.\n- [\xf0\x9f\x93\x9c MET Syntax](https://github.com/blaylockbk/vscode-met-syntax): An extension for Visual Studio Code that gives syntax highlighting for Model Evaluation Tools (MET) configuration files.\n\n> **Note**: Alternative Download Tools \n> As an alternative to Herbie, you can use [rclone](https://rclone.org/) to download files from AWS or GCP. I love rclone. Here is a short [rclone tutorial](https://github.com/blaylockbk/pyBKB_v3/blob/master/rclone_howto.md).\n\n| [Visualize Structure](https://mango-dune-07a8b7110.1.azurestaticapps.net/?repo=blaylockbk%2FHerbie) |\n'",",https://zenodo.org/badge/latestdoi/275214142,https://doi.org/10.1016/j.cageo.2017.08.005,https://doi.org/10.1016/j.cageo.2017.08.005,https://doi.org/10.5281/zenodo.4567540\n\n**_Suggested,https://doi.org/10.5281/zenodo.4567540","2020/06/26, 17:43:11",1216,MIT,204,937,"2023/10/15, 04:55:42",51,47,117,59,11,4,0.2,0.05369928400954649,"2023/03/12, 05:25:23",2023.3.0,0,11,false,,false,false,,,,,,,,,,, MEWS,A Python package designed to add extreme weather events to existing weather data or projections.,sandialabs,https://github.com/sandialabs/MEWS.git,github,,Meteorological Observation and Forecast,"2023/09/14, 17:50:56",22,0,9,true,Python,Sandia National Laboratories,sandialabs,"Python,Fortran,Cython",,"b'![MEWS](information/figures/logo.png)\r\n[![Documentation Status](https://readthedocs.org/projects/mews/badge/?version=latest)](https://mews.readthedocs.io/en/latest/?badge=latest)\r\n\r\n![workflow](https://github.com/sandialabs/mews/actions/workflows/pytest.yml/badge.svg)\r\n\r\nThe Multi-scenario Extreme Weather Simulator (MEWS) is a Python package designed to add\r\nextreme weather events to existing weather data or projections. MEWS does not simulate\r\nweather but rather adds variations in weather for the purpose of probabilistic analyses\r\nof infrastructure or environmental systems.\r\n\r\nCurrently MEWS works for extreme temperature. Other enhancements to MEWS are envisioned that will provide reasonably realistic selection\r\nof hurricane futures and extreme precipitation.\r\n\r\nCurrently the infrastructure focus has been on Building Energy Simulation, and MEWS can read/write\r\naltered weather files for EnergyPlus (https://energyplus.net/) and DOE-2 (https://www.doe2.com/). Both of these provide a rich library of historic and Typical Meteorological weather\r\ninputs around the world.\r\n\r\nMEWS has been tested on Linux, macOS, and Windows using Python 3.8, 3.9, and 3.10.\r\nMore documentation will follow in the near future.\r\n\r\nLicense\r\n------------\r\n\r\nSee the LICENSE.md file for license details. 
This package depends on third-party packages that have their own licenses, which are appended to the MEWS license.\r\n\r\nOrganization\r\n------------\r\n\r\nDirectories\r\n * mews - Python package\r\n * dist - wheel and tar.gz binaries for installing mews 1.1\r\n * docs - UNDER CONSTRUCTION - initial build available on ReadTheDocs (https://mews.readthedocs.io/en/latest/)\r\n * information - contains general information about MEWS\r\n * examples - the current working example is run_mews_extreme_temperature_example_v_1_1.py. All others are deprecated or use older techniques\r\n\r\nInstallation\r\n------------\r\n * To install the latest released version:\r\n \r\n ```\r\n pip install mews\r\n ```\r\n \r\n For the current code:\r\n \r\n ```\r\n cd < a directory you want to work with >\r\n python -m venv \r\n /Scripts/activate\r\n git clone git@github.com:sandialabs/MEWS.git\r\n cd MEWS\r\n pip install -e .\r\n ```\r\n If this does not work, an alternative method is:\r\n \r\n ```\r\n cd < a directory you want to work with >\r\n python -m venv \r\n /Scripts/activate\r\n git clone git@github.com:sandialabs/MEWS.git\r\n cd MEWS\r\n pip install -r requirements.txt\r\n python setup.py develop\r\n ```\r\n \r\n Then run the following to ensure the code passes unit testing:\r\n \r\n ```\r\n pip install pytest\r\n pytest\r\n ```\r\n \r\n All tests should pass. Sometimes failures occur if you have TeX on your computer.\r\n \r\n The API for MEWS is only documented in the code and has many inputs. The best example of how to use the latest version is available in examples/run_mews_extreme_temperature_example_v_1_1.py;\r\nthe other examples are either deprecated or are not being kept up to date presently.\r\n\r\nOther Installation Requirements\r\n-------------------------------\r\n * MEWS requires Cython, which needs a C compiler in place. For Windows, this can be the free Microsoft Visual C++ 14.0 Build Tools,\r\navailable at https://visualstudio.microsoft.com/visual-cpp-build-tools/. Download the build tools and install them. It is necessary\r\nto ensure the correct version of the build tools is installed. The Stack Overflow thread below shows how to verify that the correct version is installed.\r\n\r\nhttps://stackoverflow.com/questions/66838238/cython-setup-py-cant-find-installed-visual-c-build-tools\r\n\r\n * MEWS downloads CMIP6 data when using the ClimateScenario class. This step can be messy, though, and requires many retries when downloading the data live from multiple servers. As a result, the entire dataset (~24 GB) has been uploaded to https://osf.io/ts9e8/files/osfstorage and is publicly available to manually download.\r\n\r\nDownload the CMIP6_Data_Files file and then make its local path equal to the ""output_folder"" parameter for the ClimateScenario class in\r\n\r\nmews.weather.climate.ClimateScenario\r\n\r\n(see the short sketch at the end of this section)\r\n\r\nUsing MEWS\r\n--------\r\nA training video has been made available at: https://drive.google.com/file/d/1B-G5yGu0BFXCqj0BYfu_e8XFliAoeoRi/view?usp=drive_link\r\n\r\nMEWS has many classes that have their APIs documented in the code. These classes have specialized functions that most users will not want to work with.\r\nThe MEWS function for heat waves is:\r\n\r\n```\r\nfrom mews.run_mews import extreme_temperature\r\n```\r\n\r\nThe example in MEWS/examples/run_mews_extreme_temperature_v_1_1.py shows how to use extreme_temperature.\r\n\r\n
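For the CMIP6 workaround above, here is a minimal, untested sketch of pointing the ClimateScenario class at the manually downloaded files. Only the import path and the ""output_folder"" parameter are taken from this README; any further required arguments are omitted, so treat this as an outline rather than a working call:\r\n\r\n```\r\nfrom mews.weather.climate import ClimateScenario\r\n\r\n# point MEWS at a manually downloaded copy of the CMIP6_Data_Files folder\r\ncs = ClimateScenario(output_folder=\'/path/to/CMIP6_Data_Files\')\r\n```\r\n\r\n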
The repository now contains\r\npre-processed solution files for the following cities:\r\n\r\n```\r\ncities = [""Chicago"",\r\n ""Baltimore"",\r\n ""Minneapolis"",\r\n ""Phoenix"",\r\n \'Miami\',\r\n \'Houston\',\r\n \'Atlanta\',\r\n \'LasVegas\',\r\n \'LosAngeles\',\r\n \'SanFrancisco\',\r\n \'Albuquerque\',\r\n \'Seattle\',\r\n \'Denver\',\r\n \'Helena\',\r\n \'Duluth\',\r\n \'Fairbanks\',\r\n \'McAllen\',\r\n \'Kodiak\',\r\n \'Worcester\']\r\n ```\r\n \r\nThe extreme_temperature input parameters can be used to generate files from these existing solutions rather than running the lengthy optimization process again.\r\n\r\nInside ""MEWS/examples/example_data"" are folders for each city; inside these folders you can find the solution files in the ""results"" folder and EnergyPlus epw files in the ""mews_epw_results"" folder.\r\n\r\nContact\r\n--------\r\n\r\n * Daniel Villa, Sandia National Laboratories (SNL) dlvilla@sandia.gov\r\n \r\nCiting MEWS\r\n-----------\r\nYou can cite MEWS with one of the following:\r\n\r\n* Villa, Daniel L., Tyler J. Schostek, Krissy Govertsen, and Madeline Macmillan. 2023. ""A Stochastic Model of Future Extreme Temperature Events for Infrastructure Analysis."" _Environmental Modeling & Software_ https://doi.org/10.1016/j.envsoft.2023.105663.\r\n\r\n* Villa, Daniel L., Juan Carvallo, Carlo Bianchi, and Sang Hoon Lee. 2022. ""Multi-scenario Extreme Weather Simulator Application to Heat Waves."" _2022 Building Performance Analysis Conference and SimBuild co-organized by ASHRAE and IBPSA-USA_ https://doi.org/10.26868/25746308.2022.C006\r\n\r\n\r\nSandia Funding Statement\r\n------------------------\r\nSandia National Laboratories is a multimission laboratory managed and operated by National Technology and Engineering Solutions of Sandia, LLC., a wholly owned subsidiary of Honeywell International, Inc., for the U.S. Department of Energy\'s National Nuclear Security Administration under contract DE-NA-0003525.\r\n\r\n'",",https://doi.org/10.1016/j.envsoft.2023.105663.\r\n\r\n*,https://doi.org/10.26868/25746308.2022.C006\r\n\r\n\r\nSandia","2021/06/22, 19:04:10",855,CUSTOM,151,317,"2022/11/11, 17:09:39",3,16,17,7,348,0,0.3,0.14432989690721654,"2023/07/26, 21:28:13",1.1.1,0,3,false,,false,false,,,https://github.com/sandialabs,https://software.sandia.gov,United States,,,https://avatars.githubusercontent.com/u/4993680?v=4,,, eeweather,"Fetch NCDC ISD, TMY3, or CZ2010 weather data that corresponds to ZIP Code Tabulation Areas or Latitude/Longitude.",openeemeter,https://github.com/openeemeter/eeweather.git,github,"weather,weather-data,weather-station",Meteorological Observation and Forecast,"2023/05/30, 17:50:56",44,18,6,true,HTML,OpenEEmeter,openeemeter,"HTML,Python,Shell,Dockerfile",http://eeweather.openee.io/,"b'EEweather: Weather station wrangling for EEmeter\n================================================\n\n.. image:: https://travis-ci.org/openeemeter/eeweather.svg?branch=master\n :target: https://travis-ci.org/openeemeter/eeweather\n\n.. image:: https://img.shields.io/github/license/openeemeter/eeweather.svg\n :target: https://github.com/openeemeter/eeweather\n\n.. image:: https://readthedocs.org/projects/eeweather/badge/?version=latest\n :target: http://eeweather.readthedocs.io/en/latest/?badge=latest\n\n.. image:: https://img.shields.io/pypi/v/eeweather.svg\n :target: https://pypi.python.org/pypi/eeweather\n\n.. 
image:: https://codecov.io/gh/openeemeter/eeweather/branch/master/graph/badge.svg\n :target: https://codecov.io/gh/openeemeter/eeweather\n\n---------------\n\n**EEweather** \xe2\x80\x94 tools for matching to and fetching data from NCDC ISD, TMY3, or CZ2010 weather stations.\n\nEEweather comes with a database of weather station metadata, ZCTA metadata, and GIS data that makes it easier to find the right weather station to use for a particular ZIP code or lat/long coordinate.\n\n`Read the docs. `_\n\nInstallation\n------------\n\nEEweather is a python package and can be installed with pip.\n\n::\n\n $ pip install eeweather\n\nSupported Sources of Weather Data\n---------------------------------\n\n- NCDC Integrated Surface Database (ISD)\n- Global Summary of the Day (GSOD)\n- NREL Typical Meteorological Year 3 (TMY3)\n- California Energy Commission 1998-2009 Weather Normals (CZ2010)\n\nFeatures\n--------\n\n- Match by ZIP code (ZCTA) or by lat/long coordinates\n- Use user-supplied weather station mappings\n- Match within climate zones\n\n - IECC Climate Zones\n - IECC Moisture Regimes\n - Building America Climate Zones\n - California Building Climate Zone Areas\n\n- User-friendly SQLite database of metadata compiled from primary sources\n\n - US Census Bureau (ZCTAs, county shapefiles)\n - Building America climate zone county lists\n - NOAA NCDC Integrated Surface Database Station History\n - NREL TMY3 site\n\n- Plot maps of outputs\n\nContributing\n------------\n\nDev installation::\n\n $ pipenv --python 3.6.4 # create virtualenv with python 3.6.4\n $ pipenv install --dev # install dev requirements with pipenv\n $ pipenv install -e . # install package in editable mode\n $ pipenv shell # activate pipenv virtual environment\n\nBuild docs::\n\n $ make -C docs html\n\nAutobuild docs::\n\n $ make -C docs livehtml\n\nCheck spelling in docs::\n\n $ make -C docs spelling\n\nRun tests::\n\n $ pytest\n\nRun tests on multiple python versions::\n\n $ tox\n\nUpload to pypi (using twine)::\n\n $ python setup.py upload\n\nUse with Docker\n---------------\n\nTo use with docker-compose, use the following:\n\nRun a tutorial notebook (copy link w/ token, open tutorial.ipynb)::\n\n $ docker-compose up jupyter\n\nLive-edit docs::\n\n $ docker-compose up docs\n\nOpen a shell::\n\n $ docker-compose run --rm shell\n\nRun tests::\n\n $ docker-compose run --rm test\n\nRun the CLI::\n\n $ docker-compose run --rm eeweather --help\n\n\nNotice Regarding CZ2010 Data\n----------------------------\n\nThere may be conditions placed on their international commercial use.\nThey can be used within the U.S. or for non-commercial international activities without restriction.\nThe non-U.S. data cannot be redistributed for commercial purposes.\nRe-distribution of these data by others must provide this same notification.\n\nSee `further explanation `_ here. 
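\n\nUsage sketch\n------------\n\nA minimal, untested sketch of the station-matching workflow described under Features; ``rank_stations`` and ``select_station`` are taken from the eeweather documentation, so check there for exact signatures::\n\n    import eeweather\n\n    # rank candidate weather stations for a lat/long coordinate\n    ranked_stations = eeweather.rank_stations(35.1, -119.1)\n\n    # pick the best match; also returns any match warnings\n    station, warnings = eeweather.select_station(ranked_stations)\n    print(station)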
\n'",,"2018/01/29, 21:14:08",2095,Apache-2.0,21,297,"2023/05/30, 17:50:57",15,61,71,7,148,6,0.7,0.29518072289156627,"2018/08/21, 21:02:31",v0.3.5,0,8,false,,true,true,"occamssafetyrazor/deps,pombredanne/5000-deps,lucas-batier/eufron,MatthewSteen/schema,markborkum/schema,jfenna/eemeter-intl,vgonzvadimap/core,kfiramar/baldar,jjcaine/top_dependencies_python,ekmixon/eemeter,Tyler1456/Test_Brython_API,BuildingSync/schema,nguyenanhtuan11041998/Inest,opentaps/opentaps_seas,grantaguinaldo/folium-choropleth-map,BLM-UoR/BLM,EPAENERGYSTAR/epathermostat,openeemeter/eemeter",,https://github.com/openeemeter,https://lfenergy.org/projects/openeemeter/,,,,https://avatars.githubusercontent.com/u/19336002?v=4,,, met.3D,Interactive three-dimensional visualization of numerical ensemble weather predictions and similar numerical atmospheric model datasets.,wxmetvis,https://gitlab.com/wxmetvis/met.3d,gitlab,,Meteorological Observation and Forecast,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,, agera5tools,"Tools for mirroring, manipulating and serving Global Weather for Agriculture data (AgERA5).",ajwdewit,https://github.com/ajwdewit/agera5tools.git,github,,Meteorological Observation and Forecast,"2023/10/02, 14:25:59",9,1,2,true,Python,,,"Python,HTML",,"b""# AgERA5tools\nTools for mirroring, manipulating (exporting, extracting) and \nserving [AgERA5](https://doi.org/10.24381/cds.6c68c9bb) data.\n\nThe agera5tools consist of a set of commandline scripts as well as the `agera5tools` python package\nwhich can be used to \n- set up a mirror for AgERA5 that can automatically build a \n local copy and keep it up to date with the latest AgERA5 data.\n- Allow operations on the downloaded NetCDF files directly such as dumping, point extraction and clipping\n- Serve AgERA5 data on web API through the HTTP protocol. By providing the latitude and longitude in the\n URL, agera5tools can retrieve the corresponding data and return it as JSON.\n\n\n## Commandline tools\n\nThe agera5 commandline tools currently have 8 options. 
The first four are for setting up and managing\nthe local AgERA5 database:\n- *init* to generate a configuration file and initialize the set up.\n- *build* to download the relevant AgERA5 data from Copernicus Climate Data Store (CDS) and build the local database.\n- *mirror* to update the current database with new days from the CDS.\n- *serve* to serve AgERA5 data through a web API and return as JSON encoded data.\n\nThe other four tools operate directly on the NetCDF files downloaded from the CDS.\n- *extract_point*: this can be used to extract a time-series of variables for a given location\n- *dump* which can be used to dump one day of AgERA5 data to CSV, JSON or SQLite \n- *clip* which can be used to extract a subset of AgERA5 data which will be written to a new NetCDF file.\n- *dump_grid* which dumps the AgERA5 grid definition to CSV, JSON or SQLite.\n\n### Init\n\n```Shell\n$ agera5tools init --help\nusing config from /data/agera5/agera5tools.yaml\nUsage: agera5tools init [OPTIONS]\n\n Initializes AgERA5tools\n\n This implies the following:\n - Creating a template configuration file in the current directory\n - Creating a $HOME/.cdsapirc file for access to the CDS\n - Creating the database tables\n - Filling the grid table with the reference grid.\n\nOptions:\n --help Show this message and exit.\n```\n\n### Build\n\n```Shell\n$ agera5tools build --help\nusing config from /data/agera5/agera5tools.yaml\nUsage: agera5tools build [OPTIONS]\n\n Builds the AgERA5 database by bulk download from CDS\n\nOptions:\n -d, --to_database Load AgERA5 data into the database\n -c, --to_csv Write AgERA5 data to compressed CSV files.\n --help Show this message and exit.\n```\n\n### Mirror\n\n```Shell\n$ agera5tools mirror --help\nusing config from /data/agera5/agera5tools.yaml\nUsage: agera5tools mirror [OPTIONS]\n\n Incrementally updates the AgERA5 database by daily downloads from the CDS.\n\nOptions:\n -c, --to_csv Write AgERA5 data to compressed CSV files.\n --help Show this message and exit.\n```\n\n### Serve\n\n```Shell\n$ agera5tools serve --help\nusing config from /data/agera5/agera5tools.yaml\nUsage: agera5tools serve [OPTIONS]\n\n Starts the http server to serve AgERA5 data through HTTP\n\nOptions:\n -p, --port INTEGER Port to number to start listening, default=8080.\n --help Show this message and exit.\n```\n\n### Extract point\n\n```Shell\n$ agera5tools extract_point --help\nUsage: agera5tools extract_point [OPTIONS] AGERA5_PATH LONGITUDE LATITUDE\n STARTDATE ENDDATE\n Extracts AgERA5 data for given location and date range.\n\n AGERA5_PATH: path to the AgERA5 dataset\n LONGITUDE: the longitude for which to extract [dd, -180:180]\n LATITUDE: the latitude for which to extract [dd, -90:90]\n STARTDATE: the start date (yyyy-mm-dd, >=1979-01-01)\n ENDDATE: the last date (yyyy-mm-dd, <= 1 week ago)\n\nOptions:\n -o, --output PATH output file to write to: .csv, .json and .db3 (SQLite)\n are supported.Giving no output will write to stdout in\n CSV format\n\n --tocelsius Convert temperature values from degrees Kelvin to Celsius\n --help Show this message and exit.\n```\n\n### Dump\n\n```Shell\n$ agera5tools dump --help\nUsage: agera5tools dump [OPTIONS] AGERA5_PATH DAY\n\n Dump AgERA5 data for a given day to CSV, JSON or SQLite\n\n AGERA5_PATH: Path to the AgERA5 dataset\n DAY: specifies the day to be dumped (yyyy-mm-dd)\n\nOptions:\n -o, --output PATH output file to write to: .csv, .json and .db3 (SQLite)\n are supported. 
Giving no output will write to stdout in\n CSV format\n\n --tocelsius Convert temperature values from degrees Kelvin to Celsius\n --add_gridid Adds a grid ID instead of latitude/longitude columns.\n --bbox FLOAT... Bounding box: \n --help Show this message and exit.\n```\n\n### Clip\n\n```Shell\n$ agera5tools clip --help\nUsage: agera5tools clip [OPTIONS] AGERA5_PATH DAY\n\n Extracts a portion of agERA5 for the given bounding box and saves to\n NetCDF.\n\n AGERA5_PATH: Path to the AgERA5 dataset\n DAY: specifies the day to be dumped (yyyy-mm-dd)\n\nOptions:\n --base_fname TEXT Base file name to use, otherwise will use\n 'agera5_clipped'\n\n -o, --output_dir PATH Directory to write output to. If not provided, will\n use current directory.\n\n --box FLOAT... Bounding box: \n --help Show this message and exit.\n```\n\n### dump_grid\n\n```Shell\nUsage: agera5tools dump_grid [OPTIONS]\n\n Dump the agERA5 grid to a CSV, JSON or SQLite DB.\n\nOptions:\n -o, --output PATH output file to write to: .csv, .json and .db3 (SQLite)\n are supported.Giving no output will write to stdout in\n CSV format\n\n --help Show this message and exit.\n\n```\n\n## Python package\n\nThe shell commands described above can also be used from python directly by importing the agera5tools package. \nTheir working is nearly identical as the shell commands. The major difference is that the python functions \nreturn either datasets (clip) or dataframes (extract_point, dump, dump_grid). An example for the `clip` function:\n\n```python\nIn [1]: import datetime as dt\n ...: import agera5tools\n ...: from agera5tools.util import BoundingBox\n ...: day = dt.date(2018,1,1)\n ...: bbox = BoundingBox(lon_min=87, lon_max=90, lat_min=24, lat_max=27)\n ...: ds = agera5tools.clip(day, bbox)\n ...: \n\nIn [2]: ds\nOut[2]: \n\nDimensions: (time: 1, lon: 30, lat: 30)\nCoordinates:\n * time (time) datetime64[ns] 2018-01-01\n * lon (lon) float64 87.1 87.2 ... 89.9 90.0\n * lat (lat) float64 26.9 26.8 ... 24.1 24.0\nData variables:\n Precipitation_Flux (time, lat, lon) float32 dask.array\n Solar_Radiation_Flux (time, lat, lon) float32 dask.array\n Temperature_Air_2m_Max_Day_Time (time, lat, lon) float32 dask.array\n Temperature_Air_2m_Mean_24h (time, lat, lon) float32 dask.array\n Temperature_Air_2m_Min_Night_Time (time, lat, lon) float32 dask.array\n Vapour_Pressure_Mean (time, lat, lon) float32 dask.array\n Wind_Speed_10m_Mean (time, lat, lon) float32 dask.array\nAttributes:\n CDI: Climate Data Interface version 1.9.2 (http://mpimet.mpg.de/...\n history: Fri Mar 12 15:04:43 2021: cdo splitday /archive/ESG/wit015/...\n Conventions: CF-1.7\n CDO: Climate Data Operators version 1.9.2 (http://mpimet.mpg.de/...\n```\n\nIt works in a very similar way for the `extract_point` function:\n\n```python\nIn[6]: from agera5tools.util import Point\n\nIn[7]: pnt = Point(latitude=26, longitude=89)\nIn[8]: df = agera5tools.extract_point(pnt, startday=dt.date(2018, 1, 1), endday=dt.date(2018, 1, 31)),\nIn [7]: df.head(5)\nOut[7]: \n day precipitation_flux solar_radiation_flux ... temperature_air_2m_min_night_time vapour_pressure_mean wind_speed_10m_mean\n0 2018-01-01 0.31 13282992 ... 12.156799 11.809731 1.317589\n1 2018-01-02 1.91 13646220 ... 12.342041 11.711860 1.416075\n2 2018-01-03 0.14 14817991 ... 11.064514 11.198871 1.524268\n3 2018-01-04 0.03 14131904 ... 10.861877 11.413278 1.566405\n4 2018-01-05 0.07 14315206 ... 
12.292969 10.984181 1.597181\n\n[5 rows x 8 columns]\n```\n\nNote that extracting point data for a long time series can be time-consuming because all netCDF files have to be opened, decompressed and the point extracted.\n\n## Installing agera5tools\n\n### Requirements\nThe agera5tools package requires Python >=3.8 and has a number of dependencies:\n- pandas == 1.4.1\n- PyYAML >= 6.0\n- Pandas >= 1.5\n- SQLAlchemy >= 1.4\n- xarray >= 2022.12.0\n- dask >= 2022.7.0\n- click >= 8.1\n- flask >= 2.2\n- cdsapi >= 0.5.1\n- dotmap >= 1.3\n- netCDF4 >= 1.6\n- requests >= 2.28\n- wsgiserver >= 1.3\n \nLower versions of dependencies may work, but have not been tested.\n \n### Installing\n\nInstalling `agera5tools` can be done through the GitHub repository to get the latest version:\n\n```shell script\npip install https://github.com/ajwdewit/agera5tools/archive/refs/heads/main.zip\n``` \n\nor directly from PyPI:\n\n```shell script\npip install agera5tools\n``` \n""",",https://doi.org/10.24381/cds.6c68c9bb","2021/04/23, 11:25:33",915,MIT,31,37,"2023/08/23, 09:34:44",0,0,2,2,63,0,0,0.0,,,0,1,false,,false,false,narest-qa/repo4,,,,,,,,,, MeteoInfo,GIS and scientific computation environment for meteorological community.,meteoinfo,https://github.com/meteoinfo/MeteoInfo.git,github,"gis,scientific,scientific-computing,meteorology,java,jython",Meteorological Observation and Forecast,"2023/09/16, 01:59:01",276,0,45,true,Java,MeteoInfo,meteoinfo,"Java,Python,GLSL,HTML",http://www.meteothink.org/,"b'MeteoInfo: GIS and scientific computation environment for meteorological community\n==================================================================================\n\n[![Join the chat at https://gitter.im/meteoinfo/community](https://badges.gitter.im/meteoinfo/community.svg)](https://gitter.im/meteoinfo/community)\n[![DOI](https://zenodo.org/badge/172686439.svg)](https://zenodo.org/badge/latestdoi/172686439)\n[![Project Map](https://sourcespy.com/shield.svg)](https://sourcespy.com/github/meteoinfometeoinfo/)\n\n**MeteoInfo** is an integrated framework for GIS application (**MeteoInfoMap**), scientific computation and\nvisualization environment (**MeteoInfoLab**), especially for the meteorological community.\n\n**MeteoInfoMap** is a GIS application which enables the user to visualize and analyze\nspatial and meteorological data in multiple data formats.\n\n![MeteoInfoMap GUI](images/MeteoInfoMap.PNG)\n \n**MeteoInfoLab** is a scientific computation and visualization environment using Jython scripting with the\nability of multi-dimensional array calculation and 2D/3D plotting.\n\n![MeteoInfoLab GUI](images/MeteoInfoLab.PNG)\n\nIt requires that Java 8 or greater be installed on your computer. See the\nhttp://www.java.com website for a free download of Java if you do not have it\nalready installed.\n\nDocumentation\n-------------\n\nLearn more about MeteoInfo in its official documentation at http://meteothink.org/. For a general project overview refer to [build, module, dependency and other diagrams](https://sourcespy.com/github/meteoinfometeoinfo/).\n\nPublication\n-----------\n\n- Wang, Y.Q., 2014. MeteoInfo: GIS software for meteorological data visualization and analysis. Meteorological Applications, 21: 360-368.\n- Wang, Y.Q., 2019. An Open Source Software Suite for Multi-Dimensional Meteorological Data Computation and Visualisation. Journal of Open Research Software, 7(1), p.21. 
DOI: http://doi.org/10.5334/jors.267\n\nGet in touch\n------------\n\n- Report bugs, suggest features or view the source code [`on GitHub`](http://github.com/meteoinfo/MeteoInfo)\n\nLicense\n-------\n\nCopyright 2010-2023, MeteoInfo Developers\n\nLicensed under the LGPL License, Version 3.0 (the ""License"");\nyou may not use this file except in compliance with the License.\nYou may obtain a copy of the License at\n\n http://www.gnu.org/licenses/lgpl.html\n\nUnless required by applicable law or agreed to in writing, software\ndistributed under the License is distributed on an ""AS IS"" BASIS,\nWITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\nSee the License for the specific language governing permissions and\nlimitations under the License.\n'",",https://zenodo.org/badge/latestdoi/172686439,http://doi.org/10.5334/jors.267\n\nGet","2019/02/26, 10:06:54",1702,LGPL-3.0,126,1152,"2023/09/27, 07:24:10",0,8,36,10,28,0,0.0,0.005253940455341555,"2023/09/04, 01:41:12",v3.7.0,0,6,true,"github,patreon,open_collective,ko_fi,tidelift,community_bridge,liberapay,issuehunt,otechie,lfx_crowdfunding,custom",false,false,,,https://github.com/meteoinfo,http://www.meteothink.org/,,,,https://avatars.githubusercontent.com/u/12314696?v=4,,, thundeR,Rapid computation and visualisation of convective parameters from rawinsonde and Numerical weather prediction data.,bczernecki,https://github.com/bczernecki/thundeR.git,github,"convective-parameters,thunder,severe-weather,cape,hodograph,download-sounding,cin,rawinsonde,tornado",Meteorological Observation and Forecast,"2023/09/07, 11:45:04",33,0,14,true,R,,,"R,C++,Shell",https://bczernecki.github.io/thundeR/,"b'# thundeR \n\n**Rapid computation and visualisation of convective parameters from\nrawinsonde and numerical weather prediction data**\n\n\n\n[![R-CMD-check](https://github.com/bczernecki/thunder/workflows/R-CMD-check/badge.svg)](https://github.com/bczernecki/thunder/actions)\n[![Codecov test\ncoverage](https://codecov.io/gh/bczernecki/thunder/branch/master/graph/badge.svg)](https://app.codecov.io/gh/bczernecki/thunder?branch=master)\n[![CRAN\nstatus](https://www.r-pkg.org/badges/version/thunder)](https://cran.r-project.org/package=thunder)\n[![CRAN RStudio mirror\ndownloads](http://cranlogs.r-pkg.org/badges/thunder)](https://cran.r-project.org/package=thunder)\n[![](http://cranlogs.r-pkg.org/badges/grand-total/thunder?color=brightgreen)](https://cran.r-project.org/package=thunder)\n\n\n\n**`thundeR`** is a freeware R package for\nrapid computation and visualisation of convective parameters commonly\nused in the operational forecasting of severe convective storms. The core\nalgorithm is based on C++ code integrated into the R language via `Rcpp`.\nThis solution makes it possible to compute over 200 thermodynamic and kinematic\nparameters in less than 0.02 s per profile and to process large datasets\nsuch as reanalyses or operational NWP models in a reasonable amount of\ntime. 
The package has been developed since 2017 by research meteorologists\nspecializing in severe convective storms and is constantly updated with\nnew features.\n\n### Online browser\n\nAn online rawinsonde browser for the **thundeR** package is available at\n\n\n### Installation\n\nThe stable version can be installed from the CRAN repository:\n\n``` r\ninstall.packages(""thunder"")\n```\n\nThe development version can be installed directly from the GitHub\nrepository:\n\n``` r\nremotes::install_github(""bczernecki/thunder"")\n```\n\n### Usage\n\n#### Draw Skew-T, hodograph and convective parameters on a single layout and export to png file\n\n``` r\ndata(""sounding_vienna"") # load example dataset (Vienna rawinsonde profile for 23 Aug 2011 12UTC):\npressure = sounding_vienna$pressure # vector of pressure [hPa]\naltitude = sounding_vienna$altitude # vector of altitude [meters]\ntemp = sounding_vienna$temp # vector of temperature [degree Celsius]\ndpt = sounding_vienna$dpt # vector of dew point temperature [degree Celsius]\nwd = sounding_vienna$wd # vector of wind direction [azimuth in degrees]\nws = sounding_vienna$ws # vector of wind speed [knots]\nsounding_save(filename = ""Vienna.png"", title = ""Vienna - 23 August 2011 1200 UTC"", pressure, altitude, temp, dpt, wd, ws)\n```\n\n![](https://raw.githubusercontent.com/bczernecki/thundeR/master/inst/figures/Vienna.png)\n\n#### Download LBF North Platte rawinsonde profile for 03 Jul 1999 00UTC and export to png file\n\n``` r\nprofile = get_sounding(wmo_id = 72562, yy = 1999, mm = 7, dd = 3, hh = 0)\n\nsounding_save(filename = ""NorthPlatte.png"", title = ""North Platte - 03 July 1999 0000 UTC"", profile$pressure, profile$altitude, profile$temp, profile$dpt, profile$wd, profile$ws)\n```\n![](https://raw.githubusercontent.com/bczernecki/thundeR/master/inst/figures/NorthPlatte.png)\n\n#### Compute convective parameters based on sample vertical profile data:\n\n``` r\nlibrary(""thunder"")\n\npressure = c(1000, 855, 700, 500, 300, 100, 10) # pressure [hPa]\naltitude = c(0, 1500, 2500, 6000, 8500, 12000, 25000) # altitude [meters]\ntemp = c(25, 10, 0, -15, -30, -50, -92) # temperature [degree Celsius]\ndpt = c(20, 5, -5, -30, -55, -80, -99) # dew point temperature [degree Celsius]\nwd = c(0, 90, 135, 180, 270, 350, 0) # wind direction [azimuth in degrees]\nws = c(5, 10, 20, 30, 40, 5, 0) # wind speed [knots]\naccuracy = 2 # accuracy of computations where 3 = high (slow), 2 = medium (recommended), 1 = low (fast)\nsounding_compute(pressure, altitude, temp, dpt, wd, ws, accuracy)\n\n\n# MU_CAPE MU_CAPE_M10 MU_CAPE_M10_PT MU_02km_CAPE \n# 2269.9257 998.1443 313.0747 247.9794 \n# MU_03km_CAPE MU_HGL_CAPE MU_CIN MU_LCL_HGT \n# 575.6293 1616.5384 0.0000 730.0000 \n# MU_LFC_HGT MU_EL_HGT MU_LI MU_LI_M10 \n# 730.0000 8300.0000 -10.1119 -10.8539 \n# MU_WMAX MU_EL_TEMP MU_LCL_TEMP MU_LFC_TEMP \n# 67.3784 -28.8000 17.7000 17.7000 \n# MU_MIXR MU_CAPE_500 MU_CAPE_500_M10 MU_CAPE_500_M10_PT \n# 14.8759 1076.0322 389.3651 137.0814 \n# MU_CIN_500 MU_LI_500 MU_LI_500_M10 SB_CAPE \n# 0.0000 -5.0417 -6.2346 2269.9257 \n# SB_CAPE_M10 SB_CAPE_M10_PT SB_02km_CAPE SB_03km_CAPE \n# 998.1443 313.0747 247.9794 575.6293 \n# SB_HGL_CAPE SB_CIN SB_LCL_HGT SB_LFC_HGT \n# 1616.5384 0.0000 730.0000 730.0000 \n# SB_EL_HGT SB_LI SB_LI_M10 SB_WMAX \n# 8300.0000 -10.1119 -10.8539 67.3784 \n# SB_EL_TEMP SB_LCL_TEMP SB_LFC_TEMP SB_MIXR \n# -28.8000 17.7000 17.7000 14.8759 \n# ML_CAPE ML_CAPE_M10 ML_CAPE_M10_PT ML_02km_CAPE \n# 1646.0639 670.1001 225.2816 164.0798 \n# ML_03km_CAPE ML_HGL_CAPE ML_CIN 
ML_LCL_HGT \n# 422.4290 1250.0221 0.0000 975.0000 \n# ML_LFC_HGT ML_EL_HGT ML_LI ML_LI_M10 \n# 975.0000 7900.0000 -7.6203 -8.5845 \n# ML_WMAX ML_EL_TEMP ML_LCL_TEMP ML_LFC_TEMP \n# 57.3771 -26.4000 15.2500 15.2500 \n# ML_MIXR LR_0500m LR_01km LR_02km \n# 13.0487 -10.0000 -10.0000 -10.0000 \n# LR_03km LR_04km LR_06km LR_16km \n# -9.0476 -7.8571 -6.6667 -6.0000 \n# LR_26km LR_24km LR_36km LR_26km_MAX \n# -5.0000 -5.7672 -4.2857 -5.7143 \n# LR_500700hPa LR_500800hPa LR_600800hPa FRZG_HGT \n# -4.2857 -5.1807 -5.8333 2500.0000 \n# FRZG_wetbulb_HGT HGT_max_thetae_03km HGT_min_thetae_04km Delta_thetae \n# 2275.0000 0.0000 3750.0000 28.0698 \n# Delta_thetae_min04km Thetae_01km Thetae_02km DCAPE \n# 28.8346 330.5323 323.6191 598.3100 \n# Cold_Pool_Strength Wind_Index PRCP_WATER Moisture_Flux_02km \n# 12.6322 33.9064 27.1046 30.4255 \n# RH_01km RH_02km RH_14km RH_25km \n# 0.7291 0.7197 0.6452 0.5550 \n# RH_36km RH_HGL BS_0500m BS_01km \n# 0.4436 0.4603 1.9172 3.8344 \n# BS_02km BS_03km BS_06km BS_08km \n# 8.7821 12.6560 18.0055 17.4077 \n# BS_36km BS_26km BS_16km BS_18km \n# 9.3693 13.3304 16.6478 20.2791 \n# BS_EFF_MU BS_EFF_SB BS_EFF_ML BS_SFC_to_M10 \n# 14.2232 14.2232 13.8968 15.5104 \n# BS_1km_to_M10 BS_2km_to_M10 BS_MU_LFC_to_M10 BS_SB_LFC_to_M10 \n# 13.6499 9.8830 14.0737 14.0737 \n# BS_ML_LFC_to_M10 BS_MW02_to_SM BS_MW02_to_RM BS_MW02_to_LM \n# 13.6864 7.3040 10.1410 10.7870 \n# BS_HGL_to_SM BS_HGL_to_RM BS_HGL_to_LM MW_0500m \n# 4.8934 7.7860 9.9885 2.3086 \n# MW_01km MW_02km MW_03km MW_06km \n# 2.4251 3.3476 4.8003 7.8107 \n# MW_13km SRH_100m_RM SRH_250m_RM SRH_500m_RM \n# 6.8389 4.2535 10.0537 19.7206 \n# SRH_1km_RM SRH_3km_RM SRH_36km_RM SRH_100m_LM \n# 39.6346 152.5219 236.5901 1.5027 \n# SRH_250m_LM SRH_500m_LM SRH_1km_LM SRH_3km_LM \n# 3.5518 6.9670 14.0023 -13.1308 \n# SRH_36km_LM SV_500m_RM SV_01km_RM SV_03km_RM \n# -24.3790 0.0039 0.0039 0.0048 \n# SV_500m_LM SV_01km_LM SV_03km_LM MW_SR_500m_RM \n# 0.0010 0.0011 -0.0014 10.0863 \n# MW_SR_01km_RM MW_SR_03km_RM MW_SR_500m_LM MW_SR_01km_LM \n# 10.1501 9.5359 13.7821 12.8585 \n# MW_SR_03km_LM MW_SR_VM_500m_RM MW_SR_VM_01km_RM MW_SR_VM_03km_RM \n# 8.4579 10.1078 10.2253 10.5358 \n# MW_SR_VM_500m_LM MW_SR_VM_01km_LM MW_SR_VM_03km_LM SV_FRA_500m_RM \n# 13.7647 12.8371 8.8342 0.9982 \n# SV_FRA_01km_RM SV_FRA_03km_RM SV_FRA_500m_LM SV_FRA_01km_LM \n# 0.9871 0.9560 0.2592 0.2800 \n# SV_FRA_03km_LM Bunkers_RM_A Bunkers_RM_M Bunkers_LM_A \n# -0.2862 209.4046 7.7933 122.0585 \n# Bunkers_LM_M Bunkers_MW_A Bunkers_MW_M Corfidi_downwind_A \n# 13.1825 151.9494 7.8107 218.6955 \n# Corfidi_downwind_M Corfidi_upwind_A Corfidi_upwind_M K_Index \n# 14.6982 231.3283 9.1794 24.3548 \n# Showalter_Index TotalTotals_Index SWEAT_Index STP_fix \n# 3.7501 44.3548 106.4168 0.3600 \n# STP_new STP_fix_LM STP_new_LM SCP_fix \n# 0.2005 0.1272 0.0708 6.2338 \n# SCP_new SCP_fix_LM SCP_new_LM SHIP \n# 4.9243 -0.5367 -0.4239 0.6287 \n# HSI DCP MU_WMAXSHEAR SB_WMAXSHEAR \n# 1.7159 1.1507 1213.1848 1213.1848 \n# ML_WMAXSHEAR MU_EFF_WMAXSHEAR SB_EFF_WMAXSHEAR ML_EFF_WMAXSHEAR \n# 1033.1051 958.3359 958.3359 797.3548 \n# EHI_500m EHI_01km EHI_03km EHI_500m_LM \n# 0.2798 0.5623 2.1638 0.0988 \n# EHI_01km_LM EHI_03km_LM SHERBS3 SHERBE \n# 0.1987 -0.1863 0.6482 0.7015 \n# SHERBS3_v2 SHERBE_v2 DEI DEI_eff \n# 0.8642 0.9353 1.5198 1.1885 \n# TIP \n# 2.4356 \n```\n\n#### Hodograph example:\n\nDownload sounding and draw hodograph:\n\n``` r\ndata(""northplatte"")\nsounding_hodograph(ws = northplatte$ws, wd = northplatte$wd, altitude = northplatte$altitude, max_speed = 
38)\ntitle(""North Platte - 03 July 1999, 00:00 UTC"")\n```\n\n![](https://raw.githubusercontent.com/bczernecki/thundeR/master/inst/figures/hodograph.png)\n\n\n#### Perform sounding computations using Python with rpy2:\n\nIt is possible to launch `thunder` under Python via the rpy2 library. Below\nyou can find a minimal reproducible example.\n\nMake sure that the pandas and rpy2 libraries are available in your Python\nenvironment. If not, install the required Python packages:\n\n``` bash\npip install pandas \npip install rpy2 \n```\n\nLaunch `thunder` under Python with `rpy2`:\n\n``` py\n# load required packages\nfrom rpy2.robjects.packages import importr\nfrom rpy2.robjects import r, pandas2ri\nimport rpy2.robjects as robjects\npandas2ri.activate()\n\n# load thunder package (make sure that it was installed in R before)\nimportr(\'thunder\')\n\n# download North Platte sounding \nprofile = robjects.r[\'get_sounding\'](wmo_id = 72562, yy = 1999, mm = 7, dd = 3, hh = 0)\n\n# compute convective parameters\nparameters = robjects.r[\'sounding_compute\'](profile[\'pressure\'], profile[\'altitude\'], profile[\'temp\'], profile[\'dpt\'], profile[\'wd\'], profile[\'ws\'], accuracy = 2)\n\n\n# customize output and print all computed variables, e.g. most-unstable CAPE (first element) equals 9413 J/kg\n\nprint(list(map(\'{:.2f}\'.format, parameters)))\n[\'9413.29\', \'233.35\', \'1713.74\', \'0.00\', \'775.00\', \'775.00\',\n\'15500.00\', \'-16.55\', \'137.21\', \'-66.63\', \'23.98\', \'23.98\',\n\'23.36\', \'9413.29\', \'233.35\', \'1713.74\', \'0.00\', \'775.00\',\n\'775.00\', \'15500.00\', \'-16.55\', \'137.21\', \'-66.63\', \'23.98\', \n\'23.98\', \'23.36\', \'7805.13\', \'115.22\', \'1515.81\', \'-4.35\', \n\'950.00\', \'950.00\', \'15000.00\', ...]\n```\n
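\nThe flat vector above can be given its parameter names back on the Python side. A hedged sketch (the wrapper function below is ours, not part of thundeR; with `pandas2ri` active the returned data.frame arrives as a pandas DataFrame):\n\n``` py\n# define a small R wrapper so the parameter names survive the round trip\nrobjects.r(\'\'\'\ncompute_named <- function(...) {\n res <- sounding_compute(...)\n data.frame(parameter = names(res), value = as.numeric(res))\n}\n\'\'\')\n\nparams_df = robjects.r[\'compute_named\'](profile[\'pressure\'], profile[\'altitude\'], profile[\'temp\'], profile[\'dpt\'], profile[\'wd\'], profile[\'ws\'], accuracy = 2)\n\n# look up individual parameters by name\nprint(params_df.set_index(\'parameter\').loc[[\'MU_CAPE\', \'SB_CAPE\', \'ML_CAPE\']])\n```\n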
\n#### Accuracy tables for `sounding_compute()`\n\nThe interpolation algorithm used in the `sounding_compute()` function\nimpacts the accuracy of parameters such as CAPE or CIN, as well as the performance\nof the script. The valid options for the `accuracy` parameter are 1, 2\nor 3:\n\n**accuracy = 1** - High performance but low accuracy. Intended for\nlarge datasets where output needs to be available quickly (e.g.\noperational numerical weather models). This option is around 20 times\nfaster than the high-accuracy (3) setting. Interpolation is performed for 60\nlevels (m AGL):\n\n``` r\nc(0, 100, 200, 300, 400, 500, 600, 700, 800, 900, 1000, 1100, 1200, 1300, 1400, 1600, 1800, 2000, 2200, 2400, 2600, 2800, 3000, 3200, 3400, 3600, 3800, 4000, 4200, 4400, 4600, 4800, 5000, 5200, 5400, 5600, 5800, 6000, 6500, 7000, 7500, 8000, 8500, 9000, 9500, 10000, 10500, 11000, 11500, 12000, 12500, 13000, 13500, 14000, 15000, 16000, 17000, 18000, 19000, 20000)\n```\n\n**accuracy = 2** - Compromise between script performance and accuracy.\nRecommended for efficient processing of large numerical weather\nprediction datasets such as meteorological reanalyses for research\nstudies. This option is around 10 times faster than the high-accuracy (3)\nsetting. Interpolation is performed for 318 levels (m AGL):\n\n``` r\nc(0, 10, 20, 30, 40, 50, 60, 70, 80, 90, 100, 110, 120, 130, 140, 150, 160, 170, 180, 190, 200, 210, 220, 230, 240, 250, 260, 270, 280, 290, 300, 310, 320, 330, 340, 350, 360, 370, 380, 390, 400, 410, 420, 430, 440, 450, 460, 470, 480, 490, 500, 510, 520, 530, 540, 550, 560, 570, 580, 590, 600, 610, 620, 630, 640, 650, 660, 670, 680, 690, 700, 710, 720, 730, 740, 750, 775, 800, 825, 850, 875, 900, 925, 950, 975, 1000, 1025, 1050, 1075, 1100, 1125, 1150, 1175, 1200, 1225, 1250, 1275, 1300, 1325, 1350, 1375, 1400, 1425, 1450, 1475, 1500, 1525, 1550, 1575, 1600, 1625, 1650, 1675, 1700, 1725, 1750, 1775, 1800, 1825, 1850, 1875, 1900, 1925, 1950, 1975, 2000, 2025, 2050, 2075, 2100, 2125, 2150, 2175, 2200, 2225, 2250, 2275, 2300, 2325, 2350, 2375, 2400, 2425, 2450, 2475, 2500, 2525, 2550, 2575, 2600, 2625, 2650, 2675, 2700, 2725, 2750, 2775, 2800, 2825, 2850, 2875, 2900, 2925, 2950, 2975, 3000, 3050, 3100, 3150, 3200, 3250, 3300, 3350, 3400, 3450, 3500, 3550, 3600, 3650, 3700, 3750, 3800, 3850, 3900, 3950, 4000, 4050, 4100, 4150, 4200, 4250, 4300, 4350, 4400, 4450, 4500, 4550, 4600, 4650, 4700, 4750, 4800, 4850, 4900, 4950, 5000, 5050, 5100, 5150, 5200, 5250, 5300, 5350, 5400, 5450, 5500, 5550, 5600, 5650, 5700, 5750, 5800, 5850, 5900, 5950, 6000, 6100, 6200, 6300, 6400, 6500, 6600, 6700, 6800, 6900, 7000, 7100, 7200, 7300, 7400, 7500, 7600, 7700, 7800, 7900, 8000, 8100, 8200, 8300, 8400, 8500, 8600, 8700, 8800, 8900, 9000, 9100, 9200, 9300, 9400, 9500, 9600, 9700, 9800, 9900, 10000, 10100, 10200, 10300, 10400, 10500, 10600, 10700, 10800, 10900, 11000, 11100, 11200, 11300, 11400, 11500, 11600, 11700, 11800, 11900, 12000, 12250, 12500, 12750, 13000, 13250, 13500, 13750, 14000, 14250, 14500, 14750, 15000, 15250, 15500, 15750, 16000, 16250, 16500, 16750, 17000, 17250, 17500, 17750, 18000, 18250, 18500, 18750, 19000, 19250, 19500, 19750, 20000)\n```\n\n**accuracy = 3** - High accuracy but low performance setting. Recommended\nfor analysing individual profiles. Interpolation is performed with a 5 m\nvertical resolution step up to 20 km AGL (i.e.: `0, 5, 10, ... 20000` m\nAGL).
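\n\nThe speed trade-off between the three settings can be checked directly from Python by timing each run, reusing the `profile` object from the rpy2 example above (an illustrative sketch; absolute timings depend on the machine):\n\n``` py\nimport time\n\nfor accuracy in (1, 2, 3):\n    t0 = time.perf_counter()\n    robjects.r[\'sounding_compute\'](profile[\'pressure\'], profile[\'altitude\'], profile[\'temp\'], profile[\'dpt\'], profile[\'wd\'], profile[\'ws\'], accuracy = accuracy)\n    print(accuracy, round(time.perf_counter() - t0, 3), \'s\')\n```\n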
20000` m\nAGL)\n\n### Important notes\n\n- Remember to always input wind speed data in knots.\n- Script will always consider first height level as the surface (h =\n 0), therefore input height data can be as above sea level (ASL) or\n above ground level (AGL).\n- For efficiency purposes it is highly recommended to clip input data\n for a maximum of 16-18 km AGL or lower.\n- Values of parameters will be different for different accuracy\n settings.\n\n### Developers\n\n**thundeR** package has been developed by atmospheric scientists, each\nhaving an equal contribution (listed in alphabetical order): \n\n- Bartosz Czernecki (Adam Mickiewicz University in Pozna\xc5\x84, Poland) \n\n- Piotr Szuster (Cracow University of Technology, Poland) \n\n- Mateusz Taszarek (CIMMS/NSSL in Norman, Oklahoma, United States)\n\n### Contributions\n\n[Feel free to submit issues and enhancement\nrequests.](https://github.com/bczernecki/thunder/issues)\n'",,"2021/02/07, 10:51:33",990,MIT,206,516,"2023/09/07, 11:48:22",3,43,53,12,48,0,0.0,0.34782608695652173,"2023/09/07, 11:56:39",v1.1.2,0,4,false,,false,false,,,,,,,,,,, AtmoSwing,"Allow predicting local meteorological variables of interest, such as the daily precipitation, based on synoptic variables.",atmoswing,https://github.com/atmoswing/atmoswing.git,github,"analogue,forecast,downscaling,precipitation",Meteorological Observation and Forecast,"2023/08/04, 10:30:12",6,0,1,true,C++,AtmoSwing,atmoswing,"C++,Python,CMake,C,HTML,Cuda,Dockerfile,Shell",,"b""[![AtmoSwing](https://raw.githubusercontent.com/atmoswing/atmoswing/master/art/logo/logo.png)](http://www.atmoswing.org)\r\n\r\n[![doi](https://zenodo.org/badge/95885904.svg)](https://zenodo.org/badge/latestdoi/95885904)\r\n[![GitHub release](https://img.shields.io/github/v/release/atmoswing/atmoswing)](https://github.com/atmoswing/atmoswing/releases)\r\n[![Docker Image Version](https://img.shields.io/docker/v/atmoswing/forecaster?label=docker%20forecaster)](https://hub.docker.com/r/atmoswing/forecaster)\r\n[![Docker Image Version](https://img.shields.io/docker/v/atmoswing/optimizer?label=docker%20optimizer)](https://hub.docker.com/r/atmoswing/optimizer)\r\n[![Docker Image Version](https://img.shields.io/docker/v/atmoswing/downscaler?label=docker%20downscaler)](https://hub.docker.com/r/atmoswing/downscaler)\r\n[![Linux builds](https://github.com/atmoswing/atmoswing/actions/workflows/linux-builds.yml/badge.svg)](https://github.com/atmoswing/atmoswing/actions/workflows/linux-builds.yml)\r\n[![Windows builds](https://github.com/atmoswing/atmoswing/actions/workflows/windows-builds.yml/badge.svg)](https://github.com/atmoswing/atmoswing/actions/workflows/windows-builds.yml)\r\n[![Docker images](https://github.com/atmoswing/atmoswing/actions/workflows/docker-images.yml/badge.svg)](https://github.com/atmoswing/atmoswing/actions/workflows/docker-images.yml)\r\n[![Coverity Scan](https://img.shields.io/coverity/scan/13133)](https://scan.coverity.com/projects/atmoswing-atmoswing)\r\n[![Documentation Status](https://readthedocs.org/projects/atmoswing/badge/?version=latest)](https://atmoswing.readthedocs.io/en/latest/?badge=latest)\r\n[![Codecov](https://img.shields.io/codecov/c/github/atmoswing/atmoswing)](https://codecov.io/gh/atmoswing/atmoswing)\r\n[![CII Best Practices](https://bestpractices.coreinfrastructure.org/projects/1107/badge)](https://bestpractices.coreinfrastructure.org/projects/1107)\r\n[![Codacy 
Badge](https://app.codacy.com/project/badge/Grade/87f76e2cfa7f4e2280d37c824df843f1)](https://www.codacy.com/gh/atmoswing/atmoswing/dashboard?utm_source=github.com&utm_medium=referral&utm_content=atmoswing/atmoswing&utm_campaign=Badge_Grade)\r\n\r\nAnalog methods (AMs) allow predicting local meteorological variables of interest (predictand), such as the daily precipitation, based on synoptic variables (predictors). They rely on the hypothesis that similar atmospheric conditions are likely to result in similar local effects. The statistical relationship is first defined (e.g. which predictors, and how many subsampling steps) and calibrated (e.g. which spatial domain, and how many analogues) before being applied to the target period, may it be for operational forecasting or for climate impact studies. A benefit of AMs is that they are lightweight and can provide valuable results for a negligible cost.\r\n\r\nAtmoSwing is an open source (CDDL-1.0) software that implements different AM variants in a very flexible way, so that they can be easily configured by means of XML files. It is written in C++, is object-oriented and multi-platform. AtmoSwing provides four tools: the Optimizer to establish the relationship between the predictand and predictors, the Downscaler to apply the method for climate impact studies, the Forecaster to perform operational forecasts, and the Viewer to display the results. \r\n\r\nThe Optimizer provides a semi-automatic sequential approach, as well as Monte-Carlo analyses, and a global optimization technique by means of Genetic Algorithms. It calibrates the statistical relationship that can be later applied in a forecasting or climatic context.\r\n\r\nThe Downscaler takes as input the outputs of climate models, either GCMs or RCMs in order to provide a downscaled time series of the predictand of interest at a local scale.\r\n\r\nThe Forecaster automatically downloads and reads operational NWP outputs to provide operational forecasting of the predictand of interest. The processing of a forecast is extremely lightweight in terms of computing resources; it can indeed run on almost any computer.\r\n\r\nThe Viewer displays the forecasts in an interactive GIS environment. It contains several layers of syntheses and details in order to provide a quick overview of the potential critical situations in the coming days, as well as the possibility for the user to go into the details of the forecasted predictand distribution.\r\n\r\n## What's in there ##\r\n\r\nThis repository contains 4 different tools:\r\n\r\n* The Forecaster: automatically processes the forecast\r\n* The Viewer: displays the resulting files in a GIS environment\r\n* The Optimizer: optimizes the method for a given precipitation timeseries\r\n* The Downscaler: downscaling for climate impact studies\r\n\r\nAdditionally, multiple unit tests are available and are built along with the software. 
It is highly recommended to run these tests before using AtmoSwing operationally.\r\n\r\n## Documentation ##\r\n\r\nAtmoSwing documentation can be found here: https://atmoswing.readthedocs.io/en/latest/\r\n\r\nThe repository of the documentation is https://github.com/atmoswing/user-manual\r\n\r\n## Docker images ##\r\n\r\nAtmoSwing Forecaster image: https://hub.docker.com/r/atmoswing/forecaster\r\n\r\nAtmoSwing Optimizer image: https://hub.docker.com/r/atmoswing/optimizer\r\n\r\nAtmoSwing Downscaler image: https://hub.docker.com/r/atmoswing/downscaler\r\n\r\n## Download AtmoSwing ##\r\n\r\nYou can download the releases under: https://github.com/atmoswing/atmoswing/releases\r\n\r\n## How to build AtmoSwing ##\r\n\r\nThe wiki (https://github.com/atmoswing/atmoswing/wiki) explains how to compile the required libraries and the source code of AtmoSwing. In order to get AtmoSwing compiled, follow these steps:\r\n\r\n1. [Get the required **libraries**](https://github.com/atmoswing/atmoswing/wiki/Libraries)\r\n2. [**Configure / build** with CMake](https://github.com/atmoswing/atmoswing/wiki/Build)\r\n\r\n## How to contribute ##\r\n\r\nIf you want to contribute to the software development, you can fork this repository (keep it public!) and then suggest your improvements by sending pull requests. We would be glad to see a community growing around this project.\r\n\r\nWhen adding a new feature, please write a test along with it.\r\n\r\nAdditionally, you can report issues or suggestions in the issues tracker (https://github.com/atmoswing/atmoswing/issues).\r\n\r\nAtmoSwing aims to follow the [Google C++ Style Guide](https://google.github.io/styleguide/cppguide.html) (not yet the case everywhere) with a few differences (mainly based on [wxWidgets Coding Guidelines](https://www.wxwidgets.org/develop/coding-guidelines)):\r\n* Use ``CamelCase`` for types (classes, structs, enums, unions), methods and functions \r\n* Use ``camelCase`` for variables.\r\n* Use the ``m_`` prefix for member variables.\r\n* Global variables shouldn\xe2\x80\x99t normally be used at all, but if they are, they should have a ``g_`` prefix.\r\n* Use Set/Get prefixes for accessors\r\n\r\n## Credits ##\r\n\r\n[![University of Lausanne](https://raw.githubusercontent.com/atmoswing/atmoswing/master/art/misc/logo-Unil.png)](http://unil.ch/iste) \r\n    \r\n[![Terranum](https://raw.githubusercontent.com/atmoswing/atmoswing/master/art/misc/logo-Terranum.png)](http://terranum.ch) \r\n    \r\n[![University of Bern](https://raw.githubusercontent.com/atmoswing/atmoswing/master/art/misc/logo-Unibe.png)](http://www.geography.unibe.ch/) \r\n\r\nCopyright (C) 2007-2013, [University of Lausanne](http://unil.ch/iste), Switzerland.\r\n\r\nCopyright (C) 2013-2015, [Terranum](http://terranum.ch), Switzerland.\r\n\r\nCopyright (C) 2016-2017, [University of Bern](http://www.geography.unibe.ch/), Switzerland.\r\n\r\nContributions:\r\n\r\n* Developed by Pascal Horton\r\n* Under the supervision of Charles Obled and Michel Jaboyedoff\r\n* With inputs from Lucien Schreiber, Richard Metzger and Renaud Marty\r\n\r\nFinancial contributions:\r\n\r\n* 2008-2011 Cantons of Valais and Vaud (Switzerland): basis of the software from the MINERVE project.\r\n* 2011-2013 University of Lausanne (Switzerland): reorganization of the source code, improvement of the build system, documentation.\r\n* 2014 Direction r\xc3\xa9gionale de l\xe2\x80\x99environnement, de l\xe2\x80\x99am\xc3\xa9nagement et du logement (France): addition of new forecast skill scores (reliability of the CRPS and rank 
histogram).\r\n* 2015 Canton of Valais (Switzerland): addition of synthetic XML export and the aggregation of parametrizations in the viewer.\r\n\r\nSee both license.txt and notice.txt files for details about the license and its enforcement.\r\n""",",https://zenodo.org/badge/latestdoi/95885904","2017/06/30, 12:21:21",2308,CUSTOM,473,3083,"2023/08/21, 09:52:38",50,34,194,43,65,4,0.0,0.003553660270078196,"2023/08/04, 11:38:35",v3.0.11,0,3,false,,false,false,,,https://github.com/atmoswing,www.atmoswing.org,,,,https://avatars.githubusercontent.com/u/28556905?v=4,,, ufs-weather-model,"Contains the model code and external links needed to build the Unified Forecast System atmosphere model and associated components, including the WaveWatch III model.",ufs-community,https://github.com/ufs-community/ufs-weather-model.git,github,"nwp,numerical-weather-prediction,forecast-model,weather-model,ufs,unified-forecast-system,community-modeling",Meteorological Observation and Forecast,"2023/10/24, 15:20:12",119,0,24,true,Fortran,Unified Forecast System (UFS),ufs-community,"Fortran,Shell,CMake,Python,Lua,C,Dockerfile",,"b'[![Read The Docs Status](https://readthedocs.org/projects/ufs-weather-model/badge/?badge=latest)](http://ufs-weather-model.readthedocs.io/)\n\n# ufs-weather-model\n\nThis is the UFS weather model source code.\n\n# Where to find information\n\nStart at the [wiki](https://github.com/ufs-community/ufs-weather-model/wiki) which has quick start instructions.\n\nThe [user\'s reference guide](http://ufs-weather-model.readthedocs.io/) is hosted on Read the Docs.\n\n# What files are what\n\nThe top level directory structure groups source code and input files as follows:\n\n| File/directory | Purpose |\n| -------------- | ------- |\n| ```LICENSE.md``` | A copy of the GNU Lesser General Public License, Version 3. |\n| ```README.md``` | This file with basic pointers to more information. |\n| ```NEMS/``` | Contains NOAA Environmental Modeling System source code and nems compset run scripts. |\n| ```CMEPS-interface/``` | Contains the CMEPS mediator. |\n| ```FV3/``` | Contains the FV3 atmosphere model component including the FV3 dynamical core, dynamics-to-physics driver, physics and IO. |\n| ```DATM/``` | Contains the Data Atmosphere model component. |\n| ```WW3/``` | Contains the community wave modeling framework WW3. |\n| ```MOM6-interface/``` | Contains the MOM6 ocean model component. |\n| ```CICE-interface/``` | Contains the CICE sea-ice model component including CICE6 and Icepack. |\n| ```stochastic_physics/``` | Contains the stochastic physics source code. |\n| ```cmake/``` | Contains compile option files on various platforms. |\n| ```modulefiles/``` | Contains module files on various platforms. |\n| ```tests/``` | Regression and unit testing framework scripts. |\n| ```build.sh``` | Script to build the model executable (also used by `tests/`). |\n\nE.g. 
use of `build.sh` to build the coupled model with `FV3_GFS_v15p2` as the CCPP suite:\n```\n$> CMAKE_FLAGS=""-DAPP=S2S"" CCPP_SUITES=""FV3_GFS_v15p2"" ./build.sh\n```\nThe build system is regularly tested with [Tier-1 and Tier-2 platforms](\nhttps://github.com/ufs-community/ufs-weather-model/wiki/Regression-Test-Policy-for-Weather-Model-Platforms-and-Compilers).\nConfigurations for other platforms that are available with UFS should be used with the understanding that they are not regularly\ntested, and users will have to adapt them to make them work.\n\n# Disclaimer\n\nThe United States Department of Commerce (DOC) GitHub project code is provided\non an ""as is"" basis and the user assumes responsibility for its use. DOC has\nrelinquished control of the information and no longer has responsibility to\nprotect the integrity, confidentiality, or availability of the information. Any\nclaims against the Department of Commerce stemming from the use of its GitHub\nproject will be governed by all applicable Federal law. Any reference to\nspecific commercial products, processes, or services by service mark,\ntrademark, manufacturer, or otherwise, does not constitute or imply their\nendorsement, recommendation or favoring by the Department of Commerce. The\nDepartment of Commerce seal and logo, or the seal and logo of a DOC bureau,\nshall not be used in any manner to imply endorsement of any commercial product\nor activity by DOC or the United States Government.\n'",,"2019/10/15, 14:38:48",1471,CUSTOM,122,839,"2023/10/19, 18:27:03",153,900,1727,462,6,22,3.9,0.8506069094304388,"2022/03/10, 15:41:14",Release/P8b,0,65,false,,false,false,,,https://github.com/ufs-community,https://ufscommunity.org/,,,,https://avatars.githubusercontent.com/u/49994907?v=4,,, WeatherBench 2,A framework for evaluating and comparing data-driven and traditional numerical weather forecasting models.,google-research,https://github.com/google-research/weatherbench2.git,github,,Meteorological Observation and Forecast,"2023/10/26, 01:17:23",160,0,160,true,Python,Google Research,google-research,Python,https://weatherbench2.readthedocs.io,"b'\n![logo](docs/source/_static/wb2-logo-wide.png)\n\n[![CI](https://github.com/google-research/weatherbench2/actions/workflows/ci-build.yml/badge.svg)](https://github.com/google-research/weatherbench2/actions/workflows/ci-build.yml)\n[![Lint](https://github.com/google-research/weatherbench2/actions/workflows/lint.yml/badge.svg)](https://github.com/google-research/weatherbench2/actions/workflows/lint.yml)\n[![Documentation Status](https://readthedocs.org/projects/weatherbench2/badge/?version=latest)](https://weatherbench2.readthedocs.io/en/latest/?badge=latest)\n\n \n\n\n# WeatherBench 2 - A benchmark for the next generation of data-driven global weather models\n\n[arXiv paper](https://arxiv.org/abs/2308.15560) \n[Google AI Blog post](http://ai.googleblog.com/2023/08/weatherbench-2-benchmark-for-next.html)\n\n## Why WeatherBench?\n\nWeatherBench 2 is a framework for evaluating and comparing data-driven and traditional numerical weather forecasting models. WeatherBench consists of:\n- Publicly available, cloud-optimized ground truth and baseline datasets. For a complete list, see [this page](https://weatherbench2.readthedocs.io/en/latest/data-guide.html). \n- Open-source evaluation code. See this [quick-start](https://weatherbench2.readthedocs.io/en/latest/evaluation.html) to explore the basic functionality or the [API docs](https://weatherbench2.readthedocs.io/en/latest/api.html) for more detail (a toy illustration of the kind of score involved follows this list). Since high-resolution forecast files can be large, the WeatherBench 2 code was written with scalability in mind. See the [command-line scripts](https://weatherbench2.readthedocs.io/en/latest/command-line-scripts.html) based on [Xarray-Beam](https://xarray-beam.readthedocs.io/en/latest/) and [this guide](https://weatherbench2.readthedocs.io/en/latest/beam-in-the-cloud.html) for running the scripts on GCP using [DataFlow](https://cloud.google.com/dataflow).\n- A [website](https://sites.research.google/weatherbench) displaying up-to-date scores of many of the state-of-the-art data-driven and physical approaches.\n- A [paper](https://arxiv.org/abs/2308.15560) describing the rationale behind the evaluation setup.\n
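\nAs a toy illustration of the kind of headline score such evaluation produces, here is a latitude-weighted RMSE written with plain xarray against synthetic data. This is not the WeatherBench 2 API (see the quick-start above for the real entry points), which additionally handles chunking, climatologies, regions, and large on-disk datasets:\n\n```python\nimport numpy as np\nimport xarray as xr\n\nrng = np.random.default_rng(0)\ncoords = {\'time\': np.arange(4), \'lat\': np.linspace(-89, 89, 8), \'lon\': np.linspace(0, 359, 16)}\ntruth = xr.DataArray(rng.normal(size=(4, 8, 16)), dims=(\'time\', \'lat\', \'lon\'), coords=coords)\nforecast = truth + rng.normal(scale=0.5, size=(4, 8, 16))\n\n# cosine-of-latitude weights approximate equal-area averaging on the sphere\nweights = np.cos(np.deg2rad(truth.lat))\nrmse = np.sqrt(((forecast - truth) ** 2).weighted(weights).mean((\'time\', \'lat\', \'lon\')))\nprint(float(rmse))\n```\n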
\nWeatherBench 2 has been built as an evolving tool for the entire community. For this reason, we welcome any feedback (ideally, submitted as [GitHub issues](https://github.com/google-research/weatherbench2/issues)) or contributions. If you would like your model to be part of WeatherBench, check out [this guide](https://weatherbench2.readthedocs.io/en/latest/submit.html).\n\n\n## Citation\n```\n@misc{rasp2023weatherbench,\n title={WeatherBench 2: A benchmark for the next generation of data-driven global weather models}, \n author={Stephan Rasp and Stephan Hoyer and Alexander Merose and Ian Langmore and Peter Battaglia and Tyler Russel and Alvaro Sanchez-Gonzalez and Vivian Yang and Rob Carver and Shreya Agrawal and Matthew Chantry and Zied Ben Bouallegue and Peter Dueben and Carla Bromberg and Jared Sisk and Luke Barrington and Aaron Bell and Fei Sha},\n year={2023},\n eprint={2308.15560},\n archivePrefix={arXiv},\n primaryClass={physics.ao-ph}\n}\n```\n\n## License\n\nThis is not an official Google product.\n\n```\nCopyright 2023 Google LLC\n\nLicensed under the Apache License, Version 2.0 (the ""License"");\nyou may not use this file except in compliance with the License.\nYou may obtain a copy of the License at\n\n https://www.apache.org/licenses/LICENSE-2.0\n\nUnless required by applicable law or agreed to in writing, software\ndistributed under the License is distributed on an ""AS IS"" BASIS,\nWITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\nSee the License for the specific language governing permissions and\nlimitations under the License.\n```'",",https://arxiv.org/abs/2308.15560,https://arxiv.org/abs/2308.15560","2023/02/24, 09:44:32",243,Apache-2.0,173,173,"2023/10/26, 01:17:29",7,63,68,68,0,3,0.0,0.4920634920634921,"2023/08/31, 08:33:16",v0.1.0,0,7,false,,false,false,,,https://github.com/google-research,https://research.google,Earth,,,https://avatars.githubusercontent.com/u/43830688?v=4,,, MeteoHist,A Streamlit app to create interactive temperature and precipitation graphs for places around the world.,yotkadata,https://github.com/yotkadata/meteo_hist.git,github,"climate-change,climate-data,data-visualization,global-warming",Meteorological Observation and Forecast,"2023/10/19, 10:44:25",29,0,29,true,Python,,,"Python,Dockerfile,CSS",https://yotka.org/meteo-hist/,"b'![social-media-image](https://github.com/yotkadata/meteo_hist/assets/7913590/0d4dc378-a6be-4d61-bec8-a664d729a4e2)\n\n# MeteoHist - Historical Meteo Graphs\n\n## A Streamlit app to create interactive temperature and precipitation graphs for places around the world.\n\nThis app allows you to create temperature and precipitation (rain, showers, and snowfall) graphs that compare the values of a given location in a given year to the values of a **reference period** at the same place.\n\nThe reference period **defaults to 1961-1990**, which [according](https://public.wmo.int/en/media/news/it%E2%80%99s-warmer-average-what-average) to the World Meteorological Organization (WMO) is currently the **best ""long-term climate change assessment""**. Other reference periods of 30 years each can be selected, too.\n\nThe **peaks** on the graph show how the displayed year\'s values deviate from the mean of the reference period. For temperature graphs, this means that the more numerous and the higher the red peaks, the more ""hotter days than usual"" have been observed. The blue peaks indicate days colder than the historical mean. Precipitation graphs show blue peaks on top (""more precipitation than normal"") and red peaks (""less than normal"").\n\nThe interactive plot is created using Python\'s **Plotly** library. A first version with static images used **Matplotlib**.\n\nBy default, mean values of the reference period are **smoothed** using [Locally Weighted Scatterplot Smoothing (LOWESS)](https://www.statsmodels.org/devel/generated/statsmodels.nonparametric.smoothers_lowess.lowess.html). The value can be adjusted under ""advanced settings"" in the app.
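\n\nFor intuition, the smoothing step can be reproduced with statsmodels on synthetic data (a toy illustration, not the app\'s actual code; `frac` plays the role of the app\'s smoothing setting):\n\n```python\nimport numpy as np\nfrom statsmodels.nonparametric.smoothers_lowess import lowess\n\n# synthetic daily reference-period means: a seasonal cycle plus noise\ndays = np.arange(365)\nrng = np.random.default_rng(1)\nclim = 10 + 8 * np.sin(2 * np.pi * (days - 100) / 365) + rng.normal(0, 1.5, 365)\n\n# a larger frac gives a smoother curve\nsmoothed = lowess(clim, days, frac=0.1, return_sorted=False)\n```\n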
\n\n### Interactive version\n\nIn the latest version (first published on 17 August 2023), the graphs are displayed interactively on larger screens. That means you can hover over the graph and get the exact values displayed for every day. You can also zoom in to see parts of the plot.\n\n### Data\n\nTo create the graph, data from the open-source weather API [**Open-Meteo**](https://open-meteo.com/en/docs/historical-weather-api) is used. According to them, ""the Historical Weather API is based on **reanalysis datasets** and uses a **combination of weather station, aircraft, buoy, radar, and satellite observations** to create a comprehensive record of past weather conditions. These datasets are able to **fill in gaps by using mathematical models** to estimate the values of various weather variables. As a result, reanalysis datasets are able to provide detailed historical weather information for **locations that may not have had weather stations nearby**, such as rural areas or the open ocean.""\n\nThe **Reanalysis Models** are based on [ERA5](https://cds.climate.copernicus.eu/cdsapp#!/dataset/reanalysis-era5-single-levels?tab=overview), [ERA5-Land](https://cds.climate.copernicus.eu/cdsapp#!/dataset/reanalysis-era5-land?tab=overview), and [CERRA](https://cds.climate.copernicus.eu/cdsapp#!/dataset/reanalysis-cerra-single-levels?tab=overview) from the [**European Union\'s Copernicus Programme**](https://www.copernicus.eu/en).\n\nTo get location data (lat/lon) for the input location, [**Openstreetmap\'s Nominatim**](https://nominatim.openstreetmap.org/) is used.\n\n### Metrics\n\nAvailable metrics are:\n\n- **Mean Temperature:** Mean daily air temperature at 2 meters above ground (24 hour aggregation from hourly values)\n- **Minimum Temperature:** Minimum daily air temperature at 2 meters above ground (24 hour aggregation from hourly values)\n- **Maximum Temperature:** Maximum daily air temperature at 2 meters above ground (24 hour aggregation from hourly values)\n- **Precipitation (Rolling Average):** 30-day rolling/moving average of the sum of daily precipitation (including rain, showers and snowfall)\n- **Precipitation (Cumulated):** Cumulated sum of daily precipitation (including rain, showers, and snowfall)\n\n### Settings\n\n- **Location to display:** Name of the location you want to display. 
A search on Openstreetmap\'s Nominatim is performed to find the location and get its latitude and longitude.\n- **Year to show:** Year to be compared to the reference period.\n- **Reference period:** The reference period is used to calculate the historical average of the daily values. The average is then used to compare the daily values of the selected year. 1961-1990 (default) is currently considered the best ""long-term climate change assessment"" by the World Meteorological Organization (WMO).\n- **Peaks to be annotated:** Number of maximum and minimum peaks to be annotated (default: 1). If peaks are too close together, the next highest/lowest peak is selected to avoid overlapping.\n- **Unit system:** Whether to use the Metric System (\xc2\xb0C, mm - default) or the Imperial System (\xc2\xb0F, in).\n- **Smoothing:** Degree of smoothing to apply to the historical data. 0 means no smoothing. The higher the value, the more smoothing is applied. Smoothing is done using LOWESS (Locally Weighted Scatterplot Smoothing).\n- **Peak method:** Method to determine the peaks. Either the difference to the historical mean (default) or the difference to the 05/95 percentiles. The percentile method focuses more on extreme events, while the mean method focuses more on the difference to the historical average.\n- **Emphasize peaks:** If checked, peaks that leave the gray area between the 5 and 95 percentiles will be highlighted more.\n\n### Examples\n\n
*(Gallery of example graphs.)*
\n\n### License\n\nThe app and the plots it produces are published under a [**Creative Commons license (CC by-sa-nc 4.0)**](https://creativecommons.org/licenses/by-nc-sa/4.0/deed.en).\n\n### Try it\n\nYou can try the app at [https://yotka.org/meteo-hist/](https://yotka.org/meteo-hist/)\n\nTo use the app on your machine, there are two simple ways:\n\n**1. Set up a Python environment, clone the repository, and run app.py using streamlit:**\n\n```bash\ngit clone https://github.com/yotkadata/meteo_hist/\ncd meteo_hist/\npip install -r requirements.txt\nstreamlit run app.py\n```\n\nThis should open a page in your default browser at http://localhost:8501 that shows the app.\n\n**2. Set up Docker and run it in a container (you can change the name and the tag, of course):**\n\n```bash\ndocker build -t meteo_hist:latest github.com/yotkadata/meteo_hist\ndocker run -d --name meteo_hist -p 8501:8501 meteo_hist:latest\n```\n\nThen open http://localhost:8501 or http://0.0.0.0:8501/ in your browser to see the app.\n\nTo save the generated files outside the Docker container, you can add a binding to a folder on your hard drive when you start the container (replace `/home/user/path/output/` with the path to the folder to be used):\n\n```bash\ndocker run -d --name meteo_hist -p 8501:8501 -v /home/user/path/output/:/app/output meteo_hist:latest\n```\n\n### Using the class without the web interface\n\nIt is also possible to use the Python class directly, without the web app. See the `notebooks` directory for examples.
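 A purely hypothetical sketch of what such direct usage can look like (the class name, import path, and arguments below are assumptions for illustration, not the package\'s documented API; the notebooks are authoritative):\n\n```python\n# all names below are assumed for illustration only\nfrom meteo_hist import MeteoHistInteractive  # assumed import path\n\ngraph = MeteoHistInteractive(\n    coords=(52.52, 13.41),          # assumed: lat/lon instead of a place name\n    year=2023,\n    reference_period=(1961, 1990),\n)\nfig, file_path = graph.create_plot()  # assumed method returning figure + file path\n```\n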
\n\n### Thanks\n\n- This app was inspired by [plots](https://twitter.com/dr_xeo/status/1656933695511511043) made by [Dr. Dominic Roy\xc3\xa9](https://github.com/dominicroye) - thanks for the idea and the exchange about it.\n'",,"2023/06/24, 13:39:06",123,CUSTOM,244,244,"2023/08/05, 19:08:19",1,0,3,3,81,0,0,0.0,,,0,1,false,,false,false,,,,,,,,,,, ecPoint-Calibrate,"A software that uses conditional verification tools to compare numerical weather prediction model outputs against point observations and, in this way, anticipate sub-grid variability and identify biases at grid scale.",ecmwf,https://github.com/ecmwf/ecpoint-calibrate.git,github,"python,meteorology,weather-forecast,ecmwf,calibration,metview,decision-trees",Meteorological Observation and Forecast,"2023/07/20, 09:48:52",21,0,3,true,JavaScript,European Centre for Medium-Range Weather Forecasts,ecmwf,"JavaScript,Python,MATLAB,CSS,Shell,GLSL,HTML",,"b'# ecPoint-Calibrate\n\n![Core unit tests](https://github.com/esowc/ecPoint-Calibrate/workflows/Core%20unit%20tests/badge.svg)\n![Release Core](https://github.com/esowc/ecPoint-Calibrate/workflows/Release%20Core/badge.svg)\n![Release Electron](https://github.com/esowc/ecPoint-Calibrate/workflows/Release%20Electron/badge.svg)\n[![codecov](https://codecov.io/gh/esowc/ecPoint-Calibrate/branch/master/graph/badge.svg?token=x1SGIykSpy)](https://codecov.io/gh/esowc/ecPoint-Calibrate)\n[![made-with-python](https://img.shields.io/badge/Made%20with-Python3.8-1f425f.svg)](https://www.python.org/)\n\necPoint-Calibrate is a software package that uses conditional verification tools to compare numerical weather prediction (NWP) model outputs against point observations and, in this way, anticipate sub-grid variability and identify biases at grid scale.\nIt provides a dynamic and user-friendly environment to post-process NWP model parameters (such as precipitation, wind, temperature, etc.) and produce probabilistic products for geographical locations (everywhere in the world, and up to medium-range forecasts).\n\nThe development of this project was sponsored by the ""ECMWF Summer of Weather Code (ESoWC)"" programme ([@esowc_ecmwf](https://twitter.com/esowc_ecmwf?lang=en)) run by [ECMWF](https://www.ecmwf.int).\n\n## Build with Docker\n\n```\ndocker build -f Dockerfile.core -t ecmwf/ecpoint-calibrate-core:dev .\n```\n\n## Deploy new versions of the Docker containers\n\n```\n./deploy.sh\n```\n\n## Create a production AppImage\n\n```\nyarn dist\n```\n\nThe AppImage won\'t work on modern machines without manually adding the `--no-sandbox` Electron\noption and re-packaging.\n\n### Install `appimagetool`\n\n```\nsudo wget https://github.com/AppImage/AppImageKit/releases/download/continuous/appimagetool-x86_64.AppImage -O /usr/local/bin/appimagetool\nsudo chmod +x /usr/local/bin/appimagetool\n```\n\n### Repackage the AppImage\n\n```\ncd pkg\n./ecPoint-Calibrate-0.30.0.AppImage --appimage-extract\n```\n\nThis will extract the image into the `squashfs-root` directory.\nOpen `squashfs-root/AppRun` and change the `exec` lines to have the `--no-sandbox` argument,\ne.g. `exec ""$BIN"" --no-sandbox`.\n\nThen repackage:\n```\nappimagetool squashfs-root ecPoint-Calibrate-0.30.0.AppImage\n```\n\n## Python Backend\n\nWe need `metview-batch` from conda-forge, so we unfortunately need to use `conda` together with `poetry`.\n\n### Creating the environment\n\n```\nconda create --name ecpoint_calibrate_env --file conda-linux-64.lock\nconda activate ecpoint_calibrate_env\npoetry install\n```\n\n### Activating the environment\n\n```\nconda activate ecpoint_calibrate_env\n```\n\n### Updating the environment\n\n#### Poetry (strongly preferred)\n\nInstalling a new package with poetry will update the poetry lockfile:\n\n```\npoetry add $DEP\n```\n\n#### Conda\n\nYou should very rarely need to add a new conda dep.\n\n```\nconda-lock -k explicit --conda mamba\nmamba update --file conda-linux-64.lock\npoetry update\n```\n\n\n### Run tests\n\nFirst activate the conda env, then run `pytest`.\n\n## Electron Frontend\n\nYou\'ll need Node v14.5.0.\n\n### Installing deps\n\n```\nyarn\n```\n\n### Run the app\n\n```\nyarn start\n```\n\n### Run tests\n\n```\nnpm run test\n```\n'",,"2018/05/11, 16:15:36",1993,GPL-3.0,27,1018,"2023/07/13, 07:43:18",29,27,182,4,104,16,0.2,0.07829534192269572,"2023/07/20, 09:58:01",v1.0.1,0,4,false,,false,false,,,https://github.com/ecmwf,www.ecmwf.int,"Shinfield Park, Reading, United Kingdom",,,https://avatars.githubusercontent.com/u/6368067?v=4,,, imdlib,Download and handle binary grided data from Indian Meterological department.,iamsaswata,https://github.com/iamsaswata/imdlib.git,github,"imd,python,gridded-data",Meteorological Observation and Forecast,"2023/10/11, 09:38:37",22,10,7,true,Python,,,"Python,Shell,Batchfile,Makefile",https://imdlib.readthedocs.io,"b'# imdlib\n[![Build Status](https://github.com/iamsaswata/imdlib/actions/workflows/pypi.yml/badge.svg)](https://github.com/iamsaswata/imdlib/actions/workflows/pypi.yml)\n![GitHub](https://img.shields.io/github/license/iamsaswata/imdlib)\n![PyPI](https://img.shields.io/pypi/v/imdlib)\n![Conda](https://img.shields.io/conda/v/iamsaswata/imdlib)\n[![Downloads](https://pepy.tech/badge/imdlib)](https://pepy.tech/project/imdlib)\n\n\nThis is a Python package to download and handle binary gridded data from the India Meteorological Department (IMD).\n\n## Installation\n\n> pip install imdlib\n \n or\n\n> conda install -c iamsaswata imdlib\n\nor \n\n> pip install git+https://github.com/iamsaswata/imdlib.git\n
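\n## Quick example\n\nTypical usage, sketched from the package documentation (hedged: argument names may differ between versions):\n\n```python\nimport imdlib as imd\n\n# download daily rainfall grids for 2018 into the working directory\ndata = imd.get_data(\'rain\', 2018, 2018, fn_format=\'yearwise\')\n\n# continue in xarray, or export a single grid cell to CSV\nds = data.get_xarray()\nprint(ds)\ndata.to_csv(\'rain.csv\', lat=24.75, lon=85.0)\n```\n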
\n\n## Documentation\n\n[Tutorial](https://saswatanandi.github.io/softwares/imdlib)\n[Tutorial](https://pratiman-91.github.io/blog.html)\n\n## Video Tutorial \n \n[![IMDLIB - Albedo Foundation](https://img.youtube.com/vi/uSIPPY5WRaM/0.jpg)](https://www.youtube.com/watch?v=uSIPPY5WRaM)\n\n## License\n\nimdlib is available under the [MIT](https://opensource.org/licenses/MIT) license.\n\n## Citation\n\nIf you are using imdlib and would like to cite it in an academic publication, we would certainly appreciate it. We recommend using the Zenodo DOI for this purpose:\n\nNandi, S., Patel, P., and Swain, S. (2022). IMDLIB: A python library for IMD gridded data. Zenodo. https://doi.org/10.5281/zenodo.7205414\n\n[![DOI](https://zenodo.org/badge/DOI/10.5281/zenodo.7205414.svg)](https://doi.org/10.5281/zenodo.7205414)\n\n## Publications using IMDLIB \n \nGarg, N., Negi, S., Nagar, R., Rao, S., & KR, S. (2023). Multivariate multi-step LSTM model for flood runoff prediction: a case study on the Godavari River Basin in India. *Journal of Water and Climate Change*, [[DOI]](https://doi.org/10.2166/wcc.2023.374) \n \nBora, S., & Hazarika, A. (2023). Rainfall time series forecasting using ARIMA model. In 2023 ATCON-1, (pp. 1-5). *IEEE*, [[DOI]](https://doi.org/10.1109/ICAIA57370.2023.10169493) \n \nPanja, A., Garai, S., Zade, S., Veldandi, A., Sahani, S., & Maiti, S. (2023). Climate Data Extraction for Social Science Research: A Step by Step Process. *Social Science Dimensions of Climate Resilient Agriculture*, [[ISBN]](https://www.researchgate.net/profile/Sanjit-Maiti/publication/372909405_Social_Science_Dimensions_of_Climate_Resilient_Agriculture/links/64cd3c4191fb036ba6c6d311/Social-Science-Dimensions-of-Climate-Resilient-Agriculture.pdf#page=57) (ISBN: 978-81-964762-1-2)\n \nChakra, S., Ganguly, A., Oza, H., Padhya, V., Pandey, A., & Deshpande, R. D. (2023). Multidecadal summer monsoon rainfall trend reversals in South Peninsular India: a new approach to examining long-term rainfall dataset. *Journal of Hydrology*, [[DOI]](https://doi.org/10.1016/j.jhydrol.2023.129975).\n \nSardar, P., and Samadder, S. R. (2023).\xc2\xa0 Long-term ecological vulnerability assessment of indian sundarban region under present and future climatic conditions under CMIP6 model. *Ecological Informatics*. [[DOI]](https://doi.org/10.1016/j.ecoinf.2023.102140) \n \nRoy, P. K., Ghosh, A., Basak, S. K., Mohinuddin, S., & Roy M. B. (2023).\xc2\xa0 Analysing the Role of AHP Model to Identify Flood Hazard Zonation in a Coastal Island, India. *Journal of the Indian Society of Remote Sensing Article*, 1-15. [[DOI]](https://doi.org/10.1007/s12524-023-01697-x) \n \nKundu, M., Zafor, A., & Maiti, R. (2023). Assessing the nature of potential groundwater zones through machine learning (ML) algorithm in tropical plateau region, West Bengal, India. *Acta Geophysica*, 1-16. [[DOI]](https://doi.org/10.1007/s11600-023-01042-3) \n \nVenkatesh, S., Kirubakaran, T., Ayaz, R. M., Umar, S. M., & Parimalarenganayaki, S. (2023). Non-parametric Approaches to Identify Rainfall Pattern in Semi-Arid Regions: Ranipet, Vellore, and Tirupathur Districts, Tamil Nadu, India. *In River Dynamics and Flood Hazards* (pp. 507-525). Springer, Singapore. [[DOI]](https://doi.org/10.1007/978-981-19-7100-6_28) \n\nSwain, S., Mishra, S. K., Pandey, A., & Dayal, D. (2022). Assessment of drought trends and variabilities over the agriculture-dominated Marathwada Region, India. 
*Environmental Monitoring and Assessment, 194(12)*, 1-18. \n[[DOI]](https://doi.org/10.1007/s10661-022-10532-8) \n \nSwain, S., Mishra, S. K., Pandey, A., Dayal, D., & Srivastava, P. K. (2022). Appraisal of historical trends in maximum and minimum temperature using multiple non-parametric techniques over the agriculture-dominated Narmada Basin, India. *Environmental Monitoring and Assessment*, 194(12), 1-23. [[DOI]](https://doi.org/10.1007/s10661-022-10534-6) \n'",",https://doi.org/10.5281/zenodo.7205414\n\n,https://doi.org/10.5281/zenodo.7205414,https://doi.org/10.2166/wcc.2023.374,https://doi.org/10.1109/ICAIA57370.2023.10169493,https://doi.org/10.1016/j.jhydrol.2023.129975,https://doi.org/10.1016/j.ecoinf.2023.102140,https://doi.org/10.1007/s12524-023-01697-x,https://doi.org/10.1007/s11600-023-01042-3,https://doi.org/10.1007/978-981-19-7100-6_28,https://doi.org/10.1007/s10661-022-10532-8,https://doi.org/10.1007/s10661-022-10534-6","2020/01/21, 23:42:54",1373,MIT,58,218,"2023/05/15, 08:10:37",2,6,23,6,163,0,0.0,0.07211538461538458,"2023/05/15, 08:54:44",0.1.17,0,2,false,,false,false,"CivicDataLab/IDS-DRR-Data-Sources,urbanemissions/urbanemissions.github.io,Maniktherana/GRD-to-CSV-querying,Spiruel/spen_farm,vidurmithal/imd_data,answerquest/IMD-grid-data-work,jeevakir/imd-grd-python,atreebangalore/ruralwaterLayers,craigdsouza/imdgrid,iamsaswata/imdlib",,,,,,,,,, goes2go,Download and process GOES-16 and GOES-17 data from NOAA's archive on AWS using Python.,blaylockbk,https://github.com/blaylockbk/goes2go.git,github,"goes,glm,xarray,goes-satellite,noaa-satellite,python,big-data-program,goes-16,goes-17,satellite-imagery,satellite,satellite-data,netcdf,download,open-data",Meteorological Observation and Forecast,"2023/08/23, 05:06:50",143,2,68,true,Python,,,Python,https://goes2go.readthedocs.io/,"b'\n\n![](https://github.com/blaylockbk/goes2go/blob/main/docs/_static/goes2go_logo_100dpi.png?raw=true)\n\n# Download and display GOES-East and GOES-West data\n\n\n\n[![](https://img.shields.io/pypi/v/goes2go)](https://pypi.python.org/pypi/goes2go/)\n[![Conda Version](https://img.shields.io/conda/vn/conda-forge/goes2go.svg)](https://anaconda.org/conda-forge/goes2go)\n[![DOI](https://zenodo.org/badge/296737878.svg)](https://zenodo.org/badge/latestdoi/296737878)\n\n![](https://img.shields.io/github/license/blaylockbk/goes2go)\n[![Code style: black](https://img.shields.io/badge/code%20style-black-000000.svg)](https://github.com/psf/black)\n[![Tests (Python)](https://github.com/blaylockbk/goes2go/actions/workflows/tests-python.yml/badge.svg)](https://github.com/blaylockbk/goes2go/actions/workflows/tests-python.yml)\n[![Documentation Status](https://readthedocs.org/projects/goes2go/badge/?version=latest)](https://goes2go.readthedocs.io/?badge=latest)\n[![Python](https://img.shields.io/pypi/pyversions/goes2go.svg)](https://pypi.org/project/goes2go/)\n[![Conda Recipe](https://img.shields.io/badge/recipe-goes2go-green.svg)](https://anaconda.org/conda-forge/goes2go)\n[![Conda Downloads](https://img.shields.io/conda/dn/conda-forge/goes2go.svg)](https://anaconda.org/conda-forge/goes2go)\n[![Conda Platforms](https://img.shields.io/conda/pn/conda-forge/goes2go.svg)](https://anaconda.org/conda-forge/goes2go)\n\n\n\n
\n\nGOES-East and GOES-West satellite data are made available on Amazon Web Services through [NOAA\'s Open Data Dissemination Program](https://www.noaa.gov/information-technology/open-data-dissemination). **GOES-2-go** is a Python package that makes it easy to find and download the files you want from [AWS](https://registry.opendata.aws/noaa-goes/) to your local computer, with some additional helpers to visualize and understand the data.\n\n
\n\n
\n\n# \xf0\x9f\x93\x94 [GOES-2-go Documentation](https://goes2go.readthedocs.io/)\n\n
\n\n
\n\n# Installation\n\nThe easiest way to install `goes2go` and its dependencies is with [Conda](https://docs.conda.io/projects/conda/en/latest/user-guide/tasks/manage-environments.html) from conda-forge.\n\n```\nconda install -c conda-forge goes2go\n```\n\nYou may also create the provided Conda environment, **[`environment.yml`](https://github.com/blaylockbk/goes2go/blob/main/environment.yml)**.\n\n```bash\n# Download environment file\nwget https://github.com/blaylockbk/goes2go/raw/main/environment.yml\n\n# Modify that file if you wish.\n\n# Create the environment\nconda env create -f environment.yml\n\n# Activate the environment\nconda activate goes2go\n```\n\nAlternatively, `goes2go` is published on PyPI and you can install it with pip, _but_ it requires some additional dependencies that you will have to install yourself:\n\n- Python 3.8+\n- [Cartopy](https://scitools.org.uk/cartopy/docs/latest/installing.html), which requires GEOS and Proj (if using `cartopy<0.22.0`).\n- MetPy\n- _Optional:_ [Carpenter Workshop](https://github.com/blaylockbk/Carpenter_Workshop)\n\nWhen those are installed within your environment, _then_ you can install GOES-2-go with pip.\n\n```bash\n# Latest published version\npip install goes2go\n\n# ~~ or ~~\n\n# Most recent changes\npip install git+https://github.com/blaylockbk/goes2go.git\n```\n\n# Capabilities\n\n- [Download GOES Data](#download-data)\n- [Create RGB composites](#rgb-recipes)\n- [Get the field of view](#field-of-view)\n\n```mermaid\n graph TD;\n aws16[(AWS\\nnoaa-goes16)] -.-> G\n aws17[(AWS\\nnoaa-goes17)] -.-> G\n aws18[(AWS\\nnoaa-goes18)] -.-> G\n G((. GOES 2-go .))\n G --- .latest\n G --- .nearesttime\n G --- .timerange\n .latest --> ds[(xarray.DataSet)]\n .nearesttime --> ds[(xarray.DataSet)]\n .timerange --> ds[(xarray.DataSet)]\n ds --- rgb[ds.rgb\\naccessor to make RGB composites]\n ds --- fov[ds.FOV\\naccessor to get field-of-view polygons]\n\n style G fill:#F8AF22,stroke:#259DD7,stroke-width:4px,color:#000000\n```\n\n## Download Data\n\nDownload GOES ABI or GLM NetCDF files to your local computer. Files can also be read with xarray.\n\nFirst, create a GOES object to specify the satellite, data product, and domain you are interested in. 
The example below downloads the Multi-Channel Cloud Moisture Imagery for CONUS.\n\n```python\nfrom goes2go import GOES\n\n# ABI Multi-Channel Cloud Moisture Imagery Product\nG = GOES(satellite=16, product=""ABI-L2-MCMIP"", domain=\'C\')\n\n# Geostationary Lightning Mapper\nG = GOES(satellite=17, product=""GLM-L2-LCFA"", domain=\'C\')\n\n# ABI Level 1b Data\nG = GOES(satellite=17, product=""ABI-L1b-Rad"", domain=\'F\')\n```\n\n> A complete listing of the available products can be found [here](https://github.com/blaylockbk/goes2go/blob/main/goes2go/product_table.txt).\n\nThere are methods to do the following:\n\n- List the available files for a time range\n- Download data to your local drive for a specified time range\n- Read the data into an xarray Dataset for a specific time\n\n```python\n # Produce a pandas DataFrame of the available files in a time range\n df = G.df(start=\'2022-07-04 01:00\', end=\'2022-07-04 01:30\')\n```\n\n```python\n # Download and read the data as an xarray Dataset nearest a specific time\n ds = G.nearesttime(\'2022-01-01\')\n```\n\n```python\n # Download and read the latest data as an xarray Dataset\n ds = G.latest()\n```\n\n```python\n # Download data for a specified time range\n G.timerange(start=\'2022-06-01 00:00\', end=\'2022-06-01 01:00\')\n\n # Download recent data for a specific interval\n G.timerange(recent=\'30min\')\n```\n\n## RGB Recipes\n\nThe `rgb` xarray accessor computes various RGB products from a GOES ABI ***ABI-L2-MCMIP*** (multi-channel cloud and moisture imagery products) `xarray.Dataset`. See the [demo](https://goes2go.readthedocs.io/en/latest/user_guide/notebooks/DEMO_rgb_recipes.html#) for more examples of RGB products.\n\n```python\nimport matplotlib.pyplot as plt\nds = GOES().latest()\nax = plt.subplot(projection=ds.rgb.crs)\nax.imshow(ds.rgb.TrueColor(), **ds.rgb.imshow_kwargs)\nax.coastlines()\n```\n\n![](./images/TrueColor.png)\n\n## Field of View\n\nThe `FOV` xarray accessor creates `shapely.Polygon` objects for the ABI and GLM field of view. See notebooks for [GLM](https://goes2go.readthedocs.io/en/latest/user_guide/notebooks/field-of-view_GLM.html) and [ABI](https://goes2go.readthedocs.io/en/latest/user_guide/notebooks/field-of-view_ABI.html) field of view.\n\n```python\nfrom goes2go.data import goes_latest\nG = goes_latest()\n# Get polygons of the full disk or ABI domain field of view.\nG.FOV.full_disk\nG.FOV.domain\n# Get Cartopy coordinate reference system\nG.FOV.crs\n```\n\nGOES-West is centered over 137 W and GOES-East is centered over 75 W. When GOES was being tested, it was in a ""central"" position, outlined in the dashed black line. Below is the ABI field of view for the full disk:\n![field of view image](./images/ABI_field-of-view.png)\n\nThe GLM field of view is slightly smaller and limited by a bounding box. Below is the approximated GLM field of view:\n![field of view image](./images/GLM_field-of-view.png)\n\n# How to Cite and Acknowledge\n\nIf GOES-2-go played an important role in your work, please [tell me about it](https://github.com/blaylockbk/goes2go/discussions/categories/show-and-tell)! Also, consider including a citation or acknowledgement in your article or product.\n\n**_Suggested Citation_**\n\n> Blaylock, B. K. (2023). GOES-2-go: Download and display GOES-East and GOES-West data (Version 2022.07.15) [Computer software]. 
GOES-West is centered over -137 W and GOES-East over -75 W. When GOES was being tested, it was in a "central" position, outlined by the dashed black line. Below is the ABI field of view for the full disk:

![field of view image](./images/ABI_field-of-view.png)

The GLM field of view is slightly smaller and limited by a bounding box. Below is the approximated GLM field of view:

![field of view image](./images/GLM_field-of-view.png)

# How to Cite and Acknowledge

If GOES-2-go played an important role in your work, please [tell me about it](https://github.com/blaylockbk/goes2go/discussions/categories/show-and-tell)! Also, consider including a citation or acknowledgement in your article or product.

**_Suggested Citation_**

> Blaylock, B. K. (2023). GOES-2-go: Download and display GOES-East and GOES-West data (Version 2022.07.15) [Computer software]. https://github.com/blaylockbk/goes2go

**_Suggested Acknowledgment_**

> A portion of this work used code generously provided by Brian Blaylock's GOES-2-go python package (https://github.com/blaylockbk/goes2go)

### What if I don't like GOES-2-go or Python?

As an alternative, you can use [rclone](https://rclone.org/) to download GOES files from AWS. I quite like rclone. Here is a [short rclone tutorial](https://github.com/blaylockbk/pyBKB_v3/blob/master/rclone_howto.md).
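If it is only GOES-2-go you want to skip (and Python is fine), the same public buckets can also be browsed anonymously with a generic S3 client such as `s3fs`. A minimal sketch; `s3fs` is not a goes2go dependency, and the prefix shown is just an example:

```python
# Minimal sketch: list and fetch GOES-16 files from AWS anonymously with s3fs.
import s3fs

fs = s3fs.S3FileSystem(anon=True)

# Browse a product/year/day-of-year/hour prefix in the public bucket.
files = fs.ls("noaa-goes16/ABI-L2-MCMIPC/2022/185/01/")
print(files[:3])

# Download a single file to the current directory.
# fs.get(files[0], "./example_scan.nc")
```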
I hope you find this makes GOES data easier to retrieve and display. Enjoy!

\- Brian Blaylock

👨🏻‍💻 [Contributing Guidelines](https://goes2go.readthedocs.io/en/latest/user_guide/contribute.html)  
💬 [GitHub Discussions](https://github.com/blaylockbk/goes2go/discussions)  
🚑 [GitHub Issues](https://github.com/blaylockbk/goes2go/issues)  
🌐 [Personal Webpage](http://home.chpc.utah.edu/~u0553130/Brian_Blaylock/home.html)

P.S. If you like GOES-2-go, check out my other python packages:

- [🏁 Herbie](https://github.com/blaylockbk/Herbie): download numerical weather model data
- [🌡️ SynopticPy](https://github.com/blaylockbk/SynopticPy): retrieve mesonet data from the Synoptic API
- [🌹 Pandas-rose](https://github.com/blaylockbk/pandas-rose): easily make wind roses from Pandas DataFrames

# Related Content

- [🙋🏻‍♂️ Brian's AWS GOES Web Downloader](https://home.chpc.utah.edu/~u0553130/Brian_Blaylock/cgi-bin/goes16_download.cgi)
- [📔 GOES-R Series Data Book](https://www.goes-r.gov/downloads/resources/documents/GOES-RSeriesDataBook.pdf)
- [🎠 Beginner's Guide](https://www.goes-r.gov/downloads/resources/documents/Beginners_Guide_to_GOES-R_Series_Data.pdf)
- [🖥 Rammb Slider GOES Viewer](https://rammb-slider.cira.colostate.edu)
- [💾 GOES on AWS](https://registry.opendata.aws/noaa-goes/)
- [🐍 Unidata Plot GOES Data](https://unidata.github.io/python-training/gallery/mapping_goes16_truecolor/)
- [🗺 Plotting tips from the geonetcast blog](https://geonetcast.wordpress.com/2019/08/02/plot-0-5-km-goes-r-full-disk-regions/)
- [🐍 `glmtools`](https://github.com/deeplycloudy/glmtools/)
- [🐍 `satpy`](https://github.com/pytroll/satpy)
- [🖥 CSPPGEO](http://cimss.ssec.wisc.edu/csppgeo/) | [Gridded GLM software package](https://download.ssec.wisc.edu/files/csppgeo/)
'",",https://zenodo.org/badge/latestdoi/296737878","2020/09/18, 21:59:47",1132,MIT,59,342,"2023/08/23, 05:06:54",15,11,38,20,64,0,0.3,0.012578616352201255,"2023/08/23, 05:36:45",2023.8.0,0,4,false,,false,false,"joffe97/worldwide_cloud_data,ShyftSolutions/exploring-wx-data",,,,,,,,,,
MetNet,A neural network that forecasts precipitation up to 8 hours into the future at a high spatial resolution of 1 km² and a temporal resolution of 2 minutes with a latency on the order of seconds.,openclimatefix,https://github.com/openclimatefix/metnet.git,github,pytorch,Meteorological Observation and Forecast,"2023/06/16, 11:34:54",162,2,94,true,Python,Open Climate Fix,openclimatefix,Python,,"b'# MetNet and MetNet-2

[![All Contributors](https://img.shields.io/badge/all_contributors-6-orange.svg?style=flat-square)](#contributors-)

PyTorch implementation of Google Research's MetNet for short-term weather forecasting (https://arxiv.org/abs/2003.12140), inspired by https://github.com/tcapelle/metnet_pytorch/tree/master/metnet_pytorch

MetNet-2 (https://arxiv.org/pdf/2111.07470.pdf) is a further extension of MetNet that takes in a larger context image to predict up to 12 hours ahead, and is also implemented in PyTorch here.

## Installation

Clone the repository, then run

```shell
pip install -r requirements.txt
pip install -e .
```
Alternatively, you can install a (usually older) released version with `pip install metnet`.

Please ensure that you're using Python version 3.9 or above.

## Data

While the exact training data used for both MetNet and MetNet-2 hasn't been released, the papers do go into some detail as to the inputs, which were GOES-16 and MRMS precipitation data, as well as the time period covered. We will be making those splits available, as well as a larger dataset that covers a longer time period, with [HuggingFace Datasets](https://huggingface.co/datasets/openclimatefix/goes-mrms)! Note: the dataset is not available yet; we are still processing the data!

```python
from datasets import load_dataset

dataset = load_dataset("openclimatefix/goes-mrms")
```

This uses the publicly available GOES-16 data and the MRMS archive to create a similar set of data to train and test on, with various other splits available as well.

## Pretrained Weights

Pretrained model weights for MetNet and MetNet-2 have not been publicly released, and there is some difficulty in reproducing their training. We release weights for both MetNet and MetNet-2 trained on cloud mask and satellite imagery data with the same parameters as detailed in the papers on HuggingFace Hub for [MetNet](https://huggingface.co/openclimatefix/metnet) and [MetNet-2](https://huggingface.co/openclimatefix/metnet-2). These weights can be downloaded and used as follows:

```python
from metnet import MetNet, MetNet2

model = MetNet().from_pretrained("openclimatefix/metnet")
model = MetNet2().from_pretrained("openclimatefix/metnet-2")
```

## Example Usage

MetNet can be used with:

```python
from metnet import MetNet
import torch
import torch.nn.functional as F

model = MetNet(
    hidden_dim=32,
    forecast_steps=24,
    input_channels=16,
    output_channels=12,
    sat_channels=12,
    input_size=32,
)
# MetNet expects the original HxW to be 4x the input size
x = torch.randn((2, 12, 16, 128, 128))
out = []
for lead_time in range(24):
    out.append(model(x, lead_time))
out = torch.stack(out, dim=1)
# MetNet creates predictions for the center 1/4th
y = torch.randn((2, 24, 12, 8, 8))
F.mse_loss(out, y).backward()
```

And MetNet-2 with:

```python
from metnet import MetNet2
import torch
import torch.nn.functional as F

model = MetNet2(
    forecast_steps=8,
    input_size=64,
    num_input_timesteps=6,
    upsampler_channels=128,
    lstm_channels=32,
    encoder_channels=64,
    center_crop_size=16,
)
# MetNet-2 expects the original HxW to be 4x the input size
x = torch.randn((2, 6, 12, 256, 256))
out = []
for lead_time in range(8):
    out.append(model(x, lead_time))
out = torch.stack(out, dim=1)
y = torch.rand((2, 8, 12, 64, 64))
F.mse_loss(out, y).backward()
```
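For inference, the same per-lead-time loop applies; a minimal sketch reusing the MetNet configuration from the example above, with gradient tracking disabled:

```python
# Minimal inference sketch: same MetNet configuration and input shapes as
# the training-style example above, but in eval mode under torch.no_grad().
import torch
from metnet import MetNet

model = MetNet(
    hidden_dim=32,
    forecast_steps=24,
    input_channels=16,
    output_channels=12,
    sat_channels=12,
    input_size=32,
)
model.eval()

x = torch.randn((1, 12, 16, 128, 128))  # (batch, time, channels, H, W)
with torch.no_grad():
    preds = torch.stack([model(x, t) for t in range(24)], dim=1)
print(preds.shape)  # (1, 24, 12, 8, 8): predictions for the center 1/4th
```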
## Contributors ✨

Thanks goes to these wonderful people ([emoji key](https://allcontributors.org/docs/en/emoji-key)):

- Jacob Bieker 💻
- Jack Kelly 💻
- Valter Fallenius 📓
- terigenbuaa 💬
- Kan.Dai 💬
- Sailesh Bechar 💬
This project follows the [all-contributors](https://github.com/all-contributors/all-contributors) specification. Contributions of any kind are welcome!
'",",https://arxiv.org/abs/2003.12140,https://arxiv.org/pdf/2111.07470.pdf","2021/09/02, 11:20:11",783,MIT,36,158,"2023/04/13, 09:50:25",26,20,25,12,195,6,0.1,0.2710280373831776,"2023/06/16, 11:35:16",v4.1.15,2,5,false,,false,false,"openclimatefix/pv-pseudo-experiments,openclimatefix/satflow",,https://github.com/openclimatefix,openclimatefix.org,London,,,https://avatars.githubusercontent.com/u/48357542?v=4,,,
jmastats,Download Weather Data from Japan Meteorological Agency Website.,uribo,https://github.com/uribo/jmastats.git,github,"climate-change,rpackage,weather",Meteorological Observation and Forecast,"2023/09/27, 12:45:32",19,0,17,true,R,,,R,https://uribo.github.io/jmastats/,"b'

# jmastats

[![CRAN status](https://www.r-pkg.org/badges/version/jmastats)](https://CRAN.R-project.org/package=jmastats)
[![CRANlogs downloads](https://cranlogs.r-pkg.org/badges/grand-total/jmastats)](https://cran.r-project.org/package=jmastats)
[![DOI](https://zenodo.org/badge/515892382.svg)](https://zenodo.org/badge/latestdoi/515892382)

jmastats is an R package for working with the various weather, earthquake, marine, and other data published on the [Japan Meteorological Agency](https://www.jma.go.jp/jma/index.html) website.

## Installation

The package can be installed from CRAN.

``` r
install.packages("jmastats")
```

To use the development version, run the following command instead.

``` r
install.packages(
  "jmastats",
  repos = c(uribo = "https://uribo.r-universe.dev", getOption("repos")))
```

## Features

- Reads historical weather data, as well as the various weather files published by the JMA, into formats that are easy to work with in R.
  - When geospatial information is available, the data are converted to sf objects of the appropriate type (point, line, etc.).
  - The `parse_unit()` function converts the units of weather data (SI units) into units objects.
- Retrieved weather data is cached on your computer, so data downloaded once can be referenced even without an internet connection. Using the cache also reduces the load on the JMA website at retrieval time.

## Usage

``` r
library(jmastats)
```

### Station weather data

Specify the data you want with the `item` argument, then give the `block_no` of the observation station and the target `year`, `month`, and `day` as needed.
``` r
# Data type: hourly (hourly values)
# Block number: 47646 ("Tsukuba")
# Target date: January 1, 2022
jma_collect(item = "hourly", block_no = 47646, year = 2022, month = 1, day = 1)
```

When the `block_no` is unknown, it can be looked up from the latitude and longitude of the target location with the `nearest_station()` function or from the weather station data described below.

``` r
# Return the nearest observation station and its distance
# from arbitrary coordinates
nearest_station(longitude = 140.112, latitude = 36.083)
```

The `read_jma_weather()` function reads the "historical weather data" csv files that users can export from the JMA website for any combination of location, items, and period.

``` r
# Run with the path to the downloaded csv file
read_jma_weather(system.file("dummy/dl_data.csv", package = "jmastats"))
```

### Typhoon data

There are functions for reading the typhoon best track data provided by [RSMC Tokyo](https://www.jma.go.jp/jma/jma-eng/jma-center/rsmc-hp-pub-eg/trackarchives.html) on the JMA website.

Pass the path of a best track file to the `read_rsmc_besttrack()` function.

``` r
read_rsmc_besttrack(path = system.file("dummy/bst.txt", package = "jmastats")) |> 
  dplyr::glimpse()
#> Rows: 2
#> Columns: 22
#> $ datetime                                                    1991-09-2…
#> $ indicator_002                                               "002", "00…
grade ""5"", ""5""\n#> $ `central_pressure(hPa)` 935, 994\n#> $ `maximum_sustained_wind_speed(knot)` 95, NA\n#> $ `_direction_of_the_longest_radius_of_50kt_winds_or_greater` ""3"", NA\n#> $ `_the_longest_radius_of_50kt_winds_or_greater(nm)` ""0180"", NA\n#> $ `_the_shortest_radius_of_50kt_winds_or_greater(nm)` ""0140"", NA\n#> $ `_direction_of_the_longest_radius_of_30kt_winds_or_greater` ""3"", NA\n#> $ `_the_longest_radius_of_30kt_winds_or_greater(nm)` ""0400"", NA\n#> $ `_the_shortest_radius_of_30kt_winds_or_greater(nm)` ""0260"", NA\n#> $ indicator_of_landfall_or_passage ""#"", NA\n#> $ international_number ""9119"", ""9\xe2\x80\xa6\n#> $ geometry POINT (129\xe2\x80\xa6\n#> $ indicator_66666 66666, 666\xe2\x80\xa6\n#> $ nrow 1, 1\n#> $ tropical_cyclone_number ""0045"", ""0\xe2\x80\xa6\n#> $ international_number_copy ""9119"", ""9\xe2\x80\xa6\n#> $ flag_last_data_line ""0"", ""0""\n#> $ DTM 6, 6\n#> $ storm_name MIRREILE, \xe2\x80\xa6\n#> $ last_update 1992-07-01\xe2\x80\xa6\n```\n\nURL\xe3\x82\x92\xe7\x9b\xb4\xe6\x8e\xa5\xe6\x8c\x87\xe5\xae\x9a\xe3\x81\x97\xe3\x81\x9f\xe8\xaa\xad\xe3\x81\xbf\xe8\xbe\xbc\xe3\x81\xbf\xe3\x82\x82\xe5\x8f\xaf\xe8\x83\xbd\xe3\x81\xa7\xe3\x81\x99\xe3\x80\x82\n\n``` r\nread_rsmc_besttrack(path = ""https://www.jma.go.jp/jma/jma-eng/jma-center/rsmc-hp-pub-eg/Besttracks/bst2023.txt"")\n```\n\n`read_rsmc_besttrack()`\xe9\x96\xa2\xe6\x95\xb0\xe3\x81\xae\xe8\xbf\x94\xe3\x82\x8a\xe5\x80\xa4\xe3\x81\xaf\xe5\x8f\xb0\xe9\xa2\xa8\xe3\x81\xae\xe7\xb5\x8c\xe8\xb7\xaf\xe3\x82\x92\xe8\xa8\x98\xe9\x8c\xb2\xe3\x81\x97\xe3\x81\x9f\xe3\x83\x9d\xe3\x82\xa4\xe3\x83\xb3\xe3\x83\x88\xe3\x83\x87\xe3\x83\xbc\xe3\x82\xbf(sf)\xe3\x81\xa8\xe3\x81\xaa\xe3\x81\xa3\xe3\x81\xa6\xe3\x81\x84\xe3\x81\xbe\xe3\x81\x99\xe3\x80\x82`track_combine()`\xe9\x96\xa2\xe6\x95\xb0\xe3\x81\xa7\xe3\x81\xaf\xe3\x80\x81`read_rsmc_besttrack()`\xe9\x96\xa2\xe6\x95\xb0\xe3\x81\xa7\xe8\xaa\xad\xe3\x81\xbf\xe8\xbe\xbc\xe3\x82\x93\xe3\x81\xa0\xe3\x83\x99\xe3\x82\xb9\xe3\x83\x88\xe3\x83\x88\xe3\x83\xa9\xe3\x83\x83\xe3\x82\xaf\xe3\x81\xae\xe3\x83\x87\xe3\x83\xbc\xe3\x82\xbf\xe3\x81\xab\xe3\x81\xa4\xe3\x81\x84\xe3\x81\xa6\xe3\x80\x81\xe5\x8f\xb0\xe9\xa2\xa8\xe3\x81\x94\xe3\x81\xa8\xe3\x81\xae\xe7\xb5\x8c\xe8\xb7\xaf\xe3\x82\x92\xe3\x83\xa9\xe3\x82\xa4\xe3\x83\xb3\xe3\x83\x87\xe3\x83\xbc\xe3\x82\xbf\xe3\x81\xa8\xe3\x81\x97\xe3\x81\xa6\xe3\x81\xbe\xe3\x81\xa8\xe3\x82\x81\xe3\x81\xbe\xe3\x81\x99\xe3\x80\x82\n\n``` r\nread_rsmc_besttrack(path = system.file(""dummy/bst.txt"", package = ""jmastats"")) |> \n track_combine(group_vars = ""storm_name"")\n```\n\n### \xe6\xb0\x97\xe8\xb1\xa1\xe5\xba\x81\xe9\x98\xb2\xe7\x81\xbd\xe6\x83\x85\xe5\xa0\xb1XML\xe3\x83\x95\xe3\x82\xa9\xe3\x83\xbc\xe3\x83\x9e\xe3\x83\x83\xe3\x83\x88\n\n``` r\nread_kishou_feed(""high"", type = ""regular"")\n\nread_kishou_feed(""low"", ""other"")\n```\n\n### \xe6\xbd\xae\xe6\xb1\x90\xe8\xa6\xb3\xe6\xb8\xac\xe8\xb3\x87\xe6\x96\x99\n\n``` r\n# URL\xe3\x82\x92\xe6\x8c\x87\xe5\xae\x9a\xe3\x81\x97\xe3\x81\xa6\xe3\x81\xae\xe8\xaa\xad\xe3\x81\xbf\xe8\xbe\xbc\xe3\x81\xbf\nread_tide_level(""https://www.data.jma.go.jp/gmd/kaiyou/data/db/tide/suisan/txt/2020/TK.txt"")\n# URL\xe3\x82\x92\xe6\xa7\x8b\xe6\x88\x90\xe3\x81\x99\xe3\x82\x8b\xe3\x83\x91\xe3\x83\xa9\xe3\x83\xa1\xe3\x83\xbc\xe3\x82\xbf\xe3\x82\x92\xe6\x8c\x87\xe5\xae\x9a\xe3\x81\x97\xe3\x81\x9f\xe8\xaa\xad\xe3\x81\xbf\xe8\xbe\xbc\xe3\x81\xbf\nread_tide_level(.year = 2020, .month = 2, .stn = ""TK"")\n```\n\n``` r\n# 
``` r
# Read a locally saved file (specify the path)
read_tide_level(system.file("dummy/tide.txt", package = "jmastats"))
#> New names:
#> New names:
#> New names:
#> New names:
#> • `hm` -> `hm...1`
#> • `hm` -> `hm...2`
#> • `hm` -> `hm...3`
#> • `hm` -> `hm...4`
#> # A tibble: 1 × 42
#>   hry_00 hry_01 hry_02 hry_03 hry_04 hry_05 hry_06 hry_07 hry_08 hry_09 hry_10
#>     [cm]   [cm]   [cm]   [cm]   [cm]   [cm]   [cm]   [cm]   [cm]   [cm]   [cm]
#> 1    128    127    122    115    107    102    101    106    117    132    146
#> # ℹ 31 more variables: hry_11 [cm], hry_12 [cm], hry_13 [cm], hry_14 [cm],
#> #   hry_15 [cm], hry_16 [cm], hry_17 [cm], hry_18 [cm], hry_19 [cm],
#> #   hry_20 [cm], hry_21 [cm], hry_22 [cm], hry_23 [cm], date , stn ,
#> #   low_tide_hm_obs1