Do-a-thon: running and testing open-source models

model-comparison
validation
aarhus-2019

#1

In past editions, we have noticed that many people (myself included) develop their own model but don't take the time to test similar, already existing open-source models. This leads to needless duplication of work, and we miss the opportunity to join forces to improve the models.

The fact that many model developers will be present at the workshop is an opportunity to exchange ideas and learn about each other's models, with the aim of working together more closely.

In this do-a-thon, I would like to give model developers the opportunity to have the audience download, install, run, and display the results of their model within a limited time slot (e.g. half an hour). The challenge is the timing: a reasonably sized input dataset should be available for testing purposes, so that results can be obtained with little computation time.

A second challenge is licensing. Some models require, for example, a GAMS licence that not everyone has on their own computer.

For this to be possible, we would need at least 4 model developers willing to play the game. I can do it with Dispa-SET, but I am more than happy to give up my slot if enough others are willing to take part.

All comments regarding the opportunity and/or the best way to organize this are very welcome!


#2

@sylvain Here are a couple of tag links for related postings:

There would appear to be some overlap with the beyond factsheets do‑a‑thon proposed by @sar_b, also for Aarhus 2019.

In addition, people elsewhere were pushing for do‑a‑thons to have specified concrete outputs.


#3

Hi Sylvain, I absolutely love this idea :grinning: !!
Personally, I think we should do more exercises of this type… as the "soft" barriers to entry may be just as important as the physical (closed models) and legal (no licence) barriers.
So if we can get more people to try out other models… and use their questions to improve the "model introduction pages" for different models, that would be a great benefit.
Taking this further, I think it could also be useful to collect short descriptions (when to use a model; its main strengths and weaknesses; links to introduction videos / tutorials / example analyses / Jupyter notebooks, etc.) next to the currently rather "dry" :sleeping: list on our "model wiki" page… What do you think? (or others? … comments welcome!!)


#4

Here is the pad for writing any comments on the model testing:

https://pad.colibris-outilslibres.org/p/model_testing