2017 IS-GEO Summer Institute
July 24-28, 2017, Austin, TX
The goal of the IS-GEO Summer Institute is to engage intelligent systems and geosciences researchers in collaborations through a series of introductory tutorials and follow-up hands-on sessions. Each tutorial will provide an introductory overview of concepts in a particular research area, which will then be applied to a concrete problem in the follow-up hands-on session.
Participants will be guided by the instructors to formulate projects that lead to innovative and publishable outcomes. Evenings will be devoted to brainstorming projects of interest to participants, and drafting papers based on the work done during the day.
The 2017 Summer Institute will be a pilot with a few selected short courses on key topics over a period of a week. In this pilot, we expect to learn from the format and the formulation of the interactions between IS and GEO researchers. In 2018 and beyond, the Summer Institute will be of longer duration and the courses will be proposed by the community.
Schedule: IS-GEO Summer Institute – July 24-28, 2017
- Texas Advanced Computing Center, The University of Texas at Austin
- Jackson School of Geosciences, The University of Texas at Austin
- The University of Texas at El Paso
- Intelligent Systems Institute, University of Southern California
- University of Colorado Boulder
- Scott Peckham (Scott.Peckham@colorado.edu), GEO, Research Scientist
- University of Minnesota
- University of Kansas
- U.S. Geological Survey
- The University of Texas at Dallas
- Azar Ghahari (Azar.Ghahari@utdallas.edu), IS, student
- Oberlin College
- Joe Martin (email@example.com), GEO and IS, student
- Virginia Tech
- Thaovy Nguyen, GEO, student
- Paper co-authored by participants (lead: Yolanda): We are targeting a paper (for Computers & Geosciences or for PLOS ONE) with these contributions:
- the CS tools improve productivity: we show this because, with the tools we set up throughout the week, we can run additional analyses with other data (e.g., another lake) and other models much more rapidly than doing those analyses from scratch. Our baseline will be estimates of the time needed to do the work without any tools, as is normally done in the geosciences
- science results: based on analyses of two (or more?) watersheds and/or aquifers [currently: Rio Grande and Barton Springs]
- a use case and framework for efficient modeling: the data come from data repositories that we access, pre-process, and use with models, and we deliver results together with provenance and a reusable workflow
- the tutorials leave behind educational materials for IS-GEO and beyond
- Scenario selection: this scenario will drive the different hands-on tutorial practice sessions
- Model: MODFLOW; we will use the FloPy library, which runs MODFLOW from Python
- Use cases:
- Case 1 – (easiest) – Barton Springs, Austin, Texas – MODFLOW model converted and running; all data files in place
- Case 2 – (moderate) – Rio Grande Basin (Deana’s work) – GRACE data analysis by Anuj; a groundwater model is possibly available; most data available
- Case 3 – ~30 Groundwater Availability Models for the state of Texas – conversions in progress on TACC’s Wrangler… Most data easy to access. Can chain models.
- Integrating multiple models: not easy, but we can explore integrating multiple models in MODFLOW and PARFLOW
- Hands-on session planning
- Start the workshop with a discussion of the paper, so everyone can work towards the planned contributions
- Data integration/Karma: plan to use real-time (or near-real-time) data feeds of spring flow from Barton Springs, comparing them with modeled spring flows from MODFLOW, spatial data for Austin, and stakeholder preference information/settings
- Standard variable names: ongoing work on mapping variables from MODFLOW and PARFLOW to the Geoscience Standard Names (GSN)
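One lightweight way to express such a mapping is a crosswalk table keyed by model and variable. The GSN strings below are illustrative guesses for the sketch, not the official mappings being developed.

```python
# Sketch of a variable-name crosswalk from model-internal names to
# Geoscience Standard Names (GSN). Standard-name strings are hypothetical.
MODFLOW_TO_GSN = {
    "HEAD": "groundwater__hydraulic_head",
    "RCH": "groundwater__recharge_volume_flux",
}
PARFLOW_TO_GSN = {
    "pressure": "groundwater__pressure_head",
}

def to_standard_name(model, var):
    """Return the standard name for a model-specific variable, or None."""
    table = {"MODFLOW": MODFLOW_TO_GSN, "PARFLOW": PARFLOW_TO_GSN}[model]
    return table.get(var)

print(to_standard_name("MODFLOW", "HEAD"))
```

A shared table like this is what lets outputs from different models be compared or chained under one vocabulary.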
- Water resources modeling: perhaps use UCODE to study model uncertainty. FloPy tutorial using Python/Jupyter notebooks with the Houston Area Groundwater Model. MODFLOW with a new web user interface
- Decision support and dashboards: Ongoing work on demo with Barton Springs data, ETL features of the Watermark dynamic data dashboard
- Machine learning: Yolanda Gil and Vipin Kumar
- Machine learning to extract remote sensing data: a viewer is available at http://umnlcc.cs.umn.edu/WaterMonitorPublic-dev/, where users can explore information and look at graphs showing surface water extent and various other metrics. There is also a MATLAB application for viewing how GRACE readings have changed over time. We could link GRACE data with the surface water analyses to roughly compare the correlation between groundwater and surface water conditions over time.
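A first rough look at that correlation could compute a Pearson coefficient over co-registered monthly series. The GRACE anomalies and surface-water extents below are invented placeholder values used only to illustrate the calculation.

```python
# Rough correlation between (hypothetical) GRACE groundwater-storage anomalies
# and surface-water extent over the same months.
def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

grace_anomaly = [-2.1, -1.5, -0.3, 0.8, 1.9, 2.4]            # cm equivalent water height
surface_extent = [110.0, 118.0, 125.0, 140.0, 155.0, 161.0]  # km^2

r = pearson(grace_anomaly, surface_extent)
print(f"correlation = {r:.3f}")
```

The real analysis would of course use the viewer's extracted surface-water extents and the GRACE time series in place of these toy numbers.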
- Provenance: describe provenance for a run of MODFLOW, from data sources all the way to visualizations, using a case study area near Elephant Butte. Possible integration with remote sensing data from GRACE
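A minimal provenance record for one run, loosely in the spirit of W3C PROV (entities used, the activity, entities generated), might look like the sketch below; all identifiers and sources are hypothetical placeholders.

```python
# Sketch of a provenance record for a single MODFLOW run, tracing outputs
# back to inputs. IDs are illustrative, not from the actual case study.
run_provenance = {
    "activity": {
        "id": "run:modflow-2017-07-26",
        "software": "MODFLOW (via FloPy)",
    },
    "used": [
        {"id": "data:recharge-grid", "source": "input repository"},
        {"id": "data:well-pumping-rates", "source": "input repository"},
    ],
    "generated": [
        {"id": "data:simulated-heads",
         "derivedFrom": ["data:recharge-grid", "data:well-pumping-rates"]},
        {"id": "viz:head-contour-map",
         "derivedFrom": ["data:simulated-heads"]},
    ],
}

# Each visualization can be traced back through its chain of derivations:
print(run_provenance["generated"][1]["derivedFrom"])
```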
- Workflows: use Wings to create workflows that include modeling, data pre-processing, and visualization steps. Initial workflows will be prepared based on the FloPy notebook. Compare different runs; publish workflows and provenance.
- Prepare a final integrated presentation that includes all the figures for the paper and the main bullets to be covered in each section