
COMP0210: Research Computing with C++


Summary

This week we’ve covered a lot. You’ve been introduced to the big concepts of parallel programming with OpenMP: the parallel construct, parallelising and optimising loops, reductions, ways of controlling data sharing (private and shared variables) and thread execution, along with strong and weak scaling. You should now have a good grasp of how to parallelise your own codes (including the second assignment) and of the dangers that come with parallel programming.
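As a quick refresher, here is a minimal sketch (my own illustration, not taken from the course notes) that combines several of these ideas in one small program: a parallel for loop with an explicit schedule, a reduction, and a per-thread private variable.

```cpp
// Minimal recap sketch: parallel loop + schedule + reduction + private data.
// Compile with OpenMP enabled, e.g. g++ -fopenmp recap.cpp -o recap
#include <omp.h>

#include <cstdio>
#include <vector>

int main()
{
    const std::size_t n = 1'000'000;
    std::vector<double> x(n, 0.5);

    double total = 0.0;

    std::printf("Using up to %d threads\n", omp_get_max_threads());

    // The iterations are divided statically across the team of threads.
    // `x` and `n` are shared; `sq` is declared inside the loop body, so each
    // thread gets its own copy; `total` is combined across threads by the
    // reduction clause at the end of the loop.
    #pragma omp parallel for schedule(static) reduction(+ : total)
    for (std::size_t i = 0; i < n; ++i)
    {
        const double sq = x[i] * x[i];
        total += sq;
    }

    std::printf("Sum of squares = %f\n", total);
    return 0;
}
```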

There are many features of OpenMP, and of parallel programming more generally, that we haven’t covered but that you might find useful or interesting in your career, so I list them here for reference:

  • Detailed concurrent programming with mutexes, locks, atomics, etc.
  • target: OpenMP’s way of offloading computation to accelerators such as GPUs
  • task: task-based parallelism (see the sketch after this list)
  • Parallel profiling
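To give a flavour of one of these, task lets you express irregular, recursive parallelism that doesn’t map neatly onto a parallel loop. The sketch below is an illustration of the classic recursive Fibonacci example, not course material; in real code you would add a sequential cutoff for small inputs, since spawning a task per call is expensive.

```cpp
// Illustrative sketch of OpenMP task-based parallelism (not covered this week).
// Compile with e.g. g++ -fopenmp fib_tasks.cpp -o fib_tasks
#include <cstdio>

long fib(long n)
{
    if (n < 2)
        return n;

    long a = 0, b = 0;

    // Each recursive call becomes a task that any thread in the team may run.
    // `a` and `b` must be shared so the child tasks can write back to them.
    #pragma omp task shared(a)
    a = fib(n - 1);

    #pragma omp task shared(b)
    b = fib(n - 2);

    // Wait for both child tasks to finish before combining their results.
    #pragma omp taskwait
    return a + b;
}

int main()
{
    long result = 0;

    #pragma omp parallel
    {
        // One thread creates the root task; the whole team works through
        // the tasks it spawns.
        #pragma omp single
        result = fib(30);
    }

    std::printf("fib(30) = %ld\n", result);
    return 0;
}
```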