Computer models of water distribution networks are commonly used to simulate large systems under complex dynamic scenarios. These models normally use so-called demand-driven solvers, which determine the nodal pressures and pipe flow rates that correspond to specified nodal demands. This paper investigates the use of data-parallel high-performance computing (HPC) techniques to accelerate demand-driven hydraulic solvers. The sequential code of the solver implemented in the CWSNet library is analysed to identify which computational blocks contribute most to the total computation time of a hydraulic simulation. The results show that, contrary to popular belief, it is the pipe head loss computation, not the linear solver, that has the greatest impact on simulation time. Two data-parallel HPC techniques, single instruction multiple data (SIMD) operations and general-purpose computation on graphics processing units (GPGPU), are used to accelerate the pipe head loss computation and the linear algebra operations in new implementations of the CWSNet hydraulic solver. Results obtained on a range of network models show that these techniques can significantly improve the performance of a demand-driven hydraulic solver.
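
The data-parallel idea can be illustrated with a minimal sketch (not CWSNet's actual code): a per-pipe head loss formula such as Hazen-Williams is a pure element-wise computation, so it can be evaluated over whole arrays of pipes at once, which vectorising compilers and array libraries map onto SIMD units. The pipe data below are hypothetical values chosen for illustration.

```python
import numpy as np

def headloss_scalar(L, D, C, Q):
    """Hazen-Williams head loss (SI units), one pipe at a time."""
    h = []
    for l, d, c, q in zip(L, D, C, Q):
        h.append(10.67 * l * abs(q) ** 1.852 / (c ** 1.852 * d ** 4.871))
    return np.array(h)

def headloss_vectorized(L, D, C, Q):
    # Same formula applied to whole arrays; NumPy dispatches the
    # element-wise operations to SIMD-friendly inner loops.
    return 10.67 * L * np.abs(Q) ** 1.852 / (C ** 1.852 * D ** 4.871)

# Hypothetical pipe data: length [m], diameter [m],
# Hazen-Williams coefficient, flow rate [m^3/s]
L = np.array([100.0, 250.0, 80.0])
D = np.array([0.30, 0.20, 0.15])
C = np.array([130.0, 110.0, 100.0])
Q = np.array([0.05, -0.02, 0.01])

assert np.allclose(headloss_scalar(L, D, C, Q),
                   headloss_vectorized(L, D, C, Q))
```

In a solver, the vectorised form replaces the per-pipe loop executed at every iteration of the hydraulic solution, which is where the abstract reports the dominant cost to lie.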