We discuss the use of current shared-memory systems for discrete-particle modeling of heterogeneous mesoscopic complex fluids in irregular geometries. This is demonstrated by way of mesoscopic blood flow in bifurcating capillary vessels. The plasma is represented by fluid particles, while the other blood constituents are built of "solid" particles interacting via harmonic forces. The particle code was tested on 4 and 8 processors of SGI/Origin 3800 (R14000/500), IBM Regatta (Power4/1300), and SGI Altix 3000 (Itanium®2/1300) systems, and on a 2-processor AMD Opteron 240 motherboard. The tests were performed on a single system comprising two million fluid and "solid" particles. We show that irregular boundary conditions and the heterogeneity of the particle fluid inhibit efficient implementation of the model on superscalar processors. We improve the efficiency almost threefold by reducing the effect of computational imbalance with a simple load-balancing scheme. Additionally, to employ MPI on shared-memory machines, we have constructed a simple middleware library that simplifies parallelization. The efficiency of the particle code depends critically on memory latency. Therefore, the latest architectures with the fastest CPU-memory interface, such as AMD Opteron and Power4, represent the most promising platforms for modeling complex mesoscopic systems with fluid particles. As an example of the application of small, shared-memory clusters to very complex problems, we present results of modeling red blood cell clotting in capillary blood flow due to fibrin aggregation.
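The abstract states that non-plasma constituents (e.g., red blood cells, fibrin) are built of "solid" particles interacting via harmonic forces. As a hedged illustration only, the sketch below shows the standard harmonic (spring) pair force F = -k (r - r0) along the bond; the function name, spring constant k, and equilibrium length r0 are hypothetical placeholders, not values or an API from the paper.

```python
import math

def harmonic_force(p1, p2, k=1.0, r0=0.5):
    """Force on particle p1 from a harmonic bond to particle p2.

    p1, p2 are 3-D position tuples; k and r0 are illustrative
    spring constant and equilibrium bond length (not from the paper).
    """
    dx = [a - b for a, b in zip(p1, p2)]          # bond vector p2 -> p1
    r = math.sqrt(sum(d * d for d in dx))         # current bond length
    scale = -k * (r - r0) / r                     # central force magnitude / r
    return tuple(scale * d for d in dx)           # force vector on p1
```

For example, two particles separated by r = 1.0 with r0 = 0.5 attract: the force on the first points back toward the second with magnitude k(r - r0) = 0.5.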
Original language: English (US)
Number of pages: 12
State: Published - Jan 15 2005
Bibliographical note / Funding information:
Support for this work has come from the Polish Committee for Scientific Research (KBN) project 4T11F02022 and the Complex Fluids Program of DOE.
- Blood flow dynamics
- Fluid particle model
- Parallel algorithm
- Shared memory systems