Information about the toxicity of nanoparticles is important in determining how nanoparticles will be regulated. In the U.S., the burden of collecting this information and conducting risk assessment is placed on regulatory agencies without the budgetary means to carry out this mandate. In this paper, we analyze the impact of testing costs on society's ability to gather information about nanoparticle toxicity and whether such costs can reasonably be borne by an emerging industry. We show that, for the United States, the cost of testing existing nanoparticles ranges from $249 million under optimistic assumptions about nanoparticle hazards (i.e., they are primarily safe and mainly require simpler screening assays) to $1.18 billion under a more comprehensive precautionary approach (i.e., all nanomaterials require long-term in vivo testing). At midlevel estimates of total corporate R&D spending, and assuming plausible levels of spending on hazard testing, the time required to complete testing is likely to be very long (34-53 years) if all existing nanomaterials are to be thoroughly tested. These delays will only increase as new nanomaterials are introduced. The delays are considerably shorter under less-stringent but still risk-averse testing assumptions. Our results support a tiered risk-assessment strategy similar to the EU's REACH legislation for regulating toxic chemicals.
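The timeline reasoning above can be sketched as a back-of-envelope calculation: total testing cost divided by the annual amount spent on hazard testing. The two cost figures below are taken from the abstract; the annual testing budget is a purely hypothetical illustrative value, not a figure from the paper (the paper's 34-53 year range reflects its own spending estimates).

```python
# Illustrative timeline sketch: years to complete testing at a fixed
# annual spending rate. Cost figures are from the abstract; the annual
# budget is a hypothetical assumption for illustration only.

def years_to_complete(total_cost_usd: float, annual_budget_usd: float) -> float:
    """Years needed to fund testing at a constant annual spending rate."""
    return total_cost_usd / annual_budget_usd

optimistic_cost = 249e6       # screening-assay scenario (abstract figure)
precautionary_cost = 1.18e9   # long-term in vivo scenario (abstract figure)
annual_budget = 30e6          # hypothetical hazard-testing spend per year

print(round(years_to_complete(optimistic_cost, annual_budget), 1))     # 8.3
print(round(years_to_complete(precautionary_cost, annual_budget), 1))  # 39.3
```

Even under this simple linear model, the precautionary scenario implies decades of testing at plausible spending levels, which is the dynamic driving the paper's argument for a tiered approach.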