Show simple item record

dc.contributor.author	Curtsinger, Charles M.	en_US
dc.date.accessioned	2017-06-08T09:39:52Z
dc.date.available	2017-06-08T09:39:52Z
dc.date.issued	2016	en_US
dc.identifier.other	HPU4160746	en_US
dc.identifier.uri	https://lib.hpu.edu.vn/handle/123456789/24904
dc.description.abstract	Performance is once again a first-class concern. Developers can no longer wait for the next generation of processors to automatically "optimize" their software. Unfortunately, existing techniques for performance analysis and debugging cannot cope with complex modern hardware, concurrent software, or latency-sensitive software services. While processor speeds have remained constant, increasing transistor counts have allowed architects to increase processor complexity. This complexity often improves performance, but the benefits can be brittle: small changes to a program’s code, inputs, or execution environment can dramatically change performance, resulting in unpredictable performance in deployed software and complicating performance evaluation and debugging. Developers seeking large performance gains must resort to manual performance tuning. Software profilers are meant to guide developers to important code, but conventional profilers do not produce actionable information for concurrent applications. These profilers report where a program spends its time, not where optimizations will yield performance improvements. Furthermore, latency is a critical measure of performance for software services and interactive applications, but conventional profilers measure only throughput. Many performance issues appear only when a system is under high load, but generating this load in development is often impossible. Developers need to identify and mitigate scalability issues before deploying software, but existing tools offer developers little or no assistance. In this dissertation, I introduce an empirically-driven approach to performance analysis and debugging. I present three systems for performance analysis and debugging. Stabilizer mitigates the performance variability that is inherent in modern processors, enabling both predictable performance in deployment and statistically sound performance evaluation. Coz conducts performance experiments using virtual speedups to create the effect of an optimization in a running application. This approach accurately predicts the effect of hypothetical optimizations, guiding developers to code where optimizations will have the largest effect (see the usage sketch after this record). Amp allows developers to evaluate system scalability using load amplification to create the effect of high load in a testing environment. In combination, Amp and Coz allow developers to pinpoint code where manual optimizations will improve the scalability of their software.	en_US
dc.format.extent	111 p.	en_US
dc.format.mimetype	application/pdf	en_US
dc.language.iso	en	en_US
dc.publisher	University of Massachusetts Amherst	en_US
dc.subject	OS	en_US
dc.subject	Networks	en_US
dc.subject	Programming Languages and Compilers	en_US
dc.subject	Analysis	en_US
dc.subject	Debugging	en_US
dc.title	Effective Performance Analysis and Debugging	en_US
dc.type	Doctoral Dissertation	en_US
dc.size	1.02Mb	en_US
dc.department	Technology	en_US
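
The abstract above describes how Coz uses virtual speedups to predict the payoff of optimizations. As context, here is a minimal sketch of how a developer annotates an application for Coz using the progress-point macro from coz.h; the process_request workload is hypothetical and stands in for real application code.

    /* Minimal sketch: marking a throughput progress point for Coz.
     * Build with debug info:  gcc -g example.c -o example
     * Profile:                coz run --- ./example
     */
    #include <coz.h>
    #include <stdio.h>

    /* Hypothetical unit of work standing in for a real request handler. */
    static void process_request(long i) {
        volatile long sum = 0;
        for (long j = 0; j < 100000; j++)
            sum += j ^ i;
    }

    int main(void) {
        for (long i = 0; i < 100000; i++) {
            process_request(i);
            COZ_PROGRESS;  /* each visit counts one completed unit of work */
        }
        puts("done");
        return 0;
    }

Coz creates a virtual speedup of a chosen line by briefly pausing all other threads each time that line executes; the resulting change in the rate at which the progress point is reached predicts the benefit of actually optimizing that line.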

