TPC-W Performance Evaluation Based on the ServerScope Platform
Abstract
As e-commerce and Internet technologies have matured, e-commerce websites have come to exert a great influence on people's lives, and users pay ever more attention to the quality of service these sites provide. The performance of the back-end web servers that support an entire e-commerce site has naturally become the focus of this concern. As a result, many methods have been developed to optimize web server performance, and many methods for evaluating and analyzing it have been proposed.
In system performance evaluation, SPEC and TPC are two well-known benchmarking organizations. The benchmarks they publish are widely used, and nearly all prominent server and software vendors are members. These benchmarks have become de facto standards, in large part because they serve as the basis on which vendors compare performance with one another. The TPC series is the prevailing family of commercial benchmarks, and the major server and database vendors have all sent representatives to join the organization; SPEC emphasizes developing benchmarks from real applications so as to reflect actual workloads more faithfully. In other words, these benchmarks are vendor-oriented: vendors use them to pursue ever-higher performance figures. Those figures, however, are far removed from users' actual circumstances and offer users little pertinent guidance.
The performance evaluation system we propose, ServerScope, aims to establish a user-oriented evaluation methodology and system that targets the performance users actually obtain. It can monitor, evaluate, and optimize performance for different applications and different hardware and software configurations, and give performance recommendations for the systems users run. In addition, it can extract and build theoretical models from the measured behavior of a system in order to predict that system's performance.
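ServerScope's actual model is not specified here; as a rough illustration, the simplest queueing model one might extract from such measurements is an M/M/1 approximation, which predicts a server's mean response time from two measured rates. The function below is a minimal sketch under that assumption, not part of ServerScope.

    # A minimal sketch, not ServerScope's actual model: predict mean web
    # server response time with an M/M/1 queueing approximation.
    def predict_response_time(arrival_rate, service_rate):
        """Mean response time R = 1 / (mu - lambda) for an M/M/1 queue.

        arrival_rate: measured request rate, requests per second (lambda)
        service_rate: measured service capacity, requests per second (mu)
        """
        if arrival_rate >= service_rate:
            raise ValueError("saturated: arrival rate >= service rate")
        return 1.0 / (service_rate - arrival_rate)

    # Example: a server with 200 req/s capacity under a 150 req/s load is
    # predicted to answer in 1 / (200 - 150) = 0.02 s on average.
    print(predict_response_time(150.0, 200.0))

Even a model this crude captures the qualitative behavior that matters for capacity planning: predicted response time diverges as the offered load approaches the measured capacity.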
Our main work was to build ServerScope, a general-purpose performance evaluation platform for computer systems, and then to run TPC-W (the Transaction Processing Performance Council's Web Commerce benchmark) on it, using the data-monitoring service ServerScope provides to observe the system under test and to analyze and optimize the performance of the e-commerce web server.
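The data-monitoring service itself is not detailed in this abstract; the following is only a sketch of the sort of sampler such a service might run on a Linux system under test, and every name in it is hypothetical.

    import time

    # Hypothetical sketch of a metrics poller (not ServerScope's actual
    # service): sample aggregate CPU utilization from /proc/stat on Linux.
    def read_cpu_jiffies():
        """Return (busy, total) jiffies from the aggregate 'cpu' line."""
        with open("/proc/stat") as f:
            values = [int(v) for v in f.readline().split()[1:]]
        idle = values[3] + values[4]        # idle + iowait columns
        return sum(values) - idle, sum(values)

    def sample(interval_s=1.0, count=5):
        busy0, total0 = read_cpu_jiffies()
        for _ in range(count):
            time.sleep(interval_s)
            busy1, total1 = read_cpu_jiffies()
            pct = 100.0 * (busy1 - busy0) / max(1, total1 - total0)
            print("cpu=%5.1f%%" % pct)
            busy0, total0 = busy1, total1

    if __name__ == "__main__":
        sample()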
The Dawning 3000 superserver is a representative domestically built superserver. In the course of the evaluation, we proposed a top-down method for evaluating web server performance and improving it. The tests also show that a single service node of the Dawning 3000 is sufficient for today's small and medium-sized commercial applications, providing quantitative performance evidence for bringing the Dawning 3000 into the domestic commercial market.
Abstract (English)
Today, e-commerce websites have a great influence on people's lifestyles, and more and more attention is paid to their quality of service. The web server of an e-commerce site, as the backbone of the site, naturally becomes the focus of that concern. Many methods have therefore been developed to optimize web server performance, and many testing and analysis methods have come into being alongside them.
The main objective of this thesis is to study how to improve and optimize the performance of an e-commerce server. SPEC and TPC are two well-known performance evaluation councils; almost all prominent server and software vendors are among their members, and the benchmarks they institute are widely applied. These benchmarks are de facto standards because they are the basis on which vendors compare performance with one another; that is, the benchmarks face the vendors, and vendors pursue ever-higher performance targets against them. TPC maintains a popular group of commercial benchmarks, and most major server and database enterprises have joined the organization; SPEC emphasizes developing benchmarks from actual applications that express real workloads. Even so, these benchmarks cannot provide appropriate guidance for ordinary users.
We present the ServerScope test suite to build a user-oriented performance evaluation system and to pursue more effective testing methods and tuning tools. In addition, ServerScope can extract and build appropriate analytical models from the actual conditions of the system under test in order to predict that system's performance.
Our main work is the construction of ServerScope, a general-purpose computer system performance evaluation platform. On this platform we ran the TPC-W benchmark against an Oracle database, monitored the performance of the system under test, and analyzed and optimized e-commerce server performance.
The Dawning 3000 cluster server is representative of cluster systems in China. We implemented the TPC-W programs on this platform, designed several test models, measured the performance of its nodes, carried out extensive studies of web server and database tuning, and reported the WIPS values obtained during tuning. In the course of testing we presented a TPC-W evaluation method and a top-down optimization method suitable for cluster environments. The tests also proved that one node of the Dawning 3000 can meet the requirements of small and medium-sized enterprises, which gives concrete support for applying these servers in such e-commerce fields.
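WIPS, TPC-W's primary metric, counts the web interactions completed within the measurement interval and divides by the interval's length. A minimal sketch of that computation follows; the input format here is assumed for illustration and is not taken from the TPC-W specification.

    # Minimal sketch: compute WIPS (Web Interactions Per Second) from the
    # completion timestamps of web interactions. Only interactions that
    # finish inside the measurement interval are counted.
    def wips(completion_times, interval_start, interval_end):
        completed = sum(1 for t in completion_times
                        if interval_start <= t <= interval_end)
        return completed / (interval_end - interval_start)

    # Example: three interactions complete inside a 2-second window and one
    # completes after it, so the window scores 3 / 2 = 1.5 WIPS.
    print(wips([0.5, 1.0, 1.5, 2.5], 0.0, 2.0))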