Energy Saving Scheduling Strategy Based on Model Predictive Control for Data Centers
Author: Zhao Xiao-Gang, Hu Qi-Ping, Ding Ling, Shen Zhi-Dong
Affiliation:

Fund Project:

National Natural Science Foundation of China (61003185); Natural Science Foundation of Hubei Province of China (201FFB04505)

    Abstract:

    The ever-growing energy cost of data centers, especially their cooling cost, has drawn much attention in the context of carbon emission reduction. This paper presents an energy-efficient scheduling strategy based on model predictive control (MPC) to reduce cooling cost in data centers. It uses dynamic voltage and frequency scaling (DVFS) to adjust the frequencies of the computing nodes of a cluster so as to minimize the heat recirculation effect among the nodes. The maximum inlet temperature of the nodes can be kept below the temperature limit with little steady-state error. The method can also handle internal disturbances (system model variation) by dynamically regulating frequencies among the nodes. Analysis shows good scalability and small overhead, making the method applicable to large data centers. A temperature-aware controller is designed to reduce inlet temperatures and thereby improve the energy efficiency of data centers. In simulations of an online bookstore running in a heterogeneous data center, the proposed method achieves higher throughput in both normal and emergency cases than existing solutions such as the safe least-recirculation-heat temperature controller and the traditional feedback temperature controller. The MPC-based scheduling method also yields lower inlet temperatures and cooling cost than those two methods under the same workload.
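The control loop the abstract describes can be sketched as follows. This is a minimal illustration, not the paper's actual MPC formulation: the cubic DVFS power model, the 4-node heat-recirculation matrix `D` (inlet temperature rise per watt of neighbor power), and the greedy back-off used to enforce the inlet redline are all invented for the example.

```python
import numpy as np

F_MIN, F_MAX = 0.2, 1.0  # normalized DVFS frequency range (assumption)

def power(f):
    """Assumed DVFS power model: idle power plus a dynamic term ~ f^3."""
    return 50.0 + 100.0 * f ** 3

def inlet_temps(f, D, t_sup):
    """Inlet temperature of each node: CRAC supply air plus recirculated heat."""
    return t_sup + D @ power(f)

def mpc_step(f, demand, D, t_sup, t_red, step=0.05):
    """One receding-horizon step: push frequencies toward the workload's
    demand, then throttle the largest contributor to any inlet-temperature
    violation until every inlet is back under the redline t_red."""
    f = np.minimum(f + step, demand)
    while True:
        temps = inlet_temps(f, D, t_sup)
        hot = int(np.argmax(temps))
        if temps[hot] <= t_red:
            break
        # Only nodes still above the minimum frequency can be throttled.
        cand = np.where(f > F_MIN + 1e-12)[0]
        if cand.size == 0:
            break  # even the frequency floor violates the redline
        # Throttle the candidate adding the most heat to the hot inlet.
        contrib = D[hot] * power(f)
        j = cand[int(np.argmax(contrib[cand]))]
        f[j] = max(f[j] - step, F_MIN)
    return f

# Toy heat-recirculation matrix (deg C per watt) -- invented numbers.
D = np.array([
    [0.020, 0.010, 0.005, 0.005],
    [0.010, 0.020, 0.005, 0.005],
    [0.005, 0.005, 0.020, 0.010],
    [0.005, 0.005, 0.010, 0.020],
])
f = np.full(4, F_MIN)
demand = np.full(4, F_MAX)      # workload wants every node at full speed
for _ in range(30):             # simulate 30 control periods
    f = mpc_step(f, demand, D, t_sup=18.0, t_red=23.0)
# Frequencies settle near 0.9 with every inlet at or below the 23 degC redline.
```

The interesting behavior is that the controller does not cap all nodes uniformly: it backs off whichever node's heat recirculates most strongly into the hottest inlet, which is the intuition behind minimizing the heat recirculation effect in the paper.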

Get Citation

Zhao XG, Hu QP, Ding L, Shen ZD. Energy-saving scheduling algorithm based on model predictive control for data centers. Journal of Software (软件学报), 2017,28(2):429-442 (in Chinese).

Article Metrics
  • Abstract:2491
  • PDF: 5024
  • HTML: 1928
  • Cited by: 0
History
  • Received: January 14, 2015
  • Revised: December 22, 2015
  • Online: January 24, 2017
Copyright: Institute of Software, Chinese Academy of Sciences