Compact heat exchange solutions for low-PUE, high-density data centers, from SWEP!


In the fast-moving world of data center storage, efficient cooling and energy recovery solutions are essential. SWEP brazed plate heat exchangers (BPHEs) offer a highly efficient and compact solution that removes any concerns about space.


BPHE applications in data center cooling

Liquid and two-phase CDU solutions

Optimized chiller solutions

Save energy and space with free cooling


Challenge energy efficiency, no compromises!

Follow SWEP for practical advice focused on data center energy!

Read more


Liquid CDU units for data center cooling

Floor space in a data center is expensive, and space optimization is a top operational priority. With SWEP brazed plate heat exchanger technology, data centers can use rack-mounted or cabinet-mounted CDU units that combine very high cooling capacity with an extremely compact structure. In addition, our dual-pass heat exchangers deliver nearly twice the thermal performance of comparable units without any increase in footprint, minimizing the design space required.

SWEP two-phase BPHEs deliver unmatched heat transfer performance. For CDU units handling condensing fluid, heat exchanger selection must be precise and meet the customer's actual application requirements. With SWEP's selection software SSP, BPHE applications can be simulated based on real test data, ensuring that the selected unit matches the system conditions and delivers reliable, optimized thermal performance.

SWEP offers a range of heat exchanger models for liquid CDU systems.

  • Liquid cooling working range based on water or propylene glycol (30%)
    • Water inlet 25 °C (77 °F)
    • Propylene glycol (30%) inlet 55 °C (131 °F)
  • For more information or selection support, please contact SWEP

SWEP offers a range of heat exchanger models for two-phase CDU systems

  • Two-phase cooling based on refrigerant inlet temperatures of 16-20 °C (61-68 °F)
  • Units suitable for condenser and cascade designs
  • For more information or selection support, please contact SWEP


Mechanical cooling

Comprehensive chiller solutions

SWEP's range of BPHE evaporators for chillers combines innovations in plate and distribution technology to maximize cooling capacity and efficiency. At the same time, SWEP has expertise with all mainstream refrigerants as well as less common A2L and A3 refrigerants, making us an ideal partner for data center chillers. BPHE technology is also perfectly suited for condensers and economizers, and single-wall or double-wall plate technology can be applied to heat recovery.

SWEP offers evaporators with distribution systems optimized for a wide range of refrigerants and applications.

Based on the working range of chillers using R134a and R410A, the chart below lists suitable SWEP evaporator models

Trane: Exceeding targets by 20%! SWEP helps Trane optimize its seasonal energy efficiency ratio

SWEP's true dual-circuit brazed plate heat exchanger DFX650, designed with the EU Ecodesign Directive in focus, helped the well-known HVAC manufacturer Trane exceed the directive's seasonal energy efficiency ratio (SEER) requirements by up to 20%, significantly cutting energy consumption for its customers.

The EU Ecodesign Directive sets minimum energy efficiency standards for air-cooled chillers and heat pumps, with effects felt across many industries. The policy is estimated to save European residents an average of 490 euros per year in energy costs.

Read the full case study


Free cooling / economizer

Energy-saving cooling efficiency

If ambient air or another cold source can still cool the server racks while the chillers are switched off, "free cooling" is used to save energy. Our BPHE technology achieves high heat-transfer efficiency in a compact design, making it ideal for use as an intermediate loop that separates the external glycol circuit from the internal server circuit.

SWEP offers the world's largest BPHE models, with 6" ports, allowing a single heat exchanger to handle water flows of up to 1500 GPM (340 m³/h). The compact structure of the BPHE makes modular setups possible, ensuring reliability while improving part-load efficiency and delivering cost-effective redundancy.

SWEP BPHE cooling capacity now extends to the megawatt range, combining compactness with cost-effective redundancy.

  • Free cooling working range based on water and ethylene glycol (30%)
    • Water inlet 16 °C (61 °F)
    • Ethylene glycol (30%) inlet 13 °C (55 °F)
  • For more information or selection support, please contact SWEP.

Efficient cooling at the Infosys data center

Large data centers require powerful cooling solutions. Infosys Technologies, a leading IT company headquartered in Bangalore, India, needed a cooling system for its critical IT equipment, and the project designer, Schneider Electric, chose efficient and reliable SWEP BPHEs.

The application is a water-to-water system in a data center, where SWEP BPHEs isolate the primary and secondary circuits, since the primary water comes from a cooling tower. The secondary circuit supplies the cooling coils that serve the critical IT equipment.

The customer has long been dedicated to developing precision environmental control technology for large data centers and other mission-critical applications.

The SWEP B427 BPHE solution lets the glycol exchange heat inside the BPHE, while a water circuit carries the heat onward to the user side. This reduces the risk of leakage, helps optimize the cooling scheme, and lowers energy consumption and operating costs.

Read the full case study

FAQs


We have gathered some of the most common questions and answers relating to data center cooling.

Need more information? Find your local sales representative https://www.swep.net/company/contacts/

Data center cooling (DCC for short) refers to controlling the temperature inside a data center to give IT equipment its optimal working temperature, for best efficiency and durability. Excessive heat places significant stress on equipment and can lead to downtime, damage to critical components, and a shorter equipment lifespan, which in turn increases capital expenditure. On top of that, inefficient cooling systems can significantly increase power costs from an operational perspective.

  • A traditional DCC approach uses Computer Room Air Conditioners (CRAC) to keep the room and its IT racks cool. Very similarly, Computer Room Air Handlers (CRAH) centralize cooling water production for multiple units and/or rooms. The cooling water may be supplied by an adiabatic cooling tower or a dry cooler, which counts as free cooling, or by a dedicated chiller when the climate is too warm.
  • Because air is a poor heat carrier, various improvements have been developed to increase cooling efficiency. Raised floors, hot-aisle and/or cold-aisle containment, and in-row up to in-rack cooling have consistently reduced losses.
  • While CRAH units and cooling towers have become legacy technology, water usage has grown year after year and become a challenge: water is sprayed into the air to dissipate heat better than a dry cooler can. With growing water scarcity, Water Usage Effectiveness (WUE) is now an important factor for the data center industry.
  • Liquid cooling is the most recent and most advanced improvement. It includes hybrid systems with integral coils or Rear Door Heat Exchangers (RDHX) and Direct-to-Chip (DTC) cooling, while immersion systems offer the best possible Power Usage Effectiveness (PUE) with the highest energy density and unequaled WUE.

The cost of data center cooling depends on the type of data center, the Tier level, the location, design choices including cooling technology, etc. Total Cost of Ownership (TCO) and Return on Investment (ROI) are probably a better approach for getting a full view of the cost.

TCO comprises three critical components:

  1. CAPEX (Capital Expenditure): the initial investment, i.e. the cost to build, which takes the Tier level, expected lifetime and design choices into consideration.
  2. OPEX (Operational Expenditure): the operating and maintenance costs, shaped by factors such as location and design choices, including PUE and cooling technology.
  3. Energy costs: as water scarcity and climate warming increase and fossil energy stocks decrease, growing attention should be given to Leadership in Energy and Environmental Design (LEED) certification.

These considerations lead to a more holistic view and better evaluation of ROI and strategic choices.
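As a rough sketch of how these components can be combined, the Python snippet below simply adds CAPEX to the yearly OPEX and energy cost over a planned lifetime. The function name, the figures and the no-discounting model are illustrative assumptions, not SWEP guidance or a complete TCO methodology.

```python
def simple_tco(capex: float, annual_opex: float,
               annual_energy_cost: float, lifetime_years: int) -> float:
    """Simplified Total Cost of Ownership: build cost plus yearly running
    costs, with no discounting or cost escalation applied."""
    return capex + lifetime_years * (annual_opex + annual_energy_cost)

# Hypothetical figures for a small facility (all in EUR).
tco = simple_tco(capex=8_000_000, annual_opex=600_000,
                 annual_energy_cost=900_000, lifetime_years=10)
print(f"10-year TCO: {tco:,.0f} EUR")  # 23,000,000 EUR
```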

Water Usage Effectiveness (WUE for short) is a simple rating in l/kWh comparing the annual data center water consumption (in liters) with the IT equipment energy consumption (in kilowatt hours). Water usage includes cooling, regulating humidity and producing electricity onsite. Uptime Institute claims that a medium-sized data center (15 MW) uses as much water as three average-sized hospitals or more than two 18-hole golf courses. As demand for data centers grows and water scarcity becomes more and more common, WUE becomes crucial. As a result, data centers must rely on more sustainable cooling methods. Ramping up renewable energies (solar and wind) also allows data centers to indirectly curb their water consumption while lowering carbon emissions.
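To make the l/kWh rating concrete, here is a minimal sketch of the WUE arithmetic. The annual water and energy figures are hypothetical examples chosen for illustration, not values from the Uptime Institute comparison quoted above.

```python
def wue(annual_water_liters: float, annual_it_kwh: float) -> float:
    """Water Usage Effectiveness in liters per kWh of IT equipment energy."""
    return annual_water_liters / annual_it_kwh

# Hypothetical example: 60 million liters of water per year for a site
# whose IT equipment draws 5 MW continuously (about 43.8 GWh per year).
annual_it_kwh = 5_000 * 24 * 365          # 5 MW expressed in kWh over a year
annual_water_liters = 60_000_000

print(f"WUE = {wue(annual_water_liters, annual_it_kwh):.2f} l/kWh")  # ~1.37 l/kWh
```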

Power Usage Effectiveness (PUE for short) is a metric for the energy efficiency of data centers; specifically, how much energy is used by the computing equipment in contrast to the cooling and other overhead that supports that equipment. PUE is the inverse of Data Center Infrastructure Efficiency (DCIE). An ideal PUE is 1.0. Anything in a data center that is not a computing device (e.g. lighting, cooling, etc.) falls into the category of facility energy consumption. Traditional data centers score a PUE of around 1.7-1.8 or more, while aisle containment brings PUE down to about 1.2. Liquid cooling technologies allow PUE as low as 1.05-1.1.
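As a simple illustration of the arithmetic behind PUE and DCIE, the sketch below uses hypothetical annual energy figures; the function names and numbers are assumptions for illustration only.

```python
def pue(total_facility_kwh: float, it_equipment_kwh: float) -> float:
    """Power Usage Effectiveness: total facility energy / IT equipment energy."""
    return total_facility_kwh / it_equipment_kwh

def dcie(total_facility_kwh: float, it_equipment_kwh: float) -> float:
    """Data Center Infrastructure Efficiency: the inverse of PUE, as a fraction."""
    return it_equipment_kwh / total_facility_kwh

# Hypothetical annual figures: 10 GWh for IT, 2 GWh for cooling, lighting, etc.
it_kwh = 10_000_000
facility_kwh = it_kwh + 2_000_000

print(f"PUE  = {pue(facility_kwh, it_kwh):.2f}")   # 1.20, roughly aisle-containment level
print(f"DCIE = {dcie(facility_kwh, it_kwh):.0%}")  # ~83%
```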

A Coolant Distribution Unit (CDU for short) is a system that enables smaller, more efficient and more precise liquid cooling in a data center, often integrating facility water. The CDU circulates the coolant in a closed-loop system on the secondary side (the cooling application) and utilizes facility water on the primary side (heat rejection). The key components of a CDU are a pump, a reservoir, a power supply, a control board, and a brazed plate heat exchanger (BPHE). Filters, flow meters, pressure transducers, and other devices are also used to manage the operation of the CDU optimally. In-Rack CDUs are designed to integrate into a server chassis and distribute coolant to a series of servers or heat sources. In-Rack CDUs offer up to 60-80 kW of cooling capacity and can feature a redundant pump design, dynamic condensation-free control, automatic coolant replenishing, a bypass loop for stand-by operation, and automatic leak detection. Freestanding In-Row CDUs are larger and designed to manage high heat loads across a series of server chassis in a data center. These full liquid cooling systems distribute coolant in and out of server chassis and can integrate into existing facility cooling systems or be designed to be fully self-contained. In-Row CDU capacity typically ranges around 300 kW, with models up to 700 kW.

Direct-to-chip cooling (DTC for short) utilizes cold plates in contact with hot components and removes heat by running cooling fluid through the cold plates. The cooling fluid can be a refrigerant (direct expansion DX or 2-phase systems) or chilled water (single phase), fed directly or via a CDU. In practice, liquid-cooled systems often have one or more loops for each server. In a GPU (Graphics Processing Unit) server there are five loops, so a CDU is needed for the rack. DTC extends cooling to the CPU (Central Processing Unit), GPU, RAM (Random Access Memory) and NIC (Network Interface Card) for high-frequency trading, hyperscale computing, rendering and gaming, supercomputers, telecommunications, etc.

Immersion systems involve submerging the hardware itself into a bath of non-conductive and non-flammable liquid. Both the fluid and the hardware are contained within a leak-proof case. The dielectric fluid absorbs heat far more efficiently than air and is circulated to a BPHE where heat is transferred to the chilled facility water.

In a 2-phase system, the dielectric liquid evaporates to the vapor phase and re-condenses into the liquid phase at the top of the casing. Heat is captured by the fluid's evaporation and dissipated through the condenser to the chilled facility water. Because latent heat (phase change) is far greater than sensible heat (temperature change), data center density can reach unequaled levels. Temperature stability is also excellent, since the phase change occurs at constant temperature. Finally, peak loads are shaved by the thermal mass that the dielectric fluid volume represents.

An alternative system circulates the dielectric fluid inside the racks, where the IT equipment is enclosed in leakproof casings. Usually single phase, the dielectric fluid actively absorbs heat and is then cooled again in the CDU. As such, immersion cooling is the most effective data center cooling method, enabling future applications such as High Performance Computing (HPC), machine learning and Artificial Intelligence (AI), cryptocurrency mining, big data analytics, the Internet of Things (IoT) with 5G and cloud computing deployment, etc.

Not necessarily. There is a significant quantity of copper in direct contact with the dielectric coolant, which is likely non-corrosive, so copper-free BPHEs are not a must. Printed circuit boards (PCBs for short) are used in nearly all electronic products. This medium connects electronic components to one another in a controlled manner. It takes the form of a laminated sandwich structure of conductive and insulating layers: each conductive layer is designed with an artwork pattern of traces, planes and other features (similar to wires on a flat surface) etched from one or more sheet layers of copper laminated onto and/or between sheet layers of a non-conductive substrate.

In Direct-to-Chip (DTC) cooling, there is no direct contact between the electronics and the cooling fluid. It is nevertheless crucial that the fluid is non-conductive, to avoid disturbing the operation of the electronics, so deionized water may be used. At high purity and low electrical conductivity (typically < 10 µS/cm), pure water becomes corrosive to copper.

When the data center uses evaporative or adiabatic cooling towers to reject heat, water is sprayed into the cooling air for better efficiency, resulting in a lower temperature than with a dry cooler. Unfortunately, as the water evaporates, the salt concentration also increases until the water becomes fouling and corrosive. Water treatment then becomes necessary, including make-up water for compensation, but the associated operational costs rise. To limit this extra cost, systems may be operated close to the minimum acceptable water quality, which can result in copper-corrosive water. In these conditions, all-stainless-steel (All-SS) or copper-free BPHEs should be considered, assessed case by case.

