IBM Announces IBM Telum Processor

As the first processor that contains on-chip acceleration for AI inferencing, the Telum chip introduces an updated process that aims to prevent fraud

At the annual Hot Chips conference, IBM revealed new details of its upcoming IBM Telum Processor, three years in development, which features on-chip deep learning AI inference to detect fraud as it occurs. Telum will be the central processor chip for the next generation of IBM Z and LinuxONE systems.

Running AI inference far from the applications and data it serves requires a level of low latency that has so far been unavailable. Today there are chips dedicated to AI alongside server processors that handle enterprise workloads such as databases and transactions, but these components lack the connected infrastructure needed to shorten the stimulus-to-response time.
 
Falling short of these strict latency requirements, today's technology leaves most businesses catching fraud only after it has been executed. That can be a time-consuming and compute-intensive process, and it leads some businesses to omit fraud detection altogether. Because Telum is the first processor with on-chip acceleration for AI inferencing, it can score a transaction while it is in flight, aiming to prevent fraud rather than detect it after the fact. The chip is designed specifically for the banking, finance, trading and insurance industries, which apply AI to operations such as loan processing, clearing and settlement of trades, anti-money laundering and risk analysis. It also targets businesses at a more local level, notably retail stores, where hundreds to thousands of transactions take place daily, each carrying the potential for fraudulent purchases. For both IBM's clients and their consumers, Telum can better protect sensitive information and reduce financial losses compared with today's standard approaches.
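The shift described above, from detecting fraud after execution to screening each transaction before it commits, can be sketched in a few lines. Everything below is a hypothetical illustration: the scoring rules, feature names, and threshold are assumptions for the sketch, not IBM's model or API.

```python
# Illustrative sketch of in-transaction fraud screening.
# The scoring logic, features, and threshold are hypothetical assumptions,
# standing in for a trained deep learning model running on-chip.

FRAUD_THRESHOLD = 0.9  # hypothetical score above which a transaction is declined


def score_transaction(txn: dict) -> float:
    """Toy stand-in for a low-latency inference call; returns a fraud probability."""
    risk = 0.0
    if txn["amount"] > 10_000:            # unusually large purchase
        risk += 0.5
    if txn["country"] != txn["card_country"]:  # purchase far from the card's home
        risk += 0.5
    return min(risk, 1.0)


def process_transaction(txn: dict) -> str:
    """Screen the transaction *before* committing it, rather than after the fact."""
    if score_transaction(txn) >= FRAUD_THRESHOLD:
        return "declined"  # fraud prevented in-flight, not merely flagged later
    return "approved"


print(process_transaction({"amount": 50, "country": "US", "card_country": "US"}))
print(process_transaction({"amount": 20_000, "country": "RU", "card_country": "US"}))
```

The point of the design is that the scoring call sits inside the transaction path, so the latency of inference directly bounds transaction throughput; that is why on-chip acceleration matters.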
 
The chip contains eight processor cores running at a clock frequency above 5GHz, which break down the detection workload through dynamic execution and instruction-level parallelism. Several operations can execute simultaneously as their data becomes available, rather than in a strictly ordered sequence, preventing the processor from sitting idle and enabling a new level of AI inference speed. Telum also features a redesigned cache and chip-interconnect infrastructure that provides 32MB of cache per core and can scale to 32 chips. The interconnect lets the caches communicate with one another and exchange data as different programs run in parallel. The AI accelerator takes advantage of these capabilities through its own entry and exit point on the cache fabric: it can reach the data it needs, perform the required operations, and store the execution results both in the respective caches and in the core, keeping them readily accessible.
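A quick back-of-the-envelope calculation shows what the figures above imply for aggregate cache capacity, assuming the per-core caches simply combine additively (a simplification; the article does not describe the actual cache organization):

```python
# Aggregate cache capacity implied by the article's figures,
# under the simplifying assumption that per-core caches add up.
CACHE_PER_CORE_MB = 32   # 32MB of cache per core
CORES_PER_CHIP = 8       # eight processor cores per Telum chip
MAX_CHIPS = 32           # the design can scale to 32 chips

per_chip_mb = CACHE_PER_CORE_MB * CORES_PER_CHIP   # 256 MB per chip
system_mb = per_chip_mb * MAX_CHIPS                # 8192 MB across 32 chips

print(f"{per_chip_mb} MB per chip, {system_mb} MB ({system_mb // 1024} GB) system-wide")
```

So a fully scaled 32-chip configuration would expose on the order of 8GB of cache for parallel programs and the AI accelerator to share.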
 
This innovation spotlights how research converts into commercialization. Telum marks IBM's first chip with technology created by the IBM Research AI Hardware Center, developed with Samsung as IBM's technology development partner. The chip is fabricated on Samsung's 7nm EUV technology node, which was announced as entering mass production in early 2020. Looking ahead, IBM plans a Telum-based system for the first half of 2022.