Intel quietly launched a mysterious new AI CPU that promises to bring deep learning inference and computing to the edge — but you won’t be able to plug it into a motherboard anytime soon

Intel has launched a new AI processor series for the edge, promising industrial-class deep learning inference. The new ‘Amston Lake’ Atom x7000RE chips offer up to double the cores and twice the graphics base frequency of the previous x6000RE series, all neatly packed within a 6W–12W BGA package. The x7000RE series packs more performance … Read more

Chip firm founded by ex-Intel president plans massive 256-core CPU to surf AI inference wave and give Nvidia B100 a run for its money — Ampere Computing AmpereOne-3 likely to support PCIe 6.0 and DDR5 tech

Ampere Computing unveiled its AmpereOne family of processors last year, boasting up to 192 single-threaded Ampere cores — the highest core count in the industry at the time. These chips, designed for cloud efficiency and performance, were Ampere’s first product based on its new custom core leveraging internal IP, signalling a shift in the sector, according to CEO Renée … Read more

AMD teams up with Arm to unveil AI chip family that does preprocessing, inference and postprocessing on a single piece of silicon — but you will have to wait more than 12 months to get actual products

AMD is introducing two new adaptive SoCs – the Versal AI Edge Series Gen 2 for AI-driven embedded systems, and the Versal Prime Series Gen 2 for classic embedded systems. Multi-chip solutions typically come with significant overhead, but a single hardware architecture isn’t fully optimized for all three AI phases – preprocessing, AI inference, and postprocessing. To tackle … Read more

Samsung is going after Nvidia’s billions with new AI chip — Mach-1 accelerator will combine CPU, GPU and memory to tackle inference tasks but not training

Samsung is reportedly planning to launch its own AI accelerator chip, the ‘Mach-1’, in a bid to challenge Nvidia’s dominance in the AI semiconductor market. The new chip, which will likely target edge applications with low power consumption requirements, will go into production by the end of this year and make its debut in early … Read more

SteerLM: a simple technique to customize LLMs during inference

Large language models (LLMs) have made significant strides in AI-driven natural language generation. Models such as GPT-3, Megatron-Turing, Chinchilla, PaLM-2, Falcon, and Llama 2 have revolutionized the way we interact with technology. However, despite their progress, these models often struggle to provide nuanced responses that align with user preferences. This limitation has led … Read more
