In 2024, Samsung will launch low latency wide (LLW) DRAMs, designed to improve the power efficiency of artificial intelligence applications by 70% compared with regular DRAMs.
LLW DRAMs will become its flagship next-generation chips and be embedded in AI devices such as extended reality headsets. Samsung aims to grow AI chip foundry sales to about 50% of its total foundry sales within five years.
Samsung’s LLW DRAM is a low-power memory that features wide I/O and low latency, and boasts a bandwidth of 128 GB/s, presumably per module.
The new AI chips will increase data processing speed and capacity by packing more input/output (I/O) terminals into the semiconductor circuit than existing DRAMs have.
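The bandwidth math behind wide I/O is simple: peak bandwidth is the interface width times the transfer rate. The sketch below is a hypothetical illustration of how a wide, slower-clocked (and therefore lower-power) interface can match the quoted 128 GB/s; the widths and data rates are assumptions, not Samsung's published LLW specifications.

```python
# Hypothetical illustration: memory bandwidth scales with I/O width.
# The interface widths and transfer rates below are assumptions,
# not Samsung's published LLW DRAM specs.

def bandwidth_gbps(io_width_bits: int, transfer_rate_gtps: float) -> float:
    """Peak bandwidth in GB/s = (I/O width in bytes) * transfers per second."""
    return io_width_bits / 8 * transfer_rate_gtps

# A narrower LPDDR-style interface: 64 bits at 8 GT/s -> 64 GB/s
print(bandwidth_gbps(64, 8.0))    # 64.0

# A wide-I/O interface hitting the quoted 128 GB/s with a much
# slower, lower-power clock: 512 bits at only 2 GT/s
print(bandwidth_gbps(512, 2.0))   # 128.0
```

This is why widening the interface, rather than raising the clock, is attractive for mobile AI: the same bandwidth arrives at a lower signaling frequency.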
A special type of DRAM that increases bandwidth and latency limits, Low Latency Wide IO (#LLW) is significantly more efficient in processing real-time data than #LPDDR, making it perfect for mobile #AI and #Gaming applications. Learn more. https://t.co/ONb8wHwnTv pic.twitter.com/nra2v9OUpc
— Samsung Semiconductor (@SamsungDSGlobal) November 29, 2023
3D PACKAGING
In 2024, Samsung will unveil an advanced three-dimensional (3D) chip packaging technology, including the most advanced 3.5D packaging.
3 NM PROCESS
Samsung will adapt its 3-nanometer process technology, currently the industry’s smallest and most advanced process node, for AI applications.
In 2023, Samsung began mass production of 3 nm chips for fabless clients, a global first, ahead of Taiwan’s TSMC.
In 2023, Samsung raised the production yields of its first-generation 3 nm process to what it describes as a perfect level.
Samsung will improve memory performance by 2.2 times every two years.
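The claimed pace compounds quickly. A minimal sketch, assuming the 2.2x-per-two-years rate holds steadily:

```python
# Compound the claimed scaling: memory performance x2.2 every two years.
# The projection beyond the stated two-year figure is an extrapolation.

def perf_multiplier(years: float, rate: float = 2.2, period: float = 2.0) -> float:
    """Cumulative performance multiplier after `years` of steady scaling."""
    return rate ** (years / period)

print(round(perf_multiplier(2), 1))   # 2.2  (one period)
print(round(perf_multiplier(10), 1))  # 51.5 (2.2**5 over a decade)
```

At that rate, a decade of scaling would mean roughly 50x the memory performance, which is faster than the classic Moore's Law doubling every two years.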
GDP STRATEGY
Samsung has adopted a new strategy it calls GDP: GAA, DRAM and packaging.
GAA, short for gate-all-around, reduces the leakage current of processors with a circuit width of 3 nm or below. It is Samsung’s key architecture to develop next-generation DRAMs and packaging technology.
TESLA FSD and a New AI Chip
Samsung is working with Tesla to develop the EV maker’s next-generation Full Self-Driving (FSD) chips for Level-5 autonomous driving vehicles.
Samsung will develop a 4 nm AI accelerator, a high-performance computing chip used to process AI workloads.

Brian Wang is a Futurist Thought Leader and a popular Science blogger with 1 million readers per month. His blog Nextbigfuture.com is ranked #1 Science News Blog. It covers many disruptive technologies and trends including Space, Robotics, Artificial Intelligence, Medicine, Anti-aging Biotechnology, and Nanotechnology.
Known for identifying cutting edge technologies, he is currently a Co-Founder of a startup and fundraiser for high potential early-stage companies. He is the Head of Research for Allocations for deep technology investments and an Angel Investor at Space Angels.
A frequent speaker at corporations, he has been a TEDx speaker, a Singularity University speaker and guest at numerous interviews for radio and podcasts. He is open to public speaking and advising engagements.
Are the power savings coming from the die shrink or the architecture? I’m not quite clear on this. I do know that if you take an IR camera to an NVIDIA 3080 or flagship 4080, the hotspot is the memory.