Acemagic F3A: An AMD Ryzen AI 9 HX 370 Mini PC with up to 128GB of RAM


Acemagic F3A Performance

We looked at the AMD Ryzen AI 9 HX 370 previously, so the more interesting thing is probably the comparison to the Beelink SER9 that used LPDDR5X. How much are you giving up to get expandable memory?

AMD Ryzen AI 9 HX 370 DDR5 SODIMM versus LPDDR5X Performance Comparison

Here is the quick comparison between the two:

Acemagic F3A AMD Ryzen AI 9 HX 370 to Beelink SER9 Performance

5-13% is not a huge amount, but another way to look at it is that it is somewhere around 0.5-1.5 cores' worth of performance in a 12-core CPU. So you are clearly giving up something for the slotted DDR5 SODIMMs.
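As quick back-of-the-envelope math (assuming multi-threaded performance scales roughly linearly with core count, which is only an approximation):

```python
# Back-of-the-envelope: translate a percentage deficit into "core
# equivalents" on a 12-core CPU, assuming roughly linear scaling.
CORES = 12

for deficit in (0.05, 0.13):
    print(f"{deficit:.0%} deficit ~ {deficit * CORES:.1f} of {CORES} cores")

# Output:
# 5% deficit ~ 0.6 of 12 cores
# 13% deficit ~ 1.6 of 12 cores
```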

Geekbench 5 and Geekbench 6 Performance

On Geekbench 6, we ran the GPU compute (OpenCL) on the two systems, and clearly the LPDDR5X on the Beelink was faster.

Acemagic F3A Geekbench 6 GPU Performance

On the CPU compute side, we saw a big impact on multi-threaded workloads, but not as much on single-threaded workloads.

Acemagic F3A Geekbench 6 Performance

Here is the Geekbench 5 comparison:

Acemagic F3A Geekbench 5 Performance

Overall, there is a notable gap between the LPDDR5X and DDR5-5600 SODIMMs in terms of performance with the AMD Ryzen AI 9 HX 370. At the same time, there can also be a capacity gap, which we will get to next.

The AI Monster Within

Normally we would end it here, but you are probably wondering why you would give up that kind of performance. Here is deepseek-r1 32b running in Ollama.

Acemagic F3A 128GB Running Ollama Deepseek R1 32b Performance
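For context on speed, here is a minimal sketch of how one could query a local Ollama instance running the same model and compute tokens per second. The /api/generate endpoint and its eval_count and eval_duration fields are part of Ollama's documented REST API; the prompt is just a placeholder:

```python
# Query a local Ollama instance and compute generation speed.
# Assumes Ollama is serving on its default port with deepseek-r1:32b pulled.
import json
import urllib.request

req = urllib.request.Request(
    "http://localhost:11434/api/generate",
    data=json.dumps({
        "model": "deepseek-r1:32b",
        "prompt": "Why does memory bandwidth matter for LLM inference?",
        "stream": False,
    }).encode(),
    headers={"Content-Type": "application/json"},
)

with urllib.request.urlopen(req) as resp:
    result = json.load(resp)

# eval_count is the number of generated tokens; eval_duration is in nanoseconds
tps = result["eval_count"] / (result["eval_duration"] / 1e9)
print(result["response"])
print(f"~{tps:.1f} tokens/s")
```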

This is not fast, but it is running. Of course, more interesting is running a 70b version, so here is the deepseek-r1 70b distill running in LM Studio, consuming well over 40GB on its own, or more than the total memory we had in the Beelink SER9.

Acemagic F3A Running deepseek-r1 distill llama 70b task manager
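As a sanity check on that figure, the rough memory math works out, assuming a 4-bit quantization (about half a byte per weight) like the typical GGUF builds Ollama and LM Studio ship, plus an assumed runtime overhead factor:

```python
# Rough RAM estimate for a 4-bit quantized 70B model.
# The overhead factor (KV cache, activations, runtime) is an assumption.
params = 70e9            # parameter count of the 70b distill
bytes_per_weight = 0.5   # ~4-bit quantization
overhead = 1.2           # assumed runtime overhead factor

weights_gb = params * bytes_per_weight / 1e9
print(f"weights alone: ~{weights_gb:.0f} GB")             # ~35 GB
print(f"with overhead: ~{weights_gb * overhead:.0f} GB")  # ~42 GB
```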

When we started getting the integrated Radeon 890M involved as well, our memory usage increased further.

Acemagic F3A Running deepseek-r1 distill llama 70b
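If you are experimenting with iGPU offload in Ollama rather than LM Studio's offload slider, Ollama's documented num_gpu option controls how many layers go to the GPU. Whether the Radeon 890M backend actually picks them up depends on your Ollama build, and the layer count below is just a guess:

```python
# Sketch: ask Ollama to offload some layers to the GPU via num_gpu.
# Whether the Radeon 890M is actually used depends on the Ollama build/backend.
import json
import urllib.request

req = urllib.request.Request(
    "http://localhost:11434/api/generate",
    data=json.dumps({
        "model": "deepseek-r1:70b",
        "prompt": "Hello",
        "stream": False,
        "options": {"num_gpu": 20},  # layers to offload; tune to your VRAM carve-out
    }).encode(),
    headers={"Content-Type": "application/json"},
)

print(json.load(urllib.request.urlopen(req))["response"])
```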

To be clear, if you want to go buy a big GPU with lots of memory for AI workloads, the performance will be much better. On the other hand, if you just want an AI assistant that you can task with something and come back to in a few minutes, then this is really neat. You can use a higher-accuracy model on a mini PC that costs closer to $1000.
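That "task it and come back" workflow is easy to script. Here is a minimal sketch, assuming a local Ollama instance serving the model; the prompt and output file name are placeholders:

```python
# Fire-and-forget assistant: send a prompt, write the answer to a file,
# and read it whenever you come back.
import json
import urllib.request

def ask(prompt: str, model: str = "deepseek-r1:70b", out: str = "answer.txt") -> None:
    req = urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=json.dumps({"model": model, "prompt": prompt, "stream": False}).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        text = json.load(resp)["response"]
    with open(out, "w", encoding="utf-8") as f:
        f.write(text)

ask("Draft an outline for a home lab storage migration plan.")
```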

Next, let us get to the power consumption and noise.

4 COMMENTS

  1. According to the review, this mini PC can run the DeepSeek R1 70B distill, although slowly. How slowly? I’ve tried the same model on an older dual-socket Epyc server and quickly realised that “too slow” makes a big difference in terms of usability.

    As large language models perform differently than what’s included in the current selection of ServeTheHome benchmarks, I think comparing DeepSeek R1 70B performance across a wide variety of CPU and GPU hardware would make for a very interesting article.

  2. Did this thing come with preinstalled Windows? Be careful, Acemagic shipped mini PCs with preinstalled malware/spyware in the past; you can find this documented very well on YT.
    TBH, since then I don’t trust them. Is the UEFI clean? With that, it would be possible to undermine any installation, even fresh ones you did yourself.

    I’d guess there are no ECC RAM capabilities? This I’d love: a silent Ryzen mini PC for Proxmox with two fast LAN ports and ECC support.

  3. I am also waiting for the AMD AI MAX+ 395 mini PCs to come out. I agree with Eric Olson that it would be nice to see an LLM inference speed comparison across different CPUs and GPUs. For me, the expandable RAM is key. I was looking at the SER9 when it launched late last year, but I ended up with an 8945HX mini PC because I didn’t like that I couldn’t get at least 48 GB of RAM in the SER9. I do LLM inference on some of my mini PCs, but my primary use for them is as part of a Proxmox cluster, so having expandable RAM is more important than maximum performance.
