hermann on Sat, 07 Mar 2026 14:48:09 +0100



Phantastic performance of writebin()+read() for very large t_VEC


Recently I worked with Carmichael numbers a lot.
First up to 10^16, then 10^18, then 10^22 and finally 10^24 from here:
https://blue.butler.edu/~jewebste/

The text file with 308 million Carmichael numbers, listing all their prime factors on each line, is 184GB in size. After extracting the first column with "cut -f1 -d\ ...", appending a comma after each number, prepending "{[" and appending "];}", the file size is still 7.5GB.
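For reference, the preprocessing described above can be sketched roughly like this (the file names and the stand-in sample data are my assumptions, not the real 184GB input):

```shell
# Stand-in input: one Carmichael number plus its prime factors per line,
# space-separated (same layout as the downloaded list, assumed here).
printf '561 3 11 17\n1105 5 13 17\n1729 7 13 19\n' > carm-sample.txt

{
  printf '{['                          # prepend "{[" once at the start
  cut -f1 -d' ' carm-sample.txt |      # keep only the Carmichael number
    sed 's/$/,/'                       # append a comma after each number
  printf '];}\n'                       # append "];}" once at the end
} > carm-sample.gp

cat carm-sample.gp
```

The result is a file that gp can parse directly with read() as one t_VEC wrapped in braces.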

I was not able to read that file on a 32GB RAM computer, so I switched to my
2-socket server with 192GB RAM. Reading took more than 13 minutes, and gp's
resident memory size was 110GB after the read:

hermann@E5-2680v4:~$ gp -q
? b=read("carm10e24.gp");
? ##
*** last result: cpu time 11min, 6,811 ms, real time 13min, 2,970 ms.
? #b
308279939
? parforeach(b,c,if(!issquarefree(c),print(c)));
? ##
*** last result: cpu time 6h, 16min, 3,739 ms, real time 13min, 13,494 ms.
?


Then I searched the documentation for a binary write to speed things up, and found 3.2.88 writebin(). The binary file written (11.73GB) is only a little bigger than the 7.5GB text file. And the read() runtime is phenomenal: only 34 seconds instead of more than 13 minutes!

Unfortunately, gp's resident memory size after a fresh start and read is 38GB, so I cannot read the file on my fast AMD 7950X PC with 32GB RAM. Anyway, phantastic!

? b=read("carm10e24.bin");
Killed
hermann@7950x:~$
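For anyone wanting to reproduce the round trip, this is roughly the sequence of gp commands I mean (a sketch; the file names match my sessions above, and the timings will of course differ per machine):

```gp
\\ One-time conversion: slow text parse, then dump the binary image.
? b = read("carm10e24.gp");        \\ parses the {[...];} text file
? writebin("carm10e24.bin", b);    \\ writes the t_VEC in GP binary format

\\ Later sessions: read() detects the binary format by its magic bytes,
\\ so the same function loads either file.
? b = read("carm10e24.bin");       \\ seconds instead of minutes
```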


Regards,

Hermann.