dmitry_kv · 27 Jun 2025 13:42

Processing a 3 GB CSV import. file() and file_get_contents() exhaust memory immediately. Sharing two approaches that keep memory flat.

fgets line by line (simplest):

PHP
<?php
$fp = fopen('data.csv', 'r');
$count = 0;
while (($line = fgets($fp, 8192)) !== false) {
    $row = str_getcsv(rtrim($line, "\r\n"));
    $count++;
}
fclose($fp);
echo "Rows: {$count}\n";
echo "Peak memory: " . round(memory_get_peak_usage() / 1024 / 1024, 2) . " MB\n";

Generator (reusable, composable):

PHP
<?php
function csvRows(string $path): Generator
{
    $fp = fopen($path, 'r');
    while (($line = fgets($fp)) !== false) {
        yield str_getcsv(rtrim($line, "\r\n"));
    }
    fclose($fp);
}

$tmpFile = tempnam(sys_get_temp_dir(), 'csv_');
file_put_contents($tmpFile, "name,age\nAlice,30\nBob,25\nCharlie,35\n");

foreach (csvRows($tmpFile) as $row) {
    echo implode(' | ', $row) . "\n";
}

unlink($tmpFile);
echo "Peak memory: " . round(memory_get_peak_usage() / 1024 / 1024, 2) . " MB\n";

Memory usage stays constant regardless of file size.

Replies (7)
alex_petrov · 27 Jun 2025 13:50

The generator approach is the most reusable. You can compose it with other generators: filter, transform, batch. None of those steps load the full file.
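A minimal sketch of what that composition might look like. `filterRows` and `batchRows` are made-up helper names, not library functions; each stage pulls one row at a time, so nothing ever buffers the whole file.

```php
<?php
// Hypothetical composable stages over a row generator such as csvRows().
function filterRows(iterable $rows, callable $keep): Generator
{
    foreach ($rows as $row) {
        if ($keep($row)) {
            yield $row;       // pass through only rows the predicate accepts
        }
    }
}

function batchRows(iterable $rows, int $size): Generator
{
    $batch = [];
    foreach ($rows as $row) {
        $batch[] = $row;
        if (count($batch) === $size) {
            yield $batch;     // emit a full batch, then start a new one
            $batch = [];
        }
    }
    if ($batch !== []) {
        yield $batch;         // emit the final partial batch
    }
}

// Small in-memory stand-in for csvRows() so the sketch is self-contained.
$rows = new ArrayIterator([['Alice', '30'], ['Bob', '25'], ['Carol', '41']]);
foreach (batchRows(filterRows($rows, fn ($r) => (int) $r[1] >= 30), 2) as $batch) {
    echo count($batch) . " rows in batch\n";   // prints: 2 rows in batch
}
```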

petr_sys · 27 Jun 2025 14:00

fread with an explicit buffer size is faster than fgets for binary files, or whenever you want to control the chunk size yourself. fgets stops at a newline, so it's no use for fixed-width binary records.
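A quick sketch of the fixed-width case. The 16-byte record layout (a uint64 id plus a little-endian double) is invented for illustration:

```php
<?php
// Write three made-up 16-byte records: 'P' = uint64 LE, 'e' = double LE.
$tmp = tempnam(sys_get_temp_dir(), 'rec_');
$fp  = fopen($tmp, 'wb');
for ($i = 1; $i <= 3; $i++) {
    fwrite($fp, pack('Pe', $i, $i * 1.5));
}
fclose($fp);

// Read them back one record at a time; memory stays at one record.
$fp = fopen($tmp, 'rb');
while (($chunk = fread($fp, 16)) !== false && strlen($chunk) === 16) {
    $rec = unpack('Pid/eval', $chunk);
    echo "id={$rec['id']} val={$rec['val']}\n";
}
fclose($fp);
unlink($tmp);
```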

vova · 27 Jun 2025 14:51

SplFileObject also does line-by-line reading and has a built-in CSV mode: set the SplFileObject::READ_CSV flag and iterate. A bit slower than fgets, but a cleaner API.
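For reference, a self-contained sketch of that mode (the temp file and its contents are just sample data; SKIP_EMPTY plus READ_AHEAD is added so the trailing newline doesn't show up as an empty row):

```php
<?php
$tmp = tempnam(sys_get_temp_dir(), 'csv_');
file_put_contents($tmp, "name,age\nAlice,30\nBob,25\n");

$file = new SplFileObject($tmp);
$file->setFlags(SplFileObject::READ_CSV | SplFileObject::SKIP_EMPTY | SplFileObject::READ_AHEAD);
foreach ($file as $row) {
    echo implode(' | ', $row) . "\n";   // each $row is already an array of fields
}

$file = null;   // release the handle before deleting the file
unlink($tmp);
```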

marcoviola · 27 Jun 2025 16:01

quick question - is 8192 for fread just some magic number or does it actually matter? saw it in like 5 different tutorials

alex_petrov · 27 Jun 2025 17:18

8192 matches the typical filesystem block size but 65536 is often faster on modern hardware with large reads. Benchmark your specific case. The optimal size depends on the filesystem, storage type, and CPU cache size.
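Something like this rough harness is enough to compare sizes on your own machine (the 16 MB test file and the candidate sizes are arbitrary; absolute numbers depend on hardware and cache state):

```php
<?php
$tmp = tempnam(sys_get_temp_dir(), 'bench_');
file_put_contents($tmp, str_repeat('x', 16 * 1024 * 1024)); // 16 MB test file

foreach ([8192, 65536, 262144] as $size) {
    $fp    = fopen($tmp, 'rb');
    $start = hrtime(true);
    while (fread($fp, $size) !== '') {
        // discard the data; we only measure read overhead per buffer size
    }
    fclose($fp);
    $ms = (hrtime(true) - $start) / 1e6;
    printf("buffer %6d: %.2f ms\n", $size, $ms);
}
unlink($tmp);
```

A warm page cache makes the second and third runs read from RAM, so run each size several times if you want numbers that reflect disk behavior.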

dmitry_kv · 27 Jun 2025 19:16

For network streams (HTTP responses, sockets) smaller buffers reduce latency. For local files larger buffers reduce syscall overhead. Different defaults for different use cases.

jnovak · 27 Jun 2025 20:46

stream_get_contents with a length argument also works for chunked reads and plays nicely with stream filters. You can attach a base64 decode filter and read encoded data without a separate decode step.
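A small sketch of that combination, using PHP's built-in convert.base64-decode filter (the temp file and its contents are sample data):

```php
<?php
$tmp = tempnam(sys_get_temp_dir(), 'b64_');
file_put_contents($tmp, base64_encode("hello,world\n"));

$fp = fopen($tmp, 'rb');
// Every read through $fp now passes through the decoder transparently.
stream_filter_append($fp, 'convert.base64-decode', STREAM_FILTER_READ);

// Pull decoded bytes in fixed-size chunks; no separate decode pass needed.
while (($chunk = stream_get_contents($fp, 8192)) !== false && $chunk !== '') {
    echo $chunk;
}
fclose($fp);
unlink($tmp);
```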
