When scraping websites, I usually use the file_get_contents function. However, there are times when we only need a specific portion of the site; for instance, the site's title or its description.
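For example, once we have the first chunk of a page's HTML, a simple regular expression is enough to pull out the title. A minimal sketch (the HTML string below is a stand-in for whatever a partial fetch actually returns):

```php
<?php
// Stand-in for the first kilobyte of a fetched page.
$buffer = '<html><head><title>Tildemark - Tech Blog</title>'
        . '<meta name="description" content="A blog about tech."></head>';

// Grab the text between <title> and </title>, case-insensitively.
if (preg_match('/<title>(.*?)<\/title>/is', $buffer, $matches)) {
    $title = trim($matches[1]);
    echo $title; // Tildemark - Tech Blog
}
?>
```

The same pattern works for the meta description: match the content attribute of the meta tag instead of the title element.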
Instead of using the file_get_contents function, we can use the built-in fopen and fgets functions like this:
<?php
$url = 'http://www.tildemark.com/';
$fp = fopen($url, 'r'); // 'r' opens the URL for reading
$buffer = trim(fgets($fp, 1024)); // read one line, up to 1023 bytes
fclose($fp);
print "<pre>$buffer</pre>";
?>
But using the cURL functions can be faster and more precise. We will use CURLOPT_RANGE to get a specific amount of data from a specified URL. CURLOPT_RANGE defines the range(s) of data to retrieve, in the format "X-Y", where either X or Y is optional. HTTP transfers also support several intervals, separated by commas, in the format "X-Y,N-M". Note that the server must honor the Range header; a server that ignores it will simply return the full document.
<?php
$url = 'http://www.tildemark.com/';
$curl = curl_init($url);
curl_setopt($curl, CURLOPT_RANGE, '0-1024'); // bytes 0 through 1024
curl_setopt($curl, CURLOPT_RETURNTRANSFER, true);
$content = curl_exec($curl);
curl_close($curl);
echo "<pre>$content</pre>";
?>
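To request several intervals at once, pass a comma-separated range. One caveat worth knowing: when a server honors a multi-range request, it replies with a multipart/byteranges body, so each interval arrives wrapped in MIME boundaries and the raw response needs extra parsing. A sketch, assuming the target server supports Range requests:

```php
<?php
$url = 'http://www.tildemark.com/';

$curl = curl_init($url);
// Ask for two intervals: bytes 0-255 and bytes 1024-1279.
curl_setopt($curl, CURLOPT_RANGE, '0-255,1024-1279');
curl_setopt($curl, CURLOPT_RETURNTRANSFER, true);
$content = curl_exec($curl);
curl_close($curl);

// With several intervals, $content is a multipart/byteranges
// body: each part is separated by MIME boundary lines.
echo "<pre>" . htmlspecialchars($content) . "</pre>";
?>
```

If you only need one contiguous chunk, stick with a single "X-Y" range; the response body is then just the raw bytes, with no boundaries to strip.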