cURL does not receive the URL's HTML

I am trying to make a simple web crawler with PHP and I am having problems retrieving the HTML source of a given URL. I am currently using cURL to get the source.

My code:

    <?php
    $url = "http://www.nytimes.com/";

    function url_get_contents($Url) {
        if (!function_exists('curl_init')) {
            die('cURL is not installed!');
        }
        $ch = curl_init();
        curl_setopt($ch, CURLOPT_URL, $Url);
        curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
        $output = curl_exec($ch);
        if ($output === false) {
            die(curl_error($ch));
        }
        curl_close($ch);
        return $output;
    }

    echo url_get_contents($url);
    ?>

Nothing is echoed and no errors are shown, so this is a bit of a mystery. Any suggestions or corrections would be appreciated.

Edit: I added

    if ($output === false) { die(curl_error($ch)); }

to the middle of the function and it ended up giving me an error (finally!):

Unable to resolve host: www.nytimes.com

I still don't know what the problem is. Any ideas?

Thanks!



2 answers


It turns out this was not a cURL problem.

My host server (an Ubuntu VM) was running with a "host-only" network adapter, which blocked access to any IP or domain outside the host machine, so cURL could not connect to external URLs.



Once it was changed to a "bridged" network adapter, I had access to the outside world.

Hope it helps.
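A quick way to confirm this kind of problem before blaming cURL is to test DNS resolution directly from PHP. The sketch below (my addition, not part of the original answer; the helper name can_resolve is made up) relies on the fact that gethostbyname() returns the hostname unchanged when resolution fails:

    <?php
    // DNS sanity check, independent of cURL: gethostbyname() returns the
    // unmodified hostname on failure, so comparing the result to the input
    // tells you whether the machine can resolve the name at all.
    function can_resolve($host) {
        return gethostbyname($host) !== $host;
    }

    if (!can_resolve('www.nytimes.com')) {
        echo "DNS resolution failed - check the VM's network adapter settings\n";
    }

If this prints the failure message, the problem is the VM's networking (as it was here), not the cURL code.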


Variable mismatch ($url vs. $Url). Change:

    function url_get_contents($url) {

to

    function url_get_contents($Url) {

so the parameter name matches the $Url variable used inside the function.
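Putting the pieces together, here is a sketch of the function with a consistent parameter name and error reporting via curl_errno()/curl_error() instead of die(), so the caller can decide how to handle failures. The CURLOPT_FOLLOWLOCATION line is my addition (many sites redirect to www/https), not something from the original question:

    <?php
    function url_get_contents($url) {
        if (!function_exists('curl_init')) {
            throw new RuntimeException('cURL is not installed!');
        }
        $ch = curl_init();
        curl_setopt($ch, CURLOPT_URL, $url);
        curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
        curl_setopt($ch, CURLOPT_FOLLOWLOCATION, true); // follow redirects
        $output = curl_exec($ch);
        if ($output === false) {
            // Report the numeric error code along with the message;
            // code 6 is "Couldn't resolve host", as seen in the question.
            $err = sprintf('cURL error %d: %s', curl_errno($ch), curl_error($ch));
            curl_close($ch);
            throw new RuntimeException($err);
        }
        curl_close($ch);
        return $output;
    }

Throwing instead of calling die() also makes the function usable inside a larger crawler, where one failed URL should not kill the whole run.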






