290 posts
Location
St. Louis, MO
Posted 21 January 2015 - 12:21 AM
I have no websites blocked in my config file, but whenever I do
'r = http.get("http://translate.google.com")'
r always returns nil. When I do
'r = http.get("http://google.com")'
or
'r = http.get("http://yahoo.com")'
r is not nil. What is going on here?
3790 posts
Location
Lincoln, Nebraska
Posted 21 January 2015 - 03:37 PM
Most Google subdomains are served over https://. Try using that, and see if it fixes it.
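A quick way to check (a sketch, assuming your ComputerCraft build supports HTTPS; the variable names are illustrative):

```lua
-- Try the same request over HTTPS instead of HTTP.
local r = http.get("https://translate.google.com")
if r then
  print("got a response")
  r.close()
else
  print("still nil -- HTTPS alone did not fix it")
end
```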
290 posts
Location
St. Louis, MO
Posted 15 February 2015 - 03:31 AM
Sorry for super late reply… I was in India.
I have tried everything from https:// to http:// to www. to http://www. to https://www., and r always returns nil…
8543 posts
Posted 15 February 2015 - 04:51 AM
The translate subdomain on Google may disallow Java user agents in an effort to prevent automated tools from scraping the translation system.
290 posts
Location
St. Louis, MO
Posted 15 February 2015 - 07:05 AM
It also happens with computercraft.info
7508 posts
Location
Australia
Posted 15 February 2015 - 08:54 AM
What does the CC config look like? You may be allowing the sites incorrectly.
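For reference, the http section of a ComputerCraft.cfg usually looks something like this (a sketch from memory; the exact key names may differ between CC versions):

```
general {
    # Enable the "http" API on Computers
    B:enableAPI_http=true

    # A semicolon-separated list of wildcards for domains
    # that can be accessed through the "http" API
    S:http_whitelist=*
}
```

If the whitelist is anything other than `*`, a site that isn't matched by one of the wildcards will make http.get return nil.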
570 posts
Posted 15 February 2015 - 11:50 PM
Some sites require you to change your user agent, I suppose because of what Lyqyd said. This includes computercraft.info.
local req = http.get("http://computercraft.info")
print(type(req)) --# nil
req = http.get("http://computercraft.info", { ["User-Agent"] = "something" })
print(type(req)) --# Works! (table)
req.close()
While it's not really a bug, it would be nice if the default user agent were something other than Java to prevent issues like this.
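Until then, one workaround is to wrap http.get so every request sends a non-default User-Agent. A minimal sketch (the wrapper name `fetch` and the agent string are made up):

```lua
-- Hypothetical wrapper around the ComputerCraft http API: always send a
-- custom User-Agent so sites that reject Java's default agent still respond.
local function fetch(url)
  local handle = http.get(url, { ["User-Agent"] = "cc-fetch/1.0" })
  if not handle then
    return nil, "request failed for " .. url
  end
  local body = handle.readAll() --# read the whole response body
  handle.close()
  return body
end

local page = fetch("http://computercraft.info")
print(page and (#page .. " bytes") or "failed")
```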