This is a read-only snapshot of the ComputerCraft forums, taken in April 2020.

HTTP doesn't work with some websites

Started by AndreWalia, 20 January 2015 - 11:21 PM
AndreWalia #1
Posted 21 January 2015 - 12:21 AM
I have no websites blocked in my config file.

Whenever I do

r = http.get("http://translate.google.com")

r always returns nil, but when I do

r = http.get("http://google.com")

or

r = http.get("http://yahoo.com")

r is not nil.

What is going on here?
Cranium #2
Posted 21 January 2015 - 03:37 PM
Most Google subdomains are served over https://. Try using that, and see if that fixes it.
AndreWalia #3
Posted 15 February 2015 - 03:31 AM
Sorry for super late reply… I was in India.

I have tried everything from https:// to http:// to www. to http://www. to https://www., and r always returns nil…
Lyqyd #4
Posted 15 February 2015 - 04:51 AM
The translate subdomain on Google may disallow java user agents in an effort to prevent automated tools from scraping the translation system.
AndreWalia #5
Posted 15 February 2015 - 07:05 AM
It also happens with computercraft.info
theoriginalbit #6
Posted 15 February 2015 - 08:54 AM
What does the CC config look like? You may be allowing the sites incorrectly.
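For comparison, the relevant settings in a CC 1.6-era ComputerCraft.cfg look roughly like this (option names from memory, so verify against your own file):

```
general {
    # Enable the "http" API on Computers
    B:enableAPI_http=true

    # Wildcard patterns for domains the "http" API is allowed to reach
    S:http_whitelist=*
}
```

If http_whitelist is still at its default of *, no domain should be blocked on the config side.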
Lignum #7
Posted 15 February 2015 - 11:50 PM
Some sites require you to change your user agent, I suppose because of what Lyqyd said. This includes computercraft.info.

local req = http.get("http://computercraft.info")
print(type(req)) --# nil; the request failed

--# The second argument is a table of request headers
req = http.get("http://computercraft.info", { ["User-Agent"] = "something" })
print(type(req)) --# Works! (table)
req.close()

While it's not really a bug, it would be nice if the default user agent were something other than Java to prevent issues like this.
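In the meantime, one workaround is to wrap http.get so every request sends your own User-Agent unless the caller supplies one. A minimal sketch (the "ComputerCraft" value is just a placeholder; pick whatever string the sites you need will accept):

```lua
--# Sketch: patch http.get to add a default User-Agent header.
--# Assumes the stock CC http API (http.get(url, headers)).
local defaultHeaders = { ["User-Agent"] = "ComputerCraft" }

local rawGet = http.get
function http.get(url, headers)
    local merged = {}
    --# Start from the defaults, then let caller-supplied headers win
    for k, v in pairs(defaultHeaders) do merged[k] = v end
    for k, v in pairs(headers or {}) do merged[k] = v end
    return rawGet(url, merged)
end
```

Run this once at startup (e.g. from your startup file) and existing programs that call http.get will pick up the header without any changes.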