58 posts
Location
Seattle
Posted 03 April 2013 - 11:31 AM
Not sure if bug, or just stupid.
Using this code:
local function get(url)
  local x = http.get(url)
  x = x.readAll()
  return x
end
local nickURL = "http://oc.tc/nix610"
local simonURL = "http://oc.tc/simon6689"
local carterURL = "http://oc.tc/ManHog2"
term.clear()
term.setCursorPos(1,1)
textutils.slowPrint("Loading Rankings...")
local simon = get(simonURL)
local carter = get(carterURL)
local nick = get(nickURL)
it will always return:
rank:3: attempt to index ? (a nil value)
which means that x would be nil because http.get didn't work.
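A version of get that checks http.get's return value instead of crashing would at least surface the failure cleanly. This is just a sketch, assuming (as the error suggests) that http.get returns nil when the request fails; the oc.tc URL is the one from the post above.

```lua
-- Sketch of a defensive get: http.get returns nil on failure,
-- so check the handle before indexing it.
local function get(url)
  local response = http.get(url)
  if not response then
    return nil, "http.get failed for " .. url
  end
  local body = response.readAll()
  response.close()
  return body
end

-- The http API only exists inside ComputerCraft, so guard the usage:
if http then
  local nick, err = get("http://oc.tc/nix610")
  print(nick or err)
end
```

With this shape the program prints an error message instead of dying with "attempt to index ? (a nil value)".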
please before you ask:
HTTP IS ENABLED.
Upon further investigation, the HTTP API cannot retrieve any URL that does not end in .com
please help
If this is a bug, please move this to the bugs section.
1190 posts
Location
RHIT
Posted 03 April 2013 - 01:01 PM
Well the website you're trying to access forces a secure connection (https as opposed to http), and I don't believe that ComputerCraft supports https yet. Could be wrong, but I remember seeing that somewhere in the bugs or suggestions.
1214 posts
Location
The Sammich Kingdom
Posted 03 April 2013 - 01:09 PM
Well the website you're trying to access forces a secure connection (https as opposed to http), and I don't believe that ComputerCraft supports https yet. Could be wrong, but I remember seeing that somewhere in the bugs or suggestions.
It has supported HTTPS since 1.3, IIRC. Try putting https instead of http in the URL.
1190 posts
Location
RHIT
Posted 03 April 2013 - 01:20 PM
It has supported HTTPS since 1.3, IIRC. Try putting https instead of http in the URL.
I did try, which is what led me to believe that it isn't supported.
1214 posts
Location
The Sammich Kingdom
Posted 03 April 2013 - 01:22 PM
I did try, which is what led me to believe that it isn't supported.
It works with HTTPS. I had to use HTTPS for Github downloaders.
58 posts
Location
Seattle
Posted 03 April 2013 - 01:27 PM
I added an "s" to the URLs, but they still won't work.
1190 posts
Location
RHIT
Posted 03 April 2013 - 01:27 PM
It works with HTTPS. I had to use HTTPS for Github downloaders.
Hmm. I was sure I remembered seeing that somewhere, but sure enough I can get a github https page just fine.
So yeah, this must be a URL issue. Unfortunately, all of that is done on the Java side, so there's really nothing we can do about it. EDIT: Wrong! This is a problem with the website blocking the Java user agent.
58 posts
Location
Seattle
Posted 03 April 2013 - 01:46 PM
Alright, I'll just use a proxy.
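One way a proxy could work from ComputerCraft: point http.get at a service that fetches the page server-side (with a normal browser user agent) and relays the body back. The proxy address below is purely hypothetical — there is no real service there — and textutils.urlEncode is used to pass the target URL as a query parameter.

```lua
-- Hypothetical proxy workaround: example-proxy.invalid is a
-- placeholder, not a real service. The proxy would fetch `url`
-- server-side and hand the body back to us.
local function getViaProxy(url)
  local proxied = "http://example-proxy.invalid/fetch?url="
    .. textutils.urlEncode(url)
  local response = http.get(proxied)
  if not response then
    return nil
  end
  local body = response.readAll()
  response.close()
  return body
end
```

Since the proxy makes the outbound request, the target site never sees the Java user agent at all.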
2217 posts
Location
3232235883
Posted 03 April 2013 - 03:41 PM
It might not be that — some sites (like ones behind CloudFlare) don't allow access from Java user agents.
1190 posts
Location
RHIT
Posted 03 April 2013 - 03:56 PM
It might not be that — some sites (like ones behind CloudFlare) don't allow access from Java user agents.
Now I remember a thread that was talking about just that. It seems weird to me that they would disable access based on user agent. Would you (or anyone else) happen to know why?
2217 posts
Location
3232235883
Posted 03 April 2013 - 04:00 PM
To prevent spam and worms and such.
For example, a rogue Java application could spam someone easily.
1190 posts
Location
RHIT
Posted 03 April 2013 - 04:07 PM
To prevent spam and worms and such.
For example, a rogue Java application could spam someone easily.
Yes, but I was under the impression that it's fairly trivial to change your user agent (on the Java side, that is — we poor MC mod users must suffer through). So it kind of makes the whole thing pointless, and it also blocks people who could have done something pretty cool with a simple request.
2217 posts
Location
3232235883
Posted 03 April 2013 - 04:43 PM
As I said: spam / worms.
EDIT:
Bots with a well-defined purpose will typically identify themselves with a unique name. These Java user-agents are either not interested in identifying their purpose or not ready to publish their name and take ownership of the crawling activities. Both cases are a waste of bandwidth. Test your new application on someone else’s website. Play with your shady crawler on someone else’s website. Come back when you are willing to identify yourself.
1190 posts
Location
RHIT
Posted 03 April 2013 - 05:03 PM
As I said: spam / worms.
EDIT:
Bots with a well-defined purpose will typically identify themselves with a unique name. These Java user-agents are either not interested in identifying their purpose or not ready to publish their name and take ownership of the crawling activities. Both cases are a waste of bandwidth. Test your new application on someone else’s website. Play with your shady crawler on someone else’s website. Come back when you are willing to identify yourself.
So it's more to stop the noobs who are just going to waste bandwidth than an actual attempt to stop serious hackers. I see now.
@OP By the way, I can get the HTML content of sites like adf.ly, so I'm pretty sure it's just the website you want that's blocking the user agent.
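A quick way to narrow this down in-game is to probe a handful of URLs and see which ones the http API can actually reach — if some sites work and oc.tc doesn't, it's that site blocking the request, not the API. The URLs below are illustrative examples; swap in whatever you want to test.

```lua
-- Diagnostic sketch: report which URLs http.get can reach.
-- Helps distinguish "HTTP API broken" from "this particular
-- site rejects the Java user agent".
local function probe(urls)
  local results = {}
  for _, url in ipairs(urls) do
    local response = http.get(url)
    if response then
      response.close()
      results[url] = true
      print("reachable:   " .. url)
    else
      results[url] = false
      print("unreachable: " .. url)
    end
  end
  return results
end

-- The http API only exists inside ComputerCraft:
if http then
  probe({
    "https://raw.github.com",  -- worked for the Github downloaders above
    "http://oc.tc/nix610",     -- the failing site from the OP
  })
end
```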