327 posts
Location
Julfander Squad Studio
Posted 06 August 2017 - 03:36 PM
I know that you can get the data of a file from Pastebin, but I am working with GitHub and want something like an update checker which downloads the new file if there is an update. I don't know how to use the HTTP API. Can someone explain to me how it works?
194 posts
Posted 06 August 2017 - 03:51 PM
This is the simplest way to get data from a URL. If you replace url with the raw URL for a GitHub file, it returns the contents of the file.
var = http.get(url).readAll()
3057 posts
Location
United States of America
Posted 06 August 2017 - 06:27 PM
If http.get fails, that example will throw an error. You should check if http.get worked before using the result.
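In ComputerCraft that check looks roughly like this (a minimal sketch; url stands in for whatever address you are fetching):

```lua
-- http.get returns a response handle on success, or nil on failure,
-- so test the result before calling any methods on it.
local response = http.get(url)
if response then
    local contents = response.readAll()
    response.close()
    -- use contents here
else
    printError("Could not reach " .. url)
end
```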
Edited on 06 August 2017 - 04:28 PM
467 posts
Location
Van Diemen's Land
Posted 07 August 2017 - 01:07 AM
GitHub has a raw file feature. Go to your GitHub project, click on the file you want, then click on "Raw" in the file viewer. This'll give you a link to a webpage that ONLY contains the contents of the file.
The URL will look something like this:
https://raw.githubusercontent.com/YourUsername/YourProject/master/test.txt
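Combined with http.get, fetching such a file looks roughly like this (a sketch; the URL uses the placeholder names from above):

```lua
-- Sketch: download the raw contents of a file hosted on GitHub.
-- Replace the URL with the "Raw" link for your own repository.
local url = "https://raw.githubusercontent.com/YourUsername/YourProject/master/test.txt"

local response = http.get(url)
if response then
    local contents = response.readAll()
    response.close()
    print(contents)
else
    printError("Download failed")
end
```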
327 posts
Location
Julfander Squad Studio
Posted 07 August 2017 - 09:56 AM
So here is the code I would use:
local website = http.get(url)
if website then
github_file = website.readAll()
end
website.close()
Edited on 07 August 2017 - 07:58 AM
160 posts
Location
Probably within 2 metres of my laptop.
Posted 07 August 2017 - 08:04 PM
So here is the code I would use:
local website = http.get(url)
if website then
github_file = website.readAll()
end
website.close()
That would still error if website == nil, because website.close() sits outside the if block.
local website = http.get(url)
if website then
github_file = website.readAll()
website.close()
end
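Building on that pattern, here is a rough sketch of the update checker the original post asked about: it downloads the remote file, compares it against the local copy, and overwrites the local copy only if they differ. The URL and the file name program.lua are placeholders, not anything from a real repository.

```lua
-- Hedged sketch of a single-file update checker for ComputerCraft.
local url  = "https://raw.githubusercontent.com/YourUsername/YourProject/master/program.lua"
local path = "program.lua"

-- Read the local copy, or return nil if it does not exist yet.
local function readLocal(p)
    if not fs.exists(p) then return nil end
    local f = fs.open(p, "r")
    local data = f.readAll()
    f.close()
    return data
end

local website = http.get(url)
if website then
    local remote = website.readAll()
    website.close()
    if remote ~= readLocal(path) then
        -- Remote differs (or no local copy): save the new version.
        local f = fs.open(path, "w")
        f.write(remote)
        f.close()
        print("Updated " .. path)
    else
        print("Already up to date")
    end
else
    printError("Could not check for updates")
end
```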
6 posts
Posted 10 August 2017 - 04:32 PM
I'm using this:
local function wget(option, url, ziel)
    if type(url) ~= "string" and type(ziel) ~= "string" then
        return
    elseif type(option) == "string" and option ~= "-f" and type(url) == "string" then
        -- Called without an option: shift the arguments along.
        ziel = url
        url = option
    end
    if http.checkURL(url) then
        if fs.exists(ziel) and option ~= "-f" then
            printError("<Error> Target exists already")
            return
        else
            term.write("Downloading ... ")
            local timer = os.startTimer(60)
            http.request(url)
            while true do
                local event, id, data = os.pullEvent()
                if event == "http_success" then
                    os.cancelTimer(timer)
                    print("success")
                    local f = io.open(ziel, "w")
                    f:write(data.readAll())
                    f:close()
                    data.close()
                    print("Saved as " .. ziel)
                    return true
                elseif event == "timer" and timer == id then
                    printError("<Error> Timeout")
                    return
                elseif event == "http_failure" then
                    os.cancelTimer(timer)
                    printError("<Error> Download")
                    return
                end
            end
        end
    else
        printError("<Error> URL")
        return
    end
end
It checks for the common error cases, applies a 60-second timeout, and overwrites an existing target only when the -f option is given.
In addition it also prints what it is doing to the screen.
wget("http://example.org/", "testing")
wget("-f", "http://example.org/", "testing")
This will download the page at http://example.org/ and save it as a file named testing; the second call overwrites the file if it already exists.
Edited on 10 August 2017 - 02:32 PM