java - Reading and saving the full HTML contents of a URL to a text file
Requirement:
- read the HTML of the website "http://www.twitter.com"
- print the retrieved HTML
- save it to a text file on the local machine
Code:

import java.net.*;
import java.io.*;

public class Oddless {
    public static void main(String[] args) throws Exception {
        URL oracle = new URL("http://www.fetagracollege.org");
        BufferedReader in = new BufferedReader(new InputStreamReader(oracle.openStream()));
        OutputStream os = new FileOutputStream("/users/rohan/new_sourcee.txt");
        String inputLine;
        while ((inputLine = in.readLine()) != null) {
            System.out.println(inputLine);           // print the line to the console
            os.write((inputLine + "\n").getBytes()); // write the same line to the file
        }
        in.close();
        // note: os is never flushed or closed before the program exits
    }
}
The code above retrieves the data and prints it on the console, but the saved text file contains only about half of the HTML: it stops partway through, seemingly at a blank line in the HTML source, and nothing past that point is saved.
Questions:
- How can I save the full HTML code?
- Are there other alternatives?
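One likely fix, as a minimal sketch: the posted code never flushes or closes the output stream, and a truncated file is a common symptom of that. Using try-with-resources (Java 7+) guarantees both streams are closed; the URL and file path below are taken from the question, and the class name SavePage is just for illustration.

import java.io.BufferedReader;
import java.io.FileWriter;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.net.URL;

public class SavePage {
    public static void main(String[] args) throws Exception {
        URL url = new URL("http://www.fetagracollege.org");
        // try-with-resources closes (and flushes) both streams, even on exceptions
        try (BufferedReader in = new BufferedReader(new InputStreamReader(url.openStream()));
             PrintWriter out = new PrintWriter(new FileWriter("/users/rohan/new_sourcee.txt"))) {
            String line;
            while ((line = in.readLine()) != null) {
                System.out.println(line); // print to the console
                out.println(line);        // and write the same line to the file
            }
        }
    }
}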
I have used a different approach and received the same output as you. Isn't the problem on the server side of that URL?
import org.apache.http.HttpEntity;
import org.apache.http.client.methods.CloseableHttpResponse;
import org.apache.http.client.methods.HttpGet;
import org.apache.http.impl.client.CloseableHttpClient;
import org.apache.http.impl.client.HttpClients;
import org.apache.http.util.EntityUtils;

CloseableHttpClient httpclient = HttpClients.createDefault();
HttpGet httpGet = new HttpGet("http://www.fetagracollege.org");
CloseableHttpResponse response1 = httpclient.execute(httpGet);
try {
    System.out.println(response1.getStatusLine());
    HttpEntity entity1 = response1.getEntity();
    String content = EntityUtils.toString(entity1);
    System.out.println(content);
} finally {
    response1.close();
}
It finishes with:
</table> <p><br>
Update: the Faculty of Engineering & Technology site does not have a fully formed home page. The retrieved content is complete, and the code works well. The commenters are right, though: you should use a try/catch/finally block.
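As a sketch of that advice: in HttpClient 4.3+, CloseableHttpClient and CloseableHttpResponse both implement Closeable, so try-with-resources gives the same guarantee as an explicit finally block (the class name FetchPage is just for illustration).

import org.apache.http.HttpEntity;
import org.apache.http.client.methods.CloseableHttpResponse;
import org.apache.http.client.methods.HttpGet;
import org.apache.http.impl.client.CloseableHttpClient;
import org.apache.http.impl.client.HttpClients;
import org.apache.http.util.EntityUtils;

public class FetchPage {
    public static void main(String[] args) throws Exception {
        // both the client and the response are closed automatically,
        // even if EntityUtils.toString throws
        try (CloseableHttpClient client = HttpClients.createDefault();
             CloseableHttpResponse response = client.execute(new HttpGet("http://www.fetagracollege.org"))) {
            System.out.println(response.getStatusLine());
            HttpEntity entity = response.getEntity();
            System.out.println(EntityUtils.toString(entity));
        }
    }
}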