I've tried other approaches for downloading information from a URL, but I need something faster. I have to download and parse roughly 250 separate pages, and I'd like the app not to feel extremely slow. This is the code I currently use to retrieve a single page; any insight would be great.
import java.io.BufferedInputStream;
import java.io.InputStream;
import java.net.URL;
import java.net.URLConnection;

import org.apache.http.util.ByteArrayBuffer;

import android.util.Log;

try
{
    // Open a connection to the page and wrap the stream for buffered reads
    URL myURL = new URL("http://www.google.com");
    URLConnection ucon = myURL.openConnection();
    InputStream inputStream = ucon.getInputStream();
    BufferedInputStream bufferedInputStream = new BufferedInputStream(inputStream);

    // Accumulate the response one byte at a time
    ByteArrayBuffer byteArrayBuffer = new ByteArrayBuffer(50);
    int current = 0;
    while ((current = bufferedInputStream.read()) != -1) {
        byteArrayBuffer.append((byte) current);
    }
    bufferedInputStream.close();

    // Convert the downloaded bytes into a String
    tempString = new String(byteArrayBuffer.toByteArray());
}
catch (Exception e)
{
    Log.i("Error", e.toString());
}
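For context, one direction I've been considering is reading the response in larger chunks instead of one byte at a time, and running the downloads on a thread pool rather than sequentially. Below is a minimal sketch of that idea; the fetchPage helper, the 8 KB buffer size, the pool size of 8, and the placeholder URL list are all just assumptions on my part, not tuned values. Would something along these lines be the right way to go?

import java.io.BufferedInputStream;
import java.io.ByteArrayOutputStream;
import java.io.InputStream;
import java.net.URL;
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class ParallelFetchSketch {

    // Hypothetical helper: download one page, reading in 8 KB chunks
    // instead of one byte at a time.
    static String fetchPage(String pageUrl) throws Exception {
        URL url = new URL(pageUrl);
        InputStream in = new BufferedInputStream(url.openConnection().getInputStream());
        try {
            ByteArrayOutputStream out = new ByteArrayOutputStream();
            byte[] buffer = new byte[8192];
            int read;
            while ((read = in.read(buffer)) != -1) {
                out.write(buffer, 0, read);
            }
            return out.toString("UTF-8");
        } finally {
            in.close();
        }
    }

    public static void main(String[] args) throws Exception {
        // Placeholder list standing in for the ~250 real URLs.
        List<String> urls = new ArrayList<String>();
        urls.add("http://www.google.com");

        // Pool size of 8 is an arbitrary guess, not a tuned value.
        ExecutorService pool = Executors.newFixedThreadPool(8);
        List<Future<String>> results = new ArrayList<Future<String>>();
        for (final String u : urls) {
            results.add(pool.submit(new Callable<String>() {
                public String call() throws Exception {
                    return fetchPage(u);
                }
            }));
        }

        // Collect each page as its download finishes.
        for (Future<String> f : results) {
            String page = f.get();
            // parse the page here
        }
        pool.shutdown();
    }
}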