21

I wanted to write a Java-based web crawler as an experiment. I heard that writing a web crawler in Java is a good way to go if it is your first time. However, I have two important questions:

  1. How does my program "access" or "connect to" a web page? Please give a brief explanation. (I understand the basics of the layers of abstraction from hardware up to software; here I am interested in the Java abstractions.)

  2. What libraries should I use? I assume I need a library for connecting to web pages, a library for the HTTP/HTTPS protocol, and a library for HTML parsing.


12 answers

15

Crawler4j is the best solution for you.

Crawler4j is an open-source Java crawler which provides a simple interface for crawling the Web. You can set up a multi-threaded web crawler in 5 minutes!

Also visit this page for more Java-based web crawler tools and a brief explanation of each.
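
As a rough sketch of what a minimal crawler4j setup might look like (this assumes the crawler4j 4.x API; the seed URL and storage folder are placeholders):

    import edu.uci.ics.crawler4j.crawler.CrawlConfig;
    import edu.uci.ics.crawler4j.crawler.CrawlController;
    import edu.uci.ics.crawler4j.crawler.Page;
    import edu.uci.ics.crawler4j.crawler.WebCrawler;
    import edu.uci.ics.crawler4j.fetcher.PageFetcher;
    import edu.uci.ics.crawler4j.parser.HtmlParseData;
    import edu.uci.ics.crawler4j.robotstxt.RobotstxtConfig;
    import edu.uci.ics.crawler4j.robotstxt.RobotstxtServer;
    import edu.uci.ics.crawler4j.url.WebURL;

    public class MyCrawler extends WebCrawler {

        @Override
        public boolean shouldVisit(Page referringPage, WebURL url) {
            // Stay on a single domain; adjust the filter to your needs
            return url.getURL().startsWith("https://www.example.com/");
        }

        @Override
        public void visit(Page page) {
            if (page.getParseData() instanceof HtmlParseData) {
                HtmlParseData html = (HtmlParseData) page.getParseData();
                System.out.println(page.getWebURL().getURL() + " -> "
                        + html.getOutgoingUrls().size() + " links");
            }
        }

        public static void main(String[] args) throws Exception {
            CrawlConfig config = new CrawlConfig();
            config.setCrawlStorageFolder("/tmp/crawler4j");   // intermediate crawl data
            PageFetcher fetcher = new PageFetcher(config);
            RobotstxtServer robots = new RobotstxtServer(new RobotstxtConfig(), fetcher);

            CrawlController controller = new CrawlController(config, fetcher, robots);
            controller.addSeed("https://www.example.com/");
            controller.start(MyCrawler.class, 4);             // 4 crawler threads
        }
    }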

answered 2012-11-18T01:46:19.430
11

This is how a program "accesses" or "connects to" a web page:

    import java.io.BufferedReader;
    import java.io.IOException;
    import java.io.InputStreamReader;
    import java.net.MalformedURLException;
    import java.net.URL;

    // ...

    URL url;
    BufferedReader reader = null;
    String line;

    try {
        url = new URL("http://stackoverflow.com/");
        // openStream() opens an HTTP connection and returns the response body.
        // DataInputStream.readLine() is deprecated, so wrap the stream in a BufferedReader instead.
        reader = new BufferedReader(new InputStreamReader(url.openStream()));

        while ((line = reader.readLine()) != null) {
            System.out.println(line);
        }
    } catch (MalformedURLException mue) {
        mue.printStackTrace();
    } catch (IOException ioe) {
        ioe.printStackTrace();
    } finally {
        if (reader != null) {
            try {
                reader.close();
            } catch (IOException ioe) {
                // nothing to see here
            }
        }
    }

This downloads the source of the HTML page.

For HTML parsing, see this.

Also take a look at jSpider and jsoup.

answered 2012-07-01T13:51:35.860
6

There are now many Java-based HTML parsers that support fetching and parsing HTML pages.

Here's a complete list of HTML parsers with a basic comparison.

answered 2014-11-24T07:40:19.060
4

For parsing content, I'm using Apache Tika.
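
If you go the Tika route, a minimal sketch might look like this (it assumes the tika-parsers dependency is on the classpath; the file name is a placeholder):

    import java.io.File;
    import org.apache.tika.Tika;

    public class TikaExample {
        public static void main(String[] args) throws Exception {
            Tika tika = new Tika();
            // Detects the content type and extracts plain text from HTML, PDF, etc.
            String text = tika.parseToString(new File("page.html"));
            System.out.println(text);
        }
    }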

answered 2012-12-10T14:37:22.487
4

Have a look at these existing projects if you want to learn how it can be done:

A typical crawler process is a loop consisting of fetching, parsing, link extraction, and processing of the output (storing, indexing). Though the devil is in the details, i.e. how to be "polite" and respect robots.txt, meta tags, redirects, rate limits, URL canonicalization, infinite depth, retries, revisits, etc.

[Flow diagram courtesy of Norconex HTTP Collector.]
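
As a very rough illustration of that fetch / parse / link-extraction / process loop, here is a sketch that uses jsoup for fetching and link extraction and deliberately ignores politeness, robots.txt, and error handling (the seed URL and the visit limit are placeholders):

    import java.util.ArrayDeque;
    import java.util.Deque;
    import java.util.HashSet;
    import java.util.Set;
    import org.jsoup.Jsoup;
    import org.jsoup.nodes.Document;
    import org.jsoup.nodes.Element;

    public class TinyCrawler {
        public static void main(String[] args) throws Exception {
            Deque<String> frontier = new ArrayDeque<>();
            Set<String> visited = new HashSet<>();
            frontier.add("https://www.example.com/");

            while (!frontier.isEmpty() && visited.size() < 50) {
                String url = frontier.poll();
                if (!visited.add(url)) continue;              // skip already-visited URLs

                Document doc = Jsoup.connect(url).get();       // fetch + parse
                System.out.println(url + " : " + doc.title()); // "process" step

                for (Element link : doc.select("a[href]")) {   // link extraction
                    String next = link.absUrl("href");
                    if (!next.isEmpty()) {
                        frontier.add(next);
                    }
                }
            }
        }
    }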

answered 2018-09-05T09:25:23.477
3

I'd like to propose another solution that no one has mentioned. There is a library called Selenium; it is an open-source automation tool used for testing web applications, but it is certainly not limited to that. You can write a web crawler with it and benefit from the automation by driving the browser just as a human would.

As an illustration, I will give you a quick tutorial to show how it works. If you don't feel like reading this post, take a look at this Video to understand what this library can offer for crawling web pages.

Selenium Components

To begin with, Selenium consists of several components that run together in a single process and perform their actions from your Java program. The main component is called WebDriver, and it must be included in your program for Selenium to work properly.

Go to the following site here and download the latest release for your operating system (Windows, Linux, or macOS). It is a ZIP archive containing chromedriver.exe. Save it on your computer and extract it to a convenient location such as C:\WebDrivers\User\chromedriver.exe. We will use this location later in the Java program.

The next step is to include the jar library. Assuming you use a Maven project to build the Java program, you need to add the following dependency to your pom.xml:

<dependency>
 <groupId>org.seleniumhq.selenium</groupId>
 <artifactId>selenium-java</artifactId>
 <version>3.8.1</version>
</dependency>

Selenium WebDriver Setup

Let us get started with Selenium. The first step is to create a ChromeDriver instance:

System.setProperty("webdriver.chrome.driver", "C:\\WebDrivers\\User\\chromedriver.exe");
WebDriver driver = new ChromeDriver();

Now it's time to get deeper into the code. The following example shows a simple program that opens a web page and extracts some useful HTML components. It is easy to understand, as it has comments that explain the steps clearly. Please take a brief look to understand how to capture the elements:

//Launch website
      driver.navigate().to("http://www.calculator.net/");

      //Maximize the browser
      driver.manage().window().maximize();

      // Click on Math Calculators
      driver.findElement(By.xpath(".//*[@id = 'menu']/div[3]/a")).click();

      // Click on Percent Calculators
      driver.findElement(By.xpath(".//*[@id = 'menu']/div[4]/div[3]/a")).click();

      // Enter value 10 in the first number of the percent Calculator
      driver.findElement(By.id("cpar1")).sendKeys("10");

      // Enter value 50 in the second number of the percent Calculator
      driver.findElement(By.id("cpar2")).sendKeys("50");

      // Click Calculate Button
      driver.findElement(By.xpath(".//*[@id = 'content']/table/tbody/tr[2]/td/input[2]")).click();


      // Get the Result Text based on its xpath
      String result =
         driver.findElement(By.xpath(".//*[@id = 'content']/p[2]/font/b")).getText();


      // Print the result to the console
      System.out.println(" The Result is " + result);

Once you are done with your work, the browser window can be closed with:

driver.quit();

Selenium Browser Options

There is a lot of functionality you can implement when working with this library. For example, assuming you are using Chrome, you can add to your code:

ChromeOptions options = new ChromeOptions();

Take a look at how we can use WebDriver to load Chrome extensions using ChromeOptions:

options.addExtensions(new File("src\\test\\resources\\extensions\\extension.crx"));

This is for using Incognito mode:

options.addArguments("--incognito");

This one disables JavaScript and info bars:

options.addArguments("--disable-infobars");
options.addArguments("--disable-javascript");

And this one makes the browser scrape silently, hiding the crawling in the background (headless mode):

options.addArguments("--headless");

Once you are done with the options, pass them to the driver:

WebDriver driver = new ChromeDriver(options);

To sum up, let's see what Selenium has to offer that makes it a unique choice compared with the other solutions proposed in this post so far:

  • Language and Framework Support
  • Open Source Availability
  • Multi-Browser Support
  • Support Across Various Operating Systems
  • Ease Of Implementation
  • Reusability and Integrations
  • Parallel Test Execution and Faster Go-to-Market
  • Easy to Learn and Use
  • Constant Updates
answered 2019-08-03T11:56:18.367
2

I recommend using the HttpClient library. You can find examples here.
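
For instance, a minimal GET request with Apache HttpClient 4.x might look something like this (the target URL is a placeholder):

    import org.apache.http.client.methods.CloseableHttpResponse;
    import org.apache.http.client.methods.HttpGet;
    import org.apache.http.impl.client.CloseableHttpClient;
    import org.apache.http.impl.client.HttpClients;
    import org.apache.http.util.EntityUtils;

    public class HttpClientExample {
        public static void main(String[] args) throws Exception {
            try (CloseableHttpClient client = HttpClients.createDefault()) {
                HttpGet get = new HttpGet("https://www.example.com/");
                try (CloseableHttpResponse response = client.execute(get)) {
                    // Read the response body as a String (the raw HTML)
                    String html = EntityUtils.toString(response.getEntity());
                    System.out.println(html);
                }
            }
        }
    }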

answered 2012-07-01T13:58:45.310
2

I would prefer crawler4j. Crawler4j is an open-source Java crawler which provides a simple interface for crawling the Web. You can set up a multi-threaded web crawler in a few hours.

answered 2014-02-22T01:02:43.367
1

I think jsoup is better than the others; jsoup runs on Java 1.5 and up, Scala, Android, OSGi, and Google App Engine.
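
A minimal fetch-and-parse with jsoup might look like this (the URL is a placeholder):

    import org.jsoup.Jsoup;
    import org.jsoup.nodes.Document;
    import org.jsoup.nodes.Element;

    public class JsoupExample {
        public static void main(String[] args) throws Exception {
            // Fetch the page over HTTP and parse it into a DOM in one call
            Document doc = Jsoup.connect("https://www.example.com/").get();
            System.out.println("Title: " + doc.title());

            // Extract all hyperlinks as absolute URLs
            for (Element link : doc.select("a[href]")) {
                System.out.println(link.absUrl("href"));
            }
        }
    }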

answered 2015-01-03T12:19:42.820
0

You can explore Apache Droids or Apache Nutch to get a feel for Java-based crawlers.

answered 2012-07-01T18:06:08.843
0

Though it is mainly used for unit testing web applications, HttpUnit traverses a website, clicks links, analyzes tables and form elements, and gives you metadata about all the pages. I use it for web crawling, not just for unit testing. - http://httpunit.sourceforge.net/
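
A small sketch of that kind of traversal with HttpUnit might look like this (the URL is a placeholder):

    import com.meterware.httpunit.WebConversation;
    import com.meterware.httpunit.WebLink;
    import com.meterware.httpunit.WebResponse;

    public class HttpUnitExample {
        public static void main(String[] args) throws Exception {
            WebConversation wc = new WebConversation();
            // Fetch the page; HttpUnit keeps cookies and session state across requests
            WebResponse page = wc.getResponse("http://www.example.com/");

            // List every link found on the page
            for (WebLink link : page.getLinks()) {
                System.out.println(link.getURLString());
            }
        }
    }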

answered 2014-02-18T17:50:25.587
0

Here is a list of available crawlers:

https://java-source.net/open-source/crawlers

But I suggest using Apache Nutch.

answered 2017-01-26T07:04:27.610