Saturday, 28 September 2013

Visual Web Ripper: Using External Input Data Sources

Sometimes it is necessary to use external data sources to provide parameters for the scraping process. For example, suppose you have a database with a bunch of ASINs and you need to scrape product information for each of them. In Visual Web Ripper, an input data source provides a list of input values to a data extraction project, and the project is run once for each row of input values.

An input data source is normally used in one of these scenarios:

    To provide a list of input values for a web form
    To provide a list of start URLs
    To provide input values for Fixed Value elements
    To provide input values for scripts

Visual Web Ripper supports the following input data sources:

    SQL Server Database
    MySQL Database
    OleDB Database
    CSV File
    Script (A script can be used to provide data from almost any data source)

To see it in action you can download a sample project that uses an input CSV file with Amazon ASIN codes to generate Amazon start URLs and extract some product data. Place both the project file and the input CSV file in the default Visual Web Ripper project folder (My Documents\Visual Web Ripper\Projects).
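
Visual Web Ripper handles the input rows internally, but conceptually each row of the CSV file becomes one run of the project. Here is a minimal Python sketch of the idea (the file name asins.csv, its asin column, and the URL template are illustrative assumptions, not part of the sample project):

    import csv

    # Each row of the input CSV provides one set of input values;
    # the extraction project runs once per row.
    with open("asins.csv", newline="") as f:
        for row in csv.DictReader(f):  # expects a column named "asin"
            url = "http://www.amazon.com/gp/product/" + row["asin"]
            print(url)  # Visual Web Ripper would use this as a start URL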

For further information, please look at the manual topic explaining how to use an input data source to generate start URLs.


Source: http://extract-web-data.com/visual-web-ripper-using-external-input-data-sources/

Thursday, 26 September 2013

Using External Input Data in Off-the-shelf Web Scrapers

There is a question I have wanted to shed some light on for a long time: “What if I need to scrape several URLs based on data in some external database?“

For example, recently one of our visitors asked a very good question (thanks, Ed):

    “I have a large list of amazon.com asin. I would like to scrape 10 or so fields for each asin. Is there any web scraping software available that can read each asin from a database and form the destination url to be scraped like http://www.amazon.com/gp/product/{asin} and scrape the data?”

This question prompted me to investigate the matter. I contacted several web scraper developers, and they kindly provided detailed answers, which I have summarized below:
Visual Web Ripper

An input data source can be used to provide a list of input values to a data extraction project. A data extraction project will be run once for each row of input values. You can find additional information here.
Web Content Extractor

You can use the -at"filename" command line option to add new URLs from a TXT or CSV file:

    WCExtractor.exe projectfile -at"filename" -s

projectfile – the file name of the project (*.wcepr) to open.
filename – the file name of the CSV or TXT file that contains URLs separated by newlines.
-s – starts the extraction process.
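
For example, given a urls.txt containing one Amazon URL per line (the file name and the ASINs below are illustrative placeholders):

    http://www.amazon.com/gp/product/B00ABC1234
    http://www.amazon.com/gp/product/B00XYZ5678

the extraction would be started with:

    WCExtractor.exe myproject.wcepr -at"urls.txt" -s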

You can find some options and examples here.
Mozenda

Since Mozenda is cloud-based, the external data needs to be loaded up into the user’s Mozenda account. That data can then be easily used as part of the data extraction process. You can construct URLs, search for strings that match your inputs, or carry through several data fields from an input collection and add data to it as part of your output. The easiest way to get input data from an external source is to use the API to populate data into a Mozenda collection (in the user’s account). You can also load data in the Mozenda web console by importing a .csv file, or import it through the agent building tool.

Once the data is loaded into the cloud, you simply initiate building a Mozenda web agent and refer to that Data list. By using the Load page action and the variable from the inputs, you can construct a URL like http://www.amazon.com/gp/product/%asin%.
Helium Scraper

Here is a video showing how to do this with Helium Scraper:


The video shows how to use the input data as URLs and as search terms. There are many other ways you could use this data, way too many to fit in a video. Also, if you know SQL, you could run a query to get the data directly from an external MS Access database, like:
SELECT * FROM [MyTable] IN "C:\MyDatabase.mdb"

Note that the database needs to be a “.mdb” file.
WebSundew Data Extractor

Basically, WebSundew allows using input data from external data sources such as a CSV file, an Excel file, or a database (MySQL, MSSQL, etc.). Here you can see how to do this in the case of an external file, but you can do it with a database in a similar way (you just need to write an SQL query that returns the necessary data). In addition to passing URLs from external sources, you can pass other input parameters as well (input fields, for example).
Screen Scraper

Screen Scraper is really designed to be interoperable with all sorts of databases. We have composed a separate article with a tutorial and a sample project on scraping Amazon products based on a list of their ASINs.


Source: http://extract-web-data.com/using-external-input-data-in-off-the-shelf-web-scrapers/

Wednesday, 25 September 2013

A simple way to turn a website into JSON

Recently, while surfing the web, I stumbled upon a simple web scraping service named Web Scrape Master. It is a kind of RESTful web service that extracts data from a specified web site and returns it to you in JSON format.
How it works

Though I don’t know what this service may be useful for, I still like its simplicity: all you need to do is make an HTTP GET request, passing all the necessary parameters in the query string:
http://webscrapemaster.com/api/?url={url}&xpath={xpath}&attr={attr}&callback={callback}

    url – the URL of the website you want to scrape
    xpath – an XPath expression determining the data you need to extract
    attr – the name of the attribute whose value you want to get (optional)
    callback – a JSON callback function (optional)

For example, for the following request to our testing ground:

http://webscrapemaster.com/api/?url=http://testing-ground.extract-web-data.com/blocks&xpath=//div[@id=case1]/div[1]/span[1]/div

You will get the following response:

[{"text":"<div class='name'>Dell Latitude D610-1.73 Laptop Wireless Computer</div>","attrs":{"class":"name"}}]
Visual Web Scraper

Also, this service offers a special visual tool for building such requests. All you need to do is enter the URL of the website and click the element you need to scrape.
Conclusion

Though I understand that the developer of this service is attempting to create a simple web scraping service, it is still hard to imagine where it can be useful. The task the service performs can easily be accomplished in almost any programming language.

If you already have software that consumes JSON from the web and you want to feed it data from some website, then you may find this service useful. The other possible application is hiding your IP when you do web scraping. If you have other ideas, it would be great if you shared them with us.



Source: http://extract-web-data.com/a-simple-way-to-turn-a-website-into-json/

Tuesday, 24 September 2013

Selenium IDE and Web Scraping

Selenium is a browser automation framework that includes an IDE, a Remote Control server, and bindings in various flavors, including Java, .Net, Ruby, Python, and others. In this post we touch on the basic structure of the framework and its application to web scraping.
What is Selenium IDE


Selenium IDE is an integrated development environment for Selenium scripts. It is implemented as a Firefox plugin that records browser interactions so they can be edited and replayed, which works well for composing and debugging software tests. Selenium Remote Control is a server, specific to a particular environment, that lets custom scripts drive the controlled browsers. Selenium deploys on Windows, Linux, and Mac OS X. You can read here how the various Selenium components are supported by the major browsers.
What Selenium does and its application to Web Scraping

Basically, Selenium automates browsers, and this ability can no doubt be applied to web scraping. Since browsers (and Selenium) support JavaScript, jQuery, and other techniques for working with dynamic content, why not use this mix in web scraping, rather than trying to catch Ajax events with plain code? The second reason for this kind of scrape automation is browser-fashion data access (though today most libraries emulate this).

Yes, Selenium works to automate browsers, but how do you control Selenium from a custom script to automate a browser for web scraping? There are Selenium libraries (bindings) for PHP and other languages that allow scripts to call and use Selenium. It is possible to write Selenium clients (using those libraries) in almost any language we prefer, for example Perl, Python, Java, or PHP. Those libraries (APIs), along with the Java-written server that invokes browsers for actions, constitute Selenium RC (Remote Control). Remote Control automatically loads the Selenium Core into the browser to control it. For more details on the Selenium components, refer here.
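
For instance, an RC client in Python is only a few lines. A rough sketch (example.com is a placeholder, and a Selenium RC server is assumed to be listening on localhost:4444):

    from selenium import selenium  # the classic RC client binding

    # Ask the RC server to launch Firefox against the given base URL.
    sel = selenium("localhost", 4444, "*firefox", "http://www.example.com/")
    sel.start()
    sel.open("/")                       # path relative to the base URL
    sel.wait_for_page_to_load("30000")  # timeout in milliseconds
    print(sel.get_title())
    sel.stop()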



A tough scraping task for a programmer

“…cURL is good, but it is very basic. I need to handle everything manually; I am creating HTTP requests by hand. This gets difficult – I need to do a lot of work to make sure that the requests I send are exactly the same as the requests a browser would send, both for my sake and for the website’s sake. (For my sake because I want to get the right data, and for the website’s sake because I don’t want to cause error messages or other problems on their site because I sent a bad request that messed with their web application.) And if there is any important JavaScript, I need to imitate it with PHP. It would be a great benefit to me to be able to control a browser like Firefox with my code. It would solve all my problems regarding the emulation of a real browser… it seems that Selenium will allow me to do this…” – Ryan S

Yes, that’s what we will consider below.
Scrape with Selenium

In order to create scripts that interact with the Selenium Server (Selenium RC, Selenium Remote WebDriver) or to create a local Selenium WebDriver script, you need to use language-specific client drivers (also called Formatters; they are included in the selenium-ide-1.10.0.xpi package). The Selenium servers, drivers, and bindings are available on the Selenium download page.
The basic recipe for scraping with Selenium:

    Use the Chrome or Firefox browser.
    Get Firebug or Chrome Dev Tools (Ctrl+Shift+I) into action.
    Install the requirements (Remote Control or WebDriver, libraries, and so on).
    Selenium IDE: record a ‘test’ run through a site, adding some assertions.
    Export it as a Python (or other language) script.
    Edit the script (loops, data extraction, db input/output); see the sketch below.
    Run the script against the Remote Control.
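
An edited export might end up looking something like the following. This is a minimal sketch using the Python WebDriver bindings; the product URL and the XPath are placeholders for illustration, not taken from a real export:

    from selenium import webdriver

    # Launch a Firefox instance controlled by Selenium.
    driver = webdriver.Firefox()
    try:
        # Placeholder product page; in a real script this would come
        # from a loop over your input data.
        driver.get("http://www.amazon.com/gp/product/B00ABC1234")
        # Extract an element from the rendered (JavaScript-executed) page;
        # the XPath here is illustrative only.
        title = driver.find_element_by_xpath("//span[@id='productTitle']")
        print(title.text)
    finally:
        driver.quit()  # release the browser instance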

Short intro slides on scraping tough websites with Python & Selenium are here (as Google Docs slides) and here (SlideShare).
Selenium components for Firefox installation guide

For how to install the Selenium IDE in Firefox, see here, starting at slide 21. The Selenium Core and Remote Control installation instructions are there too.
Extracting dynamic content using jQuery/JavaScript with Selenium

One programmer is doing a similar thing; his steps are below, followed by a rough sketch:

1. launch a Selenium RC (Remote Control) server
2. load a page
3. inject the jQuery script
4. select the contents of interest using jQuery/JavaScript
5. send them back to the PHP client as JSON
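
The same flow can be sketched with the Python WebDriver bindings in place of the RC server and PHP client described above (the page URL and the selector are placeholders, and jquery.min.js is assumed to be available locally):

    from selenium import webdriver

    driver = webdriver.Firefox()
    try:
        driver.get("http://example.com")  # step 2: load a page (placeholder URL)
        # Step 3: inject the jQuery script into the loaded page.
        driver.execute_script(open("jquery.min.js").read())
        # Steps 4-5: select the contents of interest with jQuery;
        # execute_script serializes the returned array back to the client.
        names = driver.execute_script(
            "return $('.name').map(function() { return $(this).text(); }).get();")
        print(names)
    finally:
        driver.quit()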

He finds it particularly easy and convenient to use jQuery for screen scraping, rather than PHP/XPath.
Conclusion

Selenium IDE is a popular tool for browser automation, mostly for its software testing applications, yet web scraping techniques for tough, dynamic websites may also be implemented with the IDE along with the Selenium Remote Control server. These are the basic steps for it:

    Record the ‘test’ browser behavior in the IDE and export it as a script in the programming language of your choice.
    Run the script on the Remote Control server, which drives the browser to send HTTP requests; the script then catches the Ajax-powered responses to extract the content.

Selenium-based web scraping is an easy task for small-scale projects, but it consumes a lot of memory, since it launches a new browser instance for each request.



Source: http://extract-web-data.com/selenium-ide-and-web-scraping/

Monday, 23 September 2013

Data Entry Outsourcing Eases Handling of Your Business

Running a business of any kind successfully is not an easy task, and as a business owner you must put in a lot of effort in this direction. There are different aspects of a business that you need to monitor constantly to see how the business is actually doing. Data entry is one such aspect, and it needs to be handled properly to make your business a successful venture. There are many other aspects, each with its own importance, so as a business owner it is your prerogative to decide which ones are a priority for your business. Often it is not possible for the business owner to take care of every aspect of the business, as he may not have the professional qualifications to do so. In such a scenario, outsourcing is an option that can take care of this.

Data entry outsourcing is one aspect of business that is undertaken on a huge scale by several companies. Global statistics on outsourcing indicate that the practice is on the rise and that many companies have benefited immensely from it. One of the main reasons it has become such a common phenomenon is that the services are available from highly qualified professionals at a very low cost. Outsourcing companies offer a variety of data entry services, so no matter what type of data entry you require, everything will be taken care of by these service providers.

Having your business records in the correct shape is very important if you want to make the business a success. Organizations need data entry on a daily basis, and if it is done on time you can keep all the records in just the right order. You may therefore require the services of data entry outsourcing professionals on a daily, weekly, or monthly basis; it depends on the kind of business you are running, and you have to decide what type of data entry outsourcing services you want for it. Today, maintaining all of a company's records through manual data entry is passé; in fact, with the huge amount of data and other information any business possesses, it is not possible at all.

While you are seeking an outsourcing company to help you take care of this work, you have to be careful about certain aspects. You will be handing over important elements of your business to a third party, so you need to check the company's credentials. Make sure that you get the work done by a reputable company and do not fall prey to any of the fake companies operating in the market. The business is yours, and it is your responsibility to ensure that you hire the best firm to handle your data entry outsourcing work.




Source: http://ezinearticles.com/?Data-Entry-Outsourcing-Eases-Handling-of-Your-Business&id=566609

Friday, 20 September 2013

Data Entry Services has Tremendous Potential

Running a business is not a very difficult thing to do, but running it successfully surely needs some tremendous effort on your part. There are many things that need to be handled properly for a business to be successful, and data entry is one of the vital elements. Data entry is a small but nonetheless very significant aspect of a business that you need to take care of. Outsourcing is a business process that is increasingly being used to take care of the data entry aspect; in fact, the process of outsourcing has made things simpler for business owners.

Data entry services are provided by several companies involved in outsourcing work. Such companies specialize in providing different types of services, including data entry, to companies that are looking for professional support to unburden themselves of their heavy workload. For the smooth running and proper management of any type of business, getting data entry done properly is crucial. So if you own a business and want to manage it properly, you should opt for hiring data entry services.

Each business is different, and business needs also differ vastly. So, depending on the type of business you run, you can hire data entry services accordingly. You may need data entry services of different types, like offline or online data entry, insurance claim entry, banking information entry, or building up a huge database with various other types of entry. The demand for data entry is increasing day by day, and at present it has turned out to be very lucrative as well.

Always hire data entry services from outsourcing companies that provide the services of extremely talented professionals. You are spending a huge amount of money, and for no reason must you settle for anything less. Talking with someone who has already used such services helps a great deal, as they can tell you exactly what the advantages and disadvantages of using the service are. You can get customized services for small, mid-size, and large-scale businesses from the companies that provide data entry services. Data entry is not a complicated application, but it is extremely time consuming, and this is one of the major reasons why a business should take on data entry services.

Each business has different things to take care of, and it cannot afford to waste valuable time on data entry work. Professionals who work in data entry are specially trained to suit the requirements of particular businesses. So you do not have to worry about whether the company providing data entry services will be able to do the work as per your needs. You can just hire data entry services and rest easy; all your work will be done efficiently by the professionals working there.

Data entry services have received a tremendous boost, and this is one aspect of business that is here to stay for a long time. So hiring data entry services for your business will be just perfect.




Source: http://ezinearticles.com/?Data-Entry-Services-has-Tremendous-Potential&id=472028

Thursday, 19 September 2013

Assuring Scraping Success with Proxy Data Scraping

Have you ever heard of "data scraping?" Data scraping is the process of collecting useful data that has been placed in the public domain of the internet (private areas too, if conditions are met) and storing it in databases or spreadsheets for later use in various applications. Data scraping technology is not new, and many a successful businessman has made his fortune by taking advantage of it.

Sometimes website owners may not derive much pleasure from automated harvesting of their data. Webmasters have learned to disallow web scrapers access to their websites by using tools or methods that block certain IP addresses from retrieving website content. Data scrapers are left with the choice to either target a different website, or move the harvesting script from computer to computer, using a different IP address each time, extracting as much data as possible until all of the scraper's computers are eventually blocked.

Thankfully, there is a modern solution to this problem. Proxy data scraping technology solves the problem by using proxy IP addresses. Every time your data scraping program executes an extraction from a website, the website thinks it is coming from a different IP address. To the website owner, proxy data scraping simply looks like a short period of increased traffic from all around the world. They have very limited and tedious ways of blocking such a script, but more importantly, most of the time they simply won't know they are being scraped.
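
The mechanics are easy to sketch. A minimal Python example using the requests library (the proxy addresses are placeholders, not working servers):

    import itertools
    import requests

    # Placeholder proxies; in practice these would come from your own pool
    # or a rented rotating-proxy service.
    proxies = itertools.cycle([
        "http://proxy1.example.com:8080",
        "http://proxy2.example.com:8080",
        "http://proxy3.example.com:8080",
    ])

    def fetch(url):
        # Each extraction goes out through the next proxy in the rotation,
        # so the target site sees a different source IP every time.
        proxy = next(proxies)
        return requests.get(url, proxies={"http": proxy, "https": proxy})

    print(fetch("http://www.example.com/").status_code)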

You may now be asking yourself, "Where can I get proxy data scraping technology for my project?" The "do-it-yourself" solution is, rather unfortunately, not simple at all. Setting up a proxy data scraping network takes a lot of time and requires that you own a bunch of IP addresses and suitable servers to be used as proxies, not to mention the IT guru you need to get everything configured properly. You could consider renting proxy servers from select hosting providers, but that option tends to be quite pricey, though arguably better than the alternative: dangerous and unreliable (but free) public proxy servers.

There are literally thousands of free proxy servers located around the globe that are simple enough to use. The trick, however, is finding them. Many sites list hundreds of servers, but locating one that is working, open, and supports the type of protocols you need can be a lesson in persistence, trial, and error. However, if you do succeed in discovering a pool of working public proxies, there are still inherent dangers in using them. First off, you don't know who the server belongs to or what activities are going on elsewhere on the server. Sending sensitive requests or data through a public proxy is a bad idea. It is fairly easy for a proxy server to capture any information you send through it or that it sends back to you. If you choose the public proxy method, make sure you never send any transaction through it that might compromise you or anyone else in case disreputable people are made aware of the data.

A less risky scenario for proxy data scraping is to rent a rotating proxy connection that cycles through a large number of private IP addresses. Several companies offer this and claim to delete all web traffic logs, which allows you to harvest the web anonymously with minimal threat of reprisal. Companies such as http://www.Anonymizer.com offer large-scale anonymous proxy solutions, but often carry a fairly hefty setup fee to get you going.

The other advantage is that companies that own such networks can often help you design and implement a custom proxy data scraping program instead of trying to work with a generic scraping bot. After performing a simple Google search, I quickly found one company (www.ScrapeGoat.com) that provides anonymous proxy server access for data scraping purposes. Or, according to their website, if you want to make your life even easier, ScrapeGoat can extract the data for you and deliver it in a variety of formats, often before you could even finish configuring your off-the-shelf data scraping program.

Whichever path you choose for your proxy data scraping needs, don't let a few simple tricks thwart you from accessing all the wonderful information stored on the World Wide Web!




Source: http://ezinearticles.com/?Assuring-Scraping-Success-with-Proxy-Data-Scraping&id=248993