Saturday, 29 June 2013

Pushing Bad Data- Google's Latest Black Eye

Google stopped counting, or at least publicly displaying, the number of pages it indexed in September of 2005, after a school-yard "measuring contest" with rival Yahoo. That count topped out around 8 billion pages before it was removed from the homepage. News broke recently through various SEO forums that Google had suddenly, over the past few weeks, added another few billion pages to the index. This might sound like a reason for celebration, but this "accomplishment" would not reflect well on the search engine that achieved it.

What had the SEO community buzzing was the nature of the fresh, new few billion pages. They were blatant spam: pages full of Pay-Per-Click (PPC) ads and scraped content, and in many cases they were showing up well in the search results, pushing out far older, more established sites in the process. A Google representative responded to the issue via forums, calling it a "bad data push," something that was met with various groans throughout the SEO community.

How did someone manage to dupe Google into indexing so many pages of spam in such a short period of time? I'll provide a high-level overview of the process, but don't get too excited. Just as a diagram of a nuclear explosive isn't going to teach you how to build the real thing, you're not going to be able to run off and do this yourself after reading this article. Yet it makes for an interesting tale, one that illustrates the ugly problems cropping up with ever-increasing frequency in the world's most popular search engine.

A Dark and Stormy Night

Our story begins deep in the heart of Moldova, sandwiched scenically between Romania and Ukraine. In between fending off local vampire attacks, an enterprising local had a brilliant idea and ran with it, presumably away from the vampires... His idea was to exploit how Google handled subdomains, and not just a little bit, but in a big way.

The heart of the issue is that currently, Google treats subdomains much the same way as it treats full domains- as unique entities. This means it will add the homepage of a subdomain to the index and return at some point later to do a "deep crawl." Deep crawls are simply the spider following links from the domain's homepage deeper into the site until it finds everything or gives up and comes back later for more.

Briefly, a subdomain is a "third-level domain." You've probably seen them before; they look something like this: subdomain.domain.com. Wikipedia, for instance, uses them for languages; the English version is "en.wikipedia.org", the Dutch version is "nl.wikipedia.org". Subdomains are one way to organize large sites, as opposed to multiple directories or even separate domain names altogether.
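
For illustration, here is a trivial Python sketch that splits a hostname into its labels to pick out the subdomain part. The hostnames are just examples, and real multi-part suffixes like .co.uk would need a public-suffix list for a fully correct split:

    # Split a hostname into DNS labels and return the subdomain portion.
    def subdomain_of(hostname: str) -> str:
        labels = hostname.split(".")
        # For a simple host.domain.tld name, everything left of the last
        # two labels is the subdomain part.
        return ".".join(labels[:-2]) if len(labels) > 2 else ""

    print(subdomain_of("en.wikipedia.org"))  # -> "en"
    print(subdomain_of("nl.wikipedia.org"))  # -> "nl"
    print(subdomain_of("wikipedia.org"))     # -> ""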

So, we have a kind of page Google will index virtually "no questions asked." It's a wonder no one exploited this situation sooner. Some commentators believe the reason may be that this "quirk" was introduced after the recent "Big Daddy" update. Our Eastern European friend got together some servers, content scrapers, spambots, PPC accounts, and some all-important, very inspired scripts, and mixed them all together thusly...

Five Billion Served- And Counting...

First, our hero here crafted scripts for his servers that would, when GoogleBot dropped by, start generating an essentially endless number of subdomains, each with a single page containing keyword-rich scraped content, keyworded links, and PPC ads for those keywords. Spambots were then sent out to put GoogleBot on the scent via referral and comment spam to tens of thousands of blogs around the world. The spambots provide the broad setup, and it doesn't take much to get the dominos to fall.

GoogleBot finds the spammed links and, as is its purpose in life, follows them into the network. Once GoogleBot is sent into the web, the scripts running the servers simply keep generating pages- page after page, all with a unique subdomain, all with keywords, scraped content, and PPC ads. These pages get indexed and suddenly you've got yourself a Google index 3-5 billion pages heavier in under 3 weeks.

Reports indicate that, at first, the PPC ads on these pages were from AdSense, Google's own PPC service. The ultimate irony, then, is that Google benefits financially from all the impressions being charged to AdSense advertisers as their ads appear across these billions of spam pages. The AdSense revenues from this endeavor were the point, after all: cram in so many pages that, by sheer force of numbers, people would find and click on the ads on those pages, making the spammer a nice profit in a very short amount of time.

Billions or Millions? What is Broken?

Word of this achievement spread like wildfire from the DigitalPoint forums through the rest of the SEO community. The "general public" is, as of yet, out of the loop, and will probably remain so. A response by a Google engineer appeared on a Threadwatch thread about the topic, calling it a "bad data push". Basically, the company line was that they have not, in fact, added 5 billion pages. Later claims include assurances that the issue will be fixed algorithmically. Those following the situation (by tracking the known domains the spammer was using) see only that Google is removing them from the index manually.

The tracking is accomplished using the "site:" command, which, in theory, displays the total number of indexed pages from the site you specify after the colon. Google has already admitted there are problems with this command, and "5 billion pages", they seem to be claiming, is merely another symptom of it. These problems extend beyond the site: command to the displayed result counts for many queries, which some feel are highly inaccurate and in some cases fluctuate wildly. Google admits it has indexed some of these spammy subdomains, but so far hasn't provided any alternate numbers to dispute the 3-5 billion shown initially via the site: command.

Over the past week the number of spammy domains and subdomains indexed has steadily dwindled as Google personnel remove the listings manually. There's been no official statement that the "loophole" is closed. This poses the obvious problem that, since the way has been shown, there will be a number of copycats rushing to cash in before the algorithm is changed to deal with it.

Conclusions

There are, at minimum, two things broken here: the site: command, and the obscure, tiny bit of the algorithm that allowed billions (or at least millions) of spam subdomains into the index. Google's current priority should probably be to close the loophole before they're buried in copycat spammers. The issues surrounding the use or misuse of AdSense are just as troubling for those who might be seeing little return on their advertising budget this month.

Do we "keep the faith" in Google in the face of these events? Most likely, yes. It is not so much whether they deserve that faith, but that most people will never know this happened. Days after the story broke there's still very little mention in the "mainstream" press. Some tech sites have mentioned it, but this isn't the kind of story that will end up on the evening news, mostly because the background knowledge required to understand it goes beyond what the average citizen is able to muster. The story will probably end up as an interesting footnote in that most esoteric and neoteric of worlds, "SEO History."


Source: http://ezinearticles.com/?Pushing-Bad-Data--Googles-Latest-Black-Eye&id=226954

Thursday, 27 June 2013

Top Data Mining Tools

Data mining is important because it means pulling critical information out of vast amounts of data. The key is to find the right tools for examining data from any number of viewpoints and effectively summarizing it into a useful data set.

Many of the tools used to organize this data have become computer based and are typically referred to as knowledge discovery tools.

Listed below are the top data mining tools in the industry:

    Insightful Miner - This tool has the best selection of ETL functions of any data mining tool on the market, allowing the merging, appending, sorting and filtering of data (a minimal pandas sketch of these operations follows this list).
    SQL Server 2005 Data Mining Add-ins for Office 2007 - These are great add-ins for taking advantage of SQL Server 2005 predictive analytics in Office Excel 2007 and Office Visio 2007. The add-ins allow you to go through the entire development lifecycle within Excel 2007 by using either a spreadsheet or external data accessible through your SQL Server 2005 Analysis Services instance.
    RapidMiner - Also known as YALE, this is a comprehensive and arguably world-leading open-source data mining solution. It is widely used by a large number of companies and organizations. Even though it is open-source, the tool provides a secure environment out of the box, along with enterprise-capable support and services, so you will not be left out in the cold.
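
To make the ETL operations mentioned above (merging, appending, sorting, filtering) concrete, here is a minimal sketch in Python using pandas. The tables and column names are invented for illustration, and none of the tools listed are required for it:

    # A tiny ETL-style pass with pandas: append, merge, filter and sort.
    import pandas as pd

    customers = pd.DataFrame({"cust_id": [1, 2, 3],
                              "region": ["East", "West", "East"]})
    orders_q1 = pd.DataFrame({"cust_id": [1, 2], "amount": [120.0, 80.0]})
    orders_q2 = pd.DataFrame({"cust_id": [2, 3], "amount": [60.0, 200.0]})

    orders = pd.concat([orders_q1, orders_q2], ignore_index=True)  # append
    merged = orders.merge(customers, on="cust_id", how="left")     # merge
    result = (merged[(merged["region"] == "East") &                # filter
                     (merged["amount"] > 100)]
              .sort_values("amount", ascending=False))             # sort
    print(result)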

The list is short but ever changing in order to meet the increasing demands of companies to provide useful information from years of data.


Source: http://ezinearticles.com/?Top-Data-Mining-Tools&id=1380551

Tuesday, 25 June 2013

Facts on Data Mining


Data mining is the process of examining a data set to extract certain patterns. Companies use this process to assess how well they are meeting their existing goals, summarizing the information into useful forms to create revenue and/or cut costs. When a search engine accesses a site, it begins to build a list of links from the first page it reaches, and it continues this process throughout the site until it has visited every page it can find. This data includes not only text, but also numbers and facts.

Data mining focuses on consumers in relation to both "internal" (price, product positioning), and "external" (competition, demographics) factors which help determine consumer price, customer satisfaction, and corporate profits. It also provides a link between separate transactions and analytical systems. Four types of relationships are sought with data mining:

o Classes - stored data used to locate items in predetermined groups
o Clusters - data grouped to determine consumer preferences or logical relationships (a short sketch of this case follows the list)
o Associations - used to group products normally bought together (i.e., bacon, eggs; milk, bread)
o Patterns - used to anticipate behavior trends
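
As a concrete illustration of the "clusters" case above, here is a minimal sketch using scikit-learn's k-means. The two features (visits per month and average spend) and their values are invented purely for illustration:

    # Group customers into clusters by purchase behaviour with k-means.
    import numpy as np
    from sklearn.cluster import KMeans

    X = np.array([[2, 15.0], [3, 18.0], [10, 90.0],
                  [12, 110.0], [1, 12.0], [11, 95.0]])
    kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
    print(kmeans.labels_)           # which cluster each customer falls into
    print(kmeans.cluster_centers_)  # rough profile of each preference group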

This process provides numerous benefits to businesses, governments, society, and especially individuals as a whole. It starts with a cleaning step that removes errors and ensures consistency. Algorithms are then used to "mine" the data and establish patterns. As with all new technology, there are positives and negatives. One negative issue that arises from the process is privacy. Although it is against the law, the selling of personal information over the Internet has occurred. Companies have to obtain certain personal information to be able to properly conduct their business. The problem is that the security systems in place are not adequately protecting this information.

From a customer's viewpoint, data mining benefits businesses far more than it benefits the customers themselves. Their personal information is out there, possibly unprotected, and there is nothing they can do until a negative issue arises. On the other hand, from the business side, it helps enhance overall operations and aids in better customer satisfaction. As for governments, they use personal data to tighten security systems and protect the public from terrorism; however, they want to protect people's privacy rights as well. With numerous servers, databases, and websites out there, it becomes increasingly difficult to enforce stricter laws. The more information we introduce to the web, the greater the chances of someone hacking into this data.

Better security systems should be developed before data mining can truly benefit all parties involved. Privacy invasion can ruin people's lives. It can take months, even years, to regain a level of trust that our personal information will be protected. Benefits aside, the safety and well-being of any human being should be top priority.


Source: http://ezinearticles.com/?Facts-on-Data-Mining&id=3640795

Friday, 21 June 2013

Business Intelligence Data Mining

Data mining can be technically defined as the automated extraction of hidden information from large databases for predictive analysis. In other words, it is the retrieval of useful information from large masses of data, which is also presented in an analyzed form for specific decision-making.

Data mining requires the use of mathematical algorithms and statistical techniques integrated with software tools. The final product is an easy-to-use software package that can be used even by non-mathematicians to effectively analyze the data they have. Data Mining is used in several applications like market research, consumer behavior, direct marketing, bioinformatics, genetics, text analysis, fraud detection, web site personalization, e-commerce, healthcare, customer relationship management, financial services and telecommunications.

Business intelligence data mining is used in market research, industry research, and for competitor analysis. It has applications in major industries like direct marketing, e-commerce, customer relationship management, healthcare, the oil and gas industry, scientific tests, genetics, telecommunications, financial services and utilities. BI uses various technologies like data mining, scorecarding, data warehouses, text mining, decision support systems, executive information systems, management information systems and geographic information systems for analyzing useful information for business decision making.

Business intelligence is a broader arena of decision-making that uses data mining as one of its tools. In fact, the use of data mining in BI makes the data more relevant in application. There are several kinds of data mining: text mining, web mining, social network data mining, relational database mining, pictorial data mining, audio data mining and video data mining, all of which are used in business intelligence applications.

Some data mining tools used in BI are: decision trees, information gain, probability, probability density functions, Gaussians, maximum likelihood estimation, Gaussian Bayes classification, cross-validation, neural networks, instance-based learning (case-based, memory-based, non-parametric), regression algorithms, Bayesian networks, Gaussian mixture models, K-means and hierarchical clustering, Markov models and so on. A brief decision-tree sketch follows as an illustration.
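
The sketch below trains a tiny decision tree with scikit-learn to predict whether a customer churns. The two features (months as a customer, support tickets filed) and the labels are entirely made up, so treat it as an illustration of the technique rather than a real model:

    # A minimal decision-tree classifier on invented churn data.
    from sklearn.tree import DecisionTreeClassifier

    X = [[24, 0], [3, 5], [36, 1], [2, 7], [18, 2], [1, 6]]
    y = [0, 1, 0, 1, 0, 1]  # 0 = stays, 1 = churns

    tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)
    print(tree.predict([[4, 4], [30, 0]]))  # predictions for new customers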



Source: http://ezinearticles.com/?Business-Intelligence-Data-Mining&id=196648

Thursday, 20 June 2013

Basics of Web Data Mining and Challenges in Web Data Mining Process

Today the World Wide Web is flooded with billions of static and dynamic web pages created with technologies such as HTML, PHP and ASP. The web is a great source of information, offering a lush playground for data mining. Since the data stored on the web comes in various formats and is dynamic in nature, it is a significant challenge to search, process and present the unstructured information available there.

The complexity of a web page far exceeds the complexity of any conventional text document. Web pages on the internet lack uniformity and standardization, while traditional books and text documents are much simpler in their consistency. Further, search engines, with their limited capacity, cannot index all web pages, which makes data mining extremely inefficient.

Moreover, the Internet is a highly dynamic knowledge resource and grows at a rapid pace. Sports, news, finance and corporate sites update their websites on an hourly or daily basis. Today the web reaches millions of users with different profiles, interests and usage purposes. Every one of them requires good information but may not know how to retrieve relevant data efficiently and with the least effort.

It is important to note that only a small section of the web possesses really useful information. There are three usual methods that a user adopts when accessing information stored on the internet:

• Random surfing, i.e. following the large number of hyperlinks available on a web page.
• Query-based search on search engines - using Google or Yahoo to find relevant documents (entering specific keyword queries of interest in the search box)
• Deep query searches, i.e. querying searchable databases such as eBay.com's product search engine or Business.com's service directory, etc.

To make the web an effective resource for knowledge discovery, researchers have developed efficient data mining techniques to extract relevant data easily, smoothly and cost-effectively.
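
One building block of those techniques is simply fetching a page and extracting its hyperlinks so the crawler can move on to the next page. Here is a minimal sketch using the requests and BeautifulSoup libraries; the starting URL is a placeholder:

    # Fetch one page and collect its outgoing links - the basic step a
    # crawler repeats. Requires: requests, beautifulsoup4.
    import requests
    from bs4 import BeautifulSoup
    from urllib.parse import urljoin

    start_url = "https://example.com/"  # placeholder starting point
    resp = requests.get(start_url, timeout=10)
    soup = BeautifulSoup(resp.text, "html.parser")

    links = {urljoin(start_url, a["href"]) for a in soup.find_all("a", href=True)}
    for link in sorted(links):
        print(link)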


Source: http://ezinearticles.com/?Basics-of-Web-Data-Mining-and-Challenges-in-Web-Data-Mining-Process&id=4937441

Tuesday, 18 June 2013

Usefulness of Web Scraping Services

For any business or organization, surveys and market research play important roles in the strategic decision-making process. Data extraction and web scraping techniques are important tools for finding relevant data and information for your personal or business use. Many companies employ people to copy and paste data manually from web pages. This process can be reliable, but it is very costly in time and effort, because the amount of data collected is small compared to the resources and time spent gathering it.

Nowadays, various data mining companies have developed effective web scraping techniques that can crawl over thousands of websites and their pages to harvest particular information. The information extracted is then stored in a CSV file, database, XML file, or any other required format. After the data has been collected and stored, the data mining process can be used to extract the hidden patterns and trends contained in it. By understanding the correlations and patterns in the data, policies can be formulated, thereby aiding the decision-making process. The information can also be stored for future reference.
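
As a simplified sketch of that crawl-and-store step, the snippet below pulls product names and prices from a single page and writes them to a CSV file. The URL and the CSS selectors are assumptions made for illustration, since every target site is structured differently:

    # Scrape product names and prices from one page and store them in a CSV.
    import csv
    import requests
    from bs4 import BeautifulSoup

    url = "https://example.com/products"  # placeholder target page
    soup = BeautifulSoup(requests.get(url, timeout=10).text, "html.parser")

    rows = []
    for item in soup.select(".product"):  # assumed markup structure
        name = item.select_one(".name")
        price = item.select_one(".price")
        if name and price:
            rows.append([name.get_text(strip=True), price.get_text(strip=True)])

    with open("products.csv", "w", newline="", encoding="utf-8") as f:
        writer = csv.writer(f)
        writer.writerow(["name", "price"])
        writer.writerows(rows)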

The following are some of the common examples of data extraction process:

• Scraping a government portal to extract the names of citizens for a given survey
• Scraping competitor websites for feature data and product pricing
• Using web scraping to download videos and images for a stock photography site or for website design

Automated Data Collection
It is important to note that the web scraping process allows a company to monitor website data changes over a given time frame and to collect the data on a regular, routine basis. Automated data collection techniques are quite important as they help companies discover customer and market trends. By determining market trends, it is possible to understand customer behavior and predict how the data is likely to change.

The following are some of the examples of the automated data collection:

• Monitoring price information for particular stocks on an hourly basis
• Collecting mortgage rates from various financial institutions on a daily basis
• Checking weather reports on a regular basis as required

By using web scraping services it is possible to extract any data that is related to your business. The data can then be downloaded into a spreadsheet or a database to be analyzed and compared. Storing the data in a database or in a required format makes it easier to interpret it, understand the correlations, and identify the hidden patterns.

Through web scraping it is possible to get quicker and more accurate results, saving many resources in terms of money and time. With data extraction services, it is possible to fetch information about pricing, mailing lists, databases, profile data, and competitor data on a consistent basis. With the emergence of professional data mining companies, outsourcing this work will greatly reduce your costs, and at the same time you are assured of high-quality service.


Source: http://ezinearticles.com/?Usefulness-of-Web-Scraping-Services&id=7181014

Sunday, 16 June 2013

Professional Data Entry Services - Ensure Maximum Security for Data


Though a lot of people have concerns about it, professional data entry services can actually ensure maximum security for your data. This is in addition to the quality and cost benefits that outsourcing provides anyway. The precautionary measures for data protection would begin from the time you provide your documents/files for entry to the service provider till completion of the project and delivery of the final output to you. Whether performed onshore or offshore, the security measures are stringent and effective. You only have to make sure you outsource to the right service provider. Making use of the free trials offered by different business process outsourcing companies would help you choose right.

BPO Company Measures for Data Protection and Confidentiality

• Data Remains on Central Servers - The company would ensure that all data remains on the central servers and also that all processing is done only on these servers. No text or images would leave the servers. The company's data entry operators cannot download or print any of this data.

• Original Documents Are Not Circulated - The source files or documents (hard copies) which you give to the service provider are not distributed as such to their staff. This source material is scanned with the help of high-speed document scanners. The data would be keyed from scanned images or extracted using text recognition techniques.

• Source Documents Safely Disposed Of - After use, your source documents would be disposed of in a secure manner. Whenever necessary, the BPO company would get assistance from a certified document destruction company. Such measures would keep your sensitive documents from falling into the hands of unauthorized personnel.

• Confidentiality - All staff would be required to sign confidentiality agreements. They would also be apprised of information protection policies that they would have to abide by. In addition, the different projects of various clients would be handled in segregated areas.

• Security Checks - Surprise security checks would be carried out to ensure that there is adherence to data security requirements when performing data entry services.

• IT Security - All computers used for the project would be password protected. These computers would additionally be provided with international quality anti-virus protection and advanced firewalls. The anti-virus software would be updated promptly.

• Backup - Regular backups would be done of information stored in the system. The backup data would be locked away securely.

• Other Measures - Other advanced measures that would be taken for information protection include maintenance of a material and personnel movement register, firewalls and intrusion detection, 24/7 security manning the company's premises, and 256-bit AES encryption.
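
For readers curious what "256-bit AES encryption" looks like in practice, here is a minimal sketch using the Python cryptography package's AES-GCM primitive. It is purely illustrative and not a description of any particular provider's setup:

    # Encrypt and decrypt a small record with AES-256 in GCM mode.
    # Requires the "cryptography" package; illustrative only.
    import os
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    key = AESGCM.generate_key(bit_length=256)  # 256-bit key
    aesgcm = AESGCM(key)
    nonce = os.urandom(12)                     # must be unique per message

    plaintext = b"customer record: Jane Doe, account 1234"
    ciphertext = aesgcm.encrypt(nonce, plaintext, None)
    assert aesgcm.decrypt(nonce, ciphertext, None) == plaintext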

Take Full Advantage of It

Take advantage of professional data entry services and ensure maximum security for your data. When considering a particular company to outsource to, do ask them about their security measures in addition to their pricing and turnaround.


Source: http://ezinearticles.com/?Professional-Data-Entry-Services---Ensure-Maximum-Security-for-Data&id=6961870

Friday, 14 June 2013

Effectiveness of Web Data Mining Through Web Research


Web data mining is a systematic approach to keyword-based and hyperlink-based web research for gaining business intelligence. It requires analytical skills to understand the hyperlink structure of a given website. Hyperlinks carry an enormous amount of hidden human annotation that can help automatically determine a page's authority. If a webmaster provides a hyperlink pointing to another website or web page, this action is perceived as an endorsement of that webpage. Search engines rely heavily on such endorsements to define the importance of a page and place it higher in organic search results.

However, not every hyperlink is an endorsement, since the webmaster may have used it for other purposes, such as navigation or rendering paid advertisements. It is important to note that authoritative pages rarely provide informative descriptions of themselves. For instance, Google's homepage does not explicitly describe itself as a "web search engine."

These features of hyperlink systems have led researchers to evaluate another important webpage category called hubs. A hub is a unique, informative webpage that offers collections of links to authorities. It may have only a few links pointing to it, but it links to a collection of prominent sites on a single topic. A hub confers authority status on sites that focus on a single topic. Typically, a quality hub points to many quality authorities, and, conversely, a web page that many such hubs link to can be deemed a superior authority.

This approach to identifying authoritative pages has resulted in the development of various popularity algorithms such as PageRank. Google uses the PageRank algorithm to define the authority of each webpage for a relevant search query. By analyzing hyperlink structures and web page content, these search engines can return better-quality search results than term-index engines such as Ask and topic directories such as DMOZ.
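
To make the idea behind such popularity algorithms concrete, here is a toy power-iteration version of PageRank on a four-page link graph. It is a simplified sketch of the published algorithm, not Google's actual implementation, and the link graph is invented:

    # Toy PageRank: repeatedly pass each page's score along its out-links
    # until the scores settle. Pages A-D and their links are made up.
    import numpy as np

    links = {"A": ["B", "C"], "B": ["C"], "C": ["A"], "D": ["C"]}
    pages = sorted(links)
    n = len(pages)

    # Column-stochastic matrix: entry [i, j] is the share of page j's
    # score passed to page i.
    M = np.zeros((n, n))
    for j, src in enumerate(pages):
        for dst in links[src]:
            M[pages.index(dst), j] = 1.0 / len(links[src])

    d = 0.85                      # damping factor
    rank = np.full(n, 1.0 / n)
    for _ in range(50):
        rank = (1 - d) / n + d * M @ rank

    print(dict(zip(pages, rank.round(3))))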



Source: http://ezinearticles.com/?Effectiveness-of-Web-Data-Mining-Through-Web-Research&id=5094403

Wednesday, 12 June 2013

Unraveling the Data Mining Mystery - The Key to Dramatically Higher Profits

Data mining is the art of extracting nuggets of gold from a set of seemingly meaningless and random data. For the web, this data can be in the form of your server hit log, a database of visitors to your website or customers that have actually purchased from your web site at one time or another.

Today, we will look at how examining customer purchases can give you big clues to revising and improving your product selection, offer style, and packaging of products for much greater profits, both from your existing customers and from an increased visitor-to-customer ratio.

To get a feel for this, let's take a look at John, a seller of vitamins and nutritional products on the internet. He has been online for two years and has made a fairly good living selling vitamins and such online, but he knows he can do better and isn't sure how.

John was smart enough to keep all customer sales data in a database which was a good idea because it is now available for analysis. The first step is for John to run several reports from his database.

In this instance, these reports include: repeat customers, repeat customer frequency, most popular items, least popular items, item groups, item popularity by season, item popularity by geographic region, and repeat orders for the same products. Let's take a brief look at each report and how it could guide John to greater profits; a short pandas sketch showing how a couple of these reports might be computed follows the list.

    Repeat Customers - If I know who my repeat customers are, I can make special offers to them via email or (if automated) offer them surprise incentive coupons at the checkout stand for being such good customers.
    Repeat Customer Frequency - By knowing how often your customer buys from you, you can start tailoring automatic ship programs for that customer where every so many weeks, you will automatically ship the products the customer needs without the hassle of reordering. It shows the customer that you really value his time and appreciate his business.
    Repeat Orders - By knowing what a customer repeatedly buys and by knowing your other products, you can suggest additional complementary products for the customer to add to the order. You could even throw in free samples for the customer to try. And of course, you should try to get the customer on an auto-ship program.
    Most Popular Items - By knowing what items are purchased the most, you will know what items to highlight in your web site and what items would best be used as a loss-leader in a sale or packaged with other less popular items. If a popular product costs $20 and it is bundled with another $20 product and sold for $35, people will buy the bundle for the savings provided they perceive a need of some sort for the other product.
    Least Popular Items - This fact is useful for inventory control and for bundling (described above.) It is also useful for possible special sales to liquidate unpopular merchandise.
    Item Groups - Understanding item groups is very important in a retail environment. By understanding how customers typically buy groups of products, you can redesign your display and packaging of items for sale to take advantage of this trend. For instance, if lots of people buy both Vitamin A and Vitamin C, it might make sense to bundle the two together at a small discount to move more product, or at least put a hint on their respective web pages that they go great together.
    Item Popularity by Season - Some items sell better in certain seasons than others. For instance, Vitamin C may sell better in winter than summer. By knowing the seasonality of your products, you will gain insight into what should be featured on your website and when.
    Item Popularity by Geographic Region - If you can find regional buying patterns in your customer base, you have a great opportunity for personalized, targeted mailings of specific products and product groups to each geographic region. Any time you can be more specific in your offering, your close percentage increases.
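
Here is the promised sketch: two of the reports above (repeat customers and most popular items) computed from a simple orders table with pandas. The column names and data are invented for illustration; John's real database would of course be larger:

    # Two of the reports above, computed from a toy orders table.
    import pandas as pd

    orders = pd.DataFrame({
        "customer": ["ann", "bob", "ann", "cid", "ann", "bob"],
        "item":     ["Vit C", "Vit A", "Vit C", "Fish Oil", "Vit A", "Vit C"],
        "qty":      [1, 2, 1, 1, 3, 2],
    })

    # Repeat customers: anyone who placed more than one order.
    order_counts = orders["customer"].value_counts()
    print(order_counts[order_counts > 1])

    # Most popular items by total quantity sold.
    print(orders.groupby("item")["qty"].sum().sort_values(ascending=False))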

As you can see, each of these elements gives very valuable information that can help shape the future of this business and how it conducts itself on the web. It will dictate what new tools are needed, how data should be presented, whether or not a personalized experience is justified (i.e. one that remembers you and presents itself based on your past interactions), how and when special sales should be run, what makes a good loss leader, etc.

Although it can be quite a bit of work, data mining is a truly powerful way to dramatically increase your profit without incurring the cost of capturing new customers. Being more responsive to an existing customer, making that customer feel welcome, and selling that customer more product more often is far less costly than constantly chasing new customers in a haphazard fashion.

Even by applying just the basic principles shared in this article, you should see a dramatic increase in your profits this coming year. And if you don't have good records, perhaps this is the time to start a system to track all this information. After all, you really don't want to be throwing all that extra money away, do you?




Source: http://ezinearticles.com/?Unraveling-the-Data-Mining-Mystery---The-Key-to-Dramatically-Higher-Profits&id=26665

Monday, 10 June 2013

Web Data Extraction Services and Data Collection Form Website Pages

For any business, market research and surveys play a crucial role in strategic decision making. Web scraping and data extraction techniques help you find relevant information and data for your business or personal use. Most of the time, professionals manually copy and paste data from web pages or download a whole website, resulting in wasted time and effort.

Instead, consider using web scraping techniques that crawl through thousands of website pages to extract specific information and simultaneously save it into a database, CSV file, XML file or any other custom format for future reference.

Examples of web data extraction process include:
• Spider a government portal, extracting names of citizens for a survey
• Crawl competitor websites for product pricing and feature data
• Use web scraping to download images from a stock photography site for website design

Automated Data Collection
Web scraping also allows you to monitor website data changes over a stipulated period and collect the data automatically on a scheduled basis. Automated data collection helps you discover market trends, determine user behavior and predict how data will change in the near future.

Examples of automated data collection include (a minimal polling sketch follows this list):
• Monitor price information for selected stocks on an hourly basis
• Collect mortgage rates from various financial firms on a daily basis
• Check weather reports on a constant basis, as and when required
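
Here is the promised minimal polling sketch: it fetches a hypothetical mortgage-rate endpoint once an hour and appends each reading to a CSV file. The URL and the JSON field name are assumptions for illustration only:

    # Poll a (hypothetical) rate endpoint hourly and append readings to CSV.
    import csv
    import time
    from datetime import datetime, timezone

    import requests

    URL = "https://example.com/api/mortgage-rate"  # placeholder endpoint

    while True:
        rate = requests.get(URL, timeout=10).json().get("rate")  # assumed field
        with open("rates.csv", "a", newline="") as f:
            csv.writer(f).writerow(
                [datetime.now(timezone.utc).isoformat(), rate])
        time.sleep(60 * 60)  # wait one hour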

Using web data extraction services you can mine any data related to your business objective and download it into a spreadsheet so that it can be analyzed and compared with ease.

In this way you get accurate and quicker results saving hundreds of man-hours and money!

With web data extraction services you can easily fetch product pricing information, sales leads, mailing databases, competitor data, profile data and much more on a consistent basis.

Should you have any queries regarding Web Data extraction services, please feel free to contact us. We would strive to answer each of your queries in detail. Email us at info@outsourcingwebresearch.com


Source: http://ezinearticles.com/?Web-Data-Extraction-Services-and-Data-Collection-Form-Website-Pages&id=4860417

Tuesday, 4 June 2013

What You Should Know About Data Mining

Often called data or knowledge discovery, data mining is the process of analyzing data from various perspectives and summarizing it into useful information to help beef up revenue or cut costs. Data mining software is among the many analytical tools used to analyze data. It allows categorizing of data and shows a summary of the relationships identified. From a technical perspective, it is finding patterns or correlations among fields in large relational databases. Find out how data mining works and its innovations, what technological infrastructures are needed, and what tools like phone number validation can do.

Data mining may be a relatively new term, but it uses old technology. For instance, companies have made use of computers to sift through supermarket scanner data - volumes of them - and analyze years' worth of market research. These kinds of analyses help define the frequency of customer shopping, how many items are usually bought, and other information that will help the establishment increase revenue. These days, however, what makes this easy and more cost-effective are disk storage, statistical software, and computer processing power.

Data mining is mainly used by companies who want to maintain a strong customer focus, whether they're engaged in retail, finance, marketing, or communications. It enables companies to determine the different relationships among varying factors, including staffing, pricing, product positioning, market competition, and social demographics.

Data mining software varies in type: statistical, machine learning, and neural networks. It seeks any of four types of relationships: classes (stored data is used to locate data in predetermined groups), clusters (data is grouped according to logical relationships or consumer preferences), associations (data is mined to identify associations), and sequential patterns (data is mined to anticipate behavioral trends and patterns). There are different levels of analysis, including artificial neural networks, genetic algorithms, decision trees, the nearest neighbor method, rule induction, and data visualization.

In today's world, data mining applications are available on systems of all sizes, from client/server and mainframe to PC platforms. When it comes to enterprise-wide applications, the size usually ranges from 10 gigabytes to more than 11 terabytes. The two important technological drivers are the size of the database and query complexity: the more data being processed and maintained, and the more complex and numerous the queries, the more powerful a system is required.

Programmable XML web services like phone number validation will assist your company in improving the quality of your data needed for data mining. Used to validate phone numbers, a phone number validation service allows you to improve the quality of your contact database by eliminating invalid telephone numbers at the point of entry. Upon verification, phone number and other customer information can work wonders for your business and its constant improvement.
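
As an illustration of the kind of point-of-entry check such a service performs, here is a minimal sketch using the open-source phonenumbers package. It is a stand-in for the idea, not the XML web service the article refers to:

    # Drop records whose phone number does not parse as valid.
    # Requires the "phonenumbers" package (pip install phonenumbers).
    import phonenumbers

    def is_valid_phone(raw: str, region: str = "US") -> bool:
        try:
            parsed = phonenumbers.parse(raw, region)
        except phonenumbers.NumberParseException:
            return False
        return phonenumbers.is_valid_number(parsed)

    records = ["(650) 253-0000", "123", "+44 20 7946 0958"]
    print([r for r in records if is_valid_phone(r)])  # invalid entries dropped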


Source: http://ezinearticles.com/?What-You-Should-Know-About-Data-Mining&id=6916646