Search engine optimization (SEO) is the process of improving the quality and quantity of website traffic to a website or a web page from search engines.[1] SEO targets unpaid traffic (known as "natural" or "organic" results) rather than direct traffic or paid traffic. Unpaid traffic may originate from different kinds of searches, including image search, video search, academic search,[2] news search, and industry-specific vertical search engines. As an Internet marketing strategy, SEO considers how search engines work, the computer-programmed algorithms that dictate search engine behavior, what people search for, the actual search terms or keywords typed into search engines, and which search engines are preferred by their targeted audience. SEO is performed because a website receives more visitors from a search engine when it ranks higher on the search engine results page (SERP). These visitors can then potentially be converted into customers.
Webmasters and content providers began optimizing websites for search engines in the mid-1990s, as the first search engines were cataloging the early Web. Initially, webmasters only needed to submit the address of a page, or URL, to the various engines, which would send a web crawler to crawl that page, extract links to other pages from it, and return information found on the page to be indexed.[4] The process involves a search engine spider downloading a page and storing it on the search engine's own server. A second program, known as an indexer, extracts information about the page, such as the words it contains, where they are located, and any weight for specific words, as well as all links the page contains. All of this information is then placed into a scheduler for crawling at a later date. Website owners recognized the value of a high ranking and visibility in search engine results, creating an opportunity for both white hat and black hat SEO practitioners. According to industry analyst Danny Sullivan, the term "search engine optimization" probably came into use in 1997. Sullivan credits Bruce Clay as one of the first people to popularize the term.
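The spider/indexer split described above can be sketched in a few lines. This is an illustrative toy, not any real engine's code: the page HTML is hardcoded to stand in for what a spider would have downloaded, and the class name `PageIndexer` is hypothetical.

```python
from html.parser import HTMLParser

# Hardcoded HTML standing in for a page a spider has downloaded.
PAGE_HTML = """
<html><body>
  <h1>Search engines</h1>
  <p>Early <a href="/crawler">crawlers</a> indexed the early Web.</p>
</body></html>
"""

class PageIndexer(HTMLParser):
    """Extracts words (with their positions) and outgoing links from a page."""
    def __init__(self):
        super().__init__()
        self.word_positions = {}  # word -> list of positions within the page
        self.links = []           # hrefs handed back to the crawl scheduler
        self._pos = 0

    def handle_starttag(self, tag, attrs):
        # Collect outgoing links so they can be scheduled for later crawling.
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

    def handle_data(self, data):
        # Record each word and where it occurs in the page.
        for word in data.lower().split():
            self.word_positions.setdefault(word, []).append(self._pos)
            self._pos += 1

indexer = PageIndexer()
indexer.feed(PAGE_HTML)
print(indexer.links)  # links queued for a later crawl
```

A real indexer would also weight words by placement (titles, headings) and normalize punctuation, but the two outputs here, a word-position index and a link list for the scheduler, mirror the two roles the paragraph describes.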
Early versions of search algorithms relied on webmaster-provided information such as the keyword meta tag, or index files in engines like ALIWEB. Meta tags provide a guide to each page's content. Using metadata to index pages was found to be unreliable, however, because the webmaster's choice of keywords in the meta tag could be an inaccurate representation of the site's actual content. Flawed data in meta tags, such as keywords that were inaccurate or incomplete, created the potential for pages to be characterized incorrectly in irrelevant searches. Web content providers also manipulated other attributes within the HTML source of a page in an attempt to rank well in search engines. By 1997, search engine designers recognized that webmasters were making efforts to rank well in their search engines, and that some webmasters were even manipulating their rankings in search results by stuffing pages with excessive or irrelevant keywords.
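The unreliability described above is easy to see in miniature. In this hypothetical example, an early engine that trusted the keyword meta tag would index the declared keywords even though none of them appear in the page body; the sample HTML and the class name `MetaKeywordParser` are illustrative assumptions.

```python
from html.parser import HTMLParser

# A page whose declared keywords diverge from its actual content.
SPAM_PAGE_HTML = """
<html><head>
  <meta name="keywords" content="cheap flights, hotels, casino">
</head><body><p>A blog post about gardening.</p></body></html>
"""

class MetaKeywordParser(HTMLParser):
    """Reads the keyword meta tag the way an early engine might have."""
    def __init__(self):
        super().__init__()
        self.keywords = []

    def handle_starttag(self, tag, attrs):
        if tag == "meta":
            attrs = dict(attrs)
            if attrs.get("name") == "keywords" and attrs.get("content"):
                self.keywords = [k.strip() for k in attrs["content"].split(",")]

parser = MetaKeywordParser()
parser.feed(SPAM_PAGE_HTML)
print(parser.keywords)  # declared keywords, none of which match the body text
```

Because the engine had no way to verify the declared keywords against the body text, pages like this could surface for searches entirely unrelated to their content, which is exactly the manipulation that pushed engines toward ranking signals webmasters could not directly control.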