Search Results

Search found 12701 results on 509 pages for 'fulltext index'.


  • What measures can be taken to increase Google indexing speed for a given newly created page?

    - by knorv
    Consider a website with a large number of pages. New pages are published regularly. When publishing a new page, the website operator wants to get the newly created page indexed in Google as soon as possible, i.e. to minimize the time between publication and indexing. Consider the site http://www.example.com/ with hundreds of thousands of pages. The page http://www.example.com/something/important-page.html is created at, say, 12:00. How do I get important-page.html indexed as soon as possible after 12:00? Ideally within seconds or minutes. Or more generally: what options are available to try to get Google to index a specific newly created page as soon as possible?
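
    One common technique is to list the new page in an XML sitemap with a fresh lastmod timestamp and resubmit the sitemap through Webmaster Tools. A minimal sketch, assuming the sitemap lives at http://www.example.com/sitemap.xml (the date shown is an illustrative placeholder):

        <?xml version="1.0" encoding="UTF-8"?>
        <urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
          <!-- the newly published page, with its publication time as lastmod -->
          <url>
            <loc>http://www.example.com/something/important-page.html</loc>
            <lastmod>2010-06-15T12:00:00+00:00</lastmod>
          </url>
        </urlset>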

    Read the article

  • Google indexed a page a day ago and it appeared in search, but today everything vanished

    - by ganesh
    We had a robots.txt that disallowed all robots while we were in development. We are live now. A day ago we changed robots.txt to match our requirements and submitted the pages using the Google Webmaster Tools index status feature. After this we could see the proper results in search, and Google Images search was working as expected. Suddenly, today all of these results vanished from Google Search, and I can see the old result again, i.e. the under-construction message. I checked robots.txt in Google Webmaster Tools and it's OK - no crawling errors. Kindly let me know what exactly happened, and how I can report this issue to Google?
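
    For reference, a minimal sketch of the two robots.txt states being described - the development file that blocks everything and a typical live file; the exact live rules are an assumption, not taken from the question:

        # During development: block every crawler from the whole site
        User-agent: *
        Disallow: /

        # After going live: allow crawling (an empty Disallow permits everything)
        User-agent: *
        Disallow: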

    Read the article

  • Mod Rewrite not working on my addon domain

    - by Ogugua Belonwu
    I have a WordPress website on my main domain. For the WordPress website I have this in my .htaccess file:

        # BEGIN WordPress
        <IfModule mod_rewrite.c>
        RewriteEngine On
        RewriteBase /
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule . /index.php
        </IfModule>
        # END WordPress

    I just created an addon domain and wanted to use new rules for it, so I created a .htaccess file and put it inside the addon folder, e.g. /newaddon. In that .htaccess file I have:

        Options -Indexes
        <IfModule mod_rewrite.c>
        RewriteEngine On
        RewriteBase /
        RewriteRule ^readjob/(.*)/(.*)/(.*)/$ readjob.php?id=$1&cat=$2&title=$3
        </IfModule>

    The URL structure I have is this: http://www.website.com/readjob/3/jobs/web-designers-potech-integrated-services/ But it keeps telling me the link is broken. I don't know what to do and would appreciate assistance (I only learnt mod_rewrite today, so clarity will be highly appreciated). Thanks
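
    As a point of comparison, here is a hedged sketch of how such an addon-domain rule is often written; the file-exists conditions, the optional trailing slash and the [L,QSA] flags are additions not present in the question, and they assume readjob.php sits in the addon folder's document root:

        Options -Indexes
        <IfModule mod_rewrite.c>
        RewriteEngine On
        RewriteBase /
        # Leave real files and directories (readjob.php itself, images, ...) untouched
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        # Capture id, category and title; QSA preserves any extra query string
        RewriteRule ^readjob/([^/]+)/([^/]+)/([^/]+)/?$ readjob.php?id=$1&cat=$2&title=$3 [L,QSA]
        </IfModule>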

    Read the article

  • Webmaster Tools 500 crawl errors for ASP faceted navigation that does not exist

    - by user19007
    I am getting 2,500 type-500 URL errors in Google Webmaster Tools. These pages are faceted-navigation results that cannot be reached by a site visitor; the pages do not exist. We are using faceted navigation with the Volusion platform (ASP.NET, I think). I have specified URL parameters in Webmaster Tools so that Google will not try to index anything faceted, but this does not stop the errors from being generated. I am concerned about how this might affect SEO (bleeding page rank). I can provide additional information if needed. I am not sure how to solve this. I have started down the path of creating 301s, but I'm having some difficulty there as well.
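
    A hedged robots.txt sketch for keeping crawlers away from parameterized faceted URLs; the parameter names below are hypothetical placeholders (not taken from the question), and the wildcard syntax is an extension honoured by Google and Bing rather than part of the original robots.txt standard:

        User-agent: *
        # Block any URL whose query string carries a faceted-navigation parameter
        Disallow: /*?*price=
        Disallow: /*?*color=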

    Read the article

  • SEO: disallowing Google from indexing forms in iframes or not?

    - by Marco Demaio
    I usually place forms in iframes (i.e. order forms, request-assistance forms, contact forms, etc.). Just the forms - I never place other contents or pages in iframes. From an SEO point of view, would you exclude the forms from being indexed/crawled by Google or not? I mean, my forms hardly ever contain keywords/keyphrases; moreover, I obviously place empty title/meta description tags in the pages shown in the iframes, because those titles are never displayed in the browser title bar. So I'm wondering what the point is of letting Google index them. Moreover, I think these form pages might suck out PR from all the other pages that are more valuable for SEO. If your answer is "yes, I would exclude them from indexing", would you simply use robots.txt to exclude them? Thanks!
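
    One hedged option, assuming the goal is simply to keep the iframe-only form pages out of the index, is a robots meta tag on each form page rather than robots.txt (a robots.txt-blocked URL can still appear as a bare URL-only result):

        <!-- placed in the <head> of each page that is only ever loaded inside an iframe -->
        <meta name="robots" content="noindex, nofollow">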

    Read the article

  • Why am I getting domainpark.cgi being called from my website?

    - by Sean
    I used to test my site on www.exampleone.com and I have now moved to the real domain www.realdomain.com; www.exampleone.com is now parked by 1and1 (default). When I test which requests are made by www.realdomain.com, I see domainpark.cgi and park.js from Sedo Parking also being requested, as well as the JS that serves the ads by adclicks. How do I get rid of this? It's not on the index page at all, and it's causing a lot of strain and slowing my site down.

    Read the article

  • When I mark all the check boxes in settings, the update fails

    - by user67966
    I get the following error when I manually update Ubuntu:

        W: Failed to fetch cdrom://Ubuntu 12.04 LTS Precise Pangolin - Release i386 (20120423)/dists/precise/main/binary-i386/Packages
           Please use apt-cdrom to make this CD-ROM recognized by APT. apt-get update cannot be used to add new CD-ROMs
        W: Failed to fetch cdrom://Ubuntu 12.04 LTS Precise Pangolin - Release i386 (20120423)/dists/precise/restricted/binary-i386/Packages
           Please use apt-cdrom to make this CD-ROM recognized by APT. apt-get update cannot be used to add new CD-ROMs
        E: Some index files failed to download. They have been ignored, or old ones used instead.

    I have marked all the check-boxes in settings, which were unmarked by default. What is the main reason for this problem?

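    A hedged sketch of the usual fix, assuming the offending entries are the cdrom: lines in /etc/apt/sources.list (unticking the CD-ROM source in Software Sources achieves the same thing):

        # Comment out every cdrom: source line, then refresh the package lists
        sudo sed -i '/^deb cdrom:/s/^/# /' /etc/apt/sources.list
        sudo apt-get update
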
    Read the article

  • How is this recursion properly working when it is iterated through [on hold]

    - by Rakso Zrobin
    Here is my code right now:

        hasht = {"A": ["B", "D", "E"], "B": ["C"], "C": ["D", "E"], "D": ["C", "E"], "E": ["B"]}
        paths = []
        def recusive(start, finish, started=true):
            if start == finish and !started:
                return start
            else:
                for i in hasht[start]:
                    path = start + recusive(i, finish, false)
                    paths.append(path)
        print(recusive("C", "C", 1))
        print paths
        # [CDC, CDEBC, CEBC]

    I am trying to generate a list like the one at the bottom, but I am running into the problem that the string and the list cannot be concatenated. When I just return, however, it returns CDC and works, but it exits the function, as return is meant to do. I am wondering (1) how I can improve my code here to make it work, and (2) why my logic was faulty. For example, I understand that it generates, say, [DC], but I am confused as to how to work around that. Perhaps index the value returned?
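
    For comparison, a hedged sketch of one way the intended output can be produced; find_cycles is a hypothetical name, and the visited set (which prevents infinite loops on this cyclic graph) is an addition the original code does not have:

        hasht = {"A": ["B", "D", "E"], "B": ["C"], "C": ["D", "E"],
                 "D": ["C", "E"], "E": ["B"]}

        def find_cycles(start, finish, node=None, visited=None):
            """Return every path of node labels leading from start back to finish."""
            if node is None:
                node, visited = start, set()
            paths = []
            for nxt in hasht[node]:
                if nxt == finish:
                    paths.append(node + nxt)          # reached the target: close this path
                elif nxt not in visited:
                    for tail in find_cycles(start, finish, nxt, visited | {node}):
                        paths.append(node + tail)     # prepend the current label to each sub-path
            return paths

        print(find_cycles("C", "C"))   # ['CDC', 'CDEBC', 'CEBC']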

    Read the article

  • After changing web host, I get a 'file does not exist' error

    - by Jordan
    I run a WordPress blog and have recently changed web hosts. When changing web hosts, I copied all the files and exported/imported the database etc., as explained by lots of tutorials found easily on Google. The blog home page works fine. What goes wrong: when I click on any link from the home page, the browser gets stuck in a redirect loop. Looking at the error log, I see: File does not exist: /usr/local/apache/htdocs/index.php The directory /usr doesn't even exist for my website - so perhaps this is looking for a file that was present with my old web host and is no longer present with my new web host? What is going on, and how might I resolve it?
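
    A hedged sketch of a common first check after a host move - making sure WordPress still knows its own URL. WP_HOME and WP_SITEURL are standard wp-config.php overrides; the domain below is a placeholder, and the lines can be removed once the stored options are confirmed correct:

        /* Temporary overrides in wp-config.php while debugging the redirect loop */
        define( 'WP_HOME',    'http://www.example.com' );
        define( 'WP_SITEURL', 'http://www.example.com' );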

    Read the article

  • How to find the occurrence of a particular character in a string - CHARINDEX

    - by Vipin
    Many times while writing SQL we need to find whether a particular character is present in a column's data. SQL Server has a built-in function to do this job:

        CHARINDEX(character_to_search, string, [starting_position])

    It returns the position of the first occurrence of the character in the string. NOTE - the index starts at 1, so if the character is at the starting position this function returns 1. It returns 0 if the character is not found, 0 if 'string' is empty, and NULL if the string is NULL. A working example of the function (note the + 1 in the starting position, needed so the second search begins after the first match):

        SELECT CHARINDEX('a', fname) AS a_First_occurrence,
               CHARINDEX('a', fname, CHARINDEX('a', fname) + 1) AS a_Second_occurrence
        FROM Users
        WHERE fname = 'aka unknown'

    OUTPUT

        a_First_occurrence   a_Second_occurrence
        1                    3
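
    A short hedged variation showing the same function used as a filter, assuming the same Users table from the example above:

        -- Return only the rows whose fname contains the letter 'a' anywhere
        SELECT fname
        FROM Users
        WHERE CHARINDEX('a', fname) > 0;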

    Read the article

  • OPN Certification during Oracle PartnerNetwork Exchange at OpenWorld

    - by Harold Green
    Join us and earn your OPN Certification during Oracle PartnerNetwork Exchange at OpenWorld, San Francisco, October 1-4, 2012. As a benefit to partners attending this year's OPN Exchange event, the Oracle Partner Network is offering certification testing free of charge* for over 100 exam titles. Successful completion of these exams gives you the credential of Certified Specialist and counts toward your company Specialization and upgrade within the OPN Program. Exams are offered during 10 different sessions and spaces will fill up quickly. All you need to do is register for OPN Exchange and then select your session using the schedule builder. On the day of your exam, be sure to bring your OPN Company ID and Oracle Testing ID (Pearson VUE account ID). Study guides are available online in the links below. Don't miss this exclusive opportunity to become Oracle Certified this year at Oracle PartnerNetwork Exchange at OpenWorld 2012. Event Link: http://www.oracle.com/opnexchange/learn/test-fest/index.html *Available exams: http://www.oracle.com/partners/en/most-popular-resources/oow-testfest-exams-1836714.html

    Read the article

  • How to allow Google Images search to bypass hotlink protection?

    - by Marco Demaio
    I have seen that Google Images seems to index my images only if hotlink protection is off. I use hotlink protection anyway, because I don't like the idea of people sucking up my bandwidth; I simply use this code to protect my sites from being hotlinked:

        RewriteEngine on
        RewriteCond %{HTTP_REFERER} !^$
        RewriteCond %{HTTP_REFERER} !^http(s)?://(www\.)?mydomain\.com/.*$ [NC]
        RewriteCond %{HTTP_REFERER} !^http(s)?://(www\.)?mydomain\.com$ [NC]
        RewriteRule .*\.(jpg|jpeg|png|gif)$ - [F,NC,L]

    But in order to allow Google Images search to bypass my hotlink protection (I want Google Images search to show my images), would it suffice to add lines like these:

        RewriteCond %{HTTP_REFERER} !^http(s)?://(www\.)?google\.com/.*$ [NC]
        RewriteCond %{HTTP_REFERER} !^http(s)?://(www\.)?google\.com$ [NC]

    Because I'm wondering: does the crawler come just from google.com, and what about google.it, google.co.uk, etc.? FYI: I did not find info about this in Google's official guidelines. I suppose hotlink protection prevents Google Images from showing my images in its results, because I did some tests and it seems hotlink protection does prevent my images from being shown in Google Images search.
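
    A hedged sketch of a single referer exception that would also cover the country-specific Google domains; the breadth of the regex is an assumption about the desired whitelist, not an official Google recommendation:

        RewriteEngine on
        RewriteCond %{HTTP_REFERER} !^$
        # Allow the site itself
        RewriteCond %{HTTP_REFERER} !^https?://(www\.)?mydomain\.com($|/) [NC]
        # Allow any Google property: google.com, google.it, google.co.uk, images.google.*, ...
        RewriteCond %{HTTP_REFERER} !^https?://([^/]+\.)?google\.[a-z.]{2,6}($|/) [NC]
        RewriteRule .*\.(jpe?g|png|gif)$ - [F,NC,L]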

    Read the article

  • Alexa indexing browsing history?

    - by Haluk
    We have this test.php sitting around in a forgotten folder. It is a script which just sends an email to our site admin. We never had a page linking to it. It is not indexed by Google. It does not exist in the Internet Archive Wayback Machine. But every now and then it gets crawled by ia_archiver. I wonder how it got indexed. Could it be because of the Alexa toolbar installed on our computer? Does Alexa index our personal browsing history?

    Read the article

  • Does having a website inside a frame (<frameset>) help or hurt search engine rankings?

    - by rajesh.magar
    I have been working to promote my website for a long time, but I am not getting traffic in proportion to the work I have put in. My website runs online under another domain using a frameset, so is that affecting search indexing and ranking? My parent website is http://www.battle-cancer.com and it uses:

        <frameset frameborder=0 framespacing=0 border=0 rows="100%,*" noresize>
          <frame name="frame" src="http://www.battle-cancer.com" noresize>
        </frameset>

    It runs online under http://www.elimaysupplements.com/.
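
    If the frameset has to stay, one commonly suggested mitigation is a <noframes> fallback that gives crawlers some indexable content and a plain link to the framed site; this is a generic sketch, not markup taken from either site:

        <frameset frameborder="0" framespacing="0" border="0" rows="100%,*" noresize>
          <frame name="frame" src="http://www.battle-cancer.com" noresize>
          <noframes>
            <body>
              <!-- Crawlable fallback for user agents that do not render frames -->
              <p>Visit <a href="http://www.battle-cancer.com">battle-cancer.com</a> for the full site.</p>
            </body>
          </noframes>
        </frameset>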

    Read the article

  • What does Enable/Disable mean in Bing's URL Normalization feature?

    - by DisgruntledGoat
    I'm in Bing Webmaster Tools, under Index URL Normalization. Many parameters are listed in the table with 3 other columns: Status, Source, Date. The "Source" column says "Webmaster" where I have added parameters, and "Bing" where I assume the parameter has been auto-detected. "Date" is probably the last date it detected the parameter. I've tried searching the help files but I can't find what the Status column means. The top of the page says: This feature allows you to specify query parameters for Bing’s crawler to ignore. But it's not clear whether "Enable" or "Disable" is related to this, and if so what happens in each case. Does anyone know?

    Read the article

  • Did You Know? I'm doing 3 more online seminars with SSWUG!

    - by Kalen Delaney
    As I told you in April , I recorded two more seminars with Stephen Wynkoop, on aspects of Query Processing. The first one will be broadcast on June 30 and the second on August 27. In between, we'll broadcast my Index Internals seminar, on July 23. Workshops can be replayed for up to a week after the broadcast, and you can even buy a DVD of the workshop. You can get more details by clicking on the workshop name, below, or check out the announcement on the SSWUG site at http://www.sswug.org/editorials/default.aspx?id=1948...(read more)

    Read the article

  • Oracle Enterprise Taxation and Policy Management Self Service v1.0 is Now Available

    - by user722699
    A new tax product, Oracle Enterprise Taxation and Policy Management Self Service, is now available. The solution provides tax and revenue authorities with a single citizen portal - powered by Oracle Policy Automation for Public Sector, Oracle WebCenter, Oracle Application Development Framework and Oracle SOA Suite - that can integrate across multiple tax types and tax processing systems. Oracle Enterprise Taxation and Policy Management Self Service enables tax and revenue authorities to quickly provide more taxpayer services online, such as the ability to make payments, contact the tax agency with questions and requests, or receive self-guided automated assistance with policies and tax law. Tax and revenue authorities can implement Oracle Enterprise Taxation and Policy Management Self Service - an out-of-the-box solution - quickly and easily, and lower the cost of taxpayer service operations by promoting a broader set of taxpayer self-service features. Resources: Datasheet: http://www.oracle.com/us/industries/public-sector/ent-taxation-policy-service-ds-1873518.pdf Documentation: http://docs.oracle.com/cd/E38189_01/index.htm

    Read the article

  • How long should I keep 301 redirecting pages from a deprecated domain?

    - by ElHaix
    I had an old domain that I have deprecated, with all requests to it 301 redirected to my new site. The new site is now receiving a decent amount of traffic, but I don't know how much of it is 301 redirected from the old site, and doing a site:[old site] search still shows several thousand pages indexed. Since all pages from the old site are 301 redirected, will they ever be removed from the index as long as the old domain name is active? As a rule of thumb, I picked up somewhere that 90 days covers any significant site changes. When is it safe to burn the old domain?
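
    For reference, a hedged sketch of the kind of whole-domain 301 that is typically left in place on the old domain while its index entries drain away; olddomain/newdomain are placeholders:

        RewriteEngine On
        # Send every request on the old host permanently to the same path on the new one
        RewriteCond %{HTTP_HOST} ^(www\.)?olddomain\.com$ [NC]
        RewriteRule ^(.*)$ http://www.newdomain.com/$1 [R=301,L]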

    Read the article

  • Preventing Duplicates on Google

    - by abel
    I am currently using a rewrite rule to enable access to .php pages without using the .php extension. However, to prevent old links from breaking, the pages can still be accessed via links containing the .php extension too. For example, domain.com/page.php can now be accessed at domain.com/page. All the links within the site now use the domain.com/page form. However, older incoming links will still point to the .php pages, meaning Google will index both pages and mark them as duplicates. I have two plans to remedy the situation: (1) use a PHP 301 redirect - when a page is accessed with the .php extension, redirect it individually using a 301 issued from PHP; (2) use canonical - place a canonical tag on each page, pointing to the ".php"-less version. My question: are both methods equally efficacious in preventing Google from indexing my ".php" pages? Which method should be preferred, by convention or otherwise?
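
    A hedged sketch of the first option as a small PHP snippet included at the top of each page; the query string is ignored for brevity and the logic is illustrative rather than a drop-in implementation:

        <?php
        // Permanently redirect /page.php to /page when the .php URL is requested
        $path = parse_url($_SERVER['REQUEST_URI'], PHP_URL_PATH);
        if (substr($path, -4) === '.php') {
            header('Location: ' . substr($path, 0, -4), true, 301);
            exit;
        }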

    Read the article

  • Share links with <script src=""> SEO

    - by gansbrest
    Hi, I would like to create a share link to my website using JavaScript: <script src="[url-to-my-script]"></script>. Basically, the main idea behind this is to render an HTML block with an image and a link to the website. By using a JavaScript approach I can control the look and feel of that link, and potentially I could change the link HTML in the future without touching partner websites. The only potential problem I see with this approach is SEO. I'm not sure if Google will be able to index that link to my website, since it's generated by JavaScript.
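
    A minimal sketch of the kind of widget script being described, assuming it is loaded with a plain <script src> tag; the domain and file names are placeholders, and links injected this way are generally treated less reliably by search engines than static HTML anchors:

        // share-widget.js - renders a small "visit my site" block where the tag is placed
        (function () {
          var html = '<div class="my-share-link">' +
                     '<a href="http://www.example.com/">' +
                     '<img src="http://www.example.com/badge.png" alt="My Site"> Visit My Site' +
                     '</a></div>';
          document.write(html);   // runs synchronously while the partner page is being parsed
        })();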

    Read the article

  • Combobox item does not show in Crystal Report [migrated]

    - by upitnik
    I am creating a simple printing application using Crystal Reports, C# and Visual Studio 2010. On my WinForm I have some textboxes and comboboxes. The comboboxes are filled with data from an XML file. On my report I created some parameters and linked them with the combobox selections. When I use:

        cryRpt.SetParameterValue("PAR3", cmbSome.SelectedIndex);

    I see 0 or 1 on my report, depending on my item selection. Now I want to display not the index but the value, i.e. "Monday". If I use SelectedItem, SelectedText or SelectedValue I do not see anything on my report. To see what happens I put another textbox on my form and linked it with the combobox selection:

        txtProe.Text = Convert.ToString(cmbSome.SelectedItem);
        // or
        txtProe.Text = cmbSome.Text;

    In both cases, when I click the button I see that my selection from cmbSome is passed to it. Does anyone know what happens here?
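
    A hedged sketch of passing the displayed text rather than the index; it assumes PAR3 is defined as a string parameter in the report, and the null check is an addition not present in the question:

        // Pass the visible text of the selected item; the report parameter PAR3 must be a string type
        string selected = cmbSome.SelectedItem != null
            ? cmbSome.SelectedItem.ToString()
            : string.Empty;
        cryRpt.SetParameterValue("PAR3", selected);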

    Read the article

  • OpenGL Vertex Attributes - Normalisation

    - by Daniel
    Alas, I have searched and have found no definitive answer. When would you normalize the vertex data in OpenGL using the following command: glVertexAttribPointer(index, size, type, normalize, stride, pointer); i.e. when would normalize == GL_TRUE? In what situations, and why, would you choose to let the GPU do the calculations instead of preprocessing the data? All the examples I have ever seen have this set to GL_FALSE, and I cannot personally see a use for it. But Khronos aren't stupid, so it must be there for something useful (and probably common).
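
    A hedged sketch of the classic use case - per-vertex colours stored as unsigned bytes and normalised to [0,1] by the GPU; the attribute location and buffer names are illustrative, not part of the question:

        /* Colours packed as 4 unsigned bytes per vertex (0..255). With normalized set to
           GL_TRUE the shader receives floats in [0.0, 1.0], so the data stays compact and
           no CPU-side conversion pass is needed. */
        glBindBuffer(GL_ARRAY_BUFFER, colorVbo);
        glVertexAttribPointer(1,                  /* attribute location    */
                              4,                  /* components per vertex */
                              GL_UNSIGNED_BYTE,   /* stored type           */
                              GL_TRUE,            /* normalize to [0,1]    */
                              4 * sizeof(GLubyte),
                              (void *) 0);
        glEnableVertexAttribArray(1);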

    Read the article

  • Getting the masked URL values in Mediawiki

    - by Kalai
    I have successfully masked the URL in MediaWiki by using the following in the .htaccess and LocalSettings.php files, i.e.:

    .htaccess:

        Options +FollowSymLinks
        RewriteEngine on
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule ^(.*)/(.*)$ /mediawiki/index.php?title=$1&actions=$2 [L]

    LocalSettings.php:

        $wgScriptPath = "/lib/mediawiki";
        $wgArticlePath = "/lib/mediawiki/$1/$2";

    It is working fine with the required URL. But my problem is that I want to treat the second parameter as a query string for my pages, and I cannot get the second parameter in my file. I tried the $wgRequest function, but it only gives the first parameter, the title. I also tried $_REQUEST; sometimes it gives the value of $_REQUEST['actions'], but many times it does not. I can't understand what the problem is.
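
    A hedged sketch of reading the rewritten parameter through MediaWiki's request object instead of the raw superglobal; it assumes a MediaWiki version where the global $wgRequest is available, and 'actions' is simply the name given to the second capture group in the rewrite rule above:

        <?php
        // Inside an extension hook or a custom entry point
        global $wgRequest;
        $action = $wgRequest->getVal( 'actions', '' );   // '' is the fallback default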

    Read the article

  • Resources for creating a turn-by-turn navigation system

    - by benwad
    I'm trying to create a kind of turn-by-turn satellite navigation system using the iOS SDK. I get the directions from the server and draw them on the map, then I keep getting location updates from the iPhone's GPS chip. Currently I start by finding the nearest turning point then, each time the user comes within a certain distance of the next turning point, a verbal cue is given and the turning point index is incremented. This is a delicate system and I'd like to make it more robust so I can tell when the user is going the wrong direction etc. Basically I'm looking for some literature about turn-by-turn navigation, in terms of tracking the user's progress and whether they're going the right direction. I'd have thought there's a lot of research out there but I can't seem to find anything apart from simple tutorials on how to use a given SDK or directions API. Can anyone direct me to a good run-through of the various techniques used in software such as TomTom or Google Maps Navigation?

    Read the article

  • apt-get update error - binary-i386, binary-amd64 [duplicate]

    - by magamig
    This question already has an answer here: How can I fix a 404 Error when updating packages? (5 answers)

    When I run:

        sudo apt-get update

    it shows me the following error:

        W: Failed to fetch http://ppa.launchpad.net/directhex/ppa/ubuntu/dists/trusty/main/binary-amd64/Packages  404  Not Found
        W: Failed to fetch http://ppa.launchpad.net/directhex/ppa/ubuntu/dists/trusty/main/binary-i386/Packages  404  Not Found
        E: Some index files failed to download. They have been ignored, or old ones used instead.

    I have googled for solutions, but none of what I found has worked for me. Please give me your suggestions.
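
    A hedged sketch of the usual cleanup when a PPA no longer publishes packages for the running release; the PPA name comes from the error output, and the commands assume Ubuntu's standard add-apt-repository tooling:

        # Remove the dead PPA's source entry, then refresh the package lists
        sudo add-apt-repository --remove ppa:directhex/ppa
        sudo apt-get update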

    Read the article
