Search Results

Search found 34447 results on 1378 pages for 'google search appliance'.

Page 255/1378 | < Previous Page | 251 252 253 254 255 256 257 258 259 260 261 262  | Next Page >

  • Inexplicable low ranking [closed]

    - by Mick
    I have created two similar websites relating to monetary systems. So far, one appears to be loved by Google and the other hated, and I'm struggling to work out why. The successful one is fullreservebanking.com: if you type "full reserve banking" into Google, my site appears 2nd in the list, behind Wikipedia. The unsuccessful one is fractionalreserves.com: if you type "fractional reserve banking" into Google, it's nowhere to be seen. This is a mystery to me because both sites were created by me with the same design philosophy, both in pure HTML, and both are packed to the rafters with references to, and information about, their respective subjects. One issue I'm worried may be the cause is the location of the sites. I got a web hosting package from hostmonster.com for fullreservebanking, but fractionalreserves is just an "add-on" which sits in a subdirectory of fullreservebanking. I wonder if Google somehow detects this and treats it as a less significant website?

    Read the article

  • Yelp, Google's API for restaurants help

    - by chris
    Ok, I have looked into this, and I'm not sure if anyone else has experience with it. I'm having tremendous difficulty with the Yelp and Google APIs. To explain what I am trying to do, here is the concept of the website: we pull restaurants based on the user's distance, then randomize them, weighted by restaurant quality based on feedback from review websites (Yelp, Google, Urbanspoon, Zagat, OpenTable, Kudzu, Yahoo - it doesn't have to be all of them) and feedback from our own users (on the results page for the random restaurant, users can select good recommendation/bad recommendation). There's a lot we could calculate for our formula. The results will also depend on whether you're at home or at work. If you're at home you have more time to drive out to the city to grab some dinner or lunch. If you're at work we would have to recommend restaurants nearby, since lunch is typically 30 minutes to an hour: a 30-minute lunch would most likely require takeout or quick service, while with an hour-long lunch break you could dine in at a local fine-dining restaurant. So in a nutshell: a user comes to the website, selects whether they're at home or at work, clicks submit, and we pick a random restaurant for them to go to. If they don't like it they can click retry and a new restaurant shows up. The issue I am having is using the APIs to gather all the restaurants in the US. I know it can be done because there are similar websites/apps that pull the restaurants closest to you, such as Ness and Alfred (I believe there are two more, but I can't remember the names). Does anyone know if this can be accomplished?
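    Neither Yelp nor Google exposes a way to bulk-download every restaurant in the US; both APIs are built for location-scoped queries (find places near a given point), which is also how apps like Ness appear to work. Below is a minimal sketch of that approach against the Google Places Nearby Search endpoint - the API key, coordinates, and scoring step are placeholders, and Yelp's search API follows the same query-then-score pattern:

```csharp
using System;
using System.Net.Http;
using System.Threading.Tasks;

class RestaurantFinder
{
    // Placeholder -- a real Places API key is required.
    const string ApiKey = "YOUR_API_KEY";

    static async Task Main()
    {
        // Nearby Search: restaurants within ~1.5 km of the user's location.
        // Use a larger radius for the "at home" case, a smaller one for "at work".
        var url = "https://maps.googleapis.com/maps/api/place/nearbysearch/json"
                + "?location=40.7484,-73.9857"   // user's lat,lng (placeholder)
                + "&radius=1500"
                + "&type=restaurant"
                + "&key=" + ApiKey;

        using (var client = new HttpClient())
        {
            string json = await client.GetStringAsync(url);

            // Parse the JSON results, score each place (API rating plus your own
            // good/bad recommendation feedback), then pick one at random from the top.
            Console.WriteLine(json);
        }
    }
}
```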

    Read the article

  • SharePoint .PDF contents displaying as 'searchtext.xml' in searches

    - by Green Muffins
    Hi experts, I recently installed the PDF iFilter in my SharePoint farm to enable searching the contents of .pdf documents. All went well, except that if I search for the contents of any .pdf file, the results appear with the document title "searchtext.xml", and the link to the document gives a giant page of the .pdf contents in an .xml-looking browser page. :s I have added the .pdf file type to the search, so I am unsure why it is reading the files incorrectly. If I search for a .pdf document title such as 'document.pdf' it will display the result as an HTML page, though the link does lead to a readable .pdf file. Any help?

    Read the article

  • Lotus Notes: Searching email by fields

    - by themel
    I'm using Lotus Notes 8.5.2 in a large corporate deployment. I'm trying to figure out how to search my email in a structured manner, e.g. by specifying criteria on fields. The help seems to suggest that I can use field names in square brackets and a list of operators, e.g. to find all mail where the From field contains John, I'd search for [From] CONTAINS John. However, I can't get this to work - any operator-style query I've tried returns zero documents. "Web-style" queries (e.g. typing John into the search dialog) work, but I'd really prefer a way that would let me search more precisely. Potential issues: I'm assuming that the field names can be taken from the list of things I see when I open a mail and look at its Document Properties. Full-text indexing is turned off for my mailbox, and all my attempts to create my own index have failed. Does anyone have better information on searching by from/date/subject conditions in Notes?

    Read the article

  • Can I use a 302 redirect to serve up static content from a URL with escaped_fragment?

    - by Starfs
    We would like to serve up SEO-friendly Ajax-driven content. We are following this documentation. Has anyone ever tried writing a 302 redirect into the .htaccess file that takes the ?_escaped_fragment_= string and sends it to a static page, for example /snapshot/yourfilename/? How will Google react to this? I've gone through the documentation and it's not very clear. The quote below is what I find in Google's documentation; I'm not sure if they are saying that you can redirect the _escaped_fragment_ URL to a different static page, or if this is about redirecting the hashbang URL to static content. Thoughts? From Google's site: Question: Can I use redirects to point the crawler at my static content? Redirects are okay to use, as long as they eventually get you to a page that's equivalent to what the user would see on the #! version of the page. This may be more convenient for some webmasters than serving up the content directly. If you choose this approach, please keep the following in mind: Compared to serving the content directly, using redirects will result in extra traffic because the crawler has to follow redirects to get the content. This will result in a somewhat higher number of fetches/second in crawl activity. Note that if you use a permanent (301) redirect, the url shown in our search results will typically be the target of the redirect, whereas if a temporary (302) redirect is used, we'll typically show the #! url in search results. Depending on how your site is set up, showing #! may produce a better user experience, because the user will be taken straight into the AJAX experience from the Google search results page. Clicking on a static page will take them to the static content, and they may experience avoidable extra page load time if the site later wants to switch them to the AJAX experience.
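    For reference, a 302 of this kind is usually a single rewrite rule. The sketch below is a minimal, untested example assuming Apache mod_rewrite is enabled and that pre-rendered snapshots live under a hypothetical /snapshot/ path; the capture-to-filename mapping would need to match however the snapshots are actually named:

```apache
RewriteEngine On
# When the crawler asks for ?_escaped_fragment_=..., 302 it to a static snapshot.
RewriteCond %{QUERY_STRING} ^_escaped_fragment_=(.*)$
RewriteRule ^$ /snapshot/%1? [R=302,L]
```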

    Read the article

  • Google penalty recovery

    - by sajeev
    I have a site, a spiritual site which has nothing to do with anything commercial or business-related. It used to be on the first page for the keyword RADHANATH SWAMI. Then suddenly, in Sep '13, the site dropped out of Google search. When I checked Webmaster Tools, I saw that there were 40,000 backlinks from a site named http://freeguestbooks.net, but the manual actions section in Webmaster Tools said "No manual webspam actions found", and to date it still shows the same. I then contacted the webmaster of freeguestbooks.net and requested that he remove the 40K-odd backlinks, and he very kindly did. Webmaster Tools now shows 21,777 backlinks from this site, but for the last two months the rate at which these backlinks are decreasing has been very slow, almost zero. I contacted the webmaster of freeguestbooks.net again and he confirmed that his site no longer links to mine. I am also told that my anchor text is over-optimised... The total number of backlinks to my site as per Webmaster Tools is only 24,937, out of which 21,777 are the links from freeguestbooks.net. Could an expert suggest a way to get back into Google? Sajeev, India

    Read the article

  • Is having a single `IndexWriter` instance in Lucene a good idea?

    - by Dragos
    I am trying to understand how Lucene should be used. From what I have read, creating an IndexReader is costly, so using a SearcherManager should be the right choice. However, a SearcherManager should be produced by an NRTManager (which, by the way, should replace the IndexWriter for every add or delete operation performed). But in order to have an NRTManager, I should first have an IndexWriter, and here comes my problem. The documentation says: an IndexWriter is thread-safe; the constructor of this class takes a Directory object, so it seems creating an instance should be costly (as in the case of an IndexReader); all changes are buffered and flushed periodically (so they seem to encourage using a single instance). But: the changes, although flushed, will only be visible after commit or close; after finishing making updates (add/delete), the instance should be closed. I also found this: http://stackoverflow.com/questions/5374419/forgot-to-close-the-lucene-indexwriter-after-adding-documents-to-the-index where it is said that not closing a writer might ruin everything. So what am I really supposed to do? Is having a single IndexWriter instance a good idea (only ever commit, and never close it)? EDIT: What is more, if I use NRTManager, how can I make a commit? Is it even possible?
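    The single long-lived writer pattern described above looks roughly like the sketch below. It is shown with the Lucene.Net port (assuming the 3.x API, which mirrors the Java classes mentioned here); the directory path and field names are placeholders. One writer is shared for the life of the process, Commit() makes each batch durable and visible to newly opened readers, and the writer is closed only at shutdown:

```csharp
using System.IO;
using Lucene.Net.Analysis.Standard;
using Lucene.Net.Documents;
using Lucene.Net.Index;
using Lucene.Net.Store;
using Version = Lucene.Net.Util.Version;

public static class SearchIndex
{
    // One writer for the whole process; IndexWriter is thread-safe.
    private static readonly IndexWriter Writer = new IndexWriter(
        FSDirectory.Open(new DirectoryInfo("index")),        // placeholder path
        new StandardAnalyzer(Version.LUCENE_30),
        IndexWriter.MaxFieldLength.UNLIMITED);

    public static void Add(string title)
    {
        var doc = new Document();
        doc.Add(new Field("title", title, Field.Store.YES, Field.Index.ANALYZED));
        Writer.AddDocument(doc);
        Writer.Commit();     // flush and make the change visible; no Close() here
    }

    public static void Shutdown()
    {
        Writer.Dispose();    // close the writer only when the application stops
    }
}
```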

    Read the article

  • Will we be penalized for having multiple external links to the same site?

    - by merk
    There seem to be conflicting answers to this question. The most relevant ones seem to be at least a year or two old, so I thought it would be worth re-asking. My gut says it's OK, because there are plenty of sites out there that do this already. Every major retailer site usually has links to the manufacturer of whatever item they are selling: go to www.newegg.com and they have hundreds of links to the same site, since they sell multiple items from the same brand. Our site allows people to list a specific genre of items for sale (not porn - I'm just keeping it generic since I'm not trying to advertise), and on each item listing page we have a link back to their website if they want. Our SEO guy is saying this is really bad and Google is going to treat us as a link farm. My gut says that when we have to start limiting useful features of our site for users to boost our ranking, something is wrong - or start jumping through hoops by trying to hide text using JavaScript, etc. Some clients are only selling one to a handful of items, while a couple of our bigger clients have hundreds of items listed, so they will have hundreds of pages that link back to their site. I should also mention that there will be a handful of pages with the bigger clients where it may appear they have duplicate pages, because they will be selling 2 or 3 of the same item and the only difference in the content of the page might just be a stock #. The majority of the pages, though, will have unique content. So - will we be penalized in some way for having anywhere from a handful to a few hundred pages that all point to the same link? If we are penalized, what's the suggested way to handle this? We still want to give users the option to go to the client's site, and we would still like to give a link back to the client's site to help their own SE rankings.

    Read the article

  • How can I fix the #c3284d# malvertising hack on my website?

    - by crm
    For the past couple of weeks, at semi-regular intervals, this website has had the #c3284d# malware code inserted into some of its .php files; the .htaccess file also had its equivalent code inserted. I have, on many occasions, removed the malicious code, replaced files, changed the FTP password in my FTP client (which is CoreFTP), and changed the connection method to FTPS for more secure storage of the password (instead of plain text). I have also scanned my computer several times using AVG and Windows Defender, which have found no malware on my computer that might have been stealing my FTP passwords. I used Sucuri SiteCheck to check my website and it says my website is clean of malware, which is bizarre because I just attempted to click one of the links on the site a minute ago and it sent me to another one of these random stats.php sites, even though it appears I have gotten rid of the #c3284d# code again (which will no doubt be re-inserted somehow in an hour or so). Has anyone found an actual viable solution for this malware hack? I have done just about all of the things suggested here and here and the problem still persists. Currently, when I click on a link within the site's navigation menu in Google Chrome, I get Google's malware warning page: Warning: Something's Not Right Here! oxsanasiberians.com contains malware. Your computer might catch a virus if you visit this site. Google has found that malicious software may be installed onto your computer if you proceed. If you've visited this site in the past or you trust this site, it's possible that it has just recently been compromised by a hacker. You should not proceed. Why not try again tomorrow or go somewhere else? We have already notified oxsanasiberians.com that we found malware on the site. For more about the problems found on oxsanasiberians.com, visit the Google Safe Browsing diagnostic page. I'm wondering if it is possible that the Google Chrome browser I am using has itself been hacked? Does anyone else get redirected when clicking links on the website?

    Read the article

  • What happens when you close an Adsense account?

    - by rakibtg
    I need to change my payee name. I asked in the Google AdSense product forum and one of the top contributors replied: "You will have to close the account & apply again with using your real payee name. That's why they specifically state that the payee name needs to match the full name on your bank account." https://support.google.com/adsense/answer/47333?hl=en This makes sense, but I have a few questions because the support page does not have sufficient content to help me. My questions are: What happens when you close your AdSense account? If I apply again, what will the process be to regain my account? I mean, should I apply for a website again, and then the AdSense team will review and approve it? Is there any chance my account will be disapproved? What about my current checks? I have two checks in hand. Will Google send those checks to me again with my new payee name? Has anyone experienced this problem? I asked on the Google forum but got no answer!

    Read the article

  • Is it possible to trace someone using Google during an online exam?

    - by George
    I happen to be a professor at a reputed college. I want to design an online exam for over 1000 students, via around 50 computers, right after the vacation ends. Now, the problem is that I have heard that many students use Google in a different tab to find answers when no invigilator is around. I want to know if there is a way to backtrace this after the exam, via some kind of history or any other possible means. In our university there is a standard system. I am not good with computers but I will try to explain: each computer uses Mozilla to connect to a centrally located server via an IP address. The students open it and enter a unique ID and password to start the exam. Many questions are jumbled, and different groups of students take the exam in different time slots. Is there any way to trace this? I want to set an example for students so they won't cheat and will take exams honestly. Additional details: Since there are fewer computers than students, more than 10 students are going to use a single computer on a single day, over a period of 10 hours. After this, if I check the history (and let's say someone even forgot to delete the history and I can see it), will I be able to figure out who among the 10 did it? Moreover, is it even practical and feasible?

    Read the article

  • Email Discovery from Fairly Large Mailbox (15 GB), Exchange 2003

    - by nysingh
    I have a request from our legal team to search a user's mailbox. The mailbox is 15 GB and it is on Exchange 2003. I am trying to run Windows Desktop Search and Google Desktop. I have gotten them to index the mailbox, but getting the results into a folder to back up on CD is proving a bit difficult: Windows Desktop Search and Google Desktop Search do not allow you to copy results to another folder. Can anyone point me in the right direction? What is the best way to index and copy the results of a PST, mailbox, or EDB file? What are the best discovery methods? Thanks

    Read the article

  • SharePoint managed properties

    - by paulie
    Originally posted on Stack Overflow, and edited for clarity. I have a custom content type inside a list that has over 30 items (which were uploaded via DockIt), and I have added several managed properties mapped to the crawled properties in the SSP. All of them work except one. The column "Synopsis" is a multiline field with no limit on its length. It appears as a crawled property "Synopsis" and is mapped to a managed property 'asynop'. On the Advanced Search page it is added as a property and is searchable; however, it only returns some matching records (if any). I manually created an entry, ran the crawl, and was able to search for it. I then edited an existing entry, ran the crawl (full and incremental), and the search still only returned the manually entered entry. If I enter the search term in the search box directly as "asynop:fatigue", then all the correct results appear. Why is this happening? And could it please stop?

    Read the article

  • Simplifying data search using .NET

    - by Peter
    An example on the asp.net site shows how to use LINQ to build a search feature for a music album site using MVC. The code looks like this:

        public ActionResult Index(string movieGenre, string searchString)
        {
            var GenreLst = new List<string>();
            var GenreQry = from d in db.Movies
                           orderby d.Genre
                           select d.Genre;
            GenreLst.AddRange(GenreQry.Distinct());
            ViewBag.movieGenre = new SelectList(GenreLst);

            var movies = from m in db.Movies select m;
            if (!String.IsNullOrEmpty(searchString))
            {
                movies = movies.Where(s => s.Title.Contains(searchString));
            }
            if (!string.IsNullOrEmpty(movieGenre))
            {
                movies = movies.Where(x => x.Genre == movieGenre);
            }
            return View(movies);
        }

    I have seen similar examples in other tutorials and I have tried them in a real-world business app that I develop and maintain. In practice this pattern doesn't seem to scale well, because as the search criteria expand I keep adding more and more conditions, which looks and feels unpleasant and repetitive. How can I refactor this pattern? One idea I have is to create a column in every table that is "searchable"; it could be a computed column that concatenates all the data from the different columns (SQL Server 2008). So instead of having movie genre and title it would be something like:

        if (!String.IsNullOrEmpty(searchString))
        {
            movies = movies.Where(s => s.SearchColumn.Contains(searchString));
        }

    What are the performance/design/architecture implications of doing this? I have also tried using procedures that use dynamic queries, but then I have just moved the ugliness to the database. E.g.:

        CREATE PROCEDURE [dbo].[search_music]
            @title as varchar(50),
            @genre as varchar(50)
        AS
        -- set the variables to null if they are empty
        IF @title = '' SET @title = null
        IF @genre = '' SET @genre = null
        SELECT m.*
        FROM view_Music as m
        WHERE (title = @title OR @title IS NULL)
          AND (genre LIKE '%' + @genre + '%' OR @genre IS NULL)
        ORDER BY Id desc
        OPTION (RECOMPILE)

    Any suggestions? Tips?
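    One common way to cut down the repetition without resorting to a concatenated search column or dynamic SQL is to move the null/empty checks into a small IQueryable extension, so each optional criterion becomes one chained call. A sketch follows - the WhereIf name and the usage shown are illustrative, not part of the original tutorial:

```csharp
using System;
using System.Linq;
using System.Linq.Expressions;

public static class QueryableExtensions
{
    // Applies the predicate only when the condition holds, so callers can
    // chain optional filters without repeating if-blocks for each criterion.
    public static IQueryable<T> WhereIf<T>(
        this IQueryable<T> source,
        bool condition,
        Expression<Func<T, bool>> predicate)
    {
        return condition ? source.Where(predicate) : source;
    }
}

// Usage inside the action method:
// var movies = db.Movies
//     .WhereIf(!String.IsNullOrEmpty(searchString), m => m.Title.Contains(searchString))
//     .WhereIf(!String.IsNullOrEmpty(movieGenre), m => m.Genre == movieGenre);
```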

    Read the article

  • Transferring users and search engines to a new domain

    - by eftpotrm
    I've been asked to take over the maintenance of an existing site that's being reworked. At present it serves localised content for several languages, but via a fairly unhelpful mechanism that means search engines essentially only have it indexed in English, and any deep links will de facto appear in English as well. So, new localised sites are being built under separate domains - not just for this; there are other benefits. What we're then looking to do is redirect users correctly to the new site, where appropriate. For humans this isn't a problem: we can send them through a gateway page on their first site visit, grab their language preference, put it in a cookie, and redirect them to the new localised content as soon as it's available. For search engines, this isn't so good... In principle I'm happy to simply bypass the gateway page and redirect known spiders to the new site, but this means we're serving radically different content (a different URL, even!) to human and robot users. Won't this therefore be regarded as cloaking and cause us grief? Does anyone know a better way to handle this?

    Read the article

  • Can using a div with width = 0px affect SEO? [closed]

    - by user989084
    Possible Duplicate: Does google always downrank pages with hidden texts

    Right now I'm working on my new website and I'm really concerned about SEO, since the old version of my site (which is built on a script that is unusable now) has a PR of 4 and I don't want to lose it. So here is my situation: there is a panel that has 4 tabs. Each tab has a link whose href looks like "/box-page/tab/2"; when JavaScript is not enabled, clicking it goes to this page and shows the corresponding tab, and when it is enabled, it just plays a simple animation to show the other tab. There are four boxes (and four tabs), and since I needed to fix the height of the panel, I had to use width: 0 for the rest of the tabs to keep the height of the box the same as the tallest one. Inside these boxes (which have width: 0) there is some information that can be indexed by Google. As you know, Google doesn't run JavaScript, so it will go to /box-page/tab/2, /box-page/tab/3 and so on; on all of these pages the information is the same, but with a different box showing up on the page. So here is my question: does Google penalize using a div with width: 0px? And if not, does it just ignore the content of the div with width 0? (Which would be perfect for me ^^) Thanks

    Read the article

  • How to create "recurData" in Google Calendar? in C#.Net

    - by Pari
    Hi, I want to create recurring Google Calendar events using the Google API. I am following these links: Google Calendar API. I don't understand how to create the "recurData"; I can't just modify a string and pass it as a parameter. I also tried DDay.iCal version 0.80 (DDay.iCal). There is some example code given, and I tried it - I am able to create an ".ics" file - but when I pass this file's content as "recurData" I get the error: {"Execution of request failed: http://www.google.com/calendar/feeds/[email protected]/private/full?gsessionid=AHItK5wrSIoJVawFjGt-0g"} My .ics file content is:

        BEGIN:VCALENDAR
        VERSION:2.0
        PRODID:-//DDay.iCal//NONSGML ddaysoftware.com//EN
        BEGIN:VEVENT
        CREATED:20100309T132930Z
        DESCRIPTION:The event description
        DTEND:20100310T020000
        DTSTAMP:20100309T132930Z
        DTSTART:20100309T080000
        LOCATION:Event location
        SEQUENCE:0
        SUMMARY:18 hour event summary
        UID:396c6b22-277f-4496-bbe1-d3692dc1b223
        END:VEVENT
        BEGIN:VEVENT
        CREATED:20100309T132930Z
        DTEND;VALUE=DATE:20100315
        DTSTAMP:20100309T132930Z
        DTSTART;VALUE=DATE:20100314
        SEQUENCE:0
        SUMMARY:All-day event
        UID:ac25cdaf-4e95-49ad-a770-f04f3afc1a2f
        END:VEVENT
        END:VCALENDAR

    I made it using "Example6".
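    For what it's worth, the GData Calendar feed expects the recurrence value to contain only the recurrence properties (DTSTART, DTEND or DURATION, RRULE, and so on), not a complete VCALENDAR/VEVENT document like the .ics export above. Below is a hedged sketch using the GData .NET client library; the credentials, feed URI, and RRULE are placeholders:

```csharp
using System;
using Google.GData.Calendar;
using Google.GData.Client;
using Google.GData.Extensions;

class RecurrenceExample
{
    static void Main()
    {
        // Only the recurrence properties -- no BEGIN:VCALENDAR / BEGIN:VEVENT wrapper.
        string recurData =
            "DTSTART;VALUE=DATE:20100314\r\n" +
            "DTEND;VALUE=DATE:20100315\r\n" +
            "RRULE:FREQ=WEEKLY;BYDAY=SU;UNTIL=20100601\r\n";

        EventEntry entry = new EventEntry();
        entry.Title.Text = "Recurring event";
        entry.Recurrence = new Recurrence { Value = recurData };

        CalendarService service = new CalendarService("recurrence-sample");
        service.setUserCredentials("user@example.com", "password");   // placeholders

        Uri postUri = new Uri("http://www.google.com/calendar/feeds/default/private/full");
        service.Insert(postUri, entry);
    }
}
```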

    Read the article

  • How do I pass W3C validation for the Google Checkout URL?

    - by Dinesh
    When I validate the page with the W3C validator, I get a few errors with the code below:

        <input type="image" name="Google Checkout" alt="Fast checkout through Google"
               src="https://sandbox.google.com/checkout/buttons/checkout.gif?merchant_id=xxxxxxxxx&w=168&h=44&style=white&variant=text&loc=en_US" />

    The errors are as follows:

    1. cannot generate system identifier for general entity "w"
    2. reference to entity "w" for which no system identifier could be generated
    3. general entity "h" not defined and no default entity
    4. reference to entity "h" for which no system identifier could be generated
    5. general entity "style" not defined and no default entity
    6. reference to entity "style" for which no system identifier could be generated
    7. general entity "variant" not defined and no default entity
    8. reference to entity "variant" for which no system identifier could be generated
    9. general entity "loc" not defined and no default entity
    10. reference to entity "loc" for which no system identifier could be generated

    These are the only errors, and they all come from this URL; is there a way to pass W3C validation for it?

    Read the article

  • What is this obscure error in Google Analytics tracking code on a _trackEvent() call?

    - by Laizer
    I am calling the Google Analytics _trackEvent() function on a web page, and get back an error from the obfuscated Google code. In Firebug, it comes back as "q is undefined". In the Safari developer console: "TypeError: Result of expression 'q' [undefined] is not an object." As a test, I have reduced the page to only this call and still get the error. Besides the necessary elements and the standard Google tracking code, my page is:

        <script>
            pageTracker._trackEvent('Survey', 'Checkout - Survey', 'Rating', 3);
        </script>

    The result is that error. What's going on here?

    Read the article

  • Alternative to the Google Maps API, so that I can use it on an HTTPS/SSL-encrypted website

    - by Zeeshan Rang
    I have a question regarding map APIs. I was using the Google Maps API on my website before, but since I encrypted the site with HTTPS/SSL, the Google Maps API stopped working. I checked online and realised that only Google's Premier account would allow me to use the Maps API over HTTPS, and it costs $10,000 per year. I do not have that kind of money. So, can you suggest any other alternative for putting a maps API on my website? Anything that could give me driving directions would be fine. Regards, Zeeshan

    Read the article

  • How to use the correct Google OpenID URL to log in to my site?

    - by Michael Mao
    Hello everyone: I am trying to implement OpenID as one preferred login option for my next web app here. The code is taken from this tutorial and works if I use my OpenID from myopenid.com. However, I believe most people would love to just use their everyday email address as their OpenID; as far as I know, Google, Yahoo, and some other big players have already enabled this in their systems. My question is: how could I find the correct "url" to enter in the login form? I used my Google OpenID account for Stack Overflow and it works just fine. I tried to copy my OpenID like this: www.google.com/accounts/o8/id?id=aitoawllano10bzdzp3ht0diffry0qt6_j2ls-m and paste it directly into my form, but it doesn't work. I also tried removing the URL parameter, but that doesn't work either. Thanks a lot in advance for any tips and suggestions.

    Read the article

  • GWT application throws an exception when run on Google Chrome with compiler output style set to 'OBF'

    - by Elifarley
    I'd like to know if you have faced the same problem I'm facing, and how you are dealing with it. Sometimes, a small and harmless change in a Java class causes strange errors at runtime. These errors only happen if BOTH conditions below are true: 1) the application is run on Google Chrome, and 2) the GWT JavaScript compiler output style is set to 'OBF'. So, running the application on Firefox or IE always works, and running with the output style set to 'pretty' or 'detailed' always works, even on Google Chrome. Here's an example of an error message I got: "((TypeError): Property 'top' of object [object DOMWindow] is not a function stack". Here's what I have: GWT 1.5.3, GXT 1.2.4, Google Chrome 4 and 5, Windows XP. In order to get rid of this Heisenbug, I have to either deploy my application without obfuscation or endure a time-consuming trial-and-error process in which I re-implement the change in slightly different ways and re-run the application until the GWT compiler is happy with my code. Would you have a better idea on how to avoid this?

    Read the article

  • SharePoint 2010 Search not working.

    - by Ben
    I have installed and configured SharePoint 2010 to run on the same box as the SQL Server it uses, on Windows Server 2008 R2. Everything is working fine except the search. I have uploaded several documents and tagged several items (documents, tasks, announcements, etc.); however, whenever I search the site using the default search, I get nothing returned no matter what I search on - I simply get "We did not find any results for [search term]". I know there is setup needed if you wish to use FAST Search, but do I have to do anything to get the standard default search to work?

    Read the article

  • How to track different button clicks with Google Analytics and AJAX?

    - by citronas
    I have several pages; let's call them A, B and C. Each of these pages has a form where the user can type in some information and click a button to send it to the server. This button click is performed in an UpdatePanel to prevent a full postback. A customer of ours now wants to know what percentage of the users visiting each page (A, B and C have different URLs) use this form - meaning I need separate values for A, B and C. How do I track this in Google Analytics? It seems that I have to create a conversion (??) for each page. Is that correct? How must I modify the existing web application to let Google Analytics know that a user submitted the form (without having to redirect to some number of different thank-you pages)? The only piece of information I've found so far is this: http://www.google.com/support/googleanalytics/bin/answer.py?hl=en&answer=55519 Unfortunately, this FAQ entry does not cover my case.
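    One way this is commonly handled with partial postbacks is to register a script from the UpdatePanel's button handler that records a virtual page view (or an event) once the form has actually been processed. A rough sketch, assuming the classic ga.js pageTracker object is already initialised on the page; the handler and virtual page names are placeholders:

```csharp
using System;
using System.Web.UI;

public partial class PageA : Page
{
    protected void SubmitButton_Click(object sender, EventArgs e)
    {
        // ... process the submitted form data here ...

        // After the partial postback, emit a GA call for a virtual URL so each
        // page's form submissions can be counted (and used as a goal) separately.
        ScriptManager.RegisterStartupScript(
            this, GetType(), "gaFormTracking",
            "pageTracker._trackPageview('/virtual/form-submitted/page-a');",
            true);
    }
}
```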

    Read the article
