Search Results

Search found 31307 results on 1253 pages for 'google shopping api'.


  • De-index URL parameters by value

    - by Doug Firr
    This question is lengthy, so allow me to provide a one-sentence summary: I need to get Google to de-index URLs that have parameters with certain values appended.

    I have a website, example.com, with language translations. There used to be many translations, but I deleted them all so that only the English (default) and French options remain. When one selects a language option, a parameter is added to the URL. For example, the home page:

        https://example.com (default)
        https://example.com/main?l=fr_FR (French)

    I added a robots.txt to stop Google from crawling any of the language translations:

        # robots.txt generated at http://www.mcanerin.com
        User-agent: *
        Disallow:
        Disallow: /cgi-bin/
        Disallow: /*?l=

    So any page containing "?l=" should not be crawled. I checked in GWT using the robots testing tool, and it works. But under HTML Improvements, the previously crawled language-translation URLs remain indexed. The usual advice is to return a 404 for the removed URLs so that Google knows to de-index them. I checked what my CMS would serve if I visited one of the URLs that should no longer exist. This URL was listed in GWT under duplicate title tags (one of the reasons I want to scrub up my URLs):

        https://example.com/reports/view/884?l=vi_VN&l=hy_AM

    This URL should not exist, since I removed the language translations, yet the page loads when it should not. I played around and typed example.com?whatever123; it seems that a parameterized URL always loads as long as everything before the question mark is a real URL. So if Google has indexed all these URLs with parameters, how do I remove them? I cannot check whether a 404 is being generated, because a page always loads for any appended parameter.
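
    One common fix, sketched below in Python with Flask purely for illustration (the asker's CMS is unspecified, so every name here is hypothetical): serve 410 Gone whenever the l parameter carries a value that is no longer supported, so Google has an explicit removal signal to act on.

        # Hypothetical Flask front controller; the real CMS hook will differ.
        from flask import Flask, abort, request

        app = Flask(__name__)
        SUPPORTED_LANGS = {"fr_FR"}  # translations still served; adjust to the site

        @app.before_request
        def reject_removed_translations():
            lang = request.args.get("l")
            if lang is not None and lang not in SUPPORTED_LANGS:
                abort(410)  # 410 Gone: the page was removed deliberately

    Note that the Disallow: /*?l= rule would have to be lifted for this to help: while it is in place, Googlebot never refetches those URLs and so never sees the 404/410, which is consistent with the stale entries under HTML Improvements.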

    Read the article

  • Google-Chrome 10 stable crash on every page

    - by Achu
    I installed google-chrome today, and when I open any page, including Ask Ubuntu, I get this error message. I see my memory usage is normal (memory 56%, swap 4.8%). I reload, or go to another page, and it is the same problem. What is the problem? The last dmesg output:

        [26612.341865] lo: Disabled Privacy Extensions
        [29651.852476] chrome[15472] general protection ip:1528e26 sp:7fff514a9dc0 error:0 in chrome[400000+3082000]
        [31447.190586] [UFW BLOCK] IN=eth1 OUT= MAC=00:1c:25:a1:e7:67:00:16:3e:28:5a:b7:08:00 SRC=172.23.100.6 DST=172.23.20.128 LEN=69 TOS=0x00 PREC=0x00 TTL=128 ID=15939 PROTO=UDP SPT=4243 DPT=161 LEN=49
        [31451.250190] [UFW BLOCK] IN=eth1 OUT= MAC=00:1c:25:a1:e7:67:00:16:3e:28:5a:b7:08:00 SRC=172.23.100.6 DST=172.23.20.128 LEN=69 TOS=0x00 PREC=0x00 TTL=128 ID=16180 PROTO=UDP SPT=4243 DPT=161 LEN=49
        [31454.260150] [UFW BLOCK] IN=eth1 OUT= MAC=00:1c:25:a1:e7:67:00:16:3e:28:5a:b7:08:00 SRC=172.23.100.6 DST=172.23.20.128 LEN=69 TOS=0x00 PREC=0x00 TTL=128 ID=16322 PROTO=UDP SPT=4243 DPT=161 LEN=49
        [31458.648164] [UFW BLOCK] IN=eth1 OUT= MAC=00:1c:25:a1:e7:67:00:16:3e:28:5a:b7:08:00 SRC=172.23.100.6 DST=172.23.20.128 LEN=69 TOS=0x00 PREC=0x00 TTL=128 ID=16513 PROTO=UDP SPT=4243 DPT=161 LEN=49
        [33124.300112] lo: Disabled Privacy Extensions
        [33601.021406] Skipping EDID probe due to cached edid
        [34594.043501] chrome[15746]: segfault at 0 ip 0000000000d5cdd0 sp 00007fff5149ec20 error 6 in chrome[400000+3082000]
        [34597.395334] chrome[18112] general protection ip:17c85bf sp:7fff514aa4f0 error:0 in chrome[400000+3082000]
        [34616.786643] chrome[18124]: segfault at 1007 ip 00000000017c849f sp 00007fff514aabd0 error 4 in chrome[400000+3082000]
        [37277.436207] lo: Disabled Privacy Extensions
        [38549.501390] e1000e: eth1 NIC Link is Down
        [38551.122253] e1000e: eth1 NIC Link is Up 100 Mbps Full Duplex, Flow Control: RX/TX
        [38551.122263] e1000e 0000:00:19.0: eth1: 10/100 speed: disabling TSO

    Read the article

  • What are some good Photo and Artwork APIs?

    - by Ryan T
    We had an idea for starting an ecards service and were looking into the possibility of populating our site using photo/artwork APIs. For legal reasons, Flickr probably won't work, although I've started to scour the web for other options. Basically we just need two functions (see the sketch after this list):

    1. the user should be able to browse the site's collection and choose a picture;
    2. we should be able to recall and render a specific picture on our site.

    From there we should have no problem building our application. The main obstacle is that we're lacking content at the moment. I haven't been able to find many examples of this being done, so I was wondering if anyone here knows people who have done something similar to what we're trying to do, or has any leads that might help us out. Suggestions for other APIs that are out there, or forums/communities that might point us in the right direction, are also welcome.
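
    For concreteness, the two required functions amount to an interface like the following minimal Python sketch; the endpoints, parameters, and the PhotoCatalog name are hypothetical placeholders rather than any specific service's API.

        # Hypothetical wrapper around whichever photo/artwork API is chosen.
        import requests

        class PhotoCatalog:
            def __init__(self, base_url, api_key):
                self.base_url = base_url
                self.api_key = api_key

            def browse(self, page=1, per_page=20):
                """Function 1: list pictures so a user can browse and choose."""
                r = requests.get(f"{self.base_url}/photos",
                                 params={"page": page, "per_page": per_page,
                                         "api_key": self.api_key})
                r.raise_for_status()
                return r.json()

            def fetch(self, photo_id):
                """Function 2: recall one specific picture for rendering."""
                r = requests.get(f"{self.base_url}/photos/{photo_id}",
                                 params={"api_key": self.api_key})
                r.raise_for_status()
                return r.json()

    Any candidate API that can back these two calls, with licensing that permits ecards, would do.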

    Read the article

  • Google Chrome on Ubuntu 12.04 not rendering webpages correctly

    - by sumit_gt
    I am facing some serious web-page rendering issues with Chrome. It is most prominent during JavaScript-based animations and the like on websites such as YouTube. I have tried removing Chrome (sudo apt-get purge google-chrome-stable) and then reinstalling it, but the problems persist. The same webpages work correctly in Firefox on Ubuntu and in Chrome on Windows; the problem only shows up when I use Chrome on Ubuntu. I think the issue started after I updated to the latest version of Chrome; I have used Chrome on this machine previously without any problems. I have attached an image that demonstrates the issue. What could possibly be the problem? PS: here's the output of lshw -c video:

        *-display
             description: VGA compatible controller
             product: Madison [Radeon HD 5000M Series]
             vendor: Hynix Semiconductor (Hyundai Electronics)
             physical id: 0
             bus info: pci@0000:01:00.0
             version: 00
             width: 64 bits
             clock: 33MHz
             capabilities: pm pciexpress msi vga_controller bus_master cap_list rom
             configuration: driver=fglrx_pci latency=0
             resources: irq:46 memory:e0000000-efffffff memory:f0020000-f003ffff ioport:d000(size=256) memory:f0000000-f001ffff

    The output of lspci -nn was attached as a link.

    Read the article

  • Duplicate content appearing for multi lingual sites

    - by Rocky Singh
    I have a site with a default URL, say "http://www.blahblah.com/" (which defaults to English). The site supports multiple languages: the home page has links labelled "English", "French", "Spanish", etc., and clicking one redirects the user to:

        http://www.blahblah.com/en-us/ (English)
        http://www.blahblah.com/fr-ca/ (French)
        http://www.blahblah.com/spanish-culture/ (Spanish)

    Based on the culture segment in the URL, I show end users the content in their desired language. Now, the issue is with SEO. I noticed (via Google Webmaster Tools) that Google considers my site's pages duplicates, for example:

        1. http://www.blahblah.com/documents/ and http://www.blahblah.com/en-us/documents/
        2. http://www.blahblah.com/news/ and http://www.blahblah.com/en-us/news/

    Similarly, all the pages are flagged as duplicate content in Google Webmaster Tools. I am worried that my site is being penalized in the rankings because of this. Could you suggest how to overcome this situation?
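
    The standard remedy is to declare one canonical URL per page, plus hreflang alternates, so the bare and culture-prefixed copies collapse into one entry. Below is a minimal sketch of a template helper, written in Python purely for illustration (the asker's stack is unspecified; choosing the en-us copy as canonical is one consistent option, and pointing at the bare URL would work equally well, as long as it is consistent):

        # Hypothetical helper: emit canonical + hreflang tags for a request path.
        BASE = "http://www.blahblah.com"
        CULTURES = {"en-us": "en", "fr-ca": "fr", "spanish-culture": "es"}

        def head_links(path):
            parts = [p for p in path.split("/") if p]
            if parts and parts[0] in CULTURES:      # strip any culture prefix
                parts = parts[1:]
            bare = "/" + "/".join(parts) + ("/" if parts else "")
            tags = ['<link rel="canonical" href="%s/en-us%s" />' % (BASE, bare)]
            for prefix, lang in CULTURES.items():
                tags.append('<link rel="alternate" hreflang="%s" href="%s/%s%s" />'
                            % (lang, BASE, prefix, bare))
            return "\n".join(tags)

    Once both http://www.blahblah.com/documents/ and http://www.blahblah.com/en-us/documents/ emit the same canonical tag, Google should fold the pair together instead of flagging it.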

    Read the article

  • Google I/O 2010 - Porting v2 JavaScript Maps API apps to v3

    Google I/O 2010 - Stepping up: Porting v2 JavaScript Maps API applications to v3 (Geo 201; Daniels Lee). The JavaScript Maps API v3 is the future of the Google Maps API. To take advantage of the many great features coming to the API, you will need to migrate existing v2 applications to v3. This session will guide you through the process, illustrating how easy it is to start reaping the benefits in features and performance. For all I/O 2010 sessions, please go to code.google.com. From: GoogleDevelopers | Views: 10 | 0 ratings | Time: 01:04:07 | More in Science & Technology

    Read the article

  • Restful WebAPI VS Regular Controllers

    - by Rohan Büchner
    I'm doing some R&D on what seems like a very confusing topic. I've read quite a few of the other SO questions, but I feel my question might be unique enough to warrant asking. We've never developed an app using pure WebAPI. We're trying to write a SPA-style app, where the back end is fully decoupled from the front-end code. Assuming our service does not know anything about who is accessing/consuming it, WebAPI seems like the logical route to serve data, as opposed to using standard MVC controllers and serving our data via an action result converted to JSON. That, at least, seems like an "MC" design, which seems odd and not what MVC was meant for (look mom... no view). What would be the normal convention for performing action-style calls? My sense is that my understanding of WebAPI is incorrect. The way I perceive WebAPI, it's meant to be used in a CRUD sense, but what if I want to do something like "InitialiseMonthEndPayment"? Would I need to create a WebAPI controller called InitialiseMonthEndPaymentController and then perform a POST? That seems a bit weird, as opposed to an MVC controller where I can just add a new action called InitialisePayment on the MonthEnd controller. Or does this require a mindset shift in terms of design? Any further links on this topic would be really useful, as my fear is that we implement something weird that could turn into a coding/maintenance concern later on.
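
    One common answer to the "InitialiseMonthEndPayment" puzzle is to model the verb as a resource: POSTing to a collection of month-end payment runs creates one. The sketch below uses Python/Flask purely for illustration (the question's stack is ASP.NET, and all names here are hypothetical); the shape, not the framework, is the point.

        # RPC-ish "InitialiseMonthEndPayment" recast as resource creation.
        from flask import Flask, jsonify, request

        app = Flask(__name__)
        runs = []  # stand-in for a persistence layer

        @app.route("/month-end-payment-runs", methods=["POST"])
        def initialise_month_end_payment():
            run = {"id": len(runs) + 1,
                   "month": request.json["month"],
                   "status": "initialised"}
            runs.append(run)
            return jsonify(run), 201  # 201 Created, pointing at the new run

    The mindset shift is the one the question suspects: instead of adding an InitialisePayment action to a MonthEnd controller, the action becomes a noun you can create, inspect, and later cancel or query.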

    Read the article

  • Language redirect affecting pagerank and search listing?

    - by Janoszen
    Preface: we have a number of sites that use the same redirect mechanism across the board. We recently transitioned one site from non-localised to localised, and detected that the Google+ integration no longer shows up in the search results AND the PageRank has gone from 2 to 0.

    How the redirect works:

    1. If the UA sends a cookie (e.g. lang=en), redirect the user to /language (e.g. /en).
    2. If the UA is a bot (.*bot.*), redirect to /en.
    3. If the Accept-Language header contains a usable, non-English language, redirect to /language (English is the default on many browsers in non-English regions).
    4. If there is a valid GeoIP lookup and the detected region is linked to a supported language, redirect to /language.
    5. Otherwise, redirect to /en.

    On all pages we do of course have the proper markup to indicate the alternate language:

        <link hreflang="de" href="/de" rel="alternate" />

    As far as we can tell, we follow all publicly available guidelines from Google, so we are a bit at odds over whether this is a bug in Google or something we have done wrong.

    Question: does having no content on the root URL of a domain adversely affect search-engine rankings, and if yes, how does one implement a proper language redirect?
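
    For reference, the decision chain above amounts to something like the following sketch, in Python purely for illustration (function and variable names are hypothetical; the real implementation is server-specific):

        import re

        SUPPORTED = {"en", "de", "fr"}   # assumed set of site languages

        def pick_language(cookie_lang, user_agent, accept_language, geoip_lang):
            if cookie_lang in SUPPORTED:                      # rule 1: cookie wins
                return cookie_lang
            if re.search(r"bot", user_agent or "", re.I):     # rule 2: bots get /en
                return "en"
            for entry in (accept_language or "").split(","):  # rule 3: Accept-Language
                code = entry.split(";")[0].strip()[:2].lower()
                if code in SUPPORTED and code != "en":
                    return code
            if geoip_lang in SUPPORTED:                       # rule 4: GeoIP
                return geoip_lang
            return "en"                                       # rule 5: default

        # The redirect target is then "/" + pick_language(...), e.g. "/de".

    Note that under rule 2 a crawler never receives content at the bare root, only a redirect, which is precisely the situation the question asks about.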

    Read the article

  • Title of the page in search results and title of google's cached version are different. Why?

    - by Azmorf
    Check this: http://www.google.com/search?q=site:gunlawsbystate.com+kansas+gun+laws

    The title of the first result is "Kansas Gun Laws - Gun Laws By State". However, on the page Google has cached, the title is different:

        <title>Kansas Gun Laws - Kansas Gun Law - Reciprocity Guide</title>

    Google shows the title that was on the site 2-3 months ago. Googlebot has visited the website many times since then, and as you can see it has even cached it (the latest version is from 15th Sept), yet for some reason it doesn't change the title to the new one in the search results. We use a hash-bang URL structure on this website; it completely meets Google's requirements for AJAX websites (the _escaped_fragment_ mechanism). The issue I explained is happening with almost all hash-bang pages that got indexed. Questions:

    1. Why does it keep the old page title in the search results? Could it be connected to the fact that I'm using hash-bang URLs? There are lots of pages on the site with the same issue, all of them with hash-bang URLs.
    2. Another thing I noticed is that Google's "Preview" feature doesn't work for any hash-bang URLs on the site. Did I do anything wrong? It has cached versions of the pages, so why wouldn't it generate a preview?

    Thanks (and sorry for my English). PS. Here's a weird thing I also noticed: the search query https://www.google.com/search?q=Kansas+Gun+Laws+-+Reciprocity+Guide shows the correct title for the same page as in the example above. Why does Google show different titles for the same page under different queries?

    Read the article

  • How to overcome politics of the net (Google translate code refuses to work from a specific region)

    - by Jawad
    According to the FAQs I am not sure if my question is OK to ask, will be closed, or should be posted on meta; however, it is one that has been bugging me since the trouble started. Let me explain. I have this website. It uses the Google Translate API (I can't post the link; it does not open from this region) with the following code:

        <meta name="google-translate-customization" content="9f841e7780177523-3214ceb76f765f38-gc38c6fe6f9d06436-c"></meta>
        <script type="text/javascript">
        function googleTranslateElementInit() {
          new google.translate.TranslateElement({pageLanguage: 'en'}, 'google_translate_element');
        }
        </script>
        <script type="text/javascript" src="http://translate.google.com/translate_a/element.js?cb=googleTranslateElementInit"></script>

    The problem is that it has simply stopped working. On the site you can see that I had to remove the above code from several pages (linked) while leaving it on others, because the website "refuses" to load at all, from this region, on any page that includes the code. If I open the site in Firefox with the Stealthy plugin, it works like a charm; but in Google Chrome, Apple Safari and Opera the site does not load at all because of the Google Translate code. (I know this because if I remove the code, the site works and loads fine.) It was one thing to program for cross-browser compatibility, and altogether another to program for "cross-region compatibility". What can I do to make sure the site works from anywhere? Do I completely remove the Google Translate code and do without the additional functionality, or do I look for alternatives (also linked)?

    Read the article

  • Google affecting my SERP Rank?

    - by Asad Moeen
    The following are some of my website's details. Home page: [thebluewaffles].[com]. Keywords: "Blue Waffles"; the rest of the keywords are post/subject-specific. Site description: health-articles blog. Site age: 1.5 years.

    A short history: when I started the website, the few things on my mind when posting content were at least 500 words on each page and writing every article with to-the-point information. I didn't go very fast, which is why I have only about 15 articles in 1.5 years. The SEO strategy was simple: I shared links through social-marketing websites and some article-sharing websites, after which I could see my website in the top 5 SERP results. I ranked well enough for about 8 continuous months, but then came some 3 rough months when no content was posted due to personal work. The SERPs dropped to the 2nd page in April and almost disappeared in May. I asked a lot of people about it, and most offered "no updates to the site" as the reason, so I started updating my site again. November has now almost started, and I still see no sign of my website's rankings. Another important point: when I post a new article and do a title search on Google, it ranks well for the first 10 hours and then disappears. What could be wrong here?

    Read the article

  • Ranking drop after using reverse proxy for blog subdirectory and robots.txt for old blog subdomain

    - by user40387
    We have a 3Dcart store and a WordPress blog hosted on a separate server. Originally, we had a CNAME set up to point the blog to http://blog.example.com/. However, in our attempt to boost link-based and traffic-based authority on the main site, we've opted to do a reverse proxy to http://www.example.com/blog/.

    It's been about two months since we finished the reverse-proxy migration. Everything appears to be working technically as intended, including some robots and sitemap changes; the new URLs are even generating some traffic, as indicated in Google Analytics. While Google has been indexing the new URL locations, they're ranking very poorly, even for non-competitive, long-tail keywords. Meanwhile, the old subdomain URLs are still ranking mostly as well as they used to (even though they aren't showing meta titles and descriptions, due to being blocked by robots.txt).

    Our working theory is that Google has an old index of the subdomain URLs and is considering the new URLs duplicate content, since it's being told not to crawl the subdomain and therefore can't see the rel canonicals we have in place. To resolve this, we've updated the subdomain's robots.txt to no longer block crawling and indexing; in theory, seeing the canonical tag on the subdomain pages will resolve any perceived duplicate-content issues. In the meantime, we were wondering if anyone has any other ideas. We are very concerned about losing valuable traffic, as we're entering our peak season at the moment.

    Read the article

  • Using Google App Engine to Perform World Updates vs an Authoritative Server

    - by Error 454
    I am considering different game-server architectures that use GAE. The games I have in mind are turn-based, with a world state that needs updating about once per minute. I am looking for an answer that persuades me either to perform the world update on the Google servers OR on an authoritative server that syncs with the datastore. The main goal is to minimize GAE daily quotas. For some rough numbers, I am assuming 10,000 entities requiring updates, where each entity update requires:

    - reading 5 private entity variables (fetched from the datastore);
    - fetching as many as 20 static variables (from the datastore, or persisted in server memory);
    - writing 5 entity variables.

    Clients of the game would authenticate and set state directly against GAE, as well as pull the latest world state from GAE. Running the update on GAE would consist of a cron job launched every minute, updating all of the entities and saving the results to the datastore; this is more CPU-intensive for GAE. Running the update on an authoritative server would consist of fetching entity data from the GAE datastore, calculating the new entity states, and pushing the new state variables back to the datastore; this is more bandwidth-intensive for the datastore.
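
    For the cron-driven variant, here is a minimal sketch using the classic App Engine Python runtime (the model, its properties, and advance() are hypothetical stand-ins for the game's actual schema and turn logic):

        from google.appengine.ext import ndb
        import webapp2

        class GameEntity(ndb.Model):       # hypothetical model
            state = ndb.JsonProperty()     # the ~5 private variables, etc.

        def advance(state):
            # Placeholder for the turn logic (combines the private and
            # static variables into the next state).
            return state

        class WorldUpdate(webapp2.RequestHandler):
            def get(self):
                entities = GameEntity.query().fetch()
                for e in entities:
                    e.state = advance(e.state)
                ndb.put_multi(entities)    # batch write: fewer datastore ops

        # cron.yaml would point a one-minute schedule at this URL.
        app = webapp2.WSGIApplication([('/tasks/update', WorldUpdate)])

    At 10,000 entities a single request may hit request-time limits, so in practice the cron job would fan the work out over task-queue batches; either way, the quota trade-off is the one described above.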

    Read the article

  • Sales tracker that allows complex queries?

    - by feklee
    On a site, every click on a product should be registered by a sales tracker: price, type, etc. The sales tracker should provide an API so that complex queries can be performed, such as: which products of type "teapot" had a price below 20 EUR?

    Requirements:

    - Recorded data should be available for querying no later than two hours after it has been recorded. (There are reports, for example, that Google Analytics may take up to 24 h to update data; that is not acceptable.)
    - Querying doesn't need to be fast, but recording does (of course).

    Which sales trackers allow complex queries against collected data?
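
    If no hosted tracker fits both requirements, the fallback is to record clicks yourself, at which point the example query is ordinary SQL. A minimal self-hosted sketch (Python with sqlite3, purely for illustration; table and function names are hypothetical):

        import sqlite3, time

        db = sqlite3.connect("clicks.db")
        db.execute("""CREATE TABLE IF NOT EXISTS clicks
                      (ts REAL, product TEXT, type TEXT, price_eur REAL)""")

        # Recording must be fast: one parameterized INSERT per click.
        def record_click(product, ptype, price_eur):
            db.execute("INSERT INTO clicks VALUES (?, ?, ?, ?)",
                       (time.time(), product, ptype, price_eur))
            db.commit()

        # Querying may be slow: e.g. which teapots were clicked at under 20 EUR?
        def products_below(ptype="teapot", limit_eur=20.0):
            return db.execute("""SELECT DISTINCT product FROM clicks
                                 WHERE type = ? AND price_eur < ?""",
                              (ptype, limit_eur)).fetchall()

    Hosted candidates can then be judged by one test: can their query API express that WHERE clause, on data no more than two hours old?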

    Read the article

  • New site not appearing in index after change of address, no feedback from google webmaster tools

    - by Duffy
    Our change of address seems not to be taking effect. Here's the story so far. We're a web company and our product is called The New Hive. Our site used to be at thenewhive.com, but we decided to switch to newhive.com (dropping the "the"; it's cleaner). The timeline of what I've tried, starting on July 29th:

    1. Set up 301 redirects for all pages (e.g. thenewhive.com/tag/art to newhive.com/tag/art). At this point we noticed that we had disappeared from the results when searching "The New Hive"; the front page used to be all links to our site plus a couple of news articles about the company.
    2. On August 5th: verified the new domain in Webmaster Tools (the old domain was already verified) and submitted a change-of-address request under Webmaster Tools / Configuration / Change of Address.
    3. After another week, on August 13th: went to Webmaster Tools / Health / Fetch as Google, fetched our homepage and a couple of sub-pages (all successfully), and clicked "Submit to Index" for the homepage.

    As of today (August 23rd) we're still not showing up in the index. We're getting no warnings or feedback of any kind from the dashboard, so I'm inclined to think something's broken with the dashboard rather than that something's wrong with our site from an SEO perspective. From the dashboard: no new messages or recent critical issues; Crawl Errors: no data available. From Health - Index Status: Total indexed 0; Ever crawled 42,490; Not selected 12; Blocked by robots 0. I'm really at a loss here; any help would be appreciated.

    Read the article

  • Pasting from vim in terminal to Google Docs (Firefox + Vimperator) - need to understand

    - by LIttle Ancient Forest Kami
    I had some trouble copy-pasting text from vim in a terminal into a Google Docs (aka Drive) document (hereafter GDd) in a Firefox browser (with Vimperator). Notes:

    - I have a file opened in Vim 7.2 in a terminal; :version displays both +clipboard and +xterm-clipboard.
    - I'm on Ubuntu 10.04 LTS, so I don't think this is Unity-related.
    - I want to use Vim, not GVim, nor gedit...
    - I'm an avid fan of mouseless navigation, so a solution involving the mouse was not what I wanted.

    I have the solution, but I need understanding. What I tried and where it got me:

    1. Yanking the whole file's text via ggvGy allows me to paste it via the mouse middle button (NOT with Ctrl+v or Shift+Insert) here, in the text area for entering question text, and in gedit, but NOT in GDd where I want it pasted, even if I switch Vimperator to pass-through mode with Insert. It also does NOT show in xclip after xclip -o. From gedit, I can copy-paste the text into GDd (Vimperator's pass-through mode not required).
    2. :%!xclip -i (or :first,last) reports the whole file (all lines, to be precise) as filtered, though the shell returns 1; xclip -o returns nothing (is empty) or returns the previously copied value. No surprise, then, that I can't paste at all, not only into GDd but also into gedit or here.
    3. Setting clipboard to unnamed (:set clipboard=unnamed) doesn't help.
    4. Using "+y or "*y on the whole file's text actually does the trick.

    So, the question (it's actually three; say "split" and I will):

    1. Why does the middle mouse button paste different things than Ctrl+v, and how can I know what will be pasted with each?
    2. Why does plain yanking (without registers) work with the mouse but not with the keyboard / xclip?
    3. Why didn't unnamed help? After that setting, the unnamed and * registers should be the same?

    Read the article

  • Google I/O 2010 - How Maps API v3 came to be

    Google I/O 2010 - How Maps API v3 came to be: tips, tricks, and lessons learned in developing a cross-platform desktop and mobile API (Geo, Tech Talks; Susannah Raub, Marc Ridey). The Google JavaScript Maps API v3 celebrates its one-year anniversary at this year's Google I/O. In this session, we reveal the reasons for embarking on a new API, the challenges we faced in developing a truly cross-platform and cross-device framework, and the lessons learned on the way. For all I/O 2010 sessions, please go to code.google.com. From: GoogleDevelopers | Views: 5 | 0 ratings | Time: 48:08 | More in Science & Technology

    Read the article

  • Search for specific info on pages

    - by Alexis
    I am currently working on a challenge: basically, using the Google API, I search Facebook fan pages, most specifically the "about" timeline, for the email address and the date the business was founded. So far I have come to this:

        site:facebook.com/pages + "business type" + "country" + "@email.com"

    I need to add something else so that it also gives me back the date the business was founded. If you look at the About section of a Facebook fan page, there is, for example, "(Founded 06/02/2010)". The bracketed info above is what I need to match by adding to the string; any ideas?

    Read the article

  • Google Chrome with strange behavior

    - by user72274
    I'm a former Chromium-browser user, but after the PPA went 2 months without an upgrade, I switched to the Google Chrome browser yesterday. Everything is OK except some strange behavior on some pages, and crashes after loading "chrome://" configuration pages. The best-known website with the strange behavior is YouTube; there is a picture of what I see attached. When I open the user menu in the top-right corner, it breaks that way, and even after closing the menu some parts of the menu stay displayed. You may say it's a YouTube problem; no, I have this problem on at least three other websites. Here it is on Imgur (also attached). The problem isn't always the whole side; sometimes it starts from the middle of the screen. The interesting part is that it always happens at the same distance from the right border. When I check the DOM elements with the developer tools, the overlay that shows an element's position is rendered as it should be. What is more, if there is an anchor inside the crashed area, it still works when clicked. Selecting text on a crashed page is impossible. I hope there is enough information here to give me some advice; thanks in advance. :) EDIT: here is what the browser reports in "chrome://gpu-internals/":

        Graphics Feature Status
        Canvas: Software only, hardware acceleration unavailable
        Compositing: Hardware accelerated
        3D CSS: Hardware accelerated
        CSS Animation: Software animated
        WebGL: Hardware accelerated
        WebGL multisampling: Hardware accelerated

        Problems Detected
        Accelerated CSS animation has been disabled at the command line.
        Accelerated 2d canvas is unstable in Linux at the moment.

    Ubuntu 12.04 | GNOME Shell 3.4.1 | ATI Radeon 4550 | screen resolution 1024x768 | Chrome version 20.0.1132.57 (Official Build 145807)

    Read the article

  • Google Indexing Issue after htaccess changes

    - by Klement
    I have a site called www.FuneralCoverFinder.co.za, with about 30 pages, of which usually 29 are indexed (excluding 15 blog posts); they are new. I recently upgraded my entire site and made some redirection changes in my .htaccess file: I made my URLs more SEO-friendly (removing index.php/) and redirected dead pages to working pages. I have tons of unique content, all checked with Grammarly and Plagium to ensure there is no duplicate content. I have since resubmitted my sitemap to Google and now have only one page indexed. It happened within a couple of minutes; I usually see results almost immediately after submitting, but now it's stuck at 1 page indexed. I assume I may have made errors in the .htaccess file, as this was my first attempt. The site runs perfectly, and all the URLs redirect the way they should; I'm just scared I have some loop or other, although the website runs fine. I still see many of my old indexed pages in the SERPs, and I'm worried that the issue with the new sitemap could harm my rankings. My website is fairly SEO-optimized on-site, and I have about 1500 indexed backlinks, built steadily over about half a year. I would really appreciate some clarity on this matter.

    Read the article

  • Google Code C++ [closed]

    - by JacKeown
    I was on this website: http://code.google.com/edu/languages/cpp/basics/getting-started.html#learn-by-example and I saw this code:

        // Description: Illustrate the use of cin to get input
        // and how to recover from errors.
        #include <iostream>
        using namespace std;

        int main() {
          int input_var = 0;
          // Enter the do while loop and stay there until either
          // a non-numeric is entered, or -1 is entered. Note that
          // cin will accept any integer, 4, 40, 400, etc.
          do {
            cout << "Enter a number (-1 = quit): ";
            // The following line accepts input from the keyboard into
            // variable input_var.
            // cin returns false if an input operation fails, that is, if
            // something other than an int (the type of input_var) is entered.
            if (!(cin >> input_var)) {
              cout << "Please enter numbers only." << endl;
              cin.clear();
              cin.ignore(10000, '\n');
            }
            if (input_var != -1) {
              cout << "You entered " << input_var << endl;
            }
          } while (input_var != -1);
          cout << "All done." << endl;
          return 0;
        }

    I was wondering: what is the significance of cin.clear() and cin.ignore(), and why are the 10000 and '\n' parameters necessary?

    Read the article

  • How can I display the users profile pic using the facebook graph api?

    - by kielie
    Hi, I would like to display the user's profile picture inside my application's canvas page. Is there a way to do that using the Graph API? I know I can do it using FBML, but I would also like to pass the profile pic to a Flash game I am making, so I would have to get the profile pic from the API and send it as a variable. Here is the code I have thus far:

        $facebook = new Facebook(array(
            'appId'  => FACEBOOK_APP_ID,
            'secret' => FACEBOOK_SECRET_KEY,
            'cookie' => true,
            'domain' => 'myurl/facebook-test'
        ));
        $session = $facebook->getSession();
        $uid = $facebook->getUser();
        $me = $facebook->api('/me');
        $updated = date("l, F j, Y", strtotime($me['updated_time']));
        echo "Hello " . $me['name'] . $me['picture'] . "<br />";
        echo "<div style=\"background:url(images/bg.jpg); width:760px; height:630px;\">"
            . "You last updated your profile on " . $updated . "</div>"
            . "<br /> your uid is " . $uid;

    Thanks in advance!

    Read the article

  • java.lang.ClassNotFoundException

    - by user341493
    Hey everyone, I have a Java project that I'm working on which was working until a few days ago. I'm not sure what I did to my Eclipse set-up to hose it, but now I get a java.lang.ClassNotFoundException when I run code that accesses the Google Finance API. I've built a small test application that uses the Google Finance API on its own, and that seems to work, so I think this is a project-specific problem. Any help would be greatly appreciated. Here's the stack trace:

        ptolemy.kernel.util.IllegalActionException: in .RandomSearch.manager
        Because: com/google/common/collect/Maps
            at ptolemy.actor.Manager.execute(Manager.java:472)
            at ptolemy.actor.Manager.run(Manager.java:1119)
            at ptolemy.actor.Manager$3.run(Manager.java:1160)
        Caused by: java.lang.NoClassDefFoundError: com/google/common/collect/Maps
            at com.google.gdata.wireformats.AltRegistry.(AltRegistry.java:118)
            at com.google.gdata.wireformats.AltRegistry.(AltRegistry.java:100)
            at com.google.gdata.client.Service.(Service.java:546)
            at AtomicBroadcast.GoogleFinance.GooglePortfolioReader.fire(GooglePortfolioReader.java:108)
            at ptolemy.domains.de.kernel.DEDirector.fire(DEDirector.java:568)
            at ptolemy.actor.CompositeActor.fire(CompositeActor.java:458)
            at ptolemy.actor.Manager.iterate(Manager.java:714)
            at ptolemy.actor.Manager.execute(Manager.java:349)
            ... 2 more
        Caused by: java.lang.ClassNotFoundException: com.google.common.collect.Maps
            at java.net.URLClassLoader$1.run(URLClassLoader.java:217)
            at java.security.AccessController.doPrivileged(Native Method)
            at java.net.URLClassLoader.findClass(URLClassLoader.java:205)
            at java.lang.ClassLoader.loadClass(ClassLoader.java:319)
            at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:294)
            at java.lang.ClassLoader.loadClass(ClassLoader.java:264)
            at java.lang.ClassLoader.loadClassInternal(ClassLoader.java:332)
            ... 10 more
        Caused by: java.lang.NoClassDefFoundError: com/google/common/collect/Maps
            at com.google.gdata.wireformats.AltRegistry.(AltRegistry.java:118)
            at com.google.gdata.wireformats.AltRegistry.(AltRegistry.java:100)
            at com.google.gdata.client.Service.(Service.java:546)
            at AtomicBroadcast.GoogleFinance.GooglePortfolioReader.fire(GooglePortfolioReader.java:108)
            at ptolemy.domains.de.kernel.DEDirector.fire(DEDirector.java:568)
            at ptolemy.actor.CompositeActor.fire(CompositeActor.java:458)
            at ptolemy.actor.Manager.iterate(Manager.java:714)
            at ptolemy.actor.Manager.execute(Manager.java:349)
            at ptolemy.actor.Manager.run(Manager.java:1119)
            at ptolemy.actor.Manager$3.run(Manager.java:1160)
        Caused by: java.lang.ClassNotFoundException: com.google.common.collect.Maps
            at java.net.URLClassLoader$1.run(URLClassLoader.java:217)
            at java.security.AccessController.doPrivileged(Native Method)
            at java.net.URLClassLoader.findClass(URLClassLoader.java:205)
            at java.lang.ClassLoader.loadClass(ClassLoader.java:319)
            at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:294)
            at java.lang.ClassLoader.loadClass(ClassLoader.java:264)
            at java.lang.ClassLoader.loadClassInternal(ClassLoader.java:332)
            ... 10 more

    Read the article
