Search Results

Search found 22027 results on 882 pages for 'google analytics'.

Page 63/882

  • Ideas for my MSc project and Google Summer of Code 2011

    - by Chris Wilson
    I'm currently putting together ideas for my master's project, which I'll be working on over the summer, and I would like to use this time to help Ubuntu in some way. I have the freedom to come up with pretty much any project in the field of software development/engineering, provided it:

    - is a substantial piece of software (for reference, I will be working on it for five full months), and
    - solves a problem for more people than just myself.

    I was hoping to use this project as an opportunity to get some experience with the underbelly of Linux, so that I can mention on my CV that I have 'experience in developing for *NIX in C++', which I'm noticing more and more companies are looking for these days, probably because stuff's moving to cloud servers and that's where Linux rules the roost. My problem is that, since I don't have the experience to begin with, I'm not sure what to do for such a project, and I was wondering if anyone could help me with this.

    I've noticed from Daniel Holbach's blog that Ubuntu participated in the Google Summer of Code 2010, and that project ideas for it can be found here. However, I have not been able to find anything related to Ubuntu and GSoC 2011, though I have noticed from the GSoC timeline that the list of mentoring organisations will not be published until March 18th. I have two questions here:

    1. Has Ubuntu applied to be a part of Summer of Code 2011?
    2. What is the status of the 2010 project list linked to earlier? Were they all implemented, or are there still some that can be picked up now, should I not participate in GSoC?

    I'd like to do something for Ubuntu, but I'd rather not spend my time reinventing the wheel.

  • De-index URL parameters by value

    - by Doug Firr
    This question is lengthy, so allow me to provide a one-sentence summary: I need to get Google to de-index URLs that have parameters with certain values appended.

    I have a website, example.com, with language translations. There used to be many translations, but I deleted them all so that only English (default) and French remain. When one selects a language option, a parameter is added to the URL. For example, the home page:

        https://example.com (default)
        https://example.com/main?l=fr_FR (French)

    I added a robots.txt to stop Google from crawling any of the language translations:

        # robots.txt generated at http://www.mcanerin.com
        User-agent: *
        Disallow:
        Disallow: /cgi-bin/
        Disallow: /*?l=

    So any pages containing "?l=" should not be crawled. I checked in GWT using the robots testing tool, and it works. But under HTML improvements, the previously crawled language-translation URLs remain indexed.

    The internet says to return a 404 for the removed URLs so that Google knows to de-index them. I checked to see what my CMS would serve if I visited one of the URLs that should no longer exist. This URL was listed in GWT under duplicate title tags (one of the reasons I want to clean up my URLs):

        https://example.com/reports/view/884?l=vi_VN&l=hy_AM

    This URL should not exist - I removed the language translations. The page loads when it should not! I played around and typed example.com?whatever123; it seems that parameters always load as long as everything before the question mark is a real URL.

    So if Google has indexed all these URLs with parameters, how do I remove them? I cannot check whether a 404 is being generated, because the page always loads for any parameter that needs to be de-indexed.
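
    One direction worth noting (a sketch only, not from the original post): robots.txt blocking prevents Googlebot from ever fetching these URLs, so it can never observe a 404 or a noindex on them. The ?l= block would have to be lifted, and the server made to answer removed locales with a 404. Assuming a hypothetical Express front controller, that check could look like:

        // Sketch: answer 404 for locale parameters that no longer exist,
        // so the crawler can observe the removal. The locale list is an assumption.
        const express = require('express');
        const app = express();

        const SUPPORTED_LOCALES = new Set(['en_US', 'fr_FR']);

        app.use((req, res, next) => {
          const locale = req.query.l;
          if (locale !== undefined && !SUPPORTED_LOCALES.has(String(locale))) {
            return res.status(404).send('Not Found');
          }
          next();
        });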

  • Google Chrome 10 stable crashes on every page

    - by Achu
    I installed google-chrome today, and when I open any page, including Ask Ubuntu, I get this error message. I see my memory usage is normal (memory 56% and swap 4.8%). I also reloaded and went to another page - same problem. What is the problem? The last dmesg output:

        [26612.341865] lo: Disabled Privacy Extensions
        [29651.852476] chrome[15472] general protection ip:1528e26 sp:7fff514a9dc0 error:0 in chrome[400000+3082000]
        [31447.190586] [UFW BLOCK] IN=eth1 OUT= MAC=00:1c:25:a1:e7:67:00:16:3e:28:5a:b7:08:00 SRC=172.23.100.6 DST=172.23.20.128 LEN=69 TOS=0x00 PREC=0x00 TTL=128 ID=15939 PROTO=UDP SPT=4243 DPT=161 LEN=49
        [31451.250190] [UFW BLOCK] IN=eth1 OUT= MAC=00:1c:25:a1:e7:67:00:16:3e:28:5a:b7:08:00 SRC=172.23.100.6 DST=172.23.20.128 LEN=69 TOS=0x00 PREC=0x00 TTL=128 ID=16180 PROTO=UDP SPT=4243 DPT=161 LEN=49
        [31454.260150] [UFW BLOCK] IN=eth1 OUT= MAC=00:1c:25:a1:e7:67:00:16:3e:28:5a:b7:08:00 SRC=172.23.100.6 DST=172.23.20.128 LEN=69 TOS=0x00 PREC=0x00 TTL=128 ID=16322 PROTO=UDP SPT=4243 DPT=161 LEN=49
        [31458.648164] [UFW BLOCK] IN=eth1 OUT= MAC=00:1c:25:a1:e7:67:00:16:3e:28:5a:b7:08:00 SRC=172.23.100.6 DST=172.23.20.128 LEN=69 TOS=0x00 PREC=0x00 TTL=128 ID=16513 PROTO=UDP SPT=4243 DPT=161 LEN=49
        [33124.300112] lo: Disabled Privacy Extensions
        [33601.021406] Skipping EDID probe due to cached edid
        [34594.043501] chrome[15746]: segfault at 0 ip 0000000000d5cdd0 sp 00007fff5149ec20 error 6 in chrome[400000+3082000]
        [34597.395334] chrome[18112] general protection ip:17c85bf sp:7fff514aa4f0 error:0 in chrome[400000+3082000]
        [34616.786643] chrome[18124]: segfault at 1007 ip 00000000017c849f sp 00007fff514aabd0 error 4 in chrome[400000+3082000]
        [37277.436207] lo: Disabled Privacy Extensions
        [38549.501390] e1000e: eth1 NIC Link is Down
        [38551.122253] e1000e: eth1 NIC Link is Up 100 Mbps Full Duplex, Flow Control: RX/TX
        [38551.122263] e1000e 0000:00:19.0: eth1: 10/100 speed: disabling TSO

  • Google Chrome not rendering webpages correctly

    - by sumit_gt
    I am facing some serious web-page rendering issues with Chrome. It is most prominent during JavaScript-based animations and the like on websites like YouTube. I have tried removing Chrome (sudo apt-get purge google-chrome-stable) and then reinstalling it, but the problems persist. The same web pages work correctly in Firefox on Ubuntu and in Chrome on Windows; the problem only shows up when I use Chrome on Ubuntu. I think the issue started after I updated to the latest version of Chrome - I have used Chrome previously on this machine without any problems. I have attached an image that demonstrates the issue. What could possibly be the problem?

    PS: here's the output of lshw -c video:

        *-display
             description: VGA compatible controller
             product: Madison [Radeon HD 5000M Series]
             vendor: Hynix Semiconductor (Hyundai Electronics)
             physical id: 0
             bus info: pci@0000:01:00.0
             version: 00
             width: 64 bits
             clock: 33MHz
             capabilities: pm pciexpress msi vga_controller bus_master cap_list rom
             configuration: driver=fglrx_pci latency=0
             resources: irq:46 memory:e0000000-efffffff memory:f0020000-f003ffff ioport:d000(size=256) memory:f0000000-f001ffff

    Here's the output of lspci -nn: output of lspci -nn

  • Duplicate content appearing for multilingual sites

    - by Rocky Singh
    I have a site with a default URL, say http://www.blahblah.com/, which defaults to English. The site supports multiple languages: I have a few links on my home page, say "English", "French", "Spanish", etc., and on clicking these links the user is redirected to:

        http://www.blahblah.com/en-us/ (English)
        http://www.blahblah.com/fr-ca/ (French)
        http://www.blahblah.com/spanish-culture/ (Spanish)

    Based on the culture in the URL, I show end users the content in their desired language. Now, this is how my site works. The issue I am having is with SEO. I noticed (via Google Webmaster Tools) that Google considers my pages duplicates, for example:

        1. http://www.blahblah.com/documents/ and http://www.blahblah.com/en-us/documents/
        2. http://www.blahblah.com/news/ and http://www.blahblah.com/en-us/news/

    Similarly, all the pages are considered duplicate content in Google Webmaster Tools. I am worried about this, since I think my site is being penalized in rankings because of it. Could you offer some ideas on how to overcome this situation?
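
    A common remedy for this pattern - offered as a sketch, not as a verified fix for this particular site - is to declare one canonical URL per document and mark up the language alternates, for example on http://www.blahblah.com/documents/:

        <link rel="canonical" href="http://www.blahblah.com/en-us/documents/" />
        <link rel="alternate" hreflang="fr-ca" href="http://www.blahblah.com/fr-ca/documents/" />

    so that the bare URL and its en-us twin consolidate into a single indexed page instead of competing as duplicates.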

  • Title of the page in search results and title of Google's cached version are different. Why?

    - by Azmorf
    Check this: http://www.google.com/search?q=site:gunlawsbystate.com+kansas+gun+laws

    The title of the first result is "Kansas Gun Laws - Gun Laws By State", yet in the page Google has cached, the title is different:

        <title>Kansas Gun Laws - Kansas Gun Law - Reciprocity Guide</title>

    Google shows the title that was on the site 2-3 months ago. Googlebot has visited the website many times since then, and as you can see it has even cached the page (the latest version is from 15th Sept); however, for some reason it doesn't change the title to the new one in the search results.

    We use a hash-bang URL structure on this website. It fully meets Google's requirements for AJAX websites (the _escaped_fragment_ mechanism). The issue I explained is happening with almost all hash-bang pages that got indexed. Questions:

    1. Why does it keep the old page title in the search results? Can it be connected to the fact that I'm using hash-bang URLs? There are lots of pages on the site with the same issue, and all of them have hash-bang URLs.
    2. Another thing I noticed is that Google's "Preview" feature doesn't work for any hash-bang URLs on the site. Did I do anything wrong? It has cached versions of the pages, so why wouldn't it generate a preview?

    Thanks (and sorry for my English).

    PS. Here's a weird thing I also noticed: the search query https://www.google.com/search?q=Kansas+Gun+Laws+-+Reciprocity+Guide shows the correct title for the same page as in the example above. Why does Google show different titles for the same page when you run different queries?
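
    For context (a detail of Google's AJAX crawling scheme stated from its public documentation, not from the post): a crawler that supports the scheme does not fetch the hash-bang URL itself; it rewrites it first, so a hypothetical

        http://example.com/page#!state=kansas

    is fetched as

        http://example.com/page?_escaped_fragment_=state=kansas

    and the title Google indexes comes from the HTML snapshot served at the rewritten URL. A snapshot that lags the live page is therefore one plausible reason for a stale SERP title.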

  • Language redirect affecting PageRank and search listing?

    - by Janoszen
    Preface: We have a number of sites that use the same redirect mechanism across the board. We recently transitioned one site from non-localised to localised, and detected that the Google+ integration doesn't show up in the search results any more AND the PageRank has gone from 2 to 0.

    How the redirect works:

    1. If the UA sends a cookie (e.g. lang=en), redirect the user to /language (e.g. /en).
    2. If the UA is a bot (.*bot.*), redirect to /en.
    3. If the Accept-Language header contains a usable, non-English language, redirect to /language (English is the default on many browsers in non-English regions).
    4. If there is a valid GeoIP lookup and the detected region is linked to a supported language, redirect to /language.
    5. Otherwise, redirect to /en.

    We do of course have the proper markup on all pages to indicate the alternate language:

        <link hreflang="de" href="/de" rel="alternate" />

    As far as we can tell, we follow all publicly available guidelines from Google, so we are a bit at odds over whether this is a bug in Google or we have done something wrong.

    Question: Does not having content on the root URL of a domain adversely affect search engine rankings, and if yes, how does one implement a proper language redirect?
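
    For illustration only, the decision order above could be written roughly as follows (an Express-style sketch; the helper functions and the language set are assumptions, not the site's actual code):

        const express = require('express');
        const cookieParser = require('cookie-parser');
        const app = express();
        app.use(cookieParser());

        const SUPPORTED = new Set(['en', 'de', 'fr']); // assumed language set

        // Hypothetical helpers, stubbed for the sketch:
        const parseAcceptLanguage = (h) => (h || '').split(',')[0].slice(0, 2) || null;
        const geoIpLanguage = (ip) => null; // a real version would consult a GeoIP database

        app.get('/', (req, res) => {
          // 1. A language cookie wins.
          if (SUPPORTED.has(req.cookies.lang)) return res.redirect('/' + req.cookies.lang);
          // 2. Bots always get English.
          if (/bot/i.test(req.get('User-Agent') || '')) return res.redirect('/en');
          // 3. Accept-Language, if usable and non-English.
          const lang = parseAcceptLanguage(req.get('Accept-Language'));
          if (lang && lang !== 'en' && SUPPORTED.has(lang)) return res.redirect('/' + lang);
          // 4. GeoIP, if the region maps to a supported language.
          const geo = geoIpLanguage(req.ip);
          if (geo && SUPPORTED.has(geo)) return res.redirect('/' + geo);
          // 5. Fallback.
          return res.redirect('/en');
        });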

  • How to overcome politics of the net (Google Translate code refuses to work from a specific region)

    - by Jawad
    According to the FAQs I am not sure if my question is OK to ask or will be closed, or whether I should post it on meta - nor would I blame anyone for downvoting it. However, it is one that has been bugging me since the trouble started. Let me explain.

    I have this website. It uses the Google Translate API (I can't post the link; it does not open from this region) with the following code:

        <meta name="google-translate-customization" content="9f841e7780177523-3214ceb76f765f38-gc38c6fe6f9d06436-c"></meta>
        <script type="text/javascript">
          function googleTranslateElementInit() {
            new google.translate.TranslateElement({pageLanguage: 'en'}, 'google_translate_element');
          }
        </script>
        <script type="text/javascript" src="http://translate.google.com/translate_a/element.js?cb=googleTranslateElementInit"></script>

    The problem is that since then it has just stopped working. On the site you can see that I had to actually remove the above from here, here, and here, while leaving it here, here, here and here. This is because the website "refuses" to load at all on the pages that have the code (i.e., from this region). If I use the Firefox Stealthy plugin and open the site in Firefox, it works like a charm. But with Google Chrome, Apple Safari and the Opera web browser, the site does not load at all because of Google Translate. (I know this because if I remove the Google Translate code, the site loads fine.)

    It was one thing to program for "cross-browser compatibility" and altogether another to program for "cross-region compatibility". What can I do to make sure that the site works from anywhere? Do I completely remove the Google Translate code and simply do without the additional functionality, or do I look for alternatives like this, or according to this?
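
    One defensive pattern worth considering (a general sketch; nothing here is confirmed to bypass the regional block) is to inject the widget script asynchronously and treat a load failure as non-fatal, so the rest of the page still renders when translate.google.com is unreachable:

        <script type="text/javascript">
          function googleTranslateElementInit() {
            new google.translate.TranslateElement({pageLanguage: 'en'}, 'google_translate_element');
          }
          (function () {
            var s = document.createElement('script');
            s.src = '//translate.google.com/translate_a/element.js?cb=googleTranslateElementInit';
            s.async = true; // page parsing is not blocked while the script downloads
            s.onerror = function () {
              // If the script never loads, hide the widget container instead of hanging.
              var el = document.getElementById('google_translate_element');
              if (el) { el.style.display = 'none'; }
            };
            document.head.appendChild(s);
          })();
        </script>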

  • Google affecting my SERP Rank?

    - by Asad Moeen
    The following are some of my website's details:

        Home page: [thebluewaffles].[com]
        Keywords: Blue Waffles - the rest of the keywords are post/subject specific
        Site description: Health articles blog
        Site age: 1.5 years

    A short history: when I started my website, the few things in my mind when posting content were at least 500 words on each page and writing all the articles with to-the-point information. I didn't go really fast with it, which is why I only have about 15 articles in 1.5 years. The SEO strategy was simple: I shared links through social marketing websites and some article-sharing websites, after which I could see my website in the top 5 SERP results. I ranked well enough for about 8 months continuously but didn't keep updating content, so there were some 3 rough months when no content was posted due to personal work. The SERPs dropped to the 2nd page in April and almost started disappearing in May. I asked a lot of people about it, and most came up with "no updates to the site" as the reason, so I started updating my site again. November has almost started, and I see no signs of my website's ranking recovering. Another important point: when I post a new article and do a title search on Google, it ranks well for the first 10 hours and then disappears. What could be wrong here?

  • Ranking drop after using reverse proxy for blog subdirectory and robots.txt for old blog subdomain

    - by user40387
    We have a 3Dcart store and a WordPress blog hosted on a separate server. Originally, we had a CNAME set up to point the blog to http://blog.example.com/. However, in our attempt to boost link-based and traffic-based authority on the main site, we've opted to do a reverse proxy to http://www.example.com/blog/.

    It's been about two months since we finished the reverse-proxy migration. Everything appears to be working as intended technically, including some robots and sitemap changes; the new URLs are even generating some traffic, as indicated in Google Analytics. While Google has been indexing the new URL locations, they're ranking very poorly, even for non-competitive, long-tail keywords. Meanwhile, the old subdomain URLs are still ranking mostly as well as they used to (even though they aren't showing meta titles and descriptions, due to being blocked by robots.txt).

    Our working theory is that Google has an old index of the subdomain URLs and is considering the new URLs to be duplicate content, since it's being told not to crawl the subdomain and therefore can't see the rel canonicals we have in place. To resolve this, we've updated the subdomain's robots.txt to no longer block crawling and indexing. Theoretically, seeing the canonical tag on the subdomain pages will resolve any perceived duplicate-content issues.

    In the meantime, we were wondering if anyone has any other ideas. We are very concerned that we'll be losing valuable traffic, as we're entering our on-season at the moment.

  • Using Google App Engine to Perform World Updates vs an Authoritative Server

    - by Error 454
    I am considering different game-server architectures that use GAE. The types of games I am considering are turn-based, where the world state would need to be updated about once per minute. I am looking for an answer that persuades me either to perform the world update on the Google servers OR on an authoritative server that syncs with the datastore. The main goal here is to minimize GAE daily quotas.

    For some rough numbers, I am assuming 10,000 entities requiring updates. Each entity update would require:

    - reading 5 private entity variables (fetched from the datastore)
    - fetching as many as 20 static variables (from the datastore, or persisted in server memory)
    - writing 5 entity variables

    Clients of the game would authenticate and set state directly against GAE, as well as pull the latest world state from GAE.

    Running the update on GAE would consist of a cron job launched every minute. This would update all of the entities and save the results to the datastore, and would be more CPU-intensive for GAE. Running the update on an authoritative server would consist of fetching entity data from the GAE datastore, calculating the new entity states, and pushing the new state variables back to the datastore. This would be more bandwidth-intensive for the datastore.
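
    To put rough numbers on the quota question (back-of-envelope arithmetic from the figures above, nothing more): updating 10,000 entities once per minute is on the order of 10,000 entity reads and 10,000 writes per cycle, or about 14,400,000 of each per day (10,000 × 60 × 24). The up-to-20 static fetches per entity would add as much as 288,000,000 daily reads if they hit the datastore every time, which suggests that wherever the update runs, keeping the static variables in server memory dominates the datastore side of the quota picture.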

  • New site not appearing in index after change of address, no feedback from Google Webmaster Tools

    - by Duffy
    Our change of address seems not to be taking effect. Here's the story so far: we're a web company and our product is called The New Hive. Our site used to be at thenewhive.com, but we decided to switch to newhive.com (drop the "the", it's cleaner).

    So, the timeline of what I've tried, starting on July 29th:

    - used 301 redirects for all pages (e.g. thenewhive.com/tag/art to newhive.com/tag/art)

    At this point we noticed that we had disappeared from the search results when searching for "The New Hive"; the front page used to be all links to our site plus a couple of news articles about the company. So on August 5th I:

    - verified the new domain in Webmaster Tools (the old domain was already verified)
    - submitted a change of address request with Webmaster Tools / Configuration / Change of Address

    Then after another week, on August 13th, I did this:

    - went to Webmaster Tools / Health / Fetch as Google
    - fetched our homepage and a couple of sub-pages, all successfully
    - clicked "Submit to Index" for the homepage

    As of today (August 23rd) we're still not showing up in the index. We're getting no warnings or feedback of any kind from the dashboard, so I'm inclined to think something's broken with the dashboard rather than that something's wrong with our site from an SEO perspective. From the dashboard: no new messages or recent critical issues; crawl errors: no data available. From Health - Index Status:

        Total indexed: 0
        Ever crawled: 42,490
        Not selected: 12
        Blocked by robots: 0

    I'm really at a loss here; any help would be appreciated.

  • Pasting from vim in terminal to Google Docs (Firefox + Vimperator) - need to understand

    - by LIttle Ancient Forest Kami
    I had some trouble copy-pasting text from Vim in a terminal to a Google Docs (aka Drive) document (hereafter GDd) in the Firefox browser (with Vimperator). Note:

    - I have a file opened in Vim 7.2 in a terminal; :version displays both +clipboard and +xterm-clipboard
    - I'm on Ubuntu 10.04 LTS, so I don't think this is Unity-related
    - I want to use Vim, not GVim, nor gedit...
    - I'm an avid fan of mouseless navigation, so a solution involving the mouse was not what I wanted.

    I have the solution, but I need understanding. What I tried and where it got me:

    1. Yanking the whole file's text via ggvGy allows me to paste it via the mouse middle button - NOT with Ctrl+v or Shift+Insert - here in the text area for entering question text, and in gedit, but NOT in GDd where I want it pasted, even if I switch Vimperator to pass-through mode with Insert. It does NOT show in xclip after xclip -o. From gedit, I can copy-paste the text into GDd (Vimperator's pass-through mode not required).
    2. :%!xclip -i (or with a :first,last range) reports the whole file (all lines, to be precise) as filtered, though the shell returns 1, and xclip -o returns nothing (is empty) or returns the previously copied value. No surprise here, but I can't paste at all - not only to GDd but also to gedit or here.
    3. Setting the clipboard to unnamed (:set clipboard=unnamed) doesn't help.
    4. Using "+y or "*y on the whole file's text actually does the trick.

    So, the question (it's actually three; say "split" and I will):

    1. Why does the middle mouse button paste different things than Ctrl+v, and how can I know what will be pasted with each?
    2. Why does just yanking (without registers) work with the mouse but not with the keyboard / xclip?
    3. Why didn't the unnamed register help? After setting it, it should make the unnamed and * registers the same?
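
    For background (general X11/Vim behaviour, not something from the post): X11 has two independent selections. PRIMARY is filled by merely selecting text and is pasted with the middle mouse button; CLIPBOARD is filled by an explicit copy and is pasted with Ctrl+v. In Vim, the "* register maps to PRIMARY and "+ to CLIPBOARD, while a plain y only fills Vim's internal unnamed register, which X11 applications never see. :set clipboard=unnamed aliases the unnamed register to "* (PRIMARY) only, so Ctrl+v-style pastes still require "+y - or :set clipboard=unnamedplus on Vim 7.3.74 and later.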

  • Google Chrome with strange behavior

    - by user72274
    I'm a former Chromium-browser user, but after the PPA went 2 months without an upgrade, I switched to the Google Chrome browser yesterday. Everything is okay, except some strange behaviour on some pages, and crashing after loading "chrome://" configuration pages. The best-known website with the strange behaviour is YouTube; here is a picture of what I see:

    When I open the user menu in the top right corner, it crashes that way, and even after closing the menu some parts of the menu stay displayed. You may say it's a YouTube problem - no, I have this problem on at least three other websites. Here it is on Imgur:

    The problem doesn't always affect the whole side; sometimes it starts from the middle of the screen. The interesting part is that it happens every time at the same distance from the right border. When I check the DOM elements with the developer tools, the overlay which shows the element's position is rendered as it should be. What is more, if there is an anchor inside the crashed area, it works after clicking on it. Selecting text on a crashed page is impossible. I hope there is enough information to give me advice; thanks in advance. :)

    EDIT: Here is what the browser reports in "chrome://gpu-internals/":

        Graphics Feature Status
            Canvas: Software only, hardware acceleration unavailable
            Compositing: Hardware accelerated
            3D CSS: Hardware accelerated
            CSS Animation: Software animated.
            WebGL: Hardware accelerated
            WebGL multisampling: Hardware accelerated
        Problems Detected
            Accelerated CSS animation has been disabled at the command line.
            Accelerated 2d canvas is unstable in Linux at the moment.

    Ubuntu 12.04 | Gnome-shell 3.4.1 | ATI Radeon 4550 | Screen resolution 1024*768 | Chrome version 20.0.1132.57 (Official Build 145807)

  • Google Indexing Issue after htaccess changes

    - by Klement
    I have a site called www.FuneralCoverFinder.co.za, with about 30 pages, of which usually 29 are indexed (excluding 15 blog posts). They are new. I recently upgraded my entire site and made some redirection changes in my .htaccess file: I made my URLs more SEO-friendly (removing index.php/) and redirected dead pages to working pages. I have tons of unique content, all checked by Grammarly and Plagium to ensure I have no duplicate content.

    I have since resubmitted my sitemap to Google and now have only one page indexed. It happened within a couple of minutes - I usually see results almost immediately after submitting, but now it's stuck on 1 page indexed. I assume I might have made errors in the .htaccess file, as this was my first attempt. The site runs perfectly and all the URLs redirect the way they should, but I'm scared I have some loop or other, even though the website runs fine.

    I still see many of my old indexed pages in the SERPs; I'm just worried that the issue with the new sitemap could do my rankings some harm. My website is pretty SEO-optimized on-site, and I have about 1500 indexed backlinks, built steadily over about half a year. I would really appreciate some clarity on this matter.

  • Google Code C++ [closed]

    - by JacKeown
    I was on this website: http://code.google.com/edu/languages/cpp/basics/getting-started.html#learn-by-example and I saw this code:

        // Description: Illustrate the use of cin to get input
        // and how to recover from errors.
        #include <iostream>
        using namespace std;

        int main() {
          int input_var = 0;
          // Enter the do-while loop and stay there until either
          // a non-numeric is entered, or -1 is entered. Note that
          // cin will accept any integer: 4, 40, 400, etc.
          do {
            cout << "Enter a number (-1 = quit): ";
            // The following line accepts input from the keyboard into
            // variable input_var.
            // cin returns false if an input operation fails, that is, if
            // something other than an int (the type of input_var) is entered.
            if (!(cin >> input_var)) {
              cout << "Please enter numbers only." << endl;
              cin.clear();
              cin.ignore(10000, '\n');
            }
            if (input_var != -1) {
              cout << "You entered " << input_var << endl;
            }
          } while (input_var != -1);
          cout << "All done." << endl;
          return 0;
        }

    I was wondering... what is the significance of cin.clear() and cin.ignore(), and why are the 10000 and '\n' parameters necessary?
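
    For reference (standard C++ stream behaviour, added here rather than quoted from the page): when cin >> input_var fails, the stream sets its error flags and leaves the offending characters in the input buffer. cin.clear() resets the flags so further reads are possible, and cin.ignore(10000, '\n') then discards the bad input - up to 10000 characters or until the first newline, whichever comes first. The 10000 is just an arbitrary large bound; std::numeric_limits<std::streamsize>::max() is the idiomatic "unbounded" choice.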

  • What is the email limit on Google Apps Script?

    - by jmvidal
    Can someone tell me if there is a web page that lists the official Google limit on emails sent from a Google Apps Script? In testing my little script I got a "Service invoked too many times: email" (at #59), and now I can't send any more emails. The obvious place for this information would be the MailApp.sendEmail documentation, but that does not say anything about a limit. I found this discussion on the Google forum from 2/11/10 where users discuss a 100 or 500 emails/day limit with a 24-hour ban, but no one from Google provided an official answer. Note that this is for Google Apps Script, which is different from Google App Engine, which does have well-published limits.
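
    For scripts that send in bulk, one way to stop cleanly instead of hitting the error (a sketch; MailApp.getRemainingDailyQuota() is a real Apps Script call, while the batching logic here is illustrative):

        // Check the remaining daily email quota before each send, and bail out
        // before triggering "Service invoked too many times: email".
        function sendNewsletter(recipients) {
          for (var i = 0; i < recipients.length; i++) {
            if (MailApp.getRemainingDailyQuota() < 1) {
              Logger.log('Quota exhausted after ' + i + ' emails; resume tomorrow.');
              return;
            }
            MailApp.sendEmail(recipients[i], 'Subject line', 'Body text');
          }
        }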

  • Using multiple markers with multiple info windows

    - by mohsen
    I have an array of info windows corresponding to an array of markers. Now I have a problem: I use the code below to generate an info window, but when clicking on a marker all the markers disappear except one, and when I click on that marker no info window comes out. What is wrong with my code, and what should I do? Any answer will be much appreciated; thanks in advance. This code is inside a for loop:

        infoWindoArray[i][j] = new google.maps.InfoWindow({
          content: "Lat: " + this.position.lat() + "\nLng: " + this.position.lng() + "\n" + this.customInfo
        });
        google.maps.event.addListener(AllMarkers[i][j], 'click', (function(x) {
          return function() { infoWindoArray[i][j].open(map, AllMarkers[i][j]); };
        })(x));
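
    The usual culprit in code like this is the loop-variable closure: the inner function wraps x but still reads i and j, which hold their final values by the time any click fires, so every handler points at the same (last) slot. A minimal corrected sketch, capturing i and j instead of the unused x (variable names carried over from the question):

        // Capture the current i and j so each click handler
        // opens the info window that belongs to its own marker.
        google.maps.event.addListener(AllMarkers[i][j], 'click', (function (i, j) {
          return function () {
            infoWindoArray[i][j].open(map, AllMarkers[i][j]);
          };
        })(i, j));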

  • Plotting 500 US cities on a map

    - by sqlman
    I have 500 US cities in a MySQL table, with the city name, state, longitude and latitude. I want to see these cities plotted on a map of the US. How can I do this? Are there any free tools available - Google Maps or Google Earth, maybe? Obviously it would take forever to plot each city individually, so I need a quick way of doing it, either through a program or by exporting the table as a spreadsheet and uploading it into some kind of generator that will do the plotting for me. Please let me know your ideas. Thanks.
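
    For the Google Maps route, the plotting itself is a short loop once the table is exported (say, as JSON). A sketch against the Maps JavaScript API - the data shape and the map element id are assumptions:

        // Assumes the MySQL table was exported to an array of rows like this:
        var cities = [
          { name: 'Austin', state: 'TX', lat: 30.2672, lng: -97.7431 },
          { name: 'Boston', state: 'MA', lat: 42.3601, lng: -71.0589 }
          // ... remaining rows
        ];

        function initMap() {
          var map = new google.maps.Map(document.getElementById('map'), {
            zoom: 4,
            center: { lat: 39.8283, lng: -98.5795 } // rough geographic center of the US
          });
          for (var i = 0; i < cities.length; i++) {
            new google.maps.Marker({
              position: { lat: cities[i].lat, lng: cities[i].lng },
              map: map,
              title: cities[i].name + ', ' + cities[i].state
            });
          }
        }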

  • SEO: many Stack Overflow users' pages have no Google PR and are not indexed - why?

    - by Marco Demaio
    If you go to my user page on Stack Overflow and check it with the Google toolbar, you can see it has no PR at all (this happens for almost any user page, even people with much higher reputation; the only exceptions seem to be the users on page 1, and some other users who do have PR). My user page's PageRank is not only zero, but not calculated at all. When PR is 0, or less than 1 but calculated, the Google toolbar shows white; but when the PR is not even calculated, as on my user page, the toolbar shows grey.

    I furthermore discovered that my user page is NOT EVEN INDEXED on Google. A simple test is searching on Google for the exact page URL "http://stackoverflow.com/users/260080/marco-demaio" - you will see no result.

    The question is: how can this be? This is really weird to me for the following reason: if you search on Google for "Marco Demaio" on the stackoverflow site only (you can do this by searching "site:stackoverflow.com Marco Demaio"), the search results show hundreds of asking/answering question pages where I was 'tagged'! Let's check one of them, the first that appears now (it shows one of the questions I asked). We can be sure this page is indexed in Google because it comes up in a search; moreover, its PR is calculated - probably nearly zero, but still some PR flows there, and the PR bar is not grey, but white.

    The page shown above has links to my own user page. I checked the source code of that page: the links are not hidden or set with rel="nofollow", and I can't see any meta tag excluding the links on the page from being followed. So what's happening? Why does Google not see my user page at all? Did Stack Overflow do something to achieve this? If yes, what did they do? Any explanation is really appreciated (as always).

    P.S. Obviously I also checked the code of my own user page, but I could not find meta tags excluding it from Google search.

    P.S. 2: In a desperate adventure I also checked the Stack Overflow robots.txt, but it does not seem to exclude user pages.

    UPDATE 1: Following up on some answers, I did some more research. Excluding the PR problem for a while (since PR is not science) and looking only at the user page NOT BEING INDEXED problem: pages do not seem to be indexed based on user reputation. This user, for instance, has 200 points less reputation than me NOW, and his page is indexed (while mine is not). It does not even seem to be connected with how many months you have been on Stack Overflow: this user (almost my same reputation) has been there for only 3 months and his page is indexed (while mine is not, and I have been a user for 7 months). It's bizarre!

    UPDATE February 2011: As of today the page got indexed by Google; at least when you search for "site:stackoverflow.com Marco Demaio" it's the first result. The amazing thing is that it still has NO PageRank at all: the Google toolbar states loud and clear "No PageRank information available". It's odd!

  • DNS problems with Google on Windows 7

    - by awishformore
    Hello dear superuser community. I had no idea where to post this, because it is a problem that completely baffles me. I have a lot of experience with network configuration, but I am completely out of ideas on how to fix this.

    I have a Fritz!Box-branded router on my ISP, 1&1, in Germany. My computer is connected to it with a normal Ethernet cable. I always set my IP manually on the computer and use the Google DNS servers for name resolution. I also tried OpenDNS, and the result is the same. With that configuration, the following happens:

    - Google search responds with a big delay
    - Gmail, Google Calendar & Google Drive requests time out the majority of the time

    In order to troubleshoot, I set the network connection to DHCP for both IP & DNS. At that point, what happens is the following:

    - Google search times out most of the time
    - Gmail, Google Calendar & Google Drive work most of the time

    Sometimes the sites that time out will come up, but weirdly enough the pictures on the buttons will be missing; for instance, the magnifying glass on Google will be gone, or the circle arrow on Gmail (but all buttons, of course). All other websites load just fine - and very quickly. All other network functionality is completely unaffected. The behaviour of fixed IP & Google DNS vs. automatic IP & DNS is easily reproducible. I am going crazy trying to fix this, as I have no idea what the hell is going on at this point. Here is a list of the things I have tried thus far:

    - Flushed the DNS
    - Tried different browsers (works fine on my laptop, by the way)
    - Tried disabling Teredo & the IPv6 stack
    - Emptied all caches
    - Checked the HOSTS file
    - Rebooted the router
    - Reset the router
    - Reinstalled the network adapter
    - Tracert displays a normal route until timing out at one point
    - Ping usually doesn't work for the unreachable sites either
    - Ran both a complete Norton 360 & Kaspersky 2012 scan
    - Ran the Kaspersky Virus Removal Tool in safe mode
    - Tried the connection in safe mode with networking enabled

    If you have any ideas, please let me know. I'm getting desperate...

  • Why does Google Reader use up so much memory?

    - by Ivo
    I'm running the dev version of Chrome and just found out I can see the memory usage of every tab (press Shift+Esc to see for yourself). It turns out the Shockwave Flash plugin uses 240 MB of memory and 27% of the CPU while Google Reader is open. Killing this page brings it down to 50 MB. I reckon it builds up after reading through about 100 feeds. So my question is: why does it take up so much memory, and what can I do to diminish it?

    Why I care: I often don't restart my browser (or laptop, for that matter) and keep the Google Reader tab open for long periods of time. The memory does start to add up that way. Plus I have the feeling, perhaps incorrect, that it slows things down as well, looking at the CPU usage.

    Side note: using Chrome 4 on Windows 7 (32-bit).

  • Is there a way to see what is typed or which websites are visited when a spouse is using Google Chrome in incognito?

    - by Ken
    My spouse and I have 2 laptops, both running Windows 7, and I have 1 desktop running Vista. I have reason to believe she is cheating, and I want to watch her internet usage. She is always using Google Chrome in incognito mode. I have tried a few keyloggers, but they never seem to be able to capture the keystrokes when she is on Google. I don't know if there is a way I can network the laptops so their traffic goes through my desktop and then to the internet. We are using Brighthouse cable internet. Any ideas would be great; thank you.
