Search Results

Search found 26427 results on 1058 pages for 'google scripts'.

  • Google Analytics www 301 causing issues with In-Page Analytics

    - by conrad10781
    The closest question I could find to my problem is This one. The similarity is: I have a profile in Google Analytics (GA) that has been collecting data for a year. The domain setting in GA is "http://example.com". The site, however, will redirect any non-www request to www.example.com via a typical .htaccess rewrite. We do this to keep the traffic on the load balancers. I don't know the method the original poster had in place, but we're doing a 301 on any non-www request to the www equivalent, which I believe is fairly standard. Where I differ from that question is in the error message I receive when trying to load In-Page Analytics. I'm instead receiving: Error: The Website in your settings (http://example.com), redirects into a different domain. (http://www.example.com). In-Page Analytics currently works on only one domain. Note that www.example.com and example.com are NOT considered to be on the same domain. Also, make sure you're not redirecting from http:// to https:// or vice versa. I understand what's being explained; it just seems as though this can't be the end of it. I tried updating the Analytics settings, which from day one have been set to "One domain with multiple subdomains", but I don't see any option to change the URL (which is currently set to http://example.com and not http://www.example.com). I'd prefer not to change the URL if at all possible, but I can't find any documentation that provides a possible solution.
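
    For reference, a minimal sketch of the redirect described, assuming Apache with mod_rewrite enabled (the production rule on the load balancers may well differ):

        RewriteEngine On
        # 301 any non-www request to the www equivalent
        RewriteCond %{HTTP_HOST} ^example\.com$ [NC]
        RewriteRule ^(.*)$ http://www.example.com/$1 [R=301,L]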

    Read the article

  • Architecting Python application consisting of many small scripts

    - by Duke Dougal
    I am building an application which, at the moment, consists of many small Python scripts. Each script processes items from one Amazon SQS queue. Emails come into an initial queue, a script does a small unit of processing (for example, parse the email and store some database fields), then places an item on the next queue for further processing, and so on until the email has finished passing through the various scripts and queues. What I like about this approach is that it is very loosely coupled. However, I'm not sure how I should run it live. Should I make each script a daemon which constantly polls its inbound queue for things to do? Or should there be some overarching orchestration program or process? Or should I not have lots of small Python scripts, but one large application? Specific questions:
    How should I run each of these scripts - as a daemon with some sort of restart monitor to restart them in case they stop for any reason? If yes, should I have some program which orchestrates this?
    Or is the idea of many small scripts not a good one - would it make more sense to have a larger Python program which contains all the functionality and does all the queue polling and execution for each queue?
    What is the current preferred approach to daemonising Python scripts?
    Broadly, I would welcome any comments or opinions on any aspect of this. Thanks
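
    For the looped-daemon option, a minimal sketch using boto3 long polling (the queue names and handler are hypothetical, and the original scripts may use a different SQS client):

        import boto3

        sqs = boto3.resource('sqs')
        inbound = sqs.get_queue_by_name(QueueName='parse-email')    # hypothetical queue name
        outbound = sqs.get_queue_by_name(QueueName='store-fields')  # hypothetical queue name

        def handle(body):
            # one small unit of processing, e.g. parse the email, store some fields
            return body

        while True:
            # long-poll for up to 20 seconds instead of busy-waiting
            for msg in inbound.receive_messages(WaitTimeSeconds=20):
                outbound.send_message(MessageBody=handle(msg.body))
                msg.delete()  # delete only once the next queue has the item

    A process supervisor such as supervisord or upstart can then take care of the daemonising and restart-on-failure, one managed program per script.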

    Read the article

  • De-index URL parameters by value

    - by Doug Firr
    This question is lengthy, so allow me to provide a one-sentence summary: I need to get Google to de-index URLs that have parameters with certain values appended. I have a website, example.com, with language translations. There used to be many translations but I deleted them all so that only English (the default) and French remain. When one selects a language option, a parameter is added to the URL. For example, the home page:
    https://example.com (default)
    https://example.com/main?l=fr_FR (French)
    I added a robots.txt to stop Google from crawling any of the language translations:
    # robots.txt generated at http://www.mcanerin.com
    User-agent: *
    Disallow:
    Disallow: /cgi-bin/
    Disallow: /*?l=
    So any pages containing "?l=" should not be crawled. I checked in GWT using the robots testing tool, and it works. But under HTML improvements, the previously crawled language-translation URLs remain indexed. The internet says to return a 404 for the removed URLs so Google knows to de-index them. I checked what my CMS would serve if I visited one of the URLs that should no longer exist. This URL was listed in GWT under duplicate title tags (one of the reasons I want to scrub up my URLs): https://example.com/reports/view/884?l=vi_VN&l=hy_AM. This URL should not exist - I removed the language translations. The page loads when it should not! I played around and typed example.com?whatever123 - it seems that parameters always load as long as everything before the question mark is a real URL. So if Google has indexed all these URLs with parameters, how do I remove them? I cannot check whether a 404 is being generated, because the page always loads for any URL with a parameter that needs to be de-indexed.
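
    One hedged option, assuming Apache: rather than blocking the URLs in robots.txt (which stops Google from ever seeing a 404/410), allow crawling and answer 410 Gone for any request carrying the l parameter:

        RewriteEngine On
        # any query string containing l= gets a 410 Gone response
        RewriteCond %{QUERY_STRING} (^|&)l= [NC]
        RewriteRule ^ - [G]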

    Read the article

  • Google Chrome 10 stable crashes on every page

    - by Achu
    I installed google-chrome today; when I open any page, including Ask Ubuntu, I get this error message. My memory usage looks normal (memory 56% and swap 4.8%). I reload and go to another page - same problem. What is the problem? The last dmesg output:
    [26612.341865] lo: Disabled Privacy Extensions
    [29651.852476] chrome[15472] general protection ip:1528e26 sp:7fff514a9dc0 error:0 in chrome[400000+3082000]
    [31447.190586] [UFW BLOCK] IN=eth1 OUT= MAC=00:1c:25:a1:e7:67:00:16:3e:28:5a:b7:08:00 SRC=172.23.100.6 DST=172.23.20.128 LEN=69 TOS=0x00 PREC=0x00 TTL=128 ID=15939 PROTO=UDP SPT=4243 DPT=161 LEN=49
    [31451.250190] [UFW BLOCK] IN=eth1 OUT= MAC=00:1c:25:a1:e7:67:00:16:3e:28:5a:b7:08:00 SRC=172.23.100.6 DST=172.23.20.128 LEN=69 TOS=0x00 PREC=0x00 TTL=128 ID=16180 PROTO=UDP SPT=4243 DPT=161 LEN=49
    [31454.260150] [UFW BLOCK] IN=eth1 OUT= MAC=00:1c:25:a1:e7:67:00:16:3e:28:5a:b7:08:00 SRC=172.23.100.6 DST=172.23.20.128 LEN=69 TOS=0x00 PREC=0x00 TTL=128 ID=16322 PROTO=UDP SPT=4243 DPT=161 LEN=49
    [31458.648164] [UFW BLOCK] IN=eth1 OUT= MAC=00:1c:25:a1:e7:67:00:16:3e:28:5a:b7:08:00 SRC=172.23.100.6 DST=172.23.20.128 LEN=69 TOS=0x00 PREC=0x00 TTL=128 ID=16513 PROTO=UDP SPT=4243 DPT=161 LEN=49
    [33124.300112] lo: Disabled Privacy Extensions
    [33601.021406] Skipping EDID probe due to cached edid
    [34594.043501] chrome[15746]: segfault at 0 ip 0000000000d5cdd0 sp 00007fff5149ec20 error 6 in chrome[400000+3082000]
    [34597.395334] chrome[18112] general protection ip:17c85bf sp:7fff514aa4f0 error:0 in chrome[400000+3082000]
    [34616.786643] chrome[18124]: segfault at 1007 ip 00000000017c849f sp 00007fff514aabd0 error 4 in chrome[400000+3082000]
    [37277.436207] lo: Disabled Privacy Extensions
    [38549.501390] e1000e: eth1 NIC Link is Down
    [38551.122253] e1000e: eth1 NIC Link is Up 100 Mbps Full Duplex, Flow Control: RX/TX
    [38551.122263] e1000e 0000:00:19.0: eth1: 10/100 speed: disabling TSO

    Read the article

  • Google Chrome on Ubuntu 12.04 not rendering webpages correctly

    - by sumit_gt
    I am facing some serious web-page rendering issues with Chrome. It is most prominent during JavaScript-based animations on websites like YouTube. I have tried removing Chrome (sudo apt-get purge google-chrome-stable) and then reinstalling it, but the problems persist. The same webpages work correctly in Firefox on Ubuntu and in Chrome on Windows; the problem only shows up when I use Chrome on Ubuntu. I think the issue started after I updated to the latest version of Chrome - I have used Chrome previously on this machine without any problems. I have attached an image that demonstrates the issue. What could possibly be the problem? PS: here's the output of lshw -c video:
    *-display
    description: VGA compatible controller
    product: Madison [Radeon HD 5000M Series]
    vendor: Hynix Semiconductor (Hyundai Electronics)
    physical id: 0
    bus info: pci@0000:01:00.0
    version: 00
    width: 64 bits
    clock: 33MHz
    capabilities: pm pciexpress msi vga_controller bus_master cap_list rom
    configuration: driver=fglrx_pci latency=0
    resources: irq:46 memory:e0000000-efffffff memory:f0020000-f003ffff ioport:d000(size=256) memory:f0000000-f001ffff
    The output of lspci -nn was attached as an image.

    Read the article

  • Duplicate content appearing for multilingual sites

    - by Rocky Singh
    I have a site with a default URL, say "http://www.blahblah.com/" (which defaults to English). The site supports multiple languages: on my home page there are a few links such as "English", "French" and "Spanish", and clicking them redirects the user to:
    http://www.blahblah.com/en-us/ (English)
    http://www.blahblah.com/fr-ca/ (French)
    http://www.blahblah.com/spanish-culture/ (Spanish)
    Based on the culture in the URL, I show end users the content in their desired language. That is how the site is set up. The issue I am having is with SEO. I noticed (I checked via Google Webmaster Tools) that Google considers my pages duplicates, for example:
    1. http://www.blahblah.com/documents/ and http://www.blahblah.com/en-us/documents/
    2. http://www.blahblah.com/news/ and http://www.blahblah.com/en-us/news/
    Similarly, all the pages are flagged as duplicate content in Google Webmaster Tools. I am worried my site is being penalized in rankings because of this. Could you offer some ideas on how to overcome this situation?
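
    One common remedy, sketched here with hypothetical URLs: declare a single canonical URL per page and annotate the language alternates, so that both /documents/ and /en-us/documents/ collapse to one version in Google's eyes:

        <link rel="canonical" href="http://www.blahblah.com/en-us/documents/" />
        <link rel="alternate" hreflang="en-US" href="http://www.blahblah.com/en-us/documents/" />
        <link rel="alternate" hreflang="fr-CA" href="http://www.blahblah.com/fr-ca/documents/" />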

    Read the article

  • Pros and cons of hosted scripts

    - by P.Brian.Mackey
    I have seen some developers use hosted scripts to link their libraries; cdn.jquerytools.org is one example. I have also seen people complain that a hosted script link has been hijacked. How safe is using hosted scripts in reality? Are the scripts automatically updated? For example, if jQuery 5 goes to 6, do I automatically get version 6 or do I need to update my link? I also see that Google hosts a large set of these scripts. What are the pros and cons?
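
    On the versioning part of the question: a fully pinned URL never changes out from under you, while Google's hosted libraries also answered truncated versions with the latest matching release, at the cost of much shorter cache headers. A sketch (the version numbers are only examples):

        <!-- pinned: this URL always serves exactly jQuery 1.7.2 -->
        <script src="https://ajax.googleapis.com/ajax/libs/jquery/1.7.2/jquery.min.js"></script>
        <!-- fuzzy: historically resolved to the newest 1.7.x at request time -->
        <script src="https://ajax.googleapis.com/ajax/libs/jquery/1.7/jquery.min.js"></script>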

    Read the article

  • Recording custom variables to identify individual users with Google Analytics

    - by mrtsherman
    I have been asked by our marketing department to add Google Analytics custom-variable tracking to my company's website. As the website uses server-side includes, modifications to the tracking tag roll out globally - maintenance is therefore a headache! So, suppose I add the following code (keeping in mind that with SSI every page has the same code):
    // visitor level tracking, id = 12345
    // Record a unique id for each visitor. When they return, also track this id
    _gaq.push(['_setCustomVar', 1, 'id', '12345', 1]);
    // page level tracking
    // If the user signs up for our newsletter we set newsletter to true.
    // Each page they subsequently visit should also mark this as true
    _gaq.push(['_setCustomVar', 1, 'newsletter', 'true', 1]);
    I don't use GA and the marketing people don't use custom variables, so we don't actually know how or if this will work. Therefore my questions are:
    Do I want page, session or visitor level tracking?
    What happens when the same code is used on every page? Can GA 'overwrite' a setting? For example, if I set newsletter to true on page X and the user then navigates to page Y, will the variable also be marked there?
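
    For reference on the scope question, a hedged sketch of the classic ga.js API: the last argument of _setCustomVar selects the scope, and each slot index holds only one variable at a time, so reusing slot 1 overwrites the earlier value:

        // _setCustomVar(slot 1-5, name, value, scope: 1=visitor, 2=session, 3=page)
        _gaq.push(['_setCustomVar', 1, 'id', '12345', 1]);         // visitor-level, slot 1
        _gaq.push(['_setCustomVar', 2, 'newsletter', 'true', 1]);  // separate slot, so it
                                                                   // does not clobber 'id'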

    Read the article

  • Language redirect affecting pagerank and search listing?

    - by Janoszen
    Preface: We have a number of sites that use the same redirect mechanism across the board. We recently transitioned one site from non-localised to localised, and noticed that the Google+ integration no longer shows up in the search results AND the PageRank has gone from 2 to 0. How the redirect works:
    1. If the UA sends a cookie (e.g. lang=en), redirect the user to /language (e.g. /en).
    2. If the UA is a bot (.*bot.*), redirect to /en.
    3. If the Accept-Language header contains a usable, non-English language, redirect to /language (English is the default on many browsers in non-English regions).
    4. If there is a valid GeoIP lookup and the detected region is linked to a supported language, redirect to /language.
    5. Otherwise, redirect to /en.
    We do of course have the proper markup on all pages to indicate the alternate language:
    <link hreflang="de" href="/de" rel="alternate" />
    As far as we can tell, we follow all publicly available guidelines from Google, so we are unsure whether this is a bug in Google or whether we have done something wrong. Question: Does not having content on the root URL of a domain adversely affect search engine rankings, and if yes, how does one implement a proper language redirect?
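
    One hedged detail worth checking against those guidelines: Google's x-default annotation exists precisely for a root URL that only chooses a language, telling the crawler the bare domain is a selector rather than content (example.com is a placeholder):

        <link rel="alternate" hreflang="x-default" href="http://example.com/" />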

    Read the article

  • Title of the page in search results and title of google's cached version are different. Why?

    - by Azmorf
    Check this: http://www.google.com/search?q=site:gunlawsbystate.com+kansas+gun+laws. The title of the first result is "Kansas Gun Laws - Gun Laws By State", although in the page Google has cached, the title is different:
    <title>Kansas Gun Laws - Kansas Gun Law - Reciprocity Guide</title>
    Google shows the title that was on the site 2-3 months ago. Googlebot has visited the website many times since then, and as you can see it has even cached the page (the latest version is from 15th Sept), yet for some reason it doesn't change the title to the new one in the search results. We use a hash-bang URL structure on this website. It completely meets Google's requirements for AJAX websites (the _escaped_fragment_ mechanism). The issue I explained is happening with almost all hash-bang pages that got indexed. Questions: Why does Google keep the old page title in the search results? Can it be connected to the fact that I'm using hash-bang URLs? There are lots of pages on the site with the same issue, all of them with hash-bang URLs. Another thing I noticed is that Google's "Preview" feature doesn't work for any hash-bang URLs on the site. Did I do anything wrong? Google has cached versions of the pages, so why wouldn't it generate a preview? Thanks (and sorry for my English). PS. Here's a weird thing I also noticed: this search query https://www.google.com/search?q=Kansas+Gun+Laws+-+Reciprocity+Guide shows the correct title for the same page as in the example above. Why does Google show different titles for the same page for different queries?
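
    For context, under Google's AJAX crawling scheme a #! URL is requested by the crawler in an _escaped_fragment_ form, so the HTML snapshot served there is what gets cached and titled (the path below is hypothetical):

        What the browser shows:   http://www.gunlawsbystate.com/#!/kansas
        What Googlebot requests:  http://www.gunlawsbystate.com/?_escaped_fragment_=/kansas

    If the snapshot served for the _escaped_fragment_ URL still carries an old <title>, the SERP title and the cached title can disagree in exactly this way.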

    Read the article

  • How to overcome politics of the net (Google Translate code refuses to work from a specific region)

    - by Jawad
    According to the FAQs I am not sure if my question is ok to ask or will be closed, or whether I should post it on meta, and I wouldn't blame anyone for downvoting it. However, it is one that has been bugging me since the trouble started, so let me explain. I have a web site that uses the Google Translate API (I can't post the link; it does not open from this region) with the following code:
    <meta name="google-translate-customization" content="9f841e7780177523-3214ceb76f765f38-gc38c6fe6f9d06436-c"></meta>
    <script type="text/javascript">
    function googleTranslateElementInit() {
      new google.translate.TranslateElement({pageLanguage: 'en'}, 'google_translate_element');
    }
    </script>
    <script type="text/javascript" src="http://translate.google.com/translate_a/element.js?cb=googleTranslateElementInit"></script>
    The problem is that since then, it has simply stopped working. On the site you can see that I had to actually remove the above from here, here, and here, while leaving it here, here, here and here. I did this because the web site "refuses" to load at all on pages that have the code (i.e., from this region). If I use the Firefox Stealthy plugin and open the site in Firefox, it works like a charm without any problems. But with Google Chrome, Apple Safari and the Opera web browser, the site does not load/open at all because of the Google Translate code. (I know this because if I remove the Google Translate code, the site works/loads fine.) It was one thing to program for "cross-browser compatibility" and altogether another to program for "cross-region compatibility". What can I do to make sure that the site works from anywhere? Do I completely remove the Google Translate code and do without the additional functionality, or do I look for alternatives like this, or according to this?

    Read the article

  • Google affecting my SERP Rank?

    - by Asad Moeen
    The following are some of my website's details.
    Home page: [thebluewaffles].[com]
    Keywords: Blue Waffles - the rest of the keywords are post/subject specific.
    Site description: Health articles blog
    Site age: 1.5 years
    A short history: When I started my website, the few things in my mind when posting content were at least 500 words on each page and writing all the articles with to-the-point information. I didn't go really fast with it, which is why I only have about 15 articles in 1.5 years. The SEO strategy was simple: I shared links through social marketing websites and some article sharing websites, after which I could see my website in the top 5 SERP results. I ranked well enough for about 8 months continuously, but then there were some 3 rough months when no content was posted due to personal work. The SERPs dropped to the 2nd page in April and almost started disappearing in May. I asked a lot of people about it and most came up with "no updates to the site" as the reason, so I started updating my site again. November has almost started and I still see no sign of my website's rankings returning. Another important point is that when I post a new article and do a title search in Google, it ranks well enough for the first 10 hours and then disappears. What could be wrong here?

    Read the article

  • Ranking drop after using reverse proxy for blog subdirectory and robots.txt for old blog subdomain

    - by user40387
    We have a 3Dcart store and a WordPress blog hosted on a separate server. Originally, we had a CNAME set up to point the blog to http://blog.example.com/. However, in our attempt to boost link-based and traffic-based authority on the main site, we've opted to do a reverse proxy to http://www.example.com/blog/. It's been about two months since we finished the reverse-proxy migration. Everything appears to be technically working as intended, including some robots and sitemap changes; the new URLs are even generating some traffic, as indicated in Google Analytics. While Google has been indexing the new URL locations, they're ranking very poorly, even for non-competitive, long-tail keywords. Meanwhile, the old subdomain URLs are still ranking mostly as well as they used to (even though they aren't showing meta titles and descriptions, due to being blocked by robots.txt). Our working theory is that Google has an old index of the subdomain URLs and is considering the new URLs to be duplicate content, since it's being told not to crawl the subdomain and therefore can't see the rel canonicals we have in place. To resolve this, we've updated the subdomain's robots.txt to no longer block crawling and indexing. Theoretically, seeing the canonical tag on the subdomain pages will resolve any perceived duplicate-content issues. In the meantime, we were wondering if anyone has any other ideas. We are very concerned that we'll be losing valuable traffic as we enter our on-season.
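
    For illustration only, the kind of tag the theory depends on - each old blog.example.com page pointing at its /blog/ twin (the slug is hypothetical):

        <link rel="canonical" href="http://www.example.com/blog/some-post/" />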

    Read the article

  • Using Google App Engine to Perform World Updates vs an Authoritative Server

    - by Error 454
    I am considering different game-server architectures that use GAE. The types of games I am considering are turn-based, where the world state needs to be updated about once per minute. I am looking for an answer that persuades me either to perform the world update on the Google servers OR on an authoritative server that syncs with the datastore. The main goal here is to minimize GAE daily quotas. For some rough numbers, I am assuming 10,000 entities requiring updates. Each entity update would require:
    Reading 5 private entity variables (fetched from the datastore)
    Fetching as many as 20 static variables (from the datastore or persisted in server memory)
    Writing 5 entity variables
    Clients of the game would authenticate and set state directly against GAE, as well as pull the latest world state from GAE. Running the update on GAE would consist of a cron job launched every minute; this would update all of the entities, save the results to the datastore, and be more CPU-intensive for GAE. Running the update on an authoritative server would consist of fetching entity data from the GAE datastore, calculating the new entity states and pushing the new state variables back to the datastore; this would be more bandwidth-intensive for the datastore.
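
    As a sketch of the GAE-side option, the per-minute job would be declared in cron.yaml and pointed at an update handler (the handler URL is hypothetical):

        cron:
        - description: per-minute world update
          url: /tasks/update_world
          schedule: every 1 minutes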

    Read the article

  • New site not appearing in index after change of address, no feedback from google webmaster tools

    - by Duffy
    Our change of address seems not to be taking effect. Here's the story so far: we're a web company and our product is called The New Hive. Our site used to be at thenewhive.com, but we decided to switch to newhive.com (dropping the "the" - it's cleaner). The timeline of what I've tried, starting on July 29th:
    Used 301 redirects for all pages (e.g. thenewhive.com/tag/art = newhive.com/tag/art). At this point we noticed that we had disappeared from search results when searching for "The New Hive"; the front page used to be all links to our site plus a couple of news articles about the company.
    So on August 5th I: verified the new domain in Webmaster Tools (the old domain was already verified), and submitted a change of address request with Webmaster Tools / Configuration / Change of Address.
    Then after another week, on August 13th: went to Webmaster Tools / Health / Fetch as Google, fetched our homepage and a couple of sub-pages (all successfully), and clicked "Submit to Index" for the homepage.
    As of today (August 23rd) we're still not showing up in the index. We're getting no warnings or feedback of any kind from the dashboard, so I'm inclined to think something's broken with the dashboard rather than that something's wrong with our site from an SEO perspective. From the dashboard: no new messages or recent critical issues. Crawl errors: no data available. From Health - Index Status:
    Total indexed: 0
    Ever crawled: 42,490
    Not selected: 12
    Blocked by robots: 0
    I'm really at a loss here; any help would be appreciated.

    Read the article

  • Pasting from vim in terminal to Google Docs (Firefox + Vimperator) - need to understand

    - by LIttle Ancient Forest Kami
    I had some trouble copy-pasting text from Vim in a terminal to a Google Docs (aka Drive) document (hereafter GDd) in Firefox (with Vimperator). Notes: I have the file opened in Vim 7.2 in a terminal; :version displays both +clipboard and +xterm-clipboard; I'm on Ubuntu 10.04 LTS, so I don't think this is Unity-related; I want to use Vim, not GVim or gedit; and I'm an avid fan of mouseless navigation, so a solution involving the mouse was not what I wanted. I have the solution, but I need understanding. What I tried and where it got me:
    1. Yanking the whole file's text via ggvGy allows me to paste it via the mouse middle button (NOT with Ctrl+V or Shift+Insert) here, in the text area for entering question text, and in gedit - but NOT in GDd where I want it pasted, even if I switch Vimperator to pass-through mode with Insert. It does NOT show in xclip after xclip -o. From gedit, I can copy-paste the text into GDd (Vimperator's pass-through mode not required).
    2. :%! !xclip -i (or :first, last) reports the whole file (all lines, to be precise) as filtered, though the shell returns 1. xclip -o returns nothing (is empty) or returns the value previously copied as in point 1; no surprise, but I can't paste at all - not only into GDd but also into gedit or here.
    3. Setting the clipboard option to unnamed (:set clipboard=unnamed) doesn't help.
    4. Using "+y or "*y on the whole file's text actually does the trick.
    So, the question (it's actually three; say "split" and I will): why does the middle mouse button paste different things than Ctrl+V, and how can I know what will be pasted with each? Why does plain yanking (without registers) work with the mouse but not with the keyboard / xclip? Why didn't the unnamed register help? After that setting, the unnamed and * registers should be the same?
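
    A minimal sketch of the register distinction at play (X11 keeps two separate selections, and Vim exposes each as its own register):

        " * is the PRIMARY selection - what the middle mouse button pastes
        :%y *
        " + is the CLIPBOARD selection - what Ctrl+V / Shift+Insert paste in most apps
        :%y +
        " 'clipboard=unnamed' ties plain yanks to the * register only; tying them
        " to + needs 'unnamedplus', which arrived after Vim 7.2 (in 7.3)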

    Read the article

  • Fast and Free: SQL Scripts Manager's Script Generator

    When William produced his second article on the free tool 'SQL Scripts Manager', revealing that it worked just as well with PowerShell and Python scripts as it does with TSQL, he thought that would be the end of the series. Oh no; in response to feedback comes a small add-in called 'Script Generator' that makes a big difference to the speed of developing and producing new scripts.

    Read the article

  • Install user scripts on Chrome

    - by Ahatius
    I created a JavaScript file and named it test.user.js. I tried double-clicking it, dragging & dropping it into Chrome, and opening it with Chrome, but it won't install. I've already installed the dev version of Chrome, because it's stated somewhere that this is needed, but I can't get it to work. Then I tried searching for a Chrome Greasemonkey add-on, only to find that it is no longer developed because Chrome has built-in support. Could somebody tell me what I have to do in order to get user scripts working on Chrome v21?
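
    A hedged sketch of what Chrome expects, since it converts user scripts into extensions on install: a Greasemonkey-style metadata block (in particular @match or @include), and, from around Chrome 21, the file has to be dragged onto the chrome://extensions page rather than a normal tab:

        // ==UserScript==
        // @name     test
        // @match    http://*/*
        // ==/UserScript==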

    Read the article

  • Google Chrome with strange behavior

    - by user72274
    I'm a former Chromium-browser user, but after the PPA went 2 months without an upgrade, I switched to the Google Chrome browser yesterday. Everything is okay except some strange behavior on some pages, and crashes after loading "chrome://" configuration pages. The best-known website with the strange behavior is YouTube; here is a picture of what I see: When I open the user menu in the top right corner, it breaks in this way, and even after closing the menu some parts of the menu stay displayed. You may say it's a YouTube problem - no, I have this problem on at least three other websites; here it is on Imgur: The problem isn't across the whole page; sometimes it happens from the middle of the screen. The interesting part is that it always happens at the same distance from the right border. When I check the DOM elements with the developer tools, the overlay which shows an element's position is rendered as it should be. What is more, if there is an anchor behind the broken area, it works after clicking on it. Selecting text on a broken page is impossible. I hope there is enough information to give me some advice; thanks in advance. :) EDIT: Here is what the browser reports in "chrome://gpu-internals/":
    Graphics Feature Status
    Canvas: Software only, hardware acceleration unavailable
    Compositing: Hardware accelerated
    3D CSS: Hardware accelerated
    CSS Animation: Software animated.
    WebGL: Hardware accelerated
    WebGL multisampling: Hardware accelerated
    Problems Detected
    Accelerated CSS animation has been disabled at the command line.
    Accelerated 2d canvas is unstable in Linux at the moment.
    Ubuntu 12.04 | Gnome-shell 3.4.1 | ATI Radeon 4550 | Screen resolution 1024*768 | Chrome version 20.0.1132.57 (Official Build 145807)

    Read the article

  • Slides and Scripts from SharePoint Cincy 2014

    - by Brian T. Jackett
    Originally posted on: http://geekswithblogs.net/bjackett/archive/2014/06/06/slides-and-scripts-from-sharepoint-cincy-2014.aspx   I was pleased to present at SharePoint Cincy again for the third year.  Geoff and all the organizers do a great job.  My presentation this year was “PowerShell for Your SharePoint Tool Belt”.  Below are my slides and demo scripts.  Thanks for all who attended, I hope you found something that will be useful for you in your work.   Demo PowerShell Scripts   Slidedeck           -Frog Out

    Read the article

  • Google Indexing Issue after htaccess changes

    - by Klement
    I have a site called www.FuneralCoverFinder.co.za. I have about 30 pages on the site and usually have 29 indexed (excluding 15 blog posts). They are new. I recently upgraded my entire site and made some redirection changes in my .htaccess file. I have made my URLs more SEO-friendly (removing index.php/) and redirect dead pages to working pages. I have tons of unique content, all checked with Grammarly and Plagium to ensure I have no duplicate content. I have since resubmitted my sitemap to Google and now have only one page indexed. That happened within a couple of minutes; I usually see results almost immediately after submitting, but now it's stuck on 1 page indexed. I assume I might have made errors in the .htaccess file, as this was my first attempt. The site runs perfectly and all the URLs redirect the way they should. I'm scared I have some loop or other, although the website runs fine. I still see many of my old indexed pages in the SERPs; I'm just worried that the issue with the new sitemap may harm my rankings. My website is pretty well optimized on-site. I have about 1500 indexed backlinks and have been building them steadily for about half a year. I would really appreciate some clarity on this matter.
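
    For comparison, a minimal .htaccess rule of the kind described, assuming Apache with mod_rewrite (the real file's other rules are unknown):

        RewriteEngine On
        # send /index.php/whatever to /whatever with a permanent redirect
        RewriteRule ^index\.php/(.*)$ /$1 [R=301,L]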

    Read the article

  • Google Code C++ [closed]

    - by JacKeown
    I was on this website: http://code.google.com/edu/languages/cpp/basics/getting-started.html#learn-by-example and I saw this code:
    // Description: Illustrate the use of cin to get input
    // and how to recover from errors.
    #include <iostream>
    using namespace std;
    int main()
    {
        int input_var = 0;
        // Enter the do-while loop and stay there until either
        // a non-numeric is entered, or -1 is entered. Note that
        // cin will accept any integer, 4, 40, 400, etc.
        do {
            cout << "Enter a number (-1 = quit): ";
            // The following line accepts input from the keyboard into
            // variable input_var.
            // cin returns false if an input operation fails, that is, if
            // something other than an int (the type of input_var) is entered.
            if (!(cin >> input_var)) {
                cout << "Please enter numbers only." << endl;
                cin.clear();
                cin.ignore(10000, '\n');
            }
            if (input_var != -1) {
                cout << "You entered " << input_var << endl;
            }
        } while (input_var != -1);
        cout << "All done." << endl;
        return 0;
    }
    I was wondering: what is the significance of cin.clear() and cin.ignore(), and also why are the 10000 and '\n' parameters necessary?
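
    To spell out the two calls the question asks about (this is standard iostream behaviour, not specific to Google's example):

        // After a failed extraction (e.g. typing "abc" for an int), cin sets its
        // failbit and leaves the offending characters in the input buffer.
        cin.clear();             // reset the error state so later reads can succeed
        cin.ignore(10000, '\n'); // discard up to 10000 buffered characters,
                                 // stopping once a newline has been consumed
        // 10000 is simply "more than any sane input line"; the '\n' delimiter
        // stops the discard at the end of the bad line instead of eating
        // input typed afterwards.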

    Read the article
