Search Results

Search found 22587 results on 904 pages for 'google translate'.


  • Almost no bounces are registered in Google Analytics [on hold]

    - by user29931
    For a client I am having an issue for which I can't find an explanation. For some reason, bounces are no longer measured (or almost none) in GA. In GA I can see that the issue started in January 2013. I have been looking at the code inside and out, but I can't find any reason why. On the production site there is a POST done on page load (it will be removed soon), so I thought Google might see this as user interaction and hence never count a bounce, but on staging I removed this POST and still no bounces are registered in the GA account for staging. I have also checked whether the tracking code appears twice on the page, and it does not. I tried the GA debug plugin in Firefox and Chrome to see if that would tell me anything, but no luck... The site in question is www.kiala.be.
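
    One avenue worth noting, offered as an assumption rather than a diagnosis: in classic ga.js, any extra hit sent on page load (an event, for instance) counts as an interaction, and a session with an interaction hit can never be recorded as a bounce. The event API takes an opt_noninteraction flag for exactly this case; a minimal sketch with hypothetical category/action names:

        // Hypothetical event fired on page load: by default this counts as an
        // interaction hit, so the visit can no longer be recorded as a bounce.
        _gaq.push(['_trackEvent', 'startup', 'form-post', 'auto', 0]);

        // Passing true as the fifth argument (opt_noninteraction) tells GA the
        // hit should not affect bounce rate.
        _gaq.push(['_trackEvent', 'startup', 'form-post', 'auto', 0, true]);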

    Read the article

  • Multiple Google Analytics codes for URLs under the same domain

    - by will.i.am
    I have one domain, www.example.com, and www.example.com/sales. The Analytics code on the two URLs is different, so when I log in to my Google account it shows two separate Analytics accounts. On www.example.com/sales I have a banner linking back to www.example.com. I have clicked that banner, and I am sure other people have clicked it as well, but when I check the Analytics for www.example.com I don't see anything coming from example.com/sales. I assume the Analytics on both URLs is working, so why doesn't it track the visits from /sales? Any idea?

    Read the article

  • Google Analytics shows wrong number of page views on an ASP.NET website

    - by f_karlsson
    Sometimes it shows, for example, 4500 requests, and after a few hours it shows a few thousand fewer. What is wrong? It looks as if Analytics corrects itself. I changed from classic to Universal a few months ago; I don't know whether that has anything to do with this. In the master page:

        <script>
            (function (i, s, o, g, r, a, m) {
                i['GoogleAnalyticsObject'] = r;
                i[r] = i[r] || function () {
                    (i[r].q = i[r].q || []).push(arguments)
                }, i[r].l = 1 * new Date();
                a = s.createElement(o), m = s.getElementsByTagName(o)[0];
                a.async = 1;
                a.src = g;
                m.parentNode.insertBefore(a, m)
            })(window, document, 'script', '//www.google-analytics.com/analytics.js', 'ga');

            ga('create', 'UA-xxxxxxxx-1', 'xxxxx.se');
            ga('send', 'pageview');
        </script>

    Read the article

  • Google Analytics data missing for 4 days

    - by Shumaila
    I used the wrong tracking ID in my e-commerce tracking code for Google Analytics and missed the data for a few days. To add that missing data to my account I have written a short script which manually sends all the order data for those four days to the GA account. My concern is the date: those orders were placed on different dates, and when I run my script it will send the missing data with the current date, which I do not want (I want to send the date when each order was actually placed). Can anyone help me with this? I am really stuck with my work here.
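
    For what it is worth, and purely as an assumption about what is possible: Google's Measurement Protocol has a queue time parameter (qt, in milliseconds) for reporting hits that happened in the past, but hits older than roughly four hours are generally not processed, so backdating orders by several days is normally not achievable this way. A minimal sketch with placeholder IDs:

        // Sketch of a backdated transaction hit via the Measurement Protocol.
        // tid, cid, ti and tr are placeholders, not values from the original post.
        var params = [
            'v=1',
            'tid=UA-XXXXXXX-1',           // property ID (placeholder)
            'cid=555',                    // anonymous client ID (placeholder)
            't=transaction',
            'ti=ORDER-1001',              // transaction ID (placeholder)
            'tr=49.99',                   // transaction revenue (placeholder)
            'qt=' + (3 * 60 * 60 * 1000)  // queue time: how long ago the hit occurred, in ms
        ].join('&');

        var xhr = new XMLHttpRequest();
        xhr.open('POST', 'https://www.google-analytics.com/collect', true);
        xhr.send(params);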

    Read the article

  • Google Chrome window resizes when clicked on 12.04 KDE

    - by user37805
    All windows work fine, but a maximized Google Chrome window starts moving whenever I click under the title bar, even without moving the cursor; I don't want to move the window, just to focus it. "Under the title bar" means next to the tabs, on the blue part. This is annoying: with a single steady click it goes from maximized into the window-moving mode, with the cross instead of the pointer. I am using KDE 4.8.4 on Kubuntu 12.04. It is something like http://code.google.com/p/chromium/issues/detail?id=37013. How can I solve this?

    Read the article

  • Slicehost Google Apps MX settings

    - by Bob
    Hello all, I am banging my head against the wall on this one. I followed the MX setup tutorials for Google Mail and it didn't work. Currently, after deleting those records and adding the ones Google suggested, the output of my dig command for my particular domain shows:

        domain.com. 86400 IN MX 10 ASPMX.L.GOOGLE.com.
        domain.com. 86400 IN MX 20 ALT2.ASPMX.L.GOOGLE.com.
        domain.com. 86400 IN MX 20 ALT1.ASPMX.L.GOOGLE.com.
        domain.com. 86400 IN MX 30 ASPMX2.GOOGLEMAIL.com.
        domain.com. 86400 IN MX 30 ASPMX5.GOOGLEMAIL.com.
        domain.com. 86400 IN MX 30 ASPMX3.GOOGLEMAIL.com.
        domain.com. 86400 IN MX 30 ASPMX4.GOOGLEMAIL.com.

    I can send email from Google Apps Mail but I cannot receive any email. It gives me the following error: "Technical details of permanent failure: Google tried to deliver your message, but it was rejected by the recipient domain. We recommend contacting the other email provider for further information about the cause of this error. The error that the other server returned was: 550 550 #5.1.0 Address rejected [email protected]". I already tried following the Slicehost MX article instructions exactly as well, and they did not work for me. The domain has already been verified by Google, and it says the email is activated on their end. Any help would be appreciated : )

    Read the article

  • Plot markers on Google Maps with JSON and jQuery

    - by mark
    I am trying to plot the markers defined in a JSON file on Google Maps, but they don't show on the map. Can somebody help me with this problem? This is the JSON file: http://sionvalais.com/gmap/markers/ This is the JavaScript function:

        function loadMarkers() {
            var bounds = map.getBounds();
            var zoomLevel = map.getZoom();
            $.post("/gmaps/markers/index.php",
                {zoom: zoomLevel,
                 swLat: bounds.getSouthWest().lat(),
                 swLon: bounds.getSouthWest().lng(),
                 neLat: bounds.getNorthEast().lat(),
                 neLon: bounds.getNorthEast().lng()},
                function(data) {
                    processMarkers(data, _smallMarkerSize);
                },
                "json"
            );
        }

        function processMarkers(webcams, markerSize) {
            var marker = null;
            var markersInView = new Array();
            var idsInView = new Array();
            // Loop through the new webcams
            for (var i = 0; i < webcams.length; i++) {
                var idx = markers.indexOf(webcams[i].id);
                if (idx == -1) {
                    var info_html = "<table class='infowindow'>";
                    info_html += "<tr><td class='img'>";
                    info_html += "<img src='" + webcams[i].smallimg + "' /><td>";
                    info_html += "<td><p><b>" + webcams[i].loc + "</b>";
                    info_html += "<br /><a href='/webcam/" + webcams[i].url + "' target='_blank'>Show webcam</a></p></td></tr>";
                    info_html += "</table>";
                    marker = new WebcamMarker(new GLatLng(webcams[i].latitude, webcams[i].longitude), {image: "" + webcams[i].smallimg + "", height: markerSize, width: markerSize});
                    marker.myhtml = info_html;
                    map.addOverlay(marker);
                    markersInView[webcams[i].id] = marker;
                } else {
                    markersInView[webcams[i].id] = markers[webcams[i].id];
                }
                idsInView.push(webcams[i].id);
            }
            // Now remove the markers outside of the viewport
            for (var i = 0; i < webcamids.length; i++) {
                var idx = markersInView.indexOf(webcamids[i]);
                if (idx == -1) {
                    marker = markers[webcamids[i]];
                    map.removeOverlay(marker);
                }
            }
            markers = markersInView;
            webcamids = idsInView;
        }
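
    One thing worth checking, offered only as a guess since the JSON feed cannot be inspected here: if the feed returns latitude and longitude as strings, GLatLng may end up with unusable coordinates. Coercing them to numbers first is a cheap test; a minimal sketch of the marker-creation step under that assumption:

        // Assumption: coordinates may arrive as strings in the JSON; coerce
        // them to numbers before building the GLatLng.
        var lat = parseFloat(webcams[i].latitude);
        var lng = parseFloat(webcams[i].longitude);
        if (!isNaN(lat) && !isNaN(lng)) {
            marker = new WebcamMarker(new GLatLng(lat, lng),
                {image: webcams[i].smallimg, height: markerSize, width: markerSize});
        }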

    Read the article

  • Getting 404 when attempting to POST file to Google Cloud Storage from service account

    - by klactose
    I'm wondering if anyone can tell me the proper syntax and formatting for a service account to send a "POST Object to bucket" request? I'm attempting it programmatically using the HttpComponents library. I manage to get a token from my GoogleCredential, but every time I construct the POST request I get:

        HTTP/1.1 403 Forbidden
        <?xml version='1.0' encoding='UTF-8'?><Error><Code>AccessDenied</Code><Message>Access denied.</Message><Details>bucket-name</Details></Error>

    The Google documentation that describes the request methods mentions posting using HTML forms, but I'm hoping that wasn't suggesting the ONLY way to get the job done. I know that HttpComponents has a way to explicitly create form data by using UrlEncodedFormEntity, but it doesn't support multipart data, which is why I went with the MultipartEntity class. My code is below:

        MultipartEntity entity = new MultipartEntity(HttpMultipartMode.BROWSER_COMPATIBLE);
        String token = credential.getAccessToken();
        entity.addPart("Authorization", new StringBody("OAuth " + token));
        String date = formatDate(new Date());
        entity.addPart("Date", new StringBody(date));
        entity.addPart("Content-Encoding", new StringBody("UTF-8"));
        entity.addPart("Content-Type", new StringBody("multipart/form-data"));
        entity.addPart("bucket", new StringBody(bucket));
        entity.addPart("key", new StringBody("fileName"));
        entity.addPart("success_action_redirect", new StringBody("/storage"));
        File uploadFile = new File("pathToFile");
        FileBody fileBody = new FileBody(uploadFile, "text/xml");
        entity.addPart("file", fileBody);
        httppost.setEntity(entity);
        System.out.println("Posting URI = " + httppost.toString());
        HttpResponse response = client.execute(httppost);
        HttpEntity resp_entity = response.getEntity();

    As I mentioned, I am able to get an actual token, so I'm pretty sure the problem is in how I've formed the request rather than in not being properly authenticated. Keep in mind this is being performed by a service account, which means it does have read/write access. Thanks for reading, and I appreciate any help!

    Read the article

  • Google Maps sometimes does not return a geocoded value for a string

    - by XGreen
    Hi guys, I have the following code. It basically reads an HTML list and geocodes and marks each item on the map. It does this correctly about 8 times out of 10, but sometimes I get the error I set to show in the console. I can't think of anything. Any thoughts are much appreciated.

        $(function () {
            var map = null;
            var geocoder = null;

            function initialize() {
                if (GBrowserIsCompatible()) {
                    // Specifies that the element with the ID map is the container for the map
                    map = new GMap2(document.getElementById("map"));
                    // Sets an initial map position (which mainly gets ignored after reading the addresses list)
                    map.setCenter(new GLatLng(37.4419, -122.1419), 13);
                    // Instantiates the Google Geocoder class
                    geocoder = new GClientGeocoder();
                    map.addControl(new GSmallMapControl()); // Sets map zooming controls on the map
                    map.enableScrollWheelZoom(); // Allows the mouse wheel to control the map while on it
                }
            }

            function showAddress(address, linkHTML) {
                if (geocoder) {
                    geocoder.getLatLng(address, function (point) {
                        if (!point) {
                            console.log('Geocoder did not return a location for ' + address);
                        } else {
                            map.setCenter(point, 8);
                            var marker = new GMarker(point);
                            map.addOverlay(marker);
                            // Assigns the click event to each marker to open its balloon
                            GEvent.addListener(marker, "click", function () {
                                marker.openInfoWindowHtml(linkHTML);
                            });
                        }
                    });
                }
            } // end of show address function

            initialize();

            // This iterates through the text of each address and tells the map
            // to show its location on the map. An internal error is thrown if
            // the location is not found.
            $.each($('.addresses li a'), function () {
                var addressAnchor = $(this);
                showAddress(addressAnchor.text(), $(this).parent().html());
            });
        });

    which looks into this HTML:

        <ul class="addresses">
            <li><a href="#">Central London</a></li>
            <li><a href="#">London WC1</a></li>
            <li><a href="#">London Shoreditch</a></li>
            <li><a href="#">London EC1</a></li>
            <li><a href="#">London EC2</a></li>
            <li><a href="#">London EC3</a></li>
            <li><a href="#">London EC4</a></li>
        </ul>
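
    An aside, offered only as a guess: intermittent geocoder failures like this are often rate limiting, since the loop fires all the geocode requests at once. Staggering the requests is a cheap experiment; a minimal sketch reusing the same showAddress function:

        // Assumption: the sporadic failures come from firing every geocode
        // request simultaneously. Space the calls out by a few hundred ms each.
        $('.addresses li a').each(function (i) {
            var addressAnchor = $(this);
            var parentHtml = addressAnchor.parent().html();
            setTimeout(function () {
                showAddress(addressAnchor.text(), parentHtml);
            }, i * 300); // 300 ms between requests (arbitrary spacing)
        });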

    Read the article

  • Google App Engine - Sitemap creation for a social network

    - by spidee
    Hi all. I am creating a social tool and I want to allow search engines to pick up "public" user profiles, like Twitter and Facebook do. I have seen all the protocol info at http://www.sitemaps.org and I understand it and how to build such a file, along with an index if I exceed the 50K limit. Where I am struggling is the concept of how I make this run. The sitemap for my general site pages is simple: I can use a tool or a script to create the file, host it, submit it, and I'm done. What I then need is a script that will create the sitemaps of user profiles. I assume this would be something like:

        <?xml version="1.0" encoding="UTF-8"?>
        <urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
          <url>
            <loc>http://www.socialsite.com/profile/spidee</loc>
            <lastmod>2010-5-12</lastmod>
            <changefreq>???</changefreq>
            <priority>???</priority>
          </url>
          <url>
            <loc>http://www.socialsite.com/profile/webbsterisback</loc>
            <lastmod>2010-5-12</lastmod>
            <changefreq>???</changefreq>
            <priority>???</priority>
          </url>
        </urlset>

    I've added some ??? because I don't know how I should set these values for my profiles, based on the following. When a new profile is created it must be added to a sitemap. If the profile is changed, or if "certain" properties are changed, then I don't know whether I update the entry in the map or do something else (updating would be a nightmare!). Some users may change their profile. In terms of relevance to the search engine, the only way a Google or Yahoo search will find a user's profile (for my requirement) would be, for example, by means of [user name] and [location], so once the entry for the profile has been added to the map file, the only reasons to have the search bot re-index the profile would be if the user changed their user name (which they can't), changed their location, or set their settings so that their profile is "hidden" from search engines. I assume my map creation will need to be dynamic. From what I have said above, I would imagine that creating a new profile, and possibly editing certain properties, could mark it as needing adding/updating in the sitemap. Assuming I will have millions of profiles added or edited, how can I manage this in a sensible manner? I know I need a script that can append URLs as each profile is created, and the script will probably be a task running at a set frequency; perhaps the profiles have a property like "indexed" and the task sets it to "true" when the profile is added to the map. I don't see the best way to store the map. Do I store it in the datastore, i.e.:

        model = sitemaps
        properties:
            key_name = sitemap_xml_1 (and for my map index, sitemap_index_xml)
            mapxml   = blobstore (the raw XML or ROR map)
            full     = boolean (set true when the URL count reaches 50K; might need this, as a shard counter will tell us)

    To make this work, my thoughts are: memcache the current sitemap structure as "sitemap_xml" and keep a shard counter of the URL count. When my task executes:

    1. Build the XML structure for, say, the first 100 URLs marked "indexed == false" (how many could you run at a time?).
    2. Test whether the current memcached sitemap is full (shard counter + 100 > 50K).
    3a. If the map is nearly full, create a new map entry in the model ("sitemap_xml_2"), update the map index file (also stored in my model as "sitemap_index"), and start a new shard, or reset it.
    3b. If the map is not full, grab it from memcache.
    4. Append the 100-URL XML structure.
    5. Save / memcache the map.

    I can now add a handler using a URL map/route like /sitemaps/*, take the * as the map name, and serve the maps from the blobstore/memcache on the fly.
    Now my question is: does this work? Is it the right way, or at least a good way to start? Will it handle making sure the search bots update when a user changes their profile, possibly by setting the change frequency correctly? Do I need a more advanced system :( or have I reinvented the wheel? I hope this is all clear and makes some form of sense :-)
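
    Purely as an illustration of the document being generated (the original targets App Engine, so treat this as a language-neutral sketch), one way to fill in the ??? values, with monthly/0.5 chosen as placeholder assumptions rather than recommendations:

        // Sketch: build a <urlset> for a batch of profile usernames.
        // changefreq/priority are placeholder assumptions.
        function buildProfileSitemap(usernames) {
            var entries = usernames.map(function (name) {
                return '  <url>\n' +
                       '    <loc>http://www.socialsite.com/profile/' + name + '</loc>\n' +
                       '    <changefreq>monthly</changefreq>\n' +
                       '    <priority>0.5</priority>\n' +
                       '  </url>';
            });
            return '<?xml version="1.0" encoding="UTF-8"?>\n' +
                   '<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">\n' +
                   entries.join('\n') + '\n' +
                   '</urlset>';
        }

        // Example: buildProfileSitemap(['spidee', 'webbsterisback']);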

    Read the article

  • Ideas for my MSc project and Google Summer of Code 2011

    - by Chris Wilson
    I'm currently putting together ideas for my master's project, which I'll be working on over the summer, and I would like to be able to use this time to help Ubuntu in some way. I have the freedom to come up with pretty much any project in the field of software development/engineering, provided it (1) is a substantial piece of software (for reference, I will be working on it for five full months) and (2) solves a problem for more people than just myself. I was hoping to use this project as an opportunity to get some experience with the underbelly of Linux, so that I can mention on my CV that I have 'experience in developing for *NIX in C++', which I'm noticing more and more companies are looking for these days, probably because stuff is moving to cloud servers and that's where Linux rules the roost. My problem is that, since I don't have the experience to begin with, I'm not sure what to do for such a project, and I was wondering if anyone could help me with this. I've noticed from Daniel Holbach's blog that Ubuntu participated in the Google Summer of Code 2010, and that project ideas for it can be found here. However, I have not been able to find anything related to Ubuntu and GSoC 2011, although I have noticed from the GSoC timeline that the list of mentoring organisations will not be published until March 18th. I have two questions here: has Ubuntu applied to be a part of Summer of Code 2011, and what is the status of the 2010 project list linked to earlier? Were they all implemented, or are there still some that can be picked up now, should I not participate in GSoC? I'd like to do something for Ubuntu, but I'd rather not spend my time reinventing the wheel.

    Read the article

  • Google Analytics www 301 causing issues with In-Page Analytics

    - by conrad10781
    The closest question I could find to my problem is this one. The similarity is: I have a profile in Google Analytics (GA) that has been collecting data for a year. The domain setting in GA is "http://example.com". The site, however, will redirect any non-www request to www.example.com via a typical .htaccess refinement. We do this to keep the traffic on the load balancers. I don't know the method the original user had in place, but we're doing a 301 on any non-www to the www equivalent; I believe this has to be fairly standard. Where I differ from that question is in the error message I receive when trying to load In-Page Analytics. I'm instead receiving: "Error: The Website in your settings (http://example.com), redirects into a different domain. (http://www.example.com). In-Page Analytics currently works on only one domain. Note that www.example.com and example.com are NOT considered to be on the same domain. Also, make sure you're not redirecting from http:// to https:// or vice versa." I understand what's being explained; it just seems as though this can't be the end-all. I tried updating the Analytics settings, which from day one have been set to "One domain with multiple subdomains", but I don't see any option to change the URL (which is currently set to http://example.com and not http://www.example.com). I'd prefer not to have to change the URL if at all possible, but I can't seem to find any documentation or anything that provides a possible solution.

    Read the article

  • Translate jQuery UI Datepicker format to .Net Date format

    - by Michael Freidgeim
    I needed to use the same date format in the client-side jQuery UI Datepicker and in server-side ASP.NET code. The actual format can differ between localization cultures. I decided to translate the Datepicker format to the .NET date format, similar to the opposite operation asked about in http://stackoverflow.com/questions/8531247/jquery-datepickers-dateformat-how-to-integrate-with-net-current-culture-date. Note that the replace command needs to replace whole words, and the order of the calls is important. A function that does the opposite operation (translating a .NET date format to the Datepicker format) is described in http://www.codeproject.com/Articles/62031/JQueryUI-Datepicker-in-ASP-NET-MVC

        /// <summary>
        /// Uses regex '\b' as suggested in
        /// http://stackoverflow.com/questions/6143642/way-to-have-string-replace-only-hit-whole-words
        /// </summary>
        /// <param name="original"></param>
        /// <param name="wordToFind"></param>
        /// <param name="replacement"></param>
        /// <param name="regexOptions"></param>
        /// <returns></returns>
        static public string ReplaceWholeWord(this string original, string wordToFind, string replacement, RegexOptions regexOptions = RegexOptions.None)
        {
            string pattern = String.Format(@"\b{0}\b", wordToFind);
            string ret = Regex.Replace(original, pattern, replacement, regexOptions);
            return ret;
        }

        /// <summary>
        /// E.g. "DD, d MM, yy" to "dddd, d MMMM, yyyy"
        /// </summary>
        /// <param name="datePickerFormat"></param>
        /// <returns></returns>
        /// <remarks>
        /// Idea to replace from http://stackoverflow.com/questions/8531247/jquery-datepickers-dateformat-how-to-integrate-with-net-current-culture-date
        /// From http://docs.jquery.com/UI/Datepicker/$.datepicker.formatDate to http://msdn.microsoft.com/en-us/library/8kb3ddd4.aspx
        /// Format a date into a string value with a specified format.
        /// d  - day of month (no leading zero)  --- .Net the same
        /// dd - day of month (two digit)        --- .Net the same
        /// D  - day name short                  --- .Net "ddd"
        /// DD - day name long                   --- .Net "dddd"
        /// m  - month of year (no leading zero) --- .Net "M"
        /// mm - month of year (two digit)       --- .Net "MM"
        /// M  - month name short                --- .Net "MMM"
        /// MM - month name long                 --- .Net "MMMM"
        /// y  - year (two digit)                --- .Net "yy"
        /// yy - year (four digit)               --- .Net "yyyy"
        /// </remarks>
        public static string JQueryDatePickerFormatToDotNetDateFormat(string datePickerFormat)
        {
            string sRet = datePickerFormat.ReplaceWholeWord("DD", "dddd").ReplaceWholeWord("D", "ddd");
            sRet = sRet.ReplaceWholeWord("M", "MMM").ReplaceWholeWord("MM", "MMMM").ReplaceWholeWord("m", "M").ReplaceWholeWord("mm", "MM"); // order is important
            sRet = sRet.ReplaceWholeWord("yy", "yyyy").ReplaceWholeWord("y", "yy"); // order is important
            return sRet;
        }
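
    For context, a small client-side illustration (the element ID is hypothetical) of the kind of format string the helper above translates:

        // jQuery UI side: configure the picker with its own format tokens...
        $('#orderDate').datepicker({ dateFormat: 'DD, d MM, yy' });
        // ...which JQueryDatePickerFormatToDotNetDateFormat would turn into the
        // .NET pattern "dddd, d MMMM, yyyy" for use with DateTime.ToString or
        // DateTime.ParseExact on the server.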

    Read the article

  • De-index URL parameters by value

    - by Doug Firr
    This question is lengthy, so allow me to provide a one-sentence summary up front: I need to get Google to de-index URLs that have parameters with certain values appended. I have a website, example.com, with language translations. There used to be many translations, but I deleted them all so that only English (the default) and French remain. When one selects a language option, a parameter is added to the URL. For example, the home page: https://example.com (default) and https://example.com/main?l=fr_FR (French). I added a robots.txt to stop Google from crawling any of the language translations:

        # robots.txt generated at http://www.mcanerin.com
        User-agent: *
        Disallow:
        Disallow: /cgi-bin/
        Disallow: /*?l=

    So any pages containing "?l=" should not be crawled. I checked in GWT using the robots testing tool, and it works. But under HTML improvements, the previously crawled language-translation URLs remain indexed. The internet says to add a 404 to the header of the removed URLs so Google knows to de-index them. I checked what my CMS would serve if I visited one of the URLs that should no longer exist. This URL was listed in GWT under duplicate title tags (one of the reasons I want to scrub up my URLs): https://example.com/reports/view/884?l=vi_VN&l=hy_AM. This URL should not exist; I removed the language translations. The page loads when it should not! I played around and typed example.com?whatever123; it seems that parameters always load as long as everything before the question mark is a real URL. So if Google has indexed all these URLs with parameters, how do I remove them? I cannot check whether a 404 is being generated, because the page always loads, because it's a parameter that needs to be de-indexed.
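
    An aside, offered as a suggestion rather than a confirmed fix: while robots.txt disallows /*?l=, Googlebot cannot re-crawl those URLs to discover that they should be gone, so they tend to linger in the index. A common approach is to lift the Disallow temporarily and have the parameterised URLs return a noindex signal (or a genuine 404/410), for example:

        <meta name="robots" content="noindex">

    or the equivalent X-Robots-Tag: noindex response header, and then reinstate the robots.txt rule once the URLs have dropped out of the index.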

    Read the article

  • Google Chrome 10 stable crashes on every page

    - by Achu
    I installed Google Chrome today, and when I open any page, including Ask Ubuntu, I get this error message. I can see my memory usage is normal (memory 56% and swap 4.8%). I also reload and go to another page, same problem. What is the problem? The last dmesg output:

        [26612.341865] lo: Disabled Privacy Extensions
        [29651.852476] chrome[15472] general protection ip:1528e26 sp:7fff514a9dc0 error:0 in chrome[400000+3082000]
        [31447.190586] [UFW BLOCK] IN=eth1 OUT= MAC=00:1c:25:a1:e7:67:00:16:3e:28:5a:b7:08:00 SRC=172.23.100.6 DST=172.23.20.128 LEN=69 TOS=0x00 PREC=0x00 TTL=128 ID=15939 PROTO=UDP SPT=4243 DPT=161 LEN=49
        [31451.250190] [UFW BLOCK] IN=eth1 OUT= MAC=00:1c:25:a1:e7:67:00:16:3e:28:5a:b7:08:00 SRC=172.23.100.6 DST=172.23.20.128 LEN=69 TOS=0x00 PREC=0x00 TTL=128 ID=16180 PROTO=UDP SPT=4243 DPT=161 LEN=49
        [31454.260150] [UFW BLOCK] IN=eth1 OUT= MAC=00:1c:25:a1:e7:67:00:16:3e:28:5a:b7:08:00 SRC=172.23.100.6 DST=172.23.20.128 LEN=69 TOS=0x00 PREC=0x00 TTL=128 ID=16322 PROTO=UDP SPT=4243 DPT=161 LEN=49
        [31458.648164] [UFW BLOCK] IN=eth1 OUT= MAC=00:1c:25:a1:e7:67:00:16:3e:28:5a:b7:08:00 SRC=172.23.100.6 DST=172.23.20.128 LEN=69 TOS=0x00 PREC=0x00 TTL=128 ID=16513 PROTO=UDP SPT=4243 DPT=161 LEN=49
        [33124.300112] lo: Disabled Privacy Extensions
        [33601.021406] Skipping EDID probe due to cached edid
        [34594.043501] chrome[15746]: segfault at 0 ip 0000000000d5cdd0 sp 00007fff5149ec20 error 6 in chrome[400000+3082000]
        [34597.395334] chrome[18112] general protection ip:17c85bf sp:7fff514aa4f0 error:0 in chrome[400000+3082000]
        [34616.786643] chrome[18124]: segfault at 1007 ip 00000000017c849f sp 00007fff514aabd0 error 4 in chrome[400000+3082000]
        [37277.436207] lo: Disabled Privacy Extensions
        [38549.501390] e1000e: eth1 NIC Link is Down
        [38551.122253] e1000e: eth1 NIC Link is Up 100 Mbps Full Duplex, Flow Control: RX/TX
        [38551.122263] e1000e 0000:00:19.0: eth1: 10/100 speed: disabling TSO

    Read the article

  • Google Chrome not rendering webpages correctly

    - by sumit_gt
    I am facing some serious web page rendering issues with Chrome. It is most prominent during JavaScript-based animations and the like on websites such as YouTube. I have tried removing Chrome using sudo apt-get purge google-chrome-stable and then reinstalling it, but the problems still persist. The same web pages work correctly in Firefox on Ubuntu and in Chrome on Windows; the problem only shows up when I use Chrome on Ubuntu. I think the issue started after I updated to the latest version of Chrome. I have used Chrome previously on this machine without any problems. I have attached an image that demonstrates the issue. What could possibly be the problem? PS: here's the output of lshw -c video:

        *-display
           description: VGA compatible controller
           product: Madison [Radeon HD 5000M Series]
           vendor: Hynix Semiconductor (Hyundai Electronics)
           physical id: 0
           bus info: pci@0000:01:00.0
           version: 00
           width: 64 bits
           clock: 33MHz
           capabilities: pm pciexpress msi vga_controller bus_master cap_list rom
           configuration: driver=fglrx_pci latency=0
           resources: irq:46 memory:e0000000-efffffff memory:f0020000-f003ffff ioport:d000(size=256) memory:f0000000-f001ffff

    Here's the output of lspci -nn: output of lspci -nn

    Read the article

  • Duplicate content appearing for multilingual sites

    - by Rocky Singh
    I have a site with a default URL, say "http://www.blahblah.com/" (the default, in English). The site supports multiple languages. On my home page there are a few links, say "English", "French", "Spanish", etc., and on clicking these links the user is redirected to:

        http://www.blahblah.com/en-us/ (English)
        http://www.blahblah.com/fr-ca/ (French)
        http://www.blahblah.com/spanish-culture/ (Spanish)

    and based on the culture in the URL I show the content to end users in their desired language. Now, this is how my site works. The issue I am getting is with SEO. I noticed Google is considering my site's pages as duplicates (I checked via Google Webmaster Tools), e.g.:

        1. http://www.blahblah.com/documents/ and http://www.blahblah.com/en-us/documents/
        2. http://www.blahblah.com/news/ and http://www.blahblah.com/en-us/news

    and similarly all the pages are considered duplicate content in Google Webmaster Tools. I am worried about this, since I think my site is being penalized in ranking because of it. Could you suggest how to overcome this situation?
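
    One common approach, offered only as a suggestion: declare a single canonical version for each pair so Google folds the duplicates together. For example, both http://www.blahblah.com/documents/ and http://www.blahblah.com/en-us/documents/ could carry

        <link rel="canonical" href="http://www.blahblah.com/en-us/documents/" />

    (whichever of the two you decide is canonical; the point is that both variants reference the same one).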

    Read the article

  • Recording custom variables to identify individual users with Google Analytics

    - by mrtsherman
    I have been asked by our marketing department to add Google Analytics custom variable tracking to my company's website. As the website uses server-side includes, modifications to the tracking tag roll out globally; maintenance is therefore a headache! So, if I add the following code (keeping in mind that with SSI every page has the same code):

        // Visitor-level tracking, id = 12345
        // Record a unique id for each visitor. When they return, also track this id.
        _gaq.push(['_setCustomVar', 1, 'id', '12345', 1]);

        // Page-level tracking
        // If the user signs up for our newsletter we set newsletter to true.
        // Each page they subsequently visit should also mark this as true.
        _gaq.push(['_setCustomVar', 1, 'newsletter', 'true', 1]);

    I don't use GA and the marketing people don't use custom variables, so we don't actually know how or if this will work. Therefore my questions are: Do I want page, session or visitor level tracking? What happens when the same code is used on every page? Can GA "overwrite" a setting? For example, if I set newsletter to true on page X and the user then navigates to page Y, will the variable also be marked there?
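
    A hedged note on the questions above: in classic GA, _setCustomVar takes (slot, name, value, scope), where scope 1 is visitor, 2 is session and 3 is page; the two calls above share slot 1, so on any page that runs both, the second call overwrites the first. A minimal sketch using separate slots, assuming both values should persist across a visitor's pages:

        // Slot 1: visitor-level id (scope 1 = visitor).
        _gaq.push(['_setCustomVar', 1, 'id', '12345', 1]);

        // Slot 2: visitor-level newsletter flag, so it sticks on subsequent pages.
        _gaq.push(['_setCustomVar', 2, 'newsletter', 'true', 1]);

        // Custom variables are only sent with the next pageview or event hit.
        _gaq.push(['_trackPageview']);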

    Read the article

  • Title of the page in search results and title of Google's cached version are different. Why?

    - by Azmorf
    Check this: http://www.google.com/search?q=site:gunlawsbystate.com+kansas+gun+laws The title of the first result is "Kansas Gun Laws - Gun Laws By State". However, on the page Google has cached, the title is different: <title>Kansas Gun Laws - Kansas Gun Law - Reciprocity Guide</title>. Google shows the title that was on the site 2-3 months ago. The Google bot has visited the website many times since then, and as you can see it has even cached it (the latest version is from 15th Sept); however, for some reason it doesn't change the title to the new one in the search results. We use a hash-bang URL structure on this website. It completely meets Google's requirements for AJAX websites (the _escaped_fragment_ mechanism). The issue I explained is happening with almost all hash-bang pages that got indexed. Questions: Why does it keep the old page title in the search results? Can it be connected to the fact that I'm using hash-bang URLs? There are lots of pages on the site with the same issue, and all of them have hash-bang URLs. Another thing I noticed is that Google's "Preview" feature doesn't work for any hash-bang URLs on the site. Did I do anything wrong? It has cached versions of the pages, so why wouldn't it generate a preview? Thanks (and sorry for my English). PS. Here's a weird thing I also noticed: this search query https://www.google.com/search?q=Kansas+Gun+Laws+-+Reciprocity+Guide shows the correct title for the same page as in the example above. Why does Google show different titles for the same page when you run different queries?

    Read the article

  • Language redirect affecting PageRank and search listing?

    - by Janoszen
    Preface: We have a number of sites that use the same redirect mechanism across the board. We recently transitioned one site from non-localised to localised and noticed that the Google+ integration no longer shows up in the search results AND the PageRank has gone from 2 to 0.

    How the redirect works:
    1. If the UA sends a cookie (e.g. lang=en), redirect the user to /language (e.g. /en).
    2. If the UA is a bot (.*bot.*), redirect to /en.
    3. If the Accept-Language header contains a usable, non-English language, redirect to /language (English is the default on many browsers in non-English regions).
    4. If there is a valid GeoIP lookup and the detected region is linked to a supported language, redirect to /language.
    5. Otherwise, redirect to /en.

    We do of course have the proper markup on all pages to indicate the alternate language:

        <link hreflang="de" href="/de" rel="alternate" />

    As far as we can tell, we follow all publicly available guidelines from Google, so we are a bit at odds over whether this is a bug in Google or we have done something wrong.

    Question: Does not having content on the root URL of a domain adversely affect search engine rankings, and if so, how does one implement a proper language redirection?
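
    One assumption-laden thought: hreflang supports an x-default value for the version to serve when no language matches, which is often used alongside exactly this kind of redirecting root. Following the markup style already used above, with /en assumed to be the default target:

        <link hreflang="x-default" href="/en" rel="alternate" />
        <link hreflang="en" href="/en" rel="alternate" />
        <link hreflang="de" href="/de" rel="alternate" />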

    Read the article

  • What is the most performant CSS property for transitioning an element?

    - by Ian Kuca
    I'm wondering whether there is a performance difference between using different CSS properties to translate an element. Different properties fit different situations. You can translate an element with the following properties: transform, top/left/right/bottom, and margin-top/left/right/bottom. In the case where you do not use the transition CSS property for the translation but instead use some form of timer (setTimeout, requestAnimationFrame or setImmediate) or raw events, which is the most performant, that is, which is going to give the highest FPS?
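
    For reference, a minimal sketch of the timer-driven case using requestAnimationFrame and transform, which is commonly cited as the compositor-friendly choice because it avoids triggering layout the way top/left or margins do (the element ID is hypothetical):

        // Move an element 300px to the right with transform inside a
        // requestAnimationFrame loop; transform changes generally skip layout.
        var box = document.getElementById('box'); // hypothetical element
        var x = 0;
        function step() {
            x += 2;
            box.style.transform = 'translateX(' + x + 'px)';
            if (x < 300) {
                requestAnimationFrame(step);
            }
        }
        requestAnimationFrame(step);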

    Read the article

  • Google affecting my SERP Rank?

    - by Asad Moeen
    The following are some of my website's details.

    Home page: [thebluewaffles].[com]
    Keywords: Blue Waffles; the rest of the keywords are post/subject specific
    Site description: health articles blog
    Site age: 1.5 years

    A short history: when I started my website, the few things in my mind when posting content were at least 500 words on each page and writing all the articles with to-the-point information. I didn't go really fast with it, which is why I only have about 15 articles in 1.5 years. The SEO strategy was fairly simple: I shared links through social marketing websites and some article-sharing websites, after which I could see my website's rankings in the top 5 SERP results. I ranked well enough for about 8 months continuously, but I didn't keep updating content, due to which there were some 3 rough months when no content was posted because of personal work. The SERPs dropped to the 2nd page in April and almost started disappearing in May. I asked a lot of people about it, and most came up with "no updates to the site" as the reason, so I started updating my site again; November has almost started and I still see no sign of my website ranking. Another important point is that when I post a new article and do a title search in Google, I see it ranks well enough for the first 10 hours and then disappears. What could be wrong here?

    Read the article

  • Ranking drop after using reverse proxy for blog subdirectory and robots.txt for old blog subdomain

    - by user40387
    We have a 3Dcart store and a WordPress blog hosted on a separate server. Originally, we had a CNAME set up to point the blog to http://blog.example.com/. However, in our attempt to boost link-based and traffic-based authority on the main site, we've opted to do a reverse proxy to http://www.example.com/blog/. It's been about two months since we finished the reverse proxy migration. It appears that everything is technically working as intended, including some robots and sitemap changes; the new URLs are even generating some traffic, as indicated in Google Analytics. While Google has been indexing the new URL locations, they're ranking very poorly, even for non-competitive, long-tail keywords. Meanwhile, the old subdomain URLs are still ranking mostly as well as they used to (even though they aren't showing meta titles and descriptions due to being blocked by robots.txt). Our working theory is that Google has an old index of the subdomain URLs and is considering the new URLs to be duplicate content, since it's being told not to crawl the subdomain and therefore can't see the rel canonicals we have in place. To resolve this, we've updated the subdomain's robots.txt to no longer block crawling and indexing. Theoretically, seeing the canonical tag on the subdomain pages will resolve any perceived duplicate content issues. In the meantime, we were wondering if anyone has any other ideas. We are very concerned that we'll be losing valuable traffic, as we're entering our on-season at the moment.

    Read the article

  • De-index URL parameters

    - by Doug Firr
    This question is lengthy, so allow me to provide a one-sentence summary up front: I need to get Google to de-index URLs that have certain parameters appended. I have a website, example.com, with language translations. There used to be many translations, but I deleted them all so that only English (the default) and French remain. When one selects a language option, a parameter is added to the URL. For example, the home page: https://example.com (default) and https://example.com/main?l=fr_FR (French). I added a robots.txt to stop Google from crawling any of the language translations:

        # robots.txt generated at http://www.mcanerin.com
        User-agent: *
        Disallow:
        Disallow: /cgi-bin/
        Disallow: /*?l=

    So any pages containing "?l=" should not be crawled. I checked in GWT using the robots testing tool, and it works. But under HTML improvements, the previously crawled language-translation URLs remain indexed. The internet says to add a 404 to the header of the removed URLs so Google knows to de-index them. I checked what my CMS would serve if I visited one of the URLs that should no longer exist. This URL was listed in GWT under duplicate title tags (one of the reasons I want to scrub up my URLs): https://example.com/reports/view/884?l=vi_VN&l=hy_AM. This URL should not exist; I removed the language translations. The page loads when it should not! I played around and typed example.com?whatever123; it seems that parameters always load as long as everything before the question mark is a real URL. So if Google has indexed all these URLs with parameters, how do I remove them? I cannot check whether a 404 is being generated, because the page always loads, because it's a parameter that needs to be de-indexed.

    Read the article

  • Using Google App Engine to Perform World Updates vs an Authoritative Server

    - by Error 454
    I am considering different game server architectures that use GAE. The types of games I am considering are turn-based, where the world state would need to be updated about once per minute. I am looking for an answer that persuades me either to perform the world update on the Google servers OR on an authoritative server that syncs with the datastore. The main goal here is to minimize GAE daily quotas. For some rough numbers, I am assuming 10,000 entities requiring updates. Each entity update would require:

    - Reading 5 private entity variables (fetched from the datastore)
    - Fetching as many as 20 static variables (from the datastore or persisted in server memory)
    - Writing 5 entity variables

    Clients of the game would authenticate and set state directly against GAE, as well as pull the latest world state from GAE. Running the update on GAE would consist of a cron job launched every minute. This would update all of the entities and save the results to the datastore, and would be more CPU-intensive for GAE. Running the update on an authoritative server would consist of fetching entity data from the GAE datastore, calculating the new entity states, and pushing the new state variables back to the datastore. This would be more bandwidth-intensive for the datastore.
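
    A hedged back-of-the-envelope using only the figures above, and counting each entity update as at least one datastore read and one write:

        10,000 entities x 1,440 runs/day = 14,400,000 reads and 14,400,000 writes per day

    so whichever side performs the update, datastore operations (rather than CPU or bandwidth) are likely to be the quota that dominates.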

    Read the article
