Search Results

Search found 27152 results on 1087 pages for 'google cache'.


  • Inserting Google Maps into a WYSIWYG editor, then saving and retrieving properly

    - by Tatu Ulmanen
    Hi, I'm trying to extend jWysiwyg with a function to add a map from Google Maps. I can get the map all right, but I'm having problems with how to handle the generated map so it can be saved with the page and then retrieved. To open the process up a bit: the user enters the editor, which is created using jWysiwyg. The user clicks on a button which asks for an address, then returns the corresponding latitude and longitude. I use this location information to create a map using Google Maps API (V3), which I then insert into the editable WYSIWYG area. When I save the page, the whole Google-generated HTML gets saved into the database, which will not work properly when opened next time (I get a grey box when I open up the page again). Now, the problem is that I need to insert the map in such a format that it will work afterwards (perhaps using <script> tags). I also need the map to be visible in the WYSIWYG editor itself, so I cannot just put in a placeholder tag which would later be populated with the correct map data. So, in short: how would you insert a Google Map into a WYSIWYG editor in a way that it is both visible/previewable from the editor itself and could also be saved in a format that would work properly when opened the next time?
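
    One possible approach (a hedged sketch, not from the original post; the class and data-attribute names are illustrative assumptions): save a map container whose coordinates live in data attributes, and re-create the map from those attributes both when the editor content is loaded and when the published page loads. The same initialiser can then run in both places.

        // Assumes the editor inserts markup like:
        // <div class="embedded-map" data-lat="50.8194" data-lng="-0.1363"></div>
        // Re-initialise every saved map placeholder after the page (or editor content) is loaded.
        function initEmbeddedMaps(root) {
            var containers = (root || document).querySelectorAll('.embedded-map');
            for (var i = 0; i < containers.length; i++) {
                var el = containers[i];
                var center = new google.maps.LatLng(
                    parseFloat(el.getAttribute('data-lat')),
                    parseFloat(el.getAttribute('data-lng'))
                );
                new google.maps.Map(el, { zoom: 14, center: center });
            }
        }

    Because only the container div and its attributes are stored in the database, the saved HTML stays stable; the live map object is rebuilt on every load rather than serialized.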

    Read the article

  • Using Google Maps v3, PHP and Json to plot markers

    - by bateman_ap
    Hi, I am creating a map using the new(ish) v3 of the Google Maps API. I have managed to get a map displaying using code as below:

        var myLatlng = new google.maps.LatLng(50.8194000,-0.1363000);
        var myOptions = { zoom: 14, center: myLatlng, mapTypeControl: false, scrollwheel: false, mapTypeId: google.maps.MapTypeId.ROADMAP };
        var map = new google.maps.Map(document.getElementById("location-map"), myOptions);

    However, I now want to add a number of markers I have stored in a PHP array. The array currently looks like this if I print it out to screen:

        Array ( [0] => Array ( [poiUid] => 20 [poiName] => Brighton Cineworld [poiCode] => brighton-cineworld [poiLon] => -0.100450 [poiLat] => 50.810780 [poiType] => Cinemas ) [1] => Array ( [poiUid] => 21 [poiName] => Brighton Odeon [poiCode] => brighton-odeon [poiLon] => -0.144420 [poiLat] => 50.821860 [poiType] => Cinemas ) )

    All the reading I have done so far suggests I turn this into JSON using json_encode. If I run the array through this and echo it to the screen I get:

        [{"poiUid":"20","poiName":"Brighton Cineworld","poiCode":"brighton-cineworld","poiLon":"-0.100450","poiLat":"50.810780","poiType":"Cinemas"},{"poiUid":"21","poiName":"Brighton Odeon","poiCode":"brighton-odeon","poiLon":"-0.144420","poiLat":"50.821860","poiType":"Cinemas"}]

    This is the bit where I am struggling: I am not sure the encoded array is what I need to start populating markers. I think I need something like the code below, but I am not sure how to add the markers from the JSON I have passed through:

        var locations = $jsonPoiArray;
        for (var i = 0; i < locations.length; i += 1) {
            // Create a new marker
        };
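
    A minimal sketch of the missing loop body, assuming `locations` holds the json_encode output shown above and `map` is the google.maps.Map created earlier (property names are taken from the question):

        for (var i = 0; i < locations.length; i += 1) {
            var poi = locations[i];
            // json_encode emitted the coordinates as strings, so convert them to numbers.
            var marker = new google.maps.Marker({
                position: new google.maps.LatLng(parseFloat(poi.poiLat), parseFloat(poi.poiLon)),
                map: map,
                title: poi.poiName
            });
        }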

    Read the article

  • Google Maps API v3 - loop through overlays - OverlayView methods

    - by user317005
    What's wrong with the code below? When I execute it, the map doesn't even show up. But when I put the OverlayView methods outside the for-loop and manually assign a lat/lng, then it magically works?! Does anyone know how I can loop through an array of lats/lngs (= items) using the OverlayView methods? I hope this makes sense, I just don't know how else to explain it. And unfortunately, I run my code on my localhost.

        var overlay;
        OverlayTest.prototype = new google.maps.OverlayView();
        [taken out: options]
        var map = new google.maps.Map(document.getElementById('map_canvas'), options);
        var items = [ ['lat','lng'],['lat','lng'] ];
        for (var i = 0; i < items.length; i++) {
            var latlng = new google.maps.LatLng(items[i][0], items[i][1]);
            var bounds = new google.maps.LatLngBounds(latlng);
            overlay = new OverlayTest(map, bounds);
            function OverlayTest(map, bounds) {
                [taken out: not important]
                this.setMap(map);
            }
            OverlayTest.prototype.onAdd = function() {
                [taken out: not important]
            }
            OverlayTest.prototype.draw = function() {
                [taken out: not important]
            }
        }
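
    Two things stand out: the OverlayTest constructor and its prototype methods are (re)defined inside the loop, and the items array holds the string literals 'lat'/'lng' rather than numbers. A hedged sketch of the usual structure (the onAdd/draw bodies omitted from the question are only stubbed here):

        // Define the custom overlay once, outside the loop.
        function OverlayTest(map, bounds) {
            this.bounds_ = bounds;
            this.setMap(map);
        }
        OverlayTest.prototype = new google.maps.OverlayView();
        OverlayTest.prototype.onAdd = function() { /* create the overlay's DOM nodes here */ };
        OverlayTest.prototype.draw = function() { /* position them using this.getProjection() */ };
        OverlayTest.prototype.onRemove = function() {};

        // Then instantiate one overlay per coordinate pair.
        var items = [ [50.82, -0.14], [50.81, -0.10] ];  // real numbers, not the strings 'lat'/'lng'
        for (var i = 0; i < items.length; i++) {
            var latlng = new google.maps.LatLng(items[i][0], items[i][1]);
            new OverlayTest(map, new google.maps.LatLngBounds(latlng, latlng));
        }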

    Read the article

  • How to use semantic markup and Google Places to assist in local search SEO?

    - by ElHaix
    In this article, adding additional localized markup is supposed to help your site's SEO, i.e.:

        <div itemscope itemtype="http://data-vocabulary.org/Organization">
            <span itemprop="name">Search Engine People</span>
            <span itemprop="address" itemscope itemtype="http://data-vocabulary.org/Address">
            <span itemprop="street-address">100 Westney Road South Unit 200, Building E</span>
            <span itemprop="locality">Ajax</span>, <span itemprop="region">ON</span>
            <span itemprop="country-name">Canada</span>
            <span itemprop="postal-code">L1S 7H3</span>
        </div>

    What about a site that contains valid localized results, where the actual business location is not relevant? For example, a site with valid local results from San Francisco, CA and Phoenix, AZ. Should these tags be added to the localized results, and has anyone got any experience with how much adding these tags has improved results? In terms of Google Places, however, they seem to ask for the business's actual physical location. Is there a way to use Google Places in the aforementioned example to assist in SEO?

    Read the article

  • how to use appcfg.py for google-app-engine projects created using google's eclipse plugin?

    - by Aadith
    I have created a google-app-engine Java project in Eclipse using Google's Eclipse plugin. My previous attempt to deploy failed. Now, when I retry, I get the following message:

        Unable to update app: Error posting to URL: http://appengine.google.com/api/appversion/create?app_id=mybdaywisher&version=1
        409 Conflict
        Another transaction for this user is already in progress for this app and major version. That user can undo the transaction with appcfg.py's "rollback" command.

    Now, I have always used the google-app-engine features from inside Eclipse only and have not a clue how to run the appcfg.py command. I could not get much help from the documentation available over the internet. The only thing I could make out was that for Mac (I'm on a Mac), the command to be used is appcfg.sh. Inside Eclipse, I looked where the App Engine SDK is located on my machine and went to that location. I even found appcfg.sh there. But when I try to run it, it only reports the error "command not found". I tried various alternatives to run it (like running it with sudo, and running it as ./appcfg.sh after going to where it's located) but with no success. Can someone please tell me the steps I need to follow to run the appcfg command?

    Read the article

  • Google Map GEO Results

    - by Lee
    Hey all, I'm getting really frustrated with Google geo results and hope someone can advise me on the best way to go. I have created an AutoSuggest feature where you can start typing the address and Google will respond with suggestions. The user then selects an address to move on. But before I want them to continue to the next page, I want to validate their selection. I would have thought this would be easy, as we are only checking against what Google has already given. But when I do my validation lookup it displays no results. Some example code: let's say I picked this address from the suggestions: Suffield, CT 06078, USA. Then on validation I do a second lookup with this address, i.e.

        $string = "Suffield, CT 06078, USA";
        echo 'http://maps.google.com/maps/geo?output=json&oe=utf8&gl=us&sensor=false&key=[MyKey]&q='.urlencode($string).'';

    It gives me Error code 602 (G_GEO_UNKNOWN_ADDRESS). How can it not be found when it's Google that gave me the address?? Any suggestions on how I can get around this? Hope you can!
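
    Not from the original post, but one way to sidestep the server-side lookup entirely (a sketch, assuming the autosuggest already runs in the browser with the Maps v3 API loaded) is to validate the chosen suggestion with the client-side geocoder before the form is submitted; the function and callback names below are illustrative:

        var geocoder = new google.maps.Geocoder();
        function validateAddress(address, onValid, onInvalid) {
            geocoder.geocode({ address: address, region: 'us' }, function (results, status) {
                if (status === google.maps.GeocoderStatus.OK && results.length > 0) {
                    onValid(results[0].geometry.location);  // pass the LatLng along
                } else {
                    onInvalid(status);                      // e.g. ZERO_RESULTS
                }
            });
        }

        // Usage: validateAddress('Suffield, CT 06078, USA', submitForm, showError);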

    Read the article

  • PHP CURL Google Calendar using Private URL

    - by MooCow
    I'm trying to get an array of events from Google Calendar using the Private URL. I read the Google API document, but I want to try doing this without using the Zend library, since I have no idea what the eventual server file structure is and I want to avoid having other people edit the code. I also did a search before posting and ran into the same condition where PHP's curl_exec returns false with the URL, but I get a JSON file if the URL is opened in a web browser. Since I'm using the Private URL, do I really need to authenticate against the Google server using Zend? I'm trying to have PHP clean up the array before encoding it for Flash.

        $URL = <string of the private URL from Google Calendar>
        $ch = curl_init($URL);
        curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
        $data = curl_exec($ch);
        curl_close($ch);
        $result = json_decode($data);
        print '<pre>'.var_export($data,1).'</pre>';

    Screen output >>> false

    Read the article

  • Will Google treat this JavaScript code as a bad practice?

    - by Mathew Foscarini
    I have a website that provides a custom UX experience implemented via JavaScript. When JavaScript is disabled in the browser, the website falls back to CSS for the layout. To make this possible I've added a noJS class to the <body> and quickly remove it via JavaScript.

        <body class="noJS layout-wide">
        <script type="text/javascript">var b=document.getElementById("body");b.className=b.className.replace("noJS","");</script>

    This caused a problem when the page loads and JavaScript is enabled. The body immediately has its noJS class removed, and this causes the layout to appear messed up until the JavaScript code for the layout is executed (at the bottom of the page). To solve this I hide each article via JavaScript, by adding a CSS class fix (which is display:none) as each article is loaded.

        <article id="q-3217">....</article>
        <script type="text/javascript">var b=document.getElementById("q-3217");b.className=b.className+" fix";</script>

    After the page is ready I show all the articles in the correct layout. I've read many times in Google's documentation not to hide content. So I'm worried that Google will penalize my website for doing this.
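
    A common alternative worth noting (a sketch; the class names are assumptions, not from the post) is to swap the marker class on the root <html> element in the <head>, before any of the body renders. CSS rules scoped to the JS/no-JS classes then pick the right layout from the first paint, so no per-article hiding is needed:

        <!-- In <head>, before any content renders: -->
        <script type="text/javascript">
            // Swap the no-JS marker for a JS marker as early as possible,
            // so CSS scoped to .hasJS applies before the first paint.
            document.documentElement.className =
                document.documentElement.className.replace('noJS', 'hasJS');
        </script>

    Because nothing is hidden after load and the content is identical with or without JavaScript, this pattern avoids the flash of unstyled layout without relying on display:none.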

    Read the article

  • How to reload google dfp in ajax content? [migrated]

    - by cj333
    Google DFP supports ads in AJAX content, but if I parse all the code in the main page, it always shows the same ads even though turning the page reloads the AJAX content. I read some articles from http://productforums.google.com/forum/#!msg/dfp/7MxNjJk46DQ/4SAhMkh2RU4J, but my code does not work. Any working code suggestions? Thanks. Main page code:

        <script type='text/javascript'>
        $(document).ready(function(){
            $('#next').live('click',function(){
                var num = $(this).html();
                $.ajax({
                    url: "album-slider.php",
                    dataType: "html",
                    type: 'POST',
                    data: 'photo=' + num,
                    success: function(data){
                        $("#slider").center();
                        googletag.cmd.push(function() {
                            googletag.defineSlot('/1*******/ads-728-90', [728, 90], 'div-gpt-ad-1**********-'+ num).addService(googletag.pubads());
                            googletag.pubads().enableSingleRequest();
                            googletag.enableServices();
                        });
                    }
                });
            });
        });
        </script>

    album-slider.php:

        <!-- ads-728-90 -->
        <div id='div-gpt-ad-1**********-<?php echo $_GET['photo']; ?>' style='width:728px; height:90px;'>
        <script type='text/javascript'>
            googletag.cmd.push(function() {
                googletag.display('div-gpt-ad-1**********-<?php echo $_GET['photo']; ?>);
            });
        </script>
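
    One commonly suggested pattern for this (a sketch, not tested against the poster's setup; the ad-unit path and div id are placeholders, and it assumes the 728x90 div itself stays in the page while only the slider content is replaced): define the slot once on the main page and refresh it whenever new AJAX content arrives, instead of defining a new slot on every load.

        // On the main page, once:
        var sliderSlot;
        googletag.cmd.push(function() {
            sliderSlot = googletag.defineSlot('/12345678/ads-728-90', [728, 90], 'div-gpt-ad-slider')
                                  .addService(googletag.pubads());
            googletag.pubads().enableSingleRequest();
            googletag.enableServices();
            googletag.display('div-gpt-ad-slider');   // first ad
        });

        // In the $.ajax success callback, after the new slide is in the DOM:
        googletag.cmd.push(function() {
            googletag.pubads().refresh([sliderSlot]); // fetch a new ad for the existing slot
        });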

    Read the article

  • Sync Google Contacts with QuickBooks

    - by dataintegration
    The RSSBus ADO.NET Providers offer an easy way to integrate with different data sources. In this article, we include a fully functional application that can be used to synchronize contacts between Google and QuickBooks. Like our QuickBooks ADO.NET Provider, the included application supports both the desktop versions of QuickBooks and QuickBooks Online Edition.

    Getting the Contacts

    Step 1: Google accounts include a number of contacts. To obtain a list of a user's Google Contacts, issue a query to the Contacts table. For example: SELECT * FROM Contacts.

    Step 2: QuickBooks stores contact information in multiple tables. Depending on your use case, you may want to synchronize your Google Contacts with QuickBooks Customers, Employees, Vendors, or a combination of the three. To get data from a specific table, issue a SELECT query to that table. For example: SELECT * FROM Customers

    Step 3: Retrieving all results from QuickBooks may take some time, depending on the size of your company file. To narrow your results, you may want to use a filter by including a WHERE clause in your query. For example: SELECT * FROM Customers WHERE (Name LIKE '%James%') AND IncludeJobs = 'FALSE'

    Synchronizing the Contacts

    Synchronizing the contacts is a simple process. Once the contacts from Google and the customers from QuickBooks are available, they can be compared and synchronized based on user preference. The sample application does this based on user input, but it is easy to create one that does the synchronization automatically. The INSERT, UPDATE, and DELETE statements available in both data providers make it easy to create, update, or delete contacts in either data source as needed.

    Pre-Built Demo Application

    The executable for the demo application can be downloaded here. Note that this demo is built using BETA builds of the ADO.NET Provider for Google V2 and ADO.NET Provider for QuickBooks V3, and will expire in 2013.

    Source Code

    You can download the full source of the demo application here. You will need the Google ADO.NET Data Provider V2 and the QuickBooks ADO.NET Data Provider V3, which can be obtained here.

    Read the article

  • Google Analytics Event Tracking and Variable visibility.

    - by Jeow
    Hi guys, I have added the latest standard snippet to my HTML page to get Google Analytics to work:

        ... ...
        var _gaq = _gaq || [];
        _gaq.push(['_setAccount', 'UA-15080849-1']);
        _gaq.push(['_trackPageview']);
        (function() {
            var ga = document.createElement('script'); ga.type = 'text/javascript'; ga.async = true;
            ga.src = 'http://www.google-analytics.com/ga.js';
            (document.getElementsByTagName('head')[0] || document.getElementsByTagName('body')[0]).appendChild(ga);
        })();

    Now looking at the official 'event tracking guide', Google says to add a snippet such as:

        pageTracker._trackEvent('Videos', 'Play', 'Gone With the Wind');

    My question is: where is pageTracker coming from? Is it a global object in ga.js? But if it is, why did Google not tell me that they run a risk of breaking some script... I must be missing something. Any help really appreciated.
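
    Editor's note: the asynchronous snippet shown above never creates a pageTracker object; that name comes from the older synchronous ga.js setup. With the async snippet, the equivalent event call is queued through _gaq (category/action/label values taken from the guide's example):

        // No pageTracker needed with the asynchronous syntax:
        _gaq.push(['_trackEvent', 'Videos', 'Play', 'Gone With the Wind']);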

    Read the article

  • How do I mashup Google Maps with geolocated photos from one or more social networks?

    - by PureCognition
    I'm working on a proof of concept for a project, and I need to pin random photos to a Google Map. These photos can come from another social network, but need to be non-porn. I've done some research so far: Google's Image Search API is deprecated, so one has to use the Custom Search API. A lot of the images aren't photos, and I'm not sure how well it handles geolocation yet. Twitter seems a little more well suited, except for the fact that people can post pictures of pretty much anything. I was also going to look into the APIs for other networks such as Flickr, Picasa, Pinterest and Instagram. I know there are some aggregate services out there that might have done some of this mash-up work for me as well. If there is anyone out there that has a handle on social APIs and where I should look for this type of solution, I would really appreciate the help. Also, in cases where server-side implementation matters, I'm a .NET developer by experience.
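
    Whichever network supplies the photos, the map-side plumbing looks roughly the same. A sketch of the plotting half (the `photos` array is hypothetical and stands in for whatever geotagged results the chosen API returns, likely fetched server-side):

        // Plot already-fetched, geotagged photos on a Google Map.
        var photos = [
            { title: 'Golden Gate', lat: 37.8199, lng: -122.4783, thumb: 'http://example.com/a.jpg' },
            { title: 'Pier 39',     lat: 37.8087, lng: -122.4098, thumb: 'http://example.com/b.jpg' }
        ];
        var map = new google.maps.Map(document.getElementById('photo-map'), {
            zoom: 12,
            center: new google.maps.LatLng(37.7749, -122.4194),
            mapTypeId: google.maps.MapTypeId.ROADMAP
        });
        for (var i = 0; i < photos.length; i++) {
            (function (photo) {
                var marker = new google.maps.Marker({
                    position: new google.maps.LatLng(photo.lat, photo.lng),
                    map: map,
                    title: photo.title
                });
                // Show the thumbnail when the marker is clicked.
                google.maps.event.addListener(marker, 'click', function () {
                    new google.maps.InfoWindow({
                        content: '<img src="' + photo.thumb + '" alt="' + photo.title + '">'
                    }).open(map, marker);
                });
            })(photos[i]);
        }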

    Read the article

  • Why the difference in Google search results when using a script for search and using a browser for search

    - by Jayapal Chandran
    I wrote a script to find the position in Google search results for a search keyword. I also did the same with the browser. The two results are different. Let me explain in detail here. I have a website and I wanted to know on which page number my domain appears for a search string. For example, when I search for 'code snippets' I want to find on which page of the Google search results a certain domain appears. I wrote a PHP script to search page by page, starting from page 1 to page n. I did the same task using a browser. The script returned page 4, and when browsing I can see the domain appearing on the second page. Here is the search string I use in my code:

        /search?hl=en&output=search&sclient=psy-ab&q=code+snippets&start=0&btnG=

    For each request I change start=0 to start=1, start=2, etc., and in the response I check whether my domain appears in it. Any idea why there is this difference in the search results?

    Read the article

  • One site being on a subdirectory of another. Does google count this against you?

    - by Mick
    I have created two similar websites (relating to monetary systems). So far, one appears to be loved by Google and the other hated. I'm struggling to work out why. This is a mystery to me because both sites were created by me with the same design philosophy, both in pure HTML. Both are packed to the rafters with references to, and information about, their respective subjects. One issue I'm worried may be the cause has to do with the location of the sites. I got a web hosting package from hostmonster.com for the successful one, but the less liked one is just an "add-on" which sits in a subdirectory of the successful one. I wonder if Google somehow detects this and treats it as a less significant website? EDIT: Just to clarify, even though one site is an add-on that sits in a subdirectory of the other, the URL is arranged to look like it is a root. I.e. the unpopular site can be accessed directly with a simple www.myunpopularsite.com name, without specifying any subdirectory. EDIT: Just in case it's important... say the popular site is called pop.com and the unpopular one unpop.com. In the webspace I've purchased, there is a directory called public_html. This is where I put the index.htm and all the other files of my popular site. When I purchased the add-on unpop.com, I made a subdirectory of public_html called unpop. It is within this "public_html\unpop\" that I place the index.htm and all the other files of my unpopular site. Typing www.unpop.com into the address bar of a browser links directly to the contents of "public_html\unpop\" and the user is not aware that this site is sitting in a subdirectory of another site. BUT if you type "www.pop.com/unpop" into the address bar of a browser you DO see the unpopular site.

    Read the article

  • Squid reverse proxy array - siblings not communicating with each other

    - by V. Romanov
    I want to set up 2 squid servers to act as a reverse proxy and cache for a webserver on our intranet. The load balancing will be done with DNS round robin or just different mappings for different clients. The thing is, I want both servers to try to contact each other to see if they have the requested object in cache before contacting the webserver for it (the network that serves the webserver is the bottleneck and I'm trying to eliminate it). Both squids are configured the same; here are the relevant config lines:

        acl dvr1_cache_it_best_tv_com dstdomain dvr1.cache.it.best-tv.com
        acl squid1_it_best_tv_com dstdomain squid1.it.best-tv.com
        acl squid2_it_best_tv_com dstdomain squid2.it.best-tv.com
        http_access allow dvr1_cache_it_best_tv_com
        http_access allow squid1_it_best_tv_com
        http_access allow squid2_it_best_tv_com
        http_access allow all
        http_port 8081 accel defaultsite=dvr1.cache.it.best-tv.com
        cache_peer dvr1.origin.it.best-tv.com parent 80 0 no-query originserver name=Proxy_dvr1_origin_it_best_tv_com
        cache_peer squid1.it.best-tv.com sibling 8081 3130 weight=10 name=Proxy_Squid1_it_best_tv_com
        cache_peer squid2.it.best-tv.com sibling 8081 3130 weight=10 name=Proxy_Squid2_it_best_tv_com
        cache_peer_access Proxy_dvr1_origin_it_best_tv_com allow dvr1_cache_it_best_tv_com
        cache_peer_access Proxy_squid1_it_best_tv_com allow squid1_it_best_tv_com
        cache_peer_access Proxy_squid1_it_best_tv_com allow squid2_it_best_tv_com
        cache_peer_access Proxy_squid1_it_best_tv_com allow dvr1_cache_it_best_tv_com
        cache_peer_access Proxy_squid2_it_best_tv_com allow squid1_it_best_tv_com
        cache_peer_access Proxy_squid2_it_best_tv_com allow squid2_it_best_tv_com
        cache_peer_access Proxy_squid2_it_best_tv_com allow dvr1_cache_it_best_tv_com

    Just to make it clear - dvr1.cache is the alias for the proxy servers, and dvr1.origin is the web server. Both servers work, both serve content and cache it, and both work fine. However, when I clear the cache on one server and then access it, it gets the content from the parent (DVR1_ORIGIN) instead of going to the sibling squid. What did I configure wrong? Or perhaps I don't understand the architecture correctly? I read the squid manuals, but as far as I can see I did it all by the book and yet it doesn't work right. Any help will be appreciated!

    Read the article

  • Creating sitemap for google bot - how to mark dynamic content / dynamic subpages?

    - by ojek
    I have a website that is an internet forum. This forum has many categories, and a single category page contains a lot of subpages with listed threads. This internet forum is brand new, and about a week ago I filled it with a few hundred thousand threads. I then looked at the Google Webmasters page to see any changes in indexing, but the index only went up from 300 to about 1200, so that means it did not index my added threads (although it added something). This is what my sitemap.xml, which I uploaded on their website, contains (of course there is a lot more of the code; this is just a snippet for a single category, and in my real sitemap file I have all the categories listed as below):

        <url>
            <loc>http://mysite.com/Forums/Physics</loc>
            <changefreq>hourly</changefreq>
        </url>

    Now, I would expect the Google bot to go into http://mysite.com/Forums/Physics, move through all the subpages with thread links, and then get inside each thread and index its content. How can I do this? Also, if this is unclear, I will add a real link to my website.

    Read the article

  • Trying to update a google visualization using jquery

    - by Mark in A2
    I'm relatively inexperienced, so please bear with me. I'm developing a simple dashboard using the Google Visualization API. I'm developing in VB.NET. I have the Annotated Timeline, the Intensity Map, and a set of tables on my ASPX page. What I am trying to do is update the Intensity Map and the tables based on the date range the user selects using the Annotated Timeline tool. I was hoping to update only these visualizations without doing a full page load. Apparently, a great way to do this is to separate the visualizations into self-contained ASPX pages and use jQuery to "load" them into a div. I say apparently, as this is not working. When I try to update an ASPX page containing a Google visualization using jQuery, I get the message "Loading data from www.google.com..." in the browser and it just runs continuously and never returns. I ran this by an experienced developer and he was stumped, but thought it must be a conflict between the Google API and jQuery. Any tips, advice, or alternative solutions are greatly appreciated! Thanks, Mark in Ann Arbor
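
    A note on the endless "Loading data from www.google.com..." message: it often appears when google.load() runs inside a fragment fetched after the main page has finished loading, because the loader then tries to document.write into an already-closed document. A hedged sketch of the usual workaround (package and function names are assumptions; the jsapi loader script is assumed to be on the main page): load the Visualization API once on the main page with a callback, and have the AJAX-loaded fragment only create and draw its chart.

        // Main page: load the Visualization API once, with a callback instead of document.write.
        var apiReady = false;
        google.load('visualization', '1', {
            packages: ['intensitymap'],            // adjust to the chart types actually used
            callback: function () { apiReady = true; }
        });

        // Called after jQuery's .load() has injected the fragment's markup.
        function drawIntensityMap(containerId, rows) {
            if (!apiReady) return;                 // or queue the call until the callback above fires
            var data = new google.visualization.DataTable();
            data.addColumn('string', 'Country');
            data.addColumn('number', 'Visits');
            data.addRows(rows);                    // e.g. [['US', 300], ['FR', 120]]
            new google.visualization.IntensityMap(document.getElementById(containerId)).draw(data, {});
        }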

    Read the article

  • Ideas to tackle unwanted bad press/review on Google's SERP?

    - by Rob
    After Googling our company name, to our horror we found that someone on Yelp.co.uk has reviewed our company. On the SERP your eye is immediately drawn to the 2-star review some complete stranger has written, which to be honest is pure slander! The most infuriating thing is that the person who reviewed our company has never even been a client/customer. It's a bit like me reviewing a restaurant having never eaten or even been in there! We've sent her a private message on Yelp asking her to remove the review and also sent a complaint to Yelp themselves, but have yet to get a reply. We've resisted going mad at the reviewer and have also requested that she re-review us, having just relaunched our new website (it still riles us that she's not even a client though!). We've had genuine customers/clients review us on Yelp, yet this 2-star review remains on Google's SERP. Roughly how long would it take for our new reviews to overtake this review? Does anyone have any suggestions as to how we can push the review off the first page of Google's SERP, or any creative ways in which we can tackle this issue?

    Read the article

  • How can I best implement 'cache until further notice' with memcache in multiple tiers?

    - by ajreal
    the term "client" used here is not referring to client's browser, but client server Before cache workflow 1. client make a HTTP request --> 2. server process --> 3. store parsed results into memcache for next use (cache indefinitely) --> 4. return results to client --> 5. client get the result, store into client's local memcache with TTL After cache workflow 1. another client make a HTTP request --> 2. memcache found return memcache results to client --> 3. client get the result, store into client's local memcache with TTL TTL = time to live Is possible for me to know when the data was updated, and to expire relevant memcache(s) accordingly. However, the pitfalls on client site cache TTL Any data update before the TTL is not pick-up by client memcache. In reverse manner, where there is no update, client memcache still expire after the TTL First request (or concurrent requests) after cache TTL will get throttle as it need to repeat the "Before cache workflow" In the event where client require several HTTP requests on a single web page, it could be very bad in performance. Ideal solution should be client to cache indefinitely until further notice. Here are the three proposals about futher notice Proposal 1 : Make use on HTTP header (current implementation) 1. client sent HTTP request last modified time header 2. server check if last data modified time=last cache time return status 304 3. client based on header to decide further processing GOOD? ---- - save some parsing for client - lesser data transfer BAD? ---- - fire a HTTP request is still slow - server end still need to process lots of requests Proposal 2 : Consistently issue a HTTP request to check all data group last modified time 1. client fire a HTTP request 2. server to return last modified time for all data group 3. client compare local last cache time with the result 4. if data group last cache time < server last modified time then request again for that data group only GOOD? ---- - only fetch what is no up-to-date - less requests for server BAD? ---- - every web page require a HTTP request Proposal 3 : Tell client when new data is available (Push) 1. when server end notice there is a change on a data group 2. notify clients on the changes 3. help clients to fetch again data 4. then reset client local memcache after data is parsed GOOD? ---- - let the cache act/behave like a true cache BAD? ---- - encourage race condition My preference is on proposal 3, and something like Gearman could be ideal Where there is a change, Gearman server to sent the task to multiple clients (workers). Am I crazy? (I know my first question is a bit crazy)

    Read the article

  • How to make Google recognize language for a multilingual website?

    - by Julien Fouilhé
    A few weeks ago, I implemented translation functionality for the website of my company. The website is now available in French and English, and I looked on the internet for the best way to do this if we do not want to lose any ranking and want to keep our pages on Google. Here is what I did:
    - I set a response header: Content-Language:en and Content-Language:fr
    - My URLs are formatted as: http://www.website.com/en/... and http://www.website.com/fr/...
    - My html tag is set with a lang attribute: <html lang="en"> and <html lang="fr">
    - There is a <link rel="alternate" hreflang="en" href="EnglishPageUrl"> on French pages and a <link rel="alternate" hreflang="en" href="frenchPageUrl"> on English pages.

    But Google keeps referring to some English pages when I'm doing a search on the French engine, knowing that the website was first only available in English. Is that normal? Do I still have to wait? It has been almost one month now; I thought it would be okay by now...? Thank you.

    Read the article

  • Is there an elegant way to track multiple domains under separate accounts with google analytics?

    - by J_M_J
    I have a situation where a content management system uses the same template for multiple websites with different domain names, and I can't make a separate template for each. However, each website needs to be tracked with Google Analytics. Would it be appropriate to track each domain like this, by putting in some conditional code? And would this be robust enough not to break? Is there a more elegant way to do this?

        <script type="text/javascript">
        var _gaq = _gaq || [];
        switch (location.hostname){
            case 'www.aaa.com': _gaq.push(['_setAccount', 'UA-xxxxxxx-1']); break;
            case 'www.bbb.com': _gaq.push(['_setAccount', 'UA-xxxxxxx-2']); break;
            case 'www.ccc.com': _gaq.push(['_setAccount', 'UA-xxxxxxx-3']); break;
        }
        _gaq.push(['_trackPageview']);
        (function() {
            var ga = document.createElement('script'); ga.type = 'text/javascript'; ga.async = true;
            ga.src = ('https:' == document.location.protocol ? 'https://ssl' : 'http://www') + '.google-analytics.com/ga.js';
            (document.getElementsByTagName('head')[0] || document.getElementsByTagName('body')[0]).appendChild(ga);
        })();
        </script>

    Just to be clear, each website is a separate domain name and must be tracked separately, NOT different domains with the same pages on one analytics profile.
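
    A slightly more compact variant along the same lines (a sketch; the hostname-to-ID mapping is illustrative) is to use a lookup object keyed by hostname, which also makes an unmapped host easy to detect:

        var _gaq = _gaq || [];
        // Map each hostname to its Analytics property ID.
        var gaAccounts = {
            'www.aaa.com': 'UA-xxxxxxx-1',
            'www.bbb.com': 'UA-xxxxxxx-2',
            'www.ccc.com': 'UA-xxxxxxx-3'
        };
        var account = gaAccounts[location.hostname];
        if (account) {                          // skip tracking on unknown hosts rather than breaking
            _gaq.push(['_setAccount', account]);
            _gaq.push(['_trackPageview']);
        }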

    Read the article

  • get_by_id method on Model classes in Google App Engine Datastore

    - by tarn
    I'm unable to work out how you can get objects from the Google App Engine Datastore using get_by_id. Here is the model:

        from google.appengine.ext import db

        class Address(db.Model):
            description = db.StringProperty(multiline=True)
            latitude = db.FloatProperty()
            longitdue = db.FloatProperty()
            date = db.DateTimeProperty(auto_now_add=True)

    I can create them, put them, and retrieve them with GQL.

        address = Address()
        address.description = self.request.get('name')
        address.latitude = float(self.request.get('latitude'))
        address.longitude = float(self.request.get('longitude'))
        address.put()

    A saved address has values for

        >> address.key()
        aglndWVzdGJvb2tyDQsSB0FkZHJlc3MYDQw
        >> address.key().id()
        14

    I can find them using the key

        from google.appengine.ext import db
        address = db.get('aglndWVzdGJvb2tyDQsSB0FkZHJlc3MYDQw')

    But can't find them by id

        >> from google.appengine.ext import db
        >> address = db.Model.get_by_id(14)

    The address is None. When I try

        >> Address.get_by_id(14)
        AttributeError: type object 'Address' has no attribute 'get_by_id'

    How can I find by id? EDIT: It turns out I'm an idiot and was trying to find an Address model inside a function called Address. Thanks for your answers; I've marked Brandon's as the correct answer as he got in first and demonstrated it should all work.

    Read the article

  • One site being on a subdirectory of another. Does google count this against you?

    - by Mick
    I have created two similar websites (relating to monetary systems). So far, one appears to be loved by Google and the other hated. I'm struggling to work out why. This is a mystery to me because both sites were created by me with the same design philosophy, both in pure HTML. Both are packed to the rafters with references to, and information about, their respective subjects. One issue I'm worried may be the cause has to do with the location of the sites. I got a web hosting package from hostmonster.com for the successful one, but the less liked one is just an "add-on" which sits in a subdirectory of the successful one. I wonder if Google somehow detects this and treats it as a less significant website? EDIT: Just to clarify, even though one site is an add-on that sits in a subdirectory of the other, the URL is arranged to look like it is a root. I.e. the unpopular site can be accessed directly with a simple www.myunpopularsite.com name, without specifying any subdirectory.

    Read the article

  • Does Google appengine cache external requests?

    - by Andy Hume
    I have a very simple application running on App Engine that requests a web page every five minutes and parses it for a specific piece of data. Everything works fine except that the response I get back from the external request (using urllib2) doesn't reflect the latest changes to the page. Sometimes it takes a few minutes to get the latest version, sometimes over an hour. Is there a transparent layer of caching that App Engine puts in place? Or is there something else I am missing here? I've looked at the caching headers of the requested page and there are no Expires or Last-Modified headers sent. Update: Sometimes it will get the new version of the page for a number of requests and then randomly later get an old, out-of-date version.

    Read the article

  • How to enable logging for Google Chrome in Ubuntu 12.04?

    - by skytreader
    I'm trying to capture the logs for a certain bug I'm having with Google Chrome. However, I can't find/enable logs for GC. According to this Chromium project page, I just need to add the flags --enable-logging --v=1 and a chrome_debug.log file will appear in my user data directory. However, after running GC (and closing it through the 'X' title bar button) there is no chrome_debug.log file in the specified directory. I even tried running as root, as it may have something to do with write permissions, but GC refuses to start as root. Another thing: GC also prints messages when invoked from the command line. I tried capturing these and redirecting them to a file via $ google-chrome > today.log but the messages are still printed on the command line and the file I specify gets created but remains empty. Note that I can't just copy-paste the messages printed on the terminal after my bug occurs, as the bug freezes up my whole system so that, when it occurs, my only option is to turn off my computer straight via the power button. I've seen a few similar bugs already posted, but I find that they don't exactly describe my situation, so I'd really like to get some logs for this. So how do I enable logging or, at least, get those terminal messages into a file?

    Read the article
