Search Results

Search found 59859 results on 2395 pages for 'google page speed'.

Page 152/2395

  • Combining Content Data in Google Analytics

    - by David Csonka
    When I first started one of my WordPress blogs, I had the permanent URL for each post include the date of posting. The permalink format looked like this: /blog/2010/01/25/this-is-my-article/ Later on, I changed it so that the date was not included in the permanent URL, like this: /blog/this-is-my-article/ and set up a redirect plugin to make sure that users would still get to the page they wanted until the site was re-indexed. In Google Analytics, when I review the content stats I now have multiple records for what is essentially the same page, e.g. Top Content List: 45 Pageviews - /blog/this-is-my-article/ 24 Pageviews - /blog/2010/01/25/this-is-my-article/ 33 Pageviews - /blog/some-other-article/ Is there any way to combine those records somehow?
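
    For reference, a minimal sketch of the kind of 301 redirect such a plugin performs, assuming the dated permalinks always follow the /blog/YYYY/MM/DD/slug/ pattern (the pattern and paths below are illustrative, not taken from the question):

        <?php
        // Hypothetical front-controller snippet: permanently redirect old dated
        // permalinks to the new date-less form so only one URL keeps collecting hits.
        $uri = $_SERVER['REQUEST_URI'];
        if (preg_match('#^/blog/\d{4}/\d{2}/\d{2}/([^/]+)/?$#', $uri, $m)) {
            header('Location: /blog/' . $m[1] . '/', true, 301);
            exit;
        }

    Note that the redirect only affects traffic from the moment it is in place; the rows already recorded in Analytics stay split across the two URLs.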

    Read the article

  • Google AdWords not working / conversion.js not found

    - by Jed
    I'm currently working on a WordPress site. My task is just to add the conversion script to a thank-you page. I added the script here: http://www.livingedge.co.nz/thanks-for-getting-in-touch/ , but unfortunately it does not work. It says that conversion.js was not found. See the attached screenshot: http://screencast.com/t/52ixQUzHKNxZ I added the conversion script in the footer and wrapped it in a conditional so that it loads only on the thank-you page. I'm new to this and can't figure out the possible cause of the problem. I tried adding the script in the header, in the page editor, and on a form redirect. Q: What could be the possible cause of this issue?
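
    A minimal sketch of the kind of conditional footer output described above, assuming the snippet lives in the theme's functions.php; the page slug is the one from the question, while the conversion ID and label are placeholders:

        <?php
        // Hypothetical functions.php snippet: print the AdWords conversion tag
        // only on the thank-you page, hooked into the theme's footer.
        function my_print_conversion_tag() {
            if ( is_page( 'thanks-for-getting-in-touch' ) ) {
                ?>
                <script type="text/javascript">
                  /* placeholder values; the real ID and label come from the AdWords account */
                  var google_conversion_id = 1234567890;
                  var google_conversion_label = "AbCdEfGhIj";
                </script>
                <script type="text/javascript"
                        src="https://www.googleadservices.com/pagead/conversion.js"></script>
                <?php
            }
        }
        add_action( 'wp_footer', 'my_print_conversion_tag' );

    This only renders if the theme template actually calls wp_footer(), which is worth verifying when the tag never shows up in the page source.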

    Read the article

  • Changing /page.php to /page/

    - by Dirge2000
    Hey there, I'm trying to change all http://www.mysite.com/filename.php files to show as http://www.mysite.com/filename/ using mod_rewrite, but I seem to be doing something wrong. Can anyone help out? I'm guessing it's something pretty simple for those that know. Thanks.
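
    A common mod_rewrite sketch for this, assuming an Apache .htaccess at the site root and that every clean URL maps one-to-one onto an existing .php file (treat it as a starting point, not a drop-in rule):

        RewriteEngine On

        # Internally serve /filename/ from /filename.php (the browser keeps the clean URL)
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule ^([^/.]+)/?$ $1.php [L]

        # Optionally send old /filename.php requests to the new /filename/ form
        RewriteCond %{THE_REQUEST} \s/([^/.]+)\.php[\s?] [NC]
        RewriteRule ^ /%1/ [R=301,L]

    The 301 block keeps old links and search results working while the internal rewrite does the actual serving.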

    Read the article

  • Google Analytics event not firing

    - by yar1
    I am not getting any event data in GA. I installed the Google Analytics Debugger Chrome extension and I see nothing happening (same goes when looking at the Network panel in developer tools). I Googled it and read many (many) other answers and it looks like I'm doing things right. Page views, etc. are registering correctly... I have this code as the last thing before my closing tag: <script> (function(i,s,o,g,r,a,m){i['GoogleAnalyticsObject']=r;i[r]=i[r]||function(){ (i[r].q=i[r].q||[]).push(arguments)},i[r].l=1*new Date();a=s.createElement(o), m=s.getElementsByTagName(o)[0];a.async=1;a.src=g;m.parentNode.insertBefore(a,m) })(window,document,'script','//www.google-analytics.com/analytics.js','ga'); ga('create', 'UA-MYREALCODE', 'mybna.net'); ga('send', 'pageview'); var _gaq = _gaq || []; _gaq.push(['_setAccount', 'UA-MYREALCODE']); _gaq.push(['_trackPageview']); </script> My event handlers are done using jQuery, all inside an external js file loaded before the closing tag. For example: $(function () { $('#show-less').click(function (e) { pbr.showHideMore(e); _gaq.push(['_trackEvent', 'ShowMore', 'Hide', 'top button']); }); }); Any ideas, anyone?

    Read the article

  • Google Chrome && (cache || memory leaks).

    - by Alexey Ogarkov
    Hello all, I have a big problem with Google Chrome and its memory. My app displays several image charts to the user and reloads them every 10s. In the interval I have code like this: var image = new Image(); var src = 'myurl/image'+new Date().getTime(); image.onload = function() { document.getElementById('myimage').src = src; image.onload = image.onabort = image.onerror = null; } image.src = src; I have no memory leaks in Firefox or IE. Here are the response headers for the images: Server Apache-Coyote/1.1 Vary * Cache-Control no-store (I have also tried no-cache, must-revalidate and so on here) Content-Type image/png Content-Length 11131 Date Mon, 31 May 2010 14:00:28 GMT Vary * (taken from here). On the about:cache page none of my images appear as cached. Enabling the purge-memory button for Chrome (the --purge-memory-button parameter) does not help. The images are PNG-24. So I think the problem is not the cache; maybe Google Chrome is not releasing memory for the old images. Please help. Any suggestions? Thanks.

    Read the article

  • Google App Engine: Difficulty with Users API (or maybe just a Python syntax problem)

    - by Rosarch
    I have a simple GAE app that includes a login/logout link. This app is running on the dev server at the moment. The base page handler gets the current user and creates a login/logout URL appropriately. It then puts this information into a _template_data dictionary, for the convenience of subclasses. class BasePage(webapp.RequestHandler): _user = users.get_current_user() _login_logout_link = None if _user: _login_logout_link = users.create_logout_url('/') else: _login_logout_link = users.create_login_url('/') _template_data = {} _template_data['login_logout_link'] = _login_logout_link _template_data['user'] = _user def render(self, templateName, templateData): path = os.path.join(os.path.dirname(__file__), 'Static/Templates/%s.html' % templateName) self.response.out.write(template.render(path, templateData)) Here is one such subclass: class MainPage(BasePage): def get(self): self.render('start', self._template_data) The login/logout link is displayed fine and points to the correct dev server login/logout page. However, it seems to have no effect - the server still seems to think the user is logged out. What am I doing wrong here?

    Read the article

  • SharePoint 2007 Parser Error after updating master page

    - by Kelly Jones
    A few weeks ago I was updating the master page for a SharePoint 2007 (WSS) site.  The client wanted the site updated to reflect the new look and feel that is being applied to another set of sites in the organization. I created a new theme and master page, which I already wrote about here and here.  It worked well, except for a few pages on a subsite.  On those pages, I got the following error: Server Error in '/' Application. Parser Error Description: An error occurred during the parsing of a resource required to service this request. Please review the following specific parse error details and modify your source file appropriately. Parser Error Message: Code blocks are not allowed in this file.   I decided to go comb through my new master page and compare it to the existing master page that was already working.  After going through them line by line several times, I had no clue what would be causing the error because they were basically the same! It turns out, it was a combination of two things.  First, on a few of the pages in the site, there was some include code (basically an <% EVAL()%> snippet).  This was the code that was triggering my error “Code blocks are not allowed in this file”. However, this code was working fine with the previous master page. I decided to then try doing a full deployment of the site with the new master page, and it worked fine!  Apparently, if the master page is deployed using a Feature, then it is granted permission to allow code blocks, but if you upload pages either using web UI or SharePoint Designer, then the pages won’t be able to use code blocks. I haven’t been able to pin down the rules or official info about this, but I thought others might find it useful anyway.

    Read the article

  • Building Publishing Pages in Code

    - by David Jacobus
    Originally posted on: http://geekswithblogs.net/djacobus/archive/2013/10/27/154478.aspx
    One of the mantras we developers try to follow: ensure that the solution package we deliver to the client is complete. We build web parts, master pages, images, CSS files and other artifacts that we push to the client with a WSP (solution package), and then we have them finish the solution by building their site pages, adding the web parts to those pages. I am a proponent that we, the developers, should minimize this time-consuming work and build these site pages in code. I found a few blogs and some MSDN documentation, but not really a complete solution that has all these artifacts working together. What I will discuss and provide a solution for is a package that has:
    1. Master Page
    2. Page Layout
    3. Page Web Parts
    4. Site Pages
    Almost all of it is done in code, without the development team or the developers having to finish up the site-building process by spending a few hours or days completing the site! I am not implying that we do this during development; in fact, we build these pages incrementally, testing our web parts, etc. I am saying that the final action in our solution is to take all these artifacts and add them to the site pages in code; the client then only needs to activate a few features and, voilà, their site appears! I had a project that had me build 8 pages like this as part of the solution. In this blog post, I am taking a master page solution that I have called DJGreenMaster. On my Office 365 development site it looks like this: [screenshot] It is a generic master page for a SharePoint 2010 site, along with a three-column layout, centered, with a footer that uses a SharePoint list and web part for the footer links. I use this master page a lot in my site development! It is easy to change the color and site logo with a little CSS. I am going to add a few web parts for discussion purposes and then add these web parts to a site page in code. Let's look at the solution package for DJGreenMaster, as that will be the basis project for building the site pages: [screenshot] What you are seeing is a complete solution to add a master page to a site collection, which contains:
    1. A Master Page module which contains the Master Page and Page Layout
    2. The Footer module to add the Footer Web Part
    3. Miscellaneous modules to add images, jQuery, CSS and a subsite page
    4. Three features and two feature event receivers:
       a. DJGreenCSS, used to add the master page CSS file to the Style Sheet Library, with an event receiver to check it in.
       b. DJGreenMaster, used to add the Master Page and Page Layout; in an event receiver, change the master page to DJGreenMaster, create the footer list and check the files in.
       c. DJGreenMasterWebParts, which adds the Footer Web Part to the site collection.
    I won't go over the code for this as I will give it to you at the end of this blog post. I have discussed creating a list in code in a previous post. So we have the basis to begin what is germane to this discussion: I have the first two requirements completed, and now need to add the page web parts and build the pages in code. For the page web parts, I will use one downloaded from CodePlex which does not use a SharePoint custom list, for simplicity (a Weather Web Part), and another downloaded from MSDN which is a SharePoint custom calendar web part; I had to add some functionality, using jQuery, to make the events color-coded beyond the built-in 10 overlays!
Here is the solution with the added projects:     Here is a screen shot of the Weather Web Part Deployed:   Here is a screen shot of the Site Calendar with JQuery:     Okay, Now we get to the final item:  To create Publishing pages.   We need to add a feature receiver to the DJGreenMaster project I will name it DJSitePages and also add a Event Receiver:       We will build the page at the site collection level and all of the code necessary will be contained in the event receiver.   Added a reference to the Microsoft.SharePoint.Publishing.dll contained in the ISAPI folder of the 14 Hive.   First we will add some static methods from which we will call  in our Event Receiver:   1: private static void checkOut(string pagename, PublishingPage p) 2: { 3: if (p.Name.Equals(pagename, StringComparison.InvariantCultureIgnoreCase)) 4: { 5: 6: if (p.ListItem.File.CheckOutType == SPFile.SPCheckOutType.None) 7: { 8: p.CheckOut(); 9: } 10:   11: if (p.ListItem.File.CheckOutType == SPFile.SPCheckOutType.Online) 12: { 13: p.CheckIn("initial"); 14: p.CheckOut(); 15: } 16: } 17: } 18: private static void checkin(PublishingPage p,PublishingWeb pw) 19: { 20: SPFile publishFile = p.ListItem.File; 21:   22: if (publishFile.CheckOutType != SPFile.SPCheckOutType.None) 23: { 24:   25: publishFile.CheckIn( 26:   27: "CheckedIn"); 28:   29: publishFile.Publish( 30:   31: "published"); 32: } 33: // In case of content approval, approve the file need to add 34: //pulishing site 35: if (pw.PagesList.EnableModeration) 36: { 37: publishFile.Approve("Initial"); 38: } 39: publishFile.Update(); 40: }   In a Publishing Site, CheckIn and CheckOut  are required when dealing with pages in a publishing site.  Okay lets look at the Feature Activated Event Receiver: 1: public override void FeatureActivated(SPFeatureReceiverProperties properties) 2: { 3:   4:   5:   6: object oParent = properties.Feature.Parent; 7:   8:   9:   10: if (properties.Feature.Parent is SPWeb) 11: { 12:   13: currentWeb = (SPWeb)oParent; 14:   15: currentSite = currentWeb.Site; 16:   17: } 18:   19: else 20: { 21:   22: currentSite = (SPSite)oParent; 23:   24: currentWeb = currentSite.RootWeb; 25:   26: } 27: 28:   29: //create the publishing pages 30: CreatePublishingPage(currentWeb, "Home.aspx", "ThreeColumnLayout.aspx","Home"); 31: //CreatePublishingPage(currentWeb, "Dummy.aspx", "ThreeColumnLayout.aspx","Dummy"); 32: }     Basically we are calling the method Create Publishing Page with parameters:  Current Web, Name of the Page, The Page Layout, Title of the page.  
Let’s look at the Create Publishing Page method:   1:   2: private void CreatePublishingPage(SPWeb site, string pageName, string pageLayoutName, string title) 3: { 4: PublishingSite pubSiteCollection = new PublishingSite(site.Site); 5: PublishingWeb pubSite = null; 6: if (pubSiteCollection != null) 7: { 8: // Assign an object to the pubSite variable 9: if (PublishingWeb.IsPublishingWeb(site)) 10: { 11: pubSite = PublishingWeb.GetPublishingWeb(site); 12: } 13: } 14: // Search for the page layout for creating the new page 15: PageLayout currentPageLayout = FindPageLayout(pubSiteCollection, pageLayoutName); 16: // Check or the Page Layout could be found in the collection 17: // if not (== null, return because the page has to be based on 18: // an excisting Page Layout 19: if (currentPageLayout == null) 20: { 21: return; 22: } 23:   24: 25: PublishingPageCollection pages = pubSite.GetPublishingPages(); 26: foreach (PublishingPage p in pages) 27: { 28: //The page allready exists 29: if ((p.Name == pageName)) return; 30:   31: } 32: 33:   34:   35: PublishingPage newPage = pages.Add(pageName, currentPageLayout); 36: newPage.Description = pageName.Replace(".aspx", ""); 37: // Here you can set some properties like: 38: newPage.IncludeInCurrentNavigation = true; 39: newPage.IncludeInGlobalNavigation = true; 40: newPage.Title = title; 41: 42: 43:   44:   45: 46:   47: //build the page 48:   49: 50: switch (pageName) 51: { 52: case "Homer.aspx": 53: checkOut("Courier.aspx", newPage); 54: BuildHomePage(site, newPage); 55: break; 56:   57:   58: default: 59: break; 60: } 61: // newPage.Update(); 62: //Now we can checkin the newly created page to the “pages” library 63: checkin(newPage, pubSite); 64: 65: 66: }     The narrative in what is going on here is: 1.  We need to find out if we are dealing with a Publishing Web.  2.  Get the Page Layout 3.  Create the Page in the pages list. 4.  Based on the page name we build that page.  (Here is where we can add all the methods to build multiple pages.) In the switch we call Build Home Page where all the work is done to add the web parts.  Prior to adding the web parts we need to add references to the two web part projects in the solution. using WeatherWebPart.WeatherWebPart; using CSSharePointCustomCalendar.CustomCalendarWebPart;   We can then reference them in the Build Home Page method.   
Let’s look at Build Home Page: 1:   2: private static void BuildHomePage(SPWeb web, PublishingPage pubPage) 3: { 4: // build the pages 5: // Get the web part manager for each page and do the same code as below (copy and paste, change to the web parts for the page) 6: // Part Description 7: SPLimitedWebPartManager mgr = web.GetLimitedWebPartManager(web.Url + "/Pages/Home.aspx", System.Web.UI.WebControls.WebParts.PersonalizationScope.Shared); 8: WeatherWebPart.WeatherWebPart.WeatherWebPart wwp = new WeatherWebPart.WeatherWebPart.WeatherWebPart() { ChromeType = PartChromeType.None, Title = "Todays Weather", AreaCode = "2504627" }; 9: //Dictionary<string, string> wwpDic= new Dictionary<string, string>(); 10: //wwpDic.Add("AreaCode", "2504627"); 11: //setWebPartProperties(wwp, "WeatherWebPart", wwpDic); 12:   13: // Add the web part to a pagelayout Web Part Zone 14: mgr.AddWebPart(wwp, "g_685594D193AA4BBFABEF2FB0C8A6C1DD", 1); 15:   16: CSSharePointCustomCalendar.CustomCalendarWebPart.CustomCalendarWebPart cwp = new CustomCalendarWebPart() { ChromeType = PartChromeType.None, Title = "Corporate Calendar", listName="CorporateCalendar" }; 17:   18: mgr.AddWebPart(cwp, "g_20CBAA1DF45949CDA5D351350462E4C6", 1); 19:   20:   21: pubPage.Update(); 22:   23: } Here is what we are doing: 1.  We got  a reference to the SharePoint Limited Web Part Manager and linked/referenced Home.aspx  2.  Instantiated the a new Weather Web Part and used the Manager to add it to the page in a web part zone identified by ID,  thus the need for a Page Layout where the developer knows the ID’s. 3.  Instantiated the Calendar Web Part and used the Manager to add it to the page. 4. We the called the Publishing Page update method. 5.  Lastly, the Create Publishing Page method checks in the page just created.   Here is a screen shot of the page right after a deploy!       Okay!  I know we could make a home page look much better!  However, I built this whole Integrated solution in less than a day with the caveat that the Green Master was already built!  So what am I saying?  Build you web parts, master pages, etc.  At the very end of the engagement build the pages.  The client will be very happy!  Here is the code for this solution Code

    Read the article

  • How do I speed up XML parsing operation?

    - by absentx
    I currently have a PHP script set up to do some XML parsing. Sometimes the script is used as an on-page include and other times it is accessed via an AJAX call. The problem is that the load time for this particular page is very long. I started to think that the PHP I had written to find what I need in the XML was written poorly and that my script was very resource-intensive. After much research and testing, the problem is indeed not my scripting (well, perhaps you could consider it a problem with my scripting), but it looks like it simply takes a long time to load the particular XML sources. My code looks like this: $source_recent = 'my xml feed'; $source_additional = 'the other feed I need'; $xmlstr_recent = file_get_contents($source_recent); $feed_recent = new SimpleXMLElement($xmlstr_recent); $xmlstr_additional = file_get_contents($source_additional); $feed_additional = new SimpleXMLElement($xmlstr_additional); In all my testing, the above code is what takes the time, not the additional processing I do below. Is there any way around this, or am I at the mercy of the load time of the XML URLs? One crazy thought I had to get around it is to load the XML contents into a DB every so often, then just query the DB for what I need. Thoughts? Ideas?
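
    One common workaround is to cache the remote feeds locally and only re-fetch them after a TTL expires; below is a minimal sketch under that assumption, with the feed URLs, cache paths and TTL as placeholder values rather than the ones from the question:

        <?php
        // Hypothetical helper: return a SimpleXMLElement for a remote feed, refreshing
        // a local copy at most once every $ttl seconds so page loads don't wait on
        // the slow remote fetch.
        function cached_feed($url, $cache_file, $ttl = 600) {
            if (!file_exists($cache_file) || (time() - filemtime($cache_file)) > $ttl) {
                $fresh = file_get_contents($url);
                if ($fresh !== false) {
                    file_put_contents($cache_file, $fresh);
                }
            }
            return new SimpleXMLElement(file_get_contents($cache_file));
        }

        $feed_recent     = cached_feed('http://example.com/recent.xml', '/tmp/recent.xml');
        $feed_additional = cached_feed('http://example.com/additional.xml', '/tmp/additional.xml');

    A cron job that refreshes the cache files (or the database idea mentioned above) moves the slow fetch entirely out of the request path.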

    Read the article

  • a php script to get lat/long from google maps using CellID

    - by user296516
    Hi guys, I have found this script that supposedly connects to google maps and gets lat/long coordinates, based on CID/LAC/MNC/MCC info. But I can't really make it work... where do I input CID/LAC/MNC/MCC ? <?php $data = "\x00\x0e". // Function Code? "\x00\x00\x00\x00\x00\x00\x00\x00". //Session ID? "\x00\x00". // Contry Code "\x00\x00". // Client descriptor "\x00\x00". // Version "\x1b". // Op Code? "\x00\x00\x00\x00". // MNC "\x00\x00\x00\x00". // MCC "\x00\x00\x00\x03". "\x00\x00". "\x00\x00\x00\x00". //CID "\x00\x00\x00\x00". //LAC "\x00\x00\x00\x00". //MNC "\x00\x00\x00\x00". //MCC "\xff\xff\xff\xff". // ?? "\x00\x00\x00\x00" // Rx Level? ; if ($_REQUEST["myl"] != "") { $temp = split(":", $_REQUEST["myl"]); $mcc = substr("00000000".dechex($temp[0]),-8); $mnc = substr("00000000".dechex($temp[1]),-8); $lac = substr("00000000".dechex($temp[2]),-8); $cid = substr("00000000".dechex($temp[3]),-8); } else { $mcc = substr("00000000".$_REQUEST["mcc"],-8); $mnc = substr("00000000".$_REQUEST["mnc"],-8); $lac = substr("00000000".$_REQUEST["lac"],-8); $cid = substr("00000000".$_REQUEST["cid"],-8); } $init_pos = strlen($data); $data[$init_pos - 38]= pack("H*",substr($mnc,0,2)); $data[$init_pos - 37]= pack("H*",substr($mnc,2,2)); $data[$init_pos - 36]= pack("H*",substr($mnc,4,2)); $data[$init_pos - 35]= pack("H*",substr($mnc,6,2)); $data[$init_pos - 34]= pack("H*",substr($mcc,0,2)); $data[$init_pos - 33]= pack("H*",substr($mcc,2,2)); $data[$init_pos - 32]= pack("H*",substr($mcc,4,2)); $data[$init_pos - 31]= pack("H*",substr($mcc,6,2)); $data[$init_pos - 24]= pack("H*",substr($cid,0,2)); $data[$init_pos - 23]= pack("H*",substr($cid,2,2)); $data[$init_pos - 22]= pack("H*",substr($cid,4,2)); $data[$init_pos - 21]= pack("H*",substr($cid,6,2)); $data[$init_pos - 20]= pack("H*",substr($lac,0,2)); $data[$init_pos - 19]= pack("H*",substr($lac,2,2)); $data[$init_pos - 18]= pack("H*",substr($lac,4,2)); $data[$init_pos - 17]= pack("H*",substr($lac,6,2)); $data[$init_pos - 16]= pack("H*",substr($mnc,0,2)); $data[$init_pos - 15]= pack("H*",substr($mnc,2,2)); $data[$init_pos - 14]= pack("H*",substr($mnc,4,2)); $data[$init_pos - 13]= pack("H*",substr($mnc,6,2)); $data[$init_pos - 12]= pack("H*",substr($mcc,0,2)); $data[$init_pos - 11]= pack("H*",substr($mcc,2,2)); $data[$init_pos - 10]= pack("H*",substr($mcc,4,2)); $data[$init_pos - 9]= pack("H*",substr($mcc,6,2)); if ((hexdec($cid) > 0xffff) && ($mcc != "00000000") && ($mnc != "00000000")) { $data[$init_pos - 27] = chr(5); } else { $data[$init_pos - 24]= chr(0); $data[$init_pos - 23]= chr(0); } $context = array ( 'http' => array ( 'method' => 'POST', 'header'=> "Content-type: application/binary\r\n" . "Content-Length: " . strlen($data) . "\r\n", 'content' => $data ) ); $xcontext = stream_context_create($context); $str=file_get_contents("http://www.google.com/glm/mmap",FALSE,$xcontext); if (strlen($str) > 10) { $lat_tmp = unpack("l",$str[10].$str[9].$str[8].$str[7]); $lat = $lat_tmp[1]/1000000; $lon_tmp = unpack("l",$str[14].$str[13].$str[12].$str[11]); $lon = $lon_tmp[1]/1000000; echo "Lat=$lat <br> Lon=$lon"; } else { echo "Not found!"; } ?> also I found this one http://code.google.com/intl/ru/apis/gears/geolocation_network_protocol.html , but it isn't php.
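
    Judging from the script itself, the cell parameters are supplied as request parameters, either individually or packed into myl. A hypothetical call, assuming the script above is saved as cellid.php and using placeholder values:

        <?php
        // Hypothetical caller: the script reads mcc/mnc/lac/cid from $_REQUEST,
        // so the values can simply be passed in the query string.
        $params = http_build_query(array(
            'mcc' => 234,      // mobile country code (placeholder)
            'mnc' => 15,       // mobile network code (placeholder)
            'lac' => 1234,     // location area code (placeholder)
            'cid' => 56789,    // cell id (placeholder)
        ));
        echo file_get_contents('http://example.com/cellid.php?' . $params);
        // The same script also accepts the values packed as ?myl=mcc:mnc:lac:cid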

    Read the article

  • Passing GLatLng from array to Google

    - by E. Rose
    Hello All, To preface this, I am a complete programming amateur so this may be quite easily solved. As is though, it is frustrating me to no end. Basically, I have a database of Venue Names with GLat and GLng (other stuff too) that I am pulling down to my website based on a geolocated search. The javascript that I have pulls in a formatted subset of the database, dumps the glat and glng into an array and is supposed to take those points and plot out several markers each with an info window containing the details behind each marker. For some reason, the marker geodata is not being populated and/or is not being passed. The array is declared using [] and will not work when normally declared using (). It only brings up a map with the first value in the array and goes blank if i try to manually input later entries. There is a large block of commented out code relating to directions generation. That code worked for some reason. If anyone can tell me what I am doing wrong in rewriting it to map the markers and not give directions, please tell me. Any help would be much appreciated. var letters = ['A', 'B', 'C', 'D', 'E', 'F', 'G', 'H', 'I', 'J', 'K', 'L', 'M', 'N', 'O', 'P', 'Q', 'R', 'S', 'T', 'U', 'V', 'W', 'X', 'Y', 'Z']; google.load("maps", "2", {"other_params":"sensor=true"}); var map function initialize(){ window.map = new google.maps.Map2(document.getElementById("map")); map.addControl(new GLargeMapControl()); } google.setOnLoadCallback(initialize); function getVenueResults(_zip, _num, _remove){ var rFlag = ''; if(_remove != null){ rFlag = "&removeId=" + _remove; } var url = 'ajax/getVenues.php?zip=' + _zip + "&num=" + _num + rFlag; new Ajax.Updater('resultsContainer', url, { onComplete: processResults }); } function processResults(){ window.map.clearOverlays(); var waypoints = []; var resultsList = $$('.resultWrapper'); var stepList = $$('.resultDirections'); for(i=0; i for(i=0; i<resultsList.length; i++){ var resultsElem = resultsList[i]; resultsElem.removeClassName('firstResult'); resultsElem.removeClassName('lastResult'); if(i == 0){ resultsElem.addClassName('firstResult'); } if(i == resultsList.length - 1){ resultsElem.addClassName('lastResult'); } var insertContent = '<div class="resultDirections" id="resultDirections' + i + '"></div>'; Element.insert(resultsElem, { bottom : insertContent }) var num = resultsElem.getElementsByClassName('numHolder'); num[0].innerHTML = '<b>' + letters[i] + ".</b> "; var geolat = resultsElem.getElementsByClassName('geolat')[0].value; var geolong = resultsElem.getElementsByClassName('geolong')[0].value; var point = new GLatLng(geolat, geolong); waypoints[i] = point; } if(waypoints.length == 1){ map.setCenter(waypoints[0], 16); map.addOverlay(new GMarker(waypoints[0])); resultsElem.getElementsByClassName('numHolder')[0].innerHTML = ''; }else{ map.setCenter(waypoints[0], 16); for(j=0; j< waypoints.length; j++) { var vLoc = waypoints[j]; var vInfo = resultsElem.getElementByClassName('resultBox[j]]').innerHTML; //unfinished function to mine name out of div and make it the marker title //x = resultsElem.getElementsByTagName("b"); // for (i=0;i<x.length;i++) // //marker.value = resultsElem.getElementBy('numHolder').innerHTML var marker = createMarker(vLoc, vInfo); map.addOverlay(marker); } } function createMarker(vLoc, vInfo) { var marker = (new GMarker(vLoc)); var cont = vInfo; GEvent.addListener(marker, 'click', function() { marker.openInfoWindowHtml(cont); }); return marker; } //var directions = new GDirections(window.map, 
document.getElementById('directionsPanel')); //directions.loadFromWaypoints(waypoints, { travelMode: G_TRAVEL_MODE_WALKING }); //GEvent.addListener(directions, "load" , function() { // var numRoutes = directions.getNumRoutes(); // for(j=0; j< numRoutes; j++){ // var thisRoute = directions.getRoute(j); // var routeText = ''; // for(k=0; k < thisRoute.getNumSteps(); k++){ // var thisStep = thisRoute.getStep(k); // if(k != 0){ routeText += " &nbsp;||&nbsp; "; } // routeText += thisStep.getDescriptionHtml(); // } // $('resultDirections' + j).innerHTML = routeText; // } // $('resultDirections' + numRoutes).hide(); //}); } function moveUp(_obj){ var parentObj = $(_obj).up().up(); var prevSib = parentObj.previous(); prevSib.insert({before: parentObj}); processResults(); } function moveDown(_obj){ var parentObj = $(_obj).up().up(); var nextSib = parentObj.next(); nextSib.insert({after: parentObj}); processResults(); }

    Read the article

  • Using hreflang to specify a catchall language

    - by adam
    We have a site primarily targeted at the UK market, and are adding a US-market alternative. As per Google's recommendations: To indicate to Google that you want the German version of the page to be served to searchers using Google in German, the en-us version to searchers using google.com in English, and the en-gb version to searchers using google.co.uk in English, use rel="alternate" hreflang="x" to identify alternate language versions. Which gives us: <link rel="alternate" hreflang="en-gb" href="http://www.example.com/page.html" /> <link rel="alternate" hreflang="en-us" href="http://www.example.com/us/page.html" /> We do get enquiries from other areas of the world - particularly where there are expat communities (Dubai, UAE, Portugal etc). By adding the above tags, is there a risk that Google will only surface our site for UK and US search users? Do we need to specify a catch-all that will default all other searches to our UK site?

    Read the article

  • Google no longer censors its results in China; how will the Beijing government react?

    Update of 22.03.2010 by Katleen: Google no longer censors its results in China; how will the Beijing government react? It is done, Google has taken the step. As we reported on Friday, Google officially took its stand this Monday: the company has stopped censoring its search results in China. From now on, Chinese internet users who connect to Google.cn are automatically redirected to Google.com.hk, the Hong Kong site, as legal director David Drummond explained this morning: "Today we have stopped censoring our search services (Google Search, Google News and Google Images) on Google.cn. Users visiting Google.cn...

    Read the article

  • Google+ Platform Office Hours for May 16th, 2012: Hangouts API v1.1

    Google+ Platform Office Hours for May 16th, 2012: Hangouts API v1.1. This week we discussed the latest release of the Hangouts API, v1.1. JD Salazar and Richard Dunn from the Hangouts API engineering team joined us to help answer your questions. Discussion for this session on Google+: goo.gl You can learn more about our office hours here: goo.gl
    0:29 - Introductions
    2:50 - Richard gives us an overview of what's new in Hangouts API v1.1
    8:57 - What are the default scales for the static overlays?
    9:25 - Will the static overlay scale ratio change during the hangout?
    10:13 - What is the resolution of the feed? How do I ensure my overlays match the quality?
    12:49 - How do I know if an image resource has failed to load?
    16:33 - Can we have animated gifs as overlays?
    19:44 - Loaded overlays do not clear upon deletion. How many can I load before I encounter issues?
    21:48 - Are sound overlays played to all participants or only locally? What about sound cancellation?
    23:27 - How do you uninstall a Hangout app?
    25:41 - Can I make an app that uses drag and drop onto the film strip?
    26:55 - Can we embed participant thumbnails elsewhere on the screen?
    28:33 - How can I determine a consistent ordering for hangout participants?
    31:35 - Can I access Picasa photos uploaded by another user within a hangout? Gerwin demonstrates his solution.
    31:14 - How do I know when my hangout app has been unloaded for the purposes of doing cleanup?
    39:28 - Will face tracking ever support multiple faces?
    40:41 - Can I use WebGL in a hangouts app?
    42:09 - I'm having issues with...

    Read the article

  • Web pages with mixed ownership photos

    - by dstonek
    I have a photo website. 15% of the photos belong to approved registered users, who agree to my terms about uploading their images to my web pages, and I include a photographer credit in the bottom-right corner. To identify the site with Google, every page contains a Google+ button pointing to MY Google+ page, and it also contains <link href="https://plus.google.com/nnnnnnnnnn/" rel="publisher" /> I need some advice on respecting Google's rules for pages that contain other photographers' images, so that they are not penalized as duplicated or seemingly stolen content. My concern is also whether adding G+ links (to MY photo page) and the Google publisher ID would harm my site's ranking because some pages contain third-party photos.

    Read the article

  • How to make a search engine show my location map? [duplicate]

    - by Lena Queen
    This question already has an answer here: What are the most important things I need to do to encourage Google Sitelinks? (5 answers) I have a webpage that is already listed on Google Maps. If I search with the term "my web", the search engine (Google) only shows my website. How do I make the search engine (Google) show my website together with its Google Maps location on the right, as in this scenario? For the contact and portfolio pages, how do I make it so a user can view some of my pages beside my home page? Also, how do I make Google show the other links of my website?

    Read the article

  • VTD-XML Parsing Performance (speed critical factor). Requesting Feedback/Comments

    - by andreas
    Hello, I am about to use VTD-XML (found at http://vtd-xml.sourceforge.net/), and I am interested in real-world usage feedback from anyone who has used the library and has comments. At that URL (http://vtd-xml.sourceforge.net/) there are benchmarks, but if someone has used VTD-XML and has comments FOR it, I would like to hear them. Speed is a critical factor in the application, and comments from developers after real-world usage are what I am looking for. Regards,

    Read the article

  • (PHP) Validation, Security and Speed - Does my app have these?

    - by Devner
    Hi all, I am currently working on a building community website in PHP. This contains forms that a user can fill right from registration to lot of other functionality. I am not an Object-oriented guy, so I am using functions most of the time to handle my application. I know I have to learn OOPS, but currently need to develop this website and get it running soon. Anyway, here's a sample of what I let my app. do: Consider a page (register.php) that has a form where a user has 3 fields to fill up, say: First Name, Last Name and Email. Upon submission of this form, I want to validate the form and show the corresponding errors to the users: <form id="form1" name="form1" method="post" action="<?php echo $_SERVER['PHP_SELF']; ?>"> <label for="name">Name:</label> <input type="text" name="name" id="name" /><br /> <label for="lname">Last Name:</label> <input type="text" name="lname" id="lname" /><br /> <label for="email">Email:</label> <input type="text" name="email" id="email" /><br /> <input type="submit" name="submit" id="submit" value="Submit" /> </form> This form will POST the info to the same page. So here's the code that will process the POST'ed info: <?php require("functions.php"); if( isset($_POST['submit']) ) { $errors = fn_register(); if( count($errors) ) { //Show error messages } else { //Send welcome mail to the user or do database stuff... } } ?> <?php //functions.php page: function sql_quote( $value ) { if( get_magic_quotes_gpc() ) { $value = stripslashes( $value ); } else { $value = addslashes( $value ); } if( function_exists( "mysql_real_escape_string" ) ) { $value = mysql_real_escape_string( $value ); } return $value; } function clean($str) { $str = strip_tags($str, '<br>,<br />'); $str = trim($str); $str = sql_quote($str); return $str; } foreach ($_POST as &$value) { if (!is_array($value)) { $value = clean($value); } else { clean($value); } } foreach ($_GET as &$value) { if (!is_array($value)) { $value = clean($value); } else { clean($value); } } function validate_name( $fld, $min, $max, $rule, $label ) { if( $rule == 'required' ) { if ( trim($fld) == '' ) { $str = "$label: Cannot be left blank."; return $str; } } if ( isset($fld) && trim($fld) != '' ) { if ( isset($fld) && $fld != '' && !preg_match("/^[a-zA-Z\ ]+$/", $fld)) { $str = "$label: Invalid characters used! Only Lowercase, Uppercase alphabets and Spaces are allowed"; } else if ( strlen($fld) < $min or strlen($fld) > $max ) { $curr_char = strlen($fld); $str = "$label: Must be atleast $min character &amp; less than $max char. Entered characters: $curr_char"; } else { $str = 0; } } else { $str = 0; } return $str; } function validate_email( $fld, $min, $max, $rule, $label ) { if( $rule == 'required' ) { if ( trim($fld) == '' ) { $str = "$label: Cannot be left blank."; return $str; } } if ( isset($fld) && trim($fld) != '' ) { if ( !eregi('^[a-zA-Z0-9._-]+@[a-zA-Z0-9._-]+\.([a-zA-Z]{2,4})$', $fld) ) { $str = "$label: Invalid format. Please check."; } else if ( strlen($fld) < $min or strlen($fld) > $max ) { $curr_char = strlen($fld); $str = "$label: Must be atleast $min character &amp; less than $max char. 
Entered characters: $curr_char"; } else { $str = 0; } } else { $str = 0; } return $str; } function val_rules( $str, $val_type, $rule='required' ){ switch ($val_type) { case 'name': $val = validate_name( $str, 3, 20, $rule, 'First Name'); break; case 'lname': $val = validate_name( $str, 10, 20, $rule, 'Last Name'); break; case 'email': $val = validate_email( $str, 10, 60, $rule, 'Email'); break; } return $val; } function fn_register() { $errors = array(); $val_name = val_rules( $_POST['name'], 'name' ); $val_lname = val_rules( $_POST['lname'], 'lname', 'optional' ); $val_email = val_rules( $_POST['email'], 'email' ); if ( $val_name != '0' ) { $errors['name'] = $val_name; } if ( $val_lname != '0' ) { $errors['lname'] = $val_lname; } if ( $val_email != '0' ) { $errors['email'] = $val_email; } return $errors; } //END of functions.php page ?> OK, now it might look like there's a lot, but lemme break it down target wise: 1. I wanted the foreach ($_POST as &$value) and foreach ($_GET as &$value) loops to loop through the received info from the user submission and strip/remove all malicious input. I am calling a function called clean on the input first to achieve the objective as stated above. This function will process each of the input, whether individual field values or even arrays and allow only tags and remove everything else. The rest of it is obvious. Once this happens, the new/cleaned values will be processed by the fn_register() function and based on the values returned after the validation, we get the corresponding errors or NULL values (as applicable). So here's my questions: 1. This pretty much makes me feel secure as I am forcing the user to correct malicious data and won't process the final data unless the errors are corrected. Am I correct? Does the method that I follow guarantee the speed (as I am using lots of functions and their corresponding calls)? The fields of a form differ and the minimum number of fields I may have at any given point of time in any form may be 3 and can go upto as high as 100 (or even more, I am not sure as the website is still being developed). Will having 100's of fields and their validation in the above way, reduce the speed of application (say upto half a million users are accessing the website at the same time?). What can I do to improve the speed and reduce function calls (if possible)? 3, Can I do something to improve the current ways of validation? I am holding off object oriented approach and using FILTERS in PHP for the later. So please, I request you all to suggest me way to improve/tweak the current ways and suggest me if the script is vulnerable or safe enough to be used in a Live production environment. If not, what I can do to be able to use it live? Thank you all in advance.
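
    The question mentions moving to PHP's filter extension later; purely as a hedged illustration of that direction (not the poster's code), a validation helper built on filter_var might look roughly like this, with the field names, length limits and messages as placeholders:

        <?php
        // Hypothetical sketch using PHP's filter extension instead of hand-rolled checks.
        function validate_registration(array $input) {
            $errors = array();

            $name  = isset($input['name'])  ? trim($input['name'])  : '';
            $email = isset($input['email']) ? trim($input['email']) : '';

            if ($name === '') {
                $errors['name'] = 'First Name: Cannot be left blank.';
            } elseif (!preg_match('/^[a-zA-Z ]{3,20}$/', $name)) {
                $errors['name'] = 'First Name: 3-20 letters and spaces only.';
            }

            if ($email === '') {
                $errors['email'] = 'Email: Cannot be left blank.';
            } elseif (filter_var($email, FILTER_VALIDATE_EMAIL) === false) {
                $errors['email'] = 'Email: Invalid format. Please check.';
            }

            return $errors;
        }

        $errors = validate_registration($_POST);
        if (count($errors)) {
            // show error messages
        } else {
            // proceed; escaping still happens at the database layer
        }

    filter_var() avoids the deprecated eregi() call used above, and it keeps validation separate from escaping, which belongs with the database calls (ideally prepared statements).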

    Read the article

  • Does the Google bot (and/or other search engines) index a forwarded page? [migrated]

    - by user2889419
    Let's say I have the foo.bar domain and I force users onto HTTPS instead of HTTP. Browsers simply accept and load the forwarded/new page (when a request for http://foo.bar is redirected to https://foo.bar); does the Google bot (or other search engines) follow the redirect, index the new page, and just ignore the old one? In other words, do search engines accept HTTPS alongside HTTP? Thanks in advance.
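
    A minimal sketch of the kind of forced redirect being described, assuming it is done in PHP at the top of each page rather than in the web server configuration; a permanent (301) redirect is the usual way to signal that the HTTPS URL is the one to keep:

        <?php
        // Hypothetical snippet: force HTTPS with a permanent redirect so crawlers
        // treat the https:// URL as the surviving copy of the page.
        if (empty($_SERVER['HTTPS']) || $_SERVER['HTTPS'] === 'off') {
            $target = 'https://' . $_SERVER['HTTP_HOST'] . $_SERVER['REQUEST_URI'];
            header('Location: ' . $target, true, 301);
            exit;
        }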

    Read the article

  • AuthSub target path prefix does not match the provided "next" URL

    - by dweebsonduty
    I am trying to use the GCal API in PHP, using the Zend Framework: function getAuthSubUrl($company) { $next = "http://$company.mysite.com"; $scope = 'http://www.google.com/calendar/feeds/'; $secure = false; $session = true; return (Zend_Gdata_AuthSub::getAuthSubTokenUri($next, $scope, $secure, $session)); } $authSubUrl = getAuthSubUrl(); echo "<a href=\"$authSubUrl\">login to your Google account</a>"; I am not sure what I am doing wrong here. I am following the Google example almost exactly. They do have $next = getCurrentUrl(); in their example, but I am getting undefined errors when I try that.
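
    One thing visible in the snippet itself is that getAuthSubUrl() is called without its $company argument, so $next comes out as http://.mysite.com rather than an actual subdomain. A hedged sketch of the same call with the argument supplied ('acme' is a placeholder subdomain, and the include path is assumed to be set up for Zend Framework 1):

        <?php
        require_once 'Zend/Gdata/AuthSub.php';

        function getAuthSubUrl($company) {
            // $next must be a URL Google can send the user (and token) back to.
            $next    = "http://$company.mysite.com/";
            $scope   = 'http://www.google.com/calendar/feeds/';
            $secure  = false;
            $session = true;
            return Zend_Gdata_AuthSub::getAuthSubTokenUri($next, $scope, $secure, $session);
        }

        $authSubUrl = getAuthSubUrl('acme');
        echo '<a href="' . htmlspecialchars($authSubUrl) . '">Login to your Google account</a>';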

    Read the article
