Search Results

Search found 31026 results on 1242 pages for 'google cloud platform'.

  • How to install Google drive? [duplicate]

    - by Tamal Das
    This question already has an answer here: "Is there a Google Drive client available?" (9 answers). How do I install Google Drive on Ubuntu 13.04? I have been trying to find an installer on the Internet (using Google search), but I am not able to find one.

  • Google Maps Api v3 - getBounds is undefined

    - by Via Lactea
    Hi all, I'm switching from the v2 to the v3 Google Maps API and have a problem with the gMap.getBounds() function. I need to get the bounds of my map right after it is initialized. Here is my JavaScript code:

        var gMap;
        $(document).ready(function() {
          var latlng = new google.maps.LatLng(55.755327, 37.622166);
          var myOptions = {
            zoom: 12,
            center: latlng,
            mapTypeId: google.maps.MapTypeId.ROADMAP
          };
          gMap = new google.maps.Map(document.getElementById("GoogleMapControl"), myOptions);
          alert(gMap.getBounds());
        });

    It alerts that gMap.getBounds() is undefined. I've tried getting the bounds in a click event and that works fine, but I cannot get the same result when the map loads. getBounds also works fine while the document is loading in Google Maps API v2, but it fails in v3. Could you please help me solve this problem?
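
    In v3 the bounds are simply not defined until the map has completed its first layout and draw pass. A minimal sketch of the usual workaround (assuming the gMap variable above): read the bounds once the map fires its first idle event.

        // Wait for the map's first 'idle' event; by then the
        // viewport bounds are available.
        google.maps.event.addListenerOnce(gMap, 'idle', function() {
          alert(gMap.getBounds());
        });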

  • How can I tell if a user came to a page via a Google Adwords PPC campaign?

    - by Mike Crittenden
    I have a form with a hidden "Came from AdWords" field that is set to true (via JavaScript) if the user came from a PPC campaign and stays false if not. That way, when the user submits the form, each submission is stored with information about whether it came from AdWords, all without the user knowing. How can I fetch this info? I know that Google sets a cookie called Conversion whenever you click a PPC link to a page, but the cookie's content is just random alphanumeric characters. Is there something in the Analytics/AdWords API that will let me test for this? Do I have to resort to adding ?ref=adwords or something onto the PPC URLs so that I can test that way?
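
    One common approach, sketched here under the assumption that auto-tagging is enabled on the AdWords account: auto-tagged clicks arrive with a gclid query parameter on the landing-page URL, which can be detected client-side and copied into the hidden field.

        // Assumes auto-tagging, so PPC visits carry ?gclid=... in the URL.
        function cameFromAdwords() {
          return /[?&]gclid=/.test(window.location.search);
        }

        if (cameFromAdwords()) {
          // 'came_from_adwords' is a hypothetical id for the hidden field.
          document.getElementById('came_from_adwords').value = 'true';
        }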

  • Does Google's Geocoding API return results that are more accurate than Google Maps or the same?

    - by jacob501
    I am thinking about using Python or C++ in conjunction with Google's Geocoding API. Since geocoding is the process of turning street addresses into coordinates, I was wondering how Google does this. I am looking for something that will give me coordinates within about 50 meters of the entrance of the location at a specified address. There are a few problems with this when you use Google Maps, however. If you aren't doing it manually, sometimes Google Maps will place a marker for an address on the road rather than over the place itself (especially for addresses in malls, places far off the road, etc.). Does the Geocoding API give you more accurate coordinates, or does it simply copy the coordinates a Google Maps marker would give you? I hope this makes sense. Thanks.
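
    For what it's worth, the v3 geocoder does report how each result was derived: geometry.location_type distinguishes a building-level match from a road or area approximation. A short sketch (JavaScript, assuming the Maps API is loaded) that inspects it:

        var geocoder = new google.maps.Geocoder();
        geocoder.geocode({ address: '1600 Amphitheatre Parkway, Mountain View, CA' },
          function(results, status) {
            if (status === google.maps.GeocoderStatus.OK) {
              var geometry = results[0].geometry;
              // ROOFTOP is a precise, building-level match; RANGE_INTERPOLATED,
              // GEOMETRIC_CENTER and APPROXIMATE are progressively coarser.
              if (geometry.location_type === google.maps.GeocoderLocationType.ROOFTOP) {
                console.log('Precise match: ' + geometry.location);
              } else {
                console.log('Approximate match (' + geometry.location_type + ')');
              }
            }
          });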

  • Efficient map overlays and drawing lines on an Android Google Map

    - by Ahsan
    Hi friends, I want to do the following and have been stuck on these for a few days:

    1) I have used HelloItemizedOverlay to add about 150 markers and it gets very, very slow. Any idea what to do? I was thinking about threads (a Handler).
    2) I am trying to draw polylines (I have encoded polylines, but have managed to decode them) that move when I move the map.
    3) I am looking for some sort of timer class that executes, say, every minute or so.
    4) I am also looking for ways to clear the Google map of all markers, lines, etc.

    Thanks a lot... :) - ahsan

  • zoom_changed only triggers once in google maps api version 3 using MVC

    - by fredrik
    Hi, I'm trying to use the MVC objects in Google Maps API v3. What I can't figure out is why my zoom_changed method is only invoked once. When I first load the map the zoom_changed method is invoked, but not when I zoom on the map afterwards.

        function MarkerWidget(options) {
          this.setValues(options);
          this.set('zoom', this.map.zoom);
          var marker = new google.maps.Marker({
            icon: this.icon,
            mouseOverIcon: this.mouseOverIcon,
            orgIcon: this.orgIcon
          });
          marker.bindTo('map', this);
          marker.bindTo('position', this);
          google.maps.event.addListener(marker, 'mouseover', this.onmouseover);
          google.maps.event.addListener(marker, 'mouseout', this.onmouseout);
        }

        MarkerWidget.prototype = new google.maps.MVCObject();

        MarkerWidget.prototype.zoom_changed = function () {
          $.log(this, new Date());
        }

    Shouldn't the map object fire the zoom event and notify every object that has this.set('zoom', this.map.zoom)? ..fredrik
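
    A plausible explanation, based on how MVCObject bindings work: set('zoom', ...) copies the value once and fires zoom_changed once; it does not create a live link to the map. Binding the property instead should make zoom_changed fire on every zoom, roughly like this:

        // In the MarkerWidget constructor, instead of
        // this.set('zoom', this.map.zoom):
        this.bindTo('zoom', this.map);
        // The widget's 'zoom' now tracks the map's 'zoom', so
        // MarkerWidget.prototype.zoom_changed runs on each zoom change.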

  • google contacts api service account oauth2.0 sub user

    - by user3709507
    I am trying to use the Google Contacts API to connect to a user's contact information on my Google Apps domain. Generating an access_token with the gdata API's ContactsService ClientLogin function, using the API key for my project, works fine, but I would prefer not to store the user's credentials, and from the information I have found, that method uses OAuth 1.0. So, to use OAuth 2.0 I have:

    1. Generated a service account in the Developers Console for my project
    2. Granted access to the service account for the scope https://www.google.com/m8/feeds/ in the Google Apps domain admin panel
    3. Attempted to generate credentials using SignedJwtAssertionCredentials:

        credentials = SignedJwtAssertionCredentials(
            service_account_name=service_account_email,
            private_key=key_from_p12_file,
            scope='https://www.google.com/m8/feeds/',
            sub=user_email)

    The problem I am running into is that generating an access token this way fails. It succeeds when I remove the sub parameter, but that token then fails when I try to fetch the user's contacts. Does anyone know why this might be happening?

  • Google Maps: Multimarker not working

    - by HyperDevil
    Hi, I want to plot several markers on my map. I found some code on the Internet; for that user it worked. The array is generated by PHP, for example:

        var ships = [['61','10.2']['60.5','10.1']];

    My JavaScript:

        var map;

        function load(ships) {
          initialize();
          createShips(ships);
        }

        function initialize() {
          // build the map
          var myLatlng = new google.maps.LatLng(63.65, 10.65);
          var myOptions = {
            zoom: 9,
            center: myLatlng,
            mapTypeId: google.maps.MapTypeId.TERRAIN
          }
          var map = new google.maps.Map(document.getElementById("map_canvas"), myOptions);
        }

        function createShips(ships) {
          for (var i = 0; i < ships.length; i++) {
            new google.maps.Marker({
              position: new google.maps.LatLng(ships[i][0], ships[i][1]),
              map: map,
              title: ships[i][0]
            });
          }
        }

    My HTML starts the map with body onload="load()". The map appears, but no markers. :(
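
    Two likely culprits, offered as a hedged diagnosis rather than a certainty: initialize() declares a local var map that shadows the global one, so createShips attaches markers to an undefined map, and the example ships array is missing a comma between its two inner arrays. A corrected sketch:

        var ships = [['61', '10.2'], ['60.5', '10.1']]; // note the comma

        function initialize() {
          // Assign to the global `map` (no `var`) so createShips can see it.
          map = new google.maps.Map(document.getElementById("map_canvas"), {
            zoom: 9,
            center: new google.maps.LatLng(63.65, 10.65),
            mapTypeId: google.maps.MapTypeId.TERRAIN
          });
        }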

  • Cancel zoom from the dblclick event on a map in Google Maps version 3

    - by aleka
    I initialize a map in my application like this:

        function doLoad() {
          var parnitha = new google.maps.LatLng(38.155428, 23.718739);
          var myOptions = {
            zoom: 16,
            mapTypeId: google.maps.MapTypeId.SATELLITE,
            center: parnitha,
            disableDefaultUI: true,
            navigationControl: true,
            scrollwheel: false,
            navigationControlOptions: {
              style: google.maps.NavigationControlStyle.SMALL,
              position: google.maps.ControlPosition.TOP_LEFT
            }
          }
          map = new google.maps.Map(document.getElementById("map_canvas"), myOptions);
        }

    Double-clicking the map zooms it, and I don't want that, because I want to use the dblclick event for something else. Please help me remove the default dblclick zoom behaviour from the map. Thanks a lot.
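
    The map options include a switch for exactly this case; a one-line sketch of the change:

        var myOptions = {
          // ...the options above, plus:
          disableDoubleClickZoom: true  // dblclick events still fire, but no zoom
        };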

  • Legal Issue: Remove/Hide links on Google Login page

    - by Rowell
    For background: I'm developing a device application which offers a connection to Google Drive. My end-users will need to log in to their Google account and authorize my application to access their Google Drive. I'm using OAuth 2.0 to do this. My concern is that I don't want users to navigate away from my application using the links on the Google login page. Basically, I don't want them to use my application to browse the Internet. Question: Will I violate any terms of service/usage if I hide or change the href of those links using Greasemonkey or Tampermonkey? The changes will only be on the client side, and I won't alter any processing at all. I already checked https://developers.google.com/terms/ but found no item related to modifying the pages on the client side. Thanks in advance.

  • Windows Azure Use Case: Web Applications

    - by BuckWoody
    This is one in a series of posts on when and where to use a distributed architecture design in your organization's computing needs. You can find the main post here: http://blogs.msdn.com/b/buckwoody/archive/2011/01/18/windows-azure-and-sql-azure-use-cases.aspx

    Description: Many applications have a requirement to be located outside of the organization's internal infrastructure control. For instance, the company website for a brick-and-mortar retail company may want to post not only static but interactive content to be available to external customers, without giving those customers access inside the organization's firewall. There are also cases of pure web applications used for a great many of the internal functions of the business. This allows for remote workers, shared customer/employee workloads and data, and other advantages. Some firms choose to host these web servers internally; others contract out the infrastructure to an "ASP" (Application Service Provider) or an Infrastructure-as-a-Service (IaaS) company. In any case, the design of these applications often resembles the following: a server (or perhaps more than one) hosts the presentation function (http or https) for the application, and this same system may hold the computational aspects of the program. Authorization and access are controlled programmatically, or are more open if this is a customer-facing application. Storage is placed on the same or other servers, hosted within an RDBMS or NoSQL database, or a combination of the options, all coded into the application. High availability within this scenario is often the responsibility of the application's architects, achieved by purchasing more hosting resources which must be built, licensed, configured and manually added as demand requires, although some IaaS providers have a partially automatic method to add nodes for scale-out if the architecture of the application supports it. Disaster recovery is the responsibility of the system architect as well.

    Implementation: In a Windows Azure Platform-as-a-Service (PaaS) environment, many of these architectural considerations are designed into the system. The Azure "Fabric" (not to be confused with the Azure implementation of Application Fabric - more on that in a moment) is designed to provide scalability. Compute resources can be added and removed programmatically based on any number of factors. Balancers at the request level of the Fabric automatically route http and https requests. The Fabric also provides high availability for storage and other components. Disaster recovery is a shared responsibility between the facilities (which have the ability to restore in case of catastrophic failure) and your code, which should build in recovery. In a Windows Azure-based web application, you have the ability to separate out the various functions and components. Presentation can be coded for multiple platforms like smartphones, tablets and PCs, while the computation can be a single entity shared between them. This makes the applications more resilient and more object-oriented, and lends itself to a SOA or distributed-computing architecture. It is true that you could code a similar set of functionality in a traditional web farm, but the difference here is that the components are built into the very design of the architecture. The APIs and DLLs you call in a Windows Azure code base contain these components as first-class citizens. For instance, if you need storage, it is simply called within the application as an object. Computation has multiple options and the ability to scale linearly. You also gain another component that you would otherwise have to write or bolt in to a typical web farm: the Application Fabric. This Windows Azure component provides communication between applications, or even to on-premise systems. It provides authorization in either person-based or claims-based perspectives. SQL Azure provides relational storage as another option, and can also be used or accessed from on-premise systems. It should be noted that you can use all or some of these components individually.

    Resources: Design Strategies for Scalable Active Server Applications - http://msdn.microsoft.com/en-us/library/ms972349.aspx ; Physical Tiers and Deployment - http://msdn.microsoft.com/en-us/library/ee658120.aspx

  • Windows Azure Use Case: Agility

    - by BuckWoody
    This is one in a series of posts on when and where to use a distributed architecture design in your organization's computing needs. You can find the main post here: http://blogs.msdn.com/b/buckwoody/archive/2011/01/18/windows-azure-and-sql-azure-use-cases.aspx

    Description: Agility in this context is defined as the ability to quickly develop and deploy an application. In theory, the speed at which your organization can develop and deploy an application on available hardware is identical to what you could deploy in a distributed environment. But in practice, this is not always the case. Having the option to use a distributed environment can make both deployment and even the development process much faster.

    Implementation: When an organization designs code, it essentially becomes a Software-as-a-Service (SaaS) provider to its own organization. To do that, the IT operations team becomes the Infrastructure-as-a-Service (IaaS) provider to the development teams. From there, the software is developed and deployed using an Application Lifecycle Management (ALM) process. A simplified view of an ALM process is as follows:

    - Requirements
    - Analysis
    - Design and Development
    - Implementation
    - Testing
    - Deployment to Production
    - Maintenance

    In an on-premise environment, this often equates to the following process map:

    - Requirements: Business requirements formed by Business Analysts, Developers and Data Professionals.
    - Analysis: Feasibility studies, including physical plant, security, manpower and other resources. The request is placed on the work task list if approved.
    - Design and Development: Code written according to the organization's chosen methodology, either on-premise or by multiple development teams on and off premise.
    - Implementation: Code checked into the main branch. Code forked as needed.
    - Testing: Code deployed to on-premise testing servers. If no server capacity is available, more resources are procured through standard budgeting and ordering processes. Manual and automated functional, load, security, etc. testing performed.
    - Deployment to Production: Server team involved to select platforms and environments with available capacity. If no server capacity is available, the standard budgeting and procurement process is followed, and systems are built, configured and put under standard organizational IT control. Systems configured for proper operating systems, patches, security and virus scans. System maintenance, HA/DR, backups and recovery plans configured and put into place.
    - Maintenance: Code changes evaluated and altered according to need.

    In a distributed computing environment like Windows Azure, the process maps a bit differently:

    - Requirements: Business requirements formed by Business Analysts, Developers and Data Professionals.
    - Analysis: Feasibility studies, including budget, security, manpower and other resources. The request is placed on the work task list if approved.
    - Design and Development: Code written according to the organization's chosen methodology, either on-premise or by multiple development teams on and off premise.
    - Implementation: Code checked into the main branch. Code forked as needed.
    - Testing: Code deployed to Azure. Manual and automated functional, load, security, etc. testing performed.
    - Deployment to Production: Code deployed to Azure. Point-in-time backup and recovery plans configured and put into place. (HA/DR and automated backups are already present in the Azure fabric.)
    - Maintenance: Code changes evaluated and altered according to need.

    This means that several steps can be removed or expedited. It also means that the business function requesting the application can be held directly responsible for the funding of that request, speeding the process further, since the IT budgeting process may not be involved in the Azure scenario. An additional benefit is the "Azure Marketplace". In effect, this becomes an app store for enterprises, offering pre-defined code and data applications to mesh or bolt in to their current code, possibly saving development time.

    Resources: Whitepaper download - What is ALM? http://go.microsoft.com/?linkid=9743693 ; Whitepaper download - ALM and Business Strategy: http://go.microsoft.com/?linkid=9743690 ; LiveMeeting recording on ALM and Windows Azure (registration required, but free): http://www.microsoft.com/uk/msdn/visualstudio/contact-us.aspx?sbj=Developing with Windows Azure (ALM perspective) - 10:00-11:00 - 19th Jan 2011

  • The Whole Enchilada — Fusion Supply Chain in the Cloud

    - by Kathryn Perry
    A guest post by Tyra Crockett, Senior Manager at Oracle

    No other vendor can offer everything in the cloud the way Oracle can. You can get HR from Workday and CRM from Salesforce, but you can get the whole enchilada - HCM, CRM and ERP - all from Oracle on one platform. If you're thinking about using Oracle's Cloud Services to implement the newest Oracle Fusion Supply Chain applications, this post is for you.

    Point #1: The Oracle Cloud Applications Services portfolio includes ERP cloud services which are flexible and can adapt to fill your supply chain needs. For example, you might be opening a small distribution facility in California, but don't have the time or IT resources to warrant a full-scale supply chain implementation. You can use Oracle's Cloud to implement the Oracle Fusion Supply Chain applications you need without an increase in IT staff or hardware. Then as your business grows, you can add more features and applications to your cloud.

    Point #2: Whether you're implementing a slice of the Fusion Procurement pie or the entire ERP portfolio, you want to be up and running fast with low upfront costs and investment risks. That's where you can trust a world-class technology organization like Oracle. Your SaaS subscription-based deployment model will take away the headaches associated with determining your software costs. You will also be able to eliminate expensive customizations and configure your deployment as you like, saving you time and money during the initial stages and upon upgrade.

    Point #3: Another great benefit of operating your Oracle Fusion Supply Chain in the cloud is the opportunity to standardize your processes across your entire supply chain. You can institute processes in San Francisco and be confident they will be followed in Mexico City and Hong Kong.

    Point #4: If data security is a concern - and it is for most of us - Oracle-managed cloud services give you the comfort of knowing that your data will always be there when you need it. You will not have to manage the IT services associated with patching and upgrades; they will be taken care of automatically. This enables you to focus on what you do best: managing your business.

    Point #5: Cloud services aren't an either/or proposition. You might have very good business reasons for choosing a hybrid model - running some applications in the cloud and others on premise. That allows you to leverage your own IT department when and where you need to, and shift focus when necessary.

    I urge you to take a hard look at the Oracle Fusion Supply Chain applications running in the cloud. These solutions, running alongside your existing legacy systems, can solve your toughest business challenges as you move forward in the 21st century.

  • Why do I need two Instances in Windows Azure?

    - by BuckWoody
    Windows Azure as a Platform-as-a-Service (PaaS) offering means that there are various components you can use in it to solve a problem:

    - Compute "Roles": computers running an OS and optionally IIS; you can have more than one "Instance" of a given Role
    - Storage: Blobs, Tables and Queues
    - Other services: things like the Service Bus, Azure Connection Services, SQL Azure and Caching

    It's important to understand that some of these services are stateless and others maintain state. Stateless means (at least in this case) that a system might disappear from one physical location and appear elsewhere. You can think of this as a cashier at the front of a store. If you're in line, a cashier might take his break, and another person might replace him. As long as the order proceeds, you as the customer aren't really affected except for the few seconds it takes to change them out. The cashier function in this example is stateless. The Compute Role Instances in Windows Azure are stateless. To upgrade hardware, because of a fault, or for many other reasons, a Compute Role's Instance might stop on one physical server and another will pick it up. This is done through the controlling fabric that Windows Azure uses to manage the systems. It's important to note that storage in Azure does maintain state. Your data will not simply disappear - it is maintained - in fact, it's maintained three times in a single datacenter, and all those copies are replicated to another datacenter for safety. Going back to our example, storage is similar to the cash register itself. Even though a cashier leaves, the record of your payment is maintained. So if a Compute Role Instance can disappear and re-appear, the things running on that first Instance would stop working. If you wrote your code in a stateless way, then another Role Instance simply re-starts that transaction and keeps working, just like the other cashier in the example. But if you only have one Instance of a Role, then when the Role Instance is re-started, or when you need to upgrade your own code, you can face downtime, since there's only one. That means you should deploy at least two of each Role Instance, not only for scale to handle load, but so that the first "cashier" has someone to replace them when they disappear. It's not just a good idea - to gain the Service Level Agreement (SLA) for uptime in Azure, it's a requirement. We point this out right in the Management Portal when you deploy the application. When you deploy a Role Instance you can also set the "Upgrade Domain". Placing Roles on separate Upgrade Domains means that you have continuous service whenever you upgrade (more on upgrades in another post). In the upgrade scenario you have four Roles in total - one Web and one Worker running the "older" code, and one of each running the new code. In all those Roles you want at least two Instances, so that you're covered for both high availability and the upgrade path. The take-away is this: always plan for forward-facing Roles to have at least two copies. For Worker Roles that do background processing, there are ways to architect around this number, but it does affect the SLA if you have only one.

  • Creating a Corporate Data Hub

    - by BuckWoody
    The Windows Azure Marketplace has a rich assortment of data and software offerings for you to use - a type of Software as a Service (SaaS) for IT workers, not necessarily for end-users. Among those offerings is the "Data Hub" - a codename for a project that, ironically, actually does what the codename says. In many of our organizations, we have multiple data quality issues. Finding data is one problem, but finding it just once is often a bigger problem. Lots of departments and even individuals have stored the same data more than once, and in some cases made changes to one of the copies. It's difficult to know which location or version of the data is authoritative. Then there's the problem of accessing the data. It's fairly straightforward to publish a database, share or other location internally to store the data. But then you have to figure out who owns it, how it is controlled, and pass out the various connection strings to those who want to use it. And then you need to figure out how to let folks access the internal data externally - bringing up all kinds of security issues. Finally, in many cases our user community wants us to combine data from the internal sources with external data, bringing up the security, connection-string and exploration issues all over again. Enter the Data Hub. This is an online offering where you assign an administrator and data stewards. You import the data into the service, and it's available to you - and only you and your organization, if you wish. The basic steps for this service are to set up the portal for your company, assign administrators and permissions, and then assign data areas and import data into them. From there you make them discoverable, and then you have multiple options for how you or your users access that data. You're then able, if you wish, to combine that data with other data in one location. So how does all that work? What about security? Is it really that easy? And can you really move the data definition off to the Subject Matter Experts (SMEs) who know the particular data stack better than the IT team does? Well, nothing good is easy - but using the Data Hub is actually pretty simple. I'll give you a link in a moment where you can sign up and try this yourself. Once you sign up, you assign an administrator. From there you'll create data areas, and then use a simple interface to bring the data in. All of this is done in a portal interface - nothing to install, configure, update or manage. After the data is entered and you've assigned metadata to describe it, your users have multiple options to access it. They can simply use the portal - which has powerful visualizations you can use on any platform, even mobile phones or tablets. Your users can also hit the data with Excel - which gives them ultimate flexibility for display, all while using an authoritative, single reference for the data. Since the service is online, they can do this wherever they are - given the proper authentication and permissions. You can also hit the service with simple API calls, like this one from C#: http://msdn.microsoft.com/en-us/library/hh921924 You can make HTTP calls instead of code, and the data can even be exposed as an OData feed. As you can see, there are a lot of options. You can check out the offering here: http://www.microsoft.com/en-us/sqlazurelabs/labs/data-hub.aspx and you can read the documentation here: http://msdn.microsoft.com/en-us/library/hh921938

  • GooglePlayServicesNotAvailable: GooglePlayServices not available due to error 1

    - by Mathias Lin
    I'm on a Galaxy S III with Android 4.0.4 and Google Play installed. In my app I try to get a token from Google Play services, as described at https://developers.google.com/android/google-play-services/authentication. Since it's all quite new (the Google pages were last updated this week), there's not much documentation to be found, especially about each specific error code.

        final String token = GoogleAuthUtil.getToken(this, "[email protected]", "scope");

    gives me an exception:

        09-30 11:24:36.075: ERROR/GoogleAuthUtil(11984): GooglePlayServices not available due to error 1
        09-30 11:24:36.105: ERROR/AuthTokenCheck_(11984): Error 1
        com.google.android.gms.auth.GooglePlayServicesAvailabilityException: GooglePlayServicesNotAvailable
            at com.google.android.gms.auth.GoogleAuthUtil.f(Unknown Source)
            at com.google.android.gms.auth.GoogleAuthUtil.getToken(Unknown Source)
            at com.google.android.gms.auth.GoogleAuthUtil.getToken(Unknown Source)
            at mobi.app.activity.AuthTokenCheck.getAndUseAuthTokenBlocking(AuthTokenCheck.java:148)
            at mobi.app.activity.AuthTokenCheck$1.doInBackground(AuthTokenCheck.java:61)
            at android.os.AsyncTask$2.call(AsyncTask.java:264)
            at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:305)
            at java.util.concurrent.FutureTask.run(FutureTask.java:137)
            at android.os.AsyncTask$SerialExecutor$1.run(AsyncTask.java:208)
            at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1076)
            at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:569)
            at java.lang.Thread.run(Thread.java:856)

    The reference at https://developers.google.com/android/google-play-services/reference/com/google/android/gms/auth/package-summary tells me: "GooglePlayServicesAvailabilityExceptions are special instances of UserRecoverableAuthExceptions which are thrown when the expected Google Play services app is not available for some reason." But what exactly does that mean, and how do I resolve it? I've added the Google Play services extras in my SDK and the jar to my project, marked as 'exported'. I'm also wondering what the "Google Play services app" exactly is. Unfortunately it's not very clearly described at https://developers.google.com/android/google-play-services/ : "The Google Play services component is delivered as an APK through the Google Play Store, so updates to Google Play services are not dependent on carrier or OEM system image updates. Newer devices will also have Google Play services as part of the device's system image, but updates are still pushed to these newer devices through the Google Play Store." Isn't the "Google Play services" app the same as the "Google Play" app? Another question I have, due to the lack of documentation: what is the scope parameter for? The documentation just says the following, without defining what an 'authentication scope' is: "scope: String representing the authentication scope."

  • Google Apps Script - Sending Google Spreadsheet Row to Email

    - by AME
    Hi, I am trying to write a script using Google Apps Script (JavaScript) for a Google spreadsheet. I am trying to do the same thing shown in this tutorial (http://www.google.com/google-d-s/scripts/sending_emails.html), but each row in my spreadsheet has 24 columns, and I would like to send out the contents of each row as an email. Here is the code as I am trying to use it:

        function sendEmails() {
          var sheet = SpreadsheetApp.getActiveSheet();
          var dataRange = sheet.getRange("A2:X31");
          // Fetch values for each row in the Range.
          var data = dataRange.getValues();
          for (i in data) {
            var row = data[i];
            var i = i + 1;
            var emailAddress = row[0]; // First column
            var message = row[1];      // Second column
            var subject = "Sending emails from a Spreadsheet";
            MailApp.sendEmail(emailAddress, subject, message);
          }
        }

    The result is an email with the contents of the "B" column only. Can someone help me change this code to get all of the contents in each row (columns A-X)? Thanks,
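
    A minimal sketch of one way to do it (assuming the same A2:X31 range): build the message body from all of the row's values instead of reading only row[1].

        function sendEmails() {
          var sheet = SpreadsheetApp.getActiveSheet();
          var data = sheet.getRange("A2:X31").getValues();
          for (var i = 0; i < data.length; i++) {
            var row = data[i];
            var emailAddress = row[0];     // column A
            var message = row.join("\n");  // columns A-X, one value per line
            var subject = "Sending emails from a Spreadsheet";
            MailApp.sendEmail(emailAddress, subject, message);
          }
        }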

  • how to: dynamically load google ajax api into chrome extension content script

    - by Hoff
    Hi there, I'm trying to make use of Google's AJAX APIs in a Chrome extension's "content script". On a regular HTML page, I would just do this:

        <script src="http://www.google.com/jsapi"></script>
        <script>
          google.load("language", "1");
        </script>

    But since I'm trying to load the translation library dynamically from JS code, I've tried:

        script = document.createElement("script");
        script.src = "http://www.google.com/jsapi";
        script.type = "text/javascript";
        document.getElementsByTagName("head")[0].appendChild(script);
        google.load('language', '1');

    but the last line throws the following error:

        Uncaught TypeError: Object # has no method 'load'

    Funnily enough, when I enter the same google.load('language','1') in Chrome's JS console, it works as intended. I've also tried with jQuery's .getScript(), but the same problem persists. Does anybody have any clue what the problem might be and how it could be solved? Many thanks in advance! Martin
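
    One likely cause, offered as a sketch rather than a definitive fix: appending a script tag loads it asynchronously, so google is not defined yet on the very next line (it is defined by the time you type into the console, which is why that works). Waiting for the script's load event, and passing a callback to google.load since the page has already finished parsing, should help:

        var script = document.createElement("script");
        script.src = "http://www.google.com/jsapi";
        script.onload = function () {
          // jsapi is present now; when loading after page parse,
          // google.load needs an explicit callback option.
          google.load("language", "1", {
            callback: function () {
              // the translation library is ready to use here
            }
          });
        };
        document.getElementsByTagName("head")[0].appendChild(script);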

  • Get Chinese Romanization from Google Translate API

    - by krubo
    The Google language translate API works cleanly for translating into Chinese:

        <script type="text/javascript" src="http://www.google.com/jsapi"></script>
        <script>
          google.load('language', '1');
          function googletrans(text) {
            google.language.translate(text, 'en', 'zh', function(result) {
              alert(result.translation);
            });
          }
        </script>
        <input onchange="googletrans(this.value);">

    Example input: "Hello". Result: "你好". My problem is that I can't get the Romanization (the pronunciation in English letters). This is a known issue. The data is right there on translate.google.com (example input: "Hello", result: "Ni hao") and I can even see it by pointing my browser to:

        http://translate.google.com/translate_a/t?client=t&text=hello&hl=en&sl=en&tl=zh-CN&otf=2&pc=0

    Result:

        {"sentences":[{"trans":"你好","orig":"hello","translit":"Ni hao"}],
         "dict":[{"pos":"interjection","terms":["?"]}],"src":"en"}

    But somehow when I try to fetch this URL with AJAX it fails (XMLHttpRequest Exception 101). Is there any way to retrieve this Romanization data with AJAX?

  • Google Reader API - feed/[FEEDURL]/ is coming back as Not found

    - by JustinXXVII
    There is one feed I'm subscribed to which always turns up as NOT FOUND when I try to use the API. I return an array of dictionaries containing three objects. The first in the list represents the user himself, like so:

        { FeedID = "user/MY_UNIQUE_NUMBER/state/com.google/reading-list";
          Timestamp = 1273448807271463;
          Unread = 59; }

    The Unread count is very important. My client depends on downloading 59 items from Google before it refreshes. If a feed doesn't download properly, the count is off and the client won't update. An example of a working feed is:

        { FeedID = "feed/http://arstechnica.com/index.rssx";
          Timestamp = 1273447158484528;
          Unread = 13; }

    The FeedID value is combined with a specially formatted URL string and gives back a list of articles. The above example works fine. However, the following feed always returns NOT FOUND from Google, and if I paste the URL verbatim into a browser, it never turns up:

        { FeedID = "feed/http://www.peopleofwalmart.com/?feed=rss2";
          Timestamp = 1273424138183529;
          Unread = 6; }

        http://www.google.com/reader/api/0/stream/contents/feed/http://www.peopleofwalmart.com/?feed=rss2?ot=1&r=n&xt=user/-/state/com.google/read&n=6&ck=1273449028&client=testClient

    If you are at all proficient with the API, can you please help me? Like I said, since Google always says NOT FOUND when I search for that feed, my download count is off by N articles and won't update. I would rather not hack around it, honestly. Thanks!
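
    A hedged guess at the cause: the failing feed's URL itself contains ? and =, so when it is embedded raw in the request path it collides with the query string (note the two ? characters in the failing URL above). Percent-encoding the feed ID before building the request may fix it; a sketch:

        // Encode the feed URL so its '?' and '=' don't break the request path.
        var feedId = "feed/" + encodeURIComponent("http://www.peopleofwalmart.com/?feed=rss2");
        var url = "http://www.google.com/reader/api/0/stream/contents/" + feedId +
                  "?ot=1&r=n&xt=" + encodeURIComponent("user/-/state/com.google/read") +
                  "&n=6&client=testClient";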

  • How to get the equivalent of the accuracy in Google Map Geocoder V3

    - by Scorpi0
    Hi, I want to get geocodes from Google, and I used to do this with v2 of the API. In v2, Google sends a very useful piece of information in the JSON: the accuracy. Reference here: http://code.google.com/intl/fr-FR/apis/maps/documentation/javascript/v2/reference.html#GGeoAddressAccuracy In v3, Google doesn't seem to send exactly the same information. There is the address_components array, which seems to be bigger when the accuracy is better, but not consistently. For example, one request accurate to the street number gives an array of size 8, while another request accurate only to the route (so less accurate) also gives an array of size 8, because a 'sublocality' row appears that wasn't in the first case. For each result, Google does send a types field, which holds the 'best' accuracy. The types are listed here: http://code.google.com/intl/fr-FR/apis/maps/documentation/geocoding/#Types But there is no real ordering, and if I want only results better than postal_code, I have no clue how to express that. So, how can I get the equivalent of the v2 accuracy without some dumb and horrible code?
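
    One workable approach, sketched under the assumption that a coarse precision ladder is enough: rank the v3 result by geometry.location_type, which runs from ROOFTOP down to APPROXIMATE, and filter on that rather than on types.

        // Rough precision ranking for v3 geocoder results; higher is more precise.
        var LOCATION_TYPE_RANK = {
          'ROOFTOP': 3,            // precise street-address match
          'RANGE_INTERPOLATED': 2, // interpolated along a road segment
          'GEOMETRIC_CENTER': 1,   // center of a street or region
          'APPROXIMATE': 0         // city / postal-code level
        };

        function isBetterThanPostalCode(result) {
          return LOCATION_TYPE_RANK[result.geometry.location_type] >= 1;
        }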

  • Problem moving a file or folder to a different folder in Google Documents using the API

    - by Minh Nguyen
    In Google Documents I have this structure:

        Folder1
        +------Folder1-1
        +------+------File1-1-1
        +------Folder1-2
        +------File1-1
        Folder2

    I want to move "File1-1" to "Folder2" using the .NET Google API library (Google Data API SDK):

        public static void moveFolder(string szUserName, string szPassword,
                                      string szResouceID, string szToFolderResourceID)
        {
            string szSouceUrl = "https://docs.google.com/feeds/default/private/full"
                + "/" + HttpContext.Current.Server.UrlEncode(szResouceID);
            Uri sourceUri = new Uri(szSouceUrl);

            // create an atom entry
            AtomEntry atom = new AtomEntry();
            atom.Id = new AtomId(szSouceUrl);

            string szTargetUrl = "http://docs.google.com/feeds/default/private/full/folder%3Aroot/contents/";
            if (szToFolderResourceID != "")
            {
                szTargetUrl = "https://docs.google.com/feeds/default/private/full"
                    + "/" + HttpContext.Current.Server.UrlEncode(szToFolderResourceID) + "/contents";
            }
            Uri targetUri = new Uri(szTargetUrl);

            DocumentsService service = new DocumentsService(SERVICENAME);
            ((GDataRequestFactory)service.RequestFactory).KeepAlive = false;
            service.setUserCredentials(szUserName, szPassword);
            service.EntrySend(targetUri, atom, GDataRequestType.Insert);
        }

    After running this function I have:

        Folder1
        +------Folder1-1
        +------+------File1-1-1
        +------Folder1-2
        +------File1-1
        Folder2
        +------File1-1

    "File1-1" shows up in both "Folder1" and "Folder2", and when I delete it from one folder it is deleted from the other as well. (Expected: "File1-1" appears only in "Folder2".) What is happening? How can I solve this problem?

  • Can I retain a Google apps session token permanently for a specific user who logs into my google app

    - by Ali
    Hi guys, is it possible to retain, upon authorization, a single session token for a user who signs into my Google application? Currently my application seems to require the user to re-authenticate with Google Apps every now and then. I think it has to do with the session dying out. I have the following code:

        function getCurrentUrl() {
          global $_SERVER;
          $php_request_uri = htmlentities(substr($_SERVER['REQUEST_URI'], 0,
              strcspn($_SERVER['REQUEST_URI'], "\n\r")), ENT_QUOTES);
          if (isset($_SERVER['HTTPS']) && strtolower($_SERVER['HTTPS']) == 'on') {
            $protocol = 'https://';
          } else {
            $protocol = 'http://';
          }
          $host = $_SERVER['HTTP_HOST'];
          if ($_SERVER['SERVER_PORT'] != '' &&
              (($protocol == 'http://' && $_SERVER['SERVER_PORT'] != '80') ||
               ($protocol == 'https://' && $_SERVER['SERVER_PORT'] != '443'))) {
            $port = ':' . $_SERVER['SERVER_PORT'];
          } else {
            $port = '';
          }
          return $protocol . $host . $port . $php_request_uri;
        }

        function getAuthSubUrl($n = false) {
          $next = $n ? $n : getCurrentUrl();
          $scope = 'http://docs.google.com/feeds/documents https://www.google.com/calendar/feeds/ https://spreadsheets.google.com/feeds/ https://www.google.com/m8/feeds/ https://mail.google.com/mail/feed/atom/';
          $secure = false;
          $session = true;
          // echo Zend_Gdata_AuthSub::getAuthSubTokenUri($next, $scope, $secure, $session);
          return Zend_Gdata_AuthSub::getAuthSubTokenUri($next, $scope, $secure, $session)
              . (isset($_SESSION['domain']) ? '&hd=' . $_SESSION['domain'] : '');
        }

        function _regenerate_token() {
          global $BASE_URL;
          if (!$_SESSION['token']) {
            if (isset($_GET['token'])):
              $_SESSION['token'] = Zend_Gdata_AuthSub::getAuthSubSessionToken($_GET['token']);
              return;
            else:
              _regenerate_sessions();
              _redirect(getAuthSubUrl($BASE_URL . '/index.php?' . $_SERVER['QUERY_STRING']));
            endif;
          }
        }

        _regenerate_token();

    I know I'm doing something wrong here and I don't know what. :( I have a CONSUMER SECRET code but only use it where I need to access a Google service. However, something is wrong with my authentication, because the user periodically has to 'grant access to my application' and re-authorize himself. Help please.

  • Javascript and the Google Maps API

    - by Tiny Giant Studios
    Hiya coding ninjas, I'm in a spot of bother and my hairline is on the chopping block. When I integrated the Maps API on this site, ritaknoetze.com, everything worked perfectly. However, after copying that exact code to a different demo website, scarabpaper, the map doesn't show up at all. Could someone show me what I'm doing wrong? Here's the code I got from Google itself, modified for my WordPress theme/installation.

    JavaScript:

        <meta name="viewport" content="initial-scale=1.0, user-scalable=no" />
        <script type="text/javascript" src="http://maps.google.com/maps/api/js?sensor=false"></script>
        <script type="text/javascript">
          function initialize() {
            var myLatlng = new google.maps.LatLng(-34.009839, 22.78101);
            var myOptions = {
              zoom: 9,
              center: myLatlng,
              navigationControl: true,
              mapTypeControl: false,
              scaleControl: false,
              mapTypeId: google.maps.MapTypeId.ROADMAP
            }
            var map = new google.maps.Map(document.getElementById("map_canvas"), myOptions);
            var image = '<?php bloginfo('template_url')?>/assets/googlemaps_marker.png';
            var myLatLng = new google.maps.LatLng(-34.009839, 22.78101);
            var beachMarker = new google.maps.Marker({
              position: myLatLng,
              map: map,
              icon: image
            });
          }
        </script>

    My HTML where the JavaScript goes:

        <div class="contact_container">
          <div id="map_canvas"></div>
          <div class="clearfloat"></div>
        </div>

    My CSS for the affected divs:

        #map_canvas {
          width: 880px;
          height: 300px;
          margin-left: 10px;
          margin-bottom: 30px;
          margin-top: 10px;
          float: left;
          border: 1px solid #dedcdc;
        }

        .contact_container { /* container for ALL the contact info */
          background-color: #fff;
          border: 1px solid #dedcdc;
          width: 900px;
          margin-top: 30px;
          padding: 20px;
          padding-bottom: 0;
        }

    Any help would be greatly appreciated...
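
    One thing that stands out, offered as a hedged guess: nothing in the snippet ever calls initialize(), so if the working site had a body onload hook that didn't get copied across, the map is never created, which matches the "nothing shows up" symptom. A minimal fix is to hook it to the page load:

        // Run initialize() once the page has loaded; without some such
        // hook the map is never created.
        google.maps.event.addDomListener(window, 'load', initialize);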
