Search Results

Search found 83878 results on 3356 pages for 'google data api'.

Page 7/3356 | < Previous Page | 3 4 5 6 7 8 9 10 11 12 13 14  | Next Page >

  • HTML 5, Fluid Pages and Google Mobile Index

    - by Bob
    I am currently migrating my site to HTML5 and, at the same time, designing the pages so that they are "fluid" and equally presentable on a mobile or a large screen. I took the fluid approach so as not to have to develop a separate application for mobile devices, and I'm pleasantly surprised with the results, which look equally good on an iPhone and on a large screen. Then I went into the Google Webmaster Tools facility and became aware of the Google Mobile Index. I'm confused now, as HTML5 doesn't seem to be supported by Google mobile indexing. Does this mean that when I go live with my new "pride and joy" HTML5 site, it won't appear in any Google searches made on a mobile because it's not in the Google Mobile Index?

    Read the article

  • Pointing a domain away from Google

    - by redbeard
    I've already asked this question on Google Apps support but didn't get an answer yet, and I am in a bit of a hurry; I need to have the website up by Monday. Question: We have a domain registered at 101domain. We used the domain just for Google Apps (for email and some services), and it was never used for a website. Now we have pointed the domain to the local hosting service, set up everything like it should be, and pointed it back to Google from the hosting service for Gmail, and Gmail works perfectly. The problem is that we tried to set up a website at the hosting service, but the domain still points to the Google Apps login. Everything looks like it should. We recently bought 2 more domains that are set up the same way (we use Google Apps on them too); the only difference that I can think of is that we didn't use them for Apps first. Could this be because the CNAME change lags behind the DNS server change?

    Read the article

  • Does the Instant Preview in Google Webmaster Tools take robots.txt into account?

    - by rockyraw
    Is that the way to go if I want to see visually what Googlebot sees? I'm trying to check a folder which I have just blocked in my robots.txt. If I fetch the folder as Googlebot, it fetches OK, so that doesn't tell me anything about whether the block is working. I know there's a tool to check for blocking, but it depends on the robots.txt you give it as input. Therefore I've tried the Instant Preview, and I don't get a preview of what the bot sees (the "pre-render"), so I think that means the robots.txt blocks it; however, I don't see that the bot tried to access my updated robots.txt beforehand, so I'm not sure how it knows that this folder is blocked. (It does preview another new folder that is not blocked.)

    Read the article

  • Adding a simple marker clusterer to a Google map

    - by take2
    Hi, I'm having problems adding marker clusterer functionality to my map. What I want is to use a custom icon for my markers, and every marker has its own info window which I want to be able to edit. I did accomplish that, but now I have problems adding the marker clusterer library functionality. I read something about adding markers to an array, but I'm not sure what that would exactly mean. Besides, all of the examples with an array that I have found don't have info windows, and searching through the code I didn't find an appropriate way to add them. Here is my code (mostly from Geocodezip.com):

        <script type="text/javascript" src="http://maps.google.com/maps/api/js?sensor=false"></script>
        <script type="text/javascript" src="http://google-maps-utility-library-v3.googlecode.com/svn/trunk/markerclusterer/src/markerclusterer.js"></script>
        <style type="text/css">
          html, body { height: 100%; }
        </style>
        <script type="text/javascript">
        //<![CDATA[
        var map = null;

        function initialize() {
          var myOptions = {
            zoom: 8,
            center: new google.maps.LatLng(43.907787, -79.359741),
            mapTypeControl: true,
            mapTypeControlOptions: { style: google.maps.MapTypeControlStyle.DROPDOWN_MENU },
            navigationControl: true,
            mapTypeId: google.maps.MapTypeId.ROADMAP
          };
          map = new google.maps.Map(document.getElementById("map_canvas"), myOptions);

          var mcOptions = { gridSize: 50, maxZoom: 15 };
          var mc = new MarkerClusterer(map, [], mcOptions);

          google.maps.event.addListener(map, 'click', function () {
            infowindow.close();
          });

          // Add markers to the map
          // Set up three markers with info windows
          var point = new google.maps.LatLng(43.65654, -79.90138);
          var marker1 = createMarker(point, 'Abc');
          var point = new google.maps.LatLng(43.91892, -78.89231);
          var marker2 = createMarker(point, 'Abc');
          var point = new google.maps.LatLng(43.82589, -79.10040);
          var marker3 = createMarker(point, 'Abc');
          var markerArray = new Array(marker1, marker2, marker3);
          mc.addMarkers(markerArray, true);
        }

        var infowindow = new google.maps.InfoWindow({
          size: new google.maps.Size(150, 50)
        });

        function createMarker(latlng, html) {
          var image = '/321.png';
          var contentString = html;
          var marker = new google.maps.Marker({
            position: latlng,
            map: map,
            icon: image,
            zIndex: Math.round(latlng.lat() * -100000) << 5
          });
          google.maps.event.addListener(marker, 'click', function () {
            infowindow.setContent(contentString);
            infowindow.open(map, marker);
          });
        }
        //]]>
        </script>
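    A minimal sketch of one way to wire the clusterer in, assuming the same markerclusterer.js library loaded above (run inside initialize(), after the map and infowindow exist; the coordinates and info-window text are placeholders): have createMarker return the marker it builds (the version above never returns it, so markerArray ends up holding undefined entries), push each returned marker into an array, and hand that array to MarkerClusterer. Each marker keeps its own click listener, so the per-marker info windows still work.

        // Sketch only: same Maps v3 + markerclusterer.js as above; positions and text are examples.
        function createMarker(latlng, html) {
          var marker = new google.maps.Marker({
            position: latlng,
            map: map,
            icon: '/321.png'
          });
          google.maps.event.addListener(marker, 'click', function () {
            infowindow.setContent(html);   // each marker shows its own content
            infowindow.open(map, marker);
          });
          return marker;                   // return it so it can be added to the clusterer
        }

        var points = [
          { latlng: new google.maps.LatLng(43.65654, -79.90138), html: 'First marker' },
          { latlng: new google.maps.LatLng(43.91892, -78.89231), html: 'Second marker' },
          { latlng: new google.maps.LatLng(43.82589, -79.10040), html: 'Third marker' }
        ];

        var markers = [];
        for (var i = 0; i < points.length; i++) {
          markers.push(createMarker(points[i].latlng, points[i].html));
        }
        var mc = new MarkerClusterer(map, markers, { gridSize: 50, maxZoom: 15 });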

    Read the article

  • Google tweets – Now search Twitter archives using Google

    - by samsudeen
    Google has launched a Twitter archive service which allows you to search tweets in real time as well as across Twitter's huge public archive (remember, Twitter crossed its 10 billionth tweet last month). The search results are displayed as tweets with the Twitter logo. To explore the Twitter search, go to the Google.com homepage, select "Show options" on the search results page, then select "Updates." The search is similar to a regular Google search, with options to dig through the tweets by timeframe: you can explore results by zooming in on a particular time range or date. In addition to the time chart, it also displays the relative volume of activity on Twitter about the topic; as you can see, there is a spike about the GSLV launch after 3 PM today. There is also a shortcut link, "Now", in the left corner which displays the latest results on the topics searched, and the tweets also get refreshed automatically. Considering the huge volume of activity on Twitter (50 million messages per day), the archive is only going to get bigger. By providing such a feature, Google has once again proved it is way ahead of others in search.

    Read the article

  • Uninstall Google Chrome in Fedora

    - by tbleckert
    Yesterday I installed Fedora 15 Beta with GNOME 3 - it works well. One problem though is that I installed Chrome 32-bit (which was wrong, should have been the 64-bit version) and now I can't uninstall it. I can't find it in Add/Remove Software, and I also can't install the correct version of Chrome because it complains about my other copy of Chrome. Any ideas how I can remove the existing copy and get the 64-bit version installed? Here's the message I get when trying to install:

        Test Transaction Errors:
          file /etc/cron.daily/google-chrome from install of google-chrome-stable-11.0.696.65-84435.x86_64 conflicts with file from package google-chrome-stable-11.0.696.65-84435.i386
          file /opt/google/chrome/chrome from install of google-chrome-stable-11.0.696.65-84435.x86_64 conflicts with file from package google-chrome-stable-11.0.696.65-84435.i386
          file /opt/google/chrome/chrome-sandbox from install of google-chrome-stable-11.0.696.65-84435.x86_64 conflicts with file from package google-chrome-stable-11.0.696.65-84435.i386
          file /opt/google/chrome/libffmpegsumo.so from install of google-chrome-stable-11.0.696.65-84435.x86_64 conflicts with file from package google-chrome-stable-11.0.696.65-84435.i386
          file /opt/google/chrome/libpdf.so from install of google-chrome-stable-11.0.696.65-84435.x86_64 conflicts with file from package google-chrome-stable-11.0.696.65-84435.i386
          file /opt/google/chrome/libppGoogleNaClPluginChrome.so from install of google-chrome-stable-11.0.696.65-84435.x8...

    Read the article

  • Does Google Spreadsheets create forms?

    - by alexluvsdanielle
    After going to this URL: link text, I noticed that it is powered by Google Docs. I didn't know that Google Docs makes online forms as well. Does Google Docs provide the option to make online forms? Has anyone had experience doing this? What language are they using to make this thing so scalable?

    Read the article

  • Google I/O 2010 - Google Charts Toolkit

    Google I/O 2010 - Google Charts Toolkit: Google's new unified approach for creating dynamic charts on the web. Google APIs 201, presented by Michael Fink and Amit Weinstein. The Google Charts Toolkit is Google's unified approach for creating charts on the web. It provides a rich gallery spanning from pie charts to interactive heat maps and from organizational trees to motion charts. The toolkit lets developers choose between JavaScript based client-side rendering and image based server-side rendering. We will present the relative strengths of these two approaches, and unveil the future visual design of Google Charts. For all I/O 2010 sessions, please go to code.google.com. From: GoogleDevelopers. Time: 56:50.
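    To make the "JavaScript based client-side rendering" option concrete, here is a minimal sketch using the Google Visualization API (the interactive side of the toolkit); the element id and the data values are placeholders, not anything from the session.

        <script type="text/javascript" src="https://www.google.com/jsapi"></script>
        <script type="text/javascript">
          // Sketch: client-side rendering with the Google Visualization API; data values are made up.
          google.load('visualization', '1', { packages: ['corechart'] });
          google.setOnLoadCallback(drawChart);

          function drawChart() {
            var data = new google.visualization.DataTable();
            data.addColumn('string', 'Task');
            data.addColumn('number', 'Hours per Day');
            data.addRows([
              ['Work', 8],
              ['Sleep', 7],
              ['Everything else', 9]
            ]);
            var chart = new google.visualization.PieChart(document.getElementById('chart_div'));
            chart.draw(data, { width: 400, height: 240, title: 'Daily activities' });
          }
        </script>
        <div id="chart_div"></div>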

    Read the article

  • Blank New Tab Quick-Fix for Google Chrome

    - by Asian Angel
    If you have other browsers that you use set to “about:blank” for new tabs then you probably feel rather frustrated with Google Chrome’s default New Tab Page. The Blank New Tab extension is the perfect solution to that problem. Before Unless you have a “speed dial/special page” extension installed you are stuck with the default new tab page in Chrome every single time you open a new tab. What if you do not like the default new tab page or “speed dial/special page” setups? After If you are someone who prefers to have a blank page as a new tab then you will love this extension. Once you have it installed you can click to your heart’s content on the “New Tab Button” and see nothing but blank goodness. Sometimes less is more… Note: There are no options to bother with. Conclusion If you prefer a blank page when opening a new tab then the Blank New Tab extension is just what you have been waiting for. Links Download the Blank New Tab extension (Google Chrome Extensions)

    Read the article

  • Easily Close All Tabs in Google Chrome

    - by Asian Angel
    Do you find yourself with a lot of tabs open but dread closing all but one manually? Now you can close all of your tabs with a single click, and have just one ready to go with the Close all Tabs extension. Before We all find ourselves with a lot of tabs open sooner or later. That is not so bad until we realize that we need to close all of them and get back to work. A person could open a new tab and manually close the rest or close the entire window and restart Chrome. But a single click solution would be a lot more convenient. After There it is…the single click solution. Just click the Toolbar Button and BOOM! One fresh window with a single new tab page showing. Now if you could only take the rest of the day off… Conclusion The Close all Tabs extension may not be something that everyone would use, but if you are tired of manually closing all of those tabs then you will definitely like it. Links Download the Close all Tabs extension (Google Chrome Extensions)

    Read the article

  • Link Google Profiles? [closed]

    - by Jon H
    I have two Google accounts: one Gmail account that I use for personal stuff and one hosted domain that I use for business. My Google Profile is linked to my Gmail account - is it possible to associate it with my business account? I'm prompted to ask by the new 'Social Search' feature from Google, which, while great, can only pick up contacts from one account at a time.

    Read the article

  • ASP.Net Web API in Visual Studio 2010

    - by sreejukg
    Recently, for one of my projects, it was necessary to create a couple of services. In the past I was using WCF; since my services are going to be consumed over HTTP, I was thinking of ASP.NET Web API, so I decided to create a Web API project. Now the real issue is that ASP.NET Web API launched after Visual Studio 2010, and I had to use ASP.NET Web API in VS 2010 itself. By default there is no template available for Web API in Visual Studio 2010. Microsoft has made available an update that installs ASP.NET MVC 4 with Web API in Visual Studio 2010. You can find the update at the URL below. http://www.microsoft.com/en-us/download/details.aspx?id=30683 Though the update denotes ASP.NET MVC 4, it also includes ASP.NET Web API. Download the installation media and start the installer. As usual for any update, you need to agree to the terms and conditions. The installation starts straight away once you click the Install button. If everything goes OK, you will see the success message. Now open Visual Studio 2010, and you can see that the ASP.NET MVC 4 project template is available. Now you can create an ASP.NET Web API project using Visual Studio 2010: when you create a new ASP.NET MVC 4 project, you can choose the Web API template. Further reading: http://www.asp.net/web-api/overview/getting-started-with-aspnet-web-api/tutorial-your-first-web-api and http://www.asp.net/mvc/mvc4

    Read the article

  • Big Data – Operational Databases Supporting Big Data – RDBMS and NoSQL – Day 12 of 21

    - by Pinal Dave
    In yesterday’s blog post we learned the importance of the cloud in the Big Data story. In this article we will understand the role of operational databases supporting the Big Data story. Even though we keep on talking about Big Data architecture, it is extremely crucial to understand that a Big Data system can’t just exist in isolation. Many needs of the business can only be fulfilled with the help of operational databases. Just having a system which can analyze big data may not solve every single data problem.

    Real World Example
    Think about it this way: you are using Facebook and you have just updated your information about your current relationship status. In the next few seconds the same information is also reflected in the timeline of your partner as well as a few of your immediate friends. After a while you will notice that the same information is now also available to your remote friends. Later on, when someone searches for all the relationship changes among their friends, your change of relationship will also show up in the same list. Now here is the question: do you think Big Data architecture is doing every single one of these changes? Do you think that the immediate reflection of your relationship change to your family members is also because of the technology used in Big Data? Actually, the answer is that Facebook uses MySQL to do various updates in the timeline as well as for various events we do on their homepage. It is really difficult to part from operational databases in any real-world business. Now we will see a few examples of operational databases.

    - Relational Databases (this blog post)
    - NoSQL Databases (this blog post)
    - Key-Value Pair Databases (tomorrow’s post)
    - Document Databases (tomorrow’s post)
    - Columnar Databases (the day after’s post)
    - Graph Databases (the day after’s post)
    - Spatial Databases (the day after’s post)

    Relational Databases
    We have earlier discussed the RDBMS role in the Big Data story in detail, so we will not cover it extensively here. Relational databases are pretty much everywhere in most of the businesses which have been around for many years. The importance and existence of the relational database are always going to be there as long as there is meaningful structured data around. There are many different kinds of relational databases, for example Oracle, SQL Server, MySQL and many others. If you are looking for an open source and widely accepted database, I suggest trying MySQL, as it has been very popular in the last few years. I also suggest you try out PostgreSQL as well. Besides many other essential qualities, PostgreSQL has a very interesting licensing policy. The PostgreSQL license allows modifications and distribution of the application in open or closed (source) form. One can make any modifications and keep them private, as well as contribute to the community. I believe this one quality makes it much more interesting to use, and it will play a very important role in the future.

    Nonrelational Databases (NoSQL)
    We have also covered nonrelational databases in earlier blog posts. NoSQL actually stands for Not Only SQL databases. There are plenty of NoSQL databases out in the market, and selecting the right one is always very challenging. Here are a few of the properties which are very essential to consider when selecting the right NoSQL database for operational purposes.
    - Data and Query Model
    - Persistence of Data and Design
    - Eventual Consistency
    - Scalability

    Though all of the above properties are interesting to have in any NoSQL database, the one which most attracts me is Eventual Consistency.

    Eventual Consistency
    RDBMS uses ACID (Atomicity, Consistency, Isolation, Durability) as a key mechanism for ensuring data consistency, whereas nonrelational DBMS uses BASE for the same purpose. BASE stands for Basically Available, Soft state and Eventual consistency. Eventual consistency is widely deployed in distributed systems. It is a consistency model used in distributed computing, which expects the unexpected often. In a large distributed system, there are always various nodes joining and various nodes being removed, as they are often using commodity servers. This happens either intentionally or accidentally. Even though one or more nodes are down, it is expected that the entire system still functions normally. Applications should be able to do various updates as well as retrieval of the data successfully without any issue. Additionally, this also means that the system is expected to return the same updated data anytime from all the functioning nodes. Irrespective of when any node joins the system, if it is marked to hold some data it should contain the same updated data eventually. As per Wikipedia, eventual consistency is a consistency model used in distributed computing that informally guarantees that, if no new updates are made to a given data item, eventually all accesses to that item will return the last updated value. In other words: informally, if no additional updates are made to a given data item, all reads of that item will eventually return the same value.

    Tomorrow
    In tomorrow’s blog post we will discuss various other operational databases supporting Big Data.

    Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Big Data, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL
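    To make the eventual consistency idea above a bit more concrete, here is a toy JavaScript sketch (not any particular database, just an illustration) of a write landing on one replica and propagating to the others asynchronously: a read against a lagging replica can return the old value until propagation finishes, after which every replica returns the last updated value.

        // Toy illustration of eventual consistency; not a real database client.
        var replicas = [
          { name: 'node-a', value: 'single' },
          { name: 'node-b', value: 'single' },
          { name: 'node-c', value: 'single' }
        ];

        function write(newValue) {
          replicas[0].value = newValue;            // the write lands on one node first...
          replicas.slice(1).forEach(function (replica, i) {
            setTimeout(function () {               // ...and propagates to the others later
              replica.value = newValue;
            }, (i + 1) * 100);
          });
        }

        function readFrom(replica) {
          console.log(replica.name + ' says: ' + replica.value);
        }

        write('married');
        readFrom(replicas[0]);                     // 'married' right away
        readFrom(replicas[2]);                     // may still say 'single' (stale read)
        setTimeout(function () {
          replicas.forEach(readFrom);              // eventually all replicas agree
        }, 500);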

    Read the article

  • timetable in a jTable

    - by chandra
    I want to create a timetable in a JTable. The top row will display Monday to Sunday, and the left column will display the time of day in 2-hour intervals, e.g. 1st column (0000 - 0200), 2nd column (0200 - 0400), and so on. And if I click a button, the timing will change from a 2-hour interval to a 1-hour interval. I do not want to hardcode it, because I need to do it for 2h, 1h, 30min, 15min, 1min, 30sec and 1sec intervals, and it will take too long for me to hardcode. Can anyone show me an example, or help me create an example for the 2-hour to 1-hour interval, so that I know what to do? The data array is for me to store data; are there any easier ways or shortcuts for me to store it, because if it is a 1-second interval I've got thousands of array entries I need to type out.

        private void oneHour() //1 interval functions
        {
            if(!once)
            {
                initialize();
                once = true;
            }
            jTable.setModel(new javax.swing.table.DefaultTableModel(
                new Object [][] {
                    {"0000 - 0100", data[0][0], data[0][1], data[0][2], data[0][3], data[0][4], data[0][5], data[0][6]},
                    {"0100 - 0200", data[2][0], data[2][1], data[2][2], data[2][3], data[2][4], data[2][5], data[2][6]},
                    {"0200 - 0300", data[4][0], data[4][1], data[4][2], data[4][3], data[4][4], data[4][5], data[4][6]},
                    {"0300 - 0400", data[6][0], data[6][1], data[6][2], data[6][3], data[6][4], data[6][5], data[6][6]},
                    {"0400 - 0600", data[8][0], data[8][1], data[8][2], data[8][3], data[8][4], data[8][5], data[8][6]},
                    {"0600 - 0700", data[10][0], data[4][1], data[10][2], data[10][3], data[10][4], data[10][5], data[10][6]},
                    {"0700 - 0800", data[12][0], data[12][1], data[12][2], data[12][3], data[12][4], data[12][5], data[12][6]},
                    {"0800 - 0900", data[14][0], data[14][1], data[14][2], data[14][3], data[14][4], data[14][5], data[14][6]},
                    {"0900 - 1000", data[16][0], data[16][1], data[16][2], data[16][3], data[16][4], data[16][5], data[16][6]},
                    {"1000 - 1100", data[18][0], data[18][1], data[18][2], data[18][3], data[18][4], data[18][5], data[18][6]},
                    {"1100 - 1200", data[20][0], data[20][1], data[20][2], data[20][3], data[20][4], data[20][5], data[20][6]},
                    {"1200 - 1300", data[22][0], data[22][1], data[22][2], data[22][3], data[22][4], data[22][5], data[22][6]},
                    {"1300 - 1400", data[24][0], data[24][1], data[24][2], data[24][3], data[24][4], data[24][5], data[24][6]},
                    {"1400 - 1500", data[26][0], data[26][1], data[26][2], data[26][3], data[26][4], data[26][5], data[26][6]},
                    {"1500 - 1600", data[28][0], data[28][1], data[28][2], data[28][3], data[28][4], data[28][5], data[28][6]},
                    {"1600 - 1700", data[30][0], data[30][1], data[30][2], data[30][3], data[30][4], data[30][5], data[30][6]},
                    {"1700 - 1800", data[32][0], data[32][1], data[32][2], data[32][3], data[32][4], data[32][5], data[32][6]},
                    {"1800 - 1900", data[34][0], data[34][1], data[34][2], data[34][3], data[34][4], data[34][5], data[34][6]},
                    {"1900 - 2000", data[36][0], data[36][1], data[36][2], data[36][3], data[36][4], data[36][5], data[36][6]},
                    {"2000 - 2100", data[38][0], data[38][1], data[38][2], data[38][3], data[38][4], data[38][5], data[38][6]},
                    {"2100 - 2200", data[40][0], data[40][1], data[40][2], data[40][3], data[40][4], data[40][5], data[40][6]},
                    {"2200 - 2300", data[42][0], data[42][1], data[42][2], data[42][3], data[42][4], data[42][5], data[42][6]},
                    {"2300 - 2400", data[44][0], data[44][1], data[44][2], data[44][3], data[44][4], data[44][5], data[44][6]},
                    {"2400 - 0000", data[46][0], data[46][1], data[46][2], data[46][3], data[46][4], data[46][5], data[46][6]},
                },
                new String [] { "Time/Day", "(Mon)", "(Tue)", "(Wed)", "(Thurs)", "(Fri)", "(Sat)", "(Sun)" }
            ));
        }

        private void twoHour() //2 hour interval functions
        {
            if(!once)
            {
                initialize();
                once = true;
            }
            jTable.setModel(new javax.swing.table.DefaultTableModel(
                new Object [][] {
                    {"0000 - 0200", data[0][0], data[0][1], data[0][2], data[0][3], data[0][4], data[0][5], data[0][6]},
                    {"0200 - 0400", data[4][0], data[4][1], data[4][2], data[4][3], data[4][4], data[4][5], data[4][6]},
                    {"0400 - 0600", data[8][0], data[8][1], data[8][2], data[8][3], data[8][4], data[8][5], data[8][6]},
                    {"0600 - 0800", data[12][0], data[12][1], data[12][2], data[12][3], data[12][4], data[12][5], data[12][6]},
                    {"0800 - 1000", data[16][0], data[16][1], data[16][2], data[16][3], data[16][4], data[16][5], data[16][6]},
                    {"1000 - 1200", data[20][0], data[20][1], data[20][2], data[20][3], data[20][4], data[20][5], data[20][6]},
                    {"1200 - 1400", data[24][0], data[24][1], data[24][2], data[24][3], data[24][4], data[24][5], data[24][6]},
                    {"1400 - 1600", data[28][0], data[28][1], data[28][2], data[28][3], data[28][4], data[28][5], data[28][6]},
                    {"1600 - 1800", data[32][0], data[32][1], data[32][2], data[32][3], data[32][4], data[32][5], data[32][6]},
                    {"1800 - 2000", data[36][0], data[36][1], data[36][2], data[36][3], data[36][4], data[36][5], data[36][6]},
                    {"2000 - 2200", data[40][0], data[40][1], data[40][2], data[40][3], data[40][4], data[40][5], data[40][6]},
                    {"2200 - 2400", data[44][0], data[44][1], data[44][2], data[44][3], data[44][4], data[44][5], data[44][6]}
                },

    Read the article

  • Google Font API & Google Font Directory

    - by joelvarty
    There is a CSS element out there that looks like this:

        @font-face {
          font-family: '';
          src: url('…');
        }

    I’ve only used this tag in a bunch of old apps and sites that were built exclusively for IE back in the day.  Well, it’s part of CSS 3 and Google is going to make it easy to find and share fonts. http://googlecode.blogspot.com/2010/05/introducing-google-font-api-google-font.html   more later - joel
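    As a quick illustration, a sketch of the JavaScript side of that announcement, the WebFont Loader that shipped alongside the Font API; the font families listed here are just examples.

        <script type="text/javascript" src="https://ajax.googleapis.com/ajax/libs/webfont/1/webfont.js"></script>
        <script type="text/javascript">
          // Sketch: load hosted fonts with the WebFont Loader; family names are examples.
          WebFont.load({
            google: { families: ['Tangerine', 'Cantarell'] },
            active: function () {
              // Fonts are ready; it is now safe to apply font-family rules that use them.
            }
          });
        </script>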

    Read the article

  • When is a Google Maps API key required?

    - by Thomas
    Recently Google changed its policy on the use of API keys: you're now supposed to no longer need an API key to place Google Maps on your website. And this worked perfectly. But now I have this map (without an API key) running on my localhost, which works fine, yet as soon as I place it online, I get a popup saying that I need an API key. And on another page on that website, Google Maps does work. Could it maybe have something to do with the fact that the map that doesn't work has a lot (30+) of markers on it? Actually, using an API key wouldn't be a very nice solution for me, as this is part of a WordPress plugin used on many websites.
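    If it does turn out that a key is required for that page, the change is confined to the script include's query string. A hedged example, where YOUR_API_KEY is a placeholder for a key obtained from the Google APIs Console:

        <!-- Keyless load, as currently used -->
        <script type="text/javascript" src="http://maps.googleapis.com/maps/api/js?sensor=false"></script>

        <!-- The same load with a key; YOUR_API_KEY is a placeholder -->
        <script type="text/javascript" src="http://maps.googleapis.com/maps/api/js?key=YOUR_API_KEY&sensor=false"></script>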

    Read the article

  • Passing multiple POST parameters to Web API Controller Methods

    - by Rick Strahl
    ASP.NET Web API introduces a new API for creating REST APIs and making AJAX callbacks to the server. This new API provides a host of great new functionality that unifies many of the features of the various AJAX/REST APIs that Microsoft created before it - ASP.NET AJAX and WCF REST specifically - and combines them into a single, more consistent API. Web API addresses many of the concerns that developers had with these older APIs, namely that it was very difficult to build consistent REST style resource APIs easily. While Web API provides many new features and makes many scenarios much easier, a lot of the focus has been on making it easier to build REST compliant APIs that are focused on resource based solutions and HTTP verbs. But RPC style calls, which are common with AJAX callbacks in Web applications, have gotten a lot less focus, and there are a few scenarios that are not that obvious, especially if you're expecting Web API to provide functionality similar to ASP.NET AJAX style AJAX callbacks.

    RPC vs. 'Proper' REST
    RPC style HTTP calls mimic calling a method with parameters and returning a result. Rather than mapping explicit server side resources or 'nouns', RPC calls tend to simply map a server side operation, passing in parameters and receiving a typed result, where parameters and result values are marshaled over HTTP. Typically RPC calls - like SOAP calls - tend to always be POST operations rather than following HTTP conventions and using the GET/POST/PUT/DELETE etc. verbs to implicitly determine what operation needs to be fired. RPC might not be considered 'cool' anymore, but for typical private AJAX backend operations of a Web site I'd wager that a large percentage of use cases of Web API will fall towards RPC style calls rather than 'proper' REST style APIs. Web applications that have needs for things like live validation against data, filling data based on user inputs, or handling small UI updates often don't lend themselves very well to limited HTTP verb usage. It might not be what the cool kids do, but I don't see RPC calls getting replaced by proper REST APIs any time soon. Proper REST has its place - for 'real' API scenarios that manage and publish/share resources - but for more transactional operations RPC seems a better choice and much easier to implement than trying to shoehorn a boatload of endpoint methods into a few HTTP verbs. In any case, Web API does a good job of providing both the RPC abstraction and the HTTP Verb/REST abstraction. RPC works well out of the box, but there are some differences, especially if you're coming from ASP.NET AJAX services or WCF REST, when it comes to multiple parameters.

    Action Routing for RPC Style Calls
    If you've looked at Web API demos you've probably seen a bunch of examples of how to create HTTP Verb based routing endpoints. Verb based routing essentially maps a controller and then uses HTTP verbs to map the methods that are called in response to HTTP requests. This works great for resource APIs but doesn't work so well when you have many operational methods in a single controller. HTTP Verb routing is limited to the few HTTP verbs available (plus separate method signatures) and - worse than that - you can't easily extend the controller with custom routes or action routing beyond that.
    Thankfully Web API also supports action based routing, which allows you to create RPC style endpoints fairly easily:

        RouteTable.Routes.MapHttpRoute(
            name: "AlbumRpcApiAction",
            routeTemplate: "albums/{action}/{title}",
            defaults: new
            {
                title = RouteParameter.Optional,
                controller = "AlbumApi",
                action = "GetAblums"
            }
        );

    This uses traditional MVC style {action} method routing, which is different from the HTTP verb based routing you might have read a bunch about in conjunction with Web API. Action based routing like the above lets you specify an endpoint method in a Web API controller either via the {action} parameter in the route string or via a default value for custom routes. Using routing you can pass multiple parameters either on the route itself, or pass parameters on the query string, via ModelBinding or content value binding. For most common scenarios this actually works very well. As long as you are passing either a single complex type via a POST operation, or multiple simple types via query string or POST buffer, there's no issue. But if you need to pass multiple parameters, as was easily done with WCF REST or ASP.NET AJAX, things are not so obvious. Web API has no issue allowing for a single parameter like this:

        [HttpPost]
        public string PostAlbum(Album album)
        {
            return String.Format("{0} {1:d}", album.AlbumName, album.Entered);
        }

    There are actually two ways to call this endpoint:

        albums/PostAlbum

    Using the Model Binder with plain POST values
    In this mechanism you're sending plain urlencoded POST values to the server, which the ModelBinder then maps to the parameter. Each property value is matched to each matching POST value. This works similar to the way that MVC's ModelBinder works. Here's how you can POST using the ModelBinder and jQuery:

        $.ajax({
            url: "albums/PostAlbum",
            type: "POST",
            data: { AlbumName: "Dirty Deeds", Entered: "5/1/2012" },
            success: function (result) {
                alert(result);
            },
            error: function (xhr, status, p3, p4) {
                var err = "Error " + " " + status + " " + p3;
                if (xhr.responseText && xhr.responseText[0] == "{")
                    err = JSON.parse(xhr.responseText).message;
                alert(err);
            }
        });

    Here's what the POST data looks like for this request:

    The model binder and its straight form based POST mechanism is great for posting data directly from HTML pages to model objects. It avoids having to do manual conversions for many operations and is a great boon for AJAX callback requests.

    Using the Web API JSON Formatter
    The other option is to post data using a JSON string. The process for this is similar, except that you create a JavaScript object and serialize it to JSON first.

        album = {
            AlbumName: "PowerAge",
            Entered: new Date(1977, 0, 1)
        }
        $.ajax({
            url: "albums/PostAlbum",
            type: "POST",
            contentType: "application/json",
            data: JSON.stringify(album),
            success: function (result) {
                alert(result);
            }
        });

    Here the data is sent using a JSON object rather than form data, and the data is JSON encoded over the wire. The trace reveals that the data is sent using plain JSON (source above), which is a little more efficient since there's no UrlEncoding that occurs. BTW, notice that Web API automatically deals with the date. I provided the date as a plain string rather than a JavaScript date value, and the Formatter and ModelBinder both automatically map the date properly to the Entered DateTime property of the Album object.

    Passing multiple Parameters to a Web API Controller
    Single parameters work fine in either of these RPC scenarios, and that's to be expected. ModelBinding always works against a single object because it maps a model.
    But what happens when you want to pass multiple parameters? Consider an API controller method that has a signature like the following:

        [HttpPost]
        public string PostAlbum(Album album, string userToken)

    Here I'm asking to pass two objects to an RPC method. Is that possible? This used to be fairly straightforward with either WCF REST or ASP.NET AJAX ASMX services, but as far as I can tell this is not directly possible using a POST operation with Web API. There are a few workarounds that you can use to make this work:

    Use both POST *and* QueryString Parameters in Conjunction
    If you have both complex and simple parameters, you can pass simple parameters on the query string. The above would actually work with /album/PostAlbum?userToken=sekkritt but that's not always possible. In this example it might not be a good idea to pass a user token on the query string, though. It also won't work if you need to pass multiple complex objects, since query string values do not support complex type mapping. They only work with simple types.

    Use a single Object that wraps the two Parameters
    If you go by service based architecture guidelines, every service method should always pass and return a single value only. The input should wrap potentially multiple input parameters and the output should convey status as well as provide the result value. You typically have an xxxRequest and an xxxResponse class that wrap the inputs and outputs. Here's what this method might look like:

        public PostAlbumResponse PostAlbum(PostAlbumRequest request)
        {
            var album = request.Album;
            var userToken = request.UserToken;

            return new PostAlbumResponse()
            {
                IsSuccess = true,
                Result = String.Format("{0} {1:d} {2}", album.AlbumName, album.Entered, userToken)
            };
        }

    with these support types:

        public class PostAlbumRequest
        {
            public Album Album { get; set; }
            public User User { get; set; }
            public string UserToken { get; set; }
        }

        public class PostAlbumResponse
        {
            public string Result { get; set; }
            public bool IsSuccess { get; set; }
            public string ErrorMessage { get; set; }
        }

    To call this method you now have to assemble these objects on the client and send them up as JSON:

        var album = {
            AlbumName: "PowerAge",
            Entered: "1/1/1977"
        }
        var user = {
            Name: "Rick"
        }
        var userToken = "sekkritt";

        $.ajax({
            url: "samples/PostAlbum",
            type: "POST",
            contentType: "application/json",
            data: JSON.stringify({ Album: album, User: user, UserToken: userToken }),
            success: function (result) {
                alert(result.Result);
            }
        });

    I assemble the individual types first and then combine them in the data: property of the $.ajax() call into the actual object passed to the server, which mimics the structure of the PostAlbumRequest server class that has Album, User and UserToken properties. This works well enough, but it gets tedious if you have to create Request and Response types for each method signature. If you have common parameters that are always passed (like you always pass an album or usertoken) you might be able to abstract this to use a single object that gets reused for all methods, but this gets confusing too: overload a single 'parameter' too much and it becomes a nightmare to decipher what your method actually can use.

    Use JObject to parse multiple Property Values out of an Object
    If you recall, ASP.NET AJAX and WCF REST used a 'wrapper' object to make default AJAX calls.
    Rather than directly calling a service you always passed an object which contained properties for each parameter:

        { parm1: Value, parm2: Value2 }

    WCF REST/ASP.NET AJAX would then parse these top level property values and map them to the parameters of the endpoint method. This automatic type wrapping functionality is no longer available directly in Web API, but since Web API now uses JSON.NET for its JSON serializer, you can actually simulate that behavior with a little extra code. You can use the JObject class to receive a dynamic JSON result and then use the dynamic cast of JObject to walk through the child objects and even parse them into strongly typed objects. Here's how to do this on the API controller end:

        [HttpPost]
        public string PostAlbum(JObject jsonData)
        {
            dynamic json = jsonData;
            JObject jalbum = json.Album;
            JObject juser = json.User;
            string token = json.UserToken;

            var album = jalbum.ToObject<Album>();
            var user = juser.ToObject<User>();

            return String.Format("{0} {1} {2}", album.AlbumName, user.Name, token);
        }

    This is clearly not as nice as having the parameters passed directly, but it works to allow you to pass multiple parameters and access them using Web API. JObject is JSON.NET's generic object container, which sports a nice dynamic interface that allows you to walk through the object's properties using standard 'dot' object syntax. All you have to do is cast the object to dynamic to get access to the property interface of the JSON type. Additionally, JObject also allows you to parse JObject instances into strongly typed objects, which enables us here to retrieve the two objects passed as parameters from this jQuery code:

        var album = {
            AlbumName: "PowerAge",
            Entered: "1/1/1977"
        }
        var user = {
            Name: "Rick"
        }
        var userToken = "sekkritt";

        $.ajax({
            url: "samples/PostAlbum",
            type: "POST",
            contentType: "application/json",
            data: JSON.stringify({ Album: album, User: user, UserToken: userToken }),
            success: function (result) {
                alert(result);
            }
        });

    Summary
    ASP.NET Web API brings many new features and many advantages over the older Microsoft AJAX and REST APIs, but realize that some things, like passing multiple strongly typed object parameters, will work a bit differently. It's not insurmountable, but it's good to know what options are available to simulate this behavior. Now let me say here that it's probably not a good practice to pass a bunch of parameters to an API call. Ideally APIs should be closely factored to accept single parameters, or at least a single content parameter along with some identifier parameters that can be passed on the querystring. But saying that doesn't mean that occasionally you don't run into a situation where you have the need to pass several objects to the server, and all three of the options I mentioned might have merit in different situations. For now, I'm sure the question of how to pass multiple parameters will come up quite a bit from people migrating WCF REST or ASP.NET AJAX code to Web API.
    At least there are options available to make it work. © Rick Strahl, West Wind Technologies, 2005-2012. Posted in Web Api.

    Read the article

  • Interacting with Google Docs after logging into my Google Marketplace app - how

    - by Ali
    Hi guys, I have a Google Apps account set up and have even set up a simple hello-world application from the available samples in the tutorial. However, I need to set it up so that I am able to interact with the Google Docs account associated with the account which has added my application. To interact with Google Docs, I am aware that a token is requested from Google upon authentication and verification of the account; however, that is in a situation where you code specifically for interacting with Google Docs. I'm talking about having access to the Google Docs of the account which has added my application, so my application can be used to upload documents to Google Docs and make references to them. Basically, my application is a resource management application and it needs to be able to store references to Google Docs.

    Read the article

  • Google Script / Spreadsheet -- Shared permissions with Installed Trigger onEdit

    - by user1761852
    I'm using an installable trigger inside the spreadsheet to call onUpdateBilling(). The purpose of this script is that, on edit, based on the content of the "billed" column (i.e. "d"), it will highlight the entire column in the predetermined color. The page running the script is shared with collaborators and they have been given edit access. My expectation at this point is that the script should run with the owner's permissions. My shared users are unable to run the script and get the error "You don't have permission for this action." I've reached the limits of my knowledge and google-fu for this workaround. Any help allowing my collaborators to use this is appreciated. Script:

        function onUpdateBilling(e) {
          var statusCol = 16;            // replace with the column index of Status column A=1,B=2,etc
          var sheetName = "Temple Log";  // replace with actual name of sheet containing Status
          var cell = SpreadsheetApp.getActiveSheet().getActiveCell();
          var sheet = cell.getSheet();
          if (cell.getColumnIndex() != statusCol || sheet.getName() != sheetName) return;
          var row = cell.getRowIndex();
          var status = cell.getValue();
          // change colors to meet your needs
          var color;
          if (status == "D" || status == "d") { color = "red"; }
          else if (status >= 1) { color = "yellow"; }
          else if (status == "X" || status == "x") { color = "black"; }
          else if (status == "") { color = "white"; }
          else { color = "white"; }
          sheet.getRange(row + ":" + row).setBackgroundColor(color);
        }
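    For reference, a minimal sketch of how an installable on-edit trigger is typically created (run once from the script editor while logged in as the owner), so that onUpdateBilling fires under the owner's authorization. Whether this alone resolves the collaborators' permission error depends on how the existing trigger was installed and on the domain's sharing settings.

        // Run this once as the spreadsheet owner from the script editor.
        // It installs an on-edit trigger that calls onUpdateBilling under the owner's authorization.
        function createBillingTrigger() {
          ScriptApp.newTrigger('onUpdateBilling')
              .forSpreadsheet(SpreadsheetApp.getActive())
              .onEdit()
              .create();
        }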

    Read the article

  • Should one always know what an API is doing just by looking at the code?

    - by markmnl
    Recently I have been developing my own API, and with that invested interest in API design I have been keenly interested in how I can improve it. One aspect that has come up a couple of times (not from users of my API, but in my observing discussion about the topic) is: one should know just by looking at the code calling the API what it is doing. For example, see this discussion on GitHub for the discourse repo; it goes something like:

        foo.update_pinned(true, true);

    Just by looking at the code (without knowing the parameter names, documentation, etc.) one cannot guess what it is going to do - what does the 2nd argument mean? The suggested improvement is to have something like:

        foo.pin()
        foo.unpin()
        foo.pin_globally()

    And that clears things up (the 2nd arg was whether to pin foo globally, I am guessing), and I agree that in this case the latter would certainly be an improvement. However, I believe there can be instances where methods to set different but logically related state would be better exposed as one method call rather than separate ones, even though you would not know what it is doing just by looking at the code. (So you would have to resort to looking at the parameter names and documentation to find out, which personally I would always do no matter what if I am unfamiliar with an API.) For example, I expose one method SetVisibility(bool, string, bool) on a FalconPeer, and I acknowledge that just looking at the line:

        falconPeer.SetVisibility(true, "aerw3", true);

    you would have no idea what it is doing. It is setting 3 different values that control the "visibility" of the falconPeer in the logical sense: accept join requests, only with password, and reply to discovery requests. Splitting this out into 3 method calls could lead a user of the API to set one aspect of "visibility" while forgetting to set the others, which I force them to think about by only exposing the one method to set all aspects of "visibility". Furthermore, when the user wants to change one aspect they almost always want to change another aspect, and can now do so in one call.
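    As an illustration of one middle ground, keeping a single call that forces every aspect of "visibility" to be considered while making the call site self-describing, here is a sketch in JavaScript; FalconPeer here is a stand-in object and the option names are invented for the example, not the actual API.

        // Sketch: same single-call surface, but the aspects are named at the call site.
        function FalconPeer() {}

        FalconPeer.prototype.setVisibility = function (options) {
          // All three aspects must be supplied, so callers still think about each one.
          ['acceptJoinRequests', 'joinPassword', 'replyToDiscoveryRequests'].forEach(function (key) {
            if (!(key in options)) throw new Error('setVisibility: missing option "' + key + '"');
          });
          this.acceptJoinRequests = options.acceptJoinRequests;
          this.joinPassword = options.joinPassword;
          this.replyToDiscoveryRequests = options.replyToDiscoveryRequests;
        };

        // The call site now reads like the documentation:
        var peer = new FalconPeer();
        peer.setVisibility({
          acceptJoinRequests: true,
          joinPassword: "aerw3",
          replyToDiscoveryRequests: true
        });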

    Read the article

  • What is the evidence that an API has exceeded its orthogonality in the context of types?

    - by hawkeye
    Wikipedia defines software orthogonality as: orthogonality in a programming language means that a relatively small set of primitive constructs can be combined in a relatively small number of ways to build the control and data structures of the language. The term is most frequently used regarding assembly instruction sets, as in orthogonal instruction set. Jason Coffin has defined software orthogonality as: highly cohesive components that are loosely coupled to each other produce an orthogonal system. C.Ross has defined software orthogonality as: the property that means "Changing A does not change B". An example of an orthogonal system would be a radio, where changing the station does not change the volume and vice-versa.

    Now there is a hypothesis published in the ACM Queue by Tim Bray, which some have called the Bánffy-Bray Type System Criteria, which he summarises as: static typing's attractiveness is a direct function (and dynamic typing's an inverse function) of API surface size; dynamic typing's attractiveness is a direct function (and static typing's an inverse function) of unit testing workability. Now Stuart Halloway has reformulated Bánffy-Bray as: the more your APIs exceed orthogonality, the better you will like static typing.

    My question is: What is the evidence that an API has exceeded its orthogonality in the context of types?

    Clarification
    Tim Bray introduces the idea of orthogonality and APIs. Where you have one API and it is mainly dealing with Strings (i.e. a web server serving requests and responses), then a uni-typed language (Python, Ruby) is 'aligned' to that API, because the type system of these languages isn't sophisticated, but it doesn't matter since you're dealing with Strings anyway. He then moves on to Android programming, which has a whole bunch of sensor APIs, which are all 'different' from the web server API that he was working on previously. Because you're not just dealing with Strings, but with different types, the API is non-orthogonal. Tim's point is that there is an empirical relationship between your 'liking' of types and the API you're programming against (i.e. a subjective point is actually objective depending on your context).

    Read the article

  • Oracle Data Mining a Star Schema: Telco Churn Case Study

    - by charlie.berger
    There is a complete and detailed Telco Churn case study "How to" blog series just posted by Ari Mozes, ODM Dev. Manager. In it, Ari provides detailed guidance on how to leverage various strengths of Oracle Data Mining, including the ability to:

    - mine star schemas and join tables and views together to obtain a complete 360 degree view of a customer
    - combine transactional data, e.g. call record detail (CDR) data, etc.
    - define complex data transformation, model build and model deploy analytical methodologies inside the Database

    His blog is posted in a multi-part series. Below are some opening excerpts from the first 3 blog entries. This is an excellent resource for any novice to skilled data miner who wants to gain competitive advantage by mining their data inside the Oracle Database. Many thanks Ari!

    Mining a Star Schema: Telco Churn Case Study (1 of 3)
    One of the strengths of Oracle Data Mining is the ability to mine star schemas with minimal effort. Star schemas are commonly used in relational databases, and they often contain rich data with interesting patterns. While dimension tables may contain interesting demographics, fact tables will often contain user behavior, such as phone usage or purchase patterns. Both of these aspects - demographics and usage patterns - can provide insight into behavior. Churn is a critical problem in the telecommunications industry, and companies go to great lengths to reduce the churn of their customer base. One case study [1] describes a telecommunications scenario involving understanding, and identification of, churn, where the underlying data is present in a star schema. That case study is a good example for demonstrating just how natural it is for Oracle Data Mining to analyze a star schema, so it will be used as the basis for this series of posts......

    Mining a Star Schema: Telco Churn Case Study (2 of 3)
    This post will follow the transformation steps as described in the case study, but will use Oracle SQL as the means for preparing data. Please see the previous post for background material, including links to the case study and to scripts that can be used to replicate the stages in these posts.

    1) Handling missing values for call data records
    The CDR_T table records the number of phone minutes used by a customer per month and per call type (tariff). For example, the table may contain one record corresponding to the number of peak (call type) minutes in January for a specific customer, and another record associated with international calls in March for the same customer. This table is likely to be fairly dense (most type-month combinations for a given customer will be present) due to the coarse level of aggregation, but there may be some missing values. Missing entries may occur for a number of reasons: the customer made no calls of a particular type in a particular month, the customer switched providers during the timeframe, or perhaps there is a data entry problem. In the first situation, the correct interpretation of a missing entry would be to assume that the number of minutes for the type-month combination is zero. In the other situations, it is not appropriate to assume zero, but rather derive some representative value to replace the missing entries. The referenced case study takes the latter approach.
    The data is segmented by customer and call type, and within a given customer-call type combination, an average number of minutes is computed and used as a replacement value. In SQL, we need to generate additional rows for the missing entries and populate those rows with appropriate values. To generate the missing rows, Oracle's partition outer join feature is a perfect fit.

        select cust_id, cdre.tariff, cdre.month, mins
        from cdr_t cdr partition by (cust_id)
             right outer join
             (select distinct tariff, month from cdr_t) cdre
             on (cdr.month = cdre.month and cdr.tariff = cdre.tariff);

    .......

    Mining a Star Schema: Telco Churn Case Study (3 of 3)
    Now that the "difficult" work is complete - preparing the data - we can move to building a predictive model to help identify and understand churn. The case study suggests that separate models be built for different customer segments (high, medium, low, and very low value customer groups). To reduce the data to a single segment, a filter can be applied:

        create or replace view churn_data_high as
        select * from churn_prep where value_band = 'HIGH';

    It is simple to take a quick look at the predictive aspects of the data on a univariate basis. While this does not capture the more complex multi-variate effects as would occur with the full-blown data mining algorithms, it can give a quick feel as to the predictive aspects of the data as well as validate the data preparation steps. Oracle Data Mining includes a predictive analytics package which enables quick analysis.

        begin
          dbms_predictive_analytics.explain(
            'churn_data_high', 'churn_m6', 'expl_churn_tab');
        end;
        /
        select * from expl_churn_tab where rank <= 5 order by rank;

        ATTRIBUTE_NAME       ATTRIBUTE_SUBNAME EXPLANATORY_VALUE RANK
        -------------------- ----------------- ----------------- ----------
        LOS_BAND                                      .069167052          1
        MINS_PER_TARIFF_MON  PEAK-5                   .034881648          2
        REV_PER_MON          REV-5                    .034527798          3
        DROPPED_CALLS                                 .028110322          4
        MINS_PER_TARIFF_MON  PEAK-4                   .024698149          5

    From the above results, it is clear that some predictors do contain information to help identify churn (explanatory value > 0). The strongest uni-variate predictor of churn appears to be the customer's (binned) length of service. The second strongest churn indicator appears to be the number of peak minutes used in the most recent month. The subname column contains the interior piece of the DM_NESTED_NUMERICALS column described in the previous post. By using the object relational approach, many related predictors are included within a single top-level column. .....

    NOTE: These are just EXCERPTS. Click here to start reading the Oracle Data Mining a Star Schema: Telco Churn Case Study from the beginning.

    Read the article

  • Google Chrome sync: limit for bookmarks & extensions?

    - by Lyubomyr Shaydariv
    Actually, Chrome is my favorite web browser, and one of its most powerful features is synchronizing the actual data into a Google account. Over the last few years I have gained a lot of bookmarks, and from time to time I browse the extensions gallery to find new valuable ones. Really, synchronizing between my work and home PCs freed me from manual sync. But in recent months I have experienced strange glitches. I guess they may be caused by a lot of stored bookmarks (potentially about 3K [in estimate], but please don't ask why :)) and extensions (about 130 installed but only 10-15 daily used). I can mention the following strange things:

    - Recently added bookmarks sometimes are not synchronized (e.g. I put a bookmark at work, but it's not guaranteed I can see it that evening), despite about:sync indicating a good sync process.
    - Sometimes recently modified bookmarks appear in either (let's call them) "last at home" or "last at work" bookmark folders.
    - Sometimes bookmarks are not synced at all. (Moreover, Chromium versions may even crash.)
    - Extensions are not synced at all now.
    - Perhaps there's another reason, but Google Mail Checker and Google Reader Notifier do not show indicators of incoming e-mails and news.

    ... I'm not sure, but it looks like I might have exceeded Chrome's internal sync limits... Is that right? Are there any workarounds, or should I make a massive bookmarks/extensions cleanup (I really don't want to :()? I mostly use Google Chrome Canary builds, and my current one is 12.0.732.0. Thanks in advance.

    Update #1 (2011-04-19): I removed about 50 extensions that I'm not interested in (or that I consider trash), and got some decent results:

    - The extensions count is below 100 (exactly 97);
    - The chrome://extensions page does not get slow (or even frozen) any more on enabling/disabling/uninstalling extensions;
    - The extensions seem to be synchronized again.

    Read the article

  • Integrating Google Apps with Salesforce using Google Apps Script

    A very special Google Developers Live episode, in which Arun Nagarajan talks about using Google Apps Script with Salesforce to show how easily developers can integrate Salesforce with Google Sheets, Gmail, Google Docs and other Google Apps products. Download the full source code of these demo scripts here: github.com. From: GoogleDevelopers. Time: 41:23.

    Read the article

  • Google I/O 2010 - Using Google Chrome Frame

    Google I/O 2010 - Using Google Chrome Frame. Chrome 201, presented by Alex Russell. Google Chrome Frame brings the HTML5 platform and fast JavaScript performance to IE6, 7 & 8. This session will cover the latest on Google Chrome Frame, what it can do for you and your customers, how it can be used, and a sneak peek into what's planned next. For all I/O 2010 sessions, please go to code.google.com. From: GoogleDevelopers. Time: 50:16.

    Read the article

< Previous Page | 3 4 5 6 7 8 9 10 11 12 13 14  | Next Page >