Search Results

Search found 25570 results on 1023 pages for 'low level api'.


  • Google I/O 2012 - Best Practices for Maps API Developers

    Susannah Raub, Jez Fletcher. The Google Maps API makes it easy to add simple maps to your applications, but we want to take you to the next level. In this session we reveal our recommended best practices for Maps API developers, including developer tools, testing, and API features that will save you time, avoid a headache or two, and delight your users. For all I/O 2012 sessions, go to developers.google.com From: GoogleDevelopers Views: 400 8 ratings Time: 48:52 More in Science & Technology


  • Problems with Level Architect, Citrus Engine, Flash

    - by Idan
    I am using the Citrus Engine to make a Flash game, and the Level Architect doesn't work well for me. Firstly, when I first launch it and open my project and my level, nothing is shown: no assets, and none of what I have previously done with my level. To fix it, I open another project. The other project works fine, meaning I can see the assets and the level. Then I go back to the actual project I am working on, and the problem is fixed, only it does not fix the second problem: I can't add my own assets. I follow the manual and add tags like this: [Property(value="0")] But it doesn't change a thing in the Level Architect window (even after I close and reopen it). Any ideas? Thanks! Here's the code of the class I want to be shown in the Level Architect:

    package
    {
        import com.citrusengine.objects.PhysicsObject;
        import com.citrusengine.objects.platformer.Sensor;

        import flash.utils.clearTimeout;
        import flash.utils.setTimeout;

        /**
         * @author Aymeric
         */
        public class Teleporter extends Sensor
        {
            [Property(value="0")]
            public var endX:Number = 0;

            [Property(value="0")]
            public var endY:Number = 0;

            public var object:PhysicsObject;

            [Property(value="0")]
            public var time:Number = 0;

            public var needToTeleport:Boolean = false;

            protected var _teleporting:Boolean = false;

            private var _teleportTimeoutID:uint;

            public function Teleporter(name:String, params:Object = null)
            {
                super(name, params);
            }

            override public function destroy():void
            {
                clearTimeout(_teleportTimeoutID);
                super.destroy();
            }

            override public function update(timeDelta:Number):void
            {
                super.update(timeDelta);

                if (needToTeleport)
                {
                    _teleporting = true;
                    _teleportTimeoutID = setTimeout(_teleport, time);
                    needToTeleport = false;
                }

                _updateAnimation();
            }

            protected function _teleport():void
            {
                _teleporting = false;
                object.x = endX;
                object.y = endY;
                clearTimeout(_teleportTimeoutID);
            }

            protected function _updateAnimation():void
            {
                if (_teleporting)
                {
                    _animation = "teleport";
                }
                else
                {
                    _animation = "normal";
                }
            }
        }
    }


  • Google I/O 2010 - Moving beyond markers: Advanced Maps API customization

    Geo 301. Jez Fletcher, David Day. With such a large number of Google Maps API sites online, it can be hard to make your site stand out from the crowd. This session covers ways in which you can enhance your Maps API application to truly differentiate it, including customizing your overlays, controls, and map. For all I/O 2010 sessions, please go to code.google.com From: GoogleDevelopers Views: 16 0 ratings Time: 36:38 More in Science & Technology


  • OPN Diamond Level Criteria Update

    - by Cinzia Mascanzoni
    On June 1, 2013, the criteria for Oracle PartnerNetwork members to attain the prestigious Diamond level will change and all members at the Diamond level at that point will be required to meet the new criteria. This change underscores the requirement for these elite partners to engage across Oracle’s broad product portfolio. Refer to the Diamond Level Requirements on the OPN Portal here for more detail.


  • Change Logging Level for SOA 11g

    - by James Taylor
    I’m sure there are many blogs out there that have this solution, but I seem to get asked this question a lot, so I thought I would post it here for my convenience.

    1. Log in to Enterprise Manager, e.g. http://localhost:7001/em
    2. Expand the SOA folder, right-click the soa-infra (soa_server1) node and select Logs - Log Configuration.
    3. Navigate to the component you want to monitor and change the log level. It is possible to change it at a parent level if required, but setting the level to FINEST at a parent level is not recommended, as it will generate a lot of logging.
    4. Make sure you apply the change for it to take effect.

    Simple as that.


  • How difficult is it to change from embedded programming to high-level programming? [on hold]

    - by anudeep shetty
    I have a background in Computer Science. After finishing my Bachelor's degree, I worked for over a year on embedded programming on Linux file systems. I then pursued my Master's, where most of my course choices involved working on web, Java and databases. Now I have an offer from a company for a job working at the OS level. The company is pretty good, but I feel that my Master's would go to waste. I wanted to know: is it common for a Computer Science major to work on low-level coding? Is there a possibility that I work at this company for some years and then move on to an opportunity where I can work on high-level coding? Also, is working on low-level programming a safe choice in terms of job opportunities?


  • A Patent for Workload Management Based on Service Level Objectives

    - by jsavit
    I'm very pleased to announce that after a tiny :-) wait of about 5 years, my patent application for a workload manager was finally approved.

    Background

    Many operating systems have a resource manager which lets you control machine resources. For example, Solaris provides controls for CPU with several options:

    - shares for proportional CPU allocation (if you have twice as many shares as me, and we are competing for CPU, you'll get about twice as many CPU cycles),
    - dedicated CPU allocation, in which a number of CPUs are exclusively dedicated to an application's use (you can say that a zone or project "owns" 8 CPUs on a 32 CPU machine, for example), and
    - capped CPU, in which you specify the upper bound, or cap, of how much CPU an application gets (for example, you can throttle an application to 0.125 of a CPU).

    (This isn't meant to be an exhaustive list of Solaris RM controls.)

    Workload management

    Useful as that is (and tragic that some other operating systems have little resource management and isolation, and frighten people into running only one app per OS instance - and wastefully sizing every server for the peak workload it might experience), that's not really workload management. With resource management one controls the resources and hopes that's enough to meet application service objectives. In fact, we hold resource distribution constant, see if that was good enough, and adjust resource distribution if that didn't meet service level objectives. Here's an example of what happens today: Let's try 30% dedicated CPU. Not enough? Let's try 80%. Oh, that's too much - we're achieving much better response time than the objective, but other workloads are starving. Let's back that off and try again. It's not the process I object to - it's that we too often do this manually. Worse, we sometimes identify and adjust the wrong resource and fiddle with that to no useful result. Back in my days as a customer managing large systems, one of my users would call me up to beg for a "CPU boost": Me: "It won't make any difference - there's plenty of spare CPU to be had, and your application is completely I/O bound." User: "Please do it anyway." Me: "Oh, all right, but it won't do you any good." (I did, because he was a friend, but it didn't help.)

    Prior art

    There are some operating environments that take a stab at workload management (rather than resource management), but I find them lacking. I know of one that uses synthetic "service units" composed of the sum of CPU, I/O and memory allocations multiplied by weighting factors. A workload is set to make a target rate of service units consumed per second. But this seems to be missing a key point: what is the relationship between artificial "service units" and actually meeting a throughput or response time objective? What if I get plenty of one of the components (so I am getting enough service units), but not enough of the resource that's needed to remove the bottleneck?

    Actual workload management

    That's not really the answer either. What is needed is to specify a workload's service levels in terms of externally visible metrics that are meaningful to a business, such as response times or transactions per second, and have the workload manager figure out which resources are not being adequately provided, and then adjust them as needed. If an application is not meeting its service level objectives and the reason is that it's not getting enough CPU cycles, adjust its CPU resource accordingly. If the reason is that the application isn't getting enough RAM to keep its working set in memory, then adjust its RAM assignment appropriately so it stops swapping. Simple idea, but that's a task we keep dumping on system administrators. In other words: don't hold the number of CPU shares constant and watch the achievement of service level vary. Instead, hold the service level constant, and dynamically adjust the number of CPU shares (or amount of other resources like RAM or I/O bandwidth) in order to meet the objective.

    Instrumenting non-instrumented applications

    There's one little problem here: how do I measure application performance in a way that relates to a service level? I don't want to do it based on internal resources like the number of CPU seconds an application received per minute - we need to make resource decisions based on externally visible and meaningful measures of performance, not synthetic items or internal resource counters. If I have a way of marking the beginning and end of a transaction, I can then measure whether or not the application is meeting an objective based on it. If I can observe the delay factors for an application, I can see which resource shortages are slowing an application enough to keep it from meeting its objectives. I can then adjust resource allocations to relieve those shortages. Fortunately, Solaris provides facilities for both marking application progress and determining what factors cause application latency. The Solaris DTrace facility lets me introspect on application behavior: in particular I can see events like "receive a web hit" and "respond to that web hit", so I can get transaction rate and response time. DTrace (and tools like prstat) lets me see where latency is being added to an application, so I know which resource to adjust.

    Summary

    After a delay of a mere few years, I am the proud creator of a patent (advice to anyone interested in going through the process: don't hold your breath!). The fundamental idea is fairly simple: instead of holding resources constant and suffering variable levels of success meeting service level objectives, properly characterise the service level objective in meaningful terms, instrument the application to see if it's meeting the objective, and then have a workload manager change resource allocations to remove delays preventing service level attainment. I've done it by hand for a long time - I think that's what a computer should do for me.
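
    To make the loop concrete, here is an illustrative sketch (emphatically not the patented implementation) of holding the objective constant and turning the resource knob instead; MeasureResponseTimeMs and SetCpuShares are hypothetical stand-ins for the DTrace instrumentation and resource-manager hooks described above, and only the CPU knob is shown for brevity:

    using System;
    using System.Threading;

    class WorkloadManagerSketch
    {
        const double ObjectiveMs = 200.0;  // externally meaningful SLO: response time
        static int _cpuShares = 100;       // the one knob this sketch is allowed to turn

        static void ControlLoop()
        {
            while (true)
            {
                // e.g. derived from DTrace begin/end transaction probes
                double measuredMs = MeasureResponseTimeMs();
                double error = measuredMs / ObjectiveMs;

                if (error > 1.1)
                    _cpuShares = (int)(_cpuShares * error);              // missing the SLO: grant more shares
                else if (error < 0.9)
                    _cpuShares = Math.Max(1, (int)(_cpuShares * error)); // beating it comfortably: give shares back

                SetCpuShares(_cpuShares);
                Thread.Sleep(5000);  // re-evaluate periodically
            }
        }

        static double MeasureResponseTimeMs() { /* hypothetical instrumentation hook */ return ObjectiveMs; }
        static void SetCpuShares(int shares) { /* hypothetical resource-manager hook */ }
    }

    A real manager would first have to identify which resource is actually the bottleneck (CPU, RAM, I/O) before turning any knob, which is exactly the point the post makes about using DTrace latency data.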


  • How should I load level data in Java?

    - by Matthew G.
    I'm setting up my engine for a certain action/arcade game to have a set of commands that would look something like this:

    Set landscape to grass
    Create rocks at ...
    Create player at X, Y
    Set goal to "Get to point X Y"
    Spawn enemy at X, Y

    I'd then have each object knowing what it has to do, and acting on its own. I've been thinking about how to store this data. External data files could be parsed by a level class, and certain objects could be spawned through that. I could also create a base level class and extend it for each level, but that'd create a large number of classes. Another idea is to have one level parser class, but with a case for each level. This would be extremely silly and bulky, but I mention it because I found that I did this at 2 AM last night. I'm finally getting why I have to plan out my inheritances, though. RIP project. I might be completely missing another option.
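
    The command list above already suggests a middle road: one parser, no per-level classes, and a registry that maps each command word to a spawn function. The question is about Java, but the shape is identical there (a Map of lambdas); it's sketched in C# below to match the other sketches on this page, with illustrative names throughout:

    using System;
    using System.Collections.Generic;

    // One registry instead of a Level subclass per level: each line of a
    // level file becomes a dictionary lookup plus a factory call.
    class LevelParser
    {
        readonly Dictionary<string, Action<float, float>> _spawners =
            new Dictionary<string, Action<float, float>>();

        public LevelParser()
        {
            // illustrative entity types from the question
            _spawners["player"] = (x, y) => Console.WriteLine("spawn player at " + x + "," + y);
            _spawners["rock"]   = (x, y) => Console.WriteLine("spawn rock at " + x + "," + y);
            _spawners["enemy"]  = (x, y) => Console.WriteLine("spawn enemy at " + x + "," + y);
        }

        // expects lines like "enemy 12 5"
        public void Execute(string commandLine)
        {
            string[] p = commandLine.Split(' ');
            _spawners[p[0]](float.Parse(p[1]), float.Parse(p[2]));
        }
    }

    Adding a new entity type then means registering one more factory, not writing a new class per level.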


  • Google Maps API Round-up

    This week, Mano Marks and Paul Saxman go over recent launches and things you might have missed with the Google Maps APIs, including the new Google Time Zone API, traffic estimates with the Directions API (for enterprise customers), and the Places Autocomplete API query results and data service enhancements. From: GoogleDevelopers Views: 0 0 ratings Time: 00:00 More in Education


  • Shared Object Not saving the level Progress

    - by user3536228
    I am making a Flash game in which a variable levelState describes the current level the user has reached. I am using SharedObject to save the progress, but it does not do so. First I declared class-level variables:

    private var levelState:Number = 1;
    private var mySaveData:SharedObject = SharedObject.getLocal("levelSave");

    In the Main function I check whether this is the first run of the game, like below:

    if (mySaveData.data.levelsComplete == null)
    {
        mySaveData.data.levelsComplete = 1;
    }

    And in the function where the winning condition is checked (so that levelState can be increased), I use this SharedObject to hold the value of levelState:

    if (/*winning condition*/)
    {
        levelState++;
        mySaveData.data.levelsComplete = levelState;
        mySaveData.flush();
        setNewLevel(levelState);
    }

    But when I play the game, clear a level, and run the game again, it does not start from that level; it starts from the beginning.


  • Making a level editor for my game

    - by Sherif Maher Eaid
    I am making a 2D sprite-based game in XNA for WP7. The game logic is simple: you start at some point, and you want to avoid obstacles and reach a certain goal. Obviously I need to make many levels for the game to be challenging and fun. I am considering making a level editor for my game, where I would be able to design the level using some kind of GUI which then translates that to a .lvl file or something that the game can read and interpret as a playable level. Is there an already-made level editor for XNA/WP7?
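
    Short of a ready-made editor, the game-side half is small enough to sketch. Assuming a made-up line-based .lvl format of "type x y" entries (the format and all names here are illustrative, not an existing tool), a loader might look like this; TitleContainer is used because WP7 titles can't read arbitrary file paths:

    using System;
    using System.IO;
    using Microsoft.Xna.Framework;

    public static class LevelLoader
    {
        // Reads a .lvl file of lines like "rock 7 3" or "player 4 10"
        // and hands each entry to a spawn callback supplied by the game.
        public static void Load(string assetPath, Action<string, Vector2> spawn)
        {
            using (Stream stream = TitleContainer.OpenStream(assetPath))
            using (var reader = new StreamReader(stream))
            {
                string line;
                while ((line = reader.ReadLine()) != null)
                {
                    if (line.Trim().Length == 0 || line.StartsWith("#"))
                        continue; // skip blanks and comments

                    string[] parts = line.Split(' ');
                    spawn(parts[0], new Vector2(float.Parse(parts[1]), float.Parse(parts[2])));
                }
            }
        }
    }

    A GUI editor then only ever has to write those lines, which keeps the editor and the game decoupled.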


  • Scribe-LinkedIn Search API

    - by Rupeshit
    Hi folks, I want to fetch data from the LinkedIn API, and for that I am using the Scribe library. All requests give me data as expected, but when I include two facets in the URL, Scribe is not able to get data from the LinkedIn API. If I request this URL:

    http://api.linkedin.com/v1/people-search?facets=location,network&facet=location,in:0

    then it gives me a proper result, but if I request this URL, i.e. a URL containing multiple facets:

    http://api.linkedin.com/v1/people-search?facets=location,network&facet=location,in:0&facet=network,F

    then it gives me this output:

    <?xml version="1.0" encoding="UTF-8" standalone="yes"?>
    <error>
      <status>401</status>
      <timestamp>1292487039516</timestamp>
      <error-code>0</error-code>
      <message>
        [unauthorized].OAU:CiEgwWDkA5BFpNrc0RfGyVuSlOh4tig5kOTZ9q97qcXNrFl7zqk-Ts7DqRGaKDCV|94f13544-9844-41eb-9d53-8fe36535bbc3|*01|*01:1292487039:VseHXaJXM2gerxJyn6kHhIka7zw=
      </message>
    </error>

    Any kind of help to solve this will be appreciated. Thanks.


  • Calling ASP.NET Web API using JQuery ajax - cross site scripting issue

    - by SimonF
    I have a Web API which I am calling using the jQuery ajax function. When I test the service directly (using the Chrome RESTEasy extension) it works fine; however, when I call it using the jQuery ajax function I get an error. I'm calling it on port 81:

    $.ajax({
        url: "http://127.0.0.1:81/api/people",
        data: JSON.stringify(personToCreate),
        type: "POST",
        contentType: "application/json;charset=utf-8",
        statusCode: {
            201: function (newPerson) {
                callback(newPerson);
            }
        },
        success: function (newPerson) {
            alert("New person created with an Id of " + newPerson.Id);
        },
        error: function (jqXHR, textStatus, errorThrown) {
            alert('Error. ' + textStatus + '. ' + errorThrown);
        }
    });

    ...but when I trace it using FireBug Lite, the response comes from port 82:

    {"Message":"No HTTP resource was found that matches the request URI 'http://127.0.0.1:82/api/people'.","MessageDetail":"No action was found on the controller 'People' that matches the request."}

    I think the error is, effectively, due to cross-site scripting being blocked, but I'm not actually cross-site scripting, if you see what I mean. Has anyone else come across this and been able to fix it?

    Edit: the routing config (global.asax.vb) is:

    RouteTable.Routes.MapHttpRoute(
        name:="DefaultApi",
        routeTemplate:="api/{controller}/{id}",
        defaults:=New With {Key .id = System.Web.Http.RouteParameter.Optional})

    Controller:

    Public Function PostValue(ByVal departmentid As Integer, ByVal emailaddress As String,
                              ByVal firstname As String, ByVal lastname As String) As Guid
        Dim context As New WSMModelDataContext
        Dim bllPeople As New PeopleBLL(context)
        Return bllPeople.Create(firstname, lastname, emailaddress, departmentid)
    End Function

    When I debug it, it never reaches the controller, although when calling it through RESTEasy it routes correctly and the controller executes successfully. The only difference seems to be that when called through RESTEasy it is (correctly) using http://127.0.0.1:81, but for some reason when called via jQuery/ajax it seems to be using http://127.0.0.1:82.
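
    One way to confirm or rule out the cross-origin theory is a Web API message handler that stamps a CORS header on every response, so the browser stops blocking the cross-port call. This is a diagnostic sketch only (written in C#, though the project above is VB.NET), and the allowed origin value is an assumption:

    using System.Net.Http;
    using System.Threading;
    using System.Threading.Tasks;

    // Register in Application_Start:
    //   GlobalConfiguration.Configuration.MessageHandlers.Add(new CorsHandler());
    public class CorsHandler : DelegatingHandler
    {
        protected override async Task<HttpResponseMessage> SendAsync(
            HttpRequestMessage request, CancellationToken cancellationToken)
        {
            HttpResponseMessage response = await base.SendAsync(request, cancellationToken);
            // allow the page served from the other port to read the response
            response.Headers.Add("Access-Control-Allow-Origin", "http://127.0.0.1:82");
            return response;
        }
    }

    If the ajax call succeeds with the header in place, the problem is same-origin policy; the port rewriting from 81 to 82 would still need explaining separately (a proxy or port forward is the usual suspect).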


  • Creating an API for an ASP.NET MVC site with rate-limiting and caching

    - by Maxim Z.
    Recently, I've been very interested in APIs, specifically in how to create them. For the purpose of this question, let's say that I have created an ASP.NET MVC site that has some data on it; I want to create an API for this site. I have multiple questions about this: What type of API should I create? I know that REST and oData APIs are very popular. What are the pros and cons of each, and how do I implement them? From what I understand so far, REST APIs with ASP.NET MVC would just be actions that return JSON instead of Views, and oData APIs are documented here. How do I handle writing? Reading from both API types is quite simple. However, writing is more complex. With the REST approach, I understand that I can use HTTP POST, but how do I implement authentication? Also, with oData, how does writing work in the first place? How do I implement basic rate-limiting and caching? From my past experience with APIs, these are very important things, so that the API server isn't overloaded. What's the best way to set these two things up? Can I get some sample code? Any code that relates to C# and ASP.NET MVC would be appreciated. Thanks in advance! While this is a broad question, I think it's not too broad... :) There are some similar questions to this one that are about APIs, but I haven't found any that directly address the questions I outlined here.
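
    On the rate-limiting point specifically: in ASP.NET MVC the cheapest workable version is an action filter that remembers, per action and per client IP, when the last call happened, letting the ASP.NET cache handle expiry. This is a minimal sketch; the attribute name, key shape, and "one request per N seconds" policy are all assumptions:

    using System;
    using System.Web;
    using System.Web.Caching;
    using System.Web.Mvc;

    public class ThrottleAttribute : ActionFilterAttribute
    {
        public int Seconds { get; set; } // minimum gap between calls per client IP

        public override void OnActionExecuting(ActionExecutingContext filterContext)
        {
            string key = "throttle-" + filterContext.ActionDescriptor.ActionName
                       + "-" + filterContext.HttpContext.Request.UserHostAddress;

            if (HttpRuntime.Cache[key] != null)
            {
                // 429 Too Many Requests (not in the HttpStatusCode enum of this era)
                filterContext.Result = new HttpStatusCodeResult(429, "Rate limit exceeded");
            }
            else
            {
                // the cache entry expiring is what re-opens the gate
                HttpRuntime.Cache.Add(key, true, null,
                    DateTime.UtcNow.AddSeconds(Seconds),
                    Cache.NoSlidingExpiration, CacheItemPriority.Low, null);
            }
        }
    }

    Usage would be [Throttle(Seconds = 5)] on a JSON-returning action; for the caching half of the question, [OutputCache(Duration = 60)] on read-only actions is the built-in starting point.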


  • Bug: files uploaded via desktop or web client have hidden tag when listed via API

    - by Jon Webb
    Files uploaded to Google Drive sometimes incorrectly have a hidden tag when listed via the Document List v3 REST API:

    <category scheme='http://schemas.google.com/g/2005/labels' term='http://schemas.google.com/g/2005/labels#hidden' label='hidden'/>

    This happens if:

    - a subfolder is created via the Google Drive desktop client and files are copied in, or
    - a folder is uploaded via the Google Drive web client (the folder does not have the hidden tag, but the files that were uploaded do).

    The files do not have this tag if:

    - they are individually uploaded via the Google Drive web client to the subfolder, or
    - they are uploaded via the REST API to the subfolder, or
    - they are uploaded via the desktop client to the My Drive root.

    The files and folders show up in Google Drive whether they have the hidden tag or not. We're using the API with the following scope:

    https://docs.google.com/feeds/
    https://spreadsheets.google.com/feeds/
    https://docs.googleusercontent.com/

    I have verified and can recreate this with the OAuth 2.0 playground. Google Drive desktop client version 1.3.3209.2600 on Win7 32-bit. I guess these must be bugs in the API...


  • Subscribe through API (.NET, C#)

    - by Younes
    I have to submit subscription data to another website. I have documentation on how to use this API, but I'm not 100% sure how to set it up. I do have all the information needed, like username / password etc. This is the API documentation: https://www.apiemail.net/api/documentation/?SID=4

    How would my request / post / whatever look in C# .NET (VS 2008) when I'm trying to access this API? This is what I have now; I think I'm not on the right track:

    public static string GArequestResponseHelper(string url, string token, string username, string password)
    {
        HttpWebRequest myRequest = (HttpWebRequest)WebRequest.Create(url);
        myRequest.Headers.Add("Username: " + username);
        myRequest.Headers.Add("Password: " + password);

        HttpWebResponse myResponse = (HttpWebResponse)myRequest.GetResponse();
        Stream responseBody = myResponse.GetResponseStream();
        Encoding encode = System.Text.Encoding.GetEncoding("utf-8");
        StreamReader readStream = new StreamReader(responseBody, encode);

        // return string itself (easier to work with)
        return readStream.ReadToEnd();
    }

    Hope someone knows how to set this up properly. Thx!


  • Getting Public Info about Location in Facebook Graph API

    - by Allan Deamon
    I need to get the home city of each person in a group, including people who are not my friends. In the browser, when viewing the Facebook profile of some unknown person, it shows "Lives in ..." if this is set as public information, and it includes a link to the city object, with the city id in the link. That's all I need. But using a Facebook app that I created to use the Graph API, this information is not public: I can only get the 'location' property from friends of mine for whom I have permission to see it. I gave ALL the possible permissions to my app. In the API explorer (https://developers.facebook.com/tools/explorer/), when I use it as REST, only a little information is shown for someone who is not my friend. Also in the API explorer, when I use FQL, it doesn't work. This query works, returning the JSON with the data:

    SELECT uid, name FROM user WHERE username='...';

    But this other query doesn't work:

    SELECT uid, name, location FROM user WHERE username='...';

    It returns a JSON error:

    {
      "error": {
        "message": "(#602) location is not a member of the user table.",
        "type": "OAuthException",
        "code": 602
      }
    }

    I asked for ALL the permission options in the token, and I can get this info in the browser version of Facebook. But how can I get it with the API?


  • Getting Started with Facebook API

    - by Btibert3
    I have a friend who owns a small business and has a Page on Facebook. I want to help her manage it from a marketing perspective, and figure it may be best to do so through their API. I have skimmed their API documentation, and have a basic working knowledge of Python. What I can't figure out is whether I can access her page's data with Python and grab the data on wall posts, who liked posts, etc. Is this possible? I can't find a decent tutorial for someone who is new to programming. To provide context, I have been scraping the Twitter Search API for some time now, and I am hoping there is something similar (request certain data elements, and have them returned as structured data I can analyze). I find that API extremely straightforward; for Facebook, I don't know where to begin. I don't want to create an application, I simply want to access the data that is related to my friend's page. I am hoping to find some decent tutorials and help on what I will need to get started. Any help you can provide will be greatly appreciated.


  • Custom API requirement

    - by Jonathan.Peppers
    We are currently working on an API for an existing system. It basically wraps some web requests as an easy-to-use library that third-party companies should be able to use with our product. As part of the API, there is an event mechanism where the server can call back to the client via a constantly-running socket connection. To minimize load on the server, we want to have only one connection per computer. Currently there is a socket open per process, and that could eventually cause load problems if you had multiple applications using the API. So my question is: if we want to deploy our API as a single standalone assembly, what is the best way to fix our problem? A couple of options we thought of:

    - Write an out-of-process COM object (don't know if that works in .NET)
    - Include a second exe file that would be required for events; it would have to single-instance itself and open a named pipe or something to communicate across multiple processes
    - Extract this exe file from an embedded resource and execute it

    None of those really seem ideal. Any better ideas?
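
    For what it's worth, the second option can live inside the single assembly rather than a separate exe: the first process to create a named mutex elects itself the hub, owns the one server socket, and relays events to the other processes over a named pipe. A rough sketch of the election part; all names here are made up:

    using System;
    using System.IO.Pipes;
    using System.Threading;

    class EventHubElection
    {
        static void Main()
        {
            bool isHub;
            // "Global\" scope so the election spans all sessions on the machine
            using (var mutex = new Mutex(true, @"Global\MyApiEventHub", out isHub))
            {
                if (isHub)
                {
                    // this process opens the single server socket, then relays events:
                    using (var pipe = new NamedPipeServerStream("MyApiEvents", PipeDirection.Out))
                    {
                        pipe.WaitForConnection();
                        // ... write each incoming server event to the pipe ...
                    }
                }
                else
                {
                    using (var pipe = new NamedPipeClientStream(".", "MyApiEvents", PipeDirection.In))
                    {
                        pipe.Connect();
                        // ... read relayed events instead of opening another socket ...
                    }
                }
            }
        }
    }

    The fiddly part of this design is the hub process exiting: the remaining clients have to notice, re-run the election, and have the new winner reopen the socket.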


  • Sending a file to an API - C#

    - by alex
    I'm trying to use an API which sends a fax. I have a PHP example below (I will be using C#, however):

    <?php
    // This is example code to send a FAX from the command line using the Simwood API
    // It is illustrative only and should not be used without the addition of error checking etc.

    $ch = curl_init("http://url-to-api-endpoint");

    $fax_variables = array(
        'user' => 'test',
        'password' => 'test',
        'sendat' => '2050-01-01 01:00',
        'priority' => 10,
        'output' => 'json',
        'to[0]' => '44123456789',
        'to[1]' => '44123456780',
        'file[0]' => '@/tmp/myfirstfile.pdf',
        'file[1]' => '@/tmp/mysecondfile.pdf'
    );

    print_r($fax_variables);

    curl_setopt($ch, CURLOPT_POST, 1);
    curl_setopt($ch, CURLOPT_POSTFIELDS, $fax_variables);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);

    $result = curl_exec($ch);
    $info = curl_getinfo($ch);
    $result['http_code'];
    curl_close($ch);
    print_r($result);
    ?>

    My question is: in the C# world, how would I achieve the same result? Do I need to build a POST request? Ideally, I was trying to do this using REST, constructing a URL and using HttpWebRequest (GET) to call the API.
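
    Since the '@/tmp/...' entries make cURL send multipart/form-data, a GET with query-string parameters can't carry the files; the C# equivalent has to POST a multipart body. A sketch assuming .NET 4.5's HttpClient is available (on the VS 2008 / .NET 3.5 stack mentioned above, the multipart body would have to be assembled by hand over HttpWebRequest); the field names come from the PHP example, everything else is an assumption:

    using System.IO;
    using System.Net.Http;
    using System.Threading.Tasks;

    static class FaxClient
    {
        public static async Task<string> SendFaxAsync()
        {
            using (var client = new HttpClient())
            using (var form = new MultipartFormDataContent())
            {
                form.Add(new StringContent("test"), "user");
                form.Add(new StringContent("test"), "password");
                form.Add(new StringContent("json"), "output");
                form.Add(new StringContent("44123456789"), "to[0]");
                // the '@' file reference in the PHP version becomes a file part here
                form.Add(new ByteArrayContent(File.ReadAllBytes(@"C:\tmp\myfirstfile.pdf")),
                         "file[0]", "myfirstfile.pdf");

                HttpResponseMessage response =
                    await client.PostAsync("http://url-to-api-endpoint", form);
                return await response.Content.ReadAsStringAsync();
            }
        }
    }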


  • Which approach would lead to an API that is easier to use?

    - by Clem
    I'm writing a JavaScript API, and for a particular case, I'm wondering which approach is the sexiest. Let's take an example: writing a VideoPlayer, I add a getCurrentTime method which gives the elapsed time since the start. The first approach simply declares getCurrentTime as follows:

    getCurrentTime():number

    where number is the native number type. This approach includes a CURRENT_TIME_CHANGED event so that API users can add callbacks to be aware of time changes. Listening to this event would look like the following:

    myVideoPlayer.addEventListener(CURRENT_TIME_CHANGED, function(evt){
        console.log("current time = " + evt.getDispatcher().getCurrentTime());
    });

    The second approach declares getCurrentTime differently:

    getCurrentTime():CustomNumber

    where CustomNumber is a custom number object, not the native one. This custom object dispatches a VALUE_CHANGED event when its value changes, so there is no need for the CURRENT_TIME_CHANGED event! Just listen to the returned object for value changes! Listening to this event would look like the following:

    myVideoPlayer.getCurrentTime().addEventListener(VALUE_CHANGED, function(evt){
        console.log("current time = " + evt.getDispatcher().valueOf());
    });

    Note that CustomNumber has a valueOf method which returns a native number, letting the returned CustomNumber object be used as a number, so:

    var result = myVideoPlayer.getCurrentTime() + 5;

    will work! So in the first approach, we listen to an object for a change in its property's value. In the second one, we directly listen to the property for a change in its value. There are multiple pros and cons to each approach; I just want to know which one developers would prefer to use!


  • Using JSON.NET for dynamic JSON parsing

    - by Rick Strahl
    With the release of ASP.NET Web API as part of .NET 4.5 and MVC 4.0, JSON.NET has effectively pushed out the .NET native serializers to become the default serializer for Web API. JSON.NET is vastly more flexible than the built-in DataContractJsonSerializer or the older JavaScript serializer. The DataContractSerializer in particular has been very problematic in the past because it can't deal with untyped objects for serialization - like values of type object, or anonymous types, which are quite common these days. The JavaScript serializer that came before it actually does support non-typed objects for serialization, but it can't do anything with untyped data coming in from JavaScript, and its overall model of extensibility was pretty limited (the JavaScript serializer is what MVC uses for JSON responses). JSON.NET provides a robust JSON serializer that has both high-level and low-level components, supports binary JSON, JSON contracts, XML to JSON conversion, LINQ to JSON and many, many more features than either of the built-in serializers. ASP.NET Web API now uses JSON.NET as its default serializer, and it is pulled in as a NuGet dependency into Web API projects, which is great.

    Dynamic JSON Parsing

    One of the features that I think is getting ever more important is the ability to serialize and deserialize arbitrary JSON content dynamically - that is, without mapping the JSON captured directly into a .NET type as the DataContractSerializer or the JavaScript serializers do. Sometimes it isn't possible to map types due to the differences in languages (think collections, dictionaries etc.), and other times you simply don't have the structures in place, or don't want to create them, to actually import the data. If this topic sounds familiar - you're right! I wrote about dynamic JSON parsing a few months back, before JSON.NET was added to Web API and when Web API and the System.Net HttpClient libraries included the System.Json classes like JsonObject and JsonArray. With the inclusion of JSON.NET in Web API these classes are now obsolete and didn't ship with Web API or the client libraries. I re-linked my original post to this one. In this post I'll discuss JToken, JObject and JArray, which are the dynamic JSON objects that make it very easy to create and retrieve JSON content on the fly without underlying types.

    Why Dynamic JSON?

    So, why dynamic JSON parsing rather than strongly typed parsing? Since applications are interacting more and more with third-party services, it becomes ever more important to have easy access to those services with easy JSON parsing. Sometimes it just makes a lot of sense to pull just a small amount of data out of a large JSON document received from a service, because the third-party service isn't directly related to your application's logic most of the time - and it makes little sense to map the entire service structure in your application. For example, recently I worked with the Google Maps Places API to return information about businesses close to me (or rather the app's) location. The Google API returns a ton of information that my application had no interest in - all I needed was a few values out of the data. Dynamic JSON parsing makes it possible to map this data without having to map the entire API to a C# data structure. Instead I could pull out the three or four values I needed from the API and directly store them on my business entities that needed to receive the data - no need to map the entire Maps API structure.
    Getting JSON.NET

    The easiest way to use JSON.NET is to grab it via NuGet and add it as a reference to your project. You can add it to your project with:

    PM> Install-Package Newtonsoft.Json

    from the Package Manager Console, or by using Manage NuGet Packages in your project References. As mentioned, if you're using ASP.NET Web API or MVC 4, JSON.NET will be automatically added to your project. Alternately you can also go to the CodePlex site and download the latest version including source code: http://json.codeplex.com/

    Creating JSON on the fly with JObject and JArray

    Let's start with creating some JSON on the fly. It's super easy to create a dynamic object structure with any of the JToken-derived JSON.NET objects. The most common JToken-derived classes you are likely to use are JObject and JArray. JToken implements IDynamicMetaProvider and so uses the dynamic keyword extensively to make it intuitive to create object structures and turn them into JSON via dynamic object syntax. Here's an example of creating a music album structure with child songs, using JObject for the base object and songs and JArray for the actual collection of songs:

    [TestMethod]
    public void JObjectOutputTest()
    {
        // strong typed instance
        var jsonObject = new JObject();

        // you can explicitly add values here using class interface
        jsonObject.Add("Entered", DateTime.Now);

        // or cast to dynamic to dynamically add/read properties
        dynamic album = jsonObject;

        album.AlbumName = "Dirty Deeds Done Dirt Cheap";
        album.Artist = "AC/DC";
        album.YearReleased = 1976;

        album.Songs = new JArray() as dynamic;

        dynamic song = new JObject();
        song.SongName = "Dirty Deeds Done Dirt Cheap";
        song.SongLength = "4:11";
        album.Songs.Add(song);

        song = new JObject();
        song.SongName = "Love at First Feel";
        song.SongLength = "3:10";
        album.Songs.Add(song);

        Console.WriteLine(album.ToString());
    }

    This produces a complete JSON structure:

    {
      "Entered": "2012-08-18T13:26:37.7137482-10:00",
      "AlbumName": "Dirty Deeds Done Dirt Cheap",
      "Artist": "AC/DC",
      "YearReleased": 1976,
      "Songs": [
        {
          "SongName": "Dirty Deeds Done Dirt Cheap",
          "SongLength": "4:11"
        },
        {
          "SongName": "Love at First Feel",
          "SongLength": "3:10"
        }
      ]
    }

    Notice that JSON.NET does a nice job formatting the JSON, so it's easy to read and paste into blog posts :-). JSON.NET includes a bunch of configuration options that control how JSON is generated. Typically the defaults are just fine, but you can override them with the JsonSettings object for most operations. The important thing about this code is that there's no explicit type used for holding the values to serialize to JSON. Rather, the JSON.NET objects are the containers that receive the data as I build up my JSON structure dynamically, simply by adding properties. This means this code can be entirely driven at runtime without compile-time constraints on the structure of the JSON output. Here I use JObject to create an album 'object' and immediately cast it to dynamic. JObject() is kind of similar in behavior to ExpandoObject in that it allows you to add properties by simply assigning to them. Internally, JObject values are stored in pseudo collections of key value pairs that are exposed as properties through the IDynamicMetaObject interface exposed in JSON.NET's JToken base class. For objects the syntax is very clean - you add simple typed values as properties. For objects and arrays you have to explicitly create new JObject or JArray instances, cast them to dynamic, and then add properties and items to them.
    Always remember, though, these values are dynamic - which means no IntelliSense and no compiler type checking. It's up to you to ensure that the names and values you create are accessed consistently and without typos in your code. Note that you can also access the JObject instance directly (not as dynamic) and get access to the underlying JObject type. This means you can assign properties by string, which can be useful for fully data-driven JSON generation from other structures. Below you can see both styles of access next to each other:

    // strong type instance
    var jsonObject = new JObject();

    // you can explicitly add values here
    jsonObject.Add("Entered", DateTime.Now);

    // expando style instance you can just 'use' properties
    dynamic album = jsonObject;
    album.AlbumName = "Dirty Deeds Done Dirt Cheap";

    JContainer (the base class for JObject and JArray) is a collection, so you can also iterate over the properties at runtime easily:

    foreach (var item in jsonObject)
    {
        Console.WriteLine(item.Key + " " + item.Value.ToString());
    }

    The functionality of the JSON objects is very similar to .NET's ExpandoObject, and if you have used it before, you're already familiar with how the dynamic interfaces to the JSON objects work.

    Importing JSON with JObject.Parse() and JArray.Parse()

    The JValue structure supports importing JSON via the Parse() and Load() methods, which can read JSON data from a string or various streams respectively. Essentially JValue includes the core JSON parsing to turn a JSON string into a collection of JsonValue objects that can then be referenced using familiar dynamic object syntax. Here's a simple example:

    public void JValueParsingTest()
    {
        var jsonString = @"{""Name"":""Rick"",""Company"":""West Wind"",
                            ""Entered"":""2012-03-16T00:03:33.245-10:00""}";

        dynamic json = JValue.Parse(jsonString);

        // values require casting
        string name = json.Name;
        string company = json.Company;
        DateTime entered = json.Entered;

        Assert.AreEqual(name, "Rick");
        Assert.AreEqual(company, "West Wind");
    }

    The JSON string represents an object with three properties, which is parsed into a JObject class and cast to dynamic. Once cast to dynamic, I can go ahead and access the object using familiar object syntax. Note that the actual values - json.Name, json.Company, json.Entered - are actually of type JToken, and I have to cast them to their appropriate types first before I can do type comparisons, as in the Asserts at the end of the test method. This is required because of the way dynamic types work, which can't determine the type based on the method signature of the Assert.AreEqual(object, object) method. I have to either assign the dynamic value to a variable as I did above, or explicitly cast ((string)json.Name) in the actual method call. The JSON structure can be much more complex than this simple example.
    Here's another example of an array of albums serialized to JSON and then parsed through with JsonValue():

    [TestMethod]
    public void JsonArrayParsingTest()
    {
        var jsonString = @"[
        {
            ""Id"": ""b3ec4e5c"",
            ""AlbumName"": ""Dirty Deeds Done Dirt Cheap"",
            ""Artist"": ""AC/DC"",
            ""YearReleased"": 1976,
            ""Entered"": ""2012-03-16T00:13:12.2810521-10:00"",
            ""AlbumImageUrl"": ""http://ecx.images-amazon.com/images/I/61kTaH-uZBL._AA115_.jpg"",
            ""AmazonUrl"": ""http://www.amazon.com/gp/product/…ASIN=B00008BXJ4"",
            ""Songs"": [
                {
                    ""AlbumId"": ""b3ec4e5c"",
                    ""SongName"": ""Dirty Deeds Done Dirt Cheap"",
                    ""SongLength"": ""4:11""
                },
                {
                    ""AlbumId"": ""b3ec4e5c"",
                    ""SongName"": ""Love at First Feel"",
                    ""SongLength"": ""3:10""
                },
                {
                    ""AlbumId"": ""b3ec4e5c"",
                    ""SongName"": ""Big Balls"",
                    ""SongLength"": ""2:38""
                }
            ]
        },
        {
            ""Id"": ""7b919432"",
            ""AlbumName"": ""End of the Silence"",
            ""Artist"": ""Henry Rollins Band"",
            ""YearReleased"": 1992,
            ""Entered"": ""2012-03-16T00:13:12.2800521-10:00"",
            ""AlbumImageUrl"": ""http://ecx.images-amazon.com/images/I/51FO3rb1tuL._SL160_AA160_.jpg"",
            ""AmazonUrl"": ""http://www.amazon.com/End-Silence-Rollins-Band/dp/B0000040OX/ref=sr_1_5?ie=UTF8&qid=1302232195&sr=8-5"",
            ""Songs"": [
                {
                    ""AlbumId"": ""7b919432"",
                    ""SongName"": ""Low Self Opinion"",
                    ""SongLength"": ""5:24""
                },
                {
                    ""AlbumId"": ""7b919432"",
                    ""SongName"": ""Grip"",
                    ""SongLength"": ""4:51""
                }
            ]
        }
        ]";

        JArray jsonVal = JArray.Parse(jsonString) as JArray;
        dynamic albums = jsonVal;

        foreach (dynamic album in albums)
        {
            Console.WriteLine(album.AlbumName + " (" + album.YearReleased.ToString() + ")");
            foreach (dynamic song in album.Songs)
            {
                Console.WriteLine("\t" + song.SongName);
            }
        }

        Console.WriteLine(albums[0].AlbumName);
        Console.WriteLine(albums[0].Songs[1].SongName);
    }

    JObject and JArray in ASP.NET Web API

    Of course these types also work in ASP.NET Web API controller methods. If you want, you can accept parameters using these objects or return them. The following contrived example receives dynamic JSON input, and then creates a new dynamic JSON object and returns it based on data from the first:

    [HttpPost]
    public JObject PostAlbumJObject(JObject jAlbum)
    {
        // dynamic input from inbound JSON
        dynamic album = jAlbum;

        // create a new JSON object to write out
        dynamic newAlbum = new JObject();

        // Create properties on the new instance
        // with values from the first
        newAlbum.AlbumName = album.AlbumName + " New";
        newAlbum.NewProperty = "something new";
        newAlbum.Songs = new JArray();

        foreach (dynamic song in album.Songs)
        {
            song.SongName = song.SongName + " New";
            newAlbum.Songs.Add(song);
        }

        return newAlbum;
    }

    The raw POST request to the server looks something like this:

    POST http://localhost/aspnetwebapi/samples/PostAlbumJObject HTTP/1.1
    User-Agent: Fiddler
    Content-type: application/json
    Host: localhost
    Content-Length: 88

    {AlbumName: "Dirty Deeds",Songs:[ { SongName: "Problem Child"},{ SongName: "Squealer"}]}

    and the output that comes back looks like this:

    {
      "AlbumName": "Dirty Deeds New",
      "NewProperty": "something new",
      "Songs": [
        {
          "SongName": "Problem Child New"
        },
        {
          "SongName": "Squealer New"
        }
      ]
    }

    The original values are echoed back with something extra appended to demonstrate that we're working with a new object.
    When you receive or return a JObject, JValue, JToken or JArray instance in a Web API method, Web API ignores normal content negotiation and assumes your content is going to be received and returned as JSON, so effectively the parameter and result type explicitly determine the input and output format, which is nice.

    Dynamic to Strong Type Mapping

    You can also map JObject and JArray instances to a strongly typed object, so you can mix dynamic and static typing in the same piece of code. Using the two-album jsonString shown earlier, the code below takes the array of albums, picks out a single album, and casts that album to a static Album instance.

    [TestMethod]
    public void JsonParseToStrongTypeTest()
    {
        JArray albums = JArray.Parse(jsonString) as JArray;

        // pick out one album
        JObject jalbum = albums[0] as JObject;

        // Copy to a static Album instance
        Album album = jalbum.ToObject<Album>();

        Assert.IsNotNull(album);
        Assert.AreEqual(album.AlbumName, jalbum.Value<string>("AlbumName"));
        Assert.IsTrue(album.Songs.Count > 0);
    }

    This is pretty damn useful for the scenario I mentioned earlier - you can read a large chunk of JSON and dynamically walk the property hierarchy down to the item you want to access, and then either access the specific item dynamically (as shown earlier) or map a part of the JSON to a strongly typed object. That's very powerful if you think about it - it leaves you in total control to decide what's dynamic and what's static.

    Strongly typed JSON Parsing

    With all this talk of dynamic, let's not forget that JSON.NET of course also does strongly typed serialization, which is drop-dead easy. Here's a simple example of how to serialize and deserialize an object with JSON.NET:

    [TestMethod]
    public void StronglyTypedSerializationTest()
    {
        // Demonstrate deserialization from a raw string
        var album = new Album()
        {
            AlbumName = "Dirty Deeds Done Dirt Cheap",
            Artist = "AC/DC",
            Entered = DateTime.Now,
            YearReleased = 1976,
            Songs = new List<Song>()
            {
                new Song()
                {
                    SongName = "Dirty Deeds Done Dirt Cheap",
                    SongLength = "4:11"
                },
                new Song()
                {
                    SongName = "Love at First Feel",
                    SongLength = "3:10"
                }
            }
        };

        // serialize to string
        string json2 = JsonConvert.SerializeObject(album, Formatting.Indented);
        Console.WriteLine(json2);

        // make sure we can deserialize back
        var album2 = JsonConvert.DeserializeObject<Album>(json2);

        Assert.IsNotNull(album2);
        Assert.IsTrue(album2.AlbumName == "Dirty Deeds Done Dirt Cheap");
        Assert.IsTrue(album2.Songs.Count == 2);
    }

    JsonConvert is a high-level static class that wraps lower-level functionality, but you can also use the JsonSerializer class, which allows you to serialize/parse to and from streams. It's a little more work, but gives you a bit more control. The functionality available is easy to discover with IntelliSense, and that's good because there's not a lot in the way of documentation that's actually useful.

    Summary

    JSON.NET is a pretty complete JSON implementation with lots of different choices for JSON parsing, from dynamic parsing to static serialization to complex querying of JSON objects using LINQ. It's good to see this open source library getting integrated into .NET, and pushing out the old and tired stock .NET parsers so that we finally have a bit more flexibility - and extensibility - in our JSON parsing. Good to go!
    Resources

    Sample Test Project
    http://json.codeplex.com/

    © Rick Strahl, West Wind Technologies, 2005-2012. Posted in .NET, Web API, AJAX.


  • Facebook Graph API: Upload Photo To Album

    - by st4ck0v3rfl0w
    Hello All, I'm trying to familiarize myself with Facebook's new Graph API and so far I can fetch and write some data pretty easily. Something I'm struggling to find decent documentation on is uploading images to an album. According to http://developers.facebook.com/docs/api#publishing you need to supply the message argument. But I'm not quite sure how to construct it. Older resources I've read are: http://wiki.auzigog.com/Facebook_Photo_Uploads http://wiki.developers.facebook.com/index.php/Photos.upload If someone has more information or could help me tackle uploading photos to an album using Facebook Graph API please reply!

