Search Results

Search found 1773 results on 71 pages for 'apis'.


  • What are the pros and cons of using “Sign in with Twitter/Facebook” for a new website?

    - by Paul D. Waite
    A friend and I are looking to launch a little forum site. I’m considering using the “Sign in with Facebook/Twitter” APIs, possibly exclusively, for user login. I haven’t used either of these before, nor run a site with user logins at all. What are the pros and cons of these APIs? Specifically:
      - Is the idea of using them, and/or using them exclusively (i.e. having no login system other than one or both of these), any good?
      - What benefits do I get as a developer from using them?
      - Do end users actually like or dislike them?
      - Have you experienced any technical or logistical issues with these APIs specifically?
    Here are the pros and cons I’ve got so far.
    Pros:
      - More convenient for the user (“register” with two clicks, sign in with one)
      - Possibly no need to maintain our own login system
    Cons:
      - No control over our login process
      - Excludes non-Facebook/non-Twitter users (if we don’t maintain our own login system)
      - Excludes Facebook/Twitter users who are worried about us having some sort of access to their accounts
      - Users’ accounts on our site are compromised if their Facebook/Twitter accounts are compromised.

    Read the article

  • You Need BRM When You Have EBS – and Even When You Don’t!

    - by bwalstra
    Here is a list of criteria to test whether your business systems (Oracle E-Business Suite, EBS, or otherwise) support your lines of digital business - if you score low, you need Oracle Billing and Revenue Management (BRM).
    Functions: Scalability, High Availability (99.999%), Performance, Extensibility (e.g. APIs, Tools), Upgradability, Maintenance, Security, Standards Compliance, Regulatory Compliance (e.g. SOX), User Experience, Implementation Complexity.
    Features: Customer Management, Real-Time Service Authorization, Pricing/Promotions Flexibility, Subscriptions, Usage Rating and Pricing, Real-Time Balance Mgmt., Non-Currency Resources, Billing & Invoicing, A/R & G/L, Payments & Collections, Revenue Assurance, Integration with Key Enterprise Applications (Reporting, Business Intelligence, Order & Service Mgmt (OSM), Siebel CRM, E-Business Suite, On-/Off-line Mediation, Payment Processing, Taxation, Royalties & Settlements), Operations Management, Disaster Recovery.
    Overall Evaluation: Implementation, Configuration, Extensibility, Maintenance, Upgradability, Functional Richness, Feature Richness, Usability, OOB Integrations, Operations Management, Leveraging Oracle Technology, Overall Fit for Purpose.
    You need Oracle BRM:
      - Built for high-volume transaction processing
      - Monetizes any service or event based on any metric
      - Supports high-volume usage rating, pricing and promotions
      - Provides real-time charging, service authorization and balance management
      - Supports any account structure (e.g. corporate hierarchies etc.)
      - Scales from low volumes to extremely high volumes of transactions (e.g. billions of transactions per hour)
      - Exposes every single function via APIs (e.g. Java, C/C++, PERL, COM, Web Services, JCA)
    Immediate business benefits of BRM:
      - Improved business agility and performance: supports the flexibility, innovation, and customer-centricity required for current and future business models
      - Faster time to market for new products and services: supports a 360° view of the customer in real time - products can be launched to targeted customers at a record-breaking pace
      - Streamlined deployment and operation: productized integrations, standards-based APIs, and OOB enablement lower deployment and maintenance costs
      - Extensible and scalable solution: minimizes risk - the initial phase is deployed rapidly, and the solution is extended and scaled seamlessly per business requirements
    Key considerations:
      - Productized integration with key Oracle applications: lower integration risks and cost; efficient order-to-cash process
      - Engineered solution - certification on the Exa platform: Exadata tested at PayPal in the re-platforming project; optimal performance of Oracle assets on Oracle hardware
      - Productized solution in Rapid Offer Design and Order Delivery: fast offer design and implementation; significantly shorter order cycle time
      - Productized integration with Oracle Enterprise Manager: visibility into system operability for optimal uptime

    Read the article

  • What are the pros (and cons) of using “Sign in with Twitter/Facebook” for a new website?

    - by Paul D. Waite
    A friend and I are looking to launch a little forum site. I’m considering using the “Sign in with Facebook/Twitter” APIs, possibly exclusively (à la Lanyrd, for example), for user login. I haven’t used either of these before, nor run a site with user logins at all. What are the pros (and cons) of these APIs? Specifically:
      - What benefits do I get as a developer from using them? What drawbacks are there?
      - Do end users actually like or dislike them?
      - Have you experienced any technical or logistical issues with these APIs specifically?
    Here are the pros and cons I’ve got so far.
    Pros:
      - More convenient for the user (“register” with two clicks, sign in with one)
      - Possibly no need to maintain our own login system
    Cons:
      - No control over our login process
      - Excludes Facebook/Twitter users who are worried about us having some sort of access to their accounts
      - Users’ accounts on our site are compromised if their Facebook/Twitter accounts are compromised.
    And if we don’t maintain our own alternative login system:
      - Dependency on Facebook/Twitter for our login system
      - Excludes non-Facebook/non-Twitter users from our site
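    If it helps to make the trade-offs concrete: on the server, "Sign in with Facebook" ultimately reduces to validating an access token and mapping the remote user id to a local account record (which is also where you would cut off a compromised Facebook account). A minimal C# sketch, assuming the classic Graph API https://graph.facebook.com/me endpoint; the helper name and error handling are mine, not from the question:

        using System;
        using System.Net.Http;
        using System.Threading.Tasks;

        public static class FacebookSignIn
        {
            private static readonly HttpClient Http = new HttpClient();

            // Verify a client-supplied access token by asking the Graph API
            // who it belongs to. An invalid token yields a non-2xx response.
            public static async Task<string> GetUserJsonAsync(string accessToken)
            {
                string url = "https://graph.facebook.com/me?access_token=" +
                             Uri.EscapeDataString(accessToken);
                HttpResponseMessage response = await Http.GetAsync(url);
                response.EnsureSuccessStatusCode();
                // Returns JSON like {"id":"...","name":"..."}; map "id" to a
                // local account record rather than trusting it per-request.
                return await response.Content.ReadAsStringAsync();
            }
        }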

    Read the article

  • Is it reasonable to expect knowing the whole stack bottom up?

    - by Vaibhav Garg
    I am a Sr. developer/architect/product manager for embedded systems. The systems I have experience with have typically been small to medium codebases - close to 25-30K LOC in C, using 8-, 16- and 32-bit low-end microcontrollers. The systems have been entirely bootstrapped by our team - meaning everything from the start-up code to the end application code has either been written by the team or, at the very least, is thoroughly understood and maintained by us. Now, if we were to start developing more complex systems with complex peripherals, such as USB OTG et al. (think low-end cell phones), there are libraries and stacks available commercially and from chip vendors that reduce the task to just calling the right APIs and being able to use those peripherals. Out of habit, this does not give me and the team a comfortable feeling: we can no longer comprehend the entire code tree, and the lower layers become virtual black boxes. Is it reasonable to devote, and reserve, time for getting into the details of how the APIs are implemented, assuming that would also entail getting into the details of the relevant standards (again, USB as an example)? Or, alternatively, should a thorough understanding of the top-level usage of the APIs be sufficient? This of course assumes that the source code for all the libraries is available, which it is in almost all cases. Edit: in partial response to @Abhi Beckert, the documentation is refreshingly comprehensive and meticulously maintained, as far as I know and have been able to judge; I have not had long experience with it.

    Read the article

  • What is the evidence that an API has exceeded its orthogonality in the context of types?

    - by hawkeye
    Wikipedia defines software orthogonality as: orthogonality in a programming language means that a relatively small set of primitive constructs can be combined in a relatively small number of ways to build the control and data structures of the language. The term is most frequently used regarding assembly instruction sets, as in an orthogonal instruction set. Jason Coffin has defined software orthogonality as: highly cohesive components that are loosely coupled to each other produce an orthogonal system. C.Ross has defined software orthogonality as: the property that means "changing A does not change B". An example of an orthogonal system would be a radio, where changing the station does not change the volume and vice versa. Now there is a hypothesis published in the ACM Queue by Tim Bray - which some have called the Bánffy-Bray Type System Criteria - that he summarises as: static typing's attractiveness is a direct function (and dynamic typing's an inverse function) of API surface size; dynamic typing's attractiveness is a direct function (and static typing's an inverse function) of unit testing workability. Now Stuart Halloway has reformulated Bánffy-Bray as: the more your APIs exceed orthogonality, the better you will like static typing. My question is: what is the evidence that an API has exceeded its orthogonality in the context of types? Clarification: Tim Bray introduces the idea of orthogonality and APIs. Where you have one API and it is mainly dealing with Strings (i.e. a web server serving requests and responses), then a uni-typed language (Python, Ruby) is 'aligned' to that API - the type system of these languages isn't sophisticated, but it doesn't matter since you're dealing with Strings anyway. He then moves on to Android programming, which has a whole bunch of sensor APIs, all 'different' from the web server API he was working on previously. Because you're not just dealing with Strings but with different types, the API is non-orthogonal. Tim's point is that there is an empirical relationship between your 'liking' of types and the API you're programming against (i.e. a subjective point is actually objective depending on your context).
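    C.Ross's radio analogy is easy to make concrete. A toy C# sketch - the classes and members are illustrative only, not from any of the cited sources - contrasting an orthogonal design with a non-orthogonal one:

        // Orthogonal: the two knobs are independent; changing one
        // cannot possibly affect the other.
        public class Radio
        {
            public double StationMHz { get; private set; } = 88.1;
            public int Volume { get; private set; } = 5;

            public void Tune(double mhz) { StationMHz = mhz; } // touches only the station
            public void SetVolume(int v) { Volume = v; }       // touches only the volume
        }

        // Non-orthogonal: tuning silently resets the volume, so the two
        // features can no longer be reasoned about independently.
        public class NonOrthogonalRadio
        {
            public double StationMHz { get; private set; } = 88.1;
            public int Volume { get; private set; } = 5;

            public void Tune(double mhz) { StationMHz = mhz; Volume = 0; }
            public void SetVolume(int v) { Volume = v; }
        }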

    Read the article

  • Sales and Procurement Contracts 12.1.3++ Release Information

    - by LuciaC
    New functionality has been released for Sales and Procurement Contracts in a new patch, Contracts 12.1.3++: Patch 13877401: 12.1.3 Rollup for Oracle Contracts Core. The new functionality includes:
      - APIs for import of Contract Templates, Contract Expert rules, Questions and Constants: the three APIs are an API for Templates, an API for Rules, and an API for Questions and Constants. These can be used both to create entities and to update existing templates and rules. The APIs display error and warning messages which can be processed and analyzed by the customer.
      - Ability to apply multiple templates to a Sourcing, Procurement or Sales document: the buyer can select and add multiple templates to a quote, sales agreement, sourcing or purchasing document. All the clauses and deliverables from the new templates are synchronized with the document. The Contract Expert rules are from the original template. The buyer can also view the list of templates added to any sales or procurement document.
      - Ability to define multi-row variables: you can create user-defined manual variables that are tables containing one row or multiple rows. Contract Preview will print the variable values according to the layout defined for the variable. These variables are not available for Contract Expert rules and Supplier.
      - Enhancement to suggested sections for clauses by Contract Expert: you can associate multiple default sections with a clause. A clause is associated with multiple values of any system variable, and for each such value a section name is associated in the Contract Terms Library. When Contract Expert is run in the contract authoring flow, the clause is automatically placed in the associated section.
    Plus many more new features. Read the following notes for details on all the new and changed functionality:
      - Oracle Procurement Contracts Release Notes, Release 12.1.3++ (Doc ID 1467140.1)
      - Oracle Sales Contracts Release Notes, Release 12.1.3++ (Doc ID 1467149.1)
      - Oracle E-Business Suite Releases 12.1 and 12.2 Release Content Documents (Doc ID 1302189.1)

    Read the article

  • DB API for shell scripting (any shell)

    - by foampile
    I am faced with some legacy shell scripts that run batch data processing jobs in Oracle using SQL*Plus. For the most part, the data tier does not have to communicate back to the script with retrieved data to be passed on for shell-level processing, but in a few cases it does. The problem is, SQL*Plus is really meant to be an end-user app, not an API that can communicate with other clients programmatically. That is why people have invented APIs such as DBI for Perl, JDBC for Java, ODBC, etc. The way it is done now is that the scripts invoke SQL*Plus and then parse the output - which is clearly designed for human-eye consumption - using tools like sed and awk. The whole thing is at best a hack and very prone to bugs. Since this client is rather conservative with their technology, they don't want to scale their scripts up to Perl or Python, where there are data access APIs. So I am wondering whether there are similar APIs for a shell, e.g. ksh or bash. What I would like is an API that returns data in a two-dimensional array of strings (in the absence of richer types), so that I can read DB data just like that. The way they do it now is akin to parsing a regular web page's HTML to get a single stock quote rather than cleanly calling a web service and being done with it. Anybody know of a product I can use? Thanks

    Read the article

  • An alternative to multiple inheritance when creating an abstraction layer?

    - by sebf
    In my project I am creating an abstraction layer for some APIs. The purpose of the layer is to make multi-platform support easier, and also to simplify the APIs down to the feature set that I need while providing some functionality whose implementation will be unique to each platform. At the moment, I have implemented it by defining an abstract class, which has methods that create objects implementing interfaces. The abstract class and these interfaces define the capabilities of my abstraction layer. The implementation of these in my layer should of course be arbitrary from the POV of my application, but I have done it, for my first API, by creating chains of subclasses that add more specific functionality as the features of the APIs they expose become less generic. An example probably demonstrates this better:

        //The interface as seen by the application
        interface IGenericResource
        {
            byte[] GetSomeData();
        }

        interface ISpecificResourceOne : IGenericResource
        {
            int SomePropertyOfResourceOne { get; }
        }

        interface ISpecificResourceTwo : IGenericResource
        {
            string SomePropertyOfResourceTwo { get; }
        }

        public abstract class MyLayer
        {
            public abstract ISpecificResourceOne CreateResourceOne();
            public abstract ISpecificResourceTwo CreateResourceTwo();
            public abstract void UseResourceOne(ISpecificResourceOne one);
            public abstract void UseResourceTwo(ISpecificResourceTwo two);
        }

        //The layer as created in my library
        public class LowLevelResource : IGenericResource
        {
            public byte[] GetSomeData() { /* ... */ return null; }
        }

        public class ResourceOne : LowLevelResource, ISpecificResourceOne
        {
            public int SomePropertyOfResourceOne { get { /* ... */ return 0; } }
        }

        public class ResourceTwo : ResourceOne, ISpecificResourceTwo
        {
            public string SomePropertyOfResourceTwo { get { /* ... */ return null; } }
        }

        public partial class Implementation : MyLayer
        {
            public override void UseResourceOne(ISpecificResourceOne one)
            {
                DoStuff((ResourceOne)one);
            }
        }

    As can be seen, I am essentially trying to have two inheritance chains on the same object, but of course I can't do that, so I simulate the second chain with interfaces. The thing is, though, I don't like using interfaces for this; it seems wrong. In my mind an interface defines a contract: any class that implements that interface should be usable wherever that interface is used, but here that is clearly not the case, because the interfaces are being used to allow an object from the layer to masquerade as something else, without the application needing access to its definition. What technique would allow me to define a comprehensive, intuitive collection of objects for an abstraction layer, while their implementation remains independent? (Language is C#)
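    One commonly suggested alternative - offered here as a sketch against the types above, not as the asker's own code - is to replace the second inheritance chain with composition: each concrete resource wraps a private low-level object and forwards the generic calls to it, so no type ever needs two base classes:

        // Composition instead of a second inheritance chain: the resource
        // *has* a LowLevelResource rather than *being* one, and simply
        // forwards the IGenericResource calls to it.
        public class ComposedResourceTwo : ISpecificResourceTwo
        {
            private readonly LowLevelResource _inner = new LowLevelResource();

            // IGenericResource, delegated to the wrapped implementation
            public byte[] GetSomeData() { return _inner.GetSomeData(); }

            // ISpecificResourceTwo, implemented directly
            public string SomePropertyOfResourceTwo { get { return "..."; } }
        }

    The trade-off is a little forwarding boilerplate, in exchange for removing both the downcasts in UseResourceOne/UseResourceTwo and the dual inheritance chains.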

    Read the article

  • How to use a KML file in my code?

    - by zjm1126
    i download a kml file : <?xml version="1.0" encoding="UTF-8"?> <kml xmlns="http://www.opengis.net/kml/2.2"> <Document> <Style id="transGreenPoly"> <LineStyle> <width>1.5</width> </LineStyle> <PolyStyle> <color>7d00ff00</color> </PolyStyle> </Style> <Style id="transYellowPoly"> <LineStyle> <width>1.5</width> </LineStyle> <PolyStyle> <color>7d00ffff</color> </PolyStyle> </Style> <Style id="transRedPoly"> <LineStyle> <width>1.5</width> </LineStyle> <PolyStyle> <color>7d0000ff</color> </PolyStyle> </Style> <Style id="transBluePoly"> <LineStyle> <width>1.5</width> </LineStyle> <PolyStyle> <color>7dff0000</color> </PolyStyle> </Style> <Folder> <name>Placemarks</name> <open>0</open> <Placemark> <name>Simple placemark</name> <description>Attached to the ground. Intelligently places itself at the height of the underlying terrain.</description> <Point> <coordinates>-122.0822035425683,37.42228990140251,0</coordinates> </Point> </Placemark> <Placemark> <name>Descriptive HTML</name> <description><![CDATA[Click on the blue link!<br/><br/> Placemark descriptions can be enriched by using many standard HTML tags.<br/> For example: <hr/> Styles:<br/> <i>Italics</i>, <b>Bold</b>, <u>Underlined</u>, <s>Strike Out</s>, subscript<sub>subscript</sub>, superscript<sup>superscript</sup>, <big>Big</big>, <small>Small</small>, <tt>Typewriter</tt>, <em>Emphasized</em>, <strong>Strong</strong>, <code>Code</code> <hr/> Fonts:<br/> <font color="red">red by name</font>, <font color="#408010">leaf green by hexadecimal RGB</font> <br/> <font size=1>size 1</font>, <font size=2>size 2</font>, <font size=3>size 3</font>, <font size=4>size 4</font>, <font size=5>size 5</font>, <font size=6>size 6</font>, <font size=7>size 7</font> <br/> <font face=times>Times</font>, <font face=verdana>Verdana</font>, <font face=arial>Arial</font><br/> <hr/> Links: <br/> <a href="http://earth.google.com/">Google Earth!</a> <br/> or: Check out our website at www.google.com <hr/> Alignment:<br/> <p align=left>left</p> <p align=center>center</p> <p align=right>right</p> <hr/> Ordered Lists:<br/> <ol><li>First</li><li>Second</li><li>Third</li></ol> <ol type="a"><li>First</li><li>Second</li><li>Third</li></ol> <ol type="A"><li>First</li><li>Second</li><li>Third</li></ol> <hr/> Unordered Lists:<br/> <ul><li>A</li><li>B</li><li>C</li></ul> <ul type="circle"><li>A</li><li>B</li><li>C</li></ul> <ul type="square"><li>A</li><li>B</li><li>C</li></ul> <hr/> Definitions:<br/> <dl> <dt>Google:</dt><dd>The best thing since sliced bread</dd> </dl> <hr/> Centered:<br/><center> Time present and time past<br/> Are both perhaps present in time future,<br/> And time future contained in time past.<br/> If all time is eternally present<br/> All time is unredeemable.<br/> </center> <hr/> Block Quote: <br/> <blockquote> We shall not cease from exploration<br/> And the end of all our exploring<br/> Will be to arrive where we started<br/> And know the place for the first time.<br/> <i>-- T.S. 
Eliot</i> </blockquote> <br/> <hr/> Headings:<br/> <h1>Header 1</h1> <h2>Header 2</h2> <h3>Header 3</h3> <h3>Header 4</h4> <h3>Header 5</h5> <hr/> Images:<br/> <i>Remote image</i><br/> <img src="http://code.google.com/apis/kml/documentation/googleSample.png"><br/> <i>Scaled image</i><br/> <img src="http://code.google.com/apis/kml/documentation/googleSample.png" width=100><br/> <hr/> Simple Tables:<br/> <table border="1" padding="1"> <tr><td>1</td><td>2</td><td>3</td><td>4</td><td>5</td></tr> <tr><td>a</td><td>b</td><td>c</td><td>d</td><td>e</td></tr> </table> <br/>]]></description> <Point> <coordinates>-122,37,0</coordinates> </Point> </Placemark> </Folder> <Folder> <name>Google Campus - Polygons</name> <open>0</open> <description>A collection showing how easy it is to create 3-dimensional buildings</description> <Placemark> <name>Building 40</name> <styleUrl>#transRedPoly</styleUrl> <Polygon> <extrude>1</extrude> <altitudeMode>relativeToGround</altitudeMode> <outerBoundaryIs> <LinearRing> <coordinates> -122.0848938459612,37.42257124044786,17 -122.0849580979198,37.42211922626856,17 -122.0847469573047,37.42207183952619,17 -122.0845725380962,37.42209006729676,17 -122.0845954886723,37.42215932700895,17 -122.0838521118269,37.42227278564371,17 -122.083792243335,37.42203539112084,17 -122.0835076656616,37.42209006957106,17 -122.0834709464152,37.42200987395161,17 -122.0831221085748,37.4221046494946,17 -122.0829247374572,37.42226503990386,17 -122.0829339169385,37.42231242843094,17 -122.0833837359737,37.42225046087618,17 -122.0833607854248,37.42234159228745,17 -122.0834204551642,37.42237075460644,17 -122.083659133885,37.42251292011001,17 -122.0839758438952,37.42265873093781,17 -122.0842374743331,37.42265143972521,17 -122.0845036949503,37.4226514386435,17 -122.0848020460801,37.42261133916315,17 -122.0847882750515,37.42256395055121,17 -122.0848938459612,37.42257124044786,17 </coordinates> </LinearRing> </outerBoundaryIs> </Polygon> </Placemark> <Placemark> <name>Building 41</name> <styleUrl>#transBluePoly</styleUrl> <Polygon> <extrude>1</extrude> <altitudeMode>relativeToGround</altitudeMode> <outerBoundaryIs> <LinearRing> <coordinates> -122.0857412771483,37.42227033155257,17 -122.0858169768481,37.42231408832346,17 -122.085852582875,37.42230337469744,17 -122.0858799945639,37.42225686138789,17 -122.0858860101409,37.4222311076138,17 -122.0858069157288,37.42220250173855,17 -122.0858379542653,37.42214027058678,17 -122.0856732640519,37.42208690214408,17 -122.0856022926407,37.42214885429042,17 -122.0855902778436,37.422128290487,17 -122.0855841672237,37.42208171967246,17 -122.0854852065741,37.42210455874995,17 -122.0855067264352,37.42214267949824,17 -122.0854430712915,37.42212783846172,17 -122.0850990714904,37.42251282407603,17 -122.0856769818632,37.42281815323651,17 -122.0860162273783,37.42244918858723,17 -122.0857260327004,37.42229239604253,17 -122.0857412771483,37.42227033155257,17 </coordinates> </LinearRing> </outerBoundaryIs> </Polygon> </Placemark> <Placemark> <name>Building 42</name> <styleUrl>#transGreenPoly</styleUrl> <Polygon> <extrude>1</extrude> <altitudeMode>relativeToGround</altitudeMode> <outerBoundaryIs> <LinearRing> <coordinates> -122.0857862287242,37.42136208886969,25 -122.0857312990603,37.42136935989481,25 -122.0857312992918,37.42140934910903,25 -122.0856077073679,37.42138390166565,25 -122.0855802426516,37.42137299550869,25 -122.0852186221971,37.42137299504316,25 -122.0852277765639,37.42161656508265,25 -122.0852598189347,37.42160565894403,25 -122.0852598185499,37.42168200156,25 
-122.0852369311478,37.42170017860346,25 -122.0852643957828,37.42176197982575,25 -122.0853239032746,37.42176198013907,25 -122.0853559454324,37.421852864452,25 -122.0854108752463,37.42188921823734,25 -122.0854795379357,37.42189285337048,25 -122.0855436229819,37.42188921797546,25 -122.0856260178042,37.42186013499926,25 -122.085937287963,37.42186013453605,25 -122.0859428718666,37.42160898590042,25 -122.0859655469861,37.42157992759144,25 -122.0858640462341,37.42147115002957,25 -122.0858548911215,37.42140571326184,25 -122.0858091162768,37.4214057134039,25 -122.0857862287242,37.42136208886969,25 </coordinates> </LinearRing> </outerBoundaryIs> </Polygon> </Placemark> <Placemark> <name>Building 43</name> <styleUrl>#transYellowPoly</styleUrl> <Polygon> <extrude>1</extrude> <altitudeMode>relativeToGround</altitudeMode> <outerBoundaryIs> <LinearRing> <coordinates> -122.0844371128284,37.42177253003091,19 -122.0845118855746,37.42191111542896,19 -122.0850470999805,37.42178755121535,19 -122.0850719913391,37.42143663023161,19 -122.084916406232,37.42137237822116,19 -122.0842193868167,37.42137237801626,19 -122.08421938659,37.42147617161496,19 -122.0838086419991,37.4214613409357,19 -122.0837899728564,37.42131306410796,19 -122.0832796534698,37.42129328840593,19 -122.0832609819207,37.42139213944298,19 -122.0829373621737,37.42137236399876,19 -122.0829062425667,37.42151569778871,19 -122.0828502269665,37.42176282576465,19 -122.0829435788635,37.42176776969635,19 -122.083217411188,37.42179248552686,19 -122.0835970430103,37.4217480074456,19 -122.0839455556771,37.42169364237603,19 -122.0840077894637,37.42176283815853,19 -122.084113587521,37.42174801104392,19 -122.0840762473784,37.42171341292375,19 -122.0841447047739,37.42167881534569,19 -122.084144704223,37.42181720660197,19 -122.0842503333074,37.4218170700446,19 -122.0844371128284,37.42177253003091,19 </coordinates> </LinearRing> </outerBoundaryIs> </Polygon> </Placemark> </Folder> <Folder> <name>LineString</name> <open>0</open> <Placemark> <LineString> <tessellate>1</tessellate> <coordinates> -112.0814237830345,36.10677870477137,0 -112.0870267752693,36.0905099328766,0 </coordinates> </LineString> </Placemark> </Folder> <Folder> <name>GroundOverlay</name> <open>0</open> <GroundOverlay> <name>Large-scale overlay on terrain</name> <description>Overlay shows Mount Etna erupting on July 13th, 2001.</description> <Icon> <href>http://code.google.com/apis/kml/documentation/etna.jpg</href> </Icon> <LatLonBox> <north>37.91904192681665</north> <south>37.46543388598137</south> <east>15.35832653742206</east> <west>14.60128369746704</west> </LatLonBox> </GroundOverlay> </Folder> <Folder> <name>ScreenOverlays</name> <open>0</open> <ScreenOverlay> <name>screenoverlay_dynamic_top</name> <visibility>0</visibility> <Icon> <href>http://code.google.com/apis/kml/documentation/dynamic_screenoverlay.jpg</href> </Icon> <overlayXY x="0" y="1" xunits="fraction" yunits="fraction"/> <screenXY x="0" y="1" xunits="fraction" yunits="fraction"/> <rotationXY x="0" y="0" xunits="fraction" yunits="fraction"/> <size x="1" y="0.2" xunits="fraction" yunits="fraction"/> </ScreenOverlay> <ScreenOverlay> <name>screenoverlay_dynamic_right</name> <visibility>0</visibility> <Icon> <href>http://code.google.com/apis/kml/documentation/dynamic_right.jpg</href> </Icon> <overlayXY x="1" y="1" xunits="fraction" yunits="fraction"/> <screenXY x="1" y="1" xunits="fraction" yunits="fraction"/> <rotationXY x="0" y="0" xunits="fraction" yunits="fraction"/> <size x="0" y="1" xunits="fraction" yunits="fraction"/> 
</ScreenOverlay> <ScreenOverlay> <name>Simple crosshairs</name> <visibility>0</visibility> <description>This screen overlay uses fractional positioning to put the image in the exact center of the screen</description> <Icon> <href>http://code.google.com/apis/kml/documentation/crosshairs.png</href> </Icon> <overlayXY x="0.5" y="0.5" xunits="fraction" yunits="fraction"/> <screenXY x="0.5" y="0.5" xunits="fraction" yunits="fraction"/> <rotationXY x="0.5" y="0.5" xunits="fraction" yunits="fraction"/> <size x="0" y="0" xunits="pixels" yunits="pixels"/> </ScreenOverlay> <ScreenOverlay> <name>screenoverlay_absolute_topright</name> <visibility>0</visibility> <Icon> <href>http://code.google.com/apis/kml/documentation/top_right.jpg</href> </Icon> <overlayXY x="1" y="1" xunits="fraction" yunits="fraction"/> <screenXY x="1" y="1" xunits="fraction" yunits="fraction"/> <rotationXY x="0" y="0" xunits="fraction" yunits="fraction"/> <size x="0" y="0" xunits="fraction" yunits="fraction"/> </ScreenOverlay> <ScreenOverlay> <name>screenoverlay_absolute_topleft</name> <visibility>0</visibility> <Icon> <href>http://code.google.com/apis/kml/documentation/top_left.jpg</href> </Icon> <overlayXY x="0" y="1" xunits="fraction" yunits="fraction"/> <screenXY x="0" y="1" xunits="fraction" yunits="fraction"/> <rotationXY x="0" y="0" xunits="fraction" yunits="fraction"/> <size x="0" y="0" xunits="fraction" yunits="fraction"/> </ScreenOverlay> <ScreenOverlay> <name>screenoverlay_absolute_bottomright</name> <visibility>0</visibility> <Icon> <href>http://code.google.com/apis/kml/documentation/bottom_right.jpg</href> </Icon> <overlayXY x="1" y="-1" xunits="fraction" yunits="fraction"/> <screenXY x="1" y="0" xunits="fraction" yunits="fraction"/> <rotationXY x="0" y="0" xunits="fraction" yunits="fraction"/> <size x="0" y="0" xunits="fraction" yunits="fraction"/> </ScreenOverlay> <ScreenOverlay> <name>screenoverlay_absolute_bottomleft</name> <visibility>0</visibility> <Icon> <href>http://code.google.com/apis/kml/documentation/bottom_left.jpg</href> </Icon> <overlayXY x="0" y="-1" xunits="fraction" yunits="fraction"/> <screenXY x="0" y="0" xunits="fraction" yunits="fraction"/> <rotationXY x="0" y="0" xunits="fraction" yunits="fraction"/> <size x="0" y="0" xunits="fraction" yunits="fraction"/> </ScreenOverlay> </Folder> </Document> </kml>

    And my code is:

        function initialize() {
          if (GBrowserIsCompatible()) {
            var map = new GMap2(document.getElementById("map_canvas"));
            var center = new GLatLng(39.9493, 116.3975);
            map.setCenter(center, 13);
            // Place the KML on the map
            var geoXml = new GGeoXml("SamplesInMaps.kml");
            map.addOverlay(geoXml);
          }
        }

    But I haven't been successful. Do you know how to do this? Thanks.

    Read the article

  • Amazon Product Advertising API SOAP Namespace Changes

    - by Rick Strahl
    About two months ago (towards the end of February 2012, I think) Amazon decided to change the namespace of the Product Advertising API. The error that would come up was: <ItemSearchResponse > was not expected. If you've used the Amazon Product Advertising API you probably know that Amazon has made it a habit to break the service every few years or so, and I guess last month was about the time for another one. Basically, the service namespace of the document was changed, and responses from the service just failed outright even though the rest of the schema looks fine. Now I looked around for a while trying to find a recent update to the Product Advertising API - something semi-official looking - but everything is dated around 2009. Really??? And it's not just .NET - the newest thing among the samples/APIs is dated early 2011, plus a handful of 2010 samples. There are newer full APIs for the 'cloud' offerings, but the Product Advertising API apparently isn't part of that. After searching for quite a bit trying to trace this down myself and trying some of the newer samples (which also failed), I found an obscure forum post that describes the solution to getting past the namespace issue. FWIW, I've been using an old version of the Product Advertising API built on the old Microsoft WSE3 services (pre-WCF), which provide some of the WS* security features required by the Amazon service. The fix for this code is to explicitly override the namespace declaration on each of the imported service method signatures. The old service namespace (at least on my build) was http://webservices.amazon.com/AWSECommerceService/2009-03-31 and it should be changed to http://webservices.amazon.com/AWSECommerceService/2011-08-01. Change it on the class header:

        [Microsoft.Web.Services3.Messaging.SoapService("http://webservices.amazon.com/AWSECommerceService/2011-08-01")]
        [System.Xml.Serialization.XmlIncludeAttribute(typeof(Property[]))]
        [System.Xml.Serialization.XmlIncludeAttribute(typeof(BrowseNode[]))]
        [System.Xml.Serialization.XmlIncludeAttribute(typeof(TransactionItem[]))]
        public partial class AWSECommerceService : Microsoft.Web.Services3.Messaging.SoapClient
        {

    and on all method signatures:

        [Microsoft.Web.Services3.Messaging.SoapMethodAttribute("http://soap.amazon.com/ItemSearch")]
        [return: System.Xml.Serialization.XmlElementAttribute("ItemSearchResponse", Namespace="http://webservices.amazon.com/AWSECommerceService/2011-08-01")]
        public ItemSearchResponse ItemSearch(ItemSearch ItemSearch1)
        {
            Microsoft.Web.Services3.SoapEnvelope results = base.SendRequestResponse("ItemSearch", ItemSearch1);
            return ((ItemSearchResponse)(results.GetBodyObject(typeof(ItemSearchResponse), this.SoapServiceAttribute.TargetNamespace)));
        }

    It's easy to do with a Search and Replace on the above strings.

    Amazon Services

    <rant> FWIW, I've not been impressed by Amazon's service offerings. While the services work well, their documentation and tool support is absolutely horrendous. I was recently working with a customer on an old AWS application, and their old API had been completely replaced with a new API that wasn't even a close match. One old API call resulted in requiring three different APIs to perform the same functionality. We had to re-write the entire piece from scratch, essentially. The documentation was downright wrong and incomplete, and so scattered it was next to impossible to follow.
    The examples weren't examples at all - they were mockups of real service calls with fake data that didn't even provide everything required to make the same service calls work. Additionally, there appears to be just about no public support from Amazon, only peer support, which is sparse at best - and getting hold of somebody at Amazon, even for pay, seems to be a mythical task. It's a terrible business model they have going. I can't see why anybody would put themselves through this sort of customer and development experience. Sad really, but an experience we see more and more these days. Nobody puts in the time to document anything anymore, leaving it to devs to figure this stuff out over and over again… </rant>

    © Rick Strahl, West Wind Technologies, 2005-2012. Posted in CSharp, Web Services.
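    For completeness, calling the regenerated proxy after the namespace fix might look like the sketch below; only the type and method names shown in the code above are taken from the post, and the request population is elided:

        // Hypothetical usage of the fixed WSE3 proxy shown above.
        var service = new AWSECommerceService();
        var request = new ItemSearch();
        // ... populate the request per the 2011-08-01 WSDL (access key,
        // associate tag, search criteria) ...
        ItemSearchResponse response = service.ItemSearch(request);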

    Read the article

  • PHP Web Services - Nice try

    Thanks to its membership in the O'Reilly User Group Programme, the Mauritius Software Craftsmanship Community (short: MSCC) recently received a welcome package with several book titles. Among them is the latest publication by Lorna Jane Mitchell - 'PHP Web Services: APIs for the Modern Web'. Following is the book review I put on Amazon: Nice try! Initially, I was astonished that a small book like 'PHP Web Services' would be able to cover all the interesting topics about APIs and Web Services, independently of whether they are written in PHP or not. And unfortunately, the title isn't able to live up to the reader's (or at least my) expectations. In its slight defense, there is no usual paragraph about the intended audience of the book, but still I have to admit that the first half (chapters 1 to 8) is well written and Lorna makes her points on the various technologies. Also, the code samples in PHP are clean and easy to understand. With the chapter 'Debugging Web Services', the book started to change my mind about the clarity of its advice and its instructions on designing and developing good APIs. Admittedly, this might be because I have been using other tools for years, like Telerik Fiddler as an HTTP proxy to trace and inspect any kind of request/response handling, including localhost monitoring, SSL certificate acceptance, and the ability to debug mobile devices, especially iOS-based ones. Compared to Charles, Fiddler is available for free. What really put me off is the following statement in chapter 10 about Service Type Decisions: "For users who have larger systems using technology stacks such as Java, C++, or .NET, it may be easier for them to integrate with a SOAP service." WHAT? A couple of pages earlier, the author recommends staying away from 'old-fashioned' API styles like SOAP (if possible). And on top of that, I wonder why there are tons of documentation on developing RESTful Web Services based on Web API: the ASP.NET stack has clearly been moving away from SOAP to JSON and REST for years! Honestly, as a software developer on the .NET stack, this leaves mixed feelings after all. As for the remaining chapters, I simply consider them 'blah blah' with lots of theoretical advice and little real value. Related to chapter 13 about 'Documentation', I recently had the 'pleasure' of writing a C#-based client against a Java-based SOAP Web Service. Personally, I take the WSDL as the master reference in the first place, and Visual Studio generates all the stub types involved in the communication. During implementation and testing, I came across a 'java.lang.NullPointerException' in various methods and for various method parameters. The WSDL and the generated types were declared as Nullable, so nothing to worry about, right? Well, I logged a support ticket, and guess what the response to that scenario was? "The service definition in the WSDL is wrong, please refer to the documentation in order to use the methods and parameters correctly" - no comment! Lorna's title is a quick read, and in some areas she has good advice on designing and implementing Web Services and APIs. But roughly 100 pages aren't enough to cover a vast topic like that. After all, nice try, and I'm looking forward to an improved second edition. Honestly, I never thought that I would end up writing such a critical review. In general, it's a good book, but it clearly lacks depth, the PHP code samples are incomplete (closing tags missing), and there are too many assumptions and theoretical statements.

    Read the article

  • Is Appcelerator Titanium now banned on the iPhone?

    - by altuzar
    This question has been answered quite clearly for MonoTouch here: http://stackoverflow.com/questions/2604033/is-monotouch-now-banned-on-the-iphone But what about Appcelerator Titanium? The new TOS from Apple for their iPhone OS 4: 3.3.1 — Applications may only use Documented APIs in the manner prescribed by Apple and must not use or call any private APIs. Applications must be originally written in Objective-C, C, C++, or JavaScript as executed by the iPhone OS WebKit engine, and only code written in C, C++, and Objective-C may compile and directly link against the Documented APIs (e.g., Applications that link to Documented APIs through an intermediary translation or compatibility layer or tool are prohibited). Titanium uses JavaScript, but it is not executed by the iPhone OS WebKit engine directly. On their developer blog, Jeff Haynie says Titanium is in the clear, but I don't know if they are in denial: "It’s our belief that we are fully in compliance with iPhone OS 4.0 ToS as we interpret them." I haven't found any official word from Apple, only opinions, and I'm quite confused. I'm not writing another line of code for my app until... you know.

    Read the article

  • What is the difference: LoadUserProfile -vs- RegOpenCurrentUser

    - by Will5801
    These two APIs are very similar, but it is unclear what the differences are and when each should be used (except that LoadUserProfile is specified for use with CreateProcessAsUser, which I am not using - I am simply impersonating for hive access). LoadUserProfile: http://msdn.microsoft.com/en-us/library/bb762281(VS.85).aspx RegOpenCurrentUser: http://msdn.microsoft.com/en-us/library/ms724894(VS.85).aspx According to the Services & the Registry article (http://msdn.microsoft.com/en-us/library/ms685145(VS.85).aspx) we should use RegOpenCurrentUser when impersonating. But what does/should RegOpenCurrentUser do if the user profile is roaming - should it load it? As far as I can tell from these docs, both APIs provide a handle to HKEY_CURRENT_USER for the user the thread is impersonating. Therefore, they both "load" the hive, i.e. lock it as a database file and give a handle to it for the registry APIs. It might seem that LoadUserProfile loads the user profile in the same way as the user does when he/she logs on, whereas RegOpenCurrentUser does not - is this correct? What is the fundamental difference (if any) in how these two APIs mount the hive? And what are the implications and differences (if any) if:
      - a user logs on or logs off while each of these impersonated handles is already in use?
      - a user is already logged on when each matching close function (RegCloseKey and UnloadUserProfile) is called?
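    For what it's worth, here is a minimal C# P/Invoke sketch of the RegOpenCurrentUser route while impersonating; the wrapper name is mine, and whether a roaming profile gets pulled down is exactly the open question above - the call itself only maps the hive of the impersonated token, per the linked docs:

        using System;
        using System.Runtime.InteropServices;

        internal static class ImpersonatedHive
        {
            private const int KEY_READ = 0x20019;

            [DllImport("advapi32.dll")]
            private static extern int RegOpenCurrentUser(int samDesired, out IntPtr phkResult);

            [DllImport("advapi32.dll")]
            private static extern int RegCloseKey(IntPtr hKey);

            // Call on a thread that is already impersonating: maps
            // HKEY_CURRENT_USER for the impersonated token, not the process user.
            public static void WithCurrentUserHive(Action<IntPtr> use)
            {
                IntPtr hKey;
                int err = RegOpenCurrentUser(KEY_READ, out hKey);
                if (err != 0) throw new System.ComponentModel.Win32Exception(err);
                try { use(hKey); }
                finally { RegCloseKey(hKey); } // the matching close, per the question
            }
        }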

    Read the article

  • In C/C++, how to link a dynamic link library compiled with GCC/G++ in MS Visual Studio?

    - by coanor
    These days, I have used Flex & Bison to generate some code for developing a SQL-parser-like tool. This code can't be compiled cleanly in VS2005 (that may be another topic), but GCC/G++ works well, so I compiled the code into a DLL with MinGW (on Windows XP) and then tried to link against these function facades in VS2005 - but the linker can't seem to resolve the DLL during linking. Does MS VS2005 recognize a DLL compiled with MinGW on Windows? Is there anything additional I need to do - for example, adding something to the include file that declares the exported APIs? Can anyone give some advice? The situation is: in VS2005, if you want to export some APIs, you may supply a *.def file to tell nmake which APIs you want to export, and then you may create one (or several) *.h files to declare things about these APIs (adding a stdcall-like prefix as the calling convention) and some data-type definitions. But with GCC/G++ you don't need to do such boring things; just using [ar], you can get these APIs. So my *.h file adds no calling convention and there is no *.def, just common function declarations. After the *.dll was generated, I added the *.h file, [mv]-ed the generated *.dll into the VS2005 project directory, and then configured linking against the *.dll in the project settings. Could these steps have caused my problem? BTW, I found and tested that a VC6-compiled DLL can be linked with MinGW on Windows XP, but the reverse doesn't work. Anyway, forgive my poor English, and thanks for your concern.

    Read the article

  • Cloud MBaaS : The Next Big Thing in Enterprise Mobility

    - by shiju
    In this blog post, I will take a look at Cloud Mobile Backend as a Service (MBaaS) and how we can leverage a Cloud-based Mobile Backend as a Service for building enterprise mobile apps. Today, mobile apps are incredibly significant in both the consumer and enterprise space, and the demand for mobile apps in day-to-day business is increasing tremendously. An enterprise can't survive in business without a proper mobility strategy. A better mobility strategy and faster delivery of your mobile apps will give you extra mileage for your business and IT strategy. So organizations and mobile developers are looking at different strategies for meeting this demand and adopting different development strategies for their mobile apps. Some developers are adopting hybrid mobile app development platforms to deliver their products for multiple platforms with fast time-to-market. Others are adopting a Mobile Enterprise Application Platform (MEAP) such as Kony for their enterprise mobile apps, for fast time-to-market and better business integration.

    The Challenges of Enterprise Mobility

    The real challenge of enterprise mobile apps is not creating the front-end environment or developing the front-end for multiple platforms. The most important part of enterprise mobile apps is exposing your enterprise data to mobile devices, and the real pain is that your business data might reside in lots of different systems, including legacy systems, ERP systems etc., deployed with lots of security restrictions. Exposing data from on-premises servers is not an easy thing for most business organizations. Many organizations spend too much time on their front-end development strategy, but they really lack a strategy for the back-end that exposes business data to mobile apps. So building a REST services layer and mobile back-end services on top of legacy systems and existing middleware is the key part of most enterprise mobile apps, where multiple mobile platforms can easily consume these REST services and other mobile back-end services. For some mobile apps, we can't predict the user base, especially for products whose customer base can grow at any time. And for today's mobile apps, faster time-to-market is critical, so spending too much time on a mobile app's scalability will not be worth it. The real power of Cloud is agility and on-demand scalability, where we can scale up and scale down our applications very easily; it would be great if we could bring that power to mobile apps. Using Cloud for mobile apps is a natural fit: we can use Cloud as the storage for mobile apps and as the hosting mechanism for mobile back-end services, enjoying the full power of Cloud with a great level of on-demand scalability and operational agility. So a Cloud-based Mobile Backend as a Service is a great choice for building enterprise mobile apps, where enterprises can enjoy massive scalability for their mobile apps, provided by public cloud vendors such as Microsoft Windows Azure.

    Mobile Backend as a Service (MBaaS)

    We have discussed the key challenges of enterprise mobile apps and how we can leverage Cloud for hosting mobile backend services. MBaaS is a set of cloud-based, server-side mobile services for multiple mobile platforms and the HTML5 platform, which can be used as a backend for your mobile apps with the scalability power of Cloud.
    The information below lists the key features of a typical MBaaS platform:
      - Cloud-based storage for your application data.
      - Automatic REST API services over the application data, for CRUD operations.
      - Native push notification services with massive scalability.
      - User management services for authenticating users.
      - User authentication via social accounts such as Facebook, Google, Microsoft, and Twitter.
      - Scheduler services for periodically sending data to mobile devices.
      - Native SDKs for multiple mobile platforms such as Windows Phone and Windows Store, Android, Apple iOS, and HTML5, for easily and securely accessing the mobile services from mobile apps.
    Typically, an MBaaS platform provides native SDKs for multiple mobile platforms so that we can easily consume the server-side mobile services. MBaaS-based REST APIs can be used for integrating with enterprise backend systems. We can use the same mobile services for multiple platforms, so we can reuse the application logic across mobile platforms. Public cloud vendors are building mobile services on top of their PaaS offerings. Windows Azure Mobile Services is a great MBaaS offering that leverages the Windows Azure Cloud platform's PaaS capabilities. The hybrid mobile development platform Titanium provides its own MBaaS services. LoopBack is a new MBaaS service provided by the Node.js consulting firm StrongLoop, which can be hosted on multiple cloud platforms and also on on-premises servers.

    The Challenges of MBaaS Solutions

    If you are building your mobile apps against new data storage, it will be very easy, since there are no integration challenges to face. But in most use cases, you have to extract application data that is stored on on-premises servers, which might be behind VPNs and firewalls, so exposing this data to your MBaaS solution with proper security would be a big challenge. The capability of your MBaaS vendor is very important, as you have to interact with legacy systems for many enterprise mobile apps, so you should be very careful about choosing an MBaaS vendor. At the same time, you should have a proper strategy for mobilizing the application data stored in on-premises legacy systems, where your solution architecture and strategy are more important than platforms and tools.

    Windows Azure Mobile Services

    Windows Azure Mobile Services is an MBaaS offering from the Windows Azure cloud platform. IMHO, Microsoft Windows Azure is the best PaaS platform in the Cloud space. Windows Azure Mobile Services extends the PaaS capabilities of Windows Azure to mobile devices; it can be used as a cloud backend for your mobile apps, providing global availability and reach. Windows Azure Mobile Services provides storage services, user management with social network integration, push notification services and scheduler services, and offers native SDKs for all major mobile platforms and HTML5. In Windows Azure Mobile Services, you can write server-side scripts in Node.js, where you can enjoy the full power of Node.js, including the use of NPM modules in your server-side scripts. In the previous section, we discussed some challenges of MBaaS solutions; you can leverage the Windows Azure Cloud platform to solve many of these enterprise mobility challenges. The entire Windows Azure platform can play a key role as the backend for your mobile apps.
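    As a concrete taste of that programming model, here is a minimal hypothetical sketch using the Windows Azure Mobile Services .NET client SDK; the service URL, application key, and TodoItem table are placeholders, not from the post:

        using System.Threading.Tasks;
        using Microsoft.WindowsAzure.MobileServices; // NuGet: WindowsAzure.MobileServices

        // Hypothetical record; by convention it maps to a 'TodoItem' table
        // created in the mobile service.
        public class TodoItem
        {
            public string Id { get; set; }
            public string Text { get; set; }
        }

        public static class MobileBackendDemo
        {
            private static readonly MobileServiceClient Client =
                new MobileServiceClient("https://yourservice.azure-mobile.net/",
                                        "your-application-key");

            public static async Task AddItemAsync(string text)
            {
                // GetTable<T> exposes the automatic REST endpoint over the table
                var table = Client.GetTable<TodoItem>();
                await table.InsertAsync(new TodoItem { Text = text });
            }
        }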
    With Windows Azure, you can easily connect to your on-premises systems, which is a key requirement for mobile backend solutions. Another key point is that Windows Azure provides better integration with services like Active Directory, which makes Windows Azure the de facto platform for enterprise mobility for enterprises that have been leveraging the Microsoft ecosystem for their applications and IT infrastructure. Windows Azure Mobile Services is moving to its next evolution, and you can expect some exciting features in the near future. One area where Windows Azure Mobile Services definitely needs improvement is the default storage mechanism, which currently depends on SQL Server. IMHO, developers should be able to choose among multiple default storage options when creating a new mobile service instance. Say there were different storage providers, such as a SQL Server storage provider and a Table storage provider: developers should be able to choose their preferred provider when creating a new mobile services project. I have used Windows Azure and Windows Azure Mobile Services as the backend for production mobile apps, where they performed very well.

    MBaaS Over MEAP

    Recently, many larger enterprises have adopted a Mobile Enterprise Application Platform (MEAP) for their mobile apps. I haven't worked on any production MEAP solution, but I hear that developers are really struggling with MEAP in various ways. The learning curve for a proprietary MEAP platform is very high, and I am completely against using a large proprietary ecosystem for mobile apps. For enterprise mobile apps, I highly recommend using native iOS/Android/Windows Phone or HTML5 for the front-end, with a cloud-hosted MBaaS solution as the middleware. An MBaaS service can be consumed from multiple mobile apps, with REST APIs used for integrating with enterprise backend systems. Enterprise mobility should start with exposing REST APIs over the enterprise backend systems, and these REST APIs can be hosted on Cloud, where we can enjoy the power of Cloud for our services. If you have REST APIs for your enterprise data, then you can easily build mobile front-ends for multiple platforms. You can follow me on Twitter @shijucv

    Read the article

  • Where does ASP.NET Web API Fit?

    - by Rick Strahl
    With the pending release of ASP.NET MVC 4 and the new ASP.NET Web API, there has been a lot of discussion of where the new Web API technology fits in the ASP.NET Web stack. There are a lot of choices for building HTTP based applications available now on the stack - we've come a long way from when WebForms and Http Handlers/Modules were the only real options. Today we have WebForms, MVC, ASP.NET Web Pages, ASP.NET AJAX, WCF REST and now Web API, as well as the core ASP.NET runtime, to choose from to build HTTP content with. Web API squarely addresses the 'API' aspect - building consumable services - rather than HTML content, but even to that end there are a lot of choices you have today. So where does Web API fit, and when doesn't it? But before we get into that discussion, let's talk about what a Web API is and why we should care.

    What's a Web API?

    HTTP 'APIs' (Microsoft's new terminology for a service, I guess) are becoming increasingly important with the rise of the many devices in use today. Most mobile devices like phones and tablets run apps that use data retrieved from the Web over HTTP. Desktop applications are also moving in this direction, with more and more online content and syncing making their way into even traditional desktop applications. The pending Windows 8 release promises an app-like platform for both the desktop and other devices that also emphasizes consuming data from the Cloud. Likewise, many Web browser hosted applications these days rely on rich client functionality to create and manipulate the browser user interface, using AJAX rather than server generated HTML to load up the user interface with data. These mobile or rich Web applications use their HTTP connection to return data rather than HTML markup, typically in the form of JSON or XML. But an API can also serve other kinds of data, like images or other binary files, or even text data and HTML (although that's less common). A Web API is what feeds rich applications with data. ASP.NET Web API aims to serve this particular segment of Web development by providing easy semantics to route and handle incoming requests and an easy to use platform to serve HTTP data in just about any content format you choose to create and serve from the server.

    But .NET already has various HTTP Platforms

    The .NET stack already includes a number of technologies that provide the ability to create HTTP service back ends, and it has done so since the very beginnings of the .NET platform. From raw HTTP Handlers and Modules in the core ASP.NET runtime, to high level platforms like ASP.NET MVC, Web Forms, ASP.NET AJAX and the WCF REST engine (which technically is not ASP.NET, but can integrate with it), you've always been able to handle just about any kind of HTTP request and response with ASP.NET. The beauty of the raw ASP.NET platform is that it provides you everything you need to build just about any type of HTTP application you can dream up, from low level APIs/custom engines to high level HTML generation engines. ASP.NET as a core platform has clearly stood the test of time 10+ years later, and all other frameworks like Web API are built on top of this ASP.NET core. However, although it's possible to create Web APIs / Services using any of the existing out-of-box .NET technologies, none of them have been a really nice fit for building arbitrary HTTP based APIs.
    Sure, you can use an HttpHandler to create just about anything, but you have to build a lot of plumbing to build something more complex, like a comprehensive API that serves a variety of requests, handles multiple output formats, and can easily pass data up to the server in a variety of ways. Likewise you can use ASP.NET MVC to handle routing and to create content in various formats fairly easily, but it doesn't provide a great way to automatically negotiate content types and serve various content formats directly (it's possible with some plumbing code of your own, but not built in). Prior to Web API, Microsoft's main push for HTTP services was WCF REST, which was always an awkward technology with a severe personality conflict: it was never clear whether it wanted to be part of WCF or a purely separate technology. In the end it did neither WCF compatibility nor WCF-agnostic pure HTTP operation very well, which made for a very developer-unfriendly environment. Personally I didn't like any of the implementations at the time, so much so that I ended up building my own HTTP service engine (as part of the West Wind Web Toolkit), as did a few other third parties whose tools provided much better integration and ease of use. With the release of Web API, for the first time I feel that I can finally use the tools in the box and not have to worry about creating and maintaining my own toolkit, as Web API addresses just about all the features I implemented on my own, and much more.

    ASP.NET Web API provides a better HTTP Experience

    ASP.NET Web API differentiates itself from the previous Microsoft in-box HTTP service solutions in that it was built from the ground up around the HTTP protocol and its messaging semantics. Unlike WCF REST or ASP.NET AJAX with ASMX, it’s a brand new platform rather than a bolted-on technology that is supposed to work in the context of an existing framework. The strength of the new ASP.NET Web API is that it combines the best features of the platforms that came before it into a comprehensive and very usable HTTP platform. Because it's based on ASP.NET and borrows a lot of concepts from ASP.NET MVC, Web API should be immediately familiar and comfortable to most ASP.NET developers. Here are some of the features that Web API provides that I like:
      - Strong support for URL routing to produce clean URLs, using familiar MVC style routing semantics
      - Content negotiation based on Accept headers for request and response serialization
      - Support for a host of output formats including JSON, XML, ATOM
      - Strong default support for REST semantics, though they are optional
      - Easily extensible Formatter support to add new input/output types
      - Deep support for more advanced HTTP features via the HttpResponseMessage and HttpRequestMessage classes and strongly typed Enums to describe many HTTP operations
      - Convention based design that drives you into doing the right thing for HTTP services
      - Very extensible, based on an MVC-like extensibility model of Formatters and Filters
      - Self-hostable in non-Web applications
      - Testable using testing concepts similar to MVC
    Web API is meant to handle any kind of HTTP input and produce output and status codes using the full spectrum of HTTP functionality available, in a straightforward and flexible manner. Looking at the list above, you can see that a lot of this functionality is very similar to ASP.NET MVC, so many ASP.NET developers should feel quite comfortable with the concepts of Web API.
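    To make those conventions concrete, here is a minimal sketch of a Web API controller; it assumes the default 'api/{controller}/{id}' route template, and the controller and data names are illustrative only:

        using System.Collections.Generic;
        using System.Web.Http;

        // GET /api/albums returns the list; content negotiation picks JSON
        // or XML based on the request's Accept header, with no extra code.
        public class AlbumsController : ApiController
        {
            private static readonly List<string> Albums =
                new List<string> { "Album One", "Album Two" };

            // Convention: a method name starting with 'Get' maps to HTTP GET
            public IEnumerable<string> Get()
            {
                return Albums;
            }

            // Convention: 'Post' maps to HTTP POST; [FromBody] binds the body
            public void Post([FromBody] string album)
            {
                Albums.Add(album);
            }
        }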
    The routing and core infrastructure of Web API are very similar to how MVC works, providing many of the benefits of MVC, but with a focus on HTTP access and manipulation in controller methods rather than on HTML generation. There's much improved support for content negotiation based on HTTP Accept headers, with the framework capable of automatically detecting what content the client is sending and requesting, and serving the appropriate data format in return. This seems like such a little and obvious thing, but it's really important. Today's service backends are often used by multiple clients/applications, and being able to choose the data format that fits each client best is very important. While previous solutions could accomplish this using a variety of mixed features of WCF and ASP.NET, Web API combines all this functionality into a single robust server side HTTP framework that intrinsically understands HTTP semantics and subtly drives you in the right direction for most operations. And when you need to customize or do something that is not built in, there are lots of hooks and overrides for most behaviors, and even many low level hook points that allow you to plug in custom functionality with relatively little effort.

    No Brainers for Web API
    There are a few scenarios that are a slam dunk for Web API. If the primary focus of an application, or even a part of an application, is some sort of API, then Web API makes great sense.

    HTTP Services: If you're building a comprehensive HTTP API that is to be consumed over the Web, Web API is a perfect fit. You can isolate the logic in Web API and build your application as a service, breaking out the logic into controllers as needed. Because the primary interface is the service, there's no confusion over what should go where (MVC or API). Perfect fit.

    Primary AJAX Backends: If you're building rich client Web applications that rely heavily on AJAX callbacks to serve their data, Web API is also a slam dunk. Again, because much if not most of the business logic will probably end up in your Web API service logic, there's no confusion over where logic should go and there's no duplication. In Single Page Applications (SPAs), typically very little HTML based logic is served other than bringing up a shell UI; the data is then filled in from the server with AJAX, which means the business logic for data retrieval, data acceptance and validation also lives in the Web API. Perfect fit.

    Generic HTTP Endpoints: Another good fit are generic HTTP endpoints that serve data or handle 'utility' type functionality in typical Web applications. If you need to implement an image server or an upload handler, in the past I'd implement that as an HTTP handler. With Web API you now have a well defined place to implement these types of generic 'services', in a location that can easily grow new endpoints (via controller methods) or be separated out into more full featured APIs. Granted, this could be done with MVC as well, but Web API seems a clearer and better defined place to host generic application services. This is one thing I used to do a lot of in my own libraries, and Web API addresses it nicely. Great fit.

    Mixed HTML and AJAX Applications: Not a clear Choice
    For all the commonality that Web API and MVC share, they are fundamentally different platforms that are independent of each other. A lot of people have asked when it makes sense to use MVC vs.
Web API when you're dealing with a typical Web application that creates HTML and also uses AJAX for rich functionality. While it's easy to say that all 'service'/AJAX logic should go into a Web API and all HTML related generation into MVC, that can often result in a lot of code duplication. MVC also supports JSON and XML result data fairly easily, so there's some confusion about where the 'trigger point' is - when you should switch to Web API vs. just implementing functionality as part of MVC controllers. Ultimately there's a tradeoff between isolation of functionality and duplication. A good rule of thumb, I think, is that if a large chunk of the application's functionality serves data, Web API is a good choice; but if you have a couple of small AJAX requests serving data to a grid or autocomplete box, it'd be overkill to separate that logic out into a separate Web API controller. Web API does add overhead to your application (it's yet another framework that sits on top of core ASP.NET), so it should be worth it. Keep in mind that MVC can generate HTML and JSON/XML and just about any other content easily, and that functionality is not going away - so just because Web API is there doesn't mean you have to use it. Web API is obviously not a full replacement for MVC either, since there's not the same level of support for feeding HTML from Web API controllers (although you can host a RazorEngine easily enough if you really want to go that route). So if HTML is part of your API or application in general, MVC is still a better choice, either alone or in combination with Web API. I suspect (and hope) that in the future Web API's functionality will merge even closer with MVC, so that you might even be able to mix functionality of both into single controllers and not have to make any trade-offs, but at the moment that's not the case.

    Some Issues To Think About
    Web API is similar to MVC but not the same: Although Web API looks a lot like MVC, it's not the same, and some common functionality of MVC behaves differently in Web API. For example, the way single POST variables are handled is different from MVC, and doesn't lend itself particularly well to some AJAX scenarios with POST data.

    Code Duplication: I already touched on this in the Mixed HTML and AJAX section, but if you build an MVC application that also exposes a Web API, it's quite likely that you'll end up duplicating a bunch of code and - potentially - infrastructure. You may have to create authentication logic both for an HTML application and for the Web API, which might need something different altogether. More often than not, though, the same logic is used, and there's no easy way to share it. If you implement an MVC ActionFilter and you want that same functionality in your Web API, you'll end up creating the filter twice.

    AJAX Data or AJAX HTML: On a recent post's comments, David made some really good points regarding the commonality of MVC and Web API and their place. One comment that caught my eye was a little more generic, regarding data services vs. HTML services. David says: I see a lot of merit in the combination of Knockout.js, client side templates and view models, calling Web API for a responsive UI, but sometimes late at night that still leaves me wondering why I would no longer be using some of the nice tooling and features that have evolved in MVC ;-) You know what - I can totally relate to that.
    On the last Web based mobile app I worked on, we decided to serve HTML partials to the client via AJAX for many (but not all!) things, rather than sending down raw data to inject into the DOM on the client via templating or direct manipulation. While this definitely puts more bytes on the wire, the overhead ended up being fairly small if you keep the 'data' requests small and atomic, and the cost was often made up by the lack of client side HTML rendering. Server rendered HTML for AJAX templating gives you so much better infrastructure support, without having to screw around with 20 mismatched client libraries. Especially with MVC and partials, it's pretty easy to break your HTML logic out into very small, atomic chunks, so it's easy to create small rendering islands that can be used via composition on the server, or via AJAX calls to small, tight partials that return HTML to the client. Although this approach is often frowned upon as too 'heavy', it worked really well in terms of developer effort, and it provided surprisingly good performance on devices. There's still plenty of jQuery and AJAX logic happening on the client, but it's more manageable in small doses than trying to do the entire UI composition with JavaScript and/or 'not-quite-there-yet' template engines that are very difficult to debug. This is not an issue directly related to Web API of course, but something to think about, especially for AJAX or SPA style applications.

    Summary
    Web API is a great new addition to the ASP.NET platform, and it addresses a serious need for consolidation of the many half-baked HTTP service API technologies that came before it. Web API feels 'right' and hits the right combination of usability and flexibility, at least for me, and it's a good fit for true API scenarios. However, just because a new platform is available doesn't mean that tools or tech that came before it should be discarded, or that existing apps must be upgraded to the new platform. There's nothing wrong with continuing to use MVC controller methods to handle API tasks if that's what your app is running now - there's very little to be gained by upgrading to Web API just because. But going forward, Web API clearly is the way to go when building HTTP data interfaces, and it's good to see that Microsoft got this one right - it was sorely needed!

    Resources
    ASP.NET Web API
    AspConf Ask the Experts Session (first 5 minutes)
    © Rick Strahl, West Wind Technologies, 2005-2012. Posted in Web API

    Read the article

  • Setting up and using Bing Translate API Service for Machine Translation

    - by Rick Strahl
    Last week I spent quite a bit of time trying to set up the Bing Translate API service. I can honestly say this was one of the most screwed up developer experiences I've had in a long while - specifically related to the byzantine sign up process that Microsoft has in place. Not only is it nearly impossible to find decent documentation on the required signup process, some of the links in the docs are just plain wrong, and some of the account pages you need to access the actual account information once signed up are not linked anywhere from the administration UI. To make things even harder, the APIs changed a while back, with a completely new authentication scheme that's described in a documentation topic that is itself not directly linked anywhere, making for a very frustrating search experience. It's a bummer that this is the case, because the actual API itself is easy to use and works very well - fast and reasonably accurate (as accurate as you can expect machine translation to be). But the sign up process is a pain in the ass, doubtlessly leaving many people giving up in frustration. In this post I'll try to hit all the points needed to get set up with the Bing Translate API in one place, since such a document seems to be missing from Microsoft. Hopefully the API folks at Microsoft will get their shit together and actually provide this sort of info on their site…

    Signing Up
    The first step is to create a Windows Azure Marketplace account:
    - Go to https://datamarket.azure.com/
    - Sign in with your Windows Live Id
    - If you don't have an account you will be taken to a registration page, which you have to fill out. Follow the links and complete the registration.
    Once you're signed in you can start adding services:
    - Click on the Data link on the main page
    - Select Microsoft Translator from the list
    This adds the Microsoft Bing Translator to your services.

    Pricing
    The page shows the pricing matrix, including the free tier, which provides 2 megabytes of translations a month at no charge. Prices go up steeply from there. Pricing is determined by the actual bytes of the result translations used. Translations max out at 1000 characters, so at minimum this means you get around 2000 translations a month for free. However, most translations are probably much shorter, so you can expect a larger number of translations to go through. For testing or low volume translation this should be just fine. Once signed up there are no further instructions, and you're left in limbo on the MS site.

    Register your Application
    Once you've created the data association with Translator, the next step is registering your application. To do this you need to access your developer account. Go to https://datamarket.azure.com/developer/applications/register and:
    - Provide a ClientId, which is effectively the unique string identifier for your application (not your customer id!)
    - Provide your name
    - The client secret is auto-created and becomes your 'password'
    - For the redirect url provide any https url: https://microsoft.com works
    - Give the application a description of your choice so you can identify it in the list of apps
    Now, once you've registered your application, keep track of the ClientId and ClientSecret - those are the two keys you need to authenticate before you can call the Translate API. Oddly, the applications page is hidden from the Azure Portal UI. I couldn't find a direct link from anywhere on the site back to this page where I can examine my developer application keys.
    To find them you can go to: https://datamarket.azure.com/developer/applications
    You can come back here to look at your registered applications and pick up the ClientID and ClientSecret. Fun, eh? But we're now ready to actually call the API and do some translating.

    Using the Bing Translate API
    The good news is that after this signup hell, using the API is pretty straightforward. To use the translation API you'll need to call two services: an authentication API first, and then the actual translator API. These two APIs live on different domains, and the authentication API returns JSON data while the translator service returns XML. So much for consistency.

    Authentication
    The first step is authentication. The service uses OAuth authentication with a bearer token that has to be passed to the translator API. The authentication call retrieves the OAuth token that you can then use with the translate API call. The bearer token has a short 10 minute lifetime, so while you can cache it for successive calls, the token can't be cached for long periods. This means for Web backend requests you typically will have to authenticate each time, unless you build a more elaborate caching scheme that takes the timeout into account (perhaps using the ASP.NET Cache object). For low volume operations you can probably get away with simply calling the auth API for every translation you do. To call the authentication API use code like this:

/// <summary>
/// Retrieves an OAuth authentication token to be used on the translate
/// API request. The result string needs to be passed as a bearer token
/// to the translate API.
///
/// You can find client ID and Secret (or register a new one) at:
/// https://datamarket.azure.com/developer/applications/
/// </summary>
/// <param name="clientId">The client ID of your application</param>
/// <param name="clientSecret">The client secret or password</param>
/// <returns></returns>
public string GetBingAuthToken(string clientId = null, string clientSecret = null)
{
    string authBaseUrl = "https://datamarket.accesscontrol.windows.net/v2/OAuth2-13";

    if (string.IsNullOrEmpty(clientId) || string.IsNullOrEmpty(clientSecret))
    {
        ErrorMessage = Resources.Resources.Client_Id_and_Client_Secret_must_be_provided;
        return null;
    }

    var postData = string.Format("grant_type=client_credentials&client_id={0}" +
                                 "&client_secret={1}" +
                                 "&scope=http://api.microsofttranslator.com",
                                 HttpUtility.UrlEncode(clientId),
                                 HttpUtility.UrlEncode(clientSecret));

    // POST auth data to the OAuth API
    string res, token;
    try
    {
        var web = new WebClient();
        web.Encoding = Encoding.UTF8;
        res = web.UploadString(authBaseUrl, postData);
    }
    catch (Exception ex)
    {
        ErrorMessage = ex.GetBaseException().Message;
        return null;
    }

    var ser = new JavaScriptSerializer();
    var auth = ser.Deserialize<BingAuth>(res);
    if (auth == null)
        return null;

    token = auth.access_token;
    return token;
}

private class BingAuth
{
    public string token_type { get; set; }
    public string access_token { get; set; }
}

    This code takes the client id and secret and posts them to the OAuth endpoint, which returns a JSON string. Here I use the JavaScriptSerializer to deserialize the JSON into a custom object created just for deserialization. You can also use JSON.NET and dynamic deserialization if you are already using JSON.NET in your app, in which case you don't need the extra type. In the library that houses this component I don't, so I just rely on the built in serializer. The auth method returns a long base64 encoded string which can be used as a bearer token in the translate API call.
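    Since the bearer token lives for about 10 minutes, a simple cache avoids re-authenticating on every request. The sketch below illustrates the 'ASP.NET Cache object' idea mentioned above; the GetCachedBingAuthToken wrapper name and the 8 minute expiration are my own assumptions, and it relies on the GetBingAuthToken method shown above.

// Hedged sketch: cache the bearer token below its 10 minute lifetime
// using the ASP.NET cache (System.Web.HttpRuntime). GetCachedBingAuthToken
// is a hypothetical wrapper around the GetBingAuthToken method above.
public string GetCachedBingAuthToken(string clientId, string clientSecret)
{
    const string cacheKey = "BingAuthToken";

    // Return the cached token if it hasn't expired yet
    var token = HttpRuntime.Cache[cacheKey] as string;
    if (token != null)
        return token;

    token = GetBingAuthToken(clientId, clientSecret);
    if (token != null)
    {
        // Expire well before the 10 minute token lifetime runs out
        HttpRuntime.Cache.Insert(cacheKey, token, null,
                                 DateTime.UtcNow.AddMinutes(8),
                                 System.Web.Caching.Cache.NoSlidingExpiration);
    }
    return token;
}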
    Translation
    Once you have the authentication token you can pass it to the translate API. The auth token is passed as an Authorization header, with the value prefixed by 'Bearer '. Here's what the simple Translate API call looks like:

/// <summary>
/// Uses the Bing API service to perform translation.
/// Bing can translate up to 1000 characters.
///
/// Requires that you provide a ClientId and ClientSecret
/// or set the configuration values for these two.
///
/// More info on setup:
/// http://www.west-wind.com/weblog/
/// </summary>
/// <param name="text">Text to translate</param>
/// <param name="fromCulture">Two letter culture name</param>
/// <param name="toCulture">Two letter culture name</param>
/// <param name="accessToken">Pass an access token retrieved with GetBingAuthToken.
/// If not passed the default keys from the .config file are used if any</param>
/// <returns></returns>
public string TranslateBing(string text, string fromCulture, string toCulture,
                            string accessToken = null)
{
    string serviceUrl = "http://api.microsofttranslator.com/V2/Http.svc/Translate";

    if (accessToken == null)
    {
        accessToken = GetBingAuthToken();
        if (accessToken == null)
            return null;
    }

    string res;
    try
    {
        var web = new WebClient();
        web.Headers.Add("Authorization", "Bearer " + accessToken);

        string ct = "text/plain";
        string postData = string.Format("?text={0}&from={1}&to={2}&contentType={3}",
                                        HttpUtility.UrlEncode(text),
                                        fromCulture, toCulture,
                                        HttpUtility.UrlEncode(ct));

        web.Encoding = Encoding.UTF8;
        res = web.DownloadString(serviceUrl + postData);
    }
    catch (Exception e)
    {
        ErrorMessage = e.GetBaseException().Message;
        return null;
    }

    // result is a single XML element fragment
    var doc = new XmlDocument();
    doc.LoadXml(res);
    return doc.DocumentElement.InnerText;
}

    The first part of this code deals with ensuring the auth token exists. You can either pass the token into the method manually, or let the method retrieve the token on its own. In my case I'm using this inside of a Web application, and in that situation I simply re-authenticate every time, as there's no convenient way to manage the lifetime of the auth token. The auth token is added as an Authorization HTTP header prefixed with 'Bearer ' and attached to the request. The text to translate, the from and to language codes and a result format are passed on the query string of this HTTP GET request against the Translate API. The translate API returns an XML string which contains a single element with the translated string.

    Using the Wrapper Methods
    It should be pretty obvious how to use these two methods, but here are a couple of test methods that demonstrate the two usage scenarios:

[TestMethod]
public void TranslateBingWithAuthTest()
{
    var translate = new TranslationServices();

    string clientId = DbResourceConfiguration.Current.BingClientId;
    string clientSecret = DbResourceConfiguration.Current.BingClientSecret;

    string auth = translate.GetBingAuthToken(clientId, clientSecret);
    Assert.IsNotNull(auth);

    string text = translate.TranslateBing("Hello World we're back home!", "en", "de", auth);
    Assert.IsNotNull(text, translate.ErrorMessage);
    Console.WriteLine(text);
}

[TestMethod]
public void TranslateBingIntegratedTest()
{
    var translate = new TranslationServices();
    string text = translate.TranslateBing("Hello World we're back home!", "en", "de");
    Assert.IsNotNull(text, translate.ErrorMessage);
    Console.WriteLine(text);
}

    Other API Methods
    The Translate API has a number of methods available; this one is the simplest, but probably also the most common, translating a single string.
    You can find additional methods for this API here: http://msdn.microsoft.com/en-us/library/ff512419.aspx
    SOAP and AJAX APIs are also available and documented on MSDN: http://msdn.microsoft.com/en-us/library/dd576287.aspx
    These links will be your starting points for calling other methods in this API.

    Dual Interface
    I've talked about my database driven localization provider here in the past, and it's for this tool that I added the Bing localization support. Basically I have a localization administration form that allows me to translate individual strings right out of the UI, using both the Google and Bing APIs. As you can see in this example, the results from Google and Bing can vary quite a bit - in this case Google is stumped while Bing actually generated a valid translation. At other times it's the other way around - it's pretty useful to see multiple translations at the same time. Here I can choose one of the values and directly embed it into the translated text field.

    Lost in Translation
    There you have it. As I mentioned, once you have all the bureaucratic crap out of the way, calling the APIs is fairly straightforward and reasonably fast, even if you have to call the auth API for every call. Hopefully this post will help out a few of you trying to navigate the Microsoft bureaucracy - at least until next time Microsoft upends everything and introduces new ways to sign up again. Until then - happy translating…

    Related Posts
    Translation method source on GitHub
    Translating with Google Translate without Google API Keys
    Creating a data-driven ASP.NET Resource Provider
    © Rick Strahl, West Wind Technologies, 2005-2013. Posted in Localization, ASP.NET, .NET

    Read the article

  • Can you load Google Maps API v3 via Google AJAX API loader

    - by Salman A
    Some time ago I used the regular method of loading the Google Maps API, like this: <script type="text/javascript" src="http://maps.google.com/maps?file=api&v=2&key=abcdefg&sensor=true"></script> Later I switched to the Google AJAX APIs to load the Google Maps API. This was because a couple of "widgets" on my website needed the Google AJAX API loader, so I chose to be consistent and used the AJAX APIs to load Google Maps as well: <script type="text/javascript" src="http://www.google.com/jsapi?key=abcdef"></script> <script type="text/javascript"> google.load("maps", "2", {"other_params": "sensor=true"}); </script> Now that I have finally decided to use Google Maps API v3, this page does not list API v3 in the available version list. None of the examples in the API v3 documentation show the use of the AJAX APIs either. Is it possible (and supported) to load Google Maps API v3 via the AJAX API loader?

    Read the article

  • java.lang.NoSuchMethodError: javax.persistence.OneToMany.orphanRemoval()Z

    - by Panayiotis Karabassis
    I am getting this error: java.lang.NoSuchMethodError: javax.persistence.OneToMany.orphanRemoval()Z These are the jars in my classpath: com.sun.faces/jsf-api/jars/jsf-api-2.0.0.jar com.sun.faces/jsf-impl/jars/jsf-impl-2.0.0.jar org.apache.myfaces.orchestra/myfaces-orchestra-core20/jars/myfaces-orchestra-core20-1.5-SNAPSHOT.jar commons-lang/commons-lang/jars/commons-lang-2.1.jar commons-logging/commons-logging/jars/commons-logging-1.1.1.jar org.springframework/spring/jars/spring-2.5.6.jar commons-el/commons-el/jars/commons-el-1.0.jar org.richfaces.ui/richfaces-ui/jars/richfaces-ui-3.3.3.Final.jar org.richfaces.framework/richfaces-api/jars/richfaces-api-3.3.3.Final.jar commons-collections/commons-collections/jars/commons-collections-3.2.jar commons-beanutils/commons-beanutils/jars/commons-beanutils-1.8.0.jar org.richfaces.framework/richfaces-impl-jsf2/jars/richfaces-impl-jsf2-3.3.3.Final.jar com.sun.facelets/jsf-facelets/jars/jsf-facelets-1.1.14.jar org.hibernate/hibernate-core/jars/hibernate-core-3.6.0.Final.jar antlr/antlr/jars/antlr-2.7.6.jar dom4j/dom4j/jars/dom4j-1.6.1.jar org.hibernate/hibernate-commons-annotations/jars/hibernate-commons-annotations-3.2.0.Final.jar org.slf4j/slf4j-api/jars/slf4j-api-1.6.1.jar org.hibernate.javax.persistence/hibernate-jpa-2.0-api/jars/hibernate-jpa-2.0-api-1.0.0.Final.jar javax.transaction/jta/jars/jta-1.1.jar org.hibernate/hibernate-c3p0/jars/hibernate-c3p0-3.6.0.Final.jar c3p0/c3p0/jars/c3p0-0.9.1.jar org.hibernate/hibernate-entitymanager/jars/hibernate-entitymanager-3.6.0.Final.jar cglib/cglib/jars/cglib-2.2.jar asm/asm/jars/asm-3.1.jar javassist/javassist/jars/javassist-3.12.0.GA.jar org.hibernate/hibernate-search/jars/hibernate-search-3.3.0.Final.jar org.hibernate/hibernate-search-analyzers/jars/hibernate-search-analyzers-3.3.0.Final.jar org.apache.lucene/lucene-core/jars/lucene-core-3.0.3.jar org.apache.lucene/lucene-analyzers/jars/lucene-analyzers-3.0.3.jar mysql/mysql-connector-java/jars/mysql-connector-java-5.1.13.jar com.ocpsoft/prettyfaces-jsf2/jars/prettyfaces-jsf2-3.0.1.jar commons-digester/commons-digester/jars/commons-digester-2.0.jar org.slf4j/slf4j-log4j12/jars/slf4j-log4j12-1.6.1.jar log4j/log4j/bundles/log4j-1.2.16.jar xom/xom/jars/xom-1.2.5.jar xml-apis/xml-apis/jars/xml-apis-1.3.03.jar xerces/xercesImpl/jars/xercesImpl-2.8.0.jar xalan/xalan/jars/xalan-2.7.0.jar org.jboss.jsfunit/jboss-jsfunit-core/jars/jboss-jsfunit-core-1.3.0.Final.jar net.sourceforge.htmlunit/htmlunit/jars/htmlunit-2.8.jar xalan/xalan/jars/xalan-2.7.1.jar xalan/serializer/jars/serializer-2.7.1.jar xml-apis/xml-apis/jars/xml-apis-1.3.04.jar commons-collections/commons-collections/jars/commons-collections-3.2.1.jar commons-lang/commons-lang/jars/commons-lang-2.4.jar org.apache.httpcomponents/httpclient/jars/httpclient-4.0.1.jar org.apache.httpcomponents/httpcore/jars/httpcore-4.0.1.jar commons-codec/commons-codec/jars/commons-codec-1.4.jar org.apache.httpcomponents/httpmime/jars/httpmime-4.0.1.jar org.apache.james/apache-mime4j/jars/apache-mime4j-0.6.jar net.sourceforge.htmlunit/htmlunit-core-js/jars/htmlunit-core-js-2.8.jar xerces/xercesImpl/jars/xercesImpl-2.9.1.jar net.sourceforge.nekohtml/nekohtml/jars/nekohtml-1.9.14.jar net.sourceforge.cssparser/cssparser/jars/cssparser-0.9.5.jar org.w3c.css/sac/jars/sac-1.3.jar commons-io/commons-io/jars/commons-io-1.4.jar cactus/cactus/jars/cactus-13-1.7.1.jar cactus/cactus-ant/jars/cactus-ant-13-1.7.1.jar commons-httpclient/commons-httpclient/jars/commons-httpclient-2.0.2.jar junit/junit/jars/junit-3.8.1.jar 
    aspectj/aspectjrt/jars/aspectjrt-1.2.1.jar cargo/cargo/jars/cargo-0.5.jar ant/ant/jars/ant-1.5.4.jar
    And this is my ivy.xml:
    <dependencies>
        <!-- JSF 2.0 RI -->
        <dependency org="com.sun.faces" name="jsf-api" rev="2.0.0"/>
        <dependency org="com.sun.faces" name="jsf-impl" rev="2.0.0"/>
        <!-- MyFaces Orchestra -->
        <dependency org="org.apache.myfaces.orchestra" name="myfaces-orchestra-core20" rev="1.5-SNAPSHOT"/>
        <dependency org="org.springframework" name="spring" rev="2.5.6"/>
        <dependency org="commons-el" name="commons-el" rev="1.0"/>
        <!-- RichFaces -->
        <dependency org="org.richfaces.ui" name="richfaces-ui" rev="3.3.3.Final"/>
        <dependency org="org.richfaces.framework" name="richfaces-impl-jsf2" rev="3.3.3.Final"/>
        <dependency org="com.sun.facelets" name="jsf-facelets" rev="1.1.14"/>
        <!-- Hibernate -->
        <dependency org="org.hibernate" name="hibernate-core" rev="3.6.0.Final"/>
        <dependency org="org.hibernate" name="hibernate-c3p0" rev="3.6.0.Final"/>
        <dependency org="org.hibernate" name="hibernate-entitymanager" rev="3.6.0.Final"/>
        <dependency org="org.hibernate" name="hibernate-search" rev="3.3.0.Final"/>
        <dependency org="mysql" name="mysql-connector-java" rev="5.1.13"/>
        <!-- PrettyFaces -->
        <dependency org="com.ocpsoft" name="prettyfaces-jsf2" rev="3.0.1"/>
        <!-- SLF4J -->
        <dependency org="org.slf4j" name="slf4j-api" rev="1.6.1"/>
        <dependency org="org.slf4j" name="slf4j-log4j12" rev="1.6.1"/>
        <!-- XOM -->
        <dependency org="xom" name="xom" rev="1.2.5"/>
        <!-- JSF Unit -->
        <dependency org="org.jboss.jsfunit" name="jboss-jsfunit-core" rev="1.3.0.Final" conf="development"/>
    </dependencies>
    I am deploying to Tomcat 6.0.
    Update: After the answer below, I solved this by adding the following dependency to my ivy.xml: <dependency org="org.hibernate.javax.persistence" name="hibernate-jpa-2.0-api" rev="1.0.0.Final"/> and then putting this jar above everything else under Eclipse's build order tab. I was using JRE/JDK 6.

    Read the article

  • :from parameter in active record find not well designed?

    - by potlee
    I got this error: SQLite3::SQLException: no such column: apis.name: SELECT * FROM examples WHERE ("apis"."name" = 'deep') My code: Api.find :all, :from => params[:table_name], :conditions => {:name => 'deep' } I need to make a back end Rails application which will be used by a Silverlight application. One of the requirements is to fetch simple data from the database. I need to be able to query different tables with the same code (my app has 2000 tables!). I think it does not make sense for Rails to put "apis" in the WHERE clause. Is there any specific reason for this?

    Read the article

  • New Tuxedo White Papers

    - by todd.little
    As part of the Tuxedo 11gR1 release, I've written two new white papers on Tuxedo. One is called "Tuxedo in a SOA World" and discusses how Tuxedo fits into SOA based applications. It covers most of the various connectivity options from Tuxedo into SOA environments and gives guidance as to which connectivity options are best suited for a particular application requirement. The other white paper "SCA: Bringing Modern SOA Programing to Tuxedo" is of a more technical bent and focuses on using the SCA features in SALT to easily build SOA based applications on Tuxedo without using a lot of technical APIs. In fact, services built using SALT's SCA support don't require any technical APIs, just pure business logic, and SCA clients need at most a couple of API calls, simply to look up a service. You can find these two new white papers as well as some additional white papers at http://www.oracle.com/technology/products/tuxedo/index.html.

    Read the article

  • FluentPath: a fluent wrapper around System.IO

    .NET is now more than eight years old, and some of its APIs have aged with more grace than others. System.IO in particular has always been a little awkward. It's mostly static method calls (Path.*, Directory.*, etc.) and some stateful classes (DirectoryInfo, FileInfo). In these APIs, paths are plain strings. Since .NET v1, lots of good things have happened to C#: lambda expressions, extension methods and optional parameters, to name just a few. Outside of .NET, other interesting things happened as well. For...
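    To illustrate the awkwardness the teaser describes, here's the classic System.IO style next to a hypothetical fluent chain; the FluentPath type and its methods below are invented for illustration only and are not the library's actual API.

// Classic System.IO: static calls over plain path strings
using System.IO;

static void BackupLogs()
{
    foreach (var file in Directory.GetFiles(@"C:\logs", "*.txt"))
    {
        // Copy each log file into a backup folder, overwriting duplicates
        File.Copy(file, Path.Combine(@"C:\backup", Path.GetFileName(file)), true);
    }
}

// A hypothetical fluent equivalent (invented names, not FluentPath's API):
// new FluentPath(@"C:\logs")
//     .Files("*.txt")
//     .CopyTo(@"C:\backup");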

    Read the article

  • Google I/O 2010 - Fireside chat with the Geo team

    Google I/O 2010 - Fireside chat with the Geo team
    Fireside Chats, Geo: Thor Mitchell, Peter Birch, Matt Holden, Ben Appleton, Bart Locanthi, Thatcher Ulrich
    Here's your opportunity to pick the brains of the people behind the Maps, Earth, and Maps Data APIs! We'll take a quick walk through the milestones of the last year, and then open it up to your questions. Don't miss your opportunity to get the straight scoop on all that's new in the world of Google Geo APIs. For all I/O 2010 sessions, please go to code.google.com

    Read the article

< Previous Page | 5 6 7 8 9 10 11 12 13 14 15 16  | Next Page >