Search Results

Search found 1773 results on 71 pages for 'apis'.


  • Part 6: Extensions vs. Modifications

    - by volker.eckardt(at)oracle.com
    Customizations = Extensions + Modifications. In EBS terminology, a customization can be either an extension or a modification. Extension means that you mainly create your own code from scratch. You may utilize existing views, packages and Java classes, but your code is unique. Modifications are quite different, because here you take existing code and change or enhance certain areas to achieve slightly different behavior. The important point is that it doesn't matter whether you place your code in the same place or in another one – it is a modification either way. It is also not relevant whether you leave the original code enabled or not! Why? Here is the answer: if the original code piece you have taken as your base gets patched, you need to copy the source again and apply all your changes once more. If you don't do that, you may get different results or write different data compared to the standard – and that is a high risk! Here are some guidelines for reducing the risk:
    - Invest a bit more time when searching for objects to select data from, and choose a view rather than a table. If Oracle development changes the underlying tables, the view will be more stable and is therefore the better choice.
    - Choose public APIs over internal APIs. Same background as before: although the internal structure might change, the public API is more stable.
    - Use personalization and substitution rather than modification. Spend more time checking whether the requirement can be covered with such techniques.
    - Build a project code library, and avoid colleagues creating similar functionality multiple times. Otherwise you have to review lots of similar code to determine the need for correction.
    - Use the technique of "flagged files". Flagged files are a way to mark standard deployment files. When you run the patch analysis (within Application Manager), the analysis result will list the flagged standard files that are about to be patched. If you maintain a cross reference to your own CEMLIs, you can easily determine which CEMLIs have to be reviewed.
    - Implement a code review process. This can be done by utilizing team-internal or external people. If you implement a team-internal process, your team members will come up with suggestions for improving code quality by themselves.
    - Review heavy customizations regularly – say every six months – to identify options for reducing complexity. You need not spend days on such a review, but a high-level cross-check of whether the customization can be reduced is suggested.
    - De-install customizations which are no longer required. Define a process for this, and add a section to the technical documentation describing how to uninstall and what the possible implications are.
    - Maintain a cross reference between CEMLIs, and between CEMLIs, EBS modules and business processes. Keep this list up to date, and share it!
    By following these guidelines, you are able to improve product stability. Although we might not be able to avoid modifications completely, we can give much better advice to developers and to our test team. Summary: Extensions and modifications have to be handled differently during their lifecycle. Modifications imply a much higher risk and should therefore be reviewed more frequently. Good cross references allow you to give clear advice for the testing activities.


  • Ubiquitous BIP

    - by Tim Dexter
    The last number I heard from Mike and the PM team was that BIP is now embedded in more than 40 Oracle products. That's a lot of products to keep track of and to help out with new releases, etc. It's interesting to see how internal Oracle product groups have integrated BIP into their products. Just as you might when integrating BIP yourself, they have had to make a choice about how to integrate.
    1. Library level - BIP is a pure Java app, and at the bottom of the architecture is a group of Java libraries that expose APIs you can use. They fall into three main areas: data extraction, template processing and formatting, and delivery. There are post-processing capabilities, but those APIs are embedded within the template processing libraries. Taking this integration route, you are going to need to manage templates, data extraction and processing. You'll have your own UI to allow users to control all of this for themselves. Ultimate control, but some effort to build and maintain. I have been trawling some of the products during a coffee break. I found a great post on the reporting capabilities provided by BIP in the records management product within WebCenter Content 11g. This integration falls into the first category: Content Manager looks after the report artifacts itself and provides you the UI to manage and run the reports.
    2. Web service level - further up in the stack is the web service layer. This sits on the BI Publisher server as a set of services; runReport and scheduleReport are the main protagonists. However, you can also manage the reports, the (locally managed) users on the server, and the catalog itself via the services layer. Taking this route, you still need to provide the user interface to choose reports and run them, but the creation and management of the reports is all handled by the Publisher server. I have worked with a few customers on this approach. The web services provide the ability to retrieve a list of reports the user can access, then the parameters and LOVs for the selected report, and finally a service to submit the report on the server (a sketch of such a call follows below).
    3. Embedded BIP server UI - the final level is not so well supported yet. You can currently embed a report and its various levels of surrounding 'chrome' inside another HTML-based application using a URL. Check the docs here. The look and feel can be customized but again, not easily, nor is it documented. I have messed with running the server pages inside an IFRAME; not bad, but not great. Taking this path should present the least amount of effort on your part to get BIP integrated, but there are a few gotchas you need to get around.
    So, a reasonable number of choices with varying amounts of effort involved. There is another option coming soon for all you ADF developers out there: the ability to drop a BIP report into your application pages. But that's for another post.
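    The following is a minimal sketch of what such a web-service call can look like over raw SOAP. The endpoint path, namespace, and element names here are hypothetical placeholders, not the real BI Publisher contract – consult the WSDL published by your Publisher server for the actual operation signatures of runReport and friends.

        // Minimal sketch: calling a BI Publisher-style runReport service via raw SOAP.
        // Endpoint, namespace, and element names are hypothetical; read them from the WSDL.
        using System;
        using System.IO;
        using System.Net;

        class BipWebServiceSketch
        {
            static void Main()
            {
                string endpoint = "http://bipserver:9704/xmlpserver/services/ReportService"; // hypothetical
                string envelope =
        @"<soapenv:Envelope xmlns:soapenv=""http://schemas.xmlsoap.org/soap/envelope/"">
          <soapenv:Body>
            <runReport xmlns=""http://xmlns.example.com/bip"">
              <reportAbsolutePath>/Shared/Sales/Invoice.xdo</reportAbsolutePath>
              <attributeFormat>pdf</attributeFormat>
            </runReport>
          </soapenv:Body>
        </soapenv:Envelope>";

                var request = (HttpWebRequest)WebRequest.Create(endpoint);
                request.Method = "POST";
                request.ContentType = "text/xml; charset=utf-8";
                using (var writer = new StreamWriter(request.GetRequestStream()))
                    writer.Write(envelope);

                using (var response = (HttpWebResponse)request.GetResponse())
                using (var reader = new StreamReader(response.GetResponseStream()))
                    Console.WriteLine(reader.ReadToEnd()); // SOAP response carrying the report payload
            }
        }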


  • Don’t string together XML

    - by KyleBurns
    XML has been a pervasive tool in software development for over a decade. It provides a way to communicate data in a manner that is simple to understand and free of platform dependencies. Also pervasive in software development is what I consider to be the anti-pattern of using string manipulation to create XML. This usually starts with a "quick and dirty" approach because you need an XML document, and looks like the following (for all of the examples here, we'll assume we're writing the body of a method intended to take a Contact object and return an XML string):

        return string.Format("<Contact><BusinessName>{0}</BusinessName></Contact>",
                             contact.BusinessName);

    In the code example, I created (or at least believe I created) an XML document representing a simple contact object in one line of code with very little overhead. Work's done, right? No it's not. You see, what I didn't realize was that this code would be used in the real world instead of my fantasy world where I own all the data and can prevent any of it containing problematic values. If I use this code to create a contact record for the business "Sanford & Son", any XML parser will be incapable of processing the data, because the ampersand is special in XML and should have been encoded as &amp;.
    Following the pattern that I have seen many times over, my next step as a developer is going to be to do what any developer in his right mind would do – instruct the user that ampersands are "bad" and they cannot be used without breaking computers. This may work in many cases and is often accompanied by logic at the UI layer of applications to block these "bad" characters, but sooner or later someone is going to figure out that other applications allow for them and will want the same. This often leads to the creation of "cleaner" functions that perform a replace on the strings for every special character that the person writing the function can think of. The cleaner function will usually grow over time as support requests reveal characters that were missed in the initial cut. Sooner or later you end up writing your own somewhat functional XML engine.
    I have never been told by anyone paying me to write code that they would like to buy a somewhat functional XML engine. My employer/customer's needs have always been for something that may use XML, but ultimately is functionality that drives business value. I'm not going to build an XML engine.
    So how can I generate XML that is always well-formed without writing my own engine? Easy – use one of the ones provided to you for free! If you're in a shop that still supports VB6 applications, you can use the DomDocument or MXXMLWriter object (of the two I prefer MXXMLWriter, but I'm not going to fully describe either here). For .NET Framework applications prior to the 3.5 framework, the code is a little more verbose than I would like, but easy once you understand what pieces are required:

        using (StringWriter sw = new StringWriter())
        {
            using (XmlTextWriter writer = new XmlTextWriter(sw))
            {
                writer.WriteStartDocument();
                writer.WriteStartElement("Contact");
                writer.WriteElementString("BusinessName", contact.BusinessName);
                writer.WriteEndElement(); // end Contact element
                writer.WriteEndDocument();
                writer.Flush();
                return sw.ToString();
            }
        }

    Looking at that code, it's easy to understand why people are drawn to the initial one-liner. Lucky for us, the 3.5 .NET Framework added the System.Xml.Linq.XElement object. This object takes away a lot of the complexity present in the XmlTextWriter approach and allows us to generate the document as follows:

        return new XElement("Contact",
            new XElement("BusinessName", contact.BusinessName)).ToString();

    While it is very common for people to use string manipulation to create XML, I've discussed here reasons not to use this method and introduced powerful APIs that are built into the .NET Framework as an alternative. I've given a very simplistic example here to highlight the most basic XML generation task. For more information on the XmlTextWriter and XElement APIs, check out the MSDN library.
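    To make the failure mode concrete, here is a small self-contained sketch (the Contact object is reduced to a bare string for illustration) that runs the "Sanford & Son" value through both approaches:

        using System;
        using System.Xml.Linq;

        class XmlEscapingDemo
        {
            static void Main()
            {
                string businessName = "Sanford & Son";

                // String formatting: the raw ampersand is never escaped,
                // so the result is not well-formed XML.
                string broken = string.Format(
                    "<Contact><BusinessName>{0}</BusinessName></Contact>", businessName);
                try
                {
                    XDocument.Parse(broken);
                }
                catch (System.Xml.XmlException ex)
                {
                    Console.WriteLine("Parse failed: " + ex.Message);
                }

                // XElement: the ampersand is encoded as &amp; automatically.
                string wellFormed = new XElement("Contact",
                    new XElement("BusinessName", businessName)).ToString();
                Console.WriteLine(wellFormed);
            }
        }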


  • How to exclude jars generated by the maven war plugin?

    - by Jacques René Mesrine
    Because of transitive dependencies, my wars are getting populated by xml-apis and xerces jars. I tried following the instructions on the reference page for maven-war-plugin, but it is not working.

        <plugin>
          <groupId>org.apache.maven.plugins</groupId>
          <artifactId>maven-war-plugin</artifactId>
          <configuration>
            <packagingExcludes>WEB-INF/lib/xalan-2.6.0.jar,WEB-INF/lib/xercesImpl-2.6.2.jar,WEB-INF/lib/xml-apis-1.0.b2.jar,WEB-INF/lib/xmlParserAPIs-2.6.2.jar</packagingExcludes>
            <webXml>${basedir}/src/main/webapp/WEB-INF/web.xml</webXml>
            <warName>project1</warName>
            <warSourceDirectory>src/main/webapp</warSourceDirectory>
          </configuration>
        </plugin>

    What am I doing wrong? If it matters, I discovered that the maven-war-plugin I'm using is at version 2.1-alpha-1.


  • Implementing a 2 Legged OAuth Provider

    - by Rob Wilkerson
    I'm trying to find my way around the OAuth spec, its requirements and any implementations I can find and, so far, it really seems like more trouble than it's worth, because I'm having trouble finding a single resource that pulls it all together. Or maybe it's just that I'm looking for something more specialized than most tutorials cover. I have a set of existing APIs – some in Java, some in PHP – that I now need to secure and, for a number of reasons, OAuth seems like the right way to go. Unfortunately, my inability to track down the right resources to help me get a provider up and running is challenging that theory. Since most of this will be system-to-system API usage, I'll need to implement a 2-legged provider. With that in mind:
    - Does anyone know of any good tutorials for implementing a 2-legged OAuth provider with PHP?
    - Given that I have securable APIs in 2 languages, do I need to implement a provider in both, or is there a way to create the provider as a "front controller" that I can funnel all requests through?
    - When securing PHP services, for example, do I have to secure each API individually by including the requisite provider resources on each?
    Thanks for your help.
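    For reference, the core of 2-legged OAuth 1.0 is just request signing with the consumer secret and an empty token secret. Below is a minimal sketch of the HMAC-SHA1 signature computation (class and method names are my own; the caller is assumed to pass the standard oauth_* parameters – oauth_consumer_key, oauth_nonce, oauth_timestamp, oauth_signature_method, oauth_version – together with the request parameters, in a dictionary created with StringComparer.Ordinal so the keys sort byte-wise as the spec requires):

        using System;
        using System.Collections.Generic;
        using System.Security.Cryptography;
        using System.Text;

        static class TwoLeggedOAuth
        {
            // NOTE: Uri.EscapeDataString is close to the RFC 3986 encoding OAuth requires,
            // but a strict implementation must also percent-encode the characters !*'().
            static string Encode(string s)
            {
                return Uri.EscapeDataString(s);
            }

            public static string Sign(string httpMethod, string baseUrl,
                                      SortedDictionary<string, string> parameters,
                                      string consumerSecret)
            {
                // 1. Normalize the parameters: sorted, percent-encoded, joined with '&'.
                var pairs = new List<string>();
                foreach (var p in parameters)
                    pairs.Add(Encode(p.Key) + "=" + Encode(p.Value));
                string paramString = string.Join("&", pairs.ToArray());

                // 2. Build the signature base string: METHOD&url&params (each encoded).
                string baseString = httpMethod.ToUpperInvariant() + "&"
                    + Encode(baseUrl) + "&" + Encode(paramString);

                // 3. Key is "consumerSecret&tokenSecret"; 2-legged means no token secret.
                string key = Encode(consumerSecret) + "&";
                using (var hmac = new HMACSHA1(Encoding.ASCII.GetBytes(key)))
                    return Convert.ToBase64String(
                        hmac.ComputeHash(Encoding.ASCII.GetBytes(baseString)));
            }
        }

    The provider side recomputes this signature from the incoming request and compares it with the oauth_signature parameter, which is what makes a single "front controller" verifier in front of both the Java and PHP services plausible.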


  • Integrate Google Maps API into an iPhone app

    - by Corey Floyd
    Update: iPhone SDK 3.0 now addresses the question here, however the NDA prevents any in-depth discussion. Log in to the iPhone Dev Center if you need more info. OK, I have to admit I'm a little lost here. I am fairly comfortable with Cocoa, but am having trouble picking up the bit of JavaScript needed to solve this problem. I am trying to send a request to Google for a reverse geocode. I have looked over the Google documentation here:
    http://code.google.com/apis/maps/documentation/index.html
    http://code.google.com/apis/maps/documentation/geocoding/
    Even after a rough reading, I am missing a basic concept: how do I talk to Google? In some examples, they show a URL being sent to Google (which seems easy enough), but in others they show JavaScript. It seems that for reverse geocoding, the request might be harder than sending the URL with some parameters (but I hope I am wrong). Can someone point me to the correct way to make a request? (In Objective-C, so I can wrap my head around it.)
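    For what it's worth, the "talking to Google" part is ordinary HTTP. A sketch is shown below (in C# rather than Objective-C, purely to illustrate the shape of the request; the endpoint and parameters follow the V3-era geocoding web service, so check the docs for the version you target):

        using System;
        using System.Net;

        class ReverseGeocodeSketch
        {
            static void Main()
            {
                // Reverse geocoding is a plain HTTP GET; the response is JSON (or XML).
                double lat = 40.714224, lng = -73.961452;
                string url = string.Format(
                    "http://maps.googleapis.com/maps/api/geocode/json?latlng={0},{1}&sensor=false",
                    lat, lng);

                using (var client = new WebClient())
                {
                    string json = client.DownloadString(url);
                    Console.WriteLine(json); // hand this to your JSON parser of choice
                }
            }
        }

    In Objective-C, the same request is a single NSURL round trip; the only real work is building the URL and parsing the response.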


  • Nested Routes and Parameters for Rails URLs (Best Practice)

    - by viatropos
    Hey there, I have a decent understanding of RESTful URLs and all the theory behind not nesting URLs, but I'm still not quite sure how this looks in an enterprise application, like something like Amazon, StackOverflow, or Google...
    Google has URLs like this:
    http://code.google.com/apis/ajax/
    http://code.google.com/apis/maps/documentation/staticmaps/
    https://www.google.com/calendar/render?tab=mc
    Amazon like this:
    http://www.amazon.com/books-used-books-textbooks/b/ref=sa_menu_bo0?ie=UTF8&node=283155&pf_rd_p=328655101&pf_rd_s=left-nav-1&pf_rd_t=101&pf_rd_i=507846&pf_rd_m=ATVPDKIKX0DER&pf_rd_r=1PK4ZKN4YWJJ9B86ANC9
    http://www.amazon.com/Ruby-Programming-Language-David-Flanagan/dp/0596516177/ref=sr_1_1?ie=UTF8&s=books&qid=1258755625&sr=1-1
    And StackOverflow like this:
    http://stackoverflow.com/users/169992/viatropos
    http://stackoverflow.com/questions/tagged/html
    http://stackoverflow.com/questions/tagged?tagnames=html&sort=newest&pagesize=15
    So my question is: what is best practice in terms of creating URLs for systems like these? When do you start storing parameters in the URL, and when don't you? These big companies don't seem to be following the rules so hotly debated in the Ruby community (that you should almost never nest URLs, for example), so I'm wondering how you go about implementing your own URLs in larger-scale projects, because it seems like the idea of not nesting URLs breaks down at anything larger than a blog. Any tips?


  • Is an LSA MSV1_0 subauthentication package needed for some impersonation use cases?

    - by Chris Sears
    Greetings, I'm working with a vendor who has implemented some code that uses a Windows LSA MSV1_0 subauthentication package (MSDN info if you're interested: http://msdn.microsoft.com/en-us/library/aa374786(VS.85).aspx ) and I'm trying to figure out if it's necessary. As far as I can tell, the subauthentication routine and filter allow for hooking or customizing the standard LSA MSV1_0 logon event processing. The issue is that I don't understand why the vendor's product would need these capabilities. I've asked them and they said they use it to perform impersonation. The product definitely does need to do impersonation, but based on my limited win32 knowledge, they could get the functionality they need using the normal auth APIs (LsaLogonUser, ImpersonateLoggedOnUser, etc) without the subauthentication package. Furthermore, I've worked with a number of similar products that all do impersonation, and this is the only one that's used a subauthentication package. If you're wondering why I would care, a previous version of the product had a bug in the subauthentication package dll that would cause lockups or bluescreens. That makes me rather nervous and has me questioning the use of such a low-level, kernel sensitive interface. I'd like to go back to the vendor and say "There's no way you could need an LSA subauth package for impersonation - take it out", but I'm not sure I understand the use cases and possible limitations of the standard win32 authentication/impersonation APIs well enough to make that claim definitively. So, to the win32 security gurus out there, is there any reason you would need an LSA MSV1_0 subauthentication package if all you were doing is impersonation? Thanks in advance for any thoughts!
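    For context, the standard user-mode impersonation path alluded to above looks roughly like this (a C# sketch over the Win32 APIs; the credentials are placeholders, and the token handle is not closed, for brevity):

        using System;
        using System.Runtime.InteropServices;
        using System.Security.Principal;

        class ImpersonationSketch
        {
            [DllImport("advapi32.dll", SetLastError = true, CharSet = CharSet.Unicode)]
            static extern bool LogonUser(string user, string domain, string password,
                                         int logonType, int logonProvider, out IntPtr token);

            const int LOGON32_LOGON_INTERACTIVE = 2;
            const int LOGON32_PROVIDER_DEFAULT = 0;

            static void Main()
            {
                IntPtr token;
                if (!LogonUser("someUser", "someDomain", "somePassword",
                               LOGON32_LOGON_INTERACTIVE, LOGON32_PROVIDER_DEFAULT, out token))
                    throw new System.ComponentModel.Win32Exception();

                // Everything inside this block runs as the target user.
                using (WindowsImpersonationContext ctx = WindowsIdentity.Impersonate(token))
                {
                    Console.WriteLine("Running as: " + WindowsIdentity.GetCurrent().Name);
                } // disposing the context reverts to the original identity
            }
        }

    Whether a product can get by with this alone depends on whether it ever needs to log users on without their password or hook the logon policy itself, which is the sort of case a subauthentication package exists for.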


  • Should UTF-16 be considered harmful?

    - by Artyom
    I'm going to ask what is probably quite a controversial question: "Should one of the most popular encodings, UTF-16, be considered harmful?" Why do I ask this question? How many programmers are aware of the fact that UTF-16 is actually a variable-length encoding? By this I mean that there are code points that, represented as surrogate pairs, take more than one element. I know: lots of applications, frameworks and APIs use UTF-16, such as Java's String, C#'s String, the Win32 APIs, the Qt GUI libraries, the ICU Unicode library, etc. However, with all of that, there are lots of basic bugs in the processing of characters outside the BMP (characters that should be encoded using two UTF-16 elements). For example, try to edit one of these characters: 𝄞 𝕥 𝟶 𠂊 (You may miss some, depending on what fonts you have installed.) These characters are all outside of the BMP (Basic Multilingual Plane). If you cannot see these characters, you can also try looking at them in the Unicode Character reference. For example, try to create file names in Windows that include these characters; try to delete these characters with a backspace to see how they behave in different applications that use UTF-16. I did some tests, and the results are quite bad:
    - Opera has problems with editing them
    - Notepad can't deal with them correctly (delete, for example)
    - File name editing in Windows dialogs is broken
    - Qt 3 applications can't deal with them at all
    - StackOverflow seems to remove these characters if they are entered directly as Unicode characters, and only seems to allow them as HTML Unicode escapes.
    So... this was a very simple test. Do you think that UTF-16 should be considered harmful?
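    The variable-length point is easy to demonstrate. A minimal C# sketch, using the G clef character from the list above:

        using System;
        using System.Globalization;

        class SurrogateDemo
        {
            static void Main()
            {
                // U+1D11E MUSICAL SYMBOL G CLEF lies outside the BMP, so UTF-16
                // stores it as a surrogate pair: two char elements, one code point.
                string clef = char.ConvertFromUtf32(0x1D11E); // "𝄞"

                Console.WriteLine(clef.Length);                   // 2, not 1
                Console.WriteLine(char.IsHighSurrogate(clef[0])); // True
                Console.WriteLine(char.IsLowSurrogate(clef[1]));  // True

                // Counting code points (text elements) needs StringInfo instead:
                Console.WriteLine(new StringInfo(clef).LengthInTextElements); // 1

                // Naive per-char processing splits the pair and corrupts the text:
                char[] chars = clef.ToCharArray();
                Array.Reverse(chars);
                Console.WriteLine(new string(chars) == clef);     // False - now ill-formed
            }
        }

    Any code that indexes, truncates or reverses strings char-by-char has exactly the class of bug the question describes.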


  • Upgrade to Delphi 2010, or stick with Delphi 7 "forever"?

    - by tim11g
    I am an individual user of Delphi, starting back in the early Turbo Pascal days. I have quite a bit of code developed over the years, but I have never sold software commercially or used it for business. Historically, Borland supported the non-professional users with lower-cost versions, but Embarcadero does not. As I consider upgrading to Delphi 2010, I am put off by the high price. Embarcadero is also trying to "encourage" upgrading by threatening to charge "new user" prices for upgrades after Dec 31st. I have several questions for the community to help me decide whether to upgrade.
    1) I have read about difficulties updating existing code to support the Unicode string types. I have no need for Unicode strings, and I am happy with the string support in D7. Will I have to modify existing code and components just to re-compile under D2010? Or are there compiler options to allow backward compatibility if new string types are not required?
    2) The main reason I'm considering upgrading is for IDE improvements, and to get access to new APIs added to Windows since 2002. Are there any Windows 7 APIs or capabilities that would be impossible to support from my programs compiled using Delphi 7 (assuming appropriate JEDI API libraries, for example)?
    3) Is there anything else about Delphi 2010 that is really compelling for someone who is primarily interested in Win32 apps, and not working with databases? I have read that D2010 is slow to load, that other versions between D7 and D2010 have had stability issues, and that the help system was "broken". What is the biggest benefit of D2010?


  • Maintaining a Python web application: heavier vs lighter framework?

    - by Tiberiu Ana
    Five+ years from now, you are hired to support and extend a data-centric web application written in Python that hasn't been kept up to date. Would you rather it was written in the then-current version of Django/Pylons, using the available standard components, or kept minimal with something like CherryPy/web.py and a few library dependencies?
    Heavy framework
    - Advantages: standard approach to application design and structure, as encouraged by the framework; less application code to worry about.
    - Disadvantages: requires learning the framework to understand how things work; broken things in an old version of the framework are difficult to fix; upgrading to a new version is potentially difficult due to changing APIs; finding relevant documentation/help is potentially difficult due to changing APIs.
    Light framework
    - Advantages: most application code is directly "visible"; only needed features are implemented; the architecture should be simpler to understand; less need to upgrade external dependencies; easier to upgrade external dependencies.
    - Disadvantages: some reinventing the wheel; non-standard design and structure (with the associated unique issues and bugs).
    I will update the list with any helpful answers.


  • Using Gdata Calendar API for PHP App to create a calendar system with event ownership

    - by linuxlover101
    Hello, I'm working on a PHP app, and I'm trying to find the best way to set up a calendar system. First let me describe the intended function of the system: The application has multiple users who can log in simultaneously. Each user should be able to add their own events and display them on their "profile." Also, on another page, the events should be able to be shown all together (from all users) and for each individual user exclusively. Since Google Calendar has a nice calendar-looking interface, I thought I would work with the GData APIs. Originally, I thought this would make things easier. But then it seems that if I want to represent event ownership, either each user would have to have a Google account or somehow they would each have to have a separate calendar. Thus, the application would have to create the calendar (if it does not exist) and be able to edit only that calendar. Two questions:
    - Can I somehow use the GData APIs to accomplish my goal? (Mainly event ownership.)
    - Should I just create my own calendar-like application?
    Thanks!
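    One way to model per-user ownership without per-user Google accounts is a single owner account with one secondary calendar per application user. A sketch with the GData client library follows (shown in C# with the .NET client; the Zend GData PHP library has equivalent calls – the names below follow my recollection of the .NET API, so treat them as approximate):

        using System;
        using Google.GData.Calendar;
        using Google.GData.Client;

        class PerUserCalendarSketch
        {
            static void Main()
            {
                // One application-owned Google account holds all calendars.
                var service = new CalendarService("example-app");
                service.setUserCredentials("owner@example.com", "password");

                // Create a secondary calendar dedicated to one application user.
                var calendar = new CalendarEntry();
                calendar.Title.Text = "Events for user 42";
                calendar.Summary.Text = "All events created by application user 42";

                var ownCalendars = new Uri(
                    "https://www.google.com/calendar/feeds/default/owncalendars/full");
                CalendarEntry created =
                    (CalendarEntry)service.Insert(ownCalendars, calendar);

                // Store the returned entry's id/feed URI against the application user;
                // later, post EventEntry objects to that user's calendar feed.
                Console.WriteLine(created.Id.Uri);
            }
        }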


  • Pros & Cons of Google App Engine

    - by Rishi
    Pros & Cons of Google App Engine [An Updated List 21st Aug 09] Help me Compile a List of all the Advantages & Disadvantages of Building an Application on the Google App Engine Pros: 1) No Need to buy Servers or Server Space (no maintenance). 2) Makes solving the problem of scaling much easier. Cons: 1) Locked into Google App Engine ?? 2)Developers have read-only access to the filesystem on App Engine. 3)App Engine can only execute code called from an HTTP request (except for scheduled background tasks). 4)Users may upload arbitrary Python modules, but only if they are pure-Python; C and Pyrex modules are not supported. 5)App Engine limits the maximum rows returned from an entity get to 1000 rows per Datastore call. 6)Java applications may only use a subset (The JRE Class White List) of the classes from the JRE standard edition. 7)Java applications cannot create new threads. Known Issues!! http://code.google.com/p/googleappengine/issues/list Hard limits Apps per developer - 10 Time per request - 30 sec Files per app - 3,000 HTTP response size - 10 MB Datastore item size - 1 MB Application code size - 150 MB Pro or Con? App Engine's infrastructure removes many of the system administration and development challenges of building applications to scale to millions of hits. Google handles deploying code to a cluster, monitoring, failover, and launching application instances as necessary. While other services let users install and configure nearly any *NIX compatible software, App Engine requires developers to use Python or Java as the programming language and a limited set of APIs. Current APIs allow storing and retrieving data from a BigTable non-relational database; making HTTP requests; sending e-mail; manipulating images; and caching. Most existing Web applications can't run on App Engine without modification, because they require a relational database.


  • Google Radar Chart: Not plotting data

    - by Hallik
    Hi. I have different types of data that I have normalized just with base10, so everything is on a 10 point scale. There are two data points, and the legend for them show up fine. All the points around the radar show up fine and so do the labels for them, but I don't see any filled in data. Outside of the labels and axis, the chart is blank. Below is the actual image tag I render, then I split up the variables on each line for easy readability. Anyone mind telling me why it isn't working? <img src="http://chart.apis.google.com/chart?cht=rs&amp;chm=B,3366CC60,0,1.0,5.0|B,CC4D3360,1,1.0,5.0&chs=600x500&chd=t:4.6756775443385,4.7031365524912,1.8646655408171,1.8358167047079,4.2483837215455,4.1367786166752,|5.0319252700625,5.0370797208146,1.8415340693163,1.8591105937857,4.3392150450337,4.1434876641017&chco=3366CC,CC4D33&chls=2.0,4.0,0.0|2.0,4.0,0.0&chxt=x&chxl=0:|Label1|Label2|label3|Label4|Label5|Label6&chxr=0,0.0,10.0&chdl=Data Object 1|Data Object 2&"/> cht=rs& chm=B,3366CC60,0,1.0,5.0|B,CC4D3360,1,1.0,5.0& chs=600x500& chd=t:4.6756775443385,4.7031365524912,1.8646655408171,1.8358167047079,4.2483837215455,4.1367786166752,|5.0319252700625,5.0370797208146,1.8415340693163,1.8591105937857,4.3392150450337,4.1434876641017& chco=3366CC,CC4D33& chls=2.0,4.0,0.0|2.0,4.0,0.0& chxt=x& chxl=0:|Label1|Label2|label3|Label4|Label5|Label6& chxr=0,0.0,10.0& chdl=Data Object 1|Data Object 2& This is the radar chart page, I can't tell what I am doing wrong. http://code.google.com/apis/chart/docs/gallery/radar_charts.html


  • How to get the equivalent of the accuracy in Google Maps Geocoder V3

    - by Scorpi0
    Hi, I want to get geocodes from Google, and I used to do it with V2 of the API. Google sends a pretty useful piece of information in the JSON: the accuracy. Reference here: http://code.google.com/intl/fr-FR/apis/maps/documentation/javascript/v2/reference.html#GGeoAddressAccuracy
    In V3, Google doesn't seem to send exactly the same information. There is the array "address_components", which seems bigger if the accuracy is better, but not exactly. For example, I have one request accurate to the street number, and the array is of size 8. Another query is accurate only to the route, so less accurate, but the array is still of size 8, as there is a row 'sublocality' which does not appear in the first case. For a result, Google does send a 'types' field, which holds the 'best' accuracy. These types are listed here: http://code.google.com/intl/fr-FR/apis/maps/documentation/geocoding/#Types
    But there is no real order to them, and if I want results better than postal_code, I have no clue how to do that. So, how can I get the equivalent of the V2 accuracy, without some dumb and horrible code?
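    One field worth looking at – an assumption on my part that it fits this use case, but it is the closest V3 analogue to the V2 accuracy I know of – is geometry.location_type, which takes one of four documented values that can be ranked by precision:

        using System;
        using System.Collections.Generic;

        class LocationTypeRank
        {
            // geometry.location_type values from the V3 geocoding docs,
            // ordered from least to most precise.
            static readonly Dictionary<string, int> Rank = new Dictionary<string, int>
            {
                { "APPROXIMATE", 1 },
                { "GEOMETRIC_CENTER", 2 },
                { "RANGE_INTERPOLATED", 3 },
                { "ROOFTOP", 4 }
            };

            static bool IsAtLeast(string locationType, string required)
            {
                return Rank[locationType] >= Rank[required];
            }

            static void Main()
            {
                // Keep only results at least as precise as an interpolated street address:
                Console.WriteLine(IsAtLeast("ROOFTOP", "RANGE_INTERPOLATED"));          // True
                Console.WriteLine(IsAtLeast("GEOMETRIC_CENTER", "RANGE_INTERPOLATED")); // False
            }
        }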


  • Using the Proxy pattern with C++ iterators

    - by Billy ONeal
    Hello everyone :) I've got a moderately complex iterator written which wraps the FindXFile APIs on Win32. (See previous question.) In order to avoid the overhead of constructing an object that essentially duplicates the work of the WIN32_FIND_DATAW structure, I have a proxy object which simply acts as a sort of const reference to the single WIN32_FIND_DATAW which is declared inside the noncopyable innards of the iterator. This is great because (1) clients do not pay for construction of irrelevant information they will probably not use (most of the time people are only interested in file names), and (2) clients can get at all the information provided by the FindXFile APIs if they need or want this information. This becomes an issue though because there is only ever a single copy of the object's actual data. Therefore, when the iterator is incremented, all of the proxies are invalidated (set to whatever the next file pointed to by the iterator is). I'm concerned that this is a major problem, because I can think of a case where the proxy object would not behave as somebody would expect:

        std::vector<MyIterator::value_type> files;
        std::copy(MyIterator("Hello"), MyIterator(), std::back_inserter(files));

    because the vector contains nothing but a bunch of invalid proxies at that point. Instead, clients need to do something like:

        std::vector<std::wstring> filesToSearch;
        std::transform(
            DirectoryIterator<FilesOnly>(L"C:\\Windows\\*"),
            DirectoryIterator<FilesOnly>(),
            std::back_inserter(filesToSearch),
            std::mem_fun_ref(&DirectoryIterator<FilesOnly>::value_type::GetFullFileName)
        );

    Seeing this, I can see why somebody might dislike what the standard library designers did with std::vector<bool>. I'm still wondering though: is this a reasonable trade-off in order to achieve (1) and (2) above? If not, is there any way to still achieve (1) and (2) without the proxy?


  • Are application servers necessary? Advantages of using one? (And other JEE questions)

    - by Mike
    Apologies for the long question... there seem to be other similar questions on here, but none really clears up my confusion. I'd be really grateful if someone could confirm or correct my understanding: Java Enterprise Edition is a set of APIs for building enterprise applications, which take away the burden of developing parts of the system that aren't actually features of the application you are trying to build (i.e. messaging, transactions etc). To do this, you can use an application server, which implements these APIs. So you could use JBoss, Glassfish, WebSphere, WebLogic etc, which would provide your application with these enterprise services. However, there are many other implementations of these individual services available, such as ActiveMQ for messaging, Hibernate for persistence, OpenEJB etc. You can download these implementations as Java libraries, include them in your application, and use the services they provide in a similar way to using the services provided by an application server. So if my understanding is correct, my questions are:
    - I've read in a lot of places that application servers are necessary for JEE features like EJB, but can't you just use an implementation such as OpenEJB and not need an application server at all?
    - Are there any features that an application server provides which you cannot get from another source?
    - Why would/wouldn't I choose an application server over a custom stack such as Tomcat, OpenEJB, ActiveMQ, and Hibernate?
    - Is Spring a complete alternative to JEE? Does it ever require an application server, or always just a servlet container? Why would someone choose Spring over JEE?
    Any help would be much appreciated!


  • What's the best approach for modifying PDF interactive form fields on iOS?

    - by gbreen
    I've been doing some head banging on this one and solicit your advice. I am building an app that, as part of its features, is to present PDF forms; meaning display them, allow fields to be changed, and save the modified PDF file back out. UIWebViews do not support PDF interactive forms. Using the CGPDF APIs (and benefiting from other questions posted here and elsewhere), I can certainly present the PDF (without the form fields/widgets), scan and find the fields in the document, figure out where on the screen to draw something, and make them interactive. What I can't seem to figure out is how to change the CGPDFDictionary objects and write them back out to a file. One could use the CGPDF APIs to create a new PDF document from whole cloth, but how do you use them to modify an existing file? Should I be looking elsewhere, such as at 3rd-party PDF libs like PoDoFo or libHaru? I'd love to hear from anyone who has successfully modified a PDF and written it back out as to your approach. Thanks!


  • Uploading a file to imgur using PHP Imgurv3 API

    - by user434885
    I'm writing a web app which uploads images to imgur directly. Since all older versions of their APIs have been deprecated, I am forced to use v3 of the API. Unfortunately I am unable to get the API to work. I am using curl to access the API:

        $pvars = array('image' => base64_encode($data));
        $timeout = 30;
        $curl = curl_init();
        curl_setopt($curl, CURLOPT_URL, 'https://api.imgur.com/3/upload');
        curl_setopt($curl, CURLOPT_TIMEOUT, $timeout);
        curl_setopt($curl, CURLOPT_POST, 1);
        curl_setopt($curl, CURLOPT_RETURNTRANSFER, 1);
        curl_setopt($curl, CURLOPT_POSTFIELDS, $pvars);
        curl_setopt($curl, CURLOPT_HTTPHEADER, array('Authorization: Client-ID xxxxxxa61xxxxxx'));
        $xml = curl_exec($curl);
        $xmlsimple = new SimpleXMLElement($xml);
        print gettype($xml)."<hr>";
        echo '<img height="180" src="';
        echo $xmlsimple->links->original;
        echo '">';
        curl_close($curl);

    The $xml variable always comes back as false, and no server error is displayed. Can someone guide me to what I am doing wrong? Unfortunately I am also unable to find any proper example in the documentation, or elsewhere, to guide me.


  • How to model in J2EE / JEE?

    - by Harry
    Let's say I have decided to go with the J(2)EE stack for my enterprise application. Now, for domain modelling (or: for designing the M of MVC), which APIs can I safely assume and use, and which should I stay away from... say, via a layer of abstraction? For example:
    - Should I go ahead and litter my Model with calls to the Hibernate/JPA API? Or should I build an abstraction... a persistence layer of my own, to avoid hard-coding against these two specific persistence APIs? Why I ask this: A few years ago, there was this Kodo API which got superseded by Hibernate. If one had designed a persistence layer and coded the rest of the model against this layer (instead of littering the Model with calls to a specific vendor API), it would have allowed one to (relatively) easily switch from Kodo to Hibernate to xyz.
    - Is it recommended to make aggressive use of the *QL provided by your persistence vendor in your domain model? I'm not aware of any real-world issues (like performance, scalability, portability, etc.) arising out of heavy use of an HQL-like language. Why I ask this: I would like to avoid, as much as possible, writing custom code when the same could be accomplished via a query language that is more portable than SQL.
    Sorry, but I'm a complete newbie to this area. Where could I find more info on this topic? Many thanks in advance. /HS


  • How to emulate thread-local storage in user space in C++?

    - by vprajan
    I am working on a mobile platform on top of Nucleus RTOS. It uses the Nucleus threading system, but it doesn't have support for explicit thread-local storage, i.e., the TlsAlloc, TlsSetValue, TlsGetValue, TlsFree APIs. The platform doesn't have user-space pthreads either. I found that the __thread storage modifier is present in most C++ compilers, but I don't know how to make it work for my kind of usage. How can the __thread keyword be mapped onto explicit thread-local storage? I read many articles, but nothing clearly answers the following basic questions:
    - Will a __thread variable be different for each thread?
    - How do I write to it and read from it?
    - Does each thread have exactly one copy of the variable?
    The following is the pthread-based implementation:

        pthread_key_t m_key;

        struct Data : Noncopyable {
            Data(T* value, void* owner) : value(value), owner(owner) {}
            T* value;
            void* owner;
        };

        inline ThreadSpecific()
        {
            int error = pthread_key_create(&m_key, destroy);
            if (error)
                CRASH();
        }

        inline ~ThreadSpecific()
        {
            pthread_key_delete(m_key); // Does not invoke destructor functions.
        }

        inline T* get()
        {
            Data* data = static_cast<Data*>(pthread_getspecific(m_key));
            return data ? data->value : 0;
        }

        inline void set(T* ptr)
        {
            ASSERT(!get());
            pthread_setspecific(m_key, new Data(ptr, this));
        }

    How do I make the above code use the __thread way to set and get a specific value? Where/when do the create and delete happen? If this is not possible, how would one write custom pthread_setspecific / pthread_getspecific-style APIs? I tried using a C++ global map, indexed uniquely for each thread, and retrieved data from it, but it didn't work well.


  • Language restrictions on iPhone partially lifted?

    - by John Smith
    Apparently Apple has changed some terms in the agreement again. From http://www.appleoutsider.com/2010/06/10/hello-lua/ section 3.3.2 is now:
    "Unless otherwise approved by Apple in writing, no interpreted code may be downloaded or used in an Application except for code that is interpreted and run by Apple's Documented APIs and built-in interpreter(s). Notwithstanding the foregoing, with Apple's prior written consent, an Application may use embedded interpreted code in a limited way if such use is solely for providing minor features or functionality that are consistent with the intended and advertised purpose of the Application."
    instead of the original:
    "No interpreted code may be downloaded or used in an Application except for code that is interpreted and run by Apple's Documented APIs and built-in interpreter(s)."
    I am most interested in embedding Lua, but other people have other embeddings they want to make. I am wondering how you ask for permission, what they mean by the terms "minor features" and "consistent", and how Apple will interpret this section? It seems to have enough loopholes to drive a real firetruck through. (BTW this is a terribly important question for me and my product.)


  • 64-bit Archives Needed

    - by user9154181
    A little over a year ago, we received a question from someone who was trying to build software on Solaris. He was getting errors from the ar command when creating an archive. At that time, the ar command on Solaris was a 32-bit command. There was more than 2GB of data, and the ar command was hitting the file size limit for a 32-bit process that doesn't use the largefile APIs. Even in 2011, 2GB is a very large amount of code, so we had not heard this one before. Most of our toolchain was extended to handle 64-bit sized data back in the 1990's, but archives were not changed, presumably because there was no perceived need for it. Since then of course, programs have continued to get larger, and in 2010, the time had finally come to investigate the issue and find a way to provide for larger archives. As part of that process, I had to do a deep dive into the archive format, and also do some Unix archeology. I'm going to record what I learned here, to document what Solaris does, and in the hope that it might help someone else trying to solve the same problem for their platform.
    Archive Format Details
    Archives are hardly cutting edge technology. They are still used of course, but their basic form hasn't changed in decades. Other than to fix a bug, which is rare, we don't tend to touch that code much. The archive file format is described in /usr/include/ar.h, and I won't repeat the details here. Instead, here is a rough overview of the archive file format, implemented by System V Release 4 (SVR4) Unix systems such as Solaris:
    - Every archive starts with a "magic number". This is a sequence of 8 characters: "!<arch>\n".
    - The magic number is followed by 1 or more members. A member starts with a fixed header, defined by the ar_hdr structure in /usr/include/ar.h. Immediately following the header comes the data for the member. Members must be padded at the end with newline characters so that they have even length.
    The requirement to pad members to an even length is a dead giveaway as to the age of the archive format. It tells you that this format dates from the 1970's, and more specifically from the era of 16-bit systems such as the PDP-11 that Unix was originally developed on. A 32-bit system would have required 4 bytes, and 64-bit systems such as we use today would probably have required 8 bytes. 2 byte alignment is a poor choice for ELF object archive members: 32-bit objects require 4 byte alignment, and 64-bit objects require 64-bit alignment. The link-editor uses mmap() to process archives, and if the members have the wrong alignment, we have to slide (copy) them to the correct alignment before we can access the ELF data structures inside. The archive format requires 2 byte padding, but it doesn't prohibit more. The Solaris ar command takes advantage of this, and pads ELF object members to 8 byte boundaries. Anything else is padded to 2 as required by the format.
    The archive header (ar_hdr) represents all numeric values using an ASCII text representation rather than as binary integers. This means that an archive that contains only text members can be viewed using tools such as cat, more, or a text editor. The original designers of this format clearly thought that archives would be used for many file types, and not just for objects. Things didn't turn out that way of course – nearly all archives contain relocatable objects for a single operating system and machine, and are used primarily as input to the link-editor (ld).
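    As a concrete illustration of the layout just described, here is a small sketch (C#, purely illustrative) that walks an SVR4 archive and prints each member's name and size. The field offsets follow the standard ar_hdr layout from <ar.h> (name 16 bytes, date 12, uid 6, gid 6, mode 8, size 10, and a 2-byte trailing magic, 60 bytes in all):

        using System;
        using System.IO;
        using System.Text;

        class ArWalker
        {
            static void Main(string[] args)
            {
                using (FileStream f = File.OpenRead(args[0]))
                {
                    byte[] magic = new byte[8];
                    f.Read(magic, 0, 8);
                    if (Encoding.ASCII.GetString(magic) != "!<arch>\n")
                        throw new InvalidDataException("not an archive");

                    byte[] hdr = new byte[60];
                    while (f.Read(hdr, 0, 60) == 60)
                    {
                        // All header fields are ASCII text, so no byte swapping is needed.
                        string name = Encoding.ASCII.GetString(hdr, 0, 16).TrimEnd();
                        long size = long.Parse(Encoding.ASCII.GetString(hdr, 48, 10).Trim());
                        Console.WriteLine("{0,-20} {1,12} bytes", name, size);

                        // Member data is padded with a newline to an even length.
                        f.Seek(size + (size & 1), SeekOrigin.Current);
                    }
                }
            }
        }

    A member whose printed name begins with '/' is one of the special members (symbol table or string table) described below.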
    Archives can have special members that are created by the ar command rather than being supplied by the user. These special members are all distinguished by having a name that starts with the slash (/) character. This is an unambiguous marker that says that the user could not have supplied it. The reason for this is that regular archive members are given the plain name of the file that was inserted to create them, and any path components are stripped off. Slash is the delimiter character used by Unix to separate path components, and as such cannot occur within a plain file name. The ar command hides the special members from you when you list the contents of an archive, so most users don't know that they exist. There are only two possible special members: a symbol table that maps ELF symbols to the object archive member that provides them, and a string table used to hold member names that exceed 15 characters. The '/' convention for tagging special members provides room for adding more such members should the need arise. As I will discuss below, we took advantage of this fact to add an alternate 64-bit symbol table special member which is used in archives that are larger than 4GB.
    When an archive contains ELF object members, the ar command builds a special archive member known as the symbol table that maps all ELF symbols in the objects to the archive member that provides them. The link-editor uses this symbol table to determine which symbols are provided by the objects in that archive. If an archive has a symbol table, it will always be the first member in the archive, immediately following the magic number. Unlike member headers, symbol tables do use binary integers to represent offsets. These integers are always stored in big-endian format, even on a little-endian host such as x86.
    The archive header (ar_hdr) provides 15 characters for representing the member name. If any member has a name that is longer than this, then the real name is written into a special archive member called the string table, and the member's name field instead contains a slash (/) character followed by a decimal representation of the offset of the real name within the string table. The string table is required to precede all normal archive members, so it will be the second member if the archive contains a symbol table, and the first member otherwise.
    The archive format is not designed to make finding a given member easy. Such operations move through the archive from front to back examining each member in turn, and run in O(n) time. This would be bad if archives were commonly used in that manner, but in general, they are not. Typically, the ar command is used to build a new archive from scratch, inserting all the objects in one operation, and then the link-editor accesses the members in the archive in constant time by using the offsets provided by the symbol table. Both of these operations are reasonably efficient. However, listing the contents of a large archive with the ar command can be rather slow.
    Factors That Limit Solaris Archive Size
    As is often the case, there was more than one limiting factor preventing Solaris archives from growing beyond the 32-bit limits of 2GB (32-bit signed) and 4GB (32-bit unsigned). These limits are listed in the order they are hit as archive size grows, so the earlier ones mask those that follow.
    - The original Solaris archive file format can handle sizes up to 4GB without issue. However, the ar command was delivered as a 32-bit executable that did not use the largefile APIs. As such, the ar command itself could not create a file larger than 2GB. One can solve this by building ar with the largefile APIs, which would allow it to reach 4GB, but a simpler and better answer is to deliver a 64-bit ar, which has the ability to scale well past 4GB.
    - Symbol table offsets are stored as 32-bit big-endian binary integers, which limits the maximum archive size to 4GB. To get around this limit requires a different symbol table format, or an extension mechanism to the current one, similar in nature to the way member names longer than 15 characters are handled in member headers.
    - The size field in the archive member header (ar_hdr) is an ASCII string capable of representing a 32-bit unsigned value. This places a 4GB size limit on the size of any individual member in an archive.
    In considering format extensions to get past these limits, it is important to remember that very few archives will require the ability to scale past 4GB for many years. The old format, while no beauty, continues to be sufficient for its purpose. This argues for a backward compatible fix that allows newer versions of Solaris to produce archives that are compatible with older versions of the system unless the size of the archive exceeds 4GB.
    Archive Format Differences Among Unix Variants
    While considering how to extend Solaris archives to scale to 64-bits, I wanted to know how similar archives from other Unix systems are to those produced by Solaris, and whether they had already solved the 64-bit issue. I've successfully moved archives between different Unix systems before with good luck, so I knew that there was some commonality. If it turned out that there was already a viable de facto standard for 64-bit archives, it would obviously be better to adopt that rather than invent something new.
    The archive file format is not formally standardized. However, the ar command and archive format were part of the original Unix from Bell Labs. Other systems started with that format, extending it in various often incompatible ways, but usually with the same common shared core. Most of these systems use the same magic number to identify their archives, despite the fact that their archives are not always fully compatible with each other. It is often true that archives can be copied between different Unix variants, and if the member names are short enough, the ar command from one system can often read archives produced on another. In practice, it is rare to find an archive containing anything other than objects for a single operating system and machine type. Such an archive is only of use on the type of system that created it, and is only used on that system. This is probably why cross-platform compatibility of archives between Unix variants has never been an issue. Otherwise, the use of the same magic number in archives with incompatible formats would be a problem.
    I was able to find information for a number of Unix variants, described below. These can be divided roughly into three tribes: SVR4 Unix, BSD Unix, and IBM AIX. Solaris is a SVR4 Unix, and its archives are completely compatible with those from the other members of that group (GNU/Linux, HP-UX, and SGI IRIX).
    AIX
    AIX is an exception to the rule that Unix archive formats are all based on the original Bell Labs Unix format. It appears that AIX supports 2 formats (small and big), both of which differ in fundamental ways from other Unix systems:
    - These formats use a different magic number than the standard one used by Solaris and other Unix variants.
    - They include support for removing archive members from a file without reallocating the file, marking dead areas as unused, and reusing them when new archive items are inserted.
    - They have a special table of contents member (File Member Header) which lets you find out everything that's in the archive without having to actually traverse the entire file. Their symbol table members are quite similar to those from other systems though.
    - Their member headers are doubly linked, containing offsets to both the previous and next members.
    Of the Unix systems described here, AIX has the only format I saw that will have reasonable insert/delete performance for really large archives. Everyone else has O(n) performance, and is going to be slow to use with large archives.
    BSD
    BSD has gone through 4 versions of archive format, which are described in their manpage. They use the same member header as SVR4, but their symbol table format is different, and their scheme for long member names puts the name directly after the member header rather than into a string table.
    GNU/Linux
    The GNU toolchain uses the SVR4 format, and is compatible with Solaris.
    HP-UX
    HP-UX seems to follow the SVR4 model, and is compatible with Solaris.
    IRIX
    IRIX has 32 and 64-bit archives. The 32-bit format is the standard SVR4 format, and is compatible with Solaris. The 64-bit format is the same, except that the symbol table uses 64-bit integers. IRIX assumes that an archive contains objects of a single ELFCLASS/MACHINE, and any archive containing ELFCLASS64 objects receives a 64-bit symbol table. Although they only use it for 64-bit objects, nothing in the archive format limits it to ELFCLASS64. It would be perfectly valid to produce a 64-bit symbol table in an archive containing 32-bit objects, text files, or anything else.
    Tru64 Unix (Digital/Compaq/HP)
    Tru64 Unix uses a format much like ours, but their symbol table is a hash table, making specific symbol lookup much faster. The Solaris link-editor uses archives by examining the entire symbol table looking for unsatisfied symbols for the link, and not by looking up individual symbols, so there would be no benefit to Solaris from such a hash table. The Tru64 ld must use a different approach in which the hash table pays off for them.
    Widening the existing SVR4 archive symbol tables rather than inventing something new is the simplest path forward. There is ample precedent for this approach in the ELF world. When ELF was extended to support 64-bit objects, the approach was largely to take the existing data structures, and define 64-bit versions of them. We called the old set ELF32, and the new set ELF64. My guess is that there was no need to widen the archive format at that time, but had there been, it seems obvious that this is how it would have been done.
    The Implementation of 64-bit Solaris Archives
    As mentioned earlier, there was no desire to improve the fundamental nature of archives. They have always had O(n) insert/delete behavior, and for the most part it hasn't mattered. AIX made efforts to improve this, but those efforts did not find widespread adoption. For the purposes of link-editing, which is essentially the only thing that archives are used for, the existing format is adequate, and issues of backward compatibility trump the desire to do something technically better. Widening the existing symbol table format to 64-bits is therefore the obvious way to proceed. For Solaris 11, I implemented that, and I also updated the ar command so that a 64-bit version is run by default.
    This eliminates the 2 most significant limits to archive size, leaving only the limit on an individual archive member. We only generate a 64-bit symbol table if the archive exceeds 4GB, or when the new -S option to the ar command is used. This maximizes backward compatibility, as an archive produced by Solaris 11 is highly likely to be less than 4GB in size, and will therefore employ the same format understood by older versions of the system. The main reason for the existence of the -S option is to allow us to test the 64-bit format without having to construct huge archives to do so. I don't believe it will find much use outside of that.
    Other than the new ability to create and use extremely large archives, this change is largely invisible to the end user. When reading an archive, the ar command will transparently accept either form of symbol table. Similarly, the ELF library (libelf) has been updated to understand either format. Users of libelf (such as the link-editor ld) do not need to be modified to use the new format, because these changes are encapsulated behind the existing functions provided by libelf.
    As mentioned above, this work did not lift the limit on the maximum size of an individual archive member. That limit remains fixed at 4GB for now. This is not because we think objects will never get that large, for the history of computing says otherwise. Rather, this is based on an estimation that single relocatable objects of that size will not appear for a decade or two. A lot can change in that time, and it is better not to overengineer things by writing code that will sit and rot for years without being used. It is not too soon, however, to have a plan for that eventuality. When the time comes when this limit needs to be lifted, I believe that there is a simple solution that is consistent with the existing format. The archive member header size field is an ASCII string, like the name, and as such, the overflow scheme used for long names can also be used to handle the size. The size string would be placed into the archive string table, and its offset in the string table would then be written into the archive header size field using the same format "/ddd" used for overflowed names.
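    The "/ddd" overflow scheme is compact enough to show in a few lines. Below is a sketch (C#; the trailing '/' and newline conventions in the string table follow common SVR4 practice, so check them against the ar.h on a given system) of resolving a member name, which is the same mechanism the author proposes reusing for oversized size fields:

        // Resolve an archive member name from its 16-byte header field.
        // "/123" means: the real name lives at offset 123 in the string table member.
        static string ResolveMemberName(string headerName, string stringTable)
        {
            string name = headerName.TrimEnd();
            if (name.Length > 1 && name[0] == '/' && char.IsDigit(name[1]))
            {
                int offset = int.Parse(name.Substring(1));
                // Entries in the string table are conventionally terminated by "/\n".
                int end = stringTable.IndexOf('\n', offset);
                return stringTable.Substring(offset, end - offset).TrimEnd('/');
            }
            return name.TrimEnd('/'); // short names may carry a trailing '/'
        }

    An oversized size field would resolve the same way: read "/ddd" from the header, fetch the decimal size string from the string table, and parse it – no change to the fixed 60-byte header needed.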


  • CodePlex Daily Summary for Saturday, January 15, 2011

    Popular Releases

    - People's Note 0.22: Fixed the recent TODO checkbox problem (if you cannot synchronize, try deleting the last note with TODO checkboxes). Fixed notes created before signing in for the first time not syncing. Made a number of small user interface improvements. To install: copy the appropriate CAB file onto your WM device and run it.
    - Network Asset Manager 1.0.9: New features: a new report subsystem for better report handling and report history management, support for network adapter discovery, and reports for low disk space, disk utilization, hardware and OS installation, plus some application fixes. Bug fixes: Artifact ID 2979655, fixed an issue with installed software collection on Windows 64-bit (thanks to nielsvdc for this patch). Notes: previous versions of NAM installations need to be manually uninstalled bef...
    - JetBrowser 5 (specifically version 5.0.1.239): Uploaded to CodePlex because there was nowhere else to upload it. Changes since the last release: improvements throughout the JetBrowser code, bug fixes in JetMail and the Feedback tool, bug fixes in other vital points of the code, a lot of new features, and modifications to the visual parts of JB. If you notice a bug, pl...
    - mytrip.mvc (CMS & e-Commerce) 1.0.52.0 beta 2: New MVC3 RTM. WEB.mytrip.mvc 1.0.52.0 (web, for installation on hosting). System requirements: .NET 4.0, MSSQL 2008 or MySql (tables are created automatically in the database); with .\SQLEXPRESS the database is created automatically (App_Data folder). SRC.mytrip.mvc 1.0.52.0 system requirements: Visual Studio 2010 or Web Developer 2010, MSSQL 2008 or MySql (as above), Connector/Net 6.3.5, MVC3 RTM. WARNING: to run and debug SRC.mytrip.mvc 1.0.52.0, download and...
    - IIS Tuner: Performance optimization tool for IIS.
    - BloodSim 1.3.3.0: NOTE: if you have a previous version of BloodSim, you must update WControls.dll from the Required DLLs download. Removed average stats from the main window, as this is no longer necessary. Added the ability to name a set of runs, shown in the title bar of the graph window. Added the ability to run progressive simulation sets, increasing up to 2 chosen stats by a value each time. Added an option to set the amount that Death Strike heals for on the Last 5s Damage. Added an option for Blood Shiel...
    - WCF Web APIs 11.01.14 (WCF Community Site): Welcome to the third release of WCF Web APIs on CodePlex. New features in WCF HTTP: a new HttpClient API, which replaces the REST Starter Kit, has been introduced. In addition to supporting strongly typed messages, the API supports async through Task<T>. It also has a pluggable channel stack for plugging in handlers for the request/response on the client side; see the QueryableSample for an example of the new channels. New extension methods for serializing/deserializing to/from HttpContent. ...
    - NuGet 1.0 RTM: NuGet is a free, open-source, developer-focused package management system for the .NET platform, intent on simplifying the process of incorporating third-party libraries into a .NET application during development. This release is a Visual Studio 2010 extension and contains the Package Manager Console and the Add Package dialog.
    - MVC Music Store v2.0: This is the 2.0 release of the MVC Music Store tutorial, updated for ASP.NET MVC 3 and Entity Framework Code-First, with fixes and improvements based on feedback and common questions from previous releases. The main download, MvcMusicStore-v2.0.zip, contains everything you need to build the sample application, including a detailed tutorial document in PDF format and the assets you will need to build the project: images, a stylesheet, and a pre-populated databas...
    - Visifire (free Silverlight & WPF chart control) v3.6.7 GA: The Inlines property has been implemented in Title. This release also fixes the following bugs: in Column and Bar charts, DataPoint label properties were not working as expected at real time if marker enabled was set to true; 3D Column and Bar charts were not rendered properly if the AxisMinimum property was set on the x-axis. You can download Visifire v3.6.7 here.
    - ASP.NET MVC Project Awesome (jQuery Ajax helpers/controls) 1.6: A rich set of helpers (controls) that you can use to build highly responsive and interactive Ajax-enabled web applications, including Autocomplete, AjaxDropdown, Lookup, Confirm Dialog, Popup Form, Popup and Pager. New: paging for the lookup, and a lookup with multiselect. Changes: the CSS classes used by the framework were renamed to be more standard; the lookup controller requires an item.ascx (no more ViewData["structure"]); the LookupList action was renamed to Search; all the...
    - All-In-One Code Framework (2011-01-12): The January 2011 release contains 13 new sample codes covering ASP.NET, AJAX, WinForm and Windows Shell. Download: http://i3.codeplex.com/Project/Download/FileDownload.aspx?ProjectName=1code&DownloadId=128165 Release notes: http://blog.csdn.net/sjb5201/archive/2011/01/13/6135037.aspx Questions and sample requests can be posted on the MSDN forums (http://social.msdn.microsoft.com/Forums/zh-CN/codezhchs/threads) or sent by email.
    - patterns & practices – Enterprise Library 5.0 Extensibility Labs: A preview release of the hands-on labs to help you learn and practice the different ways the Enterprise Library can be extended. Learning map: custom exception handler (estimated time to complete: 1 hr 15 mins), custom logging trace listener (1 hr), custom configuration source (registry-based) (30 mins). System requirements: Enterprise Library 5.0 / Unity 2.0, SQL Express 2008, and Visual Studio 2010 Pro (or better) installed. Authors: Chris Tavares, Microsoft Corporation ...
    - Orchard 1.0: Orchard release notes; build 1.0.20, published 1/12/2011. To install the Orchard tech preview using Web PI, follow these instructions: http://www.orchardproject.net/docs/Installing-Orchard.ashx Web PI will detect your hardware environment and install the application. Alternatively, to install the release manually, download the Orchard.Web.1.0.20.zip file; the zip contents are pre-built and ready to run. Simply extract the contents of the Orchard folder from ...
    - Umbraco CMS 4.6.1: The Umbraco 4.6.1 release (codename JUNO) contains many new features focusing on an improved installation experience, a number of robust developer features, and nearly 200 bug fixes since the 4.5.2 release. A great place to start is the Getting Started Guide: http://umbraco.codeplex.com/Project/Download/FileDownload.aspx?DownloadId=197051 Make sure to check the free foundation videos on how to get started building Umbraco sites. They're ...
    - StyleCop for ReSharper 5.1.14986.000: A considerable amount of work has gone into this release. Features: a huge focus on performance around the violation-scanning subsystem, with caching added to reduce IO operations around reading and merging of settings files and to reduce creation of expensive objects; users should notice a considerable performance boost and a decrease in memory usage. Bug fixes: StyleCop's new ObjectBasedEnvironment object does not resolve the StyleCop installation path, thus it does not return the ...
    - SQL Monitor (tracking SQL Server activities) 3.1 beta 1: 1. support for alert message templates; 2. dynamic toolbar commands depending on functionality; 3. fixed some bugs; 4. refactored part of the code, now more stable and cleaner.
    - Facebook C# SDK 4.2.1: Authentication bug fixes. Updated Json.Net to version 4.0.0. BREAKING CHANGE: removed the cookieSupport config setting, which is now automatic. This download is also available on NuGet: Facebook, FacebookWeb, FacebookWebMvc.
    - Phalanger (the PHP language compiler for the .NET Framework) 2.0 (January 2011): Another release build for daily use; it contains many new features, enhanced compatibility with the latest PHP open-source applications, and several issue fixes. To improve the performance of your application using MySQL, please use the Managed MySQL Extension for Phalanger. Changes in this release include new features available only in Phalanger: full support for Multi-Script-Assemblies was implemented, so you can now build your application into several DLLs. Deploy them separately t...
    - AutoLoL v1.5.3: A message is now displayed when an update is available. Shows a list of recent mastery files in the Editor tab (requested by quite a few people). Updater: update information is now scrollable, added a button to launch AutoLoL after updating is finished, and updated the UI to match that of AutoLoL. Fix: detects and resolves the 'Read Only' state on Version.xml.

    New Projects

    - 6 piece burr analysis: An educational project for researching 6-piece burrs.
    - BabySmash7: BabySmash for WP7.
    - C# 4.0 library to generate INotifyPropertyChanged proxy from POCO type: Generates an INotifyPropertyChanged proxy from a POCO type at runtime. The proxy inherits from the given type and overrides any public virtual properties with wrappers that notify on change; text templates are used to define the source of the generated class. (A hand-written sketch of what such a proxy might look like follows this listing.)
    - CompositeC1Contrib: User contributions, hacks and optimizations for Composite C1 CMS.
    - CrtTfsDemo: A demo project to test code review tool integration with TFS.
    - DbExpressions: An abstract syntax tree for SQL.
    - DefaultAndZipTemplate.xaml (build process template for TFS 2010): The default TFS 2010 build process template plus one additional activity that puts the drop folder content into a zip file located in the same folder.
    - DotNetNuke easy.News Multilanguage template module: A DotNetNuke 5.5+ module for news that lets users create, manage and preview news on DotNetNuke portals. The module is template-based, works on multilanguage portals, is free of charge and is constantly improving.
    - DragonBone Software: Software that lets developers create 2D skeletal animations that can be exported via XML and added to your project.
    - DuckCallLib: Extension methods that allow for something that approximates duck typing in C#. While certainly not as syntactically clean as Ruby, using this library spares the user from writing the same reflection code over and over again when duck typing is desired.
    - EPAT: Energy Performance Assessment Tools.
    - HCMUS Bug Tracker: A project to manage bugs in software development, built by students at the Faculty of Natural Sciences University.
    - IIS Tuner: A simple tuning tool for IIS.
    - inteLibrary: inteLibrary.
    - LearnPad: A light, easy-to-use note writer and learning aid. Project LearnPad aims to make students' lives easier by capturing information in textual form quickly, with the ability to refine and review the content later.
    - LiveTiss: A Silverlight 4 web solution for creating, editing and managing TISS forms. The solution can be hosted on Windows Azure, with its database on SQL Azure.
    - MakeMayhem: Mayhem makes it simple for end users to control complex events with their PCs. Whether you want to update a Twitter status when your cat is detected by your webcam, or monitor your serial ports and trigger events, it's no problem with Mayhem: wreak your own personal havoc.
    - MangaEplision: A manga viewer/downloader. You no longer have to use multiple applications when you can download the latest manga and view it in MangaEplision. Written in C#/VB.NET.
    - Min-Mang: A logical game implementation.
    - OCPforStudent: OCP for Student, whose full name is Online Collaboration Platform for Student, is the course project for OOAD.
    - Orchard Google Analytics Module: An Orchard module adding Google Analytics support.
    - Orchard Voting API: A set of APIs used by other modules to manage votes on content items, calculating different values automatically and efficiently.
    - Parser SQL Query: A C++ class for parsing SQL queries. Simple, but effective for adding a condition to a SQL query of any complexity or replacing a variable with its value. Variables begin with '$'.
    - Sample app for Adventureworks database: Shows how to build apps around the AdventureWorks sample database with the latest Microsoft technologies, including WPF, Silverlight, ASP.NET, WF and WIF.
    - SCRotUM: A student scrum-tool project.
    - SecFtp: secftp makes it easier to upload files over FTP.
    - SharePoint 2010 Custom Ribbon Demo: Shows how to create a custom Ribbon tab at runtime and how to activate the Ribbon tab on page load. See my blog for details: http://ikarstein.wordpress.com
    - Simple but Cool Silverlight Message Boxes (Info, Error, Confirm, etc.): Simple but presentable message boxes for Silverlight developers, designed for easy integration with existing projects.
    - SMSFilter: A Windows Mobile application much talked about on XDA-Developers (http://forum.xda-developers.com/showthread.php?p=6350352#post6350352). SMSFilter is a new app designed to filter your private messages away from the normal inbox.
    - Tata: A small language I invented and implemented, dedicated to test automation in embedded systems.
    - Unix Tools in F#: Some simple Unix command-line tools written in F#.
    - viprava.net: Viprava.NET is a programming language using Sanskrit. The primary focus is to help the Indian public get more attached to the programming environment. Please visit http://amarakosha.hpage.com/ for contact information.
    - Visual Studio PS3 Extension and Tools: An extension to Visual Studio for compiling for PlayStation 3, including an SFO editor and package creator.
    - WebDev: A simple Notepad replacement made specially to help web developers, without any unneeded "extra features".
    - Wpf Touch Enabled List View: A simple control that inherits from the standard WPF ListView to handle some mouse events for touch-screen support.
    - XHTMLr: Normalizes HTML into XML that can be parsed and manipulated.
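    The INotifyPropertyChanged proxy project above describes a concrete mechanism: the generated type derives from the POCO and overrides its public virtual properties with change-notifying wrappers. As an illustration only (Person and PersonProxy are hypothetical names, not part of that library, whose real output is produced by text templates at runtime), a hand-written equivalent of such a generated proxy might look like this:

        using System.ComponentModel;

        // A plain POCO with virtual properties, the shape the library expects.
        public class Person
        {
            public virtual string Name { get; set; }
        }

        // Roughly what the generated proxy is described as doing: derive from
        // the POCO and wrap each public virtual property so that assigning a
        // new value raises PropertyChanged.
        public class PersonProxy : Person, INotifyPropertyChanged
        {
            public event PropertyChangedEventHandler PropertyChanged;

            public override string Name
            {
                get { return base.Name; }
                set
                {
                    if (base.Name == value) return;
                    base.Name = value;
                    OnPropertyChanged("Name");
                }
            }

            protected void OnPropertyChanged(string propertyName)
            {
                PropertyChangedEventHandler handler = PropertyChanged;
                if (handler != null)
                    handler(this, new PropertyChangedEventArgs(propertyName));
            }
        }

    Binding-aware consumers (WPF, Silverlight) would then be handed PersonProxy instances while the rest of the application keeps programming against Person.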

    Read the article

  • Asp.net Google Charts SSL handler for GeoMap

    - by Ian
    Hi all, I am trying to view Google charts on a site using SSL. Google Charts does not support SSL, so if we use the standard charts we get warning messages. My plan is to create an ASHX handler, contained in the secure site, that retrieves the content from Google and serves it to the page the user is viewing. Using VS 2008 SP1 and the included web server, my idea works perfectly in both Firefox and IE 8 & 9 (Preview), and I am able to see my geomap displayed on my page as it should be. My problem is that when I publish to IIS7, the page that uses my handler to generate the geomap works in Firefox but not in IE (any version). There are no errors anywhere or in any log files, but when I right-click in IE on the area where the map should be displayed, the context menu says "movie not loaded". Below is the code from my handler and the ASPX page. I have disabled compression in my web.config. Even in IE I am hitting all my breakpoints, and when I use the IE9 developer tools the page is generated correctly, with all the right code, URLs and references. If you have a better way to accomplish this, or know how I can fix my problem, I would appreciate it. Thanks, Ian

    Handler (ASHX):

        public void ProcessRequest(HttpContext context)
        {
            // Default to the Google Charts loader; the query string, if present,
            // names the actual resource to proxy.
            String url = "http://charts.apis.google.com/jsapi";
            string query = context.Request.QueryString.ToString();
            if (!string.IsNullOrEmpty(query))
            {
                url = query;
            }

            HttpWebRequest request = (HttpWebRequest)HttpWebRequest.Create(new Uri(HttpUtility.UrlDecode(url)));
            request.UserAgent = context.Request.UserAgent;
            WebResponse response = request.GetResponse();

            string PageContent = string.Empty;
            StreamReader Reader;
            Stream webStream = response.GetResponseStream();
            string contentType = response.ContentType;

            context.Response.BufferOutput = true;
            context.Response.ContentType = contentType;
            context.Response.Cache.SetCacheability(HttpCacheability.NoCache);
            context.Response.Cache.SetNoServerCaching();
            context.Response.Cache.SetMaxAge(System.TimeSpan.Zero);

            string newUrl = IanLearning.Properties.Settings.Default.HandlerURL; // "https://localhost:444/googlesecurecharts.ashx?"

            if (response.ContentType.Contains("javascript"))
            {
                // Rewrite absolute Google URLs inside the script so they are
                // requested back through this handler, except for the Maps API,
                // which is restored to its original address.
                Reader = new StreamReader(webStream);
                PageContent = Reader.ReadToEnd();
                PageContent = PageContent.Replace("http://", newUrl + "http://");
                PageContent = PageContent.Replace("charts.apis.google.com", newUrl + "charts.apis.google.com");
                PageContent = PageContent.Replace(newUrl + "http://maps.google.com/maps/api/", "http://maps.google.com/maps/api/");
                context.Response.Write(PageContent);
            }
            else
            {
                // Binary content (images, Flash movies) is passed through as-is.
                byte[] bytes = ReadFully(webStream);
                context.Response.BinaryWrite(bytes);
            }

            context.Response.Flush();
            response.Close();
            webStream.Close();
            context.Response.End();
            context.ApplicationInstance.CompleteRequest();
        }

    ASPX page:

        <%@ Page Title="" Language="C#" MasterPageFile="~/Site2.Master" AutoEventWireup="true" CodeBehind="googlechart.aspx.cs" Inherits="IanLearning.googlechart" %>
        <asp:Content ID="Content1" ContentPlaceHolderID="head" runat="server">
            <script type='text/javascript' src='~/googlesecurecharts.ashx?'></script>
            <script type='text/javascript'>
                google.load('visualization', '1', { 'packages': ['geomap'] });
                google.setOnLoadCallback(drawMap);
                var geomap;

                function drawMap() {
                    var data = new google.visualization.DataTable();
                    data.addRows(6);
                    data.addColumn('string', 'City');
                    data.addColumn('number', 'Sales');
                    data.setValue(0, 0, 'ZA'); data.setValue(0, 1, 200);
                    data.setValue(1, 0, 'US'); data.setValue(1, 1, 300);
                    data.setValue(2, 0, 'BR'); data.setValue(2, 1, 400);
                    data.setValue(3, 0, 'CN'); data.setValue(3, 1, 500);
                    data.setValue(4, 0, 'IN'); data.setValue(4, 1, 600);
                    data.setValue(5, 0, 'ZW'); data.setValue(5, 1, 700);

                    var options = {};
                    options['region'] = 'world';
                    options['dataMode'] = 'regions';
                    options['showZoomOut'] = false;

                    var container = document.getElementById('map_canvas');
                    geomap = new google.visualization.GeoMap(container);
                    google.visualization.events.addListener(geomap, 'regionClick',
                        function (e) { drillDown(e['region']); });
                    geomap.draw(data, options);
                }

                function drillDown(regionData) {
                    alert(regionData);
                }
            </script>
        </asp:Content>
        <asp:Content ID="Content2" ContentPlaceHolderID="ContentPlaceHolder1" runat="server">
            <div id='map_canvas'></div>
        </asp:Content>
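    One gap in the handler above: it calls a ReadFully helper that the post does not include. Assuming it is the usual buffer-the-whole-stream utility (an assumption, since the original implementation is not shown), a minimal sketch would be:

        // Hypothetical ReadFully helper, assumed from context (requires System.IO):
        // copies the response stream into a byte array so it can be written with
        // Response.BinaryWrite. Not part of the original post.
        private static byte[] ReadFully(Stream input)
        {
            using (MemoryStream ms = new MemoryStream())
            {
                byte[] buffer = new byte[8192];
                int read;
                while ((read = input.Read(buffer, 0, buffer.Length)) > 0)
                {
                    ms.Write(buffer, 0, read);
                }
                return ms.ToArray();
            }
        }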

    Read the article

< Previous Page | 14 15 16 17 18 19 20 21 22 23 24 25  | Next Page >